Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 6.16-rc4 into tty-next

We need the tty/serial fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+8588 -3883
+11 -3
.mailmap
··· 197 197 Daniel Borkmann <daniel@iogearbox.net> <dborkmann@redhat.com> 198 198 Daniel Borkmann <daniel@iogearbox.net> <dborkman@redhat.com> 199 199 Daniel Borkmann <daniel@iogearbox.net> <dxchgb@gmail.com> 200 + Danilo Krummrich <dakr@kernel.org> <dakr@redhat.com> 200 201 David Brownell <david-b@pacbell.net> 201 202 David Collins <quic_collinsd@quicinc.com> <collinsd@codeaurora.org> 202 203 David Heidelberg <david@ixit.cz> <d.okias@gmail.com> ··· 223 222 Dmitry Safonov <0x7f454c46@gmail.com> <dsafonov@virtuozzo.com> 224 223 Domen Puncer <domen@coderock.org> 225 224 Douglas Gilbert <dougg@torque.net> 225 + Drew Fustini <fustini@kernel.org> <drew@pdp7.com> 226 + <duje@dujemihanovic.xyz> <duje.mihanovic@skole.hr> 226 227 Ed L. Cashin <ecashin@coraid.com> 227 228 Elliot Berman <quic_eberman@quicinc.com> <eberman@codeaurora.org> 228 229 Enric Balletbo i Serra <eballetbo@kernel.org> <enric.balletbo@collabora.com> ··· 285 282 Gustavo Padovan <padovan@profusion.mobi> 286 283 Hamza Mahfooz <hamzamahfooz@linux.microsoft.com> <hamza.mahfooz@amd.com> 287 284 Hanjun Guo <guohanjun@huawei.com> <hanjun.guo@linaro.org> 285 + Hans de Goede <hansg@kernel.org> <hdegoede@redhat.com> 288 286 Hans Verkuil <hverkuil@xs4all.nl> <hansverk@cisco.com> 289 287 Hans Verkuil <hverkuil@xs4all.nl> <hverkuil-cisco@xs4all.nl> 290 288 Harry Yoo <harry.yoo@oracle.com> <42.hyeyoo@gmail.com> ··· 695 691 Serge Hallyn <sergeh@kernel.org> <serue@us.ibm.com> 696 692 Seth Forshee <sforshee@kernel.org> <seth.forshee@canonical.com> 697 693 Shakeel Butt <shakeel.butt@linux.dev> <shakeelb@google.com> 698 - Shannon Nelson <shannon.nelson@amd.com> <snelson@pensando.io> 699 - Shannon Nelson <shannon.nelson@amd.com> <shannon.nelson@intel.com> 700 - Shannon Nelson <shannon.nelson@amd.com> <shannon.nelson@oracle.com> 694 + Shannon Nelson <sln@onemain.com> <shannon.nelson@amd.com> 695 + Shannon Nelson <sln@onemain.com> <snelson@pensando.io> 696 + Shannon Nelson <sln@onemain.com> <shannon.nelson@intel.com> 697 + Shannon Nelson <sln@onemain.com> <shannon.nelson@oracle.com> 701 698 Sharath Chandra Vurukala <quic_sharathv@quicinc.com> <sharathv@codeaurora.org> 702 699 Shiraz Hashim <shiraz.linux.kernel@gmail.com> <shiraz.hashim@st.com> 703 700 Shuah Khan <shuah@kernel.org> <shuahkhan@gmail.com> ··· 832 827 Yusuke Goda <goda.yusuke@renesas.com> 833 828 Zack Rusin <zack.rusin@broadcom.com> <zackr@vmware.com> 834 829 Zhu Yanjun <zyjzyj2000@gmail.com> <yanjunz@nvidia.com> 830 + Zijun Hu <zijun.hu@oss.qualcomm.com> <quic_zijuhu@quicinc.com> 831 + Zijun Hu <zijun.hu@oss.qualcomm.com> <zijuhu@codeaurora.org> 832 + Zijun Hu <zijun_hu@htc.com>
+5
CREDITS
··· 2981 2981 S: Potsdam, New York 13676 2982 2982 S: USA 2983 2983 2984 + N: Shannon Nelson 2985 + E: sln@onemain.com 2986 + D: Worked on several network drivers including 2987 + D: ixgbe, i40e, ionic, pds_core, pds_vdpa, pds_fwctl 2988 + 2984 2989 N: Dave Neuer 2985 2990 E: dave.neuer@pobox.com 2986 2991 D: Helped implement support for Compaq's H31xx series iPAQs
+16
Documentation/ABI/testing/sysfs-edac-scrub
··· 49 49 (RO) Supported minimum scrub cycle duration in seconds 50 50 by the memory scrubber. 51 51 52 + Device-based scrub: returns the minimum scrub cycle 53 + supported by the memory device. 54 + 55 + Region-based scrub: returns the max of minimum scrub cycles 56 + supported by individual memory devices that back the region. 57 + 52 58 What: /sys/bus/edac/devices/<dev-name>/scrubX/max_cycle_duration 53 59 Date: March 2025 54 60 KernelVersion: 6.15 ··· 62 56 Description: 63 57 (RO) Supported maximum scrub cycle duration in seconds 64 58 by the memory scrubber. 59 + 60 + Device-based scrub: returns the maximum scrub cycle supported 61 + by the memory device. 62 + 63 + Region-based scrub: returns the min of maximum scrub cycles 64 + supported by individual memory devices that back the region. 65 + 66 + If the memory device does not provide maximum scrub cycle 67 + information, return the maximum supported value of the scrub 68 + cycle field. 65 69 66 70 What: /sys/bus/edac/devices/<dev-name>/scrubX/current_cycle_duration 67 71 Date: March 2025
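The region-based rules in the new sysfs descriptions can be sketched as a small aggregation. This is an illustrative Python sketch of the documented policy, not kernel code; the function names and sample values are made up:

```python
def region_min_cycle_duration(device_min_cycles):
    # A region can only scrub as fast as its most constrained device,
    # so the region minimum is the max of the per-device minimums.
    return max(device_min_cycles)

def region_max_cycle_duration(device_max_cycles):
    # Conversely, the region maximum is the min of the per-device
    # maximums, so every backing device can still keep up.
    return min(device_max_cycles)

# Two hypothetical devices backing one region, cycle durations in seconds:
region_min = region_min_cycle_duration([3600, 7200])    # 7200
region_max = region_max_cycle_duration([86400, 43200])  # 43200
```

The intersection of the two results is the cycle range a region-based scrub control could safely accept.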
+1 -1
Documentation/arch/arm64/booting.rst
··· 234 234 235 235 - If the kernel is entered at EL1: 236 236 237 - - ICC.SRE_EL2.Enable (bit 3) must be initialised to 0b1 237 + - ICC_SRE_EL2.Enable (bit 3) must be initialised to 0b1 238 238 - ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b1. 239 239 240 240 - The DT or ACPI tables must describe a GICv3 interrupt controller.
+7 -1
Documentation/bpf/map_hash.rst
··· 233 233 other CPUs involved in the following operation attempts: 234 234 235 235 - Attempt to use CPU-local state to batch operations 236 - - Attempt to fetch free nodes from global lists 236 + - Attempt to fetch ``target_free`` free nodes from global lists 237 237 - Attempt to pull any node from a global list and remove it from the hashmap 238 238 - Attempt to pull any node from any CPU's list and remove it from the hashmap 239 + 240 + The number of nodes to borrow from the global list in a batch, ``target_free``, 241 + depends on the size of the map. Larger batch size reduces lock contention, but 242 + may also exhaust the global structure. The value is computed at map init to 243 + avoid exhaustion, by limiting aggregate reservation by all CPUs to half the map 244 + size. With a minimum of a single element and maximum budget of 128 at a time. 239 245 240 246 This algorithm is described visually in the following diagram. See the 241 247 description in commit 3a08c2fd7634 ("bpf: LRU List") for a full explanation of
+3 -3
Documentation/bpf/map_lru_hash_update.dot
··· 35 35 fn_bpf_lru_list_pop_free_to_local [shape=rectangle,fillcolor=2, 36 36 label="Flush local pending, 37 37 Rotate Global list, move 38 - LOCAL_FREE_TARGET 38 + target_free 39 39 from global -> local"] 40 40 // Also corresponds to: 41 41 // fn__local_list_flush() 42 42 // fn_bpf_lru_list_rotate() 43 43 fn___bpf_lru_node_move_to_free[shape=diamond,fillcolor=2, 44 - label="Able to free\nLOCAL_FREE_TARGET\nnodes?"] 44 + label="Able to free\ntarget_free\nnodes?"] 45 45 46 46 fn___bpf_lru_list_shrink_inactive [shape=rectangle,fillcolor=3, 47 47 label="Shrink inactive list 48 48 up to remaining 49 - LOCAL_FREE_TARGET 49 + target_free 50 50 (global LRU -> local)"] 51 51 fn___bpf_lru_list_shrink [shape=diamond,fillcolor=2, 52 52 label="> 0 entries in\nlocal free list?"]
-4
Documentation/devicetree/bindings/display/bridge/ti,sn65dsi83.yaml
··· 118 118 ti,lvds-vod-swing-clock-microvolt: 119 119 description: LVDS diferential output voltage <min max> for clock 120 120 lanes in microvolts. 121 - $ref: /schemas/types.yaml#/definitions/uint32-array 122 - minItems: 2 123 121 maxItems: 2 124 122 125 123 ti,lvds-vod-swing-data-microvolt: 126 124 description: LVDS diferential output voltage <min max> for data 127 125 lanes in microvolts. 128 - $ref: /schemas/types.yaml#/definitions/uint32-array 129 - minItems: 2 130 126 maxItems: 2 131 127 132 128 allOf:
+23 -1
Documentation/devicetree/bindings/i2c/nvidia,tegra20-i2c.yaml
··· 97 97 98 98 resets: 99 99 items: 100 - - description: module reset 100 + - description: 101 + Module reset. This property is optional for controllers in Tegra194, 102 + Tegra234 etc where an internal software reset is available as an 103 + alternative. 101 104 102 105 reset-names: 103 106 items: ··· 118 115 items: 119 116 - const: rx 120 117 - const: tx 118 + 119 + required: 120 + - compatible 121 + - reg 122 + - interrupts 123 + - clocks 124 + - clock-names 121 125 122 126 allOf: 123 127 - $ref: /schemas/i2c/i2c-controller.yaml ··· 178 168 else: 179 169 properties: 180 170 power-domains: false 171 + 172 + - if: 173 + not: 174 + properties: 175 + compatible: 176 + contains: 177 + enum: 178 + - nvidia,tegra194-i2c 179 + then: 180 + required: 181 + - resets 182 + - reset-names 181 183 182 184 unevaluatedProperties: false 183 185
-65
Documentation/devicetree/bindings/pmem/pmem-region.txt
··· 1 - Device-tree bindings for persistent memory regions 2 - ----------------------------------------------------- 3 - 4 - Persistent memory refers to a class of memory devices that are: 5 - 6 - a) Usable as main system memory (i.e. cacheable), and 7 - b) Retain their contents across power failure. 8 - 9 - Given b) it is best to think of persistent memory as a kind of memory mapped 10 - storage device. To ensure data integrity the operating system needs to manage 11 - persistent regions separately to the normal memory pool. To aid with that this 12 - binding provides a standardised interface for discovering where persistent 13 - memory regions exist inside the physical address space. 14 - 15 - Bindings for the region nodes: 16 - ----------------------------- 17 - 18 - Required properties: 19 - - compatible = "pmem-region" 20 - 21 - - reg = <base, size>; 22 - The reg property should specify an address range that is 23 - translatable to a system physical address range. This address 24 - range should be mappable as normal system memory would be 25 - (i.e cacheable). 26 - 27 - If the reg property contains multiple address ranges 28 - each address range will be treated as though it was specified 29 - in a separate device node. Having multiple address ranges in a 30 - node implies no special relationship between the two ranges. 31 - 32 - Optional properties: 33 - - Any relevant NUMA associativity properties for the target platform. 34 - 35 - - volatile; This property indicates that this region is actually 36 - backed by non-persistent memory. This lets the OS know that it 37 - may skip the cache flushes required to ensure data is made 38 - persistent after a write. 39 - 40 - If this property is absent then the OS must assume that the region 41 - is backed by non-volatile memory. 42 - 43 - Examples: 44 - -------------------- 45 - 46 - /* 47 - * This node specifies one 4KB region spanning from 48 - * 0x5000 to 0x5fff that is backed by non-volatile memory. 49 - */ 50 - pmem@5000 { 51 - compatible = "pmem-region"; 52 - reg = <0x00005000 0x00001000>; 53 - }; 54 - 55 - /* 56 - * This node specifies two 4KB regions that are backed by 57 - * volatile (normal) memory. 58 - */ 59 - pmem@6000 { 60 - compatible = "pmem-region"; 61 - reg = < 0x00006000 0x00001000 62 - 0x00008000 0x00001000 >; 63 - volatile; 64 - }; 65 -
+48
Documentation/devicetree/bindings/pmem/pmem-region.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pmem-region.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + maintainers: 8 + - Oliver O'Halloran <oohall@gmail.com> 9 + 10 + title: Persistent Memory Regions 11 + 12 + description: | 13 + Persistent memory refers to a class of memory devices that are: 14 + 15 + a) Usable as main system memory (i.e. cacheable), and 16 + b) Retain their contents across power failure. 17 + 18 + Given b) it is best to think of persistent memory as a kind of memory mapped 19 + storage device. To ensure data integrity the operating system needs to manage 20 + persistent regions separately to the normal memory pool. To aid with that this 21 + binding provides a standardised interface for discovering where persistent 22 + memory regions exist inside the physical address space. 23 + 24 + properties: 25 + compatible: 26 + const: pmem-region 27 + 28 + reg: 29 + maxItems: 1 30 + 31 + volatile: 32 + description: 33 + Indicates the region is volatile (non-persistent) and the OS can skip 34 + cache flushes for writes 35 + type: boolean 36 + 37 + required: 38 + - compatible 39 + - reg 40 + 41 + additionalProperties: false 42 + 43 + examples: 44 + - | 45 + pmem@5000 { 46 + compatible = "pmem-region"; 47 + reg = <0x00005000 0x00001000>; 48 + };
+1 -1
Documentation/devicetree/bindings/serial/8250.yaml
··· 45 45 - ns16550 46 46 - ns16550a 47 47 then: 48 - anyOf: 48 + oneOf: 49 49 - required: [ clock-frequency ] 50 50 - required: [ clocks ] 51 51
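The intent of the ``anyOf`` to ``oneOf`` change can be illustrated with a minimal matcher. This is an illustrative Python sketch of JSON-Schema-style combinator semantics, not the dt-schema tooling; the helper names are made up:

```python
def any_of(node, alternatives):
    # anyOf: at least one alternative must match.
    return any(alt(node) for alt in alternatives)

def one_of(node, alternatives):
    # oneOf: exactly one alternative may match, so a node supplying
    # both clock-frequency and clocks is now rejected, while a node
    # supplying neither still fails.
    return sum(1 for alt in alternatives if alt(node)) == 1

alts = [lambda n: "clock-frequency" in n, lambda n: "clocks" in n]
one_of({"clock-frequency": 1843200}, alts)               # passes
one_of({"clock-frequency": 1843200, "clocks": 1}, alts)  # now fails
any_of({"clock-frequency": 1843200, "clocks": 1}, alts)  # used to pass
```

Under ``anyOf`` the redundant combination was silently accepted; ``oneOf`` makes the two clock sources mutually exclusive.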
-5
Documentation/devicetree/bindings/serial/altera_jtaguart.txt
··· 1 - Altera JTAG UART 2 - 3 - Required properties: 4 - - compatible : should be "ALTR,juart-1.0" <DEPRECATED> 5 - - compatible : should be "altr,juart-1.0"
-8
Documentation/devicetree/bindings/serial/altera_uart.txt
··· 1 - Altera UART 2 - 3 - Required properties: 4 - - compatible : should be "ALTR,uart-1.0" <DEPRECATED> 5 - - compatible : should be "altr,uart-1.0" 6 - 7 - Optional properties: 8 - - clock-frequency : frequency of the clock input to the UART
+19
Documentation/devicetree/bindings/serial/altr,juart-1.0.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/serial/altr,juart-1.0.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Altera JTAG UART 8 + 9 + maintainers: 10 + - Dinh Nguyen <dinguyen@kernel.org> 11 + 12 + properties: 13 + compatible: 14 + const: altr,juart-1.0 15 + 16 + required: 17 + - compatible 18 + 19 + additionalProperties: false
+25
Documentation/devicetree/bindings/serial/altr,uart-1.0.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/serial/altr,uart-1.0.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Altera UART 8 + 9 + maintainers: 10 + - Dinh Nguyen <dinguyen@kernel.org> 11 + 12 + allOf: 13 + - $ref: /schemas/serial/serial.yaml# 14 + 15 + properties: 16 + compatible: 17 + const: altr,uart-1.0 18 + 19 + clock-frequency: 20 + description: Frequency of the clock input to the UART. 21 + 22 + required: 23 + - compatible 24 + 25 + unevaluatedProperties: false
+1 -1
Documentation/devicetree/bindings/soc/fsl/fsl,ls1028a-reset.yaml
··· 1 1 # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 2 %YAML 1.2 3 3 --- 4 - $id: http://devicetree.org/schemas//soc/fsl/fsl,ls1028a-reset.yaml# 4 + $id: http://devicetree.org/schemas/soc/fsl/fsl,ls1028a-reset.yaml# 5 5 $schema: http://devicetree.org/meta-schemas/core.yaml# 6 6 7 7 title: Freescale Layerscape Reset Registers Module
+9
Documentation/filesystems/porting.rst
··· 1249 1249 1250 1250 Calling conventions for ->d_automount() have changed; we should *not* grab 1251 1251 an extra reference to new mount - it should be returned with refcount 1. 1252 + 1253 + --- 1254 + 1255 + collect_mounts()/drop_collected_mounts()/iterate_mounts() are gone now. 1256 + Replacement is collect_paths()/drop_collected_path(), with no special 1257 + iterator needed. Instead of a cloned mount tree, the new interface returns 1258 + an array of struct path, one for each mount collect_mounts() would've 1259 + created. These struct path point to locations in the caller's namespace 1260 + that would be roots of the cloned mounts.
+1 -1
Documentation/gpu/nouveau.rst
··· 25 25 GSP Support 26 26 ------------------------ 27 27 28 - .. kernel-doc:: drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c 28 + .. kernel-doc:: drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c 29 29 :doc: GSP message queue element 30 30 31 31 .. kernel-doc:: drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h
+10 -7
Documentation/netlink/genetlink.yaml
··· 6 6 7 7 # Common defines 8 8 $defs: 9 + name: 10 + type: string 11 + pattern: ^[0-9a-z-]+$ 9 12 uint: 10 13 type: integer 11 14 minimum: 0 ··· 32 29 properties: 33 30 name: 34 31 description: Name of the genetlink family. 35 - type: string 32 + $ref: '#/$defs/name' 36 33 doc: 37 34 type: string 38 35 protocol: ··· 51 48 additionalProperties: False 52 49 properties: 53 50 name: 54 - type: string 51 + $ref: '#/$defs/name' 55 52 header: 56 53 description: For C-compatible languages, header which already defines this value. 57 54 type: string ··· 78 75 additionalProperties: False 79 76 properties: 80 77 name: 81 - type: string 78 + $ref: '#/$defs/name' 82 79 value: 83 80 type: integer 84 81 doc: ··· 99 96 name: 100 97 description: | 101 98 Name used when referring to this space in other definitions, not used outside of the spec. 102 - type: string 99 + $ref: '#/$defs/name' 103 100 name-prefix: 104 101 description: | 105 102 Prefix for the C enum name of the attributes. Default family[name]-set[name]-a- ··· 124 121 additionalProperties: False 125 122 properties: 126 123 name: 127 - type: string 124 + $ref: '#/$defs/name' 128 125 type: &attr-type 129 126 enum: [ unused, pad, flag, binary, 130 127 uint, sint, u8, u16, u32, u64, s8, s16, s32, s64, ··· 246 243 properties: 247 244 name: 248 245 description: Name of the operation, also defining its C enum value in uAPI. 249 - type: string 246 + $ref: '#/$defs/name' 250 247 doc: 251 248 description: Documentation for the command. 252 249 type: string ··· 330 327 name: 331 328 description: | 332 329 The name for the group, used to form the define and the value of the define. 333 - type: string 330 + $ref: '#/$defs/name' 334 331 flags: *cmd_flags 335 332 336 333 kernel-family:
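The new ``$defs/name`` pattern (``^[0-9a-z-]+$``) is what forces the kebab-case attribute and enum names that the spec files in this merge are renamed to. A quick check of what it accepts (illustrative Python; the sample names come from the specs below):

```python
import re

# Same pattern as the genetlink schema's new $defs/name entry.
NAME_RE = re.compile(r"^[0-9a-z-]+$")

NAME_RE.fullmatch("pci-pf") is not None                  # kebab-case: ok
NAME_RE.fullmatch("pci_pf") is not None                  # underscore: rejected
NAME_RE.fullmatch("pin-frequency-77-5-khz") is not None  # digits ok
```

Uppercase letters, underscores, and dots all fail the pattern, which is why names such as ``encap_type`` and ``if_stats_msg`` had to become ``encap-type`` and ``if-stats-msg``.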
+4 -4
Documentation/netlink/specs/devlink.yaml
··· 38 38 - 39 39 name: dsa 40 40 - 41 - name: pci_pf 41 + name: pci-pf 42 42 - 43 - name: pci_vf 43 + name: pci-vf 44 44 - 45 45 name: virtual 46 46 - 47 47 name: unused 48 48 - 49 - name: pci_sf 49 + name: pci-sf 50 50 - 51 51 type: enum 52 52 name: port-fn-state ··· 220 220 - 221 221 name: flag 222 222 - 223 - name: nul_string 223 + name: nul-string 224 224 value: 10 225 225 - 226 226 name: binary
+1 -1
Documentation/netlink/specs/dpll.yaml
··· 188 188 value: 10000 189 189 - 190 190 type: const 191 - name: pin-frequency-77_5-khz 191 + name: pin-frequency-77-5-khz 192 192 value: 77500 193 193 - 194 194 type: const
+6 -3
Documentation/netlink/specs/ethtool.yaml
··· 7 7 doc: Partial family for Ethtool Netlink. 8 8 uapi-header: linux/ethtool_netlink_generated.h 9 9 10 + c-family-name: ethtool-genl-name 11 + c-version-name: ethtool-genl-version 12 + 10 13 definitions: 11 14 - 12 15 name: udp-tunnel-type ··· 48 45 name: started 49 46 doc: The firmware flashing process has started. 50 47 - 51 - name: in_progress 48 + name: in-progress 52 49 doc: The firmware flashing process is in progress. 53 50 - 54 51 name: completed ··· 1422 1419 name: hkey 1423 1420 type: binary 1424 1421 - 1425 - name: input_xfrm 1422 + name: input-xfrm 1426 1423 type: u32 1427 1424 - 1428 1425 name: start-context ··· 2238 2235 - hfunc 2239 2236 - indir 2240 2237 - hkey 2241 - - input_xfrm 2238 + - input-xfrm 2242 2239 dump: 2243 2240 request: 2244 2241 attributes:
+18 -18
Documentation/netlink/specs/fou.yaml
··· 15 15 definitions: 16 16 - 17 17 type: enum 18 - name: encap_type 18 + name: encap-type 19 19 name-prefix: fou-encap- 20 20 enum-name: 21 21 entries: [ unspec, direct, gue ] ··· 43 43 name: type 44 44 type: u8 45 45 - 46 - name: remcsum_nopartial 46 + name: remcsum-nopartial 47 47 type: flag 48 48 - 49 - name: local_v4 49 + name: local-v4 50 50 type: u32 51 51 - 52 - name: local_v6 52 + name: local-v6 53 53 type: binary 54 54 checks: 55 55 min-len: 16 56 56 - 57 - name: peer_v4 57 + name: peer-v4 58 58 type: u32 59 59 - 60 - name: peer_v6 60 + name: peer-v6 61 61 type: binary 62 62 checks: 63 63 min-len: 16 64 64 - 65 - name: peer_port 65 + name: peer-port 66 66 type: u16 67 67 byte-order: big-endian 68 68 - ··· 90 90 - port 91 91 - ipproto 92 92 - type 93 - - remcsum_nopartial 94 - - local_v4 95 - - peer_v4 96 - - local_v6 97 - - peer_v6 98 - - peer_port 93 + - remcsum-nopartial 94 + - local-v4 95 + - peer-v4 96 + - local-v6 97 + - peer-v6 98 + - peer-port 99 99 - ifindex 100 100 101 101 - ··· 112 112 - af 113 113 - ifindex 114 114 - port 115 - - peer_port 116 - - local_v4 117 - - peer_v4 118 - - local_v6 119 - - peer_v6 115 + - peer-port 116 + - local-v4 117 + - peer-v4 118 + - local-v6 119 + - peer-v6 120 120 121 121 - 122 122 name: get
+4 -4
Documentation/netlink/specs/mptcp_pm.yaml
··· 57 57 doc: >- 58 58 A new subflow has been established. 'error' should not be set. 59 59 Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | 60 - daddr6, sport, dport, backup, if_idx [, error]. 60 + daddr6, sport, dport, backup, if-idx [, error]. 61 61 - 62 62 name: sub-closed 63 63 doc: >- 64 64 A subflow has been closed. An error (copy of sk_err) could be set if an 65 65 error has been detected for this subflow. 66 66 Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | 67 - daddr6, sport, dport, backup, if_idx [, error]. 67 + daddr6, sport, dport, backup, if-idx [, error]. 68 68 - 69 69 name: sub-priority 70 70 value: 13 71 71 doc: >- 72 72 The priority of a subflow has changed. 'error' should not be set. 73 73 Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | 74 - daddr6, sport, dport, backup, if_idx [, error]. 74 + daddr6, sport, dport, backup, if-idx [, error]. 75 75 - 76 76 name: listener-created 77 77 value: 15 ··· 255 255 name: timeout 256 256 type: u32 257 257 - 258 - name: if_idx 258 + name: if-idx 259 259 type: u32 260 260 - 261 261 name: reset-reason
+2 -2
Documentation/netlink/specs/nfsd.yaml
··· 27 27 name: proc 28 28 type: u32 29 29 - 30 - name: service_time 30 + name: service-time 31 31 type: s64 32 32 - 33 33 name: pad ··· 139 139 - prog 140 140 - version 141 141 - proc 142 - - service_time 142 + - service-time 143 143 - saddr4 144 144 - daddr4 145 145 - saddr6
+3 -3
Documentation/netlink/specs/ovs_flow.yaml
··· 216 216 type: struct 217 217 members: 218 218 - 219 - name: nd_target 219 + name: nd-target 220 220 type: binary 221 221 len: 16 222 222 byte-order: big-endian ··· 258 258 type: struct 259 259 members: 260 260 - 261 - name: vlan_tpid 261 + name: vlan-tpid 262 262 type: u16 263 263 byte-order: big-endian 264 264 doc: Tag protocol identifier (TPID) to push. 265 265 - 266 - name: vlan_tci 266 + name: vlan-tci 267 267 type: u16 268 268 byte-order: big-endian 269 269 doc: Tag control identifier (TCI) to push.
+2 -2
Documentation/netlink/specs/rt-link.yaml
··· 603 603 name: optmask 604 604 type: u32 605 605 - 606 - name: if_stats_msg 606 + name: if-stats-msg 607 607 type: struct 608 608 members: 609 609 - ··· 2486 2486 name: getstats 2487 2487 doc: Get / dump link stats. 2488 2488 attribute-set: stats-attrs 2489 - fixed-header: if_stats_msg 2489 + fixed-header: if-stats-msg 2490 2490 do: 2491 2491 request: 2492 2492 value: 94
+2 -2
Documentation/netlink/specs/tc.yaml
··· 232 232 type: u8 233 233 doc: log(P_max / (qth-max - qth-min)) 234 234 - 235 - name: Scell_log 235 + name: Scell-log 236 236 type: u8 237 237 doc: cell size for idle damping 238 238 - ··· 253 253 name: DPs 254 254 type: u32 255 255 - 256 - name: def_DP 256 + name: def-DP 257 257 type: u32 258 258 - 259 259 name: grio
+1 -1
Documentation/networking/device_drivers/ethernet/marvell/octeontx2.rst
··· 66 66 As mentioned above RVU PF0 is called the admin function (AF), this driver 67 67 supports resource provisioning and configuration of functional blocks. 68 68 Doesn't handle any I/O. It sets up few basic stuff but most of the 69 - funcionality is achieved via configuration requests from PFs and VFs. 69 + functionality is achieved via configuration requests from PFs and VFs. 70 70 71 71 PF/VFs communicates with AF via a shared memory region (mailbox). Upon 72 72 receiving requests AF does resource provisioning and other HW configuration.
+1
Documentation/process/embargoed-hardware-issues.rst
··· 290 290 AMD Tom Lendacky <thomas.lendacky@amd.com> 291 291 Ampere Darren Hart <darren@os.amperecomputing.com> 292 292 ARM Catalin Marinas <catalin.marinas@arm.com> 293 + IBM Power Madhavan Srinivasan <maddy@linux.ibm.com> 293 294 IBM Z Christian Borntraeger <borntraeger@de.ibm.com> 294 295 Intel Tony Luck <tony.luck@intel.com> 295 296 Qualcomm Trilok Soni <quic_tsoni@quicinc.com>
+19 -5
Documentation/sound/codecs/cs35l56.rst
··· 1 1 .. SPDX-License-Identifier: GPL-2.0-only 2 2 3 - ===================================================================== 4 - Audio drivers for Cirrus Logic CS35L54/56/57 Boosted Smart Amplifiers 5 - ===================================================================== 3 + ======================================================================== 4 + Audio drivers for Cirrus Logic CS35L54/56/57/63 Boosted Smart Amplifiers 5 + ======================================================================== 6 6 :Copyright: 2025 Cirrus Logic, Inc. and 7 7 Cirrus Logic International Semiconductor Ltd. 8 8 ··· 13 13 14 14 The high-level summary of this document is: 15 15 16 - **If you have a laptop that uses CS35L54/56/57 amplifiers but audio is not 16 + **If you have a laptop that uses CS35L54/56/57/63 amplifiers but audio is not 17 17 working, DO NOT ATTEMPT TO USE FIRMWARE AND SETTINGS FROM ANOTHER LAPTOP, 18 18 EVEN IF THAT LAPTOP SEEMS SIMILAR.** 19 19 20 - The CS35L54/56/57 amplifiers must be correctly configured for the power 20 + The CS35L54/56/57/63 amplifiers must be correctly configured for the power 21 21 supply voltage, speaker impedance, maximum speaker voltage/current, and 22 22 other external hardware connections. 23 23 ··· 34 34 * CS35L54 35 35 * CS35L56 36 36 * CS35L57 37 + * CS35L63 37 38 38 39 There are two drivers in the kernel 39 40 ··· 105 104 106 105 The format of the firmware file names is: 107 106 108 + SoundWire (except CS35L56 Rev B0): 109 + cs35lxx-b0-dsp1-misc-SSID[-spkidX]-l?u? 110 + 111 + SoundWire CS35L56 Rev B0: 112 + cs35lxx-b0-dsp1-misc-SSID[-spkidX]-ampN 113 + 114 + Non-SoundWire (HDA and I2S): 108 114 cs35lxx-b0-dsp1-misc-SSID[-spkidX]-ampN 109 115 110 116 Where: ··· 119 111 * cs35lxx-b0 is the amplifier model and silicon revision. This information 120 112 is logged by the driver during initialization. 121 113 * SSID is the 8-digit hexadecimal SSID value. 114 + * l?u? is the physical address on the SoundWire bus of the amp this 115 + file applies to. 122 116 * ampN is the amplifier number (for example amp1). This is the same as 123 117 the prefix on the ALSA control names except that it is always lower-case 124 118 in the file name. 125 119 * spkidX is an optional part, used for laptops that have firmware 126 120 configurations for different makes and models of internal speakers. 121 + 122 + The CS35L56 Rev B0 continues to use the old filename scheme because a 123 + large number of firmware files have already been published with these 124 + names. 127 125 128 126 Sound Open Firmware and ALSA topology files 129 127 -------------------------------------------
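How the documented naming scheme composes can be sketched as a small formatter. This is an illustrative Python sketch; the function name and the sample SSID, spkid and suffix values are made up, and real firmware files carry an extension not shown here:

```python
def cs35lxx_fw_basename(part_rev, ssid, suffix, spkid=None):
    # part_rev: amplifier model and silicon revision, e.g. "cs35l56-b0"
    # ssid:     8-digit hexadecimal SSID value
    # suffix:   "ampN" (non-SoundWire and CS35L56 Rev B0) or a
    #           SoundWire "l?u?" bus address for the other parts
    # spkid:    optional speaker configuration index
    parts = [part_rev, "dsp1", "misc", f"{ssid:08x}"]
    if spkid is not None:
        parts.append(f"spkid{spkid}")
    parts.append(suffix)
    return "-".join(parts)

cs35lxx_fw_basename("cs35l56-b0", 0x10280B27, "amp1")
# "cs35l56-b0-dsp1-misc-10280b27-amp1"
```

The spkid part slots in between the SSID and the amp/bus-address suffix, matching the ``cs35lxx-b0-dsp1-misc-SSID[-spkidX]-ampN`` pattern above.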
+58 -1
Documentation/virt/kvm/api.rst
··· 6645 6645 .. note:: 6646 6646 6647 6647 For KVM_EXIT_IO, KVM_EXIT_MMIO, KVM_EXIT_OSI, KVM_EXIT_PAPR, KVM_EXIT_XEN, 6648 - KVM_EXIT_EPR, KVM_EXIT_X86_RDMSR and KVM_EXIT_X86_WRMSR the corresponding 6648 + KVM_EXIT_EPR, KVM_EXIT_HYPERCALL, KVM_EXIT_TDX, 6649 + KVM_EXIT_X86_RDMSR and KVM_EXIT_X86_WRMSR the corresponding 6649 6650 operations are complete (and guest state is consistent) only after userspace 6650 6651 has re-entered the kernel with KVM_RUN. The kernel side will first finish 6651 6652 incomplete operations and then check for pending signals. ··· 7175 7174 - KVM_NOTIFY_CONTEXT_INVALID -- the VM context is corrupted and not valid 7176 7175 in VMCS. It would run into unknown result if resume the target VM. 7177 7176 7177 + :: 7178 + 7179 + /* KVM_EXIT_TDX */ 7180 + struct { 7181 + __u64 flags; 7182 + __u64 nr; 7183 + union { 7184 + struct { 7185 + u64 ret; 7186 + u64 data[5]; 7187 + } unknown; 7188 + struct { 7189 + u64 ret; 7190 + u64 gpa; 7191 + u64 size; 7192 + } get_quote; 7193 + struct { 7194 + u64 ret; 7195 + u64 leaf; 7196 + u64 r11, r12, r13, r14; 7197 + } get_tdvmcall_info; 7198 + }; 7199 + } tdx; 7200 + 7201 + Process a TDVMCALL from the guest. KVM forwards select TDVMCALL based 7202 + on the Guest-Hypervisor Communication Interface (GHCI) specification; 7203 + KVM bridges these requests to the userspace VMM with minimal changes, 7204 + placing the inputs in the union and copying them back to the guest 7205 + on re-entry. 7206 + 7207 + Flags are currently always zero, whereas ``nr`` contains the TDVMCALL 7208 + number from register R11. The remaining field of the union provide the 7209 + inputs and outputs of the TDVMCALL. Currently the following values of 7210 + ``nr`` are defined: 7211 + 7212 + * ``TDVMCALL_GET_QUOTE``: the guest has requested to generate a TD-Quote 7213 + signed by a service hosting TD-Quoting Enclave operating on the host. 7214 + Parameters and return value are in the ``get_quote`` field of the union. 7215 + The ``gpa`` field and ``size`` specify the guest physical address 7216 + (without the shared bit set) and the size of a shared-memory buffer, in 7217 + which the TDX guest passes a TD Report. The ``ret`` field represents 7218 + the return value of the GetQuote request. When the request has been 7219 + queued successfully, the TDX guest can poll the status field in the 7220 + shared-memory area to check whether the Quote generation is completed or 7221 + not. When completed, the generated Quote is returned via the same buffer. 7222 + 7223 + * ``TDVMCALL_GET_TD_VM_CALL_INFO``: the guest has requested the support 7224 + status of TDVMCALLs. The output values for the given leaf should be 7225 + placed in fields from ``r11`` to ``r14`` of the ``get_tdvmcall_info`` 7226 + field of the union. 7227 + 7228 + KVM may add support for more values in the future that may cause a userspace 7229 + exit, even without calls to ``KVM_ENABLE_CAP`` or similar. In this case, 7230 + it will enter with output fields already valid; in the common case, the 7231 + ``unknown.ret`` field of the union will be ``TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED``. 7232 + Userspace need not do anything if it does not wish to support a TDVMCALL. 7178 7233 :: 7179 7234 7180 7235 /* Fix the size of the union. */
+84 -47
MAINTAINERS
··· 207 207 X: include/uapi/ 208 208 209 209 ABIT UGURU 1,2 HARDWARE MONITOR DRIVER 210 - M: Hans de Goede <hdegoede@redhat.com> 210 + M: Hans de Goede <hansg@kernel.org> 211 211 L: linux-hwmon@vger.kernel.org 212 212 S: Maintained 213 213 F: drivers/hwmon/abituguru.c ··· 371 371 F: drivers/platform/x86/quickstart.c 372 372 373 373 ACPI SERIAL MULTI INSTANTIATE DRIVER 374 - M: Hans de Goede <hdegoede@redhat.com> 374 + M: Hans de Goede <hansg@kernel.org> 375 375 L: platform-driver-x86@vger.kernel.org 376 376 S: Maintained 377 377 F: drivers/platform/x86/serial-multi-instantiate.c ··· 1157 1157 F: arch/x86/kernel/amd_node.c 1158 1158 1159 1159 AMD PDS CORE DRIVER 1160 - M: Shannon Nelson <shannon.nelson@amd.com> 1161 1160 M: Brett Creeley <brett.creeley@amd.com> 1162 1161 L: netdev@vger.kernel.org 1163 1162 S: Maintained ··· 3550 3551 F: scripts/make_fit.py 3551 3552 3552 3553 ARM64 PLATFORM DRIVERS 3553 - M: Hans de Goede <hdegoede@redhat.com> 3554 + M: Hans de Goede <hansg@kernel.org> 3554 3555 M: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> 3555 3556 R: Bryan O'Donoghue <bryan.odonoghue@linaro.org> 3556 3557 L: platform-driver-x86@vger.kernel.org ··· 3711 3712 F: drivers/platform/x86/eeepc*.c 3712 3713 3713 3714 ASUS TF103C DOCK DRIVER 3714 - M: Hans de Goede <hdegoede@redhat.com> 3715 + M: Hans de Goede <hansg@kernel.org> 3715 3716 L: platform-driver-x86@vger.kernel.org 3716 3717 S: Maintained 3717 3718 T: git git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86.git ··· 5613 5614 F: drivers/usb/chipidea/ 5614 5615 5615 5616 CHIPONE ICN8318 I2C TOUCHSCREEN DRIVER 5616 - M: Hans de Goede <hdegoede@redhat.com> 5617 + M: Hans de Goede <hansg@kernel.org> 5617 5618 L: linux-input@vger.kernel.org 5618 5619 S: Maintained 5619 5620 F: Documentation/devicetree/bindings/input/touchscreen/chipone,icn8318.yaml 5620 5621 F: drivers/input/touchscreen/chipone_icn8318.c 5621 5622 5622 5623 CHIPONE ICN8505 I2C TOUCHSCREEN DRIVER 5623 - M: Hans de Goede <hdegoede@redhat.com> 5624 + M: Hans de Goede <hansg@kernel.org> 5624 5625 L: linux-input@vger.kernel.org 5625 5626 S: Maintained 5626 5627 F: drivers/input/touchscreen/chipone_icn8505.c ··· 6918 6919 F: include/linux/devfreq-event.h 6919 6920 6920 6921 DEVICE RESOURCE MANAGEMENT HELPERS 6921 - M: Hans de Goede <hdegoede@redhat.com> 6922 + M: Hans de Goede <hansg@kernel.org> 6922 6923 R: Matti Vaittinen <mazziesaccount@gmail.com> 6923 6924 S: Maintained 6924 6925 F: include/linux/devm-helpers.h ··· 7517 7518 F: include/drm/gud.h 7518 7519 7519 7520 DRM DRIVER FOR GRAIN MEDIA GM12U320 PROJECTORS 7520 - M: Hans de Goede <hdegoede@redhat.com> 7521 + M: Hans de Goede <hansg@kernel.org> 7521 7522 S: Maintained 7522 7523 T: git https://gitlab.freedesktop.org/drm/misc/kernel.git 7523 7524 F: drivers/gpu/drm/tiny/gm12u320.c ··· 7917 7918 F: drivers/gpu/drm/vkms/ 7918 7919 7919 7920 DRM DRIVER FOR VIRTUALBOX VIRTUAL GPU 7920 - M: Hans de Goede <hdegoede@redhat.com> 7921 + M: Hans de Goede <hansg@kernel.org> 7921 7922 L: dri-devel@lists.freedesktop.org 7922 7923 S: Maintained 7923 7924 T: git https://gitlab.freedesktop.org/drm/misc/kernel.git ··· 8318 8319 F: include/drm/drm_panel.h 8319 8320 8320 8321 DRM PRIVACY-SCREEN CLASS 8321 - M: Hans de Goede <hdegoede@redhat.com> 8322 + M: Hans de Goede <hansg@kernel.org> 8322 8323 L: dri-devel@lists.freedesktop.org 8323 8324 S: Maintained 8324 8325 T: git https://gitlab.freedesktop.org/drm/misc/kernel.git ··· 9941 9942 9942 9943 FWCTL PDS DRIVER 9943 9944 M: Brett Creeley <brett.creeley@amd.com> 9944 - R: Shannon Nelson <shannon.nelson@amd.com> 9945 9945 L: linux-kernel@vger.kernel.org 9946 9946 S: Maintained 9947 9947 F: drivers/fwctl/pds/ ··· 10221 10223 F: Documentation/devicetree/bindings/connector/gocontroll,moduline-module-slot.yaml 10222 10224 10223 10225 GOODIX TOUCHSCREEN 10224 - M: Hans de Goede <hdegoede@redhat.com> 10226 + M: Hans de Goede <hansg@kernel.org> 10225 10227 L: linux-input@vger.kernel.org 10226 10228 S: Maintained 10227 10229 F: drivers/input/touchscreen/goodix* ··· 10260 10262 K: [gG]oogle.?[tT]ensor 10261 10263 10262 10264 GPD POCKET FAN DRIVER 10263 - M: Hans de Goede <hdegoede@redhat.com> 10265 + M: Hans de Goede <hansg@kernel.org> 10264 10266 L: platform-driver-x86@vger.kernel.org 10265 10267 S: Maintained 10266 10268 F: drivers/platform/x86/gpd-pocket-fan.c ··· 10839 10841 F: drivers/dma/hisi_dma.c 10840 10842 10841 10843 HISILICON GPIO DRIVER 10842 - M: Jay Fang <f.fangjian@huawei.com> 10844 + M: Yang Shen <shenyang39@huawei.com> 10843 10845 L: linux-gpio@vger.kernel.org 10844 10846 S: Maintained 10845 10847 F: Documentation/devicetree/bindings/gpio/hisilicon,ascend910-gpio.yaml ··· 11155 11157 11156 11158 HUGETLB SUBSYSTEM 11157 11159 M: Muchun Song <muchun.song@linux.dev> 11158 - R: Oscar Salvador <osalvador@suse.de> 11160 + M: Oscar Salvador <osalvador@suse.de> 11161 + R: David Hildenbrand <david@redhat.com> 11159 11162 L: linux-mm@kvack.org 11160 11163 S: Maintained 11161 11164 F: Documentation/ABI/testing/sysfs-kernel-mm-hugepages ··· 11167 11168 F: include/linux/hugetlb.h 11168 11169 F: include/trace/events/hugetlbfs.h 11169 11170 F: mm/hugetlb.c 11171 + F: mm/hugetlb_cgroup.c 11170 11172 F: mm/hugetlb_cma.c 11171 11173 F: mm/hugetlb_cma.h 11172 11174 F: mm/hugetlb_vmemmap.c ··· 11423 11423 F: drivers/i2c/busses/i2c-viapro.c 11424 11424 11425 11425 I2C/SMBUS INTEL CHT WHISKEY COVE PMIC DRIVER 11426 - M: Hans de Goede <hdegoede@redhat.com> 11426 + M: Hans de Goede <hansg@kernel.org> 11427 11427 L: linux-i2c@vger.kernel.org 11428 11428 S: Maintained 11429 11429 F: drivers/i2c/busses/i2c-cht-wc.c ··· 12013 12013 F: sound/soc/intel/ 12014 12014 12015 12015 INTEL ATOMISP2 DUMMY / POWER-MANAGEMENT DRIVER 12016 - M: Hans de Goede <hdegoede@redhat.com> 12016 + M: Hans de Goede <hansg@kernel.org> 12017 12017 L: platform-driver-x86@vger.kernel.org 12018 12018 S: Maintained 12019 12019 F: drivers/platform/x86/intel/atomisp2/pm.c 12020 12020 12021 12021 INTEL ATOMISP2 LED DRIVER 12022 - M: Hans de Goede <hdegoede@redhat.com> 12022 + M: Hans de Goede <hansg@kernel.org> 12023 12023 L: platform-driver-x86@vger.kernel.org 12024 12024 S: Maintained 12025 12025 F: drivers/platform/x86/intel/atomisp2/led.c ··· 13347 13347 M: Mike Rapoport <rppt@kernel.org> 13348 13348 M: Changyuan Lyu <changyuanl@google.com> 13349 13349 L: kexec@lists.infradead.org 13350 + L: linux-mm@kvack.org 13350 13351 S: Maintained 13351 13352 F: Documentation/admin-guide/mm/kho.rst 13352 13353 F: Documentation/core-api/kho/* ··· 13681 13680 F: drivers/platform/x86/lenovo-wmi-hotkey-utilities.c 13682 13681 13683 13682 LETSKETCH HID TABLET DRIVER 13684 - M: Hans de Goede <hdegoede@redhat.com> 13683 + M: Hans de Goede <hansg@kernel.org> 13685 13684 L: linux-input@vger.kernel.org 13686 13685 S: Maintained 13687 13686 T: git git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git ··· 13731 13730 F: drivers/ata/sata_gemini.h 13732 13731 13733 13732 LIBATA SATA AHCI PLATFORM devices support 13734 - M: Hans de Goede <hdegoede@redhat.com> 13733 + M: Hans de Goede <hansg@kernel.org> 13735 13734 L: linux-ide@vger.kernel.org 13736 13735 S: Maintained 13737 13736 F: drivers/ata/ahci_platform.c ··· 13801 13800 L: nvdimm@lists.linux.dev 13802 13801 S: Supported 13803 13802 Q: https://patchwork.kernel.org/project/linux-nvdimm/list/ 13804 - F: Documentation/devicetree/bindings/pmem/pmem-region.txt 13803 + F: Documentation/devicetree/bindings/pmem/pmem-region.yaml 13805 13804 F: drivers/nvdimm/of_pmem.c 13806 13805 LIBNVDIMM: NON-VOLATILE MEMORY DEVICE SUBSYSTEM ··· 14101 14100 F: block/partitions/ldm.* 14102 14101 14103 14102 LOGITECH HID GAMING KEYBOARDS 14104 - M: Hans de Goede <hdegoede@redhat.com> 14103 + M: Hans de Goede <hansg@kernel.org> 14105 14104 L: linux-input@vger.kernel.org 14106 14105 S: Maintained 14107 14106 T: git git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git ··· 14783 14782 F: drivers/power/supply/max17040_battery.c 14784 14783
14785 14784 MAXIM MAX17042 FAMILY FUEL GAUGE DRIVERS 14786 - R: Hans de Goede <hdegoede@redhat.com> 14785 + R: Hans de Goede <hansg@kernel.org> 14787 14786 R: Krzysztof Kozlowski <krzk@kernel.org> 14788 14787 R: Marek Szyprowski <m.szyprowski@samsung.com> 14789 14788 R: Sebastian Krzyszkowiak <sebastian.krzyszkowiak@puri.sm> ··· 15585 15584 F: drivers/net/ethernet/mellanox/mlxfw/ 15586 15585 15587 15586 MELLANOX HARDWARE PLATFORM SUPPORT 15588 - M: Hans de Goede <hdegoede@redhat.com> 15587 + M: Hans de Goede <hansg@kernel.org> 15589 15588 M: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> 15590 15589 M: Vadim Pasternak <vadimp@nvidia.com> 15591 15590 L: platform-driver-x86@vger.kernel.org ··· 15676 15675 M: Mike Rapoport <rppt@kernel.org> 15677 15676 L: linux-mm@kvack.org 15678 15677 S: Maintained 15678 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock.git for-next 15679 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock.git fixes 15679 15680 F: Documentation/core-api/boot-time-mm.rst 15680 15681 F: Documentation/core-api/kho/bindings/memblock/* 15681 15682 F: include/linux/memblock.h 15683 + F: mm/bootmem_info.c 15682 15684 F: mm/memblock.c 15685 + F: mm/memtest.c 15683 15686 F: mm/mm_init.c 15687 + F: mm/rodata_test.c 15684 15688 F: tools/testing/memblock/ 15685 15689 15686 15690 MEMORY ALLOCATION PROFILING ··· 15740 15734 F: Documentation/mm/ 15741 15735 F: include/linux/gfp.h 15742 15736 F: include/linux/gfp_types.h 15743 - F: include/linux/memfd.h 15744 15737 F: include/linux/memory_hotplug.h 15745 15738 F: include/linux/memory-tiers.h 15746 15739 F: include/linux/mempolicy.h ··· 15799 15794 W: http://www.linux-mm.org 15800 15795 T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm 15801 15796 F: mm/gup.c 15797 + F: mm/gup_test.c 15798 + F: mm/gup_test.h 15799 + F: tools/testing/selftests/mm/gup_longterm.c 15800 + F: tools/testing/selftests/mm/gup_test.c 15802 15801 15803 15802 MEMORY MANAGEMENT - KSM (Kernel Samepage 
Merging) 15804 15803 M: Andrew Morton <akpm@linux-foundation.org> ··· 15850 15841 F: mm/numa_emulation.c 15851 15842 F: mm/numa_memblks.c 15852 15843 15844 + MEMORY MANAGEMENT - OOM KILLER 15845 + M: Michal Hocko <mhocko@suse.com> 15846 + R: David Rientjes <rientjes@google.com> 15847 + R: Shakeel Butt <shakeel.butt@linux.dev> 15848 + L: linux-mm@kvack.org 15849 + S: Maintained 15850 + F: include/linux/oom.h 15851 + F: include/trace/events/oom.h 15852 + F: include/uapi/linux/oom.h 15853 + F: mm/oom_kill.c 15854 + 15853 15855 MEMORY MANAGEMENT - PAGE ALLOCATOR 15854 15856 M: Andrew Morton <akpm@linux-foundation.org> 15855 15857 M: Vlastimil Babka <vbabka@suse.cz> ··· 15875 15855 F: include/linux/gfp.h 15876 15856 F: include/linux/page-isolation.h 15877 15857 F: mm/compaction.c 15858 + F: mm/debug_page_alloc.c 15859 + F: mm/fail_page_alloc.c 15878 15860 F: mm/page_alloc.c 15861 + F: mm/page_ext.c 15862 + F: mm/page_frag_cache.c 15879 15863 F: mm/page_isolation.c 15864 + F: mm/page_owner.c 15865 + F: mm/page_poison.c 15866 + F: mm/page_reporting.c 15867 + F: mm/show_mem.c 15868 + F: mm/shuffle.c 15880 15869 15881 15870 MEMORY MANAGEMENT - RECLAIM 15882 15871 M: Andrew Morton <akpm@linux-foundation.org> ··· 15899 15870 S: Maintained 15900 15871 F: mm/pt_reclaim.c 15901 15872 F: mm/vmscan.c 15873 + F: mm/workingset.c 15902 15874 15903 15875 MEMORY MANAGEMENT - RMAP (REVERSE MAPPING) 15904 15876 M: Andrew Morton <akpm@linux-foundation.org> ··· 15912 15882 L: linux-mm@kvack.org 15913 15883 S: Maintained 15914 15884 F: include/linux/rmap.h 15885 + F: mm/page_vma_mapped.c 15915 15886 F: mm/rmap.c 15916 15887 15917 15888 MEMORY MANAGEMENT - SECRETMEM ··· 15945 15914 MEMORY MANAGEMENT - THP (TRANSPARENT HUGE PAGE) 15946 15915 M: Andrew Morton <akpm@linux-foundation.org> 15947 15916 M: David Hildenbrand <david@redhat.com> 15917 + M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 15948 15918 R: Zi Yan <ziy@nvidia.com> 15949 15919 R: Baolin Wang <baolin.wang@linux.alibaba.com> 
15950 - R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 15951 15920 R: Liam R. Howlett <Liam.Howlett@oracle.com> 15952 15921 R: Nico Pache <npache@redhat.com> 15953 15922 R: Ryan Roberts <ryan.roberts@arm.com> ··· 16005 15974 W: http://www.linux-mm.org 16006 15975 T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm 16007 15976 F: include/trace/events/mmap.h 15977 + F: mm/mincore.c 16008 15978 F: mm/mlock.c 16009 15979 F: mm/mmap.c 16010 15980 F: mm/mprotect.c 16011 15981 F: mm/mremap.c 16012 15982 F: mm/mseal.c 15983 + F: mm/msync.c 15984 + F: mm/nommu.c 16013 15985 F: mm/vma.c 16014 15986 F: mm/vma.h 16015 15987 F: mm/vma_exec.c ··· 16575 16541 F: drivers/platform/surface/surface_gpe.c 16576 16542 16577 16543 MICROSOFT SURFACE HARDWARE PLATFORM SUPPORT 16578 - M: Hans de Goede <hdegoede@redhat.com> 16544 + M: Hans de Goede <hansg@kernel.org> 16579 16545 M: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> 16580 16546 M: Maximilian Luz <luzmaximilian@gmail.com> 16581 16547 L: platform-driver-x86@vger.kernel.org ··· 17743 17709 F: tools/testing/selftests/nolibc/ 17744 17710 17745 17711 NOVATEK NVT-TS I2C TOUCHSCREEN DRIVER 17746 - M: Hans de Goede <hdegoede@redhat.com> 17712 + M: Hans de Goede <hansg@kernel.org> 17747 17713 L: linux-input@vger.kernel.org 17748 17714 S: Maintained 17749 17715 F: Documentation/devicetree/bindings/input/touchscreen/novatek,nvt-ts.yaml ··· 19413 19379 F: include/crypto/pcrypt.h 19414 19380 19415 19381 PDS DSC VIRTIO DATA PATH ACCELERATOR 19416 - R: Shannon Nelson <shannon.nelson@amd.com> 19382 + R: Brett Creeley <brett.creeley@amd.com> 19417 19383 F: drivers/vdpa/pds/ 19418 19384 19419 19385 PECI HARDWARE MONITORING DRIVERS ··· 19435 19401 F: include/linux/peci.h 19436 19402 19437 19403 PENSANDO ETHERNET DRIVERS 19438 - M: Shannon Nelson <shannon.nelson@amd.com> 19439 19404 M: Brett Creeley <brett.creeley@amd.com> 19440 19405 L: netdev@vger.kernel.org 19441 19406 S: Maintained ··· 21410 21377 K: spacemit 21411 21378 21412 21379 
RISC-V THEAD SoC SUPPORT 21413 - M: Drew Fustini <drew@pdp7.com> 21380 + M: Drew Fustini <fustini@kernel.org> 21414 21381 M: Guo Ren <guoren@kernel.org> 21415 21382 M: Fu Wei <wefu@redhat.com> 21416 21383 L: linux-riscv@lists.infradead.org ··· 22207 22174 R: David Vernet <void@manifault.com> 22208 22175 R: Andrea Righi <arighi@nvidia.com> 22209 22176 R: Changwoo Min <changwoo@igalia.com> 22210 - L: linux-kernel@vger.kernel.org 22177 + L: sched-ext@lists.linux.dev 22211 22178 S: Maintained 22212 22179 W: https://github.com/sched-ext/scx 22213 22180 T: git://git.kernel.org/pub/scm/linux/kernel/git/tj/sched_ext.git ··· 22744 22711 K: [^@]sifive 22745 22712 22746 22713 SILEAD TOUCHSCREEN DRIVER 22747 - M: Hans de Goede <hdegoede@redhat.com> 22714 + M: Hans de Goede <hansg@kernel.org> 22748 22715 L: linux-input@vger.kernel.org 22749 22716 L: platform-driver-x86@vger.kernel.org 22750 22717 S: Maintained ··· 22777 22744 F: drivers/i3c/master/svc-i3c-master.c 22778 22745 22779 22746 SIMPLEFB FB DRIVER 22780 - M: Hans de Goede <hdegoede@redhat.com> 22747 + M: Hans de Goede <hansg@kernel.org> 22781 22748 L: linux-fbdev@vger.kernel.org 22782 22749 S: Maintained 22783 22750 F: Documentation/devicetree/bindings/display/simple-framebuffer.yaml ··· 22906 22873 F: drivers/hwmon/emc2103.c 22907 22874 22908 22875 SMSC SCH5627 HARDWARE MONITOR DRIVER 22909 - M: Hans de Goede <hdegoede@redhat.com> 22876 + M: Hans de Goede <hansg@kernel.org> 22910 22877 L: linux-hwmon@vger.kernel.org 22911 22878 S: Supported 22912 22879 F: Documentation/hwmon/sch5627.rst ··· 23561 23528 F: Documentation/process/stable-kernel-rules.rst 23562 23529 23563 23530 STAGING - ATOMISP DRIVER 23564 - M: Hans de Goede <hdegoede@redhat.com> 23531 + M: Hans de Goede <hansg@kernel.org> 23565 23532 M: Mauro Carvalho Chehab <mchehab@kernel.org> 23566 23533 R: Sakari Ailus <sakari.ailus@linux.intel.com> 23567 23534 L: linux-media@vger.kernel.org ··· 23857 23824 F: drivers/net/ethernet/i825xx/sun3* 23858 23825 23859 
23826 SUN4I LOW RES ADC ATTACHED TABLET KEYS DRIVER 23860 - M: Hans de Goede <hdegoede@redhat.com> 23827 + M: Hans de Goede <hansg@kernel.org> 23861 23828 L: linux-input@vger.kernel.org 23862 23829 S: Maintained 23863 23830 F: Documentation/devicetree/bindings/input/allwinner,sun4i-a10-lradc-keys.yaml ··· 24099 24066 L: linux-i2c@vger.kernel.org 24100 24067 S: Maintained 24101 24068 F: drivers/i2c/busses/i2c-designware-amdisp.c 24069 + F: include/linux/soc/amd/isp4_misc.h 24102 24070 24103 24071 SYNOPSYS DESIGNWARE MMC/SD/SDIO DRIVER 24104 24072 M: Jaehoon Chung <jh80.chung@samsung.com> ··· 25064 25030 R: Baolin Wang <baolin.wang@linux.alibaba.com> 25065 25031 L: linux-mm@kvack.org 25066 25032 S: Maintained 25033 + F: include/linux/memfd.h 25067 25034 F: include/linux/shmem_fs.h 25035 + F: mm/memfd.c 25068 25036 F: mm/shmem.c 25037 + F: mm/shmem_quota.c 25069 25038 25070 25039 TOMOYO SECURITY MODULE 25071 25040 M: Kentaro Takeda <takedakn@nttdata.co.jp> ··· 25629 25592 F: drivers/hid/usbhid/ 25630 25593 25631 25594 USB INTEL XHCI ROLE MUX DRIVER 25632 - M: Hans de Goede <hdegoede@redhat.com> 25595 + M: Hans de Goede <hansg@kernel.org> 25633 25596 L: linux-usb@vger.kernel.org 25634 25597 S: Maintained 25635 25598 F: drivers/usb/roles/intel-xhci-usb-role-switch.c ··· 25820 25783 F: drivers/usb/typec/mux/intel_pmc_mux.c 25821 25784 25822 25785 USB TYPEC PI3USB30532 MUX DRIVER 25823 - M: Hans de Goede <hdegoede@redhat.com> 25786 + M: Hans de Goede <hansg@kernel.org> 25824 25787 L: linux-usb@vger.kernel.org 25825 25788 S: Maintained 25826 25789 F: drivers/usb/typec/mux/pi3usb30532.c ··· 25849 25812 25850 25813 USB VIDEO CLASS 25851 25814 M: Laurent Pinchart <laurent.pinchart@ideasonboard.com> 25852 - M: Hans de Goede <hdegoede@redhat.com> 25815 + M: Hans de Goede <hansg@kernel.org> 25853 25816 L: linux-media@vger.kernel.org 25854 25817 S: Maintained 25855 25818 W: http://www.ideasonboard.org/uvc/ ··· 26380 26343 F: sound/virtio/* 26381 26344 26382 26345 VIRTUAL BOX 
GUEST DEVICE DRIVER 26383 - M: Hans de Goede <hdegoede@redhat.com> 26346 + M: Hans de Goede <hansg@kernel.org> 26384 26347 M: Arnd Bergmann <arnd@arndb.de> 26385 26348 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 26386 26349 S: Maintained ··· 26389 26352 F: include/uapi/linux/vbox*.h 26390 26353 26391 26354 VIRTUAL BOX SHARED FOLDER VFS DRIVER 26392 - M: Hans de Goede <hdegoede@redhat.com> 26355 + M: Hans de Goede <hansg@kernel.org> 26393 26356 L: linux-fsdevel@vger.kernel.org 26394 26357 S: Maintained 26395 26358 F: fs/vboxsf/* ··· 26643 26606 26644 26607 WACOM PROTOCOL 4 SERIAL TABLETS 26645 26608 M: Julian Squires <julian@cipht.net> 26646 - M: Hans de Goede <hdegoede@redhat.com> 26609 + M: Hans de Goede <hansg@kernel.org> 26647 26610 L: linux-input@vger.kernel.org 26648 26611 S: Maintained 26649 26612 F: drivers/input/tablet/wacom_serial4.c ··· 26810 26773 F: include/uapi/linux/wwan.h 26811 26774 26812 26775 X-POWERS AXP288 PMIC DRIVERS 26813 - M: Hans de Goede <hdegoede@redhat.com> 26776 + M: Hans de Goede <hansg@kernel.org> 26814 26777 S: Maintained 26815 26778 F: drivers/acpi/pmic/intel_pmic_xpower.c 26816 26779 N: axp288 ··· 26902 26865 F: arch/x86/mm/ 26903 26866 26904 26867 X86 PLATFORM ANDROID TABLETS DSDT FIXUP DRIVER 26905 - M: Hans de Goede <hdegoede@redhat.com> 26868 + M: Hans de Goede <hansg@kernel.org> 26906 26869 L: platform-driver-x86@vger.kernel.org 26907 26870 S: Maintained 26908 26871 T: git git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86.git 26909 26872 F: drivers/platform/x86/x86-android-tablets/ 26910 26873 26911 26874 X86 PLATFORM DRIVERS 26912 - M: Hans de Goede <hdegoede@redhat.com> 26875 + M: Hans de Goede <hansg@kernel.org> 26913 26876 M: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com> 26914 26877 L: platform-driver-x86@vger.kernel.org 26915 26878 S: Maintained
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 16 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc2 5 + EXTRAVERSION = -rc4 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
-62
arch/arm64/include/asm/kvm_emulate.h
··· 561 561 vcpu_set_flag((v), e); \ 562 562 } while (0) 563 563 564 - #define __build_check_all_or_none(r, bits) \ 565 - BUILD_BUG_ON(((r) & (bits)) && ((r) & (bits)) != (bits)) 566 - 567 - #define __cpacr_to_cptr_clr(clr, set) \ 568 - ({ \ 569 - u64 cptr = 0; \ 570 - \ 571 - if ((set) & CPACR_EL1_FPEN) \ 572 - cptr |= CPTR_EL2_TFP; \ 573 - if ((set) & CPACR_EL1_ZEN) \ 574 - cptr |= CPTR_EL2_TZ; \ 575 - if ((set) & CPACR_EL1_SMEN) \ 576 - cptr |= CPTR_EL2_TSM; \ 577 - if ((clr) & CPACR_EL1_TTA) \ 578 - cptr |= CPTR_EL2_TTA; \ 579 - if ((clr) & CPTR_EL2_TAM) \ 580 - cptr |= CPTR_EL2_TAM; \ 581 - if ((clr) & CPTR_EL2_TCPAC) \ 582 - cptr |= CPTR_EL2_TCPAC; \ 583 - \ 584 - cptr; \ 585 - }) 586 - 587 - #define __cpacr_to_cptr_set(clr, set) \ 588 - ({ \ 589 - u64 cptr = 0; \ 590 - \ 591 - if ((clr) & CPACR_EL1_FPEN) \ 592 - cptr |= CPTR_EL2_TFP; \ 593 - if ((clr) & CPACR_EL1_ZEN) \ 594 - cptr |= CPTR_EL2_TZ; \ 595 - if ((clr) & CPACR_EL1_SMEN) \ 596 - cptr |= CPTR_EL2_TSM; \ 597 - if ((set) & CPACR_EL1_TTA) \ 598 - cptr |= CPTR_EL2_TTA; \ 599 - if ((set) & CPTR_EL2_TAM) \ 600 - cptr |= CPTR_EL2_TAM; \ 601 - if ((set) & CPTR_EL2_TCPAC) \ 602 - cptr |= CPTR_EL2_TCPAC; \ 603 - \ 604 - cptr; \ 605 - }) 606 - 607 - #define cpacr_clear_set(clr, set) \ 608 - do { \ 609 - BUILD_BUG_ON((set) & CPTR_VHE_EL2_RES0); \ 610 - BUILD_BUG_ON((clr) & CPACR_EL1_E0POE); \ 611 - __build_check_all_or_none((clr), CPACR_EL1_FPEN); \ 612 - __build_check_all_or_none((set), CPACR_EL1_FPEN); \ 613 - __build_check_all_or_none((clr), CPACR_EL1_ZEN); \ 614 - __build_check_all_or_none((set), CPACR_EL1_ZEN); \ 615 - __build_check_all_or_none((clr), CPACR_EL1_SMEN); \ 616 - __build_check_all_or_none((set), CPACR_EL1_SMEN); \ 617 - \ 618 - if (has_vhe() || has_hvhe()) \ 619 - sysreg_clear_set(cpacr_el1, clr, set); \ 620 - else \ 621 - sysreg_clear_set(cptr_el2, \ 622 - __cpacr_to_cptr_clr(clr, set), \ 623 - __cpacr_to_cptr_set(clr, set));\ 624 - } while (0) 625 - 626 564 /* 627 565 * Returns a 
'sanitised' view of CPTR_EL2, translating from nVHE to the VHE 628 566 * format if E2H isn't set.
+2 -4
arch/arm64/include/asm/kvm_host.h
··· 1289 1289 }) 1290 1290 1291 1291 /* 1292 - * The couple of isb() below are there to guarantee the same behaviour 1293 - * on VHE as on !VHE, where the eret to EL1 acts as a context 1294 - * synchronization event. 1292 + * The isb() below is there to guarantee the same behaviour on VHE as on !VHE, 1293 + * where the eret to EL1 acts as a context synchronization event. 1295 1294 */ 1296 1295 #define kvm_call_hyp(f, ...) \ 1297 1296 do { \ ··· 1308 1309 \ 1309 1310 if (has_vhe()) { \ 1310 1311 ret = f(__VA_ARGS__); \ 1311 - isb(); \ 1312 1312 } else { \ 1313 1313 ret = kvm_call_hyp_nvhe(f, ##__VA_ARGS__); \ 1314 1314 } \
+3 -1
arch/arm64/kernel/process.c
··· 288 288 if (!system_supports_gcs()) 289 289 return; 290 290 291 - gcs_free(current); 291 + current->thread.gcspr_el0 = 0; 292 + current->thread.gcs_base = 0; 293 + current->thread.gcs_size = 0; 292 294 current->thread.gcs_el0_mode = 0; 293 295 write_sysreg_s(GCSCRE0_EL1_nTR, SYS_GCSCRE0_EL1); 294 296 write_sysreg_s(0, SYS_GCSPR_EL0);
+1 -1
arch/arm64/kernel/ptrace.c
··· 141 141 142 142 addr += n; 143 143 if (regs_within_kernel_stack(regs, (unsigned long)addr)) 144 - return *addr; 144 + return READ_ONCE_NOCHECK(*addr); 145 145 else 146 146 return 0; 147 147 }
+2 -1
arch/arm64/kvm/arm.c
··· 2764 2764 bool kvm_arch_irqfd_route_changed(struct kvm_kernel_irq_routing_entry *old, 2765 2765 struct kvm_kernel_irq_routing_entry *new) 2766 2766 { 2767 - if (new->type != KVM_IRQ_ROUTING_MSI) 2767 + if (old->type != KVM_IRQ_ROUTING_MSI || 2768 + new->type != KVM_IRQ_ROUTING_MSI) 2768 2769 return true; 2769 2770 2770 2771 return memcmp(&old->msi, &new->msi, sizeof(new->msi));
+138 -9
arch/arm64/kvm/hyp/include/hyp/switch.h
··· 65 65 } 66 66 } 67 67 68 + static inline void __activate_cptr_traps_nvhe(struct kvm_vcpu *vcpu) 69 + { 70 + u64 val = CPTR_NVHE_EL2_RES1 | CPTR_EL2_TAM | CPTR_EL2_TTA; 71 + 72 + /* 73 + * Always trap SME since it's not supported in KVM. 74 + * TSM is RES1 if SME isn't implemented. 75 + */ 76 + val |= CPTR_EL2_TSM; 77 + 78 + if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs()) 79 + val |= CPTR_EL2_TZ; 80 + 81 + if (!guest_owns_fp_regs()) 82 + val |= CPTR_EL2_TFP; 83 + 84 + write_sysreg(val, cptr_el2); 85 + } 86 + 87 + static inline void __activate_cptr_traps_vhe(struct kvm_vcpu *vcpu) 88 + { 89 + /* 90 + * With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to 91 + * CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2, 92 + * except for some missing controls, such as TAM. 93 + * In this case, CPTR_EL2.TAM has the same position with or without 94 + * VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM 95 + * shift value for trapping the AMU accesses. 96 + */ 97 + u64 val = CPTR_EL2_TAM | CPACR_EL1_TTA; 98 + u64 cptr; 99 + 100 + if (guest_owns_fp_regs()) { 101 + val |= CPACR_EL1_FPEN; 102 + if (vcpu_has_sve(vcpu)) 103 + val |= CPACR_EL1_ZEN; 104 + } 105 + 106 + if (!vcpu_has_nv(vcpu)) 107 + goto write; 108 + 109 + /* 110 + * The architecture is a bit crap (what a surprise): an EL2 guest 111 + * writing to CPTR_EL2 via CPACR_EL1 can't set any of TCPAC or TTA, 112 + * as they are RES0 in the guest's view. To work around it, trap the 113 + * sucker using the very same bit it can't set... 114 + */ 115 + if (vcpu_el2_e2h_is_set(vcpu) && is_hyp_ctxt(vcpu)) 116 + val |= CPTR_EL2_TCPAC; 117 + 118 + /* 119 + * Layer the guest hypervisor's trap configuration on top of our own if 120 + * we're in a nested context. 121 + */ 122 + if (is_hyp_ctxt(vcpu)) 123 + goto write; 124 + 125 + cptr = vcpu_sanitised_cptr_el2(vcpu); 126 + 127 + /* 128 + * Pay attention, there's some interesting detail here. 
129 + * 130 + * The CPTR_EL2.xEN fields are 2 bits wide, although there are only two 131 + * meaningful trap states when HCR_EL2.TGE = 0 (running a nested guest): 132 + * 133 + * - CPTR_EL2.xEN = x0, traps are enabled 134 + * - CPTR_EL2.xEN = x1, traps are disabled 135 + * 136 + * In other words, bit[0] determines if guest accesses trap or not. In 137 + * the interest of simplicity, clear the entire field if the guest 138 + * hypervisor has traps enabled to dispel any illusion of something more 139 + * complicated taking place. 140 + */ 141 + if (!(SYS_FIELD_GET(CPACR_EL1, FPEN, cptr) & BIT(0))) 142 + val &= ~CPACR_EL1_FPEN; 143 + if (!(SYS_FIELD_GET(CPACR_EL1, ZEN, cptr) & BIT(0))) 144 + val &= ~CPACR_EL1_ZEN; 145 + 146 + if (kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, S2POE, IMP)) 147 + val |= cptr & CPACR_EL1_E0POE; 148 + 149 + val |= cptr & CPTR_EL2_TCPAC; 150 + 151 + write: 152 + write_sysreg(val, cpacr_el1); 153 + } 154 + 155 + static inline void __activate_cptr_traps(struct kvm_vcpu *vcpu) 156 + { 157 + if (!guest_owns_fp_regs()) 158 + __activate_traps_fpsimd32(vcpu); 159 + 160 + if (has_vhe() || has_hvhe()) 161 + __activate_cptr_traps_vhe(vcpu); 162 + else 163 + __activate_cptr_traps_nvhe(vcpu); 164 + } 165 + 166 + static inline void __deactivate_cptr_traps_nvhe(struct kvm_vcpu *vcpu) 167 + { 168 + u64 val = CPTR_NVHE_EL2_RES1; 169 + 170 + if (!cpus_have_final_cap(ARM64_SVE)) 171 + val |= CPTR_EL2_TZ; 172 + if (!cpus_have_final_cap(ARM64_SME)) 173 + val |= CPTR_EL2_TSM; 174 + 175 + write_sysreg(val, cptr_el2); 176 + } 177 + 178 + static inline void __deactivate_cptr_traps_vhe(struct kvm_vcpu *vcpu) 179 + { 180 + u64 val = CPACR_EL1_FPEN; 181 + 182 + if (cpus_have_final_cap(ARM64_SVE)) 183 + val |= CPACR_EL1_ZEN; 184 + if (cpus_have_final_cap(ARM64_SME)) 185 + val |= CPACR_EL1_SMEN; 186 + 187 + write_sysreg(val, cpacr_el1); 188 + } 189 + 190 + static inline void __deactivate_cptr_traps(struct kvm_vcpu *vcpu) 191 + { 192 + if (has_vhe() || has_hvhe()) 193 + 
__deactivate_cptr_traps_vhe(vcpu); 194 + else 195 + __deactivate_cptr_traps_nvhe(vcpu); 196 + } 197 + 68 198 #define reg_to_fgt_masks(reg) \ 69 199 ({ \ 70 200 struct fgt_masks *m; \ ··· 616 486 */ 617 487 if (system_supports_sve()) { 618 488 __hyp_sve_save_host(); 619 - 620 - /* Re-enable SVE traps if not supported for the guest vcpu. */ 621 - if (!vcpu_has_sve(vcpu)) 622 - cpacr_clear_set(CPACR_EL1_ZEN, 0); 623 - 624 489 } else { 625 490 __fpsimd_save_state(host_data_ptr(host_ctxt.fp_regs)); 626 491 } ··· 666 541 /* Valid trap. Switch the context: */ 667 542 668 543 /* First disable enough traps to allow us to update the registers */ 669 - if (sve_guest || (is_protected_kvm_enabled() && system_supports_sve())) 670 - cpacr_clear_set(0, CPACR_EL1_FPEN | CPACR_EL1_ZEN); 671 - else 672 - cpacr_clear_set(0, CPACR_EL1_FPEN); 544 + __deactivate_cptr_traps(vcpu); 673 545 isb(); 674 546 675 547 /* Write out the host state if it's in the registers */ ··· 687 565 write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2); 688 566 689 567 *host_data_ptr(fp_owner) = FP_STATE_GUEST_OWNED; 568 + 569 + /* 570 + * Re-enable traps necessary for the current state of the guest, e.g. 571 + * those enabled by a guest hypervisor. The ERET to the guest will 572 + * provide the necessary context synchronization. 573 + */ 574 + __activate_cptr_traps(vcpu); 690 575 691 576 return true; 692 577 }
+4 -1
arch/arm64/kvm/hyp/nvhe/hyp-main.c
··· 69 69 if (!guest_owns_fp_regs()) 70 70 return; 71 71 72 - cpacr_clear_set(0, CPACR_EL1_FPEN | CPACR_EL1_ZEN); 72 + /* 73 + * Traps have been disabled by __deactivate_cptr_traps(), but there 74 + * hasn't necessarily been a context synchronization event yet. 75 + */ 73 76 isb(); 74 77 75 78 if (vcpu_has_sve(vcpu))
-59
arch/arm64/kvm/hyp/nvhe/switch.c
··· 47 47 48 48 extern void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc); 49 49 50 - static void __activate_cptr_traps(struct kvm_vcpu *vcpu) 51 - { 52 - u64 val = CPTR_EL2_TAM; /* Same bit irrespective of E2H */ 53 - 54 - if (!guest_owns_fp_regs()) 55 - __activate_traps_fpsimd32(vcpu); 56 - 57 - if (has_hvhe()) { 58 - val |= CPACR_EL1_TTA; 59 - 60 - if (guest_owns_fp_regs()) { 61 - val |= CPACR_EL1_FPEN; 62 - if (vcpu_has_sve(vcpu)) 63 - val |= CPACR_EL1_ZEN; 64 - } 65 - 66 - write_sysreg(val, cpacr_el1); 67 - } else { 68 - val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1; 69 - 70 - /* 71 - * Always trap SME since it's not supported in KVM. 72 - * TSM is RES1 if SME isn't implemented. 73 - */ 74 - val |= CPTR_EL2_TSM; 75 - 76 - if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs()) 77 - val |= CPTR_EL2_TZ; 78 - 79 - if (!guest_owns_fp_regs()) 80 - val |= CPTR_EL2_TFP; 81 - 82 - write_sysreg(val, cptr_el2); 83 - } 84 - } 85 - 86 - static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu) 87 - { 88 - if (has_hvhe()) { 89 - u64 val = CPACR_EL1_FPEN; 90 - 91 - if (cpus_have_final_cap(ARM64_SVE)) 92 - val |= CPACR_EL1_ZEN; 93 - if (cpus_have_final_cap(ARM64_SME)) 94 - val |= CPACR_EL1_SMEN; 95 - 96 - write_sysreg(val, cpacr_el1); 97 - } else { 98 - u64 val = CPTR_NVHE_EL2_RES1; 99 - 100 - if (!cpus_have_final_cap(ARM64_SVE)) 101 - val |= CPTR_EL2_TZ; 102 - if (!cpus_have_final_cap(ARM64_SME)) 103 - val |= CPTR_EL2_TSM; 104 - 105 - write_sysreg(val, cptr_el2); 106 - } 107 - } 108 - 109 50 static void __activate_traps(struct kvm_vcpu *vcpu) 110 51 { 111 52 ___activate_traps(vcpu, vcpu->arch.hcr_el2);
+14 -93
arch/arm64/kvm/hyp/vhe/switch.c
··· 90 90 return hcr | (guest_hcr & ~NV_HCR_GUEST_EXCLUDE); 91 91 } 92 92 93 - static void __activate_cptr_traps(struct kvm_vcpu *vcpu) 94 - { 95 - u64 cptr; 96 - 97 - /* 98 - * With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to 99 - * CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2, 100 - * except for some missing controls, such as TAM. 101 - * In this case, CPTR_EL2.TAM has the same position with or without 102 - * VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM 103 - * shift value for trapping the AMU accesses. 104 - */ 105 - u64 val = CPACR_EL1_TTA | CPTR_EL2_TAM; 106 - 107 - if (guest_owns_fp_regs()) { 108 - val |= CPACR_EL1_FPEN; 109 - if (vcpu_has_sve(vcpu)) 110 - val |= CPACR_EL1_ZEN; 111 - } else { 112 - __activate_traps_fpsimd32(vcpu); 113 - } 114 - 115 - if (!vcpu_has_nv(vcpu)) 116 - goto write; 117 - 118 - /* 119 - * The architecture is a bit crap (what a surprise): an EL2 guest 120 - * writing to CPTR_EL2 via CPACR_EL1 can't set any of TCPAC or TTA, 121 - * as they are RES0 in the guest's view. To work around it, trap the 122 - * sucker using the very same bit it can't set... 123 - */ 124 - if (vcpu_el2_e2h_is_set(vcpu) && is_hyp_ctxt(vcpu)) 125 - val |= CPTR_EL2_TCPAC; 126 - 127 - /* 128 - * Layer the guest hypervisor's trap configuration on top of our own if 129 - * we're in a nested context. 130 - */ 131 - if (is_hyp_ctxt(vcpu)) 132 - goto write; 133 - 134 - cptr = vcpu_sanitised_cptr_el2(vcpu); 135 - 136 - /* 137 - * Pay attention, there's some interesting detail here. 138 - * 139 - * The CPTR_EL2.xEN fields are 2 bits wide, although there are only two 140 - * meaningful trap states when HCR_EL2.TGE = 0 (running a nested guest): 141 - * 142 - * - CPTR_EL2.xEN = x0, traps are enabled 143 - * - CPTR_EL2.xEN = x1, traps are disabled 144 - * 145 - * In other words, bit[0] determines if guest accesses trap or not. 
In 146 - * the interest of simplicity, clear the entire field if the guest 147 - * hypervisor has traps enabled to dispel any illusion of something more 148 - * complicated taking place. 149 - */ 150 - if (!(SYS_FIELD_GET(CPACR_EL1, FPEN, cptr) & BIT(0))) 151 - val &= ~CPACR_EL1_FPEN; 152 - if (!(SYS_FIELD_GET(CPACR_EL1, ZEN, cptr) & BIT(0))) 153 - val &= ~CPACR_EL1_ZEN; 154 - 155 - if (kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, S2POE, IMP)) 156 - val |= cptr & CPACR_EL1_E0POE; 157 - 158 - val |= cptr & CPTR_EL2_TCPAC; 159 - 160 - write: 161 - write_sysreg(val, cpacr_el1); 162 - } 163 - 164 - static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu) 165 - { 166 - u64 val = CPACR_EL1_FPEN | CPACR_EL1_ZEN_EL1EN; 167 - 168 - if (cpus_have_final_cap(ARM64_SME)) 169 - val |= CPACR_EL1_SMEN_EL1EN; 170 - 171 - write_sysreg(val, cpacr_el1); 172 - } 173 - 174 93 static void __activate_traps(struct kvm_vcpu *vcpu) 175 94 { 176 95 u64 val; ··· 558 639 host_ctxt = host_data_ptr(host_ctxt); 559 640 guest_ctxt = &vcpu->arch.ctxt; 560 641 561 - sysreg_save_host_state_vhe(host_ctxt); 562 - 563 642 fpsimd_lazy_switch_to_guest(vcpu); 643 + 644 + sysreg_save_host_state_vhe(host_ctxt); 564 645 565 646 /* 566 647 * Note that ARM erratum 1165522 requires us to configure both stage 1 ··· 586 667 587 668 __deactivate_traps(vcpu); 588 669 589 - fpsimd_lazy_switch_to_host(vcpu); 590 - 591 670 sysreg_restore_host_state_vhe(host_ctxt); 671 + 672 + __debug_switch_to_host(vcpu); 673 + 674 + /* 675 + * Ensure that all system register writes above have taken effect 676 + * before returning to the host. In VHE mode, CPTR traps for 677 + * FPSIMD/SVE/SME also apply to EL2, so FPSIMD/SVE/SME state must be 678 + * manipulated after the ISB. 
679 + */ 680 + isb(); 681 + 682 + fpsimd_lazy_switch_to_host(vcpu); 592 683 593 684 if (guest_owns_fp_regs()) 594 685 __fpsimd_save_fpexc32(vcpu); 595 - 596 - __debug_switch_to_host(vcpu); 597 686 598 687 return exit_code; 599 688 } ··· 632 705 */ 633 706 local_daif_restore(DAIF_PROCCTX_NOIRQ); 634 707 635 - /* 636 - * When we exit from the guest we change a number of CPU configuration 637 - * parameters, such as traps. We rely on the isb() in kvm_call_hyp*() 638 - * to make sure these changes take effect before running the host or 639 - * additional guests. 640 - */ 641 708 return ret; 642 709 } 643 710
+42 -39
arch/arm64/kvm/vgic/vgic-v3-nested.c
··· 36 36 37 37 static DEFINE_PER_CPU(struct shadow_if, shadow_if); 38 38 39 + static int lr_map_idx_to_shadow_idx(struct shadow_if *shadow_if, int idx) 40 + { 41 + return hweight16(shadow_if->lr_map & (BIT(idx) - 1)); 42 + } 43 + 39 44 /* 40 45 * Nesting GICv3 support 41 46 * ··· 214 209 return reg; 215 210 } 216 211 212 + static u64 translate_lr_pintid(struct kvm_vcpu *vcpu, u64 lr) 213 + { 214 + struct vgic_irq *irq; 215 + 216 + if (!(lr & ICH_LR_HW)) 217 + return lr; 218 + 219 + /* We have the HW bit set, check for validity of pINTID */ 220 + irq = vgic_get_vcpu_irq(vcpu, FIELD_GET(ICH_LR_PHYS_ID_MASK, lr)); 221 + /* If there was no real mapping, nuke the HW bit */ 222 + if (!irq || !irq->hw || irq->intid > VGIC_MAX_SPI) 223 + lr &= ~ICH_LR_HW; 224 + 225 + /* Translate the virtual mapping to the real one, even if invalid */ 226 + if (irq) { 227 + lr &= ~ICH_LR_PHYS_ID_MASK; 228 + lr |= FIELD_PREP(ICH_LR_PHYS_ID_MASK, (u64)irq->hwintid); 229 + vgic_put_irq(vcpu->kvm, irq); 230 + } 231 + 232 + return lr; 233 + } 234 + 217 235 /* 218 236 * For LRs which have HW bit set such as timer interrupts, we modify them to 219 237 * have the host hardware interrupt number instead of the virtual one programmed ··· 245 217 static void vgic_v3_create_shadow_lr(struct kvm_vcpu *vcpu, 246 218 struct vgic_v3_cpu_if *s_cpu_if) 247 219 { 248 - unsigned long lr_map = 0; 249 - int index = 0; 220 + struct shadow_if *shadow_if; 221 + 222 + shadow_if = container_of(s_cpu_if, struct shadow_if, cpuif); 223 + shadow_if->lr_map = 0; 250 224 251 225 for (int i = 0; i < kvm_vgic_global_state.nr_lr; i++) { 252 226 u64 lr = __vcpu_sys_reg(vcpu, ICH_LRN(i)); 253 - struct vgic_irq *irq; 254 227 255 228 if (!(lr & ICH_LR_STATE)) 256 - lr = 0; 229 + continue; 257 230 258 - if (!(lr & ICH_LR_HW)) 259 - goto next; 231 + lr = translate_lr_pintid(vcpu, lr); 260 232 261 - /* We have the HW bit set, check for validity of pINTID */ 262 - irq = vgic_get_vcpu_irq(vcpu, FIELD_GET(ICH_LR_PHYS_ID_MASK, lr)); 
263 - if (!irq || !irq->hw || irq->intid > VGIC_MAX_SPI ) { 264 - /* There was no real mapping, so nuke the HW bit */ 265 - lr &= ~ICH_LR_HW; 266 - if (irq) 267 - vgic_put_irq(vcpu->kvm, irq); 268 - goto next; 269 - } 270 - 271 - /* Translate the virtual mapping to the real one */ 272 - lr &= ~ICH_LR_PHYS_ID_MASK; 273 - lr |= FIELD_PREP(ICH_LR_PHYS_ID_MASK, (u64)irq->hwintid); 274 - 275 - vgic_put_irq(vcpu->kvm, irq); 276 - 277 - next: 278 - s_cpu_if->vgic_lr[index] = lr; 279 - if (lr) { 280 - lr_map |= BIT(i); 281 - index++; 282 - } 233 + s_cpu_if->vgic_lr[hweight16(shadow_if->lr_map)] = lr; 234 + shadow_if->lr_map |= BIT(i); 283 235 } 284 236 285 - container_of(s_cpu_if, struct shadow_if, cpuif)->lr_map = lr_map; 286 - s_cpu_if->used_lrs = index; 237 + s_cpu_if->used_lrs = hweight16(shadow_if->lr_map); 287 238 } 288 239 289 240 void vgic_v3_sync_nested(struct kvm_vcpu *vcpu) 290 241 { 291 242 struct shadow_if *shadow_if = get_shadow_if(); 292 - int i, index = 0; 243 + int i; 293 244 294 245 for_each_set_bit(i, &shadow_if->lr_map, kvm_vgic_global_state.nr_lr) { 295 246 u64 lr = __vcpu_sys_reg(vcpu, ICH_LRN(i)); 296 247 struct vgic_irq *irq; 297 248 298 249 if (!(lr & ICH_LR_HW) || !(lr & ICH_LR_STATE)) 299 - goto next; 250 + continue; 300 251 301 252 /* 302 253 * If we had a HW lr programmed by the guest hypervisor, we ··· 284 277 */ 285 278 irq = vgic_get_vcpu_irq(vcpu, FIELD_GET(ICH_LR_PHYS_ID_MASK, lr)); 286 279 if (WARN_ON(!irq)) /* Shouldn't happen as we check on load */ 287 - goto next; 280 + continue; 288 281 289 - lr = __gic_v3_get_lr(index); 282 + lr = __gic_v3_get_lr(lr_map_idx_to_shadow_idx(shadow_if, i)); 290 283 if (!(lr & ICH_LR_STATE)) 291 284 irq->active = false; 292 285 293 286 vgic_put_irq(vcpu->kvm, irq); 294 - next: 295 - index++; 296 287 } 297 288 } 298 289 ··· 373 368 val = __vcpu_sys_reg(vcpu, ICH_LRN(i)); 374 369 375 370 val &= ~ICH_LR_STATE; 376 - val |= s_cpu_if->vgic_lr[i] & ICH_LR_STATE; 371 + val |= 
s_cpu_if->vgic_lr[lr_map_idx_to_shadow_idx(shadow_if, i)] & ICH_LR_STATE; 377 372 378 373 __vcpu_assign_sys_reg(vcpu, ICH_LRN(i), val); 379 - s_cpu_if->vgic_lr[i] = 0; 380 374 } 381 375 382 - shadow_if->lr_map = 0; 383 376 vcpu->arch.vgic_cpu.vgic_v3.used_lrs = 0; 384 377 } 385 378
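The hunk above replaces a running `index` counter with `lr_map_idx_to_shadow_idx()`: the shadow slot for hardware LR `i` is the popcount of the `lr_map` bits below `i`, i.e. how many lower-numbered LRs were already compacted in. A minimal standalone sketch of that mapping (plain C stand-in for `hweight16()`, not kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Shadow-LR slot for hardware LR `idx`: count the occupied LRs below it.
 * Mirrors hweight16(shadow_if->lr_map & (BIT(idx) - 1)) from the patch. */
static int lr_map_idx_to_shadow_idx(uint16_t lr_map, int idx)
{
    uint16_t below = lr_map & (uint16_t)((1u << idx) - 1);
    int count = 0;

    for (; below; below &= (uint16_t)(below - 1))  /* clear lowest set bit */
        count++;
    return count;
}
```

With LRs 1, 4 and 9 in use, they land in shadow slots 0, 1 and 2 respectively, which is why the sync path can index `__gic_v3_get_lr()` by this value instead of carrying a separate counter.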
+2 -2
arch/arm64/lib/crypto/poly1305-glue.c
··· 38 38 unsigned int todo = min_t(unsigned int, len, SZ_4K); 39 39 40 40 kernel_neon_begin(); 41 - poly1305_blocks_neon(state, src, todo, 1); 41 + poly1305_blocks_neon(state, src, todo, padbit); 42 42 kernel_neon_end(); 43 43 44 44 len -= todo; 45 45 src += todo; 46 46 } while (len); 47 47 } else 48 - poly1305_blocks(state, src, len, 1); 48 + poly1305_blocks(state, src, len, padbit); 49 49 } 50 50 EXPORT_SYMBOL_GPL(poly1305_blocks_arch); 51 51
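The poly1305 fix is small but easy to misread: the NEON path splits the input into at-most-4 KiB chunks (so `kernel_neon_begin()` sections stay short), and each chunk must carry the caller's `padbit` — the old code passed a literal `1`. A sketch of just the chunking arithmetic, with the chunk sizes recorded instead of hashed (hypothetical helper, not the kernel function):

```c
#include <assert.h>

#define SZ_4K 4096

/* Record the chunk lengths the do/while loop above would produce.
 * Every chunk would receive the same caller-supplied padbit. */
static int chunk_lens(unsigned int len, unsigned int *out)
{
    int n = 0;

    do {
        unsigned int todo = len < SZ_4K ? len : SZ_4K;

        out[n++] = todo;  /* poly1305_blocks_neon(state, src, todo, padbit) */
        len -= todo;
    } while (len);
    return n;
}
```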
+2 -1
arch/arm64/mm/mmu.c
··· 1305 1305 next = addr; 1306 1306 end = addr + PUD_SIZE; 1307 1307 do { 1308 - pmd_free_pte_page(pmdp, next); 1308 + if (pmd_present(pmdp_get(pmdp))) 1309 + pmd_free_pte_page(pmdp, next); 1309 1310 } while (pmdp++, next += PMD_SIZE, next != end); 1310 1311 1311 1312 pud_clear(pudp);
+4 -4
arch/loongarch/include/asm/addrspace.h
··· 18 18 /* 19 19 * This gives the physical RAM offset. 20 20 */ 21 - #ifndef __ASSEMBLY__ 21 + #ifndef __ASSEMBLER__ 22 22 #ifndef PHYS_OFFSET 23 23 #define PHYS_OFFSET _UL(0) 24 24 #endif 25 25 extern unsigned long vm_map_base; 26 - #endif /* __ASSEMBLY__ */ 26 + #endif /* __ASSEMBLER__ */ 27 27 28 28 #ifndef IO_BASE 29 29 #define IO_BASE CSR_DMW0_BASE ··· 66 66 #define FIXADDR_TOP ((unsigned long)(long)(int)0xfffe0000) 67 67 #endif 68 68 69 - #ifdef __ASSEMBLY__ 69 + #ifdef __ASSEMBLER__ 70 70 #define _ATYPE_ 71 71 #define _ATYPE32_ 72 72 #define _ATYPE64_ ··· 85 85 /* 86 86 * 32/64-bit LoongArch address spaces 87 87 */ 88 - #ifdef __ASSEMBLY__ 88 + #ifdef __ASSEMBLER__ 89 89 #define _ACAST32_ 90 90 #define _ACAST64_ 91 91 #else
+2 -2
arch/loongarch/include/asm/alternative-asm.h
··· 2 2 #ifndef _ASM_ALTERNATIVE_ASM_H 3 3 #define _ASM_ALTERNATIVE_ASM_H 4 4 5 - #ifdef __ASSEMBLY__ 5 + #ifdef __ASSEMBLER__ 6 6 7 7 #include <asm/asm.h> 8 8 ··· 77 77 .previous 78 78 .endm 79 79 80 - #endif /* __ASSEMBLY__ */ 80 + #endif /* __ASSEMBLER__ */ 81 81 82 82 #endif /* _ASM_ALTERNATIVE_ASM_H */
+2 -2
arch/loongarch/include/asm/alternative.h
··· 2 2 #ifndef _ASM_ALTERNATIVE_H 3 3 #define _ASM_ALTERNATIVE_H 4 4 5 - #ifndef __ASSEMBLY__ 5 + #ifndef __ASSEMBLER__ 6 6 7 7 #include <linux/types.h> 8 8 #include <linux/stddef.h> ··· 106 106 #define alternative_2(oldinstr, newinstr1, feature1, newinstr2, feature2) \ 107 107 (asm volatile(ALTERNATIVE_2(oldinstr, newinstr1, feature1, newinstr2, feature2) ::: "memory")) 108 108 109 - #endif /* __ASSEMBLY__ */ 109 + #endif /* __ASSEMBLER__ */ 110 110 111 111 #endif /* _ASM_ALTERNATIVE_H */
+3 -3
arch/loongarch/include/asm/asm-extable.h
··· 7 7 #define EX_TYPE_UACCESS_ERR_ZERO 2 8 8 #define EX_TYPE_BPF 3 9 9 10 - #ifdef __ASSEMBLY__ 10 + #ifdef __ASSEMBLER__ 11 11 12 12 #define __ASM_EXTABLE_RAW(insn, fixup, type, data) \ 13 13 .pushsection __ex_table, "a"; \ ··· 22 22 __ASM_EXTABLE_RAW(\insn, \fixup, EX_TYPE_FIXUP, 0) 23 23 .endm 24 24 25 - #else /* __ASSEMBLY__ */ 25 + #else /* __ASSEMBLER__ */ 26 26 27 27 #include <linux/bits.h> 28 28 #include <linux/stringify.h> ··· 60 60 #define _ASM_EXTABLE_UACCESS_ERR(insn, fixup, err) \ 61 61 _ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, err, zero) 62 62 63 - #endif /* __ASSEMBLY__ */ 63 + #endif /* __ASSEMBLER__ */ 64 64 65 65 #endif /* __ASM_ASM_EXTABLE_H */
+4 -4
arch/loongarch/include/asm/asm.h
··· 110 110 #define LONG_SRA srai.w 111 111 #define LONG_SRAV sra.w 112 112 113 - #ifdef __ASSEMBLY__ 113 + #ifdef __ASSEMBLER__ 114 114 #define LONG .word 115 115 #endif 116 116 #define LONGSIZE 4 ··· 131 131 #define LONG_SRA srai.d 132 132 #define LONG_SRAV sra.d 133 133 134 - #ifdef __ASSEMBLY__ 134 + #ifdef __ASSEMBLER__ 135 135 #define LONG .dword 136 136 #endif 137 137 #define LONGSIZE 8 ··· 158 158 159 159 #define PTR_SCALESHIFT 2 160 160 161 - #ifdef __ASSEMBLY__ 161 + #ifdef __ASSEMBLER__ 162 162 #define PTR .word 163 163 #endif 164 164 #define PTRSIZE 4 ··· 181 181 182 182 #define PTR_SCALESHIFT 3 183 183 184 - #ifdef __ASSEMBLY__ 184 + #ifdef __ASSEMBLER__ 185 185 #define PTR .dword 186 186 #endif 187 187 #define PTRSIZE 8
+2 -2
arch/loongarch/include/asm/cpu.h
··· 46 46 47 47 #define PRID_PRODUCT_MASK 0x0fff 48 48 49 - #if !defined(__ASSEMBLY__) 49 + #if !defined(__ASSEMBLER__) 50 50 51 51 enum cpu_type_enum { 52 52 CPU_UNKNOWN, ··· 55 55 CPU_LAST 56 56 }; 57 57 58 - #endif /* !__ASSEMBLY */ 58 + #endif /* !__ASSEMBLER__ */ 59 59 60 60 /* 61 61 * ISA Level encodings
+2 -2
arch/loongarch/include/asm/ftrace.h
··· 14 14 15 15 #define MCOUNT_INSN_SIZE 4 /* sizeof mcount call */ 16 16 17 - #ifndef __ASSEMBLY__ 17 + #ifndef __ASSEMBLER__ 18 18 19 19 #ifndef CONFIG_DYNAMIC_FTRACE 20 20 ··· 84 84 85 85 #endif 86 86 87 - #endif /* __ASSEMBLY__ */ 87 + #endif /* __ASSEMBLER__ */ 88 88 89 89 #endif /* CONFIG_FUNCTION_TRACER */ 90 90
+3 -3
arch/loongarch/include/asm/gpr-num.h
··· 2 2 #ifndef __ASM_GPR_NUM_H 3 3 #define __ASM_GPR_NUM_H 4 4 5 - #ifdef __ASSEMBLY__ 5 + #ifdef __ASSEMBLER__ 6 6 7 7 .equ .L__gpr_num_zero, 0 8 8 .irp num,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31 ··· 25 25 .equ .L__gpr_num_$s\num, 23 + \num 26 26 .endr 27 27 28 - #else /* __ASSEMBLY__ */ 28 + #else /* __ASSEMBLER__ */ 29 29 30 30 #define __DEFINE_ASM_GPR_NUMS \ 31 31 " .equ .L__gpr_num_zero, 0\n" \ ··· 47 47 " .equ .L__gpr_num_$s\\num, 23 + \\num\n" \ 48 48 " .endr\n" \ 49 49 50 - #endif /* __ASSEMBLY__ */ 50 + #endif /* __ASSEMBLER__ */ 51 51 52 52 #endif /* __ASM_GPR_NUM_H */
+2 -2
arch/loongarch/include/asm/irqflags.h
··· 5 5 #ifndef _ASM_IRQFLAGS_H 6 6 #define _ASM_IRQFLAGS_H 7 7 8 - #ifndef __ASSEMBLY__ 8 + #ifndef __ASSEMBLER__ 9 9 10 10 #include <linux/compiler.h> 11 11 #include <linux/stringify.h> ··· 80 80 return arch_irqs_disabled_flags(arch_local_save_flags()); 81 81 } 82 82 83 - #endif /* #ifndef __ASSEMBLY__ */ 83 + #endif /* #ifndef __ASSEMBLER__ */ 84 84 85 85 #endif /* _ASM_IRQFLAGS_H */
+2 -2
arch/loongarch/include/asm/jump_label.h
··· 7 7 #ifndef __ASM_JUMP_LABEL_H 8 8 #define __ASM_JUMP_LABEL_H 9 9 10 - #ifndef __ASSEMBLY__ 10 + #ifndef __ASSEMBLER__ 11 11 12 12 #include <linux/types.h> 13 13 ··· 50 50 return true; 51 51 } 52 52 53 - #endif /* __ASSEMBLY__ */ 53 + #endif /* __ASSEMBLER__ */ 54 54 #endif /* __ASM_JUMP_LABEL_H */
+1 -1
arch/loongarch/include/asm/kasan.h
··· 2 2 #ifndef __ASM_KASAN_H 3 3 #define __ASM_KASAN_H 4 4 5 - #ifndef __ASSEMBLY__ 5 + #ifndef __ASSEMBLER__ 6 6 7 7 #include <linux/linkage.h> 8 8 #include <linux/mmzone.h>
+8 -8
arch/loongarch/include/asm/loongarch.h
··· 9 9 #include <linux/linkage.h> 10 10 #include <linux/types.h> 11 11 12 - #ifndef __ASSEMBLY__ 12 + #ifndef __ASSEMBLER__ 13 13 #include <larchintrin.h> 14 14 15 15 /* CPUCFG */ 16 16 #define read_cpucfg(reg) __cpucfg(reg) 17 17 18 - #endif /* !__ASSEMBLY__ */ 18 + #endif /* !__ASSEMBLER__ */ 19 19 20 - #ifdef __ASSEMBLY__ 20 + #ifdef __ASSEMBLER__ 21 21 22 22 /* LoongArch Registers */ 23 23 #define REG_ZERO 0x0 ··· 53 53 #define REG_S7 0x1e 54 54 #define REG_S8 0x1f 55 55 56 - #endif /* __ASSEMBLY__ */ 56 + #endif /* __ASSEMBLER__ */ 57 57 58 58 /* Bit fields for CPUCFG registers */ 59 59 #define LOONGARCH_CPUCFG0 0x0 ··· 171 171 * SW emulation for KVM hypervirsor, see arch/loongarch/include/uapi/asm/kvm_para.h 172 172 */ 173 173 174 - #ifndef __ASSEMBLY__ 174 + #ifndef __ASSEMBLER__ 175 175 176 176 /* CSR */ 177 177 #define csr_read32(reg) __csrrd_w(reg) ··· 187 187 #define iocsr_write32(val, reg) __iocsrwr_w(val, reg) 188 188 #define iocsr_write64(val, reg) __iocsrwr_d(val, reg) 189 189 190 - #endif /* !__ASSEMBLY__ */ 190 + #endif /* !__ASSEMBLER__ */ 191 191 192 192 /* CSR register number */ 193 193 ··· 1195 1195 #define LOONGARCH_IOCSR_EXTIOI_ROUTE_BASE 0x1c00 1196 1196 #define IOCSR_EXTIOI_VECTOR_NUM 256 1197 1197 1198 - #ifndef __ASSEMBLY__ 1198 + #ifndef __ASSEMBLER__ 1199 1199 1200 1200 static __always_inline u64 drdtime(void) 1201 1201 { ··· 1357 1357 #define clear_csr_estat(val) \ 1358 1358 csr_xchg32(~(val), val, LOONGARCH_CSR_ESTAT) 1359 1359 1360 - #endif /* __ASSEMBLY__ */ 1360 + #endif /* __ASSEMBLER__ */ 1361 1361 1362 1362 /* Generic EntryLo bit definitions */ 1363 1363 #define ENTRYLO_V (_ULCAST_(1) << 0)
+2 -2
arch/loongarch/include/asm/orc_types.h
··· 34 34 #define ORC_TYPE_REGS 3 35 35 #define ORC_TYPE_REGS_PARTIAL 4 36 36 37 - #ifndef __ASSEMBLY__ 37 + #ifndef __ASSEMBLER__ 38 38 /* 39 39 * This struct is more or less a vastly simplified version of the DWARF Call 40 40 * Frame Information standard. It contains only the necessary parts of DWARF ··· 53 53 unsigned int type:3; 54 54 unsigned int signal:1; 55 55 }; 56 - #endif /* __ASSEMBLY__ */ 56 + #endif /* __ASSEMBLER__ */ 57 57 58 58 #endif /* _ORC_TYPES_H */
+2 -2
arch/loongarch/include/asm/page.h
··· 15 15 #define HPAGE_MASK (~(HPAGE_SIZE - 1)) 16 16 #define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT) 17 17 18 - #ifndef __ASSEMBLY__ 18 + #ifndef __ASSEMBLER__ 19 19 20 20 #include <linux/kernel.h> 21 21 #include <linux/pfn.h> ··· 110 110 #include <asm-generic/memory_model.h> 111 111 #include <asm-generic/getorder.h> 112 112 113 - #endif /* !__ASSEMBLY__ */ 113 + #endif /* !__ASSEMBLER__ */ 114 114 115 115 #endif /* _ASM_PAGE_H */
+2 -2
arch/loongarch/include/asm/pgtable-bits.h
··· 92 92 #define PAGE_KERNEL_WUC __pgprot(_PAGE_PRESENT | __READABLE | __WRITEABLE | \ 93 93 _PAGE_GLOBAL | _PAGE_KERN | _CACHE_WUC) 94 94 95 - #ifndef __ASSEMBLY__ 95 + #ifndef __ASSEMBLER__ 96 96 97 97 #define _PAGE_IOREMAP pgprot_val(PAGE_KERNEL_SUC) 98 98 ··· 127 127 return __pgprot(prot); 128 128 } 129 129 130 - #endif /* !__ASSEMBLY__ */ 130 + #endif /* !__ASSEMBLER__ */ 131 131 132 132 #endif /* _ASM_PGTABLE_BITS_H */
+2 -2
arch/loongarch/include/asm/pgtable.h
··· 55 55 56 56 #define USER_PTRS_PER_PGD ((TASK_SIZE64 / PGDIR_SIZE)?(TASK_SIZE64 / PGDIR_SIZE):1) 57 57 58 - #ifndef __ASSEMBLY__ 58 + #ifndef __ASSEMBLER__ 59 59 60 60 #include <linux/mm_types.h> 61 61 #include <linux/mmzone.h> ··· 618 618 #define HAVE_ARCH_UNMAPPED_AREA 619 619 #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN 620 620 621 - #endif /* !__ASSEMBLY__ */ 621 + #endif /* !__ASSEMBLER__ */ 622 622 623 623 #endif /* _ASM_PGTABLE_H */
+1 -1
arch/loongarch/include/asm/prefetch.h
··· 8 8 #define Pref_Load 0 9 9 #define Pref_Store 8 10 10 11 - #ifdef __ASSEMBLY__ 11 + #ifdef __ASSEMBLER__ 12 12 13 13 .macro __pref hint addr 14 14 #ifdef CONFIG_CPU_HAS_PREFETCH
+1 -1
arch/loongarch/include/asm/smp.h
··· 39 39 void loongson_cpu_die(unsigned int cpu); 40 40 #endif 41 41 42 - static inline void plat_smp_setup(void) 42 + static inline void __init plat_smp_setup(void) 43 43 { 44 44 loongson_smp_setup(); 45 45 }
+2 -2
arch/loongarch/include/asm/thread_info.h
··· 10 10 11 11 #ifdef __KERNEL__ 12 12 13 - #ifndef __ASSEMBLY__ 13 + #ifndef __ASSEMBLER__ 14 14 15 15 #include <asm/processor.h> 16 16 ··· 53 53 54 54 register unsigned long current_stack_pointer __asm__("$sp"); 55 55 56 - #endif /* !__ASSEMBLY__ */ 56 + #endif /* !__ASSEMBLER__ */ 57 57 58 58 /* thread information allocation */ 59 59 #define THREAD_SIZE SZ_16K
+1 -1
arch/loongarch/include/asm/types.h
··· 8 8 #include <asm-generic/int-ll64.h> 9 9 #include <uapi/asm/types.h> 10 10 11 - #ifdef __ASSEMBLY__ 11 + #ifdef __ASSEMBLER__ 12 12 #define _ULCAST_ 13 13 #define _U64CAST_ 14 14 #else
+3 -3
arch/loongarch/include/asm/unwind_hints.h
··· 5 5 #include <linux/objtool.h> 6 6 #include <asm/orc_types.h> 7 7 8 - #ifdef __ASSEMBLY__ 8 + #ifdef __ASSEMBLER__ 9 9 10 10 .macro UNWIND_HINT_UNDEFINED 11 11 UNWIND_HINT type=UNWIND_HINT_TYPE_UNDEFINED ··· 23 23 UNWIND_HINT sp_reg=ORC_REG_SP type=UNWIND_HINT_TYPE_CALL 24 24 .endm 25 25 26 - #else /* !__ASSEMBLY__ */ 26 + #else /* !__ASSEMBLER__ */ 27 27 28 28 #define UNWIND_HINT_SAVE \ 29 29 UNWIND_HINT(UNWIND_HINT_TYPE_SAVE, 0, 0, 0) ··· 31 31 #define UNWIND_HINT_RESTORE \ 32 32 UNWIND_HINT(UNWIND_HINT_TYPE_RESTORE, 0, 0, 0) 33 33 34 - #endif /* !__ASSEMBLY__ */ 34 + #endif /* !__ASSEMBLER__ */ 35 35 36 36 #endif /* _ASM_LOONGARCH_UNWIND_HINTS_H */
+2 -2
arch/loongarch/include/asm/vdso/arch_data.h
··· 7 7 #ifndef _VDSO_ARCH_DATA_H 8 8 #define _VDSO_ARCH_DATA_H 9 9 10 - #ifndef __ASSEMBLY__ 10 + #ifndef __ASSEMBLER__ 11 11 12 12 #include <asm/asm.h> 13 13 #include <asm/vdso.h> ··· 20 20 struct vdso_pcpu_data pdata[NR_CPUS]; 21 21 }; 22 22 23 - #endif /* __ASSEMBLY__ */ 23 + #endif /* __ASSEMBLER__ */ 24 24 25 25 #endif
+2 -2
arch/loongarch/include/asm/vdso/getrandom.h
··· 5 5 #ifndef __ASM_VDSO_GETRANDOM_H 6 6 #define __ASM_VDSO_GETRANDOM_H 7 7 8 - #ifndef __ASSEMBLY__ 8 + #ifndef __ASSEMBLER__ 9 9 10 10 #include <asm/unistd.h> 11 11 #include <asm/vdso/vdso.h> ··· 28 28 return ret; 29 29 } 30 30 31 - #endif /* !__ASSEMBLY__ */ 31 + #endif /* !__ASSEMBLER__ */ 32 32 33 33 #endif /* __ASM_VDSO_GETRANDOM_H */
+2 -2
arch/loongarch/include/asm/vdso/gettimeofday.h
··· 7 7 #ifndef __ASM_VDSO_GETTIMEOFDAY_H 8 8 #define __ASM_VDSO_GETTIMEOFDAY_H 9 9 10 - #ifndef __ASSEMBLY__ 10 + #ifndef __ASSEMBLER__ 11 11 12 12 #include <asm/unistd.h> 13 13 #include <asm/vdso/vdso.h> ··· 89 89 } 90 90 #define __arch_vdso_hres_capable loongarch_vdso_hres_capable 91 91 92 - #endif /* !__ASSEMBLY__ */ 92 + #endif /* !__ASSEMBLER__ */ 93 93 94 94 #endif /* __ASM_VDSO_GETTIMEOFDAY_H */
+2 -2
arch/loongarch/include/asm/vdso/processor.h
··· 5 5 #ifndef __ASM_VDSO_PROCESSOR_H 6 6 #define __ASM_VDSO_PROCESSOR_H 7 7 8 - #ifndef __ASSEMBLY__ 8 + #ifndef __ASSEMBLER__ 9 9 10 10 #define cpu_relax() barrier() 11 11 12 - #endif /* __ASSEMBLY__ */ 12 + #endif /* __ASSEMBLER__ */ 13 13 14 14 #endif /* __ASM_VDSO_PROCESSOR_H */
+2 -2
arch/loongarch/include/asm/vdso/vdso.h
··· 7 7 #ifndef _ASM_VDSO_VDSO_H 8 8 #define _ASM_VDSO_VDSO_H 9 9 10 - #ifndef __ASSEMBLY__ 10 + #ifndef __ASSEMBLER__ 11 11 12 12 #include <asm/asm.h> 13 13 #include <asm/page.h> ··· 16 16 17 17 #define VVAR_SIZE (VDSO_NR_PAGES << PAGE_SHIFT) 18 18 19 - #endif /* __ASSEMBLY__ */ 19 + #endif /* __ASSEMBLER__ */ 20 20 21 21 #endif
+2 -2
arch/loongarch/include/asm/vdso/vsyscall.h
··· 2 2 #ifndef __ASM_VDSO_VSYSCALL_H 3 3 #define __ASM_VDSO_VSYSCALL_H 4 4 5 - #ifndef __ASSEMBLY__ 5 + #ifndef __ASSEMBLER__ 6 6 7 7 #include <vdso/datapage.h> 8 8 9 9 /* The asm-generic header needs to be included after the definitions above */ 10 10 #include <asm-generic/vdso/vsyscall.h> 11 11 12 - #endif /* !__ASSEMBLY__ */ 12 + #endif /* !__ASSEMBLER__ */ 13 13 14 14 #endif /* __ASM_VDSO_VSYSCALL_H */
+1
arch/loongarch/kernel/acpi.c
··· 10 10 #include <linux/init.h> 11 11 #include <linux/acpi.h> 12 12 #include <linux/efi-bgrt.h> 13 + #include <linux/export.h> 13 14 #include <linux/irq.h> 14 15 #include <linux/irqdomain.h> 15 16 #include <linux/memblock.h>
+1
arch/loongarch/kernel/alternative.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 + #include <linux/export.h> 2 3 #include <linux/mm.h> 3 4 #include <linux/module.h> 4 5 #include <asm/alternative.h>
+12
arch/loongarch/kernel/efi.c
··· 144 144 if (efi_memmap_init_early(&data) < 0) 145 145 panic("Unable to map EFI memory map.\n"); 146 146 147 + /* 148 + * Reserve the physical memory region occupied by the EFI 149 + * memory map table (header + descriptors). This is crucial 150 + * for kdump, as the kdump kernel relies on this original 151 + * memmap passed by the bootloader. Without reservation, 152 + * this region could be overwritten by the primary kernel. 153 + * Also, set the EFI_PRESERVE_BS_REGIONS flag to indicate that 154 + * critical boot services code/data regions like this are preserved. 155 + */ 156 + memblock_reserve((phys_addr_t)boot_memmap, sizeof(*tbl) + data.size); 157 + set_bit(EFI_PRESERVE_BS_REGIONS, &efi.flags); 158 + 147 159 early_memunmap(tbl, sizeof(*tbl)); 148 160 } 149 161
-1
arch/loongarch/kernel/elf.c
··· 6 6 7 7 #include <linux/binfmts.h> 8 8 #include <linux/elf.h> 9 - #include <linux/export.h> 10 9 #include <linux/sched.h> 11 10 12 11 #include <asm/cpu-features.h>
+1
arch/loongarch/kernel/kfpu.c
··· 4 4 */ 5 5 6 6 #include <linux/cpu.h> 7 + #include <linux/export.h> 7 8 #include <linux/init.h> 8 9 #include <asm/fpu.h> 9 10 #include <asm/smp.h>
-1
arch/loongarch/kernel/paravirt.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - #include <linux/export.h> 3 2 #include <linux/types.h> 4 3 #include <linux/interrupt.h> 5 4 #include <linux/irq_work.h>
+1 -1
arch/loongarch/kernel/time.c
··· 102 102 return 0; 103 103 } 104 104 105 - static unsigned long __init get_loops_per_jiffy(void) 105 + static unsigned long get_loops_per_jiffy(void) 106 106 { 107 107 unsigned long lpj = (unsigned long)const_clock_freq; 108 108
+1
arch/loongarch/kernel/traps.c
··· 13 13 #include <linux/kernel.h> 14 14 #include <linux/kexec.h> 15 15 #include <linux/module.h> 16 + #include <linux/export.h> 16 17 #include <linux/extable.h> 17 18 #include <linux/mm.h> 18 19 #include <linux/sched/mm.h>
+1
arch/loongarch/kernel/unwind_guess.c
··· 3 3 * Copyright (C) 2022 Loongson Technology Corporation Limited 4 4 */ 5 5 #include <asm/unwind.h> 6 + #include <linux/export.h> 6 7 7 8 unsigned long unwind_get_return_address(struct unwind_state *state) 8 9 {
+2 -1
arch/loongarch/kernel/unwind_orc.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 - #include <linux/objtool.h> 2 + #include <linux/export.h> 3 3 #include <linux/module.h> 4 + #include <linux/objtool.h> 4 5 #include <linux/sort.h> 5 6 #include <asm/exception.h> 6 7 #include <asm/orc_header.h>
+1
arch/loongarch/kernel/unwind_prologue.c
··· 3 3 * Copyright (C) 2022 Loongson Technology Corporation Limited 4 4 */ 5 5 #include <linux/cpumask.h> 6 + #include <linux/export.h> 6 7 #include <linux/ftrace.h> 7 8 #include <linux/kallsyms.h> 8 9
+61 -28
arch/loongarch/kvm/intc/eiointc.c
··· 9 9 10 10 static void eiointc_set_sw_coreisr(struct loongarch_eiointc *s) 11 11 { 12 - int ipnum, cpu, irq_index, irq_mask, irq; 12 + int ipnum, cpu, cpuid, irq_index, irq_mask, irq; 13 + struct kvm_vcpu *vcpu; 13 14 14 15 for (irq = 0; irq < EIOINTC_IRQS; irq++) { 15 16 ipnum = s->ipmap.reg_u8[irq / 32]; ··· 21 20 irq_index = irq / 32; 22 21 irq_mask = BIT(irq & 0x1f); 23 22 24 - cpu = s->coremap.reg_u8[irq]; 23 + cpuid = s->coremap.reg_u8[irq]; 24 + vcpu = kvm_get_vcpu_by_cpuid(s->kvm, cpuid); 25 + if (!vcpu) 26 + continue; 27 + 28 + cpu = vcpu->vcpu_id; 25 29 if (!!(s->coreisr.reg_u32[cpu][irq_index] & irq_mask)) 26 30 set_bit(irq, s->sw_coreisr[cpu][ipnum]); 27 31 else ··· 72 66 } 73 67 74 68 static inline void eiointc_update_sw_coremap(struct loongarch_eiointc *s, 75 - int irq, void *pvalue, u32 len, bool notify) 69 + int irq, u64 val, u32 len, bool notify) 76 70 { 77 - int i, cpu; 78 - u64 val = *(u64 *)pvalue; 71 + int i, cpu, cpuid; 72 + struct kvm_vcpu *vcpu; 79 73 80 74 for (i = 0; i < len; i++) { 81 - cpu = val & 0xff; 75 + cpuid = val & 0xff; 82 76 val = val >> 8; 83 77 84 78 if (!(s->status & BIT(EIOINTC_ENABLE_CPU_ENCODE))) { 85 - cpu = ffs(cpu) - 1; 86 - cpu = (cpu >= 4) ? 0 : cpu; 79 + cpuid = ffs(cpuid) - 1; 80 + cpuid = (cpuid >= 4) ? 
0 : cpuid; 87 81 } 88 82 83 + vcpu = kvm_get_vcpu_by_cpuid(s->kvm, cpuid); 84 + if (!vcpu) 85 + continue; 86 + 87 + cpu = vcpu->vcpu_id; 89 88 if (s->sw_coremap[irq + i] == cpu) 90 89 continue; 91 90 ··· 316 305 return -EINVAL; 317 306 } 318 307 308 + if (addr & (len - 1)) { 309 + kvm_err("%s: eiointc not aligned addr %llx len %d\n", __func__, addr, len); 310 + return -EINVAL; 311 + } 312 + 319 313 vcpu->kvm->stat.eiointc_read_exits++; 320 314 spin_lock_irqsave(&eiointc->lock, flags); 321 315 switch (len) { ··· 414 398 irq = offset - EIOINTC_COREMAP_START; 415 399 index = irq; 416 400 s->coremap.reg_u8[index] = data; 417 - eiointc_update_sw_coremap(s, irq, (void *)&data, sizeof(data), true); 401 + eiointc_update_sw_coremap(s, irq, data, sizeof(data), true); 418 402 break; 419 403 default: 420 404 ret = -EINVAL; ··· 452 436 break; 453 437 case EIOINTC_ENABLE_START ... EIOINTC_ENABLE_END: 454 438 index = (offset - EIOINTC_ENABLE_START) >> 1; 455 - old_data = s->enable.reg_u32[index]; 439 + old_data = s->enable.reg_u16[index]; 456 440 s->enable.reg_u16[index] = data; 457 441 /* 458 442 * 1: enable irq. 459 443 * update irq when isr is set. 460 444 */ 461 445 data = s->enable.reg_u16[index] & ~old_data & s->isr.reg_u16[index]; 462 - index = index << 1; 463 446 for (i = 0; i < sizeof(data); i++) { 464 447 u8 mask = (data >> (i * 8)) & 0xff; 465 - eiointc_enable_irq(vcpu, s, index + i, mask, 1); 448 + eiointc_enable_irq(vcpu, s, index * 2 + i, mask, 1); 466 449 } 467 450 /* 468 451 * 0: disable irq. ··· 470 455 data = ~s->enable.reg_u16[index] & old_data & s->isr.reg_u16[index]; 471 456 for (i = 0; i < sizeof(data); i++) { 472 457 u8 mask = (data >> (i * 8)) & 0xff; 473 - eiointc_enable_irq(vcpu, s, index, mask, 0); 458 + eiointc_enable_irq(vcpu, s, index * 2 + i, mask, 0); 474 459 } 475 460 break; 476 461 case EIOINTC_BOUNCE_START ... 
EIOINTC_BOUNCE_END: ··· 499 484 irq = offset - EIOINTC_COREMAP_START; 500 485 index = irq >> 1; 501 486 s->coremap.reg_u16[index] = data; 502 - eiointc_update_sw_coremap(s, irq, (void *)&data, sizeof(data), true); 487 + eiointc_update_sw_coremap(s, irq, data, sizeof(data), true); 503 488 break; 504 489 default: 505 490 ret = -EINVAL; ··· 544 529 * update irq when isr is set. 545 530 */ 546 531 data = s->enable.reg_u32[index] & ~old_data & s->isr.reg_u32[index]; 547 - index = index << 2; 548 532 for (i = 0; i < sizeof(data); i++) { 549 533 u8 mask = (data >> (i * 8)) & 0xff; 550 - eiointc_enable_irq(vcpu, s, index + i, mask, 1); 534 + eiointc_enable_irq(vcpu, s, index * 4 + i, mask, 1); 551 535 } 552 536 /* 553 537 * 0: disable irq. ··· 555 541 data = ~s->enable.reg_u32[index] & old_data & s->isr.reg_u32[index]; 556 542 for (i = 0; i < sizeof(data); i++) { 557 543 u8 mask = (data >> (i * 8)) & 0xff; 558 - eiointc_enable_irq(vcpu, s, index, mask, 0); 544 + eiointc_enable_irq(vcpu, s, index * 4 + i, mask, 0); 559 545 } 560 546 break; 561 547 case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END: ··· 584 570 irq = offset - EIOINTC_COREMAP_START; 585 571 index = irq >> 2; 586 572 s->coremap.reg_u32[index] = data; 587 - eiointc_update_sw_coremap(s, irq, (void *)&data, sizeof(data), true); 573 + eiointc_update_sw_coremap(s, irq, data, sizeof(data), true); 588 574 break; 589 575 default: 590 576 ret = -EINVAL; ··· 629 615 * update irq when isr is set. 630 616 */ 631 617 data = s->enable.reg_u64[index] & ~old_data & s->isr.reg_u64[index]; 632 - index = index << 3; 633 618 for (i = 0; i < sizeof(data); i++) { 634 619 u8 mask = (data >> (i * 8)) & 0xff; 635 - eiointc_enable_irq(vcpu, s, index + i, mask, 1); 620 + eiointc_enable_irq(vcpu, s, index * 8 + i, mask, 1); 636 621 } 637 622 /* 638 623 * 0: disable irq. 
··· 640 627 data = ~s->enable.reg_u64[index] & old_data & s->isr.reg_u64[index]; 641 628 for (i = 0; i < sizeof(data); i++) { 642 629 u8 mask = (data >> (i * 8)) & 0xff; 643 - eiointc_enable_irq(vcpu, s, index, mask, 0); 630 + eiointc_enable_irq(vcpu, s, index * 8 + i, mask, 0); 644 631 } 645 632 break; 646 633 case EIOINTC_BOUNCE_START ... EIOINTC_BOUNCE_END: ··· 669 656 irq = offset - EIOINTC_COREMAP_START; 670 657 index = irq >> 3; 671 658 s->coremap.reg_u64[index] = data; 672 - eiointc_update_sw_coremap(s, irq, (void *)&data, sizeof(data), true); 659 + eiointc_update_sw_coremap(s, irq, data, sizeof(data), true); 673 660 break; 674 661 default: 675 662 ret = -EINVAL; ··· 689 676 690 677 if (!eiointc) { 691 678 kvm_err("%s: eiointc irqchip not valid!\n", __func__); 679 + return -EINVAL; 680 + } 681 + 682 + if (addr & (len - 1)) { 683 + kvm_err("%s: eiointc not aligned addr %llx len %d\n", __func__, addr, len); 692 684 return -EINVAL; 693 685 } 694 686 ··· 805 787 int ret = 0; 806 788 unsigned long flags; 807 789 unsigned long type = (unsigned long)attr->attr; 808 - u32 i, start_irq; 790 + u32 i, start_irq, val; 809 791 void __user *data; 810 792 struct loongarch_eiointc *s = dev->kvm->arch.eiointc; 811 793 ··· 813 795 spin_lock_irqsave(&s->lock, flags); 814 796 switch (type) { 815 797 case KVM_DEV_LOONGARCH_EXTIOI_CTRL_INIT_NUM_CPU: 816 - if (copy_from_user(&s->num_cpu, data, 4)) 798 + if (copy_from_user(&val, data, 4)) 817 799 ret = -EFAULT; 800 + else { 801 + if (val >= EIOINTC_ROUTE_MAX_VCPUS) 802 + ret = -EINVAL; 803 + else 804 + s->num_cpu = val; 805 + } 818 806 break; 819 807 case KVM_DEV_LOONGARCH_EXTIOI_CTRL_INIT_FEATURE: 820 808 if (copy_from_user(&s->features, data, 4)) ··· 833 809 for (i = 0; i < (EIOINTC_IRQS / 4); i++) { 834 810 start_irq = i * 4; 835 811 eiointc_update_sw_coremap(s, start_irq, 836 - (void *)&s->coremap.reg_u32[i], sizeof(u32), false); 812 + s->coremap.reg_u32[i], sizeof(u32), false); 837 813 } 838 814 break; 839 815 default: ··· 848 
824 struct kvm_device_attr *attr, 849 825 bool is_write) 850 826 { 851 - int addr, cpuid, offset, ret = 0; 827 + int addr, cpu, offset, ret = 0; 852 828 unsigned long flags; 853 829 void *p = NULL; 854 830 void __user *data; ··· 856 832 857 833 s = dev->kvm->arch.eiointc; 858 834 addr = attr->attr; 859 - cpuid = addr >> 16; 835 + cpu = addr >> 16; 860 836 addr &= 0xffff; 861 837 data = (void __user *)attr->addr; 862 838 switch (addr) { ··· 881 857 p = &s->isr.reg_u32[offset]; 882 858 break; 883 859 case EIOINTC_COREISR_START ... EIOINTC_COREISR_END: 860 + if (cpu >= s->num_cpu) 861 + return -EINVAL; 862 + 884 863 offset = (addr - EIOINTC_COREISR_START) / 4; 885 - p = &s->coreisr.reg_u32[cpuid][offset]; 864 + p = &s->coreisr.reg_u32[cpu][offset]; 886 865 break; 887 866 case EIOINTC_COREMAP_START ... EIOINTC_COREMAP_END: 888 867 offset = (addr - EIOINTC_COREMAP_START) / 4; ··· 926 899 data = (void __user *)attr->addr; 927 900 switch (addr) { 928 901 case KVM_DEV_LOONGARCH_EXTIOI_SW_STATUS_NUM_CPU: 902 + if (is_write) 903 + return ret; 904 + 929 905 p = &s->num_cpu; 930 906 break; 931 907 case KVM_DEV_LOONGARCH_EXTIOI_SW_STATUS_FEATURE: 908 + if (is_write) 909 + return ret; 910 + 932 911 p = &s->features; 933 912 break; 934 913 case KVM_DEV_LOONGARCH_EXTIOI_SW_STATUS_STATE:
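The enable-register hunks above all correct the same indexing slip: a `width`-byte register at array index `index` covers irq bytes starting at `index * width`, so the per-byte loop must hand `index * width + i` to `eiointc_enable_irq()` — the old code shifted `index` once and then reused the stale (or unshifted, on the disable path) value. The mapping in isolation (hypothetical helper name):

```c
#include <assert.h>

/* Byte-granular irq index covered by byte `byte` of register `reg_idx`
 * when the enable array is accessed as `width`-byte registers.
 * This is the index*2+i / index*4+i / index*8+i pattern from the fix. */
static int eiointc_irq_byte(int width, int reg_idx, int byte)
{
    return reg_idx * width + byte;
}
```

The invariant the fix restores is that the same physical byte resolves to the same irq index regardless of access width: byte 10 is register 5/byte 0 for u16 accesses and register 2/byte 2 for u32 accesses.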
+1
arch/loongarch/lib/crc32-loongarch.c
··· 11 11 12 12 #include <asm/cpu-features.h> 13 13 #include <linux/crc32.h> 14 + #include <linux/export.h> 14 15 #include <linux/module.h> 15 16 #include <linux/unaligned.h> 16 17
+1
arch/loongarch/lib/csum.c
··· 2 2 // Copyright (C) 2019-2020 Arm Ltd. 3 3 4 4 #include <linux/compiler.h> 5 + #include <linux/export.h> 5 6 #include <linux/kasan-checks.h> 6 7 #include <linux/kernel.h> 7 8
+2 -2
arch/loongarch/mm/ioremap.c
··· 16 16 17 17 } 18 18 19 - void *early_memremap_ro(resource_size_t phys_addr, unsigned long size) 19 + void * __init early_memremap_ro(resource_size_t phys_addr, unsigned long size) 20 20 { 21 21 return early_memremap(phys_addr, size); 22 22 } 23 23 24 - void *early_memremap_prot(resource_size_t phys_addr, unsigned long size, 24 + void * __init early_memremap_prot(resource_size_t phys_addr, unsigned long size, 25 25 unsigned long prot_val) 26 26 { 27 27 return early_memremap(phys_addr, size);
-1
arch/loongarch/pci/pci.c
··· 3 3 * Copyright (C) 2020-2022 Loongson Technology Corporation Limited 4 4 */ 5 5 #include <linux/kernel.h> 6 - #include <linux/export.h> 7 6 #include <linux/init.h> 8 7 #include <linux/acpi.h> 9 8 #include <linux/types.h>
+1 -1
arch/powerpc/boot/dts/microwatt.dts
··· 4 4 / { 5 5 #size-cells = <0x02>; 6 6 #address-cells = <0x02>; 7 - model-name = "microwatt"; 7 + model = "microwatt"; 8 8 compatible = "microwatt-soc"; 9 9 10 10 aliases {
+10
arch/powerpc/boot/dts/mpc8315erdb.dts
··· 6 6 */ 7 7 8 8 /dts-v1/; 9 + #include <dt-bindings/interrupt-controller/irq.h> 9 10 10 11 / { 11 12 compatible = "fsl,mpc8315erdb"; ··· 358 357 interrupts = <80 8>; 359 358 interrupt-parent = <&ipic>; 360 359 fsl,mpc8313-wakeup-timer = <&gtm1>; 360 + }; 361 + 362 + gpio: gpio-controller@c00 { 363 + compatible = "fsl,mpc8314-gpio"; 364 + reg = <0xc00 0x100>; 365 + interrupts = <74 IRQ_TYPE_LEVEL_LOW>; 366 + interrupt-parent = <&ipic>; 367 + gpio-controller; 368 + #gpio-cells = <2>; 361 369 }; 362 370 }; 363 371
+1 -1
arch/powerpc/include/asm/ppc_asm.h
··· 183 183 /* 184 184 * Used to name C functions called from asm 185 185 */ 186 - #ifdef CONFIG_PPC_KERNEL_PCREL 186 + #if defined(__powerpc64__) && defined(CONFIG_PPC_KERNEL_PCREL) 187 187 #define CFUNC(name) name@notoc 188 188 #else 189 189 #define CFUNC(name) name
+4 -4
arch/powerpc/include/uapi/asm/ioctls.h
··· 23 23 #define TCSETSW _IOW('t', 21, struct termios) 24 24 #define TCSETSF _IOW('t', 22, struct termios) 25 25 26 - #define TCGETA _IOR('t', 23, struct termio) 27 - #define TCSETA _IOW('t', 24, struct termio) 28 - #define TCSETAW _IOW('t', 25, struct termio) 29 - #define TCSETAF _IOW('t', 28, struct termio) 26 + #define TCGETA 0x40147417 /* _IOR('t', 23, struct termio) */ 27 + #define TCSETA 0x80147418 /* _IOW('t', 24, struct termio) */ 28 + #define TCSETAW 0x80147419 /* _IOW('t', 25, struct termio) */ 29 + #define TCSETAF 0x8014741c /* _IOW('t', 28, struct termio) */ 30 30 31 31 #define TCSBRK _IO('t', 29) 32 32 #define TCXONC _IO('t', 30)
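The hard-coded literals above encode exactly what the `_IOW`/`_IOR` macros expanded to: powerpc packs an ioctl number as `dir:3 | size:13 | type:8 | nr:8` with `_IOC_READ = 2` and `_IOC_WRITE = 4` (unlike asm-generic), and the `0x14` size field is `sizeof(struct termio)`, 20 bytes. A sketch of that encoding, checked against the hunk's values (macro names here are illustrative, not the uapi ones):

```c
#include <assert.h>
#include <stdint.h>

/* powerpc ioctl layout: dir at bit 29, size at 16, type at 8, nr at 0. */
#define PPC_IOC_READ  2u
#define PPC_IOC_WRITE 4u
#define PPC_IOC(dir, type, nr, size)                          \
    (((uint32_t)(dir)  << 29) | ((uint32_t)(size) << 16) |    \
     ((uint32_t)(type) <<  8) |  (uint32_t)(nr))
```

Freezing the numbers as literals keeps the uapi values stable even if `struct termio`'s definition moves out of reach of the header.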
+2
arch/powerpc/kernel/eeh.c
··· 1509 1509 /* Invalid PE ? */ 1510 1510 if (!pe) 1511 1511 return -ENODEV; 1512 + else 1513 + ret = eeh_ops->configure_bridge(pe); 1512 1514 1513 1515 return ret; 1514 1516 }
+1 -1
arch/powerpc/kernel/vdso/Makefile
··· 53 53 ldflags-y += $(filter-out $(CC_AUTO_VAR_INIT_ZERO_ENABLER) $(CC_FLAGS_FTRACE) -Wa$(comma)%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS)) 54 54 55 55 CC32FLAGS := -m32 56 - CC32FLAGSREMOVE := -mcmodel=medium -mabi=elfv1 -mabi=elfv2 -mcall-aixdesc 56 + CC32FLAGSREMOVE := -mcmodel=medium -mabi=elfv1 -mabi=elfv2 -mcall-aixdesc -mpcrel 57 57 ifdef CONFIG_CC_IS_CLANG 58 58 # This flag is supported by clang for 64-bit but not 32-bit so it will cause 59 59 # an unused command line flag warning for this file.
-1
arch/riscv/include/asm/pgtable.h
··· 1075 1075 */ 1076 1076 #ifdef CONFIG_64BIT 1077 1077 #define TASK_SIZE_64 (PGDIR_SIZE * PTRS_PER_PGD / 2) 1078 - #define TASK_SIZE_MAX LONG_MAX 1079 1078 1080 1079 #ifdef CONFIG_COMPAT 1081 1080 #define TASK_SIZE_32 (_AC(0x80000000, UL) - PAGE_SIZE)
+1 -1
arch/riscv/include/asm/runtime-const.h
··· 206 206 addi_insn_mask &= 0x07fff; 207 207 } 208 208 209 - if (lower_immediate & 0x00000fff) { 209 + if (lower_immediate & 0x00000fff || lui_insn == RISCV_INSN_NOP4) { 210 210 /* replace upper 12 bits of addi with lower 12 bits of val */ 211 211 addi_insn &= addi_insn_mask; 212 212 addi_insn |= (lower_immediate & 0x00000fff) << 20;
+2 -1
arch/riscv/include/asm/uaccess.h
··· 127 127 128 128 #ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT 129 129 #define __get_user_8(x, ptr, label) \ 130 + do { \ 130 131 u32 __user *__ptr = (u32 __user *)(ptr); \ 131 132 u32 __lo, __hi; \ 132 133 asm_goto_output( \ ··· 142 141 : : label); \ 143 142 (x) = (__typeof__(x))((__typeof__((x) - (x)))( \ 144 143 (((u64)__hi << 32) | __lo))); \ 145 - 144 + } while (0) 146 145 #else /* !CONFIG_CC_HAS_ASM_GOTO_OUTPUT */ 147 146 #define __get_user_8(x, ptr, label) \ 148 147 do { \
+1 -1
arch/riscv/include/asm/vdso/getrandom.h
··· 18 18 register unsigned int flags asm("a2") = _flags; 19 19 20 20 asm volatile ("ecall\n" 21 - : "+r" (ret) 21 + : "=r" (ret) 22 22 : "r" (nr), "r" (buffer), "r" (len), "r" (flags) 23 23 : "memory"); 24 24
+6 -6
arch/riscv/include/asm/vector.h
··· 205 205 THEAD_VSETVLI_T4X0E8M8D1 206 206 THEAD_VSB_V_V0T0 207 207 "add t0, t0, t4\n\t" 208 - THEAD_VSB_V_V0T0 208 + THEAD_VSB_V_V8T0 209 209 "add t0, t0, t4\n\t" 210 - THEAD_VSB_V_V0T0 210 + THEAD_VSB_V_V16T0 211 211 "add t0, t0, t4\n\t" 212 - THEAD_VSB_V_V0T0 212 + THEAD_VSB_V_V24T0 213 213 : : "r" (datap) : "memory", "t0", "t4"); 214 214 } else { 215 215 asm volatile ( ··· 241 241 THEAD_VSETVLI_T4X0E8M8D1 242 242 THEAD_VLB_V_V0T0 243 243 "add t0, t0, t4\n\t" 244 - THEAD_VLB_V_V0T0 244 + THEAD_VLB_V_V8T0 245 245 "add t0, t0, t4\n\t" 246 - THEAD_VLB_V_V0T0 246 + THEAD_VLB_V_V16T0 247 247 "add t0, t0, t4\n\t" 248 - THEAD_VLB_V_V0T0 248 + THEAD_VLB_V_V24T0 249 249 : : "r" (datap) : "memory", "t0", "t4"); 250 250 } else { 251 251 asm volatile (
+1
arch/riscv/kernel/setup.c
··· 50 50 #endif 51 51 ; 52 52 unsigned long boot_cpu_hartid; 53 + EXPORT_SYMBOL_GPL(boot_cpu_hartid); 53 54 54 55 /* 55 56 * Place kernel memory regions on the resource tree so that
+2 -2
arch/riscv/kernel/traps_misaligned.c
··· 454 454 455 455 val.data_u64 = 0; 456 456 if (user_mode(regs)) { 457 - if (copy_from_user_nofault(&val, (u8 __user *)addr, len)) 457 + if (copy_from_user(&val, (u8 __user *)addr, len)) 458 458 return -1; 459 459 } else { 460 460 memcpy(&val, (u8 *)addr, len); ··· 555 555 return -EOPNOTSUPP; 556 556 557 557 if (user_mode(regs)) { 558 - if (copy_to_user_nofault((u8 __user *)addr, &val, len)) 558 + if (copy_to_user((u8 __user *)addr, &val, len)) 559 559 return -1; 560 560 } else { 561 561 memcpy((u8 *)addr, &val, len);
+1 -1
arch/riscv/kernel/vdso/vdso.lds.S
··· 30 30 *(.data .data.* .gnu.linkonce.d.*) 31 31 *(.dynbss) 32 32 *(.bss .bss.* .gnu.linkonce.b.*) 33 - } 33 + } :text 34 34 35 35 .note : { *(.note.*) } :text :note 36 36
+1 -1
arch/riscv/kernel/vendor_extensions/sifive.c
··· 8 8 #include <linux/types.h> 9 9 10 10 /* All SiFive vendor extensions supported in Linux */ 11 - const struct riscv_isa_ext_data riscv_isa_vendor_ext_sifive[] = { 11 + static const struct riscv_isa_ext_data riscv_isa_vendor_ext_sifive[] = { 12 12 __RISCV_ISA_EXT_DATA(xsfvfnrclipxfqf, RISCV_ISA_VENDOR_EXT_XSFVFNRCLIPXFQF), 13 13 __RISCV_ISA_EXT_DATA(xsfvfwmaccqqq, RISCV_ISA_VENDOR_EXT_XSFVFWMACCQQQ), 14 14 __RISCV_ISA_EXT_DATA(xsfvqmaccdod, RISCV_ISA_VENDOR_EXT_XSFVQMACCDOD),
+4 -4
arch/riscv/kvm/vcpu_sbi_replace.c
··· 103 103 kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_SENT); 104 104 break; 105 105 case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA: 106 - if (cp->a2 == 0 && cp->a3 == 0) 106 + if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL) 107 107 kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask); 108 108 else 109 109 kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask, ··· 111 111 kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT); 112 112 break; 113 113 case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID: 114 - if (cp->a2 == 0 && cp->a3 == 0) 114 + if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL) 115 115 kvm_riscv_hfence_vvma_asid_all(vcpu->kvm, 116 116 hbase, hmask, cp->a4); 117 117 else ··· 127 127 case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA_ASID: 128 128 /* 129 129 * Until nested virtualization is implemented, the 130 - * SBI HFENCE calls should be treated as NOPs 130 + * SBI HFENCE calls should return not supported 131 + * hence fallthrough. 131 132 */ 132 - break; 133 133 default: 134 134 retdata->err_val = SBI_ERR_NOT_SUPPORTED; 135 135 }
+1 -1
arch/s390/include/asm/ptrace.h
··· 265 265 addr = kernel_stack_pointer(regs) + n * sizeof(long); 266 266 if (!regs_within_kernel_stack(regs, addr)) 267 267 return 0; 268 - return READ_ONCE_NOCHECK(addr); 268 + return READ_ONCE_NOCHECK(*(unsigned long *)addr); 269 269 } 270 270 271 271 /**
+1 -1
arch/um/drivers/ubd_user.c
··· 41 41 *fd_out = fds[1]; 42 42 43 43 err = os_set_fd_block(*fd_out, 0); 44 - err = os_set_fd_block(kernel_fd, 0); 44 + err |= os_set_fd_block(kernel_fd, 0); 45 45 if (err) { 46 46 printk("start_io_thread - failed to set nonblocking I/O.\n"); 47 47 goto out_close;
+13 -29
arch/um/drivers/vector_kern.c
··· 1625 1625 1626 1626 device->dev = dev; 1627 1627 1628 - *vp = ((struct vector_private) 1629 - { 1630 - .list = LIST_HEAD_INIT(vp->list), 1631 - .dev = dev, 1632 - .unit = n, 1633 - .options = get_transport_options(def), 1634 - .rx_irq = 0, 1635 - .tx_irq = 0, 1636 - .parsed = def, 1637 - .max_packet = get_mtu(def) + ETH_HEADER_OTHER, 1638 - /* TODO - we need to calculate headroom so that ip header 1639 - * is 16 byte aligned all the time 1640 - */ 1641 - .headroom = get_headroom(def), 1642 - .form_header = NULL, 1643 - .verify_header = NULL, 1644 - .header_rxbuffer = NULL, 1645 - .header_txbuffer = NULL, 1646 - .header_size = 0, 1647 - .rx_header_size = 0, 1648 - .rexmit_scheduled = false, 1649 - .opened = false, 1650 - .transport_data = NULL, 1651 - .in_write_poll = false, 1652 - .coalesce = 2, 1653 - .req_size = get_req_size(def), 1654 - .in_error = false, 1655 - .bpf = NULL 1656 - }); 1628 + INIT_LIST_HEAD(&vp->list); 1629 + vp->dev = dev; 1630 + vp->unit = n; 1631 + vp->options = get_transport_options(def); 1632 + vp->parsed = def; 1633 + vp->max_packet = get_mtu(def) + ETH_HEADER_OTHER; 1634 + /* 1635 + * TODO - we need to calculate headroom so that ip header 1636 + * is 16 byte aligned all the time 1637 + */ 1638 + vp->headroom = get_headroom(def); 1639 + vp->coalesce = 2; 1640 + vp->req_size = get_req_size(def); 1657 1641 1658 1642 dev->features = dev->hw_features = (NETIF_F_SG | NETIF_F_FRAGLIST); 1659 1643 INIT_WORK(&vp->reset_tx, vector_reset_tx);
+14
arch/um/drivers/vfio_kern.c
··· 570 570 kfree(dev); 571 571 } 572 572 573 + static struct uml_vfio_device *uml_vfio_find_device(const char *device) 574 + { 575 + struct uml_vfio_device *dev; 576 + 577 + list_for_each_entry(dev, &uml_vfio_devices, list) { 578 + if (!strcmp(dev->name, device)) 579 + return dev; 580 + } 581 + return NULL; 582 + } 583 + 573 584 static int uml_vfio_cmdline_set(const char *device, const struct kernel_param *kp) 574 585 { 575 586 struct uml_vfio_device *dev; ··· 592 581 return fd; 593 582 uml_vfio_container.fd = fd; 594 583 } 584 + 585 + if (uml_vfio_find_device(device)) 586 + return -EEXIST; 595 587 596 588 dev = kzalloc(sizeof(*dev), GFP_KERNEL); 597 589 if (!dev)
+1 -1
arch/x86/Kconfig
··· 89 89 select ARCH_HAS_DMA_OPS if GART_IOMMU || XEN 90 90 select ARCH_HAS_EARLY_DEBUG if KGDB 91 91 select ARCH_HAS_ELF_RANDOMIZE 92 - select ARCH_HAS_EXECMEM_ROX if X86_64 92 + select ARCH_HAS_EXECMEM_ROX if X86_64 && STRICT_MODULE_RWX 93 93 select ARCH_HAS_FAST_MULTIPLIER 94 94 select ARCH_HAS_FORTIFY_SOURCE 95 95 select ARCH_HAS_GCOV_PROFILE_ALL
+1 -1
arch/x86/events/intel/core.c
··· 2826 2826 * If the PEBS counters snapshotting is enabled, 2827 2827 * the topdown event is available in PEBS records. 2828 2828 */ 2829 - if (is_topdown_event(event) && !is_pebs_counter_event_group(event)) 2829 + if (is_topdown_count(event) && !is_pebs_counter_event_group(event)) 2830 2830 static_call(intel_pmu_update_topdown_event)(event, NULL); 2831 2831 else 2832 2832 intel_pmu_drain_pebs_buffer();
+15 -4
arch/x86/include/asm/debugreg.h
··· 9 9 #include <asm/cpufeature.h> 10 10 #include <asm/msr.h> 11 11 12 + /* 13 + * Define bits that are always set to 1 in DR7, only bit 10 is 14 + * architecturally reserved to '1'. 15 + * 16 + * This is also the init/reset value for DR7. 17 + */ 18 + #define DR7_FIXED_1 0x00000400 19 + 12 20 DECLARE_PER_CPU(unsigned long, cpu_dr7); 13 21 14 22 #ifndef CONFIG_PARAVIRT_XXL ··· 108 100 109 101 static inline void hw_breakpoint_disable(void) 110 102 { 111 - /* Zero the control register for HW Breakpoint */ 112 - set_debugreg(0UL, 7); 103 + /* Reset the control register for HW Breakpoint */ 104 + set_debugreg(DR7_FIXED_1, 7); 113 105 114 106 /* Zero-out the individual HW breakpoint address registers */ 115 107 set_debugreg(0UL, 0); ··· 133 125 return 0; 134 126 135 127 get_debugreg(dr7, 7); 136 - dr7 &= ~0x400; /* architecturally set bit */ 128 + 129 + /* Architecturally set bit */ 130 + dr7 &= ~DR7_FIXED_1; 137 131 if (dr7) 138 - set_debugreg(0, 7); 132 + set_debugreg(DR7_FIXED_1, 7); 133 + 139 134 /* 140 135 * Ensure the compiler doesn't lower the above statements into 141 136 * the critical section; disabling breakpoints late would not
+1 -1
arch/x86/include/asm/kvm_host.h
··· 31 31 32 32 #include <asm/apic.h> 33 33 #include <asm/pvclock-abi.h> 34 + #include <asm/debugreg.h> 34 35 #include <asm/desc.h> 35 36 #include <asm/mtrr.h> 36 37 #include <asm/msr-index.h> ··· 250 249 #define DR7_BP_EN_MASK 0x000000ff 251 250 #define DR7_GE (1 << 9) 252 251 #define DR7_GD (1 << 13) 253 - #define DR7_FIXED_1 0x00000400 254 252 #define DR7_VOLATILE 0xffff2bff 255 253 256 254 #define KVM_GUESTDBG_VALID_MASK \
+8
arch/x86/include/asm/module.h
··· 5 5 #include <asm-generic/module.h> 6 6 #include <asm/orc_types.h> 7 7 8 + struct its_array { 9 + #ifdef CONFIG_MITIGATION_ITS 10 + void **pages; 11 + int num; 12 + #endif 13 + }; 14 + 8 15 struct mod_arch_specific { 9 16 #ifdef CONFIG_UNWINDER_ORC 10 17 unsigned int num_orcs; 11 18 int *orc_unwind_ip; 12 19 struct orc_entry *orc_unwind; 13 20 #endif 21 + struct its_array its_pages; 14 22 }; 15 23 16 24 #endif /* _ASM_X86_MODULE_H */
+1
arch/x86/include/asm/shared/tdx.h
··· 80 80 #define TDVMCALL_STATUS_RETRY 0x0000000000000001ULL 81 81 #define TDVMCALL_STATUS_INVALID_OPERAND 0x8000000000000000ULL 82 82 #define TDVMCALL_STATUS_ALIGN_ERROR 0x8000000000000002ULL 83 + #define TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED 0x8000000000000003ULL 83 84 84 85 /* 85 86 * Bitmasks of exposed registers (with VMM).
+22
arch/x86/include/asm/sighandling.h
··· 24 24 int x64_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs); 25 25 int x32_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs); 26 26 27 + /* 28 + * To prevent immediate repeat of single step trap on return from SIGTRAP 29 + * handler if the trap flag (TF) is set without an external debugger attached, 30 + * clear the software event flag in the augmented SS, ensuring no single-step 31 + * trap is pending upon ERETU completion. 32 + * 33 + * Note, this function should be called in sigreturn() before the original 34 + * state is restored to make sure the TF is read from the entry frame. 35 + */ 36 + static __always_inline void prevent_single_step_upon_eretu(struct pt_regs *regs) 37 + { 38 + /* 39 + * If the trap flag (TF) is set, i.e., the sigreturn() SYSCALL instruction 40 + * is being single-stepped, do not clear the software event flag in the 41 + * augmented SS, thus a debugger won't skip over the following instruction. 42 + */ 43 + #ifdef CONFIG_X86_FRED 44 + if (!(regs->flags & X86_EFLAGS_TF)) 45 + regs->fred_ss.swevent = 0; 46 + #endif 47 + } 48 + 27 49 #endif /* _ASM_X86_SIGHANDLING_H */
+1 -1
arch/x86/include/asm/tdx.h
··· 106 106 107 107 typedef u64 (*sc_func_t)(u64 fn, struct tdx_module_args *args); 108 108 109 - static inline u64 sc_retry(sc_func_t func, u64 fn, 109 + static __always_inline u64 sc_retry(sc_func_t func, u64 fn, 110 110 struct tdx_module_args *args) 111 111 { 112 112 int retry = RDRAND_RETRY_LOOPS;
+20 -1
arch/x86/include/uapi/asm/debugreg.h
··· 15 15 which debugging register was responsible for the trap. The other bits 16 16 are either reserved or not of interest to us. */ 17 17 18 - /* Define reserved bits in DR6 which are always set to 1 */ 18 + /* 19 + * Define bits in DR6 which are set to 1 by default. 20 + * 21 + * This is also the DR6 architectural value following Power-up, Reset or INIT. 22 + * 23 + * Note, with the introduction of Bus Lock Detection (BLD) and Restricted 24 + * Transactional Memory (RTM), the DR6 register has been modified: 25 + * 26 + * 1) BLD flag (bit 11) is no longer reserved to 1 if the CPU supports 27 + * Bus Lock Detection. The assertion of a bus lock could clear it. 28 + * 29 + * 2) RTM flag (bit 16) is no longer reserved to 1 if the CPU supports 30 + * restricted transactional memory. #DB occurred inside an RTM region 31 + * could clear it. 32 + * 33 + * Apparently, DR6.BLD and DR6.RTM are active low bits. 34 + * 35 + * As a result, DR6_RESERVED is an incorrect name now, but it is kept for 36 + * compatibility. 37 + */ 19 38 #define DR6_RESERVED (0xFFFF0FF0) 20 39 21 40 #define DR_TRAP0 (0x1) /* db0 */
+56 -25
arch/x86/kernel/alternative.c
··· 116 116 #endif 117 117 static void *its_page; 118 118 static unsigned int its_offset; 119 + struct its_array its_pages; 120 + 121 + static void *__its_alloc(struct its_array *pages) 122 + { 123 + void *page __free(execmem) = execmem_alloc(EXECMEM_MODULE_TEXT, PAGE_SIZE); 124 + if (!page) 125 + return NULL; 126 + 127 + void *tmp = krealloc(pages->pages, (pages->num+1) * sizeof(void *), 128 + GFP_KERNEL); 129 + if (!tmp) 130 + return NULL; 131 + 132 + pages->pages = tmp; 133 + pages->pages[pages->num++] = page; 134 + 135 + return no_free_ptr(page); 136 + } 119 137 120 138 /* Initialize a thunk with the "jmp *reg; int3" instructions. */ 121 139 static void *its_init_thunk(void *thunk, int reg) ··· 169 151 return thunk + offset; 170 152 } 171 153 154 + static void its_pages_protect(struct its_array *pages) 155 + { 156 + for (int i = 0; i < pages->num; i++) { 157 + void *page = pages->pages[i]; 158 + execmem_restore_rox(page, PAGE_SIZE); 159 + } 160 + } 161 + 162 + static void its_fini_core(void) 163 + { 164 + if (IS_ENABLED(CONFIG_STRICT_KERNEL_RWX)) 165 + its_pages_protect(&its_pages); 166 + kfree(its_pages.pages); 167 + } 168 + 172 169 #ifdef CONFIG_MODULES 173 170 void its_init_mod(struct module *mod) 174 171 { ··· 206 173 its_page = NULL; 207 174 mutex_unlock(&text_mutex); 208 175 209 - for (int i = 0; i < mod->its_num_pages; i++) { 210 - void *page = mod->its_page_array[i]; 211 - execmem_restore_rox(page, PAGE_SIZE); 212 - } 176 + if (IS_ENABLED(CONFIG_STRICT_MODULE_RWX)) 177 + its_pages_protect(&mod->arch.its_pages); 213 178 } 214 179 215 180 void its_free_mod(struct module *mod) ··· 215 184 if (!cpu_feature_enabled(X86_FEATURE_INDIRECT_THUNK_ITS)) 216 185 return; 217 186 218 - for (int i = 0; i < mod->its_num_pages; i++) { 219 - void *page = mod->its_page_array[i]; 187 + for (int i = 0; i < mod->arch.its_pages.num; i++) { 188 + void *page = mod->arch.its_pages.pages[i]; 220 189 execmem_free(page); 221 190 } 222 - kfree(mod->its_page_array); 191 + kfree(mod->arch.its_pages.pages); 223 192 } 224 193 #endif /* CONFIG_MODULES */ 225 194 226 195 static void *its_alloc(void) 227 196 { 228 - void *page __free(execmem) = execmem_alloc(EXECMEM_MODULE_TEXT, PAGE_SIZE); 197 + struct its_array *pages = &its_pages; 198 + void *page; 229 199 200 + #ifdef CONFIG_MODULES 201 + if (its_mod) 202 + pages = &its_mod->arch.its_pages; 203 + #endif 204 + 205 + page = __its_alloc(pages); 230 206 if (!page) 231 207 return NULL; 232 208 233 - #ifdef CONFIG_MODULES 234 - if (its_mod) { 235 - void *tmp = krealloc(its_mod->its_page_array, 236 - (its_mod->its_num_pages+1) * sizeof(void *), 237 - GFP_KERNEL); 238 - if (!tmp) 239 - return NULL; 209 + execmem_make_temp_rw(page, PAGE_SIZE); 210 + if (pages == &its_pages) 211 + set_memory_x((unsigned long)page, 1); 240 212 241 - its_mod->its_page_array = tmp; 242 - its_mod->its_page_array[its_mod->its_num_pages++] = page; 243 - 244 - execmem_make_temp_rw(page, PAGE_SIZE); 245 - } 246 - #endif /* CONFIG_MODULES */ 247 - 248 - return no_free_ptr(page); 213 + return page; 249 214 } 250 215 251 216 static void *its_allocate_thunk(int reg) ··· 295 268 return thunk; 296 269 } 297 270 298 - #endif 271 + #else 272 + static inline void its_fini_core(void) {} 273 + #endif /* CONFIG_MITIGATION_ITS */ 299 274 300 275 /* 301 276 * Nomenclature for variable names to simplify and clarify this code and ease ··· 2367 2338 apply_retpolines(__retpoline_sites, __retpoline_sites_end); 2368 2339 apply_returns(__return_sites, __return_sites_end); 2369 2340 2341 + its_fini_core(); 2342 + 2370 2343 /* 2371 2344 * Adjust all CALL instructions to point to func()-10, including 2372 2345 * those in .altinstr_replacement.
··· 3138 3107 */ 3139 3108 void __ref smp_text_poke_single(void *addr, const void *opcode, size_t len, const void *emulate) 3140 3109 { 3141 - __smp_text_poke_batch_add(addr, opcode, len, emulate); 3110 + smp_text_poke_batch_add(addr, opcode, len, emulate); 3142 3111 smp_text_poke_batch_finish(); 3143 3112 }
+1 -1
arch/x86/kernel/cpu/amd.c
··· 31 31 32 32 #include "cpu.h" 33 33 34 - u16 invlpgb_count_max __ro_after_init; 34 + u16 invlpgb_count_max __ro_after_init = 1; 35 35 36 36 static inline int rdmsrq_amd_safe(unsigned msr, u64 *p) 37 37 {
+10 -14
arch/x86/kernel/cpu/common.c
··· 2243 2243 #endif 2244 2244 #endif 2245 2245 2246 - /* 2247 - * Clear all 6 debug registers: 2248 - */ 2249 - static void clear_all_debug_regs(void) 2246 + static void initialize_debug_regs(void) 2250 2247 { 2251 - int i; 2252 - 2253 - for (i = 0; i < 8; i++) { 2254 - /* Ignore db4, db5 */ 2255 - if ((i == 4) || (i == 5)) 2256 - continue; 2257 - 2258 - set_debugreg(0, i); 2259 - } 2248 + /* Control register first -- to make sure everything is disabled. */ 2249 + set_debugreg(DR7_FIXED_1, 7); 2250 + set_debugreg(DR6_RESERVED, 6); 2251 + /* dr5 and dr4 don't exist */ 2252 + set_debugreg(0, 3); 2253 + set_debugreg(0, 2); 2254 + set_debugreg(0, 1); 2255 + set_debugreg(0, 0); 2260 2256 } 2261 2257 2262 2258 #ifdef CONFIG_KGDB ··· 2413 2417 2414 2418 load_mm_ldt(&init_mm); 2415 2419 2416 - clear_all_debug_regs(); 2420 + initialize_debug_regs(); 2417 2421 dbg_restore_debug_regs(); 2418 2422 2419 2423 doublefault_init_cpu_tss();
+4 -2
arch/x86/kernel/cpu/resctrl/core.c
··· 498 498 struct rdt_hw_mon_domain *hw_dom; 499 499 struct rdt_domain_hdr *hdr; 500 500 struct rdt_mon_domain *d; 501 + struct cacheinfo *ci; 501 502 int err; 502 503 503 504 lockdep_assert_held(&domain_list_lock); ··· 526 525 d = &hw_dom->d_resctrl; 527 526 d->hdr.id = id; 528 527 d->hdr.type = RESCTRL_MON_DOMAIN; 529 - d->ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE); 530 - if (!d->ci) { 528 + ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE); 529 + if (!ci) { 531 530 pr_warn_once("Can't find L3 cache for CPU:%d resource %s\n", cpu, r->name); 532 531 mon_domain_free(hw_dom); 533 532 return; 534 533 } 534 + d->ci_id = ci->id; 535 535 cpumask_set_cpu(cpu, &d->hdr.cpu_mask); 536 536 537 537 arch_mon_domain_online(r, d);
+1 -1
arch/x86/kernel/kgdb.c
··· 385 385 struct perf_event *bp; 386 386 387 387 /* Disable hardware debugging while we are in kgdb: */ 388 - set_debugreg(0UL, 7); 388 + set_debugreg(DR7_FIXED_1, 7); 389 389 for (i = 0; i < HBP_NUM; i++) { 390 390 if (!breakinfo[i].enabled) 391 391 continue;
+1 -1
arch/x86/kernel/process_32.c
··· 93 93 94 94 /* Only print out debug registers if they are in their non-default state. */ 95 95 if ((d0 == 0) && (d1 == 0) && (d2 == 0) && (d3 == 0) && 96 - (d6 == DR6_RESERVED) && (d7 == 0x400)) 96 + (d6 == DR6_RESERVED) && (d7 == DR7_FIXED_1)) 97 97 return; 98 98 99 99 printk("%sDR0: %08lx DR1: %08lx DR2: %08lx DR3: %08lx\n",
+1 -1
arch/x86/kernel/process_64.c
··· 133 133 134 134 /* Only print out debug registers if they are in their non-default state. */ 135 135 if (!((d0 == 0) && (d1 == 0) && (d2 == 0) && (d3 == 0) && 136 - (d6 == DR6_RESERVED) && (d7 == 0x400))) { 136 + (d6 == DR6_RESERVED) && (d7 == DR7_FIXED_1))) { 137 137 printk("%sDR0: %016lx DR1: %016lx DR2: %016lx\n", 138 138 log_lvl, d0, d1, d2); 139 139 printk("%sDR3: %016lx DR6: %016lx DR7: %016lx\n",
+4
arch/x86/kernel/signal_32.c
··· 152 152 struct sigframe_ia32 __user *frame = (struct sigframe_ia32 __user *)(regs->sp-8); 153 153 sigset_t set; 154 154 155 + prevent_single_step_upon_eretu(regs); 156 + 155 157 if (!access_ok(frame, sizeof(*frame))) 156 158 goto badframe; 157 159 if (__get_user(set.sig[0], &frame->sc.oldmask) ··· 176 174 struct pt_regs *regs = current_pt_regs(); 177 175 struct rt_sigframe_ia32 __user *frame; 178 176 sigset_t set; 177 + 178 + prevent_single_step_upon_eretu(regs); 179 179 180 180 frame = (struct rt_sigframe_ia32 __user *)(regs->sp - 4); 181 181
+4
arch/x86/kernel/signal_64.c
··· 250 250 sigset_t set; 251 251 unsigned long uc_flags; 252 252 253 + prevent_single_step_upon_eretu(regs); 254 + 253 255 frame = (struct rt_sigframe __user *)(regs->sp - sizeof(long)); 254 256 if (!access_ok(frame, sizeof(*frame))) 255 257 goto badframe; ··· 367 365 struct rt_sigframe_x32 __user *frame; 368 366 sigset_t set; 369 367 unsigned long uc_flags; 368 + 369 + prevent_single_step_upon_eretu(regs); 370 370 371 371 frame = (struct rt_sigframe_x32 __user *)(regs->sp - 8); 372 372
+21 -13
arch/x86/kernel/traps.c
··· 1022 1022 #endif 1023 1023 } 1024 1024 1025 - static __always_inline unsigned long debug_read_clear_dr6(void) 1025 + static __always_inline unsigned long debug_read_reset_dr6(void) 1026 1026 { 1027 1027 unsigned long dr6; 1028 + 1029 + get_debugreg(dr6, 6); 1030 + dr6 ^= DR6_RESERVED; /* Flip to positive polarity */ 1028 1031 1029 1032 /* 1030 1033 * The Intel SDM says: 1031 1034 * 1032 - * Certain debug exceptions may clear bits 0-3. The remaining 1033 - * contents of the DR6 register are never cleared by the 1034 - * processor. To avoid confusion in identifying debug 1035 - * exceptions, debug handlers should clear the register before 1036 - * returning to the interrupted task. 1035 + * Certain debug exceptions may clear bits 0-3 of DR6. 1037 1036 * 1038 + * BLD induced #DB clears DR6.BLD and any other debug 1039 + * exception doesn't modify DR6.BLD. 1040 + * 1041 + * RTM induced #DB clears DR6.RTM and any other debug 1042 + * exception sets DR6.RTM. 1043 + * 1044 + * To avoid confusion in identifying debug exceptions, 1045 + * debug handlers should set DR6.BLD and DR6.RTM, and 1046 + * clear other DR6 bits before returning. 1047 + * 1048 + * Keep it simple: write DR6 with its architectural reset 1049 + * value 0xFFFF0FF0, defined as DR6_RESERVED, immediately. 1039 1049 */ 1040 - get_debugreg(dr6, 6); 1041 1050 set_debugreg(DR6_RESERVED, 6); 1042 - dr6 ^= DR6_RESERVED; /* Flip to positive polarity */ 1043 1051 1044 1052 return dr6; 1045 1053 } ··· 1247 1239 /* IST stack entry */ 1248 1240 DEFINE_IDTENTRY_DEBUG(exc_debug) 1249 1241 { 1250 - exc_debug_kernel(regs, debug_read_clear_dr6()); 1242 + exc_debug_kernel(regs, debug_read_reset_dr6()); 1251 1243 } 1252 1244 1253 1245 /* User entry, runs on regular task stack */ 1254 1246 DEFINE_IDTENTRY_DEBUG_USER(exc_debug) 1255 1247 { 1256 - exc_debug_user(regs, debug_read_clear_dr6()); 1248 + exc_debug_user(regs, debug_read_reset_dr6()); 1257 1249 } 1258 1250 1259 1251 #ifdef CONFIG_X86_FRED ··· 1272 1264 { 1273 1265 /* 1274 1266 * FRED #DB stores DR6 on the stack in the format which 1275 - * debug_read_clear_dr6() returns for the IDT entry points. 1267 + * debug_read_reset_dr6() returns for the IDT entry points. 1276 1268 */ 1277 1269 unsigned long dr6 = fred_event_data(regs); 1278 1270 ··· 1287 1279 /* 32 bit does not have separate entry points. */ 1288 1280 DEFINE_IDTENTRY_RAW(exc_debug) 1289 1281 { 1290 - unsigned long dr6 = debug_read_clear_dr6(); 1282 + unsigned long dr6 = debug_read_reset_dr6(); 1291 1283 1292 1284 if (user_mode(regs)) 1293 1285 exc_debug_user(regs, dr6);
+76 -7
arch/x86/kvm/vmx/tdx.c
··· 1212 1212 /* 1213 1213 * Converting TDVMCALL_MAP_GPA to KVM_HC_MAP_GPA_RANGE requires 1214 1214 * userspace to enable KVM_CAP_EXIT_HYPERCALL with KVM_HC_MAP_GPA_RANGE 1215 - * bit set. If not, the error code is not defined in GHCI for TDX, use 1216 - * TDVMCALL_STATUS_INVALID_OPERAND for this case. 1215 + * bit set. This is a base call so it should always be supported, but 1216 + * KVM has no way to ensure that userspace implements the GHCI correctly. 1217 + * So if KVM_HC_MAP_GPA_RANGE does not cause a VMEXIT, return an error 1218 + * to the guest. 1217 1219 */ 1218 1220 if (!user_exit_on_hypercall(vcpu->kvm, KVM_HC_MAP_GPA_RANGE)) { 1219 - ret = TDVMCALL_STATUS_INVALID_OPERAND; 1221 + ret = TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED; 1220 1222 goto error; 1221 1223 } 1222 1224 ··· 1451 1449 return 1; 1452 1450 } 1453 1451 1452 + static int tdx_complete_get_td_vm_call_info(struct kvm_vcpu *vcpu) 1453 + { 1454 + struct vcpu_tdx *tdx = to_tdx(vcpu); 1455 + 1456 + tdvmcall_set_return_code(vcpu, vcpu->run->tdx.get_tdvmcall_info.ret); 1457 + 1458 + /* 1459 + * For now, there is no TDVMCALL beyond GHCI base API supported by KVM 1460 + * directly without the support from userspace, just set the value 1461 + * returned from userspace. 1462 + */ 1463 + tdx->vp_enter_args.r11 = vcpu->run->tdx.get_tdvmcall_info.r11; 1464 + tdx->vp_enter_args.r12 = vcpu->run->tdx.get_tdvmcall_info.r12; 1465 + tdx->vp_enter_args.r13 = vcpu->run->tdx.get_tdvmcall_info.r13; 1466 + tdx->vp_enter_args.r14 = vcpu->run->tdx.get_tdvmcall_info.r14; 1467 + 1468 + return 1; 1469 + } 1470 + 1454 1471 static int tdx_get_td_vm_call_info(struct kvm_vcpu *vcpu) 1455 1472 { 1456 1473 struct vcpu_tdx *tdx = to_tdx(vcpu); 1457 1474 1458 - if (tdx->vp_enter_args.r12) 1459 - tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND); 1460 - else { 1475 + switch (tdx->vp_enter_args.r12) { 1476 + case 0: 1461 1477 tdx->vp_enter_args.r11 = 0; 1478 + tdx->vp_enter_args.r12 = 0; 1462 1479 tdx->vp_enter_args.r13 = 0; 1463 1480 tdx->vp_enter_args.r14 = 0; 1481 + tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_SUCCESS); 1482 + return 1; 1483 + case 1: 1484 + vcpu->run->tdx.get_tdvmcall_info.leaf = tdx->vp_enter_args.r12; 1485 + vcpu->run->exit_reason = KVM_EXIT_TDX; 1486 + vcpu->run->tdx.flags = 0; 1487 + vcpu->run->tdx.nr = TDVMCALL_GET_TD_VM_CALL_INFO; 1488 + vcpu->run->tdx.get_tdvmcall_info.ret = TDVMCALL_STATUS_SUCCESS; 1489 + vcpu->run->tdx.get_tdvmcall_info.r11 = 0; 1490 + vcpu->run->tdx.get_tdvmcall_info.r12 = 0; 1491 + vcpu->run->tdx.get_tdvmcall_info.r13 = 0; 1492 + vcpu->run->tdx.get_tdvmcall_info.r14 = 0; 1493 + vcpu->arch.complete_userspace_io = tdx_complete_get_td_vm_call_info; 1494 + return 0; 1495 + default: 1496 + tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND); 1497 + return 1; 1464 1498 } 1499 + } 1500 + 1501 + static int tdx_complete_simple(struct kvm_vcpu *vcpu) 1502 + { 1503 + tdvmcall_set_return_code(vcpu, vcpu->run->tdx.unknown.ret); 1465 1504 return 1; 1505 + } 1506 + 1507 + static int tdx_get_quote(struct kvm_vcpu *vcpu) 1508 + { 1509 + struct vcpu_tdx *tdx = to_tdx(vcpu); 1510 + u64 gpa = tdx->vp_enter_args.r12; 1511 + u64 size = tdx->vp_enter_args.r13; 1512 + 1513 + /* The gpa of buffer must have shared bit set. */ 1514 + if (vt_is_tdx_private_gpa(vcpu->kvm, gpa)) { 1515 + tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND); 1516 + return 1; 1517 + } 1518 + 1519 + vcpu->run->exit_reason = KVM_EXIT_TDX; 1520 + vcpu->run->tdx.flags = 0; 1521 + vcpu->run->tdx.nr = TDVMCALL_GET_QUOTE; 1522 + vcpu->run->tdx.get_quote.ret = TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED; 1523 + vcpu->run->tdx.get_quote.gpa = gpa & ~gfn_to_gpa(kvm_gfn_direct_bits(tdx->vcpu.kvm)); 1524 + vcpu->run->tdx.get_quote.size = size; 1525 + 1526 + vcpu->arch.complete_userspace_io = tdx_complete_simple; 1527 + 1528 + return 0; 1466 1529 } 1467 1530 1468 1531 static int handle_tdvmcall(struct kvm_vcpu *vcpu) ··· 1539 1472 return tdx_report_fatal_error(vcpu); 1540 1473 case TDVMCALL_GET_TD_VM_CALL_INFO: 1541 1474 return tdx_get_td_vm_call_info(vcpu); 1475 + case TDVMCALL_GET_QUOTE: 1476 + return tdx_get_quote(vcpu); 1542 1477 default: 1543 1478 break; 1544 1479 } 1545 1480 1546 - tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND); 1481 + tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED); 1547 1482 return 1; 1548 1483 }
+2 -2
arch/x86/kvm/x86.c
··· 11035 11035 11036 11036 if (unlikely(vcpu->arch.switch_db_regs && 11037 11037 !(vcpu->arch.switch_db_regs & KVM_DEBUGREG_AUTO_SWITCH))) { 11038 - set_debugreg(0, 7); 11038 + set_debugreg(DR7_FIXED_1, 7); 11039 11039 set_debugreg(vcpu->arch.eff_db[0], 0); 11040 11040 set_debugreg(vcpu->arch.eff_db[1], 1); 11041 11041 set_debugreg(vcpu->arch.eff_db[2], 2); ··· 11044 11044 if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)) 11045 11045 kvm_x86_call(set_dr6)(vcpu, vcpu->arch.dr6); 11046 11046 } else if (unlikely(hw_breakpoint_active())) { 11047 - set_debugreg(0, 7); 11047 + set_debugreg(DR7_FIXED_1, 7); 11048 11048 } 11049 11049 11050 11050 vcpu->arch.host_debugctl = get_debugctlmsr();
-3
arch/x86/mm/init_32.c
··· 30 30 #include <linux/initrd.h> 31 31 #include <linux/cpumask.h> 32 32 #include <linux/gfp.h> 33 - #include <linux/execmem.h> 34 33 35 34 #include <asm/asm.h> 36 35 #include <asm/bios_ebda.h> ··· 747 748 set_pages_ro(virt_to_page(start), size >> PAGE_SHIFT); 748 749 pr_info("Write protecting kernel text and read-only data: %luk\n", 749 750 size >> 10); 750 - 751 - execmem_cache_make_ro(); 752 751 753 752 kernel_set_to_readonly = 1; 754 753
-3
arch/x86/mm/init_64.c
··· 34 34 #include <linux/gfp.h> 35 35 #include <linux/kcore.h> 36 36 #include <linux/bootmem_info.h> 37 - #include <linux/execmem.h> 38 37 39 38 #include <asm/processor.h> 40 39 #include <asm/bios_ebda.h> ··· 1390 1391 printk(KERN_INFO "Write protecting the kernel read-only data: %luk\n", 1391 1392 (end - start) >> 10); 1392 1393 set_memory_ro(start, (end - start) >> PAGE_SHIFT); 1393 - 1394 - execmem_cache_make_ro(); 1395 1394 1396 1395 kernel_set_to_readonly = 1; 1397 1396
+3
arch/x86/mm/pat/set_memory.c
··· 1257 1257 pgprot_t pgprot; 1258 1258 int i = 0; 1259 1259 1260 + if (!cpu_feature_enabled(X86_FEATURE_PSE)) 1261 + return 0; 1262 + 1260 1263 addr &= PMD_MASK; 1261 1264 pte = pte_offset_kernel(pmd, addr); 1262 1265 first = *pte;
+5
arch/x86/mm/pti.c
··· 98 98 return; 99 99 100 100 setup_force_cpu_cap(X86_FEATURE_PTI); 101 + 102 + if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) { 103 + pr_debug("PTI enabled, disabling INVLPGB\n"); 104 + setup_clear_cpu_cap(X86_FEATURE_INVLPGB); 105 + } 101 106 } 102 107 103 108 static int __init pti_parse_cmdline(char *arg)
+1 -1
arch/x86/um/ptrace.c
··· 161 161 from = kbuf; 162 162 } 163 163 164 - return um_fxsr_from_i387(fxsave, &buf); 164 + return um_fxsr_from_i387(fxsave, from); 165 165 } 166 166 #endif 167 167
+3 -2
arch/x86/virt/vmx/tdx/tdx.c
··· 75 75 args->r9, args->r10, args->r11); 76 76 } 77 77 78 - static inline int sc_retry_prerr(sc_func_t func, sc_err_func_t err_func, 79 - u64 fn, struct tdx_module_args *args) 78 + static __always_inline int sc_retry_prerr(sc_func_t func, 79 + sc_err_func_t err_func, 80 + u64 fn, struct tdx_module_args *args) 80 81 { 81 82 u64 sret = sc_retry(func, fn, args); 82 83
+15 -11
block/genhd.c
··· 128 128 static void bdev_count_inflight_rw(struct block_device *part, 129 129 unsigned int inflight[2], bool mq_driver) 130 130 { 131 + int write = 0; 132 + int read = 0; 131 133 int cpu; 132 134 133 135 if (mq_driver) { 134 136 blk_mq_in_driver_rw(part, inflight); 135 - } else { 136 - for_each_possible_cpu(cpu) { 137 - inflight[READ] += part_stat_local_read_cpu( 138 - part, in_flight[READ], cpu); 139 - inflight[WRITE] += part_stat_local_read_cpu( 140 - part, in_flight[WRITE], cpu); 141 - } 137 + return; 142 138 } 143 139 144 - if (WARN_ON_ONCE((int)inflight[READ] < 0)) 145 - inflight[READ] = 0; 146 - if (WARN_ON_ONCE((int)inflight[WRITE] < 0)) 147 - inflight[WRITE] = 0; 140 + for_each_possible_cpu(cpu) { 141 + read += part_stat_local_read_cpu(part, in_flight[READ], cpu); 142 + write += part_stat_local_read_cpu(part, in_flight[WRITE], cpu); 143 + } 144 + 145 + /* 146 + * While iterating all CPUs, some IOs may be issued from a CPU already 147 + * traversed and complete on a CPU that has not yet been traversed, 148 + * causing the inflight number to be negative. 149 + */ 150 + inflight[READ] = read > 0 ? read : 0; 151 + inflight[WRITE] = write > 0 ? write : 0; 148 152 } 149 153 150 154 /**
+21 -4
crypto/Kconfig
··· 176 176 177 177 config CRYPTO_SELFTESTS 178 178 bool "Enable cryptographic self-tests" 179 - depends on DEBUG_KERNEL 179 + depends on EXPERT 180 180 help 181 181 Enable the cryptographic self-tests. 182 182 183 183 The cryptographic self-tests run at boot time, or at algorithm 184 184 registration time if algorithms are dynamically loaded later. 185 185 186 - This is primarily intended for developer use. It should not be 187 - enabled in production kernels, unless you are trying to use these 188 - tests to fulfill a FIPS testing requirement. 186 + There are two main use cases for these tests: 187 + 188 + - Development and pre-release testing. In this case, also enable 189 + CRYPTO_SELFTESTS_FULL to get the full set of tests. All crypto code 190 + in the kernel is expected to pass the full set of tests. 191 + 192 + - Production kernels, to help prevent buggy drivers from being used 193 + and/or meet FIPS 140-3 pre-operational testing requirements. In 194 + this case, enable CRYPTO_SELFTESTS but not CRYPTO_SELFTESTS_FULL. 195 + 196 + config CRYPTO_SELFTESTS_FULL 197 + bool "Enable the full set of cryptographic self-tests" 198 + depends on CRYPTO_SELFTESTS 199 + help 200 + Enable the full set of cryptographic self-tests for each algorithm. 201 + 202 + The full set of tests should be enabled for development and 203 + pre-release testing, but not in production kernels. 204 + 205 + All crypto code in the kernel is expected to pass the full tests. 189 206 190 207 config CRYPTO_NULL 191 208 tristate "Null algorithms"
+3 -1
crypto/ahash.c
··· 600 600 601 601 static int ahash_def_finup_finish1(struct ahash_request *req, int err) 602 602 { 603 + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); 604 + 603 605 if (err) 604 606 goto out; 605 607 606 608 req->base.complete = ahash_def_finup_done2; 607 609 608 - err = crypto_ahash_final(req); 610 + err = crypto_ahash_alg(tfm)->final(req); 609 611 if (err == -EINPROGRESS || err == -EBUSY) 610 612 return err; 611 613
+12 -3
crypto/testmgr.c
··· 45 45 module_param(notests, bool, 0644); 46 46 MODULE_PARM_DESC(notests, "disable all crypto self-tests"); 47 47 48 + #ifdef CONFIG_CRYPTO_SELFTESTS_FULL 48 49 static bool noslowtests; 49 50 module_param(noslowtests, bool, 0644); 50 51 MODULE_PARM_DESC(noslowtests, "disable slow crypto self-tests"); ··· 53 52 static unsigned int fuzz_iterations = 100; 54 53 module_param(fuzz_iterations, uint, 0644); 55 54 MODULE_PARM_DESC(fuzz_iterations, "number of fuzz test iterations"); 55 + #else 56 + #define noslowtests 1 57 + #define fuzz_iterations 0 58 + #endif 56 59 57 60 #ifndef CONFIG_CRYPTO_SELFTESTS 58 61 ··· 324 319 325 320 /* 326 321 * The following are the lists of testvec_configs to test for each algorithm 327 - * type when the fast crypto self-tests are enabled. They aim to provide good 328 - * test coverage, while keeping the test time much shorter than the full tests 329 - * so that the fast tests can be used to fulfill FIPS 140 testing requirements. 322 + * type when the "fast" crypto self-tests are enabled. They aim to provide good 323 + * test coverage, while keeping the test time much shorter than the "full" tests 324 + * so that the "fast" tests can be enabled in a wider range of circumstances. 330 325 */ 331 326 332 327 /* Configs for skciphers and aeads */ ··· 1188 1183 1189 1184 static void crypto_disable_simd_for_test(void) 1190 1185 { 1186 + #ifdef CONFIG_CRYPTO_SELFTESTS_FULL 1191 1187 migrate_disable(); 1192 1188 __this_cpu_write(crypto_simd_disabled_for_test, true); 1189 + #endif 1193 1190 } 1194 1191 1195 1192 static void crypto_reenable_simd_for_test(void) 1196 1193 { 1194 + #ifdef CONFIG_CRYPTO_SELFTESTS_FULL 1197 1195 __this_cpu_write(crypto_simd_disabled_for_test, false); 1198 1196 migrate_enable(); 1197 + #endif 1199 1198 } 1200 1199 1201 1200 /*
+47 -78
crypto/wp512.c
··· 21 21 */ 22 22 #include <crypto/internal/hash.h> 23 23 #include <linux/init.h> 24 + #include <linux/kernel.h> 24 25 #include <linux/module.h> 25 - #include <linux/mm.h> 26 - #include <asm/byteorder.h> 27 - #include <linux/types.h> 26 + #include <linux/string.h> 27 + #include <linux/unaligned.h> 28 28 29 29 #define WP512_DIGEST_SIZE 64 30 30 #define WP384_DIGEST_SIZE 48 ··· 37 37 38 38 struct wp512_ctx { 39 39 u8 bitLength[WP512_LENGTHBYTES]; 40 - u8 buffer[WP512_BLOCK_SIZE]; 41 - int bufferBits; 42 - int bufferPos; 43 40 u64 hash[WP512_DIGEST_SIZE/8]; 44 41 }; 45 42 ··· 776 779 * The core Whirlpool transform. 777 780 */ 778 781 779 - static __no_kmsan_checks void wp512_process_buffer(struct wp512_ctx *wctx) { 782 + static __no_kmsan_checks void wp512_process_buffer(struct wp512_ctx *wctx, 783 + const u8 *buffer) { 780 784 int i, r; 781 785 u64 K[8]; /* the round key */ 782 786 u64 block[8]; /* mu(buffer) */ 783 787 u64 state[8]; /* the cipher state */ 784 788 u64 L[8]; 785 - const __be64 *buffer = (const __be64 *)wctx->buffer; 786 789 787 790 for (i = 0; i < 8; i++) 788 - block[i] = be64_to_cpu(buffer[i]); 791 + block[i] = get_unaligned_be64(buffer + i * 8); 789 792 790 793 state[0] = block[0] ^ (K[0] = wctx->hash[0]); 791 794 state[1] = block[1] ^ (K[1] = wctx->hash[1]); ··· 988 991 int i; 989 992 990 993 memset(wctx->bitLength, 0, 32); 991 - wctx->bufferBits = wctx->bufferPos = 0; 992 - wctx->buffer[0] = 0; 993 994 for (i = 0; i < 8; i++) { 994 995 wctx->hash[i] = 0L; 995 996 } ··· 995 1000 return 0; 996 1001 } 997 1002 998 - static int wp512_update(struct shash_desc *desc, const u8 *source, 999 - unsigned int len) 1003 + static void wp512_add_length(u8 *bitLength, u64 value) 1000 1004 { 1001 - struct wp512_ctx *wctx = shash_desc_ctx(desc); 1002 - int sourcePos = 0; 1003 - unsigned int bits_len = len * 8; // convert to number of bits 1004 - int sourceGap = (8 - ((int)bits_len & 7)) & 7; 1005 - int bufferRem = wctx->bufferBits & 7; 1005 + u32 carry; 1006 1006 
int i; 1007 - u32 b, carry; 1008 - u8 *buffer = wctx->buffer; 1009 - u8 *bitLength = wctx->bitLength; 1010 - int bufferBits = wctx->bufferBits; 1011 - int bufferPos = wctx->bufferPos; 1012 1007 1013 - u64 value = bits_len; 1014 1008 for (i = 31, carry = 0; i >= 0 && (carry != 0 || value != 0ULL); i--) { 1015 1009 carry += bitLength[i] + ((u32)value & 0xff); 1016 1010 bitLength[i] = (u8)carry; 1017 1011 carry >>= 8; 1018 1012 value >>= 8; 1019 1013 } 1020 - while (bits_len > 8) { 1021 - b = ((source[sourcePos] << sourceGap) & 0xff) | 1022 - ((source[sourcePos + 1] & 0xff) >> (8 - sourceGap)); 1023 - buffer[bufferPos++] |= (u8)(b >> bufferRem); 1024 - bufferBits += 8 - bufferRem; 1025 - if (bufferBits == WP512_BLOCK_SIZE * 8) { 1026 - wp512_process_buffer(wctx); 1027 - bufferBits = bufferPos = 0; 1028 - } 1029 - buffer[bufferPos] = b << (8 - bufferRem); 1030 - bufferBits += bufferRem; 1031 - bits_len -= 8; 1032 - sourcePos++; 1033 - } 1034 - if (bits_len > 0) { 1035 - b = (source[sourcePos] << sourceGap) & 0xff; 1036 - buffer[bufferPos] |= b >> bufferRem; 1037 - } else { 1038 - b = 0; 1039 - } 1040 - if (bufferRem + bits_len < 8) { 1041 - bufferBits += bits_len; 1042 - } else { 1043 - bufferPos++; 1044 - bufferBits += 8 - bufferRem; 1045 - bits_len -= 8 - bufferRem; 1046 - if (bufferBits == WP512_BLOCK_SIZE * 8) { 1047 - wp512_process_buffer(wctx); 1048 - bufferBits = bufferPos = 0; 1049 - } 1050 - buffer[bufferPos] = b << (8 - bufferRem); 1051 - bufferBits += (int)bits_len; 1052 - } 1053 - 1054 - wctx->bufferBits = bufferBits; 1055 - wctx->bufferPos = bufferPos; 1056 - 1057 - return 0; 1058 1014 } 1059 1015 1060 - static int wp512_final(struct shash_desc *desc, u8 *out) 1016 + static int wp512_update(struct shash_desc *desc, const u8 *source, 1017 + unsigned int len) 1018 + { 1019 + struct wp512_ctx *wctx = shash_desc_ctx(desc); 1020 + unsigned int remain = len % WP512_BLOCK_SIZE; 1021 + u64 bits_len = (len - remain) * 8ull; 1022 + u8 *bitLength = wctx->bitLength; 
1023 + 1024 + wp512_add_length(bitLength, bits_len); 1025 + do { 1026 + wp512_process_buffer(wctx, source); 1027 + source += WP512_BLOCK_SIZE; 1028 + bits_len -= WP512_BLOCK_SIZE * 8; 1029 + } while (bits_len); 1030 + 1031 + return remain; 1032 + } 1033 + 1034 + static int wp512_finup(struct shash_desc *desc, const u8 *src, 1035 + unsigned int bufferPos, u8 *out) 1061 1036 { 1062 1037 struct wp512_ctx *wctx = shash_desc_ctx(desc); 1063 1038 int i; 1064 - u8 *buffer = wctx->buffer; 1065 1039 u8 *bitLength = wctx->bitLength; 1066 - int bufferBits = wctx->bufferBits; 1067 - int bufferPos = wctx->bufferPos; 1068 1040 __be64 *digest = (__be64 *)out; 1041 + u8 buffer[WP512_BLOCK_SIZE]; 1069 1042 1070 - buffer[bufferPos] |= 0x80U >> (bufferBits & 7); 1043 + wp512_add_length(bitLength, bufferPos * 8); 1044 + memcpy(buffer, src, bufferPos); 1045 + buffer[bufferPos] = 0x80U; 1071 1046 bufferPos++; 1072 1047 if (bufferPos > WP512_BLOCK_SIZE - WP512_LENGTHBYTES) { 1073 1048 if (bufferPos < WP512_BLOCK_SIZE) 1074 1049 memset(&buffer[bufferPos], 0, WP512_BLOCK_SIZE - bufferPos); 1075 - wp512_process_buffer(wctx); 1050 + wp512_process_buffer(wctx, buffer); 1076 1051 bufferPos = 0; 1077 1052 } 1078 1053 if (bufferPos < WP512_BLOCK_SIZE - WP512_LENGTHBYTES) ··· 1051 1086 bufferPos = WP512_BLOCK_SIZE - WP512_LENGTHBYTES; 1052 1087 memcpy(&buffer[WP512_BLOCK_SIZE - WP512_LENGTHBYTES], 1053 1088 bitLength, WP512_LENGTHBYTES); 1054 - wp512_process_buffer(wctx); 1089 + wp512_process_buffer(wctx, buffer); 1090 + memzero_explicit(buffer, sizeof(buffer)); 1055 1091 for (i = 0; i < WP512_DIGEST_SIZE/8; i++) 1056 1092 digest[i] = cpu_to_be64(wctx->hash[i]); 1057 - wctx->bufferBits = bufferBits; 1058 - wctx->bufferPos = bufferPos; 1059 1093 1060 1094 return 0; 1061 1095 } 1062 1096 1063 - static int wp384_final(struct shash_desc *desc, u8 *out) 1097 + static int wp384_finup(struct shash_desc *desc, const u8 *src, 1098 + unsigned int len, u8 *out) 1064 1099 { 1065 1100 u8 D[64]; 1066 1101 1067 
- wp512_final(desc, D); 1102 + wp512_finup(desc, src, len, D); 1068 1103 memcpy(out, D, WP384_DIGEST_SIZE); 1069 1104 memzero_explicit(D, WP512_DIGEST_SIZE); 1070 1105 1071 1106 return 0; 1072 1107 } 1073 1108 1074 - static int wp256_final(struct shash_desc *desc, u8 *out) 1109 + static int wp256_finup(struct shash_desc *desc, const u8 *src, 1110 + unsigned int len, u8 *out) 1075 1111 { 1076 1112 u8 D[64]; 1077 1113 1078 - wp512_final(desc, D); 1114 + wp512_finup(desc, src, len, D); 1079 1115 memcpy(out, D, WP256_DIGEST_SIZE); 1080 1116 memzero_explicit(D, WP512_DIGEST_SIZE); 1081 1117 ··· 1087 1121 .digestsize = WP512_DIGEST_SIZE, 1088 1122 .init = wp512_init, 1089 1123 .update = wp512_update, 1090 - .final = wp512_final, 1124 + .finup = wp512_finup, 1091 1125 .descsize = sizeof(struct wp512_ctx), 1092 1126 .base = { 1093 1127 .cra_name = "wp512", 1094 1128 .cra_driver_name = "wp512-generic", 1129 + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, 1095 1130 .cra_blocksize = WP512_BLOCK_SIZE, 1096 1131 .cra_module = THIS_MODULE, 1097 1132 } ··· 1100 1133 .digestsize = WP384_DIGEST_SIZE, 1101 1134 .init = wp512_init, 1102 1135 .update = wp512_update, 1103 - .final = wp384_final, 1136 + .finup = wp384_finup, 1104 1137 .descsize = sizeof(struct wp512_ctx), 1105 1138 .base = { 1106 1139 .cra_name = "wp384", 1107 1140 .cra_driver_name = "wp384-generic", 1141 + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, 1108 1142 .cra_blocksize = WP512_BLOCK_SIZE, 1109 1143 .cra_module = THIS_MODULE, 1110 1144 } ··· 1113 1145 .digestsize = WP256_DIGEST_SIZE, 1114 1146 .init = wp512_init, 1115 1147 .update = wp512_update, 1116 - .final = wp256_final, 1148 + .finup = wp256_finup, 1117 1149 .descsize = sizeof(struct wp512_ctx), 1118 1150 .base = { 1119 1151 .cra_name = "wp256", 1120 1152 .cra_driver_name = "wp256-generic", 1153 + .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY, 1121 1154 .cra_blocksize = WP512_BLOCK_SIZE, 1122 1155 .cra_module = THIS_MODULE, 1123 1156 }
+7
drivers/acpi/acpica/dsmethod.c
··· 483 483 return_ACPI_STATUS(AE_NULL_OBJECT); 484 484 } 485 485 486 + if (this_walk_state->num_operands < obj_desc->method.param_count) { 487 + ACPI_ERROR((AE_INFO, "Missing argument for method [%4.4s]", 488 + acpi_ut_get_node_name(method_node))); 489 + 490 + return_ACPI_STATUS(AE_AML_UNINITIALIZED_ARG); 491 + } 492 + 486 493 /* Init for new method, possibly wait on method mutex */ 487 494 488 495 status =
+33 -6
drivers/ata/ahci.c
··· 1410 1410 1411 1411 static bool ahci_broken_lpm(struct pci_dev *pdev) 1412 1412 { 1413 + /* 1414 + * Platforms with LPM problems. 1415 + * If driver_data is NULL, there is no existing BIOS version with 1416 + * functioning LPM. 1417 + * If driver_data is non-NULL, then driver_data contains the DMI BIOS 1418 + * build date of the first BIOS version with functioning LPM (i.e. older 1419 + * BIOS versions have broken LPM). 1420 + */ 1413 1421 static const struct dmi_system_id sysids[] = { 1414 - /* Various Lenovo 50 series have LPM issues with older BIOSen */ 1415 1422 { 1416 1423 .matches = { 1417 1424 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), ··· 1445 1438 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 1446 1439 DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W541"), 1447 1440 }, 1441 + .driver_data = "20180409", /* 2.35 */ 1442 + }, 1443 + { 1444 + .matches = { 1445 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 1446 + DMI_MATCH(DMI_PRODUCT_NAME, "ASUSPRO D840MB_M840SA"), 1447 + }, 1448 + /* 320 is broken, there is no known good version. */ 1449 + }, 1450 + { 1448 1451 /* 1449 - * Note date based on release notes, 2.35 has been 1450 - * reported to be good, but I've been unable to get 1451 - * a hold of the reporter to get the DMI BIOS date. 1452 - * TODO: fix this. 1452 + * AMD 500 Series Chipset SATA Controller [1022:43eb] 1453 + * on this motherboard timeouts on ports 5 and 6 when 1454 + * LPM is enabled, at least with WDC WD20EFAX-68FB5N0 1455 + * hard drives. LPM with the same drive works fine on 1456 + * all other ports on the same controller. 1453 1457 */ 1454 - .driver_data = "20180310", /* 2.35 */ 1458 + .matches = { 1459 + DMI_MATCH(DMI_BOARD_VENDOR, 1460 + "ASUSTeK COMPUTER INC."), 1461 + DMI_MATCH(DMI_BOARD_NAME, 1462 + "ROG STRIX B550-F GAMING (WI-FI)"), 1463 + }, 1464 + /* 3621 is broken, there is no known good version. 
*/ 1455 1465 }, 1456 1466 { } /* terminate list */ 1457 1467 }; ··· 1478 1454 1479 1455 if (!dmi) 1480 1456 return false; 1457 + 1458 + if (!dmi->driver_data) 1459 + return true; 1481 1460 1482 1461 dmi_get_date(DMI_BIOS_DATE, &year, &month, &date); 1483 1462 snprintf(buf, sizeof(buf), "%04d%02d%02d", year, month, date);
+16 -8
drivers/ata/libata-acpi.c
··· 514 514 EXPORT_SYMBOL_GPL(ata_acpi_gtm_xfermask); 515 515 516 516 /** 517 - * ata_acpi_cbl_80wire - Check for 80 wire cable 517 + * ata_acpi_cbl_pata_type - Return PATA cable type 518 518 * @ap: Port to check 519 - * @gtm: GTM data to use 520 519 * 521 - * Return 1 if the @gtm indicates the BIOS selected an 80wire mode. 520 + * Return ATA_CBL_PATA* according to the transfer mode selected by BIOS 522 521 */ 523 - int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm) 522 + int ata_acpi_cbl_pata_type(struct ata_port *ap) 524 523 { 525 524 struct ata_device *dev; 525 + int ret = ATA_CBL_PATA_UNK; 526 + const struct ata_acpi_gtm *gtm = ata_acpi_init_gtm(ap); 527 + 528 + if (!gtm) 529 + return ATA_CBL_PATA40; 526 530 527 531 ata_for_each_dev(dev, &ap->link, ENABLED) { 528 532 unsigned int xfer_mask, udma_mask; ··· 534 530 xfer_mask = ata_acpi_gtm_xfermask(dev, gtm); 535 531 ata_unpack_xfermask(xfer_mask, NULL, NULL, &udma_mask); 536 532 537 - if (udma_mask & ~ATA_UDMA_MASK_40C) 538 - return 1; 533 + ret = ATA_CBL_PATA40; 534 + 535 + if (udma_mask & ~ATA_UDMA_MASK_40C) { 536 + ret = ATA_CBL_PATA80; 537 + break; 538 + } 539 539 } 540 540 541 - return 0; 541 + return ret; 542 542 } 543 - EXPORT_SYMBOL_GPL(ata_acpi_cbl_80wire); 543 + EXPORT_SYMBOL_GPL(ata_acpi_cbl_pata_type); 544 544 545 545 static void ata_acpi_gtf_to_tf(struct ata_device *dev, 546 546 const struct ata_acpi_gtf *gtf,
+1 -1
drivers/ata/pata_cs5536.c
··· 27 27 #include <scsi/scsi_host.h> 28 28 #include <linux/dmi.h> 29 29 30 - #ifdef CONFIG_X86_32 30 + #if defined(CONFIG_X86) && defined(CONFIG_X86_32) 31 31 #include <asm/msr.h> 32 32 static int use_msr; 33 33 module_param_named(msr, use_msr, int, 0644);
+1 -1
drivers/ata/pata_macio.c
··· 1298 1298 priv->dev = &pdev->dev; 1299 1299 1300 1300 /* Get MMIO regions */ 1301 - if (pci_request_regions(pdev, "pata-macio")) { 1301 + if (pcim_request_all_regions(pdev, "pata-macio")) { 1302 1302 dev_err(&pdev->dev, 1303 1303 "Cannot obtain PCI resources\n"); 1304 1304 return -EBUSY;
+4 -5
drivers/ata/pata_via.c
··· 201 201 two drives */ 202 202 if (ata66 & (0x10100000 >> (16 * ap->port_no))) 203 203 return ATA_CBL_PATA80; 204 + 204 205 /* Check with ACPI so we can spot BIOS reported SATA bridges */ 205 - if (ata_acpi_init_gtm(ap) && 206 - ata_acpi_cbl_80wire(ap, ata_acpi_init_gtm(ap))) 207 - return ATA_CBL_PATA80; 208 - return ATA_CBL_PATA40; 206 + return ata_acpi_cbl_pata_type(ap); 209 207 } 210 208 211 209 static int via_pre_reset(struct ata_link *link, unsigned long deadline) ··· 366 368 } 367 369 368 370 if (dev->class == ATA_DEV_ATAPI && 369 - dmi_check_system(no_atapi_dma_dmi_table)) { 371 + (dmi_check_system(no_atapi_dma_dmi_table) || 372 + config->id == PCI_DEVICE_ID_VIA_6415)) { 370 373 ata_dev_warn(dev, "controller locks up on ATAPI DMA, forcing PIO\n"); 371 374 mask &= ATA_MASK_PIO; 372 375 }
+3 -1
drivers/atm/atmtcp.c
··· 288 288 struct sk_buff *new_skb; 289 289 int result = 0; 290 290 291 - if (!skb->len) return 0; 291 + if (skb->len < sizeof(struct atmtcp_hdr)) 292 + goto done; 293 + 292 294 dev = vcc->dev_data; 293 295 hdr = (struct atmtcp_hdr *) skb->data; 294 296 if (hdr->length == ATMTCP_HDR_MAGIC) {
+5
drivers/atm/idt77252.c
··· 852 852 853 853 IDT77252_PRV_PADDR(skb) = dma_map_single(&card->pcidev->dev, skb->data, 854 854 skb->len, DMA_TO_DEVICE); 855 + if (dma_mapping_error(&card->pcidev->dev, IDT77252_PRV_PADDR(skb))) 856 + return -ENOMEM; 855 857 856 858 error = -EINVAL; 857 859 ··· 1859 1857 paddr = dma_map_single(&card->pcidev->dev, skb->data, 1860 1858 skb_end_pointer(skb) - skb->data, 1861 1859 DMA_FROM_DEVICE); 1860 + if (dma_mapping_error(&card->pcidev->dev, paddr)) 1861 + goto outpoolrm; 1862 1862 IDT77252_PRV_PADDR(skb) = paddr; 1863 1863 1864 1864 if (push_rx_skb(card, skb, queue)) { ··· 1875 1871 dma_unmap_single(&card->pcidev->dev, IDT77252_PRV_PADDR(skb), 1876 1872 skb_end_pointer(skb) - skb->data, DMA_FROM_DEVICE); 1877 1873 1874 + outpoolrm: 1878 1875 handle = IDT77252_PRV_POOL(skb); 1879 1876 card->sbpool[POOL_QUEUE(handle)].skb[POOL_INDEX(handle)] = NULL; 1880 1877
+1
drivers/block/aoe/aoe.h
··· 80 80 DEVFL_NEWSIZE = (1<<6), /* need to update dev size in block layer */ 81 81 DEVFL_FREEING = (1<<7), /* set when device is being cleaned up */ 82 82 DEVFL_FREED = (1<<8), /* device has been cleaned up */ 83 + DEVFL_DEAD = (1<<9), /* device has timed out of aoe_deadsecs */ 83 84 }; 84 85 85 86 enum {
+6 -2
drivers/block/aoe/aoecmd.c
··· 754 754 755 755 utgts = count_targets(d, NULL); 756 756 757 - if (d->flags & DEVFL_TKILL) { 757 + if (d->flags & (DEVFL_TKILL | DEVFL_DEAD)) { 758 758 spin_unlock_irqrestore(&d->lock, flags); 759 759 return; 760 760 } ··· 786 786 * to clean up. 787 787 */ 788 788 list_splice(&flist, &d->factive[0]); 789 - aoedev_downdev(d); 789 + d->flags |= DEVFL_DEAD; 790 + queue_work(aoe_wq, &d->work); 790 791 goto out; 791 792 } 792 793 ··· 898 897 aoecmd_sleepwork(struct work_struct *work) 899 898 { 900 899 struct aoedev *d = container_of(work, struct aoedev, work); 900 + 901 + if (d->flags & DEVFL_DEAD) 902 + aoedev_downdev(d); 901 903 902 904 if (d->flags & DEVFL_GDALLOC) 903 905 aoeblk_gdalloc(d);
+12 -1
drivers/block/aoe/aoedev.c
··· 198 198 { 199 199 struct aoetgt *t, **tt, **te; 200 200 struct list_head *head, *pos, *nx; 201 + struct request *rq, *rqnext; 201 202 int i; 203 + unsigned long flags; 202 204 203 - d->flags &= ~DEVFL_UP; 205 + spin_lock_irqsave(&d->lock, flags); 206 + d->flags &= ~(DEVFL_UP | DEVFL_DEAD); 207 + spin_unlock_irqrestore(&d->lock, flags); 204 208 205 209 /* clean out active and to-be-retransmitted buffers */ 206 210 for (i = 0; i < NFACTIVE; i++) { ··· 226 222 227 223 /* clean out the in-process request (if any) */ 228 224 aoe_failip(d); 225 + 226 + /* clean out any queued block requests */ 227 + list_for_each_entry_safe(rq, rqnext, &d->rq_list, queuelist) { 228 + list_del_init(&rq->queuelist); 229 + blk_mq_start_request(rq); 230 + blk_mq_end_request(rq, BLK_STS_IOERR); 231 + } 229 232 230 233 /* fast fail all pending I/O */ 231 234 if (d->blkq) {
+39 -11
drivers/block/ublk_drv.c
··· 1148 1148 blk_mq_end_request(req, res); 1149 1149 } 1150 1150 1151 - static void ublk_complete_io_cmd(struct ublk_io *io, struct request *req, 1152 - int res, unsigned issue_flags) 1151 + static struct io_uring_cmd *__ublk_prep_compl_io_cmd(struct ublk_io *io, 1152 + struct request *req) 1153 1153 { 1154 1154 /* read cmd first because req will overwrite it */ 1155 1155 struct io_uring_cmd *cmd = io->cmd; ··· 1164 1164 io->flags &= ~UBLK_IO_FLAG_ACTIVE; 1165 1165 1166 1166 io->req = req; 1167 + return cmd; 1168 + } 1169 + 1170 + static void ublk_complete_io_cmd(struct ublk_io *io, struct request *req, 1171 + int res, unsigned issue_flags) 1172 + { 1173 + struct io_uring_cmd *cmd = __ublk_prep_compl_io_cmd(io, req); 1167 1174 1168 1175 /* tell ublksrv one io request is coming */ 1169 1176 io_uring_cmd_done(cmd, res, 0, issue_flags); ··· 1423 1416 return BLK_STS_OK; 1424 1417 } 1425 1418 1419 + static inline bool ublk_belong_to_same_batch(const struct ublk_io *io, 1420 + const struct ublk_io *io2) 1421 + { 1422 + return (io_uring_cmd_ctx_handle(io->cmd) == 1423 + io_uring_cmd_ctx_handle(io2->cmd)) && 1424 + (io->task == io2->task); 1425 + } 1426 + 1426 1427 static void ublk_queue_rqs(struct rq_list *rqlist) 1427 1428 { 1428 1429 struct rq_list requeue_list = { }; ··· 1442 1427 struct ublk_queue *this_q = req->mq_hctx->driver_data; 1443 1428 struct ublk_io *this_io = &this_q->ios[req->tag]; 1444 1429 1445 - if (io && io->task != this_io->task && !rq_list_empty(&submit_list)) 1430 + if (io && !ublk_belong_to_same_batch(io, this_io) && 1431 + !rq_list_empty(&submit_list)) 1446 1432 ublk_queue_cmd_list(io, &submit_list); 1447 1433 io = this_io; 1448 1434 ··· 2164 2148 return 0; 2165 2149 } 2166 2150 2167 - static bool ublk_get_data(const struct ublk_queue *ubq, struct ublk_io *io) 2151 + static bool ublk_get_data(const struct ublk_queue *ubq, struct ublk_io *io, 2152 + struct request *req) 2168 2153 { 2169 - struct request *req = io->req; 2170 - 2171 2154 /* 2172 2155 
* We have handled UBLK_IO_NEED_GET_DATA command, 2173 2156 * so clear UBLK_IO_FLAG_NEED_GET_DATA now and just ··· 2193 2178 u32 cmd_op = cmd->cmd_op; 2194 2179 unsigned tag = ub_cmd->tag; 2195 2180 int ret = -EINVAL; 2181 + struct request *req; 2196 2182 2197 2183 pr_devel("%s: received: cmd op %d queue %d tag %d result %d\n", 2198 2184 __func__, cmd->cmd_op, ub_cmd->q_id, tag, ··· 2252 2236 goto out; 2253 2237 break; 2254 2238 case UBLK_IO_NEED_GET_DATA: 2255 - io->addr = ub_cmd->addr; 2256 - if (!ublk_get_data(ubq, io)) 2257 - return -EIOCBQUEUED; 2258 - 2259 - return UBLK_IO_RES_OK; 2239 + /* 2240 + * ublk_get_data() may fail and fallback to requeue, so keep 2241 + * uring_cmd active first and prepare for handling new requeued 2242 + * request 2243 + */ 2244 + req = io->req; 2245 + ublk_fill_io_cmd(io, cmd, ub_cmd->addr); 2246 + io->flags &= ~UBLK_IO_FLAG_OWNED_BY_SRV; 2247 + if (likely(ublk_get_data(ubq, io, req))) { 2248 + __ublk_prep_compl_io_cmd(io, req); 2249 + return UBLK_IO_RES_OK; 2250 + } 2251 + break; 2260 2252 default: 2261 2253 goto out; 2262 2254 } ··· 2848 2824 2849 2825 if (copy_from_user(&info, argp, sizeof(info))) 2850 2826 return -EFAULT; 2827 + 2828 + if (info.queue_depth > UBLK_MAX_QUEUE_DEPTH || !info.queue_depth || 2829 + info.nr_hw_queues > UBLK_MAX_NR_QUEUES || !info.nr_hw_queues) 2830 + return -EINVAL; 2851 2831 2852 2832 if (capable(CAP_SYS_ADMIN)) 2853 2833 info.flags &= ~UBLK_F_UNPRIVILEGED_DEV;
+31 -2
drivers/bluetooth/btintel_pcie.c
··· 2033 2033 data->hdev = NULL; 2034 2034 } 2035 2035 2036 + static void btintel_pcie_disable_interrupts(struct btintel_pcie_data *data) 2037 + { 2038 + spin_lock(&data->irq_lock); 2039 + btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_MSIX_FH_INT_MASK, data->fh_init_mask); 2040 + btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_MSIX_HW_INT_MASK, data->hw_init_mask); 2041 + spin_unlock(&data->irq_lock); 2042 + } 2043 + 2044 + static void btintel_pcie_enable_interrupts(struct btintel_pcie_data *data) 2045 + { 2046 + spin_lock(&data->irq_lock); 2047 + btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_MSIX_FH_INT_MASK, ~data->fh_init_mask); 2048 + btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_MSIX_HW_INT_MASK, ~data->hw_init_mask); 2049 + spin_unlock(&data->irq_lock); 2050 + } 2051 + 2052 + static void btintel_pcie_synchronize_irqs(struct btintel_pcie_data *data) 2053 + { 2054 + for (int i = 0; i < data->alloc_vecs; i++) 2055 + synchronize_irq(data->msix_entries[i].vector); 2056 + } 2057 + 2036 2058 static int btintel_pcie_setup_internal(struct hci_dev *hdev) 2037 2059 { 2038 2060 struct btintel_pcie_data *data = hci_get_drvdata(hdev); ··· 2174 2152 bt_dev_err(hdev, "Firmware download retry count: %d", 2175 2153 fw_dl_retry); 2176 2154 btintel_pcie_dump_debug_registers(hdev); 2155 + btintel_pcie_disable_interrupts(data); 2156 + btintel_pcie_synchronize_irqs(data); 2177 2157 err = btintel_pcie_reset_bt(data); 2178 2158 if (err) { 2179 2159 bt_dev_err(hdev, "Failed to do shr reset: %d", err); ··· 2183 2159 } 2184 2160 usleep_range(10000, 12000); 2185 2161 btintel_pcie_reset_ia(data); 2162 + btintel_pcie_enable_interrupts(data); 2186 2163 btintel_pcie_config_msix(data); 2187 2164 err = btintel_pcie_enable_bt(data); 2188 2165 if (err) { ··· 2316 2291 2317 2292 data = pci_get_drvdata(pdev); 2318 2293 2294 + btintel_pcie_disable_interrupts(data); 2295 + 2296 + btintel_pcie_synchronize_irqs(data); 2297 + 2298 + flush_work(&data->rx_work); 2299 + 2319 2300 btintel_pcie_reset_bt(data); 
2320 2301 for (int i = 0; i < data->alloc_vecs; i++) { 2321 2302 struct msix_entry *msix_entry; ··· 2333 2302 pci_free_irq_vectors(pdev); 2334 2303 2335 2304 btintel_pcie_release_hdev(data); 2336 - 2337 - flush_work(&data->rx_work); 2338 2305 2339 2306 destroy_workqueue(data->workqueue); 2340 2307
+10 -3
drivers/bluetooth/hci_qca.c
··· 2392 2392 */ 2393 2393 qcadev->bt_power->pwrseq = devm_pwrseq_get(&serdev->dev, 2394 2394 "bluetooth"); 2395 - if (IS_ERR(qcadev->bt_power->pwrseq)) 2396 - return PTR_ERR(qcadev->bt_power->pwrseq); 2397 2395 2398 - break; 2396 + /* 2397 + * Some modules have BT_EN enabled via a hardware pull-up, 2398 + * meaning it is not defined in the DTS and is not controlled 2399 + * through the power sequence. In such cases, fall through 2400 + * to follow the legacy flow. 2401 + */ 2402 + if (IS_ERR(qcadev->bt_power->pwrseq)) 2403 + qcadev->bt_power->pwrseq = NULL; 2404 + else 2405 + break; 2399 2406 } 2400 2407 fallthrough; 2401 2408 case QCA_WCN3950:
+13 -5
drivers/cxl/core/edac.c
··· 103 103 u8 *cap, u16 *cycle, u8 *flags, u8 *min_cycle) 104 104 { 105 105 struct cxl_mailbox *cxl_mbox; 106 - u8 min_scrub_cycle = U8_MAX; 107 106 struct cxl_region_params *p; 108 107 struct cxl_memdev *cxlmd; 109 108 struct cxl_region *cxlr; 109 + u8 min_scrub_cycle = 0; 110 110 int i, ret; 111 111 112 112 if (!cxl_ps_ctx->cxlr) { ··· 133 133 if (ret) 134 134 return ret; 135 135 136 + /* 137 + * The min_scrub_cycle of a region is the max of minimum scrub 138 + * cycles supported by memdevs that back the region. 139 + */ 136 140 if (min_cycle) 137 - min_scrub_cycle = min(*min_cycle, min_scrub_cycle); 141 + min_scrub_cycle = max(*min_cycle, min_scrub_cycle); 138 142 } 139 143 140 144 if (min_cycle) ··· 1103 1099 old_rec = xa_store(&array_rec->rec_gen_media, 1104 1100 le64_to_cpu(rec->media_hdr.phys_addr), rec, 1105 1101 GFP_KERNEL); 1106 - if (xa_is_err(old_rec)) 1102 + if (xa_is_err(old_rec)) { 1103 + kfree(rec); 1107 1104 return xa_err(old_rec); 1105 + } 1108 1106 1109 1107 kfree(old_rec); 1110 1108 ··· 1133 1127 old_rec = xa_store(&array_rec->rec_dram, 1134 1128 le64_to_cpu(rec->media_hdr.phys_addr), rec, 1135 1129 GFP_KERNEL); 1136 - if (xa_is_err(old_rec)) 1130 + if (xa_is_err(old_rec)) { 1131 + kfree(rec); 1137 1132 return xa_err(old_rec); 1133 + } 1138 1134 1139 1135 kfree(old_rec); 1140 1136 ··· 1323 1315 attrbs.bank = ctx->bank; 1324 1316 break; 1325 1317 case EDAC_REPAIR_RANK_SPARING: 1326 - attrbs.repair_type = CXL_BANK_SPARING; 1318 + attrbs.repair_type = CXL_RANK_SPARING; 1327 1319 break; 1328 1320 default: 1329 1321 return NULL;
+1 -1
drivers/cxl/core/features.c
··· 544 544 u32 flags; 545 545 546 546 if (rpc_in->op_size < sizeof(uuid_t)) 547 - return ERR_PTR(-EINVAL); 547 + return false; 548 548 549 549 feat = cxl_feature_info(cxlfs, &rpc_in->set_feat_in.uuid); 550 550 if (IS_ERR(feat))
+27 -20
drivers/cxl/core/ras.c
··· 31 31 ras_cap.header_log); 32 32 } 33 33 34 - static void cxl_cper_trace_corr_prot_err(struct pci_dev *pdev, 35 - struct cxl_ras_capability_regs ras_cap) 34 + static void cxl_cper_trace_corr_prot_err(struct cxl_memdev *cxlmd, 35 + struct cxl_ras_capability_regs ras_cap) 36 36 { 37 37 u32 status = ras_cap.cor_status & ~ras_cap.cor_mask; 38 - struct cxl_dev_state *cxlds; 39 38 40 - cxlds = pci_get_drvdata(pdev); 41 - if (!cxlds) 42 - return; 43 - 44 - trace_cxl_aer_correctable_error(cxlds->cxlmd, status); 39 + trace_cxl_aer_correctable_error(cxlmd, status); 45 40 } 46 41 47 - static void cxl_cper_trace_uncorr_prot_err(struct pci_dev *pdev, 48 - struct cxl_ras_capability_regs ras_cap) 42 + static void 43 + cxl_cper_trace_uncorr_prot_err(struct cxl_memdev *cxlmd, 44 + struct cxl_ras_capability_regs ras_cap) 49 45 { 50 46 u32 status = ras_cap.uncor_status & ~ras_cap.uncor_mask; 51 - struct cxl_dev_state *cxlds; 52 47 u32 fe; 53 - 54 - cxlds = pci_get_drvdata(pdev); 55 - if (!cxlds) 56 - return; 57 48 58 49 if (hweight32(status) > 1) 59 50 fe = BIT(FIELD_GET(CXL_RAS_CAP_CONTROL_FE_MASK, ··· 52 61 else 53 62 fe = status; 54 63 55 - trace_cxl_aer_uncorrectable_error(cxlds->cxlmd, status, fe, 64 + trace_cxl_aer_uncorrectable_error(cxlmd, status, fe, 56 65 ras_cap.header_log); 66 + } 67 + 68 + static int match_memdev_by_parent(struct device *dev, const void *uport) 69 + { 70 + if (is_cxl_memdev(dev) && dev->parent == uport) 71 + return 1; 72 + return 0; 57 73 } 58 74 59 75 static void cxl_cper_handle_prot_err(struct cxl_cper_prot_err_work_data *data) ··· 71 73 pci_get_domain_bus_and_slot(data->prot_err.agent_addr.segment, 72 74 data->prot_err.agent_addr.bus, 73 75 devfn); 76 + struct cxl_memdev *cxlmd; 74 77 int port_type; 75 78 76 79 if (!pdev) 77 80 return; 78 - 79 - guard(device)(&pdev->dev); 80 81 81 82 port_type = pci_pcie_type(pdev); 82 83 if (port_type == PCI_EXP_TYPE_ROOT_PORT || ··· 89 92 return; 90 93 } 91 94 95 + guard(device)(&pdev->dev); 96 + if 
(!pdev->dev.driver) 97 + return; 98 + 99 + struct device *mem_dev __free(put_device) = bus_find_device( 100 + &cxl_bus_type, NULL, pdev, match_memdev_by_parent); 101 + if (!mem_dev) 102 + return; 103 + 104 + cxlmd = to_cxl_memdev(mem_dev); 92 105 if (data->severity == AER_CORRECTABLE) 93 - cxl_cper_trace_corr_prot_err(pdev, data->ras_cap); 106 + cxl_cper_trace_corr_prot_err(cxlmd, data->ras_cap); 94 107 else 95 - cxl_cper_trace_uncorr_prot_err(pdev, data->ras_cap); 108 + cxl_cper_trace_uncorr_prot_err(cxlmd, data->ras_cap); 96 109 } 97 110 98 111 static void cxl_cper_prot_err_work_fn(struct work_struct *work)
+37 -21
drivers/edac/amd64_edac.c
··· 1209 1209 if (csrow_enabled(2 * dimm + 1, ctrl, pvt)) 1210 1210 cs_mode |= CS_ODD_PRIMARY; 1211 1211 1212 - /* Asymmetric dual-rank DIMM support. */ 1212 + if (csrow_sec_enabled(2 * dimm, ctrl, pvt)) 1213 + cs_mode |= CS_EVEN_SECONDARY; 1214 + 1213 1215 if (csrow_sec_enabled(2 * dimm + 1, ctrl, pvt)) 1214 1216 cs_mode |= CS_ODD_SECONDARY; 1215 1217 ··· 1232 1230 return cs_mode; 1233 1231 } 1234 1232 1235 - static int __addr_mask_to_cs_size(u32 addr_mask_orig, unsigned int cs_mode, 1236 - int csrow_nr, int dimm) 1233 + static int calculate_cs_size(u32 mask, unsigned int cs_mode) 1237 1234 { 1238 - u32 msb, weight, num_zero_bits; 1239 - u32 addr_mask_deinterleaved; 1240 - int size = 0; 1235 + int msb, weight, num_zero_bits; 1236 + u32 deinterleaved_mask; 1237 + 1238 + if (!mask) 1239 + return 0; 1241 1240 1242 1241 /* 1243 1242 * The number of zero bits in the mask is equal to the number of bits ··· 1251 1248 * without swapping with the most significant bit. This can be handled 1252 1249 * by keeping the MSB where it is and ignoring the single zero bit. 1253 1250 */ 1254 - msb = fls(addr_mask_orig) - 1; 1255 - weight = hweight_long(addr_mask_orig); 1251 + msb = fls(mask) - 1; 1252 + weight = hweight_long(mask); 1256 1253 num_zero_bits = msb - weight - !!(cs_mode & CS_3R_INTERLEAVE); 1257 1254 1258 1255 /* Take the number of zero bits off from the top of the mask. 
*/ 1259 - addr_mask_deinterleaved = GENMASK_ULL(msb - num_zero_bits, 1); 1256 + deinterleaved_mask = GENMASK(msb - num_zero_bits, 1); 1257 + edac_dbg(1, " Deinterleaved AddrMask: 0x%x\n", deinterleaved_mask); 1258 + 1259 + return (deinterleaved_mask >> 2) + 1; 1260 + } 1261 + 1262 + static int __addr_mask_to_cs_size(u32 addr_mask, u32 addr_mask_sec, 1263 + unsigned int cs_mode, int csrow_nr, int dimm) 1264 + { 1265 + int size; 1260 1266 1261 1267 edac_dbg(1, "CS%d DIMM%d AddrMasks:\n", csrow_nr, dimm); 1262 - edac_dbg(1, " Original AddrMask: 0x%x\n", addr_mask_orig); 1263 - edac_dbg(1, " Deinterleaved AddrMask: 0x%x\n", addr_mask_deinterleaved); 1268 + edac_dbg(1, " Primary AddrMask: 0x%x\n", addr_mask); 1264 1269 1265 1270 /* Register [31:1] = Address [39:9]. Size is in kBs here. */ 1266 - size = (addr_mask_deinterleaved >> 2) + 1; 1271 + size = calculate_cs_size(addr_mask, cs_mode); 1272 + 1273 + edac_dbg(1, " Secondary AddrMask: 0x%x\n", addr_mask_sec); 1274 + size += calculate_cs_size(addr_mask_sec, cs_mode); 1267 1275 1268 1276 /* Return size in MBs. */ 1269 1277 return size >> 10; ··· 1283 1269 static int umc_addr_mask_to_cs_size(struct amd64_pvt *pvt, u8 umc, 1284 1270 unsigned int cs_mode, int csrow_nr) 1285 1271 { 1272 + u32 addr_mask = 0, addr_mask_sec = 0; 1286 1273 int cs_mask_nr = csrow_nr; 1287 - u32 addr_mask_orig; 1288 1274 int dimm, size = 0; 1289 1275 1290 1276 /* No Chip Selects are enabled. */ ··· 1322 1308 if (!pvt->flags.zn_regs_v2) 1323 1309 cs_mask_nr >>= 1; 1324 1310 1325 - /* Asymmetric dual-rank DIMM support. 
*/ 1326 - if ((csrow_nr & 1) && (cs_mode & CS_ODD_SECONDARY)) 1327 - addr_mask_orig = pvt->csels[umc].csmasks_sec[cs_mask_nr]; 1328 - else 1329 - addr_mask_orig = pvt->csels[umc].csmasks[cs_mask_nr]; 1311 + if (cs_mode & (CS_EVEN_PRIMARY | CS_ODD_PRIMARY)) 1312 + addr_mask = pvt->csels[umc].csmasks[cs_mask_nr]; 1330 1313 1331 - return __addr_mask_to_cs_size(addr_mask_orig, cs_mode, csrow_nr, dimm); 1314 + if (cs_mode & (CS_EVEN_SECONDARY | CS_ODD_SECONDARY)) 1315 + addr_mask_sec = pvt->csels[umc].csmasks_sec[cs_mask_nr]; 1316 + 1317 + return __addr_mask_to_cs_size(addr_mask, addr_mask_sec, cs_mode, csrow_nr, dimm); 1332 1318 } 1333 1319 1334 1320 static void umc_debug_display_dimm_sizes(struct amd64_pvt *pvt, u8 ctrl) ··· 3526 3512 static int gpu_addr_mask_to_cs_size(struct amd64_pvt *pvt, u8 umc, 3527 3513 unsigned int cs_mode, int csrow_nr) 3528 3514 { 3529 - u32 addr_mask_orig = pvt->csels[umc].csmasks[csrow_nr]; 3515 + u32 addr_mask = pvt->csels[umc].csmasks[csrow_nr]; 3516 + u32 addr_mask_sec = pvt->csels[umc].csmasks_sec[csrow_nr]; 3530 3517 3531 - return __addr_mask_to_cs_size(addr_mask_orig, cs_mode, csrow_nr, csrow_nr >> 1); 3518 + return __addr_mask_to_cs_size(addr_mask, addr_mask_sec, cs_mode, csrow_nr, csrow_nr >> 1); 3532 3519 } 3533 3520 3534 3521 static void gpu_debug_display_dimm_sizes(struct amd64_pvt *pvt, u8 ctrl) ··· 3894 3879 break; 3895 3880 case 0x70 ... 0x7f: 3896 3881 pvt->ctl_name = "F19h_M70h"; 3882 + pvt->max_mcs = 4; 3897 3883 pvt->flags.zn_regs_v2 = 1; 3898 3884 break; 3899 3885 case 0x90 ... 0x9f:
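The comment in the amd64_edac hunk above describes how the chip-select size is derived from the address mask: count the interleave zero bits via `fls()`/`hweight_long()`, strip them off the top with `GENMASK()`, then convert register units to kB. A minimal userspace sketch of the same arithmetic, with `fls32`/`hweight32`/`genmask` as stand-ins for the kernel helpers and `three_rank` mirroring the `CS_3R_INTERLEAVE` adjustment:

```c
#include <assert.h>
#include <stdint.h>

/* Portable stand-ins for the kernel's fls()/hweight_long()/GENMASK(). */
static int fls32(uint32_t x) { return x ? 32 - __builtin_clz(x) : 0; }
static int hweight32(uint32_t x) { return __builtin_popcount(x); }
static uint32_t genmask(int h, int l)
{
    return (uint32_t)((((uint64_t)1 << (h - l + 1)) - 1) << l);
}

/* Sketch of calculate_cs_size(): the number of zero bits inside the mask
 * equals the number of interleaved address bits; drop them from the top,
 * then convert (register [31:1] = address [39:9], so the result is kB). */
static int cs_size_kb(uint32_t mask, int three_rank)
{
    if (!mask)
        return 0;

    int msb = fls32(mask) - 1;
    int weight = hweight32(mask);
    int num_zero_bits = msb - weight - !!three_rank;
    uint32_t deinterleaved = genmask(msb - num_zero_bits, 1);

    return (int)((deinterleaved >> 2) + 1);
}
```

A contiguous mask (no interleave zero bits) maps straight to its size, while punching one interleave bit out of the middle halves the reported per-CS size, which is exactly why the patch sums primary and secondary masks instead of picking one.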
+13 -11
drivers/edac/igen6_edac.c
··· 125 125 #define MEM_SLICE_HASH_MASK(v) (GET_BITFIELD(v, 6, 19) << 6) 126 126 #define MEM_SLICE_HASH_LSB_MASK_BIT(v) GET_BITFIELD(v, 24, 26) 127 127 128 - static const struct res_config { 128 + static struct res_config { 129 129 bool machine_check; 130 130 /* The number of present memory controllers. */ 131 131 int num_imc; ··· 479 479 return ECC_ERROR_LOG_ADDR45(ecclog); 480 480 } 481 481 482 - static const struct res_config ehl_cfg = { 482 + static struct res_config ehl_cfg = { 483 483 .num_imc = 1, 484 484 .imc_base = 0x5000, 485 485 .ibecc_base = 0xdc00, ··· 489 489 .err_addr_to_imc_addr = ehl_err_addr_to_imc_addr, 490 490 }; 491 491 492 - static const struct res_config icl_cfg = { 492 + static struct res_config icl_cfg = { 493 493 .num_imc = 1, 494 494 .imc_base = 0x5000, 495 495 .ibecc_base = 0xd800, ··· 499 499 .err_addr_to_imc_addr = ehl_err_addr_to_imc_addr, 500 500 }; 501 501 502 - static const struct res_config tgl_cfg = { 502 + static struct res_config tgl_cfg = { 503 503 .machine_check = true, 504 504 .num_imc = 2, 505 505 .imc_base = 0x5000, ··· 513 513 .err_addr_to_imc_addr = tgl_err_addr_to_imc_addr, 514 514 }; 515 515 516 - static const struct res_config adl_cfg = { 516 + static struct res_config adl_cfg = { 517 517 .machine_check = true, 518 518 .num_imc = 2, 519 519 .imc_base = 0xd800, ··· 524 524 .err_addr_to_imc_addr = adl_err_addr_to_imc_addr, 525 525 }; 526 526 527 - static const struct res_config adl_n_cfg = { 527 + static struct res_config adl_n_cfg = { 528 528 .machine_check = true, 529 529 .num_imc = 1, 530 530 .imc_base = 0xd800, ··· 535 535 .err_addr_to_imc_addr = adl_err_addr_to_imc_addr, 536 536 }; 537 537 538 - static const struct res_config rpl_p_cfg = { 538 + static struct res_config rpl_p_cfg = { 539 539 .machine_check = true, 540 540 .num_imc = 2, 541 541 .imc_base = 0xd800, ··· 547 547 .err_addr_to_imc_addr = adl_err_addr_to_imc_addr, 548 548 }; 549 549 550 - static const struct res_config mtl_ps_cfg = { 550 + static struct 
res_config mtl_ps_cfg = { 551 551 .machine_check = true, 552 552 .num_imc = 2, 553 553 .imc_base = 0xd800, ··· 558 558 .err_addr_to_imc_addr = adl_err_addr_to_imc_addr, 559 559 }; 560 560 561 - static const struct res_config mtl_p_cfg = { 561 + static struct res_config mtl_p_cfg = { 562 562 .machine_check = true, 563 563 .num_imc = 2, 564 564 .imc_base = 0xd800, ··· 569 569 .err_addr_to_imc_addr = adl_err_addr_to_imc_addr, 570 570 }; 571 571 572 - static const struct pci_device_id igen6_pci_tbl[] = { 572 + static struct pci_device_id igen6_pci_tbl[] = { 573 573 { PCI_VDEVICE(INTEL, DID_EHL_SKU5), (kernel_ulong_t)&ehl_cfg }, 574 574 { PCI_VDEVICE(INTEL, DID_EHL_SKU6), (kernel_ulong_t)&ehl_cfg }, 575 575 { PCI_VDEVICE(INTEL, DID_EHL_SKU7), (kernel_ulong_t)&ehl_cfg }, ··· 1350 1350 return -ENODEV; 1351 1351 } 1352 1352 1353 - if (lmc < res_cfg->num_imc) 1353 + if (lmc < res_cfg->num_imc) { 1354 1354 igen6_printk(KERN_WARNING, "Expected %d mcs, but only %d detected.", 1355 1355 res_cfg->num_imc, lmc); 1356 + res_cfg->num_imc = lmc; 1357 + } 1356 1358 1357 1359 return 0; 1358 1360
+1 -1
drivers/gpio/gpio-loongson-64bit.c
··· 268 268 /* LS7A2000 ACPI GPIO */ 269 269 static const struct loongson_gpio_chip_data loongson_gpio_ls7a2000_data1 = { 270 270 .label = "ls7a2000_gpio", 271 - .mode = BYTE_CTRL_MODE, 271 + .mode = BIT_CTRL_MODE, 272 272 .conf_offset = 0x4, 273 273 .in_offset = 0x8, 274 274 .out_offset = 0x0,
+34 -18
drivers/gpio/gpio-mlxbf3.c
··· 190 190 struct mlxbf3_gpio_context *gs; 191 191 struct gpio_irq_chip *girq; 192 192 struct gpio_chip *gc; 193 + char *colon_ptr; 193 194 int ret, irq; 195 + long num; 194 196 195 197 gs = devm_kzalloc(dev, sizeof(*gs), GFP_KERNEL); 196 198 if (!gs) ··· 229 227 gc->owner = THIS_MODULE; 230 228 gc->add_pin_ranges = mlxbf3_gpio_add_pin_ranges; 231 229 232 - irq = platform_get_irq(pdev, 0); 233 - if (irq >= 0) { 234 - girq = &gs->gc.irq; 235 - gpio_irq_chip_set_chip(girq, &gpio_mlxbf3_irqchip); 236 - girq->default_type = IRQ_TYPE_NONE; 237 - /* This will let us handle the parent IRQ in the driver */ 238 - girq->num_parents = 0; 239 - girq->parents = NULL; 240 - girq->parent_handler = NULL; 241 - girq->handler = handle_bad_irq; 230 + colon_ptr = strchr(dev_name(dev), ':'); 231 + if (!colon_ptr) { 232 + dev_err(dev, "invalid device name format\n"); 233 + return -EINVAL; 234 + } 242 235 243 - /* 244 - * Directly request the irq here instead of passing 245 - * a flow-handler because the irq is shared. 246 - */ 247 - ret = devm_request_irq(dev, irq, mlxbf3_gpio_irq_handler, 248 - IRQF_SHARED, dev_name(dev), gs); 249 - if (ret) 250 - return dev_err_probe(dev, ret, "failed to request IRQ"); 236 + ret = kstrtol(++colon_ptr, 16, &num); 237 + if (ret) { 238 + dev_err(dev, "invalid device instance\n"); 239 + return ret; 240 + } 241 + 242 + if (!num) { 243 + irq = platform_get_irq(pdev, 0); 244 + if (irq >= 0) { 245 + girq = &gs->gc.irq; 246 + gpio_irq_chip_set_chip(girq, &gpio_mlxbf3_irqchip); 247 + girq->default_type = IRQ_TYPE_NONE; 248 + /* This will let us handle the parent IRQ in the driver */ 249 + girq->num_parents = 0; 250 + girq->parents = NULL; 251 + girq->parent_handler = NULL; 252 + girq->handler = handle_bad_irq; 253 + 254 + /* 255 + * Directly request the irq here instead of passing 256 + * a flow-handler because the irq is shared. 
257 + */ 258 + ret = devm_request_irq(dev, irq, mlxbf3_gpio_irq_handler, 259 + IRQF_SHARED, dev_name(dev), gs); 260 + if (ret) 261 + return dev_err_probe(dev, ret, "failed to request IRQ"); 262 + } 251 263 } 252 264 253 265 platform_set_drvdata(pdev, gs);
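The mlxbf3 hunk above extracts the device instance from `dev_name()` by locating a colon and parsing the suffix as hex, then only requests the shared IRQ on instance 0. A hedged userspace sketch of that parse using `strchr()`/`strtol()` (the `"MLNXBF33:00"`-style name format is an assumption inferred from the code, not stated in the diff):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the probe-time parse: text, a colon, then a hex instance.
 * Returns the instance number, or -1 on a malformed name. */
static long gpio_instance(const char *name)
{
    const char *colon = strchr(name, ':');
    char *end;
    long num;

    if (!colon)
        return -1;          /* no colon: invalid device name format */

    num = strtol(colon + 1, &end, 16);
    if (end == colon + 1)
        return -1;          /* nothing parseable after the colon */

    return num;
}
```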
+1 -1
drivers/gpio/gpio-pca953x.c
··· 974 974 IRQF_ONESHOT | IRQF_SHARED, dev_name(dev), 975 975 chip); 976 976 if (ret) 977 - return dev_err_probe(dev, client->irq, "failed to request irq\n"); 977 + return dev_err_probe(dev, ret, "failed to request irq\n"); 978 978 979 979 return 0; 980 980 }
+1
drivers/gpio/gpio-spacemit-k1.c
··· 278 278 { .compatible = "spacemit,k1-gpio" }, 279 279 { /* sentinel */ } 280 280 }; 281 + MODULE_DEVICE_TABLE(of, spacemit_gpio_dt_ids); 281 282 282 283 static struct platform_driver spacemit_gpio_driver = { 283 284 .probe = spacemit_gpio_probe,
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
··· 1902 1902 continue; 1903 1903 } 1904 1904 job = to_amdgpu_job(s_job); 1905 - if (preempted && (&job->hw_fence) == fence) 1905 + if (preempted && (&job->hw_fence.base) == fence) 1906 1906 /* mark the job as preempted */ 1907 1907 job->preemption_status |= AMDGPU_IB_PREEMPTED; 1908 1908 }
+56 -26
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 6019 6019 return ret; 6020 6020 } 6021 6021 6022 - static int amdgpu_device_halt_activities(struct amdgpu_device *adev, 6023 - struct amdgpu_job *job, 6024 - struct amdgpu_reset_context *reset_context, 6025 - struct list_head *device_list, 6026 - struct amdgpu_hive_info *hive, 6027 - bool need_emergency_restart) 6022 + static int amdgpu_device_recovery_prepare(struct amdgpu_device *adev, 6023 + struct list_head *device_list, 6024 + struct amdgpu_hive_info *hive) 6028 6025 { 6029 - struct list_head *device_list_handle = NULL; 6030 6026 struct amdgpu_device *tmp_adev = NULL; 6031 - int i, r = 0; 6027 + int r; 6032 6028 6033 6029 /* 6034 6030 * Build list of devices to reset. ··· 6041 6045 } 6042 6046 if (!list_is_first(&adev->reset_list, device_list)) 6043 6047 list_rotate_to_front(&adev->reset_list, device_list); 6044 - device_list_handle = device_list; 6045 6048 } else { 6046 6049 list_add_tail(&adev->reset_list, device_list); 6047 - device_list_handle = device_list; 6048 6050 } 6049 6051 6050 6052 if (!amdgpu_sriov_vf(adev) && (!adev->pcie_reset_ctx.occurs_dpc)) { 6051 - r = amdgpu_device_health_check(device_list_handle); 6053 + r = amdgpu_device_health_check(device_list); 6052 6054 if (r) 6053 6055 return r; 6054 6056 } 6055 6057 6056 - /* We need to lock reset domain only once both for XGMI and single device */ 6057 - tmp_adev = list_first_entry(device_list_handle, struct amdgpu_device, 6058 - reset_list); 6058 + return 0; 6059 + } 6060 + 6061 + static void amdgpu_device_recovery_get_reset_lock(struct amdgpu_device *adev, 6062 + struct list_head *device_list) 6063 + { 6064 + struct amdgpu_device *tmp_adev = NULL; 6065 + 6066 + if (list_empty(device_list)) 6067 + return; 6068 + tmp_adev = 6069 + list_first_entry(device_list, struct amdgpu_device, reset_list); 6059 6070 amdgpu_device_lock_reset_domain(tmp_adev->reset_domain); 6071 + } 6072 + 6073 + static void amdgpu_device_recovery_put_reset_lock(struct amdgpu_device *adev, 6074 + struct list_head 
*device_list) 6075 + { 6076 + struct amdgpu_device *tmp_adev = NULL; 6077 + 6078 + if (list_empty(device_list)) 6079 + return; 6080 + tmp_adev = 6081 + list_first_entry(device_list, struct amdgpu_device, reset_list); 6082 + amdgpu_device_unlock_reset_domain(tmp_adev->reset_domain); 6083 + } 6084 + 6085 + static int amdgpu_device_halt_activities( 6086 + struct amdgpu_device *adev, struct amdgpu_job *job, 6087 + struct amdgpu_reset_context *reset_context, 6088 + struct list_head *device_list, struct amdgpu_hive_info *hive, 6089 + bool need_emergency_restart) 6090 + { 6091 + struct amdgpu_device *tmp_adev = NULL; 6092 + int i, r = 0; 6060 6093 6061 6094 /* block all schedulers and reset given job's ring */ 6062 - list_for_each_entry(tmp_adev, device_list_handle, reset_list) { 6063 - 6095 + list_for_each_entry(tmp_adev, device_list, reset_list) { 6064 6096 amdgpu_device_set_mp1_state(tmp_adev); 6065 6097 6066 6098 /* ··· 6276 6252 amdgpu_ras_set_error_query_ready(tmp_adev, true); 6277 6253 6278 6254 } 6279 - 6280 - tmp_adev = list_first_entry(device_list, struct amdgpu_device, 6281 - reset_list); 6282 - amdgpu_device_unlock_reset_domain(tmp_adev->reset_domain); 6283 - 6284 6255 } 6285 6256 6286 6257 ··· 6343 6324 reset_context->hive = hive; 6344 6325 INIT_LIST_HEAD(&device_list); 6345 6326 6327 + if (amdgpu_device_recovery_prepare(adev, &device_list, hive)) 6328 + goto end_reset; 6329 + 6330 + /* We need to lock reset domain only once both for XGMI and single device */ 6331 + amdgpu_device_recovery_get_reset_lock(adev, &device_list); 6332 + 6346 6333 r = amdgpu_device_halt_activities(adev, job, reset_context, &device_list, 6347 6334 hive, need_emergency_restart); 6348 6335 if (r) 6349 - goto end_reset; 6336 + goto reset_unlock; 6350 6337 6351 6338 if (need_emergency_restart) 6352 6339 goto skip_sched_resume; ··· 6362 6337 * 6363 6338 * job->base holds a reference to parent fence 6364 6339 */ 6365 - if (job && dma_fence_is_signaled(&job->hw_fence)) { 6340 + if (job && 
dma_fence_is_signaled(&job->hw_fence.base)) { 6366 6341 job_signaled = true; 6367 6342 dev_info(adev->dev, "Guilty job already signaled, skipping HW reset"); 6368 6343 goto skip_hw_reset; ··· 6370 6345 6371 6346 r = amdgpu_device_asic_reset(adev, &device_list, reset_context); 6372 6347 if (r) 6373 - goto end_reset; 6348 + goto reset_unlock; 6374 6349 skip_hw_reset: 6375 6350 r = amdgpu_device_sched_resume(&device_list, reset_context, job_signaled); 6376 6351 if (r) 6377 - goto end_reset; 6352 + goto reset_unlock; 6378 6353 skip_sched_resume: 6379 6354 amdgpu_device_gpu_resume(adev, &device_list, need_emergency_restart); 6355 + reset_unlock: 6356 + amdgpu_device_recovery_put_reset_lock(adev, &device_list); 6380 6357 end_reset: 6381 6358 if (hive) { 6382 6359 mutex_unlock(&hive->hive_lock); ··· 6790 6763 memset(&reset_context, 0, sizeof(reset_context)); 6791 6764 INIT_LIST_HEAD(&device_list); 6792 6765 6766 + amdgpu_device_recovery_prepare(adev, &device_list, hive); 6767 + amdgpu_device_recovery_get_reset_lock(adev, &device_list); 6793 6768 r = amdgpu_device_halt_activities(adev, NULL, &reset_context, &device_list, 6794 6769 hive, false); 6795 6770 if (hive) { ··· 6909 6880 if (hive) { 6910 6881 list_for_each_entry(tmp_adev, &device_list, reset_list) 6911 6882 amdgpu_device_unset_mp1_state(tmp_adev); 6912 - amdgpu_device_unlock_reset_domain(adev->reset_domain); 6913 6883 } 6884 + amdgpu_device_recovery_put_reset_lock(adev, &device_list); 6914 6885 } 6915 6886 6916 6887 if (hive) { ··· 6956 6927 6957 6928 amdgpu_device_sched_resume(&device_list, NULL, NULL); 6958 6929 amdgpu_device_gpu_resume(adev, &device_list, false); 6930 + amdgpu_device_recovery_put_reset_lock(adev, &device_list); 6959 6931 adev->pcie_reset_ctx.occurs_dpc = false; 6960 6932 6961 6933 if (hive) {
+13 -15
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
··· 321 321 const struct firmware *fw; 322 322 int r; 323 323 324 - r = request_firmware(&fw, fw_name, adev->dev); 324 + r = firmware_request_nowarn(&fw, fw_name, adev->dev); 325 325 if (r) { 326 - dev_err(adev->dev, "can't load firmware \"%s\"\n", 327 - fw_name); 326 + if (amdgpu_discovery == 2) 327 + dev_err(adev->dev, "can't load firmware \"%s\"\n", fw_name); 328 + else 329 + drm_info(&adev->ddev, "Optional firmware \"%s\" was not found\n", fw_name); 328 330 return r; 329 331 } 330 332 ··· 461 459 /* Read from file if it is the preferred option */ 462 460 fw_name = amdgpu_discovery_get_fw_name(adev); 463 461 if (fw_name != NULL) { 464 - dev_info(adev->dev, "use ip discovery information from file"); 462 + drm_dbg(&adev->ddev, "use ip discovery information from file"); 465 463 r = amdgpu_discovery_read_binary_from_file(adev, adev->mman.discovery_bin, fw_name); 466 - 467 - if (r) { 468 - dev_err(adev->dev, "failed to read ip discovery binary from file\n"); 469 - r = -EINVAL; 464 + if (r) 470 465 goto out; 471 - } 472 - 473 466 } else { 467 + drm_dbg(&adev->ddev, "use ip discovery information from memory"); 474 468 r = amdgpu_discovery_read_binary_from_mem( 475 469 adev, adev->mman.discovery_bin); 476 470 if (r) ··· 1336 1338 int r; 1337 1339 1338 1340 r = amdgpu_discovery_init(adev); 1339 - if (r) { 1340 - DRM_ERROR("amdgpu_discovery_init failed\n"); 1341 + if (r) 1341 1342 return r; 1342 - } 1343 1343 1344 1344 wafl_ver = 0; 1345 1345 adev->gfx.xcc_mask = 0; ··· 2575 2579 break; 2576 2580 default: 2577 2581 r = amdgpu_discovery_reg_base_init(adev); 2578 - if (r) 2579 - return -EINVAL; 2582 + if (r) { 2583 + drm_err(&adev->ddev, "discovery failed: %d\n", r); 2584 + return r; 2585 + } 2580 2586 2581 2587 amdgpu_discovery_harvest_ip(adev); 2582 2588 amdgpu_discovery_get_gfx_info(adev);
+7 -23
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
··· 41 41 #include "amdgpu_trace.h" 42 42 #include "amdgpu_reset.h" 43 43 44 - /* 45 - * Fences mark an event in the GPUs pipeline and are used 46 - * for GPU/CPU synchronization. When the fence is written, 47 - * it is expected that all buffers associated with that fence 48 - * are no longer in use by the associated ring on the GPU and 49 - * that the relevant GPU caches have been flushed. 50 - */ 51 - 52 - struct amdgpu_fence { 53 - struct dma_fence base; 54 - 55 - /* RB, DMA, etc. */ 56 - struct amdgpu_ring *ring; 57 - ktime_t start_timestamp; 58 - }; 59 - 60 44 static struct kmem_cache *amdgpu_fence_slab; 61 45 62 46 int amdgpu_fence_slab_init(void) ··· 135 151 am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC); 136 152 if (am_fence == NULL) 137 153 return -ENOMEM; 138 - fence = &am_fence->base; 139 - am_fence->ring = ring; 140 154 } else { 141 155 /* take use of job-embedded fence */ 142 - fence = &job->hw_fence; 156 + am_fence = &job->hw_fence; 143 157 } 158 + fence = &am_fence->base; 159 + am_fence->ring = ring; 144 160 145 161 seq = ++ring->fence_drv.sync_seq; 146 162 if (job && job->job_run_counter) { ··· 702 718 * it right here or we won't be able to track them in fence_drv 703 719 * and they will remain unsignaled during sa_bo free. 
704 720 */ 705 - job = container_of(old, struct amdgpu_job, hw_fence); 721 + job = container_of(old, struct amdgpu_job, hw_fence.base); 706 722 if (!job->base.s_fence && !dma_fence_is_signaled(old)) 707 723 dma_fence_signal(old); 708 724 RCU_INIT_POINTER(*ptr, NULL); ··· 764 780 765 781 static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f) 766 782 { 767 - struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence); 783 + struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base); 768 784 769 785 return (const char *)to_amdgpu_ring(job->base.sched)->name; 770 786 } ··· 794 810 */ 795 811 static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f) 796 812 { 797 - struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence); 813 + struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base); 798 814 799 815 if (!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer)) 800 816 amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched)); ··· 829 845 struct dma_fence *f = container_of(rcu, struct dma_fence, rcu); 830 846 831 847 /* free job if fence has a parent job */ 832 - kfree(container_of(f, struct amdgpu_job, hw_fence)); 848 + kfree(container_of(f, struct amdgpu_job, hw_fence.base)); 833 849 } 834 850 835 851 /**
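Several hunks above change `container_of(f, struct amdgpu_job, hw_fence)` to `container_of(f, struct amdgpu_job, hw_fence.base)`, because `hw_fence` is now a `struct amdgpu_fence` that itself embeds the `struct dma_fence`. A minimal sketch of recovering the outer job through that two-level embedding (the struct layouts here are simplified mocks, not the real kernel definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified mock-up of the embedding chain after this change:
 * amdgpu_job embeds amdgpu_fence, which embeds dma_fence. */
struct dma_fence { int seqno; };
struct amdgpu_fence { struct dma_fence base; void *ring; };
struct amdgpu_job { int id; struct amdgpu_fence hw_fence; };

/* Userspace equivalent of the kernel's container_of(). */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* The member path is now hw_fence.base rather than hw_fence. */
static struct amdgpu_job *job_from_fence(struct dma_fence *f)
{
    return container_of(f, struct amdgpu_job, hw_fence.base);
}

/* Round-trip check: hand out the inner fence, recover the outer job. */
static int demo_roundtrip(void)
{
    struct amdgpu_job j = { .id = 7 };
    struct amdgpu_job *back = job_from_fence(&j.hw_fence.base);

    return back == &j ? back->id : -1;
}
```

Because `base` is the first member of `amdgpu_fence`, the old and new member paths happen to produce the same offset here; spelling out `.base` keeps the types honest once the fence struct grows.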
+6 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
··· 272 272 /* Check if any fences where initialized */ 273 273 if (job->base.s_fence && job->base.s_fence->finished.ops) 274 274 f = &job->base.s_fence->finished; 275 - else if (job->hw_fence.ops) 276 - f = &job->hw_fence; 275 + else if (job->hw_fence.base.ops) 276 + f = &job->hw_fence.base; 277 277 else 278 278 f = NULL; 279 279 ··· 290 290 amdgpu_sync_free(&job->explicit_sync); 291 291 292 292 /* only put the hw fence if has embedded fence */ 293 - if (!job->hw_fence.ops) 293 + if (!job->hw_fence.base.ops) 294 294 kfree(job); 295 295 else 296 - dma_fence_put(&job->hw_fence); 296 + dma_fence_put(&job->hw_fence.base); 297 297 } 298 298 299 299 void amdgpu_job_set_gang_leader(struct amdgpu_job *job, ··· 322 322 if (job->gang_submit != &job->base.s_fence->scheduled) 323 323 dma_fence_put(job->gang_submit); 324 324 325 - if (!job->hw_fence.ops) 325 + if (!job->hw_fence.base.ops) 326 326 kfree(job); 327 327 else 328 - dma_fence_put(&job->hw_fence); 328 + dma_fence_put(&job->hw_fence.base); 329 329 } 330 330 331 331 struct dma_fence *amdgpu_job_submit(struct amdgpu_job *job)
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
··· 48 48 struct drm_sched_job base; 49 49 struct amdgpu_vm *vm; 50 50 struct amdgpu_sync explicit_sync; 51 - struct dma_fence hw_fence; 51 + struct amdgpu_fence hw_fence; 52 52 struct dma_fence *gang_submit; 53 53 uint32_t preamble_status; 54 54 uint32_t preemption_status;
+12 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
··· 3522 3522 uint8_t *ucode_array_start_addr; 3523 3523 int err = 0; 3524 3524 3525 - err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, AMDGPU_UCODE_REQUIRED, 3526 - "amdgpu/%s_sos.bin", chip_name); 3525 + if (amdgpu_is_kicker_fw(adev)) 3526 + err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, AMDGPU_UCODE_REQUIRED, 3527 + "amdgpu/%s_sos_kicker.bin", chip_name); 3528 + else 3529 + err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, AMDGPU_UCODE_REQUIRED, 3530 + "amdgpu/%s_sos.bin", chip_name); 3527 3531 if (err) 3528 3532 goto out; 3529 3533 ··· 3803 3799 struct amdgpu_device *adev = psp->adev; 3804 3800 int err; 3805 3801 3806 - err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, AMDGPU_UCODE_REQUIRED, 3807 - "amdgpu/%s_ta.bin", chip_name); 3802 + if (amdgpu_is_kicker_fw(adev)) 3803 + err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, AMDGPU_UCODE_REQUIRED, 3804 + "amdgpu/%s_ta_kicker.bin", chip_name); 3805 + else 3806 + err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, AMDGPU_UCODE_REQUIRED, 3807 + "amdgpu/%s_ta.bin", chip_name); 3808 3808 if (err) 3809 3809 return err; 3810 3810
+16
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
··· 127 127 struct dma_fence **fences; 128 128 }; 129 129 130 + /* 131 + * Fences mark an event in the GPUs pipeline and are used 132 + * for GPU/CPU synchronization. When the fence is written, 133 + * it is expected that all buffers associated with that fence 134 + * are no longer in use by the associated ring on the GPU and 135 + * that the relevant GPU caches have been flushed. 136 + */ 137 + 138 + struct amdgpu_fence { 139 + struct dma_fence base; 140 + 141 + /* RB, DMA, etc. */ 142 + struct amdgpu_ring *ring; 143 + ktime_t start_timestamp; 144 + }; 145 + 130 146 extern const struct drm_sched_backend_ops amdgpu_sched_ops; 131 147 132 148 void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
+6 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
··· 540 540 case IP_VERSION(4, 4, 2): 541 541 case IP_VERSION(4, 4, 4): 542 542 case IP_VERSION(4, 4, 5): 543 - /* For SDMA 4.x, use the existing DPM interface for backward compatibility */ 544 - r = amdgpu_dpm_reset_sdma(adev, 1 << instance_id); 543 + /* For SDMA 4.x, use the existing DPM interface for backward compatibility, 544 + * we need to convert the logical instance ID to physical instance ID before reset. 545 + */ 546 + r = amdgpu_dpm_reset_sdma(adev, 1 << GET_INST(SDMA0, instance_id)); 545 547 break; 546 548 case IP_VERSION(5, 0, 0): 547 549 case IP_VERSION(5, 0, 1): ··· 570 568 /** 571 569 * amdgpu_sdma_reset_engine - Reset a specific SDMA engine 572 570 * @adev: Pointer to the AMDGPU device 573 - * @instance_id: ID of the SDMA engine instance to reset 571 + * @instance_id: Logical ID of the SDMA engine instance to reset 574 572 * 575 573 * Returns: 0 on success, or a negative error code on failure. 576 574 */ ··· 603 601 /* Perform the SDMA reset for the specified instance */ 604 602 ret = amdgpu_sdma_soft_reset(adev, instance_id); 605 603 if (ret) { 606 - dev_err(adev->dev, "Failed to reset SDMA instance %u\n", instance_id); 604 + dev_err(adev->dev, "Failed to reset SDMA logical instance %u\n", instance_id); 607 605 goto exit; 608 606 } 609 607
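The amdgpu_sdma.c hunk above stresses that the DPM reset interface wants a *physical* instance, so the logical id is translated with `GET_INST(SDMA0, instance_id)` before building the reset mask. A sketch of that translation under an assumed, made-up logical-to-physical table (real mappings depend on the partition mode and are not shown in the diff):

```c
#include <assert.h>

/* Hypothetical logical -> physical SDMA instance map; on partitioned
 * parts the two id spaces differ, which is what GET_INST() resolves. */
static const int sdma_inst_map[] = { 0, 2, 1, 3 };

/* Build the single-instance reset mask from a logical id, translating
 * to the physical id first, as the DPM interface expects. */
static unsigned int reset_mask_for(int logical_id)
{
    return 1u << sdma_inst_map[logical_id];
}
```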
+17
drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
··· 30 30 31 31 #define AMDGPU_UCODE_NAME_MAX (128) 32 32 33 + static const struct kicker_device kicker_device_list[] = { 34 + {0x744B, 0x00}, 35 + }; 36 + 33 37 static void amdgpu_ucode_print_common_hdr(const struct common_firmware_header *hdr) 34 38 { 35 39 DRM_DEBUG("size_bytes: %u\n", le32_to_cpu(hdr->size_bytes)); ··· 1389 1385 } 1390 1386 } 1391 1387 return NULL; 1388 + } 1389 + 1390 + bool amdgpu_is_kicker_fw(struct amdgpu_device *adev) 1391 + { 1392 + int i; 1393 + 1394 + for (i = 0; i < ARRAY_SIZE(kicker_device_list); i++) { 1395 + if (adev->pdev->device == kicker_device_list[i].device && 1396 + adev->pdev->revision == kicker_device_list[i].revision) 1397 + return true; 1398 + } 1399 + 1400 + return false; 1392 1401 } 1393 1402 1394 1403 void amdgpu_ucode_ip_version_decode(struct amdgpu_device *adev, int block_type, char *ucode_prefix, int len)
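The amdgpu_ucode hunk above decides between regular and `_kicker` firmware by matching the PCI device id and revision against the new `kicker_device_list` table. A self-contained sketch of the same linear lookup, using the single 0x744B/0x00 entry the diff adds:

```c
#include <assert.h>

/* Mirrors the (device id, revision) pairs that take kicker firmware. */
struct kicker_device {
    unsigned short device;
    unsigned char revision;
};

static const struct kicker_device kicker_list[] = {
    { 0x744B, 0x00 },
};

/* Linear scan, as in amdgpu_is_kicker_fw(): both fields must match. */
static int is_kicker(unsigned short device, unsigned char revision)
{
    unsigned i;

    for (i = 0; i < sizeof(kicker_list) / sizeof(kicker_list[0]); i++) {
        if (kicker_list[i].device == device &&
            kicker_list[i].revision == revision)
            return 1;
    }
    return 0;
}
```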
+6
drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
··· 605 605 uint32_t pldm_version; 606 606 }; 607 607 608 + struct kicker_device{ 609 + unsigned short device; 610 + u8 revision; 611 + }; 612 + 608 613 void amdgpu_ucode_print_mc_hdr(const struct common_firmware_header *hdr); 609 614 void amdgpu_ucode_print_smc_hdr(const struct common_firmware_header *hdr); 610 615 void amdgpu_ucode_print_imu_hdr(const struct common_firmware_header *hdr); ··· 637 632 const char *amdgpu_ucode_name(enum AMDGPU_UCODE_ID ucode_id); 638 633 639 634 void amdgpu_ucode_ip_version_decode(struct amdgpu_device *adev, int block_type, char *ucode_prefix, int len); 635 + bool amdgpu_is_kicker_fw(struct amdgpu_device *adev); 640 636 641 637 #endif
+5
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
··· 85 85 MODULE_FIRMWARE("amdgpu/gc_11_0_0_me.bin"); 86 86 MODULE_FIRMWARE("amdgpu/gc_11_0_0_mec.bin"); 87 87 MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc.bin"); 88 + MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc_kicker.bin"); 88 89 MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc_1.bin"); 89 90 MODULE_FIRMWARE("amdgpu/gc_11_0_0_toc.bin"); 90 91 MODULE_FIRMWARE("amdgpu/gc_11_0_1_pfp.bin"); ··· 760 759 err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw, 761 760 AMDGPU_UCODE_REQUIRED, 762 761 "amdgpu/gc_11_0_0_rlc_1.bin"); 762 + else if (amdgpu_is_kicker_fw(adev)) 763 + err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw, 764 + AMDGPU_UCODE_REQUIRED, 765 + "amdgpu/%s_rlc_kicker.bin", ucode_prefix); 763 766 else 764 767 err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw, 765 768 AMDGPU_UCODE_REQUIRED,
+19
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 2235 2235 } 2236 2236 2237 2237 switch (amdgpu_ip_version(adev, GC_HWIP, 0)) { 2238 + case IP_VERSION(9, 0, 1): 2239 + case IP_VERSION(9, 2, 1): 2240 + case IP_VERSION(9, 4, 0): 2241 + case IP_VERSION(9, 2, 2): 2242 + case IP_VERSION(9, 1, 0): 2243 + case IP_VERSION(9, 3, 0): 2244 + adev->gfx.cleaner_shader_ptr = gfx_9_4_2_cleaner_shader_hex; 2245 + adev->gfx.cleaner_shader_size = sizeof(gfx_9_4_2_cleaner_shader_hex); 2246 + if (adev->gfx.me_fw_version >= 167 && 2247 + adev->gfx.pfp_fw_version >= 196 && 2248 + adev->gfx.mec_fw_version >= 474) { 2249 + adev->gfx.enable_cleaner_shader = true; 2250 + r = amdgpu_gfx_cleaner_shader_sw_init(adev, adev->gfx.cleaner_shader_size); 2251 + if (r) { 2252 + adev->gfx.enable_cleaner_shader = false; 2253 + dev_err(adev->dev, "Failed to initialize cleaner shader\n"); 2254 + } 2255 + } 2256 + break; 2238 2257 case IP_VERSION(9, 4, 2): 2239 2258 adev->gfx.cleaner_shader_ptr = gfx_9_4_2_cleaner_shader_hex; 2240 2259 adev->gfx.cleaner_shader_size = sizeof(gfx_9_4_2_cleaner_shader_hex);
+7 -2
drivers/gpu/drm/amd/amdgpu/imu_v11_0.c
··· 32 32 #include "gc/gc_11_0_0_sh_mask.h" 33 33 34 34 MODULE_FIRMWARE("amdgpu/gc_11_0_0_imu.bin"); 35 + MODULE_FIRMWARE("amdgpu/gc_11_0_0_imu_kicker.bin"); 35 36 MODULE_FIRMWARE("amdgpu/gc_11_0_1_imu.bin"); 36 37 MODULE_FIRMWARE("amdgpu/gc_11_0_2_imu.bin"); 37 38 MODULE_FIRMWARE("amdgpu/gc_11_0_3_imu.bin"); ··· 52 51 DRM_DEBUG("\n"); 53 52 54 53 amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix)); 55 - err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED, 56 - "amdgpu/%s_imu.bin", ucode_prefix); 54 + if (amdgpu_is_kicker_fw(adev)) 55 + err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED, 56 + "amdgpu/%s_imu_kicker.bin", ucode_prefix); 57 + else 58 + err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED, 59 + "amdgpu/%s_imu.bin", ucode_prefix); 57 60 if (err) 58 61 goto out; 59 62
+6 -4
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
··· 1630 1630 if (r) 1631 1631 goto failure; 1632 1632 1633 - r = mes_v11_0_set_hw_resources_1(&adev->mes); 1634 - if (r) { 1635 - DRM_ERROR("failed mes_v11_0_set_hw_resources_1, r=%d\n", r); 1636 - goto failure; 1633 + if ((adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x50) { 1634 + r = mes_v11_0_set_hw_resources_1(&adev->mes); 1635 + if (r) { 1636 + DRM_ERROR("failed mes_v11_0_set_hw_resources_1, r=%d\n", r); 1637 + goto failure; 1638 + } 1637 1639 } 1638 1640 1639 1641 r = mes_v11_0_query_sched_status(&adev->mes);
+2 -1
drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
··· 1742 1742 if (r) 1743 1743 goto failure; 1744 1744 1745 - mes_v12_0_set_hw_resources_1(&adev->mes, AMDGPU_MES_SCHED_PIPE); 1745 + if ((adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x4b) 1746 + mes_v12_0_set_hw_resources_1(&adev->mes, AMDGPU_MES_SCHED_PIPE); 1746 1747 1747 1748 mes_v12_0_init_aggregated_doorbell(&adev->mes); 1748 1749
+2
drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
··· 42 42 MODULE_FIRMWARE("amdgpu/psp_13_0_8_toc.bin"); 43 43 MODULE_FIRMWARE("amdgpu/psp_13_0_8_ta.bin"); 44 44 MODULE_FIRMWARE("amdgpu/psp_13_0_0_sos.bin"); 45 + MODULE_FIRMWARE("amdgpu/psp_13_0_0_sos_kicker.bin"); 45 46 MODULE_FIRMWARE("amdgpu/psp_13_0_0_ta.bin"); 47 + MODULE_FIRMWARE("amdgpu/psp_13_0_0_ta_kicker.bin"); 46 48 MODULE_FIRMWARE("amdgpu/psp_13_0_7_sos.bin"); 47 49 MODULE_FIRMWARE("amdgpu/psp_13_0_7_ta.bin"); 48 50 MODULE_FIRMWARE("amdgpu/psp_13_0_10_sos.bin");
+7 -3
drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
··· 490 490 { 491 491 struct amdgpu_ring *sdma[AMDGPU_MAX_SDMA_INSTANCES]; 492 492 u32 doorbell_offset, doorbell; 493 - u32 rb_cntl, ib_cntl; 493 + u32 rb_cntl, ib_cntl, sdma_cntl; 494 494 int i; 495 495 496 496 for_each_inst(i, inst_mask) { ··· 502 502 ib_cntl = RREG32_SDMA(i, regSDMA_GFX_IB_CNTL); 503 503 ib_cntl = REG_SET_FIELD(ib_cntl, SDMA_GFX_IB_CNTL, IB_ENABLE, 0); 504 504 WREG32_SDMA(i, regSDMA_GFX_IB_CNTL, ib_cntl); 505 + sdma_cntl = RREG32_SDMA(i, regSDMA_CNTL); 506 + sdma_cntl = REG_SET_FIELD(sdma_cntl, SDMA_CNTL, UTC_L1_ENABLE, 0); 507 + WREG32_SDMA(i, regSDMA_CNTL, sdma_cntl); 505 508 506 509 if (sdma[i]->use_doorbell) { 507 510 doorbell = RREG32_SDMA(i, regSDMA_GFX_DOORBELL); ··· 998 995 /* set utc l1 enable flag always to 1 */ 999 996 temp = RREG32_SDMA(i, regSDMA_CNTL); 1000 997 temp = REG_SET_FIELD(temp, SDMA_CNTL, UTC_L1_ENABLE, 1); 998 + WREG32_SDMA(i, regSDMA_CNTL, temp); 1001 999 1002 1000 if (amdgpu_ip_version(adev, SDMA0_HWIP, 0) < IP_VERSION(4, 4, 5)) { 1003 1001 /* enable context empty interrupt during initialization */ ··· 1674 1670 static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring, unsigned int vmid) 1675 1671 { 1676 1672 struct amdgpu_device *adev = ring->adev; 1677 - u32 id = GET_INST(SDMA0, ring->me); 1673 + u32 id = ring->me; 1678 1674 int r; 1679 1675 1680 1676 if (!(adev->sdma.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE)) ··· 1690 1686 static int sdma_v4_4_2_stop_queue(struct amdgpu_ring *ring) 1691 1687 { 1692 1688 struct amdgpu_device *adev = ring->adev; 1693 - u32 instance_id = GET_INST(SDMA0, ring->me); 1689 + u32 instance_id = ring->me; 1694 1690 u32 inst_mask; 1695 1691 uint64_t rptr; 1696 1692
+1
drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
··· 1399 1399 return r; 1400 1400 1401 1401 for (i = 0; i < adev->sdma.num_instances; i++) { 1402 + mutex_init(&adev->sdma.instance[i].engine_reset_mutex); 1402 1403 adev->sdma.instance[i].funcs = &sdma_v5_0_sdma_funcs; 1403 1404 ring = &adev->sdma.instance[i].ring; 1404 1405 ring->ring_obj = NULL;
+1
drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
··· 1318 1318 } 1319 1319 1320 1320 for (i = 0; i < adev->sdma.num_instances; i++) { 1321 + mutex_init(&adev->sdma.instance[i].engine_reset_mutex); 1321 1322 adev->sdma.instance[i].funcs = &sdma_v5_2_sdma_funcs; 1322 1323 ring = &adev->sdma.instance[i].ring; 1323 1324 ring->ring_obj = NULL;
+16 -3
drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
··· 1374 1374 else 1375 1375 DRM_ERROR("Failed to allocated memory for SDMA IP Dump\n"); 1376 1376 1377 - /* add firmware version checks here */ 1378 - if (0 && !adev->sdma.disable_uq) 1379 - adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1377 + switch (amdgpu_ip_version(adev, SDMA0_HWIP, 0)) { 1378 + case IP_VERSION(6, 0, 0): 1379 + if ((adev->sdma.instance[0].fw_version >= 24) && !adev->sdma.disable_uq) 1380 + adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1381 + break; 1382 + case IP_VERSION(6, 0, 2): 1383 + if ((adev->sdma.instance[0].fw_version >= 21) && !adev->sdma.disable_uq) 1384 + adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1385 + break; 1386 + case IP_VERSION(6, 0, 3): 1387 + if ((adev->sdma.instance[0].fw_version >= 25) && !adev->sdma.disable_uq) 1388 + adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1389 + break; 1390 + default: 1391 + break; 1392 + } 1380 1393 1381 1394 r = amdgpu_sdma_sysfs_reset_mask_init(adev); 1382 1395 if (r)
+9 -3
drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
··· 1349 1349 else 1350 1350 DRM_ERROR("Failed to allocated memory for SDMA IP Dump\n"); 1351 1351 1352 - /* add firmware version checks here */ 1353 - if (0 && !adev->sdma.disable_uq) 1354 - adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1352 + switch (amdgpu_ip_version(adev, SDMA0_HWIP, 0)) { 1353 + case IP_VERSION(7, 0, 0): 1354 + case IP_VERSION(7, 0, 1): 1355 + if ((adev->sdma.instance[0].fw_version >= 7836028) && !adev->sdma.disable_uq) 1356 + adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs; 1357 + break; 1358 + default: 1359 + break; 1360 + } 1355 1361 1356 1362 return r; 1357 1363 }
+5 -1
drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c
··· 669 669 if (indirect) 670 670 amdgpu_vcn_psp_update_sram(adev, inst_idx, AMDGPU_UCODE_ID_VCN0_RAM); 671 671 672 + /* resetting ring, fw should not check RB ring */ 673 + fw_shared->sq.queue_mode |= FW_QUEUE_RING_RESET; 674 + 672 675 /* Pause dpg */ 673 676 vcn_v5_0_1_pause_dpg_mode(vinst, &state); 674 677 ··· 684 681 tmp = RREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE); 685 682 tmp &= ~(VCN_RB_ENABLE__RB1_EN_MASK); 686 683 WREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE, tmp); 687 - fw_shared->sq.queue_mode |= FW_QUEUE_RING_RESET; 684 + 688 685 WREG32_SOC15(VCN, vcn_inst, regUVD_RB_RPTR, 0); 689 686 WREG32_SOC15(VCN, vcn_inst, regUVD_RB_WPTR, 0); 690 687 ··· 695 692 tmp = RREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE); 696 693 tmp |= VCN_RB_ENABLE__RB1_EN_MASK; 697 694 WREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE, tmp); 695 + /* resetting done, fw can check RB ring */ 698 696 fw_shared->sq.queue_mode &= ~(FW_QUEUE_RING_RESET | FW_QUEUE_DPG_HOLD_OFF); 699 697 700 698 WREG32_SOC15(VCN, vcn_inst, regVCN_RB1_DB_CTRL,
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c
··· 240 240 241 241 packet->bitfields2.engine_sel = 242 242 engine_sel__mes_map_queues__compute_vi; 243 - packet->bitfields2.gws_control_queue = q->gws ? 1 : 0; 243 + packet->bitfields2.gws_control_queue = q->properties.is_gws ? 1 : 0; 244 244 packet->bitfields2.extended_engine_sel = 245 245 extended_engine_sel__mes_map_queues__legacy_engine_sel; 246 246 packet->bitfields2.queue_type =
+4 -2
drivers/gpu/drm/amd/amdkfd/kfd_topology.c
··· 510 510 dev->node_props.capability |= 511 511 HSA_CAP_AQL_QUEUE_DOUBLE_MAP; 512 512 513 + if (KFD_GC_VERSION(dev->gpu) < IP_VERSION(10, 0, 0) && 514 + (dev->gpu->adev->sdma.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE)) 515 + dev->node_props.capability2 |= HSA_CAP2_PER_SDMA_QUEUE_RESET_SUPPORTED; 516 + 513 517 sysfs_show_32bit_prop(buffer, offs, "max_engine_clk_fcompute", 514 518 dev->node_props.max_engine_clk_fcompute); 515 519 ··· 2012 2008 if (!amdgpu_sriov_vf(dev->gpu->adev)) 2013 2009 dev->node_props.capability |= HSA_CAP_PER_QUEUE_RESET_SUPPORTED; 2014 2010 2015 - if (dev->gpu->adev->sdma.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE) 2016 - dev->node_props.capability2 |= HSA_CAP2_PER_SDMA_QUEUE_RESET_SUPPORTED; 2017 2011 } else { 2018 2012 dev->node_props.debug_prop |= HSA_DBG_WATCH_ADDR_MASK_LO_BIT_GFX10 | 2019 2013 HSA_DBG_WATCH_ADDR_MASK_HI_BIT;
+35 -22
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 4718 4718
 		return 1;
 	}
 
-static void convert_custom_brightness(const struct amdgpu_dm_backlight_caps *caps,
-				      uint32_t *brightness)
+/* Rescale from [min..max] to [0..MAX_BACKLIGHT_LEVEL] */
+static inline u32 scale_input_to_fw(int min, int max, u64 input)
 {
+	return DIV_ROUND_CLOSEST_ULL(input * MAX_BACKLIGHT_LEVEL, max - min);
+}
+
+/* Rescale from [0..MAX_BACKLIGHT_LEVEL] to [min..max] */
+static inline u32 scale_fw_to_input(int min, int max, u64 input)
+{
+	return min + DIV_ROUND_CLOSEST_ULL(input * (max - min), MAX_BACKLIGHT_LEVEL);
+}
+
+static void convert_custom_brightness(const struct amdgpu_dm_backlight_caps *caps,
+				      unsigned int min, unsigned int max,
+				      uint32_t *user_brightness)
+{
+	u32 brightness = scale_input_to_fw(min, max, *user_brightness);
 	u8 prev_signal = 0, prev_lum = 0;
 	int i = 0;
 
··· 4745 4731
 		return;
 
 	/* choose start to run less interpolation steps */
-	if (caps->luminance_data[caps->data_points/2].input_signal > *brightness)
+	if (caps->luminance_data[caps->data_points/2].input_signal > brightness)
 		i = caps->data_points/2;
 	do {
 		u8 signal = caps->luminance_data[i].input_signal;
··· 4756 4742
 	 * brightness < signal: interpolate between previous and current luminance numerator
 	 * brightness > signal: find next data point
 	 */
-		if (*brightness > signal) {
+		if (brightness > signal) {
 			prev_signal = signal;
 			prev_lum = lum;
 			i++;
 			continue;
 		}
-		if (*brightness < signal)
+		if (brightness < signal)
 			lum = prev_lum + DIV_ROUND_CLOSEST((lum - prev_lum) *
-							   (*brightness - prev_signal),
+							   (brightness - prev_signal),
 							   signal - prev_signal);
-		*brightness = DIV_ROUND_CLOSEST(lum * *brightness, 101);
+		*user_brightness = scale_fw_to_input(min, max,
+						     DIV_ROUND_CLOSEST(lum * brightness, 101));
 		return;
 	} while (i < caps->data_points);
 }
··· 4780 4765
 	if (!get_brightness_range(caps, &min, &max))
 		return brightness;
 
-	convert_custom_brightness(caps, &brightness);
+	convert_custom_brightness(caps, min, max, &brightness);
 
-	// Rescale 0..255 to min..max
-	return min + DIV_ROUND_CLOSEST((max - min) * brightness,
-				       AMDGPU_MAX_BL_LEVEL);
+	// Rescale 0..max to min..max
+	return min + DIV_ROUND_CLOSEST_ULL((u64)(max - min) * brightness, max);
 }
 
 static u32 convert_brightness_to_user(const struct amdgpu_dm_backlight_caps *caps,
··· 4796 4782
 
 	if (brightness < min)
 		return 0;
-	// Rescale min..max to 0..255
-	return DIV_ROUND_CLOSEST(AMDGPU_MAX_BL_LEVEL * (brightness - min),
+	// Rescale min..max to 0..max
+	return DIV_ROUND_CLOSEST_ULL((u64)max * (brightness - min),
 				     max - min);
 }
 
··· 4922 4908
 	struct drm_device *drm = aconnector->base.dev;
 	struct amdgpu_display_manager *dm = &drm_to_adev(drm)->dm;
 	struct backlight_properties props = { 0 };
-	struct amdgpu_dm_backlight_caps caps = { 0 };
+	struct amdgpu_dm_backlight_caps *caps;
 	char bl_name[16];
 	int min, max;
 
··· 4936 4922
 		return;
 	}
 
-	amdgpu_acpi_get_backlight_caps(&caps);
-	if (caps.caps_valid && get_brightness_range(&caps, &min, &max)) {
+	caps = &dm->backlight_caps[aconnector->bl_idx];
+	if (get_brightness_range(caps, &min, &max)) {
 		if (power_supply_is_system_supplied() > 0)
-			props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps.ac_level, 100);
+			props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps->ac_level, 100);
 		else
-			props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps.dc_level, 100);
+			props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps->dc_level, 100);
 		/* min is zero, so max needs to be adjusted */
 		props.max_brightness = max - min;
 		drm_dbg(drm, "Backlight caps: min: %d, max: %d, ac %d, dc %d\n", min, max,
-			caps.ac_level, caps.dc_level);
+			caps->ac_level, caps->dc_level);
 	} else
-		props.brightness = AMDGPU_MAX_BL_LEVEL;
+		props.brightness = props.max_brightness = MAX_BACKLIGHT_LEVEL;
 
-	if (caps.data_points && !(amdgpu_dc_debug_mask & DC_DISABLE_CUSTOM_BRIGHTNESS_CURVE))
+	if (caps->data_points && !(amdgpu_dc_debug_mask & DC_DISABLE_CUSTOM_BRIGHTNESS_CURVE))
 		drm_info(drm, "Using custom brightness curve\n");
-	props.max_brightness = AMDGPU_MAX_BL_LEVEL;
 	props.type = BACKLIGHT_RAW;
 
 	snprintf(bl_name, sizeof(bl_name), "amdgpu_bl%d",
+4
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
··· 1029 1029 return EDID_NO_RESPONSE; 1030 1030 1031 1031 edid = drm_edid_raw(drm_edid); // FIXME: Get rid of drm_edid_raw() 1032 + if (!edid || 1033 + edid->extensions >= sizeof(sink->dc_edid.raw_edid) / EDID_LENGTH) 1034 + return EDID_BAD_INPUT; 1035 + 1032 1036 sink->dc_edid.length = EDID_LENGTH * (edid->extensions + 1); 1033 1037 memmove(sink->dc_edid.raw_edid, (uint8_t *)edid, sink->dc_edid.length); 1034 1038
+33
drivers/gpu/drm/amd/display/dc/core/dc.c
··· 241 241 DC_LOG_DC("BIOS object table - end"); 242 242 243 243 /* Create a link for each usb4 dpia port */ 244 + dc->lowest_dpia_link_index = MAX_LINKS; 244 245 for (i = 0; i < dc->res_pool->usb4_dpia_count; i++) { 245 246 struct link_init_data link_init_params = {0}; 246 247 struct dc_link *link; ··· 254 253 255 254 link = dc->link_srv->create_link(&link_init_params); 256 255 if (link) { 256 + if (dc->lowest_dpia_link_index > dc->link_count) 257 + dc->lowest_dpia_link_index = dc->link_count; 258 + 257 259 dc->links[dc->link_count] = link; 258 260 link->dc = dc; 259 261 ++dc->link_count; ··· 6379 6375 return dc->res_pool->funcs->get_det_buffer_size(context); 6380 6376 else 6381 6377 return 0; 6378 + } 6379 + /** 6380 + *********************************************************************************************** 6381 + * dc_get_host_router_index: Get index of host router from a dpia link 6382 + * 6383 + * This function return a host router index of the target link. If the target link is dpia link. 6384 + * 6385 + * @param [in] link: target link 6386 + * @param [out] host_router_index: host router index of the target link 6387 + * 6388 + * @return: true if the host router index is found and valid. 6389 + * 6390 + *********************************************************************************************** 6391 + */ 6392 + bool dc_get_host_router_index(const struct dc_link *link, unsigned int *host_router_index) 6393 + { 6394 + struct dc *dc = link->ctx->dc; 6395 + 6396 + if (link->ep_type != DISPLAY_ENDPOINT_USB4_DPIA) 6397 + return false; 6398 + 6399 + if (link->link_index < dc->lowest_dpia_link_index) 6400 + return false; 6401 + 6402 + *host_router_index = (link->link_index - dc->lowest_dpia_link_index) / dc->caps.num_of_dpias_per_host_router; 6403 + if (*host_router_index < dc->caps.num_of_host_routers) 6404 + return true; 6405 + else 6406 + return false; 6382 6407 } 6383 6408 6384 6409 bool dc_is_cursor_limit_pending(struct dc *dc)
+7 -1
drivers/gpu/drm/amd/display/dc/dc.h
··· 66 66 #define MAX_STREAMS 6 67 67 #define MIN_VIEWPORT_SIZE 12 68 68 #define MAX_NUM_EDP 2 69 - #define MAX_HOST_ROUTERS_NUM 2 69 + #define MAX_HOST_ROUTERS_NUM 3 70 + #define MAX_DPIA_PER_HOST_ROUTER 2 70 71 71 72 /* Display Core Interfaces */ 72 73 struct dc_versions { ··· 306 305 /* Conservative limit for DCC cases which require ODM4:1 to support*/ 307 306 uint32_t dcc_plane_width_limit; 308 307 struct dc_scl_caps scl_caps; 308 + uint8_t num_of_host_routers; 309 + uint8_t num_of_dpias_per_host_router; 309 310 }; 310 311 311 312 struct dc_bug_wa { ··· 1606 1603 1607 1604 uint8_t link_count; 1608 1605 struct dc_link *links[MAX_LINKS]; 1606 + uint8_t lowest_dpia_link_index; 1609 1607 struct link_service *link_srv; 1610 1608 1611 1609 struct dc_state *current_state; ··· 2598 2594 struct dc_power_profile dc_get_power_profile_for_dc_state(const struct dc_state *context); 2599 2595 2600 2596 unsigned int dc_get_det_buffer_size_from_state(const struct dc_state *context); 2597 + 2598 + bool dc_get_host_router_index(const struct dc_link *link, unsigned int *host_router_index); 2601 2599 2602 2600 /* DSC Interfaces */ 2603 2601 #include "dc_dsc.h"
+2 -2
drivers/gpu/drm/amd/display/dc/dc_dp_types.h
··· 1172 1172 union dp_128b_132b_supported_lttpr_link_rates supported_128b_132b_rates; 1173 1173 union dp_alpm_lttpr_cap alpm; 1174 1174 uint8_t aux_rd_interval[MAX_REPEATER_CNT - 1]; 1175 - uint8_t lttpr_ieee_oui[3]; 1176 - uint8_t lttpr_device_id[6]; 1175 + uint8_t lttpr_ieee_oui[3]; // Always read from closest LTTPR to host 1176 + uint8_t lttpr_device_id[6]; // Always read from closest LTTPR to host 1177 1177 }; 1178 1178 1179 1179 struct dc_dongle_dfp_cap_ext {
+1
drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
··· 788 788 plane->pixel_format = dml2_420_10; 789 789 break; 790 790 case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616: 791 + case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616: 791 792 case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616F: 792 793 case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616F: 793 794 plane->pixel_format = dml2_444_64;
+4 -1
drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
··· 4685 4685 //the tdlut is fetched during the 2 row times of prefetch. 4686 4686 if (p->setup_for_tdlut) { 4687 4687 *p->tdlut_groups_per_2row_ub = (unsigned int)math_ceil2((double) *p->tdlut_bytes_per_frame / *p->tdlut_bytes_per_group, 1); 4688 - *p->tdlut_opt_time = (*p->tdlut_bytes_per_frame - p->cursor_buffer_size * 1024) / tdlut_drain_rate; 4688 + if (*p->tdlut_bytes_per_frame > p->cursor_buffer_size * 1024) 4689 + *p->tdlut_opt_time = (*p->tdlut_bytes_per_frame - p->cursor_buffer_size * 1024) / tdlut_drain_rate; 4690 + else 4691 + *p->tdlut_opt_time = 0; 4689 4692 *p->tdlut_drain_time = p->cursor_buffer_size * 1024 / tdlut_drain_rate; 4690 4693 *p->tdlut_bytes_to_deliver = (unsigned int) (p->cursor_buffer_size * 1024.0); 4691 4694 }
+1
drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
··· 953 953 out->SourcePixelFormat[location] = dml_420_10; 954 954 break; 955 955 case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616: 956 + case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616: 956 957 case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616F: 957 958 case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616F: 958 959 out->SourcePixelFormat[location] = dml_444_64;
+1 -1
drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
··· 1225 1225 return; 1226 1226 1227 1227 if (link->local_sink && link->local_sink->sink_signal == SIGNAL_TYPE_EDP) { 1228 - if (!link->skip_implict_edp_power_control) 1228 + if (!link->skip_implict_edp_power_control && hws) 1229 1229 hws->funcs.edp_backlight_control(link, false); 1230 1230 link->dc->hwss.set_abm_immediate_disable(pipe_ctx); 1231 1231 }
+28
drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
··· 1047 1047 if (dc->caps.sequential_ono) { 1048 1048 update_state->pg_pipe_res_update[PG_HUBP][pipe_ctx->stream_res.dsc->inst] = false; 1049 1049 update_state->pg_pipe_res_update[PG_DPP][pipe_ctx->stream_res.dsc->inst] = false; 1050 + 1051 + /* All HUBP/DPP instances must be powered if the DSC inst != HUBP inst */ 1052 + if (!pipe_ctx->top_pipe && pipe_ctx->plane_res.hubp && 1053 + pipe_ctx->plane_res.hubp->inst != pipe_ctx->stream_res.dsc->inst) { 1054 + for (j = 0; j < dc->res_pool->pipe_count; ++j) { 1055 + update_state->pg_pipe_res_update[PG_HUBP][j] = false; 1056 + update_state->pg_pipe_res_update[PG_DPP][j] = false; 1057 + } 1058 + } 1050 1059 } 1051 1060 } 1052 1061 ··· 1202 1193 update_state->pg_pipe_res_update[PG_HDMISTREAM][0] = true; 1203 1194 1204 1195 if (dc->caps.sequential_ono) { 1196 + for (i = 0; i < dc->res_pool->pipe_count; i++) { 1197 + struct pipe_ctx *new_pipe = &context->res_ctx.pipe_ctx[i]; 1198 + 1199 + if (new_pipe->stream_res.dsc && !new_pipe->top_pipe && 1200 + update_state->pg_pipe_res_update[PG_DSC][new_pipe->stream_res.dsc->inst]) { 1201 + update_state->pg_pipe_res_update[PG_HUBP][new_pipe->stream_res.dsc->inst] = true; 1202 + update_state->pg_pipe_res_update[PG_DPP][new_pipe->stream_res.dsc->inst] = true; 1203 + 1204 + /* All HUBP/DPP instances must be powered if the DSC inst != HUBP inst */ 1205 + if (new_pipe->plane_res.hubp && 1206 + new_pipe->plane_res.hubp->inst != new_pipe->stream_res.dsc->inst) { 1207 + for (j = 0; j < dc->res_pool->pipe_count; ++j) { 1208 + update_state->pg_pipe_res_update[PG_HUBP][j] = true; 1209 + update_state->pg_pipe_res_update[PG_DPP][j] = true; 1210 + } 1211 + } 1212 + } 1213 + } 1214 + 1205 1215 for (i = dc->res_pool->pipe_count - 1; i >= 0; i--) { 1206 1216 if (update_state->pg_pipe_res_update[PG_HUBP][i] && 1207 1217 update_state->pg_pipe_res_update[PG_DPP][i]) {
+3
drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
··· 1954 1954 dc->caps.color.mpc.ogam_rom_caps.hlg = 0; 1955 1955 dc->caps.color.mpc.ocsc = 1; 1956 1956 1957 + dc->caps.num_of_host_routers = 2; 1958 + dc->caps.num_of_dpias_per_host_router = 2; 1959 + 1957 1960 /* Use pipe context based otg sync logic */ 1958 1961 dc->config.use_pipe_ctx_sync_logic = true; 1959 1962 dc->config.disable_hbr_audio_dp2 = true;
+3
drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
··· 1885 1885 1886 1886 dc->caps.max_disp_clock_khz_at_vmin = 650000; 1887 1887 1888 + dc->caps.num_of_host_routers = 2; 1889 + dc->caps.num_of_dpias_per_host_router = 2; 1890 + 1888 1891 /* Use pipe context based otg sync logic */ 1889 1892 dc->config.use_pipe_ctx_sync_logic = true; 1890 1893
+3
drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
··· 1894 1894 dc->caps.color.mpc.ogam_rom_caps.hlg = 0; 1895 1895 dc->caps.color.mpc.ocsc = 1; 1896 1896 1897 + dc->caps.num_of_host_routers = 2; 1898 + dc->caps.num_of_dpias_per_host_router = 2; 1899 + 1897 1900 /* max_disp_clock_khz_at_vmin is slightly lower than the STA value in order 1898 1901 * to provide some margin. 1899 1902 * It's expected for furture ASIC to have equal or higher value, in order to
+3
drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
··· 1866 1866 dc->caps.color.mpc.ogam_rom_caps.hlg = 0; 1867 1867 dc->caps.color.mpc.ocsc = 1; 1868 1868 1869 + dc->caps.num_of_host_routers = 2; 1870 + dc->caps.num_of_dpias_per_host_router = 2; 1871 + 1869 1872 /* max_disp_clock_khz_at_vmin is slightly lower than the STA value in order 1870 1873 * to provide some margin. 1871 1874 * It's expected for furture ASIC to have equal or higher value, in order to
+3
drivers/gpu/drm/amd/display/dc/resource/dcn36/dcn36_resource.c
··· 1867 1867 dc->caps.color.mpc.ogam_rom_caps.hlg = 0; 1868 1868 dc->caps.color.mpc.ocsc = 1; 1869 1869 1870 + dc->caps.num_of_host_routers = 2; 1871 + dc->caps.num_of_dpias_per_host_router = 2; 1872 + 1870 1873 /* max_disp_clock_khz_at_vmin is slightly lower than the STA value in order 1871 1874 * to provide some margin. 1872 1875 * It's expected for furture ASIC to have equal or higher value, in order to
+9 -3
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
··· 58 58 59 59 MODULE_FIRMWARE("amdgpu/aldebaran_smc.bin"); 60 60 MODULE_FIRMWARE("amdgpu/smu_13_0_0.bin"); 61 + MODULE_FIRMWARE("amdgpu/smu_13_0_0_kicker.bin"); 61 62 MODULE_FIRMWARE("amdgpu/smu_13_0_7.bin"); 62 63 MODULE_FIRMWARE("amdgpu/smu_13_0_10.bin"); 63 64 ··· 93 92 int smu_v13_0_init_microcode(struct smu_context *smu) 94 93 { 95 94 struct amdgpu_device *adev = smu->adev; 96 - char ucode_prefix[15]; 95 + char ucode_prefix[30]; 97 96 int err = 0; 98 97 const struct smc_firmware_header_v1_0 *hdr; 99 98 const struct common_firmware_header *header; ··· 104 103 return 0; 105 104 106 105 amdgpu_ucode_ip_version_decode(adev, MP1_HWIP, ucode_prefix, sizeof(ucode_prefix)); 107 - err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED, 108 - "amdgpu/%s.bin", ucode_prefix); 106 + 107 + if (amdgpu_is_kicker_fw(adev)) 108 + err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED, 109 + "amdgpu/%s_kicker.bin", ucode_prefix); 110 + else 111 + err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED, 112 + "amdgpu/%s.bin", ucode_prefix); 109 113 if (err) 110 114 goto out; 111 115
+1 -1
drivers/gpu/drm/arm/malidp_planes.c
··· 159 159 } 160 160 161 161 if (!fourcc_mod_is_vendor(modifier, ARM)) { 162 - DRM_ERROR("Unknown modifier (not Arm)\n"); 162 + DRM_DEBUG_KMS("Unknown modifier (not Arm)\n"); 163 163 return false; 164 164 } 165 165
-1
drivers/gpu/drm/ast/ast_mode.c
··· 29 29 */ 30 30 31 31 #include <linux/delay.h> 32 - #include <linux/export.h> 33 32 #include <linux/pci.h> 34 33 35 34 #include <drm/drm_atomic.h>
+60 -9
drivers/gpu/drm/bridge/ti-sn65dsi86.c
··· 348 348
 	 * 200 ms. We'll assume that the panel driver will have the hardcoded
 	 * delay in its prepare and always disable HPD.
 	 *
-	 * If HPD somehow makes sense on some future panel we'll have to
-	 * change this to be conditional on someone specifying that HPD should
-	 * be used.
+	 * For DisplayPort bridge type, we need HPD. So we use the bridge type
+	 * to conditionally disable HPD.
+	 * NOTE: The bridge type is set in ti_sn_bridge_probe() but enable_comms()
+	 * can be called before. So for DisplayPort, HPD will be enabled once
+	 * bridge type is set. We are using bridge type instead of "no-hpd"
+	 * property because it is not used properly in devicetree description
+	 * and hence is unreliable.
 	 */
-	regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG, HPD_DISABLE,
-			   HPD_DISABLE);
+
+	if (pdata->bridge.type != DRM_MODE_CONNECTOR_DisplayPort)
+		regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG, HPD_DISABLE,
+				   HPD_DISABLE);
 
 	pdata->comms_enabled = true;
 
··· 1201 1195
 	struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
 	int val = 0;
 
-	pm_runtime_get_sync(pdata->dev);
+	/*
+	 * Runtime reference is grabbed in ti_sn_bridge_hpd_enable()
+	 * as the chip won't report HPD just after being powered on.
+	 * HPD_DEBOUNCED_STATE reflects correct state only after the
+	 * debounce time (~100-400 ms).
+	 */
+
 	regmap_read(pdata->regmap, SN_HPD_DISABLE_REG, &val);
-	pm_runtime_put_autosuspend(pdata->dev);
 
 	return val & HPD_DEBOUNCED_STATE ? connector_status_connected
 					 : connector_status_disconnected;
··· 1231 1220
 	debugfs_create_file("status", 0600, debugfs, pdata, &status_fops);
 }
 
+static void ti_sn_bridge_hpd_enable(struct drm_bridge *bridge)
+{
+	struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
+
+	/*
+	 * Device needs to be powered on before reading the HPD state
+	 * for reliable hpd detection in ti_sn_bridge_detect() due to
+	 * the high debounce time.
+	 */
+
+	pm_runtime_get_sync(pdata->dev);
+}
+
+static void ti_sn_bridge_hpd_disable(struct drm_bridge *bridge)
+{
+	struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
+
+	pm_runtime_put_autosuspend(pdata->dev);
+}
+
 static const struct drm_bridge_funcs ti_sn_bridge_funcs = {
 	.attach = ti_sn_bridge_attach,
 	.detach = ti_sn_bridge_detach,
··· 1265 1234
 	.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
 	.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
 	.debugfs_init = ti_sn65dsi86_debugfs_init,
+	.hpd_enable = ti_sn_bridge_hpd_enable,
+	.hpd_disable = ti_sn_bridge_hpd_disable,
 };
 
 static void ti_sn_bridge_parse_lanes(struct ti_sn65dsi86 *pdata,
··· 1354 1321
 	pdata->bridge.type = pdata->next_bridge->type == DRM_MODE_CONNECTOR_DisplayPort
 			   ? DRM_MODE_CONNECTOR_DisplayPort : DRM_MODE_CONNECTOR_eDP;
 
-	if (pdata->bridge.type == DRM_MODE_CONNECTOR_DisplayPort)
-		pdata->bridge.ops = DRM_BRIDGE_OP_EDID | DRM_BRIDGE_OP_DETECT;
+	if (pdata->bridge.type == DRM_MODE_CONNECTOR_DisplayPort) {
+		pdata->bridge.ops = DRM_BRIDGE_OP_EDID | DRM_BRIDGE_OP_DETECT |
+				    DRM_BRIDGE_OP_HPD;
+		/*
+		 * If comms were already enabled they would have been enabled
+		 * with the wrong value of HPD_DISABLE. Update it now. Comms
+		 * could be enabled if anyone is holding a pm_runtime reference
+		 * (like if a GPIO is in use). Note that in most cases nobody
+		 * is doing AUX channel xfers before the bridge is added so
+		 * HPD doesn't _really_ matter then. The only exception is in
+		 * the eDP case where the panel wants to read the EDID before
+		 * the bridge is added. We always consistently have HPD disabled
+		 * for eDP.
+		 */
+		mutex_lock(&pdata->comms_mutex);
+		if (pdata->comms_enabled)
+			regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG,
+					   HPD_DISABLE, 0);
+		mutex_unlock(&pdata->comms_mutex);
+	};
 
 	drm_bridge_add(&pdata->bridge);
 
+5 -2
drivers/gpu/drm/display/drm_bridge_connector.c
··· 708 708 if (bridge_connector->bridge_hdmi_audio || 709 709 bridge_connector->bridge_dp_audio) { 710 710 struct device *dev; 711 + struct drm_bridge *bridge; 711 712 712 713 if (bridge_connector->bridge_hdmi_audio) 713 - dev = bridge_connector->bridge_hdmi_audio->hdmi_audio_dev; 714 + bridge = bridge_connector->bridge_hdmi_audio; 714 715 else 715 - dev = bridge_connector->bridge_dp_audio->hdmi_audio_dev; 716 + bridge = bridge_connector->bridge_dp_audio; 717 + 718 + dev = bridge->hdmi_audio_dev; 716 719 717 720 ret = drm_connector_hdmi_audio_init(connector, dev, 718 721 &drm_bridge_connector_hdmi_audio_funcs,
+1 -1
drivers/gpu/drm/display/drm_dp_helper.c
··· 725 725 * monitor doesn't power down exactly after the throw away read. 726 726 */ 727 727 if (!aux->is_remote) { 728 - ret = drm_dp_dpcd_probe(aux, DP_DPCD_REV); 728 + ret = drm_dp_dpcd_probe(aux, DP_LANE0_1_STATUS); 729 729 if (ret < 0) 730 730 return ret; 731 731 }
+4 -3
drivers/gpu/drm/drm_writeback.c
··· 343 343 /** 344 344 * drm_writeback_connector_cleanup - Cleanup the writeback connector 345 345 * @dev: DRM device 346 - * @wb_connector: Pointer to the writeback connector to clean up 346 + * @data: Pointer to the writeback connector to clean up 347 347 * 348 348 * This will decrement the reference counter of blobs and destroy properties. It 349 349 * will also clean the remaining jobs in this writeback connector. Caution: This helper will not 350 350 * clean up the attached encoder and the drm_connector. 351 351 */ 352 352 static void drm_writeback_connector_cleanup(struct drm_device *dev, 353 - struct drm_writeback_connector *wb_connector) 353 + void *data) 354 354 { 355 355 unsigned long flags; 356 356 struct drm_writeback_job *pos, *n; 357 + struct drm_writeback_connector *wb_connector = data; 357 358 358 359 delete_writeback_properties(dev); 359 360 drm_property_blob_put(wb_connector->pixel_formats_blob_ptr); ··· 406 405 if (ret) 407 406 return ret; 408 407 409 - ret = drmm_add_action_or_reset(dev, (void *)drm_writeback_connector_cleanup, 408 + ret = drmm_add_action_or_reset(dev, drm_writeback_connector_cleanup, 410 409 wb_connector); 411 410 if (ret) 412 411 return ret;
+4 -1
drivers/gpu/drm/etnaviv/etnaviv_sched.c
··· 35 35 *sched_job) 36 36 { 37 37 struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job); 38 + struct drm_gpu_scheduler *sched = sched_job->sched; 38 39 struct etnaviv_gpu *gpu = submit->gpu; 39 40 u32 dma_addr, primid = 0; 40 41 int change; ··· 90 89 return DRM_GPU_SCHED_STAT_NOMINAL; 91 90 92 91 out_no_timeout: 93 - list_add(&sched_job->list, &sched_job->sched->pending_list); 92 + spin_lock(&sched->job_list_lock); 93 + list_add(&sched_job->list, &sched->pending_list); 94 + spin_unlock(&sched->job_list_lock); 94 95 return DRM_GPU_SCHED_STAT_NOMINAL; 95 96 } 96 97
+2 -2
drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.c
··· 103 103 DIV_ROUND_DOWN_ULL(curve_1_interpolated, CURVE0_MULTIPLIER))); 104 104 105 105 ana_cp_int_temp = 106 - DIV_ROUND_CLOSEST_ULL(DIV_ROUND_DOWN_ULL(adjusted_vco_clk1, curve_2_scaled1), 107 - CURVE2_MULTIPLIER); 106 + DIV64_U64_ROUND_CLOSEST(DIV_ROUND_DOWN_ULL(adjusted_vco_clk1, curve_2_scaled1), 107 + CURVE2_MULTIPLIER); 108 108 109 109 *ana_cp_int = max(1, min(ana_cp_int_temp, 127)); 110 110
+2 -2
drivers/gpu/drm/i915/display/vlv_dsi.c
··· 1056 1056 BXT_MIPI_TRANS_VACTIVE(port)); 1057 1057 adjusted_mode->crtc_vtotal = 1058 1058 intel_de_read(display, 1059 - BXT_MIPI_TRANS_VTOTAL(port)); 1059 + BXT_MIPI_TRANS_VTOTAL(port)) + 1; 1060 1060 1061 1061 hactive = adjusted_mode->crtc_hdisplay; 1062 1062 hfp = intel_de_read(display, MIPI_HFP_COUNT(display, port)); ··· 1260 1260 intel_de_write(display, BXT_MIPI_TRANS_VACTIVE(port), 1261 1261 adjusted_mode->crtc_vdisplay); 1262 1262 intel_de_write(display, BXT_MIPI_TRANS_VTOTAL(port), 1263 - adjusted_mode->crtc_vtotal); 1263 + adjusted_mode->crtc_vtotal - 1); 1264 1264 } 1265 1265 1266 1266 intel_de_write(display, MIPI_HACTIVE_AREA_COUNT(display, port),
+3 -3
drivers/gpu/drm/i915/i915_pmu.c
··· 108 108 return other_bit(config); 109 109 } 110 110 111 - static u32 config_mask(const u64 config) 111 + static __always_inline u32 config_mask(const u64 config) 112 112 { 113 113 unsigned int bit = config_bit(config); 114 114 115 - if (__builtin_constant_p(config)) 115 + if (__builtin_constant_p(bit)) 116 116 BUILD_BUG_ON(bit > 117 117 BITS_PER_TYPE(typeof_member(struct i915_pmu, 118 118 enable)) - 1); ··· 121 121 BITS_PER_TYPE(typeof_member(struct i915_pmu, 122 122 enable)) - 1); 123 123 124 - return BIT(config_bit(config)); 124 + return BIT(bit); 125 125 } 126 126 127 127 static bool is_engine_event(struct perf_event *event)
-1
drivers/gpu/drm/mgag200/mgag200_ddc.c
··· 26 26 * Authors: Dave Airlie <airlied@redhat.com> 27 27 */ 28 28 29 - #include <linux/export.h> 30 29 #include <linux/i2c-algo-bit.h> 31 30 #include <linux/i2c.h> 32 31 #include <linux/pci.h>
-5
drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
··· 71 71 return 0; 72 72 } 73 73 74 - static void a2xx_gpummu_resume_translation(struct msm_mmu *mmu) 75 - { 76 - } 77 - 78 74 static void a2xx_gpummu_destroy(struct msm_mmu *mmu) 79 75 { 80 76 struct a2xx_gpummu *gpummu = to_a2xx_gpummu(mmu); ··· 86 90 .map = a2xx_gpummu_map, 87 91 .unmap = a2xx_gpummu_unmap, 88 92 .destroy = a2xx_gpummu_destroy, 89 - .resume_translation = a2xx_gpummu_resume_translation, 90 93 }; 91 94 92 95 struct msm_mmu *a2xx_gpummu_new(struct device *dev, struct msm_gpu *gpu)
+2
drivers/gpu/drm/msm/adreno/a5xx_gpu.c
··· 131 131 struct msm_ringbuffer *ring = submit->ring; 132 132 unsigned int i, ibs = 0; 133 133 134 + adreno_check_and_reenable_stall(adreno_gpu); 135 + 134 136 if (IS_ENABLED(CONFIG_DRM_MSM_GPU_SUDO) && submit->in_rb) { 135 137 ring->cur_ctx_seqno = 0; 136 138 a5xx_submit_in_rb(gpu, submit);
+18
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
··· 130 130 OUT_RING(ring, lower_32_bits(rbmemptr(ring, fence))); 131 131 OUT_RING(ring, upper_32_bits(rbmemptr(ring, fence))); 132 132 OUT_RING(ring, submit->seqno - 1); 133 + 134 + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); 135 + OUT_RING(ring, CP_SET_THREAD_BOTH); 136 + 137 + /* Reset state used to synchronize BR and BV */ 138 + OUT_PKT7(ring, CP_RESET_CONTEXT_STATE, 1); 139 + OUT_RING(ring, 140 + CP_RESET_CONTEXT_STATE_0_CLEAR_ON_CHIP_TS | 141 + CP_RESET_CONTEXT_STATE_0_CLEAR_RESOURCE_TABLE | 142 + CP_RESET_CONTEXT_STATE_0_CLEAR_BV_BR_COUNTER | 143 + CP_RESET_CONTEXT_STATE_0_RESET_GLOBAL_LOCAL_TS); 144 + 145 + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); 146 + OUT_RING(ring, CP_SET_THREAD_BR); 133 147 } 134 148 135 149 if (!sysprof) { ··· 225 211 struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 226 212 struct msm_ringbuffer *ring = submit->ring; 227 213 unsigned int i, ibs = 0; 214 + 215 + adreno_check_and_reenable_stall(adreno_gpu); 228 216 229 217 a6xx_set_pagetable(a6xx_gpu, ring, submit); 230 218 ··· 350 334 struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 351 335 struct msm_ringbuffer *ring = submit->ring; 352 336 unsigned int i, ibs = 0; 337 + 338 + adreno_check_and_reenable_stall(adreno_gpu); 353 339 354 340 /* 355 341 * Toggle concurrent binning for pagetable switch and set the thread to
+29 -10
drivers/gpu/drm/msm/adreno/adreno_device.c
··· 137 137 return NULL; 138 138 } 139 139 140 - static int find_chipid(struct device *dev, uint32_t *chipid) 140 + static int find_chipid(struct device_node *node, uint32_t *chipid) 141 141 { 142 - struct device_node *node = dev->of_node; 143 142 const char *compat; 144 143 int ret; 145 144 ··· 172 173 /* and if that fails, fall back to legacy "qcom,chipid" property: */ 173 174 ret = of_property_read_u32(node, "qcom,chipid", chipid); 174 175 if (ret) { 175 - DRM_DEV_ERROR(dev, "could not parse qcom,chipid: %d\n", ret); 176 + DRM_ERROR("%pOF: could not parse qcom,chipid: %d\n", 177 + node, ret); 176 178 return ret; 177 179 } 178 180 179 - dev_warn(dev, "Using legacy qcom,chipid binding!\n"); 181 + pr_warn("%pOF: Using legacy qcom,chipid binding!\n", node); 180 182 181 183 return 0; 184 + } 185 + 186 + bool adreno_has_gpu(struct device_node *node) 187 + { 188 + const struct adreno_info *info; 189 + uint32_t chip_id; 190 + int ret; 191 + 192 + ret = find_chipid(node, &chip_id); 193 + if (ret) 194 + return false; 195 + 196 + info = adreno_info(chip_id); 197 + if (!info) { 198 + pr_warn("%pOF: Unknown GPU revision: %"ADRENO_CHIPID_FMT"\n", 199 + node, ADRENO_CHIPID_ARGS(chip_id)); 200 + return false; 201 + } 202 + 203 + return true; 182 204 } 183 205 184 206 static int adreno_bind(struct device *dev, struct device *master, void *data) ··· 211 191 struct msm_gpu *gpu; 212 192 int ret; 213 193 214 - ret = find_chipid(dev, &config.chip_id); 215 - if (ret) 194 + ret = find_chipid(dev->of_node, &config.chip_id); 195 + /* We shouldn't have gotten this far if we can't parse the chip_id */ 196 + if (WARN_ON(ret)) 216 197 return ret; 217 198 218 199 dev->platform_data = &config; 219 200 priv->gpu_pdev = to_platform_device(dev); 220 201 221 202 info = adreno_info(config.chip_id); 222 - if (!info) { 223 - dev_warn(drm->dev, "Unknown GPU revision: %"ADRENO_CHIPID_FMT"\n", 224 - ADRENO_CHIPID_ARGS(config.chip_id)); 203 + /* We shouldn't have gotten this far if we don't recognize the GPU: */ 204 + if (WARN_ON(!info)) 225 205 return -ENXIO; 226 - } 227 206 228 207 config.info = info; 229 208
+43 -11
drivers/gpu/drm/msm/adreno/adreno_gpu.c
··· 259 259 return BIT(ttbr1_cfg->ias) - ADRENO_VM_START; 260 260 } 261 261 262 + void adreno_check_and_reenable_stall(struct adreno_gpu *adreno_gpu) 263 + { 264 + struct msm_gpu *gpu = &adreno_gpu->base; 265 + struct msm_drm_private *priv = gpu->dev->dev_private; 266 + unsigned long flags; 267 + 268 + /* 269 + * Wait until the cooldown period has passed and we would actually 270 + * collect a crashdump to re-enable stall-on-fault. 271 + */ 272 + spin_lock_irqsave(&priv->fault_stall_lock, flags); 273 + if (!priv->stall_enabled && 274 + ktime_after(ktime_get(), priv->stall_reenable_time) && 275 + !READ_ONCE(gpu->crashstate)) { 276 + priv->stall_enabled = true; 277 + 278 + gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, true); 279 + } 280 + spin_unlock_irqrestore(&priv->fault_stall_lock, flags); 281 + } 282 + 262 283 #define ARM_SMMU_FSR_TF BIT(1) 263 284 #define ARM_SMMU_FSR_PF BIT(3) 264 285 #define ARM_SMMU_FSR_EF BIT(4) 286 + #define ARM_SMMU_FSR_SS BIT(30) 265 287 266 288 int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, 267 289 struct adreno_smmu_fault_info *info, const char *block, 268 290 u32 scratch[4]) 269 291 { 292 + struct msm_drm_private *priv = gpu->dev->dev_private; 270 293 const char *type = "UNKNOWN"; 271 - bool do_devcoredump = info && !READ_ONCE(gpu->crashstate); 294 + bool do_devcoredump = info && (info->fsr & ARM_SMMU_FSR_SS) && 295 + !READ_ONCE(gpu->crashstate); 296 + unsigned long irq_flags; 272 297 273 298 /* 274 - * If we aren't going to be resuming later from fault_worker, then do 275 - * it now. 299 + * In case there is a subsequent storm of pagefaults, disable 300 + * stall-on-fault for at least half a second. 
276 301 */ 277 - if (!do_devcoredump) { 278 - gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu); 302 + spin_lock_irqsave(&priv->fault_stall_lock, irq_flags); 303 + if (priv->stall_enabled) { 304 + priv->stall_enabled = false; 305 + 306 + gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, false); 279 307 } 308 + priv->stall_reenable_time = ktime_add_ms(ktime_get(), 500); 309 + spin_unlock_irqrestore(&priv->fault_stall_lock, irq_flags); 280 310 281 311 /* 282 312 * Print a default message if we couldn't get the data from the ··· 334 304 scratch[0], scratch[1], scratch[2], scratch[3]); 335 305 336 306 if (do_devcoredump) { 307 + struct msm_gpu_fault_info fault_info = {}; 308 + 337 309 /* Turn off the hangcheck timer to keep it from bothering us */ 338 310 timer_delete(&gpu->hangcheck_timer); 339 311 340 - gpu->fault_info.ttbr0 = info->ttbr0; 341 - gpu->fault_info.iova = iova; 342 - gpu->fault_info.flags = flags; 343 - gpu->fault_info.type = type; 344 - gpu->fault_info.block = block; 312 + fault_info.ttbr0 = info->ttbr0; 313 + fault_info.iova = iova; 314 + fault_info.flags = flags; 315 + fault_info.type = type; 316 + fault_info.block = block; 345 317 346 - kthread_queue_work(gpu->worker, &gpu->fault_work); 318 + msm_gpu_fault_crashstate_capture(gpu, &fault_info); 347 319 } 348 320 349 321 return 0;
+2
drivers/gpu/drm/msm/adreno/adreno_gpu.h
··· 636 636 struct adreno_smmu_fault_info *info, const char *block, 637 637 u32 scratch[4]); 638 638 639 + void adreno_check_and_reenable_stall(struct adreno_gpu *gpu); 640 + 639 641 int adreno_read_speedbin(struct device *dev, u32 *speedbin); 640 642 641 643 /*
+9 -5
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
··· 94 94 timing->vsync_polarity = 0; 95 95 } 96 96 97 - /* for DP/EDP, Shift timings to align it to bottom right */ 98 - if (phys_enc->hw_intf->cap->type == INTF_DP) { 97 + timing->wide_bus_en = dpu_encoder_is_widebus_enabled(phys_enc->parent); 98 + timing->compression_en = dpu_encoder_is_dsc_enabled(phys_enc->parent); 99 + 100 + /* 101 + * For DP/EDP, Shift timings to align it to bottom right. 102 + * wide_bus_en is set for everything excluding SDM845 & 103 + * porch changes cause DisplayPort failure and HDMI tearing. 104 + */ 105 + if (phys_enc->hw_intf->cap->type == INTF_DP && timing->wide_bus_en) { 99 106 timing->h_back_porch += timing->h_front_porch; 100 107 timing->h_front_porch = 0; 101 108 timing->v_back_porch += timing->v_front_porch; 102 109 timing->v_front_porch = 0; 103 110 } 104 - 105 - timing->wide_bus_en = dpu_encoder_is_widebus_enabled(phys_enc->parent); 106 - timing->compression_en = dpu_encoder_is_dsc_enabled(phys_enc->parent); 107 111 108 112 /* 109 113 * for DP, divide the horizonal parameters by 2 when
+6 -1
drivers/gpu/drm/msm/dp/dp_display.c
··· 128 128 {} 129 129 }; 130 130 131 + static const struct msm_dp_desc msm_dp_desc_sdm845[] = { 132 + { .io_start = 0x0ae90000, .id = MSM_DP_CONTROLLER_0 }, 133 + {} 134 + }; 135 + 131 136 static const struct msm_dp_desc msm_dp_desc_sc7180[] = { 132 137 { .io_start = 0x0ae90000, .id = MSM_DP_CONTROLLER_0, .wide_bus_supported = true }, 133 138 {} ··· 185 180 { .compatible = "qcom,sc8180x-edp", .data = &msm_dp_desc_sc8180x }, 186 181 { .compatible = "qcom,sc8280xp-dp", .data = &msm_dp_desc_sc8280xp }, 187 182 { .compatible = "qcom,sc8280xp-edp", .data = &msm_dp_desc_sc8280xp }, 188 - { .compatible = "qcom,sdm845-dp", .data = &msm_dp_desc_sc7180 }, 183 + { .compatible = "qcom,sdm845-dp", .data = &msm_dp_desc_sdm845 }, 189 184 { .compatible = "qcom,sm8350-dp", .data = &msm_dp_desc_sc7180 }, 190 185 { .compatible = "qcom,sm8650-dp", .data = &msm_dp_desc_sm8650 }, 191 186 { .compatible = "qcom,x1e80100-dp", .data = &msm_dp_desc_x1e80100 },
+7
drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
··· 704 704 /* TODO: Remove this when we have proper display handover support */ 705 705 msm_dsi_phy_pll_save_state(phy); 706 706 707 + /* 708 + * Store also proper vco_current_rate, because its value will be used in 709 + * dsi_10nm_pll_restore_state(). 710 + */ 711 + if (!dsi_pll_10nm_vco_recalc_rate(&pll_10nm->clk_hw, VCO_REF_CLK_RATE)) 712 + pll_10nm->vco_current_rate = pll_10nm->phy->cfg->min_pll_rate; 713 + 707 714 return 0; 708 715 } 709 716
+32
drivers/gpu/drm/msm/msm_debugfs.c
··· 208 208 shrink_get, shrink_set, 209 209 "0x%08llx\n"); 210 210 211 + /* 212 + * Return the number of microseconds to wait until stall-on-fault is 213 + * re-enabled. If 0 then it is already enabled or will be re-enabled on the 214 + * next submit (unless there's a leftover devcoredump). This is useful for 215 + * kernel tests that intentionally produce a fault and check the devcoredump to 216 + * wait until the cooldown period is over. 217 + */ 218 + 219 + static int 220 + stall_reenable_time_get(void *data, u64 *val) 221 + { 222 + struct msm_drm_private *priv = data; 223 + unsigned long irq_flags; 224 + 225 + spin_lock_irqsave(&priv->fault_stall_lock, irq_flags); 226 + 227 + if (priv->stall_enabled) 228 + *val = 0; 229 + else 230 + *val = max(ktime_us_delta(priv->stall_reenable_time, ktime_get()), 0); 231 + 232 + spin_unlock_irqrestore(&priv->fault_stall_lock, irq_flags); 233 + 234 + return 0; 235 + } 236 + 237 + DEFINE_DEBUGFS_ATTRIBUTE(stall_reenable_time_fops, 238 + stall_reenable_time_get, NULL, 239 + "%lld\n"); 211 240 212 241 static int msm_gem_show(struct seq_file *m, void *arg) 213 242 { ··· 347 318 348 319 debugfs_create_bool("disable_err_irq", 0600, minor->debugfs_root, 349 320 &priv->disable_err_irq); 321 + 322 + debugfs_create_file("stall_reenable_time_us", 0400, minor->debugfs_root, 323 + priv, &stall_reenable_time_fops); 350 324 351 325 gpu_devfreq = debugfs_create_dir("devfreq", minor->debugfs_root); 352 326
+7 -3
drivers/gpu/drm/msm/msm_drv.c
··· 245 245 drm_gem_lru_init(&priv->lru.willneed, &priv->lru.lock); 246 246 drm_gem_lru_init(&priv->lru.dontneed, &priv->lru.lock); 247 247 248 + /* Initialize stall-on-fault */ 249 + spin_lock_init(&priv->fault_stall_lock); 250 + priv->stall_enabled = true; 251 + 248 252 /* Teach lockdep about lock ordering wrt. shrinker: */ 249 253 fs_reclaim_acquire(GFP_KERNEL); 250 254 might_lock(&priv->lru.lock); ··· 930 926 * is no external component that we need to add since LVDS is within MDP4 931 927 * itself. 932 928 */ 933 - static int add_components_mdp(struct device *master_dev, 929 + static int add_mdp_components(struct device *master_dev, 934 930 struct component_match **matchptr) 935 931 { 936 932 struct device_node *np = master_dev->of_node; ··· 1034 1030 if (!np) 1035 1031 return 0; 1036 1032 1037 - if (of_device_is_available(np)) 1033 + if (of_device_is_available(np) && adreno_has_gpu(np)) 1038 1034 drm_of_component_match_add(dev, matchptr, component_compare_of, np); 1039 1035 1040 1036 of_node_put(np); ··· 1075 1071 1076 1072 /* Add mdp components if we have KMS. */ 1077 1073 if (kms_init) { 1078 - ret = add_components_mdp(master_dev, &match); 1074 + ret = add_mdp_components(master_dev, &match); 1079 1075 if (ret) 1080 1076 return ret; 1081 1077 }
+23
drivers/gpu/drm/msm/msm_drv.h
··· 222 222 * the sw hangcheck mechanism. 223 223 */ 224 224 bool disable_err_irq; 225 + 226 + /** 227 + * @fault_stall_lock: 228 + * 229 + * Serialize changes to stall-on-fault state. 230 + */ 231 + spinlock_t fault_stall_lock; 232 + 233 + /** 234 + * @stall_reenable_time: 235 + * 236 + * If stall_enabled is false, when to reenable stall-on-fault. 237 + * Protected by @fault_stall_lock. 238 + */ 239 + ktime_t stall_reenable_time; 240 + 241 + /** 242 + * @stall_enabled: 243 + * 244 + * Whether stall-on-fault is currently enabled. Protected by 245 + * @fault_stall_lock. 246 + */ 247 + bool stall_enabled; 225 248 }; 226 249 227 250 const struct msm_format *mdp_get_format(struct msm_kms *kms, uint32_t format, uint64_t modifier);
+15 -2
drivers/gpu/drm/msm/msm_gem_submit.c
··· 85 85 container_of(kref, struct msm_gem_submit, ref); 86 86 unsigned i; 87 87 88 + /* 89 + * In error paths, we could unref the submit without calling 90 + * drm_sched_entity_push_job(), so msm_job_free() will never 91 + * get called. Since drm_sched_job_cleanup() will NULL out 92 + * s_fence, we can use that to detect this case. 93 + */ 94 + if (submit->base.s_fence) 95 + drm_sched_job_cleanup(&submit->base); 96 + 88 97 if (submit->fence_id) { 89 98 spin_lock(&submit->queue->idr_lock); 90 99 idr_remove(&submit->queue->fence_idr, submit->fence_id); ··· 658 649 struct msm_ringbuffer *ring; 659 650 struct msm_submit_post_dep *post_deps = NULL; 660 651 struct drm_syncobj **syncobjs_to_reset = NULL; 652 + struct sync_file *sync_file = NULL; 661 653 int out_fence_fd = -1; 662 654 unsigned i; 663 655 int ret; ··· 868 858 } 869 859 870 860 if (ret == 0 && args->flags & MSM_SUBMIT_FENCE_FD_OUT) { 871 - struct sync_file *sync_file = sync_file_create(submit->user_fence); 861 + sync_file = sync_file_create(submit->user_fence); 872 862 if (!sync_file) { 873 863 ret = -ENOMEM; 874 864 } else { ··· 902 892 out_unlock: 903 893 mutex_unlock(&queue->lock); 904 894 out_post_unlock: 905 - if (ret && (out_fence_fd >= 0)) 895 + if (ret && (out_fence_fd >= 0)) { 906 896 put_unused_fd(out_fence_fd); 897 + if (sync_file) 898 + fput(sync_file->file); 899 + } 907 900 908 901 if (!IS_ERR_OR_NULL(submit)) { 909 902 msm_gem_submit_put(submit);
+9 -11
drivers/gpu/drm/msm/msm_gpu.c
··· 257 257 } 258 258 259 259 static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, 260 - struct msm_gem_submit *submit, char *comm, char *cmd) 260 + struct msm_gem_submit *submit, struct msm_gpu_fault_info *fault_info, 261 + char *comm, char *cmd) 261 262 { 262 263 struct msm_gpu_state *state; 263 264 ··· 277 276 /* Fill in the additional crash state information */ 278 277 state->comm = kstrdup(comm, GFP_KERNEL); 279 278 state->cmd = kstrdup(cmd, GFP_KERNEL); 280 - state->fault_info = gpu->fault_info; 279 + if (fault_info) 280 + state->fault_info = *fault_info; 281 281 282 282 if (submit) { 283 283 int i; ··· 310 308 } 311 309 #else 312 310 static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, 313 - struct msm_gem_submit *submit, char *comm, char *cmd) 311 + struct msm_gem_submit *submit, struct msm_gpu_fault_info *fault_info, 312 + char *comm, char *cmd) 314 313 { 315 314 } 316 315 #endif ··· 408 405 409 406 /* Record the crash state */ 410 407 pm_runtime_get_sync(&gpu->pdev->dev); 411 - msm_gpu_crashstate_capture(gpu, submit, comm, cmd); 408 + msm_gpu_crashstate_capture(gpu, submit, NULL, comm, cmd); 412 409 413 410 kfree(cmd); 414 411 kfree(comm); ··· 462 459 msm_gpu_retire(gpu); 463 460 } 464 461 465 - static void fault_worker(struct kthread_work *work) 462 + void msm_gpu_fault_crashstate_capture(struct msm_gpu *gpu, struct msm_gpu_fault_info *fault_info) 466 463 { 467 - struct msm_gpu *gpu = container_of(work, struct msm_gpu, fault_work); 468 464 struct msm_gem_submit *submit; 469 465 struct msm_ringbuffer *cur_ring = gpu->funcs->active_ring(gpu); 470 466 char *comm = NULL, *cmd = NULL; ··· 486 484 487 485 /* Record the crash state */ 488 486 pm_runtime_get_sync(&gpu->pdev->dev); 489 - msm_gpu_crashstate_capture(gpu, submit, comm, cmd); 487 + msm_gpu_crashstate_capture(gpu, submit, fault_info, comm, cmd); 490 488 pm_runtime_put_sync(&gpu->pdev->dev); 491 489 492 490 kfree(cmd); 493 491 kfree(comm); 494 492 495 493 resume_smmu: 496 - 
memset(&gpu->fault_info, 0, sizeof(gpu->fault_info)); 497 - gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu); 498 - 499 494 mutex_unlock(&gpu->lock); 500 495 } 501 496 ··· 881 882 init_waitqueue_head(&gpu->retire_event); 882 883 kthread_init_work(&gpu->retire_work, retire_worker); 883 884 kthread_init_work(&gpu->recover_work, recover_worker); 884 - kthread_init_work(&gpu->fault_work, fault_worker); 885 885 886 886 priv->hangcheck_period = DRM_MSM_HANGCHECK_DEFAULT_PERIOD; 887 887
+3 -6
drivers/gpu/drm/msm/msm_gpu.h
··· 253 253 #define DRM_MSM_HANGCHECK_PROGRESS_RETRIES 3 254 254 struct timer_list hangcheck_timer; 255 255 256 - /* Fault info for most recent iova fault: */ 257 - struct msm_gpu_fault_info fault_info; 258 - 259 - /* work for handling GPU ioval faults: */ 260 - struct kthread_work fault_work; 261 - 262 256 /* work for handling GPU recovery: */ 263 257 struct kthread_work recover_work; 264 258 ··· 662 668 void msm_gpu_cleanup(struct msm_gpu *gpu); 663 669 664 670 struct msm_gpu *adreno_load_gpu(struct drm_device *dev); 671 + bool adreno_has_gpu(struct device_node *node); 665 672 void __init adreno_register(void); 666 673 void __exit adreno_unregister(void); 667 674 ··· 699 704 700 705 mutex_unlock(&gpu->lock); 701 706 } 707 + 708 + void msm_gpu_fault_crashstate_capture(struct msm_gpu *gpu, struct msm_gpu_fault_info *fault_info); 702 709 703 710 /* 704 711 * Simple macro to semi-cleanly add the MAP_PRIV flag for targets that can
+4 -8
drivers/gpu/drm/msm/msm_iommu.c
··· 345 345 unsigned long iova, int flags, void *arg) 346 346 { 347 347 struct msm_iommu *iommu = arg; 348 - struct msm_mmu *mmu = &iommu->base; 349 348 struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(iommu->base.dev); 350 349 struct adreno_smmu_fault_info info, *ptr = NULL; 351 350 ··· 357 358 return iommu->base.handler(iommu->base.arg, iova, flags, ptr); 358 359 359 360 pr_warn_ratelimited("*** fault: iova=%16lx, flags=%d\n", iova, flags); 360 - 361 - if (mmu->funcs->resume_translation) 362 - mmu->funcs->resume_translation(mmu); 363 361 364 362 return 0; 365 363 } ··· 372 376 return -ENOSYS; 373 377 } 374 378 375 - static void msm_iommu_resume_translation(struct msm_mmu *mmu) 379 + static void msm_iommu_set_stall(struct msm_mmu *mmu, bool enable) 376 380 { 377 381 struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(mmu->dev); 378 382 379 - if (adreno_smmu->resume_translation) 380 - adreno_smmu->resume_translation(adreno_smmu->cookie, true); 383 + if (adreno_smmu->set_stall) 384 + adreno_smmu->set_stall(adreno_smmu->cookie, enable); 381 385 } 382 386 383 387 static void msm_iommu_detach(struct msm_mmu *mmu) ··· 427 431 .map = msm_iommu_map, 428 432 .unmap = msm_iommu_unmap, 429 433 .destroy = msm_iommu_destroy, 430 - .resume_translation = msm_iommu_resume_translation, 434 + .set_stall = msm_iommu_set_stall, 431 435 }; 432 436 433 437 struct msm_mmu *msm_iommu_new(struct device *dev, unsigned long quirks)
+1 -1
drivers/gpu/drm/msm/msm_mmu.h
··· 15 15 size_t len, int prot); 16 16 int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len); 17 17 void (*destroy)(struct msm_mmu *mmu); 18 - void (*resume_translation)(struct msm_mmu *mmu); 18 + void (*set_stall)(struct msm_mmu *mmu, bool enable); 19 19 }; 20 20 21 21 enum msm_mmu_type {
+2 -1
drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
··· 2255 2255 <reg32 offset="0" name="0"> 2256 2256 <bitfield name="CLEAR_ON_CHIP_TS" pos="0" type="boolean"/> 2257 2257 <bitfield name="CLEAR_RESOURCE_TABLE" pos="1" type="boolean"/> 2258 - <bitfield name="CLEAR_GLOBAL_LOCAL_TS" pos="2" type="boolean"/> 2258 + <bitfield name="CLEAR_BV_BR_COUNTER" pos="2" type="boolean"/> 2259 + <bitfield name="RESET_GLOBAL_LOCAL_TS" pos="3" type="boolean"/> 2259 2260 </reg32> 2260 2261 </domain> 2261 2262
+5 -3
drivers/gpu/drm/msm/registers/gen_header.py
··· 11 11 import argparse 12 12 import time 13 13 import datetime 14 + import re 14 15 15 16 class Error(Exception): 16 17 def __init__(self, message): ··· 878 877 """) 879 878 maxlen = 0 880 879 for filepath in p.xml_files: 881 - maxlen = max(maxlen, len(filepath)) 880 + new_filepath = re.sub("^.+drivers","drivers",filepath) 881 + maxlen = max(maxlen, len(new_filepath)) 882 882 for filepath in p.xml_files: 883 - pad = " " * (maxlen - len(filepath)) 883 + pad = " " * (maxlen - len(new_filepath)) 884 884 filesize = str(os.path.getsize(filepath)) 885 885 filesize = " " * (7 - len(filesize)) + filesize 886 886 filetime = time.ctime(os.path.getmtime(filepath)) 887 - print("- " + filepath + pad + " (" + filesize + " bytes, from " + filetime + ")") 887 + print("- " + new_filepath + pad + " (" + filesize + " bytes, from <stripped>)") 888 888 if p.copyright_year: 889 889 current_year = str(datetime.date.today().year) 890 890 print()
+1 -1
drivers/gpu/drm/nouveau/nouveau_backlight.c
··· 42 42 #include "nouveau_acpi.h" 43 43 44 44 static struct ida bl_ida; 45 - #define BL_NAME_SIZE 15 // 12 for name + 2 for digits + 1 for '\0' 45 + #define BL_NAME_SIZE 24 // 12 for name + 11 for digits + 1 for '\0' 46 46 47 47 static bool 48 48 nouveau_get_backlight_name(char backlight_name[BL_NAME_SIZE],
+12 -5
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c
··· 637 637 if (payload_size > max_payload_size) { 638 638 const u32 fn = rpc->function; 639 639 u32 remain_payload_size = payload_size; 640 + void *next; 640 641 641 - /* Adjust length, and send initial RPC. */ 642 - rpc->length = sizeof(*rpc) + max_payload_size; 643 - msg->checksum = rpc->length; 642 + /* Send initial RPC. */ 643 + next = r535_gsp_rpc_get(gsp, fn, max_payload_size); 644 + if (IS_ERR(next)) { 645 + repv = next; 646 + goto done; 647 + } 644 648 645 - repv = r535_gsp_rpc_send(gsp, payload, NVKM_GSP_RPC_REPLY_NOWAIT, 0); 649 + memcpy(next, payload, max_payload_size); 650 + 651 + repv = r535_gsp_rpc_send(gsp, next, NVKM_GSP_RPC_REPLY_NOWAIT, 0); 646 652 if (IS_ERR(repv)) 647 653 goto done; 648 654 ··· 659 653 while (remain_payload_size) { 660 654 u32 size = min(remain_payload_size, 661 655 max_payload_size); 662 - void *next; 663 656 664 657 next = r535_gsp_rpc_get(gsp, NV_VGPU_MSG_FUNCTION_CONTINUATION_RECORD, size); 665 658 if (IS_ERR(next)) { ··· 679 674 /* Wait for reply. */ 680 675 repv = r535_gsp_rpc_handle_reply(gsp, fn, policy, payload_size + 681 676 sizeof(*rpc)); 677 + if (!IS_ERR(repv)) 678 + kvfree(msg); 682 679 } else { 683 680 repv = r535_gsp_rpc_send(gsp, payload, policy, gsp_rpc_len); 684 681 }
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/vmm.c
··· 121 121 page_shift -= desc->bits; 122 122 123 123 ctrl->levels[i].physAddress = pd->pt[0]->addr; 124 - ctrl->levels[i].size = (1 << desc->bits) * desc->size; 124 + ctrl->levels[i].size = BIT_ULL(desc->bits) * desc->size; 125 125 ctrl->levels[i].aperture = 1; 126 126 ctrl->levels[i].pageShift = page_shift; 127 127
+1 -1
drivers/gpu/drm/solomon/ssd130x.c
··· 974 974 975 975 static void ssd132x_clear_screen(struct ssd130x_device *ssd130x, u8 *data_array) 976 976 { 977 - unsigned int columns = DIV_ROUND_UP(ssd130x->height, SSD132X_SEGMENT_WIDTH); 977 + unsigned int columns = DIV_ROUND_UP(ssd130x->width, SSD132X_SEGMENT_WIDTH); 978 978 unsigned int height = ssd130x->height; 979 979 980 980 memset(data_array, 0, columns * height);
+6 -2
drivers/gpu/drm/v3d/v3d_sched.c
··· 199 199 struct v3d_dev *v3d = job->v3d; 200 200 struct v3d_file_priv *file = job->file->driver_priv; 201 201 struct v3d_stats *global_stats = &v3d->queue[queue].stats; 202 - struct v3d_stats *local_stats = &file->stats[queue]; 203 202 u64 now = local_clock(); 204 203 unsigned long flags; 205 204 ··· 208 209 else 209 210 preempt_disable(); 210 211 211 - v3d_stats_update(local_stats, now); 212 + /* Don't update the local stats if the file context has already closed */ 213 + if (file) 214 + v3d_stats_update(&file->stats[queue], now); 215 + else 216 + drm_dbg(&v3d->drm, "The file descriptor was closed before job completion\n"); 217 + 212 218 v3d_stats_update(global_stats, now); 213 219 214 220 if (IS_ENABLED(CONFIG_LOCKDEP))
+2
drivers/gpu/drm/xe/display/xe_display.c
··· 104 104 spin_lock_init(&xe->display.fb_tracking.lock); 105 105 106 106 xe->display.hotplug.dp_wq = alloc_ordered_workqueue("xe-dp", 0); 107 + if (!xe->display.hotplug.dp_wq) 108 + return -ENOMEM; 107 109 108 110 return drmm_add_action_or_reset(&xe->drm, display_destroy, NULL); 109 111 }
+4 -7
drivers/gpu/drm/xe/display/xe_dsb_buffer.c
··· 17 17 18 18 void intel_dsb_buffer_write(struct intel_dsb_buffer *dsb_buf, u32 idx, u32 val) 19 19 { 20 - struct xe_device *xe = dsb_buf->vma->bo->tile->xe; 21 - 22 20 iosys_map_wr(&dsb_buf->vma->bo->vmap, idx * 4, u32, val); 23 - xe_device_l2_flush(xe); 24 21 } 25 22 26 23 u32 intel_dsb_buffer_read(struct intel_dsb_buffer *dsb_buf, u32 idx) ··· 27 30 28 31 void intel_dsb_buffer_memset(struct intel_dsb_buffer *dsb_buf, u32 idx, u32 val, size_t size) 29 32 { 30 - struct xe_device *xe = dsb_buf->vma->bo->tile->xe; 31 - 32 33 WARN_ON(idx > (dsb_buf->buf_size - size) / sizeof(*dsb_buf->cmd_buf)); 33 34 34 35 iosys_map_memset(&dsb_buf->vma->bo->vmap, idx * 4, val, size); 35 - xe_device_l2_flush(xe); 36 36 } 37 37 38 38 bool intel_dsb_buffer_create(struct intel_crtc *crtc, struct intel_dsb_buffer *dsb_buf, size_t size) ··· 68 74 69 75 void intel_dsb_buffer_flush_map(struct intel_dsb_buffer *dsb_buf) 70 76 { 77 + struct xe_device *xe = dsb_buf->vma->bo->tile->xe; 78 + 71 79 /* 72 80 * The memory barrier here is to ensure coherency of DSB vs MMIO, 73 81 * both for weak ordering archs and discrete cards. 74 82 */ 75 - xe_device_wmb(dsb_buf->vma->bo->tile->xe); 83 + xe_device_wmb(xe); 84 + xe_device_l2_flush(xe); 76 85 }
+3 -2
drivers/gpu/drm/xe/display/xe_fb_pin.c
··· 164 164 165 165 vma->dpt = dpt; 166 166 vma->node = dpt->ggtt_node[tile0->id]; 167 + 168 + /* Ensure DPT writes are flushed */ 169 + xe_device_l2_flush(xe); 167 170 return 0; 168 171 } 169 172 ··· 336 333 if (ret) 337 334 goto err_unpin; 338 335 339 - /* Ensure DPT writes are flushed */ 340 - xe_device_l2_flush(xe); 341 336 return vma; 342 337 343 338 err_unpin:
+1
drivers/gpu/drm/xe/regs/xe_mchbar_regs.h
··· 40 40 #define PCU_CR_PACKAGE_RAPL_LIMIT XE_REG(MCHBAR_MIRROR_BASE_SNB + 0x59a0) 41 41 #define PWR_LIM_VAL REG_GENMASK(14, 0) 42 42 #define PWR_LIM_EN REG_BIT(15) 43 + #define PWR_LIM REG_GENMASK(15, 0) 43 44 #define PWR_LIM_TIME REG_GENMASK(23, 17) 44 45 #define PWR_LIM_TIME_X REG_GENMASK(23, 22) 45 46 #define PWR_LIM_TIME_Y REG_GENMASK(21, 17)
+11
drivers/gpu/drm/xe/xe_ggtt.c
··· 201 201 .ggtt_set_pte = xe_ggtt_set_pte_and_flush, 202 202 }; 203 203 204 + static void dev_fini_ggtt(void *arg) 205 + { 206 + struct xe_ggtt *ggtt = arg; 207 + 208 + drain_workqueue(ggtt->wq); 209 + } 210 + 204 211 /** 205 212 * xe_ggtt_init_early - Early GGTT initialization 206 213 * @ggtt: the &xe_ggtt to be initialized ··· 261 254 primelockdep(ggtt); 262 255 263 256 err = drmm_add_action_or_reset(&xe->drm, ggtt_fini_early, ggtt); 257 + if (err) 258 + return err; 259 + 260 + err = devm_add_action_or_reset(xe->drm.dev, dev_fini_ggtt, ggtt); 264 261 if (err) 265 262 return err; 266 263
+1 -1
drivers/gpu/drm/xe/xe_gt.c
··· 118 118 xe_gt_mcr_multicast_write(gt, XE2_GAMREQSTRM_CTRL, reg); 119 119 } 120 120 121 - xe_gt_mcr_multicast_write(gt, XEHPC_L3CLOS_MASK(3), 0x3); 121 + xe_gt_mcr_multicast_write(gt, XEHPC_L3CLOS_MASK(3), 0xF); 122 122 xe_force_wake_put(gt_to_fw(gt), fw_ref); 123 123 } 124 124
+8
drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
··· 138 138 int pending_seqno; 139 139 140 140 /* 141 + * we can get here before the CTs are even initialized if we're wedging 142 + * very early, in which case there are not going to be any pending 143 + * fences so we can bail immediately. 144 + */ 145 + if (!xe_guc_ct_initialized(&gt->uc.guc.ct)) 146 + return; 147 + 148 + /* 141 149 * CT channel is already disabled at this point. No new TLB requests can 142 150 * appear. 143 151 */
+11 -6
drivers/gpu/drm/xe/xe_guc_ct.c
··· 34 34 #include "xe_pm.h" 35 35 #include "xe_trace_guc.h" 36 36 37 + static void receive_g2h(struct xe_guc_ct *ct); 38 + static void g2h_worker_func(struct work_struct *w); 39 + static void safe_mode_worker_func(struct work_struct *w); 40 + static void ct_exit_safe_mode(struct xe_guc_ct *ct); 41 + 37 42 #if IS_ENABLED(CONFIG_DRM_XE_DEBUG) 38 43 enum { 39 44 /* Internal states, not error conditions */ ··· 191 186 { 192 187 struct xe_guc_ct *ct = arg; 193 188 189 + ct_exit_safe_mode(ct); 194 190 destroy_workqueue(ct->g2h_wq); 195 191 xa_destroy(&ct->fence_lookup); 196 192 } 197 - 198 - static void receive_g2h(struct xe_guc_ct *ct); 199 - static void g2h_worker_func(struct work_struct *w); 200 - static void safe_mode_worker_func(struct work_struct *w); 201 193 202 194 static void primelockdep(struct xe_guc_ct *ct) 203 195 { ··· 516 514 */ 517 515 void xe_guc_ct_stop(struct xe_guc_ct *ct) 518 516 { 517 + if (!xe_guc_ct_initialized(ct)) 518 + return; 519 + 519 520 xe_guc_ct_set_state(ct, XE_GUC_CT_STATE_STOPPED); 520 521 stop_g2h_handler(ct); 521 522 } ··· 765 760 u16 seqno; 766 761 int ret; 767 762 768 - xe_gt_assert(gt, ct->state != XE_GUC_CT_STATE_NOT_INITIALIZED); 763 + xe_gt_assert(gt, xe_guc_ct_initialized(ct)); 769 764 xe_gt_assert(gt, !g2h_len || !g2h_fence); 770 765 xe_gt_assert(gt, !num_g2h || !g2h_fence); 771 766 xe_gt_assert(gt, !g2h_len || num_g2h); ··· 1349 1344 u32 action; 1350 1345 u32 *hxg; 1351 1346 1352 - xe_gt_assert(gt, ct->state != XE_GUC_CT_STATE_NOT_INITIALIZED); 1347 + xe_gt_assert(gt, xe_guc_ct_initialized(ct)); 1353 1348 lockdep_assert_held(&ct->fast_lock); 1354 1349 1355 1350 if (ct->state == XE_GUC_CT_STATE_DISABLED)
+5
drivers/gpu/drm/xe/xe_guc_ct.h
··· 22 22 void xe_guc_ct_snapshot_free(struct xe_guc_ct_snapshot *snapshot); 23 23 void xe_guc_ct_print(struct xe_guc_ct *ct, struct drm_printer *p, bool want_ctb); 24 24 25 + static inline bool xe_guc_ct_initialized(struct xe_guc_ct *ct) 26 + { 27 + return ct->state != XE_GUC_CT_STATE_NOT_INITIALIZED; 28 + } 29 + 25 30 static inline bool xe_guc_ct_enabled(struct xe_guc_ct *ct) 26 31 { 27 32 return ct->state == XE_GUC_CT_STATE_ENABLED;
+1 -1
drivers/gpu/drm/xe/xe_guc_pc.c
··· 1068 1068 goto out; 1069 1069 } 1070 1070 1071 - memset(pc->bo->vmap.vaddr, 0, size); 1071 + xe_map_memset(xe, &pc->bo->vmap, 0, 0, size); 1072 1072 slpc_shared_data_write(pc, header.size, size); 1073 1073 1074 1074 earlier = ktime_get();
+3
drivers/gpu/drm/xe/xe_guc_submit.c
··· 1762 1762 { 1763 1763 int ret; 1764 1764 1765 + if (!guc->submission_state.initialized) 1766 + return 0; 1767 + 1765 1768 /* 1766 1769 * Using an atomic here rather than submission_state.lock as this 1767 1770 * function can be called while holding the CT lock (engine reset
+15 -19
drivers/gpu/drm/xe/xe_hwmon.c
··· 159 159 return ret; 160 160 } 161 161 162 - static int xe_hwmon_pcode_write_power_limit(const struct xe_hwmon *hwmon, u32 attr, u8 channel, 163 - u32 uval) 162 + static int xe_hwmon_pcode_rmw_power_limit(const struct xe_hwmon *hwmon, u32 attr, u8 channel, 163 + u32 clr, u32 set) 164 164 { 165 165 struct xe_tile *root_tile = xe_device_get_root_tile(hwmon->xe); 166 166 u32 val0, val1; ··· 179 179 channel, val0, val1, ret); 180 180 181 181 if (attr == PL1_HWMON_ATTR) 182 - val0 = uval; 182 + val0 = (val0 & ~clr) | set; 183 183 else 184 184 return -EIO; 185 185 ··· 339 339 if (hwmon->xe->info.has_mbx_power_limits) { 340 340 drm_dbg(&hwmon->xe->drm, "disabling %s on channel %d\n", 341 341 PWR_ATTR_TO_STR(attr), channel); 342 - xe_hwmon_pcode_write_power_limit(hwmon, attr, channel, 0); 342 + xe_hwmon_pcode_rmw_power_limit(hwmon, attr, channel, PWR_LIM_EN, 0); 343 343 xe_hwmon_pcode_read_power_limit(hwmon, attr, channel, &reg_val); 344 344 } else { 345 345 reg_val = xe_mmio_rmw32(mmio, rapl_limit, PWR_LIM_EN, 0); ··· 370 370 } 371 371 372 372 if (hwmon->xe->info.has_mbx_power_limits) 373 - ret = xe_hwmon_pcode_write_power_limit(hwmon, attr, channel, reg_val); 373 + ret = xe_hwmon_pcode_rmw_power_limit(hwmon, attr, channel, PWR_LIM, reg_val); 374 374 else 375 - reg_val = xe_mmio_rmw32(mmio, rapl_limit, PWR_LIM_EN | PWR_LIM_VAL, 376 - reg_val); 375 + reg_val = xe_mmio_rmw32(mmio, rapl_limit, PWR_LIM, reg_val); 377 376 unlock: 378 377 mutex_unlock(&hwmon->hwmon_lock); 379 378 return ret; ··· 562 563 563 564 mutex_lock(&hwmon->hwmon_lock); 564 565 565 - if (hwmon->xe->info.has_mbx_power_limits) { 566 - ret = xe_hwmon_pcode_read_power_limit(hwmon, power_attr, channel, (u32 *)&r); 567 - r = (r & ~PWR_LIM_TIME) | rxy; 568 - xe_hwmon_pcode_write_power_limit(hwmon, power_attr, channel, r); 569 - } else { 566 + if (hwmon->xe->info.has_mbx_power_limits) 567 + xe_hwmon_pcode_rmw_power_limit(hwmon, power_attr, channel, PWR_LIM_TIME, rxy); 568 + else 570 569 r = xe_mmio_rmw32(mmio, 
xe_hwmon_get_reg(hwmon, REG_PKG_RAPL_LIMIT, channel), 571 570 PWR_LIM_TIME, rxy); 572 - } 573 571 574 572 mutex_unlock(&hwmon->hwmon_lock); 575 573 ··· 1134 1138 } else { 1135 1139 drm_info(&hwmon->xe->drm, "Using mailbox commands for power limits\n"); 1136 1140 /* Write default limits to read from pcode from now on. */ 1137 - xe_hwmon_pcode_write_power_limit(hwmon, PL1_HWMON_ATTR, 1138 - CHANNEL_CARD, 1139 - hwmon->pl1_on_boot[CHANNEL_CARD]); 1140 - xe_hwmon_pcode_write_power_limit(hwmon, PL1_HWMON_ATTR, 1141 - CHANNEL_PKG, 1142 - hwmon->pl1_on_boot[CHANNEL_PKG]); 1141 + xe_hwmon_pcode_rmw_power_limit(hwmon, PL1_HWMON_ATTR, 1142 + CHANNEL_CARD, PWR_LIM | PWR_LIM_TIME, 1143 + hwmon->pl1_on_boot[CHANNEL_CARD]); 1144 + xe_hwmon_pcode_rmw_power_limit(hwmon, PL1_HWMON_ATTR, 1145 + CHANNEL_PKG, PWR_LIM | PWR_LIM_TIME, 1146 + hwmon->pl1_on_boot[CHANNEL_PKG]); 1143 1147 hwmon->scl_shift_power = PWR_UNIT; 1144 1148 hwmon->scl_shift_energy = ENERGY_UNIT; 1145 1149 hwmon->scl_shift_time = TIME_UNIT;
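The xe_hwmon patch above converts a blind power-limit write into a read-modify-write with separate clear and set masks, so callers can update one field (enable bit, limit value, time window) without clobbering the others. The core of that pattern, as a standalone sketch (hypothetical helper name, not the driver's code):

```c
#include <stdint.h>

/* Read-modify-write: clear the bits in `clr`, then set the bits in `set`.
 * Mirrors the `val0 = (val0 & ~clr) | set` step in the patch above. */
static inline uint32_t rmw32(uint32_t old, uint32_t clr, uint32_t set)
{
	return (old & ~clr) | set;
}
```

With this shape, "disable the limit" becomes `rmw32(old, PWR_LIM_EN, 0)` and "replace the whole limit field" becomes `rmw32(old, PWR_LIM, new)`, which is exactly how the patch's call sites read.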
+5
drivers/hid/hid-appletb-kbd.c
··· 438 438 return 0; 439 439 440 440 close_hw: 441 + if (kbd->backlight_dev) 442 + put_device(&kbd->backlight_dev->dev); 441 443 hid_hw_close(hdev); 442 444 stop_hw: 443 445 hid_hw_stop(hdev); ··· 454 452 455 453 input_unregister_handler(&kbd->inp_handler); 456 454 timer_delete_sync(&kbd->inactivity_timer); 455 + 456 + if (kbd->backlight_dev) 457 + put_device(&kbd->backlight_dev->dev); 457 458 458 459 hid_hw_close(hdev); 459 460 hid_hw_stop(hdev);
+6
drivers/hid/hid-ids.h
··· 312 312 #define USB_DEVICE_ID_ASUS_AK1D 0x1125 313 313 #define USB_DEVICE_ID_CHICONY_TOSHIBA_WT10A 0x1408 314 314 #define USB_DEVICE_ID_CHICONY_ACER_SWITCH12 0x1421 315 + #define USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA 0xb824 316 + #define USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA2 0xb82c 315 317 316 318 #define USB_VENDOR_ID_CHUNGHWAT 0x2247 317 319 #define USB_DEVICE_ID_CHUNGHWAT_MULTITOUCH 0x0001 ··· 821 819 #define USB_DEVICE_ID_LENOVO_TPPRODOCK 0x6067 822 820 #define USB_DEVICE_ID_LENOVO_X1_COVER 0x6085 823 821 #define USB_DEVICE_ID_LENOVO_X1_TAB 0x60a3 822 + #define USB_DEVICE_ID_LENOVO_X1_TAB2 0x60a4 824 823 #define USB_DEVICE_ID_LENOVO_X1_TAB3 0x60b5 825 824 #define USB_DEVICE_ID_LENOVO_X12_TAB 0x60fe 826 825 #define USB_DEVICE_ID_LENOVO_X12_TAB2 0x61ae ··· 1527 1524 1528 1525 #define USB_VENDOR_ID_SIGNOTEC 0x2133 1529 1526 #define USB_DEVICE_ID_SIGNOTEC_VIEWSONIC_PD1011 0x0018 1527 + 1528 + #define USB_VENDOR_ID_SMARTLINKTECHNOLOGY 0x4c4a 1529 + #define USB_DEVICE_ID_SMARTLINKTECHNOLOGY_4155 0x4155 1530 1530 1531 1531 #endif
+1 -1
drivers/hid/hid-input.c
··· 2343 2343 } 2344 2344 2345 2345 if (list_empty(&hid->inputs)) { 2346 - hid_err(hid, "No inputs registered, leaving\n"); 2346 + hid_dbg(hid, "No inputs registered, leaving\n"); 2347 2347 goto out_unwind; 2348 2348 } 2349 2349
+15 -4
drivers/hid/hid-lenovo.c
··· 492 492 case USB_DEVICE_ID_LENOVO_X12_TAB: 493 493 case USB_DEVICE_ID_LENOVO_X12_TAB2: 494 494 case USB_DEVICE_ID_LENOVO_X1_TAB: 495 + case USB_DEVICE_ID_LENOVO_X1_TAB2: 495 496 case USB_DEVICE_ID_LENOVO_X1_TAB3: 496 497 return lenovo_input_mapping_x1_tab_kbd(hdev, hi, field, usage, bit, max); 497 498 default: ··· 549 548 550 549 /* 551 550 * Tell the keyboard a driver understands it, and turn F7, F9, F11 into 552 - * regular keys 551 + * regular keys (Compact only) 553 552 */ 554 - ret = lenovo_send_cmd_cptkbd(hdev, 0x01, 0x03); 555 - if (ret) 556 - hid_warn(hdev, "Failed to switch F7/9/11 mode: %d\n", ret); 553 + if (hdev->product == USB_DEVICE_ID_LENOVO_CUSBKBD || 554 + hdev->product == USB_DEVICE_ID_LENOVO_CBTKBD) { 555 + ret = lenovo_send_cmd_cptkbd(hdev, 0x01, 0x03); 556 + if (ret) 557 + hid_warn(hdev, "Failed to switch F7/9/11 mode: %d\n", ret); 558 + } 557 559 558 560 /* Switch middle button to native mode */ 559 561 ret = lenovo_send_cmd_cptkbd(hdev, 0x09, 0x01); ··· 609 605 case USB_DEVICE_ID_LENOVO_X12_TAB2: 610 606 case USB_DEVICE_ID_LENOVO_TP10UBKBD: 611 607 case USB_DEVICE_ID_LENOVO_X1_TAB: 608 + case USB_DEVICE_ID_LENOVO_X1_TAB2: 612 609 case USB_DEVICE_ID_LENOVO_X1_TAB3: 613 610 ret = lenovo_led_set_tp10ubkbd(hdev, TP10UBKBD_FN_LOCK_LED, value); 614 611 if (ret) ··· 866 861 case USB_DEVICE_ID_LENOVO_X12_TAB2: 867 862 case USB_DEVICE_ID_LENOVO_TP10UBKBD: 868 863 case USB_DEVICE_ID_LENOVO_X1_TAB: 864 + case USB_DEVICE_ID_LENOVO_X1_TAB2: 869 865 case USB_DEVICE_ID_LENOVO_X1_TAB3: 870 866 return lenovo_event_tp10ubkbd(hdev, field, usage, value); 871 867 default: ··· 1150 1144 case USB_DEVICE_ID_LENOVO_X12_TAB2: 1151 1145 case USB_DEVICE_ID_LENOVO_TP10UBKBD: 1152 1146 case USB_DEVICE_ID_LENOVO_X1_TAB: 1147 + case USB_DEVICE_ID_LENOVO_X1_TAB2: 1153 1148 case USB_DEVICE_ID_LENOVO_X1_TAB3: 1154 1149 ret = lenovo_led_set_tp10ubkbd(hdev, tp10ubkbd_led[led_nr], value); 1155 1150 break; ··· 1391 1384 case USB_DEVICE_ID_LENOVO_X12_TAB2: 1392 1385 case 
USB_DEVICE_ID_LENOVO_TP10UBKBD: 1393 1386 case USB_DEVICE_ID_LENOVO_X1_TAB: 1387 + case USB_DEVICE_ID_LENOVO_X1_TAB2: 1394 1388 case USB_DEVICE_ID_LENOVO_X1_TAB3: 1395 1389 ret = lenovo_probe_tp10ubkbd(hdev); 1396 1390 break; ··· 1481 1473 case USB_DEVICE_ID_LENOVO_X12_TAB2: 1482 1474 case USB_DEVICE_ID_LENOVO_TP10UBKBD: 1483 1475 case USB_DEVICE_ID_LENOVO_X1_TAB: 1476 + case USB_DEVICE_ID_LENOVO_X1_TAB2: 1484 1477 case USB_DEVICE_ID_LENOVO_X1_TAB3: 1485 1478 lenovo_remove_tp10ubkbd(hdev); 1486 1479 break; ··· 1532 1523 */ 1533 1524 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 1534 1525 USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB) }, 1526 + { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 1527 + USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB2) }, 1535 1528 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 1536 1529 USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB3) }, 1537 1530 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+7 -1
drivers/hid/hid-multitouch.c
··· 2132 2132 HID_DEVICE(BUS_I2C, HID_GROUP_GENERIC, 2133 2133 USB_VENDOR_ID_LG, I2C_DEVICE_ID_LG_7010) }, 2134 2134 2135 - /* Lenovo X1 TAB Gen 2 */ 2135 + /* Lenovo X1 TAB Gen 1 */ 2136 2136 { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT, 2137 2137 HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8, 2138 2138 USB_VENDOR_ID_LENOVO, 2139 2139 USB_DEVICE_ID_LENOVO_X1_TAB) }, 2140 + 2141 + /* Lenovo X1 TAB Gen 2 */ 2142 + { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT, 2143 + HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8, 2144 + USB_VENDOR_ID_LENOVO, 2145 + USB_DEVICE_ID_LENOVO_X1_TAB2) }, 2140 2146 2141 2147 /* Lenovo X1 TAB Gen 3 */ 2142 2148 { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
+36 -2
drivers/hid/hid-nintendo.c
··· 308 308 JOYCON_CTLR_STATE_INIT, 309 309 JOYCON_CTLR_STATE_READ, 310 310 JOYCON_CTLR_STATE_REMOVED, 311 + JOYCON_CTLR_STATE_SUSPENDED, 311 312 }; 312 313 313 314 /* Controller type received as part of device info */ ··· 2751 2750 2752 2751 static int nintendo_hid_resume(struct hid_device *hdev) 2753 2752 { 2754 - int ret = joycon_init(hdev); 2753 + struct joycon_ctlr *ctlr = hid_get_drvdata(hdev); 2754 + int ret; 2755 2755 2756 + hid_dbg(hdev, "resume\n"); 2757 + if (!joycon_using_usb(ctlr)) { 2758 + hid_dbg(hdev, "no-op resume for bt ctlr\n"); 2759 + ctlr->ctlr_state = JOYCON_CTLR_STATE_READ; 2760 + return 0; 2761 + } 2762 + 2763 + ret = joycon_init(hdev); 2756 2764 if (ret) 2757 - hid_err(hdev, "Failed to restore controller after resume"); 2765 + hid_err(hdev, 2766 + "Failed to restore controller after resume: %d\n", 2767 + ret); 2768 + else 2769 + ctlr->ctlr_state = JOYCON_CTLR_STATE_READ; 2758 2770 2759 2771 return ret; 2772 + } 2773 + 2774 + static int nintendo_hid_suspend(struct hid_device *hdev, pm_message_t message) 2775 + { 2776 + struct joycon_ctlr *ctlr = hid_get_drvdata(hdev); 2777 + 2778 + hid_dbg(hdev, "suspend: %d\n", message.event); 2779 + /* 2780 + * Avoid any blocking loops in suspend/resume transitions. 2781 + * 2782 + * joycon_enforce_subcmd_rate() can result in repeated retries if for 2783 + * whatever reason the controller stops providing input reports. 2784 + * 2785 + * This has been observed with bluetooth controllers which lose 2786 + * connectivity prior to suspend (but not long enough to result in 2787 + * complete disconnection). 2788 + */ 2789 + ctlr->ctlr_state = JOYCON_CTLR_STATE_SUSPENDED; 2790 + return 0; 2760 2791 } 2761 2792 2762 2793 #endif ··· 2829 2796 2830 2797 #ifdef CONFIG_PM 2831 2798 .resume = nintendo_hid_resume, 2799 + .suspend = nintendo_hid_suspend, 2832 2800 #endif 2833 2801 }; 2834 2802 static int __init nintendo_init(void)
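The hid-nintendo change adds a SUSPENDED state precisely so that no subcommand traffic (and hence no retry loop) can run during a power transition, while resume only re-initializes USB controllers. The state transitions can be sketched like this (mock enum and function names, assumptions for illustration only):

```c
#include <stdbool.h>

enum ctlr_state { STATE_INIT, STATE_READ, STATE_REMOVED, STATE_SUSPENDED };

/* Suspend: just park the state machine; no commands are sent, so a
 * half-disconnected Bluetooth controller cannot stall the transition. */
static enum ctlr_state on_suspend(enum ctlr_state s)
{
	(void)s;
	return STATE_SUSPENDED;
}

/* Resume: Bluetooth controllers keep their configuration and go straight
 * back to READ; USB controllers must be re-initialized first and only
 * reach READ when that re-init succeeded. */
static enum ctlr_state on_resume(enum ctlr_state s, bool usb, bool reinit_ok)
{
	if (!usb)
		return STATE_READ;
	return reinit_ok ? STATE_READ : s;
}
```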
+3
drivers/hid/hid-quirks.c
··· 757 757 { HID_USB_DEVICE(USB_VENDOR_ID_AVERMEDIA, USB_DEVICE_ID_AVER_FM_MR800) }, 758 758 { HID_USB_DEVICE(USB_VENDOR_ID_AXENTIA, USB_DEVICE_ID_AXENTIA_FM_RADIO) }, 759 759 { HID_USB_DEVICE(USB_VENDOR_ID_BERKSHIRE, USB_DEVICE_ID_BERKSHIRE_PCWD) }, 760 + { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA) }, 761 + { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA2) }, 760 762 { HID_USB_DEVICE(USB_VENDOR_ID_CIDC, 0x0103) }, 761 763 { HID_USB_DEVICE(USB_VENDOR_ID_CYGNAL, USB_DEVICE_ID_CYGNAL_RADIO_SI470X) }, 762 764 { HID_USB_DEVICE(USB_VENDOR_ID_CYGNAL, USB_DEVICE_ID_CYGNAL_RADIO_SI4713) }, ··· 906 904 #endif 907 905 { HID_USB_DEVICE(USB_VENDOR_ID_YEALINK, USB_DEVICE_ID_YEALINK_P1K_P4K_B2K) }, 908 906 { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_HP_5MP_CAMERA_5473) }, 907 + { HID_USB_DEVICE(USB_VENDOR_ID_SMARTLINKTECHNOLOGY, USB_DEVICE_ID_SMARTLINKTECHNOLOGY_4155) }, 909 908 { } 910 909 }; 911 910
+1
drivers/hid/intel-ish-hid/ipc/hw-ish.h
··· 38 38 #define PCI_DEVICE_ID_INTEL_ISH_LNL_M 0xA845 39 39 #define PCI_DEVICE_ID_INTEL_ISH_PTL_H 0xE345 40 40 #define PCI_DEVICE_ID_INTEL_ISH_PTL_P 0xE445 41 + #define PCI_DEVICE_ID_INTEL_ISH_WCL 0x4D45 41 42 42 43 #define REVISION_ID_CHT_A0 0x6 43 44 #define REVISION_ID_CHT_Ax_SI 0x0
+9 -3
drivers/hid/intel-ish-hid/ipc/pci-ish.c
··· 27 27 ISHTP_DRIVER_DATA_NONE, 28 28 ISHTP_DRIVER_DATA_LNL_M, 29 29 ISHTP_DRIVER_DATA_PTL, 30 + ISHTP_DRIVER_DATA_WCL, 30 31 }; 31 32 32 33 #define ISH_FW_GEN_LNL_M "lnlm" 33 34 #define ISH_FW_GEN_PTL "ptl" 35 + #define ISH_FW_GEN_WCL "wcl" 34 36 35 37 #define ISH_FIRMWARE_PATH(gen) "intel/ish/ish_" gen ".bin" 36 38 #define ISH_FIRMWARE_PATH_ALL "intel/ish/ish_*.bin" ··· 43 41 }, 44 42 [ISHTP_DRIVER_DATA_PTL] = { 45 43 .fw_generation = ISH_FW_GEN_PTL, 44 + }, 45 + [ISHTP_DRIVER_DATA_WCL] = { 46 + .fw_generation = ISH_FW_GEN_WCL, 46 47 }, 47 48 }; 48 49 ··· 72 67 {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_MTL_P)}, 73 68 {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_ARL_H)}, 74 69 {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_ARL_S)}, 75 - {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_LNL_M), .driver_data = ISHTP_DRIVER_DATA_LNL_M}, 76 - {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_PTL_H), .driver_data = ISHTP_DRIVER_DATA_PTL}, 77 - {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_PTL_P), .driver_data = ISHTP_DRIVER_DATA_PTL}, 70 + {PCI_DEVICE_DATA(INTEL, ISH_LNL_M, ISHTP_DRIVER_DATA_LNL_M)}, 71 + {PCI_DEVICE_DATA(INTEL, ISH_PTL_H, ISHTP_DRIVER_DATA_PTL)}, 72 + {PCI_DEVICE_DATA(INTEL, ISH_PTL_P, ISHTP_DRIVER_DATA_PTL)}, 73 + {PCI_DEVICE_DATA(INTEL, ISH_WCL, ISHTP_DRIVER_DATA_WCL)}, 78 74 {} 79 75 }; 80 76 MODULE_DEVICE_TABLE(pci, ish_pci_tbl);
+25 -1
drivers/hid/intel-thc-hid/intel-quicki2c/quicki2c-protocol.c
··· 4 4 #include <linux/bitfield.h> 5 5 #include <linux/hid.h> 6 6 #include <linux/hid-over-i2c.h> 7 + #include <linux/unaligned.h> 7 8 8 9 #include "intel-thc-dev.h" 9 10 #include "intel-thc-dma.h" ··· 201 200 202 201 int quicki2c_reset(struct quicki2c_device *qcdev) 203 202 { 203 + u16 input_reg = le16_to_cpu(qcdev->dev_desc.input_reg); 204 + size_t read_len = HIDI2C_LENGTH_LEN; 205 + u32 prd_len = read_len; 204 206 int ret; 205 207 206 208 qcdev->reset_ack = false; ··· 217 213 218 214 ret = wait_event_interruptible_timeout(qcdev->reset_ack_wq, qcdev->reset_ack, 219 215 HIDI2C_RESET_TIMEOUT * HZ); 220 - if (ret <= 0 || !qcdev->reset_ack) { 216 + if (qcdev->reset_ack) 217 + return 0; 218 + 219 + /* 220 + * Manually read reset response if it wasn't received, in case reset interrupt 221 + * was missed by touch device or THC hardware. 222 + */ 223 + ret = thc_tic_pio_read(qcdev->thc_hw, input_reg, read_len, &prd_len, 224 + (u32 *)qcdev->input_buf); 225 + if (ret) { 226 + dev_err_once(qcdev->dev, "Read Reset Response failed, ret %d\n", ret); 227 + return ret; 228 + } 229 + 230 + /* 231 + * Check response packet length, it's first 16 bits of packet. 232 + * If response packet length is zero, it's reset response, otherwise not. 233 + */ 234 + if (get_unaligned_le16(qcdev->input_buf)) { 221 235 dev_err_once(qcdev->dev, 222 236 "Wait reset response timed out ret:%d timeout:%ds\n", 223 237 ret, HIDI2C_RESET_TIMEOUT); 224 238 return -ETIMEDOUT; 225 239 } 240 + 241 + qcdev->reset_ack = true; 226 242 227 243 return 0; 228 244 }
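The quicki2c fallback above relies on a detail of HID-over-I2C framing: the first two bytes of every input packet are a little-endian length, and a reset response is the special zero-length packet. A minimal sketch of that check (helper names are illustrative, not the driver's):

```c
#include <stdint.h>

/* HID-over-I2C packets start with a 16-bit little-endian length field. */
static uint16_t get_le16(const uint8_t *buf)
{
	return (uint16_t)buf[0] | ((uint16_t)buf[1] << 8);
}

/* A zero length marks the reset response; any nonzero length means the
 * packet read back was ordinary input data, not the awaited reset ack. */
static int is_reset_response(const uint8_t *buf)
{
	return get_le16(buf) == 0;
}
```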
+6 -1
drivers/hid/wacom_sys.c
··· 2048 2048 2049 2049 remote->remote_dir = kobject_create_and_add("wacom_remote", 2050 2050 &wacom->hdev->dev.kobj); 2051 - if (!remote->remote_dir) 2051 + if (!remote->remote_dir) { 2052 + kfifo_free(&remote->remote_fifo); 2052 2053 return -ENOMEM; 2054 + } 2053 2055 2054 2056 error = sysfs_create_files(remote->remote_dir, remote_unpair_attrs); 2055 2057 2056 2058 if (error) { 2057 2059 hid_err(wacom->hdev, 2058 2060 "cannot create sysfs group err: %d\n", error); 2061 + kfifo_free(&remote->remote_fifo); 2062 + kobject_put(remote->remote_dir); 2059 2063 return error; 2060 2064 } 2061 2065 ··· 2905 2901 hid_hw_stop(hdev); 2906 2902 2907 2903 cancel_delayed_work_sync(&wacom->init_work); 2904 + cancel_delayed_work_sync(&wacom->aes_battery_work); 2908 2905 cancel_work_sync(&wacom->wireless_work); 2909 2906 cancel_work_sync(&wacom->battery_work); 2910 2907 cancel_work_sync(&wacom->remote_work);
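The wacom_sys fix plugs leaks by making every failure point release what earlier steps acquired, in reverse order. A leak-free sketch of that unwinding discipline, with plain allocations standing in for the kfifo and the sysfs kobject (all names hypothetical):

```c
#include <stdlib.h>

/* Each failure point frees what earlier steps acquired, in reverse order.
 * `fail_dir`/`fail_attrs` simulate the two failure points in the patch. */
static int remote_create(int fail_dir, int fail_attrs)
{
	void *fifo = malloc(16);            /* stands in for kfifo_alloc() */
	void *dir;

	if (!fifo)
		return -1;

	dir = fail_dir ? NULL : malloc(16); /* kobject_create_and_add()    */
	if (!dir) {
		free(fifo);                 /* undo step 1                 */
		return -1;
	}

	if (fail_attrs) {                   /* sysfs_create_files() error  */
		free(fifo);                 /* undo step 1                 */
		free(dir);                  /* undo step 2 (kobject_put)   */
		return -1;
	}

	free(fifo);                         /* success path: freed here    */
	free(dir);                          /* only to keep the sketch tidy */
	return 0;
}
```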
+6 -3
drivers/hwmon/ftsteutates.c
··· 423 423 break; 424 424 case hwmon_pwm: 425 425 switch (attr) { 426 - case hwmon_pwm_auto_channels_temp: 427 - if (data->fan_source[channel] == FTS_FAN_SOURCE_INVALID) 426 + case hwmon_pwm_auto_channels_temp: { 427 + u8 fan_source = data->fan_source[channel]; 428 + 429 + if (fan_source == FTS_FAN_SOURCE_INVALID || fan_source >= BITS_PER_LONG) 428 430 *val = 0; 429 431 else 430 - *val = BIT(data->fan_source[channel]); 432 + *val = BIT(fan_source); 431 433 432 434 return 0; 435 + } 433 436 default: 434 437 break; 435 438 }
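The ftsteutates fix guards against a shift wider than the type: `BIT(n)` is undefined behavior for `n >= BITS_PER_LONG`, and `fan_source` comes from hardware, so it must be range-checked before shifting. A standalone sketch of the guarded mask (illustrative names):

```c
#include <stdint.h>

#define MY_BITS_PER_LONG (8 * (int)sizeof(unsigned long))
#define FAN_SOURCE_INVALID 0xff

/* Shifting by >= the type width is undefined, so out-of-range values
 * read from the device must map to "no channel" rather than BIT(n). */
static unsigned long fan_source_mask(uint8_t fan_source)
{
	if (fan_source == FAN_SOURCE_INVALID || fan_source >= MY_BITS_PER_LONG)
		return 0;
	return 1UL << fan_source;
}
```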
-7
drivers/hwmon/ltc4282.c
··· 1512 1512 } 1513 1513 1514 1514 if (device_property_read_bool(dev, "adi,fault-log-enable")) { 1515 - ret = regmap_set_bits(st->map, LTC4282_ADC_CTRL, 1516 - LTC4282_FAULT_LOG_EN_MASK); 1517 - if (ret) 1518 - return ret; 1519 - } 1520 - 1521 - if (device_property_read_bool(dev, "adi,fault-log-enable")) { 1522 1515 ret = regmap_set_bits(st->map, LTC4282_ADC_CTRL, LTC4282_FAULT_LOG_EN_MASK); 1523 1516 if (ret) 1524 1517 return ret;
+97 -141
drivers/hwmon/occ/common.c
··· 459 459 return sysfs_emit(buf, "%llu\n", val); 460 460 } 461 461 462 - static u64 occ_get_powr_avg(u64 *accum, u32 *samples) 462 + static u64 occ_get_powr_avg(u64 accum, u32 samples) 463 463 { 464 - u64 divisor = get_unaligned_be32(samples); 465 - 466 - return (divisor == 0) ? 0 : 467 - div64_u64(get_unaligned_be64(accum) * 1000000ULL, divisor); 464 + return (samples == 0) ? 0 : 465 + mul_u64_u32_div(accum, 1000000UL, samples); 468 466 } 469 467 470 468 static ssize_t occ_show_power_2(struct device *dev, ··· 487 489 get_unaligned_be32(&power->sensor_id), 488 490 power->function_id, power->apss_channel); 489 491 case 1: 490 - val = occ_get_powr_avg(&power->accumulator, 491 - &power->update_tag); 492 + val = occ_get_powr_avg(get_unaligned_be64(&power->accumulator), 493 + get_unaligned_be32(&power->update_tag)); 492 494 break; 493 495 case 2: 494 496 val = (u64)get_unaligned_be32(&power->update_tag) * ··· 525 527 return sysfs_emit(buf, "%u_system\n", 526 528 get_unaligned_be32(&power->sensor_id)); 527 529 case 1: 528 - val = occ_get_powr_avg(&power->system.accumulator, 529 - &power->system.update_tag); 530 + val = occ_get_powr_avg(get_unaligned_be64(&power->system.accumulator), 531 + get_unaligned_be32(&power->system.update_tag)); 530 532 break; 531 533 case 2: 532 534 val = (u64)get_unaligned_be32(&power->system.update_tag) * ··· 539 541 return sysfs_emit(buf, "%u_proc\n", 540 542 get_unaligned_be32(&power->sensor_id)); 541 543 case 5: 542 - val = occ_get_powr_avg(&power->proc.accumulator, 543 - &power->proc.update_tag); 544 + val = occ_get_powr_avg(get_unaligned_be64(&power->proc.accumulator), 545 + get_unaligned_be32(&power->proc.update_tag)); 544 546 break; 545 547 case 6: 546 548 val = (u64)get_unaligned_be32(&power->proc.update_tag) * ··· 553 555 return sysfs_emit(buf, "%u_vdd\n", 554 556 get_unaligned_be32(&power->sensor_id)); 555 557 case 9: 556 - val = occ_get_powr_avg(&power->vdd.accumulator, 557 - &power->vdd.update_tag); 558 + val = 
occ_get_powr_avg(get_unaligned_be64(&power->vdd.accumulator), 559 + get_unaligned_be32(&power->vdd.update_tag)); 558 560 break; 559 561 case 10: 560 562 val = (u64)get_unaligned_be32(&power->vdd.update_tag) * ··· 567 569 return sysfs_emit(buf, "%u_vdn\n", 568 570 get_unaligned_be32(&power->sensor_id)); 569 571 case 13: 570 - val = occ_get_powr_avg(&power->vdn.accumulator, 571 - &power->vdn.update_tag); 572 + val = occ_get_powr_avg(get_unaligned_be64(&power->vdn.accumulator), 573 + get_unaligned_be32(&power->vdn.update_tag)); 572 574 break; 573 575 case 14: 574 576 val = (u64)get_unaligned_be32(&power->vdn.update_tag) * ··· 745 747 } 746 748 747 749 /* 748 - * Some helper macros to make it easier to define an occ_attribute. Since these 749 - * are dynamically allocated, we shouldn't use the existing kernel macros which 750 + * A helper to make it easier to define an occ_attribute. Since these 751 + * are dynamically allocated, we cannot use the existing kernel macros which 750 752 * stringify the name argument. 751 753 */ 752 - #define ATTR_OCC(_name, _mode, _show, _store) { \ 753 - .attr = { \ 754 - .name = _name, \ 755 - .mode = VERIFY_OCTAL_PERMISSIONS(_mode), \ 756 - }, \ 757 - .show = _show, \ 758 - .store = _store, \ 759 - } 754 + static void occ_init_attribute(struct occ_attribute *attr, int mode, 755 + ssize_t (*show)(struct device *dev, struct device_attribute *attr, char *buf), 756 + ssize_t (*store)(struct device *dev, struct device_attribute *attr, 757 + const char *buf, size_t count), 758 + int nr, int index, const char *fmt, ...) 
759 + { 760 + va_list args; 760 761 761 - #define SENSOR_ATTR_OCC(_name, _mode, _show, _store, _nr, _index) { \ 762 - .dev_attr = ATTR_OCC(_name, _mode, _show, _store), \ 763 - .index = _index, \ 764 - .nr = _nr, \ 765 - } 762 + va_start(args, fmt); 763 + vsnprintf(attr->name, sizeof(attr->name), fmt, args); 764 + va_end(args); 766 765 767 - #define OCC_INIT_ATTR(_name, _mode, _show, _store, _nr, _index) \ 768 - ((struct sensor_device_attribute_2) \ 769 - SENSOR_ATTR_OCC(_name, _mode, _show, _store, _nr, _index)) 766 + attr->sensor.dev_attr.attr.name = attr->name; 767 + attr->sensor.dev_attr.attr.mode = mode; 768 + attr->sensor.dev_attr.show = show; 769 + attr->sensor.dev_attr.store = store; 770 + attr->sensor.index = index; 771 + attr->sensor.nr = nr; 772 + } 770 773 771 774 /* 772 775 * Allocate and instatiate sensor_device_attribute_2s. It's most efficient to ··· 854 855 sensors->extended.num_sensors = 0; 855 856 } 856 857 857 - occ->attrs = devm_kzalloc(dev, sizeof(*occ->attrs) * num_attrs, 858 + occ->attrs = devm_kcalloc(dev, num_attrs, sizeof(*occ->attrs), 858 859 GFP_KERNEL); 859 860 if (!occ->attrs) 860 861 return -ENOMEM; 861 862 862 863 /* null-terminated list */ 863 - occ->group.attrs = devm_kzalloc(dev, sizeof(*occ->group.attrs) * 864 - num_attrs + 1, GFP_KERNEL); 864 + occ->group.attrs = devm_kcalloc(dev, num_attrs + 1, 865 + sizeof(*occ->group.attrs), 866 + GFP_KERNEL); 865 867 if (!occ->group.attrs) 866 868 return -ENOMEM; 867 869 ··· 872 872 s = i + 1; 873 873 temp = ((struct temp_sensor_2 *)sensors->temp.data) + i; 874 874 875 - snprintf(attr->name, sizeof(attr->name), "temp%d_label", s); 876 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_temp, NULL, 877 - 0, i); 875 + occ_init_attribute(attr, 0444, show_temp, NULL, 876 + 0, i, "temp%d_label", s); 878 877 attr++; 879 878 880 879 if (sensors->temp.version == 2 && 881 880 temp->fru_type == OCC_FRU_TYPE_VRM) { 882 - snprintf(attr->name, sizeof(attr->name), 883 - "temp%d_alarm", s); 881 + 
occ_init_attribute(attr, 0444, show_temp, NULL, 882 + 1, i, "temp%d_alarm", s); 884 883 } else { 885 - snprintf(attr->name, sizeof(attr->name), 886 - "temp%d_input", s); 884 + occ_init_attribute(attr, 0444, show_temp, NULL, 885 + 1, i, "temp%d_input", s); 887 886 } 888 887 889 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_temp, NULL, 890 - 1, i); 891 888 attr++; 892 889 893 890 if (sensors->temp.version > 1) { 894 - snprintf(attr->name, sizeof(attr->name), 895 - "temp%d_fru_type", s); 896 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 897 - show_temp, NULL, 2, i); 891 + occ_init_attribute(attr, 0444, show_temp, NULL, 892 + 2, i, "temp%d_fru_type", s); 898 893 attr++; 899 894 900 - snprintf(attr->name, sizeof(attr->name), 901 - "temp%d_fault", s); 902 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 903 - show_temp, NULL, 3, i); 895 + occ_init_attribute(attr, 0444, show_temp, NULL, 896 + 3, i, "temp%d_fault", s); 904 897 attr++; 905 898 906 899 if (sensors->temp.version == 0x10) { 907 - snprintf(attr->name, sizeof(attr->name), 908 - "temp%d_max", s); 909 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 910 - show_temp, NULL, 911 - 4, i); 900 + occ_init_attribute(attr, 0444, show_temp, NULL, 901 + 4, i, "temp%d_max", s); 912 902 attr++; 913 903 } 914 904 } ··· 907 917 for (i = 0; i < sensors->freq.num_sensors; ++i) { 908 918 s = i + 1; 909 919 910 - snprintf(attr->name, sizeof(attr->name), "freq%d_label", s); 911 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_freq, NULL, 912 - 0, i); 920 + occ_init_attribute(attr, 0444, show_freq, NULL, 921 + 0, i, "freq%d_label", s); 913 922 attr++; 914 923 915 - snprintf(attr->name, sizeof(attr->name), "freq%d_input", s); 916 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_freq, NULL, 917 - 1, i); 924 + occ_init_attribute(attr, 0444, show_freq, NULL, 925 + 1, i, "freq%d_input", s); 918 926 attr++; 919 927 } 920 928 ··· 928 940 s = (i * 4) + 1; 929 941 930 942 for (j = 0; j < 4; ++j) { 931 - 
snprintf(attr->name, sizeof(attr->name), 932 - "power%d_label", s); 933 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 934 - show_power, NULL, 935 - nr++, i); 943 + occ_init_attribute(attr, 0444, show_power, 944 + NULL, nr++, i, 945 + "power%d_label", s); 936 946 attr++; 937 947 938 - snprintf(attr->name, sizeof(attr->name), 939 - "power%d_average", s); 940 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 941 - show_power, NULL, 942 - nr++, i); 948 + occ_init_attribute(attr, 0444, show_power, 949 + NULL, nr++, i, 950 + "power%d_average", s); 943 951 attr++; 944 952 945 - snprintf(attr->name, sizeof(attr->name), 946 - "power%d_average_interval", s); 947 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 948 - show_power, NULL, 949 - nr++, i); 953 + occ_init_attribute(attr, 0444, show_power, 954 + NULL, nr++, i, 955 + "power%d_average_interval", s); 950 956 attr++; 951 957 952 - snprintf(attr->name, sizeof(attr->name), 953 - "power%d_input", s); 954 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 955 - show_power, NULL, 956 - nr++, i); 958 + occ_init_attribute(attr, 0444, show_power, 959 + NULL, nr++, i, 960 + "power%d_input", s); 957 961 attr++; 958 962 959 963 s++; ··· 957 977 for (i = 0; i < sensors->power.num_sensors; ++i) { 958 978 s = i + 1; 959 979 960 - snprintf(attr->name, sizeof(attr->name), 961 - "power%d_label", s); 962 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 963 - show_power, NULL, 0, i); 980 + occ_init_attribute(attr, 0444, show_power, NULL, 981 + 0, i, "power%d_label", s); 964 982 attr++; 965 983 966 - snprintf(attr->name, sizeof(attr->name), 967 - "power%d_average", s); 968 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 969 - show_power, NULL, 1, i); 984 + occ_init_attribute(attr, 0444, show_power, NULL, 985 + 1, i, "power%d_average", s); 970 986 attr++; 971 987 972 - snprintf(attr->name, sizeof(attr->name), 973 - "power%d_average_interval", s); 974 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 975 - show_power, NULL, 2, i); 988 + 
occ_init_attribute(attr, 0444, show_power, NULL, 989 + 2, i, "power%d_average_interval", s); 976 990 attr++; 977 991 978 - snprintf(attr->name, sizeof(attr->name), 979 - "power%d_input", s); 980 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 981 - show_power, NULL, 3, i); 992 + occ_init_attribute(attr, 0444, show_power, NULL, 993 + 3, i, "power%d_input", s); 982 994 attr++; 983 995 } 984 996 ··· 978 1006 } 979 1007 980 1008 if (sensors->caps.num_sensors >= 1) { 981 - snprintf(attr->name, sizeof(attr->name), "power%d_label", s); 982 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, 983 - 0, 0); 1009 + occ_init_attribute(attr, 0444, show_caps, NULL, 1010 + 0, 0, "power%d_label", s); 984 1011 attr++; 985 1012 986 - snprintf(attr->name, sizeof(attr->name), "power%d_cap", s); 987 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, 988 - 1, 0); 1013 + occ_init_attribute(attr, 0444, show_caps, NULL, 1014 + 1, 0, "power%d_cap", s); 989 1015 attr++; 990 1016 991 - snprintf(attr->name, sizeof(attr->name), "power%d_input", s); 992 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, 993 - 2, 0); 1017 + occ_init_attribute(attr, 0444, show_caps, NULL, 1018 + 2, 0, "power%d_input", s); 994 1019 attr++; 995 1020 996 - snprintf(attr->name, sizeof(attr->name), 997 - "power%d_cap_not_redundant", s); 998 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, 999 - 3, 0); 1021 + occ_init_attribute(attr, 0444, show_caps, NULL, 1022 + 3, 0, "power%d_cap_not_redundant", s); 1000 1023 attr++; 1001 1024 1002 - snprintf(attr->name, sizeof(attr->name), "power%d_cap_max", s); 1003 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, 1004 - 4, 0); 1025 + occ_init_attribute(attr, 0444, show_caps, NULL, 1026 + 4, 0, "power%d_cap_max", s); 1005 1027 attr++; 1006 1028 1007 - snprintf(attr->name, sizeof(attr->name), "power%d_cap_min", s); 1008 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL, 1009 - 5, 0); 1029 + 
occ_init_attribute(attr, 0444, show_caps, NULL, 1030 + 5, 0, "power%d_cap_min", s); 1010 1031 attr++; 1011 1032 1012 - snprintf(attr->name, sizeof(attr->name), "power%d_cap_user", 1013 - s); 1014 - attr->sensor = OCC_INIT_ATTR(attr->name, 0644, show_caps, 1015 - occ_store_caps_user, 6, 0); 1033 + occ_init_attribute(attr, 0644, show_caps, occ_store_caps_user, 1034 + 6, 0, "power%d_cap_user", s); 1016 1035 attr++; 1017 1036 1018 1037 if (sensors->caps.version > 1) { 1019 - snprintf(attr->name, sizeof(attr->name), 1020 - "power%d_cap_user_source", s); 1021 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 1022 - show_caps, NULL, 7, 0); 1038 + occ_init_attribute(attr, 0444, show_caps, NULL, 1039 + 7, 0, "power%d_cap_user_source", s); 1023 1040 attr++; 1024 1041 1025 1042 if (sensors->caps.version > 2) { 1026 - snprintf(attr->name, sizeof(attr->name), 1027 - "power%d_cap_min_soft", s); 1028 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 1029 - show_caps, NULL, 1030 - 8, 0); 1043 + occ_init_attribute(attr, 0444, show_caps, NULL, 1044 + 8, 0, 1045 + "power%d_cap_min_soft", s); 1031 1046 attr++; 1032 1047 } 1033 1048 } ··· 1023 1064 for (i = 0; i < sensors->extended.num_sensors; ++i) { 1024 1065 s = i + 1; 1025 1066 1026 - snprintf(attr->name, sizeof(attr->name), "extn%d_label", s); 1027 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 1028 - occ_show_extended, NULL, 0, i); 1067 + occ_init_attribute(attr, 0444, occ_show_extended, NULL, 1068 + 0, i, "extn%d_label", s); 1029 1069 attr++; 1030 1070 1031 - snprintf(attr->name, sizeof(attr->name), "extn%d_flags", s); 1032 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 1033 - occ_show_extended, NULL, 1, i); 1071 + occ_init_attribute(attr, 0444, occ_show_extended, NULL, 1072 + 1, i, "extn%d_flags", s); 1034 1073 attr++; 1035 1074 1036 - snprintf(attr->name, sizeof(attr->name), "extn%d_input", s); 1037 - attr->sensor = OCC_INIT_ATTR(attr->name, 0444, 1038 - occ_show_extended, NULL, 2, i); 1075 + occ_init_attribute(attr, 0444, 
occ_show_extended, NULL, 1076 + 2, i, "extn%d_input", s); 1039 1077 attr++; 1040 1078 } 1041 1079
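The occ change replaces `div64_u64(accum * 1000000, samples)` with `mul_u64_u32_div(accum, 1000000, samples)` because the plain 64-bit multiply overflows long before the accumulator itself does; the kernel helper performs the intermediate product at full width. A userspace sketch of the same idea using the GCC/Clang `__int128` extension (an assumption of this sketch, not what the kernel helper uses internally):

```c
#include <stdint.h>

/* accum * 1000000 can overflow 64 bits even for modest accumulators;
 * widening the multiply to 128 bits keeps the quotient exact, matching
 * the semantics of mul_u64_u32_div() in the patch above. */
static uint64_t powr_avg(uint64_t accum, uint32_t samples)
{
	if (samples == 0)
		return 0;
	return (uint64_t)(((unsigned __int128)accum * 1000000u) / samples);
}
```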
+2 -2
drivers/i2c/algos/i2c-algo-bit.c
··· 619 619 /* -----exported algorithm data: ------------------------------------- */ 620 620 621 621 const struct i2c_algorithm i2c_bit_algo = { 622 - .master_xfer = bit_xfer, 623 - .master_xfer_atomic = bit_xfer_atomic, 622 + .xfer = bit_xfer, 623 + .xfer_atomic = bit_xfer_atomic, 624 624 .functionality = bit_func, 625 625 }; 626 626 EXPORT_SYMBOL(i2c_bit_algo);
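This and the following i2c files are a mechanical rename of the `i2c_algorithm` callbacks from `.master_xfer`/`.master_xfer_atomic` to `.xfer`/`.xfer_atomic`. Because the drivers use designated initializers, only the member names in each initializer change and no call sites move. A mock illustration of why the rename is this cheap (the struct below is a stand-in, not the kernel's definition):

```c
/* With designated initializers, renaming a callback member touches only
 * the initializer lines; function definitions and callers are unchanged. */
struct algo {
	int (*xfer)(int n);        /* was .master_xfer        */
	int (*xfer_atomic)(int n); /* was .master_xfer_atomic */
};

static int do_xfer(int n)        { return n; }
static int do_xfer_atomic(int n) { return -n; }

static const struct algo bit_algo = {
	.xfer        = do_xfer,
	.xfer_atomic = do_xfer_atomic,
};

static int run_xfer(int n)        { return bit_algo.xfer(n); }
static int run_xfer_atomic(int n) { return bit_algo.xfer_atomic(n); }
```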
+2 -2
drivers/i2c/algos/i2c-algo-pca.c
··· 361 361 } 362 362 363 363 static const struct i2c_algorithm pca_algo = { 364 - .master_xfer = pca_xfer, 365 - .functionality = pca_func, 364 + .xfer = pca_xfer, 365 + .functionality = pca_func, 366 366 }; 367 367 368 368 static unsigned int pca_probe_chip(struct i2c_adapter *adap)
+2 -2
drivers/i2c/algos/i2c-algo-pcf.c
··· 389 389 390 390 /* exported algorithm data: */ 391 391 static const struct i2c_algorithm pcf_algo = { 392 - .master_xfer = pcf_xfer, 393 - .functionality = pcf_func, 392 + .xfer = pcf_xfer, 393 + .functionality = pcf_func, 394 394 }; 395 395 396 396 /*
+1 -1
drivers/i2c/busses/Kconfig
··· 1530 1530 1531 1531 config SCx200_ACB 1532 1532 tristate "Geode ACCESS.bus support" 1533 - depends on X86_32 && PCI 1533 + depends on X86_32 && PCI && HAS_IOPORT 1534 1534 help 1535 1535 Enable the use of the ACCESS.bus controllers on the Geode SCx200 and 1536 1536 SC1100 processors and the CS5535 and CS5536 Geode companion devices.
+1 -1
drivers/i2c/busses/i2c-amd-mp2-plat.c
··· 179 179 } 180 180 181 181 static const struct i2c_algorithm i2c_amd_algorithm = { 182 - .master_xfer = i2c_amd_xfer, 182 + .xfer = i2c_amd_xfer, 183 183 .functionality = i2c_amd_func, 184 184 }; 185 185
+4 -4
drivers/i2c/busses/i2c-aspeed.c
··· 814 814 #endif /* CONFIG_I2C_SLAVE */ 815 815 816 816 static const struct i2c_algorithm aspeed_i2c_algo = { 817 - .master_xfer = aspeed_i2c_master_xfer, 818 - .functionality = aspeed_i2c_functionality, 817 + .xfer = aspeed_i2c_master_xfer, 818 + .functionality = aspeed_i2c_functionality, 819 819 #if IS_ENABLED(CONFIG_I2C_SLAVE) 820 - .reg_slave = aspeed_i2c_reg_slave, 821 - .unreg_slave = aspeed_i2c_unreg_slave, 820 + .reg_slave = aspeed_i2c_reg_slave, 821 + .unreg_slave = aspeed_i2c_unreg_slave, 822 822 #endif /* CONFIG_I2C_SLAVE */ 823 823 }; 824 824
+2 -2
drivers/i2c/busses/i2c-at91-master.c
··· 739 739 } 740 740 741 741 static const struct i2c_algorithm at91_twi_algorithm = { 742 - .master_xfer = at91_twi_xfer, 743 - .functionality = at91_twi_func, 742 + .xfer = at91_twi_xfer, 743 + .functionality = at91_twi_func, 744 744 }; 745 745 746 746 static int at91_twi_configure_dma(struct at91_twi_dev *dev, u32 phy_addr)
+1 -1
drivers/i2c/busses/i2c-axxia.c
··· 706 706 } 707 707 708 708 static const struct i2c_algorithm axxia_i2c_algo = { 709 - .master_xfer = axxia_i2c_xfer, 709 + .xfer = axxia_i2c_xfer, 710 710 .functionality = axxia_i2c_func, 711 711 .reg_slave = axxia_i2c_reg_slave, 712 712 .unreg_slave = axxia_i2c_unreg_slave,
+1 -1
drivers/i2c/busses/i2c-bcm-iproc.c
··· 1041 1041 } 1042 1042 1043 1043 static struct i2c_algorithm bcm_iproc_algo = { 1044 - .master_xfer = bcm_iproc_i2c_xfer, 1044 + .xfer = bcm_iproc_i2c_xfer, 1045 1045 .functionality = bcm_iproc_i2c_functionality, 1046 1046 .reg_slave = bcm_iproc_i2c_reg_slave, 1047 1047 .unreg_slave = bcm_iproc_i2c_unreg_slave,
+5 -5
drivers/i2c/busses/i2c-cadence.c
··· 1231 1231 #endif 1232 1232 1233 1233 static const struct i2c_algorithm cdns_i2c_algo = { 1234 - .master_xfer = cdns_i2c_master_xfer, 1235 - .master_xfer_atomic = cdns_i2c_master_xfer_atomic, 1236 - .functionality = cdns_i2c_func, 1234 + .xfer = cdns_i2c_master_xfer, 1235 + .xfer_atomic = cdns_i2c_master_xfer_atomic, 1236 + .functionality = cdns_i2c_func, 1237 1237 #if IS_ENABLED(CONFIG_I2C_SLAVE) 1238 - .reg_slave = cdns_reg_slave, 1239 - .unreg_slave = cdns_unreg_slave, 1238 + .reg_slave = cdns_reg_slave, 1239 + .unreg_slave = cdns_unreg_slave, 1240 1240 #endif 1241 1241 }; 1242 1242
+2 -2
drivers/i2c/busses/i2c-cgbc.c
··· 331 331 } 332 332 333 333 static const struct i2c_algorithm cgbc_i2c_algorithm = { 334 - .master_xfer = cgbc_i2c_xfer, 335 - .functionality = cgbc_i2c_func, 334 + .xfer = cgbc_i2c_xfer, 335 + .functionality = cgbc_i2c_func, 336 336 }; 337 337 338 338 static struct i2c_algo_cgbc_data cgbc_i2c_algo_data[] = {
+2
drivers/i2c/busses/i2c-designware-amdisp.c
··· 8 8 #include <linux/module.h> 9 9 #include <linux/platform_device.h> 10 10 #include <linux/pm_runtime.h> 11 + #include <linux/soc/amd/isp4_misc.h> 11 12 12 13 #include "i2c-designware-core.h" 13 14 ··· 63 62 64 63 adap = &isp_i2c_dev->adapter; 65 64 adap->owner = THIS_MODULE; 65 + scnprintf(adap->name, sizeof(adap->name), AMDISP_I2C_ADAP_NAME); 66 66 ACPI_COMPANION_SET(&adap->dev, ACPI_COMPANION(&pdev->dev)); 67 67 adap->dev.of_node = pdev->dev.of_node; 68 68 /* use dynamically allocated adapter id */
+3 -2
drivers/i2c/busses/i2c-designware-master.c
··· 1042 1042 if (ret) 1043 1043 return ret; 1044 1044 1045 - snprintf(adap->name, sizeof(adap->name), 1046 - "Synopsys DesignWare I2C adapter"); 1045 + if (!adap->name[0]) 1046 + scnprintf(adap->name, sizeof(adap->name), 1047 + "Synopsys DesignWare I2C adapter"); 1047 1048 adap->retries = 3; 1048 1049 adap->algo = &i2c_dw_algo; 1049 1050 adap->quirks = &i2c_dw_quirks;
+1 -1
drivers/i2c/busses/i2c-eg20t.c
··· 690 690 } 691 691 692 692 static const struct i2c_algorithm pch_algorithm = { 693 - .master_xfer = pch_i2c_xfer, 693 + .xfer = pch_i2c_xfer, 694 694 .functionality = pch_i2c_func 695 695 }; 696 696
+3 -3
drivers/i2c/busses/i2c-emev2.c
··· 351 351 } 352 352 353 353 static const struct i2c_algorithm em_i2c_algo = { 354 - .master_xfer = em_i2c_xfer, 354 + .xfer = em_i2c_xfer, 355 355 .functionality = em_i2c_func, 356 - .reg_slave = em_i2c_reg_slave, 357 - .unreg_slave = em_i2c_unreg_slave, 356 + .reg_slave = em_i2c_reg_slave, 357 + .unreg_slave = em_i2c_unreg_slave, 358 358 }; 359 359 360 360 static int em_i2c_probe(struct platform_device *pdev)
+3 -3
drivers/i2c/busses/i2c-exynos5.c
··· 879 879 } 880 880 881 881 static const struct i2c_algorithm exynos5_i2c_algorithm = { 882 - .master_xfer = exynos5_i2c_xfer, 883 - .master_xfer_atomic = exynos5_i2c_xfer_atomic, 884 - .functionality = exynos5_i2c_func, 882 + .xfer = exynos5_i2c_xfer, 883 + .xfer_atomic = exynos5_i2c_xfer_atomic, 884 + .functionality = exynos5_i2c_func, 885 885 }; 886 886 887 887 static int exynos5_i2c_probe(struct platform_device *pdev)
+3 -3
drivers/i2c/busses/i2c-gxp.c
··· 184 184 #endif 185 185 186 186 static const struct i2c_algorithm gxp_i2c_algo = { 187 - .master_xfer = gxp_i2c_master_xfer, 187 + .xfer = gxp_i2c_master_xfer, 188 188 .functionality = gxp_i2c_func, 189 189 #if IS_ENABLED(CONFIG_I2C_SLAVE) 190 - .reg_slave = gxp_i2c_reg_slave, 191 - .unreg_slave = gxp_i2c_unreg_slave, 190 + .reg_slave = gxp_i2c_reg_slave, 191 + .unreg_slave = gxp_i2c_unreg_slave, 192 192 #endif 193 193 }; 194 194
+1 -1
drivers/i2c/busses/i2c-img-scb.c
··· 1143 1143 } 1144 1144 1145 1145 static const struct i2c_algorithm img_i2c_algo = { 1146 - .master_xfer = img_i2c_xfer, 1146 + .xfer = img_i2c_xfer, 1147 1147 .functionality = img_i2c_func, 1148 1148 }; 1149 1149
+4 -4
drivers/i2c/busses/i2c-imx-lpi2c.c
··· 1268 1268 } 1269 1269 1270 1270 static const struct i2c_algorithm lpi2c_imx_algo = { 1271 - .master_xfer = lpi2c_imx_xfer, 1272 - .functionality = lpi2c_imx_func, 1273 - .reg_target = lpi2c_imx_register_target, 1274 - .unreg_target = lpi2c_imx_unregister_target, 1271 + .xfer = lpi2c_imx_xfer, 1272 + .functionality = lpi2c_imx_func, 1273 + .reg_target = lpi2c_imx_register_target, 1274 + .unreg_target = lpi2c_imx_unregister_target, 1275 1275 }; 1276 1276 1277 1277 static const struct of_device_id lpi2c_imx_of_match[] = {
+6 -5
drivers/i2c/busses/i2c-imx.c
··· 1008 1008 /* setup bus to read data */ 1009 1009 temp = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2CR); 1010 1010 temp &= ~I2CR_MTX; 1011 - if (i2c_imx->msg->len - 1) 1011 + if ((i2c_imx->msg->len - 1) || (i2c_imx->msg->flags & I2C_M_RECV_LEN)) 1012 1012 temp &= ~I2CR_TXAK; 1013 1013 1014 1014 imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR); ··· 1063 1063 wake_up(&i2c_imx->queue); 1064 1064 } 1065 1065 i2c_imx->msg->len += len; 1066 + i2c_imx->msg->buf[i2c_imx->msg_buf_idx++] = len; 1066 1067 } 1067 1068 1068 1069 static irqreturn_t i2c_imx_master_isr(struct imx_i2c_struct *i2c_imx, unsigned int status) ··· 1693 1692 } 1694 1693 1695 1694 static const struct i2c_algorithm i2c_imx_algo = { 1696 - .master_xfer = i2c_imx_xfer, 1697 - .master_xfer_atomic = i2c_imx_xfer_atomic, 1695 + .xfer = i2c_imx_xfer, 1696 + .xfer_atomic = i2c_imx_xfer_atomic, 1698 1697 .functionality = i2c_imx_func, 1699 - .reg_slave = i2c_imx_reg_slave, 1700 - .unreg_slave = i2c_imx_unreg_slave, 1698 + .reg_slave = i2c_imx_reg_slave, 1699 + .unreg_slave = i2c_imx_unreg_slave, 1701 1700 }; 1702 1701 1703 1702 static int i2c_imx_probe(struct platform_device *pdev)
+1 -1
drivers/i2c/busses/i2c-k1.c
··· 477 477 478 478 ret = spacemit_i2c_wait_bus_idle(i2c); 479 479 if (!ret) 480 - spacemit_i2c_xfer_msg(i2c); 480 + ret = spacemit_i2c_xfer_msg(i2c); 481 481 else if (ret < 0) 482 482 dev_dbg(i2c->dev, "i2c transfer error: %d\n", ret); 483 483 else
+1 -1
drivers/i2c/busses/i2c-keba.c
··· 500 500 } 501 501 502 502 static const struct i2c_algorithm ki2c_algo = { 503 - .master_xfer = ki2c_xfer, 503 + .xfer = ki2c_xfer, 504 504 .functionality = ki2c_func, 505 505 }; 506 506
+1 -1
drivers/i2c/busses/i2c-mchp-pci1xxxx.c
··· 1048 1048 } 1049 1049 1050 1050 static const struct i2c_algorithm pci1xxxx_i2c_algo = { 1051 - .master_xfer = pci1xxxx_i2c_xfer, 1051 + .xfer = pci1xxxx_i2c_xfer, 1052 1052 .functionality = pci1xxxx_i2c_get_funcs, 1053 1053 }; 1054 1054
+2 -2
drivers/i2c/busses/i2c-meson.c
··· 448 448 } 449 449 450 450 static const struct i2c_algorithm meson_i2c_algorithm = { 451 - .master_xfer = meson_i2c_xfer, 452 - .master_xfer_atomic = meson_i2c_xfer_atomic, 451 + .xfer = meson_i2c_xfer, 452 + .xfer_atomic = meson_i2c_xfer_atomic, 453 453 .functionality = meson_i2c_func, 454 454 }; 455 455
+1 -1
drivers/i2c/busses/i2c-microchip-corei2c.c
··· 526 526 } 527 527 528 528 static const struct i2c_algorithm mchp_corei2c_algo = { 529 - .master_xfer = mchp_corei2c_xfer, 529 + .xfer = mchp_corei2c_xfer, 530 530 .functionality = mchp_corei2c_func, 531 531 .smbus_xfer = mchp_corei2c_smbus_xfer, 532 532 };
+1 -1
drivers/i2c/busses/i2c-mt65xx.c
··· 1342 1342 } 1343 1343 1344 1344 static const struct i2c_algorithm mtk_i2c_algorithm = { 1345 - .master_xfer = mtk_i2c_transfer, 1345 + .xfer = mtk_i2c_transfer, 1346 1346 .functionality = mtk_i2c_functionality, 1347 1347 }; 1348 1348
+1 -1
drivers/i2c/busses/i2c-mxs.c
··· 687 687 } 688 688 689 689 static const struct i2c_algorithm mxs_i2c_algo = { 690 - .master_xfer = mxs_i2c_xfer, 690 + .xfer = mxs_i2c_xfer, 691 691 .functionality = mxs_i2c_func, 692 692 }; 693 693
+2 -2
drivers/i2c/busses/i2c-nomadik.c
··· 996 996 } 997 997 998 998 static const struct i2c_algorithm nmk_i2c_algo = { 999 - .master_xfer = nmk_i2c_xfer, 1000 - .functionality = nmk_i2c_functionality 999 + .xfer = nmk_i2c_xfer, 1000 + .functionality = nmk_i2c_functionality 1001 1001 }; 1002 1002 1003 1003 static void nmk_i2c_of_probe(struct device_node *np,
+3 -3
drivers/i2c/busses/i2c-npcm7xx.c
··· 2470 2470 }; 2471 2471 2472 2472 static const struct i2c_algorithm npcm_i2c_algo = { 2473 - .master_xfer = npcm_i2c_master_xfer, 2473 + .xfer = npcm_i2c_master_xfer, 2474 2474 .functionality = npcm_i2c_functionality, 2475 2475 #if IS_ENABLED(CONFIG_I2C_SLAVE) 2476 - .reg_slave = npcm_i2c_reg_slave, 2477 - .unreg_slave = npcm_i2c_unreg_slave, 2476 + .reg_slave = npcm_i2c_reg_slave, 2477 + .unreg_slave = npcm_i2c_unreg_slave, 2478 2478 #endif 2479 2479 }; 2480 2480
+8 -5
drivers/i2c/busses/i2c-omap.c
··· 1201 1201 } 1202 1202 1203 1203 static const struct i2c_algorithm omap_i2c_algo = { 1204 - .master_xfer = omap_i2c_xfer_irq, 1205 - .master_xfer_atomic = omap_i2c_xfer_polling, 1206 - .functionality = omap_i2c_func, 1204 + .xfer = omap_i2c_xfer_irq, 1205 + .xfer_atomic = omap_i2c_xfer_polling, 1206 + .functionality = omap_i2c_func, 1207 1207 }; 1208 1208 1209 1209 static const struct i2c_adapter_quirks omap_i2c_quirks = { ··· 1461 1461 if (IS_ERR(mux_state)) { 1462 1462 r = PTR_ERR(mux_state); 1463 1463 dev_dbg(&pdev->dev, "failed to get I2C mux: %d\n", r); 1464 - goto err_disable_pm; 1464 + goto err_put_pm; 1465 1465 } 1466 1466 omap->mux_state = mux_state; 1467 1467 r = mux_state_select(omap->mux_state); 1468 1468 if (r) { 1469 1469 dev_err(&pdev->dev, "failed to select I2C mux: %d\n", r); 1470 - goto err_disable_pm; 1470 + goto err_put_pm; 1471 1471 } 1472 1472 } 1473 1473 ··· 1515 1515 1516 1516 err_unuse_clocks: 1517 1517 omap_i2c_write_reg(omap, OMAP_I2C_CON_REG, 0); 1518 + if (omap->mux_state) 1519 + mux_state_deselect(omap->mux_state); 1520 + err_put_pm: 1518 1521 pm_runtime_dont_use_autosuspend(omap->dev); 1519 1522 pm_runtime_put_sync(omap->dev); 1520 1523 err_disable_pm:
+1 -1
drivers/i2c/busses/i2c-pnx.c
··· 580 580 } 581 581 582 582 static const struct i2c_algorithm pnx_algorithm = { 583 - .master_xfer = i2c_pnx_xfer, 583 + .xfer = i2c_pnx_xfer, 584 584 .functionality = i2c_pnx_func, 585 585 }; 586 586
+8 -8
drivers/i2c/busses/i2c-pxa.c
··· 1154 1154 } 1155 1155 1156 1156 static const struct i2c_algorithm i2c_pxa_algorithm = { 1157 - .master_xfer = i2c_pxa_xfer, 1158 - .functionality = i2c_pxa_functionality, 1157 + .xfer = i2c_pxa_xfer, 1158 + .functionality = i2c_pxa_functionality, 1159 1159 #ifdef CONFIG_I2C_PXA_SLAVE 1160 - .reg_slave = i2c_pxa_slave_reg, 1161 - .unreg_slave = i2c_pxa_slave_unreg, 1160 + .reg_slave = i2c_pxa_slave_reg, 1161 + .unreg_slave = i2c_pxa_slave_unreg, 1162 1162 #endif 1163 1163 }; 1164 1164 ··· 1244 1244 } 1245 1245 1246 1246 static const struct i2c_algorithm i2c_pxa_pio_algorithm = { 1247 - .master_xfer = i2c_pxa_pio_xfer, 1248 - .functionality = i2c_pxa_functionality, 1247 + .xfer = i2c_pxa_pio_xfer, 1248 + .functionality = i2c_pxa_functionality, 1249 1249 #ifdef CONFIG_I2C_PXA_SLAVE 1250 - .reg_slave = i2c_pxa_slave_reg, 1251 - .unreg_slave = i2c_pxa_slave_unreg, 1250 + .reg_slave = i2c_pxa_slave_reg, 1251 + .unreg_slave = i2c_pxa_slave_unreg, 1252 1252 #endif 1253 1253 }; 1254 1254
+2 -2
drivers/i2c/busses/i2c-qcom-cci.c
··· 462 462 } 463 463 464 464 static const struct i2c_algorithm cci_algo = { 465 - .master_xfer = cci_xfer, 466 - .functionality = cci_func, 465 + .xfer = cci_xfer, 466 + .functionality = cci_func, 467 467 }; 468 468 469 469 static int cci_enable_clocks(struct cci *cci)
+2 -2
drivers/i2c/busses/i2c-qcom-geni.c
··· 727 727 } 728 728 729 729 static const struct i2c_algorithm geni_i2c_algo = { 730 - .master_xfer = geni_i2c_xfer, 731 - .functionality = geni_i2c_func, 730 + .xfer = geni_i2c_xfer, 731 + .functionality = geni_i2c_func, 732 732 }; 733 733 734 734 #ifdef CONFIG_ACPI
+4 -4
drivers/i2c/busses/i2c-qup.c
··· 1634 1634 } 1635 1635 1636 1636 static const struct i2c_algorithm qup_i2c_algo = { 1637 - .master_xfer = qup_i2c_xfer, 1638 - .functionality = qup_i2c_func, 1637 + .xfer = qup_i2c_xfer, 1638 + .functionality = qup_i2c_func, 1639 1639 }; 1640 1640 1641 1641 static const struct i2c_algorithm qup_i2c_algo_v2 = { 1642 - .master_xfer = qup_i2c_xfer_v2, 1643 - .functionality = qup_i2c_func, 1642 + .xfer = qup_i2c_xfer_v2, 1643 + .functionality = qup_i2c_func, 1644 1644 }; 1645 1645 1646 1646 /*
+5 -5
drivers/i2c/busses/i2c-rcar.c
··· 1084 1084 } 1085 1085 1086 1086 static const struct i2c_algorithm rcar_i2c_algo = { 1087 - .master_xfer = rcar_i2c_master_xfer, 1088 - .master_xfer_atomic = rcar_i2c_master_xfer_atomic, 1089 - .functionality = rcar_i2c_func, 1090 - .reg_slave = rcar_reg_slave, 1091 - .unreg_slave = rcar_unreg_slave, 1087 + .xfer = rcar_i2c_master_xfer, 1088 + .xfer_atomic = rcar_i2c_master_xfer_atomic, 1089 + .functionality = rcar_i2c_func, 1090 + .reg_slave = rcar_reg_slave, 1091 + .unreg_slave = rcar_unreg_slave, 1092 1092 }; 1093 1093 1094 1094 static const struct i2c_adapter_quirks rcar_i2c_quirks = {
+6
drivers/i2c/busses/i2c-robotfuzz-osif.c
··· 111 111 return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL; 112 112 } 113 113 114 + /* prevent invalid 0-length usb_control_msg */ 115 + static const struct i2c_adapter_quirks osif_quirks = { 116 + .flags = I2C_AQ_NO_ZERO_LEN_READ, 117 + }; 118 + 114 119 static const struct i2c_algorithm osif_algorithm = { 115 120 .xfer = osif_xfer, 116 121 .functionality = osif_func, ··· 148 143 149 144 priv->adapter.owner = THIS_MODULE; 150 145 priv->adapter.class = I2C_CLASS_HWMON; 146 + priv->adapter.quirks = &osif_quirks; 151 147 priv->adapter.algo = &osif_algorithm; 152 148 priv->adapter.algo_data = priv; 153 149 snprintf(priv->adapter.name, sizeof(priv->adapter.name),
+3 -3
drivers/i2c/busses/i2c-s3c2410.c
··· 800 800 801 801 /* i2c bus registration info */ 802 802 static const struct i2c_algorithm s3c24xx_i2c_algorithm = { 803 - .master_xfer = s3c24xx_i2c_xfer, 804 - .master_xfer_atomic = s3c24xx_i2c_xfer_atomic, 805 - .functionality = s3c24xx_i2c_func, 803 + .xfer = s3c24xx_i2c_xfer, 804 + .xfer_atomic = s3c24xx_i2c_xfer_atomic, 805 + .functionality = s3c24xx_i2c_func, 806 806 }; 807 807 808 808 /*
+2 -2
drivers/i2c/busses/i2c-sh7760.c
··· 379 379 } 380 380 381 381 static const struct i2c_algorithm sh7760_i2c_algo = { 382 - .master_xfer = sh7760_i2c_master_xfer, 383 - .functionality = sh7760_i2c_func, 382 + .xfer = sh7760_i2c_master_xfer, 383 + .functionality = sh7760_i2c_func, 384 384 }; 385 385 386 386 /* calculate CCR register setting for a desired scl clock. SCL clock is
+2 -2
drivers/i2c/busses/i2c-sh_mobile.c
··· 740 740 741 741 static const struct i2c_algorithm sh_mobile_i2c_algorithm = { 742 742 .functionality = sh_mobile_i2c_func, 743 - .master_xfer = sh_mobile_i2c_xfer, 744 - .master_xfer_atomic = sh_mobile_i2c_xfer_atomic, 743 + .xfer = sh_mobile_i2c_xfer, 744 + .xfer_atomic = sh_mobile_i2c_xfer_atomic, 745 745 }; 746 746 747 747 static const struct i2c_adapter_quirks sh_mobile_i2c_quirks = {
+2 -2
drivers/i2c/busses/i2c-stm32f7.c
··· 2151 2151 } 2152 2152 2153 2153 static const struct i2c_algorithm stm32f7_i2c_algo = { 2154 - .master_xfer = stm32f7_i2c_xfer, 2155 - .master_xfer_atomic = stm32f7_i2c_xfer_atomic, 2154 + .xfer = stm32f7_i2c_xfer, 2155 + .xfer_atomic = stm32f7_i2c_xfer_atomic, 2156 2156 .smbus_xfer = stm32f7_i2c_smbus_xfer, 2157 2157 .functionality = stm32f7_i2c_func, 2158 2158 .reg_slave = stm32f7_i2c_reg_slave,
+2 -2
drivers/i2c/busses/i2c-synquacer.c
··· 520 520 } 521 521 522 522 static const struct i2c_algorithm synquacer_i2c_algo = { 523 - .master_xfer = synquacer_i2c_xfer, 524 - .functionality = synquacer_i2c_functionality, 523 + .xfer = synquacer_i2c_xfer, 524 + .functionality = synquacer_i2c_functionality, 525 525 }; 526 526 527 527 static const struct i2c_adapter synquacer_i2c_ops = {
+3 -3
drivers/i2c/busses/i2c-tegra.c
··· 1440 1440 } 1441 1441 1442 1442 static const struct i2c_algorithm tegra_i2c_algo = { 1443 - .master_xfer = tegra_i2c_xfer, 1444 - .master_xfer_atomic = tegra_i2c_xfer_atomic, 1445 - .functionality = tegra_i2c_func, 1443 + .xfer = tegra_i2c_xfer, 1444 + .xfer_atomic = tegra_i2c_xfer_atomic, 1445 + .functionality = tegra_i2c_func, 1446 1446 }; 1447 1447 1448 1448 /* payload size is only 12 bit */
+6
drivers/i2c/busses/i2c-tiny-usb.c
··· 139 139 return ret; 140 140 } 141 141 142 + /* prevent invalid 0-length usb_control_msg */ 143 + static const struct i2c_adapter_quirks usb_quirks = { 144 + .flags = I2C_AQ_NO_ZERO_LEN_READ, 145 + }; 146 + 142 147 /* This is the actual algorithm we define */ 143 148 static const struct i2c_algorithm usb_algorithm = { 144 149 .xfer = usb_xfer, ··· 252 247 /* setup i2c adapter description */ 253 248 dev->adapter.owner = THIS_MODULE; 254 249 dev->adapter.class = I2C_CLASS_HWMON; 250 + dev->adapter.quirks = &usb_quirks; 255 251 dev->adapter.algo = &usb_algorithm; 256 252 dev->adapter.algo_data = dev; 257 253 snprintf(dev->adapter.name, sizeof(dev->adapter.name),
+2 -2
drivers/i2c/busses/i2c-xiic.c
··· 1398 1398 } 1399 1399 1400 1400 static const struct i2c_algorithm xiic_algorithm = { 1401 - .master_xfer = xiic_xfer, 1402 - .master_xfer_atomic = xiic_xfer_atomic, 1401 + .xfer = xiic_xfer, 1402 + .xfer_atomic = xiic_xfer_atomic, 1403 1403 .functionality = xiic_func, 1404 1404 }; 1405 1405
+1 -1
drivers/i2c/busses/i2c-xlp9xx.c
··· 452 452 } 453 453 454 454 static const struct i2c_algorithm xlp9xx_i2c_algo = { 455 - .master_xfer = xlp9xx_i2c_xfer, 455 + .xfer = xlp9xx_i2c_xfer, 456 456 .functionality = xlp9xx_i2c_functionality, 457 457 }; 458 458
+1 -1
drivers/i2c/i2c-atr.c
··· 738 738 atr->flags = flags; 739 739 740 740 if (parent->algo->master_xfer) 741 - atr->algo.master_xfer = i2c_atr_master_xfer; 741 + atr->algo.xfer = i2c_atr_master_xfer; 742 742 if (parent->algo->smbus_xfer) 743 743 atr->algo.smbus_xfer = i2c_atr_smbus_xfer; 744 744 atr->algo.functionality = i2c_atr_functionality;
+3 -3
drivers/i2c/i2c-mux.c
··· 293 293 */ 294 294 if (parent->algo->master_xfer) { 295 295 if (muxc->mux_locked) 296 - priv->algo.master_xfer = i2c_mux_master_xfer; 296 + priv->algo.xfer = i2c_mux_master_xfer; 297 297 else 298 - priv->algo.master_xfer = __i2c_mux_master_xfer; 298 + priv->algo.xfer = __i2c_mux_master_xfer; 299 299 } 300 300 if (parent->algo->master_xfer_atomic) 301 - priv->algo.master_xfer_atomic = priv->algo.master_xfer; 301 + priv->algo.xfer_atomic = priv->algo.master_xfer; 302 302 303 303 if (parent->algo->smbus_xfer) { 304 304 if (muxc->mux_locked)
+2 -2
drivers/i2c/muxes/i2c-demux-pinctrl.c
··· 95 95 priv->cur_chan = new_chan; 96 96 97 97 /* Now fill out current adapter structure. cur_chan must be up to date */ 98 - priv->algo.master_xfer = i2c_demux_master_xfer; 98 + priv->algo.xfer = i2c_demux_master_xfer; 99 99 if (adap->algo->master_xfer_atomic) 100 - priv->algo.master_xfer_atomic = i2c_demux_master_xfer; 100 + priv->algo.xfer_atomic = i2c_demux_master_xfer; 101 101 priv->algo.functionality = i2c_demux_functionality; 102 102 103 103 snprintf(priv->cur_adap.name, sizeof(priv->cur_adap.name),
+2 -18
drivers/irqchip/irq-ath79-misc.c
··· 15 15 #include <linux/of_address.h> 16 16 #include <linux/of_irq.h> 17 17 18 + #include <asm/time.h> 19 + 18 20 #define AR71XX_RESET_REG_MISC_INT_STATUS 0 19 21 #define AR71XX_RESET_REG_MISC_INT_ENABLE 4 20 22 ··· 179 177 180 178 IRQCHIP_DECLARE(ar7240_misc_intc, "qca,ar7240-misc-intc", 181 179 ar7240_misc_intc_of_init); 182 - 183 - void __init ath79_misc_irq_init(void __iomem *regs, int irq, 184 - int irq_base, bool is_ar71xx) 185 - { 186 - struct irq_domain *domain; 187 - 188 - if (is_ar71xx) 189 - ath79_misc_irq_chip.irq_mask_ack = ar71xx_misc_irq_mask; 190 - else 191 - ath79_misc_irq_chip.irq_ack = ar724x_misc_irq_ack; 192 - 193 - domain = irq_domain_create_legacy(NULL, ATH79_MISC_IRQ_COUNT, 194 - irq_base, 0, &misc_irq_domain_ops, regs); 195 - if (!domain) 196 - panic("Failed to create MISC irqdomain"); 197 - 198 - ath79_misc_intc_domain_init(domain, irq); 199 - }
-1
drivers/md/bcache/Kconfig
··· 5 5 select BLOCK_HOLDER_DEPRECATED if SYSFS 6 6 select CRC64 7 7 select CLOSURES 8 - select MIN_HEAP 9 8 help 10 9 Allows a block device to be used as cache for other devices; uses 11 10 a btree for indexing and the layout is optimized for SSDs.
+17 -40
drivers/md/bcache/alloc.c
··· 164 164 * prio is worth 1/8th of what INITIAL_PRIO is worth. 165 165 */ 166 166 167 - static inline unsigned int new_bucket_prio(struct cache *ca, struct bucket *b) 168 - { 169 - unsigned int min_prio = (INITIAL_PRIO - ca->set->min_prio) / 8; 167 + #define bucket_prio(b) \ 168 + ({ \ 169 + unsigned int min_prio = (INITIAL_PRIO - ca->set->min_prio) / 8; \ 170 + \ 171 + (b->prio - ca->set->min_prio + min_prio) * GC_SECTORS_USED(b); \ 172 + }) 170 173 171 - return (b->prio - ca->set->min_prio + min_prio) * GC_SECTORS_USED(b); 172 - } 173 - 174 - static inline bool new_bucket_max_cmp(const void *l, const void *r, void *args) 175 - { 176 - struct bucket **lhs = (struct bucket **)l; 177 - struct bucket **rhs = (struct bucket **)r; 178 - struct cache *ca = args; 179 - 180 - return new_bucket_prio(ca, *lhs) > new_bucket_prio(ca, *rhs); 181 - } 182 - 183 - static inline bool new_bucket_min_cmp(const void *l, const void *r, void *args) 184 - { 185 - struct bucket **lhs = (struct bucket **)l; 186 - struct bucket **rhs = (struct bucket **)r; 187 - struct cache *ca = args; 188 - 189 - return new_bucket_prio(ca, *lhs) < new_bucket_prio(ca, *rhs); 190 - } 174 + #define bucket_max_cmp(l, r) (bucket_prio(l) < bucket_prio(r)) 175 + #define bucket_min_cmp(l, r) (bucket_prio(l) > bucket_prio(r)) 191 176 192 177 static void invalidate_buckets_lru(struct cache *ca) 193 178 { 194 179 struct bucket *b; 195 - const struct min_heap_callbacks bucket_max_cmp_callback = { 196 - .less = new_bucket_max_cmp, 197 - .swp = NULL, 198 - }; 199 - const struct min_heap_callbacks bucket_min_cmp_callback = { 200 - .less = new_bucket_min_cmp, 201 - .swp = NULL, 202 - }; 180 + ssize_t i; 203 181 204 - ca->heap.nr = 0; 182 + ca->heap.used = 0; 205 183 206 184 for_each_bucket(b, ca) { 207 185 if (!bch_can_invalidate_bucket(ca, b)) 208 186 continue; 209 187 210 - if (!min_heap_full(&ca->heap)) 211 - min_heap_push(&ca->heap, &b, &bucket_max_cmp_callback, ca); 212 - else if (!new_bucket_max_cmp(&b, 
min_heap_peek(&ca->heap), ca)) { 188 + if (!heap_full(&ca->heap)) 189 + heap_add(&ca->heap, b, bucket_max_cmp); 190 + else if (bucket_max_cmp(b, heap_peek(&ca->heap))) { 213 191 ca->heap.data[0] = b; 214 - min_heap_sift_down(&ca->heap, 0, &bucket_max_cmp_callback, ca); 192 + heap_sift(&ca->heap, 0, bucket_max_cmp); 215 193 } 216 194 } 217 195 218 - min_heapify_all(&ca->heap, &bucket_min_cmp_callback, ca); 196 + for (i = ca->heap.used / 2 - 1; i >= 0; --i) 197 + heap_sift(&ca->heap, i, bucket_min_cmp); 219 198 220 199 while (!fifo_full(&ca->free_inc)) { 221 - if (!ca->heap.nr) { 200 + if (!heap_pop(&ca->heap, b, bucket_min_cmp)) { 222 201 /* 223 202 * We don't want to be calling invalidate_buckets() 224 203 * multiple times when it can't do anything ··· 206 227 wake_up_gc(ca->set); 207 228 return; 208 229 } 209 - b = min_heap_peek(&ca->heap)[0]; 210 - min_heap_pop(&ca->heap, &bucket_min_cmp_callback, ca); 211 230 212 231 bch_invalidate_one_bucket(ca, b); 213 232 }
+1 -1
drivers/md/bcache/bcache.h
··· 458 458 /* Allocation stuff: */ 459 459 struct bucket *buckets; 460 460 461 - DEFINE_MIN_HEAP(struct bucket *, cache_heap) heap; 461 + DECLARE_HEAP(struct bucket *, heap); 462 462 463 463 /* 464 464 * If nonzero, we know we aren't going to find any buckets to invalidate
+45 -71
drivers/md/bcache/bset.c
··· 54 54 int __bch_count_data(struct btree_keys *b) 55 55 { 56 56 unsigned int ret = 0; 57 - struct btree_iter iter; 57 + struct btree_iter_stack iter; 58 58 struct bkey *k; 59 - 60 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 61 59 62 60 if (b->ops->is_extents) 63 61 for_each_key(b, k, &iter) ··· 67 69 { 68 70 va_list args; 69 71 struct bkey *k, *p = NULL; 70 - struct btree_iter iter; 72 + struct btree_iter_stack iter; 71 73 const char *err; 72 - 73 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 74 74 75 75 for_each_key(b, k, &iter) { 76 76 if (b->ops->is_extents) { ··· 110 114 111 115 static void bch_btree_iter_next_check(struct btree_iter *iter) 112 116 { 113 - struct bkey *k = iter->heap.data->k, *next = bkey_next(k); 117 + struct bkey *k = iter->data->k, *next = bkey_next(k); 114 118 115 - if (next < iter->heap.data->end && 119 + if (next < iter->data->end && 116 120 bkey_cmp(k, iter->b->ops->is_extents ? 117 121 &START_KEY(next) : next) > 0) { 118 122 bch_dump_bucket(iter->b); ··· 879 883 unsigned int status = BTREE_INSERT_STATUS_NO_INSERT; 880 884 struct bset *i = bset_tree_last(b)->data; 881 885 struct bkey *m, *prev = NULL; 882 - struct btree_iter iter; 886 + struct btree_iter_stack iter; 883 887 struct bkey preceding_key_on_stack = ZERO_KEY; 884 888 struct bkey *preceding_key_p = &preceding_key_on_stack; 885 889 886 890 BUG_ON(b->ops->is_extents && !KEY_SIZE(k)); 887 - 888 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 889 891 890 892 /* 891 893 * If k has preceding key, preceding_key_p will be set to address ··· 895 901 else 896 902 preceding_key(k, &preceding_key_p); 897 903 898 - m = bch_btree_iter_init(b, &iter, preceding_key_p); 904 + m = bch_btree_iter_stack_init(b, &iter, preceding_key_p); 899 905 900 - if (b->ops->insert_fixup(b, k, &iter, replace_key)) 906 + if (b->ops->insert_fixup(b, k, &iter.iter, replace_key)) 901 907 return status; 902 908 903 909 status = BTREE_INSERT_STATUS_INSERT; ··· 1077 1083 1078 1084 /* Btree iterator */ 1079 1085 
1080 - typedef bool (new_btree_iter_cmp_fn)(const void *, const void *, void *); 1086 + typedef bool (btree_iter_cmp_fn)(struct btree_iter_set, 1087 + struct btree_iter_set); 1081 1088 1082 - static inline bool new_btree_iter_cmp(const void *l, const void *r, void __always_unused *args) 1089 + static inline bool btree_iter_cmp(struct btree_iter_set l, 1090 + struct btree_iter_set r) 1083 1091 { 1084 - const struct btree_iter_set *_l = l; 1085 - const struct btree_iter_set *_r = r; 1086 - 1087 - return bkey_cmp(_l->k, _r->k) <= 0; 1092 + return bkey_cmp(l.k, r.k) > 0; 1088 1093 } 1089 1094 1090 1095 static inline bool btree_iter_end(struct btree_iter *iter) 1091 1096 { 1092 - return !iter->heap.nr; 1097 + return !iter->used; 1093 1098 } 1094 1099 1095 1100 void bch_btree_iter_push(struct btree_iter *iter, struct bkey *k, 1096 1101 struct bkey *end) 1097 1102 { 1098 - const struct min_heap_callbacks callbacks = { 1099 - .less = new_btree_iter_cmp, 1100 - .swp = NULL, 1101 - }; 1102 - 1103 1103 if (k != end) 1104 - BUG_ON(!min_heap_push(&iter->heap, 1105 - &((struct btree_iter_set) { k, end }), 1106 - &callbacks, 1107 - NULL)); 1104 + BUG_ON(!heap_add(iter, 1105 + ((struct btree_iter_set) { k, end }), 1106 + btree_iter_cmp)); 1108 1107 } 1109 1108 1110 - static struct bkey *__bch_btree_iter_init(struct btree_keys *b, 1111 - struct btree_iter *iter, 1112 - struct bkey *search, 1113 - struct bset_tree *start) 1109 + static struct bkey *__bch_btree_iter_stack_init(struct btree_keys *b, 1110 + struct btree_iter_stack *iter, 1111 + struct bkey *search, 1112 + struct bset_tree *start) 1114 1113 { 1115 1114 struct bkey *ret = NULL; 1116 1115 1117 - iter->heap.size = ARRAY_SIZE(iter->heap.preallocated); 1118 - iter->heap.nr = 0; 1116 + iter->iter.size = ARRAY_SIZE(iter->stack_data); 1117 + iter->iter.used = 0; 1119 1118 1120 1119 #ifdef CONFIG_BCACHE_DEBUG 1121 - iter->b = b; 1120 + iter->iter.b = b; 1122 1121 #endif 1123 1122 1124 1123 for (; start <= bset_tree_last(b); 
start++) { 1125 1124 ret = bch_bset_search(b, start, search); 1126 - bch_btree_iter_push(iter, ret, bset_bkey_last(start->data)); 1125 + bch_btree_iter_push(&iter->iter, ret, bset_bkey_last(start->data)); 1127 1126 } 1128 1127 1129 1128 return ret; 1130 1129 } 1131 1130 1132 - struct bkey *bch_btree_iter_init(struct btree_keys *b, 1133 - struct btree_iter *iter, 1131 + struct bkey *bch_btree_iter_stack_init(struct btree_keys *b, 1132 + struct btree_iter_stack *iter, 1134 1133 struct bkey *search) 1135 1134 { 1136 - return __bch_btree_iter_init(b, iter, search, b->set); 1135 + return __bch_btree_iter_stack_init(b, iter, search, b->set); 1137 1136 } 1138 1137 1139 1138 static inline struct bkey *__bch_btree_iter_next(struct btree_iter *iter, 1140 - new_btree_iter_cmp_fn *cmp) 1139 + btree_iter_cmp_fn *cmp) 1141 1140 { 1142 1141 struct btree_iter_set b __maybe_unused; 1143 1142 struct bkey *ret = NULL; 1144 - const struct min_heap_callbacks callbacks = { 1145 - .less = cmp, 1146 - .swp = NULL, 1147 - }; 1148 1143 1149 1144 if (!btree_iter_end(iter)) { 1150 1145 bch_btree_iter_next_check(iter); 1151 1146 1152 - ret = iter->heap.data->k; 1153 - iter->heap.data->k = bkey_next(iter->heap.data->k); 1147 + ret = iter->data->k; 1148 + iter->data->k = bkey_next(iter->data->k); 1154 1149 1155 - if (iter->heap.data->k > iter->heap.data->end) { 1150 + if (iter->data->k > iter->data->end) { 1156 1151 WARN_ONCE(1, "bset was corrupt!\n"); 1157 - iter->heap.data->k = iter->heap.data->end; 1152 + iter->data->k = iter->data->end; 1158 1153 } 1159 1154 1160 - if (iter->heap.data->k == iter->heap.data->end) { 1161 - if (iter->heap.nr) { 1162 - b = min_heap_peek(&iter->heap)[0]; 1163 - min_heap_pop(&iter->heap, &callbacks, NULL); 1164 - } 1165 - } 1155 + if (iter->data->k == iter->data->end) 1156 + heap_pop(iter, b, cmp); 1166 1157 else 1167 - min_heap_sift_down(&iter->heap, 0, &callbacks, NULL); 1158 + heap_sift(iter, 0, cmp); 1168 1159 } 1169 1160 1170 1161 return ret; ··· 1157 1178 
1158 1179 struct bkey *bch_btree_iter_next(struct btree_iter *iter) 1159 1180 { 1160 - return __bch_btree_iter_next(iter, new_btree_iter_cmp); 1181 + return __bch_btree_iter_next(iter, btree_iter_cmp); 1161 1182 1162 1183 } 1163 1184 ··· 1195 1216 struct btree_iter *iter, 1196 1217 bool fixup, bool remove_stale) 1197 1218 { 1219 + int i; 1198 1220 struct bkey *k, *last = NULL; 1199 1221 BKEY_PADDED(k) tmp; 1200 1222 bool (*bad)(struct btree_keys *, const struct bkey *) = remove_stale 1201 1223 ? bch_ptr_bad 1202 1224 : bch_ptr_invalid; 1203 - const struct min_heap_callbacks callbacks = { 1204 - .less = b->ops->sort_cmp, 1205 - .swp = NULL, 1206 - }; 1207 1225 1208 1226 /* Heapify the iterator, using our comparison function */ 1209 - min_heapify_all(&iter->heap, &callbacks, NULL); 1227 + for (i = iter->used / 2 - 1; i >= 0; --i) 1228 + heap_sift(iter, i, b->ops->sort_cmp); 1210 1229 1211 1230 while (!btree_iter_end(iter)) { 1212 1231 if (b->ops->sort_fixup && fixup) ··· 1293 1316 struct bset_sort_state *state) 1294 1317 { 1295 1318 size_t order = b->page_order, keys = 0; 1296 - struct btree_iter iter; 1319 + struct btree_iter_stack iter; 1297 1320 int oldsize = bch_count_data(b); 1298 1321 1299 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 1300 - __bch_btree_iter_init(b, &iter, NULL, &b->set[start]); 1322 + __bch_btree_iter_stack_init(b, &iter, NULL, &b->set[start]); 1301 1323 1302 1324 if (start) { 1303 1325 unsigned int i; ··· 1307 1331 order = get_order(__set_bytes(b->set->data, keys)); 1308 1332 } 1309 1333 1310 - __btree_sort(b, &iter, start, order, false, state); 1334 + __btree_sort(b, &iter.iter, start, order, false, state); 1311 1335 1312 1336 EBUG_ON(oldsize >= 0 && bch_count_data(b) != oldsize); 1313 1337 } ··· 1323 1347 struct bset_sort_state *state) 1324 1348 { 1325 1349 uint64_t start_time = local_clock(); 1326 - struct btree_iter iter; 1350 + struct btree_iter_stack iter; 1327 1351 1328 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 1352 + 
bch_btree_iter_stack_init(b, &iter, NULL); 1329 1353 1330 - bch_btree_iter_init(b, &iter, NULL); 1331 - 1332 - btree_mergesort(b, new->set->data, &iter, false, true); 1354 + btree_mergesort(b, new->set->data, &iter.iter, false, true); 1333 1355 1334 1356 bch_time_stats_update(&state->time, start_time); 1335 1357
+23 -17
drivers/md/bcache/bset.h
··· 187 187 }; 188 188 189 189 struct btree_keys_ops { 190 - bool (*sort_cmp)(const void *l, 191 - const void *r, 192 - void *args); 190 + bool (*sort_cmp)(struct btree_iter_set l, 191 + struct btree_iter_set r); 193 192 struct bkey *(*sort_fixup)(struct btree_iter *iter, 194 193 struct bkey *tmp); 195 194 bool (*insert_fixup)(struct btree_keys *b, ··· 312 313 BTREE_INSERT_STATUS_FRONT_MERGE, 313 314 }; 314 315 315 - struct btree_iter_set { 316 - struct bkey *k, *end; 317 - }; 318 - 319 316 /* Btree key iteration */ 320 317 321 318 struct btree_iter { 319 + size_t size, used; 322 320 #ifdef CONFIG_BCACHE_DEBUG 323 321 struct btree_keys *b; 324 322 #endif 325 - MIN_HEAP_PREALLOCATED(struct btree_iter_set, btree_iter_heap, MAX_BSETS) heap; 323 + struct btree_iter_set { 324 + struct bkey *k, *end; 325 + } data[]; 326 + }; 327 + 328 + /* Fixed-size btree_iter that can be allocated on the stack */ 329 + 330 + struct btree_iter_stack { 331 + struct btree_iter iter; 332 + struct btree_iter_set stack_data[MAX_BSETS]; 326 333 }; 327 334 328 335 typedef bool (*ptr_filter_fn)(struct btree_keys *b, const struct bkey *k); ··· 340 335 341 336 void bch_btree_iter_push(struct btree_iter *iter, struct bkey *k, 342 337 struct bkey *end); 343 - struct bkey *bch_btree_iter_init(struct btree_keys *b, 344 - struct btree_iter *iter, 345 - struct bkey *search); 338 + struct bkey *bch_btree_iter_stack_init(struct btree_keys *b, 339 + struct btree_iter_stack *iter, 340 + struct bkey *search); 346 341 347 342 struct bkey *__bch_bset_search(struct btree_keys *b, struct bset_tree *t, 348 343 const struct bkey *search); ··· 357 352 return search ? 
__bch_bset_search(b, t, search) : t->data->start; 358 353 } 359 354 360 - #define for_each_key_filter(b, k, iter, filter) \ 361 - for (bch_btree_iter_init((b), (iter), NULL); \ 362 - ((k) = bch_btree_iter_next_filter((iter), (b), filter));) 355 + #define for_each_key_filter(b, k, stack_iter, filter) \ 356 + for (bch_btree_iter_stack_init((b), (stack_iter), NULL); \ 357 + ((k) = bch_btree_iter_next_filter(&((stack_iter)->iter), (b), \ 358 + filter));) 363 359 364 - #define for_each_key(b, k, iter) \ 365 - for (bch_btree_iter_init((b), (iter), NULL); \ 366 - ((k) = bch_btree_iter_next(iter));) 360 + #define for_each_key(b, k, stack_iter) \ 361 + for (bch_btree_iter_stack_init((b), (stack_iter), NULL); \ 362 + ((k) = bch_btree_iter_next(&((stack_iter)->iter)));) 367 363 368 364 /* Sorting */ 369 365
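The `struct btree_iter` reshuffle above moves the set array back to a flexible array member, with `struct btree_iter_stack` supplying fixed backing storage for stack users. The variable-size half of that pattern, sizing an allocation as header plus n trailing elements, looks like this in plain C (types here are simplified stand-ins for the bcache ones):

```c
#include <stdlib.h>

struct iter_set { void *k, *end; };      /* stand-in for btree_iter_set */

struct iter {
    size_t size, used;
    struct iter_set data[];              /* flexible array member */
};

/* Allocate an iterator with room for n sets: sizeof() excludes the
 * flexible array, so the trailing space must be added explicitly --
 * the same formula the super.c hunk uses to size c->fill_iter. */
static struct iter *iter_alloc(size_t n)
{
    struct iter *it = malloc(sizeof(*it) + n * sizeof(it->data[0]));

    if (it) {
        it->size = n;
        it->used = 0;
    }
    return it;
}
```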
+29 -40
drivers/md/bcache/btree.c
··· 148 148 { 149 149 const char *err = "bad btree header"; 150 150 struct bset *i = btree_bset_first(b); 151 - struct btree_iter iter; 151 + struct btree_iter *iter; 152 152 153 153 /* 154 154 * c->fill_iter can allocate an iterator with more memory space 155 155 * than static MAX_BSETS. 156 156 * See the comment arount cache_set->fill_iter. 157 157 */ 158 - iter.heap.data = mempool_alloc(&b->c->fill_iter, GFP_NOIO); 159 - iter.heap.size = b->c->cache->sb.bucket_size / b->c->cache->sb.block_size; 160 - iter.heap.nr = 0; 158 + iter = mempool_alloc(&b->c->fill_iter, GFP_NOIO); 159 + iter->size = b->c->cache->sb.bucket_size / b->c->cache->sb.block_size; 160 + iter->used = 0; 161 161 162 162 #ifdef CONFIG_BCACHE_DEBUG 163 - iter.b = &b->keys; 163 + iter->b = &b->keys; 164 164 #endif 165 165 166 166 if (!i->seq) ··· 198 198 if (i != b->keys.set[0].data && !i->keys) 199 199 goto err; 200 200 201 - bch_btree_iter_push(&iter, i->start, bset_bkey_last(i)); 201 + bch_btree_iter_push(iter, i->start, bset_bkey_last(i)); 202 202 203 203 b->written += set_blocks(i, block_bytes(b->c->cache)); 204 204 } ··· 210 210 if (i->seq == b->keys.set[0].data->seq) 211 211 goto err; 212 212 213 - bch_btree_sort_and_fix_extents(&b->keys, &iter, &b->c->sort); 213 + bch_btree_sort_and_fix_extents(&b->keys, iter, &b->c->sort); 214 214 215 215 i = b->keys.set[0].data; 216 216 err = "short btree key"; ··· 222 222 bch_bset_init_next(&b->keys, write_block(b), 223 223 bset_magic(&b->c->cache->sb)); 224 224 out: 225 - mempool_free(iter.heap.data, &b->c->fill_iter); 225 + mempool_free(iter, &b->c->fill_iter); 226 226 return; 227 227 err: 228 228 set_btree_node_io_error(b); ··· 1306 1306 uint8_t stale = 0; 1307 1307 unsigned int keys = 0, good_keys = 0; 1308 1308 struct bkey *k; 1309 - struct btree_iter iter; 1309 + struct btree_iter_stack iter; 1310 1310 struct bset_tree *t; 1311 - 1312 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 1313 1311 1314 1312 gc->nodes++; 1315 1313 ··· 1567 1569 static 
unsigned int btree_gc_count_keys(struct btree *b) 1568 1570 { 1569 1571 struct bkey *k; 1570 - struct btree_iter iter; 1572 + struct btree_iter_stack iter; 1571 1573 unsigned int ret = 0; 1572 - 1573 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 1574 1574 1575 1575 for_each_key_filter(&b->keys, k, &iter, bch_ptr_bad) 1576 1576 ret += bkey_u64s(k); ··· 1608 1612 int ret = 0; 1609 1613 bool should_rewrite; 1610 1614 struct bkey *k; 1611 - struct btree_iter iter; 1615 + struct btree_iter_stack iter; 1612 1616 struct gc_merge_info r[GC_MERGE_NODES]; 1613 1617 struct gc_merge_info *i, *last = r + ARRAY_SIZE(r) - 1; 1614 1618 1615 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 1616 - bch_btree_iter_init(&b->keys, &iter, &b->c->gc_done); 1619 + bch_btree_iter_stack_init(&b->keys, &iter, &b->c->gc_done); 1617 1620 1618 1621 for (i = r; i < r + ARRAY_SIZE(r); i++) 1619 1622 i->b = ERR_PTR(-EINTR); 1620 1623 1621 1624 while (1) { 1622 - k = bch_btree_iter_next_filter(&iter, &b->keys, bch_ptr_bad); 1625 + k = bch_btree_iter_next_filter(&iter.iter, &b->keys, 1626 + bch_ptr_bad); 1623 1627 if (k) { 1624 1628 r->b = bch_btree_node_get(b->c, op, k, b->level - 1, 1625 1629 true, b); ··· 1914 1918 { 1915 1919 int ret = 0; 1916 1920 struct bkey *k, *p = NULL; 1917 - struct btree_iter iter; 1918 - 1919 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 1921 + struct btree_iter_stack iter; 1920 1922 1921 1923 for_each_key_filter(&b->keys, k, &iter, bch_ptr_invalid) 1922 1924 bch_initial_mark_key(b->c, b->level, k); ··· 1922 1928 bch_initial_mark_key(b->c, b->level + 1, &b->key); 1923 1929 1924 1930 if (b->level) { 1925 - bch_btree_iter_init(&b->keys, &iter, NULL); 1931 + bch_btree_iter_stack_init(&b->keys, &iter, NULL); 1926 1932 1927 1933 do { 1928 - k = bch_btree_iter_next_filter(&iter, &b->keys, 1934 + k = bch_btree_iter_next_filter(&iter.iter, &b->keys, 1929 1935 bch_ptr_bad); 1930 1936 if (k) { 1931 1937 btree_node_prefetch(b, k); ··· 1953 1959 struct btree_check_info *info = arg; 1954 
1960 struct btree_check_state *check_state = info->state; 1955 1961 struct cache_set *c = check_state->c; 1956 - struct btree_iter iter; 1962 + struct btree_iter_stack iter; 1957 1963 struct bkey *k, *p; 1958 1964 int cur_idx, prev_idx, skip_nr; 1959 1965 ··· 1961 1967 cur_idx = prev_idx = 0; 1962 1968 ret = 0; 1963 1969 1964 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 1965 - 1966 1970 /* root node keys are checked before thread created */ 1967 - bch_btree_iter_init(&c->root->keys, &iter, NULL); 1968 - k = bch_btree_iter_next_filter(&iter, &c->root->keys, bch_ptr_bad); 1971 + bch_btree_iter_stack_init(&c->root->keys, &iter, NULL); 1972 + k = bch_btree_iter_next_filter(&iter.iter, &c->root->keys, bch_ptr_bad); 1969 1973 BUG_ON(!k); 1970 1974 1971 1975 p = k; ··· 1981 1989 skip_nr = cur_idx - prev_idx; 1982 1990 1983 1991 while (skip_nr) { 1984 - k = bch_btree_iter_next_filter(&iter, 1992 + k = bch_btree_iter_next_filter(&iter.iter, 1985 1993 &c->root->keys, 1986 1994 bch_ptr_bad); 1987 1995 if (k) ··· 2054 2062 int ret = 0; 2055 2063 int i; 2056 2064 struct bkey *k = NULL; 2057 - struct btree_iter iter; 2065 + struct btree_iter_stack iter; 2058 2066 struct btree_check_state check_state; 2059 - 2060 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 2061 2067 2062 2068 /* check and mark root node keys */ 2063 2069 for_each_key_filter(&c->root->keys, k, &iter, bch_ptr_invalid) ··· 2550 2560 2551 2561 if (b->level) { 2552 2562 struct bkey *k; 2553 - struct btree_iter iter; 2563 + struct btree_iter_stack iter; 2554 2564 2555 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 2556 - bch_btree_iter_init(&b->keys, &iter, from); 2565 + bch_btree_iter_stack_init(&b->keys, &iter, from); 2557 2566 2558 - while ((k = bch_btree_iter_next_filter(&iter, &b->keys, 2567 + while ((k = bch_btree_iter_next_filter(&iter.iter, &b->keys, 2559 2568 bch_ptr_bad))) { 2560 2569 ret = bcache_btree(map_nodes_recurse, k, b, 2561 2570 op, from, fn, flags); ··· 2583 2594 { 2584 2595 int ret = 
MAP_CONTINUE; 2585 2596 struct bkey *k; 2586 - struct btree_iter iter; 2597 + struct btree_iter_stack iter; 2587 2598 2588 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 2589 - bch_btree_iter_init(&b->keys, &iter, from); 2599 + bch_btree_iter_stack_init(&b->keys, &iter, from); 2590 2600 2591 - while ((k = bch_btree_iter_next_filter(&iter, &b->keys, bch_ptr_bad))) { 2601 + while ((k = bch_btree_iter_next_filter(&iter.iter, &b->keys, 2602 + bch_ptr_bad))) { 2592 2603 ret = !b->level 2593 2604 ? fn(op, b, k) 2594 2605 : bcache_btree(map_keys_recurse, k,
+18 -25
drivers/md/bcache/extents.c
··· 33 33 i->k = bkey_next(i->k); 34 34 35 35 if (i->k == i->end) 36 - *i = iter->heap.data[--iter->heap.nr]; 36 + *i = iter->data[--iter->used]; 37 37 } 38 38 39 - static bool new_bch_key_sort_cmp(const void *l, const void *r, void *args) 39 + static bool bch_key_sort_cmp(struct btree_iter_set l, 40 + struct btree_iter_set r) 40 41 { 41 - struct btree_iter_set *_l = (struct btree_iter_set *)l; 42 - struct btree_iter_set *_r = (struct btree_iter_set *)r; 43 - int64_t c = bkey_cmp(_l->k, _r->k); 42 + int64_t c = bkey_cmp(l.k, r.k); 44 43 45 - return !(c ? c > 0 : _l->k < _r->k); 44 + return c ? c > 0 : l.k < r.k; 46 45 } 47 46 48 47 static bool __ptr_invalid(struct cache_set *c, const struct bkey *k) ··· 238 239 } 239 240 240 241 const struct btree_keys_ops bch_btree_keys_ops = { 241 - .sort_cmp = new_bch_key_sort_cmp, 242 + .sort_cmp = bch_key_sort_cmp, 242 243 .insert_fixup = bch_btree_ptr_insert_fixup, 243 244 .key_invalid = bch_btree_ptr_invalid, 244 245 .key_bad = bch_btree_ptr_bad, ··· 255 256 * Necessary for btree_sort_fixup() - if there are multiple keys that compare 256 257 * equal in different sets, we have to process them newest to oldest. 257 258 */ 258 - 259 - static bool new_bch_extent_sort_cmp(const void *l, const void *r, void __always_unused *args) 259 + static bool bch_extent_sort_cmp(struct btree_iter_set l, 260 + struct btree_iter_set r) 260 261 { 261 - struct btree_iter_set *_l = (struct btree_iter_set *)l; 262 - struct btree_iter_set *_r = (struct btree_iter_set *)r; 263 - int64_t c = bkey_cmp(&START_KEY(_l->k), &START_KEY(_r->k)); 262 + int64_t c = bkey_cmp(&START_KEY(l.k), &START_KEY(r.k)); 264 263 265 - return !(c ? c > 0 : _l->k < _r->k); 264 + return c ? 
c > 0 : l.k < r.k; 266 265 } 267 266 268 267 static struct bkey *bch_extent_sort_fixup(struct btree_iter *iter, 269 268 struct bkey *tmp) 270 269 { 271 - const struct min_heap_callbacks callbacks = { 272 - .less = new_bch_extent_sort_cmp, 273 - .swp = NULL, 274 - }; 275 - while (iter->heap.nr > 1) { 276 - struct btree_iter_set *top = iter->heap.data, *i = top + 1; 270 + while (iter->used > 1) { 271 + struct btree_iter_set *top = iter->data, *i = top + 1; 277 272 278 - if (iter->heap.nr > 2 && 279 - !new_bch_extent_sort_cmp(&i[0], &i[1], NULL)) 273 + if (iter->used > 2 && 274 + bch_extent_sort_cmp(i[0], i[1])) 280 275 i++; 281 276 282 277 if (bkey_cmp(top->k, &START_KEY(i->k)) <= 0) ··· 278 285 279 286 if (!KEY_SIZE(i->k)) { 280 287 sort_key_next(iter, i); 281 - min_heap_sift_down(&iter->heap, i - top, &callbacks, NULL); 288 + heap_sift(iter, i - top, bch_extent_sort_cmp); 282 289 continue; 283 290 } 284 291 ··· 288 295 else 289 296 bch_cut_front(top->k, i->k); 290 297 291 - min_heap_sift_down(&iter->heap, i - top, &callbacks, NULL); 298 + heap_sift(iter, i - top, bch_extent_sort_cmp); 292 299 } else { 293 300 /* can't happen because of comparison func */ 294 301 BUG_ON(!bkey_cmp(&START_KEY(top->k), &START_KEY(i->k))); ··· 298 305 299 306 bch_cut_back(&START_KEY(i->k), tmp); 300 307 bch_cut_front(i->k, top->k); 301 - min_heap_sift_down(&iter->heap, 0, &callbacks, NULL); 308 + heap_sift(iter, 0, bch_extent_sort_cmp); 302 309 303 310 return tmp; 304 311 } else { ··· 618 625 } 619 626 620 627 const struct btree_keys_ops bch_extent_keys_ops = { 621 - .sort_cmp = new_bch_extent_sort_cmp, 628 + .sort_cmp = bch_extent_sort_cmp, 622 629 .sort_fixup = bch_extent_sort_fixup, 623 630 .insert_fixup = bch_extent_insert_fixup, 624 631 .key_invalid = bch_extent_invalid,
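Every comparator in this revert drops a leading `!`: the legacy `heap_*` macros ask "may l sort above r?", which is the exact negation of `min_heap`'s "is l less than r?". A sketch of the restored extent comparator, with `int64_t` start offsets standing in for `bkey_cmp(&START_KEY(l), &START_KEY(r))` and pointer identity as the newest-set tie-break (names illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* true when l should sort above r: larger start key wins, and on a tie
 * the key at the lower address (the newer set) wins. */
static bool extent_sort_cmp(int64_t lstart, int64_t rstart,
                            const void *lk, const void *rk)
{
    int64_t c = lstart - rstart;

    return c ? c > 0 : lk < rk;
}

/* The min_heap 'less' callback this patch removes was simply the
 * negation of the comparator above. */
static bool minheap_less(int64_t lstart, int64_t rstart,
                         const void *lk, const void *rk)
{
    return !extent_sort_cmp(lstart, rstart, lk, rk);
}
```

Keeping the tie-break on pointer identity matters for `bch_extent_sort_fixup()`: equal keys from different sets must surface newest first.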
+10 -23
drivers/md/bcache/movinggc.c
··· 182 182 closure_sync(&cl); 183 183 } 184 184 185 - static bool new_bucket_cmp(const void *l, const void *r, void __always_unused *args) 185 + static bool bucket_cmp(struct bucket *l, struct bucket *r) 186 186 { 187 - struct bucket **_l = (struct bucket **)l; 188 - struct bucket **_r = (struct bucket **)r; 189 - 190 - return GC_SECTORS_USED(*_l) >= GC_SECTORS_USED(*_r); 187 + return GC_SECTORS_USED(l) < GC_SECTORS_USED(r); 191 188 } 192 189 193 190 static unsigned int bucket_heap_top(struct cache *ca) 194 191 { 195 192 struct bucket *b; 196 193 197 - return (b = min_heap_peek(&ca->heap)[0]) ? GC_SECTORS_USED(b) : 0; 194 + return (b = heap_peek(&ca->heap)) ? GC_SECTORS_USED(b) : 0; 198 195 } 199 196 200 197 void bch_moving_gc(struct cache_set *c) ··· 199 202 struct cache *ca = c->cache; 200 203 struct bucket *b; 201 204 unsigned long sectors_to_move, reserve_sectors; 202 - const struct min_heap_callbacks callbacks = { 203 - .less = new_bucket_cmp, 204 - .swp = NULL, 205 - }; 206 205 207 206 if (!c->copy_gc_enabled) 208 207 return; ··· 209 216 reserve_sectors = ca->sb.bucket_size * 210 217 fifo_used(&ca->free[RESERVE_MOVINGGC]); 211 218 212 - ca->heap.nr = 0; 219 + ca->heap.used = 0; 213 220 214 221 for_each_bucket(b, ca) { 215 222 if (GC_MARK(b) == GC_MARK_METADATA || ··· 218 225 atomic_read(&b->pin)) 219 226 continue; 220 227 221 - if (!min_heap_full(&ca->heap)) { 228 + if (!heap_full(&ca->heap)) { 222 229 sectors_to_move += GC_SECTORS_USED(b); 223 - min_heap_push(&ca->heap, &b, &callbacks, NULL); 224 - } else if (!new_bucket_cmp(&b, min_heap_peek(&ca->heap), ca)) { 230 + heap_add(&ca->heap, b, bucket_cmp); 231 + } else if (bucket_cmp(b, heap_peek(&ca->heap))) { 225 232 sectors_to_move -= bucket_heap_top(ca); 226 233 sectors_to_move += GC_SECTORS_USED(b); 227 234 228 235 ca->heap.data[0] = b; 229 - min_heap_sift_down(&ca->heap, 0, &callbacks, NULL); 236 + heap_sift(&ca->heap, 0, bucket_cmp); 230 237 } 231 238 } 232 239 233 240 while (sectors_to_move > 
reserve_sectors) { 234 - if (ca->heap.nr) { 235 - b = min_heap_peek(&ca->heap)[0]; 236 - min_heap_pop(&ca->heap, &callbacks, NULL); 237 - } 241 + heap_pop(&ca->heap, b, bucket_cmp); 238 242 sectors_to_move -= GC_SECTORS_USED(b); 239 243 } 240 244 241 - while (ca->heap.nr) { 242 - b = min_heap_peek(&ca->heap)[0]; 243 - min_heap_pop(&ca->heap, &callbacks, NULL); 245 + while (heap_pop(&ca->heap, b, bucket_cmp)) 244 246 SET_GC_MOVE(b, 1); 245 - } 246 247 247 248 mutex_unlock(&c->bucket_lock); 248 249
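The moving-GC loop above keeps the buckets with the fewest used sectors in a bounded heap: `heap_add()` while there is room, then replace-the-root-and-sift when a better candidate appears. A compact sketch of that keep-the-N-smallest pattern, with plain `int`s in place of `struct bucket *` (capacity and names illustrative):

```c
#include <stddef.h>

#define HEAP_CAP 3

struct heap { size_t used; int data[HEAP_CAP]; };

/* Max-heap on "sectors used": the root is the worst survivor, i.e. the
 * one evicted when a smaller candidate shows up. */
static void sift(struct heap *h, size_t j)
{
    for (size_t r; j * 2 + 1 < h->used; j = r) {
        r = j * 2 + 1;
        if (r + 1 < h->used && h->data[r] < h->data[r + 1])
            r++;
        if (h->data[r] < h->data[j])
            break;
        int t = h->data[r]; h->data[r] = h->data[j]; h->data[j] = t;
    }
}

static void consider(struct heap *h, int v)
{
    if (h->used < HEAP_CAP) {            /* heap_add() path */
        size_t i = h->used++;

        h->data[i] = v;
        while (i && h->data[i] >= h->data[(i - 1) / 2]) {
            int t = h->data[i];

            h->data[i] = h->data[(i - 1) / 2];
            h->data[(i - 1) / 2] = t;
            i = (i - 1) / 2;
        }
    } else if (v < h->data[0]) {         /* bucket_cmp(b, heap_peek()) path */
        h->data[0] = v;                  /* evict the current worst */
        sift(h, 0);
    }
}
```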
+2 -1
drivers/md/bcache/super.c
··· 1912 1912 INIT_LIST_HEAD(&c->btree_cache_freed); 1913 1913 INIT_LIST_HEAD(&c->data_buckets); 1914 1914 1915 - iter_size = ((meta_bucket_pages(sb) * PAGE_SECTORS) / sb->block_size) * 1915 + iter_size = sizeof(struct btree_iter) + 1916 + ((meta_bucket_pages(sb) * PAGE_SECTORS) / sb->block_size) * 1916 1917 sizeof(struct btree_iter_set); 1917 1918 1918 1919 c->devices = kcalloc(c->nr_uuids, sizeof(void *), GFP_KERNEL);
+1 -3
drivers/md/bcache/sysfs.c
··· 660 660 unsigned int bytes = 0; 661 661 struct bkey *k; 662 662 struct btree *b; 663 - struct btree_iter iter; 664 - 665 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 663 + struct btree_iter_stack iter; 666 664 667 665 goto lock_root; 668 666
+65 -2
drivers/md/bcache/util.h
··· 9 9 #include <linux/kernel.h> 10 10 #include <linux/sched/clock.h> 11 11 #include <linux/llist.h> 12 - #include <linux/min_heap.h> 13 12 #include <linux/ratelimit.h> 14 13 #include <linux/vmalloc.h> 15 14 #include <linux/workqueue.h> ··· 30 31 31 32 #endif 32 33 34 + #define DECLARE_HEAP(type, name) \ 35 + struct { \ 36 + size_t size, used; \ 37 + type *data; \ 38 + } name 39 + 33 40 #define init_heap(heap, _size, gfp) \ 34 41 ({ \ 35 42 size_t _bytes; \ 36 - (heap)->nr = 0; \ 43 + (heap)->used = 0; \ 37 44 (heap)->size = (_size); \ 38 45 _bytes = (heap)->size * sizeof(*(heap)->data); \ 39 46 (heap)->data = kvmalloc(_bytes, (gfp) & GFP_KERNEL); \ ··· 51 46 kvfree((heap)->data); \ 52 47 (heap)->data = NULL; \ 53 48 } while (0) 49 + 50 + #define heap_swap(h, i, j) swap((h)->data[i], (h)->data[j]) 51 + 52 + #define heap_sift(h, i, cmp) \ 53 + do { \ 54 + size_t _r, _j = i; \ 55 + \ 56 + for (; _j * 2 + 1 < (h)->used; _j = _r) { \ 57 + _r = _j * 2 + 1; \ 58 + if (_r + 1 < (h)->used && \ 59 + cmp((h)->data[_r], (h)->data[_r + 1])) \ 60 + _r++; \ 61 + \ 62 + if (cmp((h)->data[_r], (h)->data[_j])) \ 63 + break; \ 64 + heap_swap(h, _r, _j); \ 65 + } \ 66 + } while (0) 67 + 68 + #define heap_sift_down(h, i, cmp) \ 69 + do { \ 70 + while (i) { \ 71 + size_t p = (i - 1) / 2; \ 72 + if (cmp((h)->data[i], (h)->data[p])) \ 73 + break; \ 74 + heap_swap(h, i, p); \ 75 + i = p; \ 76 + } \ 77 + } while (0) 78 + 79 + #define heap_add(h, d, cmp) \ 80 + ({ \ 81 + bool _r = !heap_full(h); \ 82 + if (_r) { \ 83 + size_t _i = (h)->used++; \ 84 + (h)->data[_i] = d; \ 85 + \ 86 + heap_sift_down(h, _i, cmp); \ 87 + heap_sift(h, _i, cmp); \ 88 + } \ 89 + _r; \ 90 + }) 91 + 92 + #define heap_pop(h, d, cmp) \ 93 + ({ \ 94 + bool _r = (h)->used; \ 95 + if (_r) { \ 96 + (d) = (h)->data[0]; \ 97 + (h)->used--; \ 98 + heap_swap(h, 0, (h)->used); \ 99 + heap_sift(h, 0, cmp); \ 100 + } \ 101 + _r; \ 102 + }) 103 + 104 + #define heap_peek(h) ((h)->used ? 
(h)->data[0] : NULL) 105 + 106 + #define heap_full(h) ((h)->used == (h)->size) 54 107 55 108 #define DECLARE_FIFO(type, name) \ 56 109 struct { \
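The restored `heap_pop()` above swaps the root with the last element, shrinks the heap, and sifts the new root down; draining a max-heap this way hands elements back in descending order. A standalone translation of the macro into functions (same logic, illustrative names):

```c
#include <stdbool.h>
#include <stddef.h>

static void sift(int *d, size_t used, size_t j)
{
    for (size_t r; j * 2 + 1 < used; j = r) {
        r = j * 2 + 1;
        if (r + 1 < used && d[r] < d[r + 1])
            r++;
        if (d[r] < d[j])
            break;
        int t = d[r]; d[r] = d[j]; d[j] = t;
    }
}

/* heap_pop(): emptiness is reported via the return value, just as the
 * macro's ({ ... _r; }) statement expression does. */
static bool heap_pop(int *d, size_t *used, int *out)
{
    if (!*used)
        return false;

    *out = d[0];
    (*used)--;
    int t = d[0]; d[0] = d[*used]; d[*used] = t;   /* heap_swap(h, 0, used) */
    sift(d, *used, 0);
    return true;
}
```

Note the bcache naming is inverted relative to common usage: `heap_sift()` moves an element down toward the leaves, while `heap_sift_down()` moves it up toward the root.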
+5 -8
drivers/md/bcache/writeback.c
··· 908 908 struct dirty_init_thrd_info *info = arg; 909 909 struct bch_dirty_init_state *state = info->state; 910 910 struct cache_set *c = state->c; 911 - struct btree_iter iter; 911 + struct btree_iter_stack iter; 912 912 struct bkey *k, *p; 913 913 int cur_idx, prev_idx, skip_nr; 914 914 915 915 k = p = NULL; 916 916 prev_idx = 0; 917 917 918 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 919 - bch_btree_iter_init(&c->root->keys, &iter, NULL); 920 - k = bch_btree_iter_next_filter(&iter, &c->root->keys, bch_ptr_bad); 918 + bch_btree_iter_stack_init(&c->root->keys, &iter, NULL); 919 + k = bch_btree_iter_next_filter(&iter.iter, &c->root->keys, bch_ptr_bad); 921 920 BUG_ON(!k); 922 921 923 922 p = k; ··· 930 931 skip_nr = cur_idx - prev_idx; 931 932 932 933 while (skip_nr) { 933 - k = bch_btree_iter_next_filter(&iter, 934 + k = bch_btree_iter_next_filter(&iter.iter, 934 935 &c->root->keys, 935 936 bch_ptr_bad); 936 937 if (k) ··· 979 980 int i; 980 981 struct btree *b = NULL; 981 982 struct bkey *k = NULL; 982 - struct btree_iter iter; 983 + struct btree_iter_stack iter; 983 984 struct sectors_dirty_init op; 984 985 struct cache_set *c = d->c; 985 986 struct bch_dirty_init_state state; 986 - 987 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 988 987 989 988 retry_lock: 990 989 b = c->root;
+7 -4
drivers/md/dm-crypt.c
··· 517 517 { 518 518 struct iv_lmk_private *lmk = &cc->iv_gen_private.lmk; 519 519 SHASH_DESC_ON_STACK(desc, lmk->hash_tfm); 520 - struct md5_state md5state; 520 + union { 521 + struct md5_state md5state; 522 + u8 state[CRYPTO_MD5_STATESIZE]; 523 + } u; 521 524 __le32 buf[4]; 522 525 int i, r; 523 526 ··· 551 548 return r; 552 549 553 550 /* No MD5 padding here */ 554 - r = crypto_shash_export(desc, &md5state); 551 + r = crypto_shash_export(desc, &u.md5state); 555 552 if (r) 556 553 return r; 557 554 558 555 for (i = 0; i < MD5_HASH_WORDS; i++) 559 - __cpu_to_le32s(&md5state.hash[i]); 560 - memcpy(iv, &md5state.hash, cc->iv_size); 556 + __cpu_to_le32s(&u.md5state.hash[i]); 557 + memcpy(iv, &u.md5state.hash, cc->iv_size); 561 558 562 559 return 0; 563 560 }
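The dm-crypt hunk wraps `struct md5_state` in a union with a `CRYPTO_MD5_STATESIZE`-byte array, so `crypto_shash_export()` always has a large-enough destination even when the exported state outgrows the bare struct. The sizing guarantee the union provides can be shown in isolation (the struct layout and the 28-byte constant below are illustrative stand-ins, not the kernel's actual definitions):

```c
#include <stdint.h>

#define CRYPTO_MD5_STATESIZE 28        /* illustrative, not the kernel value */

struct md5_state_demo {                /* simplified stand-in */
    uint32_t hash[4];
    uint64_t byte_count;
};

/* A union is at least as large as its largest member, and all members
 * start at offset 0 -- so exporting through &u.md5state can never
 * overrun u, regardless of which side is bigger. */
union export_buf {
    struct md5_state_demo md5state;
    uint8_t state[CRYPTO_MD5_STATESIZE];
};
```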
+1 -1
drivers/md/dm-raid.c
··· 2407 2407 */ 2408 2408 sb_retrieve_failed_devices(sb, failed_devices); 2409 2409 rdev_for_each(r, mddev) { 2410 - if (test_bit(Journal, &rdev->flags) || 2410 + if (test_bit(Journal, &r->flags) || 2411 2411 !r->sb_page) 2412 2412 continue; 2413 2413 sb2 = page_address(r->sb_page);
+1 -1
drivers/mtd/mtdchar.c
··· 559 559 /* Sanitize user input */ 560 560 p.devname[BLKPG_DEVNAMELTH - 1] = '\0'; 561 561 562 - return mtd_add_partition(mtd, p.devname, p.start, p.length, NULL); 562 + return mtd_add_partition(mtd, p.devname, p.start, p.length); 563 563 564 564 case BLKPG_DEL_PARTITION: 565 565
+40 -112
drivers/mtd/mtdcore.c
··· 68 68 .pm = MTD_CLS_PM_OPS, 69 69 }; 70 70 71 - static struct class mtd_master_class = { 72 - .name = "mtd_master", 73 - .pm = MTD_CLS_PM_OPS, 74 - }; 75 - 76 71 static DEFINE_IDR(mtd_idr); 77 - static DEFINE_IDR(mtd_master_idr); 78 72 79 73 /* These are exported solely for the purpose of mtd_blkdevs.c. You 80 74 should not use them for _anything_ else */ ··· 83 89 84 90 static LIST_HEAD(mtd_notifiers); 85 91 86 - #define MTD_MASTER_DEVS 255 92 + 87 93 #define MTD_DEVT(index) MKDEV(MTD_CHAR_MAJOR, (index)*2) 88 - static dev_t mtd_master_devt; 89 94 90 95 /* REVISIT once MTD uses the driver model better, whoever allocates 91 96 * the mtd_info will probably want to use the release() hook... ··· 102 109 103 110 /* remove /dev/mtdXro node */ 104 111 device_destroy(&mtd_class, index + 1); 105 - } 106 - 107 - static void mtd_master_release(struct device *dev) 108 - { 109 - struct mtd_info *mtd = dev_get_drvdata(dev); 110 - 111 - idr_remove(&mtd_master_idr, mtd->index); 112 - of_node_put(mtd_get_of_node(mtd)); 113 - 114 - if (mtd_is_partition(mtd)) 115 - release_mtd_partition(mtd); 116 112 } 117 113 118 114 static void mtd_device_release(struct kref *kref) ··· 365 383 .name = "mtd", 366 384 .groups = mtd_groups, 367 385 .release = mtd_release, 368 - }; 369 - 370 - static const struct device_type mtd_master_devtype = { 371 - .name = "mtd_master", 372 - .release = mtd_master_release, 373 386 }; 374 387 375 388 static bool mtd_expert_analysis_mode; ··· 634 657 /** 635 658 * add_mtd_device - register an MTD device 636 659 * @mtd: pointer to new MTD device info structure 637 - * @partitioned: create partitioned device 638 660 * 639 661 * Add a device to the list of MTD devices present in the system, and 640 662 * notify each currently active MTD 'user' of its arrival. Returns 641 663 * zero on success or non-zero on failure. 
642 664 */ 643 - int add_mtd_device(struct mtd_info *mtd, bool partitioned) 665 + 666 + int add_mtd_device(struct mtd_info *mtd) 644 667 { 645 668 struct device_node *np = mtd_get_of_node(mtd); 646 669 struct mtd_info *master = mtd_get_master(mtd); ··· 687 710 ofidx = -1; 688 711 if (np) 689 712 ofidx = of_alias_get_id(np, "mtd"); 690 - if (partitioned) { 691 - if (ofidx >= 0) 692 - i = idr_alloc(&mtd_idr, mtd, ofidx, ofidx + 1, GFP_KERNEL); 693 - else 694 - i = idr_alloc(&mtd_idr, mtd, 0, 0, GFP_KERNEL); 695 - } else { 696 - if (ofidx >= 0) 697 - i = idr_alloc(&mtd_master_idr, mtd, ofidx, ofidx + 1, GFP_KERNEL); 698 - else 699 - i = idr_alloc(&mtd_master_idr, mtd, 0, 0, GFP_KERNEL); 700 - } 713 + if (ofidx >= 0) 714 + i = idr_alloc(&mtd_idr, mtd, ofidx, ofidx + 1, GFP_KERNEL); 715 + else 716 + i = idr_alloc(&mtd_idr, mtd, 0, 0, GFP_KERNEL); 701 717 if (i < 0) { 702 718 error = i; 703 719 goto fail_locked; ··· 738 768 /* Caller should have set dev.parent to match the 739 769 * physical device, if appropriate. 
740 770 */ 741 - if (partitioned) { 742 - mtd->dev.type = &mtd_devtype; 743 - mtd->dev.class = &mtd_class; 744 - mtd->dev.devt = MTD_DEVT(i); 745 - dev_set_name(&mtd->dev, "mtd%d", i); 746 - error = dev_set_name(&mtd->dev, "mtd%d", i); 747 - } else { 748 - mtd->dev.type = &mtd_master_devtype; 749 - mtd->dev.class = &mtd_master_class; 750 - mtd->dev.devt = MKDEV(MAJOR(mtd_master_devt), i); 751 - error = dev_set_name(&mtd->dev, "mtd_master%d", i); 752 - } 771 + mtd->dev.type = &mtd_devtype; 772 + mtd->dev.class = &mtd_class; 773 + mtd->dev.devt = MTD_DEVT(i); 774 + error = dev_set_name(&mtd->dev, "mtd%d", i); 753 775 if (error) 754 776 goto fail_devname; 755 777 dev_set_drvdata(&mtd->dev, mtd); ··· 749 787 of_node_get(mtd_get_of_node(mtd)); 750 788 error = device_register(&mtd->dev); 751 789 if (error) { 752 - pr_err("mtd: %s device_register fail %d\n", mtd->name, error); 753 790 put_device(&mtd->dev); 754 791 goto fail_added; 755 792 } ··· 760 799 761 800 mtd_debugfs_populate(mtd); 762 801 763 - if (partitioned) { 764 - device_create(&mtd_class, mtd->dev.parent, MTD_DEVT(i) + 1, NULL, 765 - "mtd%dro", i); 766 - } 802 + device_create(&mtd_class, mtd->dev.parent, MTD_DEVT(i) + 1, NULL, 803 + "mtd%dro", i); 767 804 768 - pr_debug("mtd: Giving out %spartitioned device %d to %s\n", 769 - partitioned ? 
"" : "un-", i, mtd->name); 805 + pr_debug("mtd: Giving out device %d to %s\n", i, mtd->name); 770 806 /* No need to get a refcount on the module containing 771 807 the notifier, since we hold the mtd_table_mutex */ 772 808 list_for_each_entry(not, &mtd_notifiers, list) ··· 771 813 772 814 mutex_unlock(&mtd_table_mutex); 773 815 774 - if (partitioned) { 775 - if (of_property_read_bool(mtd_get_of_node(mtd), "linux,rootfs")) { 776 - if (IS_BUILTIN(CONFIG_MTD)) { 777 - pr_info("mtd: setting mtd%d (%s) as root device\n", 778 - mtd->index, mtd->name); 779 - ROOT_DEV = MKDEV(MTD_BLOCK_MAJOR, mtd->index); 780 - } else { 781 - pr_warn("mtd: can't set mtd%d (%s) as root device - mtd must be builtin\n", 782 - mtd->index, mtd->name); 783 - } 816 + if (of_property_read_bool(mtd_get_of_node(mtd), "linux,rootfs")) { 817 + if (IS_BUILTIN(CONFIG_MTD)) { 818 + pr_info("mtd: setting mtd%d (%s) as root device\n", mtd->index, mtd->name); 819 + ROOT_DEV = MKDEV(MTD_BLOCK_MAJOR, mtd->index); 820 + } else { 821 + pr_warn("mtd: can't set mtd%d (%s) as root device - mtd must be builtin\n", 822 + mtd->index, mtd->name); 784 823 } 785 824 } 786 825 ··· 793 838 fail_added: 794 839 of_node_put(mtd_get_of_node(mtd)); 795 840 fail_devname: 796 - if (partitioned) 797 - idr_remove(&mtd_idr, i); 798 - else 799 - idr_remove(&mtd_master_idr, i); 841 + idr_remove(&mtd_idr, i); 800 842 fail_locked: 801 843 mutex_unlock(&mtd_table_mutex); 802 844 return error; ··· 811 859 812 860 int del_mtd_device(struct mtd_info *mtd) 813 861 { 814 - struct mtd_notifier *not; 815 - struct idr *idr; 816 862 int ret; 863 + struct mtd_notifier *not; 817 864 818 865 mutex_lock(&mtd_table_mutex); 819 866 820 - idr = mtd->dev.class == &mtd_class ? 
&mtd_idr : &mtd_master_idr; 821 - if (idr_find(idr, mtd->index) != mtd) { 867 + if (idr_find(&mtd_idr, mtd->index) != mtd) { 822 868 ret = -ENODEV; 823 869 goto out_error; 824 870 } ··· 1056 1106 const struct mtd_partition *parts, 1057 1107 int nr_parts) 1058 1108 { 1059 - struct mtd_info *parent; 1060 1109 int ret, err; 1061 1110 1062 1111 mtd_set_dev_defaults(mtd); ··· 1064 1115 if (ret) 1065 1116 goto out; 1066 1117 1067 - ret = add_mtd_device(mtd, false); 1068 - if (ret) 1069 - goto out; 1070 - 1071 1118 if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) { 1072 - ret = mtd_add_partition(mtd, mtd->name, 0, MTDPART_SIZ_FULL, &parent); 1119 + ret = add_mtd_device(mtd); 1073 1120 if (ret) 1074 1121 goto out; 1075 - 1076 - } else { 1077 - parent = mtd; 1078 1122 } 1079 1123 1080 1124 /* Prefer parsed partitions over driver-provided fallback */ 1081 - ret = parse_mtd_partitions(parent, types, parser_data); 1125 + ret = parse_mtd_partitions(mtd, types, parser_data); 1082 1126 if (ret == -EPROBE_DEFER) 1083 1127 goto out; 1084 1128 1085 1129 if (ret > 0) 1086 1130 ret = 0; 1087 1131 else if (nr_parts) 1088 - ret = add_mtd_partitions(parent, parts, nr_parts); 1089 - else if (!IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) 1090 - ret = mtd_add_partition(parent, mtd->name, 0, MTDPART_SIZ_FULL, NULL); 1132 + ret = add_mtd_partitions(mtd, parts, nr_parts); 1133 + else if (!device_is_registered(&mtd->dev)) 1134 + ret = add_mtd_device(mtd); 1135 + else 1136 + ret = 0; 1091 1137 1092 1138 if (ret) 1093 1139 goto out; ··· 1102 1158 register_reboot_notifier(&mtd->reboot_notifier); 1103 1159 } 1104 1160 1105 - return 0; 1106 1161 out: 1107 - nvmem_unregister(mtd->otp_user_nvmem); 1108 - nvmem_unregister(mtd->otp_factory_nvmem); 1162 + if (ret) { 1163 + nvmem_unregister(mtd->otp_user_nvmem); 1164 + nvmem_unregister(mtd->otp_factory_nvmem); 1165 + } 1109 1166 1110 - del_mtd_partitions(mtd); 1111 - 1112 - if (device_is_registered(&mtd->dev)) { 1167 + if (ret && 
device_is_registered(&mtd->dev)) { 1113 1168 err = del_mtd_device(mtd); 1114 1169 if (err) 1115 1170 pr_err("Error when deleting MTD device (%d)\n", err); ··· 1267 1324 mtd = mtd->parent; 1268 1325 } 1269 1326 1270 - kref_get(&master->refcnt); 1327 + if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) 1328 + kref_get(&master->refcnt); 1271 1329 1272 1330 return 0; 1273 1331 } ··· 1362 1418 mtd = parent; 1363 1419 } 1364 1420 1365 - kref_put(&master->refcnt, mtd_device_release); 1421 + if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) 1422 + kref_put(&master->refcnt, mtd_device_release); 1366 1423 1367 1424 module_put(master->owner); 1368 1425 ··· 2530 2585 if (ret) 2531 2586 goto err_reg; 2532 2587 2533 - ret = class_register(&mtd_master_class); 2534 - if (ret) 2535 - goto err_reg2; 2536 - 2537 - ret = alloc_chrdev_region(&mtd_master_devt, 0, MTD_MASTER_DEVS, "mtd_master"); 2538 - if (ret < 0) { 2539 - pr_err("unable to allocate char dev region\n"); 2540 - goto err_chrdev; 2541 - } 2542 - 2543 2588 mtd_bdi = mtd_bdi_init("mtd"); 2544 2589 if (IS_ERR(mtd_bdi)) { 2545 2590 ret = PTR_ERR(mtd_bdi); ··· 2554 2619 bdi_unregister(mtd_bdi); 2555 2620 bdi_put(mtd_bdi); 2556 2621 err_bdi: 2557 - unregister_chrdev_region(mtd_master_devt, MTD_MASTER_DEVS); 2558 - err_chrdev: 2559 - class_unregister(&mtd_master_class); 2560 - err_reg2: 2561 2622 class_unregister(&mtd_class); 2562 2623 err_reg: 2563 2624 pr_err("Error registering mtd class or bdi: %d\n", ret); ··· 2567 2636 if (proc_mtd) 2568 2637 remove_proc_entry("mtd", NULL); 2569 2638 class_unregister(&mtd_class); 2570 - class_unregister(&mtd_master_class); 2571 - unregister_chrdev_region(mtd_master_devt, MTD_MASTER_DEVS); 2572 2639 bdi_unregister(mtd_bdi); 2573 2640 bdi_put(mtd_bdi); 2574 2641 idr_destroy(&mtd_idr); 2575 - idr_destroy(&mtd_master_idr); 2576 2642 } 2577 2643 2578 2644 module_init(init_mtd);
+1 -1
drivers/mtd/mtdcore.h
··· 8 8 extern struct backing_dev_info *mtd_bdi; 9 9 10 10 struct mtd_info *__mtd_next_device(int i); 11 - int __must_check add_mtd_device(struct mtd_info *mtd, bool partitioned); 11 + int __must_check add_mtd_device(struct mtd_info *mtd); 12 12 int del_mtd_device(struct mtd_info *mtd); 13 13 int add_mtd_partitions(struct mtd_info *, const struct mtd_partition *, int); 14 14 int del_mtd_partitions(struct mtd_info *);
+8 -8
drivers/mtd/mtdpart.c
··· 86 86 * parent conditional on that option. Note, this is a way to 87 87 * distinguish between the parent and its partitions in sysfs. 88 88 */ 89 - child->dev.parent = &parent->dev; 89 + child->dev.parent = IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER) || mtd_is_partition(parent) ? 90 + &parent->dev : parent->dev.parent; 90 91 child->dev.of_node = part->of_node; 91 92 child->parent = parent; 92 93 child->part.offset = part->offset; ··· 243 242 } 244 243 245 244 int mtd_add_partition(struct mtd_info *parent, const char *name, 246 - long long offset, long long length, struct mtd_info **out) 245 + long long offset, long long length) 247 246 { 248 247 struct mtd_info *master = mtd_get_master(parent); 249 248 u64 parent_size = mtd_is_partition(parent) ? ··· 276 275 list_add_tail(&child->part.node, &parent->partitions); 277 276 mutex_unlock(&master->master.partitions_lock); 278 277 279 - ret = add_mtd_device(child, true); 278 + ret = add_mtd_device(child); 280 279 if (ret) 281 280 goto err_remove_part; 282 281 283 282 mtd_add_partition_attrs(child); 284 - 285 - if (out) 286 - *out = child; 287 283 288 284 return 0; 289 285 ··· 413 415 list_add_tail(&child->part.node, &parent->partitions); 414 416 mutex_unlock(&master->master.partitions_lock); 415 417 416 - ret = add_mtd_device(child, true); 418 + ret = add_mtd_device(child); 417 419 if (ret) { 418 420 mutex_lock(&master->master.partitions_lock); 419 421 list_del(&child->part.node); ··· 590 592 int ret, err = 0; 591 593 592 594 dev = &master->dev; 595 + /* Use parent device (controller) if the top level MTD is not registered */ 596 + if (!IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER) && !mtd_is_partition(master)) 597 + dev = master->dev.parent; 593 598 594 599 np = mtd_get_of_node(master); 595 600 if (mtd_is_partition(master)) ··· 711 710 if (ret < 0 && !err) 712 711 err = ret; 713 712 } 714 - 715 713 return err; 716 714 } 717 715
+1
drivers/mtd/nand/spi/core.c
··· 1585 1585 { 1586 1586 struct nand_device *nand = spinand_to_nand(spinand); 1587 1587 1588 + nanddev_ecc_engine_cleanup(nand); 1588 1589 nanddev_cleanup(nand); 1589 1590 spinand_manufacturer_cleanup(spinand); 1590 1591 kfree(spinand->databuf);
+5 -5
drivers/mtd/nand/spi/winbond.c
··· 25 25 26 26 static SPINAND_OP_VARIANTS(read_cache_octal_variants, 27 27 SPINAND_PAGE_READ_FROM_CACHE_1S_1D_8D_OP(0, 2, NULL, 0, 105 * HZ_PER_MHZ), 28 - SPINAND_PAGE_READ_FROM_CACHE_1S_8S_8S_OP(0, 16, NULL, 0, 86 * HZ_PER_MHZ), 28 + SPINAND_PAGE_READ_FROM_CACHE_1S_8S_8S_OP(0, 16, NULL, 0, 162 * HZ_PER_MHZ), 29 29 SPINAND_PAGE_READ_FROM_CACHE_1S_1S_8S_OP(0, 1, NULL, 0, 133 * HZ_PER_MHZ), 30 30 SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0), 31 31 SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0)); ··· 42 42 static SPINAND_OP_VARIANTS(read_cache_dual_quad_dtr_variants, 43 43 SPINAND_PAGE_READ_FROM_CACHE_1S_4D_4D_OP(0, 8, NULL, 0, 80 * HZ_PER_MHZ), 44 44 SPINAND_PAGE_READ_FROM_CACHE_1S_1D_4D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ), 45 - SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0), 45 + SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0, 104 * HZ_PER_MHZ), 46 46 SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0), 47 47 SPINAND_PAGE_READ_FROM_CACHE_1S_2D_2D_OP(0, 4, NULL, 0, 80 * HZ_PER_MHZ), 48 48 SPINAND_PAGE_READ_FROM_CACHE_1S_1D_2D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ), 49 - SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0), 49 + SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0, 104 * HZ_PER_MHZ), 50 50 SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0), 51 51 SPINAND_PAGE_READ_FROM_CACHE_1S_1D_1D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ), 52 52 SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0), ··· 289 289 SPINAND_ECCINFO(&w35n01jw_ooblayout, NULL)), 290 290 SPINAND_INFO("W35N02JW", /* 1.8V */ 291 291 SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xdf, 0x22), 292 - NAND_MEMORG(1, 4096, 128, 64, 512, 10, 2, 1, 1), 292 + NAND_MEMORG(1, 4096, 128, 64, 512, 10, 1, 2, 1), 293 293 NAND_ECCREQ(1, 512), 294 294 SPINAND_INFO_OP_VARIANTS(&read_cache_octal_variants, 295 295 &write_cache_octal_variants, ··· 298 298 SPINAND_ECCINFO(&w35n01jw_ooblayout, NULL)), 299 299 SPINAND_INFO("W35N04JW", /* 1.8V */ 300 300 
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xdf, 0x23), 301 - NAND_MEMORG(1, 4096, 128, 64, 512, 10, 4, 1, 1), 301 + NAND_MEMORG(1, 4096, 128, 64, 512, 10, 1, 4, 1), 302 302 NAND_ECCREQ(1, 512), 303 303 SPINAND_INFO_OP_VARIANTS(&read_cache_octal_variants, 304 304 &write_cache_octal_variants,
+5 -4
drivers/net/can/m_can/tcan4x5x-core.c
··· 411 411 priv = cdev_to_priv(mcan_class); 412 412 413 413 priv->power = devm_regulator_get_optional(&spi->dev, "vsup"); 414 - if (PTR_ERR(priv->power) == -EPROBE_DEFER) { 415 - ret = -EPROBE_DEFER; 416 - goto out_m_can_class_free_dev; 417 - } else { 414 + if (IS_ERR(priv->power)) { 415 + if (PTR_ERR(priv->power) == -EPROBE_DEFER) { 416 + ret = -EPROBE_DEFER; 417 + goto out_m_can_class_free_dev; 418 + } 418 419 priv->power = NULL; 419 420 } 420 421
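The tcan4x5x fix above restores the "optional resource" idiom: only `-EPROBE_DEFER` is propagated, any other error means the regulator is simply absent, and a valid pointer must be left untouched (the old code overwrote it with NULL). A minimal user-space sketch of that idiom, with a hypothetical re-creation of the kernel's error-pointer encoding (`MAX_ERRNO`, `EPROBE_DEFER` values assumed):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical re-creation of the kernel's ERR_PTR encoding: small
 * negative errnos live in the top page of the address space. */
#define MAX_ERRNO	4095
#define EPROBE_DEFER	517

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Pattern from the fix: for an *optional* resource, propagate
 * -EPROBE_DEFER, treat any other error as "absent" (NULL), and keep
 * a valid pointer exactly as returned. */
static int get_optional(void **res, void *maybe)
{
	*res = maybe;
	if (IS_ERR(*res)) {
		if (PTR_ERR(*res) == -EPROBE_DEFER)
			return -EPROBE_DEFER;
		*res = NULL;	/* optional: absence is not fatal */
	}
	return 0;
}
```

The original bug was taking the `else` branch for valid pointers too; structuring the NULL assignment inside an `IS_ERR()` guard makes that impossible.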
+16 -11
drivers/net/ethernet/airoha/airoha_eth.c
··· 1065 1065 1066 1066 static int airoha_qdma_init_hfwd_queues(struct airoha_qdma *qdma) 1067 1067 { 1068 + int size, index, num_desc = HW_DSCP_NUM; 1068 1069 struct airoha_eth *eth = qdma->eth; 1069 1070 int id = qdma - &eth->qdma[0]; 1071 + u32 status, buf_size; 1070 1072 dma_addr_t dma_addr; 1071 1073 const char *name; 1072 - int size, index; 1073 - u32 status; 1074 - 1075 - size = HW_DSCP_NUM * sizeof(struct airoha_qdma_fwd_desc); 1076 - if (!dmam_alloc_coherent(eth->dev, size, &dma_addr, GFP_KERNEL)) 1077 - return -ENOMEM; 1078 - 1079 - airoha_qdma_wr(qdma, REG_FWD_DSCP_BASE, dma_addr); 1080 1074 1081 1075 name = devm_kasprintf(eth->dev, GFP_KERNEL, "qdma%d-buf", id); 1082 1076 if (!name) 1083 1077 return -ENOMEM; 1084 1078 1079 + buf_size = id ? AIROHA_MAX_PACKET_SIZE / 2 : AIROHA_MAX_PACKET_SIZE; 1085 1080 index = of_property_match_string(eth->dev->of_node, 1086 1081 "memory-region-names", name); 1087 1082 if (index >= 0) { ··· 1094 1099 rmem = of_reserved_mem_lookup(np); 1095 1100 of_node_put(np); 1096 1101 dma_addr = rmem->base; 1102 + /* Compute the number of hw descriptors according to the 1103 + * reserved memory size and the payload buffer size 1104 + */ 1105 + num_desc = div_u64(rmem->size, buf_size); 1097 1106 } else { 1098 - size = AIROHA_MAX_PACKET_SIZE * HW_DSCP_NUM; 1107 + size = buf_size * num_desc; 1099 1108 if (!dmam_alloc_coherent(eth->dev, size, &dma_addr, 1100 1109 GFP_KERNEL)) 1101 1110 return -ENOMEM; ··· 1107 1108 1108 1109 airoha_qdma_wr(qdma, REG_FWD_BUF_BASE, dma_addr); 1109 1110 1111 + size = num_desc * sizeof(struct airoha_qdma_fwd_desc); 1112 + if (!dmam_alloc_coherent(eth->dev, size, &dma_addr, GFP_KERNEL)) 1113 + return -ENOMEM; 1114 + 1115 + airoha_qdma_wr(qdma, REG_FWD_DSCP_BASE, dma_addr); 1116 + /* QDMA0: 2KB. 
QDMA1: 1KB */ 1110 1117 airoha_qdma_rmw(qdma, REG_HW_FWD_DSCP_CFG, 1111 1118 HW_FWD_DSCP_PAYLOAD_SIZE_MASK, 1112 - FIELD_PREP(HW_FWD_DSCP_PAYLOAD_SIZE_MASK, 0)); 1119 + FIELD_PREP(HW_FWD_DSCP_PAYLOAD_SIZE_MASK, !!id)); 1113 1120 airoha_qdma_rmw(qdma, REG_FWD_DSCP_LOW_THR, FWD_DSCP_LOW_THR_MASK, 1114 1121 FIELD_PREP(FWD_DSCP_LOW_THR_MASK, 128)); 1115 1122 airoha_qdma_rmw(qdma, REG_LMGR_INIT_CFG, 1116 1123 LMGR_INIT_START | LMGR_SRAM_MODE_MASK | 1117 1124 HW_FWD_DESC_NUM_MASK, 1118 - FIELD_PREP(HW_FWD_DESC_NUM_MASK, HW_DSCP_NUM) | 1125 + FIELD_PREP(HW_FWD_DESC_NUM_MASK, num_desc) | 1119 1126 LMGR_INIT_START | LMGR_SRAM_MODE_MASK); 1120 1127 1121 1128 return read_poll_timeout(airoha_qdma_rr, status,
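The airoha_eth hunk above derives the hardware descriptor count from the reserved-memory region size instead of hardcoding `HW_DSCP_NUM`, with QDMA1 using half the payload buffer size of QDMA0. A sketch of that sizing logic (the `AIROHA_MAX_PACKET_SIZE` and `HW_DSCP_NUM` values here are illustrative assumptions, not the driver's actual constants):

```c
#include <assert.h>
#include <stdint.h>

#define AIROHA_MAX_PACKET_SIZE	2048	/* assumed value, for illustration */
#define HW_DSCP_NUM		2048	/* assumed default descriptor count */

/* Mirror of the hunk's sizing: QDMA0 keeps the full payload buffer,
 * QDMA1 uses half of it, and when a reserved memory region exists the
 * descriptor count is its size divided by the buffer size. */
static uint32_t hfwd_num_desc(int id, uint64_t rmem_size)
{
	uint32_t buf_size = id ? AIROHA_MAX_PACKET_SIZE / 2
			       : AIROHA_MAX_PACKET_SIZE;

	if (rmem_size)			/* reserved region present */
		return (uint32_t)(rmem_size / buf_size);
	return HW_DSCP_NUM;		/* fall back to the default */
}
```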
+3 -1
drivers/net/ethernet/airoha/airoha_ppe.c
··· 809 809 int idle; 810 810 811 811 hwe = airoha_ppe_foe_get_entry(ppe, iter->hash); 812 - ib1 = READ_ONCE(hwe->ib1); 812 + if (!hwe) 813 + continue; 813 814 815 + ib1 = READ_ONCE(hwe->ib1); 814 816 state = FIELD_GET(AIROHA_FOE_IB1_BIND_STATE, ib1); 815 817 if (state != AIROHA_FOE_STATE_BIND) { 816 818 iter->hash = 0xffff;
+78 -14
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 2989 2989 { 2990 2990 struct bnxt_napi *bnapi = cpr->bnapi; 2991 2991 u32 raw_cons = cpr->cp_raw_cons; 2992 + bool flush_xdp = false; 2992 2993 u32 cons; 2993 2994 int rx_pkts = 0; 2994 2995 u8 event = 0; ··· 3043 3042 else 3044 3043 rc = bnxt_force_rx_discard(bp, cpr, &raw_cons, 3045 3044 &event); 3045 + if (event & BNXT_REDIRECT_EVENT) 3046 + flush_xdp = true; 3046 3047 if (likely(rc >= 0)) 3047 3048 rx_pkts += rc; 3048 3049 /* Increment rx_pkts when rc is -ENOMEM to count towards ··· 3069 3066 } 3070 3067 } 3071 3068 3072 - if (event & BNXT_REDIRECT_EVENT) { 3069 + if (flush_xdp) { 3073 3070 xdp_do_flush(); 3074 3071 event &= ~BNXT_REDIRECT_EVENT; 3075 3072 } ··· 10783 10780 bp->num_rss_ctx--; 10784 10781 } 10785 10782 10783 + static bool bnxt_vnic_has_rx_ring(struct bnxt *bp, struct bnxt_vnic_info *vnic, 10784 + int rxr_id) 10785 + { 10786 + u16 tbl_size = bnxt_get_rxfh_indir_size(bp->dev); 10787 + int i, vnic_rx; 10788 + 10789 + /* Ntuple VNIC always has all the rx rings. Any change of ring id 10790 + * must be updated because a future filter may use it. 
10791 + */ 10792 + if (vnic->flags & BNXT_VNIC_NTUPLE_FLAG) 10793 + return true; 10794 + 10795 + for (i = 0; i < tbl_size; i++) { 10796 + if (vnic->flags & BNXT_VNIC_RSSCTX_FLAG) 10797 + vnic_rx = ethtool_rxfh_context_indir(vnic->rss_ctx)[i]; 10798 + else 10799 + vnic_rx = bp->rss_indir_tbl[i]; 10800 + 10801 + if (rxr_id == vnic_rx) 10802 + return true; 10803 + } 10804 + 10805 + return false; 10806 + } 10807 + 10808 + static int bnxt_set_vnic_mru_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic, 10809 + u16 mru, int rxr_id) 10810 + { 10811 + int rc; 10812 + 10813 + if (!bnxt_vnic_has_rx_ring(bp, vnic, rxr_id)) 10814 + return 0; 10815 + 10816 + if (mru) { 10817 + rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true); 10818 + if (rc) { 10819 + netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n", 10820 + vnic->vnic_id, rc); 10821 + return rc; 10822 + } 10823 + } 10824 + vnic->mru = mru; 10825 + bnxt_hwrm_vnic_update(bp, vnic, 10826 + VNIC_UPDATE_REQ_ENABLES_MRU_VALID); 10827 + 10828 + return 0; 10829 + } 10830 + 10831 + static int bnxt_set_rss_ctx_vnic_mru(struct bnxt *bp, u16 mru, int rxr_id) 10832 + { 10833 + struct ethtool_rxfh_context *ctx; 10834 + unsigned long context; 10835 + int rc; 10836 + 10837 + xa_for_each(&bp->dev->ethtool->rss_ctx, context, ctx) { 10838 + struct bnxt_rss_ctx *rss_ctx = ethtool_rxfh_context_priv(ctx); 10839 + struct bnxt_vnic_info *vnic = &rss_ctx->vnic; 10840 + 10841 + rc = bnxt_set_vnic_mru_p5(bp, vnic, mru, rxr_id); 10842 + if (rc) 10843 + return rc; 10844 + } 10845 + 10846 + return 0; 10847 + } 10848 + 10786 10849 static void bnxt_hwrm_realloc_rss_ctx_vnic(struct bnxt *bp) 10787 10850 { 10788 10851 bool set_tpa = !!(bp->flags & BNXT_FLAG_TPA); ··· 15996 15927 struct bnxt_vnic_info *vnic; 15997 15928 struct bnxt_napi *bnapi; 15998 15929 int i, rc; 15930 + u16 mru; 15999 15931 16000 15932 rxr = &bp->rx_ring[idx]; 16001 15933 clone = qmem; ··· 16047 15977 napi_enable_locked(&bnapi->napi); 16048 15978 bnxt_db_nq_arm(bp, &cpr->cp_db, 
cpr->cp_raw_cons); 16049 15979 15980 + mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN; 16050 15981 for (i = 0; i < bp->nr_vnics; i++) { 16051 15982 vnic = &bp->vnic_info[i]; 16052 15983 16053 - rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true); 16054 - if (rc) { 16055 - netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n", 16056 - vnic->vnic_id, rc); 15984 + rc = bnxt_set_vnic_mru_p5(bp, vnic, mru, idx); 15985 + if (rc) 16057 15986 return rc; 16058 - } 16059 - vnic->mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN; 16060 - bnxt_hwrm_vnic_update(bp, vnic, 16061 - VNIC_UPDATE_REQ_ENABLES_MRU_VALID); 16062 15987 } 16063 - 16064 - return 0; 15988 + return bnxt_set_rss_ctx_vnic_mru(bp, mru, idx); 16065 15989 16066 15990 err_reset: 16067 15991 netdev_err(bp->dev, "Unexpected HWRM error during queue start rc: %d\n", ··· 16077 16013 16078 16014 for (i = 0; i < bp->nr_vnics; i++) { 16079 16015 vnic = &bp->vnic_info[i]; 16080 - vnic->mru = 0; 16081 - bnxt_hwrm_vnic_update(bp, vnic, 16082 - VNIC_UPDATE_REQ_ENABLES_MRU_VALID); 16016 + 16017 + bnxt_set_vnic_mru_p5(bp, vnic, 0, idx); 16083 16018 } 16019 + bnxt_set_rss_ctx_vnic_mru(bp, 0, idx); 16084 16020 /* Make sure NAPI sees that the VNIC is disabled */ 16085 16021 synchronize_net(); 16086 16022 rxr = &bp->rx_ring[idx];
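The new `bnxt_vnic_has_rx_ring()` above avoids touching VNICs whose RSS indirection table never references the ring being restarted. A simplified standalone model of that membership check (structure and names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A VNIC "has" an rx ring if any slot of its RSS indirection table
 * points at that ring; ntuple VNICs are assumed to span every ring,
 * since a future filter may steer to any of them. */
static bool vnic_has_rx_ring(const uint16_t *indir_tbl, int tbl_size,
			     bool is_ntuple, int rxr_id)
{
	int i;

	if (is_ntuple)
		return true;

	for (i = 0; i < tbl_size; i++)
		if (indir_tbl[i] == rxr_id)
			return true;

	return false;
}
```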
+10 -14
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
··· 231 231 return; 232 232 233 233 mutex_lock(&edev->en_dev_lock); 234 - if (!bnxt_ulp_registered(edev)) { 235 - mutex_unlock(&edev->en_dev_lock); 236 - return; 237 - } 234 + if (!bnxt_ulp_registered(edev) || 235 + (edev->flags & BNXT_EN_FLAG_ULP_STOPPED)) 236 + goto ulp_stop_exit; 238 237 239 238 edev->flags |= BNXT_EN_FLAG_ULP_STOPPED; 240 239 if (aux_priv) { ··· 249 250 adrv->suspend(adev, pm); 250 251 } 251 252 } 253 + ulp_stop_exit: 252 254 mutex_unlock(&edev->en_dev_lock); 253 255 } 254 256 ··· 258 258 struct bnxt_aux_priv *aux_priv = bp->aux_priv; 259 259 struct bnxt_en_dev *edev = bp->edev; 260 260 261 - if (!edev) 262 - return; 263 - 264 - edev->flags &= ~BNXT_EN_FLAG_ULP_STOPPED; 265 - 266 - if (err) 261 + if (!edev || err) 267 262 return; 268 263 269 264 mutex_lock(&edev->en_dev_lock); 270 - if (!bnxt_ulp_registered(edev)) { 271 - mutex_unlock(&edev->en_dev_lock); 272 - return; 273 - } 265 + if (!bnxt_ulp_registered(edev) || 266 + !(edev->flags & BNXT_EN_FLAG_ULP_STOPPED)) 267 + goto ulp_start_exit; 274 268 275 269 if (edev->ulp_tbl->msix_requested) 276 270 bnxt_fill_msix_vecs(bp, edev->msix_entries); ··· 281 287 adrv->resume(adev); 282 288 } 283 289 } 290 + ulp_start_exit: 291 + edev->flags &= ~BNXT_EN_FLAG_ULP_STOPPED; 284 292 mutex_unlock(&edev->en_dev_lock); 285 293 } 286 294
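The bnxt_ulp change above makes stop/start idempotent by guarding both paths on the `BNXT_EN_FLAG_ULP_STOPPED` flag, so a repeated stop (or a start without a matching stop) becomes a no-op instead of a double suspend/resume. A minimal model of that guard:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy device with a stopped flag and counters to observe how many
 * suspend/resume callbacks actually fire. */
struct ulp_dev {
	bool stopped;
	int suspends, resumes;
};

static void ulp_stop(struct ulp_dev *d)
{
	if (d->stopped)		/* already stopped: do nothing */
		return;
	d->stopped = true;
	d->suspends++;		/* would call adrv->suspend() here */
}

static void ulp_start(struct ulp_dev *d)
{
	if (!d->stopped)	/* never stopped: nothing to resume */
		return;
	d->resumes++;		/* would call adrv->resume() here */
	d->stopped = false;
}
```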
+1
drivers/net/ethernet/faraday/Kconfig
··· 31 31 depends on ARM || COMPILE_TEST 32 32 depends on !64BIT || BROKEN 33 33 select PHYLIB 34 + select FIXED_PHY 34 35 select MDIO_ASPEED if MACH_ASPEED_G6 35 36 select CRC32 36 37 help
+1 -1
drivers/net/ethernet/freescale/enetc/enetc_hw.h
··· 507 507 tmp = ioread32(reg + 4); 508 508 } while (high != tmp); 509 509 510 - return le64_to_cpu((__le64)high << 32 | low); 510 + return (u64)high << 32 | low; 511 511 } 512 512 #endif 513 513
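The enetc fix above drops a spurious `le64_to_cpu()`: the value composed from two 32-bit register reads is already in host order. The surrounding loop is the classic torn-read guard for a 64-bit counter exposed as two 32-bit registers; a sketch of that pattern (register layout assumed: low word first, high word at the next offset):

```c
#include <assert.h>
#include <stdint.h>

static uint32_t reg_read32(const volatile uint32_t *reg)
{
	return *reg;
}

/* Re-read the high word until it is stable, so a carry propagating
 * between the two reads cannot produce a torn value. The composed
 * result is host-endian; no byte swap is needed. */
static uint64_t reg_read64(const volatile uint32_t *reg)
{
	uint32_t low, high, tmp;

	do {
		high = reg_read32(reg + 1);
		low = reg_read32(reg);
		tmp = reg_read32(reg + 1);
	} while (high != tmp);

	return (uint64_t)high << 32 | low;
}
```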
+11 -3
drivers/net/ethernet/intel/e1000e/netdev.c
··· 3534 3534 case e1000_pch_cnp: 3535 3535 case e1000_pch_tgp: 3536 3536 case e1000_pch_adp: 3537 - case e1000_pch_mtp: 3538 - case e1000_pch_lnp: 3539 - case e1000_pch_ptp: 3540 3537 case e1000_pch_nvp: 3541 3538 if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) { 3542 3539 /* Stable 24MHz frequency */ ··· 3548 3551 shift = INCVALUE_SHIFT_38400KHZ; 3549 3552 adapter->cc.shift = shift; 3550 3553 } 3554 + break; 3555 + case e1000_pch_mtp: 3556 + case e1000_pch_lnp: 3557 + case e1000_pch_ptp: 3558 + /* System firmware can misreport this value, so set it to a 3559 + * stable 38400KHz frequency. 3560 + */ 3561 + incperiod = INCPERIOD_38400KHZ; 3562 + incvalue = INCVALUE_38400KHZ; 3563 + shift = INCVALUE_SHIFT_38400KHZ; 3564 + adapter->cc.shift = shift; 3551 3565 break; 3552 3566 case e1000_82574: 3553 3567 case e1000_82583:
+5 -3
drivers/net/ethernet/intel/e1000e/ptp.c
··· 295 295 case e1000_pch_cnp: 296 296 case e1000_pch_tgp: 297 297 case e1000_pch_adp: 298 - case e1000_pch_mtp: 299 - case e1000_pch_lnp: 300 - case e1000_pch_ptp: 301 298 case e1000_pch_nvp: 302 299 if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) 303 300 adapter->ptp_clock_info.max_adj = MAX_PPB_24MHZ; 304 301 else 305 302 adapter->ptp_clock_info.max_adj = MAX_PPB_38400KHZ; 303 + break; 304 + case e1000_pch_mtp: 305 + case e1000_pch_lnp: 306 + case e1000_pch_ptp: 307 + adapter->ptp_clock_info.max_adj = MAX_PPB_38400KHZ; 306 308 break; 307 309 case e1000_82574: 308 310 case e1000_82583:
+48
drivers/net/ethernet/intel/ice/ice_arfs.c
··· 378 378 } 379 379 380 380 /** 381 + * ice_arfs_cmp - Check if aRFS filter matches this flow. 382 + * @fltr_info: filter info of the saved ARFS entry. 383 + * @fk: flow dissector keys. 384 + * @n_proto: One of htons(ETH_P_IP) or htons(ETH_P_IPV6). 385 + * @ip_proto: One of IPPROTO_TCP or IPPROTO_UDP. 386 + * 387 + * Since this function assumes limited values for n_proto and ip_proto, it 388 + * is meant to be called only from ice_rx_flow_steer(). 389 + * 390 + * Return: 391 + * * true - fltr_info refers to the same flow as fk. 392 + * * false - fltr_info and fk refer to different flows. 393 + */ 394 + static bool 395 + ice_arfs_cmp(const struct ice_fdir_fltr *fltr_info, const struct flow_keys *fk, 396 + __be16 n_proto, u8 ip_proto) 397 + { 398 + /* Determine if the filter is for IPv4 or IPv6 based on flow_type, 399 + * which is one of ICE_FLTR_PTYPE_NONF_IPV{4,6}_{TCP,UDP}. 400 + */ 401 + bool is_v4 = fltr_info->flow_type == ICE_FLTR_PTYPE_NONF_IPV4_TCP || 402 + fltr_info->flow_type == ICE_FLTR_PTYPE_NONF_IPV4_UDP; 403 + 404 + /* Following checks are arranged in the quickest and most discriminative 405 + * fields first for early failure. 
406 + */ 407 + if (is_v4) 408 + return n_proto == htons(ETH_P_IP) && 409 + fltr_info->ip.v4.src_port == fk->ports.src && 410 + fltr_info->ip.v4.dst_port == fk->ports.dst && 411 + fltr_info->ip.v4.src_ip == fk->addrs.v4addrs.src && 412 + fltr_info->ip.v4.dst_ip == fk->addrs.v4addrs.dst && 413 + fltr_info->ip.v4.proto == ip_proto; 414 + 415 + return fltr_info->ip.v6.src_port == fk->ports.src && 416 + fltr_info->ip.v6.dst_port == fk->ports.dst && 417 + fltr_info->ip.v6.proto == ip_proto && 418 + !memcmp(&fltr_info->ip.v6.src_ip, &fk->addrs.v6addrs.src, 419 + sizeof(struct in6_addr)) && 420 + !memcmp(&fltr_info->ip.v6.dst_ip, &fk->addrs.v6addrs.dst, 421 + sizeof(struct in6_addr)); 422 + } 423 + 424 + /** 381 425 * ice_rx_flow_steer - steer the Rx flow to where application is being run 382 426 * @netdev: ptr to the netdev being adjusted 383 427 * @skb: buffer with required header information ··· 492 448 continue; 493 449 494 450 fltr_info = &arfs_entry->fltr_info; 451 + 452 + if (!ice_arfs_cmp(fltr_info, &fk, n_proto, ip_proto)) 453 + continue; 454 + 495 455 ret = fltr_info->fltr_id; 496 456 497 457 if (fltr_info->q_index == rxq_idx ||
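The new `ice_arfs_cmp()` above deliberately orders its checks so the cheapest, most discriminative fields fail first, leaving the `memcmp()` on IPv6 addresses for last. An illustrative standalone 5-tuple compare in the same spirit (struct layout is hypothetical, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct flow6 {
	uint16_t src_port, dst_port;
	uint8_t proto;
	uint8_t src_ip[16], dst_ip[16];
};

/* Ports and protocol are single-word compares and highly
 * discriminative, so they come before the 16-byte memcmps. */
static bool flow6_match(const struct flow6 *a, const struct flow6 *b)
{
	return a->src_port == b->src_port &&
	       a->dst_port == b->dst_port &&
	       a->proto == b->proto &&
	       !memcmp(a->src_ip, b->src_ip, sizeof(a->src_ip)) &&
	       !memcmp(a->dst_ip, b->dst_ip, sizeof(a->dst_ip));
}
```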
+5 -1
drivers/net/ethernet/intel/ice/ice_eswitch.c
··· 508 508 */ 509 509 int ice_eswitch_attach_vf(struct ice_pf *pf, struct ice_vf *vf) 510 510 { 511 - struct ice_repr *repr = ice_repr_create_vf(vf); 512 511 struct devlink *devlink = priv_to_devlink(pf); 512 + struct ice_repr *repr; 513 513 int err; 514 514 515 + if (!ice_is_eswitch_mode_switchdev(pf)) 516 + return 0; 517 + 518 + repr = ice_repr_create_vf(vf); 515 519 if (IS_ERR(repr)) 516 520 return PTR_ERR(repr); 517 521
+2 -2
drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
··· 1822 1822 req->chan_cnt = IEEE_8021QAZ_MAX_TCS; 1823 1823 req->bpid_per_chan = 1; 1824 1824 } else { 1825 - req->chan_cnt = 1; 1825 + req->chan_cnt = pfvf->hw.rx_chan_cnt; 1826 1826 req->bpid_per_chan = 0; 1827 1827 } 1828 1828 ··· 1847 1847 req->chan_cnt = IEEE_8021QAZ_MAX_TCS; 1848 1848 req->bpid_per_chan = 1; 1849 1849 } else { 1850 - req->chan_cnt = 1; 1850 + req->chan_cnt = pfvf->hw.rx_chan_cnt; 1851 1851 req->bpid_per_chan = 0; 1852 1852 } 1853 1853
+4 -2
drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
··· 447 447 priv->llu_plu_irq = platform_get_irq(pdev, MLXBF_GIGE_LLU_PLU_INTR_IDX); 448 448 449 449 phy_irq = acpi_dev_gpio_irq_get_by(ACPI_COMPANION(&pdev->dev), "phy", 0); 450 - if (phy_irq < 0) { 451 - dev_err(&pdev->dev, "Error getting PHY irq. Use polling instead"); 450 + if (phy_irq == -EPROBE_DEFER) { 451 + err = -EPROBE_DEFER; 452 + goto out; 453 + } else if (phy_irq < 0) { 452 454 phy_irq = PHY_POLL; 453 455 } 454 456
+1 -4
drivers/net/ethernet/meta/fbnic/fbnic_fw.c
··· 127 127 return -EBUSY; 128 128 129 129 addr = dma_map_single(fbd->dev, msg, PAGE_SIZE, direction); 130 - if (dma_mapping_error(fbd->dev, addr)) { 131 - free_page((unsigned long)msg); 132 - 130 + if (dma_mapping_error(fbd->dev, addr)) 133 131 return -ENOSPC; 134 - } 135 132 136 133 mbx->buf_info[tail].msg = msg; 137 134 mbx->buf_info[tail].addr = addr;
+2 -2
drivers/net/ethernet/microchip/lan743x_ptp.h
··· 18 18 */ 19 19 #define LAN743X_PTP_N_EVENT_CHAN 2 20 20 #define LAN743X_PTP_N_PEROUT LAN743X_PTP_N_EVENT_CHAN 21 - #define LAN743X_PTP_N_EXTTS 4 22 - #define LAN743X_PTP_N_PPS 0 23 21 #define PCI11X1X_PTP_IO_MAX_CHANNELS 8 22 + #define LAN743X_PTP_N_EXTTS PCI11X1X_PTP_IO_MAX_CHANNELS 23 + #define LAN743X_PTP_N_PPS 0 24 24 #define PTP_CMD_CTL_TIMEOUT_CNT 50 25 25 26 26 struct lan743x_adapter;
+3
drivers/net/ethernet/microsoft/mana/gdma_main.c
··· 31 31 gc->db_page_base = gc->bar0_va + 32 32 mana_gd_r64(gc, GDMA_PF_REG_DB_PAGE_OFF); 33 33 34 + gc->phys_db_page_base = gc->bar0_pa + 35 + mana_gd_r64(gc, GDMA_PF_REG_DB_PAGE_OFF); 36 + 34 37 sriov_base_off = mana_gd_r64(gc, GDMA_SRIOV_REG_CFG_BASE_OFF); 35 38 36 39 sriov_base_va = gc->bar0_va + sriov_base_off;
+2 -1
drivers/net/ethernet/pensando/ionic/ionic_main.c
··· 516 516 unsigned long start_time; 517 517 unsigned long max_wait; 518 518 unsigned long duration; 519 - int done = 0; 520 519 bool fw_up; 521 520 int opcode; 521 + bool done; 522 522 int err; 523 523 524 524 /* Wait for dev cmd to complete, retrying if we get EAGAIN, ··· 526 526 */ 527 527 max_wait = jiffies + (max_seconds * HZ); 528 528 try_again: 529 + done = false; 529 530 opcode = idev->opcode; 530 531 start_time = jiffies; 531 532 for (fw_up = ionic_is_fw_running(idev);
+6 -6
drivers/net/ethernet/pensando/ionic/ionic_txrx.c
··· 321 321 len, DMA_TO_DEVICE); 322 322 } else /* XDP_REDIRECT */ { 323 323 dma_addr = ionic_tx_map_single(q, frame->data, len); 324 - if (!dma_addr) 324 + if (dma_addr == DMA_MAPPING_ERROR) 325 325 return -EIO; 326 326 } 327 327 ··· 357 357 } else { 358 358 dma_addr = ionic_tx_map_frag(q, frag, 0, 359 359 skb_frag_size(frag)); 360 - if (dma_mapping_error(q->dev, dma_addr)) { 360 + if (dma_addr == DMA_MAPPING_ERROR) { 361 361 ionic_tx_desc_unmap_bufs(q, desc_info); 362 362 return -EIO; 363 363 } ··· 1083 1083 net_warn_ratelimited("%s: DMA single map failed on %s!\n", 1084 1084 dev_name(dev), q->name); 1085 1085 q_to_tx_stats(q)->dma_map_err++; 1086 - return 0; 1086 + return DMA_MAPPING_ERROR; 1087 1087 } 1088 1088 return dma_addr; 1089 1089 } ··· 1100 1100 net_warn_ratelimited("%s: DMA frag map failed on %s!\n", 1101 1101 dev_name(dev), q->name); 1102 1102 q_to_tx_stats(q)->dma_map_err++; 1103 - return 0; 1103 + return DMA_MAPPING_ERROR; 1104 1104 } 1105 1105 return dma_addr; 1106 1106 } ··· 1116 1116 int frag_idx; 1117 1117 1118 1118 dma_addr = ionic_tx_map_single(q, skb->data, skb_headlen(skb)); 1119 - if (!dma_addr) 1119 + if (dma_addr == DMA_MAPPING_ERROR) 1120 1120 return -EIO; 1121 1121 buf_info->dma_addr = dma_addr; 1122 1122 buf_info->len = skb_headlen(skb); ··· 1126 1126 nfrags = skb_shinfo(skb)->nr_frags; 1127 1127 for (frag_idx = 0; frag_idx < nfrags; frag_idx++, frag++) { 1128 1128 dma_addr = ionic_tx_map_frag(q, frag, 0, skb_frag_size(frag)); 1129 - if (!dma_addr) 1129 + if (dma_addr == DMA_MAPPING_ERROR) 1130 1130 goto dma_fail; 1131 1131 buf_info->dma_addr = dma_addr; 1132 1132 buf_info->len = skb_frag_size(frag);
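The ionic fix above stops returning 0 on DMA mapping failure, because 0 can be a perfectly valid bus address; the kernel's `DMA_MAPPING_ERROR` all-ones sentinel is unambiguous. A minimal sketch of why the sentinel matters:

```c
#include <assert.h>
#include <stdint.h>

#define DMA_MAPPING_ERROR	(~(uint64_t)0)

/* Toy mapper: on failure return the all-ones sentinel rather than 0,
 * so address 0 remains usable as a legitimate mapping. */
static uint64_t map_single(uint64_t addr, int fail)
{
	if (fail)
		return DMA_MAPPING_ERROR;	/* unambiguous error */
	return addr;				/* addr == 0 is valid */
}
```

Callers then test `== DMA_MAPPING_ERROR` instead of `!addr`, exactly as the hunk changes `ionic_tx_map_single()`'s callers.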
+4 -4
drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c
··· 242 242 } 243 243 244 244 /* Returns size of the data buffer or, -1 in case TLV data is not available. */ 245 - static int 245 + static noinline_for_stack int 246 246 qed_mfw_get_gen_tlv_value(struct qed_drv_tlv_hdr *p_tlv, 247 247 struct qed_mfw_tlv_generic *p_drv_buf, 248 248 struct qed_tlv_parsed_buf *p_buf) ··· 304 304 return -1; 305 305 } 306 306 307 - static int 307 + static noinline_for_stack int 308 308 qed_mfw_get_eth_tlv_value(struct qed_drv_tlv_hdr *p_tlv, 309 309 struct qed_mfw_tlv_eth *p_drv_buf, 310 310 struct qed_tlv_parsed_buf *p_buf) ··· 438 438 return QED_MFW_TLV_TIME_SIZE; 439 439 } 440 440 441 - static int 441 + static noinline_for_stack int 442 442 qed_mfw_get_fcoe_tlv_value(struct qed_drv_tlv_hdr *p_tlv, 443 443 struct qed_mfw_tlv_fcoe *p_drv_buf, 444 444 struct qed_tlv_parsed_buf *p_buf) ··· 1073 1073 return -1; 1074 1074 } 1075 1075 1076 - static int 1076 + static noinline_for_stack int 1077 1077 qed_mfw_get_iscsi_tlv_value(struct qed_drv_tlv_hdr *p_tlv, 1078 1078 struct qed_mfw_tlv_iscsi *p_drv_buf, 1079 1079 struct qed_tlv_parsed_buf *p_buf)
+2 -17
drivers/net/ethernet/ti/icssg/icssg_common.c
··· 98 98 { 99 99 struct cppi5_host_desc_t *first_desc, *next_desc; 100 100 dma_addr_t buf_dma, next_desc_dma; 101 - struct prueth_swdata *swdata; 102 - struct page *page; 103 101 u32 buf_dma_len; 104 102 105 103 first_desc = desc; 106 104 next_desc = first_desc; 107 - 108 - swdata = cppi5_hdesc_get_swdata(desc); 109 - if (swdata->type == PRUETH_SWDATA_PAGE) { 110 - page = swdata->data.page; 111 - page_pool_recycle_direct(page->pp, swdata->data.page); 112 - goto free_desc; 113 - } 114 105 115 106 cppi5_hdesc_get_obuf(first_desc, &buf_dma, &buf_dma_len); 116 107 k3_udma_glue_tx_cppi5_to_dma_addr(tx_chn->tx_chn, &buf_dma); ··· 126 135 k3_cppi_desc_pool_free(tx_chn->desc_pool, next_desc); 127 136 } 128 137 129 - free_desc: 130 138 k3_cppi_desc_pool_free(tx_chn->desc_pool, first_desc); 131 139 } 132 140 EXPORT_SYMBOL_GPL(prueth_xmit_free); ··· 602 612 k3_udma_glue_tx_dma_to_cppi5_addr(tx_chn->tx_chn, &buf_dma); 603 613 cppi5_hdesc_attach_buf(first_desc, buf_dma, xdpf->len, buf_dma, xdpf->len); 604 614 swdata = cppi5_hdesc_get_swdata(first_desc); 605 - if (page) { 606 - swdata->type = PRUETH_SWDATA_PAGE; 607 - swdata->data.page = page; 608 - } else { 609 - swdata->type = PRUETH_SWDATA_XDPF; 610 - swdata->data.xdpf = xdpf; 611 - } 615 + swdata->type = PRUETH_SWDATA_XDPF; 616 + swdata->data.xdpf = xdpf; 612 617 613 618 /* Report BQL before sending the packet */ 614 619 netif_txq = netdev_get_tx_queue(ndev, tx_chn->id);
+1 -1
drivers/net/ethernet/wangxun/libwx/wx_lib.c
··· 2623 2623 struct page_pool_params pp_params = { 2624 2624 .flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV, 2625 2625 .order = 0, 2626 - .pool_size = rx_ring->size, 2626 + .pool_size = rx_ring->count, 2627 2627 .nid = dev_to_node(rx_ring->dev), 2628 2628 .dev = rx_ring->dev, 2629 2629 .dma_dir = DMA_FROM_DEVICE,
+1
drivers/net/usb/qmi_wwan.c
··· 1426 1426 {QMI_QUIRK_SET_DTR(0x22de, 0x9051, 2)}, /* Hucom Wireless HM-211S/K */ 1427 1427 {QMI_FIXED_INTF(0x22de, 0x9061, 3)}, /* WeTelecom WPD-600N */ 1428 1428 {QMI_QUIRK_SET_DTR(0x1e0e, 0x9001, 5)}, /* SIMCom 7100E, 7230E, 7600E ++ */ 1429 + {QMI_QUIRK_SET_DTR(0x1e0e, 0x9071, 3)}, /* SIMCom 8230C ++ */ 1429 1430 {QMI_QUIRK_SET_DTR(0x2c7c, 0x0121, 4)}, /* Quectel EC21 Mini PCIe */ 1430 1431 {QMI_QUIRK_SET_DTR(0x2c7c, 0x0191, 4)}, /* Quectel EG91 */ 1431 1432 {QMI_QUIRK_SET_DTR(0x2c7c, 0x0195, 4)}, /* Quectel EG95 */
+3 -1
drivers/net/wireless/ath/ath12k/core.c
··· 1216 1216 INIT_LIST_HEAD(&ar->fw_stats.pdevs); 1217 1217 INIT_LIST_HEAD(&ar->fw_stats.bcn); 1218 1218 init_completion(&ar->fw_stats_complete); 1219 + init_completion(&ar->fw_stats_done); 1219 1220 } 1220 1221 1221 1222 void ath12k_fw_stats_free(struct ath12k_fw_stats *stats) ··· 1229 1228 void ath12k_fw_stats_reset(struct ath12k *ar) 1230 1229 { 1231 1230 spin_lock_bh(&ar->data_lock); 1232 - ar->fw_stats.fw_stats_done = false; 1233 1231 ath12k_fw_stats_free(&ar->fw_stats); 1232 + ar->fw_stats.num_vdev_recvd = 0; 1233 + ar->fw_stats.num_bcn_recvd = 0; 1234 1234 spin_unlock_bh(&ar->data_lock); 1235 1235 } 1236 1236
+9 -1
drivers/net/wireless/ath/ath12k/core.h
··· 601 601 #define ATH12K_NUM_CHANS 101 602 602 #define ATH12K_MAX_5GHZ_CHAN 173 603 603 604 + static inline bool ath12k_is_2ghz_channel_freq(u32 freq) 605 + { 606 + return freq >= ATH12K_MIN_2GHZ_FREQ && 607 + freq <= ATH12K_MAX_2GHZ_FREQ; 608 + } 609 + 604 610 enum ath12k_hw_state { 605 611 ATH12K_HW_STATE_OFF, 606 612 ATH12K_HW_STATE_ON, ··· 632 626 struct list_head pdevs; 633 627 struct list_head vdevs; 634 628 struct list_head bcn; 635 - bool fw_stats_done; 629 + u32 num_vdev_recvd; 630 + u32 num_bcn_recvd; 636 631 }; 637 632 638 633 struct ath12k_dbg_htt_stats { ··· 813 806 bool regdom_set_by_user; 814 807 815 808 struct completion fw_stats_complete; 809 + struct completion fw_stats_done; 816 810 817 811 struct completion mlo_setup_done; 818 812 u32 mlo_setup_status;
-58
drivers/net/wireless/ath/ath12k/debugfs.c
··· 1251 1251 */ 1252 1252 } 1253 1253 1254 - void 1255 - ath12k_debugfs_fw_stats_process(struct ath12k *ar, 1256 - struct ath12k_fw_stats *stats) 1257 - { 1258 - struct ath12k_base *ab = ar->ab; 1259 - struct ath12k_pdev *pdev; 1260 - bool is_end; 1261 - static unsigned int num_vdev, num_bcn; 1262 - size_t total_vdevs_started = 0; 1263 - int i; 1264 - 1265 - if (stats->stats_id == WMI_REQUEST_VDEV_STAT) { 1266 - if (list_empty(&stats->vdevs)) { 1267 - ath12k_warn(ab, "empty vdev stats"); 1268 - return; 1269 - } 1270 - /* FW sends all the active VDEV stats irrespective of PDEV, 1271 - * hence limit until the count of all VDEVs started 1272 - */ 1273 - rcu_read_lock(); 1274 - for (i = 0; i < ab->num_radios; i++) { 1275 - pdev = rcu_dereference(ab->pdevs_active[i]); 1276 - if (pdev && pdev->ar) 1277 - total_vdevs_started += pdev->ar->num_started_vdevs; 1278 - } 1279 - rcu_read_unlock(); 1280 - 1281 - is_end = ((++num_vdev) == total_vdevs_started); 1282 - 1283 - list_splice_tail_init(&stats->vdevs, 1284 - &ar->fw_stats.vdevs); 1285 - 1286 - if (is_end) { 1287 - ar->fw_stats.fw_stats_done = true; 1288 - num_vdev = 0; 1289 - } 1290 - return; 1291 - } 1292 - if (stats->stats_id == WMI_REQUEST_BCN_STAT) { 1293 - if (list_empty(&stats->bcn)) { 1294 - ath12k_warn(ab, "empty beacon stats"); 1295 - return; 1296 - } 1297 - /* Mark end until we reached the count of all started VDEVs 1298 - * within the PDEV 1299 - */ 1300 - is_end = ((++num_bcn) == ar->num_started_vdevs); 1301 - 1302 - list_splice_tail_init(&stats->bcn, 1303 - &ar->fw_stats.bcn); 1304 - 1305 - if (is_end) { 1306 - ar->fw_stats.fw_stats_done = true; 1307 - num_bcn = 0; 1308 - } 1309 - } 1310 - } 1311 - 1312 1254 static int ath12k_open_vdev_stats(struct inode *inode, struct file *file) 1313 1255 { 1314 1256 struct ath12k *ar = inode->i_private;
-7
drivers/net/wireless/ath/ath12k/debugfs.h
··· 12 12 void ath12k_debugfs_soc_destroy(struct ath12k_base *ab); 13 13 void ath12k_debugfs_register(struct ath12k *ar); 14 14 void ath12k_debugfs_unregister(struct ath12k *ar); 15 - void ath12k_debugfs_fw_stats_process(struct ath12k *ar, 16 - struct ath12k_fw_stats *stats); 17 15 void ath12k_debugfs_op_vif_add(struct ieee80211_hw *hw, 18 16 struct ieee80211_vif *vif); 19 17 void ath12k_debugfs_pdev_create(struct ath12k_base *ab); ··· 121 123 } 122 124 123 125 static inline void ath12k_debugfs_unregister(struct ath12k *ar) 124 - { 125 - } 126 - 127 - static inline void ath12k_debugfs_fw_stats_process(struct ath12k *ar, 128 - struct ath12k_fw_stats *stats) 129 126 { 130 127 } 131 128
+370 -24
drivers/net/wireless/ath/ath12k/mac.c
··· 4360 4360 { 4361 4361 struct ath12k_base *ab = ar->ab; 4362 4362 struct ath12k_hw *ah = ath12k_ar_to_ah(ar); 4363 - unsigned long timeout, time_left; 4363 + unsigned long time_left; 4364 4364 int ret; 4365 4365 4366 4366 guard(mutex)(&ah->hw_mutex); ··· 4368 4368 if (ah->state != ATH12K_HW_STATE_ON) 4369 4369 return -ENETDOWN; 4370 4370 4371 - /* FW stats can get split when exceeding the stats data buffer limit. 4372 - * In that case, since there is no end marking for the back-to-back 4373 - * received 'update stats' event, we keep a 3 seconds timeout in case, 4374 - * fw_stats_done is not marked yet 4375 - */ 4376 - timeout = jiffies + msecs_to_jiffies(3 * 1000); 4377 4371 ath12k_fw_stats_reset(ar); 4378 4372 4379 4373 reinit_completion(&ar->fw_stats_complete); 4374 + reinit_completion(&ar->fw_stats_done); 4380 4375 4381 4376 ret = ath12k_wmi_send_stats_request_cmd(ar, param->stats_id, 4382 4377 param->vdev_id, param->pdev_id); 4383 - 4384 4378 if (ret) { 4385 4379 ath12k_warn(ab, "failed to request fw stats: %d\n", ret); 4386 4380 return ret; ··· 4385 4391 param->pdev_id, param->vdev_id, param->stats_id); 4386 4392 4387 4393 time_left = wait_for_completion_timeout(&ar->fw_stats_complete, 1 * HZ); 4388 - 4389 4394 if (!time_left) { 4390 4395 ath12k_warn(ab, "time out while waiting for get fw stats\n"); 4391 4396 return -ETIMEDOUT; ··· 4393 4400 /* Firmware sends WMI_UPDATE_STATS_EVENTID back-to-back 4394 4401 * when stats data buffer limit is reached. fw_stats_complete 4395 4402 * is completed once host receives first event from firmware, but 4396 - * still end might not be marked in the TLV. 4397 - * Below loop is to confirm that firmware completed sending all the event 4398 - * and fw_stats_done is marked true when end is marked in the TLV. 4403 + * still there could be more events following. Below is to wait 4404 + * until firmware completes sending all the events. 
4399 4405 */ 4400 - for (;;) { 4401 - if (time_after(jiffies, timeout)) 4402 - break; 4403 - spin_lock_bh(&ar->data_lock); 4404 - if (ar->fw_stats.fw_stats_done) { 4405 - spin_unlock_bh(&ar->data_lock); 4406 - break; 4407 - } 4408 - spin_unlock_bh(&ar->data_lock); 4406 + time_left = wait_for_completion_timeout(&ar->fw_stats_done, 3 * HZ); 4407 + if (!time_left) { 4408 + ath12k_warn(ab, "time out while waiting for fw stats done\n"); 4409 + return -ETIMEDOUT; 4409 4410 } 4411 + 4410 4412 return 0; 4411 4413 } 4412 4414 ··· 5878 5890 return ret; 5879 5891 } 5880 5892 5893 + static bool ath12k_mac_is_freq_on_mac(struct ath12k_hw_mode_freq_range_arg *freq_range, 5894 + u32 freq, u8 mac_id) 5895 + { 5896 + return (freq >= freq_range[mac_id].low_2ghz_freq && 5897 + freq <= freq_range[mac_id].high_2ghz_freq) || 5898 + (freq >= freq_range[mac_id].low_5ghz_freq && 5899 + freq <= freq_range[mac_id].high_5ghz_freq); 5900 + } 5901 + 5902 + static bool 5903 + ath12k_mac_2_freq_same_mac_in_freq_range(struct ath12k_base *ab, 5904 + struct ath12k_hw_mode_freq_range_arg *freq_range, 5905 + u32 freq_link1, u32 freq_link2) 5906 + { 5907 + u8 i; 5908 + 5909 + for (i = 0; i < MAX_RADIOS; i++) { 5910 + if (ath12k_mac_is_freq_on_mac(freq_range, freq_link1, i) && 5911 + ath12k_mac_is_freq_on_mac(freq_range, freq_link2, i)) 5912 + return true; 5913 + } 5914 + 5915 + return false; 5916 + } 5917 + 5918 + static bool ath12k_mac_is_hw_dbs_capable(struct ath12k_base *ab) 5919 + { 5920 + return test_bit(WMI_TLV_SERVICE_DUAL_BAND_SIMULTANEOUS_SUPPORT, 5921 + ab->wmi_ab.svc_map) && 5922 + ab->wmi_ab.hw_mode_info.support_dbs; 5923 + } 5924 + 5925 + static bool ath12k_mac_2_freq_same_mac_in_dbs(struct ath12k_base *ab, 5926 + u32 freq_link1, u32 freq_link2) 5927 + { 5928 + struct ath12k_hw_mode_freq_range_arg *freq_range; 5929 + 5930 + if (!ath12k_mac_is_hw_dbs_capable(ab)) 5931 + return true; 5932 + 5933 + freq_range = ab->wmi_ab.hw_mode_info.freq_range_caps[ATH12K_HW_MODE_DBS]; 5934 + return 
ath12k_mac_2_freq_same_mac_in_freq_range(ab, freq_range, 5935 + freq_link1, freq_link2); 5936 + } 5937 + 5938 + static bool ath12k_mac_is_hw_sbs_capable(struct ath12k_base *ab) 5939 + { 5940 + return test_bit(WMI_TLV_SERVICE_DUAL_BAND_SIMULTANEOUS_SUPPORT, 5941 + ab->wmi_ab.svc_map) && 5942 + ab->wmi_ab.hw_mode_info.support_sbs; 5943 + } 5944 + 5945 + static bool ath12k_mac_2_freq_same_mac_in_sbs(struct ath12k_base *ab, 5946 + u32 freq_link1, u32 freq_link2) 5947 + { 5948 + struct ath12k_hw_mode_info *info = &ab->wmi_ab.hw_mode_info; 5949 + struct ath12k_hw_mode_freq_range_arg *sbs_uppr_share; 5950 + struct ath12k_hw_mode_freq_range_arg *sbs_low_share; 5951 + struct ath12k_hw_mode_freq_range_arg *sbs_range; 5952 + 5953 + if (!ath12k_mac_is_hw_sbs_capable(ab)) 5954 + return true; 5955 + 5956 + if (ab->wmi_ab.sbs_lower_band_end_freq) { 5957 + sbs_uppr_share = info->freq_range_caps[ATH12K_HW_MODE_SBS_UPPER_SHARE]; 5958 + sbs_low_share = info->freq_range_caps[ATH12K_HW_MODE_SBS_LOWER_SHARE]; 5959 + 5960 + return ath12k_mac_2_freq_same_mac_in_freq_range(ab, sbs_low_share, 5961 + freq_link1, freq_link2) || 5962 + ath12k_mac_2_freq_same_mac_in_freq_range(ab, sbs_uppr_share, 5963 + freq_link1, freq_link2); 5964 + } 5965 + 5966 + sbs_range = info->freq_range_caps[ATH12K_HW_MODE_SBS]; 5967 + return ath12k_mac_2_freq_same_mac_in_freq_range(ab, sbs_range, 5968 + freq_link1, freq_link2); 5969 + } 5970 + 5971 + static bool ath12k_mac_freqs_on_same_mac(struct ath12k_base *ab, 5972 + u32 freq_link1, u32 freq_link2) 5973 + { 5974 + return ath12k_mac_2_freq_same_mac_in_dbs(ab, freq_link1, freq_link2) && 5975 + ath12k_mac_2_freq_same_mac_in_sbs(ab, freq_link1, freq_link2); 5976 + } 5977 + 5978 + static int ath12k_mac_mlo_sta_set_link_active(struct ath12k_base *ab, 5979 + enum wmi_mlo_link_force_reason reason, 5980 + enum wmi_mlo_link_force_mode mode, 5981 + u8 *mlo_vdev_id_lst, 5982 + u8 num_mlo_vdev, 5983 + u8 *mlo_inactive_vdev_lst, 5984 + u8 num_mlo_inactive_vdev) 5985 + { 5986 + 
struct wmi_mlo_link_set_active_arg param = {0}; 5987 + u32 entry_idx, entry_offset, vdev_idx; 5988 + u8 vdev_id; 5989 + 5990 + param.reason = reason; 5991 + param.force_mode = mode; 5992 + 5993 + for (vdev_idx = 0; vdev_idx < num_mlo_vdev; vdev_idx++) { 5994 + vdev_id = mlo_vdev_id_lst[vdev_idx]; 5995 + entry_idx = vdev_id / 32; 5996 + entry_offset = vdev_id % 32; 5997 + if (entry_idx >= WMI_MLO_LINK_NUM_SZ) { 5998 + ath12k_warn(ab, "Invalid entry_idx %d num_mlo_vdev %d vdev %d", 5999 + entry_idx, num_mlo_vdev, vdev_id); 6000 + return -EINVAL; 6001 + } 6002 + param.vdev_bitmap[entry_idx] |= 1 << entry_offset; 6003 + /* update entry number if entry index changed */ 6004 + if (param.num_vdev_bitmap < entry_idx + 1) 6005 + param.num_vdev_bitmap = entry_idx + 1; 6006 + } 6007 + 6008 + ath12k_dbg(ab, ATH12K_DBG_MAC, 6009 + "num_vdev_bitmap %d vdev_bitmap[0] = 0x%x, vdev_bitmap[1] = 0x%x", 6010 + param.num_vdev_bitmap, param.vdev_bitmap[0], param.vdev_bitmap[1]); 6011 + 6012 + if (mode == WMI_MLO_LINK_FORCE_MODE_ACTIVE_INACTIVE) { 6013 + for (vdev_idx = 0; vdev_idx < num_mlo_inactive_vdev; vdev_idx++) { 6014 + vdev_id = mlo_inactive_vdev_lst[vdev_idx]; 6015 + entry_idx = vdev_id / 32; 6016 + entry_offset = vdev_id % 32; 6017 + if (entry_idx >= WMI_MLO_LINK_NUM_SZ) { 6018 + ath12k_warn(ab, "Invalid entry_idx %d num_mlo_vdev %d vdev %d", 6019 + entry_idx, num_mlo_inactive_vdev, vdev_id); 6020 + return -EINVAL; 6021 + } 6022 + param.inactive_vdev_bitmap[entry_idx] |= 1 << entry_offset; 6023 + /* update entry number if entry index changed */ 6024 + if (param.num_inactive_vdev_bitmap < entry_idx + 1) 6025 + param.num_inactive_vdev_bitmap = entry_idx + 1; 6026 + } 6027 + 6028 + ath12k_dbg(ab, ATH12K_DBG_MAC, 6029 + "num_vdev_bitmap %d inactive_vdev_bitmap[0] = 0x%x, inactive_vdev_bitmap[1] = 0x%x", 6030 + param.num_inactive_vdev_bitmap, 6031 + param.inactive_vdev_bitmap[0], 6032 + param.inactive_vdev_bitmap[1]); 6033 + } 6034 + 6035 + if (mode == 
WMI_MLO_LINK_FORCE_MODE_ACTIVE_LINK_NUM || 6036 + mode == WMI_MLO_LINK_FORCE_MODE_INACTIVE_LINK_NUM) { 6037 + param.num_link_entry = 1; 6038 + param.link_num[0].num_of_link = num_mlo_vdev - 1; 6039 + } 6040 + 6041 + return ath12k_wmi_send_mlo_link_set_active_cmd(ab, &param); 6042 + } 6043 + 6044 + static int ath12k_mac_mlo_sta_update_link_active(struct ath12k_base *ab, 6045 + struct ieee80211_hw *hw, 6046 + struct ath12k_vif *ahvif) 6047 + { 6048 + u8 mlo_vdev_id_lst[IEEE80211_MLD_MAX_NUM_LINKS] = {0}; 6049 + u32 mlo_freq_list[IEEE80211_MLD_MAX_NUM_LINKS] = {0}; 6050 + unsigned long links = ahvif->links_map; 6051 + enum wmi_mlo_link_force_reason reason; 6052 + struct ieee80211_chanctx_conf *conf; 6053 + enum wmi_mlo_link_force_mode mode; 6054 + struct ieee80211_bss_conf *info; 6055 + struct ath12k_link_vif *arvif; 6056 + u8 num_mlo_vdev = 0; 6057 + u8 link_id; 6058 + 6059 + for_each_set_bit(link_id, &links, IEEE80211_MLD_MAX_NUM_LINKS) { 6060 + arvif = wiphy_dereference(hw->wiphy, ahvif->link[link_id]); 6061 + /* make sure vdev is created on this device */ 6062 + if (!arvif || !arvif->is_created || arvif->ar->ab != ab) 6063 + continue; 6064 + 6065 + info = ath12k_mac_get_link_bss_conf(arvif); 6066 + conf = wiphy_dereference(hw->wiphy, info->chanctx_conf); 6067 + mlo_freq_list[num_mlo_vdev] = conf->def.chan->center_freq; 6068 + 6069 + mlo_vdev_id_lst[num_mlo_vdev] = arvif->vdev_id; 6070 + num_mlo_vdev++; 6071 + } 6072 + 6073 + /* It is not allowed to activate more links than a single device 6074 + * supported. Something goes wrong if we reach here. 6075 + */ 6076 + if (num_mlo_vdev > ATH12K_NUM_MAX_ACTIVE_LINKS_PER_DEVICE) { 6077 + WARN_ON_ONCE(1); 6078 + return -EINVAL; 6079 + } 6080 + 6081 + /* if 2 links are established and both link channels fall on the 6082 + * same hardware MAC, send command to firmware to deactivate one 6083 + * of them. 
6084 + */ 6085 + if (num_mlo_vdev == 2 && 6086 + ath12k_mac_freqs_on_same_mac(ab, mlo_freq_list[0], 6087 + mlo_freq_list[1])) { 6088 + mode = WMI_MLO_LINK_FORCE_MODE_INACTIVE_LINK_NUM; 6089 + reason = WMI_MLO_LINK_FORCE_REASON_NEW_CONNECT; 6090 + return ath12k_mac_mlo_sta_set_link_active(ab, reason, mode, 6091 + mlo_vdev_id_lst, num_mlo_vdev, 6092 + NULL, 0); 6093 + } 6094 + 6095 + return 0; 6096 + } 6097 + 6098 + static bool ath12k_mac_are_sbs_chan(struct ath12k_base *ab, u32 freq_1, u32 freq_2) 6099 + { 6100 + if (!ath12k_mac_is_hw_sbs_capable(ab)) 6101 + return false; 6102 + 6103 + if (ath12k_is_2ghz_channel_freq(freq_1) || 6104 + ath12k_is_2ghz_channel_freq(freq_2)) 6105 + return false; 6106 + 6107 + return !ath12k_mac_2_freq_same_mac_in_sbs(ab, freq_1, freq_2); 6108 + } 6109 + 6110 + static bool ath12k_mac_are_dbs_chan(struct ath12k_base *ab, u32 freq_1, u32 freq_2) 6111 + { 6112 + if (!ath12k_mac_is_hw_dbs_capable(ab)) 6113 + return false; 6114 + 6115 + return !ath12k_mac_2_freq_same_mac_in_dbs(ab, freq_1, freq_2); 6116 + } 6117 + 6118 + static int ath12k_mac_select_links(struct ath12k_base *ab, 6119 + struct ieee80211_vif *vif, 6120 + struct ieee80211_hw *hw, 6121 + u16 *selected_links) 6122 + { 6123 + unsigned long useful_links = ieee80211_vif_usable_links(vif); 6124 + struct ath12k_vif *ahvif = ath12k_vif_to_ahvif(vif); 6125 + u8 num_useful_links = hweight_long(useful_links); 6126 + struct ieee80211_chanctx_conf *chanctx; 6127 + struct ath12k_link_vif *assoc_arvif; 6128 + u32 assoc_link_freq, partner_freq; 6129 + u16 sbs_links = 0, dbs_links = 0; 6130 + struct ieee80211_bss_conf *info; 6131 + struct ieee80211_channel *chan; 6132 + struct ieee80211_sta *sta; 6133 + struct ath12k_sta *ahsta; 6134 + u8 link_id; 6135 + 6136 + /* activate all useful links if less than max supported */ 6137 + if (num_useful_links <= ATH12K_NUM_MAX_ACTIVE_LINKS_PER_DEVICE) { 6138 + *selected_links = useful_links; 6139 + return 0; 6140 + } 6141 + 6142 + /* only in station mode we 
can get here, so it's safe 6143 + * to use ap_addr 6144 + */ 6145 + rcu_read_lock(); 6146 + sta = ieee80211_find_sta(vif, vif->cfg.ap_addr); 6147 + if (!sta) { 6148 + rcu_read_unlock(); 6149 + ath12k_warn(ab, "failed to find sta with addr %pM\n", vif->cfg.ap_addr); 6150 + return -EINVAL; 6151 + } 6152 + 6153 + ahsta = ath12k_sta_to_ahsta(sta); 6154 + assoc_arvif = wiphy_dereference(hw->wiphy, ahvif->link[ahsta->assoc_link_id]); 6155 + info = ath12k_mac_get_link_bss_conf(assoc_arvif); 6156 + chanctx = rcu_dereference(info->chanctx_conf); 6157 + assoc_link_freq = chanctx->def.chan->center_freq; 6158 + rcu_read_unlock(); 6159 + ath12k_dbg(ab, ATH12K_DBG_MAC, "assoc link %u freq %u\n", 6160 + assoc_arvif->link_id, assoc_link_freq); 6161 + 6162 + /* assoc link is already activated and has to be kept active, 6163 + * only need to select a partner link from others. 6164 + */ 6165 + useful_links &= ~BIT(assoc_arvif->link_id); 6166 + for_each_set_bit(link_id, &useful_links, IEEE80211_MLD_MAX_NUM_LINKS) { 6167 + info = wiphy_dereference(hw->wiphy, vif->link_conf[link_id]); 6168 + if (!info) { 6169 + ath12k_warn(ab, "failed to get link info for link: %u\n", 6170 + link_id); 6171 + return -ENOLINK; 6172 + } 6173 + 6174 + chan = info->chanreq.oper.chan; 6175 + if (!chan) { 6176 + ath12k_warn(ab, "failed to get chan for link: %u\n", link_id); 6177 + return -EINVAL; 6178 + } 6179 + 6180 + partner_freq = chan->center_freq; 6181 + if (ath12k_mac_are_sbs_chan(ab, assoc_link_freq, partner_freq)) { 6182 + sbs_links |= BIT(link_id); 6183 + ath12k_dbg(ab, ATH12K_DBG_MAC, "new SBS link %u freq %u\n", 6184 + link_id, partner_freq); 6185 + continue; 6186 + } 6187 + 6188 + if (ath12k_mac_are_dbs_chan(ab, assoc_link_freq, partner_freq)) { 6189 + dbs_links |= BIT(link_id); 6190 + ath12k_dbg(ab, ATH12K_DBG_MAC, "new DBS link %u freq %u\n", 6191 + link_id, partner_freq); 6192 + continue; 6193 + } 6194 + 6195 + ath12k_dbg(ab, ATH12K_DBG_MAC, "non DBS/SBS link %u freq %u\n", 6196 + link_id, 
partner_freq); 6197 + } 6198 + 6199 + /* choose the first candidate no matter how many are in the list */ 6200 + if (sbs_links) 6201 + link_id = __ffs(sbs_links); 6202 + else if (dbs_links) 6203 + link_id = __ffs(dbs_links); 6204 + else 6205 + link_id = ffs(useful_links) - 1; 6206 + 6207 + ath12k_dbg(ab, ATH12K_DBG_MAC, "select partner link %u\n", link_id); 6208 + 6209 + *selected_links = BIT(assoc_arvif->link_id) | BIT(link_id); 6210 + 6211 + return 0; 6212 + } 6213 + 5881 6214 static int ath12k_mac_op_sta_state(struct ieee80211_hw *hw, 5882 6215 struct ieee80211_vif *vif, 5883 6216 struct ieee80211_sta *sta, ··· 6208 5899 struct ath12k_vif *ahvif = ath12k_vif_to_ahvif(vif); 6209 5900 struct ath12k_sta *ahsta = ath12k_sta_to_ahsta(sta); 6210 5901 struct ath12k_hw *ah = ath12k_hw_to_ah(hw); 5902 + struct ath12k_base *prev_ab = NULL, *ab; 6211 5903 struct ath12k_link_vif *arvif; 6212 5904 struct ath12k_link_sta *arsta; 6213 5905 unsigned long valid_links; 6214 - u8 link_id = 0; 5906 + u16 selected_links = 0; 5907 + u8 link_id = 0, i; 5908 + struct ath12k *ar; 6215 5909 int ret; 6216 5910 6217 5911 lockdep_assert_wiphy(hw->wiphy); ··· 6284 5972 * about to move to the associated state. 6285 5973 */ 6286 5974 if (ieee80211_vif_is_mld(vif) && vif->type == NL80211_IFTYPE_STATION && 6287 - old_state == IEEE80211_STA_AUTH && new_state == IEEE80211_STA_ASSOC) 6288 - ieee80211_set_active_links(vif, ieee80211_vif_usable_links(vif)); 5975 + old_state == IEEE80211_STA_AUTH && new_state == IEEE80211_STA_ASSOC) { 5976 + /* TODO: for now only do link selection for single device 5977 + * MLO case. Other cases will be handled in the future.
5978 + */ 5979 + ab = ah->radio[0].ab; 5980 + if (ab->ag->num_devices == 1) { 5981 + ret = ath12k_mac_select_links(ab, vif, hw, &selected_links); 5982 + if (ret) { 5983 + ath12k_warn(ab, 5984 + "failed to get selected links: %d\n", ret); 5985 + goto exit; 5986 + } 5987 + } else { 5988 + selected_links = ieee80211_vif_usable_links(vif); 5989 + } 5990 + 5991 + ieee80211_set_active_links(vif, selected_links); 5992 + } 6289 5993 6290 5994 /* Handle all the other state transitions in generic way */ 6291 5995 valid_links = ahsta->links_map; ··· 6325 5997 } 6326 5998 } 6327 5999 6000 + if (ieee80211_vif_is_mld(vif) && vif->type == NL80211_IFTYPE_STATION && 6001 + old_state == IEEE80211_STA_ASSOC && new_state == IEEE80211_STA_AUTHORIZED) { 6002 + for_each_ar(ah, ar, i) { 6003 + ab = ar->ab; 6004 + if (prev_ab == ab) 6005 + continue; 6006 + 6007 + ret = ath12k_mac_mlo_sta_update_link_active(ab, hw, ahvif); 6008 + if (ret) { 6009 + ath12k_warn(ab, 6010 + "failed to update link active state on connect %d\n", 6011 + ret); 6012 + goto exit; 6013 + } 6014 + 6015 + prev_ab = ab; 6016 + } 6017 + } 6328 6018 /* IEEE80211_STA_NONE -> IEEE80211_STA_NOTEXIST: 6329 6019 * Remove the station from driver (handle ML sta here since that 6330 6020 * needs special handling. Normal sta will be handled in generic
+2
drivers/net/wireless/ath/ath12k/mac.h
··· 54 54 #define ATH12K_DEFAULT_SCAN_LINK IEEE80211_MLD_MAX_NUM_LINKS 55 55 #define ATH12K_NUM_MAX_LINKS (IEEE80211_MLD_MAX_NUM_LINKS + 1) 56 56 57 + #define ATH12K_NUM_MAX_ACTIVE_LINKS_PER_DEVICE 2 58 + 57 59 enum ath12k_supported_bw { 58 60 ATH12K_BW_20 = 0, 59 61 ATH12K_BW_40 = 1,
+819 -10
drivers/net/wireless/ath/ath12k/wmi.c
··· 91 91 bool dma_ring_cap_done; 92 92 bool spectral_bin_scaling_done; 93 93 bool mac_phy_caps_ext_done; 94 + bool hal_reg_caps_ext2_done; 95 + bool scan_radio_caps_ext2_done; 96 + bool twt_caps_done; 97 + bool htt_msdu_idx_to_qtype_map_done; 98 + bool dbs_or_sbs_cap_ext_done; 94 99 }; 95 100 96 101 struct ath12k_wmi_rdy_parse { ··· 4400 4395 static int ath12k_wmi_hw_mode_caps(struct ath12k_base *soc, 4401 4396 u16 len, const void *ptr, void *data) 4402 4397 { 4398 + struct ath12k_svc_ext_info *svc_ext_info = &soc->wmi_ab.svc_ext_info; 4403 4399 struct ath12k_wmi_svc_rdy_ext_parse *svc_rdy_ext = data; 4404 4400 const struct ath12k_wmi_hw_mode_cap_params *hw_mode_caps; 4405 4401 enum wmi_host_hw_mode_config_type mode, pref; ··· 4433 4427 } 4434 4428 } 4435 4429 4436 - ath12k_dbg(soc, ATH12K_DBG_WMI, "preferred_hw_mode:%d\n", 4437 - soc->wmi_ab.preferred_hw_mode); 4430 + svc_ext_info->num_hw_modes = svc_rdy_ext->n_hw_mode_caps; 4431 + 4432 + ath12k_dbg(soc, ATH12K_DBG_WMI, "num hw modes %u preferred_hw_mode %d\n", 4433 + svc_ext_info->num_hw_modes, soc->wmi_ab.preferred_hw_mode); 4434 + 4438 4435 if (soc->wmi_ab.preferred_hw_mode == WMI_HOST_HW_MODE_MAX) 4439 4436 return -EINVAL; 4440 4437 ··· 4667 4658 return ret; 4668 4659 } 4669 4660 4661 + static void 4662 + ath12k_wmi_save_mac_phy_info(struct ath12k_base *ab, 4663 + const struct ath12k_wmi_mac_phy_caps_params *mac_phy_cap, 4664 + struct ath12k_svc_ext_mac_phy_info *mac_phy_info) 4665 + { 4666 + mac_phy_info->phy_id = __le32_to_cpu(mac_phy_cap->phy_id); 4667 + mac_phy_info->supported_bands = __le32_to_cpu(mac_phy_cap->supported_bands); 4668 + mac_phy_info->hw_freq_range.low_2ghz_freq = 4669 + __le32_to_cpu(mac_phy_cap->low_2ghz_chan_freq); 4670 + mac_phy_info->hw_freq_range.high_2ghz_freq = 4671 + __le32_to_cpu(mac_phy_cap->high_2ghz_chan_freq); 4672 + mac_phy_info->hw_freq_range.low_5ghz_freq = 4673 + __le32_to_cpu(mac_phy_cap->low_5ghz_chan_freq); 4674 + mac_phy_info->hw_freq_range.high_5ghz_freq = 4675 + 
__le32_to_cpu(mac_phy_cap->high_5ghz_chan_freq); 4676 + } 4677 + 4678 + static void 4679 + ath12k_wmi_save_all_mac_phy_info(struct ath12k_base *ab, 4680 + struct ath12k_wmi_svc_rdy_ext_parse *svc_rdy_ext) 4681 + { 4682 + struct ath12k_svc_ext_info *svc_ext_info = &ab->wmi_ab.svc_ext_info; 4683 + const struct ath12k_wmi_mac_phy_caps_params *mac_phy_cap; 4684 + const struct ath12k_wmi_hw_mode_cap_params *hw_mode_cap; 4685 + struct ath12k_svc_ext_mac_phy_info *mac_phy_info; 4686 + u32 hw_mode_id, phy_bit_map; 4687 + u8 hw_idx; 4688 + 4689 + mac_phy_info = &svc_ext_info->mac_phy_info[0]; 4690 + mac_phy_cap = svc_rdy_ext->mac_phy_caps; 4691 + 4692 + for (hw_idx = 0; hw_idx < svc_ext_info->num_hw_modes; hw_idx++) { 4693 + hw_mode_cap = &svc_rdy_ext->hw_mode_caps[hw_idx]; 4694 + hw_mode_id = __le32_to_cpu(hw_mode_cap->hw_mode_id); 4695 + phy_bit_map = __le32_to_cpu(hw_mode_cap->phy_id_map); 4696 + 4697 + while (phy_bit_map) { 4698 + ath12k_wmi_save_mac_phy_info(ab, mac_phy_cap, mac_phy_info); 4699 + mac_phy_info->hw_mode_config_type = 4700 + le32_get_bits(hw_mode_cap->hw_mode_config_type, 4701 + WMI_HW_MODE_CAP_CFG_TYPE); 4702 + ath12k_dbg(ab, ATH12K_DBG_WMI, 4703 + "hw_idx %u hw_mode_id %u hw_mode_config_type %u supported_bands %u phy_id %u 2 GHz [%u - %u] 5 GHz [%u - %u]\n", 4704 + hw_idx, hw_mode_id, 4705 + mac_phy_info->hw_mode_config_type, 4706 + mac_phy_info->supported_bands, mac_phy_info->phy_id, 4707 + mac_phy_info->hw_freq_range.low_2ghz_freq, 4708 + mac_phy_info->hw_freq_range.high_2ghz_freq, 4709 + mac_phy_info->hw_freq_range.low_5ghz_freq, 4710 + mac_phy_info->hw_freq_range.high_5ghz_freq); 4711 + 4712 + mac_phy_cap++; 4713 + mac_phy_info++; 4714 + 4715 + phy_bit_map >>= 1; 4716 + } 4717 + } 4718 + } 4719 + 4670 4720 static int ath12k_wmi_svc_rdy_ext_parse(struct ath12k_base *ab, 4671 4721 u16 tag, u16 len, 4672 4722 const void *ptr, void *data) ··· 4773 4705 ath12k_warn(ab, "failed to parse tlv %d\n", ret); 4774 4706 return ret; 4775 4707 } 4708 + 4709 + 
ath12k_wmi_save_all_mac_phy_info(ab, svc_rdy_ext); 4776 4710 4777 4711 svc_rdy_ext->mac_phy_done = true; 4778 4712 } else if (!svc_rdy_ext->ext_hal_reg_done) { ··· 4992 4922 return 0; 4993 4923 } 4994 4924 4925 + static void 4926 + ath12k_wmi_update_freq_info(struct ath12k_base *ab, 4927 + struct ath12k_svc_ext_mac_phy_info *mac_cap, 4928 + enum ath12k_hw_mode mode, 4929 + u32 phy_id) 4930 + { 4931 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 4932 + struct ath12k_hw_mode_freq_range_arg *mac_range; 4933 + 4934 + mac_range = &hw_mode_info->freq_range_caps[mode][phy_id]; 4935 + 4936 + if (mac_cap->supported_bands & WMI_HOST_WLAN_2GHZ_CAP) { 4937 + mac_range->low_2ghz_freq = max_t(u32, 4938 + mac_cap->hw_freq_range.low_2ghz_freq, 4939 + ATH12K_MIN_2GHZ_FREQ); 4940 + mac_range->high_2ghz_freq = mac_cap->hw_freq_range.high_2ghz_freq ? 4941 + min_t(u32, 4942 + mac_cap->hw_freq_range.high_2ghz_freq, 4943 + ATH12K_MAX_2GHZ_FREQ) : 4944 + ATH12K_MAX_2GHZ_FREQ; 4945 + } 4946 + 4947 + if (mac_cap->supported_bands & WMI_HOST_WLAN_5GHZ_CAP) { 4948 + mac_range->low_5ghz_freq = max_t(u32, 4949 + mac_cap->hw_freq_range.low_5ghz_freq, 4950 + ATH12K_MIN_5GHZ_FREQ); 4951 + mac_range->high_5ghz_freq = mac_cap->hw_freq_range.high_5ghz_freq ? 
4952 + min_t(u32, 4953 + mac_cap->hw_freq_range.high_5ghz_freq, 4954 + ATH12K_MAX_6GHZ_FREQ) : 4955 + ATH12K_MAX_6GHZ_FREQ; 4956 + } 4957 + } 4958 + 4959 + static bool 4960 + ath12k_wmi_all_phy_range_updated(struct ath12k_base *ab, 4961 + enum ath12k_hw_mode hwmode) 4962 + { 4963 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 4964 + struct ath12k_hw_mode_freq_range_arg *mac_range; 4965 + u8 phy_id; 4966 + 4967 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 4968 + mac_range = &hw_mode_info->freq_range_caps[hwmode][phy_id]; 4969 + /* modify SBS/DBS range only when both phy for DBS are filled */ 4970 + if (!mac_range->low_2ghz_freq && !mac_range->low_5ghz_freq) 4971 + return false; 4972 + } 4973 + 4974 + return true; 4975 + } 4976 + 4977 + static void ath12k_wmi_update_dbs_freq_info(struct ath12k_base *ab) 4978 + { 4979 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 4980 + struct ath12k_hw_mode_freq_range_arg *mac_range; 4981 + u8 phy_id; 4982 + 4983 + mac_range = hw_mode_info->freq_range_caps[ATH12K_HW_MODE_DBS]; 4984 + /* Reset 5 GHz range for shared mac for DBS */ 4985 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 4986 + if (mac_range[phy_id].low_2ghz_freq && 4987 + mac_range[phy_id].low_5ghz_freq) { 4988 + mac_range[phy_id].low_5ghz_freq = 0; 4989 + mac_range[phy_id].high_5ghz_freq = 0; 4990 + } 4991 + } 4992 + } 4993 + 4994 + static u32 4995 + ath12k_wmi_get_highest_5ghz_freq_from_range(struct ath12k_hw_mode_freq_range_arg *range) 4996 + { 4997 + u32 highest_freq = 0; 4998 + u8 phy_id; 4999 + 5000 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 5001 + if (range[phy_id].high_5ghz_freq > highest_freq) 5002 + highest_freq = range[phy_id].high_5ghz_freq; 5003 + } 5004 + 5005 + return highest_freq ? 
highest_freq : ATH12K_MAX_6GHZ_FREQ; 5006 + } 5007 + 5008 + static u32 5009 + ath12k_wmi_get_lowest_5ghz_freq_from_range(struct ath12k_hw_mode_freq_range_arg *range) 5010 + { 5011 + u32 lowest_freq = 0; 5012 + u8 phy_id; 5013 + 5014 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 5015 + if ((!lowest_freq && range[phy_id].low_5ghz_freq) || 5016 + range[phy_id].low_5ghz_freq < lowest_freq) 5017 + lowest_freq = range[phy_id].low_5ghz_freq; 5018 + } 5019 + 5020 + return lowest_freq ? lowest_freq : ATH12K_MIN_5GHZ_FREQ; 5021 + } 5022 + 5023 + static void 5024 + ath12k_wmi_fill_upper_share_sbs_freq(struct ath12k_base *ab, 5025 + u16 sbs_range_sep, 5026 + struct ath12k_hw_mode_freq_range_arg *ref_freq) 5027 + { 5028 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 5029 + struct ath12k_hw_mode_freq_range_arg *upper_sbs_freq_range; 5030 + u8 phy_id; 5031 + 5032 + upper_sbs_freq_range = 5033 + hw_mode_info->freq_range_caps[ATH12K_HW_MODE_SBS_UPPER_SHARE]; 5034 + 5035 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 5036 + upper_sbs_freq_range[phy_id].low_2ghz_freq = 5037 + ref_freq[phy_id].low_2ghz_freq; 5038 + upper_sbs_freq_range[phy_id].high_2ghz_freq = 5039 + ref_freq[phy_id].high_2ghz_freq; 5040 + 5041 + /* update for shared mac */ 5042 + if (upper_sbs_freq_range[phy_id].low_2ghz_freq) { 5043 + upper_sbs_freq_range[phy_id].low_5ghz_freq = sbs_range_sep + 10; 5044 + upper_sbs_freq_range[phy_id].high_5ghz_freq = 5045 + ath12k_wmi_get_highest_5ghz_freq_from_range(ref_freq); 5046 + } else { 5047 + upper_sbs_freq_range[phy_id].low_5ghz_freq = 5048 + ath12k_wmi_get_lowest_5ghz_freq_from_range(ref_freq); 5049 + upper_sbs_freq_range[phy_id].high_5ghz_freq = sbs_range_sep; 5050 + } 5051 + } 5052 + } 5053 + 5054 + static void 5055 + ath12k_wmi_fill_lower_share_sbs_freq(struct ath12k_base *ab, 5056 + u16 sbs_range_sep, 5057 + struct ath12k_hw_mode_freq_range_arg *ref_freq) 5058 + { 5059 + struct ath12k_hw_mode_info *hw_mode_info = 
&ab->wmi_ab.hw_mode_info; 5060 + struct ath12k_hw_mode_freq_range_arg *lower_sbs_freq_range; 5061 + u8 phy_id; 5062 + 5063 + lower_sbs_freq_range = 5064 + hw_mode_info->freq_range_caps[ATH12K_HW_MODE_SBS_LOWER_SHARE]; 5065 + 5066 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 5067 + lower_sbs_freq_range[phy_id].low_2ghz_freq = 5068 + ref_freq[phy_id].low_2ghz_freq; 5069 + lower_sbs_freq_range[phy_id].high_2ghz_freq = 5070 + ref_freq[phy_id].high_2ghz_freq; 5071 + 5072 + /* update for shared mac */ 5073 + if (lower_sbs_freq_range[phy_id].low_2ghz_freq) { 5074 + lower_sbs_freq_range[phy_id].low_5ghz_freq = 5075 + ath12k_wmi_get_lowest_5ghz_freq_from_range(ref_freq); 5076 + lower_sbs_freq_range[phy_id].high_5ghz_freq = sbs_range_sep; 5077 + } else { 5078 + lower_sbs_freq_range[phy_id].low_5ghz_freq = sbs_range_sep + 10; 5079 + lower_sbs_freq_range[phy_id].high_5ghz_freq = 5080 + ath12k_wmi_get_highest_5ghz_freq_from_range(ref_freq); 5081 + } 5082 + } 5083 + } 5084 + 5085 + static const char *ath12k_wmi_hw_mode_to_str(enum ath12k_hw_mode hw_mode) 5086 + { 5087 + static const char * const mode_str[] = { 5088 + [ATH12K_HW_MODE_SMM] = "SMM", 5089 + [ATH12K_HW_MODE_DBS] = "DBS", 5090 + [ATH12K_HW_MODE_SBS] = "SBS", 5091 + [ATH12K_HW_MODE_SBS_UPPER_SHARE] = "SBS_UPPER_SHARE", 5092 + [ATH12K_HW_MODE_SBS_LOWER_SHARE] = "SBS_LOWER_SHARE", 5093 + }; 5094 + 5095 + if (hw_mode >= ARRAY_SIZE(mode_str)) 5096 + return "Unknown"; 5097 + 5098 + return mode_str[hw_mode]; 5099 + } 5100 + 5101 + static void 5102 + ath12k_wmi_dump_freq_range_per_mac(struct ath12k_base *ab, 5103 + struct ath12k_hw_mode_freq_range_arg *freq_range, 5104 + enum ath12k_hw_mode hw_mode) 5105 + { 5106 + u8 i; 5107 + 5108 + for (i = 0; i < MAX_RADIOS; i++) 5109 + if (freq_range[i].low_2ghz_freq || freq_range[i].low_5ghz_freq) 5110 + ath12k_dbg(ab, ATH12K_DBG_WMI, 5111 + "frequency range: %s(%d) mac %d 2 GHz [%d - %d] 5 GHz [%d - %d]", 5112 + ath12k_wmi_hw_mode_to_str(hw_mode), 5113 + hw_mode, i, 5114 + 
freq_range[i].low_2ghz_freq, 5115 + freq_range[i].high_2ghz_freq, 5116 + freq_range[i].low_5ghz_freq, 5117 + freq_range[i].high_5ghz_freq); 5118 + } 5119 + 5120 + static void ath12k_wmi_dump_freq_range(struct ath12k_base *ab) 5121 + { 5122 + struct ath12k_hw_mode_freq_range_arg *freq_range; 5123 + u8 i; 5124 + 5125 + for (i = ATH12K_HW_MODE_SMM; i < ATH12K_HW_MODE_MAX; i++) { 5126 + freq_range = ab->wmi_ab.hw_mode_info.freq_range_caps[i]; 5127 + ath12k_wmi_dump_freq_range_per_mac(ab, freq_range, i); 5128 + } 5129 + } 5130 + 5131 + static int ath12k_wmi_modify_sbs_freq(struct ath12k_base *ab, u8 phy_id) 5132 + { 5133 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 5134 + struct ath12k_hw_mode_freq_range_arg *sbs_mac_range, *shared_mac_range; 5135 + struct ath12k_hw_mode_freq_range_arg *non_shared_range; 5136 + u8 shared_phy_id; 5137 + 5138 + sbs_mac_range = &hw_mode_info->freq_range_caps[ATH12K_HW_MODE_SBS][phy_id]; 5139 + 5140 + /* if SBS mac range has both 2.4 and 5 GHz ranges, i.e. shared phy_id 5141 + * keep the range as it is in SBS 5142 + */ 5143 + if (sbs_mac_range->low_2ghz_freq && sbs_mac_range->low_5ghz_freq) 5144 + return 0; 5145 + 5146 + if (sbs_mac_range->low_2ghz_freq && !sbs_mac_range->low_5ghz_freq) { 5147 + ath12k_err(ab, "Invalid DBS/SBS mode with only 2.4Ghz"); 5148 + ath12k_wmi_dump_freq_range_per_mac(ab, sbs_mac_range, ATH12K_HW_MODE_SBS); 5149 + return -EINVAL; 5150 + } 5151 + 5152 + non_shared_range = sbs_mac_range; 5153 + /* if SBS mac range has only 5 GHz then it's the non-shared phy, so 5154 + * modify the range as per the shared mac. 5155 + */ 5156 + shared_phy_id = phy_id ? 
0 : 1; 5157 + shared_mac_range = 5158 + &hw_mode_info->freq_range_caps[ATH12K_HW_MODE_SBS][shared_phy_id]; 5159 + 5160 + if (shared_mac_range->low_5ghz_freq > non_shared_range->low_5ghz_freq) { 5161 + ath12k_dbg(ab, ATH12K_DBG_WMI, "high 5 GHz shared"); 5162 + /* If the shared mac lower 5 GHz frequency is greater than 5163 + * non-shared mac lower 5 GHz frequency then the shared mac has 5164 + * high 5 GHz shared with 2.4 GHz. So non-shared mac's 5 GHz high 5165 + * freq should be less than the shared mac's low 5 GHz freq. 5166 + */ 5167 + if (non_shared_range->high_5ghz_freq >= 5168 + shared_mac_range->low_5ghz_freq) 5169 + non_shared_range->high_5ghz_freq = 5170 + max_t(u32, shared_mac_range->low_5ghz_freq - 10, 5171 + non_shared_range->low_5ghz_freq); 5172 + } else if (shared_mac_range->high_5ghz_freq < 5173 + non_shared_range->high_5ghz_freq) { 5174 + ath12k_dbg(ab, ATH12K_DBG_WMI, "low 5 GHz shared"); 5175 + /* If the shared mac high 5 GHz frequency is less than 5176 + * non-shared mac high 5 GHz frequency then the shared mac has 5177 + * low 5 GHz shared with 2.4 GHz. So non-shared mac's 5 GHz low 5178 + * freq should be greater than the shared mac's high 5 GHz freq. 
5179 + */ 5180 + if (shared_mac_range->high_5ghz_freq >= 5181 + non_shared_range->low_5ghz_freq) 5182 + non_shared_range->low_5ghz_freq = 5183 + min_t(u32, shared_mac_range->high_5ghz_freq + 10, 5184 + non_shared_range->high_5ghz_freq); 5185 + } else { 5186 + ath12k_warn(ab, "invalid SBS range with all 5 GHz shared"); 5187 + return -EINVAL; 5188 + } 5189 + 5190 + return 0; 5191 + } 5192 + 5193 + static void ath12k_wmi_update_sbs_freq_info(struct ath12k_base *ab) 5194 + { 5195 + struct ath12k_hw_mode_info *hw_mode_info = &ab->wmi_ab.hw_mode_info; 5196 + struct ath12k_hw_mode_freq_range_arg *mac_range; 5197 + u16 sbs_range_sep; 5198 + u8 phy_id; 5199 + int ret; 5200 + 5201 + mac_range = hw_mode_info->freq_range_caps[ATH12K_HW_MODE_SBS]; 5202 + 5203 + /* If sbs_lower_band_end_freq has a value, then the frequency range 5204 + * will be split using that value. 5205 + */ 5206 + sbs_range_sep = ab->wmi_ab.sbs_lower_band_end_freq; 5207 + if (sbs_range_sep) { 5208 + ath12k_wmi_fill_upper_share_sbs_freq(ab, sbs_range_sep, 5209 + mac_range); 5210 + ath12k_wmi_fill_lower_share_sbs_freq(ab, sbs_range_sep, 5211 + mac_range); 5212 + /* Hardware specifies the range boundary with sbs_range_sep, 5213 + * (i.e. the boundary between 5 GHz high and 5 GHz low), 5214 + * reset the original one to make sure it will not get used. 5215 + */ 5216 + memset(mac_range, 0, sizeof(*mac_range) * MAX_RADIOS); 5217 + return; 5218 + } 5219 + 5220 + /* If sbs_lower_band_end_freq is not set that means firmware will send one 5221 + * shared mac range and one non-shared mac range. so update that freq. 
5222 + */ 5223 + for (phy_id = 0; phy_id < MAX_RADIOS; phy_id++) { 5224 + ret = ath12k_wmi_modify_sbs_freq(ab, phy_id); 5225 + if (ret) { 5226 + memset(mac_range, 0, sizeof(*mac_range) * MAX_RADIOS); 5227 + break; 5228 + } 5229 + } 5230 + } 5231 + 5232 + static void 5233 + ath12k_wmi_update_mac_freq_info(struct ath12k_base *ab, 5234 + enum wmi_host_hw_mode_config_type hw_config_type, 5235 + u32 phy_id, 5236 + struct ath12k_svc_ext_mac_phy_info *mac_cap) 5237 + { 5238 + if (phy_id >= MAX_RADIOS) { 5239 + ath12k_err(ab, "mac more than two not supported: %d", phy_id); 5240 + return; 5241 + } 5242 + 5243 + ath12k_dbg(ab, ATH12K_DBG_WMI, 5244 + "hw_mode_cfg %d mac %d band 0x%x SBS cutoff freq %d 2 GHz [%d - %d] 5 GHz [%d - %d]", 5245 + hw_config_type, phy_id, mac_cap->supported_bands, 5246 + ab->wmi_ab.sbs_lower_band_end_freq, 5247 + mac_cap->hw_freq_range.low_2ghz_freq, 5248 + mac_cap->hw_freq_range.high_2ghz_freq, 5249 + mac_cap->hw_freq_range.low_5ghz_freq, 5250 + mac_cap->hw_freq_range.high_5ghz_freq); 5251 + 5252 + switch (hw_config_type) { 5253 + case WMI_HOST_HW_MODE_SINGLE: 5254 + if (phy_id) { 5255 + ath12k_dbg(ab, ATH12K_DBG_WMI, "mac phy 1 is not supported"); 5256 + break; 5257 + } 5258 + ath12k_wmi_update_freq_info(ab, mac_cap, ATH12K_HW_MODE_SMM, phy_id); 5259 + break; 5260 + 5261 + case WMI_HOST_HW_MODE_DBS: 5262 + if (!ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_DBS)) 5263 + ath12k_wmi_update_freq_info(ab, mac_cap, 5264 + ATH12K_HW_MODE_DBS, phy_id); 5265 + break; 5266 + case WMI_HOST_HW_MODE_DBS_SBS: 5267 + case WMI_HOST_HW_MODE_DBS_OR_SBS: 5268 + ath12k_wmi_update_freq_info(ab, mac_cap, ATH12K_HW_MODE_DBS, phy_id); 5269 + if (ab->wmi_ab.sbs_lower_band_end_freq || 5270 + mac_cap->hw_freq_range.low_5ghz_freq || 5271 + mac_cap->hw_freq_range.low_2ghz_freq) 5272 + ath12k_wmi_update_freq_info(ab, mac_cap, ATH12K_HW_MODE_SBS, 5273 + phy_id); 5274 + 5275 + if (ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_DBS)) 5276 + 
ath12k_wmi_update_dbs_freq_info(ab); 5277 + if (ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_SBS)) 5278 + ath12k_wmi_update_sbs_freq_info(ab); 5279 + break; 5280 + case WMI_HOST_HW_MODE_SBS: 5281 + case WMI_HOST_HW_MODE_SBS_PASSIVE: 5282 + ath12k_wmi_update_freq_info(ab, mac_cap, ATH12K_HW_MODE_SBS, phy_id); 5283 + if (ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_SBS)) 5284 + ath12k_wmi_update_sbs_freq_info(ab); 5285 + 5286 + break; 5287 + default: 5288 + break; 5289 + } 5290 + } 5291 + 5292 + static bool ath12k_wmi_sbs_range_present(struct ath12k_base *ab) 5293 + { 5294 + if (ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_SBS) || 5295 + (ab->wmi_ab.sbs_lower_band_end_freq && 5296 + ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_SBS_LOWER_SHARE) && 5297 + ath12k_wmi_all_phy_range_updated(ab, ATH12K_HW_MODE_SBS_UPPER_SHARE))) 5298 + return true; 5299 + 5300 + return false; 5301 + } 5302 + 5303 + static int ath12k_wmi_update_hw_mode_list(struct ath12k_base *ab) 5304 + { 5305 + struct ath12k_svc_ext_info *svc_ext_info = &ab->wmi_ab.svc_ext_info; 5306 + struct ath12k_hw_mode_info *info = &ab->wmi_ab.hw_mode_info; 5307 + enum wmi_host_hw_mode_config_type hw_config_type; 5308 + struct ath12k_svc_ext_mac_phy_info *tmp; 5309 + bool dbs_mode = false, sbs_mode = false; 5310 + u32 i, j = 0; 5311 + 5312 + if (!svc_ext_info->num_hw_modes) { 5313 + ath12k_err(ab, "invalid number of hw modes"); 5314 + return -EINVAL; 5315 + } 5316 + 5317 + ath12k_dbg(ab, ATH12K_DBG_WMI, "updated HW mode list: num modes %d", 5318 + svc_ext_info->num_hw_modes); 5319 + 5320 + memset(info->freq_range_caps, 0, sizeof(info->freq_range_caps)); 5321 + 5322 + for (i = 0; i < svc_ext_info->num_hw_modes; i++) { 5323 + if (j >= ATH12K_MAX_MAC_PHY_CAP) 5324 + return -EINVAL; 5325 + 5326 + /* Update for MAC0 */ 5327 + tmp = &svc_ext_info->mac_phy_info[j++]; 5328 + hw_config_type = tmp->hw_mode_config_type; 5329 + ath12k_wmi_update_mac_freq_info(ab, hw_config_type, tmp->phy_id, tmp); 
5330 + 5331 + /* SBS and DBS have dual MAC. Up to 2 MACs are considered. */ 5332 + if (hw_config_type == WMI_HOST_HW_MODE_DBS || 5333 + hw_config_type == WMI_HOST_HW_MODE_SBS_PASSIVE || 5334 + hw_config_type == WMI_HOST_HW_MODE_SBS || 5335 + hw_config_type == WMI_HOST_HW_MODE_DBS_OR_SBS) { 5336 + if (j >= ATH12K_MAX_MAC_PHY_CAP) 5337 + return -EINVAL; 5338 + /* Update for MAC1 */ 5339 + tmp = &svc_ext_info->mac_phy_info[j++]; 5340 + ath12k_wmi_update_mac_freq_info(ab, hw_config_type, 5341 + tmp->phy_id, tmp); 5342 + 5343 + if (hw_config_type == WMI_HOST_HW_MODE_DBS || 5344 + hw_config_type == WMI_HOST_HW_MODE_DBS_OR_SBS) 5345 + dbs_mode = true; 5346 + 5347 + if (ath12k_wmi_sbs_range_present(ab) && 5348 + (hw_config_type == WMI_HOST_HW_MODE_SBS_PASSIVE || 5349 + hw_config_type == WMI_HOST_HW_MODE_SBS || 5350 + hw_config_type == WMI_HOST_HW_MODE_DBS_OR_SBS)) 5351 + sbs_mode = true; 5352 + } 5353 + } 5354 + 5355 + info->support_dbs = dbs_mode; 5356 + info->support_sbs = sbs_mode; 5357 + 5358 + ath12k_wmi_dump_freq_range(ab); 5359 + 5360 + return 0; 5361 + } 5362 + 4995 5363 static int ath12k_wmi_svc_rdy_ext2_parse(struct ath12k_base *ab, 4996 5364 u16 tag, u16 len, 4997 5365 const void *ptr, void *data) 4998 5366 { 5367 + const struct ath12k_wmi_dbs_or_sbs_cap_params *dbs_or_sbs_caps; 4999 5368 struct ath12k_wmi_pdev *wmi_handle = &ab->wmi_ab.wmi[0]; 5000 5369 struct ath12k_wmi_svc_rdy_ext2_parse *parse = data; 5001 5370 int ret; ··· 5476 4967 } 5477 4968 5478 4969 parse->mac_phy_caps_ext_done = true; 4970 + } else if (!parse->hal_reg_caps_ext2_done) { 4971 + parse->hal_reg_caps_ext2_done = true; 4972 + } else if (!parse->scan_radio_caps_ext2_done) { 4973 + parse->scan_radio_caps_ext2_done = true; 4974 + } else if (!parse->twt_caps_done) { 4975 + parse->twt_caps_done = true; 4976 + } else if (!parse->htt_msdu_idx_to_qtype_map_done) { 4977 + parse->htt_msdu_idx_to_qtype_map_done = true; 4978 + } else if (!parse->dbs_or_sbs_cap_ext_done) { 4979 + dbs_or_sbs_caps = ptr; 
4980 + ab->wmi_ab.sbs_lower_band_end_freq = 4981 + __le32_to_cpu(dbs_or_sbs_caps->sbs_lower_band_end_freq); 4982 + 4983 + ath12k_dbg(ab, ATH12K_DBG_WMI, "sbs_lower_band_end_freq %u\n", 4984 + ab->wmi_ab.sbs_lower_band_end_freq); 4985 + 4986 + ret = ath12k_wmi_update_hw_mode_list(ab); 4987 + if (ret) { 4988 + ath12k_warn(ab, "failed to update hw mode list: %d\n", 4989 + ret); 4990 + return ret; 4991 + } 4992 + 4993 + parse->dbs_or_sbs_cap_ext_done = true; 5479 4994 } 4995 + 5480 4996 break; 5481 4997 default: 5482 4998 break; ··· 8160 7626 &parse); 8161 7627 } 8162 7628 7629 + static void ath12k_wmi_fw_stats_process(struct ath12k *ar, 7630 + struct ath12k_fw_stats *stats) 7631 + { 7632 + struct ath12k_base *ab = ar->ab; 7633 + struct ath12k_pdev *pdev; 7634 + bool is_end = true; 7635 + size_t total_vdevs_started = 0; 7636 + int i; 7637 + 7638 + if (stats->stats_id == WMI_REQUEST_VDEV_STAT) { 7639 + if (list_empty(&stats->vdevs)) { 7640 + ath12k_warn(ab, "empty vdev stats"); 7641 + return; 7642 + } 7643 + /* FW sends all the active VDEV stats irrespective of PDEV, 7644 + * hence limit until the count of all VDEVs started 7645 + */ 7646 + rcu_read_lock(); 7647 + for (i = 0; i < ab->num_radios; i++) { 7648 + pdev = rcu_dereference(ab->pdevs_active[i]); 7649 + if (pdev && pdev->ar) 7650 + total_vdevs_started += pdev->ar->num_started_vdevs; 7651 + } 7652 + rcu_read_unlock(); 7653 + 7654 + if (total_vdevs_started) 7655 + is_end = ((++ar->fw_stats.num_vdev_recvd) == 7656 + total_vdevs_started); 7657 + 7658 + list_splice_tail_init(&stats->vdevs, 7659 + &ar->fw_stats.vdevs); 7660 + 7661 + if (is_end) 7662 + complete(&ar->fw_stats_done); 7663 + 7664 + return; 7665 + } 7666 + 7667 + if (stats->stats_id == WMI_REQUEST_BCN_STAT) { 7668 + if (list_empty(&stats->bcn)) { 7669 + ath12k_warn(ab, "empty beacon stats"); 7670 + return; 7671 + } 7672 + /* Mark end until we reached the count of all started VDEVs 7673 + * within the PDEV 7674 + */ 7675 + if (ar->num_started_vdevs) 7676 + 
is_end = ((++ar->fw_stats.num_bcn_recvd) == 7677 + ar->num_started_vdevs); 7678 + 7679 + list_splice_tail_init(&stats->bcn, 7680 + &ar->fw_stats.bcn); 7681 + 7682 + if (is_end) 7683 + complete(&ar->fw_stats_done); 7684 + } 7685 + } 7686 + 8163 7687 static void ath12k_update_stats_event(struct ath12k_base *ab, struct sk_buff *skb) 8164 7688 { 8165 7689 struct ath12k_fw_stats stats = {}; ··· 8247 7655 8248 7656 spin_lock_bh(&ar->data_lock); 8249 7657 8250 - /* WMI_REQUEST_PDEV_STAT can be requested via .get_txpower mac ops or via 8251 - * debugfs fw stats. Therefore, processing it separately. 8252 - */ 7658 + /* Handle WMI_REQUEST_PDEV_STAT status update */ 8253 7659 if (stats.stats_id == WMI_REQUEST_PDEV_STAT) { 8254 7660 list_splice_tail_init(&stats.pdevs, &ar->fw_stats.pdevs); 8255 - ar->fw_stats.fw_stats_done = true; 7661 + complete(&ar->fw_stats_done); 8256 7662 goto complete; 8257 7663 } 8258 7664 8259 - /* WMI_REQUEST_VDEV_STAT and WMI_REQUEST_BCN_STAT are currently requested only 8260 - * via debugfs fw stats. Hence, processing these in debugfs context. 8261 - */ 8262 - ath12k_debugfs_fw_stats_process(ar, &stats); 7665 + /* Handle WMI_REQUEST_VDEV_STAT and WMI_REQUEST_BCN_STAT updates. 
*/ 7666 + ath12k_wmi_fw_stats_process(ar, &stats); 8263 7667 8264 7668 complete: 8265 7669 complete(&ar->fw_stats_complete); ··· 10498 9910 } 10499 9911 10500 9912 return 0; 9913 + } 9914 + 9915 + static int 9916 + ath12k_wmi_fill_disallowed_bmap(struct ath12k_base *ab, 9917 + struct wmi_disallowed_mlo_mode_bitmap_params *dislw_bmap, 9918 + struct wmi_mlo_link_set_active_arg *arg) 9919 + { 9920 + struct wmi_ml_disallow_mode_bmap_arg *dislw_bmap_arg; 9921 + u8 i; 9922 + 9923 + if (arg->num_disallow_mode_comb > 9924 + ARRAY_SIZE(arg->disallow_bmap)) { 9925 + ath12k_warn(ab, "invalid num_disallow_mode_comb: %d", 9926 + arg->num_disallow_mode_comb); 9927 + return -EINVAL; 9928 + } 9929 + 9930 + dislw_bmap_arg = &arg->disallow_bmap[0]; 9931 + for (i = 0; i < arg->num_disallow_mode_comb; i++) { 9932 + dislw_bmap->tlv_header = 9933 + ath12k_wmi_tlv_cmd_hdr(0, sizeof(*dislw_bmap)); 9934 + dislw_bmap->disallowed_mode_bitmap = 9935 + cpu_to_le32(dislw_bmap_arg->disallowed_mode); 9936 + dislw_bmap->ieee_link_id_comb = 9937 + le32_encode_bits(dislw_bmap_arg->ieee_link_id[0], 9938 + WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_1) | 9939 + le32_encode_bits(dislw_bmap_arg->ieee_link_id[1], 9940 + WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_2) | 9941 + le32_encode_bits(dislw_bmap_arg->ieee_link_id[2], 9942 + WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_3) | 9943 + le32_encode_bits(dislw_bmap_arg->ieee_link_id[3], 9944 + WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_4); 9945 + 9946 + ath12k_dbg(ab, ATH12K_DBG_WMI, 9947 + "entry %d disallowed_mode %d ieee_link_id_comb 0x%x", 9948 + i, dislw_bmap_arg->disallowed_mode, 9949 + dislw_bmap_arg->ieee_link_id_comb); 9950 + dislw_bmap++; 9951 + dislw_bmap_arg++; 9952 + } 9953 + 9954 + return 0; 9955 + } 9956 + 9957 + int ath12k_wmi_send_mlo_link_set_active_cmd(struct ath12k_base *ab, 9958 + struct wmi_mlo_link_set_active_arg *arg) 9959 + { 9960 + struct wmi_disallowed_mlo_mode_bitmap_params *disallowed_mode_bmap; 9961 + struct 
wmi_mlo_set_active_link_number_params *link_num_param; 9962 + u32 num_link_num_param = 0, num_vdev_bitmap = 0; 9963 + struct ath12k_wmi_base *wmi_ab = &ab->wmi_ab; 9964 + struct wmi_mlo_link_set_active_cmd *cmd; 9965 + u32 num_inactive_vdev_bitmap = 0; 9966 + u32 num_disallow_mode_comb = 0; 9967 + struct wmi_tlv *tlv; 9968 + struct sk_buff *skb; 9969 + __le32 *vdev_bitmap; 9970 + void *buf_ptr; 9971 + int i, ret; 9972 + u32 len; 9973 + 9974 + if (!arg->num_vdev_bitmap && !arg->num_link_entry) { 9975 + ath12k_warn(ab, "Invalid num_vdev_bitmap and num_link_entry"); 9976 + return -EINVAL; 9977 + } 9978 + 9979 + switch (arg->force_mode) { 9980 + case WMI_MLO_LINK_FORCE_MODE_ACTIVE_LINK_NUM: 9981 + case WMI_MLO_LINK_FORCE_MODE_INACTIVE_LINK_NUM: 9982 + num_link_num_param = arg->num_link_entry; 9983 + fallthrough; 9984 + case WMI_MLO_LINK_FORCE_MODE_ACTIVE: 9985 + case WMI_MLO_LINK_FORCE_MODE_INACTIVE: 9986 + case WMI_MLO_LINK_FORCE_MODE_NO_FORCE: 9987 + num_vdev_bitmap = arg->num_vdev_bitmap; 9988 + break; 9989 + case WMI_MLO_LINK_FORCE_MODE_ACTIVE_INACTIVE: 9990 + num_vdev_bitmap = arg->num_vdev_bitmap; 9991 + num_inactive_vdev_bitmap = arg->num_inactive_vdev_bitmap; 9992 + break; 9993 + default: 9994 + ath12k_warn(ab, "Invalid force mode: %u", arg->force_mode); 9995 + return -EINVAL; 9996 + } 9997 + 9998 + num_disallow_mode_comb = arg->num_disallow_mode_comb; 9999 + len = sizeof(*cmd) + 10000 + TLV_HDR_SIZE + sizeof(*link_num_param) * num_link_num_param + 10001 + TLV_HDR_SIZE + sizeof(*vdev_bitmap) * num_vdev_bitmap + 10002 + TLV_HDR_SIZE + TLV_HDR_SIZE + TLV_HDR_SIZE + 10003 + TLV_HDR_SIZE + sizeof(*disallowed_mode_bmap) * num_disallow_mode_comb; 10004 + if (arg->force_mode == WMI_MLO_LINK_FORCE_MODE_ACTIVE_INACTIVE) 10005 + len += sizeof(*vdev_bitmap) * num_inactive_vdev_bitmap; 10006 + 10007 + skb = ath12k_wmi_alloc_skb(wmi_ab, len); 10008 + if (!skb) 10009 + return -ENOMEM; 10010 + 10011 + cmd = (struct wmi_mlo_link_set_active_cmd *)skb->data; 10012 + 
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_MLO_LINK_SET_ACTIVE_CMD, 10013 + sizeof(*cmd)); 10014 + cmd->force_mode = cpu_to_le32(arg->force_mode); 10015 + cmd->reason = cpu_to_le32(arg->reason); 10016 + ath12k_dbg(ab, ATH12K_DBG_WMI, 10017 + "mode %d reason %d num_link_num_param %d num_vdev_bitmap %d inactive %d num_disallow_mode_comb %d", 10018 + arg->force_mode, arg->reason, num_link_num_param, 10019 + num_vdev_bitmap, num_inactive_vdev_bitmap, 10020 + num_disallow_mode_comb); 10021 + 10022 + buf_ptr = skb->data + sizeof(*cmd); 10023 + tlv = buf_ptr; 10024 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, 10025 + sizeof(*link_num_param) * num_link_num_param); 10026 + buf_ptr += TLV_HDR_SIZE; 10027 + 10028 + if (num_link_num_param) { 10029 + cmd->ctrl_flags = 10030 + le32_encode_bits(arg->ctrl_flags.dync_force_link_num ? 1 : 0, 10031 + CRTL_F_DYNC_FORCE_LINK_NUM); 10032 + 10033 + link_num_param = buf_ptr; 10034 + for (i = 0; i < num_link_num_param; i++) { 10035 + link_num_param->tlv_header = 10036 + ath12k_wmi_tlv_cmd_hdr(0, sizeof(*link_num_param)); 10037 + link_num_param->num_of_link = 10038 + cpu_to_le32(arg->link_num[i].num_of_link); 10039 + link_num_param->vdev_type = 10040 + cpu_to_le32(arg->link_num[i].vdev_type); 10041 + link_num_param->vdev_subtype = 10042 + cpu_to_le32(arg->link_num[i].vdev_subtype); 10043 + link_num_param->home_freq = 10044 + cpu_to_le32(arg->link_num[i].home_freq); 10045 + ath12k_dbg(ab, ATH12K_DBG_WMI, 10046 + "entry %d num_of_link %d vdev type %d subtype %d freq %d control_flags %d", 10047 + i, arg->link_num[i].num_of_link, 10048 + arg->link_num[i].vdev_type, 10049 + arg->link_num[i].vdev_subtype, 10050 + arg->link_num[i].home_freq, 10051 + __le32_to_cpu(cmd->ctrl_flags)); 10052 + link_num_param++; 10053 + } 10054 + 10055 + buf_ptr += sizeof(*link_num_param) * num_link_num_param; 10056 + } 10057 + 10058 + tlv = buf_ptr; 10059 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, 10060 + sizeof(*vdev_bitmap) * 
num_vdev_bitmap); 10061 + buf_ptr += TLV_HDR_SIZE; 10062 + 10063 + if (num_vdev_bitmap) { 10064 + vdev_bitmap = buf_ptr; 10065 + for (i = 0; i < num_vdev_bitmap; i++) { 10066 + vdev_bitmap[i] = cpu_to_le32(arg->vdev_bitmap[i]); 10067 + ath12k_dbg(ab, ATH12K_DBG_WMI, "entry %d vdev_id_bitmap 0x%x", 10068 + i, arg->vdev_bitmap[i]); 10069 + } 10070 + 10071 + buf_ptr += sizeof(*vdev_bitmap) * num_vdev_bitmap; 10072 + } 10073 + 10074 + if (arg->force_mode == WMI_MLO_LINK_FORCE_MODE_ACTIVE_INACTIVE) { 10075 + tlv = buf_ptr; 10076 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, 10077 + sizeof(*vdev_bitmap) * 10078 + num_inactive_vdev_bitmap); 10079 + buf_ptr += TLV_HDR_SIZE; 10080 + 10081 + if (num_inactive_vdev_bitmap) { 10082 + vdev_bitmap = buf_ptr; 10083 + for (i = 0; i < num_inactive_vdev_bitmap; i++) { 10084 + vdev_bitmap[i] = 10085 + cpu_to_le32(arg->inactive_vdev_bitmap[i]); 10086 + ath12k_dbg(ab, ATH12K_DBG_WMI, 10087 + "entry %d inactive_vdev_id_bitmap 0x%x", 10088 + i, arg->inactive_vdev_bitmap[i]); 10089 + } 10090 + 10091 + buf_ptr += sizeof(*vdev_bitmap) * num_inactive_vdev_bitmap; 10092 + } 10093 + } else { 10094 + /* add empty vdev bitmap2 tlv */ 10095 + tlv = buf_ptr; 10096 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, 0); 10097 + buf_ptr += TLV_HDR_SIZE; 10098 + } 10099 + 10100 + /* add empty ieee_link_id_bitmap tlv */ 10101 + tlv = buf_ptr; 10102 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, 0); 10103 + buf_ptr += TLV_HDR_SIZE; 10104 + 10105 + /* add empty ieee_link_id_bitmap2 tlv */ 10106 + tlv = buf_ptr; 10107 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, 0); 10108 + buf_ptr += TLV_HDR_SIZE; 10109 + 10110 + tlv = buf_ptr; 10111 + tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, 10112 + sizeof(*disallowed_mode_bmap) * 10113 + arg->num_disallow_mode_comb); 10114 + buf_ptr += TLV_HDR_SIZE; 10115 + 10116 + ret = ath12k_wmi_fill_disallowed_bmap(ab, buf_ptr, arg); 10117 + if (ret) 10118 + goto free_skb; 10119 + 
10120 + ret = ath12k_wmi_cmd_send(&wmi_ab->wmi[0], skb, WMI_MLO_LINK_SET_ACTIVE_CMDID); 10121 + if (ret) { 10122 + ath12k_warn(ab, 10123 + "failed to send WMI_MLO_LINK_SET_ACTIVE_CMDID: %d\n", ret); 10124 + goto free_skb; 10125 + } 10126 + 10127 + ath12k_dbg(ab, ATH12K_DBG_WMI, "WMI mlo link set active cmd"); 10128 + 10129 + return ret; 10130 + 10131 + free_skb: 10132 + dev_kfree_skb(skb); 10133 + return ret; 10501 10134 }
+179 -1
drivers/net/wireless/ath/ath12k/wmi.h
··· 1974 1974 WMI_TAG_TPC_STATS_CTL_PWR_TABLE_EVENT, 1975 1975 WMI_TAG_VDEV_SET_TPC_POWER_CMD = 0x3B5, 1976 1976 WMI_TAG_VDEV_CH_POWER_INFO, 1977 + WMI_TAG_MLO_LINK_SET_ACTIVE_CMD = 0x3BE, 1977 1978 WMI_TAG_EHT_RATE_SET = 0x3C4, 1978 1979 WMI_TAG_DCS_AWGN_INT_TYPE = 0x3C5, 1979 1980 WMI_TAG_MLO_TX_SEND_PARAMS, ··· 2618 2617 __le32 num_chainmask_tables; 2619 2618 } __packed; 2620 2619 2620 + #define WMI_HW_MODE_CAP_CFG_TYPE GENMASK(27, 0) 2621 + 2621 2622 struct ath12k_wmi_hw_mode_cap_params { 2622 2623 __le32 tlv_header; 2623 2624 __le32 hw_mode_id; ··· 2669 2666 __le32 he_cap_info_2g_ext; 2670 2667 __le32 he_cap_info_5g_ext; 2671 2668 __le32 he_cap_info_internal; 2669 + __le32 wireless_modes; 2670 + __le32 low_2ghz_chan_freq; 2671 + __le32 high_2ghz_chan_freq; 2672 + __le32 low_5ghz_chan_freq; 2673 + __le32 high_5ghz_chan_freq; 2674 + __le32 nss_ratio; 2672 2675 } __packed; 2673 2676 2674 2677 struct ath12k_wmi_hal_reg_caps_ext_params { ··· 2746 2737 __le32 max_num_linkview_peers; 2747 2738 __le32 max_num_msduq_supported_per_tid; 2748 2739 __le32 default_num_msduq_supported_per_tid; 2740 + } __packed; 2741 + 2742 + struct ath12k_wmi_dbs_or_sbs_cap_params { 2743 + __le32 hw_mode_id; 2744 + __le32 sbs_lower_band_end_freq; 2749 2745 } __packed; 2750 2746 2751 2747 struct ath12k_wmi_caps_ext_params { ··· 5063 5049 u32 rx_decap_mode; 5064 5050 }; 5065 5051 5052 + struct ath12k_hw_mode_freq_range_arg { 5053 + u32 low_2ghz_freq; 5054 + u32 high_2ghz_freq; 5055 + u32 low_5ghz_freq; 5056 + u32 high_5ghz_freq; 5057 + }; 5058 + 5059 + struct ath12k_svc_ext_mac_phy_info { 5060 + enum wmi_host_hw_mode_config_type hw_mode_config_type; 5061 + u32 phy_id; 5062 + u32 supported_bands; 5063 + struct ath12k_hw_mode_freq_range_arg hw_freq_range; 5064 + }; 5065 + 5066 + #define ATH12K_MAX_MAC_PHY_CAP 8 5067 + 5068 + struct ath12k_svc_ext_info { 5069 + u32 num_hw_modes; 5070 + struct ath12k_svc_ext_mac_phy_info mac_phy_info[ATH12K_MAX_MAC_PHY_CAP]; 5071 + }; 5072 + 5073 + /** 5074 + * 
enum ath12k_hw_mode - enum for host mode 5075 + * @ATH12K_HW_MODE_SMM: Single mac mode 5076 + * @ATH12K_HW_MODE_DBS: DBS mode 5077 + * @ATH12K_HW_MODE_SBS: SBS mode with either high share or low share 5078 + * @ATH12K_HW_MODE_SBS_UPPER_SHARE: Higher 5 GHz shared with 2.4 GHz 5079 + * @ATH12K_HW_MODE_SBS_LOWER_SHARE: Lower 5 GHz shared with 2.4 GHz 5080 + * @ATH12K_HW_MODE_MAX: Max, used to indicate invalid mode 5081 + */ 5082 + enum ath12k_hw_mode { 5083 + ATH12K_HW_MODE_SMM, 5084 + ATH12K_HW_MODE_DBS, 5085 + ATH12K_HW_MODE_SBS, 5086 + ATH12K_HW_MODE_SBS_UPPER_SHARE, 5087 + ATH12K_HW_MODE_SBS_LOWER_SHARE, 5088 + ATH12K_HW_MODE_MAX, 5089 + }; 5090 + 5091 + struct ath12k_hw_mode_info { 5092 + bool support_dbs:1; 5093 + bool support_sbs:1; 5094 + 5095 + struct ath12k_hw_mode_freq_range_arg freq_range_caps[ATH12K_HW_MODE_MAX] 5096 + [MAX_RADIOS]; 5097 + }; 5098 + 5066 5099 struct ath12k_wmi_base { 5067 5100 struct ath12k_base *ab; 5068 5101 struct ath12k_wmi_pdev wmi[MAX_RADIOS]; ··· 5127 5066 enum wmi_host_hw_mode_config_type preferred_hw_mode; 5128 5067 5129 5068 struct ath12k_wmi_target_cap_arg *targ_cap; 5069 + 5070 + struct ath12k_svc_ext_info svc_ext_info; 5071 + u32 sbs_lower_band_end_freq; 5072 + struct ath12k_hw_mode_info hw_mode_info; 5130 5073 }; 5131 5074 5132 5075 struct wmi_pdev_set_bios_interface_cmd { ··· 6062 5997 */ 6063 5998 } __packed; 6064 5999 6000 + #define CRTL_F_DYNC_FORCE_LINK_NUM GENMASK(3, 2) 6001 + 6002 + struct wmi_mlo_link_set_active_cmd { 6003 + __le32 tlv_header; 6004 + __le32 force_mode; 6005 + __le32 reason; 6006 + __le32 use_ieee_link_id_bitmap; 6007 + struct ath12k_wmi_mac_addr_params ap_mld_mac_addr; 6008 + __le32 ctrl_flags; 6009 + } __packed; 6010 + 6011 + struct wmi_mlo_set_active_link_number_params { 6012 + __le32 tlv_header; 6013 + __le32 num_of_link; 6014 + __le32 vdev_type; 6015 + __le32 vdev_subtype; 6016 + __le32 home_freq; 6017 + } __packed; 6018 + 6019 + #define WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_1 GENMASK(7, 0) 
6020 + #define WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_2 GENMASK(15, 8) 6021 + #define WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_3 GENMASK(23, 16) 6022 + #define WMI_DISALW_MLO_MODE_BMAP_IEEE_LINK_ID_COMB_4 GENMASK(31, 24) 6023 + 6024 + struct wmi_disallowed_mlo_mode_bitmap_params { 6025 + __le32 tlv_header; 6026 + __le32 disallowed_mode_bitmap; 6027 + __le32 ieee_link_id_comb; 6028 + } __packed; 6029 + 6030 + enum wmi_mlo_link_force_mode { 6031 + WMI_MLO_LINK_FORCE_MODE_ACTIVE = 1, 6032 + WMI_MLO_LINK_FORCE_MODE_INACTIVE = 2, 6033 + WMI_MLO_LINK_FORCE_MODE_ACTIVE_LINK_NUM = 3, 6034 + WMI_MLO_LINK_FORCE_MODE_INACTIVE_LINK_NUM = 4, 6035 + WMI_MLO_LINK_FORCE_MODE_NO_FORCE = 5, 6036 + WMI_MLO_LINK_FORCE_MODE_ACTIVE_INACTIVE = 6, 6037 + WMI_MLO_LINK_FORCE_MODE_NON_FORCE_UPDATE = 7, 6038 + }; 6039 + 6040 + enum wmi_mlo_link_force_reason { 6041 + WMI_MLO_LINK_FORCE_REASON_NEW_CONNECT = 1, 6042 + WMI_MLO_LINK_FORCE_REASON_NEW_DISCONNECT = 2, 6043 + WMI_MLO_LINK_FORCE_REASON_LINK_REMOVAL = 3, 6044 + WMI_MLO_LINK_FORCE_REASON_TDLS = 4, 6045 + WMI_MLO_LINK_FORCE_REASON_REVERT_FAILURE = 5, 6046 + WMI_MLO_LINK_FORCE_REASON_LINK_DELETE = 6, 6047 + WMI_MLO_LINK_FORCE_REASON_SINGLE_LINK_EMLSR_OP = 7, 6048 + }; 6049 + 6050 + struct wmi_mlo_link_num_arg { 6051 + u32 num_of_link; 6052 + u32 vdev_type; 6053 + u32 vdev_subtype; 6054 + u32 home_freq; 6055 + }; 6056 + 6057 + struct wmi_mlo_control_flags_arg { 6058 + bool overwrite_force_active_bitmap; 6059 + bool overwrite_force_inactive_bitmap; 6060 + bool dync_force_link_num; 6061 + bool post_re_evaluate; 6062 + u8 post_re_evaluate_loops; 6063 + bool dont_reschedule_workqueue; 6064 + }; 6065 + 6066 + struct wmi_ml_link_force_cmd_arg { 6067 + u8 ap_mld_mac_addr[ETH_ALEN]; 6068 + u16 ieee_link_id_bitmap; 6069 + u16 ieee_link_id_bitmap2; 6070 + u8 link_num; 6071 + }; 6072 + 6073 + struct wmi_ml_disallow_mode_bmap_arg { 6074 + u32 disallowed_mode; 6075 + union { 6076 + u32 ieee_link_id_comb; 6077 + u8 ieee_link_id[4]; 6078 + }; 6079 + 
}; 6080 + 6081 + /* maximum size of link number param array 6082 + * for MLO link set active command 6083 + */ 6084 + #define WMI_MLO_LINK_NUM_SZ 2 6085 + 6086 + /* maximum size of vdev bitmap array for 6087 + * MLO link set active command 6088 + */ 6089 + #define WMI_MLO_VDEV_BITMAP_SZ 2 6090 + 6091 + /* Max number of disallowed bitmap combination 6092 + * sent to firmware 6093 + */ 6094 + #define WMI_ML_MAX_DISALLOW_BMAP_COMB 4 6095 + 6096 + struct wmi_mlo_link_set_active_arg { 6097 + enum wmi_mlo_link_force_mode force_mode; 6098 + enum wmi_mlo_link_force_reason reason; 6099 + u32 num_link_entry; 6100 + u32 num_vdev_bitmap; 6101 + u32 num_inactive_vdev_bitmap; 6102 + struct wmi_mlo_link_num_arg link_num[WMI_MLO_LINK_NUM_SZ]; 6103 + u32 vdev_bitmap[WMI_MLO_VDEV_BITMAP_SZ]; 6104 + u32 inactive_vdev_bitmap[WMI_MLO_VDEV_BITMAP_SZ]; 6105 + struct wmi_mlo_control_flags_arg ctrl_flags; 6106 + bool use_ieee_link_id; 6107 + struct wmi_ml_link_force_cmd_arg force_cmd; 6108 + u32 num_disallow_mode_comb; 6109 + struct wmi_ml_disallow_mode_bmap_arg disallow_bmap[WMI_ML_MAX_DISALLOW_BMAP_COMB]; 6110 + }; 6111 + 6065 6112 void ath12k_wmi_init_qcn9274(struct ath12k_base *ab, 6066 6113 struct ath12k_wmi_resource_config_arg *config); 6067 6114 void ath12k_wmi_init_wcn7850(struct ath12k_base *ab, ··· 6372 6195 int ath12k_wmi_send_vdev_set_tpc_power(struct ath12k *ar, 6373 6196 u32 vdev_id, 6374 6197 struct ath12k_reg_tpc_power_info *param); 6375 - 6198 + int ath12k_wmi_send_mlo_link_set_active_cmd(struct ath12k_base *ab, 6199 + struct wmi_mlo_link_set_active_arg *param); 6376 6200 #endif
+3 -1
drivers/net/wireless/ath/ath6kl/bmi.c
··· 87 87 * We need to do some backwards compatibility to make this work. 88 88 */ 89 89 if (le32_to_cpu(targ_info->byte_count) != sizeof(*targ_info)) { 90 - WARN_ON(1); 90 + ath6kl_err("mismatched byte count %d vs. expected %zd\n", 91 + le32_to_cpu(targ_info->byte_count), 92 + sizeof(*targ_info)); 91 93 return -EINVAL; 92 94 } 93 95
+13 -6
drivers/net/wireless/ath/carl9170/usb.c
··· 438 438 439 439 if (atomic_read(&ar->rx_anch_urbs) == 0) { 440 440 /* 441 - * The system is too slow to cope with 442 - * the enormous workload. We have simply 443 - * run out of active rx urbs and this 444 - * unfortunately leads to an unpredictable 445 - * device. 441 + * At this point, either the system is too slow to 442 + * cope with the enormous workload (so we have simply 443 + * run out of active rx urbs and this unfortunately 444 + * leads to an unpredictable device), or the device 445 + * is not fully functional after an unsuccessful 446 + * firmware loading attempt (so it doesn't pass 447 + * ieee80211_register_hw() and there is no internal 448 + * workqueue at all). 446 449 */ 447 450 448 - ieee80211_queue_work(ar->hw, &ar->ping_work); 451 + if (ar->registered) 452 + ieee80211_queue_work(ar->hw, &ar->ping_work); 453 + else 454 + pr_warn_once("device %s is not registered\n", 455 + dev_name(&ar->udev->dev)); 449 456 } 450 457 } else { 451 458 /*
+2 -1
drivers/net/wireless/intel/iwlegacy/4965-rs.c
··· 203 203 return (u8) (rate_n_flags & 0xFF); 204 204 } 205 205 206 - static void 206 + /* noinline works around https://github.com/llvm/llvm-project/issues/143908 */ 207 + static noinline_for_stack void 207 208 il4965_rs_rate_scale_clear_win(struct il_rate_scale_data *win) 208 209 { 209 210 win->data = 0;
+1
drivers/net/wireless/intel/iwlwifi/dvm/main.c
··· 1316 1316 sizeof(trans->conf.no_reclaim_cmds)); 1317 1317 memcpy(trans->conf.no_reclaim_cmds, no_reclaim_cmds, 1318 1318 sizeof(no_reclaim_cmds)); 1319 + trans->conf.n_no_reclaim_cmds = ARRAY_SIZE(no_reclaim_cmds); 1319 1320 1320 1321 switch (iwlwifi_mod_params.amsdu_size) { 1321 1322 case IWL_AMSDU_DEF:
+1
drivers/net/wireless/intel/iwlwifi/mld/mld.c
··· 77 77 78 78 /* Setup async RX handling */ 79 79 spin_lock_init(&mld->async_handlers_lock); 80 + INIT_LIST_HEAD(&mld->async_handlers_list); 80 81 wiphy_work_init(&mld->async_handlers_wk, 81 82 iwl_mld_async_handlers_wk); 82 83
+2 -2
drivers/net/wireless/intel/iwlwifi/mvm/mld-mac.c
··· 32 32 unsigned int link_id; 33 33 int cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw, 34 34 WIDE_ID(MAC_CONF_GROUP, 35 - MAC_CONFIG_CMD), 0); 35 + MAC_CONFIG_CMD), 1); 36 36 37 - if (WARN_ON(cmd_ver < 1 && cmd_ver > 3)) 37 + if (WARN_ON(cmd_ver > 3)) 38 38 return; 39 39 40 40 cmd->id_and_color = cpu_to_le32(mvmvif->id);
+6 -5
drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
··· 166 166 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 167 167 struct iwl_context_info *ctxt_info; 168 168 struct iwl_context_info_rbd_cfg *rx_cfg; 169 - u32 control_flags = 0, rb_size; 169 + u32 control_flags = 0, rb_size, cb_size; 170 170 dma_addr_t phys; 171 171 int ret; 172 172 ··· 202 202 rb_size = IWL_CTXT_INFO_RB_SIZE_4K; 203 203 } 204 204 205 - WARN_ON(RX_QUEUE_CB_SIZE(iwl_trans_get_num_rbds(trans)) > 12); 205 + cb_size = RX_QUEUE_CB_SIZE(iwl_trans_get_num_rbds(trans)); 206 + if (WARN_ON(cb_size > 12)) 207 + cb_size = 12; 208 + 206 209 control_flags = IWL_CTXT_INFO_TFD_FORMAT_LONG; 207 - control_flags |= 208 - u32_encode_bits(RX_QUEUE_CB_SIZE(iwl_trans_get_num_rbds(trans)), 209 - IWL_CTXT_INFO_RB_CB_SIZE); 210 + control_flags |= u32_encode_bits(cb_size, IWL_CTXT_INFO_RB_CB_SIZE); 210 211 control_flags |= u32_encode_bits(rb_size, IWL_CTXT_INFO_RB_SIZE); 211 212 ctxt_info->control.control_flags = cpu_to_le32(control_flags); 212 213
+42 -45
drivers/nvme/host/core.c
··· 2015 2015 } 2016 2016 2017 2017 2018 - static void nvme_update_atomic_write_disk_info(struct nvme_ns *ns, 2019 - struct nvme_id_ns *id, struct queue_limits *lim, 2020 - u32 bs, u32 atomic_bs) 2018 + static u32 nvme_configure_atomic_write(struct nvme_ns *ns, 2019 + struct nvme_id_ns *id, struct queue_limits *lim, u32 bs) 2021 2020 { 2022 - unsigned int boundary = 0; 2021 + u32 atomic_bs, boundary = 0; 2023 2022 2024 - if (id->nsfeat & NVME_NS_FEAT_ATOMICS && id->nawupf) { 2025 - if (le16_to_cpu(id->nabspf)) 2023 + /* 2024 + * We do not support an offset for the atomic boundaries. 2025 + */ 2026 + if (id->nabo) 2027 + return bs; 2028 + 2029 + if ((id->nsfeat & NVME_NS_FEAT_ATOMICS) && id->nawupf) { 2030 + /* 2031 + * Use the per-namespace atomic write unit when available. 2032 + */ 2033 + atomic_bs = (1 + le16_to_cpu(id->nawupf)) * bs; 2034 + if (id->nabspf) 2026 2035 boundary = (le16_to_cpu(id->nabspf) + 1) * bs; 2036 + } else { 2037 + /* 2038 + * Use the controller wide atomic write unit. This sucks 2039 + * because the limit is defined in terms of logical blocks while 2040 + * namespaces can have different formats, and because there is 2041 + * no clear language in the specification prohibiting different 2042 + * values for different controllers in the subsystem. 
2043 + */ 2044 + atomic_bs = (1 + ns->ctrl->subsys->awupf) * bs; 2027 2045 } 2046 + 2028 2047 lim->atomic_write_hw_max = atomic_bs; 2029 2048 lim->atomic_write_hw_boundary = boundary; 2030 2049 lim->atomic_write_hw_unit_min = bs; 2031 2050 lim->atomic_write_hw_unit_max = rounddown_pow_of_two(atomic_bs); 2032 2051 lim->features |= BLK_FEAT_ATOMIC_WRITES; 2052 + return atomic_bs; 2033 2053 } 2034 2054 2035 2055 static u32 nvme_max_drv_segments(struct nvme_ctrl *ctrl) ··· 2087 2067 valid = false; 2088 2068 } 2089 2069 2090 - atomic_bs = phys_bs = bs; 2091 - if (id->nabo == 0) { 2092 - /* 2093 - * Bit 1 indicates whether NAWUPF is defined for this namespace 2094 - * and whether it should be used instead of AWUPF. If NAWUPF == 2095 - * 0 then AWUPF must be used instead. 2096 - */ 2097 - if (id->nsfeat & NVME_NS_FEAT_ATOMICS && id->nawupf) 2098 - atomic_bs = (1 + le16_to_cpu(id->nawupf)) * bs; 2099 - else 2100 - atomic_bs = (1 + ns->ctrl->awupf) * bs; 2101 - 2102 - /* 2103 - * Set subsystem atomic bs. 2104 - */ 2105 - if (ns->ctrl->subsys->atomic_bs) { 2106 - if (atomic_bs != ns->ctrl->subsys->atomic_bs) { 2107 - dev_err_ratelimited(ns->ctrl->device, 2108 - "%s: Inconsistent Atomic Write Size, Namespace will not be added: Subsystem=%d bytes, Controller/Namespace=%d bytes\n", 2109 - ns->disk ? 
ns->disk->disk_name : "?", 2110 - ns->ctrl->subsys->atomic_bs, 2111 - atomic_bs); 2112 - } 2113 - } else 2114 - ns->ctrl->subsys->atomic_bs = atomic_bs; 2115 - 2116 - nvme_update_atomic_write_disk_info(ns, id, lim, bs, atomic_bs); 2117 - } 2070 + phys_bs = bs; 2071 + atomic_bs = nvme_configure_atomic_write(ns, id, lim, bs); 2118 2072 2119 2073 if (id->nsfeat & NVME_NS_FEAT_IO_OPT) { 2120 2074 /* NPWG = Namespace Preferred Write Granularity */ ··· 2375 2381 nvme_set_chunk_sectors(ns, id, &lim); 2376 2382 if (!nvme_update_disk_info(ns, id, &lim)) 2377 2383 capacity = 0; 2378 - 2379 - /* 2380 - * Validate the max atomic write size fits within the subsystem's 2381 - * atomic write capabilities. 2382 - */ 2383 - if (lim.atomic_write_hw_max > ns->ctrl->subsys->atomic_bs) { 2384 - blk_mq_unfreeze_queue(ns->disk->queue, memflags); 2385 - ret = -ENXIO; 2386 - goto out; 2387 - } 2388 2384 2389 2385 nvme_config_discard(ns, &lim); 2390 2386 if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) && ··· 3199 3215 memcpy(subsys->model, id->mn, sizeof(subsys->model)); 3200 3216 subsys->vendor_id = le16_to_cpu(id->vid); 3201 3217 subsys->cmic = id->cmic; 3218 + subsys->awupf = le16_to_cpu(id->awupf); 3202 3219 3203 3220 /* Versions prior to 1.4 don't necessarily report a valid type */ 3204 3221 if (id->cntrltype == NVME_CTRL_DISC || ··· 3537 3552 if (ret) 3538 3553 goto out_free; 3539 3554 } 3555 + 3556 + if (le16_to_cpu(id->awupf) != ctrl->subsys->awupf) { 3557 + dev_err_ratelimited(ctrl->device, 3558 + "inconsistent AWUPF, controller not added (%u/%u).\n", 3559 + le16_to_cpu(id->awupf), ctrl->subsys->awupf); 3560 + ret = -EINVAL; 3561 + goto out_free; 3562 + } 3563 + 3540 3564 memcpy(ctrl->subsys->firmware_rev, id->fr, 3541 3565 sizeof(ctrl->subsys->firmware_rev)); 3542 3566 ··· 3641 3647 dev_pm_qos_expose_latency_tolerance(ctrl->device); 3642 3648 else if (!ctrl->apst_enabled && prev_apst_enabled) 3643 3649 dev_pm_qos_hide_latency_tolerance(ctrl->device); 3644 - ctrl->awupf = 
le16_to_cpu(id->awupf); 3645 3650 out_free: 3646 3651 kfree(id); 3647 3652 return ret; ··· 4029 4036 list_add_tail_rcu(&ns->siblings, &head->list); 4030 4037 ns->head = head; 4031 4038 mutex_unlock(&ctrl->subsys->lock); 4039 + 4040 + #ifdef CONFIG_NVME_MULTIPATH 4041 + cancel_delayed_work(&head->remove_work); 4042 + #endif 4032 4043 return 0; 4033 4044 4034 4045 out_put_ns_head:
+1 -1
drivers/nvme/host/multipath.c
··· 1311 1311 */ 1312 1312 if (!try_module_get(THIS_MODULE)) 1313 1313 goto out; 1314 - queue_delayed_work(nvme_wq, &head->remove_work, 1314 + mod_delayed_work(nvme_wq, &head->remove_work, 1315 1315 head->delayed_removal_secs * HZ); 1316 1316 } else { 1317 1317 list_del_init(&head->entry);
+1 -2
drivers/nvme/host/nvme.h
··· 410 410 411 411 enum nvme_ctrl_type cntrltype; 412 412 enum nvme_dctype dctype; 413 - u16 awupf; /* 0's based value. */ 414 413 }; 415 414 416 415 static inline enum nvme_ctrl_state nvme_ctrl_state(struct nvme_ctrl *ctrl) ··· 442 443 u8 cmic; 443 444 enum nvme_subsys_type subtype; 444 445 u16 vendor_id; 446 + u16 awupf; /* 0's based value. */ 445 447 struct ida ns_ida; 446 448 #ifdef CONFIG_NVME_MULTIPATH 447 449 enum nvme_iopolicy iopolicy; 448 450 #endif 449 - u32 atomic_bs; 450 451 }; 451 452 452 453 /*
+1 -1
drivers/pci/hotplug/pciehp_hpc.c
··· 771 771 u16 ignored_events = PCI_EXP_SLTSTA_DLLSC; 772 772 773 773 if (!ctrl->inband_presence_disabled) 774 - ignored_events |= events & PCI_EXP_SLTSTA_PDC; 774 + ignored_events |= PCI_EXP_SLTSTA_PDC; 775 775 776 776 events &= ~ignored_events; 777 777 pciehp_ignore_link_change(ctrl, pdev, irq, ignored_events);
+10 -13
drivers/pci/pci-acpi.c
··· 1676 1676 return NULL; 1677 1677 1678 1678 root_ops = kzalloc(sizeof(*root_ops), GFP_KERNEL); 1679 - if (!root_ops) 1680 - goto free_ri; 1679 + if (!root_ops) { 1680 + kfree(ri); 1681 + return NULL; 1682 + } 1681 1683 1682 1684 ri->cfg = pci_acpi_setup_ecam_mapping(root); 1683 - if (!ri->cfg) 1684 - goto free_root_ops; 1685 + if (!ri->cfg) { 1686 + kfree(ri); 1687 + kfree(root_ops); 1688 + return NULL; 1689 + } 1685 1690 1686 1691 root_ops->release_info = pci_acpi_generic_release_info; 1687 1692 root_ops->prepare_resources = pci_acpi_root_prepare_resources; 1688 1693 root_ops->pci_ops = (struct pci_ops *)&ri->cfg->ops->pci_ops; 1689 1694 bus = acpi_pci_root_create(root, root_ops, &ri->common, ri->cfg); 1690 1695 if (!bus) 1691 - goto free_cfg; 1696 + return NULL; 1692 1697 1693 1698 /* If we must preserve the resource configuration, claim now */ 1694 1699 host = pci_find_host_bridge(bus); ··· 1710 1705 pcie_bus_configure_settings(child); 1711 1706 1712 1707 return bus; 1713 - 1714 - free_cfg: 1715 - pci_ecam_free(ri->cfg); 1716 - free_root_ops: 1717 - kfree(root_ops); 1718 - free_ri: 1719 - kfree(ri); 1720 - return NULL; 1721 1708 } 1722 1709 1723 1710 void pcibios_add_bus(struct pci_bus *bus)
+3 -2
drivers/pci/pci.c
··· 3217 3217 /* find PCI PM capability in list */ 3218 3218 pm = pci_find_capability(dev, PCI_CAP_ID_PM); 3219 3219 if (!pm) 3220 - return; 3220 + goto poweron; 3221 3221 /* Check device's ability to generate PME# */ 3222 3222 pci_read_config_word(dev, pm + PCI_PM_PMC, &pmc); 3223 3223 3224 3224 if ((pmc & PCI_PM_CAP_VER_MASK) > 3) { 3225 3225 pci_err(dev, "unsupported PM cap regs version (%u)\n", 3226 3226 pmc & PCI_PM_CAP_VER_MASK); 3227 - return; 3227 + goto poweron; 3228 3228 } 3229 3229 3230 3230 dev->pm_cap = pm; ··· 3269 3269 pci_read_config_word(dev, PCI_STATUS, &status); 3270 3270 if (status & PCI_STATUS_IMM_READY) 3271 3271 dev->imm_ready = 1; 3272 + poweron: 3272 3273 pci_pm_power_up_and_verify_state(dev); 3273 3274 pm_runtime_forbid(&dev->dev); 3274 3275 pm_runtime_set_active(&dev->dev);
+2
drivers/pci/pcie/ptm.c
··· 254 254 } 255 255 EXPORT_SYMBOL(pcie_ptm_enabled); 256 256 257 + #if IS_ENABLED(CONFIG_DEBUG_FS) 257 258 static ssize_t context_update_write(struct file *file, const char __user *ubuf, 258 259 size_t count, loff_t *ppos) 259 260 { ··· 553 552 debugfs_remove_recursive(ptm_debugfs->debugfs); 554 553 } 555 554 EXPORT_SYMBOL_GPL(pcie_ptm_destroy_debugfs); 555 + #endif
+2 -1
drivers/platform/x86/amd/amd_isp4.c
··· 11 11 #include <linux/mutex.h> 12 12 #include <linux/platform_device.h> 13 13 #include <linux/property.h> 14 + #include <linux/soc/amd/isp4_misc.h> 14 15 #include <linux/string.h> 15 16 #include <linux/types.h> 16 17 #include <linux/units.h> ··· 152 151 153 152 static inline bool is_isp_i2c_adapter(struct i2c_adapter *adap) 154 153 { 155 - return !strcmp(adap->owner->name, "i2c_designware_amdisp"); 154 + return !strcmp(adap->name, AMDISP_I2C_ADAP_NAME); 156 155 } 157 156 158 157 static void instantiate_isp_i2c_client(struct amdisp_platform *isp4_platform,
+6 -8
drivers/platform/x86/amd/hsmp/hsmp.c
··· 97 97 short_sleep = jiffies + msecs_to_jiffies(HSMP_SHORT_SLEEP); 98 98 timeout = jiffies + msecs_to_jiffies(HSMP_MSG_TIMEOUT); 99 99 100 - while (time_before(jiffies, timeout)) { 100 + while (true) { 101 101 ret = sock->amd_hsmp_rdwr(sock, mbinfo->msg_resp_off, &mbox_status, HSMP_RD); 102 102 if (ret) { 103 103 dev_err(sock->dev, "Error %d reading mailbox status\n", ret); ··· 106 106 107 107 if (mbox_status != HSMP_STATUS_NOT_READY) 108 108 break; 109 + 110 + if (!time_before(jiffies, timeout)) 111 + break; 112 + 109 113 if (time_before(jiffies, short_sleep)) 110 114 usleep_range(50, 100); 111 115 else ··· 214 210 return -ENODEV; 215 211 sock = &hsmp_pdev.sock[msg->sock_ind]; 216 212 217 - /* 218 - * The time taken by smu operation to complete is between 219 - * 10us to 1ms. Sometime it may take more time. 220 - * In SMP system timeout of 100 millisecs should 221 - * be enough for the previous thread to finish the operation 222 - */ 223 - ret = down_timeout(&sock->hsmp_sem, msecs_to_jiffies(HSMP_MSG_TIMEOUT)); 213 + ret = down_interruptible(&sock->hsmp_sem); 224 214 if (ret < 0) 225 215 return ret; 226 216
+9
drivers/platform/x86/amd/pmc/pmc-quirks.c
··· 225 225 DMI_MATCH(DMI_BOARD_NAME, "WUJIE14-GX4HRXL"), 226 226 } 227 227 }, 228 + /* https://bugzilla.kernel.org/show_bug.cgi?id=220116 */ 229 + { 230 + .ident = "PCSpecialist Lafite Pro V 14M", 231 + .driver_data = &quirk_spurious_8042, 232 + .matches = { 233 + DMI_MATCH(DMI_SYS_VENDOR, "PCSpecialist"), 234 + DMI_MATCH(DMI_PRODUCT_NAME, "Lafite Pro V 14M"), 235 + } 236 + }, 228 237 {} 229 238 }; 230 239
+2
drivers/platform/x86/amd/pmc/pmc.c
··· 157 157 return -ENOMEM; 158 158 } 159 159 160 + memset_io(dev->smu_virt_addr, 0, sizeof(struct smu_metrics)); 161 + 160 162 /* Start the logging */ 161 163 amd_pmc_send_cmd(dev, 0, NULL, SMU_MSG_LOG_RESET, false); 162 164 amd_pmc_send_cmd(dev, 0, NULL, SMU_MSG_LOG_START, false);
+1 -2
drivers/platform/x86/amd/pmf/core.c
··· 280 280 dev_err(dev->dev, "Invalid CPU id: 0x%x", dev->cpu_id); 281 281 } 282 282 283 - dev->buf = kzalloc(dev->mtable_size, GFP_KERNEL); 283 + dev->buf = devm_kzalloc(dev->dev, dev->mtable_size, GFP_KERNEL); 284 284 if (!dev->buf) 285 285 return -ENOMEM; 286 286 } ··· 493 493 mutex_destroy(&dev->lock); 494 494 mutex_destroy(&dev->update_mutex); 495 495 mutex_destroy(&dev->cb_mutex); 496 - kfree(dev->buf); 497 496 } 498 497 499 498 static const struct attribute_group *amd_pmf_driver_groups[] = {
+38 -70
drivers/platform/x86/amd/pmf/tee-if.c
··· 358 358 return -EINVAL; 359 359 360 360 /* re-alloc to the new buffer length of the policy binary */ 361 - new_policy_buf = memdup_user(buf, length); 362 - if (IS_ERR(new_policy_buf)) 363 - return PTR_ERR(new_policy_buf); 361 + new_policy_buf = devm_kzalloc(dev->dev, length, GFP_KERNEL); 362 + if (!new_policy_buf) 363 + return -ENOMEM; 364 364 365 - kfree(dev->policy_buf); 365 + if (copy_from_user(new_policy_buf, buf, length)) { 366 + devm_kfree(dev->dev, new_policy_buf); 367 + return -EFAULT; 368 + } 369 + 370 + devm_kfree(dev->dev, dev->policy_buf); 366 371 dev->policy_buf = new_policy_buf; 367 372 dev->policy_sz = length; 368 373 369 - if (!amd_pmf_pb_valid(dev)) { 370 - ret = -EINVAL; 371 - goto cleanup; 372 - } 374 + if (!amd_pmf_pb_valid(dev)) 375 + return -EINVAL; 373 376 374 377 amd_pmf_hex_dump_pb(dev); 375 378 ret = amd_pmf_start_policy_engine(dev); 376 379 if (ret < 0) 377 - goto cleanup; 380 + return ret; 378 381 379 382 return length; 380 - 381 - cleanup: 382 - kfree(dev->policy_buf); 383 - dev->policy_buf = NULL; 384 - return ret; 385 383 } 386 384 387 385 static const struct file_operations pb_fops = { ··· 420 422 rc = tee_client_open_session(ctx, &sess_arg, NULL); 421 423 if (rc < 0 || sess_arg.ret != 0) { 422 424 pr_err("Failed to open TEE session err:%#x, rc:%d\n", sess_arg.ret, rc); 423 - return rc; 425 + return rc ?: -EINVAL; 424 426 } 425 427 426 428 *id = sess_arg.session; 427 429 428 - return rc; 430 + return 0; 429 431 } 430 432 431 433 static int amd_pmf_register_input_device(struct amd_pmf_dev *dev) ··· 460 462 dev->tee_ctx = tee_client_open_context(NULL, amd_pmf_amdtee_ta_match, NULL, NULL); 461 463 if (IS_ERR(dev->tee_ctx)) { 462 464 dev_err(dev->dev, "Failed to open TEE context\n"); 463 - return PTR_ERR(dev->tee_ctx); 465 + ret = PTR_ERR(dev->tee_ctx); 466 + dev->tee_ctx = NULL; 467 + return ret; 464 468 } 465 469 466 470 ret = amd_pmf_ta_open_session(dev->tee_ctx, &dev->session_id, uuid); ··· 502 502 503 503 static void amd_pmf_tee_deinit(dev)
504 504 { 505 + if (!dev->tee_ctx) 506 + return; 505 507 tee_shm_free(dev->fw_shm_pool); 506 508 tee_client_close_session(dev->tee_ctx, dev->session_id); 507 509 tee_client_close_context(dev->tee_ctx); 510 + dev->tee_ctx = NULL; 508 511 } 509 512 510 513 int amd_pmf_init_smart_pc(struct amd_pmf_dev *dev) ··· 530 527 531 528 ret = amd_pmf_set_dram_addr(dev, true); 532 529 if (ret) 533 - goto err_cancel_work; 530 + return ret; 534 531 535 532 dev->policy_base = devm_ioremap_resource(dev->dev, dev->res); 536 - if (IS_ERR(dev->policy_base)) { 537 - ret = PTR_ERR(dev->policy_base); 538 - goto err_free_dram_buf; 539 - } 533 + if (IS_ERR(dev->policy_base)) 534 + return PTR_ERR(dev->policy_base); 540 535 541 - dev->policy_buf = kzalloc(dev->policy_sz, GFP_KERNEL); 542 - if (!dev->policy_buf) { 543 - ret = -ENOMEM; 544 - goto err_free_dram_buf; 545 - } 536 + dev->policy_buf = devm_kzalloc(dev->dev, dev->policy_sz, GFP_KERNEL); 537 + if (!dev->policy_buf) 538 + return -ENOMEM; 546 539 547 540 memcpy_fromio(dev->policy_buf, dev->policy_base, dev->policy_sz); 548 541 549 542 if (!amd_pmf_pb_valid(dev)) { 550 543 dev_info(dev->dev, "No Smart PC policy present\n"); 551 544 ret = -EINVAL; 552 - goto err_free_policy; 544 + return -EINVAL; 553 545 } 554 546 555 547 amd_pmf_hex_dump_pb(dev); 556 548 557 - dev->prev_data = kzalloc(sizeof(*dev->prev_data), GFP_KERNEL); 558 - if (!dev->prev_data) { 559 - ret = -ENOMEM; 560 - goto err_free_policy; 561 - } 549 + dev->prev_data = devm_kzalloc(dev->dev, sizeof(*dev->prev_data), GFP_KERNEL); 550 + if (!dev->prev_data) 551 + return -ENOMEM; 562 552 563 553 for (i = 0; i < ARRAY_SIZE(amd_pmf_ta_uuid); i++) { 564 554 ret = amd_pmf_tee_init(dev, &amd_pmf_ta_uuid[i]); 565 555 if (ret) 566 - goto err_free_prev_data; 556 + return ret; 567 557 568 558 ret = amd_pmf_start_policy_engine(dev); 569 - switch (ret) { 570 - case TA_PMF_TYPE_SUCCESS: 571 - status = true; 572 - break; 573 - case TA_ERROR_CRYPTO_INVALID_PARAM:
574 - case TA_ERROR_CRYPTO_BIN_TOO_LARGE: 575 - amd_pmf_tee_deinit(dev); 576 - status = false; 577 - break; 578 - default: 579 - ret = -EINVAL; 580 - amd_pmf_tee_deinit(dev); 581 - goto err_free_prev_data; 582 - } 583 - 559 + dev_dbg(dev->dev, "start policy engine ret: %d\n", ret); 560 + status = ret == TA_PMF_TYPE_SUCCESS; 584 561 if (status) 585 562 break; 563 + amd_pmf_tee_deinit(dev); 586 564 } 587 565 588 566 if (!status && !pb_side_load) { 589 567 ret = -EINVAL; 590 - goto err_free_prev_data; 568 + goto err; 591 569 } 592 570 593 571 if (pb_side_load) ··· 576 592 577 593 ret = amd_pmf_register_input_device(dev); 578 594 if (ret) 579 - goto err_pmf_remove_pb; 595 + goto err; 580 596 581 597 return 0; 582 598 583 - err_pmf_remove_pb: 584 - if (pb_side_load && dev->esbin) 585 - amd_pmf_remove_pb(dev); 586 - amd_pmf_tee_deinit(dev); 587 - err_free_prev_data: 588 - kfree(dev->prev_data); 589 - err_free_policy: 590 - kfree(dev->policy_buf); 591 - err_free_dram_buf: 592 - kfree(dev->buf); 593 - err_cancel_work: 594 - cancel_delayed_work_sync(&dev->pb_work); 599 + err: 600 + amd_pmf_deinit_smart_pc(dev); 595 601 596 602 return ret; 597 603 } ··· 595 621 amd_pmf_remove_pb(dev); 596 622 597 623 cancel_delayed_work_sync(&dev->pb_work); 598 - kfree(dev->prev_data); 599 - dev->prev_data = NULL; 600 - kfree(dev->policy_buf); 601 - dev->policy_buf = NULL; 602 - kfree(dev->buf); 603 - dev->buf = NULL; 604 624 amd_pmf_tee_deinit(dev); 605 625 }
+1 -1
drivers/platform/x86/dell/alienware-wmi-wmax.c
··· 119 119 DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), 120 120 DMI_MATCH(DMI_PRODUCT_NAME, "Alienware m16 R1 AMD"), 121 121 }, 122 - .driver_data = &g_series_quirks, 122 + .driver_data = &generic_quirks, 123 123 }, 124 124 { 125 125 .ident = "Alienware m16 R2",
+5 -5
drivers/platform/x86/dell/dell_rbu.c
··· 45 45 MODULE_AUTHOR("Abhay Salunke <abhay_salunke@dell.com>"); 46 46 MODULE_DESCRIPTION("Driver for updating BIOS image on DELL systems"); 47 47 MODULE_LICENSE("GPL"); 48 - MODULE_VERSION("3.2"); 48 + MODULE_VERSION("3.3"); 49 49 50 50 #define BIOS_SCAN_LIMIT 0xffffffff 51 51 #define MAX_IMAGE_LENGTH 16 ··· 91 91 rbu_data.imagesize = 0; 92 92 } 93 93 94 - static int create_packet(void *data, size_t length) 94 + static int create_packet(void *data, size_t length) __must_hold(&rbu_data.lock) 95 95 { 96 96 struct packet_data *newpacket; 97 97 int ordernum = 0; ··· 292 292 remaining_bytes = *pread_length; 293 293 bytes_read = rbu_data.packet_read_count; 294 294 295 - list_for_each_entry(newpacket, (&packet_data_head.list)->next, list) { 295 + list_for_each_entry(newpacket, &packet_data_head.list, list) { 296 296 bytes_copied = do_packet_read(pdest, newpacket, 297 297 remaining_bytes, bytes_read, &temp_count); 298 298 remaining_bytes -= bytes_copied; ··· 315 315 { 316 316 struct packet_data *newpacket, *tmp; 317 317 318 - list_for_each_entry_safe(newpacket, tmp, (&packet_data_head.list)->next, list) { 318 + list_for_each_entry_safe(newpacket, tmp, &packet_data_head.list, list) { 319 319 list_del(&newpacket->list); 320 320 321 321 /* 322 322 * zero out the RBU packet memory before freeing 323 323 * to make sure there are no stale RBU packets left in memory 324 324 */ 325 - memset(newpacket->data, 0, rbu_data.packetsize); 325 + memset(newpacket->data, 0, newpacket->length); 326 326 set_memory_wb((unsigned long)newpacket->data, 327 327 1 << newpacket->ordernum); 328 328 free_pages((unsigned long) newpacket->data,
+17 -2
drivers/platform/x86/ideapad-laptop.c
··· 15 15 #include <linux/bug.h> 16 16 #include <linux/cleanup.h> 17 17 #include <linux/debugfs.h> 18 + #include <linux/delay.h> 18 19 #include <linux/device.h> 19 20 #include <linux/dmi.h> 20 21 #include <linux/i8042.h> ··· 268 267 */ 269 268 #define IDEAPAD_EC_TIMEOUT 200 /* in ms */ 270 269 270 + /* 271 + * Some models (e.g., ThinkBook since 2024) have a low tolerance for being 272 + * polled too frequently. Doing so may break the state machine in the EC, 273 + * resulting in a hard shutdown. 274 + * 275 + * It is also observed that frequent polls may disturb the ongoing operation 276 + * and notably delay the availability of EC response. 277 + * 278 + * These values are used as the delay before the first poll and the interval 279 + * between subsequent polls to solve the above issues. 280 + */ 281 + #define IDEAPAD_EC_POLL_MIN_US 150 282 + #define IDEAPAD_EC_POLL_MAX_US 300 283 + 271 284 static int eval_int(acpi_handle handle, const char *name, unsigned long *res) 272 285 { 273 286 unsigned long long result; ··· 398 383 end_jiffies = jiffies + msecs_to_jiffies(IDEAPAD_EC_TIMEOUT) + 1; 399 384 400 385 while (time_before(jiffies, end_jiffies)) { 401 - schedule(); 386 + usleep_range(IDEAPAD_EC_POLL_MIN_US, IDEAPAD_EC_POLL_MAX_US); 402 387 403 388 err = eval_vpcr(handle, 1, &val); 404 389 if (err) ··· 429 414 end_jiffies = jiffies + msecs_to_jiffies(IDEAPAD_EC_TIMEOUT) + 1; 430 415 431 416 while (time_before(jiffies, end_jiffies)) { 432 - schedule(); 417 + usleep_range(IDEAPAD_EC_POLL_MIN_US, IDEAPAD_EC_POLL_MAX_US); 433 418 434 419 err = eval_vpcr(handle, 1, &val); 435 420 if (err)
+7
drivers/platform/x86/intel/pmc/core.h
··· 299 299 #define PTL_PCD_PMC_MMIO_REG_LEN 0x31A8 300 300 301 301 /* SSRAM PMC Device ID */ 302 + /* LNL */ 303 + #define PMC_DEVID_LNL_SOCM 0xa87f 304 + 305 + /* PTL */ 306 + #define PMC_DEVID_PTL_PCDH 0xe37f 307 + #define PMC_DEVID_PTL_PCDP 0xe47f 308 + 302 309 /* ARL */ 303 310 #define PMC_DEVID_ARL_SOCM 0x777f 304 311 #define PMC_DEVID_ARL_SOCS 0xae7f
+3
drivers/platform/x86/intel/pmc/ssram_telemetry.c
··· 187 187 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PMC_DEVID_MTL_SOCM) }, 188 188 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PMC_DEVID_ARL_SOCS) }, 189 189 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PMC_DEVID_ARL_SOCM) }, 190 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PMC_DEVID_LNL_SOCM) }, 191 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PMC_DEVID_PTL_PCDH) }, 192 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PMC_DEVID_PTL_PCDP) }, 190 193 { } 191 194 }; 192 195 MODULE_DEVICE_TABLE(pci, intel_pmc_ssram_telemetry_pci_ids);
+3 -1
drivers/platform/x86/intel/tpmi_power_domains.c
··· 228 228 229 229 domain_die_map = kcalloc(size_mul(topology_max_packages(), MAX_POWER_DOMAINS), 230 230 sizeof(*domain_die_map), GFP_KERNEL); 231 - if (!domain_die_map) 231 + if (!domain_die_map) { 232 + ret = -ENOMEM; 232 233 goto free_domain_mask; 234 + } 233 235 234 236 ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, 235 237 "platform/x86/tpmi_power_domains:online",
+1 -1
drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c
··· 58 58 if (length) 59 59 length += sysfs_emit_at(buf, length, " "); 60 60 61 - length += sysfs_emit_at(buf, length, agent_name[agent]); 61 + length += sysfs_emit_at(buf, length, "%s", agent_name[agent]); 62 62 } 63 63 64 64 length += sysfs_emit_at(buf, length, "\n");
+6 -3
drivers/platform/x86/intel/uncore-frequency/uncore-frequency-tpmi.c
··· 511 511 512 512 /* Get the package ID from the TPMI core */ 513 513 plat_info = tpmi_get_platform_data(auxdev); 514 - if (plat_info) 515 - pkg = plat_info->package_id; 516 - else 514 + if (unlikely(!plat_info)) { 517 515 dev_info(&auxdev->dev, "Platform information is NULL\n"); 516 + ret = -ENODEV; 517 + goto err_rem_common; 518 + } 519 + 520 + pkg = plat_info->package_id; 518 521 519 522 for (i = 0; i < num_resources; ++i) { 520 523 struct tpmi_uncore_power_domain_info *pd_info;
+1
drivers/platform/x86/samsung-galaxybook.c
··· 1403 1403 } 1404 1404 1405 1405 static const struct acpi_device_id galaxybook_device_ids[] = { 1406 + { "SAM0426" }, 1406 1407 { "SAM0427" }, 1407 1408 { "SAM0428" }, 1408 1409 { "SAM0429" },
+2 -1
drivers/ptp/ptp_clock.c
··· 121 121 struct ptp_clock_info *ops; 122 122 int err = -EOPNOTSUPP; 123 123 124 - if (ptp_clock_freerun(ptp)) { 124 + if (tx->modes & (ADJ_SETOFFSET | ADJ_FREQUENCY | ADJ_OFFSET) && 125 + ptp_clock_freerun(ptp)) { 125 126 pr_err("ptp: physical clock is free running\n"); 126 127 return -EBUSY; 127 128 }
+21 -1
drivers/ptp/ptp_private.h
··· 98 98 /* Check if ptp virtual clock is in use */ 99 99 static inline bool ptp_vclock_in_use(struct ptp_clock *ptp) 100 100 { 101 - return !ptp->is_virtual_clock; 101 + bool in_use = false; 102 + 103 + /* Virtual clocks can't be stacked on top of virtual clocks. 104 + * Avoid acquiring the n_vclocks_mux on virtual clocks, to allow this 105 + * function to be called from code paths where the n_vclocks_mux of the 106 + * parent physical clock is already held. Functionally that's not an 107 + * issue, but lockdep would complain, because they have the same lock 108 + * class. 109 + */ 110 + if (ptp->is_virtual_clock) 111 + return false; 112 + 113 + if (mutex_lock_interruptible(&ptp->n_vclocks_mux)) 114 + return true; 115 + 116 + if (ptp->n_vclocks) 117 + in_use = true; 118 + 119 + mutex_unlock(&ptp->n_vclocks_mux); 120 + 121 + return in_use; 102 122 } 103 123 104 124 /* Check if ptp clock shall be free running */
+14
drivers/regulator/fan53555.c
··· 147 147 unsigned int slew_mask; 148 148 const unsigned int *ramp_delay_table; 149 149 unsigned int n_ramp_values; 150 + unsigned int enable_time; 150 151 unsigned int slew_rate; 151 152 }; 152 153 ··· 283 282 di->slew_mask = CTL_SLEW_MASK; 284 283 di->ramp_delay_table = slew_rates; 285 284 di->n_ramp_values = ARRAY_SIZE(slew_rates); 285 + di->enable_time = 250; 286 286 di->vsel_count = FAN53526_NVOLTAGES; 287 287 288 288 return 0; ··· 298 296 case FAN53555_CHIP_REV_00: 299 297 di->vsel_min = 600000; 300 298 di->vsel_step = 10000; 299 + di->enable_time = 400; 301 300 break; 302 301 case FAN53555_CHIP_REV_13: 303 302 di->vsel_min = 800000; 304 303 di->vsel_step = 10000; 304 + di->enable_time = 400; 305 305 break; 306 306 default: 307 307 dev_err(di->dev, ··· 315 311 case FAN53555_CHIP_ID_01: 316 312 case FAN53555_CHIP_ID_03: 317 313 case FAN53555_CHIP_ID_05: 314 + di->vsel_min = 600000; 315 + di->vsel_step = 10000; 316 + di->enable_time = 400; 317 + break; 318 318 case FAN53555_CHIP_ID_08: 319 319 di->vsel_min = 600000; 320 320 di->vsel_step = 10000; 321 + di->enable_time = 175; 321 322 break; 322 323 case FAN53555_CHIP_ID_04: 323 324 di->vsel_min = 603000; 324 325 di->vsel_step = 12826; 326 + di->enable_time = 400; 325 327 break; 326 328 default: 327 329 dev_err(di->dev, ··· 360 350 di->slew_mask = CTL_SLEW_MASK; 361 351 di->ramp_delay_table = slew_rates; 362 352 di->n_ramp_values = ARRAY_SIZE(slew_rates); 353 + di->enable_time = 360; 363 354 di->vsel_count = FAN53555_NVOLTAGES; 364 355 365 356 return 0; ··· 383 372 di->slew_mask = CTL_SLEW_MASK; 384 373 di->ramp_delay_table = slew_rates; 385 374 di->n_ramp_values = ARRAY_SIZE(slew_rates); 375 + di->enable_time = 360; 386 376 di->vsel_count = RK8602_NVOLTAGES; 387 377 388 378 return 0; ··· 407 395 di->slew_mask = CTL_SLEW_MASK; 408 396 di->ramp_delay_table = slew_rates; 409 397 di->n_ramp_values = ARRAY_SIZE(slew_rates); 398 + di->enable_time = 400; 410 399 di->vsel_count = FAN53555_NVOLTAGES; 411 400 412 401 return 0;
··· 607 594 rdesc->ramp_mask = di->slew_mask; 608 595 rdesc->ramp_delay_table = di->ramp_delay_table; 609 596 rdesc->n_ramp_values = di->n_ramp_values; 597 + rdesc->enable_time = di->enable_time; 610 598 rdesc->owner = THIS_MODULE; 611 599 612 600 rdev = devm_regulator_register(di->dev, &di->desc, config);
+1 -1
drivers/s390/crypto/pkey_api.c
··· 86 86 if (!uapqns || nr_apqns == 0) 87 87 return NULL; 88 88 89 - return memdup_user(uapqns, nr_apqns * sizeof(struct pkey_apqn)); 89 + return memdup_array_user(uapqns, nr_apqns, sizeof(struct pkey_apqn)); 90 90 } 91 91 92 92 static int pkey_ioctl_genseck(struct pkey_genseck __user *ugs)
+3 -2
drivers/scsi/elx/efct/efct_hw.c
··· 1120 1120 efct_hw_parse_filter(struct efct_hw *hw, void *value) 1121 1121 { 1122 1122 int rc = 0; 1123 - char *p = NULL; 1123 + char *p = NULL, *pp = NULL; 1124 1124 char *token; 1125 1125 u32 idx = 0; 1126 1126 ··· 1132 1132 efc_log_err(hw->os, "p is NULL\n"); 1133 1133 return -ENOMEM; 1134 1134 } 1135 + pp = p; 1135 1136 1136 1137 idx = 0; 1137 1138 while ((token = strsep(&p, ",")) && *token) { ··· 1145 1144 if (idx == ARRAY_SIZE(hw->config.filter_def)) 1146 1145 break; 1147 1146 } 1148 - kfree(p); 1147 + kfree(pp); 1149 1148 1150 1149 return rc; 1151 1150 }
+149 -38
drivers/scsi/fnic/fdls_disc.c
··· 763 763 iport->fabric.timer_pending = 1; 764 764 } 765 765 766 - static void fdls_send_fdmi_abts(struct fnic_iport_s *iport) 766 + static uint8_t *fdls_alloc_init_fdmi_abts_frame(struct fnic_iport_s *iport, 767 + uint16_t oxid) 767 768 { 768 - uint8_t *frame; 769 + struct fc_frame_header *pfdmi_abts; 769 770 uint8_t d_id[3]; 771 + uint8_t *frame; 770 772 struct fnic *fnic = iport->fnic; 771 - struct fc_frame_header *pfabric_abts; 772 - unsigned long fdmi_tov; 773 - uint16_t oxid; 774 - uint16_t frame_size = FNIC_ETH_FCOE_HDRS_OFFSET + 775 - sizeof(struct fc_frame_header); 776 773 777 774 frame = fdls_alloc_frame(iport); 778 775 if (frame == NULL) { 779 776 FNIC_FCS_DBG(KERN_ERR, fnic->host, fnic->fnic_num, 780 777 "Failed to allocate frame to send FDMI ABTS"); 781 - return; 778 + return NULL; 782 779 } 783 780 784 - pfabric_abts = (struct fc_frame_header *) (frame + FNIC_ETH_FCOE_HDRS_OFFSET); 781 + pfdmi_abts = (struct fc_frame_header *) (frame + FNIC_ETH_FCOE_HDRS_OFFSET); 785 782 fdls_init_fabric_abts_frame(frame, iport); 786 783 787 784 hton24(d_id, FC_FID_MGMT_SERV); 788 - FNIC_STD_SET_D_ID(*pfabric_abts, d_id); 785 + FNIC_STD_SET_D_ID(*pfdmi_abts, d_id); 786 + FNIC_STD_SET_OX_ID(*pfdmi_abts, oxid); 787 + 788 + return frame; 789 + } 790 + 791 + static void fdls_send_fdmi_abts(struct fnic_iport_s *iport) 792 + { 793 + uint8_t *frame; 794 + struct fnic *fnic = iport->fnic; 795 + unsigned long fdmi_tov; 796 + uint16_t frame_size = FNIC_ETH_FCOE_HDRS_OFFSET + 797 + sizeof(struct fc_frame_header); 789 798 790 799 if (iport->fabric.fdmi_pending & FDLS_FDMI_PLOGI_PENDING) { 791 - oxid = iport->active_oxid_fdmi_plogi; 792 - FNIC_STD_SET_OX_ID(*pfabric_abts, oxid); 800 + frame = fdls_alloc_init_fdmi_abts_frame(iport, 801 + iport->active_oxid_fdmi_plogi); 802 + if (frame == NULL) 803 + return; 804 + 805 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 806 + "0x%x: FDLS send FDMI PLOGI abts. iport->fabric.state: %d oxid: 0x%x",
807 + iport->fcid, iport->fabric.state, iport->active_oxid_fdmi_plogi); 793 808 fnic_send_fcoe_frame(iport, frame, frame_size); 794 809 } else { 795 810 if (iport->fabric.fdmi_pending & FDLS_FDMI_REG_HBA_PENDING) { 796 - oxid = iport->active_oxid_fdmi_rhba; 797 - FNIC_STD_SET_OX_ID(*pfabric_abts, oxid); 811 + frame = fdls_alloc_init_fdmi_abts_frame(iport, 812 + iport->active_oxid_fdmi_rhba); 813 + if (frame == NULL) 814 + return; 815 + 816 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 817 + "0x%x: FDLS send FDMI RHBA abts. iport->fabric.state: %d oxid: 0x%x", 818 + iport->fcid, iport->fabric.state, iport->active_oxid_fdmi_rhba); 798 819 fnic_send_fcoe_frame(iport, frame, frame_size); 799 820 } 800 821 if (iport->fabric.fdmi_pending & FDLS_FDMI_RPA_PENDING) { 801 - oxid = iport->active_oxid_fdmi_rpa; 802 - FNIC_STD_SET_OX_ID(*pfabric_abts, oxid); 822 + frame = fdls_alloc_init_fdmi_abts_frame(iport, 823 + iport->active_oxid_fdmi_rpa); 824 + if (frame == NULL) { 825 + if (iport->fabric.fdmi_pending & FDLS_FDMI_REG_HBA_PENDING) 826 + goto arm_timer; 827 + else 828 + return; 829 + } 830 + 831 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 832 + "0x%x: FDLS send FDMI RPA abts. iport->fabric.state: %d oxid: 0x%x",
833 + iport->fcid, iport->fabric.state, iport->active_oxid_fdmi_rpa); 803 834 fnic_send_fcoe_frame(iport, frame, frame_size); 804 835 } 805 836 } 806 837 838 + arm_timer: 807 839 fdmi_tov = jiffies + msecs_to_jiffies(2 * iport->e_d_tov); 808 840 mod_timer(&iport->fabric.fdmi_timer, round_jiffies(fdmi_tov)); 809 841 iport->fabric.fdmi_pending |= FDLS_FDMI_ABORT_PENDING; 842 + 843 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 844 + "0x%x: iport->fabric.fdmi_pending: 0x%x", 845 + iport->fcid, iport->fabric.fdmi_pending); 810 846 } 811 847 812 848 static void fdls_send_fabric_flogi(struct fnic_iport_s *iport) ··· 2281 2245 spin_unlock_irqrestore(&fnic->fnic_lock, flags); 2282 2246 } 2283 2247 2248 + void fdls_fdmi_retry_plogi(struct fnic_iport_s *iport) 2249 + { 2250 + struct fnic *fnic = iport->fnic; 2251 + 2252 + iport->fabric.fdmi_pending = 0; 2253 + /* If max retries not exhausted, start over from fdmi plogi */ 2254 + if (iport->fabric.fdmi_retry < FDLS_FDMI_MAX_RETRY) { 2255 + iport->fabric.fdmi_retry++; 2256 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 2257 + "Retry FDMI PLOGI. FDMI retry: %d", 2258 + iport->fabric.fdmi_retry); 2259 + fdls_send_fdmi_plogi(iport); 2260 + } 2261 + } 2262 + 2284 2263 void fdls_fdmi_timer_callback(struct timer_list *t) 2285 2264 { 2286 2265 struct fnic_fdls_fabric_s *fabric = timer_container_of(fabric, t, ··· 2308 2257 spin_lock_irqsave(&fnic->fnic_lock, flags); 2309 2258 2310 2259 FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 2311 - "fdmi timer callback : 0x%x\n", iport->fabric.fdmi_pending); 2260 + "iport->fabric.fdmi_pending: 0x%x\n", iport->fabric.fdmi_pending); 2312 2261 2313 2262 if (!iport->fabric.fdmi_pending) { 2314 2263 /* timer expired after fdmi responses received. */
··· 2316 2265 return; 2317 2266 } 2318 2267 FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 2319 - "fdmi timer callback : 0x%x\n", iport->fabric.fdmi_pending); 2268 + "iport->fabric.fdmi_pending: 0x%x\n", iport->fabric.fdmi_pending); 2320 2269 2321 2270 /* if not abort pending, send an abort */ 2322 2271 if (!(iport->fabric.fdmi_pending & FDLS_FDMI_ABORT_PENDING)) { ··· 2325 2274 return; 2326 2275 } 2327 2276 FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 2328 - "fdmi timer callback : 0x%x\n", iport->fabric.fdmi_pending); 2277 + "iport->fabric.fdmi_pending: 0x%x\n", iport->fabric.fdmi_pending); 2329 2278 2330 2279 /* ABTS pending for an active fdmi request that is pending. 2331 2280 * That means FDMI ABTS timed out 2332 2281 * Schedule to free the OXID after 2*r_a_tov and proceed 2333 2282 */ 2334 2283 if (iport->fabric.fdmi_pending & FDLS_FDMI_PLOGI_PENDING) { 2284 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 2285 + "FDMI PLOGI ABTS timed out. Schedule oxid free: 0x%x\n", 2286 + iport->active_oxid_fdmi_plogi); 2335 2287 fdls_schedule_oxid_free(iport, &iport->active_oxid_fdmi_plogi); 2336 2288 } else { 2337 - if (iport->fabric.fdmi_pending & FDLS_FDMI_REG_HBA_PENDING) 2289 + if (iport->fabric.fdmi_pending & FDLS_FDMI_REG_HBA_PENDING) { 2290 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 2291 + "FDMI RHBA ABTS timed out. Schedule oxid free: 0x%x\n", 2292 + iport->active_oxid_fdmi_rhba); 2338 2293 fdls_schedule_oxid_free(iport, &iport->active_oxid_fdmi_rhba); 2339 - if (iport->fabric.fdmi_pending & FDLS_FDMI_RPA_PENDING) 2294 + } 2295 + if (iport->fabric.fdmi_pending & FDLS_FDMI_RPA_PENDING) { 2296 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 2297 + "FDMI RPA ABTS timed out. Schedule oxid free: 0x%x\n",
2298 + iport->active_oxid_fdmi_rpa); 2340 2299 fdls_schedule_oxid_free(iport, &iport->active_oxid_fdmi_rpa); 2300 + } 2341 2301 } 2342 2302 FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 2343 - "fdmi timer callback : 0x%x\n", iport->fabric.fdmi_pending); 2303 + "iport->fabric.fdmi_pending: 0x%x\n", iport->fabric.fdmi_pending); 2344 2304 2345 - iport->fabric.fdmi_pending = 0; 2346 - /* If max retries not exhaused, start over from fdmi plogi */ 2347 - if (iport->fabric.fdmi_retry < FDLS_FDMI_MAX_RETRY) { 2348 - iport->fabric.fdmi_retry++; 2349 - FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 2350 - "retry fdmi timer %d", iport->fabric.fdmi_retry); 2351 - fdls_send_fdmi_plogi(iport); 2352 - } 2305 + fdls_fdmi_retry_plogi(iport); 2353 2306 FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 2354 - "fdmi timer callback : 0x%x\n", iport->fabric.fdmi_pending); 2307 + "iport->fabric.fdmi_pending: 0x%x\n", iport->fabric.fdmi_pending); 2355 2308 spin_unlock_irqrestore(&fnic->fnic_lock, flags); 2356 2309 } 2357 2310 ··· 3770 3715 3771 3716 switch (FNIC_FRAME_TYPE(oxid)) { 3772 3717 case FNIC_FRAME_TYPE_FDMI_PLOGI: 3718 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 3719 + "Received FDMI PLOGI ABTS rsp with oxid: 0x%x", oxid); 3720 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 3721 + "0x%x: iport->fabric.fdmi_pending: 0x%x", 3722 + iport->fcid, iport->fabric.fdmi_pending); 3773 3723 fdls_free_oxid(iport, oxid, &iport->active_oxid_fdmi_plogi); 3724 + 3725 + iport->fabric.fdmi_pending &= ~FDLS_FDMI_PLOGI_PENDING; 3726 + iport->fabric.fdmi_pending &= ~FDLS_FDMI_ABORT_PENDING; 3727 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 3728 + "0x%x: iport->fabric.fdmi_pending: 0x%x", 3729 + iport->fcid, iport->fabric.fdmi_pending); 3774 3730 break; 3775 3731 case FNIC_FRAME_TYPE_FDMI_RHBA: 3732 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 3733 + "Received FDMI RHBA ABTS rsp with oxid: 0x%x", oxid); 3734 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
3735 + "0x%x: iport->fabric.fdmi_pending: 0x%x", 3736 + iport->fcid, iport->fabric.fdmi_pending); 3737 + 3738 + iport->fabric.fdmi_pending &= ~FDLS_FDMI_REG_HBA_PENDING; 3739 + 3740 + /* If RPA is still pending, don't turn off ABORT PENDING. 3741 + * We count on the timer to detect the ABTS timeout and take 3742 + * corrective action. 3743 + */ 3744 + if (!(iport->fabric.fdmi_pending & FDLS_FDMI_RPA_PENDING)) 3745 + iport->fabric.fdmi_pending &= ~FDLS_FDMI_ABORT_PENDING; 3746 + 3776 3747 fdls_free_oxid(iport, oxid, &iport->active_oxid_fdmi_rhba); 3748 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 3749 + "0x%x: iport->fabric.fdmi_pending: 0x%x", 3750 + iport->fcid, iport->fabric.fdmi_pending); 3777 3751 break; 3778 3752 case FNIC_FRAME_TYPE_FDMI_RPA: 3753 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 3754 + "Received FDMI RPA ABTS rsp with oxid: 0x%x", oxid); 3755 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 3756 + "0x%x: iport->fabric.fdmi_pending: 0x%x", 3757 + iport->fcid, iport->fabric.fdmi_pending); 3758 + 3759 + iport->fabric.fdmi_pending &= ~FDLS_FDMI_RPA_PENDING; 3760 + 3761 + /* If RHBA is still pending, don't turn off ABORT PENDING. 3762 + * We count on the timer to detect the ABTS timeout and take 3763 + * corrective action.
3764 + */ 3765 + if (!(iport->fabric.fdmi_pending & FDLS_FDMI_REG_HBA_PENDING)) 3766 + iport->fabric.fdmi_pending &= ~FDLS_FDMI_ABORT_PENDING; 3767 + 3779 3768 fdls_free_oxid(iport, oxid, &iport->active_oxid_fdmi_rpa); 3769 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 3770 + "0x%x: iport->fabric.fdmi_pending: 0x%x", 3771 + iport->fcid, iport->fabric.fdmi_pending); 3780 3772 break; 3781 3773 default: 3782 3774 FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num, ··· 3832 3730 break; 3833 3731 } 3834 3732 3835 - timer_delete_sync(&iport->fabric.fdmi_timer); 3836 - iport->fabric.fdmi_pending &= ~FDLS_FDMI_ABORT_PENDING; 3837 - 3838 - fdls_send_fdmi_plogi(iport); 3733 + /* 3734 + * Only if ABORT PENDING is off, delete the timer, and if no other 3735 + * operations are pending, retry FDMI. 3736 + * Otherwise, let the timer pop and take the appropriate action. 3737 + */ 3738 + if (!(iport->fabric.fdmi_pending & FDLS_FDMI_ABORT_PENDING)) { 3739 + timer_delete_sync(&iport->fabric.fdmi_timer); 3740 + if (!iport->fabric.fdmi_pending) 3741 + fdls_fdmi_retry_plogi(iport); 3742 + } 3839 3743 } 3840 3744 3841 3745 static void ··· 5080 4972 fdls_delete_tport(iport, tport); 5081 4973 } 5082 4974 5083 - if ((fnic_fdmi_support == 1) && (iport->fabric.fdmi_pending > 0)) { 5084 - timer_delete_sync(&iport->fabric.fdmi_timer); 5085 - iport->fabric.fdmi_pending = 0; 4975 + if (fnic_fdmi_support == 1) { 4976 + if (iport->fabric.fdmi_pending > 0) { 4977 + timer_delete_sync(&iport->fabric.fdmi_timer); 4978 + iport->fabric.fdmi_pending = 0; 4979 + } 4980 + iport->flags &= ~FNIC_FDMI_ACTIVE; 5086 4981 } 5087 4982 5088 4983 FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
+1 -1
drivers/scsi/fnic/fnic.h
··· 30 30 31 31 #define DRV_NAME "fnic" 32 32 #define DRV_DESCRIPTION "Cisco FCoE HBA Driver" 33 - #define DRV_VERSION "1.8.0.0" 33 + #define DRV_VERSION "1.8.0.2" 34 34 #define PFX DRV_NAME ": " 35 35 #define DFX DRV_NAME "%d: " 36 36
+2
drivers/scsi/fnic/fnic_fcs.c
··· 636 636 unsigned long flags; 637 637 638 638 pa = dma_map_single(&fnic->pdev->dev, frame, frame_len, DMA_TO_DEVICE); 639 + if (dma_mapping_error(&fnic->pdev->dev, pa)) 640 + return -ENOMEM; 639 641 640 642 if ((fnic_fc_trace_set_data(fnic->fnic_num, 641 643 FNIC_FC_SEND | 0x80, (char *) frame,
+1
drivers/scsi/fnic/fnic_fdls.h
··· 394 394 bool fdls_delete_tport(struct fnic_iport_s *iport, 395 395 struct fnic_tport_s *tport); 396 396 void fdls_fdmi_timer_callback(struct timer_list *t); 397 + void fdls_fdmi_retry_plogi(struct fnic_iport_s *iport); 397 398 398 399 /* fnic_fcs.c */ 399 400 void fnic_fdls_init(struct fnic *fnic, int usefip);
+1 -1
drivers/scsi/fnic/fnic_scsi.c
··· 1046 1046 if (icmnd_cmpl->scsi_status == SAM_STAT_TASK_SET_FULL) 1047 1047 atomic64_inc(&fnic_stats->misc_stats.queue_fulls); 1048 1048 1049 - FNIC_SCSI_DBG(KERN_INFO, fnic->host, fnic->fnic_num, 1049 + FNIC_SCSI_DBG(KERN_DEBUG, fnic->host, fnic->fnic_num, 1050 1050 "xfer_len: %llu", xfer_len); 1051 1051 break; 1052 1052
+5 -1
drivers/scsi/megaraid/megaraid_sas_base.c
··· 5910 5910 const struct cpumask *mask; 5911 5911 5912 5912 if (instance->perf_mode == MR_BALANCED_PERF_MODE) { 5913 - mask = cpumask_of_node(dev_to_node(&instance->pdev->dev)); 5913 + int nid = dev_to_node(&instance->pdev->dev); 5914 + 5915 + if (nid == NUMA_NO_NODE) 5916 + nid = 0; 5917 + mask = cpumask_of_node(nid); 5914 5918 5915 5919 for (i = 0; i < instance->low_latency_index_start; i++) { 5916 5920 irq = pci_irq_vector(instance->pdev, i);
+7 -5
drivers/spi/spi-cadence-quadspi.c
··· 1958 1958 goto probe_setup_failed; 1959 1959 } 1960 1960 1961 - ret = devm_pm_runtime_enable(dev); 1962 - if (ret) { 1963 - if (cqspi->rx_chan) 1964 - dma_release_channel(cqspi->rx_chan); 1961 + pm_runtime_enable(dev); 1962 + 1963 + if (cqspi->rx_chan) { 1964 + dma_release_channel(cqspi->rx_chan); 1965 1965 goto probe_setup_failed; 1966 1966 } 1967 1967 ··· 1981 1981 return 0; 1982 1982 probe_setup_failed: 1983 1983 cqspi_controller_enable(cqspi, 0); 1984 + pm_runtime_disable(dev); 1984 1985 probe_reset_failed: 1985 1986 if (cqspi->is_jh7110) 1986 1987 cqspi_jh7110_disable_clk(pdev, cqspi); ··· 2000 1999 if (cqspi->rx_chan) 2001 2000 dma_release_channel(cqspi->rx_chan); 2002 2001 2003 - clk_disable_unprepare(cqspi->clk); 2002 + if (pm_runtime_get_sync(&pdev->dev) >= 0) 2003 + clk_disable(cqspi->clk); 2004 2004 2005 2005 if (cqspi->is_jh7110) 2006 2006 cqspi_jh7110_disable_clk(pdev, cqspi);
-14
drivers/spi/spi-tegra210-quad.c
··· 407 407 static void 408 408 tegra_qspi_copy_client_txbuf_to_qspi_txbuf(struct tegra_qspi *tqspi, struct spi_transfer *t) 409 409 { 410 - dma_sync_single_for_cpu(tqspi->dev, tqspi->tx_dma_phys, 411 - tqspi->dma_buf_size, DMA_TO_DEVICE); 412 - 413 410 /* 414 411 * In packed mode, each word in FIFO may contain multiple packets 415 412 * based on bits per word. So all bytes in each FIFO word are valid. ··· 439 442 440 443 tqspi->cur_tx_pos += write_bytes; 441 444 } 442 - 443 - dma_sync_single_for_device(tqspi->dev, tqspi->tx_dma_phys, 444 - tqspi->dma_buf_size, DMA_TO_DEVICE); 445 445 } 446 446 447 447 static void 448 448 tegra_qspi_copy_qspi_rxbuf_to_client_rxbuf(struct tegra_qspi *tqspi, struct spi_transfer *t) 449 449 { 450 - dma_sync_single_for_cpu(tqspi->dev, tqspi->rx_dma_phys, 451 - tqspi->dma_buf_size, DMA_FROM_DEVICE); 452 - 453 450 if (tqspi->is_packed) { 454 451 tqspi->cur_rx_pos += tqspi->curr_dma_words * tqspi->bytes_per_word; 455 452 } else { ··· 469 478 470 479 tqspi->cur_rx_pos += read_bytes; 471 480 } 472 - 473 - dma_sync_single_for_device(tqspi->dev, tqspi->rx_dma_phys, 474 - tqspi->dma_buf_size, DMA_FROM_DEVICE); 475 481 } 476 482 477 483 static void tegra_qspi_dma_complete(void *args) ··· 689 701 return ret; 690 702 } 691 703 692 - dma_sync_single_for_device(tqspi->dev, tqspi->rx_dma_phys, 693 - tqspi->dma_buf_size, DMA_FROM_DEVICE); 694 704 ret = tegra_qspi_start_rx_dma(tqspi, t, len); 695 705 if (ret < 0) { 696 706 dev_err(tqspi->dev, "failed to start RX DMA: %d\n", ret);
+14 -30
drivers/staging/rtl8723bs/core/rtw_security.c
··· 868 868 num_blocks, payload_index; 869 869 870 870 u8 pn_vector[6]; 871 - u8 mic_iv[16]; 872 - u8 mic_header1[16]; 873 - u8 mic_header2[16]; 874 - u8 ctr_preload[16]; 871 + u8 mic_iv[16] = {}; 872 + u8 mic_header1[16] = {}; 873 + u8 mic_header2[16] = {}; 874 + u8 ctr_preload[16] = {}; 875 875 876 876 /* Intermediate Buffers */ 877 - u8 chain_buffer[16]; 878 - u8 aes_out[16]; 879 - u8 padded_buffer[16]; 877 + u8 chain_buffer[16] = {}; 878 + u8 aes_out[16] = {}; 879 + u8 padded_buffer[16] = {}; 880 880 u8 mic[8]; 881 881 uint frtype = GetFrameType(pframe); 882 882 uint frsubtype = GetFrameSubType(pframe); 883 883 884 884 frsubtype = frsubtype>>4; 885 - 886 - memset((void *)mic_iv, 0, 16); 887 - memset((void *)mic_header1, 0, 16); 888 - memset((void *)mic_header2, 0, 16); 889 - memset((void *)ctr_preload, 0, 16); 890 - memset((void *)chain_buffer, 0, 16); 891 - memset((void *)aes_out, 0, 16); 892 - memset((void *)padded_buffer, 0, 16); 893 885 894 886 if ((hdrlen == WLAN_HDR_A3_LEN) || (hdrlen == WLAN_HDR_A3_QOS_LEN)) 895 887 a4_exists = 0; ··· 1072 1080 num_blocks, payload_index; 1073 1081 signed int res = _SUCCESS; 1074 1082 u8 pn_vector[6]; 1075 - u8 mic_iv[16]; 1076 - u8 mic_header1[16]; 1077 - u8 mic_header2[16]; 1078 - u8 ctr_preload[16]; 1083 + u8 mic_iv[16] = {}; 1084 + u8 mic_header1[16] = {}; 1085 + u8 mic_header2[16] = {}; 1086 + u8 ctr_preload[16] = {}; 1079 1087 1080 1088 /* Intermediate Buffers */ 1081 - u8 chain_buffer[16]; 1082 - u8 aes_out[16]; 1083 - u8 padded_buffer[16]; 1089 + u8 chain_buffer[16] = {}; 1090 + u8 aes_out[16] = {}; 1091 + u8 padded_buffer[16] = {}; 1084 1092 u8 mic[8]; 1085 1093 1086 1094 uint frtype = GetFrameType(pframe); 1087 1095 uint frsubtype = GetFrameSubType(pframe); 1088 1096 1089 1097 frsubtype = frsubtype>>4; 1090 - 1091 - memset((void *)mic_iv, 0, 16); 1092 - memset((void *)mic_header1, 0, 16); 1093 - memset((void *)mic_header2, 0, 16); 1094 - memset((void *)ctr_preload, 0, 16); 1095 - memset((void *)chain_buffer, 0, 16); 1096 - memset((void *)aes_out, 0, 16); 1097 - memset((void *)padded_buffer, 0, 16); 1098 1098 1099 1099 /* start to decrypt the payload */ 1100 1100
+3 -1
drivers/target/target_core_pr.c
··· 1842 1842 } 1843 1843 1844 1844 kmem_cache_free(t10_pr_reg_cache, dest_pr_reg); 1845 - core_scsi3_lunacl_undepend_item(dest_se_deve); 1845 + 1846 + if (dest_se_deve) 1847 + core_scsi3_lunacl_undepend_item(dest_se_deve); 1846 1848 1847 1849 if (is_local) 1848 1850 continue;
+12 -5
drivers/tty/serial/imx.c
··· 235 235 enum imx_tx_state tx_state; 236 236 struct hrtimer trigger_start_tx; 237 237 struct hrtimer trigger_stop_tx; 238 + unsigned int rxtl; 238 239 }; 239 240 240 241 struct imx_port_ucrs { ··· 1340 1339 1341 1340 #define TXTL_DEFAULT 8 1342 1341 #define RXTL_DEFAULT 8 /* 8 characters or aging timer */ 1342 + #define RXTL_CONSOLE_DEFAULT 1 1343 1343 #define TXTL_DMA 8 /* DMA burst setting */ 1344 1344 #define RXTL_DMA 9 /* DMA burst setting */ 1345 1345 ··· 1459 1457 ucr1 &= ~(UCR1_RXDMAEN | UCR1_TXDMAEN | UCR1_ATDMAEN); 1460 1458 imx_uart_writel(sport, ucr1, UCR1); 1461 1459 1462 - imx_uart_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT); 1460 + imx_uart_setup_ufcr(sport, TXTL_DEFAULT, sport->rxtl); 1463 1461 1464 1462 sport->dma_is_enabled = 0; 1465 1463 } ··· 1484 1482 return retval; 1485 1483 } 1486 1484 1487 - imx_uart_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT); 1485 + if (uart_console(&sport->port)) 1486 + sport->rxtl = RXTL_CONSOLE_DEFAULT; 1487 + else 1488 + sport->rxtl = RXTL_DEFAULT; 1489 + 1490 + imx_uart_setup_ufcr(sport, TXTL_DEFAULT, sport->rxtl); 1488 1491 1489 1492 /* disable the DREN bit (Data Ready interrupt enable) before 1490 1493 * requesting IRQs ··· 1955 1948 if (retval) 1956 1949 clk_disable_unprepare(sport->clk_ipg); 1957 1950 1958 - imx_uart_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT); 1951 + imx_uart_setup_ufcr(sport, TXTL_DEFAULT, sport->rxtl); 1959 1952 1960 1953 uart_port_lock_irqsave(&sport->port, &flags); 1961 1954 ··· 2047 2040 /* If the receiver trigger is 0, set it to a default value */ 2048 2041 ufcr = imx_uart_readl(sport, UFCR); 2049 2042 if ((ufcr & UFCR_RXTL_MASK) == 0) 2050 2043 imx_uart_setup_ufcr(sport, TXTL_DEFAULT, sport->rxtl); 2051 2044 imx_uart_start_rx(port); 2052 2045 } ··· 2309 2302 else 2310 2303 imx_uart_console_get_options(sport, &baud, &parity, &bits); 2311 2304 2312 - imx_uart_setup_ufcr(sport, TXTL_DEFAULT, RXTL_DEFAULT); 2305 + imx_uart_setup_ufcr(sport, TXTL_DEFAULT, sport->rxtl); 2313 2306 2314 2307 retval = uart_set_options(&sport->port, co, baud, parity, bits, flow); 2315 2308
+1
drivers/tty/serial/serial_base_bus.c
··· 72 72 dev->parent = parent_dev; 73 73 dev->bus = &serial_base_bus_type; 74 74 dev->release = release; 75 + device_set_of_node_from_dev(dev, parent_dev); 75 76 76 77 if (!serial_base_initialized) { 77 78 dev_dbg(port->dev, "uart_add_one_port() called before arch_initcall()?\n");
+1 -1
drivers/tty/vt/ucs.c
··· 206 206 207 207 /** 208 208 * ucs_get_fallback() - Get a substitution for the provided Unicode character 209 - * @base: Base Unicode code point (UCS-4) 209 + * @cp: Unicode code point (UCS-4) 210 210 * 211 211 * Get a simpler fallback character for the provided Unicode character. 212 212 * This is used for terminal display when corresponding glyph is unavailable.
+1
drivers/tty/vt/vt.c
··· 4650 4650 set_palette(vc); 4651 4651 set_cursor(vc); 4652 4652 vt_event_post(VT_EVENT_UNBLANK, vc->vc_num, vc->vc_num); 4653 + notify_update(vc); 4653 4654 } 4654 4655 EXPORT_SYMBOL(do_unblank_screen); 4655 4656
+2 -1
drivers/ufs/core/ufshcd.c
··· 7807 7807 hba->silence_err_logs = false; 7808 7808 7809 7809 /* scale up clocks to max frequency before full reinitialization */ 7810 - ufshcd_scale_clks(hba, ULONG_MAX, true); 7810 + if (ufshcd_is_clkscaling_supported(hba)) 7811 + ufshcd_scale_clks(hba, ULONG_MAX, true); 7811 7812 7812 7813 err = ufshcd_hba_enable(hba); 7813 7814
+9 -4
fs/bcachefs/alloc_background.c
··· 1406 1406 : BCH_DATA_free; 1407 1407 struct printbuf buf = PRINTBUF; 1408 1408 1409 + unsigned fsck_flags = (async_repair ? FSCK_ERR_NO_LOG : 0)| 1410 + FSCK_CAN_FIX|FSCK_CAN_IGNORE; 1411 + 1409 1412 struct bpos bucket = iter->pos; 1410 1413 bucket.offset &= ~(~0ULL << 56); 1411 1414 u64 genbits = iter->pos.offset & (~0ULL << 56); ··· 1422 1419 return ret; 1423 1420 1424 1421 if (!bch2_dev_bucket_exists(c, bucket)) { 1425 - if (fsck_err(trans, need_discard_freespace_key_to_invalid_dev_bucket, 1426 - "entry in %s btree for nonexistant dev:bucket %llu:%llu", 1427 - bch2_btree_id_str(iter->btree_id), bucket.inode, bucket.offset)) 1422 + if (__fsck_err(trans, fsck_flags, 1423 + need_discard_freespace_key_to_invalid_dev_bucket, 1424 + "entry in %s btree for nonexistant dev:bucket %llu:%llu", 1425 + bch2_btree_id_str(iter->btree_id), bucket.inode, bucket.offset)) 1428 1426 goto delete; 1429 1427 ret = 1; 1430 1428 goto out; ··· 1437 1433 if (a->data_type != state || 1438 1434 (state == BCH_DATA_free && 1439 1435 genbits != alloc_freespace_genbits(*a))) { 1440 - if (fsck_err(trans, need_discard_freespace_key_bad, 1436 + if (__fsck_err(trans, fsck_flags, 1437 + need_discard_freespace_key_bad, 1441 1438 "%s\nincorrectly set at %s:%llu:%llu:0 (free %u, genbits %llu should be %llu)", 1442 1439 (bch2_bkey_val_to_text(&buf, c, alloc_k), buf.buf), 1443 1440 bch2_btree_id_str(iter->btree_id),
+1 -1
fs/bcachefs/backpointers.c
··· 353 353 return ret ? bkey_s_c_err(ret) : bkey_s_c_null; 354 354 } else { 355 355 struct btree *b = __bch2_backpointer_get_node(trans, bp, iter, last_flushed, commit); 356 - if (b == ERR_PTR(bch_err_throw(c, backpointer_to_overwritten_btree_node))) 356 + if (b == ERR_PTR(-BCH_ERR_backpointer_to_overwritten_btree_node)) 357 357 return bkey_s_c_null; 358 358 if (IS_ERR_OR_NULL(b)) 359 359 return ((struct bkey_s_c) { .k = ERR_CAST(b) });
+2 -1
fs/bcachefs/bcachefs.h
··· 767 767 x(sysfs) \ 768 768 x(btree_write_buffer) \ 769 769 x(btree_node_scrub) \ 770 - x(async_recovery_passes) 770 + x(async_recovery_passes) \ 771 + x(ioctl_data) 771 772 772 773 enum bch_write_ref { 773 774 #define x(n) BCH_WRITE_REF_##n,
+25 -12
fs/bcachefs/btree_gc.c
··· 503 503 prt_newline(&buf); 504 504 bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(&b->key)); 505 505 506 + /* 507 + * XXX: we're not passing the trans object here because we're not set up 508 + * to handle a transaction restart - this code needs to be rewritten 509 + * when we start doing online topology repair 510 + */ 511 + bch2_trans_unlock_long(trans); 506 512 if (mustfix_fsck_err_on(!have_child, 507 - trans, btree_node_topology_interior_node_empty, 513 + c, btree_node_topology_interior_node_empty, 508 514 "empty interior btree node at %s", buf.buf)) 509 515 ret = DROP_THIS_NODE; 510 516 err: ··· 534 528 return ret; 535 529 } 536 530 537 - static int bch2_check_root(struct btree_trans *trans, enum btree_id i, 531 + static int bch2_check_root(struct btree_trans *trans, enum btree_id btree, 538 532 bool *reconstructed_root) 539 533 { 540 534 struct bch_fs *c = trans->c; 541 - struct btree_root *r = bch2_btree_id_root(c, i); 535 + struct btree_root *r = bch2_btree_id_root(c, btree); 542 536 struct printbuf buf = PRINTBUF; 543 537 int ret = 0; 544 538 545 - bch2_btree_id_to_text(&buf, i); 539 + bch2_btree_id_to_text(&buf, btree); 546 540 547 541 if (r->error) { 548 542 bch_info(c, "btree root %s unreadable, must recover from scan", buf.buf); 549 543 550 - r->alive = false; 551 - r->error = 0; 544 + ret = bch2_btree_has_scanned_nodes(c, btree); 545 + if (ret < 0) 546 + goto err; 552 547 553 - if (!bch2_btree_has_scanned_nodes(c, i)) { 548 + if (!ret) { 554 549 __fsck_err(trans, 555 - FSCK_CAN_FIX|(!btree_id_important(i) ? FSCK_AUTOFIX : 0), 550 + FSCK_CAN_FIX|(!btree_id_important(btree) ? FSCK_AUTOFIX : 0), 556 551 btree_root_unreadable_and_scan_found_nothing, 557 552 "no nodes found for btree %s, continue?", buf.buf); 558 - bch2_btree_root_alloc_fake_trans(trans, i, 0); 553 + 554 + r->alive = false; 555 + r->error = 0; 556 + bch2_btree_root_alloc_fake_trans(trans, btree, 0); 559 557 } else { 560 - bch2_btree_root_alloc_fake_trans(trans, i, 1); 561 - bch2_shoot_down_journal_keys(c, i, 1, BTREE_MAX_DEPTH, POS_MIN, SPOS_MAX); 562 - ret = bch2_get_scanned_nodes(c, i, 0, POS_MIN, SPOS_MAX); 558 + r->alive = false; 559 + r->error = 0; 560 + bch2_btree_root_alloc_fake_trans(trans, btree, 1); 561 + 562 + bch2_shoot_down_journal_keys(c, btree, 1, BTREE_MAX_DEPTH, POS_MIN, SPOS_MAX); 563 + ret = bch2_get_scanned_nodes(c, btree, 0, POS_MIN, SPOS_MAX); 563 564 if (ret) 564 565 goto err; 565 566 }
+31 -43
fs/bcachefs/btree_io.c
··· 557 557 const char *fmt, ...) 558 558 { 559 559 if (c->recovery.curr_pass == BCH_RECOVERY_PASS_scan_for_btree_nodes) 560 - return bch_err_throw(c, fsck_fix); 560 + return ret == -BCH_ERR_btree_node_read_err_fixable 561 + ? bch_err_throw(c, fsck_fix) 562 + : ret; 561 563 562 564 bool have_retry = false; 563 565 int ret2; ··· 725 723 726 724 static int validate_bset(struct bch_fs *c, struct bch_dev *ca, 727 725 struct btree *b, struct bset *i, 728 - unsigned offset, unsigned sectors, int write, 726 + unsigned offset, int write, 729 727 struct bch_io_failures *failed, 730 728 struct printbuf *err_msg) 731 729 { 732 730 unsigned version = le16_to_cpu(i->version); 733 - unsigned ptr_written = btree_ptr_sectors_written(bkey_i_to_s_c(&b->key)); 734 731 struct printbuf buf1 = PRINTBUF; 735 732 struct printbuf buf2 = PRINTBUF; 736 733 int ret = 0; ··· 778 777 c, ca, b, i, NULL, 779 778 btree_node_unsupported_version, 780 779 "BSET_SEPARATE_WHITEOUTS no longer supported"); 781 - 782 - if (!write && 783 - btree_err_on(offset + sectors > (ptr_written ?: btree_sectors(c)), 784 - -BCH_ERR_btree_node_read_err_fixable, 785 - c, ca, b, i, NULL, 786 - bset_past_end_of_btree_node, 787 - "bset past end of btree node (offset %u len %u but written %zu)", 788 - offset, sectors, ptr_written ?: btree_sectors(c))) 789 - i->u64s = 0; 790 780 791 781 btree_err_on(offset && !i->u64s, 792 782 -BCH_ERR_btree_node_read_err_fixable, ··· 1143 1151 "unknown checksum type %llu", BSET_CSUM_TYPE(i)); 1144 1152 1145 1153 if (first) { 1154 + sectors = vstruct_sectors(b->data, c->block_bits); 1155 + if (btree_err_on(b->written + sectors > (ptr_written ?: btree_sectors(c)), 1156 + -BCH_ERR_btree_node_read_err_fixable, 1157 + c, ca, b, i, NULL, 1158 + bset_past_end_of_btree_node, 1159 + "bset past end of btree node (offset %u len %u but written %zu)", 1160 + b->written, sectors, ptr_written ?: btree_sectors(c))) 1161 + i->u64s = 0; 1146 1162 if (good_csum_type) { 1147 1163 struct bch_csum csum = csum_vstruct(c, BSET_CSUM_TYPE(i), nonce, b->data); 1148 1164 bool csum_bad = bch2_crc_cmp(b->data->csum, csum); ··· 1178 1178 c, NULL, b, NULL, NULL, 1179 1179 btree_node_unsupported_version, 1180 1180 "btree node does not have NEW_EXTENT_OVERWRITE set"); 1181 - 1182 - sectors = vstruct_sectors(b->data, c->block_bits); 1183 1181 } else { 1182 + sectors = vstruct_sectors(bne, c->block_bits); 1183 + if (btree_err_on(b->written + sectors > (ptr_written ?: btree_sectors(c)), 1184 + -BCH_ERR_btree_node_read_err_fixable, 1185 + c, ca, b, i, NULL, 1186 + bset_past_end_of_btree_node, 1187 + "bset past end of btree node (offset %u len %u but written %zu)", 1188 + b->written, sectors, ptr_written ?: btree_sectors(c))) 1189 + i->u64s = 0; 1184 1190 if (good_csum_type) { 1185 1191 struct bch_csum csum = csum_vstruct(c, BSET_CSUM_TYPE(i), nonce, bne); 1186 1192 bool csum_bad = bch2_crc_cmp(bne->csum, csum); ··· 1207 1201 "decrypting btree node: %s", bch2_err_str(ret))) 1208 1202 goto fsck_err; 1209 1203 } 1210 - 1211 - sectors = vstruct_sectors(bne, c->block_bits); 1212 1204 } 1213 1205 1214 1206 b->version_ondisk = min(b->version_ondisk, 1215 1207 le16_to_cpu(i->version)); 1216 1208 1217 - ret = validate_bset(c, ca, b, i, b->written, sectors, READ, failed, err_msg); 1209 + ret = validate_bset(c, ca, b, i, b->written, READ, failed, err_msg); 1218 1210 if (ret) 1219 1211 goto fsck_err; 1220 1212 ··· 1986 1982 prt_newline(&err); 1987 1983 1988 1984 if (!btree_node_scrub_check(c, scrub->buf, scrub->written, &err)) { 1989 - struct btree_trans *trans = bch2_trans_get(c); 1990 - 1991 - struct btree_iter iter; 1992 - bch2_trans_node_iter_init(trans, &iter, scrub->btree, 1993 - scrub->key.k->k.p, 0, scrub->level - 1, 0); 1994 - 1995 - struct btree *b; 1996 - int ret = lockrestart_do(trans, 1997 - PTR_ERR_OR_ZERO(b = bch2_btree_iter_peek_node(trans, &iter))); 1998 - if (ret) 1999 - goto err; 2000 - 2001 - if (bkey_i_to_btree_ptr_v2(&b->key)->v.seq == scrub->seq) { 2002 - bch_err(c, "error validating btree node during scrub on %s at btree %s", 2003 - scrub->ca->name, err.buf); 2004 - 2005 - ret = bch2_btree_node_rewrite(trans, &iter, b, 0, 0); 2006 - } 2007 - err: 2008 - bch2_trans_iter_exit(trans, &iter); 2009 - bch2_trans_begin(trans); 2010 - bch2_trans_put(trans); 1985 + int ret = bch2_trans_do(c, 1986 + bch2_btree_node_rewrite_key(trans, scrub->btree, scrub->level - 1, 1987 + scrub->key.k, 0)); 1988 + if (!bch2_err_matches(ret, ENOENT) && 1989 + !bch2_err_matches(ret, EROFS)) 1990 + bch_err_fn_ratelimited(c, ret); 2011 1991 } 2012 1992 2013 1993 printbuf_exit(&err); ··· 2255 2267 } 2256 2268 2257 2269 static int validate_bset_for_write(struct bch_fs *c, struct btree *b, 2258 - struct bset *i, unsigned sectors) 2270 + struct bset *i) 2259 2271 { 2260 2272 int ret = bch2_bkey_validate(c, bkey_i_to_s_c(&b->key), 2261 2273 (struct bkey_validate_context) { ··· 2270 2282 } 2271 2283 2272 2284 ret = validate_bset_keys(c, b, i, WRITE, NULL, NULL) ?: 2273 2285 validate_bset(c, NULL, b, i, b->written, WRITE, NULL, NULL); 2274 2286 if (ret) { 2275 2287 bch2_inconsistent_error(c); 2276 2288 dump_stack(); ··· 2463 2475 2464 2476 /* if we're going to be encrypting, check metadata validity first: */ 2465 2477 if (validate_before_checksum && 2466 - validate_bset_for_write(c, b, i, sectors_to_write)) 2478 + validate_bset_for_write(c, b, i)) 2467 2479 goto err; 2468 2480 2469 2481 ret = bset_encrypt(c, i, b->written << 9); ··· 2480 2492 2481 2493 /* if we're not encrypting, check metadata after checksumming: */ 2482 2494 if (!validate_before_checksum && 2483 - validate_bset_for_write(c, b, i, sectors_to_write)) 2495 + validate_bset_for_write(c, b, i)) 2484 2496 goto err; 2485 2497 2486 2498 /*
+120 -55
fs/bcachefs/btree_iter.c
··· 2076 2076 2077 2077 static noinline 2078 2078 void bch2_btree_trans_peek_prev_updates(struct btree_trans *trans, struct btree_iter *iter, 2079 - struct bkey_s_c *k) 2079 + struct bpos search_key, struct bkey_s_c *k) 2080 2080 { 2081 2081 struct bpos end = path_l(btree_iter_path(trans, iter))->b->data->min_key; 2082 2082 2083 2083 trans_for_each_update(trans, i) 2084 2084 if (!i->key_cache_already_flushed && 2085 2085 i->btree_id == iter->btree_id && 2086 - bpos_le(i->k->k.p, iter->pos) && 2086 + bpos_le(i->k->k.p, search_key) && 2087 2087 bpos_ge(i->k->k.p, k->k ? k->k->p : end)) { 2088 2088 iter->k = i->k->k; 2089 2089 *k = bkey_i_to_s_c(i->k); ··· 2092 2092 2093 2093 static noinline 2094 2094 void bch2_btree_trans_peek_updates(struct btree_trans *trans, struct btree_iter *iter, 2095 + struct bpos search_key, 2095 2096 struct bkey_s_c *k) 2096 2097 { 2097 2098 struct btree_path *path = btree_iter_path(trans, iter); ··· 2101 2100 trans_for_each_update(trans, i) 2102 2101 if (!i->key_cache_already_flushed && 2103 2102 i->btree_id == iter->btree_id && 2104 - bpos_ge(i->k->k.p, path->pos) && 2103 + bpos_ge(i->k->k.p, search_key) && 2105 2104 bpos_le(i->k->k.p, k->k ? k->k->p : end)) { 2106 2105 iter->k = i->k->k; 2107 2106 *k = bkey_i_to_s_c(i->k); ··· 2123 2122 2124 2123 static struct bkey_i *bch2_btree_journal_peek(struct btree_trans *trans, 2125 2124 struct btree_iter *iter, 2125 + struct bpos search_pos, 2126 2126 struct bpos end_pos) 2127 2127 { 2128 2128 struct btree_path *path = btree_iter_path(trans, iter); 2129 2129 2130 2130 return bch2_journal_keys_peek_max(trans->c, iter->btree_id, 2131 2131 path->level, 2132 - path->pos, 2132 + search_pos, 2133 2133 end_pos, 2134 2134 &iter->journal_idx); 2135 2135 } ··· 2140 2138 struct btree_iter *iter) 2141 2139 { 2142 2140 struct btree_path *path = btree_iter_path(trans, iter); 2143 - struct bkey_i *k = bch2_btree_journal_peek(trans, iter, path->pos); 2141 + struct bkey_i *k = bch2_btree_journal_peek(trans, iter, path->pos, path->pos); 2144 2142 2145 2143 if (k) { 2146 2144 iter->k = k->k; ··· 2153 2151 static noinline 2154 2152 void btree_trans_peek_journal(struct btree_trans *trans, 2155 2153 struct btree_iter *iter, 2154 + struct bpos search_key, 2156 2155 struct bkey_s_c *k) 2157 2156 { 2158 2157 struct btree_path *path = btree_iter_path(trans, iter); 2159 2158 struct bkey_i *next_journal = 2160 - bch2_btree_journal_peek(trans, iter, 2159 + bch2_btree_journal_peek(trans, iter, search_key, 2161 2160 k->k ? k->k->p : path_l(path)->b->key.k.p); 2162 2161 if (next_journal) { 2163 2162 iter->k = next_journal->k; ··· 2168 2165 2169 2166 static struct bkey_i *bch2_btree_journal_peek_prev(struct btree_trans *trans, 2170 2167 struct btree_iter *iter, 2168 + struct bpos search_key, 2171 2169 struct bpos end_pos) 2172 2170 { 2173 2171 struct btree_path *path = btree_iter_path(trans, iter); 2174 2172 2175 2173 return bch2_journal_keys_peek_prev_min(trans->c, iter->btree_id, 2176 2174 path->level, 2177 - path->pos, 2175 + search_key, 2178 2176 end_pos, 2179 2177 &iter->journal_idx); 2180 2178 } ··· 2183 2179 static noinline 2184 2180 void btree_trans_peek_prev_journal(struct btree_trans *trans, 2185 2181 struct btree_iter *iter, 2182 + struct bpos search_key, 2186 2183 struct bkey_s_c *k) 2187 2184 { 2188 2185 struct btree_path *path = btree_iter_path(trans, iter); 2189 2186 struct bkey_i *next_journal = 2190 - bch2_btree_journal_peek_prev(trans, iter, 2187 + bch2_btree_journal_peek_prev(trans, iter, search_key, 2191 2188 k->k ? k->k->p : path_l(path)->b->key.k.p); 2192 2189 2193 2190 if (next_journal) { ··· 2297 2292 } 2298 2293 2299 2294 if (unlikely(iter->flags & BTREE_ITER_with_journal)) 2300 - btree_trans_peek_journal(trans, iter, &k); 2295 + btree_trans_peek_journal(trans, iter, search_key, &k); 2301 2296 2302 2297 if (unlikely((iter->flags & BTREE_ITER_with_updates) && 2303 2298 trans->nr_updates)) 2304 - bch2_btree_trans_peek_updates(trans, iter, &k); 2299 + bch2_btree_trans_peek_updates(trans, iter, search_key, &k); 2305 2300 2306 2301 if (k.k && bkey_deleted(k.k)) { 2307 2302 /* ··· 2331 2326 } 2332 2327 2333 2328 bch2_btree_iter_verify(trans, iter); 2329 + 2330 + if (trace___btree_iter_peek_enabled()) { 2331 + CLASS(printbuf, buf)(); 2332 + 2333 + int ret = bkey_err(k); 2334 + if (ret) 2335 + prt_str(&buf, bch2_err_str(ret)); 2336 + else if (k.k) 2337 + bch2_bkey_val_to_text(&buf, trans->c, k); 2338 + else 2339 + prt_str(&buf, "(null)"); 2340 + trace___btree_iter_peek(trans->c, buf.buf); 2341 + } 2342 + 2334 2343 return k; 2335 2344 } ··· 2503 2484 2504 2485 bch2_btree_iter_verify_entry_exit(iter); 2505 2486 2487 + if (trace_btree_iter_peek_max_enabled()) { 2488 + CLASS(printbuf, buf)(); 2489 + 2490 + int ret = bkey_err(k); 2491 + if (ret) 2492 + prt_str(&buf, bch2_err_str(ret)); 2493 + else if (k.k) 2494 + bch2_bkey_val_to_text(&buf, trans->c, k); 2495 + else 2496 + prt_str(&buf, "(null)"); 2497 + trace_btree_iter_peek_max(trans->c, buf.buf); 2498 + } 2499 + 2506 2500 return k; 2507 2501 end: 2508 2502 bch2_btree_iter_set_pos(trans, iter, end); ··· 2589 2557 } 2590 2558 2591 2559 if (unlikely(iter->flags & BTREE_ITER_with_journal)) 2592 - btree_trans_peek_prev_journal(trans, iter, &k); 2560 + btree_trans_peek_prev_journal(trans, iter, search_key, &k); 2593 2561 2594 2562 if (unlikely((iter->flags & BTREE_ITER_with_updates) && 2595 2563 trans->nr_updates)) 2596 - bch2_btree_trans_peek_prev_updates(trans, iter, &k); 2564 + bch2_btree_trans_peek_prev_updates(trans, iter, search_key, &k); 2597 2565 2598 2566 if (likely(k.k && !bkey_deleted(k.k))) { 2599 2567 break; ··· 2756 2724 2757 2725 bch2_btree_iter_verify_entry_exit(iter); 2758 2726 bch2_btree_iter_verify(trans, iter); 2727 + 2728 + if (trace_btree_iter_peek_prev_min_enabled()) { 2729 + CLASS(printbuf, buf)(); 2730 + 2731 + int ret = bkey_err(k); 2732 + if (ret) 2733 + prt_str(&buf, bch2_err_str(ret)); 2734 + else if (k.k) 2735 + bch2_bkey_val_to_text(&buf, trans->c, k); 2736 + else 2737 + prt_str(&buf, "(null)"); 2738 + trace_btree_iter_peek_prev_min(trans->c, buf.buf); 2739 + } 2759 2740 return k; 2760 2741 end: 2761 2742 bch2_btree_iter_set_pos(trans, iter, end); ··· 2812 2767 /* extents can't span inode numbers: */ 2813 2768 if ((iter->flags & BTREE_ITER_is_extents) && 2814 2769 unlikely(iter->pos.offset == KEY_OFFSET_MAX)) { 2815 - if (iter->pos.inode == KEY_INODE_MAX) 2816 - return bkey_s_c_null; 2770 + if (iter->pos.inode == KEY_INODE_MAX) { 2771 + k = bkey_s_c_null; 2772 + goto out2; 2773 + } 2817 2774 2818 2775 bch2_btree_iter_set_pos(trans, iter, bpos_nosnap_successor(iter->pos)); 2819 2776 } ··· 2832 2785 } 2833 2786 2834 2787 struct btree_path *path = btree_iter_path(trans, iter); 2835 - if (unlikely(!btree_path_node(path, path->level))) 2836 - return bkey_s_c_null; 2788 + if (unlikely(!btree_path_node(path, path->level))) { 2789 + k = bkey_s_c_null; 2790 + goto out2; 2791 + } 2837 2792 2838 2793 btree_path_set_should_be_locked(trans, path); ··· 2928 2879 bch2_btree_iter_verify(trans, iter); 2929 2880 ret = bch2_btree_iter_verify_ret(trans, iter, k); 2930 2881 if (unlikely(ret)) 2931 - return bkey_s_c_err(ret); 2882 + k = bkey_s_c_err(ret); 2883 + out2: 2884 + if (trace_btree_iter_peek_slot_enabled()) { 2885 + CLASS(printbuf, buf)(); 2886 + 2887 + int ret = bkey_err(k); 2888 + if (ret) 2889 + prt_str(&buf, bch2_err_str(ret)); 2890 + else if (k.k) 2891 + bch2_bkey_val_to_text(&buf, trans->c, k); 2892 + else 2893 + prt_str(&buf, "(null)"); 2894 + trace_btree_iter_peek_slot(trans->c, buf.buf); 2895 + } 2932 2896 2933 2897 return k; 2934 2898 } ··· 3194 3132 if (WARN_ON_ONCE(new_bytes > BTREE_TRANS_MEM_MAX)) { 3195 3133 #ifdef CONFIG_BCACHEFS_TRANS_KMALLOC_TRACE 3196 3134 struct printbuf buf = PRINTBUF; 3135 + bch2_log_msg_start(c, &buf); 3136 + prt_printf(&buf, "bump allocator exceeded BTREE_TRANS_MEM_MAX (%u)\n", 3137 + BTREE_TRANS_MEM_MAX); 3138 + 3197 3139 bch2_trans_kmalloc_trace_to_text(&buf, &trans->trans_kmalloc_trace); 3198 3140 bch2_print_str(c, KERN_ERR, buf.buf); 3199 3141 printbuf_exit(&buf); ··· 3225 3159 mutex_unlock(&s->lock); 3226 3160 } 3227 3161 3228 - if (trans->used_mempool) { 3229 - if (trans->mem_bytes >= new_bytes) 3230 - goto out_change_top; 3231 - 3232 - /* No more space from mempool item, need malloc new one */ 3233 - new_mem = kmalloc(new_bytes, GFP_NOWAIT|__GFP_NOWARN); 3234 - if (unlikely(!new_mem)) { 3235 - bch2_trans_unlock(trans); 3236 - 3237 - new_mem = kmalloc(new_bytes, GFP_KERNEL); 3238 - if (!new_mem) 3239 - return ERR_PTR(-BCH_ERR_ENOMEM_trans_kmalloc); 3240 - 3241 - ret = bch2_trans_relock(trans); 3242 - if (ret) { 3243 - kfree(new_mem); 3244 - return ERR_PTR(ret); 3245 - } 3246 - } 3247 - memcpy(new_mem, trans->mem, trans->mem_top); 3248 - trans->used_mempool = false; 3249 - mempool_free(trans->mem, &c->btree_trans_mem_pool); 3250 - goto out_new_mem; 3162 + if (trans->used_mempool || new_bytes > BTREE_TRANS_MEM_MAX) { 3163 + EBUG_ON(trans->mem_bytes >= new_bytes); 3164 + return ERR_PTR(-BCH_ERR_ENOMEM_trans_kmalloc); 3251 3165 } 3252 3166 3253 - new_mem = krealloc(trans->mem, new_bytes, GFP_NOWAIT|__GFP_NOWARN); 3167 + if (old_bytes) { 3168 + trans->realloc_bytes_required = new_bytes; 3169 + trace_and_count(c, trans_restart_mem_realloced, trans, _RET_IP_, new_bytes); 3170 + return ERR_PTR(btree_trans_restart_ip(trans, 3171 + BCH_ERR_transaction_restart_mem_realloced, _RET_IP_)); 3172 + } 3173 + 3174 + EBUG_ON(trans->mem); 3175 + 3176 + new_mem = kmalloc(new_bytes, GFP_NOWAIT|__GFP_NOWARN); 3254 3177 if (unlikely(!new_mem)) { 3255 3178 bch2_trans_unlock(trans); 3256 3179 3257 - new_mem = krealloc(trans->mem, new_bytes, GFP_KERNEL); 3180 + new_mem = kmalloc(new_bytes, GFP_KERNEL); 3258 3181 if (!new_mem && new_bytes <= BTREE_TRANS_MEM_MAX) { 3259 3182 new_mem = mempool_alloc(&c->btree_trans_mem_pool, GFP_KERNEL); 3260 3183 new_bytes = BTREE_TRANS_MEM_MAX; 3261 - memcpy(new_mem, trans->mem, trans->mem_top); 3262 3184 trans->used_mempool = true; 3263 - kfree(trans->mem); 3264 3185 } 3265 3186 3266 - if (!new_mem) 3267 - return ERR_PTR(-BCH_ERR_ENOMEM_trans_kmalloc); 3187 + EBUG_ON(!new_mem); 3268 3188 3269 3189 trans->mem = new_mem; 3270 3190 trans->mem_bytes = new_bytes; ··· 3259 3207 if (ret) 3260 3208 return ERR_PTR(ret); 3261 3209 } 3262 - out_new_mem: 3210 + 3263 3211 trans->mem = new_mem; 3264 3212 trans->mem_bytes = new_bytes; 3265 - 3266 - if (old_bytes) { 3267 - trace_and_count(c, trans_restart_mem_realloced, trans, _RET_IP_, new_bytes); 3268 - return ERR_PTR(btree_trans_restart_ip(trans, 3269 - BCH_ERR_transaction_restart_mem_realloced, _RET_IP_)); 3270 - } 3271 - out_change_top: 3272 - bch2_trans_kmalloc_trace(trans, size, ip); 3273 3213 3274 3214 p = trans->mem + trans->mem_top; 3275 3215 trans->mem_top += size; ··· 3322 3278 3323 3279 trans->restart_count++; 3324 3280 trans->mem_top = 0; 3281 + 3282 + if (trans->restarted == BCH_ERR_transaction_restart_mem_realloced) { 3283 + EBUG_ON(!trans->mem || !trans->mem_bytes); 3284 + unsigned new_bytes = trans->realloc_bytes_required; 3285 + void *new_mem = krealloc(trans->mem, new_bytes, GFP_NOWAIT|__GFP_NOWARN); 3286 + if (unlikely(!new_mem)) { 3287 + bch2_trans_unlock(trans); 3288 + new_mem = krealloc(trans->mem, new_bytes, GFP_KERNEL); 3289 + 3290 + EBUG_ON(new_bytes > BTREE_TRANS_MEM_MAX); 3291 + 3292 + if (!new_mem) { 3293 + new_mem = mempool_alloc(&trans->c->btree_trans_mem_pool, GFP_KERNEL); 3294 + new_bytes = BTREE_TRANS_MEM_MAX; 3295 + trans->used_mempool = true; 3296 + kfree(trans->mem); 3297 + } 3298 + } 3299 + trans->mem = new_mem; 3300 + trans->mem_bytes = new_bytes; 3301 + } 3325 3302 3326 3303 trans_for_each_path(trans, path, i) { 3327 3304 path->should_be_locked = false;
+53 -25
fs/bcachefs/btree_journal_iter.c
··· 137 137 struct journal_key *k; 138 138 139 139 BUG_ON(*idx > keys->nr); 140 + 141 + if (!keys->nr) 142 + return NULL; 140 143 search: 141 144 if (!*idx) 142 145 *idx = __bch2_journal_key_search(keys, btree_id, level, pos); 143 146 144 - while (*idx && 145 - __journal_key_cmp(btree_id, level, end_pos, idx_to_key(keys, *idx - 1)) <= 0) { 147 + while (*idx < keys->nr && 148 + __journal_key_cmp(btree_id, level, end_pos, idx_to_key(keys, *idx)) >= 0) { 146 149 (*idx)++; 147 150 iters++; 148 151 if (iters == 10) { ··· 154 151 } 155 152 } 156 153 154 + if (*idx == keys->nr) 155 + --(*idx); 156 + 157 157 struct bkey_i *ret = NULL; 158 158 rcu_read_lock(); /* for overwritten_ranges */ 159 159 160 - while ((k = *idx < keys->nr ? idx_to_key(keys, *idx) : NULL)) { 160 + while (true) { 161 + k = idx_to_key(keys, *idx); 161 162 if (__journal_key_cmp(btree_id, level, end_pos, k) > 0) 162 163 break; 163 164 164 165 if (k->overwritten) { 165 166 if (k->overwritten_range) 166 - *idx = rcu_dereference(k->overwritten_range)->start - 1; 167 - else 168 - *idx -= 1; 167 + *idx = rcu_dereference(k->overwritten_range)->start; 168 + if (!*idx) 169 + break; 170 + --(*idx); 169 171 continue; 170 172 } 171 173 ··· 179 171 break; 180 172 } 181 173 174 + if (!*idx) 175 + break; 182 176 --(*idx); 183 177 iters++; 184 178 if (iters == 10) { ··· 651 641 { 652 642 const struct journal_key *l = _l; 653 643 const struct journal_key *r = _r; 644 + int rewind = l->rewind && r->rewind ? 
-1 : 1; 654 645 655 646 return journal_key_cmp(l, r) ?: 656 - cmp_int(l->journal_seq, r->journal_seq) ?: 657 - cmp_int(l->journal_offset, r->journal_offset); 647 + ((cmp_int(l->journal_seq, r->journal_seq) ?: 648 + cmp_int(l->journal_offset, r->journal_offset)) * rewind); 658 649 } 659 650 660 651 void bch2_journal_keys_put(struct bch_fs *c) ··· 724 713 struct journal_keys *keys = &c->journal_keys; 725 714 size_t nr_read = 0; 726 715 716 + u64 rewind_seq = c->opts.journal_rewind ?: U64_MAX; 717 + 727 718 genradix_for_each(&c->journal_entries, iter, _i) { 728 719 i = *_i; 729 720 ··· 734 721 735 722 cond_resched(); 736 723 737 - for_each_jset_key(k, entry, &i->j) { 738 - struct journal_key n = (struct journal_key) { 739 - .btree_id = entry->btree_id, 740 - .level = entry->level, 741 - .k = k, 742 - .journal_seq = le64_to_cpu(i->j.seq), 743 - .journal_offset = k->_data - i->j._data, 744 - }; 724 + vstruct_for_each(&i->j, entry) { 725 + bool rewind = !entry->level && 726 + !btree_id_is_alloc(entry->btree_id) && 727 + le64_to_cpu(i->j.seq) >= rewind_seq; 745 728 746 - if (darray_push(keys, n)) { 747 - __journal_keys_sort(keys); 729 + if (entry->type != (rewind 730 + ? 
BCH_JSET_ENTRY_overwrite 731 + : BCH_JSET_ENTRY_btree_keys)) 732 + continue; 748 733 749 - if (keys->nr * 8 > keys->size * 7) { 750 - bch_err(c, "Too many journal keys for slowpath; have %zu compacted, buf size %zu, processed %zu keys at seq %llu", 751 - keys->nr, keys->size, nr_read, le64_to_cpu(i->j.seq)); 752 - return bch_err_throw(c, ENOMEM_journal_keys_sort); 734 + if (!rewind && le64_to_cpu(i->j.seq) < c->journal_replay_seq_start) 735 + continue; 736 + 737 + jset_entry_for_each_key(entry, k) { 738 + struct journal_key n = (struct journal_key) { 739 + .btree_id = entry->btree_id, 740 + .level = entry->level, 741 + .rewind = rewind, 742 + .k = k, 743 + .journal_seq = le64_to_cpu(i->j.seq), 744 + .journal_offset = k->_data - i->j._data, 745 + }; 746 + 747 + if (darray_push(keys, n)) { 748 + __journal_keys_sort(keys); 749 + 750 + if (keys->nr * 8 > keys->size * 7) { 751 + bch_err(c, "Too many journal keys for slowpath; have %zu compacted, buf size %zu, processed %zu keys at seq %llu", 752 + keys->nr, keys->size, nr_read, le64_to_cpu(i->j.seq)); 753 + return bch_err_throw(c, ENOMEM_journal_keys_sort); 754 + } 755 + 756 + BUG_ON(darray_push(keys, n)); 753 757 } 754 758 755 - BUG_ON(darray_push(keys, n)); 759 + nr_read++; 756 760 } 757 - 758 - nr_read++; 759 761 } 760 762 } 761 763
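The sort-key change above multiplies the sequence/offset tie-break by `-1` when both keys carry the new `rewind` flag, so rewound entries sort newest-first. A minimal standalone sketch of that comparator shape (simplified to just the sequence number; `jkey_cmp` and its parameters are illustrative, not the kernel's actual signature):

```c
#include <assert.h>

/* cmp_int() mirrors the kernel helper: returns -1, 0, or 1. */
static int cmp_int(unsigned long long l, unsigned long long r)
{
	return (l > r) - (l < r);
}

/* Rewind-aware tie-break: when both keys are flagged for rewind, the
 * sequence comparison is inverted so later journal entries sort first.
 * The real comparator also compares btree id, level, position, and
 * journal_offset before falling through to this. */
static int jkey_cmp(unsigned long long lseq, int lrewind,
		    unsigned long long rseq, int rrewind)
{
	int rewind = lrewind && rrewind ? -1 : 1;

	return cmp_int(lseq, rseq) * rewind;
}
```

Multiplying only the tie-break (not the primary key comparison) keeps keys grouped by btree position while reversing which journal version of a key wins.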
+3 -2
fs/bcachefs/btree_journal_iter_types.h
··· 11 11 u32 journal_offset; 12 12 enum btree_id btree_id:8; 13 13 unsigned level:8; 14 - bool allocated; 15 - bool overwritten; 14 + bool allocated:1; 15 + bool overwritten:1; 16 + bool rewind:1; 16 17 struct journal_key_range_overwritten __rcu * 17 18 overwritten_range; 18 19 struct bkey_i *k;
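The `btree_journal_iter_types.h` hunk converts the plain `bool` members to 1-bit bitfields so the new `rewind` flag fits without growing `struct journal_key`. A small sketch of the space difference (struct names here are illustrative, not the kernel's):

```c
#include <stdbool.h>

/* Three plain bools occupy at least one byte each. */
struct jkey_flags_plain {
	bool allocated;
	bool overwritten;
	bool rewind;
};

/* Three 1-bit bitfields share a single storage unit, so adding a
 * flag does not widen the struct. */
struct jkey_flags_packed {
	bool allocated:1;
	bool overwritten:1;
	bool rewind:1;
};
```

`bool` bitfields are standard C99 (`_Bool` with a declared width of 1), and this is a common kernel idiom for flag-dense structs that are allocated in large arrays, as journal keys are.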
+6 -6
fs/bcachefs/btree_locking.c
··· 771 771 } 772 772 773 773 static noinline __cold void bch2_trans_relock_fail(struct btree_trans *trans, struct btree_path *path, 774 - struct get_locks_fail *f, bool trace) 774 + struct get_locks_fail *f, bool trace, ulong ip) 775 775 { 776 776 if (!trace) 777 777 goto out; ··· 796 796 prt_printf(&buf, " total locked %u.%u.%u", c.n[0], c.n[1], c.n[2]); 797 797 } 798 798 799 - trace_trans_restart_relock(trans, _RET_IP_, buf.buf); 799 + trace_trans_restart_relock(trans, ip, buf.buf); 800 800 printbuf_exit(&buf); 801 801 } 802 802 ··· 806 806 bch2_trans_verify_locks(trans); 807 807 } 808 808 809 - static inline int __bch2_trans_relock(struct btree_trans *trans, bool trace) 809 + static inline int __bch2_trans_relock(struct btree_trans *trans, bool trace, ulong ip) 810 810 { 811 811 bch2_trans_verify_locks(trans); 812 812 ··· 825 825 if (path->should_be_locked && 826 826 (ret = btree_path_get_locks(trans, path, false, &f, 827 827 BCH_ERR_transaction_restart_relock))) { 828 - bch2_trans_relock_fail(trans, path, &f, trace); 828 + bch2_trans_relock_fail(trans, path, &f, trace, ip); 829 829 return ret; 830 830 } 831 831 } ··· 838 838 839 839 int bch2_trans_relock(struct btree_trans *trans) 840 840 { 841 - return __bch2_trans_relock(trans, true); 841 + return __bch2_trans_relock(trans, true, _RET_IP_); 842 842 } 843 843 844 844 int bch2_trans_relock_notrace(struct btree_trans *trans) 845 845 { 846 - return __bch2_trans_relock(trans, false); 846 + return __bch2_trans_relock(trans, false, _RET_IP_); 847 847 } 848 848 849 849 void bch2_trans_unlock(struct btree_trans *trans)
+5 -1
fs/bcachefs/btree_node_scan.c
··· 521 521 return false; 522 522 } 523 523 524 - bool bch2_btree_has_scanned_nodes(struct bch_fs *c, enum btree_id btree) 524 + int bch2_btree_has_scanned_nodes(struct bch_fs *c, enum btree_id btree) 525 525 { 526 + int ret = bch2_run_print_explicit_recovery_pass(c, BCH_RECOVERY_PASS_scan_for_btree_nodes); 527 + if (ret) 528 + return ret; 529 + 526 530 struct found_btree_node search = { 527 531 .btree_id = btree, 528 532 .level = 0,
+1 -1
fs/bcachefs/btree_node_scan.h
··· 4 4 5 5 int bch2_scan_for_btree_nodes(struct bch_fs *); 6 6 bool bch2_btree_node_is_stale(struct bch_fs *, struct btree *); 7 - bool bch2_btree_has_scanned_nodes(struct bch_fs *, enum btree_id); 7 + int bch2_btree_has_scanned_nodes(struct bch_fs *, enum btree_id); 8 8 int bch2_get_scanned_nodes(struct bch_fs *, enum btree_id, unsigned, struct bpos, struct bpos); 9 9 void bch2_find_btree_nodes_exit(struct find_btree_nodes *); 10 10
+12 -6
fs/bcachefs/btree_trans_commit.c
··· 595 595 int ret = 0; 596 596 597 597 bch2_trans_verify_not_unlocked_or_in_restart(trans); 598 - 598 + #if 0 599 + /* todo: bring back dynamic fault injection */ 599 600 if (race_fault()) { 600 601 trace_and_count(c, trans_restart_fault_inject, trans, trace_ip); 601 602 return btree_trans_restart(trans, BCH_ERR_transaction_restart_fault_inject); 602 603 } 603 - 604 + #endif 604 605 /* 605 606 * Check if the insert will fit in the leaf node with the write lock 606 607 * held, otherwise another thread could write the node changing the ··· 757 756 memcpy_u64s_small(journal_res_entry(&c->journal, &trans->journal_res), 758 757 btree_trans_journal_entries_start(trans), 759 758 trans->journal_entries.u64s); 759 + 760 + EBUG_ON(trans->journal_res.u64s < trans->journal_entries.u64s); 760 761 761 762 trans->journal_res.offset += trans->journal_entries.u64s; 762 763 trans->journal_res.u64s -= trans->journal_entries.u64s; ··· 1006 1003 { 1007 1004 struct btree_insert_entry *errored_at = NULL; 1008 1005 struct bch_fs *c = trans->c; 1006 + unsigned journal_u64s = 0; 1009 1007 int ret = 0; 1010 1008 1011 1009 bch2_trans_verify_not_unlocked_or_in_restart(trans); ··· 1035 1031 1036 1032 EBUG_ON(test_bit(BCH_FS_clean_shutdown, &c->flags)); 1037 1033 1038 - trans->journal_u64s = trans->journal_entries.u64s + jset_u64s(trans->accounting.u64s); 1034 + journal_u64s = jset_u64s(trans->accounting.u64s); 1039 1035 trans->journal_transaction_names = READ_ONCE(c->opts.journal_transaction_names); 1040 1036 if (trans->journal_transaction_names) 1041 - trans->journal_u64s += jset_u64s(JSET_ENTRY_LOG_U64s); 1037 + journal_u64s += jset_u64s(JSET_ENTRY_LOG_U64s); 1042 1038 1043 1039 trans_for_each_update(trans, i) { 1044 1040 struct btree_path *path = trans->paths + i->path; ··· 1058 1054 continue; 1059 1055 1060 1056 /* we're going to journal the key being updated: */ 1061 - trans->journal_u64s += jset_u64s(i->k->k.u64s); 1057 + journal_u64s += jset_u64s(i->k->k.u64s); 1062 1058 1063 1059 /* 
and we're also going to log the overwrite: */ 1064 1060 if (trans->journal_transaction_names) 1065 - trans->journal_u64s += jset_u64s(i->old_k.u64s); 1061 + journal_u64s += jset_u64s(i->old_k.u64s); 1066 1062 } 1067 1063 1068 1064 if (trans->extra_disk_res) { ··· 1079 1075 if (likely(!(flags & BCH_TRANS_COMMIT_no_journal_res))) 1080 1076 memset(&trans->journal_res, 0, sizeof(trans->journal_res)); 1081 1077 memset(&trans->fs_usage_delta, 0, sizeof(trans->fs_usage_delta)); 1078 + 1079 + trans->journal_u64s = journal_u64s + trans->journal_entries.u64s; 1082 1080 1083 1081 ret = do_bch2_trans_commit(trans, flags, &errored_at, _RET_IP_); 1084 1082
+1
fs/bcachefs/btree_types.h
··· 497 497 void *mem; 498 498 unsigned mem_top; 499 499 unsigned mem_bytes; 500 + unsigned realloc_bytes_required; 500 501 #ifdef CONFIG_BCACHEFS_TRANS_KMALLOC_TRACE 501 502 darray_trans_kmalloc_trace trans_kmalloc_trace; 502 503 #endif
+11 -5
fs/bcachefs/btree_update.c
··· 549 549 unsigned u64s) 550 550 { 551 551 unsigned new_top = buf->u64s + u64s; 552 - unsigned old_size = buf->size; 552 + unsigned new_size = buf->size; 553 553 554 - if (new_top > buf->size) 555 - buf->size = roundup_pow_of_two(new_top); 554 + BUG_ON(roundup_pow_of_two(new_top) > U16_MAX); 556 555 557 - void *n = bch2_trans_kmalloc_nomemzero(trans, buf->size * sizeof(u64)); 556 + if (new_top > new_size) 557 + new_size = roundup_pow_of_two(new_top); 558 + 559 + void *n = bch2_trans_kmalloc_nomemzero(trans, new_size * sizeof(u64)); 558 560 if (IS_ERR(n)) 559 561 return n; 562 + 563 + unsigned offset = (u64 *) n - (u64 *) trans->mem; 564 + BUG_ON(offset > U16_MAX); 560 565 561 566 if (buf->u64s) 562 567 memcpy(n, 563 568 btree_trans_subbuf_base(trans, buf), 564 - old_size * sizeof(u64)); 569 + buf->size * sizeof(u64)); 565 570 buf->base = (u64 *) n - (u64 *) trans->mem; 571 + buf->size = new_size; 566 572 567 573 void *p = btree_trans_subbuf_top(trans, buf); 568 574 buf->u64s = new_top;
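The `btree_update.c` hunk reworks the subbuf resize to compute `new_size` before allocating and to `BUG_ON()` sizes that would overflow the u16 fields. A sketch of that grow-to-next-power-of-two policy (helper names are illustrative; the kernel uses its own `roundup_pow_of_two()`):

```c
#include <assert.h>
#include <stdint.h>

static unsigned roundup_pow_of_two_u(unsigned n)
{
	unsigned p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

/* Grow only when the requested top exceeds the current size, and round
 * up to a power of two so repeated pushes amortize to O(1) reallocs.
 * The bound check stands in for the kernel's BUG_ON() against U16_MAX,
 * since the subbuf size/offset fields are 16-bit. */
static unsigned new_subbuf_size(unsigned cur_size, unsigned new_top)
{
	unsigned size = cur_size;

	if (new_top > size)
		size = roundup_pow_of_two_u(new_top);
	assert(size <= UINT16_MAX);
	return size;
}
```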
+2 -3
fs/bcachefs/btree_update.h
··· 170 170 171 171 int bch2_btree_insert_clone_trans(struct btree_trans *, enum btree_id, struct bkey_i *); 172 172 173 - int bch2_btree_write_buffer_insert_err(struct btree_trans *, 174 - enum btree_id, struct bkey_i *); 173 + int bch2_btree_write_buffer_insert_err(struct bch_fs *, enum btree_id, struct bkey_i *); 175 174 176 175 static inline int __must_check bch2_trans_update_buffered(struct btree_trans *trans, 177 176 enum btree_id btree, ··· 181 182 EBUG_ON(k->k.u64s > BTREE_WRITE_BUFERED_U64s_MAX); 182 183 183 184 if (unlikely(!btree_type_uses_write_buffer(btree))) { 184 - int ret = bch2_btree_write_buffer_insert_err(trans, btree, k); 185 + int ret = bch2_btree_write_buffer_insert_err(trans->c, btree, k); 185 186 dump_stack(); 186 187 return ret; 187 188 }
+8 -8
fs/bcachefs/btree_update_interior.c
··· 1287 1287 1288 1288 do { 1289 1289 ret = bch2_btree_reserve_get(trans, as, nr_nodes, target, flags, &cl); 1290 - 1290 + if (!bch2_err_matches(ret, BCH_ERR_operation_blocked)) 1291 + break; 1291 1292 bch2_trans_unlock(trans); 1292 1293 bch2_wait_on_allocator(c, &cl); 1293 - } while (bch2_err_matches(ret, BCH_ERR_operation_blocked)); 1294 + } while (1); 1294 1295 } 1295 1296 1296 1297 if (ret) { ··· 2294 2293 goto out; 2295 2294 } 2296 2295 2297 - static int bch2_btree_node_rewrite_key(struct btree_trans *trans, 2298 - enum btree_id btree, unsigned level, 2299 - struct bkey_i *k, unsigned flags) 2296 + int bch2_btree_node_rewrite_key(struct btree_trans *trans, 2297 + enum btree_id btree, unsigned level, 2298 + struct bkey_i *k, unsigned flags) 2300 2299 { 2301 2300 struct btree_iter iter; 2302 2301 bch2_trans_node_iter_init(trans, &iter, ··· 2368 2367 2369 2368 int ret = bch2_trans_do(c, bch2_btree_node_rewrite_key(trans, 2370 2369 a->btree_id, a->level, a->key.k, 0)); 2371 - if (ret != -ENOENT && 2372 - !bch2_err_matches(ret, EROFS) && 2373 - ret != -BCH_ERR_journal_shutdown) 2370 + if (!bch2_err_matches(ret, ENOENT) && 2371 + !bch2_err_matches(ret, EROFS)) 2374 2372 bch_err_fn_ratelimited(c, ret); 2375 2373 2376 2374 spin_lock(&c->btree_node_rewrites_lock);
+3
fs/bcachefs/btree_update_interior.h
··· 176 176 177 177 int bch2_btree_node_rewrite(struct btree_trans *, struct btree_iter *, 178 178 struct btree *, unsigned, unsigned); 179 + int bch2_btree_node_rewrite_key(struct btree_trans *, 180 + enum btree_id, unsigned, 181 + struct bkey_i *, unsigned); 179 182 int bch2_btree_node_rewrite_pos(struct btree_trans *, 180 183 enum btree_id, unsigned, 181 184 struct bpos, unsigned, unsigned);
+5 -3
fs/bcachefs/btree_write_buffer.c
··· 267 267 BUG_ON(wb->sorted.size < wb->flushing.keys.nr); 268 268 } 269 269 270 - int bch2_btree_write_buffer_insert_err(struct btree_trans *trans, 270 + int bch2_btree_write_buffer_insert_err(struct bch_fs *c, 271 271 enum btree_id btree, struct bkey_i *k) 272 272 { 273 - struct bch_fs *c = trans->c; 274 273 struct printbuf buf = PRINTBUF; 275 274 276 275 prt_printf(&buf, "attempting to do write buffer update on non wb btree="); ··· 331 332 struct btree_write_buffered_key *k = &wb->flushing.keys.data[i->idx]; 332 333 333 334 if (unlikely(!btree_type_uses_write_buffer(k->btree))) { 334 - ret = bch2_btree_write_buffer_insert_err(trans, k->btree, &k->k); 335 + ret = bch2_btree_write_buffer_insert_err(trans->c, k->btree, &k->k); 335 336 goto err; 336 337 } 337 338 ··· 675 676 goto err; 676 677 677 678 bch2_bkey_buf_copy(last_flushed, c, tmp.k); 679 + 680 + /* can we avoid the unconditional restart? */ 681 + trace_and_count(c, trans_restart_write_buffer_flush, trans, _RET_IP_); 678 682 ret = bch_err_throw(c, transaction_restart_write_buffer_flush); 679 683 } 680 684 err:
+6
fs/bcachefs/btree_write_buffer.h
··· 89 89 struct journal_keys_to_wb *dst, 90 90 enum btree_id btree, struct bkey_i *k) 91 91 { 92 + if (unlikely(!btree_type_uses_write_buffer(btree))) { 93 + int ret = bch2_btree_write_buffer_insert_err(c, btree, k); 94 + dump_stack(); 95 + return ret; 96 + } 97 + 92 98 EBUG_ON(!dst->seq); 93 99 94 100 return k->k.type == KEY_TYPE_accounting
+22 -7
fs/bcachefs/chardev.c
··· 319 319 ctx->stats.ret = BCH_IOCTL_DATA_EVENT_RET_done; 320 320 ctx->stats.data_type = (int) DATA_PROGRESS_DATA_TYPE_done; 321 321 } 322 + enumerated_ref_put(&ctx->c->writes, BCH_WRITE_REF_ioctl_data); 322 323 return 0; 323 324 } 324 325 ··· 379 378 struct bch_data_ctx *ctx; 380 379 int ret; 381 380 382 - if (!capable(CAP_SYS_ADMIN)) 383 - return -EPERM; 381 + if (!enumerated_ref_tryget(&c->writes, BCH_WRITE_REF_ioctl_data)) 382 + return -EROFS; 384 383 385 - if (arg.op >= BCH_DATA_OP_NR || arg.flags) 386 - return -EINVAL; 384 + if (!capable(CAP_SYS_ADMIN)) { 385 + ret = -EPERM; 386 + goto put_ref; 387 + } 388 + 389 + if (arg.op >= BCH_DATA_OP_NR || arg.flags) { 390 + ret = -EINVAL; 391 + goto put_ref; 392 + } 387 393 388 394 ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); 389 - if (!ctx) 390 - return -ENOMEM; 395 + if (!ctx) { 396 + ret = -ENOMEM; 397 + goto put_ref; 398 + } 391 399 392 400 ctx->c = c; 393 401 ctx->arg = arg; ··· 405 395 &bcachefs_data_ops, 406 396 bch2_data_thread); 407 397 if (ret < 0) 408 - kfree(ctx); 398 + goto cleanup; 399 + return ret; 400 + cleanup: 401 + kfree(ctx); 402 + put_ref: 403 + enumerated_ref_put(&c->writes, BCH_WRITE_REF_ioctl_data); 409 404 return ret; 410 405 } 411 406
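The `chardev.c` change converts early `return`s into the labeled-unwind pattern so the newly taken `writes` ref is dropped on every failure path. A self-contained sketch of that goto cleanup idiom (all names and the fake error codes are illustrative; the real function keeps the ref across the worker thread's lifetime rather than dropping it before returning success):

```c
#include <stdlib.h>

static int refcount; /* stands in for the enumerated_ref counter */

static int tryget(void) { refcount++; return 1; }
static void put(void)   { refcount--; }

static int do_ioctl(int op_valid, int alloc_ok)
{
	void *ctx = NULL;
	int ret;

	if (!tryget())
		return -1;		/* -EROFS in the original */

	if (!op_valid) {
		ret = -2;		/* -EINVAL */
		goto put_ref;
	}

	ctx = alloc_ok ? malloc(16) : NULL;
	if (!ctx) {
		ret = -3;		/* -ENOMEM */
		goto put_ref;
	}

	/* ... would hand ctx off to a worker thread here ... */
	free(ctx);
	ret = 0;
put_ref:
	put();				/* every exit path drops the ref */
	return ret;
}
```

The value of the pattern is that adding a new resource acquisition means adding one label, not auditing every `return` in the function.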
+1
fs/bcachefs/data_update.c
··· 249 249 bch2_bkey_val_to_text(&buf, c, k); 250 250 prt_str(&buf, "\nnew: "); 251 251 bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(insert)); 252 + prt_newline(&buf); 252 253 253 254 bch2_fs_emergency_read_only2(c, &buf); 254 255
-5
fs/bcachefs/errcode.h
··· 137 137 x(BCH_ERR_transaction_restart, transaction_restart_relock) \ 138 138 x(BCH_ERR_transaction_restart, transaction_restart_relock_path) \ 139 139 x(BCH_ERR_transaction_restart, transaction_restart_relock_path_intent) \ 140 - x(BCH_ERR_transaction_restart, transaction_restart_relock_after_fill) \ 141 140 x(BCH_ERR_transaction_restart, transaction_restart_too_many_iters) \ 142 141 x(BCH_ERR_transaction_restart, transaction_restart_lock_node_reused) \ 143 142 x(BCH_ERR_transaction_restart, transaction_restart_fill_relock) \ ··· 147 148 x(BCH_ERR_transaction_restart, transaction_restart_would_deadlock_write)\ 148 149 x(BCH_ERR_transaction_restart, transaction_restart_deadlock_recursion_limit)\ 149 150 x(BCH_ERR_transaction_restart, transaction_restart_upgrade) \ 150 - x(BCH_ERR_transaction_restart, transaction_restart_key_cache_upgrade) \ 151 151 x(BCH_ERR_transaction_restart, transaction_restart_key_cache_fill) \ 152 152 x(BCH_ERR_transaction_restart, transaction_restart_key_cache_raced) \ 153 - x(BCH_ERR_transaction_restart, transaction_restart_key_cache_realloced)\ 154 - x(BCH_ERR_transaction_restart, transaction_restart_journal_preres_get) \ 155 153 x(BCH_ERR_transaction_restart, transaction_restart_split_race) \ 156 154 x(BCH_ERR_transaction_restart, transaction_restart_write_buffer_flush) \ 157 155 x(BCH_ERR_transaction_restart, transaction_restart_nested) \ ··· 237 241 x(BCH_ERR_journal_res_blocked, journal_buf_enomem) \ 238 242 x(BCH_ERR_journal_res_blocked, journal_stuck) \ 239 243 x(BCH_ERR_journal_res_blocked, journal_retry_open) \ 240 - x(BCH_ERR_journal_res_blocked, journal_preres_get_blocked) \ 241 244 x(BCH_ERR_journal_res_blocked, bucket_alloc_blocked) \ 242 245 x(BCH_ERR_journal_res_blocked, stripe_alloc_blocked) \ 243 246 x(BCH_ERR_invalid, invalid_sb) \
+3 -1
fs/bcachefs/error.c
··· 621 621 if (s) 622 622 s->ret = ret; 623 623 624 - if (trans) 624 + if (trans && 625 + !(flags & FSCK_ERR_NO_LOG) && 626 + ret == -BCH_ERR_fsck_fix) 625 627 ret = bch2_trans_log_str(trans, bch2_sb_error_strs[err]) ?: ret; 626 628 err_unlock: 627 629 mutex_unlock(&c->fsck_error_msgs_lock);
+12 -1
fs/bcachefs/extent_update.c
··· 139 139 if (ret) 140 140 return ret; 141 141 142 - bch2_cut_back(end, k); 142 + /* tracepoint */ 143 + 144 + if (bpos_lt(end, k->k.p)) { 145 + if (trace_extent_trim_atomic_enabled()) { 146 + CLASS(printbuf, buf)(); 147 + bch2_bpos_to_text(&buf, end); 148 + prt_newline(&buf); 149 + bch2_bkey_val_to_text(&buf, trans->c, bkey_i_to_s_c(k)); 150 + trace_extent_trim_atomic(trans->c, buf.buf); 151 + } 152 + bch2_cut_back(end, k); 153 + } 143 154 return 0; 144 155 }
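The `extent_update.c` hunk wraps the printbuf formatting in `trace_extent_trim_atomic_enabled()` so the string is only built when the tracepoint is live. A sketch of that guard pattern (`emit_trim_trace` and its return value are illustrative; the kernel's `trace_<event>_enabled()` helpers are the real static-key check):

```c
#include <stdio.h>

/* Build the (potentially expensive) debug string only when tracing is
 * on; returns the number of format operations performed so the skip is
 * observable. */
static int emit_trim_trace(int enabled, unsigned long long pos)
{
	char buf[64];

	if (!enabled)
		return 0;	/* skip all formatting work */

	snprintf(buf, sizeof(buf), "extent trim at %llu", pos);
	(void)buf;		/* would be handed to the tracepoint */
	return 1;
}
```

Guarding the formatting keeps the fast path free of string construction, which matters when the tracepoint sits in a hot extent-update path.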
+2 -1
fs/bcachefs/fs.c
··· 1732 1732 bch2_write_inode(c, inode, fssetxattr_inode_update_fn, &s, 1733 1733 ATTR_CTIME); 1734 1734 mutex_unlock(&inode->ei_update_lock); 1735 - return ret; 1735 + 1736 + return bch2_err_class(ret); 1736 1737 } 1737 1738 1738 1739 static const struct file_operations bch_file_operations = {
+218 -99
fs/bcachefs/fsck.c
··· 327 327 (inode->bi_flags & BCH_INODE_has_child_snapshot)) 328 328 return false; 329 329 330 - return !inode->bi_dir && !(inode->bi_flags & BCH_INODE_unlinked); 330 + return !bch2_inode_has_backpointer(inode) && 331 + !(inode->bi_flags & BCH_INODE_unlinked); 331 332 } 332 333 333 334 static int maybe_delete_dirent(struct btree_trans *trans, struct bpos d_pos, u32 snapshot) ··· 373 372 if (inode->bi_subvol) { 374 373 inode->bi_parent_subvol = BCACHEFS_ROOT_SUBVOL; 375 374 375 + struct btree_iter subvol_iter; 376 + struct bkey_i_subvolume *subvol = 377 + bch2_bkey_get_mut_typed(trans, &subvol_iter, 378 + BTREE_ID_subvolumes, POS(0, inode->bi_subvol), 379 + 0, subvolume); 380 + ret = PTR_ERR_OR_ZERO(subvol); 381 + if (ret) 382 + return ret; 383 + 384 + subvol->v.fs_path_parent = BCACHEFS_ROOT_SUBVOL; 385 + bch2_trans_iter_exit(trans, &subvol_iter); 386 + 376 387 u64 root_inum; 377 388 ret = subvol_lookup(trans, inode->bi_parent_subvol, 378 389 &dirent_snapshot, &root_inum); ··· 399 386 ret = lookup_lostfound(trans, dirent_snapshot, &lostfound, inode->bi_inum); 400 387 if (ret) 401 388 return ret; 389 + 390 + bch_verbose(c, "got lostfound inum %llu", lostfound.bi_inum); 402 391 403 392 lostfound.bi_nlink += S_ISDIR(inode->bi_mode); 404 393 ··· 437 422 ret = __bch2_fsck_write_inode(trans, inode); 438 423 if (ret) 439 424 return ret; 425 + 426 + { 427 + CLASS(printbuf, buf)(); 428 + ret = bch2_inum_snapshot_to_path(trans, inode->bi_inum, 429 + inode->bi_snapshot, NULL, &buf); 430 + if (ret) 431 + return ret; 432 + 433 + bch_info(c, "reattached at %s", buf.buf); 434 + } 440 435 441 436 /* 442 437 * Fix up inodes in child snapshots: if they should also be reattached ··· 515 490 static int remove_backpointer(struct btree_trans *trans, 516 491 struct bch_inode_unpacked *inode) 517 492 { 518 - if (!inode->bi_dir) 493 + if (!bch2_inode_has_backpointer(inode)) 519 494 return 0; 495 + 496 + u32 snapshot = inode->bi_snapshot; 497 + 498 + if (inode->bi_parent_subvol) { 499 + 
int ret = bch2_subvolume_get_snapshot(trans, inode->bi_parent_subvol, &snapshot); 500 + if (ret) 501 + return ret; 502 + } 520 503 521 504 struct bch_fs *c = trans->c; 522 505 struct btree_iter iter; 523 506 struct bkey_s_c_dirent d = dirent_get_by_pos(trans, &iter, 524 - SPOS(inode->bi_dir, inode->bi_dir_offset, inode->bi_snapshot)); 507 + SPOS(inode->bi_dir, inode->bi_dir_offset, snapshot)); 525 508 int ret = bkey_err(d) ?: 526 509 dirent_points_to_inode(c, d, inode) ?: 527 510 bch2_fsck_remove_dirent(trans, d.k->p); ··· 728 695 static bool key_visible_in_snapshot(struct bch_fs *c, struct snapshots_seen *seen, 729 696 u32 id, u32 ancestor) 730 697 { 731 - ssize_t i; 732 - 733 698 EBUG_ON(id > ancestor); 734 - 735 - /* @ancestor should be the snapshot most recently added to @seen */ 736 - EBUG_ON(ancestor != seen->pos.snapshot); 737 - EBUG_ON(ancestor != darray_last(seen->ids)); 738 699 739 700 if (id == ancestor) 740 701 return true; ··· 745 718 * numerically, since snapshot ID lists are kept sorted, so if we find 746 719 * an id that's an ancestor of @id we're done: 747 720 */ 748 - 749 - for (i = seen->ids.nr - 2; 750 - i >= 0 && seen->ids.data[i] >= id; 751 - --i) 752 - if (bch2_snapshot_is_ancestor(c, id, seen->ids.data[i])) 721 + darray_for_each_reverse(seen->ids, i) 722 + if (*i != ancestor && bch2_snapshot_is_ancestor(c, id, *i)) 753 723 return false; 754 724 755 725 return true; ··· 830 806 if (!n->whiteout) { 831 807 return bch2_inode_unpack(inode, &n->inode); 832 808 } else { 833 - n->inode.bi_inum = inode.k->p.inode; 809 + n->inode.bi_inum = inode.k->p.offset; 834 810 n->inode.bi_snapshot = inode.k->p.snapshot; 835 811 return 0; 836 812 } ··· 927 903 w->last_pos.inode, k.k->p.snapshot, i->inode.bi_snapshot, 928 904 (bch2_bkey_val_to_text(&buf, c, k), 929 905 buf.buf))) { 930 - struct bch_inode_unpacked new = i->inode; 931 - struct bkey_i whiteout; 932 - 933 - new.bi_snapshot = k.k->p.snapshot; 934 - 935 906 if (!i->whiteout) { 907 + struct 
bch_inode_unpacked new = i->inode; 908 + new.bi_snapshot = k.k->p.snapshot; 936 909 ret = __bch2_fsck_write_inode(trans, &new); 937 910 } else { 911 + struct bkey_i whiteout; 938 912 bkey_init(&whiteout.k); 939 913 whiteout.k.type = KEY_TYPE_whiteout; 940 - whiteout.k.p = SPOS(0, i->inode.bi_inum, i->inode.bi_snapshot); 914 + whiteout.k.p = SPOS(0, i->inode.bi_inum, k.k->p.snapshot); 941 915 ret = bch2_btree_insert_nonextent(trans, BTREE_ID_inodes, 942 916 &whiteout, 943 917 BTREE_UPDATE_internal_snapshot_node); ··· 1157 1135 if (ret) 1158 1136 goto err; 1159 1137 1160 - if (u.bi_dir || u.bi_dir_offset) { 1138 + if (bch2_inode_has_backpointer(&u)) { 1161 1139 ret = check_inode_dirent_inode(trans, &u, &do_update); 1162 1140 if (ret) 1163 1141 goto err; 1164 1142 } 1165 1143 1166 - if (fsck_err_on(u.bi_dir && (u.bi_flags & BCH_INODE_unlinked), 1144 + if (fsck_err_on(bch2_inode_has_backpointer(&u) && 1145 + (u.bi_flags & BCH_INODE_unlinked), 1167 1146 trans, inode_unlinked_but_has_dirent, 1168 1147 "inode unlinked but has dirent\n%s", 1169 1148 (printbuf_reset(&buf), ··· 1461 1438 { 1462 1439 struct bch_fs *c = trans->c; 1463 1440 struct printbuf buf = PRINTBUF; 1441 + struct btree_iter iter2 = {}; 1464 1442 int ret = PTR_ERR_OR_ZERO(i); 1465 1443 if (ret) 1466 1444 return ret; ··· 1471 1447 1472 1448 bool have_inode = i && !i->whiteout; 1473 1449 1474 - if (!have_inode && (c->sb.btrees_lost_data & BIT_ULL(BTREE_ID_inodes))) { 1475 - ret = reconstruct_inode(trans, iter->btree_id, k.k->p.snapshot, k.k->p.inode) ?: 1476 - bch2_trans_commit(trans, NULL, NULL, BCH_TRANS_COMMIT_no_enospc); 1477 - if (ret) 1478 - goto err; 1450 + if (!have_inode && (c->sb.btrees_lost_data & BIT_ULL(BTREE_ID_inodes))) 1451 + goto reconstruct; 1479 1452 1480 - inode->last_pos.inode--; 1481 - ret = bch_err_throw(c, transaction_restart_nested); 1482 - goto err; 1453 + if (have_inode && btree_matches_i_mode(iter->btree_id, i->inode.bi_mode)) 1454 + goto out; 1455 + 1456 + prt_printf(&buf, ", "); 
1457 + 1458 + bool have_old_inode = false; 1459 + darray_for_each(inode->inodes, i2) 1460 + if (!i2->whiteout && 1461 + bch2_snapshot_is_ancestor(c, k.k->p.snapshot, i2->inode.bi_snapshot) && 1462 + btree_matches_i_mode(iter->btree_id, i2->inode.bi_mode)) { 1463 + prt_printf(&buf, "but found good inode in older snapshot\n"); 1464 + bch2_inode_unpacked_to_text(&buf, &i2->inode); 1465 + prt_newline(&buf); 1466 + have_old_inode = true; 1467 + break; 1468 + } 1469 + 1470 + struct bkey_s_c k2; 1471 + unsigned nr_keys = 0; 1472 + 1473 + prt_printf(&buf, "found keys:\n"); 1474 + 1475 + for_each_btree_key_max_norestart(trans, iter2, iter->btree_id, 1476 + SPOS(k.k->p.inode, 0, k.k->p.snapshot), 1477 + POS(k.k->p.inode, U64_MAX), 1478 + 0, k2, ret) { 1479 + nr_keys++; 1480 + if (nr_keys <= 10) { 1481 + bch2_bkey_val_to_text(&buf, c, k2); 1482 + prt_newline(&buf); 1483 + } 1484 + if (nr_keys >= 100) 1485 + break; 1483 1486 } 1484 1487 1485 - if (fsck_err_on(!have_inode, 1486 - trans, key_in_missing_inode, 1487 - "key in missing inode:\n%s", 1488 - (printbuf_reset(&buf), 1489 - bch2_bkey_val_to_text(&buf, c, k), buf.buf))) 1490 - goto delete; 1488 + if (ret) 1489 + goto err; 1491 1490 1492 - if (fsck_err_on(have_inode && !btree_matches_i_mode(iter->btree_id, i->inode.bi_mode), 1493 - trans, key_in_wrong_inode_type, 1494 - "key for wrong inode mode %o:\n%s", 1495 - i->inode.bi_mode, 1496 - (printbuf_reset(&buf), 1497 - bch2_bkey_val_to_text(&buf, c, k), buf.buf))) 1498 - goto delete; 1491 + if (nr_keys > 100) 1492 + prt_printf(&buf, "found > %u keys for this missing inode\n", nr_keys); 1493 + else if (nr_keys > 10) 1494 + prt_printf(&buf, "found %u keys for this missing inode\n", nr_keys); 1495 + 1496 + if (!have_inode) { 1497 + if (fsck_err_on(!have_inode, 1498 + trans, key_in_missing_inode, 1499 + "key in missing inode%s", buf.buf)) { 1500 + /* 1501 + * Maybe a deletion that raced with data move, or something 1502 + * weird like that? 
But if we know the inode was deleted, or 1503 + * it's just a few keys, we can safely delete them. 1504 + * 1505 + * If it's many keys, we should probably recreate the inode 1506 + */ 1507 + if (have_old_inode || nr_keys <= 2) 1508 + goto delete; 1509 + else 1510 + goto reconstruct; 1511 + } 1512 + } else { 1513 + /* 1514 + * not autofix, this one would be a giant wtf - bit error in the 1515 + * inode corrupting i_mode? 1516 + * 1517 + * may want to try repairing inode instead of deleting 1518 + */ 1519 + if (fsck_err_on(!btree_matches_i_mode(iter->btree_id, i->inode.bi_mode), 1520 + trans, key_in_wrong_inode_type, 1521 + "key for wrong inode mode %o%s", 1522 + i->inode.bi_mode, buf.buf)) 1523 + goto delete; 1524 + } 1499 1525 out: 1500 1526 err: 1501 1527 fsck_err: 1528 + bch2_trans_iter_exit(trans, &iter2); 1502 1529 printbuf_exit(&buf); 1503 1530 bch_err_fn(c, ret); 1504 1531 return ret; 1505 1532 delete: 1533 + /* 1534 + * XXX: print out more info 1535 + * count up extents for this inode, check if we have different inode in 1536 + * an older snapshot version, perhaps decide if we want to reconstitute 1537 + */ 1506 1538 ret = bch2_btree_delete_at(trans, iter, BTREE_UPDATE_internal_snapshot_node); 1539 + goto out; 1540 + reconstruct: 1541 + ret = reconstruct_inode(trans, iter->btree_id, k.k->p.snapshot, k.k->p.inode) ?: 1542 + bch2_trans_commit(trans, NULL, NULL, BCH_TRANS_COMMIT_no_enospc); 1543 + if (ret) 1544 + goto err; 1545 + 1546 + inode->last_pos.inode--; 1547 + ret = bch_err_throw(c, transaction_restart_nested); 1507 1548 goto out; 1508 1549 } 1509 1550 ··· 1911 1822 !key_visible_in_snapshot(c, s, i->inode.bi_snapshot, k.k->p.snapshot)) 1912 1823 continue; 1913 1824 1914 - if (fsck_err_on(k.k->p.offset > round_up(i->inode.bi_size, block_bytes(c)) >> 9 && 1825 + u64 last_block = round_up(i->inode.bi_size, block_bytes(c)) >> 9; 1826 + 1827 + if (fsck_err_on(k.k->p.offset > last_block && 1915 1828 !bkey_extent_is_reservation(k), 1916 1829 trans, 
extent_past_end_of_inode, 1917 1830 "extent type past end of inode %llu:%u, i_size %llu\n%s", 1918 1831 i->inode.bi_inum, i->inode.bi_snapshot, i->inode.bi_size, 1919 1832 (bch2_bkey_val_to_text(&buf, c, k), buf.buf))) { 1920 - struct btree_iter iter2; 1833 + struct bkey_i *whiteout = bch2_trans_kmalloc(trans, sizeof(*whiteout)); 1834 + ret = PTR_ERR_OR_ZERO(whiteout); 1835 + if (ret) 1836 + goto err; 1921 1837 1922 - bch2_trans_copy_iter(trans, &iter2, iter); 1923 - bch2_btree_iter_set_snapshot(trans, &iter2, i->inode.bi_snapshot); 1838 + bkey_init(&whiteout->k); 1839 + whiteout->k.p = SPOS(k.k->p.inode, 1840 + last_block, 1841 + i->inode.bi_snapshot); 1842 + bch2_key_resize(&whiteout->k, 1843 + min(KEY_SIZE_MAX & (~0 << c->block_bits), 1844 + U64_MAX - whiteout->k.p.offset)); 1845 + 1846 + 1847 + /* 1848 + * Need a normal (not BTREE_ITER_all_snapshots) 1849 + * iterator, if we're deleting in a different 1850 + * snapshot and need to emit a whiteout 1851 + */ 1852 + struct btree_iter iter2; 1853 + bch2_trans_iter_init(trans, &iter2, BTREE_ID_extents, 1854 + bkey_start_pos(&whiteout->k), 1855 + BTREE_ITER_intent); 1924 1856 ret = bch2_btree_iter_traverse(trans, &iter2) ?: 1925 - bch2_btree_delete_at(trans, &iter2, 1857 + bch2_trans_update(trans, &iter2, whiteout, 1926 1858 BTREE_UPDATE_internal_snapshot_node); 1927 1859 bch2_trans_iter_exit(trans, &iter2); 1928 1860 if (ret) ··· 2059 1949 continue; 2060 1950 } 2061 1951 2062 - if (fsck_err_on(i->inode.bi_nlink != i->count, 2063 - trans, inode_dir_wrong_nlink, 2064 - "directory %llu:%u with wrong i_nlink: got %u, should be %llu", 2065 - w->last_pos.inode, i->inode.bi_snapshot, i->inode.bi_nlink, i->count)) { 2066 - i->inode.bi_nlink = i->count; 2067 - ret = bch2_fsck_write_inode(trans, &i->inode); 2068 - if (ret) 2069 - break; 1952 + if (i->inode.bi_nlink != i->count) { 1953 + CLASS(printbuf, buf)(); 1954 + 1955 + lockrestart_do(trans, 1956 + bch2_inum_snapshot_to_path(trans, w->last_pos.inode, 1957 + 
i->inode.bi_snapshot, NULL, &buf)); 1958 + 1959 + if (fsck_err_on(i->inode.bi_nlink != i->count, 1960 + trans, inode_dir_wrong_nlink, 1961 + "directory with wrong i_nlink: got %u, should be %llu\n%s", 1962 + i->inode.bi_nlink, i->count, buf.buf)) { 1963 + i->inode.bi_nlink = i->count; 1964 + ret = bch2_fsck_write_inode(trans, &i->inode); 1965 + if (ret) 1966 + break; 1967 + } 2070 1968 } 2071 1969 } 2072 1970 fsck_err: ··· 2611 2493 if (k.k->type != KEY_TYPE_subvolume) 2612 2494 return 0; 2613 2495 2496 + subvol_inum start = { 2497 + .subvol = k.k->p.offset, 2498 + .inum = le64_to_cpu(bkey_s_c_to_subvolume(k).v->inode), 2499 + }; 2500 + 2614 2501 while (k.k->p.offset != BCACHEFS_ROOT_SUBVOL) { 2615 2502 ret = darray_push(&subvol_path, k.k->p.offset); 2616 2503 if (ret) ··· 2634 2511 2635 2512 if (darray_u32_has(&subvol_path, parent)) { 2636 2513 printbuf_reset(&buf); 2637 - prt_printf(&buf, "subvolume loop:\n"); 2514 + prt_printf(&buf, "subvolume loop: "); 2638 2515 2639 - darray_for_each_reverse(subvol_path, i) 2640 - prt_printf(&buf, "%u ", *i); 2641 - prt_printf(&buf, "%u", parent); 2516 + ret = bch2_inum_to_path(trans, start, &buf); 2517 + if (ret) 2518 + goto err; 2642 2519 2643 2520 if (fsck_err(trans, subvol_loop, "%s", buf.buf)) 2644 2521 ret = reattach_subvol(trans, s); ··· 2682 2559 return ret; 2683 2560 } 2684 2561 2685 - struct pathbuf_entry { 2686 - u64 inum; 2687 - u32 snapshot; 2688 - }; 2689 - 2690 - typedef DARRAY(struct pathbuf_entry) pathbuf; 2691 - 2692 - static int bch2_bi_depth_renumber_one(struct btree_trans *trans, struct pathbuf_entry *p, 2562 + static int bch2_bi_depth_renumber_one(struct btree_trans *trans, 2563 + u64 inum, u32 snapshot, 2693 2564 u32 new_depth) 2694 2565 { 2695 2566 struct btree_iter iter; 2696 2567 struct bkey_s_c k = bch2_bkey_get_iter(trans, &iter, BTREE_ID_inodes, 2697 - SPOS(0, p->inum, p->snapshot), 0); 2568 + SPOS(0, inum, snapshot), 0); 2698 2569 2699 2570 struct bch_inode_unpacked inode; 2700 2571 int ret = 
bkey_err(k) ?: ··· 2707 2590 return ret; 2708 2591 } 2709 2592 2710 - static int bch2_bi_depth_renumber(struct btree_trans *trans, pathbuf *path, u32 new_bi_depth) 2593 + static int bch2_bi_depth_renumber(struct btree_trans *trans, darray_u64 *path, 2594 + u32 snapshot, u32 new_bi_depth) 2711 2595 { 2712 2596 u32 restart_count = trans->restart_count; 2713 2597 int ret = 0; 2714 2598 2715 2599 darray_for_each_reverse(*path, i) { 2716 2600 ret = nested_lockrestart_do(trans, 2717 - bch2_bi_depth_renumber_one(trans, i, new_bi_depth)); 2601 + bch2_bi_depth_renumber_one(trans, *i, snapshot, new_bi_depth)); 2718 2602 bch_err_fn(trans->c, ret); 2719 2603 if (ret) 2720 2604 break; ··· 2726 2608 return ret ?: trans_was_restarted(trans, restart_count); 2727 2609 } 2728 2610 2729 - static bool path_is_dup(pathbuf *p, u64 inum, u32 snapshot) 2730 - { 2731 - darray_for_each(*p, i) 2732 - if (i->inum == inum && 2733 - i->snapshot == snapshot) 2734 - return true; 2735 - return false; 2736 - } 2737 - 2738 2611 static int check_path_loop(struct btree_trans *trans, struct bkey_s_c inode_k) 2739 2612 { 2740 2613 struct bch_fs *c = trans->c; 2741 2614 struct btree_iter inode_iter = {}; 2742 - pathbuf path = {}; 2615 + darray_u64 path = {}; 2743 2616 struct printbuf buf = PRINTBUF; 2744 2617 u32 snapshot = inode_k.k->p.snapshot; 2745 2618 bool redo_bi_depth = false; 2746 2619 u32 min_bi_depth = U32_MAX; 2747 2620 int ret = 0; 2748 2621 2622 + struct bpos start = inode_k.k->p; 2623 + 2749 2624 struct bch_inode_unpacked inode; 2750 2625 ret = bch2_inode_unpack(inode_k, &inode); 2751 2626 if (ret) 2752 2627 return ret; 2753 2628 2754 - while (!inode.bi_subvol) { 2629 + /* 2630 + * If we're running full fsck, check_dirents() will have already ran, 2631 + * and we shouldn't see any missing backpointers here - otherwise that's 2632 + * handled separately, by check_unreachable_inodes 2633 + */ 2634 + while (!inode.bi_subvol && 2635 + bch2_inode_has_backpointer(&inode)) { 2755 2636 struct 
btree_iter dirent_iter; 2756 2637 struct bkey_s_c_dirent d; 2757 - u32 parent_snapshot = snapshot; 2758 2638 2759 - d = inode_get_dirent(trans, &dirent_iter, &inode, &parent_snapshot); 2639 + d = dirent_get_by_pos(trans, &dirent_iter, 2640 + SPOS(inode.bi_dir, inode.bi_dir_offset, snapshot)); 2760 2641 ret = bkey_err(d.s_c); 2761 2642 if (ret && !bch2_err_matches(ret, ENOENT)) 2762 2643 goto out; ··· 2773 2656 2774 2657 bch2_trans_iter_exit(trans, &dirent_iter); 2775 2658 2776 - ret = darray_push(&path, ((struct pathbuf_entry) { 2777 - .inum = inode.bi_inum, 2778 - .snapshot = snapshot, 2779 - })); 2659 + ret = darray_push(&path, inode.bi_inum); 2780 2660 if (ret) 2781 2661 return ret; 2782 - 2783 - snapshot = parent_snapshot; 2784 2662 2785 2663 bch2_trans_iter_exit(trans, &inode_iter); 2786 2664 inode_k = bch2_bkey_get_iter(trans, &inode_iter, BTREE_ID_inodes, ··· 2798 2686 break; 2799 2687 2800 2688 inode = parent_inode; 2801 - snapshot = inode_k.k->p.snapshot; 2802 2689 redo_bi_depth = true; 2803 2690 2804 - if (path_is_dup(&path, inode.bi_inum, snapshot)) { 2691 + if (darray_find(path, inode.bi_inum)) { 2805 2692 printbuf_reset(&buf); 2806 - prt_printf(&buf, "directory structure loop:\n"); 2807 - darray_for_each_reverse(path, i) 2808 - prt_printf(&buf, "%llu:%u ", i->inum, i->snapshot); 2809 - prt_printf(&buf, "%llu:%u", inode.bi_inum, snapshot); 2693 + prt_printf(&buf, "directory structure loop in snapshot %u: ", 2694 + snapshot); 2695 + 2696 + ret = bch2_inum_snapshot_to_path(trans, start.offset, start.snapshot, NULL, &buf); 2697 + if (ret) 2698 + goto out; 2699 + 2700 + if (c->opts.verbose) { 2701 + prt_newline(&buf); 2702 + darray_for_each(path, i) 2703 + prt_printf(&buf, "%llu ", *i); 2704 + } 2810 2705 2811 2706 if (fsck_err(trans, dir_loop, "%s", buf.buf)) { 2812 2707 ret = remove_backpointer(trans, &inode); ··· 2833 2714 min_bi_depth = 0; 2834 2715 2835 2716 if (redo_bi_depth) 2836 - ret = bch2_bi_depth_renumber(trans, &path, min_bi_depth); 2717 + ret 
= bch2_bi_depth_renumber(trans, &path, snapshot, min_bi_depth); 2837 2718 out: 2838 2719 fsck_err: 2839 2720 bch2_trans_iter_exit(trans, &inode_iter); ··· 2850 2731 int bch2_check_directory_structure(struct bch_fs *c) 2851 2732 { 2852 2733 int ret = bch2_trans_run(c, 2853 - for_each_btree_key_commit(trans, iter, BTREE_ID_inodes, POS_MIN, 2734 + for_each_btree_key_reverse_commit(trans, iter, BTREE_ID_inodes, POS_MIN, 2854 2735 BTREE_ITER_intent| 2855 2736 BTREE_ITER_prefetch| 2856 2737 BTREE_ITER_all_snapshots, k,
+5
fs/bcachefs/inode.h
··· 254 254 : c->opts.casefold; 255 255 } 256 256 257 + static inline bool bch2_inode_has_backpointer(const struct bch_inode_unpacked *bi) 258 + { 259 + return bi->bi_dir || bi->bi_dir_offset; 260 + } 261 + 257 262 /* i_nlink: */ 258 263 259 264 static inline unsigned nlink_bias(umode_t mode)
+6 -1
fs/bcachefs/io_read.c
··· 1491 1491 prt_printf(out, "have_ioref:\t%u\n", rbio->have_ioref); 1492 1492 prt_printf(out, "narrow_crcs:\t%u\n", rbio->narrow_crcs); 1493 1493 prt_printf(out, "context:\t%u\n", rbio->context); 1494 - prt_printf(out, "ret:\t%s\n", bch2_err_str(rbio->ret)); 1494 + 1495 + int ret = READ_ONCE(rbio->ret); 1496 + if (ret < 0) 1497 + prt_printf(out, "ret:\t%s\n", bch2_err_str(ret)); 1498 + else 1499 + prt_printf(out, "ret:\t%i\n", ret); 1495 1500 1496 1501 prt_printf(out, "flags:\t"); 1497 1502 bch2_prt_bitflags(out, bch2_read_bio_flags, rbio->flags);
+7 -13
fs/bcachefs/journal.c
··· 1283 1283 ret = 0; /* wait and retry */ 1284 1284 1285 1285 bch2_disk_reservation_put(c, &disk_res); 1286 - closure_sync(&cl); 1286 + bch2_wait_on_allocator(c, &cl); 1287 1287 } 1288 1288 1289 1289 return ret; ··· 1474 1474 clear_bit(JOURNAL_running, &j->flags); 1475 1475 } 1476 1476 1477 - int bch2_fs_journal_start(struct journal *j, u64 cur_seq) 1477 + int bch2_fs_journal_start(struct journal *j, u64 last_seq, u64 cur_seq) 1478 1478 { 1479 1479 struct bch_fs *c = container_of(j, struct bch_fs, journal); 1480 1480 struct journal_entry_pin_list *p; 1481 1481 struct journal_replay *i, **_i; 1482 1482 struct genradix_iter iter; 1483 1483 bool had_entries = false; 1484 - u64 last_seq = cur_seq, nr, seq; 1485 1484 1486 1485 /* 1487 1486 * ··· 1494 1495 return -EINVAL; 1495 1496 } 1496 1497 1497 - genradix_for_each_reverse(&c->journal_entries, iter, _i) { 1498 - i = *_i; 1498 + /* Clean filesystem? */ 1499 + if (!last_seq) 1500 + last_seq = cur_seq; 1499 1501 1500 - if (journal_replay_ignore(i)) 1501 - continue; 1502 - 1503 - last_seq = le64_to_cpu(i->j.last_seq); 1504 - break; 1505 - } 1506 - 1507 - nr = cur_seq - last_seq; 1502 + u64 nr = cur_seq - last_seq; 1508 1503 1509 1504 /* 1510 1505 * Extra fudge factor, in case we crashed when the journal pin fifo was ··· 1525 1532 j->pin.back = cur_seq; 1526 1533 atomic64_set(&j->seq, cur_seq - 1); 1527 1534 1535 + u64 seq; 1528 1536 fifo_for_each_entry_ptr(p, &j->pin, seq) 1529 1537 journal_pin_list_init(p, 1); 1530 1538
+1 -1
fs/bcachefs/journal.h
··· 453 453 void bch2_dev_journal_stop(struct journal *, struct bch_dev *); 454 454 455 455 void bch2_fs_journal_stop(struct journal *); 456 - int bch2_fs_journal_start(struct journal *, u64); 456 + int bch2_fs_journal_start(struct journal *, u64, u64); 457 457 void bch2_journal_set_replay_done(struct journal *); 458 458 459 459 void bch2_dev_journal_exit(struct bch_dev *);
+20 -6
fs/bcachefs/journal_io.c
··· 160 160 struct printbuf buf = PRINTBUF; 161 161 int ret = JOURNAL_ENTRY_ADD_OK; 162 162 163 + if (last_seq && c->opts.journal_rewind) 164 + last_seq = min(last_seq, c->opts.journal_rewind); 165 + 163 166 if (!c->journal.oldest_seq_found_ondisk || 164 167 le64_to_cpu(j->seq) < c->journal.oldest_seq_found_ondisk) 165 168 c->journal.oldest_seq_found_ondisk = le64_to_cpu(j->seq); ··· 1433 1430 printbuf_reset(&buf); 1434 1431 prt_printf(&buf, "journal read done, replaying entries %llu-%llu", 1435 1432 *last_seq, *blacklist_seq - 1); 1433 + 1434 + /* 1435 + * Drop blacklisted entries and entries older than last_seq (or start of 1436 + * journal rewind: 1437 + */ 1438 + u64 drop_before = *last_seq; 1439 + if (c->opts.journal_rewind) { 1440 + drop_before = min(drop_before, c->opts.journal_rewind); 1441 + prt_printf(&buf, " (rewinding from %llu)", c->opts.journal_rewind); 1442 + } 1443 + 1444 + *last_seq = drop_before; 1436 1445 if (*start_seq != *blacklist_seq) 1437 1446 prt_printf(&buf, " (unflushed %llu-%llu)", *blacklist_seq, *start_seq - 1); 1438 1447 bch_info(c, "%s", buf.buf); 1439 - 1440 - /* Drop blacklisted entries and entries older than last_seq: */ 1441 1448 genradix_for_each(&c->journal_entries, radix_iter, _i) { 1442 1449 i = *_i; 1443 1450 ··· 1455 1442 continue; 1456 1443 1457 1444 seq = le64_to_cpu(i->j.seq); 1458 - if (seq < *last_seq) { 1445 + if (seq < drop_before) { 1459 1446 journal_replay_free(c, i, false); 1460 1447 continue; 1461 1448 } ··· 1468 1455 } 1469 1456 } 1470 1457 1471 - ret = bch2_journal_check_for_missing(c, *last_seq, *blacklist_seq - 1); 1458 + ret = bch2_journal_check_for_missing(c, drop_before, *blacklist_seq - 1); 1472 1459 if (ret) 1473 1460 goto err; 1474 1461 ··· 1716 1703 bch2_log_msg_start(c, &buf); 1717 1704 1718 1705 if (err == -BCH_ERR_journal_write_err) 1719 - prt_printf(&buf, "unable to write journal to sufficient devices"); 1706 + prt_printf(&buf, "unable to write journal to sufficient devices\n"); 1720 1707 else 1721 
- prt_printf(&buf, "journal write error marking replicas: %s", bch2_err_str(err)); 1708 + prt_printf(&buf, "journal write error marking replicas: %s\n", 1709 + bch2_err_str(err)); 1722 1710 1723 1711 bch2_fs_emergency_read_only2(c, &buf); 1724 1712
+23 -7
fs/bcachefs/namei.c
··· 625 625 { 626 626 unsigned orig_pos = path->pos; 627 627 int ret = 0; 628 + DARRAY(subvol_inum) inums = {}; 629 + 630 + if (!snapshot) { 631 + ret = bch2_subvolume_get_snapshot(trans, subvol, &snapshot); 632 + if (ret) 633 + goto disconnected; 634 + } 628 635 629 636 while (true) { 630 - if (!snapshot) { 631 - ret = bch2_subvolume_get_snapshot(trans, subvol, &snapshot); 632 - if (ret) 633 - goto disconnected; 637 + subvol_inum n = (subvol_inum) { subvol ?: snapshot, inum }; 638 + 639 + if (darray_find_p(inums, i, i->subvol == n.subvol && i->inum == n.inum)) { 640 + prt_str_reversed(path, "(loop)"); 641 + break; 634 642 } 643 + 644 + ret = darray_push(&inums, n); 645 + if (ret) 646 + goto err; 635 647 636 648 struct bch_inode_unpacked inode; 637 649 ret = bch2_inode_find_by_inum_snapshot(trans, inum, snapshot, &inode, 0); ··· 662 650 inum = inode.bi_dir; 663 651 if (inode.bi_parent_subvol) { 664 652 subvol = inode.bi_parent_subvol; 665 - snapshot = 0; 653 + ret = bch2_subvolume_get_snapshot(trans, inode.bi_parent_subvol, &snapshot); 654 + if (ret) 655 + goto disconnected; 666 656 } 667 657 668 658 struct btree_iter d_iter; ··· 676 662 goto disconnected; 677 663 678 664 struct qstr dirent_name = bch2_dirent_get_name(d); 665 + 679 666 prt_bytes_reversed(path, dirent_name.name, dirent_name.len); 680 667 681 668 prt_char(path, '/'); ··· 692 677 goto err; 693 678 694 679 reverse_bytes(path->buf + orig_pos, path->pos - orig_pos); 680 + darray_exit(&inums); 695 681 return 0; 696 682 err: 683 + darray_exit(&inums); 697 684 return ret; 698 685 disconnected: 699 686 if (bch2_err_matches(ret, BCH_ERR_transaction_restart)) ··· 734 717 if (inode_points_to_dirent(target, d)) 735 718 return 0; 736 719 737 - if (!target->bi_dir && 738 - !target->bi_dir_offset) { 720 + if (!bch2_inode_has_backpointer(target)) { 739 721 fsck_err_on(S_ISDIR(target->bi_mode), 740 722 trans, inode_dir_missing_backpointer, 741 723 "directory with missing backpointer\n%s",
+5
fs/bcachefs/opts.h
··· 379 379 OPT_BOOL(), \ 380 380 BCH2_NO_SB_OPT, false, \ 381 381 NULL, "Exit recovery immediately prior to journal replay")\ 382 + x(journal_rewind, u64, \ 383 + OPT_FS|OPT_MOUNT, \ 384 + OPT_UINT(0, U64_MAX), \ 385 + BCH2_NO_SB_OPT, 0, \ 386 + NULL, "Rewind journal") \ 382 387 x(recovery_passes, u64, \ 383 388 OPT_FS|OPT_MOUNT, \ 384 389 OPT_BITFIELD(bch2_recovery_passes), \
+20 -4
fs/bcachefs/recovery.c
··· 607 607 buf.buf, bch2_err_str(ret))) { 608 608 if (btree_id_is_alloc(i)) 609 609 r->error = 0; 610 + ret = 0; 610 611 } 611 612 } 612 613 ··· 693 692 ret = true; 694 693 } 695 694 696 - if (new_version > c->sb.version_incompat && 695 + if (new_version > c->sb.version_incompat_allowed && 697 696 c->opts.version_upgrade == BCH_VERSION_UPGRADE_incompatible) { 698 697 struct printbuf buf = PRINTBUF; 699 698 ··· 757 756 758 757 if (c->opts.nochanges) 759 758 c->opts.read_only = true; 759 + 760 + if (c->opts.journal_rewind) { 761 + bch_info(c, "rewinding journal, fsck required"); 762 + c->opts.fsck = true; 763 + } 764 + 765 + if (go_rw_in_recovery(c)) { 766 + /* 767 + * start workqueues/kworkers early - kthread creation checks for 768 + * pending signals, which is _very_ annoying 769 + */ 770 + ret = bch2_fs_init_rw(c); 771 + if (ret) 772 + goto err; 773 + } 760 774 761 775 mutex_lock(&c->sb_lock); 762 776 struct bch_sb_field_ext *ext = bch2_sb_field_get(c->disk_sb.sb, ext); ··· 981 965 982 966 ret = bch2_journal_log_msg(c, "starting journal at entry %llu, replaying %llu-%llu", 983 967 journal_seq, last_seq, blacklist_seq - 1) ?: 984 - bch2_fs_journal_start(&c->journal, journal_seq); 968 + bch2_fs_journal_start(&c->journal, last_seq, journal_seq); 985 969 if (ret) 986 970 goto err; 987 971 ··· 1142 1126 struct printbuf buf = PRINTBUF; 1143 1127 bch2_log_msg_start(c, &buf); 1144 1128 1145 - prt_printf(&buf, "error in recovery: %s", bch2_err_str(ret)); 1129 + prt_printf(&buf, "error in recovery: %s\n", bch2_err_str(ret)); 1146 1130 bch2_fs_emergency_read_only2(c, &buf); 1147 1131 1148 1132 bch2_print_str(c, KERN_ERR, buf.buf); ··· 1197 1181 * journal_res_get() will crash if called before this has 1198 1182 * set up the journal.pin FIFO and journal.cur pointer: 1199 1183 */ 1200 - ret = bch2_fs_journal_start(&c->journal, 1); 1184 + ret = bch2_fs_journal_start(&c->journal, 1, 1); 1201 1185 if (ret) 1202 1186 goto err; 1203 1187
+9 -10
fs/bcachefs/recovery_passes.c
··· 217 217 218 218 set_bit(BCH_FS_may_go_rw, &c->flags); 219 219 220 - if (keys->nr || 221 - !c->opts.read_only || 222 - !c->sb.clean || 223 - c->opts.recovery_passes || 224 - (c->opts.fsck && !(c->sb.features & BIT_ULL(BCH_FEATURE_no_alloc_info)))) { 220 + if (go_rw_in_recovery(c)) { 225 221 if (c->sb.features & BIT_ULL(BCH_FEATURE_no_alloc_info)) { 226 222 bch_info(c, "mounting a filesystem with no alloc info read-write; will recreate"); 227 223 bch2_reconstruct_alloc(c); ··· 313 317 */ 314 318 bool in_recovery = test_bit(BCH_FS_in_recovery, &c->flags); 315 319 bool persistent = !in_recovery || !(*flags & RUN_RECOVERY_PASS_nopersistent); 320 + bool rewind = in_recovery && 321 + r->curr_pass > pass && 322 + !(r->passes_complete & BIT_ULL(pass)); 316 323 317 324 if (persistent 318 325 ? !(c->sb.recovery_passes_required & BIT_ULL(pass)) ··· 324 325 325 326 if (!(*flags & RUN_RECOVERY_PASS_ratelimit) && 326 327 (r->passes_ratelimiting & BIT_ULL(pass))) 328 + return true; 329 + 330 + if (rewind) 327 331 return true; 328 332 329 333 return false; ··· 342 340 { 343 341 struct bch_fs_recovery *r = &c->recovery; 344 342 int ret = 0; 345 - 346 343 347 344 lockdep_assert_held(&c->sb_lock); 348 345 ··· 413 412 { 414 413 int ret = 0; 415 414 416 - scoped_guard(mutex, &c->sb_lock) { 417 - if (!recovery_pass_needs_set(c, pass, &flags)) 418 - return 0; 419 - 415 + if (recovery_pass_needs_set(c, pass, &flags)) { 416 + guard(mutex)(&c->sb_lock); 420 417 ret = __bch2_run_explicit_recovery_pass(c, out, pass, flags); 421 418 bch2_write_super(c); 422 419 }
+9
fs/bcachefs/recovery_passes.h
··· 17 17 RUN_RECOVERY_PASS_ratelimit = BIT(1), 18 18 }; 19 19 20 + static inline bool go_rw_in_recovery(struct bch_fs *c) 21 + { 22 + return (c->journal_keys.nr || 23 + !c->opts.read_only || 24 + !c->sb.clean || 25 + c->opts.recovery_passes || 26 + (c->opts.fsck && !(c->sb.features & BIT_ULL(BCH_FEATURE_no_alloc_info)))); 27 + } 28 + 20 29 int bch2_run_print_explicit_recovery_pass(struct bch_fs *, enum bch_recovery_pass); 21 30 22 31 int __bch2_run_explicit_recovery_pass(struct bch_fs *, struct printbuf *,
+7 -5
fs/bcachefs/reflink.c
··· 64 64 REFLINK_P_IDX(p.v), 65 65 le32_to_cpu(p.v->front_pad), 66 66 le32_to_cpu(p.v->back_pad)); 67 + 68 + if (REFLINK_P_ERROR(p.v)) 69 + prt_str(out, " error"); 67 70 } 68 71 69 72 bool bch2_reflink_p_merge(struct bch_fs *c, struct bkey_s _l, struct bkey_s_c _r) ··· 272 269 return k; 273 270 274 271 if (unlikely(!bkey_extent_is_reflink_data(k.k))) { 275 - unsigned size = min((u64) k.k->size, 276 - REFLINK_P_IDX(p.v) + p.k->size + le32_to_cpu(p.v->back_pad) - 277 - reflink_offset); 278 - bch2_key_resize(&iter->k, size); 272 + u64 missing_end = min(k.k->p.offset, 273 + REFLINK_P_IDX(p.v) + p.k->size + le32_to_cpu(p.v->back_pad)); 274 + BUG_ON(reflink_offset == missing_end); 279 275 280 276 int ret = bch2_indirect_extent_missing_error(trans, p, reflink_offset, 281 - k.k->p.offset, should_commit); 277 + missing_end, should_commit); 282 278 if (ret) { 283 279 bch2_trans_iter_exit(trans, iter); 284 280 return bkey_s_c_err(ret);
+10 -9
fs/bcachefs/sb-errors_format.h
··· 3 3 #define _BCACHEFS_SB_ERRORS_FORMAT_H 4 4 5 5 enum bch_fsck_flags { 6 - FSCK_CAN_FIX = 1 << 0, 7 - FSCK_CAN_IGNORE = 1 << 1, 8 - FSCK_AUTOFIX = 1 << 2, 6 + FSCK_CAN_FIX = BIT(0), 7 + FSCK_CAN_IGNORE = BIT(1), 8 + FSCK_AUTOFIX = BIT(2), 9 + FSCK_ERR_NO_LOG = BIT(3), 9 10 }; 10 11 11 12 #define BCH_SB_ERRS() \ ··· 218 217 x(inode_str_hash_invalid, 194, 0) \ 219 218 x(inode_v3_fields_start_bad, 195, 0) \ 220 219 x(inode_snapshot_mismatch, 196, 0) \ 221 - x(snapshot_key_missing_inode_snapshot, 314, 0) \ 220 + x(snapshot_key_missing_inode_snapshot, 314, FSCK_AUTOFIX) \ 222 221 x(inode_unlinked_but_clean, 197, 0) \ 223 222 x(inode_unlinked_but_nlink_nonzero, 198, 0) \ 224 223 x(inode_unlinked_and_not_open, 281, 0) \ ··· 252 251 x(deleted_inode_not_unlinked, 214, FSCK_AUTOFIX) \ 253 252 x(deleted_inode_has_child_snapshots, 288, FSCK_AUTOFIX) \ 254 253 x(extent_overlapping, 215, 0) \ 255 - x(key_in_missing_inode, 216, 0) \ 254 + x(key_in_missing_inode, 216, FSCK_AUTOFIX) \ 256 255 x(key_in_wrong_inode_type, 217, 0) \ 257 - x(extent_past_end_of_inode, 218, 0) \ 256 + x(extent_past_end_of_inode, 218, FSCK_AUTOFIX) \ 258 257 x(dirent_empty_name, 219, 0) \ 259 258 x(dirent_val_too_big, 220, 0) \ 260 259 x(dirent_name_too_long, 221, 0) \ 261 260 x(dirent_name_embedded_nul, 222, 0) \ 262 261 x(dirent_name_dot_or_dotdot, 223, 0) \ 263 262 x(dirent_name_has_slash, 224, 0) \ 264 - x(dirent_d_type_wrong, 225, 0) \ 263 + x(dirent_d_type_wrong, 225, FSCK_AUTOFIX) \ 265 264 x(inode_bi_parent_wrong, 226, 0) \ 266 265 x(dirent_in_missing_dir_inode, 227, 0) \ 267 266 x(dirent_in_non_dir_inode, 228, 0) \ 268 - x(dirent_to_missing_inode, 229, 0) \ 267 + x(dirent_to_missing_inode, 229, FSCK_AUTOFIX) \ 269 268 x(dirent_to_overwritten_inode, 302, 0) \ 270 269 x(dirent_to_missing_subvol, 230, 0) \ 271 270 x(dirent_to_itself, 231, 0) \ ··· 301 300 x(btree_node_bkey_bad_u64s, 260, 0) \ 302 301 x(btree_node_topology_empty_interior_node, 261, 0) \ 303 302 x(btree_ptr_v2_min_key_bad, 262, 0) 
\ 304 - x(btree_root_unreadable_and_scan_found_nothing, 263, FSCK_AUTOFIX) \ 303 + x(btree_root_unreadable_and_scan_found_nothing, 263, 0) \ 305 304 x(snapshot_node_missing, 264, FSCK_AUTOFIX) \ 306 305 x(dup_backpointer_to_bad_csum_extent, 265, 0) \ 307 306 x(btree_bitmap_not_marked, 266, FSCK_AUTOFIX) \
+9 -5
fs/bcachefs/snapshot.c
··· 135 135 136 136 bool __bch2_snapshot_is_ancestor(struct bch_fs *c, u32 id, u32 ancestor) 137 137 { 138 - bool ret; 138 + #ifdef CONFIG_BCACHEFS_DEBUG 139 + u32 orig_id = id; 140 + #endif 139 141 140 142 guard(rcu)(); 141 143 struct snapshot_table *t = rcu_dereference(c->snapshots); ··· 149 147 while (id && id < ancestor - IS_ANCESTOR_BITMAP) 150 148 id = get_ancestor_below(t, id, ancestor); 151 149 152 - ret = id && id < ancestor 150 + bool ret = id && id < ancestor 153 151 ? test_ancestor_bitmap(t, id, ancestor) 154 152 : id == ancestor; 155 153 156 - EBUG_ON(ret != __bch2_snapshot_is_ancestor_early(t, id, ancestor)); 154 + EBUG_ON(ret != __bch2_snapshot_is_ancestor_early(t, orig_id, ancestor)); 157 155 return ret; 158 156 } 159 157 ··· 871 869 872 870 for_each_btree_key_norestart(trans, iter, BTREE_ID_snapshot_trees, POS_MIN, 873 871 0, k, ret) { 874 - if (le32_to_cpu(bkey_s_c_to_snapshot_tree(k).v->root_snapshot) == id) { 872 + if (k.k->type == KEY_TYPE_snapshot_tree && 873 + le32_to_cpu(bkey_s_c_to_snapshot_tree(k).v->root_snapshot) == id) { 875 874 tree_id = k.k->p.offset; 876 875 break; 877 876 } ··· 900 897 901 898 for_each_btree_key_norestart(trans, iter, BTREE_ID_subvolumes, POS_MIN, 902 899 0, k, ret) { 903 - if (le32_to_cpu(bkey_s_c_to_subvolume(k).v->snapshot) == id) { 900 + if (k.k->type == KEY_TYPE_subvolume && 901 + le32_to_cpu(bkey_s_c_to_subvolume(k).v->snapshot) == id) { 904 902 snapshot->v.subvol = cpu_to_le32(k.k->p.offset); 905 903 SET_BCH_SNAPSHOT_SUBVOL(&snapshot->v, true); 906 904 break;
+11 -2
fs/bcachefs/super.c
··· 210 210 static int bch2_dev_sysfs_online(struct bch_fs *, struct bch_dev *); 211 211 static void bch2_dev_io_ref_stop(struct bch_dev *, int); 212 212 static void __bch2_dev_read_only(struct bch_fs *, struct bch_dev *); 213 - static int bch2_fs_init_rw(struct bch_fs *); 214 213 215 214 struct bch_fs *bch2_dev_to_fs(dev_t dev) 216 215 { ··· 793 794 return ret; 794 795 } 795 796 796 - static int bch2_fs_init_rw(struct bch_fs *c) 797 + int bch2_fs_init_rw(struct bch_fs *c) 797 798 { 798 799 if (test_bit(BCH_FS_rw_init_done, &c->flags)) 799 800 return 0; ··· 1013 1014 bch2_fs_vfs_init(c); 1014 1015 if (ret) 1015 1016 goto err; 1017 + 1018 + if (go_rw_in_recovery(c)) { 1019 + /* 1020 + * start workqueues/kworkers early - kthread creation checks for 1021 + * pending signals, which is _very_ annoying 1022 + */ 1023 + ret = bch2_fs_init_rw(c); 1024 + if (ret) 1025 + goto err; 1026 + } 1016 1027 1017 1028 #ifdef CONFIG_UNICODE 1018 1029 /* Default encoding until we can potentially have more as an option. */
+1
fs/bcachefs/super.h
··· 46 46 void bch2_fs_free(struct bch_fs *); 47 47 void bch2_fs_stop(struct bch_fs *); 48 48 49 + int bch2_fs_init_rw(struct bch_fs *); 49 50 int bch2_fs_start(struct bch_fs *); 50 51 struct bch_fs *bch2_fs_open(darray_const_str *, struct bch_opts *); 51 52
+28 -97
fs/bcachefs/trace.h
··· 1080 1080 __entry->must_wait) 1081 1081 ); 1082 1082 1083 - TRACE_EVENT(trans_restart_journal_preres_get, 1084 - TP_PROTO(struct btree_trans *trans, 1085 - unsigned long caller_ip, 1086 - unsigned flags), 1087 - TP_ARGS(trans, caller_ip, flags), 1088 - 1089 - TP_STRUCT__entry( 1090 - __array(char, trans_fn, 32 ) 1091 - __field(unsigned long, caller_ip ) 1092 - __field(unsigned, flags ) 1093 - ), 1094 - 1095 - TP_fast_assign( 1096 - strscpy(__entry->trans_fn, trans->fn, sizeof(__entry->trans_fn)); 1097 - __entry->caller_ip = caller_ip; 1098 - __entry->flags = flags; 1099 - ), 1100 - 1101 - TP_printk("%s %pS %x", __entry->trans_fn, 1102 - (void *) __entry->caller_ip, 1103 - __entry->flags) 1104 - ); 1105 - 1083 + #if 0 1084 + /* todo: bring back dynamic fault injection */ 1106 1085 DEFINE_EVENT(transaction_event, trans_restart_fault_inject, 1107 1086 TP_PROTO(struct btree_trans *trans, 1108 1087 unsigned long caller_ip), 1109 1088 TP_ARGS(trans, caller_ip) 1110 1089 ); 1090 + #endif 1111 1091 1112 1092 DEFINE_EVENT(transaction_event, trans_traverse_all, 1113 1093 TP_PROTO(struct btree_trans *trans, ··· 1175 1195 TP_ARGS(trans, caller_ip, path) 1176 1196 ); 1177 1197 1178 - DEFINE_EVENT(transaction_restart_iter, trans_restart_relock_after_fill, 1179 - TP_PROTO(struct btree_trans *trans, 1180 - unsigned long caller_ip, 1181 - struct btree_path *path), 1182 - TP_ARGS(trans, caller_ip, path) 1183 - ); 1184 - 1185 - DEFINE_EVENT(transaction_event, trans_restart_key_cache_upgrade, 1186 - TP_PROTO(struct btree_trans *trans, 1187 - unsigned long caller_ip), 1188 - TP_ARGS(trans, caller_ip) 1189 - ); 1190 - 1191 1198 DEFINE_EVENT(transaction_restart_iter, trans_restart_relock_key_cache_fill, 1192 1199 TP_PROTO(struct btree_trans *trans, 1193 1200 unsigned long caller_ip, ··· 1190 1223 ); 1191 1224 1192 1225 DEFINE_EVENT(transaction_restart_iter, trans_restart_relock_path_intent, 1193 - TP_PROTO(struct btree_trans *trans, 1194 - unsigned long caller_ip, 1195 - struct 
btree_path *path), 1196 - TP_ARGS(trans, caller_ip, path) 1197 - ); 1198 - 1199 - DEFINE_EVENT(transaction_restart_iter, trans_restart_traverse, 1200 1226 TP_PROTO(struct btree_trans *trans, 1201 1227 unsigned long caller_ip, 1202 1228 struct btree_path *path), ··· 1252 1292 __entry->trans_fn, 1253 1293 (void *) __entry->caller_ip, 1254 1294 __entry->bytes) 1255 - ); 1256 - 1257 - TRACE_EVENT(trans_restart_key_cache_key_realloced, 1258 - TP_PROTO(struct btree_trans *trans, 1259 - unsigned long caller_ip, 1260 - struct btree_path *path, 1261 - unsigned old_u64s, 1262 - unsigned new_u64s), 1263 - TP_ARGS(trans, caller_ip, path, old_u64s, new_u64s), 1264 - 1265 - TP_STRUCT__entry( 1266 - __array(char, trans_fn, 32 ) 1267 - __field(unsigned long, caller_ip ) 1268 - __field(enum btree_id, btree_id ) 1269 - TRACE_BPOS_entries(pos) 1270 - __field(u32, old_u64s ) 1271 - __field(u32, new_u64s ) 1272 - ), 1273 - 1274 - TP_fast_assign( 1275 - strscpy(__entry->trans_fn, trans->fn, sizeof(__entry->trans_fn)); 1276 - __entry->caller_ip = caller_ip; 1277 - 1278 - __entry->btree_id = path->btree_id; 1279 - TRACE_BPOS_assign(pos, path->pos); 1280 - __entry->old_u64s = old_u64s; 1281 - __entry->new_u64s = new_u64s; 1282 - ), 1283 - 1284 - TP_printk("%s %pS btree %s pos %llu:%llu:%u old_u64s %u new_u64s %u", 1285 - __entry->trans_fn, 1286 - (void *) __entry->caller_ip, 1287 - bch2_btree_id_str(__entry->btree_id), 1288 - __entry->pos_inode, 1289 - __entry->pos_offset, 1290 - __entry->pos_snapshot, 1291 - __entry->old_u64s, 1292 - __entry->new_u64s) 1293 1295 ); 1294 1296 1295 1297 DEFINE_EVENT(transaction_event, trans_restart_write_buffer_flush, ··· 1408 1486 ); 1409 1487 1410 1488 DEFINE_EVENT(fs_str, io_move_evacuate_bucket, 1489 + TP_PROTO(struct bch_fs *c, const char *str), 1490 + TP_ARGS(c, str) 1491 + ); 1492 + 1493 + DEFINE_EVENT(fs_str, extent_trim_atomic, 1494 + TP_PROTO(struct bch_fs *c, const char *str), 1495 + TP_ARGS(c, str) 1496 + ); 1497 + 1498 + DEFINE_EVENT(fs_str, 
btree_iter_peek_slot, 1499 + TP_PROTO(struct bch_fs *c, const char *str), 1500 + TP_ARGS(c, str) 1501 + ); 1502 + 1503 + DEFINE_EVENT(fs_str, __btree_iter_peek, 1504 + TP_PROTO(struct bch_fs *c, const char *str), 1505 + TP_ARGS(c, str) 1506 + ); 1507 + 1508 + DEFINE_EVENT(fs_str, btree_iter_peek_max, 1509 + TP_PROTO(struct bch_fs *c, const char *str), 1510 + TP_ARGS(c, str) 1511 + ); 1512 + 1513 + DEFINE_EVENT(fs_str, btree_iter_peek_prev_min, 1411 1514 TP_PROTO(struct bch_fs *c, const char *str), 1412 1515 TP_ARGS(c, str) 1413 1516 ); ··· 1849 1902 __entry->dup_locked) 1850 1903 ); 1851 1904 1852 - TRACE_EVENT(btree_path_free_trans_begin, 1853 - TP_PROTO(btree_path_idx_t path), 1854 - TP_ARGS(path), 1855 - 1856 - TP_STRUCT__entry( 1857 - __field(btree_path_idx_t, idx ) 1858 - ), 1859 - 1860 - TP_fast_assign( 1861 - __entry->idx = path; 1862 - ), 1863 - 1864 - TP_printk(" path %3u", __entry->idx) 1865 - ); 1866 - 1867 1905 #else /* CONFIG_BCACHEFS_PATH_TRACEPOINTS */ 1868 1906 #ifndef _TRACE_BCACHEFS_H 1869 1907 ··· 1866 1934 static inline void trace_btree_path_traverse_end(struct btree_trans *trans, struct btree_path *path) {} 1867 1935 static inline void trace_btree_path_set_pos(struct btree_trans *trans, struct btree_path *path, struct bpos *new_pos) {} 1868 1936 static inline void trace_btree_path_free(struct btree_trans *trans, btree_path_idx_t path, struct btree_path *dup) {} 1869 - static inline void trace_btree_path_free_trans_begin(btree_path_idx_t path) {} 1870 1937 1871 1938 #endif 1872 1939 #endif /* CONFIG_BCACHEFS_PATH_TRACEPOINTS */
+4 -1
fs/btrfs/delayed-inode.c
··· 1377 1377 1378 1378 void btrfs_assert_delayed_root_empty(struct btrfs_fs_info *fs_info) 1379 1379 { 1380 - WARN_ON(btrfs_first_delayed_node(fs_info->delayed_root)); 1380 + struct btrfs_delayed_node *node = btrfs_first_delayed_node(fs_info->delayed_root); 1381 + 1382 + if (WARN_ON(node)) 1383 + refcount_dec(&node->refs); 1381 1384 } 1382 1385 1383 1386 static bool could_end_wait(struct btrfs_delayed_root *delayed_root, int seq)
+21 -6
fs/btrfs/disk-io.c
··· 1835 1835 if (refcount_dec_and_test(&root->refs)) { 1836 1836 if (WARN_ON(!xa_empty(&root->inodes))) 1837 1837 xa_destroy(&root->inodes); 1838 + if (WARN_ON(!xa_empty(&root->delayed_nodes))) 1839 + xa_destroy(&root->delayed_nodes); 1838 1840 WARN_ON(test_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state)); 1839 1841 if (root->anon_dev) 1840 1842 free_anon_bdev(root->anon_dev); ··· 2158 2156 found = true; 2159 2157 root = read_tree_root_path(tree_root, path, &key); 2160 2158 if (IS_ERR(root)) { 2161 - if (!btrfs_test_opt(fs_info, IGNOREBADROOTS)) 2162 - ret = PTR_ERR(root); 2159 + ret = PTR_ERR(root); 2163 2160 break; 2164 2161 } 2165 2162 set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state); ··· 4311 4310 * 4312 4311 * So wait for all ongoing ordered extents to complete and then run 4313 4312 * delayed iputs. This works because once we reach this point no one 4314 - * can either create new ordered extents nor create delayed iputs 4315 - * through some other means. 4313 + * can create new ordered extents, but delayed iputs can still be added 4314 + * by a reclaim worker (see comments further below). 4316 4315 * 4317 4316 * Also note that btrfs_wait_ordered_roots() is not safe here, because 4318 4317 * it waits for BTRFS_ORDERED_COMPLETE to be set on an ordered extent, ··· 4323 4322 btrfs_flush_workqueue(fs_info->endio_write_workers); 4324 4323 /* Ordered extents for free space inodes. */ 4325 4324 btrfs_flush_workqueue(fs_info->endio_freespace_worker); 4325 + /* 4326 + * Run delayed iputs in case an async reclaim worker is waiting for them 4327 + * to be run as mentioned above. 4328 + */ 4326 4329 btrfs_run_delayed_iputs(fs_info); 4327 - /* There should be no more workload to generate new delayed iputs. 
*/ 4328 - set_bit(BTRFS_FS_STATE_NO_DELAYED_IPUT, &fs_info->fs_state); 4329 4330 4330 4331 cancel_work_sync(&fs_info->async_reclaim_work); 4331 4332 cancel_work_sync(&fs_info->async_data_reclaim_work); 4332 4333 cancel_work_sync(&fs_info->preempt_reclaim_work); 4333 4334 cancel_work_sync(&fs_info->em_shrinker_work); 4335 + 4336 + /* 4337 + * Run delayed iputs again because an async reclaim worker may have 4338 + * added new ones if it was flushing delalloc: 4339 + * 4340 + * shrink_delalloc() -> btrfs_start_delalloc_roots() -> 4341 + * start_delalloc_inodes() -> btrfs_add_delayed_iput() 4342 + */ 4343 + btrfs_run_delayed_iputs(fs_info); 4344 + 4345 + /* There should be no more workload to generate new delayed iputs. */ 4346 + set_bit(BTRFS_FS_STATE_NO_DELAYED_IPUT, &fs_info->fs_state); 4334 4347 4335 4348 /* Cancel or finish ongoing discard work */ 4336 4349 btrfs_discard_cleanup(fs_info);
+1 -1
fs/btrfs/extent_io.c
··· 4312 4312 spin_unlock(&eb->refs_lock); 4313 4313 continue; 4314 4314 } 4315 - xa_unlock_irq(&fs_info->buffer_tree); 4316 4315 4317 4316 /* 4318 4317 * If tree ref isn't set then we know the ref on this eb is a ··· 4328 4329 * check the folio private at the end. And 4329 4330 * release_extent_buffer() will release the refs_lock. 4330 4331 */ 4332 + xa_unlock_irq(&fs_info->buffer_tree); 4331 4333 release_extent_buffer(eb); 4332 4334 xa_lock_irq(&fs_info->buffer_tree); 4333 4335 }
+12 -4
fs/btrfs/free-space-tree.c
··· 1115 1115 ret = btrfs_search_slot_for_read(extent_root, &key, path, 1, 0); 1116 1116 if (ret < 0) 1117 1117 goto out_locked; 1118 - ASSERT(ret == 0); 1118 + /* 1119 + * If ret is 1 (no key found), it means this is an empty block group, 1120 + * without any extents allocated from it and there's no block group 1121 + * item (key BTRFS_BLOCK_GROUP_ITEM_KEY) located in the extent tree 1122 + * because we are using the block group tree feature, so block group 1123 + * items are stored in the block group tree. It also means there are no 1124 + * extents allocated for block groups with a start offset beyond this 1125 + * block group's end offset (this is the last, highest, block group). 1126 + */ 1127 + if (!btrfs_fs_compat_ro(trans->fs_info, BLOCK_GROUP_TREE)) 1128 + ASSERT(ret == 0); 1119 1129 1120 1130 start = block_group->start; 1121 1131 end = block_group->start + block_group->length; 1122 - while (1) { 1132 + while (ret == 0) { 1123 1133 btrfs_item_key_to_cpu(path->nodes[0], &key, path->slots[0]); 1124 1134 1125 1135 if (key.type == BTRFS_EXTENT_ITEM_KEY || ··· 1159 1149 ret = btrfs_next_item(extent_root, path); 1160 1150 if (ret < 0) 1161 1151 goto out_locked; 1162 - if (ret) 1163 - break; 1164 1152 } 1165 1153 if (start < end) { 1166 1154 ret = __add_to_free_space_tree(trans, block_group, path2,
+68 -21
fs/btrfs/inode.c
··· 4250 4250 4251 4251 ret = btrfs_del_inode_ref(trans, root, name, ino, dir_ino, &index); 4252 4252 if (ret) { 4253 - btrfs_info(fs_info, 4254 - "failed to delete reference to %.*s, inode %llu parent %llu", 4255 - name->len, name->name, ino, dir_ino); 4253 + btrfs_crit(fs_info, 4254 + "failed to delete reference to %.*s, root %llu inode %llu parent %llu", 4255 + name->len, name->name, btrfs_root_id(root), ino, dir_ino); 4256 4256 btrfs_abort_transaction(trans, ret); 4257 4257 goto err; 4258 4258 } ··· 8059 8059 int ret; 8060 8060 int ret2; 8061 8061 bool need_abort = false; 8062 + bool logs_pinned = false; 8062 8063 struct fscrypt_name old_fname, new_fname; 8063 8064 struct fscrypt_str *old_name, *new_name; 8064 8065 ··· 8183 8182 inode_inc_iversion(new_inode); 8184 8183 simple_rename_timestamp(old_dir, old_dentry, new_dir, new_dentry); 8185 8184 8185 + if (old_ino != BTRFS_FIRST_FREE_OBJECTID && 8186 + new_ino != BTRFS_FIRST_FREE_OBJECTID) { 8187 + /* 8188 + * If we are renaming in the same directory (and it's not for 8189 + * root entries) pin the log early to prevent any concurrent 8190 + * task from logging the directory after we removed the old 8191 + * entries and before we add the new entries, otherwise that 8192 + * task can sync a log without any entry for the inodes we are 8193 + * renaming and therefore replaying that log, if a power failure 8194 + * happens after syncing the log, would result in deleting the 8195 + * inodes. 8196 + * 8197 + * If the rename affects two different directories, we want to 8198 + * make sure the that there's no log commit that contains 8199 + * updates for only one of the directories but not for the 8200 + * other. 8201 + * 8202 + * If we are renaming an entry for a root, we don't care about 8203 + * log updates since we called btrfs_set_log_full_commit(). 
8204 + */ 8205 + btrfs_pin_log_trans(root); 8206 + btrfs_pin_log_trans(dest); 8207 + logs_pinned = true; 8208 + } 8209 + 8186 8210 if (old_dentry->d_parent != new_dentry->d_parent) { 8187 8211 btrfs_record_unlink_dir(trans, BTRFS_I(old_dir), 8188 8212 BTRFS_I(old_inode), true); ··· 8279 8253 BTRFS_I(new_inode)->dir_index = new_idx; 8280 8254 8281 8255 /* 8282 - * Now pin the logs of the roots. We do it to ensure that no other task 8283 - * can sync the logs while we are in progress with the rename, because 8284 - * that could result in an inconsistency in case any of the inodes that 8285 - * are part of this rename operation were logged before. 8256 + * Do the log updates for all inodes. 8257 + * 8258 + * If either entry is for a root we don't need to update the logs since 8259 + * we've called btrfs_set_log_full_commit() before. 8286 8260 */ 8287 - if (old_ino != BTRFS_FIRST_FREE_OBJECTID) 8288 - btrfs_pin_log_trans(root); 8289 - if (new_ino != BTRFS_FIRST_FREE_OBJECTID) 8290 - btrfs_pin_log_trans(dest); 8291 - 8292 - /* Do the log updates for all inodes. */ 8293 - if (old_ino != BTRFS_FIRST_FREE_OBJECTID) 8261 + if (logs_pinned) { 8294 8262 btrfs_log_new_name(trans, old_dentry, BTRFS_I(old_dir), 8295 8263 old_rename_ctx.index, new_dentry->d_parent); 8296 - if (new_ino != BTRFS_FIRST_FREE_OBJECTID) 8297 8264 btrfs_log_new_name(trans, new_dentry, BTRFS_I(new_dir), 8298 8265 new_rename_ctx.index, old_dentry->d_parent); 8266 + } 8299 8267 8300 - /* Now unpin the logs. */ 8301 - if (old_ino != BTRFS_FIRST_FREE_OBJECTID) 8302 - btrfs_end_log_trans(root); 8303 - if (new_ino != BTRFS_FIRST_FREE_OBJECTID) 8304 - btrfs_end_log_trans(dest); 8305 8268 out_fail: 8269 + if (logs_pinned) { 8270 + btrfs_end_log_trans(root); 8271 + btrfs_end_log_trans(dest); 8272 + } 8306 8273 ret2 = btrfs_end_transaction(trans); 8307 8274 ret = ret ? 
ret : ret2; 8308 8275 out_notrans: ··· 8345 8326 int ret2; 8346 8327 u64 old_ino = btrfs_ino(BTRFS_I(old_inode)); 8347 8328 struct fscrypt_name old_fname, new_fname; 8329 + bool logs_pinned = false; 8348 8330 8349 8331 if (btrfs_ino(BTRFS_I(new_dir)) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID) 8350 8332 return -EPERM; ··· 8480 8460 inode_inc_iversion(old_inode); 8481 8461 simple_rename_timestamp(old_dir, old_dentry, new_dir, new_dentry); 8482 8462 8463 + if (old_ino != BTRFS_FIRST_FREE_OBJECTID) { 8464 + /* 8465 + * If we are renaming in the same directory (and it's not a 8466 + * root entry) pin the log to prevent any concurrent task from 8467 + * logging the directory after we removed the old entry and 8468 + * before we add the new entry, otherwise that task can sync 8469 + * a log without any entry for the inode we are renaming and 8470 + * therefore replaying that log, if a power failure happens 8471 + * after syncing the log, would result in deleting the inode. 8472 + * 8473 + * If the rename affects two different directories, we want to 8474 + * make sure that there's no log commit that contains 8475 + * updates for only one of the directories but not for the 8476 + * other. 8477 + * 8478 + * If we are renaming an entry for a root, we don't care about 8479 + * log updates since we called btrfs_set_log_full_commit(). 
8480 + */ 8481 + btrfs_pin_log_trans(root); 8482 + btrfs_pin_log_trans(dest); 8483 + logs_pinned = true; 8484 + } 8485 + 8483 8486 if (old_dentry->d_parent != new_dentry->d_parent) 8484 8487 btrfs_record_unlink_dir(trans, BTRFS_I(old_dir), 8485 8488 BTRFS_I(old_inode), true); ··· 8567 8524 if (old_inode->i_nlink == 1) 8568 8525 BTRFS_I(old_inode)->dir_index = index; 8569 8526 8570 - if (old_ino != BTRFS_FIRST_FREE_OBJECTID) 8527 + if (logs_pinned) 8571 8528 btrfs_log_new_name(trans, old_dentry, BTRFS_I(old_dir), 8572 8529 rename_ctx.index, new_dentry->d_parent); 8573 8530 ··· 8583 8540 } 8584 8541 } 8585 8542 out_fail: 8543 + if (logs_pinned) { 8544 + btrfs_end_log_trans(root); 8545 + btrfs_end_log_trans(dest); 8546 + } 8586 8547 ret2 = btrfs_end_transaction(trans); 8587 8548 ret = ret ? ret : ret2; 8588 8549 out_notrans:
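The race this hunk closes can be shown with a toy model: a concurrent fsync that snapshots the directory between the remove-old and add-new steps records neither name, so replaying that log after a crash would delete the inode. Every name below is invented for illustration; this is a simulation of the ordering argument in the comment above, not the btrfs API.

```c
/* Toy model of the rename log-pinning window. */
struct dir { int has_old, has_new, log_pinned; };
struct log { int has_old, has_new, valid; };

/* Concurrent logger: may only snapshot while the log is unpinned. */
static void try_sync_log(const struct dir *d, struct log *l)
{
	if (d->log_pinned)
		return;		/* writer holds the log transaction */
	l->has_old = d->has_old;
	l->has_new = d->has_new;
	l->valid = 1;
}

/* Rename with a log sync racing in the middle of the window. */
static int rename_survives_replay(int pin, struct log *l)
{
	struct dir d = { .has_old = 1, .has_new = 0, .log_pinned = pin };

	d.has_old = 0;			/* remove the old directory entry */
	try_sync_log(&d, l);		/* racing fsync lands here; assume
					 * power fails right after it syncs */
	d.has_new = 1;			/* add the new directory entry */
	if (!l->valid) {		/* fsync was blocked by the pin; it
					 * completes after the unpin */
		d.log_pinned = 0;	/* btrfs_end_log_trans() analogue */
		try_sync_log(&d, l);
	}
	/* replay keeps the inode iff the synced log has some name for it */
	return l->valid && (l->has_old || l->has_new);
}
```

Without the pin the mid-window snapshot is empty and replay loses the inode; with it, the sync is deferred until both entries are consistent.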
+1 -1
fs/btrfs/ioctl.c
··· 3139 3139 return -EPERM; 3140 3140 3141 3141 if (btrfs_fs_incompat(fs_info, EXTENT_TREE_V2)) { 3142 - btrfs_err(fs_info, "scrub is not supported on extent tree v2 yet"); 3142 + btrfs_err(fs_info, "scrub: extent tree v2 not yet supported"); 3143 3143 return -EINVAL; 3144 3144 } 3145 3145
+26 -27
fs/btrfs/scrub.c
··· 557 557 */ 558 558 for (i = 0; i < ipath->fspath->elem_cnt; ++i) 559 559 btrfs_warn_in_rcu(fs_info, 560 - "%s at logical %llu on dev %s, physical %llu, root %llu, inode %llu, offset %llu, length %u, links %u (path: %s)", 560 + "scrub: %s at logical %llu on dev %s, physical %llu root %llu inode %llu offset %llu length %u links %u (path: %s)", 561 561 swarn->errstr, swarn->logical, 562 562 btrfs_dev_name(swarn->dev), 563 563 swarn->physical, ··· 571 571 572 572 err: 573 573 btrfs_warn_in_rcu(fs_info, 574 - "%s at logical %llu on dev %s, physical %llu, root %llu, inode %llu, offset %llu: path resolving failed with ret=%d", 574 + "scrub: %s at logical %llu on dev %s, physical %llu root %llu inode %llu offset %llu: path resolving failed with ret=%d", 575 575 swarn->errstr, swarn->logical, 576 576 btrfs_dev_name(swarn->dev), 577 577 swarn->physical, ··· 596 596 597 597 /* Super block error, no need to search extent tree. */ 598 598 if (is_super) { 599 - btrfs_warn_in_rcu(fs_info, "%s on device %s, physical %llu", 599 + btrfs_warn_in_rcu(fs_info, "scrub: %s on device %s, physical %llu", 600 600 errstr, btrfs_dev_name(dev), physical); 601 601 return; 602 602 } ··· 631 631 &ref_level); 632 632 if (ret < 0) { 633 633 btrfs_warn(fs_info, 634 - "failed to resolve tree backref for logical %llu: %d", 635 - swarn.logical, ret); 634 + "scrub: failed to resolve tree backref for logical %llu: %d", 635 + swarn.logical, ret); 636 636 break; 637 637 } 638 638 if (ret > 0) 639 639 break; 640 640 btrfs_warn_in_rcu(fs_info, 641 - "%s at logical %llu on dev %s, physical %llu: metadata %s (level %d) in tree %llu", 641 + "scrub: %s at logical %llu on dev %s, physical %llu: metadata %s (level %d) in tree %llu", 642 642 errstr, swarn.logical, btrfs_dev_name(dev), 643 643 swarn.physical, (ref_level ? 
"node" : "leaf"), 644 644 ref_level, ref_root); ··· 718 718 scrub_bitmap_set_meta_error(stripe, sector_nr, sectors_per_tree); 719 719 scrub_bitmap_set_error(stripe, sector_nr, sectors_per_tree); 720 720 btrfs_warn_rl(fs_info, 721 - "tree block %llu mirror %u has bad bytenr, has %llu want %llu", 721 + "scrub: tree block %llu mirror %u has bad bytenr, has %llu want %llu", 722 722 logical, stripe->mirror_num, 723 723 btrfs_stack_header_bytenr(header), logical); 724 724 return; ··· 728 728 scrub_bitmap_set_meta_error(stripe, sector_nr, sectors_per_tree); 729 729 scrub_bitmap_set_error(stripe, sector_nr, sectors_per_tree); 730 730 btrfs_warn_rl(fs_info, 731 - "tree block %llu mirror %u has bad fsid, has %pU want %pU", 731 + "scrub: tree block %llu mirror %u has bad fsid, has %pU want %pU", 732 732 logical, stripe->mirror_num, 733 733 header->fsid, fs_info->fs_devices->fsid); 734 734 return; ··· 738 738 scrub_bitmap_set_meta_error(stripe, sector_nr, sectors_per_tree); 739 739 scrub_bitmap_set_error(stripe, sector_nr, sectors_per_tree); 740 740 btrfs_warn_rl(fs_info, 741 - "tree block %llu mirror %u has bad chunk tree uuid, has %pU want %pU", 741 + "scrub: tree block %llu mirror %u has bad chunk tree uuid, has %pU want %pU", 742 742 logical, stripe->mirror_num, 743 743 header->chunk_tree_uuid, fs_info->chunk_tree_uuid); 744 744 return; ··· 760 760 scrub_bitmap_set_meta_error(stripe, sector_nr, sectors_per_tree); 761 761 scrub_bitmap_set_error(stripe, sector_nr, sectors_per_tree); 762 762 btrfs_warn_rl(fs_info, 763 - "tree block %llu mirror %u has bad csum, has " CSUM_FMT " want " CSUM_FMT, 763 + "scrub: tree block %llu mirror %u has bad csum, has " CSUM_FMT " want " CSUM_FMT, 764 764 logical, stripe->mirror_num, 765 765 CSUM_FMT_VALUE(fs_info->csum_size, on_disk_csum), 766 766 CSUM_FMT_VALUE(fs_info->csum_size, calculated_csum)); ··· 771 771 scrub_bitmap_set_meta_gen_error(stripe, sector_nr, sectors_per_tree); 772 772 scrub_bitmap_set_error(stripe, sector_nr, 
sectors_per_tree); 773 773 btrfs_warn_rl(fs_info, 774 - "tree block %llu mirror %u has bad generation, has %llu want %llu", 774 + "scrub: tree block %llu mirror %u has bad generation, has %llu want %llu", 775 775 logical, stripe->mirror_num, 776 776 btrfs_stack_header_generation(header), 777 777 stripe->sectors[sector_nr].generation); ··· 814 814 */ 815 815 if (unlikely(sector_nr + sectors_per_tree > stripe->nr_sectors)) { 816 816 btrfs_warn_rl(fs_info, 817 - "tree block at %llu crosses stripe boundary %llu", 817 + "scrub: tree block at %llu crosses stripe boundary %llu", 818 818 stripe->logical + 819 819 (sector_nr << fs_info->sectorsize_bits), 820 820 stripe->logical); ··· 1046 1046 if (repaired) { 1047 1047 if (dev) { 1048 1048 btrfs_err_rl_in_rcu(fs_info, 1049 - "fixed up error at logical %llu on dev %s physical %llu", 1049 + "scrub: fixed up error at logical %llu on dev %s physical %llu", 1050 1050 stripe->logical, btrfs_dev_name(dev), 1051 1051 physical); 1052 1052 } else { 1053 1053 btrfs_err_rl_in_rcu(fs_info, 1054 - "fixed up error at logical %llu on mirror %u", 1054 + "scrub: fixed up error at logical %llu on mirror %u", 1055 1055 stripe->logical, stripe->mirror_num); 1056 1056 } 1057 1057 continue; ··· 1060 1060 /* The remaining are all for unrepaired. 
*/ 1061 1061 if (dev) { 1062 1062 btrfs_err_rl_in_rcu(fs_info, 1063 - "unable to fixup (regular) error at logical %llu on dev %s physical %llu", 1063 + "scrub: unable to fixup (regular) error at logical %llu on dev %s physical %llu", 1064 1064 stripe->logical, btrfs_dev_name(dev), 1065 1065 physical); 1066 1066 } else { 1067 1067 btrfs_err_rl_in_rcu(fs_info, 1068 - "unable to fixup (regular) error at logical %llu on mirror %u", 1068 + "scrub: unable to fixup (regular) error at logical %llu on mirror %u", 1069 1069 stripe->logical, stripe->mirror_num); 1070 1070 } 1071 1071 ··· 1593 1593 physical, 1594 1594 sctx->write_pointer); 1595 1595 if (ret) 1596 - btrfs_err(fs_info, 1597 - "zoned: failed to recover write pointer"); 1596 + btrfs_err(fs_info, "scrub: zoned: failed to recover write pointer"); 1598 1597 } 1599 1598 mutex_unlock(&sctx->wr_lock); 1600 1599 btrfs_dev_clear_zone_empty(sctx->wr_tgtdev, physical); ··· 1657 1658 int ret; 1658 1659 1659 1660 if (unlikely(!extent_root || !csum_root)) { 1660 - btrfs_err(fs_info, "no valid extent or csum root for scrub"); 1661 + btrfs_err(fs_info, "scrub: no valid extent or csum root found"); 1661 1662 return -EUCLEAN; 1662 1663 } 1663 1664 memset(stripe->sectors, 0, sizeof(struct scrub_sector_verification) * ··· 1906 1907 struct btrfs_fs_info *fs_info = stripe->bg->fs_info; 1907 1908 1908 1909 btrfs_err(fs_info, 1909 - "stripe %llu has unrepaired metadata sector at %llu", 1910 + "scrub: stripe %llu has unrepaired metadata sector at logical %llu", 1910 1911 stripe->logical, 1911 1912 stripe->logical + (i << fs_info->sectorsize_bits)); 1912 1913 return true; ··· 2166 2167 bitmap_and(&error, &error, &has_extent, stripe->nr_sectors); 2167 2168 if (!bitmap_empty(&error, stripe->nr_sectors)) { 2168 2169 btrfs_err(fs_info, 2169 - "unrepaired sectors detected, full stripe %llu data stripe %u errors %*pbl", 2170 + "scrub: unrepaired sectors detected, full stripe %llu data stripe %u errors %*pbl", 2170 2171 full_stripe_start, i, 
stripe->nr_sectors, 2171 2172 &error); 2172 2173 ret = -EIO; ··· 2788 2789 ro_set = 0; 2789 2790 } else if (ret == -ETXTBSY) { 2790 2791 btrfs_warn(fs_info, 2791 - "skipping scrub of block group %llu due to active swapfile", 2792 + "scrub: skipping scrub of block group %llu due to active swapfile", 2792 2793 cache->start); 2793 2794 scrub_pause_off(fs_info); 2794 2795 ret = 0; 2795 2796 goto skip_unfreeze; 2796 2797 } else { 2797 - btrfs_warn(fs_info, 2798 - "failed setting block group ro: %d", ret); 2798 + btrfs_warn(fs_info, "scrub: failed setting block group ro: %d", 2799 + ret); 2799 2800 btrfs_unfreeze_block_group(cache); 2800 2801 btrfs_put_block_group(cache); 2801 2802 scrub_pause_off(fs_info); ··· 2891 2892 ret = btrfs_check_super_csum(fs_info, sb); 2892 2893 if (ret != 0) { 2893 2894 btrfs_err_rl(fs_info, 2894 - "super block at physical %llu devid %llu has bad csum", 2895 + "scrub: super block at physical %llu devid %llu has bad csum", 2895 2896 physical, dev->devid); 2896 2897 return -EIO; 2897 2898 } 2898 2899 if (btrfs_super_generation(sb) != generation) { 2899 2900 btrfs_err_rl(fs_info, 2900 - "super block at physical %llu devid %llu has bad generation %llu expect %llu", 2901 + "scrub: super block at physical %llu devid %llu has bad generation %llu expect %llu", 2901 2902 physical, dev->devid, 2902 2903 btrfs_super_generation(sb), generation); 2903 2904 return -EUCLEAN; ··· 3058 3059 !test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state)) { 3059 3060 mutex_unlock(&fs_info->fs_devices->device_list_mutex); 3060 3061 btrfs_err_in_rcu(fs_info, 3061 - "scrub on devid %llu: filesystem on %s is not writable", 3062 + "scrub: devid %llu: filesystem on %s is not writable", 3062 3063 devid, btrfs_dev_name(dev)); 3063 3064 ret = -EROFS; 3064 3065 goto out;
+9 -8
fs/btrfs/tree-log.c
··· 668 668 extent_end = ALIGN(start + size, 669 669 fs_info->sectorsize); 670 670 } else { 671 - ret = 0; 672 - goto out; 671 + btrfs_err(fs_info, 672 + "unexpected extent type=%d root=%llu inode=%llu offset=%llu", 673 + found_type, btrfs_root_id(root), key->objectid, key->offset); 674 + return -EUCLEAN; 673 675 } 674 676 675 677 inode = read_one_inode(root, key->objectid); 676 - if (!inode) { 677 - ret = -EIO; 678 - goto out; 679 - } 678 + if (!inode) 679 + return -EIO; 680 680 681 681 /* 682 682 * first check to see if we already have this extent in the ··· 961 961 ret = unlink_inode_for_log_replay(trans, dir, inode, &name); 962 962 out: 963 963 kfree(name.name); 964 - iput(&inode->vfs_inode); 964 + if (inode) 965 + iput(&inode->vfs_inode); 965 966 return ret; 966 967 } 967 968 ··· 1177 1176 ret = unlink_inode_for_log_replay(trans, 1178 1177 victim_parent, 1179 1178 inode, &victim_name); 1179 + iput(&victim_parent->vfs_inode); 1180 1180 } 1181 - iput(&victim_parent->vfs_inode); 1182 1181 kfree(victim_name.name); 1183 1182 if (ret) 1184 1183 return ret;
+6
fs/btrfs/volumes.c
··· 3282 3282 device->bytes_used - dev_extent_len); 3283 3283 atomic64_add(dev_extent_len, &fs_info->free_chunk_space); 3284 3284 btrfs_clear_space_info_full(fs_info); 3285 + 3286 + if (list_empty(&device->post_commit_list)) { 3287 + list_add_tail(&device->post_commit_list, 3288 + &trans->transaction->dev_update_list); 3289 + } 3290 + 3285 3291 mutex_unlock(&fs_info->chunk_mutex); 3286 3292 } 3287 3293 }
+72 -14
fs/btrfs/zoned.c
··· 1403 1403 static int btrfs_load_block_group_dup(struct btrfs_block_group *bg, 1404 1404 struct btrfs_chunk_map *map, 1405 1405 struct zone_info *zone_info, 1406 - unsigned long *active) 1406 + unsigned long *active, 1407 + u64 last_alloc) 1407 1408 { 1408 1409 struct btrfs_fs_info *fs_info = bg->fs_info; 1409 1410 ··· 1427 1426 zone_info[1].physical); 1428 1427 return -EIO; 1429 1428 } 1429 + 1430 + if (zone_info[0].alloc_offset == WP_CONVENTIONAL) 1431 + zone_info[0].alloc_offset = last_alloc; 1432 + 1433 + if (zone_info[1].alloc_offset == WP_CONVENTIONAL) 1434 + zone_info[1].alloc_offset = last_alloc; 1435 + 1430 1436 if (zone_info[0].alloc_offset != zone_info[1].alloc_offset) { 1431 1437 btrfs_err(bg->fs_info, 1432 1438 "zoned: write pointer offset mismatch of zones in DUP profile"); ··· 1454 1446 static int btrfs_load_block_group_raid1(struct btrfs_block_group *bg, 1455 1447 struct btrfs_chunk_map *map, 1456 1448 struct zone_info *zone_info, 1457 - unsigned long *active) 1449 + unsigned long *active, 1450 + u64 last_alloc) 1458 1451 { 1459 1452 struct btrfs_fs_info *fs_info = bg->fs_info; 1460 1453 int i; ··· 1470 1461 bg->zone_capacity = min_not_zero(zone_info[0].capacity, zone_info[1].capacity); 1471 1462 1472 1463 for (i = 0; i < map->num_stripes; i++) { 1473 - if (zone_info[i].alloc_offset == WP_MISSING_DEV || 1474 - zone_info[i].alloc_offset == WP_CONVENTIONAL) 1464 + if (zone_info[i].alloc_offset == WP_MISSING_DEV) 1475 1465 continue; 1466 + 1467 + if (zone_info[i].alloc_offset == WP_CONVENTIONAL) 1468 + zone_info[i].alloc_offset = last_alloc; 1476 1469 1477 1470 if ((zone_info[0].alloc_offset != zone_info[i].alloc_offset) && 1478 1471 !btrfs_test_opt(fs_info, DEGRADED)) { ··· 1505 1494 static int btrfs_load_block_group_raid0(struct btrfs_block_group *bg, 1506 1495 struct btrfs_chunk_map *map, 1507 1496 struct zone_info *zone_info, 1508 - unsigned long *active) 1497 + unsigned long *active, 1498 + u64 last_alloc) 1509 1499 { 1510 1500 struct 
btrfs_fs_info *fs_info = bg->fs_info; 1511 1501 ··· 1517 1505 } 1518 1506 1519 1507 for (int i = 0; i < map->num_stripes; i++) { 1520 - if (zone_info[i].alloc_offset == WP_MISSING_DEV || 1521 - zone_info[i].alloc_offset == WP_CONVENTIONAL) 1508 + if (zone_info[i].alloc_offset == WP_MISSING_DEV) 1522 1509 continue; 1510 + 1511 + if (zone_info[i].alloc_offset == WP_CONVENTIONAL) { 1512 + u64 stripe_nr, full_stripe_nr; 1513 + u64 stripe_offset; 1514 + int stripe_index; 1515 + 1516 + stripe_nr = div64_u64(last_alloc, map->stripe_size); 1517 + stripe_offset = stripe_nr * map->stripe_size; 1518 + full_stripe_nr = div_u64(stripe_nr, map->num_stripes); 1519 + div_u64_rem(stripe_nr, map->num_stripes, &stripe_index); 1520 + 1521 + zone_info[i].alloc_offset = 1522 + full_stripe_nr * map->stripe_size; 1523 + 1524 + if (stripe_index > i) 1525 + zone_info[i].alloc_offset += map->stripe_size; 1526 + else if (stripe_index == i) 1527 + zone_info[i].alloc_offset += 1528 + (last_alloc - stripe_offset); 1529 + } 1523 1530 1524 1531 if (test_bit(0, active) != test_bit(i, active)) { 1525 1532 if (!btrfs_zone_activate(bg)) ··· 1557 1526 static int btrfs_load_block_group_raid10(struct btrfs_block_group *bg, 1558 1527 struct btrfs_chunk_map *map, 1559 1528 struct zone_info *zone_info, 1560 - unsigned long *active) 1529 + unsigned long *active, 1530 + u64 last_alloc) 1561 1531 { 1562 1532 struct btrfs_fs_info *fs_info = bg->fs_info; 1563 1533 ··· 1569 1537 } 1570 1538 1571 1539 for (int i = 0; i < map->num_stripes; i++) { 1572 - if (zone_info[i].alloc_offset == WP_MISSING_DEV || 1573 - zone_info[i].alloc_offset == WP_CONVENTIONAL) 1540 + if (zone_info[i].alloc_offset == WP_MISSING_DEV) 1574 1541 continue; 1575 1542 1576 1543 if (test_bit(0, active) != test_bit(i, active)) { ··· 1578 1547 } else { 1579 1548 if (test_bit(0, active)) 1580 1549 set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &bg->runtime_flags); 1550 + } 1551 + 1552 + if (zone_info[i].alloc_offset == WP_CONVENTIONAL) { 1553 + u64 
stripe_nr, full_stripe_nr; 1554 + u64 stripe_offset; 1555 + int stripe_index; 1556 + 1557 + stripe_nr = div64_u64(last_alloc, map->stripe_size); 1558 + stripe_offset = stripe_nr * map->stripe_size; 1559 + full_stripe_nr = div_u64(stripe_nr, 1560 + map->num_stripes / map->sub_stripes); 1561 + div_u64_rem(stripe_nr, 1562 + (map->num_stripes / map->sub_stripes), 1563 + &stripe_index); 1564 + 1565 + zone_info[i].alloc_offset = 1566 + full_stripe_nr * map->stripe_size; 1567 + 1568 + if (stripe_index > (i / map->sub_stripes)) 1569 + zone_info[i].alloc_offset += map->stripe_size; 1570 + else if (stripe_index == (i / map->sub_stripes)) 1571 + zone_info[i].alloc_offset += 1572 + (last_alloc - stripe_offset); 1581 1573 } 1582 1574 1583 1575 if ((i % map->sub_stripes) == 0) { ··· 1691 1637 ret = btrfs_load_block_group_single(cache, &zone_info[0], active); 1692 1638 break; 1693 1639 case BTRFS_BLOCK_GROUP_DUP: 1694 - ret = btrfs_load_block_group_dup(cache, map, zone_info, active); 1640 + ret = btrfs_load_block_group_dup(cache, map, zone_info, active, 1641 + last_alloc); 1695 1642 break; 1696 1643 case BTRFS_BLOCK_GROUP_RAID1: 1697 1644 case BTRFS_BLOCK_GROUP_RAID1C3: 1698 1645 case BTRFS_BLOCK_GROUP_RAID1C4: 1699 - ret = btrfs_load_block_group_raid1(cache, map, zone_info, active); 1646 + ret = btrfs_load_block_group_raid1(cache, map, zone_info, 1647 + active, last_alloc); 1700 1648 break; 1701 1649 case BTRFS_BLOCK_GROUP_RAID0: 1702 - ret = btrfs_load_block_group_raid0(cache, map, zone_info, active); 1650 + ret = btrfs_load_block_group_raid0(cache, map, zone_info, 1651 + active, last_alloc); 1703 1652 break; 1704 1653 case BTRFS_BLOCK_GROUP_RAID10: 1705 - ret = btrfs_load_block_group_raid10(cache, map, zone_info, active); 1654 + ret = btrfs_load_block_group_raid10(cache, map, zone_info, 1655 + active, last_alloc); 1706 1656 break; 1707 1657 case BTRFS_BLOCK_GROUP_RAID5: 1708 1658 case BTRFS_BLOCK_GROUP_RAID6:
+3
fs/erofs/fileio.c
··· 47 47 48 48 static void erofs_fileio_rq_submit(struct erofs_fileio_rq *rq) 49 49 { 50 + const struct cred *old_cred; 50 51 struct iov_iter iter; 51 52 int ret; 52 53 ··· 61 60 rq->iocb.ki_flags = IOCB_DIRECT; 62 61 iov_iter_bvec(&iter, ITER_DEST, rq->bvecs, rq->bio.bi_vcnt, 63 62 rq->bio.bi_iter.bi_size); 63 + old_cred = override_creds(rq->iocb.ki_filp->f_cred); 64 64 ret = vfs_iocb_iter_read(rq->iocb.ki_filp, &rq->iocb, &iter); 65 + revert_creds(old_cred); 65 66 if (ret != -EIOCBQUEUED) 66 67 erofs_fileio_ki_complete(&rq->iocb, ret); 67 68 }
+4 -6
fs/erofs/zmap.c
··· 597 597 598 598 if (la > map->m_la) { 599 599 r = mid; 600 + if (la > lend) { 601 + DBG_BUGON(1); 602 + return -EFSCORRUPTED; 603 + } 600 604 lend = la; 601 605 } else { 602 606 l = mid + 1; ··· 639 635 } 640 636 } 641 637 map->m_llen = lend - map->m_la; 642 - if (!last && map->m_llen < sb->s_blocksize) { 643 - erofs_err(sb, "extent too small %llu @ offset %llu of nid %llu", 644 - map->m_llen, map->m_la, vi->nid); 645 - DBG_BUGON(1); 646 - return -EFSCORRUPTED; 647 - } 648 638 return 0; 649 639 } 650 640
+38
fs/f2fs/file.c
··· 35 35 #include <trace/events/f2fs.h> 36 36 #include <uapi/linux/f2fs.h> 37 37 38 + static void f2fs_zero_post_eof_page(struct inode *inode, loff_t new_size) 39 + { 40 + loff_t old_size = i_size_read(inode); 41 + 42 + if (old_size >= new_size) 43 + return; 44 + 45 + /* zero or drop pages only in range of [old_size, new_size] */ 46 + truncate_pagecache(inode, old_size); 47 + } 48 + 38 49 static vm_fault_t f2fs_filemap_fault(struct vm_fault *vmf) 39 50 { 40 51 struct inode *inode = file_inode(vmf->vma->vm_file); ··· 114 103 115 104 f2fs_bug_on(sbi, f2fs_has_inline_data(inode)); 116 105 106 + filemap_invalidate_lock(inode->i_mapping); 107 + f2fs_zero_post_eof_page(inode, (folio->index + 1) << PAGE_SHIFT); 108 + filemap_invalidate_unlock(inode->i_mapping); 109 + 117 110 file_update_time(vmf->vma->vm_file); 118 111 filemap_invalidate_lock_shared(inode->i_mapping); 112 + 119 113 folio_lock(folio); 120 114 if (unlikely(folio->mapping != inode->i_mapping || 121 115 folio_pos(folio) > i_size_read(inode) || ··· 1125 1109 f2fs_down_write(&fi->i_gc_rwsem[WRITE]); 1126 1110 filemap_invalidate_lock(inode->i_mapping); 1127 1111 1112 + if (attr->ia_size > old_size) 1113 + f2fs_zero_post_eof_page(inode, attr->ia_size); 1128 1114 truncate_setsize(inode, attr->ia_size); 1129 1115 1130 1116 if (attr->ia_size <= old_size) ··· 1244 1226 ret = f2fs_convert_inline_inode(inode); 1245 1227 if (ret) 1246 1228 return ret; 1229 + 1230 + filemap_invalidate_lock(inode->i_mapping); 1231 + f2fs_zero_post_eof_page(inode, offset + len); 1232 + filemap_invalidate_unlock(inode->i_mapping); 1247 1233 1248 1234 pg_start = ((unsigned long long) offset) >> PAGE_SHIFT; 1249 1235 pg_end = ((unsigned long long) offset + len) >> PAGE_SHIFT; ··· 1532 1510 f2fs_down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]); 1533 1511 filemap_invalidate_lock(inode->i_mapping); 1534 1512 1513 + f2fs_zero_post_eof_page(inode, offset + len); 1514 + 1535 1515 f2fs_lock_op(sbi); 1536 1516 f2fs_drop_extent_tree(inode); 1537 1517 
truncate_pagecache(inode, offset); ··· 1654 1630 ret = filemap_write_and_wait_range(mapping, offset, offset + len - 1); 1655 1631 if (ret) 1656 1632 return ret; 1633 + 1634 + filemap_invalidate_lock(mapping); 1635 + f2fs_zero_post_eof_page(inode, offset + len); 1636 + filemap_invalidate_unlock(mapping); 1657 1637 1658 1638 pg_start = ((unsigned long long) offset) >> PAGE_SHIFT; 1659 1639 pg_end = ((unsigned long long) offset + len) >> PAGE_SHIFT; ··· 1790 1762 /* avoid gc operation during block exchange */ 1791 1763 f2fs_down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]); 1792 1764 filemap_invalidate_lock(mapping); 1765 + 1766 + f2fs_zero_post_eof_page(inode, offset + len); 1793 1767 truncate_pagecache(inode, offset); 1794 1768 1795 1769 while (!ret && idx > pg_start) { ··· 1848 1818 err = f2fs_convert_inline_inode(inode); 1849 1819 if (err) 1850 1820 return err; 1821 + 1822 + filemap_invalidate_lock(inode->i_mapping); 1823 + f2fs_zero_post_eof_page(inode, offset + len); 1824 + filemap_invalidate_unlock(inode->i_mapping); 1851 1825 1852 1826 f2fs_balance_fs(sbi, true); 1853 1827 ··· 4894 4860 err = file_modified(file); 4895 4861 if (err) 4896 4862 return err; 4863 + 4864 + filemap_invalidate_lock(inode->i_mapping); 4865 + f2fs_zero_post_eof_page(inode, iocb->ki_pos + iov_iter_count(from)); 4866 + filemap_invalidate_unlock(inode->i_mapping); 4897 4867 return count; 4898 4868 } 4899 4869
-1
fs/f2fs/node.c
··· 2078 2078 2079 2079 if (!__write_node_folio(folio, false, &submitted, 2080 2080 wbc, do_balance, io_type, NULL)) { 2081 - folio_unlock(folio); 2082 2081 folio_batch_release(&fbatch); 2083 2082 ret = -EIO; 2084 2083 goto out;
+6 -2
fs/file.c
··· 1198 1198 if (!(file->f_mode & FMODE_ATOMIC_POS) && !file->f_op->iterate_shared) 1199 1199 return false; 1200 1200 1201 - VFS_WARN_ON_ONCE((file_count(file) > 1) && 1202 - !mutex_is_locked(&file->f_pos_lock)); 1201 + /* 1202 + * Note that we are not guaranteed to be called after fdget_pos() on 1203 + * this file obj, in which case the caller is expected to provide the 1204 + * appropriate locking. 1205 + */ 1206 + 1203 1207 return true; 1204 1208 } 1205 1209
+4
fs/fuse/inode.c
··· 9 9 #include "fuse_i.h" 10 10 #include "dev_uring_i.h" 11 11 12 + #include <linux/dax.h> 12 13 #include <linux/pagemap.h> 13 14 #include <linux/slab.h> 14 15 #include <linux/file.h> ··· 162 161 163 162 /* Will write inode on close/munmap and in all other dirtiers */ 164 163 WARN_ON(inode->i_state & I_DIRTY_INODE); 164 + 165 + if (FUSE_IS_DAX(inode)) 166 + dax_break_layout_final(inode); 165 167 166 168 truncate_inode_pages_final(&inode->i_data); 167 169 clear_inode(inode);
+13 -4
fs/namei.c
··· 2917 2917 * @base: base directory to lookup from 2918 2918 * 2919 2919 * Look up a dentry by name in the dcache, returning NULL if it does not 2920 - * currently exist. The function does not try to create a dentry. 2920 + * currently exist. The function does not try to create a dentry and if one 2921 + * is found it doesn't try to revalidate it. 2921 2922 * 2922 2923 * Note that this routine is purely a helper for filesystem usage and should 2923 2924 * not be called by generic code. It does no permission checking. ··· 2934 2933 if (err) 2935 2934 return ERR_PTR(err); 2936 2935 2937 - return lookup_dcache(name, base, 0); 2936 + return d_lookup(base, name); 2938 2937 } 2939 2938 EXPORT_SYMBOL(try_lookup_noperm); 2940 2939 ··· 3058 3057 * Note that this routine is purely a helper for filesystem usage and should 3059 3058 * not be called by generic code. It does no permission checking. 3060 3059 * 3061 - * Unlike lookup_noperm, it should be called without the parent 3060 + * Unlike lookup_noperm(), it should be called without the parent 3062 3061 * i_rwsem held, and will take the i_rwsem itself if necessary. 3062 + * 3063 + * Unlike try_lookup_noperm() it *does* revalidate the dentry if it already 3064 + * existed. 3063 3065 */ 3064 3066 struct dentry *lookup_noperm_unlocked(struct qstr *name, struct dentry *base) 3065 3067 { 3066 3068 struct dentry *ret; 3069 + int err; 3067 3070 3068 - ret = try_lookup_noperm(name, base); 3071 + err = lookup_noperm_common(name, base); 3072 + if (err) 3073 + return ERR_PTR(err); 3074 + 3075 + ret = lookup_dcache(name, base, 0); 3069 3076 if (!ret) 3070 3077 ret = lookup_slow(name, base, 0); 3071 3078 return ret;
+67 -50
fs/namespace.c
··· 2310 2310 return dst_mnt; 2311 2311 } 2312 2312 2313 - /* Caller should check returned pointer for errors */ 2314 - 2315 - struct vfsmount *collect_mounts(const struct path *path) 2313 + static inline bool extend_array(struct path **res, struct path **to_free, 2314 + unsigned n, unsigned *count, unsigned new_count) 2316 2315 { 2317 - struct mount *tree; 2318 - namespace_lock(); 2319 - if (!check_mnt(real_mount(path->mnt))) 2320 - tree = ERR_PTR(-EINVAL); 2321 - else 2322 - tree = copy_tree(real_mount(path->mnt), path->dentry, 2323 - CL_COPY_ALL | CL_PRIVATE); 2324 - namespace_unlock(); 2325 - if (IS_ERR(tree)) 2326 - return ERR_CAST(tree); 2327 - return &tree->mnt; 2316 + struct path *p; 2317 + 2318 + if (likely(n < *count)) 2319 + return true; 2320 + p = kmalloc_array(new_count, sizeof(struct path), GFP_KERNEL); 2321 + if (p && *count) 2322 + memcpy(p, *res, *count * sizeof(struct path)); 2323 + *count = new_count; 2324 + kfree(*to_free); 2325 + *to_free = *res = p; 2326 + return p; 2327 + } 2328 + 2329 + struct path *collect_paths(const struct path *path, 2330 + struct path *prealloc, unsigned count) 2331 + { 2332 + struct mount *root = real_mount(path->mnt); 2333 + struct mount *child; 2334 + struct path *res = prealloc, *to_free = NULL; 2335 + unsigned n = 0; 2336 + 2337 + guard(rwsem_read)(&namespace_sem); 2338 + 2339 + if (!check_mnt(root)) 2340 + return ERR_PTR(-EINVAL); 2341 + if (!extend_array(&res, &to_free, 0, &count, 32)) 2342 + return ERR_PTR(-ENOMEM); 2343 + res[n++] = *path; 2344 + list_for_each_entry(child, &root->mnt_mounts, mnt_child) { 2345 + if (!is_subdir(child->mnt_mountpoint, path->dentry)) 2346 + continue; 2347 + for (struct mount *m = child; m; m = next_mnt(m, child)) { 2348 + if (!extend_array(&res, &to_free, n, &count, 2 * count)) 2349 + return ERR_PTR(-ENOMEM); 2350 + res[n].mnt = &m->mnt; 2351 + res[n].dentry = m->mnt.mnt_root; 2352 + n++; 2353 + } 2354 + } 2355 + if (!extend_array(&res, &to_free, n, &count, count + 1)) 2356 + 
return ERR_PTR(-ENOMEM); 2357 + memset(res + n, 0, (count - n) * sizeof(struct path)); 2358 + for (struct path *p = res; p->mnt; p++) 2359 + path_get(p); 2360 + return res; 2361 + } 2362 + 2363 + void drop_collected_paths(struct path *paths, struct path *prealloc) 2364 + { 2365 + for (struct path *p = paths; p->mnt; p++) 2366 + path_put(p); 2367 + if (paths != prealloc) 2368 + kfree(paths); 2328 2369 } 2329 2370 2330 2371 static void free_mnt_ns(struct mnt_namespace *); ··· 2440 2399 /* Make sure we notice when we leak mounts. */ 2441 2400 VFS_WARN_ON_ONCE(!mnt_ns_empty(ns)); 2442 2401 free_mnt_ns(ns); 2443 - } 2444 - 2445 - void drop_collected_mounts(struct vfsmount *mnt) 2446 - { 2447 - namespace_lock(); 2448 - lock_mount_hash(); 2449 - umount_tree(real_mount(mnt), 0); 2450 - unlock_mount_hash(); 2451 - namespace_unlock(); 2452 2402 } 2453 2403 2454 2404 static bool __has_locked_children(struct mount *mnt, struct dentry *dentry) ··· 2542 2510 return &new_mnt->mnt; 2543 2511 } 2544 2512 EXPORT_SYMBOL_GPL(clone_private_mount); 2545 - 2546 - int iterate_mounts(int (*f)(struct vfsmount *, void *), void *arg, 2547 - struct vfsmount *root) 2548 - { 2549 - struct mount *mnt; 2550 - int res = f(root, arg); 2551 - if (res) 2552 - return res; 2553 - list_for_each_entry(mnt, &real_mount(root)->mnt_list, mnt_list) { 2554 - res = f(&mnt->mnt, arg); 2555 - if (res) 2556 - return res; 2557 - } 2558 - return 0; 2559 - } 2560 2513 2561 2514 static void lock_mnt_tree(struct mount *mnt) 2562 2515 { ··· 2768 2751 hlist_for_each_entry_safe(child, n, &tree_list, mnt_hash) { 2769 2752 struct mount *q; 2770 2753 hlist_del_init(&child->mnt_hash); 2771 - q = __lookup_mnt(&child->mnt_parent->mnt, 2772 - child->mnt_mountpoint); 2773 - if (q) 2774 - mnt_change_mountpoint(child, smp, q); 2775 2754 /* Notice when we are propagating across user namespaces */ 2776 2755 if (child->mnt_parent->mnt_ns->user_ns != user_ns) 2777 2756 lock_mnt_tree(child); 2778 2757 child->mnt.mnt_flags &= 
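The `extend_array()` pattern above starts in a caller-supplied preallocation and only moves to a doubling heap buffer when that runs out, with `to_free` tracking the latest heap copy so a subsequent grow (or the caller's cleanup) releases it. A minimal user-space model of the same pattern, using `int` elements instead of `struct path` (names illustrative, not the kernel helper):

```c
#include <stdlib.h>
#include <string.h>

/* Grow *res to new_count slots if index n no longer fits.
 * Returns nonzero on success (including the no-grow fast path). */
static int extend(int **res, int **to_free, unsigned n,
		  unsigned *count, unsigned new_count)
{
	int *p;

	if (n < *count)			/* still room, nothing to do */
		return 1;
	p = malloc(new_count * sizeof(int));
	if (p && *count)
		memcpy(p, *res, *count * sizeof(int));
	*count = new_count;
	free(*to_free);			/* previous heap copy, NULL at first */
	*to_free = *res = p;
	return p != NULL;
}
```

As in the kernel version, a failed grow leaves `*res` NULL and the caller must bail out immediately; the attraction of the pattern is that small results never touch the allocator at all.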
~MNT_LOCKED; 2758 + q = __lookup_mnt(&child->mnt_parent->mnt, 2759 + child->mnt_mountpoint); 2760 + if (q) 2761 + mnt_change_mountpoint(child, smp, q); 2779 2762 commit_tree(child); 2780 2763 } 2781 2764 put_mountpoint(smp); ··· 5307 5290 kattr.kflags |= MOUNT_KATTR_RECURSE; 5308 5291 5309 5292 ret = wants_mount_setattr(uattr, usize, &kattr); 5310 - if (ret < 0) 5311 - return ret; 5312 - 5313 - if (ret) { 5293 + if (ret > 0) { 5314 5294 ret = do_mount_setattr(&file->f_path, &kattr); 5315 - if (ret) 5316 - return ret; 5317 - 5318 5295 finish_mount_kattr(&kattr); 5319 5296 } 5297 + if (ret) 5298 + return ret; 5320 5299 } 5321 5300 5322 5301 fd = get_unused_fd_flags(flags & O_CLOEXEC); ··· 6275 6262 { 6276 6263 if (!refcount_dec_and_test(&ns->ns.count)) 6277 6264 return; 6278 - drop_collected_mounts(&ns->root->mnt); 6265 + namespace_lock(); 6266 + lock_mount_hash(); 6267 + umount_tree(ns->root, 0); 6268 + unlock_mount_hash(); 6269 + namespace_unlock(); 6279 6270 free_mnt_ns(ns); 6280 6271 } 6281 6272
+1
fs/nfsd/nfs4callback.c
··· 1409 1409 out: 1410 1410 if (!rcl->__nr_referring_calls) { 1411 1411 cb->cb_nr_referring_call_list--; 1412 + list_del(&rcl->__list); 1412 1413 kfree(rcl); 1413 1414 } 1414 1415 }
+2 -3
fs/nfsd/nfsctl.c
··· 1611 1611 */ 1612 1612 int nfsd_nl_threads_set_doit(struct sk_buff *skb, struct genl_info *info) 1613 1613 { 1614 - int *nthreads, count = 0, nrpools, i, ret = -EOPNOTSUPP, rem; 1614 + int *nthreads, nrpools = 0, i, ret = -EOPNOTSUPP, rem; 1615 1615 struct net *net = genl_info_net(info); 1616 1616 struct nfsd_net *nn = net_generic(net, nfsd_net_id); 1617 1617 const struct nlattr *attr; ··· 1623 1623 /* count number of SERVER_THREADS values */ 1624 1624 nlmsg_for_each_attr(attr, info->nlhdr, GENL_HDRLEN, rem) { 1625 1625 if (nla_type(attr) == NFSD_A_SERVER_THREADS) 1626 - count++; 1626 + nrpools++; 1627 1627 } 1628 1628 1629 1629 mutex_lock(&nfsd_mutex); 1630 1630 1631 - nrpools = max(count, nfsd_nrpools(net)); 1632 1631 nthreads = kcalloc(nrpools, sizeof(int), GFP_KERNEL); 1633 1632 if (!nthreads) { 1634 1633 ret = -ENOMEM;
+8 -2
fs/overlayfs/namei.c
··· 1393 1393 bool ovl_lower_positive(struct dentry *dentry) 1394 1394 { 1395 1395 struct ovl_entry *poe = OVL_E(dentry->d_parent); 1396 - struct qstr *name = &dentry->d_name; 1396 + const struct qstr *name = &dentry->d_name; 1397 1397 const struct cred *old_cred; 1398 1398 unsigned int i; 1399 1399 bool positive = false; ··· 1416 1416 struct dentry *this; 1417 1417 struct ovl_path *parentpath = &ovl_lowerstack(poe)[i]; 1418 1418 1419 + /* 1420 + * We need to make a non-const copy of dentry->d_name, 1421 + * because lookup_one_positive_unlocked() will hash name 1422 + * with parentpath base, which is on another (lower fs). 1423 + */ 1419 1424 this = lookup_one_positive_unlocked( 1420 1425 mnt_idmap(parentpath->layer->mnt), 1421 - name, parentpath->dentry); 1426 + &QSTR_LEN(name->name, name->len), 1427 + parentpath->dentry); 1422 1428 if (IS_ERR(this)) { 1423 1429 switch (PTR_ERR(this)) { 1424 1430 case -ENOENT:
+5 -3
fs/overlayfs/overlayfs.h
··· 246 246 struct dentry *dentry, 247 247 umode_t mode) 248 248 { 249 - dentry = vfs_mkdir(ovl_upper_mnt_idmap(ofs), dir, dentry, mode); 250 - pr_debug("mkdir(%pd2, 0%o) = %i\n", dentry, mode, PTR_ERR_OR_ZERO(dentry)); 251 - return dentry; 249 + struct dentry *ret; 250 + 251 + ret = vfs_mkdir(ovl_upper_mnt_idmap(ofs), dir, dentry, mode); 252 + pr_debug("mkdir(%pd2, 0%o) = %i\n", dentry, mode, PTR_ERR_OR_ZERO(ret)); 253 + return ret; 252 254 } 253 255 254 256 static inline int ovl_do_mknod(struct ovl_fs *ofs,
+1 -1
fs/pidfs.c
··· 366 366 kinfo.pid = task_pid_vnr(task); 367 367 kinfo.mask |= PIDFD_INFO_PID; 368 368 369 - if (kinfo.pid == 0 || kinfo.tgid == 0 || (kinfo.ppid == 0 && kinfo.pid != 1)) 369 + if (kinfo.pid == 0 || kinfo.tgid == 0) 370 370 return -ESRCH; 371 371 372 372 copy_out:
-2
fs/pnode.h
··· 28 28 #define CL_SHARED_TO_SLAVE 0x20 29 29 #define CL_COPY_MNT_NS_FILE 0x40 30 30 31 - #define CL_COPY_ALL (CL_COPY_UNBINDABLE | CL_COPY_MNT_NS_FILE) 32 - 33 31 static inline void set_mnt_shared(struct mount *mnt) 34 32 { 35 33 mnt->mnt.mnt_flags &= ~MNT_SHARED_MASK;
+1 -1
fs/proc/task_mmu.c
··· 2182 2182 categories |= PAGE_IS_FILE; 2183 2183 } 2184 2184 2185 - if (is_zero_pfn(pmd_pfn(pmd))) 2185 + if (is_huge_zero_pmd(pmd)) 2186 2186 categories |= PAGE_IS_PFNZERO; 2187 2187 if (pmd_soft_dirty(pmd)) 2188 2188 categories |= PAGE_IS_SOFT_DIRTY;
+9 -4
fs/resctrl/ctrlmondata.c
··· 594 594 struct rmid_read rr = {0}; 595 595 struct rdt_mon_domain *d; 596 596 struct rdtgroup *rdtgrp; 597 + int domid, cpu, ret = 0; 597 598 struct rdt_resource *r; 599 + struct cacheinfo *ci; 598 600 struct mon_data *md; 599 - int domid, ret = 0; 600 601 601 602 rdtgrp = rdtgroup_kn_lock_live(of->kn); 602 603 if (!rdtgrp) { ··· 624 623 * one that matches this cache id. 625 624 */ 626 625 list_for_each_entry(d, &r->mon_domains, hdr.list) { 627 - if (d->ci->id == domid) { 628 - rr.ci = d->ci; 626 + if (d->ci_id == domid) { 627 + rr.ci_id = d->ci_id; 628 + cpu = cpumask_any(&d->hdr.cpu_mask); 629 + ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE); 630 + if (!ci) 631 + continue; 629 632 mon_event_read(&rr, r, NULL, rdtgrp, 630 - &d->ci->shared_cpu_map, evtid, false); 633 + &ci->shared_cpu_map, evtid, false); 631 634 goto checkresult; 632 635 } 633 636 }
+2 -2
fs/resctrl/internal.h
··· 98 98 * domains in @r sharing L3 @ci.id 99 99 * @evtid: Which monitor event to read. 100 100 * @first: Initialize MBM counter when true. 101 - * @ci: Cacheinfo for L3. Only set when @d is NULL. Used when summing domains. 101 + * @ci_id: Cacheinfo id for L3. Only set when @d is NULL. Used when summing domains. 102 102 * @err: Error encountered when reading counter. 103 103 * @val: Returned value of event counter. If @rgrp is a parent resource group, 104 104 * @val includes the sum of event counts from its child resource groups. ··· 112 112 struct rdt_mon_domain *d; 113 113 enum resctrl_event_id evtid; 114 114 bool first; 115 - struct cacheinfo *ci; 115 + unsigned int ci_id; 116 116 int err; 117 117 u64 val; 118 118 void *arch_mon_ctx;
+4 -2
fs/resctrl/monitor.c
··· 361 361 { 362 362 int cpu = smp_processor_id(); 363 363 struct rdt_mon_domain *d; 364 + struct cacheinfo *ci; 364 365 struct mbm_state *m; 365 366 int err, ret; 366 367 u64 tval = 0; ··· 389 388 } 390 389 391 390 /* Summing domains that share a cache, must be on a CPU for that cache. */ 392 - if (!cpumask_test_cpu(cpu, &rr->ci->shared_cpu_map)) 391 + ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE); 392 + if (!ci || ci->id != rr->ci_id) 393 393 return -EINVAL; 394 394 395 395 /* ··· 402 400 */ 403 401 ret = -EINVAL; 404 402 list_for_each_entry(d, &rr->r->mon_domains, hdr.list) { 405 - if (d->ci->id != rr->ci->id) 403 + if (d->ci_id != rr->ci_id) 406 404 continue; 407 405 err = resctrl_arch_rmid_read(rr->r, d, closid, rmid, 408 406 rr->evtid, &tval, rr->arch_mon_ctx);
+3 -3
fs/resctrl/rdtgroup.c
··· 3036 3036 char name[32]; 3037 3037 3038 3038 snc_mode = r->mon_scope == RESCTRL_L3_NODE; 3039 - sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci->id : d->hdr.id); 3039 + sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : d->hdr.id); 3040 3040 if (snc_mode) 3041 3041 sprintf(subname, "mon_sub_%s_%02d", r->name, d->hdr.id); 3042 3042 ··· 3061 3061 return -EPERM; 3062 3062 3063 3063 list_for_each_entry(mevt, &r->evt_list, list) { 3064 - domid = do_sum ? d->ci->id : d->hdr.id; 3064 + domid = do_sum ? d->ci_id : d->hdr.id; 3065 3065 priv = mon_get_kn_priv(r->rid, domid, mevt, do_sum); 3066 3066 if (WARN_ON_ONCE(!priv)) 3067 3067 return -EINVAL; ··· 3089 3089 lockdep_assert_held(&rdtgroup_mutex); 3090 3090 3091 3091 snc_mode = r->mon_scope == RESCTRL_L3_NODE; 3092 - sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci->id : d->hdr.id); 3092 + sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : d->hdr.id); 3093 3093 kn = kernfs_find_and_get(parent_kn, name); 3094 3094 if (kn) { 3095 3095 /*
+12 -2
fs/smb/client/cached_dir.c
··· 509 509 spin_lock(&cfids->cfid_list_lock); 510 510 list_for_each_entry(cfid, &cfids->entries, entry) { 511 511 tmp_list = kmalloc(sizeof(*tmp_list), GFP_ATOMIC); 512 - if (tmp_list == NULL) 513 - break; 512 + if (tmp_list == NULL) { 513 + /* 514 + * If the malloc() fails, we won't drop all 515 + * dentries, and unmounting is likely to trigger 516 + * a 'Dentry still in use' error. 517 + */ 518 + cifs_tcon_dbg(VFS, "Out of memory while dropping dentries\n"); 519 + spin_unlock(&cfids->cfid_list_lock); 520 + spin_unlock(&cifs_sb->tlink_tree_lock); 521 + goto done; 522 + } 514 523 spin_lock(&cfid->fid_lock); 515 524 tmp_list->dentry = cfid->dentry; 516 525 cfid->dentry = NULL; ··· 531 522 } 532 523 spin_unlock(&cifs_sb->tlink_tree_lock); 533 524 525 + done: 534 526 list_for_each_entry_safe(tmp_list, q, &entry, entry) { 535 527 list_del(&tmp_list->entry); 536 528 dput(tmp_list->dentry);
+1 -1
fs/smb/client/cached_dir.h
··· 26 26 * open file instance. 27 27 */ 28 28 struct mutex de_mutex; 29 - int pos; /* Expected ctx->pos */ 29 + loff_t pos; /* Expected ctx->pos */ 30 30 struct list_head entries; 31 31 }; 32 32
+1 -1
fs/smb/client/cifs_debug.c
··· 1105 1105 if ((count < 1) || (count > 11)) 1106 1106 return -EINVAL; 1107 1107 1108 - memset(flags_string, 0, 12); 1108 + memset(flags_string, 0, sizeof(flags_string)); 1109 1109 1110 1110 if (copy_from_user(flags_string, buffer, count)) 1111 1111 return -EFAULT;
+1 -1
fs/smb/client/cifs_ioctl.h
··· 61 61 struct smb3_key_debug_info { 62 62 __u64 Suid; 63 63 __u16 cipher_type; 64 - __u8 auth_key[16]; /* SMB2_NTLMV2_SESSKEY_SIZE */ 64 + __u8 auth_key[SMB2_NTLMV2_SESSKEY_SIZE]; 65 65 __u8 smb3encryptionkey[SMB3_SIGN_KEY_SIZE]; 66 66 __u8 smb3decryptionkey[SMB3_SIGN_KEY_SIZE]; 67 67 } __packed;
+1
fs/smb/client/cifsglob.h
··· 709 709 struct TCP_Server_Info { 710 710 struct list_head tcp_ses_list; 711 711 struct list_head smb_ses_list; 712 + struct list_head rlist; /* reconnect list */ 712 713 spinlock_t srv_lock; /* protect anything here that is not protected */ 713 714 __u64 conn_id; /* connection identifier (useful for debugging) */ 714 715 int srv_count; /* reference counter */
+37 -22
fs/smb/client/connect.c
··· 124 124 (SMB_INTERFACE_POLL_INTERVAL * HZ)); 125 125 } 126 126 127 + #define set_need_reco(server) \ 128 + do { \ 129 + spin_lock(&server->srv_lock); \ 130 + if (server->tcpStatus != CifsExiting) \ 131 + server->tcpStatus = CifsNeedReconnect; \ 132 + spin_unlock(&server->srv_lock); \ 133 + } while (0) 134 + 127 135 /* 128 136 * Update the tcpStatus for the server. 129 137 * This is used to signal the cifsd thread to call cifs_reconnect ··· 145 137 cifs_signal_cifsd_for_reconnect(struct TCP_Server_Info *server, 146 138 bool all_channels) 147 139 { 148 - struct TCP_Server_Info *pserver; 140 + struct TCP_Server_Info *nserver; 149 141 struct cifs_ses *ses; 142 + LIST_HEAD(reco); 150 143 int i; 151 - 152 - /* If server is a channel, select the primary channel */ 153 - pserver = SERVER_IS_CHAN(server) ? server->primary_server : server; 154 144 155 145 /* if we need to signal just this channel */ 156 146 if (!all_channels) { 157 - spin_lock(&server->srv_lock); 158 - if (server->tcpStatus != CifsExiting) 159 - server->tcpStatus = CifsNeedReconnect; 160 - spin_unlock(&server->srv_lock); 147 + set_need_reco(server); 161 148 return; 162 149 } 163 150 164 - spin_lock(&cifs_tcp_ses_lock); 165 - list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) { 166 - if (cifs_ses_exiting(ses)) 167 - continue; 168 - spin_lock(&ses->chan_lock); 169 - for (i = 0; i < ses->chan_count; i++) { 170 - if (!ses->chans[i].server) 151 + if (SERVER_IS_CHAN(server)) 152 + server = server->primary_server; 153 + scoped_guard(spinlock, &cifs_tcp_ses_lock) { 154 + set_need_reco(server); 155 + list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) { 156 + spin_lock(&ses->ses_lock); 157 + if (ses->ses_status == SES_EXITING) { 158 + spin_unlock(&ses->ses_lock); 171 159 continue; 172 - 173 - spin_lock(&ses->chans[i].server->srv_lock); 174 - if (ses->chans[i].server->tcpStatus != CifsExiting) 175 - ses->chans[i].server->tcpStatus = CifsNeedReconnect; 176 - 
spin_unlock(&ses->chans[i].server->srv_lock); 160 + } 161 + spin_lock(&ses->chan_lock); 162 + for (i = 1; i < ses->chan_count; i++) { 163 + nserver = ses->chans[i].server; 164 + if (!nserver) 165 + continue; 166 + nserver->srv_count++; 167 + list_add(&nserver->rlist, &reco); 168 + } 169 + spin_unlock(&ses->chan_lock); 170 + spin_unlock(&ses->ses_lock); 177 171 } 178 - spin_unlock(&ses->chan_lock); 179 172 } 180 - spin_unlock(&cifs_tcp_ses_lock); 173 + 174 + list_for_each_entry_safe(server, nserver, &reco, rlist) { 175 + list_del_init(&server->rlist); 176 + set_need_reco(server); 177 + cifs_put_tcp_session(server, 0); 178 + } 181 179 } 182 180 183 181 /* ··· 4213 4199 return 0; 4214 4200 } 4215 4201 4202 + server->lstrp = jiffies; 4216 4203 server->tcpStatus = CifsInNegotiate; 4217 4204 spin_unlock(&server->srv_lock); 4218 4205
+6 -2
fs/smb/client/file.c
··· 52 52 struct netfs_io_stream *stream = &req->rreq.io_streams[subreq->stream_nr]; 53 53 struct TCP_Server_Info *server; 54 54 struct cifsFileInfo *open_file = req->cfile; 55 + struct cifs_sb_info *cifs_sb = CIFS_SB(wdata->rreq->inode->i_sb); 55 56 size_t wsize = req->rreq.wsize; 56 57 int rc; 57 58 ··· 63 62 64 63 server = cifs_pick_channel(tlink_tcon(open_file->tlink)->ses); 65 64 wdata->server = server; 65 + 66 + if (cifs_sb->ctx->wsize == 0) 67 + cifs_negotiate_wsize(server, cifs_sb->ctx, 68 + tlink_tcon(req->cfile->tlink)); 66 69 67 70 retry: 68 71 if (open_file->invalidHandle) { ··· 165 160 server = cifs_pick_channel(tlink_tcon(req->cfile->tlink)->ses); 166 161 rdata->server = server; 167 162 168 - if (cifs_sb->ctx->rsize == 0) { 163 + if (cifs_sb->ctx->rsize == 0) 169 164 cifs_negotiate_rsize(server, cifs_sb->ctx, 170 165 tlink_tcon(req->cfile->tlink)); 171 - } 172 166 173 167 rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, 174 168 &size, &rdata->credits);
+1 -1
fs/smb/client/ioctl.c
··· 506 506 le16_to_cpu(tcon->ses->server->cipher_type); 507 507 pkey_inf.Suid = tcon->ses->Suid; 508 508 memcpy(pkey_inf.auth_key, tcon->ses->auth_key.response, 509 - 16 /* SMB2_NTLMV2_SESSKEY_SIZE */); 509 + SMB2_NTLMV2_SESSKEY_SIZE); 510 510 memcpy(pkey_inf.smb3decryptionkey, 511 511 tcon->ses->smb3decryptionkey, SMB3_SIGN_KEY_SIZE); 512 512 memcpy(pkey_inf.smb3encryptionkey,
+4 -17
fs/smb/client/reparse.c
··· 875 875 abs_path += sizeof("\\DosDevices\\")-1; 876 876 else if (strstarts(abs_path, "\\GLOBAL??\\")) 877 877 abs_path += sizeof("\\GLOBAL??\\")-1; 878 - else { 879 - /* Unhandled absolute symlink, points outside of DOS/Win32 */ 880 - cifs_dbg(VFS, 881 - "absolute symlink '%s' cannot be converted from NT format " 882 - "because points to unknown target\n", 883 - smb_target); 884 - rc = -EIO; 885 - goto out; 886 - } 878 + else 879 + goto out_unhandled_target; 887 880 888 881 /* Sometimes path separator after \?? is double backslash */ 889 882 if (abs_path[0] == '\\') ··· 903 910 abs_path++; 904 911 abs_path[0] = drive_letter; 905 912 } else { 906 - /* Unhandled absolute symlink. Report an error. */ 907 - cifs_dbg(VFS, 908 - "absolute symlink '%s' cannot be converted from NT format " 909 - "because points to unknown target\n", 910 - smb_target); 911 - rc = -EIO; 912 - goto out; 913 + goto out_unhandled_target; 913 914 } 914 915 915 916 abs_path_len = strlen(abs_path)+1; ··· 953 966 * These paths have same format as Linux symlinks, so no 954 967 * conversion is needed. 955 968 */ 969 + out_unhandled_target: 956 970 linux_target = smb_target; 957 971 smb_target = NULL; 958 972 } ··· 1160 1172 if (!have_xattr_dev && (tag == IO_REPARSE_TAG_LX_CHR || tag == IO_REPARSE_TAG_LX_BLK)) 1161 1173 return false; 1162 1174 1163 - fattr->cf_dtype = S_DT(fattr->cf_mode); 1164 1175 return true; 1165 1176 } 1166 1177
+1 -2
fs/smb/client/sess.c
··· 498 498 ctx->domainauto = ses->domainAuto; 499 499 ctx->domainname = ses->domainName; 500 500 501 - /* no hostname for extra channels */ 502 - ctx->server_hostname = ""; 501 + ctx->server_hostname = ses->server->hostname; 503 502 504 503 ctx->username = ses->user_name; 505 504 ctx->password = ses->password;
+60 -106
fs/smb/client/smbdirect.c
··· 907 907 .local_dma_lkey = sc->ib.pd->local_dma_lkey, 908 908 .direction = DMA_TO_DEVICE, 909 909 }; 910 + size_t payload_len = umin(*_remaining_data_length, 911 + sp->max_send_size - sizeof(*packet)); 910 912 911 - rc = smb_extract_iter_to_rdma(iter, *_remaining_data_length, 913 + rc = smb_extract_iter_to_rdma(iter, payload_len, 912 914 &extract); 913 915 if (rc < 0) 914 916 goto err_dma; ··· 1013 1011 1014 1012 info->count_send_empty++; 1015 1013 return smbd_post_send_iter(info, NULL, &remaining_data_length); 1014 + } 1015 + 1016 + static int smbd_post_send_full_iter(struct smbd_connection *info, 1017 + struct iov_iter *iter, 1018 + int *_remaining_data_length) 1019 + { 1020 + int rc = 0; 1021 + 1022 + /* 1023 + * smbd_post_send_iter() respects the 1024 + * negotiated max_send_size, so we need to 1025 + * loop until the full iter is posted 1026 + */ 1027 + 1028 + while (iov_iter_count(iter) > 0) { 1029 + rc = smbd_post_send_iter(info, iter, _remaining_data_length); 1030 + if (rc < 0) 1031 + break; 1032 + } 1033 + 1034 + return rc; 1016 1035 } 1017 1036 1018 1037 /* ··· 1475 1452 char name[MAX_NAME_LEN]; 1476 1453 int rc; 1477 1454 1455 + if (WARN_ON_ONCE(sp->max_recv_size < sizeof(struct smbdirect_data_transfer))) 1456 + return -ENOMEM; 1457 + 1478 1458 scnprintf(name, MAX_NAME_LEN, "smbd_request_%p", info); 1479 1459 info->request_cache = 1480 1460 kmem_cache_create( ··· 1495 1469 goto out1; 1496 1470 1497 1471 scnprintf(name, MAX_NAME_LEN, "smbd_response_%p", info); 1472 + 1473 + struct kmem_cache_args response_args = { 1474 + .align = __alignof__(struct smbd_response), 1475 + .useroffset = (offsetof(struct smbd_response, packet) + 1476 + sizeof(struct smbdirect_data_transfer)), 1477 + .usersize = sp->max_recv_size - sizeof(struct smbdirect_data_transfer), 1478 + }; 1498 1479 info->response_cache = 1499 - kmem_cache_create( 1500 - name, 1501 - sizeof(struct smbd_response) + 1502 - sp->max_recv_size, 1503 - 0, SLAB_HWCACHE_ALIGN, NULL); 1480 + 
kmem_cache_create(name, 1481 + sizeof(struct smbd_response) + sp->max_recv_size, 1482 + &response_args, SLAB_HWCACHE_ALIGN); 1504 1483 if (!info->response_cache) 1505 1484 goto out2; 1506 1485 ··· 1778 1747 } 1779 1748 1780 1749 /* 1781 - * Receive data from receive reassembly queue 1750 + * Receive data from the transport's receive reassembly queue 1782 1751 * All the incoming data packets are placed in reassembly queue 1783 - * buf: the buffer to read data into 1752 + * iter: the buffer to read data into 1784 1753 * size: the length of data to read 1785 1754 * return value: actual data read 1786 - * Note: this implementation copies the data from reassebmly queue to receive 1755 + * 1756 + * Note: this implementation copies the data from reassembly queue to receive 1787 1757 * buffers used by upper layer. This is not the optimal code path. A better way 1788 1758 * to do it is to not have upper layer allocate its receive buffers but rather 1789 1759 * borrow the buffer from reassembly queue, and return it after data is 1790 1760 * consumed. But this will require more changes to upper layer code, and also 1791 1761 * need to consider packet boundaries while they still being reassembled. 1792 1762 */ 1793 - static int smbd_recv_buf(struct smbd_connection *info, char *buf, 1794 - unsigned int size) 1763 + int smbd_recv(struct smbd_connection *info, struct msghdr *msg) 1795 1764 { 1796 1765 struct smbdirect_socket *sc = &info->socket; 1797 1766 struct smbd_response *response; 1798 1767 struct smbdirect_data_transfer *data_transfer; 1768 + size_t size = iov_iter_count(&msg->msg_iter); 1799 1769 int to_copy, to_read, data_read, offset; 1800 1770 u32 data_length, remaining_data_length, data_offset; 1801 1771 int rc; 1772 + 1773 + if (WARN_ON_ONCE(iov_iter_rw(&msg->msg_iter) == WRITE)) 1774 + return -EINVAL; /* It's a bug in upper layer to get there */ 1802 1775 1803 1776 again: 1804 1777 /* ··· 1810 1775 * the only one reading from the front of the queue. 
The transport 1811 1776 * may add more entries to the back of the queue at the same time 1812 1777 */ 1813 - log_read(INFO, "size=%d info->reassembly_data_length=%d\n", size, 1778 + log_read(INFO, "size=%zd info->reassembly_data_length=%d\n", size, 1814 1779 info->reassembly_data_length); 1815 1780 if (info->reassembly_data_length >= size) { 1816 1781 int queue_length; ··· 1848 1813 if (response->first_segment && size == 4) { 1849 1814 unsigned int rfc1002_len = 1850 1815 data_length + remaining_data_length; 1851 - *((__be32 *)buf) = cpu_to_be32(rfc1002_len); 1816 + __be32 rfc1002_hdr = cpu_to_be32(rfc1002_len); 1817 + if (copy_to_iter(&rfc1002_hdr, sizeof(rfc1002_hdr), 1818 + &msg->msg_iter) != sizeof(rfc1002_hdr)) 1819 + return -EFAULT; 1852 1820 data_read = 4; 1853 1821 response->first_segment = false; 1854 1822 log_read(INFO, "returning rfc1002 length %d\n", ··· 1860 1822 } 1861 1823 1862 1824 to_copy = min_t(int, data_length - offset, to_read); 1863 - memcpy( 1864 - buf + data_read, 1865 - (char *)data_transfer + data_offset + offset, 1866 - to_copy); 1825 + if (copy_to_iter((char *)data_transfer + data_offset + offset, 1826 + to_copy, &msg->msg_iter) != to_copy) 1827 + return -EFAULT; 1867 1828 1868 1829 /* move on to the next buffer? 
*/ 1869 1830 if (to_copy == data_length - offset) { ··· 1928 1891 } 1929 1892 1930 1893 /* 1931 - * Receive a page from receive reassembly queue 1932 - * page: the page to read data into 1933 - * to_read: the length of data to read 1934 - * return value: actual data read 1935 - */ 1936 - static int smbd_recv_page(struct smbd_connection *info, 1937 - struct page *page, unsigned int page_offset, 1938 - unsigned int to_read) 1939 - { 1940 - struct smbdirect_socket *sc = &info->socket; 1941 - int ret; 1942 - char *to_address; 1943 - void *page_address; 1944 - 1945 - /* make sure we have the page ready for read */ 1946 - ret = wait_event_interruptible( 1947 - info->wait_reassembly_queue, 1948 - info->reassembly_data_length >= to_read || 1949 - sc->status != SMBDIRECT_SOCKET_CONNECTED); 1950 - if (ret) 1951 - return ret; 1952 - 1953 - /* now we can read from reassembly queue and not sleep */ 1954 - page_address = kmap_atomic(page); 1955 - to_address = (char *) page_address + page_offset; 1956 - 1957 - log_read(INFO, "reading from page=%p address=%p to_read=%d\n", 1958 - page, to_address, to_read); 1959 - 1960 - ret = smbd_recv_buf(info, to_address, to_read); 1961 - kunmap_atomic(page_address); 1962 - 1963 - return ret; 1964 - } 1965 - 1966 - /* 1967 - * Receive data from transport 1968 - * msg: a msghdr point to the buffer, can be ITER_KVEC or ITER_BVEC 1969 - * return: total bytes read, or 0. SMB Direct will not do partial read. 
1970 - */ 1971 - int smbd_recv(struct smbd_connection *info, struct msghdr *msg) 1972 - { 1973 - char *buf; 1974 - struct page *page; 1975 - unsigned int to_read, page_offset; 1976 - int rc; 1977 - 1978 - if (iov_iter_rw(&msg->msg_iter) == WRITE) { 1979 - /* It's a bug in upper layer to get there */ 1980 - cifs_dbg(VFS, "Invalid msg iter dir %u\n", 1981 - iov_iter_rw(&msg->msg_iter)); 1982 - rc = -EINVAL; 1983 - goto out; 1984 - } 1985 - 1986 - switch (iov_iter_type(&msg->msg_iter)) { 1987 - case ITER_KVEC: 1988 - buf = msg->msg_iter.kvec->iov_base; 1989 - to_read = msg->msg_iter.kvec->iov_len; 1990 - rc = smbd_recv_buf(info, buf, to_read); 1991 - break; 1992 - 1993 - case ITER_BVEC: 1994 - page = msg->msg_iter.bvec->bv_page; 1995 - page_offset = msg->msg_iter.bvec->bv_offset; 1996 - to_read = msg->msg_iter.bvec->bv_len; 1997 - rc = smbd_recv_page(info, page, page_offset, to_read); 1998 - break; 1999 - 2000 - default: 2001 - /* It's a bug in upper layer to get there */ 2002 - cifs_dbg(VFS, "Invalid msg type %d\n", 2003 - iov_iter_type(&msg->msg_iter)); 2004 - rc = -EINVAL; 2005 - } 2006 - 2007 - out: 2008 - /* SMBDirect will read it all or nothing */ 2009 - if (rc > 0) 2010 - msg->msg_iter.count = 0; 2011 - return rc; 2012 - } 2013 - 2014 - /* 2015 1894 * Send data to transport 2016 1895 * Each rqst is transported as a SMBDirect payload 2017 1896 * rqst: the data to write ··· 1985 2032 klen += rqst->rq_iov[i].iov_len; 1986 2033 iov_iter_kvec(&iter, ITER_SOURCE, rqst->rq_iov, rqst->rq_nvec, klen); 1987 2034 1988 - rc = smbd_post_send_iter(info, &iter, &remaining_data_length); 2035 + rc = smbd_post_send_full_iter(info, &iter, &remaining_data_length); 1989 2036 if (rc < 0) 1990 2037 break; 1991 2038 1992 2039 if (iov_iter_count(&rqst->rq_iter) > 0) { 1993 2040 /* And then the data pages if there are any */ 1994 - rc = smbd_post_send_iter(info, &rqst->rq_iter, 1995 - &remaining_data_length); 2041 + rc = smbd_post_send_full_iter(info, &rqst->rq_iter, 2042 + 
&remaining_data_length); 1996 2043 if (rc < 0) 1997 2044 break; 1998 2045 } ··· 2542 2589 size_t fsize = folioq_folio_size(folioq, slot); 2543 2590 2544 2591 if (offset < fsize) { 2545 - size_t part = umin(maxsize - ret, fsize - offset); 2592 + size_t part = umin(maxsize, fsize - offset); 2546 2593 2547 2594 if (!smb_set_sge(rdma, folio_page(folio, 0), offset, part)) 2548 2595 return -EIO; 2549 2596 2550 2597 offset += part; 2551 2598 ret += part; 2599 + maxsize -= part; 2552 2600 } 2553 2601 2554 2602 if (offset >= fsize) { ··· 2564 2610 slot = 0; 2565 2611 } 2566 2612 } 2567 - } while (rdma->nr_sge < rdma->max_sge || maxsize > 0); 2613 + } while (rdma->nr_sge < rdma->max_sge && maxsize > 0); 2568 2614 2569 2615 iter->folioq = folioq; 2570 2616 iter->folioq_slot = slot;
+12 -12
fs/smb/client/trace.h
··· 140 140 __entry->len = len; 141 141 __entry->rc = rc; 142 142 ), 143 - TP_printk("\tR=%08x[%x] xid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x rc=%d", 143 + TP_printk("R=%08x[%x] xid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x rc=%d", 144 144 __entry->rreq_debug_id, __entry->rreq_debug_index, 145 145 __entry->xid, __entry->sesid, __entry->tid, __entry->fid, 146 146 __entry->offset, __entry->len, __entry->rc) ··· 190 190 __entry->len = len; 191 191 __entry->rc = rc; 192 192 ), 193 - TP_printk("\txid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x rc=%d", 193 + TP_printk("xid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x rc=%d", 194 194 __entry->xid, __entry->sesid, __entry->tid, __entry->fid, 195 195 __entry->offset, __entry->len, __entry->rc) 196 196 ) ··· 247 247 __entry->len = len; 248 248 __entry->rc = rc; 249 249 ), 250 - TP_printk("\txid=%u sid=0x%llx tid=0x%x source fid=0x%llx source offset=0x%llx target fid=0x%llx target offset=0x%llx len=0x%x rc=%d", 250 + TP_printk("xid=%u sid=0x%llx tid=0x%x source fid=0x%llx source offset=0x%llx target fid=0x%llx target offset=0x%llx len=0x%x rc=%d", 251 251 __entry->xid, __entry->sesid, __entry->tid, __entry->target_fid, 252 252 __entry->src_offset, __entry->target_fid, __entry->target_offset, __entry->len, __entry->rc) 253 253 ) ··· 298 298 __entry->target_offset = target_offset; 299 299 __entry->len = len; 300 300 ), 301 - TP_printk("\txid=%u sid=0x%llx tid=0x%x source fid=0x%llx source offset=0x%llx target fid=0x%llx target offset=0x%llx len=0x%x", 301 + TP_printk("xid=%u sid=0x%llx tid=0x%x source fid=0x%llx source offset=0x%llx target fid=0x%llx target offset=0x%llx len=0x%x", 302 302 __entry->xid, __entry->sesid, __entry->tid, __entry->target_fid, 303 303 __entry->src_offset, __entry->target_fid, __entry->target_offset, __entry->len) 304 304 ) ··· 482 482 __entry->tid = tid; 483 483 __entry->sesid = sesid; 484 484 ), 485 - TP_printk("\txid=%u sid=0x%llx tid=0x%x 
fid=0x%llx", 485 + TP_printk("xid=%u sid=0x%llx tid=0x%x fid=0x%llx", 486 486 __entry->xid, __entry->sesid, __entry->tid, __entry->fid) 487 487 ) 488 488 ··· 521 521 __entry->sesid = sesid; 522 522 __entry->rc = rc; 523 523 ), 524 - TP_printk("\txid=%u sid=0x%llx tid=0x%x fid=0x%llx rc=%d", 524 + TP_printk("xid=%u sid=0x%llx tid=0x%x fid=0x%llx rc=%d", 525 525 __entry->xid, __entry->sesid, __entry->tid, __entry->fid, 526 526 __entry->rc) 527 527 ) ··· 794 794 __entry->status = status; 795 795 __entry->rc = rc; 796 796 ), 797 - TP_printk("\tsid=0x%llx tid=0x%x cmd=%u mid=%llu status=0x%x rc=%d", 797 + TP_printk("sid=0x%llx tid=0x%x cmd=%u mid=%llu status=0x%x rc=%d", 798 798 __entry->sesid, __entry->tid, __entry->cmd, __entry->mid, 799 799 __entry->status, __entry->rc) 800 800 ) ··· 829 829 __entry->cmd = cmd; 830 830 __entry->mid = mid; 831 831 ), 832 - TP_printk("\tsid=0x%llx tid=0x%x cmd=%u mid=%llu", 832 + TP_printk("sid=0x%llx tid=0x%x cmd=%u mid=%llu", 833 833 __entry->sesid, __entry->tid, 834 834 __entry->cmd, __entry->mid) 835 835 ) ··· 867 867 __entry->when_sent = when_sent; 868 868 __entry->when_received = when_received; 869 869 ), 870 - TP_printk("\tcmd=%u mid=%llu pid=%u, when_sent=%lu when_rcv=%lu", 870 + TP_printk("cmd=%u mid=%llu pid=%u, when_sent=%lu when_rcv=%lu", 871 871 __entry->cmd, __entry->mid, __entry->pid, __entry->when_sent, 872 872 __entry->when_received) 873 873 ) ··· 898 898 __assign_str(func_name); 899 899 __entry->rc = rc; 900 900 ), 901 - TP_printk("\t%s: xid=%u rc=%d", 901 + TP_printk("%s: xid=%u rc=%d", 902 902 __get_str(func_name), __entry->xid, __entry->rc) 903 903 ) 904 904 ··· 924 924 __entry->ino = ino; 925 925 __entry->rc = rc; 926 926 ), 927 - TP_printk("\tino=%lu rc=%d", 927 + TP_printk("ino=%lu rc=%d", 928 928 __entry->ino, __entry->rc) 929 929 ) 930 930 ··· 950 950 __entry->xid = xid; 951 951 __assign_str(func_name); 952 952 ), 953 - TP_printk("\t%s: xid=%u", 953 + TP_printk("%s: xid=%u", 954 954 __get_str(func_name), 
__entry->xid) 955 955 ) 956 956
+1 -1
fs/smb/server/connection.c
··· 40 40 kvfree(conn->request_buf); 41 41 kfree(conn->preauth_info); 42 42 if (atomic_dec_and_test(&conn->refcnt)) { 43 - ksmbd_free_transport(conn->transport); 43 + conn->transport->ops->free_transport(conn->transport); 44 44 kfree(conn); 45 45 } 46 46 }
+1
fs/smb/server/connection.h
··· 133 133 void *buf, unsigned int len, 134 134 struct smb2_buffer_desc_v1 *desc, 135 135 unsigned int desc_len); 136 + void (*free_transport)(struct ksmbd_transport *kt); 136 137 }; 137 138 138 139 struct ksmbd_transport {
+56 -18
fs/smb/server/smb2pdu.c
··· 1607 1607 out_len = work->response_sz - 1608 1608 (le16_to_cpu(rsp->SecurityBufferOffset) + 4); 1609 1609 1610 - /* Check previous session */ 1611 - prev_sess_id = le64_to_cpu(req->PreviousSessionId); 1612 - if (prev_sess_id && prev_sess_id != sess->id) 1613 - destroy_previous_session(conn, sess->user, prev_sess_id); 1614 - 1615 1610 retval = ksmbd_krb5_authenticate(sess, in_blob, in_len, 1616 1611 out_blob, &out_len); 1617 1612 if (retval) { 1618 1613 ksmbd_debug(SMB, "krb5 authentication failed\n"); 1619 1614 return -EINVAL; 1620 1615 } 1616 + 1617 + /* Check previous session */ 1618 + prev_sess_id = le64_to_cpu(req->PreviousSessionId); 1619 + if (prev_sess_id && prev_sess_id != sess->id) 1620 + destroy_previous_session(conn, sess->user, prev_sess_id); 1621 + 1621 1622 rsp->SecurityBufferLength = cpu_to_le16(out_len); 1622 1623 1623 1624 if ((conn->sign || server_conf.enforced_signing) || ··· 4872 4871 sinfo = (struct smb2_file_standard_info *)rsp->Buffer; 4873 4872 delete_pending = ksmbd_inode_pending_delete(fp); 4874 4873 4875 - sinfo->AllocationSize = cpu_to_le64(stat.blocks << 9); 4876 - sinfo->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size); 4874 + if (ksmbd_stream_fd(fp) == false) { 4875 + sinfo->AllocationSize = cpu_to_le64(stat.blocks << 9); 4876 + sinfo->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size); 4877 + } else { 4878 + sinfo->AllocationSize = cpu_to_le64(fp->stream.size); 4879 + sinfo->EndOfFile = cpu_to_le64(fp->stream.size); 4880 + } 4877 4881 sinfo->NumberOfLinks = cpu_to_le32(get_nlink(&stat) - delete_pending); 4878 4882 sinfo->DeletePending = delete_pending; 4879 4883 sinfo->Directory = S_ISDIR(stat.mode) ? 1 : 0; ··· 4941 4935 file_info->ChangeTime = cpu_to_le64(time); 4942 4936 file_info->Attributes = fp->f_ci->m_fattr; 4943 4937 file_info->Pad1 = 0; 4944 - file_info->AllocationSize = 4945 - cpu_to_le64(stat.blocks << 9); 4946 - file_info->EndOfFile = S_ISDIR(stat.mode) ? 
0 : cpu_to_le64(stat.size); 4938 + if (ksmbd_stream_fd(fp) == false) { 4939 + file_info->AllocationSize = 4940 + cpu_to_le64(stat.blocks << 9); 4941 + file_info->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size); 4942 + } else { 4943 + file_info->AllocationSize = cpu_to_le64(fp->stream.size); 4944 + file_info->EndOfFile = cpu_to_le64(fp->stream.size); 4945 + } 4947 4946 file_info->NumberOfLinks = 4948 4947 cpu_to_le32(get_nlink(&stat) - delete_pending); 4949 4948 file_info->DeletePending = delete_pending; ··· 4957 4946 file_info->IndexNumber = cpu_to_le64(stat.ino); 4958 4947 file_info->EASize = 0; 4959 4948 file_info->AccessFlags = fp->daccess; 4960 - file_info->CurrentByteOffset = cpu_to_le64(fp->filp->f_pos); 4949 + if (ksmbd_stream_fd(fp) == false) 4950 + file_info->CurrentByteOffset = cpu_to_le64(fp->filp->f_pos); 4951 + else 4952 + file_info->CurrentByteOffset = cpu_to_le64(fp->stream.pos); 4961 4953 file_info->Mode = fp->coption; 4962 4954 file_info->AlignmentRequirement = 0; 4963 4955 conv_len = smbConvertToUTF16((__le16 *)file_info->FileName, filename, ··· 5148 5134 time = ksmbd_UnixTimeToNT(stat.ctime); 5149 5135 file_info->ChangeTime = cpu_to_le64(time); 5150 5136 file_info->Attributes = fp->f_ci->m_fattr; 5151 - file_info->AllocationSize = cpu_to_le64(stat.blocks << 9); 5152 - file_info->EndOfFile = S_ISDIR(stat.mode) ? 0 : cpu_to_le64(stat.size); 5137 + if (ksmbd_stream_fd(fp) == false) { 5138 + file_info->AllocationSize = cpu_to_le64(stat.blocks << 9); 5139 + file_info->EndOfFile = S_ISDIR(stat.mode) ? 
0 : cpu_to_le64(stat.size); 5140 + } else { 5141 + file_info->AllocationSize = cpu_to_le64(fp->stream.size); 5142 + file_info->EndOfFile = cpu_to_le64(fp->stream.size); 5143 + } 5153 5144 file_info->Reserved = cpu_to_le32(0); 5154 5145 rsp->OutputBufferLength = 5155 5146 cpu_to_le32(sizeof(struct smb2_file_ntwrk_info)); ··· 5177 5158 struct smb2_file_pos_info *file_info; 5178 5159 5179 5160 file_info = (struct smb2_file_pos_info *)rsp->Buffer; 5180 - file_info->CurrentByteOffset = cpu_to_le64(fp->filp->f_pos); 5161 + if (ksmbd_stream_fd(fp) == false) 5162 + file_info->CurrentByteOffset = cpu_to_le64(fp->filp->f_pos); 5163 + else 5164 + file_info->CurrentByteOffset = cpu_to_le64(fp->stream.pos); 5165 + 5181 5166 rsp->OutputBufferLength = 5182 5167 cpu_to_le32(sizeof(struct smb2_file_pos_info)); 5183 5168 } ··· 5270 5247 file_info->ChangeTime = cpu_to_le64(time); 5271 5248 file_info->DosAttributes = fp->f_ci->m_fattr; 5272 5249 file_info->Inode = cpu_to_le64(stat.ino); 5273 - file_info->EndOfFile = cpu_to_le64(stat.size); 5274 - file_info->AllocationSize = cpu_to_le64(stat.blocks << 9); 5250 + if (ksmbd_stream_fd(fp) == false) { 5251 + file_info->EndOfFile = cpu_to_le64(stat.size); 5252 + file_info->AllocationSize = cpu_to_le64(stat.blocks << 9); 5253 + } else { 5254 + file_info->EndOfFile = cpu_to_le64(fp->stream.size); 5255 + file_info->AllocationSize = cpu_to_le64(fp->stream.size); 5256 + } 5275 5257 file_info->HardLinks = cpu_to_le32(stat.nlink); 5276 5258 file_info->Mode = cpu_to_le32(stat.mode & 0777); 5277 5259 switch (stat.mode & S_IFMT) { ··· 6218 6190 if (!(fp->daccess & FILE_WRITE_DATA_LE)) 6219 6191 return -EACCES; 6220 6192 6193 + if (ksmbd_stream_fd(fp) == true) 6194 + return 0; 6195 + 6221 6196 rc = vfs_getattr(&fp->filp->f_path, &stat, STATX_BASIC_STATS, 6222 6197 AT_STATX_SYNC_AS_STAT); 6223 6198 if (rc) ··· 6279 6248 * truncate of some filesystem like FAT32 fill zero data in 6280 6249 * truncated range. 
6281 6250 */ 6282 - if (inode->i_sb->s_magic != MSDOS_SUPER_MAGIC) { 6251 + if (inode->i_sb->s_magic != MSDOS_SUPER_MAGIC && 6252 + ksmbd_stream_fd(fp) == false) { 6283 6253 ksmbd_debug(SMB, "truncated to newsize %lld\n", newsize); 6284 6254 rc = ksmbd_vfs_truncate(work, fp, newsize); 6285 6255 if (rc) { ··· 6353 6321 return -EINVAL; 6354 6322 } 6355 6323 6356 - fp->filp->f_pos = current_byte_offset; 6324 + if (ksmbd_stream_fd(fp) == false) 6325 + fp->filp->f_pos = current_byte_offset; 6326 + else { 6327 + if (current_byte_offset > XATTR_SIZE_MAX) 6328 + current_byte_offset = XATTR_SIZE_MAX; 6329 + fp->stream.pos = current_byte_offset; 6330 + } 6357 6331 return 0; 6358 6332 } 6359 6333
+8 -2
fs/smb/server/transport_rdma.c
··· 159 159 }; 160 160 161 161 #define KSMBD_TRANS(t) ((struct ksmbd_transport *)&((t)->transport)) 162 - 162 + #define SMBD_TRANS(t) ((struct smb_direct_transport *)container_of(t, \ 163 + struct smb_direct_transport, transport)) 163 164 enum { 164 165 SMB_DIRECT_MSG_NEGOTIATE_REQ = 0, 165 166 SMB_DIRECT_MSG_DATA_TRANSFER ··· 411 410 return NULL; 412 411 } 413 412 413 + static void smb_direct_free_transport(struct ksmbd_transport *kt) 414 + { 415 + kfree(SMBD_TRANS(kt)); 416 + } 417 + 414 418 static void free_transport(struct smb_direct_transport *t) 415 419 { 416 420 struct smb_direct_recvmsg *recvmsg; ··· 461 455 462 456 smb_direct_destroy_pools(t); 463 457 ksmbd_conn_free(KSMBD_TRANS(t)->conn); 464 - kfree(t); 465 458 } 466 459 467 460 static struct smb_direct_sendmsg ··· 2286 2281 .read = smb_direct_read, 2287 2282 .rdma_read = smb_direct_rdma_read, 2288 2283 .rdma_write = smb_direct_rdma_write, 2284 + .free_transport = smb_direct_free_transport, 2289 2285 };
+2 -1
fs/smb/server/transport_tcp.c
··· 93 93 return t; 94 94 } 95 95 96 - void ksmbd_free_transport(struct ksmbd_transport *kt) 96 + static void ksmbd_tcp_free_transport(struct ksmbd_transport *kt) 97 97 { 98 98 struct tcp_transport *t = TCP_TRANS(kt); 99 99 ··· 656 656 .read = ksmbd_tcp_read, 657 657 .writev = ksmbd_tcp_writev, 658 658 .disconnect = ksmbd_tcp_disconnect, 659 + .free_transport = ksmbd_tcp_free_transport, 659 660 };
+3 -2
fs/smb/server/vfs.c
··· 293 293 294 294 if (v_len - *pos < count) 295 295 count = v_len - *pos; 296 + fp->stream.pos = v_len; 296 297 297 298 memcpy(buf, &stream_buf[*pos], count); 298 299 ··· 457 456 true); 458 457 if (err < 0) 459 458 goto out; 460 - 461 - fp->filp->f_pos = *pos; 459 + else 460 + fp->stream.pos = size; 462 461 err = 0; 463 462 out: 464 463 kvfree(stream_buf);
+1
fs/smb/server/vfs_cache.h
··· 44 44 struct stream { 45 45 char *name; 46 46 ssize_t size; 47 + loff_t pos; 47 48 }; 48 49 49 50 struct ksmbd_inode {
+3 -1
fs/super.c
··· 964 964 spin_unlock(&sb_lock); 965 965 966 966 locked = super_lock_shared(sb); 967 - if (locked) 967 + if (locked) { 968 968 f(sb, arg); 969 + super_unlock_shared(sb); 970 + } 969 971 970 972 spin_lock(&sb_lock); 971 973 if (p)
+1
fs/xattr.c
··· 1479 1479 buffer += err; 1480 1480 } 1481 1481 remaining_size -= err; 1482 + err = 0; 1482 1483 1483 1484 read_lock(&xattrs->lock); 1484 1485 for (rbp = rb_first(&xattrs->rb_root); rbp; rbp = rb_next(rbp)) {
+2
include/crypto/hash.h
··· 202 202 #define HASH_REQUEST_CLONE(name, gfp) \ 203 203 hash_request_clone(name, sizeof(__##name##_req), gfp) 204 204 205 + #define CRYPTO_HASH_STATESIZE(coresize, blocksize) (coresize + blocksize + 1) 206 + 205 207 /** 206 208 * struct shash_alg - synchronous message digest definition 207 209 * @init: see struct ahash_alg
+1 -1
include/crypto/internal/sha2.h
··· 25 25 void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS], 26 26 const u8 *data, size_t nblocks); 27 27 28 - static inline void sha256_choose_blocks( 28 + static __always_inline void sha256_choose_blocks( 29 29 u32 state[SHA256_STATE_WORDS], const u8 *data, size_t nblocks, 30 30 bool force_generic, bool force_simd) 31 31 {
+4 -2
include/crypto/internal/simd.h
··· 44 44 * 45 45 * This delegates to may_use_simd(), except that this also returns false if SIMD 46 46 * in crypto code has been temporarily disabled on this CPU by the crypto 47 - * self-tests, in order to test the no-SIMD fallback code. 47 + * self-tests, in order to test the no-SIMD fallback code. This override is 48 + * currently limited to configurations where the "full" self-tests are enabled, 49 + * because it might be a bit too invasive to be part of the "fast" self-tests. 48 50 */ 49 - #ifdef CONFIG_CRYPTO_SELFTESTS 51 + #ifdef CONFIG_CRYPTO_SELFTESTS_FULL 50 52 DECLARE_PER_CPU(bool, crypto_simd_disabled_for_test); 51 53 #define crypto_simd_usable() \ 52 54 (may_use_simd() && !this_cpu_read(crypto_simd_disabled_for_test))
+4
include/crypto/md5.h
··· 2 2 #ifndef _CRYPTO_MD5_H 3 3 #define _CRYPTO_MD5_H 4 4 5 + #include <crypto/hash.h> 5 6 #include <linux/types.h> 6 7 7 8 #define MD5_DIGEST_SIZE 16 ··· 15 14 #define MD5_H1 0xefcdab89UL 16 15 #define MD5_H2 0x98badcfeUL 17 16 #define MD5_H3 0x10325476UL 17 + 18 + #define CRYPTO_MD5_STATESIZE \ 19 + CRYPTO_HASH_STATESIZE(MD5_STATE_SIZE, MD5_HMAC_BLOCK_SIZE) 18 20 19 21 extern const u8 md5_zero_message_hash[MD5_DIGEST_SIZE]; 20 22
+6
include/linux/atmdev.h
··· 249 249 ATM_SKB(skb)->atm_options = vcc->atm_options; 250 250 } 251 251 252 + static inline void atm_return_tx(struct atm_vcc *vcc, struct sk_buff *skb) 253 + { 254 + WARN_ON_ONCE(refcount_sub_and_test(ATM_SKB(skb)->acct_truesize, 255 + &sk_atm(vcc)->sk_wmem_alloc)); 256 + } 257 + 252 258 static inline void atm_force_charge(struct atm_vcc *vcc,int truesize) 253 259 { 254 260 atomic_add(truesize, &sk_atm(vcc)->sk_rmem_alloc);
+1 -7
include/linux/execmem.h
··· 54 54 EXECMEM_ROX_CACHE = (1 << 1), 55 55 }; 56 56 57 - #if defined(CONFIG_ARCH_HAS_EXECMEM_ROX) && defined(CONFIG_EXECMEM) 57 + #ifdef CONFIG_ARCH_HAS_EXECMEM_ROX 58 58 /** 59 59 * execmem_fill_trapping_insns - set memory to contain instructions that 60 60 * will trap ··· 94 94 * Return: 0 on success or negative error code on failure. 95 95 */ 96 96 int execmem_restore_rox(void *ptr, size_t size); 97 - 98 - /* 99 - * Called from mark_readonly(), where the system transitions to ROX. 100 - */ 101 - void execmem_cache_make_ro(void); 102 97 #else 103 98 static inline int execmem_make_temp_rw(void *ptr, size_t size) { return 0; } 104 99 static inline int execmem_restore_rox(void *ptr, size_t size) { return 0; } 105 - static inline void execmem_cache_make_ro(void) { } 106 100 #endif 107 101 108 102 /**
+3 -1
include/linux/fs.h
··· 399 399 { IOCB_WAITQ, "WAITQ" }, \ 400 400 { IOCB_NOIO, "NOIO" }, \ 401 401 { IOCB_ALLOC_CACHE, "ALLOC_CACHE" }, \ 402 - { IOCB_DIO_CALLER_COMP, "CALLER_COMP" } 402 + { IOCB_DIO_CALLER_COMP, "CALLER_COMP" }, \ 403 + { IOCB_AIO_RW, "AIO_RW" }, \ 404 + { IOCB_HAS_METADATA, "AIO_HAS_METADATA" } 403 405 404 406 struct kiocb { 405 407 struct file *ki_filp;
+1
include/linux/futex.h
··· 89 89 static inline void futex_mm_init(struct mm_struct *mm) 90 90 { 91 91 RCU_INIT_POINTER(mm->futex_phash, NULL); 92 + mm->futex_phash_new = NULL; 92 93 mutex_init(&mm->futex_hash_lock); 93 94 } 94 95
+9 -9
include/linux/ieee80211.h
··· 1278 1278 u8 sa[ETH_ALEN]; 1279 1279 __le32 timestamp; 1280 1280 u8 change_seq; 1281 - u8 variable[0]; 1281 + u8 variable[]; 1282 1282 } __packed s1g_beacon; 1283 1283 } u; 1284 1284 } __packed __aligned(2); ··· 1536 1536 u8 action_code; 1537 1537 u8 dialog_token; 1538 1538 __le16 capability; 1539 - u8 variable[0]; 1539 + u8 variable[]; 1540 1540 } __packed tdls_discover_resp; 1541 1541 struct { 1542 1542 u8 action_code; ··· 1721 1721 struct { 1722 1722 u8 dialog_token; 1723 1723 __le16 capability; 1724 - u8 variable[0]; 1724 + u8 variable[]; 1725 1725 } __packed setup_req; 1726 1726 struct { 1727 1727 __le16 status_code; 1728 1728 u8 dialog_token; 1729 1729 __le16 capability; 1730 - u8 variable[0]; 1730 + u8 variable[]; 1731 1731 } __packed setup_resp; 1732 1732 struct { 1733 1733 __le16 status_code; 1734 1734 u8 dialog_token; 1735 - u8 variable[0]; 1735 + u8 variable[]; 1736 1736 } __packed setup_cfm; 1737 1737 struct { 1738 1738 __le16 reason_code; 1739 - u8 variable[0]; 1739 + u8 variable[]; 1740 1740 } __packed teardown; 1741 1741 struct { 1742 1742 u8 dialog_token; 1743 - u8 variable[0]; 1743 + u8 variable[]; 1744 1744 } __packed discover_req; 1745 1745 struct { 1746 1746 u8 target_channel; 1747 1747 u8 oper_class; 1748 - u8 variable[0]; 1748 + u8 variable[]; 1749 1749 } __packed chan_switch_req; 1750 1750 struct { 1751 1751 __le16 status_code; 1752 - u8 variable[0]; 1752 + u8 variable[]; 1753 1753 } __packed chan_switch_resp; 1754 1754 } u; 1755 1755 } __packed;
+4
include/linux/kmemleak.h
··· 28 28 extern void kmemleak_not_leak(const void *ptr) __ref; 29 29 extern void kmemleak_transient_leak(const void *ptr) __ref; 30 30 extern void kmemleak_ignore(const void *ptr) __ref; 31 + extern void kmemleak_ignore_percpu(const void __percpu *ptr) __ref; 31 32 extern void kmemleak_scan_area(const void *ptr, size_t size, gfp_t gfp) __ref; 32 33 extern void kmemleak_no_scan(const void *ptr) __ref; 33 34 extern void kmemleak_alloc_phys(phys_addr_t phys, size_t size, ··· 96 95 { 97 96 } 98 97 static inline void kmemleak_transient_leak(const void *ptr) 98 + { 99 + } 100 + static inline void kmemleak_ignore_percpu(const void __percpu *ptr) 99 101 { 100 102 } 101 103 static inline void kmemleak_ignore(const void *ptr)
+3 -4
include/linux/libata.h
··· 1352 1352 int ata_acpi_gtm(struct ata_port *ap, struct ata_acpi_gtm *stm); 1353 1353 unsigned int ata_acpi_gtm_xfermask(struct ata_device *dev, 1354 1354 const struct ata_acpi_gtm *gtm); 1355 - int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm); 1355 + int ata_acpi_cbl_pata_type(struct ata_port *ap); 1356 1356 #else 1357 1357 static inline const struct ata_acpi_gtm *ata_acpi_init_gtm(struct ata_port *ap) 1358 1358 { ··· 1377 1377 return 0; 1378 1378 } 1379 1379 1380 - static inline int ata_acpi_cbl_80wire(struct ata_port *ap, 1381 - const struct ata_acpi_gtm *gtm) 1380 + static inline int ata_acpi_cbl_pata_type(struct ata_port *ap) 1382 1381 { 1383 - return 0; 1382 + return ATA_CBL_PATA40; 1384 1383 } 1385 1384 #endif 1386 1385
-5
include/linux/module.h
··· 586 586 atomic_t refcnt; 587 587 #endif 588 588 589 - #ifdef CONFIG_MITIGATION_ITS 590 - int its_num_pages; 591 - void **its_page_array; 592 - #endif 593 - 594 589 #ifdef CONFIG_CONSTRUCTORS 595 590 /* Constructor functions. */ 596 591 ctor_fn_t *ctors;
+2 -4
include/linux/mount.h
··· 116 116 extern int may_umount(struct vfsmount *); 117 117 int do_mount(const char *, const char __user *, 118 118 const char *, unsigned long, void *); 119 - extern struct vfsmount *collect_mounts(const struct path *); 120 - extern void drop_collected_mounts(struct vfsmount *); 121 - extern int iterate_mounts(int (*)(struct vfsmount *, void *), void *, 122 - struct vfsmount *); 119 + extern struct path *collect_paths(const struct path *, struct path *, unsigned); 120 + extern void drop_collected_paths(struct path *, struct path *); 123 121 extern void kern_unmount_array(struct vfsmount *mnt[], unsigned int num); 124 122 125 123 extern int cifs_root_data(char **dev, char **opts);
+1 -1
include/linux/mtd/partitions.h
··· 108 108 deregister_mtd_parser) 109 109 110 110 int mtd_add_partition(struct mtd_info *master, const char *name, 111 - long long offset, long long length, struct mtd_info **part); 111 + long long offset, long long length); 112 112 int mtd_del_partition(struct mtd_info *master, int partno); 113 113 uint64_t mtd_get_device_size(const struct mtd_info *mtd); 114 114
+6 -4
include/linux/mtd/spinand.h
··· 113 113 SPI_MEM_DTR_OP_DATA_IN(len, buf, 2), \ 114 114 SPI_MEM_OP_MAX_FREQ(freq)) 115 115 116 - #define SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(addr, ndummy, buf, len) \ 116 + #define SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(addr, ndummy, buf, len, ...) \ 117 117 SPI_MEM_OP(SPI_MEM_OP_CMD(0xbb, 1), \ 118 118 SPI_MEM_OP_ADDR(2, addr, 2), \ 119 119 SPI_MEM_OP_DUMMY(ndummy, 2), \ 120 - SPI_MEM_OP_DATA_IN(len, buf, 2)) 120 + SPI_MEM_OP_DATA_IN(len, buf, 2), \ 121 + SPI_MEM_OP_MAX_FREQ(__VA_ARGS__ + 0)) 121 122 122 123 #define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_2S_2S_OP(addr, ndummy, buf, len) \ 123 124 SPI_MEM_OP(SPI_MEM_OP_CMD(0xbb, 1), \ ··· 152 151 SPI_MEM_DTR_OP_DATA_IN(len, buf, 4), \ 153 152 SPI_MEM_OP_MAX_FREQ(freq)) 154 153 155 - #define SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(addr, ndummy, buf, len) \ 154 + #define SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(addr, ndummy, buf, len, ...) \ 156 155 SPI_MEM_OP(SPI_MEM_OP_CMD(0xeb, 1), \ 157 156 SPI_MEM_OP_ADDR(2, addr, 4), \ 158 157 SPI_MEM_OP_DUMMY(ndummy, 4), \ 159 - SPI_MEM_OP_DATA_IN(len, buf, 4)) 158 + SPI_MEM_OP_DATA_IN(len, buf, 4), \ 159 + SPI_MEM_OP_MAX_FREQ(__VA_ARGS__ + 0)) 160 160 161 161 #define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_4S_4S_OP(addr, ndummy, buf, len) \ 162 162 SPI_MEM_OP(SPI_MEM_OP_CMD(0xeb, 1), \
+40 -2
include/linux/perf_event.h
··· 635 635 unsigned long size; 636 636 }; 637 637 638 - /** 639 - * enum perf_event_state - the states of an event: 638 + /* 639 + * The normal states are: 640 + * 641 + * ACTIVE --. 642 + * ^ | 643 + * | | 644 + * sched_{in,out}() | 645 + * | | 646 + * v | 647 + * ,---> INACTIVE --+ <-. 648 + * | | | 649 + * | {dis,en}able() 650 + * sched_in() | | 651 + * | OFF <--' --+ 652 + * | | 653 + * `---> ERROR ------' 654 + * 655 + * That is: 656 + * 657 + * sched_in: INACTIVE -> {ACTIVE,ERROR} 658 + * sched_out: ACTIVE -> INACTIVE 659 + * disable: {ACTIVE,INACTIVE} -> OFF 660 + * enable: {OFF,ERROR} -> INACTIVE 661 + * 662 + * Where {OFF,ERROR} are disabled states. 663 + * 664 + * Then we have the {EXIT,REVOKED,DEAD} states which are various shades of 665 + * defunct events: 666 + * 667 + * - EXIT means task that the even was assigned to died, but child events 668 + * still live, and further children can still be created. But the event 669 + * itself will never be active again. It can only transition to 670 + * {REVOKED,DEAD}; 671 + * 672 + * - REVOKED means the PMU the event was associated with is gone; all 673 + * functionality is stopped but the event is still alive. Can only 674 + * transition to DEAD; 675 + * 676 + * - DEAD event really is DYING tearing down state and freeing bits. 677 + * 640 678 */ 641 679 enum perf_event_state { 642 680 PERF_EVENT_STATE_DEAD = -5,
+2 -2
include/linux/resctrl.h
··· 159 159 /** 160 160 * struct rdt_mon_domain - group of CPUs sharing a resctrl monitor resource 161 161 * @hdr: common header for different domain types 162 - * @ci: cache info for this domain 162 + * @ci_id: cache info id for this domain 163 163 * @rmid_busy_llc: bitmap of which limbo RMIDs are above threshold 164 164 * @mbm_total: saved state for MBM total bandwidth 165 165 * @mbm_local: saved state for MBM local bandwidth ··· 170 170 */ 171 171 struct rdt_mon_domain { 172 172 struct rdt_domain_hdr hdr; 173 - struct cacheinfo *ci; 173 + unsigned int ci_id; 174 174 unsigned long *rmid_busy_llc; 175 175 struct mbm_state *mbm_total; 176 176 struct mbm_state *mbm_local;
+12
include/linux/soc/amd/isp4_misc.h
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + 3 + /* 4 + * Copyright (C) 2025 Advanced Micro Devices, Inc. 5 + */ 6 + 7 + #ifndef __SOC_ISP4_MISC_H 8 + #define __SOC_ISP4_MISC_H 9 + 10 + #define AMDISP_I2C_ADAP_NAME "AMDISP DesignWare I2C adapter" 11 + 12 + #endif
+2
include/net/bluetooth/hci_core.h
··· 29 29 #include <linux/idr.h> 30 30 #include <linux/leds.h> 31 31 #include <linux/rculist.h> 32 + #include <linux/srcu.h> 32 33 33 34 #include <net/bluetooth/hci.h> 34 35 #include <net/bluetooth/hci_drv.h> ··· 348 347 349 348 struct hci_dev { 350 349 struct list_head list; 350 + struct srcu_struct srcu; 351 351 struct mutex lock; 352 352 353 353 struct ida unset_handle_ida;
-18
include/trace/events/erofs.h
··· 211 211 show_mflags(__entry->mflags), __entry->ret) 212 212 ); 213 213 214 - TRACE_EVENT(erofs_destroy_inode, 215 - TP_PROTO(struct inode *inode), 216 - 217 - TP_ARGS(inode), 218 - 219 - TP_STRUCT__entry( 220 - __field( dev_t, dev ) 221 - __field( erofs_nid_t, nid ) 222 - ), 223 - 224 - TP_fast_assign( 225 - __entry->dev = inode->i_sb->s_dev; 226 - __entry->nid = EROFS_I(inode)->nid; 227 - ), 228 - 229 - TP_printk("dev = (%d,%d), nid = %llu", show_dev_nid(__entry)) 230 - ); 231 - 232 214 #endif /* _TRACE_EROFS_H */ 233 215 234 216 /* This part must be outside protection */
+2 -2
include/uapi/linux/bits.h
··· 4 4 #ifndef _UAPI_LINUX_BITS_H 5 5 #define _UAPI_LINUX_BITS_H 6 6 7 - #define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (__BITS_PER_LONG - 1 - (h)))) 7 + #define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (BITS_PER_LONG - 1 - (h)))) 8 8 9 - #define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (__BITS_PER_LONG_LONG - 1 - (h)))) 9 + #define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (BITS_PER_LONG_LONG - 1 - (h)))) 10 10 11 11 #define __GENMASK_U128(h, l) \ 12 12 ((_BIT128((h)) << 1) - (_BIT128(l)))
-4
include/uapi/linux/ethtool_netlink.h
··· 208 208 ETHTOOL_A_STATS_PHY_MAX = (__ETHTOOL_A_STATS_PHY_CNT - 1) 209 209 }; 210 210 211 - /* generic netlink info */ 212 - #define ETHTOOL_GENL_NAME "ethtool" 213 - #define ETHTOOL_GENL_VERSION 1 214 - 215 211 #define ETHTOOL_MCGRP_MONITOR_NAME "monitor" 216 212 217 213 #endif /* _UAPI_LINUX_ETHTOOL_NETLINK_H_ */
+22
include/uapi/linux/kvm.h
··· 178 178 #define KVM_EXIT_NOTIFY 37 179 179 #define KVM_EXIT_LOONGARCH_IOCSR 38 180 180 #define KVM_EXIT_MEMORY_FAULT 39 181 + #define KVM_EXIT_TDX 40 181 182 182 183 /* For KVM_EXIT_INTERNAL_ERROR */ 183 184 /* Emulate instruction failed. */ ··· 448 447 __u64 gpa; 449 448 __u64 size; 450 449 } memory_fault; 450 + /* KVM_EXIT_TDX */ 451 + struct { 452 + __u64 flags; 453 + __u64 nr; 454 + union { 455 + struct { 456 + __u64 ret; 457 + __u64 data[5]; 458 + } unknown; 459 + struct { 460 + __u64 ret; 461 + __u64 gpa; 462 + __u64 size; 463 + } get_quote; 464 + struct { 465 + __u64 ret; 466 + __u64 leaf; 467 + __u64 r11, r12, r13, r14; 468 + } get_tdvmcall_info; 469 + }; 470 + } tdx; 451 471 /* Fix the size of the union. */ 452 472 char padding[256]; 453 473 };
+3 -3
include/uapi/linux/mptcp_pm.h
··· 27 27 * token, rem_id. 28 28 * @MPTCP_EVENT_SUB_ESTABLISHED: A new subflow has been established. 'error' 29 29 * should not be set. Attributes: token, family, loc_id, rem_id, saddr4 | 30 - * saddr6, daddr4 | daddr6, sport, dport, backup, if_idx [, error]. 30 + * saddr6, daddr4 | daddr6, sport, dport, backup, if-idx [, error]. 31 31 * @MPTCP_EVENT_SUB_CLOSED: A subflow has been closed. An error (copy of 32 32 * sk_err) could be set if an error has been detected for this subflow. 33 33 * Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | 34 - * daddr6, sport, dport, backup, if_idx [, error]. 34 + * daddr6, sport, dport, backup, if-idx [, error]. 35 35 * @MPTCP_EVENT_SUB_PRIORITY: The priority of a subflow has changed. 'error' 36 36 * should not be set. Attributes: token, family, loc_id, rem_id, saddr4 | 37 - * saddr6, daddr4 | daddr6, sport, dport, backup, if_idx [, error]. 37 + * saddr6, daddr4 | daddr6, sport, dport, backup, if-idx [, error]. 38 38 * @MPTCP_EVENT_LISTENER_CREATED: A new PM listener is created. Attributes: 39 39 * family, sport, saddr4 | saddr6. 40 40 * @MPTCP_EVENT_LISTENER_CLOSED: A PM listener is closed. Attributes: family,
+26 -6
include/uapi/linux/ublk_cmd.h
··· 135 135 #define UBLKSRV_IO_BUF_TOTAL_SIZE (1ULL << UBLKSRV_IO_BUF_TOTAL_BITS) 136 136 137 137 /* 138 - * zero copy requires 4k block size, and can remap ublk driver's io 139 - * request into ublksrv's vm space 138 + * ublk server can register data buffers for incoming I/O requests with a sparse 139 + * io_uring buffer table. The request buffer can then be used as the data buffer 140 + * for io_uring operations via the fixed buffer index. 141 + * Note that the ublk server can never directly access the request data memory. 142 + * 143 + * To use this feature, the ublk server must first register a sparse buffer 144 + * table on an io_uring instance. 145 + * When an incoming ublk request is received, the ublk server submits a 146 + * UBLK_U_IO_REGISTER_IO_BUF command to that io_uring instance. The 147 + * ublksrv_io_cmd's q_id and tag specify the request whose buffer to register 148 + * and addr is the index in the io_uring's buffer table to install the buffer. 149 + * SQEs can now be submitted to the io_uring to read/write the request's buffer 150 + * by enabling fixed buffers (e.g. using IORING_OP_{READ,WRITE}_FIXED or 151 + * IORING_URING_CMD_FIXED) and passing the registered buffer index in buf_index. 152 + * Once the last io_uring operation using the request's buffer has completed, 153 + * the ublk server submits a UBLK_U_IO_UNREGISTER_IO_BUF command with q_id, tag, 154 + * and addr again specifying the request buffer to unregister. 155 + * The ublk request is completed when its buffer is unregistered from all 156 + * io_uring instances and the ublk server issues UBLK_U_IO_COMMIT_AND_FETCH_REQ. 157 + * 158 + * Not available for UBLK_F_UNPRIVILEGED_DEV, as a ublk server can leak 159 + * uninitialized kernel memory by not reading into the full request buffer. 
140 160 */ 141 161 #define UBLK_F_SUPPORT_ZERO_COPY (1ULL << 0) 142 162 ··· 470 450 __u64 sqe_addr) 471 451 { 472 452 struct ublk_auto_buf_reg reg = { 473 - .index = sqe_addr & 0xffff, 474 - .flags = (sqe_addr >> 16) & 0xff, 475 - .reserved0 = (sqe_addr >> 24) & 0xff, 476 - .reserved1 = sqe_addr >> 32, 453 + .index = (__u16)sqe_addr, 454 + .flags = (__u8)(sqe_addr >> 16), 455 + .reserved0 = (__u8)(sqe_addr >> 24), 456 + .reserved1 = (__u32)(sqe_addr >> 32), 477 457 }; 478 458 479 459 return reg;
+4
include/uapi/linux/vm_sockets.h
··· 17 17 #ifndef _UAPI_VM_SOCKETS_H 18 18 #define _UAPI_VM_SOCKETS_H 19 19 20 + #ifndef __KERNEL__ 21 + #include <sys/socket.h> /* for struct sockaddr and sa_family_t */ 22 + #endif 23 + 20 24 #include <linux/socket.h> 21 25 #include <linux/types.h> 22 26
+3 -1
io_uring/io-wq.c
··· 1259 1259 atomic_set(&wq->worker_refs, 1); 1260 1260 init_completion(&wq->worker_done); 1261 1261 ret = cpuhp_state_add_instance_nocalls(io_wq_online, &wq->cpuhp_node); 1262 - if (ret) 1262 + if (ret) { 1263 + put_task_struct(wq->task); 1263 1264 goto err; 1265 + } 1264 1266 1265 1267 return wq; 1266 1268 err:
-2
io_uring/io_uring.h
··· 98 98 struct llist_node *tctx_task_work_run(struct io_uring_task *tctx, unsigned int max_entries, unsigned int *count); 99 99 void tctx_task_work(struct callback_head *cb); 100 100 __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd); 101 - int io_uring_alloc_task_context(struct task_struct *task, 102 - struct io_ring_ctx *ctx); 103 101 104 102 int io_ring_add_registered_file(struct io_uring_task *tctx, struct file *file, 105 103 int start, int end);
+1
io_uring/kbuf.c
··· 271 271 if (len > arg->max_len) { 272 272 len = arg->max_len; 273 273 if (!(bl->flags & IOBL_INC)) { 274 + arg->partial_map = 1; 274 275 if (iov != arg->iovs) 275 276 break; 276 277 buf->len = len;
+2 -1
io_uring/kbuf.h
··· 58 58 size_t max_len; 59 59 unsigned short nr_iovs; 60 60 unsigned short mode; 61 - unsigned buf_group; 61 + unsigned short buf_group; 62 + unsigned short partial_map; 62 63 }; 63 64 64 65 void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
+22 -14
io_uring/net.c
··· 75 75 u16 flags; 76 76 /* initialised and used only by !msg send variants */ 77 77 u16 buf_group; 78 - bool retry; 78 + unsigned short retry_flags; 79 79 void __user *msg_control; 80 80 /* used only for send zerocopy */ 81 81 struct io_kiocb *notif; 82 + }; 83 + 84 + enum sr_retry_flags { 85 + IO_SR_MSG_RETRY = 1, 86 + IO_SR_MSG_PARTIAL_MAP = 2, 82 87 }; 83 88 84 89 /* ··· 192 187 193 188 req->flags &= ~REQ_F_BL_EMPTY; 194 189 sr->done_io = 0; 195 - sr->retry = false; 190 + sr->retry_flags = 0; 196 191 sr->len = 0; /* get from the provided buffer */ 197 192 } 198 193 ··· 402 397 struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg); 403 398 404 399 sr->done_io = 0; 405 - sr->retry = false; 400 + sr->retry_flags = 0; 406 401 sr->len = READ_ONCE(sqe->len); 407 402 sr->flags = READ_ONCE(sqe->ioprio); 408 403 if (sr->flags & ~SENDMSG_FLAGS) ··· 756 751 struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg); 757 752 758 753 sr->done_io = 0; 759 - sr->retry = false; 754 + sr->retry_flags = 0; 760 755 761 756 if (unlikely(sqe->file_index || sqe->addr2)) 762 757 return -EINVAL; ··· 826 821 if (sr->flags & IORING_RECVSEND_BUNDLE) { 827 822 size_t this_ret = *ret - sr->done_io; 828 823 829 - cflags |= io_put_kbufs(req, *ret, io_bundle_nbufs(kmsg, this_ret), 824 + cflags |= io_put_kbufs(req, this_ret, io_bundle_nbufs(kmsg, this_ret), 830 825 issue_flags); 831 - if (sr->retry) 826 + if (sr->retry_flags & IO_SR_MSG_RETRY) 832 827 cflags = req->cqe.flags | (cflags & CQE_F_MASK); 833 828 /* bundle with no more immediate buffers, we're done */ 834 829 if (req->flags & REQ_F_BL_EMPTY) ··· 837 832 * If more is available AND it was a full transfer, retry and 838 833 * append to this one 839 834 */ 840 - if (!sr->retry && kmsg->msg.msg_inq > 1 && this_ret > 0 && 835 + if (!sr->retry_flags && kmsg->msg.msg_inq > 1 && this_ret > 0 && 841 836 !iov_iter_count(&kmsg->msg.msg_iter)) { 842 837 req->cqe.flags = cflags & ~CQE_F_MASK; 843 838 sr->len = kmsg->msg.msg_inq; 844 
839 sr->done_io += this_ret; 845 - sr->retry = true; 840 + sr->retry_flags |= IO_SR_MSG_RETRY; 846 841 return false; 847 842 } 848 843 } else { ··· 1082 1077 if (unlikely(ret < 0)) 1083 1078 return ret; 1084 1079 1080 + if (arg.iovs != &kmsg->fast_iov && arg.iovs != kmsg->vec.iovec) { 1081 + kmsg->vec.nr = ret; 1082 + kmsg->vec.iovec = arg.iovs; 1083 + req->flags |= REQ_F_NEED_CLEANUP; 1084 + } 1085 + if (arg.partial_map) 1086 + sr->retry_flags |= IO_SR_MSG_PARTIAL_MAP; 1087 + 1085 1088 /* special case 1 vec, can be a fast path */ 1086 1089 if (ret == 1) { 1087 1090 sr->buf = arg.iovs[0].iov_base; ··· 1098 1085 } 1099 1086 iov_iter_init(&kmsg->msg.msg_iter, ITER_DEST, arg.iovs, ret, 1100 1087 arg.out_len); 1101 - if (arg.iovs != &kmsg->fast_iov && arg.iovs != kmsg->vec.iovec) { 1102 - kmsg->vec.nr = ret; 1103 - kmsg->vec.iovec = arg.iovs; 1104 - req->flags |= REQ_F_NEED_CLEANUP; 1105 - } 1106 1088 } else { 1107 1089 void __user *buf; 1108 1090 ··· 1283 1275 int ret; 1284 1276 1285 1277 zc->done_io = 0; 1286 - zc->retry = false; 1278 + zc->retry_flags = 0; 1287 1279 1288 1280 if (unlikely(READ_ONCE(sqe->__pad2[0]) || READ_ONCE(sqe->addr3))) 1289 1281 return -EINVAL;
+1
io_uring/opdef.c
··· 216 216 }, 217 217 [IORING_OP_FALLOCATE] = { 218 218 .needs_file = 1, 219 + .hash_reg_file = 1, 219 220 .prep = io_fallocate_prep, 220 221 .issue = io_fallocate, 221 222 },
+25 -9
io_uring/rsrc.c
··· 112 112 struct io_mapped_ubuf *imu = priv; 113 113 unsigned int i; 114 114 115 - for (i = 0; i < imu->nr_bvecs; i++) 116 - unpin_user_page(imu->bvec[i].bv_page); 115 + for (i = 0; i < imu->nr_bvecs; i++) { 116 + struct folio *folio = page_folio(imu->bvec[i].bv_page); 117 + 118 + unpin_user_folio(folio, 1); 119 + } 117 120 } 118 121 119 122 static struct io_mapped_ubuf *io_alloc_imu(struct io_ring_ctx *ctx, ··· 734 731 735 732 data->nr_pages_mid = folio_nr_pages(folio); 736 733 data->folio_shift = folio_shift(folio); 734 + data->first_folio_page_idx = folio_page_idx(folio, page_array[0]); 737 735 738 736 /* 739 737 * Check if pages are contiguous inside a folio, and all folios have ··· 813 809 814 810 imu->nr_bvecs = nr_pages; 815 811 ret = io_buffer_account_pin(ctx, pages, nr_pages, imu, last_hpage); 816 - if (ret) { 817 - unpin_user_pages(pages, nr_pages); 812 + if (ret) 818 813 goto done; 819 - } 820 814 821 815 size = iov->iov_len; 822 816 /* store original address for later verification */ ··· 828 826 if (coalesced) 829 827 imu->folio_shift = data.folio_shift; 830 828 refcount_set(&imu->refs, 1); 831 - off = (unsigned long) iov->iov_base & ((1UL << imu->folio_shift) - 1); 829 + 830 + off = (unsigned long)iov->iov_base & ~PAGE_MASK; 831 + if (coalesced) 832 + off += data.first_folio_page_idx << PAGE_SHIFT; 833 + 832 834 node->buf = imu; 833 835 ret = 0; 834 836 ··· 848 842 if (ret) { 849 843 if (imu) 850 844 io_free_imu(ctx, imu); 845 + if (pages) { 846 + for (i = 0; i < nr_pages; i++) 847 + unpin_user_folio(page_folio(pages[i]), 1); 848 + } 851 849 io_cache_free(&ctx->node_cache, node); 852 850 node = ERR_PTR(ret); 853 851 } ··· 1187 1177 return -EINVAL; 1188 1178 if (check_add_overflow(arg->nr, arg->dst_off, &nbufs)) 1189 1179 return -EOVERFLOW; 1180 + if (nbufs > IORING_MAX_REG_BUFFERS) 1181 + return -EINVAL; 1190 1182 1191 1183 ret = io_rsrc_data_alloc(&data, max(nbufs, ctx->buf_table.nr)); 1192 1184 if (ret) ··· 1339 1327 { 1340 1328 unsigned long 
folio_size = 1 << imu->folio_shift; 1341 1329 unsigned long folio_mask = folio_size - 1; 1342 - u64 folio_addr = imu->ubuf & ~folio_mask; 1343 1330 struct bio_vec *res_bvec = vec->bvec; 1344 1331 size_t total_len = 0; 1345 1332 unsigned bvec_idx = 0; ··· 1360 1349 if (unlikely(check_add_overflow(total_len, iov_len, &total_len))) 1361 1350 return -EOVERFLOW; 1362 1351 1363 - /* by using folio address it also accounts for bvec offset */ 1364 - offset = buf_addr - folio_addr; 1352 + offset = buf_addr - imu->ubuf; 1353 + /* 1354 + * Only the first bvec can have non zero bv_offset, account it 1355 + * here and work with full folios below. 1356 + */ 1357 + offset += imu->bvec[0].bv_offset; 1358 + 1365 1359 src_bvec = imu->bvec + (offset >> imu->folio_shift); 1366 1360 offset &= folio_mask; 1367 1361
+1
io_uring/rsrc.h
··· 49 49 unsigned int nr_pages_mid; 50 50 unsigned int folio_shift; 51 51 unsigned int nr_folios; 52 + unsigned long first_folio_page_idx; 52 53 }; 53 54 54 55 bool io_rsrc_cache_init(struct io_ring_ctx *ctx);
+2 -4
io_uring/sqpoll.c
··· 16 16 #include <uapi/linux/io_uring.h> 17 17 18 18 #include "io_uring.h" 19 + #include "tctx.h" 19 20 #include "napi.h" 20 21 #include "sqpoll.h" 21 22 ··· 420 419 __cold int io_sq_offload_create(struct io_ring_ctx *ctx, 421 420 struct io_uring_params *p) 422 421 { 423 - struct task_struct *task_to_put = NULL; 424 422 int ret; 425 423 426 424 /* Retain compatibility with failing for an invalid attach attempt */ ··· 498 498 rcu_assign_pointer(sqd->thread, tsk); 499 499 mutex_unlock(&sqd->lock); 500 500 501 - task_to_put = get_task_struct(tsk); 501 + get_task_struct(tsk); 502 502 ret = io_uring_alloc_task_context(tsk, ctx); 503 503 wake_up_new_task(tsk); 504 504 if (ret) ··· 513 513 complete(&ctx->sq_data->exited); 514 514 err: 515 515 io_sq_thread_finish(ctx); 516 - if (task_to_put) 517 - put_task_struct(task_to_put); 518 516 return ret; 519 517 } 520 518
+4 -2
io_uring/zcrx.c
··· 106 106 for_each_sgtable_dma_sg(mem->sgt, sg, i) 107 107 total_size += sg_dma_len(sg); 108 108 109 - if (total_size < off + len) 110 - return -EINVAL; 109 + if (total_size < off + len) { 110 + ret = -EINVAL; 111 + goto err; 112 + } 111 113 112 114 mem->dmabuf_offset = off; 113 115 mem->size = len;
+1
kernel/Kconfig.kexec
··· 134 134 depends on KEXEC_FILE 135 135 depends on CRASH_DUMP 136 136 depends on DM_CRYPT 137 + depends on KEYS 137 138 help 138 139 With this option enabled, user space can interact with 139 140 /sys/kernel/config/crash_dm_crypt_keys to make the dm crypt keys
+34 -29
kernel/audit_tree.c
··· 668 668 return 0; 669 669 } 670 670 671 - static int compare_root(struct vfsmount *mnt, void *arg) 672 - { 673 - return inode_to_key(d_backing_inode(mnt->mnt_root)) == 674 - (unsigned long)arg; 675 - } 676 - 677 671 void audit_trim_trees(void) 678 672 { 679 673 struct list_head cursor; ··· 677 683 while (cursor.next != &tree_list) { 678 684 struct audit_tree *tree; 679 685 struct path path; 680 - struct vfsmount *root_mnt; 681 686 struct audit_node *node; 687 + struct path *paths; 688 + struct path array[16]; 682 689 int err; 683 690 684 691 tree = container_of(cursor.next, struct audit_tree, list); ··· 691 696 if (err) 692 697 goto skip_it; 693 698 694 - root_mnt = collect_mounts(&path); 699 + paths = collect_paths(&path, array, 16); 695 700 path_put(&path); 696 - if (IS_ERR(root_mnt)) 701 + if (IS_ERR(paths)) 697 702 goto skip_it; 698 703 699 704 spin_lock(&hash_lock); ··· 701 706 struct audit_chunk *chunk = find_chunk(node); 702 707 /* this could be NULL if the watch is dying else where... 
*/ 703 708 node->index |= 1U<<31; 704 - if (iterate_mounts(compare_root, 705 - (void *)(chunk->key), 706 - root_mnt)) 707 - node->index &= ~(1U<<31); 709 + for (struct path *p = paths; p->dentry; p++) { 710 + struct inode *inode = p->dentry->d_inode; 711 + if (inode_to_key(inode) == chunk->key) { 712 + node->index &= ~(1U<<31); 713 + break; 714 + } 715 + } 708 716 } 709 717 spin_unlock(&hash_lock); 710 718 trim_marked(tree); 711 - drop_collected_mounts(root_mnt); 719 + drop_collected_paths(paths, array); 712 720 skip_it: 713 721 put_tree(tree); 714 722 mutex_lock(&audit_filter_mutex); ··· 740 742 put_tree(tree); 741 743 } 742 744 743 - static int tag_mount(struct vfsmount *mnt, void *arg) 745 + static int tag_mounts(struct path *paths, struct audit_tree *tree) 744 746 { 745 - return tag_chunk(d_backing_inode(mnt->mnt_root), arg); 747 + for (struct path *p = paths; p->dentry; p++) { 748 + int err = tag_chunk(p->dentry->d_inode, tree); 749 + if (err) 750 + return err; 751 + } 752 + return 0; 746 753 } 747 754 748 755 /* ··· 804 801 { 805 802 struct audit_tree *seed = rule->tree, *tree; 806 803 struct path path; 807 - struct vfsmount *mnt; 804 + struct path array[16]; 805 + struct path *paths; 808 806 int err; 809 807 810 808 rule->tree = NULL; ··· 832 828 err = kern_path(tree->pathname, 0, &path); 833 829 if (err) 834 830 goto Err; 835 - mnt = collect_mounts(&path); 831 + paths = collect_paths(&path, array, 16); 836 832 path_put(&path); 837 - if (IS_ERR(mnt)) { 838 - err = PTR_ERR(mnt); 833 + if (IS_ERR(paths)) { 834 + err = PTR_ERR(paths); 839 835 goto Err; 840 836 } 841 837 842 838 get_tree(tree); 843 - err = iterate_mounts(tag_mount, tree, mnt); 844 - drop_collected_mounts(mnt); 839 + err = tag_mounts(paths, tree); 840 + drop_collected_paths(paths, array); 845 841 846 842 if (!err) { 847 843 struct audit_node *node; ··· 876 872 struct list_head cursor, barrier; 877 873 int failed = 0; 878 874 struct path path1, path2; 879 - struct vfsmount *tagged; 875 + struct 
path array[16]; 876 + struct path *paths; 880 877 int err; 881 878 882 879 err = kern_path(new, 0, &path2); 883 880 if (err) 884 881 return err; 885 - tagged = collect_mounts(&path2); 882 + paths = collect_paths(&path2, array, 16); 886 883 path_put(&path2); 887 - if (IS_ERR(tagged)) 888 - return PTR_ERR(tagged); 884 + if (IS_ERR(paths)) 885 + return PTR_ERR(paths); 889 886 890 887 err = kern_path(old, 0, &path1); 891 888 if (err) { 892 - drop_collected_mounts(tagged); 889 + drop_collected_paths(paths, array); 893 890 return err; 894 891 } 895 892 ··· 919 914 continue; 920 915 } 921 916 922 - failed = iterate_mounts(tag_mount, tree, tagged); 917 + failed = tag_mounts(paths, tree); 923 918 if (failed) { 924 919 put_tree(tree); 925 920 mutex_lock(&audit_filter_mutex); ··· 960 955 list_del(&cursor); 961 956 mutex_unlock(&audit_filter_mutex); 962 957 path_put(&path1); 963 - drop_collected_mounts(tagged); 958 + drop_collected_paths(paths, array); 964 959 return failed; 965 960 } 966 961
+6 -3
kernel/bpf/bpf_lru_list.c
··· 337 337 list) { 338 338 __bpf_lru_node_move_to_free(l, node, local_free_list(loc_l), 339 339 BPF_LRU_LOCAL_LIST_T_FREE); 340 - if (++nfree == LOCAL_FREE_TARGET) 340 + if (++nfree == lru->target_free) 341 341 break; 342 342 } 343 343 344 - if (nfree < LOCAL_FREE_TARGET) 345 - __bpf_lru_list_shrink(lru, l, LOCAL_FREE_TARGET - nfree, 344 + if (nfree < lru->target_free) 345 + __bpf_lru_list_shrink(lru, l, lru->target_free - nfree, 346 346 local_free_list(loc_l), 347 347 BPF_LRU_LOCAL_LIST_T_FREE); 348 348 ··· 577 577 list_add(&node->list, &l->lists[BPF_LRU_LIST_T_FREE]); 578 578 buf += elem_size; 579 579 } 580 + 581 + lru->target_free = clamp((nr_elems / num_possible_cpus()) / 2, 582 + 1, LOCAL_FREE_TARGET); 580 583 } 581 584 582 585 static void bpf_percpu_lru_populate(struct bpf_lru *lru, void *buf,
+1
kernel/bpf/bpf_lru_list.h
··· 58 58 del_from_htab_func del_from_htab; 59 59 void *del_arg; 60 60 unsigned int hash_offset; 61 + unsigned int target_free; 61 62 unsigned int nr_scans; 62 63 bool percpu; 63 64 };
+1 -1
kernel/bpf/cgroup.c
··· 2134 2134 .gpl_only = false, 2135 2135 .ret_type = RET_INTEGER, 2136 2136 .arg1_type = ARG_PTR_TO_CTX, 2137 - .arg2_type = ARG_PTR_TO_MEM, 2137 + .arg2_type = ARG_PTR_TO_MEM | MEM_WRITE, 2138 2138 .arg3_type = ARG_CONST_SIZE, 2139 2139 .arg4_type = ARG_ANYTHING, 2140 2140 };
+2 -3
kernel/bpf/verifier.c
··· 7027 7027 struct inode *f_inode; 7028 7028 }; 7029 7029 7030 - BTF_TYPE_SAFE_TRUSTED(struct dentry) { 7031 - /* no negative dentry-s in places where bpf can see it */ 7030 + BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct dentry) { 7032 7031 struct inode *d_inode; 7033 7032 }; 7034 7033 ··· 7065 7066 BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct bpf_iter__task)); 7066 7067 BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct linux_binprm)); 7067 7068 BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct file)); 7068 - BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct dentry)); 7069 7069 7070 7070 return btf_nested_type_is_trusted(&env->log, reg, field_name, btf_id, "__safe_trusted"); 7071 7071 } ··· 7074 7076 const char *field_name, u32 btf_id) 7075 7077 { 7076 7078 BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct socket)); 7079 + BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct dentry)); 7077 7080 7078 7081 return btf_nested_type_is_trusted(&env->log, reg, field_name, btf_id, 7079 7082 "__safe_trusted_or_null");
+1 -2
kernel/cgroup/legacy_freezer.c
··· 188 188 if (!(freezer->state & CGROUP_FREEZING)) { 189 189 __thaw_task(task); 190 190 } else { 191 - freeze_task(task); 192 - 193 191 /* clear FROZEN and propagate upwards */ 194 192 while (freezer && (freezer->state & CGROUP_FROZEN)) { 195 193 freezer->state &= ~CGROUP_FROZEN; 196 194 freezer = parent_freezer(freezer); 197 195 } 196 + freeze_task(task); 198 197 } 199 198 } 200 199
+77 -45
kernel/events/core.c
··· 207 207 __perf_ctx_unlock(&cpuctx->ctx); 208 208 } 209 209 210 + typedef struct { 211 + struct perf_cpu_context *cpuctx; 212 + struct perf_event_context *ctx; 213 + } class_perf_ctx_lock_t; 214 + 215 + static inline void class_perf_ctx_lock_destructor(class_perf_ctx_lock_t *_T) 216 + { perf_ctx_unlock(_T->cpuctx, _T->ctx); } 217 + 218 + static inline class_perf_ctx_lock_t 219 + class_perf_ctx_lock_constructor(struct perf_cpu_context *cpuctx, 220 + struct perf_event_context *ctx) 221 + { perf_ctx_lock(cpuctx, ctx); return (class_perf_ctx_lock_t){ cpuctx, ctx }; } 222 + 210 223 #define TASK_TOMBSTONE ((void *)-1L) 211 224 212 225 static bool is_kernel_event(struct perf_event *event) ··· 957 944 if (READ_ONCE(cpuctx->cgrp) == cgrp) 958 945 return; 959 946 960 - perf_ctx_lock(cpuctx, cpuctx->task_ctx); 947 + guard(perf_ctx_lock)(cpuctx, cpuctx->task_ctx); 948 + /* 949 + * Re-check, could've raced vs perf_remove_from_context(). 950 + */ 951 + if (READ_ONCE(cpuctx->cgrp) == NULL) 952 + return; 953 + 961 954 perf_ctx_disable(&cpuctx->ctx, true); 962 955 963 956 ctx_sched_out(&cpuctx->ctx, NULL, EVENT_ALL|EVENT_CGROUP); ··· 981 962 ctx_sched_in(&cpuctx->ctx, NULL, EVENT_ALL|EVENT_CGROUP); 982 963 983 964 perf_ctx_enable(&cpuctx->ctx, true); 984 - perf_ctx_unlock(cpuctx, cpuctx->task_ctx); 985 965 } 986 966 987 967 static int perf_cgroup_ensure_storage(struct perf_event *event, ··· 2138 2120 if (event->group_leader == event) 2139 2121 del_event_from_groups(event, ctx); 2140 2122 2141 - /* 2142 - * If event was in error state, then keep it 2143 - * that way, otherwise bogus counts will be 2144 - * returned on read(). 
The only way to get out 2145 - * of error state is by explicit re-enabling 2146 - * of the event 2147 - */ 2148 - if (event->state > PERF_EVENT_STATE_OFF) { 2149 - perf_cgroup_event_disable(event, ctx); 2150 - perf_event_set_state(event, PERF_EVENT_STATE_OFF); 2151 - } 2152 - 2153 2123 ctx->generation++; 2154 2124 event->pmu_ctx->nr_events--; 2155 2125 } ··· 2155 2149 } 2156 2150 2157 2151 static void put_event(struct perf_event *event); 2158 - static void event_sched_out(struct perf_event *event, 2159 - struct perf_event_context *ctx); 2152 + static void __event_disable(struct perf_event *event, 2153 + struct perf_event_context *ctx, 2154 + enum perf_event_state state); 2160 2155 2161 2156 static void perf_put_aux_event(struct perf_event *event) 2162 2157 { ··· 2190 2183 * state so that we don't try to schedule it again. Note 2191 2184 * that perf_event_enable() will clear the ERROR status. 2192 2185 */ 2193 - event_sched_out(iter, ctx); 2194 - perf_event_set_state(event, PERF_EVENT_STATE_ERROR); 2186 + __event_disable(iter, ctx, PERF_EVENT_STATE_ERROR); 2195 2187 } 2196 2188 } 2197 2189 ··· 2248 2242 &event->pmu_ctx->flexible_active; 2249 2243 } 2250 2244 2251 - /* 2252 - * Events that have PERF_EV_CAP_SIBLING require being part of a group and 2253 - * cannot exist on their own, schedule them out and move them into the ERROR 2254 - * state. Also see _perf_event_enable(), it will not be able to recover 2255 - * this ERROR state. 
2256 - */ 2257 - static inline void perf_remove_sibling_event(struct perf_event *event) 2258 - { 2259 - event_sched_out(event, event->ctx); 2260 - perf_event_set_state(event, PERF_EVENT_STATE_ERROR); 2261 - } 2262 - 2263 2245 static void perf_group_detach(struct perf_event *event) 2264 2246 { 2265 2247 struct perf_event *leader = event->group_leader; ··· 2283 2289 */ 2284 2290 list_for_each_entry_safe(sibling, tmp, &event->sibling_list, sibling_list) { 2285 2291 2292 + /* 2293 + * Events that have PERF_EV_CAP_SIBLING require being part of 2294 + * a group and cannot exist on their own, schedule them out 2295 + * and move them into the ERROR state. Also see 2296 + * _perf_event_enable(), it will not be able to recover this 2297 + * ERROR state. 2298 + */ 2286 2299 if (sibling->event_caps & PERF_EV_CAP_SIBLING) 2287 - perf_remove_sibling_event(sibling); 2300 + __event_disable(sibling, ctx, PERF_EVENT_STATE_ERROR); 2288 2301 2289 2302 sibling->group_leader = sibling; 2290 2303 list_del_init(&sibling->sibling_list); ··· 2494 2493 state = PERF_EVENT_STATE_EXIT; 2495 2494 if (flags & DETACH_REVOKE) 2496 2495 state = PERF_EVENT_STATE_REVOKED; 2497 - if (flags & DETACH_DEAD) { 2498 - event->pending_disable = 1; 2496 + if (flags & DETACH_DEAD) 2499 2497 state = PERF_EVENT_STATE_DEAD; 2500 - } 2498 + 2501 2499 event_sched_out(event, ctx); 2500 + 2501 + if (event->state > PERF_EVENT_STATE_OFF) 2502 + perf_cgroup_event_disable(event, ctx); 2503 + 2502 2504 perf_event_set_state(event, min(event->state, state)); 2503 2505 2504 2506 if (flags & DETACH_GROUP) ··· 2566 2562 event_function_call(event, __perf_remove_from_context, (void *)flags); 2567 2563 } 2568 2564 2565 + static void __event_disable(struct perf_event *event, 2566 + struct perf_event_context *ctx, 2567 + enum perf_event_state state) 2568 + { 2569 + event_sched_out(event, ctx); 2570 + perf_cgroup_event_disable(event, ctx); 2571 + perf_event_set_state(event, state); 2572 + } 2573 + 2569 2574 /* 2570 2575 * Cross CPU 
call to disable a performance event 2571 2576 */ ··· 2589 2576 perf_pmu_disable(event->pmu_ctx->pmu); 2590 2577 ctx_time_update_event(ctx, event); 2591 2578 2579 + /* 2580 + * When disabling a group leader, the whole group becomes ineligible 2581 + * to run, so schedule out the full group. 2582 + */ 2592 2583 if (event == event->group_leader) 2593 2584 group_sched_out(event, ctx); 2594 - else 2595 - event_sched_out(event, ctx); 2596 2585 2597 - perf_event_set_state(event, PERF_EVENT_STATE_OFF); 2598 - perf_cgroup_event_disable(event, ctx); 2586 + /* 2587 + * But only mark the leader OFF; the siblings will remain 2588 + * INACTIVE. 2589 + */ 2590 + __event_disable(event, ctx, PERF_EVENT_STATE_OFF); 2599 2591 2600 2592 perf_pmu_enable(event->pmu_ctx->pmu); 2601 2593 } ··· 2674 2656 2675 2657 static void perf_event_throttle(struct perf_event *event) 2676 2658 { 2677 - event->pmu->stop(event, 0); 2678 2659 event->hw.interrupts = MAX_INTERRUPTS; 2660 + event->pmu->stop(event, 0); 2679 2661 if (event == event->group_leader) 2680 2662 perf_log_throttle(event, 0); 2681 2663 } ··· 7251 7233 * CPU-A CPU-B 7252 7234 * 7253 7235 * perf_event_disable_inatomic() 7254 - * @pending_disable = CPU-A; 7236 + * @pending_disable = 1; 7255 7237 * irq_work_queue(); 7256 7238 * 7257 7239 * sched-out 7258 - * @pending_disable = -1; 7240 + * @pending_disable = 0; 7259 7241 * 7260 7242 * sched-in 7261 7243 * perf_event_disable_inatomic() 7262 - * @pending_disable = CPU-B; 7244 + * @pending_disable = 1; 7263 7245 * irq_work_queue(); // FAILS 7264 7246 * 7265 7247 * irq_work_run() ··· 7455 7437 7456 7438 /* No regs, no stack pointer, no dump. */ 7457 7439 if (!regs) 7440 + return 0; 7441 + 7442 + /* No mm, no stack, no dump. 
*/ 7443 + if (!current->mm) 7458 7444 return 0; 7459 7445 7460 7446 /* ··· 8171 8149 bool crosstask = event->ctx->task && event->ctx->task != current; 8172 8150 const u32 max_stack = event->attr.sample_max_stack; 8173 8151 struct perf_callchain_entry *callchain; 8152 + 8153 + if (!current->mm) 8154 + user = false; 8174 8155 8175 8156 if (!kernel && !user) 8176 8157 return &__empty_callchain; ··· 11774 11749 { 11775 11750 struct hw_perf_event *hwc = &event->hw; 11776 11751 11777 - if (is_sampling_event(event)) { 11752 + /* 11753 + * The throttle can be triggered in the hrtimer handler. 11754 + * The HRTIMER_NORESTART should be used to stop the timer, 11755 + * rather than hrtimer_cancel(). See perf_swevent_hrtimer() 11756 + */ 11757 + if (is_sampling_event(event) && (hwc->interrupts != MAX_INTERRUPTS)) { 11778 11758 ktime_t remaining = hrtimer_get_remaining(&hwc->hrtimer); 11779 11759 local64_set(&hwc->period_left, ktime_to_ns(remaining)); 11780 11760 ··· 11834 11804 static void cpu_clock_event_stop(struct perf_event *event, int flags) 11835 11805 { 11836 11806 perf_swevent_cancel_hrtimer(event); 11837 - cpu_clock_event_update(event); 11807 + if (flags & PERF_EF_UPDATE) 11808 + cpu_clock_event_update(event); 11838 11809 } 11839 11810 11840 11811 static int cpu_clock_event_add(struct perf_event *event, int flags) ··· 11913 11882 static void task_clock_event_stop(struct perf_event *event, int flags) 11914 11883 { 11915 11884 perf_swevent_cancel_hrtimer(event); 11916 - task_clock_event_update(event, event->ctx->time); 11885 + if (flags & PERF_EF_UPDATE) 11886 + task_clock_event_update(event, event->ctx->time); 11917 11887 } 11918 11888 11919 11889 static int task_clock_event_add(struct perf_event *event, int flags)
+2 -2
kernel/events/ring_buffer.c
··· 441 441 * store that will be enabled on successful return 442 442 */ 443 443 if (!handle->size) { /* A, matches D */ 444 - event->pending_disable = smp_processor_id(); 444 + perf_event_disable_inatomic(handle->event); 445 445 perf_output_wakeup(handle); 446 446 WRITE_ONCE(rb->aux_nest, 0); 447 447 goto err_put; ··· 526 526 527 527 if (wakeup) { 528 528 if (handle->aux_flags & PERF_AUX_FLAG_TRUNCATED) 529 - handle->event->pending_disable = smp_processor_id(); 529 + perf_event_disable_inatomic(handle->event); 530 530 perf_output_wakeup(handle); 531 531 } 532 532
+9 -8
kernel/exit.c
··· 940 940 taskstats_exit(tsk, group_dead); 941 941 trace_sched_process_exit(tsk, group_dead); 942 942 943 + /* 944 + * Since sampling can touch ->mm, make sure to stop everything before we 945 + * tear it down. 946 + * 947 + * Also flushes inherited counters to the parent - before the parent 948 + * gets woken up by child-exit notifications. 949 + */ 950 + perf_event_exit_task(tsk); 951 + 943 952 exit_mm(); 944 953 945 954 if (group_dead) ··· 963 954 exit_task_namespaces(tsk); 964 955 exit_task_work(tsk); 965 956 exit_thread(tsk); 966 - 967 - /* 968 - * Flush inherited counters to the parent - before the parent 969 - * gets woken up by child-exit notifications. 970 - * 971 - * because of cgroup mode, must be called before cgroup_exit() 972 - */ 973 - perf_event_exit_task(tsk); 974 957 975 958 sched_autogroup_exit_task(tsk); 976 959 cgroup_exit(tsk);
+12 -2
kernel/futex/core.c
··· 583 583 if (futex_get_value(&node, naddr)) 584 584 return -EFAULT; 585 585 586 - if (node != FUTEX_NO_NODE && 587 - (node >= MAX_NUMNODES || !node_possible(node))) 586 + if ((node != FUTEX_NO_NODE) && 587 + ((unsigned int)node >= MAX_NUMNODES || !node_possible(node))) 588 588 return -EINVAL; 589 589 } 590 590 ··· 1629 1629 mm->futex_phash_new = NULL; 1630 1630 1631 1631 if (fph) { 1632 + if (cur && (!cur->hash_mask || cur->immutable)) { 1633 + /* 1634 + * If two threads simultaneously request the global 1635 + * hash then the first one performs the switch, 1636 + * the second one returns here. 1637 + */ 1638 + free = fph; 1639 + mm->futex_phash_new = new; 1640 + return -EBUSY; 1641 + } 1632 1642 if (cur && !new) { 1633 1643 /* 1634 1644 * If we have an existing hash, but do not yet have
+8
kernel/irq/chip.c
··· 205 205 206 206 void irq_startup_managed(struct irq_desc *desc) 207 207 { 208 + struct irq_data *d = irq_desc_get_irq_data(desc); 209 + 210 + /* 211 + * Clear managed-shutdown flag, so we don't repeat managed-startup for 212 + * multiple hotplugs, and cause imbalanced disable depth. 213 + */ 214 + irqd_clr_managed_shutdown(d); 215 + 208 216 /* 209 217 * Only start it up when the disable depth is 1, so that a disable, 210 218 * hotunplug, hotplug sequence does not end up enabling it during
-7
kernel/irq/cpuhotplug.c
··· 210 210 !irq_data_get_irq_chip(data) || !cpumask_test_cpu(cpu, affinity)) 211 211 return; 212 212 213 - /* 214 - * Don't restore suspended interrupts here when a system comes back 215 - * from S3. They are reenabled via resume_device_irqs(). 216 - */ 217 - if (desc->istate & IRQS_SUSPENDED) 218 - return; 219 - 220 213 if (irqd_is_managed_and_shutdown(data)) 221 214 irq_startup_managed(desc); 222 215
+1 -1
kernel/irq/irq_sim.c
··· 202 202 void *data) 203 203 { 204 204 struct irq_sim_work_ctx *work_ctx __free(kfree) = 205 - kmalloc(sizeof(*work_ctx), GFP_KERNEL); 205 + kzalloc(sizeof(*work_ctx), GFP_KERNEL); 206 206 207 207 if (!work_ctx) 208 208 return ERR_PTR(-ENOMEM);
+17 -12
kernel/kexec_handover.c
··· 164 164 } 165 165 166 166 /* almost as free_reserved_page(), just don't free the page */ 167 - static void kho_restore_page(struct page *page) 167 + static void kho_restore_page(struct page *page, unsigned int order) 168 168 { 169 - ClearPageReserved(page); 170 - init_page_count(page); 171 - adjust_managed_page_count(page, 1); 169 + unsigned int nr_pages = (1 << order); 170 + 171 + /* Head page gets refcount of 1. */ 172 + set_page_count(page, 1); 173 + 174 + /* For higher order folios, tail pages get a page count of zero. */ 175 + for (unsigned int i = 1; i < nr_pages; i++) 176 + set_page_count(page + i, 0); 177 + 178 + if (order > 0) 179 + prep_compound_page(page, order); 180 + 181 + adjust_managed_page_count(page, nr_pages); 172 182 } 173 183 174 184 /** ··· 196 186 return NULL; 197 187 198 188 order = page->private; 199 - if (order) { 200 - if (order > MAX_PAGE_ORDER) 201 - return NULL; 189 + if (order > MAX_PAGE_ORDER) 190 + return NULL; 202 191 203 - prep_compound_page(page, order); 204 - } else { 205 - kho_restore_page(page); 206 - } 207 - 192 + kho_restore_page(page, order); 208 193 return page_folio(page); 209 194 } 210 195 EXPORT_SYMBOL_GPL(kho_restore_folio);
+4
kernel/rcu/tree.c
··· 3072 3072 /* Misaligned rcu_head! */ 3073 3073 WARN_ON_ONCE((unsigned long)head & (sizeof(void *) - 1)); 3074 3074 3075 + /* Avoid NULL dereference if callback is NULL. */ 3076 + if (WARN_ON_ONCE(!func)) 3077 + return; 3078 + 3075 3079 if (debug_rcu_head_queue(head)) { 3076 3080 /* 3077 3081 * Probable double call_rcu(), so leak the callback.
+2 -2
kernel/sched/core.c
··· 8545 8545 init_cfs_bandwidth(&root_task_group.cfs_bandwidth, NULL); 8546 8546 #endif /* CONFIG_FAIR_GROUP_SCHED */ 8547 8547 #ifdef CONFIG_EXT_GROUP_SCHED 8548 - root_task_group.scx_weight = CGROUP_WEIGHT_DFL; 8548 + scx_tg_init(&root_task_group); 8549 8549 #endif /* CONFIG_EXT_GROUP_SCHED */ 8550 8550 #ifdef CONFIG_RT_GROUP_SCHED 8551 8551 root_task_group.rt_se = (struct sched_rt_entity **)ptr; ··· 8985 8985 if (!alloc_rt_sched_group(tg, parent)) 8986 8986 goto err; 8987 8987 8988 - scx_group_set_weight(tg, CGROUP_WEIGHT_DFL); 8988 + scx_tg_init(tg); 8989 8989 alloc_uclamp_sched_group(tg, parent); 8990 8990 8991 8991 return tg;
+11 -6
kernel/sched/ext.c
··· 4092 4092 DEFINE_STATIC_PERCPU_RWSEM(scx_cgroup_rwsem); 4093 4093 static bool scx_cgroup_enabled; 4094 4094 4095 + void scx_tg_init(struct task_group *tg) 4096 + { 4097 + tg->scx_weight = CGROUP_WEIGHT_DFL; 4098 + } 4099 + 4095 4100 int scx_tg_online(struct task_group *tg) 4096 4101 { 4097 4102 struct scx_sched *sch = scx_root; ··· 4246 4241 4247 4242 percpu_down_read(&scx_cgroup_rwsem); 4248 4243 4249 - if (scx_cgroup_enabled && tg->scx_weight != weight) { 4250 - if (SCX_HAS_OP(sch, cgroup_set_weight)) 4251 - SCX_CALL_OP(sch, SCX_KF_UNLOCKED, cgroup_set_weight, NULL, 4252 - tg_cgrp(tg), weight); 4253 - tg->scx_weight = weight; 4254 - } 4244 + if (scx_cgroup_enabled && SCX_HAS_OP(sch, cgroup_set_weight) && 4245 + tg->scx_weight != weight) 4246 + SCX_CALL_OP(sch, SCX_KF_UNLOCKED, cgroup_set_weight, NULL, 4247 + tg_cgrp(tg), weight); 4248 + 4249 + tg->scx_weight = weight; 4255 4250 4256 4251 percpu_up_read(&scx_cgroup_rwsem); 4257 4252 }
+2
kernel/sched/ext.h
··· 79 79 80 80 #ifdef CONFIG_CGROUP_SCHED 81 81 #ifdef CONFIG_EXT_GROUP_SCHED 82 + void scx_tg_init(struct task_group *tg); 82 83 int scx_tg_online(struct task_group *tg); 83 84 void scx_tg_offline(struct task_group *tg); 84 85 int scx_cgroup_can_attach(struct cgroup_taskset *tset); ··· 89 88 void scx_group_set_weight(struct task_group *tg, unsigned long cgrp_weight); 90 89 void scx_group_set_idle(struct task_group *tg, bool idle); 91 90 #else /* CONFIG_EXT_GROUP_SCHED */ 91 + static inline void scx_tg_init(struct task_group *tg) {} 92 92 static inline int scx_tg_online(struct task_group *tg) { return 0; } 93 93 static inline void scx_tg_offline(struct task_group *tg) {} 94 94 static inline int scx_cgroup_can_attach(struct cgroup_taskset *tset) { return 0; }
+7 -7
kernel/trace/trace_events_filter.c
··· 1436 1436 1437 1437 INIT_LIST_HEAD(&head->list); 1438 1438 1439 - item = kmalloc(sizeof(*item), GFP_KERNEL); 1440 - if (!item) 1441 - goto free_now; 1442 - 1443 - item->filter = filter; 1444 - list_add_tail(&item->list, &head->list); 1445 - 1446 1439 list_for_each_entry(file, &tr->events, list) { 1447 1440 if (file->system != dir) 1448 1441 continue; ··· 1446 1453 list_add_tail(&item->list, &head->list); 1447 1454 event_clear_filter(file); 1448 1455 } 1456 + 1457 + item = kmalloc(sizeof(*item), GFP_KERNEL); 1458 + if (!item) 1459 + goto free_now; 1460 + 1461 + item->filter = filter; 1462 + list_add_tail(&item->list, &head->list); 1449 1463 1450 1464 delay_free_filter(head); 1451 1465 return;
+6
kernel/trace/trace_functions_graph.c
··· 455 455 return 0; 456 456 } 457 457 458 + static struct tracer graph_trace; 459 + 458 460 static int ftrace_graph_trace_args(struct trace_array *tr, int set) 459 461 { 460 462 trace_func_graph_ent_t entry; 463 + 464 + /* Do nothing if the current tracer is not this tracer */ 465 + if (tr->current_trace != &graph_trace) 466 + return 0; 461 467 462 468 if (set) 463 469 entry = trace_graph_entry_args;
+2 -1
kernel/workqueue.c
··· 7767 7767 restrict_unbound_cpumask("workqueue.unbound_cpus", &wq_cmdline_cpumask); 7768 7768 7769 7769 cpumask_copy(wq_requested_unbound_cpumask, wq_unbound_cpumask); 7770 - 7770 + cpumask_andnot(wq_isolated_cpumask, cpu_possible_mask, 7771 + housekeeping_cpumask(HK_TYPE_DOMAIN)); 7771 7772 pwq_cache = KMEM_CACHE(pool_workqueue, SLAB_PANIC); 7772 7773 7773 7774 unbound_wq_update_pwq_attrs_buf = alloc_workqueue_attrs();
+1
lib/Kconfig
··· 716 716 717 717 config PLDMFW 718 718 bool 719 + select CRC32 719 720 default n 720 721 721 722 config ASN1_ENCODER
+7 -1
lib/alloc_tag.c
··· 10 10 #include <linux/seq_buf.h> 11 11 #include <linux/seq_file.h> 12 12 #include <linux/vmalloc.h> 13 + #include <linux/kmemleak.h> 13 14 14 15 #define ALLOCINFO_FILE_NAME "allocinfo" 15 16 #define MODULE_ALLOC_TAG_VMAP_SIZE (100000UL * sizeof(struct alloc_tag)) ··· 633 632 mod->name); 634 633 return -ENOMEM; 635 634 } 636 - } 637 635 636 + /* 637 + * Avoid a kmemleak false positive. The pointer to the counters is stored 638 + * in the alloc_tag section of the module and cannot be directly accessed. 639 + */ 640 + kmemleak_ignore_percpu(tag->counters); 641 + } 638 642 return 0; 639 643 } 640 644
+5 -1
lib/crypto/Makefile
··· 35 35 libcurve25519-generic-y := curve25519-fiat32.o 36 36 libcurve25519-generic-$(CONFIG_ARCH_SUPPORTS_INT128) := curve25519-hacl64.o 37 37 libcurve25519-generic-y += curve25519-generic.o 38 + # clang versions prior to 18 may blow out the stack with KASAN 39 + ifeq ($(call clang-min-version, 180000),) 40 + KASAN_SANITIZE_curve25519-hacl64.o := n 41 + endif 38 42 39 43 obj-$(CONFIG_CRYPTO_LIB_CURVE25519) += libcurve25519.o 40 44 libcurve25519-y += curve25519.o ··· 66 62 67 63 obj-$(CONFIG_MPILIB) += mpi/ 68 64 69 - obj-$(CONFIG_CRYPTO_SELFTESTS) += simd.o 65 + obj-$(CONFIG_CRYPTO_SELFTESTS_FULL) += simd.o 70 66 71 67 obj-$(CONFIG_CRYPTO_LIB_SM3) += libsm3.o 72 68 libsm3-y := sm3.o
+4 -4
lib/crypto/aescfb.c
··· 106 106 */ 107 107 108 108 static struct { 109 - u8 ptext[64]; 110 - u8 ctext[64]; 109 + u8 ptext[64] __nonstring; 110 + u8 ctext[64] __nonstring; 111 111 112 - u8 key[AES_MAX_KEY_SIZE]; 113 - u8 iv[AES_BLOCK_SIZE]; 112 + u8 key[AES_MAX_KEY_SIZE] __nonstring; 113 + u8 iv[AES_BLOCK_SIZE] __nonstring; 114 114 115 115 int klen; 116 116 int len;
+23 -23
lib/crypto/aesgcm.c
··· 205 205 * Test code below. Vectors taken from crypto/testmgr.h 206 206 */ 207 207 208 - static const u8 __initconst ctext0[16] = 208 + static const u8 __initconst ctext0[16] __nonstring = 209 209 "\x58\xe2\xfc\xce\xfa\x7e\x30\x61" 210 210 "\x36\x7f\x1d\x57\xa4\xe7\x45\x5a"; 211 211 212 212 static const u8 __initconst ptext1[16]; 213 213 214 - static const u8 __initconst ctext1[32] = 214 + static const u8 __initconst ctext1[32] __nonstring = 215 215 "\x03\x88\xda\xce\x60\xb6\xa3\x92" 216 216 "\xf3\x28\xc2\xb9\x71\xb2\xfe\x78" 217 217 "\xab\x6e\x47\xd4\x2c\xec\x13\xbd" 218 218 "\xf5\x3a\x67\xb2\x12\x57\xbd\xdf"; 219 219 220 - static const u8 __initconst ptext2[64] = 220 + static const u8 __initconst ptext2[64] __nonstring = 221 221 "\xd9\x31\x32\x25\xf8\x84\x06\xe5" 222 222 "\xa5\x59\x09\xc5\xaf\xf5\x26\x9a" 223 223 "\x86\xa7\xa9\x53\x15\x34\xf7\xda" ··· 227 227 "\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57" 228 228 "\xba\x63\x7b\x39\x1a\xaf\xd2\x55"; 229 229 230 - static const u8 __initconst ctext2[80] = 230 + static const u8 __initconst ctext2[80] __nonstring = 231 231 "\x42\x83\x1e\xc2\x21\x77\x74\x24" 232 232 "\x4b\x72\x21\xb7\x84\xd0\xd4\x9c" 233 233 "\xe3\xaa\x21\x2f\x2c\x02\xa4\xe0" ··· 239 239 "\x4d\x5c\x2a\xf3\x27\xcd\x64\xa6" 240 240 "\x2c\xf3\x5a\xbd\x2b\xa6\xfa\xb4"; 241 241 242 - static const u8 __initconst ptext3[60] = 242 + static const u8 __initconst ptext3[60] __nonstring = 243 243 "\xd9\x31\x32\x25\xf8\x84\x06\xe5" 244 244 "\xa5\x59\x09\xc5\xaf\xf5\x26\x9a" 245 245 "\x86\xa7\xa9\x53\x15\x34\xf7\xda" ··· 249 249 "\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57" 250 250 "\xba\x63\x7b\x39"; 251 251 252 - static const u8 __initconst ctext3[76] = 252 + static const u8 __initconst ctext3[76] __nonstring = 253 253 "\x42\x83\x1e\xc2\x21\x77\x74\x24" 254 254 "\x4b\x72\x21\xb7\x84\xd0\xd4\x9c" 255 255 "\xe3\xaa\x21\x2f\x2c\x02\xa4\xe0" ··· 261 261 "\x5b\xc9\x4f\xbc\x32\x21\xa5\xdb" 262 262 "\x94\xfa\xe9\x5a\xe7\x12\x1a\x47"; 263 263 264 - static const u8 __initconst ctext4[16] = 
264 + static const u8 __initconst ctext4[16] __nonstring = 265 265 "\xcd\x33\xb2\x8a\xc7\x73\xf7\x4b" 266 266 "\xa0\x0e\xd1\xf3\x12\x57\x24\x35"; 267 267 268 - static const u8 __initconst ctext5[32] = 268 + static const u8 __initconst ctext5[32] __nonstring = 269 269 "\x98\xe7\x24\x7c\x07\xf0\xfe\x41" 270 270 "\x1c\x26\x7e\x43\x84\xb0\xf6\x00" 271 271 "\x2f\xf5\x8d\x80\x03\x39\x27\xab" 272 272 "\x8e\xf4\xd4\x58\x75\x14\xf0\xfb"; 273 273 274 - static const u8 __initconst ptext6[64] = 274 + static const u8 __initconst ptext6[64] __nonstring = 275 275 "\xd9\x31\x32\x25\xf8\x84\x06\xe5" 276 276 "\xa5\x59\x09\xc5\xaf\xf5\x26\x9a" 277 277 "\x86\xa7\xa9\x53\x15\x34\xf7\xda" ··· 281 281 "\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57" 282 282 "\xba\x63\x7b\x39\x1a\xaf\xd2\x55"; 283 283 284 - static const u8 __initconst ctext6[80] = 284 + static const u8 __initconst ctext6[80] __nonstring = 285 285 "\x39\x80\xca\x0b\x3c\x00\xe8\x41" 286 286 "\xeb\x06\xfa\xc4\x87\x2a\x27\x57" 287 287 "\x85\x9e\x1c\xea\xa6\xef\xd9\x84" ··· 293 293 "\x99\x24\xa7\xc8\x58\x73\x36\xbf" 294 294 "\xb1\x18\x02\x4d\xb8\x67\x4a\x14"; 295 295 296 - static const u8 __initconst ctext7[16] = 296 + static const u8 __initconst ctext7[16] __nonstring = 297 297 "\x53\x0f\x8a\xfb\xc7\x45\x36\xb9" 298 298 "\xa9\x63\xb4\xf1\xc4\xcb\x73\x8b"; 299 299 300 - static const u8 __initconst ctext8[32] = 300 + static const u8 __initconst ctext8[32] __nonstring = 301 301 "\xce\xa7\x40\x3d\x4d\x60\x6b\x6e" 302 302 "\x07\x4e\xc5\xd3\xba\xf3\x9d\x18" 303 303 "\xd0\xd1\xc8\xa7\x99\x99\x6b\xf0" 304 304 "\x26\x5b\x98\xb5\xd4\x8a\xb9\x19"; 305 305 306 - static const u8 __initconst ptext9[64] = 306 + static const u8 __initconst ptext9[64] __nonstring = 307 307 "\xd9\x31\x32\x25\xf8\x84\x06\xe5" 308 308 "\xa5\x59\x09\xc5\xaf\xf5\x26\x9a" 309 309 "\x86\xa7\xa9\x53\x15\x34\xf7\xda" ··· 313 313 "\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57" 314 314 "\xba\x63\x7b\x39\x1a\xaf\xd2\x55"; 315 315 316 - static const u8 __initconst ctext9[80] = 316 + static const u8 
__initconst ctext9[80] __nonstring = 317 317 "\x52\x2d\xc1\xf0\x99\x56\x7d\x07" 318 318 "\xf4\x7f\x37\xa3\x2a\x84\x42\x7d" 319 319 "\x64\x3a\x8c\xdc\xbf\xe5\xc0\xc9" ··· 325 325 "\xb0\x94\xda\xc5\xd9\x34\x71\xbd" 326 326 "\xec\x1a\x50\x22\x70\xe3\xcc\x6c"; 327 327 328 - static const u8 __initconst ptext10[60] = 328 + static const u8 __initconst ptext10[60] __nonstring = 329 329 "\xd9\x31\x32\x25\xf8\x84\x06\xe5" 330 330 "\xa5\x59\x09\xc5\xaf\xf5\x26\x9a" 331 331 "\x86\xa7\xa9\x53\x15\x34\xf7\xda" ··· 335 335 "\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57" 336 336 "\xba\x63\x7b\x39"; 337 337 338 - static const u8 __initconst ctext10[76] = 338 + static const u8 __initconst ctext10[76] __nonstring = 339 339 "\x52\x2d\xc1\xf0\x99\x56\x7d\x07" 340 340 "\xf4\x7f\x37\xa3\x2a\x84\x42\x7d" 341 341 "\x64\x3a\x8c\xdc\xbf\xe5\xc0\xc9" ··· 347 347 "\x76\xfc\x6e\xce\x0f\x4e\x17\x68" 348 348 "\xcd\xdf\x88\x53\xbb\x2d\x55\x1b"; 349 349 350 - static const u8 __initconst ptext11[60] = 350 + static const u8 __initconst ptext11[60] __nonstring = 351 351 "\xd9\x31\x32\x25\xf8\x84\x06\xe5" 352 352 "\xa5\x59\x09\xc5\xaf\xf5\x26\x9a" 353 353 "\x86\xa7\xa9\x53\x15\x34\xf7\xda" ··· 357 357 "\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57" 358 358 "\xba\x63\x7b\x39"; 359 359 360 - static const u8 __initconst ctext11[76] = 360 + static const u8 __initconst ctext11[76] __nonstring = 361 361 "\x39\x80\xca\x0b\x3c\x00\xe8\x41" 362 362 "\xeb\x06\xfa\xc4\x87\x2a\x27\x57" 363 363 "\x85\x9e\x1c\xea\xa6\xef\xd9\x84" ··· 369 369 "\x25\x19\x49\x8e\x80\xf1\x47\x8f" 370 370 "\x37\xba\x55\xbd\x6d\x27\x61\x8c"; 371 371 372 - static const u8 __initconst ptext12[719] = 372 + static const u8 __initconst ptext12[719] __nonstring = 373 373 "\x42\xc1\xcc\x08\x48\x6f\x41\x3f" 374 374 "\x2f\x11\x66\x8b\x2a\x16\xf0\xe0" 375 375 "\x58\x83\xf0\xc3\x70\x14\xc0\x5b" ··· 461 461 "\x59\xfa\xfa\xaa\x44\x04\x01\xa7" 462 462 "\xa4\x78\xdb\x74\x3d\x8b\xb5"; 463 463 464 - static const u8 __initconst ctext12[735] = 464 + static const u8 __initconst 
ctext12[735] __nonstring = 465 465 "\x84\x0b\xdb\xd5\xb7\xa8\xfe\x20" 466 466 "\xbb\xb1\x12\x7f\x41\xea\xb3\xc0" 467 467 "\xa2\xb4\x37\x19\x11\x58\xb6\x0b" ··· 559 559 const u8 *ptext; 560 560 const u8 *ctext; 561 561 562 - u8 key[AES_MAX_KEY_SIZE]; 563 - u8 iv[GCM_AES_IV_SIZE]; 564 - u8 assoc[20]; 562 + u8 key[AES_MAX_KEY_SIZE] __nonstring; 563 + u8 iv[GCM_AES_IV_SIZE] __nonstring; 564 + u8 assoc[20] __nonstring; 565 565 566 566 int klen; 567 567 int clen;
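The hunk above annotates fixed-size ciphertext/plaintext arrays with `__nonstring`, which tells GCC the arrays are deliberately not NUL-terminated, so `-Wunterminated-string-initialization` stays quiet when a string literal exactly fills the array. A minimal user-space sketch of the same idea (the `nonstring` attribute is a real GCC extension; the array contents here are just the first test vector from the hunk):

```c
/* GCC's "nonstring" variable attribute marks byte arrays that are
 * intentionally not NUL-terminated. A 16-character literal exactly
 * filling a 16-byte array would otherwise trigger
 * -Wunterminated-string-initialization on recent GCC. */
#if defined(__GNUC__) && !defined(__clang__)
#define __nonstring __attribute__((nonstring))
#else
#define __nonstring /* attribute unavailable; sketch still compiles */
#endif

static const unsigned char ctext[16] __nonstring =
	"\xcd\x33\xb2\x8a\xc7\x73\xf7\x4b"
	"\xa0\x0e\xd1\xf3\x12\x57\x24\x35";

int ctext_len(void)
{
	/* sizeof is 16, not 17: no hidden NUL is stored */
	return (int)sizeof(ctext);
}
```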
+8 -1
lib/group_cpus.c
··· 352 352 int ret = -ENOMEM; 353 353 struct cpumask *masks = NULL; 354 354 355 + if (numgrps == 0) 356 + return NULL; 357 + 355 358 if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL)) 356 359 return NULL; 357 360 ··· 429 426 #else /* CONFIG_SMP */ 430 427 struct cpumask *group_cpus_evenly(unsigned int numgrps) 431 428 { 432 - struct cpumask *masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL); 429 + struct cpumask *masks; 433 430 431 + if (numgrps == 0) 432 + return NULL; 433 + 434 + masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL); 434 435 if (!masks) 435 436 return NULL; 436 437
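The `group_cpus_evenly()` fix above returns NULL early when `numgrps` is 0, since a zero-count allocation can succeed and hand the caller a useless non-NULL pointer. A hypothetical user-space analogue of that guard (names are illustrative, not the kernel API):

```c
#include <stdlib.h>

/* Sketch of the numgrps == 0 guard: calloc(0, size) is allowed to
 * return a non-NULL pointer, so callers that treat NULL as "nothing
 * to do" need the explicit early return. */
struct mask { unsigned long bits; };

struct mask *alloc_groups(unsigned int numgrps)
{
	if (numgrps == 0)	/* mirror the kernel's early return */
		return NULL;
	return calloc(numgrps, sizeof(struct mask));
}
```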
+3 -1
lib/maple_tree.c
··· 5527 5527 mas->store_type = mas_wr_store_type(&wr_mas); 5528 5528 request = mas_prealloc_calc(&wr_mas, entry); 5529 5529 if (!request) 5530 - return ret; 5530 + goto set_flag; 5531 5531 5532 + mas->mas_flags &= ~MA_STATE_PREALLOC; 5532 5533 mas_node_count_gfp(mas, request, gfp); 5533 5534 if (mas_is_err(mas)) { 5534 5535 mas_set_alloc_req(mas, 0); ··· 5539 5538 return ret; 5540 5539 } 5541 5540 5541 + set_flag: 5542 5542 mas->mas_flags |= MA_STATE_PREALLOC; 5543 5543 return ret; 5544 5544 }
+28 -20
lib/raid6/rvv.c
··· 26 26 static void raid6_rvv1_gen_syndrome_real(int disks, unsigned long bytes, void **ptrs) 27 27 { 28 28 u8 **dptr = (u8 **)ptrs; 29 - unsigned long d; 30 - int z, z0; 31 29 u8 *p, *q; 30 + unsigned long vl, d; 31 + int z, z0; 32 32 33 33 z0 = disks - 3; /* Highest data disk */ 34 34 p = dptr[z0 + 1]; /* XOR parity */ ··· 36 36 37 37 asm volatile (".option push\n" 38 38 ".option arch,+v\n" 39 - "vsetvli t0, x0, e8, m1, ta, ma\n" 39 + "vsetvli %0, x0, e8, m1, ta, ma\n" 40 40 ".option pop\n" 41 + : "=&r" (vl) 41 42 ); 42 43 43 44 /* v0:wp0, v1:wq0, v2:wd0/w20, v3:w10 */ ··· 100 99 { 101 100 u8 **dptr = (u8 **)ptrs; 102 101 u8 *p, *q; 103 - unsigned long d; 102 + unsigned long vl, d; 104 103 int z, z0; 105 104 106 105 z0 = stop; /* P/Q right side optimization */ ··· 109 108 110 109 asm volatile (".option push\n" 111 110 ".option arch,+v\n" 112 - "vsetvli t0, x0, e8, m1, ta, ma\n" 111 + "vsetvli %0, x0, e8, m1, ta, ma\n" 113 112 ".option pop\n" 113 + : "=&r" (vl) 114 114 ); 115 115 116 116 /* v0:wp0, v1:wq0, v2:wd0/w20, v3:w10 */ ··· 197 195 static void raid6_rvv2_gen_syndrome_real(int disks, unsigned long bytes, void **ptrs) 198 196 { 199 197 u8 **dptr = (u8 **)ptrs; 200 - unsigned long d; 201 - int z, z0; 202 198 u8 *p, *q; 199 + unsigned long vl, d; 200 + int z, z0; 203 201 204 202 z0 = disks - 3; /* Highest data disk */ 205 203 p = dptr[z0 + 1]; /* XOR parity */ ··· 207 205 208 206 asm volatile (".option push\n" 209 207 ".option arch,+v\n" 210 - "vsetvli t0, x0, e8, m1, ta, ma\n" 208 + "vsetvli %0, x0, e8, m1, ta, ma\n" 211 209 ".option pop\n" 210 + : "=&r" (vl) 212 211 ); 213 212 214 213 /* ··· 290 287 { 291 288 u8 **dptr = (u8 **)ptrs; 292 289 u8 *p, *q; 293 - unsigned long d; 290 + unsigned long vl, d; 294 291 int z, z0; 295 292 296 293 z0 = stop; /* P/Q right side optimization */ ··· 299 296 300 297 asm volatile (".option push\n" 301 298 ".option arch,+v\n" 302 - "vsetvli t0, x0, e8, m1, ta, ma\n" 299 + "vsetvli %0, x0, e8, m1, ta, ma\n" 303 300 ".option 
pop\n" 301 + : "=&r" (vl) 304 302 ); 305 303 306 304 /* ··· 417 413 static void raid6_rvv4_gen_syndrome_real(int disks, unsigned long bytes, void **ptrs) 418 414 { 419 415 u8 **dptr = (u8 **)ptrs; 420 - unsigned long d; 421 - int z, z0; 422 416 u8 *p, *q; 417 + unsigned long vl, d; 418 + int z, z0; 423 419 424 420 z0 = disks - 3; /* Highest data disk */ 425 421 p = dptr[z0 + 1]; /* XOR parity */ ··· 427 423 428 424 asm volatile (".option push\n" 429 425 ".option arch,+v\n" 430 - "vsetvli t0, x0, e8, m1, ta, ma\n" 426 + "vsetvli %0, x0, e8, m1, ta, ma\n" 431 427 ".option pop\n" 428 + : "=&r" (vl) 432 429 ); 433 430 434 431 /* ··· 544 539 { 545 540 u8 **dptr = (u8 **)ptrs; 546 541 u8 *p, *q; 547 - unsigned long d; 542 + unsigned long vl, d; 548 543 int z, z0; 549 544 550 545 z0 = stop; /* P/Q right side optimization */ ··· 553 548 554 549 asm volatile (".option push\n" 555 550 ".option arch,+v\n" 556 - "vsetvli t0, x0, e8, m1, ta, ma\n" 551 + "vsetvli %0, x0, e8, m1, ta, ma\n" 557 552 ".option pop\n" 553 + : "=&r" (vl) 558 554 ); 559 555 560 556 /* ··· 727 721 static void raid6_rvv8_gen_syndrome_real(int disks, unsigned long bytes, void **ptrs) 728 722 { 729 723 u8 **dptr = (u8 **)ptrs; 730 - unsigned long d; 731 - int z, z0; 732 724 u8 *p, *q; 725 + unsigned long vl, d; 726 + int z, z0; 733 727 734 728 z0 = disks - 3; /* Highest data disk */ 735 729 p = dptr[z0 + 1]; /* XOR parity */ ··· 737 731 738 732 asm volatile (".option push\n" 739 733 ".option arch,+v\n" 740 - "vsetvli t0, x0, e8, m1, ta, ma\n" 734 + "vsetvli %0, x0, e8, m1, ta, ma\n" 741 735 ".option pop\n" 736 + : "=&r" (vl) 742 737 ); 743 738 744 739 /* ··· 922 915 { 923 916 u8 **dptr = (u8 **)ptrs; 924 917 u8 *p, *q; 925 - unsigned long d; 918 + unsigned long vl, d; 926 919 int z, z0; 927 920 928 921 z0 = stop; /* P/Q right side optimization */ ··· 931 924 932 925 asm volatile (".option push\n" 933 926 ".option arch,+v\n" 934 - "vsetvli t0, x0, e8, m1, ta, ma\n" 927 + "vsetvli %0, x0, e8, m1, ta, ma\n" 
935 928 ".option pop\n" 929 + : "=&r" (vl) 936 930 ); 937 931 938 932 /*
+1
mm/damon/sysfs-schemes.c
··· 472 472 return -ENOMEM; 473 473 474 474 strscpy(path, buf, count + 1); 475 + kfree(filter->memcg_path); 475 476 filter->memcg_path = path; 476 477 return count; 477 478 }
+3 -37
mm/execmem.c
··· 254 254 return ptr; 255 255 } 256 256 257 - static bool execmem_cache_rox = false; 258 - 259 - void execmem_cache_make_ro(void) 260 - { 261 - struct maple_tree *free_areas = &execmem_cache.free_areas; 262 - struct maple_tree *busy_areas = &execmem_cache.busy_areas; 263 - MA_STATE(mas_free, free_areas, 0, ULONG_MAX); 264 - MA_STATE(mas_busy, busy_areas, 0, ULONG_MAX); 265 - struct mutex *mutex = &execmem_cache.mutex; 266 - void *area; 267 - 268 - execmem_cache_rox = true; 269 - 270 - mutex_lock(mutex); 271 - 272 - mas_for_each(&mas_free, area, ULONG_MAX) { 273 - unsigned long pages = mas_range_len(&mas_free) >> PAGE_SHIFT; 274 - set_memory_ro(mas_free.index, pages); 275 - } 276 - 277 - mas_for_each(&mas_busy, area, ULONG_MAX) { 278 - unsigned long pages = mas_range_len(&mas_busy) >> PAGE_SHIFT; 279 - set_memory_ro(mas_busy.index, pages); 280 - } 281 - 282 - mutex_unlock(mutex); 283 - } 284 - 285 257 static int execmem_cache_populate(struct execmem_range *range, size_t size) 286 258 { 287 259 unsigned long vm_flags = VM_ALLOW_HUGE_VMAP; ··· 274 302 /* fill memory with instructions that will trap */ 275 303 execmem_fill_trapping_insns(p, alloc_size, /* writable = */ true); 276 304 277 - if (execmem_cache_rox) { 278 - err = set_memory_rox((unsigned long)p, vm->nr_pages); 279 - if (err) 280 - goto err_free_mem; 281 - } else { 282 - err = set_memory_x((unsigned long)p, vm->nr_pages); 283 - if (err) 284 - goto err_free_mem; 285 - } 305 + err = set_memory_rox((unsigned long)p, vm->nr_pages); 306 + if (err) 307 + goto err_free_mem; 286 308 287 309 err = execmem_cache_add(p, alloc_size); 288 310 if (err)
+10 -4
mm/gup.c
··· 2303 2303 /* 2304 2304 * Returns the number of collected folios. Return value is always >= 0. 2305 2305 */ 2306 - static void collect_longterm_unpinnable_folios( 2306 + static unsigned long collect_longterm_unpinnable_folios( 2307 2307 struct list_head *movable_folio_list, 2308 2308 struct pages_or_folios *pofs) 2309 2309 { 2310 + unsigned long i, collected = 0; 2310 2311 struct folio *prev_folio = NULL; 2311 2312 bool drain_allow = true; 2312 - unsigned long i; 2313 2313 2314 2314 for (i = 0; i < pofs->nr_entries; i++) { 2315 2315 struct folio *folio = pofs_get_folio(pofs, i); ··· 2320 2320 2321 2321 if (folio_is_longterm_pinnable(folio)) 2322 2322 continue; 2323 + 2324 + collected++; 2323 2325 2324 2326 if (folio_is_device_coherent(folio)) 2325 2327 continue; ··· 2344 2342 NR_ISOLATED_ANON + folio_is_file_lru(folio), 2345 2343 folio_nr_pages(folio)); 2346 2344 } 2345 + 2346 + return collected; 2347 2347 } 2348 2348 2349 2349 /* ··· 2422 2418 check_and_migrate_movable_pages_or_folios(struct pages_or_folios *pofs) 2423 2419 { 2424 2420 LIST_HEAD(movable_folio_list); 2421 + unsigned long collected; 2425 2422 2426 - collect_longterm_unpinnable_folios(&movable_folio_list, pofs); 2427 - if (list_empty(&movable_folio_list)) 2423 + collected = collect_longterm_unpinnable_folios(&movable_folio_list, 2424 + pofs); 2425 + if (!collected) 2428 2426 return 0; 2429 2427 2430 2428 return migrate_longterm_unpinnable_folios(&movable_folio_list, pofs);
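The GUP change above makes `collect_longterm_unpinnable_folios()` return an explicit count, because device-coherent folios are handled in place and never land on the movable list; keying the caller's retry on `list_empty()` would therefore miss them. A simplified sketch of that counting pattern (struct and field names are illustrative):

```c
/* Sketch of the collected-count change: count every entry needing
 * action, even ones handled without being added to the list, so the
 * caller retries based on the count rather than list emptiness. */
struct item { int pinnable; int handled_in_place; };

unsigned long collect(const struct item *items, unsigned long n,
		      unsigned long *listed)
{
	unsigned long i, collected = 0;

	*listed = 0;
	for (i = 0; i < n; i++) {
		if (items[i].pinnable)
			continue;
		collected++;		/* counted even if never listed */
		if (items[i].handled_in_place)
			continue;
		(*listed)++;		/* would go on the migrate list */
	}
	return collected;
}
```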
+17 -37
mm/hugetlb.c
··· 2787 2787 /* 2788 2788 * alloc_and_dissolve_hugetlb_folio - Allocate a new folio and dissolve 2789 2789 * the old one 2790 - * @h: struct hstate old page belongs to 2791 2790 * @old_folio: Old folio to dissolve 2792 2791 * @list: List to isolate the page in case we need to 2793 2792 * Returns 0 on success, otherwise negated error. 2794 2793 */ 2795 - static int alloc_and_dissolve_hugetlb_folio(struct hstate *h, 2796 - struct folio *old_folio, struct list_head *list) 2794 + static int alloc_and_dissolve_hugetlb_folio(struct folio *old_folio, 2795 + struct list_head *list) 2797 2796 { 2798 - gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE; 2797 + gfp_t gfp_mask; 2798 + struct hstate *h; 2799 2799 int nid = folio_nid(old_folio); 2800 2800 struct folio *new_folio = NULL; 2801 2801 int ret = 0; 2802 2802 2803 2803 retry: 2804 + /* 2805 + * The old_folio might have been dissolved from under our feet, so make sure 2806 + * to carefully check the state under the lock. 2807 + */ 2804 2808 spin_lock_irq(&hugetlb_lock); 2805 2809 if (!folio_test_hugetlb(old_folio)) { 2806 2810 /* ··· 2833 2829 cond_resched(); 2834 2830 goto retry; 2835 2831 } else { 2832 + h = folio_hstate(old_folio); 2836 2833 if (!new_folio) { 2837 2834 spin_unlock_irq(&hugetlb_lock); 2835 + gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE; 2838 2836 new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid, 2839 2837 NULL, NULL); 2840 2838 if (!new_folio) ··· 2880 2874 2881 2875 int isolate_or_dissolve_huge_folio(struct folio *folio, struct list_head *list) 2882 2876 { 2883 - struct hstate *h; 2884 2877 int ret = -EBUSY; 2885 2878 2886 - /* 2887 - * The page might have been dissolved from under our feet, so make sure 2888 - * to carefully check the state under the lock. 2889 - * Return success when racing as if we dissolved the page ourselves. 
2890 - */ 2891 - spin_lock_irq(&hugetlb_lock); 2892 - if (folio_test_hugetlb(folio)) { 2893 - h = folio_hstate(folio); 2894 - } else { 2895 - spin_unlock_irq(&hugetlb_lock); 2879 + /* Not to disrupt normal path by vainly holding hugetlb_lock */ 2880 + if (!folio_test_hugetlb(folio)) 2896 2881 return 0; 2897 - } 2898 - spin_unlock_irq(&hugetlb_lock); 2899 2882 2900 2883 /* 2901 2884 * Fence off gigantic pages as there is a cyclic dependency between 2902 2885 * alloc_contig_range and them. Return -ENOMEM as this has the effect 2903 2886 * of bailing out right away without further retrying. 2904 2887 */ 2905 - if (hstate_is_gigantic(h)) 2888 + if (folio_order(folio) > MAX_PAGE_ORDER) 2906 2889 return -ENOMEM; 2907 2890 2908 2891 if (folio_ref_count(folio) && folio_isolate_hugetlb(folio, list)) 2909 2892 ret = 0; 2910 2893 else if (!folio_ref_count(folio)) 2911 - ret = alloc_and_dissolve_hugetlb_folio(h, folio, list); 2894 + ret = alloc_and_dissolve_hugetlb_folio(folio, list); 2912 2895 2913 2896 return ret; 2914 2897 } ··· 2911 2916 */ 2912 2917 int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn) 2913 2918 { 2914 - struct hstate *h; 2915 2919 struct folio *folio; 2916 2920 int ret = 0; 2917 2921 ··· 2919 2925 while (start_pfn < end_pfn) { 2920 2926 folio = pfn_folio(start_pfn); 2921 2927 2922 - /* 2923 - * The folio might have been dissolved from under our feet, so make sure 2924 - * to carefully check the state under the lock. 
2925 - */ 2926 - spin_lock_irq(&hugetlb_lock); 2927 - if (folio_test_hugetlb(folio)) { 2928 - h = folio_hstate(folio); 2929 - } else { 2930 - spin_unlock_irq(&hugetlb_lock); 2931 - start_pfn++; 2932 - continue; 2933 - } 2934 - spin_unlock_irq(&hugetlb_lock); 2935 - 2936 - if (!folio_ref_count(folio)) { 2937 - ret = alloc_and_dissolve_hugetlb_folio(h, folio, 2938 - &isolate_list); 2928 + /* Not to disrupt normal path by vainly holding hugetlb_lock */ 2929 + if (folio_test_hugetlb(folio) && !folio_ref_count(folio)) { 2930 + ret = alloc_and_dissolve_hugetlb_folio(folio, &isolate_list); 2939 2931 if (ret) 2940 2932 break; 2941 2933
+14
mm/kmemleak.c
··· 1247 1247 EXPORT_SYMBOL(kmemleak_transient_leak); 1248 1248 1249 1249 /** 1250 + * kmemleak_ignore_percpu - similar to kmemleak_ignore but taking a percpu 1251 + * address argument 1252 + * @ptr: percpu address of the object 1253 + */ 1254 + void __ref kmemleak_ignore_percpu(const void __percpu *ptr) 1255 + { 1256 + pr_debug("%s(0x%px)\n", __func__, ptr); 1257 + 1258 + if (kmemleak_enabled && ptr && !IS_ERR_PCPU(ptr)) 1259 + make_black_object((unsigned long)ptr, OBJECT_PERCPU); 1260 + } 1261 + EXPORT_SYMBOL_GPL(kmemleak_ignore_percpu); 1262 + 1263 + /** 1250 1264 * kmemleak_ignore - ignore an allocated object 1251 1265 * @ptr: pointer to beginning of the object 1252 1266 *
-20
mm/memory.c
··· 4315 4315 } 4316 4316 4317 4317 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 4318 - static inline int non_swapcache_batch(swp_entry_t entry, int max_nr) 4319 - { 4320 - struct swap_info_struct *si = swp_swap_info(entry); 4321 - pgoff_t offset = swp_offset(entry); 4322 - int i; 4323 - 4324 - /* 4325 - * While allocating a large folio and doing swap_read_folio, which is 4326 - * the case the being faulted pte doesn't have swapcache. We need to 4327 - * ensure all PTEs have no cache as well, otherwise, we might go to 4328 - * swap devices while the content is in swapcache. 4329 - */ 4330 - for (i = 0; i < max_nr; i++) { 4331 - if ((si->swap_map[offset + i] & SWAP_HAS_CACHE)) 4332 - return i; 4333 - } 4334 - 4335 - return i; 4336 - } 4337 - 4338 4318 /* 4339 4319 * Check if the PTEs within a range are contiguous swap entries 4340 4320 * and have consistent swapcache, zeromap.
+5 -1
mm/shmem.c
··· 2259 2259 folio = swap_cache_get_folio(swap, NULL, 0); 2260 2260 order = xa_get_order(&mapping->i_pages, index); 2261 2261 if (!folio) { 2262 + int nr_pages = 1 << order; 2262 2263 bool fallback_order0 = false; 2263 2264 2264 2265 /* Or update major stats only when swapin succeeds?? */ ··· 2273 2272 * If uffd is active for the vma, we need per-page fault 2274 2273 * fidelity to maintain the uffd semantics, then fallback 2275 2274 * to swapin order-0 folio, as well as for zswap case. 2275 + * Any existing sub folio in the swap cache also blocks 2276 + * mTHP swapin. 2276 2277 */ 2277 2278 if (order > 0 && ((vma && unlikely(userfaultfd_armed(vma))) || 2278 - !zswap_never_enabled())) 2279 + !zswap_never_enabled() || 2280 + non_swapcache_batch(swap, nr_pages) != nr_pages)) 2279 2281 fallback_order0 = true; 2280 2282 2281 2283 /* Skip swapcache for synchronous device. */
+23
mm/swap.h
··· 106 106 return find_next_bit(sis->zeromap, end, start) - start; 107 107 } 108 108 109 + static inline int non_swapcache_batch(swp_entry_t entry, int max_nr) 110 + { 111 + struct swap_info_struct *si = swp_swap_info(entry); 112 + pgoff_t offset = swp_offset(entry); 113 + int i; 114 + 115 + /* 116 + * While allocating a large folio and doing mTHP swapin, we need to 117 + * ensure all entries are not cached, otherwise, the mTHP folio will 118 + * be in conflict with the folio in swap cache. 119 + */ 120 + for (i = 0; i < max_nr; i++) { 121 + if ((si->swap_map[offset + i] & SWAP_HAS_CACHE)) 122 + return i; 123 + } 124 + 125 + return i; 126 + } 127 + 109 128 #else /* CONFIG_SWAP */ 110 129 struct swap_iocb; 111 130 static inline void swap_read_folio(struct folio *folio, struct swap_iocb **plug) ··· 218 199 return 0; 219 200 } 220 201 202 + static inline int non_swapcache_batch(swp_entry_t entry, int max_nr) 203 + { 204 + return 0; 205 + } 221 206 #endif /* CONFIG_SWAP */ 222 207 223 208 /**
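The `non_swapcache_batch()` helper moved into mm/swap.h above scans consecutive `swap_map` entries and returns how many lack the has-cache flag; the shmem hunk earlier uses a return shorter than `nr_pages` to block mTHP swapin. A user-space sketch of the scan (the flag value 0x40 matches the kernel's `SWAP_HAS_CACHE`, but the map here is just an array):

```c
/* Sketch of non_swapcache_batch(): return the length of the prefix
 * of entries whose map byte does not carry the has-cache flag. */
#define SWAP_HAS_CACHE 0x40

int non_swapcache_batch(const unsigned char *swap_map, int offset,
			int max_nr)
{
	int i;

	for (i = 0; i < max_nr; i++) {
		if (swap_map[offset + i] & SWAP_HAS_CACHE)
			return i;	/* first cached entry ends the batch */
	}
	return i;			/* whole range is cache-free */
}
```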
+31 -2
mm/userfaultfd.c
··· 1084 1084 pte_t orig_dst_pte, pte_t orig_src_pte, 1085 1085 pmd_t *dst_pmd, pmd_t dst_pmdval, 1086 1086 spinlock_t *dst_ptl, spinlock_t *src_ptl, 1087 - struct folio *src_folio) 1087 + struct folio *src_folio, 1088 + struct swap_info_struct *si, swp_entry_t entry) 1088 1089 { 1090 + /* 1091 + * Check if the folio still belongs to the target swap entry after 1092 + * acquiring the lock. Folio can be freed in the swap cache while 1093 + * not locked. 1094 + */ 1095 + if (src_folio && unlikely(!folio_test_swapcache(src_folio) || 1096 + entry.val != src_folio->swap.val)) 1097 + return -EAGAIN; 1098 + 1089 1099 double_pt_lock(dst_ptl, src_ptl); 1090 1100 1091 1101 if (!is_pte_pages_stable(dst_pte, src_pte, orig_dst_pte, orig_src_pte, ··· 1112 1102 if (src_folio) { 1113 1103 folio_move_anon_rmap(src_folio, dst_vma); 1114 1104 src_folio->index = linear_page_index(dst_vma, dst_addr); 1105 + } else { 1106 + /* 1107 + * Check if the swap entry is cached after acquiring the src_pte 1108 + * lock. Otherwise, we might miss a newly loaded swap cache folio. 1109 + * 1110 + * Check swap_map directly to minimize overhead, READ_ONCE is sufficient. 1111 + * We are trying to catch newly added swap cache, the only possible case is 1112 + * when a folio is swapped in and out again staying in swap cache, using the 1113 + * same entry before the PTE check above. The PTL is acquired and released 1114 + * twice, each time after updating the swap_map's flag. So holding 1115 + * the PTL here ensures we see the updated value. False positive is possible, 1116 + * e.g. SWP_SYNCHRONOUS_IO swapin may set the flag without touching the 1117 + * cache, or during the tiny synchronization window between swap cache and 1118 + * swap_map, but it will be gone very quickly, worst result is retry jitters. 
1119 + */ 1120 + if (READ_ONCE(si->swap_map[swp_offset(entry)]) & SWAP_HAS_CACHE) { 1121 + double_pt_unlock(dst_ptl, src_ptl); 1122 + return -EAGAIN; 1123 + } 1115 1124 } 1116 1125 1117 1126 orig_src_pte = ptep_get_and_clear(mm, src_addr, src_pte); ··· 1441 1412 } 1442 1413 err = move_swap_pte(mm, dst_vma, dst_addr, src_addr, dst_pte, src_pte, 1443 1414 orig_dst_pte, orig_src_pte, dst_pmd, dst_pmdval, 1444 - dst_ptl, src_ptl, src_folio); 1415 + dst_ptl, src_ptl, src_folio, si, entry); 1445 1416 } 1446 1417 1447 1418 out:
+5 -6
net/atm/clip.c
··· 193 193 194 194 pr_debug("\n"); 195 195 196 - if (!clip_devs) { 197 - atm_return(vcc, skb->truesize); 198 - kfree_skb(skb); 199 - return; 200 - } 201 - 202 196 if (!skb) { 203 197 pr_debug("removing VCC %p\n", clip_vcc); 204 198 if (clip_vcc->entry) ··· 202 208 return; 203 209 } 204 210 atm_return(vcc, skb->truesize); 211 + if (!clip_devs) { 212 + kfree_skb(skb); 213 + return; 214 + } 215 + 205 216 skb->dev = clip_vcc->entry ? clip_vcc->entry->neigh->dev : clip_devs; 206 217 /* clip_vcc->entry == NULL if we don't have an IP address yet */ 207 218 if (!skb->dev) {
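The clip.c fix above reorders the checks: the old code dereferenced `skb->truesize` (via `atm_return()`) before testing `!skb`, and the fix tests the pointer first, then accounts the buffer before the `clip_devs` drop path. A minimal sketch of the corrected ordering (names are hypothetical, not the ATM API):

```c
#include <stddef.h>

/* Sketch of the reordered guard: never touch a field before the
 * NULL check, and account the buffer before any path that drops it. */
struct buf { int truesize; };

int push(struct buf *b, int have_devs, int *returned)
{
	if (!b)				/* NULL check must come first */
		return -1;
	*returned = b->truesize;	/* account before possibly dropping */
	if (!have_devs)
		return 0;		/* dropped */
	return 1;			/* delivered */
}
```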
+1
net/atm/common.c
··· 635 635 636 636 skb->dev = NULL; /* for paths shared with net_device interfaces */ 637 637 if (!copy_from_iter_full(skb_put(skb, size), size, &m->msg_iter)) { 638 + atm_return_tx(vcc, skb); 638 639 kfree_skb(skb); 639 640 error = -EFAULT; 640 641 goto out;
+10 -2
net/atm/lec.c
··· 124 124 125 125 /* Device structures */ 126 126 static struct net_device *dev_lec[MAX_LEC_ITF]; 127 + static DEFINE_MUTEX(lec_mutex); 127 128 128 129 #if IS_ENABLED(CONFIG_BRIDGE) 129 130 static void lec_handle_bridge(struct sk_buff *skb, struct net_device *dev) ··· 686 685 int bytes_left; 687 686 struct atmlec_ioc ioc_data; 688 687 688 + lockdep_assert_held(&lec_mutex); 689 689 /* Lecd must be up in this case */ 690 690 bytes_left = copy_from_user(&ioc_data, arg, sizeof(struct atmlec_ioc)); 691 691 if (bytes_left != 0) ··· 712 710 713 711 static int lec_mcast_attach(struct atm_vcc *vcc, int arg) 714 712 { 713 + lockdep_assert_held(&lec_mutex); 715 714 if (arg < 0 || arg >= MAX_LEC_ITF) 716 715 return -EINVAL; 717 716 arg = array_index_nospec(arg, MAX_LEC_ITF); ··· 728 725 int i; 729 726 struct lec_priv *priv; 730 727 728 + lockdep_assert_held(&lec_mutex); 731 729 if (arg < 0) 732 730 arg = 0; 733 731 if (arg >= MAX_LEC_ITF) ··· 746 742 snprintf(dev_lec[i]->name, IFNAMSIZ, "lec%d", i); 747 743 if (register_netdev(dev_lec[i])) { 748 744 free_netdev(dev_lec[i]); 745 + dev_lec[i] = NULL; 749 746 return -EINVAL; 750 747 } 751 748 ··· 909 904 v = (dev && netdev_priv(dev)) ? 
910 905 lec_priv_walk(state, l, netdev_priv(dev)) : NULL; 911 906 if (!v && dev) { 912 - dev_put(dev); 913 907 /* Partial state reset for the next time we get called */ 914 908 dev = NULL; 915 909 } ··· 932 928 { 933 929 struct lec_state *state = seq->private; 934 930 931 + mutex_lock(&lec_mutex); 935 932 state->itf = 0; 936 933 state->dev = NULL; 937 934 state->locked = NULL; ··· 950 945 if (state->dev) { 951 946 spin_unlock_irqrestore(&state->locked->lec_arp_lock, 952 947 state->flags); 953 - dev_put(state->dev); 948 + state->dev = NULL; 954 949 } 950 + mutex_unlock(&lec_mutex); 955 951 } 956 952 957 953 static void *lec_seq_next(struct seq_file *seq, void *v, loff_t *pos) ··· 1009 1003 return -ENOIOCTLCMD; 1010 1004 } 1011 1005 1006 + mutex_lock(&lec_mutex); 1012 1007 switch (cmd) { 1013 1008 case ATMLEC_CTRL: 1014 1009 err = lecd_attach(vcc, (int)arg); ··· 1024 1017 break; 1025 1018 } 1026 1019 1020 + mutex_unlock(&lec_mutex); 1027 1021 return err; 1028 1022 } 1029 1023
+1 -1
net/atm/raw.c
··· 36 36 37 37 pr_debug("(%d) %d -= %d\n", 38 38 vcc->vci, sk_wmem_alloc_get(sk), ATM_SKB(skb)->acct_truesize); 39 - WARN_ON(refcount_sub_and_test(ATM_SKB(skb)->acct_truesize, &sk->sk_wmem_alloc)); 39 + atm_return_tx(vcc, skb); 40 40 dev_kfree_skb_any(skb); 41 41 sk->sk_write_space(sk); 42 42 }
+1 -2
net/atm/resources.c
··· 146 146 */ 147 147 mutex_lock(&atm_dev_mutex); 148 148 list_del(&dev->dev_list); 149 - mutex_unlock(&atm_dev_mutex); 150 - 151 149 atm_dev_release_vccs(dev); 152 150 atm_unregister_sysfs(dev); 153 151 atm_proc_dev_deregister(dev); 152 + mutex_unlock(&atm_dev_mutex); 154 153 155 154 atm_dev_put(dev); 156 155 }
+30 -4
net/bluetooth/hci_core.c
··· 64 64 65 65 /* Get HCI device by index. 66 66 * Device is held on return. */ 67 - struct hci_dev *hci_dev_get(int index) 67 + static struct hci_dev *__hci_dev_get(int index, int *srcu_index) 68 68 { 69 69 struct hci_dev *hdev = NULL, *d; 70 70 ··· 77 77 list_for_each_entry(d, &hci_dev_list, list) { 78 78 if (d->id == index) { 79 79 hdev = hci_dev_hold(d); 80 + if (srcu_index) 81 + *srcu_index = srcu_read_lock(&d->srcu); 80 82 break; 81 83 } 82 84 } 83 85 read_unlock(&hci_dev_list_lock); 84 86 return hdev; 87 + } 88 + 89 + struct hci_dev *hci_dev_get(int index) 90 + { 91 + return __hci_dev_get(index, NULL); 92 + } 93 + 94 + static struct hci_dev *hci_dev_get_srcu(int index, int *srcu_index) 95 + { 96 + return __hci_dev_get(index, srcu_index); 97 + } 98 + 99 + static void hci_dev_put_srcu(struct hci_dev *hdev, int srcu_index) 100 + { 101 + srcu_read_unlock(&hdev->srcu, srcu_index); 102 + hci_dev_put(hdev); 85 103 } 86 104 87 105 /* ---- Inquiry support ---- */ ··· 586 568 int hci_dev_reset(__u16 dev) 587 569 { 588 570 struct hci_dev *hdev; 589 - int err; 571 + int err, srcu_index; 590 572 591 - hdev = hci_dev_get(dev); 573 + hdev = hci_dev_get_srcu(dev, &srcu_index); 592 574 if (!hdev) 593 575 return -ENODEV; 594 576 ··· 610 592 err = hci_dev_do_reset(hdev); 611 593 612 594 done: 613 - hci_dev_put(hdev); 595 + hci_dev_put_srcu(hdev, srcu_index); 614 596 return err; 615 597 } 616 598 ··· 2451 2433 if (!hdev) 2452 2434 return NULL; 2453 2435 2436 + if (init_srcu_struct(&hdev->srcu)) { 2437 + kfree(hdev); 2438 + return NULL; 2439 + } 2440 + 2454 2441 hdev->pkt_type = (HCI_DM1 | HCI_DH1 | HCI_HV1); 2455 2442 hdev->esco_type = (ESCO_HV1); 2456 2443 hdev->link_mode = (HCI_LM_ACCEPT); ··· 2700 2677 write_lock(&hci_dev_list_lock); 2701 2678 list_del(&hdev->list); 2702 2679 write_unlock(&hci_dev_list_lock); 2680 + 2681 + synchronize_srcu(&hdev->srcu); 2682 + cleanup_srcu_struct(&hdev->srcu); 2703 2683 2704 2684 disable_work_sync(&hdev->rx_work); 2705 2685 
disable_work_sync(&hdev->cmd_work);
+8 -1
net/bluetooth/l2cap_core.c
··· 3415 3415 struct l2cap_conf_rfc rfc = { .mode = L2CAP_MODE_BASIC }; 3416 3416 struct l2cap_conf_efs efs; 3417 3417 u8 remote_efs = 0; 3418 - u16 mtu = L2CAP_DEFAULT_MTU; 3418 + u16 mtu = 0; 3419 3419 u16 result = L2CAP_CONF_SUCCESS; 3420 3420 u16 size; 3421 3421 ··· 3519 3519 if (result == L2CAP_CONF_SUCCESS) { 3520 3520 /* Configure output options and let the other side know 3521 3521 * which ones we don't like. */ 3522 + 3523 + /* If MTU is not provided in configure request, use the most recently 3524 + * explicitly or implicitly accepted value for the other direction, 3525 + * or the default value. 3526 + */ 3527 + if (mtu == 0) 3528 + mtu = chan->imtu ? chan->imtu : L2CAP_DEFAULT_MTU; 3522 3529 3523 3530 if (mtu < L2CAP_DEFAULT_MIN_MTU) 3524 3531 result = L2CAP_CONF_UNACCEPT;
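The L2CAP fix above initializes `mtu` to 0 instead of the default, so an absent MTU option in the configure request is distinguishable from an explicitly negotiated default; only then does the code fall back to the previously accepted `imtu` or the spec default. A sketch of that fallback (672 matches the kernel's `L2CAP_DEFAULT_MTU`; `imtu` stands in for the channel field):

```c
/* Sketch of the MTU fallback: 0 means "option absent", resolved to
 * the most recently accepted value, else the default. */
#define L2CAP_DEFAULT_MTU 672

unsigned short resolve_mtu(unsigned short mtu_opt, unsigned short imtu)
{
	if (mtu_opt != 0)		/* peer sent an explicit MTU option */
		return mtu_opt;
	return imtu ? imtu : L2CAP_DEFAULT_MTU;
}
```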
+9
net/bridge/br_multicast.c
··· 2015 2015 2016 2016 void br_multicast_port_ctx_deinit(struct net_bridge_mcast_port *pmctx) 2017 2017 { 2018 + struct net_bridge *br = pmctx->port->br; 2019 + bool del = false; 2020 + 2018 2021 #if IS_ENABLED(CONFIG_IPV6) 2019 2022 timer_delete_sync(&pmctx->ip6_mc_router_timer); 2020 2023 #endif 2021 2024 timer_delete_sync(&pmctx->ip4_mc_router_timer); 2025 + 2026 + spin_lock_bh(&br->multicast_lock); 2027 + del |= br_ip6_multicast_rport_del(pmctx); 2028 + del |= br_ip4_multicast_rport_del(pmctx); 2029 + br_multicast_rport_del_notify(pmctx, del); 2030 + spin_unlock_bh(&br->multicast_lock); 2022 2031 } 2023 2032 2024 2033 int br_multicast_add_port(struct net_bridge_port *port)
+1 -1
net/core/netpoll.c
··· 432 432 udph->dest = htons(np->remote_port); 433 433 udph->len = htons(udp_len); 434 434 435 + udph->check = 0; 435 436 if (np->ipv6) { 436 437 udph->check = csum_ipv6_magic(&np->local_ip.in6, 437 438 &np->remote_ip.in6, ··· 461 460 skb_reset_mac_header(skb); 462 461 skb->protocol = eth->h_proto = htons(ETH_P_IPV6); 463 462 } else { 464 - udph->check = 0; 465 463 udph->check = csum_tcpudp_magic(np->local_ip.ip, 466 464 np->remote_ip.ip, 467 465 udp_len, IPPROTO_UDP,
+3 -2
net/core/selftests.c
··· 160 160 skb->csum = 0; 161 161 skb->ip_summed = CHECKSUM_PARTIAL; 162 162 if (attr->tcp) { 163 - thdr->check = ~tcp_v4_check(skb->len, ihdr->saddr, 164 - ihdr->daddr, 0); 163 + int l4len = skb->len - skb_transport_offset(skb); 164 + 165 + thdr->check = ~tcp_v4_check(l4len, ihdr->saddr, ihdr->daddr, 0); 165 166 skb->csum_start = skb_transport_header(skb) - skb->head; 166 167 skb->csum_offset = offsetof(struct tcphdr, check); 167 168 } else {
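The selftest fix above passes the L4 segment length to `tcp_v4_check()` rather than the whole buffer length: the TCP pseudo-header checksum covers only the bytes from the transport header onward. A trivial sketch of the corrected arithmetic (frame sizes are illustrative):

```c
/* Sketch of the length fix: the pseudo-header length field is the
 * TCP segment length, i.e. total length minus the transport offset. */
int l4_len(int skb_len, int transport_offset)
{
	return skb_len - transport_offset;
}
```

For a minimal frame of 14 bytes Ethernet + 20 bytes IPv4 + 20 bytes TCP, the transport header starts at offset 34 of a 54-byte buffer, giving a 20-byte segment.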
-3
net/core/skbuff.c
··· 6261 6261 if (!pskb_may_pull(skb, write_len)) 6262 6262 return -ENOMEM; 6263 6263 6264 - if (!skb_frags_readable(skb)) 6265 - return -EFAULT; 6266 - 6267 6264 if (!skb_cloned(skb) || skb_clone_writable(skb, write_len)) 6268 6265 return 0; 6269 6266
+3
net/ipv4/tcp_fastopen.c
··· 3 3 #include <linux/tcp.h> 4 4 #include <linux/rcupdate.h> 5 5 #include <net/tcp.h> 6 + #include <net/busy_poll.h> 6 7 7 8 void tcp_fastopen_init_key_once(struct net *net) 8 9 { ··· 279 278 req->timeout, false); 280 279 281 280 refcount_set(&req->rsk_refcnt, 2); 281 + 282 + sk_mark_napi_id_set(child, skb); 282 283 283 284 /* Now finish processing the fastopen child socket. */ 284 285 tcp_init_transfer(child, BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB, skb);
+24 -11
net/ipv4/tcp_input.c
··· 2479 2479 { 2480 2480 const struct sock *sk = (const struct sock *)tp; 2481 2481 2482 - if (tp->retrans_stamp && 2483 - tcp_tsopt_ecr_before(tp, tp->retrans_stamp)) 2484 - return true; /* got echoed TS before first retransmission */ 2482 + /* Received an echoed timestamp before the first retransmission? */ 2483 + if (tp->retrans_stamp) 2484 + return tcp_tsopt_ecr_before(tp, tp->retrans_stamp); 2485 2485 2486 - /* Check if nothing was retransmitted (retrans_stamp==0), which may 2487 - * happen in fast recovery due to TSQ. But we ignore zero retrans_stamp 2488 - * in TCP_SYN_SENT, since when we set FLAG_SYN_ACKED we also clear 2489 - * retrans_stamp even if we had retransmitted the SYN. 2486 + /* We set tp->retrans_stamp upon the first retransmission of a loss 2487 + * recovery episode, so normally if tp->retrans_stamp is 0 then no 2488 + * retransmission has happened yet (likely due to TSQ, which can cause 2489 + * fast retransmits to be delayed). So if snd_una advanced while 2490 + * (tp->retrans_stamp is 0 then apparently a packet was merely delayed, 2491 + * not lost. But there are exceptions where we retransmit but then 2492 + * clear tp->retrans_stamp, so we check for those exceptions. 2490 2493 */ 2491 - if (!tp->retrans_stamp && /* no record of a retransmit/SYN? */ 2492 - sk->sk_state != TCP_SYN_SENT) /* not the FLAG_SYN_ACKED case? */ 2493 - return true; /* nothing was retransmitted */ 2494 2494 2495 - return false; 2495 + /* (1) For non-SACK connections, tcp_is_non_sack_preventing_reopen() 2496 + * clears tp->retrans_stamp when snd_una == high_seq. 2497 + */ 2498 + if (!tcp_is_sack(tp) && !before(tp->snd_una, tp->high_seq)) 2499 + return false; 2500 + 2501 + /* (2) In TCP_SYN_SENT tcp_clean_rtx_queue() clears tp->retrans_stamp 2502 + * when setting FLAG_SYN_ACKED is set, even if the SYN was 2503 + * retransmitted. 
2504 + */ 2505 + if (sk->sk_state == TCP_SYN_SENT) 2506 + return false; 2507 + 2508 + return true; /* tp->retrans_stamp is zero; no retransmit yet */ 2496 2509 } 2497 2510 2498 2511 /* Undo procedures. */
+8
net/ipv6/calipso.c
··· 1207 1207 struct ipv6_opt_hdr *old, *new; 1208 1208 struct sock *sk = sk_to_full_sk(req_to_sk(req)); 1209 1209 1210 + /* sk is NULL for SYN+ACK w/ SYN Cookie */ 1211 + if (!sk) 1212 + return -ENOMEM; 1213 + 1210 1214 if (req_inet->ipv6_opt && req_inet->ipv6_opt->hopopt) 1211 1215 old = req_inet->ipv6_opt->hopopt; 1212 1216 else ··· 1250 1246 struct ipv6_opt_hdr *new; 1251 1247 struct ipv6_txoptions *txopts; 1252 1248 struct sock *sk = sk_to_full_sk(req_to_sk(req)); 1249 + 1250 + /* sk is NULL for SYN+ACK w/ SYN Cookie */ 1251 + if (!sk) 1252 + return; 1253 1253 1254 1254 if (!req_inet->ipv6_opt || !req_inet->ipv6_opt->hopopt) 1255 1255 return;
+4 -1
net/mac80211/debug.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 3 * Portions 4 - * Copyright (C) 2022 - 2024 Intel Corporation 4 + * Copyright (C) 2022 - 2025 Intel Corporation 5 5 */ 6 6 #ifndef __MAC80211_DEBUG_H 7 7 #define __MAC80211_DEBUG_H 8 + #include <linux/once_lite.h> 8 9 #include <net/cfg80211.h> 9 10 10 11 #ifdef CONFIG_MAC80211_OCB_DEBUG ··· 153 152 else \ 154 153 _sdata_err((link)->sdata, fmt, ##__VA_ARGS__); \ 155 154 } while (0) 155 + #define link_err_once(link, fmt, ...) \ 156 + DO_ONCE_LITE(link_err, link, fmt, ##__VA_ARGS__) 156 157 #define link_id_info(sdata, link_id, fmt, ...) \ 157 158 do { \ 158 159 if (ieee80211_vif_is_mld(&sdata->vif)) \
+3 -3
net/mac80211/link.c
··· 93 93 if (link_id < 0) 94 94 link_id = 0; 95 95 96 - rcu_assign_pointer(sdata->vif.link_conf[link_id], link_conf); 97 - rcu_assign_pointer(sdata->link[link_id], link); 98 - 99 96 if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN) { 100 97 struct ieee80211_sub_if_data *ap_bss; 101 98 struct ieee80211_bss_conf *ap_bss_conf; ··· 142 145 143 146 ieee80211_link_debugfs_add(link); 144 147 } 148 + 149 + rcu_assign_pointer(sdata->vif.link_conf[link_id], link_conf); 150 + rcu_assign_pointer(sdata->link[link_id], link); 145 151 } 146 152 147 153 void ieee80211_link_stop(struct ieee80211_link_data *link)
+4
net/mac80211/rx.c
··· 4432 4432 if (!multicast && 4433 4433 !ether_addr_equal(sdata->dev->dev_addr, hdr->addr1)) 4434 4434 return false; 4435 + /* reject invalid/our STA address */ 4436 + if (!is_valid_ether_addr(hdr->addr2) || 4437 + ether_addr_equal(sdata->dev->dev_addr, hdr->addr2)) 4438 + return false; 4435 4439 if (!rx->sta) { 4436 4440 int rate_idx; 4437 4441 if (status->encoding != RX_ENC_LEGACY)
+21 -8
net/mac80211/tx.c
··· 5 5 * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz> 6 6 * Copyright 2007 Johannes Berg <johannes@sipsolutions.net> 7 7 * Copyright 2013-2014 Intel Mobile Communications GmbH 8 - * Copyright (C) 2018-2024 Intel Corporation 8 + * Copyright (C) 2018-2025 Intel Corporation 9 9 * 10 10 * Transmit and frame generation functions. 11 11 */ ··· 5016 5016 } 5017 5017 } 5018 5018 5019 - static u8 __ieee80211_beacon_update_cntdwn(struct beacon_data *beacon) 5019 + static u8 __ieee80211_beacon_update_cntdwn(struct ieee80211_link_data *link, 5020 + struct beacon_data *beacon) 5020 5021 { 5021 - beacon->cntdwn_current_counter--; 5022 + if (beacon->cntdwn_current_counter == 1) { 5023 + /* 5024 + * Channel switch handling is done by a worker thread while 5025 + * beacons get pulled from hardware timers. It's therefore 5026 + * possible that software threads are slow enough to not be 5027 + * able to complete CSA handling in a single beacon interval, 5028 + * in which case we get here. There isn't much to do about 5029 + * it, other than letting the user know that the AP isn't 5030 + * behaving correctly. 
5031 + */ 5032 + link_err_once(link, 5033 + "beacon TX faster than countdown (channel/color switch) completion\n"); 5034 + return 0; 5035 + } 5022 5036 5023 - /* the counter should never reach 0 */ 5024 - WARN_ON_ONCE(!beacon->cntdwn_current_counter); 5037 + beacon->cntdwn_current_counter--; 5025 5038 5026 5039 return beacon->cntdwn_current_counter; 5027 5040 } ··· 5065 5052 if (!beacon) 5066 5053 goto unlock; 5067 5054 5068 - count = __ieee80211_beacon_update_cntdwn(beacon); 5055 + count = __ieee80211_beacon_update_cntdwn(link, beacon); 5069 5056 5070 5057 unlock: 5071 5058 rcu_read_unlock(); ··· 5463 5450 5464 5451 if (beacon->cntdwn_counter_offsets[0]) { 5465 5452 if (!is_template) 5466 - __ieee80211_beacon_update_cntdwn(beacon); 5453 + __ieee80211_beacon_update_cntdwn(link, beacon); 5467 5454 5468 5455 ieee80211_set_beacon_cntdwn(sdata, beacon, link); 5469 5456 } ··· 5495 5482 * for now we leave it consistent with overall 5496 5483 * mac80211's behavior. 5497 5484 */ 5498 - __ieee80211_beacon_update_cntdwn(beacon); 5485 + __ieee80211_beacon_update_cntdwn(link, beacon); 5499 5486 5500 5487 ieee80211_set_beacon_cntdwn(sdata, beacon, link); 5501 5488 }
+1 -1
net/mac80211/util.c
··· 3884 3884 { 3885 3885 u64 tsf = drv_get_tsf(local, sdata); 3886 3886 u64 dtim_count = 0; 3887 - u16 beacon_int = sdata->vif.bss_conf.beacon_int * 1024; 3887 + u32 beacon_int = sdata->vif.bss_conf.beacon_int * 1024; 3888 3888 u8 dtim_period = sdata->vif.bss_conf.dtim_period; 3889 3889 struct ps_data *ps; 3890 3890 u8 bcns_from_dtim;
+2 -2
net/mpls/af_mpls.c
··· 81 81 82 82 if (index < net->mpls.platform_labels) { 83 83 struct mpls_route __rcu **platform_label = 84 - rcu_dereference(net->mpls.platform_label); 85 - rt = rcu_dereference(platform_label[index]); 84 + rcu_dereference_rtnl(net->mpls.platform_label); 85 + rt = rcu_dereference_rtnl(platform_label[index]); 86 86 } 87 87 return rt; 88 88 }
+4 -4
net/nfc/nci/uart.c
··· 119 119 120 120 memcpy(nu, nci_uart_drivers[driver], sizeof(struct nci_uart)); 121 121 nu->tty = tty; 122 - tty->disc_data = nu; 123 122 skb_queue_head_init(&nu->tx_q); 124 123 INIT_WORK(&nu->write_work, nci_uart_write_work); 125 124 spin_lock_init(&nu->rx_lock); 126 125 127 126 ret = nu->ops.open(nu); 128 127 if (ret) { 129 - tty->disc_data = NULL; 130 128 kfree(nu); 129 + return ret; 131 130 } else if (!try_module_get(nu->owner)) { 132 131 nu->ops.close(nu); 133 - tty->disc_data = NULL; 134 132 kfree(nu); 135 133 return -ENOENT; 136 134 } 137 - return ret; 135 + tty->disc_data = nu; 136 + 137 + return 0; 138 138 } 139 139 140 140 /* ------ LDISC part ------ */
+10 -13
net/openvswitch/actions.c
··· 39 39 #include "flow_netlink.h" 40 40 #include "openvswitch_trace.h" 41 41 42 - DEFINE_PER_CPU(struct ovs_pcpu_storage, ovs_pcpu_storage) = { 43 - .bh_lock = INIT_LOCAL_LOCK(bh_lock), 44 - }; 42 + struct ovs_pcpu_storage __percpu *ovs_pcpu_storage; 45 43 46 44 /* Make a clone of the 'key', using the pre-allocated percpu 'flow_keys' 47 45 * space. Return NULL if out of key spaces. 48 46 */ 49 47 static struct sw_flow_key *clone_key(const struct sw_flow_key *key_) 50 48 { 51 - struct ovs_pcpu_storage *ovs_pcpu = this_cpu_ptr(&ovs_pcpu_storage); 49 + struct ovs_pcpu_storage *ovs_pcpu = this_cpu_ptr(ovs_pcpu_storage); 52 50 struct action_flow_keys *keys = &ovs_pcpu->flow_keys; 53 51 int level = ovs_pcpu->exec_level; 54 52 struct sw_flow_key *key = NULL; ··· 92 94 const struct nlattr *actions, 93 95 const int actions_len) 94 96 { 95 - struct action_fifo *fifo = this_cpu_ptr(&ovs_pcpu_storage.action_fifos); 97 + struct action_fifo *fifo = this_cpu_ptr(&ovs_pcpu_storage->action_fifos); 96 98 struct deferred_action *da; 97 99 98 100 da = action_fifo_put(fifo); ··· 753 755 static int ovs_vport_output(struct net *net, struct sock *sk, 754 756 struct sk_buff *skb) 755 757 { 756 - struct ovs_frag_data *data = this_cpu_ptr(&ovs_pcpu_storage.frag_data); 758 + struct ovs_frag_data *data = this_cpu_ptr(&ovs_pcpu_storage->frag_data); 757 759 struct vport *vport = data->vport; 758 760 759 761 if (skb_cow_head(skb, data->l2_len) < 0) { ··· 805 807 unsigned int hlen = skb_network_offset(skb); 806 808 struct ovs_frag_data *data; 807 809 808 - data = this_cpu_ptr(&ovs_pcpu_storage.frag_data); 810 + data = this_cpu_ptr(&ovs_pcpu_storage->frag_data); 809 811 data->dst = skb->_skb_refdst; 810 812 data->vport = vport; 811 813 data->cb = *OVS_CB(skb); ··· 1564 1566 clone = clone_flow_key ? 
clone_key(key) : key; 1565 1567 if (clone) { 1566 1568 int err = 0; 1567 - 1568 1569 if (actions) { /* Sample action */ 1569 1570 if (clone_flow_key) 1570 - __this_cpu_inc(ovs_pcpu_storage.exec_level); 1571 + __this_cpu_inc(ovs_pcpu_storage->exec_level); 1571 1572 1572 1573 err = do_execute_actions(dp, skb, clone, 1573 1574 actions, len); 1574 1575 1575 1576 if (clone_flow_key) 1576 - __this_cpu_dec(ovs_pcpu_storage.exec_level); 1577 + __this_cpu_dec(ovs_pcpu_storage->exec_level); 1577 1578 } else { /* Recirc action */ 1578 1579 clone->recirc_id = recirc_id; 1579 1580 ovs_dp_process_packet(skb, clone); ··· 1608 1611 1609 1612 static void process_deferred_actions(struct datapath *dp) 1610 1613 { 1611 - struct action_fifo *fifo = this_cpu_ptr(&ovs_pcpu_storage.action_fifos); 1614 + struct action_fifo *fifo = this_cpu_ptr(&ovs_pcpu_storage->action_fifos); 1612 1615 1613 1616 /* Do not touch the FIFO in case there is no deferred actions. */ 1614 1617 if (action_fifo_is_empty(fifo)) ··· 1639 1642 { 1640 1643 int err, level; 1641 1644 1642 - level = __this_cpu_inc_return(ovs_pcpu_storage.exec_level); 1645 + level = __this_cpu_inc_return(ovs_pcpu_storage->exec_level); 1643 1646 if (unlikely(level > OVS_RECURSION_LIMIT)) { 1644 1647 net_crit_ratelimited("ovs: recursion limit reached on datapath %s, probable configuration error\n", 1645 1648 ovs_dp_name(dp)); ··· 1656 1659 process_deferred_actions(dp); 1657 1660 1658 1661 out: 1659 - __this_cpu_dec(ovs_pcpu_storage.exec_level); 1662 + __this_cpu_dec(ovs_pcpu_storage->exec_level); 1660 1663 return err; 1661 1664 }
+35 -7
net/openvswitch/datapath.c
··· 244 244 /* Must be called with rcu_read_lock. */ 245 245 void ovs_dp_process_packet(struct sk_buff *skb, struct sw_flow_key *key) 246 246 { 247 - struct ovs_pcpu_storage *ovs_pcpu = this_cpu_ptr(&ovs_pcpu_storage); 247 + struct ovs_pcpu_storage *ovs_pcpu = this_cpu_ptr(ovs_pcpu_storage); 248 248 const struct vport *p = OVS_CB(skb)->input_vport; 249 249 struct datapath *dp = p->dp; 250 250 struct sw_flow *flow; ··· 299 299 * avoided. 300 300 */ 301 301 if (IS_ENABLED(CONFIG_PREEMPT_RT) && ovs_pcpu->owner != current) { 302 - local_lock_nested_bh(&ovs_pcpu_storage.bh_lock); 302 + local_lock_nested_bh(&ovs_pcpu_storage->bh_lock); 303 303 ovs_pcpu->owner = current; 304 304 ovs_pcpu_locked = true; 305 305 } ··· 310 310 ovs_dp_name(dp), error); 311 311 if (ovs_pcpu_locked) { 312 312 ovs_pcpu->owner = NULL; 313 - local_unlock_nested_bh(&ovs_pcpu_storage.bh_lock); 313 + local_unlock_nested_bh(&ovs_pcpu_storage->bh_lock); 314 314 } 315 315 316 316 stats_counter = &stats->n_hit; ··· 689 689 sf_acts = rcu_dereference(flow->sf_acts); 690 690 691 691 local_bh_disable(); 692 - local_lock_nested_bh(&ovs_pcpu_storage.bh_lock); 692 + local_lock_nested_bh(&ovs_pcpu_storage->bh_lock); 693 693 if (IS_ENABLED(CONFIG_PREEMPT_RT)) 694 - this_cpu_write(ovs_pcpu_storage.owner, current); 694 + this_cpu_write(ovs_pcpu_storage->owner, current); 695 695 err = ovs_execute_actions(dp, packet, sf_acts, &flow->key); 696 696 if (IS_ENABLED(CONFIG_PREEMPT_RT)) 697 - this_cpu_write(ovs_pcpu_storage.owner, NULL); 698 - local_unlock_nested_bh(&ovs_pcpu_storage.bh_lock); 697 + this_cpu_write(ovs_pcpu_storage->owner, NULL); 698 + local_unlock_nested_bh(&ovs_pcpu_storage->bh_lock); 699 699 local_bh_enable(); 700 700 rcu_read_unlock(); 701 701 ··· 2744 2744 .n_reasons = ARRAY_SIZE(ovs_drop_reasons), 2745 2745 }; 2746 2746 2747 + static int __init ovs_alloc_percpu_storage(void) 2748 + { 2749 + unsigned int cpu; 2750 + 2751 + ovs_pcpu_storage = alloc_percpu(*ovs_pcpu_storage); 2752 + if 
(!ovs_pcpu_storage) 2753 + return -ENOMEM; 2754 + 2755 + for_each_possible_cpu(cpu) { 2756 + struct ovs_pcpu_storage *ovs_pcpu; 2757 + 2758 + ovs_pcpu = per_cpu_ptr(ovs_pcpu_storage, cpu); 2759 + local_lock_init(&ovs_pcpu->bh_lock); 2760 + } 2761 + return 0; 2762 + } 2763 + 2764 + static void ovs_free_percpu_storage(void) 2765 + { 2766 + free_percpu(ovs_pcpu_storage); 2767 + } 2768 + 2747 2769 static int __init dp_init(void) 2748 2770 { 2749 2771 int err; ··· 2774 2752 sizeof_field(struct sk_buff, cb)); 2775 2753 2776 2754 pr_info("Open vSwitch switching datapath\n"); 2755 + 2756 + err = ovs_alloc_percpu_storage(); 2757 + if (err) 2758 + goto error; 2777 2759 2778 2760 err = ovs_internal_dev_rtnl_link_register(); 2779 2761 if (err) ··· 2825 2799 error_unreg_rtnl_link: 2826 2800 ovs_internal_dev_rtnl_link_unregister(); 2827 2801 error: 2802 + ovs_free_percpu_storage(); 2828 2803 return err; 2829 2804 } 2830 2805 ··· 2840 2813 ovs_vport_exit(); 2841 2814 ovs_flow_exit(); 2842 2815 ovs_internal_dev_rtnl_link_unregister(); 2816 + ovs_free_percpu_storage(); 2843 2817 } 2844 2818 2845 2819 module_init(dp_init);
+2 -1
net/openvswitch/datapath.h
··· 220 220 struct task_struct *owner; 221 221 local_lock_t bh_lock; 222 222 }; 223 - DECLARE_PER_CPU(struct ovs_pcpu_storage, ovs_pcpu_storage); 223 + 224 + extern struct ovs_pcpu_storage __percpu *ovs_pcpu_storage; 224 225 225 226 /** 226 227 * enum ovs_pkt_hash_types - hash info to include with a packet
+4 -2
net/sched/sch_taprio.c
··· 1328 1328 1329 1329 stab = rtnl_dereference(q->root->stab); 1330 1330 1331 - oper = rtnl_dereference(q->oper_sched); 1331 + rcu_read_lock(); 1332 + oper = rcu_dereference(q->oper_sched); 1332 1333 if (oper) 1333 1334 taprio_update_queue_max_sdu(q, oper, stab); 1334 1335 1335 - admin = rtnl_dereference(q->admin_sched); 1336 + admin = rcu_dereference(q->admin_sched); 1336 1337 if (admin) 1337 1338 taprio_update_queue_max_sdu(q, admin, stab); 1339 + rcu_read_unlock(); 1338 1340 1339 1341 break; 1340 1342 }
+3 -14
net/sunrpc/svc.c
··· 638 638 static bool 639 639 svc_init_buffer(struct svc_rqst *rqstp, const struct svc_serv *serv, int node) 640 640 { 641 - unsigned long ret; 642 - 643 641 rqstp->rq_maxpages = svc_serv_maxpages(serv); 644 642 645 643 /* rq_pages' last entry is NULL for historical reasons. */ ··· 647 649 if (!rqstp->rq_pages) 648 650 return false; 649 651 650 - ret = alloc_pages_bulk_node(GFP_KERNEL, node, rqstp->rq_maxpages, 651 - rqstp->rq_pages); 652 - return ret == rqstp->rq_maxpages; 652 + return true; 653 653 } 654 654 655 655 /* ··· 1371 1375 case SVC_OK: 1372 1376 break; 1373 1377 case SVC_GARBAGE: 1374 - goto err_garbage_args; 1378 + rqstp->rq_auth_stat = rpc_autherr_badcred; 1379 + goto err_bad_auth; 1375 1380 case SVC_SYSERR: 1376 1381 goto err_system_err; 1377 1382 case SVC_DENIED: ··· 1511 1514 if (serv->sv_stats) 1512 1515 serv->sv_stats->rpcbadfmt++; 1513 1516 *rqstp->rq_accept_statp = rpc_proc_unavail; 1514 - goto sendit; 1515 - 1516 - err_garbage_args: 1517 - svc_printk(rqstp, "failed to decode RPC header\n"); 1518 - 1519 - if (serv->sv_stats) 1520 - serv->sv_stats->rpcbadfmt++; 1521 - *rqstp->rq_accept_statp = rpc_garbage_args; 1522 1517 goto sendit; 1523 1518 1524 1519 err_system_err:
+2 -2
net/tipc/udp_media.c
··· 489 489 490 490 rtnl_lock(); 491 491 b = tipc_bearer_find(net, bname); 492 - if (!b) { 492 + if (!b || b->bcast_addr.media_id != TIPC_MEDIA_TYPE_UDP) { 493 493 rtnl_unlock(); 494 494 return -EINVAL; 495 495 } ··· 500 500 501 501 rtnl_lock(); 502 502 b = rtnl_dereference(tn->bearer_list[bid]); 503 - if (!b) { 503 + if (!b || b->bcast_addr.media_id != TIPC_MEDIA_TYPE_UDP) { 504 504 rtnl_unlock(); 505 505 return -EINVAL; 506 506 }
+23 -8
net/unix/af_unix.c
··· 660 660 #endif 661 661 } 662 662 663 + static unsigned int unix_skb_len(const struct sk_buff *skb) 664 + { 665 + return skb->len - UNIXCB(skb).consumed; 666 + } 667 + 663 668 static void unix_release_sock(struct sock *sk, int embrion) 664 669 { 665 670 struct unix_sock *u = unix_sk(sk); ··· 699 694 700 695 if (skpair != NULL) { 701 696 if (sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET) { 697 + struct sk_buff *skb = skb_peek(&sk->sk_receive_queue); 698 + 699 + #if IS_ENABLED(CONFIG_AF_UNIX_OOB) 700 + if (skb && !unix_skb_len(skb)) 701 + skb = skb_peek_next(skb, &sk->sk_receive_queue); 702 + #endif 702 703 unix_state_lock(skpair); 703 704 /* No more writes */ 704 705 WRITE_ONCE(skpair->sk_shutdown, SHUTDOWN_MASK); 705 - if (!skb_queue_empty_lockless(&sk->sk_receive_queue) || embrion) 706 + if (skb || embrion) 706 707 WRITE_ONCE(skpair->sk_err, ECONNRESET); 707 708 unix_state_unlock(skpair); 708 709 skpair->sk_state_change(skpair); ··· 2672 2661 return timeo; 2673 2662 } 2674 2663 2675 - static unsigned int unix_skb_len(const struct sk_buff *skb) 2676 - { 2677 - return skb->len - UNIXCB(skb).consumed; 2678 - } 2679 - 2680 2664 struct unix_stream_read_state { 2681 2665 int (*recv_actor)(struct sk_buff *, int, int, 2682 2666 struct unix_stream_read_state *); ··· 2686 2680 #if IS_ENABLED(CONFIG_AF_UNIX_OOB) 2687 2681 static int unix_stream_recv_urg(struct unix_stream_read_state *state) 2688 2682 { 2683 + struct sk_buff *oob_skb, *read_skb = NULL; 2689 2684 struct socket *sock = state->socket; 2690 2685 struct sock *sk = sock->sk; 2691 2686 struct unix_sock *u = unix_sk(sk); 2692 2687 int chunk = 1; 2693 - struct sk_buff *oob_skb; 2694 2688 2695 2689 mutex_lock(&u->iolock); 2696 2690 unix_state_lock(sk); ··· 2705 2699 2706 2700 oob_skb = u->oob_skb; 2707 2701 2708 - if (!(state->flags & MSG_PEEK)) 2702 + if (!(state->flags & MSG_PEEK)) { 2709 2703 WRITE_ONCE(u->oob_skb, NULL); 2704 + 2705 + if (oob_skb->prev != (struct sk_buff *)&sk->sk_receive_queue && 
2706 + !unix_skb_len(oob_skb->prev)) { 2707 + read_skb = oob_skb->prev; 2708 + __skb_unlink(read_skb, &sk->sk_receive_queue); 2709 + } 2710 + } 2710 2711 2711 2712 spin_unlock(&sk->sk_receive_queue.lock); 2712 2713 unix_state_unlock(sk); ··· 2724 2711 UNIXCB(oob_skb).consumed += 1; 2725 2712 2726 2713 mutex_unlock(&u->iolock); 2714 + 2715 + consume_skb(read_skb); 2727 2716 2728 2717 if (chunk < 0) 2729 2718 return -EFAULT;
+1
rust/bindings/bindings_helper.h
··· 39 39 #include <linux/blk_types.h> 40 40 #include <linux/blkdev.h> 41 41 #include <linux/clk.h> 42 + #include <linux/completion.h> 42 43 #include <linux/configfs.h> 43 44 #include <linux/cpu.h> 44 45 #include <linux/cpufreq.h>
+8
rust/helpers/completion.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/completion.h> 4 + 5 + void rust_helper_init_completion(struct completion *x) 6 + { 7 + init_completion(x); 8 + }
+1
rust/helpers/helpers.c
··· 13 13 #include "build_assert.c" 14 14 #include "build_bug.c" 15 15 #include "clk.c" 16 + #include "completion.c" 16 17 #include "cpu.c" 17 18 #include "cpufreq.c" 18 19 #include "cpumask.c"
+43 -17
rust/kernel/devres.rs
··· 12 12 error::{Error, Result}, 13 13 ffi::c_void, 14 14 prelude::*, 15 - revocable::Revocable, 16 - sync::Arc, 15 + revocable::{Revocable, RevocableGuard}, 16 + sync::{rcu, Arc, Completion}, 17 17 types::ARef, 18 18 }; 19 - 20 - use core::ops::Deref; 21 19 22 20 #[pin_data] 23 21 struct DevresInner<T> { ··· 23 25 callback: unsafe extern "C" fn(*mut c_void), 24 26 #[pin] 25 27 data: Revocable<T>, 28 + #[pin] 29 + revoke: Completion, 26 30 } 27 31 28 32 /// This abstraction is meant to be used by subsystems to containerize [`Device`] bound resources to 29 33 /// manage their lifetime. 30 34 /// 31 35 /// [`Device`] bound resources should be freed when either the resource goes out of scope or the 32 - /// [`Device`] is unbound respectively, depending on what happens first. 36 + /// [`Device`] is unbound respectively, depending on what happens first. In any case, it is always 37 + /// guaranteed that revoking the device resource is completed before the corresponding [`Device`] 38 + /// is unbound. 33 39 /// 34 40 /// To achieve that [`Devres`] registers a devres callback on creation, which is called once the 35 41 /// [`Device`] is unbound, revoking access to the encapsulated resource (see also [`Revocable`]). ··· 104 102 dev: dev.into(), 105 103 callback: Self::devres_callback, 106 104 data <- Revocable::new(data), 105 + revoke <- Completion::new(), 107 106 }), 108 107 flags, 109 108 )?; ··· 133 130 self as _ 134 131 } 135 132 136 - fn remove_action(this: &Arc<Self>) { 133 + fn remove_action(this: &Arc<Self>) -> bool { 137 134 // SAFETY: 138 135 // - `self.inner.dev` is a valid `Device`, 139 136 // - the `action` and `data` pointers are the exact same ones as given to devm_add_action() 140 137 // previously, 141 138 // - `self` is always valid, even if the action has been released already. 
142 - let ret = unsafe { 139 + let success = unsafe { 143 140 bindings::devm_remove_action_nowarn( 144 141 this.dev.as_raw(), 145 142 Some(this.callback), 146 143 this.as_ptr() as _, 147 144 ) 148 - }; 145 + } == 0; 149 146 150 - if ret == 0 { 147 + if success { 151 148 // SAFETY: We leaked an `Arc` reference to devm_add_action() in `DevresInner::new`; if 152 149 // devm_remove_action_nowarn() was successful we can (and have to) claim back ownership 153 150 // of this reference. 154 151 let _ = unsafe { Arc::from_raw(this.as_ptr()) }; 155 152 } 153 + 154 + success 156 155 } 157 156 158 157 #[allow(clippy::missing_safety_doc)] ··· 166 161 // `DevresInner::new`. 167 162 let inner = unsafe { Arc::from_raw(ptr) }; 168 163 169 - inner.data.revoke(); 164 + if !inner.data.revoke() { 165 + // If `revoke()` returns false, it means that `Devres::drop` already started revoking 166 + // `inner.data` for us. Hence we have to wait until `Devres::drop()` signals that it 167 + // completed revoking `inner.data`. 168 + inner.revoke.wait_for_completion(); 169 + } 170 170 } 171 171 } 172 172 ··· 228 218 // SAFETY: `dev` being the same device as the device this `Devres` has been created for 229 219 // proves that `self.0.data` hasn't been revoked and is guaranteed to not be revoked as 230 220 // long as `dev` lives; `dev` lives at least as long as `self`. 231 - Ok(unsafe { self.deref().access() }) 221 + Ok(unsafe { self.0.data.access() }) 232 222 } 233 - } 234 223 235 - impl<T> Deref for Devres<T> { 236 - type Target = Revocable<T>; 224 + /// [`Devres`] accessor for [`Revocable::try_access`]. 225 + pub fn try_access(&self) -> Option<RevocableGuard<'_, T>> { 226 + self.0.data.try_access() 227 + } 237 228 238 - fn deref(&self) -> &Self::Target { 239 - &self.0.data 229 + /// [`Devres`] accessor for [`Revocable::try_access_with`]. 
230 + pub fn try_access_with<R, F: FnOnce(&T) -> R>(&self, f: F) -> Option<R> { 231 + self.0.data.try_access_with(f) 232 + } 233 + 234 + /// [`Devres`] accessor for [`Revocable::try_access_with_guard`]. 235 + pub fn try_access_with_guard<'a>(&'a self, guard: &'a rcu::Guard) -> Option<&'a T> { 236 + self.0.data.try_access_with_guard(guard) 240 237 } 241 238 } 242 239 243 240 impl<T> Drop for Devres<T> { 244 241 fn drop(&mut self) { 245 - DevresInner::remove_action(&self.0); 242 + // SAFETY: When `drop` runs, it is guaranteed that nobody is accessing the revocable data 243 + // anymore, hence it is safe not to wait for the grace period to finish. 244 + if unsafe { self.0.data.revoke_nosync() } { 245 + // We revoked `self.0.data` before the devres action did, hence try to remove it. 246 + if !DevresInner::remove_action(&self.0) { 247 + // We could not remove the devres action, which means that it now runs concurrently, 248 + // hence signal that `self.0.data` has been revoked successfully. 249 + self.0.revoke.complete_all(); 250 + } 251 + } 246 252 } 247 253 }
+14 -4
rust/kernel/revocable.rs
··· 154 154 /// # Safety 155 155 /// 156 156 /// Callers must ensure that there are no more concurrent users of the revocable object. 157 - unsafe fn revoke_internal<const SYNC: bool>(&self) { 158 - if self.is_available.swap(false, Ordering::Relaxed) { 157 + unsafe fn revoke_internal<const SYNC: bool>(&self) -> bool { 158 + let revoke = self.is_available.swap(false, Ordering::Relaxed); 159 + 160 + if revoke { 159 161 if SYNC { 160 162 // SAFETY: Just an FFI call, there are no further requirements. 161 163 unsafe { bindings::synchronize_rcu() }; ··· 167 165 // `compare_exchange` above that takes `is_available` from `true` to `false`. 168 166 unsafe { drop_in_place(self.data.get()) }; 169 167 } 168 + 169 + revoke 170 170 } 171 171 172 172 /// Revokes access to and drops the wrapped object. ··· 176 172 /// Access to the object is revoked immediately to new callers of [`Revocable::try_access`], 177 173 /// expecting that there are no concurrent users of the object. 178 174 /// 175 + /// Returns `true` if `&self` has been revoked with this call, `false` if it was revoked 176 + /// already. 177 + /// 179 178 /// # Safety 180 179 /// 181 180 /// Callers must ensure that there are no more concurrent users of the revocable object. 182 - pub unsafe fn revoke_nosync(&self) { 181 + pub unsafe fn revoke_nosync(&self) -> bool { 183 182 // SAFETY: By the safety requirement of this function, the caller ensures that nobody is 184 183 // accessing the data anymore and hence we don't have to wait for the grace period to 185 184 // finish. ··· 196 189 /// If there are concurrent users of the object (i.e., ones that called 197 190 /// [`Revocable::try_access`] beforehand and still haven't dropped the returned guard), this 198 191 /// function waits for the concurrent access to complete before dropping the wrapped object. 199 - pub fn revoke(&self) { 192 + /// 193 + /// Returns `true` if `&self` has been revoked with this call, `false` if it was revoked 194 + /// already. 
195 + pub fn revoke(&self) -> bool { 200 196 // SAFETY: By passing `true` we ask `revoke_internal` to wait for the grace period to 201 197 // finish. 202 198 unsafe { self.revoke_internal::<true>() }
+2
rust/kernel/sync.rs
··· 10 10 use pin_init; 11 11 12 12 mod arc; 13 + pub mod completion; 13 14 mod condvar; 14 15 pub mod lock; 15 16 mod locked_by; ··· 18 17 pub mod rcu; 19 18 20 19 pub use arc::{Arc, ArcBorrow, UniqueArc}; 20 + pub use completion::Completion; 21 21 pub use condvar::{new_condvar, CondVar, CondVarTimeoutResult}; 22 22 pub use lock::global::{global_lock, GlobalGuard, GlobalLock, GlobalLockBackend, GlobalLockedBy}; 23 23 pub use lock::mutex::{new_mutex, Mutex, MutexGuard};
+112
rust/kernel/sync/completion.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + //! Completion support. 4 + //! 5 + //! Reference: <https://docs.kernel.org/scheduler/completion.html> 6 + //! 7 + //! C header: [`include/linux/completion.h`](srctree/include/linux/completion.h) 8 + 9 + use crate::{bindings, prelude::*, types::Opaque}; 10 + 11 + /// Synchronization primitive to signal when a certain task has been completed. 12 + /// 13 + /// The [`Completion`] synchronization primitive signals when a certain task has been completed by 14 + /// waking up other tasks that have been queued up to wait for the [`Completion`] to be completed. 15 + /// 16 + /// # Examples 17 + /// 18 + /// ``` 19 + /// use kernel::sync::{Arc, Completion}; 20 + /// use kernel::workqueue::{self, impl_has_work, new_work, Work, WorkItem}; 21 + /// 22 + /// #[pin_data] 23 + /// struct MyTask { 24 + /// #[pin] 25 + /// work: Work<MyTask>, 26 + /// #[pin] 27 + /// done: Completion, 28 + /// } 29 + /// 30 + /// impl_has_work! { 31 + /// impl HasWork<Self> for MyTask { self.work } 32 + /// } 33 + /// 34 + /// impl MyTask { 35 + /// fn new() -> Result<Arc<Self>> { 36 + /// let this = Arc::pin_init(pin_init!(MyTask { 37 + /// work <- new_work!("MyTask::work"), 38 + /// done <- Completion::new(), 39 + /// }), GFP_KERNEL)?; 40 + /// 41 + /// let _ = workqueue::system().enqueue(this.clone()); 42 + /// 43 + /// Ok(this) 44 + /// } 45 + /// 46 + /// fn wait_for_completion(&self) { 47 + /// self.done.wait_for_completion(); 48 + /// 49 + /// pr_info!("Completion: task complete\n"); 50 + /// } 51 + /// } 52 + /// 53 + /// impl WorkItem for MyTask { 54 + /// type Pointer = Arc<MyTask>; 55 + /// 56 + /// fn run(this: Arc<MyTask>) { 57 + /// // process this task 58 + /// this.done.complete_all(); 59 + /// } 60 + /// } 61 + /// 62 + /// let task = MyTask::new()?; 63 + /// task.wait_for_completion(); 64 + /// # Ok::<(), Error>(()) 65 + /// ``` 66 + #[pin_data] 67 + pub struct Completion { 68 + #[pin] 69 + inner: Opaque<bindings::completion>, 
70 + } 71 + 72 + // SAFETY: `Completion` is safe to be sent to any task. 73 + unsafe impl Send for Completion {} 74 + 75 + // SAFETY: `Completion` is safe to be accessed concurrently. 76 + unsafe impl Sync for Completion {} 77 + 78 + impl Completion { 79 + /// Create an initializer for a new [`Completion`]. 80 + pub fn new() -> impl PinInit<Self> { 81 + pin_init!(Self { 82 + inner <- Opaque::ffi_init(|slot: *mut bindings::completion| { 83 + // SAFETY: `slot` is a valid pointer to an uninitialized `struct completion`. 84 + unsafe { bindings::init_completion(slot) }; 85 + }), 86 + }) 87 + } 88 + 89 + fn as_raw(&self) -> *mut bindings::completion { 90 + self.inner.get() 91 + } 92 + 93 + /// Signal all tasks waiting on this completion. 94 + /// 95 + /// This method wakes up all tasks waiting on this completion; after this operation the 96 + /// completion is permanently done, i.e. signals all current and future waiters. 97 + pub fn complete_all(&self) { 98 + // SAFETY: `self.as_raw()` is a pointer to a valid `struct completion`. 99 + unsafe { bindings::complete_all(self.as_raw()) }; 100 + } 101 + 102 + /// Wait for completion of a task. 103 + /// 104 + /// This method waits for the completion of a task; it is not interruptible and there is no 105 + /// timeout. 106 + /// 107 + /// See also [`Completion::complete_all`]. 108 + pub fn wait_for_completion(&self) { 109 + // SAFETY: `self.as_raw()` is a pointer to a valid `struct completion`. 110 + unsafe { bindings::wait_for_completion(self.as_raw()) }; 111 + } 112 + }
+1 -1
scripts/gdb/linux/vfs.py
··· 22 22 if parent == d or parent == 0: 23 23 return "" 24 24 p = dentry_name(d['d_parent']) + "/" 25 - return p + d['d_iname'].string() 25 + return p + d['d_shortname']['string'].string() 26 26 27 27 class DentryName(gdb.Function): 28 28 """Return string of the full path of a dentry.
+11 -5
security/selinux/ss/services.c
··· 1909 1909 goto out_unlock; 1910 1910 } 1911 1911 /* Obtain the sid for the context. */ 1912 - rc = sidtab_context_to_sid(sidtab, &newcontext, out_sid); 1913 - if (rc == -ESTALE) { 1914 - rcu_read_unlock(); 1915 - context_destroy(&newcontext); 1916 - goto retry; 1912 + if (context_equal(scontext, &newcontext)) 1913 + *out_sid = ssid; 1914 + else if (context_equal(tcontext, &newcontext)) 1915 + *out_sid = tsid; 1916 + else { 1917 + rc = sidtab_context_to_sid(sidtab, &newcontext, out_sid); 1918 + if (rc == -ESTALE) { 1919 + rcu_read_unlock(); 1920 + context_destroy(&newcontext); 1921 + goto retry; 1922 + } 1917 1923 } 1918 1924 out_unlock: 1919 1925 rcu_read_unlock();
+1 -1
security/selinux/xfrm.c
··· 94 94 95 95 ctx->ctx_doi = XFRM_SC_DOI_LSM; 96 96 ctx->ctx_alg = XFRM_SC_ALG_SELINUX; 97 - ctx->ctx_len = str_len; 97 + ctx->ctx_len = str_len + 1; 98 98 memcpy(ctx->ctx_str, &uctx[1], str_len); 99 99 ctx->ctx_str[str_len] = '\0'; 100 100 rc = security_context_to_sid(ctx->ctx_str, str_len,
+7
sound/isa/sb/sb16_main.c
··· 703 703 unsigned char nval, oval; 704 704 int change; 705 705 706 + if (chip->mode & (SB_MODE_PLAYBACK | SB_MODE_CAPTURE)) 707 + return -EBUSY; 708 + 706 709 nval = ucontrol->value.enumerated.item[0]; 707 710 if (nval > 2) 708 711 return -EINVAL; ··· 714 711 change = nval != oval; 715 712 snd_sb16_set_dma_mode(chip, nval); 716 713 spin_unlock_irqrestore(&chip->reg_lock, flags); 714 + if (change) { 715 + snd_dma_disable(chip->dma8); 716 + snd_dma_disable(chip->dma16); 717 + } 717 718 return change; 718 719 } 719 720
+2 -2
sound/pci/ctxfi/xfi.c
··· 98 98 if (err < 0) 99 99 goto error; 100 100 101 - strcpy(card->driver, "SB-XFi"); 102 - strcpy(card->shortname, "Creative X-Fi"); 101 + strscpy(card->driver, "SB-XFi"); 102 + strscpy(card->shortname, "Creative X-Fi"); 103 103 snprintf(card->longname, sizeof(card->longname), "%s %s %s", 104 104 card->shortname, atc->chip_name, atc->model_name); 105 105
+2
sound/pci/hda/hda_intel.c
··· 2283 2283 SND_PCI_QUIRK(0x1734, 0x1232, "KONTRON SinglePC", 0), 2284 2284 /* Dell ALC3271 */ 2285 2285 SND_PCI_QUIRK(0x1028, 0x0962, "Dell ALC3271", 0), 2286 + /* https://bugzilla.kernel.org/show_bug.cgi?id=220210 */ 2287 + SND_PCI_QUIRK(0x17aa, 0x5079, "Lenovo Thinkpad E15", 0), 2286 2288 {} 2287 2289 }; 2288 2290
+40
sound/pci/hda/patch_realtek.c
··· 2656 2656 SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX), 2657 2657 SND_PCI_QUIRK(0x1558, 0x3702, "Clevo X370SN[VW]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2658 2658 SND_PCI_QUIRK(0x1558, 0x50d3, "Clevo PC50[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2659 + SND_PCI_QUIRK(0x1558, 0x5802, "Clevo X58[05]WN[RST]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2659 2660 SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2660 2661 SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2661 2662 SND_PCI_QUIRK(0x1558, 0x65e1, "Clevo PB51[ED][DF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), ··· 6610 6609 if (action == HDA_FIXUP_ACT_PRE_PROBE) { 6611 6610 static const hda_nid_t conn[] = { 0x02, 0x03 }; 6612 6611 snd_hda_override_conn_list(codec, 0x15, ARRAY_SIZE(conn), conn); 6612 + snd_hda_gen_add_micmute_led_cdev(codec, NULL); 6613 6613 } 6614 6614 } 6615 6615 ··· 8032 8030 ALC294_FIXUP_ASUS_CS35L41_SPI_2, 8033 8031 ALC274_FIXUP_HP_AIO_BIND_DACS, 8034 8032 ALC287_FIXUP_PREDATOR_SPK_CS35L41_I2C_2, 8033 + ALC285_FIXUP_ASUS_GA605K_HEADSET_MIC, 8034 + ALC285_FIXUP_ASUS_GA605K_I2C_SPEAKER2_TO_DAC1, 8035 + ALC269_FIXUP_POSITIVO_P15X_HEADSET_MIC, 8035 8036 }; 8036 8037 8037 8038 /* A special fixup for Lenovo C940 and Yoga Duet 7; ··· 10419 10414 .type = HDA_FIXUP_FUNC, 10420 10415 .v.func = alc274_fixup_hp_aio_bind_dacs, 10421 10416 }, 10417 + [ALC285_FIXUP_ASUS_GA605K_HEADSET_MIC] = { 10418 + .type = HDA_FIXUP_PINS, 10419 + .v.pins = (const struct hda_pintbl[]) { 10420 + { 0x19, 0x03a11050 }, 10421 + { 0x1b, 0x03a11c30 }, 10422 + { } 10423 + }, 10424 + .chained = true, 10425 + .chain_id = ALC285_FIXUP_ASUS_GA605K_I2C_SPEAKER2_TO_DAC1 10426 + }, 10427 + [ALC285_FIXUP_ASUS_GA605K_I2C_SPEAKER2_TO_DAC1] = { 10428 + .type = HDA_FIXUP_FUNC, 10429 + .v.func = alc285_fixup_speaker2_to_dac1, 10430 + }, 10431 + [ALC269_FIXUP_POSITIVO_P15X_HEADSET_MIC] = { 10432 + .type = HDA_FIXUP_FUNC, 10433 + .v.func = 
alc269_fixup_limit_int_mic_boost, 10434 + .chained = true, 10435 + .chain_id = ALC269VC_FIXUP_ACER_MIC_NO_PRESENCE, 10436 + }, 10422 10437 }; 10423 10438 10424 10439 static const struct hda_quirk alc269_fixup_tbl[] = { ··· 10534 10509 SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC), 10535 10510 SND_PCI_QUIRK(0x1028, 0x0872, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC), 10536 10511 SND_PCI_QUIRK(0x1028, 0x0873, "Dell Precision 3930", ALC255_FIXUP_DUMMY_LINEOUT_VERB), 10512 + SND_PCI_QUIRK(0x1028, 0x0879, "Dell Latitude 5420 Rugged", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE), 10537 10513 SND_PCI_QUIRK(0x1028, 0x08ad, "Dell WYSE AIO", ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE), 10538 10514 SND_PCI_QUIRK(0x1028, 0x08ae, "Dell WYSE NB", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE), 10539 10515 SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB), ··· 10739 10713 SND_PCI_QUIRK(0x103c, 0x8975, "HP EliteBook x360 840 Aero G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 10740 10714 SND_PCI_QUIRK(0x103c, 0x897d, "HP mt440 Mobile Thin Client U74", ALC236_FIXUP_HP_GPIO_LED), 10741 10715 SND_PCI_QUIRK(0x103c, 0x8981, "HP Elite Dragonfly G3", ALC245_FIXUP_CS35L41_SPI_4), 10716 + SND_PCI_QUIRK(0x103c, 0x898a, "HP Pavilion 15-eg100", ALC287_FIXUP_HP_GPIO_LED), 10742 10717 SND_PCI_QUIRK(0x103c, 0x898e, "HP EliteBook 835 G9", ALC287_FIXUP_CS35L41_I2C_2), 10743 10718 SND_PCI_QUIRK(0x103c, 0x898f, "HP EliteBook 835 G9", ALC287_FIXUP_CS35L41_I2C_2), 10744 10719 SND_PCI_QUIRK(0x103c, 0x8991, "HP EliteBook 845 G9", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED), ··· 10814 10787 SND_PCI_QUIRK(0x103c, 0x8b97, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 10815 10788 SND_PCI_QUIRK(0x103c, 0x8bb3, "HP Slim OMEN", ALC287_FIXUP_CS35L41_I2C_2), 10816 10789 SND_PCI_QUIRK(0x103c, 0x8bb4, "HP Slim OMEN", ALC287_FIXUP_CS35L41_I2C_2), 10790 + SND_PCI_QUIRK(0x103c, 0x8bc8, "HP Victus 15-fa1xxx", ALC245_FIXUP_HP_MUTE_LED_COEFBIT), 10817 10791 
SND_PCI_QUIRK(0x103c, 0x8bcd, "HP Omen 16-xd0xxx", ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT), 10818 10792 SND_PCI_QUIRK(0x103c, 0x8bdd, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2), 10819 10793 SND_PCI_QUIRK(0x103c, 0x8bde, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2), ··· 10868 10840 SND_PCI_QUIRK(0x103c, 0x8c91, "HP EliteBook 660", ALC236_FIXUP_HP_GPIO_LED), 10869 10841 SND_PCI_QUIRK(0x103c, 0x8c96, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 10870 10842 SND_PCI_QUIRK(0x103c, 0x8c97, "HP ZBook", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 10843 + SND_PCI_QUIRK(0x103c, 0x8c9c, "HP Victus 16-s1xxx (MB 8C9C)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT), 10871 10844 SND_PCI_QUIRK(0x103c, 0x8ca1, "HP ZBook Power", ALC236_FIXUP_HP_GPIO_LED), 10872 10845 SND_PCI_QUIRK(0x103c, 0x8ca2, "HP ZBook Power", ALC236_FIXUP_HP_GPIO_LED), 10873 10846 SND_PCI_QUIRK(0x103c, 0x8ca4, "HP ZBook Fury", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), ··· 10910 10881 SND_PCI_QUIRK(0x103c, 0x8def, "HP EliteBook 660 G12", ALC236_FIXUP_HP_GPIO_LED), 10911 10882 SND_PCI_QUIRK(0x103c, 0x8df0, "HP EliteBook 630 G12", ALC236_FIXUP_HP_GPIO_LED), 10912 10883 SND_PCI_QUIRK(0x103c, 0x8df1, "HP EliteBook 630 G12", ALC236_FIXUP_HP_GPIO_LED), 10884 + SND_PCI_QUIRK(0x103c, 0x8dfb, "HP EliteBook 6 G1a 14", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 10913 10885 SND_PCI_QUIRK(0x103c, 0x8dfc, "HP EliteBook 645 G12", ALC236_FIXUP_HP_GPIO_LED), 10886 + SND_PCI_QUIRK(0x103c, 0x8dfd, "HP EliteBook 6 G1a 16", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 10914 10887 SND_PCI_QUIRK(0x103c, 0x8dfe, "HP EliteBook 665 G12", ALC236_FIXUP_HP_GPIO_LED), 10915 10888 SND_PCI_QUIRK(0x103c, 0x8e11, "HP Trekker", ALC287_FIXUP_CS35L41_I2C_2), 10916 10889 SND_PCI_QUIRK(0x103c, 0x8e12, "HP Trekker", ALC287_FIXUP_CS35L41_I2C_2), ··· 10935 10904 SND_PCI_QUIRK(0x103c, 0x8e60, "HP Trekker ", ALC287_FIXUP_CS35L41_I2C_2), 10936 10905 SND_PCI_QUIRK(0x103c, 0x8e61, "HP Trekker ", ALC287_FIXUP_CS35L41_I2C_2), 10937 10906 SND_PCI_QUIRK(0x103c, 0x8e62, "HP Trekker ", 
ALC287_FIXUP_CS35L41_I2C_2), 10907 + SND_PCI_QUIRK(0x1043, 0x1032, "ASUS VivoBook X513EA", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE), 10908 + SND_PCI_QUIRK(0x1043, 0x1034, "ASUS GU605C", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1), 10938 10909 SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), 10939 10910 SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), 10940 10911 SND_PCI_QUIRK(0x1043, 0x1054, "ASUS G614FH/FM/FP", ALC287_FIXUP_CS35L41_I2C_2), ··· 10965 10932 SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE), 10966 10933 SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE), 10967 10934 SND_PCI_QUIRK(0x1043, 0x1313, "Asus K42JZ", ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE), 10935 + SND_PCI_QUIRK(0x1043, 0x1314, "ASUS GA605K", ALC285_FIXUP_ASUS_GA605K_HEADSET_MIC), 10968 10936 SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE), 10969 10937 SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK), 10970 10938 SND_PCI_QUIRK(0x1043, 0x1433, "ASUS GX650PY/PZ/PV/PU/PYV/PZV/PIV/PVV", ALC285_FIXUP_ASUS_I2C_HEADSET_MIC), ··· 11031 10997 SND_PCI_QUIRK(0x1043, 0x1df3, "ASUS UM5606WA", ALC294_FIXUP_BASS_SPEAKER_15), 11032 10998 SND_PCI_QUIRK(0x1043, 0x1264, "ASUS UM5606KA", ALC294_FIXUP_BASS_SPEAKER_15), 11033 10999 SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402ZA", ALC245_FIXUP_CS35L41_SPI_2), 11000 + SND_PCI_QUIRK(0x1043, 0x1e10, "ASUS VivoBook X507UAR", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE), 11034 11001 SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502), 11035 11002 SND_PCI_QUIRK(0x1043, 0x1e12, "ASUS UM3402", ALC287_FIXUP_CS35L41_I2C_2), 11036 11003 SND_PCI_QUIRK(0x1043, 0x1e1f, "ASUS Vivobook 15 X1504VAP", ALC2XX_FIXUP_HEADSET_MIC), ··· 11141 11106 SND_PCI_QUIRK(0x1558, 0x14a1, "Clevo L141MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11142 11107 SND_PCI_QUIRK(0x1558, 0x2624, "Clevo L240TU", 
ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11143 11108 SND_PCI_QUIRK(0x1558, 0x28c1, "Clevo V370VND", ALC2XX_FIXUP_HEADSET_MIC), 11109 + SND_PCI_QUIRK(0x1558, 0x35a1, "Clevo V3[56]0EN[CDE]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11110 + SND_PCI_QUIRK(0x1558, 0x35b1, "Clevo V3[57]0WN[MNP]Q", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11144 11111 SND_PCI_QUIRK(0x1558, 0x4018, "Clevo NV40M[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11145 11112 SND_PCI_QUIRK(0x1558, 0x4019, "Clevo NV40MZ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11146 11113 SND_PCI_QUIRK(0x1558, 0x4020, "Clevo NV40MB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), ··· 11170 11133 SND_PCI_QUIRK(0x1558, 0x51b1, "Clevo NS50AU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11171 11134 SND_PCI_QUIRK(0x1558, 0x51b3, "Clevo NS70AU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11172 11135 SND_PCI_QUIRK(0x1558, 0x5630, "Clevo NP50RNJS", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11136 + SND_PCI_QUIRK(0x1558, 0x5700, "Clevo X560WN[RST]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11173 11137 SND_PCI_QUIRK(0x1558, 0x70a1, "Clevo NB70T[HJK]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11174 11138 SND_PCI_QUIRK(0x1558, 0x70b3, "Clevo NK70SB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11175 11139 SND_PCI_QUIRK(0x1558, 0x70f2, "Clevo NH79EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), ··· 11210 11172 SND_PCI_QUIRK(0x1558, 0xa650, "Clevo NP[567]0SN[CD]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11211 11173 SND_PCI_QUIRK(0x1558, 0xa671, "Clevo NP70SN[CDE]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11212 11174 SND_PCI_QUIRK(0x1558, 0xa741, "Clevo V54x_6x_TNE", ALC245_FIXUP_CLEVO_NOISY_MIC), 11175 + SND_PCI_QUIRK(0x1558, 0xa743, "Clevo V54x_6x_TU", ALC245_FIXUP_CLEVO_NOISY_MIC), 11213 11176 SND_PCI_QUIRK(0x1558, 0xa763, "Clevo V54x_6x_TU", ALC245_FIXUP_CLEVO_NOISY_MIC), 11214 11177 SND_PCI_QUIRK(0x1558, 0xb018, "Clevo NP50D[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 11215 11178 SND_PCI_QUIRK(0x1558, 0xb019, "Clevo NH77D[BE]Q", 
ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), ··· 11423 11384 SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 11424 11385 SND_PCI_QUIRK(0x2782, 0x0228, "Infinix ZERO BOOK 13", ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13), 11425 11386 SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO), 11387 + SND_PCI_QUIRK(0x2782, 0x1407, "Positivo P15X", ALC269_FIXUP_POSITIVO_P15X_HEADSET_MIC), 11426 11388 SND_PCI_QUIRK(0x2782, 0x1701, "Infinix Y4 Max", ALC269VC_FIXUP_INFINIX_Y4_MAX), 11427 11389 SND_PCI_QUIRK(0x2782, 0x1705, "MEDION E15433", ALC269VC_FIXUP_INFINIX_Y4_MAX), 11428 11390 SND_PCI_QUIRK(0x2782, 0x1707, "Vaio VJFE-ADL", ALC298_FIXUP_SPK_VOLUME),
+4
sound/soc/amd/ps/acp63.h
··· 334 334 * @addr: pci ioremap address 335 335 * @reg_range: ACP reigister range 336 336 * @acp_rev: ACP PCI revision id 337 + * @acp_sw_pad_keeper_en: store acp SoundWire pad keeper enable register value 338 + * @acp_pad_pulldown_ctrl: store acp pad pulldown control register value 337 339 * @acp63_sdw0-dma_intr_stat: DMA interrupt status array for ACP6.3 platform SoundWire 338 340 * manager-SW0 instance 339 341 * @acp63_sdw_dma_intr_stat: DMA interrupt status array for ACP6.3 platform SoundWire ··· 369 367 u32 addr; 370 368 u32 reg_range; 371 369 u32 acp_rev; 370 + u32 acp_sw_pad_keeper_en; 371 + u32 acp_pad_pulldown_ctrl; 372 372 u16 acp63_sdw0_dma_intr_stat[ACP63_SDW0_DMA_MAX_STREAMS]; 373 373 u16 acp63_sdw1_dma_intr_stat[ACP63_SDW1_DMA_MAX_STREAMS]; 374 374 u16 acp70_sdw0_dma_intr_stat[ACP70_SDW0_DMA_MAX_STREAMS];
+18
sound/soc/amd/ps/ps-common.c
··· 160 160 161 161 adata = dev_get_drvdata(dev); 162 162 if (adata->is_sdw_dev) { 163 + adata->acp_sw_pad_keeper_en = readl(adata->acp63_base + ACP_SW0_PAD_KEEPER_EN); 164 + adata->acp_pad_pulldown_ctrl = readl(adata->acp63_base + ACP_PAD_PULLDOWN_CTRL); 163 165 adata->sdw_en_stat = check_acp_sdw_enable_status(adata); 164 166 if (adata->sdw_en_stat) { 165 167 writel(1, adata->acp63_base + ACP_ZSC_DSP_CTRL); ··· 199 197 static int __maybe_unused snd_acp63_resume(struct device *dev) 200 198 { 201 199 struct acp63_dev_data *adata; 200 + u32 acp_sw_pad_keeper_en; 202 201 int ret; 203 202 204 203 adata = dev_get_drvdata(dev); ··· 212 209 if (ret) 213 210 dev_err(dev, "ACP init failed\n"); 214 211 212 + acp_sw_pad_keeper_en = readl(adata->acp63_base + ACP_SW0_PAD_KEEPER_EN); 213 + dev_dbg(dev, "ACP_SW0_PAD_KEEPER_EN:0x%x\n", acp_sw_pad_keeper_en); 214 + if (!acp_sw_pad_keeper_en) { 215 + writel(adata->acp_sw_pad_keeper_en, adata->acp63_base + ACP_SW0_PAD_KEEPER_EN); 216 + writel(adata->acp_pad_pulldown_ctrl, adata->acp63_base + ACP_PAD_PULLDOWN_CTRL); 217 + } 215 218 return ret; 216 219 } 217 220 ··· 417 408 418 409 adata = dev_get_drvdata(dev); 419 410 if (adata->is_sdw_dev) { 411 + adata->acp_sw_pad_keeper_en = readl(adata->acp63_base + ACP_SW0_PAD_KEEPER_EN); 412 + adata->acp_pad_pulldown_ctrl = readl(adata->acp63_base + ACP_PAD_PULLDOWN_CTRL); 420 413 adata->sdw_en_stat = check_acp_sdw_enable_status(adata); 421 414 if (adata->sdw_en_stat) { 422 415 writel(1, adata->acp63_base + ACP_ZSC_DSP_CTRL); ··· 456 445 static int __maybe_unused snd_acp70_resume(struct device *dev) 457 446 { 458 447 struct acp63_dev_data *adata; 448 + u32 acp_sw_pad_keeper_en; 459 449 int ret; 460 450 461 451 adata = dev_get_drvdata(dev); ··· 471 459 if (ret) 472 460 dev_err(dev, "ACP init failed\n"); 473 461 462 + acp_sw_pad_keeper_en = readl(adata->acp63_base + ACP_SW0_PAD_KEEPER_EN); 463 + dev_dbg(dev, "ACP_SW0_PAD_KEEPER_EN:0x%x\n", acp_sw_pad_keeper_en); 464 + if (!acp_sw_pad_keeper_en) { 
465 + writel(adata->acp_sw_pad_keeper_en, adata->acp63_base + ACP_SW0_PAD_KEEPER_EN); 466 + writel(adata->acp_pad_pulldown_ctrl, adata->acp63_base + ACP_PAD_PULLDOWN_CTRL); 467 + } 474 468 return ret; 475 469 } 476 470
+28
sound/soc/amd/yc/acp6x-mach.c
··· 356 356 { 357 357 .driver_data = &acp6x_card, 358 358 .matches = { 359 + DMI_MATCH(DMI_BOARD_VENDOR, "RB"), 360 + DMI_MATCH(DMI_PRODUCT_NAME, "Nitro ANV15-41"), 361 + } 362 + }, 363 + { 364 + .driver_data = &acp6x_card, 365 + .matches = { 359 366 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 360 367 DMI_MATCH(DMI_PRODUCT_NAME, "83J2"), 368 + } 369 + }, 370 + { 371 + .driver_data = &acp6x_card, 372 + .matches = { 373 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 374 + DMI_MATCH(DMI_PRODUCT_NAME, "83J3"), 361 375 } 362 376 }, 363 377 { ··· 468 454 { 469 455 .driver_data = &acp6x_card, 470 456 .matches = { 457 + DMI_MATCH(DMI_BOARD_VENDOR, "Micro-Star International Co., Ltd."), 458 + DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 17 D7VF"), 459 + } 460 + }, 461 + { 462 + .driver_data = &acp6x_card, 463 + .matches = { 471 464 DMI_MATCH(DMI_BOARD_VENDOR, "Alienware"), 472 465 DMI_MATCH(DMI_PRODUCT_NAME, "Alienware m17 R5 AMD"), 473 466 } ··· 533 512 .matches = { 534 513 DMI_MATCH(DMI_BOARD_VENDOR, "HP"), 535 514 DMI_MATCH(DMI_PRODUCT_NAME, "OMEN by HP Gaming Laptop 16z-n000"), 515 + } 516 + }, 517 + { 518 + .driver_data = &acp6x_card, 519 + .matches = { 520 + DMI_MATCH(DMI_BOARD_VENDOR, "HP"), 521 + DMI_MATCH(DMI_PRODUCT_NAME, "Victus by HP Gaming Laptop 15-fb2xxx"), 536 522 } 537 523 }, 538 524 {
-1
sound/soc/apple/Kconfig
··· 2 2 tristate "Apple Silicon MCA driver" 3 3 depends on ARCH_APPLE || COMPILE_TEST 4 4 select SND_DMAENGINE_PCM 5 - default ARCH_APPLE 6 5 help 7 6 This option enables an ASoC platform driver for MCA peripherals found 8 7 on Apple Silicon SoCs.
+10 -8
sound/soc/codecs/cs35l56-sdw.c
··· 238 238 .val_format_endian_default = REGMAP_ENDIAN_BIG, 239 239 }; 240 240 241 - static int cs35l56_sdw_set_cal_index(struct cs35l56_private *cs35l56) 241 + static int cs35l56_sdw_get_unique_id(struct cs35l56_private *cs35l56) 242 242 { 243 243 int ret; 244 244 245 - /* SoundWire UniqueId is used to index the calibration array */ 246 245 ret = sdw_read_no_pm(cs35l56->sdw_peripheral, SDW_SCP_DEVID_0); 247 246 if (ret < 0) 248 247 return ret; 249 248 250 - cs35l56->base.cal_index = ret & 0xf; 249 + cs35l56->sdw_unique_id = ret & 0xf; 251 250 252 251 return 0; 253 252 } ··· 258 259 259 260 pm_runtime_get_noresume(cs35l56->base.dev); 260 261 261 - if (cs35l56->base.cal_index < 0) { 262 - ret = cs35l56_sdw_set_cal_index(cs35l56); 263 - if (ret < 0) 264 - goto out; 265 - } 262 + ret = cs35l56_sdw_get_unique_id(cs35l56); 263 + if (ret) 264 + goto out; 265 + 266 + /* SoundWire UniqueId is used to index the calibration array */ 267 + if (cs35l56->base.cal_index < 0) 268 + cs35l56->base.cal_index = cs35l56->sdw_unique_id; 266 269 267 270 ret = cs35l56_init(cs35l56); 268 271 if (ret < 0) { ··· 588 587 589 588 cs35l56->base.dev = dev; 590 589 cs35l56->sdw_peripheral = peripheral; 590 + cs35l56->sdw_link_num = peripheral->bus->link_id; 591 591 INIT_WORK(&cs35l56->sdw_irq_work, cs35l56_sdw_irq_work); 592 592 593 593 dev_set_drvdata(dev, cs35l56);
+63 -9
sound/soc/codecs/cs35l56.c
··· 706 706 return ret; 707 707 } 708 708 709 + static int cs35l56_dsp_download_and_power_up(struct cs35l56_private *cs35l56, 710 + bool load_firmware) 711 + { 712 + int ret; 713 + 714 + /* 715 + * Abort the first load if it didn't find the suffixed bins and 716 + * we have an alternate fallback suffix. 717 + */ 718 + cs35l56->dsp.bin_mandatory = (load_firmware && cs35l56->fallback_fw_suffix); 719 + 720 + ret = wm_adsp_power_up(&cs35l56->dsp, load_firmware); 721 + if ((ret == -ENOENT) && cs35l56->dsp.bin_mandatory) { 722 + cs35l56->dsp.fwf_suffix = cs35l56->fallback_fw_suffix; 723 + cs35l56->fallback_fw_suffix = NULL; 724 + cs35l56->dsp.bin_mandatory = false; 725 + ret = wm_adsp_power_up(&cs35l56->dsp, load_firmware); 726 + } 727 + 728 + if (ret) { 729 + dev_dbg(cs35l56->base.dev, "wm_adsp_power_up ret %d\n", ret); 730 + return ret; 731 + } 732 + 733 + return 0; 734 + } 735 + 709 736 static void cs35l56_reinit_patch(struct cs35l56_private *cs35l56) 710 737 { 711 738 int ret; 712 739 713 - /* Use wm_adsp to load and apply the firmware patch and coefficient files */ 714 - ret = wm_adsp_power_up(&cs35l56->dsp, true); 715 - if (ret) { 716 - dev_dbg(cs35l56->base.dev, "%s: wm_adsp_power_up ret %d\n", __func__, ret); 740 + ret = cs35l56_dsp_download_and_power_up(cs35l56, true); 741 + if (ret) 717 742 return; 718 - } 719 743 720 744 cs35l56_write_cal(cs35l56); 721 745 ··· 774 750 * but only if firmware is missing. If firmware is already patched just 775 751 * power-up wm_adsp without downloading firmware. 
776 752 */ 777 - ret = wm_adsp_power_up(&cs35l56->dsp, !!firmware_missing); 778 - if (ret) { 779 - dev_dbg(cs35l56->base.dev, "%s: wm_adsp_power_up ret %d\n", __func__, ret); 753 + ret = cs35l56_dsp_download_and_power_up(cs35l56, firmware_missing); 754 + if (ret) 780 755 goto err; 781 - } 782 756 783 757 mutex_lock(&cs35l56->base.irq_lock); 784 758 ··· 875 853 pm_runtime_put_autosuspend(cs35l56->base.dev); 876 854 } 877 855 856 + static int cs35l56_set_fw_suffix(struct cs35l56_private *cs35l56) 857 + { 858 + if (cs35l56->dsp.fwf_suffix) 859 + return 0; 860 + 861 + if (!cs35l56->sdw_peripheral) 862 + return 0; 863 + 864 + cs35l56->dsp.fwf_suffix = devm_kasprintf(cs35l56->base.dev, GFP_KERNEL, 865 + "l%uu%u", 866 + cs35l56->sdw_link_num, 867 + cs35l56->sdw_unique_id); 868 + if (!cs35l56->dsp.fwf_suffix) 869 + return -ENOMEM; 870 + 871 + /* 872 + * There are published firmware files for L56 B0 silicon using 873 + * the ALSA prefix as the filename suffix. Default to trying these 874 + * first, with the new name as an alternate. 875 + */ 876 + if ((cs35l56->base.type == 0x56) && (cs35l56->base.rev == 0xb0)) { 877 + cs35l56->fallback_fw_suffix = cs35l56->dsp.fwf_suffix; 878 + cs35l56->dsp.fwf_suffix = cs35l56->component->name_prefix; 879 + } 880 + 881 + return 0; 882 + } 883 + 878 884 static int cs35l56_component_probe(struct snd_soc_component *component) 879 885 { 880 886 struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(component); ··· 942 892 return -ENOMEM; 943 893 944 894 cs35l56->component = component; 895 + ret = cs35l56_set_fw_suffix(cs35l56); 896 + if (ret) 897 + return ret; 898 + 945 899 wm_adsp2_component_probe(&cs35l56->dsp, component); 946 900 947 901 debugfs_create_bool("init_done", 0444, debugfs_root, &cs35l56->base.init_done);
+3
sound/soc/codecs/cs35l56.h
··· 38 38 struct snd_soc_component *component; 39 39 struct regulator_bulk_data supplies[CS35L56_NUM_BULK_SUPPLIES]; 40 40 struct sdw_slave *sdw_peripheral; 41 + const char *fallback_fw_suffix; 41 42 struct work_struct sdw_irq_work; 42 43 bool sdw_irq_no_unmask; 43 44 bool soft_resetting; ··· 53 52 bool tdm_mode; 54 53 bool sysclk_set; 55 54 u8 old_sdw_clock_scale; 55 + u8 sdw_link_num; 56 + u8 sdw_unique_id; 56 57 }; 57 58 58 59 extern const struct dev_pm_ops cs35l56_pm_ops_i2c_spi;
+4
sound/soc/codecs/cs48l32.c
··· 2162 2162 n_slots_multiple = 1; 2163 2163 2164 2164 sclk_target = snd_soc_tdm_params_to_bclk(params, slotw, n_slots, n_slots_multiple); 2165 + if (sclk_target < 0) { 2166 + cs48l32_asp_err(dai, "Invalid parameters\n"); 2167 + return sclk_target; 2168 + } 2165 2169 2166 2170 for (i = 0; i < ARRAY_SIZE(cs48l32_sclk_rates); i++) { 2167 2171 if ((cs48l32_sclk_rates[i].freq >= sclk_target) &&
+1 -2
sound/soc/codecs/es8326.c
··· 1079 1079 regmap_update_bits(es8326->regmap, ES8326_HPDET_TYPE, 0x03, 0x00); 1080 1080 regmap_write(es8326->regmap, ES8326_INTOUT_IO, 1081 1081 es8326->interrupt_clk); 1082 - regmap_write(es8326->regmap, ES8326_SDINOUT1_IO, 1083 - (ES8326_IO_DMIC_CLK << ES8326_SDINOUT1_SHIFT)); 1082 + regmap_write(es8326->regmap, ES8326_SDINOUT1_IO, ES8326_IO_INPUT); 1084 1083 regmap_write(es8326->regmap, ES8326_SDINOUT23_IO, ES8326_IO_INPUT); 1085 1084 1086 1085 regmap_write(es8326->regmap, ES8326_ANA_PDN, 0x00);
+19 -4
sound/soc/codecs/rt721-sdca.c
··· 430 430 unsigned int read_l, read_r, ctl_l = 0, ctl_r = 0; 431 431 unsigned int adc_vol_flag = 0; 432 432 const unsigned int interval_offset = 0xc0; 433 + const unsigned int tendA = 0x200; 433 434 const unsigned int tendB = 0xa00; 434 435 435 436 if (strstr(ucontrol->id.name, "FU1E Capture Volume") || ··· 440 439 regmap_read(rt721->mbq_regmap, mc->reg, &read_l); 441 440 regmap_read(rt721->mbq_regmap, mc->rreg, &read_r); 442 441 443 - if (mc->shift == 8) /* boost gain */ 442 + if (mc->shift == 8) { 443 + /* boost gain */ 444 444 ctl_l = read_l / tendB; 445 - else { 445 + } else if (mc->shift == 1) { 446 + /* FU33 boost gain */ 447 + if (read_l == 0x8000 || read_l == 0xfe00) 448 + ctl_l = 0; 449 + else 450 + ctl_l = read_l / tendA + 1; 451 + } else { 446 452 if (adc_vol_flag) 447 453 ctl_l = mc->max - (((0x1e00 - read_l) & 0xffff) / interval_offset); 448 454 else ··· 457 449 } 458 450 459 451 if (read_l != read_r) { 460 - if (mc->shift == 8) /* boost gain */ 452 + if (mc->shift == 8) { 453 + /* boost gain */ 461 454 ctl_r = read_r / tendB; 462 - else { /* ADC/DAC gain */ 455 + } else if (mc->shift == 1) { 456 + /* FU33 boost gain */ 457 + if (read_r == 0x8000 || read_r == 0xfe00) 458 + ctl_r = 0; 459 + else 460 + ctl_r = read_r / tendA + 1; 461 + } else { /* ADC/DAC gain */ 463 462 if (adc_vol_flag) 464 463 ctl_r = mc->max - (((0x1e00 - read_r) & 0xffff) / interval_offset); 465 464 else
+18 -9
sound/soc/codecs/wm_adsp.c
··· 783 783 char **coeff_filename) 784 784 { 785 785 const char *system_name = dsp->system_name; 786 - const char *asoc_component_prefix = dsp->component->name_prefix; 786 + const char *suffix = dsp->component->name_prefix; 787 787 int ret = 0; 788 788 789 - if (system_name && asoc_component_prefix) { 789 + if (dsp->fwf_suffix) 790 + suffix = dsp->fwf_suffix; 791 + 792 + if (system_name && suffix) { 790 793 if (!wm_adsp_request_firmware_file(dsp, wmfw_firmware, wmfw_filename, 791 794 cirrus_dir, system_name, 792 - asoc_component_prefix, "wmfw")) { 795 + suffix, "wmfw")) { 793 796 wm_adsp_request_firmware_file(dsp, coeff_firmware, coeff_filename, 794 797 cirrus_dir, system_name, 795 - asoc_component_prefix, "bin"); 798 + suffix, "bin"); 796 799 return 0; 797 800 } 798 801 } ··· 804 801 if (!wm_adsp_request_firmware_file(dsp, wmfw_firmware, wmfw_filename, 805 802 cirrus_dir, system_name, 806 803 NULL, "wmfw")) { 807 - if (asoc_component_prefix) 804 + if (suffix) 808 805 wm_adsp_request_firmware_file(dsp, coeff_firmware, coeff_filename, 809 806 cirrus_dir, system_name, 810 - asoc_component_prefix, "bin"); 807 + suffix, "bin"); 811 808 812 809 if (!*coeff_firmware) 813 810 wm_adsp_request_firmware_file(dsp, coeff_firmware, coeff_filename, ··· 819 816 820 817 /* Check system-specific bin without wmfw before falling back to generic */ 821 818 if (dsp->wmfw_optional && system_name) { 822 - if (asoc_component_prefix) 819 + if (suffix) 823 820 wm_adsp_request_firmware_file(dsp, coeff_firmware, coeff_filename, 824 821 cirrus_dir, system_name, 825 - asoc_component_prefix, "bin"); 822 + suffix, "bin"); 826 823 827 824 if (!*coeff_firmware) 828 825 wm_adsp_request_firmware_file(dsp, coeff_firmware, coeff_filename, ··· 853 850 adsp_err(dsp, "Failed to request firmware <%s>%s-%s-%s<-%s<%s>>.wmfw\n", 854 851 cirrus_dir, dsp->part, 855 852 dsp->fwf_name ? 
dsp->fwf_name : dsp->cs_dsp.name, 856 - wm_adsp_fw[dsp->fw].file, system_name, asoc_component_prefix); 853 + wm_adsp_fw[dsp->fw].file, system_name, suffix); 857 854 858 855 return -ENOENT; 859 856 } ··· 1000 997 return ret; 1001 998 } 1002 999 1000 + if (dsp->bin_mandatory && !coeff_firmware) { 1001 + ret = -ENOENT; 1002 + goto err; 1003 + } 1004 + 1003 1005 ret = cs_dsp_power_up(&dsp->cs_dsp, 1004 1006 wmfw_firmware, wmfw_filename, 1005 1007 coeff_firmware, coeff_filename, 1006 1008 wm_adsp_fw_text[dsp->fw]); 1007 1009 1010 + err: 1008 1011 wm_adsp_release_firmware_files(dsp, 1009 1012 wmfw_firmware, wmfw_filename, 1010 1013 coeff_firmware, coeff_filename);
+2
sound/soc/codecs/wm_adsp.h
··· 29 29 const char *part; 30 30 const char *fwf_name; 31 31 const char *system_name; 32 + const char *fwf_suffix; 32 33 struct snd_soc_component *component; 33 34 34 35 unsigned int sys_config_size; 35 36 36 37 int fw; 37 38 bool wmfw_optional; 39 + bool bin_mandatory; 38 40 39 41 struct work_struct boot_work; 40 42 int (*control_add)(struct wm_adsp *dsp, struct cs_dsp_coeff_ctl *cs_ctl);
+2 -1
sound/soc/intel/common/sof-function-topology-lib.c
··· 73 73 break; 74 74 default: 75 75 dev_warn(card->dev, 76 - "only -2ch and -4ch are supported for dmic\n"); 76 + "unsupported number of dmics: %d\n", 77 + mach_params.dmic_num); 77 78 continue; 78 79 } 79 80 tplg_dev = TPLG_DEVICE_INTEL_PCH_DMIC;
+1
sound/soc/loongson/loongson_i2s.c
··· 9 9 #include <linux/module.h> 10 10 #include <linux/platform_device.h> 11 11 #include <linux/delay.h> 12 + #include <linux/export.h> 12 13 #include <linux/pm_runtime.h> 13 14 #include <linux/dma-mapping.h> 14 15 #include <sound/soc.h>
+1
sound/soc/qcom/Kconfig
··· 186 186 tristate "SoC Machine driver for SM8250 boards" 187 187 depends on QCOM_APR && SOUNDWIRE 188 188 depends on COMMON_CLK 189 + depends on SND_SOC_QCOM_OFFLOAD_UTILS || !SND_SOC_QCOM_OFFLOAD_UTILS 189 190 select SND_SOC_QDSP6 190 191 select SND_SOC_QCOM_COMMON 191 192 select SND_SOC_QCOM_SDW
+2
sound/soc/sdw_utils/soc_sdw_utils.c
··· 1205 1205 int i; 1206 1206 1207 1207 dlc = kzalloc(sizeof(*dlc), GFP_KERNEL); 1208 + if (!dlc) 1209 + return -ENOMEM; 1208 1210 1209 1211 adr_end = &adr_dev->endpoints[end_index]; 1210 1212 dai_info = &codec_info->dais[adr_end->num];
+15
sound/soc/sof/imx/imx8.c
··· 40 40 struct reset_control *run_stall; 41 41 }; 42 42 43 + static int imx8_shutdown(struct snd_sof_dev *sdev) 44 + { 45 + /* 46 + * Force the DSP to stall. After the firmware image is loaded, 47 + * the stall will be removed during run() by a matching 48 + * imx_sc_pm_cpu_start() call. 49 + */ 50 + imx_sc_pm_cpu_start(get_chip_pdata(sdev), IMX_SC_R_DSP, false, 51 + RESET_VECTOR_VADDR); 52 + 53 + return 0; 54 + } 55 + 43 56 /* 44 57 * DSP control. 45 58 */ ··· 294 281 static const struct imx_chip_ops imx8_chip_ops = { 295 282 .probe = imx8_probe, 296 283 .core_kick = imx8_run, 284 + .core_shutdown = imx8_shutdown, 297 285 }; 298 286 299 287 static const struct imx_chip_ops imx8x_chip_ops = { 300 288 .probe = imx8_probe, 301 289 .core_kick = imx8x_run, 290 + .core_shutdown = imx8_shutdown, 302 291 }; 303 292 304 293 static const struct imx_chip_ops imx8m_chip_ops = {
+3 -3
sound/soc/sof/intel/hda.c
··· 1257 1257 return 0; 1258 1258 } 1259 1259 1260 - static char *remove_file_ext(const char *tplg_filename) 1260 + static char *remove_file_ext(struct device *dev, const char *tplg_filename) 1261 1261 { 1262 1262 char *filename, *tmp; 1263 1263 1264 - filename = kstrdup(tplg_filename, GFP_KERNEL); 1264 + filename = devm_kstrdup(dev, tplg_filename, GFP_KERNEL); 1265 1265 if (!filename) 1266 1266 return NULL; 1267 1267 ··· 1345 1345 */ 1346 1346 if (!sof_pdata->tplg_filename) { 1347 1347 /* remove file extension if it exists */ 1348 - tplg_filename = remove_file_ext(mach->sof_tplg_filename); 1348 + tplg_filename = remove_file_ext(sdev->dev, mach->sof_tplg_filename); 1349 1349 if (!tplg_filename) 1350 1350 return NULL; 1351 1351
+12
sound/usb/mixer_maps.c
··· 383 383 { 0 } /* terminator */ 384 384 }; 385 385 386 + /* KTMicro USB */ 387 + static struct usbmix_name_map s31b2_0022_map[] = { 388 + { 23, "Speaker Playback" }, 389 + { 18, "Headphone Playback" }, 390 + { 0 } 391 + }; 392 + 386 393 /* ASUS ROG Zenith II with Realtek ALC1220-VB */ 387 394 static const struct usbmix_name_map asus_zenith_ii_map[] = { 388 395 { 19, NULL, 12 }, /* FU, Input Gain Pad - broken response, disabled */ ··· 698 691 /* Microsoft USB Link headset */ 699 692 .id = USB_ID(0x045e, 0x083c), 700 693 .map = ms_usb_link_map, 694 + }, 695 + { 696 + /* KTMicro USB */ 697 + .id = USB_ID(0X31b2, 0x0022), 698 + .map = s31b2_0022_map, 701 699 }, 702 700 { 0 } /* terminator */ 703 701 };
+8 -8
sound/usb/qcom/qc_audio_offload.c
··· 759 759 subs = find_substream(pcm_card_num, info->pcm_dev_num, 760 760 info->direction); 761 761 if (!subs || !chip || atomic_read(&chip->shutdown)) { 762 - dev_err(&subs->dev->dev, 762 + dev_err(&uadev[idx].udev->dev, 763 763 "no sub for c#%u dev#%u dir%u\n", 764 764 info->pcm_card_num, 765 765 info->pcm_dev_num, ··· 1360 1360 1361 1361 if (!uadev[card_num].ctrl_intf) { 1362 1362 dev_err(&subs->dev->dev, "audio ctrl intf info not cached\n"); 1363 - ret = -ENODEV; 1364 - goto err; 1363 + return -ENODEV; 1365 1364 } 1366 1365 1367 1366 ret = uaudio_populate_uac_desc(subs, resp); 1368 1367 if (ret < 0) 1369 - goto err; 1368 + return ret; 1370 1369 1371 1370 resp->slot_id = subs->dev->slot_id; 1372 1371 resp->slot_id_valid = 1; 1373 1372 1374 1373 data = snd_soc_usb_find_priv_data(uaudio_qdev->auxdev->dev.parent); 1375 - if (!data) 1376 - goto err; 1374 + if (!data) { 1375 + dev_err(&subs->dev->dev, "No private data found\n"); 1376 + return -ENODEV; 1377 + } 1377 1378 1378 1379 uaudio_qdev->data = data; 1379 1380 ··· 1383 1382 &resp->xhci_mem_info.tr_data, 1384 1383 &resp->std_as_data_ep_desc); 1385 1384 if (ret < 0) 1386 - goto err; 1385 + return ret; 1387 1386 1388 1387 resp->std_as_data_ep_desc_valid = 1; 1389 1388 ··· 1501 1500 xhci_sideband_remove_endpoint(uadev[card_num].sb, 1502 1501 usb_pipe_endpoint(subs->dev, subs->data_endpoint->pipe)); 1503 1502 1504 - err: 1505 1503 return ret; 1506 1504 } 1507 1505
+2
sound/usb/stream.c
··· 987 987 * and request Cluster Descriptor 988 988 */ 989 989 wLength = le16_to_cpu(hc_header.wLength); 990 + if (wLength < sizeof(cluster)) 991 + return NULL; 990 992 cluster = kzalloc(wLength, GFP_KERNEL); 991 993 if (!cluster) 992 994 return ERR_PTR(-ENOMEM);
+5 -4
tools/arch/arm64/include/uapi/asm/kvm.h
··· 431 431 432 432 /* Device Control API on vcpu fd */ 433 433 #define KVM_ARM_VCPU_PMU_V3_CTRL 0 434 - #define KVM_ARM_VCPU_PMU_V3_IRQ 0 435 - #define KVM_ARM_VCPU_PMU_V3_INIT 1 436 - #define KVM_ARM_VCPU_PMU_V3_FILTER 2 437 - #define KVM_ARM_VCPU_PMU_V3_SET_PMU 3 434 + #define KVM_ARM_VCPU_PMU_V3_IRQ 0 435 + #define KVM_ARM_VCPU_PMU_V3_INIT 1 436 + #define KVM_ARM_VCPU_PMU_V3_FILTER 2 437 + #define KVM_ARM_VCPU_PMU_V3_SET_PMU 3 438 + #define KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS 4 438 439 #define KVM_ARM_VCPU_TIMER_CTRL 1 439 440 #define KVM_ARM_VCPU_TIMER_IRQ_VTIMER 0 440 441 #define KVM_ARM_VCPU_TIMER_IRQ_PTIMER 1
+2 -2
tools/arch/loongarch/include/asm/orc_types.h
··· 34 34 #define ORC_TYPE_REGS 3 35 35 #define ORC_TYPE_REGS_PARTIAL 4 36 36 37 - #ifndef __ASSEMBLY__ 37 + #ifndef __ASSEMBLER__ 38 38 /* 39 39 * This struct is more or less a vastly simplified version of the DWARF Call 40 40 * Frame Information standard. It contains only the necessary parts of DWARF ··· 53 53 unsigned int type:3; 54 54 unsigned int signal:1; 55 55 }; 56 - #endif /* __ASSEMBLY__ */ 56 + #endif /* __ASSEMBLER__ */ 57 57 58 58 #endif /* _ORC_TYPES_H */
+5
tools/arch/x86/include/asm/amd/ibs.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _ASM_X86_AMD_IBS_H 3 + #define _ASM_X86_AMD_IBS_H 4 + 2 5 /* 3 6 * From PPR Vol 1 for AMD Family 19h Model 01h B1 4 7 * 55898 Rev 0.35 - Feb 5, 2021 ··· 154 151 }; 155 152 u64 regs[MSR_AMD64_IBS_REG_COUNT_MAX]; 156 153 }; 154 + 155 + #endif /* _ASM_X86_AMD_IBS_H */
+10 -4
tools/arch/x86/include/asm/cpufeatures.h
··· 336 336 #define X86_FEATURE_AMD_IBRS (13*32+14) /* Indirect Branch Restricted Speculation */ 337 337 #define X86_FEATURE_AMD_STIBP (13*32+15) /* Single Thread Indirect Branch Predictors */ 338 338 #define X86_FEATURE_AMD_STIBP_ALWAYS_ON (13*32+17) /* Single Thread Indirect Branch Predictors always-on preferred */ 339 - #define X86_FEATURE_AMD_IBRS_SAME_MODE (13*32+19) /* Indirect Branch Restricted Speculation same mode protection*/ 339 + #define X86_FEATURE_AMD_IBRS_SAME_MODE (13*32+19) /* Indirect Branch Restricted Speculation same mode protection*/ 340 340 #define X86_FEATURE_AMD_PPIN (13*32+23) /* "amd_ppin" Protected Processor Inventory Number */ 341 341 #define X86_FEATURE_AMD_SSBD (13*32+24) /* Speculative Store Bypass Disable */ 342 342 #define X86_FEATURE_VIRT_SSBD (13*32+25) /* "virt_ssbd" Virtualized Speculative Store Bypass Disable */ ··· 379 379 #define X86_FEATURE_V_SPEC_CTRL (15*32+20) /* "v_spec_ctrl" Virtual SPEC_CTRL */ 380 380 #define X86_FEATURE_VNMI (15*32+25) /* "vnmi" Virtual NMI */ 381 381 #define X86_FEATURE_SVME_ADDR_CHK (15*32+28) /* SVME addr check */ 382 + #define X86_FEATURE_BUS_LOCK_THRESHOLD (15*32+29) /* Bus lock threshold */ 382 383 #define X86_FEATURE_IDLE_HLT (15*32+30) /* IDLE HLT intercept */ 383 384 384 385 /* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */ ··· 448 447 #define X86_FEATURE_DEBUG_SWAP (19*32+14) /* "debug_swap" SEV-ES full debug state swap support */ 449 448 #define X86_FEATURE_RMPREAD (19*32+21) /* RMPREAD instruction */ 450 449 #define X86_FEATURE_SEGMENTED_RMP (19*32+23) /* Segmented RMP support */ 450 + #define X86_FEATURE_ALLOWED_SEV_FEATURES (19*32+27) /* Allowed SEV Features */ 451 451 #define X86_FEATURE_SVSM (19*32+28) /* "svsm" SVSM present */ 452 452 #define X86_FEATURE_HV_INUSE_WR_ALLOWED (19*32+30) /* Allow Write to in-use hypervisor-owned pages */ 453 453 ··· 460 458 #define X86_FEATURE_AUTOIBRS (20*32+ 8) /* Automatic IBRS */ 461 459 #define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* SMM_CTL MSR is not present */
462 460 461 + #define X86_FEATURE_PREFETCHI (20*32+20) /* Prefetch Data/Instruction to Cache Level */ 463 462 #define X86_FEATURE_SBPB (20*32+27) /* Selective Branch Prediction Barrier */ 464 463 #define X86_FEATURE_IBPB_BRTYPE (20*32+28) /* MSR_PRED_CMD[IBPB] flushes all branch type predictions */ 465 464 #define X86_FEATURE_SRSO_NO (20*32+29) /* CPU is not affected by SRSO */ ··· 485 482 #define X86_FEATURE_AMD_HTR_CORES (21*32+ 6) /* Heterogeneous Core Topology */ 486 483 #define X86_FEATURE_AMD_WORKLOAD_CLASS (21*32+ 7) /* Workload Classification */ 487 484 #define X86_FEATURE_PREFER_YMM (21*32+ 8) /* Avoid ZMM registers due to downclocking */ 488 - #define X86_FEATURE_INDIRECT_THUNK_ITS (21*32+ 9) /* Use thunk for indirect branches in lower half of cacheline */ 485 + #define X86_FEATURE_APX (21*32+ 9) /* Advanced Performance Extensions */ 486 + #define X86_FEATURE_INDIRECT_THUNK_ITS (21*32+10) /* Use thunk for indirect branches in lower half of cacheline */ 489 487 490 488 /* 491 489 * BUG word(s) ··· 539 535 #define X86_BUG_BHI X86_BUG( 1*32+ 3) /* "bhi" CPU is affected by Branch History Injection */ 540 536 #define X86_BUG_IBPB_NO_RET X86_BUG( 1*32+ 4) /* "ibpb_no_ret" IBPB omits return target predictions */ 541 537 #define X86_BUG_SPECTRE_V2_USER X86_BUG( 1*32+ 5) /* "spectre_v2_user" CPU is affected by Spectre variant 2 attack between user processes */ 542 - #define X86_BUG_ITS X86_BUG( 1*32+ 6) /* "its" CPU is affected by Indirect Target Selection */ 543 - #define X86_BUG_ITS_NATIVE_ONLY X86_BUG( 1*32+ 7) /* "its_native_only" CPU is affected by ITS, VMX is not affected */ 538 + #define X86_BUG_OLD_MICROCODE X86_BUG( 1*32+ 6) /* "old_microcode" CPU has old microcode, it is surely vulnerable to something */ 539 + #define X86_BUG_ITS X86_BUG( 1*32+ 7) /* "its" CPU is affected by Indirect Target Selection */ 540 + #define X86_BUG_ITS_NATIVE_ONLY X86_BUG( 1*32+ 8) /* "its_native_only" CPU is affected by ITS, VMX is not affected */
541 + 544 542 #endif /* _ASM_X86_CPUFEATURES_H */
+10 -6
tools/arch/x86/include/asm/msr-index.h
··· 533 533 #define MSR_HWP_CAPABILITIES 0x00000771 534 534 #define MSR_HWP_REQUEST_PKG 0x00000772 535 535 #define MSR_HWP_INTERRUPT 0x00000773 536 - #define MSR_HWP_REQUEST 0x00000774 536 + #define MSR_HWP_REQUEST 0x00000774 537 537 #define MSR_HWP_STATUS 0x00000777 538 538 539 539 /* CPUID.6.EAX */ ··· 550 550 #define HWP_LOWEST_PERF(x) (((x) >> 24) & 0xff) 551 551 552 552 /* IA32_HWP_REQUEST */ 553 - #define HWP_MIN_PERF(x) (x & 0xff) 554 - #define HWP_MAX_PERF(x) ((x & 0xff) << 8) 553 + #define HWP_MIN_PERF(x) (x & 0xff) 554 + #define HWP_MAX_PERF(x) ((x & 0xff) << 8) 555 555 #define HWP_DESIRED_PERF(x) ((x & 0xff) << 16) 556 - #define HWP_ENERGY_PERF_PREFERENCE(x) (((unsigned long long) x & 0xff) << 24) 556 + #define HWP_ENERGY_PERF_PREFERENCE(x) (((u64)x & 0xff) << 24) 557 557 #define HWP_EPP_PERFORMANCE 0x00 558 558 #define HWP_EPP_BALANCE_PERFORMANCE 0x80 559 559 #define HWP_EPP_BALANCE_POWERSAVE 0xC0 560 560 #define HWP_EPP_POWERSAVE 0xFF 561 - #define HWP_ACTIVITY_WINDOW(x) ((unsigned long long)(x & 0xff3) << 32) 562 - #define HWP_PACKAGE_CONTROL(x) ((unsigned long long)(x & 0x1) << 42) 561 + #define HWP_ACTIVITY_WINDOW(x) ((u64)(x & 0xff3) << 32) 562 + #define HWP_PACKAGE_CONTROL(x) ((u64)(x & 0x1) << 42) 563 563 564 564 /* IA32_HWP_STATUS */ 565 565 #define HWP_GUARANTEED_CHANGE(x) (x & 0x1) ··· 602 602 /* V6 PMON MSR range */ 603 603 #define MSR_IA32_PMC_V6_GP0_CTR 0x1900 604 604 #define MSR_IA32_PMC_V6_GP0_CFG_A 0x1901 605 + #define MSR_IA32_PMC_V6_GP0_CFG_B 0x1902 606 + #define MSR_IA32_PMC_V6_GP0_CFG_C 0x1903 605 607 #define MSR_IA32_PMC_V6_FX0_CTR 0x1980 608 + #define MSR_IA32_PMC_V6_FX0_CFG_B 0x1982 609 + #define MSR_IA32_PMC_V6_FX0_CFG_C 0x1983 606 610 #define MSR_IA32_PMC_V6_STEP 4 607 611 608 612 /* KeyID partitioning between MKTME and TDX */
+71
tools/arch/x86/include/uapi/asm/kvm.h
··· 441 441 #define KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS (1 << 6) 442 442 #define KVM_X86_QUIRK_SLOT_ZAP_ALL (1 << 7) 443 443 #define KVM_X86_QUIRK_STUFF_FEATURE_MSRS (1 << 8) 444 + #define KVM_X86_QUIRK_IGNORE_GUEST_PAT (1 << 9) 444 445 445 446 #define KVM_STATE_NESTED_FORMAT_VMX 0 446 447 #define KVM_STATE_NESTED_FORMAT_SVM 1 ··· 931 930 #define KVM_X86_SEV_ES_VM 3 932 931 #define KVM_X86_SNP_VM 4 933 932 #define KVM_X86_TDX_VM 5 933 + 934 + /* Trust Domain eXtension sub-ioctl() commands. */ 935 + enum kvm_tdx_cmd_id { 936 + KVM_TDX_CAPABILITIES = 0, 937 + KVM_TDX_INIT_VM, 938 + KVM_TDX_INIT_VCPU, 939 + KVM_TDX_INIT_MEM_REGION, 940 + KVM_TDX_FINALIZE_VM, 941 + KVM_TDX_GET_CPUID, 942 + 943 + KVM_TDX_CMD_NR_MAX, 944 + }; 945 + 946 + struct kvm_tdx_cmd { 947 + /* enum kvm_tdx_cmd_id */ 948 + __u32 id; 949 + /* flags for sub-commend. If sub-command doesn't use this, set zero. */ 950 + __u32 flags; 951 + /* 952 + * data for each sub-command. An immediate or a pointer to the actual 953 + * data in process virtual address. If sub-command doesn't use it, 954 + * set zero. 955 + */ 956 + __u64 data; 957 + /* 958 + * Auxiliary error code. The sub-command may return TDX SEAMCALL 959 + * status code in addition to -Exxx. 960 + */ 961 + __u64 hw_error; 962 + }; 963 + 964 + struct kvm_tdx_capabilities { 965 + __u64 supported_attrs; 966 + __u64 supported_xfam; 967 + __u64 reserved[254]; 968 + 969 + /* Configurable CPUID bits for userspace */ 970 + struct kvm_cpuid2 cpuid; 971 + }; 972 + 973 + struct kvm_tdx_init_vm { 974 + __u64 attributes; 975 + __u64 xfam; 976 + __u64 mrconfigid[6]; /* sha384 digest */ 977 + __u64 mrowner[6]; /* sha384 digest */ 978 + __u64 mrownerconfig[6]; /* sha384 digest */ 979 + 980 + /* The total space for TD_PARAMS before the CPUIDs is 256 bytes */ 981 + __u64 reserved[12]; 982 + 983 + /* 984 + * Call KVM_TDX_INIT_VM before vcpu creation, thus before 985 + * KVM_SET_CPUID2. 
986 + * This configuration supersedes KVM_SET_CPUID2s for VCPUs because the 987 + * TDX module directly virtualizes those CPUIDs without VMM. The user 988 + * space VMM, e.g. qemu, should make KVM_SET_CPUID2 consistent with 989 + * those values. If it doesn't, KVM may have wrong idea of vCPUIDs of 990 + * the guest, and KVM may wrongly emulate CPUIDs or MSRs that the TDX 991 + * module doesn't virtualize. 992 + */ 993 + struct kvm_cpuid2 cpuid; 994 + }; 995 + 996 + #define KVM_TDX_MEASURE_MEMORY_REGION _BITULL(0) 997 + 998 + struct kvm_tdx_init_mem_region { 999 + __u64 source_addr; 1000 + __u64 gpa; 1001 + __u64 nr_pages; 1002 + }; 934 1003 935 1004 #endif /* _ASM_X86_KVM_H */
+2
tools/arch/x86/include/uapi/asm/svm.h
··· 95 95 #define SVM_EXIT_CR14_WRITE_TRAP 0x09e 96 96 #define SVM_EXIT_CR15_WRITE_TRAP 0x09f 97 97 #define SVM_EXIT_INVPCID 0x0a2 98 + #define SVM_EXIT_BUS_LOCK 0x0a5 98 99 #define SVM_EXIT_IDLE_HLT 0x0a6 99 100 #define SVM_EXIT_NPF 0x400 100 101 #define SVM_EXIT_AVIC_INCOMPLETE_IPI 0x401 ··· 226 225 { SVM_EXIT_CR4_WRITE_TRAP, "write_cr4_trap" }, \ 227 226 { SVM_EXIT_CR8_WRITE_TRAP, "write_cr8_trap" }, \ 228 227 { SVM_EXIT_INVPCID, "invpcid" }, \ 228 + { SVM_EXIT_BUS_LOCK, "buslock" }, \ 229 229 { SVM_EXIT_IDLE_HLT, "idle-halt" }, \ 230 230 { SVM_EXIT_NPF, "npf" }, \ 231 231 { SVM_EXIT_AVIC_INCOMPLETE_IPI, "avic_incomplete_ipi" }, \
+4 -1
tools/arch/x86/include/uapi/asm/vmx.h
··· 34 34 #define EXIT_REASON_TRIPLE_FAULT 2 35 35 #define EXIT_REASON_INIT_SIGNAL 3 36 36 #define EXIT_REASON_SIPI_SIGNAL 4 37 + #define EXIT_REASON_OTHER_SMI 6 37 38 38 39 #define EXIT_REASON_INTERRUPT_WINDOW 7 39 40 #define EXIT_REASON_NMI_WINDOW 8 ··· 93 92 #define EXIT_REASON_TPAUSE 68 94 93 #define EXIT_REASON_BUS_LOCK 74 95 94 #define EXIT_REASON_NOTIFY 75 95 + #define EXIT_REASON_TDCALL 77 96 96 97 97 #define VMX_EXIT_REASONS \ 98 98 { EXIT_REASON_EXCEPTION_NMI, "EXCEPTION_NMI" }, \ ··· 157 155 { EXIT_REASON_UMWAIT, "UMWAIT" }, \ 158 156 { EXIT_REASON_TPAUSE, "TPAUSE" }, \ 159 157 { EXIT_REASON_BUS_LOCK, "BUS_LOCK" }, \ 160 - { EXIT_REASON_NOTIFY, "NOTIFY" } 158 + { EXIT_REASON_NOTIFY, "NOTIFY" }, \ 159 + { EXIT_REASON_TDCALL, "TDCALL" } 161 160 162 161 #define VMX_EXIT_REASON_FLAGS \ 163 162 { VMX_EXIT_REASONS_FAILED_VMENTRY, "FAILED_VMENTRY" }
+1
tools/arch/x86/lib/memcpy_64.S
··· 40 40 EXPORT_SYMBOL(__memcpy) 41 41 42 42 SYM_FUNC_ALIAS_MEMFUNC(memcpy, __memcpy) 43 + SYM_PIC_ALIAS(memcpy) 43 44 EXPORT_SYMBOL(memcpy) 44 45 45 46 SYM_FUNC_START_LOCAL(memcpy_orig)
+1
tools/arch/x86/lib/memset_64.S
··· 42 42 EXPORT_SYMBOL(__memset) 43 43 44 44 SYM_FUNC_ALIAS_MEMFUNC(memset, __memset) 45 + SYM_PIC_ALIAS(memset) 45 46 EXPORT_SYMBOL(memset) 46 47 47 48 SYM_FUNC_START_LOCAL(memset_orig)
+55 -2
tools/include/linux/bits.h
··· 12 12 #define BIT_ULL_MASK(nr) (ULL(1) << ((nr) % BITS_PER_LONG_LONG)) 13 13 #define BIT_ULL_WORD(nr) ((nr) / BITS_PER_LONG_LONG) 14 14 #define BITS_PER_BYTE 8 15 + #define BITS_PER_TYPE(type) (sizeof(type) * BITS_PER_BYTE) 15 16 16 17 /* 17 18 * Create a contiguous bitmask starting at bit position @l and ending at ··· 20 19 * GENMASK_ULL(39, 21) gives us the 64bit vector 0x000000ffffe00000. 21 20 */ 22 21 #if !defined(__ASSEMBLY__) 22 + 23 + /* 24 + * Missing asm support 25 + * 26 + * GENMASK_U*() and BIT_U*() depend on BITS_PER_TYPE() which relies on sizeof(), 27 + * something not available in asm. Nevertheless, fixed width integers is a C 28 + * concept. Assembly code can rely on the long and long long versions instead. 29 + */ 30 + 23 31 #include <linux/build_bug.h> 24 32 #include <linux/compiler.h> 33 + #include <linux/overflow.h> 34 + 25 35 #define GENMASK_INPUT_CHECK(h, l) BUILD_BUG_ON_ZERO(const_true((l) > (h))) 26 - #else 36 + 37 + /* 38 + * Generate a mask for the specified type @t. Additional checks are made to 39 + * guarantee the value returned fits in that type, relying on 40 + * -Wshift-count-overflow compiler check to detect incompatible arguments. 41 + * For example, all these create build errors or warnings: 42 + * 43 + * - GENMASK(15, 20): wrong argument order 44 + * - GENMASK(72, 15): doesn't fit unsigned long 45 + * - GENMASK_U32(33, 15): doesn't fit in a u32 46 + */ 47 + #define GENMASK_TYPE(t, h, l) \ 48 + ((t)(GENMASK_INPUT_CHECK(h, l) + \ 49 + (type_max(t) << (l) & \ 50 + type_max(t) >> (BITS_PER_TYPE(t) - 1 - (h))))) 51 + 52 + #define GENMASK_U8(h, l) GENMASK_TYPE(u8, h, l) 53 + #define GENMASK_U16(h, l) GENMASK_TYPE(u16, h, l) 54 + #define GENMASK_U32(h, l) GENMASK_TYPE(u32, h, l) 55 + #define GENMASK_U64(h, l) GENMASK_TYPE(u64, h, l) 56 + 57 + /* 58 + * Fixed-type variants of BIT(), with additional checks like GENMASK_TYPE(). The
59 + following examples generate compiler warnings due to -Wshift-count-overflow: 60 + * 61 + * - BIT_U8(8) 62 + * - BIT_U32(-1) 63 + * - BIT_U32(40) 64 + */ 65 + #define BIT_INPUT_CHECK(type, nr) \ 66 + BUILD_BUG_ON_ZERO(const_true((nr) >= BITS_PER_TYPE(type))) 67 + 68 + #define BIT_TYPE(type, nr) ((type)(BIT_INPUT_CHECK(type, nr) + BIT_ULL(nr))) 69 + 70 + #define BIT_U8(nr) BIT_TYPE(u8, nr) 71 + #define BIT_U16(nr) BIT_TYPE(u16, nr) 72 + #define BIT_U32(nr) BIT_TYPE(u32, nr) 73 + #define BIT_U64(nr) BIT_TYPE(u64, nr) 74 + 75 + #else /* defined(__ASSEMBLY__) */ 76 + 27 77 /* 28 78 * BUILD_BUG_ON_ZERO is not available in h files included from asm files, 29 79 * disable the input check if that is the case. 30 80 */ 31 81 #define GENMASK_INPUT_CHECK(h, l) 0 32 - #endif 82 + 83 + #endif /* !defined(__ASSEMBLY__) */ 33 84 34 85 #define GENMASK(h, l) \ 35 86 (GENMASK_INPUT_CHECK(h, l) + __GENMASK(h, l))
+5 -5
tools/include/linux/build_bug.h
··· 4 4 5 5 #include <linux/compiler.h> 6 6 7 - #ifdef __CHECKER__ 8 - #define BUILD_BUG_ON_ZERO(e) (0) 9 - #else /* __CHECKER__ */ 10 7 /* 11 8 * Force a compilation error if condition is true, but also produce a 12 9 * result (of value 0 and type int), so the expression can be used 13 10 * e.g. in a structure initializer (or where-ever else comma expressions 14 11 * aren't permitted). 12 + * 13 + * Take an error message as an optional second argument. If omitted, 14 + * default to the stringification of the tested expression. 15 15 */ 16 - #define BUILD_BUG_ON_ZERO(e) ((int)(sizeof(struct { int:(-!!(e)); }))) 17 - #endif /* __CHECKER__ */ 16 + #define BUILD_BUG_ON_ZERO(e, ...) \ 17 + __BUILD_BUG_ON_ZERO_MSG(e, ##__VA_ARGS__, #e " is true") 18 18 19 19 /* Force a compilation error if a constant expression is not a power of 2 */ 20 20 #define __BUILD_BUG_ON_NOT_POWER_OF_2(n) \
+8
tools/include/linux/compiler.h
··· 244 244 __asm__ ("" : "=r" (var) : "0" (var)) 245 245 #endif 246 246 247 + #ifndef __BUILD_BUG_ON_ZERO_MSG 248 + #if defined(__clang__) 249 + #define __BUILD_BUG_ON_ZERO_MSG(e, msg, ...) ((int)(sizeof(struct { int:(-!!(e)); }))) 250 + #else 251 + #define __BUILD_BUG_ON_ZERO_MSG(e, msg, ...) ((int)sizeof(struct {_Static_assert(!(e), msg);})) 252 + #endif 253 + #endif 254 + 247 255 #endif /* __ASSEMBLY__ */ 248 256 249 257 #endif /* _TOOLS_LINUX_COMPILER_H */
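The build_bug.h and compiler.h hunks above let BUILD_BUG_ON_ZERO() carry an optional message through __BUILD_BUG_ON_ZERO_MSG(), defaulting to the stringified expression. A toy Python model of the contract (a compile-time failure modelled here as a runtime exception, purely for illustration):

```python
def build_bug_on_zero(cond, msg=None):
    """Model of BUILD_BUG_ON_ZERO(e, ...): evaluates to 0 so it can be used
    inside a larger expression, but the build (here: the call) fails when
    the condition is true, with an optional message."""
    if cond:
        raise AssertionError(msg if msg is not None else "expression is true")
    return 0

# usable as a zero-valued term, like the macro
assert 8 + build_bug_on_zero(3 > 4) == 8

# the message variant surfaces the caller's explanation
try:
    build_bug_on_zero(4 > 3, "h must not be below l")
except AssertionError as e:
    assert "h must not be below l" in str(e)
```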
+4
tools/include/uapi/drm/drm.h
··· 905 905 }; 906 906 907 907 #define DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE (1 << 0) 908 + #define DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_TIMELINE (1 << 1) 908 909 #define DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE (1 << 0) 910 + #define DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_TIMELINE (1 << 1) 909 911 struct drm_syncobj_handle { 910 912 __u32 handle; 911 913 __u32 flags; 912 914 913 915 __s32 fd; 914 916 __u32 pad; 917 + 918 + __u64 point; 915 919 }; 916 920 917 921 struct drm_syncobj_transfer {
+4 -2
tools/include/uapi/linux/fscrypt.h
··· 119 119 */ 120 120 struct fscrypt_provisioning_key_payload { 121 121 __u32 type; 122 - __u32 __reserved; 122 + __u32 flags; 123 123 __u8 raw[]; 124 124 }; 125 125 ··· 128 128 struct fscrypt_key_specifier key_spec; 129 129 __u32 raw_size; 130 130 __u32 key_id; 131 - __u32 __reserved[8]; 131 + #define FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED 0x00000001 132 + __u32 flags; 133 + __u32 __reserved[7]; 132 134 __u8 raw[]; 133 135 }; 134 136
+4
tools/include/uapi/linux/kvm.h
··· 375 375 #define KVM_SYSTEM_EVENT_WAKEUP 4 376 376 #define KVM_SYSTEM_EVENT_SUSPEND 5 377 377 #define KVM_SYSTEM_EVENT_SEV_TERM 6 378 + #define KVM_SYSTEM_EVENT_TDX_FATAL 7 378 379 __u32 type; 379 380 __u32 ndata; 380 381 union { ··· 931 930 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237 932 931 #define KVM_CAP_X86_GUEST_MODE 238 933 932 #define KVM_CAP_ARM_WRITABLE_IMP_ID_REGS 239 933 + #define KVM_CAP_ARM_EL2 240 934 + #define KVM_CAP_ARM_EL2_E2H0 241 935 + #define KVM_CAP_RISCV_MP_STATE_RESET 242 934 936 935 937 struct kvm_irq_routing_irqchip { 936 938 __u32 irqchip;
+6 -2
tools/include/uapi/linux/stat.h
··· 182 182 /* File offset alignment for direct I/O reads */ 183 183 __u32 stx_dio_read_offset_align; 184 184 185 - /* 0xb8 */ 186 - __u64 __spare3[9]; /* Spare space for future expansion */ 185 + /* Optimised max atomic write unit in bytes */ 186 + __u32 stx_atomic_write_unit_max_opt; 187 + __u32 __spare2[1]; 188 + 189 + /* 0xc0 */ 190 + __u64 __spare3[8]; /* Spare space for future expansion */ 187 191 188 192 /* 0x100 */ 189 193 };
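The struct statx change above (mirrored later in the perf copy of stat.h) is pure offset bookkeeping: two new __u32 fields occupy 8 bytes at offset 0xb8, so __spare3 shrinks from 9 to 8 __u64s and its offset comment moves from 0xb8 to 0xc0, leaving the struct end at 0x100. Checking the arithmetic:

```python
U32, U64 = 4, 8  # field sizes in bytes

# old layout: __spare3[9] at 0xb8 runs to the 0x100 end marker
assert 0xb8 + 9 * U64 == 0x100

# new layout: stx_atomic_write_unit_max_opt + __spare2[1] take the
# 8 bytes at 0xb8, pushing __spare3 (now [8]) to 0xc0
spare3_off = 0xb8 + 2 * U32
assert spare3_off == 0xc0
assert spare3_off + 8 * U64 == 0x100  # total size unchanged
```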
+3
tools/lib/bpf/btf_dump.c
··· 226 226 size_t bkt; 227 227 struct hashmap_entry *cur; 228 228 229 + if (!map) 230 + return; 231 + 229 232 hashmap__for_each_entry(map, cur, bkt) 230 233 free((void *)cur->pkey); 231 234
+7 -3
tools/lib/bpf/libbpf.c
··· 597 597 int sym_idx; 598 598 int btf_id; 599 599 int sec_btf_id; 600 - const char *name; 600 + char *name; 601 601 char *essent_name; 602 602 bool is_set; 603 603 bool is_weak; ··· 4259 4259 return ext->btf_id; 4260 4260 } 4261 4261 t = btf__type_by_id(obj->btf, ext->btf_id); 4262 - ext->name = btf__name_by_offset(obj->btf, t->name_off); 4262 + ext->name = strdup(btf__name_by_offset(obj->btf, t->name_off)); 4263 + if (!ext->name) 4264 + return -ENOMEM; 4263 4265 ext->sym_idx = i; 4264 4266 ext->is_weak = ELF64_ST_BIND(sym->st_info) == STB_WEAK; 4265 4267 ··· 9140 9138 zfree(&obj->btf_custom_path); 9141 9139 zfree(&obj->kconfig); 9142 9140 9143 - for (i = 0; i < obj->nr_extern; i++) 9141 + for (i = 0; i < obj->nr_extern; i++) { 9142 + zfree(&obj->externs[i].name); 9144 9143 zfree(&obj->externs[i].essent_name); 9144 + } 9145 9145 9146 9146 zfree(&obj->externs); 9147 9147 obj->nr_extern = 0;
+17 -11
tools/net/ynl/pyynl/lib/ynl.py
··· 231 231 self.extack['unknown'].append(extack) 232 232 233 233 if attr_space: 234 - # We don't have the ability to parse nests yet, so only do global 235 - if 'miss-type' in self.extack and 'miss-nest' not in self.extack: 236 - miss_type = self.extack['miss-type'] 237 - if miss_type in attr_space.attrs_by_val: 238 - spec = attr_space.attrs_by_val[miss_type] 239 - self.extack['miss-type'] = spec['name'] 240 - if 'doc' in spec: 241 - self.extack['miss-type-doc'] = spec['doc'] 234 + self.annotate_extack(attr_space) 242 235 243 236 def _decode_policy(self, raw): 244 237 policy = {} ··· 257 264 policy['mask'] = attr.as_scalar('u64') 258 265 return policy 259 266 267 + def annotate_extack(self, attr_space): 268 + """ Make extack more human friendly with attribute information """ 269 + 270 + # We don't have the ability to parse nests yet, so only do global 271 + if 'miss-type' in self.extack and 'miss-nest' not in self.extack: 272 + miss_type = self.extack['miss-type'] 273 + if miss_type in attr_space.attrs_by_val: 274 + spec = attr_space.attrs_by_val[miss_type] 275 + self.extack['miss-type'] = spec['name'] 276 + if 'doc' in spec: 277 + self.extack['miss-type-doc'] = spec['doc'] 278 + 260 279 def cmd(self): 261 280 return self.nl_type 262 281 ··· 282 277 283 278 284 279 class NlMsgs: 285 - def __init__(self, data, attr_space=None): 280 + def __init__(self, data): 286 281 self.msgs = [] 287 282 288 283 offset = 0 289 284 while offset < len(data): 290 - msg = NlMsg(data, offset, attr_space=attr_space) 285 + msg = NlMsg(data, offset) 291 286 offset += msg.nl_len 292 287 self.msgs.append(msg) 293 288 ··· 1039 1034 op_rsp = [] 1040 1035 while not done: 1041 1036 reply = self.sock.recv(self._recv_size) 1042 - nms = NlMsgs(reply, attr_space=op.attr_set) 1037 + nms = NlMsgs(reply) 1043 1038 self._recv_dbg_print(reply, nms) 1044 1039 for nl_msg in nms: 1045 1040 if nl_msg.nl_seq in reqs_by_seq: 1046 1041 (op, vals, req_msg, req_flags) = reqs_by_seq[nl_msg.nl_seq] 1047 1042 if nl_msg.extack:
1043 + nl_msg.annotate_extack(op.attr_set) 1048 1044 self._decode_extack(req_msg, op, nl_msg.extack, vals) 1049 1045 else: 1050 1046 op = None
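The ynl.py refactor above moves the miss-type prettification out of NlMsg.__init__ into an annotate_extack() method, called only once the reply has been matched to a request (so the correct attribute space is known). A standalone sketch of that lookup, with a plain dict standing in for the real attr-space object:

```python
def annotate_extack(extack, attrs_by_val):
    """Resolve a numeric 'miss-type' to an attribute name, as in the hunk above."""
    # Nested attributes aren't parsed yet, so only handle the global space.
    if 'miss-type' in extack and 'miss-nest' not in extack:
        spec = attrs_by_val.get(extack['miss-type'])
        if spec is not None:
            extack['miss-type'] = spec['name']
            if 'doc' in spec:
                extack['miss-type-doc'] = spec['doc']

extack = {'miss-type': 3}
annotate_extack(extack, {3: {'name': 'ifindex', 'doc': 'interface index'}})
assert extack['miss-type'] == 'ifindex'
assert extack['miss-type-doc'] == 'interface index'
```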
+41 -16
tools/perf/Documentation/perf-amd-ibs.txt
··· 171 171 # perf mem report 172 172 173 173 A normal perf mem report output will provide detailed memory access profile. 174 - However, it can also be aggregated based on output fields. For example: 174 + New output fields will show related access info together. For example: 175 175 176 - # perf mem report -F mem,sample,snoop 177 - Samples: 3M of event 'ibs_op//', Event count (approx.): 23524876 178 - Memory access Samples Snoop 179 - N/A 1903343 N/A 180 - L1 hit 1056754 N/A 181 - L2 hit 75231 N/A 182 - L3 hit 9496 HitM 183 - L3 hit 2270 N/A 184 - RAM hit 8710 N/A 185 - Remote node, same socket RAM hit 3241 N/A 186 - Remote core, same node Any cache hit 1572 HitM 187 - Remote core, same node Any cache hit 514 N/A 188 - Remote node, same socket Any cache hit 1216 HitM 189 - Remote node, same socket Any cache hit 350 N/A 190 - Uncached hit 18 N/A 176 + # perf mem report -F overhead,cache,snoop,comm 177 + ... 178 + # Samples: 92K of event 'ibs_op//' 179 + # Total weight : 531104 180 + # 181 + # ---------- Cache ----------- --- Snoop ---- 182 + # Overhead L1 L2 L1-buf Other HitM Other Command 183 + # ........ ............................ .............. .......... 184 + # 185 + 76.07% 5.8% 35.7% 0.0% 34.6% 23.3% 52.8% cc1 186 + 5.79% 0.2% 0.0% 0.0% 5.6% 0.1% 5.7% make 187 + 5.78% 0.1% 4.4% 0.0% 1.2% 0.5% 5.3% gcc 188 + 5.33% 0.3% 3.9% 0.0% 1.1% 0.2% 5.2% as 189 + 5.00% 0.1% 3.8% 0.0% 1.0% 0.3% 4.7% sh 190 + 1.56% 0.1% 0.1% 0.0% 1.4% 0.6% 0.9% ld 191 + 0.28% 0.1% 0.0% 0.0% 0.2% 0.1% 0.2% pkg-config 192 + 0.09% 0.0% 0.0% 0.0% 0.1% 0.0% 0.1% git 193 + 0.03% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% rm 194 + ... 195 + 196 + Also, it can be aggregated based on various memory access info using the 197 + sort keys. For example: 198 + 199 + # perf mem report -s mem,snoop 200 + ... 201 + # Samples: 92K of event 'ibs_op//' 202 + # Total weight : 531104 203 + # Sort order : mem,snoop 204 + # 205 + # Overhead Samples Memory access Snoop 206 + # ........ ............ ....................................... ............
207 + # 208 + 47.99% 1509 L2 hit N/A 209 + 25.08% 338 core, same node Any cache hit HitM 210 + 10.24% 54374 N/A N/A 211 + 6.77% 35938 L1 hit N/A 212 + 6.39% 101 core, same node Any cache hit N/A 213 + 3.50% 69 RAM hit N/A 214 + 0.03% 158 LFB/MAB hit N/A 215 + 0.00% 2 Uncached hit N/A 191 216 192 217 Please refer to their man page for more detail. 193 218
+50
tools/perf/Documentation/perf-mem.txt
··· 119 119 And the default sort keys are changed to local_weight, mem, sym, dso, 120 120 symbol_daddr, dso_daddr, snoop, tlb, locked, blocked, local_ins_lat. 121 121 122 + -F:: 123 + --fields=:: 124 + Specify output field - multiple keys can be specified in CSV format. 125 + Please see linkperf:perf-report[1] for details. 126 + 127 + In addition to the default fields, 'perf mem report' will provide the 128 + following fields to break down sample periods. 129 + 130 + - op: operation in the sample instruction (load, store, prefetch, ...) 131 + - cache: location in CPU cache (L1, L2, ...) where the sample hit 132 + - mem: location in memory or other places the sample hit 133 + - dtlb: location in Data TLB (L1, L2) where the sample hit 134 + - snoop: snoop result for the sampled data access 135 + 136 + Please take a look at the OUTPUT FIELD SELECTION section for caveats. 137 + 122 138 -T:: 123 139 --type-profile:: 124 140 Show data-type profile result instead of code symbols. This requires ··· 171 155 $ perf mem report -F overhead,symbol 172 156 90% [k] memcpy 173 157 10% [.] strcmp 158 + 159 + OUTPUT FIELD SELECTION 160 + ---------------------- 161 + "perf mem report" adds a number of new output fields specific to data source 162 + information in the sample. Some of them have the same name with the existing 163 + sort keys ("mem" and "snoop"). So unlike other fields and sort keys, they'll 164 + behave differently when it's used by -F/--fields or -s/--sort. 165 + 166 + Using those two as output fields will aggregate samples altogether and show 167 + breakdown. 168 + 169 + $ perf mem report -F mem,snoop 170 + ... 171 + # ------ Memory ------- --- Snoop ---- 172 + # RAM Uncach Other HitM Other 173 + # ..................... .............. 174 + # 175 + 3.5% 0.0% 96.5% 25.1% 74.9% 176 + 177 + But using the same name for sort keys will aggregate samples for each type 178 + separately. 
179 + 180 + $ perf mem report -s mem,snoop 181 + # Overhead Samples Memory access Snoop 182 + # ........ ............ ....................................... ............ 183 + # 184 + 47.99% 1509 L2 hit N/A 185 + 25.08% 338 core, same node Any cache hit HitM 186 + 10.24% 54374 N/A N/A 187 + 6.77% 35938 L1 hit N/A 188 + 6.39% 101 core, same node Any cache hit N/A 189 + 3.50% 69 RAM hit N/A 190 + 0.03% 158 LFB/MAB hit N/A 191 + 0.00% 2 Uncached hit N/A 174 192 175 193 SEE ALSO 176 194 --------
-1
tools/perf/bench/futex-hash.c
··· 18 18 #include <stdlib.h> 19 19 #include <linux/compiler.h> 20 20 #include <linux/kernel.h> 21 - #include <linux/prctl.h> 22 21 #include <linux/zalloc.h> 23 22 #include <sys/time.h> 24 23 #include <sys/mman.h>
+8 -1
tools/perf/bench/futex.c
··· 2 2 #include <err.h> 3 3 #include <stdio.h> 4 4 #include <stdlib.h> 5 - #include <linux/prctl.h> 6 5 #include <sys/prctl.h> 7 6 8 7 #include "futex.h" 8 + 9 + #ifndef PR_FUTEX_HASH 10 + #define PR_FUTEX_HASH 78 11 + # define PR_FUTEX_HASH_SET_SLOTS 1 12 + # define FH_FLAG_IMMUTABLE (1ULL << 0) 13 + # define PR_FUTEX_HASH_GET_SLOTS 2 14 + # define PR_FUTEX_HASH_GET_IMMUTABLE 3 15 + #endif // PR_FUTEX_HASH 9 16 10 17 void futex_set_nbuckets_param(struct bench_futex_parameters *params) 11 18 {
+1 -1
tools/perf/check-headers.sh
··· 186 186 # diff with extra ignore lines 187 187 check arch/x86/lib/memcpy_64.S '-I "^EXPORT_SYMBOL" -I "^#include <asm/export.h>" -I"^SYM_FUNC_START\(_LOCAL\)*(memcpy_\(erms\|orig\))" -I"^#include <linux/cfi_types.h>"' 188 188 check arch/x86/lib/memset_64.S '-I "^EXPORT_SYMBOL" -I "^#include <asm/export.h>" -I"^SYM_FUNC_START\(_LOCAL\)*(memset_\(erms\|orig\))"' 189 - check arch/x86/include/asm/amd/ibs.h '-I "^#include [<\"]\(asm/\)*msr-index.h"' 189 + check arch/x86/include/asm/amd/ibs.h '-I "^#include .*/msr-index.h"' 190 190 check arch/arm64/include/asm/cputype.h '-I "^#include [<\"]\(asm/\)*sysreg.h"' 191 191 check include/linux/unaligned.h '-I "^#include <linux/unaligned/packed_struct.h>" -I "^#include <asm/byteorder.h>" -I "^#pragma GCC diagnostic"' 192 192 check include/uapi/asm-generic/mman.h '-I "^#include <\(uapi/\)*asm-generic/mman-common\(-tools\)*.h>"'
+10 -2
tools/perf/tests/shell/stat+event_uniquifying.sh
··· 9 9 err=0 10 10 11 11 test_event_uniquifying() { 12 - # We use `clockticks` to verify the uniquify behavior. 12 + # We use `clockticks` in `uncore_imc` to verify the uniquify behavior. 13 + pmu="uncore_imc" 13 14 event="clockticks" 14 15 15 16 # If the `-A` option is added, the event should be uniquified. ··· 44 43 echo "stat event uniquifying test" 45 44 uniquified_event_array=() 46 45 46 + # Skip if the machine does not have `uncore_imc` device. 47 + if ! ${perf_tool} list pmu | grep -q ${pmu}; then 48 + echo "Target does not support PMU ${pmu} [Skipped]" 49 + err=2 50 + return 51 + fi 52 + 47 53 # Check how many uniquified events. 48 54 while IFS= read -r line; do 49 55 uniquified_event=$(echo "$line" | awk '{print $1}') 50 56 uniquified_event_array+=("${uniquified_event}") 51 - done < <(${perf_tool} list -v ${event} | grep "\[Kernel PMU event\]") 57 + done < <(${perf_tool} list -v ${event} | grep ${pmu}) 52 58 53 59 perf_command="${perf_tool} stat -e $event -A -o ${stat_output} -- true" 54 60 $perf_command
+1
tools/perf/tests/tests-scripts.c
··· 260 260 continue; /* Skip scripts that have a separate driver. */ 261 261 fd = openat(dir_fd, ent->d_name, O_PATH); 262 262 append_scripts_in_dir(fd, result, result_sz); 263 + close(fd); 263 264 } 264 265 for (i = 0; i < n_dirs; i++) /* Clean up */ 265 266 zfree(&entlist[i]);
+1 -1
tools/perf/trace/beauty/include/linux/socket.h
··· 168 168 return __cmsg_nxthdr(__msg->msg_control, __msg->msg_controllen, __cmsg); 169 169 } 170 170 171 - static inline size_t msg_data_left(struct msghdr *msg) 171 + static inline size_t msg_data_left(const struct msghdr *msg) 172 172 { 173 173 return iov_iter_count(&msg->msg_iter); 174 174 }
+1
tools/perf/trace/beauty/include/uapi/linux/fs.h
··· 361 361 #define PAGE_IS_PFNZERO (1 << 5) 362 362 #define PAGE_IS_HUGE (1 << 6) 363 363 #define PAGE_IS_SOFT_DIRTY (1 << 7) 364 + #define PAGE_IS_GUARD (1 << 8) 364 365 365 366 /* 366 367 * struct page_region - Page region with flags
+7
tools/perf/trace/beauty/include/uapi/linux/prctl.h
··· 364 364 # define PR_TIMER_CREATE_RESTORE_IDS_ON 1 365 365 # define PR_TIMER_CREATE_RESTORE_IDS_GET 2 366 366 367 + /* FUTEX hash management */ 368 + #define PR_FUTEX_HASH 78 369 + # define PR_FUTEX_HASH_SET_SLOTS 1 370 + # define FH_FLAG_IMMUTABLE (1ULL << 0) 371 + # define PR_FUTEX_HASH_GET_SLOTS 2 372 + # define PR_FUTEX_HASH_GET_IMMUTABLE 3 373 + 367 374 #endif /* _LINUX_PRCTL_H */
+6 -2
tools/perf/trace/beauty/include/uapi/linux/stat.h
··· 182 182 /* File offset alignment for direct I/O reads */ 183 183 __u32 stx_dio_read_offset_align; 184 184 185 - /* 0xb8 */ 186 - __u64 __spare3[9]; /* Spare space for future expansion */ 185 + /* Optimised max atomic write unit in bytes */ 186 + __u32 stx_atomic_write_unit_max_opt; 187 + __u32 __spare2[1]; 188 + 189 + /* 0xc0 */ 190 + __u64 __spare3[8]; /* Spare space for future expansion */ 187 191 188 192 /* 0x100 */ 189 193 };
+4
tools/perf/util/include/linux/linkage.h
··· 132 132 SYM_TYPED_START(name, SYM_L_GLOBAL, SYM_A_ALIGN) 133 133 #endif 134 134 135 + #ifndef SYM_PIC_ALIAS 136 + #define SYM_PIC_ALIAS(sym) SYM_ALIAS(__pi_ ## sym, sym, SYM_T_FUNC, SYM_L_GLOBAL) 137 + #endif 138 + 135 139 #endif /* PERF_LINUX_LINKAGE_H_ */
+1
tools/perf/util/print-events.c
··· 268 268 ret = evsel__open(evsel, NULL, tmap) >= 0; 269 269 } 270 270 271 + evsel__close(evsel); 271 272 evsel__delete(evsel); 272 273 } 273 274
-1
tools/testing/selftests/bpf/.gitignore
··· 21 21 flow_dissector_load 22 22 test_tcpnotify_user 23 23 test_libbpf 24 - test_sysctl 25 24 xdping 26 25 test_cpp 27 26 *.d
+2 -3
tools/testing/selftests/bpf/Makefile
··· 73 73 # Order correspond to 'make run_tests' order 74 74 TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_progs \ 75 75 test_sockmap \ 76 - test_tcpnotify_user test_sysctl \ 76 + test_tcpnotify_user \ 77 77 test_progs-no_alu32 78 78 TEST_INST_SUBDIRS := no_alu32 79 79 ··· 220 220 $(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)") 221 221 endif 222 222 223 - # Define simple and short `make test_progs`, `make test_sysctl`, etc targets 223 + # Define simple and short `make test_progs`, `make test_maps`, etc targets 224 224 # to build individual tests. 225 225 # NOTE: Semicolon at the end is critical to override lib.mk's default static 226 226 # rule for binaries. ··· 329 329 $(OUTPUT)/test_sockmap: $(CGROUP_HELPERS) $(TESTING_HELPERS) 330 330 $(OUTPUT)/test_tcpnotify_user: $(CGROUP_HELPERS) $(TESTING_HELPERS) $(TRACE_HELPERS) 331 331 $(OUTPUT)/test_sock_fields: $(CGROUP_HELPERS) $(TESTING_HELPERS) 332 - $(OUTPUT)/test_sysctl: $(CGROUP_HELPERS) $(TESTING_HELPERS) 333 332 $(OUTPUT)/test_tag: $(TESTING_HELPERS) 334 333 $(OUTPUT)/test_lirc_mode2_user: $(TESTING_HELPERS) 335 334 $(OUTPUT)/xdping: $(TESTING_HELPERS)
+16
tools/testing/selftests/bpf/progs/test_global_map_resize.c
··· 32 32 33 33 int percpu_arr[1] SEC(".data.percpu_arr"); 34 34 35 + /* at least one extern is included, to ensure that a specific 36 + * regression is tested whereby resizing resulted in a free-after-use 37 + * bug after type information is invalidated by the resize operation. 38 + * 39 + * There isn't a particularly good API to test for this specific condition, 40 + * but by having externs for the resizing tests it will cover this path. 41 + */ 42 + extern int LINUX_KERNEL_VERSION __kconfig; 43 + long version_sink; 44 + 35 45 SEC("tp/syscalls/sys_enter_getpid") 36 46 int bss_array_sum(void *ctx) 37 47 { ··· 53 43 54 44 for (size_t i = 0; i < bss_array_len; ++i) 55 45 sum += array[i]; 46 + 47 + /* see above; ensure this is not optimized out */ 48 + version_sink = LINUX_KERNEL_VERSION; 56 49 57 50 return 0; 58 51 } ··· 71 58 72 59 for (size_t i = 0; i < data_array_len; ++i) 73 60 sum += my_array[i]; 61 + 62 + /* see above; ensure this is not optimized out */ 63 + version_sink = LINUX_KERNEL_VERSION; 74 64 75 65 return 0; 76 66 }
+18
tools/testing/selftests/bpf/progs/verifier_vfs_accept.c
··· 2 2 /* Copyright (c) 2024 Google LLC. */ 3 3 4 4 #include <vmlinux.h> 5 + #include <errno.h> 5 6 #include <bpf/bpf_helpers.h> 6 7 #include <bpf/bpf_tracing.h> 7 8 ··· 80 79 path = &file->f_path; 81 80 ret = bpf_path_d_path(path, buf, sizeof(buf)); 82 81 __sink(ret); 82 + return 0; 83 + } 84 + 85 + SEC("lsm.s/inode_rename") 86 + __success 87 + int BPF_PROG(inode_rename, struct inode *old_dir, struct dentry *old_dentry, 88 + struct inode *new_dir, struct dentry *new_dentry, 89 + unsigned int flags) 90 + { 91 + struct inode *inode = new_dentry->d_inode; 92 + ino_t ino; 93 + 94 + if (!inode) 95 + return 0; 96 + ino = inode->i_ino; 97 + if (ino == 0) 98 + return -EACCES; 83 99 return 0; 84 100 } 85 101
+15
tools/testing/selftests/bpf/progs/verifier_vfs_reject.c
··· 2 2 /* Copyright (c) 2024 Google LLC. */ 3 3 4 4 #include <vmlinux.h> 5 + #include <errno.h> 5 6 #include <bpf/bpf_helpers.h> 6 7 #include <bpf/bpf_tracing.h> 7 8 #include <linux/limits.h> ··· 159 158 return 0; 160 159 } 161 160 161 + SEC("lsm.s/inode_rename") 162 + __failure __msg("invalid mem access 'trusted_ptr_or_null_'") 163 + int BPF_PROG(inode_rename, struct inode *old_dir, struct dentry *old_dentry, 164 + struct inode *new_dir, struct dentry *new_dentry, 165 + unsigned int flags) 166 + { 167 + struct inode *inode = new_dentry->d_inode; 168 + ino_t ino; 169 + 170 + ino = inode->i_ino; 171 + if (ino == 0) 172 + return -EACCES; 173 + return 0; 174 + } 162 175 char _license[] SEC("license") = "GPL";
+53 -52
tools/testing/selftests/bpf/test_lru_map.c
··· 138 138 return ret; 139 139 } 140 140 141 + /* Derive target_free from map_size, same as bpf_common_lru_populate */ 142 + static unsigned int __tgt_size(unsigned int map_size) 143 + { 144 + return (map_size / nr_cpus) / 2; 145 + } 146 + 147 + /* Inverse of how bpf_common_lru_populate derives target_free from map_size. */ 148 + static unsigned int __map_size(unsigned int tgt_free) 149 + { 150 + return tgt_free * nr_cpus * 2; 151 + } 152 + 141 153 /* Size of the LRU map is 2 142 154 * Add key=1 (+1 key) 143 155 * Add key=2 (+1 key) ··· 243 231 printf("Pass\n"); 244 232 } 245 233 246 - /* Size of the LRU map is 1.5*tgt_free 247 - * Insert 1 to tgt_free (+tgt_free keys) 248 - * Lookup 1 to tgt_free/2 249 - * Insert 1+tgt_free to 2*tgt_free (+tgt_free keys) 250 - * => 1+tgt_free/2 to LOCALFREE_TARGET will be removed by LRU 234 + /* Verify that unreferenced elements are recycled before referenced ones. 235 + * Insert elements. 236 + * Reference a subset of these. 237 + * Insert more, enough to trigger recycling. 238 + * Verify that unreferenced are recycled. 
251 239 */ 252 240 static void test_lru_sanity1(int map_type, int map_flags, unsigned int tgt_free) 253 241 { ··· 269 257 batch_size = tgt_free / 2; 270 258 assert(batch_size * 2 == tgt_free); 271 259 272 - map_size = tgt_free + batch_size; 260 + map_size = __map_size(tgt_free) + batch_size; 273 261 lru_map_fd = create_map(map_type, map_flags, map_size); 274 262 assert(lru_map_fd != -1); 275 263 ··· 278 266 279 267 value[0] = 1234; 280 268 281 - /* Insert 1 to tgt_free (+tgt_free keys) */ 282 - end_key = 1 + tgt_free; 269 + /* Insert map_size - batch_size keys */ 270 + end_key = 1 + __map_size(tgt_free); 283 271 for (key = 1; key < end_key; key++) 284 272 assert(!bpf_map_update_elem(lru_map_fd, &key, value, 285 273 BPF_NOEXIST)); 286 274 287 - /* Lookup 1 to tgt_free/2 */ 275 + /* Lookup 1 to batch_size */ 288 276 end_key = 1 + batch_size; 289 277 for (key = 1; key < end_key; key++) { 290 278 assert(!bpf_map_lookup_elem_with_ref_bit(lru_map_fd, key, value)); ··· 292 280 BPF_NOEXIST)); 293 281 } 294 282 295 - /* Insert 1+tgt_free to 2*tgt_free 296 - * => 1+tgt_free/2 to LOCALFREE_TARGET will be 283 + /* Insert another map_size - batch_size keys 284 + * Map will contain 1 to batch_size plus these latest, i.e., 285 + * => previous 1+batch_size to map_size - batch_size will have been 297 286 * removed by LRU 298 287 */ 299 - key = 1 + tgt_free; 300 - end_key = key + tgt_free; 288 + key = 1 + __map_size(tgt_free); 289 + end_key = key + __map_size(tgt_free); 301 290 for (; key < end_key; key++) { 302 291 assert(!bpf_map_update_elem(lru_map_fd, &key, value, 303 292 BPF_NOEXIST)); ··· 314 301 printf("Pass\n"); 315 302 } 316 303 317 - /* Size of the LRU map 1.5 * tgt_free 318 - * Insert 1 to tgt_free (+tgt_free keys) 319 - * Update 1 to tgt_free/2 320 - * => The original 1 to tgt_free/2 will be removed due to 321 - * the LRU shrink process 322 - * Re-insert 1 to tgt_free/2 again and do a lookup immeidately 323 - * Insert 1+tgt_free to tgt_free*3/2 324 - * Insert 
1+tgt_free*3/2 to tgt_free*5/2 325 - * => Key 1+tgt_free to tgt_free*3/2 326 - * will be removed from LRU because it has never 327 - * been lookup and ref bit is not set 304 + /* Verify that insertions exceeding map size will recycle the oldest. 305 + * Verify that unreferenced elements are recycled before referenced. 328 306 */ 329 307 static void test_lru_sanity2(int map_type, int map_flags, unsigned int tgt_free) 330 308 { ··· 338 334 batch_size = tgt_free / 2; 339 335 assert(batch_size * 2 == tgt_free); 340 336 341 - map_size = tgt_free + batch_size; 337 + map_size = __map_size(tgt_free) + batch_size; 342 338 lru_map_fd = create_map(map_type, map_flags, map_size); 343 339 assert(lru_map_fd != -1); 344 340 ··· 347 343 348 344 value[0] = 1234; 349 345 350 - /* Insert 1 to tgt_free (+tgt_free keys) */ 351 - end_key = 1 + tgt_free; 346 + /* Insert map_size - batch_size keys */ 347 + end_key = 1 + __map_size(tgt_free); 352 348 for (key = 1; key < end_key; key++) 353 349 assert(!bpf_map_update_elem(lru_map_fd, &key, value, 354 350 BPF_NOEXIST)); ··· 361 357 * shrink the inactive list to get tgt_free 362 358 * number of free nodes. 363 359 * 364 - * Hence, the oldest key 1 to tgt_free/2 365 - * are removed from the LRU list. 360 + * Hence, the oldest key is removed from the LRU list. 366 361 */ 367 362 key = 1; 368 363 if (map_type == BPF_MAP_TYPE_LRU_PERCPU_HASH) { ··· 373 370 BPF_EXIST)); 374 371 } 375 372 376 - /* Re-insert 1 to tgt_free/2 again and do a lookup 377 - * immeidately. 373 + /* Re-insert 1 to batch_size again and do a lookup immediately. 
378 374 */ 379 375 end_key = 1 + batch_size; 380 376 value[0] = 4321; ··· 389 387 390 388 value[0] = 1234; 391 389 392 - /* Insert 1+tgt_free to tgt_free*3/2 */ 393 - end_key = 1 + tgt_free + batch_size; 394 - for (key = 1 + tgt_free; key < end_key; key++) 390 + /* Insert batch_size new elements */ 391 + key = 1 + __map_size(tgt_free); 392 + end_key = key + batch_size; 393 + for (; key < end_key; key++) 395 394 /* These newly added but not referenced keys will be 396 395 * gone during the next LRU shrink. 397 396 */ 398 397 assert(!bpf_map_update_elem(lru_map_fd, &key, value, 399 398 BPF_NOEXIST)); 400 399 401 - /* Insert 1+tgt_free*3/2 to tgt_free*5/2 */ 402 - end_key = key + tgt_free; 400 + /* Insert map_size - batch_size elements */ 401 + end_key += __map_size(tgt_free); 403 402 for (; key < end_key; key++) { 404 403 assert(!bpf_map_update_elem(lru_map_fd, &key, value, 405 404 BPF_NOEXIST)); ··· 416 413 printf("Pass\n"); 417 414 } 418 415 419 - /* Size of the LRU map is 2*tgt_free 420 - * It is to test the active/inactive list rotation 421 - * Insert 1 to 2*tgt_free (+2*tgt_free keys) 422 - * Lookup key 1 to tgt_free*3/2 423 - * Add 1+2*tgt_free to tgt_free*5/2 (+tgt_free/2 keys) 424 - * => key 1+tgt_free*3/2 to 2*tgt_free are removed from LRU 416 + /* Test the active/inactive list rotation 417 + * 418 + * Fill the whole map, deplete the free list. 419 + * Reference all except the last lru->target_free elements. 420 + * Insert lru->target_free new elements. This triggers one shrink. 421 + * Verify that the non-referenced elements are replaced. 
425 422 */ 426 423 static void test_lru_sanity3(int map_type, int map_flags, unsigned int tgt_free) 427 424 { ··· 440 437 441 438 assert(sched_next_online(0, &next_cpu) != -1); 442 439 443 - batch_size = tgt_free / 2; 444 - assert(batch_size * 2 == tgt_free); 440 + batch_size = __tgt_size(tgt_free); 445 441 446 442 map_size = tgt_free * 2; 447 443 lru_map_fd = create_map(map_type, map_flags, map_size); ··· 451 449 452 450 value[0] = 1234; 453 451 454 - /* Insert 1 to 2*tgt_free (+2*tgt_free keys) */ 455 - end_key = 1 + (2 * tgt_free); 452 + /* Fill the map */ 453 + end_key = 1 + map_size; 456 454 for (key = 1; key < end_key; key++) 457 455 assert(!bpf_map_update_elem(lru_map_fd, &key, value, 458 456 BPF_NOEXIST)); 459 457 460 - /* Lookup key 1 to tgt_free*3/2 */ 461 - end_key = tgt_free + batch_size; 458 + /* Reference all but the last batch_size */ 459 + end_key = 1 + map_size - batch_size; 462 460 for (key = 1; key < end_key; key++) { 463 461 assert(!bpf_map_lookup_elem_with_ref_bit(lru_map_fd, key, value)); 464 462 assert(!bpf_map_update_elem(expected_map_fd, &key, value, 465 463 BPF_NOEXIST)); 466 464 } 467 465 468 - /* Add 1+2*tgt_free to tgt_free*5/2 469 - * (+tgt_free/2 keys) 470 - */ 466 + /* Insert new batch_size: replaces the non-referenced elements */ 471 467 key = 2 * tgt_free + 1; 472 468 end_key = key + batch_size; 473 469 for (; key < end_key; key++) { ··· 500 500 lru_map_fd = create_map(map_type, map_flags, 501 501 3 * tgt_free * nr_cpus); 502 502 else 503 - lru_map_fd = create_map(map_type, map_flags, 3 * tgt_free); 503 + lru_map_fd = create_map(map_type, map_flags, 504 + 3 * __map_size(tgt_free)); 504 505 assert(lru_map_fd != -1); 505 506 506 507 expected_map_fd = create_map(BPF_MAP_TYPE_HASH, 0,
+8 -29
tools/testing/selftests/bpf/test_sysctl.c tools/testing/selftests/bpf/prog_tests/test_sysctl.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 // Copyright (c) 2019 Facebook 3 3 4 - #include <fcntl.h> 5 - #include <stdint.h> 6 - #include <stdio.h> 7 - #include <stdlib.h> 8 - #include <string.h> 9 - #include <unistd.h> 10 - 11 - #include <linux/filter.h> 12 - 13 - #include <bpf/bpf.h> 14 - #include <bpf/libbpf.h> 15 - 16 - #include <bpf/bpf_endian.h> 17 - #include "bpf_util.h" 4 + #include "test_progs.h" 18 5 #include "cgroup_helpers.h" 19 - #include "testing_helpers.h" 20 6 21 7 #define CG_PATH "/foo" 22 8 #define MAX_INSNS 512 ··· 1594 1608 return fails ? -1 : 0; 1595 1609 } 1596 1610 1597 - int main(int argc, char **argv) 1611 + void test_sysctl(void) 1598 1612 { 1599 - int cgfd = -1; 1600 - int err = 0; 1613 + int cgfd; 1601 1614 1602 1615 cgfd = cgroup_setup_and_join(CG_PATH); 1603 - if (cgfd < 0) 1604 - goto err; 1616 + if (!ASSERT_OK_FD(cgfd < 0, "create_cgroup")) 1617 + goto out; 1605 1618 1606 - /* Use libbpf 1.0 API mode */ 1607 - libbpf_set_strict_mode(LIBBPF_STRICT_ALL); 1619 + if (!ASSERT_OK(run_tests(cgfd), "run_tests")) 1620 + goto out; 1608 1621 1609 - if (run_tests(cgfd)) 1610 - goto err; 1611 - 1612 - goto out; 1613 - err: 1614 - err = -1; 1615 1622 out: 1616 1623 close(cgfd); 1617 1624 cleanup_cgroup_environment(); 1618 - return err; 1625 + return; 1619 1626 }
+1 -1
tools/testing/selftests/drivers/net/hw/rss_input_xfrm.py
··· 38 38 raise KsftSkipEx("socket.SO_INCOMING_CPU was added in Python 3.11") 39 39 40 40 input_xfrm = cfg.ethnl.rss_get( 41 - {'header': {'dev-name': cfg.ifname}}).get('input_xfrm') 41 + {'header': {'dev-name': cfg.ifname}}).get('input-xfrm') 42 42 43 43 # Check for symmetric xor/or-xor 44 44 if not input_xfrm or (input_xfrm != 1 and input_xfrm != 2):
+2 -1
tools/testing/selftests/drivers/net/netdevsim/peer.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0-only 3 3 4 - source ../../../net/lib.sh 4 + lib_dir=$(dirname $0)/../../../net 5 + source $lib_dir/lib.sh 5 6 6 7 NSIM_DEV_1_ID=$((256 + RANDOM % 256)) 7 8 NSIM_DEV_1_SYS=/sys/bus/netdevsim/devices/netdevsim$NSIM_DEV_1_ID
+7 -3
tools/testing/selftests/futex/functional/futex_numa_mpol.c
··· 144 144 struct futex32_numa *futex_numa; 145 145 int mem_size, i; 146 146 void *futex_ptr; 147 - char c; 147 + int c; 148 148 149 149 while ((c = getopt(argc, argv, "chv:")) != -1) { 150 150 switch (c) { ··· 210 210 ret = mbind(futex_ptr, mem_size, MPOL_BIND, &nodemask, 211 211 sizeof(nodemask) * 8, 0); 212 212 if (ret == 0) { 213 + ret = numa_set_mempolicy_home_node(futex_ptr, mem_size, i, 0); 214 + if (ret != 0) 215 + ksft_exit_fail_msg("Failed to set home node: %m, %d\n", errno); 216 + 213 217 ksft_print_msg("Node %d test\n", i); 214 218 futex_numa->futex = 0; 215 219 futex_numa->numa = FUTEX_NO_NODE; ··· 224 220 if (0) 225 221 test_futex_mpol(futex_numa, 0); 226 222 if (futex_numa->numa != i) { 227 - ksft_test_result_fail("Returned NUMA node is %d expected %d\n", 228 - futex_numa->numa, i); 223 + ksft_exit_fail_msg("Returned NUMA node is %d expected %d\n", 224 + futex_numa->numa, i); 229 225 } 230 226 } 231 227 }
+1 -1
tools/testing/selftests/futex/functional/futex_priv_hash.c
··· 130 130 pthread_mutexattr_t mutex_attr_pi; 131 131 int use_global_hash = 0; 132 132 int ret; 133 - char c; 133 + int c; 134 134 135 135 while ((c = getopt(argc, argv, "cghv:")) != -1) { 136 136 switch (c) {
+13 -3
tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
··· 954 954 pr_debug("ptimer_irq: %d; vtimer_irq: %d\n", ptimer_irq, vtimer_irq); 955 955 } 956 956 957 + static int gic_fd; 958 + 957 959 static void test_vm_create(struct kvm_vm **vm, struct kvm_vcpu **vcpu, 958 960 enum arch_timer timer) 959 961 { ··· 970 968 vcpu_args_set(*vcpu, 1, timer); 971 969 972 970 test_init_timer_irq(*vm, *vcpu); 973 - vgic_v3_setup(*vm, 1, 64); 971 + gic_fd = vgic_v3_setup(*vm, 1, 64); 972 + __TEST_REQUIRE(gic_fd >= 0, "Failed to create vgic-v3"); 973 + 974 974 sync_global_to_guest(*vm, test_args); 975 975 sync_global_to_guest(*vm, CVAL_MAX); 976 976 sync_global_to_guest(*vm, DEF_CNT); 977 + } 978 + 979 + static void test_vm_cleanup(struct kvm_vm *vm) 980 + { 981 + close(gic_fd); 982 + kvm_vm_free(vm); 977 983 } 978 984 979 985 static void test_print_help(char *name) ··· 1070 1060 if (test_args.test_virtual) { 1071 1061 test_vm_create(&vm, &vcpu, VIRTUAL); 1072 1062 test_run(vm, vcpu); 1073 - kvm_vm_free(vm); 1063 + test_vm_cleanup(vm); 1074 1064 } 1075 1065 1076 1066 if (test_args.test_physical) { 1077 1067 test_vm_create(&vm, &vcpu, PHYSICAL); 1078 1068 test_run(vm, vcpu); 1079 - kvm_vm_free(vm); 1069 + test_vm_cleanup(vm); 1080 1070 } 1081 1071 1082 1072 return 0;
+3
tools/testing/selftests/mm/config
··· 8 8 CONFIG_TRANSPARENT_HUGEPAGE=y 9 9 CONFIG_MEM_SOFT_DIRTY=y 10 10 CONFIG_ANON_VMA_NAME=y 11 + CONFIG_FTRACE=y 12 + CONFIG_PROFILING=y 13 + CONFIG_UPROBES=y
+4 -1
tools/testing/selftests/mm/merge.c
··· 470 470 ASSERT_GE(fd, 0); 471 471 472 472 ASSERT_EQ(ftruncate(fd, page_size), 0); 473 - ASSERT_EQ(read_sysfs("/sys/bus/event_source/devices/uprobe/type", &type), 0); 473 + if (read_sysfs("/sys/bus/event_source/devices/uprobe/type", &type) != 0) { 474 + SKIP(goto out, "Failed to read uprobe sysfs file, skipping"); 475 + } 474 476 475 477 memset(&attr, 0, attr_sz); 476 478 attr.size = attr_sz; ··· 493 491 ASSERT_NE(mremap(ptr2, page_size, page_size, 494 492 MREMAP_MAYMOVE | MREMAP_FIXED, ptr1), MAP_FAILED); 495 493 494 + out: 496 495 close(fd); 497 496 remove(probe_file); 498 497 }
+1 -1
tools/testing/selftests/mm/settings
··· 1 - timeout=180 1 + timeout=900
+5 -2
tools/testing/selftests/mm/virtual_address_range.c
··· 77 77 { 78 78 unsigned long addr = (unsigned long) ptr; 79 79 80 - if (high_addr && addr < HIGH_ADDR_MARK) 81 - ksft_exit_fail_msg("Bad address %lx\n", addr); 80 + if (high_addr) { 81 + if (addr < HIGH_ADDR_MARK) 82 + ksft_exit_fail_msg("Bad address %lx\n", addr); 83 + return; 84 + } 82 85 83 86 if (addr > HIGH_ADDR_MARK) 84 87 ksft_exit_fail_msg("Bad address %lx\n", addr);
+1
tools/testing/selftests/net/.gitignore
··· 50 50 tcp_fastopen_backup_key 51 51 tcp_inq 52 52 tcp_mmap 53 + tfo 53 54 timestamping 54 55 tls 55 56 toeplitz
+2
tools/testing/selftests/net/Makefile
··· 110 110 TEST_PROGS += lwt_dst_cache_ref_loop.sh 111 111 TEST_PROGS += skf_net_off.sh 112 112 TEST_GEN_FILES += skf_net_off 113 + TEST_GEN_FILES += tfo 114 + TEST_PROGS += tfo_passive.sh 113 115 114 116 # YNL files, must be before "include ..lib.mk" 115 117 YNL_GEN_FILES := busy_poller netlink-dumps
+138 -4
tools/testing/selftests/net/af_unix/msg_oob.c
··· 210 210 static void __recvpair(struct __test_metadata *_metadata, 211 211 FIXTURE_DATA(msg_oob) *self, 212 212 const char *expected_buf, int expected_len, 213 - int buf_len, int flags) 213 + int buf_len, int flags, bool is_sender) 214 214 { 215 215 int i, ret[2], recv_errno[2], expected_errno = 0; 216 216 char recv_buf[2][BUF_SZ] = {}; ··· 221 221 errno = 0; 222 222 223 223 for (i = 0; i < 2; i++) { 224 - ret[i] = recv(self->fd[i * 2 + 1], recv_buf[i], buf_len, flags); 224 + int index = is_sender ? i * 2 : i * 2 + 1; 225 + 226 + ret[i] = recv(self->fd[index], recv_buf[i], buf_len, flags); 225 227 recv_errno[i] = errno; 226 228 } 227 229 ··· 310 308 ASSERT_EQ(answ[0], answ[1]); 311 309 } 312 310 311 + static void __resetpair(struct __test_metadata *_metadata, 312 + FIXTURE_DATA(msg_oob) *self, 313 + const FIXTURE_VARIANT(msg_oob) *variant, 314 + bool reset) 315 + { 316 + int i; 317 + 318 + for (i = 0; i < 2; i++) 319 + close(self->fd[i * 2 + 1]); 320 + 321 + __recvpair(_metadata, self, "", reset ? -ECONNRESET : 0, 1, 322 + variant->peek ? 
MSG_PEEK : 0, true); 323 + } 324 + 313 325 #define sendpair(buf, len, flags) \ 314 326 __sendpair(_metadata, self, buf, len, flags) 315 327 ··· 332 316 if (variant->peek) \ 333 317 __recvpair(_metadata, self, \ 334 318 expected_buf, expected_len, \ 335 - buf_len, (flags) | MSG_PEEK); \ 319 + buf_len, (flags) | MSG_PEEK, false); \ 336 320 __recvpair(_metadata, self, \ 337 - expected_buf, expected_len, buf_len, flags); \ 321 + expected_buf, expected_len, \ 322 + buf_len, flags, false); \ 338 323 } while (0) 339 324 340 325 #define epollpair(oob_remaining) \ ··· 346 329 347 330 #define setinlinepair() \ 348 331 __setinlinepair(_metadata, self) 332 + 333 + #define resetpair(reset) \ 334 + __resetpair(_metadata, self, variant, reset) 349 335 350 336 #define tcp_incompliant \ 351 337 for (self->tcp_compliant = false; \ ··· 364 344 recvpair("", -EINVAL, 1, MSG_OOB); 365 345 epollpair(false); 366 346 siocatmarkpair(false); 347 + 348 + resetpair(true); 349 + } 350 + 351 + TEST_F(msg_oob, non_oob_no_reset) 352 + { 353 + sendpair("x", 1, 0); 354 + epollpair(false); 355 + siocatmarkpair(false); 356 + 357 + recvpair("x", 1, 1, 0); 358 + epollpair(false); 359 + siocatmarkpair(false); 360 + 361 + resetpair(false); 367 362 } 368 363 369 364 TEST_F(msg_oob, oob) ··· 390 355 recvpair("x", 1, 1, MSG_OOB); 391 356 epollpair(false); 392 357 siocatmarkpair(true); 358 + 359 + tcp_incompliant { 360 + resetpair(false); /* TCP sets -ECONNRESET for ex-OOB. 
*/ 361 + } 362 + } 363 + 364 + TEST_F(msg_oob, oob_reset) 365 + { 366 + sendpair("x", 1, MSG_OOB); 367 + epollpair(true); 368 + siocatmarkpair(true); 369 + 370 + resetpair(true); 393 371 } 394 372 395 373 TEST_F(msg_oob, oob_drop) ··· 418 370 recvpair("", -EINVAL, 1, MSG_OOB); 419 371 epollpair(false); 420 372 siocatmarkpair(false); 373 + 374 + resetpair(false); 421 375 } 422 376 423 377 TEST_F(msg_oob, oob_ahead) ··· 435 385 recvpair("hell", 4, 4, 0); 436 386 epollpair(false); 437 387 siocatmarkpair(true); 388 + 389 + tcp_incompliant { 390 + resetpair(false); /* TCP sets -ECONNRESET for ex-OOB. */ 391 + } 438 392 } 439 393 440 394 TEST_F(msg_oob, oob_break) ··· 457 403 458 404 recvpair("", -EAGAIN, 1, 0); 459 405 siocatmarkpair(false); 406 + 407 + resetpair(false); 460 408 } 461 409 462 410 TEST_F(msg_oob, oob_ahead_break) ··· 482 426 recvpair("world", 5, 5, 0); 483 427 epollpair(false); 484 428 siocatmarkpair(false); 429 + 430 + resetpair(false); 485 431 } 486 432 487 433 TEST_F(msg_oob, oob_break_drop) ··· 507 449 recvpair("", -EINVAL, 1, MSG_OOB); 508 450 epollpair(false); 509 451 siocatmarkpair(false); 452 + 453 + resetpair(false); 510 454 } 511 455 512 456 TEST_F(msg_oob, ex_oob_break) ··· 536 476 recvpair("ld", 2, 2, 0); 537 477 epollpair(false); 538 478 siocatmarkpair(false); 479 + 480 + resetpair(false); 539 481 } 540 482 541 483 TEST_F(msg_oob, ex_oob_drop) ··· 560 498 epollpair(false); 561 499 siocatmarkpair(true); 562 500 } 501 + 502 + resetpair(false); 563 503 } 564 504 565 505 TEST_F(msg_oob, ex_oob_drop_2) ··· 587 523 epollpair(false); 588 524 siocatmarkpair(true); 589 525 } 526 + 527 + resetpair(false); 590 528 } 591 529 592 530 TEST_F(msg_oob, ex_oob_oob) ··· 612 546 recvpair("", -EINVAL, 1, MSG_OOB); 613 547 epollpair(false); 614 548 siocatmarkpair(false); 549 + 550 + resetpair(false); 551 + } 552 + 553 + TEST_F(msg_oob, ex_oob_ex_oob) 554 + { 555 + sendpair("x", 1, MSG_OOB); 556 + epollpair(true); 557 + siocatmarkpair(true); 558 + 559 + 
recvpair("x", 1, 1, MSG_OOB); 560 + epollpair(false); 561 + siocatmarkpair(true); 562 + 563 + sendpair("y", 1, MSG_OOB); 564 + epollpair(true); 565 + siocatmarkpair(true); 566 + 567 + recvpair("y", 1, 1, MSG_OOB); 568 + epollpair(false); 569 + siocatmarkpair(true); 570 + 571 + tcp_incompliant { 572 + resetpair(false); /* TCP sets -ECONNRESET for ex-OOB. */ 573 + } 574 + } 575 + 576 + TEST_F(msg_oob, ex_oob_ex_oob_oob) 577 + { 578 + sendpair("x", 1, MSG_OOB); 579 + epollpair(true); 580 + siocatmarkpair(true); 581 + 582 + recvpair("x", 1, 1, MSG_OOB); 583 + epollpair(false); 584 + siocatmarkpair(true); 585 + 586 + sendpair("y", 1, MSG_OOB); 587 + epollpair(true); 588 + siocatmarkpair(true); 589 + 590 + recvpair("y", 1, 1, MSG_OOB); 591 + epollpair(false); 592 + siocatmarkpair(true); 593 + 594 + sendpair("z", 1, MSG_OOB); 595 + epollpair(true); 596 + siocatmarkpair(true); 615 597 } 616 598 617 599 TEST_F(msg_oob, ex_oob_ahead_break) ··· 690 576 recvpair("d", 1, 1, MSG_OOB); 691 577 epollpair(false); 692 578 siocatmarkpair(true); 579 + 580 + tcp_incompliant { 581 + resetpair(false); /* TCP sets -ECONNRESET for ex-OOB. */ 582 + } 693 583 } 694 584 695 585 TEST_F(msg_oob, ex_oob_siocatmark) ··· 713 595 recvpair("hell", 4, 4, 0); /* Intentionally stop at ex-OOB. 
*/ 714 596 epollpair(true); 715 597 siocatmarkpair(false); 598 + 599 + resetpair(true); 716 600 } 717 601 718 602 TEST_F(msg_oob, inline_oob) ··· 732 612 recvpair("x", 1, 1, 0); 733 613 epollpair(false); 734 614 siocatmarkpair(false); 615 + 616 + resetpair(false); 735 617 } 736 618 737 619 TEST_F(msg_oob, inline_oob_break) ··· 755 633 recvpair("o", 1, 1, 0); 756 634 epollpair(false); 757 635 siocatmarkpair(false); 636 + 637 + resetpair(false); 758 638 } 759 639 760 640 TEST_F(msg_oob, inline_oob_ahead_break) ··· 785 661 786 662 epollpair(false); 787 663 siocatmarkpair(false); 664 + 665 + resetpair(false); 788 666 } 789 667 790 668 TEST_F(msg_oob, inline_ex_oob_break) ··· 812 686 recvpair("rld", 3, 3, 0); 813 687 epollpair(false); 814 688 siocatmarkpair(false); 689 + 690 + resetpair(false); 815 691 } 816 692 817 693 TEST_F(msg_oob, inline_ex_oob_no_drop) ··· 835 707 recvpair("y", 1, 1, 0); 836 708 epollpair(false); 837 709 siocatmarkpair(false); 710 + 711 + resetpair(false); 838 712 } 839 713 840 714 TEST_F(msg_oob, inline_ex_oob_drop) ··· 861 731 epollpair(false); 862 732 siocatmarkpair(false); 863 733 } 734 + 735 + resetpair(false); 864 736 } 865 737 866 738 TEST_F(msg_oob, inline_ex_oob_siocatmark) ··· 884 752 recvpair("hell", 4, 4, 0); /* Intentionally stop at ex-OOB. */ 885 753 epollpair(true); 886 754 siocatmarkpair(false); 755 + 756 + resetpair(true); 887 757 } 888 758 889 759 TEST_HARNESS_MAIN
+171
tools/testing/selftests/net/tfo.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <error.h> 3 + #include <fcntl.h> 4 + #include <limits.h> 5 + #include <stdbool.h> 6 + #include <stdint.h> 7 + #include <stdio.h> 8 + #include <stdlib.h> 9 + #include <string.h> 10 + #include <unistd.h> 11 + #include <arpa/inet.h> 12 + #include <sys/socket.h> 13 + #include <netinet/tcp.h> 14 + #include <errno.h> 15 + 16 + static int cfg_server; 17 + static int cfg_client; 18 + static int cfg_port = 8000; 19 + static struct sockaddr_in6 cfg_addr; 20 + static char *cfg_outfile; 21 + 22 + static int parse_address(const char *str, int port, struct sockaddr_in6 *sin6) 23 + { 24 + int ret; 25 + 26 + sin6->sin6_family = AF_INET6; 27 + sin6->sin6_port = htons(port); 28 + 29 + ret = inet_pton(sin6->sin6_family, str, &sin6->sin6_addr); 30 + if (ret != 1) { 31 + /* fallback to plain IPv4 */ 32 + ret = inet_pton(AF_INET, str, &sin6->sin6_addr.s6_addr32[3]); 33 + if (ret != 1) 34 + return -1; 35 + 36 + /* add ::ffff prefix */ 37 + sin6->sin6_addr.s6_addr32[0] = 0; 38 + sin6->sin6_addr.s6_addr32[1] = 0; 39 + sin6->sin6_addr.s6_addr16[4] = 0; 40 + sin6->sin6_addr.s6_addr16[5] = 0xffff; 41 + } 42 + 43 + return 0; 44 + } 45 + 46 + static void run_server(void) 47 + { 48 + unsigned long qlen = 32; 49 + int fd, opt, connfd; 50 + socklen_t len; 51 + char buf[64]; 52 + FILE *outfile; 53 + 54 + outfile = fopen(cfg_outfile, "w"); 55 + if (!outfile) 56 + error(1, errno, "fopen() outfile"); 57 + 58 + fd = socket(AF_INET6, SOCK_STREAM, 0); 59 + if (fd == -1) 60 + error(1, errno, "socket()"); 61 + 62 + opt = 1; 63 + if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt)) < 0) 64 + error(1, errno, "setsockopt(SO_REUSEADDR)"); 65 + 66 + if (setsockopt(fd, SOL_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen)) < 0) 67 + error(1, errno, "setsockopt(TCP_FASTOPEN)"); 68 + 69 + if (bind(fd, (struct sockaddr *)&cfg_addr, sizeof(cfg_addr)) < 0) 70 + error(1, errno, "bind()"); 71 + 72 + if (listen(fd, 5) < 0) 73 + error(1, errno, "listen()"); 74 + 75 
+ len = sizeof(cfg_addr); 76 + connfd = accept(fd, (struct sockaddr *)&cfg_addr, &len); 77 + if (connfd < 0) 78 + error(1, errno, "accept()"); 79 + 80 + len = sizeof(opt); 81 + if (getsockopt(connfd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &opt, &len) < 0) 82 + error(1, errno, "getsockopt(SO_INCOMING_NAPI_ID)"); 83 + 84 + read(connfd, buf, 64); 85 + fprintf(outfile, "%d\n", opt); 86 + 87 + fclose(outfile); 88 + close(connfd); 89 + close(fd); 90 + } 91 + 92 + static void run_client(void) 93 + { 94 + int fd; 95 + char *msg = "Hello, world!"; 96 + 97 + fd = socket(AF_INET6, SOCK_STREAM, 0); 98 + if (fd == -1) 99 + error(1, errno, "socket()"); 100 + 101 + sendto(fd, msg, strlen(msg), MSG_FASTOPEN, (struct sockaddr *)&cfg_addr, sizeof(cfg_addr)); 102 + 103 + close(fd); 104 + } 105 + 106 + static void usage(const char *filepath) 107 + { 108 + error(1, 0, "Usage: %s (-s|-c) -h<server_ip> -p<port> -o<outfile> ", filepath); 109 + } 110 + 111 + static void parse_opts(int argc, char **argv) 112 + { 113 + struct sockaddr_in6 *addr6 = (void *) &cfg_addr; 114 + char *addr = NULL; 115 + int ret; 116 + int c; 117 + 118 + if (argc <= 1) 119 + usage(argv[0]); 120 + 121 + while ((c = getopt(argc, argv, "sch:p:o:")) != -1) { 122 + switch (c) { 123 + case 's': 124 + if (cfg_client) 125 + error(1, 0, "Pass one of -s or -c"); 126 + cfg_server = 1; 127 + break; 128 + case 'c': 129 + if (cfg_server) 130 + error(1, 0, "Pass one of -s or -c"); 131 + cfg_client = 1; 132 + break; 133 + case 'h': 134 + addr = optarg; 135 + break; 136 + case 'p': 137 + cfg_port = strtoul(optarg, NULL, 0); 138 + break; 139 + case 'o': 140 + cfg_outfile = strdup(optarg); 141 + if (!cfg_outfile) 142 + error(1, 0, "outfile invalid"); 143 + break; 144 + } 145 + } 146 + 147 + if (cfg_server && addr) 148 + error(1, 0, "Server cannot have -h specified"); 149 + 150 + memset(addr6, 0, sizeof(*addr6)); 151 + addr6->sin6_family = AF_INET6; 152 + addr6->sin6_port = htons(cfg_port); 153 + addr6->sin6_addr = in6addr_any; 154 + if 
(addr) { 155 + ret = parse_address(addr, cfg_port, addr6); 156 + if (ret) 157 + error(1, 0, "Client address parse error: %s", addr); 158 + } 159 + } 160 + 161 + int main(int argc, char **argv) 162 + { 163 + parse_opts(argc, argv); 164 + 165 + if (cfg_server) 166 + run_server(); 167 + else if (cfg_client) 168 + run_client(); 169 + 170 + return 0; 171 + }
+112
tools/testing/selftests/net/tfo_passive.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + source lib.sh 4 + 5 + NSIM_SV_ID=$((256 + RANDOM % 256)) 6 + NSIM_SV_SYS=/sys/bus/netdevsim/devices/netdevsim$NSIM_SV_ID 7 + NSIM_CL_ID=$((512 + RANDOM % 256)) 8 + NSIM_CL_SYS=/sys/bus/netdevsim/devices/netdevsim$NSIM_CL_ID 9 + 10 + NSIM_DEV_SYS_NEW=/sys/bus/netdevsim/new_device 11 + NSIM_DEV_SYS_DEL=/sys/bus/netdevsim/del_device 12 + NSIM_DEV_SYS_LINK=/sys/bus/netdevsim/link_device 13 + NSIM_DEV_SYS_UNLINK=/sys/bus/netdevsim/unlink_device 14 + 15 + SERVER_IP=192.168.1.1 16 + CLIENT_IP=192.168.1.2 17 + SERVER_PORT=48675 18 + 19 + setup_ns() 20 + { 21 + set -e 22 + ip netns add nssv 23 + ip netns add nscl 24 + 25 + NSIM_SV_NAME=$(find $NSIM_SV_SYS/net -maxdepth 1 -type d ! \ 26 + -path $NSIM_SV_SYS/net -exec basename {} \;) 27 + NSIM_CL_NAME=$(find $NSIM_CL_SYS/net -maxdepth 1 -type d ! \ 28 + -path $NSIM_CL_SYS/net -exec basename {} \;) 29 + 30 + ip link set $NSIM_SV_NAME netns nssv 31 + ip link set $NSIM_CL_NAME netns nscl 32 + 33 + ip netns exec nssv ip addr add "${SERVER_IP}/24" dev $NSIM_SV_NAME 34 + ip netns exec nscl ip addr add "${CLIENT_IP}/24" dev $NSIM_CL_NAME 35 + 36 + ip netns exec nssv ip link set dev $NSIM_SV_NAME up 37 + ip netns exec nscl ip link set dev $NSIM_CL_NAME up 38 + 39 + # Enable passive TFO 40 + ip netns exec nssv sysctl -w net.ipv4.tcp_fastopen=519 > /dev/null 41 + 42 + set +e 43 + } 44 + 45 + cleanup_ns() 46 + { 47 + ip netns del nscl 48 + ip netns del nssv 49 + } 50 + 51 + ### 52 + ### Code start 53 + ### 54 + 55 + modprobe netdevsim 56 + 57 + # linking 58 + 59 + echo $NSIM_SV_ID > $NSIM_DEV_SYS_NEW 60 + echo $NSIM_CL_ID > $NSIM_DEV_SYS_NEW 61 + udevadm settle 62 + 63 + setup_ns 64 + 65 + NSIM_SV_FD=$((256 + RANDOM % 256)) 66 + exec {NSIM_SV_FD}</var/run/netns/nssv 67 + NSIM_SV_IFIDX=$(ip netns exec nssv cat /sys/class/net/$NSIM_SV_NAME/ifindex) 68 + 69 + NSIM_CL_FD=$((256 + RANDOM % 256)) 70 + exec {NSIM_CL_FD}</var/run/netns/nscl 71 + NSIM_CL_IFIDX=$(ip netns exec nscl 
cat /sys/class/net/$NSIM_CL_NAME/ifindex) 72 + 73 + echo "$NSIM_SV_FD:$NSIM_SV_IFIDX $NSIM_CL_FD:$NSIM_CL_IFIDX" > \ 74 + $NSIM_DEV_SYS_LINK 75 + 76 + if [ $? -ne 0 ]; then 77 + echo "linking netdevsim1 with netdevsim2 should succeed" 78 + cleanup_ns 79 + exit 1 80 + fi 81 + 82 + out_file=$(mktemp) 83 + 84 + timeout -k 1s 30s ip netns exec nssv ./tfo \ 85 + -s \ 86 + -p ${SERVER_PORT} \ 87 + -o ${out_file}& 88 + 89 + wait_local_port_listen nssv ${SERVER_PORT} tcp 90 + 91 + ip netns exec nscl ./tfo -c -h ${SERVER_IP} -p ${SERVER_PORT} 92 + 93 + wait 94 + 95 + res=$(cat $out_file) 96 + rm $out_file 97 + 98 + if [ $res -eq 0 ]; then 99 + echo "got invalid NAPI ID from passive TFO socket" 100 + cleanup_ns 101 + exit 1 102 + fi 103 + 104 + echo "$NSIM_SV_FD:$NSIM_SV_IFIDX" > $NSIM_DEV_SYS_UNLINK 105 + 106 + echo $NSIM_CL_ID > $NSIM_DEV_SYS_DEL 107 + 108 + cleanup_ns 109 + 110 + modprobe -r netdevsim 111 + 112 + exit 0
+3 -2
tools/testing/selftests/ublk/test_stress_03.sh
··· 32 32 ublk_io_and_remove 8G -t null -q 4 -z & 33 33 ublk_io_and_remove 256M -t loop -q 4 -z "${UBLK_BACKFILES[0]}" & 34 34 ublk_io_and_remove 256M -t stripe -q 4 -z "${UBLK_BACKFILES[1]}" "${UBLK_BACKFILES[2]}" & 35 + wait 35 36 36 37 if _have_feature "AUTO_BUF_REG"; then 37 38 ublk_io_and_remove 8G -t null -q 4 --auto_zc & 38 39 ublk_io_and_remove 256M -t loop -q 4 --auto_zc "${UBLK_BACKFILES[0]}" & 39 40 ublk_io_and_remove 256M -t stripe -q 4 --auto_zc "${UBLK_BACKFILES[1]}" "${UBLK_BACKFILES[2]}" & 40 41 ublk_io_and_remove 8G -t null -q 4 -z --auto_zc --auto_zc_fallback & 42 + wait 41 43 fi 42 - wait 43 44 44 45 if _have_feature "PER_IO_DAEMON"; then 45 46 ublk_io_and_remove 8G -t null -q 4 --auto_zc --nthreads 8 --per_io_tasks & 46 47 ublk_io_and_remove 256M -t loop -q 4 --auto_zc --nthreads 8 --per_io_tasks "${UBLK_BACKFILES[0]}" & 47 48 ublk_io_and_remove 256M -t stripe -q 4 --auto_zc --nthreads 8 --per_io_tasks "${UBLK_BACKFILES[1]}" "${UBLK_BACKFILES[2]}" & 48 49 ublk_io_and_remove 8G -t null -q 4 -z --auto_zc --auto_zc_fallback --nthreads 8 --per_io_tasks & 50 + wait 49 51 fi 50 - wait 51 52 52 53 _cleanup_test "stress" 53 54 _show_result $TID $ERR_CODE
+1 -1
tools/testing/selftests/x86/Makefile
··· 12 12 13 13 TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt test_mremap_vdso \ 14 14 check_initial_reg_state sigreturn iopl ioperm \ 15 - test_vsyscall mov_ss_trap \ 15 + test_vsyscall mov_ss_trap sigtrap_loop \ 16 16 syscall_arg_fault fsgsbase_restore sigaltstack 17 17 TARGETS_C_BOTHBITS += nx_stack 18 18 TARGETS_C_32BIT_ONLY := entry_from_vm86 test_syscall_vdso unwind_vdso \
+101
tools/testing/selftests/x86/sigtrap_loop.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (C) 2025 Intel Corporation 4 + */ 5 + #define _GNU_SOURCE 6 + 7 + #include <err.h> 8 + #include <signal.h> 9 + #include <stdio.h> 10 + #include <stdlib.h> 11 + #include <string.h> 12 + #include <sys/ucontext.h> 13 + 14 + #ifdef __x86_64__ 15 + # define REG_IP REG_RIP 16 + #else 17 + # define REG_IP REG_EIP 18 + #endif 19 + 20 + static void sethandler(int sig, void (*handler)(int, siginfo_t *, void *), int flags) 21 + { 22 + struct sigaction sa; 23 + 24 + memset(&sa, 0, sizeof(sa)); 25 + sa.sa_sigaction = handler; 26 + sa.sa_flags = SA_SIGINFO | flags; 27 + sigemptyset(&sa.sa_mask); 28 + 29 + if (sigaction(sig, &sa, 0)) 30 + err(1, "sigaction"); 31 + 32 + return; 33 + } 34 + 35 + static void sigtrap(int sig, siginfo_t *info, void *ctx_void) 36 + { 37 + ucontext_t *ctx = (ucontext_t *)ctx_void; 38 + static unsigned int loop_count_on_same_ip; 39 + static unsigned long last_trap_ip; 40 + 41 + if (last_trap_ip == ctx->uc_mcontext.gregs[REG_IP]) { 42 + printf("\tTrapped at %016lx\n", last_trap_ip); 43 + 44 + /* 45 + * If the same IP is hit more than 10 times in a row, it is 46 + * _considered_ an infinite loop. 47 + */ 48 + if (++loop_count_on_same_ip > 10) { 49 + printf("[FAIL]\tDetected SIGTRAP infinite loop\n"); 50 + exit(1); 51 + } 52 + 53 + return; 54 + } 55 + 56 + loop_count_on_same_ip = 0; 57 + last_trap_ip = ctx->uc_mcontext.gregs[REG_IP]; 58 + printf("\tTrapped at %016lx\n", last_trap_ip); 59 + } 60 + 61 + int main(int argc, char *argv[]) 62 + { 63 + sethandler(SIGTRAP, sigtrap, 0); 64 + 65 + /* 66 + * Set the Trap Flag (TF) to single-step the test code, therefore to 67 + * trigger a SIGTRAP signal after each instruction until the TF is 68 + * cleared. 69 + * 70 + * Because the arithmetic flags are not significant here, the TF is 71 + * set by pushing 0x302 onto the stack and then popping it into the 72 + * flags register. 
73 + * 74 + * Four instructions in the following asm code are executed with the 75 + * TF set, thus the SIGTRAP handler is expected to run four times. 76 + */ 77 + printf("[RUN]\tSIGTRAP infinite loop detection\n"); 78 + asm volatile( 79 + #ifdef __x86_64__ 80 + /* 81 + * Avoid clobbering the redzone 82 + * 83 + * Equivalent to "sub $128, %rsp", however -128 can be encoded 84 + * in a single byte immediate while 128 uses 4 bytes. 85 + */ 86 + "add $-128, %rsp\n\t" 87 + #endif 88 + "push $0x302\n\t" 89 + "popf\n\t" 90 + "nop\n\t" 91 + "nop\n\t" 92 + "push $0x202\n\t" 93 + "popf\n\t" 94 + #ifdef __x86_64__ 95 + "sub $-128, %rsp\n\t" 96 + #endif 97 + ); 98 + 99 + printf("[OK]\tNo SIGTRAP infinite loop detected\n"); 100 + return 0; 101 + }
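The `0x302`/`0x202` immediates pushed onto the stack follow from the EFLAGS layout: bit 1 is the reserved always-set bit, bit 8 is the Trap Flag (TF), and bit 9 is the Interrupt Flag (IF). A quick shell check of that bit arithmetic:

```shell
# EFLAGS bits used by the push/popf sequence in sigtrap_loop.c:
#   bit 1 (0x002): reserved, always 1
#   bit 8 (0x100): TF, enables single-step traps
#   bit 9 (0x200): IF, keeps interrupts enabled
printf '0x%03x\n' $(( 0x002 | 0x200 | 0x100 ))   # TF set   -> 0x302
printf '0x%03x\n' $(( 0x002 | 0x200 ))           # TF clear -> 0x202
```

So the first `popf` turns single-stepping on and the second turns it back off, leaving IF and the reserved bit unchanged throughout.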