
Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.16-rc4).

Conflicts:

Documentation/netlink/specs/mptcp_pm.yaml
9e6dd4c256d0 ("netlink: specs: mptcp: replace underscores with dashes in names")
ec362192aa9e ("netlink: specs: fix up indentation errors")
https://lore.kernel.org/20250626122205.389c2cd4@canb.auug.org.au

Adjacent changes:

Documentation/netlink/specs/fou.yaml
791a9ed0a40d ("netlink: specs: fou: replace underscores with dashes in names")
880d43ca9aa4 ("netlink: specs: clean up spaces in brackets")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+3538 -1795
+5
CREDITS
··· 2981 2981 S: Potsdam, New York 13676 2982 2982 S: USA 2983 2983 2984 + N: Shannon Nelson 2985 + E: sln@onemain.com 2986 + D: Worked on several network drivers including 2987 + D: ixgbe, i40e, ionic, pds_core, pds_vdpa, pds_fwctl 2988 + 2984 2989 N: Dave Neuer 2985 2990 E: dave.neuer@pobox.com 2986 2991 D: Helped implement support for Compaq's H31xx series iPAQs
+1 -1
Documentation/arch/arm64/booting.rst
··· 234 234 235 235 - If the kernel is entered at EL1: 236 236 237 - - ICC.SRE_EL2.Enable (bit 3) must be initialised to 0b1 237 + - ICC_SRE_EL2.Enable (bit 3) must be initialised to 0b1 238 238 - ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b1. 239 239 240 240 - The DT or ACPI tables must describe a GICv3 interrupt controller.
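The corrected requirement above says that, when the kernel is entered at EL1 on a GICv3 system, both ICC_SRE_EL2.Enable (bit 3) and ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b1. A minimal sketch of that bit check (the helper name is hypothetical, only the bit positions come from the document):

```python
# Bit positions from Documentation/arch/arm64/booting.rst.
ICC_SRE_EL2_SRE = 1 << 0
ICC_SRE_EL2_ENABLE = 1 << 3

REQUIRED = ICC_SRE_EL2_ENABLE | ICC_SRE_EL2_SRE


def icc_sre_el2_ok(val: int) -> bool:
    """Check that both required ICC_SRE_EL2 bits are initialised to 0b1."""
    return (val & REQUIRED) == REQUIRED
```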
+7 -1
Documentation/bpf/map_hash.rst
··· 233 233 other CPUs involved in the following operation attempts: 234 234 235 235 - Attempt to use CPU-local state to batch operations 236 - - Attempt to fetch free nodes from global lists 236 + - Attempt to fetch ``target_free`` free nodes from global lists 237 237 - Attempt to pull any node from a global list and remove it from the hashmap 238 238 - Attempt to pull any node from any CPU's list and remove it from the hashmap 239 + 240 + The number of nodes to borrow from the global list in a batch, ``target_free``, 241 + depends on the size of the map. Larger batch size reduces lock contention, but 242 + may also exhaust the global structure. The value is computed at map init to 243 + avoid exhaustion, by limiting aggregate reservation by all CPUs to half the map 244 + size. With a minimum of a single element and maximum budget of 128 at a time. 239 245 240 246 This algorithm is described visually in the following diagram. See the 241 247 description in commit 3a08c2fd7634 ("bpf: LRU List") for a full explanation of
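The added paragraph describes how ``target_free`` is sized: aggregate reservation by all CPUs is capped at half the map size, with a floor of one element and a ceiling of 128 per batch. A sketch of that computation (the function name is hypothetical; the halving, floor, and 128 cap are from the documentation text):

```python
def compute_target_free(map_size: int, nr_cpus: int) -> int:
    """Batch size for borrowing free nodes from the global LRU list.

    Limits the aggregate reservation across all CPUs to half the map
    size, clamped to a minimum of 1 and a maximum of 128 nodes.
    """
    per_cpu_budget = (map_size // 2) // nr_cpus
    return max(1, min(per_cpu_budget, 128))
```

Small maps with many CPUs borrow conservatively, while large maps batch up to the 128-node cap to reduce lock contention.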
+3 -3
Documentation/bpf/map_lru_hash_update.dot
··· 35 35 fn_bpf_lru_list_pop_free_to_local [shape=rectangle,fillcolor=2, 36 36 label="Flush local pending, 37 37 Rotate Global list, move 38 - LOCAL_FREE_TARGET 38 + target_free 39 39 from global -> local"] 40 40 // Also corresponds to: 41 41 // fn__local_list_flush() 42 42 // fn_bpf_lru_list_rotate() 43 43 fn___bpf_lru_node_move_to_free[shape=diamond,fillcolor=2, 44 - label="Able to free\nLOCAL_FREE_TARGET\nnodes?"] 44 + label="Able to free\ntarget_free\nnodes?"] 45 45 46 46 fn___bpf_lru_list_shrink_inactive [shape=rectangle,fillcolor=3, 47 47 label="Shrink inactive list 48 48 up to remaining 49 - LOCAL_FREE_TARGET 49 + target_free 50 50 (global LRU -> local)"] 51 51 fn___bpf_lru_list_shrink [shape=diamond,fillcolor=2, 52 52 label="> 0 entries in\nlocal free list?"]
+23 -1
Documentation/devicetree/bindings/i2c/nvidia,tegra20-i2c.yaml
··· 97 97 98 98 resets: 99 99 items: 100 - - description: module reset 100 + - description: 101 + Module reset. This property is optional for controllers in Tegra194, 102 + Tegra234 etc where an internal software reset is available as an 103 + alternative. 101 104 102 105 reset-names: 103 106 items: ··· 118 115 items: 119 116 - const: rx 120 117 - const: tx 118 + 119 + required: 120 + - compatible 121 + - reg 122 + - interrupts 123 + - clocks 124 + - clock-names 121 125 122 126 allOf: 123 127 - $ref: /schemas/i2c/i2c-controller.yaml ··· 178 168 else: 179 169 properties: 180 170 power-domains: false 171 + 172 + - if: 173 + not: 174 + properties: 175 + compatible: 176 + contains: 177 + enum: 178 + - nvidia,tegra194-i2c 179 + then: 180 + required: 181 + - resets 182 + - reset-names 181 183 182 184 unevaluatedProperties: false 183 185
+9
Documentation/filesystems/porting.rst
··· 1249 1249 1250 1250 Calling conventions for ->d_automount() have changed; we should *not* grab 1251 1251 an extra reference to new mount - it should be returned with refcount 1. 1252 + 1253 + --- 1254 + 1255 + collect_mounts()/drop_collected_mounts()/iterate_mounts() are gone now. 1256 + Replacement is collect_paths()/drop_collected_path(), with no special 1257 + iterator needed. Instead of a cloned mount tree, the new interface returns 1258 + an array of struct path, one for each mount collect_mounts() would've 1259 + created. These struct path point to locations in the caller's namespace 1260 + that would be roots of the cloned mounts.
+1 -1
Documentation/gpu/nouveau.rst
··· 25 25 GSP Support 26 26 ------------------------ 27 27 28 - .. kernel-doc:: drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c 28 + .. kernel-doc:: drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c 29 29 :doc: GSP message queue element 30 30 31 31 .. kernel-doc:: drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h
+10 -7
Documentation/netlink/genetlink.yaml
··· 6 6 7 7 # Common defines 8 8 $defs: 9 + name: 10 + type: string 11 + pattern: ^[0-9a-z-]+$ 9 12 uint: 10 13 type: integer 11 14 minimum: 0 ··· 32 29 properties: 33 30 name: 34 31 description: Name of the genetlink family. 35 - type: string 32 + $ref: '#/$defs/name' 36 33 doc: 37 34 type: string 38 35 protocol: ··· 51 48 additionalProperties: False 52 49 properties: 53 50 name: 54 - type: string 51 + $ref: '#/$defs/name' 55 52 header: 56 53 description: For C-compatible languages, header which already defines this value. 57 54 type: string ··· 78 75 additionalProperties: False 79 76 properties: 80 77 name: 81 - type: string 78 + $ref: '#/$defs/name' 82 79 value: 83 80 type: integer 84 81 doc: ··· 99 96 name: 100 97 description: | 101 98 Name used when referring to this space in other definitions, not used outside of the spec. 102 - type: string 99 + $ref: '#/$defs/name' 103 100 name-prefix: 104 101 description: | 105 102 Prefix for the C enum name of the attributes. Default family[name]-set[name]-a- ··· 124 121 additionalProperties: False 125 122 properties: 126 123 name: 127 - type: string 124 + $ref: '#/$defs/name' 128 125 type: &attr-type 129 126 enum: [ unused, pad, flag, binary, 130 127 uint, sint, u8, u16, u32, u64, s8, s16, s32, s64, ··· 246 243 properties: 247 244 name: 248 245 description: Name of the operation, also defining its C enum value in uAPI. 249 - type: string 246 + $ref: '#/$defs/name' 250 247 doc: 251 248 description: Documentation for the command. 252 249 type: string ··· 330 327 name: 331 328 description: | 332 329 The name for the group, used to form the define and the value of the define. 333 - type: string 330 + $ref: '#/$defs/name' 334 331 flags: *cmd_flags 335 332 336 333 kernel-family:
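The new ``$defs`` entry constrains names to the pattern ``^[0-9a-z-]+$``, which is why the sibling spec files in this merge rename attributes like ``pci_pf`` to ``pci-pf``. A quick check against that exact pattern:

```python
import re

# Pattern added to the spec's $defs: names are lower-case
# alphanumerics and dashes only -- no underscores, no upper case.
NAME_RE = re.compile(r"^[0-9a-z-]+$")

for name in ("pci-pf", "input-xfrm", "pci_pf", "Input-Xfrm"):
    status = "ok" if NAME_RE.fullmatch(name) else "rejected"
    print(f"{name}: {status}")
```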
+4 -4
Documentation/netlink/specs/devlink.yaml
··· 38 38 - 39 39 name: dsa 40 40 - 41 - name: pci_pf 41 + name: pci-pf 42 42 - 43 - name: pci_vf 43 + name: pci-vf 44 44 - 45 45 name: virtual 46 46 - 47 47 name: unused 48 48 - 49 - name: pci_sf 49 + name: pci-sf 50 50 - 51 51 type: enum 52 52 name: port-fn-state ··· 220 220 - 221 221 name: flag 222 222 - 223 - name: nul_string 223 + name: nul-string 224 224 value: 10 225 225 - 226 226 name: binary
+1 -1
Documentation/netlink/specs/dpll.yaml
··· 188 188 value: 10000 189 189 - 190 190 type: const 191 - name: pin-frequency-77_5-khz 191 + name: pin-frequency-77-5-khz 192 192 value: 77500 193 193 - 194 194 type: const
+3 -3
Documentation/netlink/specs/ethtool.yaml
··· 48 48 name: started 49 49 doc: The firmware flashing process has started. 50 50 - 51 - name: in_progress 51 + name: in-progress 52 52 doc: The firmware flashing process is in progress. 53 53 - 54 54 name: completed ··· 1473 1473 name: hkey 1474 1474 type: binary 1475 1475 - 1476 - name: input_xfrm 1476 + name: input-xfrm 1477 1477 type: u32 1478 1478 - 1479 1479 name: start-context ··· 2306 2306 - hfunc 2307 2307 - indir 2308 2308 - hkey 2309 - - input_xfrm 2309 + - input-xfrm 2310 2310 dump: 2311 2311 request: 2312 2312 attributes:
+18 -18
Documentation/netlink/specs/fou.yaml
··· 15 15 definitions: 16 16 - 17 17 type: enum 18 - name: encap_type 18 + name: encap-type 19 19 name-prefix: fou-encap- 20 20 enum-name: 21 21 entries: [unspec, direct, gue] ··· 43 43 name: type 44 44 type: u8 45 45 - 46 - name: remcsum_nopartial 46 + name: remcsum-nopartial 47 47 type: flag 48 48 - 49 - name: local_v4 49 + name: local-v4 50 50 type: u32 51 51 - 52 - name: local_v6 52 + name: local-v6 53 53 type: binary 54 54 checks: 55 55 min-len: 16 56 56 - 57 - name: peer_v4 57 + name: peer-v4 58 58 type: u32 59 59 - 60 - name: peer_v6 60 + name: peer-v6 61 61 type: binary 62 62 checks: 63 63 min-len: 16 64 64 - 65 - name: peer_port 65 + name: peer-port 66 66 type: u16 67 67 byte-order: big-endian 68 68 - ··· 90 90 - port 91 91 - ipproto 92 92 - type 93 - - remcsum_nopartial 94 - - local_v4 95 - - peer_v4 96 - - local_v6 97 - - peer_v6 98 - - peer_port 93 + - remcsum-nopartial 94 + - local-v4 95 + - peer-v4 96 + - local-v6 97 + - peer-v6 98 + - peer-port 99 99 - ifindex 100 100 101 101 - ··· 112 112 - af 113 113 - ifindex 114 114 - port 115 - - peer_port 116 - - local_v4 117 - - peer_v4 118 - - local_v6 119 - - peer_v6 115 + - peer-port 116 + - local-v4 117 + - peer-v4 118 + - local-v6 119 + - peer-v6 120 120 121 121 - 122 122 name: get
+4 -4
Documentation/netlink/specs/mptcp_pm.yaml
··· 57 57 doc: >- 58 58 A new subflow has been established. 'error' should not be set. 59 59 Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | 60 - daddr6, sport, dport, backup, if_idx [, error]. 60 + daddr6, sport, dport, backup, if-idx [, error]. 61 61 - 62 62 name: sub-closed 63 63 doc: >- 64 64 A subflow has been closed. An error (copy of sk_err) could be set if 65 65 an error has been detected for this subflow. 66 66 Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | 67 - daddr6, sport, dport, backup, if_idx [, error]. 67 + daddr6, sport, dport, backup, if-idx [, error]. 68 68 - 69 69 name: sub-priority 70 70 value: 13 71 71 doc: >- 72 72 The priority of a subflow has changed. 'error' should not be set. 73 73 Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | 74 - daddr6, sport, dport, backup, if_idx [, error]. 74 + daddr6, sport, dport, backup, if-idx [, error]. 75 75 - 76 76 name: listener-created 77 77 value: 15 ··· 255 255 name: timeout 256 256 type: u32 257 257 - 258 - name: if_idx 258 + name: if-idx 259 259 type: u32 260 260 - 261 261 name: reset-reason
+2 -2
Documentation/netlink/specs/nfsd.yaml
··· 27 27 name: proc 28 28 type: u32 29 29 - 30 - name: service_time 30 + name: service-time 31 31 type: s64 32 32 - 33 33 name: pad ··· 139 139 - prog 140 140 - version 141 141 - proc 142 - - service_time 142 + - service-time 143 143 - saddr4 144 144 - daddr4 145 145 - saddr6
+3 -3
Documentation/netlink/specs/ovs_flow.yaml
··· 216 216 type: struct 217 217 members: 218 218 - 219 - name: nd_target 219 + name: nd-target 220 220 type: binary 221 221 len: 16 222 222 byte-order: big-endian ··· 258 258 type: struct 259 259 members: 260 260 - 261 - name: vlan_tpid 261 + name: vlan-tpid 262 262 type: u16 263 263 byte-order: big-endian 264 264 doc: Tag protocol identifier (TPID) to push. 265 265 - 266 - name: vlan_tci 266 + name: vlan-tci 267 267 type: u16 268 268 byte-order: big-endian 269 269 doc: Tag control identifier (TCI) to push.
+2 -2
Documentation/netlink/specs/rt-link.yaml
··· 603 603 name: optmask 604 604 type: u32 605 605 - 606 - name: if_stats_msg 606 + name: if-stats-msg 607 607 type: struct 608 608 members: 609 609 - ··· 2486 2486 name: getstats 2487 2487 doc: Get / dump link stats. 2488 2488 attribute-set: stats-attrs 2489 - fixed-header: if_stats_msg 2489 + fixed-header: if-stats-msg 2490 2490 do: 2491 2491 request: 2492 2492 value: 94
+2 -2
Documentation/netlink/specs/tc.yaml
··· 233 233 type: u8 234 234 doc: log(P_max / (qth-max - qth-min)) 235 235 - 236 - name: Scell_log 236 + name: Scell-log 237 237 type: u8 238 238 doc: cell size for idle damping 239 239 - ··· 254 254 name: DPs 255 255 type: u32 256 256 - 257 - name: def_DP 257 + name: def-DP 258 258 type: u32 259 259 - 260 260 name: grio
+1 -1
Documentation/networking/device_drivers/ethernet/marvell/octeontx2.rst
··· 66 66 As mentioned above RVU PF0 is called the admin function (AF), this driver 67 67 supports resource provisioning and configuration of functional blocks. 68 68 Doesn't handle any I/O. It sets up few basic stuff but most of the 69 - funcionality is achieved via configuration requests from PFs and VFs. 69 + functionality is achieved via configuration requests from PFs and VFs. 70 70 71 71 PF/VFs communicates with AF via a shared memory region (mailbox). Upon 72 72 receiving requests AF does resource provisioning and other HW configuration.
+19 -5
Documentation/sound/codecs/cs35l56.rst
··· 1 1 .. SPDX-License-Identifier: GPL-2.0-only 2 2 3 - ===================================================================== 4 - Audio drivers for Cirrus Logic CS35L54/56/57 Boosted Smart Amplifiers 5 - ===================================================================== 3 + ======================================================================== 4 + Audio drivers for Cirrus Logic CS35L54/56/57/63 Boosted Smart Amplifiers 5 + ======================================================================== 6 6 :Copyright: 2025 Cirrus Logic, Inc. and 7 7 Cirrus Logic International Semiconductor Ltd. 8 8 ··· 13 13 14 14 The high-level summary of this document is: 15 15 16 - **If you have a laptop that uses CS35L54/56/57 amplifiers but audio is not 16 + **If you have a laptop that uses CS35L54/56/57/63 amplifiers but audio is not 17 17 working, DO NOT ATTEMPT TO USE FIRMWARE AND SETTINGS FROM ANOTHER LAPTOP, 18 18 EVEN IF THAT LAPTOP SEEMS SIMILAR.** 19 19 20 - The CS35L54/56/57 amplifiers must be correctly configured for the power 20 + The CS35L54/56/57/63 amplifiers must be correctly configured for the power 21 21 supply voltage, speaker impedance, maximum speaker voltage/current, and 22 22 other external hardware connections. 23 23 ··· 34 34 * CS35L54 35 35 * CS35L56 36 36 * CS35L57 37 + * CS35L63 37 38 38 39 There are two drivers in the kernel 39 40 ··· 105 104 106 105 The format of the firmware file names is: 107 106 107 + SoundWire (except CS35L56 Rev B0): 108 + cs35lxx-b0-dsp1-misc-SSID[-spkidX]-l?u? 109 + 110 + SoundWire CS35L56 Rev B0: 111 + cs35lxx-b0-dsp1-misc-SSID[-spkidX]-ampN 112 + 113 + Non-SoundWire (HDA and I2S): 108 114 cs35lxx-b0-dsp1-misc-SSID[-spkidX]-ampN 109 115 110 116 Where: ··· 119 111 * cs35lxx-b0 is the amplifier model and silicon revision. This information 120 112 is logged by the driver during initialization. 121 113 * SSID is the 8-digit hexadecimal SSID value. 114 + * l?u? 
is the physical address on the SoundWire bus of the amp this 115 + file applies to. 122 116 * ampN is the amplifier number (for example amp1). This is the same as 123 117 the prefix on the ALSA control names except that it is always lower-case 124 118 in the file name. 125 119 * spkidX is an optional part, used for laptops that have firmware 126 120 configurations for different makes and models of internal speakers. 121 + 122 + The CS35L56 Rev B0 continues to use the old filename scheme because a 123 + large number of firmware files have already been published with these 124 + names. 127 125 128 126 Sound Open Firmware and ALSA topology files 129 127 -------------------------------------------
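The firmware naming scheme above can be sketched as a small formatter. This is a hypothetical helper, not driver code; it only mirrors the documented ``cs35lxx-b0-dsp1-misc-SSID[-spkidX]`` prefix with either the SoundWire ``l?u?`` address or the ``ampN`` suffix, and assumes the all-lower-case convention the document states for file names:

```python
def cs35lxx_fw_name(model_rev: str, ssid: str, amp: int = None,
                    sdw_addr: str = None, spkid: int = None) -> str:
    """Assemble a CS35L54/56/57/63 firmware file name.

    `sdw_addr` (the l?u? SoundWire bus address) and `amp` are
    alternative suffixes; `spkid` is the optional speaker-ID part.
    """
    parts = [model_rev, "dsp1", "misc", ssid.lower()]
    if spkid is not None:
        parts.append(f"spkid{spkid}")
    parts.append(sdw_addr if sdw_addr is not None else f"amp{amp}")
    return "-".join(parts)
```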
+58 -1
Documentation/virt/kvm/api.rst
··· 6645 6645 .. note:: 6646 6646 6647 6647 For KVM_EXIT_IO, KVM_EXIT_MMIO, KVM_EXIT_OSI, KVM_EXIT_PAPR, KVM_EXIT_XEN, 6648 - KVM_EXIT_EPR, KVM_EXIT_X86_RDMSR and KVM_EXIT_X86_WRMSR the corresponding 6648 + KVM_EXIT_EPR, KVM_EXIT_HYPERCALL, KVM_EXIT_TDX, 6649 + KVM_EXIT_X86_RDMSR and KVM_EXIT_X86_WRMSR the corresponding 6649 6650 operations are complete (and guest state is consistent) only after userspace 6650 6651 has re-entered the kernel with KVM_RUN. The kernel side will first finish 6651 6652 incomplete operations and then check for pending signals. ··· 7175 7174 - KVM_NOTIFY_CONTEXT_INVALID -- the VM context is corrupted and not valid 7176 7175 in VMCS. It would run into unknown result if resume the target VM. 7177 7176 7177 + :: 7178 + 7179 + /* KVM_EXIT_TDX */ 7180 + struct { 7181 + __u64 flags; 7182 + __u64 nr; 7183 + union { 7184 + struct { 7185 + u64 ret; 7186 + u64 data[5]; 7187 + } unknown; 7188 + struct { 7189 + u64 ret; 7190 + u64 gpa; 7191 + u64 size; 7192 + } get_quote; 7193 + struct { 7194 + u64 ret; 7195 + u64 leaf; 7196 + u64 r11, r12, r13, r14; 7197 + } get_tdvmcall_info; 7198 + }; 7199 + } tdx; 7200 + 7201 + Process a TDVMCALL from the guest. KVM forwards select TDVMCALL based 7202 + on the Guest-Hypervisor Communication Interface (GHCI) specification; 7203 + KVM bridges these requests to the userspace VMM with minimal changes, 7204 + placing the inputs in the union and copying them back to the guest 7205 + on re-entry. 7206 + 7207 + Flags are currently always zero, whereas ``nr`` contains the TDVMCALL 7208 + number from register R11. The remaining field of the union provide the 7209 + inputs and outputs of the TDVMCALL. Currently the following values of 7210 + ``nr`` are defined: 7211 + 7212 + * ``TDVMCALL_GET_QUOTE``: the guest has requested to generate a TD-Quote 7213 + signed by a service hosting TD-Quoting Enclave operating on the host. 7214 + Parameters and return value are in the ``get_quote`` field of the union. 
7215 + The ``gpa`` field and ``size`` specify the guest physical address 7216 + (without the shared bit set) and the size of a shared-memory buffer, in 7217 + which the TDX guest passes a TD Report. The ``ret`` field represents 7218 + the return value of the GetQuote request. When the request has been 7219 + queued successfully, the TDX guest can poll the status field in the 7220 + shared-memory area to check whether the Quote generation is completed or 7221 + not. When completed, the generated Quote is returned via the same buffer. 7222 + 7223 + * ``TDVMCALL_GET_TD_VM_CALL_INFO``: the guest has requested the support 7224 + status of TDVMCALLs. The output values for the given leaf should be 7225 + placed in fields from ``r11`` to ``r14`` of the ``get_tdvmcall_info`` 7226 + field of the union. 7227 + 7228 + KVM may add support for more values in the future that may cause a userspace 7229 + exit, even without calls to ``KVM_ENABLE_CAP`` or similar. In this case, 7230 + it will enter with output fields already valid; in the common case, the 7231 + ``unknown.ret`` field of the union will be ``TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED``. 7232 + Userspace need not do anything if it does not wish to support a TDVMCALL. 7178 7233 :: 7179 7234 7180 7235 /* Fix the size of the union. */
+20 -3
MAINTAINERS
··· 10839 10839 F: drivers/dma/hisi_dma.c 10840 10840 10841 10841 HISILICON GPIO DRIVER 10842 - M: Jay Fang <f.fangjian@huawei.com> 10842 + M: Yang Shen <shenyang39@huawei.com> 10843 10843 L: linux-gpio@vger.kernel.org 10844 10844 S: Maintained 10845 10845 F: Documentation/devicetree/bindings/gpio/hisilicon,ascend910-gpio.yaml ··· 11155 11155 11156 11156 HUGETLB SUBSYSTEM 11157 11157 M: Muchun Song <muchun.song@linux.dev> 11158 - R: Oscar Salvador <osalvador@suse.de> 11158 + M: Oscar Salvador <osalvador@suse.de> 11159 + R: David Hildenbrand <david@redhat.com> 11159 11160 L: linux-mm@kvack.org 11160 11161 S: Maintained 11161 11162 F: Documentation/ABI/testing/sysfs-kernel-mm-hugepages ··· 11167 11166 F: include/linux/hugetlb.h 11168 11167 F: include/trace/events/hugetlbfs.h 11169 11168 F: mm/hugetlb.c 11169 + F: mm/hugetlb_cgroup.c 11170 11170 F: mm/hugetlb_cma.c 11171 11171 F: mm/hugetlb_cma.h 11172 11172 F: mm/hugetlb_vmemmap.c ··· 13347 13345 M: Mike Rapoport <rppt@kernel.org> 13348 13346 M: Changyuan Lyu <changyuanl@google.com> 13349 13347 L: kexec@lists.infradead.org 13348 + L: linux-mm@kvack.org 13350 13349 S: Maintained 13351 13350 F: Documentation/admin-guide/mm/kho.rst 13352 13351 F: Documentation/core-api/kho/* ··· 15679 15676 F: Documentation/core-api/boot-time-mm.rst 15680 15677 F: Documentation/core-api/kho/bindings/memblock/* 15681 15678 F: include/linux/memblock.h 15679 + F: mm/bootmem_info.c 15682 15680 F: mm/memblock.c 15681 + F: mm/memtest.c 15683 15682 F: mm/mm_init.c 15683 + F: mm/rodata_test.c 15684 15684 F: tools/testing/memblock/ 15685 15685 15686 15686 MEMORY ALLOCATION PROFILING ··· 15738 15732 F: Documentation/mm/ 15739 15733 F: include/linux/gfp.h 15740 15734 F: include/linux/gfp_types.h 15741 - F: include/linux/memfd.h 15742 15735 F: include/linux/memory_hotplug.h 15743 15736 F: include/linux/memory-tiers.h 15744 15737 F: include/linux/mempolicy.h ··· 15797 15792 W: http://www.linux-mm.org 15798 15793 T: git 
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm 15799 15794 F: mm/gup.c 15795 + F: mm/gup_test.c 15796 + F: mm/gup_test.h 15797 + F: tools/testing/selftests/mm/gup_longterm.c 15798 + F: tools/testing/selftests/mm/gup_test.c 15800 15799 15801 15800 MEMORY MANAGEMENT - KSM (Kernel Samepage Merging) 15802 15801 M: Andrew Morton <akpm@linux-foundation.org> ··· 15877 15868 S: Maintained 15878 15869 F: mm/pt_reclaim.c 15879 15870 F: mm/vmscan.c 15871 + F: mm/workingset.c 15880 15872 15881 15873 MEMORY MANAGEMENT - RMAP (REVERSE MAPPING) 15882 15874 M: Andrew Morton <akpm@linux-foundation.org> ··· 15890 15880 L: linux-mm@kvack.org 15891 15881 S: Maintained 15892 15882 F: include/linux/rmap.h 15883 + F: mm/page_vma_mapped.c 15893 15884 F: mm/rmap.c 15894 15885 15895 15886 MEMORY MANAGEMENT - SECRETMEM ··· 15983 15972 W: http://www.linux-mm.org 15984 15973 T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm 15985 15974 F: include/trace/events/mmap.h 15975 + F: mm/mincore.c 15986 15976 F: mm/mlock.c 15987 15977 F: mm/mmap.c 15988 15978 F: mm/mprotect.c 15989 15979 F: mm/mremap.c 15990 15980 F: mm/mseal.c 15981 + F: mm/msync.c 15982 + F: mm/nommu.c 15991 15983 F: mm/vma.c 15992 15984 F: mm/vma.h 15993 15985 F: mm/vma_exec.c ··· 25041 25027 R: Baolin Wang <baolin.wang@linux.alibaba.com> 25042 25028 L: linux-mm@kvack.org 25043 25029 S: Maintained 25030 + F: include/linux/memfd.h 25044 25031 F: include/linux/shmem_fs.h 25032 + F: mm/memfd.c 25045 25033 F: mm/shmem.c 25034 + F: mm/shmem_quota.c 25046 25035 25047 25036 TOMOYO SECURITY MODULE 25048 25037 M: Kentaro Takeda <takedakn@nttdata.co.jp>
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 16 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc2 5 + EXTRAVERSION = -rc3 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
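The version bump above changes only EXTRAVERSION; the resulting kernel release string is composed from the four Makefile variables. A simplified sketch (the real kbuild logic conditionally omits empty components, which this ignores):

```python
def kernelversion(version: int, patchlevel: int, sublevel: int,
                  extraversion: str) -> str:
    """Approximate how the top-level Makefile composes KERNELVERSION
    from VERSION, PATCHLEVEL, SUBLEVEL and EXTRAVERSION."""
    return f"{version}.{patchlevel}.{sublevel}{extraversion}"
```

With the values from this commit, `kernelversion(6, 16, 0, "-rc3")` yields the 6.16-rc3 release string.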
-62
arch/arm64/include/asm/kvm_emulate.h
··· 561 561 vcpu_set_flag((v), e); \ 562 562 } while (0) 563 563 564 - #define __build_check_all_or_none(r, bits) \ 565 - BUILD_BUG_ON(((r) & (bits)) && ((r) & (bits)) != (bits)) 566 - 567 - #define __cpacr_to_cptr_clr(clr, set) \ 568 - ({ \ 569 - u64 cptr = 0; \ 570 - \ 571 - if ((set) & CPACR_EL1_FPEN) \ 572 - cptr |= CPTR_EL2_TFP; \ 573 - if ((set) & CPACR_EL1_ZEN) \ 574 - cptr |= CPTR_EL2_TZ; \ 575 - if ((set) & CPACR_EL1_SMEN) \ 576 - cptr |= CPTR_EL2_TSM; \ 577 - if ((clr) & CPACR_EL1_TTA) \ 578 - cptr |= CPTR_EL2_TTA; \ 579 - if ((clr) & CPTR_EL2_TAM) \ 580 - cptr |= CPTR_EL2_TAM; \ 581 - if ((clr) & CPTR_EL2_TCPAC) \ 582 - cptr |= CPTR_EL2_TCPAC; \ 583 - \ 584 - cptr; \ 585 - }) 586 - 587 - #define __cpacr_to_cptr_set(clr, set) \ 588 - ({ \ 589 - u64 cptr = 0; \ 590 - \ 591 - if ((clr) & CPACR_EL1_FPEN) \ 592 - cptr |= CPTR_EL2_TFP; \ 593 - if ((clr) & CPACR_EL1_ZEN) \ 594 - cptr |= CPTR_EL2_TZ; \ 595 - if ((clr) & CPACR_EL1_SMEN) \ 596 - cptr |= CPTR_EL2_TSM; \ 597 - if ((set) & CPACR_EL1_TTA) \ 598 - cptr |= CPTR_EL2_TTA; \ 599 - if ((set) & CPTR_EL2_TAM) \ 600 - cptr |= CPTR_EL2_TAM; \ 601 - if ((set) & CPTR_EL2_TCPAC) \ 602 - cptr |= CPTR_EL2_TCPAC; \ 603 - \ 604 - cptr; \ 605 - }) 606 - 607 - #define cpacr_clear_set(clr, set) \ 608 - do { \ 609 - BUILD_BUG_ON((set) & CPTR_VHE_EL2_RES0); \ 610 - BUILD_BUG_ON((clr) & CPACR_EL1_E0POE); \ 611 - __build_check_all_or_none((clr), CPACR_EL1_FPEN); \ 612 - __build_check_all_or_none((set), CPACR_EL1_FPEN); \ 613 - __build_check_all_or_none((clr), CPACR_EL1_ZEN); \ 614 - __build_check_all_or_none((set), CPACR_EL1_ZEN); \ 615 - __build_check_all_or_none((clr), CPACR_EL1_SMEN); \ 616 - __build_check_all_or_none((set), CPACR_EL1_SMEN); \ 617 - \ 618 - if (has_vhe() || has_hvhe()) \ 619 - sysreg_clear_set(cpacr_el1, clr, set); \ 620 - else \ 621 - sysreg_clear_set(cptr_el2, \ 622 - __cpacr_to_cptr_clr(clr, set), \ 623 - __cpacr_to_cptr_set(clr, set));\ 624 - } while (0) 625 - 626 564 /* 627 565 * Returns a 
'sanitised' view of CPTR_EL2, translating from nVHE to the VHE 628 566 * format if E2H isn't set.
+2 -4
arch/arm64/include/asm/kvm_host.h
··· 1289 1289 }) 1290 1290 1291 1291 /* 1292 - * The couple of isb() below are there to guarantee the same behaviour 1293 - * on VHE as on !VHE, where the eret to EL1 acts as a context 1294 - * synchronization event. 1292 + * The isb() below is there to guarantee the same behaviour on VHE as on !VHE, 1293 + * where the eret to EL1 acts as a context synchronization event. 1295 1294 */ 1296 1295 #define kvm_call_hyp(f, ...) \ 1297 1296 do { \ ··· 1308 1309 \ 1309 1310 if (has_vhe()) { \ 1310 1311 ret = f(__VA_ARGS__); \ 1311 - isb(); \ 1312 1312 } else { \ 1313 1313 ret = kvm_call_hyp_nvhe(f, ##__VA_ARGS__); \ 1314 1314 } \
+3 -1
arch/arm64/kernel/process.c
··· 288 288 if (!system_supports_gcs()) 289 289 return; 290 290 291 - gcs_free(current); 291 + current->thread.gcspr_el0 = 0; 292 + current->thread.gcs_base = 0; 293 + current->thread.gcs_size = 0; 292 294 current->thread.gcs_el0_mode = 0; 293 295 write_sysreg_s(GCSCRE0_EL1_nTR, SYS_GCSCRE0_EL1); 294 296 write_sysreg_s(0, SYS_GCSPR_EL0);
+1 -1
arch/arm64/kernel/ptrace.c
··· 141 141 142 142 addr += n; 143 143 if (regs_within_kernel_stack(regs, (unsigned long)addr)) 144 - return *addr; 144 + return READ_ONCE_NOCHECK(*addr); 145 145 else 146 146 return 0; 147 147 }
+2 -1
arch/arm64/kvm/arm.c
··· 2764 2764 bool kvm_arch_irqfd_route_changed(struct kvm_kernel_irq_routing_entry *old, 2765 2765 struct kvm_kernel_irq_routing_entry *new) 2766 2766 { 2767 - if (new->type != KVM_IRQ_ROUTING_MSI) 2767 + if (old->type != KVM_IRQ_ROUTING_MSI || 2768 + new->type != KVM_IRQ_ROUTING_MSI) 2768 2769 return true; 2769 2770 2770 2771 return memcmp(&old->msi, &new->msi, sizeof(new->msi));
+138 -9
arch/arm64/kvm/hyp/include/hyp/switch.h
··· 65 65 } 66 66 } 67 67 68 + static inline void __activate_cptr_traps_nvhe(struct kvm_vcpu *vcpu) 69 + { 70 + u64 val = CPTR_NVHE_EL2_RES1 | CPTR_EL2_TAM | CPTR_EL2_TTA; 71 + 72 + /* 73 + * Always trap SME since it's not supported in KVM. 74 + * TSM is RES1 if SME isn't implemented. 75 + */ 76 + val |= CPTR_EL2_TSM; 77 + 78 + if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs()) 79 + val |= CPTR_EL2_TZ; 80 + 81 + if (!guest_owns_fp_regs()) 82 + val |= CPTR_EL2_TFP; 83 + 84 + write_sysreg(val, cptr_el2); 85 + } 86 + 87 + static inline void __activate_cptr_traps_vhe(struct kvm_vcpu *vcpu) 88 + { 89 + /* 90 + * With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to 91 + * CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2, 92 + * except for some missing controls, such as TAM. 93 + * In this case, CPTR_EL2.TAM has the same position with or without 94 + * VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM 95 + * shift value for trapping the AMU accesses. 96 + */ 97 + u64 val = CPTR_EL2_TAM | CPACR_EL1_TTA; 98 + u64 cptr; 99 + 100 + if (guest_owns_fp_regs()) { 101 + val |= CPACR_EL1_FPEN; 102 + if (vcpu_has_sve(vcpu)) 103 + val |= CPACR_EL1_ZEN; 104 + } 105 + 106 + if (!vcpu_has_nv(vcpu)) 107 + goto write; 108 + 109 + /* 110 + * The architecture is a bit crap (what a surprise): an EL2 guest 111 + * writing to CPTR_EL2 via CPACR_EL1 can't set any of TCPAC or TTA, 112 + * as they are RES0 in the guest's view. To work around it, trap the 113 + * sucker using the very same bit it can't set... 114 + */ 115 + if (vcpu_el2_e2h_is_set(vcpu) && is_hyp_ctxt(vcpu)) 116 + val |= CPTR_EL2_TCPAC; 117 + 118 + /* 119 + * Layer the guest hypervisor's trap configuration on top of our own if 120 + * we're in a nested context. 121 + */ 122 + if (is_hyp_ctxt(vcpu)) 123 + goto write; 124 + 125 + cptr = vcpu_sanitised_cptr_el2(vcpu); 126 + 127 + /* 128 + * Pay attention, there's some interesting detail here. 
129 + * 130 + * The CPTR_EL2.xEN fields are 2 bits wide, although there are only two 131 + * meaningful trap states when HCR_EL2.TGE = 0 (running a nested guest): 132 + * 133 + * - CPTR_EL2.xEN = x0, traps are enabled 134 + * - CPTR_EL2.xEN = x1, traps are disabled 135 + * 136 + * In other words, bit[0] determines if guest accesses trap or not. In 137 + * the interest of simplicity, clear the entire field if the guest 138 + * hypervisor has traps enabled to dispel any illusion of something more 139 + * complicated taking place. 140 + */ 141 + if (!(SYS_FIELD_GET(CPACR_EL1, FPEN, cptr) & BIT(0))) 142 + val &= ~CPACR_EL1_FPEN; 143 + if (!(SYS_FIELD_GET(CPACR_EL1, ZEN, cptr) & BIT(0))) 144 + val &= ~CPACR_EL1_ZEN; 145 + 146 + if (kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, S2POE, IMP)) 147 + val |= cptr & CPACR_EL1_E0POE; 148 + 149 + val |= cptr & CPTR_EL2_TCPAC; 150 + 151 + write: 152 + write_sysreg(val, cpacr_el1); 153 + } 154 + 155 + static inline void __activate_cptr_traps(struct kvm_vcpu *vcpu) 156 + { 157 + if (!guest_owns_fp_regs()) 158 + __activate_traps_fpsimd32(vcpu); 159 + 160 + if (has_vhe() || has_hvhe()) 161 + __activate_cptr_traps_vhe(vcpu); 162 + else 163 + __activate_cptr_traps_nvhe(vcpu); 164 + } 165 + 166 + static inline void __deactivate_cptr_traps_nvhe(struct kvm_vcpu *vcpu) 167 + { 168 + u64 val = CPTR_NVHE_EL2_RES1; 169 + 170 + if (!cpus_have_final_cap(ARM64_SVE)) 171 + val |= CPTR_EL2_TZ; 172 + if (!cpus_have_final_cap(ARM64_SME)) 173 + val |= CPTR_EL2_TSM; 174 + 175 + write_sysreg(val, cptr_el2); 176 + } 177 + 178 + static inline void __deactivate_cptr_traps_vhe(struct kvm_vcpu *vcpu) 179 + { 180 + u64 val = CPACR_EL1_FPEN; 181 + 182 + if (cpus_have_final_cap(ARM64_SVE)) 183 + val |= CPACR_EL1_ZEN; 184 + if (cpus_have_final_cap(ARM64_SME)) 185 + val |= CPACR_EL1_SMEN; 186 + 187 + write_sysreg(val, cpacr_el1); 188 + } 189 + 190 + static inline void __deactivate_cptr_traps(struct kvm_vcpu *vcpu) 191 + { 192 + if (has_vhe() || has_hvhe()) 193 + 
__deactivate_cptr_traps_vhe(vcpu); 194 + else 195 + __deactivate_cptr_traps_nvhe(vcpu); 196 + } 197 + 68 198 #define reg_to_fgt_masks(reg) \ 69 199 ({ \ 70 200 struct fgt_masks *m; \ ··· 616 486 */ 617 487 if (system_supports_sve()) { 618 488 __hyp_sve_save_host(); 619 - 620 - /* Re-enable SVE traps if not supported for the guest vcpu. */ 621 - if (!vcpu_has_sve(vcpu)) 622 - cpacr_clear_set(CPACR_EL1_ZEN, 0); 623 - 624 489 } else { 625 490 __fpsimd_save_state(host_data_ptr(host_ctxt.fp_regs)); 626 491 } ··· 666 541 /* Valid trap. Switch the context: */ 667 542 668 543 /* First disable enough traps to allow us to update the registers */ 669 - if (sve_guest || (is_protected_kvm_enabled() && system_supports_sve())) 670 - cpacr_clear_set(0, CPACR_EL1_FPEN | CPACR_EL1_ZEN); 671 - else 672 - cpacr_clear_set(0, CPACR_EL1_FPEN); 544 + __deactivate_cptr_traps(vcpu); 673 545 isb(); 674 546 675 547 /* Write out the host state if it's in the registers */ ··· 687 565 write_sysreg(__vcpu_sys_reg(vcpu, FPEXC32_EL2), fpexc32_el2); 688 566 689 567 *host_data_ptr(fp_owner) = FP_STATE_GUEST_OWNED; 568 + 569 + /* 570 + * Re-enable traps necessary for the current state of the guest, e.g. 571 + * those enabled by a guest hypervisor. The ERET to the guest will 572 + * provide the necessary context synchronization. 573 + */ 574 + __activate_cptr_traps(vcpu); 690 575 691 576 return true; 692 577 }
+4 -1
arch/arm64/kvm/hyp/nvhe/hyp-main.c
··· 69 69 if (!guest_owns_fp_regs()) 70 70 return; 71 71 72 - cpacr_clear_set(0, CPACR_EL1_FPEN | CPACR_EL1_ZEN); 72 + /* 73 + * Traps have been disabled by __deactivate_cptr_traps(), but there 74 + * hasn't necessarily been a context synchronization event yet. 75 + */ 73 76 isb(); 74 77 75 78 if (vcpu_has_sve(vcpu))
-59
arch/arm64/kvm/hyp/nvhe/switch.c
··· 47 47 48 48 extern void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc); 49 49 50 - static void __activate_cptr_traps(struct kvm_vcpu *vcpu) 51 - { 52 - u64 val = CPTR_EL2_TAM; /* Same bit irrespective of E2H */ 53 - 54 - if (!guest_owns_fp_regs()) 55 - __activate_traps_fpsimd32(vcpu); 56 - 57 - if (has_hvhe()) { 58 - val |= CPACR_EL1_TTA; 59 - 60 - if (guest_owns_fp_regs()) { 61 - val |= CPACR_EL1_FPEN; 62 - if (vcpu_has_sve(vcpu)) 63 - val |= CPACR_EL1_ZEN; 64 - } 65 - 66 - write_sysreg(val, cpacr_el1); 67 - } else { 68 - val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1; 69 - 70 - /* 71 - * Always trap SME since it's not supported in KVM. 72 - * TSM is RES1 if SME isn't implemented. 73 - */ 74 - val |= CPTR_EL2_TSM; 75 - 76 - if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs()) 77 - val |= CPTR_EL2_TZ; 78 - 79 - if (!guest_owns_fp_regs()) 80 - val |= CPTR_EL2_TFP; 81 - 82 - write_sysreg(val, cptr_el2); 83 - } 84 - } 85 - 86 - static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu) 87 - { 88 - if (has_hvhe()) { 89 - u64 val = CPACR_EL1_FPEN; 90 - 91 - if (cpus_have_final_cap(ARM64_SVE)) 92 - val |= CPACR_EL1_ZEN; 93 - if (cpus_have_final_cap(ARM64_SME)) 94 - val |= CPACR_EL1_SMEN; 95 - 96 - write_sysreg(val, cpacr_el1); 97 - } else { 98 - u64 val = CPTR_NVHE_EL2_RES1; 99 - 100 - if (!cpus_have_final_cap(ARM64_SVE)) 101 - val |= CPTR_EL2_TZ; 102 - if (!cpus_have_final_cap(ARM64_SME)) 103 - val |= CPTR_EL2_TSM; 104 - 105 - write_sysreg(val, cptr_el2); 106 - } 107 - } 108 - 109 50 static void __activate_traps(struct kvm_vcpu *vcpu) 110 51 { 111 52 ___activate_traps(vcpu, vcpu->arch.hcr_el2);
+14 -93
arch/arm64/kvm/hyp/vhe/switch.c
··· 90 90 return hcr | (guest_hcr & ~NV_HCR_GUEST_EXCLUDE); 91 91 } 92 92 93 - static void __activate_cptr_traps(struct kvm_vcpu *vcpu) 94 - { 95 - u64 cptr; 96 - 97 - /* 98 - * With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to 99 - * CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2, 100 - * except for some missing controls, such as TAM. 101 - * In this case, CPTR_EL2.TAM has the same position with or without 102 - * VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM 103 - * shift value for trapping the AMU accesses. 104 - */ 105 - u64 val = CPACR_EL1_TTA | CPTR_EL2_TAM; 106 - 107 - if (guest_owns_fp_regs()) { 108 - val |= CPACR_EL1_FPEN; 109 - if (vcpu_has_sve(vcpu)) 110 - val |= CPACR_EL1_ZEN; 111 - } else { 112 - __activate_traps_fpsimd32(vcpu); 113 - } 114 - 115 - if (!vcpu_has_nv(vcpu)) 116 - goto write; 117 - 118 - /* 119 - * The architecture is a bit crap (what a surprise): an EL2 guest 120 - * writing to CPTR_EL2 via CPACR_EL1 can't set any of TCPAC or TTA, 121 - * as they are RES0 in the guest's view. To work around it, trap the 122 - * sucker using the very same bit it can't set... 123 - */ 124 - if (vcpu_el2_e2h_is_set(vcpu) && is_hyp_ctxt(vcpu)) 125 - val |= CPTR_EL2_TCPAC; 126 - 127 - /* 128 - * Layer the guest hypervisor's trap configuration on top of our own if 129 - * we're in a nested context. 130 - */ 131 - if (is_hyp_ctxt(vcpu)) 132 - goto write; 133 - 134 - cptr = vcpu_sanitised_cptr_el2(vcpu); 135 - 136 - /* 137 - * Pay attention, there's some interesting detail here. 138 - * 139 - * The CPTR_EL2.xEN fields are 2 bits wide, although there are only two 140 - * meaningful trap states when HCR_EL2.TGE = 0 (running a nested guest): 141 - * 142 - * - CPTR_EL2.xEN = x0, traps are enabled 143 - * - CPTR_EL2.xEN = x1, traps are disabled 144 - * 145 - * In other words, bit[0] determines if guest accesses trap or not. In
146 - * the interest of simplicity, clear the entire field if the guest 147 - * hypervisor has traps enabled to dispel any illusion of something more 148 - * complicated taking place. 149 - */ 150 - if (!(SYS_FIELD_GET(CPACR_EL1, FPEN, cptr) & BIT(0))) 151 - val &= ~CPACR_EL1_FPEN; 152 - if (!(SYS_FIELD_GET(CPACR_EL1, ZEN, cptr) & BIT(0))) 153 - val &= ~CPACR_EL1_ZEN; 154 - 155 - if (kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, S2POE, IMP)) 156 - val |= cptr & CPACR_EL1_E0POE; 157 - 158 - val |= cptr & CPTR_EL2_TCPAC; 159 - 160 - write: 161 - write_sysreg(val, cpacr_el1); 162 - } 163 - 164 - static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu) 165 - { 166 - u64 val = CPACR_EL1_FPEN | CPACR_EL1_ZEN_EL1EN; 167 - 168 - if (cpus_have_final_cap(ARM64_SME)) 169 - val |= CPACR_EL1_SMEN_EL1EN; 170 - 171 - write_sysreg(val, cpacr_el1); 172 - } 173 - 174 93 static void __activate_traps(struct kvm_vcpu *vcpu) 175 94 { 176 95 u64 val; ··· 558 639 host_ctxt = host_data_ptr(host_ctxt); 559 640 guest_ctxt = &vcpu->arch.ctxt; 560 641 561 - sysreg_save_host_state_vhe(host_ctxt); 562 - 563 642 fpsimd_lazy_switch_to_guest(vcpu); 643 + 644 + sysreg_save_host_state_vhe(host_ctxt); 564 645 565 646 /* 566 647 * Note that ARM erratum 1165522 requires us to configure both stage 1 ··· 586 667 587 668 __deactivate_traps(vcpu); 588 669 589 - fpsimd_lazy_switch_to_host(vcpu); 590 - 591 670 sysreg_restore_host_state_vhe(host_ctxt); 671 + 672 + __debug_switch_to_host(vcpu); 673 + 674 + /* 675 + * Ensure that all system register writes above have taken effect 676 + * before returning to the host. In VHE mode, CPTR traps for 677 + * FPSIMD/SVE/SME also apply to EL2, so FPSIMD/SVE/SME state must be 678 + * manipulated after the ISB.
679 + */ 680 + isb(); 681 + 682 + fpsimd_lazy_switch_to_host(vcpu); 592 683 593 684 if (guest_owns_fp_regs()) 594 685 __fpsimd_save_fpexc32(vcpu); 595 - 596 - __debug_switch_to_host(vcpu); 597 686 598 687 return exit_code; 599 688 } ··· 632 705 */ 633 706 local_daif_restore(DAIF_PROCCTX_NOIRQ); 634 707 635 - /* 636 - * When we exit from the guest we change a number of CPU configuration 637 - * parameters, such as traps. We rely on the isb() in kvm_call_hyp*() 638 - * to make sure these changes take effect before running the host or 639 - * additional guests. 640 - */ 641 708 return ret; 642 709 } 643 710
+42 -39
arch/arm64/kvm/vgic/vgic-v3-nested.c
··· 36 36 37 37 static DEFINE_PER_CPU(struct shadow_if, shadow_if); 38 38 39 + static int lr_map_idx_to_shadow_idx(struct shadow_if *shadow_if, int idx) 40 + { 41 + return hweight16(shadow_if->lr_map & (BIT(idx) - 1)); 42 + } 43 + 39 44 /* 40 45 * Nesting GICv3 support 41 46 * ··· 214 209 return reg; 215 210 } 216 211 212 + static u64 translate_lr_pintid(struct kvm_vcpu *vcpu, u64 lr) 213 + { 214 + struct vgic_irq *irq; 215 + 216 + if (!(lr & ICH_LR_HW)) 217 + return lr; 218 + 219 + /* We have the HW bit set, check for validity of pINTID */ 220 + irq = vgic_get_vcpu_irq(vcpu, FIELD_GET(ICH_LR_PHYS_ID_MASK, lr)); 221 + /* If there was no real mapping, nuke the HW bit */ 222 + if (!irq || !irq->hw || irq->intid > VGIC_MAX_SPI) 223 + lr &= ~ICH_LR_HW; 224 + 225 + /* Translate the virtual mapping to the real one, even if invalid */ 226 + if (irq) { 227 + lr &= ~ICH_LR_PHYS_ID_MASK; 228 + lr |= FIELD_PREP(ICH_LR_PHYS_ID_MASK, (u64)irq->hwintid); 229 + vgic_put_irq(vcpu->kvm, irq); 230 + } 231 + 232 + return lr; 233 + } 234 + 217 235 /* 218 236 * For LRs which have HW bit set such as timer interrupts, we modify them to 219 237 * have the host hardware interrupt number instead of the virtual one programmed ··· 245 217 static void vgic_v3_create_shadow_lr(struct kvm_vcpu *vcpu, 246 218 struct vgic_v3_cpu_if *s_cpu_if) 247 219 { 248 - unsigned long lr_map = 0; 249 - int index = 0; 220 + struct shadow_if *shadow_if; 221 + 222 + shadow_if = container_of(s_cpu_if, struct shadow_if, cpuif); 223 + shadow_if->lr_map = 0; 250 224 251 225 for (int i = 0; i < kvm_vgic_global_state.nr_lr; i++) { 252 226 u64 lr = __vcpu_sys_reg(vcpu, ICH_LRN(i)); 253 - struct vgic_irq *irq; 254 227 255 228 if (!(lr & ICH_LR_STATE)) 256 - lr = 0; 229 + continue; 257 230 258 - if (!(lr & ICH_LR_HW)) 259 - goto next; 231 + lr = translate_lr_pintid(vcpu, lr); 260 232 261 - /* We have the HW bit set, check for validity of pINTID */ 262 - irq = vgic_get_vcpu_irq(vcpu, FIELD_GET(ICH_LR_PHYS_ID_MASK, lr)); 
263 - if (!irq || !irq->hw || irq->intid > VGIC_MAX_SPI ) { 264 - /* There was no real mapping, so nuke the HW bit */ 265 - lr &= ~ICH_LR_HW; 266 - if (irq) 267 - vgic_put_irq(vcpu->kvm, irq); 268 - goto next; 269 - } 270 - 271 - /* Translate the virtual mapping to the real one */ 272 - lr &= ~ICH_LR_PHYS_ID_MASK; 273 - lr |= FIELD_PREP(ICH_LR_PHYS_ID_MASK, (u64)irq->hwintid); 274 - 275 - vgic_put_irq(vcpu->kvm, irq); 276 - 277 - next: 278 - s_cpu_if->vgic_lr[index] = lr; 279 - if (lr) { 280 - lr_map |= BIT(i); 281 - index++; 282 - } 233 + s_cpu_if->vgic_lr[hweight16(shadow_if->lr_map)] = lr; 234 + shadow_if->lr_map |= BIT(i); 283 235 } 284 236 285 - container_of(s_cpu_if, struct shadow_if, cpuif)->lr_map = lr_map; 286 - s_cpu_if->used_lrs = index; 237 + s_cpu_if->used_lrs = hweight16(shadow_if->lr_map); 287 238 } 288 239 289 240 void vgic_v3_sync_nested(struct kvm_vcpu *vcpu) 290 241 { 291 242 struct shadow_if *shadow_if = get_shadow_if(); 292 - int i, index = 0; 243 + int i; 293 244 294 245 for_each_set_bit(i, &shadow_if->lr_map, kvm_vgic_global_state.nr_lr) { 295 246 u64 lr = __vcpu_sys_reg(vcpu, ICH_LRN(i)); 296 247 struct vgic_irq *irq; 297 248 298 249 if (!(lr & ICH_LR_HW) || !(lr & ICH_LR_STATE)) 299 - goto next; 250 + continue; 300 251 301 252 /* 302 253 * If we had a HW lr programmed by the guest hypervisor, we ··· 284 277 */ 285 278 irq = vgic_get_vcpu_irq(vcpu, FIELD_GET(ICH_LR_PHYS_ID_MASK, lr)); 286 279 if (WARN_ON(!irq)) /* Shouldn't happen as we check on load */ 287 - goto next; 280 + continue; 288 281 289 - lr = __gic_v3_get_lr(index); 282 + lr = __gic_v3_get_lr(lr_map_idx_to_shadow_idx(shadow_if, i)); 290 283 if (!(lr & ICH_LR_STATE)) 291 284 irq->active = false; 292 285 293 286 vgic_put_irq(vcpu->kvm, irq); 294 - next: 295 - index++; 296 287 } 297 288 } 298 289 ··· 373 368 val = __vcpu_sys_reg(vcpu, ICH_LRN(i)); 374 369 375 370 val &= ~ICH_LR_STATE; 376 - val |= s_cpu_if->vgic_lr[i] & ICH_LR_STATE;
371 + val |= s_cpu_if->vgic_lr[lr_map_idx_to_shadow_idx(shadow_if, i)] & ICH_LR_STATE; 377 372 378 373 __vcpu_assign_sys_reg(vcpu, ICH_LRN(i), val); 379 - s_cpu_if->vgic_lr[i] = 0; 380 374 } 381 375 382 - shadow_if->lr_map = 0; 383 376 vcpu->arch.vgic_cpu.vgic_v3.used_lrs = 0; 384 377 } 385 378
+2 -1
arch/arm64/mm/mmu.c
··· 1305 1305 next = addr; 1306 1306 end = addr + PUD_SIZE; 1307 1307 do { 1308 - pmd_free_pte_page(pmdp, next); 1308 + if (pmd_present(pmdp_get(pmdp))) 1309 + pmd_free_pte_page(pmdp, next); 1309 1310 } while (pmdp++, next += PMD_SIZE, next != end); 1310 1311 1311 1312 pud_clear(pudp);
+4 -4
arch/riscv/kvm/vcpu_sbi_replace.c
··· 103 103 kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_SENT); 104 104 break; 105 105 case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA: 106 - if (cp->a2 == 0 && cp->a3 == 0) 106 + if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL) 107 107 kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask); 108 108 else 109 109 kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask, ··· 111 111 kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT); 112 112 break; 113 113 case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID: 114 - if (cp->a2 == 0 && cp->a3 == 0) 114 + if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL) 115 115 kvm_riscv_hfence_vvma_asid_all(vcpu->kvm, 116 116 hbase, hmask, cp->a4); 117 117 else ··· 127 127 case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA_ASID: 128 128 /* 129 129 * Until nested virtualization is implemented, the 130 - * SBI HFENCE calls should be treated as NOPs 130 + * SBI HFENCE calls should return not supported 131 + * hence fallthrough. 131 132 */ 132 - break; 133 133 default: 134 134 retdata->err_val = SBI_ERR_NOT_SUPPORTED; 135 135 }
+1 -1
arch/um/drivers/ubd_user.c
··· 41 41 *fd_out = fds[1]; 42 42 43 43 err = os_set_fd_block(*fd_out, 0); 44 - err = os_set_fd_block(kernel_fd, 0); 44 + err |= os_set_fd_block(kernel_fd, 0); 45 45 if (err) { 46 46 printk("start_io_thread - failed to set nonblocking I/O.\n"); 47 47 goto out_close;
+13 -29
arch/um/drivers/vector_kern.c
··· 1625 1625 1626 1626 device->dev = dev; 1627 1627 1628 - *vp = ((struct vector_private) 1629 - { 1630 - .list = LIST_HEAD_INIT(vp->list), 1631 - .dev = dev, 1632 - .unit = n, 1633 - .options = get_transport_options(def), 1634 - .rx_irq = 0, 1635 - .tx_irq = 0, 1636 - .parsed = def, 1637 - .max_packet = get_mtu(def) + ETH_HEADER_OTHER, 1638 - /* TODO - we need to calculate headroom so that ip header 1639 - * is 16 byte aligned all the time 1640 - */ 1641 - .headroom = get_headroom(def), 1642 - .form_header = NULL, 1643 - .verify_header = NULL, 1644 - .header_rxbuffer = NULL, 1645 - .header_txbuffer = NULL, 1646 - .header_size = 0, 1647 - .rx_header_size = 0, 1648 - .rexmit_scheduled = false, 1649 - .opened = false, 1650 - .transport_data = NULL, 1651 - .in_write_poll = false, 1652 - .coalesce = 2, 1653 - .req_size = get_req_size(def), 1654 - .in_error = false, 1655 - .bpf = NULL 1656 - }); 1628 + INIT_LIST_HEAD(&vp->list); 1629 + vp->dev = dev; 1630 + vp->unit = n; 1631 + vp->options = get_transport_options(def); 1632 + vp->parsed = def; 1633 + vp->max_packet = get_mtu(def) + ETH_HEADER_OTHER; 1634 + /* 1635 + * TODO - we need to calculate headroom so that ip header 1636 + * is 16 byte aligned all the time 1637 + */ 1638 + vp->headroom = get_headroom(def); 1639 + vp->coalesce = 2; 1640 + vp->req_size = get_req_size(def); 1657 1641 1658 1642 dev->features = dev->hw_features = (NETIF_F_SG | NETIF_F_FRAGLIST); 1659 1643 INIT_WORK(&vp->reset_tx, vector_reset_tx);
+14
arch/um/drivers/vfio_kern.c
··· 570 570 kfree(dev); 571 571 } 572 572 573 + static struct uml_vfio_device *uml_vfio_find_device(const char *device) 574 + { 575 + struct uml_vfio_device *dev; 576 + 577 + list_for_each_entry(dev, &uml_vfio_devices, list) { 578 + if (!strcmp(dev->name, device)) 579 + return dev; 580 + } 581 + return NULL; 582 + } 583 + 573 584 static int uml_vfio_cmdline_set(const char *device, const struct kernel_param *kp) 574 585 { 575 586 struct uml_vfio_device *dev; ··· 592 581 return fd; 593 582 uml_vfio_container.fd = fd; 594 583 } 584 + 585 + if (uml_vfio_find_device(device)) 586 + return -EEXIST; 595 587 596 588 dev = kzalloc(sizeof(*dev), GFP_KERNEL); 597 589 if (!dev)
+1 -1
arch/x86/events/intel/core.c
··· 2826 2826 * If the PEBS counters snapshotting is enabled, 2827 2827 * the topdown event is available in PEBS records. 2828 2828 */ 2829 - if (is_topdown_event(event) && !is_pebs_counter_event_group(event)) 2829 + if (is_topdown_count(event) && !is_pebs_counter_event_group(event)) 2830 2830 static_call(intel_pmu_update_topdown_event)(event, NULL); 2831 2831 else 2832 2832 intel_pmu_drain_pebs_buffer();
+1
arch/x86/include/asm/shared/tdx.h
··· 80 80 #define TDVMCALL_STATUS_RETRY 0x0000000000000001ULL 81 81 #define TDVMCALL_STATUS_INVALID_OPERAND 0x8000000000000000ULL 82 82 #define TDVMCALL_STATUS_ALIGN_ERROR 0x8000000000000002ULL 83 + #define TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED 0x8000000000000003ULL 83 84 84 85 /* 85 86 * Bitmasks of exposed registers (with VMM).
+2 -2
arch/x86/kernel/alternative.c
··· 228 228 struct its_array *pages = &its_pages; 229 229 void *page; 230 230 231 - #ifdef CONFIG_MODULE 231 + #ifdef CONFIG_MODULES 232 232 if (its_mod) 233 233 pages = &its_mod->arch.its_pages; 234 234 #endif ··· 3138 3138 */ 3139 3139 void __ref smp_text_poke_single(void *addr, const void *opcode, size_t len, const void *emulate) 3140 3140 { 3141 - __smp_text_poke_batch_add(addr, opcode, len, emulate); 3141 + smp_text_poke_batch_add(addr, opcode, len, emulate); 3142 3142 smp_text_poke_batch_finish(); 3143 3143 }
+1 -1
arch/x86/kernel/cpu/amd.c
··· 31 31 32 32 #include "cpu.h" 33 33 34 - u16 invlpgb_count_max __ro_after_init; 34 + u16 invlpgb_count_max __ro_after_init = 1; 35 35 36 36 static inline int rdmsrq_amd_safe(unsigned msr, u64 *p) 37 37 {
+4 -2
arch/x86/kernel/cpu/resctrl/core.c
··· 498 498 struct rdt_hw_mon_domain *hw_dom; 499 499 struct rdt_domain_hdr *hdr; 500 500 struct rdt_mon_domain *d; 501 + struct cacheinfo *ci; 501 502 int err; 502 503 503 504 lockdep_assert_held(&domain_list_lock); ··· 526 525 d = &hw_dom->d_resctrl; 527 526 d->hdr.id = id; 528 527 d->hdr.type = RESCTRL_MON_DOMAIN; 529 - d->ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE); 530 - if (!d->ci) { 528 + ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE); 529 + if (!ci) { 531 530 pr_warn_once("Can't find L3 cache for CPU:%d resource %s\n", cpu, r->name); 532 531 mon_domain_free(hw_dom); 533 532 return; 534 533 } 534 + d->ci_id = ci->id; 535 535 cpumask_set_cpu(cpu, &d->hdr.cpu_mask); 536 536 537 537 arch_mon_domain_online(r, d);
+76 -7
arch/x86/kvm/vmx/tdx.c
··· 1212 1212 /* 1213 1213 * Converting TDVMCALL_MAP_GPA to KVM_HC_MAP_GPA_RANGE requires 1214 1214 * userspace to enable KVM_CAP_EXIT_HYPERCALL with KVM_HC_MAP_GPA_RANGE 1215 - * bit set. If not, the error code is not defined in GHCI for TDX, use 1216 - * TDVMCALL_STATUS_INVALID_OPERAND for this case. 1215 + * bit set. This is a base call so it should always be supported, but 1216 + * KVM has no way to ensure that userspace implements the GHCI correctly. 1217 + * So if KVM_HC_MAP_GPA_RANGE does not cause a VMEXIT, return an error 1218 + * to the guest. 1217 1219 */ 1218 1220 if (!user_exit_on_hypercall(vcpu->kvm, KVM_HC_MAP_GPA_RANGE)) { 1219 - ret = TDVMCALL_STATUS_INVALID_OPERAND; 1221 + ret = TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED; 1220 1222 goto error; 1221 1223 } 1222 1224 ··· 1451 1449 return 1; 1452 1450 } 1453 1451 1452 + static int tdx_complete_get_td_vm_call_info(struct kvm_vcpu *vcpu) 1453 + { 1454 + struct vcpu_tdx *tdx = to_tdx(vcpu); 1455 + 1456 + tdvmcall_set_return_code(vcpu, vcpu->run->tdx.get_tdvmcall_info.ret); 1457 + 1458 + /* 1459 + * For now, there is no TDVMCALL beyond GHCI base API supported by KVM 1460 + * directly without the support from userspace, just set the value 1461 + * returned from userspace. 
* returned from userspace. */ 1462 + tdx->vp_enter_args.r11 = vcpu->run->tdx.get_tdvmcall_info.r11; 1463 + tdx->vp_enter_args.r12 = vcpu->run->tdx.get_tdvmcall_info.r12; 1464 + tdx->vp_enter_args.r13 = vcpu->run->tdx.get_tdvmcall_info.r13; 1465 + tdx->vp_enter_args.r14 = vcpu->run->tdx.get_tdvmcall_info.r14; 1466 + 1467 + return 1; 1468 + } 1469 + 1454 1471 static int tdx_get_td_vm_call_info(struct kvm_vcpu *vcpu) 1455 1472 { 1456 1473 struct vcpu_tdx *tdx = to_tdx(vcpu); 1457 1474 1458 - if (tdx->vp_enter_args.r12) 1459 - tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND); 1460 - else { 1475 + switch (tdx->vp_enter_args.r12) { 1476 + case 0: 1461 1477 tdx->vp_enter_args.r11 = 0; 1478 + tdx->vp_enter_args.r12 = 0; 1462 1479 tdx->vp_enter_args.r13 = 0; 1463 1480 tdx->vp_enter_args.r14 = 0; 1481 + tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_SUCCESS); 1482 + return 1; 1483 + case 1: 1484 + vcpu->run->tdx.get_tdvmcall_info.leaf = tdx->vp_enter_args.r12; 1485 + vcpu->run->exit_reason = KVM_EXIT_TDX; 1486 + vcpu->run->tdx.flags = 0; 1487 + vcpu->run->tdx.nr = TDVMCALL_GET_TD_VM_CALL_INFO; 1488 + vcpu->run->tdx.get_tdvmcall_info.ret = TDVMCALL_STATUS_SUCCESS; 1489 + vcpu->run->tdx.get_tdvmcall_info.r11 = 0; 1490 + vcpu->run->tdx.get_tdvmcall_info.r12 = 0; 1491 + vcpu->run->tdx.get_tdvmcall_info.r13 = 0; 1492 + vcpu->run->tdx.get_tdvmcall_info.r14 = 0; 1493 + vcpu->arch.complete_userspace_io = tdx_complete_get_td_vm_call_info; 1494 + return 0; 1495 + default: 1496 + tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND); 1497 + return 1; 1464 1498 } 1499 + } 1500 + 1501 + static int tdx_complete_simple(struct kvm_vcpu *vcpu) 1502 + { 1503 + tdvmcall_set_return_code(vcpu, vcpu->run->tdx.unknown.ret); 1465 1504 return 1; 1505 + } 1506 + 1507 + static int tdx_get_quote(struct kvm_vcpu *vcpu) 1508 + { 1509 + struct vcpu_tdx *tdx = to_tdx(vcpu); 1510 + u64 gpa = tdx->vp_enter_args.r12; 1511 + u64 size = tdx->vp_enter_args.r13; 1512 + 1513 + /* The gpa of buffer must have shared bit set. */
1514 + if (vt_is_tdx_private_gpa(vcpu->kvm, gpa)) { 1515 + tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND); 1516 + return 1; 1517 + } 1518 + 1519 + vcpu->run->exit_reason = KVM_EXIT_TDX; 1520 + vcpu->run->tdx.flags = 0; 1521 + vcpu->run->tdx.nr = TDVMCALL_GET_QUOTE; 1522 + vcpu->run->tdx.get_quote.ret = TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED; 1523 + vcpu->run->tdx.get_quote.gpa = gpa & ~gfn_to_gpa(kvm_gfn_direct_bits(tdx->vcpu.kvm)); 1524 + vcpu->run->tdx.get_quote.size = size; 1525 + 1526 + vcpu->arch.complete_userspace_io = tdx_complete_simple; 1527 + 1528 + return 0; 1529 + } 1530 + 1467 1531 static int handle_tdvmcall(struct kvm_vcpu *vcpu) ··· 1539 1472 return tdx_report_fatal_error(vcpu); 1540 1473 case TDVMCALL_GET_TD_VM_CALL_INFO: 1541 1474 return tdx_get_td_vm_call_info(vcpu); 1475 + case TDVMCALL_GET_QUOTE: 1476 + return tdx_get_quote(vcpu); 1542 1477 default: 1543 1478 break; 1544 1479 } 1545 1480 1546 - tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND); 1481 + tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED); 1547 1482 return 1; 1548 1483 }
+5
arch/x86/mm/pti.c
··· 98 98 return; 99 99 100 100 setup_force_cpu_cap(X86_FEATURE_PTI); 101 + 102 + if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) { 103 + pr_debug("PTI enabled, disabling INVLPGB\n"); 104 + setup_clear_cpu_cap(X86_FEATURE_INVLPGB); 105 + } 101 106 } 102 107 103 108 static int __init pti_parse_cmdline(char *arg)
+1 -1
arch/x86/um/ptrace.c
··· 161 161 from = kbuf; 162 162 } 163 163 164 - return um_fxsr_from_i387(fxsave, &buf); 164 + return um_fxsr_from_i387(fxsave, from); 165 165 } 166 166 #endif 167 167
+21 -4
crypto/Kconfig
··· 176 176 177 177 config CRYPTO_SELFTESTS 178 178 bool "Enable cryptographic self-tests" 179 - depends on DEBUG_KERNEL 179 + depends on EXPERT 180 180 help 181 181 Enable the cryptographic self-tests. 182 182 183 183 The cryptographic self-tests run at boot time, or at algorithm 184 184 registration time if algorithms are dynamically loaded later. 185 185 186 - This is primarily intended for developer use. It should not be 187 - enabled in production kernels, unless you are trying to use these 188 - tests to fulfill a FIPS testing requirement. 186 + There are two main use cases for these tests: 187 + 188 + - Development and pre-release testing. In this case, also enable 189 + CRYPTO_SELFTESTS_FULL to get the full set of tests. All crypto code 190 + in the kernel is expected to pass the full set of tests. 191 + 192 + - Production kernels, to help prevent buggy drivers from being used 193 + and/or meet FIPS 140-3 pre-operational testing requirements. In 194 + this case, enable CRYPTO_SELFTESTS but not CRYPTO_SELFTESTS_FULL. 195 + 196 + config CRYPTO_SELFTESTS_FULL 197 + bool "Enable the full set of cryptographic self-tests" 198 + depends on CRYPTO_SELFTESTS 199 + help 200 + Enable the full set of cryptographic self-tests for each algorithm. 201 + 202 + The full set of tests should be enabled for development and 203 + pre-release testing, but not in production kernels. 204 + 205 + All crypto code in the kernel is expected to pass the full tests. 189 206 190 207 config CRYPTO_NULL 191 208 tristate "Null algorithms"
+3 -1
crypto/ahash.c
··· 600 600 601 601 static int ahash_def_finup_finish1(struct ahash_request *req, int err) 602 602 { 603 + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); 604 + 603 605 if (err) 604 606 goto out; 605 607 606 608 req->base.complete = ahash_def_finup_done2; 607 609 608 - err = crypto_ahash_final(req); 610 + err = crypto_ahash_alg(tfm)->final(req); 609 611 if (err == -EINPROGRESS || err == -EBUSY) 610 612 return err; 611 613
+12 -3
crypto/testmgr.c
··· 45 45 module_param(notests, bool, 0644); 46 46 MODULE_PARM_DESC(notests, "disable all crypto self-tests"); 47 47 48 + #ifdef CONFIG_CRYPTO_SELFTESTS_FULL 48 49 static bool noslowtests; 49 50 module_param(noslowtests, bool, 0644); 50 51 MODULE_PARM_DESC(noslowtests, "disable slow crypto self-tests"); ··· 53 52 static unsigned int fuzz_iterations = 100; 54 53 module_param(fuzz_iterations, uint, 0644); 55 54 MODULE_PARM_DESC(fuzz_iterations, "number of fuzz test iterations"); 55 + #else 56 + #define noslowtests 1 57 + #define fuzz_iterations 0 58 + #endif 56 59 57 60 #ifndef CONFIG_CRYPTO_SELFTESTS 58 61 ··· 324 319 325 320 /* 326 321 * The following are the lists of testvec_configs to test for each algorithm 327 - * type when the fast crypto self-tests are enabled. They aim to provide good 328 - * test coverage, while keeping the test time much shorter than the full tests 329 - * so that the fast tests can be used to fulfill FIPS 140 testing requirements. 322 + * type when the "fast" crypto self-tests are enabled. They aim to provide good 323 + * test coverage, while keeping the test time much shorter than the "full" tests 324 + * so that the "fast" tests can be enabled in a wider range of circumstances. 330 325 */ 331 326 332 327 /* Configs for skciphers and aeads */ ··· 1188 1183 1189 1184 static void crypto_disable_simd_for_test(void) 1190 1185 { 1186 + #ifdef CONFIG_CRYPTO_SELFTESTS_FULL 1191 1187 migrate_disable(); 1192 1188 __this_cpu_write(crypto_simd_disabled_for_test, true); 1189 + #endif 1193 1190 } 1194 1191 1195 1192 static void crypto_reenable_simd_for_test(void) 1196 1193 { 1194 + #ifdef CONFIG_CRYPTO_SELFTESTS_FULL 1197 1195 __this_cpu_write(crypto_simd_disabled_for_test, false); 1198 1196 migrate_enable(); 1197 + #endif 1199 1198 } 1200 1199 1201 1200 /*
+7
drivers/acpi/acpica/dsmethod.c
··· 483 483 return_ACPI_STATUS(AE_NULL_OBJECT); 484 484 } 485 485 486 + if (this_walk_state->num_operands < obj_desc->method.param_count) { 487 + ACPI_ERROR((AE_INFO, "Missing argument for method [%4.4s]", 488 + acpi_ut_get_node_name(method_node))); 489 + 490 + return_ACPI_STATUS(AE_AML_UNINITIALIZED_ARG); 491 + } 492 + 486 493 /* Init for new method, possibly wait on method mutex */ 487 494 488 495 status =
+5
drivers/atm/idt77252.c
··· 852 852 853 853 IDT77252_PRV_PADDR(skb) = dma_map_single(&card->pcidev->dev, skb->data, 854 854 skb->len, DMA_TO_DEVICE); 855 + if (dma_mapping_error(&card->pcidev->dev, IDT77252_PRV_PADDR(skb))) 856 + return -ENOMEM; 855 857 856 858 error = -EINVAL; 857 859 ··· 1859 1857 paddr = dma_map_single(&card->pcidev->dev, skb->data, 1860 1858 skb_end_pointer(skb) - skb->data, 1861 1859 DMA_FROM_DEVICE); 1860 + if (dma_mapping_error(&card->pcidev->dev, paddr)) 1861 + goto outpoolrm; 1862 1862 IDT77252_PRV_PADDR(skb) = paddr; 1863 1863 1864 1864 if (push_rx_skb(card, skb, queue)) { ··· 1875 1871 dma_unmap_single(&card->pcidev->dev, IDT77252_PRV_PADDR(skb), 1876 1872 skb_end_pointer(skb) - skb->data, DMA_FROM_DEVICE); 1877 1873 1874 + outpoolrm: 1878 1875 handle = IDT77252_PRV_POOL(skb); 1879 1876 card->sbpool[POOL_QUEUE(handle)].skb[POOL_INDEX(handle)] = NULL; 1880 1877
+1
drivers/block/aoe/aoe.h
··· 80 80 DEVFL_NEWSIZE = (1<<6), /* need to update dev size in block layer */ 81 81 DEVFL_FREEING = (1<<7), /* set when device is being cleaned up */ 82 82 DEVFL_FREED = (1<<8), /* device has been cleaned up */ 83 + DEVFL_DEAD = (1<<9), /* device has timed out of aoe_deadsecs */ 83 84 }; 84 85 85 86 enum {
+6 -2
drivers/block/aoe/aoecmd.c
··· 754 754 755 755 utgts = count_targets(d, NULL); 756 756 757 - if (d->flags & DEVFL_TKILL) { 757 + if (d->flags & (DEVFL_TKILL | DEVFL_DEAD)) { 758 758 spin_unlock_irqrestore(&d->lock, flags); 759 759 return; 760 760 } ··· 786 786 * to clean up. 787 787 */ 788 788 list_splice(&flist, &d->factive[0]); 789 - aoedev_downdev(d); 789 + d->flags |= DEVFL_DEAD; 790 + queue_work(aoe_wq, &d->work); 790 791 goto out; 791 792 } 792 793 ··· 898 897 aoecmd_sleepwork(struct work_struct *work) 899 898 { 900 899 struct aoedev *d = container_of(work, struct aoedev, work); 900 + 901 + if (d->flags & DEVFL_DEAD) 902 + aoedev_downdev(d); 901 903 902 904 if (d->flags & DEVFL_GDALLOC) 903 905 aoeblk_gdalloc(d);
+12 -1
drivers/block/aoe/aoedev.c
··· 198 198 { 199 199 struct aoetgt *t, **tt, **te; 200 200 struct list_head *head, *pos, *nx; 201 + struct request *rq, *rqnext; 201 202 int i; 203 + unsigned long flags; 202 204 203 - d->flags &= ~DEVFL_UP; 205 + spin_lock_irqsave(&d->lock, flags); 206 + d->flags &= ~(DEVFL_UP | DEVFL_DEAD); 207 + spin_unlock_irqrestore(&d->lock, flags); 204 208 205 209 /* clean out active and to-be-retransmitted buffers */ 206 210 for (i = 0; i < NFACTIVE; i++) { ··· 226 222 227 223 /* clean out the in-process request (if any) */ 228 224 aoe_failip(d); 225 + 226 + /* clean out any queued block requests */ 227 + list_for_each_entry_safe(rq, rqnext, &d->rq_list, queuelist) { 228 + list_del_init(&rq->queuelist); 229 + blk_mq_start_request(rq); 230 + blk_mq_end_request(rq, BLK_STS_IOERR); 231 + } 229 232 230 233 /* fast fail all pending I/O */ 231 234 if (d->blkq) {
+3
drivers/block/ublk_drv.c
··· 2825 2825 if (copy_from_user(&info, argp, sizeof(info))) 2826 2826 return -EFAULT; 2827 2827 2828 + if (info.queue_depth > UBLK_MAX_QUEUE_DEPTH || info.nr_hw_queues > UBLK_MAX_NR_QUEUES) 2829 + return -EINVAL; 2830 + 2828 2831 if (capable(CAP_SYS_ADMIN)) 2829 2832 info.flags &= ~UBLK_F_UNPRIVILEGED_DEV; 2830 2833 else if (!(info.flags & UBLK_F_UNPRIVILEGED_DEV))
+31 -2
drivers/bluetooth/btintel_pcie.c
··· 2033 2033 data->hdev = NULL; 2034 2034 } 2035 2035 2036 + static void btintel_pcie_disable_interrupts(struct btintel_pcie_data *data) 2037 + { 2038 + spin_lock(&data->irq_lock); 2039 + btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_MSIX_FH_INT_MASK, data->fh_init_mask); 2040 + btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_MSIX_HW_INT_MASK, data->hw_init_mask); 2041 + spin_unlock(&data->irq_lock); 2042 + } 2043 + 2044 + static void btintel_pcie_enable_interrupts(struct btintel_pcie_data *data) 2045 + { 2046 + spin_lock(&data->irq_lock); 2047 + btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_MSIX_FH_INT_MASK, ~data->fh_init_mask); 2048 + btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_MSIX_HW_INT_MASK, ~data->hw_init_mask); 2049 + spin_unlock(&data->irq_lock); 2050 + } 2051 + 2052 + static void btintel_pcie_synchronize_irqs(struct btintel_pcie_data *data) 2053 + { 2054 + for (int i = 0; i < data->alloc_vecs; i++) 2055 + synchronize_irq(data->msix_entries[i].vector); 2056 + } 2057 + 2036 2058 static int btintel_pcie_setup_internal(struct hci_dev *hdev) 2037 2059 { 2038 2060 struct btintel_pcie_data *data = hci_get_drvdata(hdev); ··· 2174 2152 bt_dev_err(hdev, "Firmware download retry count: %d", 2175 2153 fw_dl_retry); 2176 2154 btintel_pcie_dump_debug_registers(hdev); 2155 + btintel_pcie_disable_interrupts(data); 2156 + btintel_pcie_synchronize_irqs(data); 2177 2157 err = btintel_pcie_reset_bt(data); 2178 2158 if (err) { 2179 2159 bt_dev_err(hdev, "Failed to do shr reset: %d", err); ··· 2183 2159 } 2184 2160 usleep_range(10000, 12000); 2185 2161 btintel_pcie_reset_ia(data); 2162 + btintel_pcie_enable_interrupts(data); 2186 2163 btintel_pcie_config_msix(data); 2187 2164 err = btintel_pcie_enable_bt(data); 2188 2165 if (err) { ··· 2316 2291 2317 2292 data = pci_get_drvdata(pdev); 2318 2293 2294 + btintel_pcie_disable_interrupts(data); 2295 + 2296 + btintel_pcie_synchronize_irqs(data); 2297 + 2298 + flush_work(&data->rx_work); 2299 + 2319 2300 btintel_pcie_reset_bt(data); 
2320 2301 for (int i = 0; i < data->alloc_vecs; i++) { 2321 2302 struct msix_entry *msix_entry; ··· 2333 2302 pci_free_irq_vectors(pdev); 2334 2303 2335 2304 btintel_pcie_release_hdev(data); 2336 - 2337 - flush_work(&data->rx_work); 2338 2305 2339 2306 destroy_workqueue(data->workqueue); 2340 2307
+10 -3
drivers/bluetooth/hci_qca.c
··· 2392 2392 */ 2393 2393 qcadev->bt_power->pwrseq = devm_pwrseq_get(&serdev->dev, 2394 2394 "bluetooth"); 2395 - if (IS_ERR(qcadev->bt_power->pwrseq)) 2396 - return PTR_ERR(qcadev->bt_power->pwrseq); 2397 2395 2398 - break; 2396 + /* 2397 + * Some modules have BT_EN enabled via a hardware pull-up, 2398 + * meaning it is not defined in the DTS and is not controlled 2399 + * through the power sequence. In such cases, fall through 2400 + * to follow the legacy flow. 2401 + */ 2402 + if (IS_ERR(qcadev->bt_power->pwrseq)) 2403 + qcadev->bt_power->pwrseq = NULL; 2404 + else 2405 + break; 2399 2406 } 2400 2407 fallthrough; 2401 2408 case QCA_WCN3950:
+1
drivers/edac/amd64_edac.c
··· 3879 3879 break; 3880 3880 case 0x70 ... 0x7f: 3881 3881 pvt->ctl_name = "F19h_M70h"; 3882 + pvt->max_mcs = 4; 3882 3883 pvt->flags.zn_regs_v2 = 1; 3883 3884 break; 3884 3885 case 0x90 ... 0x9f:
+13 -11
drivers/edac/igen6_edac.c
··· 125 125 #define MEM_SLICE_HASH_MASK(v) (GET_BITFIELD(v, 6, 19) << 6) 126 126 #define MEM_SLICE_HASH_LSB_MASK_BIT(v) GET_BITFIELD(v, 24, 26) 127 127 128 - static const struct res_config { 128 + static struct res_config { 129 129 bool machine_check; 130 130 /* The number of present memory controllers. */ 131 131 int num_imc; ··· 479 479 return ECC_ERROR_LOG_ADDR45(ecclog); 480 480 } 481 481 482 - static const struct res_config ehl_cfg = { 482 + static struct res_config ehl_cfg = { 483 483 .num_imc = 1, 484 484 .imc_base = 0x5000, 485 485 .ibecc_base = 0xdc00, ··· 489 489 .err_addr_to_imc_addr = ehl_err_addr_to_imc_addr, 490 490 }; 491 491 492 - static const struct res_config icl_cfg = { 492 + static struct res_config icl_cfg = { 493 493 .num_imc = 1, 494 494 .imc_base = 0x5000, 495 495 .ibecc_base = 0xd800, ··· 499 499 .err_addr_to_imc_addr = ehl_err_addr_to_imc_addr, 500 500 }; 501 501 502 - static const struct res_config tgl_cfg = { 502 + static struct res_config tgl_cfg = { 503 503 .machine_check = true, 504 504 .num_imc = 2, 505 505 .imc_base = 0x5000, ··· 513 513 .err_addr_to_imc_addr = tgl_err_addr_to_imc_addr, 514 514 }; 515 515 516 - static const struct res_config adl_cfg = { 516 + static struct res_config adl_cfg = { 517 517 .machine_check = true, 518 518 .num_imc = 2, 519 519 .imc_base = 0xd800, ··· 524 524 .err_addr_to_imc_addr = adl_err_addr_to_imc_addr, 525 525 }; 526 526 527 - static const struct res_config adl_n_cfg = { 527 + static struct res_config adl_n_cfg = { 528 528 .machine_check = true, 529 529 .num_imc = 1, 530 530 .imc_base = 0xd800, ··· 535 535 .err_addr_to_imc_addr = adl_err_addr_to_imc_addr, 536 536 }; 537 537 538 - static const struct res_config rpl_p_cfg = { 538 + static struct res_config rpl_p_cfg = { 539 539 .machine_check = true, 540 540 .num_imc = 2, 541 541 .imc_base = 0xd800, ··· 547 547 .err_addr_to_imc_addr = adl_err_addr_to_imc_addr, 548 548 }; 549 549 550 - static const struct res_config mtl_ps_cfg = { 550 + static struct res_config mtl_ps_cfg = { 551 551 .machine_check = true, 552 552 .num_imc = 2, 553 553 .imc_base = 0xd800, ··· 558 558 .err_addr_to_imc_addr = adl_err_addr_to_imc_addr, 559 559 }; 560 560 561 - static const struct res_config mtl_p_cfg = { 561 + static struct res_config mtl_p_cfg = { 562 562 .machine_check = true, 563 563 .num_imc = 2, 564 564 .imc_base = 0xd800, ··· 569 569 .err_addr_to_imc_addr = adl_err_addr_to_imc_addr, 570 570 }; 571 571 572 - static const struct pci_device_id igen6_pci_tbl[] = { 572 + static struct pci_device_id igen6_pci_tbl[] = { 573 573 { PCI_VDEVICE(INTEL, DID_EHL_SKU5), (kernel_ulong_t)&ehl_cfg }, 574 574 { PCI_VDEVICE(INTEL, DID_EHL_SKU6), (kernel_ulong_t)&ehl_cfg }, 575 575 { PCI_VDEVICE(INTEL, DID_EHL_SKU7), (kernel_ulong_t)&ehl_cfg }, ··· 1350 1350 return -ENODEV; 1351 1351 } 1352 1352 1353 - if (lmc < res_cfg->num_imc) 1353 + if (lmc < res_cfg->num_imc) { 1354 1354 igen6_printk(KERN_WARNING, "Expected %d mcs, but only %d detected.", 1355 1355 res_cfg->num_imc, lmc); 1356 + res_cfg->num_imc = lmc; 1357 + } 1356 1358 1357 1359 return 0; 1358 1360
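The igen6_edac change drops `const` from the `res_config` tables so probe can clamp `num_imc` down to the number of memory controllers actually detected, keeping later per-controller loops inside the detected hardware. A minimal userspace sketch of that clamp (the struct, function name, and `-1` return are illustrative stand-ins for the driver's types and `-ENODEV`):

```c
#include <assert.h>
#include <stdio.h>

/* Illustrative stand-in for the driver's res_config. */
struct res_config {
	int num_imc; /* number of memory controllers expected for this SKU */
};

/* Clamp the expected controller count to what probing actually found,
 * so per-controller loops never index past the detected hardware. */
static int register_err_handler(struct res_config *cfg, int detected_mcs)
{
	if (detected_mcs == 0)
		return -1; /* -ENODEV in the driver */
	if (detected_mcs < cfg->num_imc) {
		fprintf(stderr, "Expected %d mcs, but only %d detected.\n",
			cfg->num_imc, detected_mcs);
		cfg->num_imc = detected_mcs; /* this write is why cfg lost const */
	}
	return 0;
}
```

The write into `cfg` is the whole point of the `const` removal: the config is no longer a read-only SKU table but carries runtime-corrected state.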
+1 -1
drivers/gpio/gpio-loongson-64bit.c
··· 268 268 /* LS7A2000 ACPI GPIO */ 269 269 static const struct loongson_gpio_chip_data loongson_gpio_ls7a2000_data1 = { 270 270 .label = "ls7a2000_gpio", 271 - .mode = BYTE_CTRL_MODE, 271 + .mode = BIT_CTRL_MODE, 272 272 .conf_offset = 0x4, 273 273 .in_offset = 0x8, 274 274 .out_offset = 0x0,
+34 -18
drivers/gpio/gpio-mlxbf3.c
··· 190 190 struct mlxbf3_gpio_context *gs; 191 191 struct gpio_irq_chip *girq; 192 192 struct gpio_chip *gc; 193 + char *colon_ptr; 193 194 int ret, irq; 195 + long num; 194 196 195 197 gs = devm_kzalloc(dev, sizeof(*gs), GFP_KERNEL); 196 198 if (!gs) ··· 229 227 gc->owner = THIS_MODULE; 230 228 gc->add_pin_ranges = mlxbf3_gpio_add_pin_ranges; 231 229 232 - irq = platform_get_irq(pdev, 0); 233 - if (irq >= 0) { 234 - girq = &gs->gc.irq; 235 - gpio_irq_chip_set_chip(girq, &gpio_mlxbf3_irqchip); 236 - girq->default_type = IRQ_TYPE_NONE; 237 - /* This will let us handle the parent IRQ in the driver */ 238 - girq->num_parents = 0; 239 - girq->parents = NULL; 240 - girq->parent_handler = NULL; 241 - girq->handler = handle_bad_irq; 230 + colon_ptr = strchr(dev_name(dev), ':'); 231 + if (!colon_ptr) { 232 + dev_err(dev, "invalid device name format\n"); 233 + return -EINVAL; 234 + } 242 235 243 - /* 244 - * Directly request the irq here instead of passing 245 - * a flow-handler because the irq is shared. 246 - */ 247 - ret = devm_request_irq(dev, irq, mlxbf3_gpio_irq_handler, 248 - IRQF_SHARED, dev_name(dev), gs); 249 - if (ret) 250 - return dev_err_probe(dev, ret, "failed to request IRQ"); 236 + ret = kstrtol(++colon_ptr, 16, &num); 237 + if (ret) { 238 + dev_err(dev, "invalid device instance\n"); 239 + return ret; 240 + } 241 + 242 + if (!num) { 243 + irq = platform_get_irq(pdev, 0); 244 + if (irq >= 0) { 245 + girq = &gs->gc.irq; 246 + gpio_irq_chip_set_chip(girq, &gpio_mlxbf3_irqchip); 247 + girq->default_type = IRQ_TYPE_NONE; 248 + /* This will let us handle the parent IRQ in the driver */ 249 + girq->num_parents = 0; 250 + girq->parents = NULL; 251 + girq->parent_handler = NULL; 252 + girq->handler = handle_bad_irq; 253 + 254 + /* 255 + * Directly request the irq here instead of passing 256 + * a flow-handler because the irq is shared. 
257 + */ 258 + ret = devm_request_irq(dev, irq, mlxbf3_gpio_irq_handler, 259 + IRQF_SHARED, dev_name(dev), gs); 260 + if (ret) 261 + return dev_err_probe(dev, ret, "failed to request IRQ"); 262 + } 251 263 } 252 264 253 265 platform_set_drvdata(pdev, gs);
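The gpio-mlxbf3 change gates the shared-IRQ setup on the device instance parsed out of the ACPI device name (the hex digits after the colon, e.g. an instance of 0 in a name like "MLNXBF33:00" — the name itself is an assumption here). A standalone sketch of that parse, using `strtol` in place of the kernel's `kstrtol`:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Parse the instance number from an ACPI-style device name such as
 * "MLNXBF33:00" (hex digits after the colon). Mirrors the driver's
 * strchr()/kstrtol() pair; returns 0 on success, negative errno style
 * on failure, and stores the instance in *num. */
static int parse_gpio_instance(const char *dev_name, long *num)
{
	const char *colon_ptr = strchr(dev_name, ':');
	char *end;

	if (!colon_ptr)
		return -EINVAL; /* invalid device name format */
	errno = 0;
	*num = strtol(colon_ptr + 1, &end, 16);
	if (errno || end == colon_ptr + 1 || *end != '\0')
		return -EINVAL; /* invalid device instance */
	return 0;
}
```

In the driver only instance 0 goes on to request the shared IRQ; the other instances skip IRQ setup entirely.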
+1 -1
drivers/gpio/gpio-pca953x.c
··· 974 974 IRQF_ONESHOT | IRQF_SHARED, dev_name(dev), 975 975 chip); 976 976 if (ret) 977 - return dev_err_probe(dev, client->irq, "failed to request irq\n"); 977 + return dev_err_probe(dev, ret, "failed to request irq\n"); 978 978 979 979 return 0; 980 980 }
+1
drivers/gpio/gpio-spacemit-k1.c
··· 278 278 { .compatible = "spacemit,k1-gpio" }, 279 279 { /* sentinel */ } 280 280 }; 281 + MODULE_DEVICE_TABLE(of, spacemit_gpio_dt_ids); 281 282 282 283 static struct platform_driver spacemit_gpio_driver = { 283 284 .probe = spacemit_gpio_probe,
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
··· 1902 1902 continue; 1903 1903 } 1904 1904 job = to_amdgpu_job(s_job); 1905 - if (preempted && (&job->hw_fence) == fence) 1905 + if (preempted && (&job->hw_fence.base) == fence) 1906 1906 /* mark the job as preempted */ 1907 1907 job->preemption_status |= AMDGPU_IB_PREEMPTED; 1908 1908 }
+56 -26
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 6019 6019 return ret; 6020 6020 } 6021 6021 6022 - static int amdgpu_device_halt_activities(struct amdgpu_device *adev, 6023 - struct amdgpu_job *job, 6024 - struct amdgpu_reset_context *reset_context, 6025 - struct list_head *device_list, 6026 - struct amdgpu_hive_info *hive, 6027 - bool need_emergency_restart) 6022 + static int amdgpu_device_recovery_prepare(struct amdgpu_device *adev, 6023 + struct list_head *device_list, 6024 + struct amdgpu_hive_info *hive) 6028 6025 { 6029 - struct list_head *device_list_handle = NULL; 6030 6026 struct amdgpu_device *tmp_adev = NULL; 6031 - int i, r = 0; 6027 + int r; 6032 6028 6033 6029 /* 6034 6030 * Build list of devices to reset. ··· 6041 6045 } 6042 6046 if (!list_is_first(&adev->reset_list, device_list)) 6043 6047 list_rotate_to_front(&adev->reset_list, device_list); 6044 - device_list_handle = device_list; 6045 6048 } else { 6046 6049 list_add_tail(&adev->reset_list, device_list); 6047 - device_list_handle = device_list; 6048 6050 } 6049 6051 6050 6052 if (!amdgpu_sriov_vf(adev) && (!adev->pcie_reset_ctx.occurs_dpc)) { 6051 - r = amdgpu_device_health_check(device_list_handle); 6053 + r = amdgpu_device_health_check(device_list); 6052 6054 if (r) 6053 6055 return r; 6054 6056 } 6055 6057 6056 - /* We need to lock reset domain only once both for XGMI and single device */ 6057 - tmp_adev = list_first_entry(device_list_handle, struct amdgpu_device, 6058 - reset_list); 6058 + return 0; 6059 + } 6060 + 6061 + static void amdgpu_device_recovery_get_reset_lock(struct amdgpu_device *adev, 6062 + struct list_head *device_list) 6063 + { 6064 + struct amdgpu_device *tmp_adev = NULL; 6065 + 6066 + if (list_empty(device_list)) 6067 + return; 6068 + tmp_adev = 6069 + list_first_entry(device_list, struct amdgpu_device, reset_list); 6059 6070 amdgpu_device_lock_reset_domain(tmp_adev->reset_domain); 6071 + } 6072 + 6073 + static void amdgpu_device_recovery_put_reset_lock(struct amdgpu_device *adev, 6074 + struct list_head *device_list) 6075 + { 6076 + struct amdgpu_device *tmp_adev = NULL; 6077 + 6078 + if (list_empty(device_list)) 6079 + return; 6080 + tmp_adev = 6081 + list_first_entry(device_list, struct amdgpu_device, reset_list); 6082 + amdgpu_device_unlock_reset_domain(tmp_adev->reset_domain); 6083 + } 6084 + 6085 + static int amdgpu_device_halt_activities( 6086 + struct amdgpu_device *adev, struct amdgpu_job *job, 6087 + struct amdgpu_reset_context *reset_context, 6088 + struct list_head *device_list, struct amdgpu_hive_info *hive, 6089 + bool need_emergency_restart) 6090 + { 6091 + struct amdgpu_device *tmp_adev = NULL; 6092 + int i, r = 0; 6060 6093 6061 6094 /* block all schedulers and reset given job's ring */ 6062 - list_for_each_entry(tmp_adev, device_list_handle, reset_list) { 6063 - 6095 + list_for_each_entry(tmp_adev, device_list, reset_list) { 6064 6096 amdgpu_device_set_mp1_state(tmp_adev); 6065 6097 6066 6098 /* ··· 6276 6252 amdgpu_ras_set_error_query_ready(tmp_adev, true); 6277 6253 6278 6254 } 6279 - 6280 - tmp_adev = list_first_entry(device_list, struct amdgpu_device, 6281 - reset_list); 6282 - amdgpu_device_unlock_reset_domain(tmp_adev->reset_domain); 6283 - 6284 6255 } 6285 6256 6286 6257 ··· 6343 6324 reset_context->hive = hive; 6344 6325 INIT_LIST_HEAD(&device_list); 6345 6326 6327 + if (amdgpu_device_recovery_prepare(adev, &device_list, hive)) 6328 + goto end_reset; 6329 + 6330 + /* We need to lock reset domain only once both for XGMI and single device */ 6331 + amdgpu_device_recovery_get_reset_lock(adev, &device_list); 6332 + 6346 6333 r = amdgpu_device_halt_activities(adev, job, reset_context, &device_list, 6347 6334 hive, need_emergency_restart); 6348 6335 if (r) 6349 - goto end_reset; 6336 + goto reset_unlock; 6350 6337 6351 6338 if (need_emergency_restart) 6352 6339 goto skip_sched_resume; ··· 6362 6337 * 6363 6338 * job->base holds a reference to parent fence 6364 6339 */ 6365 - if (job && dma_fence_is_signaled(&job->hw_fence)) { 6340 + if (job && dma_fence_is_signaled(&job->hw_fence.base)) { 6366 6341 job_signaled = true; 6367 6342 dev_info(adev->dev, "Guilty job already signaled, skipping HW reset"); 6368 6343 goto skip_hw_reset; ··· 6370 6345 6371 6346 r = amdgpu_device_asic_reset(adev, &device_list, reset_context); 6372 6347 if (r) 6373 - goto end_reset; 6348 + goto reset_unlock; 6374 6349 skip_hw_reset: 6375 6350 r = amdgpu_device_sched_resume(&device_list, reset_context, job_signaled); 6376 6351 if (r) 6377 - goto end_reset; 6352 + goto reset_unlock; 6378 6353 skip_sched_resume: 6379 6354 amdgpu_device_gpu_resume(adev, &device_list, need_emergency_restart); 6355 + reset_unlock: 6356 + amdgpu_device_recovery_put_reset_lock(adev, &device_list); 6380 6357 end_reset: 6381 6358 if (hive) { 6382 6359 mutex_unlock(&hive->hive_lock); ··· 6790 6763 memset(&reset_context, 0, sizeof(reset_context)); 6791 6764 INIT_LIST_HEAD(&device_list); 6792 6765 6766 + amdgpu_device_recovery_prepare(adev, &device_list, hive); 6767 + amdgpu_device_recovery_get_reset_lock(adev, &device_list); 6793 6768 r = amdgpu_device_halt_activities(adev, NULL, &reset_context, &device_list, 6794 6769 hive, false); 6795 6770 if (hive) { ··· 6909 6880 if (hive) { 6910 6881 list_for_each_entry(tmp_adev, &device_list, reset_list) 6911 6882 amdgpu_device_unset_mp1_state(tmp_adev); 6912 - amdgpu_device_unlock_reset_domain(adev->reset_domain); 6913 6883 } 6884 + amdgpu_device_recovery_put_reset_lock(adev, &device_list); 6914 6885 } 6915 6886 6916 6887 if (hive) { ··· 6956 6927 6957 6928 amdgpu_device_sched_resume(&device_list, NULL, NULL); 6958 6929 amdgpu_device_gpu_resume(adev, &device_list, false); 6930 + amdgpu_device_recovery_put_reset_lock(adev, &device_list); 6959 6931 adev->pcie_reset_ctx.occurs_dpc = false; 6960 6932 6961 6933 if (hive) {
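The amdgpu_device.c rework pulls reset-domain locking out of amdgpu_device_halt_activities() into separate get/put helpers, so the caller takes the lock once and every failure branch funnels through a single `reset_unlock` label. A toy model of that control flow (a counter stands in for the real reset-domain lock; names and error codes are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

static int lock_depth; /* toy stand-in for the reset-domain lock */

static void recovery_get_reset_lock(void) { lock_depth++; }
static void recovery_put_reset_lock(void) { lock_depth--; }

/* Mirrors the reworked flow: acquire once up front, and route every
 * failure path through reset_unlock so the domain is always released. */
static int gpu_recover(bool halt_fails, bool reset_fails)
{
	int r = 0;

	recovery_get_reset_lock();
	if (halt_fails) {
		r = -1;
		goto reset_unlock;
	}
	if (reset_fails) {
		r = -2;
		goto reset_unlock;
	}
reset_unlock:
	recovery_put_reset_lock();
	return r;
}
```

The invariant the patch establishes is visible in the sketch: whatever branch is taken, the lock depth returns to zero before the function exits.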
+7 -23
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
··· 41 41 #include "amdgpu_trace.h" 42 42 #include "amdgpu_reset.h" 43 43 44 - /* 45 - * Fences mark an event in the GPUs pipeline and are used 46 - * for GPU/CPU synchronization. When the fence is written, 47 - * it is expected that all buffers associated with that fence 48 - * are no longer in use by the associated ring on the GPU and 49 - * that the relevant GPU caches have been flushed. 50 - */ 51 - 52 - struct amdgpu_fence { 53 - struct dma_fence base; 54 - 55 - /* RB, DMA, etc. */ 56 - struct amdgpu_ring *ring; 57 - ktime_t start_timestamp; 58 - }; 59 - 60 44 static struct kmem_cache *amdgpu_fence_slab; 61 45 62 46 int amdgpu_fence_slab_init(void) ··· 135 151 am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC); 136 152 if (am_fence == NULL) 137 153 return -ENOMEM; 138 - fence = &am_fence->base; 139 - am_fence->ring = ring; 140 154 } else { 141 155 /* take use of job-embedded fence */ 142 - fence = &job->hw_fence; 156 + am_fence = &job->hw_fence; 143 157 } 158 + fence = &am_fence->base; 159 + am_fence->ring = ring; 144 160 145 161 seq = ++ring->fence_drv.sync_seq; 146 162 if (job && job->job_run_counter) { ··· 702 718 * it right here or we won't be able to track them in fence_drv 703 719 * and they will remain unsignaled during sa_bo free. 
704 720 */ 705 - job = container_of(old, struct amdgpu_job, hw_fence); 721 + job = container_of(old, struct amdgpu_job, hw_fence.base); 706 722 if (!job->base.s_fence && !dma_fence_is_signaled(old)) 707 723 dma_fence_signal(old); 708 724 RCU_INIT_POINTER(*ptr, NULL); ··· 764 780 765 781 static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f) 766 782 { 767 - struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence); 783 + struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base); 768 784 769 785 return (const char *)to_amdgpu_ring(job->base.sched)->name; 770 786 } ··· 794 810 */ 795 811 static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f) 796 812 { 797 - struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence); 813 + struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base); 798 814 799 815 if (!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer)) 800 816 amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched)); ··· 829 845 struct dma_fence *f = container_of(rcu, struct dma_fence, rcu); 830 846 831 847 /* free job if fence has a parent job */ 832 - kfree(container_of(f, struct amdgpu_job, hw_fence)); 848 + kfree(container_of(f, struct amdgpu_job, hw_fence.base)); 833 849 } 834 850 835 851 /**
+6 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
··· 272 272 /* Check if any fences where initialized */ 273 273 if (job->base.s_fence && job->base.s_fence->finished.ops) 274 274 f = &job->base.s_fence->finished; 275 - else if (job->hw_fence.ops) 276 - f = &job->hw_fence; 275 + else if (job->hw_fence.base.ops) 276 + f = &job->hw_fence.base; 277 277 else 278 278 f = NULL; 279 279 ··· 290 290 amdgpu_sync_free(&job->explicit_sync); 291 291 292 292 /* only put the hw fence if has embedded fence */ 293 - if (!job->hw_fence.ops) 293 + if (!job->hw_fence.base.ops) 294 294 kfree(job); 295 295 else 296 - dma_fence_put(&job->hw_fence); 296 + dma_fence_put(&job->hw_fence.base); 297 297 } 298 298 299 299 void amdgpu_job_set_gang_leader(struct amdgpu_job *job, ··· 322 322 if (job->gang_submit != &job->base.s_fence->scheduled) 323 323 dma_fence_put(job->gang_submit); 324 324 325 - if (!job->hw_fence.ops) 325 + if (!job->hw_fence.base.ops) 326 326 kfree(job); 327 327 else 328 - dma_fence_put(&job->hw_fence); 328 + dma_fence_put(&job->hw_fence.base); 329 329 } 330 330 331 331 struct dma_fence *amdgpu_job_submit(struct amdgpu_job *job)
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
··· 48 48 struct drm_sched_job base; 49 49 struct amdgpu_vm *vm; 50 50 struct amdgpu_sync explicit_sync; 51 - struct dma_fence hw_fence; 51 + struct amdgpu_fence hw_fence; 52 52 struct dma_fence *gang_submit; 53 53 uint32_t preamble_status; 54 54 uint32_t preemption_status;
+12 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
··· 3522 3522 uint8_t *ucode_array_start_addr; 3523 3523 int err = 0; 3524 3524 3525 - err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, AMDGPU_UCODE_REQUIRED, 3526 - "amdgpu/%s_sos.bin", chip_name); 3525 + if (amdgpu_is_kicker_fw(adev)) 3526 + err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, AMDGPU_UCODE_REQUIRED, 3527 + "amdgpu/%s_sos_kicker.bin", chip_name); 3528 + else 3529 + err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, AMDGPU_UCODE_REQUIRED, 3530 + "amdgpu/%s_sos.bin", chip_name); 3527 3531 if (err) 3528 3532 goto out; 3529 3533 ··· 3803 3799 struct amdgpu_device *adev = psp->adev; 3804 3800 int err; 3805 3801 3806 - err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, AMDGPU_UCODE_REQUIRED, 3807 - "amdgpu/%s_ta.bin", chip_name); 3802 + if (amdgpu_is_kicker_fw(adev)) 3803 + err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, AMDGPU_UCODE_REQUIRED, 3804 + "amdgpu/%s_ta_kicker.bin", chip_name); 3805 + else 3806 + err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, AMDGPU_UCODE_REQUIRED, 3807 + "amdgpu/%s_ta.bin", chip_name); 3808 3808 if (err) 3809 3809 return err; 3810 3810
+16
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
··· 127 127 struct dma_fence **fences; 128 128 }; 129 129 130 + /* 131 + * Fences mark an event in the GPUs pipeline and are used 132 + * for GPU/CPU synchronization. When the fence is written, 133 + * it is expected that all buffers associated with that fence 134 + * are no longer in use by the associated ring on the GPU and 135 + * that the relevant GPU caches have been flushed. 136 + */ 137 + 138 + struct amdgpu_fence { 139 + struct dma_fence base; 140 + 141 + /* RB, DMA, etc. */ 142 + struct amdgpu_ring *ring; 143 + ktime_t start_timestamp; 144 + }; 145 + 130 146 extern const struct drm_sched_backend_ops amdgpu_sched_ops; 131 147 132 148 void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
+6 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
··· 540 540 case IP_VERSION(4, 4, 2): 541 541 case IP_VERSION(4, 4, 4): 542 542 case IP_VERSION(4, 4, 5): 543 - /* For SDMA 4.x, use the existing DPM interface for backward compatibility */ 544 - r = amdgpu_dpm_reset_sdma(adev, 1 << instance_id); 543 + /* For SDMA 4.x, use the existing DPM interface for backward compatibility, 544 + * we need to convert the logical instance ID to physical instance ID before reset. 545 + */ 546 + r = amdgpu_dpm_reset_sdma(adev, 1 << GET_INST(SDMA0, instance_id)); 545 547 break; 546 548 case IP_VERSION(5, 0, 0): 547 549 case IP_VERSION(5, 0, 1): ··· 570 568 /** 571 569 * amdgpu_sdma_reset_engine - Reset a specific SDMA engine 572 570 * @adev: Pointer to the AMDGPU device 573 - * @instance_id: ID of the SDMA engine instance to reset 571 + * @instance_id: Logical ID of the SDMA engine instance to reset 574 572 * 575 573 * Returns: 0 on success, or a negative error code on failure. 576 574 */ ··· 603 601 /* Perform the SDMA reset for the specified instance */ 604 602 ret = amdgpu_sdma_soft_reset(adev, instance_id); 605 603 if (ret) { 606 - dev_err(adev->dev, "Failed to reset SDMA instance %u\n", instance_id); 604 + dev_err(adev->dev, "Failed to reset SDMA logical instance %u\n", instance_id); 607 605 goto exit; 608 606 } 609 607
+17
drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
··· 30 30 31 31 #define AMDGPU_UCODE_NAME_MAX (128) 32 32 33 + static const struct kicker_device kicker_device_list[] = { 34 + {0x744B, 0x00}, 35 + }; 36 + 33 37 static void amdgpu_ucode_print_common_hdr(const struct common_firmware_header *hdr) 34 38 { 35 39 DRM_DEBUG("size_bytes: %u\n", le32_to_cpu(hdr->size_bytes)); ··· 1389 1385 } 1390 1386 } 1391 1387 return NULL; 1388 + } 1389 + 1390 + bool amdgpu_is_kicker_fw(struct amdgpu_device *adev) 1391 + { 1392 + int i; 1393 + 1394 + for (i = 0; i < ARRAY_SIZE(kicker_device_list); i++) { 1395 + if (adev->pdev->device == kicker_device_list[i].device && 1396 + adev->pdev->revision == kicker_device_list[i].revision) 1397 + return true; 1398 + } 1399 + 1400 + return false; 1392 1401 } 1393 1402 1394 1403 void amdgpu_ucode_ip_version_decode(struct amdgpu_device *adev, int block_type, char *ucode_prefix, int len)
+6
drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
··· 605 605 uint32_t pldm_version; 606 606 }; 607 607 608 + struct kicker_device{ 609 + unsigned short device; 610 + u8 revision; 611 + }; 612 + 608 613 void amdgpu_ucode_print_mc_hdr(const struct common_firmware_header *hdr); 609 614 void amdgpu_ucode_print_smc_hdr(const struct common_firmware_header *hdr); 610 615 void amdgpu_ucode_print_imu_hdr(const struct common_firmware_header *hdr); ··· 637 632 const char *amdgpu_ucode_name(enum AMDGPU_UCODE_ID ucode_id); 638 633 639 634 void amdgpu_ucode_ip_version_decode(struct amdgpu_device *adev, int block_type, char *ucode_prefix, int len); 635 + bool amdgpu_is_kicker_fw(struct amdgpu_device *adev); 640 636 641 637 #endif
+5
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
··· 85 85 MODULE_FIRMWARE("amdgpu/gc_11_0_0_me.bin"); 86 86 MODULE_FIRMWARE("amdgpu/gc_11_0_0_mec.bin"); 87 87 MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc.bin"); 88 + MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc_kicker.bin"); 88 89 MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc_1.bin"); 89 90 MODULE_FIRMWARE("amdgpu/gc_11_0_0_toc.bin"); 90 91 MODULE_FIRMWARE("amdgpu/gc_11_0_1_pfp.bin"); ··· 760 759 err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw, 761 760 AMDGPU_UCODE_REQUIRED, 762 761 "amdgpu/gc_11_0_0_rlc_1.bin"); 762 + else if (amdgpu_is_kicker_fw(adev)) 763 + err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw, 764 + AMDGPU_UCODE_REQUIRED, 765 + "amdgpu/%s_rlc_kicker.bin", ucode_prefix); 763 766 else 764 767 err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw, 765 768 AMDGPU_UCODE_REQUIRED,
+7 -2
drivers/gpu/drm/amd/amdgpu/imu_v11_0.c
··· 32 32 #include "gc/gc_11_0_0_sh_mask.h" 33 33 34 34 MODULE_FIRMWARE("amdgpu/gc_11_0_0_imu.bin"); 35 + MODULE_FIRMWARE("amdgpu/gc_11_0_0_imu_kicker.bin"); 35 36 MODULE_FIRMWARE("amdgpu/gc_11_0_1_imu.bin"); 36 37 MODULE_FIRMWARE("amdgpu/gc_11_0_2_imu.bin"); 37 38 MODULE_FIRMWARE("amdgpu/gc_11_0_3_imu.bin"); ··· 52 51 DRM_DEBUG("\n"); 53 52 54 53 amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix)); 55 - err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED, 56 - "amdgpu/%s_imu.bin", ucode_prefix); 54 + if (amdgpu_is_kicker_fw(adev)) 55 + err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED, 56 + "amdgpu/%s_imu_kicker.bin", ucode_prefix); 57 + else 58 + err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED, 59 + "amdgpu/%s_imu.bin", ucode_prefix); 57 60 if (err) 58 61 goto out; 59 62
+2
drivers/gpu/drm/amd/amdgpu/psp_v13_0.c
··· 42 42 MODULE_FIRMWARE("amdgpu/psp_13_0_8_toc.bin"); 43 43 MODULE_FIRMWARE("amdgpu/psp_13_0_8_ta.bin"); 44 44 MODULE_FIRMWARE("amdgpu/psp_13_0_0_sos.bin"); 45 + MODULE_FIRMWARE("amdgpu/psp_13_0_0_sos_kicker.bin"); 45 46 MODULE_FIRMWARE("amdgpu/psp_13_0_0_ta.bin"); 47 + MODULE_FIRMWARE("amdgpu/psp_13_0_0_ta_kicker.bin"); 46 48 MODULE_FIRMWARE("amdgpu/psp_13_0_7_sos.bin"); 47 49 MODULE_FIRMWARE("amdgpu/psp_13_0_7_ta.bin"); 48 50 MODULE_FIRMWARE("amdgpu/psp_13_0_10_sos.bin");
+7 -3
drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
··· 490 490 { 491 491 struct amdgpu_ring *sdma[AMDGPU_MAX_SDMA_INSTANCES]; 492 492 u32 doorbell_offset, doorbell; 493 - u32 rb_cntl, ib_cntl; 493 + u32 rb_cntl, ib_cntl, sdma_cntl; 494 494 int i; 495 495 496 496 for_each_inst(i, inst_mask) { ··· 502 502 ib_cntl = RREG32_SDMA(i, regSDMA_GFX_IB_CNTL); 503 503 ib_cntl = REG_SET_FIELD(ib_cntl, SDMA_GFX_IB_CNTL, IB_ENABLE, 0); 504 504 WREG32_SDMA(i, regSDMA_GFX_IB_CNTL, ib_cntl); 505 + sdma_cntl = RREG32_SDMA(i, regSDMA_CNTL); 506 + sdma_cntl = REG_SET_FIELD(sdma_cntl, SDMA_CNTL, UTC_L1_ENABLE, 0); 507 + WREG32_SDMA(i, regSDMA_CNTL, sdma_cntl); 505 508 506 509 if (sdma[i]->use_doorbell) { 507 510 doorbell = RREG32_SDMA(i, regSDMA_GFX_DOORBELL); ··· 998 995 /* set utc l1 enable flag always to 1 */ 999 996 temp = RREG32_SDMA(i, regSDMA_CNTL); 1000 997 temp = REG_SET_FIELD(temp, SDMA_CNTL, UTC_L1_ENABLE, 1); 998 + WREG32_SDMA(i, regSDMA_CNTL, temp); 1001 999 1002 1000 if (amdgpu_ip_version(adev, SDMA0_HWIP, 0) < IP_VERSION(4, 4, 5)) { 1003 1001 /* enable context empty interrupt during initialization */ ··· 1674 1670 static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring, unsigned int vmid) 1675 1671 { 1676 1672 struct amdgpu_device *adev = ring->adev; 1677 - u32 id = GET_INST(SDMA0, ring->me); 1673 + u32 id = ring->me; 1678 1674 int r; 1679 1675 1680 1676 if (!(adev->sdma.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE)) ··· 1690 1686 static int sdma_v4_4_2_stop_queue(struct amdgpu_ring *ring) 1691 1687 { 1692 1688 struct amdgpu_device *adev = ring->adev; 1693 - u32 instance_id = GET_INST(SDMA0, ring->me); 1689 + u32 instance_id = ring->me; 1694 1690 u32 inst_mask; 1695 1691 uint64_t rptr; 1696 1692
+1
drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
··· 1399 1399 return r; 1400 1400 1401 1401 for (i = 0; i < adev->sdma.num_instances; i++) { 1402 + mutex_init(&adev->sdma.instance[i].engine_reset_mutex); 1402 1403 adev->sdma.instance[i].funcs = &sdma_v5_0_sdma_funcs; 1403 1404 ring = &adev->sdma.instance[i].ring; 1404 1405 ring->ring_obj = NULL;
+1
drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
··· 1318 1318 } 1319 1319 1320 1320 for (i = 0; i < adev->sdma.num_instances; i++) { 1321 + mutex_init(&adev->sdma.instance[i].engine_reset_mutex); 1321 1322 adev->sdma.instance[i].funcs = &sdma_v5_2_sdma_funcs; 1322 1323 ring = &adev->sdma.instance[i].ring; 1323 1324 ring->ring_obj = NULL;
+5 -1
drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c
··· 669 669 if (indirect) 670 670 amdgpu_vcn_psp_update_sram(adev, inst_idx, AMDGPU_UCODE_ID_VCN0_RAM); 671 671 672 + /* resetting ring, fw should not check RB ring */ 673 + fw_shared->sq.queue_mode |= FW_QUEUE_RING_RESET; 674 + 672 675 /* Pause dpg */ 673 676 vcn_v5_0_1_pause_dpg_mode(vinst, &state); 674 677 ··· 684 681 tmp = RREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE); 685 682 tmp &= ~(VCN_RB_ENABLE__RB1_EN_MASK); 686 683 WREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE, tmp); 687 - fw_shared->sq.queue_mode |= FW_QUEUE_RING_RESET; 684 + 688 685 WREG32_SOC15(VCN, vcn_inst, regUVD_RB_RPTR, 0); 689 686 WREG32_SOC15(VCN, vcn_inst, regUVD_RB_WPTR, 0); 690 687 ··· 695 692 tmp = RREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE); 696 693 tmp |= VCN_RB_ENABLE__RB1_EN_MASK; 697 694 WREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE, tmp); 695 + /* resetting done, fw can check RB ring */ 698 696 fw_shared->sq.queue_mode &= ~(FW_QUEUE_RING_RESET | FW_QUEUE_DPG_HOLD_OFF); 699 697 700 698 WREG32_SOC15(VCN, vcn_inst, regVCN_RB1_DB_CTRL,
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c
··· 240 240 241 241 packet->bitfields2.engine_sel = 242 242 engine_sel__mes_map_queues__compute_vi; 243 - packet->bitfields2.gws_control_queue = q->gws ? 1 : 0; 243 + packet->bitfields2.gws_control_queue = q->properties.is_gws ? 1 : 0; 244 244 packet->bitfields2.extended_engine_sel = 245 245 extended_engine_sel__mes_map_queues__legacy_engine_sel; 246 246 packet->bitfields2.queue_type =
+4 -2
drivers/gpu/drm/amd/amdkfd/kfd_topology.c
··· 510 510 dev->node_props.capability |= 511 511 HSA_CAP_AQL_QUEUE_DOUBLE_MAP; 512 512 513 + if (KFD_GC_VERSION(dev->gpu) < IP_VERSION(10, 0, 0) && 514 + (dev->gpu->adev->sdma.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE)) 515 + dev->node_props.capability2 |= HSA_CAP2_PER_SDMA_QUEUE_RESET_SUPPORTED; 516 + 513 517 sysfs_show_32bit_prop(buffer, offs, "max_engine_clk_fcompute", 514 518 dev->node_props.max_engine_clk_fcompute); 515 519 ··· 2012 2008 if (!amdgpu_sriov_vf(dev->gpu->adev)) 2013 2009 dev->node_props.capability |= HSA_CAP_PER_QUEUE_RESET_SUPPORTED; 2014 2010 2015 - if (dev->gpu->adev->sdma.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE) 2016 - dev->node_props.capability2 |= HSA_CAP2_PER_SDMA_QUEUE_RESET_SUPPORTED; 2017 2011 } else { 2018 2012 dev->node_props.debug_prop |= HSA_DBG_WATCH_ADDR_MASK_LO_BIT_GFX10 | 2019 2013 HSA_DBG_WATCH_ADDR_MASK_HI_BIT;
+35 -22
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 4718 4718 return 1; 4719 4719 } 4720 4720 4721 - static void convert_custom_brightness(const struct amdgpu_dm_backlight_caps *caps, 4722 - uint32_t *brightness) 4721 + /* Rescale from [min..max] to [0..AMDGPU_MAX_BL_LEVEL] */ 4722 + static inline u32 scale_input_to_fw(int min, int max, u64 input) 4723 4723 { 4724 + return DIV_ROUND_CLOSEST_ULL(input * AMDGPU_MAX_BL_LEVEL, max - min); 4725 + } 4726 + 4727 + /* Rescale from [0..AMDGPU_MAX_BL_LEVEL] to [min..max] */ 4728 + static inline u32 scale_fw_to_input(int min, int max, u64 input) 4729 + { 4730 + return min + DIV_ROUND_CLOSEST_ULL(input * (max - min), AMDGPU_MAX_BL_LEVEL); 4731 + } 4732 + 4733 + static void convert_custom_brightness(const struct amdgpu_dm_backlight_caps *caps, 4734 + unsigned int min, unsigned int max, 4735 + uint32_t *user_brightness) 4736 + { 4737 + u32 brightness = scale_input_to_fw(min, max, *user_brightness); 4724 4738 u8 prev_signal = 0, prev_lum = 0; 4725 4739 int i = 0; 4726 4740 ··· 4745 4731 return; 4746 4732 4747 4733 /* choose start to run less interpolation steps */ 4748 - if (caps->luminance_data[caps->data_points/2].input_signal > *brightness) 4734 + if (caps->luminance_data[caps->data_points/2].input_signal > brightness) 4749 4735 i = caps->data_points/2; 4750 4736 do { 4751 4737 u8 signal = caps->luminance_data[i].input_signal; ··· 4756 4742 * brightness < signal: interpolate between previous and current luminance numerator 4757 4743 * brightness > signal: find next data point 4758 4744 */ 4759 - if (*brightness > signal) { 4745 + if (brightness > signal) { 4760 4746 prev_signal = signal; 4761 4747 prev_lum = lum; 4762 4748 i++; 4763 4749 continue; 4764 4750 } 4765 - if (*brightness < signal) 4751 + if (brightness < signal) 4766 4752 lum = prev_lum + DIV_ROUND_CLOSEST((lum - prev_lum) * 4767 - (*brightness - prev_signal), 4753 + (brightness - prev_signal), 4768 4754 signal - prev_signal); 4769 - *brightness = DIV_ROUND_CLOSEST(lum * *brightness, 101); 4755 + *user_brightness = scale_fw_to_input(min, max, 4756 + DIV_ROUND_CLOSEST(lum * brightness, 101)); 4770 4757 return; 4771 4758 } while (i < caps->data_points); 4772 4759 } ··· 4780 4765 if (!get_brightness_range(caps, &min, &max)) 4781 4766 return brightness; 4782 4767 4783 - convert_custom_brightness(caps, &brightness); 4768 + convert_custom_brightness(caps, min, max, &brightness); 4784 4769 4785 - // Rescale 0..255 to min..max 4786 - return min + DIV_ROUND_CLOSEST((max - min) * brightness, 4787 - AMDGPU_MAX_BL_LEVEL); 4770 + // Rescale 0..max to min..max 4771 + return min + DIV_ROUND_CLOSEST_ULL((u64)(max - min) * brightness, max); 4788 4772 } 4789 4773 4790 4774 static u32 convert_brightness_to_user(const struct amdgpu_dm_backlight_caps *caps, ··· 4796 4782 4797 4783 if (brightness < min) 4798 4784 return 0; 4799 - // Rescale min..max to 0..255 4800 - return DIV_ROUND_CLOSEST(AMDGPU_MAX_BL_LEVEL * (brightness - min), 4785 + // Rescale min..max to 0..max 4786 + return DIV_ROUND_CLOSEST_ULL((u64)max * (brightness - min), 4801 4787 max - min); 4802 4788 } 4803 4789 ··· 4922 4908 struct drm_device *drm = aconnector->base.dev; 4923 4909 struct amdgpu_display_manager *dm = &drm_to_adev(drm)->dm; 4924 4910 struct backlight_properties props = { 0 }; 4925 - struct amdgpu_dm_backlight_caps caps = { 0 }; 4911 + struct amdgpu_dm_backlight_caps *caps; 4926 4912 char bl_name[16]; 4927 4913 int min, max; 4928 4914 ··· 4936 4922 return; 4937 4923 } 4938 4924 4939 - amdgpu_acpi_get_backlight_caps(&caps); 4940 - if (caps.caps_valid && get_brightness_range(&caps, &min, &max)) { 4925 + caps = &dm->backlight_caps[aconnector->bl_idx]; 4926 + if (get_brightness_range(caps, &min, &max)) { 4941 4927 if (power_supply_is_system_supplied() > 0) 4942 - props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps.ac_level, 100); 4928 + props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps->ac_level, 100); 4943 4929 else 4944 - props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps.dc_level, 100); 4930 + props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps->dc_level, 100); 4945 4931 /* min is zero, so max needs to be adjusted */ 4946 4932 props.max_brightness = max - min; 4947 4933 drm_dbg(drm, "Backlight caps: min: %d, max: %d, ac %d, dc %d\n", min, max, 4948 4934 caps.ac_level, caps.dc_level); 4934 + caps->ac_level, caps->dc_level); 4949 4935 } else 4950 - props.brightness = AMDGPU_MAX_BL_LEVEL; 4936 + props.brightness = props.max_brightness = AMDGPU_MAX_BL_LEVEL; 4952 - 4938 if (caps->data_points && !(amdgpu_dc_debug_mask & DC_DISABLE_CUSTOM_BRIGHTNESS_CURVE)) 4953 4939 drm_info(drm, "Using custom brightness curve\n"); 4954 - props.max_brightness = AMDGPU_MAX_BL_LEVEL; 4955 4940 props.type = BACKLIGHT_RAW; 4956 4941 4957 4942 snprintf(bl_name, sizeof(bl_name), "amdgpu_bl%d",
+33
drivers/gpu/drm/amd/display/dc/core/dc.c
··· 241 241 DC_LOG_DC("BIOS object table - end"); 242 242 243 243 /* Create a link for each usb4 dpia port */ 244 + dc->lowest_dpia_link_index = MAX_LINKS; 244 245 for (i = 0; i < dc->res_pool->usb4_dpia_count; i++) { 245 246 struct link_init_data link_init_params = {0}; 246 247 struct dc_link *link; ··· 254 253 255 254 link = dc->link_srv->create_link(&link_init_params); 256 255 if (link) { 256 + if (dc->lowest_dpia_link_index > dc->link_count) 257 + dc->lowest_dpia_link_index = dc->link_count; 258 + 257 259 dc->links[dc->link_count] = link; 258 260 link->dc = dc; 259 261 ++dc->link_count; ··· 6379 6375 return dc->res_pool->funcs->get_det_buffer_size(context); 6380 6376 else 6381 6377 return 0; 6378 + } 6379 + /** 6380 + *********************************************************************************************** 6381 + * dc_get_host_router_index: Get index of host router from a dpia link 6382 + * 6383 + * This function returns the host router index of the target link, if the target link is a dpia link. 6384 + * 6385 + * @param [in] link: target link 6386 + * @param [out] host_router_index: host router index of the target link 6387 + * 6388 + * @return: true if the host router index is found and valid. 6389 + * 6390 + *********************************************************************************************** 6391 + */ 6392 + bool dc_get_host_router_index(const struct dc_link *link, unsigned int *host_router_index) 6393 + { 6394 + struct dc *dc = link->ctx->dc; 6395 + 6396 + if (link->ep_type != DISPLAY_ENDPOINT_USB4_DPIA) 6397 + return false; 6398 + 6399 + if (link->link_index < dc->lowest_dpia_link_index) 6400 + return false; 6401 + 6402 + *host_router_index = (link->link_index - dc->lowest_dpia_link_index) / dc->caps.num_of_dpias_per_host_router; 6403 + if (*host_router_index < dc->caps.num_of_host_routers) 6404 + return true; 6405 + else 6406 + return false; 6382 6407 } 6383 6408 6384 6409 bool dc_is_cursor_limit_pending(struct dc *dc)
+7 -1
drivers/gpu/drm/amd/display/dc/dc.h
··· 66 66 #define MAX_STREAMS 6 67 67 #define MIN_VIEWPORT_SIZE 12 68 68 #define MAX_NUM_EDP 2 69 - #define MAX_HOST_ROUTERS_NUM 2 69 + #define MAX_HOST_ROUTERS_NUM 3 70 + #define MAX_DPIA_PER_HOST_ROUTER 2 70 71 71 72 /* Display Core Interfaces */ 72 73 struct dc_versions { ··· 306 305 /* Conservative limit for DCC cases which require ODM4:1 to support*/ 307 306 uint32_t dcc_plane_width_limit; 308 307 struct dc_scl_caps scl_caps; 308 + uint8_t num_of_host_routers; 309 + uint8_t num_of_dpias_per_host_router; 309 310 }; 310 311 311 312 struct dc_bug_wa { ··· 1606 1603 1607 1604 uint8_t link_count; 1608 1605 struct dc_link *links[MAX_LINKS]; 1606 + uint8_t lowest_dpia_link_index; 1609 1607 struct link_service *link_srv; 1610 1608 1611 1609 struct dc_state *current_state; ··· 2598 2594 struct dc_power_profile dc_get_power_profile_for_dc_state(const struct dc_state *context); 2599 2595 2600 2596 unsigned int dc_get_det_buffer_size_from_state(const struct dc_state *context); 2597 + 2598 + bool dc_get_host_router_index(const struct dc_link *link, unsigned int *host_router_index); 2601 2599 2602 2600 /* DSC Interfaces */ 2603 2601 #include "dc_dsc.h"
+2 -2
drivers/gpu/drm/amd/display/dc/dc_dp_types.h
··· 1172 1172 union dp_128b_132b_supported_lttpr_link_rates supported_128b_132b_rates; 1173 1173 union dp_alpm_lttpr_cap alpm; 1174 1174 uint8_t aux_rd_interval[MAX_REPEATER_CNT - 1]; 1175 - uint8_t lttpr_ieee_oui[3]; 1176 - uint8_t lttpr_device_id[6]; 1175 + uint8_t lttpr_ieee_oui[3]; // Always read from closest LTTPR to host 1176 + uint8_t lttpr_device_id[6]; // Always read from closest LTTPR to host 1177 1177 }; 1178 1178 1179 1179 struct dc_dongle_dfp_cap_ext {
+1
drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
··· 788 788 plane->pixel_format = dml2_420_10; 789 789 break; 790 790 case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616: 791 + case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616: 791 792 case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616F: 792 793 case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616F: 793 794 plane->pixel_format = dml2_444_64;
+4 -1
drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
··· 4685 4685 //the tdlut is fetched during the 2 row times of prefetch. 4686 4686 if (p->setup_for_tdlut) { 4687 4687 *p->tdlut_groups_per_2row_ub = (unsigned int)math_ceil2((double) *p->tdlut_bytes_per_frame / *p->tdlut_bytes_per_group, 1); 4688 - *p->tdlut_opt_time = (*p->tdlut_bytes_per_frame - p->cursor_buffer_size * 1024) / tdlut_drain_rate; 4688 + if (*p->tdlut_bytes_per_frame > p->cursor_buffer_size * 1024) 4689 + *p->tdlut_opt_time = (*p->tdlut_bytes_per_frame - p->cursor_buffer_size * 1024) / tdlut_drain_rate; 4690 + else 4691 + *p->tdlut_opt_time = 0; 4689 4692 *p->tdlut_drain_time = p->cursor_buffer_size * 1024 / tdlut_drain_rate; 4690 4693 *p->tdlut_bytes_to_deliver = (unsigned int) (p->cursor_buffer_size * 1024.0); 4691 4694 }
+1
drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
··· 953 953 out->SourcePixelFormat[location] = dml_420_10; 954 954 break; 955 955 case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616: 956 + case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616: 956 957 case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616F: 957 958 case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616F: 958 959 out->SourcePixelFormat[location] = dml_444_64;
+1 -1
drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
··· 1225 1225 return; 1226 1226 1227 1227 if (link->local_sink && link->local_sink->sink_signal == SIGNAL_TYPE_EDP) { 1228 - if (!link->skip_implict_edp_power_control) 1228 + if (!link->skip_implict_edp_power_control && hws) 1229 1229 hws->funcs.edp_backlight_control(link, false); 1230 1230 link->dc->hwss.set_abm_immediate_disable(pipe_ctx); 1231 1231 }
+28
drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
··· 1047 1047 if (dc->caps.sequential_ono) { 1048 1048 update_state->pg_pipe_res_update[PG_HUBP][pipe_ctx->stream_res.dsc->inst] = false; 1049 1049 update_state->pg_pipe_res_update[PG_DPP][pipe_ctx->stream_res.dsc->inst] = false; 1050 + 1051 + /* All HUBP/DPP instances must be powered if the DSC inst != HUBP inst */ 1052 + if (!pipe_ctx->top_pipe && pipe_ctx->plane_res.hubp && 1053 + pipe_ctx->plane_res.hubp->inst != pipe_ctx->stream_res.dsc->inst) { 1054 + for (j = 0; j < dc->res_pool->pipe_count; ++j) { 1055 + update_state->pg_pipe_res_update[PG_HUBP][j] = false; 1056 + update_state->pg_pipe_res_update[PG_DPP][j] = false; 1057 + } 1058 + } 1050 1059 } 1051 1060 } 1052 1061 ··· 1202 1193 update_state->pg_pipe_res_update[PG_HDMISTREAM][0] = true; 1203 1194 1204 1195 if (dc->caps.sequential_ono) { 1196 + for (i = 0; i < dc->res_pool->pipe_count; i++) { 1197 + struct pipe_ctx *new_pipe = &context->res_ctx.pipe_ctx[i]; 1198 + 1199 + if (new_pipe->stream_res.dsc && !new_pipe->top_pipe && 1200 + update_state->pg_pipe_res_update[PG_DSC][new_pipe->stream_res.dsc->inst]) { 1201 + update_state->pg_pipe_res_update[PG_HUBP][new_pipe->stream_res.dsc->inst] = true; 1202 + update_state->pg_pipe_res_update[PG_DPP][new_pipe->stream_res.dsc->inst] = true; 1203 + 1204 + /* All HUBP/DPP instances must be powered if the DSC inst != HUBP inst */ 1205 + if (new_pipe->plane_res.hubp && 1206 + new_pipe->plane_res.hubp->inst != new_pipe->stream_res.dsc->inst) { 1207 + for (j = 0; j < dc->res_pool->pipe_count; ++j) { 1208 + update_state->pg_pipe_res_update[PG_HUBP][j] = true; 1209 + update_state->pg_pipe_res_update[PG_DPP][j] = true; 1210 + } 1211 + } 1212 + } 1213 + } 1214 + 1205 1215 for (i = dc->res_pool->pipe_count - 1; i >= 0; i--) { 1206 1216 if (update_state->pg_pipe_res_update[PG_HUBP][i] && 1207 1217 update_state->pg_pipe_res_update[PG_DPP][i]) {
+3
drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
··· 1954 1954 dc->caps.color.mpc.ogam_rom_caps.hlg = 0; 1955 1955 dc->caps.color.mpc.ocsc = 1; 1956 1956 1957 + dc->caps.num_of_host_routers = 2; 1958 + dc->caps.num_of_dpias_per_host_router = 2; 1959 + 1957 1960 /* Use pipe context based otg sync logic */ 1958 1961 dc->config.use_pipe_ctx_sync_logic = true; 1959 1962 dc->config.disable_hbr_audio_dp2 = true;
+3
drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
··· 1885 1885 1886 1886 dc->caps.max_disp_clock_khz_at_vmin = 650000; 1887 1887 1888 + dc->caps.num_of_host_routers = 2; 1889 + dc->caps.num_of_dpias_per_host_router = 2; 1890 + 1888 1891 /* Use pipe context based otg sync logic */ 1889 1892 dc->config.use_pipe_ctx_sync_logic = true; 1890 1893
+3
drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
··· 1894 1894 dc->caps.color.mpc.ogam_rom_caps.hlg = 0; 1895 1895 dc->caps.color.mpc.ocsc = 1; 1896 1896 1897 + dc->caps.num_of_host_routers = 2; 1898 + dc->caps.num_of_dpias_per_host_router = 2; 1899 + 1897 1900 /* max_disp_clock_khz_at_vmin is slightly lower than the STA value in order 1898 1901 * to provide some margin. 1899 1902 * It's expected for furture ASIC to have equal or higher value, in order to
+3
drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
··· 1866 1866 dc->caps.color.mpc.ogam_rom_caps.hlg = 0; 1867 1867 dc->caps.color.mpc.ocsc = 1; 1868 1868 1869 + dc->caps.num_of_host_routers = 2; 1870 + dc->caps.num_of_dpias_per_host_router = 2; 1871 + 1869 1872 /* max_disp_clock_khz_at_vmin is slightly lower than the STA value in order 1870 1873 * to provide some margin. 1871 1874 * It's expected for furture ASIC to have equal or higher value, in order to
+3
drivers/gpu/drm/amd/display/dc/resource/dcn36/dcn36_resource.c
··· 1867 1867 dc->caps.color.mpc.ogam_rom_caps.hlg = 0; 1868 1868 dc->caps.color.mpc.ocsc = 1; 1869 1869 1870 + dc->caps.num_of_host_routers = 2; 1871 + dc->caps.num_of_dpias_per_host_router = 2; 1872 + 1870 1873 /* max_disp_clock_khz_at_vmin is slightly lower than the STA value in order 1871 1874 * to provide some margin. 1872 1875 * It's expected for furture ASIC to have equal or higher value, in order to
+9 -3
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
··· 58 58 59 59 MODULE_FIRMWARE("amdgpu/aldebaran_smc.bin"); 60 60 MODULE_FIRMWARE("amdgpu/smu_13_0_0.bin"); 61 + MODULE_FIRMWARE("amdgpu/smu_13_0_0_kicker.bin"); 61 62 MODULE_FIRMWARE("amdgpu/smu_13_0_7.bin"); 62 63 MODULE_FIRMWARE("amdgpu/smu_13_0_10.bin"); 63 64 ··· 93 92 int smu_v13_0_init_microcode(struct smu_context *smu) 94 93 { 95 94 struct amdgpu_device *adev = smu->adev; 96 - char ucode_prefix[15]; 95 + char ucode_prefix[30]; 97 96 int err = 0; 98 97 const struct smc_firmware_header_v1_0 *hdr; 99 98 const struct common_firmware_header *header; ··· 104 103 return 0; 105 104 106 105 amdgpu_ucode_ip_version_decode(adev, MP1_HWIP, ucode_prefix, sizeof(ucode_prefix)); 107 - err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED, 108 - "amdgpu/%s.bin", ucode_prefix); 106 + 107 + if (amdgpu_is_kicker_fw(adev)) 108 + err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED, 109 + "amdgpu/%s_kicker.bin", ucode_prefix); 110 + else 111 + err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED, 112 + "amdgpu/%s.bin", ucode_prefix); 109 113 if (err) 110 114 goto out; 111 115
+1 -1
drivers/gpu/drm/arm/malidp_planes.c
··· 159 159 } 160 160 161 161 if (!fourcc_mod_is_vendor(modifier, ARM)) { 162 - DRM_ERROR("Unknown modifier (not Arm)\n"); 162 + DRM_DEBUG_KMS("Unknown modifier (not Arm)\n"); 163 163 return false; 164 164 } 165 165
-1
drivers/gpu/drm/ast/ast_mode.c
··· 29 29 */ 30 30 31 31 #include <linux/delay.h> 32 - #include <linux/export.h> 33 32 #include <linux/pci.h> 34 33 35 34 #include <drm/drm_atomic.h>
+4 -1
drivers/gpu/drm/etnaviv/etnaviv_sched.c
··· 35 35 *sched_job) 36 36 { 37 37 struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job); 38 + struct drm_gpu_scheduler *sched = sched_job->sched; 38 39 struct etnaviv_gpu *gpu = submit->gpu; 39 40 u32 dma_addr, primid = 0; 40 41 int change; ··· 90 89 return DRM_GPU_SCHED_STAT_NOMINAL; 91 90 92 91 out_no_timeout: 93 - list_add(&sched_job->list, &sched_job->sched->pending_list); 92 + spin_lock(&sched->job_list_lock); 93 + list_add(&sched_job->list, &sched->pending_list); 94 + spin_unlock(&sched->job_list_lock); 94 95 return DRM_GPU_SCHED_STAT_NOMINAL; 95 96 } 96 97
+2 -2
drivers/gpu/drm/i915/display/vlv_dsi.c
··· 1056 1056 BXT_MIPI_TRANS_VACTIVE(port)); 1057 1057 adjusted_mode->crtc_vtotal = 1058 1058 intel_de_read(display, 1059 - BXT_MIPI_TRANS_VTOTAL(port)); 1059 + BXT_MIPI_TRANS_VTOTAL(port)) + 1; 1060 1060 1061 1061 hactive = adjusted_mode->crtc_hdisplay; 1062 1062 hfp = intel_de_read(display, MIPI_HFP_COUNT(display, port)); ··· 1260 1260 intel_de_write(display, BXT_MIPI_TRANS_VACTIVE(port), 1261 1261 adjusted_mode->crtc_vdisplay); 1262 1262 intel_de_write(display, BXT_MIPI_TRANS_VTOTAL(port), 1263 - adjusted_mode->crtc_vtotal); 1263 + adjusted_mode->crtc_vtotal - 1); 1264 1264 } 1265 1265 1266 1266 intel_de_write(display, MIPI_HACTIVE_AREA_COUNT(display, port),
+2 -2
drivers/gpu/drm/i915/i915_pmu.c
··· 112 112 { 113 113 unsigned int bit = config_bit(config); 114 114 115 - if (__builtin_constant_p(config)) 115 + if (__builtin_constant_p(bit)) 116 116 BUILD_BUG_ON(bit > 117 117 BITS_PER_TYPE(typeof_member(struct i915_pmu, 118 118 enable)) - 1); ··· 121 121 BITS_PER_TYPE(typeof_member(struct i915_pmu, 122 122 enable)) - 1); 123 123 124 - return BIT(config_bit(config)); 124 + return BIT(bit); 125 125 } 126 126 127 127 static bool is_engine_event(struct perf_event *event)
-1
drivers/gpu/drm/mgag200/mgag200_ddc.c
··· 26 26 * Authors: Dave Airlie <airlied@redhat.com> 27 27 */ 28 28 29 - #include <linux/export.h> 30 29 #include <linux/i2c-algo-bit.h> 31 30 #include <linux/i2c.h> 32 31 #include <linux/pci.h>
-5
drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
··· 71 71 return 0; 72 72 } 73 73 74 - static void a2xx_gpummu_resume_translation(struct msm_mmu *mmu) 75 - { 76 - } 77 - 78 74 static void a2xx_gpummu_destroy(struct msm_mmu *mmu) 79 75 { 80 76 struct a2xx_gpummu *gpummu = to_a2xx_gpummu(mmu); ··· 86 90 .map = a2xx_gpummu_map, 87 91 .unmap = a2xx_gpummu_unmap, 88 92 .destroy = a2xx_gpummu_destroy, 89 - .resume_translation = a2xx_gpummu_resume_translation, 90 93 }; 91 94 92 95 struct msm_mmu *a2xx_gpummu_new(struct device *dev, struct msm_gpu *gpu)
+2
drivers/gpu/drm/msm/adreno/a5xx_gpu.c
··· 131 131 struct msm_ringbuffer *ring = submit->ring; 132 132 unsigned int i, ibs = 0; 133 133 134 + adreno_check_and_reenable_stall(adreno_gpu); 135 + 134 136 if (IS_ENABLED(CONFIG_DRM_MSM_GPU_SUDO) && submit->in_rb) { 135 137 ring->cur_ctx_seqno = 0; 136 138 a5xx_submit_in_rb(gpu, submit);
+18
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
··· 130 130 OUT_RING(ring, lower_32_bits(rbmemptr(ring, fence))); 131 131 OUT_RING(ring, upper_32_bits(rbmemptr(ring, fence))); 132 132 OUT_RING(ring, submit->seqno - 1); 133 + 134 + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); 135 + OUT_RING(ring, CP_SET_THREAD_BOTH); 136 + 137 + /* Reset state used to synchronize BR and BV */ 138 + OUT_PKT7(ring, CP_RESET_CONTEXT_STATE, 1); 139 + OUT_RING(ring, 140 + CP_RESET_CONTEXT_STATE_0_CLEAR_ON_CHIP_TS | 141 + CP_RESET_CONTEXT_STATE_0_CLEAR_RESOURCE_TABLE | 142 + CP_RESET_CONTEXT_STATE_0_CLEAR_BV_BR_COUNTER | 143 + CP_RESET_CONTEXT_STATE_0_RESET_GLOBAL_LOCAL_TS); 144 + 145 + OUT_PKT7(ring, CP_THREAD_CONTROL, 1); 146 + OUT_RING(ring, CP_SET_THREAD_BR); 133 147 } 134 148 135 149 if (!sysprof) { ··· 225 211 struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 226 212 struct msm_ringbuffer *ring = submit->ring; 227 213 unsigned int i, ibs = 0; 214 + 215 + adreno_check_and_reenable_stall(adreno_gpu); 228 216 229 217 a6xx_set_pagetable(a6xx_gpu, ring, submit); 230 218 ··· 350 334 struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 351 335 struct msm_ringbuffer *ring = submit->ring; 352 336 unsigned int i, ibs = 0; 337 + 338 + adreno_check_and_reenable_stall(adreno_gpu); 353 339 354 340 /* 355 341 * Toggle concurrent binning for pagetable switch and set the thread to
+29 -10
drivers/gpu/drm/msm/adreno/adreno_device.c
··· 137 137 return NULL; 138 138 } 139 139 140 - static int find_chipid(struct device *dev, uint32_t *chipid) 140 + static int find_chipid(struct device_node *node, uint32_t *chipid) 141 141 { 142 - struct device_node *node = dev->of_node; 143 142 const char *compat; 144 143 int ret; 145 144 ··· 172 173 /* and if that fails, fall back to legacy "qcom,chipid" property: */ 173 174 ret = of_property_read_u32(node, "qcom,chipid", chipid); 174 175 if (ret) { 175 - DRM_DEV_ERROR(dev, "could not parse qcom,chipid: %d\n", ret); 176 + DRM_ERROR("%pOF: could not parse qcom,chipid: %d\n", 177 + node, ret); 176 178 return ret; 177 179 } 178 180 179 - dev_warn(dev, "Using legacy qcom,chipid binding!\n"); 181 + pr_warn("%pOF: Using legacy qcom,chipid binding!\n", node); 180 182 181 183 return 0; 184 + } 185 + 186 + bool adreno_has_gpu(struct device_node *node) 187 + { 188 + const struct adreno_info *info; 189 + uint32_t chip_id; 190 + int ret; 191 + 192 + ret = find_chipid(node, &chip_id); 193 + if (ret) 194 + return false; 195 + 196 + info = adreno_info(chip_id); 197 + if (!info) { 198 + pr_warn("%pOF: Unknown GPU revision: %"ADRENO_CHIPID_FMT"\n", 199 + node, ADRENO_CHIPID_ARGS(chip_id)); 200 + return false; 201 + } 202 + 203 + return true; 182 204 } 183 205 184 206 static int adreno_bind(struct device *dev, struct device *master, void *data) ··· 211 191 struct msm_gpu *gpu; 212 192 int ret; 213 193 214 - ret = find_chipid(dev, &config.chip_id); 215 - if (ret) 194 + ret = find_chipid(dev->of_node, &config.chip_id); 195 + /* We shouldn't have gotten this far if we can't parse the chip_id */ 196 + if (WARN_ON(ret)) 216 197 return ret; 217 198 218 199 dev->platform_data = &config; 219 200 priv->gpu_pdev = to_platform_device(dev); 220 201 221 202 info = adreno_info(config.chip_id); 222 - if (!info) { 223 - dev_warn(drm->dev, "Unknown GPU revision: %"ADRENO_CHIPID_FMT"\n", 224 - ADRENO_CHIPID_ARGS(config.chip_id)); 203 + /* We shouldn't have gotten this far if we don't recognize 
the GPU: */ 204 + if (WARN_ON(!info)) 225 205 return -ENXIO; 226 - } 227 206 228 207 config.info = info; 229 208
+43 -11
drivers/gpu/drm/msm/adreno/adreno_gpu.c
··· 259 259 return BIT(ttbr1_cfg->ias) - ADRENO_VM_START; 260 260 } 261 261 262 + void adreno_check_and_reenable_stall(struct adreno_gpu *adreno_gpu) 263 + { 264 + struct msm_gpu *gpu = &adreno_gpu->base; 265 + struct msm_drm_private *priv = gpu->dev->dev_private; 266 + unsigned long flags; 267 + 268 + /* 269 + * Wait until the cooldown period has passed and we would actually 270 + * collect a crashdump to re-enable stall-on-fault. 271 + */ 272 + spin_lock_irqsave(&priv->fault_stall_lock, flags); 273 + if (!priv->stall_enabled && 274 + ktime_after(ktime_get(), priv->stall_reenable_time) && 275 + !READ_ONCE(gpu->crashstate)) { 276 + priv->stall_enabled = true; 277 + 278 + gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, true); 279 + } 280 + spin_unlock_irqrestore(&priv->fault_stall_lock, flags); 281 + } 282 + 262 283 #define ARM_SMMU_FSR_TF BIT(1) 263 284 #define ARM_SMMU_FSR_PF BIT(3) 264 285 #define ARM_SMMU_FSR_EF BIT(4) 286 + #define ARM_SMMU_FSR_SS BIT(30) 265 287 266 288 int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, 267 289 struct adreno_smmu_fault_info *info, const char *block, 268 290 u32 scratch[4]) 269 291 { 292 + struct msm_drm_private *priv = gpu->dev->dev_private; 270 293 const char *type = "UNKNOWN"; 271 - bool do_devcoredump = info && !READ_ONCE(gpu->crashstate); 294 + bool do_devcoredump = info && (info->fsr & ARM_SMMU_FSR_SS) && 295 + !READ_ONCE(gpu->crashstate); 296 + unsigned long irq_flags; 272 297 273 298 /* 274 - * If we aren't going to be resuming later from fault_worker, then do 275 - * it now. 299 + * In case there is a subsequent storm of pagefaults, disable 300 + * stall-on-fault for at least half a second. 
276 301 */ 277 - if (!do_devcoredump) { 278 - gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu); 302 + spin_lock_irqsave(&priv->fault_stall_lock, irq_flags); 303 + if (priv->stall_enabled) { 304 + priv->stall_enabled = false; 305 + 306 + gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, false); 279 307 } 308 + priv->stall_reenable_time = ktime_add_ms(ktime_get(), 500); 309 + spin_unlock_irqrestore(&priv->fault_stall_lock, irq_flags); 280 310 281 311 /* 282 312 * Print a default message if we couldn't get the data from the ··· 334 304 scratch[0], scratch[1], scratch[2], scratch[3]); 335 305 336 306 if (do_devcoredump) { 307 + struct msm_gpu_fault_info fault_info = {}; 308 + 337 309 /* Turn off the hangcheck timer to keep it from bothering us */ 338 310 timer_delete(&gpu->hangcheck_timer); 339 311 340 - gpu->fault_info.ttbr0 = info->ttbr0; 341 - gpu->fault_info.iova = iova; 342 - gpu->fault_info.flags = flags; 343 - gpu->fault_info.type = type; 344 - gpu->fault_info.block = block; 312 + fault_info.ttbr0 = info->ttbr0; 313 + fault_info.iova = iova; 314 + fault_info.flags = flags; 315 + fault_info.type = type; 316 + fault_info.block = block; 345 317 346 - kthread_queue_work(gpu->worker, &gpu->fault_work); 318 + msm_gpu_fault_crashstate_capture(gpu, &fault_info); 347 319 } 348 320 349 321 return 0;
+2
drivers/gpu/drm/msm/adreno/adreno_gpu.h
··· 636 636 struct adreno_smmu_fault_info *info, const char *block, 637 637 u32 scratch[4]); 638 638 639 + void adreno_check_and_reenable_stall(struct adreno_gpu *gpu); 640 + 639 641 int adreno_read_speedbin(struct device *dev, u32 *speedbin); 640 642 641 643 /*
+9 -5
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
··· 94 94 timing->vsync_polarity = 0; 95 95 } 96 96 97 - /* for DP/EDP, Shift timings to align it to bottom right */ 98 - if (phys_enc->hw_intf->cap->type == INTF_DP) { 97 + timing->wide_bus_en = dpu_encoder_is_widebus_enabled(phys_enc->parent); 98 + timing->compression_en = dpu_encoder_is_dsc_enabled(phys_enc->parent); 99 + 100 + /* 101 + * For DP/EDP, Shift timings to align it to bottom right. 102 + * wide_bus_en is set for everything excluding SDM845 & 103 + * porch changes cause DisplayPort failure and HDMI tearing. 104 + */ 105 + if (phys_enc->hw_intf->cap->type == INTF_DP && timing->wide_bus_en) { 99 106 timing->h_back_porch += timing->h_front_porch; 100 107 timing->h_front_porch = 0; 101 108 timing->v_back_porch += timing->v_front_porch; 102 109 timing->v_front_porch = 0; 103 110 } 104 - 105 - timing->wide_bus_en = dpu_encoder_is_widebus_enabled(phys_enc->parent); 106 - timing->compression_en = dpu_encoder_is_dsc_enabled(phys_enc->parent); 107 111 108 112 /* 109 113 * for DP, divide the horizonal parameters by 2 when
+6 -1
drivers/gpu/drm/msm/dp/dp_display.c
··· 128 128 {} 129 129 }; 130 130 131 + static const struct msm_dp_desc msm_dp_desc_sdm845[] = { 132 + { .io_start = 0x0ae90000, .id = MSM_DP_CONTROLLER_0 }, 133 + {} 134 + }; 135 + 131 136 static const struct msm_dp_desc msm_dp_desc_sc7180[] = { 132 137 { .io_start = 0x0ae90000, .id = MSM_DP_CONTROLLER_0, .wide_bus_supported = true }, 133 138 {} ··· 185 180 { .compatible = "qcom,sc8180x-edp", .data = &msm_dp_desc_sc8180x }, 186 181 { .compatible = "qcom,sc8280xp-dp", .data = &msm_dp_desc_sc8280xp }, 187 182 { .compatible = "qcom,sc8280xp-edp", .data = &msm_dp_desc_sc8280xp }, 188 - { .compatible = "qcom,sdm845-dp", .data = &msm_dp_desc_sc7180 }, 183 + { .compatible = "qcom,sdm845-dp", .data = &msm_dp_desc_sdm845 }, 189 184 { .compatible = "qcom,sm8350-dp", .data = &msm_dp_desc_sc7180 }, 190 185 { .compatible = "qcom,sm8650-dp", .data = &msm_dp_desc_sm8650 }, 191 186 { .compatible = "qcom,x1e80100-dp", .data = &msm_dp_desc_x1e80100 },
+7
drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
··· 704 704 /* TODO: Remove this when we have proper display handover support */ 705 705 msm_dsi_phy_pll_save_state(phy); 706 706 707 + /* 708 + * Store also proper vco_current_rate, because its value will be used in 709 + * dsi_10nm_pll_restore_state(). 710 + */ 711 + if (!dsi_pll_10nm_vco_recalc_rate(&pll_10nm->clk_hw, VCO_REF_CLK_RATE)) 712 + pll_10nm->vco_current_rate = pll_10nm->phy->cfg->min_pll_rate; 713 + 707 714 return 0; 708 715 } 709 716
+32
drivers/gpu/drm/msm/msm_debugfs.c
··· 208 208 shrink_get, shrink_set, 209 209 "0x%08llx\n"); 210 210 211 + /* 212 + * Return the number of microseconds to wait until stall-on-fault is 213 + * re-enabled. If 0 then it is already enabled or will be re-enabled on the 214 + * next submit (unless there's a leftover devcoredump). This is useful for 215 + * kernel tests that intentionally produce a fault and check the devcoredump to 216 + * wait until the cooldown period is over. 217 + */ 218 + 219 + static int 220 + stall_reenable_time_get(void *data, u64 *val) 221 + { 222 + struct msm_drm_private *priv = data; 223 + unsigned long irq_flags; 224 + 225 + spin_lock_irqsave(&priv->fault_stall_lock, irq_flags); 226 + 227 + if (priv->stall_enabled) 228 + *val = 0; 229 + else 230 + *val = max(ktime_us_delta(priv->stall_reenable_time, ktime_get()), 0); 231 + 232 + spin_unlock_irqrestore(&priv->fault_stall_lock, irq_flags); 233 + 234 + return 0; 235 + } 236 + 237 + DEFINE_DEBUGFS_ATTRIBUTE(stall_reenable_time_fops, 238 + stall_reenable_time_get, NULL, 239 + "%lld\n"); 211 240 212 241 static int msm_gem_show(struct seq_file *m, void *arg) 213 242 { ··· 347 318 348 319 debugfs_create_bool("disable_err_irq", 0600, minor->debugfs_root, 349 320 &priv->disable_err_irq); 321 + 322 + debugfs_create_file("stall_reenable_time_us", 0400, minor->debugfs_root, 323 + priv, &stall_reenable_time_fops); 350 324 351 325 gpu_devfreq = debugfs_create_dir("devfreq", minor->debugfs_root); 352 326
+7 -3
drivers/gpu/drm/msm/msm_drv.c
··· 245 245 drm_gem_lru_init(&priv->lru.willneed, &priv->lru.lock); 246 246 drm_gem_lru_init(&priv->lru.dontneed, &priv->lru.lock); 247 247 248 + /* Initialize stall-on-fault */ 249 + spin_lock_init(&priv->fault_stall_lock); 250 + priv->stall_enabled = true; 251 + 248 252 /* Teach lockdep about lock ordering wrt. shrinker: */ 249 253 fs_reclaim_acquire(GFP_KERNEL); 250 254 might_lock(&priv->lru.lock); ··· 930 926 * is no external component that we need to add since LVDS is within MDP4 931 927 * itself. 932 928 */ 933 - static int add_components_mdp(struct device *master_dev, 929 + static int add_mdp_components(struct device *master_dev, 934 930 struct component_match **matchptr) 935 931 { 936 932 struct device_node *np = master_dev->of_node; ··· 1034 1030 if (!np) 1035 1031 return 0; 1036 1032 1037 - if (of_device_is_available(np)) 1033 + if (of_device_is_available(np) && adreno_has_gpu(np)) 1038 1034 drm_of_component_match_add(dev, matchptr, component_compare_of, np); 1039 1035 1040 1036 of_node_put(np); ··· 1075 1071 1076 1072 /* Add mdp components if we have KMS. */ 1077 1073 if (kms_init) { 1078 - ret = add_components_mdp(master_dev, &match); 1074 + ret = add_mdp_components(master_dev, &match); 1079 1075 if (ret) 1080 1076 return ret; 1081 1077 }
+23
drivers/gpu/drm/msm/msm_drv.h
··· 222 222 * the sw hangcheck mechanism. 223 223 */ 224 224 bool disable_err_irq; 225 + 226 + /** 227 + * @fault_stall_lock: 228 + * 229 + * Serialize changes to stall-on-fault state. 230 + */ 231 + spinlock_t fault_stall_lock; 232 + 233 + /** 234 + * @fault_stall_reenable_time: 235 + * 236 + * If stall_enabled is false, when to reenable stall-on-fault. 237 + * Protected by @fault_stall_lock. 238 + */ 239 + ktime_t stall_reenable_time; 240 + 241 + /** 242 + * @stall_enabled: 243 + * 244 + * Whether stall-on-fault is currently enabled. Protected by 245 + * @fault_stall_lock. 246 + */ 247 + bool stall_enabled; 225 248 }; 226 249 227 250 const struct msm_format *mdp_get_format(struct msm_kms *kms, uint32_t format, uint64_t modifier);
+15 -2
drivers/gpu/drm/msm/msm_gem_submit.c
··· 85 85 container_of(kref, struct msm_gem_submit, ref); 86 86 unsigned i; 87 87 88 + /* 89 + * In error paths, we could unref the submit without calling 90 + * drm_sched_entity_push_job(), so msm_job_free() will never 91 + * get called. Since drm_sched_job_cleanup() will NULL out 92 + * s_fence, we can use that to detect this case. 93 + */ 94 + if (submit->base.s_fence) 95 + drm_sched_job_cleanup(&submit->base); 96 + 88 97 if (submit->fence_id) { 89 98 spin_lock(&submit->queue->idr_lock); 90 99 idr_remove(&submit->queue->fence_idr, submit->fence_id); ··· 658 649 struct msm_ringbuffer *ring; 659 650 struct msm_submit_post_dep *post_deps = NULL; 660 651 struct drm_syncobj **syncobjs_to_reset = NULL; 652 + struct sync_file *sync_file = NULL; 661 653 int out_fence_fd = -1; 662 654 unsigned i; 663 655 int ret; ··· 868 858 } 869 859 870 860 if (ret == 0 && args->flags & MSM_SUBMIT_FENCE_FD_OUT) { 871 - struct sync_file *sync_file = sync_file_create(submit->user_fence); 861 + sync_file = sync_file_create(submit->user_fence); 872 862 if (!sync_file) { 873 863 ret = -ENOMEM; 874 864 } else { ··· 902 892 out_unlock: 903 893 mutex_unlock(&queue->lock); 904 894 out_post_unlock: 905 - if (ret && (out_fence_fd >= 0)) 895 + if (ret && (out_fence_fd >= 0)) { 906 896 put_unused_fd(out_fence_fd); 897 + if (sync_file) 898 + fput(sync_file->file); 899 + } 907 900 908 901 if (!IS_ERR_OR_NULL(submit)) { 909 902 msm_gem_submit_put(submit);
+9 -11
drivers/gpu/drm/msm/msm_gpu.c
··· 257 257 } 258 258 259 259 static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, 260 - struct msm_gem_submit *submit, char *comm, char *cmd) 260 + struct msm_gem_submit *submit, struct msm_gpu_fault_info *fault_info, 261 + char *comm, char *cmd) 261 262 { 262 263 struct msm_gpu_state *state; 263 264 ··· 277 276 /* Fill in the additional crash state information */ 278 277 state->comm = kstrdup(comm, GFP_KERNEL); 279 278 state->cmd = kstrdup(cmd, GFP_KERNEL); 280 - state->fault_info = gpu->fault_info; 279 + if (fault_info) 280 + state->fault_info = *fault_info; 281 281 282 282 if (submit) { 283 283 int i; ··· 310 308 } 311 309 #else 312 310 static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, 313 - struct msm_gem_submit *submit, char *comm, char *cmd) 311 + struct msm_gem_submit *submit, struct msm_gpu_fault_info *fault_info, 312 + char *comm, char *cmd) 314 313 { 315 314 } 316 315 #endif ··· 408 405 409 406 /* Record the crash state */ 410 407 pm_runtime_get_sync(&gpu->pdev->dev); 411 - msm_gpu_crashstate_capture(gpu, submit, comm, cmd); 408 + msm_gpu_crashstate_capture(gpu, submit, NULL, comm, cmd); 412 409 413 410 kfree(cmd); 414 411 kfree(comm); ··· 462 459 msm_gpu_retire(gpu); 463 460 } 464 461 465 - static void fault_worker(struct kthread_work *work) 462 + void msm_gpu_fault_crashstate_capture(struct msm_gpu *gpu, struct msm_gpu_fault_info *fault_info) 466 463 { 467 - struct msm_gpu *gpu = container_of(work, struct msm_gpu, fault_work); 468 464 struct msm_gem_submit *submit; 469 465 struct msm_ringbuffer *cur_ring = gpu->funcs->active_ring(gpu); 470 466 char *comm = NULL, *cmd = NULL; ··· 486 484 487 485 /* Record the crash state */ 488 486 pm_runtime_get_sync(&gpu->pdev->dev); 489 - msm_gpu_crashstate_capture(gpu, submit, comm, cmd); 487 + msm_gpu_crashstate_capture(gpu, submit, fault_info, comm, cmd); 490 488 pm_runtime_put_sync(&gpu->pdev->dev); 491 489 492 490 kfree(cmd); 493 491 kfree(comm); 494 492 495 493 resume_smmu: 496 - 
memset(&gpu->fault_info, 0, sizeof(gpu->fault_info)); 497 - gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu); 498 - 499 494 mutex_unlock(&gpu->lock); 500 495 } 501 496 ··· 881 882 init_waitqueue_head(&gpu->retire_event); 882 883 kthread_init_work(&gpu->retire_work, retire_worker); 883 884 kthread_init_work(&gpu->recover_work, recover_worker); 884 - kthread_init_work(&gpu->fault_work, fault_worker); 885 885 886 886 priv->hangcheck_period = DRM_MSM_HANGCHECK_DEFAULT_PERIOD; 887 887
+3 -6
drivers/gpu/drm/msm/msm_gpu.h
··· 253 253 #define DRM_MSM_HANGCHECK_PROGRESS_RETRIES 3 254 254 struct timer_list hangcheck_timer; 255 255 256 - /* Fault info for most recent iova fault: */ 257 - struct msm_gpu_fault_info fault_info; 258 - 259 - /* work for handling GPU ioval faults: */ 260 - struct kthread_work fault_work; 261 - 262 256 /* work for handling GPU recovery: */ 263 257 struct kthread_work recover_work; 264 258 ··· 662 668 void msm_gpu_cleanup(struct msm_gpu *gpu); 663 669 664 670 struct msm_gpu *adreno_load_gpu(struct drm_device *dev); 671 + bool adreno_has_gpu(struct device_node *node); 665 672 void __init adreno_register(void); 666 673 void __exit adreno_unregister(void); 667 674 ··· 699 704 700 705 mutex_unlock(&gpu->lock); 701 706 } 707 + 708 + void msm_gpu_fault_crashstate_capture(struct msm_gpu *gpu, struct msm_gpu_fault_info *fault_info); 702 709 703 710 /* 704 711 * Simple macro to semi-cleanly add the MAP_PRIV flag for targets that can
+4 -8
drivers/gpu/drm/msm/msm_iommu.c
··· 345 345 unsigned long iova, int flags, void *arg) 346 346 { 347 347 struct msm_iommu *iommu = arg; 348 - struct msm_mmu *mmu = &iommu->base; 349 348 struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(iommu->base.dev); 350 349 struct adreno_smmu_fault_info info, *ptr = NULL; 351 350 ··· 357 358 return iommu->base.handler(iommu->base.arg, iova, flags, ptr); 358 359 359 360 pr_warn_ratelimited("*** fault: iova=%16lx, flags=%d\n", iova, flags); 360 - 361 - if (mmu->funcs->resume_translation) 362 - mmu->funcs->resume_translation(mmu); 363 361 364 362 return 0; 365 363 } ··· 372 376 return -ENOSYS; 373 377 } 374 378 375 - static void msm_iommu_resume_translation(struct msm_mmu *mmu) 379 + static void msm_iommu_set_stall(struct msm_mmu *mmu, bool enable) 376 380 { 377 381 struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(mmu->dev); 378 382 379 - if (adreno_smmu->resume_translation) 380 - adreno_smmu->resume_translation(adreno_smmu->cookie, true); 383 + if (adreno_smmu->set_stall) 384 + adreno_smmu->set_stall(adreno_smmu->cookie, enable); 381 385 } 382 386 383 387 static void msm_iommu_detach(struct msm_mmu *mmu) ··· 427 431 .map = msm_iommu_map, 428 432 .unmap = msm_iommu_unmap, 429 433 .destroy = msm_iommu_destroy, 430 - .resume_translation = msm_iommu_resume_translation, 434 + .set_stall = msm_iommu_set_stall, 431 435 }; 432 436 433 437 struct msm_mmu *msm_iommu_new(struct device *dev, unsigned long quirks)
+1 -1
drivers/gpu/drm/msm/msm_mmu.h
··· 15 15 size_t len, int prot); 16 16 int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len); 17 17 void (*destroy)(struct msm_mmu *mmu); 18 - void (*resume_translation)(struct msm_mmu *mmu); 18 + void (*set_stall)(struct msm_mmu *mmu, bool enable); 19 19 }; 20 20 21 21 enum msm_mmu_type {
+2 -1
drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
··· 2255 2255 <reg32 offset="0" name="0"> 2256 2256 <bitfield name="CLEAR_ON_CHIP_TS" pos="0" type="boolean"/> 2257 2257 <bitfield name="CLEAR_RESOURCE_TABLE" pos="1" type="boolean"/> 2258 - <bitfield name="CLEAR_GLOBAL_LOCAL_TS" pos="2" type="boolean"/> 2258 + <bitfield name="CLEAR_BV_BR_COUNTER" pos="2" type="boolean"/> 2259 + <bitfield name="RESET_GLOBAL_LOCAL_TS" pos="3" type="boolean"/> 2259 2260 </reg32> 2260 2261 </domain> 2261 2262
+5 -3
drivers/gpu/drm/msm/registers/gen_header.py
··· 11 11 import argparse 12 12 import time 13 13 import datetime 14 + import re 14 15 15 16 class Error(Exception): 16 17 def __init__(self, message): ··· 878 877 """) 879 878 maxlen = 0 880 879 for filepath in p.xml_files: 881 - maxlen = max(maxlen, len(filepath)) 880 + new_filepath = re.sub("^.+drivers","drivers",filepath) 881 + maxlen = max(maxlen, len(new_filepath)) 882 882 for filepath in p.xml_files: 883 - pad = " " * (maxlen - len(filepath)) 883 + pad = " " * (maxlen - len(new_filepath)) 884 884 filesize = str(os.path.getsize(filepath)) 885 885 filesize = " " * (7 - len(filesize)) + filesize 886 886 filetime = time.ctime(os.path.getmtime(filepath)) 887 - print("- " + filepath + pad + " (" + filesize + " bytes, from " + filetime + ")") 887 + print("- " + new_filepath + pad + " (" + filesize + " bytes, from <stripped>)") 888 888 if p.copyright_year: 889 889 current_year = str(datetime.date.today().year) 890 890 print()
+1 -1
drivers/gpu/drm/nouveau/nouveau_backlight.c
··· 42 42 #include "nouveau_acpi.h" 43 43 44 44 static struct ida bl_ida; 45 - #define BL_NAME_SIZE 15 // 12 for name + 2 for digits + 1 for '\0' 45 + #define BL_NAME_SIZE 24 // 12 for name + 11 for digits + 1 for '\0' 46 46 47 47 static bool 48 48 nouveau_get_backlight_name(char backlight_name[BL_NAME_SIZE],
+12 -5
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c
··· 637 637 if (payload_size > max_payload_size) { 638 638 const u32 fn = rpc->function; 639 639 u32 remain_payload_size = payload_size; 640 + void *next; 640 641 641 - /* Adjust length, and send initial RPC. */ 642 - rpc->length = sizeof(*rpc) + max_payload_size; 643 - msg->checksum = rpc->length; 642 + /* Send initial RPC. */ 643 + next = r535_gsp_rpc_get(gsp, fn, max_payload_size); 644 + if (IS_ERR(next)) { 645 + repv = next; 646 + goto done; 647 + } 644 648 645 - repv = r535_gsp_rpc_send(gsp, payload, NVKM_GSP_RPC_REPLY_NOWAIT, 0); 649 + memcpy(next, payload, max_payload_size); 650 + 651 + repv = r535_gsp_rpc_send(gsp, next, NVKM_GSP_RPC_REPLY_NOWAIT, 0); 646 652 if (IS_ERR(repv)) 647 653 goto done; 648 654 ··· 659 653 while (remain_payload_size) { 660 654 u32 size = min(remain_payload_size, 661 655 max_payload_size); 662 - void *next; 663 656 664 657 next = r535_gsp_rpc_get(gsp, NV_VGPU_MSG_FUNCTION_CONTINUATION_RECORD, size); 665 658 if (IS_ERR(next)) { ··· 679 674 /* Wait for reply. */ 680 675 repv = r535_gsp_rpc_handle_reply(gsp, fn, policy, payload_size + 681 676 sizeof(*rpc)); 677 + if (!IS_ERR(repv)) 678 + kvfree(msg); 682 679 } else { 683 680 repv = r535_gsp_rpc_send(gsp, payload, policy, gsp_rpc_len); 684 681 }
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/vmm.c
··· 121 121 page_shift -= desc->bits; 122 122 123 123 ctrl->levels[i].physAddress = pd->pt[0]->addr; 124 - ctrl->levels[i].size = (1 << desc->bits) * desc->size; 124 + ctrl->levels[i].size = BIT_ULL(desc->bits) * desc->size; 125 125 ctrl->levels[i].aperture = 1; 126 126 ctrl->levels[i].pageShift = page_shift; 127 127
+1 -1
drivers/gpu/drm/solomon/ssd130x.c
··· 974 974 975 975 static void ssd132x_clear_screen(struct ssd130x_device *ssd130x, u8 *data_array) 976 976 { 977 - unsigned int columns = DIV_ROUND_UP(ssd130x->height, SSD132X_SEGMENT_WIDTH); 977 + unsigned int columns = DIV_ROUND_UP(ssd130x->width, SSD132X_SEGMENT_WIDTH); 978 978 unsigned int height = ssd130x->height; 979 979 980 980 memset(data_array, 0, columns * height);
+6 -2
drivers/gpu/drm/v3d/v3d_sched.c
··· 199 199 struct v3d_dev *v3d = job->v3d; 200 200 struct v3d_file_priv *file = job->file->driver_priv; 201 201 struct v3d_stats *global_stats = &v3d->queue[queue].stats; 202 - struct v3d_stats *local_stats = &file->stats[queue]; 203 202 u64 now = local_clock(); 204 203 unsigned long flags; 205 204 ··· 208 209 else 209 210 preempt_disable(); 210 211 211 - v3d_stats_update(local_stats, now); 212 + /* Don't update the local stats if the file context has already closed */ 213 + if (file) 214 + v3d_stats_update(&file->stats[queue], now); 215 + else 216 + drm_dbg(&v3d->drm, "The file descriptor was closed before job completion\n"); 217 + 212 218 v3d_stats_update(global_stats, now); 213 219 214 220 if (IS_ENABLED(CONFIG_LOCKDEP))
+1 -1
drivers/gpu/drm/xe/xe_gt.c
··· 118 118 xe_gt_mcr_multicast_write(gt, XE2_GAMREQSTRM_CTRL, reg); 119 119 } 120 120 121 - xe_gt_mcr_multicast_write(gt, XEHPC_L3CLOS_MASK(3), 0x3); 121 + xe_gt_mcr_multicast_write(gt, XEHPC_L3CLOS_MASK(3), 0xF); 122 122 xe_force_wake_put(gt_to_fw(gt), fw_ref); 123 123 } 124 124
+8
drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
··· 138 138 int pending_seqno; 139 139 140 140 /* 141 + * we can get here before the CTs are even initialized if we're wedging 142 + * very early, in which case there are not going to be any pending 143 + * fences so we can bail immediately. 144 + */ 145 + if (!xe_guc_ct_initialized(&gt->uc.guc.ct)) 146 + return; 147 + 148 + /* 141 149 * CT channel is already disabled at this point. No new TLB requests can 142 150 * appear. 143 151 */
+5 -2
drivers/gpu/drm/xe/xe_guc_ct.c
··· 514 514 */ 515 515 void xe_guc_ct_stop(struct xe_guc_ct *ct) 516 516 { 517 + if (!xe_guc_ct_initialized(ct)) 518 + return; 519 + 517 520 xe_guc_ct_set_state(ct, XE_GUC_CT_STATE_STOPPED); 518 521 stop_g2h_handler(ct); 519 522 } ··· 763 760 u16 seqno; 764 761 int ret; 765 762 766 - xe_gt_assert(gt, ct->state != XE_GUC_CT_STATE_NOT_INITIALIZED); 763 + xe_gt_assert(gt, xe_guc_ct_initialized(ct)); 767 764 xe_gt_assert(gt, !g2h_len || !g2h_fence); 768 765 xe_gt_assert(gt, !num_g2h || !g2h_fence); 769 766 xe_gt_assert(gt, !g2h_len || num_g2h); ··· 1347 1344 u32 action; 1348 1345 u32 *hxg; 1349 1346 1350 - xe_gt_assert(gt, ct->state != XE_GUC_CT_STATE_NOT_INITIALIZED); 1347 + xe_gt_assert(gt, xe_guc_ct_initialized(ct)); 1351 1348 lockdep_assert_held(&ct->fast_lock); 1352 1349 1353 1350 if (ct->state == XE_GUC_CT_STATE_DISABLED)
+5
drivers/gpu/drm/xe/xe_guc_ct.h
··· 22 22 void xe_guc_ct_snapshot_free(struct xe_guc_ct_snapshot *snapshot); 23 23 void xe_guc_ct_print(struct xe_guc_ct *ct, struct drm_printer *p, bool want_ctb); 24 24 25 + static inline bool xe_guc_ct_initialized(struct xe_guc_ct *ct) 26 + { 27 + return ct->state != XE_GUC_CT_STATE_NOT_INITIALIZED; 28 + } 29 + 25 30 static inline bool xe_guc_ct_enabled(struct xe_guc_ct *ct) 26 31 { 27 32 return ct->state == XE_GUC_CT_STATE_ENABLED;
+1 -1
drivers/gpu/drm/xe/xe_guc_pc.c
··· 1068 1068 goto out; 1069 1069 } 1070 1070 1071 - memset(pc->bo->vmap.vaddr, 0, size); 1071 + xe_map_memset(xe, &pc->bo->vmap, 0, 0, size); 1072 1072 slpc_shared_data_write(pc, header.size, size); 1073 1073 1074 1074 earlier = ktime_get();
+3
drivers/gpu/drm/xe/xe_guc_submit.c
··· 1762 1762 { 1763 1763 int ret; 1764 1764 1765 + if (!guc->submission_state.initialized) 1766 + return 0; 1767 + 1765 1768 /* 1766 1769 * Using an atomic here rather than submission_state.lock as this 1767 1770 * function can be called while holding the CT lock (engine reset
+2 -2
drivers/i2c/algos/i2c-algo-bit.c
··· 619 619 /* -----exported algorithm data: ------------------------------------- */ 620 620 621 621 const struct i2c_algorithm i2c_bit_algo = { 622 - .master_xfer = bit_xfer, 623 - .master_xfer_atomic = bit_xfer_atomic, 622 + .xfer = bit_xfer, 623 + .xfer_atomic = bit_xfer_atomic, 624 624 .functionality = bit_func, 625 625 }; 626 626 EXPORT_SYMBOL(i2c_bit_algo);
+2 -2
drivers/i2c/algos/i2c-algo-pca.c
··· 361 361 } 362 362 363 363 static const struct i2c_algorithm pca_algo = { 364 - .master_xfer = pca_xfer, 365 - .functionality = pca_func, 364 + .xfer = pca_xfer, 365 + .functionality = pca_func, 366 366 }; 367 367 368 368 static unsigned int pca_probe_chip(struct i2c_adapter *adap)
+2 -2
drivers/i2c/algos/i2c-algo-pcf.c
··· 389 389 390 390 /* exported algorithm data: */ 391 391 static const struct i2c_algorithm pcf_algo = { 392 - .master_xfer = pcf_xfer, 393 - .functionality = pcf_func, 392 + .xfer = pcf_xfer, 393 + .functionality = pcf_func, 394 394 }; 395 395 396 396 /*
+1 -1
drivers/i2c/busses/i2c-amd-mp2-plat.c
··· 179 179 } 180 180 181 181 static const struct i2c_algorithm i2c_amd_algorithm = { 182 - .master_xfer = i2c_amd_xfer, 182 + .xfer = i2c_amd_xfer, 183 183 .functionality = i2c_amd_func, 184 184 }; 185 185
+4 -4
drivers/i2c/busses/i2c-aspeed.c
··· 814 814 #endif /* CONFIG_I2C_SLAVE */ 815 815 816 816 static const struct i2c_algorithm aspeed_i2c_algo = { 817 - .master_xfer = aspeed_i2c_master_xfer, 818 - .functionality = aspeed_i2c_functionality, 817 + .xfer = aspeed_i2c_master_xfer, 818 + .functionality = aspeed_i2c_functionality, 819 819 #if IS_ENABLED(CONFIG_I2C_SLAVE) 820 - .reg_slave = aspeed_i2c_reg_slave, 821 - .unreg_slave = aspeed_i2c_unreg_slave, 820 + .reg_slave = aspeed_i2c_reg_slave, 821 + .unreg_slave = aspeed_i2c_unreg_slave, 822 822 #endif /* CONFIG_I2C_SLAVE */ 823 823 }; 824 824
+2 -2
drivers/i2c/busses/i2c-at91-master.c
··· 739 739 } 740 740 741 741 static const struct i2c_algorithm at91_twi_algorithm = { 742 - .master_xfer = at91_twi_xfer, 743 - .functionality = at91_twi_func, 742 + .xfer = at91_twi_xfer, 743 + .functionality = at91_twi_func, 744 744 }; 745 745 746 746 static int at91_twi_configure_dma(struct at91_twi_dev *dev, u32 phy_addr)
+1 -1
drivers/i2c/busses/i2c-axxia.c
··· 706 706 } 707 707 708 708 static const struct i2c_algorithm axxia_i2c_algo = { 709 - .master_xfer = axxia_i2c_xfer, 709 + .xfer = axxia_i2c_xfer, 710 710 .functionality = axxia_i2c_func, 711 711 .reg_slave = axxia_i2c_reg_slave, 712 712 .unreg_slave = axxia_i2c_unreg_slave,
+1 -1
drivers/i2c/busses/i2c-bcm-iproc.c
··· 1041 1041 } 1042 1042 1043 1043 static struct i2c_algorithm bcm_iproc_algo = { 1044 - .master_xfer = bcm_iproc_i2c_xfer, 1044 + .xfer = bcm_iproc_i2c_xfer, 1045 1045 .functionality = bcm_iproc_i2c_functionality, 1046 1046 .reg_slave = bcm_iproc_i2c_reg_slave, 1047 1047 .unreg_slave = bcm_iproc_i2c_unreg_slave,
+5 -5
drivers/i2c/busses/i2c-cadence.c
··· 1231 1231 #endif 1232 1232 1233 1233 static const struct i2c_algorithm cdns_i2c_algo = { 1234 - .master_xfer = cdns_i2c_master_xfer, 1235 - .master_xfer_atomic = cdns_i2c_master_xfer_atomic, 1236 - .functionality = cdns_i2c_func, 1234 + .xfer = cdns_i2c_master_xfer, 1235 + .xfer_atomic = cdns_i2c_master_xfer_atomic, 1236 + .functionality = cdns_i2c_func, 1237 1237 #if IS_ENABLED(CONFIG_I2C_SLAVE) 1238 - .reg_slave = cdns_reg_slave, 1239 - .unreg_slave = cdns_unreg_slave, 1238 + .reg_slave = cdns_reg_slave, 1239 + .unreg_slave = cdns_unreg_slave, 1240 1240 #endif 1241 1241 }; 1242 1242
+2 -2
drivers/i2c/busses/i2c-cgbc.c
··· 331 331 } 332 332 333 333 static const struct i2c_algorithm cgbc_i2c_algorithm = { 334 - .master_xfer = cgbc_i2c_xfer, 335 - .functionality = cgbc_i2c_func, 334 + .xfer = cgbc_i2c_xfer, 335 + .functionality = cgbc_i2c_func, 336 336 }; 337 337 338 338 static struct i2c_algo_cgbc_data cgbc_i2c_algo_data[] = {
+1 -1
drivers/i2c/busses/i2c-eg20t.c
··· 690 690 } 691 691 692 692 static const struct i2c_algorithm pch_algorithm = { 693 - .master_xfer = pch_i2c_xfer, 693 + .xfer = pch_i2c_xfer, 694 694 .functionality = pch_i2c_func 695 695 }; 696 696
+3 -3
drivers/i2c/busses/i2c-emev2.c
··· 351 351 } 352 352 353 353 static const struct i2c_algorithm em_i2c_algo = { 354 - .master_xfer = em_i2c_xfer, 354 + .xfer = em_i2c_xfer, 355 355 .functionality = em_i2c_func, 356 - .reg_slave = em_i2c_reg_slave, 357 - .unreg_slave = em_i2c_unreg_slave, 356 + .reg_slave = em_i2c_reg_slave, 357 + .unreg_slave = em_i2c_unreg_slave, 358 358 }; 359 359 360 360 static int em_i2c_probe(struct platform_device *pdev)
+3 -3
drivers/i2c/busses/i2c-exynos5.c
··· 879 879 } 880 880 881 881 static const struct i2c_algorithm exynos5_i2c_algorithm = { 882 - .master_xfer = exynos5_i2c_xfer, 883 - .master_xfer_atomic = exynos5_i2c_xfer_atomic, 884 - .functionality = exynos5_i2c_func, 882 + .xfer = exynos5_i2c_xfer, 883 + .xfer_atomic = exynos5_i2c_xfer_atomic, 884 + .functionality = exynos5_i2c_func, 885 885 }; 886 886 887 887 static int exynos5_i2c_probe(struct platform_device *pdev)
+3 -3
drivers/i2c/busses/i2c-gxp.c
··· 184 184 #endif 185 185 186 186 static const struct i2c_algorithm gxp_i2c_algo = { 187 - .master_xfer = gxp_i2c_master_xfer, 187 + .xfer = gxp_i2c_master_xfer, 188 188 .functionality = gxp_i2c_func, 189 189 #if IS_ENABLED(CONFIG_I2C_SLAVE) 190 - .reg_slave = gxp_i2c_reg_slave, 191 - .unreg_slave = gxp_i2c_unreg_slave, 190 + .reg_slave = gxp_i2c_reg_slave, 191 + .unreg_slave = gxp_i2c_unreg_slave, 192 192 #endif 193 193 }; 194 194
+1 -1
drivers/i2c/busses/i2c-img-scb.c
··· 1143 1143 } 1144 1144 1145 1145 static const struct i2c_algorithm img_i2c_algo = { 1146 - .master_xfer = img_i2c_xfer, 1146 + .xfer = img_i2c_xfer, 1147 1147 .functionality = img_i2c_func, 1148 1148 }; 1149 1149
+4 -4
drivers/i2c/busses/i2c-imx-lpi2c.c
··· 1268 1268 } 1269 1269 1270 1270 static const struct i2c_algorithm lpi2c_imx_algo = { 1271 - .master_xfer = lpi2c_imx_xfer, 1272 - .functionality = lpi2c_imx_func, 1273 - .reg_target = lpi2c_imx_register_target, 1274 - .unreg_target = lpi2c_imx_unregister_target, 1271 + .xfer = lpi2c_imx_xfer, 1272 + .functionality = lpi2c_imx_func, 1273 + .reg_target = lpi2c_imx_register_target, 1274 + .unreg_target = lpi2c_imx_unregister_target, 1275 1275 }; 1276 1276 1277 1277 static const struct of_device_id lpi2c_imx_of_match[] = {
+4 -4
drivers/i2c/busses/i2c-imx.c
··· 1692 1692 } 1693 1693 1694 1694 static const struct i2c_algorithm i2c_imx_algo = { 1695 - .master_xfer = i2c_imx_xfer, 1696 - .master_xfer_atomic = i2c_imx_xfer_atomic, 1695 + .xfer = i2c_imx_xfer, 1696 + .xfer_atomic = i2c_imx_xfer_atomic, 1697 1697 .functionality = i2c_imx_func, 1698 - .reg_slave = i2c_imx_reg_slave, 1699 - .unreg_slave = i2c_imx_unreg_slave, 1698 + .reg_slave = i2c_imx_reg_slave, 1699 + .unreg_slave = i2c_imx_unreg_slave, 1700 1700 }; 1701 1701 1702 1702 static int i2c_imx_probe(struct platform_device *pdev)
+1 -1
drivers/i2c/busses/i2c-k1.c
··· 477 477 478 478 ret = spacemit_i2c_wait_bus_idle(i2c); 479 479 if (!ret) 480 - spacemit_i2c_xfer_msg(i2c); 480 + ret = spacemit_i2c_xfer_msg(i2c); 481 481 else if (ret < 0) 482 482 dev_dbg(i2c->dev, "i2c transfer error: %d\n", ret); 483 483 else
+1 -1
drivers/i2c/busses/i2c-keba.c
··· 500 500 } 501 501 502 502 static const struct i2c_algorithm ki2c_algo = { 503 - .master_xfer = ki2c_xfer, 503 + .xfer = ki2c_xfer, 504 504 .functionality = ki2c_func, 505 505 }; 506 506
+1 -1
drivers/i2c/busses/i2c-mchp-pci1xxxx.c
··· 1048 1048 } 1049 1049 1050 1050 static const struct i2c_algorithm pci1xxxx_i2c_algo = { 1051 - .master_xfer = pci1xxxx_i2c_xfer, 1051 + .xfer = pci1xxxx_i2c_xfer, 1052 1052 .functionality = pci1xxxx_i2c_get_funcs, 1053 1053 }; 1054 1054
+2 -2
drivers/i2c/busses/i2c-meson.c
··· 448 448 } 449 449 450 450 static const struct i2c_algorithm meson_i2c_algorithm = { 451 - .master_xfer = meson_i2c_xfer, 452 - .master_xfer_atomic = meson_i2c_xfer_atomic, 451 + .xfer = meson_i2c_xfer, 452 + .xfer_atomic = meson_i2c_xfer_atomic, 453 453 .functionality = meson_i2c_func, 454 454 }; 455 455
+1 -1
drivers/i2c/busses/i2c-microchip-corei2c.c
··· 526 526 } 527 527 528 528 static const struct i2c_algorithm mchp_corei2c_algo = { 529 - .master_xfer = mchp_corei2c_xfer, 529 + .xfer = mchp_corei2c_xfer, 530 530 .functionality = mchp_corei2c_func, 531 531 .smbus_xfer = mchp_corei2c_smbus_xfer, 532 532 };
+1 -1
drivers/i2c/busses/i2c-mt65xx.c
··· 1342 1342 } 1343 1343 1344 1344 static const struct i2c_algorithm mtk_i2c_algorithm = { 1345 - .master_xfer = mtk_i2c_transfer, 1345 + .xfer = mtk_i2c_transfer, 1346 1346 .functionality = mtk_i2c_functionality, 1347 1347 }; 1348 1348
+1 -1
drivers/i2c/busses/i2c-mxs.c
··· 687 687 } 688 688 689 689 static const struct i2c_algorithm mxs_i2c_algo = { 690 - .master_xfer = mxs_i2c_xfer, 690 + .xfer = mxs_i2c_xfer, 691 691 .functionality = mxs_i2c_func, 692 692 }; 693 693
+2 -2
drivers/i2c/busses/i2c-nomadik.c
··· 996 996 } 997 997 998 998 static const struct i2c_algorithm nmk_i2c_algo = { 999 - .master_xfer = nmk_i2c_xfer, 1000 - .functionality = nmk_i2c_functionality 999 + .xfer = nmk_i2c_xfer, 1000 + .functionality = nmk_i2c_functionality 1001 1001 }; 1002 1002 1003 1003 static void nmk_i2c_of_probe(struct device_node *np,
+3 -3
drivers/i2c/busses/i2c-npcm7xx.c
··· 2470 2470 }; 2471 2471 2472 2472 static const struct i2c_algorithm npcm_i2c_algo = { 2473 - .master_xfer = npcm_i2c_master_xfer, 2473 + .xfer = npcm_i2c_master_xfer, 2474 2474 .functionality = npcm_i2c_functionality, 2475 2475 #if IS_ENABLED(CONFIG_I2C_SLAVE) 2476 - .reg_slave = npcm_i2c_reg_slave, 2477 - .unreg_slave = npcm_i2c_unreg_slave, 2476 + .reg_slave = npcm_i2c_reg_slave, 2477 + .unreg_slave = npcm_i2c_unreg_slave, 2478 2478 #endif 2479 2479 }; 2480 2480
+3 -3
drivers/i2c/busses/i2c-omap.c
··· 1201 1201 } 1202 1202 1203 1203 static const struct i2c_algorithm omap_i2c_algo = { 1204 - .master_xfer = omap_i2c_xfer_irq, 1205 - .master_xfer_atomic = omap_i2c_xfer_polling, 1206 - .functionality = omap_i2c_func, 1204 + .xfer = omap_i2c_xfer_irq, 1205 + .xfer_atomic = omap_i2c_xfer_polling, 1206 + .functionality = omap_i2c_func, 1207 1207 }; 1208 1208 1209 1209 static const struct i2c_adapter_quirks omap_i2c_quirks = {
+1 -1
drivers/i2c/busses/i2c-pnx.c
··· 580 580 } 581 581 582 582 static const struct i2c_algorithm pnx_algorithm = { 583 - .master_xfer = i2c_pnx_xfer, 583 + .xfer = i2c_pnx_xfer, 584 584 .functionality = i2c_pnx_func, 585 585 }; 586 586
+8 -8
drivers/i2c/busses/i2c-pxa.c
··· 1154 1154 } 1155 1155 1156 1156 static const struct i2c_algorithm i2c_pxa_algorithm = { 1157 - .master_xfer = i2c_pxa_xfer, 1158 - .functionality = i2c_pxa_functionality, 1157 + .xfer = i2c_pxa_xfer, 1158 + .functionality = i2c_pxa_functionality, 1159 1159 #ifdef CONFIG_I2C_PXA_SLAVE 1160 - .reg_slave = i2c_pxa_slave_reg, 1161 - .unreg_slave = i2c_pxa_slave_unreg, 1160 + .reg_slave = i2c_pxa_slave_reg, 1161 + .unreg_slave = i2c_pxa_slave_unreg, 1162 1162 #endif 1163 1163 }; 1164 1164 ··· 1244 1244 } 1245 1245 1246 1246 static const struct i2c_algorithm i2c_pxa_pio_algorithm = { 1247 - .master_xfer = i2c_pxa_pio_xfer, 1248 - .functionality = i2c_pxa_functionality, 1247 + .xfer = i2c_pxa_pio_xfer, 1248 + .functionality = i2c_pxa_functionality, 1249 1249 #ifdef CONFIG_I2C_PXA_SLAVE 1250 - .reg_slave = i2c_pxa_slave_reg, 1251 - .unreg_slave = i2c_pxa_slave_unreg, 1250 + .reg_slave = i2c_pxa_slave_reg, 1251 + .unreg_slave = i2c_pxa_slave_unreg, 1252 1252 #endif 1253 1253 }; 1254 1254
+2 -2
drivers/i2c/busses/i2c-qcom-cci.c
··· 462 462 } 463 463 464 464 static const struct i2c_algorithm cci_algo = { 465 - .master_xfer = cci_xfer, 466 - .functionality = cci_func, 465 + .xfer = cci_xfer, 466 + .functionality = cci_func, 467 467 }; 468 468 469 469 static int cci_enable_clocks(struct cci *cci)
+2 -2
drivers/i2c/busses/i2c-qcom-geni.c
··· 727 727 } 728 728 729 729 static const struct i2c_algorithm geni_i2c_algo = { 730 - .master_xfer = geni_i2c_xfer, 731 - .functionality = geni_i2c_func, 730 + .xfer = geni_i2c_xfer, 731 + .functionality = geni_i2c_func, 732 732 }; 733 733 734 734 #ifdef CONFIG_ACPI
+4 -4
drivers/i2c/busses/i2c-qup.c
··· 1634 1634 } 1635 1635 1636 1636 static const struct i2c_algorithm qup_i2c_algo = { 1637 - .master_xfer = qup_i2c_xfer, 1638 - .functionality = qup_i2c_func, 1637 + .xfer = qup_i2c_xfer, 1638 + .functionality = qup_i2c_func, 1639 1639 }; 1640 1640 1641 1641 static const struct i2c_algorithm qup_i2c_algo_v2 = { 1642 - .master_xfer = qup_i2c_xfer_v2, 1643 - .functionality = qup_i2c_func, 1642 + .xfer = qup_i2c_xfer_v2, 1643 + .functionality = qup_i2c_func, 1644 1644 }; 1645 1645 1646 1646 /*
+5 -5
drivers/i2c/busses/i2c-rcar.c
··· 1084 1084 } 1085 1085 1086 1086 static const struct i2c_algorithm rcar_i2c_algo = { 1087 - .master_xfer = rcar_i2c_master_xfer, 1088 - .master_xfer_atomic = rcar_i2c_master_xfer_atomic, 1089 - .functionality = rcar_i2c_func, 1090 - .reg_slave = rcar_reg_slave, 1091 - .unreg_slave = rcar_unreg_slave, 1087 + .xfer = rcar_i2c_master_xfer, 1088 + .xfer_atomic = rcar_i2c_master_xfer_atomic, 1089 + .functionality = rcar_i2c_func, 1090 + .reg_slave = rcar_reg_slave, 1091 + .unreg_slave = rcar_unreg_slave, 1092 1092 }; 1093 1093 1094 1094 static const struct i2c_adapter_quirks rcar_i2c_quirks = {
+3 -3
drivers/i2c/busses/i2c-s3c2410.c
··· 800 800 801 801 /* i2c bus registration info */ 802 802 static const struct i2c_algorithm s3c24xx_i2c_algorithm = { 803 - .master_xfer = s3c24xx_i2c_xfer, 804 - .master_xfer_atomic = s3c24xx_i2c_xfer_atomic, 805 - .functionality = s3c24xx_i2c_func, 803 + .xfer = s3c24xx_i2c_xfer, 804 + .xfer_atomic = s3c24xx_i2c_xfer_atomic, 805 + .functionality = s3c24xx_i2c_func, 806 806 }; 807 807 808 808 /*
+2 -2
drivers/i2c/busses/i2c-sh7760.c
··· 379 379 } 380 380 381 381 static const struct i2c_algorithm sh7760_i2c_algo = { 382 - .master_xfer = sh7760_i2c_master_xfer, 383 - .functionality = sh7760_i2c_func, 382 + .xfer = sh7760_i2c_master_xfer, 383 + .functionality = sh7760_i2c_func, 384 384 }; 385 385 386 386 /* calculate CCR register setting for a desired scl clock. SCL clock is
+2 -2
drivers/i2c/busses/i2c-sh_mobile.c
··· 740 740 741 741 static const struct i2c_algorithm sh_mobile_i2c_algorithm = { 742 742 .functionality = sh_mobile_i2c_func, 743 - .master_xfer = sh_mobile_i2c_xfer, 744 - .master_xfer_atomic = sh_mobile_i2c_xfer_atomic, 743 + .xfer = sh_mobile_i2c_xfer, 744 + .xfer_atomic = sh_mobile_i2c_xfer_atomic, 745 745 }; 746 746 747 747 static const struct i2c_adapter_quirks sh_mobile_i2c_quirks = {
+2 -2
drivers/i2c/busses/i2c-stm32f7.c
··· 2151 2151 } 2152 2152 2153 2153 static const struct i2c_algorithm stm32f7_i2c_algo = { 2154 - .master_xfer = stm32f7_i2c_xfer, 2155 - .master_xfer_atomic = stm32f7_i2c_xfer_atomic, 2154 + .xfer = stm32f7_i2c_xfer, 2155 + .xfer_atomic = stm32f7_i2c_xfer_atomic, 2156 2156 .smbus_xfer = stm32f7_i2c_smbus_xfer, 2157 2157 .functionality = stm32f7_i2c_func, 2158 2158 .reg_slave = stm32f7_i2c_reg_slave,
+2 -2
drivers/i2c/busses/i2c-synquacer.c
··· 520 520 } 521 521 522 522 static const struct i2c_algorithm synquacer_i2c_algo = { 523 - .master_xfer = synquacer_i2c_xfer, 524 - .functionality = synquacer_i2c_functionality, 523 + .xfer = synquacer_i2c_xfer, 524 + .functionality = synquacer_i2c_functionality, 525 525 }; 526 526 527 527 static const struct i2c_adapter synquacer_i2c_ops = {
+3 -3
drivers/i2c/busses/i2c-tegra.c
··· 1440 1440 } 1441 1441 1442 1442 static const struct i2c_algorithm tegra_i2c_algo = { 1443 - .master_xfer = tegra_i2c_xfer, 1444 - .master_xfer_atomic = tegra_i2c_xfer_atomic, 1445 - .functionality = tegra_i2c_func, 1443 + .xfer = tegra_i2c_xfer, 1444 + .xfer_atomic = tegra_i2c_xfer_atomic, 1445 + .functionality = tegra_i2c_func, 1446 1446 }; 1447 1447 1448 1448 /* payload size is only 12 bit */
+2 -2
drivers/i2c/busses/i2c-xiic.c
··· 1398 1398 } 1399 1399 1400 1400 static const struct i2c_algorithm xiic_algorithm = { 1401 - .master_xfer = xiic_xfer, 1402 - .master_xfer_atomic = xiic_xfer_atomic, 1401 + .xfer = xiic_xfer, 1402 + .xfer_atomic = xiic_xfer_atomic, 1403 1403 .functionality = xiic_func, 1404 1404 }; 1405 1405
+1 -1
drivers/i2c/busses/i2c-xlp9xx.c
··· 452 452 } 453 453 454 454 static const struct i2c_algorithm xlp9xx_i2c_algo = { 455 - .master_xfer = xlp9xx_i2c_xfer, 455 + .xfer = xlp9xx_i2c_xfer, 456 456 .functionality = xlp9xx_i2c_functionality, 457 457 }; 458 458
+1 -1
drivers/i2c/i2c-atr.c
··· 738 738 atr->flags = flags; 739 739 740 740 if (parent->algo->master_xfer) 741 - atr->algo.master_xfer = i2c_atr_master_xfer; 741 + atr->algo.xfer = i2c_atr_master_xfer; 742 742 if (parent->algo->smbus_xfer) 743 743 atr->algo.smbus_xfer = i2c_atr_smbus_xfer; 744 744 atr->algo.functionality = i2c_atr_functionality;
+3 -3
drivers/i2c/i2c-mux.c
··· 293 293 */ 294 294 if (parent->algo->master_xfer) { 295 295 if (muxc->mux_locked) 296 - priv->algo.master_xfer = i2c_mux_master_xfer; 296 + priv->algo.xfer = i2c_mux_master_xfer; 297 297 else 298 - priv->algo.master_xfer = __i2c_mux_master_xfer; 298 + priv->algo.xfer = __i2c_mux_master_xfer; 299 299 } 300 300 if (parent->algo->master_xfer_atomic) 301 - priv->algo.master_xfer_atomic = priv->algo.master_xfer; 301 + priv->algo.xfer_atomic = priv->algo.master_xfer; 302 302 303 303 if (parent->algo->smbus_xfer) { 304 304 if (muxc->mux_locked)
+2 -2
drivers/i2c/muxes/i2c-demux-pinctrl.c
··· 95 95 priv->cur_chan = new_chan; 96 96 97 97 /* Now fill out current adapter structure. cur_chan must be up to date */ 98 - priv->algo.master_xfer = i2c_demux_master_xfer; 98 + priv->algo.xfer = i2c_demux_master_xfer; 99 99 if (adap->algo->master_xfer_atomic) 100 - priv->algo.master_xfer_atomic = i2c_demux_master_xfer; 100 + priv->algo.xfer_atomic = i2c_demux_master_xfer; 101 101 priv->algo.functionality = i2c_demux_functionality; 102 102 103 103 snprintf(priv->cur_adap.name, sizeof(priv->cur_adap.name),
+2 -18
drivers/irqchip/irq-ath79-misc.c
··· 15 15 #include <linux/of_address.h> 16 16 #include <linux/of_irq.h> 17 17 18 + #include <asm/time.h> 19 + 18 20 #define AR71XX_RESET_REG_MISC_INT_STATUS 0 19 21 #define AR71XX_RESET_REG_MISC_INT_ENABLE 4 20 22 ··· 179 177 180 178 IRQCHIP_DECLARE(ar7240_misc_intc, "qca,ar7240-misc-intc", 181 179 ar7240_misc_intc_of_init); 182 - 183 - void __init ath79_misc_irq_init(void __iomem *regs, int irq, 184 - int irq_base, bool is_ar71xx) 185 - { 186 - struct irq_domain *domain; 187 - 188 - if (is_ar71xx) 189 - ath79_misc_irq_chip.irq_mask_ack = ar71xx_misc_irq_mask; 190 - else 191 - ath79_misc_irq_chip.irq_ack = ar724x_misc_irq_ack; 192 - 193 - domain = irq_domain_create_legacy(NULL, ATH79_MISC_IRQ_COUNT, 194 - irq_base, 0, &misc_irq_domain_ops, regs); 195 - if (!domain) 196 - panic("Failed to create MISC irqdomain"); 197 - 198 - ath79_misc_intc_domain_init(domain, irq); 199 - }
-1
drivers/md/bcache/Kconfig
··· 5 5 select BLOCK_HOLDER_DEPRECATED if SYSFS 6 6 select CRC64 7 7 select CLOSURES 8 - select MIN_HEAP 9 8 help 10 9 Allows a block device to be used as cache for other devices; uses 11 10 a btree for indexing and the layout is optimized for SSDs.
+17 -40
drivers/md/bcache/alloc.c
··· 164 164 * prio is worth 1/8th of what INITIAL_PRIO is worth. 165 165 */ 166 166 167 - static inline unsigned int new_bucket_prio(struct cache *ca, struct bucket *b) 168 - { 169 - unsigned int min_prio = (INITIAL_PRIO - ca->set->min_prio) / 8; 167 + #define bucket_prio(b) \ 168 + ({ \ 169 + unsigned int min_prio = (INITIAL_PRIO - ca->set->min_prio) / 8; \ 170 + \ 171 + (b->prio - ca->set->min_prio + min_prio) * GC_SECTORS_USED(b); \ 172 + }) 170 173 171 - return (b->prio - ca->set->min_prio + min_prio) * GC_SECTORS_USED(b); 172 - } 173 - 174 - static inline bool new_bucket_max_cmp(const void *l, const void *r, void *args) 175 - { 176 - struct bucket **lhs = (struct bucket **)l; 177 - struct bucket **rhs = (struct bucket **)r; 178 - struct cache *ca = args; 179 - 180 - return new_bucket_prio(ca, *lhs) > new_bucket_prio(ca, *rhs); 181 - } 182 - 183 - static inline bool new_bucket_min_cmp(const void *l, const void *r, void *args) 184 - { 185 - struct bucket **lhs = (struct bucket **)l; 186 - struct bucket **rhs = (struct bucket **)r; 187 - struct cache *ca = args; 188 - 189 - return new_bucket_prio(ca, *lhs) < new_bucket_prio(ca, *rhs); 190 - } 174 + #define bucket_max_cmp(l, r) (bucket_prio(l) < bucket_prio(r)) 175 + #define bucket_min_cmp(l, r) (bucket_prio(l) > bucket_prio(r)) 191 176 192 177 static void invalidate_buckets_lru(struct cache *ca) 193 178 { 194 179 struct bucket *b; 195 - const struct min_heap_callbacks bucket_max_cmp_callback = { 196 - .less = new_bucket_max_cmp, 197 - .swp = NULL, 198 - }; 199 - const struct min_heap_callbacks bucket_min_cmp_callback = { 200 - .less = new_bucket_min_cmp, 201 - .swp = NULL, 202 - }; 180 + ssize_t i; 203 181 204 - ca->heap.nr = 0; 182 + ca->heap.used = 0; 205 183 206 184 for_each_bucket(b, ca) { 207 185 if (!bch_can_invalidate_bucket(ca, b)) 208 186 continue; 209 187 210 - if (!min_heap_full(&ca->heap)) 211 - min_heap_push(&ca->heap, &b, &bucket_max_cmp_callback, ca); 212 - else if (!new_bucket_max_cmp(&b, 
min_heap_peek(&ca->heap), ca)) { 188 + if (!heap_full(&ca->heap)) 189 + heap_add(&ca->heap, b, bucket_max_cmp); 190 + else if (bucket_max_cmp(b, heap_peek(&ca->heap))) { 213 191 ca->heap.data[0] = b; 214 - min_heap_sift_down(&ca->heap, 0, &bucket_max_cmp_callback, ca); 192 + heap_sift(&ca->heap, 0, bucket_max_cmp); 215 193 } 216 194 } 217 195 218 - min_heapify_all(&ca->heap, &bucket_min_cmp_callback, ca); 196 + for (i = ca->heap.used / 2 - 1; i >= 0; --i) 197 + heap_sift(&ca->heap, i, bucket_min_cmp); 219 198 220 199 while (!fifo_full(&ca->free_inc)) { 221 - if (!ca->heap.nr) { 200 + if (!heap_pop(&ca->heap, b, bucket_min_cmp)) { 222 201 /* 223 202 * We don't want to be calling invalidate_buckets() 224 203 * multiple times when it can't do anything ··· 206 227 wake_up_gc(ca->set); 207 228 return; 208 229 } 209 - b = min_heap_peek(&ca->heap)[0]; 210 - min_heap_pop(&ca->heap, &bucket_min_cmp_callback, ca); 211 230 212 231 bch_invalidate_one_bucket(ca, b); 213 232 }
+1 -1
drivers/md/bcache/bcache.h
··· 458 458 /* Allocation stuff: */ 459 459 struct bucket *buckets; 460 460 461 - DEFINE_MIN_HEAP(struct bucket *, cache_heap) heap; 461 + DECLARE_HEAP(struct bucket *, heap); 462 462 463 463 /* 464 464 * If nonzero, we know we aren't going to find any buckets to invalidate
+45 -71
drivers/md/bcache/bset.c
··· 54 54 int __bch_count_data(struct btree_keys *b) 55 55 { 56 56 unsigned int ret = 0; 57 - struct btree_iter iter; 57 + struct btree_iter_stack iter; 58 58 struct bkey *k; 59 - 60 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 61 59 62 60 if (b->ops->is_extents) 63 61 for_each_key(b, k, &iter) ··· 67 69 { 68 70 va_list args; 69 71 struct bkey *k, *p = NULL; 70 - struct btree_iter iter; 72 + struct btree_iter_stack iter; 71 73 const char *err; 72 - 73 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 74 74 75 75 for_each_key(b, k, &iter) { 76 76 if (b->ops->is_extents) { ··· 110 114 111 115 static void bch_btree_iter_next_check(struct btree_iter *iter) 112 116 { 113 - struct bkey *k = iter->heap.data->k, *next = bkey_next(k); 117 + struct bkey *k = iter->data->k, *next = bkey_next(k); 114 118 115 - if (next < iter->heap.data->end && 119 + if (next < iter->data->end && 116 120 bkey_cmp(k, iter->b->ops->is_extents ? 117 121 &START_KEY(next) : next) > 0) { 118 122 bch_dump_bucket(iter->b); ··· 879 883 unsigned int status = BTREE_INSERT_STATUS_NO_INSERT; 880 884 struct bset *i = bset_tree_last(b)->data; 881 885 struct bkey *m, *prev = NULL; 882 - struct btree_iter iter; 886 + struct btree_iter_stack iter; 883 887 struct bkey preceding_key_on_stack = ZERO_KEY; 884 888 struct bkey *preceding_key_p = &preceding_key_on_stack; 885 889 886 890 BUG_ON(b->ops->is_extents && !KEY_SIZE(k)); 887 - 888 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 889 891 890 892 /* 891 893 * If k has preceding key, preceding_key_p will be set to address ··· 895 901 else 896 902 preceding_key(k, &preceding_key_p); 897 903 898 - m = bch_btree_iter_init(b, &iter, preceding_key_p); 904 + m = bch_btree_iter_stack_init(b, &iter, preceding_key_p); 899 905 900 - if (b->ops->insert_fixup(b, k, &iter, replace_key)) 906 + if (b->ops->insert_fixup(b, k, &iter.iter, replace_key)) 901 907 return status; 902 908 903 909 status = BTREE_INSERT_STATUS_INSERT; ··· 1077 1083 1078 1084 /* Btree iterator */ 1079 1085 
1080 - typedef bool (new_btree_iter_cmp_fn)(const void *, const void *, void *); 1086 + typedef bool (btree_iter_cmp_fn)(struct btree_iter_set, 1087 + struct btree_iter_set); 1081 1088 1082 - static inline bool new_btree_iter_cmp(const void *l, const void *r, void __always_unused *args) 1089 + static inline bool btree_iter_cmp(struct btree_iter_set l, 1090 + struct btree_iter_set r) 1083 1091 { 1084 - const struct btree_iter_set *_l = l; 1085 - const struct btree_iter_set *_r = r; 1086 - 1087 - return bkey_cmp(_l->k, _r->k) <= 0; 1092 + return bkey_cmp(l.k, r.k) > 0; 1088 1093 } 1089 1094 1090 1095 static inline bool btree_iter_end(struct btree_iter *iter) 1091 1096 { 1092 - return !iter->heap.nr; 1097 + return !iter->used; 1093 1098 } 1094 1099 1095 1100 void bch_btree_iter_push(struct btree_iter *iter, struct bkey *k, 1096 1101 struct bkey *end) 1097 1102 { 1098 - const struct min_heap_callbacks callbacks = { 1099 - .less = new_btree_iter_cmp, 1100 - .swp = NULL, 1101 - }; 1102 - 1103 1103 if (k != end) 1104 - BUG_ON(!min_heap_push(&iter->heap, 1105 - &((struct btree_iter_set) { k, end }), 1106 - &callbacks, 1107 - NULL)); 1104 + BUG_ON(!heap_add(iter, 1105 + ((struct btree_iter_set) { k, end }), 1106 + btree_iter_cmp)); 1108 1107 } 1109 1108 1110 - static struct bkey *__bch_btree_iter_init(struct btree_keys *b, 1111 - struct btree_iter *iter, 1112 - struct bkey *search, 1113 - struct bset_tree *start) 1109 + static struct bkey *__bch_btree_iter_stack_init(struct btree_keys *b, 1110 + struct btree_iter_stack *iter, 1111 + struct bkey *search, 1112 + struct bset_tree *start) 1114 1113 { 1115 1114 struct bkey *ret = NULL; 1116 1115 1117 - iter->heap.size = ARRAY_SIZE(iter->heap.preallocated); 1118 - iter->heap.nr = 0; 1116 + iter->iter.size = ARRAY_SIZE(iter->stack_data); 1117 + iter->iter.used = 0; 1119 1118 1120 1119 #ifdef CONFIG_BCACHE_DEBUG 1121 - iter->b = b; 1120 + iter->iter.b = b; 1122 1121 #endif 1123 1122 1124 1123 for (; start <= bset_tree_last(b); 
start++) { 1125 1124 ret = bch_bset_search(b, start, search); 1126 - bch_btree_iter_push(iter, ret, bset_bkey_last(start->data)); 1125 + bch_btree_iter_push(&iter->iter, ret, bset_bkey_last(start->data)); 1127 1126 } 1128 1127 1129 1128 return ret; 1130 1129 } 1131 1130 1132 - struct bkey *bch_btree_iter_init(struct btree_keys *b, 1133 - struct btree_iter *iter, 1131 + struct bkey *bch_btree_iter_stack_init(struct btree_keys *b, 1132 + struct btree_iter_stack *iter, 1134 1133 struct bkey *search) 1135 1134 { 1136 - return __bch_btree_iter_init(b, iter, search, b->set); 1135 + return __bch_btree_iter_stack_init(b, iter, search, b->set); 1137 1136 } 1138 1137 1139 1138 static inline struct bkey *__bch_btree_iter_next(struct btree_iter *iter, 1140 - new_btree_iter_cmp_fn *cmp) 1139 + btree_iter_cmp_fn *cmp) 1141 1140 { 1142 1141 struct btree_iter_set b __maybe_unused; 1143 1142 struct bkey *ret = NULL; 1144 - const struct min_heap_callbacks callbacks = { 1145 - .less = cmp, 1146 - .swp = NULL, 1147 - }; 1148 1143 1149 1144 if (!btree_iter_end(iter)) { 1150 1145 bch_btree_iter_next_check(iter); 1151 1146 1152 - ret = iter->heap.data->k; 1153 - iter->heap.data->k = bkey_next(iter->heap.data->k); 1147 + ret = iter->data->k; 1148 + iter->data->k = bkey_next(iter->data->k); 1154 1149 1155 - if (iter->heap.data->k > iter->heap.data->end) { 1150 + if (iter->data->k > iter->data->end) { 1156 1151 WARN_ONCE(1, "bset was corrupt!\n"); 1157 - iter->heap.data->k = iter->heap.data->end; 1152 + iter->data->k = iter->data->end; 1158 1153 } 1159 1154 1160 - if (iter->heap.data->k == iter->heap.data->end) { 1161 - if (iter->heap.nr) { 1162 - b = min_heap_peek(&iter->heap)[0]; 1163 - min_heap_pop(&iter->heap, &callbacks, NULL); 1164 - } 1165 - } 1155 + if (iter->data->k == iter->data->end) 1156 + heap_pop(iter, b, cmp); 1166 1157 else 1167 - min_heap_sift_down(&iter->heap, 0, &callbacks, NULL); 1158 + heap_sift(iter, 0, cmp); 1168 1159 } 1169 1160 1170 1161 return ret; ··· 1157 1178 
1158 1179 struct bkey *bch_btree_iter_next(struct btree_iter *iter) 1159 1180 { 1160 - return __bch_btree_iter_next(iter, new_btree_iter_cmp); 1181 + return __bch_btree_iter_next(iter, btree_iter_cmp); 1161 1182 1162 1183 } 1163 1184 ··· 1195 1216 struct btree_iter *iter, 1196 1217 bool fixup, bool remove_stale) 1197 1218 { 1219 + int i; 1198 1220 struct bkey *k, *last = NULL; 1199 1221 BKEY_PADDED(k) tmp; 1200 1222 bool (*bad)(struct btree_keys *, const struct bkey *) = remove_stale 1201 1223 ? bch_ptr_bad 1202 1224 : bch_ptr_invalid; 1203 - const struct min_heap_callbacks callbacks = { 1204 - .less = b->ops->sort_cmp, 1205 - .swp = NULL, 1206 - }; 1207 1225 1208 1226 /* Heapify the iterator, using our comparison function */ 1209 - min_heapify_all(&iter->heap, &callbacks, NULL); 1227 + for (i = iter->used / 2 - 1; i >= 0; --i) 1228 + heap_sift(iter, i, b->ops->sort_cmp); 1210 1229 1211 1230 while (!btree_iter_end(iter)) { 1212 1231 if (b->ops->sort_fixup && fixup) ··· 1293 1316 struct bset_sort_state *state) 1294 1317 { 1295 1318 size_t order = b->page_order, keys = 0; 1296 - struct btree_iter iter; 1319 + struct btree_iter_stack iter; 1297 1320 int oldsize = bch_count_data(b); 1298 1321 1299 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 1300 - __bch_btree_iter_init(b, &iter, NULL, &b->set[start]); 1322 + __bch_btree_iter_stack_init(b, &iter, NULL, &b->set[start]); 1301 1323 1302 1324 if (start) { 1303 1325 unsigned int i; ··· 1307 1331 order = get_order(__set_bytes(b->set->data, keys)); 1308 1332 } 1309 1333 1310 - __btree_sort(b, &iter, start, order, false, state); 1334 + __btree_sort(b, &iter.iter, start, order, false, state); 1311 1335 1312 1336 EBUG_ON(oldsize >= 0 && bch_count_data(b) != oldsize); 1313 1337 } ··· 1323 1347 struct bset_sort_state *state) 1324 1348 { 1325 1349 uint64_t start_time = local_clock(); 1326 - struct btree_iter iter; 1350 + struct btree_iter_stack iter; 1327 1351 1328 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 1352 + 
bch_btree_iter_stack_init(b, &iter, NULL); 1329 1353 1330 - bch_btree_iter_init(b, &iter, NULL); 1331 - 1332 - btree_mergesort(b, new->set->data, &iter, false, true); 1354 + btree_mergesort(b, new->set->data, &iter.iter, false, true); 1333 1355 1334 1356 bch_time_stats_update(&state->time, start_time); 1335 1357
+23 -17
drivers/md/bcache/bset.h
··· 187 187 }; 188 188 189 189 struct btree_keys_ops { 190 - bool (*sort_cmp)(const void *l, 191 - const void *r, 192 - void *args); 190 + bool (*sort_cmp)(struct btree_iter_set l, 191 + struct btree_iter_set r); 193 192 struct bkey *(*sort_fixup)(struct btree_iter *iter, 194 193 struct bkey *tmp); 195 194 bool (*insert_fixup)(struct btree_keys *b, ··· 312 313 BTREE_INSERT_STATUS_FRONT_MERGE, 313 314 }; 314 315 315 - struct btree_iter_set { 316 - struct bkey *k, *end; 317 - }; 318 - 319 316 /* Btree key iteration */ 320 317 321 318 struct btree_iter { 319 + size_t size, used; 322 320 #ifdef CONFIG_BCACHE_DEBUG 323 321 struct btree_keys *b; 324 322 #endif 325 - MIN_HEAP_PREALLOCATED(struct btree_iter_set, btree_iter_heap, MAX_BSETS) heap; 323 + struct btree_iter_set { 324 + struct bkey *k, *end; 325 + } data[]; 326 + }; 327 + 328 + /* Fixed-size btree_iter that can be allocated on the stack */ 329 + 330 + struct btree_iter_stack { 331 + struct btree_iter iter; 332 + struct btree_iter_set stack_data[MAX_BSETS]; 326 333 }; 327 334 328 335 typedef bool (*ptr_filter_fn)(struct btree_keys *b, const struct bkey *k); ··· 340 335 341 336 void bch_btree_iter_push(struct btree_iter *iter, struct bkey *k, 342 337 struct bkey *end); 343 - struct bkey *bch_btree_iter_init(struct btree_keys *b, 344 - struct btree_iter *iter, 345 - struct bkey *search); 338 + struct bkey *bch_btree_iter_stack_init(struct btree_keys *b, 339 + struct btree_iter_stack *iter, 340 + struct bkey *search); 346 341 347 342 struct bkey *__bch_bset_search(struct btree_keys *b, struct bset_tree *t, 348 343 const struct bkey *search); ··· 357 352 return search ? 
__bch_bset_search(b, t, search) : t->data->start; 358 353 } 359 354 360 - #define for_each_key_filter(b, k, iter, filter) \ 361 - for (bch_btree_iter_init((b), (iter), NULL); \ 362 - ((k) = bch_btree_iter_next_filter((iter), (b), filter));) 355 + #define for_each_key_filter(b, k, stack_iter, filter) \ 356 + for (bch_btree_iter_stack_init((b), (stack_iter), NULL); \ 357 + ((k) = bch_btree_iter_next_filter(&((stack_iter)->iter), (b), \ 358 + filter));) 363 359 364 - #define for_each_key(b, k, iter) \ 365 - for (bch_btree_iter_init((b), (iter), NULL); \ 366 - ((k) = bch_btree_iter_next(iter));) 360 + #define for_each_key(b, k, stack_iter) \ 361 + for (bch_btree_iter_stack_init((b), (stack_iter), NULL); \ 362 + ((k) = bch_btree_iter_next(&((stack_iter)->iter)));) 367 363 368 364 /* Sorting */ 369 365
+29 -40
drivers/md/bcache/btree.c
··· 148 148 { 149 149 const char *err = "bad btree header"; 150 150 struct bset *i = btree_bset_first(b); 151 - struct btree_iter iter; 151 + struct btree_iter *iter; 152 152 153 153 /* 154 154 * c->fill_iter can allocate an iterator with more memory space 155 155 * than static MAX_BSETS. 156 156 * See the comment arount cache_set->fill_iter. 157 157 */ 158 - iter.heap.data = mempool_alloc(&b->c->fill_iter, GFP_NOIO); 159 - iter.heap.size = b->c->cache->sb.bucket_size / b->c->cache->sb.block_size; 160 - iter.heap.nr = 0; 158 + iter = mempool_alloc(&b->c->fill_iter, GFP_NOIO); 159 + iter->size = b->c->cache->sb.bucket_size / b->c->cache->sb.block_size; 160 + iter->used = 0; 161 161 162 162 #ifdef CONFIG_BCACHE_DEBUG 163 - iter.b = &b->keys; 163 + iter->b = &b->keys; 164 164 #endif 165 165 166 166 if (!i->seq) ··· 198 198 if (i != b->keys.set[0].data && !i->keys) 199 199 goto err; 200 200 201 - bch_btree_iter_push(&iter, i->start, bset_bkey_last(i)); 201 + bch_btree_iter_push(iter, i->start, bset_bkey_last(i)); 202 202 203 203 b->written += set_blocks(i, block_bytes(b->c->cache)); 204 204 } ··· 210 210 if (i->seq == b->keys.set[0].data->seq) 211 211 goto err; 212 212 213 - bch_btree_sort_and_fix_extents(&b->keys, &iter, &b->c->sort); 213 + bch_btree_sort_and_fix_extents(&b->keys, iter, &b->c->sort); 214 214 215 215 i = b->keys.set[0].data; 216 216 err = "short btree key"; ··· 222 222 bch_bset_init_next(&b->keys, write_block(b), 223 223 bset_magic(&b->c->cache->sb)); 224 224 out: 225 - mempool_free(iter.heap.data, &b->c->fill_iter); 225 + mempool_free(iter, &b->c->fill_iter); 226 226 return; 227 227 err: 228 228 set_btree_node_io_error(b); ··· 1306 1306 uint8_t stale = 0; 1307 1307 unsigned int keys = 0, good_keys = 0; 1308 1308 struct bkey *k; 1309 - struct btree_iter iter; 1309 + struct btree_iter_stack iter; 1310 1310 struct bset_tree *t; 1311 - 1312 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 1313 1311 1314 1312 gc->nodes++; 1315 1313 ··· 1567 1569 static 
unsigned int btree_gc_count_keys(struct btree *b) 1568 1570 { 1569 1571 struct bkey *k; 1570 - struct btree_iter iter; 1572 + struct btree_iter_stack iter; 1571 1573 unsigned int ret = 0; 1572 - 1573 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 1574 1574 1575 1575 for_each_key_filter(&b->keys, k, &iter, bch_ptr_bad) 1576 1576 ret += bkey_u64s(k); ··· 1608 1612 int ret = 0; 1609 1613 bool should_rewrite; 1610 1614 struct bkey *k; 1611 - struct btree_iter iter; 1615 + struct btree_iter_stack iter; 1612 1616 struct gc_merge_info r[GC_MERGE_NODES]; 1613 1617 struct gc_merge_info *i, *last = r + ARRAY_SIZE(r) - 1; 1614 1618 1615 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 1616 - bch_btree_iter_init(&b->keys, &iter, &b->c->gc_done); 1619 + bch_btree_iter_stack_init(&b->keys, &iter, &b->c->gc_done); 1617 1620 1618 1621 for (i = r; i < r + ARRAY_SIZE(r); i++) 1619 1622 i->b = ERR_PTR(-EINTR); 1620 1623 1621 1624 while (1) { 1622 - k = bch_btree_iter_next_filter(&iter, &b->keys, bch_ptr_bad); 1625 + k = bch_btree_iter_next_filter(&iter.iter, &b->keys, 1626 + bch_ptr_bad); 1623 1627 if (k) { 1624 1628 r->b = bch_btree_node_get(b->c, op, k, b->level - 1, 1625 1629 true, b); ··· 1914 1918 { 1915 1919 int ret = 0; 1916 1920 struct bkey *k, *p = NULL; 1917 - struct btree_iter iter; 1918 - 1919 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 1921 + struct btree_iter_stack iter; 1920 1922 1921 1923 for_each_key_filter(&b->keys, k, &iter, bch_ptr_invalid) 1922 1924 bch_initial_mark_key(b->c, b->level, k); ··· 1922 1928 bch_initial_mark_key(b->c, b->level + 1, &b->key); 1923 1929 1924 1930 if (b->level) { 1925 - bch_btree_iter_init(&b->keys, &iter, NULL); 1931 + bch_btree_iter_stack_init(&b->keys, &iter, NULL); 1926 1932 1927 1933 do { 1928 - k = bch_btree_iter_next_filter(&iter, &b->keys, 1934 + k = bch_btree_iter_next_filter(&iter.iter, &b->keys, 1929 1935 bch_ptr_bad); 1930 1936 if (k) { 1931 1937 btree_node_prefetch(b, k); ··· 1953 1959 struct btree_check_info *info = arg; 1954 
1960 struct btree_check_state *check_state = info->state; 1955 1961 struct cache_set *c = check_state->c; 1956 - struct btree_iter iter; 1962 + struct btree_iter_stack iter; 1957 1963 struct bkey *k, *p; 1958 1964 int cur_idx, prev_idx, skip_nr; 1959 1965 ··· 1961 1967 cur_idx = prev_idx = 0; 1962 1968 ret = 0; 1963 1969 1964 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 1965 - 1966 1970 /* root node keys are checked before thread created */ 1967 - bch_btree_iter_init(&c->root->keys, &iter, NULL); 1968 - k = bch_btree_iter_next_filter(&iter, &c->root->keys, bch_ptr_bad); 1971 + bch_btree_iter_stack_init(&c->root->keys, &iter, NULL); 1972 + k = bch_btree_iter_next_filter(&iter.iter, &c->root->keys, bch_ptr_bad); 1969 1973 BUG_ON(!k); 1970 1974 1971 1975 p = k; ··· 1981 1989 skip_nr = cur_idx - prev_idx; 1982 1990 1983 1991 while (skip_nr) { 1984 - k = bch_btree_iter_next_filter(&iter, 1992 + k = bch_btree_iter_next_filter(&iter.iter, 1985 1993 &c->root->keys, 1986 1994 bch_ptr_bad); 1987 1995 if (k) ··· 2054 2062 int ret = 0; 2055 2063 int i; 2056 2064 struct bkey *k = NULL; 2057 - struct btree_iter iter; 2065 + struct btree_iter_stack iter; 2058 2066 struct btree_check_state check_state; 2059 - 2060 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 2061 2067 2062 2068 /* check and mark root node keys */ 2063 2069 for_each_key_filter(&c->root->keys, k, &iter, bch_ptr_invalid) ··· 2550 2560 2551 2561 if (b->level) { 2552 2562 struct bkey *k; 2553 - struct btree_iter iter; 2563 + struct btree_iter_stack iter; 2554 2564 2555 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 2556 - bch_btree_iter_init(&b->keys, &iter, from); 2565 + bch_btree_iter_stack_init(&b->keys, &iter, from); 2557 2566 2558 - while ((k = bch_btree_iter_next_filter(&iter, &b->keys, 2567 + while ((k = bch_btree_iter_next_filter(&iter.iter, &b->keys, 2559 2568 bch_ptr_bad))) { 2560 2569 ret = bcache_btree(map_nodes_recurse, k, b, 2561 2570 op, from, fn, flags); ··· 2583 2594 { 2584 2595 int ret = 
MAP_CONTINUE; 2585 2596 struct bkey *k; 2586 - struct btree_iter iter; 2597 + struct btree_iter_stack iter; 2587 2598 2588 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 2589 - bch_btree_iter_init(&b->keys, &iter, from); 2599 + bch_btree_iter_stack_init(&b->keys, &iter, from); 2590 2600 2591 - while ((k = bch_btree_iter_next_filter(&iter, &b->keys, bch_ptr_bad))) { 2601 + while ((k = bch_btree_iter_next_filter(&iter.iter, &b->keys, 2602 + bch_ptr_bad))) { 2592 2603 ret = !b->level 2593 2604 ? fn(op, b, k) 2594 2605 : bcache_btree(map_keys_recurse, k,
+18 -25
drivers/md/bcache/extents.c
··· 33 33 i->k = bkey_next(i->k); 34 34 35 35 if (i->k == i->end) 36 - *i = iter->heap.data[--iter->heap.nr]; 36 + *i = iter->data[--iter->used]; 37 37 } 38 38 39 - static bool new_bch_key_sort_cmp(const void *l, const void *r, void *args) 39 + static bool bch_key_sort_cmp(struct btree_iter_set l, 40 + struct btree_iter_set r) 40 41 { 41 - struct btree_iter_set *_l = (struct btree_iter_set *)l; 42 - struct btree_iter_set *_r = (struct btree_iter_set *)r; 43 - int64_t c = bkey_cmp(_l->k, _r->k); 42 + int64_t c = bkey_cmp(l.k, r.k); 44 43 45 - return !(c ? c > 0 : _l->k < _r->k); 44 + return c ? c > 0 : l.k < r.k; 46 45 } 47 46 48 47 static bool __ptr_invalid(struct cache_set *c, const struct bkey *k) ··· 238 239 } 239 240 240 241 const struct btree_keys_ops bch_btree_keys_ops = { 241 - .sort_cmp = new_bch_key_sort_cmp, 242 + .sort_cmp = bch_key_sort_cmp, 242 243 .insert_fixup = bch_btree_ptr_insert_fixup, 243 244 .key_invalid = bch_btree_ptr_invalid, 244 245 .key_bad = bch_btree_ptr_bad, ··· 255 256 * Necessary for btree_sort_fixup() - if there are multiple keys that compare 256 257 * equal in different sets, we have to process them newest to oldest. 257 258 */ 258 - 259 - static bool new_bch_extent_sort_cmp(const void *l, const void *r, void __always_unused *args) 259 + static bool bch_extent_sort_cmp(struct btree_iter_set l, 260 + struct btree_iter_set r) 260 261 { 261 - struct btree_iter_set *_l = (struct btree_iter_set *)l; 262 - struct btree_iter_set *_r = (struct btree_iter_set *)r; 263 - int64_t c = bkey_cmp(&START_KEY(_l->k), &START_KEY(_r->k)); 262 + int64_t c = bkey_cmp(&START_KEY(l.k), &START_KEY(r.k)); 264 263 265 - return !(c ? c > 0 : _l->k < _r->k); 264 + return c ? 
c > 0 : l.k < r.k; 266 265 } 267 266 268 267 static struct bkey *bch_extent_sort_fixup(struct btree_iter *iter, 269 268 struct bkey *tmp) 270 269 { 271 - const struct min_heap_callbacks callbacks = { 272 - .less = new_bch_extent_sort_cmp, 273 - .swp = NULL, 274 - }; 275 - while (iter->heap.nr > 1) { 276 - struct btree_iter_set *top = iter->heap.data, *i = top + 1; 270 + while (iter->used > 1) { 271 + struct btree_iter_set *top = iter->data, *i = top + 1; 277 272 278 - if (iter->heap.nr > 2 && 279 - !new_bch_extent_sort_cmp(&i[0], &i[1], NULL)) 273 + if (iter->used > 2 && 274 + bch_extent_sort_cmp(i[0], i[1])) 280 275 i++; 281 276 282 277 if (bkey_cmp(top->k, &START_KEY(i->k)) <= 0) ··· 278 285 279 286 if (!KEY_SIZE(i->k)) { 280 287 sort_key_next(iter, i); 281 - min_heap_sift_down(&iter->heap, i - top, &callbacks, NULL); 288 + heap_sift(iter, i - top, bch_extent_sort_cmp); 282 289 continue; 283 290 } 284 291 ··· 288 295 else 289 296 bch_cut_front(top->k, i->k); 290 297 291 - min_heap_sift_down(&iter->heap, i - top, &callbacks, NULL); 298 + heap_sift(iter, i - top, bch_extent_sort_cmp); 292 299 } else { 293 300 /* can't happen because of comparison func */ 294 301 BUG_ON(!bkey_cmp(&START_KEY(top->k), &START_KEY(i->k))); ··· 298 305 299 306 bch_cut_back(&START_KEY(i->k), tmp); 300 307 bch_cut_front(i->k, top->k); 301 - min_heap_sift_down(&iter->heap, 0, &callbacks, NULL); 308 + heap_sift(iter, 0, bch_extent_sort_cmp); 302 309 303 310 return tmp; 304 311 } else { ··· 618 625 } 619 626 620 627 const struct btree_keys_ops bch_extent_keys_ops = { 621 - .sort_cmp = new_bch_extent_sort_cmp, 628 + .sort_cmp = bch_extent_sort_cmp, 622 629 .sort_fixup = bch_extent_sort_fixup, 623 630 .insert_fixup = bch_extent_insert_fixup, 624 631 .key_invalid = bch_extent_invalid,
+10 -23
drivers/md/bcache/movinggc.c
··· 182 182 closure_sync(&cl); 183 183 } 184 184 185 - static bool new_bucket_cmp(const void *l, const void *r, void __always_unused *args) 185 + static bool bucket_cmp(struct bucket *l, struct bucket *r) 186 186 { 187 - struct bucket **_l = (struct bucket **)l; 188 - struct bucket **_r = (struct bucket **)r; 189 - 190 - return GC_SECTORS_USED(*_l) >= GC_SECTORS_USED(*_r); 187 + return GC_SECTORS_USED(l) < GC_SECTORS_USED(r); 191 188 } 192 189 193 190 static unsigned int bucket_heap_top(struct cache *ca) 194 191 { 195 192 struct bucket *b; 196 193 197 - return (b = min_heap_peek(&ca->heap)[0]) ? GC_SECTORS_USED(b) : 0; 194 + return (b = heap_peek(&ca->heap)) ? GC_SECTORS_USED(b) : 0; 198 195 } 199 196 200 197 void bch_moving_gc(struct cache_set *c) ··· 199 202 struct cache *ca = c->cache; 200 203 struct bucket *b; 201 204 unsigned long sectors_to_move, reserve_sectors; 202 - const struct min_heap_callbacks callbacks = { 203 - .less = new_bucket_cmp, 204 - .swp = NULL, 205 - }; 206 205 207 206 if (!c->copy_gc_enabled) 208 207 return; ··· 209 216 reserve_sectors = ca->sb.bucket_size * 210 217 fifo_used(&ca->free[RESERVE_MOVINGGC]); 211 218 212 - ca->heap.nr = 0; 219 + ca->heap.used = 0; 213 220 214 221 for_each_bucket(b, ca) { 215 222 if (GC_MARK(b) == GC_MARK_METADATA || ··· 218 225 atomic_read(&b->pin)) 219 226 continue; 220 227 221 - if (!min_heap_full(&ca->heap)) { 228 + if (!heap_full(&ca->heap)) { 222 229 sectors_to_move += GC_SECTORS_USED(b); 223 - min_heap_push(&ca->heap, &b, &callbacks, NULL); 224 - } else if (!new_bucket_cmp(&b, min_heap_peek(&ca->heap), ca)) { 230 + heap_add(&ca->heap, b, bucket_cmp); 231 + } else if (bucket_cmp(b, heap_peek(&ca->heap))) { 225 232 sectors_to_move -= bucket_heap_top(ca); 226 233 sectors_to_move += GC_SECTORS_USED(b); 227 234 228 235 ca->heap.data[0] = b; 229 - min_heap_sift_down(&ca->heap, 0, &callbacks, NULL); 236 + heap_sift(&ca->heap, 0, bucket_cmp); 230 237 } 231 238 } 232 239 233 240 while (sectors_to_move > 
reserve_sectors) { 234 - if (ca->heap.nr) { 235 - b = min_heap_peek(&ca->heap)[0]; 236 - min_heap_pop(&ca->heap, &callbacks, NULL); 237 - } 241 + heap_pop(&ca->heap, b, bucket_cmp); 238 242 sectors_to_move -= GC_SECTORS_USED(b); 239 243 } 240 244 241 - while (ca->heap.nr) { 242 - b = min_heap_peek(&ca->heap)[0]; 243 - min_heap_pop(&ca->heap, &callbacks, NULL); 245 + while (heap_pop(&ca->heap, b, bucket_cmp)) 244 246 SET_GC_MOVE(b, 1); 245 - } 246 247 247 248 mutex_unlock(&c->bucket_lock); 248 249
+2 -1
drivers/md/bcache/super.c
··· 1912 1912 INIT_LIST_HEAD(&c->btree_cache_freed); 1913 1913 INIT_LIST_HEAD(&c->data_buckets); 1914 1914 1915 - iter_size = ((meta_bucket_pages(sb) * PAGE_SECTORS) / sb->block_size) * 1915 + iter_size = sizeof(struct btree_iter) + 1916 + ((meta_bucket_pages(sb) * PAGE_SECTORS) / sb->block_size) * 1916 1917 sizeof(struct btree_iter_set); 1917 1918 1918 1919 c->devices = kcalloc(c->nr_uuids, sizeof(void *), GFP_KERNEL);
+1 -3
drivers/md/bcache/sysfs.c
··· 660 660 unsigned int bytes = 0; 661 661 struct bkey *k; 662 662 struct btree *b; 663 - struct btree_iter iter; 664 - 665 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 663 + struct btree_iter_stack iter; 666 664 667 665 goto lock_root; 668 666
+65 -2
drivers/md/bcache/util.h
··· 9 9 #include <linux/kernel.h> 10 10 #include <linux/sched/clock.h> 11 11 #include <linux/llist.h> 12 - #include <linux/min_heap.h> 13 12 #include <linux/ratelimit.h> 14 13 #include <linux/vmalloc.h> 15 14 #include <linux/workqueue.h> ··· 30 31 31 32 #endif 32 33 34 + #define DECLARE_HEAP(type, name) \ 35 + struct { \ 36 + size_t size, used; \ 37 + type *data; \ 38 + } name 39 + 33 40 #define init_heap(heap, _size, gfp) \ 34 41 ({ \ 35 42 size_t _bytes; \ 36 - (heap)->nr = 0; \ 43 + (heap)->used = 0; \ 37 44 (heap)->size = (_size); \ 38 45 _bytes = (heap)->size * sizeof(*(heap)->data); \ 39 46 (heap)->data = kvmalloc(_bytes, (gfp) & GFP_KERNEL); \ ··· 51 46 kvfree((heap)->data); \ 52 47 (heap)->data = NULL; \ 53 48 } while (0) 49 + 50 + #define heap_swap(h, i, j) swap((h)->data[i], (h)->data[j]) 51 + 52 + #define heap_sift(h, i, cmp) \ 53 + do { \ 54 + size_t _r, _j = i; \ 55 + \ 56 + for (; _j * 2 + 1 < (h)->used; _j = _r) { \ 57 + _r = _j * 2 + 1; \ 58 + if (_r + 1 < (h)->used && \ 59 + cmp((h)->data[_r], (h)->data[_r + 1])) \ 60 + _r++; \ 61 + \ 62 + if (cmp((h)->data[_r], (h)->data[_j])) \ 63 + break; \ 64 + heap_swap(h, _r, _j); \ 65 + } \ 66 + } while (0) 67 + 68 + #define heap_sift_down(h, i, cmp) \ 69 + do { \ 70 + while (i) { \ 71 + size_t p = (i - 1) / 2; \ 72 + if (cmp((h)->data[i], (h)->data[p])) \ 73 + break; \ 74 + heap_swap(h, i, p); \ 75 + i = p; \ 76 + } \ 77 + } while (0) 78 + 79 + #define heap_add(h, d, cmp) \ 80 + ({ \ 81 + bool _r = !heap_full(h); \ 82 + if (_r) { \ 83 + size_t _i = (h)->used++; \ 84 + (h)->data[_i] = d; \ 85 + \ 86 + heap_sift_down(h, _i, cmp); \ 87 + heap_sift(h, _i, cmp); \ 88 + } \ 89 + _r; \ 90 + }) 91 + 92 + #define heap_pop(h, d, cmp) \ 93 + ({ \ 94 + bool _r = (h)->used; \ 95 + if (_r) { \ 96 + (d) = (h)->data[0]; \ 97 + (h)->used--; \ 98 + heap_swap(h, 0, (h)->used); \ 99 + heap_sift(h, 0, cmp); \ 100 + } \ 101 + _r; \ 102 + }) 103 + 104 + #define heap_peek(h) ((h)->used ? 
(h)->data[0] : NULL) 105 + 106 + #define heap_full(h) ((h)->used == (h)->size) 54 107 55 108 #define DECLARE_FIFO(type, name) \ 56 109 struct { \
+5 -8
drivers/md/bcache/writeback.c
··· 908 908 struct dirty_init_thrd_info *info = arg; 909 909 struct bch_dirty_init_state *state = info->state; 910 910 struct cache_set *c = state->c; 911 - struct btree_iter iter; 911 + struct btree_iter_stack iter; 912 912 struct bkey *k, *p; 913 913 int cur_idx, prev_idx, skip_nr; 914 914 915 915 k = p = NULL; 916 916 prev_idx = 0; 917 917 918 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 919 - bch_btree_iter_init(&c->root->keys, &iter, NULL); 920 - k = bch_btree_iter_next_filter(&iter, &c->root->keys, bch_ptr_bad); 918 + bch_btree_iter_stack_init(&c->root->keys, &iter, NULL); 919 + k = bch_btree_iter_next_filter(&iter.iter, &c->root->keys, bch_ptr_bad); 921 920 BUG_ON(!k); 922 921 923 922 p = k; ··· 930 931 skip_nr = cur_idx - prev_idx; 931 932 932 933 while (skip_nr) { 933 - k = bch_btree_iter_next_filter(&iter, 934 + k = bch_btree_iter_next_filter(&iter.iter, 934 935 &c->root->keys, 935 936 bch_ptr_bad); 936 937 if (k) ··· 979 980 int i; 980 981 struct btree *b = NULL; 981 982 struct bkey *k = NULL; 982 - struct btree_iter iter; 983 + struct btree_iter_stack iter; 983 984 struct sectors_dirty_init op; 984 985 struct cache_set *c = d->c; 985 986 struct bch_dirty_init_state state; 986 - 987 - min_heap_init(&iter.heap, NULL, MAX_BSETS); 988 987 989 988 retry_lock: 990 989 b = c->root;
+7 -4
drivers/md/dm-crypt.c
··· 517 517 { 518 518 struct iv_lmk_private *lmk = &cc->iv_gen_private.lmk; 519 519 SHASH_DESC_ON_STACK(desc, lmk->hash_tfm); 520 - struct md5_state md5state; 520 + union { 521 + struct md5_state md5state; 522 + u8 state[CRYPTO_MD5_STATESIZE]; 523 + } u; 521 524 __le32 buf[4]; 522 525 int i, r; 523 526 ··· 551 548 return r; 552 549 553 550 /* No MD5 padding here */ 554 - r = crypto_shash_export(desc, &md5state); 551 + r = crypto_shash_export(desc, &u.md5state); 555 552 if (r) 556 553 return r; 557 554 558 555 for (i = 0; i < MD5_HASH_WORDS; i++) 559 - __cpu_to_le32s(&md5state.hash[i]); 560 - memcpy(iv, &md5state.hash, cc->iv_size); 556 + __cpu_to_le32s(&u.md5state.hash[i]); 557 + memcpy(iv, &u.md5state.hash, cc->iv_size); 561 558 562 559 return 0; 563 560 }
+1 -1
drivers/md/dm-raid.c
··· 2407 2407 */ 2408 2408 sb_retrieve_failed_devices(sb, failed_devices); 2409 2409 rdev_for_each(r, mddev) { 2410 - if (test_bit(Journal, &rdev->flags) || 2410 + if (test_bit(Journal, &r->flags) || 2411 2411 !r->sb_page) 2412 2412 continue; 2413 2413 sb2 = page_address(r->sb_page);
+1 -1
drivers/mtd/mtdchar.c
··· 559 559 /* Sanitize user input */ 560 560 p.devname[BLKPG_DEVNAMELTH - 1] = '\0'; 561 561 562 - return mtd_add_partition(mtd, p.devname, p.start, p.length, NULL); 562 + return mtd_add_partition(mtd, p.devname, p.start, p.length); 563 563 564 564 case BLKPG_DEL_PARTITION: 565 565
+40 -112
drivers/mtd/mtdcore.c
··· 68 68 .pm = MTD_CLS_PM_OPS,
69 69 };
70 70 
71 - static struct class mtd_master_class = {
72 - .name = "mtd_master",
73 - .pm = MTD_CLS_PM_OPS,
74 - };
75 - 
76 71 static DEFINE_IDR(mtd_idr);
77 - static DEFINE_IDR(mtd_master_idr);
78 72 
79 73 /* These are exported solely for the purpose of mtd_blkdevs.c. You
80 74 should not use them for _anything_ else */
··· 83 89 
84 90 static LIST_HEAD(mtd_notifiers);
85 91 
86 - #define MTD_MASTER_DEVS 255
92 + 
87 93 #define MTD_DEVT(index) MKDEV(MTD_CHAR_MAJOR, (index)*2)
88 - static dev_t mtd_master_devt;
89 94 
90 95 /* REVISIT once MTD uses the driver model better, whoever allocates
91 96 * the mtd_info will probably want to use the release() hook...
··· 102 109 
103 110 /* remove /dev/mtdXro node */
104 111 device_destroy(&mtd_class, index + 1);
105 - }
106 - 
107 - static void mtd_master_release(struct device *dev)
108 - {
109 - struct mtd_info *mtd = dev_get_drvdata(dev);
110 - 
111 - idr_remove(&mtd_master_idr, mtd->index);
112 - of_node_put(mtd_get_of_node(mtd));
113 - 
114 - if (mtd_is_partition(mtd))
115 - release_mtd_partition(mtd);
116 112 }
117 113 
118 114 static void mtd_device_release(struct kref *kref)
··· 365 383 .name = "mtd",
366 384 .groups = mtd_groups,
367 385 .release = mtd_release,
368 - };
369 - 
370 - static const struct device_type mtd_master_devtype = {
371 - .name = "mtd_master",
372 - .release = mtd_master_release,
373 386 };
374 387 
375 388 static bool mtd_expert_analysis_mode;
··· 634 657 /**
635 658 * add_mtd_device - register an MTD device
636 659 * @mtd: pointer to new MTD device info structure
637 - * @partitioned: create partitioned device
638 660 *
639 661 * Add a device to the list of MTD devices present in the system, and
640 662 * notify each currently active MTD 'user' of its arrival. Returns
641 663 zero on success or non-zero on failure.
642 664 */
643 - int add_mtd_device(struct mtd_info *mtd, bool partitioned)
665 + 
666 + int add_mtd_device(struct mtd_info *mtd)
644 667 {
645 668 struct device_node *np = mtd_get_of_node(mtd);
646 669 struct mtd_info *master = mtd_get_master(mtd);
··· 687 710 ofidx = -1;
688 711 if (np)
689 712 ofidx = of_alias_get_id(np, "mtd");
690 - if (partitioned) {
691 - if (ofidx >= 0)
692 - i = idr_alloc(&mtd_idr, mtd, ofidx, ofidx + 1, GFP_KERNEL);
693 - else
694 - i = idr_alloc(&mtd_idr, mtd, 0, 0, GFP_KERNEL);
695 - } else {
696 - if (ofidx >= 0)
697 - i = idr_alloc(&mtd_master_idr, mtd, ofidx, ofidx + 1, GFP_KERNEL);
698 - else
699 - i = idr_alloc(&mtd_master_idr, mtd, 0, 0, GFP_KERNEL);
700 - }
713 + if (ofidx >= 0)
714 + i = idr_alloc(&mtd_idr, mtd, ofidx, ofidx + 1, GFP_KERNEL);
715 + else
716 + i = idr_alloc(&mtd_idr, mtd, 0, 0, GFP_KERNEL);
701 717 if (i < 0) {
702 718 error = i;
703 719 goto fail_locked;
··· 738 768 /* Caller should have set dev.parent to match the
739 769 * physical device, if appropriate.
740 770 */
741 - if (partitioned) {
742 - mtd->dev.type = &mtd_devtype;
743 - mtd->dev.class = &mtd_class;
744 - mtd->dev.devt = MTD_DEVT(i);
745 - dev_set_name(&mtd->dev, "mtd%d", i);
746 - error = dev_set_name(&mtd->dev, "mtd%d", i);
747 - } else {
748 - mtd->dev.type = &mtd_master_devtype;
749 - mtd->dev.class = &mtd_master_class;
750 - mtd->dev.devt = MKDEV(MAJOR(mtd_master_devt), i);
751 - error = dev_set_name(&mtd->dev, "mtd_master%d", i);
752 - }
771 + mtd->dev.type = &mtd_devtype;
772 + mtd->dev.class = &mtd_class;
773 + mtd->dev.devt = MTD_DEVT(i);
774 + error = dev_set_name(&mtd->dev, "mtd%d", i);
753 775 if (error)
754 776 goto fail_devname;
755 777 dev_set_drvdata(&mtd->dev, mtd);
··· 749 787 of_node_get(mtd_get_of_node(mtd));
750 788 error = device_register(&mtd->dev);
751 789 if (error) {
752 - pr_err("mtd: %s device_register fail %d\n", mtd->name, error);
753 790 put_device(&mtd->dev);
754 791 goto fail_added;
755 792 }
··· 760 799 
761 800 mtd_debugfs_populate(mtd);
762 801 
763 - if (partitioned) {
764 - device_create(&mtd_class, mtd->dev.parent, MTD_DEVT(i) + 1, NULL,
765 - "mtd%dro", i);
766 - }
802 + device_create(&mtd_class, mtd->dev.parent, MTD_DEVT(i) + 1, NULL,
803 + "mtd%dro", i);
767 804 
768 - pr_debug("mtd: Giving out %spartitioned device %d to %s\n",
769 - partitioned ? "" : "un-", i, mtd->name);
805 + pr_debug("mtd: Giving out device %d to %s\n", i, mtd->name);
770 806 /* No need to get a refcount on the module containing
771 807 the notifier, since we hold the mtd_table_mutex */
772 808 list_for_each_entry(not, &mtd_notifiers, list)
··· 771 813 
772 814 mutex_unlock(&mtd_table_mutex);
773 815 
774 - if (partitioned) {
775 - if (of_property_read_bool(mtd_get_of_node(mtd), "linux,rootfs")) {
776 - if (IS_BUILTIN(CONFIG_MTD)) {
777 - pr_info("mtd: setting mtd%d (%s) as root device\n",
778 - mtd->index, mtd->name);
779 - ROOT_DEV = MKDEV(MTD_BLOCK_MAJOR, mtd->index);
780 - } else {
781 - pr_warn("mtd: can't set mtd%d (%s) as root device - mtd must be builtin\n",
782 - mtd->index, mtd->name);
783 - }
816 + if (of_property_read_bool(mtd_get_of_node(mtd), "linux,rootfs")) {
817 + if (IS_BUILTIN(CONFIG_MTD)) {
818 + pr_info("mtd: setting mtd%d (%s) as root device\n", mtd->index, mtd->name);
819 + ROOT_DEV = MKDEV(MTD_BLOCK_MAJOR, mtd->index);
820 + } else {
821 + pr_warn("mtd: can't set mtd%d (%s) as root device - mtd must be builtin\n",
822 + mtd->index, mtd->name);
784 823 }
785 824 }
786 825 
··· 793 838 fail_added:
794 839 of_node_put(mtd_get_of_node(mtd));
795 840 fail_devname:
796 - if (partitioned)
797 - idr_remove(&mtd_idr, i);
798 - else
799 - idr_remove(&mtd_master_idr, i);
841 + idr_remove(&mtd_idr, i);
800 842 fail_locked:
801 843 mutex_unlock(&mtd_table_mutex);
802 844 return error;
··· 811 859 
812 860 int del_mtd_device(struct mtd_info *mtd)
813 861 {
814 - struct mtd_notifier *not;
815 - struct idr *idr;
816 862 int ret;
863 + struct mtd_notifier *not;
817 864 
818 865 mutex_lock(&mtd_table_mutex);
819 866 
820 - idr = mtd->dev.class == &mtd_class ? &mtd_idr : &mtd_master_idr;
821 - if (idr_find(idr, mtd->index) != mtd) {
867 + if (idr_find(&mtd_idr, mtd->index) != mtd) {
822 868 ret = -ENODEV;
823 869 goto out_error;
824 870 }
··· 1056 1106 const struct mtd_partition *parts,
1057 1107 int nr_parts)
1058 1108 {
1059 - struct mtd_info *parent;
1060 1109 int ret, err;
1061 1110 
1062 1111 mtd_set_dev_defaults(mtd);
··· 1064 1115 if (ret)
1065 1116 goto out;
1066 1117 
1067 - ret = add_mtd_device(mtd, false);
1068 - if (ret)
1069 - goto out;
1070 - 
1071 1118 if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER)) {
1072 - ret = mtd_add_partition(mtd, mtd->name, 0, MTDPART_SIZ_FULL, &parent);
1119 + ret = add_mtd_device(mtd);
1073 1120 if (ret)
1074 1121 goto out;
1075 - 
1076 - } else {
1077 - parent = mtd;
1078 1122 }
1079 1123 
1080 1124 /* Prefer parsed partitions over driver-provided fallback */
1081 - ret = parse_mtd_partitions(parent, types, parser_data);
1125 + ret = parse_mtd_partitions(mtd, types, parser_data);
1082 1126 if (ret == -EPROBE_DEFER)
1083 1127 goto out;
1084 1128 
1085 1129 if (ret > 0)
1086 1130 ret = 0;
1087 1131 else if (nr_parts)
1088 - ret = add_mtd_partitions(parent, parts, nr_parts);
1089 - else if (!IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER))
1090 - ret = mtd_add_partition(parent, mtd->name, 0, MTDPART_SIZ_FULL, NULL);
1132 + ret = add_mtd_partitions(mtd, parts, nr_parts);
1133 + else if (!device_is_registered(&mtd->dev))
1134 + ret = add_mtd_device(mtd);
1135 + else
1136 + ret = 0;
1091 1137 
1092 1138 if (ret)
1093 1139 goto out;
··· 1102 1158 register_reboot_notifier(&mtd->reboot_notifier);
1103 1159 }
1104 1160 
1105 - return 0;
1106 1161 out:
1107 - nvmem_unregister(mtd->otp_user_nvmem);
1108 - nvmem_unregister(mtd->otp_factory_nvmem);
1162 + if (ret) {
1163 + nvmem_unregister(mtd->otp_user_nvmem);
1164 + nvmem_unregister(mtd->otp_factory_nvmem);
1165 + }
1109 1166 
1110 - del_mtd_partitions(mtd);
1111 - 
1112 - if (device_is_registered(&mtd->dev)) {
1167 + if (ret && device_is_registered(&mtd->dev)) {
1113 1168 err = del_mtd_device(mtd);
1114 1169 if (err)
1115 1170 pr_err("Error when deleting MTD device (%d)\n", err);
··· 1267 1324 mtd = mtd->parent;
1268 1325 }
1269 1326 
1270 - kref_get(&master->refcnt);
1327 + if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER))
1328 + kref_get(&master->refcnt);
1271 1329 
1272 1330 return 0;
1273 1331 }
··· 1362 1418 mtd = parent;
1363 1419 }
1364 1420 
1365 - kref_put(&master->refcnt, mtd_device_release);
1421 + if (IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER))
1422 + kref_put(&master->refcnt, mtd_device_release);
1366 1423 
1367 1424 module_put(master->owner);
1368 1425 
··· 2530 2585 if (ret)
2531 2586 goto err_reg;
2532 2587 
2533 - ret = class_register(&mtd_master_class);
2534 - if (ret)
2535 - goto err_reg2;
2536 - 
2537 - ret = alloc_chrdev_region(&mtd_master_devt, 0, MTD_MASTER_DEVS, "mtd_master");
2538 - if (ret < 0) {
2539 - pr_err("unable to allocate char dev region\n");
2540 - goto err_chrdev;
2541 - }
2542 - 
2543 2588 mtd_bdi = mtd_bdi_init("mtd");
2544 2589 if (IS_ERR(mtd_bdi)) {
2545 2590 ret = PTR_ERR(mtd_bdi);
··· 2554 2619 bdi_unregister(mtd_bdi);
2555 2620 bdi_put(mtd_bdi);
2556 2621 err_bdi:
2557 - unregister_chrdev_region(mtd_master_devt, MTD_MASTER_DEVS);
2558 - err_chrdev:
2559 - class_unregister(&mtd_master_class);
2560 - err_reg2:
2561 2622 class_unregister(&mtd_class);
2562 2623 err_reg:
2563 2624 pr_err("Error registering mtd class or bdi: %d\n", ret);
··· 2567 2636 if (proc_mtd)
2568 2637 remove_proc_entry("mtd", NULL);
2569 2638 class_unregister(&mtd_class);
2570 - class_unregister(&mtd_master_class);
2571 - unregister_chrdev_region(mtd_master_devt, MTD_MASTER_DEVS);
2572 2639 bdi_unregister(mtd_bdi);
2573 2640 bdi_put(mtd_bdi);
2574 2641 idr_destroy(&mtd_idr);
2575 - idr_destroy(&mtd_master_idr);
2576 2642 }
2577 2643 
2578 2644 module_init(init_mtd);
+1 -1
drivers/mtd/mtdcore.h
··· 8 8 extern struct backing_dev_info *mtd_bdi;
9 9 
10 10 struct mtd_info *__mtd_next_device(int i);
11 - int __must_check add_mtd_device(struct mtd_info *mtd, bool partitioned);
11 + int __must_check add_mtd_device(struct mtd_info *mtd);
12 12 int del_mtd_device(struct mtd_info *mtd);
13 13 int add_mtd_partitions(struct mtd_info *, const struct mtd_partition *, int);
14 14 int del_mtd_partitions(struct mtd_info *);
+8 -8
drivers/mtd/mtdpart.c
··· 86 86 * parent conditional on that option. Note, this is a way to
87 87 * distinguish between the parent and its partitions in sysfs.
88 88 */
89 - child->dev.parent = &parent->dev;
89 + child->dev.parent = IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER) || mtd_is_partition(parent) ?
90 + &parent->dev : parent->dev.parent;
90 91 child->dev.of_node = part->of_node;
91 92 child->parent = parent;
92 93 child->part.offset = part->offset;
··· 243 242 }
244 243 
245 244 int mtd_add_partition(struct mtd_info *parent, const char *name,
246 - long long offset, long long length, struct mtd_info **out)
245 + long long offset, long long length)
247 246 {
248 247 struct mtd_info *master = mtd_get_master(parent);
249 248 u64 parent_size = mtd_is_partition(parent) ?
··· 276 275 list_add_tail(&child->part.node, &parent->partitions);
277 276 mutex_unlock(&master->master.partitions_lock);
278 277 
279 - ret = add_mtd_device(child, true);
278 + ret = add_mtd_device(child);
280 279 if (ret)
281 280 goto err_remove_part;
282 281 
283 282 mtd_add_partition_attrs(child);
284 - 
285 - if (out)
286 - *out = child;
287 283 
288 284 return 0;
289 285 
··· 413 415 list_add_tail(&child->part.node, &parent->partitions);
414 416 mutex_unlock(&master->master.partitions_lock);
415 417 
416 - ret = add_mtd_device(child, true);
418 + ret = add_mtd_device(child);
417 419 if (ret) {
418 420 mutex_lock(&master->master.partitions_lock);
419 421 list_del(&child->part.node);
··· 590 592 int ret, err = 0;
591 593 
592 594 dev = &master->dev;
595 + /* Use parent device (controller) if the top level MTD is not registered */
596 + if (!IS_ENABLED(CONFIG_MTD_PARTITIONED_MASTER) && !mtd_is_partition(master))
597 + dev = master->dev.parent;
593 598 
594 599 np = mtd_get_of_node(master);
595 600 if (mtd_is_partition(master))
··· 711 710 if (ret < 0 && !err)
712 711 err = ret;
713 712 }
714 - 
715 713 return err;
716 714 }
+1
drivers/mtd/nand/spi/core.c
··· 1585 1585 {
1586 1586 struct nand_device *nand = spinand_to_nand(spinand);
1587 1587 
1588 + nanddev_ecc_engine_cleanup(nand);
1588 1589 nanddev_cleanup(nand);
1589 1590 spinand_manufacturer_cleanup(spinand);
1590 1591 kfree(spinand->databuf);
+5 -5
drivers/mtd/nand/spi/winbond.c
··· 25 25 
26 26 static SPINAND_OP_VARIANTS(read_cache_octal_variants,
27 27 SPINAND_PAGE_READ_FROM_CACHE_1S_1D_8D_OP(0, 2, NULL, 0, 105 * HZ_PER_MHZ),
28 - SPINAND_PAGE_READ_FROM_CACHE_1S_8S_8S_OP(0, 16, NULL, 0, 86 * HZ_PER_MHZ),
28 + SPINAND_PAGE_READ_FROM_CACHE_1S_8S_8S_OP(0, 16, NULL, 0, 162 * HZ_PER_MHZ),
29 29 SPINAND_PAGE_READ_FROM_CACHE_1S_1S_8S_OP(0, 1, NULL, 0, 133 * HZ_PER_MHZ),
30 30 SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
31 31 SPINAND_PAGE_READ_FROM_CACHE_1S_1S_1S_OP(0, 1, NULL, 0));
··· 42 42 static SPINAND_OP_VARIANTS(read_cache_dual_quad_dtr_variants,
43 43 SPINAND_PAGE_READ_FROM_CACHE_1S_4D_4D_OP(0, 8, NULL, 0, 80 * HZ_PER_MHZ),
44 44 SPINAND_PAGE_READ_FROM_CACHE_1S_1D_4D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
45 - SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0),
45 + SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(0, 2, NULL, 0, 104 * HZ_PER_MHZ),
46 46 SPINAND_PAGE_READ_FROM_CACHE_1S_1S_4S_OP(0, 1, NULL, 0),
47 47 SPINAND_PAGE_READ_FROM_CACHE_1S_2D_2D_OP(0, 4, NULL, 0, 80 * HZ_PER_MHZ),
48 48 SPINAND_PAGE_READ_FROM_CACHE_1S_1D_2D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
49 - SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0),
49 + SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(0, 1, NULL, 0, 104 * HZ_PER_MHZ),
50 50 SPINAND_PAGE_READ_FROM_CACHE_1S_1S_2S_OP(0, 1, NULL, 0),
51 51 SPINAND_PAGE_READ_FROM_CACHE_1S_1D_1D_OP(0, 2, NULL, 0, 80 * HZ_PER_MHZ),
52 52 SPINAND_PAGE_READ_FROM_CACHE_FAST_1S_1S_1S_OP(0, 1, NULL, 0),
··· 289 289 SPINAND_ECCINFO(&w35n01jw_ooblayout, NULL)),
290 290 SPINAND_INFO("W35N02JW", /* 1.8V */
291 291 SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xdf, 0x22),
292 - NAND_MEMORG(1, 4096, 128, 64, 512, 10, 2, 1, 1),
292 + NAND_MEMORG(1, 4096, 128, 64, 512, 10, 1, 2, 1),
293 293 NAND_ECCREQ(1, 512),
294 294 SPINAND_INFO_OP_VARIANTS(&read_cache_octal_variants,
295 295 &write_cache_octal_variants,
··· 298 298 SPINAND_ECCINFO(&w35n01jw_ooblayout, NULL)),
299 299 SPINAND_INFO("W35N04JW", /* 1.8V */
300 300 SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xdf, 0x23),
301 - NAND_MEMORG(1, 4096, 128, 64, 512, 10, 4, 1, 1),
301 + NAND_MEMORG(1, 4096, 128, 64, 512, 10, 1, 4, 1),
302 302 NAND_ECCREQ(1, 512),
303 303 SPINAND_INFO_OP_VARIANTS(&read_cache_octal_variants,
304 304 &write_cache_octal_variants,
+4 -1
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 2994 2994 {
2995 2995 struct bnxt_napi *bnapi = cpr->bnapi;
2996 2996 u32 raw_cons = cpr->cp_raw_cons;
2997 + bool flush_xdp = false;
2997 2998 u32 cons;
2998 2999 int rx_pkts = 0;
2999 3000 u8 event = 0;
··· 3048 3047 else
3049 3048 rc = bnxt_force_rx_discard(bp, cpr, &raw_cons,
3050 3049 &event);
3050 + if (event & BNXT_REDIRECT_EVENT)
3051 + flush_xdp = true;
3051 3052 if (likely(rc >= 0))
3052 3053 rx_pkts += rc;
3053 3054 /* Increment rx_pkts when rc is -ENOMEM to count towards
··· 3074 3071 }
3075 3072 }
3076 3073 
3077 - if (event & BNXT_REDIRECT_EVENT) {
3074 + if (flush_xdp) {
3078 3075 xdp_do_flush();
3079 3076 event &= ~BNXT_REDIRECT_EVENT;
3080 3077 }
+1 -1
drivers/net/ethernet/freescale/enetc/enetc_hw.h
··· 510 510 tmp = ioread32(reg + 4);
511 511 } while (high != tmp);
512 512 
513 - return le64_to_cpu((__le64)high << 32 | low);
513 + return (u64)high << 32 | low;
514 514 }
515 515 #endif
516 516 
+3
drivers/net/ethernet/microsoft/mana/gdma_main.c
··· 33 33 gc->db_page_base = gc->bar0_va +
34 34 mana_gd_r64(gc, GDMA_PF_REG_DB_PAGE_OFF);
35 35 
36 + gc->phys_db_page_base = gc->bar0_pa +
37 + mana_gd_r64(gc, GDMA_PF_REG_DB_PAGE_OFF);
38 + 
36 39 sriov_base_off = mana_gd_r64(gc, GDMA_SRIOV_REG_CFG_BASE_OFF);
37 40 
38 41 sriov_base_va = gc->bar0_va + sriov_base_off;
+6 -6
drivers/net/ethernet/pensando/ionic/ionic_txrx.c
··· 321 321 len, DMA_TO_DEVICE);
322 322 } else /* XDP_REDIRECT */ {
323 323 dma_addr = ionic_tx_map_single(q, frame->data, len);
324 - if (!dma_addr)
324 + if (dma_addr == DMA_MAPPING_ERROR)
325 325 return -EIO;
326 326 }
327 327 
··· 357 357 } else {
358 358 dma_addr = ionic_tx_map_frag(q, frag, 0,
359 359 skb_frag_size(frag));
360 - if (dma_mapping_error(q->dev, dma_addr)) {
360 + if (dma_addr == DMA_MAPPING_ERROR) {
361 361 ionic_tx_desc_unmap_bufs(q, desc_info);
362 362 return -EIO;
363 363 }
··· 1083 1083 net_warn_ratelimited("%s: DMA single map failed on %s!\n",
1084 1084 dev_name(dev), q->name);
1085 1085 q_to_tx_stats(q)->dma_map_err++;
1086 - return 0;
1086 + return DMA_MAPPING_ERROR;
1087 1087 }
1088 1088 return dma_addr;
1089 1089 }
··· 1100 1100 net_warn_ratelimited("%s: DMA frag map failed on %s!\n",
1101 1101 dev_name(dev), q->name);
1102 1102 q_to_tx_stats(q)->dma_map_err++;
1103 - return 0;
1103 + return DMA_MAPPING_ERROR;
1104 1104 }
1105 1105 return dma_addr;
1106 1106 }
··· 1116 1116 int frag_idx;
1117 1117 
1118 1118 dma_addr = ionic_tx_map_single(q, skb->data, skb_headlen(skb));
1119 - if (!dma_addr)
1119 + if (dma_addr == DMA_MAPPING_ERROR)
1120 1120 return -EIO;
1121 1121 buf_info->dma_addr = dma_addr;
1122 1122 buf_info->len = skb_headlen(skb);
··· 1126 1126 nfrags = skb_shinfo(skb)->nr_frags;
1127 1127 for (frag_idx = 0; frag_idx < nfrags; frag_idx++, frag++) {
1128 1128 dma_addr = ionic_tx_map_frag(q, frag, 0, skb_frag_size(frag));
1129 - if (!dma_addr)
1129 + if (dma_addr == DMA_MAPPING_ERROR)
1130 1130 goto dma_fail;
1131 1131 buf_info->dma_addr = dma_addr;
1132 1132 buf_info->len = skb_frag_size(frag);
+4 -4
drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c
··· 242 242 }
243 243 
244 244 /* Returns size of the data buffer or, -1 in case TLV data is not available. */
245 - static int
245 + static noinline_for_stack int
246 246 qed_mfw_get_gen_tlv_value(struct qed_drv_tlv_hdr *p_tlv,
247 247 struct qed_mfw_tlv_generic *p_drv_buf,
248 248 struct qed_tlv_parsed_buf *p_buf)
··· 304 304 return -1;
305 305 }
306 306 
307 - static int
307 + static noinline_for_stack int
308 308 qed_mfw_get_eth_tlv_value(struct qed_drv_tlv_hdr *p_tlv,
309 309 struct qed_mfw_tlv_eth *p_drv_buf,
310 310 struct qed_tlv_parsed_buf *p_buf)
··· 438 438 return QED_MFW_TLV_TIME_SIZE;
439 439 }
440 440 
441 - static int
441 + static noinline_for_stack int
442 442 qed_mfw_get_fcoe_tlv_value(struct qed_drv_tlv_hdr *p_tlv,
443 443 struct qed_mfw_tlv_fcoe *p_drv_buf,
444 444 struct qed_tlv_parsed_buf *p_buf)
··· 1073 1073 return -1;
1074 1074 }
1075 1075 
1076 - static int
1076 + static noinline_for_stack int
1077 1077 qed_mfw_get_iscsi_tlv_value(struct qed_drv_tlv_hdr *p_tlv,
1078 1078 struct qed_mfw_tlv_iscsi *p_drv_buf,
1079 1079 struct qed_tlv_parsed_buf *p_buf)
+1 -1
drivers/net/ethernet/wangxun/libwx/wx_lib.c
··· 2623 2623 struct page_pool_params pp_params = {
2624 2624 .flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
2625 2625 .order = 0,
2626 - .pool_size = rx_ring->size,
2626 + .pool_size = rx_ring->count,
2627 2627 .nid = dev_to_node(rx_ring->dev),
2628 2628 .dev = rx_ring->dev,
2629 2629 .dma_dir = DMA_FROM_DEVICE,
+1
drivers/net/usb/qmi_wwan.c
··· 1426 1426 {QMI_QUIRK_SET_DTR(0x22de, 0x9051, 2)}, /* Hucom Wireless HM-211S/K */
1427 1427 {QMI_FIXED_INTF(0x22de, 0x9061, 3)}, /* WeTelecom WPD-600N */
1428 1428 {QMI_QUIRK_SET_DTR(0x1e0e, 0x9001, 5)}, /* SIMCom 7100E, 7230E, 7600E ++ */
1429 + {QMI_QUIRK_SET_DTR(0x1e0e, 0x9071, 3)}, /* SIMCom 8230C ++ */
1429 1430 {QMI_QUIRK_SET_DTR(0x2c7c, 0x0121, 4)}, /* Quectel EC21 Mini PCIe */
1430 1431 {QMI_QUIRK_SET_DTR(0x2c7c, 0x0191, 4)}, /* Quectel EG91 */
1431 1432 {QMI_QUIRK_SET_DTR(0x2c7c, 0x0195, 4)}, /* Quectel EG95 */
+2 -1
drivers/net/wireless/intel/iwlegacy/4965-rs.c
··· 203 203 return (u8) (rate_n_flags & 0xFF);
204 204 }
205 205 
206 - static void
206 + /* noinline works around https://github.com/llvm/llvm-project/issues/143908 */
207 + static noinline_for_stack void
207 208 il4965_rs_rate_scale_clear_win(struct il_rate_scale_data *win)
208 209 {
209 210 win->data = 0;
+2 -2
drivers/net/wireless/intel/iwlwifi/mvm/mld-mac.c
··· 32 32 unsigned int link_id;
33 33 int cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw,
34 34 WIDE_ID(MAC_CONF_GROUP,
35 - MAC_CONFIG_CMD), 0);
35 + MAC_CONFIG_CMD), 1);
36 36 
37 - if (WARN_ON(cmd_ver < 1 || cmd_ver > 3))
37 + if (WARN_ON(cmd_ver > 3))
38 38 return;
39 39 
40 40 cmd->id_and_color = cpu_to_le32(mvmvif->id);
+1 -1
drivers/pci/hotplug/pciehp_hpc.c
··· 771 771 u16 ignored_events = PCI_EXP_SLTSTA_DLLSC;
772 772 
773 773 if (!ctrl->inband_presence_disabled)
774 - ignored_events |= events & PCI_EXP_SLTSTA_PDC;
774 + ignored_events |= PCI_EXP_SLTSTA_PDC;
775 775 
776 776 events &= ~ignored_events;
777 777 pciehp_ignore_link_change(ctrl, pdev, irq, ignored_events);
+3 -2
drivers/pci/pci.c
··· 3217 3217 /* find PCI PM capability in list */
3218 3218 pm = pci_find_capability(dev, PCI_CAP_ID_PM);
3219 3219 if (!pm)
3220 - return;
3220 + goto poweron;
3221 3221 /* Check device's ability to generate PME# */
3222 3222 pci_read_config_word(dev, pm + PCI_PM_PMC, &pmc);
3223 3223 
3224 3224 if ((pmc & PCI_PM_CAP_VER_MASK) > 3) {
3225 3225 pci_err(dev, "unsupported PM cap regs version (%u)\n",
3226 3226 pmc & PCI_PM_CAP_VER_MASK);
3227 - return;
3227 + goto poweron;
3228 3228 }
3229 3229 
3230 3230 dev->pm_cap = pm;
··· 3269 3269 pci_read_config_word(dev, PCI_STATUS, &status);
3270 3270 if (status & PCI_STATUS_IMM_READY)
3271 3271 dev->imm_ready = 1;
3272 + poweron:
3272 3273 pci_pm_power_up_and_verify_state(dev);
3273 3274 pm_runtime_forbid(&dev->dev);
3274 3275 pm_runtime_set_active(&dev->dev);
+14
drivers/regulator/fan53555.c
··· 147 147 unsigned int slew_mask;
148 148 const unsigned int *ramp_delay_table;
149 149 unsigned int n_ramp_values;
150 + unsigned int enable_time;
150 151 unsigned int slew_rate;
151 152 };
152 153 
··· 283 282 di->slew_mask = CTL_SLEW_MASK;
284 283 di->ramp_delay_table = slew_rates;
285 284 di->n_ramp_values = ARRAY_SIZE(slew_rates);
285 + di->enable_time = 250;
286 286 di->vsel_count = FAN53526_NVOLTAGES;
287 287 
288 288 return 0;
··· 298 296 case FAN53555_CHIP_REV_00:
299 297 di->vsel_min = 600000;
300 298 di->vsel_step = 10000;
299 + di->enable_time = 400;
301 300 break;
302 301 case FAN53555_CHIP_REV_13:
303 302 di->vsel_min = 800000;
304 303 di->vsel_step = 10000;
304 + di->enable_time = 400;
305 305 break;
306 306 default:
307 307 dev_err(di->dev,
··· 315 311 case FAN53555_CHIP_ID_01:
316 312 case FAN53555_CHIP_ID_03:
317 313 case FAN53555_CHIP_ID_05:
314 + di->vsel_min = 600000;
315 + di->vsel_step = 10000;
316 + di->enable_time = 400;
317 + break;
318 318 case FAN53555_CHIP_ID_08:
319 319 di->vsel_min = 600000;
320 320 di->vsel_step = 10000;
321 + di->enable_time = 175;
321 322 break;
322 323 case FAN53555_CHIP_ID_04:
323 324 di->vsel_min = 603000;
324 325 di->vsel_step = 12826;
326 + di->enable_time = 400;
325 327 break;
326 328 default:
327 329 dev_err(di->dev,
··· 360 350 di->slew_mask = CTL_SLEW_MASK;
361 351 di->ramp_delay_table = slew_rates;
362 352 di->n_ramp_values = ARRAY_SIZE(slew_rates);
353 + di->enable_time = 360;
363 354 di->vsel_count = FAN53555_NVOLTAGES;
364 355 
365 356 return 0;
··· 383 372 di->slew_mask = CTL_SLEW_MASK;
384 373 di->ramp_delay_table = slew_rates;
385 374 di->n_ramp_values = ARRAY_SIZE(slew_rates);
375 + di->enable_time = 360;
386 376 di->vsel_count = RK8602_NVOLTAGES;
387 377 
388 378 return 0;
··· 407 395 di->slew_mask = CTL_SLEW_MASK;
408 396 di->ramp_delay_table = slew_rates;
409 397 di->n_ramp_values = ARRAY_SIZE(slew_rates);
398 + di->enable_time = 400;
410 399 di->vsel_count = FAN53555_NVOLTAGES;
411 400 
412 401 return 0;
··· 607 594 rdesc->ramp_mask = di->slew_mask;
608 595 rdesc->ramp_delay_table = di->ramp_delay_table;
609 596 rdesc->n_ramp_values = di->n_ramp_values;
597 + rdesc->enable_time = di->enable_time;
610 598 rdesc->owner = THIS_MODULE;
611 599 
612 600 rdev = devm_regulator_register(di->dev, &di->desc, config);
+3 -2
drivers/scsi/elx/efct/efct_hw.c
··· 1120 1120 efct_hw_parse_filter(struct efct_hw *hw, void *value)
1121 1121 {
1122 1122 int rc = 0;
1123 - char *p = NULL;
1123 + char *p = NULL, *pp = NULL;
1124 1124 char *token;
1125 1125 u32 idx = 0;
1126 1126 
··· 1132 1132 efc_log_err(hw->os, "p is NULL\n");
1133 1133 return -ENOMEM;
1134 1134 }
1135 + pp = p;
1135 1136 
1136 1137 idx = 0;
1137 1138 while ((token = strsep(&p, ",")) && *token) {
··· 1145 1144 if (idx == ARRAY_SIZE(hw->config.filter_def))
1146 1145 break;
1147 1146 }
1148 - kfree(p);
1147 + kfree(pp);
1149 1148 
1150 1149 return rc;
1151 1150 }
+149 -38
drivers/scsi/fnic/fdls_disc.c
··· 763 763 iport->fabric.timer_pending = 1;
764 764 }
765 765 
766 - static void fdls_send_fdmi_abts(struct fnic_iport_s *iport)
766 + static uint8_t *fdls_alloc_init_fdmi_abts_frame(struct fnic_iport_s *iport,
767 + uint16_t oxid)
767 768 {
768 - uint8_t *frame;
769 + struct fc_frame_header *pfdmi_abts;
769 770 uint8_t d_id[3];
771 + uint8_t *frame;
770 772 struct fnic *fnic = iport->fnic;
771 - struct fc_frame_header *pfabric_abts;
772 - unsigned long fdmi_tov;
773 - uint16_t oxid;
774 - uint16_t frame_size = FNIC_ETH_FCOE_HDRS_OFFSET +
775 - sizeof(struct fc_frame_header);
776 773 
777 774 frame = fdls_alloc_frame(iport);
778 775 if (frame == NULL) {
779 776 FNIC_FCS_DBG(KERN_ERR, fnic->host, fnic->fnic_num,
780 777 "Failed to allocate frame to send FDMI ABTS");
781 - return;
778 + return NULL;
782 779 }
783 780 
784 - pfabric_abts = (struct fc_frame_header *) (frame + FNIC_ETH_FCOE_HDRS_OFFSET);
781 + pfdmi_abts = (struct fc_frame_header *) (frame + FNIC_ETH_FCOE_HDRS_OFFSET);
785 782 fdls_init_fabric_abts_frame(frame, iport);
786 783 
787 784 hton24(d_id, FC_FID_MGMT_SERV);
788 - FNIC_STD_SET_D_ID(*pfabric_abts, d_id);
785 + FNIC_STD_SET_D_ID(*pfdmi_abts, d_id);
786 + FNIC_STD_SET_OX_ID(*pfdmi_abts, oxid);
787 + 
788 + return frame;
789 + }
790 + 
791 + static void fdls_send_fdmi_abts(struct fnic_iport_s *iport)
792 + {
793 + uint8_t *frame;
794 + struct fnic *fnic = iport->fnic;
795 + unsigned long fdmi_tov;
796 + uint16_t frame_size = FNIC_ETH_FCOE_HDRS_OFFSET +
797 + sizeof(struct fc_frame_header);
789 798 
790 799 if (iport->fabric.fdmi_pending & FDLS_FDMI_PLOGI_PENDING) {
791 - oxid = iport->active_oxid_fdmi_plogi;
792 - FNIC_STD_SET_OX_ID(*pfabric_abts, oxid);
800 + frame = fdls_alloc_init_fdmi_abts_frame(iport,
801 + iport->active_oxid_fdmi_plogi);
802 + if (frame == NULL)
803 + return;
804 + 
805 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
806 + "0x%x: FDLS send FDMI PLOGI abts. iport->fabric.state: %d oxid: 0x%x",
807 + iport->fcid, iport->fabric.state, iport->active_oxid_fdmi_plogi);
793 808 fnic_send_fcoe_frame(iport, frame, frame_size);
794 809 } else {
795 810 if (iport->fabric.fdmi_pending & FDLS_FDMI_REG_HBA_PENDING) {
796 - oxid = iport->active_oxid_fdmi_rhba;
797 - FNIC_STD_SET_OX_ID(*pfabric_abts, oxid);
811 + frame = fdls_alloc_init_fdmi_abts_frame(iport,
812 + iport->active_oxid_fdmi_rhba);
813 + if (frame == NULL)
814 + return;
815 + 
816 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
817 + "0x%x: FDLS send FDMI RHBA abts. iport->fabric.state: %d oxid: 0x%x",
818 + iport->fcid, iport->fabric.state, iport->active_oxid_fdmi_rhba);
798 819 fnic_send_fcoe_frame(iport, frame, frame_size);
799 820 }
800 821 if (iport->fabric.fdmi_pending & FDLS_FDMI_RPA_PENDING) {
801 - oxid = iport->active_oxid_fdmi_rpa;
802 - FNIC_STD_SET_OX_ID(*pfabric_abts, oxid);
822 + frame = fdls_alloc_init_fdmi_abts_frame(iport,
823 + iport->active_oxid_fdmi_rpa);
824 + if (frame == NULL) {
825 + if (iport->fabric.fdmi_pending & FDLS_FDMI_REG_HBA_PENDING)
826 + goto arm_timer;
827 + else
828 + return;
829 + }
830 + 
831 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
832 + "0x%x: FDLS send FDMI RPA abts. iport->fabric.state: %d oxid: 0x%x",
833 + iport->fcid, iport->fabric.state, iport->active_oxid_fdmi_rpa);
803 834 fnic_send_fcoe_frame(iport, frame, frame_size);
804 835 }
805 836 }
806 837 
838 + arm_timer:
807 839 fdmi_tov = jiffies + msecs_to_jiffies(2 * iport->e_d_tov);
808 840 mod_timer(&iport->fabric.fdmi_timer, round_jiffies(fdmi_tov));
809 841 iport->fabric.fdmi_pending |= FDLS_FDMI_ABORT_PENDING;
842 + 
843 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
844 + "0x%x: iport->fabric.fdmi_pending: 0x%x",
845 + iport->fcid, iport->fabric.fdmi_pending);
810 846 }
811 847 
812 848 static void fdls_send_fabric_flogi(struct fnic_iport_s *iport)
··· 2281 2245 spin_unlock_irqrestore(&fnic->fnic_lock, flags);
2282 2246 }
2283 2247 
2248 + void fdls_fdmi_retry_plogi(struct fnic_iport_s *iport)
2249 + {
2250 + struct fnic *fnic = iport->fnic;
2251 + 
2252 + iport->fabric.fdmi_pending = 0;
2253 + /* If max retries not exhausted, start over from fdmi plogi */
2254 + if (iport->fabric.fdmi_retry < FDLS_FDMI_MAX_RETRY) {
2255 + iport->fabric.fdmi_retry++;
2256 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
2257 + "Retry FDMI PLOGI. FDMI retry: %d",
2258 + iport->fabric.fdmi_retry);
2259 + fdls_send_fdmi_plogi(iport);
2260 + }
2261 + }
2262 + 
2284 2263 void fdls_fdmi_timer_callback(struct timer_list *t)
2285 2264 {
2286 2265 struct fnic_fdls_fabric_s *fabric = timer_container_of(fabric, t,
··· 2308 2257 spin_lock_irqsave(&fnic->fnic_lock, flags);
2309 2258 
2310 2259 FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
2311 - "fdmi timer callback : 0x%x\n", iport->fabric.fdmi_pending);
2260 + "iport->fabric.fdmi_pending: 0x%x\n", iport->fabric.fdmi_pending);
2312 2261 
2313 2262 if (!iport->fabric.fdmi_pending) {
2314 2263 /* timer expired after fdmi responses received. */
··· 2316 2265 return;
2317 2266 }
2318 2267 FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
2319 - "fdmi timer callback : 0x%x\n", iport->fabric.fdmi_pending);
2268 + "iport->fabric.fdmi_pending: 0x%x\n", iport->fabric.fdmi_pending);
2320 2269 
2321 2270 /* if not abort pending, send an abort */
2322 2271 if (!(iport->fabric.fdmi_pending & FDLS_FDMI_ABORT_PENDING)) {
··· 2325 2274 return;
2326 2275 }
2327 2276 FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
2328 - "fdmi timer callback : 0x%x\n", iport->fabric.fdmi_pending);
2277 + "iport->fabric.fdmi_pending: 0x%x\n", iport->fabric.fdmi_pending);
2329 2278 
2330 2279 /* ABTS pending for an active fdmi request that is pending.
2331 2280 * That means FDMI ABTS timed out
2332 2281 * Schedule to free the OXID after 2*r_a_tov and proceed
2333 2282 */
2334 2283 if (iport->fabric.fdmi_pending & FDLS_FDMI_PLOGI_PENDING) {
2284 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
2285 + "FDMI PLOGI ABTS timed out. Schedule oxid free: 0x%x\n",
2286 + iport->active_oxid_fdmi_plogi);
2335 2287 fdls_schedule_oxid_free(iport, &iport->active_oxid_fdmi_plogi);
2336 2288 } else {
2337 - if (iport->fabric.fdmi_pending & FDLS_FDMI_REG_HBA_PENDING)
2289 + if (iport->fabric.fdmi_pending & FDLS_FDMI_REG_HBA_PENDING) {
2290 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
2291 + "FDMI RHBA ABTS timed out. Schedule oxid free: 0x%x\n",
2292 + iport->active_oxid_fdmi_rhba);
2338 2293 fdls_schedule_oxid_free(iport, &iport->active_oxid_fdmi_rhba);
2339 - if (iport->fabric.fdmi_pending & FDLS_FDMI_RPA_PENDING)
2294 + }
2295 + if (iport->fabric.fdmi_pending & FDLS_FDMI_RPA_PENDING) {
2296 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
2297 + "FDMI RPA ABTS timed out. Schedule oxid free: 0x%x\n",
2298 + iport->active_oxid_fdmi_rpa);
2340 2299 fdls_schedule_oxid_free(iport, &iport->active_oxid_fdmi_rpa);
2300 + }
2341 2301 }
2342 2302 FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
2343 - "fdmi timer callback : 0x%x\n", iport->fabric.fdmi_pending);
2303 + "iport->fabric.fdmi_pending: 0x%x\n", iport->fabric.fdmi_pending);
2344 2304 
2345 - iport->fabric.fdmi_pending = 0;
2346 - /* If max retries not exhaused, start over from fdmi plogi */
2347 - if (iport->fabric.fdmi_retry < FDLS_FDMI_MAX_RETRY) {
2348 - iport->fabric.fdmi_retry++;
2349 - FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
2350 - "retry fdmi timer %d", iport->fabric.fdmi_retry);
2351 - fdls_send_fdmi_plogi(iport);
2352 - }
2305 + fdls_fdmi_retry_plogi(iport);
2353 2306 FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
2354 - "fdmi timer callback : 0x%x\n", iport->fabric.fdmi_pending);
2307 + "iport->fabric.fdmi_pending: 0x%x\n", iport->fabric.fdmi_pending);
2355 2308 spin_unlock_irqrestore(&fnic->fnic_lock, flags);
2356 2309 }
2357 2310 
··· 3770 3715 
3771 3716 switch (FNIC_FRAME_TYPE(oxid)) {
3772 3717 case FNIC_FRAME_TYPE_FDMI_PLOGI:
3718 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
3719 + "Received FDMI PLOGI ABTS rsp with oxid: 0x%x", oxid);
3720 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
3721 + "0x%x: iport->fabric.fdmi_pending: 0x%x",
3722 + iport->fcid, iport->fabric.fdmi_pending);
3773 3723 fdls_free_oxid(iport, oxid, &iport->active_oxid_fdmi_plogi);
3724 + 
3725 + iport->fabric.fdmi_pending &= ~FDLS_FDMI_PLOGI_PENDING;
3726 + iport->fabric.fdmi_pending &= ~FDLS_FDMI_ABORT_PENDING;
3727 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
3728 + "0x%x: iport->fabric.fdmi_pending: 0x%x",
3729 + iport->fcid, iport->fabric.fdmi_pending);
3774 3730 break;
3775 3731 case FNIC_FRAME_TYPE_FDMI_RHBA:
3732 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
3733 + "Received FDMI RHBA ABTS rsp with oxid: 0x%x", oxid);
3734 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
3735 + "0x%x: iport->fabric.fdmi_pending: 0x%x",
3736 + iport->fcid, iport->fabric.fdmi_pending);
3737 + 
3738 + iport->fabric.fdmi_pending &= ~FDLS_FDMI_REG_HBA_PENDING;
3739 + 
3740 + /* If RPA is still pending, don't turn off ABORT PENDING.
3741 + * We count on the timer to detect the ABTS timeout and take
3742 + * corrective action.
3743 + */
3744 + if (!(iport->fabric.fdmi_pending & FDLS_FDMI_RPA_PENDING))
3745 + iport->fabric.fdmi_pending &= ~FDLS_FDMI_ABORT_PENDING;
3746 + 
3776 3747 fdls_free_oxid(iport, oxid, &iport->active_oxid_fdmi_rhba);
3748 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
3749 + "0x%x: iport->fabric.fdmi_pending: 0x%x",
3750 + iport->fcid, iport->fabric.fdmi_pending);
3777 3751 break;
3778 3752 case FNIC_FRAME_TYPE_FDMI_RPA:
3753 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
3754 + "Received FDMI RPA ABTS rsp with oxid: 0x%x", oxid);
3755 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
3756 + "0x%x: iport->fabric.fdmi_pending: 0x%x",
3757 + iport->fcid, iport->fabric.fdmi_pending);
3758 + 
3759 + iport->fabric.fdmi_pending &= ~FDLS_FDMI_RPA_PENDING;
3760 + 
3761 + /* If RHBA is still pending, don't turn off ABORT PENDING.
3762 + * We count on the timer to detect the ABTS timeout and take
3763 + * corrective action.
3764 + */
3765 + if (!(iport->fabric.fdmi_pending & FDLS_FDMI_REG_HBA_PENDING))
3766 + iport->fabric.fdmi_pending &= ~FDLS_FDMI_ABORT_PENDING;
3767 + 
3779 3768 fdls_free_oxid(iport, oxid, &iport->active_oxid_fdmi_rpa);
3769 + FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
3770 + "0x%x: iport->fabric.fdmi_pending: 0x%x",
3771 + iport->fcid, iport->fabric.fdmi_pending);
3780 3772 break;
3781 3773 default:
3782 3774 FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
··· 3832 3730 break;
3833 3731 }
3834 3732 
3835 - timer_delete_sync(&iport->fabric.fdmi_timer);
3836 - iport->fabric.fdmi_pending &= ~FDLS_FDMI_ABORT_PENDING;
3837 - 
3838 - fdls_send_fdmi_plogi(iport);
3733 + /*
3734 + * Only if ABORT PENDING is off, delete the timer, and if no other
3735 + * operations are pending, retry FDMI.
3736 + * Otherwise, let the timer pop and take the appropriate action.
3737 + */
3738 + if (!(iport->fabric.fdmi_pending & FDLS_FDMI_ABORT_PENDING)) {
3739 + timer_delete_sync(&iport->fabric.fdmi_timer);
3740 + if (!iport->fabric.fdmi_pending)
3741 + fdls_fdmi_retry_plogi(iport);
3742 + }
3839 3743 }
3840 3744 
3841 3745 static void
··· 5080 4972 fdls_delete_tport(iport, tport);
5081 4973 }
5082 4974 
5083 - if ((fnic_fdmi_support == 1) && (iport->fabric.fdmi_pending > 0)) {
5084 - timer_delete_sync(&iport->fabric.fdmi_timer);
5085 - iport->fabric.fdmi_pending = 0;
4975 + if (fnic_fdmi_support == 1) {
4976 + if (iport->fabric.fdmi_pending > 0) {
4977 + timer_delete_sync(&iport->fabric.fdmi_timer);
4978 + iport->fabric.fdmi_pending = 0;
4979 + }
4980 + iport->flags &= ~FNIC_FDMI_ACTIVE;
5086 4981 }
5087 4982 
5088 4983 FNIC_FCS_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
+1 -1
drivers/scsi/fnic/fnic.h
··· 30 30
31 31 #define DRV_NAME "fnic"
32 32 #define DRV_DESCRIPTION "Cisco FCoE HBA Driver"
33 - #define DRV_VERSION "1.8.0.0"
33 + #define DRV_VERSION "1.8.0.2"
34 34 #define PFX DRV_NAME ": "
35 35 #define DFX DRV_NAME "%d: "
36 36
+2
drivers/scsi/fnic/fnic_fcs.c
··· 636 636 unsigned long flags;
637 637
638 638 pa = dma_map_single(&fnic->pdev->dev, frame, frame_len, DMA_TO_DEVICE);
639 + if (dma_mapping_error(&fnic->pdev->dev, pa))
640 + return -ENOMEM;
639 641
640 642 if ((fnic_fc_trace_set_data(fnic->fnic_num,
641 643 FNIC_FC_SEND | 0x80, (char *) frame,
+1
drivers/scsi/fnic/fnic_fdls.h
··· 394 394 bool fdls_delete_tport(struct fnic_iport_s *iport,
395 395 struct fnic_tport_s *tport);
396 396 void fdls_fdmi_timer_callback(struct timer_list *t);
397 + void fdls_fdmi_retry_plogi(struct fnic_iport_s *iport);
397 398
398 399 /* fnic_fcs.c */
399 400 void fnic_fdls_init(struct fnic *fnic, int usefip);
+1 -1
drivers/scsi/fnic/fnic_scsi.c
··· 1046 1046 if (icmnd_cmpl->scsi_status == SAM_STAT_TASK_SET_FULL)
1047 1047 atomic64_inc(&fnic_stats->misc_stats.queue_fulls);
1048 1048
1049 - FNIC_SCSI_DBG(KERN_INFO, fnic->host, fnic->fnic_num,
1049 + FNIC_SCSI_DBG(KERN_DEBUG, fnic->host, fnic->fnic_num,
1050 1050 "xfer_len: %llu", xfer_len);
1051 1051 break;
1052 1052
+5 -1
drivers/scsi/megaraid/megaraid_sas_base.c
··· 5910 5910 const struct cpumask *mask;
5911 5911
5912 5912 if (instance->perf_mode == MR_BALANCED_PERF_MODE) {
5913 - mask = cpumask_of_node(dev_to_node(&instance->pdev->dev));
5913 + int nid = dev_to_node(&instance->pdev->dev);
5914 +
5915 + if (nid == NUMA_NO_NODE)
5916 + nid = 0;
5917 + mask = cpumask_of_node(nid);
5914 5918
5915 5919 for (i = 0; i < instance->low_latency_index_start; i++) {
5916 5920 irq = pci_irq_vector(instance->pdev, i);
+7 -5
drivers/spi/spi-cadence-quadspi.c
··· 1958 1958 goto probe_setup_failed;
1959 1959 }
1960 1960
1961 - ret = devm_pm_runtime_enable(dev);
1962 - if (ret) {
1963 - if (cqspi->rx_chan)
1964 - dma_release_channel(cqspi->rx_chan);
1961 + pm_runtime_enable(dev);
1962 +
1963 + if (cqspi->rx_chan) {
1964 + dma_release_channel(cqspi->rx_chan);
1965 1965 goto probe_setup_failed;
1966 1966 }
1967 1967
··· 1981 1981 return 0;
1982 1982 probe_setup_failed:
1983 1983 cqspi_controller_enable(cqspi, 0);
1984 + pm_runtime_disable(dev);
1984 1985 probe_reset_failed:
1985 1986 if (cqspi->is_jh7110)
1986 1987 cqspi_jh7110_disable_clk(pdev, cqspi);
··· 2000 1999 if (cqspi->rx_chan)
2001 2000 dma_release_channel(cqspi->rx_chan);
2002 2001
2003 - clk_disable_unprepare(cqspi->clk);
2002 + if (pm_runtime_get_sync(&pdev->dev) >= 0)
2003 + clk_disable(cqspi->clk);
2004 2004
2005 2005 if (cqspi->is_jh7110)
2006 2006 cqspi_jh7110_disable_clk(pdev, cqspi);
-14
drivers/spi/spi-tegra210-quad.c
··· 407 407 static void
408 408 tegra_qspi_copy_client_txbuf_to_qspi_txbuf(struct tegra_qspi *tqspi, struct spi_transfer *t)
409 409 {
410 - dma_sync_single_for_cpu(tqspi->dev, tqspi->tx_dma_phys,
411 - tqspi->dma_buf_size, DMA_TO_DEVICE);
412 -
413 410 /*
414 411 * In packed mode, each word in FIFO may contain multiple packets
415 412 * based on bits per word. So all bytes in each FIFO word are valid.
··· 439 442
440 443 tqspi->cur_tx_pos += write_bytes;
441 444 }
442 -
443 - dma_sync_single_for_device(tqspi->dev, tqspi->tx_dma_phys,
444 - tqspi->dma_buf_size, DMA_TO_DEVICE);
445 445 }
446 446
447 447 static void
448 448 tegra_qspi_copy_qspi_rxbuf_to_client_rxbuf(struct tegra_qspi *tqspi, struct spi_transfer *t)
449 449 {
450 - dma_sync_single_for_cpu(tqspi->dev, tqspi->rx_dma_phys,
451 - tqspi->dma_buf_size, DMA_FROM_DEVICE);
452 -
453 450 if (tqspi->is_packed) {
454 451 tqspi->cur_rx_pos += tqspi->curr_dma_words * tqspi->bytes_per_word;
455 452 } else {
··· 469 478
470 479 tqspi->cur_rx_pos += read_bytes;
471 480 }
472 -
473 - dma_sync_single_for_device(tqspi->dev, tqspi->rx_dma_phys,
474 - tqspi->dma_buf_size, DMA_FROM_DEVICE);
475 481 }
476 482
477 483 static void tegra_qspi_dma_complete(void *args)
··· 689 701 return ret;
690 702 }
691 703
692 - dma_sync_single_for_device(tqspi->dev, tqspi->rx_dma_phys,
693 - tqspi->dma_buf_size, DMA_FROM_DEVICE);
694 704 ret = tegra_qspi_start_rx_dma(tqspi, t, len);
695 705 if (ret < 0) {
696 706 dev_err(tqspi->dev, "failed to start RX DMA: %d\n", ret);
+3 -1
drivers/target/target_core_pr.c
··· 1842 1842 }
1843 1843
1844 1844 kmem_cache_free(t10_pr_reg_cache, dest_pr_reg);
1845 - core_scsi3_lunacl_undepend_item(dest_se_deve);
1845 +
1846 + if (dest_se_deve)
1847 + core_scsi3_lunacl_undepend_item(dest_se_deve);
1846 1848
1847 1849 if (is_local)
1848 1850 continue;
+2 -1
drivers/ufs/core/ufshcd.c
··· 7807 7807 hba->silence_err_logs = false;
7808 7808
7809 7809 /* scale up clocks to max frequency before full reinitialization */
7810 - ufshcd_scale_clks(hba, ULONG_MAX, true);
7810 + if (ufshcd_is_clkscaling_supported(hba))
7811 + ufshcd_scale_clks(hba, ULONG_MAX, true);
7811 7812
7812 7813 err = ufshcd_hba_enable(hba);
7813 7814
+4 -1
fs/btrfs/delayed-inode.c
··· 1377 1377
1378 1378 void btrfs_assert_delayed_root_empty(struct btrfs_fs_info *fs_info)
1379 1379 {
1380 - WARN_ON(btrfs_first_delayed_node(fs_info->delayed_root));
1380 + struct btrfs_delayed_node *node = btrfs_first_delayed_node(fs_info->delayed_root);
1381 +
1382 + if (WARN_ON(node))
1383 + refcount_dec(&node->refs);
1381 1384 }
1382 1385
1383 1386 static bool could_end_wait(struct btrfs_delayed_root *delayed_root, int seq)
+21 -6
fs/btrfs/disk-io.c
··· 1835 1835 if (refcount_dec_and_test(&root->refs)) {
1836 1836 if (WARN_ON(!xa_empty(&root->inodes)))
1837 1837 xa_destroy(&root->inodes);
1838 + if (WARN_ON(!xa_empty(&root->delayed_nodes)))
1839 + xa_destroy(&root->delayed_nodes);
1838 1840 WARN_ON(test_bit(BTRFS_ROOT_DEAD_RELOC_TREE, &root->state));
1839 1841 if (root->anon_dev)
1840 1842 free_anon_bdev(root->anon_dev);
··· 2158 2156 found = true;
2159 2157 root = read_tree_root_path(tree_root, path, &key);
2160 2158 if (IS_ERR(root)) {
2161 - if (!btrfs_test_opt(fs_info, IGNOREBADROOTS))
2162 - ret = PTR_ERR(root);
2159 + ret = PTR_ERR(root);
2163 2160 break;
2164 2161 }
2165 2162 set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);
··· 4311 4310 *
4312 4311 * So wait for all ongoing ordered extents to complete and then run
4313 4312 * delayed iputs. This works because once we reach this point no one
4314 - * can either create new ordered extents nor create delayed iputs
4315 - * through some other means.
4313 + * can create new ordered extents, but delayed iputs can still be added
4314 + * by a reclaim worker (see comments further below).
4316 4315 *
4317 4316 * Also note that btrfs_wait_ordered_roots() is not safe here, because
4318 4317 * it waits for BTRFS_ORDERED_COMPLETE to be set on an ordered extent,
··· 4323 4322 btrfs_flush_workqueue(fs_info->endio_write_workers);
4324 4323 /* Ordered extents for free space inodes. */
4325 4324 btrfs_flush_workqueue(fs_info->endio_freespace_worker);
4325 + /*
4326 + * Run delayed iputs in case an async reclaim worker is waiting for them
4327 + * to be run as mentioned above.
4328 + */
4326 4329 btrfs_run_delayed_iputs(fs_info);
4327 - /* There should be no more workload to generate new delayed iputs. */
4328 - set_bit(BTRFS_FS_STATE_NO_DELAYED_IPUT, &fs_info->fs_state);
4329 4330
4330 4331 cancel_work_sync(&fs_info->async_reclaim_work);
4331 4332 cancel_work_sync(&fs_info->async_data_reclaim_work);
4332 4333 cancel_work_sync(&fs_info->preempt_reclaim_work);
4333 4334 cancel_work_sync(&fs_info->em_shrinker_work);
4335 +
4336 + /*
4337 + * Run delayed iputs again because an async reclaim worker may have
4338 + * added new ones if it was flushing delalloc:
4339 + *
4340 + * shrink_delalloc() -> btrfs_start_delalloc_roots() ->
4341 + * start_delalloc_inodes() -> btrfs_add_delayed_iput()
4342 + */
4343 + btrfs_run_delayed_iputs(fs_info);
4344 +
4345 + /* There should be no more workload to generate new delayed iputs. */
4346 + set_bit(BTRFS_FS_STATE_NO_DELAYED_IPUT, &fs_info->fs_state);
4334 4347
4335 4348 /* Cancel or finish ongoing discard work */
4336 4349 btrfs_discard_cleanup(fs_info);
+1 -1
fs/btrfs/extent_io.c
··· 4312 4312 spin_unlock(&eb->refs_lock);
4313 4313 continue;
4314 4314 }
4315 - xa_unlock_irq(&fs_info->buffer_tree);
4316 4315
4317 4316 /*
4318 4317 * If tree ref isn't set then we know the ref on this eb is a
··· 4328 4329 * check the folio private at the end. And
4329 4330 * release_extent_buffer() will release the refs_lock.
4330 4331 */
4332 + xa_unlock_irq(&fs_info->buffer_tree);
4331 4333 release_extent_buffer(eb);
4332 4334 xa_lock_irq(&fs_info->buffer_tree);
4333 4335 }
+12 -4
fs/btrfs/free-space-tree.c
··· 1115 1115 ret = btrfs_search_slot_for_read(extent_root, &key, path, 1, 0);
1116 1116 if (ret < 0)
1117 1117 goto out_locked;
1118 - ASSERT(ret == 0);
1118 + /*
1119 + * If ret is 1 (no key found), it means this is an empty block group,
1120 + * without any extents allocated from it and there's no block group
1121 + * item (key BTRFS_BLOCK_GROUP_ITEM_KEY) located in the extent tree
1122 + * because we are using the block group tree feature, so block group
1123 + * items are stored in the block group tree. It also means there are no
1124 + * extents allocated for block groups with a start offset beyond this
1125 + * block group's end offset (this is the last, highest, block group).
1126 + */
1127 + if (!btrfs_fs_compat_ro(trans->fs_info, BLOCK_GROUP_TREE))
1128 + ASSERT(ret == 0);
1119 1129
1120 1130 start = block_group->start;
1121 1131 end = block_group->start + block_group->length;
1122 - while (1) {
1132 + while (ret == 0) {
1123 1133 btrfs_item_key_to_cpu(path->nodes[0], &key, path->slots[0]);
1124 1134
1125 1135 if (key.type == BTRFS_EXTENT_ITEM_KEY ||
··· 1159 1149 ret = btrfs_next_item(extent_root, path);
1160 1150 if (ret < 0)
1161 1151 goto out_locked;
1162 - if (ret)
1163 - break;
1164 1152 }
1165 1153 if (start < end) {
1166 1154 ret = __add_to_free_space_tree(trans, block_group, path2,
+68 -21
fs/btrfs/inode.c
··· 4250 4250
4251 4251 ret = btrfs_del_inode_ref(trans, root, name, ino, dir_ino, &index);
4252 4252 if (ret) {
4253 - btrfs_info(fs_info,
4254 - "failed to delete reference to %.*s, inode %llu parent %llu",
4255 - name->len, name->name, ino, dir_ino);
4253 + btrfs_crit(fs_info,
4254 + "failed to delete reference to %.*s, root %llu inode %llu parent %llu",
4255 + name->len, name->name, btrfs_root_id(root), ino, dir_ino);
4256 4256 btrfs_abort_transaction(trans, ret);
4257 4257 goto err;
4258 4258 }
··· 8059 8059 int ret;
8060 8060 int ret2;
8061 8061 bool need_abort = false;
8062 + bool logs_pinned = false;
8062 8063 struct fscrypt_name old_fname, new_fname;
8063 8064 struct fscrypt_str *old_name, *new_name;
8064 8065
··· 8183 8182 inode_inc_iversion(new_inode);
8184 8183 simple_rename_timestamp(old_dir, old_dentry, new_dir, new_dentry);
8185 8184
8185 + if (old_ino != BTRFS_FIRST_FREE_OBJECTID &&
8186 + new_ino != BTRFS_FIRST_FREE_OBJECTID) {
8187 + /*
8188 + * If we are renaming in the same directory (and it's not for
8189 + * root entries) pin the log early to prevent any concurrent
8190 + * task from logging the directory after we removed the old
8191 + * entries and before we add the new entries, otherwise that
8192 + * task can sync a log without any entry for the inodes we are
8193 + * renaming and therefore replaying that log, if a power failure
8194 + * happens after syncing the log, would result in deleting the
8195 + * inodes.
8196 + *
8197 + * If the rename affects two different directories, we want to
8198 + * make sure that there's no log commit that contains
8199 + * updates for only one of the directories but not for the
8200 + * other.
8201 + *
8202 + * If we are renaming an entry for a root, we don't care about
8203 + * log updates since we called btrfs_set_log_full_commit().
8204 + */
8205 + btrfs_pin_log_trans(root);
8206 + btrfs_pin_log_trans(dest);
8207 + logs_pinned = true;
8208 + }
8209 +
8186 8210 if (old_dentry->d_parent != new_dentry->d_parent) {
8187 8211 btrfs_record_unlink_dir(trans, BTRFS_I(old_dir),
8188 8212 BTRFS_I(old_inode), true);
··· 8279 8253 BTRFS_I(new_inode)->dir_index = new_idx;
8280 8254
8281 8255 /*
8282 - * Now pin the logs of the roots. We do it to ensure that no other task
8283 - * can sync the logs while we are in progress with the rename, because
8284 - * that could result in an inconsistency in case any of the inodes that
8285 - * are part of this rename operation were logged before.
8256 + * Do the log updates for all inodes.
8257 + *
8258 + * If either entry is for a root we don't need to update the logs since
8259 + * we've called btrfs_set_log_full_commit() before.
8286 8260 */
8287 - if (old_ino != BTRFS_FIRST_FREE_OBJECTID)
8288 - btrfs_pin_log_trans(root);
8289 - if (new_ino != BTRFS_FIRST_FREE_OBJECTID)
8290 - btrfs_pin_log_trans(dest);
8291 -
8292 - /* Do the log updates for all inodes. */
8293 - if (old_ino != BTRFS_FIRST_FREE_OBJECTID)
8261 + if (logs_pinned) {
8294 8262 btrfs_log_new_name(trans, old_dentry, BTRFS_I(old_dir),
8295 8263 old_rename_ctx.index, new_dentry->d_parent);
8296 - if (new_ino != BTRFS_FIRST_FREE_OBJECTID)
8297 8264 btrfs_log_new_name(trans, new_dentry, BTRFS_I(new_dir),
8298 8265 new_rename_ctx.index, old_dentry->d_parent);
8266 + }
8299 8267
8300 - /* Now unpin the logs. */
8301 - if (old_ino != BTRFS_FIRST_FREE_OBJECTID)
8302 - btrfs_end_log_trans(root);
8303 - if (new_ino != BTRFS_FIRST_FREE_OBJECTID)
8304 - btrfs_end_log_trans(dest);
8305 8268 out_fail:
8269 + if (logs_pinned) {
8270 + btrfs_end_log_trans(root);
8271 + btrfs_end_log_trans(dest);
8272 + }
8306 8273 ret2 = btrfs_end_transaction(trans);
8307 8274 ret = ret ? ret : ret2;
8308 8275 out_notrans:
··· 8345 8326 int ret2;
8346 8327 u64 old_ino = btrfs_ino(BTRFS_I(old_inode));
8347 8328 struct fscrypt_name old_fname, new_fname;
8329 + bool logs_pinned = false;
8348 8330
8349 8331 if (btrfs_ino(BTRFS_I(new_dir)) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID)
8350 8332 return -EPERM;
··· 8480 8460 inode_inc_iversion(old_inode);
8481 8461 simple_rename_timestamp(old_dir, old_dentry, new_dir, new_dentry);
8482 8462
8463 + if (old_ino != BTRFS_FIRST_FREE_OBJECTID) {
8464 + /*
8465 + * If we are renaming in the same directory (and it's not a
8466 + * root entry) pin the log to prevent any concurrent task from
8467 + * logging the directory after we removed the old entry and
8468 + * before we add the new entry, otherwise that task can sync
8469 + * a log without any entry for the inode we are renaming and
8470 + * therefore replaying that log, if a power failure happens
8471 + * after syncing the log, would result in deleting the inode.
8472 + *
8473 + * If the rename affects two different directories, we want to
8474 + * make sure that there's no log commit that contains
8475 + * updates for only one of the directories but not for the
8476 + * other.
8477 + *
8478 + * If we are renaming an entry for a root, we don't care about
8479 + * log updates since we called btrfs_set_log_full_commit().
8480 + */
8481 + btrfs_pin_log_trans(root);
8482 + btrfs_pin_log_trans(dest);
8483 + logs_pinned = true;
8484 + }
8485 +
8483 8486 if (old_dentry->d_parent != new_dentry->d_parent)
8484 8487 btrfs_record_unlink_dir(trans, BTRFS_I(old_dir),
8485 8488 BTRFS_I(old_inode), true);
··· 8567 8524 if (old_inode->i_nlink == 1)
8568 8525 BTRFS_I(old_inode)->dir_index = index;
8569 8526
8570 - if (old_ino != BTRFS_FIRST_FREE_OBJECTID)
8527 + if (logs_pinned)
8571 8528 btrfs_log_new_name(trans, old_dentry, BTRFS_I(old_dir),
8572 8529 rename_ctx.index, new_dentry->d_parent);
8573 8530
··· 8583 8540 }
8584 8541 }
8585 8542 out_fail:
8543 + if (logs_pinned) {
8544 + btrfs_end_log_trans(root);
8545 + btrfs_end_log_trans(dest);
8546 + }
8586 8547 ret2 = btrfs_end_transaction(trans);
8587 8548 ret = ret ? ret : ret2;
8588 8549 out_notrans:
+1 -1
fs/btrfs/ioctl.c
··· 3139 3139 return -EPERM;
3140 3140
3141 3141 if (btrfs_fs_incompat(fs_info, EXTENT_TREE_V2)) {
3142 - btrfs_err(fs_info, "scrub is not supported on extent tree v2 yet");
3142 + btrfs_err(fs_info, "scrub: extent tree v2 not yet supported");
3143 3143 return -EINVAL;
3144 3144 }
3145 3145
+26 -27
fs/btrfs/scrub.c
··· 557 557 */
558 558 for (i = 0; i < ipath->fspath->elem_cnt; ++i)
559 559 btrfs_warn_in_rcu(fs_info,
560 - "%s at logical %llu on dev %s, physical %llu, root %llu, inode %llu, offset %llu, length %u, links %u (path: %s)",
560 + "scrub: %s at logical %llu on dev %s, physical %llu root %llu inode %llu offset %llu length %u links %u (path: %s)",
561 561 swarn->errstr, swarn->logical,
562 562 btrfs_dev_name(swarn->dev),
563 563 swarn->physical,
··· 571 571
572 572 err:
573 573 btrfs_warn_in_rcu(fs_info,
574 - "%s at logical %llu on dev %s, physical %llu, root %llu, inode %llu, offset %llu: path resolving failed with ret=%d",
574 + "scrub: %s at logical %llu on dev %s, physical %llu root %llu inode %llu offset %llu: path resolving failed with ret=%d",
575 575 swarn->errstr, swarn->logical,
576 576 btrfs_dev_name(swarn->dev),
577 577 swarn->physical,
··· 596 596
597 597 /* Super block error, no need to search extent tree. */
598 598 if (is_super) {
599 - btrfs_warn_in_rcu(fs_info, "%s on device %s, physical %llu",
599 + btrfs_warn_in_rcu(fs_info, "scrub: %s on device %s, physical %llu",
600 600 errstr, btrfs_dev_name(dev), physical);
601 601 return;
602 602 }
··· 631 631 &ref_level);
632 632 if (ret < 0) {
633 633 btrfs_warn(fs_info,
634 - "failed to resolve tree backref for logical %llu: %d",
635 - swarn.logical, ret);
634 + "scrub: failed to resolve tree backref for logical %llu: %d",
635 + swarn.logical, ret);
636 636 break;
637 637 }
638 638 if (ret > 0)
639 639 break;
640 640 btrfs_warn_in_rcu(fs_info,
641 - "%s at logical %llu on dev %s, physical %llu: metadata %s (level %d) in tree %llu",
641 + "scrub: %s at logical %llu on dev %s, physical %llu: metadata %s (level %d) in tree %llu",
642 642 errstr, swarn.logical, btrfs_dev_name(dev),
643 643 swarn.physical, (ref_level ? "node" : "leaf"),
644 644 ref_level, ref_root);
··· 718 718 scrub_bitmap_set_meta_error(stripe, sector_nr, sectors_per_tree);
719 719 scrub_bitmap_set_error(stripe, sector_nr, sectors_per_tree);
720 720 btrfs_warn_rl(fs_info,
721 - "tree block %llu mirror %u has bad bytenr, has %llu want %llu",
721 + "scrub: tree block %llu mirror %u has bad bytenr, has %llu want %llu",
722 722 logical, stripe->mirror_num,
723 723 btrfs_stack_header_bytenr(header), logical);
724 724 return;
··· 728 728 scrub_bitmap_set_meta_error(stripe, sector_nr, sectors_per_tree);
729 729 scrub_bitmap_set_error(stripe, sector_nr, sectors_per_tree);
730 730 btrfs_warn_rl(fs_info,
731 - "tree block %llu mirror %u has bad fsid, has %pU want %pU",
731 + "scrub: tree block %llu mirror %u has bad fsid, has %pU want %pU",
732 732 logical, stripe->mirror_num,
733 733 header->fsid, fs_info->fs_devices->fsid);
734 734 return;
··· 738 738 scrub_bitmap_set_meta_error(stripe, sector_nr, sectors_per_tree);
739 739 scrub_bitmap_set_error(stripe, sector_nr, sectors_per_tree);
740 740 btrfs_warn_rl(fs_info,
741 - "tree block %llu mirror %u has bad chunk tree uuid, has %pU want %pU",
741 + "scrub: tree block %llu mirror %u has bad chunk tree uuid, has %pU want %pU",
742 742 logical, stripe->mirror_num,
743 743 header->chunk_tree_uuid, fs_info->chunk_tree_uuid);
744 744 return;
··· 760 760 scrub_bitmap_set_meta_error(stripe, sector_nr, sectors_per_tree);
761 761 scrub_bitmap_set_error(stripe, sector_nr, sectors_per_tree);
762 762 btrfs_warn_rl(fs_info,
763 - "tree block %llu mirror %u has bad csum, has " CSUM_FMT " want " CSUM_FMT,
763 + "scrub: tree block %llu mirror %u has bad csum, has " CSUM_FMT " want " CSUM_FMT,
764 764 logical, stripe->mirror_num,
765 765 CSUM_FMT_VALUE(fs_info->csum_size, on_disk_csum),
766 766 CSUM_FMT_VALUE(fs_info->csum_size, calculated_csum));
··· 771 771 scrub_bitmap_set_meta_gen_error(stripe, sector_nr, sectors_per_tree);
772 772 scrub_bitmap_set_error(stripe, sector_nr, sectors_per_tree);
773 773 btrfs_warn_rl(fs_info,
774 - "tree block %llu mirror %u has bad generation, has %llu want %llu",
774 + "scrub: tree block %llu mirror %u has bad generation, has %llu want %llu",
775 775 logical, stripe->mirror_num,
776 776 btrfs_stack_header_generation(header),
777 777 stripe->sectors[sector_nr].generation);
··· 814 814 */
815 815 if (unlikely(sector_nr + sectors_per_tree > stripe->nr_sectors)) {
816 816 btrfs_warn_rl(fs_info,
817 - "tree block at %llu crosses stripe boundary %llu",
817 + "scrub: tree block at %llu crosses stripe boundary %llu",
818 818 stripe->logical +
819 819 (sector_nr << fs_info->sectorsize_bits),
820 820 stripe->logical);
··· 1046 1046 if (repaired) {
1047 1047 if (dev) {
1048 1048 btrfs_err_rl_in_rcu(fs_info,
1049 - "fixed up error at logical %llu on dev %s physical %llu",
1049 + "scrub: fixed up error at logical %llu on dev %s physical %llu",
1050 1050 stripe->logical, btrfs_dev_name(dev),
1051 1051 physical);
1052 1052 } else {
1053 1053 btrfs_err_rl_in_rcu(fs_info,
1054 - "fixed up error at logical %llu on mirror %u",
1054 + "scrub: fixed up error at logical %llu on mirror %u",
1055 1055 stripe->logical, stripe->mirror_num);
1056 1056 }
1057 1057 continue;
··· 1060 1060 /* The remaining are all for unrepaired. */
1061 1061 if (dev) {
1062 1062 btrfs_err_rl_in_rcu(fs_info,
1063 - "unable to fixup (regular) error at logical %llu on dev %s physical %llu",
1063 + "scrub: unable to fixup (regular) error at logical %llu on dev %s physical %llu",
1064 1064 stripe->logical, btrfs_dev_name(dev),
1065 1065 physical);
1066 1066 } else {
1067 1067 btrfs_err_rl_in_rcu(fs_info,
1068 - "unable to fixup (regular) error at logical %llu on mirror %u",
1068 + "scrub: unable to fixup (regular) error at logical %llu on mirror %u",
1069 1069 stripe->logical, stripe->mirror_num);
1070 1070 }
1071 1071
··· 1593 1593 physical,
1594 1594 sctx->write_pointer);
1595 1595 if (ret)
1596 - btrfs_err(fs_info,
1597 - "zoned: failed to recover write pointer");
1596 + btrfs_err(fs_info, "scrub: zoned: failed to recover write pointer");
1598 1597 }
1599 1598 mutex_unlock(&sctx->wr_lock);
1600 1599 btrfs_dev_clear_zone_empty(sctx->wr_tgtdev, physical);
··· 1657 1658 int ret;
1658 1659
1659 1660 if (unlikely(!extent_root || !csum_root)) {
1660 - btrfs_err(fs_info, "no valid extent or csum root for scrub");
1661 + btrfs_err(fs_info, "scrub: no valid extent or csum root found");
1661 1662 return -EUCLEAN;
1662 1663 }
1663 1664 memset(stripe->sectors, 0, sizeof(struct scrub_sector_verification) *
··· 1906 1907 struct btrfs_fs_info *fs_info = stripe->bg->fs_info;
1907 1908
1908 1909 btrfs_err(fs_info,
1909 - "stripe %llu has unrepaired metadata sector at %llu",
1910 + "scrub: stripe %llu has unrepaired metadata sector at logical %llu",
1910 1911 stripe->logical,
1911 1912 stripe->logical + (i << fs_info->sectorsize_bits));
1912 1913 return true;
··· 2166 2167 bitmap_and(&error, &error, &has_extent, stripe->nr_sectors);
2167 2168 if (!bitmap_empty(&error, stripe->nr_sectors)) {
2168 2169 btrfs_err(fs_info,
2169 - "unrepaired sectors detected, full stripe %llu data stripe %u errors %*pbl",
2170 + "scrub: unrepaired sectors detected, full stripe %llu data stripe %u errors %*pbl",
2170 2171 full_stripe_start, i, stripe->nr_sectors,
2171 2172 &error);
2172 2173 ret = -EIO;
··· 2788 2789 ro_set = 0;
2789 2790 } else if (ret == -ETXTBSY) {
2790 2791 btrfs_warn(fs_info,
2791 - "skipping scrub of block group %llu due to active swapfile",
2792 + "scrub: skipping scrub of block group %llu due to active swapfile",
2792 2793 cache->start);
2793 2794 scrub_pause_off(fs_info);
2794 2795 ret = 0;
2795 2796 goto skip_unfreeze;
2796 2797 } else {
2797 - btrfs_warn(fs_info,
2798 - "failed setting block group ro: %d", ret);
2798 + btrfs_warn(fs_info, "scrub: failed setting block group ro: %d",
2799 + ret);
2799 2800 btrfs_unfreeze_block_group(cache);
2800 2801 btrfs_put_block_group(cache);
2801 2802 scrub_pause_off(fs_info);
··· 2891 2892 ret = btrfs_check_super_csum(fs_info, sb);
2892 2893 if (ret != 0) {
2893 2894 btrfs_err_rl(fs_info,
2894 - "super block at physical %llu devid %llu has bad csum",
2895 + "scrub: super block at physical %llu devid %llu has bad csum",
2895 2896 physical, dev->devid);
2896 2897 return -EIO;
2897 2898 }
2898 2899 if (btrfs_super_generation(sb) != generation) {
2899 2900 btrfs_err_rl(fs_info,
2900 - "super block at physical %llu devid %llu has bad generation %llu expect %llu",
2901 + "scrub: super block at physical %llu devid %llu has bad generation %llu expect %llu",
2901 2902 physical, dev->devid,
2902 2903 btrfs_super_generation(sb), generation);
2903 2904 return -EUCLEAN;
··· 3058 3059 !test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state)) {
3059 3060 mutex_unlock(&fs_info->fs_devices->device_list_mutex);
3060 3061 btrfs_err_in_rcu(fs_info,
3061 - "scrub on devid %llu: filesystem on %s is not writable",
3062 + "scrub: devid %llu: filesystem on %s is not writable",
3062 3063 devid, btrfs_dev_name(dev));
3063 3064 ret = -EROFS;
3064 3065 goto out;
+9 -8
fs/btrfs/tree-log.c
··· 668 668 extent_end = ALIGN(start + size,
669 669 fs_info->sectorsize);
670 670 } else {
671 - ret = 0;
672 - goto out;
671 + btrfs_err(fs_info,
672 + "unexpected extent type=%d root=%llu inode=%llu offset=%llu",
673 + found_type, btrfs_root_id(root), key->objectid, key->offset);
674 + return -EUCLEAN;
673 675 }
674 676
675 677 inode = read_one_inode(root, key->objectid);
676 - if (!inode) {
677 - ret = -EIO;
678 - goto out;
679 - }
678 + if (!inode)
679 + return -EIO;
680 680
681 681 /*
682 682 * first check to see if we already have this extent in the
··· 961 961 ret = unlink_inode_for_log_replay(trans, dir, inode, &name);
962 962 out:
963 963 kfree(name.name);
964 - iput(&inode->vfs_inode);
964 + if (inode)
965 + iput(&inode->vfs_inode);
965 966 return ret;
966 967 }
967 968
··· 1177 1176 ret = unlink_inode_for_log_replay(trans,
1178 1177 victim_parent,
1179 1178 inode, &victim_name);
1179 + iput(&victim_parent->vfs_inode);
1180 1180 }
1181 - iput(&victim_parent->vfs_inode);
1182 1181 kfree(victim_name.name);
1183 1182 if (ret)
1184 1183 return ret;
+6
fs/btrfs/volumes.c
··· 3282 3282 device->bytes_used - dev_extent_len);
3283 3283 atomic64_add(dev_extent_len, &fs_info->free_chunk_space);
3284 3284 btrfs_clear_space_info_full(fs_info);
3285 +
3286 + if (list_empty(&device->post_commit_list)) {
3287 + list_add_tail(&device->post_commit_list,
3288 + &trans->transaction->dev_update_list);
3289 + }
3290 +
3285 3291 mutex_unlock(&fs_info->chunk_mutex);
3286 3292 }
3287 3293 }
+72 -14
fs/btrfs/zoned.c
··· 1403 1403 static int btrfs_load_block_group_dup(struct btrfs_block_group *bg,
1404 1404 struct btrfs_chunk_map *map,
1405 1405 struct zone_info *zone_info,
1406 - unsigned long *active)
1406 + unsigned long *active,
1407 + u64 last_alloc)
1407 1408 {
1408 1409 struct btrfs_fs_info *fs_info = bg->fs_info;
1409 1410
··· 1427 1426 zone_info[1].physical);
1428 1427 return -EIO;
1429 1428 }
1430 +
1431 + if (zone_info[0].alloc_offset == WP_CONVENTIONAL)
1432 + zone_info[0].alloc_offset = last_alloc;
1433 +
1434 + if (zone_info[1].alloc_offset == WP_CONVENTIONAL)
1435 + zone_info[1].alloc_offset = last_alloc;
1436 +
1430 1437 if (zone_info[0].alloc_offset != zone_info[1].alloc_offset) {
1431 1438 btrfs_err(bg->fs_info,
1432 1439 "zoned: write pointer offset mismatch of zones in DUP profile");
··· 1454 1446 static int btrfs_load_block_group_raid1(struct btrfs_block_group *bg,
1455 1447 struct btrfs_chunk_map *map,
1456 1448 struct zone_info *zone_info,
1457 - unsigned long *active)
1449 + unsigned long *active,
1450 + u64 last_alloc)
1458 1451 {
1459 1452 struct btrfs_fs_info *fs_info = bg->fs_info;
1460 1453 int i;
··· 1470 1461 bg->zone_capacity = min_not_zero(zone_info[0].capacity, zone_info[1].capacity);
1471 1462
1472 1463 for (i = 0; i < map->num_stripes; i++) {
1473 - if (zone_info[i].alloc_offset == WP_MISSING_DEV ||
1474 - zone_info[i].alloc_offset == WP_CONVENTIONAL)
1464 + if (zone_info[i].alloc_offset == WP_MISSING_DEV)
1475 1465 continue;
1466 +
1467 + if (zone_info[i].alloc_offset == WP_CONVENTIONAL)
1468 + zone_info[i].alloc_offset = last_alloc;
1476 1469
1477 1470 if ((zone_info[0].alloc_offset != zone_info[i].alloc_offset) &&
1478 1471 !btrfs_test_opt(fs_info, DEGRADED)) {
··· 1505 1494 static int btrfs_load_block_group_raid0(struct btrfs_block_group *bg,
1506 1495 struct btrfs_chunk_map *map,
1507 1496 struct zone_info *zone_info,
1508 - unsigned long *active)
1497 + unsigned long *active,
1498 + u64 last_alloc)
1509 1499 {
1510 1500 struct btrfs_fs_info *fs_info = bg->fs_info;
1511 1501
··· 1517 1505 }
1518 1506
1519 1507 for (int i = 0; i < map->num_stripes; i++) {
1520 - if (zone_info[i].alloc_offset == WP_MISSING_DEV ||
1521 - zone_info[i].alloc_offset == WP_CONVENTIONAL)
1508 + if (zone_info[i].alloc_offset == WP_MISSING_DEV)
1522 1509 continue;
1510 +
1511 + if (zone_info[i].alloc_offset == WP_CONVENTIONAL) {
1512 + u64 stripe_nr, full_stripe_nr;
1513 + u64 stripe_offset;
1514 + int stripe_index;
1515 +
1516 + stripe_nr = div64_u64(last_alloc, map->stripe_size);
1517 + stripe_offset = stripe_nr * map->stripe_size;
1518 + full_stripe_nr = div_u64(stripe_nr, map->num_stripes);
1519 + div_u64_rem(stripe_nr, map->num_stripes, &stripe_index);
1520 +
1521 + zone_info[i].alloc_offset =
1522 + full_stripe_nr * map->stripe_size;
1523 +
1524 + if (stripe_index > i)
1525 + zone_info[i].alloc_offset += map->stripe_size;
1526 + else if (stripe_index == i)
1527 + zone_info[i].alloc_offset +=
1528 + (last_alloc - stripe_offset);
1529 + }
1523 1530
1524 1531 if (test_bit(0, active) != test_bit(i, active)) {
1525 1532 if (!btrfs_zone_activate(bg))
··· 1557 1526 static int btrfs_load_block_group_raid10(struct btrfs_block_group *bg,
1558 1527 struct btrfs_chunk_map *map,
1559 1528 struct zone_info *zone_info,
1560 - unsigned long *active)
1529 + unsigned long *active,
1530 + u64 last_alloc)
1561 1531 {
1562 1532 struct btrfs_fs_info *fs_info = bg->fs_info;
1563 1533
··· 1569 1537 }
1570 1538
1571 1539 for (int i = 0; i < map->num_stripes; i++) {
1572 - if (zone_info[i].alloc_offset == WP_MISSING_DEV ||
1573 - zone_info[i].alloc_offset == WP_CONVENTIONAL)
1540 + if (zone_info[i].alloc_offset == WP_MISSING_DEV)
1574 1541 continue;
1575 1542
1576 1543 if (test_bit(0, active) != test_bit(i, active)) {
··· 1578 1547 } else {
1579 1548 if (test_bit(0, active))
1580 1549 set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &bg->runtime_flags);
1550 + }
1551 +
1552 + if (zone_info[i].alloc_offset == WP_CONVENTIONAL) {
1553 + u64 stripe_nr, full_stripe_nr;
1554 + u64 stripe_offset;
1555 + int stripe_index;
1556 +
1557 + stripe_nr = div64_u64(last_alloc, map->stripe_size);
1558 + stripe_offset = stripe_nr * map->stripe_size;
1559 + full_stripe_nr = div_u64(stripe_nr,
1560 + map->num_stripes / map->sub_stripes);
1561 + div_u64_rem(stripe_nr,
1562 + (map->num_stripes / map->sub_stripes),
1563 + &stripe_index);
1564 +
1565 + zone_info[i].alloc_offset =
1566 + full_stripe_nr * map->stripe_size;
1567 +
1568 + if (stripe_index > (i / map->sub_stripes))
1569 + zone_info[i].alloc_offset += map->stripe_size;
1570 + else if (stripe_index == (i / map->sub_stripes))
1571 + zone_info[i].alloc_offset +=
1572 + (last_alloc - stripe_offset);
1581 1573 }
1582 1574
1583 1575 if ((i % map->sub_stripes) == 0) {
··· 1691 1637 ret = btrfs_load_block_group_single(cache, &zone_info[0], active);
1692 1638 break;
1693 1639 case BTRFS_BLOCK_GROUP_DUP:
1694 - ret = btrfs_load_block_group_dup(cache, map, zone_info, active);
1640 + ret = btrfs_load_block_group_dup(cache, map, zone_info, active,
1641 + last_alloc);
1695 1642 break;
1696 1643 case BTRFS_BLOCK_GROUP_RAID1:
1697 1644 case BTRFS_BLOCK_GROUP_RAID1C3:
1698 1645 case BTRFS_BLOCK_GROUP_RAID1C4:
1699 - ret = btrfs_load_block_group_raid1(cache, map, zone_info, active);
1646 + ret = btrfs_load_block_group_raid1(cache, map, zone_info,
1647 + active, last_alloc);
1700 1648 break;
1701 1649 case BTRFS_BLOCK_GROUP_RAID0:
1702 - ret = btrfs_load_block_group_raid0(cache, map, zone_info, active);
1650 + ret = btrfs_load_block_group_raid0(cache, map, zone_info,
1651 + active, last_alloc);
1703 1652 break;
1704 1653 case BTRFS_BLOCK_GROUP_RAID10:
1705 - ret = btrfs_load_block_group_raid10(cache, map, zone_info, active);
1654 + ret = btrfs_load_block_group_raid10(cache, map, zone_info,
1655 + active, last_alloc);
1706 1656 break;
1707 1657 case BTRFS_BLOCK_GROUP_RAID5:
1708 1658 case BTRFS_BLOCK_GROUP_RAID6:
+3
fs/erofs/fileio.c
··· 47 47 48 48 static void erofs_fileio_rq_submit(struct erofs_fileio_rq *rq) 49 49 { 50 + const struct cred *old_cred; 50 51 struct iov_iter iter; 51 52 int ret; 52 53 ··· 61 60 rq->iocb.ki_flags = IOCB_DIRECT; 62 61 iov_iter_bvec(&iter, ITER_DEST, rq->bvecs, rq->bio.bi_vcnt, 63 62 rq->bio.bi_iter.bi_size); 63 + old_cred = override_creds(rq->iocb.ki_filp->f_cred); 64 64 ret = vfs_iocb_iter_read(rq->iocb.ki_filp, &rq->iocb, &iter); 65 + revert_creds(old_cred); 65 66 if (ret != -EIOCBQUEUED) 66 67 erofs_fileio_ki_complete(&rq->iocb, ret); 67 68 }
+4 -6
fs/erofs/zmap.c
··· 597 597 598 598 if (la > map->m_la) { 599 599 r = mid; 600 + if (la > lend) { 601 + DBG_BUGON(1); 602 + return -EFSCORRUPTED; 603 + } 600 604 lend = la; 601 605 } else { 602 606 l = mid + 1; ··· 639 635 } 640 636 } 641 637 map->m_llen = lend - map->m_la; 642 - if (!last && map->m_llen < sb->s_blocksize) { 643 - erofs_err(sb, "extent too small %llu @ offset %llu of nid %llu", 644 - map->m_llen, map->m_la, vi->nid); 645 - DBG_BUGON(1); 646 - return -EFSCORRUPTED; 647 - } 648 638 return 0; 649 639 } 650 640
+38
fs/f2fs/file.c
··· 35 35 #include <trace/events/f2fs.h> 36 36 #include <uapi/linux/f2fs.h> 37 37 38 + static void f2fs_zero_post_eof_page(struct inode *inode, loff_t new_size) 39 + { 40 + loff_t old_size = i_size_read(inode); 41 + 42 + if (old_size >= new_size) 43 + return; 44 + 45 + /* zero or drop pages only in range of [old_size, new_size] */ 46 + truncate_pagecache(inode, old_size); 47 + } 48 + 38 49 static vm_fault_t f2fs_filemap_fault(struct vm_fault *vmf) 39 50 { 40 51 struct inode *inode = file_inode(vmf->vma->vm_file); ··· 114 103 115 104 f2fs_bug_on(sbi, f2fs_has_inline_data(inode)); 116 105 106 + filemap_invalidate_lock(inode->i_mapping); 107 + f2fs_zero_post_eof_page(inode, (folio->index + 1) << PAGE_SHIFT); 108 + filemap_invalidate_unlock(inode->i_mapping); 109 + 117 110 file_update_time(vmf->vma->vm_file); 118 111 filemap_invalidate_lock_shared(inode->i_mapping); 112 + 119 113 folio_lock(folio); 120 114 if (unlikely(folio->mapping != inode->i_mapping || 121 115 folio_pos(folio) > i_size_read(inode) || ··· 1125 1109 f2fs_down_write(&fi->i_gc_rwsem[WRITE]); 1126 1110 filemap_invalidate_lock(inode->i_mapping); 1127 1111 1112 + if (attr->ia_size > old_size) 1113 + f2fs_zero_post_eof_page(inode, attr->ia_size); 1128 1114 truncate_setsize(inode, attr->ia_size); 1129 1115 1130 1116 if (attr->ia_size <= old_size) ··· 1244 1226 ret = f2fs_convert_inline_inode(inode); 1245 1227 if (ret) 1246 1228 return ret; 1229 + 1230 + filemap_invalidate_lock(inode->i_mapping); 1231 + f2fs_zero_post_eof_page(inode, offset + len); 1232 + filemap_invalidate_unlock(inode->i_mapping); 1247 1233 1248 1234 pg_start = ((unsigned long long) offset) >> PAGE_SHIFT; 1249 1235 pg_end = ((unsigned long long) offset + len) >> PAGE_SHIFT; ··· 1532 1510 f2fs_down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]); 1533 1511 filemap_invalidate_lock(inode->i_mapping); 1534 1512 1513 + f2fs_zero_post_eof_page(inode, offset + len); 1514 + 1535 1515 f2fs_lock_op(sbi); 1536 1516 f2fs_drop_extent_tree(inode); 1537 1517 
truncate_pagecache(inode, offset); ··· 1654 1630 ret = filemap_write_and_wait_range(mapping, offset, offset + len - 1); 1655 1631 if (ret) 1656 1632 return ret; 1633 + 1634 + filemap_invalidate_lock(mapping); 1635 + f2fs_zero_post_eof_page(inode, offset + len); 1636 + filemap_invalidate_unlock(mapping); 1657 1637 1658 1638 pg_start = ((unsigned long long) offset) >> PAGE_SHIFT; 1659 1639 pg_end = ((unsigned long long) offset + len) >> PAGE_SHIFT; ··· 1790 1762 /* avoid gc operation during block exchange */ 1791 1763 f2fs_down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]); 1792 1764 filemap_invalidate_lock(mapping); 1765 + 1766 + f2fs_zero_post_eof_page(inode, offset + len); 1793 1767 truncate_pagecache(inode, offset); 1794 1768 1795 1769 while (!ret && idx > pg_start) { ··· 1848 1818 err = f2fs_convert_inline_inode(inode); 1849 1819 if (err) 1850 1820 return err; 1821 + 1822 + filemap_invalidate_lock(inode->i_mapping); 1823 + f2fs_zero_post_eof_page(inode, offset + len); 1824 + filemap_invalidate_unlock(inode->i_mapping); 1851 1825 1852 1826 f2fs_balance_fs(sbi, true); 1853 1827 ··· 4894 4860 err = file_modified(file); 4895 4861 if (err) 4896 4862 return err; 4863 + 4864 + filemap_invalidate_lock(inode->i_mapping); 4865 + f2fs_zero_post_eof_page(inode, iocb->ki_pos + iov_iter_count(from)); 4866 + filemap_invalidate_unlock(inode->i_mapping); 4897 4867 return count; 4898 4868 } 4899 4869
-1
fs/f2fs/node.c
··· 2078 2078 2079 2079 if (!__write_node_folio(folio, false, &submitted, 2080 2080 wbc, do_balance, io_type, NULL)) { 2081 - folio_unlock(folio); 2082 2081 folio_batch_release(&fbatch); 2083 2082 ret = -EIO; 2084 2083 goto out;
+67 -50
fs/namespace.c
··· 2310 2310 return dst_mnt; 2311 2311 } 2312 2312 2313 - /* Caller should check returned pointer for errors */ 2314 - 2315 - struct vfsmount *collect_mounts(const struct path *path) 2313 + static inline bool extend_array(struct path **res, struct path **to_free, 2314 + unsigned n, unsigned *count, unsigned new_count) 2316 2315 { 2317 - struct mount *tree; 2318 - namespace_lock(); 2319 - if (!check_mnt(real_mount(path->mnt))) 2320 - tree = ERR_PTR(-EINVAL); 2321 - else 2322 - tree = copy_tree(real_mount(path->mnt), path->dentry, 2323 - CL_COPY_ALL | CL_PRIVATE); 2324 - namespace_unlock(); 2325 - if (IS_ERR(tree)) 2326 - return ERR_CAST(tree); 2327 - return &tree->mnt; 2316 + struct path *p; 2317 + 2318 + if (likely(n < *count)) 2319 + return true; 2320 + p = kmalloc_array(new_count, sizeof(struct path), GFP_KERNEL); 2321 + if (p && *count) 2322 + memcpy(p, *res, *count * sizeof(struct path)); 2323 + *count = new_count; 2324 + kfree(*to_free); 2325 + *to_free = *res = p; 2326 + return p; 2327 + } 2328 + 2329 + struct path *collect_paths(const struct path *path, 2330 + struct path *prealloc, unsigned count) 2331 + { 2332 + struct mount *root = real_mount(path->mnt); 2333 + struct mount *child; 2334 + struct path *res = prealloc, *to_free = NULL; 2335 + unsigned n = 0; 2336 + 2337 + guard(rwsem_read)(&namespace_sem); 2338 + 2339 + if (!check_mnt(root)) 2340 + return ERR_PTR(-EINVAL); 2341 + if (!extend_array(&res, &to_free, 0, &count, 32)) 2342 + return ERR_PTR(-ENOMEM); 2343 + res[n++] = *path; 2344 + list_for_each_entry(child, &root->mnt_mounts, mnt_child) { 2345 + if (!is_subdir(child->mnt_mountpoint, path->dentry)) 2346 + continue; 2347 + for (struct mount *m = child; m; m = next_mnt(m, child)) { 2348 + if (!extend_array(&res, &to_free, n, &count, 2 * count)) 2349 + return ERR_PTR(-ENOMEM); 2350 + res[n].mnt = &m->mnt; 2351 + res[n].dentry = m->mnt.mnt_root; 2352 + n++; 2353 + } 2354 + } 2355 + if (!extend_array(&res, &to_free, n, &count, count + 1)) 2356 + 
return ERR_PTR(-ENOMEM); 2357 + memset(res + n, 0, (count - n) * sizeof(struct path)); 2358 + for (struct path *p = res; p->mnt; p++) 2359 + path_get(p); 2360 + return res; 2361 + } 2362 + 2363 + void drop_collected_paths(struct path *paths, struct path *prealloc) 2364 + { 2365 + for (struct path *p = paths; p->mnt; p++) 2366 + path_put(p); 2367 + if (paths != prealloc) 2368 + kfree(paths); 2328 2369 } 2329 2370 2330 2371 static void free_mnt_ns(struct mnt_namespace *); ··· 2440 2399 /* Make sure we notice when we leak mounts. */ 2441 2400 VFS_WARN_ON_ONCE(!mnt_ns_empty(ns)); 2442 2401 free_mnt_ns(ns); 2443 - } 2444 - 2445 - void drop_collected_mounts(struct vfsmount *mnt) 2446 - { 2447 - namespace_lock(); 2448 - lock_mount_hash(); 2449 - umount_tree(real_mount(mnt), 0); 2450 - unlock_mount_hash(); 2451 - namespace_unlock(); 2452 2402 } 2453 2403 2454 2404 static bool __has_locked_children(struct mount *mnt, struct dentry *dentry) ··· 2542 2510 return &new_mnt->mnt; 2543 2511 } 2544 2512 EXPORT_SYMBOL_GPL(clone_private_mount); 2545 - 2546 - int iterate_mounts(int (*f)(struct vfsmount *, void *), void *arg, 2547 - struct vfsmount *root) 2548 - { 2549 - struct mount *mnt; 2550 - int res = f(root, arg); 2551 - if (res) 2552 - return res; 2553 - list_for_each_entry(mnt, &real_mount(root)->mnt_list, mnt_list) { 2554 - res = f(&mnt->mnt, arg); 2555 - if (res) 2556 - return res; 2557 - } 2558 - return 0; 2559 - } 2560 2513 2561 2514 static void lock_mnt_tree(struct mount *mnt) 2562 2515 { ··· 2768 2751 hlist_for_each_entry_safe(child, n, &tree_list, mnt_hash) { 2769 2752 struct mount *q; 2770 2753 hlist_del_init(&child->mnt_hash); 2771 - q = __lookup_mnt(&child->mnt_parent->mnt, 2772 - child->mnt_mountpoint); 2773 - if (q) 2774 - mnt_change_mountpoint(child, smp, q); 2775 2754 /* Notice when we are propagating across user namespaces */ 2776 2755 if (child->mnt_parent->mnt_ns->user_ns != user_ns) 2777 2756 lock_mnt_tree(child); 2778 2757 child->mnt.mnt_flags &= 
~MNT_LOCKED; 2758 + q = __lookup_mnt(&child->mnt_parent->mnt, 2759 + child->mnt_mountpoint); 2760 + if (q) 2761 + mnt_change_mountpoint(child, smp, q); 2779 2762 commit_tree(child); 2780 2763 } 2781 2764 put_mountpoint(smp); ··· 5307 5290 kattr.kflags |= MOUNT_KATTR_RECURSE; 5308 5291 5309 5292 ret = wants_mount_setattr(uattr, usize, &kattr); 5310 - if (ret < 0) 5311 - return ret; 5312 - 5313 - if (ret) { 5293 + if (ret > 0) { 5314 5294 ret = do_mount_setattr(&file->f_path, &kattr); 5315 - if (ret) 5316 - return ret; 5317 - 5318 5295 finish_mount_kattr(&kattr); 5319 5296 } 5297 + if (ret) 5298 + return ret; 5320 5299 } 5321 5300 5322 5301 fd = get_unused_fd_flags(flags & O_CLOEXEC); ··· 6275 6262 { 6276 6263 if (!refcount_dec_and_test(&ns->ns.count)) 6277 6264 return; 6278 - drop_collected_mounts(&ns->root->mnt); 6265 + namespace_lock(); 6266 + lock_mount_hash(); 6267 + umount_tree(ns->root, 0); 6268 + unlock_mount_hash(); 6269 + namespace_unlock(); 6279 6270 free_mnt_ns(ns); 6280 6271 } 6281 6272
+1
fs/nfsd/nfs4callback.c
··· 1409 1409 out: 1410 1410 if (!rcl->__nr_referring_calls) { 1411 1411 cb->cb_nr_referring_call_list--; 1412 + list_del(&rcl->__list); 1412 1413 kfree(rcl); 1413 1414 } 1414 1415 }
+2 -3
fs/nfsd/nfsctl.c
··· 1611 1611 */ 1612 1612 int nfsd_nl_threads_set_doit(struct sk_buff *skb, struct genl_info *info) 1613 1613 { 1614 - int *nthreads, count = 0, nrpools, i, ret = -EOPNOTSUPP, rem; 1614 + int *nthreads, nrpools = 0, i, ret = -EOPNOTSUPP, rem; 1615 1615 struct net *net = genl_info_net(info); 1616 1616 struct nfsd_net *nn = net_generic(net, nfsd_net_id); 1617 1617 const struct nlattr *attr; ··· 1623 1623 /* count number of SERVER_THREADS values */ 1624 1624 nlmsg_for_each_attr(attr, info->nlhdr, GENL_HDRLEN, rem) { 1625 1625 if (nla_type(attr) == NFSD_A_SERVER_THREADS) 1626 - count++; 1626 + nrpools++; 1627 1627 } 1628 1628 1629 1629 mutex_lock(&nfsd_mutex); 1630 1630 1631 - nrpools = max(count, nfsd_nrpools(net)); 1632 1631 nthreads = kcalloc(nrpools, sizeof(int), GFP_KERNEL); 1633 1632 if (!nthreads) { 1634 1633 ret = -ENOMEM;
-2
fs/pnode.h
··· 28 28 #define CL_SHARED_TO_SLAVE 0x20 29 29 #define CL_COPY_MNT_NS_FILE 0x40 30 30 31 - #define CL_COPY_ALL (CL_COPY_UNBINDABLE | CL_COPY_MNT_NS_FILE) 32 - 33 31 static inline void set_mnt_shared(struct mount *mnt) 34 32 { 35 33 mnt->mnt.mnt_flags &= ~MNT_SHARED_MASK;
+9 -4
fs/resctrl/ctrlmondata.c
··· 594 594 struct rmid_read rr = {0}; 595 595 struct rdt_mon_domain *d; 596 596 struct rdtgroup *rdtgrp; 597 + int domid, cpu, ret = 0; 597 598 struct rdt_resource *r; 599 + struct cacheinfo *ci; 598 600 struct mon_data *md; 599 - int domid, ret = 0; 600 601 601 602 rdtgrp = rdtgroup_kn_lock_live(of->kn); 602 603 if (!rdtgrp) { ··· 624 623 * one that matches this cache id. 625 624 */ 626 625 list_for_each_entry(d, &r->mon_domains, hdr.list) { 627 - if (d->ci->id == domid) { 628 - rr.ci = d->ci; 626 + if (d->ci_id == domid) { 627 + rr.ci_id = d->ci_id; 628 + cpu = cpumask_any(&d->hdr.cpu_mask); 629 + ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE); 630 + if (!ci) 631 + continue; 629 632 mon_event_read(&rr, r, NULL, rdtgrp, 630 - &d->ci->shared_cpu_map, evtid, false); 633 + &ci->shared_cpu_map, evtid, false); 631 634 goto checkresult; 632 635 } 633 636 }
+2 -2
fs/resctrl/internal.h
··· 98 98 * domains in @r sharing L3 @ci.id 99 99 * @evtid: Which monitor event to read. 100 100 * @first: Initialize MBM counter when true. 101 - * @ci: Cacheinfo for L3. Only set when @d is NULL. Used when summing domains. 101 + * @ci_id: Cacheinfo id for L3. Only set when @d is NULL. Used when summing domains. 102 102 * @err: Error encountered when reading counter. 103 103 * @val: Returned value of event counter. If @rgrp is a parent resource group, 104 104 * @val includes the sum of event counts from its child resource groups. ··· 112 112 struct rdt_mon_domain *d; 113 113 enum resctrl_event_id evtid; 114 114 bool first; 115 - struct cacheinfo *ci; 115 + unsigned int ci_id; 116 116 int err; 117 117 u64 val; 118 118 void *arch_mon_ctx;
+4 -2
fs/resctrl/monitor.c
··· 361 361 { 362 362 int cpu = smp_processor_id(); 363 363 struct rdt_mon_domain *d; 364 + struct cacheinfo *ci; 364 365 struct mbm_state *m; 365 366 int err, ret; 366 367 u64 tval = 0; ··· 389 388 } 390 389 391 390 /* Summing domains that share a cache, must be on a CPU for that cache. */ 392 - if (!cpumask_test_cpu(cpu, &rr->ci->shared_cpu_map)) 391 + ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE); 392 + if (!ci || ci->id != rr->ci_id) 393 393 return -EINVAL; 394 394 395 395 /* ··· 402 400 */ 403 401 ret = -EINVAL; 404 402 list_for_each_entry(d, &rr->r->mon_domains, hdr.list) { 405 - if (d->ci->id != rr->ci->id) 403 + if (d->ci_id != rr->ci_id) 406 404 continue; 407 405 err = resctrl_arch_rmid_read(rr->r, d, closid, rmid, 408 406 rr->evtid, &tval, rr->arch_mon_ctx);
+3 -3
fs/resctrl/rdtgroup.c
··· 3036 3036 char name[32]; 3037 3037 3038 3038 snc_mode = r->mon_scope == RESCTRL_L3_NODE; 3039 - sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci->id : d->hdr.id); 3039 + sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : d->hdr.id); 3040 3040 if (snc_mode) 3041 3041 sprintf(subname, "mon_sub_%s_%02d", r->name, d->hdr.id); 3042 3042 ··· 3061 3061 return -EPERM; 3062 3062 3063 3063 list_for_each_entry(mevt, &r->evt_list, list) { 3064 - domid = do_sum ? d->ci->id : d->hdr.id; 3064 + domid = do_sum ? d->ci_id : d->hdr.id; 3065 3065 priv = mon_get_kn_priv(r->rid, domid, mevt, do_sum); 3066 3066 if (WARN_ON_ONCE(!priv)) 3067 3067 return -EINVAL; ··· 3089 3089 lockdep_assert_held(&rdtgroup_mutex); 3090 3090 3091 3091 snc_mode = r->mon_scope == RESCTRL_L3_NODE; 3092 - sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci->id : d->hdr.id); 3092 + sprintf(name, "mon_%s_%02d", r->name, snc_mode ? d->ci_id : d->hdr.id); 3093 3093 kn = kernfs_find_and_get(parent_kn, name); 3094 3094 if (kn) { 3095 3095 /*
+12 -2
fs/smb/client/cached_dir.c
··· 509 509 spin_lock(&cfids->cfid_list_lock); 510 510 list_for_each_entry(cfid, &cfids->entries, entry) { 511 511 tmp_list = kmalloc(sizeof(*tmp_list), GFP_ATOMIC); 512 - if (tmp_list == NULL) 513 - break; 512 + if (tmp_list == NULL) { 513 + /* 514 + * If the malloc() fails, we won't drop all 515 + * dentries, and unmounting is likely to trigger 516 + * a 'Dentry still in use' error. 517 + */ 518 + cifs_tcon_dbg(VFS, "Out of memory while dropping dentries\n"); 519 + spin_unlock(&cfids->cfid_list_lock); 520 + spin_unlock(&cifs_sb->tlink_tree_lock); 521 + goto done; 522 + } 514 523 spin_lock(&cfid->fid_lock); 515 524 tmp_list->dentry = cfid->dentry; 516 525 cfid->dentry = NULL; ··· 531 522 } 532 523 spin_unlock(&cifs_sb->tlink_tree_lock); 533 524 525 + done: 534 526 list_for_each_entry_safe(tmp_list, q, &entry, entry) { 535 527 list_del(&tmp_list->entry); 536 528 dput(tmp_list->dentry);
+1 -1
fs/smb/client/cached_dir.h
··· 26 26 * open file instance. 27 27 */ 28 28 struct mutex de_mutex; 29 - int pos; /* Expected ctx->pos */ 29 + loff_t pos; /* Expected ctx->pos */ 30 30 struct list_head entries; 31 31 }; 32 32
+1 -1
fs/smb/client/cifs_debug.c
··· 1105 1105 if ((count < 1) || (count > 11)) 1106 1106 return -EINVAL; 1107 1107 1108 - memset(flags_string, 0, 12); 1108 + memset(flags_string, 0, sizeof(flags_string)); 1109 1109 1110 1110 if (copy_from_user(flags_string, buffer, count)) 1111 1111 return -EFAULT;
+1 -1
fs/smb/client/cifs_ioctl.h
··· 61 61 struct smb3_key_debug_info { 62 62 __u64 Suid; 63 63 __u16 cipher_type; 64 - __u8 auth_key[16]; /* SMB2_NTLMV2_SESSKEY_SIZE */ 64 + __u8 auth_key[SMB2_NTLMV2_SESSKEY_SIZE]; 65 65 __u8 smb3encryptionkey[SMB3_SIGN_KEY_SIZE]; 66 66 __u8 smb3decryptionkey[SMB3_SIGN_KEY_SIZE]; 67 67 } __packed;
+1
fs/smb/client/connect.c
··· 4199 4199 return 0; 4200 4200 } 4201 4201 4202 + server->lstrp = jiffies; 4202 4203 server->tcpStatus = CifsInNegotiate; 4203 4204 spin_unlock(&server->srv_lock); 4204 4205
+6 -2
fs/smb/client/file.c
··· 52 52 struct netfs_io_stream *stream = &req->rreq.io_streams[subreq->stream_nr]; 53 53 struct TCP_Server_Info *server; 54 54 struct cifsFileInfo *open_file = req->cfile; 55 + struct cifs_sb_info *cifs_sb = CIFS_SB(wdata->rreq->inode->i_sb); 55 56 size_t wsize = req->rreq.wsize; 56 57 int rc; 57 58 ··· 63 62 64 63 server = cifs_pick_channel(tlink_tcon(open_file->tlink)->ses); 65 64 wdata->server = server; 65 + 66 + if (cifs_sb->ctx->wsize == 0) 67 + cifs_negotiate_wsize(server, cifs_sb->ctx, 68 + tlink_tcon(req->cfile->tlink)); 66 69 67 70 retry: 68 71 if (open_file->invalidHandle) { ··· 165 160 server = cifs_pick_channel(tlink_tcon(req->cfile->tlink)->ses); 166 161 rdata->server = server; 167 162 168 - if (cifs_sb->ctx->rsize == 0) { 163 + if (cifs_sb->ctx->rsize == 0) 169 164 cifs_negotiate_rsize(server, cifs_sb->ctx, 170 165 tlink_tcon(req->cfile->tlink)); 171 - } 172 166 173 167 rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, 174 168 &size, &rdata->credits);
+1 -1
fs/smb/client/ioctl.c
··· 506 506 le16_to_cpu(tcon->ses->server->cipher_type); 507 507 pkey_inf.Suid = tcon->ses->Suid; 508 508 memcpy(pkey_inf.auth_key, tcon->ses->auth_key.response, 509 - 16 /* SMB2_NTLMV2_SESSKEY_SIZE */); 509 + SMB2_NTLMV2_SESSKEY_SIZE); 510 510 memcpy(pkey_inf.smb3decryptionkey, 511 511 tcon->ses->smb3decryptionkey, SMB3_SIGN_KEY_SIZE); 512 512 memcpy(pkey_inf.smb3encryptionkey,
-1
fs/smb/client/reparse.c
··· 1172 1172 if (!have_xattr_dev && (tag == IO_REPARSE_TAG_LX_CHR || tag == IO_REPARSE_TAG_LX_BLK)) 1173 1173 return false; 1174 1174 1175 - fattr->cf_dtype = S_DT(fattr->cf_mode); 1176 1175 return true; 1177 1176 } 1178 1177
+1 -2
fs/smb/client/sess.c
··· 498 498 ctx->domainauto = ses->domainAuto; 499 499 ctx->domainname = ses->domainName; 500 500 501 - /* no hostname for extra channels */ 502 - ctx->server_hostname = ""; 501 + ctx->server_hostname = ses->server->hostname; 503 502 504 503 ctx->username = ses->user_name; 505 504 ctx->password = ses->password;
+3 -2
fs/smb/client/smbdirect.c
··· 2589 2589 size_t fsize = folioq_folio_size(folioq, slot); 2590 2590 2591 2591 if (offset < fsize) { 2592 - size_t part = umin(maxsize - ret, fsize - offset); 2592 + size_t part = umin(maxsize, fsize - offset); 2593 2593 2594 2594 if (!smb_set_sge(rdma, folio_page(folio, 0), offset, part)) 2595 2595 return -EIO; 2596 2596 2597 2597 offset += part; 2598 2598 ret += part; 2599 + maxsize -= part; 2599 2600 } 2600 2601 2601 2602 if (offset >= fsize) { ··· 2611 2610 slot = 0; 2612 2611 } 2613 2612 } 2614 - } while (rdma->nr_sge < rdma->max_sge || maxsize > 0); 2613 + } while (rdma->nr_sge < rdma->max_sge && maxsize > 0); 2615 2614 2616 2615 iter->folioq = folioq; 2617 2616 iter->folioq_slot = slot;
+2
include/crypto/hash.h
··· 202 202 #define HASH_REQUEST_CLONE(name, gfp) \ 203 203 hash_request_clone(name, sizeof(__##name##_req), gfp) 204 204 205 + #define CRYPTO_HASH_STATESIZE(coresize, blocksize) (coresize + blocksize + 1) 206 + 205 207 /** 206 208 * struct shash_alg - synchronous message digest definition 207 209 * @init: see struct ahash_alg
+4 -2
include/crypto/internal/simd.h
··· 44 44 * 45 45 * This delegates to may_use_simd(), except that this also returns false if SIMD 46 46 * in crypto code has been temporarily disabled on this CPU by the crypto 47 - * self-tests, in order to test the no-SIMD fallback code. 47 + * self-tests, in order to test the no-SIMD fallback code. This override is 48 + * currently limited to configurations where the "full" self-tests are enabled, 49 + * because it might be a bit too invasive to be part of the "fast" self-tests. 48 50 */ 49 - #ifdef CONFIG_CRYPTO_SELFTESTS 51 + #ifdef CONFIG_CRYPTO_SELFTESTS_FULL 50 52 DECLARE_PER_CPU(bool, crypto_simd_disabled_for_test); 51 53 #define crypto_simd_usable() \ 52 54 (may_use_simd() && !this_cpu_read(crypto_simd_disabled_for_test))
+4
include/crypto/md5.h
··· 2 2 #ifndef _CRYPTO_MD5_H 3 3 #define _CRYPTO_MD5_H 4 4 5 + #include <crypto/hash.h> 5 6 #include <linux/types.h> 6 7 7 8 #define MD5_DIGEST_SIZE 16 ··· 15 14 #define MD5_H1 0xefcdab89UL 16 15 #define MD5_H2 0x98badcfeUL 17 16 #define MD5_H3 0x10325476UL 17 + 18 + #define CRYPTO_MD5_STATESIZE \ 19 + CRYPTO_HASH_STATESIZE(MD5_STATE_SIZE, MD5_HMAC_BLOCK_SIZE) 18 20 19 21 extern const u8 md5_zero_message_hash[MD5_DIGEST_SIZE]; 20 22
+2 -4
include/linux/mount.h
··· 116 116 extern int may_umount(struct vfsmount *); 117 117 int do_mount(const char *, const char __user *, 118 118 const char *, unsigned long, void *); 119 - extern struct vfsmount *collect_mounts(const struct path *); 120 - extern void drop_collected_mounts(struct vfsmount *); 121 - extern int iterate_mounts(int (*)(struct vfsmount *, void *), void *, 122 - struct vfsmount *); 119 + extern struct path *collect_paths(const struct path *, struct path *, unsigned); 120 + extern void drop_collected_paths(struct path *, struct path *); 123 121 extern void kern_unmount_array(struct vfsmount *mnt[], unsigned int num); 124 122 125 123 extern int cifs_root_data(char **dev, char **opts);
+1 -1
include/linux/mtd/partitions.h
··· 108 108 deregister_mtd_parser) 109 109 110 110 int mtd_add_partition(struct mtd_info *master, const char *name, 111 - long long offset, long long length, struct mtd_info **part); 111 + long long offset, long long length); 112 112 int mtd_del_partition(struct mtd_info *master, int partno); 113 113 uint64_t mtd_get_device_size(const struct mtd_info *mtd); 114 114
+6 -4
include/linux/mtd/spinand.h
··· 113 113 SPI_MEM_DTR_OP_DATA_IN(len, buf, 2), \ 114 114 SPI_MEM_OP_MAX_FREQ(freq)) 115 115 116 - #define SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(addr, ndummy, buf, len) \ 116 + #define SPINAND_PAGE_READ_FROM_CACHE_1S_2S_2S_OP(addr, ndummy, buf, len, ...) \ 117 117 SPI_MEM_OP(SPI_MEM_OP_CMD(0xbb, 1), \ 118 118 SPI_MEM_OP_ADDR(2, addr, 2), \ 119 119 SPI_MEM_OP_DUMMY(ndummy, 2), \ 120 - SPI_MEM_OP_DATA_IN(len, buf, 2)) 120 + SPI_MEM_OP_DATA_IN(len, buf, 2), \ 121 + SPI_MEM_OP_MAX_FREQ(__VA_ARGS__ + 0)) 121 122 122 123 #define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_2S_2S_OP(addr, ndummy, buf, len) \ 123 124 SPI_MEM_OP(SPI_MEM_OP_CMD(0xbb, 1), \ ··· 152 151 SPI_MEM_DTR_OP_DATA_IN(len, buf, 4), \ 153 152 SPI_MEM_OP_MAX_FREQ(freq)) 154 153 155 - #define SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(addr, ndummy, buf, len) \ 154 + #define SPINAND_PAGE_READ_FROM_CACHE_1S_4S_4S_OP(addr, ndummy, buf, len, ...) \ 156 155 SPI_MEM_OP(SPI_MEM_OP_CMD(0xeb, 1), \ 157 156 SPI_MEM_OP_ADDR(2, addr, 4), \ 158 157 SPI_MEM_OP_DUMMY(ndummy, 4), \ 159 - SPI_MEM_OP_DATA_IN(len, buf, 4)) 158 + SPI_MEM_OP_DATA_IN(len, buf, 4), \ 159 + SPI_MEM_OP_MAX_FREQ(__VA_ARGS__ + 0)) 160 160 161 161 #define SPINAND_PAGE_READ_FROM_CACHE_3A_1S_4S_4S_OP(addr, ndummy, buf, len) \ 162 162 SPI_MEM_OP(SPI_MEM_OP_CMD(0xeb, 1), \
+40 -2
include/linux/perf_event.h
··· 635 635 unsigned long size; 636 636 }; 637 637 638 - /** 639 - * enum perf_event_state - the states of an event: 638 + /* 639 + * The normal states are: 640 + * 641 + * ACTIVE --. 642 + * ^ | 643 + * | | 644 + * sched_{in,out}() | 645 + * | | 646 + * v | 647 + * ,---> INACTIVE --+ <-. 648 + * | | | 649 + * | {dis,en}able() 650 + * sched_in() | | 651 + * | OFF <--' --+ 652 + * | | 653 + * `---> ERROR ------' 654 + * 655 + * That is: 656 + * 657 + * sched_in: INACTIVE -> {ACTIVE,ERROR} 658 + * sched_out: ACTIVE -> INACTIVE 659 + * disable: {ACTIVE,INACTIVE} -> OFF 660 + * enable: {OFF,ERROR} -> INACTIVE 661 + * 662 + * Where {OFF,ERROR} are disabled states. 663 + * 664 + * Then we have the {EXIT,REVOKED,DEAD} states which are various shades of 665 + * defunct events: 666 + * 667 + * - EXIT means task that the even was assigned to died, but child events 668 + * still live, and further children can still be created. But the event 669 + * itself will never be active again. It can only transition to 670 + * {REVOKED,DEAD}; 671 + * 672 + * - REVOKED means the PMU the event was associated with is gone; all 673 + * functionality is stopped but the event is still alive. Can only 674 + * transition to DEAD; 675 + * 676 + * - DEAD event really is DYING tearing down state and freeing bits. 677 + * 640 678 */ 641 679 enum perf_event_state { 642 680 PERF_EVENT_STATE_DEAD = -5,
+2 -2
include/linux/resctrl.h
··· 159 159 /** 160 160 * struct rdt_mon_domain - group of CPUs sharing a resctrl monitor resource 161 161 * @hdr: common header for different domain types 162 - * @ci: cache info for this domain 162 + * @ci_id: cache info id for this domain 163 163 * @rmid_busy_llc: bitmap of which limbo RMIDs are above threshold 164 164 * @mbm_total: saved state for MBM total bandwidth 165 165 * @mbm_local: saved state for MBM local bandwidth ··· 170 170 */ 171 171 struct rdt_mon_domain { 172 172 struct rdt_domain_hdr hdr; 173 - struct cacheinfo *ci; 173 + unsigned int ci_id; 174 174 unsigned long *rmid_busy_llc; 175 175 struct mbm_state *mbm_total; 176 176 struct mbm_state *mbm_local;
+2
include/net/bluetooth/hci_core.h
··· 29 29 #include <linux/idr.h> 30 30 #include <linux/leds.h> 31 31 #include <linux/rculist.h> 32 + #include <linux/srcu.h> 32 33 33 34 #include <net/bluetooth/hci.h> 34 35 #include <net/bluetooth/hci_drv.h> ··· 348 347 349 348 struct hci_dev { 350 349 struct list_head list; 350 + struct srcu_struct srcu; 351 351 struct mutex lock; 352 352 353 353 struct ida unset_handle_ida;
-18
include/trace/events/erofs.h
··· 211 211 show_mflags(__entry->mflags), __entry->ret) 212 212 ); 213 213 214 - TRACE_EVENT(erofs_destroy_inode, 215 - TP_PROTO(struct inode *inode), 216 - 217 - TP_ARGS(inode), 218 - 219 - TP_STRUCT__entry( 220 - __field( dev_t, dev ) 221 - __field( erofs_nid_t, nid ) 222 - ), 223 - 224 - TP_fast_assign( 225 - __entry->dev = inode->i_sb->s_dev; 226 - __entry->nid = EROFS_I(inode)->nid; 227 - ), 228 - 229 - TP_printk("dev = (%d,%d), nid = %llu", show_dev_nid(__entry)) 230 - ); 231 - 232 214 #endif /* _TRACE_EROFS_H */ 233 215 234 216 /* This part must be outside protection */
+2 -2
include/uapi/linux/bits.h
··· 4 4 #ifndef _UAPI_LINUX_BITS_H 5 5 #define _UAPI_LINUX_BITS_H 6 6 7 - #define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (__BITS_PER_LONG - 1 - (h)))) 7 + #define __GENMASK(h, l) (((~_UL(0)) << (l)) & (~_UL(0) >> (BITS_PER_LONG - 1 - (h)))) 8 8 9 - #define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (__BITS_PER_LONG_LONG - 1 - (h)))) 9 + #define __GENMASK_ULL(h, l) (((~_ULL(0)) << (l)) & (~_ULL(0) >> (BITS_PER_LONG_LONG - 1 - (h)))) 10 10 11 11 #define __GENMASK_U128(h, l) \ 12 12 ((_BIT128((h)) << 1) - (_BIT128(l)))
+22
include/uapi/linux/kvm.h
··· 178 178 #define KVM_EXIT_NOTIFY 37 179 179 #define KVM_EXIT_LOONGARCH_IOCSR 38 180 180 #define KVM_EXIT_MEMORY_FAULT 39 181 + #define KVM_EXIT_TDX 40 181 182 182 183 /* For KVM_EXIT_INTERNAL_ERROR */ 183 184 /* Emulate instruction failed. */ ··· 448 447 __u64 gpa; 449 448 __u64 size; 450 449 } memory_fault; 450 + /* KVM_EXIT_TDX */ 451 + struct { 452 + __u64 flags; 453 + __u64 nr; 454 + union { 455 + struct { 456 + __u64 ret; 457 + __u64 data[5]; 458 + } unknown; 459 + struct { 460 + __u64 ret; 461 + __u64 gpa; 462 + __u64 size; 463 + } get_quote; 464 + struct { 465 + __u64 ret; 466 + __u64 leaf; 467 + __u64 r11, r12, r13, r14; 468 + } get_tdvmcall_info; 469 + }; 470 + } tdx; 451 471 /* Fix the size of the union. */ 452 472 char padding[256]; 453 473 };
+3 -3
include/uapi/linux/mptcp_pm.h
··· 27 27 * token, rem_id. 28 28 * @MPTCP_EVENT_SUB_ESTABLISHED: A new subflow has been established. 'error' 29 29 * should not be set. Attributes: token, family, loc_id, rem_id, saddr4 | 30 - * saddr6, daddr4 | daddr6, sport, dport, backup, if_idx [, error]. 30 + * saddr6, daddr4 | daddr6, sport, dport, backup, if-idx [, error]. 31 31 * @MPTCP_EVENT_SUB_CLOSED: A subflow has been closed. An error (copy of 32 32 * sk_err) could be set if an error has been detected for this subflow. 33 33 * Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | 34 - * daddr6, sport, dport, backup, if_idx [, error]. 34 + * daddr6, sport, dport, backup, if-idx [, error]. 35 35 * @MPTCP_EVENT_SUB_PRIORITY: The priority of a subflow has changed. 'error' 36 36 * should not be set. Attributes: token, family, loc_id, rem_id, saddr4 | 37 - * saddr6, daddr4 | daddr6, sport, dport, backup, if_idx [, error]. 37 + * saddr6, daddr4 | daddr6, sport, dport, backup, if-idx [, error]. 38 38 * @MPTCP_EVENT_LISTENER_CREATED: A new PM listener is created. Attributes: 39 39 * family, sport, saddr4 | saddr6. 40 40 * @MPTCP_EVENT_LISTENER_CLOSED: A PM listener is closed. Attributes: family,
+4
include/uapi/linux/vm_sockets.h
··· 17 17 #ifndef _UAPI_VM_SOCKETS_H 18 18 #define _UAPI_VM_SOCKETS_H 19 19 20 + #ifndef __KERNEL__ 21 + #include <sys/socket.h> /* for struct sockaddr and sa_family_t */ 22 + #endif 23 + 20 24 #include <linux/socket.h> 21 25 #include <linux/types.h> 22 26
+3 -1
io_uring/io-wq.c
··· 1259 1259 atomic_set(&wq->worker_refs, 1); 1260 1260 init_completion(&wq->worker_done); 1261 1261 ret = cpuhp_state_add_instance_nocalls(io_wq_online, &wq->cpuhp_node); 1262 - if (ret) 1262 + if (ret) { 1263 + put_task_struct(wq->task); 1263 1264 goto err; 1265 + } 1264 1266 1265 1267 return wq; 1266 1268 err:
-2
io_uring/io_uring.h
··· 98 98 struct llist_node *tctx_task_work_run(struct io_uring_task *tctx, unsigned int max_entries, unsigned int *count); 99 99 void tctx_task_work(struct callback_head *cb); 100 100 __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd); 101 - int io_uring_alloc_task_context(struct task_struct *task, 102 - struct io_ring_ctx *ctx); 103 101 104 102 int io_ring_add_registered_file(struct io_uring_task *tctx, struct file *file, 105 103 int start, int end);
+1 -1
io_uring/net.c
··· 821 821 if (sr->flags & IORING_RECVSEND_BUNDLE) { 822 822 size_t this_ret = *ret - sr->done_io; 823 823 824 - cflags |= io_put_kbufs(req, *ret, io_bundle_nbufs(kmsg, this_ret), 824 + cflags |= io_put_kbufs(req, this_ret, io_bundle_nbufs(kmsg, this_ret), 825 825 issue_flags); 826 826 if (sr->retry) 827 827 cflags = req->cqe.flags | (cflags & CQE_F_MASK);
+5 -3
io_uring/rsrc.c
··· 809 809 810 810 imu->nr_bvecs = nr_pages; 811 811 ret = io_buffer_account_pin(ctx, pages, nr_pages, imu, last_hpage); 812 - if (ret) { 813 - unpin_user_pages(pages, nr_pages); 812 + if (ret) 814 813 goto done; 815 - } 816 814 817 815 size = iov->iov_len; 818 816 /* store original address for later verification */ ··· 840 842 if (ret) { 841 843 if (imu) 842 844 io_free_imu(ctx, imu); 845 + if (pages) 846 + unpin_user_pages(pages, nr_pages); 843 847 io_cache_free(&ctx->node_cache, node); 844 848 node = ERR_PTR(ret); 845 849 } ··· 1177 1177 return -EINVAL; 1178 1178 if (check_add_overflow(arg->nr, arg->dst_off, &nbufs)) 1179 1179 return -EOVERFLOW; 1180 + if (nbufs > IORING_MAX_REG_BUFFERS) 1181 + return -EINVAL; 1180 1182 1181 1183 ret = io_rsrc_data_alloc(&data, max(nbufs, ctx->buf_table.nr)); 1182 1184 if (ret)
+2 -4
io_uring/sqpoll.c
··· 16 16 #include <uapi/linux/io_uring.h> 17 17 18 18 #include "io_uring.h" 19 + #include "tctx.h" 19 20 #include "napi.h" 20 21 #include "sqpoll.h" 21 22 ··· 420 419 __cold int io_sq_offload_create(struct io_ring_ctx *ctx, 421 420 struct io_uring_params *p) 422 421 { 423 - struct task_struct *task_to_put = NULL; 424 422 int ret; 425 423 426 424 /* Retain compatibility with failing for an invalid attach attempt */ ··· 498 498 rcu_assign_pointer(sqd->thread, tsk); 499 499 mutex_unlock(&sqd->lock); 500 500 501 - task_to_put = get_task_struct(tsk); 501 + get_task_struct(tsk); 502 502 ret = io_uring_alloc_task_context(tsk, ctx); 503 503 wake_up_new_task(tsk); 504 504 if (ret) ··· 513 513 complete(&ctx->sq_data->exited); 514 514 err: 515 515 io_sq_thread_finish(ctx); 516 - if (task_to_put) 517 - put_task_struct(task_to_put); 518 516 return ret; 519 517 } 520 518
+34 -29
kernel/audit_tree.c
··· 668 668 return 0; 669 669 } 670 670 671 - static int compare_root(struct vfsmount *mnt, void *arg) 672 - { 673 - return inode_to_key(d_backing_inode(mnt->mnt_root)) == 674 - (unsigned long)arg; 675 - } 676 - 677 671 void audit_trim_trees(void) 678 672 { 679 673 struct list_head cursor; ··· 677 683 while (cursor.next != &tree_list) { 678 684 struct audit_tree *tree; 679 685 struct path path; 680 - struct vfsmount *root_mnt; 681 686 struct audit_node *node; 687 + struct path *paths; 688 + struct path array[16]; 682 689 int err; 683 690 684 691 tree = container_of(cursor.next, struct audit_tree, list); ··· 691 696 if (err) 692 697 goto skip_it; 693 698 694 - root_mnt = collect_mounts(&path); 699 + paths = collect_paths(&path, array, 16); 695 700 path_put(&path); 696 - if (IS_ERR(root_mnt)) 701 + if (IS_ERR(paths)) 697 702 goto skip_it; 698 703 699 704 spin_lock(&hash_lock); ··· 701 706 struct audit_chunk *chunk = find_chunk(node); 702 707 /* this could be NULL if the watch is dying else where... */ 703 708 node->index |= 1U<<31; 704 - if (iterate_mounts(compare_root, 705 - (void *)(chunk->key), 706 - root_mnt)) 707 - node->index &= ~(1U<<31); 709 + for (struct path *p = paths; p->dentry; p++) { 710 + struct inode *inode = p->dentry->d_inode; 711 + if (inode_to_key(inode) == chunk->key) { 712 + node->index &= ~(1U<<31); 713 + break; 714 + } 715 + } 708 716 } 709 717 spin_unlock(&hash_lock); 710 718 trim_marked(tree); 711 - drop_collected_mounts(root_mnt); 719 + drop_collected_paths(paths, array); 712 720 skip_it: 713 721 put_tree(tree); 714 722 mutex_lock(&audit_filter_mutex); ··· 740 742 put_tree(tree); 741 743 } 742 744 743 - static int tag_mount(struct vfsmount *mnt, void *arg) 745 + static int tag_mounts(struct path *paths, struct audit_tree *tree) 744 746 { 745 - return tag_chunk(d_backing_inode(mnt->mnt_root), arg); 747 + for (struct path *p = paths; p->dentry; p++) { 748 + int err = tag_chunk(p->dentry->d_inode, tree); 749 + if (err) 750 + return err; 751 + } 752 + return 0; 746 753 } 747 754 748 755 /* ··· 804 801 { 805 802 struct audit_tree *seed = rule->tree, *tree; 806 803 struct path path; 807 - struct vfsmount *mnt; 804 + struct path array[16]; 805 + struct path *paths; 808 806 int err; 809 807 810 808 rule->tree = NULL; ··· 832 828 err = kern_path(tree->pathname, 0, &path); 833 829 if (err) 834 830 goto Err; 835 - mnt = collect_mounts(&path); 831 + paths = collect_paths(&path, array, 16); 836 832 path_put(&path); 837 - if (IS_ERR(mnt)) { 838 - err = PTR_ERR(mnt); 833 + if (IS_ERR(paths)) { 834 + err = PTR_ERR(paths); 839 835 goto Err; 840 836 } 841 837 842 838 get_tree(tree); 843 - err = iterate_mounts(tag_mount, tree, mnt); 844 - drop_collected_mounts(mnt); 839 + err = tag_mounts(paths, tree); 840 + drop_collected_paths(paths, array); 845 841 846 842 if (!err) { 847 843 struct audit_node *node; ··· 876 872 struct list_head cursor, barrier; 877 873 int failed = 0; 878 874 struct path path1, path2; 879 - struct vfsmount *tagged; 875 + struct path array[16]; 876 + struct path *paths; 880 877 int err; 881 878 882 879 err = kern_path(new, 0, &path2); 883 880 if (err) 884 881 return err; 885 - tagged = collect_mounts(&path2); 882 + paths = collect_paths(&path2, array, 16); 886 883 path_put(&path2); 887 - if (IS_ERR(tagged)) 888 - return PTR_ERR(tagged); 884 + if (IS_ERR(paths)) 885 + return PTR_ERR(paths); 889 886 890 887 err = kern_path(old, 0, &path1); 891 888 if (err) { 892 - drop_collected_mounts(tagged); 889 + drop_collected_paths(paths, array); 893 890 return err; 894 891 } 895 892 ··· 919 914 continue; 920 915 } 921 916 922 - failed = iterate_mounts(tag_mount, tree, tagged); 917 + failed = tag_mounts(paths, tree); 923 918 if (failed) { 924 919 put_tree(tree); 925 920 mutex_lock(&audit_filter_mutex); ··· 960 955 list_del(&cursor); 961 956 mutex_unlock(&audit_filter_mutex); 962 957 path_put(&path1); 963 - drop_collected_mounts(tagged); 958 + drop_collected_paths(paths, array); 964 959 return failed; 965 960 } 966 961
+6 -3
kernel/bpf/bpf_lru_list.c
··· 337 337 list) { 338 338 __bpf_lru_node_move_to_free(l, node, local_free_list(loc_l), 339 339 BPF_LRU_LOCAL_LIST_T_FREE); 340 - if (++nfree == LOCAL_FREE_TARGET) 340 + if (++nfree == lru->target_free) 341 341 break; 342 342 } 343 343 344 - if (nfree < LOCAL_FREE_TARGET) 345 - __bpf_lru_list_shrink(lru, l, LOCAL_FREE_TARGET - nfree, 344 + if (nfree < lru->target_free) 345 + __bpf_lru_list_shrink(lru, l, lru->target_free - nfree, 346 346 local_free_list(loc_l), 347 347 BPF_LRU_LOCAL_LIST_T_FREE); 348 348 ··· 577 577 list_add(&node->list, &l->lists[BPF_LRU_LIST_T_FREE]); 578 578 buf += elem_size; 579 579 } 580 + 581 + lru->target_free = clamp((nr_elems / num_possible_cpus()) / 2, 582 + 1, LOCAL_FREE_TARGET); 580 583 } 581 584 582 585 static void bpf_percpu_lru_populate(struct bpf_lru *lru, void *buf,
+1
kernel/bpf/bpf_lru_list.h
··· 58 58 del_from_htab_func del_from_htab; 59 59 void *del_arg; 60 60 unsigned int hash_offset; 61 + unsigned int target_free; 61 62 unsigned int nr_scans; 62 63 bool percpu; 63 64 };
+1 -1
kernel/bpf/cgroup.c
··· 2134 2134 .gpl_only = false, 2135 2135 .ret_type = RET_INTEGER, 2136 2136 .arg1_type = ARG_PTR_TO_CTX, 2137 - .arg2_type = ARG_PTR_TO_MEM, 2137 + .arg2_type = ARG_PTR_TO_MEM | MEM_WRITE, 2138 2138 .arg3_type = ARG_CONST_SIZE, 2139 2139 .arg4_type = ARG_ANYTHING, 2140 2140 };
+2 -3
kernel/bpf/verifier.c
··· 7027 7027 struct inode *f_inode; 7028 7028 }; 7029 7029 7030 - BTF_TYPE_SAFE_TRUSTED(struct dentry) { 7031 - /* no negative dentry-s in places where bpf can see it */ 7030 + BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct dentry) { 7032 7031 struct inode *d_inode; 7033 7032 }; 7034 7033 ··· 7065 7066 BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct bpf_iter__task)); 7066 7067 BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct linux_binprm)); 7067 7068 BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct file)); 7068 - BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct dentry)); 7069 7069 7070 7070 return btf_nested_type_is_trusted(&env->log, reg, field_name, btf_id, "__safe_trusted"); 7071 7071 } ··· 7074 7076 const char *field_name, u32 btf_id) 7075 7077 { 7076 7078 BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct socket)); 7079 + BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED_OR_NULL(struct dentry)); 7077 7080 7078 7081 return btf_nested_type_is_trusted(&env->log, reg, field_name, btf_id, 7079 7082 "__safe_trusted_or_null");
+74 -42
kernel/events/core.c
··· 207 207 __perf_ctx_unlock(&cpuctx->ctx); 208 208 } 209 209 210 + typedef struct { 211 + struct perf_cpu_context *cpuctx; 212 + struct perf_event_context *ctx; 213 + } class_perf_ctx_lock_t; 214 + 215 + static inline void class_perf_ctx_lock_destructor(class_perf_ctx_lock_t *_T) 216 + { perf_ctx_unlock(_T->cpuctx, _T->ctx); } 217 + 218 + static inline class_perf_ctx_lock_t 219 + class_perf_ctx_lock_constructor(struct perf_cpu_context *cpuctx, 220 + struct perf_event_context *ctx) 221 + { perf_ctx_lock(cpuctx, ctx); return (class_perf_ctx_lock_t){ cpuctx, ctx }; } 222 + 210 223 #define TASK_TOMBSTONE ((void *)-1L) 211 224 212 225 static bool is_kernel_event(struct perf_event *event) ··· 957 944 if (READ_ONCE(cpuctx->cgrp) == cgrp) 958 945 return; 959 946 960 - perf_ctx_lock(cpuctx, cpuctx->task_ctx); 947 + guard(perf_ctx_lock)(cpuctx, cpuctx->task_ctx); 948 + /* 949 + * Re-check, could've raced vs perf_remove_from_context(). 950 + */ 951 + if (READ_ONCE(cpuctx->cgrp) == NULL) 952 + return; 953 + 961 954 perf_ctx_disable(&cpuctx->ctx, true); 962 955 963 956 ctx_sched_out(&cpuctx->ctx, NULL, EVENT_ALL|EVENT_CGROUP); ··· 981 962 ctx_sched_in(&cpuctx->ctx, NULL, EVENT_ALL|EVENT_CGROUP); 982 963 983 964 perf_ctx_enable(&cpuctx->ctx, true); 984 - perf_ctx_unlock(cpuctx, cpuctx->task_ctx); 985 965 } 986 966 987 967 static int perf_cgroup_ensure_storage(struct perf_event *event, ··· 2138 2120 if (event->group_leader == event) 2139 2121 del_event_from_groups(event, ctx); 2140 2122 2141 - /* 2142 - * If event was in error state, then keep it 2143 - * that way, otherwise bogus counts will be 2144 - * returned on read(). The only way to get out 2145 - * of error state is by explicit re-enabling 2146 - * of the event 2147 - */ 2148 - if (event->state > PERF_EVENT_STATE_OFF) { 2149 - perf_cgroup_event_disable(event, ctx); 2150 - perf_event_set_state(event, PERF_EVENT_STATE_OFF); 2151 - } 2152 - 2153 2123 ctx->generation++; 2154 2124 event->pmu_ctx->nr_events--; 2155 2125 } ··· 2155 2149 } 2156 2150 2157 2151 static void put_event(struct perf_event *event); 2158 - static void event_sched_out(struct perf_event *event, 2159 - struct perf_event_context *ctx); 2152 + static void __event_disable(struct perf_event *event, 2153 + struct perf_event_context *ctx, 2154 + enum perf_event_state state); 2160 2155 2161 2156 static void perf_put_aux_event(struct perf_event *event) 2162 2157 { ··· 2190 2183 * state so that we don't try to schedule it again. Note 2191 2184 * that perf_event_enable() will clear the ERROR status. 2192 2185 */ 2193 - event_sched_out(iter, ctx); 2194 - perf_event_set_state(event, PERF_EVENT_STATE_ERROR); 2186 + __event_disable(iter, ctx, PERF_EVENT_STATE_ERROR); 2195 2187 } 2196 2188 } 2197 2189 ··· 2248 2242 &event->pmu_ctx->flexible_active; 2249 2243 } 2250 2244 2251 - /* 2252 - * Events that have PERF_EV_CAP_SIBLING require being part of a group and 2253 - * cannot exist on their own, schedule them out and move them into the ERROR 2254 - * state. Also see _perf_event_enable(), it will not be able to recover 2255 - * this ERROR state. 2256 - */ 2257 - static inline void perf_remove_sibling_event(struct perf_event *event) 2258 - { 2259 - event_sched_out(event, event->ctx); 2260 - perf_event_set_state(event, PERF_EVENT_STATE_ERROR); 2261 - } 2262 - 2263 2245 static void perf_group_detach(struct perf_event *event) 2264 2246 { 2265 2247 struct perf_event *leader = event->group_leader; ··· 2283 2289 */ 2284 2290 list_for_each_entry_safe(sibling, tmp, &event->sibling_list, sibling_list) { 2285 2291 2292 + /* 2293 + * Events that have PERF_EV_CAP_SIBLING require being part of 2294 + * a group and cannot exist on their own, schedule them out 2295 + * and move them into the ERROR state. Also see 2296 + * _perf_event_enable(), it will not be able to recover this 2297 + * ERROR state. 2298 + */ 2286 2299 if (sibling->event_caps & PERF_EV_CAP_SIBLING) 2287 - perf_remove_sibling_event(sibling); 2300 + __event_disable(sibling, ctx, PERF_EVENT_STATE_ERROR); 2288 2301 2289 2302 sibling->group_leader = sibling; 2290 2303 list_del_init(&sibling->sibling_list); ··· 2494 2493 state = PERF_EVENT_STATE_EXIT; 2495 2494 if (flags & DETACH_REVOKE) 2496 2495 state = PERF_EVENT_STATE_REVOKED; 2497 - if (flags & DETACH_DEAD) { 2498 - event->pending_disable = 1; 2496 + if (flags & DETACH_DEAD) 2499 2497 state = PERF_EVENT_STATE_DEAD; 2500 - } 2498 + 2501 2499 event_sched_out(event, ctx); 2500 + 2501 + if (event->state > PERF_EVENT_STATE_OFF) 2502 + perf_cgroup_event_disable(event, ctx); 2503 + 2502 2504 perf_event_set_state(event, min(event->state, state)); 2503 2505 2504 2506 if (flags & DETACH_GROUP) ··· 2566 2562 event_function_call(event, __perf_remove_from_context, (void *)flags); 2567 2563 } 2568 2564 2565 + static void __event_disable(struct perf_event *event, 2566 + struct perf_event_context *ctx, 2567 + enum perf_event_state state) 2568 + { 2569 + event_sched_out(event, ctx); 2570 + perf_cgroup_event_disable(event, ctx); 2571 + perf_event_set_state(event, state); 2572 + } 2573 + 2569 2574 /* 2570 2575 * Cross CPU call to disable a performance event 2571 2576 */ ··· 2589 2576 perf_pmu_disable(event->pmu_ctx->pmu); 2590 2577 ctx_time_update_event(ctx, event); 2591 2578 2579 + /* 2580 + * When disabling a group leader, the whole group becomes ineligible 2581 + * to run, so schedule out the full group. 2582 + */ 2592 2583 if (event == event->group_leader) 2593 2584 group_sched_out(event, ctx); 2594 - else 2595 - event_sched_out(event, ctx); 2596 2585 2597 - perf_event_set_state(event, PERF_EVENT_STATE_OFF); 2598 - perf_cgroup_event_disable(event, ctx); 2586 + /* 2587 + * But only mark the leader OFF; the siblings will remain 2588 + * INACTIVE. 2589 + */ 2590 + __event_disable(event, ctx, PERF_EVENT_STATE_OFF); 2599 2591 2600 2592 perf_pmu_enable(event->pmu_ctx->pmu); 2601 2593 } ··· 2674 2656 2675 2657 static void perf_event_throttle(struct perf_event *event) 2676 2658 { 2677 - event->pmu->stop(event, 0); 2678 2659 event->hw.interrupts = MAX_INTERRUPTS; 2660 + event->pmu->stop(event, 0); 2679 2661 if (event == event->group_leader) 2680 2662 perf_log_throttle(event, 0); 2681 2663 } ··· 7457 7439 if (!regs) 7458 7440 return 0; 7459 7441 7442 + /* No mm, no stack, no dump. */ 7443 + if (!current->mm) 7444 + return 0; 7445 + 7460 7446 /* 7461 7447 * Check if we fit in with the requested stack size into the: 7462 7448 * - TASK_SIZE ··· 8171 8149 bool crosstask = event->ctx->task && event->ctx->task != current; 8172 8150 const u32 max_stack = event->attr.sample_max_stack; 8173 8151 struct perf_callchain_entry *callchain; 8152 + 8153 + if (!current->mm) 8154 + user = false; 8174 8155 8175 8156 if (!kernel && !user) 8176 8157 return &__empty_callchain; ··· 11774 11749 { 11775 11750 struct hw_perf_event *hwc = &event->hw; 11776 11751 11777 - if (is_sampling_event(event)) { 11752 + /* 11753 + * The throttle can be triggered in the hrtimer handler. 11754 + * The HRTIMER_NORESTART should be used to stop the timer, 11755 + * rather than hrtimer_cancel(). See perf_swevent_hrtimer() 11756 + */ 11757 + if (is_sampling_event(event) && (hwc->interrupts != MAX_INTERRUPTS)) { 11778 11758 ktime_t remaining = hrtimer_get_remaining(&hwc->hrtimer); 11779 11759 local64_set(&hwc->period_left, ktime_to_ns(remaining)); 11780 11760 ··· 11834 11804 static void cpu_clock_event_stop(struct perf_event *event, int flags) 11835 11805 { 11836 11806 perf_swevent_cancel_hrtimer(event); 11837 - cpu_clock_event_update(event); 11807 + if (flags & PERF_EF_UPDATE) 11808 + cpu_clock_event_update(event); 11838 11809 } 11839 11810 11840 11811 static int cpu_clock_event_add(struct perf_event *event, int flags) ··· 11913 11882 static void task_clock_event_stop(struct perf_event *event, int flags) 11914 11883 { 11915 11884 perf_swevent_cancel_hrtimer(event); 11916 - task_clock_event_update(event, event->ctx->time); 11885 + if (flags & PERF_EF_UPDATE) 11886 + task_clock_event_update(event, event->ctx->time); 11917 11887 } 11918 11888 11919 11889 static int task_clock_event_add(struct perf_event *event, int flags)
+9 -8
kernel/exit.c
··· 940 940 taskstats_exit(tsk, group_dead); 941 941 trace_sched_process_exit(tsk, group_dead); 942 942 943 + /* 944 + * Since sampling can touch ->mm, make sure to stop everything before we 945 + * tear it down. 946 + * 947 + * Also flushes inherited counters to the parent - before the parent 948 + * gets woken up by child-exit notifications. 949 + */ 950 + perf_event_exit_task(tsk); 951 + 943 952 exit_mm(); 944 953 945 954 if (group_dead) ··· 963 954 exit_task_namespaces(tsk); 964 955 exit_task_work(tsk); 965 956 exit_thread(tsk); 966 - 967 - /* 968 - * Flush inherited counters to the parent - before the parent 969 - * gets woken up by child-exit notifications. 970 - * 971 - * because of cgroup mode, must be called before cgroup_exit() 972 - */ 973 - perf_event_exit_task(tsk); 974 957 975 958 sched_autogroup_exit_task(tsk); 976 959 cgroup_exit(tsk);
+12 -2
kernel/futex/core.c
··· 583 583 if (futex_get_value(&node, naddr)) 584 584 return -EFAULT; 585 585 586 - if (node != FUTEX_NO_NODE && 587 - (node >= MAX_NUMNODES || !node_possible(node))) 586 + if ((node != FUTEX_NO_NODE) && 587 + ((unsigned int)node >= MAX_NUMNODES || !node_possible(node))) 588 588 return -EINVAL; 589 589 } 590 590 ··· 1629 1629 mm->futex_phash_new = NULL; 1630 1630 1631 1631 if (fph) { 1632 + if (cur && (!cur->hash_mask || cur->immutable)) { 1633 + /* 1634 + * If two threads simultaneously request the global 1635 + * hash then the first one performs the switch, 1636 + * the second one returns here. 1637 + */ 1638 + free = fph; 1639 + mm->futex_phash_new = new; 1640 + return -EBUSY; 1641 + } 1632 1642 if (cur && !new) { 1633 1643 /* 1634 1644 * If we have an existing hash, but do not yet have
+8
kernel/irq/chip.c
··· 205 205 206 206 void irq_startup_managed(struct irq_desc *desc) 207 207 { 208 + struct irq_data *d = irq_desc_get_irq_data(desc); 209 + 210 + /* 211 + * Clear managed-shutdown flag, so we don't repeat managed-startup for 212 + * multiple hotplugs, and cause imbalanced disable depth. 213 + */ 214 + irqd_clr_managed_shutdown(d); 215 + 208 216 /* 209 217 * Only start it up when the disable depth is 1, so that a disable, 210 218 * hotunplug, hotplug sequence does not end up enabling it during
-7
kernel/irq/cpuhotplug.c
··· 210 210 !irq_data_get_irq_chip(data) || !cpumask_test_cpu(cpu, affinity)) 211 211 return; 212 212 213 - /* 214 - * Don't restore suspended interrupts here when a system comes back 215 - * from S3. They are reenabled via resume_device_irqs(). 216 - */ 217 - if (desc->istate & IRQS_SUSPENDED) 218 - return; 219 - 220 213 if (irqd_is_managed_and_shutdown(data)) 221 214 irq_startup_managed(desc); 222 215
+1 -1
kernel/irq/irq_sim.c
··· 202 202 void *data) 203 203 { 204 204 struct irq_sim_work_ctx *work_ctx __free(kfree) = 205 - kmalloc(sizeof(*work_ctx), GFP_KERNEL); 205 + kzalloc(sizeof(*work_ctx), GFP_KERNEL); 206 206 207 207 if (!work_ctx) 208 208 return ERR_PTR(-ENOMEM);
+17 -12
kernel/kexec_handover.c
··· 164 164 } 165 165 166 166 /* almost as free_reserved_page(), just don't free the page */ 167 - static void kho_restore_page(struct page *page) 167 + static void kho_restore_page(struct page *page, unsigned int order) 168 168 { 169 - ClearPageReserved(page); 170 - init_page_count(page); 171 - adjust_managed_page_count(page, 1); 169 + unsigned int nr_pages = (1 << order); 170 + 171 + /* Head page gets refcount of 1. */ 172 + set_page_count(page, 1); 173 + 174 + /* For higher order folios, tail pages get a page count of zero. */ 175 + for (unsigned int i = 1; i < nr_pages; i++) 176 + set_page_count(page + i, 0); 177 + 178 + if (order > 0) 179 + prep_compound_page(page, order); 180 + 181 + adjust_managed_page_count(page, nr_pages); 172 182 } 173 183 174 184 /** ··· 196 186 return NULL; 197 187 198 188 order = page->private; 199 - if (order) { 200 - if (order > MAX_PAGE_ORDER) 201 - return NULL; 189 + if (order > MAX_PAGE_ORDER) 190 + return NULL; 202 191 203 - prep_compound_page(page, order); 204 - } else { 205 - kho_restore_page(page); 206 - } 207 - 192 + kho_restore_page(page, order); 208 193 return page_folio(page); 209 194 } 210 195 EXPORT_SYMBOL_GPL(kho_restore_folio);
+4
kernel/rcu/tree.c
··· 3072 3072 /* Misaligned rcu_head! */ 3073 3073 WARN_ON_ONCE((unsigned long)head & (sizeof(void *) - 1)); 3074 3074 3075 + /* Avoid NULL dereference if callback is NULL. */ 3076 + if (WARN_ON_ONCE(!func)) 3077 + return; 3078 + 3075 3079 if (debug_rcu_head_queue(head)) { 3076 3080 /* 3077 3081 * Probable double call_rcu(), so leak the callback.
+1 -1
lib/crypto/Makefile
··· 66 66 67 67 obj-$(CONFIG_MPILIB) += mpi/ 68 68 69 - obj-$(CONFIG_CRYPTO_SELFTESTS) += simd.o 69 + obj-$(CONFIG_CRYPTO_SELFTESTS_FULL) += simd.o 70 70 71 71 obj-$(CONFIG_CRYPTO_LIB_SM3) += libsm3.o 72 72 libsm3-y := sm3.o
+3 -1
lib/maple_tree.c
··· 5527 5527 mas->store_type = mas_wr_store_type(&wr_mas); 5528 5528 request = mas_prealloc_calc(&wr_mas, entry); 5529 5529 if (!request) 5530 - return ret; 5530 + goto set_flag; 5531 5531 5532 + mas->mas_flags &= ~MA_STATE_PREALLOC; 5532 5533 mas_node_count_gfp(mas, request, gfp); 5533 5534 if (mas_is_err(mas)) { 5534 5535 mas_set_alloc_req(mas, 0); ··· 5539 5538 return ret; 5540 5539 } 5541 5540 5541 + set_flag: 5542 5542 mas->mas_flags |= MA_STATE_PREALLOC; 5543 5543 return ret; 5544 5544 }
+10 -4
mm/gup.c
··· 2303 2303 /* 2304 2304 * Returns the number of collected folios. Return value is always >= 0. 2305 2305 */ 2306 - static void collect_longterm_unpinnable_folios( 2306 + static unsigned long collect_longterm_unpinnable_folios( 2307 2307 struct list_head *movable_folio_list, 2308 2308 struct pages_or_folios *pofs) 2309 2309 { 2310 + unsigned long i, collected = 0; 2310 2311 struct folio *prev_folio = NULL; 2311 2312 bool drain_allow = true; 2312 - unsigned long i; 2313 2313 2314 2314 for (i = 0; i < pofs->nr_entries; i++) { 2315 2315 struct folio *folio = pofs_get_folio(pofs, i); ··· 2320 2320 2321 2321 if (folio_is_longterm_pinnable(folio)) 2322 2322 continue; 2323 + 2324 + collected++; 2323 2325 2324 2326 if (folio_is_device_coherent(folio)) 2325 2327 continue; ··· 2344 2342 NR_ISOLATED_ANON + folio_is_file_lru(folio), 2345 2343 folio_nr_pages(folio)); 2346 2344 } 2345 + 2346 + return collected; 2347 2347 } 2348 2348 2349 2349 /* ··· 2422 2418 check_and_migrate_movable_pages_or_folios(struct pages_or_folios *pofs) 2423 2419 { 2424 2420 LIST_HEAD(movable_folio_list); 2421 + unsigned long collected; 2425 2422 2426 - collect_longterm_unpinnable_folios(&movable_folio_list, pofs); 2427 - if (list_empty(&movable_folio_list)) 2423 + collected = collect_longterm_unpinnable_folios(&movable_folio_list, 2424 + pofs); 2425 + if (!collected) 2428 2426 return 0; 2429 2427 2430 2428 return migrate_longterm_unpinnable_folios(&movable_folio_list, pofs);
-20
mm/memory.c
··· 4315 4315 } 4316 4316 4317 4317 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 4318 - static inline int non_swapcache_batch(swp_entry_t entry, int max_nr) 4319 - { 4320 - struct swap_info_struct *si = swp_swap_info(entry); 4321 - pgoff_t offset = swp_offset(entry); 4322 - int i; 4323 - 4324 - /* 4325 - * While allocating a large folio and doing swap_read_folio, which is 4326 - * the case the being faulted pte doesn't have swapcache. We need to 4327 - * ensure all PTEs have no cache as well, otherwise, we might go to 4328 - * swap devices while the content is in swapcache. 4329 - */ 4330 - for (i = 0; i < max_nr; i++) { 4331 - if ((si->swap_map[offset + i] & SWAP_HAS_CACHE)) 4332 - return i; 4333 - } 4334 - 4335 - return i; 4336 - } 4337 - 4338 4318 /* 4339 4319 * Check if the PTEs within a range are contiguous swap entries 4340 4320 * and have consistent swapcache, zeromap.
+5 -1
mm/shmem.c
··· 2259 2259 folio = swap_cache_get_folio(swap, NULL, 0); 2260 2260 order = xa_get_order(&mapping->i_pages, index); 2261 2261 if (!folio) { 2262 + int nr_pages = 1 << order; 2262 2263 bool fallback_order0 = false; 2263 2264 2264 2265 /* Or update major stats only when swapin succeeds?? */ ··· 2273 2272 * If uffd is active for the vma, we need per-page fault 2274 2273 * fidelity to maintain the uffd semantics, then fallback 2275 2274 * to swapin order-0 folio, as well as for zswap case. 2275 + * Any existing sub folio in the swap cache also blocks 2276 + * mTHP swapin. 2276 2277 */ 2277 2278 if (order > 0 && ((vma && unlikely(userfaultfd_armed(vma))) || 2278 - !zswap_never_enabled())) 2279 + !zswap_never_enabled() || 2280 + non_swapcache_batch(swap, nr_pages) != nr_pages)) 2279 2281 fallback_order0 = true; 2280 2282 2281 2283 /* Skip swapcache for synchronous device. */
+23
mm/swap.h
··· 106 106 return find_next_bit(sis->zeromap, end, start) - start; 107 107 } 108 108 109 + static inline int non_swapcache_batch(swp_entry_t entry, int max_nr) 110 + { 111 + struct swap_info_struct *si = swp_swap_info(entry); 112 + pgoff_t offset = swp_offset(entry); 113 + int i; 114 + 115 + /* 116 + * While allocating a large folio and doing mTHP swapin, we need to 117 + * ensure all entries are not cached, otherwise, the mTHP folio will 118 + * be in conflict with the folio in swap cache. 119 + */ 120 + for (i = 0; i < max_nr; i++) { 121 + if ((si->swap_map[offset + i] & SWAP_HAS_CACHE)) 122 + return i; 123 + } 124 + 125 + return i; 126 + } 127 + 109 128 #else /* CONFIG_SWAP */ 110 129 struct swap_iocb; 111 130 static inline void swap_read_folio(struct folio *folio, struct swap_iocb **plug) ··· 218 199 return 0; 219 200 } 220 201 202 + static inline int non_swapcache_batch(swp_entry_t entry, int max_nr) 203 + { 204 + return 0; 205 + } 221 206 #endif /* CONFIG_SWAP */ 222 207 223 208 /**
+31 -2
mm/userfaultfd.c
··· 1084 1084 pte_t orig_dst_pte, pte_t orig_src_pte, 1085 1085 pmd_t *dst_pmd, pmd_t dst_pmdval, 1086 1086 spinlock_t *dst_ptl, spinlock_t *src_ptl, 1087 - struct folio *src_folio) 1087 + struct folio *src_folio, 1088 + struct swap_info_struct *si, swp_entry_t entry) 1088 1089 { 1090 + /* 1091 + * Check if the folio still belongs to the target swap entry after 1092 + * acquiring the lock. Folio can be freed in the swap cache while 1093 + * not locked. 1094 + */ 1095 + if (src_folio && unlikely(!folio_test_swapcache(src_folio) || 1096 + entry.val != src_folio->swap.val)) 1097 + return -EAGAIN; 1098 + 1089 1099 double_pt_lock(dst_ptl, src_ptl); 1090 1100 1091 1101 if (!is_pte_pages_stable(dst_pte, src_pte, orig_dst_pte, orig_src_pte, ··· 1112 1102 if (src_folio) { 1113 1103 folio_move_anon_rmap(src_folio, dst_vma); 1114 1104 src_folio->index = linear_page_index(dst_vma, dst_addr); 1105 + } else { 1106 + /* 1107 + * Check if the swap entry is cached after acquiring the src_pte 1108 + * lock. Otherwise, we might miss a newly loaded swap cache folio. 1109 + * 1110 + * Check swap_map directly to minimize overhead, READ_ONCE is sufficient. 1111 + * We are trying to catch newly added swap cache, the only possible case is 1112 + * when a folio is swapped in and out again staying in swap cache, using the 1113 + * same entry before the PTE check above. The PTL is acquired and released 1114 + * twice, each time after updating the swap_map's flag. So holding 1115 + * the PTL here ensures we see the updated value. False positive is possible, 1116 + * e.g. SWP_SYNCHRONOUS_IO swapin may set the flag without touching the 1117 + * cache, or during the tiny synchronization window between swap cache and 1118 + * swap_map, but it will be gone very quickly, worst result is retry jitters. 1119 + */ 1120 + if (READ_ONCE(si->swap_map[swp_offset(entry)]) & SWAP_HAS_CACHE) { 1121 + double_pt_unlock(dst_ptl, src_ptl); 1122 + return -EAGAIN; 1123 + } 1115 1124 } 1116 1125 1117 1126 orig_src_pte = ptep_get_and_clear(mm, src_addr, src_pte); ··· 1441 1412 } 1442 1413 err = move_swap_pte(mm, dst_vma, dst_addr, src_addr, dst_pte, src_pte, 1443 1414 orig_dst_pte, orig_src_pte, dst_pmd, dst_pmdval, 1444 - dst_ptl, src_ptl, src_folio); 1415 + dst_ptl, src_ptl, src_folio, si, entry); 1445 1416 } 1446 1417 1447 1418 out:
+5 -6
net/atm/clip.c
··· 193 193 194 194 pr_debug("\n"); 195 195 196 - if (!clip_devs) { 197 - atm_return(vcc, skb->truesize); 198 - kfree_skb(skb); 199 - return; 200 - } 201 - 202 196 if (!skb) { 203 197 pr_debug("removing VCC %p\n", clip_vcc); 204 198 if (clip_vcc->entry) ··· 202 208 return; 203 209 } 204 210 atm_return(vcc, skb->truesize); 211 + if (!clip_devs) { 212 + kfree_skb(skb); 213 + return; 214 + } 215 + 205 216 skb->dev = clip_vcc->entry ? clip_vcc->entry->neigh->dev : clip_devs; 206 217 /* clip_vcc->entry == NULL if we don't have an IP address yet */ 207 218 if (!skb->dev) {
+1 -2
net/atm/resources.c
··· 146 146 */ 147 147 mutex_lock(&atm_dev_mutex); 148 148 list_del(&dev->dev_list); 149 - mutex_unlock(&atm_dev_mutex); 150 - 151 149 atm_dev_release_vccs(dev); 152 150 atm_unregister_sysfs(dev); 153 151 atm_proc_dev_deregister(dev); 152 + mutex_unlock(&atm_dev_mutex); 154 153 155 154 atm_dev_put(dev); 156 155 }
+30 -4
net/bluetooth/hci_core.c
··· 64 64 65 65 /* Get HCI device by index. 66 66 * Device is held on return. */ 67 - struct hci_dev *hci_dev_get(int index) 67 + static struct hci_dev *__hci_dev_get(int index, int *srcu_index) 68 68 { 69 69 struct hci_dev *hdev = NULL, *d; 70 70 ··· 77 77 list_for_each_entry(d, &hci_dev_list, list) { 78 78 if (d->id == index) { 79 79 hdev = hci_dev_hold(d); 80 + if (srcu_index) 81 + *srcu_index = srcu_read_lock(&d->srcu); 80 82 break; 81 83 } 82 84 } 83 85 read_unlock(&hci_dev_list_lock); 84 86 return hdev; 87 + } 88 + 89 + struct hci_dev *hci_dev_get(int index) 90 + { 91 + return __hci_dev_get(index, NULL); 92 + } 93 + 94 + static struct hci_dev *hci_dev_get_srcu(int index, int *srcu_index) 95 + { 96 + return __hci_dev_get(index, srcu_index); 97 + } 98 + 99 + static void hci_dev_put_srcu(struct hci_dev *hdev, int srcu_index) 100 + { 101 + srcu_read_unlock(&hdev->srcu, srcu_index); 102 + hci_dev_put(hdev); 85 103 } 86 104 87 105 /* ---- Inquiry support ---- */ ··· 586 568 int hci_dev_reset(__u16 dev) 587 569 { 588 570 struct hci_dev *hdev; 589 - int err; 571 + int err, srcu_index; 590 572 591 - hdev = hci_dev_get(dev); 573 + hdev = hci_dev_get_srcu(dev, &srcu_index); 592 574 if (!hdev) 593 575 return -ENODEV; 594 576 ··· 610 592 err = hci_dev_do_reset(hdev); 611 593 612 594 done: 613 - hci_dev_put(hdev); 595 + hci_dev_put_srcu(hdev, srcu_index); 614 596 return err; 615 597 } 616 598 ··· 2451 2433 if (!hdev) 2452 2434 return NULL; 2453 2435 2436 + if (init_srcu_struct(&hdev->srcu)) { 2437 + kfree(hdev); 2438 + return NULL; 2439 + } 2440 + 2454 2441 hdev->pkt_type = (HCI_DM1 | HCI_DH1 | HCI_HV1); 2455 2442 hdev->esco_type = (ESCO_HV1); 2456 2443 hdev->link_mode = (HCI_LM_ACCEPT); ··· 2700 2677 write_lock(&hci_dev_list_lock); 2701 2678 list_del(&hdev->list); 2702 2679 write_unlock(&hci_dev_list_lock); 2680 + 2681 + synchronize_srcu(&hdev->srcu); 2682 + cleanup_srcu_struct(&hdev->srcu); 2703 2683 2704 2684 disable_work_sync(&hdev->rx_work); 2705 2685 disable_work_sync(&hdev->cmd_work);
+8 -1
net/bluetooth/l2cap_core.c
··· 3415 3415 struct l2cap_conf_rfc rfc = { .mode = L2CAP_MODE_BASIC }; 3416 3416 struct l2cap_conf_efs efs; 3417 3417 u8 remote_efs = 0; 3418 - u16 mtu = L2CAP_DEFAULT_MTU; 3418 + u16 mtu = 0; 3419 3419 u16 result = L2CAP_CONF_SUCCESS; 3420 3420 u16 size; 3421 3421 ··· 3519 3519 if (result == L2CAP_CONF_SUCCESS) { 3520 3520 /* Configure output options and let the other side know 3521 3521 * which ones we don't like. */ 3522 + 3523 + /* If MTU is not provided in configure request, use the most recently 3524 + * explicitly or implicitly accepted value for the other direction, 3525 + * or the default value. 3526 + */ 3527 + if (mtu == 0) 3528 + mtu = chan->imtu ? chan->imtu : L2CAP_DEFAULT_MTU; 3522 3529 3523 3530 if (mtu < L2CAP_DEFAULT_MIN_MTU) 3524 3531 result = L2CAP_CONF_UNACCEPT;
+9
net/bridge/br_multicast.c
··· 2015 2015 2016 2016 void br_multicast_port_ctx_deinit(struct net_bridge_mcast_port *pmctx) 2017 2017 { 2018 + struct net_bridge *br = pmctx->port->br; 2019 + bool del = false; 2020 + 2018 2021 #if IS_ENABLED(CONFIG_IPV6) 2019 2022 timer_delete_sync(&pmctx->ip6_mc_router_timer); 2020 2023 #endif 2021 2024 timer_delete_sync(&pmctx->ip4_mc_router_timer); 2025 + 2026 + spin_lock_bh(&br->multicast_lock); 2027 + del |= br_ip6_multicast_rport_del(pmctx); 2028 + del |= br_ip4_multicast_rport_del(pmctx); 2029 + br_multicast_rport_del_notify(pmctx, del); 2030 + spin_unlock_bh(&br->multicast_lock); 2022 2031 } 2023 2032 2024 2033 int br_multicast_add_port(struct net_bridge_port *port)
+1 -1
net/core/netpoll.c
··· 425 425 udph->dest = htons(np->remote_port); 426 426 udph->len = htons(udp_len); 427 427 428 + udph->check = 0; 428 429 if (np->ipv6) { 429 430 udph->check = csum_ipv6_magic(&np->local_ip.in6, 430 431 &np->remote_ip.in6, ··· 454 453 skb_reset_mac_header(skb); 455 454 skb->protocol = eth->h_proto = htons(ETH_P_IPV6); 456 455 } else { 457 - udph->check = 0; 458 456 udph->check = csum_tcpudp_magic(np->local_ip.ip, 459 457 np->remote_ip.ip, 460 458 udp_len, IPPROTO_UDP,
+3 -2
net/core/selftests.c
··· 160 160 skb->csum = 0; 161 161 skb->ip_summed = CHECKSUM_PARTIAL; 162 162 if (attr->tcp) { 163 - thdr->check = ~tcp_v4_check(skb->len, ihdr->saddr, 164 - ihdr->daddr, 0); 163 + int l4len = skb->len - skb_transport_offset(skb); 164 + 165 + thdr->check = ~tcp_v4_check(l4len, ihdr->saddr, ihdr->daddr, 0); 165 166 skb->csum_start = skb_transport_header(skb) - skb->head; 166 167 skb->csum_offset = offsetof(struct tcphdr, check); 167 168 } else {
+3 -3
net/mac80211/link.c
··· 93 93 if (link_id < 0) 94 94 link_id = 0; 95 95 96 - rcu_assign_pointer(sdata->vif.link_conf[link_id], link_conf); 97 - rcu_assign_pointer(sdata->link[link_id], link); 98 - 99 96 if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN) { 100 97 struct ieee80211_sub_if_data *ap_bss; 101 98 struct ieee80211_bss_conf *ap_bss_conf; ··· 142 145 143 146 ieee80211_link_debugfs_add(link); 144 147 } 148 + 149 + rcu_assign_pointer(sdata->vif.link_conf[link_id], link_conf); 150 + rcu_assign_pointer(sdata->link[link_id], link); 145 151 } 146 152 147 153 void ieee80211_link_stop(struct ieee80211_link_data *link)
+1 -1
net/mac80211/util.c
··· 3902 3902 { 3903 3903 u64 tsf = drv_get_tsf(local, sdata); 3904 3904 u64 dtim_count = 0; 3905 - u16 beacon_int = sdata->vif.bss_conf.beacon_int * 1024; 3905 + u32 beacon_int = sdata->vif.bss_conf.beacon_int * 1024; 3906 3906 u8 dtim_period = sdata->vif.bss_conf.dtim_period; 3907 3907 struct ps_data *ps; 3908 3908 u8 bcns_from_dtim;
+3 -14
net/sunrpc/svc.c
··· 638 638 static bool 639 639 svc_init_buffer(struct svc_rqst *rqstp, const struct svc_serv *serv, int node) 640 640 { 641 - unsigned long ret; 642 - 643 641 rqstp->rq_maxpages = svc_serv_maxpages(serv); 644 642 645 643 /* rq_pages' last entry is NULL for historical reasons. */ ··· 647 649 if (!rqstp->rq_pages) 648 650 return false; 649 651 650 - ret = alloc_pages_bulk_node(GFP_KERNEL, node, rqstp->rq_maxpages, 651 - rqstp->rq_pages); 652 - return ret == rqstp->rq_maxpages; 652 + return true; 653 653 } 654 654 655 655 /* ··· 1371 1375 case SVC_OK: 1372 1376 break; 1373 1377 case SVC_GARBAGE: 1374 - goto err_garbage_args; 1378 + rqstp->rq_auth_stat = rpc_autherr_badcred; 1379 + goto err_bad_auth; 1375 1380 case SVC_SYSERR: 1376 1381 goto err_system_err; 1377 1382 case SVC_DENIED: ··· 1511 1514 if (serv->sv_stats) 1512 1515 serv->sv_stats->rpcbadfmt++; 1513 1516 *rqstp->rq_accept_statp = rpc_proc_unavail; 1514 - goto sendit; 1515 - 1516 - err_garbage_args: 1517 - svc_printk(rqstp, "failed to decode RPC header\n"); 1518 - 1519 - if (serv->sv_stats) 1520 - serv->sv_stats->rpcbadfmt++; 1521 - *rqstp->rq_accept_statp = rpc_garbage_args; 1522 1517 goto sendit; 1523 1518 1524 1519 err_system_err:
+23 -8
net/unix/af_unix.c
··· 660 660 #endif 661 661 } 662 662 663 + static unsigned int unix_skb_len(const struct sk_buff *skb) 664 + { 665 + return skb->len - UNIXCB(skb).consumed; 666 + } 667 + 663 668 static void unix_release_sock(struct sock *sk, int embrion) 664 669 { 665 670 struct unix_sock *u = unix_sk(sk); ··· 699 694 700 695 if (skpair != NULL) { 701 696 if (sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET) { 697 + struct sk_buff *skb = skb_peek(&sk->sk_receive_queue); 698 + 699 + #if IS_ENABLED(CONFIG_AF_UNIX_OOB) 700 + if (skb && !unix_skb_len(skb)) 701 + skb = skb_peek_next(skb, &sk->sk_receive_queue); 702 + #endif 702 703 unix_state_lock(skpair); 703 704 /* No more writes */ 704 705 WRITE_ONCE(skpair->sk_shutdown, SHUTDOWN_MASK); 705 - if (!skb_queue_empty_lockless(&sk->sk_receive_queue) || embrion) 706 + if (skb || embrion) 706 707 WRITE_ONCE(skpair->sk_err, ECONNRESET); 707 708 unix_state_unlock(skpair); 708 709 skpair->sk_state_change(skpair); ··· 2672 2661 return timeo; 2673 2662 } 2674 2663 2675 - static unsigned int unix_skb_len(const struct sk_buff *skb) 2676 - { 2677 - return skb->len - UNIXCB(skb).consumed; 2678 - } 2679 - 2680 2664 struct unix_stream_read_state { 2681 2665 int (*recv_actor)(struct sk_buff *, int, int, 2682 2666 struct unix_stream_read_state *); ··· 2686 2680 #if IS_ENABLED(CONFIG_AF_UNIX_OOB) 2687 2681 static int unix_stream_recv_urg(struct unix_stream_read_state *state) 2688 2682 { 2683 + struct sk_buff *oob_skb, *read_skb = NULL; 2689 2684 struct socket *sock = state->socket; 2690 2685 struct sock *sk = sock->sk; 2691 2686 struct unix_sock *u = unix_sk(sk); 2692 2687 int chunk = 1; 2693 - struct sk_buff *oob_skb; 2694 2688 2695 2689 mutex_lock(&u->iolock); 2696 2690 unix_state_lock(sk); ··· 2705 2699 2706 2700 oob_skb = u->oob_skb; 2707 2701 2708 - if (!(state->flags & MSG_PEEK)) 2702 + if (!(state->flags & MSG_PEEK)) { 2709 2703 WRITE_ONCE(u->oob_skb, NULL); 2704 + 2705 + if (oob_skb->prev != (struct sk_buff *)&sk->sk_receive_queue && 
2706 + !unix_skb_len(oob_skb->prev)) { 2707 + read_skb = oob_skb->prev; 2708 + __skb_unlink(read_skb, &sk->sk_receive_queue); 2709 + } 2710 + } 2710 2711 2711 2712 spin_unlock(&sk->sk_receive_queue.lock); 2712 2713 unix_state_unlock(sk); ··· 2724 2711 UNIXCB(oob_skb).consumed += 1; 2725 2712 2726 2713 mutex_unlock(&u->iolock); 2714 + 2715 + consume_skb(read_skb); 2727 2716 2728 2717 if (chunk < 0) 2729 2718 return -EFAULT;
+11 -5
security/selinux/ss/services.c
··· 1909 1909 goto out_unlock; 1910 1910 } 1911 1911 /* Obtain the sid for the context. */ 1912 - rc = sidtab_context_to_sid(sidtab, &newcontext, out_sid); 1913 - if (rc == -ESTALE) { 1914 - rcu_read_unlock(); 1915 - context_destroy(&newcontext); 1916 - goto retry; 1912 + if (context_equal(scontext, &newcontext)) 1913 + *out_sid = ssid; 1914 + else if (context_equal(tcontext, &newcontext)) 1915 + *out_sid = tsid; 1916 + else { 1917 + rc = sidtab_context_to_sid(sidtab, &newcontext, out_sid); 1918 + if (rc == -ESTALE) { 1919 + rcu_read_unlock(); 1920 + context_destroy(&newcontext); 1921 + goto retry; 1922 + } 1917 1923 } 1918 1924 out_unlock: 1919 1925 rcu_read_unlock();
+7
sound/isa/sb/sb16_main.c
··· 703 703 unsigned char nval, oval; 704 704 int change; 705 705 706 + if (chip->mode & (SB_MODE_PLAYBACK | SB_MODE_CAPTURE)) 707 + return -EBUSY; 708 + 706 709 nval = ucontrol->value.enumerated.item[0]; 707 710 if (nval > 2) 708 711 return -EINVAL; ··· 714 711 change = nval != oval; 715 712 snd_sb16_set_dma_mode(chip, nval); 716 713 spin_unlock_irqrestore(&chip->reg_lock, flags); 714 + if (change) { 715 + snd_dma_disable(chip->dma8); 716 + snd_dma_disable(chip->dma16); 717 + } 717 718 return change; 718 719 } 719 720
+2 -2
sound/pci/ctxfi/xfi.c
··· 98 98 if (err < 0) 99 99 goto error; 100 100 101 - strcpy(card->driver, "SB-XFi"); 102 - strcpy(card->shortname, "Creative X-Fi"); 101 + strscpy(card->driver, "SB-XFi"); 102 + strscpy(card->shortname, "Creative X-Fi"); 103 103 snprintf(card->longname, sizeof(card->longname), "%s %s %s", 104 104 card->shortname, atc->chip_name, atc->model_name); 105 105
+2
sound/pci/hda/hda_intel.c
··· 2283 2283 SND_PCI_QUIRK(0x1734, 0x1232, "KONTRON SinglePC", 0), 2284 2284 /* Dell ALC3271 */ 2285 2285 SND_PCI_QUIRK(0x1028, 0x0962, "Dell ALC3271", 0), 2286 + /* https://bugzilla.kernel.org/show_bug.cgi?id=220210 */ 2287 + SND_PCI_QUIRK(0x17aa, 0x5079, "Lenovo Thinkpad E15", 0), 2286 2288 {} 2287 2289 }; 2288 2290
+30
sound/pci/hda/patch_realtek.c
··· 8030 8030 ALC294_FIXUP_ASUS_CS35L41_SPI_2,
8031 8031 ALC274_FIXUP_HP_AIO_BIND_DACS,
8032 8032 ALC287_FIXUP_PREDATOR_SPK_CS35L41_I2C_2,
8033 + ALC285_FIXUP_ASUS_GA605K_HEADSET_MIC,
8034 + ALC285_FIXUP_ASUS_GA605K_I2C_SPEAKER2_TO_DAC1,
8035 + ALC269_FIXUP_POSITIVO_P15X_HEADSET_MIC,
8033 8036 };
8034 8037 
8035 8038 /* A special fixup for Lenovo C940 and Yoga Duet 7;
··· 10417 10414 .type = HDA_FIXUP_FUNC,
10418 10415 .v.func = alc274_fixup_hp_aio_bind_dacs,
10419 10416 },
10417 + [ALC285_FIXUP_ASUS_GA605K_HEADSET_MIC] = {
10418 + .type = HDA_FIXUP_PINS,
10419 + .v.pins = (const struct hda_pintbl[]) {
10420 + { 0x19, 0x03a11050 },
10421 + { 0x1b, 0x03a11c30 },
10422 + { }
10423 + },
10424 + .chained = true,
10425 + .chain_id = ALC285_FIXUP_ASUS_GA605K_I2C_SPEAKER2_TO_DAC1
10426 + },
10427 + [ALC285_FIXUP_ASUS_GA605K_I2C_SPEAKER2_TO_DAC1] = {
10428 + .type = HDA_FIXUP_FUNC,
10429 + .v.func = alc285_fixup_speaker2_to_dac1,
10430 + },
10431 + [ALC269_FIXUP_POSITIVO_P15X_HEADSET_MIC] = {
10432 + .type = HDA_FIXUP_FUNC,
10433 + .v.func = alc269_fixup_limit_int_mic_boost,
10434 + .chained = true,
10435 + .chain_id = ALC269VC_FIXUP_ACER_MIC_NO_PRESENCE,
10436 + },
10420 10437 };
10421 10438 
10422 10439 static const struct hda_quirk alc269_fixup_tbl[] = {
··· 10532 10509 SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
10533 10510 SND_PCI_QUIRK(0x1028, 0x0872, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
10534 10511 SND_PCI_QUIRK(0x1028, 0x0873, "Dell Precision 3930", ALC255_FIXUP_DUMMY_LINEOUT_VERB),
10512 + SND_PCI_QUIRK(0x1028, 0x0879, "Dell Latitude 5420 Rugged", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE),
10535 10513 SND_PCI_QUIRK(0x1028, 0x08ad, "Dell WYSE AIO", ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE),
10536 10514 SND_PCI_QUIRK(0x1028, 0x08ae, "Dell WYSE NB", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE),
10537 10515 SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
··· 10811 10787 SND_PCI_QUIRK(0x103c, 0x8b97, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
10812 10788 SND_PCI_QUIRK(0x103c, 0x8bb3, "HP Slim OMEN", ALC287_FIXUP_CS35L41_I2C_2),
10813 10789 SND_PCI_QUIRK(0x103c, 0x8bb4, "HP Slim OMEN", ALC287_FIXUP_CS35L41_I2C_2),
10790 + SND_PCI_QUIRK(0x103c, 0x8bc8, "HP Victus 15-fa1xxx", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
10814 10791 SND_PCI_QUIRK(0x103c, 0x8bcd, "HP Omen 16-xd0xxx", ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT),
10815 10792 SND_PCI_QUIRK(0x103c, 0x8bdd, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
10816 10793 SND_PCI_QUIRK(0x103c, 0x8bde, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
··· 10865 10840 SND_PCI_QUIRK(0x103c, 0x8c91, "HP EliteBook 660", ALC236_FIXUP_HP_GPIO_LED),
10866 10841 SND_PCI_QUIRK(0x103c, 0x8c96, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
10867 10842 SND_PCI_QUIRK(0x103c, 0x8c97, "HP ZBook", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
10843 + SND_PCI_QUIRK(0x103c, 0x8c9c, "HP Victus 16-s1xxx (MB 8C9C)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT),
10868 10844 SND_PCI_QUIRK(0x103c, 0x8ca1, "HP ZBook Power", ALC236_FIXUP_HP_GPIO_LED),
10869 10845 SND_PCI_QUIRK(0x103c, 0x8ca2, "HP ZBook Power", ALC236_FIXUP_HP_GPIO_LED),
10870 10846 SND_PCI_QUIRK(0x103c, 0x8ca4, "HP ZBook Fury", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED),
··· 10930 10904 SND_PCI_QUIRK(0x103c, 0x8e60, "HP Trekker ", ALC287_FIXUP_CS35L41_I2C_2),
10931 10905 SND_PCI_QUIRK(0x103c, 0x8e61, "HP Trekker ", ALC287_FIXUP_CS35L41_I2C_2),
10932 10906 SND_PCI_QUIRK(0x103c, 0x8e62, "HP Trekker ", ALC287_FIXUP_CS35L41_I2C_2),
10907 + SND_PCI_QUIRK(0x1043, 0x1032, "ASUS VivoBook X513EA", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
10908 + SND_PCI_QUIRK(0x1043, 0x1034, "ASUS GU605C", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1),
10933 10909 SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
10934 10910 SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
10935 10911 SND_PCI_QUIRK(0x1043, 0x1054, "ASUS G614FH/FM/FP", ALC287_FIXUP_CS35L41_I2C_2),
··· 10960 10932 SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
10961 10933 SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
10962 10934 SND_PCI_QUIRK(0x1043, 0x1313, "Asus K42JZ", ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE),
10935 + SND_PCI_QUIRK(0x1043, 0x1314, "ASUS GA605K", ALC285_FIXUP_ASUS_GA605K_HEADSET_MIC),
10963 10936 SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
10964 10937 SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
10965 10938 SND_PCI_QUIRK(0x1043, 0x1433, "ASUS GX650PY/PZ/PV/PU/PYV/PZV/PIV/PVV", ALC285_FIXUP_ASUS_I2C_HEADSET_MIC),
··· 11413 11384 SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
11414 11385 SND_PCI_QUIRK(0x2782, 0x0228, "Infinix ZERO BOOK 13", ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13),
11415 11386 SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO),
11387 + SND_PCI_QUIRK(0x2782, 0x1407, "Positivo P15X", ALC269_FIXUP_POSITIVO_P15X_HEADSET_MIC),
11416 11388 SND_PCI_QUIRK(0x2782, 0x1701, "Infinix Y4 Max", ALC269VC_FIXUP_INFINIX_Y4_MAX),
11417 11389 SND_PCI_QUIRK(0x2782, 0x1705, "MEDION E15433", ALC269VC_FIXUP_INFINIX_Y4_MAX),
11418 11390 SND_PCI_QUIRK(0x2782, 0x1707, "Vaio VJFE-ADL", ALC298_FIXUP_SPK_VOLUME),
+14
sound/soc/amd/yc/acp6x-mach.c
··· 454 454 { 455 455 .driver_data = &acp6x_card, 456 456 .matches = { 457 + DMI_MATCH(DMI_BOARD_VENDOR, "Micro-Star International Co., Ltd."), 458 + DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 17 D7VF"), 459 + } 460 + }, 461 + { 462 + .driver_data = &acp6x_card, 463 + .matches = { 457 464 DMI_MATCH(DMI_BOARD_VENDOR, "Alienware"), 458 465 DMI_MATCH(DMI_PRODUCT_NAME, "Alienware m17 R5 AMD"), 459 466 } ··· 519 512 .matches = { 520 513 DMI_MATCH(DMI_BOARD_VENDOR, "HP"), 521 514 DMI_MATCH(DMI_PRODUCT_NAME, "OMEN by HP Gaming Laptop 16z-n000"), 515 + } 516 + }, 517 + { 518 + .driver_data = &acp6x_card, 519 + .matches = { 520 + DMI_MATCH(DMI_BOARD_VENDOR, "HP"), 521 + DMI_MATCH(DMI_PRODUCT_NAME, "Victus by HP Gaming Laptop 15-fb2xxx"), 522 522 } 523 523 }, 524 524 {
-1
sound/soc/apple/Kconfig
··· 2 2 tristate "Apple Silicon MCA driver" 3 3 depends on ARCH_APPLE || COMPILE_TEST 4 4 select SND_DMAENGINE_PCM 5 - default ARCH_APPLE 6 5 help 7 6 This option enables an ASoC platform driver for MCA peripherals found 8 7 on Apple Silicon SoCs.
+10 -8
sound/soc/codecs/cs35l56-sdw.c
··· 238 238 .val_format_endian_default = REGMAP_ENDIAN_BIG, 239 239 }; 240 240 241 - static int cs35l56_sdw_set_cal_index(struct cs35l56_private *cs35l56) 241 + static int cs35l56_sdw_get_unique_id(struct cs35l56_private *cs35l56) 242 242 { 243 243 int ret; 244 244 245 - /* SoundWire UniqueId is used to index the calibration array */ 246 245 ret = sdw_read_no_pm(cs35l56->sdw_peripheral, SDW_SCP_DEVID_0); 247 246 if (ret < 0) 248 247 return ret; 249 248 250 - cs35l56->base.cal_index = ret & 0xf; 249 + cs35l56->sdw_unique_id = ret & 0xf; 251 250 252 251 return 0; 253 252 } ··· 258 259 259 260 pm_runtime_get_noresume(cs35l56->base.dev); 260 261 261 - if (cs35l56->base.cal_index < 0) { 262 - ret = cs35l56_sdw_set_cal_index(cs35l56); 263 - if (ret < 0) 264 - goto out; 265 - } 262 + ret = cs35l56_sdw_get_unique_id(cs35l56); 263 + if (ret) 264 + goto out; 265 + 266 + /* SoundWire UniqueId is used to index the calibration array */ 267 + if (cs35l56->base.cal_index < 0) 268 + cs35l56->base.cal_index = cs35l56->sdw_unique_id; 266 269 267 270 ret = cs35l56_init(cs35l56); 268 271 if (ret < 0) { ··· 588 587 589 588 cs35l56->base.dev = dev; 590 589 cs35l56->sdw_peripheral = peripheral; 590 + cs35l56->sdw_link_num = peripheral->bus->link_id; 591 591 INIT_WORK(&cs35l56->sdw_irq_work, cs35l56_sdw_irq_work); 592 592 593 593 dev_set_drvdata(dev, cs35l56);
+63 -9
sound/soc/codecs/cs35l56.c
··· 706 706 return ret; 707 707 } 708 708 709 + static int cs35l56_dsp_download_and_power_up(struct cs35l56_private *cs35l56, 710 + bool load_firmware) 711 + { 712 + int ret; 713 + 714 + /* 715 + * Abort the first load if it didn't find the suffixed bins and 716 + * we have an alternate fallback suffix. 717 + */ 718 + cs35l56->dsp.bin_mandatory = (load_firmware && cs35l56->fallback_fw_suffix); 719 + 720 + ret = wm_adsp_power_up(&cs35l56->dsp, load_firmware); 721 + if ((ret == -ENOENT) && cs35l56->dsp.bin_mandatory) { 722 + cs35l56->dsp.fwf_suffix = cs35l56->fallback_fw_suffix; 723 + cs35l56->fallback_fw_suffix = NULL; 724 + cs35l56->dsp.bin_mandatory = false; 725 + ret = wm_adsp_power_up(&cs35l56->dsp, load_firmware); 726 + } 727 + 728 + if (ret) { 729 + dev_dbg(cs35l56->base.dev, "wm_adsp_power_up ret %d\n", ret); 730 + return ret; 731 + } 732 + 733 + return 0; 734 + } 735 + 709 736 static void cs35l56_reinit_patch(struct cs35l56_private *cs35l56) 710 737 { 711 738 int ret; 712 739 713 - /* Use wm_adsp to load and apply the firmware patch and coefficient files */ 714 - ret = wm_adsp_power_up(&cs35l56->dsp, true); 715 - if (ret) { 716 - dev_dbg(cs35l56->base.dev, "%s: wm_adsp_power_up ret %d\n", __func__, ret); 740 + ret = cs35l56_dsp_download_and_power_up(cs35l56, true); 741 + if (ret) 717 742 return; 718 - } 719 743 720 744 cs35l56_write_cal(cs35l56); 721 745 ··· 774 750 * but only if firmware is missing. If firmware is already patched just 775 751 * power-up wm_adsp without downloading firmware. 
776 752 */ 777 - ret = wm_adsp_power_up(&cs35l56->dsp, !!firmware_missing); 778 - if (ret) { 779 - dev_dbg(cs35l56->base.dev, "%s: wm_adsp_power_up ret %d\n", __func__, ret); 753 + ret = cs35l56_dsp_download_and_power_up(cs35l56, firmware_missing); 754 + if (ret) 780 755 goto err; 781 - } 782 756 783 757 mutex_lock(&cs35l56->base.irq_lock); 784 758 ··· 875 853 pm_runtime_put_autosuspend(cs35l56->base.dev); 876 854 } 877 855 856 + static int cs35l56_set_fw_suffix(struct cs35l56_private *cs35l56) 857 + { 858 + if (cs35l56->dsp.fwf_suffix) 859 + return 0; 860 + 861 + if (!cs35l56->sdw_peripheral) 862 + return 0; 863 + 864 + cs35l56->dsp.fwf_suffix = devm_kasprintf(cs35l56->base.dev, GFP_KERNEL, 865 + "l%uu%u", 866 + cs35l56->sdw_link_num, 867 + cs35l56->sdw_unique_id); 868 + if (!cs35l56->dsp.fwf_suffix) 869 + return -ENOMEM; 870 + 871 + /* 872 + * There are published firmware files for L56 B0 silicon using 873 + * the ALSA prefix as the filename suffix. Default to trying these 874 + * first, with the new name as an alternate. 875 + */ 876 + if ((cs35l56->base.type == 0x56) && (cs35l56->base.rev == 0xb0)) { 877 + cs35l56->fallback_fw_suffix = cs35l56->dsp.fwf_suffix; 878 + cs35l56->dsp.fwf_suffix = cs35l56->component->name_prefix; 879 + } 880 + 881 + return 0; 882 + } 883 + 878 884 static int cs35l56_component_probe(struct snd_soc_component *component) 879 885 { 880 886 struct cs35l56_private *cs35l56 = snd_soc_component_get_drvdata(component); ··· 942 892 return -ENOMEM; 943 893 944 894 cs35l56->component = component; 895 + ret = cs35l56_set_fw_suffix(cs35l56); 896 + if (ret) 897 + return ret; 898 + 945 899 wm_adsp2_component_probe(&cs35l56->dsp, component); 946 900 947 901 debugfs_create_bool("init_done", 0444, debugfs_root, &cs35l56->base.init_done);
+3
sound/soc/codecs/cs35l56.h
··· 38 38 struct snd_soc_component *component; 39 39 struct regulator_bulk_data supplies[CS35L56_NUM_BULK_SUPPLIES]; 40 40 struct sdw_slave *sdw_peripheral; 41 + const char *fallback_fw_suffix; 41 42 struct work_struct sdw_irq_work; 42 43 bool sdw_irq_no_unmask; 43 44 bool soft_resetting; ··· 53 52 bool tdm_mode; 54 53 bool sysclk_set; 55 54 u8 old_sdw_clock_scale; 55 + u8 sdw_link_num; 56 + u8 sdw_unique_id; 56 57 }; 57 58 58 59 extern const struct dev_pm_ops cs35l56_pm_ops_i2c_spi;
+4
sound/soc/codecs/cs48l32.c
··· 2162 2162 n_slots_multiple = 1; 2163 2163 2164 2164 sclk_target = snd_soc_tdm_params_to_bclk(params, slotw, n_slots, n_slots_multiple); 2165 + if (sclk_target < 0) { 2166 + cs48l32_asp_err(dai, "Invalid parameters\n"); 2167 + return sclk_target; 2168 + } 2165 2169 2166 2170 for (i = 0; i < ARRAY_SIZE(cs48l32_sclk_rates); i++) { 2167 2171 if ((cs48l32_sclk_rates[i].freq >= sclk_target) &&
+1 -2
sound/soc/codecs/es8326.c
··· 1079 1079 regmap_update_bits(es8326->regmap, ES8326_HPDET_TYPE, 0x03, 0x00); 1080 1080 regmap_write(es8326->regmap, ES8326_INTOUT_IO, 1081 1081 es8326->interrupt_clk); 1082 - regmap_write(es8326->regmap, ES8326_SDINOUT1_IO, 1083 - (ES8326_IO_DMIC_CLK << ES8326_SDINOUT1_SHIFT)); 1082 + regmap_write(es8326->regmap, ES8326_SDINOUT1_IO, ES8326_IO_INPUT); 1084 1083 regmap_write(es8326->regmap, ES8326_SDINOUT23_IO, ES8326_IO_INPUT); 1085 1084 1086 1085 regmap_write(es8326->regmap, ES8326_ANA_PDN, 0x00);
+18 -9
sound/soc/codecs/wm_adsp.c
··· 783 783 char **coeff_filename)
784 784 {
785 785 const char *system_name = dsp->system_name;
786 - const char *asoc_component_prefix = dsp->component->name_prefix;
786 + const char *suffix = dsp->component->name_prefix;
787 787 int ret = 0;
788 788 
789 - if (system_name && asoc_component_prefix) {
789 + if (dsp->fwf_suffix)
790 + suffix = dsp->fwf_suffix;
791 + 
792 + if (system_name && suffix) {
790 793 if (!wm_adsp_request_firmware_file(dsp, wmfw_firmware, wmfw_filename,
791 794 cirrus_dir, system_name,
792 - asoc_component_prefix, "wmfw")) {
795 + suffix, "wmfw")) {
793 796 wm_adsp_request_firmware_file(dsp, coeff_firmware, coeff_filename,
794 797 cirrus_dir, system_name,
795 - asoc_component_prefix, "bin");
798 + suffix, "bin");
796 799 return 0;
797 800 }
798 801 }
··· 804 801 if (!wm_adsp_request_firmware_file(dsp, wmfw_firmware, wmfw_filename,
805 802 cirrus_dir, system_name,
806 803 NULL, "wmfw")) {
807 - if (asoc_component_prefix)
804 + if (suffix)
808 805 wm_adsp_request_firmware_file(dsp, coeff_firmware, coeff_filename,
809 806 cirrus_dir, system_name,
810 - asoc_component_prefix, "bin");
807 + suffix, "bin");
811 808 
812 809 if (!*coeff_firmware)
813 810 wm_adsp_request_firmware_file(dsp, coeff_firmware, coeff_filename,
··· 819 816 
820 817 /* Check system-specific bin without wmfw before falling back to generic */
821 818 if (dsp->wmfw_optional && system_name) {
822 - if (asoc_component_prefix)
819 + if (suffix)
823 820 wm_adsp_request_firmware_file(dsp, coeff_firmware, coeff_filename,
824 821 cirrus_dir, system_name,
825 - asoc_component_prefix, "bin");
822 + suffix, "bin");
826 823 
827 824 if (!*coeff_firmware)
828 825 wm_adsp_request_firmware_file(dsp, coeff_firmware, coeff_filename,
··· 853 850 adsp_err(dsp, "Failed to request firmware <%s>%s-%s-%s<-%s<%s>>.wmfw\n",
854 851 cirrus_dir, dsp->part,
855 852 dsp->fwf_name ? dsp->fwf_name : dsp->cs_dsp.name,
856 - wm_adsp_fw[dsp->fw].file, system_name, asoc_component_prefix);
853 + wm_adsp_fw[dsp->fw].file, system_name, suffix);
857 854 
858 855 return -ENOENT;
859 856 }
··· 1000 997 return ret;
1001 998 }
1002 999 
1000 + if (dsp->bin_mandatory && !coeff_firmware) {
1001 + ret = -ENOENT;
1002 + goto err;
1003 + }
1004 + 
1003 1005 ret = cs_dsp_power_up(&dsp->cs_dsp,
1004 1006 wmfw_firmware, wmfw_filename,
1005 1007 coeff_firmware, coeff_filename,
1006 1008 wm_adsp_fw_text[dsp->fw]);
1007 1009 
1010 + err:
1008 1011 wm_adsp_release_firmware_files(dsp,
1009 1012 wmfw_firmware, wmfw_filename,
1010 1013 coeff_firmware, coeff_filename);
+2
sound/soc/codecs/wm_adsp.h
··· 29 29 const char *part; 30 30 const char *fwf_name; 31 31 const char *system_name; 32 + const char *fwf_suffix; 32 33 struct snd_soc_component *component; 33 34 34 35 unsigned int sys_config_size; 35 36 36 37 int fw; 37 38 bool wmfw_optional; 39 + bool bin_mandatory; 38 40 39 41 struct work_struct boot_work; 40 42 int (*control_add)(struct wm_adsp *dsp, struct cs_dsp_coeff_ctl *cs_ctl);
+2 -1
sound/soc/intel/common/sof-function-topology-lib.c
··· 73 73 break; 74 74 default: 75 75 dev_warn(card->dev, 76 - "only -2ch and -4ch are supported for dmic\n"); 76 + "unsupported number of dmics: %d\n", 77 + mach_params.dmic_num); 77 78 continue; 78 79 } 79 80 tplg_dev = TPLG_DEVICE_INTEL_PCH_DMIC;
+1
sound/soc/loongson/loongson_i2s.c
··· 9 9 #include <linux/module.h> 10 10 #include <linux/platform_device.h> 11 11 #include <linux/delay.h> 12 + #include <linux/export.h> 12 13 #include <linux/pm_runtime.h> 13 14 #include <linux/dma-mapping.h> 14 15 #include <sound/soc.h>
+2
sound/soc/sdw_utils/soc_sdw_utils.c
··· 1205 1205 int i; 1206 1206 1207 1207 dlc = kzalloc(sizeof(*dlc), GFP_KERNEL); 1208 + if (!dlc) 1209 + return -ENOMEM; 1208 1210 1209 1211 adr_end = &adr_dev->endpoints[end_index]; 1210 1212 dai_info = &codec_info->dais[adr_end->num];
+15
sound/soc/sof/imx/imx8.c
··· 40 40 struct reset_control *run_stall; 41 41 }; 42 42 43 + static int imx8_shutdown(struct snd_sof_dev *sdev) 44 + { 45 + /* 46 + * Force the DSP to stall. After the firmware image is loaded, 47 + * the stall will be removed during run() by a matching 48 + * imx_sc_pm_cpu_start() call. 49 + */ 50 + imx_sc_pm_cpu_start(get_chip_pdata(sdev), IMX_SC_R_DSP, false, 51 + RESET_VECTOR_VADDR); 52 + 53 + return 0; 54 + } 55 + 43 56 /* 44 57 * DSP control. 45 58 */ ··· 294 281 static const struct imx_chip_ops imx8_chip_ops = { 295 282 .probe = imx8_probe, 296 283 .core_kick = imx8_run, 284 + .core_shutdown = imx8_shutdown, 297 285 }; 298 286 299 287 static const struct imx_chip_ops imx8x_chip_ops = { 300 288 .probe = imx8_probe, 301 289 .core_kick = imx8x_run, 290 + .core_shutdown = imx8_shutdown, 302 291 }; 303 292 304 293 static const struct imx_chip_ops imx8m_chip_ops = {
+12
sound/usb/mixer_maps.c
··· 383 383 { 0 } /* terminator */ 384 384 }; 385 385 386 + /* KTMicro USB */ 387 + static struct usbmix_name_map s31b2_0022_map[] = { 388 + { 23, "Speaker Playback" }, 389 + { 18, "Headphone Playback" }, 390 + { 0 } 391 + }; 392 + 386 393 /* ASUS ROG Zenith II with Realtek ALC1220-VB */ 387 394 static const struct usbmix_name_map asus_zenith_ii_map[] = { 388 395 { 19, NULL, 12 }, /* FU, Input Gain Pad - broken response, disabled */ ··· 698 691 /* Microsoft USB Link headset */ 699 692 .id = USB_ID(0x045e, 0x083c), 700 693 .map = ms_usb_link_map, 694 + }, 695 + { 696 + /* KTMicro USB */ 697 + .id = USB_ID(0X31b2, 0x0022), 698 + .map = s31b2_0022_map, 701 699 }, 702 700 { 0 } /* terminator */ 703 701 };
+5 -4
tools/arch/arm64/include/uapi/asm/kvm.h
··· 431 431 432 432 /* Device Control API on vcpu fd */ 433 433 #define KVM_ARM_VCPU_PMU_V3_CTRL 0 434 - #define KVM_ARM_VCPU_PMU_V3_IRQ 0 435 - #define KVM_ARM_VCPU_PMU_V3_INIT 1 436 - #define KVM_ARM_VCPU_PMU_V3_FILTER 2 437 - #define KVM_ARM_VCPU_PMU_V3_SET_PMU 3 434 + #define KVM_ARM_VCPU_PMU_V3_IRQ 0 435 + #define KVM_ARM_VCPU_PMU_V3_INIT 1 436 + #define KVM_ARM_VCPU_PMU_V3_FILTER 2 437 + #define KVM_ARM_VCPU_PMU_V3_SET_PMU 3 438 + #define KVM_ARM_VCPU_PMU_V3_SET_NR_COUNTERS 4 438 439 #define KVM_ARM_VCPU_TIMER_CTRL 1 439 440 #define KVM_ARM_VCPU_TIMER_IRQ_VTIMER 0 440 441 #define KVM_ARM_VCPU_TIMER_IRQ_PTIMER 1
+5
tools/arch/x86/include/asm/amd/ibs.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _ASM_X86_AMD_IBS_H 3 + #define _ASM_X86_AMD_IBS_H 4 + 2 5 /* 3 6 * From PPR Vol 1 for AMD Family 19h Model 01h B1 4 7 * 55898 Rev 0.35 - Feb 5, 2021 ··· 154 151 }; 155 152 u64 regs[MSR_AMD64_IBS_REG_COUNT_MAX]; 156 153 }; 154 + 155 + #endif /* _ASM_X86_AMD_IBS_H */
+10 -4
tools/arch/x86/include/asm/cpufeatures.h
··· 336 336 #define X86_FEATURE_AMD_IBRS (13*32+14) /* Indirect Branch Restricted Speculation */
337 337 #define X86_FEATURE_AMD_STIBP (13*32+15) /* Single Thread Indirect Branch Predictors */
338 338 #define X86_FEATURE_AMD_STIBP_ALWAYS_ON (13*32+17) /* Single Thread Indirect Branch Predictors always-on preferred */
339 - #define X86_FEATURE_AMD_IBRS_SAME_MODE (13*32+19) /* Indirect Branch Restricted Speculation same mode protection*/
339 + #define X86_FEATURE_AMD_IBRS_SAME_MODE (13*32+19) /* Indirect Branch Restricted Speculation same mode protection*/
340 340 #define X86_FEATURE_AMD_PPIN (13*32+23) /* "amd_ppin" Protected Processor Inventory Number */
341 341 #define X86_FEATURE_AMD_SSBD (13*32+24) /* Speculative Store Bypass Disable */
342 342 #define X86_FEATURE_VIRT_SSBD (13*32+25) /* "virt_ssbd" Virtualized Speculative Store Bypass Disable */
··· 379 379 #define X86_FEATURE_V_SPEC_CTRL (15*32+20) /* "v_spec_ctrl" Virtual SPEC_CTRL */
380 380 #define X86_FEATURE_VNMI (15*32+25) /* "vnmi" Virtual NMI */
381 381 #define X86_FEATURE_SVME_ADDR_CHK (15*32+28) /* SVME addr check */
382 + #define X86_FEATURE_BUS_LOCK_THRESHOLD (15*32+29) /* Bus lock threshold */
382 383 #define X86_FEATURE_IDLE_HLT (15*32+30) /* IDLE HLT intercept */
383 384 
384 385 /* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */
··· 448 447 #define X86_FEATURE_DEBUG_SWAP (19*32+14) /* "debug_swap" SEV-ES full debug state swap support */
449 448 #define X86_FEATURE_RMPREAD (19*32+21) /* RMPREAD instruction */
450 449 #define X86_FEATURE_SEGMENTED_RMP (19*32+23) /* Segmented RMP support */
450 + #define X86_FEATURE_ALLOWED_SEV_FEATURES (19*32+27) /* Allowed SEV Features */
451 451 #define X86_FEATURE_SVSM (19*32+28) /* "svsm" SVSM present */
452 452 #define X86_FEATURE_HV_INUSE_WR_ALLOWED (19*32+30) /* Allow Write to in-use hypervisor-owned pages */
··· 460 458 #define X86_FEATURE_AUTOIBRS (20*32+ 8) /* Automatic IBRS */
461 459 #define X86_FEATURE_NO_SMM_CTL_MSR (20*32+ 9) /* SMM_CTL MSR is not present */
462 460 
461 + #define X86_FEATURE_PREFETCHI (20*32+20) /* Prefetch Data/Instruction to Cache Level */
463 462 #define X86_FEATURE_SBPB (20*32+27) /* Selective Branch Prediction Barrier */
464 463 #define X86_FEATURE_IBPB_BRTYPE (20*32+28) /* MSR_PRED_CMD[IBPB] flushes all branch type predictions */
465 464 #define X86_FEATURE_SRSO_NO (20*32+29) /* CPU is not affected by SRSO */
··· 485 482 #define X86_FEATURE_AMD_HTR_CORES (21*32+ 6) /* Heterogeneous Core Topology */
486 483 #define X86_FEATURE_AMD_WORKLOAD_CLASS (21*32+ 7) /* Workload Classification */
487 484 #define X86_FEATURE_PREFER_YMM (21*32+ 8) /* Avoid ZMM registers due to downclocking */
488 - #define X86_FEATURE_INDIRECT_THUNK_ITS (21*32+ 9) /* Use thunk for indirect branches in lower half of cacheline */
485 + #define X86_FEATURE_APX (21*32+ 9) /* Advanced Performance Extensions */
486 + #define X86_FEATURE_INDIRECT_THUNK_ITS (21*32+10) /* Use thunk for indirect branches in lower half of cacheline */
489 487 
490 488 /*
491 489 * BUG word(s)
··· 539 535 #define X86_BUG_BHI X86_BUG( 1*32+ 3) /* "bhi" CPU is affected by Branch History Injection */
540 536 #define X86_BUG_IBPB_NO_RET X86_BUG( 1*32+ 4) /* "ibpb_no_ret" IBPB omits return target predictions */
541 537 #define X86_BUG_SPECTRE_V2_USER X86_BUG( 1*32+ 5) /* "spectre_v2_user" CPU is affected by Spectre variant 2 attack between user processes */
542 - #define X86_BUG_ITS X86_BUG( 1*32+ 6) /* "its" CPU is affected by Indirect Target Selection */
543 - #define X86_BUG_ITS_NATIVE_ONLY X86_BUG( 1*32+ 7) /* "its_native_only" CPU is affected by ITS, VMX is not affected */
538 + #define X86_BUG_OLD_MICROCODE X86_BUG( 1*32+ 6) /* "old_microcode" CPU has old microcode, it is surely vulnerable to something */
539 + #define X86_BUG_ITS X86_BUG( 1*32+ 7) /* "its" CPU is affected by Indirect Target Selection */
540 + #define X86_BUG_ITS_NATIVE_ONLY X86_BUG( 1*32+ 8) /* "its_native_only" CPU is affected by ITS, VMX is not affected */
541 + 
544 542 #endif /* _ASM_X86_CPUFEATURES_H */
+10 -6
tools/arch/x86/include/asm/msr-index.h
··· 533 533 #define MSR_HWP_CAPABILITIES 0x00000771 534 534 #define MSR_HWP_REQUEST_PKG 0x00000772 535 535 #define MSR_HWP_INTERRUPT 0x00000773 536 - #define MSR_HWP_REQUEST 0x00000774 536 + #define MSR_HWP_REQUEST 0x00000774 537 537 #define MSR_HWP_STATUS 0x00000777 538 538 539 539 /* CPUID.6.EAX */ ··· 550 550 #define HWP_LOWEST_PERF(x) (((x) >> 24) & 0xff) 551 551 552 552 /* IA32_HWP_REQUEST */ 553 - #define HWP_MIN_PERF(x) (x & 0xff) 554 - #define HWP_MAX_PERF(x) ((x & 0xff) << 8) 553 + #define HWP_MIN_PERF(x) (x & 0xff) 554 + #define HWP_MAX_PERF(x) ((x & 0xff) << 8) 555 555 #define HWP_DESIRED_PERF(x) ((x & 0xff) << 16) 556 - #define HWP_ENERGY_PERF_PREFERENCE(x) (((unsigned long long) x & 0xff) << 24) 556 + #define HWP_ENERGY_PERF_PREFERENCE(x) (((u64)x & 0xff) << 24) 557 557 #define HWP_EPP_PERFORMANCE 0x00 558 558 #define HWP_EPP_BALANCE_PERFORMANCE 0x80 559 559 #define HWP_EPP_BALANCE_POWERSAVE 0xC0 560 560 #define HWP_EPP_POWERSAVE 0xFF 561 - #define HWP_ACTIVITY_WINDOW(x) ((unsigned long long)(x & 0xff3) << 32) 562 - #define HWP_PACKAGE_CONTROL(x) ((unsigned long long)(x & 0x1) << 42) 561 + #define HWP_ACTIVITY_WINDOW(x) ((u64)(x & 0xff3) << 32) 562 + #define HWP_PACKAGE_CONTROL(x) ((u64)(x & 0x1) << 42) 563 563 564 564 /* IA32_HWP_STATUS */ 565 565 #define HWP_GUARANTEED_CHANGE(x) (x & 0x1) ··· 602 602 /* V6 PMON MSR range */ 603 603 #define MSR_IA32_PMC_V6_GP0_CTR 0x1900 604 604 #define MSR_IA32_PMC_V6_GP0_CFG_A 0x1901 605 + #define MSR_IA32_PMC_V6_GP0_CFG_B 0x1902 606 + #define MSR_IA32_PMC_V6_GP0_CFG_C 0x1903 605 607 #define MSR_IA32_PMC_V6_FX0_CTR 0x1980 608 + #define MSR_IA32_PMC_V6_FX0_CFG_B 0x1982 609 + #define MSR_IA32_PMC_V6_FX0_CFG_C 0x1983 606 610 #define MSR_IA32_PMC_V6_STEP 4 607 611 608 612 /* KeyID partitioning between MKTME and TDX */
+71
tools/arch/x86/include/uapi/asm/kvm.h
··· 441 441 #define KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS (1 << 6) 442 442 #define KVM_X86_QUIRK_SLOT_ZAP_ALL (1 << 7) 443 443 #define KVM_X86_QUIRK_STUFF_FEATURE_MSRS (1 << 8) 444 + #define KVM_X86_QUIRK_IGNORE_GUEST_PAT (1 << 9) 444 445 445 446 #define KVM_STATE_NESTED_FORMAT_VMX 0 446 447 #define KVM_STATE_NESTED_FORMAT_SVM 1 ··· 931 930 #define KVM_X86_SEV_ES_VM 3 932 931 #define KVM_X86_SNP_VM 4 933 932 #define KVM_X86_TDX_VM 5 933 + 934 + /* Trust Domain eXtension sub-ioctl() commands. */ 935 + enum kvm_tdx_cmd_id { 936 + KVM_TDX_CAPABILITIES = 0, 937 + KVM_TDX_INIT_VM, 938 + KVM_TDX_INIT_VCPU, 939 + KVM_TDX_INIT_MEM_REGION, 940 + KVM_TDX_FINALIZE_VM, 941 + KVM_TDX_GET_CPUID, 942 + 943 + KVM_TDX_CMD_NR_MAX, 944 + }; 945 + 946 + struct kvm_tdx_cmd { 947 + /* enum kvm_tdx_cmd_id */ 948 + __u32 id; 949 + /* flags for sub-commend. If sub-command doesn't use this, set zero. */ 950 + __u32 flags; 951 + /* 952 + * data for each sub-command. An immediate or a pointer to the actual 953 + * data in process virtual address. If sub-command doesn't use it, 954 + * set zero. 955 + */ 956 + __u64 data; 957 + /* 958 + * Auxiliary error code. The sub-command may return TDX SEAMCALL 959 + * status code in addition to -Exxx. 960 + */ 961 + __u64 hw_error; 962 + }; 963 + 964 + struct kvm_tdx_capabilities { 965 + __u64 supported_attrs; 966 + __u64 supported_xfam; 967 + __u64 reserved[254]; 968 + 969 + /* Configurable CPUID bits for userspace */ 970 + struct kvm_cpuid2 cpuid; 971 + }; 972 + 973 + struct kvm_tdx_init_vm { 974 + __u64 attributes; 975 + __u64 xfam; 976 + __u64 mrconfigid[6]; /* sha384 digest */ 977 + __u64 mrowner[6]; /* sha384 digest */ 978 + __u64 mrownerconfig[6]; /* sha384 digest */ 979 + 980 + /* The total space for TD_PARAMS before the CPUIDs is 256 bytes */ 981 + __u64 reserved[12]; 982 + 983 + /* 984 + * Call KVM_TDX_INIT_VM before vcpu creation, thus before 985 + * KVM_SET_CPUID2. 
986 + * This configuration supersedes KVM_SET_CPUID2s for VCPUs because the 987 + * TDX module directly virtualizes those CPUIDs without VMM. The user 988 + * space VMM, e.g. qemu, should make KVM_SET_CPUID2 consistent with 989 + * those values. If it doesn't, KVM may have wrong idea of vCPUIDs of 990 + * the guest, and KVM may wrongly emulate CPUIDs or MSRs that the TDX 991 + * module doesn't virtualize. 992 + */ 993 + struct kvm_cpuid2 cpuid; 994 + }; 995 + 996 + #define KVM_TDX_MEASURE_MEMORY_REGION _BITULL(0) 997 + 998 + struct kvm_tdx_init_mem_region { 999 + __u64 source_addr; 1000 + __u64 gpa; 1001 + __u64 nr_pages; 1002 + }; 934 1003 935 1004 #endif /* _ASM_X86_KVM_H */
+2
tools/arch/x86/include/uapi/asm/svm.h
··· 95 95 #define SVM_EXIT_CR14_WRITE_TRAP 0x09e 96 96 #define SVM_EXIT_CR15_WRITE_TRAP 0x09f 97 97 #define SVM_EXIT_INVPCID 0x0a2 98 + #define SVM_EXIT_BUS_LOCK 0x0a5 98 99 #define SVM_EXIT_IDLE_HLT 0x0a6 99 100 #define SVM_EXIT_NPF 0x400 100 101 #define SVM_EXIT_AVIC_INCOMPLETE_IPI 0x401 ··· 226 225 { SVM_EXIT_CR4_WRITE_TRAP, "write_cr4_trap" }, \ 227 226 { SVM_EXIT_CR8_WRITE_TRAP, "write_cr8_trap" }, \ 228 227 { SVM_EXIT_INVPCID, "invpcid" }, \ 228 + { SVM_EXIT_BUS_LOCK, "buslock" }, \ 229 229 { SVM_EXIT_IDLE_HLT, "idle-halt" }, \ 230 230 { SVM_EXIT_NPF, "npf" }, \ 231 231 { SVM_EXIT_AVIC_INCOMPLETE_IPI, "avic_incomplete_ipi" }, \
+4 -1
tools/arch/x86/include/uapi/asm/vmx.h
··· 34 34 #define EXIT_REASON_TRIPLE_FAULT 2 35 35 #define EXIT_REASON_INIT_SIGNAL 3 36 36 #define EXIT_REASON_SIPI_SIGNAL 4 37 + #define EXIT_REASON_OTHER_SMI 6 37 38 38 39 #define EXIT_REASON_INTERRUPT_WINDOW 7 39 40 #define EXIT_REASON_NMI_WINDOW 8 ··· 93 92 #define EXIT_REASON_TPAUSE 68 94 93 #define EXIT_REASON_BUS_LOCK 74 95 94 #define EXIT_REASON_NOTIFY 75 95 + #define EXIT_REASON_TDCALL 77 96 96 97 97 #define VMX_EXIT_REASONS \ 98 98 { EXIT_REASON_EXCEPTION_NMI, "EXCEPTION_NMI" }, \ ··· 157 155 { EXIT_REASON_UMWAIT, "UMWAIT" }, \ 158 156 { EXIT_REASON_TPAUSE, "TPAUSE" }, \ 159 157 { EXIT_REASON_BUS_LOCK, "BUS_LOCK" }, \ 160 - { EXIT_REASON_NOTIFY, "NOTIFY" } 158 + { EXIT_REASON_NOTIFY, "NOTIFY" }, \ 159 + { EXIT_REASON_TDCALL, "TDCALL" } 161 160 162 161 #define VMX_EXIT_REASON_FLAGS \ 163 162 { VMX_EXIT_REASONS_FAILED_VMENTRY, "FAILED_VMENTRY" }
+1
tools/arch/x86/lib/memcpy_64.S
··· 40 40 EXPORT_SYMBOL(__memcpy) 41 41 42 42 SYM_FUNC_ALIAS_MEMFUNC(memcpy, __memcpy) 43 + SYM_PIC_ALIAS(memcpy) 43 44 EXPORT_SYMBOL(memcpy) 44 45 45 46 SYM_FUNC_START_LOCAL(memcpy_orig)
+1
tools/arch/x86/lib/memset_64.S
··· 42 42 EXPORT_SYMBOL(__memset) 43 43 44 44 SYM_FUNC_ALIAS_MEMFUNC(memset, __memset) 45 + SYM_PIC_ALIAS(memset) 45 46 EXPORT_SYMBOL(memset) 46 47 47 48 SYM_FUNC_START_LOCAL(memset_orig)
+55 -2
tools/include/linux/bits.h
··· 12 12 #define BIT_ULL_MASK(nr) (ULL(1) << ((nr) % BITS_PER_LONG_LONG)) 13 13 #define BIT_ULL_WORD(nr) ((nr) / BITS_PER_LONG_LONG) 14 14 #define BITS_PER_BYTE 8 15 + #define BITS_PER_TYPE(type) (sizeof(type) * BITS_PER_BYTE) 15 16 16 17 /* 17 18 * Create a contiguous bitmask starting at bit position @l and ending at ··· 20 19 * GENMASK_ULL(39, 21) gives us the 64bit vector 0x000000ffffe00000. 21 20 */ 22 21 #if !defined(__ASSEMBLY__) 22 + 23 + /* 24 + * Missing asm support 25 + * 26 + * GENMASK_U*() and BIT_U*() depend on BITS_PER_TYPE() which relies on sizeof(), 27 + * something not available in asm. Nevertheless, fixed width integers is a C 28 + * concept. Assembly code can rely on the long and long long versions instead. 29 + */ 30 + 23 31 #include <linux/build_bug.h> 24 32 #include <linux/compiler.h> 33 + #include <linux/overflow.h> 34 + 25 35 #define GENMASK_INPUT_CHECK(h, l) BUILD_BUG_ON_ZERO(const_true((l) > (h))) 26 - #else 36 + 37 + /* 38 + * Generate a mask for the specified type @t. Additional checks are made to 39 + * guarantee the value returned fits in that type, relying on 40 + * -Wshift-count-overflow compiler check to detect incompatible arguments. 41 + * For example, all these create build errors or warnings: 42 + * 43 + * - GENMASK(15, 20): wrong argument order 44 + * - GENMASK(72, 15): doesn't fit unsigned long 45 + * - GENMASK_U32(33, 15): doesn't fit in a u32 46 + */ 47 + #define GENMASK_TYPE(t, h, l) \ 48 + ((t)(GENMASK_INPUT_CHECK(h, l) + \ 49 + (type_max(t) << (l) & \ 50 + type_max(t) >> (BITS_PER_TYPE(t) - 1 - (h))))) 51 + 52 + #define GENMASK_U8(h, l) GENMASK_TYPE(u8, h, l) 53 + #define GENMASK_U16(h, l) GENMASK_TYPE(u16, h, l) 54 + #define GENMASK_U32(h, l) GENMASK_TYPE(u32, h, l) 55 + #define GENMASK_U64(h, l) GENMASK_TYPE(u64, h, l) 56 + 57 + /* 58 + * Fixed-type variants of BIT(), with additional checks like GENMASK_TYPE(). 
The 59 + * following examples generate compiler warnings due to -Wshift-count-overflow: 60 + * 61 + * - BIT_U8(8) 62 + * - BIT_U32(-1) 63 + * - BIT_U32(40) 64 + */ 65 + #define BIT_INPUT_CHECK(type, nr) \ 66 + BUILD_BUG_ON_ZERO(const_true((nr) >= BITS_PER_TYPE(type))) 67 + 68 + #define BIT_TYPE(type, nr) ((type)(BIT_INPUT_CHECK(type, nr) + BIT_ULL(nr))) 69 + 70 + #define BIT_U8(nr) BIT_TYPE(u8, nr) 71 + #define BIT_U16(nr) BIT_TYPE(u16, nr) 72 + #define BIT_U32(nr) BIT_TYPE(u32, nr) 73 + #define BIT_U64(nr) BIT_TYPE(u64, nr) 74 + 75 + #else /* defined(__ASSEMBLY__) */ 76 + 27 77 /* 28 78 * BUILD_BUG_ON_ZERO is not available in h files included from asm files, 29 79 * disable the input check if that is the case. 30 80 */ 31 81 #define GENMASK_INPUT_CHECK(h, l) 0 32 - #endif 82 + 83 + #endif /* !defined(__ASSEMBLY__) */ 33 84 34 85 #define GENMASK(h, l) \ 35 86 (GENMASK_INPUT_CHECK(h, l) + __GENMASK(h, l))
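The new `GENMASK_TYPE()` builds the mask by shifting `type_max(t)` from both ends so that out-of-range arguments trip `-Wshift-count-overflow` at compile time. A small Python sketch of the same arithmetic, fixed to a u32 for illustration (names and runtime asserts are mine; the C version checks at compile time):

```python
# Sketch of the GENMASK_TYPE()/BIT_TYPE() arithmetic above, fixed to u32.
# type_max << l keeps bits l..31; type_max >> (31 - h) keeps bits 0..h;
# their AND is the contiguous mask h..l.

U32_MAX = 0xffffffff
BITS = 32

def genmask_u32(h, l):
    assert 0 <= l <= h < BITS   # stands in for GENMASK_INPUT_CHECK
    return ((U32_MAX << l) & (U32_MAX >> (BITS - 1 - h))) & U32_MAX

def bit_u32(nr):
    assert 0 <= nr < BITS       # stands in for BIT_INPUT_CHECK
    return 1 << nr

print(hex(genmask_u32(7, 4)))   # 0xf0
print(hex(genmask_u32(31, 0)))  # 0xffffffff
```

In C the checks are free: `GENMASK_U32(33, 15)` fails to compile because the right shift count goes negative, exactly as the comment's examples describe.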
+5 -5
tools/include/linux/build_bug.h
··· 4 4 5 5 #include <linux/compiler.h> 6 6 7 - #ifdef __CHECKER__ 8 - #define BUILD_BUG_ON_ZERO(e) (0) 9 - #else /* __CHECKER__ */ 10 7 /* 11 8 * Force a compilation error if condition is true, but also produce a 12 9 * result (of value 0 and type int), so the expression can be used 13 10 * e.g. in a structure initializer (or where-ever else comma expressions 14 11 * aren't permitted). 12 + * 13 + * Take an error message as an optional second argument. If omitted, 14 + * default to the stringification of the tested expression. 15 15 */ 16 - #define BUILD_BUG_ON_ZERO(e) ((int)(sizeof(struct { int:(-!!(e)); }))) 17 - #endif /* __CHECKER__ */ 16 + #define BUILD_BUG_ON_ZERO(e, ...) \ 17 + __BUILD_BUG_ON_ZERO_MSG(e, ##__VA_ARGS__, #e " is true") 18 18 19 19 /* Force a compilation error if a constant expression is not a power of 2 */ 20 20 #define __BUILD_BUG_ON_NOT_POWER_OF_2(n) \
+8
tools/include/linux/compiler.h
··· 244 244 __asm__ ("" : "=r" (var) : "0" (var)) 245 245 #endif 246 246 247 + #ifndef __BUILD_BUG_ON_ZERO_MSG 248 + #if defined(__clang__) 249 + #define __BUILD_BUG_ON_ZERO_MSG(e, msg, ...) ((int)(sizeof(struct { int:(-!!(e)); }))) 250 + #else 251 + #define __BUILD_BUG_ON_ZERO_MSG(e, msg, ...) ((int)sizeof(struct {_Static_assert(!(e), msg);})) 252 + #endif 253 + #endif 254 + 247 255 #endif /* __ASSEMBLY__ */ 248 256 249 257 #endif /* _TOOLS_LINUX_COMPILER_H */
+4
tools/include/uapi/drm/drm.h
··· 905 905 }; 906 906 907 907 #define DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_IMPORT_SYNC_FILE (1 << 0) 908 + #define DRM_SYNCOBJ_FD_TO_HANDLE_FLAGS_TIMELINE (1 << 1) 908 909 #define DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_EXPORT_SYNC_FILE (1 << 0) 910 + #define DRM_SYNCOBJ_HANDLE_TO_FD_FLAGS_TIMELINE (1 << 1) 909 911 struct drm_syncobj_handle { 910 912 __u32 handle; 911 913 __u32 flags; 912 914 913 915 __s32 fd; 914 916 __u32 pad; 917 + 918 + __u64 point; 915 919 }; 916 920 917 921 struct drm_syncobj_transfer {
+4 -2
tools/include/uapi/linux/fscrypt.h
··· 119 119 */ 120 120 struct fscrypt_provisioning_key_payload { 121 121 __u32 type; 122 - __u32 __reserved; 122 + __u32 flags; 123 123 __u8 raw[]; 124 124 }; 125 125 ··· 128 128 struct fscrypt_key_specifier key_spec; 129 129 __u32 raw_size; 130 130 __u32 key_id; 131 - __u32 __reserved[8]; 131 + #define FSCRYPT_ADD_KEY_FLAG_HW_WRAPPED 0x00000001 132 + __u32 flags; 133 + __u32 __reserved[7]; 132 134 __u8 raw[]; 133 135 }; 134 136
+4
tools/include/uapi/linux/kvm.h
··· 375 375 #define KVM_SYSTEM_EVENT_WAKEUP 4 376 376 #define KVM_SYSTEM_EVENT_SUSPEND 5 377 377 #define KVM_SYSTEM_EVENT_SEV_TERM 6 378 + #define KVM_SYSTEM_EVENT_TDX_FATAL 7 378 379 __u32 type; 379 380 __u32 ndata; 380 381 union { ··· 931 930 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237 932 931 #define KVM_CAP_X86_GUEST_MODE 238 933 932 #define KVM_CAP_ARM_WRITABLE_IMP_ID_REGS 239 933 + #define KVM_CAP_ARM_EL2 240 934 + #define KVM_CAP_ARM_EL2_E2H0 241 935 + #define KVM_CAP_RISCV_MP_STATE_RESET 242 934 936 935 937 struct kvm_irq_routing_irqchip { 936 938 __u32 irqchip;
+6 -2
tools/include/uapi/linux/stat.h
··· 182 182 /* File offset alignment for direct I/O reads */ 183 183 __u32 stx_dio_read_offset_align; 184 184 185 - /* 0xb8 */ 186 - __u64 __spare3[9]; /* Spare space for future expansion */ 185 + /* Optimised max atomic write unit in bytes */ 186 + __u32 stx_atomic_write_unit_max_opt; 187 + __u32 __spare2[1]; 188 + 189 + /* 0xc0 */ 190 + __u64 __spare3[8]; /* Spare space for future expansion */ 187 191 188 192 /* 0x100 */ 189 193 };
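The `struct statx` change swaps nine spare `__u64`s at offset 0xb8 for two `__u32` fields plus eight spares at 0xc0. A quick offset check (my arithmetic, not from the patch) confirms the tail still ends at the documented 0x100:

```python
# Offset arithmetic for the struct statx tail changed above:
# old: 0xb8 + 9 u64 spares; new: 0xb8 + two u32 fields + 8 u64 spares.
old_tail = 0xb8 + 9 * 8
new_tail = 0xb8 + 4 + 4 + 8 * 8
assert old_tail == new_tail == 0x100
print(hex(new_tail))  # 0x100
```

Keeping the end offset fixed is what makes consuming one spare slot an ABI-compatible extension.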
+3
tools/lib/bpf/btf_dump.c
··· 226 226 size_t bkt; 227 227 struct hashmap_entry *cur; 228 228 229 + if (!map) 230 + return; 231 + 229 232 hashmap__for_each_entry(map, cur, bkt) 230 233 free((void *)cur->pkey); 231 234
+7 -3
tools/lib/bpf/libbpf.c
··· 597 597 int sym_idx; 598 598 int btf_id; 599 599 int sec_btf_id; 600 - const char *name; 600 + char *name; 601 601 char *essent_name; 602 602 bool is_set; 603 603 bool is_weak; ··· 4259 4259 return ext->btf_id; 4260 4260 } 4261 4261 t = btf__type_by_id(obj->btf, ext->btf_id); 4262 - ext->name = btf__name_by_offset(obj->btf, t->name_off); 4262 + ext->name = strdup(btf__name_by_offset(obj->btf, t->name_off)); 4263 + if (!ext->name) 4264 + return -ENOMEM; 4263 4265 ext->sym_idx = i; 4264 4266 ext->is_weak = ELF64_ST_BIND(sym->st_info) == STB_WEAK; 4265 4267 ··· 9140 9138 zfree(&obj->btf_custom_path); 9141 9139 zfree(&obj->kconfig); 9142 9140 9143 - for (i = 0; i < obj->nr_extern; i++) 9141 + for (i = 0; i < obj->nr_extern; i++) { 9142 + zfree(&obj->externs[i].name); 9144 9143 zfree(&obj->externs[i].essent_name); 9144 + } 9145 9145 9146 9146 zfree(&obj->externs); 9147 9147 obj->nr_extern = 0;
+41 -16
tools/perf/Documentation/perf-amd-ibs.txt
··· 171 171 # perf mem report 172 172 173 173 A normal perf mem report output will provide detailed memory access profile. 174 - However, it can also be aggregated based on output fields. For example: 174 + New output fields will show related access info together. For example: 175 175 176 - # perf mem report -F mem,sample,snoop 177 - Samples: 3M of event 'ibs_op//', Event count (approx.): 23524876 178 - Memory access Samples Snoop 179 - N/A 1903343 N/A 180 - L1 hit 1056754 N/A 181 - L2 hit 75231 N/A 182 - L3 hit 9496 HitM 183 - L3 hit 2270 N/A 184 - RAM hit 8710 N/A 185 - Remote node, same socket RAM hit 3241 N/A 186 - Remote core, same node Any cache hit 1572 HitM 187 - Remote core, same node Any cache hit 514 N/A 188 - Remote node, same socket Any cache hit 1216 HitM 189 - Remote node, same socket Any cache hit 350 N/A 190 - Uncached hit 18 N/A 176 + # perf mem report -F overhead,cache,snoop,comm 177 + ... 178 + # Samples: 92K of event 'ibs_op//' 179 + # Total weight : 531104 180 + # 181 + # ---------- Cache ----------- --- Snoop ---- 182 + # Overhead L1 L2 L1-buf Other HitM Other Command 183 + # ........ ............................ .............. .......... 184 + # 185 + 76.07% 5.8% 35.7% 0.0% 34.6% 23.3% 52.8% cc1 186 + 5.79% 0.2% 0.0% 0.0% 5.6% 0.1% 5.7% make 187 + 5.78% 0.1% 4.4% 0.0% 1.2% 0.5% 5.3% gcc 188 + 5.33% 0.3% 3.9% 0.0% 1.1% 0.2% 5.2% as 189 + 5.00% 0.1% 3.8% 0.0% 1.0% 0.3% 4.7% sh 190 + 1.56% 0.1% 0.1% 0.0% 1.4% 0.6% 0.9% ld 191 + 0.28% 0.1% 0.0% 0.0% 0.2% 0.1% 0.2% pkg-config 192 + 0.09% 0.0% 0.0% 0.0% 0.1% 0.0% 0.1% git 193 + 0.03% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% rm 194 + ... 195 + 196 + Also, it can be aggregated based on various memory access info using the 197 + sort keys. For example: 198 + 199 + # perf mem report -s mem,snoop 200 + ... 201 + # Samples: 92K of event 'ibs_op//' 202 + # Total weight : 531104 203 + # Sort order : mem,snoop 204 + # 205 + # Overhead Samples Memory access Snoop 206 + # ........ ............ 
....................................... ............ 207 + # 208 + 47.99% 1509 L2 hit N/A 209 + 25.08% 338 core, same node Any cache hit HitM 210 + 10.24% 54374 N/A N/A 211 + 6.77% 35938 L1 hit N/A 212 + 6.39% 101 core, same node Any cache hit N/A 213 + 3.50% 69 RAM hit N/A 214 + 0.03% 158 LFB/MAB hit N/A 215 + 0.00% 2 Uncached hit N/A 191 216 192 217 Please refer to their man page for more detail. 193 218
+50
tools/perf/Documentation/perf-mem.txt
··· 119 119 And the default sort keys are changed to local_weight, mem, sym, dso, 120 120 symbol_daddr, dso_daddr, snoop, tlb, locked, blocked, local_ins_lat. 121 121 122 + -F:: 123 + --fields=:: 124 + Specify output field - multiple keys can be specified in CSV format. 125 + Please see linkperf:perf-report[1] for details. 126 + 127 + In addition to the default fields, 'perf mem report' will provide the 128 + following fields to break down sample periods. 129 + 130 + - op: operation in the sample instruction (load, store, prefetch, ...) 131 + - cache: location in CPU cache (L1, L2, ...) where the sample hit 132 + - mem: location in memory or other places the sample hit 133 + - dtlb: location in Data TLB (L1, L2) where the sample hit 134 + - snoop: snoop result for the sampled data access 135 + 136 + Please take a look at the OUTPUT FIELD SELECTION section for caveats. 137 + 122 138 -T:: 123 139 --type-profile:: 124 140 Show data-type profile result instead of code symbols. This requires ··· 171 155 $ perf mem report -F overhead,symbol 172 156 90% [k] memcpy 173 157 10% [.] strcmp 158 + 159 + OUTPUT FIELD SELECTION 160 + ---------------------- 161 + "perf mem report" adds a number of new output fields specific to data source 162 + information in the sample. Some of them have the same name with the existing 163 + sort keys ("mem" and "snoop"). So unlike other fields and sort keys, they'll 164 + behave differently when it's used by -F/--fields or -s/--sort. 165 + 166 + Using those two as output fields will aggregate samples altogether and show 167 + breakdown. 168 + 169 + $ perf mem report -F mem,snoop 170 + ... 171 + # ------ Memory ------- --- Snoop ---- 172 + # RAM Uncach Other HitM Other 173 + # ..................... .............. 174 + # 175 + 3.5% 0.0% 96.5% 25.1% 74.9% 176 + 177 + But using the same name for sort keys will aggregate samples for each type 178 + separately. 
179 + 180 + $ perf mem report -s mem,snoop 181 + # Overhead Samples Memory access Snoop 182 + # ........ ............ ....................................... ............ 183 + # 184 + 47.99% 1509 L2 hit N/A 185 + 25.08% 338 core, same node Any cache hit HitM 186 + 10.24% 54374 N/A N/A 187 + 6.77% 35938 L1 hit N/A 188 + 6.39% 101 core, same node Any cache hit N/A 189 + 3.50% 69 RAM hit N/A 190 + 0.03% 158 LFB/MAB hit N/A 191 + 0.00% 2 Uncached hit N/A 174 192 175 193 SEE ALSO 176 194 --------
-1
tools/perf/bench/futex-hash.c
··· 18 18 #include <stdlib.h> 19 19 #include <linux/compiler.h> 20 20 #include <linux/kernel.h> 21 - #include <linux/prctl.h> 22 21 #include <linux/zalloc.h> 23 22 #include <sys/time.h> 24 23 #include <sys/mman.h>
+8 -1
tools/perf/bench/futex.c
··· 2 2 #include <err.h> 3 3 #include <stdio.h> 4 4 #include <stdlib.h> 5 - #include <linux/prctl.h> 6 5 #include <sys/prctl.h> 7 6 8 7 #include "futex.h" 8 + 9 + #ifndef PR_FUTEX_HASH 10 + #define PR_FUTEX_HASH 78 11 + # define PR_FUTEX_HASH_SET_SLOTS 1 12 + # define FH_FLAG_IMMUTABLE (1ULL << 0) 13 + # define PR_FUTEX_HASH_GET_SLOTS 2 14 + # define PR_FUTEX_HASH_GET_IMMUTABLE 3 15 + #endif // PR_FUTEX_HASH 9 16 10 17 void futex_set_nbuckets_param(struct bench_futex_parameters *params) 11 18 {
+1 -1
tools/perf/check-headers.sh
··· 186 186 # diff with extra ignore lines 187 187 check arch/x86/lib/memcpy_64.S '-I "^EXPORT_SYMBOL" -I "^#include <asm/export.h>" -I"^SYM_FUNC_START\(_LOCAL\)*(memcpy_\(erms\|orig\))" -I"^#include <linux/cfi_types.h>"' 188 188 check arch/x86/lib/memset_64.S '-I "^EXPORT_SYMBOL" -I "^#include <asm/export.h>" -I"^SYM_FUNC_START\(_LOCAL\)*(memset_\(erms\|orig\))"' 189 - check arch/x86/include/asm/amd/ibs.h '-I "^#include [<\"]\(asm/\)*msr-index.h"' 189 + check arch/x86/include/asm/amd/ibs.h '-I "^#include .*/msr-index.h"' 190 190 check arch/arm64/include/asm/cputype.h '-I "^#include [<\"]\(asm/\)*sysreg.h"' 191 191 check include/linux/unaligned.h '-I "^#include <linux/unaligned/packed_struct.h>" -I "^#include <asm/byteorder.h>" -I "^#pragma GCC diagnostic"' 192 192 check include/uapi/asm-generic/mman.h '-I "^#include <\(uapi/\)*asm-generic/mman-common\(-tools\)*.h>"'
+10 -2
tools/perf/tests/shell/stat+event_uniquifying.sh
··· 9 9 err=0 10 10 11 11 test_event_uniquifying() { 12 - # We use `clockticks` to verify the uniquify behavior. 12 + # We use `clockticks` in `uncore_imc` to verify the uniquify behavior. 13 + pmu="uncore_imc" 13 14 event="clockticks" 14 15 15 16 # If the `-A` option is added, the event should be uniquified. ··· 44 43 echo "stat event uniquifying test" 45 44 uniquified_event_array=() 46 45 46 + # Skip if the machine does not have `uncore_imc` device. 47 + if ! ${perf_tool} list pmu | grep -q ${pmu}; then 48 + echo "Target does not support PMU ${pmu} [Skipped]" 49 + err=2 50 + return 51 + fi 52 + 47 53 # Check how many uniquified events. 48 54 while IFS= read -r line; do 49 55 uniquified_event=$(echo "$line" | awk '{print $1}') 50 56 uniquified_event_array+=("${uniquified_event}") 51 - done < <(${perf_tool} list -v ${event} | grep "\[Kernel PMU event\]") 57 + done < <(${perf_tool} list -v ${event} | grep ${pmu}) 52 58 53 59 perf_command="${perf_tool} stat -e $event -A -o ${stat_output} -- true" 54 60 $perf_command
+1
tools/perf/tests/tests-scripts.c
··· 260 260 continue; /* Skip scripts that have a separate driver. */ 261 261 fd = openat(dir_fd, ent->d_name, O_PATH); 262 262 append_scripts_in_dir(fd, result, result_sz); 263 + close(fd); 263 264 } 264 265 for (i = 0; i < n_dirs; i++) /* Clean up */ 265 266 zfree(&entlist[i]);
+1 -1
tools/perf/trace/beauty/include/linux/socket.h
··· 168 168 return __cmsg_nxthdr(__msg->msg_control, __msg->msg_controllen, __cmsg); 169 169 } 170 170 171 - static inline size_t msg_data_left(struct msghdr *msg) 171 + static inline size_t msg_data_left(const struct msghdr *msg) 172 172 { 173 173 return iov_iter_count(&msg->msg_iter); 174 174 }
+1
tools/perf/trace/beauty/include/uapi/linux/fs.h
··· 361 361 #define PAGE_IS_PFNZERO (1 << 5) 362 362 #define PAGE_IS_HUGE (1 << 6) 363 363 #define PAGE_IS_SOFT_DIRTY (1 << 7) 364 + #define PAGE_IS_GUARD (1 << 8) 364 365 365 366 /* 366 367 * struct page_region - Page region with flags
+7
tools/perf/trace/beauty/include/uapi/linux/prctl.h
··· 364 364 # define PR_TIMER_CREATE_RESTORE_IDS_ON 1 365 365 # define PR_TIMER_CREATE_RESTORE_IDS_GET 2 366 366 367 + /* FUTEX hash management */ 368 + #define PR_FUTEX_HASH 78 369 + # define PR_FUTEX_HASH_SET_SLOTS 1 370 + # define FH_FLAG_IMMUTABLE (1ULL << 0) 371 + # define PR_FUTEX_HASH_GET_SLOTS 2 372 + # define PR_FUTEX_HASH_GET_IMMUTABLE 3 373 + 367 374 #endif /* _LINUX_PRCTL_H */
+6 -2
tools/perf/trace/beauty/include/uapi/linux/stat.h
··· 182 182 /* File offset alignment for direct I/O reads */ 183 183 __u32 stx_dio_read_offset_align; 184 184 185 - /* 0xb8 */ 186 - __u64 __spare3[9]; /* Spare space for future expansion */ 185 + /* Optimised max atomic write unit in bytes */ 186 + __u32 stx_atomic_write_unit_max_opt; 187 + __u32 __spare2[1]; 188 + 189 + /* 0xc0 */ 190 + __u64 __spare3[8]; /* Spare space for future expansion */ 187 191 188 192 /* 0x100 */ 189 193 };
+4
tools/perf/util/include/linux/linkage.h
··· 132 132 SYM_TYPED_START(name, SYM_L_GLOBAL, SYM_A_ALIGN) 133 133 #endif 134 134 135 + #ifndef SYM_PIC_ALIAS 136 + #define SYM_PIC_ALIAS(sym) SYM_ALIAS(__pi_ ## sym, sym, SYM_T_FUNC, SYM_L_GLOBAL) 137 + #endif 138 + 135 139 #endif /* PERF_LINUX_LINKAGE_H_ */
+1
tools/perf/util/print-events.c
··· 268 268 ret = evsel__open(evsel, NULL, tmap) >= 0; 269 269 } 270 270 271 + evsel__close(evsel); 271 272 evsel__delete(evsel); 272 273 } 273 274
-1
tools/testing/selftests/bpf/.gitignore
··· 21 21 flow_dissector_load 22 22 test_tcpnotify_user 23 23 test_libbpf 24 - test_sysctl 25 24 xdping 26 25 test_cpp 27 26 *.d
+2 -3
tools/testing/selftests/bpf/Makefile
··· 73 73 # Order correspond to 'make run_tests' order 74 74 TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_progs \ 75 75 test_sockmap \ 76 - test_tcpnotify_user test_sysctl \ 76 + test_tcpnotify_user \ 77 77 test_progs-no_alu32 78 78 TEST_INST_SUBDIRS := no_alu32 79 79 ··· 220 220 $(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)") 221 221 endif 222 222 223 - # Define simple and short `make test_progs`, `make test_sysctl`, etc targets 223 + # Define simple and short `make test_progs`, `make test_maps`, etc targets 224 224 # to build individual tests. 225 225 # NOTE: Semicolon at the end is critical to override lib.mk's default static 226 226 # rule for binaries. ··· 329 329 $(OUTPUT)/test_sockmap: $(CGROUP_HELPERS) $(TESTING_HELPERS) 330 330 $(OUTPUT)/test_tcpnotify_user: $(CGROUP_HELPERS) $(TESTING_HELPERS) $(TRACE_HELPERS) 331 331 $(OUTPUT)/test_sock_fields: $(CGROUP_HELPERS) $(TESTING_HELPERS) 332 - $(OUTPUT)/test_sysctl: $(CGROUP_HELPERS) $(TESTING_HELPERS) 333 332 $(OUTPUT)/test_tag: $(TESTING_HELPERS) 334 333 $(OUTPUT)/test_lirc_mode2_user: $(TESTING_HELPERS) 335 334 $(OUTPUT)/xdping: $(TESTING_HELPERS)
+16
tools/testing/selftests/bpf/progs/test_global_map_resize.c
··· 32 32 33 33 int percpu_arr[1] SEC(".data.percpu_arr"); 34 34 35 + /* at least one extern is included, to ensure that a specific 36 + * regression is tested whereby resizing resulted in a free-after-use 37 + * bug after type information is invalidated by the resize operation. 38 + * 39 + * There isn't a particularly good API to test for this specific condition, 40 + * but by having externs for the resizing tests it will cover this path. 41 + */ 42 + extern int LINUX_KERNEL_VERSION __kconfig; 43 + long version_sink; 44 + 35 45 SEC("tp/syscalls/sys_enter_getpid") 36 46 int bss_array_sum(void *ctx) 37 47 { ··· 53 43 54 44 for (size_t i = 0; i < bss_array_len; ++i) 55 45 sum += array[i]; 46 + 47 + /* see above; ensure this is not optimized out */ 48 + version_sink = LINUX_KERNEL_VERSION; 56 49 57 50 return 0; 58 51 } ··· 71 58 72 59 for (size_t i = 0; i < data_array_len; ++i) 73 60 sum += my_array[i]; 61 + 62 + /* see above; ensure this is not optimized out */ 63 + version_sink = LINUX_KERNEL_VERSION; 74 64 75 65 return 0; 76 66 }
+18
tools/testing/selftests/bpf/progs/verifier_vfs_accept.c
··· 2 2 /* Copyright (c) 2024 Google LLC. */ 3 3 4 4 #include <vmlinux.h> 5 + #include <errno.h> 5 6 #include <bpf/bpf_helpers.h> 6 7 #include <bpf/bpf_tracing.h> 7 8 ··· 80 79 path = &file->f_path; 81 80 ret = bpf_path_d_path(path, buf, sizeof(buf)); 82 81 __sink(ret); 82 + return 0; 83 + } 84 + 85 + SEC("lsm.s/inode_rename") 86 + __success 87 + int BPF_PROG(inode_rename, struct inode *old_dir, struct dentry *old_dentry, 88 + struct inode *new_dir, struct dentry *new_dentry, 89 + unsigned int flags) 90 + { 91 + struct inode *inode = new_dentry->d_inode; 92 + ino_t ino; 93 + 94 + if (!inode) 95 + return 0; 96 + ino = inode->i_ino; 97 + if (ino == 0) 98 + return -EACCES; 83 99 return 0; 84 100 } 85 101
+15
tools/testing/selftests/bpf/progs/verifier_vfs_reject.c
··· 2 2 /* Copyright (c) 2024 Google LLC. */ 3 3 4 4 #include <vmlinux.h> 5 + #include <errno.h> 5 6 #include <bpf/bpf_helpers.h> 6 7 #include <bpf/bpf_tracing.h> 7 8 #include <linux/limits.h> ··· 159 158 return 0; 160 159 } 161 160 161 + SEC("lsm.s/inode_rename") 162 + __failure __msg("invalid mem access 'trusted_ptr_or_null_'") 163 + int BPF_PROG(inode_rename, struct inode *old_dir, struct dentry *old_dentry, 164 + struct inode *new_dir, struct dentry *new_dentry, 165 + unsigned int flags) 166 + { 167 + struct inode *inode = new_dentry->d_inode; 168 + ino_t ino; 169 + 170 + ino = inode->i_ino; 171 + if (ino == 0) 172 + return -EACCES; 173 + return 0; 174 + } 162 175 char _license[] SEC("license") = "GPL";
+53 -52
tools/testing/selftests/bpf/test_lru_map.c
··· 138 138 return ret; 139 139 } 140 140 141 + /* Derive target_free from map_size, same as bpf_common_lru_populate */ 142 + static unsigned int __tgt_size(unsigned int map_size) 143 + { 144 + return (map_size / nr_cpus) / 2; 145 + } 146 + 147 + /* Inverse of how bpf_common_lru_populate derives target_free from map_size. */ 148 + static unsigned int __map_size(unsigned int tgt_free) 149 + { 150 + return tgt_free * nr_cpus * 2; 151 + } 152 + 141 153 /* Size of the LRU map is 2 142 154 * Add key=1 (+1 key) 143 155 * Add key=2 (+1 key) ··· 243 231 printf("Pass\n"); 244 232 } 245 233 246 - /* Size of the LRU map is 1.5*tgt_free 247 - * Insert 1 to tgt_free (+tgt_free keys) 248 - * Lookup 1 to tgt_free/2 249 - * Insert 1+tgt_free to 2*tgt_free (+tgt_free keys) 250 - * => 1+tgt_free/2 to LOCALFREE_TARGET will be removed by LRU 234 + /* Verify that unreferenced elements are recycled before referenced ones. 235 + * Insert elements. 236 + * Reference a subset of these. 237 + * Insert more, enough to trigger recycling. 238 + * Verify that unreferenced are recycled. 
251 239 */ 252 240 static void test_lru_sanity1(int map_type, int map_flags, unsigned int tgt_free) 253 241 { ··· 269 257 batch_size = tgt_free / 2; 270 258 assert(batch_size * 2 == tgt_free); 271 259 272 - map_size = tgt_free + batch_size; 260 + map_size = __map_size(tgt_free) + batch_size; 273 261 lru_map_fd = create_map(map_type, map_flags, map_size); 274 262 assert(lru_map_fd != -1); 275 263 ··· 278 266 279 267 value[0] = 1234; 280 268 281 - /* Insert 1 to tgt_free (+tgt_free keys) */ 282 - end_key = 1 + tgt_free; 269 + /* Insert map_size - batch_size keys */ 270 + end_key = 1 + __map_size(tgt_free); 283 271 for (key = 1; key < end_key; key++) 284 272 assert(!bpf_map_update_elem(lru_map_fd, &key, value, 285 273 BPF_NOEXIST)); 286 274 287 - /* Lookup 1 to tgt_free/2 */ 275 + /* Lookup 1 to batch_size */ 288 276 end_key = 1 + batch_size; 289 277 for (key = 1; key < end_key; key++) { 290 278 assert(!bpf_map_lookup_elem_with_ref_bit(lru_map_fd, key, value)); ··· 292 280 BPF_NOEXIST)); 293 281 } 294 282 295 - /* Insert 1+tgt_free to 2*tgt_free 296 - * => 1+tgt_free/2 to LOCALFREE_TARGET will be 283 + /* Insert another map_size - batch_size keys 284 + * Map will contain 1 to batch_size plus these latest, i.e., 285 + * => previous 1+batch_size to map_size - batch_size will have been 297 286 * removed by LRU 298 287 */ 299 - key = 1 + tgt_free; 300 - end_key = key + tgt_free; 288 + key = 1 + __map_size(tgt_free); 289 + end_key = key + __map_size(tgt_free); 301 290 for (; key < end_key; key++) { 302 291 assert(!bpf_map_update_elem(lru_map_fd, &key, value, 303 292 BPF_NOEXIST)); ··· 314 301 printf("Pass\n"); 315 302 } 316 303 317 - /* Size of the LRU map 1.5 * tgt_free 318 - * Insert 1 to tgt_free (+tgt_free keys) 319 - * Update 1 to tgt_free/2 320 - * => The original 1 to tgt_free/2 will be removed due to 321 - * the LRU shrink process 322 - * Re-insert 1 to tgt_free/2 again and do a lookup immeidately 323 - * Insert 1+tgt_free to tgt_free*3/2 324 - * Insert 
1+tgt_free*3/2 to tgt_free*5/2 325 - * => Key 1+tgt_free to tgt_free*3/2 326 - * will be removed from LRU because it has never 327 - * been lookup and ref bit is not set 304 + /* Verify that insertions exceeding map size will recycle the oldest. 305 + * Verify that unreferenced elements are recycled before referenced. 328 306 */ 329 307 static void test_lru_sanity2(int map_type, int map_flags, unsigned int tgt_free) 330 308 { ··· 338 334 batch_size = tgt_free / 2; 339 335 assert(batch_size * 2 == tgt_free); 340 336 341 - map_size = tgt_free + batch_size; 337 + map_size = __map_size(tgt_free) + batch_size; 342 338 lru_map_fd = create_map(map_type, map_flags, map_size); 343 339 assert(lru_map_fd != -1); 344 340 ··· 347 343 348 344 value[0] = 1234; 349 345 350 - /* Insert 1 to tgt_free (+tgt_free keys) */ 351 - end_key = 1 + tgt_free; 346 + /* Insert map_size - batch_size keys */ 347 + end_key = 1 + __map_size(tgt_free); 352 348 for (key = 1; key < end_key; key++) 353 349 assert(!bpf_map_update_elem(lru_map_fd, &key, value, 354 350 BPF_NOEXIST)); ··· 361 357 * shrink the inactive list to get tgt_free 362 358 * number of free nodes. 363 359 * 364 - * Hence, the oldest key 1 to tgt_free/2 365 - * are removed from the LRU list. 360 + * Hence, the oldest key is removed from the LRU list. 366 361 */ 367 362 key = 1; 368 363 if (map_type == BPF_MAP_TYPE_LRU_PERCPU_HASH) { ··· 373 370 BPF_EXIST)); 374 371 } 375 372 376 - /* Re-insert 1 to tgt_free/2 again and do a lookup 377 - * immeidately. 373 + /* Re-insert 1 to batch_size again and do a lookup immediately. 
378 374 */ 379 375 end_key = 1 + batch_size; 380 376 value[0] = 4321; ··· 389 387 390 388 value[0] = 1234; 391 389 392 - /* Insert 1+tgt_free to tgt_free*3/2 */ 393 - end_key = 1 + tgt_free + batch_size; 394 - for (key = 1 + tgt_free; key < end_key; key++) 390 + /* Insert batch_size new elements */ 391 + key = 1 + __map_size(tgt_free); 392 + end_key = key + batch_size; 393 + for (; key < end_key; key++) 395 394 /* These newly added but not referenced keys will be 396 395 * gone during the next LRU shrink. 397 396 */ 398 397 assert(!bpf_map_update_elem(lru_map_fd, &key, value, 399 398 BPF_NOEXIST)); 400 399 401 - /* Insert 1+tgt_free*3/2 to tgt_free*5/2 */ 402 - end_key = key + tgt_free; 400 + /* Insert map_size - batch_size elements */ 401 + end_key += __map_size(tgt_free); 403 402 for (; key < end_key; key++) { 404 403 assert(!bpf_map_update_elem(lru_map_fd, &key, value, 405 404 BPF_NOEXIST)); ··· 416 413 printf("Pass\n"); 417 414 } 418 415 419 - /* Size of the LRU map is 2*tgt_free 420 - * It is to test the active/inactive list rotation 421 - * Insert 1 to 2*tgt_free (+2*tgt_free keys) 422 - * Lookup key 1 to tgt_free*3/2 423 - * Add 1+2*tgt_free to tgt_free*5/2 (+tgt_free/2 keys) 424 - * => key 1+tgt_free*3/2 to 2*tgt_free are removed from LRU 416 + /* Test the active/inactive list rotation 417 + * 418 + * Fill the whole map, deplete the free list. 419 + * Reference all except the last lru->target_free elements. 420 + * Insert lru->target_free new elements. This triggers one shrink. 421 + * Verify that the non-referenced elements are replaced. 
425 422 */ 426 423 static void test_lru_sanity3(int map_type, int map_flags, unsigned int tgt_free) 427 424 { ··· 440 437 441 438 assert(sched_next_online(0, &next_cpu) != -1); 442 439 443 - batch_size = tgt_free / 2; 444 - assert(batch_size * 2 == tgt_free); 440 + batch_size = __tgt_size(tgt_free); 445 441 446 442 map_size = tgt_free * 2; 447 443 lru_map_fd = create_map(map_type, map_flags, map_size); ··· 451 449 452 450 value[0] = 1234; 453 451 454 - /* Insert 1 to 2*tgt_free (+2*tgt_free keys) */ 455 - end_key = 1 + (2 * tgt_free); 452 + /* Fill the map */ 453 + end_key = 1 + map_size; 456 454 for (key = 1; key < end_key; key++) 457 455 assert(!bpf_map_update_elem(lru_map_fd, &key, value, 458 456 BPF_NOEXIST)); 459 457 460 - /* Lookup key 1 to tgt_free*3/2 */ 461 - end_key = tgt_free + batch_size; 458 + /* Reference all but the last batch_size */ 459 + end_key = 1 + map_size - batch_size; 462 460 for (key = 1; key < end_key; key++) { 463 461 assert(!bpf_map_lookup_elem_with_ref_bit(lru_map_fd, key, value)); 464 462 assert(!bpf_map_update_elem(expected_map_fd, &key, value, 465 463 BPF_NOEXIST)); 466 464 } 467 465 468 - /* Add 1+2*tgt_free to tgt_free*5/2 469 - * (+tgt_free/2 keys) 470 - */ 466 + /* Insert new batch_size: replaces the non-referenced elements */ 471 467 key = 2 * tgt_free + 1; 472 468 end_key = key + batch_size; 473 469 for (; key < end_key; key++) { ··· 500 500 lru_map_fd = create_map(map_type, map_flags, 501 501 3 * tgt_free * nr_cpus); 502 502 else 503 - lru_map_fd = create_map(map_type, map_flags, 3 * tgt_free); 503 + lru_map_fd = create_map(map_type, map_flags, 504 + 3 * __map_size(tgt_free)); 504 505 assert(lru_map_fd != -1); 505 506 506 507 expected_map_fd = create_map(BPF_MAP_TYPE_HASH, 0,
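The selftest's `__tgt_size()`/`__map_size()` helpers mirror how `bpf_common_lru_populate()` derives the per-CPU free-list batch. Per the `Documentation/bpf/map_hash.rst` text in this merge, that batch is clamped to a minimum of 1 and a maximum of 128. A hedged Python sketch of the derivation (the clamp and its bounds come from that documentation; `nr_cpus` is a parameter here, not probed from the system):

```python
# Sketch of the target_free derivation used by the LRU selftest helpers:
# half the map size split across CPUs, clamped to [1, 128] as described
# in Documentation/bpf/map_hash.rst.

def tgt_size(map_size, nr_cpus):
    return min(max(map_size // nr_cpus // 2, 1), 128)

def map_size_for(tgt_free, nr_cpus):   # inverse used by the selftest
    return tgt_free * nr_cpus * 2

print(tgt_size(1024, 4))                  # 128 (hits the upper clamp)
print(tgt_size(map_size_for(64, 4), 4))   # 64 (round-trips below it)
```

The round-trip property is what lets the test pick a `tgt_free` and size the map so each CPU's batch is exactly that value.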
+8 -29
tools/testing/selftests/bpf/test_sysctl.c → tools/testing/selftests/bpf/prog_tests/test_sysctl.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0
2 2 // Copyright (c) 2019 Facebook
3 3
4 - #include <fcntl.h>
5 - #include <stdint.h>
6 - #include <stdio.h>
7 - #include <stdlib.h>
8 - #include <string.h>
9 - #include <unistd.h>
10 -
11 - #include <linux/filter.h>
12 -
13 - #include <bpf/bpf.h>
14 - #include <bpf/libbpf.h>
15 -
16 - #include <bpf/bpf_endian.h>
17 - #include "bpf_util.h"
4 + #include "test_progs.h"
18 5 #include "cgroup_helpers.h"
19 - #include "testing_helpers.h"
20 6
21 7 #define CG_PATH "/foo"
22 8 #define MAX_INSNS 512
··· 1594 1608 	return fails ? -1 : 0;
1595 1609 }
1596 1610
1597 - int main(int argc, char **argv)
1611 + void test_sysctl(void)
1598 1612 {
1599 - 	int cgfd = -1;
1600 - 	int err = 0;
1613 + 	int cgfd;
1601 1614
1602 1615 	cgfd = cgroup_setup_and_join(CG_PATH);
1603 - 	if (cgfd < 0)
1604 - 		goto err;
1616 + 	if (!ASSERT_OK_FD(cgfd, "create_cgroup"))
1617 + 		goto out;
1605 1618
1606 - 	/* Use libbpf 1.0 API mode */
1607 - 	libbpf_set_strict_mode(LIBBPF_STRICT_ALL);
1619 + 	if (!ASSERT_OK(run_tests(cgfd), "run_tests"))
1620 + 		goto out;
1608 1621
1609 - 	if (run_tests(cgfd))
1610 - 		goto err;
1611 -
1612 - 	goto out;
1613 - err:
1614 - 	err = -1;
1615 1622 out:
1616 1623 	close(cgfd);
1617 1624 	cleanup_cgroup_environment();
1618 - 	return err;
1625 + 	return;
1619 1626 }
+1 -1
tools/testing/selftests/drivers/net/hw/rss_input_xfrm.py
··· 38 38         raise KsftSkipEx("socket.SO_INCOMING_CPU was added in Python 3.11")
39 39
40 40     input_xfrm = cfg.ethnl.rss_get(
41 -         {'header': {'dev-name': cfg.ifname}}).get('input_xfrm')
41 +         {'header': {'dev-name': cfg.ifname}}).get('input-xfrm')
42 42
43 43     # Check for symmetric xor/or-xor
44 44     if not input_xfrm or (input_xfrm != 1 and input_xfrm != 2):
+7 -3
tools/testing/selftests/futex/functional/futex_numa_mpol.c
··· 144 144 	struct futex32_numa *futex_numa;
145 145 	int mem_size, i;
146 146 	void *futex_ptr;
147 - 	char c;
147 + 	int c;
148 148
149 149 	while ((c = getopt(argc, argv, "chv:")) != -1) {
150 150 		switch (c) {
··· 210 210 	ret = mbind(futex_ptr, mem_size, MPOL_BIND, &nodemask,
211 211 		    sizeof(nodemask) * 8, 0);
212 212 	if (ret == 0) {
213 + 		ret = numa_set_mempolicy_home_node(futex_ptr, mem_size, i, 0);
214 + 		if (ret != 0)
215 + 			ksft_exit_fail_msg("Failed to set home node: %m, %d\n", errno);
216 +
213 217 		ksft_print_msg("Node %d test\n", i);
214 218 		futex_numa->futex = 0;
215 219 		futex_numa->numa = FUTEX_NO_NODE;
··· 224 220 		if (0)
225 221 			test_futex_mpol(futex_numa, 0);
226 222 		if (futex_numa->numa != i) {
227 - 			ksft_test_result_fail("Returned NUMA node is %d expected %d\n",
228 - 					      futex_numa->numa, i);
223 + 			ksft_exit_fail_msg("Returned NUMA node is %d expected %d\n",
224 + 					   futex_numa->numa, i);
229 225 		}
230 226 	}
+1 -1
tools/testing/selftests/futex/functional/futex_priv_hash.c
··· 130 130 	pthread_mutexattr_t mutex_attr_pi;
131 131 	int use_global_hash = 0;
132 132 	int ret;
133 - 	char c;
133 + 	int c;
134 134
135 135 	while ((c = getopt(argc, argv, "cghv:")) != -1) {
136 136 		switch (c) {
+13 -3
tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
··· 954 954 	pr_debug("ptimer_irq: %d; vtimer_irq: %d\n", ptimer_irq, vtimer_irq);
955 955 }
956 956
957 + static int gic_fd;
958 +
957 959 static void test_vm_create(struct kvm_vm **vm, struct kvm_vcpu **vcpu,
958 960 			   enum arch_timer timer)
959 961 {
··· 970 968 	vcpu_args_set(*vcpu, 1, timer);
971 969
972 970 	test_init_timer_irq(*vm, *vcpu);
973 - 	vgic_v3_setup(*vm, 1, 64);
971 + 	gic_fd = vgic_v3_setup(*vm, 1, 64);
972 + 	__TEST_REQUIRE(gic_fd >= 0, "Failed to create vgic-v3");
973 +
974 974 	sync_global_to_guest(*vm, test_args);
975 975 	sync_global_to_guest(*vm, CVAL_MAX);
976 976 	sync_global_to_guest(*vm, DEF_CNT);
977 + }
978 +
979 + static void test_vm_cleanup(struct kvm_vm *vm)
980 + {
981 + 	close(gic_fd);
982 + 	kvm_vm_free(vm);
977 983 }
978 984
979 985 static void test_print_help(char *name)
··· 1070 1060 	if (test_args.test_virtual) {
1071 1061 		test_vm_create(&vm, &vcpu, VIRTUAL);
1072 1062 		test_run(vm, vcpu);
1073 - 		kvm_vm_free(vm);
1063 + 		test_vm_cleanup(vm);
1074 1064 	}
1075 1065
1076 1066 	if (test_args.test_physical) {
1077 1067 		test_vm_create(&vm, &vcpu, PHYSICAL);
1078 1068 		test_run(vm, vcpu);
1079 - 		kvm_vm_free(vm);
1069 + 		test_vm_cleanup(vm);
1080 1070 	}
1081 1071
1082 1072 	return 0;
+3
tools/testing/selftests/mm/config
··· 8 8 CONFIG_TRANSPARENT_HUGEPAGE=y
9 9 CONFIG_MEM_SOFT_DIRTY=y
10 10 CONFIG_ANON_VMA_NAME=y
11 + CONFIG_FTRACE=y
12 + CONFIG_PROFILING=y
13 + CONFIG_UPROBES=y
+4 -1
tools/testing/selftests/mm/merge.c
··· 470 470 	ASSERT_GE(fd, 0);
471 471
472 472 	ASSERT_EQ(ftruncate(fd, page_size), 0);
473 - 	ASSERT_EQ(read_sysfs("/sys/bus/event_source/devices/uprobe/type", &type), 0);
473 + 	if (read_sysfs("/sys/bus/event_source/devices/uprobe/type", &type) != 0) {
474 + 		SKIP(goto out, "Failed to read uprobe sysfs file, skipping");
475 + 	}
474 476
475 477 	memset(&attr, 0, attr_sz);
476 478 	attr.size = attr_sz;
··· 493 491 	ASSERT_NE(mremap(ptr2, page_size, page_size,
494 492 			 MREMAP_MAYMOVE | MREMAP_FIXED, ptr1), MAP_FAILED);
495 493
494 + out:
496 495 	close(fd);
497 496 	remove(probe_file);
498 497 }
+1 -1
tools/testing/selftests/mm/settings
··· 1 - timeout=180
1 + timeout=900
+138 -4
tools/testing/selftests/net/af_unix/msg_oob.c
··· 210 210 static void __recvpair(struct __test_metadata *_metadata,
211 211 		       FIXTURE_DATA(msg_oob) *self,
212 212 		       const char *expected_buf, int expected_len,
213 - 		       int buf_len, int flags)
213 + 		       int buf_len, int flags, bool is_sender)
214 214 {
215 215 	int i, ret[2], recv_errno[2], expected_errno = 0;
216 216 	char recv_buf[2][BUF_SZ] = {};
··· 221 221 	errno = 0;
222 222
223 223 	for (i = 0; i < 2; i++) {
224 - 		ret[i] = recv(self->fd[i * 2 + 1], recv_buf[i], buf_len, flags);
224 + 		int index = is_sender ? i * 2 : i * 2 + 1;
225 +
226 + 		ret[i] = recv(self->fd[index], recv_buf[i], buf_len, flags);
225 227 		recv_errno[i] = errno;
226 228 	}
··· 310 308 	ASSERT_EQ(answ[0], answ[1]);
311 309 }
312 310
311 + static void __resetpair(struct __test_metadata *_metadata,
312 + 			FIXTURE_DATA(msg_oob) *self,
313 + 			const FIXTURE_VARIANT(msg_oob) *variant,
314 + 			bool reset)
315 + {
316 + 	int i;
317 +
318 + 	for (i = 0; i < 2; i++)
319 + 		close(self->fd[i * 2 + 1]);
320 +
321 + 	__recvpair(_metadata, self, "", reset ? -ECONNRESET : 0, 1,
322 + 		   variant->peek ? MSG_PEEK : 0, true);
323 + }
324 +
313 325 #define sendpair(buf, len, flags) \
314 326 	__sendpair(_metadata, self, buf, len, flags)
315 327
··· 332 316 	if (variant->peek) \
333 317 		__recvpair(_metadata, self, \
334 318 			   expected_buf, expected_len, \
335 - 			   buf_len, (flags) | MSG_PEEK); \
319 + 			   buf_len, (flags) | MSG_PEEK, false); \
336 320 	__recvpair(_metadata, self, \
337 - 		   expected_buf, expected_len, buf_len, flags); \
321 + 		   expected_buf, expected_len, \
322 + 		   buf_len, flags, false); \
338 323 } while (0)
339 324
340 325 #define epollpair(oob_remaining) \
··· 346 329
347 330 #define setinlinepair() \
348 331 	__setinlinepair(_metadata, self)
332 +
333 + #define resetpair(reset) \
334 + 	__resetpair(_metadata, self, variant, reset)
349 335
350 336 #define tcp_incompliant \
351 337 	for (self->tcp_compliant = false; \
··· 364 344 	recvpair("", -EINVAL, 1, MSG_OOB);
365 345 	epollpair(false);
366 346 	siocatmarkpair(false);
347 +
348 + 	resetpair(true);
349 + }
350 +
351 + TEST_F(msg_oob, non_oob_no_reset)
352 + {
353 + 	sendpair("x", 1, 0);
354 + 	epollpair(false);
355 + 	siocatmarkpair(false);
356 +
357 + 	recvpair("x", 1, 1, 0);
358 + 	epollpair(false);
359 + 	siocatmarkpair(false);
360 +
361 + 	resetpair(false);
367 362 }
368 363
369 364 TEST_F(msg_oob, oob)
··· 390 355 	recvpair("x", 1, 1, MSG_OOB);
391 356 	epollpair(false);
392 357 	siocatmarkpair(true);
358 +
359 + 	tcp_incompliant {
360 + 		resetpair(false); /* TCP sets -ECONNRESET for ex-OOB. */
361 + 	}
362 + }
363 +
364 + TEST_F(msg_oob, oob_reset)
365 + {
366 + 	sendpair("x", 1, MSG_OOB);
367 + 	epollpair(true);
368 + 	siocatmarkpair(true);
369 +
370 + 	resetpair(true);
393 371 }
394 372
395 373 TEST_F(msg_oob, oob_drop)
··· 418 370 	recvpair("", -EINVAL, 1, MSG_OOB);
419 371 	epollpair(false);
420 372 	siocatmarkpair(false);
373 +
374 + 	resetpair(false);
421 375 }
422 376
423 377 TEST_F(msg_oob, oob_ahead)
··· 435 385 	recvpair("hell", 4, 4, 0);
436 386 	epollpair(false);
437 387 	siocatmarkpair(true);
388 +
389 + 	tcp_incompliant {
390 + 		resetpair(false); /* TCP sets -ECONNRESET for ex-OOB. */
391 + 	}
438 392 }
439 393
440 394 TEST_F(msg_oob, oob_break)
··· 457 403
458 404 	recvpair("", -EAGAIN, 1, 0);
459 405 	siocatmarkpair(false);
406 +
407 + 	resetpair(false);
460 408 }
461 409
462 410 TEST_F(msg_oob, oob_ahead_break)
··· 482 426 	recvpair("world", 5, 5, 0);
483 427 	epollpair(false);
484 428 	siocatmarkpair(false);
429 +
430 + 	resetpair(false);
485 431 }
486 432
487 433 TEST_F(msg_oob, oob_break_drop)
··· 507 449 	recvpair("", -EINVAL, 1, MSG_OOB);
508 450 	epollpair(false);
509 451 	siocatmarkpair(false);
452 +
453 + 	resetpair(false);
510 454 }
511 455
512 456 TEST_F(msg_oob, ex_oob_break)
··· 536 476 	recvpair("ld", 2, 2, 0);
537 477 	epollpair(false);
538 478 	siocatmarkpair(false);
479 +
480 + 	resetpair(false);
539 481 }
540 482
541 483 TEST_F(msg_oob, ex_oob_drop)
··· 560 498 		epollpair(false);
561 499 		siocatmarkpair(true);
562 500 	}
501 +
502 + 	resetpair(false);
563 503 }
564 504
565 505 TEST_F(msg_oob, ex_oob_drop_2)
··· 587 523 		epollpair(false);
588 524 		siocatmarkpair(true);
589 525 	}
526 +
527 + 	resetpair(false);
590 528 }
591 529
592 530 TEST_F(msg_oob, ex_oob_oob)
··· 612 546 	recvpair("", -EINVAL, 1, MSG_OOB);
613 547 	epollpair(false);
614 548 	siocatmarkpair(false);
549 +
550 + 	resetpair(false);
551 + }
552 +
553 + TEST_F(msg_oob, ex_oob_ex_oob)
554 + {
555 + 	sendpair("x", 1, MSG_OOB);
556 + 	epollpair(true);
557 + 	siocatmarkpair(true);
558 +
559 + 	recvpair("x", 1, 1, MSG_OOB);
560 + 	epollpair(false);
561 + 	siocatmarkpair(true);
562 +
563 + 	sendpair("y", 1, MSG_OOB);
564 + 	epollpair(true);
565 + 	siocatmarkpair(true);
566 +
567 + 	recvpair("y", 1, 1, MSG_OOB);
568 + 	epollpair(false);
569 + 	siocatmarkpair(true);
570 +
571 + 	tcp_incompliant {
572 + 		resetpair(false); /* TCP sets -ECONNRESET for ex-OOB. */
573 + 	}
574 + }
575 +
576 + TEST_F(msg_oob, ex_oob_ex_oob_oob)
577 + {
578 + 	sendpair("x", 1, MSG_OOB);
579 + 	epollpair(true);
580 + 	siocatmarkpair(true);
581 +
582 + 	recvpair("x", 1, 1, MSG_OOB);
583 + 	epollpair(false);
584 + 	siocatmarkpair(true);
585 +
586 + 	sendpair("y", 1, MSG_OOB);
587 + 	epollpair(true);
588 + 	siocatmarkpair(true);
589 +
590 + 	recvpair("y", 1, 1, MSG_OOB);
591 + 	epollpair(false);
592 + 	siocatmarkpair(true);
593 +
594 + 	sendpair("z", 1, MSG_OOB);
595 + 	epollpair(true);
596 + 	siocatmarkpair(true);
615 597 }
616 598
617 599 TEST_F(msg_oob, ex_oob_ahead_break)
··· 690 576 	recvpair("d", 1, 1, MSG_OOB);
691 577 	epollpair(false);
692 578 	siocatmarkpair(true);
579 +
580 + 	tcp_incompliant {
581 + 		resetpair(false); /* TCP sets -ECONNRESET for ex-OOB. */
582 + 	}
693 583 }
694 584
695 585 TEST_F(msg_oob, ex_oob_siocatmark)
··· 713 595 	recvpair("hell", 4, 4, 0); /* Intentionally stop at ex-OOB. */
714 596 	epollpair(true);
715 597 	siocatmarkpair(false);
598 +
599 + 	resetpair(true);
716 600 }
717 601
718 602 TEST_F(msg_oob, inline_oob)
··· 732 612 	recvpair("x", 1, 1, 0);
733 613 	epollpair(false);
734 614 	siocatmarkpair(false);
615 +
616 + 	resetpair(false);
735 617 }
736 618
737 619 TEST_F(msg_oob, inline_oob_break)
··· 755 633 	recvpair("o", 1, 1, 0);
756 634 	epollpair(false);
757 635 	siocatmarkpair(false);
636 +
637 + 	resetpair(false);
758 638 }
759 639
760 640 TEST_F(msg_oob, inline_oob_ahead_break)
··· 785 661
786 662 	epollpair(false);
787 663 	siocatmarkpair(false);
664 +
665 + 	resetpair(false);
788 666 }
789 667
790 668 TEST_F(msg_oob, inline_ex_oob_break)
··· 812 686 	recvpair("rld", 3, 3, 0);
813 687 	epollpair(false);
814 688 	siocatmarkpair(false);
689 +
690 + 	resetpair(false);
815 691 }
816 692
817 693 TEST_F(msg_oob, inline_ex_oob_no_drop)
··· 835 707 	recvpair("y", 1, 1, 0);
836 708 	epollpair(false);
837 709 	siocatmarkpair(false);
710 +
711 + 	resetpair(false);
838 712 }
839 713
840 714 TEST_F(msg_oob, inline_ex_oob_drop)
··· 861 731 		epollpair(false);
862 732 		siocatmarkpair(false);
863 733 	}
734 +
735 + 	resetpair(false);
864 736 }
865 737
866 738 TEST_F(msg_oob, inline_ex_oob_siocatmark)
··· 884 752 	recvpair("hell", 4, 4, 0); /* Intentionally stop at ex-OOB. */
885 753 	epollpair(true);
886 754 	siocatmarkpair(false);
755 +
756 + 	resetpair(true);
887 757 }
888 758
889 759 TEST_HARNESS_MAIN