Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.13-rc6).

No conflicts.

Adjacent changes:

include/linux/if_vlan.h
f91a5b808938 ("af_packet: fix vlan_get_protocol_dgram() vs MSG_PEEK")
3f330db30638 ("net: reformat kdoc return statements")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+4605 -1931
+1
.mailmap
@@ -735,6 +735,7 @@
 Wolfram Sang <wsa@kernel.org> <wsa@the-dreams.de>
 Yakir Yang <kuankuan.y@gmail.com> <ykk@rock-chips.com>
 Yanteng Si <si.yanteng@linux.dev> <siyanteng@loongson.cn>
+Ying Huang <huang.ying.caritas@gmail.com> <ying.huang@intel.com>
 Yusuke Goda <goda.yusuke@renesas.com>
 Zack Rusin <zack.rusin@broadcom.com> <zackr@vmware.com>
 Zhu Yanjun <zyjzyj2000@gmail.com> <yanjunz@nvidia.com>
+7 -3
Documentation/admin-guide/laptops/thinkpad-acpi.rst
@@ -445,8 +445,10 @@
 0x1008 0x07 FN+F8    IBM: toggle screen expand
                      Lenovo: configure UltraNav,
                      or toggle screen expand.
-                     On newer platforms (2024+)
-                     replaced by 0x131f (see below)
+                     On 2024 platforms replaced by
+                     0x131f (see below) and on newer
+                     platforms (2025 +) keycode is
+                     replaced by 0x1401 (see below).

 0x1009 0x08 FN+F9    -

@@ -508,9 +506,11 @@

 0x1019 0x18 unknown

-0x131f ...  FN+F8    Platform Mode change.
+0x131f ...  FN+F8    Platform Mode change (2024 systems).
                      Implemented in driver.

+0x1401 ...  FN+F8    Platform Mode change (2025 + systems).
+                     Implemented in driver.
 ...    ...  ...

 0x1020 0x1F unknown
+1 -3
Documentation/admin-guide/pm/amd-pstate.rst
@@ -251,9 +251,7 @@
 In some ASICs, the highest CPPC performance is not the one in the ``_CPC``
 table, so we need to expose it to sysfs. If boost is not active, but
 still supported, this maximum frequency will be larger than the one in
-``cpuinfo``. On systems that support preferred core, the driver will have
-different values for some cores than others and this will reflect the values
-advertised by the platform at bootup.
+``cpuinfo``.
 This attribute is read-only.

 ``amd_pstate_lowest_nonlinear_freq``
+6 -4
Documentation/devicetree/bindings/crypto/fsl,sec-v4.0.yaml
@@ -114,8 +114,9 @@
       table that specifies the PPID to LIODN mapping.  Needed if the PAMU is
       used.  Value is a 12 bit value where value is a LIODN ID for this JR.
       This property is normally set by boot firmware.
-    $ref: /schemas/types.yaml#/definitions/uint32
-    maximum: 0xfff
+    $ref: /schemas/types.yaml#/definitions/uint32-array
+    items:
+      - maximum: 0xfff

   '^rtic@[0-9a-f]+$':
     type: object
@@ -187,8 +186,9 @@

       Needed if the PAMU is used.  Value is a 12 bit value where value
       is a LIODN ID for this JR.  This property is normally set by boot
-    $ref: /schemas/types.yaml#/definitions/uint32
-    maximum: 0xfff
+    $ref: /schemas/types.yaml#/definitions/uint32-array
+    items:
+      - maximum: 0xfff

   fsl,rtic-region:
     description:
+1 -1
Documentation/devicetree/bindings/display/bridge/adi,adv7533.yaml
@@ -90,7 +90,7 @@
   adi,dsi-lanes:
     description: Number of DSI data lanes connected to the DSI host.
     $ref: /schemas/types.yaml#/definitions/uint32
-    enum: [ 1, 2, 3, 4 ]
+    enum: [ 2, 3, 4 ]

   "#sound-dai-cells":
     const: 0
+1 -1
Documentation/devicetree/bindings/mtd/partitions/fixed-partitions.yaml
@@ -82,7 +82,7 @@


         uimage@100000 {
           reg = <0x0100000 0x200000>;
-          compress = "lzma";
+          compression = "lzma";
         };
       };

+2
Documentation/devicetree/bindings/soc/fsl/fsl,qman-portal.yaml
@@ -35,6 +35,7 @@

   fsl,liodn:
     $ref: /schemas/types.yaml#/definitions/uint32-array
+    maxItems: 2
     description: See pamu.txt. Two LIODN(s). DQRR LIODN (DLIODN) and Frame LIODN
       (FLIODN)

@@ -70,6 +69,7 @@
     type: object
     properties:
       fsl,liodn:
+        $ref: /schemas/types.yaml#/definitions/uint32-array
         description: See pamu.txt, PAMU property used for static LIODN assignment

       fsl,iommu-parent:
+1 -1
Documentation/devicetree/bindings/sound/realtek,rt5645.yaml
@@ -51,7 +51,7 @@
     description: Power supply for AVDD, providing 1.8V.

   cpvdd-supply:
-    description: Power supply for CPVDD, providing 3.5V.
+    description: Power supply for CPVDD, providing 1.8V.

   hp-detect-gpios:
     description:
+850
Documentation/mm/process_addrs.rst
=================
Process Addresses
=================

.. toctree::
   :maxdepth: 3


Userland memory ranges are tracked by the kernel via Virtual Memory Areas or
'VMA's of type :c:struct:`!struct vm_area_struct`.

Each VMA describes a virtually contiguous memory range with identical
attributes, each described by a :c:struct:`!struct vm_area_struct`
object. Userland access outside of VMAs is invalid except in the case where an
adjacent stack VMA could be extended to contain the accessed address.

All VMAs are contained within one and only one virtual address space, described
by a :c:struct:`!struct mm_struct` object which is referenced by all tasks (that is,
threads) which share the virtual address space. We refer to this as the
:c:struct:`!mm`.

Each mm object contains a maple tree data structure which describes all VMAs
within the virtual address space.

.. note:: An exception to this is the 'gate' VMA which is provided by
          architectures which use :c:struct:`!vsyscall` and is a global static
          object which does not belong to any specific mm.

-------
Locking
-------

The kernel is designed to be highly scalable against concurrent read operations
on VMA **metadata**, so a complicated set of locks is required to ensure memory
corruption does not occur.

.. note:: Locking VMAs for their metadata does not have any impact on the memory
          they describe nor the page tables that map them.

Terminology
-----------

* **mmap locks** - Each MM has a read/write semaphore :c:member:`!mmap_lock`
  which locks at a process address space granularity which can be acquired via
  :c:func:`!mmap_read_lock`, :c:func:`!mmap_write_lock` and variants.
* **VMA locks** - The VMA lock is at VMA granularity (of course) which behaves
  as a read/write semaphore in practice. A VMA read lock is obtained via
  :c:func:`!lock_vma_under_rcu` (and unlocked via :c:func:`!vma_end_read`) and a
  write lock via :c:func:`!vma_start_write` (all VMA write locks are unlocked
  automatically when the mmap write lock is released). To take a VMA write lock
  you **must** have already acquired an :c:func:`!mmap_write_lock`.
* **rmap locks** - When trying to access VMAs through the reverse mapping via a
  :c:struct:`!struct address_space` or :c:struct:`!struct anon_vma` object
  (reachable from a folio via :c:member:`!folio->mapping`), VMAs must be stabilised via
  :c:func:`!anon_vma_[try]lock_read` or :c:func:`!anon_vma_[try]lock_write` for
  anonymous memory and :c:func:`!i_mmap_[try]lock_read` or
  :c:func:`!i_mmap_[try]lock_write` for file-backed memory. We refer to these
  locks as the reverse mapping locks, or 'rmap locks' for brevity.

We discuss page table locks separately in the dedicated section below.

The first thing **any** of these locks achieve is to **stabilise** the VMA
within the MM tree. That is, guaranteeing that the VMA object will not be
deleted from under you nor modified (except for some specific fields
described below).

Stabilising a VMA also keeps the address space described by it around.

Lock usage
----------

If you want to **read** VMA metadata fields or just keep the VMA stable, you
must do one of the following:

* Obtain an mmap read lock at the MM granularity via :c:func:`!mmap_read_lock` (or a
  suitable variant), unlocking it with a matching :c:func:`!mmap_read_unlock` when
  you're done with the VMA, *or*
* Try to obtain a VMA read lock via :c:func:`!lock_vma_under_rcu`. This tries to
  acquire the lock atomically so might fail, in which case fall-back logic is
  required to instead obtain an mmap read lock if this returns :c:macro:`!NULL`,
  *or*
* Acquire an rmap lock before traversing the locked interval tree (whether
  anonymous or file-backed) to obtain the required VMA.

If you want to **write** VMA metadata fields, then things vary depending on the
field (we explore each VMA field in detail below). For the majority you must:

* Obtain an mmap write lock at the MM granularity via :c:func:`!mmap_write_lock` (or a
  suitable variant), unlocking it with a matching :c:func:`!mmap_write_unlock` when
  you're done with the VMA, *and*
* Obtain a VMA write lock via :c:func:`!vma_start_write` for each VMA you wish to
  modify, which will be released automatically when :c:func:`!mmap_write_unlock` is
  called.
* If you want to be able to write to **any** field, you must also hide the VMA
  from the reverse mapping by obtaining an **rmap write lock**.

VMA locks are special in that you must obtain an mmap **write** lock **first**
in order to obtain a VMA **write** lock. A VMA **read** lock however can be
obtained without any other lock (:c:func:`!lock_vma_under_rcu` will acquire then
release an RCU lock to lookup the VMA for you).

This constrains the impact of writers on readers, as a writer can interact with
one VMA while a reader interacts with another simultaneously.

.. note:: The primary users of VMA read locks are page fault handlers, which
          means that without a VMA write lock, page faults will run concurrent with
          whatever you are doing.

Examining all valid lock states:

.. table::

   ========= ======== ========= ======= ===== =========== ==========
   mmap lock VMA lock rmap lock Stable? Read? Write most? Write all?
   ========= ======== ========= ======= ===== =========== ==========
   \-        \-       \-        N       N     N           N
   \-        R        \-        Y       Y     N           N
   \-        \-       R/W       Y       Y     N           N
   R/W       \-/R     \-/R/W    Y       Y     N           N
   W         W        \-/R      Y       Y     Y           N
   W         W        W         Y       Y     Y           Y
   ========= ======== ========= ======= ===== =========== ==========

.. warning:: While it's possible to obtain a VMA lock while holding an mmap read lock,
             attempting to do the reverse is invalid as it can result in deadlock - if
             another task already holds an mmap write lock and attempts to acquire a VMA
             write lock that will deadlock on the VMA read lock.

All of these locks behave as read/write semaphores in practice, so you can
obtain either a read or a write lock for each of these.

.. note:: Generally speaking, a read/write semaphore is a class of lock which
          permits concurrent readers. However a write lock can only be obtained
          once all readers have left the critical region (and pending readers
          made to wait).

          This renders read locks on a read/write semaphore concurrent with other
          readers and write locks exclusive against all others holding the semaphore.

VMA fields
^^^^^^^^^^

We can subdivide :c:struct:`!struct vm_area_struct` fields by their purpose, which makes it
easier to explore their locking characteristics:

.. note:: We exclude VMA lock-specific fields here to avoid confusion, as these
          are in effect an internal implementation detail.

.. table:: Virtual layout fields

   ===================== ======================================== ===========
   Field                 Description                              Write lock
   ===================== ======================================== ===========
   :c:member:`!vm_start` Inclusive start virtual address of range mmap write,
                         VMA describes.                           VMA write,
                                                                  rmap write.
   :c:member:`!vm_end`   Exclusive end virtual address of range   mmap write,
                         VMA describes.                           VMA write,
                                                                  rmap write.
   :c:member:`!vm_pgoff` Describes the page offset into the file, mmap write,
                         the original page offset within the      VMA write,
                         virtual address space (prior to any      rmap write.
                         :c:func:`!mremap`), or PFN if a PFN map
                         and the architecture does not support
                         :c:macro:`!CONFIG_ARCH_HAS_PTE_SPECIAL`.
   ===================== ======================================== ===========

These fields describe the size, start and end of the VMA, and as such cannot be
modified without first being hidden from the reverse mapping since these fields
are used to locate VMAs within the reverse mapping interval trees.

.. table:: Core fields

   ============================ ======================================== =========================
   Field                        Description                              Write lock
   ============================ ======================================== =========================
   :c:member:`!vm_mm`           Containing mm_struct.                    None - written once on
                                                                         initial map.
   :c:member:`!vm_page_prot`    Architecture-specific page table         mmap write, VMA write.
                                protection bits determined from VMA
                                flags.
   :c:member:`!vm_flags`        Read-only access to VMA flags describing N/A
                                attributes of the VMA, in union with
                                private writable
                                :c:member:`!__vm_flags`.
   :c:member:`!__vm_flags`      Private, writable access to VMA flags    mmap write, VMA write.
                                field, updated by
                                :c:func:`!vm_flags_*` functions.
   :c:member:`!vm_file`         If the VMA is file-backed, points to a   None - written once on
                                struct file object describing the        initial map.
                                underlying file, if anonymous then
                                :c:macro:`!NULL`.
   :c:member:`!vm_ops`          If the VMA is file-backed, then either   None - Written once on
                                the driver or file-system provides a     initial map by
                                :c:struct:`!struct vm_operations_struct` :c:func:`!f_ops->mmap()`.
                                object describing callbacks to be
                                invoked on VMA lifetime events.
   :c:member:`!vm_private_data` A :c:member:`!void *` field for          Handled by driver.
                                driver-specific metadata.
   ============================ ======================================== =========================

These are the core fields which describe the MM the VMA belongs to and its attributes.

.. table:: Config-specific fields

   ================================= ===================== ======================================== ===============
   Field                             Configuration option  Description                              Write lock
   ================================= ===================== ======================================== ===============
   :c:member:`!anon_name`            CONFIG_ANON_VMA_NAME  A field for storing a                    mmap write,
                                                           :c:struct:`!struct anon_vma_name`        VMA write.
                                                           object providing a name for anonymous
                                                           mappings, or :c:macro:`!NULL` if none
                                                           is set or the VMA is file-backed. The
                                                           underlying object is reference counted
                                                           and can be shared across multiple VMAs
                                                           for scalability.
   :c:member:`!swap_readahead_info`  CONFIG_SWAP           Metadata used by the swap mechanism      mmap read,
                                                           to perform readahead. This field is      swap-specific
                                                           accessed atomically.                     lock.
   :c:member:`!vm_policy`            CONFIG_NUMA           :c:type:`!mempolicy` object which        mmap write,
                                                           describes the NUMA behaviour of the      VMA write.
                                                           VMA. The underlying object is reference
                                                           counted.
   :c:member:`!numab_state`          CONFIG_NUMA_BALANCING :c:type:`!vma_numab_state` object which  mmap read,
                                                           describes the current state of           numab-specific
                                                           NUMA balancing in relation to this VMA.  lock.
                                                           Updated under mmap read lock by
                                                           :c:func:`!task_numa_work`.
   :c:member:`!vm_userfaultfd_ctx`   CONFIG_USERFAULTFD    Userfaultfd context wrapper object of    mmap write,
                                                           type :c:type:`!vm_userfaultfd_ctx`,      VMA write.
                                                           either of zero size if userfaultfd is
                                                           disabled, or containing a pointer
                                                           to an underlying
                                                           :c:type:`!userfaultfd_ctx` object which
                                                           describes userfaultfd metadata.
   ================================= ===================== ======================================== ===============

These fields are present or not depending on whether the relevant kernel
configuration option is set.

.. table:: Reverse mapping fields

   =================================== ========================================= ============================
   Field                               Description                               Write lock
   =================================== ========================================= ============================
   :c:member:`!shared.rb`              A red/black tree node used, if the        mmap write, VMA write,
                                       mapping is file-backed, to place the VMA  i_mmap write.
                                       in the
                                       :c:member:`!struct address_space->i_mmap`
                                       red/black interval tree.
   :c:member:`!shared.rb_subtree_last` Metadata used for management of the       mmap write, VMA write,
                                       interval tree if the VMA is file-backed.  i_mmap write.
   :c:member:`!anon_vma_chain`         List of pointers to both forked/CoW’d     mmap read, anon_vma write.
                                       :c:type:`!anon_vma` objects and
                                       :c:member:`!vma->anon_vma` if it is
                                       non-:c:macro:`!NULL`.
   :c:member:`!anon_vma`               :c:type:`!anon_vma` object used by        When :c:macro:`NULL` and
                                       anonymous folios mapped exclusively to    setting non-:c:macro:`NULL`:
                                       this VMA. Initially set by                mmap read, page_table_lock.
                                       :c:func:`!anon_vma_prepare` serialised
                                       by the :c:macro:`!page_table_lock`. This  When non-:c:macro:`NULL` and
                                       is set as soon as any page is faulted in. setting :c:macro:`NULL`:
                                                                                 mmap write, VMA write,
                                                                                 anon_vma write.
   =================================== ========================================= ============================

These fields are used to both place the VMA within the reverse mapping, and for
anonymous mappings, to be able to access both related :c:struct:`!struct anon_vma` objects
and the :c:struct:`!struct anon_vma` in which folios mapped exclusively to this VMA should
reside.

.. note:: If a file-backed mapping is mapped with :c:macro:`!MAP_PRIVATE` set
          then it can be in both the :c:type:`!anon_vma` and :c:type:`!i_mmap`
          trees at the same time, so all of these fields might be utilised at
          once.

Page tables
-----------

We won't speak exhaustively on the subject but broadly speaking, page tables map
virtual addresses to physical ones through a series of page tables, each of
which contain entries with physical addresses for the next page table level
(along with flags), and at the leaf level the physical addresses of the
underlying physical data pages or a special entry such as a swap entry,
migration entry or other special marker. Offsets into these pages are provided
by the virtual address itself.

In Linux these are divided into five levels - PGD, P4D, PUD, PMD and PTE. Huge
pages might eliminate one or two of these levels, but when this is the case we
typically refer to the leaf level as the PTE level regardless.

.. note:: In instances where the architecture supports fewer page tables than
          five the kernel cleverly 'folds' page table levels, that is stubbing
          out functions related to the skipped levels. This allows us to
          conceptually act as if there were always five levels, even if the
          compiler might, in practice, eliminate any code relating to missing
          ones.
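To make the five-level split concrete, the following userspace sketch (an
illustrative analogy only, not kernel code - it assumes x86-64-style 4KiB pages
and 9 bits of table index per level) extracts each level's table index from a
virtual address, much as helpers like :c:func:`!pgd_index` do:

.. code-block:: c

  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical userspace model: 4KiB pages, 9 index bits per level. */
  #define PAGE_SHIFT 12
  #define LEVEL_BITS 9
  #define LEVEL_MASK ((1UL << LEVEL_BITS) - 1)

  /* Index into a given level's table for a virtual address;
   * level 0 is the PTE level, level 4 the PGD level. */
  static unsigned long level_index(uint64_t vaddr, int level)
  {
          return (vaddr >> (PAGE_SHIFT + level * LEVEL_BITS)) & LEVEL_MASK;
  }

  int main(void)
  {
          uint64_t vaddr = 0x7f1234567890UL;
          const char *names[] = { "PTE", "PMD", "PUD", "P4D", "PGD" };

          for (int level = 4; level >= 0; level--)
                  printf("%s index: %lu\n", names[level],
                         level_index(vaddr, level));
          printf("page offset: %lu\n",
                 (unsigned long)(vaddr & ((1UL << PAGE_SHIFT) - 1)));

          /* A 'folded' level behaves as if its table had a single slot:
           * the stubbed helpers always yield the same entry. */
          return 0;
  }

A leaf entry is then found by indexing each successive table with these values;
only the final 12 bits of the address select the byte within the mapped page.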
There are four key operations typically performed on page tables:

1. **Traversing** page tables - Simply reading page tables in order to traverse
   them. This only requires that the VMA is kept stable, so a lock which
   establishes this suffices for traversal (there are also lockless variants
   which eliminate even this requirement, such as :c:func:`!gup_fast`).
2. **Installing** page table mappings - Whether creating a new mapping or
   modifying an existing one in such a way as to change its identity. This
   requires that the VMA is kept stable via an mmap or VMA lock (explicitly not
   rmap locks).
3. **Zapping/unmapping** page table entries - This is what the kernel calls
   clearing page table mappings at the leaf level only, whilst leaving all page
   tables in place. This is a very common operation in the kernel performed on
   file truncation, the :c:macro:`!MADV_DONTNEED` operation via
   :c:func:`!madvise`, and others. This is performed by a number of functions
   including :c:func:`!unmap_mapping_range` and :c:func:`!unmap_mapping_pages`.
   The VMA need only be kept stable for this operation.
4. **Freeing** page tables - When finally the kernel removes page tables from a
   userland process (typically via :c:func:`!free_pgtables`) extreme care must
   be taken to ensure this is done safely, as this logic finally frees all page
   tables in the specified range, ignoring existing leaf entries (it assumes the
   caller has both zapped the range and prevented any further faults or
   modifications within it).

.. note:: Modifying mappings for reclaim or migration is performed under rmap
          lock as it, like zapping, does not fundamentally modify the identity
          of what is being mapped.
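The distinction between **zapping** and **freeing** can be sketched in a toy
userspace model (the structures and names below are invented for illustration,
not kernel types): zapping clears leaf entries while leaving the page tables
themselves allocated, whereas freeing tears the tables down entirely:

.. code-block:: c

  #include <assert.h>
  #include <stdlib.h>
  #include <string.h>

  /* Toy model: one 'PMD' table whose slots point at 'PTE' tables. */
  #define NR_SLOTS 8

  struct pte_table { unsigned long entry[NR_SLOTS]; };
  struct pmd_table { struct pte_table *slot[NR_SLOTS]; };

  /* Zapping: clear leaf entries only; the page tables themselves stay. */
  static void zap_range(struct pmd_table *pmd)
  {
          for (int i = 0; i < NR_SLOTS; i++)
                  if (pmd->slot[i])
                          memset(pmd->slot[i]->entry, 0,
                                 sizeof(pmd->slot[i]->entry));
  }

  /* Freeing: tear the tables down; assumes the range was already zapped
   * and nothing can fault entries back in (as free_pgtables assumes). */
  static void free_tables(struct pmd_table *pmd)
  {
          for (int i = 0; i < NR_SLOTS; i++) {
                  free(pmd->slot[i]);
                  pmd->slot[i] = NULL;
          }
  }

  int main(void)
  {
          struct pmd_table pmd = { 0 };

          pmd.slot[3] = calloc(1, sizeof(struct pte_table));
          pmd.slot[3]->entry[5] = 0xabc;      /* 'map' something */

          zap_range(&pmd);
          assert(pmd.slot[3] != NULL);        /* table still present */
          assert(pmd.slot[3]->entry[5] == 0); /* mapping gone */

          free_tables(&pmd);
          assert(pmd.slot[3] == NULL);        /* table itself gone */
          return 0;
  }

This is why freeing demands so much more care than zapping: it removes the
structures that every other operation assumes are present.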
**Traversing** and **zapping** ranges can be performed holding any one of the
locks described in the terminology section above - that is the mmap lock, the
VMA lock or either of the reverse mapping locks.

That is - as long as you keep the relevant VMA **stable** - you are good to go
ahead and perform these operations on page tables (though internally, kernel
operations that perform writes also acquire internal page table locks to
serialise - see the page table implementation detail section for more details).

When **installing** page table entries, the mmap or VMA lock must be held to
keep the VMA stable. We explore why this is in the page table locking details
section below.

.. warning:: Page tables are normally only traversed in regions covered by VMAs.
             If you want to traverse page tables in areas that might not be
             covered by VMAs, heavier locking is required.
             See :c:func:`!walk_page_range_novma` for details.

**Freeing** page tables is an entirely internal memory management operation and
has special requirements (see the page freeing section below for more details).

.. warning:: When **freeing** page tables, it must not be possible for VMAs
             containing the ranges those page tables map to be accessible via
             the reverse mapping.

             The :c:func:`!free_pgtables` function removes the relevant VMAs
             from the reverse mappings, but no other VMAs can be permitted to be
             accessible and span the specified range.

Lock ordering
-------------

As we have multiple locks across the kernel which may or may not be taken at the
same time as explicit mm or VMA locks, we have to be wary of lock inversion, and
the **order** in which locks are acquired and released becomes very important.

.. note:: Lock inversion occurs when two threads need to acquire multiple locks,
          but in doing so inadvertently cause a mutual deadlock.

          For example, consider thread 1 which holds lock A and tries to acquire lock B,
          while thread 2 holds lock B and tries to acquire lock A.

          Both threads are now deadlocked on each other. However, had they attempted to
          acquire locks in the same order, one would have waited for the other to
          complete its work and no deadlock would have occurred.

The opening comment in :c:macro:`!mm/rmap.c` describes in detail the required
ordering of locks within memory management code:

.. code-block::

  inode->i_rwsem        (while writing or truncating, not reading or faulting)
    mm->mmap_lock
      mapping->invalidate_lock (in filemap_fault)
        folio_lock
          hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share, see hugetlbfs below)
            vma_start_write
              mapping->i_mmap_rwsem
                anon_vma->rwsem
                  mm->page_table_lock or pte_lock
                    swap_lock (in swap_duplicate, swap_info_get)
                      mmlist_lock (in mmput, drain_mmlist and others)
                      mapping->private_lock (in block_dirty_folio)
                        i_pages lock (widely used)
                          lruvec->lru_lock (in folio_lruvec_lock_irq)
                      inode->i_lock (in set_page_dirty's __mark_inode_dirty)
                      bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
                        sb_lock (within inode_lock in fs/fs-writeback.c)
                        i_pages lock (widely used, in set_page_dirty,
                                      in arch-dependent flush_dcache_mmap_lock,
                                      within bdi.wb->list_lock in __sync_single_inode)

There is also a file-system specific lock ordering comment located at the top of
:c:macro:`!mm/filemap.c`:

.. code-block::

  ->i_mmap_rwsem                (truncate_pagecache)
    ->private_lock              (__free_pte->block_dirty_folio)
      ->swap_lock               (exclusive_swap_page, others)
        ->i_pages lock

  ->i_rwsem
    ->invalidate_lock           (acquired by fs in truncate path)
      ->i_mmap_rwsem            (truncate->unmap_mapping_range)

  ->mmap_lock
    ->i_mmap_rwsem
      ->page_table_lock or pte_lock (various, mainly in memory.c)
        ->i_pages lock          (arch-dependent flush_dcache_mmap_lock)

  ->mmap_lock
    ->invalidate_lock           (filemap_fault)
      ->lock_page               (filemap_fault, access_process_vm)

  ->i_rwsem                     (generic_perform_write)
    ->mmap_lock                 (fault_in_readable->do_page_fault)

  bdi->wb.list_lock
    sb_lock                     (fs/fs-writeback.c)
    ->i_pages lock              (__sync_single_inode)

  ->i_mmap_rwsem
    ->anon_vma.lock             (vma_merge)

  ->anon_vma.lock
    ->page_table_lock or pte_lock (anon_vma_prepare and various)

  ->page_table_lock or pte_lock
    ->swap_lock                 (try_to_unmap_one)
    ->private_lock              (try_to_unmap_one)
    ->i_pages lock              (try_to_unmap_one)
    ->lruvec->lru_lock          (follow_page_mask->mark_page_accessed)
    ->lruvec->lru_lock          (check_pte_range->folio_isolate_lru)
    ->private_lock              (folio_remove_rmap_pte->set_page_dirty)
    ->i_pages lock              (folio_remove_rmap_pte->set_page_dirty)
    bdi.wb->list_lock           (folio_remove_rmap_pte->set_page_dirty)
    ->inode->i_lock             (folio_remove_rmap_pte->set_page_dirty)
    bdi.wb->list_lock           (zap_pte_range->set_page_dirty)
    ->inode->i_lock             (zap_pte_range->set_page_dirty)
    ->private_lock              (zap_pte_range->block_dirty_folio)

Please check the current state of these comments which may have changed since
the time of writing of this document.

------------------------------
Locking Implementation Details
------------------------------

.. warning:: Locking rules for PTE-level page tables are very different from
             locking rules for page tables at other levels.

Page table locking details
--------------------------

In addition to the locks described in the terminology section above, we have
additional locks dedicated to page tables:

* **Higher level page table locks** - Higher level page tables, that is PGD, P4D
  and PUD each make use of the process address space granularity
  :c:member:`!mm->page_table_lock` lock when modified.

* **Fine-grained page table locks** - PMDs and PTEs each have fine-grained locks
  either kept within the folios describing the page tables or allocated
  separately and pointed at by the folios if :c:macro:`!ALLOC_SPLIT_PTLOCKS` is
  set. The PMD spin lock is obtained via :c:func:`!pmd_lock`, however PTEs are
  mapped into higher memory (if a 32-bit system) and carefully locked via
  :c:func:`!pte_offset_map_lock`.

These locks represent the minimum required to interact with each page table
level, but there are further requirements.

Importantly, note that on a **traversal** of page tables, sometimes no such
locks are taken. However, at the PTE level, at least concurrent page table
deletion must be prevented (using RCU) and the page table must be mapped into
high memory, see below.

Whether care is taken on reading the page table entries depends on the
architecture, see the section on atomicity below.

Locking rules
^^^^^^^^^^^^^

We establish basic locking rules when interacting with page tables:

* When changing a page table entry the page table lock for that page table
  **must** be held, except if you can safely assume nobody can access the page
  tables concurrently (such as on invocation of :c:func:`!free_pgtables`).
* Reads from and writes to page table entries must be *appropriately*
  atomic. See the section on atomicity below for details.
* Populating previously empty entries requires that the mmap or VMA locks are
  held (read or write), doing so with only rmap locks would be dangerous (see
  the warning below).
* As mentioned previously, zapping can be performed while simply keeping the VMA
  stable, that is holding any one of the mmap, VMA or rmap locks.

.. warning:: Populating previously empty entries is dangerous as, when unmapping
             VMAs, :c:func:`!vms_clear_ptes` has a window of time between
             zapping (via :c:func:`!unmap_vmas`) and freeing page tables (via
             :c:func:`!free_pgtables`), where the VMA is still visible in the
             rmap tree. :c:func:`!free_pgtables` assumes that the zap has
             already been performed and removes PTEs unconditionally (along with
             all other page tables in the freed range), so installing new PTE
             entries could leak memory and also cause other unexpected and
             dangerous behaviour.

There are additional rules applicable when moving page tables, which we discuss
in the section on this topic below.

PTE-level page tables are different from page tables at other levels, and there
are extra requirements for accessing them:

* On 32-bit architectures, they may be in high memory (meaning they need to be
  mapped into kernel memory to be accessible).
* When empty, they can be unlinked and RCU-freed while holding an mmap lock or
  rmap lock for reading in combination with the PTE and PMD page table locks.
  In particular, this happens in :c:func:`!retract_page_tables` when handling
  :c:macro:`!MADV_COLLAPSE`.
527 + So accessing PTE-level page tables requires at least holding an RCU read lock; 528 + but that only suffices for readers that can tolerate racing with concurrent 529 + page table updates such that an empty PTE is observed (in a page table that 530 + has actually already been detached and marked for RCU freeing) while another 531 + new page table has been installed in the same location and filled with 532 + entries. Writers normally need to take the PTE lock and revalidate that the 533 + PMD entry still refers to the same PTE-level page table. 534 + 535 + To access PTE-level page tables, a helper like :c:func:`!pte_offset_map_lock` or 536 + :c:func:`!pte_offset_map` can be used depending on stability requirements. 537 + These map the page table into kernel memory if required, take the RCU lock, and 538 + depending on variant, may also look up or acquire the PTE lock. 539 + See the comment on :c:func:`!__pte_offset_map_lock`. 540 + 541 + Atomicity 542 + ^^^^^^^^^ 543 + 544 + Regardless of page table locks, the MMU hardware concurrently updates accessed 545 + and dirty bits (perhaps more, depending on architecture). Additionally, page 546 + table traversal operations in parallel (though holding the VMA stable) and 547 + functionality like GUP-fast locklessly traverses (that is reads) page tables, 548 + without even keeping the VMA stable at all. 549 + 550 + When performing a page table traversal and keeping the VMA stable, whether a 551 + read must be performed once and only once or not depends on the architecture 552 + (for instance x86-64 does not require any special precautions). 553 + 554 + If a write is being performed, or if a read informs whether a write takes place 555 + (on an installation of a page table entry say, for instance in 556 + :c:func:`!__pud_install`), special care must always be taken. 
In these cases we 557 + can never assume that page table locks give us entirely exclusive access, and 558 + must retrieve page table entries once and only once. 559 + 560 + If we are reading page table entries, then we need only ensure that the compiler 561 + does not rearrange our loads. This is achieved via :c:func:`!pXXp_get` 562 + functions - :c:func:`!pgdp_get`, :c:func:`!p4dp_get`, :c:func:`!pudp_get`, 563 + :c:func:`!pmdp_get`, and :c:func:`!ptep_get`. 564 + 565 + Each of these uses :c:func:`!READ_ONCE` to guarantee that the compiler reads 566 + the page table entry only once. 567 + 568 + However, if we wish to manipulate an existing page table entry and care about 569 + the previously stored data, we must go further and use a hardware atomic 570 + operation as, for example, in :c:func:`!ptep_get_and_clear`. 571 + 572 + Equally, operations that do not rely on the VMA being held stable, such as 573 + GUP-fast (see :c:func:`!gup_fast` and its various page table level handlers like 574 + :c:func:`!gup_fast_pte_range`), must very carefully interact with page table 575 + entries, using functions such as :c:func:`!ptep_get_lockless` and its equivalents 576 + at higher page table levels. 577 + 578 + Writes to page table entries must also be appropriately atomic, as established 579 + by :c:func:`!set_pXX` functions - :c:func:`!set_pgd`, :c:func:`!set_p4d`, 580 + :c:func:`!set_pud`, :c:func:`!set_pmd`, and :c:func:`!set_pte`. 581 + 582 + Equally, functions which clear page table entries must be appropriately atomic, 583 + as in :c:func:`!pXX_clear` functions - :c:func:`!pgd_clear`, 584 + :c:func:`!p4d_clear`, :c:func:`!pud_clear`, :c:func:`!pmd_clear`, and 585 + :c:func:`!pte_clear`. 586 + 587 + Page table installation 588 + ^^^^^^^^^^^^^^^^^^^^^^^ 589 + 590 + Page table installation is performed with the VMA held stable explicitly by an 591 + mmap or VMA lock in read or write mode (see the warning in the locking rules 592 + section for details as to why). 
593 + 594 + When allocating a P4D, PUD or PMD and setting the relevant entry in the above 595 + PGD, P4D or PUD, the :c:member:`!mm->page_table_lock` must be held. This is 596 + acquired in :c:func:`!__p4d_alloc`, :c:func:`!__pud_alloc` and 597 + :c:func:`!__pmd_alloc` respectively. 598 + 599 + .. note:: :c:func:`!__pmd_alloc` actually invokes :c:func:`!pud_lock` and 600 + :c:func:`!pud_lockptr` in turn, however at the time of writing it ultimately 601 + references the :c:member:`!mm->page_table_lock`. 602 + 603 + Allocating a PTE will either use the :c:member:`!mm->page_table_lock` or, if 604 + :c:macro:`!USE_SPLIT_PMD_PTLOCKS` is defined, a lock embedded in the PMD 605 + physical page metadata in the form of a :c:struct:`!struct ptdesc`, acquired by 606 + :c:func:`!pmd_ptdesc` called from :c:func:`!pmd_lock` and ultimately 607 + :c:func:`!__pte_alloc`. 608 + 609 + Finally, modifying the contents of the PTE requires special treatment, as the 610 + PTE page table lock must be acquired whenever we want stable and exclusive 611 + access to entries contained within a PTE, especially when we wish to modify 612 + them. 613 + 614 + This is performed via :c:func:`!pte_offset_map_lock` which carefully checks to 615 + ensure that the PTE hasn't changed from under us, ultimately invoking 616 + :c:func:`!pte_lockptr` to obtain a spin lock at PTE granularity contained within 617 + the :c:struct:`!struct ptdesc` associated with the physical PTE page. The lock 618 + must be released via :c:func:`!pte_unmap_unlock`. 619 + 620 + .. note:: There are some variants on this, such as 621 + :c:func:`!pte_offset_map_rw_nolock` when we know we hold the PTE stable but 622 + for brevity we do not explore this. See the comment for 623 + :c:func:`!__pte_offset_map_lock` for more details. 
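The lock-and-revalidate discipline described here can be sketched as a userspace analogue (hypothetical names; a pthread mutex stands in for the page table lock, and a plain pointer stands in for a higher-level page table entry):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical mm with a single higher-level entry; NULL means empty. */
struct mm_demo {
	pthread_mutex_t page_table_lock;
	void *pud_entry;
};

/*
 * Sketch of the check-lock-recheck pattern: inspect the entry without the
 * lock, then take the lock and revalidate, since a racing thread may have
 * installed a table in the meantime.
 */
static void *demo_table_alloc(struct mm_demo *mm)
{
	if (mm->pud_entry)		/* optimistic, lockless check */
		return mm->pud_entry;

	void *new_table = calloc(512, sizeof(void *));

	pthread_mutex_lock(&mm->page_table_lock);
	if (!mm->pud_entry)		/* revalidate under the lock */
		mm->pud_entry = new_table;
	else
		free(new_table);	/* lost the race; discard ours */
	pthread_mutex_unlock(&mm->page_table_lock);

	return mm->pud_entry;
}
```

Repeated calls return the same table; the revalidation under the lock is what makes the lockless first check safe to perform at all.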
624 + 625 + When modifying data in ranges we typically only wish to allocate higher page 626 + tables as necessary, using these locks to avoid races or overwriting anything, 627 + and set/clear data at the PTE level as required (for instance when page faulting 628 + or zapping). 629 + 630 + A typical pattern taken when traversing page table entries to install a new 631 + mapping is to optimistically determine whether the page table entry in the table 632 + above is empty; if so, only then acquiring the page table lock and checking 633 + again to see if it was allocated underneath us. 634 + 635 + This allows for a traversal with page table locks only being taken when 636 + required. An example of this is :c:func:`!__pud_alloc`. 637 + 638 + At the leaf page table, that is, the PTE, we can't entirely rely on this pattern 639 + as we have separate PMD and PTE locks and a THP collapse, for instance, might have 640 + eliminated the PMD entry as well as the PTE from under us. 641 + 642 + This is why :c:func:`!__pte_offset_map_lock` locklessly retrieves the PMD entry 643 + for the PTE, carefully checking it is as expected, before acquiring the 644 + PTE-specific lock, and then *again* checking that the PMD entry is as expected. 645 + 646 + If a THP collapse (or similar) were to occur then the lock on both pages would 647 + be acquired, so we can ensure this is prevented while the PTE lock is held. 648 + 649 + Installing entries this way ensures mutual exclusion on write. 650 + 651 + Page table freeing 652 + ^^^^^^^^^^^^^^^^^^ 653 + 654 + Tearing down page tables themselves is something that requires significant 655 + care. There must be no way that page tables designated for removal can be 656 + traversed or referenced by concurrent tasks. 
657 + 658 + It is insufficient to simply hold an mmap write lock and VMA lock (which will 659 + prevent racing faults, and rmap operations), as a file-backed mapping can be 660 + truncated under the :c:struct:`!struct address_space->i_mmap_rwsem` alone. 661 + 662 + As a result, no VMA which can be accessed via the reverse mapping (either 663 + through the :c:struct:`!struct anon_vma->rb_root` or the :c:member:`!struct 664 + address_space->i_mmap` interval trees) can have its page tables torn down. 665 + 666 + The operation is typically performed via :c:func:`!free_pgtables`, which assumes 667 + either the mmap write lock has been taken (as specified by its 668 + :c:member:`!mm_wr_locked` parameter), or that the VMA is already unreachable. 669 + 670 + It carefully removes the VMA from all reverse mappings; however, it's important 671 + that no new ones overlap these and that no route remains to permit access to addresses 672 + within the range whose page tables are being torn down. 673 + 674 + Additionally, it assumes that a zap has already been performed and steps have 675 + been taken to ensure that no further page table entries can be installed between 676 + the zap and the invocation of :c:func:`!free_pgtables`. 677 + 678 + Since it is assumed that all such steps have been taken, page table entries are 679 + cleared without page table locks (in the :c:func:`!pgd_clear`, :c:func:`!p4d_clear`, 680 + :c:func:`!pud_clear`, and :c:func:`!pmd_clear` functions). 681 + 682 + .. note:: It is possible for leaf page tables to be torn down independently of 683 + the page tables above them, as is done by 684 + :c:func:`!retract_page_tables`, which is performed under the i_mmap 685 + read lock, PMD, and PTE page table locks, without this level of care. 686 + 687 + Page table moving 688 + ^^^^^^^^^^^^^^^^^ 689 + 690 + Some functions manipulate page table levels above PMD (that is PUD, P4D and PGD 691 + page tables). 
Most notable of these is :c:func:`!mremap`, which is capable of 692 + moving higher level page tables. 693 + 694 + In these instances, it is required that **all** locks are taken, that is, 695 + the mmap lock, the VMA lock and the relevant rmap locks. 696 + 697 + You can observe this in the :c:func:`!mremap` implementation in the functions 698 + :c:func:`!take_rmap_locks` and :c:func:`!drop_rmap_locks` which perform the rmap 699 + side of lock acquisition, invoked ultimately by :c:func:`!move_page_tables`. 700 + 701 + VMA lock internals 702 + ------------------ 703 + 704 + Overview 705 + ^^^^^^^^ 706 + 707 + VMA read locking is entirely optimistic - if the lock is contended or a competing 708 + write has started, then we do not obtain a read lock. 709 + 710 + A VMA **read** lock is obtained by :c:func:`!lock_vma_under_rcu`, which first 711 + calls :c:func:`!rcu_read_lock` to ensure that the VMA is looked up in an RCU 712 + critical section, then attempts to lock the VMA via :c:func:`!vma_start_read`, 713 + before releasing the RCU lock via :c:func:`!rcu_read_unlock`. 714 + 715 + VMA read locks hold the read lock on the :c:member:`!vma->vm_lock` semaphore for 716 + their duration and the caller of :c:func:`!lock_vma_under_rcu` must release it 717 + via :c:func:`!vma_end_read`. 718 + 719 + VMA **write** locks are acquired via :c:func:`!vma_start_write` in instances where a 720 + VMA is about to be modified; unlike :c:func:`!vma_start_read`, the lock is always 721 + acquired. An mmap write lock **must** be held for the duration of the VMA write 722 + lock; releasing or downgrading the mmap write lock also releases the VMA write 723 + lock, so there is no :c:func:`!vma_end_write` function. 724 + 725 + Note that a semaphore write lock is not held across a VMA lock. Rather, a 726 + sequence number is used for serialisation, and the write semaphore is only 727 + acquired at the point of write lock to update this. 
728 + 729 + This ensures the semantics we require - VMA write locks provide exclusive write 730 + access to the VMA. 731 + 732 + Implementation details 733 + ^^^^^^^^^^^^^^^^^^^^^^ 734 + 735 + The VMA lock mechanism is designed to be a lightweight means of avoiding the use 736 + of the heavily contended mmap lock. It is implemented using a combination of a 737 + read/write semaphore and sequence numbers belonging to the containing 738 + :c:struct:`!struct mm_struct` and the VMA. 739 + 740 + Read locks are acquired via :c:func:`!vma_start_read`, which is an optimistic 741 + operation, i.e. it tries to acquire a read lock but returns false if it is 742 + unable to do so. At the end of the read operation, :c:func:`!vma_end_read` is 743 + called to release the VMA read lock. 744 + 745 + Invoking :c:func:`!vma_start_read` requires that :c:func:`!rcu_read_lock` has 746 + been called first, establishing that we are in an RCU critical section upon VMA 747 + read lock acquisition. Once acquired, the RCU lock can be released as it is only 748 + required for lookup. This is abstracted by :c:func:`!lock_vma_under_rcu` which 749 + is the interface a user should use. 750 + 751 + Writing requires the mmap to be write-locked and the VMA lock to be acquired via 752 + :c:func:`!vma_start_write`; however, the write lock is released by the termination or 753 + downgrade of the mmap write lock, so no :c:func:`!vma_end_write` is required. 754 + 755 + All this is achieved by the use of per-mm and per-VMA sequence counts, which are 756 + used in order to reduce complexity, especially for operations which write-lock 757 + multiple VMAs at once. 758 + 759 + If the mm sequence count, :c:member:`!mm->mm_lock_seq`, is equal to the VMA 760 + sequence count, :c:member:`!vma->vm_lock_seq`, then the VMA is write-locked. If 761 + they differ, then it is not. 
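A minimal userspace sketch of this sequence-count rule (hypothetical structs mirroring :c:member:`!mm->mm_lock_seq` and :c:member:`!vma->vm_lock_seq`; the real implementation pairs this with the :c:member:`!vma->vm_lock` semaphore, omitted here):

```c
#include <assert.h>
#include <stdbool.h>

struct mm_demo  { unsigned long mm_lock_seq; };
struct vma_demo { unsigned long vm_lock_seq; };

/* A VMA counts as write-locked iff its sequence number matches the mm's. */
static bool demo_vma_is_write_locked(const struct mm_demo *mm,
				     const struct vma_demo *vma)
{
	return vma->vm_lock_seq == mm->mm_lock_seq;
}

/* vma_start_write() analogue: record the current mm sequence number
 * (done under the per-VMA semaphore in the real implementation). */
static void demo_vma_write_lock(struct mm_demo *mm, struct vma_demo *vma)
{
	vma->vm_lock_seq = mm->mm_lock_seq;
}

/* vma_end_write_all() analogue: bumping the mm sequence number on
 * mmap_write_unlock() releases every VMA write lock at once. */
static void demo_mm_write_unlock(struct mm_demo *mm)
{
	mm->mm_lock_seq++;
}
```

The single increment in the unlock path is what makes releasing write locks on many VMAs an O(1) operation.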
762 + 763 + Each time the mmap write lock is released in :c:func:`!mmap_write_unlock` or 764 + :c:func:`!mmap_write_downgrade`, :c:func:`!vma_end_write_all` is invoked which 765 + also increments :c:member:`!mm->mm_lock_seq` via 766 + :c:func:`!mm_lock_seqcount_end`. 767 + 768 + This way, we ensure that, regardless of the VMA's sequence number, a write lock 769 + is never incorrectly indicated and that when we release an mmap write lock we 770 + efficiently release **all** VMA write locks contained within the mmap at the 771 + same time. 772 + 773 + Since the mmap write lock is exclusive against others who hold it, the automatic 774 + release of any VMA locks on its release makes sense, as you would never want to 775 + keep VMAs locked across entirely separate write operations. It also maintains 776 + correct lock ordering. 777 + 778 + Each time a VMA read lock is acquired, we acquire a read lock on the 779 + :c:member:`!vma->vm_lock` read/write semaphore and hold it, while checking that 780 + the sequence count of the VMA does not match that of the mm. 781 + 782 + If it does, the read lock fails. If it does not, we hold the lock, excluding 783 + writers, but permitting other readers, who will also obtain this lock under RCU. 784 + 785 + Importantly, maple tree operations performed in :c:func:`!lock_vma_under_rcu` 786 + are also RCU safe, so the whole read lock operation is guaranteed to function 787 + correctly. 788 + 789 + On the write side, we acquire a write lock on the :c:member:`!vma->vm_lock` 790 + read/write semaphore, before setting the VMA's sequence number under this lock, 791 + also simultaneously holding the mmap write lock. 792 + 793 + This way, if any read locks are in effect, :c:func:`!vma_start_write` will sleep 794 + until these are finished and mutual exclusion is achieved. 795 + 796 + After setting the VMA's sequence number, the lock is released, avoiding 797 + complexity with a long-term held write lock. 
798 + 799 + This clever combination of a read/write semaphore and sequence count allows for 800 + fast RCU-based per-VMA lock acquisition (especially on page fault, though 801 + utilised elsewhere) with minimal complexity around lock ordering. 802 + 803 + mmap write lock downgrading 804 + --------------------------- 805 + 806 + When an mmap write lock is held one has exclusive access to resources within the 807 + mmap (with the usual caveats about requiring VMA write locks to avoid races with 808 + tasks holding VMA read locks). 809 + 810 + It is then possible to **downgrade** from a write lock to a read lock via 811 + :c:func:`!mmap_write_downgrade` which, similar to :c:func:`!mmap_write_unlock`, 812 + implicitly terminates all VMA write locks via :c:func:`!vma_end_write_all`, but 813 + importantly does not relinquish the mmap lock while downgrading, therefore 814 + keeping the locked virtual address space stable. 815 + 816 + An interesting consequence of this is that downgraded locks are exclusive 817 + against any other task possessing a downgraded lock (since a racing task would 818 + have to acquire a write lock first to downgrade it, and the downgraded lock 819 + prevents a new write lock from being obtained until the original lock is 820 + released). 821 + 822 + For clarity, we map read (R)/downgraded write (D)/write (W) locks against one 823 + another showing which locks exclude the others: 824 + 825 + .. list-table:: Lock exclusivity 826 + :widths: 5 5 5 5 827 + :header-rows: 1 828 + :stub-columns: 1 829 + 830 + * - 831 + - R 832 + - D 833 + - W 834 + * - R 835 + - N 836 + - N 837 + - Y 838 + * - D 839 + - N 840 + - Y 841 + - Y 842 + * - W 843 + - Y 844 + - Y 845 + - Y 846 + 847 + Here a Y indicates the locks in the matching row/column are mutually exclusive, 848 + and N indicates that they are not. 
849 + 850 + Stack expansion 851 + --------------- 852 + 853 + Stack expansion throws up additional complexities in that we cannot permit there 854 + to be racing page faults; as a result, we invoke :c:func:`!vma_start_write` to 855 + prevent this in :c:func:`!expand_downwards` or :c:func:`!expand_upwards`.
+31 -29
Documentation/netlink/specs/mptcp_pm.yaml
··· 22 22 doc: unused event 23 23 - 24 24 name: created 25 - doc: 26 - token, family, saddr4 | saddr6, daddr4 | daddr6, sport, dport 25 + doc: >- 27 26 A new MPTCP connection has been created. It is the good time to 28 27 allocate memory and send ADD_ADDR if needed. Depending on the 29 28 traffic-patterns it can take a long time until the 30 29 MPTCP_EVENT_ESTABLISHED is sent. 30 + Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6, sport, 31 + dport, server-side. 31 32 - 32 33 name: established 33 - doc: 34 - token, family, saddr4 | saddr6, daddr4 | daddr6, sport, dport 34 + doc: >- 35 35 A MPTCP connection is established (can start new subflows). 36 + Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6, sport, 37 + dport, server-side. 36 38 - 37 39 name: closed 38 - doc: 39 - token 40 + doc: >- 40 41 A MPTCP connection has stopped. 42 + Attribute: token. 41 43 - 42 44 name: announced 43 45 value: 6 44 - doc: 45 - token, rem_id, family, daddr4 | daddr6 [, dport] 46 + doc: >- 46 47 A new address has been announced by the peer. 48 + Attributes: token, rem_id, family, daddr4 | daddr6 [, dport]. 47 49 - 48 50 name: removed 49 - doc: 50 - token, rem_id 51 + doc: >- 51 52 An address has been lost by the peer. 53 + Attributes: token, rem_id. 52 54 - 53 55 name: sub-established 54 56 value: 10 55 - doc: 56 - token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | daddr6, sport, 57 - dport, backup, if_idx [, error] 57 + doc: >- 58 58 A new subflow has been established. 'error' should not be set. 59 + Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | 60 + daddr6, sport, dport, backup, if_idx [, error]. 59 61 - 60 62 name: sub-closed 61 - doc: 62 - token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | daddr6, sport, 63 - dport, backup, if_idx [, error] 63 + doc: >- 64 64 A subflow has been closed. An error (copy of sk_err) could be set if an 65 65 error has been detected for this subflow. 
66 + Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | 67 + daddr6, sport, dport, backup, if_idx [, error]. 66 68 - 67 69 name: sub-priority 68 70 value: 13 69 - doc: 70 - token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | daddr6, sport, 71 - dport, backup, if_idx [, error] 71 + doc: >- 72 72 The priority of a subflow has changed. 'error' should not be set. 73 + Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | 74 + daddr6, sport, dport, backup, if_idx [, error]. 73 75 - 74 76 name: listener-created 75 77 value: 15 76 - doc: 77 - family, sport, saddr4 | saddr6 78 + doc: >- 78 79 A new PM listener is created. 80 + Attributes: family, sport, saddr4 | saddr6. 79 81 - 80 82 name: listener-closed 81 - doc: 82 - family, sport, saddr4 | saddr6 83 + doc: >- 83 84 A PM listener is closed. 85 + Attributes: family, sport, saddr4 | saddr6. 84 86 85 87 attribute-sets: 86 88 - ··· 308 306 attributes: 309 307 - addr 310 308 - 311 - name: flush-addrs 312 - doc: flush addresses 309 + name: flush-addrs 310 + doc: Flush addresses 313 311 attribute-set: endpoint 314 312 dont-validate: [ strict ] 315 313 flags: [ uns-admin-perm ] ··· 353 351 - addr-remote 354 352 - 355 353 name: announce 356 - doc: announce new sf 354 + doc: Announce new address 357 355 attribute-set: attr 358 356 dont-validate: [ strict ] 359 357 flags: [ uns-admin-perm ] ··· 364 362 - token 365 363 - 366 364 name: remove 367 - doc: announce removal 365 + doc: Announce removal 368 366 attribute-set: attr 369 367 dont-validate: [ strict ] 370 368 flags: [ uns-admin-perm ] ··· 375 373 - loc-id 376 374 - 377 375 name: subflow-create 378 - doc: todo 376 + doc: Create subflow 379 377 attribute-set: attr 380 378 dont-validate: [ strict ] 381 379 flags: [ uns-admin-perm ] ··· 387 385 - addr-remote 388 386 - 389 387 name: subflow-destroy 390 - doc: todo 388 + doc: Destroy subflow 391 389 attribute-set: attr 392 390 dont-validate: [ strict ] 393 391 flags: [ uns-admin-perm ]
+3 -3
MAINTAINERS
··· 1797 1797 1798 1798 ARM AND ARM64 SoC SUB-ARCHITECTURES (COMMON PARTS) 1799 1799 M: Arnd Bergmann <arnd@arndb.de> 1800 - M: Olof Johansson <olof@lixom.net> 1801 1800 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1802 1801 L: soc@lists.linux.dev 1803 1802 S: Maintained ··· 3614 3615 3615 3616 ATHEROS ATH GENERIC UTILITIES 3616 3617 M: Kalle Valo <kvalo@kernel.org> 3618 + M: Jeff Johnson <jjohnson@kernel.org> 3617 3619 L: linux-wireless@vger.kernel.org 3618 3620 S: Supported 3619 3621 F: drivers/net/wireless/ath/* ··· 7355 7355 DRM DRIVER FOR NVIDIA GEFORCE/QUADRO GPUS 7356 7356 M: Karol Herbst <kherbst@redhat.com> 7357 7357 M: Lyude Paul <lyude@redhat.com> 7358 - M: Danilo Krummrich <dakr@redhat.com> 7358 + M: Danilo Krummrich <dakr@kernel.org> 7359 7359 L: dri-devel@lists.freedesktop.org 7360 7360 L: nouveau@lists.freedesktop.org 7361 7361 S: Supported ··· 8932 8932 FIRMWARE LOADER (request_firmware) 8933 8933 M: Luis Chamberlain <mcgrof@kernel.org> 8934 8934 M: Russ Weight <russ.weight@linux.dev> 8935 - M: Danilo Krummrich <dakr@redhat.com> 8935 + M: Danilo Krummrich <dakr@kernel.org> 8936 8936 L: linux-kernel@vger.kernel.org 8937 8937 S: Maintained 8938 8938 F: Documentation/firmware_class/
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 13 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc3 5 + EXTRAVERSION = -rc5 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+1
arch/arc/Kconfig
··· 6 6 config ARC 7 7 def_bool y 8 8 select ARC_TIMERS 9 + select ARCH_HAS_CPU_CACHE_ALIASING 9 10 select ARCH_HAS_CACHE_LINE_SIZE 10 11 select ARCH_HAS_DEBUG_VM_PGTABLE 11 12 select ARCH_HAS_DMA_PREP_COHERENT
+8
arch/arc/include/asm/cachetype.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef __ASM_ARC_CACHETYPE_H 3 + #define __ASM_ARC_CACHETYPE_H 4 + 5 + #define cpu_dcache_is_aliasing() false 6 + #define cpu_icache_is_aliasing() true 7 + 8 + #endif
+1
arch/arm/mach-imx/Kconfig
··· 6 6 select CLKSRC_IMX_GPT 7 7 select GENERIC_IRQ_CHIP 8 8 select GPIOLIB 9 + select PINCTRL 9 10 select PM_OPP if PM 10 11 select SOC_BUS 11 12 select SRAM
+4 -4
arch/arm64/boot/dts/broadcom/bcm2712.dtsi
··· 67 67 l2_cache_l0: l2-cache-l0 { 68 68 compatible = "cache"; 69 69 cache-size = <0x80000>; 70 - cache-line-size = <128>; 70 + cache-line-size = <64>; 71 71 cache-sets = <1024>; //512KiB(size)/64(line-size)=8192ways/8-way set 72 72 cache-level = <2>; 73 73 cache-unified; ··· 91 91 l2_cache_l1: l2-cache-l1 { 92 92 compatible = "cache"; 93 93 cache-size = <0x80000>; 94 - cache-line-size = <128>; 94 + cache-line-size = <64>; 95 95 cache-sets = <1024>; //512KiB(size)/64(line-size)=8192ways/8-way set 96 96 cache-level = <2>; 97 97 cache-unified; ··· 115 115 l2_cache_l2: l2-cache-l2 { 116 116 compatible = "cache"; 117 117 cache-size = <0x80000>; 118 - cache-line-size = <128>; 118 + cache-line-size = <64>; 119 119 cache-sets = <1024>; //512KiB(size)/64(line-size)=8192ways/8-way set 120 120 cache-level = <2>; 121 121 cache-unified; ··· 139 139 l2_cache_l3: l2-cache-l3 { 140 140 compatible = "cache"; 141 141 cache-size = <0x80000>; 142 - cache-line-size = <128>; 142 + cache-line-size = <64>; 143 143 cache-sets = <1024>; //512KiB(size)/64(line-size)=8192ways/8-way set 144 144 cache-level = <2>; 145 145 cache-unified;
+15 -20
arch/arm64/kernel/signal.c
··· 36 36 #include <asm/traps.h> 37 37 #include <asm/vdso.h> 38 38 39 - #ifdef CONFIG_ARM64_GCS 40 39 #define GCS_SIGNAL_CAP(addr) (((unsigned long)addr) & GCS_CAP_ADDR_MASK) 41 - 42 - static bool gcs_signal_cap_valid(u64 addr, u64 val) 43 - { 44 - return val == GCS_SIGNAL_CAP(addr); 45 - } 46 - #endif 47 40 48 41 /* 49 42 * Do a signal return; undo the signal stack. These are aligned to 128-bit. ··· 1055 1062 #ifdef CONFIG_ARM64_GCS 1056 1063 static int gcs_restore_signal(void) 1057 1064 { 1058 - unsigned long __user *gcspr_el0; 1059 - u64 cap; 1065 + u64 gcspr_el0, cap; 1060 1066 int ret; 1061 1067 1062 1068 if (!system_supports_gcs()) ··· 1064 1072 if (!(current->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE)) 1065 1073 return 0; 1066 1074 1067 - gcspr_el0 = (unsigned long __user *)read_sysreg_s(SYS_GCSPR_EL0); 1075 + gcspr_el0 = read_sysreg_s(SYS_GCSPR_EL0); 1068 1076 1069 1077 /* 1070 1078 * Ensure that any changes to the GCS done via GCS operations ··· 1079 1087 * then faults will be generated on GCS operations - the main 1080 1088 * concern is to protect GCS pages. 1081 1089 */ 1082 - ret = copy_from_user(&cap, gcspr_el0, sizeof(cap)); 1090 + ret = copy_from_user(&cap, (unsigned long __user *)gcspr_el0, 1091 + sizeof(cap)); 1083 1092 if (ret) 1084 1093 return -EFAULT; 1085 1094 1086 1095 /* 1087 1096 * Check that the cap is the actual GCS before replacing it. 
1088 1097 */ 1089 - if (!gcs_signal_cap_valid((u64)gcspr_el0, cap)) 1098 + if (cap != GCS_SIGNAL_CAP(gcspr_el0)) 1090 1099 return -EINVAL; 1091 1100 1092 1101 /* Invalidate the token to prevent reuse */ 1093 - put_user_gcs(0, (__user void*)gcspr_el0, &ret); 1102 + put_user_gcs(0, (unsigned long __user *)gcspr_el0, &ret); 1094 1103 if (ret != 0) 1095 1104 return -EFAULT; 1096 1105 1097 - write_sysreg_s(gcspr_el0 + 1, SYS_GCSPR_EL0); 1106 + write_sysreg_s(gcspr_el0 + 8, SYS_GCSPR_EL0); 1098 1107 1099 1108 return 0; 1100 1109 } ··· 1414 1421 1415 1422 static int gcs_signal_entry(__sigrestore_t sigtramp, struct ksignal *ksig) 1416 1423 { 1417 - unsigned long __user *gcspr_el0; 1424 + u64 gcspr_el0; 1418 1425 int ret = 0; 1419 1426 1420 1427 if (!system_supports_gcs()) ··· 1427 1434 * We are entering a signal handler, current register state is 1428 1435 * active. 1429 1436 */ 1430 - gcspr_el0 = (unsigned long __user *)read_sysreg_s(SYS_GCSPR_EL0); 1437 + gcspr_el0 = read_sysreg_s(SYS_GCSPR_EL0); 1431 1438 1432 1439 /* 1433 1440 * Push a cap and the GCS entry for the trampoline onto the GCS. 1434 1441 */ 1435 - put_user_gcs((unsigned long)sigtramp, gcspr_el0 - 2, &ret); 1436 - put_user_gcs(GCS_SIGNAL_CAP(gcspr_el0 - 1), gcspr_el0 - 1, &ret); 1442 + put_user_gcs((unsigned long)sigtramp, 1443 + (unsigned long __user *)(gcspr_el0 - 16), &ret); 1444 + put_user_gcs(GCS_SIGNAL_CAP(gcspr_el0 - 8), 1445 + (unsigned long __user *)(gcspr_el0 - 8), &ret); 1437 1446 if (ret != 0) 1438 1447 return ret; 1439 1448 1440 - gcspr_el0 -= 2; 1441 - write_sysreg_s((unsigned long)gcspr_el0, SYS_GCSPR_EL0); 1449 + gcspr_el0 -= 16; 1450 + write_sysreg_s(gcspr_el0, SYS_GCSPR_EL0); 1442 1451 1443 1452 return 0; 1444 1453 }
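The signal.c hunks above replace typed pointer arithmetic (where "gcspr_el0 + 1" on an "unsigned long __user *" already advances by one 8-byte GCS slot) with explicit byte offsets on a plain u64 ("gcspr_el0 + 8"). A userspace sketch of the equivalence being relied on (illustrative names, not the kernel's):

```c
#include <stdint.h>

/* New representation: the GCS pointer is a plain 64-bit address, so one
 * shadow-stack slot is an explicit 8-byte step. */
static uint64_t gcs_next_slot_addr(uint64_t gcspr)
{
	return gcspr + sizeof(uint64_t);	/* + 8 bytes */
}

/* Old representation: a typed pointer, where "+ 1" is scaled by the
 * compiler to sizeof(*gcspr), i.e. 8 bytes on a 64-bit target. */
static uint64_t gcs_next_slot_ptr(const uint64_t *gcspr)
{
	return (uint64_t)(uintptr_t)(gcspr + 1);
}
```

Both forms land on the same address; the u64 form just makes the slot size explicit at each use site.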
+5 -5
arch/nios2/kernel/cpuinfo.c
··· 143 143 " DIV:\t\t%s\n" 144 144 " BMX:\t\t%s\n" 145 145 " CDX:\t\t%s\n", 146 - cpuinfo.has_mul ? "yes" : "no", 147 - cpuinfo.has_mulx ? "yes" : "no", 148 - cpuinfo.has_div ? "yes" : "no", 149 - cpuinfo.has_bmx ? "yes" : "no", 150 - cpuinfo.has_cdx ? "yes" : "no"); 146 + str_yes_no(cpuinfo.has_mul), 147 + str_yes_no(cpuinfo.has_mulx), 148 + str_yes_no(cpuinfo.has_div), 149 + str_yes_no(cpuinfo.has_bmx), 150 + str_yes_no(cpuinfo.has_cdx)); 151 151 152 152 seq_printf(m, 153 153 "Icache:\t\t%ukB, line length: %u\n",
+1
arch/powerpc/configs/pmac32_defconfig
··· 208 208 CONFIG_FB_ATY_CT=y 209 209 CONFIG_FB_ATY_GX=y 210 210 CONFIG_FB_3DFX=y 211 + CONFIG_BACKLIGHT_CLASS_DEVICE=y 211 212 # CONFIG_VGA_CONSOLE is not set 212 213 CONFIG_FRAMEBUFFER_CONSOLE=y 213 214 CONFIG_LOGO=y
+1
arch/powerpc/configs/ppc6xx_defconfig
··· 716 716 CONFIG_FB_SM501=m 717 717 CONFIG_FB_IBM_GXT4500=y 718 718 CONFIG_LCD_PLATFORM=m 719 + CONFIG_BACKLIGHT_CLASS_DEVICE=y 719 720 CONFIG_FRAMEBUFFER_CONSOLE=y 720 721 CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y 721 722 CONFIG_LOGO=y
+36
arch/powerpc/platforms/book3s/vas-api.c
··· 464 464 return VM_FAULT_SIGBUS; 465 465 } 466 466 467 + /* 468 + * During mmap() paste address, mapping VMA is saved in VAS window 469 + * struct which is used to unmap during migration if the window is 470 + * still open. But the user space can remove this mapping with 471 + * munmap() before closing the window and the VMA address will 472 + * be invalid. Set VAS window VMA to NULL in this function which 473 + * is called before VMA free. 474 + */ 475 + static void vas_mmap_close(struct vm_area_struct *vma) 476 + { 477 + struct file *fp = vma->vm_file; 478 + struct coproc_instance *cp_inst = fp->private_data; 479 + struct vas_window *txwin; 480 + 481 + /* Should not happen */ 482 + if (!cp_inst || !cp_inst->txwin) { 483 + pr_err("No attached VAS window for the paste address mmap\n"); 484 + return; 485 + } 486 + 487 + txwin = cp_inst->txwin; 488 + /* 489 + * task_ref.vma is set in coproc_mmap() during mmap paste 490 + * address. So it has to be the same VMA that is getting freed. 491 + */ 492 + if (WARN_ON(txwin->task_ref.vma != vma)) { 493 + pr_err("Invalid paste address mmaping\n"); 494 + return; 495 + } 496 + 497 + mutex_lock(&txwin->task_ref.mmap_mutex); 498 + txwin->task_ref.vma = NULL; 499 + mutex_unlock(&txwin->task_ref.mmap_mutex); 500 + } 501 + 467 502 static const struct vm_operations_struct vas_vm_ops = { 503 + .close = vas_mmap_close, 468 504 .fault = vas_mmap_fault, 469 505 }; 470 506
+11 -1
arch/x86/events/intel/core.c
··· 429 429 EVENT_CONSTRAINT_END 430 430 }; 431 431 432 + static struct extra_reg intel_lnc_extra_regs[] __read_mostly = { 433 + INTEL_UEVENT_EXTRA_REG(0x012a, MSR_OFFCORE_RSP_0, 0xfffffffffffull, RSP_0), 434 + INTEL_UEVENT_EXTRA_REG(0x012b, MSR_OFFCORE_RSP_1, 0xfffffffffffull, RSP_1), 435 + INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd), 436 + INTEL_UEVENT_EXTRA_REG(0x02c6, MSR_PEBS_FRONTEND, 0x9, FE), 437 + INTEL_UEVENT_EXTRA_REG(0x03c6, MSR_PEBS_FRONTEND, 0x7fff1f, FE), 438 + INTEL_UEVENT_EXTRA_REG(0x40ad, MSR_PEBS_FRONTEND, 0xf, FE), 439 + INTEL_UEVENT_EXTRA_REG(0x04c2, MSR_PEBS_FRONTEND, 0x8, FE), 440 + EVENT_EXTRA_END 441 + }; 432 442 433 443 EVENT_ATTR_STR(mem-loads, mem_ld_nhm, "event=0x0b,umask=0x10,ldlat=3"); 434 444 EVENT_ATTR_STR(mem-loads, mem_ld_snb, "event=0xcd,umask=0x1,ldlat=3"); ··· 6432 6422 intel_pmu_init_glc(pmu); 6433 6423 hybrid(pmu, event_constraints) = intel_lnc_event_constraints; 6434 6424 hybrid(pmu, pebs_constraints) = intel_lnc_pebs_event_constraints; 6435 - hybrid(pmu, extra_regs) = intel_rwc_extra_regs; 6425 + hybrid(pmu, extra_regs) = intel_lnc_extra_regs; 6436 6426 } 6437 6427 6438 6428 static __always_inline void intel_pmu_init_skt(struct pmu *pmu)
+1
arch/x86/events/intel/ds.c
··· 2517 2517 x86_pmu.large_pebs_flags |= PERF_SAMPLE_TIME; 2518 2518 break; 2519 2519 2520 + case 6: 2520 2521 case 5: 2521 2522 x86_pmu.pebs_ept = 1; 2522 2523 fallthrough;
+1
arch/x86/events/intel/uncore.c
··· 1910 1910 X86_MATCH_VFM(INTEL_ATOM_GRACEMONT, &adl_uncore_init), 1911 1911 X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, &gnr_uncore_init), 1912 1912 X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, &gnr_uncore_init), 1913 + X86_MATCH_VFM(INTEL_ATOM_DARKMONT_X, &gnr_uncore_init), 1913 1914 {}, 1914 1915 }; 1915 1916 MODULE_DEVICE_TABLE(x86cpu, intel_uncore_match);
+1
arch/x86/include/asm/cpufeatures.h
··· 452 452 #define X86_FEATURE_SME_COHERENT (19*32+10) /* AMD hardware-enforced cache coherency */ 453 453 #define X86_FEATURE_DEBUG_SWAP (19*32+14) /* "debug_swap" AMD SEV-ES full debug state swap support */ 454 454 #define X86_FEATURE_SVSM (19*32+28) /* "svsm" SVSM present */ 455 + #define X86_FEATURE_HV_INUSE_WR_ALLOWED (19*32+30) /* Allow Write to in-use hypervisor-owned pages */ 455 456 456 457 /* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */ 457 458 #define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* No Nested Data Breakpoints */
+30
arch/x86/kernel/cet.c
··· 81 81 82 82 static __ro_after_init bool ibt_fatal = true; 83 83 84 + /* 85 + * By definition, all missing-ENDBRANCH #CPs are a result of WFE && !ENDBR. 86 + * 87 + * For the kernel IBT no ENDBR selftest where #CPs are deliberately triggered, 88 + * the WFE state of the interrupted context needs to be cleared to let execution 89 + * continue. Otherwise when the CPU resumes from the instruction that just 90 + * caused the previous #CP, another missing-ENDBRANCH #CP is raised and the CPU 91 + * enters a dead loop. 92 + * 93 + * This is not a problem with IDT because it doesn't preserve WFE and IRET doesn't 94 + * set WFE. But FRED provides space on the entry stack (in an expanded CS area) 95 + * to save and restore the WFE state, thus the WFE state is no longer clobbered, 96 + * so software must clear it. 97 + */ 98 + static void ibt_clear_fred_wfe(struct pt_regs *regs) 99 + { 100 + /* 101 + * No need to do any FRED checks. 102 + * 103 + * For IDT event delivery, the high-order 48 bits of CS are pushed 104 + * as 0s into the stack, and later IRET ignores these bits. 105 + * 106 + * For FRED, a test to check if fred_cs.wfe is set would be dropped 107 + * by compilers. 108 + */ 109 + regs->fred_cs.wfe = 0; 110 + } 111 + 84 112 static void do_kernel_cp_fault(struct pt_regs *regs, unsigned long error_code) 85 113 { 86 114 if ((error_code & CP_EC) != CP_ENDBR) { ··· 118 90 119 91 if (unlikely(regs->ip == (unsigned long)&ibt_selftest_noendbr)) { 120 92 regs->ax = 0; 93 + ibt_clear_fred_wfe(regs); 121 94 return; 122 95 } 123 96 ··· 126 97 if (!ibt_fatal) { 127 98 printk(KERN_DEFAULT CUT_HERE); 128 99 __warn(__FILE__, __LINE__, (void *)regs->ip, TAINT_WARN, regs, NULL); 100 + ibt_clear_fred_wfe(regs); 129 101 return; 130 102 } 131 103 BUG();
-12
arch/x86/kvm/mmu/mmu.c
··· 3364 3364 return true; 3365 3365 } 3366 3366 3367 - static bool is_access_allowed(struct kvm_page_fault *fault, u64 spte) 3368 - { 3369 - if (fault->exec) 3370 - return is_executable_pte(spte); 3371 - 3372 - if (fault->write) 3373 - return is_writable_pte(spte); 3374 - 3375 - /* Fault was on Read access */ 3376 - return spte & PT_PRESENT_MASK; 3377 - } 3378 - 3379 3367 /* 3380 3368 * Returns the last level spte pointer of the shadow page walk for the given 3381 3369 * gpa, and sets *spte to the spte value. This spte may be non-preset. If no
+17
arch/x86/kvm/mmu/spte.h
··· 462 462 } 463 463 464 464 /* 465 + * Returns true if the access indicated by @fault is allowed by the existing 466 + * SPTE protections. Note, the caller is responsible for checking that the 467 + * SPTE is a shadow-present, leaf SPTE (either before or after). 468 + */ 469 + static inline bool is_access_allowed(struct kvm_page_fault *fault, u64 spte) 470 + { 471 + if (fault->exec) 472 + return is_executable_pte(spte); 473 + 474 + if (fault->write) 475 + return is_writable_pte(spte); 476 + 477 + /* Fault was on Read access */ 478 + return spte & PT_PRESENT_MASK; 479 + } 480 + 481 + /* 465 482 * If the MMU-writable flag is cleared, i.e. the SPTE is write-protected for 466 483 * write-tracking, remote TLBs must be flushed, even if the SPTE was read-only, 467 484 * as KVM allows stale Writable TLB entries to exist. When dirty logging, KVM
+5
arch/x86/kvm/mmu/tdp_mmu.c
··· 985 985 if (fault->prefetch && is_shadow_present_pte(iter->old_spte)) 986 986 return RET_PF_SPURIOUS; 987 987 988 + if (is_shadow_present_pte(iter->old_spte) && 989 + is_access_allowed(fault, iter->old_spte) && 990 + is_last_spte(iter->old_spte, iter->level)) 991 + return RET_PF_SPURIOUS; 992 + 988 993 if (unlikely(!fault->slot)) 989 994 new_spte = make_mmio_spte(vcpu, iter->gfn, ACC_ALL); 990 995 else
+6
arch/x86/kvm/svm/avic.c
··· 1199 1199 return false; 1200 1200 } 1201 1201 1202 + if (cc_platform_has(CC_ATTR_HOST_SEV_SNP) && 1203 + !boot_cpu_has(X86_FEATURE_HV_INUSE_WR_ALLOWED)) { 1204 + pr_warn("AVIC disabled: missing HvInUseWrAllowed on SNP-enabled system\n"); 1205 + return false; 1206 + } 1207 + 1202 1208 if (boot_cpu_has(X86_FEATURE_AVIC)) { 1203 1209 pr_info("AVIC enabled\n"); 1204 1210 } else if (force_avic) {
-9
arch/x86/kvm/svm/svm.c
··· 3201 3201 if (data & ~supported_de_cfg) 3202 3202 return 1; 3203 3203 3204 - /* 3205 - * Don't let the guest change the host-programmed value. The 3206 - * MSR is very model specific, i.e. contains multiple bits that 3207 - * are completely unknown to KVM, and the one bit known to KVM 3208 - * is simply a reflection of hardware capabilities. 3209 - */ 3210 - if (!msr->host_initiated && data != svm->msr_decfg) 3211 - return 1; 3212 - 3213 3204 svm->msr_decfg = data; 3214 3205 break; 3215 3206 }
+1 -1
arch/x86/kvm/vmx/posted_intr.h
··· 2 2 #ifndef __KVM_X86_VMX_POSTED_INTR_H 3 3 #define __KVM_X86_VMX_POSTED_INTR_H 4 4 5 - #include <linux/find.h> 5 + #include <linux/bitmap.h> 6 6 #include <asm/posted_intr.h> 7 7 8 8 void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu);
+8 -1
arch/x86/kvm/x86.c
··· 9976 9976 { 9977 9977 u64 ret = vcpu->run->hypercall.ret; 9978 9978 9979 - if (!is_64_bit_mode(vcpu)) 9979 + if (!is_64_bit_hypercall(vcpu)) 9980 9980 ret = (u32)ret; 9981 9981 kvm_rax_write(vcpu, ret); 9982 9982 ++vcpu->stat.hypercalls; ··· 12723 12723 kvm_apicv_init(kvm); 12724 12724 kvm_hv_init_vm(kvm); 12725 12725 kvm_xen_init_vm(kvm); 12726 + 12727 + if (ignore_msrs && !report_ignored_msrs) { 12728 + pr_warn_once("Running KVM with ignore_msrs=1 and report_ignored_msrs=0 is not a\n" 12729 + "supported configuration. Lying to the guest about the existence of MSRs\n" 12730 + "may cause the guest operating system to hang or produce errors. If a guest\n" 12731 + "does not run without ignore_msrs=1, please report it to kvm@vger.kernel.org.\n"); 12732 + } 12726 12733 12727 12734 return 0; 12728 12735
+1 -2
block/bdev.c
··· 155 155 struct inode *inode = file->f_mapping->host; 156 156 struct block_device *bdev = I_BDEV(inode); 157 157 158 - /* Size must be a power of two, and between 512 and PAGE_SIZE */ 159 - if (size > PAGE_SIZE || size < 512 || !is_power_of_2(size)) 158 + if (blk_validate_block_size(size)) 160 159 return -EINVAL; 161 160 162 161 /* Size cannot be smaller than the size supported by the device */
+10 -6
block/blk-mq-sysfs.c
··· 275 275 struct blk_mq_hw_ctx *hctx; 276 276 unsigned long i; 277 277 278 - lockdep_assert_held(&q->sysfs_dir_lock); 279 - 278 + mutex_lock(&q->sysfs_dir_lock); 280 279 if (!q->mq_sysfs_init_done) 281 - return; 280 + goto unlock; 282 281 283 282 queue_for_each_hw_ctx(q, hctx, i) 284 283 blk_mq_unregister_hctx(hctx); 284 + 285 + unlock: 286 + mutex_unlock(&q->sysfs_dir_lock); 285 287 } 286 288 287 289 int blk_mq_sysfs_register_hctxs(struct request_queue *q) ··· 292 290 unsigned long i; 293 291 int ret = 0; 294 292 295 - lockdep_assert_held(&q->sysfs_dir_lock); 296 - 293 + mutex_lock(&q->sysfs_dir_lock); 297 294 if (!q->mq_sysfs_init_done) 298 - return ret; 295 + goto unlock; 299 296 300 297 queue_for_each_hw_ctx(q, hctx, i) { 301 298 ret = blk_mq_register_hctx(hctx); 302 299 if (ret) 303 300 break; 304 301 } 302 + 303 + unlock: 304 + mutex_unlock(&q->sysfs_dir_lock); 305 305 306 306 return ret; 307 307 }
+21 -19
block/blk-mq.c
··· 4412 4412 } 4413 4413 EXPORT_SYMBOL(blk_mq_alloc_disk_for_queue); 4414 4414 4415 + /* 4416 + * Only hctx removed from cpuhp list can be reused 4417 + */ 4418 + static bool blk_mq_hctx_is_reusable(struct blk_mq_hw_ctx *hctx) 4419 + { 4420 + return hlist_unhashed(&hctx->cpuhp_online) && 4421 + hlist_unhashed(&hctx->cpuhp_dead); 4422 + } 4423 + 4415 4424 static struct blk_mq_hw_ctx *blk_mq_alloc_and_init_hctx( 4416 4425 struct blk_mq_tag_set *set, struct request_queue *q, 4417 4426 int hctx_idx, int node) ··· 4430 4421 /* reuse dead hctx first */ 4431 4422 spin_lock(&q->unused_hctx_lock); 4432 4423 list_for_each_entry(tmp, &q->unused_hctx_list, hctx_list) { 4433 - if (tmp->numa_node == node) { 4424 + if (tmp->numa_node == node && blk_mq_hctx_is_reusable(tmp)) { 4434 4425 hctx = tmp; 4435 4426 break; 4436 4427 } ··· 4462 4453 unsigned long i, j; 4463 4454 4464 4455 /* protect against switching io scheduler */ 4465 - lockdep_assert_held(&q->sysfs_lock); 4466 - 4456 + mutex_lock(&q->sysfs_lock); 4467 4457 for (i = 0; i < set->nr_hw_queues; i++) { 4468 4458 int old_node; 4469 4459 int node = blk_mq_get_hctx_node(set, i); ··· 4495 4487 4496 4488 xa_for_each_start(&q->hctx_table, j, hctx, j) 4497 4489 blk_mq_exit_hctx(q, set, hctx, j); 4490 + mutex_unlock(&q->sysfs_lock); 4498 4491 4499 4492 /* unregister cpuhp callbacks for exited hctxs */ 4500 4493 blk_mq_remove_hw_queues_cpuhp(q); ··· 4527 4518 4528 4519 xa_init(&q->hctx_table); 4529 4520 4530 - mutex_lock(&q->sysfs_lock); 4531 - 4532 4521 blk_mq_realloc_hw_ctxs(set, q); 4533 4522 if (!q->nr_hw_queues) 4534 4523 goto err_hctxs; 4535 - 4536 - mutex_unlock(&q->sysfs_lock); 4537 4524 4538 4525 INIT_WORK(&q->timeout_work, blk_mq_timeout_work); 4539 4526 blk_queue_rq_timeout(q, set->timeout ? 
set->timeout : 30 * HZ); ··· 4549 4544 return 0; 4550 4545 4551 4546 err_hctxs: 4552 - mutex_unlock(&q->sysfs_lock); 4553 4547 blk_mq_release(q); 4554 4548 err_exit: 4555 4549 q->mq_ops = NULL; ··· 4929 4925 return false; 4930 4926 4931 4927 /* q->elevator needs protection from ->sysfs_lock */ 4932 - lockdep_assert_held(&q->sysfs_lock); 4928 + mutex_lock(&q->sysfs_lock); 4933 4929 4934 4930 /* the check has to be done with holding sysfs_lock */ 4935 4931 if (!q->elevator) { 4936 4932 kfree(qe); 4937 - goto out; 4933 + goto unlock; 4938 4934 } 4939 4935 4940 4936 INIT_LIST_HEAD(&qe->node); ··· 4944 4940 __elevator_get(qe->type); 4945 4941 list_add(&qe->node, head); 4946 4942 elevator_disable(q); 4947 - out: 4943 + unlock: 4944 + mutex_unlock(&q->sysfs_lock); 4945 + 4948 4946 return true; 4949 4947 } 4950 4948 ··· 4975 4969 list_del(&qe->node); 4976 4970 kfree(qe); 4977 4971 4972 + mutex_lock(&q->sysfs_lock); 4978 4973 elevator_switch(q, t); 4979 4974 /* drop the reference acquired in blk_mq_elv_switch_none */ 4980 4975 elevator_put(t); 4976 + mutex_unlock(&q->sysfs_lock); 4981 4977 } 4982 4978 4983 4979 static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, ··· 4999 4991 if (set->nr_maps == 1 && nr_hw_queues == set->nr_hw_queues) 5000 4992 return; 5001 4993 5002 - list_for_each_entry(q, &set->tag_list, tag_set_list) { 5003 - mutex_lock(&q->sysfs_dir_lock); 5004 - mutex_lock(&q->sysfs_lock); 4994 + list_for_each_entry(q, &set->tag_list, tag_set_list) 5005 4995 blk_mq_freeze_queue(q); 5006 - } 5007 4996 /* 5008 4997 * Switch IO scheduler to 'none', cleaning up the data associated 5009 4998 * with the previous scheduler. 
We will switch back once we are done ··· 5056 5051 list_for_each_entry(q, &set->tag_list, tag_set_list) 5057 5052 blk_mq_elv_switch_back(&head, q); 5058 5053 5059 - list_for_each_entry(q, &set->tag_list, tag_set_list) { 5054 + list_for_each_entry(q, &set->tag_list, tag_set_list) 5060 5055 blk_mq_unfreeze_queue(q); 5061 - mutex_unlock(&q->sysfs_lock); 5062 - mutex_unlock(&q->sysfs_dir_lock); 5063 - } 5064 5056 5065 5057 /* Free the excess tags when nr_hw_queues shrink. */ 5066 5058 for (i = set->nr_hw_queues; i < prev_nr_hw_queues; i++)
+2 -2
block/blk-sysfs.c
··· 706 706 if (entry->load_module) 707 707 entry->load_module(disk, page, length); 708 708 709 - mutex_lock(&q->sysfs_lock); 710 709 blk_mq_freeze_queue(q); 710 + mutex_lock(&q->sysfs_lock); 711 711 res = entry->store(disk, page, length); 712 - blk_mq_unfreeze_queue(q); 713 712 mutex_unlock(&q->sysfs_lock); 713 + blk_mq_unfreeze_queue(q); 714 714 return res; 715 715 } 716 716
+1 -1
drivers/accel/ivpu/ivpu_gem.c
··· 409 409 mutex_lock(&bo->lock); 410 410 411 411 drm_printf(p, "%-9p %-3u 0x%-12llx %-10lu 0x%-8x %-4u", 412 - bo, bo->ctx->id, bo->vpu_addr, bo->base.base.size, 412 + bo, bo->ctx ? bo->ctx->id : 0, bo->vpu_addr, bo->base.base.size, 413 413 bo->flags, kref_read(&bo->base.base.refcount)); 414 414 415 415 if (bo->base.pages)
+7 -3
drivers/accel/ivpu/ivpu_mmu_context.c
··· 612 612 if (!ivpu_mmu_ensure_pgd(vdev, &vdev->rctx.pgtable)) { 613 613 ivpu_err(vdev, "Failed to allocate root page table for reserved context\n"); 614 614 ret = -ENOMEM; 615 - goto unlock; 615 + goto err_ctx_fini; 616 616 } 617 617 618 618 ret = ivpu_mmu_cd_set(vdev, vdev->rctx.id, &vdev->rctx.pgtable); 619 619 if (ret) { 620 620 ivpu_err(vdev, "Failed to set context descriptor for reserved context\n"); 621 - goto unlock; 621 + goto err_ctx_fini; 622 622 } 623 623 624 - unlock: 625 624 mutex_unlock(&vdev->rctx.lock); 625 + return ret; 626 + 627 + err_ctx_fini: 628 + mutex_unlock(&vdev->rctx.lock); 629 + ivpu_mmu_context_fini(vdev, &vdev->rctx); 626 630 return ret; 627 631 } 628 632
+1 -1
drivers/accel/ivpu/ivpu_pm.c
··· 378 378 379 379 pm_runtime_use_autosuspend(dev); 380 380 pm_runtime_set_autosuspend_delay(dev, delay); 381 + pm_runtime_set_active(dev); 381 382 382 383 ivpu_dbg(vdev, PM, "Autosuspend delay = %d\n", delay); 383 384 } ··· 393 392 { 394 393 struct device *dev = vdev->drm.dev; 395 394 396 - pm_runtime_set_active(dev); 397 395 pm_runtime_allow(dev); 398 396 pm_runtime_mark_last_busy(dev); 399 397 pm_runtime_put_autosuspend(dev);
+2 -2
drivers/acpi/Kconfig
··· 135 135 config ACPI_EC 136 136 bool "Embedded Controller" 137 137 depends on HAS_IOPORT 138 - default X86 138 + default X86 || LOONGARCH 139 139 help 140 140 This driver handles communication with the microcontroller 141 - on many x86 laptops and other machines. 141 + on many x86/LoongArch laptops and other machines. 142 142 143 143 config ACPI_EC_DEBUGFS 144 144 tristate "EC read/write access through /sys/kernel/debug/ec"
+1 -1
drivers/auxdisplay/Kconfig
··· 489 489 490 490 config HT16K33 491 491 tristate "Holtek Ht16K33 LED controller with keyscan" 492 - depends on FB && I2C && INPUT 492 + depends on FB && I2C && INPUT && BACKLIGHT_CLASS_DEVICE 493 493 select FB_SYSMEM_HELPERS 494 494 select INPUT_MATRIXKMAP 495 495 select FB_BACKLIGHT
+17 -9
drivers/block/ublk_drv.c
··· 1618 1618 blk_mq_kick_requeue_list(ub->ub_disk->queue); 1619 1619 } 1620 1620 1621 + static struct gendisk *ublk_detach_disk(struct ublk_device *ub) 1622 + { 1623 + struct gendisk *disk; 1624 + 1625 + /* Sync with ublk_abort_queue() by holding the lock */ 1626 + spin_lock(&ub->lock); 1627 + disk = ub->ub_disk; 1628 + ub->dev_info.state = UBLK_S_DEV_DEAD; 1629 + ub->dev_info.ublksrv_pid = -1; 1630 + ub->ub_disk = NULL; 1631 + spin_unlock(&ub->lock); 1632 + 1633 + return disk; 1634 + } 1635 + 1621 1636 static void ublk_stop_dev(struct ublk_device *ub) 1622 1637 { 1623 1638 struct gendisk *disk; ··· 1646 1631 ublk_unquiesce_dev(ub); 1647 1632 } 1648 1633 del_gendisk(ub->ub_disk); 1649 - 1650 - /* Sync with ublk_abort_queue() by holding the lock */ 1651 - spin_lock(&ub->lock); 1652 - disk = ub->ub_disk; 1653 - ub->dev_info.state = UBLK_S_DEV_DEAD; 1654 - ub->dev_info.ublksrv_pid = -1; 1655 - ub->ub_disk = NULL; 1656 - spin_unlock(&ub->lock); 1634 + disk = ublk_detach_disk(ub); 1657 1635 put_disk(disk); 1658 1636 unlock: 1659 1637 mutex_unlock(&ub->mutex); ··· 2344 2336 2345 2337 out_put_cdev: 2346 2338 if (ret) { 2347 - ub->dev_info.state = UBLK_S_DEV_DEAD; 2339 + ublk_detach_disk(ub); 2348 2340 ublk_put_device(ub); 2349 2341 } 2350 2342 if (ret)
+10 -5
drivers/block/zram/zram_drv.c
··· 614 614 } 615 615 616 616 nr_pages = i_size_read(inode) >> PAGE_SHIFT; 617 + /* Refuse to use zero sized device (also prevents self reference) */ 618 + if (!nr_pages) { 619 + err = -EINVAL; 620 + goto out; 621 + } 622 + 617 623 bitmap_sz = BITS_TO_LONGS(nr_pages) * sizeof(long); 618 624 bitmap = kvzalloc(bitmap_sz, GFP_KERNEL); 619 625 if (!bitmap) { ··· 1444 1438 size_t num_pages = disksize >> PAGE_SHIFT; 1445 1439 size_t index; 1446 1440 1441 + if (!zram->table) 1442 + return; 1443 + 1447 1444 /* Free all pages that are still in this zram device */ 1448 1445 for (index = 0; index < num_pages; index++) 1449 1446 zram_free_page(zram, index); 1450 1447 1451 1448 zs_destroy_pool(zram->mem_pool); 1452 1449 vfree(zram->table); 1450 + zram->table = NULL; 1453 1451 } 1454 1452 1455 1453 static bool zram_meta_alloc(struct zram *zram, u64 disksize) ··· 2329 2319 down_write(&zram->init_lock); 2330 2320 2331 2321 zram->limit_pages = 0; 2332 - 2333 - if (!init_done(zram)) { 2334 - up_write(&zram->init_lock); 2335 - return; 2336 - } 2337 2322 2338 2323 set_capacity_and_notify(zram->disk, 0); 2339 2324 part_stat_set_all(zram->disk->part0, 0);
+26 -24
drivers/cpufreq/amd-pstate.c
··· 374 374 375 375 static int msr_init_perf(struct amd_cpudata *cpudata) 376 376 { 377 - u64 cap1; 377 + u64 cap1, numerator; 378 378 379 379 int ret = rdmsrl_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1, 380 380 &cap1); 381 381 if (ret) 382 382 return ret; 383 383 384 - WRITE_ONCE(cpudata->highest_perf, AMD_CPPC_HIGHEST_PERF(cap1)); 385 - WRITE_ONCE(cpudata->max_limit_perf, AMD_CPPC_HIGHEST_PERF(cap1)); 384 + ret = amd_get_boost_ratio_numerator(cpudata->cpu, &numerator); 385 + if (ret) 386 + return ret; 387 + 388 + WRITE_ONCE(cpudata->highest_perf, numerator); 389 + WRITE_ONCE(cpudata->max_limit_perf, numerator); 386 390 WRITE_ONCE(cpudata->nominal_perf, AMD_CPPC_NOMINAL_PERF(cap1)); 387 391 WRITE_ONCE(cpudata->lowest_nonlinear_perf, AMD_CPPC_LOWNONLIN_PERF(cap1)); 388 392 WRITE_ONCE(cpudata->lowest_perf, AMD_CPPC_LOWEST_PERF(cap1)); ··· 398 394 static int shmem_init_perf(struct amd_cpudata *cpudata) 399 395 { 400 396 struct cppc_perf_caps cppc_perf; 397 + u64 numerator; 401 398 402 399 int ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf); 403 400 if (ret) 404 401 return ret; 405 402 406 - WRITE_ONCE(cpudata->highest_perf, cppc_perf.highest_perf); 407 - WRITE_ONCE(cpudata->max_limit_perf, cppc_perf.highest_perf); 403 + ret = amd_get_boost_ratio_numerator(cpudata->cpu, &numerator); 404 + if (ret) 405 + return ret; 406 + 407 + WRITE_ONCE(cpudata->highest_perf, numerator); 408 + WRITE_ONCE(cpudata->max_limit_perf, numerator); 408 409 WRITE_ONCE(cpudata->nominal_perf, cppc_perf.nominal_perf); 409 410 WRITE_ONCE(cpudata->lowest_nonlinear_perf, 410 411 cppc_perf.lowest_nonlinear_perf); ··· 570 561 571 562 static int amd_pstate_update_min_max_limit(struct cpufreq_policy *policy) 572 563 { 573 - u32 max_limit_perf, min_limit_perf, lowest_perf, max_perf; 564 + u32 max_limit_perf, min_limit_perf, lowest_perf, max_perf, max_freq; 574 565 struct amd_cpudata *cpudata = policy->driver_data; 575 566 576 - if (cpudata->boost_supported && !policy->boost_enabled) 577 - max_perf = 
READ_ONCE(cpudata->nominal_perf); 578 - else 579 - max_perf = READ_ONCE(cpudata->highest_perf); 580 - 581 - max_limit_perf = div_u64(policy->max * max_perf, policy->cpuinfo.max_freq); 582 - min_limit_perf = div_u64(policy->min * max_perf, policy->cpuinfo.max_freq); 567 + max_perf = READ_ONCE(cpudata->highest_perf); 568 + max_freq = READ_ONCE(cpudata->max_freq); 569 + max_limit_perf = div_u64(policy->max * max_perf, max_freq); 570 + min_limit_perf = div_u64(policy->min * max_perf, max_freq); 583 571 584 572 lowest_perf = READ_ONCE(cpudata->lowest_perf); 585 573 if (min_limit_perf < lowest_perf) ··· 895 889 { 896 890 int ret; 897 891 u32 min_freq, max_freq; 898 - u64 numerator; 899 892 u32 nominal_perf, nominal_freq; 900 893 u32 lowest_nonlinear_perf, lowest_nonlinear_freq; 901 894 u32 boost_ratio, lowest_nonlinear_ratio; ··· 916 911 917 912 nominal_perf = READ_ONCE(cpudata->nominal_perf); 918 913 919 - ret = amd_get_boost_ratio_numerator(cpudata->cpu, &numerator); 920 - if (ret) 921 - return ret; 922 - boost_ratio = div_u64(numerator << SCHED_CAPACITY_SHIFT, nominal_perf); 914 + boost_ratio = div_u64(cpudata->highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf); 923 915 max_freq = (nominal_freq * boost_ratio >> SCHED_CAPACITY_SHIFT) * 1000; 924 916 925 917 lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf); ··· 1871 1869 static_call_update(amd_pstate_update_perf, shmem_update_perf); 1872 1870 } 1873 1871 1874 - ret = amd_pstate_register_driver(cppc_state); 1875 - if (ret) { 1876 - pr_err("failed to register with return %d\n", ret); 1877 - return ret; 1878 - } 1879 - 1880 1872 if (amd_pstate_prefcore) { 1881 1873 ret = amd_detect_prefcore(&amd_pstate_prefcore); 1882 1874 if (ret) 1883 1875 return ret; 1876 + } 1877 + 1878 + ret = amd_pstate_register_driver(cppc_state); 1879 + if (ret) { 1880 + pr_err("failed to register with return %d\n", ret); 1881 + return ret; 1884 1882 } 1885 1883 1886 1884 dev_root = bus_get_dev_root(&cpu_subsys);
+1 -1
drivers/dma-buf/dma-buf.c
··· 60 60 { 61 61 } 62 62 63 - static void __dma_buf_debugfs_list_del(struct file *file) 63 + static void __dma_buf_debugfs_list_del(struct dma_buf *dmabuf) 64 64 { 65 65 } 66 66 #endif
+27 -16
drivers/dma-buf/udmabuf.c
··· 297 297 }; 298 298 299 299 #define SEALS_WANTED (F_SEAL_SHRINK) 300 - #define SEALS_DENIED (F_SEAL_WRITE) 300 + #define SEALS_DENIED (F_SEAL_WRITE|F_SEAL_FUTURE_WRITE) 301 301 302 302 static int check_memfd_seals(struct file *memfd) 303 303 { ··· 317 317 return 0; 318 318 } 319 319 320 - static int export_udmabuf(struct udmabuf *ubuf, 321 - struct miscdevice *device, 322 - u32 flags) 320 + static struct dma_buf *export_udmabuf(struct udmabuf *ubuf, 321 + struct miscdevice *device) 323 322 { 324 323 DEFINE_DMA_BUF_EXPORT_INFO(exp_info); 325 - struct dma_buf *buf; 326 324 327 325 ubuf->device = device; 328 326 exp_info.ops = &udmabuf_ops; ··· 328 330 exp_info.priv = ubuf; 329 331 exp_info.flags = O_RDWR; 330 332 331 - buf = dma_buf_export(&exp_info); 332 - if (IS_ERR(buf)) 333 - return PTR_ERR(buf); 334 - 335 - return dma_buf_fd(buf, flags); 333 + return dma_buf_export(&exp_info); 336 334 } 337 335 338 336 static long udmabuf_pin_folios(struct udmabuf *ubuf, struct file *memfd, ··· 385 391 struct folio **folios = NULL; 386 392 pgoff_t pgcnt = 0, pglimit; 387 393 struct udmabuf *ubuf; 394 + struct dma_buf *dmabuf; 388 395 long ret = -EINVAL; 389 396 u32 i, flags; 390 397 ··· 431 436 goto err; 432 437 } 433 438 439 + /* 440 + * Take the inode lock to protect against concurrent 441 + * memfd_add_seals(), which takes this lock in write mode. 442 + */ 443 + inode_lock_shared(file_inode(memfd)); 434 444 ret = check_memfd_seals(memfd); 435 - if (ret < 0) { 436 - fput(memfd); 437 - goto err; 438 - } 445 + if (ret) 446 + goto out_unlock; 439 447 440 448 ret = udmabuf_pin_folios(ubuf, memfd, list[i].offset, 441 449 list[i].size, folios); 450 + out_unlock: 451 + inode_unlock_shared(file_inode(memfd)); 442 452 fput(memfd); 443 453 if (ret) 444 454 goto err; 445 455 } 446 456 447 457 flags = head->flags & UDMABUF_FLAGS_CLOEXEC ? 
O_CLOEXEC : 0; 448 - ret = export_udmabuf(ubuf, device, flags); 449 - if (ret < 0) 458 + dmabuf = export_udmabuf(ubuf, device); 459 + if (IS_ERR(dmabuf)) { 460 + ret = PTR_ERR(dmabuf); 450 461 goto err; 462 + } 463 + /* 464 + * Ownership of ubuf is held by the dmabuf from here. 465 + * If the following dma_buf_fd() fails, dma_buf_put() cleans up both the 466 + * dmabuf and the ubuf (through udmabuf_ops.release). 467 + */ 468 + 469 + ret = dma_buf_fd(dmabuf, flags); 470 + if (ret < 0) 471 + dma_buf_put(dmabuf); 451 472 452 473 kvfree(folios); 453 474 return ret;
+12 -16
drivers/dma/amd/qdma/qdma.c
··· 7 7 #include <linux/bitfield.h> 8 8 #include <linux/bitops.h> 9 9 #include <linux/dmaengine.h> 10 + #include <linux/dma-mapping.h> 10 11 #include <linux/module.h> 11 12 #include <linux/mod_devicetable.h> 12 - #include <linux/dma-map-ops.h> 13 13 #include <linux/platform_device.h> 14 14 #include <linux/platform_data/amd_qdma.h> 15 15 #include <linux/regmap.h> ··· 492 492 493 493 static int qdma_device_setup(struct qdma_device *qdev) 494 494 { 495 - struct device *dev = &qdev->pdev->dev; 496 495 u32 ring_sz = QDMA_DEFAULT_RING_SIZE; 497 496 int ret = 0; 498 - 499 - while (dev && get_dma_ops(dev)) 500 - dev = dev->parent; 501 - if (!dev) { 502 - qdma_err(qdev, "dma device not found"); 503 - return -EINVAL; 504 - } 505 - set_dma_ops(&qdev->pdev->dev, get_dma_ops(dev)); 506 497 507 498 ret = qdma_setup_fmap_context(qdev); 508 499 if (ret) { ··· 539 548 { 540 549 struct qdma_queue *queue = to_qdma_queue(chan); 541 550 struct qdma_device *qdev = queue->qdev; 542 - struct device *dev = qdev->dma_dev.dev; 551 + struct qdma_platdata *pdata; 543 552 544 553 qdma_clear_queue_context(queue); 545 554 vchan_free_chan_resources(&queue->vchan); 546 - dma_free_coherent(dev, queue->ring_size * QDMA_MM_DESC_SIZE, 555 + pdata = dev_get_platdata(&qdev->pdev->dev); 556 + dma_free_coherent(pdata->dma_dev, queue->ring_size * QDMA_MM_DESC_SIZE, 547 557 queue->desc_base, queue->dma_desc_base); 548 558 } 549 559 ··· 557 565 struct qdma_queue *queue = to_qdma_queue(chan); 558 566 struct qdma_device *qdev = queue->qdev; 559 567 struct qdma_ctxt_sw_desc desc; 568 + struct qdma_platdata *pdata; 560 569 size_t size; 561 570 int ret; 562 571 ··· 565 572 if (ret) 566 573 return ret; 567 574 575 + pdata = dev_get_platdata(&qdev->pdev->dev); 568 576 size = queue->ring_size * QDMA_MM_DESC_SIZE; 569 - queue->desc_base = dma_alloc_coherent(qdev->dma_dev.dev, size, 577 + queue->desc_base = dma_alloc_coherent(pdata->dma_dev, size, 570 578 &queue->dma_desc_base, 571 579 GFP_KERNEL); 572 580 if 
(!queue->desc_base) { ··· 582 588 if (ret) { 583 589 qdma_err(qdev, "Failed to setup SW desc ctxt for %s", 584 590 chan->name); 585 - dma_free_coherent(qdev->dma_dev.dev, size, queue->desc_base, 591 + dma_free_coherent(pdata->dma_dev, size, queue->desc_base, 586 592 queue->dma_desc_base); 587 593 return ret; 588 594 } ··· 942 948 943 949 static int qdmam_alloc_qintr_rings(struct qdma_device *qdev) 944 950 { 945 - u32 ctxt[QDMA_CTXT_REGMAP_LEN]; 951 + struct qdma_platdata *pdata = dev_get_platdata(&qdev->pdev->dev); 946 952 struct device *dev = &qdev->pdev->dev; 953 + u32 ctxt[QDMA_CTXT_REGMAP_LEN]; 947 954 struct qdma_intr_ring *ring; 948 955 struct qdma_ctxt_intr intr_ctxt; 949 956 u32 vector; ··· 964 969 ring->msix_id = qdev->err_irq_idx + i + 1; 965 970 ring->ridx = i; 966 971 ring->color = 1; 967 - ring->base = dmam_alloc_coherent(dev, QDMA_INTR_RING_SIZE, 972 + ring->base = dmam_alloc_coherent(pdata->dma_dev, 973 + QDMA_INTR_RING_SIZE, 968 974 &ring->dev_base, GFP_KERNEL); 969 975 if (!ring->base) { 970 976 qdma_err(qdev, "Failed to alloc intr ring %d", i);
+2 -5
drivers/dma/apple-admac.c
··· 153 153 { 154 154 struct admac_sram *sram; 155 155 int i, ret = 0, nblocks; 156 + ad->txcache.size = readl_relaxed(ad->base + REG_TX_SRAM_SIZE); 157 + ad->rxcache.size = readl_relaxed(ad->base + REG_RX_SRAM_SIZE); 156 158 157 159 if (dir == DMA_MEM_TO_DEV) 158 160 sram = &ad->txcache; ··· 914 912 goto free_irq; 915 913 } 916 914 917 - ad->txcache.size = readl_relaxed(ad->base + REG_TX_SRAM_SIZE); 918 - ad->rxcache.size = readl_relaxed(ad->base + REG_RX_SRAM_SIZE); 919 - 920 915 dev_info(&pdev->dev, "Audio DMA Controller\n"); 921 - dev_info(&pdev->dev, "imprint %x TX cache %u RX cache %u\n", 922 - readl_relaxed(ad->base + REG_IMPRINT), ad->txcache.size, ad->rxcache.size); 923 916 924 917 return 0; 925 918
+2
drivers/dma/at_xdmac.c
··· 1363 1363 return NULL; 1364 1364 1365 1365 desc = at_xdmac_memset_create_desc(chan, atchan, dest, len, value); 1366 + if (!desc) 1367 + return NULL; 1366 1368 list_add_tail(&desc->desc_node, &desc->descs_list); 1367 1369 1368 1370 desc->tx_dma_desc.cookie = -EBUSY;
+4 -2
drivers/dma/dw/acpi.c
··· 8 8 9 9 static bool dw_dma_acpi_filter(struct dma_chan *chan, void *param) 10 10 { 11 + struct dw_dma *dw = to_dw_dma(chan->device); 12 + struct dw_dma_chip_pdata *data = dev_get_drvdata(dw->dma.dev); 11 13 struct acpi_dma_spec *dma_spec = param; 12 14 struct dw_dma_slave slave = { 13 15 .dma_dev = dma_spec->dev, 14 16 .src_id = dma_spec->slave_id, 15 17 .dst_id = dma_spec->slave_id, 16 - .m_master = 0, 17 - .p_master = 1, 18 + .m_master = data->m_master, 19 + .p_master = data->p_master, 18 20 }; 19 21 20 22 return dw_dma_filter(chan, &slave);
+8
drivers/dma/dw/internal.h
··· 51 51 int (*probe)(struct dw_dma_chip *chip); 52 52 int (*remove)(struct dw_dma_chip *chip); 53 53 struct dw_dma_chip *chip; 54 + u8 m_master; 55 + u8 p_master; 54 56 }; 55 57 56 58 static __maybe_unused const struct dw_dma_chip_pdata dw_dma_chip_pdata = { 57 59 .probe = dw_dma_probe, 58 60 .remove = dw_dma_remove, 61 + .m_master = 0, 62 + .p_master = 1, 59 63 }; 60 64 61 65 static const struct dw_dma_platform_data idma32_pdata = { ··· 76 72 .pdata = &idma32_pdata, 77 73 .probe = idma32_dma_probe, 78 74 .remove = idma32_dma_remove, 75 + .m_master = 0, 76 + .p_master = 0, 79 77 }; 80 78 81 79 static const struct dw_dma_platform_data xbar_pdata = { ··· 94 88 .pdata = &xbar_pdata, 95 89 .probe = idma32_dma_probe, 96 90 .remove = idma32_dma_remove, 91 + .m_master = 0, 92 + .p_master = 0, 97 93 }; 98 94 99 95 #endif /* _DMA_DW_INTERNAL_H */
+2 -2
drivers/dma/dw/pci.c
··· 56 56 if (ret) 57 57 return ret; 58 58 59 - dw_dma_acpi_controller_register(chip->dw); 60 - 61 59 pci_set_drvdata(pdev, data); 60 + 61 + dw_dma_acpi_controller_register(chip->dw); 62 62 63 63 return 0; 64 64 }
+1
drivers/dma/fsl-edma-common.h
··· 166 166 struct work_struct issue_worker; 167 167 struct platform_device *pdev; 168 168 struct device *pd_dev; 169 + struct device_link *pd_dev_link; 169 170 u32 srcid; 170 171 struct clk *clk; 171 172 int priority;
+36 -5
drivers/dma/fsl-edma-main.c
··· 417 417 }; 418 418 MODULE_DEVICE_TABLE(of, fsl_edma_dt_ids); 419 419 420 + static void fsl_edma3_detach_pd(struct fsl_edma_engine *fsl_edma) 421 + { 422 + struct fsl_edma_chan *fsl_chan; 423 + int i; 424 + 425 + for (i = 0; i < fsl_edma->n_chans; i++) { 426 + if (fsl_edma->chan_masked & BIT(i)) 427 + continue; 428 + fsl_chan = &fsl_edma->chans[i]; 429 + if (fsl_chan->pd_dev_link) 430 + device_link_del(fsl_chan->pd_dev_link); 431 + if (fsl_chan->pd_dev) { 432 + dev_pm_domain_detach(fsl_chan->pd_dev, false); 433 + pm_runtime_dont_use_autosuspend(fsl_chan->pd_dev); 434 + pm_runtime_set_suspended(fsl_chan->pd_dev); 435 + } 436 + } 437 + } 438 + 439 + static void devm_fsl_edma3_detach_pd(void *data) 440 + { 441 + fsl_edma3_detach_pd(data); 442 + } 443 + 420 444 static int fsl_edma3_attach_pd(struct platform_device *pdev, struct fsl_edma_engine *fsl_edma) 421 445 { 422 446 struct fsl_edma_chan *fsl_chan; 423 - struct device_link *link; 424 447 struct device *pd_chan; 425 448 struct device *dev; 426 449 int i; ··· 459 436 pd_chan = dev_pm_domain_attach_by_id(dev, i); 460 437 if (IS_ERR_OR_NULL(pd_chan)) { 461 438 dev_err(dev, "Failed attach pd %d\n", i); 462 - return -EINVAL; 439 + goto detach; 463 440 } 464 441 465 - link = device_link_add(dev, pd_chan, DL_FLAG_STATELESS | 442 + fsl_chan->pd_dev_link = device_link_add(dev, pd_chan, DL_FLAG_STATELESS | 466 443 DL_FLAG_PM_RUNTIME | 467 444 DL_FLAG_RPM_ACTIVE); 468 - if (!link) { 445 + if (!fsl_chan->pd_dev_link) { 469 446 dev_err(dev, "Failed to add device_link to %d\n", i); 470 - return -EINVAL; 447 + dev_pm_domain_detach(pd_chan, false); 448 + goto detach; 471 449 } 472 450 473 451 fsl_chan->pd_dev = pd_chan; ··· 479 455 } 480 456 481 457 return 0; 458 + 459 + detach: 460 + fsl_edma3_detach_pd(fsl_edma); 461 + return -EINVAL; 482 462 } 483 463 484 464 static int fsl_edma_probe(struct platform_device *pdev) ··· 570 542 571 543 if (drvdata->flags & FSL_EDMA_DRV_HAS_PD) { 572 544 ret = fsl_edma3_attach_pd(pdev, 
fsl_edma); 545 + if (ret) 546 + return ret; 547 + ret = devm_add_action_or_reset(&pdev->dev, devm_fsl_edma3_detach_pd, fsl_edma); 573 548 if (ret) 574 549 return ret; 575 550 }
+1 -1
drivers/dma/loongson2-apb-dma.c
··· 31 31 #define LDMA_ASK_VALID BIT(2) 32 32 #define LDMA_START BIT(3) /* DMA start operation */ 33 33 #define LDMA_STOP BIT(4) /* DMA stop operation */ 34 - #define LDMA_CONFIG_MASK GENMASK(4, 0) /* DMA controller config bits mask */ 34 + #define LDMA_CONFIG_MASK GENMASK_ULL(4, 0) /* DMA controller config bits mask */ 35 35 36 36 /* Bitfields in ndesc_addr field of HW descriptor */ 37 37 #define LDMA_DESC_EN BIT(0) /*1: The next descriptor is valid */
+2
drivers/dma/mv_xor.c
··· 1388 1388 irq = irq_of_parse_and_map(np, 0); 1389 1389 if (!irq) { 1390 1390 ret = -ENODEV; 1391 + of_node_put(np); 1391 1392 goto err_channel_add; 1392 1393 } 1393 1394 ··· 1397 1396 if (IS_ERR(chan)) { 1398 1397 ret = PTR_ERR(chan); 1399 1398 irq_dispose_mapping(irq); 1399 + of_node_put(np); 1400 1400 goto err_channel_add; 1401 1401 } 1402 1402
+10
drivers/dma/tegra186-gpc-dma.c
··· 231 231 bool config_init; 232 232 char name[30]; 233 233 enum dma_transfer_direction sid_dir; 234 + enum dma_status status; 234 235 int id; 235 236 int irq; 236 237 int slave_id; ··· 394 393 tegra_dma_dump_chan_regs(tdc); 395 394 } 396 395 396 + tdc->status = DMA_PAUSED; 397 + 397 398 return ret; 398 399 } 399 400 ··· 422 419 val = tdc_read(tdc, TEGRA_GPCDMA_CHAN_CSRE); 423 420 val &= ~TEGRA_GPCDMA_CHAN_CSRE_PAUSE; 424 421 tdc_write(tdc, TEGRA_GPCDMA_CHAN_CSRE, val); 422 + 423 + tdc->status = DMA_IN_PROGRESS; 425 424 } 426 425 427 426 static int tegra_dma_device_resume(struct dma_chan *dc) ··· 549 544 550 545 tegra_dma_sid_free(tdc); 551 546 tdc->dma_desc = NULL; 547 + tdc->status = DMA_COMPLETE; 552 548 } 553 549 554 550 static void tegra_dma_chan_decode_error(struct tegra_dma_channel *tdc, ··· 722 716 tdc->dma_desc = NULL; 723 717 } 724 718 719 + tdc->status = DMA_COMPLETE; 725 720 tegra_dma_sid_free(tdc); 726 721 vchan_get_all_descriptors(&tdc->vc, &head); 727 722 spin_unlock_irqrestore(&tdc->vc.lock, flags); ··· 775 768 ret = dma_cookie_status(dc, cookie, txstate); 776 769 if (ret == DMA_COMPLETE) 777 770 return ret; 771 + 772 + if (tdc->status == DMA_PAUSED) 773 + ret = DMA_PAUSED; 778 774 779 775 spin_lock_irqsave(&tdc->vc.lock, flags); 780 776 vd = vchan_find_desc(&tdc->vc, cookie);
+2 -2
drivers/firmware/microchip/mpfs-auto-update.c
··· 402 402 return -EIO; 403 403 404 404 /* 405 - * Bit 5 of byte 1 is "UL_Auto Update" & if it is set, Auto Update is 405 + * Bit 5 of byte 1 is "UL_IAP" & if it is set, Auto Update is 406 406 * not possible. 407 407 */ 408 - if (response_msg[1] & AUTO_UPDATE_FEATURE_ENABLED) 408 + if ((((u8 *)response_msg)[1] & AUTO_UPDATE_FEATURE_ENABLED)) 409 409 return -EPERM; 410 410 411 411 return 0;
+4
drivers/gpu/drm/Kconfig
··· 99 99 config DRM_KMS_HELPER 100 100 tristate 101 101 depends on DRM 102 + select FB_CORE if DRM_FBDEV_EMULATION 102 103 help 103 104 CRTC helpers for KMS drivers. 104 105 ··· 359 358 tristate 360 359 depends on DRM 361 360 select DRM_TTM 361 + select FB_CORE if DRM_FBDEV_EMULATION 362 362 select FB_SYSMEM_HELPERS_DEFERRED if DRM_FBDEV_EMULATION 363 363 help 364 364 Helpers for ttm-based gem objects ··· 367 365 config DRM_GEM_DMA_HELPER 368 366 tristate 369 367 depends on DRM 368 + select FB_CORE if DRM_FBDEV_EMULATION 370 369 select FB_DMAMEM_HELPERS_DEFERRED if DRM_FBDEV_EMULATION 371 370 help 372 371 Choose this if you need the GEM DMA helper functions ··· 375 372 config DRM_GEM_SHMEM_HELPER 376 373 tristate 377 374 depends on DRM && MMU 375 + select FB_CORE if DRM_FBDEV_EMULATION 378 376 select FB_SYSMEM_HELPERS_DEFERRED if DRM_FBDEV_EMULATION 379 377 help 380 378 Choose this if you need the GEM shmem helper functions
+2 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c
··· 343 343 coredump->skip_vram_check = skip_vram_check; 344 344 coredump->reset_vram_lost = vram_lost; 345 345 346 - if (job && job->vm) { 347 - struct amdgpu_vm *vm = job->vm; 346 + if (job && job->pasid) { 348 347 struct amdgpu_task_info *ti; 349 348 350 - ti = amdgpu_vm_get_task_info_vm(vm); 349 + ti = amdgpu_vm_get_task_info_pasid(adev, job->pasid); 351 350 if (ti) { 352 351 coredump->reset_task_info = *ti; 353 352 amdgpu_vm_put_task_info(ti);
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 417 417 { 418 418 struct amdgpu_device *adev = drm_to_adev(dev); 419 419 420 + if (!IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE)) 421 + return false; 422 + 420 423 if (adev->has_pr3 || 421 424 ((adev->flags & AMD_IS_PX) && amdgpu_is_atpx_hybrid())) 422 425 return true;
+1 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
··· 255 255 256 256 void amdgpu_job_free_resources(struct amdgpu_job *job) 257 257 { 258 - struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched); 259 258 struct dma_fence *f; 260 259 unsigned i; 261 260 ··· 267 268 f = NULL; 268 269 269 270 for (i = 0; i < job->num_ibs; ++i) 270 - amdgpu_ib_free(ring->adev, &job->ibs[i], f); 271 + amdgpu_ib_free(NULL, &job->ibs[i], f); 271 272 } 272 273 273 274 static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
+3 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 1266 1266 * next command submission. 1267 1267 */ 1268 1268 if (amdgpu_vm_is_bo_always_valid(vm, bo)) { 1269 - uint32_t mem_type = bo->tbo.resource->mem_type; 1270 - 1271 - if (!(bo->preferred_domains & 1272 - amdgpu_mem_type_to_domain(mem_type))) 1269 + if (bo->tbo.resource && 1270 + !(bo->preferred_domains & 1271 + amdgpu_mem_type_to_domain(bo->tbo.resource->mem_type))) 1273 1272 amdgpu_vm_bo_evicted(&bo_va->base); 1274 1273 else 1275 1274 amdgpu_vm_bo_idle(&bo_va->base);
+1 -1
drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
··· 4123 4123 if (amdgpu_sriov_vf(adev)) 4124 4124 return 0; 4125 4125 4126 - switch (adev->ip_versions[GC_HWIP][0]) { 4126 + switch (amdgpu_ip_version(adev, GC_HWIP, 0)) { 4127 4127 case IP_VERSION(12, 0, 0): 4128 4128 case IP_VERSION(12, 0, 1): 4129 4129 gfx_v12_0_update_gfx_clock_gating(adev,
+1 -1
drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
··· 108 108 dev_err(adev->dev, 109 109 "MMVM_L2_PROTECTION_FAULT_STATUS_LO32:0x%08X\n", 110 110 status); 111 - switch (adev->ip_versions[MMHUB_HWIP][0]) { 111 + switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) { 112 112 case IP_VERSION(4, 1, 0): 113 113 mmhub_cid = mmhub_client_ids_v4_1_0[cid][rw]; 114 114 break;
+11
drivers/gpu/drm/amd/amdgpu/nbio_v7_0.c
··· 271 271 .ref_and_mask_sdma1 = GPU_HDP_FLUSH_DONE__SDMA1_MASK, 272 272 }; 273 273 274 + #define regRCC_DEV0_EPF6_STRAP4 0xd304 275 + #define regRCC_DEV0_EPF6_STRAP4_BASE_IDX 5 276 + 274 277 static void nbio_v7_0_init_registers(struct amdgpu_device *adev) 275 278 { 279 + uint32_t data; 280 + 281 + switch (amdgpu_ip_version(adev, NBIO_HWIP, 0)) { 282 + case IP_VERSION(2, 5, 0): 283 + data = RREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF6_STRAP4) & ~BIT(23); 284 + WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF6_STRAP4, data); 285 + break; 286 + } 276 287 } 277 288 278 289 #define MMIO_REG_HOLE_OFFSET (0x80000 - PAGE_SIZE)
+1 -1
drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
··· 275 275 if (def != data) 276 276 WREG32_SOC15(NBIO, 0, regBIF_BIF256_CI256_RC3X4_USB4_PCIE_MST_CTRL_3, data); 277 277 278 - switch (adev->ip_versions[NBIO_HWIP][0]) { 278 + switch (amdgpu_ip_version(adev, NBIO_HWIP, 0)) { 279 279 case IP_VERSION(7, 11, 0): 280 280 case IP_VERSION(7, 11, 1): 281 281 case IP_VERSION(7, 11, 2):
+1 -1
drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c
··· 247 247 if (def != data) 248 248 WREG32_SOC15(NBIO, 0, regBIF0_PCIE_MST_CTRL_3, data); 249 249 250 - switch (adev->ip_versions[NBIO_HWIP][0]) { 250 + switch (amdgpu_ip_version(adev, NBIO_HWIP, 0)) { 251 251 case IP_VERSION(7, 7, 0): 252 252 data = RREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF5_STRAP4) & ~BIT(23); 253 253 WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF5_STRAP4, data);
+1 -1
drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
··· 2096 2096 { 2097 2097 struct amdgpu_device *adev = smu->adev; 2098 2098 2099 - if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(14, 0, 2)) 2099 + if (amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(14, 0, 2)) 2100 2100 return smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_EnableAllSmuFeatures, 2101 2101 FEATURE_PWR_GFX, NULL); 2102 2102 else
+12 -2
drivers/gpu/drm/bridge/adv7511/adv7511_audio.c
··· 153 153 ADV7511_AUDIO_CFG3_LEN_MASK, len); 154 154 regmap_update_bits(adv7511->regmap, ADV7511_REG_I2C_FREQ_ID_CFG, 155 155 ADV7511_I2C_FREQ_ID_CFG_RATE_MASK, rate << 4); 156 - regmap_write(adv7511->regmap, 0x73, 0x1); 156 + 157 + /* send current Audio infoframe values while updating */ 158 + regmap_update_bits(adv7511->regmap, ADV7511_REG_INFOFRAME_UPDATE, 159 + BIT(5), BIT(5)); 160 + 161 + regmap_write(adv7511->regmap, ADV7511_REG_AUDIO_INFOFRAME(0), 0x1); 162 + 163 + /* use Audio infoframe updated info */ 164 + regmap_update_bits(adv7511->regmap, ADV7511_REG_INFOFRAME_UPDATE, 165 + BIT(5), 0); 157 166 158 167 return 0; 159 168 } ··· 193 184 regmap_update_bits(adv7511->regmap, ADV7511_REG_GC(0), 194 185 BIT(7) | BIT(6), BIT(7)); 195 186 /* use Audio infoframe updated info */ 196 - regmap_update_bits(adv7511->regmap, ADV7511_REG_GC(1), 187 + regmap_update_bits(adv7511->regmap, ADV7511_REG_INFOFRAME_UPDATE, 197 188 BIT(5), 0); 189 + 198 190 /* enable SPDIF receiver */ 199 191 if (adv7511->audio_source == ADV7511_AUDIO_SOURCE_SPDIF) 200 192 regmap_update_bits(adv7511->regmap, ADV7511_REG_AUDIO_CONFIG,
+8 -2
drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
··· 1241 1241 return ret; 1242 1242 1243 1243 ret = adv7511_init_regulators(adv7511); 1244 - if (ret) 1245 - return dev_err_probe(dev, ret, "failed to init regulators\n"); 1244 + if (ret) { 1245 + dev_err_probe(dev, ret, "failed to init regulators\n"); 1246 + goto err_of_node_put; 1247 + } 1246 1248 1247 1249 /* 1248 1250 * The power down GPIO is optional. If present, toggle it from active to ··· 1365 1363 i2c_unregister_device(adv7511->i2c_edid); 1366 1364 uninit_regulators: 1367 1365 adv7511_uninit_regulators(adv7511); 1366 + err_of_node_put: 1367 + of_node_put(adv7511->host_node); 1368 1368 1369 1369 return ret; 1370 1370 } ··· 1374 1370 static void adv7511_remove(struct i2c_client *i2c) 1375 1371 { 1376 1372 struct adv7511 *adv7511 = i2c_get_clientdata(i2c); 1373 + 1374 + of_node_put(adv7511->host_node); 1377 1375 1378 1376 adv7511_uninit_regulators(adv7511); 1379 1377
+1 -3
drivers/gpu/drm/bridge/adv7511/adv7533.c
··· 172 172 173 173 of_property_read_u32(np, "adi,dsi-lanes", &num_lanes); 174 174 175 - if (num_lanes < 1 || num_lanes > 4) 175 + if (num_lanes < 2 || num_lanes > 4) 176 176 return -EINVAL; 177 177 178 178 adv->num_dsi_lanes = num_lanes; ··· 180 180 adv->host_node = of_graph_get_remote_node(np, 0, 0); 181 181 if (!adv->host_node) 182 182 return -ENODEV; 183 - 184 - of_node_put(adv->host_node); 185 183 186 184 adv->use_timing_gen = !of_property_read_bool(np, 187 185 "adi,disable-timing-generator");
+5 -5
drivers/gpu/drm/display/drm_dp_tunnel.c
··· 1896 1896 * 1897 1897 * Creates a DP tunnel manager for @dev. 1898 1898 * 1899 - * Returns a pointer to the tunnel manager if created successfully or NULL in 1900 - * case of an error. 1899 + * Returns a pointer to the tunnel manager if created successfully or error 1900 + * pointer in case of failure. 1901 1901 */ 1902 1902 struct drm_dp_tunnel_mgr * 1903 1903 drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count) ··· 1907 1907 1908 1908 mgr = kzalloc(sizeof(*mgr), GFP_KERNEL); 1909 1909 if (!mgr) 1910 - return NULL; 1910 + return ERR_PTR(-ENOMEM); 1911 1911 1912 1912 mgr->dev = dev; 1913 1913 init_waitqueue_head(&mgr->bw_req_queue); ··· 1916 1916 if (!mgr->groups) { 1917 1917 kfree(mgr); 1918 1918 1919 - return NULL; 1919 + return ERR_PTR(-ENOMEM); 1920 1920 } 1921 1921 1922 1922 #ifdef CONFIG_DRM_DISPLAY_DP_TUNNEL_STATE_DEBUG ··· 1927 1927 if (!init_group(mgr, &mgr->groups[i])) { 1928 1928 destroy_mgr(mgr); 1929 1929 1930 - return NULL; 1930 + return ERR_PTR(-ENOMEM); 1931 1931 } 1932 1932 1933 1933 mgr->group_count++;
+7 -4
drivers/gpu/drm/drm_modes.c
··· 1287 1287 */ 1288 1288 int drm_mode_vrefresh(const struct drm_display_mode *mode) 1289 1289 { 1290 - unsigned int num, den; 1290 + unsigned int num = 1, den = 1; 1291 1291 1292 1292 if (mode->htotal == 0 || mode->vtotal == 0) 1293 1293 return 0; 1294 - 1295 - num = mode->clock; 1296 - den = mode->htotal * mode->vtotal; 1297 1294 1298 1295 if (mode->flags & DRM_MODE_FLAG_INTERLACE) 1299 1296 num *= 2; ··· 1298 1301 den *= 2; 1299 1302 if (mode->vscan > 1) 1300 1303 den *= mode->vscan; 1304 + 1305 + if (check_mul_overflow(mode->clock, num, &num)) 1306 + return 0; 1307 + 1308 + if (check_mul_overflow(mode->htotal * mode->vtotal, den, &den)) 1309 + return 0; 1301 1310 1302 1311 return DIV_ROUND_CLOSEST_ULL(mul_u32_u32(num, 1000), den); 1303 1312 }
+4 -8
drivers/gpu/drm/i915/display/intel_cx0_phy.c
··· 2115 2115 0, C10_VDR_CTRL_MSGBUS_ACCESS, 2116 2116 MB_WRITE_COMMITTED); 2117 2117 2118 - /* Custom width needs to be programmed to 0 for both the phy lanes */ 2119 - intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CUSTOM_WIDTH, 2120 - C10_VDR_CUSTOM_WIDTH_MASK, C10_VDR_CUSTOM_WIDTH_8_10, 2121 - MB_WRITE_COMMITTED); 2122 - intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1), 2123 - 0, C10_VDR_CTRL_UPDATE_CFG, 2124 - MB_WRITE_COMMITTED); 2125 - 2126 2118 /* Program the pll values only for the master lane */ 2127 2119 for (i = 0; i < ARRAY_SIZE(pll_state->pll); i++) 2128 2120 intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_PLL(i), ··· 2124 2132 intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_CMN(0), pll_state->cmn, MB_WRITE_COMMITTED); 2125 2133 intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_TX(0), pll_state->tx, MB_WRITE_COMMITTED); 2126 2134 2135 + /* Custom width needs to be programmed to 0 for both the phy lanes */ 2136 + intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CUSTOM_WIDTH, 2137 + C10_VDR_CUSTOM_WIDTH_MASK, C10_VDR_CUSTOM_WIDTH_8_10, 2138 + MB_WRITE_COMMITTED); 2127 2139 intel_cx0_rmw(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_CONTROL(1), 2128 2140 0, C10_VDR_CTRL_MASTER_LANE | C10_VDR_CTRL_UPDATE_CFG, 2129 2141 MB_WRITE_COMMITTED);
+5
drivers/gpu/drm/i915/gt/intel_engine_types.h
··· 343 343 * @start_gt_clk: GT clock time of last idle to active transition. 344 344 */ 345 345 u64 start_gt_clk; 346 + 347 + /** 348 + * @total: The last value of total returned 349 + */ 350 + u64 total; 346 351 }; 347 352 348 353 union intel_engine_tlb_inv_reg {
+1 -1
drivers/gpu/drm/i915/gt/intel_rc6.c
··· 133 133 GEN9_MEDIA_PG_ENABLE | 134 134 GEN11_MEDIA_SAMPLER_PG_ENABLE; 135 135 136 - if (GRAPHICS_VER(gt->i915) >= 12) { 136 + if (GRAPHICS_VER(gt->i915) >= 12 && !IS_DG1(gt->i915)) { 137 137 for (i = 0; i < I915_MAX_VCS; i++) 138 138 if (HAS_ENGINE(gt, _VCS(i))) 139 139 pg_enable |= (VDN_HCP_POWERGATE_ENABLE(i) |
+39 -2
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
··· 1243 1243 } while (++i < 6);
 1244 1244 }
 1245 1245 
 1246 + static void __set_engine_usage_record(struct intel_engine_cs *engine,
 1247 + u32 last_in, u32 id, u32 total)
 1248 + {
 1249 + struct iosys_map rec_map = intel_guc_engine_usage_record_map(engine);
 1250 + 
 1251 + #define record_write(map_, field_, val_) \
 1252 + iosys_map_wr_field(map_, 0, struct guc_engine_usage_record, field_, val_)
 1253 + 
 1254 + record_write(&rec_map, last_switch_in_stamp, last_in);
 1255 + record_write(&rec_map, current_context_index, id);
 1256 + record_write(&rec_map, total_runtime, total);
 1257 + 
 1258 + #undef record_write
 1259 + }
 1260 + 
 1246 1261 static void guc_update_engine_gt_clks(struct intel_engine_cs *engine)
 1247 1262 {
 1248 1263 struct intel_engine_guc_stats *stats = &engine->stats.guc;
 ··· 1378 1363 total += intel_gt_clock_interval_to_ns(gt, clk);
 1379 1364 }
 1380 1365 
 1366 + if (total > stats->total)
 1367 + stats->total = total;
 1368 + 
 1381 1369 spin_unlock_irqrestore(&guc->timestamp.lock, flags);
 1382 1370 
 1383 - return ns_to_ktime(total);
 1371 + return ns_to_ktime(stats->total);
 1384 1372 }
 1385 1373 
 1386 1374 static void guc_enable_busyness_worker(struct intel_guc *guc)
 ··· 1449 1431 
 1450 1432 guc_update_pm_timestamp(guc, &unused);
 1451 1433 for_each_engine(engine, gt, id) {
 1434 + struct intel_engine_guc_stats *stats = &engine->stats.guc;
 1435 + 
 1452 1436 guc_update_engine_gt_clks(engine);
 1453 - engine->stats.guc.prev_total = 0;
 1437 + 
 1438 + /*
 1439 + * If resetting a running context, accumulate the active
 1440 + * time as well since there will be no context switch.
 1441 + */
 1442 + if (stats->running) {
 1443 + u64 clk = guc->timestamp.gt_stamp - stats->start_gt_clk;
 1444 + 
 1445 + stats->total_gt_clks += clk;
 1446 + }
 1447 + stats->prev_total = 0;
 1448 + stats->running = 0;
 1454 1449 }
 1455 1450 
 1456 1451 spin_unlock_irqrestore(&guc->timestamp.lock, flags);
 ··· 1574 1543 
 1575 1544 static int guc_action_enable_usage_stats(struct intel_guc *guc)
 1576 1545 {
 1546 + struct intel_gt *gt = guc_to_gt(guc);
 1547 + struct intel_engine_cs *engine;
 1548 + enum intel_engine_id id;
 1577 1549 u32 offset = intel_guc_engine_usage_offset(guc);
 1578 1550 u32 action[] = {
 1579 1551 INTEL_GUC_ACTION_SET_ENG_UTIL_BUFF,
 1580 1552 offset,
 1581 1553 0,
 1582 1554 };
 1555 + 
 1556 + for_each_engine(engine, gt, id)
 1557 + __set_engine_usage_record(engine, 0, 0xffffffff, 0);
 1583 1558 
 1584 1559 return intel_guc_send(guc, action, ARRAY_SIZE(action));
 1585 1560 }
+2
drivers/gpu/drm/panel/panel-himax-hx83102.c
··· 565 565 struct drm_display_mode *mode; 566 566 567 567 mode = drm_mode_duplicate(connector->dev, m); 568 + if (!mode) 569 + return -ENOMEM; 568 570 569 571 mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED; 570 572 drm_mode_set_name(mode);
+2 -2
drivers/gpu/drm/panel/panel-novatek-nt35950.c
··· 481 481 return dev_err_probe(dev, -EPROBE_DEFER, "Cannot get secondary DSI host\n"); 482 482 483 483 nt->dsi[1] = mipi_dsi_device_register_full(dsi_r_host, info); 484 - if (!nt->dsi[1]) { 484 + if (IS_ERR(nt->dsi[1])) { 485 485 dev_err(dev, "Cannot get secondary DSI node\n"); 486 - return -ENODEV; 486 + return PTR_ERR(nt->dsi[1]); 487 487 } 488 488 num_dsis++; 489 489 }
+1
drivers/gpu/drm/panel/panel-sitronix-st7701.c
··· 1177 1177 return dev_err_probe(dev, ret, "Failed to get orientation\n"); 1178 1178 1179 1179 drm_panel_init(&st7701->panel, dev, &st7701_funcs, connector_type); 1180 + st7701->panel.prepare_prev_first = true; 1180 1181 1181 1182 /** 1182 1183 * Once sleep out has been issued, ST7701 IC required to wait 120ms
+1 -1
drivers/gpu/drm/panel/panel-synaptics-r63353.c
··· 325 325 { 326 326 struct r63353_panel *rpanel = mipi_dsi_get_drvdata(dsi); 327 327 328 - r63353_panel_unprepare(&rpanel->base); 328 + drm_panel_unprepare(&rpanel->base); 329 329 } 330 330 331 331 static const struct r63353_desc sharp_ls068b3sx02_data = {
+2 -1
drivers/gpu/drm/scheduler/sched_main.c
··· 1355 1355 * drm_sched_backend_ops.run_job(). Consequently, drm_sched_backend_ops.free_job() 1356 1356 * will not be called for all jobs still in drm_gpu_scheduler.pending_list. 1357 1357 * There is no solution for this currently. Thus, it is up to the driver to make 1358 - * sure that 1358 + * sure that: 1359 + * 1359 1360 * a) drm_sched_fini() is only called after for all submitted jobs 1360 1361 * drm_sched_backend_ops.free_job() has been called or that 1361 1362 * b) the jobs for which drm_sched_backend_ops.free_job() has not been called
+10 -2
drivers/gpu/drm/xe/xe_bo.c
··· 724 724 new_mem->mem_type == XE_PL_SYSTEM) { 725 725 long timeout = dma_resv_wait_timeout(ttm_bo->base.resv, 726 726 DMA_RESV_USAGE_BOOKKEEP, 727 - true, 727 + false, 728 728 MAX_SCHEDULE_TIMEOUT); 729 729 if (timeout < 0) { 730 730 ret = timeout; ··· 848 848 849 849 out: 850 850 if ((!ttm_bo->resource || ttm_bo->resource->mem_type == XE_PL_SYSTEM) && 851 - ttm_bo->ttm) 851 + ttm_bo->ttm) { 852 + long timeout = dma_resv_wait_timeout(ttm_bo->base.resv, 853 + DMA_RESV_USAGE_KERNEL, 854 + false, 855 + MAX_SCHEDULE_TIMEOUT); 856 + if (timeout < 0) 857 + ret = timeout; 858 + 852 859 xe_tt_unmap_sg(ttm_bo->ttm); 860 + } 853 861 854 862 return ret; 855 863 }
+14 -1
drivers/gpu/drm/xe/xe_devcoredump.c
··· 109 109 drm_puts(&p, "\n**** GuC CT ****\n"); 110 110 xe_guc_ct_snapshot_print(ss->guc.ct, &p); 111 111 112 - drm_puts(&p, "\n**** Contexts ****\n"); 112 + /* 113 + * Don't add a new section header here because the mesa debug decoder 114 + * tool expects the context information to be in the 'GuC CT' section. 115 + */ 116 + /* drm_puts(&p, "\n**** Contexts ****\n"); */ 113 117 xe_guc_exec_queue_snapshot_print(ss->ge, &p); 114 118 115 119 drm_puts(&p, "\n**** Job ****\n"); ··· 366 362 const u32 *blob32 = (const u32 *)blob; 367 363 char buff[ASCII85_BUFSZ], *line_buff; 368 364 size_t line_pos = 0; 365 + 366 + /* 367 + * Splitting blobs across multiple lines is not compatible with the mesa 368 + * debug decoder tool. Note that even dropping the explicit '\n' below 369 + * doesn't help because the GuC log is so big some underlying implementation 370 + * still splits the lines at 512K characters. So just bail completely for 371 + * the moment. 372 + */ 373 + return; 369 374 370 375 #define DMESG_MAX_LINE_LEN 800 371 376 #define MIN_SPACE (ASCII85_BUFSZ + 2) /* 85 + "\n\0" */
+9
drivers/gpu/drm/xe/xe_exec_queue.c
··· 8 8 #include <linux/nospec.h> 9 9 10 10 #include <drm/drm_device.h> 11 + #include <drm/drm_drv.h> 11 12 #include <drm/drm_file.h> 12 13 #include <uapi/drm/xe_drm.h> 13 14 ··· 763 762 */ 764 763 void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q) 765 764 { 765 + struct xe_device *xe = gt_to_xe(q->gt); 766 766 struct xe_file *xef; 767 767 struct xe_lrc *lrc; 768 768 u32 old_ts, new_ts; 769 + int idx; 769 770 770 771 /* 771 772 * Jobs that are run during driver load may use an exec_queue, but are ··· 775 772 * for kernel specific work. 776 773 */ 777 774 if (!q->vm || !q->vm->xef) 775 + return; 776 + 777 + /* Synchronize with unbind while holding the xe file open */ 778 + if (!drm_dev_enter(&xe->drm, &idx)) 778 779 return; 779 780 780 781 xef = q->vm->xef; ··· 794 787 lrc = q->lrc[0]; 795 788 new_ts = xe_lrc_update_timestamp(lrc, &old_ts); 796 789 xef->run_ticks[q->class] += (new_ts - old_ts) * q->width; 790 + 791 + drm_dev_exit(idx); 797 792 } 798 793 799 794 /**
+1 -1
drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
··· 2046 2046 valid_any = valid_any || (valid_ggtt && is_primary); 2047 2047 2048 2048 if (IS_DGFX(xe)) { 2049 - bool valid_lmem = pf_get_vf_config_ggtt(primary_gt, vfid); 2049 + bool valid_lmem = pf_get_vf_config_lmem(primary_gt, vfid); 2050 2050 2051 2051 valid_any = valid_any || (valid_lmem && is_primary); 2052 2052 valid_all = valid_all && valid_lmem;
+45 -89
drivers/gpu/drm/xe/xe_oa.c
··· 74 74 struct rcu_head rcu;
 75 75 };
 76 76 
 77 - struct flex {
 78 - struct xe_reg reg;
 79 - u32 offset;
 80 - u32 value;
 81 - };
 82 - 
 83 77 struct xe_oa_open_param {
 84 78 struct xe_file *xef;
 85 79 u32 oa_unit_id;
 ··· 590 596 return ret;
 591 597 }
 592 598 
 599 + static void xe_oa_lock_vma(struct xe_exec_queue *q)
 600 + {
 601 + if (q->vm) {
 602 + down_read(&q->vm->lock);
 603 + xe_vm_lock(q->vm, false);
 604 + }
 605 + }
 606 + 
 607 + static void xe_oa_unlock_vma(struct xe_exec_queue *q)
 608 + {
 609 + if (q->vm) {
 610 + xe_vm_unlock(q->vm);
 611 + up_read(&q->vm->lock);
 612 + }
 613 + }
 614 + 
 593 615 static struct dma_fence *xe_oa_submit_bb(struct xe_oa_stream *stream, enum xe_oa_submit_deps deps,
 594 616 struct xe_bb *bb)
 595 617 {
 618 + struct xe_exec_queue *q = stream->exec_q ?: stream->k_exec_q;
 596 619 struct xe_sched_job *job;
 597 620 struct dma_fence *fence;
 598 621 int err = 0;
 599 622 
 600 - /* Kernel configuration is issued on stream->k_exec_q, not stream->exec_q */
 601 - job = xe_bb_create_job(stream->k_exec_q, bb);
 623 + xe_oa_lock_vma(q);
 624 + 
 625 + job = xe_bb_create_job(q, bb);
 602 626 if (IS_ERR(job)) {
 603 627 err = PTR_ERR(job);
 604 628 goto exit;
 605 629 }
 630 + job->ggtt = true;
 606 631 
 607 632 if (deps == XE_OA_SUBMIT_ADD_DEPS) {
 608 633 for (int i = 0; i < stream->num_syncs && !err; i++)
 ··· 636 623 fence = dma_fence_get(&job->drm.s_fence->finished);
 637 624 xe_sched_job_push(job);
 638 625 
 626 + xe_oa_unlock_vma(q);
 627 + 
 639 628 return fence;
 640 629 err_put_job:
 641 630 xe_sched_job_put(job);
 642 631 exit:
 632 + xe_oa_unlock_vma(q);
 643 633 return ERR_PTR(err);
 644 634 }
 645 635 
 ··· 691 675 dma_fence_put(stream->last_fence);
 692 676 }
 693 677 
 694 - static void xe_oa_store_flex(struct xe_oa_stream *stream, struct xe_lrc *lrc,
 695 - struct xe_bb *bb, const struct flex *flex, u32 count)
 696 - {
 697 - u32 offset = xe_bo_ggtt_addr(lrc->bo);
 698 - 
 699 - do {
 700 - bb->cs[bb->len++] = MI_STORE_DATA_IMM | MI_SDI_GGTT | MI_SDI_NUM_DW(1);
 701 - bb->cs[bb->len++] = offset + flex->offset * sizeof(u32);
 702 - bb->cs[bb->len++] = 0;
 703 - bb->cs[bb->len++] = flex->value;
 704 - 
 705 - } while (flex++, --count);
 706 - }
 707 - 
 708 - static int xe_oa_modify_ctx_image(struct xe_oa_stream *stream, struct xe_lrc *lrc,
 709 - const struct flex *flex, u32 count)
 678 + static int xe_oa_load_with_lri(struct xe_oa_stream *stream, struct xe_oa_reg *reg_lri, u32 count)
 710 679 {
 711 680 struct dma_fence *fence;
 712 681 struct xe_bb *bb;
 713 682 int err;
 714 683 
 715 - bb = xe_bb_new(stream->gt, 4 * count, false);
 684 + bb = xe_bb_new(stream->gt, 2 * count + 1, false);
 716 685 if (IS_ERR(bb)) {
 717 686 err = PTR_ERR(bb);
 718 687 goto exit;
 719 688 }
 720 689 
 721 - xe_oa_store_flex(stream, lrc, bb, flex, count);
 722 - 
 723 - fence = xe_oa_submit_bb(stream, XE_OA_SUBMIT_NO_DEPS, bb);
 724 - if (IS_ERR(fence)) {
 725 - err = PTR_ERR(fence);
 726 - goto free_bb;
 727 - }
 728 - xe_bb_free(bb, fence);
 729 - dma_fence_put(fence);
 730 - 
 731 - return 0;
 732 - free_bb:
 733 - xe_bb_free(bb, NULL);
 734 - exit:
 735 - return err;
 736 - }
 737 - 
 738 - static int xe_oa_load_with_lri(struct xe_oa_stream *stream, struct xe_oa_reg *reg_lri)
 739 - {
 740 - struct dma_fence *fence;
 741 - struct xe_bb *bb;
 742 - int err;
 743 - 
 744 - bb = xe_bb_new(stream->gt, 3, false);
 745 - if (IS_ERR(bb)) {
 746 - err = PTR_ERR(bb);
 747 - goto exit;
 748 - }
 749 - 
 750 - write_cs_mi_lri(bb, reg_lri, 1);
 690 + write_cs_mi_lri(bb, reg_lri, count);
 751 691 
 752 692 fence = xe_oa_submit_bb(stream, XE_OA_SUBMIT_NO_DEPS, bb);
 753 693 if (IS_ERR(fence)) {
 ··· 723 751 static int xe_oa_configure_oar_context(struct xe_oa_stream *stream, bool enable)
 724 752 {
 725 753 const struct xe_oa_format *format = stream->oa_buffer.format;
 726 - struct xe_lrc *lrc = stream->exec_q->lrc[0];
 727 - u32 regs_offset = xe_lrc_regs_offset(lrc) / sizeof(u32);
 728 754 u32 oacontrol = __format_to_oactrl(format, OAR_OACONTROL_COUNTER_SEL_MASK) |
 729 755 (enable ? OAR_OACONTROL_COUNTER_ENABLE : 0);
 730 756 
 731 - struct flex regs_context[] = {
 757 + struct xe_oa_reg reg_lri[] = {
 732 758 {
 733 759 OACTXCONTROL(stream->hwe->mmio_base),
 734 - stream->oa->ctx_oactxctrl_offset[stream->hwe->class] + 1,
 735 760 enable ? OA_COUNTER_RESUME : 0,
 736 761 },
 737 762 {
 763 + OAR_OACONTROL,
 764 + oacontrol,
 765 + },
 766 + {
 738 767 RING_CONTEXT_CONTROL(stream->hwe->mmio_base),
 739 - regs_offset + CTX_CONTEXT_CONTROL,
 740 - _MASKED_BIT_ENABLE(CTX_CTRL_OAC_CONTEXT_ENABLE),
 768 + _MASKED_FIELD(CTX_CTRL_OAC_CONTEXT_ENABLE,
 769 + enable ? CTX_CTRL_OAC_CONTEXT_ENABLE : 0)
 741 770 },
 742 771 };
 743 - struct xe_oa_reg reg_lri = { OAR_OACONTROL, oacontrol };
 744 - int err;
 745 772 
 746 - /* Modify stream hwe context image with regs_context */
 747 - err = xe_oa_modify_ctx_image(stream, stream->exec_q->lrc[0],
 748 - regs_context, ARRAY_SIZE(regs_context));
 749 - if (err)
 750 - return err;
 751 - 
 752 - /* Apply reg_lri using LRI */
 753 - return xe_oa_load_with_lri(stream, &reg_lri);
 773 + return xe_oa_load_with_lri(stream, reg_lri, ARRAY_SIZE(reg_lri));
 754 774 }
 755 775 
 756 776 static int xe_oa_configure_oac_context(struct xe_oa_stream *stream, bool enable)
 757 777 {
 758 778 const struct xe_oa_format *format = stream->oa_buffer.format;
 759 - struct xe_lrc *lrc = stream->exec_q->lrc[0];
 760 - u32 regs_offset = xe_lrc_regs_offset(lrc) / sizeof(u32);
 761 779 u32 oacontrol = __format_to_oactrl(format, OAR_OACONTROL_COUNTER_SEL_MASK) |
 762 780 (enable ? OAR_OACONTROL_COUNTER_ENABLE : 0);
 763 - struct flex regs_context[] = {
 781 + struct xe_oa_reg reg_lri[] = {
 764 782 {
 765 783 OACTXCONTROL(stream->hwe->mmio_base),
 766 - stream->oa->ctx_oactxctrl_offset[stream->hwe->class] + 1,
 767 784 enable ? OA_COUNTER_RESUME : 0,
 768 785 },
 769 786 {
 787 + OAC_OACONTROL,
 788 + oacontrol
 789 + },
 790 + {
 770 791 RING_CONTEXT_CONTROL(stream->hwe->mmio_base),
 771 - regs_offset + CTX_CONTEXT_CONTROL,
 772 - _MASKED_BIT_ENABLE(CTX_CTRL_OAC_CONTEXT_ENABLE) |
 792 + _MASKED_FIELD(CTX_CTRL_OAC_CONTEXT_ENABLE,
 793 + enable ? CTX_CTRL_OAC_CONTEXT_ENABLE : 0) |
 773 794 _MASKED_FIELD(CTX_CTRL_RUN_ALONE, enable ? CTX_CTRL_RUN_ALONE : 0),
 774 795 },
 775 796 };
 776 - struct xe_oa_reg reg_lri = { OAC_OACONTROL, oacontrol };
 777 - int err;
 778 797 
 779 798 /* Set ccs select to enable programming of OAC_OACONTROL */
 780 799 xe_mmio_write32(&stream->gt->mmio, __oa_regs(stream)->oa_ctrl,
 781 800 __oa_ccs_select(stream));
 782 801 
 783 - /* Modify stream hwe context image with regs_context */
 784 - err = xe_oa_modify_ctx_image(stream, stream->exec_q->lrc[0],
 785 - regs_context, ARRAY_SIZE(regs_context));
 786 - if (err)
 787 - return err;
 788 - 
 789 - /* Apply reg_lri using LRI */
 790 - return xe_oa_load_with_lri(stream, &reg_lri);
 802 + return xe_oa_load_with_lri(stream, reg_lri, ARRAY_SIZE(reg_lri));
 791 803 }
 792 804 
 793 805 static int xe_oa_configure_oa_context(struct xe_oa_stream *stream, bool enable)
 ··· 2022 2066 if (XE_IOCTL_DBG(oa->xe, !param.exec_q))
 2023 2067 return -ENOENT;
 2024 2068 
 2025 - if (param.exec_q->width > 1)
 2026 - drm_dbg(&oa->xe->drm, "exec_q->width > 1, programming only exec_q->lrc[0]\n");
 2069 + if (XE_IOCTL_DBG(oa->xe, param.exec_q->width > 1))
 2070 + return -EOPNOTSUPP;
 2027 2071 
 2028 2072 /*
+4 -1
drivers/gpu/drm/xe/xe_ring_ops.c
··· 221 221 222 222 static u32 get_ppgtt_flag(struct xe_sched_job *job) 223 223 { 224 - return job->q->vm ? BIT(8) : 0; 224 + if (job->q->vm && !job->ggtt) 225 + return BIT(8); 226 + 227 + return 0; 225 228 } 226 229 227 230 static int emit_copy_timestamp(struct xe_lrc *lrc, u32 *dw, int i)
+2
drivers/gpu/drm/xe/xe_sched_job_types.h
··· 56 56 u32 migrate_flush_flags; 57 57 /** @ring_ops_flush_tlb: The ring ops need to flush TLB before payload. */ 58 58 bool ring_ops_flush_tlb; 59 + /** @ggtt: mapped in ggtt. */ 60 + bool ggtt; 59 61 /** @ptrs: per instance pointers. */ 60 62 struct xe_job_ptrs ptrs[]; 61 63 };
+6 -4
drivers/hwmon/tmp513.c
··· 182 182 struct regmap *regmap; 183 183 }; 184 184 185 - // Set the shift based on the gain 8=4, 4=3, 2=2, 1=1 185 + // Set the shift based on the gain: 8 -> 1, 4 -> 2, 2 -> 3, 1 -> 4 186 186 static inline u8 tmp51x_get_pga_shift(struct tmp51x_data *data) 187 187 { 188 188 return 5 - ffs(data->pga_gain); ··· 204 204 * 2's complement number shifted by one to four depending 205 205 * on the pga gain setting. 1lsb = 10uV 206 206 */ 207 - *val = sign_extend32(regval, 17 - tmp51x_get_pga_shift(data)); 207 + *val = sign_extend32(regval, 208 + reg == TMP51X_SHUNT_CURRENT_RESULT ? 209 + 16 - tmp51x_get_pga_shift(data) : 15); 208 210 *val = DIV_ROUND_CLOSEST(*val * 10 * MILLI, data->shunt_uohms); 209 211 break; 210 212 case TMP51X_BUS_VOLTAGE_RESULT: ··· 222 220 break; 223 221 case TMP51X_BUS_CURRENT_RESULT: 224 222 // Current = (ShuntVoltage * CalibrationRegister) / 4096 225 - *val = sign_extend32(regval, 16) * data->curr_lsb_ua; 223 + *val = sign_extend32(regval, 15) * (long)data->curr_lsb_ua; 226 224 *val = DIV_ROUND_CLOSEST(*val, MILLI); 227 225 break; 228 226 case TMP51X_LOCAL_TEMP_RESULT: ··· 234 232 case TMP51X_REMOTE_TEMP_LIMIT_2: 235 233 case TMP513_REMOTE_TEMP_LIMIT_3: 236 234 // 1lsb = 0.0625 degrees centigrade 237 - *val = sign_extend32(regval, 16) >> TMP51X_TEMP_SHIFT; 235 + *val = sign_extend32(regval, 15) >> TMP51X_TEMP_SHIFT; 238 236 *val = DIV_ROUND_CLOSEST(*val * 625, 10); 239 237 break; 240 238 case TMP51X_N_FACTOR_AND_HYST_1:
+4 -5
drivers/i2c/busses/i2c-imx.c
··· 335 335 { .compatible = "fsl,imx6sll-i2c", .data = &imx6_i2c_hwdata, }, 336 336 { .compatible = "fsl,imx6sx-i2c", .data = &imx6_i2c_hwdata, }, 337 337 { .compatible = "fsl,imx6ul-i2c", .data = &imx6_i2c_hwdata, }, 338 + { .compatible = "fsl,imx7d-i2c", .data = &imx6_i2c_hwdata, }, 338 339 { .compatible = "fsl,imx7s-i2c", .data = &imx6_i2c_hwdata, }, 339 340 { .compatible = "fsl,imx8mm-i2c", .data = &imx6_i2c_hwdata, }, 340 341 { .compatible = "fsl,imx8mn-i2c", .data = &imx6_i2c_hwdata, }, ··· 533 532 534 533 static int i2c_imx_bus_busy(struct imx_i2c_struct *i2c_imx, int for_busy, bool atomic) 535 534 { 535 + bool multi_master = i2c_imx->multi_master; 536 536 unsigned long orig_jiffies = jiffies; 537 537 unsigned int temp; 538 - 539 - if (!i2c_imx->multi_master) 540 - return 0; 541 538 542 539 while (1) { 543 540 temp = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2SR); 544 541 545 542 /* check for arbitration lost */ 546 - if (temp & I2SR_IAL) { 543 + if (multi_master && (temp & I2SR_IAL)) { 547 544 i2c_imx_clear_irq(i2c_imx, I2SR_IAL); 548 545 return -EAGAIN; 549 546 } 550 547 551 - if (for_busy && (temp & I2SR_IBB)) { 548 + if (for_busy && (!multi_master || (temp & I2SR_IBB))) { 552 549 i2c_imx->stopped = 0; 553 550 break; 554 551 }
+96 -30
drivers/i2c/busses/i2c-microchip-corei2c.c
··· 93 93 * @base: pointer to register struct 94 94 * @dev: device reference 95 95 * @i2c_clk: clock reference for i2c input clock 96 + * @msg_queue: pointer to the messages requiring sending 96 97 * @buf: pointer to msg buffer for easier use 97 98 * @msg_complete: xfer completion object 98 99 * @adapter: core i2c abstraction 99 100 * @msg_err: error code for completed message 100 101 * @bus_clk_rate: current i2c bus clock rate 101 102 * @isr_status: cached copy of local ISR status 103 + * @total_num: total number of messages to be sent/received 104 + * @current_num: index of the current message being sent/received 102 105 * @msg_len: number of bytes transferred in msg 103 106 * @addr: address of the current slave 107 + * @restart_needed: whether or not a repeated start is required after current message 104 108 */ 105 109 struct mchp_corei2c_dev { 106 110 void __iomem *base; 107 111 struct device *dev; 108 112 struct clk *i2c_clk; 113 + struct i2c_msg *msg_queue; 109 114 u8 *buf; 110 115 struct completion msg_complete; 111 116 struct i2c_adapter adapter; 112 117 int msg_err; 118 + int total_num; 119 + int current_num; 113 120 u32 bus_clk_rate; 114 121 u32 isr_status; 115 122 u16 msg_len; 116 123 u8 addr; 124 + bool restart_needed; 117 125 }; 118 126 119 127 static void mchp_corei2c_core_disable(struct mchp_corei2c_dev *idev) ··· 230 222 return 0; 231 223 } 232 224 225 + static void mchp_corei2c_next_msg(struct mchp_corei2c_dev *idev) 226 + { 227 + struct i2c_msg *this_msg; 228 + u8 ctrl; 229 + 230 + if (idev->current_num >= idev->total_num) { 231 + complete(&idev->msg_complete); 232 + return; 233 + } 234 + 235 + /* 236 + * If there's been an error, the isr needs to return control 237 + * to the "main" part of the driver, so as not to keep sending 238 + * messages once it completes and clears the SI bit. 
239 + */ 240 + if (idev->msg_err) { 241 + complete(&idev->msg_complete); 242 + return; 243 + } 244 + 245 + this_msg = idev->msg_queue++; 246 + 247 + if (idev->current_num < (idev->total_num - 1)) { 248 + struct i2c_msg *next_msg = idev->msg_queue; 249 + 250 + idev->restart_needed = next_msg->flags & I2C_M_RD; 251 + } else { 252 + idev->restart_needed = false; 253 + } 254 + 255 + idev->addr = i2c_8bit_addr_from_msg(this_msg); 256 + idev->msg_len = this_msg->len; 257 + idev->buf = this_msg->buf; 258 + 259 + ctrl = readb(idev->base + CORE_I2C_CTRL); 260 + ctrl |= CTRL_STA; 261 + writeb(ctrl, idev->base + CORE_I2C_CTRL); 262 + 263 + idev->current_num++; 264 + } 265 + 233 266 static irqreturn_t mchp_corei2c_handle_isr(struct mchp_corei2c_dev *idev) 234 267 { 235 268 u32 status = idev->isr_status; ··· 287 238 ctrl &= ~CTRL_STA; 288 239 writeb(idev->addr, idev->base + CORE_I2C_DATA); 289 240 writeb(ctrl, idev->base + CORE_I2C_CTRL); 290 - if (idev->msg_len == 0) 291 - finished = true; 292 241 break; 293 242 case STATUS_M_ARB_LOST: 294 243 idev->msg_err = -EAGAIN; ··· 294 247 break; 295 248 case STATUS_M_SLAW_ACK: 296 249 case STATUS_M_TX_DATA_ACK: 297 - if (idev->msg_len > 0) 250 + if (idev->msg_len > 0) { 298 251 mchp_corei2c_fill_tx(idev); 299 - else 300 - last_byte = true; 252 + } else { 253 + if (idev->restart_needed) 254 + finished = true; 255 + else 256 + last_byte = true; 257 + } 301 258 break; 302 259 case STATUS_M_TX_DATA_NACK: 303 260 case STATUS_M_SLAR_NACK: ··· 338 287 mchp_corei2c_stop(idev); 339 288 340 289 if (last_byte || finished) 341 - complete(&idev->msg_complete); 290 + mchp_corei2c_next_msg(idev); 342 291 343 292 return IRQ_HANDLED; 344 293 } ··· 362 311 return ret; 363 312 } 364 313 365 - static int mchp_corei2c_xfer_msg(struct mchp_corei2c_dev *idev, 366 - struct i2c_msg *msg) 314 + static int mchp_corei2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, 315 + int num) 367 316 { 368 - u8 ctrl; 317 + struct mchp_corei2c_dev *idev = i2c_get_adapdata(adap);
318 + struct i2c_msg *this_msg = msgs; 369 319 unsigned long time_left; 370 - 371 - idev->addr = i2c_8bit_addr_from_msg(msg); 372 - idev->msg_len = msg->len; 373 - idev->buf = msg->buf; 374 - idev->msg_err = 0; 375 - 376 - reinit_completion(&idev->msg_complete); 320 + u8 ctrl; 377 321 378 322 mchp_corei2c_core_enable(idev); 379 323 324 + /* 325 + * The isr controls the flow of a transfer, this info needs to be saved 326 + * to a location that it can access the queue information from. 327 + */ 328 + idev->restart_needed = false; 329 + idev->msg_queue = msgs; 330 + idev->total_num = num; 331 + idev->current_num = 0; 332 + 333 + /* 334 + * But the first entry to the isr is triggered by the start in this 335 + * function, so the first message needs to be "dequeued". 336 + */ 337 + idev->addr = i2c_8bit_addr_from_msg(this_msg); 338 + idev->msg_len = this_msg->len; 339 + idev->buf = this_msg->buf; 340 + idev->msg_err = 0; 341 + 342 + if (idev->total_num > 1) { 343 + struct i2c_msg *next_msg = msgs + 1; 344 + 345 + idev->restart_needed = next_msg->flags & I2C_M_RD; 346 + } 347 + 348 + idev->current_num++; 349 + idev->msg_queue++; 350 + 351 + reinit_completion(&idev->msg_complete); 352 + 353 + /* 354 + * Send the first start to pass control to the isr 355 + */ 380 356 ctrl = readb(idev->base + CORE_I2C_CTRL); 381 357 ctrl |= CTRL_STA; 382 358 writeb(ctrl, idev->base + CORE_I2C_CTRL); ··· 413 335 if (!time_left) 414 336 return -ETIMEDOUT; 415 337 416 - return idev->msg_err; 417 - } 418 - 419 - static int mchp_corei2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, 420 - int num) 421 - { 422 - struct mchp_corei2c_dev *idev = i2c_get_adapdata(adap); 423 - int i, ret; 424 - 425 - for (i = 0; i < num; i++) { 426 - ret = mchp_corei2c_xfer_msg(idev, msgs++); 427 - if (ret) 428 - return ret; 429 - } 338 + if (idev->msg_err) 339 + return idev->msg_err; 430 340 431 341 return num; 432 342 }
+16
drivers/infiniband/core/cma.c
··· 690 690 int bound_if_index = dev_addr->bound_dev_if; 691 691 int dev_type = dev_addr->dev_type; 692 692 struct net_device *ndev = NULL; 693 + struct net_device *pdev = NULL; 693 694 694 695 if (!rdma_dev_access_netns(device, id_priv->id.route.addr.dev_addr.net)) 695 696 goto out; ··· 715 714 716 715 rcu_read_lock(); 717 716 ndev = rcu_dereference(sgid_attr->ndev); 717 + if (ndev->ifindex != bound_if_index) { 718 + pdev = dev_get_by_index_rcu(dev_addr->net, bound_if_index); 719 + if (pdev) { 720 + if (is_vlan_dev(pdev)) { 721 + pdev = vlan_dev_real_dev(pdev); 722 + if (ndev->ifindex == pdev->ifindex) 723 + bound_if_index = pdev->ifindex; 724 + } 725 + if (is_vlan_dev(ndev)) { 726 + pdev = vlan_dev_real_dev(ndev); 727 + if (bound_if_index == pdev->ifindex) 728 + bound_if_index = ndev->ifindex; 729 + } 730 + } 731 + } 718 732 if (!net_eq(dev_net(ndev), dev_addr->net) || 719 733 ndev->ifindex != bound_if_index) { 720 734 rdma_put_gid_attr(sgid_attr);
+1 -1
drivers/infiniband/core/nldev.c
··· 2833 2833 enum rdma_nl_notify_event_type type) 2834 2834 { 2835 2835 struct sk_buff *skb; 2836 + int ret = -EMSGSIZE; 2836 2837 struct net *net; 2837 - int ret = 0; 2838 2838 void *nlh; 2839 2839 2840 2840 net = read_pnet(&device->coredev.rdma_net);
+9 -7
drivers/infiniband/core/uverbs_cmd.c
··· 161 161 { 162 162 const void __user *res = iter->cur; 163 163 164 - if (iter->cur + len > iter->end) 164 + if (len > iter->end - iter->cur) 165 165 return (void __force __user *)ERR_PTR(-ENOSPC); 166 166 iter->cur += len; 167 167 return res; ··· 2008 2008 ret = uverbs_request_start(attrs, &iter, &cmd, sizeof(cmd)); 2009 2009 if (ret) 2010 2010 return ret; 2011 - wqes = uverbs_request_next_ptr(&iter, cmd.wqe_size * cmd.wr_count); 2011 + wqes = uverbs_request_next_ptr(&iter, size_mul(cmd.wqe_size, 2012 + cmd.wr_count)); 2012 2013 if (IS_ERR(wqes)) 2013 2014 return PTR_ERR(wqes); 2014 - sgls = uverbs_request_next_ptr( 2015 - &iter, cmd.sge_count * sizeof(struct ib_uverbs_sge)); 2015 + sgls = uverbs_request_next_ptr(&iter, 2016 + size_mul(cmd.sge_count, 2017 + sizeof(struct ib_uverbs_sge))); 2016 2018 if (IS_ERR(sgls)) 2017 2019 return PTR_ERR(sgls); 2018 2020 ret = uverbs_request_finish(&iter); ··· 2200 2198 if (wqe_size < sizeof(struct ib_uverbs_recv_wr)) 2201 2199 return ERR_PTR(-EINVAL); 2202 2200 2203 - wqes = uverbs_request_next_ptr(iter, wqe_size * wr_count); 2201 + wqes = uverbs_request_next_ptr(iter, size_mul(wqe_size, wr_count)); 2204 2202 if (IS_ERR(wqes)) 2205 2203 return ERR_CAST(wqes); 2206 - sgls = uverbs_request_next_ptr( 2207 - iter, sge_count * sizeof(struct ib_uverbs_sge)); 2204 + sgls = uverbs_request_next_ptr(iter, size_mul(sge_count, 2205 + sizeof(struct ib_uverbs_sge))); 2208 2206 if (IS_ERR(sgls)) 2209 2207 return ERR_CAST(sgls); 2210 2208 ret = uverbs_request_finish(iter);
+25 -25
drivers/infiniband/hw/bnxt_re/ib_verbs.c
··· 199 199 200 200 ib_attr->vendor_id = rdev->en_dev->pdev->vendor; 201 201 ib_attr->vendor_part_id = rdev->en_dev->pdev->device; 202 - ib_attr->hw_ver = rdev->en_dev->pdev->subsystem_device; 202 + ib_attr->hw_ver = rdev->en_dev->pdev->revision; 203 203 ib_attr->max_qp = dev_attr->max_qp; 204 204 ib_attr->max_qp_wr = dev_attr->max_qp_wqes; 205 205 ib_attr->device_cap_flags = ··· 967 967 unsigned int flags; 968 968 int rc; 969 969 970 + bnxt_re_debug_rem_qpinfo(rdev, qp); 971 + 970 972 bnxt_qplib_flush_cqn_wq(&qp->qplib_qp); 971 973 972 974 rc = bnxt_qplib_destroy_qp(&rdev->qplib_res, &qp->qplib_qp); 973 - if (rc) { 975 + if (rc) 974 976 ibdev_err(&rdev->ibdev, "Failed to destroy HW QP"); 975 - return rc; 976 - } 977 977 978 978 if (rdma_is_kernel_res(&qp->ib_qp.res)) { 979 979 flags = bnxt_re_lock_cqs(qp); ··· 983 983 984 984 bnxt_qplib_free_qp_res(&rdev->qplib_res, &qp->qplib_qp); 985 985 986 - if (ib_qp->qp_type == IB_QPT_GSI && rdev->gsi_ctx.gsi_sqp) { 987 - rc = bnxt_re_destroy_gsi_sqp(qp); 988 - if (rc) 989 - return rc; 990 - } 986 + if (ib_qp->qp_type == IB_QPT_GSI && rdev->gsi_ctx.gsi_sqp) 987 + bnxt_re_destroy_gsi_sqp(qp); 991 988 992 989 mutex_lock(&rdev->qp_lock); 993 990 list_del(&qp->list); ··· 994 997 atomic_dec(&rdev->stats.res.rc_qp_count); 995 998 else if (qp->qplib_qp.type == CMDQ_CREATE_QP_TYPE_UD) 996 999 atomic_dec(&rdev->stats.res.ud_qp_count); 997 - 998 - bnxt_re_debug_rem_qpinfo(rdev, qp); 999 1000 1000 1001 ib_umem_release(qp->rumem); 1001 1002 ib_umem_release(qp->sumem); ··· 2162 2167 } 2163 2168 } 2164 2169 2165 - if (qp_attr_mask & IB_QP_PATH_MTU) { 2166 - qp->qplib_qp.modify_flags |= 2167 - CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU; 2168 - qp->qplib_qp.path_mtu = __from_ib_mtu(qp_attr->path_mtu); 2169 - qp->qplib_qp.mtu = ib_mtu_enum_to_int(qp_attr->path_mtu); 2170 - } else if (qp_attr->qp_state == IB_QPS_RTR) { 2171 - qp->qplib_qp.modify_flags |= 2172 - CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU; 2173 - qp->qplib_qp.path_mtu = 2174 - __from_ib_mtu(iboe_get_mtu(rdev->netdev->mtu));
2175 - qp->qplib_qp.mtu = 2176 - ib_mtu_enum_to_int(iboe_get_mtu(rdev->netdev->mtu)); 2170 + if (qp_attr->qp_state == IB_QPS_RTR) { 2171 + enum ib_mtu qpmtu; 2172 + 2173 + qpmtu = iboe_get_mtu(rdev->netdev->mtu); 2174 + if (qp_attr_mask & IB_QP_PATH_MTU) { 2175 + if (ib_mtu_enum_to_int(qp_attr->path_mtu) > 2176 + ib_mtu_enum_to_int(qpmtu)) 2177 + return -EINVAL; 2178 + qpmtu = qp_attr->path_mtu; 2179 + } 2180 + 2181 + qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU; 2182 + qp->qplib_qp.path_mtu = __from_ib_mtu(qpmtu); 2183 + qp->qplib_qp.mtu = ib_mtu_enum_to_int(qpmtu); 2177 2184 } 2178 2185 2179 2186 if (qp_attr_mask & IB_QP_TIMEOUT) { ··· 2325 2328 qp_attr->retry_cnt = qplib_qp->retry_cnt; 2326 2329 qp_attr->rnr_retry = qplib_qp->rnr_retry; 2327 2330 qp_attr->min_rnr_timer = qplib_qp->min_rnr_timer; 2331 + qp_attr->port_num = __to_ib_port_num(qplib_qp->port_id); 2328 2332 qp_attr->rq_psn = qplib_qp->rq.psn; 2329 2333 qp_attr->max_rd_atomic = qplib_qp->max_rd_atomic; 2330 2334 qp_attr->sq_psn = qplib_qp->sq.psn; ··· 2822 2824 wr = wr->next; 2823 2825 } 2824 2826 bnxt_qplib_post_send_db(&qp->qplib_qp); 2825 - bnxt_ud_qp_hw_stall_workaround(qp); 2827 + if (!bnxt_qplib_is_chip_gen_p5_p7(qp->rdev->chip_ctx)) 2828 + bnxt_ud_qp_hw_stall_workaround(qp); 2826 2829 spin_unlock_irqrestore(&qp->sq_lock, flags); 2827 2830 return rc; 2828 2831 } ··· 2935 2936 wr = wr->next; 2936 2937 } 2937 2938 bnxt_qplib_post_send_db(&qp->qplib_qp); 2938 - bnxt_ud_qp_hw_stall_workaround(qp); 2939 + if (!bnxt_qplib_is_chip_gen_p5_p7(qp->rdev->chip_ctx)) 2940 + bnxt_ud_qp_hw_stall_workaround(qp); 2939 2941 spin_unlock_irqrestore(&qp->sq_lock, flags); 2940 2942 2941 2943 return rc;
+4
drivers/infiniband/hw/bnxt_re/ib_verbs.h
··· 268 268 int bnxt_re_mmap(struct ib_ucontext *context, struct vm_area_struct *vma); 269 269 void bnxt_re_mmap_free(struct rdma_user_mmap_entry *rdma_entry); 270 270 271 + static inline u32 __to_ib_port_num(u16 port_id) 272 + { 273 + return (u32)port_id + 1; 274 + } 271 275 272 276 unsigned long bnxt_re_lock_cqs(struct bnxt_re_qp *qp); 273 277 void bnxt_re_unlock_cqs(struct bnxt_re_qp *qp, unsigned long flags);
+1 -7
drivers/infiniband/hw/bnxt_re/main.c
··· 1715 1715 1716 1716 static void bnxt_re_dev_stop(struct bnxt_re_dev *rdev) 1717 1717 { 1718 - int mask = IB_QP_STATE; 1719 - struct ib_qp_attr qp_attr; 1720 1718 struct bnxt_re_qp *qp; 1721 1719 1722 - qp_attr.qp_state = IB_QPS_ERR; 1723 1720 mutex_lock(&rdev->qp_lock); 1724 1721 list_for_each_entry(qp, &rdev->qp_list, list) { 1725 1722 /* Modify the state of all QPs except QP1/Shadow QP */ ··· 1724 1727 if (qp->qplib_qp.state != 1725 1728 CMDQ_MODIFY_QP_NEW_STATE_RESET && 1726 1729 qp->qplib_qp.state != 1727 - CMDQ_MODIFY_QP_NEW_STATE_ERR) { 1730 + CMDQ_MODIFY_QP_NEW_STATE_ERR) 1728 1731 bnxt_re_dispatch_event(&rdev->ibdev, &qp->ib_qp, 1729 1732 1, IB_EVENT_QP_FATAL); 1730 - bnxt_re_modify_qp(&qp->ib_qp, &qp_attr, mask, 1731 - NULL); 1732 - } 1733 1733 } 1734 1734 } 1735 1735 mutex_unlock(&rdev->qp_lock);
+50 -29
drivers/infiniband/hw/bnxt_re/qplib_fp.c
··· 659 659 rc = bnxt_qplib_alloc_init_hwq(&srq->hwq, &hwq_attr); 660 660 if (rc) 661 661 return rc; 662 - 663 - srq->swq = kcalloc(srq->hwq.max_elements, sizeof(*srq->swq), 664 - GFP_KERNEL); 665 - if (!srq->swq) { 666 - rc = -ENOMEM; 667 - goto fail; 668 - } 669 662 srq->dbinfo.flags = 0; 670 663 bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req, 671 664 CMDQ_BASE_OPCODE_CREATE_SRQ, ··· 687 694 spin_lock_init(&srq->lock); 688 695 srq->start_idx = 0; 689 696 srq->last_idx = srq->hwq.max_elements - 1; 690 - for (idx = 0; idx < srq->hwq.max_elements; idx++) 691 - srq->swq[idx].next_idx = idx + 1; 692 - srq->swq[srq->last_idx].next_idx = -1; 697 + if (!srq->hwq.is_user) { 698 + srq->swq = kcalloc(srq->hwq.max_elements, sizeof(*srq->swq), 699 + GFP_KERNEL); 700 + if (!srq->swq) { 701 + rc = -ENOMEM; 702 + goto fail; 703 + } 704 + for (idx = 0; idx < srq->hwq.max_elements; idx++) 705 + srq->swq[idx].next_idx = idx + 1; 706 + srq->swq[srq->last_idx].next_idx = -1; 707 + } 693 708 694 709 srq->id = le32_to_cpu(resp.xid); 695 710 srq->dbinfo.hwq = &srq->hwq; ··· 1001 1000 u32 tbl_indx; 1002 1001 u16 nsge; 1003 1002 1004 - if (res->dattr) 1005 - qp->is_host_msn_tbl = _is_host_msn_table(res->dattr->dev_cap_flags2); 1006 - 1003 + qp->is_host_msn_tbl = _is_host_msn_table(res->dattr->dev_cap_flags2); 1007 1004 sq->dbinfo.flags = 0; 1008 1005 bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req, 1009 1006 CMDQ_BASE_OPCODE_CREATE_QP, ··· 1033 1034 : 0; 1034 1035 /* Update msn tbl size */ 1035 1036 if (qp->is_host_msn_tbl && psn_sz) { 1036 - hwq_attr.aux_depth = roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode)); 1037 + if (qp->wqe_mode == BNXT_QPLIB_WQE_MODE_STATIC) 1038 + hwq_attr.aux_depth = 1039 + roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode)); 1040 + else 1041 + hwq_attr.aux_depth = 1042 + roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode)) / 2; 1037 1043 qp->msn_tbl_sz = hwq_attr.aux_depth; 1038 1044 qp->msn = 0; 1039 1045 } ··· 1048 1044 if (rc)
1049 1045 return rc; 1050 1046 1051 - rc = bnxt_qplib_alloc_init_swq(sq); 1052 - if (rc) 1053 - goto fail_sq; 1047 + if (!sq->hwq.is_user) { 1048 + rc = bnxt_qplib_alloc_init_swq(sq); 1049 + if (rc) 1050 + goto fail_sq; 1054 1051 1055 - if (psn_sz) 1056 - bnxt_qplib_init_psn_ptr(qp, psn_sz); 1057 - 1052 + if (psn_sz) 1053 + bnxt_qplib_init_psn_ptr(qp, psn_sz); 1054 + } 1058 1055 req.sq_size = cpu_to_le32(bnxt_qplib_set_sq_size(sq, qp->wqe_mode)); 1059 1056 pbl = &sq->hwq.pbl[PBL_LVL_0]; 1060 1057 req.sq_pbl = cpu_to_le64(pbl->pg_map_arr[0]); ··· 1081 1076 rc = bnxt_qplib_alloc_init_hwq(&rq->hwq, &hwq_attr); 1082 1077 if (rc) 1083 1078 goto sq_swq; 1084 - rc = bnxt_qplib_alloc_init_swq(rq); 1085 - if (rc) 1086 - goto fail_rq; 1079 + if (!rq->hwq.is_user) { 1080 + rc = bnxt_qplib_alloc_init_swq(rq); 1081 + if (rc) 1082 + goto fail_rq; 1083 + } 1087 1084 1088 1085 req.rq_size = cpu_to_le32(rq->max_wqe); 1089 1086 pbl = &rq->hwq.pbl[PBL_LVL_0]; ··· 1181 1174 rq->dbinfo.db = qp->dpi->dbr; 1182 1175 rq->dbinfo.max_slot = bnxt_qplib_set_rq_max_slot(rq->wqe_size); 1183 1176 } 1177 + spin_lock_bh(&rcfw->tbl_lock); 1184 1178 tbl_indx = map_qp_id_to_tbl_indx(qp->id, rcfw); 1185 1179 rcfw->qp_tbl[tbl_indx].qp_id = qp->id; 1186 1180 rcfw->qp_tbl[tbl_indx].qp_handle = (void *)qp; 1181 + spin_unlock_bh(&rcfw->tbl_lock); 1187 1182 1188 1183 return 0; 1189 1184 fail: ··· 1292 1283 } 1293 1284 } 1294 1285 1295 - static void bnxt_set_mandatory_attributes(struct bnxt_qplib_qp *qp, 1286 + static void bnxt_set_mandatory_attributes(struct bnxt_qplib_res *res, 1287 + struct bnxt_qplib_qp *qp, 1296 1288 struct cmdq_modify_qp *req) 1297 1289 { 1298 1290 u32 mandatory_flags = 0; ··· 1306 1296 if (qp->type == CMDQ_MODIFY_QP_QP_TYPE_RC && qp->srq) 1307 1297 req->flags = cpu_to_le16(CMDQ_MODIFY_QP_FLAGS_SRQ_USED); 1308 1298 mandatory_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_PKEY; 1299 + } 1300 + 1301 + if (_is_min_rnr_in_rtr_rts_mandatory(res->dattr->dev_cap_flags2) &&
1302 + (qp->cur_qp_state == CMDQ_MODIFY_QP_NEW_STATE_RTR && 1303 + qp->state == CMDQ_MODIFY_QP_NEW_STATE_RTS)) { 1304 + if (qp->type == CMDQ_MODIFY_QP_QP_TYPE_RC) 1305 + mandatory_flags |= 1306 + CMDQ_MODIFY_QP_MODIFY_MASK_MIN_RNR_TIMER; 1309 1307 } 1310 1308 1311 1309 if (qp->type == CMDQ_MODIFY_QP_QP_TYPE_UD || ··· 1356 1338 /* Set mandatory attributes for INIT -> RTR and RTR -> RTS transition */ 1357 1339 if (_is_optimize_modify_qp_supported(res->dattr->dev_cap_flags2) && 1358 1340 is_optimized_state_transition(qp)) 1359 - bnxt_set_mandatory_attributes(qp, &req); 1341 + bnxt_set_mandatory_attributes(res, qp, &req); 1360 1342 } 1361 1343 bmask = qp->modify_flags; 1362 1344 req.modify_mask = cpu_to_le32(qp->modify_flags); ··· 1539 1521 qp->dest_qpn = le32_to_cpu(sb->dest_qp_id); 1540 1522 memcpy(qp->smac, sb->src_mac, 6); 1541 1523 qp->vlan_id = le16_to_cpu(sb->vlan_pcp_vlan_dei_vlan_id); 1524 + qp->port_id = le16_to_cpu(sb->port_id); 1542 1525 bail: 1543 1526 dma_free_coherent(&rcfw->pdev->dev, sbuf.size, 1544 1527 sbuf.sb, sbuf.dma_addr); ··· 2686 2667 bnxt_qplib_add_flush_qp(qp); 2687 2668 } else { 2688 2669 /* Before we complete, do WA 9060 */ 2689 - if (do_wa9060(qp, cq, cq_cons, sq->swq_last, 2690 - cqe_sq_cons)) { 2691 - *lib_qp = qp; 2692 - goto out; 2670 + if (!bnxt_qplib_is_chip_gen_p5_p7(qp->cctx)) { 2671 + if (do_wa9060(qp, cq, cq_cons, sq->swq_last, 2672 + cqe_sq_cons)) { 2673 + *lib_qp = qp; 2674 + goto out; 2675 + } 2693 2676 } 2694 2677 if (swq->flags & SQ_SEND_FLAGS_SIGNAL_COMP) { 2695 2678 cqe->status = CQ_REQ_STATUS_OK;
+2 -2
drivers/infiniband/hw/bnxt_re/qplib_fp.h
··· 114 114 u32 size; 115 115 }; 116 116 117 - #define BNXT_QPLIB_QP_MAX_SGL 6 118 117 struct bnxt_qplib_swq { 119 118 u64 wr_id; 120 119 int next_idx; ··· 153 154 #define BNXT_QPLIB_SWQE_FLAGS_UC_FENCE BIT(2) 154 155 #define BNXT_QPLIB_SWQE_FLAGS_SOLICIT_EVENT BIT(3) 155 156 #define BNXT_QPLIB_SWQE_FLAGS_INLINE BIT(4) 156 - struct bnxt_qplib_sge sg_list[BNXT_QPLIB_QP_MAX_SGL]; 157 + struct bnxt_qplib_sge sg_list[BNXT_VAR_MAX_SGE]; 157 158 int num_sge; 158 159 /* Max inline data is 96 bytes */ 159 160 u32 inline_len; ··· 298 299 u32 dest_qpn; 299 300 u8 smac[6]; 300 301 u16 vlan_id; 302 + u16 port_id; 301 303 u8 nw_type; 302 304 struct bnxt_qplib_ah ah; 303 305
+3 -2
drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
··· 424 424 425 425 /* Prevent posting if f/w is not in a state to process */ 426 426 if (test_bit(ERR_DEVICE_DETACHED, &rcfw->cmdq.flags)) 427 - return bnxt_qplib_map_rc(opcode); 427 + return -ENXIO; 428 + 428 429 if (test_bit(FIRMWARE_STALL_DETECTED, &cmdq->flags)) 429 430 return -ETIMEDOUT; 430 431 ··· 494 493 495 494 rc = __send_message_basic_sanity(rcfw, msg, opcode); 496 495 if (rc) 497 - return rc; 496 + return rc == -ENXIO ? bnxt_qplib_map_rc(opcode) : rc; 498 497 499 498 rc = __send_message(rcfw, msg, opcode); 500 499 if (rc)
+5
drivers/infiniband/hw/bnxt_re/qplib_res.h
··· 584 584 return dev_cap_ext_flags2 & CREQ_QUERY_FUNC_RESP_SB_OPTIMIZE_MODIFY_QP_SUPPORTED; 585 585 } 586 586 587 + static inline bool _is_min_rnr_in_rtr_rts_mandatory(u16 dev_cap_ext_flags2) 588 + { 589 + return !!(dev_cap_ext_flags2 & CREQ_QUERY_FUNC_RESP_SB_MIN_RNR_RTR_RTS_OPT_SUPPORTED); 590 + } 591 + 587 592 static inline bool _is_cq_coalescing_supported(u16 dev_cap_ext_flags2) 588 593 { 589 594 return dev_cap_ext_flags2 & CREQ_QUERY_FUNC_RESP_SB_CQ_COALESCING_SUPPORTED;
+12 -6
drivers/infiniband/hw/bnxt_re/qplib_sp.c
··· 129 129 attr->max_qp_init_rd_atom = 130 130 sb->max_qp_init_rd_atom > BNXT_QPLIB_MAX_OUT_RD_ATOM ? 131 131 BNXT_QPLIB_MAX_OUT_RD_ATOM : sb->max_qp_init_rd_atom; 132 - attr->max_qp_wqes = le16_to_cpu(sb->max_qp_wr); 133 - /* 134 - * 128 WQEs needs to be reserved for the HW (8916). Prevent 135 - * reporting the max number 136 - */ 137 - attr->max_qp_wqes -= BNXT_QPLIB_RESERVED_QP_WRS + 1; 132 + attr->max_qp_wqes = le16_to_cpu(sb->max_qp_wr) - 1; 133 + if (!bnxt_qplib_is_chip_gen_p5_p7(rcfw->res->cctx)) { 134 + /* 135 + * 128 WQEs needs to be reserved for the HW (8916). Prevent 136 + * reporting the max number on legacy devices 137 + */ 138 + attr->max_qp_wqes -= BNXT_QPLIB_RESERVED_QP_WRS + 1; 139 + } 140 + 141 + /* Adjust for max_qp_wqes for variable wqe */ 142 + if (cctx->modes.wqe_mode == BNXT_QPLIB_WQE_MODE_VARIABLE) 143 + attr->max_qp_wqes = BNXT_VAR_MAX_WQE - 1; 138 144 139 145 attr->max_qp_sges = cctx->modes.wqe_mode == BNXT_QPLIB_WQE_MODE_VARIABLE ? 140 146 min_t(u32, sb->max_sge_var_wqe, BNXT_VAR_MAX_SGE) : 6;
+1
drivers/infiniband/hw/bnxt_re/roce_hsi.h
··· 2215 2215 #define CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_IQM_MSN_TABLE (0x2UL << 4) 2216 2216 #define CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_LAST \ 2217 2217 CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_IQM_MSN_TABLE 2218 + #define CREQ_QUERY_FUNC_RESP_SB_MIN_RNR_RTR_RTS_OPT_SUPPORTED 0x1000UL 2218 2219 __le16 max_xp_qp_size; 2219 2220 __le16 create_qp_batch_size; 2220 2221 __le16 destroy_qp_batch_size;
+29 -14
drivers/infiniband/hw/hns/hns_roce_hem.c
··· 931 931 size_t count; /* max ba numbers */ 932 932 int start; /* start buf offset in this hem */ 933 933 int end; /* end buf offset in this hem */ 934 + bool exist_bt; 934 935 }; 935 936 936 937 /* All HEM items are linked in a tree structure */ ··· 960 959 } 961 960 } 962 961 962 + hem->exist_bt = exist_bt; 963 963 hem->count = count; 964 964 hem->start = start; 965 965 hem->end = end; ··· 971 969 } 972 970 973 971 static void hem_list_free_item(struct hns_roce_dev *hr_dev, 974 - struct hns_roce_hem_item *hem, bool exist_bt) 972 + struct hns_roce_hem_item *hem) 975 973 { 976 - if (exist_bt) 974 + if (hem->exist_bt) 977 975 dma_free_coherent(hr_dev->dev, hem->count * BA_BYTE_LEN, 978 976 hem->addr, hem->dma_addr); 979 977 kfree(hem); 980 978 } 981 979 982 980 static void hem_list_free_all(struct hns_roce_dev *hr_dev, 983 - struct list_head *head, bool exist_bt) 981 + struct list_head *head) 984 982 { 985 983 struct hns_roce_hem_item *hem, *temp_hem; 986 984 987 985 list_for_each_entry_safe(hem, temp_hem, head, list) { 988 986 list_del(&hem->list); 989 - hem_list_free_item(hr_dev, hem, exist_bt); 987 + hem_list_free_item(hr_dev, hem); 990 988 } 991 989 } 992 990 ··· 1086 1084 1087 1085 for (i = 0; i < region_cnt; i++) { 1088 1086 r = (struct hns_roce_buf_region *)&regions[i]; 1087 + /* when r->hopnum = 0, the region should not occupy root_ba. */
1088 + if (!r->hopnum) 1089 + continue; 1090 + 1089 1091 if (r->hopnum > 1) { 1090 1092 step = hem_list_calc_ba_range(r->hopnum, 1, unit); 1091 1093 if (step > 0) ··· 1183 1177 1184 1178 err_exit: 1185 1179 for (level = 1; level < hopnum; level++) 1186 - hem_list_free_all(hr_dev, &temp_list[level], true); 1180 + hem_list_free_all(hr_dev, &temp_list[level]); 1187 1181 1188 1182 return ret; 1189 1183 } ··· 1224 1218 { 1225 1219 struct hns_roce_hem_item *hem; 1226 1220 1221 + /* This is on the has_mtt branch, if r->hopnum 1222 + * is 0, there is no root_ba to reuse for the 1223 + * region's fake hem, so a dma_alloc request is 1224 + * necessary here. 1225 + */ 1227 1226 hem = hem_list_alloc_item(hr_dev, r->offset, r->offset + r->count - 1, 1228 - r->count, false); 1227 + r->count, !r->hopnum); 1229 1228 if (!hem) 1230 1229 return -ENOMEM; 1231 1230 1232 - hem_list_assign_bt(hem, cpu_base, phy_base); 1231 + /* The root_ba can be reused only when r->hopnum > 0. */ 1232 + if (r->hopnum) 1233 + hem_list_assign_bt(hem, cpu_base, phy_base); 1233 1234 list_add(&hem->list, branch_head); 1234 1235 list_add(&hem->sibling, leaf_head); 1235 1236 1236 - return r->count; 1237 + /* If r->hopnum == 0, 0 is returned, 1238 + * so that the root_bt entry is not occupied. 1239 + */
1240 + return r->hopnum ? r->count : 0; 1237 1241 } 1238 1242 1239 1243 static int setup_middle_bt(struct hns_roce_dev *hr_dev, void *cpu_base, ··· 1287 1271 return -ENOMEM; 1288 1272 1289 1273 total = 0; 1290 - for (i = 0; i < region_cnt && total < max_ba_num; i++) { 1274 + for (i = 0; i < region_cnt && total <= max_ba_num; i++) { 1291 1275 r = &regions[i]; 1292 1276 if (!r->count) 1293 1277 continue; ··· 1353 1337 region_cnt); 1354 1338 if (ret) { 1355 1339 for (i = 0; i < region_cnt; i++) 1356 - hem_list_free_all(hr_dev, &head.branch[i], false); 1340 + hem_list_free_all(hr_dev, &head.branch[i]); 1357 1341 1358 - hem_list_free_all(hr_dev, &head.root, true); 1342 + hem_list_free_all(hr_dev, &head.root); 1359 1343 } 1360 1344 1361 1345 return ret; ··· 1418 1402 1419 1403 for (i = 0; i < HNS_ROCE_MAX_BT_REGION; i++) 1420 1404 for (j = 0; j < HNS_ROCE_MAX_BT_LEVEL; j++) 1421 - hem_list_free_all(hr_dev, &hem_list->mid_bt[i][j], 1422 - j != 0); 1405 + hem_list_free_all(hr_dev, &hem_list->mid_bt[i][j]); 1423 1406 1424 - hem_list_free_all(hr_dev, &hem_list->root_bt, true); 1407 + hem_list_free_all(hr_dev, &hem_list->root_bt); 1425 1408 INIT_LIST_HEAD(&hem_list->btm_bt); 1426 1409 hem_list->root_ba = 0; 1427 1410 }
+9 -2
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 468 468 valid_num_sge = calc_wr_sge_num(wr, &msg_len); 469 469 470 470 ret = set_ud_opcode(ud_sq_wqe, wr); 471 - if (WARN_ON(ret)) 471 + if (WARN_ON_ONCE(ret)) 472 472 return ret; 473 473 474 474 ud_sq_wqe->msg_len = cpu_to_le32(msg_len); ··· 572 572 rc_sq_wqe->msg_len = cpu_to_le32(msg_len); 573 573 574 574 ret = set_rc_opcode(hr_dev, rc_sq_wqe, wr); 575 - if (WARN_ON(ret)) 575 + if (WARN_ON_ONCE(ret)) 576 576 return ret; 577 577 578 578 hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SO, ··· 670 670 #define HNS_ROCE_SL_SHIFT 2 671 671 struct hns_roce_v2_rc_send_wqe *rc_sq_wqe = wqe; 672 672 673 + if (unlikely(qp->state == IB_QPS_ERR)) { 674 + flush_cqe(hr_dev, qp); 675 + return; 676 + } 673 677 /* All kinds of DirectWQE have the same header field layout */ 674 678 hr_reg_enable(rc_sq_wqe, RC_SEND_WQE_FLAG); 675 679 hr_reg_write(rc_sq_wqe, RC_SEND_WQE_DB_SL_L, qp->sl); ··· 5622 5618 struct hns_roce_qp *hr_qp) 5623 5619 { 5624 5620 struct hns_roce_dip *hr_dip = hr_qp->dip; 5621 + 5622 + if (!hr_dip) 5623 + return; 5625 5624 5626 5625 xa_lock(&hr_dev->qp_table.dip_xa); 5627 5626
-5
drivers/infiniband/hw/hns/hns_roce_mr.c
··· 814 814 for (i = 0, mapped_cnt = 0; i < mtr->hem_cfg.region_count && 815 815 mapped_cnt < page_cnt; i++) { 816 816 r = &mtr->hem_cfg.region[i]; 817 - /* if hopnum is 0, no need to map pages in this region */ 818 - if (!r->hopnum) { 819 - mapped_cnt += r->count; 820 - continue; 821 - } 822 817 823 818 if (r->offset + r->count > page_cnt) { 824 819 ret = -EINVAL;
+5 -3
drivers/infiniband/hw/mlx5/main.c
··· 2839 2839 int err; 2840 2840 2841 2841 *num_plane = 0; 2842 - if (!MLX5_CAP_GEN(mdev, ib_virt)) 2842 + if (!MLX5_CAP_GEN(mdev, ib_virt) || !MLX5_CAP_GEN_2(mdev, multiplane)) 2843 2843 return 0; 2844 2844 2845 2845 err = mlx5_query_hca_vport_context(mdev, 0, 1, 0, &vport_ctx); ··· 3639 3639 list_for_each_entry(mpi, &mlx5_ib_unaffiliated_port_list, 3640 3640 list) { 3641 3641 if (dev->sys_image_guid == mpi->sys_image_guid && 3642 - (mlx5_core_native_port_num(mpi->mdev) - 1) == i) { 3642 + (mlx5_core_native_port_num(mpi->mdev) - 1) == i && 3643 + mlx5_core_same_coredev_type(dev->mdev, mpi->mdev)) { 3643 3644 bound = mlx5_ib_bind_slave_port(dev, mpi); 3644 3645 } 3645 3646 ··· 4786 4785 4787 4786 mutex_lock(&mlx5_ib_multiport_mutex); 4788 4787 list_for_each_entry(dev, &mlx5_ib_dev_list, ib_dev_list) { 4789 - if (dev->sys_image_guid == mpi->sys_image_guid) 4788 + if (dev->sys_image_guid == mpi->sys_image_guid && 4789 + mlx5_core_same_coredev_type(dev->mdev, mpi->mdev)) 4790 4790 bound = mlx5_ib_bind_slave_port(dev, mpi); 4791 4791 4792 4792 if (bound) {
+19 -4
drivers/infiniband/sw/rxe/rxe.c
··· 40 40 /* initialize rxe device parameters */ 41 41 static void rxe_init_device_param(struct rxe_dev *rxe) 42 42 { 43 + struct net_device *ndev; 44 + 43 45 rxe->max_inline_data = RXE_MAX_INLINE_DATA; 44 46 45 47 rxe->attr.vendor_id = RXE_VENDOR_ID; ··· 73 71 rxe->attr.max_fast_reg_page_list_len = RXE_MAX_FMR_PAGE_LIST_LEN; 74 72 rxe->attr.max_pkeys = RXE_MAX_PKEYS; 75 73 rxe->attr.local_ca_ack_delay = RXE_LOCAL_CA_ACK_DELAY; 74 + 75 + ndev = rxe_ib_device_get_netdev(&rxe->ib_dev); 76 + if (!ndev) 77 + return; 78 + 76 79 addrconf_addr_eui48((unsigned char *)&rxe->attr.sys_image_guid, 77 - rxe->ndev->dev_addr); 80 + ndev->dev_addr); 81 + 82 + dev_put(ndev); 78 83 79 84 rxe->max_ucontext = RXE_MAX_UCONTEXT; 80 85 } ··· 118 109 static void rxe_init_ports(struct rxe_dev *rxe) 119 110 { 120 111 struct rxe_port *port = &rxe->port; 112 + struct net_device *ndev; 121 113 122 114 rxe_init_port_param(port); 115 + ndev = rxe_ib_device_get_netdev(&rxe->ib_dev); 116 + if (!ndev) 117 + return; 123 118 addrconf_addr_eui48((unsigned char *)&port->port_guid, 124 - rxe->ndev->dev_addr); 119 + ndev->dev_addr); 120 + dev_put(ndev); 125 121 spin_lock_init(&port->port_lock); 126 122 } 127 123 ··· 181 167 /* called by ifc layer to create new rxe device. 182 168 * The caller should allocate memory for rxe by calling ib_alloc_device. 183 169 */ 184 - int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name) 170 + int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name, 171 + struct net_device *ndev) 185 172 { 186 173 rxe_init(rxe); 187 174 rxe_set_mtu(rxe, mtu); 188 175 189 - return rxe_register_device(rxe, ibdev_name); 176 + return rxe_register_device(rxe, ibdev_name, ndev); 190 177 } 191 178 192 179 static int rxe_newlink(const char *ibdev_name, struct net_device *ndev)
+2 -1
drivers/infiniband/sw/rxe/rxe.h
··· 139 139 140 140 void rxe_set_mtu(struct rxe_dev *rxe, unsigned int dev_mtu); 141 141 142 - int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name); 142 + int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name, 143 + struct net_device *ndev); 143 144 144 145 void rxe_rcv(struct sk_buff *skb); 145 146
+20 -2
drivers/infiniband/sw/rxe/rxe_mcast.c
··· 31 31 static int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid) 32 32 { 33 33 unsigned char ll_addr[ETH_ALEN]; 34 + struct net_device *ndev; 35 + int ret; 36 + 37 + ndev = rxe_ib_device_get_netdev(&rxe->ib_dev); 38 + if (!ndev) 39 + return -ENODEV; 34 40 35 41 ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr); 36 42 37 - return dev_mc_add(rxe->ndev, ll_addr); 43 + ret = dev_mc_add(ndev, ll_addr); 44 + dev_put(ndev); 45 + 46 + return ret; 38 47 } 39 48 40 49 /** ··· 56 47 static int rxe_mcast_del(struct rxe_dev *rxe, union ib_gid *mgid) 57 48 { 58 49 unsigned char ll_addr[ETH_ALEN]; 50 + struct net_device *ndev; 51 + int ret; 52 + 53 + ndev = rxe_ib_device_get_netdev(&rxe->ib_dev); 54 + if (!ndev) 55 + return -ENODEV; 59 56 60 57 ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr); 61 58 62 - return dev_mc_del(rxe->ndev, ll_addr); 59 + ret = dev_mc_del(ndev, ll_addr); 60 + dev_put(ndev); 61 + 62 + return ret; 63 63 } 64 64 65 65 /**
+20 -4
drivers/infiniband/sw/rxe/rxe_net.c
··· 524 524 */ 525 525 const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num) 526 526 { 527 - return rxe->ndev->name; 527 + struct net_device *ndev; 528 + char *ndev_name; 529 + 530 + ndev = rxe_ib_device_get_netdev(&rxe->ib_dev); 531 + if (!ndev) 532 + return NULL; 533 + ndev_name = ndev->name; 534 + dev_put(ndev); 535 + 536 + return ndev_name; 528 537 } 529 538 530 539 int rxe_net_add(const char *ibdev_name, struct net_device *ndev) ··· 545 536 if (!rxe) 546 537 return -ENOMEM; 547 538 548 - rxe->ndev = ndev; 549 539 ib_mark_name_assigned_by_user(&rxe->ib_dev); 550 540 551 - err = rxe_add(rxe, ndev->mtu, ibdev_name); 541 + err = rxe_add(rxe, ndev->mtu, ibdev_name, ndev); 552 542 if (err) { 553 543 ib_dealloc_device(&rxe->ib_dev); 554 544 return err; ··· 595 587 596 588 void rxe_set_port_state(struct rxe_dev *rxe) 597 589 { 598 - if (netif_running(rxe->ndev) && netif_carrier_ok(rxe->ndev)) 590 + struct net_device *ndev; 591 + 592 + ndev = rxe_ib_device_get_netdev(&rxe->ib_dev); 593 + if (!ndev) 594 + return; 595 + 596 + if (netif_running(ndev) && netif_carrier_ok(ndev)) 599 597 rxe_port_up(rxe); 600 598 else 601 599 rxe_port_down(rxe); 600 + 601 + dev_put(ndev); 602 602 } 603 603 604 604 static int rxe_notify(struct notifier_block *not_blk,
+21 -5
drivers/infiniband/sw/rxe/rxe_verbs.c
··· 41 41 u32 port_num, struct ib_port_attr *attr) 42 42 { 43 43 struct rxe_dev *rxe = to_rdev(ibdev); 44 + struct net_device *ndev; 44 45 int err, ret; 45 46 46 47 if (port_num != 1) { 47 48 err = -EINVAL; 48 49 rxe_dbg_dev(rxe, "bad port_num = %d\n", port_num); 50 + goto err_out; 51 + } 52 + 53 + ndev = rxe_ib_device_get_netdev(ibdev); 54 + if (!ndev) { 55 + err = -ENODEV; 49 56 goto err_out; 50 57 } 51 58 ··· 64 57 65 58 if (attr->state == IB_PORT_ACTIVE) 66 59 attr->phys_state = IB_PORT_PHYS_STATE_LINK_UP; 67 - else if (dev_get_flags(rxe->ndev) & IFF_UP) 60 + else if (dev_get_flags(ndev) & IFF_UP) 68 61 attr->phys_state = IB_PORT_PHYS_STATE_POLLING; 69 62 else 70 63 attr->phys_state = IB_PORT_PHYS_STATE_DISABLED; 71 64 72 65 mutex_unlock(&rxe->usdev_lock); 73 66 67 + dev_put(ndev); 74 68 return ret; 75 69 76 70 err_out: ··· 1433 1425 static int rxe_enable_driver(struct ib_device *ib_dev) 1434 1426 { 1435 1427 struct rxe_dev *rxe = container_of(ib_dev, struct rxe_dev, ib_dev); 1428 + struct net_device *ndev; 1429 + 1430 + ndev = rxe_ib_device_get_netdev(ib_dev); 1431 + if (!ndev) 1432 + return -ENODEV; 1436 1433 1437 1434 rxe_set_port_state(rxe); 1438 - dev_info(&rxe->ib_dev.dev, "added %s\n", netdev_name(rxe->ndev)); 1435 + dev_info(&rxe->ib_dev.dev, "added %s\n", netdev_name(ndev)); 1436 + 1437 + dev_put(ndev); 1439 1438 return 0; 1440 1439 } 1441 1440 ··· 1510 1495 INIT_RDMA_OBJ_SIZE(ib_mw, rxe_mw, ibmw), 1511 1496 }; 1512 1497 1513 - int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name) 1498 + int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name, 1499 + struct net_device *ndev) 1514 1500 { 1515 1501 int err; 1516 1502 struct ib_device *dev = &rxe->ib_dev; ··· 1523 1507 dev->num_comp_vectors = num_possible_cpus(); 1524 1508 dev->local_dma_lkey = 0; 1525 1509 addrconf_addr_eui48((unsigned char *)&dev->node_guid, 1526 - rxe->ndev->dev_addr); 1510 + ndev->dev_addr); 1527 1511 1528 1512 dev->uverbs_cmd_mask |= BIT_ULL(IB_USER_VERBS_CMD_POST_SEND) | 1529 1513 BIT_ULL(IB_USER_VERBS_CMD_REQ_NOTIFY_CQ); 1530 1514 1531 1515 ib_set_device_ops(dev, &rxe_dev_ops); 1532 - err = ib_device_set_netdev(&rxe->ib_dev, rxe->ndev, 1); 1516 + err = ib_device_set_netdev(&rxe->ib_dev, ndev, 1); 1533 1517 if (err) 1534 1518 return err; 1535 1519
+8 -3
drivers/infiniband/sw/rxe/rxe_verbs.h
··· 370 370 u32 qp_gsi_index; 371 371 }; 372 372 373 + #define RXE_PORT 1 373 374 struct rxe_dev { 374 375 struct ib_device ib_dev; 375 376 struct ib_device_attr attr; 376 377 int max_ucontext; 377 378 int max_inline_data; 378 379 struct mutex usdev_lock; 379 - 380 - struct net_device *ndev; 381 380 382 381 struct rxe_pool uc_pool; 383 382 struct rxe_pool pd_pool; ··· 404 405 struct rxe_port port; 405 406 struct crypto_shash *tfm; 406 407 }; 408 + 409 + static inline struct net_device *rxe_ib_device_get_netdev(struct ib_device *dev) 410 + { 411 + return ib_device_get_netdev(dev, RXE_PORT); 412 + } 407 413 408 414 static inline void rxe_counter_inc(struct rxe_dev *rxe, enum rxe_counters index) 409 415 { ··· 475 471 return to_rpd(mw->ibmw.pd); 476 472 } 477 473 478 - int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name); 474 + int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name, 475 + struct net_device *ndev); 479 476 480 477 #endif /* RXE_VERBS_H */
+3 -4
drivers/infiniband/sw/siw/siw.h
··· 46 46 */ 47 47 #define SIW_IRQ_MAXBURST_SQ_ACTIVE 4 48 48 49 + /* There is always only a port 1 per siw device */ 50 + #define SIW_PORT 1 51 + 49 52 struct siw_dev_cap { 50 53 int max_qp; 51 54 int max_qp_wr; ··· 72 69 73 70 struct siw_device { 74 71 struct ib_device base_dev; 75 - struct net_device *netdev; 76 72 struct siw_dev_cap attrs; 77 73 78 74 u32 vendor_part_id; 79 75 int numa_node; 80 76 char raw_gid[ETH_ALEN]; 81 - 82 - /* physical port state (only one port per device) */ 83 - enum ib_port_state state; 84 77 85 78 spinlock_t lock; 86 79
+21 -6
drivers/infiniband/sw/siw/siw_cm.c
··· 1759 1759 { 1760 1760 struct socket *s; 1761 1761 struct siw_cep *cep = NULL; 1762 + struct net_device *ndev = NULL; 1762 1763 struct siw_device *sdev = to_siw_dev(id->device); 1763 1764 int addr_family = id->local_addr.ss_family; 1764 1765 int rv = 0; ··· 1780 1779 struct sockaddr_in *laddr = &to_sockaddr_in(id->local_addr); 1781 1780 1782 1781 /* For wildcard addr, limit binding to current device only */ 1783 - if (ipv4_is_zeronet(laddr->sin_addr.s_addr)) 1784 - s->sk->sk_bound_dev_if = sdev->netdev->ifindex; 1785 - 1782 + if (ipv4_is_zeronet(laddr->sin_addr.s_addr)) { 1783 + ndev = ib_device_get_netdev(id->device, SIW_PORT); 1784 + if (ndev) { 1785 + s->sk->sk_bound_dev_if = ndev->ifindex; 1786 + } else { 1787 + rv = -ENODEV; 1788 + goto error; 1789 + } 1790 + } 1786 1791 rv = s->ops->bind(s, (struct sockaddr *)laddr, 1787 1792 sizeof(struct sockaddr_in)); 1788 1793 } else { ··· 1804 1797 } 1805 1798 1806 1799 /* For wildcard addr, limit binding to current device only */ 1807 - if (ipv6_addr_any(&laddr->sin6_addr)) 1808 - s->sk->sk_bound_dev_if = sdev->netdev->ifindex; 1809 - 1800 + if (ipv6_addr_any(&laddr->sin6_addr)) { 1801 + ndev = ib_device_get_netdev(id->device, SIW_PORT); 1802 + if (ndev) { 1803 + s->sk->sk_bound_dev_if = ndev->ifindex; 1804 + } else { 1805 + rv = -ENODEV; 1806 + goto error; 1807 + } 1808 + } 1810 1809 rv = s->ops->bind(s, (struct sockaddr *)laddr, 1811 1810 sizeof(struct sockaddr_in6)); 1812 1811 } ··· 1873 1860 } 1874 1861 list_add_tail(&cep->listenq, (struct list_head *)id->provider_data); 1875 1862 cep->state = SIW_EPSTATE_LISTENING; 1863 + dev_put(ndev); 1876 1864 1877 1865 siw_dbg(id->device, "Listen at laddr %pISp\n", &id->local_addr); 1878 1866 ··· 1893 1879 siw_cep_set_free_and_put(cep); 1894 1880 } 1895 1881 sock_release(s); 1882 + dev_put(ndev); 1896 1883 1897 1884 return rv; 1898 1885 }
+1 -14
drivers/infiniband/sw/siw/siw_main.c
··· 287 287 return NULL; 288 288 289 289 base_dev = &sdev->base_dev; 290 - sdev->netdev = netdev; 291 290 292 291 if (netdev->addr_len) { 293 292 memcpy(sdev->raw_gid, netdev->dev_addr, ··· 380 381 381 382 switch (event) { 382 383 case NETDEV_UP: 383 - sdev->state = IB_PORT_ACTIVE; 384 384 siw_port_event(sdev, 1, IB_EVENT_PORT_ACTIVE); 385 385 break; 386 386 387 387 case NETDEV_DOWN: 388 - sdev->state = IB_PORT_DOWN; 389 388 siw_port_event(sdev, 1, IB_EVENT_PORT_ERR); 390 389 break; 391 390 ··· 404 407 siw_port_event(sdev, 1, IB_EVENT_LID_CHANGE); 405 408 break; 406 409 /* 407 - * Todo: Below netdev events are currently not handled. 410 + * All other events are not handled 408 411 */ 409 - case NETDEV_CHANGEMTU: 410 - case NETDEV_CHANGE: 411 - break; 412 - 413 412 default: 414 413 break; 415 414 } ··· 435 442 sdev = siw_device_create(netdev); 436 443 if (sdev) { 437 444 dev_dbg(&netdev->dev, "siw: new device\n"); 438 - 439 - if (netif_running(netdev) && netif_carrier_ok(netdev)) 440 - sdev->state = IB_PORT_ACTIVE; 441 - else 442 - sdev->state = IB_PORT_DOWN; 443 - 444 445 ib_mark_name_assigned_by_user(&sdev->base_dev); 445 446 rv = siw_device_register(sdev, basedev_name); 446 447 if (rv)
+24 -11
drivers/infiniband/sw/siw/siw_verbs.c
··· 171 171 int siw_query_port(struct ib_device *base_dev, u32 port, 172 172 struct ib_port_attr *attr) 173 173 { 174 - struct siw_device *sdev = to_siw_dev(base_dev); 174 + struct net_device *ndev; 175 175 int rv; 176 176 177 177 memset(attr, 0, sizeof(*attr)); 178 178 179 179 rv = ib_get_eth_speed(base_dev, port, &attr->active_speed, 180 180 &attr->active_width); 181 + if (rv) 182 + return rv; 183 + 184 + ndev = ib_device_get_netdev(base_dev, SIW_PORT); 185 + if (!ndev) 186 + return -ENODEV; 187 + 181 188 attr->gid_tbl_len = 1; 182 189 attr->max_msg_sz = -1; 183 - attr->max_mtu = ib_mtu_int_to_enum(sdev->netdev->mtu); 190 + attr->max_mtu = ib_mtu_int_to_enum(ndev->max_mtu); 184 - attr->active_mtu = ib_mtu_int_to_enum(sdev->netdev->mtu); 191 + attr->active_mtu = ib_mtu_int_to_enum(READ_ONCE(ndev->mtu)); 185 - attr->phys_state = sdev->state == IB_PORT_ACTIVE ? 192 + attr->phys_state = (netif_running(ndev) && netif_carrier_ok(ndev)) ? 186 193 IB_PORT_PHYS_STATE_LINK_UP : IB_PORT_PHYS_STATE_DISABLED; 194 + attr->state = attr->phys_state == IB_PORT_PHYS_STATE_LINK_UP ? 195 + IB_PORT_ACTIVE : IB_PORT_DOWN; 187 196 attr->port_cap_flags = IB_PORT_CM_SUP | IB_PORT_DEVICE_MGMT_SUP; 188 - attr->state = sdev->state; 189 197 /* 190 198 * All zero 191 199 * ··· 207 199 * attr->subnet_timeout = 0; 208 200 * attr->init_type_repy = 0; 209 201 */ 202 + dev_put(ndev); 210 203 return rv; 211 204 } 212 205 ··· 514 505 int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr) 515 506 { 516 507 struct siw_qp *qp; 517 - struct siw_device *sdev; 508 + struct net_device *ndev; 518 509 519 - if (base_qp && qp_attr && qp_init_attr) { 510 + if (base_qp && qp_attr && qp_init_attr) 520 511 qp = to_siw_qp(base_qp); 521 - sdev = to_siw_dev(base_qp->device); 522 - } else { 512 + else 523 513 return -EINVAL; 524 - } 514 + 515 + ndev = ib_device_get_netdev(base_qp->device, SIW_PORT); 516 + if (!ndev) 517 + return -ENODEV; 518 + 525 519 qp_attr->qp_state = siw_qp_state_to_ib_qp_state[qp->attrs.state]; 526 520 qp_attr->cap.max_inline_data = SIW_MAX_INLINE; 527 521 qp_attr->cap.max_send_wr = qp->attrs.sq_size; 528 522 qp_attr->cap.max_send_sge = qp->attrs.sq_max_sges; 529 523 qp_attr->cap.max_recv_wr = qp->attrs.rq_size; 530 524 qp_attr->cap.max_recv_sge = qp->attrs.rq_max_sges; 531 - qp_attr->path_mtu = ib_mtu_int_to_enum(sdev->netdev->mtu); 525 + qp_attr->path_mtu = ib_mtu_int_to_enum(READ_ONCE(ndev->mtu)); 532 526 qp_attr->max_rd_atomic = qp->attrs.irq_size; 533 527 qp_attr->max_dest_rd_atomic = qp->attrs.orq_size; 534 528 ··· 546 534 547 535 qp_init_attr->cap = qp_attr->cap; 548 536 537 + dev_put(ndev); 549 538 return 0; 550 539 } 551 540
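The rxe and siw hunks above all apply the same discipline: look the netdev up per call with `ib_device_get_netdev()`, bail out with `-ENODEV` if it is gone, and `dev_put()` the reference on every exit path instead of caching a `struct net_device *` in the device struct. A minimal userspace sketch of that get/read/put shape, using stand-in types and helper names (nothing here is the kernel API):

```c
#include <stddef.h>

/* Toy model of a refcounted netdev; stands in for struct net_device. */
struct fake_netdev {
	int refcnt;
	unsigned int mtu;
};

/* Stand-in for ib_device_get_netdev(): take a reference, or fail. */
static struct fake_netdev *netdev_get(struct fake_netdev *nd)
{
	if (!nd)
		return NULL;	/* caller must handle the lookup failing */
	nd->refcnt++;
	return nd;
}

/* Stand-in for dev_put(): drop the reference taken above. */
static void netdev_put(struct fake_netdev *nd)
{
	nd->refcnt--;
}

/* Mirrors the shape of the patched siw_query_port(): acquire, read the
 * field we need, release before returning on every path. */
static int query_mtu(struct fake_netdev *dev, unsigned int *mtu)
{
	struct fake_netdev *nd = netdev_get(dev);

	if (!nd)
		return -1;	/* stands in for -ENODEV */
	*mtu = nd->mtu;
	netdev_put(nd);
	return 0;
}
```

The invariant the sketch encodes is that the refcount is unchanged after any call, whether the lookup succeeded or not.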
+1 -1
drivers/infiniband/ulp/rtrs/rtrs-srv.c
··· 349 349 struct rtrs_srv_mr *srv_mr; 350 350 bool need_inval = false; 351 351 enum ib_send_flags flags; 352 + struct ib_sge list; 352 353 u32 imm; 353 354 int err; 354 355 ··· 402 401 imm = rtrs_to_io_rsp_imm(id->msg_id, errno, need_inval); 403 402 imm_wr.wr.next = NULL; 404 403 if (always_invalidate) { 405 - struct ib_sge list; 406 404 struct rtrs_msg_rkey_rsp *msg; 407 405 408 406 srv_mr = &srv_path->mrs[id->msg_id];
+1
drivers/macintosh/Kconfig
··· 120 120 config PMAC_BACKLIGHT 121 121 bool "Backlight control for LCD screens" 122 122 depends on PPC_PMAC && ADB_PMU && FB = y && (BROKEN || !PPC64) 123 + depends on BACKLIGHT_CLASS_DEVICE=y 123 124 select FB_BACKLIGHT 124 125 help 125 126 Say Y here to enable Macintosh specific extensions of the generic
+1 -1
drivers/media/dvb-frontends/dib3000mb.c
··· 51 51 static int dib3000_read_reg(struct dib3000_state *state, u16 reg) 52 52 { 53 53 u8 wb[] = { ((reg >> 8) | 0x80) & 0xff, reg & 0xff }; 54 - u8 rb[2]; 54 + u8 rb[2] = {}; 55 55 struct i2c_msg msg[] = { 56 56 { .addr = state->config.demod_address, .flags = 0, .buf = wb, .len = 2 }, 57 57 { .addr = state->config.demod_address, .flags = I2C_M_RD, .buf = rb, .len = 2 },
+2 -1
drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp9_req_lat_if.c
··· 1188 1188 return ret; 1189 1189 } 1190 1190 1191 - static 1191 + /* clang stack usage explodes if this is inlined */ 1192 + static noinline_for_stack 1192 1193 void vdec_vp9_slice_map_counts_eob_coef(unsigned int i, unsigned int j, unsigned int k, 1193 1194 struct vdec_vp9_slice_frame_counts *counts, 1194 1195 struct v4l2_vp9_frame_symbol_counts *counts_helper)
+8 -8
drivers/mmc/host/sdhci-msm.c
··· 1867 1867 struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host); 1868 1868 union cqhci_crypto_cap_entry cap; 1869 1869 1870 + if (!(cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE)) 1871 + return qcom_ice_evict_key(msm_host->ice, slot); 1872 + 1870 1873 /* Only AES-256-XTS has been tested so far. */ 1871 1874 cap = cq_host->crypto_cap_array[cfg->crypto_cap_idx]; 1872 1875 if (cap.algorithm_id != CQHCI_CRYPTO_ALG_AES_XTS || 1873 1876 cap.key_size != CQHCI_CRYPTO_KEY_SIZE_256) 1874 1877 return -EINVAL; 1875 1878 1876 - if (cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE) 1877 - return qcom_ice_program_key(msm_host->ice, 1878 - QCOM_ICE_CRYPTO_ALG_AES_XTS, 1879 - QCOM_ICE_CRYPTO_KEY_SIZE_256, 1880 - cfg->crypto_key, 1881 - cfg->data_unit_size, slot); 1882 - else 1883 - return qcom_ice_evict_key(msm_host->ice, slot); 1879 + return qcom_ice_program_key(msm_host->ice, 1880 + QCOM_ICE_CRYPTO_ALG_AES_XTS, 1881 + QCOM_ICE_CRYPTO_KEY_SIZE_256, 1882 + cfg->crypto_key, 1883 + cfg->data_unit_size, slot); 1884 1884 } 1885 1885 1886 1886 #else /* CONFIG_MMC_CRYPTO */
+9 -2
drivers/mtd/nand/raw/arasan-nand-controller.c
··· 1409 1409 * case, the "not" chosen CS is assigned to nfc->spare_cs and selected 1410 1410 * whenever a GPIO CS must be asserted. 1411 1411 */ 1412 - if (nfc->cs_array && nfc->ncs > 2) { 1413 - if (!nfc->cs_array[0] && !nfc->cs_array[1]) { 1412 + if (nfc->cs_array) { 1413 + if (nfc->ncs > 2 && !nfc->cs_array[0] && !nfc->cs_array[1]) { 1414 1414 dev_err(nfc->dev, 1415 1415 "Assign a single native CS when using GPIOs\n"); 1416 1416 return -EINVAL; ··· 1478 1478 1479 1479 static void anfc_remove(struct platform_device *pdev) 1480 1480 { 1481 + int i; 1481 1482 struct arasan_nfc *nfc = platform_get_drvdata(pdev); 1483 + 1484 + for (i = 0; i < nfc->ncs; i++) { 1485 + if (nfc->cs_array[i]) { 1486 + gpiod_put(nfc->cs_array[i]); 1487 + } 1488 + } 1482 1489 1483 1490 anfc_chips_cleanup(nfc); 1484 1491 }
+1 -3
drivers/mtd/nand/raw/atmel/pmecc.c
··· 380 380 user->delta = user->dmu + req->ecc.strength + 1; 381 381 382 382 gf_tables = atmel_pmecc_get_gf_tables(req); 383 - if (IS_ERR(gf_tables)) { 384 - kfree(user); 383 + if (IS_ERR(gf_tables)) 385 384 return ERR_CAST(gf_tables); 386 - } 387 385 388 386 user->gf_tables = gf_tables; 389 387
+1 -1
drivers/mtd/nand/raw/diskonchip.c
··· 1098 1098 (i == 0) && (ip->firstUnit > 0)) { 1099 1099 parts[0].name = " DiskOnChip IPL / Media Header partition"; 1100 1100 parts[0].offset = 0; 1101 - parts[0].size = mtd->erasesize * ip->firstUnit; 1101 + parts[0].size = (uint64_t)mtd->erasesize * ip->firstUnit; 1102 1102 numparts = 1; 1103 1103 } 1104 1104
+16
drivers/mtd/nand/raw/omap2.c
··· 254 254 255 255 /** 256 256 * omap_nand_data_in_pref - NAND data in using prefetch engine 257 + * @chip: NAND chip 258 + * @buf: output buffer where NAND data is placed into 259 + * @len: length of transfer 260 + * @force_8bit: force 8-bit transfers 257 261 */ 258 262 static void omap_nand_data_in_pref(struct nand_chip *chip, void *buf, 259 263 unsigned int len, bool force_8bit) ··· 301 297 302 298 /** 303 299 * omap_nand_data_out_pref - NAND data out using Write Posting engine 300 + * @chip: NAND chip 301 + * @buf: input buffer that is sent to NAND 302 + * @len: length of transfer 303 + * @force_8bit: force 8-bit transfers 304 304 */ 305 305 static void omap_nand_data_out_pref(struct nand_chip *chip, 306 306 const void *buf, unsigned int len, ··· 448 440 449 441 /** 450 442 * omap_nand_data_in_dma_pref - NAND data in using DMA and Prefetch 443 + * @chip: NAND chip 444 + * @buf: output buffer where NAND data is placed into 445 + * @len: length of transfer 446 + * @force_8bit: force 8-bit transfers 451 447 */ 452 448 static void omap_nand_data_in_dma_pref(struct nand_chip *chip, void *buf, 453 449 unsigned int len, bool force_8bit) ··· 472 460 473 461 /** 474 462 * omap_nand_data_out_dma_pref - NAND data out using DMA and write posting 463 + * @chip: NAND chip 464 + * @buf: input buffer that is sent to NAND 465 + * @len: length of transfer 466 + * @force_8bit: force 8-bit transfers 475 467 */ 476 468 static void omap_nand_data_out_dma_pref(struct nand_chip *chip, 477 469 const void *buf, unsigned int len,
+36 -11
drivers/net/dsa/microchip/ksz9477.c
··· 2 2 /* 3 3 * Microchip KSZ9477 switch driver main logic 4 4 * 5 - * Copyright (C) 2017-2019 Microchip Technology Inc. 5 + * Copyright (C) 2017-2024 Microchip Technology Inc. 6 6 */ 7 7 8 8 #include <linux/kernel.h> ··· 983 983 int ksz9477_set_ageing_time(struct ksz_device *dev, unsigned int msecs) 984 984 { 985 985 u32 secs = msecs / 1000; 986 - u8 value; 987 - u8 data; 986 + u8 data, mult, value; 987 + u32 max_val; 988 988 int ret; 989 989 990 - value = FIELD_GET(SW_AGE_PERIOD_7_0_M, secs); 990 + #define MAX_TIMER_VAL ((1 << 8) - 1) 991 991 992 - ret = ksz_write8(dev, REG_SW_LUE_CTRL_3, value); 993 - if (ret < 0) 994 - return ret; 992 + /* The aging timer comprises a 3-bit multiplier and an 8-bit second 993 + * value. Either of them cannot be zero. The maximum timer is then 994 + * 7 * 255 = 1785 seconds. 995 + */ 996 + if (!secs) 997 + secs = 1; 995 998 996 - data = FIELD_GET(SW_AGE_PERIOD_10_8_M, secs); 999 + /* Return error if too large. */ 1000 + else if (secs > 7 * MAX_TIMER_VAL) 1001 + return -EINVAL; 997 1002 998 1003 ret = ksz_read8(dev, REG_SW_LUE_CTRL_0, &value); 999 1004 if (ret < 0) 1000 1005 return ret; 1001 1006 1002 - value &= ~SW_AGE_CNT_M; 1003 - value |= FIELD_PREP(SW_AGE_CNT_M, data); 1007 + /* Check whether there is need to update the multiplier. */ 1008 + mult = FIELD_GET(SW_AGE_CNT_M, value); 1009 + max_val = MAX_TIMER_VAL; 1010 + if (mult > 0) { 1011 + /* Try to use the same multiplier already in the register as 1012 + * the hardware default uses multiplier 4 and 75 seconds for 1013 + * 300 seconds. 1014 + */ 1015 + max_val = DIV_ROUND_UP(secs, mult); 1016 + if (max_val > MAX_TIMER_VAL || max_val * mult != secs) 1017 + max_val = MAX_TIMER_VAL; 1018 + } 1004 1019 1005 - return ksz_write8(dev, REG_SW_LUE_CTRL_0, value); 1020 + data = DIV_ROUND_UP(secs, max_val); 1021 + if (mult != data) { 1022 + value &= ~SW_AGE_CNT_M; 1023 + value |= FIELD_PREP(SW_AGE_CNT_M, data); 1024 + ret = ksz_write8(dev, REG_SW_LUE_CTRL_0, value); 1025 + if (ret < 0) 1026 + return ret; 1027 + } 1028 + 1029 + value = DIV_ROUND_UP(secs, data); 1030 + return ksz_write8(dev, REG_SW_LUE_CTRL_3, value); 1006 1031 } 1007 1032 1008 1033 void ksz9477_port_queue_split(struct ksz_device *dev, int port)
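The multiplier selection introduced above can be checked in isolation: keep the register's current multiplier when it divides the request exactly and the quotient fits the 8-bit field, otherwise recompute both fields. This is a standalone userspace sketch of the same arithmetic; `DIV_ROUND_UP` is redefined locally and the helper name is ours, not the driver's:

```c
#define MAX_TIMER_VAL ((1 << 8) - 1)	/* 8-bit seconds field */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Hypothetical helper mirroring the arithmetic in
 * ksz9477_set_ageing_time(): split a requested ageing time in seconds
 * into the 3-bit multiplier and 8-bit seconds value, preferring the
 * multiplier already programmed in the register. */
static void split_ageing_time(unsigned int secs, unsigned int cur_mult,
			      unsigned int *mult, unsigned int *value)
{
	unsigned int max_val = MAX_TIMER_VAL;

	if (cur_mult > 0) {
		/* Keep the current multiplier if it divides the request
		 * exactly and the quotient fits in 8 bits. */
		max_val = DIV_ROUND_UP(secs, cur_mult);
		if (max_val > MAX_TIMER_VAL || max_val * cur_mult != secs)
			max_val = MAX_TIMER_VAL;
	}

	*mult = DIV_ROUND_UP(secs, max_val);
	*value = DIV_ROUND_UP(secs, *mult);
}
```

With the hardware default multiplier of 4, a 300-second request stays at 4 × 75; starting from an unprogrammed multiplier, the 1785-second maximum resolves to 7 × 255.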
+1 -3
drivers/net/dsa/microchip/ksz9477_reg.h
··· 2 2 /* 3 3 * Microchip KSZ9477 register definitions 4 4 * 5 - * Copyright (C) 2017-2018 Microchip Technology Inc. 5 + * Copyright (C) 2017-2024 Microchip Technology Inc. 6 6 */ 7 7 8 8 #ifndef __KSZ9477_REGS_H ··· 165 165 #define SW_VLAN_ENABLE BIT(7) 166 166 #define SW_DROP_INVALID_VID BIT(6) 167 167 #define SW_AGE_CNT_M GENMASK(5, 3) 168 - #define SW_AGE_CNT_S 3 169 - #define SW_AGE_PERIOD_10_8_M GENMASK(10, 8) 170 168 #define SW_RESV_MCAST_ENABLE BIT(2) 171 169 #define SW_HASH_OPTION_M 0x03 172 170 #define SW_HASH_OPTION_CRC 1
+59 -3
drivers/net/dsa/microchip/lan937x_main.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* Microchip LAN937X switch driver main logic 3 - * Copyright (C) 2019-2022 Microchip Technology Inc. 3 + * Copyright (C) 2019-2024 Microchip Technology Inc. 4 4 */ 5 5 #include <linux/kernel.h> 6 6 #include <linux/module.h> ··· 461 461 462 462 int lan937x_set_ageing_time(struct ksz_device *dev, unsigned int msecs) 463 463 { 464 - u32 secs = msecs / 1000; 465 - u32 value; 464 + u8 data, mult, value8; 465 + bool in_msec = false; 466 + u32 max_val, value; 467 + u32 secs = msecs; 466 468 int ret; 469 + 470 + #define MAX_TIMER_VAL ((1 << 20) - 1) 471 + 472 + /* The aging timer comprises a 3-bit multiplier and a 20-bit second 473 + * value. Either of them cannot be zero. The maximum timer is then 474 + * 7 * 1048575 = 7340025 seconds. As this value is too large for 475 + * practical use it can be interpreted as microseconds, making the 476 + * maximum timer 7340 seconds with finer control. This allows for 477 + * maximum 122 minutes compared to 29 minutes in KSZ9477 switch. 478 + */ 479 + if (msecs % 1000) 480 + in_msec = true; 481 + else 482 + secs /= 1000; 483 + if (!secs) 484 + secs = 1; 485 + 486 + /* Return error if too large. */ 487 + else if (secs > 7 * MAX_TIMER_VAL) 488 + return -EINVAL; 489 + 490 + /* Configure how to interpret the number value. */ 491 + ret = ksz_rmw8(dev, REG_SW_LUE_CTRL_2, SW_AGE_CNT_IN_MICROSEC, 492 + in_msec ? SW_AGE_CNT_IN_MICROSEC : 0); 493 + if (ret < 0) 494 + return ret; 495 + 496 + ret = ksz_read8(dev, REG_SW_LUE_CTRL_0, &value8); 497 + if (ret < 0) 498 + return ret; 499 + 500 + /* Check whether there is need to update the multiplier. */ 501 + mult = FIELD_GET(SW_AGE_CNT_M, value8); 502 + max_val = MAX_TIMER_VAL; 503 + if (mult > 0) { 504 + /* Try to use the same multiplier already in the register as 505 + * the hardware default uses multiplier 4 and 75 seconds for 506 + * 300 seconds. 507 + */ 508 + max_val = DIV_ROUND_UP(secs, mult); 509 + if (max_val > MAX_TIMER_VAL || max_val * mult != secs) 510 + max_val = MAX_TIMER_VAL; 511 + } 512 + 513 + data = DIV_ROUND_UP(secs, max_val); 514 + if (mult != data) { 515 + value8 &= ~SW_AGE_CNT_M; 516 + value8 |= FIELD_PREP(SW_AGE_CNT_M, data); 517 + ret = ksz_write8(dev, REG_SW_LUE_CTRL_0, value8); 518 + if (ret < 0) 519 + return ret; 520 + } 521 + 522 + secs = DIV_ROUND_UP(secs, data); 467 523 468 524 value = FIELD_GET(SW_AGE_PERIOD_7_0_M, secs); 469 525
+6 -3
drivers/net/dsa/microchip/lan937x_reg.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* Microchip LAN937X switch register definitions 3 - * Copyright (C) 2019-2021 Microchip Technology Inc. 3 + * Copyright (C) 2019-2024 Microchip Technology Inc. 4 4 */ 5 5 #ifndef __LAN937X_REG_H 6 6 #define __LAN937X_REG_H ··· 56 56 57 57 #define SW_VLAN_ENABLE BIT(7) 58 58 #define SW_DROP_INVALID_VID BIT(6) 59 - #define SW_AGE_CNT_M 0x7 60 - #define SW_AGE_CNT_S 3 59 + #define SW_AGE_CNT_M GENMASK(5, 3) 61 60 #define SW_RESV_MCAST_ENABLE BIT(2) 62 61 63 62 #define REG_SW_LUE_CTRL_1 0x0311 ··· 68 69 #define SW_AGING_ENABLE BIT(2) 69 70 #define SW_FAST_AGING BIT(1) 70 71 #define SW_LINK_AUTO_AGING BIT(0) 72 + 73 + #define REG_SW_LUE_CTRL_2 0x0312 74 + 75 + #define SW_AGE_CNT_IN_MICROSEC BIT(7) 71 76 72 77 #define REG_SW_AGE_PERIOD__1 0x0313 73 78 #define SW_AGE_PERIOD_7_0_M GENMASK(7, 0)
+18 -3
drivers/net/ethernet/broadcom/bcmsysport.c
··· 1933 1933 unsigned int i; 1934 1934 int ret; 1935 1935 1936 - clk_prepare_enable(priv->clk); 1936 + ret = clk_prepare_enable(priv->clk); 1937 + if (ret) { 1938 + netdev_err(dev, "could not enable priv clock\n"); 1939 + return ret; 1940 + } 1937 1941 1938 1942 /* Reset UniMAC */ 1939 1943 umac_reset(priv); ··· 2595 2591 goto err_deregister_notifier; 2596 2592 } 2597 2593 2598 - clk_prepare_enable(priv->clk); 2594 + ret = clk_prepare_enable(priv->clk); 2595 + if (ret) { 2596 + dev_err(&pdev->dev, "could not enable priv clock\n"); 2597 + goto err_deregister_netdev; 2598 + } 2599 2599 2600 2600 priv->rev = topctrl_readl(priv, REV_CNTL) & REV_MASK; 2601 2601 dev_info(&pdev->dev, ··· 2613 2605 2614 2606 return 0; 2615 2607 2608 + err_deregister_netdev: 2609 + unregister_netdev(dev); 2616 2610 err_deregister_notifier: 2617 2611 unregister_netdevice_notifier(&priv->netdev_notifier); 2618 2612 err_deregister_fixed_link: ··· 2784 2774 if (!netif_running(dev)) 2785 2775 return 0; 2786 2776 2787 - clk_prepare_enable(priv->clk); 2777 + ret = clk_prepare_enable(priv->clk); 2778 + if (ret) { 2779 + netdev_err(dev, "could not enable priv clock\n"); 2780 + return ret; 2781 + } 2782 + 2788 2783 if (priv->wolopts) 2789 2784 clk_disable_unprepare(priv->wol_clk); 2790 2785
+1
drivers/net/ethernet/google/gve/gve.h
··· 1140 1140 void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid); 1141 1141 bool gve_tx_poll(struct gve_notify_block *block, int budget); 1142 1142 bool gve_xdp_poll(struct gve_notify_block *block, int budget); 1143 + int gve_xsk_tx_poll(struct gve_notify_block *block, int budget); 1143 1144 int gve_tx_alloc_rings_gqi(struct gve_priv *priv, 1144 1145 struct gve_tx_alloc_rings_cfg *cfg); 1145 1146 void gve_tx_free_rings_gqi(struct gve_priv *priv,
+36 -27
drivers/net/ethernet/google/gve/gve_main.c
··· 333 333 334 334 if (block->rx) { 335 335 work_done = gve_rx_poll(block, budget); 336 + 337 + /* Poll XSK TX as part of RX NAPI. Setup re-poll based on max of 338 + * TX and RX work done. 339 + */ 340 + if (priv->xdp_prog) 341 + work_done = max_t(int, work_done, 342 + gve_xsk_tx_poll(block, budget)); 343 + 336 344 reschedule |= work_done == budget; 337 345 } 338 346 ··· 930 922 static void gve_tx_get_curr_alloc_cfg(struct gve_priv *priv, 931 923 struct gve_tx_alloc_rings_cfg *cfg) 932 924 { 925 + int num_xdp_queues = priv->xdp_prog ? priv->rx_cfg.num_queues : 0; 926 + 933 927 cfg->qcfg = &priv->tx_cfg; 934 928 cfg->raw_addressing = !gve_is_qpl(priv); 935 929 cfg->ring_size = priv->tx_desc_cnt; 936 930 cfg->start_idx = 0; 937 - cfg->num_rings = gve_num_tx_queues(priv); 931 + cfg->num_rings = priv->tx_cfg.num_queues + num_xdp_queues; 938 932 cfg->tx = priv->tx; 939 933 } 940 934 ··· 1633 1623 if (err) 1634 1624 return err; 1635 1625 1636 - /* If XDP prog is not installed, return */ 1637 - if (!priv->xdp_prog) 1626 + /* If XDP prog is not installed or interface is down, return. */ 1627 + if (!priv->xdp_prog || !netif_running(dev)) 1638 1628 return 0; 1639 1629 1640 1630 rx = &priv->rx[qid]; ··· 1679 1669 if (qid >= priv->rx_cfg.num_queues) 1680 1670 return -EINVAL; 1681 1671 1682 - /* If XDP prog is not installed, unmap DMA and return */ 1683 - if (!priv->xdp_prog) 1672 + /* If XDP prog is not installed or interface is down, unmap DMA and 1673 + * return. 1674 + */ 1675 + if (!priv->xdp_prog || !netif_running(dev)) 1684 1676 goto done; 1685 - 1686 - tx_qid = gve_xdp_tx_queue_id(priv, qid); 1687 - if (!netif_running(dev)) { 1688 - priv->rx[qid].xsk_pool = NULL; 1689 - xdp_rxq_info_unreg(&priv->rx[qid].xsk_rxq); 1690 - priv->tx[tx_qid].xsk_pool = NULL; 1691 - goto done; 1692 - } 1693 1677 1694 1678 napi_rx = &priv->ntfy_blocks[priv->rx[qid].ntfy_id].napi; 1695 1679 napi_disable(napi_rx); /* make sure current rx poll is done */ 1696 1680 1681 + tx_qid = gve_xdp_tx_queue_id(priv, qid); 1697 1682 napi_tx = &priv->ntfy_blocks[priv->tx[tx_qid].ntfy_id].napi; 1698 1683 napi_disable(napi_tx); /* make sure current tx poll is done */ 1699 1684 ··· 1714 1709 static int gve_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags) 1715 1710 { 1716 1711 struct gve_priv *priv = netdev_priv(dev); 1717 - int tx_queue_id = gve_xdp_tx_queue_id(priv, queue_id); 1712 + struct napi_struct *napi; 1713 + 1714 + if (!gve_get_napi_enabled(priv)) 1715 + return -ENETDOWN; 1718 1716 1719 1717 if (queue_id >= priv->rx_cfg.num_queues || !priv->xdp_prog) 1720 1718 return -EINVAL; 1721 1719 1722 - if (flags & XDP_WAKEUP_TX) { 1723 - struct gve_tx_ring *tx = &priv->tx[tx_queue_id]; 1724 - struct napi_struct *napi = 1725 - &priv->ntfy_blocks[tx->ntfy_id].napi; 1726 - 1727 - if (!napi_if_scheduled_mark_missed(napi)) { 1728 - /* Call local_bh_enable to trigger SoftIRQ processing */ 1729 - local_bh_disable(); 1730 - napi_schedule(napi); 1731 - local_bh_enable(); 1732 - } 1733 - 1734 - tx->xdp_xsk_wakeup++; 1720 + napi = &priv->ntfy_blocks[gve_rx_idx_to_ntfy(priv, queue_id)].napi; 1721 + if (!napi_if_scheduled_mark_missed(napi)) { 1722 + /* Call local_bh_enable to trigger SoftIRQ processing */ 1723 + local_bh_disable(); 1724 + napi_schedule(napi); 1725 + local_bh_enable(); 1726 + } 1735 1727 1736 1728 return 0; ··· 1838 1837 { 1839 1838 struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; 1840 1839 struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; 1840 + int num_xdp_queues; 1841 1841 int err; 1842 1842 1843 1843 gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); ··· 1848 1846 rx_alloc_cfg.qcfg_tx = &new_tx_config; 1849 1847 rx_alloc_cfg.qcfg = &new_rx_config; 1850 1848 tx_alloc_cfg.num_rings = new_tx_config.num_queues; 1849 + 1850 + /* Add dedicated XDP TX queues if enabled. */ 1851 + num_xdp_queues = priv->xdp_prog ? new_rx_config.num_queues : 0; 1852 + tx_alloc_cfg.num_rings += num_xdp_queues; 1851 1853 1852 1854 if (netif_running(priv->dev)) { 1853 1855 err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg); ··· 1905 1899 1906 1900 gve_clear_napi_enabled(priv); 1907 1901 gve_clear_report_stats(priv); 1902 + 1903 + /* Make sure that all traffic is finished processing. */ 1904 + synchronize_net(); 1908 1905 } 1909 1906 1910 1907 static void gve_turnup(struct gve_priv *priv)
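The queue accounting this gve patch introduces is simple enough to state on its own: with an XDP program attached, one dedicated XDP TX queue is allocated per RX queue, on top of the regular TX queues. A standalone sketch of that count (the function name is illustrative, not from the driver):

```c
/* One dedicated XDP TX ring per RX ring when an XDP program is
 * attached, mirroring the num_xdp_queues computation in the
 * gve_tx_get_curr_alloc_cfg() hunk above. */
static int gve_total_tx_rings(int num_tx_queues, int num_rx_queues,
			      int xdp_prog_attached)
{
	int num_xdp_queues = xdp_prog_attached ? num_rx_queues : 0;

	return num_tx_queues + num_xdp_queues;
}
```

So a device with 8 TX and 4 RX queues allocates 12 TX rings while XDP is attached, and drops back to 8 once the program is removed.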
+30 -16
drivers/net/ethernet/google/gve/gve_tx.c
··· 206 206 return; 207 207 208 208 gve_remove_napi(priv, ntfy_idx); 209 - gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false); 209 + if (tx->q_num < priv->tx_cfg.num_queues) 210 + gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false); 211 + else 212 + gve_clean_xdp_done(priv, tx, priv->tx_desc_cnt); 210 213 netdev_tx_reset_queue(tx->netdev_txq); 211 214 gve_tx_remove_from_block(priv, idx); 212 215 } ··· 837 834 struct gve_tx_ring *tx; 838 835 int i, err = 0, qid; 839 836 840 - if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) 837 + if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK) || !priv->xdp_prog) 841 838 return -EINVAL; 839 + 840 + if (!gve_get_napi_enabled(priv)) 841 + return -ENETDOWN; 842 842 843 843 qid = gve_xdp_tx_queue_id(priv, 844 844 smp_processor_id() % priv->num_xdp_queues); ··· 981 975 return sent; 982 976 } 983 977 978 + int gve_xsk_tx_poll(struct gve_notify_block *rx_block, int budget) 979 + { 980 + struct gve_rx_ring *rx = rx_block->rx; 981 + struct gve_priv *priv = rx->gve; 982 + struct gve_tx_ring *tx; 983 + int sent = 0; 984 + 985 + tx = &priv->tx[gve_xdp_tx_queue_id(priv, rx->q_num)]; 986 + if (tx->xsk_pool) { 987 + sent = gve_xsk_tx(priv, tx, budget); 988 + 989 + u64_stats_update_begin(&tx->statss); 990 + tx->xdp_xsk_sent += sent; 991 + u64_stats_update_end(&tx->statss); 992 + if (xsk_uses_need_wakeup(tx->xsk_pool)) 993 + xsk_set_tx_need_wakeup(tx->xsk_pool); 994 + } 995 + 996 + return sent; 997 + } 998 + 984 999 bool gve_xdp_poll(struct gve_notify_block *block, int budget) 985 1000 { 986 1001 struct gve_priv *priv = block->priv; 987 1002 struct gve_tx_ring *tx = block->tx; 988 1003 u32 nic_done; 989 - bool repoll; 990 1004 u32 to_do; 991 1005 992 1006 /* Find out how much work there is to be done */ 993 1007 nic_done = gve_tx_load_event_counter(priv, tx); 994 1008 to_do = min_t(u32, (nic_done - tx->done), budget); 995 1009 gve_clean_xdp_done(priv, tx, to_do); 996 - repoll = nic_done != tx->done; 997 - 998 - if (tx->xsk_pool) { 999 - int sent = gve_xsk_tx(priv, tx, budget); 1000 - 1001 - u64_stats_update_begin(&tx->statss); 1002 - tx->xdp_xsk_sent += sent; 1003 - u64_stats_update_end(&tx->statss); 1004 - repoll |= (sent == budget); 1005 - if (xsk_uses_need_wakeup(tx->xsk_pool)) 1006 - xsk_set_tx_need_wakeup(tx->xsk_pool); 1007 - } 1008 1010 1009 1011 /* If we still have work we want to repoll */ 1010 1012 return nic_done != tx->done; 1011 1013 } 1012 1014 1013 1015 bool gve_tx_poll(struct gve_notify_block *block, int budget)
+12 -2
drivers/net/ethernet/marvell/mv643xx_eth.c
··· 2704 2704 2705 2705 static void mv643xx_eth_shared_of_remove(void) 2706 2706 { 2707 + struct mv643xx_eth_platform_data *pd; 2707 2708 int n; 2708 2709 2709 2710 for (n = 0; n < 3; n++) { 2711 + if (!port_platdev[n]) 2712 + continue; 2713 + pd = dev_get_platdata(&port_platdev[n]->dev); 2714 + if (pd) 2715 + of_node_put(pd->phy_node); 2710 2716 platform_device_del(port_platdev[n]); 2711 2717 port_platdev[n] = NULL; 2712 2718 } ··· 2775 2769 } 2776 2770 2777 2771 ppdev = platform_device_alloc(MV643XX_ETH_NAME, dev_num); 2778 - if (!ppdev) 2779 - return -ENOMEM; 2772 + if (!ppdev) { 2773 + ret = -ENOMEM; 2774 + goto put_err; 2775 + } 2780 2776 ppdev->dev.coherent_dma_mask = DMA_BIT_MASK(32); 2781 2777 ppdev->dev.of_node = pnp; 2782 2778 ··· 2800 2792 2801 2793 port_err: 2802 2794 platform_device_put(ppdev); 2795 + put_err: 2796 + of_node_put(ppd.phy_node); 2803 2797 return ret; 2804 2798 } 2805 2799
+1
drivers/net/ethernet/marvell/sky2.c
··· 130 130 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x436C) }, /* 88E8072 */ 131 131 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x436D) }, /* 88E8055 */ 132 132 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4370) }, /* 88E8075 */ 133 + { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4373) }, /* 88E8075 */ 133 134 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4380) }, /* 88E8057 */ 134 135 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4381) }, /* 88E8059 */ 135 136 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4382) }, /* 88E8079 */
+4
drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
··· 339 339 { 340 340 struct mlx5e_priv *priv = macsec_netdev_priv(ctx->netdev); 341 341 struct mlx5_macsec_fs *macsec_fs = priv->mdev->macsec_fs; 342 + const struct macsec_tx_sc *tx_sc = &ctx->secy->tx_sc; 342 343 struct mlx5_macsec_rule_attrs rule_attrs; 343 344 union mlx5_macsec_rule *macsec_rule; 345 + 346 + if (is_tx && tx_sc->encoding_sa != sa->assoc_num) 347 + return 0; 344 348 345 349 rule_attrs.macsec_obj_id = sa->macsec_obj_id; 346 350 rule_attrs.sci = sa->sci;
+17 -2
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 6543 6543 6544 6544 mlx5_core_uplink_netdev_set(mdev, NULL); 6545 6545 mlx5e_dcbnl_delete_app(priv); 6546 - unregister_netdev(priv->netdev); 6547 - _mlx5e_suspend(adev, false); 6546 + /* When unload driver, the netdev is in registered state 6547 + * if it's from legacy mode. If from switchdev mode, it 6548 + * is already unregistered before changing to NIC profile. 6549 + */ 6550 + if (priv->netdev->reg_state == NETREG_REGISTERED) { 6551 + unregister_netdev(priv->netdev); 6552 + _mlx5e_suspend(adev, false); 6553 + } else { 6554 + struct mlx5_core_dev *pos; 6555 + int i; 6556 + 6557 + if (test_bit(MLX5E_STATE_DESTROYING, &priv->state)) 6558 + mlx5_sd_for_each_dev(i, mdev, pos) 6559 + mlx5e_destroy_mdev_resources(pos); 6560 + else 6561 + _mlx5e_suspend(adev, true); 6562 + } 6548 6563 /* Avoid cleanup if profile rollback failed. */ 6549 6564 if (priv->profile) 6550 6565 priv->profile->cleanup(priv);
+15
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 1509 1509 1510 1510 priv = netdev_priv(netdev); 1511 1511 1512 + /* This bit is set when using devlink to change eswitch mode from 1513 + * switchdev to legacy. As need to keep uplink netdev ifindex, we 1514 + * detach uplink representor profile and attach NIC profile only. 1515 + * The netdev will be unregistered later when unload NIC auxiliary 1516 + * driver for this case. 1517 + * We explicitly block devlink eswitch mode change if any IPSec rules 1518 + * offloaded, but can't block other cases, such as driver unload 1519 + * and devlink reload. We have to unregister netdev before profile 1520 + * change for those cases. This is to avoid resource leak because 1521 + * the offloaded rules don't have the chance to be unoffloaded before 1522 + * cleanup which is triggered by detach uplink representor profile. 1523 + */ 1524 + if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_SWITCH_LEGACY)) 1525 + unregister_netdev(netdev); 1526 + 1512 1527 mlx5e_netdev_attach_nic_profile(priv); 1513 1528 } 1514 1529
+3 -3
drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c
··· 150 150 unsigned long i; 151 151 int err; 152 152 153 - xa_for_each(&esw->offloads.vport_reps, i, rep) { 154 - rpriv = rep->rep_data[REP_ETH].priv; 155 - if (!rpriv || !rpriv->netdev) 153 + mlx5_esw_for_each_rep(esw, i, rep) { 154 + if (atomic_read(&rep->rep_data[REP_ETH].state) != REP_LOADED) 156 155 continue; 157 156 157 + rpriv = rep->rep_data[REP_ETH].priv; 158 158 rhashtable_walk_enter(&rpriv->tc_ht, &iter); 159 159 rhashtable_walk_start(&iter); 160 160 while ((flow = rhashtable_walk_next(&iter)) != NULL) {
+3
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
··· 714 714 MLX5_CAP_GEN_2((esw->dev), ec_vf_vport_base) +\ 715 715 (last) - 1) 716 716 717 + #define mlx5_esw_for_each_rep(esw, i, rep) \ 718 + xa_for_each(&((esw)->offloads.vport_reps), i, rep) 719 + 717 720 struct mlx5_eswitch *__must_check 718 721 mlx5_devlink_eswitch_get(struct devlink *devlink); 719 722
+2 -3
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 53 53 #include "lag/lag.h" 54 54 #include "en/tc/post_meter.h" 55 55 56 - #define mlx5_esw_for_each_rep(esw, i, rep) \ 57 - xa_for_each(&((esw)->offloads.vport_reps), i, rep) 58 - 59 56 /* There are two match-all miss flows, one for unicast dst mac and 60 57 * one for multicast. 61 58 */ ··· 3777 3780 esw->eswitch_operation_in_progress = true; 3778 3781 up_write(&esw->mode_lock); 3779 3782 3783 + if (mode == DEVLINK_ESWITCH_MODE_LEGACY) 3784 + esw->dev->priv.flags |= MLX5_PRIV_FLAGS_SWITCH_LEGACY; 3780 3785 mlx5_eswitch_disable_locked(esw); 3781 3786 if (mode == DEVLINK_ESWITCH_MODE_SWITCHDEV) { 3782 3787 if (mlx5_devlink_trap_get_num_active(esw->dev)) {
+1 -3
drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_send.c
··· 1067 1067 int inlen, err, eqn; 1068 1068 void *cqc, *in; 1069 1069 __be64 *pas; 1070 - int vector; 1071 1070 u32 i; 1072 1071 1073 1072 cq = kzalloc(sizeof(*cq), GFP_KERNEL); ··· 1095 1096 if (!in) 1096 1097 goto err_cqwq; 1097 1098 1098 - vector = raw_smp_processor_id() % mlx5_comp_vectors_max(mdev); 1099 - err = mlx5_comp_eqn_get(mdev, vector, &eqn); 1099 + err = mlx5_comp_eqn_get(mdev, 0, &eqn); 1100 1100 if (err) { 1101 1101 kvfree(in); 1102 1102 goto err_cqwq;
+1 -2
drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
··· 423 423 424 424 parms = mlxsw_sp_ipip_netdev_parms4(to_dev); 425 425 ip_tunnel_init_flow(&fl4, parms.iph.protocol, *daddrp, *saddrp, 426 - 0, 0, dev_net(to_dev), parms.link, tun->fwmark, 0, 427 - 0); 426 + 0, 0, tun->net, parms.link, tun->fwmark, 0, 0); 428 427 429 428 rt = ip_route_output_key(tun->net, &fl4); 430 429 if (IS_ERR(rt))
+1 -1
drivers/net/ethernet/meta/fbnic/fbnic_csr.c
··· 64 64 u32 i, j; 65 65 66 66 *(data++) = start; 67 - *(data++) = end - 1; 67 + *(data++) = end; 68 68 69 69 /* FBNIC_RPC_TCAM_ACT */ 70 70 for (i = 0; i < FBNIC_RPC_TCAM_ACT_NUM_ENTRIES; i++) {
+1 -1
drivers/net/ethernet/sfc/tc_conntrack.c
··· 16 16 void *cb_priv); 17 17 18 18 static const struct rhashtable_params efx_tc_ct_zone_ht_params = { 19 - .key_len = offsetof(struct efx_tc_ct_zone, linkage), 19 + .key_len = sizeof_field(struct efx_tc_ct_zone, zone), 20 20 .key_offset = 0, 21 21 .head_offset = offsetof(struct efx_tc_ct_zone, linkage), 22 22 };
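The one-line sfc fix swaps `offsetof(struct efx_tc_ct_zone, linkage)` for `sizeof_field(struct efx_tc_ct_zone, zone)` as the rhashtable key length: the old value covered every byte before `linkage`, padding and unrelated members included, while the hash key is meant to be just `zone`. A userspace sketch with a hypothetical layout (not the real `efx_tc_ct_zone`; `sizeof_field()` is redefined here the way the kernel defines it) shows the difference:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace copy of the kernel's sizeof_field() helper. */
#define sizeof_field(TYPE, MEMBER) sizeof(((TYPE *)0)->MEMBER)

/* Hypothetical layout loosely modelled on efx_tc_ct_zone:
 * the intended hash key is only 'zone'.
 */
struct ct_zone {
	uint16_t zone;            /* intended hash key */
	void *nf_ct_zone;         /* not part of the key */
	struct ct_zone *linkage;  /* rhashtable linkage */
};

/* offsetof(..., linkage) includes 'zone', its trailing padding and
 * 'nf_ct_zone'; sizeof_field(..., zone) is exactly the key's size.
 */
```

On a typical LP64 target `offsetof(struct ct_zone, linkage)` is 16 while `sizeof_field(struct ct_zone, zone)` is 2, so the old key length hashed uninitialized padding and a pointer.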
+17 -26
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
···
406 406 }
407 407 
408 408 /**
409 - * stmmac_remove_config_dt - undo the effects of stmmac_probe_config_dt()
410 - * @pdev: platform_device structure
411 - * @plat: driver data platform structure
412 - *
413 - * Release resources claimed by stmmac_probe_config_dt().
414 - */
415 - static void stmmac_remove_config_dt(struct platform_device *pdev,
416 - struct plat_stmmacenet_data *plat)
417 - {
418 - clk_disable_unprepare(plat->stmmac_clk);
419 - clk_disable_unprepare(plat->pclk);
420 - of_node_put(plat->phy_node);
421 - of_node_put(plat->mdio_node);
422 - }
423 - 
424 - /**
425 409 * stmmac_probe_config_dt - parse device-tree driver parameters
426 410 * @pdev: platform_device structure
427 411 * @mac: MAC address to use
···
474 490 dev_warn(&pdev->dev, "snps,phy-addr property is deprecated\n");
475 491 
476 492 rc = stmmac_mdio_setup(plat, np, &pdev->dev);
477 - if (rc)
478 - return ERR_PTR(rc);
493 + if (rc) {
494 + ret = ERR_PTR(rc);
495 + goto error_put_phy;
496 + }
479 497 
480 498 of_property_read_u32(np, "tx-fifo-depth", &plat->tx_fifo_size);
481 499 
···
567 581 dma_cfg = devm_kzalloc(&pdev->dev, sizeof(*dma_cfg),
568 582 GFP_KERNEL);
569 583 if (!dma_cfg) {
570 - stmmac_remove_config_dt(pdev, plat);
571 - return ERR_PTR(-ENOMEM);
584 + ret = ERR_PTR(-ENOMEM);
585 + goto error_put_mdio;
572 586 }
573 587 plat->dma_cfg = dma_cfg;
574 588 
···
596 610 
597 611 rc = stmmac_mtl_setup(pdev, plat);
598 612 if (rc) {
599 - stmmac_remove_config_dt(pdev, plat);
600 - return ERR_PTR(rc);
613 + ret = ERR_PTR(rc);
614 + goto error_put_mdio;
601 615 }
602 616 
603 617 /* clock setup */
···
649 663 clk_disable_unprepare(plat->pclk);
650 664 error_pclk_get:
651 665 clk_disable_unprepare(plat->stmmac_clk);
666 + error_put_mdio:
667 + of_node_put(plat->mdio_node);
668 + error_put_phy:
669 + of_node_put(plat->phy_node);
652 670 
653 671 return ret;
654 672 }
···
661 671 {
662 672 struct plat_stmmacenet_data *plat = data;
663 673 
664 - /* Platform data argument is unused */
665 - stmmac_remove_config_dt(NULL, plat);
674 + clk_disable_unprepare(plat->stmmac_clk);
675 + clk_disable_unprepare(plat->pclk);
676 + of_node_put(plat->mdio_node);
677 + of_node_put(plat->phy_node);
666 678 }
667 679 
668 680 /**
669 681 * devm_stmmac_probe_config_dt
670 682 * @pdev: platform_device structure
671 683 * @mac: MAC address to use
672 - * Description: Devres variant of stmmac_probe_config_dt(). Does not require
673 - * the user to call stmmac_remove_config_dt() at driver detach.
684 + * Description: Devres variant of stmmac_probe_config_dt().
674 685 struct plat_stmmacenet_data *
675 686 devm_stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
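The stmmac patch above deletes `stmmac_remove_config_dt()` and moves its body into the devres release callback, so everything claimed during probe is released automatically at detach, in reverse order of acquisition. A toy model of that release-action pattern (not the kernel devres API; `add_action()`/`release_all()` are hypothetical stand-ins for `devm_add_action()` and device teardown):

```c
#include <assert.h>
#include <stddef.h>

typedef void (*release_fn)(void *);

/* Toy devres: record up to 8 release callbacks per "device". */
static struct { release_fn fn; void *data; } actions[8];
static size_t n_actions;

static int add_action(release_fn fn, void *data)
{
	if (n_actions == 8)
		return -1;
	actions[n_actions].fn = fn;
	actions[n_actions].data = data;
	n_actions++;
	return 0;
}

/* Device teardown: run release actions in reverse registration order. */
static void release_all(void)
{
	while (n_actions) {
		n_actions--;
		actions[n_actions].fn(actions[n_actions].data);
	}
}

/* Demo release callback: append the released resource's id to a log. */
static int release_log[8];
static size_t release_count;

static void log_release(void *data)
{
	release_log[release_count++] = *(int *)data;
}
```

Registering resources 1, 2, 3 and tearing down releases them as 3, 2, 1 — the LIFO order that makes devres-managed cleanup safe for dependent resources.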
+1 -1
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 3551 3551 init_completion(&common->tdown_complete); 3552 3552 common->tx_ch_num = AM65_CPSW_DEFAULT_TX_CHNS; 3553 3553 common->rx_ch_num_flows = AM65_CPSW_DEFAULT_RX_CHN_FLOWS; 3554 - common->pf_p0_rx_ptype_rrobin = false; 3554 + common->pf_p0_rx_ptype_rrobin = true; 3555 3555 common->default_vlan = 1; 3556 3556 3557 3557 common->ports = devm_kcalloc(dev, common->port_num,
+8
drivers/net/ethernet/ti/icssg/icss_iep.c
··· 215 215 for (cmp = IEP_MIN_CMP; cmp < IEP_MAX_CMP; cmp++) { 216 216 regmap_update_bits(iep->map, ICSS_IEP_CMP_STAT_REG, 217 217 IEP_CMP_STATUS(cmp), IEP_CMP_STATUS(cmp)); 218 + 219 + regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG, 220 + IEP_CMP_CFG_CMP_EN(cmp), 0); 218 221 } 219 222 220 223 /* enable reset counter on CMP0 event */ ··· 782 779 iep->ptp_clock = NULL; 783 780 } 784 781 icss_iep_disable(iep); 782 + 783 + if (iep->pps_enabled) 784 + icss_iep_pps_enable(iep, false); 785 + else if (iep->perout_enabled) 786 + icss_iep_perout_enable(iep, NULL, false); 785 787 786 788 return 0; 787 789 }
-25
drivers/net/ethernet/ti/icssg/icssg_common.c
··· 855 855 } 856 856 EXPORT_SYMBOL_GPL(prueth_rx_irq); 857 857 858 - void prueth_emac_stop(struct prueth_emac *emac) 859 - { 860 - struct prueth *prueth = emac->prueth; 861 - int slice; 862 - 863 - switch (emac->port_id) { 864 - case PRUETH_PORT_MII0: 865 - slice = ICSS_SLICE0; 866 - break; 867 - case PRUETH_PORT_MII1: 868 - slice = ICSS_SLICE1; 869 - break; 870 - default: 871 - netdev_err(emac->ndev, "invalid port\n"); 872 - return; 873 - } 874 - 875 - emac->fw_running = 0; 876 - if (!emac->is_sr1) 877 - rproc_shutdown(prueth->txpru[slice]); 878 - rproc_shutdown(prueth->rtu[slice]); 879 - rproc_shutdown(prueth->pru[slice]); 880 - } 881 - EXPORT_SYMBOL_GPL(prueth_emac_stop); 882 - 883 858 void prueth_cleanup_tx_ts(struct prueth_emac *emac) 884 859 { 885 860 int i;
+28 -13
drivers/net/ethernet/ti/icssg/icssg_config.c
···
397 397 return 0;
398 398 }
399 399 
400 - static void icssg_init_emac_mode(struct prueth *prueth)
400 + void icssg_init_emac_mode(struct prueth *prueth)
401 401 {
402 402 /* When the device is configured as a bridge and it is being brought
403 403 * back to the emac mode, the host mac address has to be set as 0.
···
405 405 u32 addr = prueth->shram.pa + EMAC_ICSSG_SWITCH_DEFAULT_VLAN_TABLE_OFFSET;
406 406 int i;
407 407 u8 mac[ETH_ALEN] = { 0 };
408 - 
409 - if (prueth->emacs_initialized)
410 - return;
411 408 
412 409 /* Set VLAN TABLE address base */
413 410 regmap_update_bits(prueth->miig_rt, FDB_GEN_CFG1, SMEM_VLAN_OFFSET_MASK,
···
420 423 /* Clear host MAC address */
421 424 icssg_class_set_host_mac_addr(prueth->miig_rt, mac);
422 425 }
426 + EXPORT_SYMBOL_GPL(icssg_init_emac_mode);
423 427 
424 - static void icssg_init_fw_offload_mode(struct prueth *prueth)
428 + void icssg_init_fw_offload_mode(struct prueth *prueth)
425 429 {
426 430 u32 addr = prueth->shram.pa + EMAC_ICSSG_SWITCH_DEFAULT_VLAN_TABLE_OFFSET;
427 431 int i;
428 - 
429 - if (prueth->emacs_initialized)
430 - return;
431 432 
432 433 /* Set VLAN TABLE address base */
433 434 regmap_update_bits(prueth->miig_rt, FDB_GEN_CFG1, SMEM_VLAN_OFFSET_MASK,
···
443 448 icssg_class_set_host_mac_addr(prueth->miig_rt, prueth->hw_bridge_dev->dev_addr);
444 449 icssg_set_pvid(prueth, prueth->default_vlan, PRUETH_PORT_HOST);
445 450 }
451 + EXPORT_SYMBOL_GPL(icssg_init_fw_offload_mode);
446 452 
447 453 int icssg_config(struct prueth *prueth, struct prueth_emac *emac, int slice)
448 454 {
449 455 void __iomem *config = emac->dram.va + ICSSG_CONFIG_OFFSET;
450 456 struct icssg_flow_cfg __iomem *flow_cfg;
451 457 int ret;
452 - 
453 - if (prueth->is_switch_mode || prueth->is_hsr_offload_mode)
454 - icssg_init_fw_offload_mode(prueth);
455 - else
456 - icssg_init_emac_mode(prueth);
457 458 
458 459 memset_io(config, 0, TAS_GATE_MASK_LIST0);
459 460 icssg_miig_queues_init(prueth, slice);
···
777 786 writel(pvid, prueth->shram.va + EMAC_ICSSG_SWITCH_PORT0_DEFAULT_VLAN_OFFSET);
778 787 }
779 788 EXPORT_SYMBOL_GPL(icssg_set_pvid);
789 + 
790 + int emac_fdb_flow_id_updated(struct prueth_emac *emac)
791 + {
792 + struct mgmt_cmd_rsp fdb_cmd_rsp = { 0 };
793 + int slice = prueth_emac_slice(emac);
794 + struct mgmt_cmd fdb_cmd = { 0 };
795 + int ret;
796 + 
797 + fdb_cmd.header = ICSSG_FW_MGMT_CMD_HEADER;
798 + fdb_cmd.type = ICSSG_FW_MGMT_FDB_CMD_TYPE_RX_FLOW;
799 + fdb_cmd.seqnum = ++(emac->prueth->icssg_hwcmdseq);
800 + fdb_cmd.param = 0;
801 + 
802 + fdb_cmd.param |= (slice << 4);
803 + fdb_cmd.cmd_args[0] = 0;
804 + 
805 + ret = icssg_send_fdb_msg(emac, &fdb_cmd, &fdb_cmd_rsp);
806 + if (ret)
807 + return ret;
808 + 
809 + WARN_ON(fdb_cmd.seqnum != fdb_cmd_rsp.seqnum);
810 + return fdb_cmd_rsp.status == 1 ? 0 : -EINVAL;
811 + }
812 + EXPORT_SYMBOL_GPL(emac_fdb_flow_id_updated);
+1
drivers/net/ethernet/ti/icssg/icssg_config.h
··· 55 55 #define ICSSG_FW_MGMT_FDB_CMD_TYPE 0x03 56 56 #define ICSSG_FW_MGMT_CMD_TYPE 0x04 57 57 #define ICSSG_FW_MGMT_PKT 0x80000000 58 + #define ICSSG_FW_MGMT_FDB_CMD_TYPE_RX_FLOW 0x05 58 59 59 60 struct icssg_r30_cmd { 60 61 u32 cmd[4];
+191 -90
drivers/net/ethernet/ti/icssg/icssg_prueth.c
···
164 164 }
165 165 };
166 166 
167 - static int prueth_emac_start(struct prueth *prueth, struct prueth_emac *emac)
167 + static int prueth_start(struct rproc *rproc, const char *fw_name)
168 + {
169 + int ret;
170 + 
171 + ret = rproc_set_firmware(rproc, fw_name);
172 + if (ret)
173 + return ret;
174 + return rproc_boot(rproc);
175 + }
176 + 
177 + static void prueth_shutdown(struct rproc *rproc)
178 + {
179 + rproc_shutdown(rproc);
180 + }
181 + 
182 + static int prueth_emac_start(struct prueth *prueth)
168 183 {
169 184 struct icssg_firmwares *firmwares;
170 185 struct device *dev = prueth->dev;
171 - int slice, ret;
186 + int ret, slice;
172 187 
173 188 if (prueth->is_switch_mode)
174 189 firmwares = icssg_switch_firmwares;
···
192 177 else
193 178 firmwares = icssg_emac_firmwares;
194 179 
195 - slice = prueth_emac_slice(emac);
196 - if (slice < 0) {
197 - netdev_err(emac->ndev, "invalid port\n");
198 - return -EINVAL;
180 + for (slice = 0; slice < PRUETH_NUM_MACS; slice++) {
181 + ret = prueth_start(prueth->pru[slice], firmwares[slice].pru);
182 + if (ret) {
183 + dev_err(dev, "failed to boot PRU%d: %d\n", slice, ret);
184 + goto unwind_slices;
185 + }
186 + 
187 + ret = prueth_start(prueth->rtu[slice], firmwares[slice].rtu);
188 + if (ret) {
189 + dev_err(dev, "failed to boot RTU%d: %d\n", slice, ret);
190 + rproc_shutdown(prueth->pru[slice]);
191 + goto unwind_slices;
192 + }
193 + 
194 + ret = prueth_start(prueth->txpru[slice], firmwares[slice].txpru);
195 + if (ret) {
196 + dev_err(dev, "failed to boot TX_PRU%d: %d\n", slice, ret);
197 + rproc_shutdown(prueth->rtu[slice]);
198 + rproc_shutdown(prueth->pru[slice]);
199 + goto unwind_slices;
200 + }
199 201 }
200 202 
201 - ret = icssg_config(prueth, emac, slice);
202 - if (ret)
203 - return ret;
204 - 
205 - ret = rproc_set_firmware(prueth->pru[slice], firmwares[slice].pru);
206 - ret = rproc_boot(prueth->pru[slice]);
207 - if (ret) {
208 - dev_err(dev, "failed to boot PRU%d: %d\n", slice, ret);
209 - return -EINVAL;
210 - }
211 - 
212 - ret = rproc_set_firmware(prueth->rtu[slice], firmwares[slice].rtu);
213 - ret = rproc_boot(prueth->rtu[slice]);
214 - if (ret) {
215 - dev_err(dev, "failed to boot RTU%d: %d\n", slice, ret);
216 - goto halt_pru;
217 - }
218 - 
219 - ret = rproc_set_firmware(prueth->txpru[slice], firmwares[slice].txpru);
220 - ret = rproc_boot(prueth->txpru[slice]);
221 - if (ret) {
222 - dev_err(dev, "failed to boot TX_PRU%d: %d\n", slice, ret);
223 - goto halt_rtu;
224 - }
225 - 
226 - emac->fw_running = 1;
227 203 return 0;
228 204 
229 - halt_rtu:
230 - rproc_shutdown(prueth->rtu[slice]);
231 - 
232 - halt_pru:
233 - rproc_shutdown(prueth->pru[slice]);
205 + unwind_slices:
206 + while (--slice >= 0) {
207 + prueth_shutdown(prueth->txpru[slice]);
208 + prueth_shutdown(prueth->rtu[slice]);
209 + prueth_shutdown(prueth->pru[slice]);
210 + }
234 211 
235 212 return ret;
213 + }
214 + 
215 + static void prueth_emac_stop(struct prueth *prueth)
216 + {
217 + int slice;
218 + 
219 + for (slice = 0; slice < PRUETH_NUM_MACS; slice++) {
220 + prueth_shutdown(prueth->txpru[slice]);
221 + prueth_shutdown(prueth->rtu[slice]);
222 + prueth_shutdown(prueth->pru[slice]);
223 + }
224 + }
225 + 
226 + static int prueth_emac_common_start(struct prueth *prueth)
227 + {
228 + struct prueth_emac *emac;
229 + int ret = 0;
230 + int slice;
231 + 
232 + if (!prueth->emac[ICSS_SLICE0] && !prueth->emac[ICSS_SLICE1])
233 + return -EINVAL;
234 + 
235 + /* clear SMEM and MSMC settings for all slices */
236 + memset_io(prueth->msmcram.va, 0, prueth->msmcram.size);
237 + memset_io(prueth->shram.va, 0, ICSSG_CONFIG_OFFSET_SLICE1 * PRUETH_NUM_MACS);
238 + 
239 + icssg_class_default(prueth->miig_rt, ICSS_SLICE0, 0, false);
240 + icssg_class_default(prueth->miig_rt, ICSS_SLICE1, 0, false);
241 + 
242 + if (prueth->is_switch_mode || prueth->is_hsr_offload_mode)
243 + icssg_init_fw_offload_mode(prueth);
244 + else
245 + icssg_init_emac_mode(prueth);
246 + 
247 + for (slice = 0; slice < PRUETH_NUM_MACS; slice++) {
248 + emac = prueth->emac[slice];
249 + if (!emac)
250 + continue;
251 + ret = icssg_config(prueth, emac, slice);
252 + if (ret)
253 + goto disable_class;
254 + }
255 + 
256 + ret = prueth_emac_start(prueth);
257 + if (ret)
258 + goto disable_class;
259 + 
260 + emac = prueth->emac[ICSS_SLICE0] ? prueth->emac[ICSS_SLICE0] :
261 + prueth->emac[ICSS_SLICE1];
262 + ret = icss_iep_init(emac->iep, &prueth_iep_clockops,
263 + emac, IEP_DEFAULT_CYCLE_TIME_NS);
264 + if (ret) {
265 + dev_err(prueth->dev, "Failed to initialize IEP module\n");
266 + goto stop_pruss;
267 + }
268 + 
269 + return 0;
270 + 
271 + stop_pruss:
272 + prueth_emac_stop(prueth);
273 + 
274 + disable_class:
275 + icssg_class_disable(prueth->miig_rt, ICSS_SLICE0);
276 + icssg_class_disable(prueth->miig_rt, ICSS_SLICE1);
277 + 
278 + return ret;
279 + }
280 + 
281 + static int prueth_emac_common_stop(struct prueth *prueth)
282 + {
283 + struct prueth_emac *emac;
284 + 
285 + if (!prueth->emac[ICSS_SLICE0] && !prueth->emac[ICSS_SLICE1])
286 + return -EINVAL;
287 + 
288 + icssg_class_disable(prueth->miig_rt, ICSS_SLICE0);
289 + icssg_class_disable(prueth->miig_rt, ICSS_SLICE1);
290 + 
291 + prueth_emac_stop(prueth);
292 + 
293 + emac = prueth->emac[ICSS_SLICE0] ? prueth->emac[ICSS_SLICE0] :
294 + prueth->emac[ICSS_SLICE1];
295 + icss_iep_exit(emac->iep);
296 + 
297 + return 0;
236 298 }
237 299 
238 300 /* called back by PHY layer if there is change in link state of hw port*/
···
465 373 u64 cyclecount;
466 374 u32 cycletime;
467 375 int timeout;
468 - 
469 - if (!emac->fw_running)
470 - return;
471 376 
472 377 sc_descp = emac->prueth->shram.va + TIMESYNC_FW_WC_SETCLOCK_DESC_OFFSET;
473 378 
···
632 543 {
633 544 struct prueth_emac *emac = netdev_priv(ndev);
634 545 int ret, i, num_data_chn = emac->tx_ch_num;
546 + struct icssg_flow_cfg __iomem *flow_cfg;
635 547 struct prueth *prueth = emac->prueth;
636 548 int slice = prueth_emac_slice(emac);
637 549 struct device *dev = prueth->dev;
638 550 int max_rx_flows;
639 551 int rx_flow;
640 552 
641 - /* clear SMEM and MSMC settings for all slices */
642 - if (!prueth->emacs_initialized) {
643 - memset_io(prueth->msmcram.va, 0, prueth->msmcram.size);
644 - memset_io(prueth->shram.va, 0, ICSSG_CONFIG_OFFSET_SLICE1 * PRUETH_NUM_MACS);
645 - }
646 - 
647 553 /* set h/w MAC as user might have re-configured */
648 554 ether_addr_copy(emac->mac_addr, ndev->dev_addr);
649 555 
650 556 icssg_class_set_mac_addr(prueth->miig_rt, slice, emac->mac_addr);
651 - icssg_class_default(prueth->miig_rt, slice, 0, false);
652 557 icssg_ft1_set_mac_addr(prueth->miig_rt, slice, emac->mac_addr);
653 558 
654 559 /* Notify the stack of the actual queue counts. */
···
680 597 goto cleanup_napi;
681 598 }
682 599 
683 - /* reset and start PRU firmware */
684 - ret = prueth_emac_start(prueth, emac);
685 - if (ret)
686 - goto free_rx_irq;
600 + if (!prueth->emacs_initialized) {
601 + ret = prueth_emac_common_start(prueth);
602 + if (ret)
603 + goto free_rx_irq;
604 + }
605 + 
606 + flow_cfg = emac->dram.va + ICSSG_CONFIG_OFFSET + PSI_L_REGULAR_FLOW_ID_BASE_OFFSET;
607 + writew(emac->rx_flow_id_base, &flow_cfg->rx_base_flow);
608 + ret = emac_fdb_flow_id_updated(emac);
609 + 
610 + if (ret) {
611 + netdev_err(ndev, "Failed to update Rx Flow ID %d", ret);
612 + goto stop;
613 + }
687 614 
688 615 icssg_mii_update_mtu(prueth->mii_rt, slice, ndev->max_mtu);
689 - 
690 - if (!prueth->emacs_initialized) {
691 - ret = icss_iep_init(emac->iep, &prueth_iep_clockops,
692 - emac, IEP_DEFAULT_CYCLE_TIME_NS);
693 - }
694 616 
695 617 ret = request_threaded_irq(emac->tx_ts_irq, NULL, prueth_tx_ts_irq,
696 618 IRQF_ONESHOT, dev_name(dev), emac);
···
741 653 free_tx_ts_irq:
742 654 free_irq(emac->tx_ts_irq, emac);
743 655 stop:
744 - prueth_emac_stop(emac);
656 + if (!prueth->emacs_initialized)
657 + prueth_emac_common_stop(prueth);
745 658 free_rx_irq:
746 659 free_irq(emac->rx_chns.irq[rx_flow], emac);
747 660 cleanup_napi:
···
777 688 /* block packets from wire */
778 689 if (ndev->phydev)
779 690 phy_stop(ndev->phydev);
780 - 
781 - icssg_class_disable(prueth->miig_rt, prueth_emac_slice(emac));
782 691 
783 692 if (emac->prueth->is_hsr_offload_mode)
784 693 __dev_mc_unsync(ndev, icssg_prueth_hsr_del_mcast);
···
815 728 /* Destroying the queued work in ndo_stop() */
816 729 cancel_delayed_work_sync(&emac->stats_work);
817 730 
818 - if (prueth->emacs_initialized == 1)
819 - icss_iep_exit(emac->iep);
820 - 
821 731 /* stop PRUs */
822 - prueth_emac_stop(emac);
732 + if (prueth->emacs_initialized == 1)
733 + prueth_emac_common_stop(prueth);
823 734 
824 735 free_irq(emac->tx_ts_irq, emac);
···
1138 1053 }
1139 1054 }
1140 1055 
1141 - static void prueth_emac_restart(struct prueth *prueth)
1056 + static int prueth_emac_restart(struct prueth *prueth)
1142 1057 {
1143 1058 struct prueth_emac *emac0 = prueth->emac[PRUETH_MAC0];
1144 1059 struct prueth_emac *emac1 = prueth->emac[PRUETH_MAC1];
1060 + int ret;
1145 1061 
1146 1062 /* Detach the net_device for both PRUeth ports*/
1147 1063 if (netif_running(emac0->ndev))
···
1151 1065 netif_device_detach(emac1->ndev);
1152 1066 
1153 1067 /* Disable both PRUeth ports */
1154 - icssg_set_port_state(emac0, ICSSG_EMAC_PORT_DISABLE);
1155 - icssg_set_port_state(emac1, ICSSG_EMAC_PORT_DISABLE);
1068 + ret = icssg_set_port_state(emac0, ICSSG_EMAC_PORT_DISABLE);
1069 + ret |= icssg_set_port_state(emac1, ICSSG_EMAC_PORT_DISABLE);
1070 + if (ret)
1071 + return ret;
1156 1072 
1157 1073 /* Stop both pru cores for both PRUeth ports*/
1158 - prueth_emac_stop(emac0);
1159 - prueth->emacs_initialized--;
1160 - prueth_emac_stop(emac1);
1161 - prueth->emacs_initialized--;
1074 + ret = prueth_emac_common_stop(prueth);
1075 + if (ret) {
1076 + dev_err(prueth->dev, "Failed to stop the firmwares");
1077 + return ret;
1078 + }
1162 1079 
1163 1080 /* Start both pru cores for both PRUeth ports */
1164 - prueth_emac_start(prueth, emac0);
1165 - prueth->emacs_initialized++;
1166 - prueth_emac_start(prueth, emac1);
1167 - prueth->emacs_initialized++;
1081 + ret = prueth_emac_common_start(prueth);
1082 + if (ret) {
1083 + dev_err(prueth->dev, "Failed to start the firmwares");
1084 + return ret;
1085 + }
1168 1086 
1169 1087 /* Enable forwarding for both PRUeth ports */
1170 - icssg_set_port_state(emac0, ICSSG_EMAC_PORT_FORWARD);
1171 - icssg_set_port_state(emac1, ICSSG_EMAC_PORT_FORWARD);
1088 + ret = icssg_set_port_state(emac0, ICSSG_EMAC_PORT_FORWARD);
1089 + ret |= icssg_set_port_state(emac1, ICSSG_EMAC_PORT_FORWARD);
1172 1090 
1173 1091 /* Attache net_device for both PRUeth ports */
1174 1092 netif_device_attach(emac0->ndev);
1175 1093 netif_device_attach(emac1->ndev);
1094 + 
1095 + return ret;
1176 1096 }
1177 1097 
1178 1098 static void icssg_change_mode(struct prueth *prueth)
1179 1099 {
1180 1100 struct prueth_emac *emac;
1181 - int mac;
1101 + int mac, ret;
1182 1102 
1183 - prueth_emac_restart(prueth);
1103 + ret = prueth_emac_restart(prueth);
1104 + if (ret) {
1105 + dev_err(prueth->dev, "Failed to restart the firmwares, aborting the process");
1106 + return;
1107 + }
1184 1108 
1185 1109 for (mac = PRUETH_MAC0; mac < PRUETH_NUM_MACS; mac++) {
1186 1110 emac = prueth->emac[mac];
···
1269 1173 {
1270 1174 struct prueth_emac *emac = netdev_priv(ndev);
1271 1175 struct prueth *prueth = emac->prueth;
1176 + int ret;
1272 1177 
1273 1178 prueth->br_members &= ~BIT(emac->port_id);
1274 1179 
1275 1180 if (prueth->is_switch_mode) {
1276 1181 prueth->is_switch_mode = false;
1277 1182 emac->port_vlan = 0;
1278 - prueth_emac_restart(prueth);
1183 + ret = prueth_emac_restart(prueth);
1184 + if (ret) {
1185 + dev_err(prueth->dev, "Failed to restart the firmwares, aborting the process");
1186 + return;
1187 + }
1279 1188 }
1280 1189 
1281 1190 prueth_offload_fwd_mark_update(prueth);
···
1329 1228 struct prueth *prueth = emac->prueth;
1330 1229 struct prueth_emac *emac0;
1331 1230 struct prueth_emac *emac1;
1231 + int ret;
1332 1232 
1333 1233 emac0 = prueth->emac[PRUETH_MAC0];
1334 1234 emac1 = prueth->emac[PRUETH_MAC1];
···
1340 1238 emac0->port_vlan = 0;
1341 1239 emac1->port_vlan = 0;
1342 1240 prueth->hsr_dev = NULL;
1343 - prueth_emac_restart(prueth);
1241 + ret = prueth_emac_restart(prueth);
1242 + if (ret) {
1243 + dev_err(prueth->dev, "Failed to restart the firmwares, aborting the process");
1244 + return;
1245 + }
1344 1246 netdev_dbg(ndev, "Disabling HSR Offload mode\n");
1345 1247 }
1346 1248 }
···
1519 1413 prueth->pa_stats = NULL;
1520 1414 }
1521 1415 
1522 - if (eth0_node) {
1416 + if (eth0_node || eth1_node) {
1523 1417 ret = prueth_get_cores(prueth, ICSS_SLICE0, false);
1524 1418 if (ret)
1525 1419 goto put_cores;
1526 - }
1527 - 
1528 - if (eth1_node) {
1529 1420 ret = prueth_get_cores(prueth, ICSS_SLICE1, false);
1530 1421 if (ret)
1531 1422 goto put_cores;
···
1721 1618 pruss_put(prueth->pruss);
1722 1619 
1723 1620 put_cores:
1724 - if (eth1_node) {
1725 - prueth_put_cores(prueth, ICSS_SLICE1);
1726 - of_node_put(eth1_node);
1727 - }
1728 - 
1729 - if (eth0_node) {
1621 + if (eth0_node || eth1_node) {
1730 1622 prueth_put_cores(prueth, ICSS_SLICE0);
1731 1623 of_node_put(eth0_node);
1624 + 
1625 + prueth_put_cores(prueth, ICSS_SLICE1);
1626 + of_node_put(eth1_node);
1732 1627 }
1733 1628 
1734 1629 return ret;
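`prueth_emac_start()` above boots PRU, RTU and TX_PRU for each slice; on failure it shuts down the partially started slice before unwinding the fully started ones with `while (--slice >= 0)`. That unwind idiom is easy to get off by one, so here is a small sketch with hypothetical start/stop stubs (one unit per slice, unlike the driver's three):

```c
#include <assert.h>
#include <stdbool.h>

#define NUM_SLICES 2

static bool started[NUM_SLICES];
static int fail_at = -1; /* slice whose start should fail; -1 = none */

static int start_slice(int slice)
{
	if (slice == fail_at)
		return -1;
	started[slice] = true;
	return 0;
}

static void stop_slice(int slice) { started[slice] = false; }

/* Start all slices; on error, unwind only the slices already started. */
static int start_all(void)
{
	int slice, ret = 0;

	for (slice = 0; slice < NUM_SLICES; slice++) {
		ret = start_slice(slice);
		if (ret)
			goto unwind_slices;
	}
	return 0;

unwind_slices:
	/* 'slice' is the index that failed; predecrement skips it and
	 * stops every earlier, fully started slice exactly once.
	 */
	while (--slice >= 0)
		stop_slice(slice);
	return ret;
}
```

If slice 1 fails, only slice 0 is stopped (slice 1 never started); on success nothing is unwound.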
+3 -2
drivers/net/ethernet/ti/icssg/icssg_prueth.h
··· 140 140 /* data for each emac port */ 141 141 struct prueth_emac { 142 142 bool is_sr1; 143 - bool fw_running; 144 143 struct prueth *prueth; 145 144 struct net_device *ndev; 146 145 u8 mac_addr[6]; ··· 360 361 enum icssg_port_state_cmd state); 361 362 void icssg_config_set_speed(struct prueth_emac *emac); 362 363 void icssg_config_half_duplex(struct prueth_emac *emac); 364 + void icssg_init_emac_mode(struct prueth *prueth); 365 + void icssg_init_fw_offload_mode(struct prueth *prueth); 363 366 364 367 /* Buffer queue helpers */ 365 368 int icssg_queue_pop(struct prueth *prueth, u8 queue); ··· 378 377 u8 untag_mask, bool add); 379 378 u16 icssg_get_pvid(struct prueth_emac *emac); 380 379 void icssg_set_pvid(struct prueth *prueth, u8 vid, u8 port); 380 + int emac_fdb_flow_id_updated(struct prueth_emac *emac); 381 381 #define prueth_napi_to_tx_chn(pnapi) \ 382 382 container_of(pnapi, struct prueth_tx_chn, napi_tx) 383 383 ··· 409 407 struct sk_buff *skb, u32 *psdata); 410 408 enum netdev_tx icssg_ndo_start_xmit(struct sk_buff *skb, struct net_device *ndev); 411 409 irqreturn_t prueth_rx_irq(int irq, void *dev_id); 412 - void prueth_emac_stop(struct prueth_emac *emac); 413 410 void prueth_cleanup_tx_ts(struct prueth_emac *emac); 414 411 int icssg_napi_rx_poll(struct napi_struct *napi_rx, int budget); 415 412 int prueth_prepare_rx_chan(struct prueth_emac *emac,
+23 -1
drivers/net/ethernet/ti/icssg/icssg_prueth_sr1.c
··· 440 440 goto halt_pru; 441 441 } 442 442 443 - emac->fw_running = 1; 444 443 return 0; 445 444 446 445 halt_pru: 447 446 rproc_shutdown(prueth->pru[slice]); 448 447 449 448 return ret; 449 + } 450 + 451 + static void prueth_emac_stop(struct prueth_emac *emac) 452 + { 453 + struct prueth *prueth = emac->prueth; 454 + int slice; 455 + 456 + switch (emac->port_id) { 457 + case PRUETH_PORT_MII0: 458 + slice = ICSS_SLICE0; 459 + break; 460 + case PRUETH_PORT_MII1: 461 + slice = ICSS_SLICE1; 462 + break; 463 + default: 464 + netdev_err(emac->ndev, "invalid port\n"); 465 + return; 466 + } 467 + 468 + if (!emac->is_sr1) 469 + rproc_shutdown(prueth->txpru[slice]); 470 + rproc_shutdown(prueth->rtu[slice]); 471 + rproc_shutdown(prueth->pru[slice]); 450 472 } 451 473 452 474 /**
+101 -13
drivers/net/phy/micrel.c
···
432 432 struct kszphy_priv {
433 433 struct kszphy_ptp_priv ptp_priv;
434 434 const struct kszphy_type *type;
435 + struct clk *clk;
435 436 int led_mode;
436 437 u16 vct_ctrl1000;
437 438 bool rmii_ref_clk_sel;
438 439 bool rmii_ref_clk_sel_val;
440 + bool clk_enable;
439 441 u64 stats[ARRAY_SIZE(kszphy_hw_stats)];
440 442 };
441 443 
···
2052 2050 data[i] = kszphy_get_stat(phydev, i);
2053 2051 }
2054 2052 
2053 + static void kszphy_enable_clk(struct phy_device *phydev)
2054 + {
2055 + struct kszphy_priv *priv = phydev->priv;
2056 + 
2057 + if (!priv->clk_enable && priv->clk) {
2058 + clk_prepare_enable(priv->clk);
2059 + priv->clk_enable = true;
2060 + }
2061 + }
2062 + 
2063 + static void kszphy_disable_clk(struct phy_device *phydev)
2064 + {
2065 + struct kszphy_priv *priv = phydev->priv;
2066 + 
2067 + if (priv->clk_enable && priv->clk) {
2068 + clk_disable_unprepare(priv->clk);
2069 + priv->clk_enable = false;
2070 + }
2071 + }
2072 + 
2073 + static int kszphy_generic_resume(struct phy_device *phydev)
2074 + {
2075 + kszphy_enable_clk(phydev);
2076 + 
2077 + return genphy_resume(phydev);
2078 + }
2079 + 
2080 + static int kszphy_generic_suspend(struct phy_device *phydev)
2081 + {
2082 + int ret;
2083 + 
2084 + ret = genphy_suspend(phydev);
2085 + if (ret)
2086 + return ret;
2087 + 
2088 + kszphy_disable_clk(phydev);
2089 + 
2090 + return 0;
2091 + }
2092 + 
2055 2093 static int kszphy_suspend(struct phy_device *phydev)
2056 2094 {
2057 2095 /* Disable PHY Interrupts */
···
2101 2059 phydev->drv->config_intr(phydev);
2102 2060 }
2103 2061 
2104 - return genphy_suspend(phydev);
2062 + return kszphy_generic_suspend(phydev);
2105 2063 }
2106 2064 
2107 2065 static void kszphy_parse_led_mode(struct phy_device *phydev)
···
2132 2090 {
2133 2091 int ret;
2134 2092 
2135 - genphy_resume(phydev);
2093 + ret = kszphy_generic_resume(phydev);
2094 + if (ret)
2095 + return ret;
2136 2096 
2137 2097 /* After switching from power-down to normal mode, an internal global
2138 2098 * reset is automatically generated. Wait a minimum of 1 ms before
···
2152 2108 if (phydev->drv->config_intr)
2153 2109 phydev->drv->config_intr(phydev);
2154 2110 }
2111 + 
2112 + return 0;
2113 + }
2114 + 
2115 + /* Because of errata DS80000700A, receiver error following software
2116 + * power down. Suspend and resume callbacks only disable and enable
2117 + * external rmii reference clock.
2118 + */
2119 + static int ksz8041_resume(struct phy_device *phydev)
2120 + {
2121 + kszphy_enable_clk(phydev);
2122 + 
2123 + return 0;
2124 + }
2125 + 
2126 + static int ksz8041_suspend(struct phy_device *phydev)
2127 + {
2128 + kszphy_disable_clk(phydev);
2155 2129 
2156 2130 return 0;
2157 2131 }
···
2221 2159 if (!(ret & BMCR_PDOWN))
2222 2160 return 0;
2223 2161 
2224 - genphy_resume(phydev);
2162 + ret = kszphy_generic_resume(phydev);
2163 + if (ret)
2164 + return ret;
2165 + 
2225 2166 usleep_range(1000, 2000);
2226 2167 
2227 2168 /* Re-program the value after chip is reset. */
···
2240 2175 }
2241 2176 
2242 2177 return 0;
2178 + }
2179 + 
2180 + static int ksz8061_suspend(struct phy_device *phydev)
2181 + {
2182 + return kszphy_suspend(phydev);
2243 2183 }
2244 2184 
2245 2185 static int kszphy_probe(struct phy_device *phydev)
···
2287 2217 } else if (!clk) {
2288 2218 /* unnamed clock from the generic ethernet-phy binding */
2289 2219 clk = devm_clk_get_optional_enabled(&phydev->mdio.dev, NULL);
2290 - if (IS_ERR(clk))
2291 - return PTR_ERR(clk);
2292 2220 }
2221 + 
2222 + if (IS_ERR(clk))
2223 + return PTR_ERR(clk);
2224 + 
2225 + clk_disable_unprepare(clk);
2226 + priv->clk = clk;
2293 2227 
2294 2228 if (ksz8041_fiber_mode(phydev))
2295 2229 phydev->port = PORT_FIBRE;
···
5364 5290 return 0;
5365 5291 }
5366 5292 
5293 + static int lan8804_resume(struct phy_device *phydev)
5294 + {
5295 + return kszphy_resume(phydev);
5296 + }
5297 + 
5298 + static int lan8804_suspend(struct phy_device *phydev)
5299 + {
5300 + return kszphy_generic_suspend(phydev);
5301 + }
5302 + 
5303 + static int lan8841_resume(struct phy_device *phydev)
5304 + {
5305 + return kszphy_generic_resume(phydev);
5306 + }
5307 + 
5367 5308 static int lan8841_suspend(struct phy_device *phydev)
5368 5309 {
5369 5310 struct kszphy_priv *priv = phydev->priv;
···
5387 5298 if (ptp_priv->ptp_clock)
5388 5299 ptp_cancel_worker_sync(ptp_priv->ptp_clock);
5389 5300 
5390 - return genphy_suspend(phydev);
5301 + return kszphy_generic_suspend(phydev);
5391 5302 }
5392 5303 
5393 5304 static struct phy_driver ksphy_driver[] = {
···
5447 5358 .get_sset_count = kszphy_get_sset_count,
5448 5359 .get_strings = kszphy_get_strings,
5449 5360 .get_stats = kszphy_get_stats,
5450 - /* No suspend/resume callbacks because of errata DS80000700A,
5451 - * receiver error following software power down.
5452 - */
5361 + .suspend = ksz8041_suspend,
5362 + .resume = ksz8041_resume,
5453 5363 }, {
5454 5364 .phy_id = PHY_ID_KSZ8041RNLI,
5455 5365 .phy_id_mask = MICREL_PHY_ID_MASK,
···
5524 5436 .soft_reset = genphy_soft_reset,
5525 5437 .config_intr = kszphy_config_intr,
5526 5438 .handle_interrupt = kszphy_handle_interrupt,
5527 - .suspend = kszphy_suspend,
5439 + .suspend = ksz8061_suspend,
5528 5440 .resume = ksz8061_resume,
5529 5441 }, {
5530 5442 .phy_id = PHY_ID_KSZ9021,
···
5595 5507 .get_sset_count = kszphy_get_sset_count,
5596 5508 .get_strings = kszphy_get_strings,
5597 5509 .get_stats = kszphy_get_stats,
5598 - .suspend = genphy_suspend,
5599 - .resume = kszphy_resume,
5510 + .suspend = lan8804_suspend,
5511 + .resume = lan8804_resume,
5600 5512 .config_intr = lan8804_config_intr,
5601 5513 .handle_interrupt = lan8804_handle_interrupt,
5602 5514 }, {
···
5614 5526 .get_strings = kszphy_get_strings,
5615 5527 .get_stats = kszphy_get_stats,
5616 5528 .suspend = lan8841_suspend,
5617 - .resume = genphy_resume,
5529 + .resume = lan8841_resume,
5618 5530 .cable_test_start = lan8814_cable_test_start,
5619 5531 .cable_test_get_status = ksz886x_cable_test_get_status,
5620 5532 }, {
+4 -12
drivers/net/pse-pd/tps23881.c
···
64 64 if (id >= TPS23881_MAX_CHANS)
65 65 return -ERANGE;
66 66
67 - ret = i2c_smbus_read_word_data(client, TPS23881_REG_PW_STATUS);
68 - if (ret < 0)
69 - return ret;
70 -
71 67 chan = priv->port[id].chan[0];
72 68 if (chan < 4)
73 - val = (u16)(ret | BIT(chan));
69 + val = BIT(chan);
74 70 else
75 - val = (u16)(ret | BIT(chan + 4));
71 + val = BIT(chan + 4);
76 72
77 73 if (priv->port[id].is_4p) {
78 74 chan = priv->port[id].chan[1];
···
96 100 if (id >= TPS23881_MAX_CHANS)
97 101 return -ERANGE;
98 102
99 - ret = i2c_smbus_read_word_data(client, TPS23881_REG_PW_STATUS);
100 - if (ret < 0)
101 - return ret;
102 -
103 103 chan = priv->port[id].chan[0];
104 104 if (chan < 4)
105 - val = (u16)(ret | BIT(chan + 4));
105 + val = BIT(chan + 4);
106 106 else
107 - val = (u16)(ret | BIT(chan + 8));
107 + val = BIT(chan + 8);
108 108
109 109 if (priv->port[id].is_4p) {
110 110 chan = priv->port[id].chan[1];
+1
drivers/net/wireless/intel/iwlwifi/cfg/bz.c
···
161 161
162 162 const char iwl_bz_name[] = "Intel(R) TBD Bz device";
163 163 const char iwl_fm_name[] = "Intel(R) Wi-Fi 7 BE201 320MHz";
164 + const char iwl_wh_name[] = "Intel(R) Wi-Fi 7 BE211 320MHz";
164 165 const char iwl_gl_name[] = "Intel(R) Wi-Fi 7 BE200 320MHz";
165 166 const char iwl_mtp_name[] = "Intel(R) Wi-Fi 7 BE202 160MHz";
166 167
+1
drivers/net/wireless/intel/iwlwifi/iwl-config.h
···
545 545 extern const char iwl_ax411_name[];
546 546 extern const char iwl_bz_name[];
547 547 extern const char iwl_fm_name[];
548 + extern const char iwl_wh_name[];
548 549 extern const char iwl_gl_name[];
549 550 extern const char iwl_mtp_name[];
550 551 extern const char iwl_sc_name[];
+11 -3
drivers/net/wireless/intel/iwlwifi/mvm/d3.c
···
2954 2954 int idx)
2955 2955 {
2956 2956 int i;
2957 + int n_channels = 0;
2957 2958
2958 2959 if (fw_has_api(&mvm->fw->ucode_capa,
2959 2960 IWL_UCODE_TLV_API_SCAN_OFFLOAD_CHANS)) {
···
2963 2962
2964 2963 for (i = 0; i < SCAN_OFFLOAD_MATCHING_CHANNELS_LEN * 8; i++)
2965 2964 if (matches[idx].matching_channels[i / 8] & (BIT(i % 8)))
2966 - match->channels[match->n_channels++] =
2965 + match->channels[n_channels++] =
2967 2966 mvm->nd_channels[i]->center_freq;
2968 2967 } else {
2969 2968 struct iwl_scan_offload_profile_match_v1 *matches =
···
2971 2970
2972 2971 for (i = 0; i < SCAN_OFFLOAD_MATCHING_CHANNELS_LEN_V1 * 8; i++)
2973 2972 if (matches[idx].matching_channels[i / 8] & (BIT(i % 8)))
2974 - match->channels[match->n_channels++] =
2973 + match->channels[n_channels++] =
2975 2974 mvm->nd_channels[i]->center_freq;
2976 2975 }
2976 + /* We may have ended up with fewer channels than we allocated. */
2977 + match->n_channels = n_channels;
2977 2978 }
2978 2979
2979 2980 /**
···
3056 3053 GFP_KERNEL);
3057 3054 if (!net_detect || !n_matches)
3058 3055 goto out_report_nd;
3056 + net_detect->n_matches = n_matches;
3057 + n_matches = 0;
3059 3058
3060 3059 for_each_set_bit(i, &matched_profiles, mvm->n_nd_match_sets) {
3061 3060 struct cfg80211_wowlan_nd_match *match;
···
3071 3066 GFP_KERNEL);
3072 3067 if (!match)
3073 3068 goto out_report_nd;
3069 + match->n_channels = n_channels;
3074 3070
3075 - net_detect->matches[net_detect->n_matches++] = match;
3071 + net_detect->matches[n_matches++] = match;
3076 3072
3077 3073 /* We inverted the order of the SSIDs in the scan
3078 3074 * request, so invert the index here.
···
3088 3082
3089 3083 iwl_mvm_query_set_freqs(mvm, d3_data->nd_results, match, i);
3090 3084 }
3085 + /* We may have fewer matches than we allocated. */
3086 + net_detect->n_matches = n_matches;
3091 3087
3092 3088 out_report_nd:
3093 3089 wakeup.net_detect = net_detect;
+38 -3
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
···
1106 1106 iwlax210_2ax_cfg_so_jf_b0, iwl9462_name),
1107 1107
1108 1108 /* Bz */
1109 - /* FIXME: need to change the naming according to the actual CRF */
1110 1109 _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
1111 1110 IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
1111 + IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
1112 1112 IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1113 + iwl_cfg_bz, iwl_ax201_name),
1114 +
1115 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
1116 + IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
1117 + IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
1118 + IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1119 + iwl_cfg_bz, iwl_ax211_name),
1120 +
1121 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
1122 + IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
1123 + IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY,
1124 + IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1125 + iwl_cfg_bz, iwl_fm_name),
1126 +
1127 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
1128 + IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
1129 + IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY,
1130 + IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1131 + iwl_cfg_bz, iwl_wh_name),
1132 +
1133 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
1134 + IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
1135 + IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
1136 + IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1137 + iwl_cfg_bz, iwl_ax201_name),
1138 +
1139 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
1140 + IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
1141 + IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
1142 + IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1143 + iwl_cfg_bz, iwl_ax211_name),
1144 +
1145 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
1146 + IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
1147 + IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY,
1113 1148 IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1114 1149 iwl_cfg_bz, iwl_fm_name),
1115 1150
1116 1151 _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
1117 1152 IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
1153 + IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY,
1118 1154 IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1119 - IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
1120 - iwl_cfg_bz, iwl_fm_name),
1155 + iwl_cfg_bz, iwl_wh_name),
1121 1156
1122 1157 /* Ga (Gl) */
1123 1158 _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+1 -1
drivers/net/wireless/st/cw1200/cw1200_spi.c
···
442 442 cw1200_core_release(self->core);
443 443 self->core = NULL;
444 444 }
445 + cw1200_spi_off(self, dev_get_platdata(&func->dev));
445 446 }
446 - cw1200_spi_off(self, dev_get_platdata(&func->dev));
447 447 }
448 448
449 449 static int __maybe_unused cw1200_spi_suspend(struct device *dev)
+1 -1
drivers/net/wwan/iosm/iosm_ipc_mmio.c
···
104 104 break;
105 105
106 106 msleep(20);
107 - } while (retries-- > 0);
107 + } while (--retries > 0);
108 108
109 109 if (!retries) {
110 110 dev_err(ipc_mmio->dev, "invalid exec stage %X", stage);
+17 -9
drivers/net/wwan/t7xx/t7xx_state_monitor.c
···
104 104 fsm_state_notify(ctl->md, state);
105 105 }
106 106
107 + static void fsm_release_command(struct kref *ref)
108 + {
109 + struct t7xx_fsm_command *cmd = container_of(ref, typeof(*cmd), refcnt);
110 +
111 + kfree(cmd);
112 + }
113 +
107 114 static void fsm_finish_command(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd, int result)
108 115 {
109 116 if (cmd->flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
110 - *cmd->ret = result;
111 - complete_all(cmd->done);
117 + cmd->result = result;
118 + complete_all(&cmd->done);
112 119 }
113 120
114 - kfree(cmd);
121 + kref_put(&cmd->refcnt, fsm_release_command);
115 122 }
116 123
117 124 static void fsm_del_kf_event(struct t7xx_fsm_event *event)
···
482 475
483 476 int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id, unsigned int flag)
484 477 {
485 - DECLARE_COMPLETION_ONSTACK(done);
486 478 struct t7xx_fsm_command *cmd;
487 479 unsigned long flags;
488 480 int ret;
···
493 487 INIT_LIST_HEAD(&cmd->entry);
494 488 cmd->cmd_id = cmd_id;
495 489 cmd->flag = flag;
490 + kref_init(&cmd->refcnt);
496 491 if (flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
497 - cmd->done = &done;
498 - cmd->ret = &ret;
492 + init_completion(&cmd->done);
493 + kref_get(&cmd->refcnt);
499 494 }
500 495
496 + kref_get(&cmd->refcnt);
501 497 spin_lock_irqsave(&ctl->command_lock, flags);
502 498 list_add_tail(&cmd->entry, &ctl->command_queue);
503 499 spin_unlock_irqrestore(&ctl->command_lock, flags);
···
509 501 if (flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
510 502 unsigned long wait_ret;
511 503
512 - wait_ret = wait_for_completion_timeout(&done,
504 + wait_ret = wait_for_completion_timeout(&cmd->done,
513 505 msecs_to_jiffies(FSM_CMD_TIMEOUT_MS));
514 - if (!wait_ret)
515 - return -ETIMEDOUT;
516 506
507 + ret = wait_ret ? cmd->result : -ETIMEDOUT;
508 + kref_put(&cmd->refcnt, fsm_release_command);
517 509 return ret;
518 510 }
519 511
+3 -2
drivers/net/wwan/t7xx/t7xx_state_monitor.h
···
110 110 struct list_head entry;
111 111 enum t7xx_fsm_cmd_state cmd_id;
112 112 unsigned int flag;
113 - struct completion *done;
114 - int *ret;
113 + struct completion done;
114 + int result;
115 + struct kref refcnt;
115 116 };
116 117
117 118 struct t7xx_fsm_notifier {
+1 -1
drivers/nvme/host/core.c
···
2034 2034 * or smaller than a sector size yet, so catch this early and don't
2035 2035 * allow block I/O.
2036 2036 */
2037 - if (head->lba_shift > PAGE_SHIFT || head->lba_shift < SECTOR_SHIFT) {
2037 + if (blk_validate_block_size(bs)) {
2038 2038 bs = (1 << 9);
2039 2039 valid = false;
2040 2040 }
+3 -2
drivers/of/address.c
···
459 459 }
460 460 if (ranges == NULL || rlen == 0) {
461 461 offset = of_read_number(addr, na);
462 - memset(addr, 0, pna * 4);
462 + /* set address to zero, pass flags through */
463 + memset(addr + pbus->flag_cells, 0, (pna - pbus->flag_cells) * 4);
463 464 pr_debug("empty ranges; 1:1 translation\n");
464 465 goto finish;
465 466 }
···
620 619 if (ret < 0)
621 620 return of_get_parent(np);
622 621
623 - return of_node_get(args.np);
622 + return args.np;
624 623 }
625 624 #endif
626 625
+12 -6
drivers/of/base.c
···
88 88 }
89 89
90 90 #define EXCLUDED_DEFAULT_CELLS_PLATFORMS ( \
91 - IS_ENABLED(CONFIG_SPARC) \
91 + IS_ENABLED(CONFIG_SPARC) || \
92 + of_find_compatible_node(NULL, NULL, "coreboot") \
92 93 )
93 94
94 95 int of_bus_n_addr_cells(struct device_node *np)
···
1508 1507 map_len--;
1509 1508
1510 1509 /* Check if not found */
1511 - if (!new)
1510 + if (!new) {
1511 + ret = -EINVAL;
1512 1512 goto put;
1513 + }
1513 1514
1514 1515 if (!of_device_is_available(new))
1515 1516 match = 0;
···
1521 1518 goto put;
1522 1519
1523 1520 /* Check for malformed properties */
1524 - if (WARN_ON(new_size > MAX_PHANDLE_ARGS))
1521 + if (WARN_ON(new_size > MAX_PHANDLE_ARGS) ||
1522 + map_len < new_size) {
1523 + ret = -EINVAL;
1525 1524 goto put;
1526 - if (map_len < new_size)
1527 - goto put;
1525 + }
1528 1526
1529 1527 /* Move forward by new node's #<list>-cells amount */
1530 1528 map += new_size;
1531 1529 map_len -= new_size;
1532 1530 }
1533 - if (!match)
1531 + if (!match) {
1532 + ret = -ENOENT;
1534 1533 goto put;
1534 + }
1535 1535
1536 1536 /* Get the <list>-map-pass-thru property (optional) */
1537 1537 pass = of_get_property(cur, pass_name, NULL);
+8 -1
drivers/of/empty_root.dts
···
2 2 /dts-v1/;
3 3
4 4 / {
5 -
5 + /*
6 + * #address-cells/#size-cells are required properties at root node.
7 + * Use 2 cells for both address cells and size cells in order to fully
8 + * support 64-bit addresses and sizes on systems using this empty root
9 + * node.
10 + */
11 + #address-cells = <0x02>;
12 + #size-cells = <0x02>;
6 13 };
+2
drivers/of/irq.c
···
111 111 else
112 112 np = of_find_node_by_phandle(be32_to_cpup(imap));
113 113 imap++;
114 + len--;
114 115
115 116 /* Check if not found */
116 117 if (!np) {
···
355 354 return of_irq_parse_oldworld(device, index, out_irq);
356 355
357 356 /* Get the reg property (if any) */
358 + addr_len = 0;
358 359 addr = of_get_property(device, "reg", &addr_len);
359 360
360 361 /* Prevent out-of-bounds read in case of longer interrupt parent address size */
-2
drivers/of/property.c
···
1286 1286 DEFINE_SIMPLE_PROP(mboxes, "mboxes", "#mbox-cells")
1287 1287 DEFINE_SIMPLE_PROP(io_channels, "io-channels", "#io-channel-cells")
1288 1288 DEFINE_SIMPLE_PROP(io_backends, "io-backends", "#io-backend-cells")
1289 - DEFINE_SIMPLE_PROP(interrupt_parent, "interrupt-parent", NULL)
1290 1289 DEFINE_SIMPLE_PROP(dmas, "dmas", "#dma-cells")
1291 1290 DEFINE_SIMPLE_PROP(power_domains, "power-domains", "#power-domain-cells")
1292 1291 DEFINE_SIMPLE_PROP(hwlocks, "hwlocks", "#hwlock-cells")
···
1431 1432 { .parse_prop = parse_mboxes, },
1432 1433 { .parse_prop = parse_io_channels, },
1433 1434 { .parse_prop = parse_io_backends, },
1434 - { .parse_prop = parse_interrupt_parent, },
1435 1435 { .parse_prop = parse_dmas, .optional = true, },
1436 1436 { .parse_prop = parse_power_domains, },
1437 1437 { .parse_prop = parse_hwlocks, },
+2
drivers/of/unittest-data/tests-address.dtsi
···
114 114 device_type = "pci";
115 115 ranges = <0x82000000 0 0xe8000000 0 0xe8000000 0 0x7f00000>,
116 116 <0x81000000 0 0x00000000 0 0xefff0000 0 0x0010000>;
117 + dma-ranges = <0x43000000 0x10 0x00 0x00 0x00 0x00 0x10000000>;
117 118 reg = <0x00000000 0xd1070000 0x20000>;
118 119
119 120 pci@0,0 {
···
143 142 #size-cells = <0x01>;
144 143 ranges = <0xa0000000 0 0 0 0x2000000>,
145 144 <0xb0000000 1 0 0 0x1000000>;
146 + dma-ranges = <0xc0000000 0x43000000 0x10 0x00 0x10000000>;
146 147
147 148 dev@e0000000 {
148 148 reg = <0xa0001000 0x1000>,
+39
drivers/of/unittest.c
···
1213 1213 of_node_put(np);
1214 1214 }
1215 1215
1216 + static void __init of_unittest_pci_empty_dma_ranges(void)
1217 + {
1218 + struct device_node *np;
1219 + struct of_pci_range range;
1220 + struct of_pci_range_parser parser;
1221 +
1222 + if (!IS_ENABLED(CONFIG_PCI))
1223 + return;
1224 +
1225 + np = of_find_node_by_path("/testcase-data/address-tests2/pcie@d1070000/pci@0,0/dev@0,0/local-bus@0");
1226 + if (!np) {
1227 + pr_err("missing testcase data\n");
1228 + return;
1229 + }
1230 +
1231 + if (of_pci_dma_range_parser_init(&parser, np)) {
1232 + pr_err("missing dma-ranges property\n");
1233 + return;
1234 + }
1235 +
1236 + /*
1237 + * Get the dma-ranges from the device tree
1238 + */
1239 + for_each_of_pci_range(&parser, &range) {
1240 + unittest(range.size == 0x10000000,
1241 + "for_each_of_pci_range wrong size on node %pOF size=%llx\n",
1242 + np, range.size);
1243 + unittest(range.cpu_addr == 0x00000000,
1244 + "for_each_of_pci_range wrong CPU addr (%llx) on node %pOF",
1245 + range.cpu_addr, np);
1246 + unittest(range.pci_addr == 0xc0000000,
1247 + "for_each_of_pci_range wrong DMA addr (%llx) on node %pOF",
1248 + range.pci_addr, np);
1249 + }
1250 +
1251 + of_node_put(np);
1252 + }
1253 +
1216 1254 static void __init of_unittest_bus_ranges(void)
1217 1255 {
1218 1256 struct device_node *np;
···
4310 4272 of_unittest_dma_get_max_cpu_address();
4311 4273 of_unittest_parse_dma_ranges();
4312 4274 of_unittest_pci_dma_ranges();
4275 + of_unittest_pci_empty_dma_ranges();
4313 4276 of_unittest_bus_ranges();
4314 4277 of_unittest_bus_3cell_ranges();
4315 4278 of_unittest_reg();
+5 -2
drivers/pci/msi/irqdomain.c
···
350 350
351 351 domain = dev_get_msi_domain(&pdev->dev);
352 352
353 - if (!domain || !irq_domain_is_hierarchy(domain))
354 - return mode == ALLOW_LEGACY;
353 + if (!domain || !irq_domain_is_hierarchy(domain)) {
354 + if (IS_ENABLED(CONFIG_PCI_MSI_ARCH_FALLBACKS))
355 + return mode == ALLOW_LEGACY;
356 + return false;
357 + }
355 358
356 359 if (!irq_domain_is_msi_parent(domain)) {
357 360 /*
+4
drivers/pci/msi/msi.c
···
433 433 if (WARN_ON_ONCE(dev->msi_enabled))
434 434 return -EINVAL;
435 435
436 + /* Test for the availability of MSI support */
437 + if (!pci_msi_domain_supports(dev, 0, ALLOW_LEGACY))
438 + return -ENOTSUPP;
439 +
436 440 nvec = pci_msi_vec_count(dev);
437 441 if (nvec < 0)
438 442 return nvec;
+4 -2
drivers/pci/pci.c
···
6232 6232 pcie_capability_read_dword(dev, PCI_EXP_LNKCAP2, &lnkcap2);
6233 6233 speeds = lnkcap2 & PCI_EXP_LNKCAP2_SLS;
6234 6234
6235 + /* Ignore speeds higher than Max Link Speed */
6236 + pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap);
6237 + speeds &= GENMASK(lnkcap & PCI_EXP_LNKCAP_SLS, 0);
6238 +
6235 6239 /* PCIe r3.0-compliant */
6236 6240 if (speeds)
6237 6241 return speeds;
6238 -
6239 - pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap);
6240 6242
6241 6243 /* Synthesize from the Max Link Speed field */
6242 6244 if ((lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_5_0GB)
+3 -1
drivers/pci/pcie/portdrv.c
···
265 265 (pcie_ports_dpc_native || (services & PCIE_PORT_SERVICE_AER)))
266 266 services |= PCIE_PORT_SERVICE_DPC;
267 267
268 + /* Enable bandwidth control if more than one speed is supported. */
268 269 if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM ||
269 270 pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) {
270 271 u32 linkcap;
271 272
272 273 pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &linkcap);
273 - if (linkcap & PCI_EXP_LNKCAP_LBNC)
274 + if (linkcap & PCI_EXP_LNKCAP_LBNC &&
275 + hweight8(dev->supported_speeds) > 1)
274 276 services |= PCIE_PORT_SERVICE_BWCTRL;
275 277 }
276 278
+6
drivers/phy/broadcom/phy-brcm-usb-init-synopsys.c
···
325 325 void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
326 326
327 327 USB_CTRL_UNSET(ctrl, USB_PM, XHC_S2_CLK_SWITCH_EN);
328 +
329 + /*
330 + * The PHY might be in a bad state if it is already powered
331 + * up. Toggle the power just in case.
332 + */
333 + USB_CTRL_SET(ctrl, USB_PM, USB_PWRDN);
328 334 USB_CTRL_UNSET(ctrl, USB_PM, USB_PWRDN);
329 335
330 336 /* 1 millisecond - for USB clocks to settle down */
+1 -2
drivers/phy/freescale/phy-fsl-samsung-hdmi.c
···
424 424 * Fvco = (M * f_ref) / P,
425 425 * where f_ref is 24MHz.
426 426 */
427 - tmp = (u64)_m * 24 * MHZ;
428 - do_div(tmp, _p);
427 + tmp = div64_ul((u64)_m * 24 * MHZ, _p);
429 428 if (tmp < 750 * MHZ ||
430 429 tmp > 3000 * MHZ)
431 430 continue;
+1
drivers/phy/mediatek/Kconfig
···
65 65 depends on ARCH_MEDIATEK || COMPILE_TEST
66 66 depends on COMMON_CLK
67 67 depends on OF
68 + depends on REGULATOR
68 69 select GENERIC_PHY
69 70 help
70 71 Support HDMI PHY for Mediatek SoCs.
+13 -8
drivers/phy/phy-core.c
···
145 145 return phy_provider;
146 146
147 147 for_each_child_of_node(phy_provider->children, child)
148 - if (child == node)
148 + if (child == node) {
149 + of_node_put(child);
149 150 return phy_provider;
151 + }
150 152
151 153 return ERR_PTR(-EPROBE_DEFER);
···
631 629 return ERR_PTR(-ENODEV);
632 630
633 631 /* This phy type handled by the usb-phy subsystem for now */
634 - if (of_device_is_compatible(args.np, "usb-nop-xceiv"))
635 - return ERR_PTR(-ENODEV);
632 + if (of_device_is_compatible(args.np, "usb-nop-xceiv")) {
633 + phy = ERR_PTR(-ENODEV);
634 + goto out_put_node;
635 + }
636 636
637 637 mutex_lock(&phy_provider_mutex);
638 638 phy_provider = of_phy_provider_lookup(args.np);
···
656 652
657 653 out_unlock:
658 654 mutex_unlock(&phy_provider_mutex);
655 + out_put_node:
659 656 of_node_put(args.np);
660 657
661 658 return phy;
···
742 737 if (!phy)
743 738 return;
744 739
745 - r = devres_destroy(dev, devm_phy_release, devm_phy_match, phy);
740 + r = devres_release(dev, devm_phy_release, devm_phy_match, phy);
746 741 dev_WARN_ONCE(dev, r, "couldn't find PHY resource\n");
747 742 }
748 743 EXPORT_SYMBOL_GPL(devm_phy_put);
···
1126 1121 {
1127 1122 int r;
1128 1123
1129 - r = devres_destroy(dev, devm_phy_consume, devm_phy_match, phy);
1124 + r = devres_release(dev, devm_phy_consume, devm_phy_match, phy);
1130 1125 dev_WARN_ONCE(dev, r, "couldn't find PHY resource\n");
1131 1126 }
1132 1127 EXPORT_SYMBOL_GPL(devm_phy_destroy);
···
1264 1259 * of_phy_provider_unregister to unregister the phy provider.
1265 1260 */
1266 1261 void devm_of_phy_provider_unregister(struct device *dev,
1267 - struct phy_provider *phy_provider)
1262 + struct phy_provider *phy_provider)
1268 1263 {
1269 1264 int r;
1270 1265
1271 - r = devres_destroy(dev, devm_phy_provider_release, devm_phy_match,
1272 - phy_provider);
1266 + r = devres_release(dev, devm_phy_provider_release, devm_phy_match,
1267 + phy_provider);
1273 1268 dev_WARN_ONCE(dev, r, "couldn't find PHY provider device resource\n");
1274 1269 }
1275 1270 EXPORT_SYMBOL_GPL(devm_of_phy_provider_unregister);
+1 -1
drivers/phy/qualcomm/phy-qcom-qmp-usb.c
···
1052 1052 QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FASTLOCK_FO_GAIN, 0x2f),
1053 1053 QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FASTLOCK_COUNT_LOW, 0xff),
1054 1054 QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FASTLOCK_COUNT_HIGH, 0x0f),
1055 - QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_SO_GAIN, 0x0a),
1055 + QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FO_GAIN, 0x0a),
1056 1056 QMP_PHY_INIT_CFG(QSERDES_V5_RX_VGA_CAL_CNTRL1, 0x54),
1057 1057 QMP_PHY_INIT_CFG(QSERDES_V5_RX_VGA_CAL_CNTRL2, 0x0f),
1058 1058 QMP_PHY_INIT_CFG(QSERDES_V5_RX_RX_EQU_ADAPTOR_CNTRL2, 0x0f),
+1 -1
drivers/phy/rockchip/phy-rockchip-naneng-combphy.c
···
309 309
310 310 priv->ext_refclk = device_property_present(dev, "rockchip,ext-refclk");
311 311
312 - priv->phy_rst = devm_reset_control_array_get_exclusive(dev);
312 + priv->phy_rst = devm_reset_control_get(dev, "phy");
313 313 if (IS_ERR(priv->phy_rst))
314 314 return dev_err_probe(dev, PTR_ERR(priv->phy_rst), "failed to get phy reset\n");
315 315
+2 -1
drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
···
1101 1101 return dev_err_probe(dev, PTR_ERR(hdptx->grf),
1102 1102 "Could not get GRF syscon\n");
1103 1103
1104 + platform_set_drvdata(pdev, hdptx);
1105 +
1104 1106 ret = devm_pm_runtime_enable(dev);
1105 1107 if (ret)
1106 1108 return dev_err_probe(dev, ret, "Failed to enable runtime PM\n");
···
1112 1110 return dev_err_probe(dev, PTR_ERR(hdptx->phy),
1113 1111 "Failed to create HDMI PHY\n");
1114 1112
1115 - platform_set_drvdata(pdev, hdptx);
1116 1113 phy_set_drvdata(hdptx->phy, hdptx);
1117 1114 phy_set_bus_width(hdptx->phy, 8);
1118 1115
+15 -6
drivers/phy/st/phy-stm32-combophy.c
···
122 122 u32 max_vswing = imp_lookup[imp_size - 1].vswing[vswing_size - 1];
123 123 u32 min_vswing = imp_lookup[0].vswing[0];
124 124 u32 val;
125 + u32 regval;
125 126
126 127 if (!of_property_read_u32(combophy->dev->of_node, "st,output-micro-ohms", &val)) {
127 128 if (val < min_imp || val > max_imp) {
···
130 129 return -EINVAL;
131 130 }
132 131
133 - for (imp_of = 0; imp_of < ARRAY_SIZE(imp_lookup); imp_of++)
134 - if (imp_lookup[imp_of].microohm <= val)
132 + regval = 0;
133 + for (imp_of = 0; imp_of < ARRAY_SIZE(imp_lookup); imp_of++) {
134 + if (imp_lookup[imp_of].microohm <= val) {
135 + regval = FIELD_PREP(STM32MP25_PCIEPRG_IMPCTRL_OHM, imp_of);
135 136 break;
137 + }
138 + }
136 139
137 140 dev_dbg(combophy->dev, "Set %u micro-ohms output impedance\n",
138 141 imp_lookup[imp_of].microohm);
139 142
140 143 regmap_update_bits(combophy->regmap, SYSCFG_PCIEPRGCR,
141 144 STM32MP25_PCIEPRG_IMPCTRL_OHM,
142 - FIELD_PREP(STM32MP25_PCIEPRG_IMPCTRL_OHM, imp_of));
145 + regval);
143 146 } else {
144 147 regmap_read(combophy->regmap, SYSCFG_PCIEPRGCR, &val);
145 148 imp_of = FIELD_GET(STM32MP25_PCIEPRG_IMPCTRL_OHM, val);
···
155 150 return -EINVAL;
156 151 }
157 152
158 - for (vswing_of = 0; vswing_of < ARRAY_SIZE(imp_lookup[imp_of].vswing); vswing_of++)
159 - if (imp_lookup[imp_of].vswing[vswing_of] >= val)
153 + regval = 0;
154 + for (vswing_of = 0; vswing_of < ARRAY_SIZE(imp_lookup[imp_of].vswing); vswing_of++) {
155 + if (imp_lookup[imp_of].vswing[vswing_of] >= val) {
156 + regval = FIELD_PREP(STM32MP25_PCIEPRG_IMPCTRL_VSWING, vswing_of);
160 157 break;
158 + }
159 + }
161 160
162 161 dev_dbg(combophy->dev, "Set %u microvolt swing\n",
163 162 imp_lookup[imp_of].vswing[vswing_of]);
164 163
165 164 regmap_update_bits(combophy->regmap, SYSCFG_PCIEPRGCR,
166 165 STM32MP25_PCIEPRG_IMPCTRL_VSWING,
167 - FIELD_PREP(STM32MP25_PCIEPRG_IMPCTRL_VSWING, vswing_of));
166 + regval);
168 167
169 168 return 0;
+6
drivers/pinctrl/pinctrl-mcp23s08.c
···
86 86 .num_reg_defaults = ARRAY_SIZE(mcp23x08_defaults),
87 87 .cache_type = REGCACHE_FLAT,
88 88 .max_register = MCP_OLAT,
89 + .disable_locking = true, /* mcp->lock protects the regmap */
89 90 };
90 91 EXPORT_SYMBOL_GPL(mcp23x08_regmap);
91 92
···
133 132 .num_reg_defaults = ARRAY_SIZE(mcp23x17_defaults),
134 133 .cache_type = REGCACHE_FLAT,
135 134 .val_format_endian = REGMAP_ENDIAN_LITTLE,
135 + .disable_locking = true, /* mcp->lock protects the regmap */
136 136 };
137 137 EXPORT_SYMBOL_GPL(mcp23x17_regmap);
138 138
···
230 228
231 229 switch (param) {
232 230 case PIN_CONFIG_BIAS_PULL_UP:
231 + mutex_lock(&mcp->lock);
233 232 ret = mcp_read(mcp, MCP_GPPU, &data);
233 + mutex_unlock(&mcp->lock);
234 234 if (ret < 0)
235 235 return ret;
236 236 status = (data & BIT(pin)) ? 1 : 0;
···
261 257
262 258 switch (param) {
263 259 case PIN_CONFIG_BIAS_PULL_UP:
260 + mutex_lock(&mcp->lock);
264 261 ret = mcp_set_bit(mcp, MCP_GPPU, pin, arg);
262 + mutex_unlock(&mcp->lock);
265 263 break;
266 264 default:
267 265 dev_dbg(mcp->dev, "Invalid config param %04x\n", param);
+2 -2
drivers/platform/chrome/cros_ec_lpc.c
···
707 707 /* Framework Laptop (12th Gen Intel Core) */
708 708 .matches = {
709 709 DMI_MATCH(DMI_SYS_VENDOR, "Framework"),
710 - DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "12th Gen Intel Core"),
710 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Laptop (12th Gen Intel Core)"),
711 711 },
712 712 .driver_data = (void *)&framework_laptop_mec_lpc_driver_data,
713 713 },
···
715 715 /* Framework Laptop (13th Gen Intel Core) */
716 716 .matches = {
717 717 DMI_MATCH(DMI_SYS_VENDOR, "Framework"),
718 - DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "13th Gen Intel Core"),
718 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Laptop (13th Gen Intel Core)"),
719 719 },
720 720 .driver_data = (void *)&framework_laptop_mec_lpc_driver_data,
721 721 },
+1 -1
drivers/platform/loongarch/Kconfig
···
18 18
19 19 config LOONGSON_LAPTOP
20 20 tristate "Generic Loongson-3 Laptop Driver"
21 - depends on ACPI
21 + depends on ACPI_EC
22 22 depends on BACKLIGHT_CLASS_DEVICE
23 23 depends on INPUT
24 24 depends on MACH_LOONGSON64
+2 -2
drivers/platform/x86/hp/hp-wmi.c
···
64 64 "874A", "8603", "8604", "8748", "886B", "886C", "878A", "878B", "878C",
65 65 "88C8", "88CB", "8786", "8787", "8788", "88D1", "88D2", "88F4", "88FD",
66 66 "88F5", "88F6", "88F7", "88FE", "88FF", "8900", "8901", "8902", "8912",
67 - "8917", "8918", "8949", "894A", "89EB", "8BAD", "8A42"
67 + "8917", "8918", "8949", "894A", "89EB", "8BAD", "8A42", "8A15"
68 68 };
69 69
70 70 /* DMI Board names of Omen laptops that are specifically set to be thermal
···
80 80 * "balanced" when reaching zero.
81 81 */
82 82 static const char * const omen_timed_thermal_profile_boards[] = {
83 - "8BAD", "8A42"
83 + "8BAD", "8A42", "8A15"
84 84 };
85 85
86 86 /* DMI Board names of Victus laptops */
+2
drivers/platform/x86/mlx-platform.c
···
6237 6237 fail_pci_request_regions:
6238 6238 pci_disable_device(pci_dev);
6239 6239 fail_pci_enable_device:
6240 + pci_dev_put(pci_dev);
6240 6241 return err;
6241 6242 }
···
6248 6247 iounmap(pci_bridge_addr);
6249 6248 pci_release_regions(pci_bridge);
6250 6249 pci_disable_device(pci_bridge);
6250 + pci_dev_put(pci_bridge);
6251 6251 }
6252 6252
6253 6253 static int
+3 -1
drivers/platform/x86/thinkpad_acpi.c
···
184 184 */
185 185 TP_HKEY_EV_AMT_TOGGLE = 0x131a, /* Toggle AMT on/off */
186 186 TP_HKEY_EV_DOUBLETAP_TOGGLE = 0x131c, /* Toggle trackpoint doubletap on/off */
187 - TP_HKEY_EV_PROFILE_TOGGLE = 0x131f, /* Toggle platform profile */
187 + TP_HKEY_EV_PROFILE_TOGGLE = 0x131f, /* Toggle platform profile in 2024 systems */
188 + TP_HKEY_EV_PROFILE_TOGGLE2 = 0x1401, /* Toggle platform profile in 2025 + systems */
188 189
189 190 /* Reasons for waking up from S3/S4 */
190 191 TP_HKEY_EV_WKUP_S3_UNDOCK = 0x2304, /* undock requested, S3 */
···
11201 11200 tp_features.trackpoint_doubletap = !tp_features.trackpoint_doubletap;
11202 11201 return true;
11203 11202 case TP_HKEY_EV_PROFILE_TOGGLE:
11203 + case TP_HKEY_EV_PROFILE_TOGGLE2:
11204 11204 platform_profile_cycle();
11205 11205 return true;
11206 11206 }
+6
drivers/pmdomain/core.c
···
2142 2142 return 0;
2143 2143 }
2144 2144
2145 + static void genpd_provider_release(struct device *dev)
2146 + {
2147 + /* nothing to be done here */
2148 + }
2149 +
2145 2150 static int genpd_alloc_data(struct generic_pm_domain *genpd)
2146 2151 {
2147 2152 struct genpd_governor_data *gd = NULL;
···
2178 2173
2179 2174 genpd->gd = gd;
2180 2175 device_initialize(&genpd->dev);
2176 + genpd->dev.release = genpd_provider_release;
2181 2177
2182 2178 if (!genpd_is_dev_name_fw(genpd)) {
2183 2179 dev_set_name(&genpd->dev, "%s", genpd->name);
+2 -2
drivers/pmdomain/imx/gpcv2.c
···
1458 1458 .max_register = SZ_4K,
1459 1459 };
1460 1460 struct device *dev = &pdev->dev;
1461 - struct device_node *pgc_np;
1461 + struct device_node *pgc_np __free(device_node) =
1462 + of_get_child_by_name(dev->of_node, "pgc");
1462 1463 struct regmap *regmap;
1463 1464 void __iomem *base;
1464 1465 int ret;
1465 1466
1466 - pgc_np = of_get_child_by_name(dev->of_node, "pgc");
1467 1467 if (!pgc_np) {
1468 1468 dev_err(dev, "No power domains specified in DT\n");
1469 1469 return -EINVAL;
+9 -3
drivers/power/supply/bq24190_charger.c
···
567 567
568 568 static int bq24296_set_otg_vbus(struct bq24190_dev_info *bdi, bool enable)
569 569 {
570 + union power_supply_propval val = { .intval = bdi->charge_type };
570 571 int ret;
571 572
572 573 ret = pm_runtime_resume_and_get(bdi->dev);
···
588 587
589 588 ret = bq24190_write_mask(bdi, BQ24190_REG_POC,
590 589 BQ24296_REG_POC_OTG_CONFIG_MASK,
591 - BQ24296_REG_POC_CHG_CONFIG_SHIFT,
590 + BQ24296_REG_POC_OTG_CONFIG_SHIFT,
592 591 BQ24296_REG_POC_OTG_CONFIG_OTG);
593 - } else
592 + } else {
594 593 ret = bq24190_write_mask(bdi, BQ24190_REG_POC,
595 594 BQ24296_REG_POC_OTG_CONFIG_MASK,
596 - BQ24296_REG_POC_CHG_CONFIG_SHIFT,
595 + BQ24296_REG_POC_OTG_CONFIG_SHIFT,
597 596 BQ24296_REG_POC_OTG_CONFIG_DISABLE);
598 + if (ret < 0)
599 + goto out;
600 +
601 + ret = bq24190_charger_set_charge_type(bdi, &val);
602 + }
598 603
599 604 out:
600 605 pm_runtime_mark_last_busy(bdi->dev);
+27 -9
drivers/power/supply/cros_charge-control.c
···
7 7 #include <acpi/battery.h>
8 8 #include <linux/container_of.h>
9 9 #include <linux/dmi.h>
10 + #include <linux/lockdep.h>
10 11 #include <linux/mod_devicetable.h>
11 12 #include <linux/module.h>
13 + #include <linux/mutex.h>
12 14 #include <linux/platform_data/cros_ec_commands.h>
13 15 #include <linux/platform_data/cros_ec_proto.h>
14 16 #include <linux/platform_device.h>
···
51 49 struct attribute *attributes[_CROS_CHCTL_ATTR_COUNT];
52 50 struct attribute_group group;
53 51
52 + struct mutex lock; /* protects fields below and cros_ec */
54 53 enum power_supply_charge_behaviour current_behaviour;
55 54 u8 current_start_threshold, current_end_threshold;
56 55 };
···
87 84 static int cros_chctl_configure_ec(struct cros_chctl_priv *priv)
88 85 {
89 86 struct ec_params_charge_control req = {};
87 +
88 + lockdep_assert_held(&priv->lock);
90 89
91 90 req.cmd = EC_CHARGE_CONTROL_CMD_SET;
92 91
···
139 134 return -EINVAL;
140 135
141 136 if (is_end_threshold) {
142 - if (val <= priv->current_start_threshold)
137 + /* Start threshold is not exposed, use fixed value */
138 + if (priv->cmd_version == 2)
139 + priv->current_start_threshold = val == 100 ? 0 : val;
140 +
141 + if (val < priv->current_start_threshold)
143 142 return -EINVAL;
144 143 priv->current_end_threshold = val;
145 144 } else {
146 - if (val >= priv->current_end_threshold)
145 + if (val > priv->current_end_threshold)
147 146 return -EINVAL;
148 147 priv->current_start_threshold = val;
149 148 }
···
168 159 struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
169 160 CROS_CHCTL_ATTR_START_THRESHOLD);
170 161
162 + guard(mutex)(&priv->lock);
171 163 return sysfs_emit(buf, "%u\n", (unsigned int)priv->current_start_threshold);
172 164 }
173 165
···
179 169 struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
180 170 CROS_CHCTL_ATTR_START_THRESHOLD);
181 171
172 + guard(mutex)(&priv->lock);
182 173 return cros_chctl_store_threshold(dev, priv, 0, buf, count);
183 174 }
184 175
···
189 178 struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
190 179 CROS_CHCTL_ATTR_END_THRESHOLD);
191 180
181 + guard(mutex)(&priv->lock);
192 182 return sysfs_emit(buf, "%u\n", (unsigned int)priv->current_end_threshold);
193 183 }
194 184
···
199 187 struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
200 188 CROS_CHCTL_ATTR_END_THRESHOLD);
201 189
190 + guard(mutex)(&priv->lock);
202 191 return cros_chctl_store_threshold(dev, priv, 1, buf, count);
203 192 }
204 193
···
208 195 struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
209 196 CROS_CHCTL_ATTR_CHARGE_BEHAVIOUR);
210 197
198 + guard(mutex)(&priv->lock);
211 199 return power_supply_charge_behaviour_show(dev, EC_CHARGE_CONTROL_BEHAVIOURS,
212 200 priv->current_behaviour, buf);
213 201 }
···
224 210 if (ret < 0)
225 211 return ret;
226 212
213 + guard(mutex)(&priv->lock);
227 214 priv->current_behaviour = ret;
228 215
229 216 ret = cros_chctl_configure_ec(priv);
···
238 223 {
239 224 struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(attr, n);
240 225
241 - if (priv->cmd_version < 2) {
242 - if (n == CROS_CHCTL_ATTR_START_THRESHOLD)
243
- return 0; 244 - if (n == CROS_CHCTL_ATTR_END_THRESHOLD) 245 - return 0; 246 - } 226 + if (n == CROS_CHCTL_ATTR_START_THRESHOLD && priv->cmd_version < 3) 227 + return 0; 228 + else if (n == CROS_CHCTL_ATTR_END_THRESHOLD && priv->cmd_version < 2) 229 + return 0; 247 230 248 231 return attr->mode; 249 232 } ··· 303 290 if (!priv) 304 291 return -ENOMEM; 305 292 293 + ret = devm_mutex_init(dev, &priv->lock); 294 + if (ret) 295 + return ret; 296 + 306 297 ret = cros_ec_get_cmd_versions(cros_ec, EC_CMD_CHARGE_CONTROL); 307 298 if (ret < 0) 308 299 return ret; ··· 344 327 priv->current_end_threshold = 100; 345 328 346 329 /* Bring EC into well-known state */ 347 - ret = cros_chctl_configure_ec(priv); 330 + scoped_guard(mutex, &priv->lock) 331 + ret = cros_chctl_configure_ec(priv); 348 332 if (ret < 0) 349 333 return ret; 350 334
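The cros_charge-control hunk above tightens the threshold comparisons from `<=`/`>=` to `<`/`>` (so start may now equal end) and, for command version 2 where the start threshold is not exposed, derives it from the end value. A userspace sketch of that validation logic (simplified stand-in types and names, not the kernel driver):

```c
#include <assert.h>

/*
 * Userspace sketch (not the kernel code) of the threshold checks the
 * cros_charge-control hunk switches from <=/>= to </>: after the change,
 * start == end is accepted. Field and function names are illustrative.
 */
struct chctl_state {
	int cmd_version;
	unsigned char start, end;	/* percent, 0..100 */
};

static int set_end_threshold(struct chctl_state *s, int val)
{
	if (val < 0 || val > 100)
		return -1;
	/* cmd_version 2 has no start threshold; use a fixed derived value */
	if (s->cmd_version == 2)
		s->start = (val == 100) ? 0 : val;
	if (val < s->start)		/* end may now equal start */
		return -1;
	s->end = val;
	return 0;
}

static int set_start_threshold(struct chctl_state *s, int val)
{
	if (val < 0 || val > 100)
		return -1;
	if (val > s->end)		/* start may now equal end */
		return -1;
	s->start = val;
	return 0;
}
```

With the original `<=`/`>=` checks, setting both thresholds to the same value was rejected; the relaxed comparisons allow it.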
+8
drivers/power/supply/gpio-charger.c
··· 67 67 if (gpio_charger->current_limit_map[i].limit_ua <= val) 68 68 break; 69 69 } 70 + 71 + /* 72 + * If a valid charge current limit isn't found, default to smallest 73 + * current limitation for safety reasons. 74 + */ 75 + if (i >= gpio_charger->current_limit_map_size) 76 + i = gpio_charger->current_limit_map_size - 1; 77 + 70 78 mapping = gpio_charger->current_limit_map[i]; 71 79 72 80 for (i = 0; i < ndescs; i++) {
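The gpio-charger fix above guards a table lookup: the limit table is sorted in descending order, and if no entry is small enough for the requested current, the old code indexed one past the end of the array. A minimal userspace sketch of the corrected selection (plain arrays instead of the driver's map structure):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the gpio-charger fallback: limits_ua is assumed sorted in
 * descending order. If the requested value is below every entry, fall
 * back to the smallest (safest) limit instead of reading past the table.
 */
static size_t pick_current_limit(const unsigned int *limits_ua, size_t n,
				 unsigned int requested_ua)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (limits_ua[i] <= requested_ua)
			break;
	}
	/* no entry small enough: default to the smallest limit for safety */
	if (i >= n)
		i = n - 1;
	return i;
}
```

Clamping to the last (smallest) entry rather than returning an error keeps the charger in its most conservative configuration.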
+1 -1
drivers/regulator/of_regulator.c
··· 175 175 if (!ret) 176 176 constraints->enable_time = pval; 177 177 178 - ret = of_property_read_u32(np, "regulator-uv-survival-time-ms", &pval); 178 + ret = of_property_read_u32(np, "regulator-uv-less-critical-window-ms", &pval); 179 179 if (!ret) 180 180 constraints->uv_less_critical_window_ms = pval; 181 181 else
+3 -1
drivers/spi/spi-rockchip-sfc.c
··· 182 182 bool use_dma; 183 183 u32 max_iosize; 184 184 u16 version; 185 + struct spi_controller *host; 185 186 }; 186 187 187 188 static int rockchip_sfc_reset(struct rockchip_sfc *sfc) ··· 575 574 576 575 sfc = spi_controller_get_devdata(host); 577 576 sfc->dev = dev; 577 + sfc->host = host; 578 578 579 579 sfc->regbase = devm_platform_ioremap_resource(pdev, 0); 580 580 if (IS_ERR(sfc->regbase)) ··· 653 651 654 652 static void rockchip_sfc_remove(struct platform_device *pdev) 655 653 { 656 - struct spi_controller *host = platform_get_drvdata(pdev); 657 654 struct rockchip_sfc *sfc = platform_get_drvdata(pdev); 655 + struct spi_controller *host = sfc->host; 658 656 659 657 spi_unregister_controller(host); 660 658
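The spi-rockchip-sfc change above fixes a type confusion in remove(): the platform drvdata holds the driver's private struct, so casting it to `struct spi_controller *` was wrong. The fix keeps an explicit back-pointer. A toy sketch of the pattern (simplified stand-in types, not the kernel API):

```c
#include <assert.h>

/*
 * Sketch of the back-pointer pattern: when drvdata points at private
 * data, recover the controller through a stored pointer rather than a
 * cast. Types and names here are illustrative only.
 */
struct controller { int id; };

struct sfc_priv {
	struct controller *host;	/* back-pointer set at probe time */
	void *regbase;
};

static struct controller *sfc_to_controller(struct sfc_priv *sfc)
{
	return sfc->host;		/* not (struct controller *)sfc */
}
```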
+1
drivers/staging/fbtft/Kconfig
··· 3 3 tristate "Support for small TFT LCD display modules" 4 4 depends on FB && SPI 5 5 depends on FB_DEVICE 6 + depends on BACKLIGHT_CLASS_DEVICE 6 7 depends on GPIOLIB || COMPILE_TEST 7 8 select FB_BACKLIGHT 8 9 select FB_SYSMEM_HELPERS_DEFERRED
+1 -1
drivers/staging/gpib/common/Makefile
··· 1 1 2 - obj-m += gpib_common.o 2 + obj-$(CONFIG_GPIB_COMMON) += gpib_common.o 3 3 4 4 gpib_common-objs := gpib_os.o iblib.o 5 5
+1 -1
drivers/staging/gpib/nec7210/Makefile
··· 1 1 2 - obj-m += nec7210.o 2 + obj-$(CONFIG_GPIB_NEC7210) += nec7210.o 3 3 4 4
+40 -36
drivers/thermal/thermal_thresholds.c
··· 69 69 return NULL; 70 70 } 71 71 72 - static bool __thermal_threshold_is_crossed(struct user_threshold *threshold, int temperature, 73 - int last_temperature, int direction, 74 - int *low, int *high) 75 - { 76 - 77 - if (temperature >= threshold->temperature) { 78 - if (threshold->temperature > *low && 79 - THERMAL_THRESHOLD_WAY_DOWN & threshold->direction) 80 - *low = threshold->temperature; 81 - 82 - if (last_temperature < threshold->temperature && 83 - threshold->direction & direction) 84 - return true; 85 - } else { 86 - if (threshold->temperature < *high && THERMAL_THRESHOLD_WAY_UP 87 - & threshold->direction) 88 - *high = threshold->temperature; 89 - 90 - if (last_temperature >= threshold->temperature && 91 - threshold->direction & direction) 92 - return true; 93 - } 94 - 95 - return false; 96 - } 97 - 98 72 static bool thermal_thresholds_handle_raising(struct list_head *thresholds, int temperature, 99 - int last_temperature, int *low, int *high) 73 + int last_temperature) 100 74 { 101 75 struct user_threshold *t; 102 76 103 77 list_for_each_entry(t, thresholds, list_node) { 104 - if (__thermal_threshold_is_crossed(t, temperature, last_temperature, 105 - THERMAL_THRESHOLD_WAY_UP, low, high)) 78 + 79 + if (!(t->direction & THERMAL_THRESHOLD_WAY_UP)) 80 + continue; 81 + 82 + if (temperature >= t->temperature && 83 + last_temperature < t->temperature) 106 84 return true; 107 85 } 108 86 ··· 88 110 } 89 111 90 112 static bool thermal_thresholds_handle_dropping(struct list_head *thresholds, int temperature, 91 - int last_temperature, int *low, int *high) 113 + int last_temperature) 92 114 { 93 115 struct user_threshold *t; 94 116 95 117 list_for_each_entry_reverse(t, thresholds, list_node) { 96 - if (__thermal_threshold_is_crossed(t, temperature, last_temperature, 97 - THERMAL_THRESHOLD_WAY_DOWN, low, high)) 118 + 119 + if (!(t->direction & THERMAL_THRESHOLD_WAY_DOWN)) 120 + continue; 121 + 122 + if (temperature <= t->temperature && 123 + last_temperature > 
t->temperature) 98 124 return true; 99 125 } 100 126 101 127 return false; 128 + } 129 + 130 + static void thermal_threshold_find_boundaries(struct list_head *thresholds, int temperature, 131 + int *low, int *high) 132 + { 133 + struct user_threshold *t; 134 + 135 + list_for_each_entry(t, thresholds, list_node) { 136 + if (temperature < t->temperature && 137 + (t->direction & THERMAL_THRESHOLD_WAY_UP) && 138 + *high > t->temperature) 139 + *high = t->temperature; 140 + } 141 + 142 + list_for_each_entry_reverse(t, thresholds, list_node) { 143 + if (temperature > t->temperature && 144 + (t->direction & THERMAL_THRESHOLD_WAY_DOWN) && 145 + *low < t->temperature) 146 + *low = t->temperature; 147 + } 102 148 } 103 149 104 150 void thermal_thresholds_handle(struct thermal_zone_device *tz, int *low, int *high) ··· 133 131 int last_temperature = tz->last_temperature; 134 132 135 133 lockdep_assert_held(&tz->lock); 134 + 135 + thermal_threshold_find_boundaries(thresholds, temperature, low, high); 136 136 137 137 /* 138 138 * We need a second update in order to detect a threshold being crossed ··· 155 151 * - decreased : thresholds are crossed the way down 156 152 */ 157 153 if (temperature > last_temperature) { 158 - if (thermal_thresholds_handle_raising(thresholds, temperature, 159 - last_temperature, low, high)) 154 + if (thermal_thresholds_handle_raising(thresholds, 155 + temperature, last_temperature)) 160 156 thermal_notify_threshold_up(tz); 161 157 } else { 162 - if (thermal_thresholds_handle_dropping(thresholds, temperature, 163 - last_temperature, low, high)) 158 + if (thermal_thresholds_handle_dropping(thresholds, 159 + temperature, last_temperature)) 164 160 thermal_notify_threshold_down(tz); 165 161 } 166 162 }
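The thermal_thresholds refactor above splits one combined helper into separate crossing detection and boundary computation. A userspace sketch of the two resulting operations, using a sorted array in place of the kernel's list and flag names mirroring the driver's direction bits (illustrative only):

```c
#include <limits.h>
#include <stddef.h>

/*
 * Sketch of the refactored thermal threshold logic: thresholds are
 * assumed sorted ascending; WAY_UP/WAY_DOWN mimic the kernel's
 * direction flags. Not the kernel list implementation.
 */
#define WAY_UP   0x1
#define WAY_DOWN 0x2

struct threshold { int temperature; int direction; };

/* Narrow [*low, *high] to the nearest thresholds around temperature. */
static void find_boundaries(const struct threshold *t, size_t n,
			    int temperature, int *low, int *high)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (temperature < t[i].temperature &&
		    (t[i].direction & WAY_UP) && *high > t[i].temperature)
			*high = t[i].temperature;
		if (temperature > t[i].temperature &&
		    (t[i].direction & WAY_DOWN) && *low < t[i].temperature)
			*low = t[i].temperature;
	}
}

/* True if any WAY_UP threshold lies in (last_temperature, temperature]. */
static int crossed_up(const struct threshold *t, size_t n,
		      int temperature, int last_temperature)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (!(t[i].direction & WAY_UP))
			continue;
		if (temperature >= t[i].temperature &&
		    last_temperature < t[i].temperature)
			return 1;
	}
	return 0;
}
```

Separating the two concerns lets the boundary update run unconditionally while the crossing notification depends only on the temperature trend, which is what the kernel hunk achieves.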
+8
drivers/thunderbolt/nhi.c
··· 1520 1520 .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1521 1521 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI1), 1522 1522 .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1523 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI0), 1524 + .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1525 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI1), 1526 + .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1527 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI0), 1528 + .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1529 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI1), 1530 + .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1523 1531 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI) }, 1524 1532 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI) }, 1525 1533
+4
drivers/thunderbolt/nhi.h
··· 92 92 #define PCI_DEVICE_ID_INTEL_RPL_NHI1 0xa76d 93 93 #define PCI_DEVICE_ID_INTEL_LNL_NHI0 0xa833 94 94 #define PCI_DEVICE_ID_INTEL_LNL_NHI1 0xa834 95 + #define PCI_DEVICE_ID_INTEL_PTL_M_NHI0 0xe333 96 + #define PCI_DEVICE_ID_INTEL_PTL_M_NHI1 0xe334 97 + #define PCI_DEVICE_ID_INTEL_PTL_P_NHI0 0xe433 98 + #define PCI_DEVICE_ID_INTEL_PTL_P_NHI1 0xe434 95 99 96 100 #define PCI_CLASS_SERIAL_USB_USB4 0x0c0340 97 101
+15 -4
drivers/thunderbolt/retimer.c
··· 103 103 104 104 err_nvm: 105 105 dev_dbg(&rt->dev, "NVM upgrade disabled\n"); 106 + rt->no_nvm_upgrade = true; 106 107 if (!IS_ERR(nvm)) 107 108 tb_nvm_free(nvm); 108 109 ··· 183 182 184 183 if (!rt->nvm) 185 184 ret = -EAGAIN; 186 - else if (rt->no_nvm_upgrade) 187 - ret = -EOPNOTSUPP; 188 185 else 189 186 ret = sysfs_emit(buf, "%#x\n", rt->auth_status); 190 187 ··· 322 323 323 324 if (!rt->nvm) 324 325 ret = -EAGAIN; 325 - else if (rt->no_nvm_upgrade) 326 - ret = -EOPNOTSUPP; 327 326 else 328 327 ret = sysfs_emit(buf, "%x.%x\n", rt->nvm->major, rt->nvm->minor); 329 328 ··· 339 342 } 340 343 static DEVICE_ATTR_RO(vendor); 341 344 345 + static umode_t retimer_is_visible(struct kobject *kobj, struct attribute *attr, 346 + int n) 347 + { 348 + struct device *dev = kobj_to_dev(kobj); 349 + struct tb_retimer *rt = tb_to_retimer(dev); 350 + 351 + if (attr == &dev_attr_nvm_authenticate.attr || 352 + attr == &dev_attr_nvm_version.attr) 353 + return rt->no_nvm_upgrade ? 0 : attr->mode; 354 + 355 + return attr->mode; 356 + } 357 + 342 358 static struct attribute *retimer_attrs[] = { 343 359 &dev_attr_device.attr, 344 360 &dev_attr_nvm_authenticate.attr, ··· 361 351 }; 362 352 363 353 static const struct attribute_group retimer_group = { 354 + .is_visible = retimer_is_visible, 364 355 .attrs = retimer_attrs, 365 356 }; 366 357
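The retimer hunk above replaces per-attribute `-EOPNOTSUPP` returns with a group-level `is_visible` callback, so unsupported NVM attributes are hidden from sysfs instead of erroring on read. A simplified userspace sketch of the visibility decision (stand-in types, not the kernel's `struct attribute` machinery):

```c
#include <string.h>

/*
 * Sketch of the sysfs is_visible pattern: return mode 0 to hide an
 * attribute entirely when the feature is unavailable, instead of
 * returning -EOPNOTSUPP from each show(). Simplified types.
 */
struct attr { const char *name; unsigned short mode; };

struct retimer { int no_nvm_upgrade; };

static unsigned short retimer_attr_mode(const struct retimer *rt,
					const struct attr *a)
{
	if (!strcmp(a->name, "nvm_authenticate") ||
	    !strcmp(a->name, "nvm_version"))
		return rt->no_nvm_upgrade ? 0 : a->mode;
	return a->mode;		/* all other attributes keep their mode */
}
```

Hiding the files is friendlier to userspace tooling than exposing attributes whose reads always fail.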
+41
drivers/thunderbolt/tb.c
··· 2059 2059 } 2060 2060 } 2061 2061 2062 + static void tb_switch_enter_redrive(struct tb_switch *sw) 2063 + { 2064 + struct tb_port *port; 2065 + 2066 + tb_switch_for_each_port(sw, port) 2067 + tb_enter_redrive(port); 2068 + } 2069 + 2070 + /* 2071 + * Called during system and runtime suspend to forcefully exit redrive 2072 + * mode without querying whether the resource is available. 2073 + */ 2074 + static void tb_switch_exit_redrive(struct tb_switch *sw) 2075 + { 2076 + struct tb_port *port; 2077 + 2078 + if (!(sw->quirks & QUIRK_KEEP_POWER_IN_DP_REDRIVE)) 2079 + return; 2080 + 2081 + tb_switch_for_each_port(sw, port) { 2082 + if (!tb_port_is_dpin(port)) 2083 + continue; 2084 + 2085 + if (port->redrive) { 2086 + port->redrive = false; 2087 + pm_runtime_put(&sw->dev); 2088 + tb_port_dbg(port, "exit redrive mode\n"); 2089 + } 2090 + } 2091 + } 2092 + 2062 2093 static void tb_dp_resource_unavailable(struct tb *tb, struct tb_port *port) 2063 2094 { 2064 2095 struct tb_port *in, *out; ··· 2940 2909 tb_create_usb3_tunnels(tb->root_switch); 2941 2910 /* Add DP IN resources for the root switch */ 2942 2911 tb_add_dp_resources(tb->root_switch); 2912 + tb_switch_enter_redrive(tb->root_switch); 2943 2913 /* Make the discovered switches available to the userspace */ 2944 2914 device_for_each_child(&tb->root_switch->dev, NULL, 2945 2915 tb_scan_finalize_switch); ··· 2956 2924 2957 2925 tb_dbg(tb, "suspending...\n"); 2958 2926 tb_disconnect_and_release_dp(tb); 2927 + tb_switch_exit_redrive(tb->root_switch); 2959 2928 tb_switch_suspend(tb->root_switch, false); 2960 2929 tcm->hotplug_active = false; /* signal tb_handle_hotplug to quit */ 2961 2930 tb_dbg(tb, "suspend finished\n"); ··· 3049 3016 tb_dbg(tb, "tunnels restarted, sleeping for 100ms\n"); 3050 3017 msleep(100); 3051 3018 } 3019 + tb_switch_enter_redrive(tb->root_switch); 3052 3020 /* Allow tb_handle_hotplug to progress events */ 3053 3021 tcm->hotplug_active = true; 3054 3022 tb_dbg(tb, "resume finished\n"); ··· 3113 
3079 struct tb_cm *tcm = tb_priv(tb); 3114 3080 3115 3081 mutex_lock(&tb->lock); 3082 + /* 3083 + * The below call only releases DP resources to allow exiting and 3084 + * re-entering redrive mode. 3085 + */ 3086 + tb_disconnect_and_release_dp(tb); 3087 + tb_switch_exit_redrive(tb->root_switch); 3116 3088 tb_switch_suspend(tb->root_switch, true); 3117 3089 tcm->hotplug_active = false; 3118 3090 mutex_unlock(&tb->lock); ··· 3150 3110 tb_restore_children(tb->root_switch); 3151 3111 list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) 3152 3112 tb_tunnel_restart(tunnel); 3113 + tb_switch_enter_redrive(tb->root_switch); 3153 3114 tcm->hotplug_active = true; 3154 3115 mutex_unlock(&tb->lock); 3155 3116
+1 -1
drivers/usb/host/xhci-mem.c
··· 436 436 goto free_segments; 437 437 } 438 438 439 - xhci_link_rings(xhci, ring, &new_ring); 439 + xhci_link_rings(xhci, &new_ring, ring); 440 440 trace_xhci_ring_expansion(ring); 441 441 xhci_dbg_trace(xhci, trace_xhci_dbg_ring_expansion, 442 442 "ring expansion succeed, now has %d segments",
-2
drivers/usb/host/xhci-ring.c
··· 1199 1199 * Keep retrying until the EP starts and stops again, on 1200 1200 * chips where this is known to help. Wait for 100ms. 1201 1201 */ 1202 - if (!(xhci->quirks & XHCI_NEC_HOST)) 1203 - break; 1204 1202 if (time_is_before_jiffies(ep->stop_time + msecs_to_jiffies(100))) 1205 1203 break; 1206 1204 fallthrough;
+27
drivers/usb/serial/option.c
··· 625 625 #define MEIGSMART_PRODUCT_SRM825L 0x4d22 626 626 /* MeiG Smart SLM320 based on UNISOC UIS8910 */ 627 627 #define MEIGSMART_PRODUCT_SLM320 0x4d41 628 + /* MeiG Smart SLM770A based on ASR1803 */ 629 + #define MEIGSMART_PRODUCT_SLM770A 0x4d57 628 630 629 631 /* Device flags */ 630 632 ··· 1397 1395 .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) }, 1398 1396 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10aa, 0xff), /* Telit FN920C04 (MBIM) */ 1399 1397 .driver_info = NCTRL(3) | RSVD(4) | RSVD(5) }, 1398 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c0, 0xff), /* Telit FE910C04 (rmnet) */ 1399 + .driver_info = RSVD(0) | NCTRL(3) }, 1400 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c4, 0xff), /* Telit FE910C04 (rmnet) */ 1401 + .driver_info = RSVD(0) | NCTRL(3) }, 1402 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c8, 0xff), /* Telit FE910C04 (rmnet) */ 1403 + .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) }, 1400 1404 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910), 1401 1405 .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) }, 1402 1406 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM), ··· 2255 2247 .driver_info = NCTRL(2) }, 2256 2248 { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x7127, 0xff, 0x00, 0x00), 2257 2249 .driver_info = NCTRL(2) | NCTRL(3) | NCTRL(4) }, 2250 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x7129, 0xff, 0x00, 0x00), /* MediaTek T7XX */ 2251 + .driver_info = NCTRL(2) | NCTRL(3) | NCTRL(4) }, 2258 2252 { USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MEN200) }, 2259 2253 { USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MPL200), 2260 2254 .driver_info = RSVD(1) | RSVD(4) }, ··· 2385 2375 { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WWD for Golbal EDU */ 2386 2376 { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0x00, 0x40) }, 2387 2377 { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0xff, 0x40) }, 2378 + { 
USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010a, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WRD for WWAN Ready */ 2379 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010a, 0xff, 0x00, 0x40) }, 2380 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010a, 0xff, 0xff, 0x40) }, 2381 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010b, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WWD for WWAN Ready */ 2382 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010b, 0xff, 0x00, 0x40) }, 2383 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010b, 0xff, 0xff, 0x40) }, 2384 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010c, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WRD for WWAN Ready */ 2385 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010c, 0xff, 0x00, 0x40) }, 2386 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010c, 0xff, 0xff, 0x40) }, 2387 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010d, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WWD for WWAN Ready */ 2388 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010d, 0xff, 0x00, 0x40) }, 2389 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010d, 0xff, 0xff, 0x40) }, 2388 2390 { USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) }, 2389 2391 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) }, 2390 2392 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) }, ··· 2404 2382 { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) }, 2405 2383 { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) }, 2406 2384 { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM320, 0xff, 0, 0) }, 2385 + { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM770A, 0xff, 0, 0) }, 2407 2386 { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x30) }, 2408 2387 { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x40) 
}, 2409 2388 { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x60) }, 2389 + { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0530, 0xff), /* TCL IK512 MBIM */ 2390 + .driver_info = NCTRL(1) }, 2391 + { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0640, 0xff), /* TCL IK512 ECM */ 2392 + .driver_info = NCTRL(3) }, 2410 2393 { } /* Terminating entry */ 2411 2394 }; 2412 2395 MODULE_DEVICE_TABLE(usb, option_ids);
+13 -5
drivers/video/fbdev/Kconfig
··· 649 649 config FB_ATMEL 650 650 tristate "AT91 LCD Controller support" 651 651 depends on FB && OF && HAVE_CLK && HAS_IOMEM 652 + depends on BACKLIGHT_CLASS_DEVICE 652 653 depends on HAVE_FB_ATMEL || COMPILE_TEST 653 654 select FB_BACKLIGHT 654 655 select FB_IOMEM_HELPERS ··· 661 660 config FB_NVIDIA 662 661 tristate "nVidia Framebuffer Support" 663 662 depends on FB && PCI 664 - select FB_BACKLIGHT if FB_NVIDIA_BACKLIGHT 665 663 select FB_CFB_FILLRECT 666 664 select FB_CFB_COPYAREA 667 665 select FB_CFB_IMAGEBLIT ··· 700 700 config FB_NVIDIA_BACKLIGHT 701 701 bool "Support for backlight control" 702 702 depends on FB_NVIDIA 703 + depends on BACKLIGHT_CLASS_DEVICE=y || BACKLIGHT_CLASS_DEVICE=FB_NVIDIA 704 + select FB_BACKLIGHT 703 705 default y 704 706 help 705 707 Say Y here if you want to control the backlight of your display. ··· 709 707 config FB_RIVA 710 708 tristate "nVidia Riva support" 711 709 depends on FB && PCI 712 - select FB_BACKLIGHT if FB_RIVA_BACKLIGHT 713 710 select FB_CFB_FILLRECT 714 711 select FB_CFB_COPYAREA 715 712 select FB_CFB_IMAGEBLIT ··· 748 747 config FB_RIVA_BACKLIGHT 749 748 bool "Support for backlight control" 750 749 depends on FB_RIVA 750 + depends on BACKLIGHT_CLASS_DEVICE=y || BACKLIGHT_CLASS_DEVICE=FB_RIVA 751 + select FB_BACKLIGHT 751 752 default y 752 753 help 753 754 Say Y here if you want to control the backlight of your display. ··· 937 934 config FB_RADEON 938 935 tristate "ATI Radeon display support" 939 936 depends on FB && PCI 940 - select FB_BACKLIGHT if FB_RADEON_BACKLIGHT 941 937 select FB_CFB_FILLRECT 942 938 select FB_CFB_COPYAREA 943 939 select FB_CFB_IMAGEBLIT ··· 962 960 config FB_RADEON_BACKLIGHT 963 961 bool "Support for backlight control" 964 962 depends on FB_RADEON 963 + depends on BACKLIGHT_CLASS_DEVICE=y || BACKLIGHT_CLASS_DEVICE=FB_RADEON 964 + select FB_BACKLIGHT 965 965 default y 966 966 help 967 967 Say Y here if you want to control the backlight of your display. 
··· 979 975 config FB_ATY128 980 976 tristate "ATI Rage128 display support" 981 977 depends on FB && PCI 982 - select FB_BACKLIGHT if FB_ATY128_BACKLIGHT 983 978 select FB_IOMEM_HELPERS 984 979 select FB_MACMODES if PPC_PMAC 985 980 help ··· 992 989 config FB_ATY128_BACKLIGHT 993 990 bool "Support for backlight control" 994 991 depends on FB_ATY128 992 + depends on BACKLIGHT_CLASS_DEVICE=y || BACKLIGHT_CLASS_DEVICE=FB_ATY128 993 + select FB_BACKLIGHT 995 994 default y 996 995 help 997 996 Say Y here if you want to control the backlight of your display. ··· 1004 999 select FB_CFB_FILLRECT 1005 1000 select FB_CFB_COPYAREA 1006 1001 select FB_CFB_IMAGEBLIT 1007 - select FB_BACKLIGHT if FB_ATY_BACKLIGHT 1008 1002 select FB_IOMEM_FOPS 1009 1003 select FB_MACMODES if PPC 1010 1004 select FB_ATY_CT if SPARC64 && PCI ··· 1044 1040 config FB_ATY_BACKLIGHT 1045 1041 bool "Support for backlight control" 1046 1042 depends on FB_ATY 1043 + depends on BACKLIGHT_CLASS_DEVICE=y || BACKLIGHT_CLASS_DEVICE=FB_ATY 1044 + select FB_BACKLIGHT 1047 1045 default y 1048 1046 help 1049 1047 Say Y here if you want to control the backlight of your display. ··· 1534 1528 depends on FB && HAVE_CLK && HAS_IOMEM 1535 1529 depends on SUPERH || COMPILE_TEST 1536 1530 depends on FB_DEVICE 1531 + depends on BACKLIGHT_CLASS_DEVICE 1537 1532 select FB_BACKLIGHT 1538 1533 select FB_DEFERRED_IO 1539 1534 select FB_DMAMEM_HELPERS ··· 1800 1793 tristate "Solomon SSD1307 framebuffer support" 1801 1794 depends on FB && I2C 1802 1795 depends on GPIOLIB || COMPILE_TEST 1796 + depends on BACKLIGHT_CLASS_DEVICE 1803 1797 select FB_BACKLIGHT 1804 1798 select FB_SYSMEM_HELPERS_DEFERRED 1805 1799 help
+1 -2
drivers/video/fbdev/core/Kconfig
··· 183 183 select FB_SYSMEM_HELPERS 184 184 185 185 config FB_BACKLIGHT 186 - tristate 186 + bool 187 187 depends on FB 188 - select BACKLIGHT_CLASS_DEVICE 189 188 190 189 config FB_MODE_HELPERS 191 190 bool "Enable Video Mode Handling Helpers"
+1 -3
drivers/virt/coco/tdx-guest/tdx-guest.c
··· 124 124 if (!addr) 125 125 return NULL; 126 126 127 - if (set_memory_decrypted((unsigned long)addr, count)) { 128 - free_pages_exact(addr, len); 127 + if (set_memory_decrypted((unsigned long)addr, count)) 129 128 return NULL; 130 - } 131 129 132 130 return addr; 133 131 }
+4 -7
fs/btrfs/ctree.c
··· 654 654 goto error_unlock_cow; 655 655 } 656 656 } 657 + 658 + trace_btrfs_cow_block(root, buf, cow); 657 659 if (unlock_orig) 658 660 btrfs_tree_unlock(buf); 659 661 free_extent_buffer_stale(buf); ··· 712 710 { 713 711 struct btrfs_fs_info *fs_info = root->fs_info; 714 712 u64 search_start; 715 - int ret; 716 713 717 714 if (unlikely(test_bit(BTRFS_ROOT_DELETING, &root->state))) { 718 715 btrfs_abort_transaction(trans, -EUCLEAN); ··· 752 751 * Also We don't care about the error, as it's handled internally. 753 752 */ 754 753 btrfs_qgroup_trace_subtree_after_cow(trans, root, buf); 755 - ret = btrfs_force_cow_block(trans, root, buf, parent, parent_slot, 756 - cow_ret, search_start, 0, nest); 757 - 758 - trace_btrfs_cow_block(root, buf, *cow_ret); 759 - 760 - return ret; 754 + return btrfs_force_cow_block(trans, root, buf, parent, parent_slot, 755 + cow_ret, search_start, 0, nest); 761 756 } 762 757 ALLOW_ERROR_INJECTION(btrfs_cow_block, ERRNO); 763 758
+110 -44
fs/btrfs/inode.c
··· 9078 9078 } 9079 9079 9080 9080 struct btrfs_encoded_read_private { 9081 - wait_queue_head_t wait; 9081 + struct completion done; 9082 9082 void *uring_ctx; 9083 - atomic_t pending; 9083 + refcount_t pending_refs; 9084 9084 blk_status_t status; 9085 9085 }; 9086 9086 ··· 9099 9099 */ 9100 9100 WRITE_ONCE(priv->status, bbio->bio.bi_status); 9101 9101 } 9102 - if (atomic_dec_and_test(&priv->pending)) { 9102 + if (refcount_dec_and_test(&priv->pending_refs)) { 9103 9103 int err = blk_status_to_errno(READ_ONCE(priv->status)); 9104 9104 9105 9105 if (priv->uring_ctx) { 9106 9106 btrfs_uring_read_extent_endio(priv->uring_ctx, err); 9107 9107 kfree(priv); 9108 9108 } else { 9109 - wake_up(&priv->wait); 9109 + complete(&priv->done); 9110 9110 } 9111 9111 } 9112 9112 bio_put(&bbio->bio); ··· 9126 9126 if (!priv) 9127 9127 return -ENOMEM; 9128 9128 9129 - init_waitqueue_head(&priv->wait); 9130 - atomic_set(&priv->pending, 1); 9129 + init_completion(&priv->done); 9130 + refcount_set(&priv->pending_refs, 1); 9131 9131 priv->status = 0; 9132 9132 priv->uring_ctx = uring_ctx; 9133 9133 ··· 9140 9140 size_t bytes = min_t(u64, disk_io_size, PAGE_SIZE); 9141 9141 9142 9142 if (bio_add_page(&bbio->bio, pages[i], bytes, 0) < bytes) { 9143 - atomic_inc(&priv->pending); 9143 + refcount_inc(&priv->pending_refs); 9144 9144 btrfs_submit_bbio(bbio, 0); 9145 9145 9146 9146 bbio = btrfs_bio_alloc(BIO_MAX_VECS, REQ_OP_READ, fs_info, ··· 9155 9155 disk_io_size -= bytes; 9156 9156 } while (disk_io_size); 9157 9157 9158 - atomic_inc(&priv->pending); 9158 + refcount_inc(&priv->pending_refs); 9159 9159 btrfs_submit_bbio(bbio, 0); 9160 9160 9161 9161 if (uring_ctx) { 9162 - if (atomic_dec_return(&priv->pending) == 0) { 9162 + if (refcount_dec_and_test(&priv->pending_refs)) { 9163 9163 ret = blk_status_to_errno(READ_ONCE(priv->status)); 9164 9164 btrfs_uring_read_extent_endio(uring_ctx, ret); 9165 9165 kfree(priv); ··· 9168 9168 9169 9169 return -EIOCBQUEUED; 9170 9170 } else { 9171 - if 
(atomic_dec_return(&priv->pending) != 0) 9172 - io_wait_event(priv->wait, !atomic_read(&priv->pending)); 9171 + if (!refcount_dec_and_test(&priv->pending_refs)) 9172 + wait_for_completion_io(&priv->done); 9173 9173 /* See btrfs_encoded_read_endio() for ordering. */ 9174 9174 ret = blk_status_to_errno(READ_ONCE(priv->status)); 9175 9175 kfree(priv); ··· 9799 9799 struct btrfs_fs_info *fs_info = root->fs_info; 9800 9800 struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree; 9801 9801 struct extent_state *cached_state = NULL; 9802 - struct extent_map *em = NULL; 9803 9802 struct btrfs_chunk_map *map = NULL; 9804 9803 struct btrfs_device *device = NULL; 9805 9804 struct btrfs_swap_info bsi = { 9806 9805 .lowest_ppage = (sector_t)-1ULL, 9807 9806 }; 9807 + struct btrfs_backref_share_check_ctx *backref_ctx = NULL; 9808 + struct btrfs_path *path = NULL; 9808 9809 int ret = 0; 9809 9810 u64 isize; 9810 - u64 start; 9811 + u64 prev_extent_end = 0; 9812 + 9813 + /* 9814 + * Acquire the inode's mmap lock to prevent races with memory mapped 9815 + * writes, as they could happen after we flush delalloc below and before 9816 + * we lock the extent range further below. The inode was already locked 9817 + * up in the call chain. 9818 + */ 9819 + btrfs_assert_inode_locked(BTRFS_I(inode)); 9820 + down_write(&BTRFS_I(inode)->i_mmap_lock); 9811 9821 9812 9822 /* 9813 9823 * If the swap file was just created, make sure delalloc is done. If the ··· 9826 9816 */ 9827 9817 ret = btrfs_wait_ordered_range(BTRFS_I(inode), 0, (u64)-1); 9828 9818 if (ret) 9829 - return ret; 9819 + goto out_unlock_mmap; 9830 9820 9831 9821 /* 9832 9822 * The inode is locked, so these flags won't change after we check them. 
9833 9823 */ 9834 9824 if (BTRFS_I(inode)->flags & BTRFS_INODE_COMPRESS) { 9835 9825 btrfs_warn(fs_info, "swapfile must not be compressed"); 9836 - return -EINVAL; 9826 + ret = -EINVAL; 9827 + goto out_unlock_mmap; 9837 9828 } 9838 9829 if (!(BTRFS_I(inode)->flags & BTRFS_INODE_NODATACOW)) { 9839 9830 btrfs_warn(fs_info, "swapfile must not be copy-on-write"); 9840 - return -EINVAL; 9831 + ret = -EINVAL; 9832 + goto out_unlock_mmap; 9841 9833 } 9842 9834 if (!(BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM)) { 9843 9835 btrfs_warn(fs_info, "swapfile must not be checksummed"); 9844 - return -EINVAL; 9836 + ret = -EINVAL; 9837 + goto out_unlock_mmap; 9838 + } 9839 + 9840 + path = btrfs_alloc_path(); 9841 + backref_ctx = btrfs_alloc_backref_share_check_ctx(); 9842 + if (!path || !backref_ctx) { 9843 + ret = -ENOMEM; 9844 + goto out_unlock_mmap; 9845 9845 } 9846 9846 9847 9847 /* ··· 9866 9846 if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_SWAP_ACTIVATE)) { 9867 9847 btrfs_warn(fs_info, 9868 9848 "cannot activate swapfile while exclusive operation is running"); 9869 - return -EBUSY; 9849 + ret = -EBUSY; 9850 + goto out_unlock_mmap; 9870 9851 } 9871 9852 9872 9853 /* ··· 9881 9860 btrfs_exclop_finish(fs_info); 9882 9861 btrfs_warn(fs_info, 9883 9862 "cannot activate swapfile because snapshot creation is in progress"); 9884 - return -EINVAL; 9863 + ret = -EINVAL; 9864 + goto out_unlock_mmap; 9885 9865 } 9886 9866 /* 9887 9867 * Snapshots can create extents which require COW even if NODATACOW is ··· 9903 9881 btrfs_warn(fs_info, 9904 9882 "cannot activate swapfile because subvolume %llu is being deleted", 9905 9883 btrfs_root_id(root)); 9906 - return -EPERM; 9884 + ret = -EPERM; 9885 + goto out_unlock_mmap; 9907 9886 } 9908 9887 atomic_inc(&root->nr_swapfiles); 9909 9888 spin_unlock(&root->root_item_lock); ··· 9912 9889 isize = ALIGN_DOWN(inode->i_size, fs_info->sectorsize); 9913 9890 9914 9891 lock_extent(io_tree, 0, isize - 1, &cached_state); 9915 - start = 0; 9916 - while 
(start < isize) { 9917 - u64 logical_block_start, physical_block_start; 9892 + while (prev_extent_end < isize) { 9893 + struct btrfs_key key; 9894 + struct extent_buffer *leaf; 9895 + struct btrfs_file_extent_item *ei; 9918 9896 struct btrfs_block_group *bg; 9919 - u64 len = isize - start; 9897 + u64 logical_block_start; 9898 + u64 physical_block_start; 9899 + u64 extent_gen; 9900 + u64 disk_bytenr; 9901 + u64 len; 9920 9902 9921 - em = btrfs_get_extent(BTRFS_I(inode), NULL, start, len); 9922 - if (IS_ERR(em)) { 9923 - ret = PTR_ERR(em); 9903 + key.objectid = btrfs_ino(BTRFS_I(inode)); 9904 + key.type = BTRFS_EXTENT_DATA_KEY; 9905 + key.offset = prev_extent_end; 9906 + 9907 + ret = btrfs_search_slot(NULL, root, &key, path, 0, 0); 9908 + if (ret < 0) 9924 9909 goto out; 9925 - } 9926 9910 9927 - if (em->disk_bytenr == EXTENT_MAP_HOLE) { 9911 + /* 9912 + * If key not found it means we have an implicit hole (NO_HOLES 9913 + * is enabled). 9914 + */ 9915 + if (ret > 0) { 9928 9916 btrfs_warn(fs_info, "swapfile must not have holes"); 9929 9917 ret = -EINVAL; 9930 9918 goto out; 9931 9919 } 9932 - if (em->disk_bytenr == EXTENT_MAP_INLINE) { 9920 + 9921 + leaf = path->nodes[0]; 9922 + ei = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_file_extent_item); 9923 + 9924 + if (btrfs_file_extent_type(leaf, ei) == BTRFS_FILE_EXTENT_INLINE) { 9933 9925 /* 9934 9926 * It's unlikely we'll ever actually find ourselves 9935 9927 * here, as a file small enough to fit inline won't be ··· 9956 9918 ret = -EINVAL; 9957 9919 goto out; 9958 9920 } 9959 - if (extent_map_is_compressed(em)) { 9921 + 9922 + if (btrfs_file_extent_compression(leaf, ei) != BTRFS_COMPRESS_NONE) { 9960 9923 btrfs_warn(fs_info, "swapfile must not be compressed"); 9961 9924 ret = -EINVAL; 9962 9925 goto out; 9963 9926 } 9964 9927 9965 - logical_block_start = extent_map_block_start(em) + (start - em->start); 9966 - len = min(len, em->len - (start - em->start)); 9967 - free_extent_map(em); 9968 - em = NULL; 9928 + 
disk_bytenr = btrfs_file_extent_disk_bytenr(leaf, ei); 9929 + if (disk_bytenr == 0) { 9930 + btrfs_warn(fs_info, "swapfile must not have holes"); 9931 + ret = -EINVAL; 9932 + goto out; 9933 + } 9969 9934 9970 - ret = can_nocow_extent(inode, start, &len, NULL, false, true); 9935 + logical_block_start = disk_bytenr + btrfs_file_extent_offset(leaf, ei); 9936 + extent_gen = btrfs_file_extent_generation(leaf, ei); 9937 + prev_extent_end = btrfs_file_extent_end(path); 9938 + 9939 + if (prev_extent_end > isize) 9940 + len = isize - key.offset; 9941 + else 9942 + len = btrfs_file_extent_num_bytes(leaf, ei); 9943 + 9944 + backref_ctx->curr_leaf_bytenr = leaf->start; 9945 + 9946 + /* 9947 + * Don't need the path anymore, release to avoid deadlocks when 9948 + * calling btrfs_is_data_extent_shared() because when joining a 9949 + * transaction it can block waiting for the current one's commit 9950 + * which in turn may be trying to lock the same leaf to flush 9951 + * delayed items for example. 9952 + */ 9953 + btrfs_release_path(path); 9954 + 9955 + ret = btrfs_is_data_extent_shared(BTRFS_I(inode), disk_bytenr, 9956 + extent_gen, backref_ctx); 9971 9957 if (ret < 0) { 9972 9958 goto out; 9973 - } else if (ret) { 9974 - ret = 0; 9975 - } else { 9959 + } else if (ret > 0) { 9976 9960 btrfs_warn(fs_info, 9977 9961 "swapfile must not be copy-on-write"); 9978 9962 ret = -EINVAL; ··· 10029 9969 10030 9970 physical_block_start = (map->stripes[0].physical + 10031 9971 (logical_block_start - map->start)); 10032 - len = min(len, map->chunk_len - (logical_block_start - map->start)); 10033 9972 btrfs_free_chunk_map(map); 10034 9973 map = NULL; 10035 9974 ··· 10069 10010 if (ret) 10070 10011 goto out; 10071 10012 } 10072 - bsi.start = start; 10013 + bsi.start = key.offset; 10073 10014 bsi.block_start = physical_block_start; 10074 10015 bsi.block_len = len; 10075 10016 } 10076 10017 10077 - start += len; 10018 + if (fatal_signal_pending(current)) { 10019 + ret = -EINTR; 10020 + goto out; 
10021 + } 10022 + 10023 + cond_resched(); 10078 10024 } 10079 10025 10080 10026 if (bsi.block_len) 10081 10027 ret = btrfs_add_swap_extent(sis, &bsi); 10082 10028 10083 10029 out: 10084 - if (!IS_ERR_OR_NULL(em)) 10085 - free_extent_map(em); 10086 10030 if (!IS_ERR_OR_NULL(map)) 10087 10031 btrfs_free_chunk_map(map); 10088 10032 ··· 10098 10036 10099 10037 btrfs_exclop_finish(fs_info); 10100 10038 10039 + out_unlock_mmap: 10040 + up_write(&BTRFS_I(inode)->i_mmap_lock); 10041 + btrfs_free_backref_share_ctx(backref_ctx); 10042 + btrfs_free_path(path); 10101 10043 if (ret) 10102 10044 return ret; 10103 10045
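The btrfs swapfile rework above converts early `return`s into `ret = …; goto out_unlock_mmap;` so the mmap lock, backref context, and path allocated up front are always released on every exit. A minimal userspace sketch of that single-exit unwind pattern (hypothetical names and error codes, not the btrfs code):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical resources standing in for the path / backref_ctx /
 * i_mmap_lock acquired early in btrfs_swap_activate(); counts live
 * allocations so a test can verify nothing leaks on error paths. */
static int live_resources;

static void *acquire(void) { live_resources++; return malloc(1); }
static void release(void *r) { live_resources--; free(r); }

/* Single-exit error handling: every failure jumps to the unwind
 * label instead of returning directly, so resources acquired before
 * the failure are always released. */
static int activate(int fail_step)
{
    int ret = 0;
    void *path = acquire();
    void *ctx = acquire();

    if (fail_step == 1) { ret = -22; goto out; }   /* -EINVAL-style */
    if (fail_step == 2) { ret = -16; goto out; }   /* -EBUSY-style */
out:
    release(ctx);
    release(path);
    return ret;
}
```

The payoff is that adding a new check later only needs `goto out`, instead of repeating every `release()` at each early return.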
+1 -2
fs/btrfs/qgroup.c
··· 1121 1121 fs_info->qgroup_flags = BTRFS_QGROUP_STATUS_FLAG_ON; 1122 1122 if (simple) { 1123 1123 fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_SIMPLE_MODE; 1124 + btrfs_set_fs_incompat(fs_info, SIMPLE_QUOTA); 1124 1125 btrfs_set_qgroup_status_enable_gen(leaf, ptr, trans->transid); 1125 1126 } else { 1126 1127 fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT; ··· 1255 1254 spin_lock(&fs_info->qgroup_lock); 1256 1255 fs_info->quota_root = quota_root; 1257 1256 set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags); 1258 - if (simple) 1259 - btrfs_set_fs_incompat(fs_info, SIMPLE_QUOTA); 1260 1257 spin_unlock(&fs_info->qgroup_lock); 1261 1258 1262 1259 /* Skip rescan for simple qgroups. */
+6
fs/btrfs/relocation.c
··· 2902 2902 const bool use_rst = btrfs_need_stripe_tree_update(fs_info, rc->block_group->flags); 2903 2903 2904 2904 ASSERT(index <= last_index); 2905 + again: 2905 2906 folio = filemap_lock_folio(inode->i_mapping, index); 2906 2907 if (IS_ERR(folio)) { 2907 2908 ··· 2937 2936 if (!folio_test_uptodate(folio)) { 2938 2937 ret = -EIO; 2939 2938 goto release_folio; 2939 + } 2940 + if (folio->mapping != inode->i_mapping) { 2941 + folio_unlock(folio); 2942 + folio_put(folio); 2943 + goto again; 2940 2944 } 2941 2945 } 2942 2946
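This hunk (and the matching one in fs/btrfs/send.c below) re-checks `folio->mapping` after locking the folio and, if it was truncated or remapped in the meantime, drops it and jumps back to `again`. A small userspace sketch of that validate-after-lock retry loop, with a simulated cache and hypothetical names rather than the real folio API:

```c
#include <assert.h>
#include <stddef.h>

struct mapping { int id; };
struct folio { struct mapping *mapping; };

/* Simulated page cache: the folio initially points at a stale
 * mapping, as if truncated between lookup and lock; the simulated
 * "concurrent re-read" in unlock_put() rehomes it. */
static struct mapping want = { 1 }, stale = { 2 };
static struct folio fo = { &stale };

static struct folio *lock_folio(struct mapping *m)
{
    (void)m;
    return &fo;
}

static void unlock_put(struct folio *f)
{
    /* a concurrent re-read moves the folio to the right mapping */
    f->mapping = &want;
}

static struct folio *get_valid_folio(struct mapping *m, int *retries)
{
    struct folio *f;
again:
    f = lock_folio(m);
    /* Validate after locking: if the folio no longer belongs to our
     * mapping it was truncated/reclaimed, so drop it and retry. */
    if (f->mapping != m) {
        (*retries)++;
        unlock_put(f);
        goto again;
    }
    return f;
}
```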
+6
fs/btrfs/send.c
··· 5280 5280 unsigned cur_len = min_t(unsigned, len, 5281 5281 PAGE_SIZE - pg_offset); 5282 5282 5283 + again: 5283 5284 folio = filemap_lock_folio(mapping, index); 5284 5285 if (IS_ERR(folio)) { 5285 5286 page_cache_sync_readahead(mapping, ··· 5312 5311 folio_put(folio); 5313 5312 ret = -EIO; 5314 5313 break; 5314 + } 5315 + if (folio->mapping != mapping) { 5316 + folio_unlock(folio); 5317 + folio_put(folio); 5318 + goto again; 5315 5319 } 5316 5320 } 5317 5321
+3 -3
fs/btrfs/sysfs.c
··· 1118 1118 { 1119 1119 struct btrfs_fs_info *fs_info = to_fs_info(kobj); 1120 1120 1121 - return sysfs_emit(buf, "%u\n", fs_info->super_copy->nodesize); 1121 + return sysfs_emit(buf, "%u\n", fs_info->nodesize); 1122 1122 } 1123 1123 1124 1124 BTRFS_ATTR(, nodesize, btrfs_nodesize_show); ··· 1128 1128 { 1129 1129 struct btrfs_fs_info *fs_info = to_fs_info(kobj); 1130 1130 1131 - return sysfs_emit(buf, "%u\n", fs_info->super_copy->sectorsize); 1131 + return sysfs_emit(buf, "%u\n", fs_info->sectorsize); 1132 1132 } 1133 1133 1134 1134 BTRFS_ATTR(, sectorsize, btrfs_sectorsize_show); ··· 1180 1180 { 1181 1181 struct btrfs_fs_info *fs_info = to_fs_info(kobj); 1182 1182 1183 - return sysfs_emit(buf, "%u\n", fs_info->super_copy->sectorsize); 1183 + return sysfs_emit(buf, "%u\n", fs_info->sectorsize); 1184 1184 } 1185 1185 1186 1186 BTRFS_ATTR(, clone_alignment, btrfs_clone_alignment_show);
+38 -39
fs/ceph/file.c
··· 1066 1066 if (ceph_inode_is_shutdown(inode)) 1067 1067 return -EIO; 1068 1068 1069 - if (!len) 1069 + if (!len || !i_size) 1070 1070 return 0; 1071 1071 /* 1072 1072 * flush any page cache pages in this range. this ··· 1086 1086 int num_pages; 1087 1087 size_t page_off; 1088 1088 bool more; 1089 - int idx; 1089 + int idx = 0; 1090 1090 size_t left; 1091 1091 struct ceph_osd_req_op *op; 1092 1092 u64 read_off = off; ··· 1116 1116 len = read_off + read_len - off; 1117 1117 more = len < iov_iter_count(to); 1118 1118 1119 + op = &req->r_ops[0]; 1120 + if (sparse) { 1121 + extent_cnt = __ceph_sparse_read_ext_count(inode, read_len); 1122 + ret = ceph_alloc_sparse_ext_map(op, extent_cnt); 1123 + if (ret) { 1124 + ceph_osdc_put_request(req); 1125 + break; 1126 + } 1127 + } 1128 + 1119 1129 num_pages = calc_pages_for(read_off, read_len); 1120 1130 page_off = offset_in_page(off); 1121 1131 pages = ceph_alloc_page_vector(num_pages, GFP_KERNEL); ··· 1137 1127 1138 1128 osd_req_op_extent_osd_data_pages(req, 0, pages, read_len, 1139 1129 offset_in_page(read_off), 1140 - false, false); 1141 - 1142 - op = &req->r_ops[0]; 1143 - if (sparse) { 1144 - extent_cnt = __ceph_sparse_read_ext_count(inode, read_len); 1145 - ret = ceph_alloc_sparse_ext_map(op, extent_cnt); 1146 - if (ret) { 1147 - ceph_osdc_put_request(req); 1148 - break; 1149 - } 1150 - } 1130 + false, true); 1151 1131 1152 1132 ceph_osdc_start_request(osdc, req); 1153 1133 ret = ceph_osdc_wait_request(osdc, req); ··· 1160 1160 else if (ret == -ENOENT) 1161 1161 ret = 0; 1162 1162 1163 - if (ret > 0 && IS_ENCRYPTED(inode)) { 1163 + if (ret < 0) { 1164 + ceph_osdc_put_request(req); 1165 + if (ret == -EBLOCKLISTED) 1166 + fsc->blocklisted = true; 1167 + break; 1168 + } 1169 + 1170 + if (IS_ENCRYPTED(inode)) { 1164 1171 int fret; 1165 1172 1166 1173 fret = ceph_fscrypt_decrypt_extents(inode, pages, ··· 1193 1186 ret = min_t(ssize_t, fret, len); 1194 1187 } 1195 1188 1196 - ceph_osdc_put_request(req); 1197 - 1198 1189 /* 
Short read but not EOF? Zero out the remainder. */ 1199 - if (ret >= 0 && ret < len && (off + ret < i_size)) { 1190 + if (ret < len && (off + ret < i_size)) { 1200 1191 int zlen = min(len - ret, i_size - off - ret); 1201 1192 int zoff = page_off + ret; 1202 1193 ··· 1204 1199 ret += zlen; 1205 1200 } 1206 1201 1207 - idx = 0; 1208 - if (ret <= 0) 1209 - left = 0; 1210 - else if (off + ret > i_size) 1211 - left = i_size - off; 1202 + if (off + ret > i_size) 1203 + left = (i_size > off) ? i_size - off : 0; 1212 1204 else 1213 1205 left = ret; 1206 + 1214 1207 while (left > 0) { 1215 1208 size_t plen, copied; 1216 1209 ··· 1224 1221 break; 1225 1222 } 1226 1223 } 1227 - ceph_release_page_vector(pages, num_pages); 1228 1224 1229 - if (ret < 0) { 1230 - if (ret == -EBLOCKLISTED) 1231 - fsc->blocklisted = true; 1232 - break; 1233 - } 1225 + ceph_osdc_put_request(req); 1234 1226 1235 1227 if (off >= i_size || !more) 1236 1228 break; ··· 1551 1553 break; 1552 1554 } 1553 1555 1556 + op = &req->r_ops[0]; 1557 + if (!write && sparse) { 1558 + extent_cnt = __ceph_sparse_read_ext_count(inode, size); 1559 + ret = ceph_alloc_sparse_ext_map(op, extent_cnt); 1560 + if (ret) { 1561 + ceph_osdc_put_request(req); 1562 + break; 1563 + } 1564 + } 1565 + 1554 1566 len = iter_get_bvecs_alloc(iter, size, &bvecs, &num_pages); 1555 1567 if (len < 0) { 1556 1568 ceph_osdc_put_request(req); ··· 1569 1561 } 1570 1562 if (len != size) 1571 1563 osd_req_op_extent_update(req, 0, len); 1564 + 1565 + osd_req_op_extent_osd_data_bvecs(req, 0, bvecs, num_pages, len); 1572 1566 1573 1567 /* 1574 1568 * To simplify error handling, allow AIO when IO within i_size ··· 1601 1591 PAGE_ALIGN(pos + len) - 1); 1602 1592 1603 1593 req->r_mtime = mtime; 1604 - } 1605 - 1606 - osd_req_op_extent_osd_data_bvecs(req, 0, bvecs, num_pages, len); 1607 - op = &req->r_ops[0]; 1608 - if (sparse) { 1609 - extent_cnt = __ceph_sparse_read_ext_count(inode, size); 1610 - ret = ceph_alloc_sparse_ext_map(op, extent_cnt); 1611 - 
if (ret) { 1612 - ceph_osdc_put_request(req); 1613 - break; 1614 - } 1615 1594 } 1616 1595 1617 1596 if (aio_req) {
+4 -5
fs/ceph/mds_client.c
··· 2800 2800 2801 2801 if (pos < 0) { 2802 2802 /* 2803 - * A rename didn't occur, but somehow we didn't end up where 2804 - * we thought we would. Throw a warning and try again. 2803 + * The path is longer than PATH_MAX and this function 2804 + * cannot ever succeed. Creating paths that long is 2805 + * possible with Ceph, but Linux cannot use them. 2805 2806 */ 2806 - pr_warn_client(cl, "did not end path lookup where expected (pos = %d)\n", 2807 - pos); 2808 - goto retry; 2807 + return ERR_PTR(-ENAMETOOLONG); 2809 2808 } 2810 2809 2811 2810 *pbase = base;
+2
fs/ceph/super.c
··· 431 431 432 432 switch (token) { 433 433 case Opt_snapdirname: 434 + if (strlen(param->string) > NAME_MAX) 435 + return invalfc(fc, "snapdirname too long"); 434 436 kfree(fsopt->snapdir_name); 435 437 fsopt->snapdir_name = param->string; 436 438 param->string = NULL;
+1 -1
fs/hugetlbfs/inode.c
··· 825 825 error = PTR_ERR(folio); 826 826 goto out; 827 827 } 828 - folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size)); 828 + folio_zero_user(folio, addr); 829 829 __folio_mark_uptodate(folio); 830 830 error = hugetlb_add_to_page_cache(folio, mapping, index); 831 831 if (unlikely(error)) {
+1 -1
fs/nfs/pnfs.c
··· 1308 1308 enum pnfs_iomode *iomode) 1309 1309 { 1310 1310 /* Serialise LAYOUTGET/LAYOUTRETURN */ 1311 - if (atomic_read(&lo->plh_outstanding) != 0) 1311 + if (atomic_read(&lo->plh_outstanding) != 0 && lo->plh_return_seq == 0) 1312 1312 return false; 1313 1313 if (test_and_set_bit(NFS_LAYOUT_RETURN_LOCK, &lo->plh_flags)) 1314 1314 return false;
+1
fs/nfs/super.c
··· 73 73 #include "nfs.h" 74 74 #include "netns.h" 75 75 #include "sysfs.h" 76 + #include "nfs4idmap.h" 76 77 77 78 #define NFSDBG_FACILITY NFSDBG_VFS 78 79
+6 -25
fs/nfsd/export.c
··· 40 40 #define EXPKEY_HASHMAX (1 << EXPKEY_HASHBITS) 41 41 #define EXPKEY_HASHMASK (EXPKEY_HASHMAX -1) 42 42 43 - static void expkey_put_work(struct work_struct *work) 43 + static void expkey_put(struct kref *ref) 44 44 { 45 - struct svc_expkey *key = 46 - container_of(to_rcu_work(work), struct svc_expkey, ek_rcu_work); 45 + struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref); 47 46 48 47 if (test_bit(CACHE_VALID, &key->h.flags) && 49 48 !test_bit(CACHE_NEGATIVE, &key->h.flags)) 50 49 path_put(&key->ek_path); 51 50 auth_domain_put(key->ek_client); 52 - kfree(key); 53 - } 54 - 55 - static void expkey_put(struct kref *ref) 56 - { 57 - struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref); 58 - 59 - INIT_RCU_WORK(&key->ek_rcu_work, expkey_put_work); 60 - queue_rcu_work(system_wq, &key->ek_rcu_work); 51 + kfree_rcu(key, ek_rcu); 61 52 } 62 53 63 54 static int expkey_upcall(struct cache_detail *cd, struct cache_head *h) ··· 355 364 EXP_STATS_COUNTERS_NUM); 356 365 } 357 366 358 - static void svc_export_put_work(struct work_struct *work) 367 + static void svc_export_put(struct kref *ref) 359 368 { 360 - struct svc_export *exp = 361 - container_of(to_rcu_work(work), struct svc_export, ex_rcu_work); 362 - 369 + struct svc_export *exp = container_of(ref, struct svc_export, h.ref); 363 370 path_put(&exp->ex_path); 364 371 auth_domain_put(exp->ex_client); 365 372 nfsd4_fslocs_free(&exp->ex_fslocs); 366 373 export_stats_destroy(exp->ex_stats); 367 374 kfree(exp->ex_stats); 368 375 kfree(exp->ex_uuid); 369 - kfree(exp); 370 - } 371 - 372 - static void svc_export_put(struct kref *ref) 373 - { 374 - struct svc_export *exp = container_of(ref, struct svc_export, h.ref); 375 - 376 - INIT_RCU_WORK(&exp->ex_rcu_work, svc_export_put_work); 377 - queue_rcu_work(system_wq, &exp->ex_rcu_work); 376 + kfree_rcu(exp, ex_rcu); 378 377 } 379 378 380 379 static int svc_export_upcall(struct cache_detail *cd, struct cache_head *h)
+2 -2
fs/nfsd/export.h
··· 75 75 u32 ex_layout_types; 76 76 struct nfsd4_deviceid_map *ex_devid_map; 77 77 struct cache_detail *cd; 78 - struct rcu_work ex_rcu_work; 78 + struct rcu_head ex_rcu; 79 79 unsigned long ex_xprtsec_modes; 80 80 struct export_stats *ex_stats; 81 81 }; ··· 92 92 u32 ek_fsid[6]; 93 93 94 94 struct path ek_path; 95 - struct rcu_work ek_rcu_work; 95 + struct rcu_head ek_rcu; 96 96 }; 97 97 98 98 #define EX_ISSYNC(exp) (!((exp)->ex_flags & NFSEXP_ASYNC))
+1 -3
fs/nfsd/nfs4callback.c
··· 1100 1100 args.authflavor = clp->cl_cred.cr_flavor; 1101 1101 clp->cl_cb_ident = conn->cb_ident; 1102 1102 } else { 1103 - if (!conn->cb_xprt) 1103 + if (!conn->cb_xprt || !ses) 1104 1104 return -EINVAL; 1105 1105 clp->cl_cb_session = ses; 1106 1106 args.bc_xprt = conn->cb_xprt; ··· 1522 1522 ses = c->cn_session; 1523 1523 } 1524 1524 spin_unlock(&clp->cl_lock); 1525 - if (!c) 1526 - return; 1527 1525 1528 1526 err = setup_callback_client(clp, &conn, ses); 1529 1527 if (err) {
+8 -5
fs/nfsd/nfs4proc.c
··· 1347 1347 { 1348 1348 if (!refcount_dec_and_test(&copy->refcount)) 1349 1349 return; 1350 - atomic_dec(&copy->cp_nn->pending_async_copies); 1351 1350 kfree(copy->cp_src); 1352 1351 kfree(copy); 1353 1352 } ··· 1869 1870 set_bit(NFSD4_COPY_F_COMPLETED, &copy->cp_flags); 1870 1871 trace_nfsd_copy_async_done(copy); 1871 1872 nfsd4_send_cb_offload(copy); 1873 + atomic_dec(&copy->cp_nn->pending_async_copies); 1872 1874 return 0; 1873 1875 } 1874 1876 ··· 1927 1927 /* Arbitrary cap on number of pending async copy operations */ 1928 1928 if (atomic_inc_return(&nn->pending_async_copies) > 1929 1929 (int)rqstp->rq_pool->sp_nrthreads) 1930 - goto out_err; 1930 + goto out_dec_async_copy_err; 1931 1931 async_copy->cp_src = kmalloc(sizeof(*async_copy->cp_src), GFP_KERNEL); 1932 1932 if (!async_copy->cp_src) 1933 - goto out_err; 1933 + goto out_dec_async_copy_err; 1934 1934 if (!nfs4_init_copy_state(nn, copy)) 1935 - goto out_err; 1935 + goto out_dec_async_copy_err; 1936 1936 memcpy(&result->cb_stateid, &copy->cp_stateid.cs_stid, 1937 1937 sizeof(result->cb_stateid)); 1938 1938 dup_copy_fields(copy, async_copy); 1939 1939 async_copy->copy_task = kthread_create(nfsd4_do_async_copy, 1940 1940 async_copy, "%s", "copy thread"); 1941 1941 if (IS_ERR(async_copy->copy_task)) 1942 - goto out_err; 1942 + goto out_dec_async_copy_err; 1943 1943 spin_lock(&async_copy->cp_clp->async_lock); 1944 1944 list_add(&async_copy->copies, 1945 1945 &async_copy->cp_clp->async_copies); ··· 1954 1954 trace_nfsd_copy_done(copy, status); 1955 1955 release_copy_files(copy); 1956 1956 return status; 1957 + out_dec_async_copy_err: 1958 + if (async_copy) 1959 + atomic_dec(&nn->pending_async_copies); 1957 1960 out_err: 1958 1961 if (nfsd4_ssc_is_inter(copy)) { 1959 1962 /*
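The COPY fix above moves the `pending_async_copies` decrement out of the generic release path and adds an `out_dec_async_copy_err` label, so the counter bumped at admission is dropped exactly once: on any setup failure, or on async completion. A userspace sketch of that increment/decrement pairing (hypothetical names):

```c
#include <assert.h>

static int pending;     /* stands in for nn->pending_async_copies */
#define MAX_PENDING 2   /* stands in for the per-pool thread cap */

/* Admit one async operation: bump the counter first, and undo the
 * bump on every failure path via a dedicated error label, so a
 * successful start hands ownership of the count to completion. */
static int start_async(int fail_setup)
{
    if (++pending > MAX_PENDING)
        goto out_dec;            /* over cap: undo our increment */
    if (fail_setup)
        goto out_dec;            /* setup failed after increment */
    return 0;                    /* success: completion drops it */
out_dec:
    pending--;
    return -1;
}

static void complete_async(void) { pending--; }
```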
+1
fs/nilfs2/btnode.c
··· 35 35 ii->i_flags = 0; 36 36 memset(&ii->i_bmap_data, 0, sizeof(struct nilfs_bmap)); 37 37 mapping_set_gfp_mask(btnc_inode->i_mapping, GFP_NOFS); 38 + btnc_inode->i_mapping->a_ops = &nilfs_buffer_cache_aops; 38 39 } 39 40 40 41 void nilfs_btnode_cache_clear(struct address_space *btnc)
+1 -1
fs/nilfs2/gcinode.c
··· 163 163 164 164 inode->i_mode = S_IFREG; 165 165 mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS); 166 - inode->i_mapping->a_ops = &empty_aops; 166 + inode->i_mapping->a_ops = &nilfs_buffer_cache_aops; 167 167 168 168 ii->i_flags = 0; 169 169 nilfs_bmap_init_gc(ii->i_bmap);
+12 -1
fs/nilfs2/inode.c
··· 276 276 .is_partially_uptodate = block_is_partially_uptodate, 277 277 }; 278 278 279 + const struct address_space_operations nilfs_buffer_cache_aops = { 280 + .invalidate_folio = block_invalidate_folio, 281 + }; 282 + 279 283 static int nilfs_insert_inode_locked(struct inode *inode, 280 284 struct nilfs_root *root, 281 285 unsigned long ino) ··· 548 544 inode = nilfs_iget_locked(sb, root, ino); 549 545 if (unlikely(!inode)) 550 546 return ERR_PTR(-ENOMEM); 551 - if (!(inode->i_state & I_NEW)) 547 + 548 + if (!(inode->i_state & I_NEW)) { 549 + if (!inode->i_nlink) { 550 + iput(inode); 551 + return ERR_PTR(-ESTALE); 552 + } 552 553 return inode; 554 + } 553 555 554 556 err = __nilfs_read_inode(sb, root, ino, inode); 555 557 if (unlikely(err)) { ··· 685 675 NILFS_I(s_inode)->i_flags = 0; 686 676 memset(NILFS_I(s_inode)->i_bmap, 0, sizeof(struct nilfs_bmap)); 687 677 mapping_set_gfp_mask(s_inode->i_mapping, GFP_NOFS); 678 + s_inode->i_mapping->a_ops = &nilfs_buffer_cache_aops; 688 679 689 680 err = nilfs_attach_btree_node_cache(s_inode); 690 681 if (unlikely(err)) {
+5
fs/nilfs2/namei.c
··· 67 67 inode = NULL; 68 68 } else { 69 69 inode = nilfs_iget(dir->i_sb, NILFS_I(dir)->i_root, ino); 70 + if (inode == ERR_PTR(-ESTALE)) { 71 + nilfs_error(dir->i_sb, 72 + "deleted inode referenced: %lu", ino); 73 + return ERR_PTR(-EIO); 74 + } 70 75 } 71 76 72 77 return d_splice_alias(inode, dentry);
+1
fs/nilfs2/nilfs.h
··· 401 401 extern const struct inode_operations nilfs_file_inode_operations; 402 402 extern const struct file_operations nilfs_file_operations; 403 403 extern const struct address_space_operations nilfs_aops; 404 + extern const struct address_space_operations nilfs_buffer_cache_aops; 404 405 extern const struct inode_operations nilfs_dir_inode_operations; 405 406 extern const struct inode_operations nilfs_special_inode_operations; 406 407 extern const struct inode_operations nilfs_symlink_inode_operations;
+5 -22
fs/ocfs2/localalloc.c
··· 971 971 start = count = 0; 972 972 left = le32_to_cpu(alloc->id1.bitmap1.i_total); 973 973 974 - while ((bit_off = ocfs2_find_next_zero_bit(bitmap, left, start)) < 975 - left) { 976 - if (bit_off == start) { 974 + while (1) { 975 + bit_off = ocfs2_find_next_zero_bit(bitmap, left, start); 976 + if ((bit_off < left) && (bit_off == start)) { 977 977 count++; 978 978 start++; 979 979 continue; ··· 998 998 } 999 999 } 1000 1000 1001 + if (bit_off >= left) 1002 + break; 1001 1003 count = 1; 1002 1004 start = bit_off + 1; 1003 - } 1004 - 1005 - /* clear the contiguous bits until the end boundary */ 1006 - if (count) { 1007 - blkno = la_start_blk + 1008 - ocfs2_clusters_to_blocks(osb->sb, 1009 - start - count); 1010 - 1011 - trace_ocfs2_sync_local_to_main_free( 1012 - count, start - count, 1013 - (unsigned long long)la_start_blk, 1014 - (unsigned long long)blkno); 1015 - 1016 - status = ocfs2_release_clusters(handle, 1017 - main_bm_inode, 1018 - main_bm_bh, blkno, 1019 - count); 1020 - if (status < 0) 1021 - mlog_errno(status); 1022 1005 } 1023 1006 1024 1007 bail:
-1
fs/smb/client/Kconfig
··· 2 2 config CIFS 3 3 tristate "SMB3 and CIFS support (advanced network filesystem)" 4 4 depends on INET 5 - select NETFS_SUPPORT 6 5 select NLS 7 6 select NLS_UCS2_UTILS 8 7 select CRYPTO
+1 -1
fs/smb/client/cifsfs.c
··· 398 398 cifs_inode = alloc_inode_sb(sb, cifs_inode_cachep, GFP_KERNEL); 399 399 if (!cifs_inode) 400 400 return NULL; 401 - cifs_inode->cifsAttrs = 0x20; /* default */ 401 + cifs_inode->cifsAttrs = ATTR_ARCHIVE; /* default */ 402 402 cifs_inode->time = 0; 403 403 /* 404 404 * Until the file is open and we have gotten oplock info back from the
-2
fs/smb/client/cifsproto.h
··· 614 614 void cifs_free_hash(struct shash_desc **sdesc); 615 615 616 616 int cifs_try_adding_channels(struct cifs_ses *ses); 617 - bool is_server_using_iface(struct TCP_Server_Info *server, 618 - struct cifs_server_iface *iface); 619 617 bool is_ses_using_iface(struct cifs_ses *ses, struct cifs_server_iface *iface); 620 618 void cifs_ses_mark_for_reconnect(struct cifs_ses *ses); 621 619
+26 -10
fs/smb/client/connect.c
··· 987 987 msleep(125); 988 988 if (cifs_rdma_enabled(server)) 989 989 smbd_destroy(server); 990 + 990 991 if (server->ssocket) { 991 992 sock_release(server->ssocket); 992 993 server->ssocket = NULL; 994 + 995 + /* Release netns reference for the socket. */ 996 + put_net(cifs_net_ns(server)); 993 997 } 994 998 995 999 if (!list_empty(&server->pending_mid_q)) { ··· 1041 1037 */ 1042 1038 } 1043 1039 1040 + /* Release netns reference for this server. */ 1044 1041 put_net(cifs_net_ns(server)); 1045 1042 kfree(server->leaf_fullpath); 1046 1043 kfree(server); ··· 1718 1713 1719 1714 tcp_ses->ops = ctx->ops; 1720 1715 tcp_ses->vals = ctx->vals; 1716 + 1717 + /* Grab netns reference for this server. */ 1721 1718 cifs_set_net_ns(tcp_ses, get_net(current->nsproxy->net_ns)); 1722 1719 1723 1720 tcp_ses->conn_id = atomic_inc_return(&tcpSesNextId); ··· 1851 1844 out_err_crypto_release: 1852 1845 cifs_crypto_secmech_release(tcp_ses); 1853 1846 1847 + /* Release netns reference for this server. */ 1854 1848 put_net(cifs_net_ns(tcp_ses)); 1855 1849 1856 1850 out_err: ··· 1860 1852 cifs_put_tcp_session(tcp_ses->primary_server, false); 1861 1853 kfree(tcp_ses->hostname); 1862 1854 kfree(tcp_ses->leaf_fullpath); 1863 - if (tcp_ses->ssocket) 1855 + if (tcp_ses->ssocket) { 1864 1856 sock_release(tcp_ses->ssocket); 1857 + put_net(cifs_net_ns(tcp_ses)); 1858 + } 1865 1859 kfree(tcp_ses); 1866 1860 } 1867 1861 return ERR_PTR(rc); ··· 3141 3131 socket = server->ssocket; 3142 3132 } else { 3143 3133 struct net *net = cifs_net_ns(server); 3144 - struct sock *sk; 3145 3134 3146 - rc = __sock_create(net, sfamily, SOCK_STREAM, 3147 - IPPROTO_TCP, &server->ssocket, 1); 3135 + rc = sock_create_kern(net, sfamily, SOCK_STREAM, IPPROTO_TCP, &server->ssocket); 3148 3136 if (rc < 0) { 3149 3137 cifs_server_dbg(VFS, "Error %d creating socket\n", rc); 3150 3138 return rc; 3151 3139 } 3152 3140 3153 - sk = server->ssocket->sk; 3154 - __netns_tracker_free(net, &sk->ns_tracker, false); 3155 - 
sk->sk_net_refcnt = 1; 3156 - get_net_track(net, &sk->ns_tracker, GFP_KERNEL); 3157 - sock_inuse_add(net, 1); 3141 + /* 3142 + * Grab netns reference for the socket. 3143 + * 3144 + * It'll be released here, on error, or in clean_demultiplex_info() upon server 3145 + * teardown. 3146 + */ 3147 + get_net(net); 3158 3148 3159 3149 /* BB other socket options to set KEEPALIVE, NODELAY? */ 3160 3150 cifs_dbg(FYI, "Socket created\n"); ··· 3168 3158 } 3169 3159 3170 3160 rc = bind_socket(server); 3171 - if (rc < 0) 3161 + if (rc < 0) { 3162 + put_net(cifs_net_ns(server)); 3172 3163 return rc; 3164 + } 3173 3165 3174 3166 /* 3175 3167 * Eventually check for other socket options to change from ··· 3208 3196 if (rc < 0) { 3209 3197 cifs_dbg(FYI, "Error %d connecting to server\n", rc); 3210 3198 trace_smb3_connect_err(server->hostname, server->conn_id, &server->dstaddr, rc); 3199 + put_net(cifs_net_ns(server)); 3211 3200 sock_release(socket); 3212 3201 server->ssocket = NULL; 3213 3202 return rc; ··· 3216 3203 trace_smb3_connect_done(server->hostname, server->conn_id, &server->dstaddr); 3217 3204 if (sport == htons(RFC1001_PORT)) 3218 3205 rc = ip_rfc1001_connect(server); 3206 + 3207 + if (rc < 0) 3208 + put_net(cifs_net_ns(server)); 3219 3209 3220 3210 return rc; 3221 3211 }
+5 -1
fs/smb/client/file.c
··· 990 990 } 991 991 992 992 /* Get the cached handle as SMB2 close is deferred */ 993 - rc = cifs_get_readable_path(tcon, full_path, &cfile); 993 + if (OPEN_FMODE(file->f_flags) & FMODE_WRITE) { 994 + rc = cifs_get_writable_path(tcon, full_path, FIND_WR_FSUID_ONLY, &cfile); 995 + } else { 996 + rc = cifs_get_readable_path(tcon, full_path, &cfile); 997 + } 994 998 if (rc == 0) { 995 999 if (file->f_flags == cfile->f_flags) { 996 1000 file->private_data = cfile;
-25
fs/smb/client/sess.c
··· 27 27 cifs_ses_add_channel(struct cifs_ses *ses, 28 28 struct cifs_server_iface *iface); 29 29 30 - bool 31 - is_server_using_iface(struct TCP_Server_Info *server, 32 - struct cifs_server_iface *iface) 33 - { 34 - struct sockaddr_in *i4 = (struct sockaddr_in *)&iface->sockaddr; 35 - struct sockaddr_in6 *i6 = (struct sockaddr_in6 *)&iface->sockaddr; 36 - struct sockaddr_in *s4 = (struct sockaddr_in *)&server->dstaddr; 37 - struct sockaddr_in6 *s6 = (struct sockaddr_in6 *)&server->dstaddr; 38 - 39 - if (server->dstaddr.ss_family != iface->sockaddr.ss_family) 40 - return false; 41 - if (server->dstaddr.ss_family == AF_INET) { 42 - if (s4->sin_addr.s_addr != i4->sin_addr.s_addr) 43 - return false; 44 - } else if (server->dstaddr.ss_family == AF_INET6) { 45 - if (memcmp(&s6->sin6_addr, &i6->sin6_addr, 46 - sizeof(i6->sin6_addr)) != 0) 47 - return false; 48 - } else { 49 - /* unknown family.. */ 50 - return false; 51 - } 52 - return true; 53 - } 54 - 55 30 bool is_ses_using_iface(struct cifs_ses *ses, struct cifs_server_iface *iface) 56 31 { 57 32 int i;
+4 -1
fs/smb/client/smb2pdu.c
··· 4840 4840 if (written > wdata->subreq.len) 4841 4841 written &= 0xFFFF; 4842 4842 4843 + cifs_stats_bytes_written(tcon, written); 4844 + 4843 4845 if (written < wdata->subreq.len) 4844 4846 wdata->result = -ENOSPC; 4845 4847 else ··· 5158 5156 cifs_dbg(VFS, "Send error in write = %d\n", rc); 5159 5157 } else { 5160 5158 *nbytes = le32_to_cpu(rsp->DataLength); 5159 + cifs_stats_bytes_written(io_parms->tcon, *nbytes); 5161 5160 trace_smb3_write_done(0, 0, xid, 5162 5161 req->PersistentFileId, 5163 5162 io_parms->tcon->tid, ··· 6207 6204 req->StructureSize = cpu_to_le16(36); 6208 6205 total_len += 12; 6209 6206 6210 - memcpy(req->LeaseKey, lease_key, 16); 6207 + memcpy(req->LeaseKey, lease_key, SMB2_LEASE_KEY_SIZE); 6211 6208 req->LeaseState = lease_state; 6212 6209 6213 6210 flags |= CIFS_NO_RSP_BUF;
+7 -2
include/linux/alloc_tag.h
··· 63 63 #else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ 64 64 65 65 static inline bool is_codetag_empty(union codetag_ref *ref) { return false; } 66 - static inline void set_codetag_empty(union codetag_ref *ref) {} 66 + 67 + static inline void set_codetag_empty(union codetag_ref *ref) 68 + { 69 + if (ref) 70 + ref->ct = NULL; 71 + } 67 72 68 73 #endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ 69 74 ··· 140 135 #ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG 141 136 static inline void alloc_tag_add_check(union codetag_ref *ref, struct alloc_tag *tag) 142 137 { 143 - WARN_ONCE(ref && ref->ct, 138 + WARN_ONCE(ref && ref->ct && !is_codetag_empty(ref), 144 139 "alloc_tag was not cleared (got tag for %s:%u)\n", 145 140 ref->ct->filename, ref->ct->lineno); 146 141
+6
include/linux/cacheinfo.h
··· 155 155 156 156 #ifndef CONFIG_ARCH_HAS_CPU_CACHE_ALIASING 157 157 #define cpu_dcache_is_aliasing() false 158 + #define cpu_icache_is_aliasing() cpu_dcache_is_aliasing() 158 159 #else 159 160 #include <asm/cachetype.h> 161 + 162 + #ifndef cpu_icache_is_aliasing 163 + #define cpu_icache_is_aliasing() cpu_dcache_is_aliasing() 164 + #endif 165 + 160 166 #endif 161 167 162 168 #endif /* _LINUX_CACHEINFO_H */
+10 -3
include/linux/dmaengine.h
··· 84 84 DMA_TRANS_NONE, 85 85 }; 86 86 87 - /** 87 + /* 88 88 * Interleaved Transfer Request 89 89 * ---------------------------- 90 90 * A chunk is collection of contiguous bytes to be transferred. ··· 223 223 }; 224 224 225 225 /** 226 - * enum pq_check_flags - result of async_{xor,pq}_zero_sum operations 226 + * enum sum_check_flags - result of async_{xor,pq}_zero_sum operations 227 227 * @SUM_CHECK_P_RESULT - 1 if xor zero sum error, 0 otherwise 228 228 * @SUM_CHECK_Q_RESULT - 1 if reed-solomon zero sum error, 0 otherwise 229 229 */ ··· 286 286 * pointer to the engine's metadata area 287 287 * 4. Read out the metadata from the pointer 288 288 * 289 - * Note: the two mode is not compatible and clients must use one mode for a 289 + * Warning: the two modes are not compatible and clients must use one mode for a 290 290 * descriptor. 291 291 */ 292 292 enum dma_desc_metadata_mode { ··· 594 594 * @phys: physical address of the descriptor 595 595 * @chan: target channel for this operation 596 596 * @tx_submit: accept the descriptor, assign ordered cookie and mark the 597 + * @desc_free: driver's callback function to free a resusable descriptor 598 + * after completion 597 599 * descriptor pending. To be pushed on .issue_pending() call 598 600 * @callback: routine to call after this operation is complete 601 + * @callback_result: error result from a DMA transaction 599 602 * @callback_param: general parameter to pass to the callback routine 603 + * @unmap: hook for generic DMA unmap data 600 604 * @desc_metadata_mode: core managed metadata mode to protect mixed use of 601 605 * DESC_METADATA_CLIENT or DESC_METADATA_ENGINE. 
Otherwise 602 606 * DESC_METADATA_NONE ··· 831 827 * @device_prep_dma_memset: prepares a memset operation 832 828 * @device_prep_dma_memset_sg: prepares a memset operation over a scatter list 833 829 * @device_prep_dma_interrupt: prepares an end of chain interrupt operation 830 + * @device_prep_peripheral_dma_vec: prepares a scatter-gather DMA transfer, 831 + * where the address and size of each segment is located in one entry of 832 + * the dma_vec array. 834 833 * @device_prep_slave_sg: prepares a slave dma operation 835 834 * @device_prep_dma_cyclic: prepare a cyclic dma operation suitable for audio. 836 835 * The function takes a buffer of size buf_len. The callback function will
+7 -1
include/linux/highmem.h
··· 224 224 struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma, 225 225 unsigned long vaddr) 226 226 { 227 - return vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr); 227 + struct folio *folio; 228 + 229 + folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vaddr); 230 + if (folio && user_alloc_needs_zeroing()) 231 + clear_user_highpage(&folio->page, vaddr); 232 + 233 + return folio; 228 234 } 229 235 #endif 230 236
+13 -3
include/linux/if_vlan.h
··· 585 585 * vlan_get_protocol - get protocol EtherType. 586 586 * @skb: skbuff to query 587 587 * @type: first vlan protocol 588 + * @mac_offset: MAC offset 588 589 * @depth: buffer to store length of eth and vlan tags in bytes 589 590 * 590 591 * Returns: the EtherType of the packet, regardless of whether it is 591 592 * vlan encapsulated (normal or hardware accelerated) or not. 592 593 */ 593 - static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type, 594 - int *depth) 594 + static inline __be16 __vlan_get_protocol_offset(const struct sk_buff *skb, 595 + __be16 type, 596 + int mac_offset, 597 + int *depth) 595 598 { 596 599 unsigned int vlan_depth = skb->mac_len, parse_depth = VLAN_MAX_DEPTH; 597 600 ··· 613 610 do { 614 611 struct vlan_hdr vhdr, *vh; 615 612 616 - vh = skb_header_pointer(skb, vlan_depth, sizeof(vhdr), &vhdr); 613 + vh = skb_header_pointer(skb, mac_offset + vlan_depth, 614 + sizeof(vhdr), &vhdr); 617 615 if (unlikely(!vh || !--parse_depth)) 618 616 return 0; 619 617 ··· 627 623 *depth = vlan_depth; 628 624 629 625 return type; 626 + } 627 + 628 + static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type, 629 + int *depth) 630 + { 631 + return __vlan_get_protocol_offset(skb, type, 0, depth); 630 632 } 631 633 632 634 /**
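The if_vlan.h change threads a `mac_offset` through the tag walk because, with MSG_PEEK'd AF_PACKET datagrams, the MAC header is not at offset zero in the buffer. A simplified standalone sketch of walking stacked 802.1Q/802.1ad tags at an offset (plain byte-buffer parsing, not the kernel's skb helpers):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define ETH_ALEN  6
#define VLAN_HLEN 4

static uint16_t rd16(const uint8_t *p)
{
    return (uint16_t)((p[0] << 8) | p[1]);   /* network byte order */
}

/* Walk stacked VLAN tags in a buffer whose MAC header starts
 * mac_offset bytes in, returning the encapsulated EtherType, or 0
 * if the buffer is truncated. */
static uint16_t get_protocol(const uint8_t *buf, size_t len, size_t mac_offset)
{
    size_t off = mac_offset + 2 * ETH_ALEN;  /* skip dst + src MAC */
    uint16_t type;

    if (off + 2 > len)
        return 0;
    type = rd16(buf + off);
    off += 2;

    while (type == 0x8100 || type == 0x88A8) {  /* 802.1Q / 802.1ad */
        if (off + VLAN_HLEN > len)
            return 0;                        /* truncated tag */
        type = rd16(buf + off + 2);          /* proto follows the TCI */
        off += VLAN_HLEN;
    }
    return type;
}
```

Note how a caller that passes `mac_offset = 0` for a buffer whose headers actually start later misparses the frame, which is the class of bug the `mac_offset` parameter addresses.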
+1 -3
include/linux/io_uring.h
··· 15 15 16 16 static inline void io_uring_files_cancel(void) 17 17 { 18 - if (current->io_uring) { 19 - io_uring_unreg_ringfd(); 18 + if (current->io_uring) 20 19 __io_uring_cancel(false); 21 - } 22 20 } 23 21 static inline void io_uring_task_cancel(void) 24 22 {
+1 -1
include/linux/io_uring_types.h
··· 345 345 346 346 /* timeouts */ 347 347 struct { 348 - spinlock_t timeout_lock; 348 + raw_spinlock_t timeout_lock; 349 349 struct list_head timeout_list; 350 350 struct list_head ltimeout_list; 351 351 unsigned cq_last_tm_flush;
+7
include/linux/mlx5/driver.h
··· 524 524 * creation/deletion on drivers rescan. Unset during device attach. 525 525 */ 526 526 MLX5_PRIV_FLAGS_DETACH = 1 << 2, 527 + MLX5_PRIV_FLAGS_SWITCH_LEGACY = 1 << 3, 527 528 }; 528 529 529 530 struct mlx5_adev { ··· 1201 1200 static inline bool mlx5_core_is_vf(const struct mlx5_core_dev *dev) 1202 1201 { 1203 1202 return dev->coredev_type == MLX5_COREDEV_VF; 1203 + } 1204 + 1205 + static inline bool mlx5_core_same_coredev_type(const struct mlx5_core_dev *dev1, 1206 + const struct mlx5_core_dev *dev2) 1207 + { 1208 + return dev1->coredev_type == dev2->coredev_type; 1204 1209 } 1205 1210 1206 1211 static inline bool mlx5_core_is_ecpf(const struct mlx5_core_dev *dev)
+3 -1
include/linux/mlx5/mlx5_ifc.h
··· 2124 2124 u8 migration_in_chunks[0x1]; 2125 2125 u8 reserved_at_d1[0x1]; 2126 2126 u8 sf_eq_usage[0x1]; 2127 - u8 reserved_at_d3[0xd]; 2127 + u8 reserved_at_d3[0x5]; 2128 + u8 multiplane[0x1]; 2129 + u8 reserved_at_d9[0x7]; 2128 2130 2129 2131 u8 cross_vhca_object_to_object_supported[0x20]; 2130 2132
+29 -2
include/linux/mm.h
··· 31 31 #include <linux/kasan.h> 32 32 #include <linux/memremap.h> 33 33 #include <linux/slab.h> 34 + #include <linux/cacheinfo.h> 34 35 35 36 struct mempolicy; 36 37 struct anon_vma; ··· 3011 3010 lruvec_stat_sub_folio(folio, NR_PAGETABLE); 3012 3011 } 3013 3012 3014 - pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp); 3013 + pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp); 3014 + static inline pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, 3015 + pmd_t *pmdvalp) 3016 + { 3017 + pte_t *pte; 3018 + 3019 + __cond_lock(RCU, pte = ___pte_offset_map(pmd, addr, pmdvalp)); 3020 + return pte; 3021 + } 3015 3022 static inline pte_t *pte_offset_map(pmd_t *pmd, unsigned long addr) 3016 3023 { 3017 3024 return __pte_offset_map(pmd, addr, NULL); ··· 3032 3023 { 3033 3024 pte_t *pte; 3034 3025 3035 - __cond_lock(*ptlp, pte = __pte_offset_map_lock(mm, pmd, addr, ptlp)); 3026 + __cond_lock(RCU, __cond_lock(*ptlp, 3027 + pte = __pte_offset_map_lock(mm, pmd, addr, ptlp))); 3036 3028 return pte; 3037 3029 } 3038 3030 ··· 4184 4174 return 0; 4185 4175 } 4186 4176 #endif 4177 + 4178 + /* 4179 + * user_alloc_needs_zeroing checks if a user folio from page allocator needs to 4180 + * be zeroed or not. 4181 + */ 4182 + static inline bool user_alloc_needs_zeroing(void) 4183 + { 4184 + /* 4185 + * for user folios, arch with cache aliasing requires cache flush and 4186 + * arc changes folio->flags to make icache coherent with dcache, so 4187 + * always return false to make caller use 4188 + * clear_user_page()/clear_user_highpage(). 4189 + */ 4190 + return cpu_dcache_is_aliasing() || cpu_icache_is_aliasing() || 4191 + !static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, 4192 + &init_on_alloc); 4193 + } 4187 4194 4188 4195 int arch_get_shadow_stack_status(struct task_struct *t, unsigned long __user *status); 4189 4196 int arch_set_shadow_stack_status(struct task_struct *t, unsigned long status);
+2 -10
include/linux/page-flags.h
··· 862 862 ClearPageHead(page); 863 863 } 864 864 FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE) 865 - FOLIO_TEST_FLAG(partially_mapped, FOLIO_SECOND_PAGE) 866 - /* 867 - * PG_partially_mapped is protected by deferred_split split_queue_lock, 868 - * so its safe to use non-atomic set/clear. 869 - */ 870 - __FOLIO_SET_FLAG(partially_mapped, FOLIO_SECOND_PAGE) 871 - __FOLIO_CLEAR_FLAG(partially_mapped, FOLIO_SECOND_PAGE) 865 + FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE) 872 866 #else 873 867 FOLIO_FLAG_FALSE(large_rmappable) 874 - FOLIO_TEST_FLAG_FALSE(partially_mapped) 875 - __FOLIO_SET_FLAG_NOOP(partially_mapped) 876 - __FOLIO_CLEAR_FLAG_NOOP(partially_mapped) 868 + FOLIO_FLAG_FALSE(partially_mapped) 877 869 #endif 878 870 879 871 #define PG_head_mask ((1UL << PG_head))
+2
include/linux/platform_data/amd_qdma.h
··· 26 26 * @max_mm_channels: Maximum number of MM DMA channels in each direction 27 27 * @device_map: DMA slave map 28 28 * @irq_index: The index of first IRQ 29 + * @dma_dev: The device pointer for dma operations 29 30 */ 30 31 struct qdma_platdata { 31 32 u32 max_mm_channels; 32 33 u32 irq_index; 33 34 struct dma_slave_map *device_map; 35 + struct device *dma_dev; 34 36 }; 35 37 36 38 #endif /* _PLATDATA_AMD_QDMA_H */
+2 -1
include/linux/sched.h
··· 1637 1637 * We're lying here, but rather than expose a completely new task state 1638 1638 * to userspace, we can make this appear as if the task has gone through 1639 1639 * a regular rt_mutex_lock() call. 1640 + * Report frozen tasks as uninterruptible. 1640 1641 */ 1641 - if (tsk_state & TASK_RTLOCK_WAIT) 1642 + if ((tsk_state & TASK_RTLOCK_WAIT) || (tsk_state & TASK_FROZEN)) 1642 1643 state = TASK_UNINTERRUPTIBLE; 1643 1644 1644 1645 return fls(state);
+8 -3
include/linux/skmsg.h
··· 317 317 kfree_skb(skb); 318 318 } 319 319 320 - static inline void sk_psock_queue_msg(struct sk_psock *psock, 320 + static inline bool sk_psock_queue_msg(struct sk_psock *psock, 321 321 struct sk_msg *msg) 322 322 { 323 + bool ret; 324 + 323 325 spin_lock_bh(&psock->ingress_lock); 324 - if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) 326 + if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) { 325 327 list_add_tail(&msg->list, &psock->ingress_msg); 326 - else { 328 + ret = true; 329 + } else { 327 330 sk_msg_free(psock->sk, msg); 328 331 kfree(msg); 332 + ret = false; 329 333 } 330 334 spin_unlock_bh(&psock->ingress_lock); 335 + return ret; 331 336 } 332 337 333 338 static inline struct sk_msg *sk_psock_dequeue_msg(struct sk_psock *psock)
+1 -1
include/linux/trace_events.h
··· 364 364 struct list_head list; 365 365 struct trace_event_class *class; 366 366 union { 367 - char *name; 367 + const char *name; 368 368 /* Set TRACE_EVENT_FL_TRACEPOINT flag when using "tp" */ 369 369 struct tracepoint *tp; 370 370 };
+3 -3
include/linux/vermagic.h
··· 15 15 #else 16 16 #define MODULE_VERMAGIC_SMP "" 17 17 #endif 18 - #ifdef CONFIG_PREEMPT_BUILD 19 - #define MODULE_VERMAGIC_PREEMPT "preempt " 20 - #elif defined(CONFIG_PREEMPT_RT) 18 + #ifdef CONFIG_PREEMPT_RT 21 19 #define MODULE_VERMAGIC_PREEMPT "preempt_rt " 20 + #elif defined(CONFIG_PREEMPT_BUILD) 21 + #define MODULE_VERMAGIC_PREEMPT "preempt " 22 22 #else 23 23 #define MODULE_VERMAGIC_PREEMPT "" 24 24 #endif
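The vermagic hunk reorders the preprocessor chain so that `CONFIG_PREEMPT_RT` is tested before `CONFIG_PREEMPT_BUILD`; on recent RT configurations both symbols can be defined, and the old order would wrongly emit `"preempt "`. A compile-time sketch with both symbols defined to mimic such a config (the `CONFIG_*` defines here are stand-ins, not a real kernel config):

```c
#include <assert.h>
#include <string.h>

/* Mimic an RT .config where both symbols end up defined. */
#define CONFIG_PREEMPT_BUILD 1
#define CONFIG_PREEMPT_RT 1

/* Fixed ordering: test the more specific CONFIG_PREEMPT_RT first. */
#ifdef CONFIG_PREEMPT_RT
#define MODULE_VERMAGIC_PREEMPT "preempt_rt "
#elif defined(CONFIG_PREEMPT_BUILD)
#define MODULE_VERMAGIC_PREEMPT "preempt "
#else
#define MODULE_VERMAGIC_PREEMPT ""
#endif
```

With the pre-patch ordering (`CONFIG_PREEMPT_BUILD` tested first), the same two defines would have produced `"preempt "`, making RT modules carry the wrong vermagic string.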
+1 -1
include/linux/vmstat.h
··· 515 515 516 516 static inline const char *lru_list_name(enum lru_list lru) 517 517 { 518 - return node_stat_name(NR_LRU_BASE + lru) + 3; // skip "nr_" 518 + return node_stat_name(NR_LRU_BASE + (enum node_stat_item)lru) + 3; // skip "nr_" 519 519 } 520 520 521 521 #if defined(CONFIG_VM_EVENT_COUNTERS) || defined(CONFIG_MEMCG)
+5 -2
include/net/netfilter/nf_tables.h
··· 733 733 /** 734 734 * struct nft_set_ext - set extensions 735 735 * 736 - * @genmask: generation mask 736 + * @genmask: generation mask, but also flags (see NFT_SET_ELEM_DEAD_BIT) 737 737 * @offset: offsets of individual extension types 738 738 * @data: beginning of extension data 739 + * 740 + * This structure must be aligned to word size, otherwise atomic bitops 741 + * on genmask field can cause alignment failure on some archs. 739 742 */ 740 743 struct nft_set_ext { 741 744 u8 genmask; 742 745 u8 offset[NFT_SET_EXT_NUM]; 743 746 char data[]; 744 - }; 747 + } __aligned(BITS_PER_LONG / 8); 745 748 746 749 static inline void nft_set_ext_prepare(struct nft_set_ext_tmpl *tmpl) 747 750 {
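The `__aligned(BITS_PER_LONG / 8)` annotation forces `struct nft_set_ext` onto a word boundary so that atomic bitops on `genmask` cannot fault on strict-alignment architectures. A userspace sketch of the effect on a similarly shaped struct (the `set_ext_*` names are local to this example):

```c
#include <assert.h>
#include <stdalign.h>
#include <stdint.h>

#define NFT_SET_EXT_NUM 8

/* All members are u8, so without an attribute the alignment is 1. */
struct set_ext_packed {
	uint8_t genmask;
	uint8_t offset[NFT_SET_EXT_NUM];
	char data[];
};

/* With the attribute, instances always start on a word boundary. */
struct set_ext_aligned {
	uint8_t genmask;
	uint8_t offset[NFT_SET_EXT_NUM];
	char data[];
} __attribute__((aligned(sizeof(long))));
```

Word-sized atomic bit operations read and write the whole word containing `genmask`; the attribute guarantees that word does not straddle an alignment boundary.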
+8 -2
include/net/sock.h
··· 1528 1528 } 1529 1529 1530 1530 static inline bool 1531 - sk_rmem_schedule(struct sock *sk, struct sk_buff *skb, int size) 1531 + __sk_rmem_schedule(struct sock *sk, int size, bool pfmemalloc) 1532 1532 { 1533 1533 int delta; 1534 1534 ··· 1536 1536 return true; 1537 1537 delta = size - sk->sk_forward_alloc; 1538 1538 return delta <= 0 || __sk_mem_schedule(sk, delta, SK_MEM_RECV) || 1539 - skb_pfmemalloc(skb); 1539 + pfmemalloc; 1540 + } 1541 + 1542 + static inline bool 1543 + sk_rmem_schedule(struct sock *sk, struct sk_buff *skb, int size) 1544 + { 1545 + return __sk_rmem_schedule(sk, size, skb_pfmemalloc(skb)); 1540 1546 } 1541 1547 1542 1548 static inline int sk_unused_reserved_mem(const struct sock *sk)
+26 -24
include/uapi/linux/mptcp_pm.h
··· 12 12 /** 13 13 * enum mptcp_event_type 14 14 * @MPTCP_EVENT_UNSPEC: unused event 15 - * @MPTCP_EVENT_CREATED: token, family, saddr4 | saddr6, daddr4 | daddr6, 16 - * sport, dport A new MPTCP connection has been created. It is the good time 17 - * to allocate memory and send ADD_ADDR if needed. Depending on the 15 + * @MPTCP_EVENT_CREATED: A new MPTCP connection has been created. It is the 16 + * good time to allocate memory and send ADD_ADDR if needed. Depending on the 18 17 * traffic-patterns it can take a long time until the MPTCP_EVENT_ESTABLISHED 19 - * is sent. 20 - * @MPTCP_EVENT_ESTABLISHED: token, family, saddr4 | saddr6, daddr4 | daddr6, 21 - * sport, dport A MPTCP connection is established (can start new subflows). 22 - * @MPTCP_EVENT_CLOSED: token A MPTCP connection has stopped. 23 - * @MPTCP_EVENT_ANNOUNCED: token, rem_id, family, daddr4 | daddr6 [, dport] A 24 - * new address has been announced by the peer. 25 - * @MPTCP_EVENT_REMOVED: token, rem_id An address has been lost by the peer. 26 - * @MPTCP_EVENT_SUB_ESTABLISHED: token, family, loc_id, rem_id, saddr4 | 27 - * saddr6, daddr4 | daddr6, sport, dport, backup, if_idx [, error] A new 28 - * subflow has been established. 'error' should not be set. 29 - * @MPTCP_EVENT_SUB_CLOSED: token, family, loc_id, rem_id, saddr4 | saddr6, 30 - * daddr4 | daddr6, sport, dport, backup, if_idx [, error] A subflow has been 31 - * closed. An error (copy of sk_err) could be set if an error has been 32 - * detected for this subflow. 33 - * @MPTCP_EVENT_SUB_PRIORITY: token, family, loc_id, rem_id, saddr4 | saddr6, 34 - * daddr4 | daddr6, sport, dport, backup, if_idx [, error] The priority of a 35 - * subflow has changed. 'error' should not be set. 36 - * @MPTCP_EVENT_LISTENER_CREATED: family, sport, saddr4 | saddr6 A new PM 37 - * listener is created. 38 - * @MPTCP_EVENT_LISTENER_CLOSED: family, sport, saddr4 | saddr6 A PM listener 39 - * is closed. 18 + * is sent. Attributes: token, family, saddr4 | saddr6, 19 + * sport, dport, server-side. 20 + * @MPTCP_EVENT_ESTABLISHED: A MPTCP connection is established (can start new 21 + * subflows). Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6, 22 + * sport, dport, server-side. 23 + * @MPTCP_EVENT_CLOSED: A MPTCP connection has stopped. Attribute: token. 24 + * @MPTCP_EVENT_ANNOUNCED: A new address has been announced by the peer. 25 + * Attributes: token, rem_id, family, daddr4 | daddr6 [, dport]. 26 + * @MPTCP_EVENT_REMOVED: An address has been lost by the peer. Attributes: 27 + * token, rem_id. 28 + * @MPTCP_EVENT_SUB_ESTABLISHED: A new subflow has been established. 'error' 29 + * should not be set. Attributes: token, family, loc_id, rem_id, saddr4 | 30 + * saddr6, daddr4 | daddr6, sport, dport, backup, if_idx [, error]. 31 + * @MPTCP_EVENT_SUB_CLOSED: A subflow has been closed. An error (copy of 32 + * sk_err) could be set if an error has been detected for this subflow. 33 + * Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | 34 + * daddr6, sport, dport, backup, if_idx [, error]. 35 + * @MPTCP_EVENT_SUB_PRIORITY: The priority of a subflow has changed. 'error' 36 + * should not be set. Attributes: token, family, loc_id, rem_id, saddr4 | 37 + * saddr6, daddr4 | daddr6, sport, dport, backup, if_idx [, error]. 38 + * @MPTCP_EVENT_LISTENER_CREATED: A new PM listener is created. Attributes: 39 + * family, sport, saddr4 | saddr6. 40 + * @MPTCP_EVENT_LISTENER_CLOSED: A PM listener is closed. Attributes: family, 41 + * sport, saddr4 | saddr6. 40 42 */ 41 43 enum mptcp_event_type { 42 44 MPTCP_EVENT_UNSPEC,
+10 -3
include/uapi/linux/stddef.h
··· 8 8 #define __always_inline inline 9 9 #endif 10 10 11 + /* Not all C++ standards support type declarations inside an anonymous union */ 12 + #ifndef __cplusplus 13 + #define __struct_group_tag(TAG) TAG 14 + #else 15 + #define __struct_group_tag(TAG) 16 + #endif 17 + 11 18 /** 12 19 * __struct_group() - Create a mirrored named and anonyomous struct 13 20 * ··· 27 20 * and size: one anonymous and one named. The former's members can be used 28 21 * normally without sub-struct naming, and the latter can be used to 29 22 * reason about the start, end, and size of the group of struct members. 30 - * The named struct can also be explicitly tagged for layer reuse, as well 31 - * as both having struct attributes appended. 23 + * The named struct can also be explicitly tagged for layer reuse (C only), 24 + * as well as both having struct attributes appended. 32 25 */ 33 26 #define __struct_group(TAG, NAME, ATTRS, MEMBERS...) \ 34 27 union { \ 35 28 struct { MEMBERS } ATTRS; \ 36 - struct TAG { MEMBERS } ATTRS NAME; \ 29 + struct __struct_group_tag(TAG) { MEMBERS } ATTRS NAME; \ 37 30 } ATTRS 38 31 39 32 #ifdef __cplusplus
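The stddef.h hunk drops the struct tag when the header is compiled as C++, because not all C++ standards allow a tagged type declaration inside an anonymous union. A userspace sketch of the C-only tagged form showing why the tag is useful (the macro body is copied from the UAPI helper; `pkt` and `pkt_hdr` are illustrative names, not kernel structures):

```c
#include <assert.h>
#include <stdint.h>

/* Local copy of the UAPI helper, C-only tagged form, for illustration. */
#define __struct_group(TAG, NAME, ATTRS, MEMBERS...) \
	union { \
		struct { MEMBERS } ATTRS; \
		struct TAG { MEMBERS } ATTRS NAME; \
	} ATTRS

struct pkt {
	uint32_t id;
	/* one region of storage, visible both flat and as a named group */
	__struct_group(pkt_hdr, hdr, /* no attrs */,
		uint32_t src;
		uint32_t dst;
	);
	uint32_t payload;
};
```

In C the inner `struct pkt_hdr` tag is visible at file scope, so callers can take a `struct pkt_hdr *` to the group while still accessing `p.src` directly through the anonymous mirror; it is exactly that inner tag declaration that some C++ standards reject, hence the `__struct_group_tag()` indirection.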
+2 -2
include/uapi/linux/thermal.h
··· 3 3 #define _UAPI_LINUX_THERMAL_H 4 4 5 5 #define THERMAL_NAME_LENGTH 20 6 - #define THERMAL_THRESHOLD_WAY_UP BIT(0) 7 - #define THERMAL_THRESHOLD_WAY_DOWN BIT(1) 6 + #define THERMAL_THRESHOLD_WAY_UP 0x1 7 + #define THERMAL_THRESHOLD_WAY_DOWN 0x2 8 8 9 9 enum thermal_device_mode { 10 10 THERMAL_DEVICE_DISABLED = 0,
+11 -6
io_uring/io_uring.c
··· 215 215 struct io_ring_ctx *ctx = head->ctx; 216 216 217 217 /* protect against races with linked timeouts */ 218 - spin_lock_irq(&ctx->timeout_lock); 218 + raw_spin_lock_irq(&ctx->timeout_lock); 219 219 matched = io_match_linked(head); 220 - spin_unlock_irq(&ctx->timeout_lock); 220 + raw_spin_unlock_irq(&ctx->timeout_lock); 221 221 } else { 222 222 matched = io_match_linked(head); 223 223 } ··· 333 333 init_waitqueue_head(&ctx->cq_wait); 334 334 init_waitqueue_head(&ctx->poll_wq); 335 335 spin_lock_init(&ctx->completion_lock); 336 - spin_lock_init(&ctx->timeout_lock); 336 + raw_spin_lock_init(&ctx->timeout_lock); 337 337 INIT_WQ_LIST(&ctx->iopoll_list); 338 338 INIT_LIST_HEAD(&ctx->io_buffers_comp); 339 339 INIT_LIST_HEAD(&ctx->defer_list); ··· 498 498 if (req->flags & REQ_F_LINK_TIMEOUT) { 499 499 struct io_ring_ctx *ctx = req->ctx; 500 500 501 - spin_lock_irq(&ctx->timeout_lock); 501 + raw_spin_lock_irq(&ctx->timeout_lock); 502 502 io_for_each_link(cur, req) 503 503 io_prep_async_work(cur); 504 - spin_unlock_irq(&ctx->timeout_lock); 504 + raw_spin_unlock_irq(&ctx->timeout_lock); 505 505 } else { 506 506 io_for_each_link(cur, req) 507 507 io_prep_async_work(cur); ··· 514 514 struct io_uring_task *tctx = req->tctx; 515 515 516 516 BUG_ON(!tctx); 517 - BUG_ON(!tctx->io_wq); 517 + 518 + if ((current->flags & PF_KTHREAD) || !tctx->io_wq) { 519 + io_req_task_queue_fail(req, -ECANCELED); 520 + return; 521 + } 518 522 519 523 /* init ->work of the whole link before punting */ 520 524 io_prep_async_link(req); ··· 3218 3214 3219 3215 void __io_uring_cancel(bool cancel_all) 3220 3216 { 3217 + io_uring_unreg_ringfd(); 3221 3218 io_uring_cancel_generic(cancel_all, NULL); 3222 3219 } 3223 3220
+3
io_uring/register.c
··· 414 414 if (ctx->flags & IORING_SETUP_SINGLE_ISSUER && 415 415 current != ctx->submitter_task) 416 416 return -EEXIST; 417 + /* limited to DEFER_TASKRUN for now */ 418 + if (!(ctx->flags & IORING_SETUP_DEFER_TASKRUN)) 419 + return -EINVAL; 417 420 if (copy_from_user(&p, arg, sizeof(p))) 418 421 return -EFAULT; 419 422 if (p.flags & ~RESIZE_FLAGS)
+6
io_uring/sqpoll.c
··· 405 405 __cold int io_sq_offload_create(struct io_ring_ctx *ctx, 406 406 struct io_uring_params *p) 407 407 { 408 + struct task_struct *task_to_put = NULL; 408 409 int ret; 409 410 410 411 /* Retain compatibility with failing for an invalid attach attempt */ ··· 481 480 } 482 481 483 482 sqd->thread = tsk; 483 + task_to_put = get_task_struct(tsk); 484 484 ret = io_uring_alloc_task_context(tsk, ctx); 485 485 wake_up_new_task(tsk); 486 486 if (ret) ··· 492 490 goto err; 493 491 } 494 492 493 + if (task_to_put) 494 + put_task_struct(task_to_put); 495 495 return 0; 496 496 err_sqpoll: 497 497 complete(&ctx->sq_data->exited); 498 498 err: 499 499 io_sq_thread_finish(ctx); 500 + if (task_to_put) 501 + put_task_struct(task_to_put); 500 502 return ret; 501 503 } 502 504
+20 -20
io_uring/timeout.c
··· 74 74 if (!io_timeout_finish(timeout, data)) { 75 75 if (io_req_post_cqe(req, -ETIME, IORING_CQE_F_MORE)) { 76 76 /* re-arm timer */ 77 - spin_lock_irq(&ctx->timeout_lock); 77 + raw_spin_lock_irq(&ctx->timeout_lock); 78 78 list_add(&timeout->list, ctx->timeout_list.prev); 79 79 hrtimer_start(&data->timer, timespec64_to_ktime(data->ts), data->mode); 80 - spin_unlock_irq(&ctx->timeout_lock); 80 + raw_spin_unlock_irq(&ctx->timeout_lock); 81 81 return; 82 82 } 83 83 } ··· 109 109 u32 seq; 110 110 struct io_timeout *timeout, *tmp; 111 111 112 - spin_lock_irq(&ctx->timeout_lock); 112 + raw_spin_lock_irq(&ctx->timeout_lock); 113 113 seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts); 114 114 115 115 list_for_each_entry_safe(timeout, tmp, &ctx->timeout_list, list) { ··· 134 134 io_kill_timeout(req, 0); 135 135 } 136 136 ctx->cq_last_tm_flush = seq; 137 - spin_unlock_irq(&ctx->timeout_lock); 137 + raw_spin_unlock_irq(&ctx->timeout_lock); 138 138 } 139 139 140 140 static void io_req_tw_fail_links(struct io_kiocb *link, struct io_tw_state *ts) ··· 200 200 } else if (req->flags & REQ_F_LINK_TIMEOUT) { 201 201 struct io_ring_ctx *ctx = req->ctx; 202 202 203 - spin_lock_irq(&ctx->timeout_lock); 203 + raw_spin_lock_irq(&ctx->timeout_lock); 204 204 link = io_disarm_linked_timeout(req); 205 - spin_unlock_irq(&ctx->timeout_lock); 205 + raw_spin_unlock_irq(&ctx->timeout_lock); 206 206 if (link) 207 207 io_req_queue_tw_complete(link, -ECANCELED); 208 208 } ··· 238 238 struct io_ring_ctx *ctx = req->ctx; 239 239 unsigned long flags; 240 240 241 - spin_lock_irqsave(&ctx->timeout_lock, flags); 241 + raw_spin_lock_irqsave(&ctx->timeout_lock, flags); 242 242 list_del_init(&timeout->list); 243 243 atomic_set(&req->ctx->cq_timeouts, 244 244 atomic_read(&req->ctx->cq_timeouts) + 1); 245 - spin_unlock_irqrestore(&ctx->timeout_lock, flags); 245 + raw_spin_unlock_irqrestore(&ctx->timeout_lock, flags); 246 246 247 247 if (!(data->flags & IORING_TIMEOUT_ETIME_SUCCESS)) 248 248 req_set_fail(req); ··· 285 285 { 286 286 struct io_kiocb *req; 287 287 288 - spin_lock_irq(&ctx->timeout_lock); 288 + raw_spin_lock_irq(&ctx->timeout_lock); 289 289 req = io_timeout_extract(ctx, cd); 290 - spin_unlock_irq(&ctx->timeout_lock); 290 + raw_spin_unlock_irq(&ctx->timeout_lock); 291 291 292 292 if (IS_ERR(req)) 293 293 return PTR_ERR(req); ··· 330 330 struct io_ring_ctx *ctx = req->ctx; 331 331 unsigned long flags; 332 332 333 - spin_lock_irqsave(&ctx->timeout_lock, flags); 333 + raw_spin_lock_irqsave(&ctx->timeout_lock, flags); 334 334 prev = timeout->head; 335 335 timeout->head = NULL; 336 336 ··· 345 345 } 346 346 list_del(&timeout->list); 347 347 timeout->prev = prev; 348 - spin_unlock_irqrestore(&ctx->timeout_lock, flags); 348 + raw_spin_unlock_irqrestore(&ctx->timeout_lock, flags); 349 349 350 350 req->io_task_work.func = io_req_task_link_timeout; 351 351 io_req_task_work_add(req); ··· 472 472 } else { 473 473 enum hrtimer_mode mode = io_translate_timeout_mode(tr->flags); 474 474 475 - spin_lock_irq(&ctx->timeout_lock); 475 + raw_spin_lock_irq(&ctx->timeout_lock); 476 476 if (tr->ltimeout) 477 477 ret = io_linked_timeout_update(ctx, tr->addr, &tr->ts, mode); 478 478 else 479 479 ret = io_timeout_update(ctx, tr->addr, &tr->ts, mode); 480 480 - spin_unlock_irq(&ctx->timeout_lock); 480 + raw_spin_unlock_irq(&ctx->timeout_lock); 481 481 } 482 482 483 483 if (ret < 0) ··· 572 572 struct list_head *entry; 573 573 u32 tail, off = timeout->off; 574 574 575 - spin_lock_irq(&ctx->timeout_lock); 575 + raw_spin_lock_irq(&ctx->timeout_lock); 576 576 577 577 /* 578 578 * sqe->off holds how many events that need to occur for this ··· 611 611 list_add(&timeout->list, entry); 612 612 data->timer.function = io_timeout_fn; 613 613 hrtimer_start(&data->timer, timespec64_to_ktime(data->ts), data->mode); 614 - spin_unlock_irq(&ctx->timeout_lock); 614 + raw_spin_unlock_irq(&ctx->timeout_lock); 615 615 return IOU_ISSUE_SKIP_COMPLETE; 616 616 } ··· 620 620 struct io_timeout *timeout = io_kiocb_to_cmd(req, struct io_timeout); 621 621 struct io_ring_ctx *ctx = req->ctx; 622 622 623 - spin_lock_irq(&ctx->timeout_lock); 623 + raw_spin_lock_irq(&ctx->timeout_lock); 624 624 /* 625 625 * If the back reference is NULL, then our linked request finished 626 626 * before we got a chance to setup the timer ··· 633 633 data->mode); 634 634 list_add_tail(&timeout->list, &ctx->ltimeout_list); 635 635 } 636 - spin_unlock_irq(&ctx->timeout_lock); 636 + raw_spin_unlock_irq(&ctx->timeout_lock); 637 637 /* drop submission reference */ 638 638 io_put_req(req); 639 639 } ··· 668 668 * timeout_lockfirst to keep locking ordering. 669 669 */ 670 670 spin_lock(&ctx->completion_lock); 671 - spin_lock_irq(&ctx->timeout_lock); 671 + raw_spin_lock_irq(&ctx->timeout_lock); 672 672 list_for_each_entry_safe(timeout, tmp, &ctx->timeout_list, list) { 673 673 struct io_kiocb *req = cmd_to_io_kiocb(timeout); 674 674 ··· 676 676 io_kill_timeout(req, -ECANCELED)) 677 677 canceled++; 678 678 } 679 - spin_unlock_irq(&ctx->timeout_lock); 679 + raw_spin_unlock_irq(&ctx->timeout_lock); 680 680 spin_unlock(&ctx->completion_lock); 681 681 return canceled != 0; 682 682 }
+5 -1
kernel/bpf/verifier.c
··· 21281 21281 * changed in some incompatible and hard to support 21282 21282 * way, it's fine to back out this inlining logic 21283 21283 */ 21284 + #ifdef CONFIG_SMP 21284 21285 insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number); 21285 21286 insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0); 21286 21287 insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0); 21287 21288 cnt = 3; 21288 - 21289 + #else 21290 + insn_buf[0] = BPF_ALU32_REG(BPF_XOR, BPF_REG_0, BPF_REG_0); 21291 + cnt = 1; 21292 + #endif 21289 21293 new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt); 21290 21294 if (!new_prog) 21291 21295 return -ENOMEM;
+6 -7
kernel/fork.c
··· 639 639 LIST_HEAD(uf); 640 640 VMA_ITERATOR(vmi, mm, 0); 641 641 642 - uprobe_start_dup_mmap(); 643 - if (mmap_write_lock_killable(oldmm)) { 644 - retval = -EINTR; 645 - goto fail_uprobe_end; 646 - } 642 + if (mmap_write_lock_killable(oldmm)) 643 + return -EINTR; 647 644 flush_cache_dup_mm(oldmm); 648 645 uprobe_dup_mmap(oldmm, mm); 649 646 /* ··· 779 782 dup_userfaultfd_complete(&uf); 780 783 else 781 784 dup_userfaultfd_fail(&uf); 782 - fail_uprobe_end: 783 - uprobe_end_dup_mmap(); 784 785 return retval; 785 786 786 787 fail_nomem_anon_vma_fork: ··· 1687 1692 if (!mm_init(mm, tsk, mm->user_ns)) 1688 1693 goto fail_nomem; 1689 1694 1695 + uprobe_start_dup_mmap(); 1690 1696 err = dup_mmap(mm, oldmm); 1691 1697 if (err) 1692 1698 goto free_pt; 1699 + uprobe_end_dup_mmap(); 1693 1700 1694 1701 mm->hiwater_rss = get_mm_rss(mm); 1695 1702 mm->hiwater_vm = mm->total_vm; ··· 1706 1709 mm->binfmt = NULL; 1707 1710 mm_init_owner(mm, NULL); 1708 1711 mmput(mm); 1712 + if (err) 1713 + uprobe_end_dup_mmap(); 1709 1714 1710 1715 fail_nomem: 1711 1716 return NULL;
+16 -2
kernel/locking/rtmutex.c
··· 1292 1292 */ 1293 1293 get_task_struct(owner); 1294 1294 1295 + preempt_disable(); 1295 1296 raw_spin_unlock_irq(&lock->wait_lock); 1297 + /* wake up any tasks on the wake_q before calling rt_mutex_adjust_prio_chain */ 1298 + wake_up_q(wake_q); 1299 + wake_q_init(wake_q); 1300 + preempt_enable(); 1301 + 1296 1302 1297 1303 res = rt_mutex_adjust_prio_chain(owner, chwalk, lock, 1298 1304 next_lock, waiter, task); ··· 1602 1596 * or TASK_UNINTERRUPTIBLE) 1603 1597 * @timeout: the pre-initialized and started timer, or NULL for none 1604 1598 * @waiter: the pre-initialized rt_mutex_waiter 1599 + * @wake_q: wake_q of tasks to wake when we drop the lock->wait_lock 1605 1600 * 1606 1601 * Must be called with lock->wait_lock held and interrupts disabled 1607 1602 */ ··· 1610 1603 struct ww_acquire_ctx *ww_ctx, 1611 1604 unsigned int state, 1612 1605 struct hrtimer_sleeper *timeout, 1613 - struct rt_mutex_waiter *waiter) 1606 + struct rt_mutex_waiter *waiter, 1607 + struct wake_q_head *wake_q) 1614 1608 __releases(&lock->wait_lock) __acquires(&lock->wait_lock) 1615 1609 { 1616 1610 struct rt_mutex *rtm = container_of(lock, struct rt_mutex, rtmutex); ··· 1642 1634 owner = rt_mutex_owner(lock); 1643 1635 else 1644 1636 owner = NULL; 1637 + preempt_disable(); 1645 1638 raw_spin_unlock_irq(&lock->wait_lock); 1639 + if (wake_q) { 1640 + wake_up_q(wake_q); 1641 + wake_q_init(wake_q); 1642 + } 1643 + preempt_enable(); 1646 1644 1647 1645 if (!owner || !rtmutex_spin_on_owner(lock, waiter, owner)) 1648 1646 rt_mutex_schedule(); ··· 1722 1708 1723 1709 ret = task_blocks_on_rt_mutex(lock, waiter, current, ww_ctx, chwalk, wake_q); 1724 1710 if (likely(!ret)) 1725 - ret = rt_mutex_slowlock_block(lock, ww_ctx, state, NULL, waiter); 1711 + ret = rt_mutex_slowlock_block(lock, ww_ctx, state, NULL, waiter, wake_q); 1726 1712 1727 1713 if (likely(!ret)) { 1728 1714 /* acquired the lock */
+1 -1
kernel/locking/rtmutex_api.c
··· 383 383 raw_spin_lock_irq(&lock->wait_lock); 384 384 /* sleep on the mutex */ 385 385 set_current_state(TASK_INTERRUPTIBLE); 386 - ret = rt_mutex_slowlock_block(lock, NULL, TASK_INTERRUPTIBLE, to, waiter); 386 + ret = rt_mutex_slowlock_block(lock, NULL, TASK_INTERRUPTIBLE, to, waiter, NULL); 387 387 /* 388 388 * try_to_take_rt_mutex() sets the waiter bit unconditionally. We might 389 389 * have to fix that up.
+1 -1
kernel/trace/fgraph.c
··· 833 833 #endif 834 834 { 835 835 for_each_set_bit(i, &bitmap, sizeof(bitmap) * BITS_PER_BYTE) { 836 - struct fgraph_ops *gops = fgraph_array[i]; 836 + struct fgraph_ops *gops = READ_ONCE(fgraph_array[i]); 837 837 838 838 if (gops == &fgraph_stub) 839 839 continue;
+2 -6
kernel/trace/ftrace.c
··· 902 902 } 903 903 904 904 static struct fgraph_ops fprofiler_ops = { 905 - .ops = { 906 - .flags = FTRACE_OPS_FL_INITIALIZED, 907 - INIT_OPS_HASH(fprofiler_ops.ops) 908 - }, 909 905 .entryfunc = &profile_graph_entry, 910 906 .retfunc = &profile_graph_return, 911 907 }; 912 908 913 909 static int register_ftrace_profiler(void) 914 910 { 911 + ftrace_ops_set_global_filter(&fprofiler_ops.ops); 915 912 return register_ftrace_graph(&fprofiler_ops); 916 913 } 917 914 ··· 919 922 #else 920 923 static struct ftrace_ops ftrace_profile_ops __read_mostly = { 921 924 .func = function_profile_call, 922 - .flags = FTRACE_OPS_FL_INITIALIZED, 923 - INIT_OPS_HASH(ftrace_profile_ops) 924 925 }; 925 926 926 927 static int register_ftrace_profiler(void) 927 928 { 929 + ftrace_ops_set_global_filter(&ftrace_profile_ops); 928 930 return register_ftrace_function(&ftrace_profile_ops); 929 931 } 930 932
+5 -1
kernel/trace/ring_buffer.c
··· 7019 7019 lockdep_assert_held(&cpu_buffer->mapping_lock); 7020 7020 7021 7021 nr_subbufs = cpu_buffer->nr_pages + 1; /* + reader-subbuf */ 7022 - nr_pages = ((nr_subbufs + 1) << subbuf_order) - pgoff; /* + meta-page */ 7022 + nr_pages = ((nr_subbufs + 1) << subbuf_order); /* + meta-page */ 7023 + if (nr_pages <= pgoff) 7024 + return -EINVAL; 7025 + 7026 + nr_pages -= pgoff; 7023 7027 7024 7028 nr_vma_pages = vma_pages(vma); 7025 7029 if (!nr_vma_pages || nr_vma_pages > nr_pages)
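The ring_buffer hunk computes the total page count first and rejects an out-of-range `pgoff` before subtracting, instead of subtracting immediately and risking unsigned wrap-around. A userspace sketch of the same check pattern (the function name and -1 error return are illustrative; the kernel returns -EINVAL):

```c
#include <assert.h>

/*
 * nr_subbufs and pgoff are page counts; returns the pages remaining
 * after pgoff, or -1 when pgoff points past the end of the mapping.
 */
static long pages_after_offset(unsigned long nr_subbufs,
			       unsigned int subbuf_order,
			       unsigned long pgoff)
{
	unsigned long nr_pages = (nr_subbufs + 1) << subbuf_order; /* + meta-page */

	/*
	 * Validate before subtracting: nr_pages - pgoff is unsigned and
	 * would wrap to a huge value if pgoff were larger.
	 */
	if (nr_pages <= pgoff)
		return -1;

	return (long)(nr_pages - pgoff);
}
```

The pre-patch code did the subtraction unconditionally, so a too-large `pgoff` produced a wrapped, enormous `nr_pages` that could pass the later `vma_pages()` comparison.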
+12
kernel/trace/trace.c
··· 4206 4206 if (event) { 4207 4207 if (tr->trace_flags & TRACE_ITER_FIELDS) 4208 4208 return print_event_fields(iter, event); 4209 + /* 4210 + * For TRACE_EVENT() events, the print_fmt is not 4211 + * safe to use if the array has delta offsets 4212 + * Force printing via the fields. 4213 + */ 4214 + if ((tr->text_delta || tr->data_delta) && 4215 + event->type > __TRACE_LAST_TYPE) 4216 + return print_event_fields(iter, event); 4217 + 4209 4218 return event->funcs->trace(iter, sym_flags, event); 4210 4219 } 4211 4220 ··· 5086 5077 struct trace_array *tr = file_inode(filp)->i_private; 5087 5078 cpumask_var_t tracing_cpumask_new; 5088 5079 int err; 5080 + 5081 + if (count == 0 || count > KMALLOC_MAX_SIZE) 5082 + return -EINVAL; 5089 5083 5090 5084 if (!zalloc_cpumask_var(&tracing_cpumask_new, GFP_KERNEL)) 5091 5085 return -ENOMEM;
+12
kernel/trace/trace_events.c
··· 365 365 } while (s < e); 366 366 367 367 /* 368 + * Check for arrays. If the argument has: foo[REC->val] 369 + * then it is very likely that foo is an array of strings 370 + * that are safe to use. 371 + */ 372 + r = strstr(s, "["); 373 + if (r && r < e) { 374 + r = strstr(r, "REC->"); 375 + if (r && r < e) 376 + return true; 377 + } 378 + 379 + /* 368 380 * If there's any strings in the argument consider this arg OK as it 369 381 * could be: REC->field ? "foo" : "bar" and we don't want to get into 370 382 * verifying that logic here.
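The trace_events hunk accepts a print_fmt argument of the form `foo[REC->val]`, on the reasoning that `foo` is very likely an array of safe strings. A userspace sketch of the same two-step `strstr()` scan (`arg_is_array_ref` is an illustrative name for the check added inside `test_event_printk()`):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Return true if the argument text in [s, e) contains an array index
 * driven by a record field, e.g. "names[REC->idx]": first find a '[',
 * then look for "REC->" after it, both before the argument's end.
 */
static bool arg_is_array_ref(const char *s, const char *e)
{
	const char *r = strstr(s, "[");

	if (r && r < e) {
		r = strstr(r, "REC->");
		if (r && r < e)
			return true;
	}
	return false;
}
```

A plain `REC->field` argument has no `[` before the delimiter, so it falls through to the later string-literal and dereference checks unchanged.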
+1 -1
kernel/trace/trace_kprobe.c
··· 725 725 726 726 static struct notifier_block trace_kprobe_module_nb = { 727 727 .notifier_call = trace_kprobe_module_callback, 728 - .priority = 1 /* Invoked after kprobe module callback */ 728 + .priority = 2 /* Invoked after kprobe and jump_label module callback */ 729 729 }; 730 730 static int trace_kprobe_register_module_notifier(void) 731 731 {
+36 -5
lib/alloc_tag.c
··· 209 209 return; 210 210 } 211 211 212 + /* 213 + * Clear tag references to avoid debug warning when using 214 + * __alloc_tag_ref_set() with non-empty reference. 215 + */ 216 + set_codetag_empty(&ref_old); 217 + set_codetag_empty(&ref_new); 218 + 212 219 /* swap tags */ 213 220 __alloc_tag_ref_set(&ref_old, tag_new); 214 221 update_page_tag_ref(handle_old, &ref_old); ··· 408 401 409 402 static int vm_module_tags_populate(void) 410 403 { 411 - unsigned long phys_size = vm_module_tags->nr_pages << PAGE_SHIFT; 404 + unsigned long phys_end = ALIGN_DOWN(module_tags.start_addr, PAGE_SIZE) + 405 + (vm_module_tags->nr_pages << PAGE_SHIFT); 406 + unsigned long new_end = module_tags.start_addr + module_tags.size; 412 407 413 - if (phys_size < module_tags.size) { 408 + if (phys_end < new_end) { 414 409 struct page **next_page = vm_module_tags->pages + vm_module_tags->nr_pages; 415 - unsigned long addr = module_tags.start_addr + phys_size; 410 + unsigned long old_shadow_end = ALIGN(phys_end, MODULE_ALIGN); 411 + unsigned long new_shadow_end = ALIGN(new_end, MODULE_ALIGN); 416 412 unsigned long more_pages; 417 413 unsigned long nr; 418 414 419 - more_pages = ALIGN(module_tags.size - phys_size, PAGE_SIZE) >> PAGE_SHIFT; 415 + more_pages = ALIGN(new_end - phys_end, PAGE_SIZE) >> PAGE_SHIFT; 420 416 nr = alloc_pages_bulk_array_node(GFP_KERNEL | __GFP_NOWARN, 421 417 NUMA_NO_NODE, more_pages, next_page); 422 418 if (nr < more_pages || 423 - vmap_pages_range(addr, addr + (nr << PAGE_SHIFT), PAGE_KERNEL, 419 + vmap_pages_range(phys_end, phys_end + (nr << PAGE_SHIFT), PAGE_KERNEL, 424 420 next_page, PAGE_SHIFT) < 0) { 425 421 /* Clean up and error out */ 426 422 for (int i = 0; i < nr; i++) 427 423 __free_page(next_page[i]); 428 424 return -ENOMEM; 429 425 } 426 + 430 427 vm_module_tags->nr_pages += nr; 428 + 429 + /* 430 + * Kasan allocates 1 byte of shadow for every 8 bytes of data. 
431 + * When kasan_alloc_module_shadow allocates shadow memory, 432 + * its unit of allocation is a page. 433 + * Therefore, here we need to align to MODULE_ALIGN. 434 + */ 435 + if (old_shadow_end < new_shadow_end) 436 + kasan_alloc_module_shadow((void *)old_shadow_end, 437 + new_shadow_end - old_shadow_end, 438 + GFP_KERNEL); 431 439 } 440 + 441 + /* 442 + * Mark the pages as accessible, now that they are mapped. 443 + * With hardware tag-based KASAN, marking is skipped for 444 + * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc(). 445 + */ 446 + kasan_unpoison_vmalloc((void *)module_tags.start_addr, 447 + new_end - module_tags.start_addr, 448 + KASAN_VMALLOC_PROT_NORMAL); 432 449 433 450 return 0; 434 451 }
+10 -9
mm/huge_memory.c
··· 1176 1176 folio_throttle_swaprate(folio, gfp); 1177 1177 1178 1178 /* 1179 - * When a folio is not zeroed during allocation (__GFP_ZERO not used), 1180 - * folio_zero_user() is used to make sure that the page corresponding 1181 - * to the faulting address will be hot in the cache after zeroing. 1179 + * When a folio is not zeroed during allocation (__GFP_ZERO not used) 1180 + * or user folios require special handling, folio_zero_user() is used to 1181 + * make sure that the page corresponding to the faulting address will be 1182 + * hot in the cache after zeroing. 1182 1183 */ 1183 - if (!alloc_zeroed()) 1184 + if (user_alloc_needs_zeroing()) 1184 1185 folio_zero_user(folio, addr); 1185 1186 /* 1186 1187 * The memory barrier inside __folio_mark_uptodate makes sure that ··· 3577 3576 !list_empty(&folio->_deferred_list)) { 3578 3577 ds_queue->split_queue_len--; 3579 3578 if (folio_test_partially_mapped(folio)) { 3580 - __folio_clear_partially_mapped(folio); 3579 + folio_clear_partially_mapped(folio); 3581 3580 mod_mthp_stat(folio_order(folio), 3582 3581 MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1); 3583 3582 } ··· 3689 3688 if (!list_empty(&folio->_deferred_list)) { 3690 3689 ds_queue->split_queue_len--; 3691 3690 if (folio_test_partially_mapped(folio)) { 3692 - __folio_clear_partially_mapped(folio); 3691 + folio_clear_partially_mapped(folio); 3693 3692 mod_mthp_stat(folio_order(folio), 3694 3693 MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1); 3695 3694 } ··· 3733 3732 spin_lock_irqsave(&ds_queue->split_queue_lock, flags); 3734 3733 if (partially_mapped) { 3735 3734 if (!folio_test_partially_mapped(folio)) { 3736 - __folio_set_partially_mapped(folio); 3735 + folio_set_partially_mapped(folio); 3737 3736 if (folio_test_pmd_mappable(folio)) 3738 3737 count_vm_event(THP_DEFERRED_SPLIT_PAGE); 3739 3738 count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED); ··· 3826 3825 } else { 3827 3826 /* We lost race with folio_put() */ 3828 3827 if 
(folio_test_partially_mapped(folio)) { 3829 - __folio_clear_partially_mapped(folio); 3828 + folio_clear_partially_mapped(folio); 3830 3829 mod_mthp_stat(folio_order(folio), 3831 3830 MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1); 3832 3831 } ··· 4169 4168 size_t input_len = strlen(input_buf); 4170 4169 4171 4170 tok = strsep(&buf, ","); 4172 - if (tok) { 4171 + if (tok && buf) { 4173 4172 strscpy(file_path, tok); 4174 4173 } else { 4175 4174 ret = -EINVAL;
+2 -3
mm/hugetlb.c
··· 5340 5340 break; 5341 5341 } 5342 5342 ret = copy_user_large_folio(new_folio, pte_folio, 5343 - ALIGN_DOWN(addr, sz), dst_vma); 5343 + addr, dst_vma); 5344 5344 folio_put(pte_folio); 5345 5345 if (ret) { 5346 5346 folio_put(new_folio); ··· 6643 6643 *foliop = NULL; 6644 6644 goto out; 6645 6645 } 6646 - ret = copy_user_large_folio(folio, *foliop, 6647 - ALIGN_DOWN(dst_addr, size), dst_vma); 6646 + ret = copy_user_large_folio(folio, *foliop, dst_addr, dst_vma); 6648 6647 folio_put(*foliop); 6649 6648 *foliop = NULL; 6650 6649 if (ret) {
-6
mm/internal.h
··· 1285 1285 void touch_pmd(struct vm_area_struct *vma, unsigned long addr, 1286 1286 pmd_t *pmd, bool write); 1287 1287 1288 - static inline bool alloc_zeroed(void) 1289 - { 1290 - return static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, 1291 - &init_on_alloc); 1292 - } 1293 - 1294 1288 /* 1295 1289 * Parses a string with mem suffixes into its order. Useful to parse kernel 1296 1290 * parameters.
+10 -8
mm/memory.c
··· 4733 4733 folio_throttle_swaprate(folio, gfp); 4734 4734 /* 4735 4735 * When a folio is not zeroed during allocation 4736 - * (__GFP_ZERO not used), folio_zero_user() is used 4737 - * to make sure that the page corresponding to the 4738 - * faulting address will be hot in the cache after 4739 - * zeroing. 4736 + * (__GFP_ZERO not used) or user folios require special 4737 + * handling, folio_zero_user() is used to make sure 4738 + * that the page corresponding to the faulting address 4739 + * will be hot in the cache after zeroing. 4740 4740 */ 4741 - if (!alloc_zeroed()) 4741 + if (user_alloc_needs_zeroing()) 4742 4742 folio_zero_user(folio, vmf->address); 4743 4743 return folio; 4744 4744 } ··· 6815 6815 return 0; 6816 6816 } 6817 6817 6818 - static void clear_gigantic_page(struct folio *folio, unsigned long addr, 6818 + static void clear_gigantic_page(struct folio *folio, unsigned long addr_hint, 6819 6819 unsigned int nr_pages) 6820 6820 { 6821 + unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(folio)); 6821 6822 int i; 6822 6823 6823 6824 might_sleep(); ··· 6852 6851 } 6853 6852 6854 6853 static int copy_user_gigantic_page(struct folio *dst, struct folio *src, 6855 - unsigned long addr, 6854 + unsigned long addr_hint, 6856 6855 struct vm_area_struct *vma, 6857 6856 unsigned int nr_pages) 6858 6857 { 6859 - int i; 6858 + unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(dst)); 6860 6859 struct page *dst_page; 6861 6860 struct page *src_page; 6861 + int i; 6862 6862 6863 6863 for (i = 0; i < nr_pages; i++) { 6864 6864 dst_page = folio_page(dst, i);
+4 -2
mm/page_alloc.c
··· 1238 1238 if (order > pageblock_order) 1239 1239 order = pageblock_order; 1240 1240 1241 - while (pfn != end) { 1241 + do { 1242 1242 int mt = get_pfnblock_migratetype(page, pfn); 1243 1243 1244 1244 __free_one_page(page, pfn, zone, order, mt, fpi); 1245 1245 pfn += 1 << order; 1246 + if (pfn == end) 1247 + break; 1246 1248 page = pfn_to_page(pfn); 1247 - } 1249 + } while (1); 1248 1250 } 1249 1251 1250 1252 static void free_one_page(struct zone *zone, struct page *page,
+1 -1
mm/pgtable-generic.c
··· 279 279 static void pmdp_get_lockless_end(unsigned long irqflags) { } 280 280 #endif 281 281 282 - pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp) 282 + pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp) 283 283 { 284 284 unsigned long irqflags; 285 285 pmd_t pmdval;
+12 -10
mm/shmem.c
··· 787 787 } 788 788 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ 789 789 790 + static void shmem_update_stats(struct folio *folio, int nr_pages) 791 + { 792 + if (folio_test_pmd_mappable(folio)) 793 + __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr_pages); 794 + __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr_pages); 795 + __lruvec_stat_mod_folio(folio, NR_SHMEM, nr_pages); 796 + } 797 + 790 798 /* 791 799 * Somewhat like filemap_add_folio, but error if expected item has gone. 792 800 */ ··· 829 821 xas_store(&xas, folio); 830 822 if (xas_error(&xas)) 831 823 goto unlock; 832 - if (folio_test_pmd_mappable(folio)) 833 - __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr); 834 - __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr); 835 - __lruvec_stat_mod_folio(folio, NR_SHMEM, nr); 824 + shmem_update_stats(folio, nr); 836 825 mapping->nrpages += nr; 837 826 unlock: 838 827 xas_unlock_irq(&xas); ··· 857 852 error = shmem_replace_entry(mapping, folio->index, folio, radswap); 858 853 folio->mapping = NULL; 859 854 mapping->nrpages -= nr; 860 - __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr); 861 - __lruvec_stat_mod_folio(folio, NR_SHMEM, -nr); 855 + shmem_update_stats(folio, -nr); 862 856 xa_unlock_irq(&mapping->i_pages); 863 857 folio_put_refs(folio, nr); 864 858 BUG_ON(error); ··· 1973 1969 } 1974 1970 if (!error) { 1975 1971 mem_cgroup_replace_folio(old, new); 1976 - __lruvec_stat_mod_folio(new, NR_FILE_PAGES, nr_pages); 1977 - __lruvec_stat_mod_folio(new, NR_SHMEM, nr_pages); 1978 - __lruvec_stat_mod_folio(old, NR_FILE_PAGES, -nr_pages); 1979 - __lruvec_stat_mod_folio(old, NR_SHMEM, -nr_pages); 1972 + shmem_update_stats(new, nr_pages); 1973 + shmem_update_stats(old, -nr_pages); 1980 1974 } 1981 1975 xa_unlock_irq(&swap_mapping->i_pages); 1982 1976
+4 -1
mm/vma.c
··· 2460 2460 2461 2461 /* If flags changed, we might be able to merge, so try again. */ 2462 2462 if (map.retry_merge) { 2463 + struct vm_area_struct *merged; 2463 2464 VMG_MMAP_STATE(vmg, &map, vma); 2464 2465 2465 2466 vma_iter_config(map.vmi, map.addr, map.end); 2466 - vma_merge_existing_range(&vmg); 2467 + merged = vma_merge_existing_range(&vmg); 2468 + if (merged) 2469 + vma = merged; 2467 2470 } 2468 2471 2469 2472 __mmap_complete(&map, vma);
+4 -2
mm/vmalloc.c
··· 3374 3374 struct page *page = vm->pages[i]; 3375 3375 3376 3376 BUG_ON(!page); 3377 - mod_memcg_page_state(page, MEMCG_VMALLOC, -1); 3377 + if (!(vm->flags & VM_MAP_PUT_PAGES)) 3378 + mod_memcg_page_state(page, MEMCG_VMALLOC, -1); 3378 3379 /* 3379 3380 * High-order allocs for huge vmallocs are split, so 3380 3381 * can be freed as an array of order-0 allocations ··· 3383 3382 __free_page(page); 3384 3383 cond_resched(); 3385 3384 } 3386 - atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages); 3385 + if (!(vm->flags & VM_MAP_PUT_PAGES)) 3386 + atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages); 3387 3387 kvfree(vm->pages); 3388 3388 kfree(vm); 3389 3389 }
+2
net/ceph/osd_client.c
··· 1173 1173 1174 1174 int __ceph_alloc_sparse_ext_map(struct ceph_osd_req_op *op, int cnt) 1175 1175 { 1176 + WARN_ON(op->op != CEPH_OSD_OP_SPARSE_READ); 1177 + 1176 1178 op->extent.sparse_ext_cnt = cnt; 1177 1179 op->extent.sparse_ext = kmalloc_array(cnt, 1178 1180 sizeof(*op->extent.sparse_ext),
+3 -1
net/core/dev.c
··· 3642 3642 3643 3643 if (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) { 3644 3644 if (vlan_get_protocol(skb) == htons(ETH_P_IPV6) && 3645 - skb_network_header_len(skb) != sizeof(struct ipv6hdr)) 3645 + skb_network_header_len(skb) != sizeof(struct ipv6hdr) && 3646 + !ipv6_has_hopopt_jumbo(skb)) 3646 3647 goto sw_checksum; 3648 + 3647 3649 switch (skb->csum_offset) { 3648 3650 case offsetof(struct tcphdr, check): 3649 3651 case offsetof(struct udphdr, check):
+15 -6
net/core/filter.c
··· 3734 3734 3735 3735 static u32 __bpf_skb_min_len(const struct sk_buff *skb) 3736 3736 { 3737 - u32 min_len = skb_network_offset(skb); 3737 + int offset = skb_network_offset(skb); 3738 + u32 min_len = 0; 3738 3739 3739 - if (skb_transport_header_was_set(skb)) 3740 - min_len = skb_transport_offset(skb); 3741 - if (skb->ip_summed == CHECKSUM_PARTIAL) 3742 - min_len = skb_checksum_start_offset(skb) + 3743 - skb->csum_offset + sizeof(__sum16); 3740 + if (offset > 0) 3741 + min_len = offset; 3742 + if (skb_transport_header_was_set(skb)) { 3743 + offset = skb_transport_offset(skb); 3744 + if (offset > 0) 3745 + min_len = offset; 3746 + } 3747 + if (skb->ip_summed == CHECKSUM_PARTIAL) { 3748 + offset = skb_checksum_start_offset(skb) + 3749 + skb->csum_offset + sizeof(__sum16); 3750 + if (offset > 0) 3751 + min_len = offset; 3752 + } 3744 3753 return min_len; 3745 3754 } 3746 3755
+5 -1
net/core/netdev-genl.c
··· 246 246 rcu_read_unlock(); 247 247 rtnl_unlock(); 248 248 249 - if (err) 249 + if (err) { 250 250 goto err_free_msg; 251 + } else if (!rsp->len) { 252 + err = -ENOENT; 253 + goto err_free_msg; 254 + } 251 255 252 256 return genlmsg_reply(rsp, info); 253 257
+8 -3
net/core/skmsg.c
··· 369 369 struct sk_msg *msg, u32 bytes) 370 370 { 371 371 int ret = -ENOSPC, i = msg->sg.curr; 372 + u32 copy, buf_size, copied = 0; 372 373 struct scatterlist *sge; 373 - u32 copy, buf_size; 374 374 void *to; 375 375 376 376 do { ··· 397 397 goto out; 398 398 } 399 399 bytes -= copy; 400 + copied += copy; 400 401 if (!bytes) 401 402 break; 402 403 msg->sg.copybreak = 0; ··· 405 404 } while (i != msg->sg.end); 406 405 out: 407 406 msg->sg.curr = i; 408 - return ret; 407 + return (ret < 0) ? ret : copied; 409 408 } 410 409 EXPORT_SYMBOL_GPL(sk_msg_memcopy_from_iter); 411 410 ··· 446 445 if (likely(!peek)) { 447 446 sge->offset += copy; 448 447 sge->length -= copy; 449 - if (!msg_rx->skb) 448 + if (!msg_rx->skb) { 450 449 sk_mem_uncharge(sk, copy); 450 + atomic_sub(copy, &sk->sk_rmem_alloc); 451 + } 451 452 msg_rx->sg.size -= copy; 452 453 453 454 if (!sge->length) { ··· 775 772 776 773 list_for_each_entry_safe(msg, tmp, &psock->ingress_msg, list) { 777 774 list_del(&msg->list); 775 + if (!msg->skb) 776 + atomic_sub(msg->sg.size, &psock->sk->sk_rmem_alloc); 778 777 sk_msg_free(psock->sk, msg); 779 778 kfree(msg); 780 779 }
+4 -1
net/core/sock.c
··· 1300 1300 sk->sk_reuse = (valbool ? SK_CAN_REUSE : SK_NO_REUSE); 1301 1301 break; 1302 1302 case SO_REUSEPORT: 1303 - sk->sk_reuseport = valbool; 1303 + if (valbool && !sk_is_inet(sk)) 1304 + ret = -EOPNOTSUPP; 1305 + else 1306 + sk->sk_reuseport = valbool; 1304 1307 break; 1305 1308 case SO_DONTROUTE: 1306 1309 sock_valbool_flag(sk, SOCK_LOCALROUTE, valbool);
+3 -3
net/ipv4/ip_tunnel.c
··· 294 294 295 295 ip_tunnel_init_flow(&fl4, iph->protocol, iph->daddr, 296 296 iph->saddr, tunnel->parms.o_key, 297 - iph->tos & INET_DSCP_MASK, dev_net(dev), 297 + iph->tos & INET_DSCP_MASK, tunnel->net, 298 298 tunnel->parms.link, tunnel->fwmark, 0, 0); 299 299 rt = ip_route_output_key(tunnel->net, &fl4); 300 300 ··· 611 611 } 612 612 ip_tunnel_init_flow(&fl4, proto, key->u.ipv4.dst, key->u.ipv4.src, 613 613 tunnel_id_to_key32(key->tun_id), 614 - tos & INET_DSCP_MASK, dev_net(dev), 0, skb->mark, 614 + tos & INET_DSCP_MASK, tunnel->net, 0, skb->mark, 615 615 skb_get_hash(skb), key->flow_flags); 616 616 617 617 if (!tunnel_hlen) ··· 774 774 775 775 ip_tunnel_init_flow(&fl4, protocol, dst, tnl_params->saddr, 776 776 tunnel->parms.o_key, tos & INET_DSCP_MASK, 777 - dev_net(dev), READ_ONCE(tunnel->parms.link), 777 + tunnel->net, READ_ONCE(tunnel->parms.link), 778 778 tunnel->fwmark, skb_get_hash(skb), 0); 779 779 780 780 if (ip_tunnel_encap(skb, &tunnel->encap, &protocol, &fl4) < 0)
+8 -6
net/ipv4/tcp_bpf.c
··· 49 49 sge = sk_msg_elem(msg, i); 50 50 size = (apply && apply_bytes < sge->length) ? 51 51 apply_bytes : sge->length; 52 - if (!sk_wmem_schedule(sk, size)) { 52 + if (!__sk_rmem_schedule(sk, size, false)) { 53 53 if (!copied) 54 54 ret = -ENOMEM; 55 55 break; 56 56 } 57 57 58 58 sk_mem_charge(sk, size); 59 + atomic_add(size, &sk->sk_rmem_alloc); 59 60 sk_msg_xfer(tmp, msg, i, size); 60 61 copied += size; 61 62 if (sge->length) ··· 75 74 76 75 if (!ret) { 77 76 msg->sg.start = i; 78 - sk_psock_queue_msg(psock, tmp); 77 + if (!sk_psock_queue_msg(psock, tmp)) 78 + atomic_sub(copied, &sk->sk_rmem_alloc); 79 79 sk_psock_data_ready(sk, psock); 80 80 } else { 81 81 sk_msg_free(sk, tmp); ··· 495 493 static int tcp_bpf_sendmsg(struct sock *sk, struct msghdr *msg, size_t size) 496 494 { 497 495 struct sk_msg tmp, *msg_tx = NULL; 498 - int copied = 0, err = 0; 496 + int copied = 0, err = 0, ret = 0; 499 497 struct sk_psock *psock; 500 498 long timeo; 501 499 int flags; ··· 538 536 copy = msg_tx->sg.size - osize; 539 537 } 540 538 541 - err = sk_msg_memcopy_from_iter(sk, &msg->msg_iter, msg_tx, 539 + ret = sk_msg_memcopy_from_iter(sk, &msg->msg_iter, msg_tx, 542 540 copy); 543 - if (err < 0) { 541 + if (ret < 0) { 544 542 sk_msg_trim(sk, msg_tx, osize); 545 543 goto out_err; 546 544 } 547 545 548 - copied += copy; 546 + copied += ret; 549 547 if (psock->cork_bytes) { 550 548 if (size > psock->cork_bytes) 551 549 psock->cork_bytes = 0;
+1
net/ipv4/tcp_input.c
··· 7328 7328 if (unlikely(!inet_csk_reqsk_queue_hash_add(sk, req, 7329 7329 req->timeout))) { 7330 7330 reqsk_free(req); 7331 + dst_release(dst); 7331 7332 return 0; 7332 7333 } 7333 7334
+11 -5
net/ipv6/ila/ila_xlat.c
··· 195 195 }, 196 196 }; 197 197 198 + static DEFINE_MUTEX(ila_mutex); 199 + 198 200 static int ila_add_mapping(struct net *net, struct ila_xlat_params *xp) 199 201 { 200 202 struct ila_net *ilan = net_generic(net, ila_net_id); ··· 204 202 spinlock_t *lock = ila_get_lock(ilan, xp->ip.locator_match); 205 203 int err = 0, order; 206 204 207 - if (!ilan->xlat.hooks_registered) { 205 + if (!READ_ONCE(ilan->xlat.hooks_registered)) { 208 206 /* We defer registering net hooks in the namespace until the 209 207 * first mapping is added. 210 208 */ 211 - err = nf_register_net_hooks(net, ila_nf_hook_ops, 212 - ARRAY_SIZE(ila_nf_hook_ops)); 209 + mutex_lock(&ila_mutex); 210 + if (!ilan->xlat.hooks_registered) { 211 + err = nf_register_net_hooks(net, ila_nf_hook_ops, 212 + ARRAY_SIZE(ila_nf_hook_ops)); 213 + if (!err) 214 + WRITE_ONCE(ilan->xlat.hooks_registered, true); 215 + } 216 + mutex_unlock(&ila_mutex); 213 217 if (err) 214 218 return err; 215 - 216 - ilan->xlat.hooks_registered = true; 217 219 } 218 220 219 221 ila = kzalloc(sizeof(*ila), GFP_KERNEL);
+1 -1
net/llc/llc_input.c
··· 124 124 if (unlikely(!pskb_may_pull(skb, llc_len))) 125 125 return 0; 126 126 127 - skb->transport_header += llc_len; 128 127 skb_pull(skb, llc_len); 128 + skb_reset_transport_header(skb); 129 129 if (skb->protocol == htons(ETH_P_802_2)) { 130 130 __be16 pdulen; 131 131 s32 data_size;
+7
net/mptcp/options.c
··· 667 667 &echo, &drop_other_suboptions)) 668 668 return false; 669 669 670 + /* 671 + * Later on, mptcp_write_options() will enforce mutual exclusion with 672 + * DSS; bail out if such an option is set and we can't drop it. 673 + */ 670 674 if (drop_other_suboptions) 671 675 remaining += opt_size; 676 + else if (opts->suboptions & OPTION_MPTCP_DSS) 677 + return false; 678 + 672 679 len = mptcp_add_addr_len(opts->addr.family, echo, !!opts->addr.port); 673 680 if (remaining < len) 674 681 return false;
+12 -11
net/mptcp/protocol.c
··· 136 136 int delta; 137 137 138 138 if (MPTCP_SKB_CB(from)->offset || 139 + ((to->len + from->len) > (sk->sk_rcvbuf >> 3)) || 139 140 !skb_try_coalesce(to, from, &fragstolen, &delta)) 140 141 return false; 141 142 ··· 529 528 mptcp_subflow_send_ack(mptcp_subflow_tcp_sock(subflow)); 530 529 } 531 530 532 - static void mptcp_subflow_cleanup_rbuf(struct sock *ssk) 531 + static void mptcp_subflow_cleanup_rbuf(struct sock *ssk, int copied) 533 532 { 534 533 bool slow; 535 534 536 535 slow = lock_sock_fast(ssk); 537 536 if (tcp_can_send_ack(ssk)) 538 - tcp_cleanup_rbuf(ssk, 1); 537 + tcp_cleanup_rbuf(ssk, copied); 539 538 unlock_sock_fast(ssk, slow); 540 539 } 541 540 ··· 552 551 (ICSK_ACK_PUSHED2 | ICSK_ACK_PUSHED))); 553 552 } 554 553 555 - static void mptcp_cleanup_rbuf(struct mptcp_sock *msk) 554 + static void mptcp_cleanup_rbuf(struct mptcp_sock *msk, int copied) 556 555 { 557 556 int old_space = READ_ONCE(msk->old_wspace); 558 557 struct mptcp_subflow_context *subflow; ··· 560 559 int space = __mptcp_space(sk); 561 560 bool cleanup, rx_empty; 562 561 563 - cleanup = (space > 0) && (space >= (old_space << 1)); 564 - rx_empty = !__mptcp_rmem(sk); 562 + cleanup = (space > 0) && (space >= (old_space << 1)) && copied; 563 + rx_empty = !__mptcp_rmem(sk) && copied; 565 564 566 565 mptcp_for_each_subflow(msk, subflow) { 567 566 struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 568 567 569 568 if (cleanup || mptcp_subflow_could_cleanup(ssk, rx_empty)) 570 - mptcp_subflow_cleanup_rbuf(ssk); 569 + mptcp_subflow_cleanup_rbuf(ssk, copied); 571 570 } 572 571 } 573 572 ··· 1940 1939 goto out; 1941 1940 } 1942 1941 1942 + static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied); 1943 + 1943 1944 static int __mptcp_recvmsg_mskq(struct mptcp_sock *msk, 1944 1945 struct msghdr *msg, 1945 1946 size_t len, int flags, ··· 1995 1992 break; 1996 1993 } 1997 1994 1995 + mptcp_rcv_space_adjust(msk, copied); 1998 1996 return copied; 1999 1997 } 2000 1998 ··· 2221 2217 2222 
2218 copied += bytes_read; 2223 2219 2224 - /* be sure to advertise window change */ 2225 - mptcp_cleanup_rbuf(msk); 2226 - 2227 2220 if (skb_queue_empty(&msk->receive_queue) && __mptcp_move_skbs(msk)) 2228 2221 continue; 2229 2222 ··· 2269 2268 } 2270 2269 2271 2270 pr_debug("block timeout %ld\n", timeo); 2272 - mptcp_rcv_space_adjust(msk, copied); 2271 + mptcp_cleanup_rbuf(msk, copied); 2273 2272 err = sk_wait_data(sk, &timeo, NULL); 2274 2273 if (err < 0) { 2275 2274 err = copied ? : err; ··· 2277 2276 } 2278 2277 } 2279 2278 2280 - mptcp_rcv_space_adjust(msk, copied); 2279 + mptcp_cleanup_rbuf(msk, copied); 2281 2280 2282 2281 out_err: 2283 2282 if (cmsg_flags && copied >= 0) {
+6
net/netrom/nr_route.c
··· 754 754 int ret; 755 755 struct sk_buff *skbn; 756 756 757 + /* 758 + * Reject malformed packets early. Check that it contains at least 2 759 + * addresses and 1 byte more for Time-To-Live 760 + */ 761 + if (skb->len < 2 * sizeof(ax25_address) + 1) 762 + return 0; 757 763 758 764 nr_src = (ax25_address *)(skb->data + 0); 759 765 nr_dest = (ax25_address *)(skb->data + 7);
+7 -21
net/packet/af_packet.c
··· 538 538 return packet_lookup_frame(po, rb, rb->head, status); 539 539 } 540 540 541 - static u16 vlan_get_tci(struct sk_buff *skb, struct net_device *dev) 541 + static u16 vlan_get_tci(const struct sk_buff *skb, struct net_device *dev) 542 542 { 543 - u8 *skb_orig_data = skb->data; 544 - int skb_orig_len = skb->len; 545 543 struct vlan_hdr vhdr, *vh; 546 544 unsigned int header_len; 547 545 ··· 560 562 else 561 563 return 0; 562 564 563 - skb_push(skb, skb->data - skb_mac_header(skb)); 564 - vh = skb_header_pointer(skb, header_len, sizeof(vhdr), &vhdr); 565 - if (skb_orig_data != skb->data) { 566 - skb->data = skb_orig_data; 567 - skb->len = skb_orig_len; 568 - } 565 + vh = skb_header_pointer(skb, skb_mac_offset(skb) + header_len, 566 + sizeof(vhdr), &vhdr); 569 567 if (unlikely(!vh)) 570 568 return 0; 571 569 572 570 return ntohs(vh->h_vlan_TCI); 573 571 } 574 572 575 - static __be16 vlan_get_protocol_dgram(struct sk_buff *skb) 573 + static __be16 vlan_get_protocol_dgram(const struct sk_buff *skb) 576 574 { 577 575 __be16 proto = skb->protocol; 578 576 579 - if (unlikely(eth_type_vlan(proto))) { 580 - u8 *skb_orig_data = skb->data; 581 - int skb_orig_len = skb->len; 582 - 583 - skb_push(skb, skb->data - skb_mac_header(skb)); 584 - proto = __vlan_get_protocol(skb, proto, NULL); 585 - if (skb_orig_data != skb->data) { 586 - skb->data = skb_orig_data; 587 - skb->len = skb_orig_len; 588 - } 589 - } 577 + if (unlikely(eth_type_vlan(proto))) 578 + proto = __vlan_get_protocol_offset(skb, proto, 579 + skb_mac_offset(skb), NULL); 590 580 591 581 return proto; 592 582 }
+2 -1
net/sctp/associola.c
··· 137 137 = 5 * asoc->rto_max; 138 138 139 139 asoc->timeouts[SCTP_EVENT_TIMEOUT_SACK] = asoc->sackdelay; 140 - asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] = sp->autoclose * HZ; 140 + asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] = 141 + (unsigned long)sp->autoclose * HZ; 141 142 142 143 /* Initializes the timers */ 143 144 for (i = SCTP_EVENT_TIMEOUT_NONE; i < SCTP_NUM_TIMEOUT_TYPES; ++i)
+9 -8
scripts/mod/modpost.c
··· 155 155 /* A list of all modules we processed */ 156 156 LIST_HEAD(modules); 157 157 158 - static struct module *find_module(const char *modname) 158 + static struct module *find_module(const char *filename, const char *modname) 159 159 { 160 160 struct module *mod; 161 161 162 162 list_for_each_entry(mod, &modules, list) { 163 - if (strcmp(mod->name, modname) == 0) 163 + if (!strcmp(mod->dump_file, filename) && 164 + !strcmp(mod->name, modname)) 164 165 return mod; 165 166 } 166 167 return NULL; ··· 2031 2030 continue; 2032 2031 } 2033 2032 2034 - mod = find_module(modname); 2033 + mod = find_module(fname, modname); 2035 2034 if (!mod) { 2036 2035 mod = new_module(modname, strlen(modname)); 2037 - mod->from_dump = true; 2036 + mod->dump_file = fname; 2038 2037 } 2039 2038 s = sym_add_exported(symname, mod, gpl_only, namespace); 2040 2039 sym_set_crc(s, crc); ··· 2053 2052 struct symbol *sym; 2054 2053 2055 2054 list_for_each_entry(mod, &modules, list) { 2056 - if (mod->from_dump) 2055 + if (mod->dump_file) 2057 2056 continue; 2058 2057 list_for_each_entry(sym, &mod->exported_symbols, list) { 2059 2058 if (trim_unused_exports && !sym->used) ··· 2077 2076 2078 2077 list_for_each_entry(mod, &modules, list) { 2079 2078 2080 - if (mod->from_dump || list_empty(&mod->missing_namespaces)) 2079 + if (mod->dump_file || list_empty(&mod->missing_namespaces)) 2081 2080 continue; 2082 2081 2083 2082 buf_printf(&ns_deps_buf, "%s.ko:", mod->name); ··· 2195 2194 read_symbols_from_files(files_source); 2196 2195 2197 2196 list_for_each_entry(mod, &modules, list) { 2198 - if (mod->from_dump || mod->is_vmlinux) 2197 + if (mod->dump_file || mod->is_vmlinux) 2199 2198 continue; 2200 2199 2201 2200 check_modname_len(mod); ··· 2206 2205 handle_white_list_exports(unused_exports_white_list); 2207 2206 2208 2207 list_for_each_entry(mod, &modules, list) { 2209 - if (mod->from_dump) 2208 + if (mod->dump_file) 2210 2209 continue; 2211 2210 2212 2211 if (mod->is_vmlinux)
+2 -1
scripts/mod/modpost.h
··· 95 95 /** 96 96 * struct module - represent a module (vmlinux or *.ko) 97 97 * 98 + * @dump_file: path to the .symvers file if loaded from a file 98 99 * @aliases: list head for module_aliases 99 100 */ 100 101 struct module { 101 102 struct list_head list; 102 103 struct list_head exported_symbols; 103 104 struct list_head unresolved_symbols; 105 + const char *dump_file; 104 106 bool is_gpl_compatible; 105 - bool from_dump; /* true if module was loaded from *.symvers */ 106 107 bool is_vmlinux; 107 108 bool seen; 108 109 bool has_init;
+6
scripts/package/builddeb
··· 63 63 esac 64 64 cp "$(${MAKE} -s -f ${srctree}/Makefile image_name)" "${pdir}/${installed_image_path}" 65 65 66 + if [ "${ARCH}" != um ]; then 67 + install_maint_scripts "${pdir}" 68 + fi 69 + } 70 + 71 + install_maint_scripts () { 66 72 # Install the maintainer scripts 67 73 # Note: hook scripts under /etc/kernel are also executed by official Debian 68 74 # kernel packages, as well as kernel packages built using make-kpkg.
+7
scripts/package/mkdebian
··· 70 70 debarch=sh4$(if_enabled_echo CONFIG_CPU_BIG_ENDIAN eb) 71 71 fi 72 72 ;; 73 + um) 74 + if is_enabled CONFIG_64BIT; then 75 + debarch=amd64 76 + else 77 + debarch=i386 78 + fi 79 + ;; 73 80 esac 74 81 if [ -z "$debarch" ]; then 75 82 debarch=$(dpkg-architecture -qDEB_HOST_ARCH)
+26 -17
sound/core/compress_offload.c
··· 1025 1025 static int snd_compr_task_new(struct snd_compr_stream *stream, struct snd_compr_task *utask) 1026 1026 { 1027 1027 struct snd_compr_task_runtime *task; 1028 - int retval; 1028 + int retval, fd_i, fd_o; 1029 1029 1030 1030 if (stream->runtime->total_tasks >= stream->runtime->fragments) 1031 1031 return -EBUSY; ··· 1039 1039 retval = stream->ops->task_create(stream, task); 1040 1040 if (retval < 0) 1041 1041 goto cleanup; 1042 - utask->input_fd = dma_buf_fd(task->input, O_WRONLY|O_CLOEXEC); 1043 - if (utask->input_fd < 0) { 1044 - retval = utask->input_fd; 1042 + /* similar functionality as in dma_buf_fd(), but ensure that both 1043 + file descriptors are allocated before fd_install() */ 1044 + if (!task->input || !task->input->file || !task->output || !task->output->file) { 1045 + retval = -EINVAL; 1045 1046 goto cleanup; 1046 1047 } 1047 - utask->output_fd = dma_buf_fd(task->output, O_RDONLY|O_CLOEXEC); 1048 - if (utask->output_fd < 0) { 1049 - retval = utask->output_fd; 1048 + fd_i = get_unused_fd_flags(O_WRONLY|O_CLOEXEC); 1049 + if (fd_i < 0) 1050 + goto cleanup; 1051 + fd_o = get_unused_fd_flags(O_RDONLY|O_CLOEXEC); 1052 + if (fd_o < 0) { 1053 + put_unused_fd(fd_i); 1050 1054 goto cleanup; 1051 1055 } 1052 1056 /* keep dmabuf reference until freed with task free ioctl */ 1053 - dma_buf_get(utask->input_fd); 1054 - dma_buf_get(utask->output_fd); 1057 + get_dma_buf(task->input); 1058 + get_dma_buf(task->output); 1059 + fd_install(fd_i, task->input->file); 1060 + fd_install(fd_o, task->output->file); 1061 + utask->input_fd = fd_i; 1062 + utask->output_fd = fd_o; 1055 1063 list_add_tail(&task->list, &stream->runtime->tasks); 1056 1064 stream->runtime->total_tasks++; 1057 1065 return 0; ··· 1077 1069 return -EPERM; 1078 1070 task = memdup_user((void __user *)arg, sizeof(*task)); 1079 1071 if (IS_ERR(task)) 1080 - return PTR_ERR(no_free_ptr(task)); 1072 + return PTR_ERR(task); 1081 1073 retval = snd_compr_task_new(stream, task); 1082 1074 if (retval >= 
0) 1083 1075 if (copy_to_user((void __user *)arg, task, sizeof(*task))) ··· 1138 1130 return -EPERM; 1139 1131 task = memdup_user((void __user *)arg, sizeof(*task)); 1140 1132 if (IS_ERR(task)) 1141 - return PTR_ERR(no_free_ptr(task)); 1133 + return PTR_ERR(task); 1142 1134 retval = snd_compr_task_start(stream, task); 1143 1135 if (retval >= 0) 1144 1136 if (copy_to_user((void __user *)arg, task, sizeof(*task))) ··· 1182 1174 static int snd_compr_task_seq(struct snd_compr_stream *stream, unsigned long arg, 1183 1175 snd_compr_seq_func_t fcn) 1184 1176 { 1185 - struct snd_compr_task_runtime *task; 1177 + struct snd_compr_task_runtime *task, *temp; 1186 1178 __u64 seqno; 1187 1179 int retval; 1188 1180 1189 1181 if (stream->runtime->state != SNDRV_PCM_STATE_SETUP) 1190 1182 return -EPERM; 1191 - retval = get_user(seqno, (__u64 __user *)arg); 1192 - if (retval < 0) 1193 - return retval; 1183 + retval = copy_from_user(&seqno, (__u64 __user *)arg, sizeof(seqno)); 1184 + if (retval) 1185 + return -EFAULT; 1194 1186 retval = 0; 1195 1187 if (seqno == 0) { 1196 - list_for_each_entry_reverse(task, &stream->runtime->tasks, list) 1188 + list_for_each_entry_safe_reverse(task, temp, &stream->runtime->tasks, list) 1197 1189 fcn(stream, task); 1198 1190 } else { 1199 1191 task = snd_compr_find_task(stream, seqno); ··· 1229 1221 return -EPERM; 1230 1222 status = memdup_user((void __user *)arg, sizeof(*status)); 1231 1223 if (IS_ERR(status)) 1232 - return PTR_ERR(no_free_ptr(status)); 1224 + return PTR_ERR(status); 1233 1225 retval = snd_compr_task_status(stream, status); 1234 1226 if (retval >= 0) 1235 1227 if (copy_to_user((void __user *)arg, status, sizeof(*status))) ··· 1255 1247 } 1256 1248 EXPORT_SYMBOL_GPL(snd_compr_task_finished); 1257 1249 1250 + MODULE_IMPORT_NS("DMA_BUF"); 1258 1251 #endif /* CONFIG_SND_COMPRESS_ACCEL */ 1259 1252 1260 1253 static long snd_compr_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+1 -1
sound/core/memalloc.c
··· 505 505 if (!p) 506 506 return NULL; 507 507 dmab->addr = dma_map_single(dmab->dev.dev, p, size, DMA_BIDIRECTIONAL); 508 - if (dmab->addr == DMA_MAPPING_ERROR) { 508 + if (dma_mapping_error(dmab->dev.dev, dmab->addr)) { 509 509 do_free_pages(dmab->area, size, true); 510 510 return NULL; 511 511 }
+2
sound/core/seq/oss/seq_oss_synth.c
··· 66 66 }; 67 67 68 68 static DEFINE_SPINLOCK(register_lock); 69 + static DEFINE_MUTEX(sysex_mutex); 69 70 70 71 /* 71 72 * prototypes ··· 498 497 if (!info) 499 498 return -ENXIO; 500 499 500 + guard(mutex)(&sysex_mutex); 501 501 sysex = info->sysex; 502 502 if (sysex == NULL) { 503 503 sysex = kzalloc(sizeof(*sysex), GFP_KERNEL);
+10 -4
sound/core/seq/seq_clientmgr.c
··· 1275 1275 if (client->type != client_info->type) 1276 1276 return -EINVAL; 1277 1277 1278 - /* check validity of midi_version field */ 1279 - if (client->user_pversion >= SNDRV_PROTOCOL_VERSION(1, 0, 3) && 1280 - client_info->midi_version > SNDRV_SEQ_CLIENT_UMP_MIDI_2_0) 1281 - return -EINVAL; 1278 + if (client->user_pversion >= SNDRV_PROTOCOL_VERSION(1, 0, 3)) { 1279 + /* check validity of midi_version field */ 1280 + if (client_info->midi_version > SNDRV_SEQ_CLIENT_UMP_MIDI_2_0) 1281 + return -EINVAL; 1282 + 1283 + /* check if UMP is supported in kernel */ 1284 + if (!IS_ENABLED(CONFIG_SND_SEQ_UMP) && 1285 + client_info->midi_version > 0) 1286 + return -EINVAL; 1287 + } 1282 1288 1283 1289 /* fill the info fields */ 1284 1290 if (client_info->name[0])
+1 -1
sound/core/ump.c
··· 1244 1244 1245 1245 num = 0; 1246 1246 for (i = 0; i < SNDRV_UMP_MAX_GROUPS; i++) 1247 - if ((group_maps & (1U << i)) && ump->groups[i].valid) 1247 + if (group_maps & (1U << i)) 1248 1248 ump->legacy_mapping[num++] = i; 1249 1249 1250 1250 return num;
+1
sound/pci/hda/patch_realtek.c
··· 11009 11009 SND_PCI_QUIRK(0xf111, 0x0001, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 11010 11010 SND_PCI_QUIRK(0xf111, 0x0006, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 11011 11011 SND_PCI_QUIRK(0xf111, 0x0009, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 11012 + SND_PCI_QUIRK(0xf111, 0x000c, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 11012 11013 11013 11014 #if 0 11014 11015 /* Below is a quirk table taken from the old code.
+4
sound/pci/hda/tas2781_hda_i2c.c
··· 142 142 } 143 143 sub = acpi_get_subsystem_id(ACPI_HANDLE(physdev)); 144 144 if (IS_ERR(sub)) { 145 + /* No subsys id in older tas2563 projects. */ 146 + if (!strncmp(hid, "INT8866", sizeof("INT8866"))) 147 + goto end_2563; 145 148 dev_err(p->dev, "Failed to get SUBSYS ID.\n"); 146 149 ret = PTR_ERR(sub); 147 150 goto err; ··· 167 164 p->speaker_id = NULL; 168 165 } 169 166 167 + end_2563: 170 168 acpi_dev_free_resource_list(&resources); 171 169 strscpy(p->dev_name, hid, sizeof(p->dev_name)); 172 170 put_device(physdev);
+1 -1
sound/sh/sh_dac_audio.c
··· 163 163 /* channel is not used (interleaved data) */ 164 164 struct snd_sh_dac *chip = snd_pcm_substream_chip(substream); 165 165 166 - if (copy_from_iter(chip->data_buffer + pos, src, count) != count) 166 + if (copy_from_iter(chip->data_buffer + pos, count, src) != count) 167 167 return -EFAULT; 168 168 chip->buffer_end = chip->data_buffer + pos + count; 169 169
+16 -1
sound/soc/amd/ps/pci-ps.c
··· 375 375 { 376 376 struct acpi_device *pdm_dev; 377 377 const union acpi_object *obj; 378 + acpi_handle handle; 379 + acpi_integer dmic_status; 378 380 u32 config; 379 381 bool is_dmic_dev = false; 380 382 bool is_sdw_dev = false; 383 + bool wov_en, dmic_en; 381 384 int ret; 385 + 386 + /* IF WOV entry not found, enable dmic based on acp-audio-device-type entry*/ 387 + wov_en = true; 388 + dmic_en = false; 382 389 383 390 config = readl(acp_data->acp63_base + ACP_PIN_CONFIG); 384 391 switch (config) { ··· 419 412 if (!acpi_dev_get_property(pdm_dev, "acp-audio-device-type", 420 413 ACPI_TYPE_INTEGER, &obj) && 421 414 obj->integer.value == ACP_DMIC_DEV) 422 - is_dmic_dev = true; 415 + dmic_en = true; 423 416 } 417 + 418 + handle = ACPI_HANDLE(&pci->dev); 419 + ret = acpi_evaluate_integer(handle, "_WOV", NULL, &dmic_status); 420 + if (!ACPI_FAILURE(ret)) 421 + wov_en = dmic_status; 424 422 } 423 + 424 + if (dmic_en && wov_en) 425 + is_dmic_dev = true; 425 426 426 427 if (acp_data->is_sdw_config) { 427 428 ret = acp_scan_sdw_devices(&pci->dev, ACP63_SDW_ADDR);
+6 -1
sound/soc/codecs/rt722-sdca.c
··· 1468 1468 0x008d); 1469 1469 /* check HP calibration FSM status */ 1470 1470 for (loop_check = 0; loop_check < chk_cnt; loop_check++) { 1471 + usleep_range(10000, 11000); 1471 1472 ret = rt722_sdca_index_read(rt722, RT722_VENDOR_CALI, 1472 1473 RT722_DAC_DC_CALI_CTL3, &calib_status); 1473 - if (ret < 0 || loop_check == chk_cnt) 1474 + if (ret < 0) 1474 1475 dev_dbg(&rt722->slave->dev, "calibration failed!, ret=%d\n", ret); 1475 1476 if ((calib_status & 0x0040) == 0x0) 1476 1477 break; 1477 1478 } 1479 + 1480 + if (loop_check == chk_cnt) 1481 + dev_dbg(&rt722->slave->dev, "%s, calibration time-out!\n", __func__); 1482 + 1478 1483 /* Set ADC09 power entity floating control */ 1479 1484 rt722_sdca_index_write(rt722, RT722_VENDOR_HDA_CTL, RT722_ADC0A_08_PDE_FLOAT_CTL, 1480 1485 0x2a12);
+20 -3
sound/soc/intel/boards/sof_sdw.c
··· 632 632 .callback = sof_sdw_quirk_cb, 633 633 .matches = { 634 634 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 635 - DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "233C") 635 + DMI_MATCH(DMI_PRODUCT_NAME, "21QB") 636 636 }, 637 637 /* Note this quirk excludes the CODEC mic */ 638 638 .driver_data = (void *)(SOC_SDW_CODEC_MIC), ··· 641 641 .callback = sof_sdw_quirk_cb, 642 642 .matches = { 643 643 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 644 - DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "233B") 644 + DMI_MATCH(DMI_PRODUCT_NAME, "21QA") 645 645 }, 646 - .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS), 646 + /* Note this quirk excludes the CODEC mic */ 647 + .driver_data = (void *)(SOC_SDW_CODEC_MIC), 648 + }, 649 + { 650 + .callback = sof_sdw_quirk_cb, 651 + .matches = { 652 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 653 + DMI_MATCH(DMI_PRODUCT_NAME, "21Q6") 654 + }, 655 + .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC), 656 + }, 657 + { 658 + .callback = sof_sdw_quirk_cb, 659 + .matches = { 660 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 661 + DMI_MATCH(DMI_PRODUCT_NAME, "21Q7") 662 + }, 663 + .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC), 647 664 }, 648 665 649 666 /* ArrowLake devices */
+2 -2
sound/soc/mediatek/common/mtk-afe-platform-driver.c
··· 120 120 struct mtk_base_afe *afe = snd_soc_component_get_drvdata(component); 121 121 122 122 size = afe->mtk_afe_hardware->buffer_bytes_max; 123 - snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV, 124 - afe->dev, size, size); 123 + snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV, afe->dev, 0, size); 124 + 125 125 return 0; 126 126 } 127 127 EXPORT_SYMBOL_GPL(mtk_afe_pcm_new);
+19 -6
sound/soc/sof/intel/hda-dai.c
··· 103 103 return sdai->platform_private; 104 104 } 105 105 106 - int hda_link_dma_cleanup(struct snd_pcm_substream *substream, struct hdac_ext_stream *hext_stream, 107 - struct snd_soc_dai *cpu_dai) 106 + static int 107 + hda_link_dma_cleanup(struct snd_pcm_substream *substream, 108 + struct hdac_ext_stream *hext_stream, 109 + struct snd_soc_dai *cpu_dai, bool release) 108 110 { 109 111 const struct hda_dai_widget_dma_ops *ops = hda_dai_get_ops(substream, cpu_dai); 110 112 struct sof_intel_hda_stream *hda_stream; ··· 128 126 if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { 129 127 stream_tag = hdac_stream(hext_stream)->stream_tag; 130 128 snd_hdac_ext_bus_link_clear_stream_id(hlink, stream_tag); 129 + } 130 + 131 + if (!release) { 132 + /* 133 + * Force stream reconfiguration without releasing the channel on 134 + * subsequent stream restart (without free), including LinkDMA 135 + * reset. 136 + * The stream is released via hda_dai_hw_free() 137 + */ 138 + hext_stream->link_prepared = 0; 139 + return 0; 131 140 } 132 141 133 142 if (ops->release_hext_stream) ··· 224 211 if (!hext_stream) 225 212 return 0; 226 213 227 - return hda_link_dma_cleanup(substream, hext_stream, cpu_dai); 214 + return hda_link_dma_cleanup(substream, hext_stream, cpu_dai, true); 228 215 } 229 216 230 217 static int __maybe_unused hda_dai_hw_params_data(struct snd_pcm_substream *substream, ··· 317 304 switch (cmd) { 318 305 case SNDRV_PCM_TRIGGER_STOP: 319 306 case SNDRV_PCM_TRIGGER_SUSPEND: 320 - ret = hda_link_dma_cleanup(substream, hext_stream, dai); 307 + ret = hda_link_dma_cleanup(substream, hext_stream, dai, 308 + cmd == SNDRV_PCM_TRIGGER_STOP ? false : true); 321 309 if (ret < 0) { 322 310 dev_err(sdev->dev, "%s: failed to clean up link DMA\n", __func__); 323 311 return ret; ··· 674 660 } 675 661 676 662 ret = hda_link_dma_cleanup(hext_stream->link_substream, 677 - hext_stream, 678 - cpu_dai); 663 + hext_stream, cpu_dai, true); 679 664 if (ret < 0) 680 665 return ret; 681 666 }
-2
sound/soc/sof/intel/hda.h
··· 1038 1038 hda_select_dai_widget_ops(struct snd_sof_dev *sdev, struct snd_sof_widget *swidget); 1039 1039 int hda_dai_config(struct snd_soc_dapm_widget *w, unsigned int flags, 1040 1040 struct snd_sof_dai_config_data *data); 1041 - int hda_link_dma_cleanup(struct snd_pcm_substream *substream, struct hdac_ext_stream *hext_stream, 1042 - struct snd_soc_dai *cpu_dai); 1043 1041 1044 1042 static inline struct snd_sof_dev *widget_to_sdev(struct snd_soc_dapm_widget *w) 1045 1043 {
+1 -1
sound/usb/mixer_us16x08.c
··· 687 687 struct usb_mixer_elem_info *elem = kcontrol->private_data; 688 688 struct snd_usb_audio *chip = elem->head.mixer->chip; 689 689 struct snd_us16x08_meter_store *store = elem->private_data; 690 - u8 meter_urb[64]; 690 + u8 meter_urb[64] = {0}; 691 691 692 692 switch (kcontrol->private_value) { 693 693 case 0: {
+11 -4
tools/include/uapi/linux/stddef.h
··· 8 8 #define __always_inline __inline__ 9 9 #endif 10 10 11 + /* Not all C++ standards support type declarations inside an anonymous union */ 12 + #ifndef __cplusplus 13 + #define __struct_group_tag(TAG) TAG 14 + #else 15 + #define __struct_group_tag(TAG) 16 + #endif 17 + 11 18 /** 12 19 * __struct_group() - Create a mirrored named and anonyomous struct 13 20 * ··· 27 20 * and size: one anonymous and one named. The former's members can be used 28 21 * normally without sub-struct naming, and the latter can be used to 29 22 * reason about the start, end, and size of the group of struct members. 30 - * The named struct can also be explicitly tagged for layer reuse, as well 31 - * as both having struct attributes appended. 23 + * The named struct can also be explicitly tagged for layer reuse (C only), 24 + * as well as both having struct attributes appended. 32 25 */ 33 26 #define __struct_group(TAG, NAME, ATTRS, MEMBERS...) \ 34 27 union { \ 35 28 struct { MEMBERS } ATTRS; \ 36 - struct TAG { MEMBERS } ATTRS NAME; \ 37 - } 29 + struct __struct_group_tag(TAG) { MEMBERS } ATTRS NAME; \ 30 + } ATTRS 38 31 39 32 /** 40 33 * __DECLARE_FLEX_ARRAY() - Declare a flexible array usable in a union
+1
tools/objtool/noreturns.h
··· 19 19 NORETURN(arch_cpu_idle_dead) 20 20 NORETURN(bch2_trans_in_restart_error) 21 21 NORETURN(bch2_trans_restart_error) 22 + NORETURN(bch2_trans_unlocked_error) 22 23 NORETURN(cpu_bringup_and_idle) 23 24 NORETURN(cpu_startup_entry) 24 25 NORETURN(do_exit)
+1 -1
tools/testing/selftests/alsa/Makefile
··· 27 27 $(OUTPUT)/libatest.so: conf.c alsa-local.h 28 28 $(CC) $(CFLAGS) -shared -fPIC $< $(LDLIBS) -o $@ 29 29 30 - $(OUTPUT)/%: %.c $(TEST_GEN_PROGS_EXTENDED) alsa-local.h 30 + $(OUTPUT)/%: %.c $(OUTPUT)/libatest.so alsa-local.h 31 31 $(CC) $(CFLAGS) $< $(LDLIBS) -latest -o $@
+394
tools/testing/selftests/bpf/prog_tests/socket_helpers.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + #ifndef __SOCKET_HELPERS__ 4 + #define __SOCKET_HELPERS__ 5 + 6 + #include <linux/vm_sockets.h> 7 + 8 + /* include/linux/net.h */ 9 + #define SOCK_TYPE_MASK 0xf 10 + 11 + #define IO_TIMEOUT_SEC 30 12 + #define MAX_STRERR_LEN 256 13 + 14 + /* workaround for older vm_sockets.h */ 15 + #ifndef VMADDR_CID_LOCAL 16 + #define VMADDR_CID_LOCAL 1 17 + #endif 18 + 19 + /* include/linux/cleanup.h */ 20 + #define __get_and_null(p, nullvalue) \ 21 + ({ \ 22 + __auto_type __ptr = &(p); \ 23 + __auto_type __val = *__ptr; \ 24 + *__ptr = nullvalue; \ 25 + __val; \ 26 + }) 27 + 28 + #define take_fd(fd) __get_and_null(fd, -EBADF) 29 + 30 + /* Wrappers that fail the test on error and report it. */ 31 + 32 + #define _FAIL(errnum, fmt...) \ 33 + ({ \ 34 + error_at_line(0, (errnum), __func__, __LINE__, fmt); \ 35 + CHECK_FAIL(true); \ 36 + }) 37 + #define FAIL(fmt...) _FAIL(0, fmt) 38 + #define FAIL_ERRNO(fmt...) _FAIL(errno, fmt) 39 + #define FAIL_LIBBPF(err, msg) \ 40 + ({ \ 41 + char __buf[MAX_STRERR_LEN]; \ 42 + libbpf_strerror((err), __buf, sizeof(__buf)); \ 43 + FAIL("%s: %s", (msg), __buf); \ 44 + }) 45 + 46 + 47 + #define xaccept_nonblock(fd, addr, len) \ 48 + ({ \ 49 + int __ret = \ 50 + accept_timeout((fd), (addr), (len), IO_TIMEOUT_SEC); \ 51 + if (__ret == -1) \ 52 + FAIL_ERRNO("accept"); \ 53 + __ret; \ 54 + }) 55 + 56 + #define xbind(fd, addr, len) \ 57 + ({ \ 58 + int __ret = bind((fd), (addr), (len)); \ 59 + if (__ret == -1) \ 60 + FAIL_ERRNO("bind"); \ 61 + __ret; \ 62 + }) 63 + 64 + #define xclose(fd) \ 65 + ({ \ 66 + int __ret = close((fd)); \ 67 + if (__ret == -1) \ 68 + FAIL_ERRNO("close"); \ 69 + __ret; \ 70 + }) 71 + 72 + #define xconnect(fd, addr, len) \ 73 + ({ \ 74 + int __ret = connect((fd), (addr), (len)); \ 75 + if (__ret == -1) \ 76 + FAIL_ERRNO("connect"); \ 77 + __ret; \ 78 + }) 79 + 80 + #define xgetsockname(fd, addr, len) \ 81 + ({ \ 82 + int __ret = getsockname((fd), (addr), (len)); \ 83 + if 
(__ret == -1) \ 84 + FAIL_ERRNO("getsockname"); \ 85 + __ret; \ 86 + }) 87 + 88 + #define xgetsockopt(fd, level, name, val, len) \ 89 + ({ \ 90 + int __ret = getsockopt((fd), (level), (name), (val), (len)); \ 91 + if (__ret == -1) \ 92 + FAIL_ERRNO("getsockopt(" #name ")"); \ 93 + __ret; \ 94 + }) 95 + 96 + #define xlisten(fd, backlog) \ 97 + ({ \ 98 + int __ret = listen((fd), (backlog)); \ 99 + if (__ret == -1) \ 100 + FAIL_ERRNO("listen"); \ 101 + __ret; \ 102 + }) 103 + 104 + #define xsetsockopt(fd, level, name, val, len) \ 105 + ({ \ 106 + int __ret = setsockopt((fd), (level), (name), (val), (len)); \ 107 + if (__ret == -1) \ 108 + FAIL_ERRNO("setsockopt(" #name ")"); \ 109 + __ret; \ 110 + }) 111 + 112 + #define xsend(fd, buf, len, flags) \ 113 + ({ \ 114 + ssize_t __ret = send((fd), (buf), (len), (flags)); \ 115 + if (__ret == -1) \ 116 + FAIL_ERRNO("send"); \ 117 + __ret; \ 118 + }) 119 + 120 + #define xrecv_nonblock(fd, buf, len, flags) \ 121 + ({ \ 122 + ssize_t __ret = recv_timeout((fd), (buf), (len), (flags), \ 123 + IO_TIMEOUT_SEC); \ 124 + if (__ret == -1) \ 125 + FAIL_ERRNO("recv"); \ 126 + __ret; \ 127 + }) 128 + 129 + #define xsocket(family, sotype, flags) \ 130 + ({ \ 131 + int __ret = socket(family, sotype, flags); \ 132 + if (__ret == -1) \ 133 + FAIL_ERRNO("socket"); \ 134 + __ret; \ 135 + }) 136 + 137 + static inline void close_fd(int *fd) 138 + { 139 + if (*fd >= 0) 140 + xclose(*fd); 141 + } 142 + 143 + #define __close_fd __attribute__((cleanup(close_fd))) 144 + 145 + static inline struct sockaddr *sockaddr(struct sockaddr_storage *ss) 146 + { 147 + return (struct sockaddr *)ss; 148 + } 149 + 150 + static inline void init_addr_loopback4(struct sockaddr_storage *ss, 151 + socklen_t *len) 152 + { 153 + struct sockaddr_in *addr4 = memset(ss, 0, sizeof(*ss)); 154 + 155 + addr4->sin_family = AF_INET; 156 + addr4->sin_port = 0; 157 + addr4->sin_addr.s_addr = htonl(INADDR_LOOPBACK); 158 + *len = sizeof(*addr4); 159 + } 160 + 161 + static inline void 
init_addr_loopback6(struct sockaddr_storage *ss, 162 + socklen_t *len) 163 + { 164 + struct sockaddr_in6 *addr6 = memset(ss, 0, sizeof(*ss)); 165 + 166 + addr6->sin6_family = AF_INET6; 167 + addr6->sin6_port = 0; 168 + addr6->sin6_addr = in6addr_loopback; 169 + *len = sizeof(*addr6); 170 + } 171 + 172 + static inline void init_addr_loopback_vsock(struct sockaddr_storage *ss, 173 + socklen_t *len) 174 + { 175 + struct sockaddr_vm *addr = memset(ss, 0, sizeof(*ss)); 176 + 177 + addr->svm_family = AF_VSOCK; 178 + addr->svm_port = VMADDR_PORT_ANY; 179 + addr->svm_cid = VMADDR_CID_LOCAL; 180 + *len = sizeof(*addr); 181 + } 182 + 183 + static inline void init_addr_loopback(int family, struct sockaddr_storage *ss, 184 + socklen_t *len) 185 + { 186 + switch (family) { 187 + case AF_INET: 188 + init_addr_loopback4(ss, len); 189 + return; 190 + case AF_INET6: 191 + init_addr_loopback6(ss, len); 192 + return; 193 + case AF_VSOCK: 194 + init_addr_loopback_vsock(ss, len); 195 + return; 196 + default: 197 + FAIL("unsupported address family %d", family); 198 + } 199 + } 200 + 201 + static inline int enable_reuseport(int s, int progfd) 202 + { 203 + int err, one = 1; 204 + 205 + err = xsetsockopt(s, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)); 206 + if (err) 207 + return -1; 208 + err = xsetsockopt(s, SOL_SOCKET, SO_ATTACH_REUSEPORT_EBPF, &progfd, 209 + sizeof(progfd)); 210 + if (err) 211 + return -1; 212 + 213 + return 0; 214 + } 215 + 216 + static inline int socket_loopback_reuseport(int family, int sotype, int progfd) 217 + { 218 + struct sockaddr_storage addr; 219 + socklen_t len = 0; 220 + int err, s; 221 + 222 + init_addr_loopback(family, &addr, &len); 223 + 224 + s = xsocket(family, sotype, 0); 225 + if (s == -1) 226 + return -1; 227 + 228 + if (progfd >= 0) 229 + enable_reuseport(s, progfd); 230 + 231 + err = xbind(s, sockaddr(&addr), len); 232 + if (err) 233 + goto close; 234 + 235 + if (sotype & SOCK_DGRAM) 236 + return s; 237 + 238 + err = xlisten(s, SOMAXCONN); 239 + 
if (err) 240 + goto close; 241 + 242 + return s; 243 + close: 244 + xclose(s); 245 + return -1; 246 + } 247 + 248 + static inline int socket_loopback(int family, int sotype) 249 + { 250 + return socket_loopback_reuseport(family, sotype, -1); 251 + } 252 + 253 + static inline int poll_connect(int fd, unsigned int timeout_sec) 254 + { 255 + struct timeval timeout = { .tv_sec = timeout_sec }; 256 + fd_set wfds; 257 + int r, eval; 258 + socklen_t esize = sizeof(eval); 259 + 260 + FD_ZERO(&wfds); 261 + FD_SET(fd, &wfds); 262 + 263 + r = select(fd + 1, NULL, &wfds, NULL, &timeout); 264 + if (r == 0) 265 + errno = ETIME; 266 + if (r != 1) 267 + return -1; 268 + 269 + if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &eval, &esize) < 0) 270 + return -1; 271 + if (eval != 0) { 272 + errno = eval; 273 + return -1; 274 + } 275 + 276 + return 0; 277 + } 278 + 279 + static inline int poll_read(int fd, unsigned int timeout_sec) 280 + { 281 + struct timeval timeout = { .tv_sec = timeout_sec }; 282 + fd_set rfds; 283 + int r; 284 + 285 + FD_ZERO(&rfds); 286 + FD_SET(fd, &rfds); 287 + 288 + r = select(fd + 1, &rfds, NULL, NULL, &timeout); 289 + if (r == 0) 290 + errno = ETIME; 291 + 292 + return r == 1 ? 
0 : -1; 293 + } 294 + 295 + static inline int accept_timeout(int fd, struct sockaddr *addr, socklen_t *len, 296 + unsigned int timeout_sec) 297 + { 298 + if (poll_read(fd, timeout_sec)) 299 + return -1; 300 + 301 + return accept(fd, addr, len); 302 + } 303 + 304 + static inline int recv_timeout(int fd, void *buf, size_t len, int flags, 305 + unsigned int timeout_sec) 306 + { 307 + if (poll_read(fd, timeout_sec)) 308 + return -1; 309 + 310 + return recv(fd, buf, len, flags); 311 + } 312 + 313 + 314 + static inline int create_pair(int family, int sotype, int *p0, int *p1) 315 + { 316 + __close_fd int s, c = -1, p = -1; 317 + struct sockaddr_storage addr; 318 + socklen_t len = sizeof(addr); 319 + int err; 320 + 321 + s = socket_loopback(family, sotype); 322 + if (s < 0) 323 + return s; 324 + 325 + err = xgetsockname(s, sockaddr(&addr), &len); 326 + if (err) 327 + return err; 328 + 329 + c = xsocket(family, sotype, 0); 330 + if (c < 0) 331 + return c; 332 + 333 + err = connect(c, sockaddr(&addr), len); 334 + if (err) { 335 + if (errno != EINPROGRESS) { 336 + FAIL_ERRNO("connect"); 337 + return err; 338 + } 339 + 340 + err = poll_connect(c, IO_TIMEOUT_SEC); 341 + if (err) { 342 + FAIL_ERRNO("poll_connect"); 343 + return err; 344 + } 345 + } 346 + 347 + switch (sotype & SOCK_TYPE_MASK) { 348 + case SOCK_DGRAM: 349 + err = xgetsockname(c, sockaddr(&addr), &len); 350 + if (err) 351 + return err; 352 + 353 + err = xconnect(s, sockaddr(&addr), len); 354 + if (err) 355 + return err; 356 + 357 + *p0 = take_fd(s); 358 + break; 359 + case SOCK_STREAM: 360 + case SOCK_SEQPACKET: 361 + p = xaccept_nonblock(s, NULL, NULL); 362 + if (p < 0) 363 + return p; 364 + 365 + *p0 = take_fd(p); 366 + break; 367 + default: 368 + FAIL("Unsupported socket type %#x", sotype); 369 + return -EOPNOTSUPP; 370 + } 371 + 372 + *p1 = take_fd(c); 373 + return 0; 374 + } 375 + 376 + static inline int create_socket_pairs(int family, int sotype, int *c0, int *c1, 377 + int *p0, int *p1) 378 + { 379 + int 
err; 380 + 381 + err = create_pair(family, sotype, c0, p0); 382 + if (err) 383 + return err; 384 + 385 + err = create_pair(family, sotype, c1, p1); 386 + if (err) { 387 + close(*c0); 388 + close(*p0); 389 + } 390 + 391 + return err; 392 + } 393 + 394 + #endif // __SOCKET_HELPERS__
+51
tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
··· 12 12 #include "test_sockmap_progs_query.skel.h" 13 13 #include "test_sockmap_pass_prog.skel.h" 14 14 #include "test_sockmap_drop_prog.skel.h" 15 + #include "test_sockmap_change_tail.skel.h" 15 16 #include "bpf_iter_sockmap.skel.h" 16 17 17 18 #include "sockmap_helpers.h" ··· 644 643 test_sockmap_drop_prog__destroy(drop); 645 644 } 646 645 646 + static void test_sockmap_skb_verdict_change_tail(void) 647 + { 648 + struct test_sockmap_change_tail *skel; 649 + int err, map, verdict; 650 + int c1, p1, sent, recvd; 651 + int zero = 0; 652 + char buf[2]; 653 + 654 + skel = test_sockmap_change_tail__open_and_load(); 655 + if (!ASSERT_OK_PTR(skel, "open_and_load")) 656 + return; 657 + verdict = bpf_program__fd(skel->progs.prog_skb_verdict); 658 + map = bpf_map__fd(skel->maps.sock_map_rx); 659 + 660 + err = bpf_prog_attach(verdict, map, BPF_SK_SKB_STREAM_VERDICT, 0); 661 + if (!ASSERT_OK(err, "bpf_prog_attach")) 662 + goto out; 663 + err = create_pair(AF_INET, SOCK_STREAM, &c1, &p1); 664 + if (!ASSERT_OK(err, "create_pair()")) 665 + goto out; 666 + err = bpf_map_update_elem(map, &zero, &c1, BPF_NOEXIST); 667 + if (!ASSERT_OK(err, "bpf_map_update_elem(c1)")) 668 + goto out_close; 669 + sent = xsend(p1, "Tr", 2, 0); 670 + ASSERT_EQ(sent, 2, "xsend(p1)"); 671 + recvd = recv(c1, buf, 2, 0); 672 + ASSERT_EQ(recvd, 1, "recv(c1)"); 673 + ASSERT_EQ(skel->data->change_tail_ret, 0, "change_tail_ret"); 674 + 675 + sent = xsend(p1, "G", 1, 0); 676 + ASSERT_EQ(sent, 1, "xsend(p1)"); 677 + recvd = recv(c1, buf, 2, 0); 678 + ASSERT_EQ(recvd, 2, "recv(c1)"); 679 + ASSERT_EQ(skel->data->change_tail_ret, 0, "change_tail_ret"); 680 + 681 + sent = xsend(p1, "E", 1, 0); 682 + ASSERT_EQ(sent, 1, "xsend(p1)"); 683 + recvd = recv(c1, buf, 1, 0); 684 + ASSERT_EQ(recvd, 1, "recv(c1)"); 685 + ASSERT_EQ(skel->data->change_tail_ret, -EINVAL, "change_tail_ret"); 686 + 687 + out_close: 688 + close(c1); 689 + close(p1); 690 + out: 691 + test_sockmap_change_tail__destroy(skel); 692 + } 693 + 647 694 
static void test_sockmap_skb_verdict_peek_helper(int map) 648 695 { 649 696 int err, c1, p1, zero = 0, sent, recvd, avail; ··· 1107 1058 test_sockmap_skb_verdict_fionread(true); 1108 1059 if (test__start_subtest("sockmap skb_verdict fionread on drop")) 1109 1060 test_sockmap_skb_verdict_fionread(false); 1061 + if (test__start_subtest("sockmap skb_verdict change tail")) 1062 + test_sockmap_skb_verdict_change_tail(); 1110 1063 if (test__start_subtest("sockmap skb_verdict msg_f_peek")) 1111 1064 test_sockmap_skb_verdict_peek(); 1112 1065 if (test__start_subtest("sockmap skb_verdict msg_f_peek with link"))
+1 -384
tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h
··· 1 1 #ifndef __SOCKMAP_HELPERS__ 2 2 #define __SOCKMAP_HELPERS__ 3 3 4 - #include <linux/vm_sockets.h> 4 + #include "socket_helpers.h" 5 5 6 - /* include/linux/net.h */ 7 - #define SOCK_TYPE_MASK 0xf 8 - 9 - #define IO_TIMEOUT_SEC 30 10 - #define MAX_STRERR_LEN 256 11 6 #define MAX_TEST_NAME 80 12 7 13 - /* workaround for older vm_sockets.h */ 14 - #ifndef VMADDR_CID_LOCAL 15 - #define VMADDR_CID_LOCAL 1 16 - #endif 17 - 18 8 #define __always_unused __attribute__((__unused__)) 19 - 20 - /* include/linux/cleanup.h */ 21 - #define __get_and_null(p, nullvalue) \ 22 - ({ \ 23 - __auto_type __ptr = &(p); \ 24 - __auto_type __val = *__ptr; \ 25 - *__ptr = nullvalue; \ 26 - __val; \ 27 - }) 28 - 29 - #define take_fd(fd) __get_and_null(fd, -EBADF) 30 - 31 - #define _FAIL(errnum, fmt...) \ 32 - ({ \ 33 - error_at_line(0, (errnum), __func__, __LINE__, fmt); \ 34 - CHECK_FAIL(true); \ 35 - }) 36 - #define FAIL(fmt...) _FAIL(0, fmt) 37 - #define FAIL_ERRNO(fmt...) _FAIL(errno, fmt) 38 - #define FAIL_LIBBPF(err, msg) \ 39 - ({ \ 40 - char __buf[MAX_STRERR_LEN]; \ 41 - libbpf_strerror((err), __buf, sizeof(__buf)); \ 42 - FAIL("%s: %s", (msg), __buf); \ 43 - }) 44 - 45 - /* Wrappers that fail the test on error and report it. 
*/ 46 - 47 - #define xaccept_nonblock(fd, addr, len) \ 48 - ({ \ 49 - int __ret = \ 50 - accept_timeout((fd), (addr), (len), IO_TIMEOUT_SEC); \ 51 - if (__ret == -1) \ 52 - FAIL_ERRNO("accept"); \ 53 - __ret; \ 54 - }) 55 - 56 - #define xbind(fd, addr, len) \ 57 - ({ \ 58 - int __ret = bind((fd), (addr), (len)); \ 59 - if (__ret == -1) \ 60 - FAIL_ERRNO("bind"); \ 61 - __ret; \ 62 - }) 63 - 64 - #define xclose(fd) \ 65 - ({ \ 66 - int __ret = close((fd)); \ 67 - if (__ret == -1) \ 68 - FAIL_ERRNO("close"); \ 69 - __ret; \ 70 - }) 71 - 72 - #define xconnect(fd, addr, len) \ 73 - ({ \ 74 - int __ret = connect((fd), (addr), (len)); \ 75 - if (__ret == -1) \ 76 - FAIL_ERRNO("connect"); \ 77 - __ret; \ 78 - }) 79 - 80 - #define xgetsockname(fd, addr, len) \ 81 - ({ \ 82 - int __ret = getsockname((fd), (addr), (len)); \ 83 - if (__ret == -1) \ 84 - FAIL_ERRNO("getsockname"); \ 85 - __ret; \ 86 - }) 87 - 88 - #define xgetsockopt(fd, level, name, val, len) \ 89 - ({ \ 90 - int __ret = getsockopt((fd), (level), (name), (val), (len)); \ 91 - if (__ret == -1) \ 92 - FAIL_ERRNO("getsockopt(" #name ")"); \ 93 - __ret; \ 94 - }) 95 - 96 - #define xlisten(fd, backlog) \ 97 - ({ \ 98 - int __ret = listen((fd), (backlog)); \ 99 - if (__ret == -1) \ 100 - FAIL_ERRNO("listen"); \ 101 - __ret; \ 102 - }) 103 - 104 - #define xsetsockopt(fd, level, name, val, len) \ 105 - ({ \ 106 - int __ret = setsockopt((fd), (level), (name), (val), (len)); \ 107 - if (__ret == -1) \ 108 - FAIL_ERRNO("setsockopt(" #name ")"); \ 109 - __ret; \ 110 - }) 111 - 112 - #define xsend(fd, buf, len, flags) \ 113 - ({ \ 114 - ssize_t __ret = send((fd), (buf), (len), (flags)); \ 115 - if (__ret == -1) \ 116 - FAIL_ERRNO("send"); \ 117 - __ret; \ 118 - }) 119 - 120 - #define xrecv_nonblock(fd, buf, len, flags) \ 121 - ({ \ 122 - ssize_t __ret = recv_timeout((fd), (buf), (len), (flags), \ 123 - IO_TIMEOUT_SEC); \ 124 - if (__ret == -1) \ 125 - FAIL_ERRNO("recv"); \ 126 - __ret; \ 127 - }) 128 - 129 - #define 
xsocket(family, sotype, flags) \ 130 - ({ \ 131 - int __ret = socket(family, sotype, flags); \ 132 - if (__ret == -1) \ 133 - FAIL_ERRNO("socket"); \ 134 - __ret; \ 135 - }) 136 9 137 10 #define xbpf_map_delete_elem(fd, key) \ 138 11 ({ \ ··· 66 193 __ret; \ 67 194 }) 68 195 69 - static inline void close_fd(int *fd) 70 - { 71 - if (*fd >= 0) 72 - xclose(*fd); 73 - } 74 - 75 - #define __close_fd __attribute__((cleanup(close_fd))) 76 - 77 - static inline int poll_connect(int fd, unsigned int timeout_sec) 78 - { 79 - struct timeval timeout = { .tv_sec = timeout_sec }; 80 - fd_set wfds; 81 - int r, eval; 82 - socklen_t esize = sizeof(eval); 83 - 84 - FD_ZERO(&wfds); 85 - FD_SET(fd, &wfds); 86 - 87 - r = select(fd + 1, NULL, &wfds, NULL, &timeout); 88 - if (r == 0) 89 - errno = ETIME; 90 - if (r != 1) 91 - return -1; 92 - 93 - if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &eval, &esize) < 0) 94 - return -1; 95 - if (eval != 0) { 96 - errno = eval; 97 - return -1; 98 - } 99 - 100 - return 0; 101 - } 102 - 103 - static inline int poll_read(int fd, unsigned int timeout_sec) 104 - { 105 - struct timeval timeout = { .tv_sec = timeout_sec }; 106 - fd_set rfds; 107 - int r; 108 - 109 - FD_ZERO(&rfds); 110 - FD_SET(fd, &rfds); 111 - 112 - r = select(fd + 1, &rfds, NULL, NULL, &timeout); 113 - if (r == 0) 114 - errno = ETIME; 115 - 116 - return r == 1 ? 
0 : -1; 117 - } 118 - 119 - static inline int accept_timeout(int fd, struct sockaddr *addr, socklen_t *len, 120 - unsigned int timeout_sec) 121 - { 122 - if (poll_read(fd, timeout_sec)) 123 - return -1; 124 - 125 - return accept(fd, addr, len); 126 - } 127 - 128 - static inline int recv_timeout(int fd, void *buf, size_t len, int flags, 129 - unsigned int timeout_sec) 130 - { 131 - if (poll_read(fd, timeout_sec)) 132 - return -1; 133 - 134 - return recv(fd, buf, len, flags); 135 - } 136 - 137 - static inline void init_addr_loopback4(struct sockaddr_storage *ss, 138 - socklen_t *len) 139 - { 140 - struct sockaddr_in *addr4 = memset(ss, 0, sizeof(*ss)); 141 - 142 - addr4->sin_family = AF_INET; 143 - addr4->sin_port = 0; 144 - addr4->sin_addr.s_addr = htonl(INADDR_LOOPBACK); 145 - *len = sizeof(*addr4); 146 - } 147 - 148 - static inline void init_addr_loopback6(struct sockaddr_storage *ss, 149 - socklen_t *len) 150 - { 151 - struct sockaddr_in6 *addr6 = memset(ss, 0, sizeof(*ss)); 152 - 153 - addr6->sin6_family = AF_INET6; 154 - addr6->sin6_port = 0; 155 - addr6->sin6_addr = in6addr_loopback; 156 - *len = sizeof(*addr6); 157 - } 158 - 159 - static inline void init_addr_loopback_vsock(struct sockaddr_storage *ss, 160 - socklen_t *len) 161 - { 162 - struct sockaddr_vm *addr = memset(ss, 0, sizeof(*ss)); 163 - 164 - addr->svm_family = AF_VSOCK; 165 - addr->svm_port = VMADDR_PORT_ANY; 166 - addr->svm_cid = VMADDR_CID_LOCAL; 167 - *len = sizeof(*addr); 168 - } 169 - 170 - static inline void init_addr_loopback(int family, struct sockaddr_storage *ss, 171 - socklen_t *len) 172 - { 173 - switch (family) { 174 - case AF_INET: 175 - init_addr_loopback4(ss, len); 176 - return; 177 - case AF_INET6: 178 - init_addr_loopback6(ss, len); 179 - return; 180 - case AF_VSOCK: 181 - init_addr_loopback_vsock(ss, len); 182 - return; 183 - default: 184 - FAIL("unsupported address family %d", family); 185 - } 186 - } 187 - 188 - static inline struct sockaddr *sockaddr(struct sockaddr_storage 
*ss) 189 - { 190 - return (struct sockaddr *)ss; 191 - } 192 - 193 196 static inline int add_to_sockmap(int sock_mapfd, int fd1, int fd2) 194 197 { 195 198 u64 value; ··· 81 332 key = 1; 82 333 value = fd2; 83 334 return xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST); 84 - } 85 - 86 - static inline int enable_reuseport(int s, int progfd) 87 - { 88 - int err, one = 1; 89 - 90 - err = xsetsockopt(s, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)); 91 - if (err) 92 - return -1; 93 - err = xsetsockopt(s, SOL_SOCKET, SO_ATTACH_REUSEPORT_EBPF, &progfd, 94 - sizeof(progfd)); 95 - if (err) 96 - return -1; 97 - 98 - return 0; 99 - } 100 - 101 - static inline int socket_loopback_reuseport(int family, int sotype, int progfd) 102 - { 103 - struct sockaddr_storage addr; 104 - socklen_t len = 0; 105 - int err, s; 106 - 107 - init_addr_loopback(family, &addr, &len); 108 - 109 - s = xsocket(family, sotype, 0); 110 - if (s == -1) 111 - return -1; 112 - 113 - if (progfd >= 0) 114 - enable_reuseport(s, progfd); 115 - 116 - err = xbind(s, sockaddr(&addr), len); 117 - if (err) 118 - goto close; 119 - 120 - if (sotype & SOCK_DGRAM) 121 - return s; 122 - 123 - err = xlisten(s, SOMAXCONN); 124 - if (err) 125 - goto close; 126 - 127 - return s; 128 - close: 129 - xclose(s); 130 - return -1; 131 - } 132 - 133 - static inline int socket_loopback(int family, int sotype) 134 - { 135 - return socket_loopback_reuseport(family, sotype, -1); 136 - } 137 - 138 - static inline int create_pair(int family, int sotype, int *p0, int *p1) 139 - { 140 - __close_fd int s, c = -1, p = -1; 141 - struct sockaddr_storage addr; 142 - socklen_t len = sizeof(addr); 143 - int err; 144 - 145 - s = socket_loopback(family, sotype); 146 - if (s < 0) 147 - return s; 148 - 149 - err = xgetsockname(s, sockaddr(&addr), &len); 150 - if (err) 151 - return err; 152 - 153 - c = xsocket(family, sotype, 0); 154 - if (c < 0) 155 - return c; 156 - 157 - err = connect(c, sockaddr(&addr), len); 158 - if (err) { 159 - if 
(errno != EINPROGRESS) { 160 - FAIL_ERRNO("connect"); 161 - return err; 162 - } 163 - 164 - err = poll_connect(c, IO_TIMEOUT_SEC); 165 - if (err) { 166 - FAIL_ERRNO("poll_connect"); 167 - return err; 168 - } 169 - } 170 - 171 - switch (sotype & SOCK_TYPE_MASK) { 172 - case SOCK_DGRAM: 173 - err = xgetsockname(c, sockaddr(&addr), &len); 174 - if (err) 175 - return err; 176 - 177 - err = xconnect(s, sockaddr(&addr), len); 178 - if (err) 179 - return err; 180 - 181 - *p0 = take_fd(s); 182 - break; 183 - case SOCK_STREAM: 184 - case SOCK_SEQPACKET: 185 - p = xaccept_nonblock(s, NULL, NULL); 186 - if (p < 0) 187 - return p; 188 - 189 - *p0 = take_fd(p); 190 - break; 191 - default: 192 - FAIL("Unsupported socket type %#x", sotype); 193 - return -EOPNOTSUPP; 194 - } 195 - 196 - *p1 = take_fd(c); 197 - return 0; 198 - } 199 - 200 - static inline int create_socket_pairs(int family, int sotype, int *c0, int *c1, 201 - int *p0, int *p1) 202 - { 203 - int err; 204 - 205 - err = create_pair(family, sotype, c0, p0); 206 - if (err) 207 - return err; 208 - 209 - err = create_pair(family, sotype, c1, p1); 210 - if (err) { 211 - close(*c0); 212 - close(*p0); 213 - } 214 - 215 - return err; 216 335 } 217 336 218 337 #endif // __SOCKMAP_HELPERS__
+62
tools/testing/selftests/bpf/prog_tests/tc_change_tail.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <error.h> 3 + #include <test_progs.h> 4 + #include <linux/pkt_cls.h> 5 + 6 + #include "test_tc_change_tail.skel.h" 7 + #include "socket_helpers.h" 8 + 9 + #define LO_IFINDEX 1 10 + 11 + void test_tc_change_tail(void) 12 + { 13 + LIBBPF_OPTS(bpf_tcx_opts, tcx_opts); 14 + struct test_tc_change_tail *skel = NULL; 15 + struct bpf_link *link; 16 + int c1, p1; 17 + char buf[2]; 18 + int ret; 19 + 20 + skel = test_tc_change_tail__open_and_load(); 21 + if (!ASSERT_OK_PTR(skel, "test_tc_change_tail__open_and_load")) 22 + return; 23 + 24 + link = bpf_program__attach_tcx(skel->progs.change_tail, LO_IFINDEX, 25 + &tcx_opts); 26 + if (!ASSERT_OK_PTR(link, "bpf_program__attach_tcx")) 27 + goto destroy; 28 + 29 + skel->links.change_tail = link; 30 + ret = create_pair(AF_INET, SOCK_DGRAM, &c1, &p1); 31 + if (!ASSERT_OK(ret, "create_pair")) 32 + goto destroy; 33 + 34 + ret = xsend(p1, "Tr", 2, 0); 35 + ASSERT_EQ(ret, 2, "xsend(p1)"); 36 + ret = recv(c1, buf, 2, 0); 37 + ASSERT_EQ(ret, 2, "recv(c1)"); 38 + ASSERT_EQ(skel->data->change_tail_ret, 0, "change_tail_ret"); 39 + 40 + ret = xsend(p1, "G", 1, 0); 41 + ASSERT_EQ(ret, 1, "xsend(p1)"); 42 + ret = recv(c1, buf, 2, 0); 43 + ASSERT_EQ(ret, 1, "recv(c1)"); 44 + ASSERT_EQ(skel->data->change_tail_ret, 0, "change_tail_ret"); 45 + 46 + ret = xsend(p1, "E", 1, 0); 47 + ASSERT_EQ(ret, 1, "xsend(p1)"); 48 + ret = recv(c1, buf, 1, 0); 49 + ASSERT_EQ(ret, 1, "recv(c1)"); 50 + ASSERT_EQ(skel->data->change_tail_ret, -EINVAL, "change_tail_ret"); 51 + 52 + ret = xsend(p1, "Z", 1, 0); 53 + ASSERT_EQ(ret, 1, "xsend(p1)"); 54 + ret = recv(c1, buf, 1, 0); 55 + ASSERT_EQ(ret, 1, "recv(c1)"); 56 + ASSERT_EQ(skel->data->change_tail_ret, -EINVAL, "change_tail_ret"); 57 + 58 + close(c1); 59 + close(p1); 60 + destroy: 61 + test_tc_change_tail__destroy(skel); 62 + }
+40
tools/testing/selftests/bpf/progs/test_sockmap_change_tail.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2024 ByteDance */ 3 + #include <linux/bpf.h> 4 + #include <bpf/bpf_helpers.h> 5 + 6 + struct { 7 + __uint(type, BPF_MAP_TYPE_SOCKMAP); 8 + __uint(max_entries, 1); 9 + __type(key, int); 10 + __type(value, int); 11 + } sock_map_rx SEC(".maps"); 12 + 13 + long change_tail_ret = 1; 14 + 15 + SEC("sk_skb") 16 + int prog_skb_verdict(struct __sk_buff *skb) 17 + { 18 + char *data, *data_end; 19 + 20 + bpf_skb_pull_data(skb, 1); 21 + data = (char *)(unsigned long)skb->data; 22 + data_end = (char *)(unsigned long)skb->data_end; 23 + 24 + if (data + 1 > data_end) 25 + return SK_PASS; 26 + 27 + if (data[0] == 'T') { /* Trim the packet */ 28 + change_tail_ret = bpf_skb_change_tail(skb, skb->len - 1, 0); 29 + return SK_PASS; 30 + } else if (data[0] == 'G') { /* Grow the packet */ 31 + change_tail_ret = bpf_skb_change_tail(skb, skb->len + 1, 0); 32 + return SK_PASS; 33 + } else if (data[0] == 'E') { /* Error */ 34 + change_tail_ret = bpf_skb_change_tail(skb, 65535, 0); 35 + return SK_PASS; 36 + } 37 + return SK_PASS; 38 + } 39 + 40 + char _license[] SEC("license") = "GPL";
+106
tools/testing/selftests/bpf/progs/test_tc_change_tail.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/bpf.h> 3 + #include <bpf/bpf_helpers.h> 4 + #include <linux/if_ether.h> 5 + #include <linux/in.h> 6 + #include <linux/ip.h> 7 + #include <linux/udp.h> 8 + #include <linux/pkt_cls.h> 9 + 10 + long change_tail_ret = 1; 11 + 12 + static __always_inline struct iphdr *parse_ip_header(struct __sk_buff *skb, int *ip_proto) 13 + { 14 + void *data_end = (void *)(long)skb->data_end; 15 + void *data = (void *)(long)skb->data; 16 + struct ethhdr *eth = data; 17 + struct iphdr *iph; 18 + 19 + /* Verify Ethernet header */ 20 + if ((void *)(data + sizeof(*eth)) > data_end) 21 + return NULL; 22 + 23 + /* Skip Ethernet header to get to IP header */ 24 + iph = (void *)(data + sizeof(struct ethhdr)); 25 + 26 + /* Verify IP header */ 27 + if ((void *)(data + sizeof(struct ethhdr) + sizeof(*iph)) > data_end) 28 + return NULL; 29 + 30 + /* Basic IP header validation */ 31 + if (iph->version != 4) /* Only support IPv4 */ 32 + return NULL; 33 + 34 + if (iph->ihl < 5) /* Minimum IP header length */ 35 + return NULL; 36 + 37 + *ip_proto = iph->protocol; 38 + return iph; 39 + } 40 + 41 + static __always_inline struct udphdr *parse_udp_header(struct __sk_buff *skb, struct iphdr *iph) 42 + { 43 + void *data_end = (void *)(long)skb->data_end; 44 + void *hdr = (void *)iph; 45 + struct udphdr *udp; 46 + 47 + /* Calculate UDP header position */ 48 + udp = hdr + (iph->ihl * 4); 49 + hdr = (void *)udp; 50 + 51 + /* Verify UDP header bounds */ 52 + if ((void *)(hdr + sizeof(*udp)) > data_end) 53 + return NULL; 54 + 55 + return udp; 56 + } 57 + 58 + SEC("tc/ingress") 59 + int change_tail(struct __sk_buff *skb) 60 + { 61 + int len = skb->len; 62 + struct udphdr *udp; 63 + struct iphdr *iph; 64 + void *data_end; 65 + char *payload; 66 + int ip_proto; 67 + 68 + bpf_skb_pull_data(skb, len); 69 + 70 + data_end = (void *)(long)skb->data_end; 71 + iph = parse_ip_header(skb, &ip_proto); 72 + if (!iph) 73 + return TCX_PASS; 74 + 75 + if (ip_proto != IPPROTO_UDP) 76 + return TCX_PASS; 77 + 78 + udp = parse_udp_header(skb, iph); 79 + if (!udp) 80 + return TCX_PASS; 81 + 82 + payload = (char *)udp + (sizeof(struct udphdr)); 83 + if (payload + 1 > (char *)data_end) 84 + return TCX_PASS; 85 + 86 + if (payload[0] == 'T') { /* Trim the packet */ 87 + change_tail_ret = bpf_skb_change_tail(skb, len - 1, 0); 88 + if (!change_tail_ret) 89 + bpf_skb_change_tail(skb, len, 0); 90 + return TCX_PASS; 91 + } else if (payload[0] == 'G') { /* Grow the packet */ 92 + change_tail_ret = bpf_skb_change_tail(skb, len + 1, 0); 93 + if (!change_tail_ret) 94 + bpf_skb_change_tail(skb, len, 0); 95 + return TCX_PASS; 96 + } else if (payload[0] == 'E') { /* Error */ 97 + change_tail_ret = bpf_skb_change_tail(skb, 65535, 0); 98 + return TCX_PASS; 99 + } else if (payload[0] == 'Z') { /* Zero */ 100 + change_tail_ret = bpf_skb_change_tail(skb, 0, 0); 101 + return TCX_PASS; 102 + } 103 + return TCX_DROP; 104 + } 105 + 106 + char _license[] SEC("license") = "GPL";
+2
tools/testing/selftests/bpf/sdt.h
··· 102 102 # define STAP_SDT_ARG_CONSTRAINT nZr 103 103 # elif defined __arm__ 104 104 # define STAP_SDT_ARG_CONSTRAINT g 105 + # elif defined __loongarch__ 106 + # define STAP_SDT_ARG_CONSTRAINT nmr 105 107 # else 106 108 # define STAP_SDT_ARG_CONSTRAINT nor 107 109 # endif
+4
tools/testing/selftests/bpf/trace_helpers.c
··· 293 293 return 0; 294 294 } 295 295 #else 296 + # ifndef PROCMAP_QUERY_VMA_EXECUTABLE 297 + # define PROCMAP_QUERY_VMA_EXECUTABLE 0x04 298 + # endif 299 + 296 300 static int procmap_query(int fd, const void *addr, __u32 query_flags, size_t *start, size_t *offset, int *flags) 297 301 { 298 302 return -EOPNOTSUPP;
+24 -4
tools/testing/selftests/drivers/net/queues.py
··· 1 1 #!/usr/bin/env python3 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 - from lib.py import ksft_run, ksft_exit, ksft_eq, KsftSkipEx 5 - from lib.py import EthtoolFamily, NetdevFamily 4 + from lib.py import ksft_disruptive, ksft_exit, ksft_run 5 + from lib.py import ksft_eq, ksft_raises, KsftSkipEx 6 + from lib.py import EthtoolFamily, NetdevFamily, NlError 6 7 from lib.py import NetDrvEnv 7 - from lib.py import cmd 8 + from lib.py import cmd, defer, ip 9 + import errno 8 10 import glob 9 11 10 12 ··· 61 59 ksft_eq(queues, expected) 62 60 63 61 62 + @ksft_disruptive 63 + def check_down(cfg, nl) -> None: 64 + # Check the NAPI IDs before interface goes down and hides them 65 + napis = nl.napi_get({'ifindex': cfg.ifindex}, dump=True) 66 + 67 + ip(f"link set dev {cfg.dev['ifname']} down") 68 + defer(ip, f"link set dev {cfg.dev['ifname']} up") 69 + 70 + with ksft_raises(NlError) as cm: 71 + nl.queue_get({'ifindex': cfg.ifindex, 'id': 0, 'type': 'rx'}) 72 + ksft_eq(cm.exception.nl_msg.error, -errno.ENOENT) 73 + 74 + if napis: 75 + with ksft_raises(NlError) as cm: 76 + nl.napi_get({'id': napis[0]['id']}) 77 + ksft_eq(cm.exception.nl_msg.error, -errno.ENOENT) 78 + 79 + 64 80 def main() -> None: 65 81 with NetDrvEnv(__file__, queue_count=100) as cfg: 66 - ksft_run([get_queues, addremove_queues], args=(cfg, NetdevFamily())) 82 + ksft_run([get_queues, addremove_queues, check_down], args=(cfg, NetdevFamily())) 67 83 ksft_exit() 68 84 69 85
+12 -2
tools/testing/selftests/memfd/memfd_test.c
··· 9 9 #include <fcntl.h> 10 10 #include <linux/memfd.h> 11 11 #include <sched.h> 12 + #include <stdbool.h> 12 13 #include <stdio.h> 13 14 #include <stdlib.h> 14 15 #include <signal.h> ··· 1558 1557 close(fd); 1559 1558 } 1560 1559 1560 + static bool pid_ns_supported(void) 1561 + { 1562 + return access("/proc/self/ns/pid", F_OK) == 0; 1563 + } 1564 + 1561 1565 int main(int argc, char **argv) 1562 1566 { 1563 1567 pid_t pid; ··· 1597 1591 test_seal_grow(); 1598 1592 test_seal_resize(); 1599 1593 1600 - test_sysctl_simple(); 1601 - test_sysctl_nested(); 1594 + if (pid_ns_supported()) { 1595 + test_sysctl_simple(); 1596 + test_sysctl_nested(); 1597 + } else { 1598 + printf("PID namespaces are not supported; skipping sysctl tests\n"); 1599 + } 1602 1600 1603 1601 test_share_dup("SHARE-DUP", ""); 1604 1602 test_share_mmap("SHARE-MMAP", "");
-1
tools/testing/selftests/net/forwarding/local_termination.sh
··· 7 7 NUM_NETIFS=2 8 8 PING_COUNT=1 9 9 REQUIRE_MTOOLS=yes 10 - REQUIRE_MZ=no 11 10 12 11 source lib.sh 13 12
+96 -81
tools/tracing/rtla/src/timerlat_hist.c
··· 282 282 } 283 283 284 284 /* 285 + * format_summary_value - format a line of summary value (min, max or avg) 286 + * of hist data 287 + */ 288 + static void format_summary_value(struct trace_seq *seq, 289 + int count, 290 + unsigned long long val, 291 + bool avg) 292 + { 293 + if (count) 294 + trace_seq_printf(seq, "%9llu ", avg ? val / count : val); 295 + else 296 + trace_seq_printf(seq, "%9c ", '-'); 297 + } 298 + 299 + /* 285 300 * timerlat_print_summary - print the summary of the hist data to the output 286 301 */ 287 302 static void ··· 343 328 if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count) 344 329 continue; 345 330 346 - if (!params->no_irq) { 347 - if (data->hist[cpu].irq_count) 348 - trace_seq_printf(trace->seq, "%9llu ", 349 - data->hist[cpu].min_irq); 350 - else 351 - trace_seq_printf(trace->seq, " - "); 352 - } 331 + if (!params->no_irq) 332 + format_summary_value(trace->seq, 333 + data->hist[cpu].irq_count, 334 + data->hist[cpu].min_irq, 335 + false); 353 336 354 - if (!params->no_thread) { 355 - if (data->hist[cpu].thread_count) 356 - trace_seq_printf(trace->seq, "%9llu ", 357 - data->hist[cpu].min_thread); 358 - else 359 - trace_seq_printf(trace->seq, " - "); 360 - } 337 + if (!params->no_thread) 338 + format_summary_value(trace->seq, 339 + data->hist[cpu].thread_count, 340 + data->hist[cpu].min_thread, 341 + false); 361 342 362 - if (params->user_hist) { 363 - if (data->hist[cpu].user_count) 364 - trace_seq_printf(trace->seq, "%9llu ", 365 - data->hist[cpu].min_user); 366 - else 367 - trace_seq_printf(trace->seq, " - "); 368 - } 343 + if (params->user_hist) 344 + format_summary_value(trace->seq, 345 + data->hist[cpu].user_count, 346 + data->hist[cpu].min_user, 347 + false); 369 348 } 370 349 trace_seq_printf(trace->seq, "\n"); 371 350 ··· 373 364 if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count) 374 365 continue; 375 366 376 - if (!params->no_irq) { 377 - if (data->hist[cpu].irq_count) 378 - trace_seq_printf(trace->seq, "%9llu ", 379 - data->hist[cpu].sum_irq / data->hist[cpu].irq_count); 380 - else 381 - trace_seq_printf(trace->seq, " - "); 382 - } 367 + if (!params->no_irq) 368 + format_summary_value(trace->seq, 369 + data->hist[cpu].irq_count, 370 + data->hist[cpu].sum_irq, 371 + true); 383 372 384 - if (!params->no_thread) { 385 - if (data->hist[cpu].thread_count) 386 - trace_seq_printf(trace->seq, "%9llu ", 387 - data->hist[cpu].sum_thread / data->hist[cpu].thread_count); 388 - else 389 - trace_seq_printf(trace->seq, " - "); 390 - } 373 + if (!params->no_thread) 374 + format_summary_value(trace->seq, 375 + data->hist[cpu].thread_count, 376 + data->hist[cpu].sum_thread, 377 + true); 391 378 392 - if (params->user_hist) { 393 - if (data->hist[cpu].user_count) 394 - trace_seq_printf(trace->seq, "%9llu ", 395 - data->hist[cpu].sum_user / data->hist[cpu].user_count); 396 - else 397 - trace_seq_printf(trace->seq, " - "); 398 - } 379 + if (params->user_hist) 380 + format_summary_value(trace->seq, 381 + data->hist[cpu].user_count, 382 + data->hist[cpu].sum_user, 383 + true); 399 384 } 400 385 trace_seq_printf(trace->seq, "\n"); 401 386 ··· 403 400 if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count) 404 401 continue; 405 402 406 - if (!params->no_irq) { 407 - if (data->hist[cpu].irq_count) 408 - trace_seq_printf(trace->seq, "%9llu ", 409 - data->hist[cpu].max_irq); 410 - else 411 - trace_seq_printf(trace->seq, " - "); 412 - } 403 + if (!params->no_irq) 404 + format_summary_value(trace->seq, 405 + data->hist[cpu].irq_count, 406 + data->hist[cpu].max_irq, 407 + false); 413 408 414 - if (!params->no_thread) { 415 - if (data->hist[cpu].thread_count) 416 - trace_seq_printf(trace->seq, "%9llu ", 417 - data->hist[cpu].max_thread); 418 - else 419 - trace_seq_printf(trace->seq, " - "); 420 - } 409 + if (!params->no_thread) 410 + format_summary_value(trace->seq, 411 + data->hist[cpu].thread_count, 412 + data->hist[cpu].max_thread, 413 + false); 421 414 
422 - if (params->user_hist) { 423 - if (data->hist[cpu].user_count) 424 - trace_seq_printf(trace->seq, "%9llu ", 425 - data->hist[cpu].max_user); 426 - else 427 - trace_seq_printf(trace->seq, " - "); 428 - } 415 + if (params->user_hist) 416 + format_summary_value(trace->seq, 417 + data->hist[cpu].user_count, 418 + data->hist[cpu].max_user, 419 + false); 429 420 } 430 421 trace_seq_printf(trace->seq, "\n"); 431 422 trace_seq_do_printf(trace->seq); ··· 503 506 trace_seq_printf(trace->seq, "min: "); 504 507 505 508 if (!params->no_irq) 506 - trace_seq_printf(trace->seq, "%9llu ", 507 - sum.min_irq); 509 + format_summary_value(trace->seq, 510 + sum.irq_count, 511 + sum.min_irq, 512 + false); 508 513 509 514 if (!params->no_thread) 510 - trace_seq_printf(trace->seq, "%9llu ", 511 - sum.min_thread); 515 + format_summary_value(trace->seq, 516 + sum.thread_count, 517 + sum.min_thread, 518 + false); 512 519 513 520 if (params->user_hist) 514 - trace_seq_printf(trace->seq, "%9llu ", 515 - sum.min_user); 521 + format_summary_value(trace->seq, 522 + sum.user_count, 523 + sum.min_user, 524 + false); 516 525 517 526 trace_seq_printf(trace->seq, "\n"); 518 527 ··· 526 523 trace_seq_printf(trace->seq, "avg: "); 527 524 528 525 if (!params->no_irq) 529 - trace_seq_printf(trace->seq, "%9llu ", 530 - sum.sum_irq / sum.irq_count); 526 + format_summary_value(trace->seq, 527 + sum.irq_count, 528 + sum.sum_irq, 529 + true); 531 530 532 531 if (!params->no_thread) 533 - trace_seq_printf(trace->seq, "%9llu ", 534 - sum.sum_thread / sum.thread_count); 532 + format_summary_value(trace->seq, 533 + sum.thread_count, 534 + sum.sum_thread, 535 + true); 535 536 536 537 if (params->user_hist) 537 - trace_seq_printf(trace->seq, "%9llu ", 538 - sum.sum_user / sum.user_count); 538 + format_summary_value(trace->seq, 539 + sum.user_count, 540 + sum.sum_user, 541 + true); 539 542 540 543 trace_seq_printf(trace->seq, "\n"); 541 544 ··· 549 540 trace_seq_printf(trace->seq, "max: "); 550 541 551 542 if (!params->no_irq) 552 - trace_seq_printf(trace->seq, "%9llu ", 553 - sum.max_irq); 543 + format_summary_value(trace->seq, 544 + sum.irq_count, 545 + sum.max_irq, 546 + false); 554 547 555 548 if (!params->no_thread) 556 - trace_seq_printf(trace->seq, "%9llu ", 557 - sum.max_thread); 549 + format_summary_value(trace->seq, 550 + sum.thread_count, 551 + sum.max_thread, 552 + false); 558 553 559 554 if (params->user_hist) 560 - trace_seq_printf(trace->seq, "%9llu ", 561 - sum.max_user); 555 + format_summary_value(trace->seq, 556 + sum.user_count, 557 + sum.max_user, 558 + false); 562 559 563 560 trace_seq_printf(trace->seq, "\n"); 564 561 trace_seq_do_printf(trace->seq);
+1 -1
usr/include/Makefile
··· 78 78 cmd_hdrtest = \ 79 79 $(CC) $(c_flags) -fsyntax-only -x c /dev/null \ 80 80 $(if $(filter-out $(no-header-test), $*.h), -include $< -include $<); \ 81 - $(PERL) $(src)/headers_check.pl $(obj) $(SRCARCH) $<; \ 81 + $(PERL) $(src)/headers_check.pl $(obj) $<; \ 82 82 touch $@ 83 83 84 84 $(obj)/%.hdrtest: $(obj)/%.h FORCE
+2 -7
usr/include/headers_check.pl
··· 3 3 # 4 4 # headers_check.pl execute a number of trivial consistency checks 5 5 # 6 - # Usage: headers_check.pl dir arch [files...] 6 + # Usage: headers_check.pl dir [files...] 7 7 # dir: dir to look for included files 8 - # arch: architecture 9 8 # files: list of files to check 10 9 # 11 10 # The script reads the supplied files line by line and: ··· 22 23 use strict; 23 24 use File::Basename; 24 25 25 - my ($dir, $arch, @files) = @ARGV; 26 + my ($dir, @files) = @ARGV; 26 27 27 28 my $ret = 0; 28 29 my $line; ··· 53 54 my $inc = $1; 54 55 my $found; 55 56 $found = stat($dir . "/" . $inc); 56 - if (!$found) { 57 - $inc =~ s#asm/#asm-$arch/#; 58 - $found = stat($dir . "/" . $inc); 59 - } 60 57 if (!$found) { 61 58 printf STDERR "$filename:$lineno: included file '$inc' is not exported\n"; 62 59 $ret = 1;