Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'loongarch-kvm-6.14' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson into HEAD

LoongArch KVM changes for v6.14

1. Clear LLBCTL if secondary mmu mapping changed.
2. Add hypercall service support for usermode VMM.

This is a really small changeset, because the Chinese New Year
(Spring Festival) is coming. Happy New Year!

+9223 -4686
+4 -1
.mailmap
··· 121 121 Benjamin Poirier <benjamin.poirier@gmail.com> <bpoirier@suse.de> 122 122 Benjamin Tissoires <bentiss@kernel.org> <benjamin.tissoires@gmail.com> 123 123 Benjamin Tissoires <bentiss@kernel.org> <benjamin.tissoires@redhat.com> 124 + Bingwu Zhang <xtex@aosc.io> <xtexchooser@duck.com> 125 + Bingwu Zhang <xtex@aosc.io> <xtex@xtexx.eu.org> 124 126 Bjorn Andersson <andersson@kernel.org> <bjorn@kryo.se> 125 127 Bjorn Andersson <andersson@kernel.org> <bjorn.andersson@linaro.org> 126 128 Bjorn Andersson <andersson@kernel.org> <bjorn.andersson@sonymobile.com> ··· 437 435 Martin Kepplinger <martink@posteo.de> <martin.kepplinger@puri.sm> 438 436 Martin Kepplinger <martink@posteo.de> <martin.kepplinger@theobroma-systems.com> 439 437 Martyna Szapar-Mudlaw <martyna.szapar-mudlaw@linux.intel.com> <martyna.szapar-mudlaw@intel.com> 440 - Mathieu Othacehe <m.othacehe@gmail.com> <othacehe@gnu.org> 438 + Mathieu Othacehe <othacehe@gnu.org> <m.othacehe@gmail.com> 441 439 Mat Martineau <martineau@kernel.org> <mathew.j.martineau@linux.intel.com> 442 440 Mat Martineau <martineau@kernel.org> <mathewm@codeaurora.org> 443 441 Matthew Wilcox <willy@infradead.org> <matthew.r.wilcox@intel.com> ··· 737 735 Wolfram Sang <wsa@kernel.org> <wsa@the-dreams.de> 738 736 Yakir Yang <kuankuan.y@gmail.com> <ykk@rock-chips.com> 739 737 Yanteng Si <si.yanteng@linux.dev> <siyanteng@loongson.cn> 738 + Ying Huang <huang.ying.caritas@gmail.com> <ying.huang@intel.com> 740 739 Yusuke Goda <goda.yusuke@renesas.com> 741 740 Zack Rusin <zack.rusin@broadcom.com> <zackr@vmware.com> 742 741 Zhu Yanjun <zyjzyj2000@gmail.com> <yanjunz@nvidia.com>
+12
CREDITS
··· 20 20 E: thomas.ab@samsung.com 21 21 D: Samsung pin controller driver 22 22 23 + N: Jose Abreu 24 + E: jose.abreu@synopsys.com 25 + D: Synopsys DesignWare XPCS MDIO/PCS driver. 26 + 23 27 N: Dragos Acostachioaie 24 28 E: dragos@iname.com 25 29 W: http://www.arbornet.org/~dragos ··· 1432 1428 S: Sterling Heights, Michigan 48313 1433 1429 S: USA 1434 1430 1431 + N: Andy Gospodarek 1432 + E: andy@greyhouse.net 1433 + D: Maintenance and contributions to the network interface bonding driver. 1434 + 1435 1435 N: Wolfgang Grandegger 1436 1436 E: wg@grandegger.com 1437 1437 D: Controller Area Network (device drivers) ··· 1819 1811 D: Author/maintainer of most DRM drivers (especially ATI, MGA) 1820 1812 D: Core DRM templates, general DRM and 3D-related hacking 1821 1813 S: No fixed address 1814 + 1815 + N: Woojung Huh 1816 + E: woojung.huh@microchip.com 1817 + D: Microchip LAN78XX USB Ethernet driver 1822 1818 1823 1819 N: Kenn Humborg 1824 1820 E: kenn@wombat.ie
+7 -3
Documentation/admin-guide/laptops/thinkpad-acpi.rst
··· 445 445 0x1008 0x07 FN+F8 IBM: toggle screen expand 446 446 Lenovo: configure UltraNav, 447 447 or toggle screen expand. 448 - On newer platforms (2024+) 449 - replaced by 0x131f (see below) 448 + On 2024 platforms replaced by 449 + 0x131f (see below) and on newer 450 + platforms (2025 +) keycode is 451 + replaced by 0x1401 (see below). 450 452 451 453 0x1009 0x08 FN+F9 - 452 454 ··· 508 506 509 507 0x1019 0x18 unknown 510 508 511 - 0x131f ... FN+F8 Platform Mode change. 509 + 0x131f ... FN+F8 Platform Mode change (2024 systems). 512 510 Implemented in driver. 513 511 512 + 0x1401 ... FN+F8 Platform Mode change (2025 + systems). 513 + Implemented in driver. 514 514 ... ... ... 515 515 516 516 0x1020 0x1F unknown
+1 -1
Documentation/admin-guide/mm/transhuge.rst
··· 436 436 The number of file transparent huge pages mapped to userspace is available 437 437 by reading ShmemPmdMapped and ShmemHugePages fields in ``/proc/meminfo``. 438 438 To identify what applications are mapping file transparent huge pages, it 439 - is necessary to read ``/proc/PID/smaps`` and count the FileHugeMapped fields 439 + is necessary to read ``/proc/PID/smaps`` and count the FilePmdMapped fields 440 440 for each mapping. 441 441 442 442 Note that reading the smaps file is expensive and reading it
+1 -3
Documentation/admin-guide/pm/amd-pstate.rst
··· 251 251 In some ASICs, the highest CPPC performance is not the one in the ``_CPC`` 252 252 table, so we need to expose it to sysfs. If boost is not active, but 253 253 still supported, this maximum frequency will be larger than the one in 254 - ``cpuinfo``. On systems that support preferred core, the driver will have 255 - different values for some cores than others and this will reflect the values 256 - advertised by the platform at bootup. 254 + ``cpuinfo``. 257 255 This attribute is read-only. 258 256 259 257 ``amd_pstate_lowest_nonlinear_freq``
+6 -4
Documentation/devicetree/bindings/crypto/fsl,sec-v4.0.yaml
··· 114 114 table that specifies the PPID to LIODN mapping. Needed if the PAMU is 115 115 used. Value is a 12 bit value where value is a LIODN ID for this JR. 116 116 This property is normally set by boot firmware. 117 - $ref: /schemas/types.yaml#/definitions/uint32 118 - maximum: 0xfff 117 + $ref: /schemas/types.yaml#/definitions/uint32-array 118 + items: 119 + - maximum: 0xfff 119 120 120 121 '^rtic@[0-9a-f]+$': 121 122 type: object ··· 187 186 Needed if the PAMU is used. Value is a 12 bit value where value 188 187 is a LIODN ID for this JR. This property is normally set by boot 189 188 firmware. 190 - $ref: /schemas/types.yaml#/definitions/uint32 191 - maximum: 0xfff 189 + $ref: /schemas/types.yaml#/definitions/uint32-array 190 + items: 191 + - maximum: 0xfff 192 192 193 193 fsl,rtic-region: 194 194 description:
+1 -1
Documentation/devicetree/bindings/display/bridge/adi,adv7533.yaml
··· 90 90 adi,dsi-lanes: 91 91 description: Number of DSI data lanes connected to the DSI host. 92 92 $ref: /schemas/types.yaml#/definitions/uint32 93 - enum: [ 1, 2, 3, 4 ] 93 + enum: [ 2, 3, 4 ] 94 94 95 95 "#sound-dai-cells": 96 96 const: 0
+18 -1
Documentation/devicetree/bindings/display/mediatek/mediatek,dp.yaml
··· 42 42 interrupts: 43 43 maxItems: 1 44 44 45 + '#sound-dai-cells': 46 + const: 0 47 + 45 48 ports: 46 49 $ref: /schemas/graph.yaml#/properties/ports 47 50 properties: ··· 88 85 - ports 89 86 - max-linkrate-mhz 90 87 91 - additionalProperties: false 88 + allOf: 89 + - $ref: /schemas/sound/dai-common.yaml# 90 + - if: 91 + not: 92 + properties: 93 + compatible: 94 + contains: 95 + enum: 96 + - mediatek,mt8188-dp-tx 97 + - mediatek,mt8195-dp-tx 98 + then: 99 + properties: 100 + '#sound-dai-cells': false 101 + 102 + unevaluatedProperties: false 92 103 93 104 examples: 94 105 - |
+1
Documentation/devicetree/bindings/iio/st,st-sensors.yaml
··· 65 65 - st,lsm9ds0-gyro 66 66 - description: STMicroelectronics Magnetometers 67 67 enum: 68 + - st,iis2mdc 68 69 - st,lis2mdl 69 70 - st,lis3mdl-magn 70 71 - st,lsm303agr-magn
+1 -1
Documentation/devicetree/bindings/mtd/partitions/fixed-partitions.yaml
··· 82 82 83 83 uimage@100000 { 84 84 reg = <0x0100000 0x200000>; 85 - compress = "lzma"; 85 + compression = "lzma"; 86 86 }; 87 87 }; 88 88
+1 -1
Documentation/devicetree/bindings/net/pse-pd/pse-controller.yaml
··· 81 81 List of phandles, each pointing to the power supply for the 82 82 corresponding pairset named in 'pairset-names'. This property 83 83 aligns with IEEE 802.3-2022, Section 33.2.3 and 145.2.4. 84 - PSE Pinout Alternatives (as per IEEE 802.3-2022 Table 145–3) 84 + PSE Pinout Alternatives (as per IEEE 802.3-2022 Table 145-3) 85 85 |-----------|---------------|---------------|---------------|---------------| 86 86 | Conductor | Alternative A | Alternative A | Alternative B | Alternative B | 87 87 | | (MDI-X) | (MDI) | (X) | (S) |
+2
Documentation/devicetree/bindings/soc/fsl/fsl,qman-portal.yaml
··· 35 35 36 36 fsl,liodn: 37 37 $ref: /schemas/types.yaml#/definitions/uint32-array 38 + maxItems: 2 38 39 description: See pamu.txt. Two LIODN(s). DQRR LIODN (DLIODN) and Frame LIODN 39 40 (FLIODN) 40 41 ··· 70 69 type: object 71 70 properties: 72 71 fsl,liodn: 72 + $ref: /schemas/types.yaml#/definitions/uint32-array 73 73 description: See pamu.txt, PAMU property used for static LIODN assignment 74 74 75 75 fsl,iommu-parent:
+1 -1
Documentation/devicetree/bindings/sound/realtek,rt5645.yaml
··· 51 51 description: Power supply for AVDD, providing 1.8V. 52 52 53 53 cpvdd-supply: 54 - description: Power supply for CPVDD, providing 3.5V. 54 + description: Power supply for CPVDD, providing 1.8V. 55 55 56 56 hp-detect-gpios: 57 57 description:
+850
Documentation/mm/process_addrs.rst
··· 3 3 ================= 4 4 Process Addresses 5 5 ================= 6 + 7 + .. toctree:: 8 + :maxdepth: 3 9 + 10 + 11 + Userland memory ranges are tracked by the kernel via Virtual Memory Areas or 12 + 'VMA's of type :c:struct:`!struct vm_area_struct`. 13 + 14 + Each VMA describes a virtually contiguous memory range with identical 15 + attributes, each described by a :c:struct:`!struct vm_area_struct` 16 + object. Userland access outside of VMAs is invalid except in the case where an 17 + adjacent stack VMA could be extended to contain the accessed address. 18 + 19 + All VMAs are contained within one and only one virtual address space, described 20 + by a :c:struct:`!struct mm_struct` object which is referenced by all tasks (that is, 21 + threads) which share the virtual address space. We refer to this as the 22 + :c:struct:`!mm`. 23 + 24 + Each mm object contains a maple tree data structure which describes all VMAs 25 + within the virtual address space. 26 + 27 + .. note:: An exception to this is the 'gate' VMA which is provided by 28 + architectures which use :c:struct:`!vsyscall` and is a global static 29 + object which does not belong to any specific mm. 30 + 31 + ------- 32 + Locking 33 + ------- 34 + 35 + The kernel is designed to be highly scalable against concurrent read operations 36 + on VMA **metadata** so a complicated set of locks are required to ensure memory 37 + corruption does not occur. 38 + 39 + .. note:: Locking VMAs for their metadata does not have any impact on the memory 40 + they describe nor the page tables that map them. 41 + 42 + Terminology 43 + ----------- 44 + 45 + * **mmap locks** - Each MM has a read/write semaphore :c:member:`!mmap_lock` 46 + which locks at a process address space granularity which can be acquired via 47 + :c:func:`!mmap_read_lock`, :c:func:`!mmap_write_lock` and variants. 48 + * **VMA locks** - The VMA lock is at VMA granularity (of course) which behaves 49 + as a read/write semaphore in practice. 
A VMA read lock is obtained via 50 + :c:func:`!lock_vma_under_rcu` (and unlocked via :c:func:`!vma_end_read`) and a 51 + write lock via :c:func:`!vma_start_write` (all VMA write locks are unlocked 52 + automatically when the mmap write lock is released). To take a VMA write lock 53 + you **must** have already acquired an :c:func:`!mmap_write_lock`. 54 + * **rmap locks** - When trying to access VMAs through the reverse mapping via a 55 + :c:struct:`!struct address_space` or :c:struct:`!struct anon_vma` object 56 + (reachable from a folio via :c:member:`!folio->mapping`), VMAs must be stabilised via 57 + :c:func:`!anon_vma_[try]lock_read` or :c:func:`!anon_vma_[try]lock_write` for 58 + anonymous memory and :c:func:`!i_mmap_[try]lock_read` or 59 + :c:func:`!i_mmap_[try]lock_write` for file-backed memory. We refer to these 60 + locks as the reverse mapping locks, or 'rmap locks' for brevity. 61 + 62 + We discuss page table locks separately in the dedicated section below. 63 + 64 + The first thing **any** of these locks achieves is to **stabilise** the VMA 65 + within the MM tree. That is, guaranteeing that the VMA object will not be 66 + deleted from under you nor modified (except for some specific fields 67 + described below). 68 + 69 + Stabilising a VMA also keeps the address space described by it around. 70 + 71 + Lock usage 72 + ---------- 73 + 74 + If you want to **read** VMA metadata fields or just keep the VMA stable, you 75 + must do one of the following: 76 + 77 + * Obtain an mmap read lock at the MM granularity via :c:func:`!mmap_read_lock` (or a 78 + suitable variant), unlocking it with a matching :c:func:`!mmap_read_unlock` when 79 + you're done with the VMA, *or* 80 + * Try to obtain a VMA read lock via :c:func:`!lock_vma_under_rcu`.
This tries to 81 + acquire the lock atomically so might fail, in which case fall-back logic is 82 + required to instead obtain an mmap read lock if this returns :c:macro:`!NULL`, 83 + *or* 84 + * Acquire an rmap lock before traversing the locked interval tree (whether 85 + anonymous or file-backed) to obtain the required VMA. 86 + 87 + If you want to **write** VMA metadata fields, then things vary depending on the 88 + field (we explore each VMA field in detail below). For the majority you must: 89 + 90 + * Obtain an mmap write lock at the MM granularity via :c:func:`!mmap_write_lock` (or a 91 + suitable variant), unlocking it with a matching :c:func:`!mmap_write_unlock` when 92 + you're done with the VMA, *and* 93 + * Obtain a VMA write lock via :c:func:`!vma_start_write` for each VMA you wish to 94 + modify, which will be released automatically when :c:func:`!mmap_write_unlock` is 95 + called. 96 + * If you want to be able to write to **any** field, you must also hide the VMA 97 + from the reverse mapping by obtaining an **rmap write lock**. 98 + 99 + VMA locks are special in that you must obtain an mmap **write** lock **first** 100 + in order to obtain a VMA **write** lock. A VMA **read** lock however can be 101 + obtained without any other lock (:c:func:`!lock_vma_under_rcu` will acquire then 102 + release an RCU lock to lookup the VMA for you). 103 + 104 + This constrains the impact of writers on readers, as a writer can interact with 105 + one VMA while a reader interacts with another simultaneously. 106 + 107 + .. note:: The primary users of VMA read locks are page fault handlers, which 108 + means that without a VMA write lock, page faults will run concurrent with 109 + whatever you are doing. 110 + 111 + Examining all valid lock states: 112 + 113 + .. table:: 114 + 115 + ========= ======== ========= ======= ===== =========== ========== 116 + mmap lock VMA lock rmap lock Stable? Read? Write most? Write all? 
117 + ========= ======== ========= ======= ===== =========== ========== 118 + \- \- \- N N N N 119 + \- R \- Y Y N N 120 + \- \- R/W Y Y N N 121 + R/W \-/R \-/R/W Y Y N N 122 + W W \-/R Y Y Y N 123 + W W W Y Y Y Y 124 + ========= ======== ========= ======= ===== =========== ========== 125 + 126 + .. warning:: While it's possible to obtain a VMA lock while holding an mmap read lock, 127 + attempting to do the reverse is invalid as it can result in deadlock - if 128 + another task already holds an mmap write lock and attempts to acquire a VMA 129 + write lock that will deadlock on the VMA read lock. 130 + 131 + All of these locks behave as read/write semaphores in practice, so you can 132 + obtain either a read or a write lock for each of these. 133 + 134 + .. note:: Generally speaking, a read/write semaphore is a class of lock which 135 + permits concurrent readers. However a write lock can only be obtained 136 + once all readers have left the critical region (and pending readers 137 + made to wait). 138 + 139 + This renders read locks on a read/write semaphore concurrent with other 140 + readers and write locks exclusive against all others holding the semaphore. 141 + 142 + VMA fields 143 + ^^^^^^^^^^ 144 + 145 + We can subdivide :c:struct:`!struct vm_area_struct` fields by their purpose, which makes it 146 + easier to explore their locking characteristics: 147 + 148 + .. note:: We exclude VMA lock-specific fields here to avoid confusion, as these 149 + are in effect an internal implementation detail. 150 + 151 + .. table:: Virtual layout fields 152 + 153 + ===================== ======================================== =========== 154 + Field Description Write lock 155 + ===================== ======================================== =========== 156 + :c:member:`!vm_start` Inclusive start virtual address of range mmap write, 157 + VMA describes. VMA write, 158 + rmap write. 
159 + :c:member:`!vm_end` Exclusive end virtual address of range mmap write, 160 + VMA describes. VMA write, 161 + rmap write. 162 + :c:member:`!vm_pgoff` Describes the page offset into the file, mmap write, 163 + the original page offset within the VMA write, 164 + virtual address space (prior to any rmap write. 165 + :c:func:`!mremap`), or PFN if a PFN map 166 + and the architecture does not support 167 + :c:macro:`!CONFIG_ARCH_HAS_PTE_SPECIAL`. 168 + ===================== ======================================== =========== 169 + 170 + These fields describe the size, start and end of the VMA, and as such cannot be 171 + modified without first being hidden from the reverse mapping since these fields 172 + are used to locate VMAs within the reverse mapping interval trees. 173 + 174 + .. table:: Core fields 175 + 176 + ============================ ======================================== ========================= 177 + Field Description Write lock 178 + ============================ ======================================== ========================= 179 + :c:member:`!vm_mm` Containing mm_struct. None - written once on 180 + initial map. 181 + :c:member:`!vm_page_prot` Architecture-specific page table mmap write, VMA write. 182 + protection bits determined from VMA 183 + flags. 184 + :c:member:`!vm_flags` Read-only access to VMA flags describing N/A 185 + attributes of the VMA, in union with 186 + private writable 187 + :c:member:`!__vm_flags`. 188 + :c:member:`!__vm_flags` Private, writable access to VMA flags mmap write, VMA write. 189 + field, updated by 190 + :c:func:`!vm_flags_*` functions. 191 + :c:member:`!vm_file` If the VMA is file-backed, points to a None - written once on 192 + struct file object describing the initial map. 193 + underlying file, if anonymous then 194 + :c:macro:`!NULL`. 
195 + :c:member:`!vm_ops` If the VMA is file-backed, then either None - Written once on 196 + the driver or file-system provides a initial map by 197 + :c:struct:`!struct vm_operations_struct` :c:func:`!f_ops->mmap()`. 198 + object describing callbacks to be 199 + invoked on VMA lifetime events. 200 + :c:member:`!vm_private_data` A :c:member:`!void *` field for Handled by driver. 201 + driver-specific metadata. 202 + ============================ ======================================== ========================= 203 + 204 + These are the core fields which describe the MM the VMA belongs to and its attributes. 205 + 206 + .. table:: Config-specific fields 207 + 208 + ================================= ===================== ======================================== =============== 209 + Field Configuration option Description Write lock 210 + ================================= ===================== ======================================== =============== 211 + :c:member:`!anon_name` CONFIG_ANON_VMA_NAME A field for storing a mmap write, 212 + :c:struct:`!struct anon_vma_name` VMA write. 213 + object providing a name for anonymous 214 + mappings, or :c:macro:`!NULL` if none 215 + is set or the VMA is file-backed. The 216 + underlying object is reference counted 217 + and can be shared across multiple VMAs 218 + for scalability. 219 + :c:member:`!swap_readahead_info` CONFIG_SWAP Metadata used by the swap mechanism mmap read, 220 + to perform readahead. This field is swap-specific 221 + accessed atomically. lock. 222 + :c:member:`!vm_policy` CONFIG_NUMA :c:type:`!mempolicy` object which mmap write, 223 + describes the NUMA behaviour of the VMA write. 224 + VMA. The underlying object is reference 225 + counted. 226 + :c:member:`!numab_state` CONFIG_NUMA_BALANCING :c:type:`!vma_numab_state` object which mmap read, 227 + describes the current state of numab-specific 228 + NUMA balancing in relation to this VMA. lock. 
229 + Updated under mmap read lock by 230 + :c:func:`!task_numa_work`. 231 + :c:member:`!vm_userfaultfd_ctx` CONFIG_USERFAULTFD Userfaultfd context wrapper object of mmap write, 232 + type :c:type:`!vm_userfaultfd_ctx`, VMA write. 233 + either of zero size if userfaultfd is 234 + disabled, or containing a pointer 235 + to an underlying 236 + :c:type:`!userfaultfd_ctx` object which 237 + describes userfaultfd metadata. 238 + ================================= ===================== ======================================== =============== 239 + 240 + These fields are present or not depending on whether the relevant kernel 241 + configuration option is set. 242 + 243 + .. table:: Reverse mapping fields 244 + 245 + =================================== ========================================= ============================ 246 + Field Description Write lock 247 + =================================== ========================================= ============================ 248 + :c:member:`!shared.rb` A red/black tree node used, if the mmap write, VMA write, 249 + mapping is file-backed, to place the VMA i_mmap write. 250 + in the 251 + :c:member:`!struct address_space->i_mmap` 252 + red/black interval tree. 253 + :c:member:`!shared.rb_subtree_last` Metadata used for management of the mmap write, VMA write, 254 + interval tree if the VMA is file-backed. i_mmap write. 255 + :c:member:`!anon_vma_chain` List of pointers to both forked/CoW’d mmap read, anon_vma write. 256 + :c:type:`!anon_vma` objects and 257 + :c:member:`!vma->anon_vma` if it is 258 + non-:c:macro:`!NULL`. 259 + :c:member:`!anon_vma` :c:type:`!anon_vma` object used by When :c:macro:`NULL` and 260 + anonymous folios mapped exclusively to setting non-:c:macro:`NULL`: 261 + this VMA. Initially set by mmap read, page_table_lock. 262 + :c:func:`!anon_vma_prepare` serialised 263 + by the :c:macro:`!page_table_lock`. This When non-:c:macro:`NULL` and 264 + is set as soon as any page is faulted in. 
setting :c:macro:`NULL`: 265 + mmap write, VMA write, 266 + anon_vma write. 267 + =================================== ========================================= ============================ 268 + 269 + These fields are used to both place the VMA within the reverse mapping, and for 270 + anonymous mappings, to be able to access both related :c:struct:`!struct anon_vma` objects 271 + and the :c:struct:`!struct anon_vma` in which folios mapped exclusively to this VMA should 272 + reside. 273 + 274 + .. note:: If a file-backed mapping is mapped with :c:macro:`!MAP_PRIVATE` set 275 + then it can be in both the :c:type:`!anon_vma` and :c:type:`!i_mmap` 276 + trees at the same time, so all of these fields might be utilised at 277 + once. 278 + 279 + Page tables 280 + ----------- 281 + 282 + We won't speak exhaustively on the subject but broadly speaking, page tables map 283 + virtual addresses to physical ones through a series of page tables, each of 284 + which contains entries with physical addresses for the next page table level 285 + (along with flags), and at the leaf level the physical addresses of the 286 + underlying physical data pages or a special entry such as a swap entry, 287 + migration entry or other special marker. Offsets into these pages are provided 288 + by the virtual address itself. 289 + 290 + In Linux these are divided into five levels - PGD, P4D, PUD, PMD and PTE. Huge 291 + pages might eliminate one or two of these levels, but when this is the case we 292 + typically refer to the leaf level as the PTE level regardless. 293 + 294 + .. note:: In instances where the architecture supports fewer page table levels 295 + than five, the kernel cleverly 'folds' page table levels, that is stubbing 296 + out functions related to the skipped levels. This allows us to 297 + conceptually act as if there were always five levels, even if the 298 + compiler might, in practice, eliminate any code relating to missing 299 + ones. 
300 + 301 + There are four key operations typically performed on page tables: 302 + 303 + 1. **Traversing** page tables - Simply reading page tables in order to traverse 304 + them. This only requires that the VMA is kept stable, so a lock which 305 + establishes this suffices for traversal (there are also lockless variants 306 + which eliminate even this requirement, such as :c:func:`!gup_fast`). 307 + 2. **Installing** page table mappings - Whether creating a new mapping or 308 + modifying an existing one in such a way as to change its identity. This 309 + requires that the VMA is kept stable via an mmap or VMA lock (explicitly not 310 + rmap locks). 311 + 3. **Zapping/unmapping** page table entries - This is what the kernel calls 312 + clearing page table mappings at the leaf level only, whilst leaving all page 313 + tables in place. This is a very common operation in the kernel performed on 314 + file truncation, the :c:macro:`!MADV_DONTNEED` operation via 315 + :c:func:`!madvise`, and others. This is performed by a number of functions 316 + including :c:func:`!unmap_mapping_range` and :c:func:`!unmap_mapping_pages`. 317 + The VMA need only be kept stable for this operation. 318 + 4. **Freeing** page tables - When finally the kernel removes page tables from a 319 + userland process (typically via :c:func:`!free_pgtables`) extreme care must 320 + be taken to ensure this is done safely, as this logic finally frees all page 321 + tables in the specified range, ignoring existing leaf entries (it assumes the 322 + caller has both zapped the range and prevented any further faults or 323 + modifications within it). 324 + 325 + .. note:: Modifying mappings for reclaim or migration is performed under rmap 326 + lock as it, like zapping, does not fundamentally modify the identity 327 + of what is being mapped. 
328 + 329 + **Traversing** and **zapping** ranges can be performed holding any one of the 330 + locks described in the terminology section above - that is the mmap lock, the 331 + VMA lock or either of the reverse mapping locks. 332 + 333 + That is - as long as you keep the relevant VMA **stable** - you are good to go 334 + ahead and perform these operations on page tables (though internally, kernel 335 + operations that perform writes also acquire internal page table locks to 336 + serialise - see the page table implementation detail section for more details). 337 + 338 + When **installing** page table entries, the mmap or VMA lock must be held to 339 + keep the VMA stable. We explore why this is in the page table locking details 340 + section below. 341 + 342 + .. warning:: Page tables are normally only traversed in regions covered by VMAs. 343 + If you want to traverse page tables in areas that might not be 344 + covered by VMAs, heavier locking is required. 345 + See :c:func:`!walk_page_range_novma` for details. 346 + 347 + **Freeing** page tables is an entirely internal memory management operation and 348 + has special requirements (see the page freeing section below for more details). 349 + 350 + .. warning:: When **freeing** page tables, it must not be possible for VMAs 351 + containing the ranges those page tables map to be accessible via 352 + the reverse mapping. 353 + 354 + The :c:func:`!free_pgtables` function removes the relevant VMAs 355 + from the reverse mappings, but no other VMAs can be permitted to be 356 + accessible and span the specified range. 357 + 358 + Lock ordering 359 + ------------- 360 + 361 + As we have multiple locks across the kernel which may or may not be taken at the 362 + same time as explicit mm or VMA locks, we have to be wary of lock inversion, and 363 + the **order** in which locks are acquired and released becomes very important. 364 + 365 + .. 
note:: Lock inversion occurs when two threads need to acquire multiple locks, 366 + but in doing so inadvertently cause a mutual deadlock. 367 + 368 + For example, consider thread 1 which holds lock A and tries to acquire lock B, 369 + while thread 2 holds lock B and tries to acquire lock A. 370 + 371 + Both threads are now deadlocked on each other. However, had they attempted to 372 + acquire locks in the same order, one would have waited for the other to 373 + complete its work and no deadlock would have occurred. 374 + 375 + The opening comment in :c:macro:`!mm/rmap.c` describes in detail the required 376 + ordering of locks within memory management code: 377 + 378 + .. code-block:: 379 + 380 + inode->i_rwsem (while writing or truncating, not reading or faulting) 381 + mm->mmap_lock 382 + mapping->invalidate_lock (in filemap_fault) 383 + folio_lock 384 + hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share, see hugetlbfs below) 385 + vma_start_write 386 + mapping->i_mmap_rwsem 387 + anon_vma->rwsem 388 + mm->page_table_lock or pte_lock 389 + swap_lock (in swap_duplicate, swap_info_get) 390 + mmlist_lock (in mmput, drain_mmlist and others) 391 + mapping->private_lock (in block_dirty_folio) 392 + i_pages lock (widely used) 393 + lruvec->lru_lock (in folio_lruvec_lock_irq) 394 + inode->i_lock (in set_page_dirty's __mark_inode_dirty) 395 + bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty) 396 + sb_lock (within inode_lock in fs/fs-writeback.c) 397 + i_pages lock (widely used, in set_page_dirty, 398 + in arch-dependent flush_dcache_mmap_lock, 399 + within bdi.wb->list_lock in __sync_single_inode) 400 + 401 + There is also a file-system specific lock ordering comment located at the top of 402 + :c:macro:`!mm/filemap.c`: 403 + 404 + .. 
code-block:: 405 + 406 + ->i_mmap_rwsem (truncate_pagecache) 407 + ->private_lock (__free_pte->block_dirty_folio) 408 + ->swap_lock (exclusive_swap_page, others) 409 + ->i_pages lock 410 + 411 + ->i_rwsem 412 + ->invalidate_lock (acquired by fs in truncate path) 413 + ->i_mmap_rwsem (truncate->unmap_mapping_range) 414 + 415 + ->mmap_lock 416 + ->i_mmap_rwsem 417 + ->page_table_lock or pte_lock (various, mainly in memory.c) 418 + ->i_pages lock (arch-dependent flush_dcache_mmap_lock) 419 + 420 + ->mmap_lock 421 + ->invalidate_lock (filemap_fault) 422 + ->lock_page (filemap_fault, access_process_vm) 423 + 424 + ->i_rwsem (generic_perform_write) 425 + ->mmap_lock (fault_in_readable->do_page_fault) 426 + 427 + bdi->wb.list_lock 428 + sb_lock (fs/fs-writeback.c) 429 + ->i_pages lock (__sync_single_inode) 430 + 431 + ->i_mmap_rwsem 432 + ->anon_vma.lock (vma_merge) 433 + 434 + ->anon_vma.lock 435 + ->page_table_lock or pte_lock (anon_vma_prepare and various) 436 + 437 + ->page_table_lock or pte_lock 438 + ->swap_lock (try_to_unmap_one) 439 + ->private_lock (try_to_unmap_one) 440 + ->i_pages lock (try_to_unmap_one) 441 + ->lruvec->lru_lock (follow_page_mask->mark_page_accessed) 442 + ->lruvec->lru_lock (check_pte_range->folio_isolate_lru) 443 + ->private_lock (folio_remove_rmap_pte->set_page_dirty) 444 + ->i_pages lock (folio_remove_rmap_pte->set_page_dirty) 445 + bdi.wb->list_lock (folio_remove_rmap_pte->set_page_dirty) 446 + ->inode->i_lock (folio_remove_rmap_pte->set_page_dirty) 447 + bdi.wb->list_lock (zap_pte_range->set_page_dirty) 448 + ->inode->i_lock (zap_pte_range->set_page_dirty) 449 + ->private_lock (zap_pte_range->block_dirty_folio) 450 + 451 + Please check the current state of these comments which may have changed since 452 + the time of writing of this document. 453 + 454 + ------------------------------ 455 + Locking Implementation Details 456 + ------------------------------ 457 + 458 + .. 
warning:: Locking rules for PTE-level page tables are very different from 459 + locking rules for page tables at other levels. 460 + 461 + Page table locking details 462 + -------------------------- 463 + 464 + In addition to the locks described in the terminology section above, we have 465 + additional locks dedicated to page tables: 466 + 467 + * **Higher level page table locks** - Higher level page tables, that is PGD, P4D 468 + and PUD each make use of the process address space granularity 469 + :c:member:`!mm->page_table_lock` lock when modified. 470 + 471 + * **Fine-grained page table locks** - PMDs and PTEs each have fine-grained locks 472 + either kept within the folios describing the page tables or allocated 473 + separately and pointed at by the folios if :c:macro:`!ALLOC_SPLIT_PTLOCKS` is 474 + set. The PMD spin lock is obtained via :c:func:`!pmd_lock`, however PTEs are 475 + mapped into high memory (if a 32-bit system) and carefully locked via 476 + :c:func:`!pte_offset_map_lock`. 477 + 478 + These locks represent the minimum required to interact with each page table 479 + level, but there are further requirements. 480 + 481 + Importantly, note that on a **traversal** of page tables, sometimes no such 482 + locks are taken. However, at the PTE level, at least concurrent page table 483 + deletion must be prevented (using RCU) and the page table must be mapped into 484 + high memory, see below. 485 + 486 + Whether care is taken on reading the page table entries depends on the 487 + architecture, see the section on atomicity below. 488 + 489 + Locking rules 490 + ^^^^^^^^^^^^^ 491 + 492 + We establish basic locking rules when interacting with page tables: 493 + 494 + * When changing a page table entry the page table lock for that page table 495 + **must** be held, except if you can safely assume nobody can access the page 496 + tables concurrently (such as on invocation of :c:func:`!free_pgtables`). 
* Reads from and writes to page table entries must be *appropriately*
  atomic. See the section on atomicity below for details.
* Populating previously empty entries requires that the mmap or VMA locks are
  held (read or write), doing so with only rmap locks would be dangerous (see
  the warning below).
* As mentioned previously, zapping can be performed while simply keeping the VMA
  stable, that is holding any one of the mmap, VMA or rmap locks.

.. warning:: Populating previously empty entries is dangerous as, when unmapping
             VMAs, :c:func:`!vms_clear_ptes` has a window of time between
             zapping (via :c:func:`!unmap_vmas`) and freeing page tables (via
             :c:func:`!free_pgtables`), where the VMA is still visible in the
             rmap tree. :c:func:`!free_pgtables` assumes that the zap has
             already been performed and removes PTEs unconditionally (along with
             all other page tables in the freed range), so installing new PTE
             entries could leak memory and also cause other unexpected and
             dangerous behaviour.

There are additional rules applicable when moving page tables, which we discuss
in the section on this topic below.

PTE-level page tables are different from page tables at other levels, and there
are extra requirements for accessing them:

* On 32-bit architectures, they may be in high memory (meaning they need to be
  mapped into kernel memory to be accessible).
* When empty, they can be unlinked and RCU-freed while holding an mmap lock or
  rmap lock for reading in combination with the PTE and PMD page table locks.
  In particular, this happens in :c:func:`!retract_page_tables` when handling
  :c:macro:`!MADV_COLLAPSE`.
  So accessing PTE-level page tables requires at least holding an RCU read lock;
  but that only suffices for readers that can tolerate racing with concurrent
  page table updates such that an empty PTE is observed (in a page table that
  has actually already been detached and marked for RCU freeing) while another
  new page table has been installed in the same location and filled with
  entries. Writers normally need to take the PTE lock and revalidate that the
  PMD entry still refers to the same PTE-level page table.

To access PTE-level page tables, a helper like :c:func:`!pte_offset_map_lock` or
:c:func:`!pte_offset_map` can be used depending on stability requirements.
These map the page table into kernel memory if required, take the RCU lock, and
depending on variant, may also look up or acquire the PTE lock.
See the comment on :c:func:`!__pte_offset_map_lock`.

Atomicity
^^^^^^^^^

Regardless of page table locks, the MMU hardware concurrently updates accessed
and dirty bits (perhaps more, depending on architecture). Additionally, page
table traversals may be performed in parallel (though holding the VMA stable),
and functionality like GUP-fast locklessly traverses (that is, reads) page
tables, without even keeping the VMA stable at all.

When performing a page table traversal and keeping the VMA stable, whether a
read must be performed once and only once or not depends on the architecture
(for instance x86-64 does not require any special precautions).

If a write is being performed, or if a read informs whether a write takes place
(on an installation of a page table entry say, for instance in
:c:func:`!__pud_install`), special care must always be taken. In these cases we
can never assume that page table locks give us entirely exclusive access, and
must retrieve page table entries once and only once.

If we are reading page table entries, then we need only ensure that the compiler
does not rearrange our loads. This is achieved via :c:func:`!pXXp_get`
functions - :c:func:`!pgdp_get`, :c:func:`!p4dp_get`, :c:func:`!pudp_get`,
:c:func:`!pmdp_get`, and :c:func:`!ptep_get`.

Each of these uses :c:func:`!READ_ONCE` to guarantee that the compiler reads
the page table entry only once.

However, if we wish to manipulate an existing page table entry and care about
the previously stored data, we must go further and use a hardware atomic
operation as, for example, in :c:func:`!ptep_get_and_clear`.

Equally, operations that do not rely on the VMA being held stable, such as
GUP-fast (see :c:func:`!gup_fast` and its various page table level handlers like
:c:func:`!gup_fast_pte_range`), must very carefully interact with page table
entries, using functions such as :c:func:`!ptep_get_lockless` and its
equivalents for higher page table levels.

Writes to page table entries must also be appropriately atomic, as established
by the :c:func:`!set_pXX` functions - :c:func:`!set_pgd`, :c:func:`!set_p4d`,
:c:func:`!set_pud`, :c:func:`!set_pmd`, and :c:func:`!set_pte`.

Equally, functions which clear page table entries must be appropriately atomic,
as in the :c:func:`!pXX_clear` functions - :c:func:`!pgd_clear`,
:c:func:`!p4d_clear`, :c:func:`!pud_clear`, :c:func:`!pmd_clear`, and
:c:func:`!pte_clear`.

Page table installation
^^^^^^^^^^^^^^^^^^^^^^^

Page table installation is performed with the VMA held stable explicitly by an
mmap or VMA lock in read or write mode (see the warning in the locking rules
section for details as to why).
When allocating a P4D, PUD or PMD and setting the relevant entry in the above
PGD, P4D or PUD, the :c:member:`!mm->page_table_lock` must be held. This is
acquired in :c:func:`!__p4d_alloc`, :c:func:`!__pud_alloc` and
:c:func:`!__pmd_alloc` respectively.

.. note:: :c:func:`!__pmd_alloc` actually invokes :c:func:`!pud_lock` and
   :c:func:`!pud_lockptr` in turn, however at the time of writing it ultimately
   references the :c:member:`!mm->page_table_lock`.

Allocating a PTE will either use the :c:member:`!mm->page_table_lock` or, if
:c:macro:`!USE_SPLIT_PMD_PTLOCKS` is defined, a lock embedded in the PMD
physical page metadata in the form of a :c:struct:`!struct ptdesc`, acquired by
:c:func:`!pmd_ptdesc` called from :c:func:`!pmd_lock` and ultimately
:c:func:`!__pte_alloc`.

Finally, modifying the contents of the PTE requires special treatment, as the
PTE page table lock must be acquired whenever we want stable and exclusive
access to entries contained within a PTE, especially when we wish to modify
them.

This is performed via :c:func:`!pte_offset_map_lock` which carefully checks to
ensure that the PTE hasn't changed from under us, ultimately invoking
:c:func:`!pte_lockptr` to obtain a spin lock at PTE granularity contained within
the :c:struct:`!struct ptdesc` associated with the physical PTE page. The lock
must be released via :c:func:`!pte_unmap_unlock`.

.. note:: There are some variants on this, such as
   :c:func:`!pte_offset_map_rw_nolock` when we know we hold the PTE stable but
   for brevity we do not explore this. See the comment for
   :c:func:`!__pte_offset_map_lock` for more details.
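For illustration, the usual map/lock, use, unmap/unlock shape looks roughly like
the following sketch. It is not a buildable example outside kernel context, and
the helper name :c:func:`!handle_addr` is invented here; only
:c:func:`!pte_offset_map_lock` and :c:func:`!pte_unmap_unlock` are real kernel
interfaces:

.. code-block:: c

	/* Illustrative sketch only: handle_addr is a made-up helper. */
	static int handle_addr(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
	{
		spinlock_t *ptl;
		pte_t *pte;

		pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
		if (!pte)
			return -EAGAIN;	/* PMD entry changed under us; caller may retry */

		/* PTE lock held: entries are stable and exclusive here. */
		/* ... read or modify the entry, e.g. via ptep_get(pte) ... */

		pte_unmap_unlock(pte, ptl);
		return 0;
	}

Note that the NULL return covers the case described above where the PTE-level
page table was detached (for instance by a THP collapse) between lookup and
lock acquisition.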
When modifying data in ranges we typically only wish to allocate higher page
tables as necessary, using these locks to avoid races or overwriting anything,
and set/clear data at the PTE level as required (for instance when page faulting
or zapping).

A typical pattern taken when traversing page table entries to install a new
mapping is to optimistically determine whether the page table entry in the table
above is empty, and if so, only then acquire the page table lock, checking
again to see if it was allocated underneath us.

This allows for a traversal with page table locks only being taken when
required. An example of this is :c:func:`!__pud_alloc`.

At the leaf page table, that is the PTE, we can't entirely rely on this pattern
as we have separate PMD and PTE locks and a THP collapse for instance might have
eliminated the PMD entry as well as the PTE from under us.

This is why :c:func:`!__pte_offset_map_lock` locklessly retrieves the PMD entry
for the PTE, carefully checking it is as expected, before acquiring the
PTE-specific lock, and then *again* checking that the PMD entry is as expected.

If a THP collapse (or similar) were to occur then the lock on both pages would
be acquired, so we can ensure this is prevented while the PTE lock is held.

Installing entries this way ensures mutual exclusion on write.

Page table freeing
^^^^^^^^^^^^^^^^^^

Tearing down page tables themselves is something that requires significant
care. There must be no way that page tables designated for removal can be
traversed or referenced by concurrent tasks.

It is insufficient to simply hold an mmap write lock and VMA lock (which will
prevent racing faults and rmap operations), as a file-backed mapping can be
truncated under the :c:struct:`!struct address_space->i_mmap_rwsem` alone.

As a result, no VMA which can be accessed via the reverse mapping (either
through the :c:struct:`!struct anon_vma->rb_root` or the :c:member:`!struct
address_space->i_mmap` interval trees) can have its page tables torn down.

The operation is typically performed via :c:func:`!free_pgtables`, which assumes
either the mmap write lock has been taken (as specified by its
:c:member:`!mm_wr_locked` parameter), or that the VMA is already unreachable.

It carefully removes the VMA from all reverse mappings, however it is important
that no new ones overlap them and that no route remains to permit access to
addresses within the range whose page tables are being torn down.

Additionally, it assumes that a zap has already been performed and steps have
been taken to ensure that no further page table entries can be installed between
the zap and the invocation of :c:func:`!free_pgtables`.

Since it is assumed that all such steps have been taken, page table entries are
cleared without page table locks (in the :c:func:`!pgd_clear`, :c:func:`!p4d_clear`,
:c:func:`!pud_clear`, and :c:func:`!pmd_clear` functions).

.. note:: It is possible for leaf page tables to be torn down independently of
   the page tables above them, as is done by :c:func:`!retract_page_tables`,
   which is performed under the i_mmap read lock, PMD, and PTE page table
   locks, without this level of care.

Page table moving
^^^^^^^^^^^^^^^^^

Some functions manipulate page table levels above PMD (that is PUD, P4D and PGD
page tables).
Most notable of these is :c:func:`!mremap`, which is capable of
moving higher level page tables.

In these instances, it is required that **all** locks are taken, that is
the mmap lock, the VMA lock and the relevant rmap locks.

You can observe this in the :c:func:`!mremap` implementation in the functions
:c:func:`!take_rmap_locks` and :c:func:`!drop_rmap_locks` which perform the rmap
side of lock acquisition, invoked ultimately by :c:func:`!move_page_tables`.

VMA lock internals
------------------

Overview
^^^^^^^^

VMA read locking is entirely optimistic - if the lock is contended or a competing
write has started, then we do not obtain a read lock.

A VMA **read** lock is obtained by :c:func:`!lock_vma_under_rcu`, which first
calls :c:func:`!rcu_read_lock` to ensure that the VMA is looked up in an RCU
critical section, then attempts to VMA lock it via :c:func:`!vma_start_read`,
before releasing the RCU lock via :c:func:`!rcu_read_unlock`.

VMA read locks hold the read lock on the :c:member:`!vma->vm_lock` semaphore for
their duration and the caller of :c:func:`!lock_vma_under_rcu` must release it
via :c:func:`!vma_end_read`.

VMA **write** locks are acquired via :c:func:`!vma_start_write` in instances where a
VMA is about to be modified; unlike :c:func:`!vma_start_read`, the lock is always
acquired. An mmap write lock **must** be held for the duration of the VMA write
lock, and releasing or downgrading the mmap write lock also releases the VMA
write lock, so there is no :c:func:`!vma_end_write` function.

Note that a semaphore write lock is not held across a VMA lock. Rather, a
sequence number is used for serialisation, and the write semaphore is only
acquired at the point of write lock to update this.
This ensures the semantics we require - VMA write locks provide exclusive write
access to the VMA.

Implementation details
^^^^^^^^^^^^^^^^^^^^^^

The VMA lock mechanism is designed to be a lightweight means of avoiding the use
of the heavily contended mmap lock. It is implemented using a combination of a
read/write semaphore and sequence numbers belonging to the containing
:c:struct:`!struct mm_struct` and the VMA.

Read locks are acquired via :c:func:`!vma_start_read`, which is an optimistic
operation, i.e. it tries to acquire a read lock but returns false if it is
unable to do so. At the end of the read operation, :c:func:`!vma_end_read` is
called to release the VMA read lock.

Invoking :c:func:`!vma_start_read` requires that :c:func:`!rcu_read_lock` has
been called first, establishing that we are in an RCU critical section upon VMA
read lock acquisition. Once acquired, the RCU lock can be released as it is only
required for lookup. This is abstracted by :c:func:`!lock_vma_under_rcu` which
is the interface a user should use.

Writing requires the mmap to be write-locked and the VMA lock to be acquired via
:c:func:`!vma_start_write`; however, the write lock is released by the
termination or downgrade of the mmap write lock, so no :c:func:`!vma_end_write`
is required.

All this is achieved by the use of per-mm and per-VMA sequence counts, which are
used in order to reduce complexity, especially for operations which write-lock
multiple VMAs at once.

If the mm sequence count, :c:member:`!mm->mm_lock_seq`, is equal to the VMA
sequence count :c:member:`!vma->vm_lock_seq` then the VMA is write-locked. If
they differ, then it is not.
Each time the mmap write lock is released in :c:func:`!mmap_write_unlock` or
:c:func:`!mmap_write_downgrade`, :c:func:`!vma_end_write_all` is invoked which
also increments :c:member:`!mm->mm_lock_seq` via
:c:func:`!mm_lock_seqcount_end`.

This way, we ensure that, regardless of the VMA's sequence number, a write lock
is never incorrectly indicated and that when we release an mmap write lock we
efficiently release **all** VMA write locks contained within the mmap at the
same time.

Since the mmap write lock is exclusive against others who hold it, the automatic
release of any VMA locks on its release makes sense, as you would never want to
keep VMAs locked across entirely separate write operations. It also maintains
correct lock ordering.

Each time a VMA read lock is acquired, we acquire a read lock on the
:c:member:`!vma->vm_lock` read/write semaphore and hold it, while checking that
the sequence count of the VMA does not match that of the mm.

If it does, the read lock fails. If it does not, we hold the lock, excluding
writers, but permitting other readers, who will also obtain this lock under RCU.

Importantly, maple tree operations performed in :c:func:`!lock_vma_under_rcu`
are also RCU safe, so the whole read lock operation is guaranteed to function
correctly.

On the write side, we acquire a write lock on the :c:member:`!vma->vm_lock`
read/write semaphore, before setting the VMA's sequence number under this lock,
while simultaneously holding the mmap write lock.

This way, if any read locks are in effect, :c:func:`!vma_start_write` will sleep
until these are finished and mutual exclusion is achieved.

After setting the VMA's sequence number, the lock is released, avoiding
complexity with a long-term held write lock.
This clever combination of a read/write semaphore and sequence count allows for
fast RCU-based per-VMA lock acquisition (especially on page fault, though
utilised elsewhere) with minimal complexity around lock ordering.

mmap write lock downgrading
---------------------------

When an mmap write lock is held one has exclusive access to resources within the
mmap (with the usual caveats about requiring VMA write locks to avoid races with
tasks holding VMA read locks).

It is then possible to **downgrade** from a write lock to a read lock via
:c:func:`!mmap_write_downgrade` which, similar to :c:func:`!mmap_write_unlock`,
implicitly terminates all VMA write locks via :c:func:`!vma_end_write_all`, but
importantly does not relinquish the mmap lock while downgrading, therefore
keeping the locked virtual address space stable.

An interesting consequence of this is that downgraded locks are exclusive
against any other task possessing a downgraded lock (since a racing task would
have to acquire a write lock first to downgrade it, and the downgraded lock
prevents a new write lock from being obtained until the original lock is
released).

For clarity, we map read (R)/downgraded write (D)/write (W) locks against one
another, showing which locks exclude the others:

.. list-table:: Lock exclusivity
   :widths: 5 5 5 5
   :header-rows: 1
   :stub-columns: 1

   * -
     - R
     - D
     - W
   * - R
     - N
     - N
     - Y
   * - D
     - N
     - Y
     - Y
   * - W
     - Y
     - Y
     - Y

Here a Y indicates the locks in the matching row/column are mutually exclusive,
and N indicates that they are not.
Stack expansion
---------------

Stack expansion throws up additional complexities in that we cannot permit there
to be racing page faults; as a result, we invoke :c:func:`!vma_start_write` to
prevent this in :c:func:`!expand_downwards` or :c:func:`!expand_upwards`.
+31 -29
Documentation/netlink/specs/mptcp_pm.yaml
···
        doc: unused event
      -
        name: created
-       doc:
-         token, family, saddr4 | saddr6, daddr4 | daddr6, sport, dport
+       doc: >-
          A new MPTCP connection has been created. It is the good time to
          allocate memory and send ADD_ADDR if needed. Depending on the
          traffic-patterns it can take a long time until the
          MPTCP_EVENT_ESTABLISHED is sent.
+         Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6, sport,
+         dport, server-side.
      -
        name: established
-       doc:
-         token, family, saddr4 | saddr6, daddr4 | daddr6, sport, dport
+       doc: >-
          A MPTCP connection is established (can start new subflows).
+         Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6, sport,
+         dport, server-side.
      -
        name: closed
-       doc:
-         token
+       doc: >-
          A MPTCP connection has stopped.
+         Attribute: token.
      -
        name: announced
        value: 6
-       doc:
-         token, rem_id, family, daddr4 | daddr6 [, dport]
+       doc: >-
          A new address has been announced by the peer.
+         Attributes: token, rem_id, family, daddr4 | daddr6 [, dport].
      -
        name: removed
-       doc:
-         token, rem_id
+       doc: >-
          An address has been lost by the peer.
+         Attributes: token, rem_id.
      -
        name: sub-established
        value: 10
-       doc:
-         token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | daddr6, sport,
-         dport, backup, if_idx [, error]
+       doc: >-
          A new subflow has been established. 'error' should not be set.
+         Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 |
+         daddr6, sport, dport, backup, if_idx [, error].
      -
        name: sub-closed
-       doc:
-         token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | daddr6, sport,
-         dport, backup, if_idx [, error]
+       doc: >-
          A subflow has been closed. An error (copy of sk_err) could be set if an
          error has been detected for this subflow.
+         Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 |
+         daddr6, sport, dport, backup, if_idx [, error].
      -
        name: sub-priority
        value: 13
-       doc:
-         token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | daddr6, sport,
-         dport, backup, if_idx [, error]
+       doc: >-
          The priority of a subflow has changed. 'error' should not be set.
+         Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 |
+         daddr6, sport, dport, backup, if_idx [, error].
      -
        name: listener-created
        value: 15
-       doc:
-         family, sport, saddr4 | saddr6
+       doc: >-
          A new PM listener is created.
+         Attributes: family, sport, saddr4 | saddr6.
      -
        name: listener-closed
-       doc:
-         family, sport, saddr4 | saddr6
+       doc: >-
          A PM listener is closed.
+         Attributes: family, sport, saddr4 | saddr6.

  attribute-sets:
    -
···
      attributes:
        - addr
    -
-     name: flush-addrs
-     doc: flush addresses
+     name: flush-addrs
+     doc: Flush addresses
      attribute-set: endpoint
      dont-validate: [ strict ]
      flags: [ uns-admin-perm ]
···
        - addr-remote
    -
      name: announce
-     doc: announce new sf
+     doc: Announce new address
      attribute-set: attr
      dont-validate: [ strict ]
      flags: [ uns-admin-perm ]
···
        - token
    -
      name: remove
-     doc: announce removal
+     doc: Announce removal
      attribute-set: attr
      dont-validate: [ strict ]
      flags: [ uns-admin-perm ]
···
        - loc-id
    -
      name: subflow-create
-     doc: todo
+     doc: Create subflow
      attribute-set: attr
      dont-validate: [ strict ]
      flags: [ uns-admin-perm ]
···
        - addr-remote
    -
      name: subflow-destroy
-     doc: todo
+     doc: Destroy subflow
      attribute-set: attr
      dont-validate: [ strict ]
      flags: [ uns-admin-perm ]
+3
Documentation/virt/kvm/api.rst
···
 #define KVM_IRQ_ROUTING_HV_SINT 4
 #define KVM_IRQ_ROUTING_XEN_EVTCHN 5

+On s390, adding a KVM_IRQ_ROUTING_S390_ADAPTER is rejected on ucontrol VMs with
+error -EINVAL.
+
 flags:

 - KVM_MSI_VALID_DEVID: used along with KVM_IRQ_ROUTING_MSI routing entry
+4
Documentation/virt/kvm/devices/s390_flic.rst
···
   Enables async page faults for the guest. So in case of a major page fault
   the host is allowed to handle this async and continues the guest.

+  -EINVAL is returned when called on the FLIC of a ucontrol VM.
+
 KVM_DEV_FLIC_APF_DISABLE_WAIT
   Disables async page faults for the guest and waits until already pending
   async page faults are done. This is necessary to trigger a completion interrupt
   for every init interrupt before migrating the interrupt list.
+
+  -EINVAL is returned when called on the FLIC of a ucontrol VM.

 KVM_DEV_FLIC_ADAPTER_REGISTER
   Register an I/O adapter interrupt source. Takes a kvm_s390_io_adapter
+15 -22
MAINTAINERS
···
 M: Shay Agroskin <shayagr@amazon.com>
 M: Arthur Kiyanovski <akiyano@amazon.com>
 R: David Arinzon <darinzon@amazon.com>
-R: Noam Dagan <ndagan@amazon.com>
 R: Saeed Bishara <saeedb@amazon.com>
 L: netdev@vger.kernel.org
 S: Supported
···
 ARM AND ARM64 SoC SUB-ARCHITECTURES (COMMON PARTS)
 M: Arnd Bergmann <arnd@arndb.de>
-M: Olof Johansson <olof@lixom.net>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L: soc@lists.linux.dev
 S: Maintained
···
 N: atmel

 ARM/Microchip Sparx5 SoC support
-M: Lars Povlsen <lars.povlsen@microchip.com>
 M: Steen Hegelund <Steen.Hegelund@microchip.com>
 M: Daniel Machon <daniel.machon@microchip.com>
 M: UNGLinuxDriver@microchip.com
···
 ATHEROS ATH GENERIC UTILITIES
 M: Kalle Valo <kvalo@kernel.org>
+M: Jeff Johnson <jjohnson@kernel.org>
 L: linux-wireless@vger.kernel.org
 S: Supported
 F: drivers/net/wireless/ath/*
···
 BONDING DRIVER
 M: Jay Vosburgh <jv@jvosburgh.net>
-M: Andy Gospodarek <andy@greyhouse.net>
 L: netdev@vger.kernel.org
 S: Maintained
 F: Documentation/networking/bonding.rst
···
 F: drivers/net/ethernet/netronome/nfp/bpf/

 BPF JIT for POWERPC (32-BIT AND 64-BIT)
-M: Michael Ellerman <mpe@ellerman.id.au>
 M: Hari Bathini <hbathini@linux.ibm.com>
 M: Christophe Leroy <christophe.leroy@csgroup.eu>
 R: Naveen N Rao <naveen@kernel.org>
···
 DRM DRIVER FOR NVIDIA GEFORCE/QUADRO GPUS
 M: Karol Herbst <kherbst@redhat.com>
 M: Lyude Paul <lyude@redhat.com>
-M: Danilo Krummrich <dakr@redhat.com>
+M: Danilo Krummrich <dakr@kernel.org>
 L: dri-devel@lists.freedesktop.org
 L: nouveau@lists.freedesktop.org
 S: Supported
···
 EROFS FILE SYSTEM
 M: Gao Xiang <xiang@kernel.org>
 M: Chao Yu <chao@kernel.org>
-R: Yue Hu <huyue2@coolpad.com>
+R: Yue Hu <zbestahu@gmail.com>
 R: Jeffle Xu <jefflexu@linux.alibaba.com>
 R: Sandeep Dhavale <dhavale@google.com>
 L: linux-erofs@lists.ozlabs.org
···
 FIRMWARE LOADER (request_firmware)
 M: Luis Chamberlain <mcgrof@kernel.org>
 M: Russ Weight <russ.weight@linux.dev>
-M: Danilo Krummrich <dakr@redhat.com>
+M: Danilo Krummrich <dakr@kernel.org>
 L: linux-kernel@vger.kernel.org
 S: Maintained
 F: Documentation/firmware_class/
···
 F: arch/mips/kvm/

 KERNEL VIRTUAL MACHINE FOR POWERPC (KVM/powerpc)
-M: Michael Ellerman <mpe@ellerman.id.au>
+M: Madhavan Srinivasan <maddy@linux.ibm.com>
 R: Nicholas Piggin <npiggin@gmail.com>
 L: linuxppc-dev@lists.ozlabs.org
 L: kvm@vger.kernel.org
···
 X: drivers/macintosh/via-macii.c

 LINUX FOR POWERPC (32-BIT AND 64-BIT)
+M: Madhavan Srinivasan <maddy@linux.ibm.com>
 M: Michael Ellerman <mpe@ellerman.id.au>
 R: Nicholas Piggin <npiggin@gmail.com>
 R: Christophe Leroy <christophe.leroy@csgroup.eu>
 R: Naveen N Rao <naveen@kernel.org>
-M: Madhavan Srinivasan <maddy@linux.ibm.com>
 L: linuxppc-dev@lists.ozlabs.org
 S: Supported
 W: https://github.com/linuxppc/wiki/wiki
···
 MEDIATEK ETHERNET DRIVER
 M: Felix Fietkau <nbd@nbd.name>
 M: Sean Wang <sean.wang@mediatek.com>
-M: Mark Lee <Mark-MC.Lee@mediatek.com>
 M: Lorenzo Bianconi <lorenzo@kernel.org>
 L: netdev@vger.kernel.org
 S: Maintained
···
 F: include/soc/mediatek/smi.h

 MEDIATEK SWITCH DRIVER
-M: Arınç ÜNAL <arinc.unal@arinc9.com>
+M: Chester A. Unal <chester.a.unal@arinc9.com>
 M: Daniel Golle <daniel@makrotopia.org>
 M: DENG Qingfang <dqfext@gmail.com>
 M: Sean Wang <sean.wang@mediatek.com>
···
 F: drivers/pinctrl/mediatek/

 PIN CONTROLLER - MEDIATEK MIPS
-M: Arınç ÜNAL <arinc.unal@arinc9.com>
+M: Chester A. Unal <chester.a.unal@arinc9.com>
 M: Sergio Paracuellos <sergio.paracuellos@gmail.com>
 L: linux-mediatek@lists.infradead.org (moderated for non-subscribers)
 L: linux-mips@vger.kernel.org
···
 F: arch/mips/ralink

 RALINK MT7621 MIPS ARCHITECTURE
-M: Arınç ÜNAL <arinc.unal@arinc9.com>
+M: Chester A. Unal <chester.a.unal@arinc9.com>
 M: Sergio Paracuellos <sergio.paracuellos@gmail.com>
 L: linux-mips@vger.kernel.org
 S: Maintained
···
 SCHEDULER - SCHED_EXT
 R: Tejun Heo <tj@kernel.org>
 R: David Vernet <void@manifault.com>
+R: Andrea Righi <arighi@nvidia.com>
+R: Changwoo Min <changwoo@igalia.com>
 L: linux-kernel@vger.kernel.org
 S: Maintained
 W: https://github.com/sched-ext/scx
···
 F: drivers/phy/st/phy-stm32-combophy.c

 STMMAC ETHERNET DRIVER
-M: Alexandre Torgue <alexandre.torgue@foss.st.com>
-M: Jose Abreu <joabreu@synopsys.com>
 L: netdev@vger.kernel.org
-S: Supported
-W: http://www.stlinux.com
+S: Orphan
 F: Documentation/networking/device_drivers/ethernet/stmicro/
 F: drivers/net/ethernet/stmicro/stmmac/
···
 F: drivers/net/ethernet/synopsys/

 SYNOPSYS DESIGNWARE ETHERNET XPCS DRIVER
-M: Jose Abreu <Jose.Abreu@synopsys.com>
 L: netdev@vger.kernel.org
-S: Supported
+S: Orphan
 F: drivers/net/pcs/pcs-xpcs.c
 F: drivers/net/pcs/pcs-xpcs.h
 F: include/linux/pcs/pcs-xpcs.h
···
 TIPC NETWORK LAYER
 M: Jon Maloy <jmaloy@redhat.com>
-M: Ying Xue <ying.xue@windriver.com>
 L: netdev@vger.kernel.org (core kernel code)
 L: tipc-discussion@lists.sourceforge.net (user apps, general discussion)
 S: Maintained
···
 F: drivers/usb/isp1760/*

 USB LAN78XX ETHERNET DRIVER
-M: Woojung Huh <woojung.huh@microchip.com>
+M: Thangaraj Samynathan <Thangaraj.S@microchip.com>
+M: Rengarajan Sundararajan <Rengarajan.S@microchip.com>
 M: UNGLinuxDriver@microchip.com
 L: netdev@vger.kernel.org
 S: Maintained
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 13
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc7
 NAME = Baby Opossum Posse

 # *DOCUMENTATION*
+1
arch/arc/Kconfig
···
 config ARC
 	def_bool y
 	select ARC_TIMERS
+	select ARCH_HAS_CPU_CACHE_ALIASING
 	select ARCH_HAS_CACHE_LINE_SIZE
 	select ARCH_HAS_DEBUG_VM_PGTABLE
 	select ARCH_HAS_DMA_PREP_COHERENT
+8
arch/arc/include/asm/cachetype.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_ARC_CACHETYPE_H
+#define __ASM_ARC_CACHETYPE_H
+
+#define cpu_dcache_is_aliasing()	false
+#define cpu_icache_is_aliasing()	true
+
+#endif
+1 -1
arch/arm/boot/dts/nxp/imx/imxrt1050.dtsi
···
 	reg = <0x402c0000 0x4000>;
 	interrupts = <110>;
 	clocks = <&clks IMXRT1050_CLK_IPG_PDOF>,
-		 <&clks IMXRT1050_CLK_OSC>,
+		 <&clks IMXRT1050_CLK_AHB_PODF>,
		 <&clks IMXRT1050_CLK_USDHC1>;
 	clock-names = "ipg", "ahb", "per";
 	bus-width = <4>;
+1
arch/arm/configs/imx_v6_v7_defconfig
···
 CONFIG_SND_SOC_FSL_ASOC_CARD=y
 CONFIG_SND_SOC_AC97_CODEC=y
 CONFIG_SND_SOC_CS42XX8_I2C=y
+CONFIG_SND_SOC_SPDIF=y
 CONFIG_SND_SOC_TLV320AIC3X_I2C=y
 CONFIG_SND_SOC_WM8960=y
 CONFIG_SND_SOC_WM8962=y
+1
arch/arm/mach-imx/Kconfig
··· 6 6 select CLKSRC_IMX_GPT 7 7 select GENERIC_IRQ_CHIP 8 8 select GPIOLIB 9 + select PINCTRL 9 10 select PM_OPP if PM 10 11 select SOC_BUS 11 12 select SRAM
+1 -1
arch/arm64/boot/dts/arm/fvp-base-revc.dts
··· 233 233 #interrupt-cells = <0x1>; 234 234 compatible = "pci-host-ecam-generic"; 235 235 device_type = "pci"; 236 - bus-range = <0x0 0x1>; 236 + bus-range = <0x0 0xff>; 237 237 reg = <0x0 0x40000000 0x0 0x10000000>; 238 238 ranges = <0x2000000 0x0 0x50000000 0x0 0x50000000 0x0 0x10000000>; 239 239 interrupt-map = <0 0 0 1 &gic 0 0 GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>,
+4 -4
arch/arm64/boot/dts/broadcom/bcm2712.dtsi
··· 67 67 l2_cache_l0: l2-cache-l0 { 68 68 compatible = "cache"; 69 69 cache-size = <0x80000>; 70 - cache-line-size = <128>; 70 + cache-line-size = <64>; 71 71 cache-sets = <1024>; //512KiB(size)/64(line-size)=8192ways/8-way set 72 72 cache-level = <2>; 73 73 cache-unified; ··· 91 91 l2_cache_l1: l2-cache-l1 { 92 92 compatible = "cache"; 93 93 cache-size = <0x80000>; 94 - cache-line-size = <128>; 94 + cache-line-size = <64>; 95 95 cache-sets = <1024>; //512KiB(size)/64(line-size)=8192ways/8-way set 96 96 cache-level = <2>; 97 97 cache-unified; ··· 115 115 l2_cache_l2: l2-cache-l2 { 116 116 compatible = "cache"; 117 117 cache-size = <0x80000>; 118 - cache-line-size = <128>; 118 + cache-line-size = <64>; 119 119 cache-sets = <1024>; //512KiB(size)/64(line-size)=8192ways/8-way set 120 120 cache-level = <2>; 121 121 cache-unified; ··· 139 139 l2_cache_l3: l2-cache-l3 { 140 140 compatible = "cache"; 141 141 cache-size = <0x80000>; 142 - cache-line-size = <128>; 142 + cache-line-size = <64>; 143 143 cache-sets = <1024>; //512KiB(size)/64(line-size)=8192ways/8-way set 144 144 cache-level = <2>; 145 145 cache-unified;
+1 -1
arch/arm64/boot/dts/freescale/imx8-ss-audio.dtsi
··· 165 165 }; 166 166 167 167 esai0: esai@59010000 { 168 - compatible = "fsl,imx8qm-esai"; 168 + compatible = "fsl,imx8qm-esai", "fsl,imx6ull-esai"; 169 169 reg = <0x59010000 0x10000>; 170 170 interrupts = <GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>; 171 171 clocks = <&esai0_lpcg IMX_LPCG_CLK_4>,
+1 -1
arch/arm64/boot/dts/freescale/imx8qm-ss-audio.dtsi
··· 134 134 }; 135 135 136 136 esai1: esai@59810000 { 137 - compatible = "fsl,imx8qm-esai"; 137 + compatible = "fsl,imx8qm-esai", "fsl,imx6ull-esai"; 138 138 reg = <0x59810000 0x10000>; 139 139 interrupts = <GIC_SPI 411 IRQ_TYPE_LEVEL_HIGH>; 140 140 clocks = <&esai1_lpcg IMX_LPCG_CLK_0>,
+1 -1
arch/arm64/boot/dts/freescale/imx95.dtsi
··· 1673 1673 1674 1674 netcmix_blk_ctrl: syscon@4c810000 { 1675 1675 compatible = "nxp,imx95-netcmix-blk-ctrl", "syscon"; 1676 - reg = <0x0 0x4c810000 0x0 0x10000>; 1676 + reg = <0x0 0x4c810000 0x0 0x8>; 1677 1677 #clock-cells = <1>; 1678 1678 clocks = <&scmi_clk IMX95_CLK_BUSNETCMIX>; 1679 1679 assigned-clocks = <&scmi_clk IMX95_CLK_BUSNETCMIX>;
+3 -2
arch/arm64/boot/dts/qcom/sa8775p.dtsi
··· 2440 2440 2441 2441 qcom,cmb-element-bits = <32>; 2442 2442 qcom,cmb-msrs-num = <32>; 2443 + status = "disabled"; 2443 2444 2444 2445 out-ports { 2445 2446 port { ··· 6093 6092 <0x0 0x40000000 0x0 0xf20>, 6094 6093 <0x0 0x40000f20 0x0 0xa8>, 6095 6094 <0x0 0x40001000 0x0 0x4000>, 6096 - <0x0 0x40200000 0x0 0x100000>, 6095 + <0x0 0x40200000 0x0 0x1fe00000>, 6097 6096 <0x0 0x01c03000 0x0 0x1000>, 6098 6097 <0x0 0x40005000 0x0 0x2000>; 6099 6098 reg-names = "parf", "dbi", "elbi", "atu", "addr_space", ··· 6251 6250 <0x0 0x60000000 0x0 0xf20>, 6252 6251 <0x0 0x60000f20 0x0 0xa8>, 6253 6252 <0x0 0x60001000 0x0 0x4000>, 6254 - <0x0 0x60200000 0x0 0x100000>, 6253 + <0x0 0x60200000 0x0 0x1fe00000>, 6255 6254 <0x0 0x01c13000 0x0 0x1000>, 6256 6255 <0x0 0x60005000 0x0 0x2000>; 6257 6256 reg-names = "parf", "dbi", "elbi", "atu", "addr_space",
+8
arch/arm64/boot/dts/qcom/x1e78100-lenovo-thinkpad-t14s.dts
··· 773 773 status = "okay"; 774 774 }; 775 775 776 + &usb_1_ss0_dwc3 { 777 + dr_mode = "host"; 778 + }; 779 + 776 780 &usb_1_ss0_dwc3_hs { 777 781 remote-endpoint = <&pmic_glink_ss0_hs_in>; 778 782 }; ··· 803 799 804 800 &usb_1_ss1 { 805 801 status = "okay"; 802 + }; 803 + 804 + &usb_1_ss1_dwc3 { 805 + dr_mode = "host"; 806 806 }; 807 807 808 808 &usb_1_ss1_dwc3_hs {
+12
arch/arm64/boot/dts/qcom/x1e80100-crd.dts
··· 1197 1197 status = "okay"; 1198 1198 }; 1199 1199 1200 + &usb_1_ss0_dwc3 { 1201 + dr_mode = "host"; 1202 + }; 1203 + 1200 1204 &usb_1_ss0_dwc3_hs { 1201 1205 remote-endpoint = <&pmic_glink_ss0_hs_in>; 1202 1206 }; ··· 1229 1225 status = "okay"; 1230 1226 }; 1231 1227 1228 + &usb_1_ss1_dwc3 { 1229 + dr_mode = "host"; 1230 + }; 1231 + 1232 1232 &usb_1_ss1_dwc3_hs { 1233 1233 remote-endpoint = <&pmic_glink_ss1_hs_in>; 1234 1234 }; ··· 1259 1251 1260 1252 &usb_1_ss2 { 1261 1253 status = "okay"; 1254 + }; 1255 + 1256 + &usb_1_ss2_dwc3 { 1257 + dr_mode = "host"; 1262 1258 }; 1263 1259 1264 1260 &usb_1_ss2_dwc3_hs {
+1 -7
arch/arm64/boot/dts/qcom/x1e80100.dtsi
··· 2924 2924 #address-cells = <3>; 2925 2925 #size-cells = <2>; 2926 2926 ranges = <0x01000000 0x0 0x00000000 0x0 0x70200000 0x0 0x100000>, 2927 - <0x02000000 0x0 0x70300000 0x0 0x70300000 0x0 0x1d00000>; 2927 + <0x02000000 0x0 0x70300000 0x0 0x70300000 0x0 0x3d00000>; 2928 2928 bus-range = <0x00 0xff>; 2929 2929 2930 2930 dma-coherent; ··· 4066 4066 4067 4067 dma-coherent; 4068 4068 4069 - usb-role-switch; 4070 - 4071 4069 ports { 4072 4070 #address-cells = <1>; 4073 4071 #size-cells = <0>; ··· 4319 4321 4320 4322 dma-coherent; 4321 4323 4322 - usb-role-switch; 4323 - 4324 4324 ports { 4325 4325 #address-cells = <1>; 4326 4326 #size-cells = <0>; ··· 4416 4420 snps,usb3_lpm_capable; 4417 4421 4418 4422 dma-coherent; 4419 - 4420 - usb-role-switch; 4421 4423 4422 4424 ports { 4423 4425 #address-cells = <1>;
+1
arch/arm64/boot/dts/rockchip/rk3328.dtsi
··· 333 333 334 334 power-domain@RK3328_PD_HEVC { 335 335 reg = <RK3328_PD_HEVC>; 336 + clocks = <&cru SCLK_VENC_CORE>; 336 337 #power-domain-cells = <0>; 337 338 }; 338 339 power-domain@RK3328_PD_VIDEO {
+1
arch/arm64/boot/dts/rockchip/rk3568.dtsi
··· 350 350 assigned-clocks = <&pmucru CLK_PCIEPHY0_REF>; 351 351 assigned-clock-rates = <100000000>; 352 352 resets = <&cru SRST_PIPEPHY0>; 353 + reset-names = "phy"; 353 354 rockchip,pipe-grf = <&pipegrf>; 354 355 rockchip,pipe-phy-grf = <&pipe_phy_grf0>; 355 356 #phy-cells = <1>;
+2
arch/arm64/boot/dts/rockchip/rk356x-base.dtsi
··· 1681 1681 assigned-clocks = <&pmucru CLK_PCIEPHY1_REF>; 1682 1682 assigned-clock-rates = <100000000>; 1683 1683 resets = <&cru SRST_PIPEPHY1>; 1684 + reset-names = "phy"; 1684 1685 rockchip,pipe-grf = <&pipegrf>; 1685 1686 rockchip,pipe-phy-grf = <&pipe_phy_grf1>; 1686 1687 #phy-cells = <1>; ··· 1698 1697 assigned-clocks = <&pmucru CLK_PCIEPHY2_REF>; 1699 1698 assigned-clock-rates = <100000000>; 1700 1699 resets = <&cru SRST_PIPEPHY2>; 1700 + reset-names = "phy"; 1701 1701 rockchip,pipe-grf = <&pipegrf>; 1702 1702 rockchip,pipe-phy-grf = <&pipe_phy_grf2>; 1703 1703 #phy-cells = <1>;
+1 -1
arch/arm64/boot/dts/rockchip/rk3588-rock-5b.dts
··· 72 72 73 73 rfkill { 74 74 compatible = "rfkill-gpio"; 75 - label = "rfkill-pcie-wlan"; 75 + label = "rfkill-m2-wlan"; 76 76 radio-type = "wlan"; 77 77 shutdown-gpios = <&gpio4 RK_PA2 GPIO_ACTIVE_HIGH>; 78 78 };
+1
arch/arm64/boot/dts/rockchip/rk3588s-nanopi-r6.dtsi
··· 434 434 &sdmmc { 435 435 bus-width = <4>; 436 436 cap-sd-highspeed; 437 + cd-gpios = <&gpio0 RK_PA4 GPIO_ACTIVE_LOW>; 437 438 disable-wp; 438 439 max-frequency = <150000000>; 439 440 no-mmc;
+15 -20
arch/arm64/kernel/signal.c
··· 36 36 #include <asm/traps.h> 37 37 #include <asm/vdso.h> 38 38 39 - #ifdef CONFIG_ARM64_GCS 40 39 #define GCS_SIGNAL_CAP(addr) (((unsigned long)addr) & GCS_CAP_ADDR_MASK) 41 - 42 - static bool gcs_signal_cap_valid(u64 addr, u64 val) 43 - { 44 - return val == GCS_SIGNAL_CAP(addr); 45 - } 46 - #endif 47 40 48 41 /* 49 42 * Do a signal return; undo the signal stack. These are aligned to 128-bit. ··· 1055 1062 #ifdef CONFIG_ARM64_GCS 1056 1063 static int gcs_restore_signal(void) 1057 1064 { 1058 - unsigned long __user *gcspr_el0; 1059 - u64 cap; 1065 + u64 gcspr_el0, cap; 1060 1066 int ret; 1061 1067 1062 1068 if (!system_supports_gcs()) ··· 1064 1072 if (!(current->thread.gcs_el0_mode & PR_SHADOW_STACK_ENABLE)) 1065 1073 return 0; 1066 1074 1067 - gcspr_el0 = (unsigned long __user *)read_sysreg_s(SYS_GCSPR_EL0); 1075 + gcspr_el0 = read_sysreg_s(SYS_GCSPR_EL0); 1068 1076 1069 1077 /* 1070 1078 * Ensure that any changes to the GCS done via GCS operations ··· 1079 1087 * then faults will be generated on GCS operations - the main 1080 1088 * concern is to protect GCS pages. 1081 1089 */ 1082 - ret = copy_from_user(&cap, gcspr_el0, sizeof(cap)); 1090 + ret = copy_from_user(&cap, (unsigned long __user *)gcspr_el0, 1091 + sizeof(cap)); 1083 1092 if (ret) 1084 1093 return -EFAULT; 1085 1094 1086 1095 /* 1087 1096 * Check that the cap is the actual GCS before replacing it. 
1088 1097 */ 1089 - if (!gcs_signal_cap_valid((u64)gcspr_el0, cap)) 1098 + if (cap != GCS_SIGNAL_CAP(gcspr_el0)) 1090 1099 return -EINVAL; 1091 1100 1092 1101 /* Invalidate the token to prevent reuse */ 1093 - put_user_gcs(0, (__user void*)gcspr_el0, &ret); 1102 + put_user_gcs(0, (unsigned long __user *)gcspr_el0, &ret); 1094 1103 if (ret != 0) 1095 1104 return -EFAULT; 1096 1105 1097 - write_sysreg_s(gcspr_el0 + 1, SYS_GCSPR_EL0); 1106 + write_sysreg_s(gcspr_el0 + 8, SYS_GCSPR_EL0); 1098 1107 1099 1108 return 0; 1100 1109 } ··· 1414 1421 1415 1422 static int gcs_signal_entry(__sigrestore_t sigtramp, struct ksignal *ksig) 1416 1423 { 1417 - unsigned long __user *gcspr_el0; 1424 + u64 gcspr_el0; 1418 1425 int ret = 0; 1419 1426 1420 1427 if (!system_supports_gcs()) ··· 1427 1434 * We are entering a signal handler, current register state is 1428 1435 * active. 1429 1436 */ 1430 - gcspr_el0 = (unsigned long __user *)read_sysreg_s(SYS_GCSPR_EL0); 1437 + gcspr_el0 = read_sysreg_s(SYS_GCSPR_EL0); 1431 1438 1432 1439 /* 1433 1440 * Push a cap and the GCS entry for the trampoline onto the GCS. 1434 1441 */ 1435 - put_user_gcs((unsigned long)sigtramp, gcspr_el0 - 2, &ret); 1436 - put_user_gcs(GCS_SIGNAL_CAP(gcspr_el0 - 1), gcspr_el0 - 1, &ret); 1442 + put_user_gcs((unsigned long)sigtramp, 1443 + (unsigned long __user *)(gcspr_el0 - 16), &ret); 1444 + put_user_gcs(GCS_SIGNAL_CAP(gcspr_el0 - 8), 1445 + (unsigned long __user *)(gcspr_el0 - 8), &ret); 1437 1446 if (ret != 0) 1438 1447 return ret; 1439 1448 1440 - gcspr_el0 -= 2; 1441 - write_sysreg_s((unsigned long)gcspr_el0, SYS_GCSPR_EL0); 1449 + gcspr_el0 -= 16; 1450 + write_sysreg_s(gcspr_el0, SYS_GCSPR_EL0); 1442 1451 1443 1452 return 0; 1444 1453 }
-3
arch/arm64/kvm/hyp/nvhe/mem_protect.c
··· 783 783 if (tx->initiator.id == PKVM_ID_HOST && hyp_page_count((void *)addr)) 784 784 return -EBUSY; 785 785 786 - if (__hyp_ack_skip_pgtable_check(tx)) 787 - return 0; 788 - 789 786 return __hyp_check_page_state_range(addr, size, 790 787 PKVM_PAGE_SHARED_BORROWED); 791 788 }
+36 -55
arch/arm64/kvm/pmu-emul.c
··· 24 24 25 25 static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc); 26 26 static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc); 27 + static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc); 27 28 28 29 static struct kvm_vcpu *kvm_pmc_to_vcpu(const struct kvm_pmc *pmc) 29 30 { ··· 328 327 return GENMASK(val - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX); 329 328 } 330 329 331 - /** 332 - * kvm_pmu_enable_counter_mask - enable selected PMU counters 333 - * @vcpu: The vcpu pointer 334 - * @val: the value guest writes to PMCNTENSET register 335 - * 336 - * Call perf_event_enable to start counting the perf event 337 - */ 338 - void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val) 330 + static void kvm_pmc_enable_perf_event(struct kvm_pmc *pmc) 339 331 { 340 - int i; 341 - if (!kvm_vcpu_has_pmu(vcpu)) 332 + if (!pmc->perf_event) { 333 + kvm_pmu_create_perf_event(pmc); 342 334 return; 343 - 344 - if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) || !val) 345 - return; 346 - 347 - for (i = 0; i < KVM_ARMV8_PMU_MAX_COUNTERS; i++) { 348 - struct kvm_pmc *pmc; 349 - 350 - if (!(val & BIT(i))) 351 - continue; 352 - 353 - pmc = kvm_vcpu_idx_to_pmc(vcpu, i); 354 - 355 - if (!pmc->perf_event) { 356 - kvm_pmu_create_perf_event(pmc); 357 - } else { 358 - perf_event_enable(pmc->perf_event); 359 - if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE) 360 - kvm_debug("fail to enable perf event\n"); 361 - } 362 335 } 336 + 337 + perf_event_enable(pmc->perf_event); 338 + if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE) 339 + kvm_debug("fail to enable perf event\n"); 363 340 } 364 341 365 - /** 366 - * kvm_pmu_disable_counter_mask - disable selected PMU counters 367 - * @vcpu: The vcpu pointer 368 - * @val: the value guest writes to PMCNTENCLR register 369 - * 370 - * Call perf_event_disable to stop counting the perf event 371 - */ 372 - void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val) 342 + static void kvm_pmc_disable_perf_event(struct kvm_pmc 
*pmc) 343 + { 344 + if (pmc->perf_event) 345 + perf_event_disable(pmc->perf_event); 346 + } 347 + 348 + void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val) 373 349 { 374 350 int i; 375 351 ··· 354 376 return; 355 377 356 378 for (i = 0; i < KVM_ARMV8_PMU_MAX_COUNTERS; i++) { 357 - struct kvm_pmc *pmc; 379 + struct kvm_pmc *pmc = kvm_vcpu_idx_to_pmc(vcpu, i); 358 380 359 381 if (!(val & BIT(i))) 360 382 continue; 361 383 362 - pmc = kvm_vcpu_idx_to_pmc(vcpu, i); 363 - 364 - if (pmc->perf_event) 365 - perf_event_disable(pmc->perf_event); 384 + if (kvm_pmu_counter_is_enabled(pmc)) 385 + kvm_pmc_enable_perf_event(pmc); 386 + else 387 + kvm_pmc_disable_perf_event(pmc); 366 388 } 389 + 390 + kvm_vcpu_pmu_restore_guest(vcpu); 367 391 } 368 392 369 393 /* ··· 606 626 if (!kvm_has_feat(vcpu->kvm, ID_AA64DFR0_EL1, PMUVer, V3P5)) 607 627 val &= ~ARMV8_PMU_PMCR_LP; 608 628 629 + /* Request a reload of the PMU to enable/disable affected counters */ 630 + if ((__vcpu_sys_reg(vcpu, PMCR_EL0) ^ val) & ARMV8_PMU_PMCR_E) 631 + kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu); 632 + 609 633 /* The reset bits don't indicate any state, and shouldn't be saved. */ 610 634 __vcpu_sys_reg(vcpu, PMCR_EL0) = val & ~(ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_P); 611 - 612 - if (val & ARMV8_PMU_PMCR_E) { 613 - kvm_pmu_enable_counter_mask(vcpu, 614 - __vcpu_sys_reg(vcpu, PMCNTENSET_EL0)); 615 - } else { 616 - kvm_pmu_disable_counter_mask(vcpu, 617 - __vcpu_sys_reg(vcpu, PMCNTENSET_EL0)); 618 - } 619 635 620 636 if (val & ARMV8_PMU_PMCR_C) 621 637 kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0); 622 638 623 639 if (val & ARMV8_PMU_PMCR_P) { 624 - unsigned long mask = kvm_pmu_accessible_counter_mask(vcpu); 625 - mask &= ~BIT(ARMV8_PMU_CYCLE_IDX); 640 + /* 641 + * Unlike other PMU sysregs, the controls in PMCR_EL0 always apply 642 + * to the 'guest' range of counters and never the 'hyp' range. 
643 + */ 644 + unsigned long mask = kvm_pmu_implemented_counter_mask(vcpu) & 645 + ~kvm_pmu_hyp_counter_mask(vcpu) & 646 + ~BIT(ARMV8_PMU_CYCLE_IDX); 647 + 626 648 for_each_set_bit(i, &mask, 32) 627 649 kvm_pmu_set_pmc_value(kvm_vcpu_idx_to_pmc(vcpu, i), 0, true); 628 650 } 629 - kvm_vcpu_pmu_restore_guest(vcpu); 630 651 } 631 652 632 653 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc) ··· 891 910 { 892 911 u64 mask = kvm_pmu_implemented_counter_mask(vcpu); 893 912 894 - kvm_pmu_handle_pmcr(vcpu, kvm_vcpu_read_pmcr(vcpu)); 895 - 896 913 __vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= mask; 897 914 __vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= mask; 898 915 __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= mask; 916 + 917 + kvm_pmu_reprogram_counter_mask(vcpu, mask); 899 918 } 900 919 901 920 int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
+25 -7
arch/arm64/kvm/sys_regs.c
··· 1208 1208 mask = kvm_pmu_accessible_counter_mask(vcpu); 1209 1209 if (p->is_write) { 1210 1210 val = p->regval & mask; 1211 - if (r->Op2 & 0x1) { 1211 + if (r->Op2 & 0x1) 1212 1212 /* accessing PMCNTENSET_EL0 */ 1213 1213 __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val; 1214 - kvm_pmu_enable_counter_mask(vcpu, val); 1215 - kvm_vcpu_pmu_restore_guest(vcpu); 1216 - } else { 1214 + else 1217 1215 /* accessing PMCNTENCLR_EL0 */ 1218 1216 __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val; 1219 - kvm_pmu_disable_counter_mask(vcpu, val); 1220 - } 1217 + 1218 + kvm_pmu_reprogram_counter_mask(vcpu, val); 1221 1219 } else { 1222 1220 p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0); 1223 1221 } ··· 2448 2450 return __el2_visibility(vcpu, rd, s1pie_visibility); 2449 2451 } 2450 2452 2453 + static bool access_mdcr(struct kvm_vcpu *vcpu, 2454 + struct sys_reg_params *p, 2455 + const struct sys_reg_desc *r) 2456 + { 2457 + u64 old = __vcpu_sys_reg(vcpu, MDCR_EL2); 2458 + 2459 + if (!access_rw(vcpu, p, r)) 2460 + return false; 2461 + 2462 + /* 2463 + * Request a reload of the PMU to enable/disable the counters affected 2464 + * by HPME. 2465 + */ 2466 + if ((old ^ __vcpu_sys_reg(vcpu, MDCR_EL2)) & MDCR_EL2_HPME) 2467 + kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu); 2468 + 2469 + return true; 2470 + } 2471 + 2472 + 2451 2473 /* 2452 2474 * Architected system registers. 2453 2475 * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2 ··· 3001 2983 EL2_REG(SCTLR_EL2, access_rw, reset_val, SCTLR_EL2_RES1), 3002 2984 EL2_REG(ACTLR_EL2, access_rw, reset_val, 0), 3003 2985 EL2_REG_VNCR(HCR_EL2, reset_hcr, 0), 3004 - EL2_REG(MDCR_EL2, access_rw, reset_val, 0), 2986 + EL2_REG(MDCR_EL2, access_mdcr, reset_val, 0), 3005 2987 EL2_REG(CPTR_EL2, access_rw, reset_val, CPTR_NVHE_EL2_RES1), 3006 2988 EL2_REG_VNCR(HSTR_EL2, reset_val, 0), 3007 2989 EL2_REG_VNCR(HFGRTR_EL2, reset_val, 0),
+6
arch/hexagon/Makefile
··· 32 32 TIR_NAME := r19 33 33 KBUILD_CFLAGS += -ffixed-$(TIR_NAME) -DTHREADINFO_REG=$(TIR_NAME) -D__linux__ 34 34 KBUILD_AFLAGS += -DTHREADINFO_REG=$(TIR_NAME) 35 + 36 + # Disable HexagonConstExtenders pass for LLVM versions prior to 19.1.0 37 + # https://github.com/llvm/llvm-project/issues/99714 38 + ifneq ($(call clang-min-version, 190100),y) 39 + KBUILD_CFLAGS += -mllvm -hexagon-cext=false 40 + endif
+1
arch/loongarch/include/asm/kvm_host.h
··· 162 162 #define LOONGARCH_PV_FEAT_UPDATED BIT_ULL(63) 163 163 #define LOONGARCH_PV_FEAT_MASK (BIT(KVM_FEATURE_IPI) | \ 164 164 BIT(KVM_FEATURE_STEAL_TIME) | \ 165 + BIT(KVM_FEATURE_USER_HCALL) | \ 165 166 BIT(KVM_FEATURE_VIRT_EXTIOI)) 166 167 167 168 struct kvm_vcpu_arch {
+3
arch/loongarch/include/asm/kvm_para.h
··· 13 13 14 14 #define KVM_HCALL_CODE_SERVICE 0 15 15 #define KVM_HCALL_CODE_SWDBG 1 16 + #define KVM_HCALL_CODE_USER_SERVICE 2 16 17 17 18 #define KVM_HCALL_SERVICE HYPERCALL_ENCODE(HYPERVISOR_KVM, KVM_HCALL_CODE_SERVICE) 18 19 #define KVM_HCALL_FUNC_IPI 1 19 20 #define KVM_HCALL_FUNC_NOTIFY 2 20 21 21 22 #define KVM_HCALL_SWDBG HYPERCALL_ENCODE(HYPERVISOR_KVM, KVM_HCALL_CODE_SWDBG) 23 + 24 + #define KVM_HCALL_USER_SERVICE HYPERCALL_ENCODE(HYPERVISOR_KVM, KVM_HCALL_CODE_USER_SERVICE) 22 25 23 26 /* 24 27 * LoongArch hypercall return code
+1
arch/loongarch/include/asm/kvm_vcpu.h
··· 43 43 int kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst); 44 44 int kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run); 45 45 int kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run); 46 + int kvm_complete_user_service(struct kvm_vcpu *vcpu, struct kvm_run *run); 46 47 int kvm_emu_idle(struct kvm_vcpu *vcpu); 47 48 int kvm_pending_timer(struct kvm_vcpu *vcpu); 48 49 int kvm_handle_fault(struct kvm_vcpu *vcpu, int fault);
+1
arch/loongarch/include/uapi/asm/kvm_para.h
··· 17 17 #define KVM_FEATURE_STEAL_TIME 2 18 18 /* BIT 24 - 31 are features configurable by user space vmm */ 19 19 #define KVM_FEATURE_VIRT_EXTIOI 24 20 + #define KVM_FEATURE_USER_HCALL 25 20 21 21 22 #endif /* _UAPI_ASM_KVM_PARA_H */
+30
arch/loongarch/kvm/exit.c
··· 709 709 return kvm_handle_rdwr_fault(vcpu, true); 710 710 } 711 711 712 + int kvm_complete_user_service(struct kvm_vcpu *vcpu, struct kvm_run *run) 713 + { 714 + update_pc(&vcpu->arch); 715 + kvm_write_reg(vcpu, LOONGARCH_GPR_A0, run->hypercall.ret); 716 + 717 + return 0; 718 + } 719 + 712 720 /** 713 721 * kvm_handle_fpu_disabled() - Guest used fpu however it is disabled at host 714 722 * @vcpu: Virtual CPU context. ··· 880 872 case KVM_HCALL_SERVICE: 881 873 vcpu->stat.hypercall_exits++; 882 874 kvm_handle_service(vcpu); 875 + break; 876 + case KVM_HCALL_USER_SERVICE: 877 + if (!kvm_guest_has_pv_feature(vcpu, KVM_FEATURE_USER_HCALL)) { 878 + kvm_write_reg(vcpu, LOONGARCH_GPR_A0, KVM_HCALL_INVALID_CODE); 879 + break; 880 + } 881 + 882 + vcpu->stat.hypercall_exits++; 883 + vcpu->run->exit_reason = KVM_EXIT_HYPERCALL; 884 + vcpu->run->hypercall.nr = KVM_HCALL_USER_SERVICE; 885 + vcpu->run->hypercall.args[0] = kvm_read_reg(vcpu, LOONGARCH_GPR_A0); 886 + vcpu->run->hypercall.args[1] = kvm_read_reg(vcpu, LOONGARCH_GPR_A1); 887 + vcpu->run->hypercall.args[2] = kvm_read_reg(vcpu, LOONGARCH_GPR_A2); 888 + vcpu->run->hypercall.args[3] = kvm_read_reg(vcpu, LOONGARCH_GPR_A3); 889 + vcpu->run->hypercall.args[4] = kvm_read_reg(vcpu, LOONGARCH_GPR_A4); 890 + vcpu->run->hypercall.args[5] = kvm_read_reg(vcpu, LOONGARCH_GPR_A5); 891 + vcpu->run->hypercall.flags = 0; 892 + /* 893 + * Set invalid return value by default, let user-mode VMM modify it. 894 + */ 895 + vcpu->run->hypercall.ret = KVM_HCALL_INVALID_CODE; 896 + ret = RESUME_HOST; 883 897 break; 884 898 case KVM_HCALL_SWDBG: 885 899 /* KVM_HCALL_SWDBG only in effective when SW_BP is enabled */
+18
arch/loongarch/kvm/main.c
··· 245 245 trace_kvm_vpid_change(vcpu, vcpu->arch.vpid); 246 246 vcpu->cpu = cpu; 247 247 kvm_clear_request(KVM_REQ_TLB_FLUSH_GPA, vcpu); 248 + 249 + /* 250 + * LLBCTL is a separated guest CSR register from host, a general 251 + * exception ERET instruction clears the host LLBCTL register in 252 + * host mode, and clears the guest LLBCTL register in guest mode. 253 + * ERET in tlb refill exception does not clear LLBCTL register. 254 + * 255 + * When secondary mmu mapping is changed, guest OS does not know 256 + * even if the content is changed after mapping is changed. 257 + * 258 + * Here clear WCLLB of the guest LLBCTL register when mapping is 259 + * changed. Otherwise, if mmu mapping is changed while guest is 260 + * executing LL/SC pair, LL loads with the old address and set 261 + * the LLBCTL flag, SC checks the LLBCTL flag and will store the 262 + * new address successfully since LLBCTL_WCLLB is on, even if 263 + * memory with new address is changed on other VCPUs. 264 + */ 265 + set_gcsr_llbctl(CSR_LLBCTL_WCLLB); 248 266 } 249 267 250 268 /* Restore GSTAT(0x50).vpid */
+6 -1
arch/loongarch/kvm/vcpu.c
··· 1732 1732 vcpu->mmio_needed = 0; 1733 1733 } 1734 1734 1735 - if (run->exit_reason == KVM_EXIT_LOONGARCH_IOCSR) { 1735 + switch (run->exit_reason) { 1736 + case KVM_EXIT_HYPERCALL: 1737 + kvm_complete_user_service(vcpu, run); 1738 + break; 1739 + case KVM_EXIT_LOONGARCH_IOCSR: 1736 1740 if (!run->iocsr_io.is_write) 1737 1741 kvm_complete_iocsr_read(vcpu, run); 1742 + break; 1738 1743 } 1739 1744 1740 1745 if (!vcpu->wants_to_run)
+5 -5
arch/nios2/kernel/cpuinfo.c
··· 143 143 " DIV:\t\t%s\n" 144 144 " BMX:\t\t%s\n" 145 145 " CDX:\t\t%s\n", 146 - cpuinfo.has_mul ? "yes" : "no", 147 - cpuinfo.has_mulx ? "yes" : "no", 148 - cpuinfo.has_div ? "yes" : "no", 149 - cpuinfo.has_bmx ? "yes" : "no", 150 - cpuinfo.has_cdx ? "yes" : "no"); 146 + str_yes_no(cpuinfo.has_mul), 147 + str_yes_no(cpuinfo.has_mulx), 148 + str_yes_no(cpuinfo.has_div), 149 + str_yes_no(cpuinfo.has_bmx), 150 + str_yes_no(cpuinfo.has_cdx)); 151 151 152 152 seq_printf(m, 153 153 "Icache:\t\t%ukB, line length: %u\n",
+1
arch/powerpc/configs/pmac32_defconfig
··· 208 208 CONFIG_FB_ATY_CT=y 209 209 CONFIG_FB_ATY_GX=y 210 210 CONFIG_FB_3DFX=y 211 + CONFIG_BACKLIGHT_CLASS_DEVICE=y 211 212 # CONFIG_VGA_CONSOLE is not set 212 213 CONFIG_FRAMEBUFFER_CONSOLE=y 213 214 CONFIG_LOGO=y
+1
arch/powerpc/configs/ppc6xx_defconfig
··· 716 716 CONFIG_FB_SM501=m 717 717 CONFIG_FB_IBM_GXT4500=y 718 718 CONFIG_LCD_PLATFORM=m 719 + CONFIG_BACKLIGHT_CLASS_DEVICE=y 719 720 CONFIG_FRAMEBUFFER_CONSOLE=y 720 721 CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y 721 722 CONFIG_LOGO=y
+2
arch/powerpc/kvm/e500.h
··· 34 34 #define E500_TLB_BITMAP (1 << 30) 35 35 /* TLB1 entry is mapped by host TLB0 */ 36 36 #define E500_TLB_TLB0 (1 << 29) 37 + /* entry is writable on the host */ 38 + #define E500_TLB_WRITABLE (1 << 28) 37 39 /* bits [6-5] MAS2_X1 and MAS2_X0 and [4-0] bits for WIMGE */ 38 40 #define E500_TLB_MAS2_ATTR (0x7f) 39 41
+83 -116
arch/powerpc/kvm/e500_mmu_host.c
··· 45 45 return host_tlb_params[1].entries - tlbcam_index - 1; 46 46 } 47 47 48 - static inline u32 e500_shadow_mas3_attrib(u32 mas3, int usermode) 48 + static inline u32 e500_shadow_mas3_attrib(u32 mas3, bool writable, int usermode) 49 49 { 50 50 /* Mask off reserved bits. */ 51 51 mas3 &= MAS3_ATTRIB_MASK; 52 + 53 + if (!writable) 54 + mas3 &= ~(MAS3_UW|MAS3_SW); 52 55 53 56 #ifndef CONFIG_KVM_BOOKE_HV 54 57 if (!usermode) { ··· 245 242 return tlbe->mas7_3 & (MAS3_SW|MAS3_UW); 246 243 } 247 244 248 - static inline bool kvmppc_e500_ref_setup(struct tlbe_ref *ref, 245 + static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref, 249 246 struct kvm_book3e_206_tlb_entry *gtlbe, 250 - kvm_pfn_t pfn, unsigned int wimg) 247 + kvm_pfn_t pfn, unsigned int wimg, 248 + bool writable) 251 249 { 252 250 ref->pfn = pfn; 253 251 ref->flags = E500_TLB_VALID; 252 + if (writable) 253 + ref->flags |= E500_TLB_WRITABLE; 254 254 255 255 /* Use guest supplied MAS2_G and MAS2_E */ 256 256 ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg; 257 - 258 - return tlbe_is_writable(gtlbe); 259 257 } 260 258 261 259 static inline void kvmppc_e500_ref_release(struct tlbe_ref *ref) ··· 309 305 { 310 306 kvm_pfn_t pfn = ref->pfn; 311 307 u32 pr = vcpu->arch.shared->msr & MSR_PR; 308 + bool writable = !!(ref->flags & E500_TLB_WRITABLE); 312 309 313 310 BUG_ON(!(ref->flags & E500_TLB_VALID)); 314 311 ··· 317 312 stlbe->mas1 = MAS1_TSIZE(tsize) | get_tlb_sts(gtlbe) | MAS1_VALID; 318 313 stlbe->mas2 = (gvaddr & MAS2_EPN) | (ref->flags & E500_TLB_MAS2_ATTR); 319 314 stlbe->mas7_3 = ((u64)pfn << PAGE_SHIFT) | 320 - e500_shadow_mas3_attrib(gtlbe->mas7_3, pr); 315 + e500_shadow_mas3_attrib(gtlbe->mas7_3, writable, pr); 321 316 } 322 317 323 318 static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500, ··· 326 321 struct tlbe_ref *ref) 327 322 { 328 323 struct kvm_memory_slot *slot; 329 - unsigned long pfn = 0; /* silence GCC warning */ 324 + unsigned int psize; 325 + unsigned 
long pfn; 330 326 struct page *page = NULL; 331 327 unsigned long hva; 332 - int pfnmap = 0; 333 328 int tsize = BOOK3E_PAGESZ_4K; 334 329 int ret = 0; 335 330 unsigned long mmu_seq; 336 331 struct kvm *kvm = vcpu_e500->vcpu.kvm; 337 - unsigned long tsize_pages = 0; 338 332 pte_t *ptep; 339 333 unsigned int wimg = 0; 340 334 pgd_t *pgdir; ··· 355 351 slot = gfn_to_memslot(vcpu_e500->vcpu.kvm, gfn); 356 352 hva = gfn_to_hva_memslot(slot, gfn); 357 353 358 - if (tlbsel == 1) { 359 - struct vm_area_struct *vma; 360 - mmap_read_lock(kvm->mm); 361 - 362 - vma = find_vma(kvm->mm, hva); 363 - if (vma && hva >= vma->vm_start && 364 - (vma->vm_flags & VM_PFNMAP)) { 365 - /* 366 - * This VMA is a physically contiguous region (e.g. 367 - * /dev/mem) that bypasses normal Linux page 368 - * management. Find the overlap between the 369 - * vma and the memslot. 370 - */ 371 - 372 - unsigned long start, end; 373 - unsigned long slot_start, slot_end; 374 - 375 - pfnmap = 1; 376 - 377 - start = vma->vm_pgoff; 378 - end = start + 379 - vma_pages(vma); 380 - 381 - pfn = start + ((hva - vma->vm_start) >> PAGE_SHIFT); 382 - 383 - slot_start = pfn - (gfn - slot->base_gfn); 384 - slot_end = slot_start + slot->npages; 385 - 386 - if (start < slot_start) 387 - start = slot_start; 388 - if (end > slot_end) 389 - end = slot_end; 390 - 391 - tsize = (gtlbe->mas1 & MAS1_TSIZE_MASK) >> 392 - MAS1_TSIZE_SHIFT; 393 - 394 - /* 395 - * e500 doesn't implement the lowest tsize bit, 396 - * or 1K pages. 397 - */ 398 - tsize = max(BOOK3E_PAGESZ_4K, tsize & ~1); 399 - 400 - /* 401 - * Now find the largest tsize (up to what the guest 402 - * requested) that will cover gfn, stay within the 403 - * range, and for which gfn and pfn are mutually 404 - * aligned. 
405 - */ 406 - 407 - for (; tsize > BOOK3E_PAGESZ_4K; tsize -= 2) { 408 - unsigned long gfn_start, gfn_end; 409 - tsize_pages = 1UL << (tsize - 2); 410 - 411 - gfn_start = gfn & ~(tsize_pages - 1); 412 - gfn_end = gfn_start + tsize_pages; 413 - 414 - if (gfn_start + pfn - gfn < start) 415 - continue; 416 - if (gfn_end + pfn - gfn > end) 417 - continue; 418 - if ((gfn & (tsize_pages - 1)) != 419 - (pfn & (tsize_pages - 1))) 420 - continue; 421 - 422 - gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1); 423 - pfn &= ~(tsize_pages - 1); 424 - break; 425 - } 426 - } else if (vma && hva >= vma->vm_start && 427 - is_vm_hugetlb_page(vma)) { 428 - unsigned long psize = vma_kernel_pagesize(vma); 429 - 430 - tsize = (gtlbe->mas1 & MAS1_TSIZE_MASK) >> 431 - MAS1_TSIZE_SHIFT; 432 - 433 - /* 434 - * Take the largest page size that satisfies both host 435 - * and guest mapping 436 - */ 437 - tsize = min(__ilog2(psize) - 10, tsize); 438 - 439 - /* 440 - * e500 doesn't implement the lowest tsize bit, 441 - * or 1K pages. 442 - */ 443 - tsize = max(BOOK3E_PAGESZ_4K, tsize & ~1); 444 - } 445 - 446 - mmap_read_unlock(kvm->mm); 447 - } 448 - 449 - if (likely(!pfnmap)) { 450 - tsize_pages = 1UL << (tsize + 10 - PAGE_SHIFT); 451 - pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, NULL, &page); 452 - if (is_error_noslot_pfn(pfn)) { 453 - if (printk_ratelimit()) 454 - pr_err("%s: real page not found for gfn %lx\n", 455 - __func__, (long)gfn); 456 - return -EINVAL; 457 - } 458 - 459 - /* Align guest and physical address to page map boundaries */ 460 - pfn &= ~(tsize_pages - 1); 461 - gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1); 354 + pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, &writable, &page); 355 + if (is_error_noslot_pfn(pfn)) { 356 + if (printk_ratelimit()) 357 + pr_err("%s: real page not found for gfn %lx\n", 358 + __func__, (long)gfn); 359 + return -EINVAL; 462 360 } 463 361 464 362 spin_lock(&kvm->mmu_lock); ··· 378 472 * can't run hence pfn won't change. 
379 473 */ 380 474 local_irq_save(flags); 381 - ptep = find_linux_pte(pgdir, hva, NULL, NULL); 475 + ptep = find_linux_pte(pgdir, hva, NULL, &psize); 382 476 if (ptep) { 383 477 pte_t pte = READ_ONCE(*ptep); 384 478 385 479 if (pte_present(pte)) { 386 480 wimg = (pte_val(pte) >> PTE_WIMGE_SHIFT) & 387 481 MAS2_WIMGE_MASK; 388 - local_irq_restore(flags); 389 482 } else { 390 483 local_irq_restore(flags); 391 484 pr_err_ratelimited("%s: pte not present: gfn %lx,pfn %lx\n", ··· 393 488 goto out; 394 489 } 395 490 } 396 - writable = kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg); 491 + local_irq_restore(flags); 397 492 493 + if (psize && tlbsel == 1) { 494 + unsigned long psize_pages, tsize_pages; 495 + unsigned long start, end; 496 + unsigned long slot_start, slot_end; 497 + 498 + psize_pages = 1UL << (psize - PAGE_SHIFT); 499 + start = pfn & ~(psize_pages - 1); 500 + end = start + psize_pages; 501 + 502 + slot_start = pfn - (gfn - slot->base_gfn); 503 + slot_end = slot_start + slot->npages; 504 + 505 + if (start < slot_start) 506 + start = slot_start; 507 + if (end > slot_end) 508 + end = slot_end; 509 + 510 + tsize = (gtlbe->mas1 & MAS1_TSIZE_MASK) >> 511 + MAS1_TSIZE_SHIFT; 512 + 513 + /* 514 + * Any page size that doesn't satisfy the host mapping 515 + * will fail the start and end tests. 516 + */ 517 + tsize = min(psize - PAGE_SHIFT + BOOK3E_PAGESZ_4K, tsize); 518 + 519 + /* 520 + * e500 doesn't implement the lowest tsize bit, 521 + * or 1K pages. 522 + */ 523 + tsize = max(BOOK3E_PAGESZ_4K, tsize & ~1); 524 + 525 + /* 526 + * Now find the largest tsize (up to what the guest 527 + * requested) that will cover gfn, stay within the 528 + * range, and for which gfn and pfn are mutually 529 + * aligned. 
530 + */ 531 + 532 + for (; tsize > BOOK3E_PAGESZ_4K; tsize -= 2) { 533 + unsigned long gfn_start, gfn_end; 534 + tsize_pages = 1UL << (tsize - 2); 535 + 536 + gfn_start = gfn & ~(tsize_pages - 1); 537 + gfn_end = gfn_start + tsize_pages; 538 + 539 + if (gfn_start + pfn - gfn < start) 540 + continue; 541 + if (gfn_end + pfn - gfn > end) 542 + continue; 543 + if ((gfn & (tsize_pages - 1)) != 544 + (pfn & (tsize_pages - 1))) 545 + continue; 546 + 547 + gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1); 548 + pfn &= ~(tsize_pages - 1); 549 + break; 550 + } 551 + } 552 + 553 + kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg, writable); 398 554 kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize, 399 555 ref, gvaddr, stlbe); 556 + writable = tlbe_is_writable(stlbe); 400 557 401 558 /* Clear i-cache for new pages */ 402 559 kvmppc_mmu_flush_icache(pfn);
+36
arch/powerpc/platforms/book3s/vas-api.c
··· 464 464 return VM_FAULT_SIGBUS; 465 465 } 466 466 467 + /* 468 + * During mmap() paste address, mapping VMA is saved in VAS window 469 + * struct which is used to unmap during migration if the window is 470 + * still open. But the user space can remove this mapping with 471 + * munmap() before closing the window and the VMA address will 472 + * be invalid. Set VAS window VMA to NULL in this function which 473 + * is called before VMA free. 474 + */ 475 + static void vas_mmap_close(struct vm_area_struct *vma) 476 + { 477 + struct file *fp = vma->vm_file; 478 + struct coproc_instance *cp_inst = fp->private_data; 479 + struct vas_window *txwin; 480 + 481 + /* Should not happen */ 482 + if (!cp_inst || !cp_inst->txwin) { 483 + pr_err("No attached VAS window for the paste address mmap\n"); 484 + return; 485 + } 486 + 487 + txwin = cp_inst->txwin; 488 + /* 489 + * task_ref.vma is set in coproc_mmap() during mmap paste 490 + * address. So it has to be the same VMA that is getting freed. 491 + */ 492 + if (WARN_ON(txwin->task_ref.vma != vma)) { 493 + pr_err("Invalid paste address mmaping\n"); 494 + return; 495 + } 496 + 497 + mutex_lock(&txwin->task_ref.mmap_mutex); 498 + txwin->task_ref.vma = NULL; 499 + mutex_unlock(&txwin->task_ref.mmap_mutex); 500 + } 501 + 467 502 static const struct vm_operations_struct vas_vm_ops = { 503 + .close = vas_mmap_close, 468 504 .fault = vas_mmap_fault, 469 505 }; 470 506
+1
arch/riscv/include/asm/page.h
··· 122 122 123 123 extern struct kernel_mapping kernel_map; 124 124 extern phys_addr_t phys_ram_base; 125 + extern unsigned long vmemmap_start_pfn; 125 126 126 127 #define is_kernel_mapping(x) \ 127 128 ((x) >= kernel_map.virt_addr && (x) < (kernel_map.virt_addr + kernel_map.size))
+1 -1
arch/riscv/include/asm/pgtable.h
··· 87 87 * Define vmemmap for pfn_to_page & page_to_pfn calls. Needed if kernel 88 88 * is configured with CONFIG_SPARSEMEM_VMEMMAP enabled. 89 89 */ 90 - #define vmemmap ((struct page *)VMEMMAP_START - (phys_ram_base >> PAGE_SHIFT)) 90 + #define vmemmap ((struct page *)VMEMMAP_START - vmemmap_start_pfn) 91 91 92 92 #define PCI_IO_SIZE SZ_16M 93 93 #define PCI_IO_END VMEMMAP_START
+1
arch/riscv/include/asm/sbi.h
··· 159 159 }; 160 160 161 161 #define RISCV_PMU_RAW_EVENT_MASK GENMASK_ULL(47, 0) 162 + #define RISCV_PMU_PLAT_FW_EVENT_MASK GENMASK_ULL(61, 0) 162 163 #define RISCV_PMU_RAW_EVENT_IDX 0x20000 163 164 #define RISCV_PLAT_FW_EVENT 0xFFFF 164 165
+4 -1
arch/riscv/include/asm/spinlock.h
··· 3 3 #ifndef __ASM_RISCV_SPINLOCK_H 4 4 #define __ASM_RISCV_SPINLOCK_H 5 5 6 - #ifdef CONFIG_RISCV_COMBO_SPINLOCKS 6 + #ifdef CONFIG_QUEUED_SPINLOCKS 7 7 #define _Q_PENDING_LOOPS (1 << 9) 8 + #endif 9 + 10 + #ifdef CONFIG_RISCV_COMBO_SPINLOCKS 8 11 9 12 #define __no_arch_spinlock_redefine 10 13 #include <asm/ticket_spinlock.h>
+11 -10
arch/riscv/kernel/entry.S
··· 23 23 REG_S a0, TASK_TI_A0(tp) 24 24 csrr a0, CSR_CAUSE 25 25 /* Exclude IRQs */ 26 - blt a0, zero, _new_vmalloc_restore_context_a0 26 + blt a0, zero, .Lnew_vmalloc_restore_context_a0 27 27 28 28 REG_S a1, TASK_TI_A1(tp) 29 29 /* Only check new_vmalloc if we are in page/protection fault */ 30 30 li a1, EXC_LOAD_PAGE_FAULT 31 - beq a0, a1, _new_vmalloc_kernel_address 31 + beq a0, a1, .Lnew_vmalloc_kernel_address 32 32 li a1, EXC_STORE_PAGE_FAULT 33 - beq a0, a1, _new_vmalloc_kernel_address 33 + beq a0, a1, .Lnew_vmalloc_kernel_address 34 34 li a1, EXC_INST_PAGE_FAULT 35 - bne a0, a1, _new_vmalloc_restore_context_a1 35 + bne a0, a1, .Lnew_vmalloc_restore_context_a1 36 36 37 - _new_vmalloc_kernel_address: 37 + .Lnew_vmalloc_kernel_address: 38 38 /* Is it a kernel address? */ 39 39 csrr a0, CSR_TVAL 40 - bge a0, zero, _new_vmalloc_restore_context_a1 40 + bge a0, zero, .Lnew_vmalloc_restore_context_a1 41 41 42 42 /* Check if a new vmalloc mapping appeared that could explain the trap */ 43 43 REG_S a2, TASK_TI_A2(tp) ··· 69 69 /* Check the value of new_vmalloc for this cpu */ 70 70 REG_L a2, 0(a0) 71 71 and a2, a2, a1 72 - beq a2, zero, _new_vmalloc_restore_context 72 + beq a2, zero, .Lnew_vmalloc_restore_context 73 73 74 74 /* Atomically reset the current cpu bit in new_vmalloc */ 75 75 amoxor.d a0, a1, (a0) ··· 83 83 csrw CSR_SCRATCH, x0 84 84 sret 85 85 86 - _new_vmalloc_restore_context: 86 + .Lnew_vmalloc_restore_context: 87 87 REG_L a2, TASK_TI_A2(tp) 88 - _new_vmalloc_restore_context_a1: 88 + .Lnew_vmalloc_restore_context_a1: 89 89 REG_L a1, TASK_TI_A1(tp) 90 - _new_vmalloc_restore_context_a0: 90 + .Lnew_vmalloc_restore_context_a0: 91 91 REG_L a0, TASK_TI_A0(tp) 92 92 .endm 93 93 ··· 278 278 #else 279 279 sret 280 280 #endif 281 + SYM_INNER_LABEL(ret_from_exception_end, SYM_L_GLOBAL) 281 282 SYM_CODE_END(ret_from_exception) 282 283 ASM_NOKPROBE(ret_from_exception) 283 284
+4 -14
arch/riscv/kernel/module.c
··· 23 23 24 24 struct relocation_head { 25 25 struct hlist_node node; 26 - struct list_head *rel_entry; 26 + struct list_head rel_entry; 27 27 void *location; 28 28 }; 29 29 ··· 634 634 location = rel_head_iter->location; 635 635 list_for_each_entry_safe(rel_entry_iter, 636 636 rel_entry_iter_tmp, 637 - rel_head_iter->rel_entry, 637 + &rel_head_iter->rel_entry, 638 638 head) { 639 639 curr_type = rel_entry_iter->type; 640 640 reloc_handlers[curr_type].reloc_handler( ··· 704 704 return -ENOMEM; 705 705 } 706 706 707 - rel_head->rel_entry = 708 - kmalloc(sizeof(struct list_head), GFP_KERNEL); 709 - 710 - if (!rel_head->rel_entry) { 711 - kfree(entry); 712 - kfree(rel_head); 713 - return -ENOMEM; 714 - } 715 - 716 - INIT_LIST_HEAD(rel_head->rel_entry); 707 + INIT_LIST_HEAD(&rel_head->rel_entry); 717 708 rel_head->location = location; 718 709 INIT_HLIST_NODE(&rel_head->node); 719 710 if (!current_head->first) { ··· 713 722 714 723 if (!bucket) { 715 724 kfree(entry); 716 - kfree(rel_head->rel_entry); 717 725 kfree(rel_head); 718 726 return -ENOMEM; 719 727 } ··· 725 735 } 726 736 727 737 /* Add relocation to head of discovered rel_head */ 728 - list_add_tail(&entry->head, rel_head->rel_entry); 738 + list_add_tail(&entry->head, &rel_head->rel_entry); 729 739 730 740 return 0; 731 741 }
+1 -1
arch/riscv/kernel/probes/kprobes.c
··· 30 30 p->ainsn.api.restore = (unsigned long)p->addr + len; 31 31 32 32 patch_text_nosync(p->ainsn.api.insn, &p->opcode, len); 33 - patch_text_nosync(p->ainsn.api.insn + len, &insn, GET_INSN_LENGTH(insn)); 33 + patch_text_nosync((void *)p->ainsn.api.insn + len, &insn, GET_INSN_LENGTH(insn)); 34 34 } 35 35 36 36 static void __kprobes arch_prepare_simulate(struct kprobe *p)
+3 -1
arch/riscv/kernel/stacktrace.c
··· 17 17 #ifdef CONFIG_FRAME_POINTER 18 18 19 19 extern asmlinkage void handle_exception(void); 20 + extern unsigned long ret_from_exception_end; 20 21 21 22 static inline int fp_is_valid(unsigned long fp, unsigned long sp) 22 23 { ··· 72 71 fp = frame->fp; 73 72 pc = ftrace_graph_ret_addr(current, &graph_idx, frame->ra, 74 73 &frame->ra); 75 - if (pc == (unsigned long)handle_exception) { 74 + if (pc >= (unsigned long)handle_exception && 75 + pc < (unsigned long)&ret_from_exception_end) { 76 76 if (unlikely(!__kernel_text_address(pc) || !fn(arg, pc))) 77 77 break; 78 78
+3 -3
arch/riscv/kernel/traps.c
··· 35 35 36 36 int show_unhandled_signals = 1; 37 37 38 - static DEFINE_SPINLOCK(die_lock); 38 + static DEFINE_RAW_SPINLOCK(die_lock); 39 39 40 40 static int copy_code(struct pt_regs *regs, u16 *val, const u16 *insns) 41 41 { ··· 81 81 82 82 oops_enter(); 83 83 84 - spin_lock_irqsave(&die_lock, flags); 84 + raw_spin_lock_irqsave(&die_lock, flags); 85 85 console_verbose(); 86 86 bust_spinlocks(1); 87 87 ··· 100 100 101 101 bust_spinlocks(0); 102 102 add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE); 103 - spin_unlock_irqrestore(&die_lock, flags); 103 + raw_spin_unlock_irqrestore(&die_lock, flags); 104 104 oops_exit(); 105 105 106 106 if (in_interrupt())
+16 -1
arch/riscv/mm/init.c
··· 33 33 #include <asm/pgtable.h> 34 34 #include <asm/sections.h> 35 35 #include <asm/soc.h> 36 + #include <asm/sparsemem.h> 36 37 #include <asm/tlbflush.h> 37 38 38 39 #include "../kernel/head.h" ··· 62 61 63 62 phys_addr_t phys_ram_base __ro_after_init; 64 63 EXPORT_SYMBOL(phys_ram_base); 64 + 65 + #ifdef CONFIG_SPARSEMEM_VMEMMAP 66 + #define VMEMMAP_ADDR_ALIGN (1ULL << SECTION_SIZE_BITS) 67 + 68 + unsigned long vmemmap_start_pfn __ro_after_init; 69 + EXPORT_SYMBOL(vmemmap_start_pfn); 70 + #endif 65 71 66 72 unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] 67 73 __page_aligned_bss; ··· 248 240 * Make sure we align the start of the memory on a PMD boundary so that 249 241 * at worst, we map the linear mapping with PMD mappings. 250 242 */ 251 - if (!IS_ENABLED(CONFIG_XIP_KERNEL)) 243 + if (!IS_ENABLED(CONFIG_XIP_KERNEL)) { 252 244 phys_ram_base = memblock_start_of_DRAM() & PMD_MASK; 245 + #ifdef CONFIG_SPARSEMEM_VMEMMAP 246 + vmemmap_start_pfn = round_down(phys_ram_base, VMEMMAP_ADDR_ALIGN) >> PAGE_SHIFT; 247 + #endif 248 + } 253 249 254 250 /* 255 251 * In 64-bit, any use of __va/__pa before this point is wrong as we ··· 1113 1101 kernel_map.xiprom_sz = (uintptr_t)(&_exiprom) - (uintptr_t)(&_xiprom); 1114 1102 1115 1103 phys_ram_base = CONFIG_PHYS_RAM_BASE; 1104 + #ifdef CONFIG_SPARSEMEM_VMEMMAP 1105 + vmemmap_start_pfn = round_down(phys_ram_base, VMEMMAP_ADDR_ALIGN) >> PAGE_SHIFT; 1106 + #endif 1116 1107 kernel_map.phys_addr = (uintptr_t)CONFIG_PHYS_RAM_BASE; 1117 1108 kernel_map.size = (uintptr_t)(&_end) - (uintptr_t)(&_start); 1118 1109
+2
arch/s390/boot/startup.c
··· 234 234 vsize = round_up(SZ_2G + max_mappable, rte_size) + 235 235 round_up(vmemmap_size, rte_size) + 236 236 FIXMAP_SIZE + MODULES_LEN + KASLR_LEN; 237 + if (IS_ENABLED(CONFIG_KMSAN)) 238 + vsize += MODULES_LEN * 2; 237 239 return size_add(vsize, vmalloc_size); 238 240 } 239 241
+3 -3
arch/s390/boot/vmem.c
··· 306 306 pages++; 307 307 } 308 308 } 309 - if (mode == POPULATE_DIRECT) 309 + if (mode == POPULATE_IDENTITY) 310 310 update_page_count(PG_DIRECT_MAP_4K, pages); 311 311 } 312 312 ··· 339 339 } 340 340 pgtable_pte_populate(pmd, addr, next, mode); 341 341 } 342 - if (mode == POPULATE_DIRECT) 342 + if (mode == POPULATE_IDENTITY) 343 343 update_page_count(PG_DIRECT_MAP_1M, pages); 344 344 } 345 345 ··· 372 372 } 373 373 pgtable_pmd_populate(pud, addr, next, mode); 374 374 } 375 - if (mode == POPULATE_DIRECT) 375 + if (mode == POPULATE_IDENTITY) 376 376 update_page_count(PG_DIRECT_MAP_2G, pages); 377 377 } 378 378
+1 -1
arch/s390/kernel/ipl.c
··· 270 270 if (len >= sizeof(_value)) \ 271 271 return -E2BIG; \ 272 272 len = strscpy(_value, buf, sizeof(_value)); \ 273 - if (len < 0) \ 273 + if ((ssize_t)len < 0) \ 274 274 return len; \ 275 275 strim(_value); \ 276 276 return len; \
+6
arch/s390/kvm/interrupt.c
··· 2678 2678 kvm_s390_clear_float_irqs(dev->kvm); 2679 2679 break; 2680 2680 case KVM_DEV_FLIC_APF_ENABLE: 2681 + if (kvm_is_ucontrol(dev->kvm)) 2682 + return -EINVAL; 2681 2683 dev->kvm->arch.gmap->pfault_enabled = 1; 2682 2684 break; 2683 2685 case KVM_DEV_FLIC_APF_DISABLE_WAIT: 2686 + if (kvm_is_ucontrol(dev->kvm)) 2687 + return -EINVAL; 2684 2688 dev->kvm->arch.gmap->pfault_enabled = 0; 2685 2689 /* 2686 2690 * Make sure no async faults are in transition when ··· 2898 2894 switch (ue->type) { 2899 2895 /* we store the userspace addresses instead of the guest addresses */ 2900 2896 case KVM_IRQ_ROUTING_S390_ADAPTER: 2897 + if (kvm_is_ucontrol(kvm)) 2898 + return -EINVAL; 2901 2899 e->set = set_adapter_int; 2902 2900 uaddr = gmap_translate(kvm->arch.gmap, ue->u.adapter.summary_addr); 2903 2901 if (uaddr == -EFAULT)
+1 -1
arch/s390/kvm/vsie.c
··· 854 854 static void unpin_scb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page, 855 855 gpa_t gpa) 856 856 { 857 - hpa_t hpa = (hpa_t) vsie_page->scb_o; 857 + hpa_t hpa = virt_to_phys(vsie_page->scb_o); 858 858 859 859 if (hpa) 860 860 unpin_guest_page(vcpu->kvm, gpa, hpa);
+11 -1
arch/x86/events/intel/core.c
··· 429 429 EVENT_CONSTRAINT_END 430 430 }; 431 431 432 + static struct extra_reg intel_lnc_extra_regs[] __read_mostly = { 433 + INTEL_UEVENT_EXTRA_REG(0x012a, MSR_OFFCORE_RSP_0, 0xfffffffffffull, RSP_0), 434 + INTEL_UEVENT_EXTRA_REG(0x012b, MSR_OFFCORE_RSP_1, 0xfffffffffffull, RSP_1), 435 + INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd), 436 + INTEL_UEVENT_EXTRA_REG(0x02c6, MSR_PEBS_FRONTEND, 0x9, FE), 437 + INTEL_UEVENT_EXTRA_REG(0x03c6, MSR_PEBS_FRONTEND, 0x7fff1f, FE), 438 + INTEL_UEVENT_EXTRA_REG(0x40ad, MSR_PEBS_FRONTEND, 0xf, FE), 439 + INTEL_UEVENT_EXTRA_REG(0x04c2, MSR_PEBS_FRONTEND, 0x8, FE), 440 + EVENT_EXTRA_END 441 + }; 432 442 433 443 EVENT_ATTR_STR(mem-loads, mem_ld_nhm, "event=0x0b,umask=0x10,ldlat=3"); 434 444 EVENT_ATTR_STR(mem-loads, mem_ld_snb, "event=0xcd,umask=0x1,ldlat=3"); ··· 6432 6422 intel_pmu_init_glc(pmu); 6433 6423 hybrid(pmu, event_constraints) = intel_lnc_event_constraints; 6434 6424 hybrid(pmu, pebs_constraints) = intel_lnc_pebs_event_constraints; 6435 - hybrid(pmu, extra_regs) = intel_rwc_extra_regs; 6425 + hybrid(pmu, extra_regs) = intel_lnc_extra_regs; 6436 6426 } 6437 6427 6438 6428 static __always_inline void intel_pmu_init_skt(struct pmu *pmu)
+1
arch/x86/events/intel/ds.c
··· 2517 2517 x86_pmu.large_pebs_flags |= PERF_SAMPLE_TIME; 2518 2518 break; 2519 2519 2520 + case 6: 2520 2521 case 5: 2521 2522 x86_pmu.pebs_ept = 1; 2522 2523 fallthrough;
+1
arch/x86/events/intel/uncore.c
··· 1910 1910 X86_MATCH_VFM(INTEL_ATOM_GRACEMONT, &adl_uncore_init), 1911 1911 X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, &gnr_uncore_init), 1912 1912 X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, &gnr_uncore_init), 1913 + X86_MATCH_VFM(INTEL_ATOM_DARKMONT_X, &gnr_uncore_init), 1913 1914 {}, 1914 1915 }; 1915 1916 MODULE_DEVICE_TABLE(x86cpu, intel_uncore_match);
+2
arch/x86/include/asm/processor.h
··· 230 230 return BIT_ULL(boot_cpu_data.x86_cache_bits - 1 - PAGE_SHIFT); 231 231 } 232 232 233 + void init_cpu_devs(void); 234 + void get_cpu_vendor(struct cpuinfo_x86 *c); 233 235 extern void early_cpu_init(void); 234 236 extern void identify_secondary_cpu(struct cpuinfo_x86 *); 235 237 extern void print_cpu_info(struct cpuinfo_x86 *);
+15
arch/x86/include/asm/static_call.h
··· 65 65 66 66 extern bool __static_call_fixup(void *tramp, u8 op, void *dest); 67 67 68 + extern void __static_call_update_early(void *tramp, void *func); 69 + 70 + #define static_call_update_early(name, _func) \ 71 + ({ \ 72 + typeof(&STATIC_CALL_TRAMP(name)) __F = (_func); \ 73 + if (static_call_initialized) { \ 74 + __static_call_update(&STATIC_CALL_KEY(name), \ 75 + STATIC_CALL_TRAMP_ADDR(name), __F);\ 76 + } else { \ 77 + WRITE_ONCE(STATIC_CALL_KEY(name).func, _func); \ 78 + __static_call_update_early(STATIC_CALL_TRAMP_ADDR(name),\ 79 + __F); \ 80 + } \ 81 + }) 82 + 68 83 #endif /* _ASM_STATIC_CALL_H */
+3 -3
arch/x86/include/asm/sync_core.h
··· 8 8 #include <asm/special_insns.h> 9 9 10 10 #ifdef CONFIG_X86_32 11 - static inline void iret_to_self(void) 11 + static __always_inline void iret_to_self(void) 12 12 { 13 13 asm volatile ( 14 14 "pushfl\n\t" ··· 19 19 : ASM_CALL_CONSTRAINT : : "memory"); 20 20 } 21 21 #else 22 - static inline void iret_to_self(void) 22 + static __always_inline void iret_to_self(void) 23 23 { 24 24 unsigned int tmp; 25 25 ··· 55 55 * Like all of Linux's memory ordering operations, this is a 56 56 * compiler barrier as well. 57 57 */ 58 - static inline void sync_core(void) 58 + static __always_inline void sync_core(void) 59 59 { 60 60 /* 61 61 * The SERIALIZE instruction is the most straightforward way to
+22 -14
arch/x86/include/asm/xen/hypercall.h
··· 39 39 #include <linux/string.h> 40 40 #include <linux/types.h> 41 41 #include <linux/pgtable.h> 42 + #include <linux/instrumentation.h> 42 43 43 44 #include <trace/events/xen.h> 44 45 46 + #include <asm/alternative.h> 45 47 #include <asm/page.h> 46 48 #include <asm/smap.h> 47 49 #include <asm/nospec-branch.h> ··· 88 86 * there aren't more than 5 arguments...) 89 87 */ 90 88 91 - extern struct { char _entry[32]; } hypercall_page[]; 89 + void xen_hypercall_func(void); 90 + DECLARE_STATIC_CALL(xen_hypercall, xen_hypercall_func); 92 91 93 - #define __HYPERCALL "call hypercall_page+%c[offset]" 94 - #define __HYPERCALL_ENTRY(x) \ 95 - [offset] "i" (__HYPERVISOR_##x * sizeof(hypercall_page[0])) 92 + #ifdef MODULE 93 + #define __ADDRESSABLE_xen_hypercall 94 + #else 95 + #define __ADDRESSABLE_xen_hypercall __ADDRESSABLE_ASM_STR(__SCK__xen_hypercall) 96 + #endif 97 + 98 + #define __HYPERCALL \ 99 + __ADDRESSABLE_xen_hypercall \ 100 + "call __SCT__xen_hypercall" 101 + 102 + #define __HYPERCALL_ENTRY(x) "a" (x) 96 103 97 104 #ifdef CONFIG_X86_32 98 105 #define __HYPERCALL_RETREG "eax" ··· 159 148 __HYPERCALL_0ARG(); \ 160 149 asm volatile (__HYPERCALL \ 161 150 : __HYPERCALL_0PARAM \ 162 - : __HYPERCALL_ENTRY(name) \ 151 + : __HYPERCALL_ENTRY(__HYPERVISOR_ ## name) \ 163 152 : __HYPERCALL_CLOBBER0); \ 164 153 (type)__res; \ 165 154 }) ··· 170 159 __HYPERCALL_1ARG(a1); \ 171 160 asm volatile (__HYPERCALL \ 172 161 : __HYPERCALL_1PARAM \ 173 - : __HYPERCALL_ENTRY(name) \ 162 + : __HYPERCALL_ENTRY(__HYPERVISOR_ ## name) \ 174 163 : __HYPERCALL_CLOBBER1); \ 175 164 (type)__res; \ 176 165 }) ··· 181 170 __HYPERCALL_2ARG(a1, a2); \ 182 171 asm volatile (__HYPERCALL \ 183 172 : __HYPERCALL_2PARAM \ 184 - : __HYPERCALL_ENTRY(name) \ 173 + : __HYPERCALL_ENTRY(__HYPERVISOR_ ## name) \ 185 174 : __HYPERCALL_CLOBBER2); \ 186 175 (type)__res; \ 187 176 }) ··· 192 181 __HYPERCALL_3ARG(a1, a2, a3); \ 193 182 asm volatile (__HYPERCALL \ 194 183 : __HYPERCALL_3PARAM \ 195 - : __HYPERCALL_ENTRY(name) \
184 + : __HYPERCALL_ENTRY(__HYPERVISOR_ ## name) \ 196 185 : __HYPERCALL_CLOBBER3); \ 197 186 (type)__res; \ 198 187 }) ··· 203 192 __HYPERCALL_4ARG(a1, a2, a3, a4); \ 204 193 asm volatile (__HYPERCALL \ 205 194 : __HYPERCALL_4PARAM \ 206 - : __HYPERCALL_ENTRY(name) \ 195 + : __HYPERCALL_ENTRY(__HYPERVISOR_ ## name) \ 207 196 : __HYPERCALL_CLOBBER4); \ 208 197 (type)__res; \ 209 198 }) ··· 217 206 __HYPERCALL_DECLS; 218 207 __HYPERCALL_5ARG(a1, a2, a3, a4, a5); 219 208 220 - if (call >= PAGE_SIZE / sizeof(hypercall_page[0])) 221 - return -EINVAL; 222 - 223 - asm volatile(CALL_NOSPEC 209 + asm volatile(__HYPERCALL 224 210 : __HYPERCALL_5PARAM 225 - : [thunk_target] "a" (&hypercall_page[call]) 211 + : __HYPERCALL_ENTRY(call) 226 212 : __HYPERCALL_CLOBBER5); 227 213 228 214 return (long)__res;
-5
arch/x86/kernel/callthunks.c
··· 143 143 dest < (void*)relocate_kernel + KEXEC_CONTROL_CODE_MAX_SIZE) 144 144 return true; 145 145 #endif 146 - #ifdef CONFIG_XEN 147 - if (dest >= (void *)hypercall_page && 148 - dest < (void*)hypercall_page + PAGE_SIZE) 149 - return true; 150 - #endif 151 146 return false; 152 147 } 153 148
+30
arch/x86/kernel/cet.c
··· 81 81 82 82 static __ro_after_init bool ibt_fatal = true; 83 83 84 + /* 85 + * By definition, all missing-ENDBRANCH #CPs are a result of WFE && !ENDBR. 86 + * 87 + * For the kernel IBT no ENDBR selftest where #CPs are deliberately triggered, 88 + * the WFE state of the interrupted context needs to be cleared to let execution 89 + * continue. Otherwise when the CPU resumes from the instruction that just 90 + * caused the previous #CP, another missing-ENDBRANCH #CP is raised and the CPU 91 + * enters a dead loop. 92 + * 93 + * This is not a problem with IDT because it doesn't preserve WFE and IRET doesn't 94 + * set WFE. But FRED provides space on the entry stack (in an expanded CS area) 95 + * to save and restore the WFE state, thus the WFE state is no longer clobbered, 96 + * so software must clear it. 97 + */ 98 + static void ibt_clear_fred_wfe(struct pt_regs *regs) 99 + { 100 + /* 101 + * No need to do any FRED checks. 102 + * 103 + * For IDT event delivery, the high-order 48 bits of CS are pushed 104 + * as 0s into the stack, and later IRET ignores these bits. 105 + * 106 + * For FRED, a test to check if fred_cs.wfe is set would be dropped 107 + * by compilers. 108 + */ 109 + regs->fred_cs.wfe = 0; 110 + } 111 + 84 112 static void do_kernel_cp_fault(struct pt_regs *regs, unsigned long error_code) 85 113 { 86 114 if ((error_code & CP_EC) != CP_ENDBR) { ··· 118 90 119 91 if (unlikely(regs->ip == (unsigned long)&ibt_selftest_noendbr)) { 120 92 regs->ax = 0; 93 + ibt_clear_fred_wfe(regs); 121 94 return; 122 95 } 123 96 ··· 126 97 if (!ibt_fatal) { 127 98 printk(KERN_DEFAULT CUT_HERE); 128 99 __warn(__FILE__, __LINE__, (void *)regs->ip, TAINT_WARN, regs, NULL); 100 + ibt_clear_fred_wfe(regs); 129 101 return; 130 102 } 131 103 BUG();
+22 -16
arch/x86/kernel/cpu/common.c
··· 867 867 tlb_lld_4m[ENTRIES], tlb_lld_1g[ENTRIES]); 868 868 } 869 869 870 - static void get_cpu_vendor(struct cpuinfo_x86 *c) 870 + void get_cpu_vendor(struct cpuinfo_x86 *c) 871 871 { 872 872 char *v = c->x86_vendor_id; 873 873 int i; ··· 1649 1649 detect_nopl(); 1650 1650 } 1651 1651 1652 - void __init early_cpu_init(void) 1652 + void __init init_cpu_devs(void) 1653 1653 { 1654 1654 const struct cpu_dev *const *cdev; 1655 1655 int count = 0; 1656 - 1657 - #ifdef CONFIG_PROCESSOR_SELECT 1658 - pr_info("KERNEL supported cpus:\n"); 1659 - #endif 1660 1656 1661 1657 for (cdev = __x86_cpu_dev_start; cdev < __x86_cpu_dev_end; cdev++) { 1662 1658 const struct cpu_dev *cpudev = *cdev; ··· 1661 1665 break; 1662 1666 cpu_devs[count] = cpudev; 1663 1667 count++; 1668 + } 1669 + } 1670 + 1671 + void __init early_cpu_init(void) 1672 + { 1673 + #ifdef CONFIG_PROCESSOR_SELECT 1674 + unsigned int i, j; 1675 + 1676 + pr_info("KERNEL supported cpus:\n"); 1677 + #endif 1678 + 1679 + init_cpu_devs(); 1664 1680 1665 1681 #ifdef CONFIG_PROCESSOR_SELECT 1666 - { 1667 - unsigned int j; 1668 - 1669 - for (j = 0; j < 2; j++) { 1670 - if (!cpudev->c_ident[j]) 1671 - continue; 1672 - pr_info(" %s %s\n", cpudev->c_vendor, 1673 - cpudev->c_ident[j]); 1674 - } 1682 + for (i = 0; i < X86_VENDOR_NUM && cpu_devs[i]; i++) { 1683 + for (j = 0; j < 2; j++) { 1684 + if (!cpu_devs[i]->c_ident[j]) 1685 + continue; 1686 + pr_info(" %s %s\n", cpu_devs[i]->c_vendor, 1687 + cpu_devs[i]->c_ident[j]); 1675 1688 } 1676 - #endif 1677 1689 } 1690 + #endif 1691 + 1678 1692 early_identify_cpu(&boot_cpu_data); 1679 1693 } 1680 1694
+58
arch/x86/kernel/cpu/mshyperv.c
··· 223 223 hyperv_cleanup(); 224 224 } 225 225 #endif /* CONFIG_CRASH_DUMP */ 226 + 227 + static u64 hv_ref_counter_at_suspend; 228 + static void (*old_save_sched_clock_state)(void); 229 + static void (*old_restore_sched_clock_state)(void); 230 + 231 + /* 232 + * Hyper-V clock counter resets during hibernation. Save and restore clock 233 + * offset during suspend/resume, while also considering the time passed 234 + * before suspend. This is to make sure that sched_clock using hv tsc page 235 + * based clocksource, proceeds from where it left off during suspend and 236 + * it shows correct time for the timestamps of kernel messages after resume. 237 + */ 238 + static void save_hv_clock_tsc_state(void) 239 + { 240 + hv_ref_counter_at_suspend = hv_read_reference_counter(); 241 + } 242 + 243 + static void restore_hv_clock_tsc_state(void) 244 + { 245 + /* 246 + * Adjust the offsets used by hv tsc clocksource to 247 + * account for the time spent before hibernation. 248 + * adjusted value = reference counter (time) at suspend 249 + * - reference counter (time) now. 250 + */ 251 + hv_adj_sched_clock_offset(hv_ref_counter_at_suspend - hv_read_reference_counter()); 252 + } 253 + 254 + /* 255 + * Functions to override save_sched_clock_state and restore_sched_clock_state 256 + * functions of x86_platform. The Hyper-V clock counter is reset during 257 + * suspend-resume and the offset used to measure time needs to be 258 + * corrected, post resume. 
259 + */ 260 + static void hv_save_sched_clock_state(void) 261 + { 262 + old_save_sched_clock_state(); 263 + save_hv_clock_tsc_state(); 264 + } 265 + 266 + static void hv_restore_sched_clock_state(void) 267 + { 268 + restore_hv_clock_tsc_state(); 269 + old_restore_sched_clock_state(); 270 + } 271 + 272 + static void __init x86_setup_ops_for_tsc_pg_clock(void) 273 + { 274 + if (!(ms_hyperv.features & HV_MSR_REFERENCE_TSC_AVAILABLE)) 275 + return; 276 + 277 + old_save_sched_clock_state = x86_platform.save_sched_clock_state; 278 + x86_platform.save_sched_clock_state = hv_save_sched_clock_state; 279 + 280 + old_restore_sched_clock_state = x86_platform.restore_sched_clock_state; 281 + x86_platform.restore_sched_clock_state = hv_restore_sched_clock_state; 282 + } 226 283 #endif /* CONFIG_HYPERV */ 227 284 228 285 static uint32_t __init ms_hyperv_platform(void) ··· 636 579 637 580 /* Register Hyper-V specific clocksource */ 638 581 hv_init_clocksource(); 582 + x86_setup_ops_for_tsc_pg_clock(); 639 583 hv_vtl_init_platform(); 640 584 #endif 641 585 /*
+2 -1
arch/x86/kernel/fpu/regset.c
··· 190 190 struct fpu *fpu = &target->thread.fpu; 191 191 struct cet_user_state *cetregs; 192 192 193 - if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) 193 + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK) || 194 + !ssp_active(target, regset)) 194 195 return -ENODEV; 195 196 196 197 sync_fpstate(fpu);
+8
arch/x86/kernel/static_call.c
··· 172 172 } 173 173 EXPORT_SYMBOL_GPL(arch_static_call_transform); 174 174 175 + noinstr void __static_call_update_early(void *tramp, void *func) 176 + { 177 + BUG_ON(system_state != SYSTEM_BOOTING); 178 + BUG_ON(static_call_initialized); 179 + __text_gen_insn(tramp, JMP32_INSN_OPCODE, tramp, func, JMP32_INSN_SIZE); 180 + sync_core(); 181 + } 182 + 175 183 #ifdef CONFIG_MITIGATION_RETHUNK 176 184 /* 177 185 * This is called by apply_returns() to fix up static call trampolines,
-4
arch/x86/kernel/vmlinux.lds.S
··· 519 519 * linker will never mark as relocatable. (Using just ABSOLUTE() is not 520 520 * sufficient for that). 521 521 */ 522 - #ifdef CONFIG_XEN 523 522 #ifdef CONFIG_XEN_PV 524 523 xen_elfnote_entry_value = 525 524 ABSOLUTE(xen_elfnote_entry) + ABSOLUTE(startup_xen); 526 - #endif 527 - xen_elfnote_hypercall_page_value = 528 - ABSOLUTE(xen_elfnote_hypercall_page) + ABSOLUTE(hypercall_page); 529 525 #endif 530 526 #ifdef CONFIG_PVH 531 527 xen_elfnote_phys32_entry_value =
+1 -1
arch/x86/kvm/vmx/posted_intr.h
··· 2 2 #ifndef __KVM_X86_VMX_POSTED_INTR_H 3 3 #define __KVM_X86_VMX_POSTED_INTR_H 4 4 5 - #include <linux/find.h> 5 + #include <linux/bitmap.h> 6 6 #include <asm/posted_intr.h> 7 7 8 8 void vmx_vcpu_pi_load(struct kvm_vcpu *vcpu, int cpu);
+7
arch/x86/kvm/x86.c
··· 12724 12724 kvm_hv_init_vm(kvm); 12725 12725 kvm_xen_init_vm(kvm); 12726 12726 12727 + if (ignore_msrs && !report_ignored_msrs) { 12728 + pr_warn_once("Running KVM with ignore_msrs=1 and report_ignored_msrs=0 is not a\n" 12729 + "supported configuration. Lying to the guest about the existence of MSRs\n" 12730 + "may cause the guest operating system to hang or produce errors. If a guest\n" 12731 + "does not run without ignore_msrs=1, please report it to kvm@vger.kernel.org.\n"); 12732 + } 12733 + 12727 12734 return 0; 12728 12735 12729 12736 out_uninit_mmu:
+64 -1
arch/x86/xen/enlighten.c
··· 2 2 3 3 #include <linux/console.h> 4 4 #include <linux/cpu.h> 5 + #include <linux/instrumentation.h> 5 6 #include <linux/kexec.h> 6 7 #include <linux/memblock.h> 7 8 #include <linux/slab.h> ··· 22 21 23 22 #include "xen-ops.h" 24 23 25 - EXPORT_SYMBOL_GPL(hypercall_page); 24 + DEFINE_STATIC_CALL(xen_hypercall, xen_hypercall_hvm); 25 + EXPORT_STATIC_CALL_TRAMP(xen_hypercall); 26 26 27 27 /* 28 28 * Pointer to the xen_vcpu_info structure or ··· 69 67 * page as soon as fixmap is up and running. 70 68 */ 71 69 struct shared_info *HYPERVISOR_shared_info = &xen_dummy_shared_info; 70 + 71 + static __ref void xen_get_vendor(void) 72 + { 73 + init_cpu_devs(); 74 + cpu_detect(&boot_cpu_data); 75 + get_cpu_vendor(&boot_cpu_data); 76 + } 77 + 78 + void xen_hypercall_setfunc(void) 79 + { 80 + if (static_call_query(xen_hypercall) != xen_hypercall_hvm) 81 + return; 82 + 83 + if ((boot_cpu_data.x86_vendor == X86_VENDOR_AMD || 84 + boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)) 85 + static_call_update(xen_hypercall, xen_hypercall_amd); 86 + else 87 + static_call_update(xen_hypercall, xen_hypercall_intel); 88 + } 89 + 90 + /* 91 + * Evaluate processor vendor in order to select the correct hypercall 92 + * function for HVM/PVH guests. 93 + * Might be called very early in boot before vendor has been set by 94 + * early_cpu_init(). 95 + */ 96 + noinstr void *__xen_hypercall_setfunc(void) 97 + { 98 + void (*func)(void); 99 + 100 + /* 101 + * Xen is supported only on CPUs with CPUID, so testing for 102 + * X86_FEATURE_CPUID is a test for early_cpu_init() having been 103 + * run. 104 + * 105 + * Note that __xen_hypercall_setfunc() is noinstr only due to a nasty 106 + * dependency chain: it is being called via the xen_hypercall static 107 + * call when running as a PVH or HVM guest. Hypercalls need to be 108 + * noinstr due to PV guests using hypercalls in noinstr code. So we
109 + can safely tag the function body as "instrumentation ok", since 110 + the PV guest requirement is not of interest here (xen_get_vendor() 111 + calls noinstr functions, and static_call_update_early() might do 112 + so, too). 113 + */ 114 + instrumentation_begin(); 115 + 116 + if (!boot_cpu_has(X86_FEATURE_CPUID)) 117 + xen_get_vendor(); 118 + 119 + if ((boot_cpu_data.x86_vendor == X86_VENDOR_AMD || 120 + boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)) 121 + func = xen_hypercall_amd; 122 + else 123 + func = xen_hypercall_intel; 124 + 125 + static_call_update_early(xen_hypercall, func); 126 + 127 + instrumentation_end(); 128 + 129 + return func; 130 + } 72 131 73 132 static int xen_cpu_up_online(unsigned int cpu) 74 133 {
+5 -8
arch/x86/xen/enlighten_hvm.c
··· 106 106 /* PVH set up hypercall page in xen_prepare_pvh(). */ 107 107 if (xen_pvh_domain()) 108 108 pv_info.name = "Xen PVH"; 109 - else { 110 - u64 pfn; 111 - uint32_t msr; 112 - 109 + else 113 110 pv_info.name = "Xen HVM"; 114 - msr = cpuid_ebx(base + 2); 115 - pfn = __pa(hypercall_page); 116 - wrmsr_safe(msr, (u32)pfn, (u32)(pfn >> 32)); 117 - } 118 111 119 112 xen_setup_features(); 120 113 ··· 292 299 293 300 if (xen_pv_domain()) 294 301 return 0; 302 + 303 + /* Set correct hypercall function. */ 304 + if (xen_domain) 305 + xen_hypercall_setfunc(); 295 306 296 307 if (xen_pvh_domain() && nopv) { 297 308 /* Guest booting via the Xen-PVH boot entry goes here */
+3 -1
arch/x86/xen/enlighten_pv.c
··· 1341 1341 1342 1342 xen_domain_type = XEN_PV_DOMAIN; 1343 1343 xen_start_flags = xen_start_info->flags; 1344 + /* Interrupts are guaranteed to be off initially. */ 1345 + early_boot_irqs_disabled = true; 1346 + static_call_update_early(xen_hypercall, xen_hypercall_pv); 1344 1347 1345 1348 xen_setup_features(); 1346 1349 ··· 1434 1431 WARN_ON(xen_cpuhp_setup(xen_cpu_up_prepare_pv, xen_cpu_dead_pv)); 1435 1432 1436 1433 local_irq_disable(); 1437 - early_boot_irqs_disabled = true; 1438 1434 1439 1435 xen_raw_console_write("mapping kernel into physical memory\n"); 1440 1436 xen_setup_kernel_pagetable((pgd_t *)xen_start_info->pt_base,
-7
arch/x86/xen/enlighten_pvh.c
··· 129 129 130 130 void __init xen_pvh_init(struct boot_params *boot_params) 131 131 { 132 - u32 msr; 133 - u64 pfn; 134 - 135 132 xen_pvh = 1; 136 133 xen_domain_type = XEN_HVM_DOMAIN; 137 134 xen_start_flags = pvh_start_info.flags; 138 - 139 - msr = cpuid_ebx(xen_cpuid_base() + 2); 140 - pfn = __pa(hypercall_page); 141 - wrmsr_safe(msr, (u32)pfn, (u32)(pfn >> 32)); 142 135 143 136 x86_init.oem.arch_setup = pvh_arch_setup; 144 137 x86_init.oem.banner = xen_banner;
+41 -9
arch/x86/xen/xen-asm.S
··· 20 20 21 21 #include <linux/init.h> 22 22 #include <linux/linkage.h> 23 + #include <linux/objtool.h> 23 24 #include <../entry/calling.h> 24 25 25 26 .pushsection .noinstr.text, "ax" 27 + /* 28 + * PV hypercall interface to the hypervisor. 29 + * 30 + * Called via inline asm(), so better preserve %rcx and %r11. 31 + * 32 + * Input: 33 + * %eax: hypercall number 34 + * %rdi, %rsi, %rdx, %r10, %r8: args 1..5 for the hypercall 35 + * Output: %rax 36 + */ 37 + SYM_FUNC_START(xen_hypercall_pv) 38 + ANNOTATE_NOENDBR 39 + push %rcx 40 + push %r11 41 + UNWIND_HINT_SAVE 42 + syscall 43 + UNWIND_HINT_RESTORE 44 + pop %r11 45 + pop %rcx 46 + RET 47 + SYM_FUNC_END(xen_hypercall_pv) 48 + 26 49 /* 27 50 * Disabling events is simply a matter of making the event mask 28 51 * non-zero. ··· 199 176 SYM_CODE_END(xen_early_idt_handler_array) 200 177 __FINIT 201 178 202 - hypercall_iret = hypercall_page + __HYPERVISOR_iret * 32 203 179 /* 204 180 * Xen64 iret frame: 205 181 * ··· 208 186 * cs 209 187 * rip <-- standard iret frame 210 188 * 211 - * flags 189 + * flags <-- xen_iret must push from here on 212 190 * 213 - * rcx } 214 - * r11 }<-- pushed by hypercall page 215 - * rsp->rax } 191 + * rcx 192 + * r11 193 + * rsp->rax 216 194 */ 195 + .macro xen_hypercall_iret 196 + pushq $0 /* Flags */ 197 + push %rcx 198 + push %r11 199 + push %rax 200 + mov $__HYPERVISOR_iret, %eax 201 + syscall /* Do the IRET. */ 202 + #ifdef CONFIG_MITIGATION_SLS 203 + int3 204 + #endif 205 + .endm 206 + 217 207 SYM_CODE_START(xen_iret) 218 208 UNWIND_HINT_UNDEFINED 219 209 ANNOTATE_NOENDBR 220 - pushq $0 221 - jmp hypercall_iret 210 + xen_hypercall_iret 222 211 SYM_CODE_END(xen_iret) 223 212 224 213 /* ··· 334 301 ENDBR 335 302 lea 16(%rsp), %rsp /* strip %rcx, %r11 */ 336 303 mov $-ENOSYS, %rax 337 - pushq $0 338 - jmp hypercall_iret 304 + xen_hypercall_iret 339 305 SYM_CODE_END(xen_entry_SYSENTER_compat) 340 306 SYM_CODE_END(xen_entry_SYSCALL_compat) 341 307
+83 -24
arch/x86/xen/xen-head.S
··· 6 6 7 7 #include <linux/elfnote.h> 8 8 #include <linux/init.h> 9 + #include <linux/instrumentation.h> 9 10 10 11 #include <asm/boot.h> 11 12 #include <asm/asm.h> 13 + #include <asm/frame.h> 12 14 #include <asm/msr.h> 13 15 #include <asm/page_types.h> 14 16 #include <asm/percpu.h> ··· 21 19 #include <xen/interface/xen.h> 22 20 #include <xen/interface/xen-mca.h> 23 21 #include <asm/xen/interface.h> 24 - 25 - .pushsection .noinstr.text, "ax" 26 - .balign PAGE_SIZE 27 - SYM_CODE_START(hypercall_page) 28 - .rept (PAGE_SIZE / 32) 29 - UNWIND_HINT_FUNC 30 - ANNOTATE_NOENDBR 31 - ANNOTATE_UNRET_SAFE 32 - ret 33 - /* 34 - * Xen will write the hypercall page, and sort out ENDBR. 35 - */ 36 - .skip 31, 0xcc 37 - .endr 38 - 39 - #define HYPERCALL(n) \ 40 - .equ xen_hypercall_##n, hypercall_page + __HYPERVISOR_##n * 32; \ 41 - .type xen_hypercall_##n, @function; .size xen_hypercall_##n, 32 42 - #include <asm/xen-hypercalls.h> 43 - #undef HYPERCALL 44 - SYM_CODE_END(hypercall_page) 45 - .popsection 46 22 47 23 #ifdef CONFIG_XEN_PV 48 24 __INIT ··· 67 87 #endif 68 88 #endif 69 89 90 + .pushsection .noinstr.text, "ax" 91 + /* 92 + * Xen hypercall interface to the hypervisor. 93 + * 94 + * Input: 95 + * %eax: hypercall number 96 + * 32-bit: 97 + * %ebx, %ecx, %edx, %esi, %edi: args 1..5 for the hypercall 98 + * 64-bit: 99 + * %rdi, %rsi, %rdx, %r10, %r8: args 1..5 for the hypercall 100 + * Output: %[er]ax 101 + */ 102 + SYM_FUNC_START(xen_hypercall_hvm) 103 + ENDBR 104 + FRAME_BEGIN 105 + /* Save all relevant registers (caller save and arguments). */ 106 + #ifdef CONFIG_X86_32 107 + push %eax 108 + push %ebx 109 + push %ecx 110 + push %edx 111 + push %esi 112 + push %edi 113 + #else 114 + push %rax 115 + push %rcx 116 + push %rdx 117 + push %rdi 118 + push %rsi 119 + push %r11 120 + push %r10 121 + push %r9 122 + push %r8 123 + #ifdef CONFIG_FRAME_POINTER 124 + pushq $0 /* Dummy push for stack alignment. */ 125 + #endif 126 + #endif 127 + /* Set the vendor specific function. */
128 + call __xen_hypercall_setfunc 129 + /* Set ZF = 1 if AMD, Restore saved registers. */ 130 + #ifdef CONFIG_X86_32 131 + lea xen_hypercall_amd, %ebx 132 + cmp %eax, %ebx 133 + pop %edi 134 + pop %esi 135 + pop %edx 136 + pop %ecx 137 + pop %ebx 138 + pop %eax 139 + #else 140 + lea xen_hypercall_amd(%rip), %rbx 141 + cmp %rax, %rbx 142 + #ifdef CONFIG_FRAME_POINTER 143 + pop %rax /* Dummy pop. */ 144 + #endif 145 + pop %r8 146 + pop %r9 147 + pop %r10 148 + pop %r11 149 + pop %rsi 150 + pop %rdi 151 + pop %rdx 152 + pop %rcx 153 + pop %rax 154 + #endif 155 + /* Use correct hypercall function. */ 156 + jz xen_hypercall_amd 157 + jmp xen_hypercall_intel 158 + SYM_FUNC_END(xen_hypercall_hvm) 159 + 160 + SYM_FUNC_START(xen_hypercall_amd) 161 + vmmcall 162 + RET 163 + SYM_FUNC_END(xen_hypercall_amd) 164 + 165 + SYM_FUNC_START(xen_hypercall_intel) 166 + vmcall 167 + RET 168 + SYM_FUNC_END(xen_hypercall_intel) 169 + .popsection 170 + 70 171 ELFNOTE(Xen, XEN_ELFNOTE_GUEST_OS, .asciz "linux") 71 172 ELFNOTE(Xen, XEN_ELFNOTE_GUEST_VERSION, .asciz "2.6") 72 173 ELFNOTE(Xen, XEN_ELFNOTE_XEN_VERSION, .asciz "xen-3.0") ··· 177 116 #else 178 117 # define FEATURES_DOM0 0 179 118 #endif 180 - ELFNOTE(Xen, XEN_ELFNOTE_HYPERCALL_PAGE, .globl xen_elfnote_hypercall_page; 181 - xen_elfnote_hypercall_page: _ASM_PTR xen_elfnote_hypercall_page_value - .) 182 119 ELFNOTE(Xen, XEN_ELFNOTE_SUPPORTED_FEATURES, 183 120 .long FEATURES_PV | FEATURES_PVH | FEATURES_DOM0) 184 121 ELFNOTE(Xen, XEN_ELFNOTE_LOADER, .asciz "generic")
+9
arch/x86/xen/xen-ops.h
··· 326 326 static inline void xen_smp_count_cpus(void) { } 327 327 #endif /* CONFIG_SMP */ 328 328 329 + #ifdef CONFIG_XEN_PV 330 + void xen_hypercall_pv(void); 331 + #endif 332 + void xen_hypercall_hvm(void); 333 + void xen_hypercall_amd(void); 334 + void xen_hypercall_intel(void); 335 + void xen_hypercall_setfunc(void); 336 + void *__xen_hypercall_setfunc(void); 337 + 329 338 #endif /* XEN_OPS_H */
+1 -2
block/bdev.c
··· 155 155 struct inode *inode = file->f_mapping->host; 156 156 struct block_device *bdev = I_BDEV(inode); 157 157 158 - /* Size must be a power of two, and between 512 and PAGE_SIZE */ 159 - if (size > PAGE_SIZE || size < 512 || !is_power_of_2(size)) 158 + if (blk_validate_block_size(size)) 160 159 return -EINVAL; 161 160 162 161 /* Size cannot be smaller than the size supported by the device */
+10 -2
block/bfq-iosched.c
··· 6844 6844 if (new_bfqq == waker_bfqq) { 6845 6845 /* 6846 6846 * If waker_bfqq is in the merge chain, and current 6847 - * is the only procress. 6847 + * is the only process, waker_bfqq can be freed. 6848 6848 */ 6849 6849 if (bfqq_process_refs(waker_bfqq) == 1) 6850 6850 return NULL; 6851 - break; 6851 + 6852 + return waker_bfqq; 6852 6853 } 6853 6854 6854 6855 new_bfqq = new_bfqq->new_bfqq; 6855 6856 } 6857 + 6858 + /* 6859 + * If waker_bfqq is not in the merge chain, and its process reference 6860 + * is 0, waker_bfqq can be freed. 6861 + */ 6862 + if (bfqq_process_refs(waker_bfqq) == 0) 6863 + return NULL; 6856 6864 6857 6865 return waker_bfqq; 6858 6866 }
+10 -6
block/blk-mq-sysfs.c
··· 275 275 struct blk_mq_hw_ctx *hctx; 276 276 unsigned long i; 277 277 278 - lockdep_assert_held(&q->sysfs_dir_lock); 279 - 278 + mutex_lock(&q->sysfs_dir_lock); 280 279 if (!q->mq_sysfs_init_done) 281 - return; 280 + goto unlock; 282 281 283 282 queue_for_each_hw_ctx(q, hctx, i) 284 283 blk_mq_unregister_hctx(hctx); 284 + 285 + unlock: 286 + mutex_unlock(&q->sysfs_dir_lock); 285 287 } 286 288 287 289 int blk_mq_sysfs_register_hctxs(struct request_queue *q) ··· 292 290 unsigned long i; 293 291 int ret = 0; 294 292 295 - lockdep_assert_held(&q->sysfs_dir_lock); 296 - 293 + mutex_lock(&q->sysfs_dir_lock); 297 294 if (!q->mq_sysfs_init_done) 298 - return ret; 295 + goto unlock; 299 296 300 297 queue_for_each_hw_ctx(q, hctx, i) { 301 298 ret = blk_mq_register_hctx(hctx); 302 299 if (ret) 303 300 break; 304 301 } 302 + 303 + unlock: 304 + mutex_unlock(&q->sysfs_dir_lock); 305 305 306 306 return ret; 307 307 }
+21 -19
block/blk-mq.c
··· 4412 4412 } 4413 4413 EXPORT_SYMBOL(blk_mq_alloc_disk_for_queue); 4414 4414 4415 + /* 4416 + * Only hctx removed from cpuhp list can be reused 4417 + */ 4418 + static bool blk_mq_hctx_is_reusable(struct blk_mq_hw_ctx *hctx) 4419 + { 4420 + return hlist_unhashed(&hctx->cpuhp_online) && 4421 + hlist_unhashed(&hctx->cpuhp_dead); 4422 + } 4423 + 4415 4424 static struct blk_mq_hw_ctx *blk_mq_alloc_and_init_hctx( 4416 4425 struct blk_mq_tag_set *set, struct request_queue *q, 4417 4426 int hctx_idx, int node) ··· 4430 4421 /* reuse dead hctx first */ 4431 4422 spin_lock(&q->unused_hctx_lock); 4432 4423 list_for_each_entry(tmp, &q->unused_hctx_list, hctx_list) { 4433 - if (tmp->numa_node == node) { 4424 + if (tmp->numa_node == node && blk_mq_hctx_is_reusable(tmp)) { 4434 4425 hctx = tmp; 4435 4426 break; 4436 4427 } ··· 4462 4453 unsigned long i, j; 4463 4454 4464 4455 /* protect against switching io scheduler */ 4465 - lockdep_assert_held(&q->sysfs_lock); 4466 - 4456 + mutex_lock(&q->sysfs_lock); 4467 4457 for (i = 0; i < set->nr_hw_queues; i++) { 4468 4458 int old_node; 4469 4459 int node = blk_mq_get_hctx_node(set, i); ··· 4495 4487 4496 4488 xa_for_each_start(&q->hctx_table, j, hctx, j) 4497 4489 blk_mq_exit_hctx(q, set, hctx, j); 4490 + mutex_unlock(&q->sysfs_lock); 4498 4491 4499 4492 /* unregister cpuhp callbacks for exited hctxs */ 4500 4493 blk_mq_remove_hw_queues_cpuhp(q); ··· 4527 4518 4528 4519 xa_init(&q->hctx_table); 4529 4520 4530 - mutex_lock(&q->sysfs_lock); 4531 - 4532 4521 blk_mq_realloc_hw_ctxs(set, q); 4533 4522 if (!q->nr_hw_queues) 4534 4523 goto err_hctxs; 4535 - 4536 - mutex_unlock(&q->sysfs_lock); 4537 4524 4538 4525 INIT_WORK(&q->timeout_work, blk_mq_timeout_work); 4539 4526 blk_queue_rq_timeout(q, set->timeout ? set->timeout : 30 * HZ);
··· 4549 4544 return 0; 4550 4545 4551 4546 err_hctxs: 4552 - mutex_unlock(&q->sysfs_lock); 4553 4547 blk_mq_release(q); 4554 4548 err_exit: 4555 4549 q->mq_ops = NULL; ··· 4929 4925 return false; 4930 4926 4931 4927 /* q->elevator needs protection from ->sysfs_lock */ 4932 - lockdep_assert_held(&q->sysfs_lock); 4928 + mutex_lock(&q->sysfs_lock); 4933 4929 4934 4930 /* the check has to be done with holding sysfs_lock */ 4935 4931 if (!q->elevator) { 4936 4932 kfree(qe); 4937 - goto out; 4933 + goto unlock; 4938 4934 } 4939 4935 4940 4936 INIT_LIST_HEAD(&qe->node); ··· 4944 4940 __elevator_get(qe->type); 4945 4941 list_add(&qe->node, head); 4946 4942 elevator_disable(q); 4947 - out: 4943 + unlock: 4944 + mutex_unlock(&q->sysfs_lock); 4945 + 4948 4946 return true; 4949 4947 } ··· 4975 4969 list_del(&qe->node); 4976 4970 kfree(qe); 4977 4971 4972 + mutex_lock(&q->sysfs_lock); 4978 4973 elevator_switch(q, t); 4979 4974 /* drop the reference acquired in blk_mq_elv_switch_none */ 4980 4975 elevator_put(t); 4976 + mutex_unlock(&q->sysfs_lock); 4981 4977 } 4982 4978 4983 4979 static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, ··· 4999 4991 if (set->nr_maps == 1 && nr_hw_queues == set->nr_hw_queues) 5000 4992 return; 5001 4993 5002 - list_for_each_entry(q, &set->tag_list, tag_set_list) { 5003 - mutex_lock(&q->sysfs_dir_lock); 5004 - mutex_lock(&q->sysfs_lock); 4994 + list_for_each_entry(q, &set->tag_list, tag_set_list) 5005 4995 blk_mq_freeze_queue(q); 5006 - } 5007 4996 /* 5008 4997 * Switch IO scheduler to 'none', cleaning up the data associated 5009 4998 * with the previous scheduler. We will switch back once we are done
··· 5056 5051 list_for_each_entry(q, &set->tag_list, tag_set_list) 5057 5052 blk_mq_elv_switch_back(&head, q); 5058 5053 5059 - list_for_each_entry(q, &set->tag_list, tag_set_list) { 5054 + list_for_each_entry(q, &set->tag_list, tag_set_list) 5060 5055 blk_mq_unfreeze_queue(q); 5061 - mutex_unlock(&q->sysfs_lock); 5062 - mutex_unlock(&q->sysfs_dir_lock); 5063 - } 5064 5056 5065 5057 /* Free the excess tags when nr_hw_queues shrink. */ 5066 5058 for (i = set->nr_hw_queues; i < prev_nr_hw_queues; i++)
+2 -2
block/blk-sysfs.c
··· 706 706 if (entry->load_module) 707 707 entry->load_module(disk, page, length); 708 708 709 - mutex_lock(&q->sysfs_lock); 710 709 blk_mq_freeze_queue(q); 710 + mutex_lock(&q->sysfs_lock); 711 711 res = entry->store(disk, page, length); 712 - blk_mq_unfreeze_queue(q); 713 712 mutex_unlock(&q->sysfs_lock); 713 + blk_mq_unfreeze_queue(q); 714 714 return res; 715 715 } 716 716
+1 -1
drivers/accel/ivpu/ivpu_gem.c
··· 409 409 mutex_lock(&bo->lock); 410 410 411 411 drm_printf(p, "%-9p %-3u 0x%-12llx %-10lu 0x%-8x %-4u", 412 - bo, bo->ctx->id, bo->vpu_addr, bo->base.base.size, 412 + bo, bo->ctx ? bo->ctx->id : 0, bo->vpu_addr, bo->base.base.size, 413 413 bo->flags, kref_read(&bo->base.base.refcount)); 414 414 415 415 if (bo->base.pages)
+7 -3
drivers/accel/ivpu/ivpu_mmu_context.c
··· 612 612 if (!ivpu_mmu_ensure_pgd(vdev, &vdev->rctx.pgtable)) { 613 613 ivpu_err(vdev, "Failed to allocate root page table for reserved context\n"); 614 614 ret = -ENOMEM; 615 - goto unlock; 615 + goto err_ctx_fini; 616 616 } 617 617 618 618 ret = ivpu_mmu_cd_set(vdev, vdev->rctx.id, &vdev->rctx.pgtable); 619 619 if (ret) { 620 620 ivpu_err(vdev, "Failed to set context descriptor for reserved context\n"); 621 - goto unlock; 621 + goto err_ctx_fini; 622 622 } 623 623 624 - unlock: 625 624 mutex_unlock(&vdev->rctx.lock); 625 + return ret; 626 + 627 + err_ctx_fini: 628 + mutex_unlock(&vdev->rctx.lock); 629 + ivpu_mmu_context_fini(vdev, &vdev->rctx); 626 630 return ret; 627 631 } 628 632
+1 -1
drivers/accel/ivpu/ivpu_pm.c
··· 378 378 379 379 pm_runtime_use_autosuspend(dev); 380 380 pm_runtime_set_autosuspend_delay(dev, delay); 381 + pm_runtime_set_active(dev); 381 382 382 383 ivpu_dbg(vdev, PM, "Autosuspend delay = %d\n", delay); 383 384 } ··· 393 392 { 394 393 struct device *dev = vdev->drm.dev; 395 394 396 - pm_runtime_set_active(dev); 397 395 pm_runtime_allow(dev); 398 396 pm_runtime_mark_last_busy(dev); 399 397 pm_runtime_put_autosuspend(dev);
+2 -2
drivers/acpi/Kconfig
··· 135 135 config ACPI_EC 136 136 bool "Embedded Controller" 137 137 depends on HAS_IOPORT 138 - default X86 138 + default X86 || LOONGARCH 139 139 help 140 140 This driver handles communication with the microcontroller 141 - on many x86 laptops and other machines. 141 + on many x86/LoongArch laptops and other machines. 142 142 143 143 config ACPI_EC_DEBUGFS 144 144 tristate "EC read/write access through /sys/kernel/debug/ec"
+21 -3
drivers/acpi/resource.c
··· 441 441 }, 442 442 }, 443 443 { 444 + /* Asus Vivobook X1504VAP */ 445 + .matches = { 446 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 447 + DMI_MATCH(DMI_BOARD_NAME, "X1504VAP"), 448 + }, 449 + }, 450 + { 444 451 /* Asus Vivobook X1704VAP */ 445 452 .matches = { 446 453 DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), ··· 653 646 DMI_MATCH(DMI_BOARD_NAME, "GMxHGxx"), 654 647 }, 655 648 }, 649 + { 650 + /* 651 + * TongFang GM5HG0A in case of the SKIKK Vanaheim relabel the 652 + * board-name is changed, so check OEM strings instead. Note 653 + * OEM string matches are always exact matches. 654 + * https://bugzilla.kernel.org/show_bug.cgi?id=219614 655 + */ 656 + .matches = { 657 + DMI_EXACT_MATCH(DMI_OEM_STRING, "GM5HG0A"), 658 + }, 659 + }, 656 660 { } 657 661 }; 658 662 ··· 689 671 for (i = 0; i < ARRAY_SIZE(override_table); i++) { 690 672 const struct irq_override_cmp *entry = &override_table[i]; 691 673 692 - if (dmi_check_system(entry->system) && 693 - entry->irq == gsi && 674 + if (entry->irq == gsi && 694 675 entry->triggering == triggering && 695 676 entry->polarity == polarity && 696 - entry->shareable == shareable) 677 + entry->shareable == shareable && 678 + dmi_check_system(entry->system)) 697 679 return entry->override; 698 680 } 699 681
+1 -1
drivers/auxdisplay/Kconfig
··· 489 489 490 490 config HT16K33 491 491 tristate "Holtek Ht16K33 LED controller with keyscan" 492 - depends on FB && I2C && INPUT 492 + depends on FB && I2C && INPUT && BACKLIGHT_CLASS_DEVICE 493 493 select FB_SYSMEM_HELPERS 494 494 select INPUT_MATRIXKMAP 495 495 select FB_BACKLIGHT
+20 -4
drivers/base/topology.c
··· 27 27 loff_t off, size_t count) \ 28 28 { \ 29 29 struct device *dev = kobj_to_dev(kobj); \ 30 + cpumask_var_t mask; \ 31 + ssize_t n; \ 30 32 \ 31 - return cpumap_print_bitmask_to_buf(buf, topology_##mask(dev->id), \ 32 - off, count); \ 33 + if (!alloc_cpumask_var(&mask, GFP_KERNEL)) \ 34 + return -ENOMEM; \ 35 + \ 36 + cpumask_copy(mask, topology_##mask(dev->id)); \ 37 + n = cpumap_print_bitmask_to_buf(buf, mask, off, count); \ 38 + free_cpumask_var(mask); \ 39 + \ 40 + return n; \ 33 41 } \ 34 42 \ 35 43 static ssize_t name##_list_read(struct file *file, struct kobject *kobj, \ ··· 45 37 loff_t off, size_t count) \ 46 38 { \ 47 39 struct device *dev = kobj_to_dev(kobj); \ 40 + cpumask_var_t mask; \ 41 + ssize_t n; \ 48 42 \ 49 - return cpumap_print_list_to_buf(buf, topology_##mask(dev->id), \ 50 - off, count); \ 43 + if (!alloc_cpumask_var(&mask, GFP_KERNEL)) \ 44 + return -ENOMEM; \ 45 + \ 46 + cpumask_copy(mask, topology_##mask(dev->id)); \ 47 + n = cpumap_print_list_to_buf(buf, mask, off, count); \ 48 + free_cpumask_var(mask); \ 49 + \ 50 + return n; \ 51 51 } 52 52 53 53 define_id_show_func(physical_package_id, "%d");
+17 -9
drivers/block/ublk_drv.c
··· 1618 1618 blk_mq_kick_requeue_list(ub->ub_disk->queue); 1619 1619 } 1620 1620 1621 + static struct gendisk *ublk_detach_disk(struct ublk_device *ub) 1622 + { 1623 + struct gendisk *disk; 1624 + 1625 + /* Sync with ublk_abort_queue() by holding the lock */ 1626 + spin_lock(&ub->lock); 1627 + disk = ub->ub_disk; 1628 + ub->dev_info.state = UBLK_S_DEV_DEAD; 1629 + ub->dev_info.ublksrv_pid = -1; 1630 + ub->ub_disk = NULL; 1631 + spin_unlock(&ub->lock); 1632 + 1633 + return disk; 1634 + } 1635 + 1621 1636 static void ublk_stop_dev(struct ublk_device *ub) 1622 1637 { 1623 1638 struct gendisk *disk; ··· 1646 1631 ublk_unquiesce_dev(ub); 1647 1632 } 1648 1633 del_gendisk(ub->ub_disk); 1649 - 1650 - /* Sync with ublk_abort_queue() by holding the lock */ 1651 - spin_lock(&ub->lock); 1652 - disk = ub->ub_disk; 1653 - ub->dev_info.state = UBLK_S_DEV_DEAD; 1654 - ub->dev_info.ublksrv_pid = -1; 1655 - ub->ub_disk = NULL; 1656 - spin_unlock(&ub->lock); 1634 + disk = ublk_detach_disk(ub); 1657 1635 put_disk(disk); 1658 1636 unlock: 1659 1637 mutex_unlock(&ub->mutex); ··· 2344 2336 2345 2337 out_put_cdev: 2346 2338 if (ret) { 2347 - ub->dev_info.state = UBLK_S_DEV_DEAD; 2339 + ublk_detach_disk(ub); 2348 2340 ublk_put_device(ub); 2349 2341 } 2350 2342 if (ret)
+10 -5
drivers/block/zram/zram_drv.c
··· 614 614 } 615 615 616 616 nr_pages = i_size_read(inode) >> PAGE_SHIFT; 617 + /* Refuse to use zero sized device (also prevents self reference) */ 618 + if (!nr_pages) { 619 + err = -EINVAL; 620 + goto out; 621 + } 622 + 617 623 bitmap_sz = BITS_TO_LONGS(nr_pages) * sizeof(long); 618 624 bitmap = kvzalloc(bitmap_sz, GFP_KERNEL); 619 625 if (!bitmap) { ··· 1444 1438 size_t num_pages = disksize >> PAGE_SHIFT; 1445 1439 size_t index; 1446 1440 1441 + if (!zram->table) 1442 + return; 1443 + 1447 1444 /* Free all pages that are still in this zram device */ 1448 1445 for (index = 0; index < num_pages; index++) 1449 1446 zram_free_page(zram, index); 1450 1447 1451 1448 zs_destroy_pool(zram->mem_pool); 1452 1449 vfree(zram->table); 1450 + zram->table = NULL; 1453 1451 } 1454 1452 1455 1453 static bool zram_meta_alloc(struct zram *zram, u64 disksize) ··· 2329 2319 down_write(&zram->init_lock); 2330 2320 2331 2321 zram->limit_pages = 0; 2332 - 2333 - if (!init_done(zram)) { 2334 - up_write(&zram->init_lock); 2335 - return; 2336 - } 2337 2322 2338 2323 set_capacity_and_notify(zram->disk, 0); 2339 2324 part_stat_set_all(zram->disk->part0, 0);
+7
drivers/bluetooth/btmtk.c
··· 1472 1472 1473 1473 int btmtk_usb_shutdown(struct hci_dev *hdev) 1474 1474 { 1475 + struct btmtk_data *data = hci_get_priv(hdev); 1475 1476 struct btmtk_hci_wmt_params wmt_params; 1476 1477 u8 param = 0; 1477 1478 int err; 1479 + 1480 + err = usb_autopm_get_interface(data->intf); 1481 + if (err < 0) 1482 + return err; 1478 1483 1479 1484 /* Disable the device */ 1480 1485 wmt_params.op = BTMTK_WMT_FUNC_CTRL; ··· 1491 1486 err = btmtk_usb_hci_wmt_sync(hdev, &wmt_params); 1492 1487 if (err < 0) { 1493 1488 bt_dev_err(hdev, "Failed to send wmt func ctrl (%d)", err); 1489 + usb_autopm_put_interface(data->intf); 1494 1490 return err; 1495 1491 } 1496 1492 1493 + usb_autopm_put_interface(data->intf); 1497 1494 return 0; 1498 1495 } 1499 1496 EXPORT_SYMBOL_GPL(btmtk_usb_shutdown);
+1
drivers/bluetooth/btnxpuart.c
··· 1381 1381 1382 1382 while ((skb = nxp_dequeue(nxpdev))) { 1383 1383 len = serdev_device_write_buf(serdev, skb->data, skb->len); 1384 + serdev_device_wait_until_sent(serdev, 0); 1384 1385 hdev->stat.byte_tx += len; 1385 1386 1386 1387 skb_pull(skb, len);
+1 -1
drivers/bus/mhi/host/pci_generic.c
··· 917 917 return err; 918 918 } 919 919 920 - mhi_cntrl->regs = pcim_iomap_region(pdev, 1 << bar_num, pci_name(pdev)); 920 + mhi_cntrl->regs = pcim_iomap_region(pdev, bar_num, pci_name(pdev)); 921 921 if (IS_ERR(mhi_cntrl->regs)) { 922 922 err = PTR_ERR(mhi_cntrl->regs); 923 923 dev_err(&pdev->dev, "failed to map pci region: %d\n", err);
+1 -1
drivers/cdrom/cdrom.c
··· 1106 1106 } 1107 1107 } 1108 1108 1109 - cd_dbg(CD_OPEN, "all seems well, opening the devicen"); 1109 + cd_dbg(CD_OPEN, "all seems well, opening the device\n"); 1110 1110 1111 1111 /* all seems well, we can open the device */ 1112 1112 ret = cdo->open(cdi, 0); /* open for data */
+2 -1
drivers/clk/imx/clk-imx8mp-audiomix.c
··· 278 278 279 279 #else /* !CONFIG_RESET_CONTROLLER */ 280 280 281 - static int clk_imx8mp_audiomix_reset_controller_register(struct clk_imx8mp_audiomix_priv *priv) 281 + static int clk_imx8mp_audiomix_reset_controller_register(struct device *dev, 282 + struct clk_imx8mp_audiomix_priv *priv) 282 283 { 283 284 return 0; 284 285 }
+12 -1
drivers/clk/thead/clk-th1520-ap.c
··· 779 779 }, 780 780 }; 781 781 782 + static CLK_FIXED_FACTOR_HW(emmc_sdio_ref_clk, "emmc-sdio-ref", 783 + &video_pll_clk.common.hw, 4, 1, 0); 784 + 785 + static const struct clk_parent_data emmc_sdio_ref_clk_pd[] = { 786 + { .hw = &emmc_sdio_ref_clk.hw }, 787 + }; 788 + 782 789 static CCU_GATE(CLK_BROM, brom_clk, "brom", ahb2_cpusys_hclk_pd, 0x100, BIT(4), 0); 783 790 static CCU_GATE(CLK_BMU, bmu_clk, "bmu", axi4_cpusys2_aclk_pd, 0x100, BIT(5), 0); 784 791 static CCU_GATE(CLK_AON2CPU_A2X, aon2cpu_a2x_clk, "aon2cpu-a2x", axi4_cpusys2_aclk_pd, ··· 805 798 0x150, BIT(12), 0); 806 799 static CCU_GATE(CLK_NPU_AXI, npu_axi_clk, "npu-axi", axi_aclk_pd, 0x1c8, BIT(5), 0); 807 800 static CCU_GATE(CLK_CPU2VP, cpu2vp_clk, "cpu2vp", axi_aclk_pd, 0x1e0, BIT(13), 0); 808 - static CCU_GATE(CLK_EMMC_SDIO, emmc_sdio_clk, "emmc-sdio", video_pll_clk_pd, 0x204, BIT(30), 0); 801 + static CCU_GATE(CLK_EMMC_SDIO, emmc_sdio_clk, "emmc-sdio", emmc_sdio_ref_clk_pd, 0x204, BIT(30), 0); 809 802 static CCU_GATE(CLK_GMAC1, gmac1_clk, "gmac1", gmac_pll_clk_pd, 0x204, BIT(26), 0); 810 803 static CCU_GATE(CLK_PADCTRL1, padctrl1_clk, "padctrl1", perisys_apb_pclk_pd, 0x204, BIT(24), 0); 811 804 static CCU_GATE(CLK_DSMART, dsmart_clk, "dsmart", perisys_apb_pclk_pd, 0x204, BIT(23), 0); ··· 1065 1058 if (ret) 1066 1059 return ret; 1067 1060 priv->hws[CLK_PLL_GMAC_100M] = &gmac_pll_clk_100m.hw; 1061 + 1062 + ret = devm_clk_hw_register(dev, &emmc_sdio_ref_clk.hw); 1063 + if (ret) 1064 + return ret; 1068 1065 1069 1066 ret = devm_of_clk_add_hw_provider(dev, of_clk_hw_onecell_get, priv); 1070 1067 if (ret)
+13 -1
drivers/clocksource/hyperv_timer.c
··· 27 27 #include <asm/mshyperv.h> 28 28 29 29 static struct clock_event_device __percpu *hv_clock_event; 30 - static u64 hv_sched_clock_offset __ro_after_init; 30 + /* Note: offset can hold negative values after hibernation. */ 31 + static u64 hv_sched_clock_offset __read_mostly; 31 32 32 33 /* 33 34 * If false, we're using the old mechanism for stimer0 interrupts ··· 469 468 tsc_msr.enable = 1; 470 469 tsc_msr.pfn = tsc_pfn; 471 470 hv_set_msr(HV_MSR_REFERENCE_TSC, tsc_msr.as_uint64); 471 + } 472 + 473 + /* 474 + * Called during resume from hibernation, from overridden 475 + * x86_platform.restore_sched_clock_state routine. This is to adjust offsets 476 + * used to calculate time for hv tsc page based sched_clock, to account for 477 + * time spent before hibernation. 478 + */ 479 + void hv_adj_sched_clock_offset(u64 offset) 480 + { 481 + hv_sched_clock_offset -= offset; 472 482 } 473 483 474 484 #ifdef HAVE_VDSO_CLOCKMODE_HVCLOCK
+26 -24
drivers/cpufreq/amd-pstate.c
··· 374 374 375 375 static int msr_init_perf(struct amd_cpudata *cpudata) 376 376 { 377 - u64 cap1; 377 + u64 cap1, numerator; 378 378 379 379 int ret = rdmsrl_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1, 380 380 &cap1); 381 381 if (ret) 382 382 return ret; 383 383 384 - WRITE_ONCE(cpudata->highest_perf, AMD_CPPC_HIGHEST_PERF(cap1)); 385 - WRITE_ONCE(cpudata->max_limit_perf, AMD_CPPC_HIGHEST_PERF(cap1)); 384 + ret = amd_get_boost_ratio_numerator(cpudata->cpu, &numerator); 385 + if (ret) 386 + return ret; 387 + 388 + WRITE_ONCE(cpudata->highest_perf, numerator); 389 + WRITE_ONCE(cpudata->max_limit_perf, numerator); 386 390 WRITE_ONCE(cpudata->nominal_perf, AMD_CPPC_NOMINAL_PERF(cap1)); 387 391 WRITE_ONCE(cpudata->lowest_nonlinear_perf, AMD_CPPC_LOWNONLIN_PERF(cap1)); 388 392 WRITE_ONCE(cpudata->lowest_perf, AMD_CPPC_LOWEST_PERF(cap1)); ··· 398 394 static int shmem_init_perf(struct amd_cpudata *cpudata) 399 395 { 400 396 struct cppc_perf_caps cppc_perf; 397 + u64 numerator; 401 398 402 399 int ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf); 403 400 if (ret) 404 401 return ret; 405 402 406 - WRITE_ONCE(cpudata->highest_perf, cppc_perf.highest_perf); 407 - WRITE_ONCE(cpudata->max_limit_perf, cppc_perf.highest_perf); 403 + ret = amd_get_boost_ratio_numerator(cpudata->cpu, &numerator); 404 + if (ret) 405 + return ret; 406 + 407 + WRITE_ONCE(cpudata->highest_perf, numerator); 408 + WRITE_ONCE(cpudata->max_limit_perf, numerator); 408 409 WRITE_ONCE(cpudata->nominal_perf, cppc_perf.nominal_perf); 409 410 WRITE_ONCE(cpudata->lowest_nonlinear_perf, 410 411 cppc_perf.lowest_nonlinear_perf); ··· 570 561 571 562 static int amd_pstate_update_min_max_limit(struct cpufreq_policy *policy) 572 563 { 573 - u32 max_limit_perf, min_limit_perf, lowest_perf, max_perf; 564 + u32 max_limit_perf, min_limit_perf, lowest_perf, max_perf, max_freq; 574 565 struct amd_cpudata *cpudata = policy->driver_data; 575 566 576 - if (cpudata->boost_supported && !policy->boost_enabled) 577 - max_perf = READ_ONCE(cpudata->nominal_perf);
578 - else 579 - max_perf = READ_ONCE(cpudata->highest_perf); 580 - 581 - max_limit_perf = div_u64(policy->max * max_perf, policy->cpuinfo.max_freq); 582 - min_limit_perf = div_u64(policy->min * max_perf, policy->cpuinfo.max_freq); 567 + max_perf = READ_ONCE(cpudata->highest_perf); 568 + max_freq = READ_ONCE(cpudata->max_freq); 569 + max_limit_perf = div_u64(policy->max * max_perf, max_freq); 570 + min_limit_perf = div_u64(policy->min * max_perf, max_freq); 583 571 584 572 lowest_perf = READ_ONCE(cpudata->lowest_perf); 585 573 if (min_limit_perf < lowest_perf) ··· 895 889 { 896 890 int ret; 897 891 u32 min_freq, max_freq; 898 - u64 numerator; 899 892 u32 nominal_perf, nominal_freq; 900 893 u32 lowest_nonlinear_perf, lowest_nonlinear_freq; 901 894 u32 boost_ratio, lowest_nonlinear_ratio; ··· 916 911 917 912 nominal_perf = READ_ONCE(cpudata->nominal_perf); 918 913 919 - ret = amd_get_boost_ratio_numerator(cpudata->cpu, &numerator); 920 - if (ret) 921 - return ret; 922 - boost_ratio = div_u64(numerator << SCHED_CAPACITY_SHIFT, nominal_perf); 914 + boost_ratio = div_u64(cpudata->highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf); 923 915 max_freq = (nominal_freq * boost_ratio >> SCHED_CAPACITY_SHIFT) * 1000; 924 916 925 917 lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf); ··· 1871 1869 static_call_update(amd_pstate_update_perf, shmem_update_perf); 1872 1870 } 1873 1871 1874 - ret = amd_pstate_register_driver(cppc_state); 1875 - if (ret) { 1876 - pr_err("failed to register with return %d\n", ret); 1877 - return ret; 1878 - } 1879 - 1880 1872 if (amd_pstate_prefcore) { 1881 1873 ret = amd_detect_prefcore(&amd_pstate_prefcore); 1882 1874 if (ret) 1883 1875 return ret; 1876 + } 1877 + 1878 + ret = amd_pstate_register_driver(cppc_state); 1879 + if (ret) { 1880 + pr_err("failed to register with return %d\n", ret); 1881 + return ret; 1884 1882 } 1885 1883 1886 1884 dev_root = bus_get_dev_root(&cpu_subsys);
+2 -2
drivers/cpuidle/cpuidle-riscv-sbi.c
··· 504 504 int cpu, ret; 505 505 struct cpuidle_driver *drv; 506 506 struct cpuidle_device *dev; 507 - struct device_node *np, *pds_node; 507 + struct device_node *pds_node; 508 508 509 509 /* Detect OSI support based on CPU DT nodes */ 510 510 sbi_cpuidle_use_osi = true; 511 511 for_each_possible_cpu(cpu) { 512 - np = of_cpu_device_node_get(cpu); 512 + struct device_node *np __free(device_node) = of_cpu_device_node_get(cpu); 513 513 if (np && 514 514 of_property_present(np, "power-domains") && 515 515 of_property_present(np, "power-domain-names")) {
+18 -7
drivers/cxl/core/region.c
··· 1295 1295 struct cxl_region_params *p = &cxlr->params; 1296 1296 struct cxl_decoder *cxld = cxl_rr->decoder; 1297 1297 struct cxl_switch_decoder *cxlsd; 1298 + struct cxl_port *iter = port; 1298 1299 u16 eig, peig; 1299 1300 u8 eiw, peiw; 1300 1301 ··· 1312 1311 1313 1312 cxlsd = to_cxl_switch_decoder(&cxld->dev); 1314 1313 if (cxl_rr->nr_targets_set) { 1315 - int i, distance; 1314 + int i, distance = 1; 1315 + struct cxl_region_ref *cxl_rr_iter; 1316 1316 1317 1317 /* 1318 - * Passthrough decoders impose no distance requirements between 1319 - * peers 1318 + * The "distance" between peer downstream ports represents which 1319 + * endpoint positions in the region interleave a given port can 1320 + * host. 1321 + * 1322 + * For example, at the root of a hierarchy the distance is 1323 + * always 1 as every index targets a different host-bridge. At 1324 + * each subsequent switch level those ports map every Nth region 1325 + * position where N is the width of the switch == distance. 1320 1326 */ 1321 - if (cxl_rr->nr_targets == 1) 1322 - distance = 0; 1323 - else 1324 - distance = p->nr_targets / cxl_rr->nr_targets; 1327 + do { 1328 + cxl_rr_iter = cxl_rr_load(iter, cxlr); 1329 + distance *= cxl_rr_iter->nr_targets; 1330 + iter = to_cxl_port(iter->dev.parent); 1331 + } while (!is_cxl_root(iter)); 1332 + distance *= cxlrd->cxlsd.cxld.interleave_ways; 1333 + 1325 1334 for (i = 0; i < cxl_rr->nr_targets_set; i++) 1326 1335 if (ep->dport == cxlsd->target[i]) { 1327 1336 rc = check_last_peer(cxled, ep, cxl_rr,
+4 -2
drivers/cxl/pci.c
··· 836 836 if (!root_dev) 837 837 return -ENXIO; 838 838 839 + if (!dport->regs.rcd_pcie_cap) 840 + return -ENXIO; 841 + 839 842 guard(device)(root_dev); 840 843 if (!root_dev->driver) 841 844 return -ENXIO; ··· 1035 1032 if (rc) 1036 1033 return rc; 1037 1034 1038 - rc = cxl_pci_ras_unmask(pdev); 1039 - if (rc) 1035 + if (cxl_pci_ras_unmask(pdev)) 1040 1036 dev_dbg(&pdev->dev, "No RAS reporting unmasked\n"); 1041 1037 1042 1038 pci_save_state(pdev);
+1 -1
drivers/dma-buf/dma-buf.c
··· 60 60 { 61 61 } 62 62 63 - static void __dma_buf_debugfs_list_del(struct file *file) 63 + static void __dma_buf_debugfs_list_del(struct dma_buf *dmabuf) 64 64 { 65 65 } 66 66 #endif
+27 -16
drivers/dma-buf/udmabuf.c
··· 297 297 }; 298 298 299 299 #define SEALS_WANTED (F_SEAL_SHRINK) 300 - #define SEALS_DENIED (F_SEAL_WRITE) 300 + #define SEALS_DENIED (F_SEAL_WRITE|F_SEAL_FUTURE_WRITE) 301 301 302 302 static int check_memfd_seals(struct file *memfd) 303 303 { ··· 317 317 return 0; 318 318 } 319 319 320 - static int export_udmabuf(struct udmabuf *ubuf, 321 - struct miscdevice *device, 322 - u32 flags) 320 + static struct dma_buf *export_udmabuf(struct udmabuf *ubuf, 321 + struct miscdevice *device) 323 322 { 324 323 DEFINE_DMA_BUF_EXPORT_INFO(exp_info); 325 - struct dma_buf *buf; 326 324 327 325 ubuf->device = device; 328 326 exp_info.ops = &udmabuf_ops; ··· 328 330 exp_info.priv = ubuf; 329 331 exp_info.flags = O_RDWR; 330 332 331 - buf = dma_buf_export(&exp_info); 332 - if (IS_ERR(buf)) 333 - return PTR_ERR(buf); 334 - 335 - return dma_buf_fd(buf, flags); 333 + return dma_buf_export(&exp_info); 336 334 } 337 335 338 336 static long udmabuf_pin_folios(struct udmabuf *ubuf, struct file *memfd, ··· 385 391 struct folio **folios = NULL; 386 392 pgoff_t pgcnt = 0, pglimit; 387 393 struct udmabuf *ubuf; 394 + struct dma_buf *dmabuf; 388 395 long ret = -EINVAL; 389 396 u32 i, flags; 390 397 ··· 431 436 goto err; 432 437 } 433 438 439 + /* 440 + * Take the inode lock to protect against concurrent 441 + * memfd_add_seals(), which takes this lock in write mode. 442 + */ 443 + inode_lock_shared(file_inode(memfd)); 434 444 ret = check_memfd_seals(memfd); 435 - if (ret < 0) { 436 - fput(memfd); 437 - goto err; 438 - } 445 + if (ret) 446 + goto out_unlock; 439 447 440 448 ret = udmabuf_pin_folios(ubuf, memfd, list[i].offset, 441 449 list[i].size, folios); 450 + out_unlock: 451 + inode_unlock_shared(file_inode(memfd)); 442 452 fput(memfd); 443 453 if (ret) 444 454 goto err; 445 455 } 446 456 447 457 flags = head->flags & UDMABUF_FLAGS_CLOEXEC ? O_CLOEXEC : 0;
448 - ret = export_udmabuf(ubuf, device, flags); 449 - if (ret < 0) 458 + dmabuf = export_udmabuf(ubuf, device); 459 + if (IS_ERR(dmabuf)) { 460 + ret = PTR_ERR(dmabuf); 450 461 goto err; 462 + } 463 + /* 464 + * Ownership of ubuf is held by the dmabuf from here. 465 + * If the following dma_buf_fd() fails, dma_buf_put() cleans up both the 466 + * dmabuf and the ubuf (through udmabuf_ops.release). 467 + */ 468 + 469 + ret = dma_buf_fd(dmabuf, flags); 470 + if (ret < 0) 471 + dma_buf_put(dmabuf); 451 472 452 473 kvfree(folios); 453 474 return ret;
+12 -16
drivers/dma/amd/qdma/qdma.c
··· 7 7 #include <linux/bitfield.h> 8 8 #include <linux/bitops.h> 9 9 #include <linux/dmaengine.h> 10 + #include <linux/dma-mapping.h> 10 11 #include <linux/module.h> 11 12 #include <linux/mod_devicetable.h> 12 - #include <linux/dma-map-ops.h> 13 13 #include <linux/platform_device.h> 14 14 #include <linux/platform_data/amd_qdma.h> 15 15 #include <linux/regmap.h> ··· 492 492 493 493 static int qdma_device_setup(struct qdma_device *qdev) 494 494 { 495 - struct device *dev = &qdev->pdev->dev; 496 495 u32 ring_sz = QDMA_DEFAULT_RING_SIZE; 497 496 int ret = 0; 498 - 499 - while (dev && get_dma_ops(dev)) 500 - dev = dev->parent; 501 - if (!dev) { 502 - qdma_err(qdev, "dma device not found"); 503 - return -EINVAL; 504 - } 505 - set_dma_ops(&qdev->pdev->dev, get_dma_ops(dev)); 506 497 507 498 ret = qdma_setup_fmap_context(qdev); 508 499 if (ret) { ··· 539 548 { 540 549 struct qdma_queue *queue = to_qdma_queue(chan); 541 550 struct qdma_device *qdev = queue->qdev; 542 - struct device *dev = qdev->dma_dev.dev; 551 + struct qdma_platdata *pdata; 543 552 544 553 qdma_clear_queue_context(queue); 545 554 vchan_free_chan_resources(&queue->vchan); 546 - dma_free_coherent(dev, queue->ring_size * QDMA_MM_DESC_SIZE, 555 + pdata = dev_get_platdata(&qdev->pdev->dev); 556 + dma_free_coherent(pdata->dma_dev, queue->ring_size * QDMA_MM_DESC_SIZE, 547 557 queue->desc_base, queue->dma_desc_base); 548 558 } 549 559 ··· 557 565 struct qdma_queue *queue = to_qdma_queue(chan); 558 566 struct qdma_device *qdev = queue->qdev; 559 567 struct qdma_ctxt_sw_desc desc; 568 + struct qdma_platdata *pdata; 560 569 size_t size; 561 570 int ret; 562 571 ··· 565 572 if (ret) 566 573 return ret; 567 574 575 + pdata = dev_get_platdata(&qdev->pdev->dev); 568 576 size = queue->ring_size * QDMA_MM_DESC_SIZE; 569 - queue->desc_base = dma_alloc_coherent(qdev->dma_dev.dev, size, 577 + queue->desc_base = dma_alloc_coherent(pdata->dma_dev, size, 570 578 &queue->dma_desc_base, 571 579 GFP_KERNEL); 572 580 if (!queue->desc_base) {
··· 582 588 if (ret) { 583 589 qdma_err(qdev, "Failed to setup SW desc ctxt for %s", 584 590 chan->name); 585 - dma_free_coherent(qdev->dma_dev.dev, size, queue->desc_base, 591 + dma_free_coherent(pdata->dma_dev, size, queue->desc_base, 586 592 queue->dma_desc_base); 587 593 return ret; 588 594 } ··· 942 948 943 949 static int qdmam_alloc_qintr_rings(struct qdma_device *qdev) 944 950 { 945 - u32 ctxt[QDMA_CTXT_REGMAP_LEN]; 951 + struct qdma_platdata *pdata = dev_get_platdata(&qdev->pdev->dev); 946 952 struct device *dev = &qdev->pdev->dev; 953 + u32 ctxt[QDMA_CTXT_REGMAP_LEN]; 947 954 struct qdma_intr_ring *ring; 948 955 struct qdma_ctxt_intr intr_ctxt; 949 956 u32 vector; ··· 964 969 ring->msix_id = qdev->err_irq_idx + i + 1; 965 970 ring->ridx = i; 966 971 ring->color = 1; 967 - ring->base = dmam_alloc_coherent(dev, QDMA_INTR_RING_SIZE, 972 + ring->base = dmam_alloc_coherent(pdata->dma_dev, 973 + QDMA_INTR_RING_SIZE, 968 974 &ring->dev_base, GFP_KERNEL); 969 975 if (!ring->base) { 970 976 qdma_err(qdev, "Failed to alloc intr ring %d", i);
+2 -5
drivers/dma/apple-admac.c
··· 153 153 { 154 154 struct admac_sram *sram; 155 155 int i, ret = 0, nblocks; 156 + ad->txcache.size = readl_relaxed(ad->base + REG_TX_SRAM_SIZE); 157 + ad->rxcache.size = readl_relaxed(ad->base + REG_RX_SRAM_SIZE); 156 158 157 159 if (dir == DMA_MEM_TO_DEV) 158 160 sram = &ad->txcache; ··· 914 912 goto free_irq; 915 913 } 916 914 917 - ad->txcache.size = readl_relaxed(ad->base + REG_TX_SRAM_SIZE); 918 - ad->rxcache.size = readl_relaxed(ad->base + REG_RX_SRAM_SIZE); 919 - 920 915 dev_info(&pdev->dev, "Audio DMA Controller\n"); 921 - dev_info(&pdev->dev, "imprint %x TX cache %u RX cache %u\n", 922 - readl_relaxed(ad->base + REG_IMPRINT), ad->txcache.size, ad->rxcache.size); 923 916 924 917 return 0; 925 918
+2
drivers/dma/at_xdmac.c
··· 1363 1363 return NULL; 1364 1364 1365 1365 desc = at_xdmac_memset_create_desc(chan, atchan, dest, len, value); 1366 + if (!desc) 1367 + return NULL; 1366 1368 list_add_tail(&desc->desc_node, &desc->descs_list); 1367 1369 1368 1370 desc->tx_dma_desc.cookie = -EBUSY;
+4 -2
drivers/dma/dw/acpi.c
··· 8 8 9 9 static bool dw_dma_acpi_filter(struct dma_chan *chan, void *param) 10 10 { 11 + struct dw_dma *dw = to_dw_dma(chan->device); 12 + struct dw_dma_chip_pdata *data = dev_get_drvdata(dw->dma.dev); 11 13 struct acpi_dma_spec *dma_spec = param; 12 14 struct dw_dma_slave slave = { 13 15 .dma_dev = dma_spec->dev, 14 16 .src_id = dma_spec->slave_id, 15 17 .dst_id = dma_spec->slave_id, 16 - .m_master = 0, 17 - .p_master = 1, 18 + .m_master = data->m_master, 19 + .p_master = data->p_master, 18 20 }; 19 21 20 22 return dw_dma_filter(chan, &slave);
+8
drivers/dma/dw/internal.h
··· 51 51 int (*probe)(struct dw_dma_chip *chip); 52 52 int (*remove)(struct dw_dma_chip *chip); 53 53 struct dw_dma_chip *chip; 54 + u8 m_master; 55 + u8 p_master; 54 56 }; 55 57 56 58 static __maybe_unused const struct dw_dma_chip_pdata dw_dma_chip_pdata = { 57 59 .probe = dw_dma_probe, 58 60 .remove = dw_dma_remove, 61 + .m_master = 0, 62 + .p_master = 1, 59 63 }; 60 64 61 65 static const struct dw_dma_platform_data idma32_pdata = { ··· 76 72 .pdata = &idma32_pdata, 77 73 .probe = idma32_dma_probe, 78 74 .remove = idma32_dma_remove, 75 + .m_master = 0, 76 + .p_master = 0, 79 77 }; 80 78 81 79 static const struct dw_dma_platform_data xbar_pdata = { ··· 94 88 .pdata = &xbar_pdata, 95 89 .probe = idma32_dma_probe, 96 90 .remove = idma32_dma_remove, 91 + .m_master = 0, 92 + .p_master = 0, 97 93 }; 98 94 99 95 #endif /* _DMA_DW_INTERNAL_H */
+2 -2
drivers/dma/dw/pci.c
··· 56 56 if (ret) 57 57 return ret; 58 58 59 - dw_dma_acpi_controller_register(chip->dw); 60 - 61 59 pci_set_drvdata(pdev, data); 60 + 61 + dw_dma_acpi_controller_register(chip->dw); 62 62 63 63 return 0; 64 64 }
+1
drivers/dma/fsl-edma-common.h
··· 166 166 struct work_struct issue_worker; 167 167 struct platform_device *pdev; 168 168 struct device *pd_dev; 169 + struct device_link *pd_dev_link; 169 170 u32 srcid; 170 171 struct clk *clk; 171 172 int priority;
+36 -5
drivers/dma/fsl-edma-main.c
··· 417 417 }; 418 418 MODULE_DEVICE_TABLE(of, fsl_edma_dt_ids); 419 419 420 + static void fsl_edma3_detach_pd(struct fsl_edma_engine *fsl_edma) 421 + { 422 + struct fsl_edma_chan *fsl_chan; 423 + int i; 424 + 425 + for (i = 0; i < fsl_edma->n_chans; i++) { 426 + if (fsl_edma->chan_masked & BIT(i)) 427 + continue; 428 + fsl_chan = &fsl_edma->chans[i]; 429 + if (fsl_chan->pd_dev_link) 430 + device_link_del(fsl_chan->pd_dev_link); 431 + if (fsl_chan->pd_dev) { 432 + dev_pm_domain_detach(fsl_chan->pd_dev, false); 433 + pm_runtime_dont_use_autosuspend(fsl_chan->pd_dev); 434 + pm_runtime_set_suspended(fsl_chan->pd_dev); 435 + } 436 + } 437 + } 438 + 439 + static void devm_fsl_edma3_detach_pd(void *data) 440 + { 441 + fsl_edma3_detach_pd(data); 442 + } 443 + 420 444 static int fsl_edma3_attach_pd(struct platform_device *pdev, struct fsl_edma_engine *fsl_edma) 421 445 { 422 446 struct fsl_edma_chan *fsl_chan; 423 - struct device_link *link; 424 447 struct device *pd_chan; 425 448 struct device *dev; 426 449 int i; ··· 459 436 pd_chan = dev_pm_domain_attach_by_id(dev, i); 460 437 if (IS_ERR_OR_NULL(pd_chan)) { 461 438 dev_err(dev, "Failed attach pd %d\n", i); 462 - return -EINVAL; 439 + goto detach; 463 440 } 464 441 465 - link = device_link_add(dev, pd_chan, DL_FLAG_STATELESS | 442 + fsl_chan->pd_dev_link = device_link_add(dev, pd_chan, DL_FLAG_STATELESS | 466 443 DL_FLAG_PM_RUNTIME | 467 444 DL_FLAG_RPM_ACTIVE); 468 - if (!link) { 445 + if (!fsl_chan->pd_dev_link) { 469 446 dev_err(dev, "Failed to add device_link to %d\n", i); 470 - return -EINVAL; 447 + dev_pm_domain_detach(pd_chan, false); 448 + goto detach; 471 449 } 472 450 473 451 fsl_chan->pd_dev = pd_chan; ··· 479 455 } 480 456 481 457 return 0; 458 + 459 + detach: 460 + fsl_edma3_detach_pd(fsl_edma); 461 + return -EINVAL; 482 462 } 483 463 484 464 static int fsl_edma_probe(struct platform_device *pdev) ··· 570 542 571 543 if (drvdata->flags & FSL_EDMA_DRV_HAS_PD) { 572 544 ret = fsl_edma3_attach_pd(pdev, fsl_edma); 545 + if (ret) 546 + return ret; 547 + ret = devm_add_action_or_reset(&pdev->dev, devm_fsl_edma3_detach_pd, fsl_edma); 573 548 if (ret) 574 549 return ret; 575 550 }
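The fsl-edma hunk above replaces bare `return -EINVAL` paths with a `goto detach` that unwinds every power domain attached so far, and registers `devm_fsl_edma3_detach_pd()` so the same unwind runs on driver teardown. A minimal userspace sketch of that unwind-on-failure pattern (the channel count, `fake_attach()`, and the failure injection are illustrative stand-ins, not the driver's API):

```c
#include <assert.h>
#include <stdbool.h>

#define NCHANS 4

static bool attached[NCHANS];

/* Stand-in for dev_pm_domain_attach_by_id(); fail_at injects an error. */
static bool fake_attach(int i, int fail_at)
{
	if (i == fail_at)
		return false;
	attached[i] = true;
	return true;
}

/* Stand-in for fsl_edma3_detach_pd(): release everything attached. */
static void detach_all(void)
{
	for (int i = 0; i < NCHANS; i++)
		attached[i] = false;
}

/* Attach all channels; on any failure, roll back the ones already
 * attached before returning, mirroring the new "goto detach" path. */
static int attach_all(int fail_at)
{
	for (int i = 0; i < NCHANS; i++) {
		if (!fake_attach(i, fail_at))
			goto detach;
	}
	return 0;

detach:
	detach_all();
	return -1;
}
```

The point of the fix is that a partial attach no longer leaks the domains acquired before the failing one.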
+1 -1
drivers/dma/loongson2-apb-dma.c
··· 31 31 #define LDMA_ASK_VALID BIT(2) 32 32 #define LDMA_START BIT(3) /* DMA start operation */ 33 33 #define LDMA_STOP BIT(4) /* DMA stop operation */ 34 - #define LDMA_CONFIG_MASK GENMASK(4, 0) /* DMA controller config bits mask */ 34 + #define LDMA_CONFIG_MASK GENMASK_ULL(4, 0) /* DMA controller config bits mask */ 35 35 36 36 /* Bitfields in ndesc_addr field of HW descriptor */ 37 37 #define LDMA_DESC_EN BIT(0) /*1: The next descriptor is valid */
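The loongson2-apb-dma one-liner swaps `GENMASK()` (typed `unsigned long`, i.e. 32-bit on 32-bit kernels) for `GENMASK_ULL()` because the mask is applied to 64-bit register values. A hedged userspace demo of why the mask's type width matters; `GENMASK32`/`GENMASK64` are simplified stand-ins for the kernel macros, with `GENMASK32` playing the role of `GENMASK()` on a 32-bit `unsigned long`:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel's GENMASK()/GENMASK_ULL(),
 * valid for 0 <= l <= h within the type width. */
#define GENMASK32(h, l) (((~UINT32_C(0)) >> (31 - (h))) & (~UINT32_C(0) << (l)))
#define GENMASK64(h, l) (((~UINT64_C(0)) >> (63 - (h))) & (~UINT64_C(0) << (l)))

/* Clear the low 5 config bits of a 64-bit register value using a
 * 32-bit mask: ~mask is only 0xFFFFFFE0, so zero-extension to 64 bits
 * silently wipes the upper half of the register. */
static uint64_t clear_cfg32(uint64_t reg)
{
	uint32_t not_mask = ~GENMASK32(4, 0);	/* 0xFFFFFFE0 */
	return reg & not_mask;
}

/* Same operation with a 64-bit mask: upper bits survive. */
static uint64_t clear_cfg64(uint64_t reg)
{
	return reg & ~GENMASK64(4, 0);		/* 0xFFFFFFFFFFFFFFE0 */
}
```

With `reg = 0xAABBCCDD0000001F`, the 32-bit variant returns 0 (upper half destroyed) while the 64-bit variant returns `0xAABBCCDD00000000` with only the config bits cleared, which is the bug class the `GENMASK_ULL` change avoids.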
+2
drivers/dma/mv_xor.c
··· 1388 1388 irq = irq_of_parse_and_map(np, 0); 1389 1389 if (!irq) { 1390 1390 ret = -ENODEV; 1391 + of_node_put(np); 1391 1392 goto err_channel_add; 1392 1393 } 1393 1394 ··· 1397 1396 if (IS_ERR(chan)) { 1398 1397 ret = PTR_ERR(chan); 1399 1398 irq_dispose_mapping(irq); 1399 + of_node_put(np); 1400 1400 goto err_channel_add; 1401 1401 } 1402 1402
+10
drivers/dma/tegra186-gpc-dma.c
··· 231 231 bool config_init; 232 232 char name[30]; 233 233 enum dma_transfer_direction sid_dir; 234 + enum dma_status status; 234 235 int id; 235 236 int irq; 236 237 int slave_id; ··· 394 393 tegra_dma_dump_chan_regs(tdc); 395 394 } 396 395 396 + tdc->status = DMA_PAUSED; 397 + 397 398 return ret; 398 399 } 399 400 ··· 422 419 val = tdc_read(tdc, TEGRA_GPCDMA_CHAN_CSRE); 423 420 val &= ~TEGRA_GPCDMA_CHAN_CSRE_PAUSE; 424 421 tdc_write(tdc, TEGRA_GPCDMA_CHAN_CSRE, val); 422 + 423 + tdc->status = DMA_IN_PROGRESS; 425 424 } 426 425 427 426 static int tegra_dma_device_resume(struct dma_chan *dc) ··· 549 544 550 545 tegra_dma_sid_free(tdc); 551 546 tdc->dma_desc = NULL; 547 + tdc->status = DMA_COMPLETE; 552 548 } 553 549 554 550 static void tegra_dma_chan_decode_error(struct tegra_dma_channel *tdc, ··· 722 716 tdc->dma_desc = NULL; 723 717 } 724 718 719 + tdc->status = DMA_COMPLETE; 725 720 tegra_dma_sid_free(tdc); 726 721 vchan_get_all_descriptors(&tdc->vc, &head); 727 722 spin_unlock_irqrestore(&tdc->vc.lock, flags); ··· 775 768 ret = dma_cookie_status(dc, cookie, txstate); 776 769 if (ret == DMA_COMPLETE) 777 770 return ret; 771 + 772 + if (tdc->status == DMA_PAUSED) 773 + ret = DMA_PAUSED; 778 774 779 775 spin_lock_irqsave(&tdc->vc.lock, flags); 780 776 vd = vchan_find_desc(&tdc->vc, cookie);
+11 -4
drivers/firmware/arm_ffa/bus.c
··· 187 187 return valid; 188 188 } 189 189 190 - struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id, 191 - const struct ffa_ops *ops) 190 + struct ffa_device * 191 + ffa_device_register(const struct ffa_partition_info *part_info, 192 + const struct ffa_ops *ops) 192 193 { 193 194 int id, ret; 195 + uuid_t uuid; 194 196 struct device *dev; 195 197 struct ffa_device *ffa_dev; 198 + 199 + if (!part_info) 200 + return NULL; 196 201 197 202 id = ida_alloc_min(&ffa_bus_id, 1, GFP_KERNEL); 198 203 if (id < 0) ··· 215 210 dev_set_name(&ffa_dev->dev, "arm-ffa-%d", id); 216 211 217 212 ffa_dev->id = id; 218 - ffa_dev->vm_id = vm_id; 213 + ffa_dev->vm_id = part_info->id; 214 + ffa_dev->properties = part_info->properties; 219 215 ffa_dev->ops = ops; 220 - uuid_copy(&ffa_dev->uuid, uuid); 216 + import_uuid(&uuid, (u8 *)part_info->uuid); 217 + uuid_copy(&ffa_dev->uuid, &uuid); 221 218 222 219 ret = device_register(&ffa_dev->dev); 223 220 if (ret) {
+1 -6
drivers/firmware/arm_ffa/driver.c
··· 1387 1387 static int ffa_setup_partitions(void) 1388 1388 { 1389 1389 int count, idx, ret; 1390 - uuid_t uuid; 1391 1390 struct ffa_device *ffa_dev; 1392 1391 struct ffa_dev_part_info *info; 1393 1392 struct ffa_partition_info *pbuf, *tpbuf; ··· 1405 1406 1406 1407 xa_init(&drv_info->partition_info); 1407 1408 for (idx = 0, tpbuf = pbuf; idx < count; idx++, tpbuf++) { 1408 - import_uuid(&uuid, (u8 *)tpbuf->uuid); 1409 - 1410 1409 /* Note that if the UUID will be uuid_null, that will require 1411 1410 * ffa_bus_notifier() to find the UUID of this partition id 1412 1411 * with help of ffa_device_match_uuid(). FF-A v1.1 and above 1413 1412 * provides UUID here for each partition as part of the 1414 1413 * discovery API and the same is passed. 1415 1414 */ 1416 - ffa_dev = ffa_device_register(&uuid, tpbuf->id, &ffa_drv_ops); 1415 + ffa_dev = ffa_device_register(tpbuf, &ffa_drv_ops); 1417 1416 if (!ffa_dev) { 1418 1417 pr_err("%s: failed to register partition ID 0x%x\n", 1419 1418 __func__, tpbuf->id); 1420 1419 continue; 1421 1420 } 1422 - 1423 - ffa_dev->properties = tpbuf->properties; 1424 1421 1425 1422 if (drv_info->version > FFA_VERSION_1_0 && 1426 1423 !(tpbuf->properties & FFA_PARTITION_AARCH64_EXEC))
+1
drivers/firmware/arm_scmi/vendors/imx/Kconfig
··· 15 15 config IMX_SCMI_MISC_EXT 16 16 tristate "i.MX SCMI MISC EXTENSION" 17 17 depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF) 18 + depends on IMX_SCMI_MISC_DRV 18 19 default y if ARCH_MXC 19 20 help 20 21 This enables i.MX System MISC control logic such as gpio expander
-1
drivers/firmware/imx/Kconfig
··· 25 25 26 26 config IMX_SCMI_MISC_DRV 27 27 tristate "IMX SCMI MISC Protocol driver" 28 - depends on IMX_SCMI_MISC_EXT || COMPILE_TEST 29 28 default y if ARCH_MXC 30 29 help 31 30 The System Controller Management Interface firmware (SCMI FW) is
+2 -2
drivers/firmware/microchip/mpfs-auto-update.c
··· 402 402 return -EIO; 403 403 404 404 /* 405 - * Bit 5 of byte 1 is "UL_Auto Update" & if it is set, Auto Update is 405 + * Bit 5 of byte 1 is "UL_IAP" & if it is set, Auto Update is 406 406 * not possible. 407 407 */ 408 - if (response_msg[1] & AUTO_UPDATE_FEATURE_ENABLED) 408 + if ((((u8 *)response_msg)[1] & AUTO_UPDATE_FEATURE_ENABLED)) 409 409 return -EPERM; 410 410 411 411 return 0;
+3 -3
drivers/gpio/gpio-loongson-64bit.c
··· 237 237 static const struct loongson_gpio_chip_data loongson_gpio_ls2k2000_data2 = { 238 238 .label = "ls2k2000_gpio", 239 239 .mode = BIT_CTRL_MODE, 240 - .conf_offset = 0x84, 241 - .in_offset = 0x88, 242 - .out_offset = 0x80, 240 + .conf_offset = 0x4, 241 + .in_offset = 0x8, 242 + .out_offset = 0x0, 243 243 }; 244 244 245 245 static const struct loongson_gpio_chip_data loongson_gpio_ls3a5000_data = {
+41 -7
drivers/gpio/gpio-sim.c
··· 1027 1027 dev->pdev = NULL; 1028 1028 } 1029 1029 1030 + static void 1031 + gpio_sim_device_lockup_configfs(struct gpio_sim_device *dev, bool lock) 1032 + { 1033 + struct configfs_subsystem *subsys = dev->group.cg_subsys; 1034 + struct gpio_sim_bank *bank; 1035 + struct gpio_sim_line *line; 1036 + 1037 + /* 1038 + * The device only needs to depend on leaf line entries. This is 1039 + * sufficient to lock up all the configfs entries that the 1040 + * instantiated, alive device depends on. 1041 + */ 1042 + list_for_each_entry(bank, &dev->bank_list, siblings) { 1043 + list_for_each_entry(line, &bank->line_list, siblings) { 1044 + if (lock) 1045 + WARN_ON(configfs_depend_item_unlocked( 1046 + subsys, &line->group.cg_item)); 1047 + else 1048 + configfs_undepend_item_unlocked( 1049 + &line->group.cg_item); 1050 + } 1051 + } 1052 + } 1053 + 1030 1054 static ssize_t 1031 1055 gpio_sim_device_config_live_store(struct config_item *item, 1032 1056 const char *page, size_t count) ··· 1063 1039 if (ret) 1064 1040 return ret; 1065 1041 1066 - guard(mutex)(&dev->lock); 1042 + if (live) 1043 + gpio_sim_device_lockup_configfs(dev, true); 1067 1044 1068 - if (live == gpio_sim_device_is_live(dev)) 1069 - ret = -EPERM; 1070 - else if (live) 1071 - ret = gpio_sim_device_activate(dev); 1072 - else 1073 - gpio_sim_device_deactivate(dev); 1045 + scoped_guard(mutex, &dev->lock) { 1046 + if (live == gpio_sim_device_is_live(dev)) 1047 + ret = -EPERM; 1048 + else if (live) 1049 + ret = gpio_sim_device_activate(dev); 1050 + else 1051 + gpio_sim_device_deactivate(dev); 1052 + } 1053 + 1054 + /* 1055 + * Undepend is required only if device disablement (live == 0) 1056 + * succeeds or if device enablement (live == 1) fails. 1057 + */ 1058 + if (live == !!ret) 1059 + gpio_sim_device_lockup_configfs(dev, false); 1074 1060 1075 1061 return ret ?: count; 1076 1062 }
+70 -23
drivers/gpio/gpio-virtuser.c
··· 1410 1410 size_t num_entries = gpio_virtuser_get_lookup_count(dev); 1411 1411 struct gpio_virtuser_lookup_entry *entry; 1412 1412 struct gpio_virtuser_lookup *lookup; 1413 - unsigned int i = 0; 1413 + unsigned int i = 0, idx; 1414 1414 1415 1415 lockdep_assert_held(&dev->lock); 1416 1416 ··· 1424 1424 return -ENOMEM; 1425 1425 1426 1426 list_for_each_entry(lookup, &dev->lookup_list, siblings) { 1427 + idx = 0; 1427 1428 list_for_each_entry(entry, &lookup->entry_list, siblings) { 1428 - table->table[i] = 1429 + table->table[i++] = 1429 1430 GPIO_LOOKUP_IDX(entry->key, 1430 1431 entry->offset < 0 ? U16_MAX : entry->offset, 1431 - lookup->con_id, i, entry->flags); 1432 - i++; 1432 + lookup->con_id, idx++, entry->flags); 1433 1433 } 1434 1434 } 1435 1435 ··· 1437 1437 dev->lookup_table = no_free_ptr(table); 1438 1438 1439 1439 return 0; 1440 + } 1441 + 1442 + static void 1443 + gpio_virtuser_remove_lookup_table(struct gpio_virtuser_device *dev) 1444 + { 1445 + gpiod_remove_lookup_table(dev->lookup_table); 1446 + kfree(dev->lookup_table->dev_id); 1447 + kfree(dev->lookup_table); 1448 + dev->lookup_table = NULL; 1440 1449 } 1441 1450 1442 1451 static struct fwnode_handle * ··· 1496 1487 pdevinfo.fwnode = swnode; 1497 1488 1498 1489 ret = gpio_virtuser_make_lookup_table(dev); 1499 - if (ret) { 1500 - fwnode_remove_software_node(swnode); 1501 - return ret; 1502 - } 1490 + if (ret) 1491 + goto err_remove_swnode; 1503 1492 1504 1493 reinit_completion(&dev->probe_completion); 1505 1494 dev->driver_bound = false; ··· 1505 1498 1506 1499 pdev = platform_device_register_full(&pdevinfo); 1507 1500 if (IS_ERR(pdev)) { 1501 + ret = PTR_ERR(pdev); 1508 1502 bus_unregister_notifier(&platform_bus_type, &dev->bus_notifier); 1509 - fwnode_remove_software_node(swnode); 1510 - return PTR_ERR(pdev); 1503 + goto err_remove_lookup_table; 1511 1504 } 1512 1505 1513 1506 wait_for_completion(&dev->probe_completion); 1514 1507 bus_unregister_notifier(&platform_bus_type, &dev->bus_notifier); 1515 1508 1516 1509 if (!dev->driver_bound) { 1517 - platform_device_unregister(pdev); 1518 - fwnode_remove_software_node(swnode); 1519 - return -ENXIO; 1510 + ret = -ENXIO; 1511 + goto err_unregister_pdev; 1520 1512 } 1521 1513 1522 1514 dev->pdev = pdev; 1523 1515 1524 1516 return 0; 1517 + 1518 + err_unregister_pdev: 1519 + platform_device_unregister(pdev); 1520 + err_remove_lookup_table: 1521 + gpio_virtuser_remove_lookup_table(dev); 1522 + err_remove_swnode: 1523 + fwnode_remove_software_node(swnode); 1524 + 1525 + return ret; 1525 1526 } 1526 1527 1527 1528 static void ··· 1541 1526 1542 1527 swnode = dev_fwnode(&dev->pdev->dev); 1543 1528 platform_device_unregister(dev->pdev); 1529 + gpio_virtuser_remove_lookup_table(dev); 1544 1530 fwnode_remove_software_node(swnode); 1545 1531 dev->pdev = NULL; 1546 - gpiod_remove_lookup_table(dev->lookup_table); 1547 - kfree(dev->lookup_table); 1532 + } 1533 + 1534 + static void 1535 + gpio_virtuser_device_lockup_configfs(struct gpio_virtuser_device *dev, bool lock) 1536 + { 1537 + struct configfs_subsystem *subsys = dev->group.cg_subsys; 1538 + struct gpio_virtuser_lookup_entry *entry; 1539 + struct gpio_virtuser_lookup *lookup; 1540 + 1541 + /* 1542 + * The device only needs to depend on leaf lookup entries. This is 1543 + * sufficient to lock up all the configfs entries that the 1544 + * instantiated, alive device depends on. 1545 + */ 1546 + list_for_each_entry(lookup, &dev->lookup_list, siblings) { 1547 + list_for_each_entry(entry, &lookup->entry_list, siblings) { 1548 + if (lock) 1549 + WARN_ON(configfs_depend_item_unlocked( 1550 + subsys, &entry->group.cg_item)); 1551 + else 1552 + configfs_undepend_item_unlocked( 1553 + &entry->group.cg_item); 1554 + } 1555 + } 1548 1556 } 1549 1557 1550 1558 static ssize_t ··· 1582 1544 if (ret) 1583 1545 return ret; 1584 1546 1585 - guard(mutex)(&dev->lock); 1586 - 1587 - if (live == gpio_virtuser_device_is_live(dev)) 1588 - return -EPERM; 1589 - 1590 1547 if (live) 1591 - ret = gpio_virtuser_device_activate(dev); 1592 - else 1593 - gpio_virtuser_device_deactivate(dev); 1548 + gpio_virtuser_device_lockup_configfs(dev, true); 1549 + 1550 + scoped_guard(mutex, &dev->lock) { 1551 + if (live == gpio_virtuser_device_is_live(dev)) 1552 + ret = -EPERM; 1553 + else if (live) 1554 + ret = gpio_virtuser_device_activate(dev); 1555 + else 1556 + gpio_virtuser_device_deactivate(dev); 1557 + } 1558 + 1559 + /* 1560 + * Undepend is required only if device disablement (live == 0) 1561 + * succeeds or if device enablement (live == 1) fails. 1562 + */ 1563 + if (live == !!ret) 1564 + gpio_virtuser_device_lockup_configfs(dev, false); 1594 1565 1595 1566 return ret ?: count; 1596 1567 }
+4
drivers/gpu/drm/Kconfig
··· 99 99 config DRM_KMS_HELPER 100 100 tristate 101 101 depends on DRM 102 + select FB_CORE if DRM_FBDEV_EMULATION 102 103 help 103 104 CRTC helpers for KMS drivers. 104 105 ··· 359 358 tristate 360 359 depends on DRM 361 360 select DRM_TTM 361 + select FB_CORE if DRM_FBDEV_EMULATION 362 362 select FB_SYSMEM_HELPERS_DEFERRED if DRM_FBDEV_EMULATION 363 363 help 364 364 Helpers for ttm-based gem objects ··· 367 365 config DRM_GEM_DMA_HELPER 368 366 tristate 369 367 depends on DRM 368 + select FB_CORE if DRM_FBDEV_EMULATION 370 369 select FB_DMAMEM_HELPERS_DEFERRED if DRM_FBDEV_EMULATION 371 370 help 372 371 Choose this if you need the GEM DMA helper functions ··· 375 372 config DRM_GEM_SHMEM_HELPER 376 373 tristate 377 374 depends on DRM && MMU 375 + select FB_CORE if DRM_FBDEV_EMULATION 378 376 select FB_SYSMEM_HELPERS_DEFERRED if DRM_FBDEV_EMULATION 379 377 help 380 378 Choose this if you need the GEM shmem helper functions
+2 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c
··· 343 343 coredump->skip_vram_check = skip_vram_check; 344 344 coredump->reset_vram_lost = vram_lost; 345 345 346 - if (job && job->vm) { 347 - struct amdgpu_vm *vm = job->vm; 346 + if (job && job->pasid) { 348 347 struct amdgpu_task_info *ti; 349 348 350 - ti = amdgpu_vm_get_task_info_vm(vm); 349 + ti = amdgpu_vm_get_task_info_pasid(adev, job->pasid); 351 350 if (ti) { 352 351 coredump->reset_task_info = *ti; 353 352 amdgpu_vm_put_task_info(ti);
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 417 417 { 418 418 struct amdgpu_device *adev = drm_to_adev(dev); 419 419 420 + if (!IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE)) 421 + return false; 422 + 420 423 if (adev->has_pr3 || 421 424 ((adev->flags & AMD_IS_PX) && amdgpu_is_atpx_hybrid())) 422 425 return true;
+1 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
··· 255 255 256 256 void amdgpu_job_free_resources(struct amdgpu_job *job) 257 257 { 258 - struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched); 259 258 struct dma_fence *f; 260 259 unsigned i; 261 260 ··· 267 268 f = NULL; 268 269 269 270 for (i = 0; i < job->num_ibs; ++i) 270 - amdgpu_ib_free(ring->adev, &job->ibs[i], f); 271 + amdgpu_ib_free(NULL, &job->ibs[i], f); 271 272 } 272 273 273 274 static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
+3 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 1266 1266 * next command submission. 1267 1267 */ 1268 1268 if (amdgpu_vm_is_bo_always_valid(vm, bo)) { 1269 - uint32_t mem_type = bo->tbo.resource->mem_type; 1270 - 1271 - if (!(bo->preferred_domains & 1272 - amdgpu_mem_type_to_domain(mem_type))) 1269 + if (bo->tbo.resource && 1270 + !(bo->preferred_domains & 1271 + amdgpu_mem_type_to_domain(bo->tbo.resource->mem_type))) 1273 1272 amdgpu_vm_bo_evicted(&bo_va->base); 1274 1273 else 1275 1274 amdgpu_vm_bo_idle(&bo_va->base);
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
··· 567 567 else 568 568 remaining_size -= size; 569 569 } 570 - mutex_unlock(&mgr->lock); 571 570 572 571 if (bo->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS && adjust_dcc_size) { 573 572 struct drm_buddy_block *dcc_block; ··· 583 584 (u64)vres->base.size, 584 585 &vres->blocks); 585 586 } 587 + mutex_unlock(&mgr->lock); 586 588 587 589 vres->base.start = 0; 588 590 size = max_t(u64, amdgpu_vram_mgr_blocks_size(&vres->blocks),
+1 -1
drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
··· 4123 4123 if (amdgpu_sriov_vf(adev)) 4124 4124 return 0; 4125 4125 4126 - switch (adev->ip_versions[GC_HWIP][0]) { 4126 + switch (amdgpu_ip_version(adev, GC_HWIP, 0)) { 4127 4127 case IP_VERSION(12, 0, 0): 4128 4128 case IP_VERSION(12, 0, 1): 4129 4129 gfx_v12_0_update_gfx_clock_gating(adev,
+1 -1
drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
··· 108 108 dev_err(adev->dev, 109 109 "MMVM_L2_PROTECTION_FAULT_STATUS_LO32:0x%08X\n", 110 110 status); 111 - switch (adev->ip_versions[MMHUB_HWIP][0]) { 111 + switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) { 112 112 case IP_VERSION(4, 1, 0): 113 113 mmhub_cid = mmhub_client_ids_v4_1_0[cid][rw]; 114 114 break;
+11
drivers/gpu/drm/amd/amdgpu/nbio_v7_0.c
··· 271 271 .ref_and_mask_sdma1 = GPU_HDP_FLUSH_DONE__SDMA1_MASK, 272 272 }; 273 273 274 + #define regRCC_DEV0_EPF6_STRAP4 0xd304 275 + #define regRCC_DEV0_EPF6_STRAP4_BASE_IDX 5 276 + 274 277 static void nbio_v7_0_init_registers(struct amdgpu_device *adev) 275 278 { 279 + uint32_t data; 280 + 281 + switch (amdgpu_ip_version(adev, NBIO_HWIP, 0)) { 282 + case IP_VERSION(2, 5, 0): 283 + data = RREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF6_STRAP4) & ~BIT(23); 284 + WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF6_STRAP4, data); 285 + break; 286 + } 276 287 } 277 288 278 289 #define MMIO_REG_HOLE_OFFSET (0x80000 - PAGE_SIZE)
+1 -1
drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
··· 275 275 if (def != data) 276 276 WREG32_SOC15(NBIO, 0, regBIF_BIF256_CI256_RC3X4_USB4_PCIE_MST_CTRL_3, data); 277 277 278 - switch (adev->ip_versions[NBIO_HWIP][0]) { 278 + switch (amdgpu_ip_version(adev, NBIO_HWIP, 0)) { 279 279 case IP_VERSION(7, 11, 0): 280 280 case IP_VERSION(7, 11, 1): 281 281 case IP_VERSION(7, 11, 2):
+1 -1
drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c
··· 247 247 if (def != data) 248 248 WREG32_SOC15(NBIO, 0, regBIF0_PCIE_MST_CTRL_3, data); 249 249 250 - switch (adev->ip_versions[NBIO_HWIP][0]) { 250 + switch (amdgpu_ip_version(adev, NBIO_HWIP, 0)) { 251 251 case IP_VERSION(7, 7, 0): 252 252 data = RREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF5_STRAP4) & ~BIT(23); 253 253 WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF5_STRAP4, data);
+17
drivers/gpu/drm/amd/amdkfd/kfd_debug.c
··· 350 350 { 351 351 uint32_t spi_dbg_cntl = pdd->spi_dbg_override | pdd->spi_dbg_launch_mode; 352 352 uint32_t flags = pdd->process->dbg_flags; 353 + struct amdgpu_device *adev = pdd->dev->adev; 354 + int r; 353 355 354 356 if (!kfd_dbg_is_per_vmid_supported(pdd->dev)) 355 357 return 0; 358 + 359 + if (!pdd->proc_ctx_cpu_ptr) { 360 + r = amdgpu_amdkfd_alloc_gtt_mem(adev, 361 + AMDGPU_MES_PROC_CTX_SIZE, 362 + &pdd->proc_ctx_bo, 363 + &pdd->proc_ctx_gpu_addr, 364 + &pdd->proc_ctx_cpu_ptr, 365 + false); 366 + if (r) { 367 + dev_err(adev->dev, 368 + "failed to allocate process context bo\n"); 369 + return r; 370 + } 371 + memset(pdd->proc_ctx_cpu_ptr, 0, AMDGPU_MES_PROC_CTX_SIZE); 372 + } 356 373 357 374 return amdgpu_mes_set_shader_debugger(pdd->dev->adev, pdd->proc_ctx_gpu_addr, spi_dbg_cntl, 358 375 pdd->watch_points, flags, sq_trap_en);
+2 -1
drivers/gpu/drm/amd/amdkfd/kfd_process.c
··· 1160 1160 */ 1161 1161 synchronize_rcu(); 1162 1162 ef = rcu_access_pointer(p->ef); 1163 - dma_fence_signal(ef); 1163 + if (ef) 1164 + dma_fence_signal(ef); 1164 1165 1165 1166 kfd_process_remove_sysfs(p); 1166 1167
+2 -33
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 8400 8400 struct amdgpu_crtc *acrtc, 8401 8401 struct dm_crtc_state *acrtc_state) 8402 8402 { 8403 - /* 8404 - * We have no guarantee that the frontend index maps to the same 8405 - * backend index - some even map to more than one. 8406 - * 8407 - * TODO: Use a different interrupt or check DC itself for the mapping. 8408 - */ 8409 - int irq_type = 8410 - amdgpu_display_crtc_idx_to_irq_type( 8411 - adev, 8412 - acrtc->crtc_id); 8413 8403 struct drm_vblank_crtc_config config = {0}; 8414 8404 struct dc_crtc_timing *timing; 8415 8405 int offdelay; ··· 8425 8435 8426 8436 drm_crtc_vblank_on_config(&acrtc->base, 8427 8437 &config); 8428 - 8429 - amdgpu_irq_get( 8430 - adev, 8431 - &adev->pageflip_irq, 8432 - irq_type); 8433 - #if defined(CONFIG_DRM_AMD_SECURE_DISPLAY) 8434 - amdgpu_irq_get( 8435 - adev, 8436 - &adev->vline0_irq, 8437 - irq_type); 8438 - #endif 8439 8438 } else { 8440 - #if defined(CONFIG_DRM_AMD_SECURE_DISPLAY) 8441 - amdgpu_irq_put( 8442 - adev, 8443 - &adev->vline0_irq, 8444 - irq_type); 8445 - #endif 8446 - amdgpu_irq_put( 8447 - adev, 8448 - &adev->pageflip_irq, 8449 - irq_type); 8450 8439 drm_crtc_vblank_off(&acrtc->base); 8451 8440 } 8452 8441 } ··· 11124 11155 int plane_src_w, plane_src_h; 11125 11156 11126 11157 dm_get_oriented_plane_size(plane_state, &plane_src_w, &plane_src_h); 11127 - *out_plane_scale_w = plane_state->crtc_w * 1000 / plane_src_w; 11128 - *out_plane_scale_h = plane_state->crtc_h * 1000 / plane_src_h; 11158 + *out_plane_scale_w = plane_src_w ? plane_state->crtc_w * 1000 / plane_src_w : 0; 11159 + *out_plane_scale_h = plane_src_h ? plane_state->crtc_h * 1000 / plane_src_h : 0; 11129 11160 } 11130 11161 11131 11162 /*
+1 -1
drivers/gpu/drm/amd/display/dc/core/dc.c
··· 4510 4510 struct pipe_split_policy_backup policy; 4511 4511 struct dc_state *intermediate_context; 4512 4512 struct dc_state *old_current_state = dc->current_state; 4513 - struct dc_surface_update srf_updates[MAX_SURFACE_NUM] = {0}; 4513 + struct dc_surface_update srf_updates[MAX_SURFACES] = {0}; 4514 4514 int surface_count; 4515 4515 4516 4516 /*
+4 -4
drivers/gpu/drm/amd/display/dc/core/dc_state.c
··· 483 483 if (stream_status == NULL) { 484 484 dm_error("Existing stream not found; failed to attach surface!\n"); 485 485 goto out; 486 - } else if (stream_status->plane_count == MAX_SURFACE_NUM) { 486 + } else if (stream_status->plane_count == MAX_SURFACES) { 487 487 dm_error("Surface: can not attach plane_state %p! Maximum is: %d\n", 488 - plane_state, MAX_SURFACE_NUM); 488 + plane_state, MAX_SURFACES); 489 489 goto out; 490 490 } else if (!otg_master_pipe) { 491 491 goto out; ··· 600 600 { 601 601 int i, old_plane_count; 602 602 struct dc_stream_status *stream_status = NULL; 603 - struct dc_plane_state *del_planes[MAX_SURFACE_NUM] = { 0 }; 603 + struct dc_plane_state *del_planes[MAX_SURFACES] = { 0 }; 604 604 605 605 for (i = 0; i < state->stream_count; i++) 606 606 if (state->streams[i] == stream) { ··· 875 875 { 876 876 int i, old_plane_count; 877 877 struct dc_stream_status *stream_status = NULL; 878 - struct dc_plane_state *del_planes[MAX_SURFACE_NUM] = { 0 }; 878 + struct dc_plane_state *del_planes[MAX_SURFACES] = { 0 }; 879 879 880 880 for (i = 0; i < state->stream_count; i++) 881 881 if (state->streams[i] == phantom_stream) {
+2 -2
drivers/gpu/drm/amd/display/dc/dc.h
··· 57 57 58 58 #define DC_VER "3.2.310" 59 59 60 - #define MAX_SURFACES 3 60 + #define MAX_SURFACES 4 61 61 #define MAX_PLANES 6 62 62 #define MAX_STREAMS 6 63 63 #define MIN_VIEWPORT_SIZE 12 ··· 1398 1398 * store current value in plane states so we can still recover 1399 1399 * a valid current state during dc update. 1400 1400 */ 1401 - struct dc_plane_state plane_states[MAX_SURFACE_NUM]; 1401 + struct dc_plane_state plane_states[MAX_SURFACES]; 1402 1402 1403 1403 struct dc_stream_state stream_state; 1404 1404 };
+1 -1
drivers/gpu/drm/amd/display/dc/dc_stream.h
··· 56 56 int plane_count; 57 57 int audio_inst; 58 58 struct timing_sync_info timing_sync_info; 59 - struct dc_plane_state *plane_states[MAX_SURFACE_NUM]; 59 + struct dc_plane_state *plane_states[MAX_SURFACES]; 60 60 bool is_abm_supported; 61 61 struct mall_stream_config mall_stream_config; 62 62 bool fpo_in_use;
-1
drivers/gpu/drm/amd/display/dc/dc_types.h
··· 76 76 unsigned long last_entry_write; 77 77 }; 78 78 79 - #define MAX_SURFACE_NUM 6 80 79 #define NUM_PIXEL_FORMATS 10 81 80 82 81 enum tiling_mode {
+8
drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h
··· 66 66 67 67 static inline double dml_ceil(double a, double granularity) 68 68 { 69 + if (granularity == 0) 70 + return 0; 69 71 return (double) dcn_bw_ceil2(a, granularity); 70 72 } 71 73 72 74 static inline double dml_floor(double a, double granularity) 73 75 { 76 + if (granularity == 0) 77 + return 0; 74 78 return (double) dcn_bw_floor2(a, granularity); 75 79 } 76 80 ··· 118 114 119 115 static inline double dml_ceil_ex(double x, double granularity) 120 116 { 117 + if (granularity == 0) 118 + return 0; 121 119 return (double) dcn_bw_ceil2(x, granularity); 122 120 } 123 121 124 122 static inline double dml_floor_ex(double x, double granularity) 125 123 { 124 + if (granularity == 0) 125 + return 0; 126 126 return (double) dcn_bw_floor2(x, granularity); 127 127 } 128 128
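The dml_inline_defs.h hunk adds an early `return 0` when `granularity` is zero, before the ceil/floor helpers divide by it. Assuming `dcn_bw_ceil2()`/`dcn_bw_floor2()` round to the nearest multiple of the granularity (a reasonable reading of the names, not verified against the DML sources), the guarded helpers behave like the sketch below; `ceil_pos`/`floor_pos` are local stand-ins so no libm is needed, valid for non-negative inputs:

```c
#include <assert.h>

/* Local rounding helpers (non-negative inputs only) so the sketch
 * compiles without linking libm. */
static double ceil_pos(double x)
{
	long long i = (long long)x;
	return (x > (double)i) ? (double)(i + 1) : (double)i;
}

static double floor_pos(double x)
{
	return (double)(long long)x;
}

/* Round a up to a multiple of granularity, returning 0 for a zero
 * granularity instead of dividing by it, as the patch now does. */
static double dml_ceil_sketch(double a, double granularity)
{
	if (granularity == 0)
		return 0;
	return ceil_pos(a / granularity) * granularity;
}

static double dml_floor_sketch(double a, double granularity)
{
	if (granularity == 0)
		return 0;
	return floor_pos(a / granularity) * granularity;
}
```

Without the guard, a zero granularity propagates a division by zero (NaN/inf) into downstream bandwidth math; with it, callers see a well-defined 0.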
+1 -1
drivers/gpu/drm/amd/display/dc/dml2/dml2_mall_phantom.c
··· 813 813 { 814 814 int i, old_plane_count; 815 815 struct dc_stream_status *stream_status = NULL; 816 - struct dc_plane_state *del_planes[MAX_SURFACE_NUM] = { 0 }; 816 + struct dc_plane_state *del_planes[MAX_SURFACES] = { 0 }; 817 817 818 818 for (i = 0; i < context->stream_count; i++) 819 819 if (context->streams[i] == stream) {
+2
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
··· 303 303 int smu_v13_0_get_boot_freq_by_index(struct smu_context *smu, 304 304 enum smu_clk_type clk_type, 305 305 uint32_t *value); 306 + 307 + void smu_v13_0_interrupt_work(struct smu_context *smu); 306 308 #endif 307 309 #endif
+6 -6
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
··· 1320 1320 return 0; 1321 1321 } 1322 1322 1323 - static int smu_v13_0_ack_ac_dc_interrupt(struct smu_context *smu) 1323 + void smu_v13_0_interrupt_work(struct smu_context *smu) 1324 1324 { 1325 - return smu_cmn_send_smc_msg(smu, 1326 - SMU_MSG_ReenableAcDcInterrupt, 1327 - NULL); 1325 + smu_cmn_send_smc_msg(smu, 1326 + SMU_MSG_ReenableAcDcInterrupt, 1327 + NULL); 1328 1328 } 1329 1329 1330 1330 #define THM_11_0__SRCID__THM_DIG_THERM_L2H 0 /* ASIC_TEMP > CG_THERMAL_INT.DIG_THERM_INTH */ ··· 1377 1377 switch (ctxid) { 1378 1378 case SMU_IH_INTERRUPT_CONTEXT_ID_AC: 1379 1379 dev_dbg(adev->dev, "Switched to AC mode!\n"); 1380 - smu_v13_0_ack_ac_dc_interrupt(smu); 1380 + schedule_work(&smu->interrupt_work); 1381 1381 adev->pm.ac_power = true; 1382 1382 break; 1383 1383 case SMU_IH_INTERRUPT_CONTEXT_ID_DC: 1384 1384 dev_dbg(adev->dev, "Switched to DC mode!\n"); 1385 - smu_v13_0_ack_ac_dc_interrupt(smu); 1385 + schedule_work(&smu->interrupt_work); 1386 1386 adev->pm.ac_power = false; 1387 1387 break; 1388 1388 case SMU_IH_INTERRUPT_CONTEXT_ID_THERMAL_THROTTLING:
+1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 3219 3219 .is_asic_wbrf_supported = smu_v13_0_0_wbrf_support_check, 3220 3220 .enable_uclk_shadow = smu_v13_0_enable_uclk_shadow, 3221 3221 .set_wbrf_exclusion_ranges = smu_v13_0_set_wbrf_exclusion_ranges, 3222 + .interrupt_work = smu_v13_0_interrupt_work, 3222 3223 }; 3223 3224 3224 3225 void smu_v13_0_0_set_ppt_funcs(struct smu_context *smu)
+1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
··· 2797 2797 .is_asic_wbrf_supported = smu_v13_0_7_wbrf_support_check, 2798 2798 .enable_uclk_shadow = smu_v13_0_enable_uclk_shadow, 2799 2799 .set_wbrf_exclusion_ranges = smu_v13_0_set_wbrf_exclusion_ranges, 2800 + .interrupt_work = smu_v13_0_interrupt_work, 2800 2801 }; 2801 2802 2802 2803 void smu_v13_0_7_set_ppt_funcs(struct smu_context *smu)
+1 -1
drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
··· 2096 2096 { 2097 2097 struct amdgpu_device *adev = smu->adev; 2098 2098 2099 - if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(14, 0, 2)) 2099 + if (amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(14, 0, 2)) 2100 2100 return smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_EnableAllSmuFeatures, 2101 2101 FEATURE_PWR_GFX, NULL); 2102 2102 else
+12 -2
drivers/gpu/drm/bridge/adv7511/adv7511_audio.c
··· 153 153 ADV7511_AUDIO_CFG3_LEN_MASK, len); 154 154 regmap_update_bits(adv7511->regmap, ADV7511_REG_I2C_FREQ_ID_CFG, 155 155 ADV7511_I2C_FREQ_ID_CFG_RATE_MASK, rate << 4); 156 - regmap_write(adv7511->regmap, 0x73, 0x1); 156 + 157 + /* send current Audio infoframe values while updating */ 158 + regmap_update_bits(adv7511->regmap, ADV7511_REG_INFOFRAME_UPDATE, 159 + BIT(5), BIT(5)); 160 + 161 + regmap_write(adv7511->regmap, ADV7511_REG_AUDIO_INFOFRAME(0), 0x1); 162 + 163 + /* use Audio infoframe updated info */ 164 + regmap_update_bits(adv7511->regmap, ADV7511_REG_INFOFRAME_UPDATE, 165 + BIT(5), 0); 157 166 158 167 return 0; 159 168 } ··· 193 184 regmap_update_bits(adv7511->regmap, ADV7511_REG_GC(0), 194 185 BIT(7) | BIT(6), BIT(7)); 195 186 /* use Audio infoframe updated info */ 196 - regmap_update_bits(adv7511->regmap, ADV7511_REG_GC(1), 187 + regmap_update_bits(adv7511->regmap, ADV7511_REG_INFOFRAME_UPDATE, 197 188 BIT(5), 0); 189 + 198 190 /* enable SPDIF receiver */ 199 191 if (adv7511->audio_source == ADV7511_AUDIO_SOURCE_SPDIF) 200 192 regmap_update_bits(adv7511->regmap, ADV7511_REG_AUDIO_CONFIG,
+8 -2
drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
··· 1241 1241 return ret; 1242 1242 1243 1243 ret = adv7511_init_regulators(adv7511); 1244 - if (ret) 1245 - return dev_err_probe(dev, ret, "failed to init regulators\n"); 1244 + if (ret) { 1245 + dev_err_probe(dev, ret, "failed to init regulators\n"); 1246 + goto err_of_node_put; 1247 + } 1246 1248 1247 1249 /* 1248 1250 * The power down GPIO is optional. If present, toggle it from active to ··· 1365 1363 i2c_unregister_device(adv7511->i2c_edid); 1366 1364 uninit_regulators: 1367 1365 adv7511_uninit_regulators(adv7511); 1366 + err_of_node_put: 1367 + of_node_put(adv7511->host_node); 1368 1368 1369 1369 return ret; 1370 1370 } ··· 1374 1370 static void adv7511_remove(struct i2c_client *i2c) 1375 1371 { 1376 1372 struct adv7511 *adv7511 = i2c_get_clientdata(i2c); 1373 + 1374 + of_node_put(adv7511->host_node); 1377 1375 1378 1376 adv7511_uninit_regulators(adv7511); 1379 1377
+1 -3
drivers/gpu/drm/bridge/adv7511/adv7533.c
··· 172 172 173 173 of_property_read_u32(np, "adi,dsi-lanes", &num_lanes); 174 174 175 - if (num_lanes < 1 || num_lanes > 4) 175 + if (num_lanes < 2 || num_lanes > 4) 176 176 return -EINVAL; 177 177 178 178 adv->num_dsi_lanes = num_lanes; ··· 180 180 adv->host_node = of_graph_get_remote_node(np, 0, 0); 181 181 if (!adv->host_node) 182 182 return -ENODEV; 183 - 184 - of_node_put(adv->host_node); 185 183 186 184 adv->use_timing_gen = !of_property_read_bool(np, 187 185 "adi,disable-timing-generator");
+5 -5
drivers/gpu/drm/display/drm_dp_tunnel.c
··· 1896 1896 * 1897 1897 * Creates a DP tunnel manager for @dev. 1898 1898 * 1899 - * Returns a pointer to the tunnel manager if created successfully or NULL in 1900 - * case of an error. 1899 + * Returns a pointer to the tunnel manager if created successfully or error 1900 + * pointer in case of failure. 1901 1901 */ 1902 1902 struct drm_dp_tunnel_mgr * 1903 1903 drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count) ··· 1907 1907 1908 1908 mgr = kzalloc(sizeof(*mgr), GFP_KERNEL); 1909 1909 if (!mgr) 1910 - return NULL; 1910 + return ERR_PTR(-ENOMEM); 1911 1911 1912 1912 mgr->dev = dev; 1913 1913 init_waitqueue_head(&mgr->bw_req_queue); ··· 1916 1916 if (!mgr->groups) { 1917 1917 kfree(mgr); 1918 1918 1919 - return NULL; 1919 + return ERR_PTR(-ENOMEM); 1920 1920 } 1921 1921 1922 1922 #ifdef CONFIG_DRM_DISPLAY_DP_TUNNEL_STATE_DEBUG ··· 1927 1927 if (!init_group(mgr, &mgr->groups[i])) { 1928 1928 destroy_mgr(mgr); 1929 1929 1930 - return NULL; 1930 + return ERR_PTR(-ENOMEM); 1931 1931 } 1932 1932 1933 1933 mgr->group_count++;
+7 -4
drivers/gpu/drm/drm_modes.c
··· 1287 1287 */ 1288 1288 int drm_mode_vrefresh(const struct drm_display_mode *mode) 1289 1289 { 1290 - unsigned int num, den; 1290 + unsigned int num = 1, den = 1; 1291 1291 1292 1292 if (mode->htotal == 0 || mode->vtotal == 0) 1293 1293 return 0; 1294 - 1295 - num = mode->clock; 1296 - den = mode->htotal * mode->vtotal; 1297 1294 1298 1295 if (mode->flags & DRM_MODE_FLAG_INTERLACE) 1299 1296 num *= 2; ··· 1298 1301 den *= 2; 1299 1302 if (mode->vscan > 1) 1300 1303 den *= mode->vscan; 1304 + 1305 + if (check_mul_overflow(mode->clock, num, &num)) 1306 + return 0; 1307 + 1308 + if (check_mul_overflow(mode->htotal * mode->vtotal, den, &den)) 1309 + return 0; 1301 1310 1302 1311 return DIV_ROUND_CLOSEST_ULL(mul_u32_u32(num, 1000), den); 1303 1312 }
+4 -8
drivers/gpu/drm/i915/display/intel_cx0_phy.c
··· 2115 2115 0, C10_VDR_CTRL_MSGBUS_ACCESS, 2116 2116 MB_WRITE_COMMITTED); 2117 2117 2118 - /* Custom width needs to be programmed to 0 for both the phy lanes */ 2119 - intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CUSTOM_WIDTH, 2120 - C10_VDR_CUSTOM_WIDTH_MASK, C10_VDR_CUSTOM_WIDTH_8_10, 2121 - MB_WRITE_COMMITTED); 2122 - intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1), 2123 - 0, C10_VDR_CTRL_UPDATE_CFG, 2124 - MB_WRITE_COMMITTED); 2125 - 2126 2118 /* Program the pll values only for the master lane */ 2127 2119 for (i = 0; i < ARRAY_SIZE(pll_state->pll); i++) 2128 2120 intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_PLL(i), ··· 2124 2132 intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_CMN(0), pll_state->cmn, MB_WRITE_COMMITTED); 2125 2133 intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_TX(0), pll_state->tx, MB_WRITE_COMMITTED); 2126 2134 2135 + /* Custom width needs to be programmed to 0 for both the phy lanes */ 2136 + intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CUSTOM_WIDTH, 2137 + C10_VDR_CUSTOM_WIDTH_MASK, C10_VDR_CUSTOM_WIDTH_8_10, 2138 + MB_WRITE_COMMITTED); 2127 2139 intel_cx0_rmw(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_CONTROL(1), 2128 2140 0, C10_VDR_CTRL_MASTER_LANE | C10_VDR_CTRL_UPDATE_CFG, 2129 2141 MB_WRITE_COMMITTED);
+9 -3
drivers/gpu/drm/i915/display/intel_hdcp.c
··· 1158 1158 goto out; 1159 1159 } 1160 1160 1161 - intel_hdcp_update_value(connector, 1162 - DRM_MODE_CONTENT_PROTECTION_DESIRED, 1163 - true); 1161 + ret = intel_hdcp1_enable(connector); 1162 + if (ret) { 1163 + drm_err(display->drm, "Failed to enable hdcp (%d)\n", ret); 1164 + intel_hdcp_update_value(connector, 1165 + DRM_MODE_CONTENT_PROTECTION_DESIRED, 1166 + true); 1167 + goto out; 1168 + } 1169 + 1164 1170 out: 1165 1171 mutex_unlock(&dig_port->hdcp_mutex); 1166 1172 mutex_unlock(&hdcp->mutex);
+5
drivers/gpu/drm/i915/gt/intel_engine_types.h
··· 343 343 * @start_gt_clk: GT clock time of last idle to active transition. 344 344 */ 345 345 u64 start_gt_clk; 346 + 347 + /** 348 + * @total: The last value of total returned 349 + */ 350 + u64 total; 346 351 }; 347 352 348 353 union intel_engine_tlb_inv_reg {
+1 -1
drivers/gpu/drm/i915/gt/intel_rc6.c
··· 133 133 GEN9_MEDIA_PG_ENABLE | 134 134 GEN11_MEDIA_SAMPLER_PG_ENABLE; 135 135 136 - if (GRAPHICS_VER(gt->i915) >= 12) { 136 + if (GRAPHICS_VER(gt->i915) >= 12 && !IS_DG1(gt->i915)) { 137 137 for (i = 0; i < I915_MAX_VCS; i++) 138 138 if (HAS_ENGINE(gt, _VCS(i))) 139 139 pg_enable |= (VDN_HCP_POWERGATE_ENABLE(i) |
+39 -2
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
··· 1243 1243 } while (++i < 6); 1244 1244 } 1245 1245 1246 + static void __set_engine_usage_record(struct intel_engine_cs *engine, 1247 + u32 last_in, u32 id, u32 total) 1248 + { 1249 + struct iosys_map rec_map = intel_guc_engine_usage_record_map(engine); 1250 + 1251 + #define record_write(map_, field_, val_) \ 1252 + iosys_map_wr_field(map_, 0, struct guc_engine_usage_record, field_, val_) 1253 + 1254 + record_write(&rec_map, last_switch_in_stamp, last_in); 1255 + record_write(&rec_map, current_context_index, id); 1256 + record_write(&rec_map, total_runtime, total); 1257 + 1258 + #undef record_write 1259 + } 1260 + 1246 1261 static void guc_update_engine_gt_clks(struct intel_engine_cs *engine) 1247 1262 { 1248 1263 struct intel_engine_guc_stats *stats = &engine->stats.guc; ··· 1378 1363 total += intel_gt_clock_interval_to_ns(gt, clk); 1379 1364 } 1380 1365 1366 + if (total > stats->total) 1367 + stats->total = total; 1368 + 1381 1369 spin_unlock_irqrestore(&guc->timestamp.lock, flags); 1382 1370 1383 - return ns_to_ktime(total); 1371 + return ns_to_ktime(stats->total); 1384 1372 } 1385 1373 1386 1374 static void guc_enable_busyness_worker(struct intel_guc *guc) ··· 1449 1431 1450 1432 guc_update_pm_timestamp(guc, &unused); 1451 1433 for_each_engine(engine, gt, id) { 1434 + struct intel_engine_guc_stats *stats = &engine->stats.guc; 1435 + 1452 1436 guc_update_engine_gt_clks(engine); 1453 - engine->stats.guc.prev_total = 0; 1437 + 1438 + /* 1439 + * If resetting a running context, accumulate the active 1440 + * time as well since there will be no context switch. 1441 + */ 1442 + if (stats->running) { 1443 + u64 clk = guc->timestamp.gt_stamp - stats->start_gt_clk; 1444 + 1445 + stats->total_gt_clks += clk; 1446 + } 1447 + stats->prev_total = 0; 1448 + stats->running = 0; 1454 1449 } 1455 1450 1456 1451 spin_unlock_irqrestore(&guc->timestamp.lock, flags); ··· 1574 1543 1575 1544 static int guc_action_enable_usage_stats(struct intel_guc *guc) 1576 1545 { 1546 + struct intel_gt *gt = guc_to_gt(guc); 1547 + struct intel_engine_cs *engine; 1548 + enum intel_engine_id id; 1577 1549 u32 offset = intel_guc_engine_usage_offset(guc); 1578 1550 u32 action[] = { 1579 1551 INTEL_GUC_ACTION_SET_ENG_UTIL_BUFF, 1580 1552 offset, 1581 1553 0, 1582 1554 }; 1555 + 1556 + for_each_engine(engine, gt, id) 1557 + __set_engine_usage_record(engine, 0, 0xffffffff, 0); 1583 1558 1584 1559 return intel_guc_send(guc, action, ARRAY_SIZE(action)); 1585 1560 }
-5
drivers/gpu/drm/mediatek/Kconfig
··· 14 14 select DRM_BRIDGE_CONNECTOR 15 15 select DRM_MIPI_DSI 16 16 select DRM_PANEL 17 - select MEMORY 18 - select MTK_SMI 19 - select PHY_MTK_MIPI_DSI 20 17 select VIDEOMODE_HELPERS 21 18 help 22 19 Choose this option if you have a Mediatek SoCs. ··· 24 27 config DRM_MEDIATEK_DP 25 28 tristate "DRM DPTX Support for MediaTek SoCs" 26 29 depends on DRM_MEDIATEK 27 - select PHY_MTK_DP 28 30 select DRM_DISPLAY_HELPER 29 31 select DRM_DISPLAY_DP_HELPER 30 32 select DRM_DISPLAY_DP_AUX_BUS ··· 34 38 tristate "DRM HDMI Support for Mediatek SoCs" 35 39 depends on DRM_MEDIATEK 36 40 select SND_SOC_HDMI_CODEC if SND_SOC 37 - select PHY_MTK_HDMI 38 41 help 39 42 DRM/KMS HDMI driver for Mediatek SoCs
+19 -6
drivers/gpu/drm/mediatek/mtk_crtc.c
··· 112 112 113 113 drm_crtc_handle_vblank(&mtk_crtc->base); 114 114 115 + #if IS_REACHABLE(CONFIG_MTK_CMDQ) 116 + if (mtk_crtc->cmdq_client.chan) 117 + return; 118 + #endif 119 + 115 120 spin_lock_irqsave(&mtk_crtc->config_lock, flags); 116 121 if (!mtk_crtc->config_updating && mtk_crtc->pending_needs_vblank) { 117 122 mtk_crtc_finish_page_flip(mtk_crtc); ··· 289 284 state = to_mtk_crtc_state(mtk_crtc->base.state); 290 285 291 286 spin_lock_irqsave(&mtk_crtc->config_lock, flags); 292 - if (mtk_crtc->config_updating) { 293 - spin_unlock_irqrestore(&mtk_crtc->config_lock, flags); 287 + if (mtk_crtc->config_updating) 294 288 goto ddp_cmdq_cb_out; 295 - } 296 289 297 290 state->pending_config = false; 298 291 ··· 318 315 mtk_crtc->pending_async_planes = false; 319 316 } 320 317 321 - spin_unlock_irqrestore(&mtk_crtc->config_lock, flags); 322 - 323 318 ddp_cmdq_cb_out: 319 + 320 + if (mtk_crtc->pending_needs_vblank) { 321 + mtk_crtc_finish_page_flip(mtk_crtc); 322 + mtk_crtc->pending_needs_vblank = false; 323 + } 324 + 325 + spin_unlock_irqrestore(&mtk_crtc->config_lock, flags); 324 326 325 327 mtk_crtc->cmdq_vblank_cnt = 0; 326 328 wake_up(&mtk_crtc->cb_blocking_queue); ··· 614 606 */ 615 607 mtk_crtc->cmdq_vblank_cnt = 3; 616 608 609 + spin_lock_irqsave(&mtk_crtc->config_lock, flags); 610 + mtk_crtc->config_updating = false; 611 + spin_unlock_irqrestore(&mtk_crtc->config_lock, flags); 612 + 617 613 mbox_send_message(mtk_crtc->cmdq_client.chan, cmdq_handle); 618 614 mbox_client_txdone(mtk_crtc->cmdq_client.chan, 0); 619 615 } 620 - #endif 616 + #else 621 617 spin_lock_irqsave(&mtk_crtc->config_lock, flags); 622 618 mtk_crtc->config_updating = false; 623 619 spin_unlock_irqrestore(&mtk_crtc->config_lock, flags); 620 + #endif 624 621 625 622 mutex_unlock(&mtk_crtc->hw_lock); 626 623 }
+39 -30
drivers/gpu/drm/mediatek/mtk_disp_ovl.c
··· 460 460 } 461 461 } 462 462 463 + static void mtk_ovl_afbc_layer_config(struct mtk_disp_ovl *ovl, 464 + unsigned int idx, 465 + struct mtk_plane_pending_state *pending, 466 + struct cmdq_pkt *cmdq_pkt) 467 + { 468 + unsigned int pitch_msb = pending->pitch >> 16; 469 + unsigned int hdr_pitch = pending->hdr_pitch; 470 + unsigned int hdr_addr = pending->hdr_addr; 471 + 472 + if (pending->modifier != DRM_FORMAT_MOD_LINEAR) { 473 + mtk_ddp_write_relaxed(cmdq_pkt, hdr_addr, &ovl->cmdq_reg, ovl->regs, 474 + DISP_REG_OVL_HDR_ADDR(ovl, idx)); 475 + mtk_ddp_write_relaxed(cmdq_pkt, 476 + OVL_PITCH_MSB_2ND_SUBBUF | pitch_msb, 477 + &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx)); 478 + mtk_ddp_write_relaxed(cmdq_pkt, hdr_pitch, &ovl->cmdq_reg, ovl->regs, 479 + DISP_REG_OVL_HDR_PITCH(ovl, idx)); 480 + } else { 481 + mtk_ddp_write_relaxed(cmdq_pkt, pitch_msb, 482 + &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx)); 483 + } 484 + } 485 + 463 486 void mtk_ovl_layer_config(struct device *dev, unsigned int idx, 464 487 struct mtk_plane_state *state, 465 488 struct cmdq_pkt *cmdq_pkt) ··· 490 467 struct mtk_disp_ovl *ovl = dev_get_drvdata(dev); 491 468 struct mtk_plane_pending_state *pending = &state->pending; 492 469 unsigned int addr = pending->addr; 493 - unsigned int hdr_addr = pending->hdr_addr; 494 - unsigned int pitch = pending->pitch; 495 - unsigned int hdr_pitch = pending->hdr_pitch; 470 + unsigned int pitch_lsb = pending->pitch & GENMASK(15, 0); 496 471 unsigned int fmt = pending->format; 472 + unsigned int rotation = pending->rotation; 497 473 unsigned int offset = (pending->y << 16) | pending->x; 498 474 unsigned int src_size = (pending->height << 16) | pending->width; 499 475 unsigned int blend_mode = state->base.pixel_blend_mode; 500 476 unsigned int ignore_pixel_alpha = 0; 501 477 unsigned int con; 502 - bool is_afbc = pending->modifier != DRM_FORMAT_MOD_LINEAR; 503 - union overlay_pitch { 504 - struct split_pitch { 505 - u16 lsb; 506 - u16 msb; 507 - } split_pitch; 508 - u32 pitch; 509 - } overlay_pitch; 510 - 511 - overlay_pitch.pitch = pitch; 512 478 513 479 if (!pending->enable) { 514 480 mtk_ovl_layer_off(dev, idx, cmdq_pkt); ··· 525 513 ignore_pixel_alpha = OVL_CONST_BLEND; 526 514 } 527 515 528 - if (pending->rotation & DRM_MODE_REFLECT_Y) { 516 + /* 517 + * Treat rotate 180 as flip x + flip y, and XOR the original rotation value 518 + * to flip x + flip y to support both in the same time. 519 + */ 520 + if (rotation & DRM_MODE_ROTATE_180) 521 + rotation ^= DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y; 522 + 523 + if (rotation & DRM_MODE_REFLECT_Y) { 529 524 con |= OVL_CON_VIRT_FLIP; 530 525 addr += (pending->height - 1) * pending->pitch; 531 526 } 532 527 533 - if (pending->rotation & DRM_MODE_REFLECT_X) { 528 + if (rotation & DRM_MODE_REFLECT_X) { 534 529 con |= OVL_CON_HORZ_FLIP; 535 530 addr += pending->pitch - 1; 536 531 } 537 532 538 533 if (ovl->data->supports_afbc) 539 - mtk_ovl_set_afbc(ovl, cmdq_pkt, idx, is_afbc); 534 + mtk_ovl_set_afbc(ovl, cmdq_pkt, idx, 535 + pending->modifier != DRM_FORMAT_MOD_LINEAR); 540 536 541 537 mtk_ddp_write_relaxed(cmdq_pkt, con, &ovl->cmdq_reg, ovl->regs, 542 538 DISP_REG_OVL_CON(idx)); 543 - mtk_ddp_write_relaxed(cmdq_pkt, overlay_pitch.split_pitch.lsb | ignore_pixel_alpha, 539 + mtk_ddp_write_relaxed(cmdq_pkt, pitch_lsb | ignore_pixel_alpha, 544 540 &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH(idx)); 545 541 mtk_ddp_write_relaxed(cmdq_pkt, src_size, &ovl->cmdq_reg, ovl->regs, 546 542 DISP_REG_OVL_SRC_SIZE(idx)); ··· 557 537 mtk_ddp_write_relaxed(cmdq_pkt, addr, &ovl->cmdq_reg, ovl->regs, 558 538 DISP_REG_OVL_ADDR(ovl, idx)); 559 539 560 - if (is_afbc) { 561 - mtk_ddp_write_relaxed(cmdq_pkt, hdr_addr, &ovl->cmdq_reg, ovl->regs, 562 - DISP_REG_OVL_HDR_ADDR(ovl, idx)); 563 - mtk_ddp_write_relaxed(cmdq_pkt, 564 - OVL_PITCH_MSB_2ND_SUBBUF | overlay_pitch.split_pitch.msb, 565 - &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx)); 566 - mtk_ddp_write_relaxed(cmdq_pkt, hdr_pitch, &ovl->cmdq_reg, ovl->regs, 567 - DISP_REG_OVL_HDR_PITCH(ovl, idx)); 568 - } else { 569 - mtk_ddp_write_relaxed(cmdq_pkt, 570 - overlay_pitch.split_pitch.msb, 571 - &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx)); 572 - } 540 + if (ovl->data->supports_afbc) 541 + mtk_ovl_afbc_layer_config(ovl, idx, pending, cmdq_pkt); 573 542 574 543 mtk_ovl_set_bit_depth(dev, idx, fmt, cmdq_pkt); 575 544 mtk_ovl_layer_on(dev, idx, cmdq_pkt);
+27 -19
drivers/gpu/drm/mediatek/mtk_dp.c
··· 543 543 enum dp_pixelformat color_format) 544 544 { 545 545 u32 val; 546 - 547 - /* update MISC0 */ 548 - mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3034, 549 - color_format << DP_TEST_COLOR_FORMAT_SHIFT, 550 - DP_TEST_COLOR_FORMAT_MASK); 546 + u32 misc0_color; 551 547 552 548 switch (color_format) { 553 549 case DP_PIXELFORMAT_YUV422: 554 550 val = PIXEL_ENCODE_FORMAT_DP_ENC0_P0_YCBCR422; 551 + misc0_color = DP_COLOR_FORMAT_YCbCr422; 555 552 break; 556 553 case DP_PIXELFORMAT_RGB: 557 554 val = PIXEL_ENCODE_FORMAT_DP_ENC0_P0_RGB; 555 + misc0_color = DP_COLOR_FORMAT_RGB; 558 556 break; 559 557 default: 560 558 drm_warn(mtk_dp->drm_dev, "Unsupported color format: %d\n", 561 559 color_format); 562 560 return -EINVAL; 563 561 } 562 + 563 + /* update MISC0 */ 564 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3034, 565 + misc0_color, 566 + DP_TEST_COLOR_FORMAT_MASK); 564 567 565 568 mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_303C, 566 569 val, PIXEL_ENCODE_FORMAT_DP_ENC0_P0_MASK); ··· 2103 2100 struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge); 2104 2101 enum drm_connector_status ret = connector_status_disconnected; 2105 2102 bool enabled = mtk_dp->enabled; 2106 - u8 sink_count = 0; 2107 2103 2108 2104 if (!mtk_dp->train_info.cable_plugged_in) 2109 2105 return ret; ··· 2117 2115 * function, we just need to check the HPD connection to check 2118 2116 * whether we connect to a sink device. 2119 2117 */ 2120 - drm_dp_dpcd_readb(&mtk_dp->aux, DP_SINK_COUNT, &sink_count); 2121 - if (DP_GET_SINK_COUNT(sink_count)) 2118 + 2119 + if (drm_dp_read_sink_count(&mtk_dp->aux) > 0) 2122 2120 ret = connector_status_connected; 2123 2121 2124 2122 if (!enabled) ··· 2410 2408 { 2411 2409 struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge); 2412 2410 u32 bpp = info->color_formats & DRM_COLOR_FORMAT_YCBCR422 ? 16 : 24; 2413 - u32 rate = min_t(u32, drm_dp_max_link_rate(mtk_dp->rx_cap) * 2414 - drm_dp_max_lane_count(mtk_dp->rx_cap), 2415 - drm_dp_bw_code_to_link_rate(mtk_dp->max_linkrate) * 2416 - mtk_dp->max_lanes); 2411 + u32 lane_count_min = mtk_dp->train_info.lane_count; 2412 + u32 rate = drm_dp_bw_code_to_link_rate(mtk_dp->train_info.link_rate) * 2413 + lane_count_min; 2417 2414 2418 - if (rate < mode->clock * bpp / 8) 2415 + /* 2416 + *FEC overhead is approximately 2.4% from DP 1.4a spec 2.2.1.4.2. 2417 + *The down-spread amplitude shall either be disabled (0.0%) or up 2418 + *to 0.5% from 1.4a 3.5.2.6. Add up to approximately 3% total overhead. 2419 + * 2420 + *Because rate is already divided by 10, 2421 + *mode->clock does not need to be multiplied by 10 2422 + */ 2423 + if ((rate * 97 / 100) < (mode->clock * bpp / 8)) 2419 2424 return MODE_CLOCK_HIGH; 2420 2425 2421 2426 return MODE_OK; ··· 2463 2454 struct drm_display_mode *mode = &crtc_state->adjusted_mode; 2464 2455 struct drm_display_info *display_info = 2465 2456 &conn_state->connector->display_info; 2466 - u32 rate = min_t(u32, drm_dp_max_link_rate(mtk_dp->rx_cap) * 2467 - drm_dp_max_lane_count(mtk_dp->rx_cap), 2468 - drm_dp_bw_code_to_link_rate(mtk_dp->max_linkrate) * 2469 - mtk_dp->max_lanes); 2457 + u32 lane_count_min = mtk_dp->train_info.lane_count; 2458 + u32 rate = drm_dp_bw_code_to_link_rate(mtk_dp->train_info.link_rate) * 2459 + lane_count_min; 2470 2460 2471 2461 *num_input_fmts = 0; 2472 2462 ··· 2474 2466 * datarate of YUV422 and sink device supports YUV422, we output YUV422 2475 2467 * format. Use this condition, we can support more resolution. 2476 2468 */ 2477 - if ((rate < (mode->clock * 24 / 8)) && 2478 - (rate > (mode->clock * 16 / 8)) && 2469 + if (((rate * 97 / 100) < (mode->clock * 24 / 8)) && 2470 + ((rate * 97 / 100) > (mode->clock * 16 / 8)) && 2479 2471 (display_info->color_formats & DRM_COLOR_FORMAT_YCBCR422)) { 2480 2472 input_fmts = kcalloc(1, sizeof(*input_fmts), GFP_KERNEL); 2481 2473 if (!input_fmts)
+9 -4
drivers/gpu/drm/mediatek/mtk_drm_drv.c
··· 373 373 struct mtk_drm_private *temp_drm_priv; 374 374 struct device_node *phandle = dev->parent->of_node; 375 375 const struct of_device_id *of_id; 376 + struct device_node *node; 376 377 struct device *drm_dev; 377 378 unsigned int cnt = 0; 378 379 int i, j; 379 380 380 - for_each_child_of_node_scoped(phandle->parent, node) { 381 + for_each_child_of_node(phandle->parent, node) { 381 382 struct platform_device *pdev; 382 383 383 384 of_id = of_match_node(mtk_drm_of_ids, node); ··· 407 406 if (temp_drm_priv->mtk_drm_bound) 408 407 cnt++; 409 408 410 - if (cnt == MAX_CRTC) 409 + if (cnt == MAX_CRTC) { 410 + of_node_put(node); 411 411 break; 412 + } 412 413 } 413 414 414 415 if (drm_priv->data->mmsys_dev_num == cnt) { ··· 676 673 err_free: 677 674 private->drm = NULL; 678 675 drm_dev_put(drm); 676 + for (i = 0; i < private->data->mmsys_dev_num; i++) 677 + private->all_drm_private[i]->drm = NULL; 679 678 return ret; 680 679 } 681 680 ··· 905 900 const unsigned int **out_path, 906 901 unsigned int *out_path_len) 907 902 { 908 - struct device_node *next, *prev, *vdo = dev->parent->of_node; 903 + struct device_node *next = NULL, *prev, *vdo = dev->parent->of_node; 909 904 unsigned int temp_path[DDP_COMPONENT_DRM_ID_MAX] = { 0 }; 910 905 unsigned int *final_ddp_path; 911 906 unsigned short int idx = 0; ··· 1094 1089 /* No devicetree graphs support: go with hardcoded paths if present */ 1095 1090 dev_dbg(dev, "Using hardcoded paths for MMSYS %u\n", mtk_drm_data->mmsys_id); 1096 1091 private->data = mtk_drm_data; 1097 - }; 1092 + } 1098 1093 1099 1094 private->all_drm_private = devm_kmalloc_array(dev, private->data->mmsys_dev_num, 1100 1095 sizeof(*private->all_drm_private),
+30 -19
drivers/gpu/drm/mediatek/mtk_dsi.c
··· 139 139 #define CLK_HS_POST GENMASK(15, 8) 140 140 #define CLK_HS_EXIT GENMASK(23, 16) 141 141 142 - #define DSI_VM_CMD_CON 0x130 142 + /* DSI_VM_CMD_CON */ 143 143 #define VM_CMD_EN BIT(0) 144 144 #define TS_VFP_EN BIT(5) 145 145 146 - #define DSI_SHADOW_DEBUG 0x190U 146 + /* DSI_SHADOW_DEBUG */ 147 147 #define FORCE_COMMIT BIT(0) 148 148 #define BYPASS_SHADOW BIT(1) 149 149 ··· 187 187 188 188 struct mtk_dsi_driver_data { 189 189 const u32 reg_cmdq_off; 190 + const u32 reg_vm_cmd_off; 191 + const u32 reg_shadow_dbg_off; 190 192 bool has_shadow_ctl; 191 193 bool has_size_ctl; 192 194 bool cmdq_long_packet_ctl; ··· 248 246 u32 data_rate_mhz = DIV_ROUND_UP(dsi->data_rate, HZ_PER_MHZ); 249 247 struct mtk_phy_timing *timing = &dsi->phy_timing; 250 248 251 - timing->lpx = (80 * data_rate_mhz / (8 * 1000)) + 1; 252 - timing->da_hs_prepare = (59 * data_rate_mhz + 4 * 1000) / 8000 + 1; 253 - timing->da_hs_zero = (163 * data_rate_mhz + 11 * 1000) / 8000 + 1 - 249 + timing->lpx = (60 * data_rate_mhz / (8 * 1000)) + 1; 250 + timing->da_hs_prepare = (80 * data_rate_mhz + 4 * 1000) / 8000; 251 + timing->da_hs_zero = (170 * data_rate_mhz + 10 * 1000) / 8000 + 1 - 254 252 timing->da_hs_prepare; 255 - timing->da_hs_trail = (78 * data_rate_mhz + 7 * 1000) / 8000 + 1; 253 + timing->da_hs_trail = timing->da_hs_prepare + 1; 256 254 257 - timing->ta_go = 4 * timing->lpx; 258 - timing->ta_sure = 3 * timing->lpx / 2; 259 - timing->ta_get = 5 * timing->lpx; 260 - timing->da_hs_exit = (118 * data_rate_mhz / (8 * 1000)) + 1; 255 + timing->ta_go = 4 * timing->lpx - 2; 256 + timing->ta_sure = timing->lpx + 2; 257 + timing->ta_get = 4 * timing->lpx; 258 + timing->da_hs_exit = 2 * timing->lpx + 1; 261 259 262 - timing->clk_hs_prepare = (57 * data_rate_mhz / (8 * 1000)) + 1; 263 - timing->clk_hs_post = (65 * data_rate_mhz + 53 * 1000) / 8000 + 1; 264 - timing->clk_hs_trail = (78 * data_rate_mhz + 7 * 1000) / 8000 + 1; 265 - timing->clk_hs_zero = (330 * data_rate_mhz / (8 * 1000)) + 1 - 266 - timing->clk_hs_prepare; 267 - timing->clk_hs_exit = (118 * data_rate_mhz / (8 * 1000)) + 1; 260 + timing->clk_hs_prepare = 70 * data_rate_mhz / (8 * 1000); 261 + timing->clk_hs_post = timing->clk_hs_prepare + 8; 262 + timing->clk_hs_trail = timing->clk_hs_prepare; 263 + timing->clk_hs_zero = timing->clk_hs_trail * 4; 264 + timing->clk_hs_exit = 2 * timing->clk_hs_trail; 268 265 269 266 timcon0 = FIELD_PREP(LPX, timing->lpx) | 270 267 FIELD_PREP(HS_PREP, timing->da_hs_prepare) | ··· 368 367 369 368 static void mtk_dsi_set_vm_cmd(struct mtk_dsi *dsi) 370 369 { 371 - mtk_dsi_mask(dsi, DSI_VM_CMD_CON, VM_CMD_EN, VM_CMD_EN); 372 - mtk_dsi_mask(dsi, DSI_VM_CMD_CON, TS_VFP_EN, TS_VFP_EN); 370 + mtk_dsi_mask(dsi, dsi->driver_data->reg_vm_cmd_off, VM_CMD_EN, VM_CMD_EN); 371 + mtk_dsi_mask(dsi, dsi->driver_data->reg_vm_cmd_off, TS_VFP_EN, TS_VFP_EN); 373 372 } 374 373 375 374 static void mtk_dsi_rxtx_control(struct mtk_dsi *dsi) ··· 715 714 716 715 if (dsi->driver_data->has_shadow_ctl) 717 716 writel(FORCE_COMMIT | BYPASS_SHADOW, 718 - dsi->regs + DSI_SHADOW_DEBUG); 717 + dsi->regs + dsi->driver_data->reg_shadow_dbg_off); 719 718 720 719 mtk_dsi_reset_engine(dsi); 721 720 mtk_dsi_phy_timconfig(dsi); ··· 1264 1263 1265 1264 static const struct mtk_dsi_driver_data mt8173_dsi_driver_data = { 1266 1265 .reg_cmdq_off = 0x200, 1266 + .reg_vm_cmd_off = 0x130, 1267 + .reg_shadow_dbg_off = 0x190 1267 1268 }; 1268 1269 1269 1270 static const struct mtk_dsi_driver_data mt2701_dsi_driver_data = { 1270 1271 .reg_cmdq_off = 0x180, 1272 + .reg_vm_cmd_off = 0x130, 1273 + .reg_shadow_dbg_off = 0x190 1271 1274 }; 1272 1275 1273 1276 static const struct mtk_dsi_driver_data mt8183_dsi_driver_data = { 1274 1277 .reg_cmdq_off = 0x200, 1278 + .reg_vm_cmd_off = 0x130, 1279 + .reg_shadow_dbg_off = 0x190, 1275 1280 .has_shadow_ctl = true, 1276 1281 .has_size_ctl = true, 1277 1282 }; 1278 1283 1279 1284 static const struct mtk_dsi_driver_data mt8186_dsi_driver_data = { 1280 1285 .reg_cmdq_off = 0xd00, 1286 + .reg_vm_cmd_off = 0x200, 1287 + .reg_shadow_dbg_off = 0xc00, 1281 1288 .has_shadow_ctl = true, 1282 1289 .has_size_ctl = true, 1283 1290 }; 1284 1291 1285 1292 static const struct mtk_dsi_driver_data mt8188_dsi_driver_data = { 1286 1293 .reg_cmdq_off = 0xd00, 1294 + .reg_vm_cmd_off = 0x200, 1295 + .reg_shadow_dbg_off = 0xc00, 1287 1296 .has_shadow_ctl = true, 1288 1297 .has_size_ctl = true, 1289 1298 .cmdq_long_packet_ctl = true,
+2
drivers/gpu/drm/panel/panel-himax-hx83102.c
··· 565 565 struct drm_display_mode *mode; 566 566 567 567 mode = drm_mode_duplicate(connector->dev, m); 568 + if (!mode) 569 + return -ENOMEM; 568 570 569 571 mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED; 570 572 drm_mode_set_name(mode);
+2 -2
drivers/gpu/drm/panel/panel-novatek-nt35950.c
··· 481 481 return dev_err_probe(dev, -EPROBE_DEFER, "Cannot get secondary DSI host\n"); 482 482 483 483 nt->dsi[1] = mipi_dsi_device_register_full(dsi_r_host, info); 484 - if (!nt->dsi[1]) { 484 + if (IS_ERR(nt->dsi[1])) { 485 485 dev_err(dev, "Cannot get secondary DSI node\n"); 486 - return -ENODEV; 486 + return PTR_ERR(nt->dsi[1]); 487 487 } 488 488 num_dsis++; 489 489 }
+1
drivers/gpu/drm/panel/panel-sitronix-st7701.c
··· 1177 1177 return dev_err_probe(dev, ret, "Failed to get orientation\n"); 1178 1178 1179 1179 drm_panel_init(&st7701->panel, dev, &st7701_funcs, connector_type); 1180 + st7701->panel.prepare_prev_first = true; 1180 1181 1181 1182 /** 1182 1183 * Once sleep out has been issued, ST7701 IC required to wait 120ms
+1 -1
drivers/gpu/drm/panel/panel-synaptics-r63353.c
··· 325 325 { 326 326 struct r63353_panel *rpanel = mipi_dsi_get_drvdata(dsi); 327 327 328 - r63353_panel_unprepare(&rpanel->base); 328 + drm_panel_unprepare(&rpanel->base); 329 329 } 330 330 331 331 static const struct r63353_desc sharp_ls068b3sx02_data = {
+2 -1
drivers/gpu/drm/scheduler/sched_main.c
··· 1355 1355 * drm_sched_backend_ops.run_job(). Consequently, drm_sched_backend_ops.free_job() 1356 1356 * will not be called for all jobs still in drm_gpu_scheduler.pending_list. 1357 1357 * There is no solution for this currently. Thus, it is up to the driver to make 1358 - * sure that 1358 + * sure that: 1359 + * 1359 1360 * a) drm_sched_fini() is only called after for all submitted jobs 1360 1361 * drm_sched_backend_ops.free_job() has been called or that 1361 1362 * b) the jobs for which drm_sched_backend_ops.free_job() has not been called
+10 -2
drivers/gpu/drm/xe/xe_bo.c
··· 724 724 new_mem->mem_type == XE_PL_SYSTEM) { 725 725 long timeout = dma_resv_wait_timeout(ttm_bo->base.resv, 726 726 DMA_RESV_USAGE_BOOKKEEP, 727 - true, 727 + false, 728 728 MAX_SCHEDULE_TIMEOUT); 729 729 if (timeout < 0) { 730 730 ret = timeout; ··· 848 848 849 849 out: 850 850 if ((!ttm_bo->resource || ttm_bo->resource->mem_type == XE_PL_SYSTEM) && 851 - ttm_bo->ttm) 851 + ttm_bo->ttm) { 852 + long timeout = dma_resv_wait_timeout(ttm_bo->base.resv, 853 + DMA_RESV_USAGE_KERNEL, 854 + false, 855 + MAX_SCHEDULE_TIMEOUT); 856 + if (timeout < 0) 857 + ret = timeout; 858 + 852 859 xe_tt_unmap_sg(ttm_bo->ttm); 860 + } 853 861 854 862 return ret; 855 863 }
+14 -1
drivers/gpu/drm/xe/xe_devcoredump.c
··· 109 109 drm_puts(&p, "\n**** GuC CT ****\n"); 110 110 xe_guc_ct_snapshot_print(ss->guc.ct, &p); 111 111 112 - drm_puts(&p, "\n**** Contexts ****\n"); 112 + /* 113 + * Don't add a new section header here because the mesa debug decoder 114 + * tool expects the context information to be in the 'GuC CT' section. 115 + */ 116 + /* drm_puts(&p, "\n**** Contexts ****\n"); */ 113 117 xe_guc_exec_queue_snapshot_print(ss->ge, &p); 114 118 115 119 drm_puts(&p, "\n**** Job ****\n"); ··· 366 362 const u32 *blob32 = (const u32 *)blob; 367 363 char buff[ASCII85_BUFSZ], *line_buff; 368 364 size_t line_pos = 0; 365 + 366 + /* 367 + * Splitting blobs across multiple lines is not compatible with the mesa 368 + * debug decoder tool. Note that even dropping the explicit '\n' below 369 + * doesn't help because the GuC log is so big some underlying implementation 370 + * still splits the lines at 512K characters. So just bail completely for 371 + * the moment. 372 + */ 373 + return; 369 374 370 375 #define DMESG_MAX_LINE_LEN 800 371 376 #define MIN_SPACE (ASCII85_BUFSZ + 2) /* 85 + "\n\0" */
+9
drivers/gpu/drm/xe/xe_exec_queue.c
··· 8 8 #include <linux/nospec.h> 9 9 10 10 #include <drm/drm_device.h> 11 + #include <drm/drm_drv.h> 11 12 #include <drm/drm_file.h> 12 13 #include <uapi/drm/xe_drm.h> 13 14 ··· 763 762 */ 764 763 void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q) 765 764 { 765 + struct xe_device *xe = gt_to_xe(q->gt); 766 766 struct xe_file *xef; 767 767 struct xe_lrc *lrc; 768 768 u32 old_ts, new_ts; 769 + int idx; 769 770 770 771 /* 771 772 * Jobs that are run during driver load may use an exec_queue, but are ··· 775 772 * for kernel specific work. 776 773 */ 777 774 if (!q->vm || !q->vm->xef) 775 + return; 776 + 777 + /* Synchronize with unbind while holding the xe file open */ 778 + if (!drm_dev_enter(&xe->drm, &idx)) 778 779 return; 779 780 780 781 xef = q->vm->xef; ··· 794 787 lrc = q->lrc[0]; 795 788 new_ts = xe_lrc_update_timestamp(lrc, &old_ts); 796 789 xef->run_ticks[q->class] += (new_ts - old_ts) * q->width; 790 + 791 + drm_dev_exit(idx); 797 792 } 798 793 799 794 /**
+4 -4
drivers/gpu/drm/xe/xe_gt.c
··· 387 387 xe_force_wake_init_gt(gt, gt_to_fw(gt)); 388 388 spin_lock_init(&gt->global_invl_lock); 389 389 390 + err = xe_gt_tlb_invalidation_init_early(gt); 391 + if (err) 392 + return err; 393 + 390 394 return 0; 391 395 } 392 396 ··· 591 587 gt->ring_ops[i] = xe_ring_ops_get(gt, i); 592 588 xe_hw_fence_irq_init(&gt->fence_irq[i]); 593 589 } 594 - 595 - err = xe_gt_tlb_invalidation_init(gt); 596 - if (err) 597 - return err; 598 590 599 591 err = xe_gt_pagefault_init(gt); 600 592 if (err)
+6 -4
drivers/gpu/drm/xe/xe_gt_idle.c
··· 122 122 if (!xe_gt_is_media_type(gt)) 123 123 gtidle->powergate_enable |= RENDER_POWERGATE_ENABLE; 124 124 125 - for (i = XE_HW_ENGINE_VCS0, j = 0; i <= XE_HW_ENGINE_VCS7; ++i, ++j) { 126 - if ((gt->info.engine_mask & BIT(i))) 127 - gtidle->powergate_enable |= (VDN_HCP_POWERGATE_ENABLE(j) | 128 - VDN_MFXVDENC_POWERGATE_ENABLE(j)); 125 + if (xe->info.platform != XE_DG1) { 126 + for (i = XE_HW_ENGINE_VCS0, j = 0; i <= XE_HW_ENGINE_VCS7; ++i, ++j) { 127 + if ((gt->info.engine_mask & BIT(i))) 128 + gtidle->powergate_enable |= (VDN_HCP_POWERGATE_ENABLE(j) | 129 + VDN_MFXVDENC_POWERGATE_ENABLE(j)); 130 + } 129 131 } 130 132 131 133 fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+1 -1
drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
··· 2046 2046 valid_any = valid_any || (valid_ggtt && is_primary); 2047 2047 2048 2048 if (IS_DGFX(xe)) { 2049 - bool valid_lmem = pf_get_vf_config_ggtt(primary_gt, vfid); 2049 + bool valid_lmem = pf_get_vf_config_lmem(primary_gt, vfid); 2050 2050 2051 2051 valid_any = valid_any || (valid_lmem && is_primary); 2052 2052 valid_all = valid_all && valid_lmem;
+2 -2
drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
··· 106 106 } 107 107 108 108 /** 109 - * xe_gt_tlb_invalidation_init - Initialize GT TLB invalidation state 109 + * xe_gt_tlb_invalidation_init_early - Initialize GT TLB invalidation state 110 110 * @gt: graphics tile 111 111 * 112 112 * Initialize GT TLB invalidation state, purely software initialization, should ··· 114 114 * 115 115 * Return: 0 on success, negative error code on error. 116 116 */ 117 - int xe_gt_tlb_invalidation_init(struct xe_gt *gt) 117 + int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt) 118 118 { 119 119 gt->tlb_invalidation.seqno = 1; 120 120 INIT_LIST_HEAD(&gt->tlb_invalidation.pending_fences);
+2 -1
drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
··· 14 14 struct xe_guc; 15 15 struct xe_vma; 16 16 17 - int xe_gt_tlb_invalidation_init(struct xe_gt *gt); 17 + int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt); 18 + 18 19 void xe_gt_tlb_invalidation_reset(struct xe_gt *gt); 19 20 int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt); 20 21 int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
+45 -89
drivers/gpu/drm/xe/xe_oa.c
··· 74 74 struct rcu_head rcu; 75 75 }; 76 76 77 - struct flex { 78 - struct xe_reg reg; 79 - u32 offset; 80 - u32 value; 81 - }; 82 - 83 77 struct xe_oa_open_param { 84 78 struct xe_file *xef; 85 79 u32 oa_unit_id; ··· 590 596 return ret; 591 597 } 592 598 599 + static void xe_oa_lock_vma(struct xe_exec_queue *q) 600 + { 601 + if (q->vm) { 602 + down_read(&q->vm->lock); 603 + xe_vm_lock(q->vm, false); 604 + } 605 + } 606 + 607 + static void xe_oa_unlock_vma(struct xe_exec_queue *q) 608 + { 609 + if (q->vm) { 610 + xe_vm_unlock(q->vm); 611 + up_read(&q->vm->lock); 612 + } 613 + } 614 + 593 615 static struct dma_fence *xe_oa_submit_bb(struct xe_oa_stream *stream, enum xe_oa_submit_deps deps, 594 616 struct xe_bb *bb) 595 617 { 618 + struct xe_exec_queue *q = stream->exec_q ?: stream->k_exec_q; 596 619 struct xe_sched_job *job; 597 620 struct dma_fence *fence; 598 621 int err = 0; 599 622 600 - /* Kernel configuration is issued on stream->k_exec_q, not stream->exec_q */ 601 - job = xe_bb_create_job(stream->k_exec_q, bb); 623 + xe_oa_lock_vma(q); 624 + 625 + job = xe_bb_create_job(q, bb); 602 626 if (IS_ERR(job)) { 603 627 err = PTR_ERR(job); 604 628 goto exit; 605 629 } 630 + job->ggtt = true; 606 631 607 632 if (deps == XE_OA_SUBMIT_ADD_DEPS) { 608 633 for (int i = 0; i < stream->num_syncs && !err; i++) ··· 636 623 fence = dma_fence_get(&job->drm.s_fence->finished); 637 624 xe_sched_job_push(job); 638 625 626 + xe_oa_unlock_vma(q); 627 + 639 628 return fence; 640 629 err_put_job: 641 630 xe_sched_job_put(job); 642 631 exit: 632 + xe_oa_unlock_vma(q); 643 633 return ERR_PTR(err); 644 634 } 645 635 ··· 691 675 dma_fence_put(stream->last_fence); 692 676 } 693 677 694 - static void xe_oa_store_flex(struct xe_oa_stream *stream, struct xe_lrc *lrc, 695 - struct xe_bb *bb, const struct flex *flex, u32 count) 696 - { 697 - u32 offset = xe_bo_ggtt_addr(lrc->bo); 698 - 699 - do { 700 - bb->cs[bb->len++] = MI_STORE_DATA_IMM | MI_SDI_GGTT | MI_SDI_NUM_DW(1); 701 - bb->cs[bb->len++] = offset + flex->offset * sizeof(u32); 702 - bb->cs[bb->len++] = 0; 703 - bb->cs[bb->len++] = flex->value; 704 - 705 - } while (flex++, --count); 706 - } 707 - 708 - static int xe_oa_modify_ctx_image(struct xe_oa_stream *stream, struct xe_lrc *lrc, 709 - const struct flex *flex, u32 count) 678 + static int xe_oa_load_with_lri(struct xe_oa_stream *stream, struct xe_oa_reg *reg_lri, u32 count) 710 679 { 711 680 struct dma_fence *fence; 712 681 struct xe_bb *bb; 713 682 int err; 714 683 715 - bb = xe_bb_new(stream->gt, 4 * count, false); 684 + bb = xe_bb_new(stream->gt, 2 * count + 1, false); 716 685 if (IS_ERR(bb)) { 717 686 err = PTR_ERR(bb); 718 687 goto exit; 719 688 } 720 689 721 - xe_oa_store_flex(stream, lrc, bb, flex, count); 722 - 723 - fence = xe_oa_submit_bb(stream, XE_OA_SUBMIT_NO_DEPS, bb); 724 - if (IS_ERR(fence)) { 725 - err = PTR_ERR(fence); 726 - goto free_bb; 727 - } 728 - xe_bb_free(bb, fence); 729 - dma_fence_put(fence); 730 - 731 - return 0; 732 - free_bb: 733 - xe_bb_free(bb, NULL); 734 - exit: 735 - return err; 736 - } 737 - 738 - static int xe_oa_load_with_lri(struct xe_oa_stream *stream, struct xe_oa_reg *reg_lri) 739 - { 740 - struct dma_fence *fence; 741 - struct xe_bb *bb; 742 - int err; 743 - 744 - bb = xe_bb_new(stream->gt, 3, false); 745 - if (IS_ERR(bb)) { 746 - err = PTR_ERR(bb); 747 - goto exit; 748 - } 749 - 750 - write_cs_mi_lri(bb, reg_lri, 1); 690 + write_cs_mi_lri(bb, reg_lri, count); 751 691 752 692 fence = xe_oa_submit_bb(stream, XE_OA_SUBMIT_NO_DEPS, bb); 753 693 if (IS_ERR(fence)) { ··· 723 751 static int xe_oa_configure_oar_context(struct xe_oa_stream *stream, bool enable) 724 752 { 725 753 const struct xe_oa_format *format = stream->oa_buffer.format; 726 - struct xe_lrc *lrc = stream->exec_q->lrc[0]; 727 - u32 regs_offset = xe_lrc_regs_offset(lrc) / sizeof(u32); 728 754 u32 oacontrol = __format_to_oactrl(format, OAR_OACONTROL_COUNTER_SEL_MASK) | 729 755 (enable ? OAR_OACONTROL_COUNTER_ENABLE : 0); 730 756 731 - struct flex regs_context[] = { 757 + struct xe_oa_reg reg_lri[] = { 732 758 { 733 759 OACTXCONTROL(stream->hwe->mmio_base), 734 - stream->oa->ctx_oactxctrl_offset[stream->hwe->class] + 1, 735 760 enable ? OA_COUNTER_RESUME : 0, 736 761 }, 737 762 { 763 + OAR_OACONTROL, 764 + oacontrol, 765 + }, 766 + { 738 767 RING_CONTEXT_CONTROL(stream->hwe->mmio_base), 739 - regs_offset + CTX_CONTEXT_CONTROL, 740 - _MASKED_BIT_ENABLE(CTX_CTRL_OAC_CONTEXT_ENABLE), 768 + _MASKED_FIELD(CTX_CTRL_OAC_CONTEXT_ENABLE, 769 + enable ? CTX_CTRL_OAC_CONTEXT_ENABLE : 0) 741 770 }, 742 771 }; 743 - struct xe_oa_reg reg_lri = { OAR_OACONTROL, oacontrol }; 744 - int err; 745 772 746 - /* Modify stream hwe context image with regs_context */ 747 - err = xe_oa_modify_ctx_image(stream, stream->exec_q->lrc[0], 748 - regs_context, ARRAY_SIZE(regs_context)); 749 - if (err) 750 - return err; 751 - 752 - /* Apply reg_lri using LRI */ 753 - return xe_oa_load_with_lri(stream, &reg_lri); 773 + return xe_oa_load_with_lri(stream, reg_lri, ARRAY_SIZE(reg_lri)); 754 774 } 755 775 756 776 static int xe_oa_configure_oac_context(struct xe_oa_stream *stream, bool enable) 757 777 { 758 778 const struct xe_oa_format *format = stream->oa_buffer.format; 759 - struct xe_lrc *lrc = stream->exec_q->lrc[0]; 760 - u32 regs_offset = xe_lrc_regs_offset(lrc) / sizeof(u32); 761 779 u32 oacontrol = __format_to_oactrl(format, OAR_OACONTROL_COUNTER_SEL_MASK) | 762 780 (enable ? OAR_OACONTROL_COUNTER_ENABLE : 0); 763 - struct flex regs_context[] = { 781 + struct xe_oa_reg reg_lri[] = { 764 782 { 765 783 OACTXCONTROL(stream->hwe->mmio_base), 766 - stream->oa->ctx_oactxctrl_offset[stream->hwe->class] + 1, 767 784 enable ? OA_COUNTER_RESUME : 0, 768 785 }, 769 786 { 787 + OAC_OACONTROL, 788 + oacontrol 789 + }, 790 + { 770 791 RING_CONTEXT_CONTROL(stream->hwe->mmio_base), 771 - regs_offset + CTX_CONTEXT_CONTROL, 772 - _MASKED_BIT_ENABLE(CTX_CTRL_OAC_CONTEXT_ENABLE) | 792 + _MASKED_FIELD(CTX_CTRL_OAC_CONTEXT_ENABLE, 793 + enable ? CTX_CTRL_OAC_CONTEXT_ENABLE : 0) | 773 794 _MASKED_FIELD(CTX_CTRL_RUN_ALONE, enable ? CTX_CTRL_RUN_ALONE : 0), 774 795 }, 775 796 }; 776 - struct xe_oa_reg reg_lri = { OAC_OACONTROL, oacontrol }; 777 - int err; 778 797 779 798 /* Set ccs select to enable programming of OAC_OACONTROL */ 780 799 xe_mmio_write32(&stream->gt->mmio, __oa_regs(stream)->oa_ctrl, 781 800 __oa_ccs_select(stream)); 782 801 783 - /* Modify stream hwe context image with regs_context */ 784 - err = xe_oa_modify_ctx_image(stream, stream->exec_q->lrc[0], 785 - regs_context, ARRAY_SIZE(regs_context)); 786 - if (err) 787 - return err; 788 - 789 - /* Apply reg_lri using LRI */ 790 - return xe_oa_load_with_lri(stream, &reg_lri); 802 + return xe_oa_load_with_lri(stream, reg_lri, ARRAY_SIZE(reg_lri)); 791 803 } 792 804 793 805 static int xe_oa_configure_oa_context(struct xe_oa_stream *stream, bool enable) ··· 2022 2066 if (XE_IOCTL_DBG(oa->xe, !param.exec_q)) 2023 2067 return -ENOENT; 2024 2068 2025 - if (param.exec_q->width > 1) 2026 - drm_dbg(&oa->xe->drm, "exec_q->width > 1, programming only exec_q->lrc[0]\n"); 2069 + if (XE_IOCTL_DBG(oa->xe, param.exec_q->width > 1)) 2070 + return -EOPNOTSUPP; 2027 2071 2028 2072 /*
+4 -1
drivers/gpu/drm/xe/xe_ring_ops.c
··· 221 221 222 222 static u32 get_ppgtt_flag(struct xe_sched_job *job) 223 223 { 224 - return job->q->vm ? BIT(8) : 0; 224 + if (job->q->vm && !job->ggtt) 225 + return BIT(8); 226 + 227 + return 0; 225 228 } 226 229 227 230 static int emit_copy_timestamp(struct xe_lrc *lrc, u32 *dw, int i)
+2
drivers/gpu/drm/xe/xe_sched_job_types.h
··· 56 56 u32 migrate_flush_flags; 57 57 /** @ring_ops_flush_tlb: The ring ops need to flush TLB before payload. */ 58 58 bool ring_ops_flush_tlb; 59 + /** @ggtt: mapped in ggtt. */ 60 + bool ggtt; 59 61 /** @ptrs: per instance pointers. */ 60 62 struct xe_job_ptrs ptrs[]; 61 63 };
+5 -4
drivers/hv/hv_balloon.c
··· 756 756 * adding succeeded, it is ok to proceed even if the memory was 757 757 * not onlined in time. 758 758 */ 759 - wait_for_completion_timeout(&dm_device.ol_waitevent, 5 * HZ); 759 + wait_for_completion_timeout(&dm_device.ol_waitevent, secs_to_jiffies(5)); 760 760 post_status(&dm_device); 761 761 } 762 762 } ··· 1373 1373 struct hv_dynmem_device *dm = dm_dev; 1374 1374 1375 1375 while (!kthread_should_stop()) { 1376 - wait_for_completion_interruptible_timeout(&dm_device.config_event, 1 * HZ); 1376 + wait_for_completion_interruptible_timeout(&dm_device.config_event, 1377 + secs_to_jiffies(1)); 1377 1378 /* 1378 1379 * The host expects us to post information on the memory 1379 1380 * pressure every second. ··· 1749 1748 if (ret) 1750 1749 goto out; 1751 1750 1752 - t = wait_for_completion_timeout(&dm_device.host_event, 5 * HZ); 1751 + t = wait_for_completion_timeout(&dm_device.host_event, secs_to_jiffies(5)); 1753 1752 if (t == 0) { 1754 1753 ret = -ETIMEDOUT; 1755 1754 goto out; ··· 1807 1806 if (ret) 1808 1807 goto out; 1809 1808 1810 - t = wait_for_completion_timeout(&dm_device.host_event, 5 * HZ); 1809 + t = wait_for_completion_timeout(&dm_device.host_event, secs_to_jiffies(5)); 1811 1810 if (t == 0) { 1812 1811 ret = -ETIMEDOUT; 1813 1812 goto out;
+8 -2
drivers/hv/hv_kvp.c
··· 655 655 if (host_negotiatied == NEGO_NOT_STARTED) { 656 656 host_negotiatied = NEGO_IN_PROGRESS; 657 657 schedule_delayed_work(&kvp_host_handshake_work, 658 - HV_UTIL_NEGO_TIMEOUT * HZ); 658 + secs_to_jiffies(HV_UTIL_NEGO_TIMEOUT)); 659 659 } 660 660 return; 661 661 } ··· 724 724 */ 725 725 schedule_work(&kvp_sendkey_work); 726 726 schedule_delayed_work(&kvp_timeout_work, 727 - HV_UTIL_TIMEOUT * HZ); 727 + secs_to_jiffies(HV_UTIL_TIMEOUT)); 728 728 729 729 return; 730 730 ··· 767 767 */ 768 768 kvp_transaction.state = HVUTIL_DEVICE_INIT; 769 769 770 + return 0; 771 + } 772 + 773 + int 774 + hv_kvp_init_transport(void) 775 + { 770 776 hvt = hvutil_transport_init(kvp_devname, CN_KVP_IDX, CN_KVP_VAL, 771 777 kvp_on_msg, kvp_on_reset); 772 778 if (!hvt)
+8 -1
drivers/hv/hv_snapshot.c
··· 193 193 vss_transaction.state = HVUTIL_USERSPACE_REQ; 194 194 195 195 schedule_delayed_work(&vss_timeout_work, op == VSS_OP_FREEZE ? 196 - VSS_FREEZE_TIMEOUT * HZ : HV_UTIL_TIMEOUT * HZ); 196 + secs_to_jiffies(VSS_FREEZE_TIMEOUT) : 197 + secs_to_jiffies(HV_UTIL_TIMEOUT)); 197 198 198 199 rc = hvutil_transport_send(hvt, vss_msg, sizeof(*vss_msg), NULL); 199 200 if (rc) { ··· 389 388 */ 390 389 vss_transaction.state = HVUTIL_DEVICE_INIT; 391 390 391 + return 0; 392 + } 393 + 394 + int 395 + hv_vss_init_transport(void) 396 + { 392 397 hvt = hvutil_transport_init(vss_devname, CN_VSS_IDX, CN_VSS_VAL, 393 398 vss_on_msg, vss_on_reset); 394 399 if (!hvt) {
+10 -3
drivers/hv/hv_util.c
··· 141 141 static struct hv_util_service util_kvp = { 142 142 .util_cb = hv_kvp_onchannelcallback, 143 143 .util_init = hv_kvp_init, 144 + .util_init_transport = hv_kvp_init_transport, 144 145 .util_pre_suspend = hv_kvp_pre_suspend, 145 146 .util_pre_resume = hv_kvp_pre_resume, 146 147 .util_deinit = hv_kvp_deinit, ··· 150 149 static struct hv_util_service util_vss = { 151 150 .util_cb = hv_vss_onchannelcallback, 152 151 .util_init = hv_vss_init, 152 + .util_init_transport = hv_vss_init_transport, 153 153 .util_pre_suspend = hv_vss_pre_suspend, 154 154 .util_pre_resume = hv_vss_pre_resume, 155 155 .util_deinit = hv_vss_deinit, ··· 592 590 srv->channel = dev->channel; 593 591 if (srv->util_init) { 594 592 ret = srv->util_init(srv); 595 - if (ret) { 596 - ret = -ENODEV; 593 + if (ret) 597 594 goto error1; 598 - } 599 595 } 600 596 601 597 /* ··· 613 613 if (ret) 614 614 goto error; 615 615 616 + if (srv->util_init_transport) { 617 + ret = srv->util_init_transport(); 618 + if (ret) { 619 + vmbus_close(dev->channel); 620 + goto error; 621 + } 622 + } 616 623 return 0; 617 624 618 625 error:
+2
drivers/hv/hyperv_vmbus.h
··· 370 370 void vmbus_on_msg_dpc(unsigned long data); 371 371 372 372 int hv_kvp_init(struct hv_util_service *srv); 373 + int hv_kvp_init_transport(void); 373 374 void hv_kvp_deinit(void); 374 375 int hv_kvp_pre_suspend(void); 375 376 int hv_kvp_pre_resume(void); 376 377 void hv_kvp_onchannelcallback(void *context); 377 378 378 379 int hv_vss_init(struct hv_util_service *srv); 380 + int hv_vss_init_transport(void); 379 381 void hv_vss_deinit(void); 380 382 int hv_vss_pre_suspend(void); 381 383 int hv_vss_pre_resume(void);
+1 -1
drivers/hv/vmbus_drv.c
··· 2507 2507 vmbus_request_offers(); 2508 2508 2509 2509 if (wait_for_completion_timeout( 2510 - &vmbus_connection.ready_for_resume_event, 10 * HZ) == 0) 2510 + &vmbus_connection.ready_for_resume_event, secs_to_jiffies(10)) == 0) 2511 2511 pr_err("Some vmbus device is missing after suspending?\n"); 2512 2512 2513 2513 /* Reset the event for the next suspend. */
+6 -2
drivers/hwmon/drivetemp.c
··· 165 165 { 166 166 u8 scsi_cmd[MAX_COMMAND_SIZE]; 167 167 enum req_op op; 168 + int err; 168 169 169 170 memset(scsi_cmd, 0, sizeof(scsi_cmd)); 170 171 scsi_cmd[0] = ATA_16; ··· 193 192 scsi_cmd[12] = lba_high; 194 193 scsi_cmd[14] = ata_command; 195 194 196 - return scsi_execute_cmd(st->sdev, scsi_cmd, op, st->smartdata, 197 - ATA_SECT_SIZE, HZ, 5, NULL); 195 + err = scsi_execute_cmd(st->sdev, scsi_cmd, op, st->smartdata, 196 + ATA_SECT_SIZE, HZ, 5, NULL); 197 + if (err > 0) 198 + err = -EIO; 199 + return err; 198 200 } 199 201 200 202 static int drivetemp_ata_command(struct drivetemp_data *st, u8 feature,
+6 -4
drivers/hwmon/tmp513.c
··· 182 182 struct regmap *regmap; 183 183 }; 184 184 185 - // Set the shift based on the gain 8=4, 4=3, 2=2, 1=1 185 + // Set the shift based on the gain: 8 -> 1, 4 -> 2, 2 -> 3, 1 -> 4 186 186 static inline u8 tmp51x_get_pga_shift(struct tmp51x_data *data) 187 187 { 188 188 return 5 - ffs(data->pga_gain); ··· 204 204 * 2's complement number shifted by one to four depending 205 205 * on the pga gain setting. 1lsb = 10uV 206 206 */ 207 - *val = sign_extend32(regval, 17 - tmp51x_get_pga_shift(data)); 207 + *val = sign_extend32(regval, 208 + reg == TMP51X_SHUNT_CURRENT_RESULT ? 209 + 16 - tmp51x_get_pga_shift(data) : 15); 208 210 *val = DIV_ROUND_CLOSEST(*val * 10 * MILLI, data->shunt_uohms); 209 211 break; 210 212 case TMP51X_BUS_VOLTAGE_RESULT: ··· 222 220 break; 223 221 case TMP51X_BUS_CURRENT_RESULT: 224 222 // Current = (ShuntVoltage * CalibrationRegister) / 4096 225 - *val = sign_extend32(regval, 16) * data->curr_lsb_ua; 223 + *val = sign_extend32(regval, 15) * (long)data->curr_lsb_ua; 226 224 *val = DIV_ROUND_CLOSEST(*val, MILLI); 227 225 break; 228 226 case TMP51X_LOCAL_TEMP_RESULT: ··· 234 232 case TMP51X_REMOTE_TEMP_LIMIT_2: 235 233 case TMP513_REMOTE_TEMP_LIMIT_3: 236 234 // 1lsb = 0.0625 degrees centigrade 237 - *val = sign_extend32(regval, 16) >> TMP51X_TEMP_SHIFT; 235 + *val = sign_extend32(regval, 15) >> TMP51X_TEMP_SHIFT; 238 236 *val = DIV_ROUND_CLOSEST(*val * 625, 10); 239 237 break; 240 238 case TMP51X_N_FACTOR_AND_HYST_1:
+4 -5
drivers/i2c/busses/i2c-imx.c
··· 335 335 { .compatible = "fsl,imx6sll-i2c", .data = &imx6_i2c_hwdata, }, 336 336 { .compatible = "fsl,imx6sx-i2c", .data = &imx6_i2c_hwdata, }, 337 337 { .compatible = "fsl,imx6ul-i2c", .data = &imx6_i2c_hwdata, }, 338 + { .compatible = "fsl,imx7d-i2c", .data = &imx6_i2c_hwdata, }, 338 339 { .compatible = "fsl,imx7s-i2c", .data = &imx6_i2c_hwdata, }, 339 340 { .compatible = "fsl,imx8mm-i2c", .data = &imx6_i2c_hwdata, }, 340 341 { .compatible = "fsl,imx8mn-i2c", .data = &imx6_i2c_hwdata, }, ··· 533 532 534 533 static int i2c_imx_bus_busy(struct imx_i2c_struct *i2c_imx, int for_busy, bool atomic) 535 534 { 535 + bool multi_master = i2c_imx->multi_master; 536 536 unsigned long orig_jiffies = jiffies; 537 537 unsigned int temp; 538 - 539 - if (!i2c_imx->multi_master) 540 - return 0; 541 538 542 539 while (1) { 543 540 temp = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2SR); 544 541 545 542 /* check for arbitration lost */ 546 - if (temp & I2SR_IAL) { 543 + if (multi_master && (temp & I2SR_IAL)) { 547 544 i2c_imx_clear_irq(i2c_imx, I2SR_IAL); 548 545 return -EAGAIN; 549 546 } 550 547 551 - if (for_busy && (temp & I2SR_IBB)) { 548 + if (for_busy && (!multi_master || (temp & I2SR_IBB))) { 552 549 i2c_imx->stopped = 0; 553 550 break; 554 551 }
+96 -30
drivers/i2c/busses/i2c-microchip-corei2c.c
··· 93 93 * @base: pointer to register struct 94 94 * @dev: device reference 95 95 * @i2c_clk: clock reference for i2c input clock 96 + * @msg_queue: pointer to the messages requiring sending 96 97 * @buf: pointer to msg buffer for easier use 97 98 * @msg_complete: xfer completion object 98 99 * @adapter: core i2c abstraction 99 100 * @msg_err: error code for completed message 100 101 * @bus_clk_rate: current i2c bus clock rate 101 102 * @isr_status: cached copy of local ISR status 103 + * @total_num: total number of messages to be sent/received 104 + * @current_num: index of the current message being sent/received 102 105 * @msg_len: number of bytes transferred in msg 103 106 * @addr: address of the current slave 107 + * @restart_needed: whether or not a repeated start is required after current message 104 108 */ 105 109 struct mchp_corei2c_dev { 106 110 void __iomem *base; 107 111 struct device *dev; 108 112 struct clk *i2c_clk; 113 + struct i2c_msg *msg_queue; 109 114 u8 *buf; 110 115 struct completion msg_complete; 111 116 struct i2c_adapter adapter; 112 117 int msg_err; 118 + int total_num; 119 + int current_num; 113 120 u32 bus_clk_rate; 114 121 u32 isr_status; 115 122 u16 msg_len; 116 123 u8 addr; 124 + bool restart_needed; 117 125 }; 118 126 119 127 static void mchp_corei2c_core_disable(struct mchp_corei2c_dev *idev) ··· 230 222 return 0; 231 223 } 232 224 225 + static void mchp_corei2c_next_msg(struct mchp_corei2c_dev *idev) 226 + { 227 + struct i2c_msg *this_msg; 228 + u8 ctrl; 229 + 230 + if (idev->current_num >= idev->total_num) { 231 + complete(&idev->msg_complete); 232 + return; 233 + } 234 + 235 + /* 236 + * If there's been an error, the isr needs to return control 237 + * to the "main" part of the driver, so as not to keep sending 238 + * messages once it completes and clears the SI bit. 239 + */ 240 + if (idev->msg_err) { 241 + complete(&idev->msg_complete); 242 + return; 243 + } 244 + 245 + this_msg = idev->msg_queue++; 246 + 247 + if (idev->current_num < (idev->total_num - 1)) { 248 + struct i2c_msg *next_msg = idev->msg_queue; 249 + 250 + idev->restart_needed = next_msg->flags & I2C_M_RD; 251 + } else { 252 + idev->restart_needed = false; 253 + } 254 + 255 + idev->addr = i2c_8bit_addr_from_msg(this_msg); 256 + idev->msg_len = this_msg->len; 257 + idev->buf = this_msg->buf; 258 + 259 + ctrl = readb(idev->base + CORE_I2C_CTRL); 260 + ctrl |= CTRL_STA; 261 + writeb(ctrl, idev->base + CORE_I2C_CTRL); 262 + 263 + idev->current_num++; 264 + } 265 + 233 266 static irqreturn_t mchp_corei2c_handle_isr(struct mchp_corei2c_dev *idev) 234 267 { 235 268 u32 status = idev->isr_status; ··· 287 238 ctrl &= ~CTRL_STA; 288 239 writeb(idev->addr, idev->base + CORE_I2C_DATA); 289 240 writeb(ctrl, idev->base + CORE_I2C_CTRL); 290 - if (idev->msg_len == 0) 291 - finished = true; 292 241 break; 293 242 case STATUS_M_ARB_LOST: 294 243 idev->msg_err = -EAGAIN; ··· 294 247 break; 295 248 case STATUS_M_SLAW_ACK: 296 249 case STATUS_M_TX_DATA_ACK: 297 - if (idev->msg_len > 0) 250 + if (idev->msg_len > 0) { 298 251 mchp_corei2c_fill_tx(idev); 299 - else 300 - last_byte = true; 252 + } else { 253 + if (idev->restart_needed) 254 + finished = true; 255 + else 256 + last_byte = true; 257 + } 301 258 break; 302 259 case STATUS_M_TX_DATA_NACK: 303 260 case STATUS_M_SLAR_NACK: ··· 338 287 mchp_corei2c_stop(idev); 339 288 340 289 if (last_byte || finished) 341 - complete(&idev->msg_complete); 290 + mchp_corei2c_next_msg(idev); 342 291 343 292 return IRQ_HANDLED; 344 293 } ··· 362 311 return ret; 363 312 } 364 313 365 - static int mchp_corei2c_xfer_msg(struct mchp_corei2c_dev *idev, 366 - struct i2c_msg *msg) 314 + static int mchp_corei2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, 315 + int num) 367 316 { 368 - u8 ctrl; 317 + struct mchp_corei2c_dev *idev = i2c_get_adapdata(adap); 318 + struct i2c_msg *this_msg = msgs; 369 319 unsigned long time_left; 370 - 371 - idev->addr = i2c_8bit_addr_from_msg(msg); 372 - idev->msg_len = msg->len; 373 - idev->buf = msg->buf; 374 - idev->msg_err = 0; 375 - 376 - reinit_completion(&idev->msg_complete); 320 + u8 ctrl; 377 321 378 322 mchp_corei2c_core_enable(idev); 379 323 324 + /* 325 + * The isr controls the flow of a transfer, this info needs to be saved 326 + * to a location that it can access the queue information from. 327 + */ 328 + idev->restart_needed = false; 329 + idev->msg_queue = msgs; 330 + idev->total_num = num; 331 + idev->current_num = 0; 332 + 333 + /* 334 + * But the first entry to the isr is triggered by the start in this 335 + * function, so the first message needs to be "dequeued". 336 + */ 337 + idev->addr = i2c_8bit_addr_from_msg(this_msg); 338 + idev->msg_len = this_msg->len; 339 + idev->buf = this_msg->buf; 340 + idev->msg_err = 0; 341 + 342 + if (idev->total_num > 1) { 343 + struct i2c_msg *next_msg = msgs + 1; 344 + 345 + idev->restart_needed = next_msg->flags & I2C_M_RD; 346 + } 347 + 348 + idev->current_num++; 349 + idev->msg_queue++; 350 + 351 + reinit_completion(&idev->msg_complete); 352 + 353 + /* 354 + * Send the first start to pass control to the isr 355 + */ 380 356 ctrl = readb(idev->base + CORE_I2C_CTRL); 381 357 ctrl |= CTRL_STA; 382 358 writeb(ctrl, idev->base + CORE_I2C_CTRL); ··· 413 335 if (!time_left) 414 336 return -ETIMEDOUT; 415 337 416 - return idev->msg_err; 417 - } 418 - 419 - static int mchp_corei2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, 420 - int num) 421 - { 422 - struct mchp_corei2c_dev *idev = i2c_get_adapdata(adap); 423 - int i, ret; 424 - 425 - for (i = 0; i < num; i++) { 426 - ret = mchp_corei2c_xfer_msg(idev, msgs++); 427 - if (ret) 428 - return ret; 429 - } 338 + if (idev->msg_err) 339 + return idev->msg_err; 430 340 431 341 return num; 432 342 }
+68 -30
drivers/iio/adc/ad4695.c
··· 91 91 #define AD4695_T_WAKEUP_SW_MS 3 92 92 #define AD4695_T_REFBUF_MS 100 93 93 #define AD4695_T_REGCONFIG_NS 20 94 + #define AD4695_T_SCK_CNV_DELAY_NS 80 94 95 #define AD4695_REG_ACCESS_SCLK_HZ (10 * MEGA) 95 96 96 97 /* Max number of voltage input channels. */ ··· 133 132 unsigned int vref_mv; 134 133 /* Common mode input pin voltage. */ 135 134 unsigned int com_mv; 136 - /* 1 per voltage and temperature chan plus 1 xfer to trigger 1st CNV */ 137 - struct spi_transfer buf_read_xfer[AD4695_MAX_CHANNELS + 2]; 135 + /* 136 + * 2 per voltage and temperature chan plus 1 xfer to trigger 1st 137 + * CNV. Excluding the trigger xfer, every 2nd xfer only serves 138 + * to control CS and add a delay between the last SCLK and next 139 + * CNV rising edges. 140 + */ 141 + struct spi_transfer buf_read_xfer[AD4695_MAX_CHANNELS * 2 + 3]; 138 142 struct spi_message buf_read_msg; 139 143 /* Raw conversion data received. */ 140 144 u8 buf[ALIGN((AD4695_MAX_CHANNELS + 2) * AD4695_MAX_CHANNEL_SIZE, ··· 429 423 u8 temp_chan_bit = st->chip_info->num_voltage_inputs; 430 424 u32 bit, num_xfer, num_slots; 431 425 u32 temp_en = 0; 432 - int ret; 426 + int ret, rx_buf_offset = 0; 433 427 434 428 /* 435 429 * We are using the advanced sequencer since it is the only way to read ··· 455 449 iio_for_each_active_channel(indio_dev, bit) { 456 450 xfer = &st->buf_read_xfer[num_xfer]; 457 451 xfer->bits_per_word = 16; 458 - xfer->rx_buf = &st->buf[(num_xfer - 1) * 2]; 452 + xfer->rx_buf = &st->buf[rx_buf_offset]; 459 453 xfer->len = 2; 460 - xfer->cs_change = 1; 461 - xfer->cs_change_delay.value = AD4695_T_CONVERT_NS; 462 - xfer->cs_change_delay.unit = SPI_DELAY_UNIT_NSECS; 454 + rx_buf_offset += xfer->len; 463 455 464 456 if (bit == temp_chan_bit) { 465 457 temp_en = 1; ··· 472 468 } 473 469 474 470 num_xfer++; 471 + 472 + /* 473 + * We need to add a blank xfer in data reads, to meet the timing 474 + * requirement of a minimum delay between the last SCLK rising 475 + * edge and the CS deassert. 476 + */ 477 + xfer = &st->buf_read_xfer[num_xfer]; 478 + xfer->delay.value = AD4695_T_SCK_CNV_DELAY_NS; 479 + xfer->delay.unit = SPI_DELAY_UNIT_NSECS; 480 + xfer->cs_change = 1; 481 + xfer->cs_change_delay.value = AD4695_T_CONVERT_NS; 482 + xfer->cs_change_delay.unit = SPI_DELAY_UNIT_NSECS; 483 + 484 + num_xfer++; 475 485 } 476 486 477 487 /* 478 488 * The advanced sequencer requires that at least 2 slots are enabled. 479 489 * Since slot 0 is always used for other purposes, we need only 1 480 - * enabled voltage channel to meet this requirement. If the temperature 481 - * channel is the only enabled channel, we need to add one more slot 482 - * in the sequence but not read from it. 490 + * enabled voltage channel to meet this requirement. If the temperature 491 + * channel is the only enabled channel, we need to add one more slot in 492 + * the sequence but not read from it. This is because the temperature 493 + * sensor is sampled at the end of the channel sequence in advanced 494 + * sequencer mode (see datasheet page 38). 495 + * 496 + * From the iio_for_each_active_channel() block above, we now have an 497 + * xfer with data followed by a blank xfer to allow us to meet the 498 + * timing spec, so move both of those up before adding an extra to 499 + * handle the temperature-only case. 483 500 */ 484 501 if (num_slots < 2) { 485 - /* move last xfer so we can insert one more xfer before it */ 486 - st->buf_read_xfer[num_xfer] = *xfer; 502 + /* Move last two xfers */ 503 + st->buf_read_xfer[num_xfer] = st->buf_read_xfer[num_xfer - 1]; 504 + st->buf_read_xfer[num_xfer - 1] = st->buf_read_xfer[num_xfer - 2]; 487 505 num_xfer++; 488 506 489 - /* modify 2nd to last xfer for extra slot */ 507 + /* Modify inserted xfer for extra slot. */ 508 + xfer = &st->buf_read_xfer[num_xfer - 3]; 490 509 memset(xfer, 0, sizeof(*xfer)); 491 510 xfer->cs_change = 1; 492 511 xfer->delay.value = st->chip_info->t_acq_ns; ··· 526 499 return ret; 527 500 528 501 num_slots++; 502 + 503 + /* 504 + * We still want to point at the last xfer when finished, so 505 + * update the pointer. 506 + */ 507 + xfer = &st->buf_read_xfer[num_xfer - 1]; 529 508 } 530 509 531 510 /* ··· 616 583 */ 617 584 static int ad4695_read_one_sample(struct ad4695_state *st, unsigned int address) 618 585 { 619 - struct spi_transfer xfer[2] = { }; 620 - int ret, i = 0; 586 + struct spi_transfer xfers[2] = { 587 + { 588 + .speed_hz = AD4695_REG_ACCESS_SCLK_HZ, 589 + .bits_per_word = 16, 590 + .tx_buf = &st->cnv_cmd, 591 + .len = 2, 592 + }, 593 + { 594 + /* Required delay between last SCLK and CNV/CS */ 595 + .delay.value = AD4695_T_SCK_CNV_DELAY_NS, 596 + .delay.unit = SPI_DELAY_UNIT_NSECS, 597 + } 598 + }; 599 + int ret; 621 600 622 601 ret = ad4695_set_single_cycle_mode(st, address); 623 602 if (ret) ··· 637 592 638 593 /* 639 594 * Setting the first channel to the temperature channel isn't supported 640 - * in single-cycle mode, so we have to do an extra xfer to read the 641 - * temperature. 595 + * in single-cycle mode, so we have to do an extra conversion to read 596 + * the temperature. 642 597 */ 643 598 if (address == AD4695_CMD_TEMP_CHAN) { 644 - /* We aren't reading, so we can make this a short xfer. */ 645 - st->cnv_cmd2 = AD4695_CMD_TEMP_CHAN << 3; 646 - xfer[0].tx_buf = &st->cnv_cmd2; 647 - xfer[0].len = 1; 648 - xfer[0].cs_change = 1; 649 - xfer[0].cs_change_delay.value = AD4695_T_CONVERT_NS; 650 - xfer[0].cs_change_delay.unit = SPI_DELAY_UNIT_NSECS; 599 + st->cnv_cmd = AD4695_CMD_TEMP_CHAN << 11; 651 600 652 - i = 1; 601 + ret = spi_sync_transfer(st->spi, xfers, ARRAY_SIZE(xfers)); 602 + if (ret) 603 + return ret; 653 604 } 654 605 655 606 /* Then read the result and exit conversion mode. */ 656 607 st->cnv_cmd = AD4695_CMD_EXIT_CNV_MODE << 11; 657 - xfer[i].bits_per_word = 16; 658 - xfer[i].tx_buf = &st->cnv_cmd; 659 - xfer[i].rx_buf = &st->raw_data; 660 - xfer[i].len = 2; 608 + xfers[0].rx_buf = &st->raw_data; 661 609 662 - return spi_sync_transfer(st->spi, xfer, i + 1); 610 + return spi_sync_transfer(st->spi, xfers, ARRAY_SIZE(xfers)); 663 611 } 664 612 665 613 static int ad4695_read_raw(struct iio_dev *indio_dev,
+3
drivers/iio/adc/ad7124.c
··· 917 917 * set all channels to this default value. 918 918 */ 919 919 ad7124_set_channel_odr(st, i, 10); 920 + 921 + /* Disable all channels to prevent unintended conversions. */ 922 + ad_sd_write_reg(&st->sd, AD7124_CHANNEL(i), 2, 0); 920 923 } 921 924 922 925 ret = ad_sd_write_reg(&st->sd, AD7124_ADC_CONTROL, 2, st->adc_control);
+6 -4
drivers/iio/adc/ad7173.c
··· 200 200 201 201 struct ad7173_state { 202 202 struct ad_sigma_delta sd; 203 + struct ad_sigma_delta_info sigma_delta_info; 203 204 const struct ad7173_device_info *info; 204 205 struct ad7173_channel *channels; 205 206 struct regulator_bulk_data regulators[3]; ··· 754 753 return ad_sd_write_reg(sd, AD7173_REG_CH(chan), 2, 0); 755 754 } 756 755 757 - static struct ad_sigma_delta_info ad7173_sigma_delta_info = { 756 + static const struct ad_sigma_delta_info ad7173_sigma_delta_info = { 758 757 .set_channel = ad7173_set_channel, 759 758 .append_status = ad7173_append_status, 760 759 .disable_all = ad7173_disable_all, ··· 1404 1403 if (ret < 0) 1405 1404 return dev_err_probe(dev, ret, "Interrupt 'rdy' is required\n"); 1406 1405 1407 - ad7173_sigma_delta_info.irq_line = ret; 1406 + st->sigma_delta_info.irq_line = ret; 1408 1407 1409 1408 return ad7173_fw_parse_channel_config(indio_dev); 1410 1409 } ··· 1437 1436 spi->mode = SPI_MODE_3; 1438 1437 spi_setup(spi); 1439 1438 1440 - ad7173_sigma_delta_info.num_slots = st->info->num_configs; 1441 - ret = ad_sd_init(&st->sd, indio_dev, spi, &ad7173_sigma_delta_info); 1439 + st->sigma_delta_info = ad7173_sigma_delta_info; 1440 + st->sigma_delta_info.num_slots = st->info->num_configs; 1441 + ret = ad_sd_init(&st->sd, indio_dev, spi, &st->sigma_delta_info); 1442 1442 if (ret) 1443 1443 return ret; 1444 1444
+12 -3
drivers/iio/adc/ad9467.c
··· 895 895 return 0; 896 896 } 897 897 898 - static struct iio_info ad9467_info = { 898 + static const struct iio_info ad9467_info = { 899 899 .read_raw = ad9467_read_raw, 900 900 .write_raw = ad9467_write_raw, 901 901 .update_scan_mode = ad9467_update_scan_mode, 902 902 .debugfs_reg_access = ad9467_reg_access, 903 903 .read_avail = ad9467_read_avail, 904 + }; 905 + 906 + /* Same as above, but without .read_avail */ 907 + static const struct iio_info ad9467_info_no_read_avail = { 908 + .read_raw = ad9467_read_raw, 909 + .write_raw = ad9467_write_raw, 910 + .update_scan_mode = ad9467_update_scan_mode, 911 + .debugfs_reg_access = ad9467_reg_access, 904 912 }; 905 913 906 914 static int ad9467_scale_fill(struct ad9467_state *st) ··· 1222 1214 } 1223 1215 1224 1216 if (st->info->num_scales > 1) 1225 - ad9467_info.read_avail = ad9467_read_avail; 1217 + indio_dev->info = &ad9467_info; 1218 + else 1219 + indio_dev->info = &ad9467_info_no_read_avail; 1226 1220 indio_dev->name = st->info->name; 1227 1221 indio_dev->channels = st->info->channels; 1228 1222 indio_dev->num_channels = st->info->num_channels; 1229 - indio_dev->info = &ad9467_info; 1230 1223 1231 1224 ret = ad9467_iio_backend_get(st); 1232 1225 if (ret)
+1 -1
drivers/iio/adc/at91_adc.c
··· 979 979 return ret; 980 980 981 981 err: 982 - input_free_device(st->ts_input); 982 + input_free_device(input); 983 983 return ret; 984 984 } 985 985
+2
drivers/iio/adc/rockchip_saradc.c
··· 368 368 int ret; 369 369 int i, j = 0; 370 370 371 + memset(&data, 0, sizeof(data)); 372 + 371 373 mutex_lock(&info->lock); 372 374 373 375 iio_for_each_active_channel(i_dev, i) {
+8 -5
drivers/iio/adc/stm32-dfsdm-adc.c
··· 691 691 return -EINVAL; 692 692 } 693 693 694 - ret = fwnode_property_read_string(node, "label", &ch->datasheet_name); 695 - if (ret < 0) { 696 - dev_err(&indio_dev->dev, 697 - " Error parsing 'label' for idx %d\n", ch->channel); 698 - return ret; 694 + if (fwnode_property_present(node, "label")) { 695 + /* label is optional */ 696 + ret = fwnode_property_read_string(node, "label", &ch->datasheet_name); 697 + if (ret < 0) { 698 + dev_err(&indio_dev->dev, 699 + " Error parsing 'label' for idx %d\n", ch->channel); 700 + return ret; 701 + } 699 702 } 700 703 701 704 df_ch = &dfsdm->ch_list[ch->channel];
+3 -1
drivers/iio/adc/ti-ads1119.c
··· 500 500 struct iio_dev *indio_dev = pf->indio_dev; 501 501 struct ads1119_state *st = iio_priv(indio_dev); 502 502 struct { 503 - unsigned int sample; 503 + s16 sample; 504 504 s64 timestamp __aligned(8); 505 505 } scan; 506 506 unsigned int index; 507 507 int ret; 508 + 509 + memset(&scan, 0, sizeof(scan)); 508 510 509 511 if (!iio_trigger_using_own(indio_dev)) { 510 512 index = find_first_bit(indio_dev->active_scan_mask,
+2 -2
drivers/iio/adc/ti-ads124s08.c
··· 183 183 struct ads124s_private *priv = iio_priv(indio_dev); 184 184 185 185 if (priv->reset_gpio) { 186 - gpiod_set_value(priv->reset_gpio, 0); 186 + gpiod_set_value_cansleep(priv->reset_gpio, 0); 187 187 udelay(200); 188 - gpiod_set_value(priv->reset_gpio, 1); 188 + gpiod_set_value_cansleep(priv->reset_gpio, 1); 189 189 } else { 190 190 return ads124s_write_cmd(indio_dev, ADS124S08_CMD_RESET); 191 191 }
+2
drivers/iio/adc/ti-ads1298.c
··· 613 613 } 614 614 indio_dev->name = devm_kasprintf(dev, GFP_KERNEL, "ads129%u%s", 615 615 indio_dev->num_channels, suffix); 616 + if (!indio_dev->name) 617 + return -ENOMEM; 616 618 617 619 /* Enable internal test signal, double amplitude, double frequency */ 618 620 ret = regmap_write(priv->regmap, ADS1298_REG_CONFIG2,
+1 -1
drivers/iio/adc/ti-ads8688.c
··· 381 381 struct iio_poll_func *pf = p; 382 382 struct iio_dev *indio_dev = pf->indio_dev; 383 383 /* Ensure naturally aligned timestamp */ 384 - u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)] __aligned(8); 384 + u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)] __aligned(8) = { }; 385 385 int i, j = 0; 386 386 387 387 iio_for_each_active_channel(indio_dev, i) {
+1 -1
drivers/iio/dummy/iio_simple_dummy_buffer.c
··· 48 48 int i = 0, j; 49 49 u16 *data; 50 50 51 - data = kmalloc(indio_dev->scan_bytes, GFP_KERNEL); 51 + data = kzalloc(indio_dev->scan_bytes, GFP_KERNEL); 52 52 if (!data) 53 53 goto done; 54 54
+9 -2
drivers/iio/gyro/fxas21002c_core.c
··· 730 730 int ret; 731 731 732 732 mutex_lock(&data->lock); 733 - ret = regmap_bulk_read(data->regmap, FXAS21002C_REG_OUT_X_MSB, 734 - data->buffer, CHANNEL_SCAN_MAX * sizeof(s16)); 733 + ret = fxas21002c_pm_get(data); 735 734 if (ret < 0) 736 735 goto out_unlock; 737 736 737 + ret = regmap_bulk_read(data->regmap, FXAS21002C_REG_OUT_X_MSB, 738 + data->buffer, CHANNEL_SCAN_MAX * sizeof(s16)); 739 + if (ret < 0) 740 + goto out_pm_put; 741 + 738 742 iio_push_to_buffers_with_timestamp(indio_dev, data->buffer, 739 743 data->timestamp); 744 + 745 + out_pm_put: 746 + fxas21002c_pm_put(data); 740 747 741 748 out_unlock: 742 749 mutex_unlock(&data->lock);
+1
drivers/iio/imu/inv_icm42600/inv_icm42600.h
··· 403 403 typedef int (*inv_icm42600_bus_setup)(struct inv_icm42600_state *); 404 404 405 405 extern const struct regmap_config inv_icm42600_regmap_config; 406 + extern const struct regmap_config inv_icm42600_spi_regmap_config; 406 407 extern const struct dev_pm_ops inv_icm42600_pm_ops; 407 408 408 409 const struct iio_mount_matrix *
+21 -1
drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
··· 87 87 }; 88 88 EXPORT_SYMBOL_NS_GPL(inv_icm42600_regmap_config, "IIO_ICM42600"); 89 89 90 + /* define specific regmap for SPI not supporting burst write */ 91 + const struct regmap_config inv_icm42600_spi_regmap_config = { 92 + .name = "inv_icm42600", 93 + .reg_bits = 8, 94 + .val_bits = 8, 95 + .max_register = 0x4FFF, 96 + .ranges = inv_icm42600_regmap_ranges, 97 + .num_ranges = ARRAY_SIZE(inv_icm42600_regmap_ranges), 98 + .volatile_table = inv_icm42600_regmap_volatile_accesses, 99 + .rd_noinc_table = inv_icm42600_regmap_rd_noinc_accesses, 100 + .cache_type = REGCACHE_RBTREE, 101 + .use_single_write = true, 102 + }; 103 + EXPORT_SYMBOL_NS_GPL(inv_icm42600_spi_regmap_config, "IIO_ICM42600"); 104 + 90 105 struct inv_icm42600_hw { 91 106 uint8_t whoami; 92 107 const char *name; ··· 829 814 static int inv_icm42600_resume(struct device *dev) 830 815 { 831 816 struct inv_icm42600_state *st = dev_get_drvdata(dev); 817 + struct inv_icm42600_sensor_state *gyro_st = iio_priv(st->indio_gyro); 818 + struct inv_icm42600_sensor_state *accel_st = iio_priv(st->indio_accel); 832 819 int ret; 833 820 834 821 mutex_lock(&st->lock); ··· 851 834 goto out_unlock; 852 835 853 836 /* restore FIFO data streaming */ 854 - if (st->fifo.on) 837 + if (st->fifo.on) { 838 + inv_sensors_timestamp_reset(&gyro_st->ts); 839 + inv_sensors_timestamp_reset(&accel_st->ts); 855 840 ret = regmap_write(st->map, INV_ICM42600_REG_FIFO_CONFIG, 856 841 INV_ICM42600_FIFO_CONFIG_STREAM); 842 + } 857 843 858 844 out_unlock: 859 845 mutex_unlock(&st->lock);
+2 -1
drivers/iio/imu/inv_icm42600/inv_icm42600_spi.c
··· 59 59 return -EINVAL; 60 60 chip = (uintptr_t)match; 61 61 62 - regmap = devm_regmap_init_spi(spi, &inv_icm42600_regmap_config); 62 + /* use SPI specific regmap */ 63 + regmap = devm_regmap_init_spi(spi, &inv_icm42600_spi_regmap_config); 63 64 if (IS_ERR(regmap)) 64 65 return PTR_ERR(regmap); 65 66
+1 -1
drivers/iio/imu/kmx61.c
··· 1193 1193 struct kmx61_data *data = kmx61_get_data(indio_dev); 1194 1194 int bit, ret, i = 0; 1195 1195 u8 base; 1196 - s16 buffer[8]; 1196 + s16 buffer[8] = { }; 1197 1197 1198 1198 if (indio_dev == data->acc_indio_dev) 1199 1199 base = KMX61_ACC_XOUT_L;
+1 -1
drivers/iio/inkern.c
··· 500 500 return_ptr(chans); 501 501 502 502 error_free_chans: 503 - for (i = 0; i < nummaps; i++) 503 + for (i = 0; i < mapind; i++) 504 504 iio_device_put(chans[i].indio_dev); 505 505 return ERR_PTR(ret); 506 506 }
+2
drivers/iio/light/bh1745.c
··· 746 746 int i; 747 747 int j = 0; 748 748 749 + memset(&scan, 0, sizeof(scan)); 750 + 749 751 iio_for_each_active_channel(indio_dev, i) { 750 752 ret = regmap_bulk_read(data->regmap, BH1745_RED_LSB + 2 * i, 751 753 &value, 2);
+1 -1
drivers/iio/light/vcnl4035.c
··· 105 105 struct iio_dev *indio_dev = pf->indio_dev; 106 106 struct vcnl4035_data *data = iio_priv(indio_dev); 107 107 /* Ensure naturally aligned timestamp */ 108 - u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)] __aligned(8); 108 + u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)] __aligned(8) = { }; 109 109 int ret; 110 110 111 111 ret = regmap_read(data->regmap, VCNL4035_ALS_DATA, (int *)buffer);
+2
drivers/iio/pressure/zpa2326.c
··· 586 586 } sample; 587 587 int err; 588 588 589 + memset(&sample, 0, sizeof(sample)); 590 + 589 591 if (test_bit(0, indio_dev->active_scan_mask)) { 590 592 /* Get current pressure from hardware FIFO. */ 591 593 err = zpa2326_dequeue_pressure(indio_dev, &sample.pressure);
+2
drivers/iio/temperature/tmp006.c
··· 252 252 } scan; 253 253 s32 ret; 254 254 255 + memset(&scan, 0, sizeof(scan)); 256 + 255 257 ret = i2c_smbus_read_word_data(data->client, TMP006_VOBJECT); 256 258 if (ret < 0) 257 259 goto err;
+1 -1
drivers/iio/test/Kconfig
··· 5 5 6 6 # Keep in alphabetical order 7 7 config IIO_GTS_KUNIT_TEST 8 - tristate "Test IIO formatting functions" if !KUNIT_ALL_TESTS 8 + tristate "Test IIO gain-time-scale helpers" if !KUNIT_ALL_TESTS 9 9 depends on KUNIT 10 10 select IIO_GTS_HELPER 11 11 select TEST_KUNIT_DEVICE_HELPERS
+4
drivers/iio/test/iio-test-rescale.c
··· 652 652 int rel_ppm; 653 653 int ret; 654 654 655 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buff); 656 + 655 657 rescale.numerator = t->numerator; 656 658 rescale.denominator = t->denominator; 657 659 rescale.offset = t->offset; ··· 682 680 struct rescale rescale; 683 681 int values[2]; 684 682 int ret; 683 + 684 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buff_off); 685 685 686 686 rescale.numerator = t->numerator; 687 687 rescale.denominator = t->denominator;
+16
drivers/infiniband/core/cma.c
··· 690 690 int bound_if_index = dev_addr->bound_dev_if; 691 691 int dev_type = dev_addr->dev_type; 692 692 struct net_device *ndev = NULL; 693 + struct net_device *pdev = NULL; 693 694 694 695 if (!rdma_dev_access_netns(device, id_priv->id.route.addr.dev_addr.net)) 695 696 goto out; ··· 715 714 716 715 rcu_read_lock(); 717 716 ndev = rcu_dereference(sgid_attr->ndev); 717 + if (ndev->ifindex != bound_if_index) { 718 + pdev = dev_get_by_index_rcu(dev_addr->net, bound_if_index); 719 + if (pdev) { 720 + if (is_vlan_dev(pdev)) { 721 + pdev = vlan_dev_real_dev(pdev); 722 + if (ndev->ifindex == pdev->ifindex) 723 + bound_if_index = pdev->ifindex; 724 + } 725 + if (is_vlan_dev(ndev)) { 726 + pdev = vlan_dev_real_dev(ndev); 727 + if (bound_if_index == pdev->ifindex) 728 + bound_if_index = ndev->ifindex; 729 + } 730 + } 731 + } 718 732 if (!net_eq(dev_net(ndev), dev_addr->net) || 719 733 ndev->ifindex != bound_if_index) { 720 734 rdma_put_gid_attr(sgid_attr);
+1 -1
drivers/infiniband/core/nldev.c
··· 2833 2833 enum rdma_nl_notify_event_type type) 2834 2834 { 2835 2835 struct sk_buff *skb; 2836 + int ret = -EMSGSIZE; 2836 2837 struct net *net; 2837 - int ret = 0; 2838 2838 void *nlh; 2839 2839 2840 2840 net = read_pnet(&device->coredev.rdma_net);
+9 -7
drivers/infiniband/core/uverbs_cmd.c
··· 161 161 { 162 162 const void __user *res = iter->cur; 163 163 164 - if (iter->cur + len > iter->end) 164 + if (len > iter->end - iter->cur) 165 165 return (void __force __user *)ERR_PTR(-ENOSPC); 166 166 iter->cur += len; 167 167 return res; ··· 2008 2008 ret = uverbs_request_start(attrs, &iter, &cmd, sizeof(cmd)); 2009 2009 if (ret) 2010 2010 return ret; 2011 - wqes = uverbs_request_next_ptr(&iter, cmd.wqe_size * cmd.wr_count); 2011 + wqes = uverbs_request_next_ptr(&iter, size_mul(cmd.wqe_size, 2012 + cmd.wr_count)); 2012 2013 if (IS_ERR(wqes)) 2013 2014 return PTR_ERR(wqes); 2014 - sgls = uverbs_request_next_ptr( 2015 - &iter, cmd.sge_count * sizeof(struct ib_uverbs_sge)); 2015 + sgls = uverbs_request_next_ptr(&iter, 2016 + size_mul(cmd.sge_count, 2017 + sizeof(struct ib_uverbs_sge))); 2016 2018 if (IS_ERR(sgls)) 2017 2019 return PTR_ERR(sgls); 2018 2020 ret = uverbs_request_finish(&iter); ··· 2200 2198 if (wqe_size < sizeof(struct ib_uverbs_recv_wr)) 2201 2199 return ERR_PTR(-EINVAL); 2202 2200 2203 - wqes = uverbs_request_next_ptr(iter, wqe_size * wr_count); 2201 + wqes = uverbs_request_next_ptr(iter, size_mul(wqe_size, wr_count)); 2204 2202 if (IS_ERR(wqes)) 2205 2203 return ERR_CAST(wqes); 2206 - sgls = uverbs_request_next_ptr( 2207 - iter, sge_count * sizeof(struct ib_uverbs_sge)); 2204 + sgls = uverbs_request_next_ptr(iter, size_mul(sge_count, 2205 + sizeof(struct ib_uverbs_sge))); 2208 2206 if (IS_ERR(sgls)) 2209 2207 return ERR_CAST(sgls); 2210 2208 ret = uverbs_request_finish(iter);
+25 -25
drivers/infiniband/hw/bnxt_re/ib_verbs.c
··· 199 199 200 200 ib_attr->vendor_id = rdev->en_dev->pdev->vendor; 201 201 ib_attr->vendor_part_id = rdev->en_dev->pdev->device; 202 - ib_attr->hw_ver = rdev->en_dev->pdev->subsystem_device; 202 + ib_attr->hw_ver = rdev->en_dev->pdev->revision; 203 203 ib_attr->max_qp = dev_attr->max_qp; 204 204 ib_attr->max_qp_wr = dev_attr->max_qp_wqes; 205 205 ib_attr->device_cap_flags = ··· 967 967 unsigned int flags; 968 968 int rc; 969 969 970 + bnxt_re_debug_rem_qpinfo(rdev, qp); 971 + 970 972 bnxt_qplib_flush_cqn_wq(&qp->qplib_qp); 971 973 972 974 rc = bnxt_qplib_destroy_qp(&rdev->qplib_res, &qp->qplib_qp); 973 - if (rc) { 975 + if (rc) 974 976 ibdev_err(&rdev->ibdev, "Failed to destroy HW QP"); 975 - return rc; 976 - } 977 977 978 978 if (rdma_is_kernel_res(&qp->ib_qp.res)) { 979 979 flags = bnxt_re_lock_cqs(qp); ··· 983 983 984 984 bnxt_qplib_free_qp_res(&rdev->qplib_res, &qp->qplib_qp); 985 985 986 - if (ib_qp->qp_type == IB_QPT_GSI && rdev->gsi_ctx.gsi_sqp) { 987 - rc = bnxt_re_destroy_gsi_sqp(qp); 988 - if (rc) 989 - return rc; 990 - } 986 + if (ib_qp->qp_type == IB_QPT_GSI && rdev->gsi_ctx.gsi_sqp) 987 + bnxt_re_destroy_gsi_sqp(qp); 991 988 992 989 mutex_lock(&rdev->qp_lock); 993 990 list_del(&qp->list); ··· 994 997 atomic_dec(&rdev->stats.res.rc_qp_count); 995 998 else if (qp->qplib_qp.type == CMDQ_CREATE_QP_TYPE_UD) 996 999 atomic_dec(&rdev->stats.res.ud_qp_count); 997 - 998 - bnxt_re_debug_rem_qpinfo(rdev, qp); 999 1000 1000 1001 ib_umem_release(qp->rumem); 1001 1002 ib_umem_release(qp->sumem); ··· 2162 2167 } 2163 2168 } 2164 2169 2165 - if (qp_attr_mask & IB_QP_PATH_MTU) { 2166 - qp->qplib_qp.modify_flags |= 2167 - CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU; 2168 - qp->qplib_qp.path_mtu = __from_ib_mtu(qp_attr->path_mtu); 2169 - qp->qplib_qp.mtu = ib_mtu_enum_to_int(qp_attr->path_mtu); 2170 - } else if (qp_attr->qp_state == IB_QPS_RTR) { 2171 - qp->qplib_qp.modify_flags |= 2172 - CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU; 2173 - qp->qplib_qp.path_mtu = 2174 - 
__from_ib_mtu(iboe_get_mtu(rdev->netdev->mtu)); 2175 - qp->qplib_qp.mtu = 2176 - ib_mtu_enum_to_int(iboe_get_mtu(rdev->netdev->mtu)); 2170 + if (qp_attr->qp_state == IB_QPS_RTR) { 2171 + enum ib_mtu qpmtu; 2172 + 2173 + qpmtu = iboe_get_mtu(rdev->netdev->mtu); 2174 + if (qp_attr_mask & IB_QP_PATH_MTU) { 2175 + if (ib_mtu_enum_to_int(qp_attr->path_mtu) > 2176 + ib_mtu_enum_to_int(qpmtu)) 2177 + return -EINVAL; 2178 + qpmtu = qp_attr->path_mtu; 2179 + } 2180 + 2181 + qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU; 2182 + qp->qplib_qp.path_mtu = __from_ib_mtu(qpmtu); 2183 + qp->qplib_qp.mtu = ib_mtu_enum_to_int(qpmtu); 2177 2184 } 2178 2185 2179 2186 if (qp_attr_mask & IB_QP_TIMEOUT) { ··· 2325 2328 qp_attr->retry_cnt = qplib_qp->retry_cnt; 2326 2329 qp_attr->rnr_retry = qplib_qp->rnr_retry; 2327 2330 qp_attr->min_rnr_timer = qplib_qp->min_rnr_timer; 2331 + qp_attr->port_num = __to_ib_port_num(qplib_qp->port_id); 2328 2332 qp_attr->rq_psn = qplib_qp->rq.psn; 2329 2333 qp_attr->max_rd_atomic = qplib_qp->max_rd_atomic; 2330 2334 qp_attr->sq_psn = qplib_qp->sq.psn; ··· 2822 2824 wr = wr->next; 2823 2825 } 2824 2826 bnxt_qplib_post_send_db(&qp->qplib_qp); 2825 - bnxt_ud_qp_hw_stall_workaround(qp); 2827 + if (!bnxt_qplib_is_chip_gen_p5_p7(qp->rdev->chip_ctx)) 2828 + bnxt_ud_qp_hw_stall_workaround(qp); 2826 2829 spin_unlock_irqrestore(&qp->sq_lock, flags); 2827 2830 return rc; 2828 2831 } ··· 2935 2936 wr = wr->next; 2936 2937 } 2937 2938 bnxt_qplib_post_send_db(&qp->qplib_qp); 2938 - bnxt_ud_qp_hw_stall_workaround(qp); 2939 + if (!bnxt_qplib_is_chip_gen_p5_p7(qp->rdev->chip_ctx)) 2940 + bnxt_ud_qp_hw_stall_workaround(qp); 2939 2941 spin_unlock_irqrestore(&qp->sq_lock, flags); 2940 2942 2941 2943 return rc;
+4
drivers/infiniband/hw/bnxt_re/ib_verbs.h
··· 268 268 int bnxt_re_mmap(struct ib_ucontext *context, struct vm_area_struct *vma); 269 269 void bnxt_re_mmap_free(struct rdma_user_mmap_entry *rdma_entry); 270 270 271 + static inline u32 __to_ib_port_num(u16 port_id) 272 + { 273 + return (u32)port_id + 1; 274 + } 271 275 272 276 unsigned long bnxt_re_lock_cqs(struct bnxt_re_qp *qp); 273 277 void bnxt_re_unlock_cqs(struct bnxt_re_qp *qp, unsigned long flags);
+1 -7
drivers/infiniband/hw/bnxt_re/main.c
··· 1715 1715 1716 1716 static void bnxt_re_dev_stop(struct bnxt_re_dev *rdev) 1717 1717 { 1718 - int mask = IB_QP_STATE; 1719 - struct ib_qp_attr qp_attr; 1720 1718 struct bnxt_re_qp *qp; 1721 1719 1722 - qp_attr.qp_state = IB_QPS_ERR; 1723 1720 mutex_lock(&rdev->qp_lock); 1724 1721 list_for_each_entry(qp, &rdev->qp_list, list) { 1725 1722 /* Modify the state of all QPs except QP1/Shadow QP */ ··· 1724 1727 if (qp->qplib_qp.state != 1725 1728 CMDQ_MODIFY_QP_NEW_STATE_RESET && 1726 1729 qp->qplib_qp.state != 1727 - CMDQ_MODIFY_QP_NEW_STATE_ERR) { 1730 + CMDQ_MODIFY_QP_NEW_STATE_ERR) 1728 1731 bnxt_re_dispatch_event(&rdev->ibdev, &qp->ib_qp, 1729 1732 1, IB_EVENT_QP_FATAL); 1730 - bnxt_re_modify_qp(&qp->ib_qp, &qp_attr, mask, 1731 - NULL); 1732 - } 1733 1733 } 1734 1734 } 1735 1735 mutex_unlock(&rdev->qp_lock);
+50 -29
drivers/infiniband/hw/bnxt_re/qplib_fp.c
··· 659 659 rc = bnxt_qplib_alloc_init_hwq(&srq->hwq, &hwq_attr); 660 660 if (rc) 661 661 return rc; 662 - 663 - srq->swq = kcalloc(srq->hwq.max_elements, sizeof(*srq->swq), 664 - GFP_KERNEL); 665 - if (!srq->swq) { 666 - rc = -ENOMEM; 667 - goto fail; 668 - } 669 662 srq->dbinfo.flags = 0; 670 663 bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req, 671 664 CMDQ_BASE_OPCODE_CREATE_SRQ, ··· 687 694 spin_lock_init(&srq->lock); 688 695 srq->start_idx = 0; 689 696 srq->last_idx = srq->hwq.max_elements - 1; 690 - for (idx = 0; idx < srq->hwq.max_elements; idx++) 691 - srq->swq[idx].next_idx = idx + 1; 692 - srq->swq[srq->last_idx].next_idx = -1; 697 + if (!srq->hwq.is_user) { 698 + srq->swq = kcalloc(srq->hwq.max_elements, sizeof(*srq->swq), 699 + GFP_KERNEL); 700 + if (!srq->swq) { 701 + rc = -ENOMEM; 702 + goto fail; 703 + } 704 + for (idx = 0; idx < srq->hwq.max_elements; idx++) 705 + srq->swq[idx].next_idx = idx + 1; 706 + srq->swq[srq->last_idx].next_idx = -1; 707 + } 693 708 694 709 srq->id = le32_to_cpu(resp.xid); 695 710 srq->dbinfo.hwq = &srq->hwq; ··· 1001 1000 u32 tbl_indx; 1002 1001 u16 nsge; 1003 1002 1004 - if (res->dattr) 1005 - qp->is_host_msn_tbl = _is_host_msn_table(res->dattr->dev_cap_flags2); 1006 - 1003 + qp->is_host_msn_tbl = _is_host_msn_table(res->dattr->dev_cap_flags2); 1007 1004 sq->dbinfo.flags = 0; 1008 1005 bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req, 1009 1006 CMDQ_BASE_OPCODE_CREATE_QP, ··· 1033 1034 : 0; 1034 1035 /* Update msn tbl size */ 1035 1036 if (qp->is_host_msn_tbl && psn_sz) { 1036 - hwq_attr.aux_depth = roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode)); 1037 + if (qp->wqe_mode == BNXT_QPLIB_WQE_MODE_STATIC) 1038 + hwq_attr.aux_depth = 1039 + roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode)); 1040 + else 1041 + hwq_attr.aux_depth = 1042 + roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode)) / 2; 1037 1043 qp->msn_tbl_sz = hwq_attr.aux_depth; 1038 1044 qp->msn = 0; 1039 1045 } ··· 1048 1044 if 
(rc) 1049 1045 return rc; 1050 1046 1051 - rc = bnxt_qplib_alloc_init_swq(sq); 1052 - if (rc) 1053 - goto fail_sq; 1047 + if (!sq->hwq.is_user) { 1048 + rc = bnxt_qplib_alloc_init_swq(sq); 1049 + if (rc) 1050 + goto fail_sq; 1054 1051 1055 - if (psn_sz) 1056 - bnxt_qplib_init_psn_ptr(qp, psn_sz); 1057 - 1052 + if (psn_sz) 1053 + bnxt_qplib_init_psn_ptr(qp, psn_sz); 1054 + } 1058 1055 req.sq_size = cpu_to_le32(bnxt_qplib_set_sq_size(sq, qp->wqe_mode)); 1059 1056 pbl = &sq->hwq.pbl[PBL_LVL_0]; 1060 1057 req.sq_pbl = cpu_to_le64(pbl->pg_map_arr[0]); ··· 1081 1076 rc = bnxt_qplib_alloc_init_hwq(&rq->hwq, &hwq_attr); 1082 1077 if (rc) 1083 1078 goto sq_swq; 1084 - rc = bnxt_qplib_alloc_init_swq(rq); 1085 - if (rc) 1086 - goto fail_rq; 1079 + if (!rq->hwq.is_user) { 1080 + rc = bnxt_qplib_alloc_init_swq(rq); 1081 + if (rc) 1082 + goto fail_rq; 1083 + } 1087 1084 1088 1085 req.rq_size = cpu_to_le32(rq->max_wqe); 1089 1086 pbl = &rq->hwq.pbl[PBL_LVL_0]; ··· 1181 1174 rq->dbinfo.db = qp->dpi->dbr; 1182 1175 rq->dbinfo.max_slot = bnxt_qplib_set_rq_max_slot(rq->wqe_size); 1183 1176 } 1177 + spin_lock_bh(&rcfw->tbl_lock); 1184 1178 tbl_indx = map_qp_id_to_tbl_indx(qp->id, rcfw); 1185 1179 rcfw->qp_tbl[tbl_indx].qp_id = qp->id; 1186 1180 rcfw->qp_tbl[tbl_indx].qp_handle = (void *)qp; 1181 + spin_unlock_bh(&rcfw->tbl_lock); 1187 1182 1188 1183 return 0; 1189 1184 fail: ··· 1292 1283 } 1293 1284 } 1294 1285 1295 - static void bnxt_set_mandatory_attributes(struct bnxt_qplib_qp *qp, 1286 + static void bnxt_set_mandatory_attributes(struct bnxt_qplib_res *res, 1287 + struct bnxt_qplib_qp *qp, 1296 1288 struct cmdq_modify_qp *req) 1297 1289 { 1298 1290 u32 mandatory_flags = 0; ··· 1306 1296 if (qp->type == CMDQ_MODIFY_QP_QP_TYPE_RC && qp->srq) 1307 1297 req->flags = cpu_to_le16(CMDQ_MODIFY_QP_FLAGS_SRQ_USED); 1308 1298 mandatory_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_PKEY; 1299 + } 1300 + 1301 + if (_is_min_rnr_in_rtr_rts_mandatory(res->dattr->dev_cap_flags2) && 1302 + (qp->cur_qp_state 
== CMDQ_MODIFY_QP_NEW_STATE_RTR && 1303 + qp->state == CMDQ_MODIFY_QP_NEW_STATE_RTS)) { 1304 + if (qp->type == CMDQ_MODIFY_QP_QP_TYPE_RC) 1305 + mandatory_flags |= 1306 + CMDQ_MODIFY_QP_MODIFY_MASK_MIN_RNR_TIMER; 1309 1307 } 1310 1308 1311 1309 if (qp->type == CMDQ_MODIFY_QP_QP_TYPE_UD || ··· 1356 1338 /* Set mandatory attributes for INIT -> RTR and RTR -> RTS transition */ 1357 1339 if (_is_optimize_modify_qp_supported(res->dattr->dev_cap_flags2) && 1358 1340 is_optimized_state_transition(qp)) 1359 - bnxt_set_mandatory_attributes(qp, &req); 1341 + bnxt_set_mandatory_attributes(res, qp, &req); 1360 1342 } 1361 1343 bmask = qp->modify_flags; 1362 1344 req.modify_mask = cpu_to_le32(qp->modify_flags); ··· 1539 1521 qp->dest_qpn = le32_to_cpu(sb->dest_qp_id); 1540 1522 memcpy(qp->smac, sb->src_mac, 6); 1541 1523 qp->vlan_id = le16_to_cpu(sb->vlan_pcp_vlan_dei_vlan_id); 1524 + qp->port_id = le16_to_cpu(sb->port_id); 1542 1525 bail: 1543 1526 dma_free_coherent(&rcfw->pdev->dev, sbuf.size, 1544 1527 sbuf.sb, sbuf.dma_addr); ··· 2686 2667 bnxt_qplib_add_flush_qp(qp); 2687 2668 } else { 2688 2669 /* Before we complete, do WA 9060 */ 2689 - if (do_wa9060(qp, cq, cq_cons, sq->swq_last, 2690 - cqe_sq_cons)) { 2691 - *lib_qp = qp; 2692 - goto out; 2670 + if (!bnxt_qplib_is_chip_gen_p5_p7(qp->cctx)) { 2671 + if (do_wa9060(qp, cq, cq_cons, sq->swq_last, 2672 + cqe_sq_cons)) { 2673 + *lib_qp = qp; 2674 + goto out; 2675 + } 2693 2676 } 2694 2677 if (swq->flags & SQ_SEND_FLAGS_SIGNAL_COMP) { 2695 2678 cqe->status = CQ_REQ_STATUS_OK;
+2 -2
drivers/infiniband/hw/bnxt_re/qplib_fp.h
··· 114 114 u32 size; 115 115 }; 116 116 117 - #define BNXT_QPLIB_QP_MAX_SGL 6 118 117 struct bnxt_qplib_swq { 119 118 u64 wr_id; 120 119 int next_idx; ··· 153 154 #define BNXT_QPLIB_SWQE_FLAGS_UC_FENCE BIT(2) 154 155 #define BNXT_QPLIB_SWQE_FLAGS_SOLICIT_EVENT BIT(3) 155 156 #define BNXT_QPLIB_SWQE_FLAGS_INLINE BIT(4) 156 - struct bnxt_qplib_sge sg_list[BNXT_QPLIB_QP_MAX_SGL]; 157 + struct bnxt_qplib_sge sg_list[BNXT_VAR_MAX_SGE]; 157 158 int num_sge; 158 159 /* Max inline data is 96 bytes */ 159 160 u32 inline_len; ··· 298 299 u32 dest_qpn; 299 300 u8 smac[6]; 300 301 u16 vlan_id; 302 + u16 port_id; 301 303 u8 nw_type; 302 304 struct bnxt_qplib_ah ah; 303 305
+3 -2
drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
··· 424 424 425 425 /* Prevent posting if f/w is not in a state to process */ 426 426 if (test_bit(ERR_DEVICE_DETACHED, &rcfw->cmdq.flags)) 427 - return bnxt_qplib_map_rc(opcode); 427 + return -ENXIO; 428 + 428 429 if (test_bit(FIRMWARE_STALL_DETECTED, &cmdq->flags)) 429 430 return -ETIMEDOUT; 430 431 ··· 494 493 495 494 rc = __send_message_basic_sanity(rcfw, msg, opcode); 496 495 if (rc) 497 - return rc; 496 + return rc == -ENXIO ? bnxt_qplib_map_rc(opcode) : rc; 498 497 499 498 rc = __send_message(rcfw, msg, opcode); 500 499 if (rc)
+5
drivers/infiniband/hw/bnxt_re/qplib_res.h
··· 584 584 return dev_cap_ext_flags2 & CREQ_QUERY_FUNC_RESP_SB_OPTIMIZE_MODIFY_QP_SUPPORTED; 585 585 } 586 586 587 + static inline bool _is_min_rnr_in_rtr_rts_mandatory(u16 dev_cap_ext_flags2) 588 + { 589 + return !!(dev_cap_ext_flags2 & CREQ_QUERY_FUNC_RESP_SB_MIN_RNR_RTR_RTS_OPT_SUPPORTED); 590 + } 591 + 587 592 static inline bool _is_cq_coalescing_supported(u16 dev_cap_ext_flags2) 588 593 { 589 594 return dev_cap_ext_flags2 & CREQ_QUERY_FUNC_RESP_SB_CQ_COALESCING_SUPPORTED;
+12 -6
drivers/infiniband/hw/bnxt_re/qplib_sp.c
··· 129 129 attr->max_qp_init_rd_atom = 130 130 sb->max_qp_init_rd_atom > BNXT_QPLIB_MAX_OUT_RD_ATOM ? 131 131 BNXT_QPLIB_MAX_OUT_RD_ATOM : sb->max_qp_init_rd_atom; 132 - attr->max_qp_wqes = le16_to_cpu(sb->max_qp_wr); 133 - /* 134 - * 128 WQEs needs to be reserved for the HW (8916). Prevent 135 - * reporting the max number 136 - */ 137 - attr->max_qp_wqes -= BNXT_QPLIB_RESERVED_QP_WRS + 1; 132 + attr->max_qp_wqes = le16_to_cpu(sb->max_qp_wr) - 1; 133 + if (!bnxt_qplib_is_chip_gen_p5_p7(rcfw->res->cctx)) { 134 + /* 135 + * 128 WQEs needs to be reserved for the HW (8916). Prevent 136 + * reporting the max number on legacy devices 137 + */ 138 + attr->max_qp_wqes -= BNXT_QPLIB_RESERVED_QP_WRS + 1; 139 + } 140 + 141 + /* Adjust for max_qp_wqes for variable wqe */ 142 + if (cctx->modes.wqe_mode == BNXT_QPLIB_WQE_MODE_VARIABLE) 143 + attr->max_qp_wqes = BNXT_VAR_MAX_WQE - 1; 138 144 139 145 attr->max_qp_sges = cctx->modes.wqe_mode == BNXT_QPLIB_WQE_MODE_VARIABLE ? 140 146 min_t(u32, sb->max_sge_var_wqe, BNXT_VAR_MAX_SGE) : 6;
+1
drivers/infiniband/hw/bnxt_re/roce_hsi.h
··· 2215 2215 #define CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_IQM_MSN_TABLE (0x2UL << 4) 2216 2216 #define CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_LAST \ 2217 2217 CREQ_QUERY_FUNC_RESP_SB_REQ_RETRANSMISSION_SUPPORT_IQM_MSN_TABLE 2218 + #define CREQ_QUERY_FUNC_RESP_SB_MIN_RNR_RTR_RTS_OPT_SUPPORTED 0x1000UL 2218 2219 __le16 max_xp_qp_size; 2219 2220 __le16 create_qp_batch_size; 2220 2221 __le16 destroy_qp_batch_size;
+29 -14
drivers/infiniband/hw/hns/hns_roce_hem.c
··· 931 931 size_t count; /* max ba numbers */ 932 932 int start; /* start buf offset in this hem */ 933 933 int end; /* end buf offset in this hem */ 934 + bool exist_bt; 934 935 }; 935 936 936 937 /* All HEM items are linked in a tree structure */ ··· 960 959 } 961 960 } 962 961 962 + hem->exist_bt = exist_bt; 963 963 hem->count = count; 964 964 hem->start = start; 965 965 hem->end = end; ··· 971 969 } 972 970 973 971 static void hem_list_free_item(struct hns_roce_dev *hr_dev, 974 - struct hns_roce_hem_item *hem, bool exist_bt) 972 + struct hns_roce_hem_item *hem) 975 973 { 976 - if (exist_bt) 974 + if (hem->exist_bt) 977 975 dma_free_coherent(hr_dev->dev, hem->count * BA_BYTE_LEN, 978 976 hem->addr, hem->dma_addr); 979 977 kfree(hem); 980 978 } 981 979 982 980 static void hem_list_free_all(struct hns_roce_dev *hr_dev, 983 - struct list_head *head, bool exist_bt) 981 + struct list_head *head) 984 982 { 985 983 struct hns_roce_hem_item *hem, *temp_hem; 986 984 987 985 list_for_each_entry_safe(hem, temp_hem, head, list) { 988 986 list_del(&hem->list); 989 - hem_list_free_item(hr_dev, hem, exist_bt); 987 + hem_list_free_item(hr_dev, hem); 990 988 } 991 989 } 992 990 ··· 1086 1084 1087 1085 for (i = 0; i < region_cnt; i++) { 1088 1086 r = (struct hns_roce_buf_region *)&regions[i]; 1087 + /* when r->hopnum = 0, the region should not occupy root_ba. 
*/ 1088 + if (!r->hopnum) 1089 + continue; 1090 + 1089 1091 if (r->hopnum > 1) { 1090 1092 step = hem_list_calc_ba_range(r->hopnum, 1, unit); 1091 1093 if (step > 0) ··· 1183 1177 1184 1178 err_exit: 1185 1179 for (level = 1; level < hopnum; level++) 1186 - hem_list_free_all(hr_dev, &temp_list[level], true); 1180 + hem_list_free_all(hr_dev, &temp_list[level]); 1187 1181 1188 1182 return ret; 1189 1183 } ··· 1224 1218 { 1225 1219 struct hns_roce_hem_item *hem; 1226 1220 1221 + /* This is on the has_mtt branch, if r->hopnum 1222 + * is 0, there is no root_ba to reuse for the 1223 + * region's fake hem, so a dma_alloc request is 1224 + * necessary here. 1225 + */ 1227 1226 hem = hem_list_alloc_item(hr_dev, r->offset, r->offset + r->count - 1, 1228 - r->count, false); 1227 + r->count, !r->hopnum); 1229 1228 if (!hem) 1230 1229 return -ENOMEM; 1231 1230 1232 - hem_list_assign_bt(hem, cpu_base, phy_base); 1231 + /* The root_ba can be reused only when r->hopnum > 0. */ 1232 + if (r->hopnum) 1233 + hem_list_assign_bt(hem, cpu_base, phy_base); 1233 1234 list_add(&hem->list, branch_head); 1234 1235 list_add(&hem->sibling, leaf_head); 1235 1236 1236 - return r->count; 1237 + /* If r->hopnum == 0, 0 is returned, 1238 + * so that the root_bt entry is not occupied. 1239 + */ 1240 + return r->hopnum ? 
r->count : 0; 1237 1241 } 1238 1242 1239 1243 static int setup_middle_bt(struct hns_roce_dev *hr_dev, void *cpu_base, ··· 1287 1271 return -ENOMEM; 1288 1272 1289 1273 total = 0; 1290 - for (i = 0; i < region_cnt && total < max_ba_num; i++) { 1274 + for (i = 0; i < region_cnt && total <= max_ba_num; i++) { 1291 1275 r = &regions[i]; 1292 1276 if (!r->count) 1293 1277 continue; ··· 1353 1337 region_cnt); 1354 1338 if (ret) { 1355 1339 for (i = 0; i < region_cnt; i++) 1356 - hem_list_free_all(hr_dev, &head.branch[i], false); 1340 + hem_list_free_all(hr_dev, &head.branch[i]); 1357 1341 1358 - hem_list_free_all(hr_dev, &head.root, true); 1342 + hem_list_free_all(hr_dev, &head.root); 1359 1343 } 1360 1344 1361 1345 return ret; ··· 1418 1402 1419 1403 for (i = 0; i < HNS_ROCE_MAX_BT_REGION; i++) 1420 1404 for (j = 0; j < HNS_ROCE_MAX_BT_LEVEL; j++) 1421 - hem_list_free_all(hr_dev, &hem_list->mid_bt[i][j], 1422 - j != 0); 1405 + hem_list_free_all(hr_dev, &hem_list->mid_bt[i][j]); 1423 1406 1424 - hem_list_free_all(hr_dev, &hem_list->root_bt, true); 1407 + hem_list_free_all(hr_dev, &hem_list->root_bt); 1425 1408 INIT_LIST_HEAD(&hem_list->btm_bt); 1426 1409 hem_list->root_ba = 0; 1427 1410 }
+9 -2
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 468 468 valid_num_sge = calc_wr_sge_num(wr, &msg_len); 469 469 470 470 ret = set_ud_opcode(ud_sq_wqe, wr); 471 - if (WARN_ON(ret)) 471 + if (WARN_ON_ONCE(ret)) 472 472 return ret; 473 473 474 474 ud_sq_wqe->msg_len = cpu_to_le32(msg_len); ··· 572 572 rc_sq_wqe->msg_len = cpu_to_le32(msg_len); 573 573 574 574 ret = set_rc_opcode(hr_dev, rc_sq_wqe, wr); 575 - if (WARN_ON(ret)) 575 + if (WARN_ON_ONCE(ret)) 576 576 return ret; 577 577 578 578 hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SO, ··· 670 670 #define HNS_ROCE_SL_SHIFT 2 671 671 struct hns_roce_v2_rc_send_wqe *rc_sq_wqe = wqe; 672 672 673 + if (unlikely(qp->state == IB_QPS_ERR)) { 674 + flush_cqe(hr_dev, qp); 675 + return; 676 + } 673 677 /* All kinds of DirectWQE have the same header field layout */ 674 678 hr_reg_enable(rc_sq_wqe, RC_SEND_WQE_FLAG); 675 679 hr_reg_write(rc_sq_wqe, RC_SEND_WQE_DB_SL_L, qp->sl); ··· 5622 5618 struct hns_roce_qp *hr_qp) 5623 5619 { 5624 5620 struct hns_roce_dip *hr_dip = hr_qp->dip; 5621 + 5622 + if (!hr_dip) 5623 + return; 5625 5624 5626 5625 xa_lock(&hr_dev->qp_table.dip_xa); 5627 5626
-5
drivers/infiniband/hw/hns/hns_roce_mr.c
··· 814 814 for (i = 0, mapped_cnt = 0; i < mtr->hem_cfg.region_count && 815 815 mapped_cnt < page_cnt; i++) { 816 816 r = &mtr->hem_cfg.region[i]; 817 - /* if hopnum is 0, no need to map pages in this region */ 818 - if (!r->hopnum) { 819 - mapped_cnt += r->count; 820 - continue; 821 - } 822 817 823 818 if (r->offset + r->count > page_cnt) { 824 819 ret = -EINVAL;
+5 -3
drivers/infiniband/hw/mlx5/main.c
··· 2839 2839 int err; 2840 2840 2841 2841 *num_plane = 0; 2842 - if (!MLX5_CAP_GEN(mdev, ib_virt)) 2842 + if (!MLX5_CAP_GEN(mdev, ib_virt) || !MLX5_CAP_GEN_2(mdev, multiplane)) 2843 2843 return 0; 2844 2844 2845 2845 err = mlx5_query_hca_vport_context(mdev, 0, 1, 0, &vport_ctx); ··· 3639 3639 list_for_each_entry(mpi, &mlx5_ib_unaffiliated_port_list, 3640 3640 list) { 3641 3641 if (dev->sys_image_guid == mpi->sys_image_guid && 3642 - (mlx5_core_native_port_num(mpi->mdev) - 1) == i) { 3642 + (mlx5_core_native_port_num(mpi->mdev) - 1) == i && 3643 + mlx5_core_same_coredev_type(dev->mdev, mpi->mdev)) { 3643 3644 bound = mlx5_ib_bind_slave_port(dev, mpi); 3644 3645 } 3645 3646 ··· 4786 4785 4787 4786 mutex_lock(&mlx5_ib_multiport_mutex); 4788 4787 list_for_each_entry(dev, &mlx5_ib_dev_list, ib_dev_list) { 4789 - if (dev->sys_image_guid == mpi->sys_image_guid) 4788 + if (dev->sys_image_guid == mpi->sys_image_guid && 4789 + mlx5_core_same_coredev_type(dev->mdev, mpi->mdev)) 4790 4790 bound = mlx5_ib_bind_slave_port(dev, mpi); 4791 4791 4792 4792 if (bound) {
+19 -4
drivers/infiniband/sw/rxe/rxe.c
··· 40 40 /* initialize rxe device parameters */ 41 41 static void rxe_init_device_param(struct rxe_dev *rxe) 42 42 { 43 + struct net_device *ndev; 44 + 43 45 rxe->max_inline_data = RXE_MAX_INLINE_DATA; 44 46 45 47 rxe->attr.vendor_id = RXE_VENDOR_ID; ··· 73 71 rxe->attr.max_fast_reg_page_list_len = RXE_MAX_FMR_PAGE_LIST_LEN; 74 72 rxe->attr.max_pkeys = RXE_MAX_PKEYS; 75 73 rxe->attr.local_ca_ack_delay = RXE_LOCAL_CA_ACK_DELAY; 74 + 75 + ndev = rxe_ib_device_get_netdev(&rxe->ib_dev); 76 + if (!ndev) 77 + return; 78 + 76 79 addrconf_addr_eui48((unsigned char *)&rxe->attr.sys_image_guid, 77 - rxe->ndev->dev_addr); 80 + ndev->dev_addr); 81 + 82 + dev_put(ndev); 78 83 79 84 rxe->max_ucontext = RXE_MAX_UCONTEXT; 80 85 } ··· 118 109 static void rxe_init_ports(struct rxe_dev *rxe) 119 110 { 120 111 struct rxe_port *port = &rxe->port; 112 + struct net_device *ndev; 121 113 122 114 rxe_init_port_param(port); 115 + ndev = rxe_ib_device_get_netdev(&rxe->ib_dev); 116 + if (!ndev) 117 + return; 123 118 addrconf_addr_eui48((unsigned char *)&port->port_guid, 124 - rxe->ndev->dev_addr); 119 + ndev->dev_addr); 120 + dev_put(ndev); 125 121 spin_lock_init(&port->port_lock); 126 122 } 127 123 ··· 181 167 /* called by ifc layer to create new rxe device. 182 168 * The caller should allocate memory for rxe by calling ib_alloc_device. 183 169 */ 184 - int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name) 170 + int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name, 171 + struct net_device *ndev) 185 172 { 186 173 rxe_init(rxe); 187 174 rxe_set_mtu(rxe, mtu); 188 175 189 - return rxe_register_device(rxe, ibdev_name); 176 + return rxe_register_device(rxe, ibdev_name, ndev); 190 177 } 191 178 192 179 static int rxe_newlink(const char *ibdev_name, struct net_device *ndev)
+2 -1
drivers/infiniband/sw/rxe/rxe.h
··· 139 139 140 140 void rxe_set_mtu(struct rxe_dev *rxe, unsigned int dev_mtu); 141 141 142 - int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name); 142 + int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name, 143 + struct net_device *ndev); 143 144 144 145 void rxe_rcv(struct sk_buff *skb); 145 146
+20 -2
drivers/infiniband/sw/rxe/rxe_mcast.c
··· 31 31 static int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid) 32 32 { 33 33 unsigned char ll_addr[ETH_ALEN]; 34 + struct net_device *ndev; 35 + int ret; 36 + 37 + ndev = rxe_ib_device_get_netdev(&rxe->ib_dev); 38 + if (!ndev) 39 + return -ENODEV; 34 40 35 41 ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr); 36 42 37 - return dev_mc_add(rxe->ndev, ll_addr); 43 + ret = dev_mc_add(ndev, ll_addr); 44 + dev_put(ndev); 45 + 46 + return ret; 38 47 } 39 48 40 49 /** ··· 56 47 static int rxe_mcast_del(struct rxe_dev *rxe, union ib_gid *mgid) 57 48 { 58 49 unsigned char ll_addr[ETH_ALEN]; 50 + struct net_device *ndev; 51 + int ret; 52 + 53 + ndev = rxe_ib_device_get_netdev(&rxe->ib_dev); 54 + if (!ndev) 55 + return -ENODEV; 59 56 60 57 ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr); 61 58 62 - return dev_mc_del(rxe->ndev, ll_addr); 59 + ret = dev_mc_del(ndev, ll_addr); 60 + dev_put(ndev); 61 + 62 + return ret; 63 63 } 64 64 65 65 /**
+20 -4
drivers/infiniband/sw/rxe/rxe_net.c
··· 524 524 */ 525 525 const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num) 526 526 { 527 - return rxe->ndev->name; 527 + struct net_device *ndev; 528 + char *ndev_name; 529 + 530 + ndev = rxe_ib_device_get_netdev(&rxe->ib_dev); 531 + if (!ndev) 532 + return NULL; 533 + ndev_name = ndev->name; 534 + dev_put(ndev); 535 + 536 + return ndev_name; 528 537 } 529 538 530 539 int rxe_net_add(const char *ibdev_name, struct net_device *ndev) ··· 545 536 if (!rxe) 546 537 return -ENOMEM; 547 538 548 - rxe->ndev = ndev; 549 539 ib_mark_name_assigned_by_user(&rxe->ib_dev); 550 540 551 - err = rxe_add(rxe, ndev->mtu, ibdev_name); 541 + err = rxe_add(rxe, ndev->mtu, ibdev_name, ndev); 552 542 if (err) { 553 543 ib_dealloc_device(&rxe->ib_dev); 554 544 return err; ··· 595 587 596 588 void rxe_set_port_state(struct rxe_dev *rxe) 597 589 { 598 - if (netif_running(rxe->ndev) && netif_carrier_ok(rxe->ndev)) 590 + struct net_device *ndev; 591 + 592 + ndev = rxe_ib_device_get_netdev(&rxe->ib_dev); 593 + if (!ndev) 594 + return; 595 + 596 + if (netif_running(ndev) && netif_carrier_ok(ndev)) 599 597 rxe_port_up(rxe); 600 598 else 601 599 rxe_port_down(rxe); 600 + 601 + dev_put(ndev); 602 602 } 603 603 604 604 static int rxe_notify(struct notifier_block *not_blk,
+21 -5
drivers/infiniband/sw/rxe/rxe_verbs.c
··· 41 41 u32 port_num, struct ib_port_attr *attr) 42 42 { 43 43 struct rxe_dev *rxe = to_rdev(ibdev); 44 + struct net_device *ndev; 44 45 int err, ret; 45 46 46 47 if (port_num != 1) { 47 48 err = -EINVAL; 48 49 rxe_dbg_dev(rxe, "bad port_num = %d\n", port_num); 50 + goto err_out; 51 + } 52 + 53 + ndev = rxe_ib_device_get_netdev(ibdev); 54 + if (!ndev) { 55 + err = -ENODEV; 49 56 goto err_out; 50 57 } 51 58 ··· 64 57 65 58 if (attr->state == IB_PORT_ACTIVE) 66 59 attr->phys_state = IB_PORT_PHYS_STATE_LINK_UP; 67 - else if (dev_get_flags(rxe->ndev) & IFF_UP) 60 + else if (dev_get_flags(ndev) & IFF_UP) 68 61 attr->phys_state = IB_PORT_PHYS_STATE_POLLING; 69 62 else 70 63 attr->phys_state = IB_PORT_PHYS_STATE_DISABLED; 71 64 72 65 mutex_unlock(&rxe->usdev_lock); 73 66 67 + dev_put(ndev); 74 68 return ret; 75 69 76 70 err_out: ··· 1433 1425 static int rxe_enable_driver(struct ib_device *ib_dev) 1434 1426 { 1435 1427 struct rxe_dev *rxe = container_of(ib_dev, struct rxe_dev, ib_dev); 1428 + struct net_device *ndev; 1429 + 1430 + ndev = rxe_ib_device_get_netdev(ib_dev); 1431 + if (!ndev) 1432 + return -ENODEV; 1436 1433 1437 1434 rxe_set_port_state(rxe); 1438 - dev_info(&rxe->ib_dev.dev, "added %s\n", netdev_name(rxe->ndev)); 1435 + dev_info(&rxe->ib_dev.dev, "added %s\n", netdev_name(ndev)); 1436 + 1437 + dev_put(ndev); 1439 1438 return 0; 1440 1439 } 1441 1440 ··· 1510 1495 INIT_RDMA_OBJ_SIZE(ib_mw, rxe_mw, ibmw), 1511 1496 }; 1512 1497 1513 - int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name) 1498 + int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name, 1499 + struct net_device *ndev) 1514 1500 { 1515 1501 int err; 1516 1502 struct ib_device *dev = &rxe->ib_dev; ··· 1523 1507 dev->num_comp_vectors = num_possible_cpus(); 1524 1508 dev->local_dma_lkey = 0; 1525 1509 addrconf_addr_eui48((unsigned char *)&dev->node_guid, 1526 1510 rxe->ndev->dev_addr); 1527 1511 1528 1512 dev->uverbs_cmd_mask |=
BIT_ULL(IB_USER_VERBS_CMD_POST_SEND) | 1529 1513 BIT_ULL(IB_USER_VERBS_CMD_REQ_NOTIFY_CQ); 1530 1514 1531 1515 ib_set_device_ops(dev, &rxe_dev_ops); 1532 - err = ib_device_set_netdev(&rxe->ib_dev, rxe->ndev, 1); 1516 + err = ib_device_set_netdev(&rxe->ib_dev, ndev, 1); 1533 1517 if (err) 1534 1518 return err; 1535 1519
+8 -3
drivers/infiniband/sw/rxe/rxe_verbs.h
··· 370 370 u32 qp_gsi_index; 371 371 }; 372 372 373 + #define RXE_PORT 1 373 374 struct rxe_dev { 374 375 struct ib_device ib_dev; 375 376 struct ib_device_attr attr; 376 377 int max_ucontext; 377 378 int max_inline_data; 378 379 struct mutex usdev_lock; 379 - 380 - struct net_device *ndev; 381 380 382 381 struct rxe_pool uc_pool; 383 382 struct rxe_pool pd_pool; ··· 404 405 struct rxe_port port; 405 406 struct crypto_shash *tfm; 406 407 }; 408 + 409 + static inline struct net_device *rxe_ib_device_get_netdev(struct ib_device *dev) 410 + { 411 + return ib_device_get_netdev(dev, RXE_PORT); 412 + } 407 413 408 414 static inline void rxe_counter_inc(struct rxe_dev *rxe, enum rxe_counters index) 409 415 { ··· 475 471 return to_rpd(mw->ibmw.pd); 476 472 } 477 473 478 - int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name); 474 + int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name, 475 + struct net_device *ndev); 479 476 480 477 #endif /* RXE_VERBS_H */
+3 -4
drivers/infiniband/sw/siw/siw.h
··· 46 46 */ 47 47 #define SIW_IRQ_MAXBURST_SQ_ACTIVE 4 48 48 49 + /* There is always only a port 1 per siw device */ 50 + #define SIW_PORT 1 51 + 49 52 struct siw_dev_cap { 50 53 int max_qp; 51 54 int max_qp_wr; ··· 72 69 73 70 struct siw_device { 74 71 struct ib_device base_dev; 75 - struct net_device *netdev; 76 72 struct siw_dev_cap attrs; 77 73 78 74 u32 vendor_part_id; 79 75 int numa_node; 80 76 char raw_gid[ETH_ALEN]; 81 - 82 - /* physical port state (only one port per device) */ 83 - enum ib_port_state state; 84 77 85 78 spinlock_t lock; 86 79
+21 -6
drivers/infiniband/sw/siw/siw_cm.c
··· 1759 1759 { 1760 1760 struct socket *s; 1761 1761 struct siw_cep *cep = NULL; 1762 + struct net_device *ndev = NULL; 1762 1763 struct siw_device *sdev = to_siw_dev(id->device); 1763 1764 int addr_family = id->local_addr.ss_family; 1764 1765 int rv = 0; ··· 1780 1779 struct sockaddr_in *laddr = &to_sockaddr_in(id->local_addr); 1781 1780 1782 1781 /* For wildcard addr, limit binding to current device only */ 1783 - if (ipv4_is_zeronet(laddr->sin_addr.s_addr)) 1784 - s->sk->sk_bound_dev_if = sdev->netdev->ifindex; 1785 - 1782 + if (ipv4_is_zeronet(laddr->sin_addr.s_addr)) { 1783 + ndev = ib_device_get_netdev(id->device, SIW_PORT); 1784 + if (ndev) { 1785 + s->sk->sk_bound_dev_if = ndev->ifindex; 1786 + } else { 1787 + rv = -ENODEV; 1788 + goto error; 1789 + } 1790 + } 1786 1791 rv = s->ops->bind(s, (struct sockaddr *)laddr, 1787 1792 sizeof(struct sockaddr_in)); 1788 1793 } else { ··· 1804 1797 } 1805 1798 1806 1799 /* For wildcard addr, limit binding to current device only */ 1807 - if (ipv6_addr_any(&laddr->sin6_addr)) 1808 - s->sk->sk_bound_dev_if = sdev->netdev->ifindex; 1809 - 1800 + if (ipv6_addr_any(&laddr->sin6_addr)) { 1801 + ndev = ib_device_get_netdev(id->device, SIW_PORT); 1802 + if (ndev) { 1803 + s->sk->sk_bound_dev_if = ndev->ifindex; 1804 + } else { 1805 + rv = -ENODEV; 1806 + goto error; 1807 + } 1808 + } 1810 1809 rv = s->ops->bind(s, (struct sockaddr *)laddr, 1811 1810 sizeof(struct sockaddr_in6)); 1812 1811 } ··· 1873 1860 } 1874 1861 list_add_tail(&cep->listenq, (struct list_head *)id->provider_data); 1875 1862 cep->state = SIW_EPSTATE_LISTENING; 1863 + dev_put(ndev); 1876 1864 1877 1865 siw_dbg(id->device, "Listen at laddr %pISp\n", &id->local_addr); 1878 1866 ··· 1893 1879 siw_cep_set_free_and_put(cep); 1894 1880 } 1895 1881 sock_release(s); 1882 + dev_put(ndev); 1896 1883 1897 1884 return rv; 1898 1885 }
+1 -14
drivers/infiniband/sw/siw/siw_main.c
··· 287 287 return NULL; 288 288 289 289 base_dev = &sdev->base_dev; 290 - sdev->netdev = netdev; 291 290 292 291 if (netdev->addr_len) { 293 292 memcpy(sdev->raw_gid, netdev->dev_addr, ··· 380 381 381 382 switch (event) { 382 383 case NETDEV_UP: 383 - sdev->state = IB_PORT_ACTIVE; 384 384 siw_port_event(sdev, 1, IB_EVENT_PORT_ACTIVE); 385 385 break; 386 386 387 387 case NETDEV_DOWN: 388 - sdev->state = IB_PORT_DOWN; 389 388 siw_port_event(sdev, 1, IB_EVENT_PORT_ERR); 390 389 break; 391 390 ··· 404 407 siw_port_event(sdev, 1, IB_EVENT_LID_CHANGE); 405 408 break; 406 409 /* 407 - * Todo: Below netdev events are currently not handled. 410 + * All other events are not handled 408 411 */ 409 - case NETDEV_CHANGEMTU: 410 - case NETDEV_CHANGE: 411 - break; 412 - 413 412 default: 414 413 break; 415 414 } ··· 435 442 sdev = siw_device_create(netdev); 436 443 if (sdev) { 437 444 dev_dbg(&netdev->dev, "siw: new device\n"); 438 - 439 - if (netif_running(netdev) && netif_carrier_ok(netdev)) 440 - sdev->state = IB_PORT_ACTIVE; 441 - else 442 - sdev->state = IB_PORT_DOWN; 443 - 444 445 ib_mark_name_assigned_by_user(&sdev->base_dev); 445 446 rv = siw_device_register(sdev, basedev_name); 446 447 if (rv)
+24 -11
drivers/infiniband/sw/siw/siw_verbs.c
··· 171 171 int siw_query_port(struct ib_device *base_dev, u32 port, 172 172 struct ib_port_attr *attr) 173 173 { 174 - struct siw_device *sdev = to_siw_dev(base_dev); 174 + struct net_device *ndev; 175 175 int rv; 176 176 177 177 memset(attr, 0, sizeof(*attr)); 178 178 179 179 rv = ib_get_eth_speed(base_dev, port, &attr->active_speed, 180 180 &attr->active_width); 181 + if (rv) 182 + return rv; 183 + 184 + ndev = ib_device_get_netdev(base_dev, SIW_PORT); 185 + if (!ndev) 186 + return -ENODEV; 187 + 181 188 attr->gid_tbl_len = 1; 182 189 attr->max_msg_sz = -1; 183 - attr->max_mtu = ib_mtu_int_to_enum(sdev->netdev->mtu); 184 - attr->active_mtu = ib_mtu_int_to_enum(sdev->netdev->mtu); 185 - attr->phys_state = sdev->state == IB_PORT_ACTIVE ? 190 + attr->max_mtu = ib_mtu_int_to_enum(ndev->max_mtu); 191 + attr->active_mtu = ib_mtu_int_to_enum(READ_ONCE(ndev->mtu)); 192 + attr->phys_state = (netif_running(ndev) && netif_carrier_ok(ndev)) ? 186 193 IB_PORT_PHYS_STATE_LINK_UP : IB_PORT_PHYS_STATE_DISABLED; 194 + attr->state = attr->phys_state == IB_PORT_PHYS_STATE_LINK_UP ?
195 + IB_PORT_ACTIVE : IB_PORT_DOWN; 187 196 attr->port_cap_flags = IB_PORT_CM_SUP | IB_PORT_DEVICE_MGMT_SUP; 188 - attr->state = sdev->state; 189 197 /* 190 198 * All zero 191 199 * ··· 207 199 * attr->subnet_timeout = 0; 208 200 * attr->init_type_repy = 0; 209 201 */ 202 + dev_put(ndev); 210 203 return rv; 211 204 } 212 205 ··· 514 505 int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr) 515 506 { 516 507 struct siw_qp *qp; 517 - struct siw_device *sdev; 508 + struct net_device *ndev; 518 509 519 - if (base_qp && qp_attr && qp_init_attr) { 510 + if (base_qp && qp_attr && qp_init_attr) 520 511 qp = to_siw_qp(base_qp); 521 - sdev = to_siw_dev(base_qp->device); 522 - } else { 512 + else 523 513 return -EINVAL; 524 - } 514 + 515 + ndev = ib_device_get_netdev(base_qp->device, SIW_PORT); 516 + if (!ndev) 517 + return -ENODEV; 518 + 525 519 qp_attr->qp_state = siw_qp_state_to_ib_qp_state[qp->attrs.state]; 526 520 qp_attr->cap.max_inline_data = SIW_MAX_INLINE; 527 521 qp_attr->cap.max_send_wr = qp->attrs.sq_size; 528 522 qp_attr->cap.max_send_sge = qp->attrs.sq_max_sges; 529 523 qp_attr->cap.max_recv_wr = qp->attrs.rq_size; 530 524 qp_attr->cap.max_recv_sge = qp->attrs.rq_max_sges; 531 - qp_attr->path_mtu = ib_mtu_int_to_enum(sdev->netdev->mtu); 525 + qp_attr->path_mtu = ib_mtu_int_to_enum(READ_ONCE(ndev->mtu)); 532 526 qp_attr->max_rd_atomic = qp->attrs.irq_size; 533 527 qp_attr->max_dest_rd_atomic = qp->attrs.orq_size; 534 528 ··· 546 534 547 535 qp_init_attr->cap = qp_attr->cap; 548 536 537 + dev_put(ndev); 549 538 return 0; 550 539 } 551 540
+1 -1
drivers/infiniband/ulp/rtrs/rtrs-srv.c
··· 349 349 struct rtrs_srv_mr *srv_mr; 350 350 bool need_inval = false; 351 351 enum ib_send_flags flags; 352 + struct ib_sge list; 352 353 u32 imm; 353 354 int err; 354 355 ··· 402 401 imm = rtrs_to_io_rsp_imm(id->msg_id, errno, need_inval); 403 402 imm_wr.wr.next = NULL; 404 403 if (always_invalidate) { 405 - struct ib_sge list; 406 404 struct rtrs_msg_rkey_rsp *msg; 407 405 408 406 srv_mr = &srv_path->mrs[id->msg_id];
+10
drivers/interconnect/icc-clk.c
··· 116 116 } 117 117 118 118 node->name = devm_kasprintf(dev, GFP_KERNEL, "%s_master", data[i].name); 119 + if (!node->name) { 120 + ret = -ENOMEM; 121 + goto err; 122 + } 123 + 119 124 node->data = &qp->clocks[i]; 120 125 icc_node_add(node, provider); 121 126 /* link to the next node, slave */ ··· 134 129 } 135 130 136 131 node->name = devm_kasprintf(dev, GFP_KERNEL, "%s_slave", data[i].name); 132 + if (!node->name) { 133 + ret = -ENOMEM; 134 + goto err; 135 + } 136 + 137 137 /* no data for slave node */ 138 138 icc_node_add(node, provider); 139 139 onecell->nodes[j++] = node;
+1 -1
drivers/interconnect/qcom/icc-rpm.c
··· 503 503 GFP_KERNEL); 504 504 if (!data) 505 505 return -ENOMEM; 506 + data->num_nodes = num_nodes; 506 507 507 508 qp->num_intf_clks = cd_num; 508 509 for (i = 0; i < cd_num; i++) ··· 598 597 599 598 data->nodes[i] = node; 600 599 } 601 - data->num_nodes = num_nodes; 602 600 603 601 clk_bulk_disable_unprepare(qp->num_intf_clks, qp->intf_clks); 604 602
+1
drivers/macintosh/Kconfig
··· 120 120 config PMAC_BACKLIGHT 121 121 bool "Backlight control for LCD screens" 122 122 depends on PPC_PMAC && ADB_PMU && FB = y && (BROKEN || !PPC64) 123 + depends on BACKLIGHT_CLASS_DEVICE=y 123 124 select FB_BACKLIGHT 124 125 help 125 126 Say Y here to enable Macintosh specific extensions of the generic
+1 -1
drivers/md/dm-ebs-target.c
··· 442 442 static struct target_type ebs_target = { 443 443 .name = "ebs", 444 444 .version = {1, 0, 1}, 445 - .features = DM_TARGET_PASSES_INTEGRITY, 445 + .features = 0, 446 446 .module = THIS_MODULE, 447 447 .ctr = ebs_ctr, 448 448 .dtr = ebs_dtr,
+2 -3
drivers/md/dm-thin.c
··· 2332 2332 struct thin_c *tc = NULL; 2333 2333 2334 2334 rcu_read_lock(); 2335 - if (!list_empty(&pool->active_thins)) { 2336 - tc = list_entry_rcu(pool->active_thins.next, struct thin_c, list); 2335 + tc = list_first_or_null_rcu(&pool->active_thins, struct thin_c, list); 2336 + if (tc) 2337 2337 thin_get(tc); 2338 - } 2339 2338 rcu_read_unlock(); 2340 2339 2341 2340 return tc;
+30 -29
drivers/md/dm-verity-fec.c
··· 40 40 } 41 41 42 42 /* 43 - * Decode an RS block using Reed-Solomon. 44 - */ 45 - static int fec_decode_rs8(struct dm_verity *v, struct dm_verity_fec_io *fio, 46 - u8 *data, u8 *fec, int neras) 47 - { 48 - int i; 49 - uint16_t par[DM_VERITY_FEC_RSM - DM_VERITY_FEC_MIN_RSN]; 50 - 51 - for (i = 0; i < v->fec->roots; i++) 52 - par[i] = fec[i]; 53 - 54 - return decode_rs8(fio->rs, data, par, v->fec->rsn, NULL, neras, 55 - fio->erasures, 0, NULL); 56 - } 57 - 58 - /* 59 43 * Read error-correcting codes for the requested RS block. Returns a pointer 60 44 * to the data block. Caller is responsible for releasing buf. 61 45 */ 62 46 static u8 *fec_read_parity(struct dm_verity *v, u64 rsb, int index, 63 - unsigned int *offset, struct dm_buffer **buf, 64 - unsigned short ioprio) 47 + unsigned int *offset, unsigned int par_buf_offset, 48 + struct dm_buffer **buf, unsigned short ioprio) 65 49 { 66 50 u64 position, block, rem; 67 51 u8 *res; 68 52 53 + /* We have already part of parity bytes read, skip to the next block */ 54 + if (par_buf_offset) 55 + index++; 56 + 69 57 position = (index + rsb) * v->fec->roots; 70 58 block = div64_u64_rem(position, v->fec->io_size, &rem); 71 - *offset = (unsigned int)rem; 59 + *offset = par_buf_offset ?
0 : (unsigned int)rem; 72 60 73 61 res = dm_bufio_read_with_ioprio(v->fec->bufio, block, buf, ioprio); 74 62 if (IS_ERR(res)) { ··· 116 128 { 117 129 int r, corrected = 0, res; 118 130 struct dm_buffer *buf; 119 - unsigned int n, i, offset; 131 + unsigned int n, i, j, offset, par_buf_offset = 0; 132 + uint16_t par_buf[DM_VERITY_FEC_RSM - DM_VERITY_FEC_MIN_RSN]; 120 133 u8 *par, *block; 121 134 struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size); 122 135 123 - par = fec_read_parity(v, rsb, block_offset, &offset, &buf, bio_prio(bio)); 136 + par = fec_read_parity(v, rsb, block_offset, &offset, 137 + par_buf_offset, &buf, bio_prio(bio)); 124 138 if (IS_ERR(par)) 125 139 return PTR_ERR(par); 126 140 ··· 132 142 */ 133 143 fec_for_each_buffer_rs_block(fio, n, i) { 134 144 block = fec_buffer_rs_block(v, fio, n, i); 135 - res = fec_decode_rs8(v, fio, block, &par[offset], neras); 145 + for (j = 0; j < v->fec->roots - par_buf_offset; j++) 146 + par_buf[par_buf_offset + j] = par[offset + j]; 147 + /* Decode an RS block using Reed-Solomon */ 148 + res = decode_rs8(fio->rs, block, par_buf, v->fec->rsn, 149 + NULL, neras, fio->erasures, 0, NULL); 136 150 if (res < 0) { 137 151 r = res; 138 152 goto error; ··· 149 155 if (block_offset >= 1 << v->data_dev_block_bits) 150 156 goto done; 151 157 152 - /* read the next block when we run out of parity bytes */ 153 - offset += v->fec->roots; 158 + /* Read the next block when we run out of parity bytes */ 159 + offset += (v->fec->roots - par_buf_offset); 160 + /* Check if parity bytes are split between blocks */ 161 + if (offset < v->fec->io_size && (offset + v->fec->roots) > v->fec->io_size) { 162 + par_buf_offset = v->fec->io_size - offset; 163 + for (j = 0; j < par_buf_offset; j++) 164 + par_buf[j] = par[offset + j]; 165 + offset += par_buf_offset; 166 + } else 167 + par_buf_offset = 0; 168 + 154 169 if (offset >= v->fec->io_size) { 155 170 dm_bufio_release(buf); 156 171 157 - par = fec_read_parity(v, rsb,
block_offset, &offset, &buf, bio_prio(bio)); 172 + par = fec_read_parity(v, rsb, block_offset, &offset, 173 + par_buf_offset, &buf, bio_prio(bio)); 158 174 if (IS_ERR(par)) 159 175 return PTR_ERR(par); 160 176 } ··· 728 724 return -E2BIG; 729 725 } 730 726 731 - if ((f->roots << SECTOR_SHIFT) & ((1 << v->data_dev_block_bits) - 1)) 732 - f->io_size = 1 << v->data_dev_block_bits; 733 - else 734 - f->io_size = v->fec->roots << SECTOR_SHIFT; 727 + f->io_size = 1 << v->data_dev_block_bits; 735 728 736 729 f->bufio = dm_bufio_client_create(f->dev->bdev, 737 730 f->io_size,
+12 -7
drivers/md/persistent-data/dm-array.c
··· 917 917 if (c->block) 918 918 unlock_ablock(c->info, c->block); 919 919 920 - c->block = NULL; 921 - c->ab = NULL; 922 920 c->index = 0; 923 921 924 922 r = dm_btree_cursor_get_value(&c->cursor, &key, &value_le); 925 923 if (r) { 926 924 DMERR("dm_btree_cursor_get_value failed"); 927 - dm_btree_cursor_end(&c->cursor); 925 + goto out; 928 926 929 927 } else { 930 928 r = get_ablock(c->info, le64_to_cpu(value_le), &c->block, &c->ab); 931 929 if (r) { 932 930 DMERR("get_ablock failed"); 933 - dm_btree_cursor_end(&c->cursor); 931 + goto out; 934 932 } 935 933 } 936 934 935 + return 0; 936 + 937 + out: 938 + dm_btree_cursor_end(&c->cursor); 939 + c->block = NULL; 940 + c->ab = NULL; 937 941 return r; 938 942 } 939 943 ··· 960 956 961 957 void dm_array_cursor_end(struct dm_array_cursor *c) 962 958 { 963 - if (c->block) { 959 + if (c->block) 964 960 unlock_ablock(c->info, c->block); 965 - dm_btree_cursor_end(&c->cursor); 966 - } 961 + 962 + dm_btree_cursor_end(&c->cursor); 967 963 } 968 964 EXPORT_SYMBOL_GPL(dm_array_cursor_end); 969 965 ··· 1003 999 } 1004 1000 1005 1001 count -= remaining; 1002 + c->index += (remaining - 1); 1006 1003 r = dm_array_cursor_next(c); 1007 1004 1008 1005 } while (!r);
+1 -1
drivers/media/dvb-frontends/dib3000mb.c
··· 51 51 static int dib3000_read_reg(struct dib3000_state *state, u16 reg) 52 52 { 53 53 u8 wb[] = { ((reg >> 8) | 0x80) & 0xff, reg & 0xff }; 54 - u8 rb[2]; 54 + u8 rb[2] = {}; 55 55 struct i2c_msg msg[] = { 56 56 { .addr = state->config.demod_address, .flags = 0, .buf = wb, .len = 2 }, 57 57 { .addr = state->config.demod_address, .flags = I2C_M_RD, .buf = rb, .len = 2 },
+2 -1
drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp9_req_lat_if.c
··· 1188 1188 return ret; 1189 1189 } 1190 1190 1191 - static 1191 + /* clang stack usage explodes if this is inlined */ 1192 + static noinline_for_stack 1192 1193 void vdec_vp9_slice_map_counts_eob_coef(unsigned int i, unsigned int j, unsigned int k, 1193 1194 struct vdec_vp9_slice_frame_counts *counts, 1194 1195 struct v4l2_vp9_frame_symbol_counts *counts_helper)
+2 -2
drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gpio.c
··· 148 148 pci1xxx_assign_bit(priv->reg_base, OPENDRAIN_OFFSET(offset), (offset % 32), true); 149 149 break; 150 150 default: 151 - ret = -EOPNOTSUPP; 151 + ret = -ENOTSUPP; 152 152 break; 153 153 } 154 154 spin_unlock_irqrestore(&priv->lock, flags); ··· 277 277 writel(BIT(bit), priv->reg_base + INTR_STATUS_OFFSET(gpiobank)); 278 278 spin_unlock_irqrestore(&priv->lock, flags); 279 279 irq = irq_find_mapping(gc->irq.domain, (bit + (gpiobank * 32))); 280 - generic_handle_irq(irq); 280 + handle_nested_irq(irq); 281 281 } 282 282 } 283 283 spin_lock_irqsave(&priv->lock, flags);
+2
drivers/mmc/host/mtk-sd.c
··· 3070 3070 msdc_gate_clock(host); 3071 3071 platform_set_drvdata(pdev, NULL); 3072 3072 release_mem: 3073 + device_init_wakeup(&pdev->dev, false); 3073 3074 if (host->dma.gpd) 3074 3075 dma_free_coherent(&pdev->dev, 3075 3076 2 * sizeof(struct mt_gpdma_desc), ··· 3104 3103 host->dma.gpd, host->dma.gpd_addr); 3105 3104 dma_free_coherent(&pdev->dev, MAX_BD_NUM * sizeof(struct mt_bdma_desc), 3106 3105 host->dma.bd, host->dma.bd_addr); 3106 + device_init_wakeup(&pdev->dev, false); 3107 3107 } 3108 3108 3109 3109 static void msdc_save_reg(struct msdc_host *host)
+8 -8
drivers/mmc/host/sdhci-msm.c
··· 1867 1867 struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host); 1868 1868 union cqhci_crypto_cap_entry cap; 1869 1869 1870 + if (!(cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE)) 1871 + return qcom_ice_evict_key(msm_host->ice, slot); 1872 + 1870 1873 /* Only AES-256-XTS has been tested so far. */ 1871 1874 cap = cq_host->crypto_cap_array[cfg->crypto_cap_idx]; 1872 1875 if (cap.algorithm_id != CQHCI_CRYPTO_ALG_AES_XTS || 1873 1876 cap.key_size != CQHCI_CRYPTO_KEY_SIZE_256) 1874 1877 return -EINVAL; 1875 1878 1876 - if (cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE) 1877 - return qcom_ice_program_key(msm_host->ice, 1878 - QCOM_ICE_CRYPTO_ALG_AES_XTS, 1879 - QCOM_ICE_CRYPTO_KEY_SIZE_256, 1880 - cfg->crypto_key, 1881 - cfg->data_unit_size, slot); 1882 - else 1883 - return qcom_ice_evict_key(msm_host->ice, slot); 1879 + return qcom_ice_program_key(msm_host->ice, 1880 + QCOM_ICE_CRYPTO_ALG_AES_XTS, 1881 + QCOM_ICE_CRYPTO_KEY_SIZE_256, 1882 + cfg->crypto_key, 1883 + cfg->data_unit_size, slot); 1884 1884 } 1885 1885 1886 1886 #else /* CONFIG_MMC_CRYPTO */
-1
drivers/mmc/host/sdhci-tegra.c
··· 1525 1525 .quirks = SDHCI_QUIRK_BROKEN_TIMEOUT_VAL | 1526 1526 SDHCI_QUIRK_SINGLE_POWER_WRITE | 1527 1527 SDHCI_QUIRK_NO_HISPD_BIT | 1528 - SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC | 1529 1528 SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN, 1530 1529 .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | 1531 1530 SDHCI_QUIRK2_ISSUE_CMD_DAT_RESET_TOGETHER,
+9 -2
drivers/mtd/nand/raw/arasan-nand-controller.c
··· 1409 1409 * case, the "not" chosen CS is assigned to nfc->spare_cs and selected 1410 1410 * whenever a GPIO CS must be asserted. 1411 1411 */ 1412 - if (nfc->cs_array && nfc->ncs > 2) { 1413 - if (!nfc->cs_array[0] && !nfc->cs_array[1]) { 1412 + if (nfc->cs_array) { 1413 + if (nfc->ncs > 2 && !nfc->cs_array[0] && !nfc->cs_array[1]) { 1414 1414 dev_err(nfc->dev, 1415 1415 "Assign a single native CS when using GPIOs\n"); 1416 1416 return -EINVAL; ··· 1478 1478 1479 1479 static void anfc_remove(struct platform_device *pdev) 1480 1480 { 1481 + int i; 1481 1482 struct arasan_nfc *nfc = platform_get_drvdata(pdev); 1483 + 1484 + for (i = 0; i < nfc->ncs; i++) { 1485 + if (nfc->cs_array[i]) { 1486 + gpiod_put(nfc->cs_array[i]); 1487 + } 1488 + } 1482 1489 1483 1490 anfc_chips_cleanup(nfc); 1484 1491 }
+1 -3
drivers/mtd/nand/raw/atmel/pmecc.c
··· 380 380 user->delta = user->dmu + req->ecc.strength + 1; 381 381 382 382 gf_tables = atmel_pmecc_get_gf_tables(req); 383 - if (IS_ERR(gf_tables)) { 384 - kfree(user); 383 + if (IS_ERR(gf_tables)) 385 384 return ERR_CAST(gf_tables); 386 - } 387 385 388 386 user->gf_tables = gf_tables; 389 387
+1 -1
drivers/mtd/nand/raw/diskonchip.c
··· 1098 1098 (i == 0) && (ip->firstUnit > 0)) { 1099 1099 parts[0].name = " DiskOnChip IPL / Media Header partition"; 1100 1100 parts[0].offset = 0; 1101 - parts[0].size = mtd->erasesize * ip->firstUnit; 1101 + parts[0].size = (uint64_t)mtd->erasesize * ip->firstUnit; 1102 1102 numparts = 1; 1103 1103 } 1104 1104
+16
drivers/mtd/nand/raw/omap2.c
··· 254 254 255 255 /** 256 256 * omap_nand_data_in_pref - NAND data in using prefetch engine 257 + * @chip: NAND chip 258 + * @buf: output buffer where NAND data is placed into 259 + * @len: length of transfer 260 + * @force_8bit: force 8-bit transfers 257 261 */ 258 262 static void omap_nand_data_in_pref(struct nand_chip *chip, void *buf, 259 263 unsigned int len, bool force_8bit) ··· 301 297 302 298 /** 303 299 * omap_nand_data_out_pref - NAND data out using Write Posting engine 300 + * @chip: NAND chip 301 + * @buf: input buffer that is sent to NAND 302 + * @len: length of transfer 303 + * @force_8bit: force 8-bit transfers 304 304 */ 305 305 static void omap_nand_data_out_pref(struct nand_chip *chip, 306 306 const void *buf, unsigned int len, ··· 448 440 449 441 /** 450 442 * omap_nand_data_in_dma_pref - NAND data in using DMA and Prefetch 443 + * @chip: NAND chip 444 + * @buf: output buffer where NAND data is placed into 445 + * @len: length of transfer 446 + * @force_8bit: force 8-bit transfers 451 447 */ 452 448 static void omap_nand_data_in_dma_pref(struct nand_chip *chip, void *buf, 453 449 unsigned int len, bool force_8bit) ··· 472 460 473 461 /** 474 462 * omap_nand_data_out_dma_pref - NAND data out using DMA and write posting 463 + * @chip: NAND chip 464 + * @buf: input buffer that is sent to NAND 465 + * @len: length of transfer 466 + * @force_8bit: force 8-bit transfers 475 467 */ 476 468 static void omap_nand_data_out_dma_pref(struct nand_chip *chip, 477 469 const void *buf, unsigned int len,
+26 -10
drivers/net/can/m_can/m_can.c
··· 1220 1220 static int m_can_interrupt_handler(struct m_can_classdev *cdev) 1221 1221 { 1222 1222 struct net_device *dev = cdev->net; 1223 - u32 ir; 1223 + u32 ir = 0, ir_read; 1224 1224 int ret; 1225 1225 1226 1226 if (pm_runtime_suspended(cdev->dev)) 1227 1227 return IRQ_NONE; 1228 1228 1229 - ir = m_can_read(cdev, M_CAN_IR); 1229 + /* The m_can controller signals its interrupt status as a level, but 1230 + * depending in the integration the CPU may interpret the signal as 1231 + * edge-triggered (for example with m_can_pci). For these 1232 + * edge-triggered integrations, we must observe that IR is 0 at least 1233 + * once to be sure that the next interrupt will generate an edge. 1234 + */ 1235 + while ((ir_read = m_can_read(cdev, M_CAN_IR)) != 0) { 1236 + ir |= ir_read; 1237 + 1238 + /* ACK all irqs */ 1239 + m_can_write(cdev, M_CAN_IR, ir); 1240 + 1241 + if (!cdev->irq_edge_triggered) 1242 + break; 1243 + } 1244 + 1230 1245 m_can_coalescing_update(cdev, ir); 1231 1246 if (!ir) 1232 1247 return IRQ_NONE; 1233 1248 - 1234 1249 /* ACK all irqs */ 1235 - m_can_write(cdev, M_CAN_IR, ir); 1236 1248 1237 1249 if (cdev->ops->clear_interrupts) 1238 1250 cdev->ops->clear_interrupts(cdev); ··· 1707 1695 return -EINVAL; 1708 1696 } 1709 1697 1698 + /* Write the INIT bit, in case no hardware reset has happened before 1699 + * the probe (for example, it was observed that the Intel Elkhart Lake 1700 + * SoCs do not properly reset the CAN controllers on reboot) 1701 + */ 1702 + err = m_can_cccr_update_bits(cdev, CCCR_INIT, CCCR_INIT); 1703 + if (err) 1704 + return err; 1705 + 1710 1706 if (!cdev->is_peripheral) 1711 1707 netif_napi_add(dev, &cdev->napi, m_can_poll); 1712 1708 ··· 1766 1746 return -EINVAL; 1767 1747 } 1768 1748 1769 - /* Forcing standby mode should be redundant, as the chip should be in 1770 - * standby after a reset. Write the INIT bit anyways, should the chip 1771 - * be configured by previous stage.
1772 - */ 1773 - return m_can_cccr_update_bits(cdev, CCCR_INIT, CCCR_INIT); 1749 + return 0; 1774 1750 } 1775 1751 1776 1752 static void m_can_stop(struct net_device *dev)
+1
drivers/net/can/m_can/m_can.h
··· 99 99 int pm_clock_support; 100 100 int pm_wake_source; 101 101 int is_peripheral; 102 + bool irq_edge_triggered; 102 103 103 104 // Cached M_CAN_IE register content 104 105 u32 active_interrupts;
+1
drivers/net/can/m_can/m_can_pci.c
··· 127 127 mcan_class->pm_clock_support = 1; 128 128 mcan_class->pm_wake_source = 0; 129 129 mcan_class->can.clock.freq = id->driver_data; 130 + mcan_class->irq_edge_triggered = true; 130 131 mcan_class->ops = &m_can_pci_ops; 131 132 132 133 pci_set_drvdata(pci, mcan_class);
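The drain loop added above is the heart of the edge-triggered fix: IR must be observed as 0 at least once, otherwise an interrupt line that never deasserts will never produce another edge. A standalone sketch of the same idea, with a toy IR register standing in for the hardware (all names here are illustrative, not the real driver API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model of the M_CAN IR register: reading returns the pending
 * bits, writing acks (clears) the bits written. Illustrative only. */
static uint32_t ir_reg;

static uint32_t ir_read(void) { return ir_reg; }
static void ir_ack(uint32_t bits) { ir_reg &= ~bits; }

/* Same shape as the new m_can_interrupt_handler() loop: accumulate and
 * ack causes until IR reads 0, so the next event generates an edge. */
static uint32_t handle_irq(bool edge_triggered)
{
	uint32_t ir = 0, ir_read_val;

	while ((ir_read_val = ir_read()) != 0) {
		ir |= ir_read_val;
		ir_ack(ir);		/* ACK all irqs seen so far */
		if (!edge_triggered)
			break;		/* level case: one pass suffices */
	}
	return ir;			/* accumulated interrupt causes */
}
```

With a level-triggered integration (`irq_edge_triggered` false in the diff) the loop still acks everything seen on the first pass and breaks, matching the old single-read behaviour.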
+36 -11
drivers/net/dsa/microchip/ksz9477.c
··· 2 2 /* 3 3 * Microchip KSZ9477 switch driver main logic 4 4 * 5 - * Copyright (C) 2017-2019 Microchip Technology Inc. 5 + * Copyright (C) 2017-2024 Microchip Technology Inc. 6 6 */ 7 7 8 8 #include <linux/kernel.h> ··· 983 983 int ksz9477_set_ageing_time(struct ksz_device *dev, unsigned int msecs) 984 984 { 985 985 u32 secs = msecs / 1000; 986 - u8 value; 987 - u8 data; 986 + u8 data, mult, value; 987 + u32 max_val; 988 988 int ret; 989 989 990 - value = FIELD_GET(SW_AGE_PERIOD_7_0_M, secs); 990 + #define MAX_TIMER_VAL ((1 << 8) - 1) 991 991 992 - ret = ksz_write8(dev, REG_SW_LUE_CTRL_3, value); 993 - if (ret < 0) 994 - return ret; 992 + /* The aging timer comprises a 3-bit multiplier and an 8-bit second 993 + * value. Neither of them can be zero. The maximum timer is then 994 + * 7 * 255 = 1785 seconds. 995 + */ 996 + if (!secs) 997 + secs = 1; 995 998 996 - data = FIELD_GET(SW_AGE_PERIOD_10_8_M, secs); 999 + /* Return an error if too large. */ 1000 + else if (secs > 7 * MAX_TIMER_VAL) 1001 + return -EINVAL; 997 1002 998 1003 ret = ksz_read8(dev, REG_SW_LUE_CTRL_0, &value); 999 1004 if (ret < 0) 1000 1005 return ret; 1001 1006 1002 - value &= ~SW_AGE_CNT_M; 1003 - value |= FIELD_PREP(SW_AGE_CNT_M, data); 1007 + /* Check whether there is a need to update the multiplier. */ 1008 + mult = FIELD_GET(SW_AGE_CNT_M, value); 1009 + max_val = MAX_TIMER_VAL; 1010 + if (mult > 0) { 1011 + /* Try to use the same multiplier already in the register, as 1012 + * the hardware default uses multiplier 4 and 75 seconds for 1013 + * 300 seconds. 
1014 + */ 1015 + max_val = DIV_ROUND_UP(secs, mult); 1016 + if (max_val > MAX_TIMER_VAL || max_val * mult != secs) 1017 + max_val = MAX_TIMER_VAL; 1018 + } 1004 1019 1005 - return ksz_write8(dev, REG_SW_LUE_CTRL_0, value); 1020 + data = DIV_ROUND_UP(secs, max_val); 1021 + if (mult != data) { 1022 + value &= ~SW_AGE_CNT_M; 1023 + value |= FIELD_PREP(SW_AGE_CNT_M, data); 1024 + ret = ksz_write8(dev, REG_SW_LUE_CTRL_0, value); 1025 + if (ret < 0) 1026 + return ret; 1027 + } 1028 + 1029 + value = DIV_ROUND_UP(secs, data); 1030 + return ksz_write8(dev, REG_SW_LUE_CTRL_3, value); 1006 1031 } 1007 1032 1008 1033 void ksz9477_port_queue_split(struct ksz_device *dev, int port)
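The multiplier selection above is easier to follow outside the diff. A standalone sketch of the same arithmetic (the helper name is hypothetical; `DIV_ROUND_UP` is redefined locally in kernel style): the 3-bit multiplier times the 8-bit value gives the ageing period in seconds, and the multiplier already in the register is reused when it divides the requested time exactly.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the KSZ9477 ageing-time split: the hardware takes a 3-bit
 * multiplier (1..7) and an 8-bit seconds value (1..255), giving
 * mult * value seconds, max 7 * 255 = 1785 s. Mirrors the logic of
 * ksz9477_set_ageing_time(); names are illustrative. */
#define MAX_TIMER_VAL ((1 << 8) - 1)
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Returns 0 on success, -1 if secs is out of range. cur_mult is the
 * multiplier currently in the register (0 if unknown). */
static int split_ageing_time(uint32_t secs, uint8_t cur_mult,
			     uint8_t *mult, uint8_t *value)
{
	uint32_t max_val = MAX_TIMER_VAL;

	if (!secs)
		secs = 1;
	else if (secs > 7 * MAX_TIMER_VAL)
		return -1;

	/* Prefer keeping the current multiplier when it divides secs
	 * exactly and the quotient fits in 8 bits. */
	if (cur_mult > 0) {
		max_val = DIV_ROUND_UP(secs, cur_mult);
		if (max_val > MAX_TIMER_VAL || max_val * cur_mult != secs)
			max_val = MAX_TIMER_VAL;
	}

	*mult = DIV_ROUND_UP(secs, max_val);
	*value = DIV_ROUND_UP(secs, *mult);
	return 0;
}
```

The hardware default (multiplier 4, value 75, i.e. 300 s) is preserved by the reuse path, which is exactly the behaviour the comment in the diff describes.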
+1 -3
drivers/net/dsa/microchip/ksz9477_reg.h
··· 2 2 /* 3 3 * Microchip KSZ9477 register definitions 4 4 * 5 - * Copyright (C) 2017-2018 Microchip Technology Inc. 5 + * Copyright (C) 2017-2024 Microchip Technology Inc. 6 6 */ 7 7 8 8 #ifndef __KSZ9477_REGS_H ··· 165 165 #define SW_VLAN_ENABLE BIT(7) 166 166 #define SW_DROP_INVALID_VID BIT(6) 167 167 #define SW_AGE_CNT_M GENMASK(5, 3) 168 - #define SW_AGE_CNT_S 3 169 - #define SW_AGE_PERIOD_10_8_M GENMASK(10, 8) 170 168 #define SW_RESV_MCAST_ENABLE BIT(2) 171 169 #define SW_HASH_OPTION_M 0x03 172 170 #define SW_HASH_OPTION_CRC 1
+59 -3
drivers/net/dsa/microchip/lan937x_main.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* Microchip LAN937X switch driver main logic 3 - * Copyright (C) 2019-2022 Microchip Technology Inc. 3 + * Copyright (C) 2019-2024 Microchip Technology Inc. 4 4 */ 5 5 #include <linux/kernel.h> 6 6 #include <linux/module.h> ··· 461 461 462 462 int lan937x_set_ageing_time(struct ksz_device *dev, unsigned int msecs) 463 463 { 464 - u32 secs = msecs / 1000; 465 - u32 value; 464 + u8 data, mult, value8; 465 + bool in_msec = false; 466 + u32 max_val, value; 467 + u32 secs = msecs; 466 468 int ret; 469 + 470 + #define MAX_TIMER_VAL ((1 << 20) - 1) 471 + 472 + /* The aging timer comprises a 3-bit multiplier and a 20-bit second 473 + * value. Neither of them can be zero. The maximum timer is then 474 + * 7 * 1048575 = 7340025 seconds. As this value is too large for 475 + * practical use, it can be interpreted as microseconds, making the 476 + * maximum timer 7340 seconds with finer control. This allows for a 477 + * maximum of 122 minutes, compared to 29 minutes in the KSZ9477 switch. 478 + */ 479 + if (msecs % 1000) 480 + in_msec = true; 481 + else 482 + secs /= 1000; 483 + if (!secs) 484 + secs = 1; 485 + 486 + /* Return an error if too large. */ 487 + else if (secs > 7 * MAX_TIMER_VAL) 488 + return -EINVAL; 489 + 490 + /* Configure how to interpret the number value. */ 491 + ret = ksz_rmw8(dev, REG_SW_LUE_CTRL_2, SW_AGE_CNT_IN_MICROSEC, 492 + in_msec ? SW_AGE_CNT_IN_MICROSEC : 0); 493 + if (ret < 0) 494 + return ret; 495 + 496 + ret = ksz_read8(dev, REG_SW_LUE_CTRL_0, &value8); 497 + if (ret < 0) 498 + return ret; 499 + 500 + /* Check whether there is a need to update the multiplier. */ 501 + mult = FIELD_GET(SW_AGE_CNT_M, value8); 502 + max_val = MAX_TIMER_VAL; 503 + if (mult > 0) { 504 + /* Try to use the same multiplier already in the register, as 505 + * the hardware default uses multiplier 4 and 75 seconds for 506 + * 300 seconds. 
507 + */ 508 + max_val = DIV_ROUND_UP(secs, mult); 509 + if (max_val > MAX_TIMER_VAL || max_val * mult != secs) 510 + max_val = MAX_TIMER_VAL; 511 + } 512 + 513 + data = DIV_ROUND_UP(secs, max_val); 514 + if (mult != data) { 515 + value8 &= ~SW_AGE_CNT_M; 516 + value8 |= FIELD_PREP(SW_AGE_CNT_M, data); 517 + ret = ksz_write8(dev, REG_SW_LUE_CTRL_0, value8); 518 + if (ret < 0) 519 + return ret; 520 + } 521 + 522 + secs = DIV_ROUND_UP(secs, data); 467 523 468 524 value = FIELD_GET(SW_AGE_PERIOD_7_0_M, secs); 469 525
+6 -3
drivers/net/dsa/microchip/lan937x_reg.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* Microchip LAN937X switch register definitions 3 - * Copyright (C) 2019-2021 Microchip Technology Inc. 3 + * Copyright (C) 2019-2024 Microchip Technology Inc. 4 4 */ 5 5 #ifndef __LAN937X_REG_H 6 6 #define __LAN937X_REG_H ··· 56 56 57 57 #define SW_VLAN_ENABLE BIT(7) 58 58 #define SW_DROP_INVALID_VID BIT(6) 59 - #define SW_AGE_CNT_M 0x7 60 - #define SW_AGE_CNT_S 3 59 + #define SW_AGE_CNT_M GENMASK(5, 3) 61 60 #define SW_RESV_MCAST_ENABLE BIT(2) 62 61 63 62 #define REG_SW_LUE_CTRL_1 0x0311 ··· 68 69 #define SW_AGING_ENABLE BIT(2) 69 70 #define SW_FAST_AGING BIT(1) 70 71 #define SW_LINK_AUTO_AGING BIT(0) 72 + 73 + #define REG_SW_LUE_CTRL_2 0x0312 74 + 75 + #define SW_AGE_CNT_IN_MICROSEC BIT(7) 71 76 72 77 #define REG_SW_AGE_PERIOD__1 0x0313 73 78 #define SW_AGE_PERIOD_7_0_M GENMASK(7, 0)
+1 -1
drivers/net/ethernet/amd/pds_core/devlink.c
··· 118 118 if (err && err != -EIO) 119 119 return err; 120 120 121 - listlen = fw_list.num_fw_slots; 121 + listlen = min(fw_list.num_fw_slots, ARRAY_SIZE(fw_list.fw_names)); 122 122 for (i = 0; i < listlen; i++) { 123 123 if (i < ARRAY_SIZE(fw_slotnames)) 124 124 strscpy(buf, fw_slotnames[i], sizeof(buf));
+18 -3
drivers/net/ethernet/broadcom/bcmsysport.c
··· 1933 1933 unsigned int i; 1934 1934 int ret; 1935 1935 1936 - clk_prepare_enable(priv->clk); 1936 + ret = clk_prepare_enable(priv->clk); 1937 + if (ret) { 1938 + netdev_err(dev, "could not enable priv clock\n"); 1939 + return ret; 1940 + } 1937 1941 1938 1942 /* Reset UniMAC */ 1939 1943 umac_reset(priv); ··· 2595 2591 goto err_deregister_notifier; 2596 2592 } 2597 2593 2598 - clk_prepare_enable(priv->clk); 2594 + ret = clk_prepare_enable(priv->clk); 2595 + if (ret) { 2596 + dev_err(&pdev->dev, "could not enable priv clock\n"); 2597 + goto err_deregister_netdev; 2598 + } 2599 2599 2600 2600 priv->rev = topctrl_readl(priv, REV_CNTL) & REV_MASK; 2601 2601 dev_info(&pdev->dev, ··· 2613 2605 2614 2606 return 0; 2615 2607 2608 + err_deregister_netdev: 2609 + unregister_netdev(dev); 2616 2610 err_deregister_notifier: 2617 2611 unregister_netdevice_notifier(&priv->netdev_notifier); 2618 2612 err_deregister_fixed_link: ··· 2784 2774 if (!netif_running(dev)) 2785 2775 return 0; 2786 2776 2787 - clk_prepare_enable(priv->clk); 2777 + ret = clk_prepare_enable(priv->clk); 2778 + if (ret) { 2779 + netdev_err(dev, "could not enable priv clock\n"); 2780 + return ret; 2781 + } 2782 + 2788 2783 if (priv->wolopts) 2789 2784 clk_disable_unprepare(priv->wol_clk); 2790 2785
+4 -1
drivers/net/ethernet/broadcom/bgmac-platform.c
··· 171 171 static int bgmac_probe(struct platform_device *pdev) 172 172 { 173 173 struct device_node *np = pdev->dev.of_node; 174 + struct device_node *phy_node; 174 175 struct bgmac *bgmac; 175 176 struct resource *regs; 176 177 int ret; ··· 237 236 bgmac->cco_ctl_maskset = platform_bgmac_cco_ctl_maskset; 238 237 bgmac->get_bus_clock = platform_bgmac_get_bus_clock; 239 238 bgmac->cmn_maskset32 = platform_bgmac_cmn_maskset32; 240 - if (of_parse_phandle(np, "phy-handle", 0)) { 239 + phy_node = of_parse_phandle(np, "phy-handle", 0); 240 + if (phy_node) { 241 + of_node_put(phy_node); 241 242 bgmac->phy_connect = platform_phy_connect; 242 243 } else { 243 244 bgmac->phy_connect = bgmac_phy_connect_direct;
+33 -5
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 2897 2897 return 0; 2898 2898 } 2899 2899 2900 + static bool bnxt_vnic_is_active(struct bnxt *bp) 2901 + { 2902 + struct bnxt_vnic_info *vnic = &bp->vnic_info[0]; 2903 + 2904 + return vnic->fw_vnic_id != INVALID_HW_RING_ID && vnic->mru > 0; 2905 + } 2906 + 2900 2907 static irqreturn_t bnxt_msix(int irq, void *dev_instance) 2901 2908 { 2902 2909 struct bnxt_napi *bnapi = dev_instance; ··· 3171 3164 break; 3172 3165 } 3173 3166 } 3174 - if (bp->flags & BNXT_FLAG_DIM) { 3167 + if ((bp->flags & BNXT_FLAG_DIM) && bnxt_vnic_is_active(bp)) { 3175 3168 struct dim_sample dim_sample = {}; 3176 3169 3177 3170 dim_update_sample(cpr->event_ctr, ··· 3302 3295 poll_done: 3303 3296 cpr_rx = &cpr->cp_ring_arr[0]; 3304 3297 if (cpr_rx->cp_ring_type == BNXT_NQ_HDL_TYPE_RX && 3305 - (bp->flags & BNXT_FLAG_DIM)) { 3298 + (bp->flags & BNXT_FLAG_DIM) && bnxt_vnic_is_active(bp)) { 3306 3299 struct dim_sample dim_sample = {}; 3307 3300 3308 3301 dim_update_sample(cpr->event_ctr, ··· 7273 7266 return rc; 7274 7267 } 7275 7268 7269 + static void bnxt_cancel_dim(struct bnxt *bp) 7270 + { 7271 + int i; 7272 + 7273 + /* DIM work is initialized in bnxt_enable_napi(). Proceed only 7274 + * if NAPI is enabled. 
7275 + */ 7276 + if (!bp->bnapi || test_bit(BNXT_STATE_NAPI_DISABLED, &bp->state)) 7277 + return; 7278 + 7279 + /* Make sure NAPI sees that the VNIC is disabled */ 7280 + synchronize_net(); 7281 + for (i = 0; i < bp->rx_nr_rings; i++) { 7282 + struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i]; 7283 + struct bnxt_napi *bnapi = rxr->bnapi; 7284 + 7285 + cancel_work_sync(&bnapi->cp_ring.dim.work); 7286 + } 7287 + } 7288 + 7276 7289 static int hwrm_ring_free_send_msg(struct bnxt *bp, 7277 7290 struct bnxt_ring_struct *ring, 7278 7291 u32 ring_type, int cmpl_ring_id) ··· 7393 7366 } 7394 7367 } 7395 7368 7369 + bnxt_cancel_dim(bp); 7396 7370 for (i = 0; i < bp->rx_nr_rings; i++) { 7397 7371 bnxt_hwrm_rx_ring_free(bp, &bp->rx_ring[i], close_path); 7398 7372 bnxt_hwrm_rx_agg_ring_free(bp, &bp->rx_ring[i], close_path); ··· 11337 11309 if (bnapi->in_reset) 11338 11310 cpr->sw_stats->rx.rx_resets++; 11339 11311 napi_disable(&bnapi->napi); 11340 - if (bnapi->rx_ring) 11341 - cancel_work_sync(&cpr->dim.work); 11342 11312 } 11343 11313 } 11344 11314 ··· 15598 15572 bnxt_hwrm_vnic_update(bp, vnic, 15599 15573 VNIC_UPDATE_REQ_ENABLES_MRU_VALID); 15600 15574 } 15601 - 15575 + /* Make sure NAPI sees that the VNIC is disabled */ 15576 + synchronize_net(); 15602 15577 rxr = &bp->rx_ring[idx]; 15578 + cancel_work_sync(&rxr->bnapi->cp_ring.dim.work); 15603 15579 bnxt_hwrm_rx_ring_free(bp, rxr, false); 15604 15580 bnxt_hwrm_rx_agg_ring_free(bp, rxr, false); 15605 15581 rxr->rx_next_cons = 0;
+2 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
··· 208 208 209 209 rc = hwrm_req_replace(bp, req, fw_msg->msg, fw_msg->msg_len); 210 210 if (rc) 211 - return rc; 211 + goto drop_req; 212 212 213 213 hwrm_req_timeout(bp, req, fw_msg->timeout); 214 214 resp = hwrm_req_hold(bp, req); ··· 220 220 221 221 memcpy(fw_msg->resp, resp, resp_len); 222 222 } 223 + drop_req: 223 224 hwrm_req_drop(bp, req); 224 225 return rc; 225 226 }
+4 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 1799 1799 struct adapter *adap = container_of(t, struct adapter, tids); 1800 1800 struct sk_buff *skb; 1801 1801 1802 - WARN_ON(tid_out_of_range(&adap->tids, tid)); 1802 + if (tid_out_of_range(&adap->tids, tid)) { 1803 + dev_err(adap->pdev_dev, "tid %d out of range\n", tid); 1804 + return; 1805 + } 1803 1806 1804 1807 if (t->tid_tab[tid - adap->tids.tid_base]) { 1805 1808 t->tid_tab[tid - adap->tids.tid_base] = NULL;
+3 -2
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c
··· 346 346 * driver. Once driver synthesizes cpl_pass_accept_req the skb will go 347 347 * through the regular cpl_pass_accept_req processing in TOM. 348 348 */ 349 - skb = alloc_skb(gl->tot_len + sizeof(struct cpl_pass_accept_req) 350 - - pktshift, GFP_ATOMIC); 349 + skb = alloc_skb(size_add(gl->tot_len, 350 + sizeof(struct cpl_pass_accept_req)) - 351 + pktshift, GFP_ATOMIC); 351 352 if (unlikely(!skb)) 352 353 return NULL; 353 354 __skb_put(skb, gl->tot_len + sizeof(struct cpl_pass_accept_req)
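The chtls change swaps a raw `gl->tot_len + sizeof(...)` sum for `size_add()`, the overflow-safe helper from the kernel's overflow.h: on wraparound it saturates to SIZE_MAX so the subsequent `alloc_skb()` fails cleanly instead of allocating a too-small buffer. A minimal userspace stand-in for that helper (name is illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-in for the kernel's size_add(): saturate to SIZE_MAX
 * on overflow so a subsequent allocation fails cleanly rather than
 * succeeding with a truncated size. */
static size_t size_add_sat(size_t a, size_t b)
{
	size_t sum = a + b;

	/* Unsigned wraparound: the sum came out smaller than an operand */
	return sum < a ? SIZE_MAX : sum;
}
```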
+1
drivers/net/ethernet/google/gve/gve.h
··· 1140 1140 void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid); 1141 1141 bool gve_tx_poll(struct gve_notify_block *block, int budget); 1142 1142 bool gve_xdp_poll(struct gve_notify_block *block, int budget); 1143 + int gve_xsk_tx_poll(struct gve_notify_block *block, int budget); 1143 1144 int gve_tx_alloc_rings_gqi(struct gve_priv *priv, 1144 1145 struct gve_tx_alloc_rings_cfg *cfg); 1145 1146 void gve_tx_free_rings_gqi(struct gve_priv *priv,
+45 -32
drivers/net/ethernet/google/gve/gve_main.c
··· 333 333 334 334 if (block->rx) { 335 335 work_done = gve_rx_poll(block, budget); 336 + 337 + /* Poll XSK TX as part of RX NAPI. Setup re-poll based on max of 338 + * TX and RX work done. 339 + */ 340 + if (priv->xdp_prog) 341 + work_done = max_t(int, work_done, 342 + gve_xsk_tx_poll(block, budget)); 343 + 336 344 reschedule |= work_done == budget; 337 345 } 338 346 ··· 930 922 static void gve_tx_get_curr_alloc_cfg(struct gve_priv *priv, 931 923 struct gve_tx_alloc_rings_cfg *cfg) 932 924 { 925 + int num_xdp_queues = priv->xdp_prog ? priv->rx_cfg.num_queues : 0; 926 + 933 927 cfg->qcfg = &priv->tx_cfg; 934 928 cfg->raw_addressing = !gve_is_qpl(priv); 935 929 cfg->ring_size = priv->tx_desc_cnt; 936 930 cfg->start_idx = 0; 937 - cfg->num_rings = gve_num_tx_queues(priv); 931 + cfg->num_rings = priv->tx_cfg.num_queues + num_xdp_queues; 938 932 cfg->tx = priv->tx; 939 933 } 940 934 ··· 1633 1623 if (err) 1634 1624 return err; 1635 1625 1636 - /* If XDP prog is not installed, return */ 1637 - if (!priv->xdp_prog) 1626 + /* If XDP prog is not installed or interface is down, return. */ 1627 + if (!priv->xdp_prog || !netif_running(dev)) 1638 1628 return 0; 1639 1629 1640 1630 rx = &priv->rx[qid]; ··· 1679 1669 if (qid >= priv->rx_cfg.num_queues) 1680 1670 return -EINVAL; 1681 1671 1682 - /* If XDP prog is not installed, unmap DMA and return */ 1683 - if (!priv->xdp_prog) 1672 + /* If XDP prog is not installed or interface is down, unmap DMA and 1673 + * return. 
1674 + */ 1675 + if (!priv->xdp_prog || !netif_running(dev)) 1684 1676 goto done; 1685 - 1686 - tx_qid = gve_xdp_tx_queue_id(priv, qid); 1687 - if (!netif_running(dev)) { 1688 - priv->rx[qid].xsk_pool = NULL; 1689 - xdp_rxq_info_unreg(&priv->rx[qid].xsk_rxq); 1690 - priv->tx[tx_qid].xsk_pool = NULL; 1691 - goto done; 1692 - } 1693 1677 1694 1678 napi_rx = &priv->ntfy_blocks[priv->rx[qid].ntfy_id].napi; 1695 1679 napi_disable(napi_rx); /* make sure current rx poll is done */ 1696 1680 1681 + tx_qid = gve_xdp_tx_queue_id(priv, qid); 1697 1682 napi_tx = &priv->ntfy_blocks[priv->tx[tx_qid].ntfy_id].napi; 1698 1683 napi_disable(napi_tx); /* make sure current tx poll is done */ 1699 1684 ··· 1714 1709 static int gve_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags) 1715 1710 { 1716 1711 struct gve_priv *priv = netdev_priv(dev); 1717 - int tx_queue_id = gve_xdp_tx_queue_id(priv, queue_id); 1712 + struct napi_struct *napi; 1713 + 1714 + if (!gve_get_napi_enabled(priv)) 1715 + return -ENETDOWN; 1718 1716 1719 1717 if (queue_id >= priv->rx_cfg.num_queues || !priv->xdp_prog) 1720 1718 return -EINVAL; 1721 1719 1722 - if (flags & XDP_WAKEUP_TX) { 1723 - struct gve_tx_ring *tx = &priv->tx[tx_queue_id]; 1724 - struct napi_struct *napi = 1725 - &priv->ntfy_blocks[tx->ntfy_id].napi; 1726 - 1727 - if (!napi_if_scheduled_mark_missed(napi)) { 1728 - /* Call local_bh_enable to trigger SoftIRQ processing */ 1729 - local_bh_disable(); 1730 - napi_schedule(napi); 1731 - local_bh_enable(); 1732 - } 1733 - 1734 - tx->xdp_xsk_wakeup++; 1720 + napi = &priv->ntfy_blocks[gve_rx_idx_to_ntfy(priv, queue_id)].napi; 1721 + if (!napi_if_scheduled_mark_missed(napi)) { 1722 + /* Call local_bh_enable to trigger SoftIRQ processing */ 1723 + local_bh_disable(); 1724 + napi_schedule(napi); 1725 + local_bh_enable(); 1735 1726 } 1736 1727 1737 1728 return 0; ··· 1838 1837 { 1839 1838 struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0}; 1840 1839 struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0}; 1840 
+ int num_xdp_queues; 1841 1841 int err; 1842 1842 1843 1843 gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg); ··· 1848 1846 rx_alloc_cfg.qcfg_tx = &new_tx_config; 1849 1847 rx_alloc_cfg.qcfg = &new_rx_config; 1850 1848 tx_alloc_cfg.num_rings = new_tx_config.num_queues; 1849 + 1850 + /* Add dedicated XDP TX queues if enabled. */ 1851 + num_xdp_queues = priv->xdp_prog ? new_rx_config.num_queues : 0; 1852 + tx_alloc_cfg.num_rings += num_xdp_queues; 1851 1853 1852 1854 if (netif_running(priv->dev)) { 1853 1855 err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg); ··· 1905 1899 1906 1900 gve_clear_napi_enabled(priv); 1907 1901 gve_clear_report_stats(priv); 1902 + 1903 + /* Make sure that all traffic is finished processing. */ 1904 + synchronize_net(); 1908 1905 } 1909 1906 1910 1907 static void gve_turnup(struct gve_priv *priv) ··· 2241 2232 2242 2233 static void gve_set_netdev_xdp_features(struct gve_priv *priv) 2243 2234 { 2235 + xdp_features_t xdp_features; 2236 + 2244 2237 if (priv->queue_format == GVE_GQI_QPL_FORMAT) { 2245 - priv->dev->xdp_features = NETDEV_XDP_ACT_BASIC; 2246 - priv->dev->xdp_features |= NETDEV_XDP_ACT_REDIRECT; 2247 - priv->dev->xdp_features |= NETDEV_XDP_ACT_NDO_XMIT; 2248 - priv->dev->xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY; 2238 + xdp_features = NETDEV_XDP_ACT_BASIC; 2239 + xdp_features |= NETDEV_XDP_ACT_REDIRECT; 2240 + xdp_features |= NETDEV_XDP_ACT_NDO_XMIT; 2241 + xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY; 2249 2242 } else { 2250 - priv->dev->xdp_features = 0; 2243 + xdp_features = 0; 2251 2244 } 2245 + 2246 + xdp_set_features_flag(priv->dev, xdp_features); 2252 2247 } 2253 2248 2254 2249 static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device)
+30 -16
drivers/net/ethernet/google/gve/gve_tx.c
··· 206 206 return; 207 207 208 208 gve_remove_napi(priv, ntfy_idx); 209 - gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false); 209 + if (tx->q_num < priv->tx_cfg.num_queues) 210 + gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false); 211 + else 212 + gve_clean_xdp_done(priv, tx, priv->tx_desc_cnt); 210 213 netdev_tx_reset_queue(tx->netdev_txq); 211 214 gve_tx_remove_from_block(priv, idx); 212 215 } ··· 837 834 struct gve_tx_ring *tx; 838 835 int i, err = 0, qid; 839 836 840 - if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) 837 + if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK) || !priv->xdp_prog) 841 838 return -EINVAL; 839 + 840 + if (!gve_get_napi_enabled(priv)) 841 + return -ENETDOWN; 842 842 843 843 qid = gve_xdp_tx_queue_id(priv, 844 844 smp_processor_id() % priv->num_xdp_queues); ··· 981 975 return sent; 982 976 } 983 977 978 + int gve_xsk_tx_poll(struct gve_notify_block *rx_block, int budget) 979 + { 980 + struct gve_rx_ring *rx = rx_block->rx; 981 + struct gve_priv *priv = rx->gve; 982 + struct gve_tx_ring *tx; 983 + int sent = 0; 984 + 985 + tx = &priv->tx[gve_xdp_tx_queue_id(priv, rx->q_num)]; 986 + if (tx->xsk_pool) { 987 + sent = gve_xsk_tx(priv, tx, budget); 988 + 989 + u64_stats_update_begin(&tx->statss); 990 + tx->xdp_xsk_sent += sent; 991 + u64_stats_update_end(&tx->statss); 992 + if (xsk_uses_need_wakeup(tx->xsk_pool)) 993 + xsk_set_tx_need_wakeup(tx->xsk_pool); 994 + } 995 + 996 + return sent; 997 + } 998 + 984 999 bool gve_xdp_poll(struct gve_notify_block *block, int budget) 985 1000 { 986 1001 struct gve_priv *priv = block->priv; 987 1002 struct gve_tx_ring *tx = block->tx; 988 1003 u32 nic_done; 989 - bool repoll; 990 1004 u32 to_do; 991 1005 992 1006 /* Find out how much work there is to be done */ 993 1007 nic_done = gve_tx_load_event_counter(priv, tx); 994 1008 to_do = min_t(u32, (nic_done - tx->done), budget); 995 1009 gve_clean_xdp_done(priv, tx, to_do); 996 - repoll = nic_done != tx->done; 997 - 998 - if (tx->xsk_pool) { 999 - int sent = 
gve_xsk_tx(priv, tx, budget); 1000 - 1001 - u64_stats_update_begin(&tx->statss); 1002 - tx->xdp_xsk_sent += sent; 1003 - u64_stats_update_end(&tx->statss); 1004 - repoll |= (sent == budget); 1005 - if (xsk_uses_need_wakeup(tx->xsk_pool)) 1006 - xsk_set_tx_need_wakeup(tx->xsk_pool); 1007 - } 1008 1010 1009 1011 /* If we still have work we want to repoll */ 1010 - return repoll; 1012 + return nic_done != tx->done; 1011 1013 } 1012 1014 1013 1015 bool gve_tx_poll(struct gve_notify_block *block, int budget)
-3
drivers/net/ethernet/hisilicon/hns3/hnae3.h
··· 916 916 917 917 u8 netdev_flags; 918 918 struct dentry *hnae3_dbgfs; 919 - /* protects concurrent contention between debugfs commands */ 920 - struct mutex dbgfs_lock; 921 - char **dbgfs_buf; 922 919 923 920 /* Network interface message level enabled bits */ 924 921 u32 msg_enable;
+31 -65
drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
··· 1260 1260 static ssize_t hns3_dbg_read(struct file *filp, char __user *buffer, 1261 1261 size_t count, loff_t *ppos) 1262 1262 { 1263 - struct hns3_dbg_data *dbg_data = filp->private_data; 1263 + char *buf = filp->private_data; 1264 + 1265 + return simple_read_from_buffer(buffer, count, ppos, buf, strlen(buf)); 1266 + } 1267 + 1268 + static int hns3_dbg_open(struct inode *inode, struct file *filp) 1269 + { 1270 + struct hns3_dbg_data *dbg_data = inode->i_private; 1264 1271 struct hnae3_handle *handle = dbg_data->handle; 1265 1272 struct hns3_nic_priv *priv = handle->priv; 1266 - ssize_t size = 0; 1267 - char **save_buf; 1268 - char *read_buf; 1269 1273 u32 index; 1274 + char *buf; 1270 1275 int ret; 1276 + 1277 + if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) || 1278 + test_bit(HNS3_NIC_STATE_RESETTING, &priv->state)) 1279 + return -EBUSY; 1271 1280 1272 1281 ret = hns3_dbg_get_cmd_index(dbg_data, &index); 1273 1282 if (ret) 1274 1283 return ret; 1275 1284 1276 - mutex_lock(&handle->dbgfs_lock); 1277 - save_buf = &handle->dbgfs_buf[index]; 1285 + buf = kvzalloc(hns3_dbg_cmd[index].buf_len, GFP_KERNEL); 1286 + if (!buf) 1287 + return -ENOMEM; 1278 1288 1279 - if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) || 1280 - test_bit(HNS3_NIC_STATE_RESETTING, &priv->state)) { 1281 - ret = -EBUSY; 1282 - goto out; 1289 + ret = hns3_dbg_read_cmd(dbg_data, hns3_dbg_cmd[index].cmd, 1290 + buf, hns3_dbg_cmd[index].buf_len); 1291 + if (ret) { 1292 + kvfree(buf); 1293 + return ret; 1283 1294 } 1284 1295 1285 - if (*save_buf) { 1286 - read_buf = *save_buf; 1287 - } else { 1288 - read_buf = kvzalloc(hns3_dbg_cmd[index].buf_len, GFP_KERNEL); 1289 - if (!read_buf) { 1290 - ret = -ENOMEM; 1291 - goto out; 1292 - } 1296 + filp->private_data = buf; 1297 + return 0; 1298 + } 1293 1299 1294 - /* save the buffer addr until the last read operation */ 1295 - *save_buf = read_buf; 1296 - 1297 - /* get data ready for the first time to read */ 1298 - ret = hns3_dbg_read_cmd(dbg_data, 
hns3_dbg_cmd[index].cmd, 1299 - read_buf, hns3_dbg_cmd[index].buf_len); 1300 - if (ret) 1301 - goto out; 1302 - } 1303 - 1304 - size = simple_read_from_buffer(buffer, count, ppos, read_buf, 1305 - strlen(read_buf)); 1306 - if (size > 0) { 1307 - mutex_unlock(&handle->dbgfs_lock); 1308 - return size; 1309 - } 1310 - 1311 - out: 1312 - /* free the buffer for the last read operation */ 1313 - if (*save_buf) { 1314 - kvfree(*save_buf); 1315 - *save_buf = NULL; 1316 - } 1317 - 1318 - mutex_unlock(&handle->dbgfs_lock); 1319 - return ret; 1300 + static int hns3_dbg_release(struct inode *inode, struct file *filp) 1301 + { 1302 + kvfree(filp->private_data); 1303 + filp->private_data = NULL; 1304 + return 0; 1320 1305 } 1321 1306 1322 1307 static const struct file_operations hns3_dbg_fops = { 1323 1308 .owner = THIS_MODULE, 1324 - .open = simple_open, 1309 + .open = hns3_dbg_open, 1325 1310 .read = hns3_dbg_read, 1311 + .release = hns3_dbg_release, 1326 1312 }; 1327 1313 1328 1314 static int hns3_dbg_bd_file_init(struct hnae3_handle *handle, u32 cmd) ··· 1365 1379 int ret; 1366 1380 u32 i; 1367 1381 1368 - handle->dbgfs_buf = devm_kcalloc(&handle->pdev->dev, 1369 - ARRAY_SIZE(hns3_dbg_cmd), 1370 - sizeof(*handle->dbgfs_buf), 1371 - GFP_KERNEL); 1372 - if (!handle->dbgfs_buf) 1373 - return -ENOMEM; 1374 - 1375 1382 hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry = 1376 1383 debugfs_create_dir(name, hns3_dbgfs_root); 1377 1384 handle->hnae3_dbgfs = hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry; ··· 1373 1394 hns3_dbg_dentry[i].dentry = 1374 1395 debugfs_create_dir(hns3_dbg_dentry[i].name, 1375 1396 handle->hnae3_dbgfs); 1376 - 1377 - mutex_init(&handle->dbgfs_lock); 1378 1397 1379 1398 for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++) { 1380 1399 if ((hns3_dbg_cmd[i].cmd == HNAE3_DBG_CMD_TM_NODES && ··· 1402 1425 out: 1403 1426 debugfs_remove_recursive(handle->hnae3_dbgfs); 1404 1427 handle->hnae3_dbgfs = NULL; 1405 - mutex_destroy(&handle->dbgfs_lock); 1406 1428 return ret; 1407 
1429 } 1408 1430 1409 1431 void hns3_dbg_uninit(struct hnae3_handle *handle) 1410 1432 { 1411 - u32 i; 1412 - 1413 1433 debugfs_remove_recursive(handle->hnae3_dbgfs); 1414 1434 handle->hnae3_dbgfs = NULL; 1415 - 1416 - for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++) 1417 - if (handle->dbgfs_buf[i]) { 1418 - kvfree(handle->dbgfs_buf[i]); 1419 - handle->dbgfs_buf[i] = NULL; 1420 - } 1421 - 1422 - mutex_destroy(&handle->dbgfs_lock); 1423 1435 } 1424 1436 1425 1437 void hns3_dbg_register_debugfs(const char *debugfs_dir_name)
-1
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
··· 2452 2452 return ret; 2453 2453 } 2454 2454 2455 - netdev->features = features; 2456 2455 return 0; 2457 2456 } 2458 2457
+36 -9
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 6 6 #include <linux/etherdevice.h> 7 7 #include <linux/init.h> 8 8 #include <linux/interrupt.h> 9 + #include <linux/irq.h> 9 10 #include <linux/kernel.h> 10 11 #include <linux/module.h> 11 12 #include <linux/netdevice.h> ··· 3575 3574 return ret; 3576 3575 } 3577 3576 3577 + static void hclge_set_reset_pending(struct hclge_dev *hdev, 3578 + enum hnae3_reset_type reset_type) 3579 + { 3580 + /* When an incorrect reset type is executed, the get_reset_level 3581 + * function generates the HNAE3_NONE_RESET flag. As a result, this 3582 + * type does not need to be set as pending. 3583 + */ 3584 + if (reset_type != HNAE3_NONE_RESET) 3585 + set_bit(reset_type, &hdev->reset_pending); 3586 + } 3587 + 3578 3588 static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval) 3579 3589 { 3580 3590 u32 cmdq_src_reg, msix_src_reg, hw_err_src_reg; ··· 3606 3594 */ 3607 3595 if (BIT(HCLGE_VECTOR0_IMPRESET_INT_B) & msix_src_reg) { 3608 3596 dev_info(&hdev->pdev->dev, "IMP reset interrupt\n"); 3609 - set_bit(HNAE3_IMP_RESET, &hdev->reset_pending); 3597 + hclge_set_reset_pending(hdev, HNAE3_IMP_RESET); 3610 3598 set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state); 3611 3599 *clearval = BIT(HCLGE_VECTOR0_IMPRESET_INT_B); 3612 3600 hdev->rst_stats.imp_rst_cnt++; ··· 3616 3604 if (BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B) & msix_src_reg) { 3617 3605 dev_info(&hdev->pdev->dev, "global reset interrupt\n"); 3618 3606 set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state); 3619 - set_bit(HNAE3_GLOBAL_RESET, &hdev->reset_pending); 3607 + hclge_set_reset_pending(hdev, HNAE3_GLOBAL_RESET); 3620 3608 *clearval = BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B); 3621 3609 hdev->rst_stats.global_rst_cnt++; 3622 3610 return HCLGE_VECTOR0_EVENT_RST; ··· 3771 3759 snprintf(hdev->misc_vector.name, HNAE3_INT_NAME_LEN, "%s-misc-%s", 3772 3760 HCLGE_NAME, pci_name(hdev->pdev)); 3773 3761 ret = request_irq(hdev->misc_vector.vector_irq, hclge_misc_irq_handle, 3774 - 0, hdev->misc_vector.name, hdev); 
3762 + IRQF_NO_AUTOEN, hdev->misc_vector.name, hdev); 3775 3763 if (ret) { 3776 3764 hclge_free_vector(hdev, 0); 3777 3765 dev_err(&hdev->pdev->dev, "request misc irq(%d) fail\n", ··· 4064 4052 case HNAE3_FUNC_RESET: 4065 4053 dev_info(&pdev->dev, "PF reset requested\n"); 4066 4054 /* schedule again to check later */ 4067 - set_bit(HNAE3_FUNC_RESET, &hdev->reset_pending); 4055 + hclge_set_reset_pending(hdev, HNAE3_FUNC_RESET); 4068 4056 hclge_reset_task_schedule(hdev); 4069 4057 break; 4070 4058 default: ··· 4097 4085 rst_level = HNAE3_FLR_RESET; 4098 4086 clear_bit(HNAE3_FLR_RESET, addr); 4099 4087 } 4088 + 4089 + clear_bit(HNAE3_NONE_RESET, addr); 4100 4090 4101 4091 if (hdev->reset_type != HNAE3_NONE_RESET && 4102 4092 rst_level < hdev->reset_type) ··· 4241 4227 return false; 4242 4228 } else if (hdev->rst_stats.reset_fail_cnt < MAX_RESET_FAIL_CNT) { 4243 4229 hdev->rst_stats.reset_fail_cnt++; 4244 - set_bit(hdev->reset_type, &hdev->reset_pending); 4230 + hclge_set_reset_pending(hdev, hdev->reset_type); 4245 4231 dev_info(&hdev->pdev->dev, 4246 4232 "re-schedule reset task(%u)\n", 4247 4233 hdev->rst_stats.reset_fail_cnt); ··· 4484 4470 static void hclge_set_def_reset_request(struct hnae3_ae_dev *ae_dev, 4485 4471 enum hnae3_reset_type rst_type) 4486 4472 { 4473 + #define HCLGE_SUPPORT_RESET_TYPE \ 4474 + (BIT(HNAE3_FLR_RESET) | BIT(HNAE3_FUNC_RESET) | \ 4475 + BIT(HNAE3_GLOBAL_RESET) | BIT(HNAE3_IMP_RESET)) 4476 + 4487 4477 struct hclge_dev *hdev = ae_dev->priv; 4478 + 4479 + if (!(BIT(rst_type) & HCLGE_SUPPORT_RESET_TYPE)) { 4480 + /* To prevent reset triggered by hclge_reset_event */ 4481 + set_bit(HNAE3_NONE_RESET, &hdev->default_reset_request); 4482 + dev_warn(&hdev->pdev->dev, "unsupported reset type %d\n", 4483 + rst_type); 4484 + return; 4485 + } 4488 4486 4489 4487 set_bit(rst_type, &hdev->default_reset_request); 4490 4488 } ··· 11907 11881 11908 11882 hclge_init_rxd_adv_layout(hdev); 11909 11883 11910 - /* Enable MISC vector(vector0) */ 11911 - 
hclge_enable_vector(&hdev->misc_vector, true); 11912 - 11913 11884 ret = hclge_init_wol(hdev); 11914 11885 if (ret) 11915 11886 dev_warn(&pdev->dev, ··· 11918 11895 11919 11896 hclge_state_init(hdev); 11920 11897 hdev->last_reset_time = jiffies; 11898 + 11899 + /* Enable MISC vector(vector0) */ 11900 + enable_irq(hdev->misc_vector.vector_irq); 11901 + hclge_enable_vector(&hdev->misc_vector, true); 11921 11902 11922 11903 dev_info(&hdev->pdev->dev, "%s driver initialization finished.\n", 11923 11904 HCLGE_DRIVER_NAME); ··· 12328 12301 12329 12302 /* Disable MISC vector(vector0) */ 12330 12303 hclge_enable_vector(&hdev->misc_vector, false); 12331 - synchronize_irq(hdev->misc_vector.vector_irq); 12304 + disable_irq(hdev->misc_vector.vector_irq); 12332 12305 12333 12306 /* Disable all hw interrupts */ 12334 12307 hclge_config_mac_tnl_int(hdev, false);
+3
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
··· 58 58 struct hclge_dev *hdev = vport->back; 59 59 struct hclge_ptp *ptp = hdev->ptp; 60 60 61 + if (!ptp) 62 + return false; 63 + 61 64 if (!test_bit(HCLGE_PTP_FLAG_TX_EN, &ptp->flags) || 62 65 test_and_set_bit(HCLGE_STATE_PTP_TX_HANDLING, &hdev->state)) { 63 66 ptp->tx_skipped++;
+5 -4
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.c
··· 510 510 static int hclge_fetch_pf_reg(struct hclge_dev *hdev, void *data, 511 511 struct hnae3_knic_private_info *kinfo) 512 512 { 513 - #define HCLGE_RING_REG_OFFSET 0x200 514 513 #define HCLGE_RING_INT_REG_OFFSET 0x4 515 514 515 + struct hnae3_queue *tqp; 516 516 int i, j, reg_num; 517 517 int data_num_sum; 518 518 u32 *reg = data; ··· 533 533 reg_num = ARRAY_SIZE(ring_reg_addr_list); 534 534 for (j = 0; j < kinfo->num_tqps; j++) { 535 535 reg += hclge_reg_get_tlv(HCLGE_REG_TAG_RING, reg_num, reg); 536 + tqp = kinfo->tqp[j]; 536 537 for (i = 0; i < reg_num; i++) 537 - *reg++ = hclge_read_dev(&hdev->hw, 538 - ring_reg_addr_list[i] + 539 - HCLGE_RING_REG_OFFSET * j); 538 + *reg++ = readl_relaxed(tqp->io_base - 539 + HCLGE_TQP_REG_OFFSET + 540 + ring_reg_addr_list[i]); 540 541 } 541 542 data_num_sum += (reg_num + HCLGE_REG_TLV_SPACE) * kinfo->num_tqps; 542 543
+34 -7
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
··· 1393 1393 return ret; 1394 1394 } 1395 1395 1396 + static void hclgevf_set_reset_pending(struct hclgevf_dev *hdev, 1397 + enum hnae3_reset_type reset_type) 1398 + { 1399 + /* When an incorrect reset type is executed, the get_reset_level 1400 + * function generates the HNAE3_NONE_RESET flag. As a result, this 1401 + * type do not need to pending. 1402 + */ 1403 + if (reset_type != HNAE3_NONE_RESET) 1404 + set_bit(reset_type, &hdev->reset_pending); 1405 + } 1406 + 1396 1407 static int hclgevf_reset_wait(struct hclgevf_dev *hdev) 1397 1408 { 1398 1409 #define HCLGEVF_RESET_WAIT_US 20000 ··· 1553 1542 hdev->rst_stats.rst_fail_cnt); 1554 1543 1555 1544 if (hdev->rst_stats.rst_fail_cnt < HCLGEVF_RESET_MAX_FAIL_CNT) 1556 - set_bit(hdev->reset_type, &hdev->reset_pending); 1545 + hclgevf_set_reset_pending(hdev, hdev->reset_type); 1557 1546 1558 1547 if (hclgevf_is_reset_pending(hdev)) { 1559 1548 set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state); ··· 1673 1662 clear_bit(HNAE3_FLR_RESET, addr); 1674 1663 } 1675 1664 1665 + clear_bit(HNAE3_NONE_RESET, addr); 1666 + 1676 1667 return rst_level; 1677 1668 } 1678 1669 ··· 1684 1671 struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev); 1685 1672 struct hclgevf_dev *hdev = ae_dev->priv; 1686 1673 1687 - dev_info(&hdev->pdev->dev, "received reset request from VF enet\n"); 1688 - 1689 1674 if (hdev->default_reset_request) 1690 1675 hdev->reset_level = 1691 1676 hclgevf_get_reset_level(&hdev->default_reset_request); 1692 1677 else 1693 1678 hdev->reset_level = HNAE3_VF_FUNC_RESET; 1679 + 1680 + dev_info(&hdev->pdev->dev, "received reset request from VF enet, reset level is %d\n", 1681 + hdev->reset_level); 1694 1682 1695 1683 /* reset of this VF requested */ 1696 1684 set_bit(HCLGEVF_RESET_REQUESTED, &hdev->reset_state); ··· 1703 1689 static void hclgevf_set_def_reset_request(struct hnae3_ae_dev *ae_dev, 1704 1690 enum hnae3_reset_type rst_type) 1705 1691 { 1692 + #define HCLGEVF_SUPPORT_RESET_TYPE \ 1693 + (BIT(HNAE3_VF_RESET) | 
BIT(HNAE3_VF_FUNC_RESET) | \ 1694 + BIT(HNAE3_VF_PF_FUNC_RESET) | BIT(HNAE3_VF_FULL_RESET) | \ 1695 + BIT(HNAE3_FLR_RESET) | BIT(HNAE3_VF_EXP_RESET)) 1696 + 1706 1697 struct hclgevf_dev *hdev = ae_dev->priv; 1707 1698 1699 + if (!(BIT(rst_type) & HCLGEVF_SUPPORT_RESET_TYPE)) { 1700 + /* To prevent reset triggered by hclge_reset_event */ 1701 + set_bit(HNAE3_NONE_RESET, &hdev->default_reset_request); 1702 + dev_info(&hdev->pdev->dev, "unsupported reset type %d\n", 1703 + rst_type); 1704 + return; 1705 + } 1708 1706 set_bit(rst_type, &hdev->default_reset_request); 1709 1707 } 1710 1708 ··· 1873 1847 */ 1874 1848 if (hdev->reset_attempts > HCLGEVF_MAX_RESET_ATTEMPTS_CNT) { 1875 1849 /* prepare for full reset of stack + pcie interface */ 1876 - set_bit(HNAE3_VF_FULL_RESET, &hdev->reset_pending); 1850 + hclgevf_set_reset_pending(hdev, HNAE3_VF_FULL_RESET); 1877 1851 1878 1852 /* "defer" schedule the reset task again */ 1879 1853 set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state); 1880 1854 } else { 1881 1855 hdev->reset_attempts++; 1882 1856 1883 - set_bit(hdev->reset_level, &hdev->reset_pending); 1857 + hclgevf_set_reset_pending(hdev, hdev->reset_level); 1884 1858 set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state); 1885 1859 } 1886 1860 hclgevf_reset_task_schedule(hdev); ··· 2003 1977 rst_ing_reg = hclgevf_read_dev(&hdev->hw, HCLGEVF_RST_ING); 2004 1978 dev_info(&hdev->pdev->dev, 2005 1979 "receive reset interrupt 0x%x!\n", rst_ing_reg); 2006 - set_bit(HNAE3_VF_RESET, &hdev->reset_pending); 1980 + hclgevf_set_reset_pending(hdev, HNAE3_VF_RESET); 2007 1981 set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state); 2008 1982 set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state); 2009 1983 *clearval = ~(1U << HCLGEVF_VECTOR0_RST_INT_B); ··· 2313 2287 clear_bit(HCLGEVF_STATE_RST_FAIL, &hdev->state); 2314 2288 2315 2289 INIT_DELAYED_WORK(&hdev->service_task, hclgevf_service_task); 2290 + /* timer needs to be initialized before misc irq */ 2291 + 
timer_setup(&hdev->reset_timer, hclgevf_reset_timer, 0); 2316 2292 2317 2293 mutex_init(&hdev->mbx_resp.mbx_mutex); 2318 2294 sema_init(&hdev->reset_sem, 1); ··· 3014 2986 HCLGEVF_DRIVER_NAME); 3015 2987 3016 2988 hclgevf_task_schedule(hdev, round_jiffies_relative(HZ)); 3017 - timer_setup(&hdev->reset_timer, hclgevf_reset_timer, 0); 3018 2989 3019 2990 return 0; 3020 2991
+5 -4
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.c
··· 123 123 void hclgevf_get_regs(struct hnae3_handle *handle, u32 *version, 124 124 void *data) 125 125 { 126 - #define HCLGEVF_RING_REG_OFFSET 0x200 127 126 #define HCLGEVF_RING_INT_REG_OFFSET 0x4 128 127 129 128 struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle); 129 + struct hnae3_queue *tqp; 130 130 int i, j, reg_um; 131 131 u32 *reg = data; 132 132 ··· 147 147 reg_um = ARRAY_SIZE(ring_reg_addr_list); 148 148 for (j = 0; j < hdev->num_tqps; j++) { 149 149 reg += hclgevf_reg_get_tlv(HCLGEVF_REG_TAG_RING, reg_um, reg); 150 + tqp = &hdev->htqp[j].q; 150 151 for (i = 0; i < reg_um; i++) 151 - *reg++ = hclgevf_read_dev(&hdev->hw, 152 - ring_reg_addr_list[i] + 153 - HCLGEVF_RING_REG_OFFSET * j); 152 + *reg++ = readl_relaxed(tqp->io_base - 153 + HCLGEVF_TQP_REG_OFFSET + 154 + ring_reg_addr_list[i]); 154 155 } 155 156 156 157 reg_um = ARRAY_SIZE(tqp_intr_reg_addr_list);
+2
drivers/net/ethernet/huawei/hinic/hinic_main.c
··· 172 172 hinic_sq_dbgfs_uninit(nic_dev); 173 173 174 174 devm_kfree(&netdev->dev, nic_dev->txqs); 175 + nic_dev->txqs = NULL; 175 176 return err; 176 177 } 177 178 ··· 269 268 hinic_rq_dbgfs_uninit(nic_dev); 270 269 271 270 devm_kfree(&netdev->dev, nic_dev->rxqs); 271 + nic_dev->rxqs = NULL; 272 272 return err; 273 273 } 274 274
+2
drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
··· 2264 2264 struct ice_aqc_get_pkg_info pkg_info[]; 2265 2265 }; 2266 2266 2267 + #define ICE_AQC_GET_CGU_MAX_PHASE_ADJ GENMASK(30, 0) 2268 + 2267 2269 /* Get CGU abilities command response data structure (indirect 0x0C61) */ 2268 2270 struct ice_aqc_get_cgu_abilities { 2269 2271 u8 num_inputs;
+23 -12
drivers/net/ethernet/intel/ice/ice_dpll.c
··· 2065 2065 } 2066 2066 2067 2067 /** 2068 + * ice_dpll_phase_range_set - initialize phase adjust range helper 2069 + * @range: pointer to phase adjust range struct to be initialized 2070 + * @phase_adj: a value to be used as min(-)/max(+) boundary 2071 + */ 2072 + static void ice_dpll_phase_range_set(struct dpll_pin_phase_adjust_range *range, 2073 + u32 phase_adj) 2074 + { 2075 + range->min = -phase_adj; 2076 + range->max = phase_adj; 2077 + } 2078 + 2079 + /** 2068 2080 * ice_dpll_init_info_pins_generic - initializes generic pins info 2069 2081 * @pf: board private structure 2070 2082 * @input: if input pins initialized ··· 2117 2105 for (i = 0; i < pin_num; i++) { 2118 2106 pins[i].idx = i; 2119 2107 pins[i].prop.board_label = labels[i]; 2120 - pins[i].prop.phase_range.min = phase_adj_max; 2121 - pins[i].prop.phase_range.max = -phase_adj_max; 2108 + ice_dpll_phase_range_set(&pins[i].prop.phase_range, 2109 + phase_adj_max); 2122 2110 pins[i].prop.capabilities = cap; 2123 2111 pins[i].pf = pf; 2124 2112 ret = ice_dpll_pin_state_update(pf, &pins[i], pin_type, NULL); ··· 2164 2152 struct ice_hw *hw = &pf->hw; 2165 2153 struct ice_dpll_pin *pins; 2166 2154 unsigned long caps; 2155 + u32 phase_adj_max; 2167 2156 u8 freq_supp_num; 2168 2157 bool input; 2169 2158 ··· 2172 2159 case ICE_DPLL_PIN_TYPE_INPUT: 2173 2160 pins = pf->dplls.inputs; 2174 2161 num_pins = pf->dplls.num_inputs; 2162 + phase_adj_max = pf->dplls.input_phase_adj_max; 2175 2163 input = true; 2176 2164 break; 2177 2165 case ICE_DPLL_PIN_TYPE_OUTPUT: 2178 2166 pins = pf->dplls.outputs; 2179 2167 num_pins = pf->dplls.num_outputs; 2168 + phase_adj_max = pf->dplls.output_phase_adj_max; 2180 2169 input = false; 2181 2170 break; 2182 2171 default: ··· 2203 2188 return ret; 2204 2189 caps |= (DPLL_PIN_CAPABILITIES_PRIORITY_CAN_CHANGE | 2205 2190 DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE); 2206 - pins[i].prop.phase_range.min = 2207 - pf->dplls.input_phase_adj_max; 2208 - pins[i].prop.phase_range.max = 2209 - 
-pf->dplls.input_phase_adj_max; 2210 2191 } else { 2211 - pins[i].prop.phase_range.min = 2212 - pf->dplls.output_phase_adj_max; 2213 - pins[i].prop.phase_range.max = 2214 - -pf->dplls.output_phase_adj_max; 2215 2192 ret = ice_cgu_get_output_pin_state_caps(hw, i, &caps); 2216 2193 if (ret) 2217 2194 return ret; 2218 2195 } 2196 + ice_dpll_phase_range_set(&pins[i].prop.phase_range, 2197 + phase_adj_max); 2219 2198 pins[i].prop.capabilities = caps; 2220 2199 ret = ice_dpll_pin_state_update(pf, &pins[i], pin_type, NULL); 2221 2200 if (ret) ··· 2317 2308 dp->dpll_idx = abilities.pps_dpll_idx; 2318 2309 d->num_inputs = abilities.num_inputs; 2319 2310 d->num_outputs = abilities.num_outputs; 2320 - d->input_phase_adj_max = le32_to_cpu(abilities.max_in_phase_adj); 2321 - d->output_phase_adj_max = le32_to_cpu(abilities.max_out_phase_adj); 2311 + d->input_phase_adj_max = le32_to_cpu(abilities.max_in_phase_adj) & 2312 + ICE_AQC_GET_CGU_MAX_PHASE_ADJ; 2313 + d->output_phase_adj_max = le32_to_cpu(abilities.max_out_phase_adj) & 2314 + ICE_AQC_GET_CGU_MAX_PHASE_ADJ; 2322 2315 2323 2316 alloc_size = sizeof(*d->inputs) * d->num_inputs; 2324 2317 d->inputs = kzalloc(alloc_size, GFP_KERNEL);
+2 -2
drivers/net/ethernet/intel/ice/ice_ptp_consts.h
··· 761 761 /* rx_desk_rsgb_par */ 762 762 644531250, /* 644.53125 MHz Reed Solomon gearbox */ 763 763 /* tx_desk_rsgb_pcs */ 764 - 644531250, /* 644.53125 MHz Reed Solomon gearbox */ 764 + 390625000, /* 390.625 MHz Reed Solomon gearbox */ 765 765 /* rx_desk_rsgb_pcs */ 766 - 644531250, /* 644.53125 MHz Reed Solomon gearbox */ 766 + 390625000, /* 390.625 MHz Reed Solomon gearbox */ 767 767 /* tx_fixed_delay */ 768 768 1620, 769 769 /* pmd_adj_divisor */
+3
drivers/net/ethernet/intel/idpf/idpf_dev.c
··· 101 101 intr->dyn_ctl_itridx_s = PF_GLINT_DYN_CTL_ITR_INDX_S; 102 102 intr->dyn_ctl_intrvl_s = PF_GLINT_DYN_CTL_INTERVAL_S; 103 103 intr->dyn_ctl_wb_on_itr_m = PF_GLINT_DYN_CTL_WB_ON_ITR_M; 104 + intr->dyn_ctl_swint_trig_m = PF_GLINT_DYN_CTL_SWINT_TRIG_M; 105 + intr->dyn_ctl_sw_itridx_ena_m = 106 + PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_M; 104 107 105 108 spacing = IDPF_ITR_IDX_SPACING(reg_vals[vec_id].itrn_index_spacing, 106 109 IDPF_PF_ITR_IDX_SPACING);
+19 -10
drivers/net/ethernet/intel/idpf/idpf_txrx.c
··· 3604 3604 /** 3605 3605 * idpf_vport_intr_buildreg_itr - Enable default interrupt generation settings 3606 3606 * @q_vector: pointer to q_vector 3607 - * @type: itr index 3608 - * @itr: itr value 3609 3607 */ 3610 - static u32 idpf_vport_intr_buildreg_itr(struct idpf_q_vector *q_vector, 3611 - const int type, u16 itr) 3608 + static u32 idpf_vport_intr_buildreg_itr(struct idpf_q_vector *q_vector) 3612 3609 { 3613 - u32 itr_val; 3610 + u32 itr_val = q_vector->intr_reg.dyn_ctl_intena_m; 3611 + int type = IDPF_NO_ITR_UPDATE_IDX; 3612 + u16 itr = 0; 3613 + 3614 + if (q_vector->wb_on_itr) { 3615 + /* 3616 + * Trigger a software interrupt when exiting wb_on_itr, to make 3617 + * sure we catch any pending write backs that might have been 3618 + * missed due to interrupt state transition. 3619 + */ 3620 + itr_val |= q_vector->intr_reg.dyn_ctl_swint_trig_m | 3621 + q_vector->intr_reg.dyn_ctl_sw_itridx_ena_m; 3622 + type = IDPF_SW_ITR_UPDATE_IDX; 3623 + itr = IDPF_ITR_20K; 3624 + } 3614 3625 3615 3626 itr &= IDPF_ITR_MASK; 3616 3627 /* Don't clear PBA because that can cause lost interrupts that 3617 3628 * came in while we were cleaning/polling 3618 3629 */ 3619 - itr_val = q_vector->intr_reg.dyn_ctl_intena_m | 3620 - (type << q_vector->intr_reg.dyn_ctl_itridx_s) | 3621 - (itr << (q_vector->intr_reg.dyn_ctl_intrvl_s - 1)); 3630 + itr_val |= (type << q_vector->intr_reg.dyn_ctl_itridx_s) | 3631 + (itr << (q_vector->intr_reg.dyn_ctl_intrvl_s - 1)); 3622 3632 3623 3633 return itr_val; 3624 3634 } ··· 3726 3716 /* net_dim() updates ITR out-of-band using a work item */ 3727 3717 idpf_net_dim(q_vector); 3728 3718 3719 + intval = idpf_vport_intr_buildreg_itr(q_vector); 3729 3720 q_vector->wb_on_itr = false; 3730 - intval = idpf_vport_intr_buildreg_itr(q_vector, 3731 - IDPF_NO_ITR_UPDATE_IDX, 0); 3732 3721 3733 3722 writel(intval, q_vector->intr_reg.dyn_ctl); 3734 3723 }
+7 -1
drivers/net/ethernet/intel/idpf/idpf_txrx.h
··· 354 354 * @dyn_ctl_itridx_m: Mask for ITR index 355 355 * @dyn_ctl_intrvl_s: Register bit offset for ITR interval 356 356 * @dyn_ctl_wb_on_itr_m: Mask for WB on ITR feature 357 + * @dyn_ctl_sw_itridx_ena_m: Mask for SW ITR index 358 + * @dyn_ctl_swint_trig_m: Mask for dyn_ctl SW triggered interrupt enable 357 359 * @rx_itr: RX ITR register 358 360 * @tx_itr: TX ITR register 359 361 * @icr_ena: Interrupt cause register offset ··· 369 367 u32 dyn_ctl_itridx_m; 370 368 u32 dyn_ctl_intrvl_s; 371 369 u32 dyn_ctl_wb_on_itr_m; 370 + u32 dyn_ctl_sw_itridx_ena_m; 371 + u32 dyn_ctl_swint_trig_m; 372 372 void __iomem *rx_itr; 373 373 void __iomem *tx_itr; 374 374 void __iomem *icr_ena; ··· 441 437 cpumask_var_t affinity_mask; 442 438 __cacheline_group_end_aligned(cold); 443 439 }; 444 - libeth_cacheline_set_assert(struct idpf_q_vector, 112, 440 + libeth_cacheline_set_assert(struct idpf_q_vector, 120, 445 441 24 + sizeof(struct napi_struct) + 446 442 2 * sizeof(struct dim), 447 443 8 + sizeof(cpumask_var_t)); ··· 475 471 #define IDPF_ITR_IS_DYNAMIC(itr_mode) (itr_mode) 476 472 #define IDPF_ITR_TX_DEF IDPF_ITR_20K 477 473 #define IDPF_ITR_RX_DEF IDPF_ITR_20K 474 + /* Index used for 'SW ITR' update in DYN_CTL register */ 475 + #define IDPF_SW_ITR_UPDATE_IDX 2 478 476 /* Index used for 'No ITR' update in DYN_CTL register */ 479 477 #define IDPF_NO_ITR_UPDATE_IDX 3 480 478 #define IDPF_ITR_IDX_SPACING(spacing, dflt) (spacing ? spacing : dflt)
+3
drivers/net/ethernet/intel/idpf/idpf_vf_dev.c
··· 101 101 intr->dyn_ctl_itridx_s = VF_INT_DYN_CTLN_ITR_INDX_S; 102 102 intr->dyn_ctl_intrvl_s = VF_INT_DYN_CTLN_INTERVAL_S; 103 103 intr->dyn_ctl_wb_on_itr_m = VF_INT_DYN_CTLN_WB_ON_ITR_M; 104 + intr->dyn_ctl_swint_trig_m = VF_INT_DYN_CTLN_SWINT_TRIG_M; 105 + intr->dyn_ctl_sw_itridx_ena_m = 106 + VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_M; 104 107 105 108 spacing = IDPF_ITR_IDX_SPACING(reg_vals[vec_id].itrn_index_spacing, 106 109 IDPF_VF_ITR_IDX_SPACING);
+6
drivers/net/ethernet/intel/igc/igc_base.c
··· 68 68 u32 eecd = rd32(IGC_EECD); 69 69 u16 size; 70 70 71 + /* failed to read reg and got all F's */ 72 + if (!(~eecd)) 73 + return -ENXIO; 74 + 71 75 size = FIELD_GET(IGC_EECD_SIZE_EX_MASK, eecd); 72 76 73 77 /* Added to a constant, "size" becomes the left-shift value ··· 225 221 226 222 /* NVM initialization */ 227 223 ret_val = igc_init_nvm_params_base(hw); 224 + if (ret_val) 225 + goto out; 228 226 switch (hw->mac.type) { 229 227 case igc_i225: 230 228 ret_val = igc_init_nvm_params_i225(hw);
+12 -2
drivers/net/ethernet/marvell/mv643xx_eth.c
··· 2704 2704 2705 2705 static void mv643xx_eth_shared_of_remove(void) 2706 2706 { 2707 + struct mv643xx_eth_platform_data *pd; 2707 2708 int n; 2708 2709 2709 2710 for (n = 0; n < 3; n++) { 2711 + if (!port_platdev[n]) 2712 + continue; 2713 + pd = dev_get_platdata(&port_platdev[n]->dev); 2714 + if (pd) 2715 + of_node_put(pd->phy_node); 2710 2716 platform_device_del(port_platdev[n]); 2711 2717 port_platdev[n] = NULL; 2712 2718 } ··· 2775 2769 } 2776 2770 2777 2771 ppdev = platform_device_alloc(MV643XX_ETH_NAME, dev_num); 2778 - if (!ppdev) 2779 - return -ENOMEM; 2772 + if (!ppdev) { 2773 + ret = -ENOMEM; 2774 + goto put_err; 2775 + } 2780 2776 ppdev->dev.coherent_dma_mask = DMA_BIT_MASK(32); 2781 2777 ppdev->dev.of_node = pnp; 2782 2778 ··· 2800 2792 2801 2793 port_err: 2802 2794 platform_device_put(ppdev); 2795 + put_err: 2796 + of_node_put(ppd.phy_node); 2803 2797 return ret; 2804 2798 } 2805 2799
+4 -1
drivers/net/ethernet/marvell/octeontx2/nic/rep.c
··· 680 680 ndev->features |= ndev->hw_features; 681 681 eth_hw_addr_random(ndev); 682 682 err = rvu_rep_devlink_port_register(rep); 683 - if (err) 683 + if (err) { 684 + free_netdev(ndev); 684 685 goto exit; 686 + } 685 687 686 688 SET_NETDEV_DEVLINK_PORT(ndev, &rep->dl_port); 687 689 err = register_netdev(ndev); 688 690 if (err) { 689 691 NL_SET_ERR_MSG_MOD(extack, 690 692 "PFVF representor registration failed"); 693 + rvu_rep_devlink_port_unregister(rep); 691 694 free_netdev(ndev); 692 695 goto exit; 693 696 }
+1
drivers/net/ethernet/marvell/sky2.c
··· 130 130 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x436C) }, /* 88E8072 */ 131 131 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x436D) }, /* 88E8055 */ 132 132 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4370) }, /* 88E8075 */ 133 + { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4373) }, /* 88E8075 */ 133 134 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4380) }, /* 88E8057 */ 134 135 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4381) }, /* 88E8059 */ 135 136 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4382) }, /* 88E8079 */
+1
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
··· 1013 1013 complete(&ent->done); 1014 1014 } 1015 1015 up(&cmd->vars.sem); 1016 + complete(&ent->slotted); 1016 1017 return; 1017 1018 } 1018 1019 } else {
+4
drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
··· 339 339 { 340 340 struct mlx5e_priv *priv = macsec_netdev_priv(ctx->netdev); 341 341 struct mlx5_macsec_fs *macsec_fs = priv->mdev->macsec_fs; 342 + const struct macsec_tx_sc *tx_sc = &ctx->secy->tx_sc; 342 343 struct mlx5_macsec_rule_attrs rule_attrs; 343 344 union mlx5_macsec_rule *macsec_rule; 345 + 346 + if (is_tx && tx_sc->encoding_sa != sa->assoc_num) 347 + return 0; 344 348 345 349 rule_attrs.macsec_obj_id = sa->macsec_obj_id; 346 350 rule_attrs.sci = sa->sci;
+17 -2
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 6542 6542 6543 6543 mlx5_core_uplink_netdev_set(mdev, NULL); 6544 6544 mlx5e_dcbnl_delete_app(priv); 6545 - unregister_netdev(priv->netdev); 6546 - _mlx5e_suspend(adev, false); 6545 + /* When unload driver, the netdev is in registered state 6546 + * if it's from legacy mode. If from switchdev mode, it 6547 + * is already unregistered before changing to NIC profile. 6548 + */ 6549 + if (priv->netdev->reg_state == NETREG_REGISTERED) { 6550 + unregister_netdev(priv->netdev); 6551 + _mlx5e_suspend(adev, false); 6552 + } else { 6553 + struct mlx5_core_dev *pos; 6554 + int i; 6555 + 6556 + if (test_bit(MLX5E_STATE_DESTROYING, &priv->state)) 6557 + mlx5_sd_for_each_dev(i, mdev, pos) 6558 + mlx5e_destroy_mdev_resources(pos); 6559 + else 6560 + _mlx5e_suspend(adev, true); 6561 + } 6547 6562 /* Avoid cleanup if profile rollback failed. */ 6548 6563 if (priv->profile) 6549 6564 priv->profile->cleanup(priv);
+15
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 1509 1509 1510 1510 priv = netdev_priv(netdev); 1511 1511 1512 + /* This bit is set when using devlink to change eswitch mode from 1513 + * switchdev to legacy. As need to keep uplink netdev ifindex, we 1514 + * detach uplink representor profile and attach NIC profile only. 1515 + * The netdev will be unregistered later when unload NIC auxiliary 1516 + * driver for this case. 1517 + * We explicitly block devlink eswitch mode change if any IPSec rules 1518 + * offloaded, but can't block other cases, such as driver unload 1519 + * and devlink reload. We have to unregister netdev before profile 1520 + * change for those cases. This is to avoid resource leak because 1521 + * the offloaded rules don't have the chance to be unoffloaded before 1522 + * cleanup which is triggered by detach uplink representor profile. 1523 + */ 1524 + if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_SWITCH_LEGACY)) 1525 + unregister_netdev(netdev); 1526 + 1512 1527 mlx5e_netdev_attach_nic_profile(priv); 1513 1528 } 1514 1529
+3 -3
drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c
··· 150 150 unsigned long i; 151 151 int err; 152 152 153 - xa_for_each(&esw->offloads.vport_reps, i, rep) { 154 - rpriv = rep->rep_data[REP_ETH].priv; 155 - if (!rpriv || !rpriv->netdev) 153 + mlx5_esw_for_each_rep(esw, i, rep) { 154 + if (atomic_read(&rep->rep_data[REP_ETH].state) != REP_LOADED) 156 155 continue; 157 156 157 + rpriv = rep->rep_data[REP_ETH].priv; 158 158 rhashtable_walk_enter(&rpriv->tc_ht, &iter); 159 159 rhashtable_walk_start(&iter); 160 160 while ((flow = rhashtable_walk_next(&iter)) != NULL) {
+3
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
··· 714 714 MLX5_CAP_GEN_2((esw->dev), ec_vf_vport_base) +\ 715 715 (last) - 1) 716 716 717 + #define mlx5_esw_for_each_rep(esw, i, rep) \ 718 + xa_for_each(&((esw)->offloads.vport_reps), i, rep) 719 + 717 720 struct mlx5_eswitch *__must_check 718 721 mlx5_devlink_eswitch_get(struct devlink *devlink); 719 722
+2 -3
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 53 53 #include "lag/lag.h" 54 54 #include "en/tc/post_meter.h" 55 55 56 - #define mlx5_esw_for_each_rep(esw, i, rep) \ 57 - xa_for_each(&((esw)->offloads.vport_reps), i, rep) 58 - 59 56 /* There are two match-all miss flows, one for unicast dst mac and 60 57 * one for multicast. 61 58 */ ··· 3777 3780 esw->eswitch_operation_in_progress = true; 3778 3781 up_write(&esw->mode_lock); 3779 3782 3783 + if (mode == DEVLINK_ESWITCH_MODE_LEGACY) 3784 + esw->dev->priv.flags |= MLX5_PRIV_FLAGS_SWITCH_LEGACY; 3780 3785 mlx5_eswitch_disable_locked(esw); 3781 3786 if (mode == DEVLINK_ESWITCH_MODE_SWITCHDEV) { 3782 3787 if (mlx5_devlink_trap_get_num_active(esw->dev)) {
+1 -3
drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_send.c
··· 1067 1067 int inlen, err, eqn; 1068 1068 void *cqc, *in; 1069 1069 __be64 *pas; 1070 - int vector; 1071 1070 u32 i; 1072 1071 1073 1072 cq = kzalloc(sizeof(*cq), GFP_KERNEL); ··· 1095 1096 if (!in) 1096 1097 goto err_cqwq; 1097 1098 1098 - vector = raw_smp_processor_id() % mlx5_comp_vectors_max(mdev); 1099 - err = mlx5_comp_eqn_get(mdev, vector, &eqn); 1099 + err = mlx5_comp_eqn_get(mdev, 0, &eqn); 1100 1100 if (err) { 1101 1101 kvfree(in); 1102 1102 goto err_cqwq;
+1 -2
drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
··· 423 423 424 424 parms = mlxsw_sp_ipip_netdev_parms4(to_dev); 425 425 ip_tunnel_init_flow(&fl4, parms.iph.protocol, *daddrp, *saddrp, 426 - 0, 0, dev_net(to_dev), parms.link, tun->fwmark, 0, 427 - 0); 426 + 0, 0, tun->net, parms.link, tun->fwmark, 0, 0); 428 427 429 428 rt = ip_route_output_key(tun->net, &fl4); 430 429 if (IS_ERR(rt))
-1
drivers/net/ethernet/meta/fbnic/Makefile
··· 13 13 fbnic_ethtool.o \ 14 14 fbnic_fw.o \ 15 15 fbnic_hw_stats.o \ 16 - fbnic_hwmon.o \ 17 16 fbnic_irq.o \ 18 17 fbnic_mac.o \ 19 18 fbnic_netdev.o \
-5
drivers/net/ethernet/meta/fbnic/fbnic.h
··· 20 20 struct device *dev; 21 21 struct net_device *netdev; 22 22 struct dentry *dbg_fbd; 23 - struct device *hwmon; 24 23 25 24 u32 __iomem *uc_addr0; 26 25 u32 __iomem *uc_addr4; ··· 32 33 33 34 struct fbnic_fw_mbx mbx[FBNIC_IPC_MBX_INDICES]; 34 35 struct fbnic_fw_cap fw_cap; 35 - struct fbnic_fw_completion *cmpl_data; 36 36 /* Lock protecting Tx Mailbox queue to prevent possible races */ 37 37 spinlock_t fw_tx_lock; 38 38 ··· 139 141 140 142 int fbnic_fw_enable_mbx(struct fbnic_dev *fbd); 141 143 void fbnic_fw_disable_mbx(struct fbnic_dev *fbd); 142 - 143 - void fbnic_hwmon_register(struct fbnic_dev *fbd); 144 - void fbnic_hwmon_unregister(struct fbnic_dev *fbd); 145 144 146 145 int fbnic_pcs_irq_enable(struct fbnic_dev *fbd); 147 146 void fbnic_pcs_irq_disable(struct fbnic_dev *fbd);
+1 -1
drivers/net/ethernet/meta/fbnic/fbnic_csr.c
··· 64 64 u32 i, j; 65 65 66 66 *(data++) = start; 67 - *(data++) = end - 1; 67 + *(data++) = end; 68 68 69 69 /* FBNIC_RPC_TCAM_ACT */ 70 70 for (i = 0; i < FBNIC_RPC_TCAM_ACT_NUM_ENTRIES; i++) {
-7
drivers/net/ethernet/meta/fbnic/fbnic_fw.h
··· 44 44 u8 link_fec; 45 45 }; 46 46 47 - struct fbnic_fw_completion { 48 - struct { 49 - s32 millivolts; 50 - s32 millidegrees; 51 - } tsene; 52 - }; 53 - 54 47 void fbnic_mbx_init(struct fbnic_dev *fbd); 55 48 void fbnic_mbx_clean(struct fbnic_dev *fbd); 56 49 void fbnic_mbx_poll(struct fbnic_dev *fbd);
-81
drivers/net/ethernet/meta/fbnic/fbnic_hwmon.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - /* Copyright (c) Meta Platforms, Inc. and affiliates. */ 3 - 4 - #include <linux/hwmon.h> 5 - 6 - #include "fbnic.h" 7 - #include "fbnic_mac.h" 8 - 9 - static int fbnic_hwmon_sensor_id(enum hwmon_sensor_types type) 10 - { 11 - if (type == hwmon_temp) 12 - return FBNIC_SENSOR_TEMP; 13 - if (type == hwmon_in) 14 - return FBNIC_SENSOR_VOLTAGE; 15 - 16 - return -EOPNOTSUPP; 17 - } 18 - 19 - static umode_t fbnic_hwmon_is_visible(const void *drvdata, 20 - enum hwmon_sensor_types type, 21 - u32 attr, int channel) 22 - { 23 - if (type == hwmon_temp && attr == hwmon_temp_input) 24 - return 0444; 25 - if (type == hwmon_in && attr == hwmon_in_input) 26 - return 0444; 27 - 28 - return 0; 29 - } 30 - 31 - static int fbnic_hwmon_read(struct device *dev, enum hwmon_sensor_types type, 32 - u32 attr, int channel, long *val) 33 - { 34 - struct fbnic_dev *fbd = dev_get_drvdata(dev); 35 - const struct fbnic_mac *mac = fbd->mac; 36 - int id; 37 - 38 - id = fbnic_hwmon_sensor_id(type); 39 - return id < 0 ? 
id : mac->get_sensor(fbd, id, val); 40 - } 41 - 42 - static const struct hwmon_ops fbnic_hwmon_ops = { 43 - .is_visible = fbnic_hwmon_is_visible, 44 - .read = fbnic_hwmon_read, 45 - }; 46 - 47 - static const struct hwmon_channel_info *fbnic_hwmon_info[] = { 48 - HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT), 49 - HWMON_CHANNEL_INFO(in, HWMON_I_INPUT), 50 - NULL 51 - }; 52 - 53 - static const struct hwmon_chip_info fbnic_chip_info = { 54 - .ops = &fbnic_hwmon_ops, 55 - .info = fbnic_hwmon_info, 56 - }; 57 - 58 - void fbnic_hwmon_register(struct fbnic_dev *fbd) 59 - { 60 - if (!IS_REACHABLE(CONFIG_HWMON)) 61 - return; 62 - 63 - fbd->hwmon = hwmon_device_register_with_info(fbd->dev, "fbnic", 64 - fbd, &fbnic_chip_info, 65 - NULL); 66 - if (IS_ERR(fbd->hwmon)) { 67 - dev_notice(fbd->dev, 68 - "Failed to register hwmon device %pe\n", 69 - fbd->hwmon); 70 - fbd->hwmon = NULL; 71 - } 72 - } 73 - 74 - void fbnic_hwmon_unregister(struct fbnic_dev *fbd) 75 - { 76 - if (!IS_REACHABLE(CONFIG_HWMON) || !fbd->hwmon) 77 - return; 78 - 79 - hwmon_device_unregister(fbd->hwmon); 80 - fbd->hwmon = NULL; 81 - }
-22
drivers/net/ethernet/meta/fbnic/fbnic_mac.c
··· 686 686 MAC_STAT_TX_BROADCAST); 687 687 } 688 688 689 - static int fbnic_mac_get_sensor_asic(struct fbnic_dev *fbd, int id, long *val) 690 - { 691 - struct fbnic_fw_completion fw_cmpl; 692 - s32 *sensor; 693 - 694 - switch (id) { 695 - case FBNIC_SENSOR_TEMP: 696 - sensor = &fw_cmpl.tsene.millidegrees; 697 - break; 698 - case FBNIC_SENSOR_VOLTAGE: 699 - sensor = &fw_cmpl.tsene.millivolts; 700 - break; 701 - default: 702 - return -EINVAL; 703 - } 704 - 705 - *val = *sensor; 706 - 707 - return 0; 708 - } 709 - 710 689 static const struct fbnic_mac fbnic_mac_asic = { 711 690 .init_regs = fbnic_mac_init_regs, 712 691 .pcs_enable = fbnic_pcs_enable_asic, ··· 695 716 .get_eth_mac_stats = fbnic_mac_get_eth_mac_stats, 696 717 .link_down = fbnic_mac_link_down_asic, 697 718 .link_up = fbnic_mac_link_up_asic, 698 - .get_sensor = fbnic_mac_get_sensor_asic, 699 719 }; 700 720 701 721 /**
-7
drivers/net/ethernet/meta/fbnic/fbnic_mac.h
··· 47 47 #define FBNIC_LINK_MODE_PAM4 (FBNIC_LINK_50R1) 48 48 #define FBNIC_LINK_MODE_MASK (FBNIC_LINK_AUTO - 1) 49 49 50 - enum fbnic_sensor_id { 51 - FBNIC_SENSOR_TEMP, /* Temp in millidegrees Centigrade */ 52 - FBNIC_SENSOR_VOLTAGE, /* Voltage in millivolts */ 53 - }; 54 - 55 50 /* This structure defines the interface hooks for the MAC. The MAC hooks 56 51 * will be configured as a const struct provided with a set of function 57 52 * pointers. ··· 83 88 84 89 void (*link_down)(struct fbnic_dev *fbd); 85 90 void (*link_up)(struct fbnic_dev *fbd, bool tx_pause, bool rx_pause); 86 - 87 - int (*get_sensor)(struct fbnic_dev *fbd, int id, long *val); 88 91 }; 89 92 90 93 int fbnic_mac_init(struct fbnic_dev *fbd);
-3
drivers/net/ethernet/meta/fbnic/fbnic_pci.c
··· 296 296 /* Capture snapshot of hardware stats so netdev can calculate delta */ 297 297 fbnic_reset_hw_stats(fbd); 298 298 299 - fbnic_hwmon_register(fbd); 300 - 301 299 if (!fbd->dsn) { 302 300 dev_warn(&pdev->dev, "Reading serial number failed\n"); 303 301 goto init_failure_mode; ··· 358 360 fbnic_netdev_free(fbd); 359 361 } 360 362 361 - fbnic_hwmon_unregister(fbd); 362 363 fbnic_dbg_fbd_exit(fbd); 363 364 fbnic_devlink_unregister(fbd); 364 365 fbnic_fw_disable_mbx(fbd);
+1 -1
drivers/net/ethernet/mscc/ocelot.c
··· 1432 1432 1433 1433 memset(ifh, 0, OCELOT_TAG_LEN); 1434 1434 ocelot_ifh_set_bypass(ifh, 1); 1435 - ocelot_ifh_set_src(ifh, BIT_ULL(ocelot->num_phys_ports)); 1435 + ocelot_ifh_set_src(ifh, ocelot->num_phys_ports); 1436 1436 ocelot_ifh_set_dest(ifh, BIT_ULL(port)); 1437 1437 ocelot_ifh_set_qos_class(ifh, qos_class); 1438 1438 ocelot_ifh_set_tag_type(ifh, tag_type);
+9 -2
drivers/net/ethernet/oa_tc6.c
··· 113 113 struct mii_bus *mdiobus; 114 114 struct spi_device *spi; 115 115 struct mutex spi_ctrl_lock; /* Protects spi control transfer */ 116 + spinlock_t tx_skb_lock; /* Protects tx skb handling */ 116 117 void *spi_ctrl_tx_buf; 117 118 void *spi_ctrl_rx_buf; 118 119 void *spi_data_tx_buf; ··· 1005 1004 for (used_tx_credits = 0; used_tx_credits < tc6->tx_credits; 1006 1005 used_tx_credits++) { 1007 1006 if (!tc6->ongoing_tx_skb) { 1007 + spin_lock_bh(&tc6->tx_skb_lock); 1008 1008 tc6->ongoing_tx_skb = tc6->waiting_tx_skb; 1009 1009 tc6->waiting_tx_skb = NULL; 1010 + spin_unlock_bh(&tc6->tx_skb_lock); 1010 1011 } 1011 1012 if (!tc6->ongoing_tx_skb) 1012 1013 break; ··· 1114 1111 /* This kthread will be waken up if there is a tx skb or mac-phy 1115 1112 * interrupt to perform spi transfer with tx chunks. 1116 1113 */ 1117 - wait_event_interruptible(tc6->spi_wq, tc6->waiting_tx_skb || 1118 - tc6->int_flag || 1114 + wait_event_interruptible(tc6->spi_wq, tc6->int_flag || 1115 + (tc6->waiting_tx_skb && 1116 + tc6->tx_credits) || 1119 1117 kthread_should_stop()); 1120 1118 1121 1119 if (kthread_should_stop()) ··· 1213 1209 return NETDEV_TX_OK; 1214 1210 } 1215 1211 1212 + spin_lock_bh(&tc6->tx_skb_lock); 1216 1213 tc6->waiting_tx_skb = skb; 1214 + spin_unlock_bh(&tc6->tx_skb_lock); 1217 1215 1218 1216 /* Wake spi kthread to perform spi transfer */ 1219 1217 wake_up_interruptible(&tc6->spi_wq); ··· 1245 1239 tc6->netdev = netdev; 1246 1240 SET_NETDEV_DEV(netdev, &spi->dev); 1247 1241 mutex_init(&tc6->spi_ctrl_lock); 1242 + spin_lock_init(&tc6->tx_skb_lock); 1248 1243 1249 1244 /* Set the SPI controller to pump at realtime priority */ 1250 1245 tc6->spi->rt = true;
+4 -1
drivers/net/ethernet/pensando/ionic/ionic_dev.c
··· 277 277 idev->phy_cmb_pages = 0; 278 278 idev->cmb_npages = 0; 279 279 280 - destroy_workqueue(ionic->wq); 280 + if (ionic->wq) { 281 + destroy_workqueue(ionic->wq); 282 + ionic->wq = NULL; 283 + } 281 284 mutex_destroy(&idev->cmb_inuse_lock); 282 285 } 283 286
+2 -2
drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
··· 961 961 len = min_t(u32, sizeof(xcvr->sprom), ee->len); 962 962 963 963 do { 964 - memcpy(data, xcvr->sprom, len); 965 - memcpy(tbuf, xcvr->sprom, len); 964 + memcpy(data, &xcvr->sprom[ee->offset], len); 965 + memcpy(tbuf, &xcvr->sprom[ee->offset], len); 966 966 967 967 /* Let's make sure we got a consistent copy */ 968 968 if (!memcmp(data, tbuf, len))
+2 -2
drivers/net/ethernet/pensando/ionic/ionic_lif.c
··· 3869 3869 /* only register LIF0 for now */ 3870 3870 err = register_netdev(lif->netdev); 3871 3871 if (err) { 3872 - dev_err(lif->ionic->dev, "Cannot register net device, aborting\n"); 3873 - ionic_lif_unregister_phc(lif); 3872 + dev_err(lif->ionic->dev, "Cannot register net device: %d, aborting\n", err); 3873 + ionic_lif_unregister(lif); 3874 3874 return err; 3875 3875 } 3876 3876
+1
drivers/net/ethernet/qlogic/qed/qed_mcp.c
··· 3358 3358 p_ptt, &nvm_info.num_images); 3359 3359 if (rc == -EOPNOTSUPP) { 3360 3360 DP_INFO(p_hwfn, "DRV_MSG_CODE_BIST_TEST is not supported\n"); 3361 + nvm_info.num_images = 0; 3361 3362 goto out; 3362 3363 } else if (rc || !nvm_info.num_images) { 3363 3364 DP_ERR(p_hwfn, "Failed getting number of images\n");
+1 -1
drivers/net/ethernet/realtek/rtase/rtase_main.c
··· 1827 1827 1828 1828 for (i = 0; i < tp->int_nums; i++) { 1829 1829 irq = pci_irq_vector(pdev, i); 1830 - if (!irq) { 1830 + if (irq < 0) { 1831 1831 pci_disable_msix(pdev); 1832 1832 return irq; 1833 1833 }
+36 -32
drivers/net/ethernet/renesas/rswitch.c
··· 547 547 desc = &gq->ts_ring[gq->ring_size]; 548 548 desc->desc.die_dt = DT_LINKFIX; 549 549 rswitch_desc_set_dptr(&desc->desc, gq->ring_dma); 550 - INIT_LIST_HEAD(&priv->gwca.ts_info_list); 551 550 552 551 return 0; 553 552 } ··· 1002 1003 static void rswitch_ts(struct rswitch_private *priv) 1003 1004 { 1004 1005 struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue; 1005 - struct rswitch_gwca_ts_info *ts_info, *ts_info2; 1006 1006 struct skb_shared_hwtstamps shhwtstamps; 1007 1007 struct rswitch_ts_desc *desc; 1008 + struct rswitch_device *rdev; 1009 + struct sk_buff *ts_skb; 1008 1010 struct timespec64 ts; 1009 1011 unsigned int num; 1010 1012 u32 tag, port; ··· 1015 1015 dma_rmb(); 1016 1016 1017 1017 port = TS_DESC_DPN(__le32_to_cpu(desc->desc.dptrl)); 1018 + if (unlikely(port >= RSWITCH_NUM_PORTS)) 1019 + goto next; 1020 + rdev = priv->rdev[port]; 1021 + 1018 1022 tag = TS_DESC_TSUN(__le32_to_cpu(desc->desc.dptrl)); 1023 + if (unlikely(tag >= TS_TAGS_PER_PORT)) 1024 + goto next; 1025 + ts_skb = xchg(&rdev->ts_skb[tag], NULL); 1026 + smp_mb(); /* order rdev->ts_skb[] read before bitmap update */ 1027 + clear_bit(tag, rdev->ts_skb_used); 1019 1028 1020 - list_for_each_entry_safe(ts_info, ts_info2, &priv->gwca.ts_info_list, list) { 1021 - if (!(ts_info->port == port && ts_info->tag == tag)) 1022 - continue; 1029 + if (unlikely(!ts_skb)) 1030 + goto next; 1023 1031 1024 - memset(&shhwtstamps, 0, sizeof(shhwtstamps)); 1025 - ts.tv_sec = __le32_to_cpu(desc->ts_sec); 1026 - ts.tv_nsec = __le32_to_cpu(desc->ts_nsec & cpu_to_le32(0x3fffffff)); 1027 - shhwtstamps.hwtstamp = timespec64_to_ktime(ts); 1028 - skb_tstamp_tx(ts_info->skb, &shhwtstamps); 1029 - dev_consume_skb_irq(ts_info->skb); 1030 - list_del(&ts_info->list); 1031 - kfree(ts_info); 1032 - break; 1033 - } 1032 + memset(&shhwtstamps, 0, sizeof(shhwtstamps)); 1033 + ts.tv_sec = __le32_to_cpu(desc->ts_sec); 1034 + ts.tv_nsec = __le32_to_cpu(desc->ts_nsec & cpu_to_le32(0x3fffffff)); 1035 + shhwtstamps.hwtstamp = timespec64_to_ktime(ts); 1036 + skb_tstamp_tx(ts_skb, &shhwtstamps); 1037 + dev_consume_skb_irq(ts_skb); 1034 1038 1039 + next: 1035 1040 gq->cur = rswitch_next_queue_index(gq, true, 1); 1036 1041 desc = &gq->ts_ring[gq->cur]; 1037 1042 } ··· 1581 1576 static int rswitch_stop(struct net_device *ndev) 1582 1577 { 1583 1578 struct rswitch_device *rdev = netdev_priv(ndev); 1584 - struct rswitch_gwca_ts_info *ts_info, *ts_info2; 1579 + struct sk_buff *ts_skb; 1585 1580 unsigned long flags; 1581 + unsigned int tag; 1586 1582 1587 1583 netif_tx_stop_all_queues(ndev); 1588 1584 ··· 1600 1594 if (bitmap_empty(rdev->priv->opened_ports, RSWITCH_NUM_PORTS)) 1601 1595 iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDID); 1602 1596 1603 - list_for_each_entry_safe(ts_info, ts_info2, &rdev->priv->gwca.ts_info_list, list) { 1604 - if (ts_info->port != rdev->port) 1605 - continue; 1606 - dev_kfree_skb_irq(ts_info->skb); 1607 - list_del(&ts_info->list); 1608 - kfree(ts_info); 1597 + for (tag = find_first_bit(rdev->ts_skb_used, TS_TAGS_PER_PORT); 1598 + tag < TS_TAGS_PER_PORT; 1599 + tag = find_next_bit(rdev->ts_skb_used, TS_TAGS_PER_PORT, tag + 1)) { 1600 + ts_skb = xchg(&rdev->ts_skb[tag], NULL); 1601 + clear_bit(tag, rdev->ts_skb_used); 1602 + if (ts_skb) 1603 + dev_kfree_skb(ts_skb); 1609 1604 } 1610 1605 1611 1606 return 0; ··· 1619 1612 desc->info1 = cpu_to_le64(INFO1_DV(BIT(rdev->etha->index)) | 1620 1613 INFO1_IPV(GWCA_IPV_NUM) | INFO1_FMT); 1621 1614 if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) { 1622 - struct rswitch_gwca_ts_info *ts_info; 1615 + unsigned int tag; 1623 1616 1624 - ts_info = kzalloc(sizeof(*ts_info), GFP_ATOMIC); 1625 - if (!ts_info) 1617 + tag = find_first_zero_bit(rdev->ts_skb_used, TS_TAGS_PER_PORT); 1618 + if (tag == TS_TAGS_PER_PORT) 1626 1619 return false; 1620 + smp_mb(); /* order bitmap read before rdev->ts_skb[] write */ 1621 + rdev->ts_skb[tag] = skb_get(skb); 1622 + set_bit(tag, rdev->ts_skb_used); 1627 1623 1628 1624 skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; 1629 - rdev->ts_tag++; 1630 - desc->info1 |= cpu_to_le64(INFO1_TSUN(rdev->ts_tag) | INFO1_TXC); 1631 - 1632 - ts_info->skb = skb_get(skb); 1633 - ts_info->port = rdev->port; 1634 - ts_info->tag = rdev->ts_tag; 1635 - list_add_tail(&ts_info->list, &rdev->priv->gwca.ts_info_list); 1625 + desc->info1 |= cpu_to_le64(INFO1_TSUN(tag) | INFO1_TXC); 1636 1626 1637 1627 skb_tx_timestamp(skb); 1638 1628 }
+3 -10
drivers/net/ethernet/renesas/rswitch.h
··· 972 972 }; 973 973 }; 974 974 975 - struct rswitch_gwca_ts_info { 976 - struct sk_buff *skb; 977 - struct list_head list; 978 - 979 - int port; 980 - u8 tag; 981 - }; 982 - 983 975 #define RSWITCH_NUM_IRQ_REGS (RSWITCH_MAX_NUM_QUEUES / BITS_PER_TYPE(u32)) 984 976 struct rswitch_gwca { 985 977 unsigned int index; ··· 981 989 struct rswitch_gwca_queue *queues; 982 990 int num_queues; 983 991 struct rswitch_gwca_queue ts_queue; 984 - struct list_head ts_info_list; 985 992 DECLARE_BITMAP(used, RSWITCH_MAX_NUM_QUEUES); 986 993 u32 tx_irq_bits[RSWITCH_NUM_IRQ_REGS]; 987 994 u32 rx_irq_bits[RSWITCH_NUM_IRQ_REGS]; ··· 988 997 }; 989 998 990 999 #define NUM_QUEUES_PER_NDEV 2 1000 + #define TS_TAGS_PER_PORT 256 991 1001 struct rswitch_device { 992 1002 struct rswitch_private *priv; 993 1003 struct net_device *ndev; ··· 996 1004 void __iomem *addr; 997 1005 struct rswitch_gwca_queue *tx_queue; 998 1006 struct rswitch_gwca_queue *rx_queue; 999 - u8 ts_tag; 1007 + struct sk_buff *ts_skb[TS_TAGS_PER_PORT]; 1008 + DECLARE_BITMAP(ts_skb_used, TS_TAGS_PER_PORT); 1000 1009 bool disabled; 1001 1010 1002 1011 int port;
+1 -1
drivers/net/ethernet/sfc/tc_conntrack.c
··· 16 16 void *cb_priv); 17 17 18 18 static const struct rhashtable_params efx_tc_ct_zone_ht_params = { 19 - .key_len = offsetof(struct efx_tc_ct_zone, linkage), 19 + .key_len = sizeof_field(struct efx_tc_ct_zone, zone), 20 20 .key_offset = 0, 21 21 .head_offset = offsetof(struct efx_tc_ct_zone, linkage), 22 22 };
+11 -3
drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 + #include <linux/iommu.h> 2 3 #include <linux/platform_device.h> 3 4 #include <linux/of.h> 4 5 #include <linux/module.h> ··· 19 18 20 19 struct reset_control *rst_mac; 21 20 struct reset_control *rst_pcs; 21 + 22 + u32 iommu_sid; 22 23 23 24 void __iomem *hv; 24 25 void __iomem *regs; ··· 53 50 #define MGBE_WRAP_COMMON_INTR_ENABLE 0x8704 54 51 #define MAC_SBD_INTR BIT(2) 55 52 #define MGBE_WRAP_AXI_ASID0_CTRL 0x8400 56 - #define MGBE_SID 0x6 57 53 58 54 static int __maybe_unused tegra_mgbe_suspend(struct device *dev) 59 55 { ··· 86 84 writel(MAC_SBD_INTR, mgbe->regs + MGBE_WRAP_COMMON_INTR_ENABLE); 87 85 88 86 /* Program SID */ 89 - writel(MGBE_SID, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL); 87 + writel(mgbe->iommu_sid, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL); 90 88 91 89 value = readl(mgbe->xpcs + XPCS_WRAP_UPHY_STATUS); 92 90 if ((value & XPCS_WRAP_UPHY_STATUS_TX_P_UP) == 0) { ··· 243 241 if (IS_ERR(mgbe->xpcs)) 244 242 return PTR_ERR(mgbe->xpcs); 245 243 244 + /* get controller's stream id from iommu property in device tree */ 245 + if (!tegra_dev_iommu_get_stream_id(mgbe->dev, &mgbe->iommu_sid)) { 246 + dev_err(mgbe->dev, "failed to get iommu stream id\n"); 247 + return -EINVAL; 248 + } 249 + 246 250 res.addr = mgbe->regs; 247 251 res.irq = irq; 248 252 ··· 354 346 writel(MAC_SBD_INTR, mgbe->regs + MGBE_WRAP_COMMON_INTR_ENABLE); 355 347 356 348 /* Program SID */ 357 - writel(MGBE_SID, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL); 349 + writel(mgbe->iommu_sid, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL); 358 350 359 351 plat->flags |= STMMAC_FLAG_SERDES_UP_AFTER_PHY_LINKUP; 360 352
+17 -26
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 406 406 } 407 407 408 408 /** 409 - * stmmac_remove_config_dt - undo the effects of stmmac_probe_config_dt() 410 - * @pdev: platform_device structure 411 - * @plat: driver data platform structure 412 - * 413 - * Release resources claimed by stmmac_probe_config_dt(). 414 - */ 415 - static void stmmac_remove_config_dt(struct platform_device *pdev, 416 - struct plat_stmmacenet_data *plat) 417 - { 418 - clk_disable_unprepare(plat->stmmac_clk); 419 - clk_disable_unprepare(plat->pclk); 420 - of_node_put(plat->phy_node); 421 - of_node_put(plat->mdio_node); 422 - } 423 - 424 409 /** 425 410 * stmmac_probe_config_dt - parse device-tree driver parameters 426 411 * @pdev: platform_device structure 427 412 * @mac: MAC address to use ··· 474 490 dev_warn(&pdev->dev, "snps,phy-addr property is deprecated\n"); 475 491 476 492 rc = stmmac_mdio_setup(plat, np, &pdev->dev); 477 - if (rc) 478 - return ERR_PTR(rc); 493 + if (rc) { 494 + ret = ERR_PTR(rc); 495 + goto error_put_phy; 496 + } 479 497 480 498 of_property_read_u32(np, "tx-fifo-depth", &plat->tx_fifo_size); 481 499 ··· 567 581 dma_cfg = devm_kzalloc(&pdev->dev, sizeof(*dma_cfg), 568 582 GFP_KERNEL); 569 583 if (!dma_cfg) { 570 - stmmac_remove_config_dt(pdev, plat); 571 - return ERR_PTR(-ENOMEM); 584 + ret = ERR_PTR(-ENOMEM); 585 + goto error_put_mdio; 572 586 } 573 587 plat->dma_cfg = dma_cfg; 574 588 ··· 596 610 597 611 rc = stmmac_mtl_setup(pdev, plat); 598 612 if (rc) { 599 - stmmac_remove_config_dt(pdev, plat); 600 - return ERR_PTR(rc); 613 + ret = ERR_PTR(rc); 614 + goto error_put_mdio; 601 615 } 602 616 603 617 /* clock setup */ ··· 649 663 clk_disable_unprepare(plat->pclk); 650 664 error_pclk_get: 651 665 clk_disable_unprepare(plat->stmmac_clk); 666 + error_put_mdio: 667 + of_node_put(plat->mdio_node); 668 + error_put_phy: 669 + of_node_put(plat->phy_node); 652 670 653 671 return ret; 654 672 } ··· 661 671 { 662 672 struct plat_stmmacenet_data *plat = data; 663 673 664 - /* Platform data argument is unused */ 665 - stmmac_remove_config_dt(NULL, plat); 674 + clk_disable_unprepare(plat->stmmac_clk); 675 + clk_disable_unprepare(plat->pclk); 676 + of_node_put(plat->mdio_node); 677 + of_node_put(plat->phy_node); 666 678 } 667 679 668 680 /** 669 681 * devm_stmmac_probe_config_dt 670 682 * @pdev: platform_device structure 671 683 * @mac: MAC address to use 672 - * Description: Devres variant of stmmac_probe_config_dt(). Does not require 673 - * the user to call stmmac_remove_config_dt() at driver detach. 684 + * Description: Devres variant of stmmac_probe_config_dt(). 674 685 675 686 struct plat_stmmacenet_data * 676 687 devm_stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+1 -1
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 3551 3551 init_completion(&common->tdown_complete); 3552 3552 common->tx_ch_num = AM65_CPSW_DEFAULT_TX_CHNS; 3553 3553 common->rx_ch_num_flows = AM65_CPSW_DEFAULT_RX_CHN_FLOWS; 3554 - common->pf_p0_rx_ptype_rrobin = false; 3554 + common->pf_p0_rx_ptype_rrobin = true; 3555 3555 common->default_vlan = 1; 3556 3556 3557 3557 common->ports = devm_kcalloc(dev, common->port_num,
+8
drivers/net/ethernet/ti/icssg/icss_iep.c
··· 215 215 for (cmp = IEP_MIN_CMP; cmp < IEP_MAX_CMP; cmp++) { 216 216 regmap_update_bits(iep->map, ICSS_IEP_CMP_STAT_REG, 217 217 IEP_CMP_STATUS(cmp), IEP_CMP_STATUS(cmp)); 218 + 219 + regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG, 220 + IEP_CMP_CFG_CMP_EN(cmp), 0); 218 221 } 219 222 220 223 /* enable reset counter on CMP0 event */ ··· 782 779 iep->ptp_clock = NULL; 783 780 } 784 781 icss_iep_disable(iep); 782 + 783 + if (iep->pps_enabled) 784 + icss_iep_pps_enable(iep, false); 785 + else if (iep->perout_enabled) 786 + icss_iep_perout_enable(iep, NULL, false); 785 787 786 788 return 0; 787 789 }
-25
drivers/net/ethernet/ti/icssg/icssg_common.c
··· 855 855 } 856 856 EXPORT_SYMBOL_GPL(prueth_rx_irq); 857 857 858 - void prueth_emac_stop(struct prueth_emac *emac) 859 - { 860 - struct prueth *prueth = emac->prueth; 861 - int slice; 862 - 863 - switch (emac->port_id) { 864 - case PRUETH_PORT_MII0: 865 - slice = ICSS_SLICE0; 866 - break; 867 - case PRUETH_PORT_MII1: 868 - slice = ICSS_SLICE1; 869 - break; 870 - default: 871 - netdev_err(emac->ndev, "invalid port\n"); 872 - return; 873 - } 874 - 875 - emac->fw_running = 0; 876 - if (!emac->is_sr1) 877 - rproc_shutdown(prueth->txpru[slice]); 878 - rproc_shutdown(prueth->rtu[slice]); 879 - rproc_shutdown(prueth->pru[slice]); 880 - } 881 - EXPORT_SYMBOL_GPL(prueth_emac_stop); 882 - 883 858 void prueth_cleanup_tx_ts(struct prueth_emac *emac) 884 859 { 885 860 int i;
+28 -13
drivers/net/ethernet/ti/icssg/icssg_config.c
··· 397 397 return 0; 398 398 } 399 399 400 - static void icssg_init_emac_mode(struct prueth *prueth) 400 + void icssg_init_emac_mode(struct prueth *prueth) 401 401 { 402 402 /* When the device is configured as a bridge and it is being brought 403 403 * back to the emac mode, the host mac address has to be set as 0. ··· 405 405 u32 addr = prueth->shram.pa + EMAC_ICSSG_SWITCH_DEFAULT_VLAN_TABLE_OFFSET; 406 406 int i; 407 407 u8 mac[ETH_ALEN] = { 0 }; 408 - 409 - if (prueth->emacs_initialized) 410 - return; 411 408 412 409 /* Set VLAN TABLE address base */ 413 410 regmap_update_bits(prueth->miig_rt, FDB_GEN_CFG1, SMEM_VLAN_OFFSET_MASK, ··· 420 423 /* Clear host MAC address */ 421 424 icssg_class_set_host_mac_addr(prueth->miig_rt, mac); 422 425 } 426 + EXPORT_SYMBOL_GPL(icssg_init_emac_mode); 423 427 424 - static void icssg_init_fw_offload_mode(struct prueth *prueth) 428 + void icssg_init_fw_offload_mode(struct prueth *prueth) 425 429 { 426 430 u32 addr = prueth->shram.pa + EMAC_ICSSG_SWITCH_DEFAULT_VLAN_TABLE_OFFSET; 427 431 int i; 428 432 429 - if (prueth->emacs_initialized) 430 - return; 431 433 432 434 /* Set VLAN TABLE address base */ 433 435 regmap_update_bits(prueth->miig_rt, FDB_GEN_CFG1, SMEM_VLAN_OFFSET_MASK, ··· 443 448 icssg_class_set_host_mac_addr(prueth->miig_rt, prueth->hw_bridge_dev->dev_addr); 444 449 icssg_set_pvid(prueth, prueth->default_vlan, PRUETH_PORT_HOST); 445 450 } 451 + EXPORT_SYMBOL_GPL(icssg_init_fw_offload_mode); 446 452 447 453 int icssg_config(struct prueth *prueth, struct prueth_emac *emac, int slice) 448 454 { 449 455 void __iomem *config = emac->dram.va + ICSSG_CONFIG_OFFSET; 450 456 struct icssg_flow_cfg __iomem *flow_cfg; 451 457 int ret; 452 458 453 - if (prueth->is_switch_mode || prueth->is_hsr_offload_mode) 454 - icssg_init_fw_offload_mode(prueth); 455 - else 456 - icssg_init_emac_mode(prueth); 457 458 458 459 memset_io(config, 0, TAS_GATE_MASK_LIST0); 459 460 icssg_miig_queues_init(prueth, slice); ··· 777 786 writel(pvid, prueth->shram.va + EMAC_ICSSG_SWITCH_PORT0_DEFAULT_VLAN_OFFSET); 778 787 } 779 788 EXPORT_SYMBOL_GPL(icssg_set_pvid); 789 + 790 + int emac_fdb_flow_id_updated(struct prueth_emac *emac) 791 + { 792 + struct mgmt_cmd_rsp fdb_cmd_rsp = { 0 }; 793 + int slice = prueth_emac_slice(emac); 794 + struct mgmt_cmd fdb_cmd = { 0 }; 795 + int ret; 796 + 797 + fdb_cmd.header = ICSSG_FW_MGMT_CMD_HEADER; 798 + fdb_cmd.type = ICSSG_FW_MGMT_FDB_CMD_TYPE_RX_FLOW; 799 + fdb_cmd.seqnum = ++(emac->prueth->icssg_hwcmdseq); 800 + fdb_cmd.param = 0; 801 + 802 + fdb_cmd.param |= (slice << 4); 803 + fdb_cmd.cmd_args[0] = 0; 804 + 805 + ret = icssg_send_fdb_msg(emac, &fdb_cmd, &fdb_cmd_rsp); 806 + if (ret) 807 + return ret; 808 + 809 + WARN_ON(fdb_cmd.seqnum != fdb_cmd_rsp.seqnum); 810 + return fdb_cmd_rsp.status == 1 ? 0 : -EINVAL; 811 + } 812 + EXPORT_SYMBOL_GPL(emac_fdb_flow_id_updated);
+1
drivers/net/ethernet/ti/icssg/icssg_config.h
··· 55 55 #define ICSSG_FW_MGMT_FDB_CMD_TYPE 0x03 56 56 #define ICSSG_FW_MGMT_CMD_TYPE 0x04 57 57 #define ICSSG_FW_MGMT_PKT 0x80000000 58 + #define ICSSG_FW_MGMT_FDB_CMD_TYPE_RX_FLOW 0x05 58 59 59 60 struct icssg_r30_cmd { 60 61 u32 cmd[4];
+191 -90
drivers/net/ethernet/ti/icssg/icssg_prueth.c
··· 164 164 } 165 165 }; 166 166 167 - static int prueth_emac_start(struct prueth *prueth, struct prueth_emac *emac) 167 + static int prueth_start(struct rproc *rproc, const char *fw_name) 168 + { 169 + int ret; 170 + 171 + ret = rproc_set_firmware(rproc, fw_name); 172 + if (ret) 173 + return ret; 174 + return rproc_boot(rproc); 175 + } 176 + 177 + static void prueth_shutdown(struct rproc *rproc) 178 + { 179 + rproc_shutdown(rproc); 180 + } 181 + 182 + static int prueth_emac_start(struct prueth *prueth) 168 183 { 169 184 struct icssg_firmwares *firmwares; 170 185 struct device *dev = prueth->dev; 171 - int slice, ret; 186 + int ret, slice; 172 187 173 188 if (prueth->is_switch_mode) 174 189 firmwares = icssg_switch_firmwares; ··· 192 177 else 193 178 firmwares = icssg_emac_firmwares; 194 179 195 - slice = prueth_emac_slice(emac); 196 - if (slice < 0) { 197 - netdev_err(emac->ndev, "invalid port\n"); 198 - return -EINVAL; 180 + for (slice = 0; slice < PRUETH_NUM_MACS; slice++) { 181 + ret = prueth_start(prueth->pru[slice], firmwares[slice].pru); 182 + if (ret) { 183 + dev_err(dev, "failed to boot PRU%d: %d\n", slice, ret); 184 + goto unwind_slices; 185 + } 186 + 187 + ret = prueth_start(prueth->rtu[slice], firmwares[slice].rtu); 188 + if (ret) { 189 + dev_err(dev, "failed to boot RTU%d: %d\n", slice, ret); 190 + rproc_shutdown(prueth->pru[slice]); 191 + goto unwind_slices; 192 + } 193 + 194 + ret = prueth_start(prueth->txpru[slice], firmwares[slice].txpru); 195 + if (ret) { 196 + dev_err(dev, "failed to boot TX_PRU%d: %d\n", slice, ret); 197 + rproc_shutdown(prueth->rtu[slice]); 198 + rproc_shutdown(prueth->pru[slice]); 199 + goto unwind_slices; 200 + } 199 201 } 200 202 201 - ret = icssg_config(prueth, emac, slice); 202 - if (ret) 203 - return ret; 204 - 205 - ret = rproc_set_firmware(prueth->pru[slice], firmwares[slice].pru); 206 - ret = rproc_boot(prueth->pru[slice]); 207 - if (ret) { 208 - dev_err(dev, "failed to boot PRU%d: %d\n", slice, ret); 209 - return -EINVAL; 210 - } 211 - 212 - ret = rproc_set_firmware(prueth->rtu[slice], firmwares[slice].rtu); 213 - ret = rproc_boot(prueth->rtu[slice]); 214 - if (ret) { 215 - dev_err(dev, "failed to boot RTU%d: %d\n", slice, ret); 216 - goto halt_pru; 217 - } 218 - 219 - ret = rproc_set_firmware(prueth->txpru[slice], firmwares[slice].txpru); 220 - ret = rproc_boot(prueth->txpru[slice]); 221 - if (ret) { 222 - dev_err(dev, "failed to boot TX_PRU%d: %d\n", slice, ret); 223 - goto halt_rtu; 224 - } 225 - 226 - emac->fw_running = 1; 227 203 return 0; 228 204 229 - halt_rtu: 230 - rproc_shutdown(prueth->rtu[slice]); 231 - 232 - halt_pru: 233 - rproc_shutdown(prueth->pru[slice]); 205 + unwind_slices: 206 + while (--slice >= 0) { 207 + prueth_shutdown(prueth->txpru[slice]); 208 + prueth_shutdown(prueth->rtu[slice]); 209 + prueth_shutdown(prueth->pru[slice]); 210 + } 234 211 235 212 return ret; 213 + } 214 + 215 + static void prueth_emac_stop(struct prueth *prueth) 216 + { 217 + int slice; 218 + 219 + for (slice = 0; slice < PRUETH_NUM_MACS; slice++) { 220 + prueth_shutdown(prueth->txpru[slice]); 221 + prueth_shutdown(prueth->rtu[slice]); 222 + prueth_shutdown(prueth->pru[slice]); 223 + } 224 + } 225 + 226 + static int prueth_emac_common_start(struct prueth *prueth) 227 + { 228 + struct prueth_emac *emac; 229 + int ret = 0; 230 + int slice; 231 + 232 + if (!prueth->emac[ICSS_SLICE0] && !prueth->emac[ICSS_SLICE1]) 233 + return -EINVAL; 234 + 235 + /* clear SMEM and MSMC settings for all slices */ 236 + memset_io(prueth->msmcram.va, 0, prueth->msmcram.size); 237 + memset_io(prueth->shram.va, 0, ICSSG_CONFIG_OFFSET_SLICE1 * PRUETH_NUM_MACS); 238 + 239 + icssg_class_default(prueth->miig_rt, ICSS_SLICE0, 0, false); 240 + icssg_class_default(prueth->miig_rt, ICSS_SLICE1, 0, false); 241 + 242 + if (prueth->is_switch_mode || prueth->is_hsr_offload_mode) 243 + icssg_init_fw_offload_mode(prueth); 244 + else 245 + icssg_init_emac_mode(prueth); 246 + 247 + for (slice = 0; slice < PRUETH_NUM_MACS; slice++) { 248 + emac = prueth->emac[slice]; 249 + if (!emac) 250 + continue; 251 + ret = icssg_config(prueth, emac, slice); 252 + if (ret) 253 + goto disable_class; 254 + } 255 + 256 + ret = prueth_emac_start(prueth); 257 + if (ret) 258 + goto disable_class; 259 + 260 + emac = prueth->emac[ICSS_SLICE0] ? prueth->emac[ICSS_SLICE0] : 261 + prueth->emac[ICSS_SLICE1]; 262 + ret = icss_iep_init(emac->iep, &prueth_iep_clockops, 263 + emac, IEP_DEFAULT_CYCLE_TIME_NS); 264 + if (ret) { 265 + dev_err(prueth->dev, "Failed to initialize IEP module\n"); 266 + goto stop_pruss; 267 + } 268 + 269 + return 0; 270 + 271 + stop_pruss: 272 + prueth_emac_stop(prueth); 273 + 274 + disable_class: 275 + icssg_class_disable(prueth->miig_rt, ICSS_SLICE0); 276 + icssg_class_disable(prueth->miig_rt, ICSS_SLICE1); 277 + 278 + return ret; 279 + } 280 + 281 + static int prueth_emac_common_stop(struct prueth *prueth) 282 + { 283 + struct prueth_emac *emac; 284 + 285 + if (!prueth->emac[ICSS_SLICE0] && !prueth->emac[ICSS_SLICE1]) 286 + return -EINVAL; 287 + 288 + icssg_class_disable(prueth->miig_rt, ICSS_SLICE0); 289 + icssg_class_disable(prueth->miig_rt, ICSS_SLICE1); 290 + 291 + prueth_emac_stop(prueth); 292 + 293 + emac = prueth->emac[ICSS_SLICE0] ? prueth->emac[ICSS_SLICE0] : 294 + prueth->emac[ICSS_SLICE1]; 295 + icss_iep_exit(emac->iep); 296 + 297 + return 0; 236 298 } 237 299 238 300 /* called back by PHY layer if there is change in link state of hw port*/ ··· 465 373 u64 cyclecount; 466 374 u32 cycletime; 467 375 int timeout; 468 - 469 - if (!emac->fw_running) 470 - return; 471 376 472 377 sc_descp = emac->prueth->shram.va + TIMESYNC_FW_WC_SETCLOCK_DESC_OFFSET; 473 378 ··· 632 543 { 633 544 struct prueth_emac *emac = netdev_priv(ndev); 634 545 int ret, i, num_data_chn = emac->tx_ch_num; 546 + struct icssg_flow_cfg __iomem *flow_cfg; 635 547 struct prueth *prueth = emac->prueth; 636 548 int slice = prueth_emac_slice(emac); 637 549 struct device *dev = prueth->dev; 638 550 int max_rx_flows; 639 551 int rx_flow; 640 552 641 - /* clear SMEM and MSMC settings for all slices */ 642 - if (!prueth->emacs_initialized) { 643 - memset_io(prueth->msmcram.va, 0, prueth->msmcram.size); 644 - memset_io(prueth->shram.va, 0, ICSSG_CONFIG_OFFSET_SLICE1 * PRUETH_NUM_MACS); 645 - } 646 - 647 553 /* set h/w MAC as user might have re-configured */ 648 554 ether_addr_copy(emac->mac_addr, ndev->dev_addr); 649 555 650 556 icssg_class_set_mac_addr(prueth->miig_rt, slice, emac->mac_addr); 651 - icssg_class_default(prueth->miig_rt, slice, 0, false); 652 557 icssg_ft1_set_mac_addr(prueth->miig_rt, slice, emac->mac_addr); 653 558 654 559 /* Notify the stack of the actual queue counts. */ ··· 680 597 goto cleanup_napi; 681 598 } 682 599 683 - /* reset and start PRU firmware */ 684 - ret = prueth_emac_start(prueth, emac); 685 - if (ret) 686 - goto free_rx_irq; 600 + if (!prueth->emacs_initialized) { 601 + ret = prueth_emac_common_start(prueth); 602 + if (ret) 603 + goto free_rx_irq; 604 + } 605 + 606 + flow_cfg = emac->dram.va + ICSSG_CONFIG_OFFSET + PSI_L_REGULAR_FLOW_ID_BASE_OFFSET; 607 + writew(emac->rx_flow_id_base, &flow_cfg->rx_base_flow); 608 + ret = emac_fdb_flow_id_updated(emac); 609 + 610 + if (ret) { 611 + netdev_err(ndev, "Failed to update Rx Flow ID %d", ret); 612 + goto stop; 613 + } 687 614 688 615 icssg_mii_update_mtu(prueth->mii_rt, slice, ndev->max_mtu); 689 - 690 - if (!prueth->emacs_initialized) { 691 - ret = icss_iep_init(emac->iep, &prueth_iep_clockops, 692 - emac, IEP_DEFAULT_CYCLE_TIME_NS); 693 - } 694 616 695 617 ret = request_threaded_irq(emac->tx_ts_irq, NULL, prueth_tx_ts_irq, 696 618 IRQF_ONESHOT, dev_name(dev), emac); ··· 741 653 free_tx_ts_irq: 742 654 free_irq(emac->tx_ts_irq, emac); 743 655 stop: 744 - prueth_emac_stop(emac); 656 + if (!prueth->emacs_initialized) 657 + prueth_emac_common_stop(prueth); 745 658 free_rx_irq: 746 659 free_irq(emac->rx_chns.irq[rx_flow], emac); 747 660 cleanup_napi: ··· 777 688 /* block packets from wire */ 778 689 if (ndev->phydev) 779 690 phy_stop(ndev->phydev); 780 - 781 - icssg_class_disable(prueth->miig_rt, prueth_emac_slice(emac)); 782 691 783 692 if (emac->prueth->is_hsr_offload_mode) 784 693 __dev_mc_unsync(ndev, icssg_prueth_hsr_del_mcast); ··· 815 728 /* Destroying the queued work in ndo_stop() */ 816 729 cancel_delayed_work_sync(&emac->stats_work); 817 730 818 - if (prueth->emacs_initialized == 1) 819 - icss_iep_exit(emac->iep); 820 - 821 731 /* stop PRUs */ 822 - prueth_emac_stop(emac); 732 + if (prueth->emacs_initialized == 1) 733 + prueth_emac_common_stop(prueth); 823 734 824 735 free_irq(emac->tx_ts_irq, emac); 825 736 ··· 1138 1053 } 1139 1054 } 1140 1055 1141 - static void prueth_emac_restart(struct prueth *prueth) 1056 + static int prueth_emac_restart(struct prueth *prueth) 1142 1057 { 1143 1058 struct prueth_emac *emac0 = prueth->emac[PRUETH_MAC0]; 1144 1059 struct prueth_emac *emac1 = prueth->emac[PRUETH_MAC1]; 1060 + int ret; 1145 1061 1146 1062 /* Detach the net_device for both PRUeth ports*/ 1147 1063 if (netif_running(emac0->ndev)) ··· 1151 1065 netif_device_detach(emac1->ndev); 1152 1066 1153 1067 /* Disable both PRUeth ports */ 1154 - icssg_set_port_state(emac0, ICSSG_EMAC_PORT_DISABLE); 1155 - icssg_set_port_state(emac1, ICSSG_EMAC_PORT_DISABLE); 1068 + ret = icssg_set_port_state(emac0, ICSSG_EMAC_PORT_DISABLE); 1069 + ret |= icssg_set_port_state(emac1, ICSSG_EMAC_PORT_DISABLE); 1070 + if (ret) 1071 + return ret; 1156 1072 1157 1073 /* Stop both pru cores for both PRUeth ports*/ 1158 - prueth_emac_stop(emac0); 1159 - prueth->emacs_initialized--; 1160 - prueth_emac_stop(emac1); 1161 - prueth->emacs_initialized--; 1074 + ret = prueth_emac_common_stop(prueth); 1075 + if (ret) { 1076 + dev_err(prueth->dev, "Failed to stop the firmwares"); 1077 + return ret; 1078 + } 1162 1079 1163 1080 /* Start both pru cores for both PRUeth ports */ 1164 - prueth_emac_start(prueth, emac0); 1165 - prueth->emacs_initialized++; 1166 - prueth_emac_start(prueth, emac1); 1167 - prueth->emacs_initialized++; 1081 + ret = prueth_emac_common_start(prueth); 1082 + if (ret) { 1083 + dev_err(prueth->dev, "Failed to start the firmwares"); 1084 + return ret; 1085 + } 1168 1086 1169 1087 /* Enable forwarding for both PRUeth ports */ 1170 - icssg_set_port_state(emac0, ICSSG_EMAC_PORT_FORWARD); 1171 - icssg_set_port_state(emac1, ICSSG_EMAC_PORT_FORWARD); 1088 + ret = icssg_set_port_state(emac0, ICSSG_EMAC_PORT_FORWARD); 1089 + ret |= icssg_set_port_state(emac1, ICSSG_EMAC_PORT_FORWARD); 1172 1090 1173 1091 /* Attache net_device for both PRUeth ports */ 1174 1092 netif_device_attach(emac0->ndev); 1175 1093 netif_device_attach(emac1->ndev); 1094 + 1095 + return ret; 1176 1096 } 1177 1097 1178 1098 static void icssg_change_mode(struct prueth *prueth) 1179 1099 { 1180 1100 struct prueth_emac *emac; 1181 - int mac; 1101 + int mac, ret; 1182 1102 1183 - prueth_emac_restart(prueth); 1103 + ret = prueth_emac_restart(prueth); 1104 + if (ret) { 1105 + dev_err(prueth->dev, "Failed to restart the firmwares, aborting the process"); 1106 + return; 1107 + } 1184 1108 1185 1109 for (mac = PRUETH_MAC0; mac < PRUETH_NUM_MACS; mac++) { 1186 1110 emac = prueth->emac[mac]; ··· 1269 1173 { 1270 1174 struct prueth_emac *emac = netdev_priv(ndev); 1271 1175 struct prueth *prueth = emac->prueth; 1176 + int ret; 1272 1177 1273 1178 prueth->br_members &= ~BIT(emac->port_id); 1274 1179 1275 1180 if (prueth->is_switch_mode) { 1276 1181 prueth->is_switch_mode = false; 1277 1182 emac->port_vlan = 0; 1278 - prueth_emac_restart(prueth); 1183 + ret = prueth_emac_restart(prueth); 1184 + if (ret) { 1185 + dev_err(prueth->dev, "Failed to restart the firmwares, aborting the process"); 1186 + return; 1187 + } 1279 1188 } 1280 1189 1281 1190 prueth_offload_fwd_mark_update(prueth); ··· 1329 1228 struct prueth *prueth = emac->prueth; 1330 1229 struct prueth_emac *emac0; 1331 1230 struct prueth_emac *emac1; 1231 + int ret; 1332 1232 1333 1233 emac0 = prueth->emac[PRUETH_MAC0]; 1334 1234 emac1 = prueth->emac[PRUETH_MAC1]; ··· 1340 1238 emac0->port_vlan = 0; 1341 1239 emac1->port_vlan = 0; 1342 1240 prueth->hsr_dev = NULL; 1343 - prueth_emac_restart(prueth); 1241 + ret = prueth_emac_restart(prueth); 1242 + if (ret) { 1243 + dev_err(prueth->dev, "Failed to restart the firmwares, aborting the process"); 1244 + return; 1245 + } 1344 1246 netdev_dbg(ndev, "Disabling HSR Offload mode\n"); 1345 1247 } 1346 1248 } ··· 1519 1413 prueth->pa_stats = NULL; 1520 1414 } 1521 1415 1522 - if (eth0_node) { 1416 + if (eth0_node || eth1_node) { 1523 1417 ret = prueth_get_cores(prueth, ICSS_SLICE0, false); 1524 1418 if (ret) 1525 1419 goto put_cores; 1526 - } 1527 - 1528 - if (eth1_node) { 1529 1420 ret = prueth_get_cores(prueth, ICSS_SLICE1, false); 1530 1421 if (ret) 1531 1422 goto put_cores; ··· 1721 1618 pruss_put(prueth->pruss); 1722 1619 1723 1620 put_cores: 1724 - if (eth1_node) { 1725 - prueth_put_cores(prueth, ICSS_SLICE1); 1726 - of_node_put(eth1_node); 1727 - } 1728 - 1729 - if (eth0_node) { 1621 + if (eth0_node || eth1_node) { 1730 1622 prueth_put_cores(prueth, ICSS_SLICE0); 1731 1623 of_node_put(eth0_node); 1624 + 1625 + prueth_put_cores(prueth, ICSS_SLICE1); 1626 + of_node_put(eth1_node); 1732 1627 } 1733 1628 1734 1629 return ret;
+3 -2
drivers/net/ethernet/ti/icssg/icssg_prueth.h
··· 140 140 /* data for each emac port */ 141 141 struct prueth_emac { 142 142 bool is_sr1; 143 - bool fw_running; 144 143 struct prueth *prueth; 145 144 struct net_device *ndev; 146 145 u8 mac_addr[6]; ··· 360 361 enum icssg_port_state_cmd state); 361 362 void icssg_config_set_speed(struct prueth_emac *emac); 362 363 void icssg_config_half_duplex(struct prueth_emac *emac); 364 + void icssg_init_emac_mode(struct prueth *prueth); 365 + void icssg_init_fw_offload_mode(struct prueth *prueth); 363 366 364 367 /* Buffer queue helpers */ 365 368 int icssg_queue_pop(struct prueth *prueth, u8 queue); ··· 378 377 u8 untag_mask, bool add); 379 378 u16 icssg_get_pvid(struct prueth_emac *emac); 380 379 void icssg_set_pvid(struct prueth *prueth, u8 vid, u8 port); 380 + int emac_fdb_flow_id_updated(struct prueth_emac *emac); 381 381 #define prueth_napi_to_tx_chn(pnapi) \ 382 382 container_of(pnapi, struct prueth_tx_chn, napi_tx) 383 383 ··· 409 407 struct sk_buff *skb, u32 *psdata); 410 408 enum netdev_tx icssg_ndo_start_xmit(struct sk_buff *skb, struct net_device *ndev); 411 409 irqreturn_t prueth_rx_irq(int irq, void *dev_id); 412 - void prueth_emac_stop(struct prueth_emac *emac); 413 410 void prueth_cleanup_tx_ts(struct prueth_emac *emac); 414 411 int icssg_napi_rx_poll(struct napi_struct *napi_rx, int budget); 415 412 int prueth_prepare_rx_chan(struct prueth_emac *emac,
+23 -1
drivers/net/ethernet/ti/icssg/icssg_prueth_sr1.c
··· 440 440 goto halt_pru; 441 441 } 442 442 443 - emac->fw_running = 1; 444 443 return 0; 445 444 446 445 halt_pru: 447 446 rproc_shutdown(prueth->pru[slice]); 448 447 449 448 return ret; 449 + } 450 + 451 + static void prueth_emac_stop(struct prueth_emac *emac) 452 + { 453 + struct prueth *prueth = emac->prueth; 454 + int slice; 455 + 456 + switch (emac->port_id) { 457 + case PRUETH_PORT_MII0: 458 + slice = ICSS_SLICE0; 459 + break; 460 + case PRUETH_PORT_MII1: 461 + slice = ICSS_SLICE1; 462 + break; 463 + default: 464 + netdev_err(emac->ndev, "invalid port\n"); 465 + return; 466 + } 467 + 468 + if (!emac->is_sr1) 469 + rproc_shutdown(prueth->txpru[slice]); 470 + rproc_shutdown(prueth->rtu[slice]); 471 + rproc_shutdown(prueth->pru[slice]); 450 472 } 451 473 452 474 /**
+11 -13
drivers/net/ethernet/wangxun/libwx/wx_hw.c
··· 334 334 status = read_poll_timeout(rd32, hicr, hicr & WX_MNG_MBOX_CTL_FWRDY, 1000, 335 335 timeout * 1000, false, wx, WX_MNG_MBOX_CTL); 336 336 337 + buf[0] = rd32(wx, WX_MNG_MBOX); 338 + if ((buf[0] & 0xff0000) >> 16 == 0x80) { 339 + wx_err(wx, "Unknown FW command: 0x%x\n", buffer[0] & 0xff); 340 + status = -EINVAL; 341 + goto rel_out; 342 + } 343 + 337 344 /* Check command completion */ 338 345 if (status) { 339 - wx_dbg(wx, "Command has failed with no status valid.\n"); 340 - 341 - buf[0] = rd32(wx, WX_MNG_MBOX); 342 - if ((buffer[0] & 0xff) != (~buf[0] >> 24)) { 343 - status = -EINVAL; 344 - goto rel_out; 345 - } 346 - if ((buf[0] & 0xff0000) >> 16 == 0x80) { 347 - wx_dbg(wx, "It's unknown cmd.\n"); 348 - status = -EINVAL; 349 - goto rel_out; 350 - } 351 - 346 + wx_err(wx, "Command has failed with no status valid.\n"); 352 347 wx_dbg(wx, "write value:\n"); 353 348 for (i = 0; i < dword_len; i++) 354 349 wx_dbg(wx, "%x ", buffer[i]); 355 350 wx_dbg(wx, "read value:\n"); 356 351 for (i = 0; i < dword_len; i++) 357 352 wx_dbg(wx, "%x ", buf[i]); 353 + wx_dbg(wx, "\ncheck: %x %x\n", buffer[0] & 0xff, ~buf[0] >> 24); 354 + 355 + goto rel_out; 358 356 } 359 357 360 358 if (!return_data)
+5 -1
drivers/net/ieee802154/ca8210.c
··· 3072 3072 spi_set_drvdata(priv->spi, priv); 3073 3073 if (IS_ENABLED(CONFIG_IEEE802154_CA8210_DEBUGFS)) { 3074 3074 cascoda_api_upstream = ca8210_test_int_driver_write; 3075 - ca8210_test_interface_init(priv); 3075 + ret = ca8210_test_interface_init(priv); 3076 + if (ret) { 3077 + dev_crit(&spi_device->dev, "ca8210_test_interface_init failed\n"); 3078 + goto error; 3079 + } 3076 3080 } else { 3077 3081 cascoda_api_upstream = NULL; 3078 3082 }
+4
drivers/net/mctp/mctp-i3c.c
··· 125 125 126 126 xfer.data.in = skb_put(skb, mi->mrl); 127 127 128 + /* Make sure netif_rx() is read in the same order as i3c. */ 129 + mutex_lock(&mi->lock); 128 130 rc = i3c_device_do_priv_xfers(mi->i3c, &xfer, 1); 129 131 if (rc < 0) 130 132 goto err; ··· 168 166 stats->rx_dropped++; 169 167 } 170 168 169 + mutex_unlock(&mi->lock); 171 170 return 0; 172 171 err: 172 + mutex_unlock(&mi->lock); 173 173 kfree_skb(skb); 174 174 return rc; 175 175 }
+10 -3
drivers/net/mdio/fwnode_mdio.c
··· 40 40 static struct mii_timestamper * 41 41 fwnode_find_mii_timestamper(struct fwnode_handle *fwnode) 42 42 { 43 + struct mii_timestamper *mii_ts; 43 44 struct of_phandle_args arg; 44 45 int err; 45 46 ··· 54 53 else if (err) 55 54 return ERR_PTR(err); 56 55 57 - if (arg.args_count != 1) 58 - return ERR_PTR(-EINVAL); 56 + if (arg.args_count != 1) { 57 + mii_ts = ERR_PTR(-EINVAL); 58 + goto put_node; 59 + } 59 60 60 - return register_mii_timestamper(arg.np, arg.args[0]); 61 + mii_ts = register_mii_timestamper(arg.np, arg.args[0]); 62 + 63 + put_node: 64 + of_node_put(arg.np); 65 + return mii_ts; 61 66 } 62 67 63 68 int fwnode_mdiobus_phy_device_register(struct mii_bus *mdio,
+2
drivers/net/netdevsim/health.c
··· 149 149 char *break_msg; 150 150 int err; 151 151 152 + if (count == 0 || count > PAGE_SIZE) 153 + return -EINVAL; 152 154 break_msg = memdup_user_nul(data, count); 153 155 if (IS_ERR(break_msg)) 154 156 return PTR_ERR(break_msg);
+2 -2
drivers/net/netdevsim/netdev.c
··· 635 635 page_pool_put_full_page(ns->page->pp, ns->page, false); 636 636 ns->page = NULL; 637 637 } 638 - rtnl_unlock(); 639 638 640 639 exit: 641 - return count; 640 + rtnl_unlock(); 641 + return ret; 642 642 } 643 643 644 644 static const struct file_operations nsim_pp_hold_fops = {
+1 -1
drivers/net/phy/aquantia/aquantia_leds.c
··· 156 156 if (force_active_high || force_active_low) 157 157 return aqr_phy_led_active_low_set(phydev, index, force_active_low); 158 158 159 - unreachable(); 159 + return -EINVAL; 160 160 }
+1 -1
drivers/net/phy/intel-xway.c
··· 529 529 if (force_active_high) 530 530 return phy_clear_bits(phydev, XWAY_MDIO_LED, XWAY_GPHY_LED_INV(index)); 531 531 532 - unreachable(); 532 + return -EINVAL; 533 533 } 534 534 535 535 static struct phy_driver xway_gphy[] = {
+101 -13
drivers/net/phy/micrel.c
··· 432 432 struct kszphy_priv { 433 433 struct kszphy_ptp_priv ptp_priv; 434 434 const struct kszphy_type *type; 435 + struct clk *clk; 435 436 int led_mode; 436 437 u16 vct_ctrl1000; 437 438 bool rmii_ref_clk_sel; 438 439 bool rmii_ref_clk_sel_val; 440 + bool clk_enable; 439 441 u64 stats[ARRAY_SIZE(kszphy_hw_stats)]; 440 442 }; 441 443 ··· 2052 2050 data[i] = kszphy_get_stat(phydev, i); 2053 2051 } 2054 2052 2053 + static void kszphy_enable_clk(struct phy_device *phydev) 2054 + { 2055 + struct kszphy_priv *priv = phydev->priv; 2056 + 2057 + if (!priv->clk_enable && priv->clk) { 2058 + clk_prepare_enable(priv->clk); 2059 + priv->clk_enable = true; 2060 + } 2061 + } 2062 + 2063 + static void kszphy_disable_clk(struct phy_device *phydev) 2064 + { 2065 + struct kszphy_priv *priv = phydev->priv; 2066 + 2067 + if (priv->clk_enable && priv->clk) { 2068 + clk_disable_unprepare(priv->clk); 2069 + priv->clk_enable = false; 2070 + } 2071 + } 2072 + 2073 + static int kszphy_generic_resume(struct phy_device *phydev) 2074 + { 2075 + kszphy_enable_clk(phydev); 2076 + 2077 + return genphy_resume(phydev); 2078 + } 2079 + 2080 + static int kszphy_generic_suspend(struct phy_device *phydev) 2081 + { 2082 + int ret; 2083 + 2084 + ret = genphy_suspend(phydev); 2085 + if (ret) 2086 + return ret; 2087 + 2088 + kszphy_disable_clk(phydev); 2089 + 2090 + return 0; 2091 + } 2092 + 2055 2093 static int kszphy_suspend(struct phy_device *phydev) 2056 2094 { 2057 2095 /* Disable PHY Interrupts */ ··· 2101 2059 phydev->drv->config_intr(phydev); 2102 2060 } 2103 2061 2104 - return genphy_suspend(phydev); 2062 + return kszphy_generic_suspend(phydev); 2105 2063 } 2106 2064 2107 2065 static void kszphy_parse_led_mode(struct phy_device *phydev) ··· 2132 2090 { 2133 2091 int ret; 2134 2092 2135 - genphy_resume(phydev); 2093 + ret = kszphy_generic_resume(phydev); 2094 + if (ret) 2095 + return ret; 2136 2096 2137 2097 /* After switching from power-down to normal mode, an internal global 2138 2098 * 
reset is automatically generated. Wait a minimum of 1 ms before ··· 2152 2108 if (phydev->drv->config_intr) 2153 2109 phydev->drv->config_intr(phydev); 2154 2110 } 2111 + 2112 + return 0; 2113 + } 2114 + 2115 + /* Because of errata DS80000700A, receiver error following software 2116 + * power down. Suspend and resume callbacks only disable and enable 2117 + * external rmii reference clock. 2118 + */ 2119 + static int ksz8041_resume(struct phy_device *phydev) 2120 + { 2121 + kszphy_enable_clk(phydev); 2122 + 2123 + return 0; 2124 + } 2125 + 2126 + static int ksz8041_suspend(struct phy_device *phydev) 2127 + { 2128 + kszphy_disable_clk(phydev); 2155 2129 2156 2130 return 0; 2157 2131 } ··· 2221 2159 if (!(ret & BMCR_PDOWN)) 2222 2160 return 0; 2223 2161 2224 - genphy_resume(phydev); 2162 + ret = kszphy_generic_resume(phydev); 2163 + if (ret) 2164 + return ret; 2165 + 2225 2166 usleep_range(1000, 2000); 2226 2167 2227 2168 /* Re-program the value after chip is reset. */ ··· 2240 2175 } 2241 2176 2242 2177 return 0; 2178 + } 2179 + 2180 + static int ksz8061_suspend(struct phy_device *phydev) 2181 + { 2182 + return kszphy_suspend(phydev); 2243 2183 } 2244 2184 2245 2185 static int kszphy_probe(struct phy_device *phydev) ··· 2287 2217 } else if (!clk) { 2288 2218 /* unnamed clock from the generic ethernet-phy binding */ 2289 2219 clk = devm_clk_get_optional_enabled(&phydev->mdio.dev, NULL); 2290 - if (IS_ERR(clk)) 2291 - return PTR_ERR(clk); 2292 2220 } 2221 + 2222 + if (IS_ERR(clk)) 2223 + return PTR_ERR(clk); 2224 + 2225 + clk_disable_unprepare(clk); 2226 + priv->clk = clk; 2293 2227 2294 2228 if (ksz8041_fiber_mode(phydev)) 2295 2229 phydev->port = PORT_FIBRE; ··· 5364 5290 return 0; 5365 5291 } 5366 5292 5293 + static int lan8804_resume(struct phy_device *phydev) 5294 + { 5295 + return kszphy_resume(phydev); 5296 + } 5297 + 5298 + static int lan8804_suspend(struct phy_device *phydev) 5299 + { 5300 + return kszphy_generic_suspend(phydev); 5301 + } 5302 + 5303 + static 
int lan8841_resume(struct phy_device *phydev) 5304 + { 5305 + return kszphy_generic_resume(phydev); 5306 + } 5307 + 5367 5308 static int lan8841_suspend(struct phy_device *phydev) 5368 5309 { 5369 5310 struct kszphy_priv *priv = phydev->priv; ··· 5387 5298 if (ptp_priv->ptp_clock) 5388 5299 ptp_cancel_worker_sync(ptp_priv->ptp_clock); 5389 5300 5390 - return genphy_suspend(phydev); 5301 + return kszphy_generic_suspend(phydev); 5391 5302 } 5392 5303 5393 5304 static struct phy_driver ksphy_driver[] = { ··· 5447 5358 .get_sset_count = kszphy_get_sset_count, 5448 5359 .get_strings = kszphy_get_strings, 5449 5360 .get_stats = kszphy_get_stats, 5450 - /* No suspend/resume callbacks because of errata DS80000700A, 5451 - * receiver error following software power down. 5452 - */ 5361 + .suspend = ksz8041_suspend, 5362 + .resume = ksz8041_resume, 5453 5363 }, { 5454 5364 .phy_id = PHY_ID_KSZ8041RNLI, 5455 5365 .phy_id_mask = MICREL_PHY_ID_MASK, ··· 5524 5436 .soft_reset = genphy_soft_reset, 5525 5437 .config_intr = kszphy_config_intr, 5526 5438 .handle_interrupt = kszphy_handle_interrupt, 5527 - .suspend = kszphy_suspend, 5439 + .suspend = ksz8061_suspend, 5528 5440 .resume = ksz8061_resume, 5529 5441 }, { 5530 5442 .phy_id = PHY_ID_KSZ9021, ··· 5595 5507 .get_sset_count = kszphy_get_sset_count, 5596 5508 .get_strings = kszphy_get_strings, 5597 5509 .get_stats = kszphy_get_stats, 5598 - .suspend = genphy_suspend, 5599 - .resume = kszphy_resume, 5510 + .suspend = lan8804_suspend, 5511 + .resume = lan8804_resume, 5600 5512 .config_intr = lan8804_config_intr, 5601 5513 .handle_interrupt = lan8804_handle_interrupt, 5602 5514 }, { ··· 5614 5526 .get_strings = kszphy_get_strings, 5615 5527 .get_stats = kszphy_get_stats, 5616 5528 .suspend = lan8841_suspend, 5617 - .resume = genphy_resume, 5529 + .resume = lan8841_resume, 5618 5530 .cable_test_start = lan8814_cable_test_start, 5619 5531 .cable_test_get_status = ksz886x_cable_test_get_status, 5620 5532 }, {
+1 -1
drivers/net/phy/mxl-gpy.c
··· 1014 1014 if (force_active_high) 1015 1015 return phy_clear_bits(phydev, PHY_LED, PHY_LED_POLARITY(index)); 1016 1016 1017 - unreachable(); 1017 + return -EINVAL; 1018 1018 } 1019 1019 1020 1020 static struct phy_driver gpy_drivers[] = {
+4 -12
drivers/net/pse-pd/tps23881.c
··· 64 64 if (id >= TPS23881_MAX_CHANS) 65 65 return -ERANGE; 66 66 67 - ret = i2c_smbus_read_word_data(client, TPS23881_REG_PW_STATUS); 68 - if (ret < 0) 69 - return ret; 70 - 71 67 chan = priv->port[id].chan[0]; 72 68 if (chan < 4) 73 - val = (u16)(ret | BIT(chan)); 69 + val = BIT(chan); 74 70 else 75 - val = (u16)(ret | BIT(chan + 4)); 71 + val = BIT(chan + 4); 76 72 77 73 if (priv->port[id].is_4p) { 78 74 chan = priv->port[id].chan[1]; ··· 96 100 if (id >= TPS23881_MAX_CHANS) 97 101 return -ERANGE; 98 102 99 - ret = i2c_smbus_read_word_data(client, TPS23881_REG_PW_STATUS); 100 - if (ret < 0) 101 - return ret; 102 - 103 103 chan = priv->port[id].chan[0]; 104 104 if (chan < 4) 105 - val = (u16)(ret | BIT(chan + 4)); 105 + val = BIT(chan + 4); 106 106 else 107 - val = (u16)(ret | BIT(chan + 8)); 107 + val = BIT(chan + 8); 108 108 109 109 if (priv->port[id].is_4p) { 110 110 chan = priv->port[id].chan[1];
+7 -3
drivers/net/team/team_core.c
··· 998 998 unsigned int dst_release_flag = IFF_XMIT_DST_RELEASE | 999 999 IFF_XMIT_DST_RELEASE_PERM; 1000 1000 1001 - vlan_features = netdev_base_features(vlan_features); 1002 - 1003 1001 rcu_read_lock(); 1002 + if (list_empty(&team->port_list)) 1003 + goto done; 1004 + 1005 + vlan_features = netdev_base_features(vlan_features); 1006 + enc_features = netdev_base_features(enc_features); 1007 + 1004 1008 list_for_each_entry_rcu(port, &team->port_list, list) { 1005 1009 vlan_features = netdev_increment_features(vlan_features, 1006 1010 port->dev->vlan_features, ··· 1014 1010 port->dev->hw_enc_features, 1015 1011 TEAM_ENC_FEATURES); 1016 1012 1017 - 1018 1013 dst_release_flag &= port->dev->priv_flags; 1019 1014 if (port->dev->hard_header_len > max_hard_header_len) 1020 1015 max_hard_header_len = port->dev->hard_header_len; 1021 1016 } 1017 + done: 1022 1018 rcu_read_unlock(); 1023 1019 1024 1020 team->dev->vlan_features = vlan_features;
+1 -1
drivers/net/tun.c
··· 1481 1481 skb->truesize += skb->data_len; 1482 1482 1483 1483 for (i = 1; i < it->nr_segs; i++) { 1484 - const struct iovec *iov = iter_iov(it); 1484 + const struct iovec *iov = iter_iov(it) + i; 1485 1485 size_t fragsz = iov->iov_len; 1486 1486 struct page *page; 1487 1487 void *frag;
+1
drivers/net/usb/qmi_wwan.c
··· 1429 1429 {QMI_QUIRK_SET_DTR(0x2c7c, 0x0195, 4)}, /* Quectel EG95 */ 1430 1430 {QMI_FIXED_INTF(0x2c7c, 0x0296, 4)}, /* Quectel BG96 */ 1431 1431 {QMI_QUIRK_SET_DTR(0x2c7c, 0x030e, 4)}, /* Quectel EM05GV2 */ 1432 + {QMI_QUIRK_SET_DTR(0x2c7c, 0x0316, 3)}, /* Quectel RG255C */ 1432 1433 {QMI_QUIRK_SET_DTR(0x2cb7, 0x0104, 4)}, /* Fibocom NL678 series */ 1433 1434 {QMI_QUIRK_SET_DTR(0x2cb7, 0x0112, 0)}, /* Fibocom FG132 */ 1434 1435 {QMI_FIXED_INTF(0x0489, 0xe0b4, 0)}, /* Foxconn T77W968 LTE */
+1
drivers/net/wireless/intel/iwlwifi/cfg/bz.c
··· 161 161 162 162 const char iwl_bz_name[] = "Intel(R) TBD Bz device"; 163 163 const char iwl_fm_name[] = "Intel(R) Wi-Fi 7 BE201 320MHz"; 164 + const char iwl_wh_name[] = "Intel(R) Wi-Fi 7 BE211 320MHz"; 164 165 const char iwl_gl_name[] = "Intel(R) Wi-Fi 7 BE200 320MHz"; 165 166 const char iwl_mtp_name[] = "Intel(R) Wi-Fi 7 BE202 160MHz"; 166 167
+1
drivers/net/wireless/intel/iwlwifi/iwl-config.h
··· 545 545 extern const char iwl_ax411_name[]; 546 546 extern const char iwl_bz_name[]; 547 547 extern const char iwl_fm_name[]; 548 + extern const char iwl_wh_name[]; 548 549 extern const char iwl_gl_name[]; 549 550 extern const char iwl_mtp_name[]; 550 551 extern const char iwl_sc_name[];
+11 -3
drivers/net/wireless/intel/iwlwifi/mvm/d3.c
··· 2954 2954 int idx) 2955 2955 { 2956 2956 int i; 2957 + int n_channels = 0; 2957 2958 2958 2959 if (fw_has_api(&mvm->fw->ucode_capa, 2959 2960 IWL_UCODE_TLV_API_SCAN_OFFLOAD_CHANS)) { ··· 2963 2962 2964 2963 for (i = 0; i < SCAN_OFFLOAD_MATCHING_CHANNELS_LEN * 8; i++) 2965 2964 if (matches[idx].matching_channels[i / 8] & (BIT(i % 8))) 2966 - match->channels[match->n_channels++] = 2965 + match->channels[n_channels++] = 2967 2966 mvm->nd_channels[i]->center_freq; 2968 2967 } else { 2969 2968 struct iwl_scan_offload_profile_match_v1 *matches = ··· 2971 2970 2972 2971 for (i = 0; i < SCAN_OFFLOAD_MATCHING_CHANNELS_LEN_V1 * 8; i++) 2973 2972 if (matches[idx].matching_channels[i / 8] & (BIT(i % 8))) 2974 - match->channels[match->n_channels++] = 2973 + match->channels[n_channels++] = 2975 2974 mvm->nd_channels[i]->center_freq; 2976 2975 } 2976 + /* We may have ended up with fewer channels than we allocated. */ 2977 + match->n_channels = n_channels; 2977 2978 } 2978 2979 2979 2980 /** ··· 3056 3053 GFP_KERNEL); 3057 3054 if (!net_detect || !n_matches) 3058 3055 goto out_report_nd; 3056 + net_detect->n_matches = n_matches; 3057 + n_matches = 0; 3059 3058 3060 3059 for_each_set_bit(i, &matched_profiles, mvm->n_nd_match_sets) { 3061 3060 struct cfg80211_wowlan_nd_match *match; ··· 3071 3066 GFP_KERNEL); 3072 3067 if (!match) 3073 3068 goto out_report_nd; 3069 + match->n_channels = n_channels; 3074 3070 3075 - net_detect->matches[net_detect->n_matches++] = match; 3071 + net_detect->matches[n_matches++] = match; 3076 3072 3077 3073 /* We inverted the order of the SSIDs in the scan 3078 3074 * request, so invert the index here. ··· 3088 3082 3089 3083 iwl_mvm_query_set_freqs(mvm, d3_data->nd_results, match, i); 3090 3084 } 3085 + /* We may have fewer matches than we allocated. */ 3086 + net_detect->n_matches = n_matches; 3091 3087 3092 3088 out_report_nd: 3093 3089 wakeup.net_detect = net_detect;
+38 -3
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 1106 1106 iwlax210_2ax_cfg_so_jf_b0, iwl9462_name), 1107 1107 1108 1108 /* Bz */ 1109 - /* FIXME: need to change the naming according to the actual CRF */ 1110 1109 _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1111 1110 IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY, 1111 + IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY, 1112 1112 IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, 1113 + iwl_cfg_bz, iwl_ax201_name), 1114 + 1115 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1116 + IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY, 1117 + IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY, 1118 + IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, 1119 + iwl_cfg_bz, iwl_ax211_name), 1120 + 1121 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1122 + IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY, 1123 + IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY, 1124 + IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, 1125 + iwl_cfg_bz, iwl_fm_name), 1126 + 1127 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1128 + IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY, 1129 + IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY, 1130 + IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, 1131 + iwl_cfg_bz, iwl_wh_name), 1132 + 1133 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1134 + IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY, 1135 + IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY, 1136 + IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, 1137 + iwl_cfg_bz, iwl_ax201_name), 1138 + 1139 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1140 + IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY, 1141 + IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY, 1142 + IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, 1143 + iwl_cfg_bz, iwl_ax211_name), 1144 + 1145 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1146 + IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY, 1147 + IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY, 1113 1148 IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, 1114 1149 iwl_cfg_bz, iwl_fm_name), 1115 1150 1116 1151 _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 1117 1152 IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY, 1153 + IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY, 1118 1154 IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, 1119 - 
IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY, 1120 - iwl_cfg_bz, iwl_fm_name), 1155 + iwl_cfg_bz, iwl_wh_name), 1121 1156 1122 1157 /* Ga (Gl) */ 1123 1158 _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+1 -1
drivers/net/wireless/st/cw1200/cw1200_spi.c
··· 442 442 cw1200_core_release(self->core); 443 443 self->core = NULL; 444 444 } 445 + cw1200_spi_off(self, dev_get_platdata(&func->dev)); 445 446 } 446 - cw1200_spi_off(self, dev_get_platdata(&func->dev)); 447 447 } 448 448 449 449 static int __maybe_unused cw1200_spi_suspend(struct device *dev)
+1 -1
drivers/net/wwan/iosm/iosm_ipc_mmio.c
··· 104 104 break; 105 105 106 106 msleep(20); 107 - } while (retries-- > 0); 107 + } while (--retries > 0); 108 108 109 109 if (!retries) { 110 110 dev_err(ipc_mmio->dev, "invalid exec stage %X", stage);
+17 -9
drivers/net/wwan/t7xx/t7xx_state_monitor.c
··· 104 104 fsm_state_notify(ctl->md, state); 105 105 } 106 106 107 + static void fsm_release_command(struct kref *ref) 108 + { 109 + struct t7xx_fsm_command *cmd = container_of(ref, typeof(*cmd), refcnt); 110 + 111 + kfree(cmd); 112 + } 113 + 107 114 static void fsm_finish_command(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd, int result) 108 115 { 109 116 if (cmd->flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) { 110 - *cmd->ret = result; 111 - complete_all(cmd->done); 117 + cmd->result = result; 118 + complete_all(&cmd->done); 112 119 } 113 120 114 - kfree(cmd); 121 + kref_put(&cmd->refcnt, fsm_release_command); 115 122 } 116 123 117 124 static void fsm_del_kf_event(struct t7xx_fsm_event *event) ··· 482 475 483 476 int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id, unsigned int flag) 484 477 { 485 - DECLARE_COMPLETION_ONSTACK(done); 486 478 struct t7xx_fsm_command *cmd; 487 479 unsigned long flags; 488 480 int ret; ··· 493 487 INIT_LIST_HEAD(&cmd->entry); 494 488 cmd->cmd_id = cmd_id; 495 489 cmd->flag = flag; 490 + kref_init(&cmd->refcnt); 496 491 if (flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) { 497 - cmd->done = &done; 498 - cmd->ret = &ret; 492 + init_completion(&cmd->done); 493 + kref_get(&cmd->refcnt); 499 494 } 500 495 496 + kref_get(&cmd->refcnt); 501 497 spin_lock_irqsave(&ctl->command_lock, flags); 502 498 list_add_tail(&cmd->entry, &ctl->command_queue); 503 499 spin_unlock_irqrestore(&ctl->command_lock, flags); ··· 509 501 if (flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) { 510 502 unsigned long wait_ret; 511 503 512 - wait_ret = wait_for_completion_timeout(&done, 504 + wait_ret = wait_for_completion_timeout(&cmd->done, 513 505 msecs_to_jiffies(FSM_CMD_TIMEOUT_MS)); 514 - if (!wait_ret) 515 - return -ETIMEDOUT; 516 506 507 + ret = wait_ret ? cmd->result : -ETIMEDOUT; 508 + kref_put(&cmd->refcnt, fsm_release_command); 517 509 return ret; 518 510 } 519 511
+3 -2
drivers/net/wwan/t7xx/t7xx_state_monitor.h
··· 110 110 struct list_head entry; 111 111 enum t7xx_fsm_cmd_state cmd_id; 112 112 unsigned int flag; 113 - struct completion *done; 114 - int *ret; 113 + struct completion done; 114 + int result; 115 + struct kref refcnt; 115 116 }; 116 117 117 118 struct t7xx_fsm_notifier {
+4 -1
drivers/net/xen-netfront.c
··· 867 867 static int xennet_close(struct net_device *dev) 868 868 { 869 869 struct netfront_info *np = netdev_priv(dev); 870 - unsigned int num_queues = dev->real_num_tx_queues; 870 + unsigned int num_queues = np->queues ? dev->real_num_tx_queues : 0; 871 871 unsigned int i; 872 872 struct netfront_queue *queue; 873 873 netif_tx_stop_all_queues(np->netdev); ··· 881 881 static void xennet_destroy_queues(struct netfront_info *info) 882 882 { 883 883 unsigned int i; 884 + 885 + if (!info->queues) 886 + return; 884 887 885 888 for (i = 0; i < info->netdev->real_num_tx_queues; i++) { 886 889 struct netfront_queue *queue = &info->queues[i];
+1 -1
drivers/nvme/host/core.c
··· 2034 2034 * or smaller than a sector size yet, so catch this early and don't 2035 2035 * allow block I/O. 2036 2036 */ 2037 - if (head->lba_shift > PAGE_SHIFT || head->lba_shift < SECTOR_SHIFT) { 2037 + if (blk_validate_block_size(bs)) { 2038 2038 bs = (1 << 9); 2039 2039 valid = false; 2040 2040 }
+5
drivers/nvme/host/nvme.h
··· 173 173 * MSI (but not MSI-X) interrupts are broken and never fire. 174 174 */ 175 175 NVME_QUIRK_BROKEN_MSI = (1 << 21), 176 + 177 + /* 178 + * Align dma pool segment size to 512 bytes 179 + */ 180 + NVME_QUIRK_DMAPOOL_ALIGN_512 = (1 << 22), 176 181 }; 177 182 178 183 /*
+7 -2
drivers/nvme/host/pci.c
··· 2834 2834 2835 2835 static int nvme_setup_prp_pools(struct nvme_dev *dev) 2836 2836 { 2837 + size_t small_align = 256; 2838 + 2837 2839 dev->prp_page_pool = dma_pool_create("prp list page", dev->dev, 2838 2840 NVME_CTRL_PAGE_SIZE, 2839 2841 NVME_CTRL_PAGE_SIZE, 0); 2840 2842 if (!dev->prp_page_pool) 2841 2843 return -ENOMEM; 2842 2844 2845 + if (dev->ctrl.quirks & NVME_QUIRK_DMAPOOL_ALIGN_512) 2846 + small_align = 512; 2847 + 2843 2848 /* Optimisation for I/Os between 4k and 128k */ 2844 2849 dev->prp_small_pool = dma_pool_create("prp list 256", dev->dev, 2845 - 256, 256, 0); 2850 + 256, small_align, 0); 2846 2851 if (!dev->prp_small_pool) { 2847 2852 dma_pool_destroy(dev->prp_page_pool); 2848 2853 return -ENOMEM; ··· 3612 3607 { PCI_VDEVICE(REDHAT, 0x0010), /* Qemu emulated controller */ 3613 3608 .driver_data = NVME_QUIRK_BOGUS_NID, }, 3614 3609 { PCI_DEVICE(0x1217, 0x8760), /* O2 Micro 64GB Steam Deck */ 3615 - .driver_data = NVME_QUIRK_QDEPTH_ONE }, 3610 + .driver_data = NVME_QUIRK_DMAPOOL_ALIGN_512, }, 3616 3611 { PCI_DEVICE(0x126f, 0x2262), /* Silicon Motion generic */ 3617 3612 .driver_data = NVME_QUIRK_NO_DEEPEST_PS | 3618 3613 NVME_QUIRK_BOGUS_NID, },
+7 -11
drivers/nvme/host/tcp.c
··· 2024 2024 return __nvme_tcp_alloc_io_queues(ctrl); 2025 2025 } 2026 2026 2027 - static void nvme_tcp_destroy_io_queues(struct nvme_ctrl *ctrl, bool remove) 2028 - { 2029 - nvme_tcp_stop_io_queues(ctrl); 2030 - if (remove) 2031 - nvme_remove_io_tag_set(ctrl); 2032 - nvme_tcp_free_io_queues(ctrl); 2033 - } 2034 - 2035 2027 static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new) 2036 2028 { 2037 2029 int ret, nr_queues; ··· 2168 2176 nvme_sync_io_queues(ctrl); 2169 2177 nvme_tcp_stop_io_queues(ctrl); 2170 2178 nvme_cancel_tagset(ctrl); 2171 - if (remove) 2179 + if (remove) { 2172 2180 nvme_unquiesce_io_queues(ctrl); 2173 - nvme_tcp_destroy_io_queues(ctrl, remove); 2181 + nvme_remove_io_tag_set(ctrl); 2182 + } 2183 + nvme_tcp_free_io_queues(ctrl); 2174 2184 } 2175 2185 2176 2186 static void nvme_tcp_reconnect_or_remove(struct nvme_ctrl *ctrl, ··· 2261 2267 nvme_sync_io_queues(ctrl); 2262 2268 nvme_tcp_stop_io_queues(ctrl); 2263 2269 nvme_cancel_tagset(ctrl); 2264 - nvme_tcp_destroy_io_queues(ctrl, new); 2270 + if (new) 2271 + nvme_remove_io_tag_set(ctrl); 2272 + nvme_tcp_free_io_queues(ctrl); 2265 2273 } 2266 2274 destroy_admin: 2267 2275 nvme_stop_keep_alive(ctrl);
+5 -4
drivers/nvme/target/admin-cmd.c
··· 139 139 unsigned long idx; 140 140 141 141 ctrl = req->sq->ctrl; 142 - xa_for_each(&ctrl->subsys->namespaces, idx, ns) { 142 + nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) { 143 143 /* we don't have the right data for file backed ns */ 144 144 if (!ns->bdev) 145 145 continue; ··· 331 331 u32 count = 0; 332 332 333 333 if (!(req->cmd->get_log_page.lsp & NVME_ANA_LOG_RGO)) { 334 - xa_for_each(&ctrl->subsys->namespaces, idx, ns) 334 + nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) { 335 335 if (ns->anagrpid == grpid) 336 336 desc->nsids[count++] = cpu_to_le32(ns->nsid); 337 + } 337 338 } 338 339 339 340 desc->grpid = cpu_to_le32(grpid); ··· 773 772 goto out; 774 773 } 775 774 776 - xa_for_each(&ctrl->subsys->namespaces, idx, ns) { 775 + nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) { 777 776 if (ns->nsid <= min_endgid) 778 777 continue; 779 778 ··· 816 815 goto out; 817 816 } 818 817 819 - xa_for_each(&ctrl->subsys->namespaces, idx, ns) { 818 + nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) { 820 819 if (ns->nsid <= min_nsid) 821 820 continue; 822 821 if (match_css && req->ns->csi != req->cmd->identify.csi)
+9 -14
drivers/nvme/target/configfs.c
··· 810 810 NULL, 811 811 }; 812 812 813 - bool nvmet_subsys_nsid_exists(struct nvmet_subsys *subsys, u32 nsid) 814 - { 815 - struct config_item *ns_item; 816 - char name[12]; 817 - 818 - snprintf(name, sizeof(name), "%u", nsid); 819 - mutex_lock(&subsys->namespaces_group.cg_subsys->su_mutex); 820 - ns_item = config_group_find_item(&subsys->namespaces_group, name); 821 - mutex_unlock(&subsys->namespaces_group.cg_subsys->su_mutex); 822 - return ns_item != NULL; 823 - } 824 - 825 813 static void nvmet_ns_release(struct config_item *item) 826 814 { 827 815 struct nvmet_ns *ns = to_nvmet_ns(item); ··· 2242 2254 const char *page, size_t count) 2243 2255 { 2244 2256 struct list_head *entry; 2257 + char *old_nqn, *new_nqn; 2245 2258 size_t len; 2246 2259 2247 2260 len = strcspn(page, "\n"); 2248 2261 if (!len || len > NVMF_NQN_FIELD_LEN - 1) 2249 2262 return -EINVAL; 2263 + 2264 + new_nqn = kstrndup(page, len, GFP_KERNEL); 2265 + if (!new_nqn) 2266 + return -ENOMEM; 2250 2267 2251 2268 down_write(&nvmet_config_sem); 2252 2269 list_for_each(entry, &nvmet_subsystems_group.cg_children) { ··· 2261 2268 if (!strncmp(config_item_name(item), page, len)) { 2262 2269 pr_err("duplicate NQN %s\n", config_item_name(item)); 2263 2270 up_write(&nvmet_config_sem); 2271 + kfree(new_nqn); 2264 2272 return -EINVAL; 2265 2273 } 2266 2274 } 2267 - memset(nvmet_disc_subsys->subsysnqn, 0, NVMF_NQN_FIELD_LEN); 2268 - memcpy(nvmet_disc_subsys->subsysnqn, page, len); 2275 + old_nqn = nvmet_disc_subsys->subsysnqn; 2276 + nvmet_disc_subsys->subsysnqn = new_nqn; 2269 2277 up_write(&nvmet_config_sem); 2270 2278 2279 + kfree(old_nqn); 2271 2280 return len; 2272 2281 } 2273 2282
+63 -45
drivers/nvme/target/core.c
··· 127 127 unsigned long idx; 128 128 u32 nsid = 0; 129 129 130 - xa_for_each(&subsys->namespaces, idx, cur) 130 + nvmet_for_each_enabled_ns(&subsys->namespaces, idx, cur) 131 131 nsid = cur->nsid; 132 132 133 133 return nsid; ··· 441 441 struct nvmet_subsys *subsys = nvmet_req_subsys(req); 442 442 443 443 req->ns = xa_load(&subsys->namespaces, nsid); 444 - if (unlikely(!req->ns)) { 444 + if (unlikely(!req->ns || !req->ns->enabled)) { 445 445 req->error_loc = offsetof(struct nvme_common_command, nsid); 446 - if (nvmet_subsys_nsid_exists(subsys, nsid)) 447 - return NVME_SC_INTERNAL_PATH_ERROR; 448 - return NVME_SC_INVALID_NS | NVME_STATUS_DNR; 446 + if (!req->ns) /* ns doesn't exist! */ 447 + return NVME_SC_INVALID_NS | NVME_STATUS_DNR; 448 + 449 + /* ns exists but it's disabled */ 450 + req->ns = NULL; 451 + return NVME_SC_INTERNAL_PATH_ERROR; 449 452 } 450 453 451 454 percpu_ref_get(&req->ns->ref); ··· 586 583 goto out_unlock; 587 584 588 585 ret = -EMFILE; 589 - if (subsys->nr_namespaces == NVMET_MAX_NAMESPACES) 590 - goto out_unlock; 591 586 592 587 ret = nvmet_bdev_ns_enable(ns); 593 588 if (ret == -ENOTBLK) ··· 600 599 list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) 601 600 nvmet_p2pmem_ns_add_p2p(ctrl, ns); 602 601 603 - ret = percpu_ref_init(&ns->ref, nvmet_destroy_namespace, 604 - 0, GFP_KERNEL); 605 - if (ret) 606 - goto out_dev_put; 607 - 608 - if (ns->nsid > subsys->max_nsid) 609 - subsys->max_nsid = ns->nsid; 610 - 611 - ret = xa_insert(&subsys->namespaces, ns->nsid, ns, GFP_KERNEL); 612 - if (ret) 613 - goto out_restore_subsys_maxnsid; 614 - 615 602 if (ns->pr.enable) { 616 603 ret = nvmet_pr_init_ns(ns); 617 604 if (ret) 618 - goto out_remove_from_subsys; 605 + goto out_dev_put; 619 606 } 620 - 621 - subsys->nr_namespaces++; 622 607 623 608 nvmet_ns_changed(subsys, ns->nsid); 624 609 ns->enabled = true; 610 + xa_set_mark(&subsys->namespaces, ns->nsid, NVMET_NS_ENABLED); 625 611 ret = 0; 626 612 out_unlock: 627 613 mutex_unlock(&subsys->lock); 
628 614 return ret; 629 - 630 - out_remove_from_subsys: 631 - xa_erase(&subsys->namespaces, ns->nsid); 632 - out_restore_subsys_maxnsid: 633 - subsys->max_nsid = nvmet_max_nsid(subsys); 634 - percpu_ref_exit(&ns->ref); 635 615 out_dev_put: 636 616 list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) 637 617 pci_dev_put(radix_tree_delete(&ctrl->p2p_ns_map, ns->nsid)); ··· 631 649 goto out_unlock; 632 650 633 651 ns->enabled = false; 634 - xa_erase(&ns->subsys->namespaces, ns->nsid); 635 - if (ns->nsid == subsys->max_nsid) 636 - subsys->max_nsid = nvmet_max_nsid(subsys); 652 + xa_clear_mark(&subsys->namespaces, ns->nsid, NVMET_NS_ENABLED); 637 653 638 654 list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) 639 655 pci_dev_put(radix_tree_delete(&ctrl->p2p_ns_map, ns->nsid)); 656 + 657 + mutex_unlock(&subsys->lock); 658 + 659 + if (ns->pr.enable) 660 + nvmet_pr_exit_ns(ns); 661 + 662 + mutex_lock(&subsys->lock); 663 + nvmet_ns_changed(subsys, ns->nsid); 664 + nvmet_ns_dev_disable(ns); 665 + out_unlock: 666 + mutex_unlock(&subsys->lock); 667 + } 668 + 669 + void nvmet_ns_free(struct nvmet_ns *ns) 670 + { 671 + struct nvmet_subsys *subsys = ns->subsys; 672 + 673 + nvmet_ns_disable(ns); 674 + 675 + mutex_lock(&subsys->lock); 676 + 677 + xa_erase(&subsys->namespaces, ns->nsid); 678 + if (ns->nsid == subsys->max_nsid) 679 + subsys->max_nsid = nvmet_max_nsid(subsys); 640 680 641 681 mutex_unlock(&subsys->lock); 642 682 ··· 675 671 wait_for_completion(&ns->disable_done); 676 672 percpu_ref_exit(&ns->ref); 677 673 678 - if (ns->pr.enable) 679 - nvmet_pr_exit_ns(ns); 680 - 681 674 mutex_lock(&subsys->lock); 682 - 683 675 subsys->nr_namespaces--; 684 - nvmet_ns_changed(subsys, ns->nsid); 685 - nvmet_ns_dev_disable(ns); 686 - out_unlock: 687 676 mutex_unlock(&subsys->lock); 688 - } 689 - 690 - void nvmet_ns_free(struct nvmet_ns *ns) 691 - { 692 - nvmet_ns_disable(ns); 693 677 694 678 down_write(&nvmet_ana_sem); 695 679 nvmet_ana_group_enabled[ns->anagrpid]--; ··· 691 699 { 
692 700 struct nvmet_ns *ns; 693 701 702 + mutex_lock(&subsys->lock); 703 + 704 + if (subsys->nr_namespaces == NVMET_MAX_NAMESPACES) 705 + goto out_unlock; 706 + 694 707 ns = kzalloc(sizeof(*ns), GFP_KERNEL); 695 708 if (!ns) 696 - return NULL; 709 + goto out_unlock; 697 710 698 711 init_completion(&ns->disable_done); 699 712 700 713 ns->nsid = nsid; 701 714 ns->subsys = subsys; 715 + 716 + if (percpu_ref_init(&ns->ref, nvmet_destroy_namespace, 0, GFP_KERNEL)) 717 + goto out_free; 718 + 719 + if (ns->nsid > subsys->max_nsid) 720 + subsys->max_nsid = nsid; 721 + 722 + if (xa_insert(&subsys->namespaces, ns->nsid, ns, GFP_KERNEL)) 723 + goto out_exit; 724 + 725 + subsys->nr_namespaces++; 726 + 727 + mutex_unlock(&subsys->lock); 702 728 703 729 down_write(&nvmet_ana_sem); 704 730 ns->anagrpid = NVMET_DEFAULT_ANA_GRPID; ··· 728 718 ns->csi = NVME_CSI_NVM; 729 719 730 720 return ns; 721 + out_exit: 722 + subsys->max_nsid = nvmet_max_nsid(subsys); 723 + percpu_ref_exit(&ns->ref); 724 + out_free: 725 + kfree(ns); 726 + out_unlock: 727 + mutex_unlock(&subsys->lock); 728 + return NULL; 731 729 } 732 730 733 731 static void nvmet_update_sq_head(struct nvmet_req *req) ··· 1412 1394 1413 1395 ctrl->p2p_client = get_device(req->p2p_client); 1414 1396 1415 - xa_for_each(&ctrl->subsys->namespaces, idx, ns) 1397 + nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) 1416 1398 nvmet_p2pmem_ns_add_p2p(ctrl, ns); 1417 1399 } 1418 1400
+1 -1
drivers/nvme/target/io-cmd-bdev.c
··· 36 36 */ 37 37 id->nsfeat |= 1 << 4; 38 38 /* NPWG = Namespace Preferred Write Granularity. 0's based */ 39 - id->npwg = lpp0b; 39 + id->npwg = to0based(bdev_io_min(bdev) / bdev_logical_block_size(bdev)); 40 40 /* NPWA = Namespace Preferred Write Alignment. 0's based */ 41 41 id->npwa = id->npwg; 42 42 /* NPDG = Namespace Preferred Deallocate Granularity. 0's based */
+7
drivers/nvme/target/nvmet.h
··· 24 24 25 25 #define NVMET_DEFAULT_VS NVME_VS(2, 1, 0) 26 26 27 + #define NVMET_NS_ENABLED XA_MARK_1 27 28 #define NVMET_ASYNC_EVENTS 4 28 29 #define NVMET_ERROR_LOG_SLOTS 128 29 30 #define NVMET_NO_ERROR_LOC ((u16)-1) ··· 33 32 #define NVMET_SN_MAX_SIZE 20 34 33 #define NVMET_FR_MAX_SIZE 8 35 34 #define NVMET_PR_LOG_QUEUE_SIZE 64 35 + 36 + #define nvmet_for_each_ns(xa, index, entry) \ 37 + xa_for_each(xa, index, entry) 38 + 39 + #define nvmet_for_each_enabled_ns(xa, index, entry) \ 40 + xa_for_each_marked(xa, index, entry, NVMET_NS_ENABLED) 36 41 37 42 /* 38 43 * Supported optional AENs:
+4 -4
drivers/nvme/target/pr.c
··· 60 60 goto success; 61 61 } 62 62 63 - xa_for_each(&ctrl->subsys->namespaces, idx, ns) { 63 + nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) { 64 64 if (ns->pr.enable) 65 65 WRITE_ONCE(ns->pr.notify_mask, mask); 66 66 } ··· 1056 1056 * nvmet_pr_init_ns(), see more details in nvmet_ns_enable(). 1057 1057 * So just check ns->pr.enable. 1058 1058 */ 1059 - xa_for_each(&subsys->namespaces, idx, ns) { 1059 + nvmet_for_each_enabled_ns(&subsys->namespaces, idx, ns) { 1060 1060 if (ns->pr.enable) { 1061 1061 ret = nvmet_pr_alloc_and_insert_pc_ref(ns, ctrl->cntlid, 1062 1062 &ctrl->hostid); ··· 1067 1067 return 0; 1068 1068 1069 1069 free_per_ctrl_refs: 1070 - xa_for_each(&subsys->namespaces, idx, ns) { 1070 + nvmet_for_each_enabled_ns(&subsys->namespaces, idx, ns) { 1071 1071 if (ns->pr.enable) { 1072 1072 pc_ref = xa_erase(&ns->pr_per_ctrl_refs, ctrl->cntlid); 1073 1073 if (pc_ref) ··· 1087 1087 kfifo_free(&ctrl->pr_log_mgr.log_queue); 1088 1088 mutex_destroy(&ctrl->pr_log_mgr.lock); 1089 1089 1090 - xa_for_each(&ctrl->subsys->namespaces, idx, ns) { 1090 + nvmet_for_each_enabled_ns(&ctrl->subsys->namespaces, idx, ns) { 1091 1091 if (ns->pr.enable) { 1092 1092 pc_ref = xa_erase(&ns->pr_per_ctrl_refs, ctrl->cntlid); 1093 1093 if (pc_ref)
+3 -2
drivers/of/address.c
··· 459 459 } 460 460 if (ranges == NULL || rlen == 0) { 461 461 offset = of_read_number(addr, na); 462 - memset(addr, 0, pna * 4); 462 + /* set address to zero, pass flags through */ 463 + memset(addr + pbus->flag_cells, 0, (pna - pbus->flag_cells) * 4); 463 464 pr_debug("empty ranges; 1:1 translation\n"); 464 465 goto finish; 465 466 } ··· 620 619 if (ret < 0) 621 620 return of_get_parent(np); 622 621 623 - return of_node_get(args.np); 622 + return args.np; 624 623 } 625 624 #endif 626 625
+12 -6
drivers/of/base.c
··· 88 88 } 89 89 90 90 #define EXCLUDED_DEFAULT_CELLS_PLATFORMS ( \ 91 - IS_ENABLED(CONFIG_SPARC) \ 91 + IS_ENABLED(CONFIG_SPARC) || \ 92 + of_find_compatible_node(NULL, NULL, "coreboot") \ 92 93 ) 93 94 94 95 int of_bus_n_addr_cells(struct device_node *np) ··· 1508 1507 map_len--; 1509 1508 1510 1509 /* Check if not found */ 1511 - if (!new) 1510 + if (!new) { 1511 + ret = -EINVAL; 1512 1512 goto put; 1513 + } 1513 1514 1514 1515 if (!of_device_is_available(new)) 1515 1516 match = 0; ··· 1521 1518 goto put; 1522 1519 1523 1520 /* Check for malformed properties */ 1524 - if (WARN_ON(new_size > MAX_PHANDLE_ARGS)) 1521 + if (WARN_ON(new_size > MAX_PHANDLE_ARGS) || 1522 + map_len < new_size) { 1523 + ret = -EINVAL; 1525 1524 goto put; 1526 - if (map_len < new_size) 1527 - goto put; 1525 + } 1528 1526 1529 1527 /* Move forward by new node's #<list>-cells amount */ 1530 1528 map += new_size; 1531 1529 map_len -= new_size; 1532 1530 } 1533 - if (!match) 1531 + if (!match) { 1532 + ret = -ENOENT; 1534 1533 goto put; 1534 + } 1535 1535 1536 1536 /* Get the <list>-map-pass-thru property (optional) */ 1537 1537 pass = of_get_property(cur, pass_name, NULL);
+8 -1
drivers/of/empty_root.dts
··· 2 2 /dts-v1/; 3 3 4 4 / { 5 - 5 + /* 6 + * #address-cells/#size-cells are required properties at root node. 7 + * Use 2 cells for both address cells and size cells in order to fully 8 + * support 64-bit addresses and sizes on systems using this empty root 9 + * node. 10 + */ 11 + #address-cells = <0x02>; 12 + #size-cells = <0x02>; 6 13 };
+2
drivers/of/irq.c
··· 111 111 else 112 112 np = of_find_node_by_phandle(be32_to_cpup(imap)); 113 113 imap++; 114 + len--; 114 115 115 116 /* Check if not found */ 116 117 if (!np) { ··· 355 354 return of_irq_parse_oldworld(device, index, out_irq); 356 355 357 356 /* Get the reg property (if any) */ 357 + addr_len = 0; 358 358 addr = of_get_property(device, "reg", &addr_len); 359 359 360 360 /* Prevent out-of-bounds read in case of longer interrupt parent address size */
-2
drivers/of/property.c
··· 1286 1286 DEFINE_SIMPLE_PROP(mboxes, "mboxes", "#mbox-cells") 1287 1287 DEFINE_SIMPLE_PROP(io_channels, "io-channels", "#io-channel-cells") 1288 1288 DEFINE_SIMPLE_PROP(io_backends, "io-backends", "#io-backend-cells") 1289 - DEFINE_SIMPLE_PROP(interrupt_parent, "interrupt-parent", NULL) 1290 1289 DEFINE_SIMPLE_PROP(dmas, "dmas", "#dma-cells") 1291 1290 DEFINE_SIMPLE_PROP(power_domains, "power-domains", "#power-domain-cells") 1292 1291 DEFINE_SIMPLE_PROP(hwlocks, "hwlocks", "#hwlock-cells") ··· 1431 1432 { .parse_prop = parse_mboxes, }, 1432 1433 { .parse_prop = parse_io_channels, }, 1433 1434 { .parse_prop = parse_io_backends, }, 1434 - { .parse_prop = parse_interrupt_parent, }, 1435 1435 { .parse_prop = parse_dmas, .optional = true, }, 1436 1436 { .parse_prop = parse_power_domains, }, 1437 1437 { .parse_prop = parse_hwlocks, },
+2
drivers/of/unittest-data/tests-address.dtsi
··· 114 114 device_type = "pci"; 115 115 ranges = <0x82000000 0 0xe8000000 0 0xe8000000 0 0x7f00000>, 116 116 <0x81000000 0 0x00000000 0 0xefff0000 0 0x0010000>; 117 + dma-ranges = <0x43000000 0x10 0x00 0x00 0x00 0x00 0x10000000>; 117 118 reg = <0x00000000 0xd1070000 0x20000>; 118 119 119 120 pci@0,0 { ··· 143 142 #size-cells = <0x01>; 144 143 ranges = <0xa0000000 0 0 0 0x2000000>, 145 144 <0xb0000000 1 0 0 0x1000000>; 145 + dma-ranges = <0xc0000000 0x43000000 0x10 0x00 0x10000000>; 146 146 147 147 dev@e0000000 { 148 148 reg = <0xa0001000 0x1000>,
+39
drivers/of/unittest.c
··· 1213 1213 of_node_put(np); 1214 1214 } 1215 1215 1216 + static void __init of_unittest_pci_empty_dma_ranges(void) 1217 + { 1218 + struct device_node *np; 1219 + struct of_pci_range range; 1220 + struct of_pci_range_parser parser; 1221 + 1222 + if (!IS_ENABLED(CONFIG_PCI)) 1223 + return; 1224 + 1225 + np = of_find_node_by_path("/testcase-data/address-tests2/pcie@d1070000/pci@0,0/dev@0,0/local-bus@0"); 1226 + if (!np) { 1227 + pr_err("missing testcase data\n"); 1228 + return; 1229 + } 1230 + 1231 + if (of_pci_dma_range_parser_init(&parser, np)) { 1232 + pr_err("missing dma-ranges property\n"); 1233 + return; 1234 + } 1235 + 1236 + /* 1237 + * Get the dma-ranges from the device tree 1238 + */ 1239 + for_each_of_pci_range(&parser, &range) { 1240 + unittest(range.size == 0x10000000, 1241 + "for_each_of_pci_range wrong size on node %pOF size=%llx\n", 1242 + np, range.size); 1243 + unittest(range.cpu_addr == 0x00000000, 1244 + "for_each_of_pci_range wrong CPU addr (%llx) on node %pOF", 1245 + range.cpu_addr, np); 1246 + unittest(range.pci_addr == 0xc0000000, 1247 + "for_each_of_pci_range wrong DMA addr (%llx) on node %pOF", 1248 + range.pci_addr, np); 1249 + } 1250 + 1251 + of_node_put(np); 1252 + } 1253 + 1216 1254 static void __init of_unittest_bus_ranges(void) 1217 1255 { 1218 1256 struct device_node *np; ··· 4310 4272 of_unittest_dma_get_max_cpu_address(); 4311 4273 of_unittest_parse_dma_ranges(); 4312 4274 of_unittest_pci_dma_ranges(); 4275 + of_unittest_pci_empty_dma_ranges(); 4313 4276 of_unittest_bus_ranges(); 4314 4277 of_unittest_bus_3cell_ranges(); 4315 4278 of_unittest_reg();
+5 -2
drivers/pci/msi/irqdomain.c
··· 350 350 351 351 domain = dev_get_msi_domain(&pdev->dev); 352 352 353 - if (!domain || !irq_domain_is_hierarchy(domain)) 354 - return mode == ALLOW_LEGACY; 353 + if (!domain || !irq_domain_is_hierarchy(domain)) { 354 + if (IS_ENABLED(CONFIG_PCI_MSI_ARCH_FALLBACKS)) 355 + return mode == ALLOW_LEGACY; 356 + return false; 357 + } 355 358 356 359 if (!irq_domain_is_msi_parent(domain)) { 357 360 /*
+4
drivers/pci/msi/msi.c
··· 433 433 if (WARN_ON_ONCE(dev->msi_enabled)) 434 434 return -EINVAL; 435 435 436 + /* Test for the availability of MSI support */ 437 + if (!pci_msi_domain_supports(dev, 0, ALLOW_LEGACY)) 438 + return -ENOTSUPP; 439 + 436 440 nvec = pci_msi_vec_count(dev); 437 441 if (nvec < 0) 438 442 return nvec;
+4 -2
drivers/pci/pci.c
··· 6232 6232 pcie_capability_read_dword(dev, PCI_EXP_LNKCAP2, &lnkcap2); 6233 6233 speeds = lnkcap2 & PCI_EXP_LNKCAP2_SLS; 6234 6234 6235 + /* Ignore speeds higher than Max Link Speed */ 6236 + pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap); 6237 + speeds &= GENMASK(lnkcap & PCI_EXP_LNKCAP_SLS, 0); 6238 + 6235 6239 /* PCIe r3.0-compliant */ 6236 6240 if (speeds) 6237 6241 return speeds; 6238 - 6239 - pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap); 6240 6242 6241 6243 /* Synthesize from the Max Link Speed field */ 6242 6244 if ((lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_5_0GB)
+3 -1
drivers/pci/pcie/portdrv.c
··· 265 265 (pcie_ports_dpc_native || (services & PCIE_PORT_SERVICE_AER))) 266 266 services |= PCIE_PORT_SERVICE_DPC; 267 267 268 + /* Enable bandwidth control if more than one speed is supported. */ 268 269 if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM || 269 270 pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) { 270 271 u32 linkcap; 271 272 272 273 pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &linkcap); 273 - if (linkcap & PCI_EXP_LNKCAP_LBNC) 274 + if (linkcap & PCI_EXP_LNKCAP_LBNC && 275 + hweight8(dev->supported_speeds) > 1) 274 276 services |= PCIE_PORT_SERVICE_BWCTRL; 275 277 } 276 278
+12 -10
drivers/perf/riscv_pmu_sbi.c
··· 507 507 { 508 508 u32 type = event->attr.type; 509 509 u64 config = event->attr.config; 510 - u64 raw_config_val; 511 - int ret; 510 + int ret = -ENOENT; 512 511 513 512 /* 514 513 * Ensure we are finished checking standard hardware events for ··· 527 528 case PERF_TYPE_RAW: 528 529 /* 529 530 * As per SBI specification, the upper 16 bits must be unused 530 - * for a raw event. 531 + * for a hardware raw event. 531 532 * Bits 63:62 are used to distinguish between raw events 532 533 * 00 - Hardware raw event 533 534 * 10 - SBI firmware events 534 535 * 11 - RISC-V platform specific firmware event 535 536 */ 536 - raw_config_val = config & RISCV_PMU_RAW_EVENT_MASK; 537 + 537 538 switch (config >> 62) { 538 539 case 0: 539 - ret = RISCV_PMU_RAW_EVENT_IDX; 540 - *econfig = raw_config_val; 540 + /* Return an error if any of bits [48-63] are set, as that is not allowed by the spec */ 541 + if (!(config & ~RISCV_PMU_RAW_EVENT_MASK)) { 542 + *econfig = config & RISCV_PMU_RAW_EVENT_MASK; 543 + ret = RISCV_PMU_RAW_EVENT_IDX; 544 + } 541 545 break; 542 546 case 2: 543 - ret = (raw_config_val & 0xFFFF) | 544 - (SBI_PMU_EVENT_TYPE_FW << 16); 547 + ret = (config & 0xFFFF) | (SBI_PMU_EVENT_TYPE_FW << 16); 545 548 break; 546 549 case 3: 547 550 /* ··· 552 551 * Event data - raw event encoding 553 552 */ 554 553 ret = SBI_PMU_EVENT_TYPE_FW << 16 | RISCV_PLAT_FW_EVENT; 555 - *econfig = config & RISCV_PMU_PLAT_FW_EVENT_MASK; 555 + break; 556 + default: 556 557 break; 557 558 } 558 559 break; 559 560 default: 560 - ret = -ENOENT; 561 561 break; 562 562 } 563 563
+6
drivers/phy/broadcom/phy-brcm-usb-init-synopsys.c
··· 325 325 void __iomem *ctrl = params->regs[BRCM_REGS_CTRL]; 326 326 327 327 USB_CTRL_UNSET(ctrl, USB_PM, XHC_S2_CLK_SWITCH_EN); 328 + 329 + /* 330 + * The PHY might be in a bad state if it is already powered 331 + * up. Toggle the power just in case. 332 + */ 333 + USB_CTRL_SET(ctrl, USB_PM, USB_PWRDN); 328 334 USB_CTRL_UNSET(ctrl, USB_PM, USB_PWRDN); 329 335 330 336 /* 1 millisecond - for USB clocks to settle down */
+1 -2
drivers/phy/freescale/phy-fsl-samsung-hdmi.c
··· 424 424 * Fvco = (M * f_ref) / P, 425 425 * where f_ref is 24MHz. 426 426 */ 427 - tmp = (u64)_m * 24 * MHZ; 428 - do_div(tmp, _p); 427 + tmp = div64_ul((u64)_m * 24 * MHZ, _p); 429 428 if (tmp < 750 * MHZ || 430 429 tmp > 3000 * MHZ) 431 430 continue;
+1
drivers/phy/mediatek/Kconfig
··· 65 65 depends on ARCH_MEDIATEK || COMPILE_TEST 66 66 depends on COMMON_CLK 67 67 depends on OF 68 + depends on REGULATOR 68 69 select GENERIC_PHY 69 70 help 70 71 Support HDMI PHY for Mediatek SoCs.
+13 -8
drivers/phy/phy-core.c
··· 145 145 return phy_provider; 146 146 147 147 for_each_child_of_node(phy_provider->children, child) 148 - if (child == node) 148 + if (child == node) { 149 + of_node_put(child); 149 150 return phy_provider; 151 + } 150 152 } 151 153 152 154 return ERR_PTR(-EPROBE_DEFER); ··· 631 629 return ERR_PTR(-ENODEV); 632 630 633 631 /* This phy type handled by the usb-phy subsystem for now */ 634 - if (of_device_is_compatible(args.np, "usb-nop-xceiv")) 635 - return ERR_PTR(-ENODEV); 632 + if (of_device_is_compatible(args.np, "usb-nop-xceiv")) { 633 + phy = ERR_PTR(-ENODEV); 634 + goto out_put_node; 635 + } 636 636 637 637 mutex_lock(&phy_provider_mutex); 638 638 phy_provider = of_phy_provider_lookup(args.np); ··· 656 652 657 653 out_unlock: 658 654 mutex_unlock(&phy_provider_mutex); 655 + out_put_node: 659 656 of_node_put(args.np); 660 657 661 658 return phy; ··· 742 737 if (!phy) 743 738 return; 744 739 745 - r = devres_destroy(dev, devm_phy_release, devm_phy_match, phy); 740 + r = devres_release(dev, devm_phy_release, devm_phy_match, phy); 746 741 dev_WARN_ONCE(dev, r, "couldn't find PHY resource\n"); 747 742 } 748 743 EXPORT_SYMBOL_GPL(devm_phy_put); ··· 1126 1121 { 1127 1122 int r; 1128 1123 1129 - r = devres_destroy(dev, devm_phy_consume, devm_phy_match, phy); 1124 + r = devres_release(dev, devm_phy_consume, devm_phy_match, phy); 1130 1125 dev_WARN_ONCE(dev, r, "couldn't find PHY resource\n"); 1131 1126 } 1132 1127 EXPORT_SYMBOL_GPL(devm_phy_destroy); ··· 1264 1259 * of_phy_provider_unregister to unregister the phy provider. 
1265 1260 */ 1266 1261 void devm_of_phy_provider_unregister(struct device *dev, 1267 - struct phy_provider *phy_provider) 1262 + struct phy_provider *phy_provider) 1268 1263 { 1269 1264 int r; 1270 1265 1271 - r = devres_destroy(dev, devm_phy_provider_release, devm_phy_match, 1272 - phy_provider); 1266 + r = devres_release(dev, devm_phy_provider_release, devm_phy_match, 1267 + phy_provider); 1273 1268 dev_WARN_ONCE(dev, r, "couldn't find PHY provider device resource\n"); 1274 1269 } 1275 1270 EXPORT_SYMBOL_GPL(devm_of_phy_provider_unregister);
+1 -1
drivers/phy/qualcomm/phy-qcom-qmp-usb.c
··· 1052 1052 QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FASTLOCK_FO_GAIN, 0x2f), 1053 1053 QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FASTLOCK_COUNT_LOW, 0xff), 1054 1054 QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FASTLOCK_COUNT_HIGH, 0x0f), 1055 - QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_SO_GAIN, 0x0a), 1055 + QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FO_GAIN, 0x0a), 1056 1056 QMP_PHY_INIT_CFG(QSERDES_V5_RX_VGA_CAL_CNTRL1, 0x54), 1057 1057 QMP_PHY_INIT_CFG(QSERDES_V5_RX_VGA_CAL_CNTRL2, 0x0f), 1058 1058 QMP_PHY_INIT_CFG(QSERDES_V5_RX_RX_EQU_ADAPTOR_CNTRL2, 0x0f),
+1 -1
drivers/phy/rockchip/phy-rockchip-naneng-combphy.c
··· 309 309 310 310 priv->ext_refclk = device_property_present(dev, "rockchip,ext-refclk"); 311 311 312 - priv->phy_rst = devm_reset_control_array_get_exclusive(dev); 312 + priv->phy_rst = devm_reset_control_get(dev, "phy"); 313 313 if (IS_ERR(priv->phy_rst)) 314 314 return dev_err_probe(dev, PTR_ERR(priv->phy_rst), "failed to get phy reset\n"); 315 315
+2 -1
drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
··· 1101 1101 return dev_err_probe(dev, PTR_ERR(hdptx->grf), 1102 1102 "Could not get GRF syscon\n"); 1103 1103 1104 + platform_set_drvdata(pdev, hdptx); 1105 + 1104 1106 ret = devm_pm_runtime_enable(dev); 1105 1107 if (ret) 1106 1108 return dev_err_probe(dev, ret, "Failed to enable runtime PM\n"); ··· 1112 1110 return dev_err_probe(dev, PTR_ERR(hdptx->phy), 1113 1111 "Failed to create HDMI PHY\n"); 1114 1112 1115 - platform_set_drvdata(pdev, hdptx); 1116 1113 phy_set_drvdata(hdptx->phy, hdptx); 1117 1114 phy_set_bus_width(hdptx->phy, 8); 1118 1115
+15 -6
drivers/phy/st/phy-stm32-combophy.c
··· 122 122 u32 max_vswing = imp_lookup[imp_size - 1].vswing[vswing_size - 1]; 123 123 u32 min_vswing = imp_lookup[0].vswing[0]; 124 124 u32 val; 125 + u32 regval; 125 126 126 127 if (!of_property_read_u32(combophy->dev->of_node, "st,output-micro-ohms", &val)) { 127 128 if (val < min_imp || val > max_imp) { ··· 130 129 return -EINVAL; 131 130 } 132 131 133 - for (imp_of = 0; imp_of < ARRAY_SIZE(imp_lookup); imp_of++) 134 - if (imp_lookup[imp_of].microohm <= val) 132 + regval = 0; 133 + for (imp_of = 0; imp_of < ARRAY_SIZE(imp_lookup); imp_of++) { 134 + if (imp_lookup[imp_of].microohm <= val) { 135 + regval = FIELD_PREP(STM32MP25_PCIEPRG_IMPCTRL_OHM, imp_of); 135 136 break; 137 + } 138 + } 136 139 137 140 dev_dbg(combophy->dev, "Set %u micro-ohms output impedance\n", 138 141 imp_lookup[imp_of].microohm); 139 142 140 143 regmap_update_bits(combophy->regmap, SYSCFG_PCIEPRGCR, 141 144 STM32MP25_PCIEPRG_IMPCTRL_OHM, 142 - FIELD_PREP(STM32MP25_PCIEPRG_IMPCTRL_OHM, imp_of)); 145 + regval); 143 146 } else { 144 147 regmap_read(combophy->regmap, SYSCFG_PCIEPRGCR, &val); 145 148 imp_of = FIELD_GET(STM32MP25_PCIEPRG_IMPCTRL_OHM, val); ··· 155 150 return -EINVAL; 156 151 } 157 152 158 - for (vswing_of = 0; vswing_of < ARRAY_SIZE(imp_lookup[imp_of].vswing); vswing_of++) 159 - if (imp_lookup[imp_of].vswing[vswing_of] >= val) 153 + regval = 0; 154 + for (vswing_of = 0; vswing_of < ARRAY_SIZE(imp_lookup[imp_of].vswing); vswing_of++) { 155 + if (imp_lookup[imp_of].vswing[vswing_of] >= val) { 156 + regval = FIELD_PREP(STM32MP25_PCIEPRG_IMPCTRL_VSWING, vswing_of); 160 157 break; 158 + } 159 + } 161 160 162 161 dev_dbg(combophy->dev, "Set %u microvolt swing\n", 163 162 imp_lookup[imp_of].vswing[vswing_of]); 164 163 165 164 regmap_update_bits(combophy->regmap, SYSCFG_PCIEPRGCR, 166 165 STM32MP25_PCIEPRG_IMPCTRL_VSWING, 167 - FIELD_PREP(STM32MP25_PCIEPRG_IMPCTRL_VSWING, vswing_of)); 166 + regval); 168 167 } 169 168 170 169 return 0;
+6
drivers/pinctrl/pinctrl-mcp23s08.c
··· 86 86 .num_reg_defaults = ARRAY_SIZE(mcp23x08_defaults), 87 87 .cache_type = REGCACHE_FLAT, 88 88 .max_register = MCP_OLAT, 89 + .disable_locking = true, /* mcp->lock protects the regmap */ 89 90 }; 90 91 EXPORT_SYMBOL_GPL(mcp23x08_regmap); 91 92 ··· 133 132 .num_reg_defaults = ARRAY_SIZE(mcp23x17_defaults), 134 133 .cache_type = REGCACHE_FLAT, 135 134 .val_format_endian = REGMAP_ENDIAN_LITTLE, 135 + .disable_locking = true, /* mcp->lock protects the regmap */ 136 136 }; 137 137 EXPORT_SYMBOL_GPL(mcp23x17_regmap); 138 138 ··· 230 228 231 229 switch (param) { 232 230 case PIN_CONFIG_BIAS_PULL_UP: 231 + mutex_lock(&mcp->lock); 233 232 ret = mcp_read(mcp, MCP_GPPU, &data); 233 + mutex_unlock(&mcp->lock); 234 234 if (ret < 0) 235 235 return ret; 236 236 status = (data & BIT(pin)) ? 1 : 0; ··· 261 257 262 258 switch (param) { 263 259 case PIN_CONFIG_BIAS_PULL_UP: 260 + mutex_lock(&mcp->lock); 264 261 ret = mcp_set_bit(mcp, MCP_GPPU, pin, arg); 262 + mutex_unlock(&mcp->lock); 265 263 break; 266 264 default: 267 265 dev_dbg(mcp->dev, "Invalid config param %04x\n", param);
+2 -2
drivers/platform/chrome/cros_ec_lpc.c
··· 707 707 /* Framework Laptop (12th Gen Intel Core) */ 708 708 .matches = { 709 709 DMI_MATCH(DMI_SYS_VENDOR, "Framework"), 710 - DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "12th Gen Intel Core"), 710 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Laptop (12th Gen Intel Core)"), 711 711 }, 712 712 .driver_data = (void *)&framework_laptop_mec_lpc_driver_data, 713 713 }, ··· 715 715 /* Framework Laptop (13th Gen Intel Core) */ 716 716 .matches = { 717 717 DMI_MATCH(DMI_SYS_VENDOR, "Framework"), 718 - DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "13th Gen Intel Core"), 718 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Laptop (13th Gen Intel Core)"), 719 719 }, 720 720 .driver_data = (void *)&framework_laptop_mec_lpc_driver_data, 721 721 },
+1 -1
drivers/platform/loongarch/Kconfig
··· 18 18 19 19 config LOONGSON_LAPTOP 20 20 tristate "Generic Loongson-3 Laptop Driver" 21 - depends on ACPI 21 + depends on ACPI_EC 22 22 depends on BACKLIGHT_CLASS_DEVICE 23 23 depends on INPUT 24 24 depends on MACH_LOONGSON64
+7 -1
drivers/platform/x86/amd/pmc/pmc.c
··· 947 947 { 948 948 struct amd_pmc_dev *pdev = dev_get_drvdata(dev); 949 949 950 + /* 951 + * Must be called only from the same set of dev_pm_ops handlers 952 + * as i8042_pm_suspend() is called: currently just from .suspend. 953 + */ 950 954 if (pdev->disable_8042_wakeup && !disable_workarounds) { 951 955 int rc = amd_pmc_wa_irq1(pdev); 952 956 ··· 963 959 return 0; 964 960 } 965 961 966 - static DEFINE_SIMPLE_DEV_PM_OPS(amd_pmc_pm, amd_pmc_suspend_handler, NULL); 962 + static const struct dev_pm_ops amd_pmc_pm = { 963 + .suspend = amd_pmc_suspend_handler, 964 + }; 967 965 968 966 static const struct pci_device_id pmc_pci_ids[] = { 969 967 { PCI_DEVICE(PCI_VENDOR_ID_AMD, AMD_CPU_ID_PS) },
+19 -5
drivers/platform/x86/dell/alienware-wmi.c
··· 190 190 }; 191 191 192 192 static struct quirk_entry quirk_g_series = { 193 - .num_zones = 2, 193 + .num_zones = 0, 194 194 .hdmi_mux = 0, 195 195 .amplifier = 0, 196 196 .deepslp = 0, ··· 199 199 }; 200 200 201 201 static struct quirk_entry quirk_x_series = { 202 - .num_zones = 2, 202 + .num_zones = 0, 203 203 .hdmi_mux = 0, 204 204 .amplifier = 0, 205 205 .deepslp = 0, ··· 240 240 DMI_MATCH(DMI_PRODUCT_NAME, "ASM201"), 241 241 }, 242 242 .driver_data = &quirk_asm201, 243 + }, 244 + { 245 + .callback = dmi_matched, 246 + .ident = "Alienware m16 R1 AMD", 247 + .matches = { 248 + DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), 249 + DMI_MATCH(DMI_PRODUCT_NAME, "Alienware m16 R1 AMD"), 250 + }, 251 + .driver_data = &quirk_x_series, 243 252 }, 244 253 { 245 254 .callback = dmi_matched, ··· 695 686 static void alienware_zone_exit(struct platform_device *dev) 696 687 { 697 688 u8 zone; 689 + 690 + if (!quirks->num_zones) 691 + return; 698 692 699 693 sysfs_remove_group(&dev->dev.kobj, &zone_attribute_group); 700 694 led_classdev_unregister(&global_led); ··· 1241 1229 goto fail_prep_thermal_profile; 1242 1230 } 1243 1231 1244 - ret = alienware_zone_init(platform_device); 1245 - if (ret) 1246 - goto fail_prep_zones; 1232 + if (quirks->num_zones > 0) { 1233 + ret = alienware_zone_init(platform_device); 1234 + if (ret) 1235 + goto fail_prep_zones; 1236 + } 1247 1237 1248 1238 return 0; 1249 1239
+2 -2
drivers/platform/x86/hp/hp-wmi.c
··· 64 64 "874A", "8603", "8604", "8748", "886B", "886C", "878A", "878B", "878C", 65 65 "88C8", "88CB", "8786", "8787", "8788", "88D1", "88D2", "88F4", "88FD", 66 66 "88F5", "88F6", "88F7", "88FE", "88FF", "8900", "8901", "8902", "8912", 67 - "8917", "8918", "8949", "894A", "89EB", "8BAD", "8A42" 67 + "8917", "8918", "8949", "894A", "89EB", "8BAD", "8A42", "8A15" 68 68 }; 69 69 70 70 /* DMI Board names of Omen laptops that are specifically set to be thermal ··· 80 80 * "balanced" when reaching zero. 81 81 */ 82 82 static const char * const omen_timed_thermal_profile_boards[] = { 83 - "8BAD", "8A42" 83 + "8BAD", "8A42", "8A15" 84 84 }; 85 85 86 86 /* DMI Board names of Victus laptops */
+1
drivers/platform/x86/intel/ifs/core.c
··· 20 20 X86_MATCH(INTEL_GRANITERAPIDS_X, ARRAY_GEN0), 21 21 X86_MATCH(INTEL_GRANITERAPIDS_D, ARRAY_GEN0), 22 22 X86_MATCH(INTEL_ATOM_CRESTMONT_X, ARRAY_GEN1), 23 + X86_MATCH(INTEL_ATOM_DARKMONT_X, ARRAY_GEN1), 23 24 {} 24 25 }; 25 26 MODULE_DEVICE_TABLE(x86cpu, ifs_cpu_ids);
+4
drivers/platform/x86/intel/pmc/core_ssram.c
··· 269 269 /* 270 270 * The secondary PMC BARS (which are behind hidden PCI devices) 271 271 * are read from fixed offsets in MMIO of the primary PMC BAR. 272 + * If a device is not present, the value will be 0. 272 273 */ 273 274 ssram_base = get_base(tmp_ssram, offset); 275 + if (!ssram_base) 276 + return 0; 277 + 274 278 ssram = ioremap(ssram_base, SSRAM_HDR_SIZE); 275 279 if (!ssram) 276 280 return -ENOMEM;
+1
drivers/platform/x86/intel/speed_select_if/isst_if_common.c
··· 804 804 static const struct x86_cpu_id isst_cpu_ids[] = { 805 805 X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, SST_HPM_SUPPORTED), 806 806 X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, SST_HPM_SUPPORTED), 807 + X86_MATCH_VFM(INTEL_ATOM_DARKMONT_X, SST_HPM_SUPPORTED), 807 808 X86_MATCH_VFM(INTEL_EMERALDRAPIDS_X, 0), 808 809 X86_MATCH_VFM(INTEL_GRANITERAPIDS_D, SST_HPM_SUPPORTED), 809 810 X86_MATCH_VFM(INTEL_GRANITERAPIDS_X, SST_HPM_SUPPORTED),
+1
drivers/platform/x86/intel/tpmi_power_domains.c
··· 81 81 X86_MATCH_VFM(INTEL_GRANITERAPIDS_X, NULL), 82 82 X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, NULL), 83 83 X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, NULL), 84 + X86_MATCH_VFM(INTEL_ATOM_DARKMONT_X, NULL), 84 85 X86_MATCH_VFM(INTEL_GRANITERAPIDS_D, NULL), 85 86 X86_MATCH_VFM(INTEL_PANTHERCOVE_X, NULL), 86 87 {}
+2
drivers/platform/x86/intel/vsec.c
··· 423 423 #define PCI_DEVICE_ID_INTEL_VSEC_RPL 0xa77d 424 424 #define PCI_DEVICE_ID_INTEL_VSEC_TGL 0x9a0d 425 425 #define PCI_DEVICE_ID_INTEL_VSEC_LNL_M 0x647d 426 + #define PCI_DEVICE_ID_INTEL_VSEC_PTL 0xb07d 426 427 static const struct pci_device_id intel_vsec_pci_ids[] = { 427 428 { PCI_DEVICE_DATA(INTEL, VSEC_ADL, &tgl_info) }, 428 429 { PCI_DEVICE_DATA(INTEL, VSEC_DG1, &dg1_info) }, ··· 433 432 { PCI_DEVICE_DATA(INTEL, VSEC_RPL, &tgl_info) }, 434 433 { PCI_DEVICE_DATA(INTEL, VSEC_TGL, &tgl_info) }, 435 434 { PCI_DEVICE_DATA(INTEL, VSEC_LNL_M, &lnl_info) }, 435 + { PCI_DEVICE_DATA(INTEL, VSEC_PTL, &mtl_info) }, 436 436 { } 437 437 }; 438 438 MODULE_DEVICE_TABLE(pci, intel_vsec_pci_ids);
+2
drivers/platform/x86/mlx-platform.c
··· 6237 6237 fail_pci_request_regions: 6238 6238 pci_disable_device(pci_dev); 6239 6239 fail_pci_enable_device: 6240 + pci_dev_put(pci_dev); 6240 6241 return err; 6241 6242 } 6242 6243 ··· 6248 6247 iounmap(pci_bridge_addr); 6249 6248 pci_release_regions(pci_bridge); 6250 6249 pci_disable_device(pci_bridge); 6250 + pci_dev_put(pci_bridge); 6251 6251 } 6252 6252 6253 6253 static int
+56 -23
drivers/platform/x86/p2sb.c
··· 43 43 }; 44 44 45 45 static struct p2sb_res_cache p2sb_resources[NR_P2SB_RES_CACHE]; 46 + static bool p2sb_hidden_by_bios; 46 47 47 48 static void p2sb_get_devfn(unsigned int *devfn) 48 49 { ··· 98 97 99 98 static int p2sb_scan_and_cache(struct pci_bus *bus, unsigned int devfn) 100 99 { 100 + /* 101 + * The BIOS prevents the P2SB device from being enumerated by the PCI 102 + * subsystem, so we need to unhide and hide it back to lookup the BAR. 103 + */ 104 + pci_bus_write_config_dword(bus, devfn, P2SBC, 0); 105 + 101 106 /* Scan the P2SB device and cache its BAR0 */ 102 107 p2sb_scan_and_cache_devfn(bus, devfn); 103 108 104 109 /* On Goldmont p2sb_bar() also gets called for the SPI controller */ 105 110 if (devfn == P2SB_DEVFN_GOLDMONT) 106 111 p2sb_scan_and_cache_devfn(bus, SPI_DEVFN_GOLDMONT); 112 + 113 + pci_bus_write_config_dword(bus, devfn, P2SBC, P2SBC_HIDE); 107 114 108 115 if (!p2sb_valid_resource(&p2sb_resources[PCI_FUNC(devfn)].res)) 109 116 return -ENOENT; ··· 138 129 u32 value = P2SBC_HIDE; 139 130 struct pci_bus *bus; 140 131 u16 class; 141 - int ret; 132 + int ret = 0; 142 133 143 134 /* Get devfn for P2SB device itself */ 144 135 p2sb_get_devfn(&devfn_p2sb); ··· 161 152 */ 162 153 pci_lock_rescan_remove(); 163 154 164 - /* 165 - * The BIOS prevents the P2SB device from being enumerated by the PCI 166 - * subsystem, so we need to unhide and hide it back to lookup the BAR. 167 - * Unhide the P2SB device here, if needed. 168 - */ 169 155 pci_bus_read_config_dword(bus, devfn_p2sb, P2SBC, &value); 170 - if (value & P2SBC_HIDE) 171 - pci_bus_write_config_dword(bus, devfn_p2sb, P2SBC, 0); 156 + p2sb_hidden_by_bios = value & P2SBC_HIDE; 172 - 173 - ret = p2sb_scan_and_cache(bus, devfn_p2sb); 174 - 175 - /* Hide the P2SB device, if it was hidden */ 176 - if (value & P2SBC_HIDE) 177 - pci_bus_write_config_dword(bus, devfn_p2sb, P2SBC, P2SBC_HIDE); 158 + /* 159 + * If the BIOS does not hide the P2SB device then its resources 160 + * are accessible. 
Cache them only if the P2SB device is hidden. 161 + */ 162 + if (p2sb_hidden_by_bios) 163 + ret = p2sb_scan_and_cache(bus, devfn_p2sb); 178 164 179 165 pci_unlock_rescan_remove(); 166 + 167 + return ret; 168 + } 169 + 170 + static int p2sb_read_from_cache(struct pci_bus *bus, unsigned int devfn, 171 + struct resource *mem) 172 + { 173 + struct p2sb_res_cache *cache = &p2sb_resources[PCI_FUNC(devfn)]; 174 + 175 + if (cache->bus_dev_id != bus->dev.id) 176 + return -ENODEV; 177 + 178 + if (!p2sb_valid_resource(&cache->res)) 179 + return -ENOENT; 180 + 181 + memcpy(mem, &cache->res, sizeof(*mem)); 182 + 183 + return 0; 184 + } 185 + 186 + static int p2sb_read_from_dev(struct pci_bus *bus, unsigned int devfn, 187 + struct resource *mem) 188 + { 189 + struct pci_dev *pdev; 190 + int ret = 0; 191 + 192 + pdev = pci_get_slot(bus, devfn); 193 + if (!pdev) 194 + return -ENODEV; 195 + 196 + if (p2sb_valid_resource(pci_resource_n(pdev, 0))) 197 + p2sb_read_bar0(pdev, mem); 198 + else 199 + ret = -ENOENT; 200 + 201 + pci_dev_put(pdev); 180 202 181 203 return ret; 182 204 } ··· 228 188 */ 229 189 int p2sb_bar(struct pci_bus *bus, unsigned int devfn, struct resource *mem) 230 190 { 231 - struct p2sb_res_cache *cache; 232 - 233 191 bus = p2sb_get_bus(bus); 234 192 if (!bus) 235 193 return -ENODEV; ··· 235 197 if (!devfn) 236 198 p2sb_get_devfn(&devfn); 237 199 238 - cache = &p2sb_resources[PCI_FUNC(devfn)]; 239 - if (cache->bus_dev_id != bus->dev.id) 240 - return -ENODEV; 200 + if (p2sb_hidden_by_bios) 201 + return p2sb_read_from_cache(bus, devfn, mem); 241 202 242 - if (!p2sb_valid_resource(&cache->res)) 243 - return -ENOENT; 244 - 245 - memcpy(mem, &cache->res, sizeof(*mem)); 246 - return 0; 203 + return p2sb_read_from_dev(bus, devfn, mem); 247 204 } 248 205 EXPORT_SYMBOL_GPL(p2sb_bar); 249 206
+3 -1
drivers/platform/x86/thinkpad_acpi.c
··· 184 184 */ 185 185 TP_HKEY_EV_AMT_TOGGLE = 0x131a, /* Toggle AMT on/off */ 186 186 TP_HKEY_EV_DOUBLETAP_TOGGLE = 0x131c, /* Toggle trackpoint doubletap on/off */ 187 - TP_HKEY_EV_PROFILE_TOGGLE = 0x131f, /* Toggle platform profile */ 187 + TP_HKEY_EV_PROFILE_TOGGLE = 0x131f, /* Toggle platform profile in 2024 systems */ 188 + TP_HKEY_EV_PROFILE_TOGGLE2 = 0x1401, /* Toggle platform profile in 2025 + systems */ 188 189 189 190 /* Reasons for waking up from S3/S4 */ 190 191 TP_HKEY_EV_WKUP_S3_UNDOCK = 0x2304, /* undock requested, S3 */ ··· 11201 11200 tp_features.trackpoint_doubletap = !tp_features.trackpoint_doubletap; 11202 11201 return true; 11203 11202 case TP_HKEY_EV_PROFILE_TOGGLE: 11203 + case TP_HKEY_EV_PROFILE_TOGGLE2: 11204 11204 platform_profile_cycle(); 11205 11205 return true; 11206 11206 }
+26
drivers/platform/x86/touchscreen_dmi.c
··· 855 855 .properties = rwc_nanote_next_props, 856 856 }; 857 857 858 + static const struct property_entry sary_tab_3_props[] = { 859 + PROPERTY_ENTRY_U32("touchscreen-size-x", 1730), 860 + PROPERTY_ENTRY_U32("touchscreen-size-y", 1151), 861 + PROPERTY_ENTRY_BOOL("touchscreen-inverted-x"), 862 + PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 863 + PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 864 + PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-sary-tab-3.fw"), 865 + PROPERTY_ENTRY_U32("silead,max-fingers", 10), 866 + PROPERTY_ENTRY_BOOL("silead,home-button"), 867 + { } 868 + }; 869 + 870 + static const struct ts_dmi_data sary_tab_3_data = { 871 + .acpi_name = "MSSL1680:00", 872 + .properties = sary_tab_3_props, 873 + }; 874 + 858 875 static const struct property_entry schneider_sct101ctm_props[] = { 859 876 PROPERTY_ENTRY_U32("touchscreen-size-x", 1715), 860 877 PROPERTY_ENTRY_U32("touchscreen-size-y", 1140), ··· 1630 1613 DMI_MATCH(DMI_BOARD_VENDOR, "To be filled by O.E.M."), 1631 1614 /* Above matches are too generic, add bios-version match */ 1632 1615 DMI_MATCH(DMI_BIOS_VERSION, "S8A70R100-V005"), 1616 + }, 1617 + }, 1618 + { 1619 + /* SARY Tab 3 */ 1620 + .driver_data = (void *)&sary_tab_3_data, 1621 + .matches = { 1622 + DMI_MATCH(DMI_SYS_VENDOR, "SARY"), 1623 + DMI_MATCH(DMI_PRODUCT_NAME, "C210C"), 1624 + DMI_MATCH(DMI_PRODUCT_SKU, "TAB3"), 1633 1625 }, 1634 1626 }, 1635 1627 {
+6
drivers/pmdomain/core.c
··· 2142 2142 return 0; 2143 2143 } 2144 2144 2145 + static void genpd_provider_release(struct device *dev) 2146 + { 2147 + /* nothing to be done here */ 2148 + } 2149 + 2145 2150 static int genpd_alloc_data(struct generic_pm_domain *genpd) 2146 2151 { 2147 2152 struct genpd_governor_data *gd = NULL; ··· 2178 2173 2179 2174 genpd->gd = gd; 2180 2175 device_initialize(&genpd->dev); 2176 + genpd->dev.release = genpd_provider_release; 2181 2177 2182 2178 if (!genpd_is_dev_name_fw(genpd)) { 2183 2179 dev_set_name(&genpd->dev, "%s", genpd->name);
+2 -2
drivers/pmdomain/imx/gpcv2.c
··· 1458 1458 .max_register = SZ_4K, 1459 1459 }; 1460 1460 struct device *dev = &pdev->dev; 1461 - struct device_node *pgc_np; 1461 + struct device_node *pgc_np __free(device_node) = 1462 + of_get_child_by_name(dev->of_node, "pgc"); 1462 1463 struct regmap *regmap; 1463 1464 void __iomem *base; 1464 1465 int ret; 1465 1466 1466 - pgc_np = of_get_child_by_name(dev->of_node, "pgc"); 1467 1467 if (!pgc_np) { 1468 1468 dev_err(dev, "No power domains specified in DT\n"); 1469 1469 return -EINVAL;
+9 -3
drivers/power/supply/bq24190_charger.c
··· 567 567 568 568 static int bq24296_set_otg_vbus(struct bq24190_dev_info *bdi, bool enable) 569 569 { 570 + union power_supply_propval val = { .intval = bdi->charge_type }; 570 571 int ret; 571 572 572 573 ret = pm_runtime_resume_and_get(bdi->dev); ··· 588 587 589 588 ret = bq24190_write_mask(bdi, BQ24190_REG_POC, 590 589 BQ24296_REG_POC_OTG_CONFIG_MASK, 591 - BQ24296_REG_POC_CHG_CONFIG_SHIFT, 590 + BQ24296_REG_POC_OTG_CONFIG_SHIFT, 592 591 BQ24296_REG_POC_OTG_CONFIG_OTG); 593 - } else 592 + } else { 594 593 ret = bq24190_write_mask(bdi, BQ24190_REG_POC, 595 594 BQ24296_REG_POC_OTG_CONFIG_MASK, 596 - BQ24296_REG_POC_CHG_CONFIG_SHIFT, 595 + BQ24296_REG_POC_OTG_CONFIG_SHIFT, 597 596 BQ24296_REG_POC_OTG_CONFIG_DISABLE); 597 + if (ret < 0) 598 + goto out; 599 + 600 + ret = bq24190_charger_set_charge_type(bdi, &val); 601 + } 598 602 599 603 out: 600 604 pm_runtime_mark_last_busy(bdi->dev);
+27 -9
drivers/power/supply/cros_charge-control.c
··· 7 7 #include <acpi/battery.h> 8 8 #include <linux/container_of.h> 9 9 #include <linux/dmi.h> 10 + #include <linux/lockdep.h> 10 11 #include <linux/mod_devicetable.h> 11 12 #include <linux/module.h> 13 + #include <linux/mutex.h> 12 14 #include <linux/platform_data/cros_ec_commands.h> 13 15 #include <linux/platform_data/cros_ec_proto.h> 14 16 #include <linux/platform_device.h> ··· 51 49 struct attribute *attributes[_CROS_CHCTL_ATTR_COUNT]; 52 50 struct attribute_group group; 53 51 52 + struct mutex lock; /* protects fields below and cros_ec */ 54 53 enum power_supply_charge_behaviour current_behaviour; 55 54 u8 current_start_threshold, current_end_threshold; 56 55 }; ··· 87 84 static int cros_chctl_configure_ec(struct cros_chctl_priv *priv) 88 85 { 89 86 struct ec_params_charge_control req = {}; 87 + 88 + lockdep_assert_held(&priv->lock); 90 89 91 90 req.cmd = EC_CHARGE_CONTROL_CMD_SET; 92 91 ··· 139 134 return -EINVAL; 140 135 141 136 if (is_end_threshold) { 142 - if (val <= priv->current_start_threshold) 137 + /* Start threshold is not exposed, use fixed value */ 138 + if (priv->cmd_version == 2) 139 + priv->current_start_threshold = val == 100 ? 0 : val; 140 141 + if (val < priv->current_start_threshold) 143 142 return -EINVAL; 144 143 priv->current_end_threshold = val; 145 144 } else { 146 - if (val >= priv->current_end_threshold) 145 + if (val > priv->current_end_threshold) 147 146 return -EINVAL; 148 147 priv->current_start_threshold = val; 149 148 } ··· 168 159 struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr, 169 160 CROS_CHCTL_ATTR_START_THRESHOLD); 170 161 162 + guard(mutex)(&priv->lock); 171 163 return sysfs_emit(buf, "%u\n", (unsigned int)priv->current_start_threshold); 172 164 } 173 165 ··· 179 169 struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr, 180 170 CROS_CHCTL_ATTR_START_THRESHOLD); 181 171 172 + guard(mutex)(&priv->lock); 182 173 return cros_chctl_store_threshold(dev, priv, 0, buf, count); 183 174 } 184 175 ··· 189 178 struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr, 190 179 CROS_CHCTL_ATTR_END_THRESHOLD); 191 180 181 + guard(mutex)(&priv->lock); 192 182 return sysfs_emit(buf, "%u\n", (unsigned int)priv->current_end_threshold); 193 183 } 194 184 ··· 199 187 struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr, 200 188 CROS_CHCTL_ATTR_END_THRESHOLD); 201 189 190 + guard(mutex)(&priv->lock); 202 191 return cros_chctl_store_threshold(dev, priv, 1, buf, count); 203 192 } 204 193 ··· 208 195 struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr, 209 196 CROS_CHCTL_ATTR_CHARGE_BEHAVIOUR); 210 197 198 + guard(mutex)(&priv->lock); 211 199 return power_supply_charge_behaviour_show(dev, EC_CHARGE_CONTROL_BEHAVIOURS, 212 200 priv->current_behaviour, buf); 213 201 } ··· 224 210 if (ret < 0) 225 211 return ret; 226 212 213 + guard(mutex)(&priv->lock); 227 214 priv->current_behaviour = ret; 228 215 229 216 ret = cros_chctl_configure_ec(priv); ··· 238 223 { 239 224 struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(attr, n); 240 225 241 - if (priv->cmd_version < 2) { 242 - if (n == CROS_CHCTL_ATTR_START_THRESHOLD) 243 - return 0; 244 - if (n == CROS_CHCTL_ATTR_END_THRESHOLD) 245 - return 0; 246 - } 226 + if (n == CROS_CHCTL_ATTR_START_THRESHOLD && priv->cmd_version < 3) 227 + return 0; 228 + else if (n == CROS_CHCTL_ATTR_END_THRESHOLD && priv->cmd_version < 2) 229 + return 0; 247 230 248 231 return attr->mode; 249 232 } ··· 303 290 if (!priv) 304 291 return -ENOMEM; 305 292 293 + ret = devm_mutex_init(dev, &priv->lock); 294 + if (ret) 295 + return ret; 296 + 306 297 ret = cros_ec_get_cmd_versions(cros_ec, EC_CMD_CHARGE_CONTROL); 307 298 if (ret < 0) 308 299 return ret; ··· 344 327 priv->current_end_threshold = 100; 345 328 346 329 /* Bring EC into well-known state */ 347 - ret = cros_chctl_configure_ec(priv); 330 + scoped_guard(mutex, &priv->lock) 331 + ret = cros_chctl_configure_ec(priv); 348 332 if (ret < 0) 349 333 return ret; 350 334
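The cros_charge-control hunk serializes the sysfs handlers with `guard(mutex)(&priv->lock)`, which unlocks automatically when the handler returns. A userspace sketch of that scope-based locking built on the `cleanup` attribute (`toy_mutex` and `guard_lock()` are stand-ins, not the kernel's mutex/guard API):

```c
#include <assert.h>

struct toy_mutex { int locked; };

static void toy_lock(struct toy_mutex *m)   { m->locked = 1; }
static void toy_unlock(struct toy_mutex *m) { m->locked = 0; }

/* Cleanup helper: fires when the guard variable leaves scope. */
static void guard_release(struct toy_mutex **g) { toy_unlock(*g); }

#define guard_lock(m)							\
	__attribute__((cleanup(guard_release))) struct toy_mutex *_guard = (m); \
	toy_lock(_guard)

static struct toy_mutex state_lock;
static int current_end_threshold = 100;

/* Every return path drops the lock automatically, which is why the
 * show/store handlers in the diff need no explicit unlock calls. */
static int read_end_threshold(void)
{
	guard_lock(&state_lock);
	return current_end_threshold;
}
```

`scoped_guard(mutex, ...)`, also used in the probe path above, is the statement-scoped variant of the same idea.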
+8
drivers/power/supply/gpio-charger.c
··· 67 67 if (gpio_charger->current_limit_map[i].limit_ua <= val) 68 68 break; 69 69 } 70 + 71 + /* 72 + * If a valid charge current limit isn't found, default to smallest 73 + * current limitation for safety reasons. 74 + */ 75 + if (i >= gpio_charger->current_limit_map_size) 76 + i = gpio_charger->current_limit_map_size - 1; 77 + 70 78 mapping = gpio_charger->current_limit_map[i]; 71 79 72 80 for (i = 0; i < ndescs; i++) {
+1 -1
drivers/pwm/pwm-stm32.c
··· 84 84 85 85 wfhw->ccer = TIM_CCER_CCxE(ch + 1); 86 86 if (priv->have_complementary_output) 87 - wfhw->ccer = TIM_CCER_CCxNE(ch + 1); 87 + wfhw->ccer |= TIM_CCER_CCxNE(ch + 1); 88 88 89 89 rate = clk_get_rate(priv->clk); 90 90
+1 -1
drivers/regulator/of_regulator.c
··· 175 175 if (!ret) 176 176 constraints->enable_time = pval; 177 177 178 - ret = of_property_read_u32(np, "regulator-uv-survival-time-ms", &pval); 178 + ret = of_property_read_u32(np, "regulator-uv-less-critical-window-ms", &pval); 179 179 if (!ret) 180 180 constraints->uv_less_critical_window_ms = pval; 181 181 else
+3 -1
drivers/spi/spi-rockchip-sfc.c
··· 182 182 bool use_dma; 183 183 u32 max_iosize; 184 184 u16 version; 185 + struct spi_controller *host; 185 186 }; 186 187 187 188 static int rockchip_sfc_reset(struct rockchip_sfc *sfc) ··· 575 574 576 575 sfc = spi_controller_get_devdata(host); 577 576 sfc->dev = dev; 577 + sfc->host = host; 578 578 579 579 sfc->regbase = devm_platform_ioremap_resource(pdev, 0); 580 580 if (IS_ERR(sfc->regbase)) ··· 653 651 654 652 static void rockchip_sfc_remove(struct platform_device *pdev) 655 653 { 656 - struct spi_controller *host = platform_get_drvdata(pdev); 657 654 struct rockchip_sfc *sfc = platform_get_drvdata(pdev); 655 + struct spi_controller *host = sfc->host; 658 656 659 657 spi_unregister_controller(host); 660 658
+1
drivers/staging/fbtft/Kconfig
··· 3 3 tristate "Support for small TFT LCD display modules" 4 4 depends on FB && SPI 5 5 depends on FB_DEVICE 6 + depends on BACKLIGHT_CLASS_DEVICE 6 7 depends on GPIOLIB || COMPILE_TEST 7 8 select FB_BACKLIGHT 8 9 select FB_SYSMEM_HELPERS_DEFERRED
+6 -2
drivers/staging/gpib/Kconfig
··· 65 65 depends on ISA_BUS || PCI || PCMCIA 66 66 depends on HAS_IOPORT 67 67 depends on !X86_PAE 68 + depends on PCMCIA || !PCMCIA 69 + depends on HAS_IOPORT_MAP 68 70 select GPIB_COMMON 69 71 select GPIB_NEC7210 70 72 help ··· 91 89 depends on HAS_IOPORT 92 90 depends on ISA_BUS || PCI || PCMCIA 93 91 depends on !X86_PAE 92 + depends on PCMCIA || !PCMCIA 94 93 select GPIB_COMMON 95 94 select GPIB_NEC7210 96 95 help ··· 180 177 config GPIB_INES 181 178 tristate "INES" 182 179 depends on PCI || ISA_BUS || PCMCIA 180 + depends on PCMCIA || !PCMCIA 183 181 depends on HAS_IOPORT 184 182 depends on !X86_PAE 185 183 select GPIB_COMMON ··· 203 199 called cb7210. 204 200 205 201 config GPIB_PCMCIA 206 - bool "PCMCIA/Cardbus support for NI MC and Ines boards" 207 - depends on PCCARD && (GPIB_NI_PCI_ISA || GPIB_CB7210 || GPIB_INES) 202 + def_bool y 203 + depends on PCMCIA && (GPIB_NI_PCI_ISA || GPIB_CB7210 || GPIB_INES) 208 204 help 209 205 Enable PCMCIA/CArdbus support for National Instruments, 210 206 measurement computing boards and Ines boards.
+1 -1
drivers/staging/gpib/agilent_82350b/Makefile
··· 1 1 2 - obj-m += agilent_82350b.o 2 + obj-$(CONFIG_GPIB_AGILENT_82350B) += agilent_82350b.o
+2 -2
drivers/staging/gpib/agilent_82350b/agilent_82350b.c
··· 700 700 GPIB_82350A_REGION)); 701 701 dev_dbg(board->gpib_dev, "%s: gpib base address remapped to 0x%p\n", 702 702 driver_name, a_priv->gpib_base); 703 - tms_priv->iobase = a_priv->gpib_base + TMS9914_BASE_REG; 703 + tms_priv->mmiobase = a_priv->gpib_base + TMS9914_BASE_REG; 704 704 a_priv->sram_base = ioremap(pci_resource_start(a_priv->pci_device, 705 705 SRAM_82350A_REGION), 706 706 pci_resource_len(a_priv->pci_device, ··· 724 724 pci_resource_len(a_priv->pci_device, GPIB_REGION)); 725 725 dev_dbg(board->gpib_dev, "%s: gpib base address remapped to 0x%p\n", 726 726 driver_name, a_priv->gpib_base); 727 - tms_priv->iobase = a_priv->gpib_base + TMS9914_BASE_REG; 727 + tms_priv->mmiobase = a_priv->gpib_base + TMS9914_BASE_REG; 728 728 a_priv->sram_base = ioremap(pci_resource_start(a_priv->pci_device, SRAM_REGION), 729 729 pci_resource_len(a_priv->pci_device, SRAM_REGION)); 730 730 dev_dbg(board->gpib_dev, "%s: sram base address remapped to 0x%p\n",
+1 -1
drivers/staging/gpib/agilent_82357a/Makefile
··· 1 1 2 - obj-m += agilent_82357a.o 2 + obj-$(CONFIG_GPIB_AGILENT_82357A) += agilent_82357a.o 3 3 4 4
+1 -1
drivers/staging/gpib/cb7210/Makefile
··· 1 1 ccflags-$(CONFIG_GPIB_PCMCIA) := -DGPIB_PCMCIA 2 - obj-m += cb7210.o 2 + obj-$(CONFIG_GPIB_CB7210) += cb7210.o 3 3 4 4
+6 -6
drivers/staging/gpib/cb7210/cb7210.c
··· 971 971 switch (cb_priv->pci_chip) { 972 972 case PCI_CHIP_AMCC_S5933: 973 973 cb_priv->amcc_iobase = pci_resource_start(cb_priv->pci_device, 0); 974 - nec_priv->iobase = (void *)(pci_resource_start(cb_priv->pci_device, 1)); 974 + nec_priv->iobase = pci_resource_start(cb_priv->pci_device, 1); 975 975 cb_priv->fifo_iobase = pci_resource_start(cb_priv->pci_device, 2); 976 976 break; 977 977 case PCI_CHIP_QUANCOM: 978 - nec_priv->iobase = (void *)(pci_resource_start(cb_priv->pci_device, 0)); 979 - cb_priv->fifo_iobase = (unsigned long)nec_priv->iobase; 978 + nec_priv->iobase = pci_resource_start(cb_priv->pci_device, 0); 979 + cb_priv->fifo_iobase = nec_priv->iobase; 980 980 break; 981 981 default: 982 982 pr_err("cb7210: bug! unhandled pci_chip=%i\n", cb_priv->pci_chip); ··· 1040 1040 return retval; 1041 1041 cb_priv = board->private_data; 1042 1042 nec_priv = &cb_priv->nec7210_priv; 1043 - if (request_region((unsigned long)config->ibbase, cb7210_iosize, "cb7210") == 0) { 1044 - pr_err("gpib: ioports starting at 0x%p are already in use\n", config->ibbase); 1043 + if (request_region(config->ibbase, cb7210_iosize, "cb7210") == 0) { 1044 + pr_err("gpib: ioports starting at 0x%u are already in use\n", config->ibbase); 1045 1045 return -EIO; 1046 1046 } 1047 1047 nec_priv->iobase = config->ibbase; ··· 1471 1471 (unsigned long)curr_dev->resource[0]->start); 1472 1472 return -EIO; 1473 1473 } 1474 - nec_priv->iobase = (void *)(unsigned long)curr_dev->resource[0]->start; 1474 + nec_priv->iobase = curr_dev->resource[0]->start; 1475 1475 cb_priv->fifo_iobase = curr_dev->resource[0]->start; 1476 1476 1477 1477 if (request_irq(curr_dev->irq, cb7210_interrupt, IRQF_SHARED,
+2 -2
drivers/staging/gpib/cb7210/cb7210.h
··· 113 113 HS_STATUS = 0x8, /* HS_STATUS register */ 114 114 }; 115 115 116 - static inline unsigned long nec7210_iobase(const struct cb7210_priv *cb_priv) 116 + static inline u32 nec7210_iobase(const struct cb7210_priv *cb_priv) 117 117 { 118 - return (unsigned long)(cb_priv->nec7210_priv.iobase); 118 + return cb_priv->nec7210_priv.iobase; 119 119 } 120 120 121 121 static inline int cb7210_page_in_bits(unsigned int page)
+1 -1
drivers/staging/gpib/cec/Makefile
··· 1 1 2 - obj-m += cec_gpib.o 2 + obj-$(CONFIG_GPIB_CEC_PCI) += cec_gpib.o 3 3
+2 -2
drivers/staging/gpib/cec/cec_gpib.c
··· 297 297 298 298 cec_priv->plx_iobase = pci_resource_start(cec_priv->pci_device, 1); 299 299 pr_info(" plx9050 base address 0x%lx\n", cec_priv->plx_iobase); 300 - nec_priv->iobase = (void *)(pci_resource_start(cec_priv->pci_device, 3)); 301 - pr_info(" nec7210 base address 0x%p\n", nec_priv->iobase); 300 + nec_priv->iobase = pci_resource_start(cec_priv->pci_device, 3); 301 + pr_info(" nec7210 base address 0x%x\n", nec_priv->iobase); 302 302 303 303 isr_flags |= IRQF_SHARED; 304 304 if (request_irq(cec_priv->pci_device->irq, cec_interrupt, isr_flags, "pci-gpib", board)) {
+1 -1
drivers/staging/gpib/common/Makefile
··· 1 1 2 - obj-m += gpib_common.o 2 + obj-$(CONFIG_GPIB_COMMON) += gpib_common.o 3 3 4 4 gpib_common-objs := gpib_os.o iblib.o 5 5
+2 -52
drivers/staging/gpib/common/gpib_os.c
··· 116 116 return 0; 117 117 } 118 118 119 - void writeb_wrapper(unsigned int value, void *address) 120 - { 121 - writeb(value, address); 122 - }; 123 - EXPORT_SYMBOL(writeb_wrapper); 124 - 125 - void writew_wrapper(unsigned int value, void *address) 126 - { 127 - writew(value, address); 128 - }; 129 - EXPORT_SYMBOL(writew_wrapper); 130 - 131 - unsigned int readb_wrapper(void *address) 132 - { 133 - return readb(address); 134 - }; 135 - EXPORT_SYMBOL(readb_wrapper); 136 - 137 - unsigned int readw_wrapper(void *address) 138 - { 139 - return readw(address); 140 - }; 141 - EXPORT_SYMBOL(readw_wrapper); 142 - 143 - #ifdef CONFIG_HAS_IOPORT 144 - void outb_wrapper(unsigned int value, void *address) 145 - { 146 - outb(value, (unsigned long)(address)); 147 - }; 148 - EXPORT_SYMBOL(outb_wrapper); 149 - 150 - void outw_wrapper(unsigned int value, void *address) 151 - { 152 - outw(value, (unsigned long)(address)); 153 - }; 154 - EXPORT_SYMBOL(outw_wrapper); 155 - 156 - unsigned int inb_wrapper(void *address) 157 - { 158 - return inb((unsigned long)(address)); 159 - }; 160 - EXPORT_SYMBOL(inb_wrapper); 161 - 162 - unsigned int inw_wrapper(void *address) 163 - { 164 - return inw((unsigned long)(address)); 165 - }; 166 - EXPORT_SYMBOL(inw_wrapper); 167 - #endif 168 - 169 119 /* this is a function instead of a constant because of Suse 170 120 * defining HZ to be a function call to get_hz() 171 121 */ ··· 486 536 return -1; 487 537 } 488 538 489 - if (pad > MAX_GPIB_PRIMARY_ADDRESS || sad > MAX_GPIB_SECONDARY_ADDRESS) { 539 + if (pad > MAX_GPIB_PRIMARY_ADDRESS || sad > MAX_GPIB_SECONDARY_ADDRESS || sad < -1) { 490 540 pr_err("gpib: bad address for serial poll"); 491 541 return -1; 492 542 } ··· 1573 1623 1574 1624 if (WARN_ON_ONCE(sizeof(void *) > sizeof(base_addr))) 1575 1625 return -EFAULT; 1576 - config->ibbase = (void *)(unsigned long)(base_addr); 1626 + config->ibbase = base_addr; 1577 1627 1578 1628 return 0; 1579 1629 }
+1 -1
drivers/staging/gpib/eastwood/Makefile
··· 1 1 2 - obj-m += fluke_gpib.o 2 + obj-$(CONFIG_GPIB_FLUKE) += fluke_gpib.o 3 3
+6 -6
drivers/staging/gpib/eastwood/fluke_gpib.c
··· 1011 1011 } 1012 1012 e_priv->gpib_iomem_res = res; 1013 1013 1014 - nec_priv->iobase = ioremap(e_priv->gpib_iomem_res->start, 1014 + nec_priv->mmiobase = ioremap(e_priv->gpib_iomem_res->start, 1015 1015 resource_size(e_priv->gpib_iomem_res)); 1016 - pr_info("gpib: iobase %lx remapped to %p, length=%d\n", 1017 - (unsigned long)e_priv->gpib_iomem_res->start, 1018 - nec_priv->iobase, (int)resource_size(e_priv->gpib_iomem_res)); 1019 - if (!nec_priv->iobase) { 1016 + pr_info("gpib: mmiobase %llx remapped to %p, length=%d\n", 1017 + (u64)e_priv->gpib_iomem_res->start, 1018 + nec_priv->mmiobase, (int)resource_size(e_priv->gpib_iomem_res)); 1019 + if (!nec_priv->mmiobase) { 1020 1020 dev_err(&fluke_gpib_pdev->dev, "Could not map I/O memory\n"); 1021 1021 return -ENOMEM; 1022 1022 } ··· 1107 1107 gpib_free_pseudo_irq(board); 1108 1108 nec_priv = &e_priv->nec7210_priv; 1109 1109 1110 - if (nec_priv->iobase) { 1110 + if (nec_priv->mmiobase) { 1111 1111 fluke_paged_write_byte(e_priv, 0, ISR0_IMR0, ISR0_IMR0_PAGE); 1112 1112 nec7210_board_reset(nec_priv, board); 1113 1113 }
+2 -2
drivers/staging/gpib/eastwood/fluke_gpib.h
··· 72 72 { 73 73 u8 retval; 74 74 75 - retval = readl(nec_priv->iobase + register_num * nec_priv->offset); 75 + retval = readl(nec_priv->mmiobase + register_num * nec_priv->offset); 76 76 return retval; 77 77 } 78 78 ··· 80 80 static inline void fluke_write_byte_nolock(struct nec7210_priv *nec_priv, uint8_t data, 81 81 int register_num) 82 82 { 83 - writel(data, nec_priv->iobase + register_num * nec_priv->offset); 83 + writel(data, nec_priv->mmiobase + register_num * nec_priv->offset); 84 84 } 85 85 86 86 static inline uint8_t fluke_paged_read_byte(struct fluke_priv *e_priv,
+14 -13
drivers/staging/gpib/fmh_gpib/fmh_gpib.c
··· 24 24 #include <linux/slab.h> 25 25 26 26 MODULE_LICENSE("GPL"); 27 + MODULE_DESCRIPTION("GPIB Driver for fmh_gpib_core"); 28 + MODULE_AUTHOR("Frank Mori Hess <fmh6jj@gmail.com>"); 27 29 28 30 static irqreturn_t fmh_gpib_interrupt(int irq, void *arg); 29 31 static int fmh_gpib_attach_holdoff_all(gpib_board_t *board, const gpib_board_config_t *config); ··· 1421 1419 } 1422 1420 e_priv->gpib_iomem_res = res; 1423 1421 1424 - nec_priv->iobase = ioremap(e_priv->gpib_iomem_res->start, 1422 + nec_priv->mmiobase = ioremap(e_priv->gpib_iomem_res->start, 1425 1423 resource_size(e_priv->gpib_iomem_res)); 1426 - if (!nec_priv->iobase) { 1424 + if (!nec_priv->mmiobase) { 1427 1425 dev_err(board->dev, "Could not map I/O memory for gpib\n"); 1428 1426 return -ENOMEM; 1429 1427 } 1430 - dev_info(board->dev, "iobase 0x%lx remapped to %p, length=%ld\n", 1431 - (unsigned long)e_priv->gpib_iomem_res->start, 1432 - nec_priv->iobase, (unsigned long)resource_size(e_priv->gpib_iomem_res)); 1428 + dev_info(board->dev, "iobase %pr remapped to %p\n", 1429 + e_priv->gpib_iomem_res, nec_priv->mmiobase); 1433 1430 1434 1431 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dma_fifos"); 1435 1432 if (!res) { ··· 1508 1507 free_irq(e_priv->irq, board); 1509 1508 if (e_priv->fifo_base) 1510 1509 fifos_write(e_priv, 0, FIFO_CONTROL_STATUS_REG); 1511 - if (nec_priv->iobase) { 1510 + if (nec_priv->mmiobase) { 1512 1511 write_byte(nec_priv, 0, ISR0_IMR0_REG); 1513 1512 nec7210_board_reset(nec_priv, board); 1514 1513 } 1515 1514 if (e_priv->fifo_base) 1516 1515 iounmap(e_priv->fifo_base); 1517 - if (nec_priv->iobase) 1518 - iounmap(nec_priv->iobase); 1516 + if (nec_priv->mmiobase) 1517 + iounmap(nec_priv->mmiobase); 1519 1518 if (e_priv->dma_port_res) { 1520 1519 release_mem_region(e_priv->dma_port_res->start, 1521 1520 resource_size(e_priv->dma_port_res)); ··· 1565 1564 e_priv->gpib_iomem_res = &pci_device->resource[gpib_control_status_pci_resource_index]; 1566 1565 e_priv->dma_port_res = &pci_device->resource[gpib_fifo_pci_resource_index]; 1567 1566 1568 - nec_priv->iobase = ioremap(pci_resource_start(pci_device, 1567 + nec_priv->mmiobase = ioremap(pci_resource_start(pci_device, 1569 1568 gpib_control_status_pci_resource_index), 1570 1569 pci_resource_len(pci_device, 1571 1570 gpib_control_status_pci_resource_index)); 1572 1571 dev_info(board->dev, "base address for gpib control/status registers remapped to 0x%p\n", 1573 1572 nec_priv->mmiobase); 1574 1573 1575 1574 if (e_priv->dma_port_res->flags & IORESOURCE_MEM) { 1576 1575 e_priv->fifo_base = ioremap(pci_resource_start(pci_device, ··· 1633 1632 free_irq(e_priv->irq, board); 1634 1633 if (e_priv->fifo_base) 1635 1634 fifos_write(e_priv, 0, FIFO_CONTROL_STATUS_REG); 1636 - if (nec_priv->iobase) { 1635 + if (nec_priv->mmiobase) { 1637 1636 write_byte(nec_priv, 0, ISR0_IMR0_REG); 1638 1637 nec7210_board_reset(nec_priv, board); 1639 1638 } 1640 1639 if (e_priv->fifo_base) 1641 1640 iounmap(e_priv->fifo_base); 1642 - if (nec_priv->iobase) 1643 - iounmap(nec_priv->iobase); 1641 + if (nec_priv->mmiobase) 1642 + iounmap(nec_priv->mmiobase); 1644 1643 if (e_priv->dma_port_res || e_priv->gpib_iomem_res) 1645 1644 pci_release_regions(to_pci_dev(board->dev)); 1646 1645 if (board->dev)
+2 -2
drivers/staging/gpib/fmh_gpib/fmh_gpib.h
··· 127 127 static inline uint8_t gpib_cs_read_byte(struct nec7210_priv *nec_priv, 128 128 unsigned int register_num) 129 129 { 130 - return readb(nec_priv->iobase + register_num * nec_priv->offset); 130 + return readb(nec_priv->mmiobase + register_num * nec_priv->offset); 131 131 } 132 132 133 133 static inline void gpib_cs_write_byte(struct nec7210_priv *nec_priv, uint8_t data, 134 134 unsigned int register_num) 135 135 { 136 - writeb(data, nec_priv->iobase + register_num * nec_priv->offset); 136 + writeb(data, nec_priv->mmiobase + register_num * nec_priv->offset); 137 137 } 138 138 139 139 static inline uint16_t fifos_read(struct fmh_priv *fmh_priv, int register_num)
+1 -1
drivers/staging/gpib/gpio/Makefile
··· 1 1 2 - obj-m += gpib_bitbang.o 2 + obj-$(CONFIG_GPIB_GPIO) += gpib_bitbang.o 3 3 4 4
+1 -1
drivers/staging/gpib/gpio/gpib_bitbang.c
··· 315 315 enum listener_function_state listener_state; 316 316 }; 317 317 318 - inline long usec_diff(struct timespec64 *a, struct timespec64 *b); 318 + static inline long usec_diff(struct timespec64 *a, struct timespec64 *b); 319 319 static void bb_buffer_print(unsigned char *buffer, size_t length, int cmd, int eoi); 320 320 static void set_data_lines(u8 byte); 321 321 static u8 get_data_lines(void);
+1 -1
drivers/staging/gpib/hp_82335/Makefile
··· 1 1 2 - obj-m += hp82335.o 2 + obj-$(CONFIG_GPIB_HP82335) += hp82335.o 3 3 4 4
+11 -10
drivers/staging/gpib/hp_82335/hp82335.c
··· 9 9 */ 10 10 11 11 #include "hp82335.h" 12 + #include <linux/io.h> 12 13 #include <linux/ioport.h> 13 14 #include <linux/sched.h> 14 15 #include <linux/module.h> ··· 234 233 { 235 234 struct tms9914_priv *tms_priv = &hp_priv->tms9914_priv; 236 235 237 - writeb(0, tms_priv->iobase + HPREG_INTR_CLEAR); 236 + writeb(0, tms_priv->mmiobase + HPREG_INTR_CLEAR); 238 237 239 238 240 239 int hp82335_attach(gpib_board_t *board, const gpib_board_config_t *config) ··· 242 241 struct hp82335_priv *hp_priv; 243 242 struct tms9914_priv *tms_priv; 244 243 int retval; 245 - const unsigned long upper_iomem_base = (unsigned long)config->ibbase + hp82335_rom_size; 244 + const unsigned long upper_iomem_base = config->ibbase + hp82335_rom_size; 246 245 247 246 board->status = 0; 248 247 ··· 254 253 tms_priv->write_byte = hp82335_write_byte; 255 254 tms_priv->offset = 1; 256 255 257 - switch ((unsigned long)(config->ibbase)) { 256 + switch (config->ibbase) { 258 257 case 0xc4000: 259 258 case 0xc8000: 260 259 case 0xcc000: ··· 272 271 case 0xfc000: 273 272 break; 274 273 default: 275 - pr_err("hp82335: invalid base io address 0x%p\n", config->ibbase); 274 + pr_err("hp82335: invalid base io address 0x%u\n", config->ibbase); 276 275 return -EINVAL; 277 276 } 278 277 if (!request_mem_region(upper_iomem_base, hp82335_upper_iomem_size, "hp82335")) { ··· 281 280 return -EBUSY; 282 281 } 283 282 hp_priv->raw_iobase = upper_iomem_base; 284 - tms_priv->iobase = ioremap(upper_iomem_base, hp82335_upper_iomem_size); 283 + tms_priv->mmiobase = ioremap(upper_iomem_base, hp82335_upper_iomem_size); 285 284 pr_info("hp82335: upper half of 82335 iomem region 0x%lx remapped to 0x%p\n", 286 285 hp_priv->raw_iobase, tms_priv->mmiobase); 287 286 288 287 retval = request_irq(config->ibirq, hp82335_interrupt, 0, "hp82335", board); 289 288 if (retval) { ··· 297 296 298 297 hp82335_clear_interrupt(hp_priv); 299 298 300 - writeb(INTR_ENABLE, tms_priv->iobase + HPREG_CCR); 299 + writeb(INTR_ENABLE, tms_priv->mmiobase + HPREG_CCR); 301 300 302 301 tms9914_online(board, tms_priv); 303 302 ··· 313 312 tms_priv = &hp_priv->tms9914_priv; 314 313 if (hp_priv->irq) 315 314 free_irq(hp_priv->irq, board); 316 - if (tms_priv->iobase) { 317 - writeb(0, tms_priv->iobase + HPREG_CCR); 315 + if (tms_priv->mmiobase) { 316 + writeb(0, tms_priv->mmiobase + HPREG_CCR); 318 317 tms9914_board_reset(tms_priv); 319 - iounmap((void *)tms_priv->iobase); 318 + iounmap(tms_priv->mmiobase); 320 319 } 321 320 if (hp_priv->raw_iobase) 322 321 release_mem_region(hp_priv->raw_iobase, hp82335_upper_iomem_size);
+1 -1
drivers/staging/gpib/hp_82341/Makefile
··· 1 1 2 - obj-m += hp_82341.o 2 + obj-$(CONFIG_GPIB_HP82341) += hp_82341.o
+8 -8
drivers/staging/gpib/hp_82341/hp_82341.c
··· 473 473 474 474 static uint8_t hp_82341_read_byte(struct tms9914_priv *priv, unsigned int register_num) 475 475 { 476 - return inb((unsigned long)(priv->iobase) + register_num); 476 + return inb(priv->iobase + register_num); 477 477 } 478 478 479 479 static void hp_82341_write_byte(struct tms9914_priv *priv, uint8_t data, unsigned int register_num) 480 480 { 481 - outb(data, (unsigned long)(priv->iobase) + register_num); 481 + outb(data, priv->iobase + register_num); 482 482 } 483 483 484 484 static int hp_82341_find_isapnp_board(struct pnp_dev **dev) ··· 682 682 { 683 683 struct hp_82341_priv *hp_priv; 684 684 struct tms9914_priv *tms_priv; 685 - unsigned long start_addr; 686 - void *iobase; 685 + u32 start_addr; 686 + u32 iobase; 687 687 int irq; 688 688 int i; 689 689 int retval; ··· 704 704 if (retval < 0) 705 705 return retval; 706 706 hp_priv->pnp_dev = dev; 707 - iobase = (void *)(pnp_port_start(dev, 0)); 707 + iobase = pnp_port_start(dev, 0); 708 708 irq = pnp_irq(dev, 0); 709 709 hp_priv->hw_version = HW_VERSION_82341D; 710 710 hp_priv->io_region_offset = 0x8; ··· 714 714 hp_priv->hw_version = HW_VERSION_82341C; 715 715 hp_priv->io_region_offset = 0x400; 716 716 } 717 - pr_info("hp_82341: base io 0x%p\n", iobase); 717 + pr_info("hp_82341: base io 0x%u\n", iobase); 718 718 for (i = 0; i < hp_82341_num_io_regions; ++i) { 719 - start_addr = (unsigned long)(iobase) + i * hp_priv->io_region_offset; 719 + start_addr = iobase + i * hp_priv->io_region_offset; 720 720 if (!request_region(start_addr, hp_82341_region_iosize, "hp_82341")) { 721 721 pr_err("hp_82341: failed to allocate io ports 0x%lx-0x%lx\n", 722 722 start_addr, ··· 725 725 } 726 726 hp_priv->iobase[i] = start_addr; 727 727 } 728 - tms_priv->iobase = (void *)(hp_priv->iobase[2]); 728 + tms_priv->iobase = hp_priv->iobase[2]; 729 729 if (hp_priv->hw_version == HW_VERSION_82341D) { 730 730 retval = isapnp_cfg_begin(hp_priv->pnp_dev->card->number, 731 731 hp_priv->pnp_dev->number);
+1 -11
drivers/staging/gpib/include/gpibP.h
··· 16 16 17 17 #include <linux/fs.h> 18 18 #include <linux/interrupt.h> 19 + #include <linux/io.h> 19 20 20 21 void gpib_register_driver(gpib_interface_t *interface, struct module *mod); 21 22 void gpib_unregister_driver(gpib_interface_t *interface); ··· 35 34 extern gpib_board_t board_array[GPIB_MAX_NUM_BOARDS]; 36 35 37 36 extern struct list_head registered_drivers; 38 - 39 - #include <linux/io.h> 40 - 41 - void writeb_wrapper(unsigned int value, void *address); 42 - unsigned int readb_wrapper(void *address); 43 - void outb_wrapper(unsigned int value, void *address); 44 - unsigned int inb_wrapper(void *address); 45 - void writew_wrapper(unsigned int value, void *address); 46 - unsigned int readw_wrapper(void *address); 47 - void outw_wrapper(unsigned int value, void *address); 48 - unsigned int inw_wrapper(void *address); 49 37 50 38 #endif // _GPIB_P_H 51 39
+2 -1
drivers/staging/gpib/include/gpib_types.h
··· 31 31 void *init_data; 32 32 int init_data_length; 33 33 /* IO base address to use for non-pnp cards (set by core, driver should make local copy) */ 34 - void *ibbase; 34 + u32 ibbase; 35 + void __iomem *mmibbase; 35 36 /* IRQ to use for non-pnp cards (set by core, driver should make local copy) */ 36 37 unsigned int ibirq; 37 38 /* dma channel to use for non-pnp cards (set by core, driver should make local copy) */
+4 -1
drivers/staging/gpib/include/nec7210.h
··· 18 18 19 19 /* struct used to provide variables local to a nec7210 chip */ 20 20 struct nec7210_priv { 21 - void *iobase; 21 + #ifdef CONFIG_HAS_IOPORT 22 + u32 iobase; 23 + #endif 24 + void __iomem *mmiobase; 22 25 unsigned int offset; // offset between successive nec7210 io addresses 23 26 unsigned int dma_channel; 24 27 u8 *dma_buffer;
+4 -1
drivers/staging/gpib/include/tms9914.h
··· 20 20 21 21 /* struct used to provide variables local to a tms9914 chip */ 22 22 struct tms9914_priv { 23 - void *iobase; 23 + #ifdef CONFIG_HAS_IOPORT 24 + u32 iobase; 25 + #endif 26 + void __iomem *mmiobase; 24 27 unsigned int offset; // offset between successive tms9914 io addresses 25 28 unsigned int dma_channel; 26 29 // software copy of bits written to interrupt mask registers
+1 -1
drivers/staging/gpib/ines/Makefile
··· 1 1 ccflags-$(CONFIG_GPIB_PCMCIA) := -DGPIB_PCMCIA 2 - obj-m += ines_gpib.o 2 + obj-$(CONFIG_GPIB_INES) += ines_gpib.o 3 3 4 4
+2 -2
drivers/staging/gpib/ines/ines.h
··· 83 83 /* inb/outb wrappers */ 84 84 static inline unsigned int ines_inb(struct ines_priv *priv, unsigned int register_number) 85 85 { 86 - return inb((unsigned long)(priv->nec7210_priv.iobase) + 86 + return inb(priv->nec7210_priv.iobase + 87 87 register_number * priv->nec7210_priv.offset); 88 88 } 89 89 90 90 static inline void ines_outb(struct ines_priv *priv, unsigned int value, 91 91 unsigned int register_number) 92 92 { 93 - outb(value, (unsigned long)(priv->nec7210_priv.iobase) + 93 + outb(value, priv->nec7210_priv.iobase + 94 94 register_number * priv->nec7210_priv.offset); 95 95 } 96 96
+11 -11
drivers/staging/gpib/ines/ines_gpib.c
··· 273 273 struct nec7210_priv *nec_priv = &priv->nec7210_priv; 274 274 275 275 if (priv->pci_chip_type == PCI_CHIP_QUANCOM) { 276 - if ((inb((unsigned long)nec_priv->iobase + 276 + if ((inb(nec_priv->iobase + 277 277 QUANCOM_IRQ_CONTROL_STATUS_REG) & 278 278 QUANCOM_IRQ_ASSERTED_BIT)) 279 - outb(QUANCOM_IRQ_ENABLE_BIT, (unsigned long)(nec_priv->iobase) + 279 + outb(QUANCOM_IRQ_ENABLE_BIT, nec_priv->iobase + 280 280 QUANCOM_IRQ_CONTROL_STATUS_REG); 281 281 } 282 282 ··· 780 780 781 781 if (pci_request_regions(ines_priv->pci_device, "ines-gpib")) 782 782 return -1; 783 - nec_priv->iobase = (void *)(pci_resource_start(ines_priv->pci_device, 784 - found_id.gpib_region)); 783 + nec_priv->iobase = pci_resource_start(ines_priv->pci_device, 784 + found_id.gpib_region); 785 785 786 786 ines_priv->pci_chip_type = found_id.pci_chip_type; 787 787 nec_priv->offset = found_id.io_offset; ··· 840 840 } 841 841 break; 842 842 case PCI_CHIP_QUANCOM: 843 - outb(QUANCOM_IRQ_ENABLE_BIT, (unsigned long)(nec_priv->iobase) + 843 + outb(QUANCOM_IRQ_ENABLE_BIT, nec_priv->iobase + 844 844 QUANCOM_IRQ_CONTROL_STATUS_REG); 845 845 break; 846 846 case PCI_CHIP_QUICKLOGIC5030: ··· 899 899 ines_priv = board->private_data; 900 900 nec_priv = &ines_priv->nec7210_priv; 901 901 902 - if (!request_region((unsigned long)config->ibbase, ines_isa_iosize, "ines_gpib")) { 903 - pr_err("ines_gpib: ioports at 0x%p already in use\n", config->ibbase); 902 + if (!request_region(config->ibbase, ines_isa_iosize, "ines_gpib")) { 903 + pr_err("ines_gpib: ioports at 0x%x already in use\n", config->ibbase); 904 904 return -1; 905 905 } 906 906 nec_priv->iobase = config->ibbase; ··· 931 931 break; 932 932 case PCI_CHIP_QUANCOM: 933 933 if (nec_priv->iobase) 934 - outb(0, (unsigned long)(nec_priv->iobase) + 934 + outb(0, nec_priv->iobase + 935 935 QUANCOM_IRQ_CONTROL_STATUS_REG); 936 936 break; 937 937 default: ··· 960 960 free_irq(ines_priv->irq, board); 961 961 if (nec_priv->iobase) { 962 962 nec7210_board_reset(nec_priv, board); 963 - release_region((unsigned long)(nec_priv->iobase), ines_isa_iosize); 963 + release_region(nec_priv->iobase, ines_isa_iosize); 964 964 } 965 965 } 966 966 ines_free_private(board); ··· 1355 1355 return -1; 1356 1356 } 1357 1357 1358 - nec_priv->iobase = (void *)(unsigned long)curr_dev->resource[0]->start; 1358 + nec_priv->iobase = curr_dev->resource[0]->start; 1359 1359 1360 1360 nec7210_board_reset(nec_priv, board); 1361 1361 ··· 1410 1410 free_irq(ines_priv->irq, board); 1411 1411 if (nec_priv->iobase) { 1412 1412 nec7210_board_reset(nec_priv, board); 1413 - release_region((unsigned long)(nec_priv->iobase), ines_pcmcia_iosize); 1413 + release_region(nec_priv->iobase, ines_pcmcia_iosize); 1414 1414 } 1415 1415 } 1416 1416 ines_free_private(board);
+1 -1
drivers/staging/gpib/lpvo_usb_gpib/Makefile
··· 1 1 2 - obj-m += lpvo_usb_gpib.o 2 + obj-$(CONFIG_GPIB_LPVO) += lpvo_usb_gpib.o 3 3
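The same one-line Kbuild fix recurs in several Makefiles below (nec7210, ni_usb, pc2, tms9914, tnt4882). As a sketch of the pattern, with `CONFIG_GPIB_FOO` and `foo_gpib.o` as placeholder names: keying the object list on the Kconfig symbol lets the module honour the y/m/n choice instead of unconditionally building as a module.

```make
# Kbuild pattern applied in the GPIB Makefile hunks (placeholder names):
# obj-m forces a module; obj-$(CONFIG_...) expands to obj-y, obj-m, or
# nothing depending on how the Kconfig symbol was set.
obj-$(CONFIG_GPIB_FOO) += foo_gpib.o
```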
+9 -9
drivers/staging/gpib/lpvo_usb_gpib/lpvo_usb_gpib.c
··· 99 99 #define USB_GPIB_DEBUG_ON "\nIBDE\xAA\n" 100 100 #define USB_GPIB_SET_LISTEN "\nIBDT0\n" 101 101 #define USB_GPIB_SET_TALK "\nIBDT1\n" 102 - #define USB_GPIB_SET_LINES "\nIBDC\n" 103 - #define USB_GPIB_SET_DATA "\nIBDM\n" 102 + #define USB_GPIB_SET_LINES "\nIBDC.\n" 103 + #define USB_GPIB_SET_DATA "\nIBDM.\n" 104 104 #define USB_GPIB_READ_LINES "\nIBD?C\n" 105 105 #define USB_GPIB_READ_DATA "\nIBD?M\n" 106 106 #define USB_GPIB_READ_BUS "\nIBD??\n" ··· 210 210 * (unix time in sec and NANOsec) 211 211 */ 212 212 213 - inline int usec_diff(struct timespec64 *a, struct timespec64 *b) 213 + static inline int usec_diff(struct timespec64 *a, struct timespec64 *b) 214 214 { 215 215 return ((a->tv_sec - b->tv_sec) * 1000000 + 216 216 (a->tv_nsec - b->tv_nsec) / 1000); ··· 436 436 static int usb_gpib_attach(gpib_board_t *board, const gpib_board_config_t *config) 437 437 { 438 438 int retval, j; 439 - int base = (long)config->ibbase; 439 + u32 base = config->ibbase; 440 440 char *device_path; 441 441 int match; 442 442 struct usb_device *udev; ··· 589 589 size_t *bytes_written) 590 590 { 591 591 int i, retval; 592 - char command[6] = "IBc\n"; 592 + char command[6] = "IBc.\n"; 593 593 594 594 DIA_LOG(1, "enter %p\n", board); 595 595 ··· 608 608 } 609 609 610 610 /** 611 - * disable_eos() - Disable END on eos byte (END on EOI only) 611 + * usb_gpib_disable_eos() - Disable END on eos byte (END on EOI only) 612 612 * 613 613 * @board: the gpib_board data area for this gpib interface 614 614 * ··· 624 624 } 625 625 626 626 /** 627 - * enable_eos() - Enable END for reads when eos byte is received. 627 + * usb_gpib_enable_eos() - Enable END for reads when eos byte is received. 
628 628 * 629 629 * @board: the gpib_board data area for this gpib interface 630 630 * @eos_byte: the 'eos' byte ··· 647 647 } 648 648 649 649 /** 650 - * go_to_standby() - De-assert ATN 650 + * usb_gpib_go_to_standby() - De-assert ATN 651 651 * 652 652 * @board: the gpib_board data area for this gpib interface 653 653 */ ··· 664 664 } 665 665 666 666 /** 667 - * interface_clear() - Assert or de-assert IFC 667 + * usb_gpib_interface_clear() - Assert or de-assert IFC 668 668 * 669 669 * @board: the gpib_board data area for this gpib interface 670 670 * assert: 1: assert IFC; 0: de-assert IFC
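The lpvo hunk above also marks `usec_diff()` static inline. A minimal userspace sketch of the same arithmetic, using plain `struct timespec` in place of the kernel's `timespec64`:

```c
#include <time.h>

/*
 * Userspace analogue of the usec_diff() helper: microsecond
 * difference a - b between two timestamps (seconds and NANOseconds).
 */
static inline long usec_diff(const struct timespec *a,
			     const struct timespec *b)
{
	return (a->tv_sec - b->tv_sec) * 1000000L +
	       (a->tv_nsec - b->tv_nsec) / 1000L;
}
```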
+1 -1
drivers/staging/gpib/nec7210/Makefile
··· 1 1 2 - obj-m += nec7210.o 2 + obj-$(CONFIG_GPIB_NEC7210) += nec7210.o 3 3 4 4
+8 -8
drivers/staging/gpib/nec7210/nec7210.c
··· 1035 1035 /* wrappers for io */ 1036 1036 uint8_t nec7210_ioport_read_byte(struct nec7210_priv *priv, unsigned int register_num) 1037 1037 { 1038 - return inb((unsigned long)(priv->iobase) + register_num * priv->offset); 1038 + return inb(priv->iobase + register_num * priv->offset); 1039 1039 } 1040 1040 EXPORT_SYMBOL(nec7210_ioport_read_byte); 1041 1041 ··· 1047 1047 */ 1048 1048 nec7210_locking_ioport_write_byte(priv, data, register_num); 1049 1049 else 1050 - outb(data, (unsigned long)(priv->iobase) + register_num * priv->offset); 1050 + outb(data, priv->iobase + register_num * priv->offset); 1051 1051 } 1052 1052 EXPORT_SYMBOL(nec7210_ioport_write_byte); 1053 1053 ··· 1058 1058 unsigned long flags; 1059 1059 1060 1060 spin_lock_irqsave(&priv->register_page_lock, flags); 1061 - retval = inb((unsigned long)(priv->iobase) + register_num * priv->offset); 1061 + retval = inb(priv->iobase + register_num * priv->offset); 1062 1062 spin_unlock_irqrestore(&priv->register_page_lock, flags); 1063 1063 return retval; 1064 1064 } ··· 1072 1072 spin_lock_irqsave(&priv->register_page_lock, flags); 1073 1073 if (register_num == AUXMR) 1074 1074 udelay(1); 1075 - outb(data, (unsigned long)(priv->iobase) + register_num * priv->offset); 1075 + outb(data, priv->iobase + register_num * priv->offset); 1076 1076 spin_unlock_irqrestore(&priv->register_page_lock, flags); 1077 1077 } 1078 1078 EXPORT_SYMBOL(nec7210_locking_ioport_write_byte); ··· 1080 1080 1081 1081 uint8_t nec7210_iomem_read_byte(struct nec7210_priv *priv, unsigned int register_num) 1082 1082 { 1083 - return readb(priv->iobase + register_num * priv->offset); 1083 + return readb(priv->mmiobase + register_num * priv->offset); 1084 1084 } 1085 1085 EXPORT_SYMBOL(nec7210_iomem_read_byte); 1086 1086 ··· 1092 1092 */ 1093 1093 nec7210_locking_iomem_write_byte(priv, data, register_num); 1094 1094 else 1095 - writeb(data, priv->iobase + register_num * priv->offset); 1095 + writeb(data, priv->mmiobase + register_num * 
priv->offset); 1096 1096 } 1097 1097 EXPORT_SYMBOL(nec7210_iomem_write_byte); 1098 1098 ··· 1102 1102 unsigned long flags; 1103 1103 1104 1104 spin_lock_irqsave(&priv->register_page_lock, flags); 1105 - retval = readb(priv->iobase + register_num * priv->offset); 1105 + retval = readb(priv->mmiobase + register_num * priv->offset); 1106 1106 spin_unlock_irqrestore(&priv->register_page_lock, flags); 1107 1107 return retval; 1108 1108 } ··· 1116 1116 spin_lock_irqsave(&priv->register_page_lock, flags); 1117 1117 if (register_num == AUXMR) 1118 1118 udelay(1); 1119 - writeb(data, priv->iobase + register_num * priv->offset); 1119 + writeb(data, priv->mmiobase + register_num * priv->offset); 1120 1120 spin_unlock_irqrestore(&priv->register_page_lock, flags); 1121 1121 } 1122 1122 EXPORT_SYMBOL(nec7210_locking_iomem_write_byte);
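The nec7210 hunks switch the MMIO wrappers from `iobase` to a dedicated `mmiobase`, so the numeric port base used by `inb()`/`outb()` and the mapped pointer used by `readb()`/`writeb()` no longer share one field. The register addressing itself is the same in both paths; a userspace sketch, with a plain buffer standing in for the mapped region:

```c
#include <stdint.h>

/*
 * Sketch of the register addressing used by the wrappers: byte-wide
 * registers spaced `offset` apart from a base. The kernel keeps a
 * numeric iobase for port I/O and a separate void __iomem *mmiobase
 * for MMIO; here an ordinary array stands in for the mapped region.
 */
struct chip {
	uint8_t *mmiobase;	/* stand-in for void __iomem * */
	unsigned int offset;	/* register spacing */
};

static uint8_t chip_read(const struct chip *c, unsigned int reg)
{
	return c->mmiobase[reg * c->offset];
}

static void chip_write(struct chip *c, uint8_t data, unsigned int reg)
{
	c->mmiobase[reg * c->offset] = data;
}
```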
+1 -1
drivers/staging/gpib/ni_usb/Makefile
··· 1 1 2 - obj-m += ni_usb_gpib.o 2 + obj-$(CONFIG_GPIB_NI_USB) += ni_usb_gpib.o 3 3 4 4
+1 -1
drivers/staging/gpib/pc2/Makefile
··· 1 1 2 - obj-m += pc2_gpib.o 2 + obj-$(CONFIG_GPIB_PC2) += pc2_gpib.o 3 3 4 4 5 5
+8 -8
drivers/staging/gpib/pc2/pc2_gpib.c
··· 426 426 nec_priv = &pc2_priv->nec7210_priv; 427 427 nec_priv->offset = pc2_reg_offset; 428 428 429 - if (request_region((unsigned long)config->ibbase, pc2_iosize, "pc2") == 0) { 429 + if (request_region(config->ibbase, pc2_iosize, "pc2") == 0) { 430 430 pr_err("gpib: ioports are already in use\n"); 431 431 return -1; 432 432 } ··· 471 471 free_irq(pc2_priv->irq, board); 472 472 if (nec_priv->iobase) { 473 473 nec7210_board_reset(nec_priv, board); 474 - release_region((unsigned long)(nec_priv->iobase), pc2_iosize); 474 + release_region(nec_priv->iobase, pc2_iosize); 475 475 } 476 476 if (nec_priv->dma_buffer) { 477 477 dma_free_coherent(board->dev, nec_priv->dma_buffer_length, ··· 498 498 nec_priv = &pc2_priv->nec7210_priv; 499 499 nec_priv->offset = pc2a_reg_offset; 500 500 501 - switch ((unsigned long)(config->ibbase)) { 501 + switch (config->ibbase) { 502 502 case 0x02e1: 503 503 case 0x22e1: 504 504 case 0x42e1: 505 505 case 0x62e1: 506 506 break; 507 507 default: 508 - pr_err("PCIIa base range invalid, must be one of 0x[0246]2e1, but is 0x%p\n", 508 + pr_err("PCIIa base range invalid, must be one of 0x[0246]2e1, but is 0x%d\n", 509 509 config->ibbase); 510 510 return -1; 511 511 } ··· 522 522 unsigned int err = 0; 523 523 524 524 for (i = 0; i < num_registers; i++) { 525 - if (check_region((unsigned long)config->ibbase + i * pc2a_reg_offset, 1)) 525 + if (check_region(config->ibbase + i * pc2a_reg_offset, 1)) 526 526 err++; 527 527 } 528 528 if (config->ibirq && check_region(pc2a_clear_intr_iobase + config->ibirq, 1)) ··· 533 533 } 534 534 #endif 535 535 for (i = 0; i < num_registers; i++) { 536 - if (!request_region((unsigned long)config->ibbase + 536 + if (!request_region(config->ibbase + 537 537 i * pc2a_reg_offset, 1, "pc2a")) { 538 538 pr_err("gpib: ioports are already in use"); 539 539 for (j = 0; j < i; j++) 540 - release_region((unsigned long)(config->ibbase) + 540 + release_region(config->ibbase + 541 541 j * pc2a_reg_offset, 1); 542 542 return -1; 
543 543 } ··· 608 608 if (nec_priv->iobase) { 609 609 nec7210_board_reset(nec_priv, board); 610 610 for (i = 0; i < num_registers; i++) 611 - release_region((unsigned long)nec_priv->iobase + 611 + release_region(nec_priv->iobase + 612 612 i * pc2a_reg_offset, 1); 613 613 } 614 614 if (pc2_priv->clear_intr_addr)
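The pc2a attach path above claims one I/O port per register and, on the first failed `request_region()`, releases every region already taken (`for (j = 0; j < i; j++) release_region(...)`). A self-contained sketch of that acquire-with-rollback pattern, where `try_claim()`/`release()` are stand-ins for `request_region()`/`release_region()`:

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-in resource table; true means "region in use". */
static bool claimed[16];

static bool try_claim(size_t idx)
{
	if (claimed[idx])
		return false;	/* already in use */
	claimed[idx] = true;
	return true;
}

static void release(size_t idx)
{
	claimed[idx] = false;
}

/* Claim slots 0..num-1, rolling back partial claims on failure. */
static int claim_all(size_t num)
{
	size_t i, j;

	for (i = 0; i < num; i++) {
		if (!try_claim(i)) {
			for (j = 0; j < i; j++)
				release(j);
			return -1;
		}
	}
	return 0;
}
```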
+1 -1
drivers/staging/gpib/tms9914/Makefile
··· 1 1 2 - obj-m += tms9914.o 2 + obj-$(CONFIG_GPIB_TMS9914) += tms9914.o 3 3 4 4 5 5
+4 -4
drivers/staging/gpib/tms9914/tms9914.c
··· 866 866 // wrapper for inb 867 867 uint8_t tms9914_ioport_read_byte(struct tms9914_priv *priv, unsigned int register_num) 868 868 { 869 - return inb((unsigned long)(priv->iobase) + register_num * priv->offset); 869 + return inb(priv->iobase + register_num * priv->offset); 870 870 } 871 871 EXPORT_SYMBOL_GPL(tms9914_ioport_read_byte); 872 872 873 873 // wrapper for outb 874 874 void tms9914_ioport_write_byte(struct tms9914_priv *priv, uint8_t data, unsigned int register_num) 875 875 { 876 - outb(data, (unsigned long)(priv->iobase) + register_num * priv->offset); 876 + outb(data, priv->iobase + register_num * priv->offset); 877 877 if (register_num == AUXCR) 878 878 udelay(1); 879 879 } ··· 883 883 // wrapper for readb 884 884 uint8_t tms9914_iomem_read_byte(struct tms9914_priv *priv, unsigned int register_num) 885 885 { 886 - return readb(priv->iobase + register_num * priv->offset); 886 + return readb(priv->mmiobase + register_num * priv->offset); 887 887 } 888 888 EXPORT_SYMBOL_GPL(tms9914_iomem_read_byte); 889 889 890 890 // wrapper for writeb 891 891 void tms9914_iomem_write_byte(struct tms9914_priv *priv, uint8_t data, unsigned int register_num) 892 892 { 893 - writeb(data, priv->iobase + register_num * priv->offset); 893 + writeb(data, priv->mmiobase + register_num * priv->offset); 894 894 if (register_num == AUXCR) 895 895 udelay(1); 896 896 }
+1 -1
drivers/staging/gpib/tnt4882/Makefile
··· 1 1 ccflags-$(CONFIG_GPIB_PCMCIA) := -DGPIB_PCMCIA 2 - obj-m += tnt4882.o 2 + obj-$(CONFIG_GPIB_NI_PCI_ISA) += tnt4882.o 3 3 4 4 tnt4882-objs := tnt4882_gpib.o mite.o 5 5
-69
drivers/staging/gpib/tnt4882/mite.c
··· 148 148 } 149 149 pr_info("\n"); 150 150 } 151 - 152 - int mite_bytes_transferred(struct mite_struct *mite, int chan) 153 - { 154 - int dar, fcr; 155 - 156 - dar = readl(mite->mite_io_addr + MITE_DAR + CHAN_OFFSET(chan)); 157 - fcr = readl(mite->mite_io_addr + MITE_FCR + CHAN_OFFSET(chan)) & 0x000000FF; 158 - return dar - fcr; 159 - } 160 - 161 - int mite_dma_tcr(struct mite_struct *mite) 162 - { 163 - int tcr; 164 - int lkar; 165 - 166 - lkar = readl(mite->mite_io_addr + CHAN_OFFSET(0) + MITE_LKAR); 167 - tcr = readl(mite->mite_io_addr + CHAN_OFFSET(0) + MITE_TCR); 168 - MDPRINTK("lkar=0x%08x tcr=%d\n", lkar, tcr); 169 - 170 - return tcr; 171 - } 172 - 173 - void mite_dma_disarm(struct mite_struct *mite) 174 - { 175 - int chor; 176 - 177 - /* disarm */ 178 - chor = CHOR_ABORT; 179 - writel(chor, mite->mite_io_addr + CHAN_OFFSET(0) + MITE_CHOR); 180 - } 181 - 182 - void mite_dump_regs(struct mite_struct *mite) 183 - { 184 - void *addr = 0; 185 - unsigned long temp = 0; 186 - 187 - pr_info("mite address is =0x%p\n", mite->mite_io_addr); 188 - 189 - addr = mite->mite_io_addr + MITE_CHOR + CHAN_OFFSET(0); 190 - pr_info("mite status[CHOR]at 0x%p =0x%08lx\n", addr, temp = readl(addr)); 191 - //mite_decode(mite_CHOR_strings,temp); 192 - addr = mite->mite_io_addr + MITE_CHCR + CHAN_OFFSET(0); 193 - pr_info("mite status[CHCR]at 0x%p =0x%08lx\n", addr, temp = readl(addr)); 194 - //mite_decode(mite_CHCR_strings,temp); 195 - addr = mite->mite_io_addr + MITE_TCR + CHAN_OFFSET(0); 196 - pr_info("mite status[TCR] at 0x%p =0x%08x\n", addr, readl(addr)); 197 - addr = mite->mite_io_addr + MITE_MCR + CHAN_OFFSET(0); 198 - pr_info("mite status[MCR] at 0x%p =0x%08lx\n", addr, temp = readl(addr)); 199 - //mite_decode(mite_MCR_strings,temp); 200 - addr = mite->mite_io_addr + MITE_MAR + CHAN_OFFSET(0); 201 - pr_info("mite status[MAR] at 0x%p =0x%08x\n", addr, readl(addr)); 202 - addr = mite->mite_io_addr + MITE_DCR + CHAN_OFFSET(0); 203 - pr_info("mite status[DCR] at 0x%p 
=0x%08lx\n", addr, temp = readl(addr)); 204 - //mite_decode(mite_CR_strings,temp); 205 - addr = mite->mite_io_addr + MITE_DAR + CHAN_OFFSET(0); 206 - pr_info("mite status[DAR] at 0x%p =0x%08x\n", addr, readl(addr)); 207 - addr = mite->mite_io_addr + MITE_LKCR + CHAN_OFFSET(0); 208 - pr_info("mite status[LKCR]at 0x%p =0x%08lx\n", addr, temp = readl(addr)); 209 - //mite_decode(mite_CR_strings,temp); 210 - addr = mite->mite_io_addr + MITE_LKAR + CHAN_OFFSET(0); 211 - pr_info("mite status[LKAR]at 0x%p =0x%08x\n", addr, readl(addr)); 212 - 213 - addr = mite->mite_io_addr + MITE_CHSR + CHAN_OFFSET(0); 214 - pr_info("mite status[CHSR]at 0x%p =0x%08lx\n", addr, temp = readl(addr)); 215 - //mite_decode(mite_CHSR_strings,temp); 216 - addr = mite->mite_io_addr + MITE_FCR + CHAN_OFFSET(0); 217 - pr_info("mite status[FCR] at 0x%p =0x%08x\n\n", addr, readl(addr)); 218 - } 219 -
+2 -11
drivers/staging/gpib/tnt4882/mite.h
··· 34 34 35 35 struct pci_dev *pcidev; 36 36 unsigned long mite_phys_addr; 37 - void *mite_io_addr; 37 + void __iomem *mite_io_addr; 38 38 unsigned long daq_phys_addr; 39 - void *daq_io_addr; 39 + void __iomem *daq_io_addr; 40 40 41 41 int DMA_CheckNearEnd; 42 42 ··· 60 60 int mite_setup(struct mite_struct *mite); 61 61 void mite_unsetup(struct mite_struct *mite); 62 62 void mite_list_devices(void); 63 - 64 - int mite_dma_tcr(struct mite_struct *mite); 65 - 66 - void mite_dma_arm(struct mite_struct *mite); 67 - void mite_dma_disarm(struct mite_struct *mite); 68 - 69 - void mite_dump_regs(struct mite_struct *mite); 70 - void mite_setregs(struct mite_struct *mite, unsigned long ll_start, int chan, int dir); 71 - int mite_bytes_transferred(struct mite_struct *mite, int chan); 72 63 73 64 #define CHAN_OFFSET(x) (0x100 * (x)) 74 65
+30 -37
drivers/staging/gpib/tnt4882/tnt4882_gpib.c
··· 45 45 unsigned short imr0_bits; 46 46 unsigned short imr3_bits; 47 47 unsigned short auxg_bits; // bits written to auxiliary register G 48 - void (*io_writeb)(unsigned int value, void *address); 49 - void (*io_writew)(unsigned int value, void *address); 50 - unsigned int (*io_readb)(void *address); 51 - unsigned int (*io_readw)(void *address); 52 48 }; 53 49 54 50 // interface functions ··· 100 104 /* paged io */ 101 105 static inline unsigned int tnt_paged_readb(struct tnt4882_priv *priv, unsigned long offset) 102 106 { 103 - priv->io_writeb(AUX_PAGEIN, priv->nec7210_priv.iobase + AUXMR * priv->nec7210_priv.offset); 107 + iowrite8(AUX_PAGEIN, priv->nec7210_priv.mmiobase + AUXMR * priv->nec7210_priv.offset); 104 108 udelay(1); 105 - return priv->io_readb(priv->nec7210_priv.iobase + offset); 109 + return ioread8(priv->nec7210_priv.mmiobase + offset); 106 110 } 107 111 108 112 static inline void tnt_paged_writeb(struct tnt4882_priv *priv, unsigned int value, 109 113 unsigned long offset) 110 114 { 111 - priv->io_writeb(AUX_PAGEIN, priv->nec7210_priv.iobase + AUXMR * priv->nec7210_priv.offset); 115 + iowrite8(AUX_PAGEIN, priv->nec7210_priv.mmiobase + AUXMR * priv->nec7210_priv.offset); 112 116 udelay(1); 113 - priv->io_writeb(value, priv->nec7210_priv.iobase + offset); 117 + iowrite8(value, priv->nec7210_priv.mmiobase + offset); 114 118 } 115 119 116 120 /* readb/writeb wrappers */ 117 121 static inline unsigned short tnt_readb(struct tnt4882_priv *priv, unsigned long offset) 118 122 { 119 - void *address = priv->nec7210_priv.iobase + offset; 123 + void *address = priv->nec7210_priv.mmiobase + offset; 120 124 unsigned long flags; 121 125 unsigned short retval; 122 126 spinlock_t *register_lock = &priv->nec7210_priv.register_page_lock; ··· 130 134 switch (priv->nec7210_priv.type) { 131 135 case TNT4882: 132 136 case TNT5004: 133 - retval = priv->io_readb(address); 137 + retval = ioread8(address); 134 138 break; 135 139 case NAT4882: 136 140 retval = 
tnt_paged_readb(priv, offset - tnt_pagein_offset); ··· 145 149 } 146 150 break; 147 151 default: 148 - retval = priv->io_readb(address); 152 + retval = ioread8(address); 149 153 break; 150 154 } 151 155 spin_unlock_irqrestore(register_lock, flags); ··· 154 158 155 159 static inline void tnt_writeb(struct tnt4882_priv *priv, unsigned short value, unsigned long offset) 156 160 { 157 - void *address = priv->nec7210_priv.iobase + offset; 161 + void *address = priv->nec7210_priv.mmiobase + offset; 158 162 unsigned long flags; 159 163 spinlock_t *register_lock = &priv->nec7210_priv.register_page_lock; 160 164 ··· 166 170 switch (priv->nec7210_priv.type) { 167 171 case TNT4882: 168 172 case TNT5004: 169 - priv->io_writeb(value, address); 173 + iowrite8(value, address); 170 174 break; 171 175 case NAT4882: 172 176 tnt_paged_writeb(priv, value, offset - tnt_pagein_offset); ··· 179 183 } 180 184 break; 181 185 default: 182 - priv->io_writeb(value, address); 186 + iowrite8(value, address); 183 187 break; 184 188 } 185 189 spin_unlock_irqrestore(register_lock, flags); ··· 284 288 while (fifo_word_available(tnt_priv) && count + 2 <= num_bytes) { 285 289 short word; 286 290 287 - word = tnt_priv->io_readw(nec_priv->iobase + FIFOB); 291 + word = ioread16(nec_priv->mmiobase + FIFOB); 288 292 buffer[count++] = word & 0xff; 289 293 buffer[count++] = (word >> 8) & 0xff; 290 294 } ··· 569 573 word = buffer[count++] & 0xff; 570 574 if (count < length) 571 575 word |= (buffer[count++] << 8) & 0xff00; 572 - tnt_priv->io_writew(word, nec_priv->iobase + FIFOB); 576 + iowrite16(word, nec_priv->mmiobase + FIFOB); 573 577 } 574 578 // avoid unnecessary HR_NFF interrupts 575 579 // tnt_priv->imr3_bits |= HR_NFF; ··· 1265 1269 if (tnt4882_allocate_private(board)) 1266 1270 return -ENOMEM; 1267 1271 tnt_priv = board->private_data; 1268 - tnt_priv->io_writeb = writeb_wrapper; 1269 - tnt_priv->io_readb = readb_wrapper; 1270 - tnt_priv->io_writew = writew_wrapper; 1271 - tnt_priv->io_readw = 
readw_wrapper; 1272 1272 nec_priv = &tnt_priv->nec7210_priv; 1273 1273 nec_priv->type = TNT4882; 1274 1274 nec_priv->read_byte = nec7210_locking_iomem_read_byte; ··· 1316 1324 return retval; 1317 1325 } 1318 1326 1319 - nec_priv->iobase = tnt_priv->mite->daq_io_addr; 1327 + nec_priv->mmiobase = tnt_priv->mite->daq_io_addr; 1320 1328 1321 1329 // get irq 1322 1330 if (request_irq(mite_irq(tnt_priv->mite), tnt4882_interrupt, isr_flags, ··· 1351 1359 if (tnt_priv) { 1352 1360 nec_priv = &tnt_priv->nec7210_priv; 1353 1361 1354 - if (nec_priv->iobase) 1362 + if (nec_priv->mmiobase) 1355 1363 tnt4882_board_reset(tnt_priv, board); 1356 1364 if (tnt_priv->irq) 1357 1365 free_irq(tnt_priv->irq, board); ··· 1392 1400 struct tnt4882_priv *tnt_priv; 1393 1401 struct nec7210_priv *nec_priv; 1394 1402 int isr_flags = 0; 1395 - void *iobase; 1403 + u32 iobase; 1396 1404 int irq; 1397 1405 1398 1406 board->status = 0; ··· 1400 1408 if (tnt4882_allocate_private(board)) 1401 1409 return -ENOMEM; 1402 1410 tnt_priv = board->private_data; 1403 - tnt_priv->io_writeb = outb_wrapper; 1404 - tnt_priv->io_readb = inb_wrapper; 1405 - tnt_priv->io_writew = outw_wrapper; 1406 - tnt_priv->io_readw = inw_wrapper; 1407 1411 nec_priv = &tnt_priv->nec7210_priv; 1408 1412 nec_priv->type = chipset; 1409 1413 nec_priv->read_byte = nec7210_locking_ioport_read_byte; ··· 1415 1427 if (retval < 0) 1416 1428 return retval; 1417 1429 tnt_priv->pnp_dev = dev; 1418 - iobase = (void *)(pnp_port_start(dev, 0)); 1430 + iobase = pnp_port_start(dev, 0); 1419 1431 irq = pnp_irq(dev, 0); 1420 1432 } else { 1421 1433 iobase = config->ibbase; 1422 1434 irq = config->ibirq; 1423 1435 } 1424 1436 // allocate ioports 1425 - if (!request_region((unsigned long)(iobase), atgpib_iosize, "atgpib")) { 1437 + if (!request_region(iobase, atgpib_iosize, "atgpib")) { 1426 1438 pr_err("tnt4882: failed to allocate ioports\n"); 1427 1439 return -1; 1428 1440 } 1429 - nec_priv->iobase = iobase; 1441 + nec_priv->mmiobase = 
ioport_map(iobase, atgpib_iosize); 1442 + if (!nec_priv->mmiobase) 1443 + return -1; 1430 1444 1431 1445 // get irq 1432 1446 if (request_irq(irq, tnt4882_interrupt, isr_flags, "atgpib", board)) { ··· 1468 1478 tnt4882_board_reset(tnt_priv, board); 1469 1479 if (tnt_priv->irq) 1470 1480 free_irq(tnt_priv->irq, board); 1481 + if (nec_priv->mmiobase) 1482 + ioport_unmap(nec_priv->mmiobase); 1471 1483 if (nec_priv->iobase) 1472 - release_region((unsigned long)(nec_priv->iobase), atgpib_iosize); 1484 + release_region(nec_priv->iobase, atgpib_iosize); 1473 1485 if (tnt_priv->pnp_dev) 1474 1486 pnp_device_detach(tnt_priv->pnp_dev); 1475 1487 } ··· 1809 1817 if (tnt4882_allocate_private(board)) 1810 1818 return -ENOMEM; 1811 1819 tnt_priv = board->private_data; 1812 - tnt_priv->io_writeb = outb_wrapper; 1813 - tnt_priv->io_readb = inb_wrapper; 1814 - tnt_priv->io_writew = outw_wrapper; 1815 - tnt_priv->io_readw = inw_wrapper; 1816 1820 nec_priv = &tnt_priv->nec7210_priv; 1817 1821 nec_priv->type = TNT4882; 1818 1822 nec_priv->read_byte = nec7210_locking_ioport_read_byte; ··· 1823 1835 return -EIO; 1824 1836 } 1825 1837 1826 - nec_priv->iobase = (void *)(unsigned long)curr_dev->resource[0]->start; 1838 + nec_priv->mmiobase = ioport_map(curr_dev->resource[0]->start, 1839 + resource_size(curr_dev->resource[0])); 1840 + if (!nec_priv->mmiobase) 1841 + return -1; 1827 1842 1828 1843 // get irq 1829 1844 if (request_irq(curr_dev->irq, tnt4882_interrupt, isr_flags, "tnt4882", board)) { ··· 1851 1860 nec_priv = &tnt_priv->nec7210_priv; 1852 1861 if (tnt_priv->irq) 1853 1862 free_irq(tnt_priv->irq, board); 1863 + if (nec_priv->mmiobase) 1864 + ioport_unmap(nec_priv->mmiobase); 1854 1865 if (nec_priv->iobase) { 1855 1866 tnt4882_board_reset(tnt_priv, board); 1856 - release_region((unsigned long)nec_priv->iobase, pcmcia_gpib_iosize); 1867 + release_region(nec_priv->iobase, pcmcia_gpib_iosize); 1857 1868 } 1858 1869 } 1859 1870 tnt4882_free_private(board);
+1 -1
drivers/staging/iio/frequency/ad9832.c
··· 158 158 static int ad9832_write_phase(struct ad9832_state *st, 159 159 unsigned long addr, unsigned long phase) 160 160 { 161 - if (phase > BIT(AD9832_PHASE_BITS)) 161 + if (phase >= BIT(AD9832_PHASE_BITS)) 162 162 return -EINVAL; 163 163 164 164 st->phase_data[0] = cpu_to_be16((AD9832_CMD_PHA8BITSW << CMD_SHIFT) |
+1 -1
drivers/staging/iio/frequency/ad9834.c
··· 131 131 static int ad9834_write_phase(struct ad9834_state *st, 132 132 unsigned long addr, unsigned long phase) 133 133 { 134 - if (phase > BIT(AD9834_PHASE_BITS)) 134 + if (phase >= BIT(AD9834_PHASE_BITS)) 135 135 return -EINVAL; 136 136 st->data = cpu_to_be16(addr | phase); 137 137
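Both ad9832 and ad9834 hunks fix the same off-by-one: an N-bit field holds values 0 .. (1 << N) - 1, so `BIT(N)` itself is already out of range and the comparison must be `>=`, not `>`. A sketch of the corrected check, with `PHASE_BITS` set to 12 to mirror the phase word width these drivers use:

```c
/*
 * Range check as fixed above: BIT(N) is one past the largest value an
 * N-bit field can hold, so it must be rejected too.
 */
#define BIT(nr)		(1UL << (nr))
#define PHASE_BITS	12

static int phase_is_valid(unsigned long phase)
{
	if (phase >= BIT(PHASE_BITS))
		return 0;	/* would overflow the 12-bit field */
	return 1;
}
```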
+1
drivers/thermal/thermal_of.c
··· 160 160 return ERR_PTR(ret); 161 161 } 162 162 163 + of_node_put(sensor_specs.np); 163 164 if ((sensor == sensor_specs.np) && id == (sensor_specs.args_count ? 164 165 sensor_specs.args[0] : 0)) { 165 166 pr_debug("sensor %pOFn id=%d belongs to %pOFn\n", sensor, id, child);
+40 -36
drivers/thermal/thermal_thresholds.c
··· 69 69 return NULL; 70 70 } 71 71 72 - static bool __thermal_threshold_is_crossed(struct user_threshold *threshold, int temperature, 73 - int last_temperature, int direction, 74 - int *low, int *high) 75 - { 76 - 77 - if (temperature >= threshold->temperature) { 78 - if (threshold->temperature > *low && 79 - THERMAL_THRESHOLD_WAY_DOWN & threshold->direction) 80 - *low = threshold->temperature; 81 - 82 - if (last_temperature < threshold->temperature && 83 - threshold->direction & direction) 84 - return true; 85 - } else { 86 - if (threshold->temperature < *high && THERMAL_THRESHOLD_WAY_UP 87 - & threshold->direction) 88 - *high = threshold->temperature; 89 - 90 - if (last_temperature >= threshold->temperature && 91 - threshold->direction & direction) 92 - return true; 93 - } 94 - 95 - return false; 96 - } 97 - 98 72 static bool thermal_thresholds_handle_raising(struct list_head *thresholds, int temperature, 99 - int last_temperature, int *low, int *high) 73 + int last_temperature) 100 74 { 101 75 struct user_threshold *t; 102 76 103 77 list_for_each_entry(t, thresholds, list_node) { 104 - if (__thermal_threshold_is_crossed(t, temperature, last_temperature, 105 - THERMAL_THRESHOLD_WAY_UP, low, high)) 78 + 79 + if (!(t->direction & THERMAL_THRESHOLD_WAY_UP)) 80 + continue; 81 + 82 + if (temperature >= t->temperature && 83 + last_temperature < t->temperature) 106 84 return true; 107 85 } 108 86 ··· 88 110 } 89 111 90 112 static bool thermal_thresholds_handle_dropping(struct list_head *thresholds, int temperature, 91 - int last_temperature, int *low, int *high) 113 + int last_temperature) 92 114 { 93 115 struct user_threshold *t; 94 116 95 117 list_for_each_entry_reverse(t, thresholds, list_node) { 96 - if (__thermal_threshold_is_crossed(t, temperature, last_temperature, 97 - THERMAL_THRESHOLD_WAY_DOWN, low, high)) 118 + 119 + if (!(t->direction & THERMAL_THRESHOLD_WAY_DOWN)) 120 + continue; 121 + 122 + if (temperature <= t->temperature && 123 + last_temperature > 
t->temperature) 98 124 return true; 99 125 } 100 126 101 127 return false; 128 + } 129 + 130 + static void thermal_threshold_find_boundaries(struct list_head *thresholds, int temperature, 131 + int *low, int *high) 132 + { 133 + struct user_threshold *t; 134 + 135 + list_for_each_entry(t, thresholds, list_node) { 136 + if (temperature < t->temperature && 137 + (t->direction & THERMAL_THRESHOLD_WAY_UP) && 138 + *high > t->temperature) 139 + *high = t->temperature; 140 + } 141 + 142 + list_for_each_entry_reverse(t, thresholds, list_node) { 143 + if (temperature > t->temperature && 144 + (t->direction & THERMAL_THRESHOLD_WAY_DOWN) && 145 + *low < t->temperature) 146 + *low = t->temperature; 147 + } 102 148 } 103 149 104 150 void thermal_thresholds_handle(struct thermal_zone_device *tz, int *low, int *high) ··· 133 131 int last_temperature = tz->last_temperature; 134 132 135 133 lockdep_assert_held(&tz->lock); 134 + 135 + thermal_threshold_find_boundaries(thresholds, temperature, low, high); 136 136 137 137 /* 138 138 * We need a second update in order to detect a threshold being crossed ··· 155 151 * - decreased : thresholds are crossed the way down 156 152 */ 157 153 if (temperature > last_temperature) { 158 - if (thermal_thresholds_handle_raising(thresholds, temperature, 159 - last_temperature, low, high)) 154 + if (thermal_thresholds_handle_raising(thresholds, 155 + temperature, last_temperature)) 160 156 thermal_notify_threshold_up(tz); 161 157 } else { 162 - if (thermal_thresholds_handle_dropping(thresholds, temperature, 163 - last_temperature, low, high)) 158 + if (thermal_thresholds_handle_dropping(thresholds, 159 + temperature, last_temperature)) 164 160 thermal_notify_threshold_down(tz); 165 161 } 166 162 }
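The thermal_thresholds refactor above separates crossing detection from boundary finding: a "way up" threshold T is crossed exactly when the temperature moves from below T to at-or-above T. A userspace sketch of that raising-direction check, with a plain array standing in for the kernel's linked list of user thresholds:

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * True if any threshold was crossed on the way up, i.e. the previous
 * reading was below it and the current reading is at or above it
 * (temperatures in millidegrees, as in the thermal core).
 */
static bool crossed_raising(const int *thresholds, size_t n,
			    int temperature, int last_temperature)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (temperature >= thresholds[i] &&
		    last_temperature < thresholds[i])
			return true;
	}
	return false;
}
```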
+8
drivers/thunderbolt/nhi.c
··· 1520 1520 .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1521 1521 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI1), 1522 1522 .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1523 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI0), 1524 + .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1525 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI1), 1526 + .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1527 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI0), 1528 + .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1529 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI1), 1530 + .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1523 1531 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI) }, 1524 1532 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI) }, 1525 1533
+4
drivers/thunderbolt/nhi.h
··· 92 92 #define PCI_DEVICE_ID_INTEL_RPL_NHI1 0xa76d 93 93 #define PCI_DEVICE_ID_INTEL_LNL_NHI0 0xa833 94 94 #define PCI_DEVICE_ID_INTEL_LNL_NHI1 0xa834 95 + #define PCI_DEVICE_ID_INTEL_PTL_M_NHI0 0xe333 96 + #define PCI_DEVICE_ID_INTEL_PTL_M_NHI1 0xe334 97 + #define PCI_DEVICE_ID_INTEL_PTL_P_NHI0 0xe433 98 + #define PCI_DEVICE_ID_INTEL_PTL_P_NHI1 0xe434 95 99 96 100 #define PCI_CLASS_SERIAL_USB_USB4 0x0c0340 97 101
+15 -4
drivers/thunderbolt/retimer.c
··· 103 103 104 104 err_nvm: 105 105 dev_dbg(&rt->dev, "NVM upgrade disabled\n"); 106 + rt->no_nvm_upgrade = true; 106 107 if (!IS_ERR(nvm)) 107 108 tb_nvm_free(nvm); 108 109 ··· 183 182 184 183 if (!rt->nvm) 185 184 ret = -EAGAIN; 186 - else if (rt->no_nvm_upgrade) 187 - ret = -EOPNOTSUPP; 188 185 else 189 186 ret = sysfs_emit(buf, "%#x\n", rt->auth_status); 190 187 ··· 322 323 323 324 if (!rt->nvm) 324 325 ret = -EAGAIN; 325 - else if (rt->no_nvm_upgrade) 326 - ret = -EOPNOTSUPP; 327 326 else 328 327 ret = sysfs_emit(buf, "%x.%x\n", rt->nvm->major, rt->nvm->minor); 329 328 ··· 339 342 } 340 343 static DEVICE_ATTR_RO(vendor); 341 344 345 + static umode_t retimer_is_visible(struct kobject *kobj, struct attribute *attr, 346 + int n) 347 + { 348 + struct device *dev = kobj_to_dev(kobj); 349 + struct tb_retimer *rt = tb_to_retimer(dev); 350 + 351 + if (attr == &dev_attr_nvm_authenticate.attr || 352 + attr == &dev_attr_nvm_version.attr) 353 + return rt->no_nvm_upgrade ? 0 : attr->mode; 354 + 355 + return attr->mode; 356 + } 357 + 342 358 static struct attribute *retimer_attrs[] = { 343 359 &dev_attr_device.attr, 344 360 &dev_attr_nvm_authenticate.attr, ··· 361 351 }; 362 352 363 353 static const struct attribute_group retimer_group = { 354 + .is_visible = retimer_is_visible, 364 355 .attrs = retimer_attrs, 365 356 }; 366 357
+41
drivers/thunderbolt/tb.c
··· 2059 2059 } 2060 2060 } 2061 2061 2062 + static void tb_switch_enter_redrive(struct tb_switch *sw) 2063 + { 2064 + struct tb_port *port; 2065 + 2066 + tb_switch_for_each_port(sw, port) 2067 + tb_enter_redrive(port); 2068 + } 2069 + 2070 + /* 2071 + * Called during system and runtime suspend to forcefully exit redrive 2072 + * mode without querying whether the resource is available. 2073 + */ 2074 + static void tb_switch_exit_redrive(struct tb_switch *sw) 2075 + { 2076 + struct tb_port *port; 2077 + 2078 + if (!(sw->quirks & QUIRK_KEEP_POWER_IN_DP_REDRIVE)) 2079 + return; 2080 + 2081 + tb_switch_for_each_port(sw, port) { 2082 + if (!tb_port_is_dpin(port)) 2083 + continue; 2084 + 2085 + if (port->redrive) { 2086 + port->redrive = false; 2087 + pm_runtime_put(&sw->dev); 2088 + tb_port_dbg(port, "exit redrive mode\n"); 2089 + } 2090 + } 2091 + } 2092 + 2062 2093 static void tb_dp_resource_unavailable(struct tb *tb, struct tb_port *port) 2063 2094 { 2064 2095 struct tb_port *in, *out; ··· 2940 2909 tb_create_usb3_tunnels(tb->root_switch); 2941 2910 /* Add DP IN resources for the root switch */ 2942 2911 tb_add_dp_resources(tb->root_switch); 2912 + tb_switch_enter_redrive(tb->root_switch); 2943 2913 /* Make the discovered switches available to the userspace */ 2944 2914 device_for_each_child(&tb->root_switch->dev, NULL, 2945 2915 tb_scan_finalize_switch); ··· 2956 2924 2957 2925 tb_dbg(tb, "suspending...\n"); 2958 2926 tb_disconnect_and_release_dp(tb); 2927 + tb_switch_exit_redrive(tb->root_switch); 2959 2928 tb_switch_suspend(tb->root_switch, false); 2960 2929 tcm->hotplug_active = false; /* signal tb_handle_hotplug to quit */ 2961 2930 tb_dbg(tb, "suspend finished\n"); ··· 3049 3016 tb_dbg(tb, "tunnels restarted, sleeping for 100ms\n"); 3050 3017 msleep(100); 3051 3018 } 3019 + tb_switch_enter_redrive(tb->root_switch); 3052 3020 /* Allow tb_handle_hotplug to progress events */ 3053 3021 tcm->hotplug_active = true; 3054 3022 tb_dbg(tb, "resume finished\n"); ··· 3113 
3079 struct tb_cm *tcm = tb_priv(tb); 3114 3080 3115 3081 mutex_lock(&tb->lock); 3082 + /* 3083 + * The below call only releases DP resources to allow exiting and 3084 + * re-entering redrive mode. 3085 + */ 3086 + tb_disconnect_and_release_dp(tb); 3087 + tb_switch_exit_redrive(tb->root_switch); 3116 3088 tb_switch_suspend(tb->root_switch, true); 3117 3089 tcm->hotplug_active = false; 3118 3090 mutex_unlock(&tb->lock); ··· 3150 3110 tb_restore_children(tb->root_switch); 3151 3111 list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) 3152 3112 tb_tunnel_restart(tunnel); 3113 + tb_switch_enter_redrive(tb->root_switch); 3153 3114 tcm->hotplug_active = true; 3154 3115 mutex_unlock(&tb->lock); 3155 3116
+3
drivers/tty/serial/8250/8250_core.c
··· 812 812 uart->dl_write = up->dl_write; 813 813 814 814 if (uart->port.type != PORT_8250_CIR) { 815 + if (uart_console_registered(&uart->port)) 816 + pm_runtime_get_sync(uart->port.dev); 817 + 815 818 if (serial8250_isa_config != NULL) 816 819 serial8250_isa_config(0, &uart->port, 817 820 &uart->capabilities);
+2 -2
drivers/tty/serial/imx.c
··· 2692 2692 { 2693 2693 u32 ucr3; 2694 2694 2695 - uart_port_lock(&sport->port); 2695 + uart_port_lock_irq(&sport->port); 2696 2696 2697 2697 ucr3 = imx_uart_readl(sport, UCR3); 2698 2698 if (on) { ··· 2714 2714 imx_uart_writel(sport, ucr1, UCR1); 2715 2715 } 2716 2716 2717 - uart_port_unlock(&sport->port); 2717 + uart_port_unlock_irq(&sport->port); 2718 2718 } 2719 2719 2720 2720 static int imx_uart_suspend_noirq(struct device *dev)
+2 -2
drivers/tty/serial/stm32-usart.c
··· 1051 1051 const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs; 1052 1052 unsigned long flags; 1053 1053 1054 - spin_lock_irqsave(&port->lock, flags); 1054 + uart_port_lock_irqsave(port, &flags); 1055 1055 1056 1056 if (break_state) 1057 1057 stm32_usart_set_bits(port, ofs->rqr, USART_RQR_SBKRQ); 1058 1058 else 1059 1059 stm32_usart_clr_bits(port, ofs->rqr, USART_RQR_SBKRQ); 1060 1060 1061 - spin_unlock_irqrestore(&port->lock, flags); 1061 + uart_port_unlock_irqrestore(port, flags); 1062 1062 } 1063 1063 1064 1064 static int stm32_usart_startup(struct uart_port *port)
-6
drivers/ufs/core/ufshcd-priv.h
··· 237 237 hba->vops->config_scaling_param(hba, p, data); 238 238 } 239 239 240 - static inline void ufshcd_vops_reinit_notify(struct ufs_hba *hba) 241 - { 242 - if (hba->vops && hba->vops->reinit_notify) 243 - hba->vops->reinit_notify(hba); 244 - } 245 - 246 240 static inline int ufshcd_vops_mcq_config_resource(struct ufs_hba *hba) 247 241 { 248 242 if (hba->vops && hba->vops->mcq_config_resource)
+6 -4
drivers/ufs/core/ufshcd.c
··· 8858 8858 ufshcd_device_reset(hba); 8859 8859 ufs_put_device_desc(hba); 8860 8860 ufshcd_hba_stop(hba); 8861 - ufshcd_vops_reinit_notify(hba); 8862 8861 ret = ufshcd_hba_enable(hba); 8863 8862 if (ret) { 8864 8863 dev_err(hba->dev, "Host controller enable failed\n"); ··· 10590 10591 } 10591 10592 10592 10593 /* 10593 - * Set the default power management level for runtime and system PM. 10594 + * Set the default power management level for runtime and system PM if 10595 + * not set by the host controller drivers. 10594 10596 * Default power saving mode is to keep UFS link in Hibern8 state 10595 10597 * and UFS device in sleep state. 10596 10598 */ 10597 - hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state( 10599 + if (!hba->rpm_lvl) 10600 + hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state( 10598 10601 UFS_SLEEP_PWR_MODE, 10599 10602 UIC_LINK_HIBERN8_STATE); 10600 - hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state( 10603 + if (!hba->spm_lvl) 10604 + hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state( 10601 10605 UFS_SLEEP_PWR_MODE, 10602 10606 UIC_LINK_HIBERN8_STATE); 10603 10607
+19 -12
drivers/ufs/host/ufs-qcom.c
··· 368 368 if (ret) 369 369 return ret; 370 370 371 + if (phy->power_count) { 372 + phy_power_off(phy); 373 + phy_exit(phy); 374 + } 375 + 371 376 /* phy initialization - calibrate the phy */ 372 377 ret = phy_init(phy); 373 378 if (ret) { ··· 871 866 */ 872 867 static void ufs_qcom_advertise_quirks(struct ufs_hba *hba) 873 868 { 869 + const struct ufs_qcom_drvdata *drvdata = of_device_get_match_data(hba->dev); 874 870 struct ufs_qcom_host *host = ufshcd_get_variant(hba); 875 871 876 872 if (host->hw_ver.major == 0x2) ··· 880 874 if (host->hw_ver.major > 0x3) 881 875 hba->quirks |= UFSHCD_QUIRK_REINIT_AFTER_MAX_GEAR_SWITCH; 882 876 883 - if (of_device_is_compatible(hba->dev->of_node, "qcom,sm8550-ufshc") || 884 - of_device_is_compatible(hba->dev->of_node, "qcom,sm8650-ufshc")) 885 - hba->quirks |= UFSHCD_QUIRK_BROKEN_LSDBS_CAP; 877 + if (drvdata && drvdata->quirks) 878 + hba->quirks |= drvdata->quirks; 886 879 } 887 880 888 881 static void ufs_qcom_set_phy_gear(struct ufs_qcom_host *host) ··· 1069 1064 struct device *dev = hba->dev; 1070 1065 struct ufs_qcom_host *host; 1071 1066 struct ufs_clk_info *clki; 1067 + const struct ufs_qcom_drvdata *drvdata = of_device_get_match_data(hba->dev); 1072 1068 1073 1069 host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL); 1074 1070 if (!host) ··· 1148 1142 /* Failure is non-fatal */ 1149 1143 dev_warn(dev, "%s: failed to configure the testbus %d\n", 1150 1144 __func__, err); 1145 + 1146 + if (drvdata && drvdata->no_phy_retention) 1147 + hba->spm_lvl = UFS_PM_LVL_5; 1151 1148 1152 1149 return 0; 1153 1150 ··· 1588 1579 } 1589 1580 #endif 1590 1581 1591 - static void ufs_qcom_reinit_notify(struct ufs_hba *hba) 1592 - { 1593 - struct ufs_qcom_host *host = ufshcd_get_variant(hba); 1594 - 1595 - phy_power_off(host->generic_phy); 1596 - } 1597 - 1598 1582 /* Resources */ 1599 1583 static const struct ufshcd_res_info ufs_res_info[RES_MAX] = { 1600 1584 {.name = "ufs_mem",}, ··· 1827 1825 .device_reset = ufs_qcom_device_reset, 1828 
1826 .config_scaling_param = ufs_qcom_config_scaling_param, 1829 1827 .program_key = ufs_qcom_ice_program_key, 1830 - .reinit_notify = ufs_qcom_reinit_notify, 1831 1828 .mcq_config_resource = ufs_qcom_mcq_config_resource, 1832 1829 .get_hba_mac = ufs_qcom_get_hba_mac, 1833 1830 .op_runtime_config = ufs_qcom_op_runtime_config, ··· 1869 1868 platform_device_msi_free_irqs_all(hba->dev); 1870 1869 } 1871 1870 1871 + static const struct ufs_qcom_drvdata ufs_qcom_sm8550_drvdata = { 1872 + .quirks = UFSHCD_QUIRK_BROKEN_LSDBS_CAP, 1873 + .no_phy_retention = true, 1874 + }; 1875 + 1872 1876 static const struct of_device_id ufs_qcom_of_match[] __maybe_unused = { 1873 1877 { .compatible = "qcom,ufshc" }, 1874 - { .compatible = "qcom,sm8550-ufshc" }, 1878 + { .compatible = "qcom,sm8550-ufshc", .data = &ufs_qcom_sm8550_drvdata }, 1879 + { .compatible = "qcom,sm8650-ufshc", .data = &ufs_qcom_sm8550_drvdata }, 1875 1880 {}, 1876 1881 }; 1877 1882 MODULE_DEVICE_TABLE(of, ufs_qcom_of_match);
+5
drivers/ufs/host/ufs-qcom.h
··· 217 217 bool esi_enabled; 218 218 }; 219 219 220 + struct ufs_qcom_drvdata { 221 + enum ufshcd_quirks quirks; 222 + bool no_phy_retention; 223 + }; 224 + 220 225 static inline u32 221 226 ufs_qcom_get_debug_reg_offset(struct ufs_qcom_host *host, u32 reg) 222 227 {
+17 -8
drivers/usb/chipidea/ci_hdrc_imx.c
··· 370 370 data->pinctrl = devm_pinctrl_get(dev); 371 371 if (PTR_ERR(data->pinctrl) == -ENODEV) 372 372 data->pinctrl = NULL; 373 - else if (IS_ERR(data->pinctrl)) 374 - return dev_err_probe(dev, PTR_ERR(data->pinctrl), 373 + else if (IS_ERR(data->pinctrl)) { 374 + ret = dev_err_probe(dev, PTR_ERR(data->pinctrl), 375 375 "pinctrl get failed\n"); 376 + goto err_put; 377 + } 376 378 377 379 data->hsic_pad_regulator = 378 380 devm_regulator_get_optional(dev, "hsic"); 379 381 if (PTR_ERR(data->hsic_pad_regulator) == -ENODEV) { 380 382 /* no pad regulator is needed */ 381 383 data->hsic_pad_regulator = NULL; 382 - } else if (IS_ERR(data->hsic_pad_regulator)) 383 - return dev_err_probe(dev, PTR_ERR(data->hsic_pad_regulator), 384 + } else if (IS_ERR(data->hsic_pad_regulator)) { 385 + ret = dev_err_probe(dev, PTR_ERR(data->hsic_pad_regulator), 384 386 "Get HSIC pad regulator error\n"); 387 + goto err_put; 388 + } 385 389 386 390 if (data->hsic_pad_regulator) { 387 391 ret = regulator_enable(data->hsic_pad_regulator); 388 392 if (ret) { 389 393 dev_err(dev, 390 394 "Failed to enable HSIC pad regulator\n"); 391 - return ret; 395 + goto err_put; 392 396 } 393 397 } 394 398 } ··· 406 402 dev_err(dev, 407 403 "pinctrl_hsic_idle lookup failed, err=%ld\n", 408 404 PTR_ERR(pinctrl_hsic_idle)); 409 - return PTR_ERR(pinctrl_hsic_idle); 405 + ret = PTR_ERR(pinctrl_hsic_idle); 406 + goto err_put; 410 407 } 411 408 412 409 ret = pinctrl_select_state(data->pinctrl, pinctrl_hsic_idle); 413 410 if (ret) { 414 411 dev_err(dev, "hsic_idle select failed, err=%d\n", ret); 415 - return ret; 412 + goto err_put; 416 413 } 417 414 418 415 data->pinctrl_hsic_active = pinctrl_lookup_state(data->pinctrl, ··· 422 417 dev_err(dev, 423 418 "pinctrl_hsic_active lookup failed, err=%ld\n", 424 419 PTR_ERR(data->pinctrl_hsic_active)); 425 - return PTR_ERR(data->pinctrl_hsic_active); 420 + ret = PTR_ERR(data->pinctrl_hsic_active); 421 + goto err_put; 426 422 } 427 423 } 428 424 ··· 533 527 if (pdata.flags 
& CI_HDRC_PMQOS) 534 528 cpu_latency_qos_remove_request(&data->pm_qos_req); 535 529 data->ci_pdev = NULL; 530 + err_put: 531 + put_device(data->usbmisc_data->dev); 536 532 return ret; 537 533 } 538 534 ··· 559 551 if (data->hsic_pad_regulator) 560 552 regulator_disable(data->hsic_pad_regulator); 561 553 } 554 + put_device(data->usbmisc_data->dev); 562 555 } 563 556 564 557 static void ci_hdrc_imx_shutdown(struct platform_device *pdev)
+4 -3
drivers/usb/class/usblp.c
··· 1337 1337 if (protocol < USBLP_FIRST_PROTOCOL || protocol > USBLP_LAST_PROTOCOL) 1338 1338 return -EINVAL; 1339 1339 1340 + alts = usblp->protocol[protocol].alt_setting; 1341 + if (alts < 0) 1342 + return -EINVAL; 1343 + 1340 1344 /* Don't unnecessarily set the interface if there's a single alt. */ 1341 1345 if (usblp->intf->num_altsetting > 1) { 1342 - alts = usblp->protocol[protocol].alt_setting; 1343 - if (alts < 0) 1344 - return -EINVAL; 1345 1346 r = usb_set_interface(usblp->dev, usblp->ifnum, alts); 1346 1347 if (r < 0) { 1347 1348 printk(KERN_ERR "usblp: can't set desired altsetting %d on interface %d\n",
+4 -2
drivers/usb/core/hub.c
··· 2663 2663 err = sysfs_create_link(&udev->dev.kobj, 2664 2664 &port_dev->dev.kobj, "port"); 2665 2665 if (err) 2666 - goto fail; 2666 + goto out_del_dev; 2667 2667 2668 2668 err = sysfs_create_link(&port_dev->dev.kobj, 2669 2669 &udev->dev.kobj, "device"); 2670 2670 if (err) { 2671 2671 sysfs_remove_link(&udev->dev.kobj, "port"); 2672 - goto fail; 2672 + goto out_del_dev; 2673 2673 } 2674 2674 2675 2675 if (!test_and_set_bit(port1, hub->child_usage_bits)) ··· 2683 2683 pm_runtime_put_sync_autosuspend(&udev->dev); 2684 2684 return err; 2685 2685 2686 + out_del_dev: 2687 + device_del(&udev->dev); 2686 2688 fail: 2687 2689 usb_set_device_state(udev, USB_STATE_NOTATTACHED); 2688 2690 pm_runtime_disable(&udev->dev);
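The hub.c hunk above adds an `out_del_dev:` label so that a failure after `device_del()`-able registration undoes the registration before the common `fail:` handling runs. As an editor's illustrative sketch of that unwind-in-reverse-order pattern (plain C, all helper names hypothetical, no kernel APIs):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical resources standing in for the registered device
 * and the two sysfs links created in the real code. */
static bool dev_added, link_a, link_b;

static int add_dev(void)  { dev_added = true; return 0; }
static void del_dev(void) { dev_added = false; }
static int make_link_a(bool fail) { if (fail) return -1; link_a = true; return 0; }
static int make_link_b(bool fail) { if (fail) return -1; link_b = true; return 0; }
static void remove_link_a(void) { link_a = false; }

/* Error paths unwind in reverse order of acquisition: a failure after
 * add_dev() jumps to a label that undoes add_dev() before falling
 * through to the common failure handling. */
static int setup(bool fail_a, bool fail_b)
{
	int err;

	err = add_dev();
	if (err)
		goto fail;

	err = make_link_a(fail_a);
	if (err)
		goto out_del_dev;

	err = make_link_b(fail_b);
	if (err) {
		remove_link_a();
		goto out_del_dev;
	}

	return 0;

out_del_dev:
	del_dev();
fail:
	return err;
}
```

The label ordering matters: later labels undo earlier acquisitions, so each failure site jumps to the first label whose cleanups it actually needs.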
+4 -3
drivers/usb/core/port.c
··· 453 453 static void usb_port_shutdown(struct device *dev) 454 454 { 455 455 struct usb_port *port_dev = to_usb_port(dev); 456 + struct usb_device *udev = port_dev->child; 456 457 457 - if (port_dev->child) { 458 - usb_disable_usb2_hardware_lpm(port_dev->child); 459 - usb_unlocked_disable_lpm(port_dev->child); 458 + if (udev && !udev->port_is_suspended) { 459 + usb_disable_usb2_hardware_lpm(udev); 460 + usb_unlocked_disable_lpm(udev); 460 461 } 461 462 } 462 463
+1
drivers/usb/dwc3/core.h
··· 464 464 #define DWC3_DCTL_TRGTULST_SS_INACT (DWC3_DCTL_TRGTULST(6)) 465 465 466 466 /* These apply for core versions 1.94a and later */ 467 + #define DWC3_DCTL_NYET_THRES_MASK (0xf << 20) 467 468 #define DWC3_DCTL_NYET_THRES(n) (((n) & 0xf) << 20) 468 469 469 470 #define DWC3_DCTL_KEEP_CONNECT BIT(19)
+1
drivers/usb/dwc3/dwc3-am62.c
··· 309 309 310 310 pm_runtime_put_sync(dev); 311 311 pm_runtime_disable(dev); 312 + pm_runtime_dont_use_autosuspend(dev); 312 313 pm_runtime_set_suspended(dev); 313 314 } 314 315
+3 -1
drivers/usb/dwc3/gadget.c
··· 4195 4195 WARN_ONCE(DWC3_VER_IS_PRIOR(DWC3, 240A) && dwc->has_lpm_erratum, 4196 4196 "LPM Erratum not available on dwc3 revisions < 2.40a\n"); 4197 4197 4198 - if (dwc->has_lpm_erratum && !DWC3_VER_IS_PRIOR(DWC3, 240A)) 4198 + if (dwc->has_lpm_erratum && !DWC3_VER_IS_PRIOR(DWC3, 240A)) { 4199 + reg &= ~DWC3_DCTL_NYET_THRES_MASK; 4199 4200 reg |= DWC3_DCTL_NYET_THRES(dwc->lpm_nyet_threshold); 4201 + } 4200 4202 4201 4203 dwc3_gadget_dctl_write_safe(dwc, reg); 4202 4204 } else {
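The gadget.c hunk above clears `DWC3_DCTL_NYET_THRES_MASK` before OR-ing in the new threshold, because `DWC3_DCTL_NYET_THRES(n)` only sets bits and cannot clear stale ones from a previous value. A minimal sketch of that mask-then-set read-modify-write pattern (field layout copied from the macros above, function name hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the DWC3_DCTL_NYET_THRES definitions: a 4-bit field at bit 20.
 * OR-ing a new value without masking first leaves stale bits set. */
#define NYET_THRES_MASK   (0xfu << 20)
#define NYET_THRES(n)     (((n) & 0xfu) << 20)

static uint32_t set_nyet_thres(uint32_t reg, uint32_t n)
{
	reg &= ~NYET_THRES_MASK;  /* clear the old field value */
	reg |= NYET_THRES(n);     /* then install the new one */
	return reg;
}
```

Without the mask step, writing `0x1` into a register that previously held `0xf` would leave the field reading `0xf`, since `0xf | 0x1 == 0xf`.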
+2 -2
drivers/usb/gadget/Kconfig
··· 211 211 212 212 config USB_F_MIDI2 213 213 tristate 214 + select SND_UMP 215 + select SND_UMP_LEGACY_RAWMIDI 214 216 215 217 config USB_F_HID 216 218 tristate ··· 447 445 depends on USB_CONFIGFS 448 446 depends on SND 449 447 select USB_LIBCOMPOSITE 450 - select SND_UMP 451 - select SND_UMP_LEGACY_RAWMIDI 452 448 select USB_F_MIDI2 453 449 help 454 450 The MIDI 2.0 function driver provides the generic emulated
+5 -1
drivers/usb/gadget/configfs.c
··· 827 827 { 828 828 struct gadget_string *string = to_gadget_string(item); 829 829 int size = min(sizeof(string->string), len + 1); 830 + ssize_t cpy_len; 830 831 831 832 if (len > USB_MAX_STRING_LEN) 832 833 return -EINVAL; 833 834 834 - return strscpy(string->string, page, size); 835 + cpy_len = strscpy(string->string, page, size); 836 + if (cpy_len > 0 && string->string[cpy_len - 1] == '\n') 837 + string->string[cpy_len - 1] = 0; 838 + return len; 835 839 } 836 840 CONFIGFS_ATTR(gadget_string_, s); 837 841
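The configfs.c hunk above strips a single trailing newline (as appended by `echo`) after the copy and reports the full input length as consumed, so the writer does not loop retrying the unconsumed byte. A sketch of the pattern, assuming a local stand-in for the kernel's `strscpy()` (store_string and copy_string are hypothetical names):

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>

/* Local stand-in for strscpy(): copy with truncation, return the
 * number of bytes copied excluding the terminating NUL. */
static ssize_t copy_string(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (len >= size)
		len = size - 1;
	memcpy(dst, src, len);
	dst[len] = '\0';
	return (ssize_t)len;
}

/* Mirror of the store-attribute fix: trim one trailing newline from
 * the stored copy, but report the caller's full length as consumed. */
static ssize_t store_string(char *dst, size_t dstsize,
			    const char *page, size_t len)
{
	ssize_t cpy_len = copy_string(dst, page, dstsize);

	if (cpy_len > 0 && dst[cpy_len - 1] == '\n')
		dst[cpy_len - 1] = '\0';
	return (ssize_t)len;
}
```

Returning `len` rather than the (possibly shorter) copied length is what keeps `echo foo > attr` from reporting a short write.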
+1 -1
drivers/usb/gadget/function/f_fs.c
··· 2285 2285 struct usb_gadget_strings **lang; 2286 2286 int first_id; 2287 2287 2288 - if (WARN_ON(ffs->state != FFS_ACTIVE 2288 + if ((ffs->state != FFS_ACTIVE 2289 2289 || test_and_set_bit(FFS_FL_BOUND, &ffs->flags))) 2290 2290 return -EBADFD; 2291 2291
+1
drivers/usb/gadget/function/f_uac2.c
··· 1185 1185 uac2->as_in_alt = 0; 1186 1186 } 1187 1187 1188 + std_ac_if_desc.bNumEndpoints = 0; 1188 1189 if (FUOUT_EN(uac2_opts) || FUIN_EN(uac2_opts)) { 1189 1190 uac2->int_ep = usb_ep_autoconfig(gadget, &fs_ep_int_desc); 1190 1191 if (!uac2->int_ep) {
+4 -4
drivers/usb/gadget/function/u_serial.c
··· 1420 1420 /* REVISIT as above: how best to track this? */ 1421 1421 port->port_line_coding = gser->port_line_coding; 1422 1422 1423 + /* disable endpoints, aborting down any active I/O */ 1424 + usb_ep_disable(gser->out); 1425 + usb_ep_disable(gser->in); 1426 + 1423 1427 port->port_usb = NULL; 1424 1428 gser->ioport = NULL; 1425 1429 if (port->port.count > 0) { ··· 1434 1430 port->suspended = false; 1435 1431 spin_unlock(&port->port_lock); 1436 1432 spin_unlock_irqrestore(&serial_port_lock, flags); 1437 - 1438 - /* disable endpoints, aborting down any active I/O */ 1439 - usb_ep_disable(gser->out); 1440 - usb_ep_disable(gser->in); 1441 1433 1442 1434 /* finally, free any unused/unusable I/O buffers */ 1443 1435 spin_lock_irqsave(&port->port_lock, flags);
+1 -1
drivers/usb/host/xhci-mem.c
··· 436 436 goto free_segments; 437 437 } 438 438 439 - xhci_link_rings(xhci, ring, &new_ring); 439 + xhci_link_rings(xhci, &new_ring, ring); 440 440 trace_xhci_ring_expansion(ring); 441 441 xhci_dbg_trace(xhci, trace_xhci_dbg_ring_expansion, 442 442 "ring expansion succeed, now has %d segments",
+2 -1
drivers/usb/host/xhci-plat.c
··· 290 290 291 291 hcd->tpl_support = of_usb_host_tpl_support(sysdev->of_node); 292 292 293 - if (priv && (priv->quirks & XHCI_SKIP_PHY_INIT)) 293 + if ((priv && (priv->quirks & XHCI_SKIP_PHY_INIT)) || 294 + (xhci->quirks & XHCI_SKIP_PHY_INIT)) 294 295 hcd->skip_phy_initialization = 1; 295 296 296 297 if (priv && (priv->quirks & XHCI_SG_TRB_CACHE_SIZE_QUIRK))
-2
drivers/usb/host/xhci-ring.c
··· 1199 1199 * Keep retrying until the EP starts and stops again, on 1200 1200 * chips where this is known to help. Wait for 100ms. 1201 1201 */ 1202 - if (!(xhci->quirks & XHCI_NEC_HOST)) 1203 - break; 1204 1202 if (time_is_before_jiffies(ep->stop_time + msecs_to_jiffies(100))) 1205 1203 break; 1206 1204 fallthrough;
+1
drivers/usb/serial/cp210x.c
··· 223 223 { USB_DEVICE(0x19CF, 0x3000) }, /* Parrot NMEA GPS Flight Recorder */ 224 224 { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */ 225 225 { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */ 226 + { USB_DEVICE(0x1B93, 0x1013) }, /* Phoenix Contact UPS Device */ 226 227 { USB_DEVICE(0x1BA4, 0x0002) }, /* Silicon Labs 358x factory default */ 227 228 { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */ 228 229 { USB_DEVICE(0x1D6F, 0x0010) }, /* Seluxit ApS RF Dongle */
+30 -1
drivers/usb/serial/option.c
··· 621 621 622 622 /* MeiG Smart Technology products */ 623 623 #define MEIGSMART_VENDOR_ID 0x2dee 624 - /* MeiG Smart SRM825L based on Qualcomm 315 */ 624 + /* MeiG Smart SRM815/SRM825L based on Qualcomm 315 */ 625 625 #define MEIGSMART_PRODUCT_SRM825L 0x4d22 626 626 /* MeiG Smart SLM320 based on UNISOC UIS8910 */ 627 627 #define MEIGSMART_PRODUCT_SLM320 0x4d41 628 + /* MeiG Smart SLM770A based on ASR1803 */ 629 + #define MEIGSMART_PRODUCT_SLM770A 0x4d57 628 630 629 631 /* Device flags */ 630 632 ··· 1397 1395 .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) }, 1398 1396 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10aa, 0xff), /* Telit FN920C04 (MBIM) */ 1399 1397 .driver_info = NCTRL(3) | RSVD(4) | RSVD(5) }, 1398 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c0, 0xff), /* Telit FE910C04 (rmnet) */ 1399 + .driver_info = RSVD(0) | NCTRL(3) }, 1400 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c4, 0xff), /* Telit FE910C04 (rmnet) */ 1401 + .driver_info = RSVD(0) | NCTRL(3) }, 1402 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c8, 0xff), /* Telit FE910C04 (rmnet) */ 1403 + .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) }, 1400 1404 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910), 1401 1405 .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) }, 1402 1406 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM), ··· 2255 2247 .driver_info = NCTRL(2) }, 2256 2248 { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x7127, 0xff, 0x00, 0x00), 2257 2249 .driver_info = NCTRL(2) | NCTRL(3) | NCTRL(4) }, 2250 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x7129, 0xff, 0x00, 0x00), /* MediaTek T7XX */ 2251 + .driver_info = NCTRL(2) | NCTRL(3) | NCTRL(4) }, 2258 2252 { USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MEN200) }, 2259 2253 { USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MPL200), 2260 2254 .driver_info = RSVD(1) | RSVD(4) }, ··· 2385 2375 { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0xff, 0x30) }, /* NetPrisma 
LCUK54-WWD for Golbal EDU */ 2386 2376 { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0x00, 0x40) }, 2387 2377 { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0xff, 0x40) }, 2378 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010a, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WRD for WWAN Ready */ 2379 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010a, 0xff, 0x00, 0x40) }, 2380 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010a, 0xff, 0xff, 0x40) }, 2381 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010b, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WWD for WWAN Ready */ 2382 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010b, 0xff, 0x00, 0x40) }, 2383 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010b, 0xff, 0xff, 0x40) }, 2384 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010c, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WRD for WWAN Ready */ 2385 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010c, 0xff, 0x00, 0x40) }, 2386 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010c, 0xff, 0xff, 0x40) }, 2387 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010d, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WWD for WWAN Ready */ 2388 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010d, 0xff, 0x00, 0x40) }, 2389 + { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010d, 0xff, 0xff, 0x40) }, 2388 2390 { USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) }, 2389 2391 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) }, 2390 2392 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) }, ··· 2404 2382 { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) }, 2405 2383 { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) }, 2406 2384 { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM320, 0xff, 0, 0) }, 2385 + { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM770A, 0xff, 0, 0) }, 2386 + { 
USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0, 0) }, 2407 2387 { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x30) }, 2408 2388 { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x40) }, 2409 2389 { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x60) }, 2390 + { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0530, 0xff), /* TCL IK512 MBIM */ 2391 + .driver_info = NCTRL(1) }, 2392 + { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0640, 0xff), /* TCL IK512 ECM */ 2393 + .driver_info = NCTRL(3) }, 2394 + { USB_DEVICE_INTERFACE_CLASS(0x2949, 0x8700, 0xff) }, /* Neoway N723-EA */ 2410 2395 { } /* Terminating entry */ 2411 2396 }; 2412 2397 MODULE_DEVICE_TABLE(usb, option_ids);
+7
drivers/usb/storage/unusual_devs.h
··· 255 255 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 256 256 US_FL_MAX_SECTORS_64 ), 257 257 258 + /* Added by Lubomir Rintel <lkundrak@v3.sk>, a very fine chap */ 259 + UNUSUAL_DEV( 0x0421, 0x06c2, 0x0000, 0x0406, 260 + "Nokia", 261 + "Nokia 208", 262 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 263 + US_FL_MAX_SECTORS_64 ), 264 + 258 265 #ifdef NO_SDDR09 259 266 UNUSUAL_DEV( 0x0436, 0x0005, 0x0100, 0x0100, 260 267 "Microtech",
+2 -2
drivers/usb/typec/tcpm/maxim_contaminant.c
··· 135 135 136 136 mv = max_contaminant_read_adc_mv(chip, channel, sleep_msec, raw, true); 137 137 if (mv < 0) 138 - return ret; 138 + return mv; 139 139 140 140 /* OVP enable */ 141 141 ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL2, CCOVPDIS, 0); ··· 157 157 158 158 mv = max_contaminant_read_adc_mv(chip, channel, sleep_msec, raw, true); 159 159 if (mv < 0) 160 - return ret; 160 + return mv; 161 161 /* Disable current source */ 162 162 ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL2, SBURPCTRL, 0); 163 163 if (ret < 0)
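The maxim_contaminant.c hunk above fixes a classic copy-paste bug: the code tested `mv < 0` but returned `ret`, which at that point still held a stale success value, silently swallowing the ADC error. A minimal sketch of the bug class (read_adc and check_contaminant are hypothetical stand-ins):

```c
#include <assert.h>

/* Hypothetical ADC read: a negative errno-style value on failure,
 * otherwise a millivolt reading. */
static int read_adc(int fail)
{
	return fail ? -5 : 1200;
}

static int check_contaminant(int fail)
{
	int ret = 0;
	int mv;

	mv = read_adc(fail);
	if (mv < 0)
		return mv;  /* propagate the real error, not the stale ret */

	/* ... further checks would go here, updating ret ... */
	return ret;
}
```

With the pre-fix `return ret;` the failure path would have returned 0, and callers would have treated a failed ADC read as a valid measurement.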
+15 -10
drivers/usb/typec/tcpm/tcpci.c
··· 700 700 701 701 tcpci->alert_mask = reg; 702 702 703 - return tcpci_write16(tcpci, TCPC_ALERT_MASK, reg); 703 + return 0; 704 704 } 705 705 706 706 irqreturn_t tcpci_irq(struct tcpci *tcpci) ··· 923 923 924 924 chip->data.set_orientation = err; 925 925 926 + chip->tcpci = tcpci_register_port(&client->dev, &chip->data); 927 + if (IS_ERR(chip->tcpci)) 928 + return PTR_ERR(chip->tcpci); 929 + 926 930 err = devm_request_threaded_irq(&client->dev, client->irq, NULL, 927 931 _tcpci_irq, 928 932 IRQF_SHARED | IRQF_ONESHOT, 929 933 dev_name(&client->dev), chip); 930 934 if (err < 0) 931 - return err; 935 + goto unregister_port; 932 936 933 - /* 934 - * Disable irq while registering port. If irq is configured as an edge 935 - * irq this allow to keep track and process the irq as soon as it is enabled. 936 - */ 937 - disable_irq(client->irq); 938 - chip->tcpci = tcpci_register_port(&client->dev, &chip->data); 939 - enable_irq(client->irq); 937 + /* Enable chip interrupts at last */ 938 + err = tcpci_write16(chip->tcpci, TCPC_ALERT_MASK, chip->tcpci->alert_mask); 939 + if (err < 0) 940 + goto unregister_port; 940 941 941 - return PTR_ERR_OR_ZERO(chip->tcpci); 942 + return 0; 943 + 944 + unregister_port: 945 + tcpci_unregister_port(chip->tcpci); 946 + return err; 942 947 } 943 948 944 949 static void tcpci_remove(struct i2c_client *client)
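The tcpci.c hunk above reorders probe so the port is registered before the IRQ is requested, enables the chip's interrupt mask last, and unwinds the registration on any later failure (replacing the old disable_irq/enable_irq window). A simplified sketch of that init-ordering pattern (plain C, all names hypothetical; the real fix also interacts with edge-triggered IRQ delivery, which this does not model):

```c
#include <assert.h>
#include <stdbool.h>

static bool port_registered, irq_requested, irq_unmasked;

static int register_port(void)   { port_registered = true; return 0; }
static void unregister_port(void) { port_registered = false; }
static int request_irq_stub(bool fail) { if (fail) return -1; irq_requested = true; return 0; }
static int unmask_irqs(bool fail)      { if (fail) return -1; irq_unmasked = true; return 0; }

/* Make the consumer (the port) fully ready before the producer (the
 * IRQ, then the chip interrupt mask) can fire; unwind on failure. */
static int probe(bool fail_irq, bool fail_mask)
{
	int err;

	err = register_port();
	if (err)
		return err;

	err = request_irq_stub(fail_irq);
	if (err)
		goto unregister;

	err = unmask_irqs(fail_mask);  /* enable chip interrupts last */
	if (err)
		goto unregister;

	return 0;

unregister:
	unregister_port();
	return err;
}
```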
+2 -2
drivers/usb/typec/ucsi/ucsi_ccg.c
··· 646 646 UCSI_CMD_CONNECTOR_MASK; 647 647 if (con_index == 0) { 648 648 ret = -EINVAL; 649 - goto unlock; 649 + goto err_put; 650 650 } 651 651 con = &uc->ucsi->connector[con_index - 1]; 652 652 ucsi_ccg_update_set_new_cam_cmd(uc, con, &command); ··· 654 654 655 655 ret = ucsi_sync_control_common(ucsi, command); 656 656 657 + err_put: 657 658 pm_runtime_put_sync(uc->dev); 658 - unlock: 659 659 mutex_unlock(&uc->lock); 660 660 661 661 return ret;
+5
drivers/usb/typec/ucsi/ucsi_glink.c
··· 185 185 struct pmic_glink_ucsi *ucsi = ucsi_get_drvdata(con->ucsi); 186 186 int orientation; 187 187 188 + if (!UCSI_CONSTAT(con, CONNECTED)) { 189 + typec_set_orientation(con->port, TYPEC_ORIENTATION_NONE); 190 + return; 191 + } 192 + 188 193 if (con->num > PMIC_GLINK_MAX_PORTS || 189 194 !ucsi->port_orientation[con->num - 1]) 190 195 return;
+9 -8
drivers/vfio/pci/vfio_pci_core.c
··· 1661 1661 unsigned long pfn, pgoff = vmf->pgoff - vma->vm_pgoff; 1662 1662 vm_fault_t ret = VM_FAULT_SIGBUS; 1663 1663 1664 - if (order && (vmf->address & ((PAGE_SIZE << order) - 1) || 1664 + pfn = vma_to_pfn(vma) + pgoff; 1665 + 1666 + if (order && (pfn & ((1 << order) - 1) || 1667 + vmf->address & ((PAGE_SIZE << order) - 1) || 1665 1668 vmf->address + (PAGE_SIZE << order) > vma->vm_end)) { 1666 1669 ret = VM_FAULT_FALLBACK; 1667 1670 goto out; 1668 1671 } 1669 - 1670 - pfn = vma_to_pfn(vma); 1671 1672 1672 1673 down_read(&vdev->memory_lock); 1673 1674 ··· 1677 1676 1678 1677 switch (order) { 1679 1678 case 0: 1680 - ret = vmf_insert_pfn(vma, vmf->address, pfn + pgoff); 1679 + ret = vmf_insert_pfn(vma, vmf->address, pfn); 1681 1680 break; 1682 1681 #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP 1683 1682 case PMD_ORDER: 1684 - ret = vmf_insert_pfn_pmd(vmf, __pfn_to_pfn_t(pfn + pgoff, 1685 - PFN_DEV), false); 1683 + ret = vmf_insert_pfn_pmd(vmf, 1684 + __pfn_to_pfn_t(pfn, PFN_DEV), false); 1686 1685 break; 1687 1686 #endif 1688 1687 #ifdef CONFIG_ARCH_SUPPORTS_PUD_PFNMAP 1689 1688 case PUD_ORDER: 1690 - ret = vmf_insert_pfn_pud(vmf, __pfn_to_pfn_t(pfn + pgoff, 1691 - PFN_DEV), false); 1689 + ret = vmf_insert_pfn_pud(vmf, 1690 + __pfn_to_pfn_t(pfn, PFN_DEV), false); 1692 1691 break; 1693 1692 #endif 1694 1693 default:
+13 -5
drivers/video/fbdev/Kconfig
··· 649 649 config FB_ATMEL 650 650 tristate "AT91 LCD Controller support" 651 651 depends on FB && OF && HAVE_CLK && HAS_IOMEM 652 + depends on BACKLIGHT_CLASS_DEVICE 652 653 depends on HAVE_FB_ATMEL || COMPILE_TEST 653 654 select FB_BACKLIGHT 654 655 select FB_IOMEM_HELPERS ··· 661 660 config FB_NVIDIA 662 661 tristate "nVidia Framebuffer Support" 663 662 depends on FB && PCI 664 - select FB_BACKLIGHT if FB_NVIDIA_BACKLIGHT 665 663 select FB_CFB_FILLRECT 666 664 select FB_CFB_COPYAREA 667 665 select FB_CFB_IMAGEBLIT ··· 700 700 config FB_NVIDIA_BACKLIGHT 701 701 bool "Support for backlight control" 702 702 depends on FB_NVIDIA 703 + depends on BACKLIGHT_CLASS_DEVICE=y || BACKLIGHT_CLASS_DEVICE=FB_NVIDIA 704 + select FB_BACKLIGHT 703 705 default y 704 706 help 705 707 Say Y here if you want to control the backlight of your display. ··· 709 707 config FB_RIVA 710 708 tristate "nVidia Riva support" 711 709 depends on FB && PCI 712 - select FB_BACKLIGHT if FB_RIVA_BACKLIGHT 713 710 select FB_CFB_FILLRECT 714 711 select FB_CFB_COPYAREA 715 712 select FB_CFB_IMAGEBLIT ··· 748 747 config FB_RIVA_BACKLIGHT 749 748 bool "Support for backlight control" 750 749 depends on FB_RIVA 750 + depends on BACKLIGHT_CLASS_DEVICE=y || BACKLIGHT_CLASS_DEVICE=FB_RIVA 751 + select FB_BACKLIGHT 751 752 default y 752 753 help 753 754 Say Y here if you want to control the backlight of your display. ··· 937 934 config FB_RADEON 938 935 tristate "ATI Radeon display support" 939 936 depends on FB && PCI 940 - select FB_BACKLIGHT if FB_RADEON_BACKLIGHT 941 937 select FB_CFB_FILLRECT 942 938 select FB_CFB_COPYAREA 943 939 select FB_CFB_IMAGEBLIT ··· 962 960 config FB_RADEON_BACKLIGHT 963 961 bool "Support for backlight control" 964 962 depends on FB_RADEON 963 + depends on BACKLIGHT_CLASS_DEVICE=y || BACKLIGHT_CLASS_DEVICE=FB_RADEON 964 + select FB_BACKLIGHT 965 965 default y 966 966 help 967 967 Say Y here if you want to control the backlight of your display. 
··· 979 975 config FB_ATY128 980 976 tristate "ATI Rage128 display support" 981 977 depends on FB && PCI 982 - select FB_BACKLIGHT if FB_ATY128_BACKLIGHT 983 978 select FB_IOMEM_HELPERS 984 979 select FB_MACMODES if PPC_PMAC 985 980 help ··· 992 989 config FB_ATY128_BACKLIGHT 993 990 bool "Support for backlight control" 994 991 depends on FB_ATY128 992 + depends on BACKLIGHT_CLASS_DEVICE=y || BACKLIGHT_CLASS_DEVICE=FB_ATY128 993 + select FB_BACKLIGHT 995 994 default y 996 995 help 997 996 Say Y here if you want to control the backlight of your display. ··· 1004 999 select FB_CFB_FILLRECT 1005 1000 select FB_CFB_COPYAREA 1006 1001 select FB_CFB_IMAGEBLIT 1007 - select FB_BACKLIGHT if FB_ATY_BACKLIGHT 1008 1002 select FB_IOMEM_FOPS 1009 1003 select FB_MACMODES if PPC 1010 1004 select FB_ATY_CT if SPARC64 && PCI ··· 1044 1040 config FB_ATY_BACKLIGHT 1045 1041 bool "Support for backlight control" 1046 1042 depends on FB_ATY 1043 + depends on BACKLIGHT_CLASS_DEVICE=y || BACKLIGHT_CLASS_DEVICE=FB_ATY 1044 + select FB_BACKLIGHT 1047 1045 default y 1048 1046 help 1049 1047 Say Y here if you want to control the backlight of your display. ··· 1534 1528 depends on FB && HAVE_CLK && HAS_IOMEM 1535 1529 depends on SUPERH || COMPILE_TEST 1536 1530 depends on FB_DEVICE 1531 + depends on BACKLIGHT_CLASS_DEVICE 1537 1532 select FB_BACKLIGHT 1538 1533 select FB_DEFERRED_IO 1539 1534 select FB_DMAMEM_HELPERS ··· 1800 1793 tristate "Solomon SSD1307 framebuffer support" 1801 1794 depends on FB && I2C 1802 1795 depends on GPIOLIB || COMPILE_TEST 1796 + depends on BACKLIGHT_CLASS_DEVICE 1803 1797 select FB_BACKLIGHT 1804 1798 select FB_SYSMEM_HELPERS_DEFERRED 1805 1799 help
+1 -2
drivers/video/fbdev/core/Kconfig
··· 183 183 select FB_SYSMEM_HELPERS 184 184 185 185 config FB_BACKLIGHT 186 - tristate 186 + bool 187 187 depends on FB 188 - select BACKLIGHT_CLASS_DEVICE 189 188 190 189 config FB_MODE_HELPERS 191 190 bool "Enable Video Mode Handling Helpers"
+1 -3
drivers/virt/coco/tdx-guest/tdx-guest.c
··· 124 124 if (!addr) 125 125 return NULL; 126 126 127 - if (set_memory_decrypted((unsigned long)addr, count)) { 128 - free_pages_exact(addr, len); 127 + if (set_memory_decrypted((unsigned long)addr, count)) 129 128 return NULL; 130 - } 131 129 132 130 return addr; 133 131 }
+1 -1
drivers/watchdog/stm32_iwdg.c
··· 286 286 if (!wdt->data->has_early_wakeup) 287 287 return 0; 288 288 289 - irq = platform_get_irq(pdev, 0); 289 + irq = platform_get_irq_optional(pdev, 0); 290 290 if (irq <= 0) 291 291 return 0; 292 292
+5 -1
fs/9p/vfs_addr.c
··· 57 57 int err, len; 58 58 59 59 len = p9_client_write(fid, subreq->start, &subreq->io_iter, &err); 60 + if (len > 0) 61 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); 60 62 netfs_write_subrequest_terminated(subreq, len ?: err, false); 61 63 } 62 64 ··· 82 80 if (pos + total >= i_size_read(rreq->inode)) 83 81 __set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags); 84 82 85 - if (!err) 83 + if (!err) { 86 84 subreq->transferred += total; 85 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); 86 + } 87 87 88 88 netfs_read_subreq_terminated(subreq, err, false); 89 89 }
+4 -2
fs/afs/addr_prefs.c
··· 413 413 414 414 do { 415 415 argc = afs_split_string(&buf, argv, ARRAY_SIZE(argv)); 416 - if (argc < 0) 417 - return argc; 416 + if (argc < 0) { 417 + ret = argc; 418 + goto done; 419 + } 418 420 if (argc < 2) 419 421 goto inval; 420 422
+1 -1
fs/afs/afs.h
··· 10 10 11 11 #include <linux/in.h> 12 12 13 - #define AFS_MAXCELLNAME 256 /* Maximum length of a cell name */ 13 + #define AFS_MAXCELLNAME 253 /* Maximum length of a cell name (DNS limited) */ 14 14 #define AFS_MAXVOLNAME 64 /* Maximum length of a volume name */ 15 15 #define AFS_MAXNSERVERS 8 /* Maximum servers in a basic volume record */ 16 16 #define AFS_NMAXNSERVERS 13 /* Maximum servers in a N/U-class volume record */
+1
fs/afs/afs_vl.h
··· 13 13 #define AFS_VL_PORT 7003 /* volume location service port */ 14 14 #define VL_SERVICE 52 /* RxRPC service ID for the Volume Location service */ 15 15 #define YFS_VL_SERVICE 2503 /* Service ID for AuriStor upgraded VL service */ 16 + #define YFS_VL_MAXCELLNAME 256 /* Maximum length of a cell name in YFS protocol */ 16 17 17 18 enum AFSVL_Operations { 18 19 VLGETENTRYBYID = 503, /* AFS Get VLDB entry by ID */
+6 -2
fs/afs/vl_alias.c
··· 253 253 static int yfs_check_canonical_cell_name(struct afs_cell *cell, struct key *key) 254 254 { 255 255 struct afs_cell *master; 256 + size_t name_len; 256 257 char *cell_name; 257 258 258 259 cell_name = afs_vl_get_cell_name(cell, key); ··· 265 264 return 0; 266 265 } 267 266 268 - master = afs_lookup_cell(cell->net, cell_name, strlen(cell_name), 269 - NULL, false); 267 + name_len = strlen(cell_name); 268 + if (!name_len || name_len > AFS_MAXCELLNAME) 269 + master = ERR_PTR(-EOPNOTSUPP); 270 + else 271 + master = afs_lookup_cell(cell->net, cell_name, name_len, NULL, false); 270 272 kfree(cell_name); 271 273 if (IS_ERR(master)) 272 274 return PTR_ERR(master);
+1 -1
fs/afs/vlclient.c
··· 697 697 return ret; 698 698 699 699 namesz = ntohl(call->tmp); 700 - if (namesz > AFS_MAXCELLNAME) 700 + if (namesz > YFS_VL_MAXCELLNAME) 701 701 return afs_protocol_error(call, afs_eproto_cellname_len); 702 702 paddedsz = (namesz + 3) & ~3; 703 703 call->count = namesz;
+4 -1
fs/afs/write.c
··· 122 122 if (subreq->debug_index == 3) 123 123 return netfs_write_subrequest_terminated(subreq, -ENOANO, false); 124 124 125 - if (!test_bit(NETFS_SREQ_RETRYING, &subreq->flags)) { 125 + if (!subreq->retry_count) { 126 126 set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 127 127 return netfs_write_subrequest_terminated(subreq, -EAGAIN, false); 128 128 } ··· 149 149 afs_wait_for_operation(op); 150 150 ret = afs_put_operation(op); 151 151 switch (ret) { 152 + case 0: 153 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); 154 + break; 152 155 case -EACCES: 153 156 case -EPERM: 154 157 case -ENOKEY:
+11 -5
fs/btrfs/bio.c
··· 358 358 INIT_WORK(&bbio->end_io_work, btrfs_end_bio_work); 359 359 queue_work(btrfs_end_io_wq(fs_info, bio), &bbio->end_io_work); 360 360 } else { 361 - if (bio_op(bio) == REQ_OP_ZONE_APPEND && !bio->bi_status) 361 + if (bio_is_zone_append(bio) && !bio->bi_status) 362 362 btrfs_record_physical_zoned(bbio); 363 363 btrfs_bio_end_io(bbio, bbio->bio.bi_status); 364 364 } ··· 401 401 else 402 402 bio->bi_status = BLK_STS_OK; 403 403 404 - if (bio_op(bio) == REQ_OP_ZONE_APPEND && !bio->bi_status) 404 + if (bio_is_zone_append(bio) && !bio->bi_status) 405 405 stripe->physical = bio->bi_iter.bi_sector << SECTOR_SHIFT; 406 406 407 407 btrfs_bio_end_io(bbio, bbio->bio.bi_status); ··· 415 415 if (bio->bi_status) { 416 416 atomic_inc(&stripe->bioc->error); 417 417 btrfs_log_dev_io_error(bio, stripe->dev); 418 - } else if (bio_op(bio) == REQ_OP_ZONE_APPEND) { 418 + } else if (bio_is_zone_append(bio)) { 419 419 stripe->physical = bio->bi_iter.bi_sector << SECTOR_SHIFT; 420 420 } 421 421 ··· 652 652 map_length = min(map_length, bbio->fs_info->max_zone_append_size); 653 653 sector_offset = bio_split_rw_at(&bbio->bio, &bbio->fs_info->limits, 654 654 &nr_segs, map_length); 655 - if (sector_offset) 656 - return sector_offset << SECTOR_SHIFT; 655 + if (sector_offset) { 656 + /* 657 + * bio_split_rw_at() could split at a size smaller than our 658 + * sectorsize and thus cause unaligned I/Os. Fix that by 659 + * always rounding down to the nearest boundary. 660 + */ 661 + return ALIGN_DOWN(sector_offset << SECTOR_SHIFT, bbio->fs_info->sectorsize); 662 + } 657 663 return map_length; 658 664 } 659 665
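The comment added in the zone-append split path explains the fix: `bio_split_rw_at()` may pick a split point smaller than the filesystem sector size, so the result is rounded down to a sector boundary. A sketch of that rounding (`ALIGN_DOWN` re-defined here for userspace; it assumes a power-of-two sector size, as btrfs guarantees):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace re-definition of the kernel's ALIGN_DOWN for power-of-two
 * alignments: clear the low bits so the value lands on a boundary. */
#define ALIGN_DOWN(x, a) ((x) & ~((__typeof__(x))(a) - 1))

/* Round a split point chosen by the block layer down to the nearest
 * sector boundary so no unaligned I/O is ever issued. */
static uint64_t append_map_length(uint64_t split_bytes, uint32_t sectorsize)
{
	return ALIGN_DOWN(split_bytes, (uint64_t)sectorsize);
}
```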
+4 -7
fs/btrfs/ctree.c
··· 654 654 goto error_unlock_cow; 655 655 } 656 656 } 657 + 658 + trace_btrfs_cow_block(root, buf, cow); 657 659 if (unlock_orig) 658 660 btrfs_tree_unlock(buf); 659 661 free_extent_buffer_stale(buf); ··· 712 710 { 713 711 struct btrfs_fs_info *fs_info = root->fs_info; 714 712 u64 search_start; 715 - int ret; 716 713 717 714 if (unlikely(test_bit(BTRFS_ROOT_DELETING, &root->state))) { 718 715 btrfs_abort_transaction(trans, -EUCLEAN); ··· 752 751 * Also We don't care about the error, as it's handled internally. 753 752 */ 754 753 btrfs_qgroup_trace_subtree_after_cow(trans, root, buf); 755 - ret = btrfs_force_cow_block(trans, root, buf, parent, parent_slot, 756 - cow_ret, search_start, 0, nest); 757 - 758 - trace_btrfs_cow_block(root, buf, *cow_ret); 759 - 760 - return ret; 754 + return btrfs_force_cow_block(trans, root, buf, parent, parent_slot, 755 + cow_ret, search_start, 0, nest); 761 756 } 762 757 ALLOW_ERROR_INJECTION(btrfs_cow_block, ERRNO); 763 758
+19
fs/btrfs/ctree.h
··· 371 371 } 372 372 373 373 /* 374 + * Return the generation this root started with. 375 + * 376 + * Every normal root that is created with root->root_key.offset set to it's 377 + * originating generation. If it is a snapshot it is the generation when the 378 + * snapshot was created. 379 + * 380 + * However for TREE_RELOC roots root_key.offset is the objectid of the owning 381 + * tree root. Thankfully we copy the root item of the owning tree root, which 382 + * has it's last_snapshot set to what we would have root_key.offset set to, so 383 + * return that if this is a TREE_RELOC root. 384 + */ 385 + static inline u64 btrfs_root_origin_generation(const struct btrfs_root *root) 386 + { 387 + if (btrfs_root_id(root) == BTRFS_TREE_RELOC_OBJECTID) 388 + return btrfs_root_last_snapshot(&root->root_item); 389 + return root->root_key.offset; 390 + } 391 + 392 + /* 374 393 * Structure that conveys information about an extent that is going to replace 375 394 * all the extents in a file range. 376 395 */
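The new `btrfs_root_origin_generation()` helper encodes a subtle distinction: for ordinary roots `root_key.offset` is the originating generation, but for TREE_RELOC roots that field holds the owning tree's objectid, so the copied `last_snapshot` is used instead. A flattened userspace model of the two cases (struct layout is simplified; the objectid constant matches btrfs's `-8ULL`):

```c
#include <assert.h>
#include <stdint.h>

#define BTRFS_TREE_RELOC_OBJECTID 0xFFFFFFFFFFFFFFF8ULL	/* -8, as on disk */

/* Flattened stand-in for struct btrfs_root: just the three fields the
 * helper consults. */
struct root {
	uint64_t objectid;	/* root_key.objectid */
	uint64_t offset;	/* root_key.offset */
	uint64_t last_snapshot;	/* root_item.last_snapshot */
};

/* For normal roots the origin generation is root_key.offset; for
 * TREE_RELOC roots, whose offset is an objectid, fall back to the
 * copied last_snapshot value. */
static uint64_t root_origin_generation(const struct root *r)
{
	if (r->objectid == BTRFS_TREE_RELOC_OBJECTID)
		return r->last_snapshot;
	return r->offset;
}
```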
+3 -3
fs/btrfs/extent-tree.c
··· 5285 5285 * reference to it. 5286 5286 */ 5287 5287 generation = btrfs_node_ptr_generation(eb, slot); 5288 - if (!wc->update_ref || generation <= root->root_key.offset) 5288 + if (!wc->update_ref || generation <= btrfs_root_origin_generation(root)) 5289 5289 return false; 5290 5290 5291 5291 /* ··· 5340 5340 goto reada; 5341 5341 5342 5342 if (wc->stage == UPDATE_BACKREF && 5343 - generation <= root->root_key.offset) 5343 + generation <= btrfs_root_origin_generation(root)) 5344 5344 continue; 5345 5345 5346 5346 /* We don't lock the tree block, it's OK to be racy here */ ··· 5683 5683 * for the subtree 5684 5684 */ 5685 5685 if (wc->stage == UPDATE_BACKREF && 5686 - generation <= root->root_key.offset) { 5686 + generation <= btrfs_root_origin_generation(root)) { 5687 5687 wc->lookup_info = 1; 5688 5688 return 1; 5689 5689 }
+110 -44
fs/btrfs/inode.c
··· 9078 9078 } 9079 9079 9080 9080 struct btrfs_encoded_read_private { 9081 - wait_queue_head_t wait; 9081 + struct completion done; 9082 9082 void *uring_ctx; 9083 - atomic_t pending; 9083 + refcount_t pending_refs; 9084 9084 blk_status_t status; 9085 9085 }; 9086 9086 ··· 9099 9099 */ 9100 9100 WRITE_ONCE(priv->status, bbio->bio.bi_status); 9101 9101 } 9102 - if (atomic_dec_and_test(&priv->pending)) { 9102 + if (refcount_dec_and_test(&priv->pending_refs)) { 9103 9103 int err = blk_status_to_errno(READ_ONCE(priv->status)); 9104 9104 9105 9105 if (priv->uring_ctx) { 9106 9106 btrfs_uring_read_extent_endio(priv->uring_ctx, err); 9107 9107 kfree(priv); 9108 9108 } else { 9109 - wake_up(&priv->wait); 9109 + complete(&priv->done); 9110 9110 } 9111 9111 } 9112 9112 bio_put(&bbio->bio); ··· 9126 9126 if (!priv) 9127 9127 return -ENOMEM; 9128 9128 9129 - init_waitqueue_head(&priv->wait); 9130 - atomic_set(&priv->pending, 1); 9129 + init_completion(&priv->done); 9130 + refcount_set(&priv->pending_refs, 1); 9131 9131 priv->status = 0; 9132 9132 priv->uring_ctx = uring_ctx; 9133 9133 ··· 9140 9140 size_t bytes = min_t(u64, disk_io_size, PAGE_SIZE); 9141 9141 9142 9142 if (bio_add_page(&bbio->bio, pages[i], bytes, 0) < bytes) { 9143 - atomic_inc(&priv->pending); 9143 + refcount_inc(&priv->pending_refs); 9144 9144 btrfs_submit_bbio(bbio, 0); 9145 9145 9146 9146 bbio = btrfs_bio_alloc(BIO_MAX_VECS, REQ_OP_READ, fs_info, ··· 9155 9155 disk_io_size -= bytes; 9156 9156 } while (disk_io_size); 9157 9157 9158 - atomic_inc(&priv->pending); 9158 + refcount_inc(&priv->pending_refs); 9159 9159 btrfs_submit_bbio(bbio, 0); 9160 9160 9161 9161 if (uring_ctx) { 9162 - if (atomic_dec_return(&priv->pending) == 0) { 9162 + if (refcount_dec_and_test(&priv->pending_refs)) { 9163 9163 ret = blk_status_to_errno(READ_ONCE(priv->status)); 9164 9164 btrfs_uring_read_extent_endio(uring_ctx, ret); 9165 9165 kfree(priv); ··· 9168 9168 9169 9169 return -EIOCBQUEUED; 9170 9170 } else { 9171 - if (atomic_dec_return(&priv->pending) != 0) 9172 - io_wait_event(priv->wait, !atomic_read(&priv->pending)); 9171 + if (!refcount_dec_and_test(&priv->pending_refs)) 9172 + wait_for_completion_io(&priv->done); 9173 9173 /* See btrfs_encoded_read_endio() for ordering. */ 9174 9174 ret = blk_status_to_errno(READ_ONCE(priv->status)); 9175 9175 kfree(priv); ··· 9799 9799 struct btrfs_fs_info *fs_info = root->fs_info; 9800 9800 struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree; 9801 9801 struct extent_state *cached_state = NULL; 9802 - struct extent_map *em = NULL; 9803 9802 struct btrfs_chunk_map *map = NULL; 9804 9803 struct btrfs_device *device = NULL; 9805 9804 struct btrfs_swap_info bsi = { 9806 9805 .lowest_ppage = (sector_t)-1ULL, 9807 9806 }; 9807 + struct btrfs_backref_share_check_ctx *backref_ctx = NULL; 9808 + struct btrfs_path *path = NULL; 9808 9809 int ret = 0; 9809 9810 u64 isize; 9810 - u64 start; 9811 + u64 prev_extent_end = 0; 9812 + 9813 + /* 9814 + * Acquire the inode's mmap lock to prevent races with memory mapped 9815 + * writes, as they could happen after we flush delalloc below and before 9816 + * we lock the extent range further below. The inode was already locked 9817 + * up in the call chain. 9818 + */ 9819 + btrfs_assert_inode_locked(BTRFS_I(inode)); 9820 + down_write(&BTRFS_I(inode)->i_mmap_lock); 9811 9821 9812 9822 /* 9813 9823 * If the swap file was just created, make sure delalloc is done. If the ··· 9826 9816 */ 9827 9817 ret = btrfs_wait_ordered_range(BTRFS_I(inode), 0, (u64)-1); 9828 9818 if (ret) 9829 - return ret; 9819 + goto out_unlock_mmap; 9830 9820 9831 9821 /* 9832 9822 * The inode is locked, so these flags won't change after we check them. 9833 9823 */ 9834 9824 if (BTRFS_I(inode)->flags & BTRFS_INODE_COMPRESS) { 9835 9825 btrfs_warn(fs_info, "swapfile must not be compressed"); 9836 - return -EINVAL; 9826 + ret = -EINVAL; 9827 + goto out_unlock_mmap; 9837 9828 } 9838 9829 if (!(BTRFS_I(inode)->flags & BTRFS_INODE_NODATACOW)) { 9839 9830 btrfs_warn(fs_info, "swapfile must not be copy-on-write"); 9840 - return -EINVAL; 9831 + ret = -EINVAL; 9832 + goto out_unlock_mmap; 9841 9833 } 9842 9834 if (!(BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM)) { 9843 9835 btrfs_warn(fs_info, "swapfile must not be checksummed"); 9844 - return -EINVAL; 9836 + ret = -EINVAL; 9837 + goto out_unlock_mmap; 9838 + } 9839 + 9840 + path = btrfs_alloc_path(); 9841 + backref_ctx = btrfs_alloc_backref_share_check_ctx(); 9842 + if (!path || !backref_ctx) { 9843 + ret = -ENOMEM; 9844 + goto out_unlock_mmap; 9845 9845 } 9846 9846 9847 9847 /* ··· 9866 9846 if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_SWAP_ACTIVATE)) { 9867 9847 btrfs_warn(fs_info, 9868 9848 "cannot activate swapfile while exclusive operation is running"); 9869 - return -EBUSY; 9849 + ret = -EBUSY; 9850 + goto out_unlock_mmap; 9870 9851 } 9871 9852 9872 9853 /* ··· 9881 9860 btrfs_exclop_finish(fs_info); 9882 9861 btrfs_warn(fs_info, 9883 9862 "cannot activate swapfile because snapshot creation is in progress"); 9884 - return -EINVAL; 9863 + ret = -EINVAL; 9864 + goto out_unlock_mmap; 9885 9865 } 9886 9866 /* 9887 9867 * Snapshots can create extents which require COW even if NODATACOW is ··· 9903 9881 btrfs_warn(fs_info, 9904 9882 "cannot activate swapfile because subvolume %llu is being deleted", 9905 9883 btrfs_root_id(root)); 9906 - return -EPERM; 9884 + ret = -EPERM; 9885 + goto out_unlock_mmap; 9907 9886 } 9908 9887 atomic_inc(&root->nr_swapfiles); 9909 9888 spin_unlock(&root->root_item_lock); ··· 9912 9889 isize = ALIGN_DOWN(inode->i_size, fs_info->sectorsize); 9913 9890 9914 9891 lock_extent(io_tree, 0, isize - 1, &cached_state); 9915 - start = 0; 9916 - while (start < isize) { 9917 - u64 logical_block_start, physical_block_start; 9892 + while (prev_extent_end < isize) { 9893 + struct btrfs_key key; 9894 + struct extent_buffer *leaf; 9895 + struct btrfs_file_extent_item *ei; 9918 9896 struct btrfs_block_group *bg; 9919 - u64 len = isize - start; 9897 + u64 logical_block_start; 9898 + u64 physical_block_start; 9899 + u64 extent_gen; 9900 + u64 disk_bytenr; 9901 + u64 len; 9920 9902 9921 - em = btrfs_get_extent(BTRFS_I(inode), NULL, start, len); 9922 - if (IS_ERR(em)) { 9923 - ret = PTR_ERR(em); 9903 + key.objectid = btrfs_ino(BTRFS_I(inode)); 9904 + key.type = BTRFS_EXTENT_DATA_KEY; 9905 + key.offset = prev_extent_end; 9906 + 9907 + ret = btrfs_search_slot(NULL, root, &key, path, 0, 0); 9908 + if (ret < 0) 9924 9909 goto out; 9925 9910 - } 9926 9911 9927 - if (em->disk_bytenr == EXTENT_MAP_HOLE) { 9911 + /* 9912 + * If key not found it means we have an implicit hole (NO_HOLES 9913 + * is enabled). 9914 + */ 9915 + if (ret > 0) { 9928 9916 btrfs_warn(fs_info, "swapfile must not have holes"); 9929 9917 ret = -EINVAL; 9930 9918 goto out; 9931 9919 } 9932 - if (em->disk_bytenr == EXTENT_MAP_INLINE) { 9920 + 9921 + leaf = path->nodes[0]; 9922 + ei = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_file_extent_item); 9923 + 9924 + if (btrfs_file_extent_type(leaf, ei) == BTRFS_FILE_EXTENT_INLINE) { 9933 9925 /* 9934 9926 * It's unlikely we'll ever actually find ourselves 9935 9927 * here, as a file small enough to fit inline won't be ··· 9956 9918 ret = -EINVAL; 9957 9919 goto out; 9958 9920 } 9959 - if (extent_map_is_compressed(em)) { 9921 + 9922 + if (btrfs_file_extent_compression(leaf, ei) != BTRFS_COMPRESS_NONE) { 9960 9923 btrfs_warn(fs_info, "swapfile must not be compressed"); 9961 9924 ret = -EINVAL; 9962 9925 goto out; 9963 9926 } 9964 9927 9965 - logical_block_start = extent_map_block_start(em) + (start - em->start); 9966 - len = min(len, em->len - (start - em->start)); 9967 - free_extent_map(em); 9968 - em = NULL; 9928 + disk_bytenr = btrfs_file_extent_disk_bytenr(leaf, ei); 9929 + if (disk_bytenr == 0) { 9930 + btrfs_warn(fs_info, "swapfile must not have holes"); 9931 + ret = -EINVAL; 9932 + goto out; 9933 + } 9969 9934 9970 - ret = can_nocow_extent(inode, start, &len, NULL, false, true); 9935 + logical_block_start = disk_bytenr + btrfs_file_extent_offset(leaf, ei); 9936 + extent_gen = btrfs_file_extent_generation(leaf, ei); 9937 + prev_extent_end = btrfs_file_extent_end(path); 9938 + 9939 + if (prev_extent_end > isize) 9940 + len = isize - key.offset; 9941 + else 9942 + len = btrfs_file_extent_num_bytes(leaf, ei); 9943 + 9944 + backref_ctx->curr_leaf_bytenr = leaf->start; 9945 + 9946 + /* 9947 + * Don't need the path anymore, release to avoid deadlocks when 9948 + * calling btrfs_is_data_extent_shared() because when joining a 9949 + * transaction it can block waiting for the current one's commit 9950 + * which in turn may be trying to lock the same leaf to flush 9951 + * delayed items for example. 9952 + */ 9953 + btrfs_release_path(path); 9954 + 9955 + ret = btrfs_is_data_extent_shared(BTRFS_I(inode), disk_bytenr, 9956 + extent_gen, backref_ctx); 9971 9957 if (ret < 0) { 9972 9958 goto out; 9973 - } else if (ret) { 9974 - ret = 0; 9975 - } else { 9959 + } else if (ret > 0) { 9976 9960 btrfs_warn(fs_info, 9977 9961 "swapfile must not be copy-on-write"); 9978 9962 ret = -EINVAL; ··· 10029 9969 10030 9970 physical_block_start = (map->stripes[0].physical + 10031 9971 (logical_block_start - map->start)); 10032 - len = min(len, map->chunk_len - (logical_block_start - map->start)); 10033 9972 btrfs_free_chunk_map(map); 10034 9973 map = NULL; 10035 9974 ··· 10069 10010 if (ret) 10070 10011 goto out; 10071 10012 } 10072 - bsi.start = start; 10013 + bsi.start = key.offset; 10073 10014 bsi.block_start = physical_block_start; 10074 10015 bsi.block_len = len; 10075 10016 } 10076 10017 10077 - start += len; 10018 + if (fatal_signal_pending(current)) { 10019 + ret = -EINTR; 10020 + goto out; 10021 + } 10022 + 10023 + cond_resched(); 10078 10024 } 10079 10025 10080 10026 if (bsi.block_len) 10081 10027 ret = btrfs_add_swap_extent(sis, &bsi); 10082 10028 10083 10029 out: 10084 - if (!IS_ERR_OR_NULL(em)) 10085 - free_extent_map(em); 10086 10030 if (!IS_ERR_OR_NULL(map)) 10087 10031 btrfs_free_chunk_map(map); 10088 10032 ··· 10098 10036 10099 10037 btrfs_exclop_finish(fs_info); 10100 10038 10039 + out_unlock_mmap: 10040 + up_write(&BTRFS_I(inode)->i_mmap_lock); 10041 + btrfs_free_backref_share_ctx(backref_ctx); 10042 + btrfs_free_path(path); 10101 10043 if (ret) 10102 10044 return ret; 10103 10045
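The encoded-read change in fs/btrfs/inode.c replaces an open-coded atomic pending counter and waitqueue with `refcount_t` plus a `completion`: the submitter holds one reference, each submitted bio holds another, and whoever drops the count to zero signals completion. A userspace sketch of that pattern using C11 atomics (the kernel's completion is reduced to a boolean flag here, and the memory-ordering subtleties of `refcount_t` are glossed over):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch of the pending-reference pattern: start at 1 (the submitter's
 * reference), take one reference per submitted bio, and the final put
 * "completes" the read. */
struct encoded_read {
	atomic_int pending_refs;
	bool done;
};

static void encoded_read_init(struct encoded_read *r)
{
	atomic_init(&r->pending_refs, 1);
	r->done = false;
}

static void get_ref(struct encoded_read *r)
{
	atomic_fetch_add(&r->pending_refs, 1);
}

/* Returns true when this put was the final one; only then is it safe
 * to signal the waiter (or free the structure). */
static bool put_ref(struct encoded_read *r)
{
	if (atomic_fetch_sub(&r->pending_refs, 1) == 1) {
		r->done = true;
		return true;
	}
	return false;
}
```

The point of the `refcount_dec_and_test()` shape (as opposed to the old `atomic_dec_return() != 0` wait loop) is that exactly one path observes the transition to zero, so the wakeup and the final free cannot race.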
+68 -60
fs/btrfs/ioctl.c
··· 4878 4878 return ret; 4879 4879 } 4880 4880 4881 + struct btrfs_uring_encoded_data { 4882 + struct btrfs_ioctl_encoded_io_args args; 4883 + struct iovec iovstack[UIO_FASTIOV]; 4884 + struct iovec *iov; 4885 + struct iov_iter iter; 4886 + }; 4887 + 4881 4888 static int btrfs_uring_encoded_read(struct io_uring_cmd *cmd, unsigned int issue_flags) 4882 4889 { 4883 4890 size_t copy_end_kernel = offsetofend(struct btrfs_ioctl_encoded_io_args, flags); 4884 4891 size_t copy_end; 4885 - struct btrfs_ioctl_encoded_io_args args = { 0 }; 4886 4892 int ret; 4887 4893 u64 disk_bytenr, disk_io_size; 4888 4894 struct file *file; 4889 4895 struct btrfs_inode *inode; 4890 4896 struct btrfs_fs_info *fs_info; 4891 4897 struct extent_io_tree *io_tree; 4892 - struct iovec iovstack[UIO_FASTIOV]; 4893 - struct iovec *iov = iovstack; 4894 - struct iov_iter iter; 4895 4898 loff_t pos; 4896 4899 struct kiocb kiocb; 4897 4900 struct extent_state *cached_state = NULL; 4898 4901 u64 start, lockend; 4899 4902 void __user *sqe_addr; 4903 + struct btrfs_uring_encoded_data *data = io_uring_cmd_get_async_data(cmd)->op_data; 4900 4904 4901 4905 if (!capable(CAP_SYS_ADMIN)) { 4902 4906 ret = -EPERM; ··· 4914 4910 4915 4911 if (issue_flags & IO_URING_F_COMPAT) { 4916 4912 #if defined(CONFIG_64BIT) && defined(CONFIG_COMPAT) 4917 - struct btrfs_ioctl_encoded_io_args_32 args32; 4918 - 4919 4913 copy_end = offsetofend(struct btrfs_ioctl_encoded_io_args_32, flags); 4920 - if (copy_from_user(&args32, sqe_addr, copy_end)) { 4921 - ret = -EFAULT; 4922 - goto out_acct; 4923 - } 4924 - args.iov = compat_ptr(args32.iov); 4925 - args.iovcnt = args32.iovcnt; 4926 - args.offset = args32.offset; 4927 - args.flags = args32.flags; 4928 4914 #else 4929 4915 return -ENOTTY; 4930 4916 #endif 4931 4917 } else { 4932 4918 copy_end = copy_end_kernel; 4933 - if (copy_from_user(&args, sqe_addr, copy_end)) { 4934 - ret = -EFAULT; 4919 + } 4920 + 4921 + if (!data) { 4922 + data = kzalloc(sizeof(*data), GFP_NOFS); 4923 + if (!data) { 4924 + ret = -ENOMEM; 4935 4925 goto out_acct; 4926 + } 4927 + 4928 + io_uring_cmd_get_async_data(cmd)->op_data = data; 4929 + 4930 + if (issue_flags & IO_URING_F_COMPAT) { 4931 + #if defined(CONFIG_64BIT) && defined(CONFIG_COMPAT) 4932 + struct btrfs_ioctl_encoded_io_args_32 args32; 4933 + 4934 + if (copy_from_user(&args32, sqe_addr, copy_end)) { 4935 + ret = -EFAULT; 4936 + goto out_acct; 4937 + } 4938 + 4939 + data->args.iov = compat_ptr(args32.iov); 4940 + data->args.iovcnt = args32.iovcnt; 4941 + data->args.offset = args32.offset; 4942 + data->args.flags = args32.flags; 4943 + #endif 4944 + } else { 4945 + if (copy_from_user(&data->args, sqe_addr, copy_end)) { 4946 + ret = -EFAULT; 4947 + goto out_acct; 4948 + } 4949 + } 4950 + 4951 + if (data->args.flags != 0) { 4952 + ret = -EINVAL; 4953 + goto out_acct; 4954 + } 4955 + 4956 + data->iov = data->iovstack; 4957 + ret = import_iovec(ITER_DEST, data->args.iov, data->args.iovcnt, 4958 + ARRAY_SIZE(data->iovstack), &data->iov, 4959 + &data->iter); 4960 + if (ret < 0) 4961 + goto out_acct; 4962 + 4963 + if (iov_iter_count(&data->iter) == 0) { 4964 + ret = 0; 4965 + goto out_free; 4936 4966 } 4937 4967 } 4938 4968 4939 - if (args.flags != 0) 4940 - return -EINVAL; 4941 - 4942 - ret = import_iovec(ITER_DEST, args.iov, args.iovcnt, ARRAY_SIZE(iovstack), 4943 - &iov, &iter); 4944 - if (ret < 0) 4945 - goto out_acct; 4946 - 4947 - if (iov_iter_count(&iter) == 0) { 4948 - ret = 0; 4949 - goto out_free; 4950 - } 4951 - 4952 - pos = args.offset; 4953 - ret = rw_verify_area(READ, file, &pos, args.len); 4969 + pos = data->args.offset; 4970 + ret = rw_verify_area(READ, file, &pos, data->args.len); 4954 4971 if (ret < 0) 4955 4972 goto out_free; 4956 4973 ··· 4984 4959 start = ALIGN_DOWN(pos, fs_info->sectorsize); 4985 4960 lockend = start + BTRFS_MAX_UNCOMPRESSED - 1; 4986 4961 4987 - ret = btrfs_encoded_read(&kiocb, &iter, &args, &cached_state, 4962 + ret = btrfs_encoded_read(&kiocb, &data->iter, &data->args, &cached_state, 4988 4963 &disk_bytenr, &disk_io_size); 4989 4964 if (ret < 0 && ret != -EIOCBQUEUED) 4990 4965 goto out_free; 4991 4966 4992 4967 file_accessed(file); 4993 4968 4994 - if (copy_to_user(sqe_addr + copy_end, (const char *)&args + copy_end_kernel, 4995 - sizeof(args) - copy_end_kernel)) { 4969 + if (copy_to_user(sqe_addr + copy_end, 4970 + (const char *)&data->args + copy_end_kernel, 4971 + sizeof(data->args) - copy_end_kernel)) { 4996 4972 if (ret == -EIOCBQUEUED) { 4997 4973 unlock_extent(io_tree, start, lockend, &cached_state); 4998 4974 btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED); ··· 5003 4977 } 5004 4978 5005 4979 if (ret == -EIOCBQUEUED) { 5006 - u64 count; 5007 - 5008 - /* 5009 - * If we've optimized things by storing the iovecs on the stack, 5010 - * undo this. 5011 - */ 5012 - if (!iov) { 5013 - iov = kmalloc(sizeof(struct iovec) * args.iovcnt, GFP_NOFS); 5014 - if (!iov) { 5015 - unlock_extent(io_tree, start, lockend, &cached_state); 5016 - btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED); 5017 - ret = -ENOMEM; 5018 - goto out_acct; 5019 - } 5020 - 5021 - memcpy(iov, iovstack, sizeof(struct iovec) * args.iovcnt); 5022 - } 5023 - 5024 - count = min_t(u64, iov_iter_count(&iter), disk_io_size); 4980 + u64 count = min_t(u64, iov_iter_count(&data->iter), disk_io_size); 5025 4981 5026 4982 /* Match ioctl by not returning past EOF if uncompressed. */ 5027 - if (!args.compression) 5028 - count = min_t(u64, count, args.len); 4983 + if (!data->args.compression) 4984 + count = min_t(u64, count, data->args.len); 5029 4985 5030 - ret = btrfs_uring_read_extent(&kiocb, &iter, start, lockend, 5031 - cached_state, disk_bytenr, 5032 - disk_io_size, count, 5033 - args.compression, iov, cmd); 4986 + ret = btrfs_uring_read_extent(&kiocb, &data->iter, start, lockend, 4987 + cached_state, disk_bytenr, disk_io_size, 4988 + count, data->args.compression, 4989 + data->iov, cmd); 5034 4990 5035 4991 goto out_acct; 5036 4992 } 5037 4993 5038 4994 out_free: 5039 - kfree(iov); 4995 + kfree(data->iov); 5040 4996 5041 4997 out_acct: 5042 4998 if (ret > 0)
+1 -2
fs/btrfs/qgroup.c
··· 1121 1121 fs_info->qgroup_flags = BTRFS_QGROUP_STATUS_FLAG_ON; 1122 1122 if (simple) { 1123 1123 fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_SIMPLE_MODE; 1124 + btrfs_set_fs_incompat(fs_info, SIMPLE_QUOTA); 1124 1125 btrfs_set_qgroup_status_enable_gen(leaf, ptr, trans->transid); 1125 1126 } else { 1126 1127 fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT; ··· 1255 1254 spin_lock(&fs_info->qgroup_lock); 1256 1255 fs_info->quota_root = quota_root; 1257 1256 set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags); 1258 - if (simple) 1259 - btrfs_set_fs_incompat(fs_info, SIMPLE_QUOTA); 1260 1257 spin_unlock(&fs_info->qgroup_lock); 1261 1258 1262 1259 /* Skip rescan for simple qgroups. */
+6
fs/btrfs/relocation.c
··· 2902 2902 const bool use_rst = btrfs_need_stripe_tree_update(fs_info, rc->block_group->flags); 2903 2903 2904 2904 ASSERT(index <= last_index); 2905 + again: 2905 2906 folio = filemap_lock_folio(inode->i_mapping, index); 2906 2907 if (IS_ERR(folio)) { 2907 2908 ··· 2937 2936 if (!folio_test_uptodate(folio)) { 2938 2937 ret = -EIO; 2939 2938 goto release_folio; 2939 + } 2940 + if (folio->mapping != inode->i_mapping) { 2941 + folio_unlock(folio); 2942 + folio_put(folio); 2943 + goto again; 2940 2944 } 2941 2945 } 2942 2946
+4
fs/btrfs/scrub.c
··· 1541 1541 u64 extent_gen; 1542 1542 int ret; 1543 1543 1544 + if (unlikely(!extent_root)) { 1545 + btrfs_err(fs_info, "no valid extent root for scrub"); 1546 + return -EUCLEAN; 1547 + } 1544 1548 memset(stripe->sectors, 0, sizeof(struct scrub_sector_verification) * 1545 1549 stripe->nr_sectors); 1546 1550 scrub_stripe_reset_bitmaps(stripe);
+6
fs/btrfs/send.c
··· 5280 5280 unsigned cur_len = min_t(unsigned, len, 5281 5281 PAGE_SIZE - pg_offset); 5282 5282 5283 + again: 5283 5284 folio = filemap_lock_folio(mapping, index); 5284 5285 if (IS_ERR(folio)) { 5285 5286 page_cache_sync_readahead(mapping, ··· 5312 5311 folio_put(folio); 5313 5312 ret = -EIO; 5314 5313 break; 5314 + } 5315 + if (folio->mapping != mapping) { 5316 + folio_unlock(folio); 5317 + folio_put(folio); 5318 + goto again; 5315 5319 } 5316 5320 } 5317 5321
+3 -3
fs/btrfs/sysfs.c
··· 1118 1118 { 1119 1119 struct btrfs_fs_info *fs_info = to_fs_info(kobj); 1120 1120 1121 - return sysfs_emit(buf, "%u\n", fs_info->super_copy->nodesize); 1121 + return sysfs_emit(buf, "%u\n", fs_info->nodesize); 1122 1122 } 1123 1123 1124 1124 BTRFS_ATTR(, nodesize, btrfs_nodesize_show); ··· 1128 1128 { 1129 1129 struct btrfs_fs_info *fs_info = to_fs_info(kobj); 1130 1130 1131 - return sysfs_emit(buf, "%u\n", fs_info->super_copy->sectorsize); 1131 + return sysfs_emit(buf, "%u\n", fs_info->sectorsize); 1132 1132 } 1133 1133 1134 1134 BTRFS_ATTR(, sectorsize, btrfs_sectorsize_show); ··· 1180 1180 { 1181 1181 struct btrfs_fs_info *fs_info = to_fs_info(kobj); 1182 1182 1183 - return sysfs_emit(buf, "%u\n", fs_info->super_copy->sectorsize); 1183 + return sysfs_emit(buf, "%u\n", fs_info->sectorsize); 1184 1184 } 1185 1185 1186 1186 BTRFS_ATTR(, clone_alignment, btrfs_clone_alignment_show);
+26 -1
fs/btrfs/tree-checker.c
··· 1527 1527 dref_offset, fs_info->sectorsize); 1528 1528 return -EUCLEAN; 1529 1529 } 1530 + if (unlikely(btrfs_extent_data_ref_count(leaf, dref) == 0)) { 1531 + extent_err(leaf, slot, 1532 + "invalid data ref count, should have non-zero value"); 1533 + return -EUCLEAN; 1534 + } 1530 1535 inline_refs += btrfs_extent_data_ref_count(leaf, dref); 1531 1536 break; 1532 1537 /* Contains parent bytenr and ref count */ ··· 1542 1537 extent_err(leaf, slot, 1543 1538 "invalid data parent bytenr, have %llu expect aligned to %u", 1544 1539 inline_offset, fs_info->sectorsize); 1540 + return -EUCLEAN; 1541 + } 1542 + if (unlikely(btrfs_shared_data_ref_count(leaf, sref) == 0)) { 1543 + extent_err(leaf, slot, 1544 + "invalid shared data ref count, should have non-zero value"); 1545 1545 return -EUCLEAN; 1546 1546 } 1547 1547 inline_refs += btrfs_shared_data_ref_count(leaf, sref); ··· 1621 1611 { 1622 1612 u32 expect_item_size = 0; 1623 1613 1624 - if (key->type == BTRFS_SHARED_DATA_REF_KEY) 1614 + if (key->type == BTRFS_SHARED_DATA_REF_KEY) { 1615 + struct btrfs_shared_data_ref *sref; 1616 + 1617 + sref = btrfs_item_ptr(leaf, slot, struct btrfs_shared_data_ref); 1618 + if (unlikely(btrfs_shared_data_ref_count(leaf, sref) == 0)) { 1619 + extent_err(leaf, slot, 1620 + "invalid shared data backref count, should have non-zero value"); 1621 + return -EUCLEAN; 1622 + } 1623 + 1625 1624 expect_item_size = sizeof(struct btrfs_shared_data_ref); 1625 + } 1626 1626 1627 1627 if (unlikely(btrfs_item_size(leaf, slot) != expect_item_size)) { 1628 1628 generic_err(leaf, slot, ··· 1707 1687 extent_err(leaf, slot, 1708 1688 "invalid extent data backref offset, have %llu expect aligned to %u", 1709 1689 offset, leaf->fs_info->sectorsize); 1690 + return -EUCLEAN; 1691 + } 1692 + if (unlikely(btrfs_extent_data_ref_count(leaf, dref) == 0)) { 1693 + extent_err(leaf, slot, 1694 + "invalid extent data backref count, should have non-zero value"); 1710 1695 return -EUCLEAN; 1711 1696 } 1712 1697 }
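The tree-checker additions all reject backref items whose count field is zero: such an item claims to carry references but counts none, which indicates corruption. A minimal sketch of that validation (the error code mirrors the kernel's `-EUCLEAN`, "structure needs cleaning"):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#ifndef EUCLEAN
#define EUCLEAN 117	/* Linux "structure needs cleaning"; fallback for other libcs */
#endif

/* Reject a data/shared backref whose reference count is zero, as the
 * tree-checker patch now does for inline refs and standalone items. */
static int check_ref_count(uint32_t count)
{
	if (count == 0)
		return -EUCLEAN;	/* should have a non-zero value */
	return 0;
}
```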
+2 -2
fs/btrfs/zlib.c
··· 174 174 copy_page(workspace->buf + i * PAGE_SIZE, 175 175 data_in); 176 176 start += PAGE_SIZE; 177 - workspace->strm.avail_in = 178 - (in_buf_folios << PAGE_SHIFT); 179 177 } 180 178 workspace->strm.next_in = workspace->buf; 179 + workspace->strm.avail_in = min(bytes_left, 180 + in_buf_folios << PAGE_SHIFT); 181 181 } else { 182 182 unsigned int pg_off; 183 183 unsigned int cur_len;
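The zlib fix sets `strm.avail_in` from the bytes actually remaining rather than the full folio capacity: on the last chunk, `bytes_left` can be smaller than `in_buf_folios << PAGE_SHIFT`, and feeding the larger value would compress stale buffer contents past the end of the input. The clamp as arithmetic (4 KiB pages assumed):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12	/* 4 KiB pages assumed */

/* avail_in must be the data actually present, not the buffer capacity:
 * the smaller of the bytes left to compress and the copied-in folios. */
static uint32_t zlib_avail_in(uint64_t bytes_left, unsigned int in_buf_folios)
{
	uint64_t cap = (uint64_t)in_buf_folios << PAGE_SHIFT;

	return (uint32_t)(bytes_left < cap ? bytes_left : cap);
}
```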
+3 -2
fs/btrfs/zoned.c
··· 748 748 (u64)lim->max_segments << PAGE_SHIFT), 749 749 fs_info->sectorsize); 750 750 fs_info->fs_devices->chunk_alloc_policy = BTRFS_CHUNK_ALLOC_ZONED; 751 - if (fs_info->max_zone_append_size < fs_info->max_extent_size) 752 - fs_info->max_extent_size = fs_info->max_zone_append_size; 751 + 752 + fs_info->max_extent_size = min_not_zero(fs_info->max_extent_size, 753 + fs_info->max_zone_append_size); 753 754 754 755 /* 755 756 * Check mount options here, because we might change fs_info->zoned
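The zoned change switches to `min_not_zero()`, which picks the smaller of two limits while treating zero as "no limit", so a zero `max_zone_append_size` can no longer clobber `max_extent_size`. Re-implemented here as a plain function rather than the kernel's type-checking macro:

```c
#include <assert.h>
#include <stdint.h>

/* min_not_zero() semantics: return the smaller operand, except that a
 * zero operand means "unlimited" and is ignored (both zero yields 0). */
static uint64_t min_not_zero_u64(uint64_t a, uint64_t b)
{
	if (a == 0)
		return b;
	if (b == 0)
		return a;
	return a < b ? a : b;
}
```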
+7 -7
fs/cachefiles/daemon.c
··· 15 15 #include <linux/namei.h> 16 16 #include <linux/poll.h> 17 17 #include <linux/mount.h> 18 + #include <linux/security.h> 18 19 #include <linux/statfs.h> 19 20 #include <linux/ctype.h> 20 21 #include <linux/string.h> ··· 577 576 */ 578 577 static int cachefiles_daemon_secctx(struct cachefiles_cache *cache, char *args) 579 578 { 580 - char *secctx; 579 + int err; 581 580 582 581 _enter(",%s", args); 583 582 ··· 586 585 return -EINVAL; 587 586 } 588 587 589 - if (cache->secctx) { 588 + if (cache->have_secid) { 590 589 pr_err("Second security context specified\n"); 591 590 return -EINVAL; 592 591 } 593 592 594 - secctx = kstrdup(args, GFP_KERNEL); 595 - if (!secctx) 596 - return -ENOMEM; 593 + err = security_secctx_to_secid(args, strlen(args), &cache->secid); 594 + if (err) 595 + return err; 597 596 598 - cache->secctx = secctx; 597 + cache->have_secid = true; 599 598 return 0; 600 599 } 601 600 ··· 821 820 put_cred(cache->cache_cred); 822 821 823 822 kfree(cache->rootdirname); 824 - kfree(cache->secctx); 825 823 kfree(cache->tag); 826 824 827 825 _leave("");
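The cachefiles change resolves the LSM context string to a numeric secid at configuration time instead of storing the string for later, with a `have_secid` flag replacing the old "was the pointer set?" test so the set-once semantics survive. A userspace sketch of the daemon command (the resolver below is a hypothetical stand-in for `security_secctx_to_secid()`):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

struct cache {
	unsigned int secid;	/* resolved LSM security id */
	bool have_secid;	/* whether "secid" was set */
};

/* Hypothetical stand-in for security_secctx_to_secid(): pretend every
 * context string resolves to id 42. */
static int fake_secctx_to_secid(const char *ctx, unsigned int *secid)
{
	(void)ctx;
	*secid = 42;
	return 0;
}

/* Set-once "secctx" command: resolve immediately, reject a second
 * attempt, and only mark the id valid after resolution succeeds. */
static int daemon_secctx(struct cache *c, const char *args)
{
	int err;

	if (c->have_secid)
		return -EINVAL;	/* second security context specified */

	err = fake_secctx_to_secid(args, &c->secid);
	if (err)
		return err;

	c->have_secid = true;
	return 0;
}
```

Resolving at configuration time also means a bad context string fails the daemon command itself, rather than surfacing later when the cache credentials are built.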
+2 -1
fs/cachefiles/internal.h
··· 122 122 #define CACHEFILES_STATE_CHANGED 3 /* T if state changed (poll trigger) */ 123 123 #define CACHEFILES_ONDEMAND_MODE 4 /* T if in on-demand read mode */ 124 124 char *rootdirname; /* name of cache root directory */ 125 - char *secctx; /* LSM security context */ 126 125 char *tag; /* cache binding tag */ 127 126 refcount_t unbind_pincount;/* refcount to do daemon unbind */ 128 127 struct xarray reqs; /* xarray of pending on-demand requests */ ··· 129 130 struct xarray ondemand_ids; /* xarray for ondemand_id allocation */ 130 131 u32 ondemand_id_next; 131 132 u32 msg_id_next; 133 + u32 secid; /* LSM security id */ 134 + bool have_secid; /* whether "secid" was set */ 132 135 }; 133 136 134 137 static inline bool cachefiles_in_ondemand_mode(struct cachefiles_cache *cache)
+3 -3
fs/cachefiles/security.c
··· 18 18 struct cred *new; 19 19 int ret; 20 20 21 - _enter("{%s}", cache->secctx); 21 + _enter("{%u}", cache->have_secid ? cache->secid : 0); 22 22 23 23 new = prepare_kernel_cred(current); 24 24 if (!new) { ··· 26 26 goto error; 27 27 } 28 28 29 - if (cache->secctx) { 30 - ret = set_security_override_from_ctx(new, cache->secctx); 29 + if (cache->have_secid) { 30 + ret = set_security_override(new, cache->secid); 31 31 if (ret < 0) { 32 32 put_cred(new); 33 33 pr_err("Security denies permission to nominate security context: error %d\n",
+38 -39
fs/ceph/file.c
··· 1066 1066 if (ceph_inode_is_shutdown(inode)) 1067 1067 return -EIO; 1068 1068 1069 - if (!len) 1069 + if (!len || !i_size) 1070 1070 return 0; 1071 1071 /* 1072 1072 * flush any page cache pages in this range. this ··· 1086 1086 int num_pages; 1087 1087 size_t page_off; 1088 1088 bool more; 1089 - int idx; 1089 + int idx = 0; 1090 1090 size_t left; 1091 1091 struct ceph_osd_req_op *op; 1092 1092 u64 read_off = off; ··· 1116 1116 len = read_off + read_len - off; 1117 1117 more = len < iov_iter_count(to); 1118 1118 1119 + op = &req->r_ops[0]; 1120 + if (sparse) { 1121 + extent_cnt = __ceph_sparse_read_ext_count(inode, read_len); 1122 + ret = ceph_alloc_sparse_ext_map(op, extent_cnt); 1123 + if (ret) { 1124 + ceph_osdc_put_request(req); 1125 + break; 1126 + } 1127 + } 1128 + 1119 1129 num_pages = calc_pages_for(read_off, read_len); 1120 1130 page_off = offset_in_page(off); 1121 1131 pages = ceph_alloc_page_vector(num_pages, GFP_KERNEL); ··· 1137 1127 1138 1128 osd_req_op_extent_osd_data_pages(req, 0, pages, read_len, 1139 1129 offset_in_page(read_off), 1140 - false, false); 1141 - 1142 - op = &req->r_ops[0]; 1143 - if (sparse) { 1144 - extent_cnt = __ceph_sparse_read_ext_count(inode, read_len); 1145 - ret = ceph_alloc_sparse_ext_map(op, extent_cnt); 1146 - if (ret) { 1147 - ceph_osdc_put_request(req); 1148 - break; 1149 - } 1150 - } 1130 + false, true); 1151 1131 1152 1132 ceph_osdc_start_request(osdc, req); 1153 1133 ret = ceph_osdc_wait_request(osdc, req); ··· 1160 1160 else if (ret == -ENOENT) 1161 1161 ret = 0; 1162 1162 1163 - if (ret > 0 && IS_ENCRYPTED(inode)) { 1163 + if (ret < 0) { 1164 + ceph_osdc_put_request(req); 1165 + if (ret == -EBLOCKLISTED) 1166 + fsc->blocklisted = true; 1167 + break; 1168 + } 1169 + 1170 + if (IS_ENCRYPTED(inode)) { 1164 1171 int fret; 1165 1172 1166 1173 fret = ceph_fscrypt_decrypt_extents(inode, pages, ··· 1193 1186 ret = min_t(ssize_t, fret, len); 1194 1187 } 1195 1188 1196 - ceph_osdc_put_request(req); 1197 - 1198 1189 /* Short read but not EOF? Zero out the remainder. */ 1199 - if (ret >= 0 && ret < len && (off + ret < i_size)) { 1190 + if (ret < len && (off + ret < i_size)) { 1200 1191 int zlen = min(len - ret, i_size - off - ret); 1201 1192 int zoff = page_off + ret; 1202 1193 ··· 1204 1199 ret += zlen; 1205 1200 } 1206 1201 1207 - idx = 0; 1208 - if (ret <= 0) 1209 - left = 0; 1210 - else if (off + ret > i_size) 1211 - left = i_size - off; 1202 + if (off + ret > i_size) 1203 + left = (i_size > off) ? i_size - off : 0; 1212 1204 else 1213 1205 left = ret; 1206 + 1214 1207 while (left > 0) { 1215 1208 size_t plen, copied; 1216 1209 ··· 1224 1221 break; 1225 1222 } 1226 1223 } 1227 - ceph_release_page_vector(pages, num_pages); 1228 - 1229 - if (ret < 0) { 1230 - if (ret == -EBLOCKLISTED) 1231 - fsc->blocklisted = true; 1232 - break; 1233 - } 1225 + ceph_osdc_put_request(req); 1234 1226 1235 1227 if (off >= i_size || !more) 1236 1228 break; ··· 1551 1553 break; 1552 1554 } 1553 1555 1556 + op = &req->r_ops[0]; 1557 + if (!write && sparse) { 1558 + extent_cnt = __ceph_sparse_read_ext_count(inode, size); 1559 + ret = ceph_alloc_sparse_ext_map(op, extent_cnt); 1560 + if (ret) { 1561 + ceph_osdc_put_request(req); 1562 + break; 1563 + } 1564 + } 1565 + 1554 1566 len = iter_get_bvecs_alloc(iter, size, &bvecs, &num_pages); 1555 1567 if (len < 0) { 1556 1568 ceph_osdc_put_request(req); ··· 1569 1561 } 1570 1562 if (len != size) 1571 1563 osd_req_op_extent_update(req, 0, len); 1564 + 1565 + osd_req_op_extent_osd_data_bvecs(req, 0, bvecs, num_pages, len); 1572 1566 1573 1567 /* 1574 1568 * To simplify error handling, allow AIO when IO within i_size ··· 1601 1591 PAGE_ALIGN(pos + len) - 1); 1602 1592 1603 1593 req->r_mtime = mtime; 1604 - } 1605 - 1606 - osd_req_op_extent_osd_data_bvecs(req, 0, bvecs, num_pages, len); 1607 - op = &req->r_ops[0]; 1608 - if (sparse) { 1609 - extent_cnt = __ceph_sparse_read_ext_count(inode, size); 1610 - ret = ceph_alloc_sparse_ext_map(op, extent_cnt); 1611 -
if (ret) { 1612 - ceph_osdc_put_request(req); 1613 - break; 1614 - } 1615 1594 } 1616 1595 1617 1596 if (aio_req) {
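The fs/ceph/file.c hunk above zero-fills the tail of a short read that did not reach EOF, using `zlen = min(len - ret, i_size - off - ret)`. The length arithmetic can be sketched standalone; the helper name and signature below are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper mirroring the ceph short-read logic: a read of a
 * `len`-byte request at file offset `off` returned `ret` bytes; the file
 * is `i_size` bytes long.  How many trailing bytes must be zero-filled so
 * the caller sees valid data up to EOF, but never past it? */
static inline int64_t zero_fill_len(int64_t ret, int64_t len,
                                    int64_t off, int64_t i_size)
{
    if (ret >= len || off + ret >= i_size)
        return 0;                        /* full read, or short read at EOF */

    int64_t zlen = len - ret;            /* bytes the request is missing */
    int64_t to_eof = i_size - off - ret; /* bytes remaining before EOF */
    return zlen < to_eof ? zlen : to_eof;
}
```

The two bounds matter: zeroing more than `len - ret` would overrun the request, and zeroing past `i_size - off - ret` would report data beyond EOF.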
+4 -5
fs/ceph/mds_client.c
··· 2800 2800 2801 2801 if (pos < 0) { 2802 2802 /* 2803 - * A rename didn't occur, but somehow we didn't end up where 2804 - * we thought we would. Throw a warning and try again. 2803 + * The path is longer than PATH_MAX and this function 2804 + * cannot ever succeed. Creating paths that long is 2805 + * possible with Ceph, but Linux cannot use them. 2805 2806 */ 2806 - pr_warn_client(cl, "did not end path lookup where expected (pos = %d)\n", 2807 - pos); 2808 - goto retry; 2807 + return ERR_PTR(-ENAMETOOLONG); 2809 2808 } 2810 2809 2811 2810 *pbase = base;
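The fs/ceph/mds_client.c change replaces a retry with `-ENAMETOOLONG` when `pos` underflows: the path is assembled leaf-to-root into a fixed buffer, so a negative write position means the path can never fit and retrying cannot help. A minimal sketch of that reverse-prepend pattern, with a small stand-in buffer size (names here are illustrative):

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

#define BUFMAX 16 /* stand-in for PATH_MAX in this sketch */

/* Components are prepended from leaf to root into buf; on success *pbase
 * is the index where the finished path starts.  A negative position
 * means the path exceeds the buffer: fail with -ENAMETOOLONG. */
static int build_path_rev(const char *const comps[], int ncomps,
                          char buf[BUFMAX], int *pbase)
{
    int pos = BUFMAX - 1;

    buf[pos] = '\0';
    for (int i = 0; i < ncomps; i++) {
        int len = (int)strlen(comps[i]);

        pos -= len + 1;            /* room for '/' plus the component */
        if (pos < 0)
            return -ENAMETOOLONG;  /* cannot ever fit; do not retry */
        buf[pos] = '/';
        memcpy(buf + pos + 1, comps[i], len);
    }
    *pbase = pos;
    return 0;
}

/* Convenience wrapper returning only the status. */
static int path_fits(const char *const comps[], int ncomps)
{
    char buf[BUFMAX];
    int base;

    return build_path_rev(comps, ncomps, buf, &base);
}
```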
+2
fs/ceph/super.c
··· 431 431 432 432 switch (token) { 433 433 case Opt_snapdirname: 434 + if (strlen(param->string) > NAME_MAX) 435 + return invalfc(fc, "snapdirname too long"); 434 436 kfree(fsopt->snapdir_name); 435 437 fsopt->snapdir_name = param->string; 436 438 param->string = NULL;
+51 -23
fs/debugfs/file.c
··· 64 64 } 65 65 EXPORT_SYMBOL_GPL(debugfs_real_fops); 66 66 67 - /** 68 - * debugfs_file_get - mark the beginning of file data access 69 - * @dentry: the dentry object whose data is being accessed. 70 - * 71 - * Up to a matching call to debugfs_file_put(), any successive call 72 - * into the file removing functions debugfs_remove() and 73 - * debugfs_remove_recursive() will block. Since associated private 74 - * file data may only get freed after a successful return of any of 75 - * the removal functions, you may safely access it after a successful 76 - * call to debugfs_file_get() without worrying about lifetime issues. 77 - * 78 - * If -%EIO is returned, the file has already been removed and thus, 79 - * it is not safe to access any of its data. If, on the other hand, 80 - * it is allowed to access the file data, zero is returned. 81 - */ 82 - int debugfs_file_get(struct dentry *dentry) 67 + enum dbgfs_get_mode { 68 + DBGFS_GET_ALREADY, 69 + DBGFS_GET_REGULAR, 70 + DBGFS_GET_SHORT, 71 + }; 72 + 73 + static int __debugfs_file_get(struct dentry *dentry, enum dbgfs_get_mode mode) 83 74 { 84 75 struct debugfs_fsdata *fsd; 85 76 void *d_fsd; ··· 87 96 if (!((unsigned long)d_fsd & DEBUGFS_FSDATA_IS_REAL_FOPS_BIT)) { 88 97 fsd = d_fsd; 89 98 } else { 99 + if (WARN_ON(mode == DBGFS_GET_ALREADY)) 100 + return -EINVAL; 101 + 90 102 fsd = kmalloc(sizeof(*fsd), GFP_KERNEL); 91 103 if (!fsd) 92 104 return -ENOMEM; 93 105 94 - if ((unsigned long)d_fsd & DEBUGFS_FSDATA_IS_SHORT_FOPS_BIT) { 106 + if (mode == DBGFS_GET_SHORT) { 95 107 fsd->real_fops = NULL; 96 108 fsd->short_fops = (void *)((unsigned long)d_fsd & 97 - ~(DEBUGFS_FSDATA_IS_REAL_FOPS_BIT | 98 - DEBUGFS_FSDATA_IS_SHORT_FOPS_BIT)); 109 + ~DEBUGFS_FSDATA_IS_REAL_FOPS_BIT); 99 110 } else { 100 111 fsd->real_fops = (void *)((unsigned long)d_fsd & 101 112 ~DEBUGFS_FSDATA_IS_REAL_FOPS_BIT); ··· 130 137 return -EIO; 131 138 132 139 return 0; 140 + } 141 + 142 + /** 143 + * debugfs_file_get - mark the beginning of file 
data access 144 + * @dentry: the dentry object whose data is being accessed. 145 + * 146 + * Up to a matching call to debugfs_file_put(), any successive call 147 + * into the file removing functions debugfs_remove() and 148 + * debugfs_remove_recursive() will block. Since associated private 149 + * file data may only get freed after a successful return of any of 150 + * the removal functions, you may safely access it after a successful 151 + * call to debugfs_file_get() without worrying about lifetime issues. 152 + * 153 + * If -%EIO is returned, the file has already been removed and thus, 154 + * it is not safe to access any of its data. If, on the other hand, 155 + * it is allowed to access the file data, zero is returned. 156 + */ 157 + int debugfs_file_get(struct dentry *dentry) 158 + { 159 + return __debugfs_file_get(dentry, DBGFS_GET_ALREADY); 133 160 } 134 161 EXPORT_SYMBOL_GPL(debugfs_file_get); 135 162 ··· 280 267 const struct file_operations *real_fops = NULL; 281 268 int r; 282 269 283 - r = debugfs_file_get(dentry); 270 + r = __debugfs_file_get(dentry, DBGFS_GET_REGULAR); 284 271 if (r) 285 272 return r == -EIO ? -ENOENT : r; 286 273 ··· 437 424 proxy_fops->unlocked_ioctl = full_proxy_unlocked_ioctl; 438 425 } 439 426 440 - static int full_proxy_open(struct inode *inode, struct file *filp) 427 + static int full_proxy_open(struct inode *inode, struct file *filp, 428 + enum dbgfs_get_mode mode) 441 429 { 442 430 struct dentry *dentry = F_DENTRY(filp); 443 431 const struct file_operations *real_fops; ··· 446 432 struct debugfs_fsdata *fsd; 447 433 int r; 448 434 449 - r = debugfs_file_get(dentry); 435 + r = __debugfs_file_get(dentry, mode); 450 436 if (r) 451 437 return r == -EIO ? 
-ENOENT : r; 452 438 ··· 505 491 return r; 506 492 } 507 493 494 + static int full_proxy_open_regular(struct inode *inode, struct file *filp) 495 + { 496 + return full_proxy_open(inode, filp, DBGFS_GET_REGULAR); 497 + } 498 + 508 499 const struct file_operations debugfs_full_proxy_file_operations = { 509 - .open = full_proxy_open, 500 + .open = full_proxy_open_regular, 501 + }; 502 + 503 + static int full_proxy_open_short(struct inode *inode, struct file *filp) 504 + { 505 + return full_proxy_open(inode, filp, DBGFS_GET_SHORT); 506 + } 507 + 508 + const struct file_operations debugfs_full_short_proxy_file_operations = { 509 + .open = full_proxy_open_short, 510 510 }; 511 511 512 512 ssize_t debugfs_attr_read(struct file *file, char __user *buf,
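The debugfs rework above drops `DEBUGFS_FSDATA_IS_SHORT_FOPS_BIT` by threading the short/regular distinction through dedicated `open` callbacks, leaving only one flag bit in `d_fsdata`. The underlying trick — stashing a flag in the low bit of a word-aligned pointer — can be sketched in isolation (macro and helper names below are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Because the pointed-to objects are at least word-aligned, their
 * addresses always have a zero low bit, which is therefore free to
 * carry a flag ("this d_fsdata value is a real fops pointer"). */
#define IS_REAL_FOPS_BIT 0x1UL

static inline void *tag_fops(const void *fops)
{
    return (void *)((uintptr_t)fops | IS_REAL_FOPS_BIT);
}

static inline int is_tagged(const void *d_fsd)
{
    return ((uintptr_t)d_fsd & IS_REAL_FOPS_BIT) != 0;
}

static inline void *untag(const void *d_fsd)
{
    return (void *)((uintptr_t)d_fsd & ~IS_REAL_FOPS_BIT);
}
```

With only one bit available, a second orthogonal property (short vs. full fops) no longer fits in the pointer, which is exactly why the patch moves that information into separate `full_proxy_open_regular`/`full_proxy_open_short` entry points instead.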
+5 -8
fs/debugfs/inode.c
··· 229 229 return; 230 230 231 231 /* check it wasn't a dir (no fsdata) or automount (no real_fops) */ 232 - if (fsd && fsd->real_fops) { 232 + if (fsd && (fsd->real_fops || fsd->short_fops)) { 233 233 WARN_ON(!list_empty(&fsd->cancellations)); 234 234 mutex_destroy(&fsd->cancellations_mtx); 235 235 } ··· 455 455 const struct file_operations *fops) 456 456 { 457 457 if (WARN_ON((unsigned long)fops & 458 - (DEBUGFS_FSDATA_IS_SHORT_FOPS_BIT | 459 - DEBUGFS_FSDATA_IS_REAL_FOPS_BIT))) 458 + DEBUGFS_FSDATA_IS_REAL_FOPS_BIT)) 460 459 return ERR_PTR(-EINVAL); 461 460 462 461 return __debugfs_create_file(name, mode, parent, data, ··· 470 471 const struct debugfs_short_fops *fops) 471 472 { 472 473 if (WARN_ON((unsigned long)fops & 473 - (DEBUGFS_FSDATA_IS_SHORT_FOPS_BIT | 474 - DEBUGFS_FSDATA_IS_REAL_FOPS_BIT))) 474 + DEBUGFS_FSDATA_IS_REAL_FOPS_BIT)) 475 475 return ERR_PTR(-EINVAL); 476 476 477 477 return __debugfs_create_file(name, mode, parent, data, 478 - fops ? &debugfs_full_proxy_file_operations : 478 + fops ? &debugfs_full_short_proxy_file_operations : 479 479 &debugfs_noop_file_operations, 480 - (const void *)((unsigned long)fops | 481 - DEBUGFS_FSDATA_IS_SHORT_FOPS_BIT)); 480 + fops); 482 481 } 483 482 EXPORT_SYMBOL_GPL(debugfs_create_file_short); 484 483
+1 -5
fs/debugfs/internal.h
··· 15 15 extern const struct file_operations debugfs_noop_file_operations; 16 16 extern const struct file_operations debugfs_open_proxy_file_operations; 17 17 extern const struct file_operations debugfs_full_proxy_file_operations; 18 + extern const struct file_operations debugfs_full_short_proxy_file_operations; 18 19 19 20 struct debugfs_fsdata { 20 21 const struct file_operations *real_fops; ··· 41 40 * pointer gets its lowest bit set. 42 41 */ 43 42 #define DEBUGFS_FSDATA_IS_REAL_FOPS_BIT BIT(0) 44 - /* 45 - * A dentry's ->d_fsdata, when pointing to real fops, is with 46 - * short fops instead of full fops. 47 - */ 48 - #define DEBUGFS_FSDATA_IS_SHORT_FOPS_BIT BIT(1) 49 43 50 44 /* Access BITS */ 51 45 #define DEBUGFS_ALLOW_API BIT(0)
+13 -23
fs/erofs/data.c
··· 56 56 57 57 buf->file = NULL; 58 58 if (erofs_is_fileio_mode(sbi)) { 59 - buf->file = sbi->fdev; /* some fs like FUSE needs it */ 59 + buf->file = sbi->dif0.file; /* some fs like FUSE needs it */ 60 60 buf->mapping = buf->file->f_mapping; 61 61 } else if (erofs_is_fscache_mode(sb)) 62 - buf->mapping = sbi->s_fscache->inode->i_mapping; 62 + buf->mapping = sbi->dif0.fscache->inode->i_mapping; 63 63 else 64 64 buf->mapping = sb->s_bdev->bd_mapping; 65 65 } ··· 179 179 } 180 180 181 181 static void erofs_fill_from_devinfo(struct erofs_map_dev *map, 182 - struct erofs_device_info *dif) 182 + struct super_block *sb, struct erofs_device_info *dif) 183 183 { 184 + map->m_sb = sb; 185 + map->m_dif = dif; 184 186 map->m_bdev = NULL; 185 - map->m_fp = NULL; 186 - if (dif->file) { 187 - if (S_ISBLK(file_inode(dif->file)->i_mode)) 188 - map->m_bdev = file_bdev(dif->file); 189 - else 190 - map->m_fp = dif->file; 191 - } 192 - map->m_daxdev = dif->dax_dev; 193 - map->m_dax_part_off = dif->dax_part_off; 194 - map->m_fscache = dif->fscache; 187 + if (dif->file && S_ISBLK(file_inode(dif->file)->i_mode)) 188 + map->m_bdev = file_bdev(dif->file); 195 189 } 196 190 197 191 int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map) ··· 195 201 erofs_off_t startoff, length; 196 202 int id; 197 203 198 - map->m_bdev = sb->s_bdev; 199 - map->m_daxdev = EROFS_SB(sb)->dax_dev; 200 - map->m_dax_part_off = EROFS_SB(sb)->dax_part_off; 201 - map->m_fscache = EROFS_SB(sb)->s_fscache; 202 - map->m_fp = EROFS_SB(sb)->fdev; 203 - 204 + erofs_fill_from_devinfo(map, sb, &EROFS_SB(sb)->dif0); 205 + map->m_bdev = sb->s_bdev; /* use s_bdev for the primary device */ 204 206 if (map->m_deviceid) { 205 207 down_read(&devs->rwsem); 206 208 dif = idr_find(&devs->tree, map->m_deviceid - 1); ··· 209 219 up_read(&devs->rwsem); 210 220 return 0; 211 221 } 212 - erofs_fill_from_devinfo(map, dif); 222 + erofs_fill_from_devinfo(map, sb, dif); 213 223 up_read(&devs->rwsem); 214 224 } else if 
(devs->extra_devices && !devs->flatdev) { 215 225 down_read(&devs->rwsem); ··· 222 232 if (map->m_pa >= startoff && 223 233 map->m_pa < startoff + length) { 224 234 map->m_pa -= startoff; 225 - erofs_fill_from_devinfo(map, dif); 235 + erofs_fill_from_devinfo(map, sb, dif); 226 236 break; 227 237 } 228 238 } ··· 292 302 293 303 iomap->offset = map.m_la; 294 304 if (flags & IOMAP_DAX) 295 - iomap->dax_dev = mdev.m_daxdev; 305 + iomap->dax_dev = mdev.m_dif->dax_dev; 296 306 else 297 307 iomap->bdev = mdev.m_bdev; 298 308 iomap->length = map.m_llen; ··· 321 331 iomap->type = IOMAP_MAPPED; 322 332 iomap->addr = mdev.m_pa; 323 333 if (flags & IOMAP_DAX) 324 - iomap->addr += mdev.m_dax_part_off; 334 + iomap->addr += mdev.m_dif->dax_part_off; 325 335 } 326 336 return 0; 327 337 }
+6 -3
fs/erofs/fileio.c
··· 9 9 struct bio_vec bvecs[BIO_MAX_VECS]; 10 10 struct bio bio; 11 11 struct kiocb iocb; 12 + struct super_block *sb; 12 13 }; 13 14 14 15 struct erofs_fileio { ··· 53 52 rq->iocb.ki_pos = rq->bio.bi_iter.bi_sector << SECTOR_SHIFT; 54 53 rq->iocb.ki_ioprio = get_current_ioprio(); 55 54 rq->iocb.ki_complete = erofs_fileio_ki_complete; 56 - rq->iocb.ki_flags = (rq->iocb.ki_filp->f_mode & FMODE_CAN_ODIRECT) ? 57 - IOCB_DIRECT : 0; 55 + if (test_opt(&EROFS_SB(rq->sb)->opt, DIRECT_IO) && 56 + rq->iocb.ki_filp->f_mode & FMODE_CAN_ODIRECT) 57 + rq->iocb.ki_flags = IOCB_DIRECT; 58 58 iov_iter_bvec(&iter, ITER_DEST, rq->bvecs, rq->bio.bi_vcnt, 59 59 rq->bio.bi_iter.bi_size); 60 60 ret = vfs_iocb_iter_read(rq->iocb.ki_filp, &rq->iocb, &iter); ··· 69 67 GFP_KERNEL | __GFP_NOFAIL); 70 68 71 69 bio_init(&rq->bio, NULL, rq->bvecs, BIO_MAX_VECS, REQ_OP_READ); 72 - rq->iocb.ki_filp = mdev->m_fp; 70 + rq->iocb.ki_filp = mdev->m_dif->file; 71 + rq->sb = mdev->m_sb; 73 72 return rq; 74 73 } 75 74
+5 -5
fs/erofs/fscache.c
··· 198 198 199 199 io = kmalloc(sizeof(*io), GFP_KERNEL | __GFP_NOFAIL); 200 200 bio_init(&io->bio, NULL, io->bvecs, BIO_MAX_VECS, REQ_OP_READ); 201 - io->io.private = mdev->m_fscache->cookie; 201 + io->io.private = mdev->m_dif->fscache->cookie; 202 202 io->io.end_io = erofs_fscache_bio_endio; 203 203 refcount_set(&io->io.ref, 1); 204 204 return &io->bio; ··· 316 316 if (!io) 317 317 return -ENOMEM; 318 318 iov_iter_xarray(&io->iter, ITER_DEST, &mapping->i_pages, pos, count); 319 - ret = erofs_fscache_read_io_async(mdev.m_fscache->cookie, 319 + ret = erofs_fscache_read_io_async(mdev.m_dif->fscache->cookie, 320 320 mdev.m_pa + (pos - map.m_la), io); 321 321 erofs_fscache_req_io_put(io); 322 322 ··· 657 657 if (IS_ERR(fscache)) 658 658 return PTR_ERR(fscache); 659 659 660 - sbi->s_fscache = fscache; 660 + sbi->dif0.fscache = fscache; 661 661 return 0; 662 662 } 663 663 ··· 665 665 { 666 666 struct erofs_sb_info *sbi = EROFS_SB(sb); 667 667 668 - erofs_fscache_unregister_cookie(sbi->s_fscache); 668 + erofs_fscache_unregister_cookie(sbi->dif0.fscache); 669 669 670 670 if (sbi->domain) 671 671 erofs_fscache_domain_put(sbi->domain); 672 672 else 673 673 fscache_relinquish_volume(sbi->volume, NULL, false); 674 674 675 - sbi->s_fscache = NULL; 675 + sbi->dif0.fscache = NULL; 676 676 sbi->volume = NULL; 677 677 sbi->domain = NULL; 678 678 }
+5 -10
fs/erofs/internal.h
··· 107 107 }; 108 108 109 109 struct erofs_sb_info { 110 + struct erofs_device_info dif0; 110 111 struct erofs_mount_opts opt; /* options */ 111 112 #ifdef CONFIG_EROFS_FS_ZIP 112 113 /* list for all registered superblocks, mainly for shrinker */ ··· 125 124 126 125 struct erofs_sb_lz4_info lz4; 127 126 #endif /* CONFIG_EROFS_FS_ZIP */ 128 - struct file *fdev; 129 127 struct inode *packed_inode; 130 128 struct erofs_dev_context *devs; 131 - struct dax_device *dax_dev; 132 - u64 dax_part_off; 133 129 u64 total_blocks; 134 - u32 primarydevice_blocks; 135 130 136 131 u32 meta_blkaddr; 137 132 #ifdef CONFIG_EROFS_FS_XATTR ··· 163 166 164 167 /* fscache support */ 165 168 struct fscache_volume *volume; 166 - struct erofs_fscache *s_fscache; 167 169 struct erofs_domain *domain; 168 170 char *fsid; 169 171 char *domain_id; ··· 176 180 #define EROFS_MOUNT_POSIX_ACL 0x00000020 177 181 #define EROFS_MOUNT_DAX_ALWAYS 0x00000040 178 182 #define EROFS_MOUNT_DAX_NEVER 0x00000080 183 + #define EROFS_MOUNT_DIRECT_IO 0x00000100 179 184 180 185 #define clear_opt(opt, option) ((opt)->mount_opt &= ~EROFS_MOUNT_##option) 181 186 #define set_opt(opt, option) ((opt)->mount_opt |= EROFS_MOUNT_##option) ··· 184 187 185 188 static inline bool erofs_is_fileio_mode(struct erofs_sb_info *sbi) 186 189 { 187 - return IS_ENABLED(CONFIG_EROFS_FS_BACKED_BY_FILE) && sbi->fdev; 190 + return IS_ENABLED(CONFIG_EROFS_FS_BACKED_BY_FILE) && sbi->dif0.file; 188 191 } 189 192 190 193 static inline bool erofs_is_fscache_mode(struct super_block *sb) ··· 354 357 }; 355 358 356 359 struct erofs_map_dev { 357 - struct erofs_fscache *m_fscache; 360 + struct super_block *m_sb; 361 + struct erofs_device_info *m_dif; 358 362 struct block_device *m_bdev; 359 - struct dax_device *m_daxdev; 360 - struct file *m_fp; 361 - u64 m_dax_part_off; 362 363 363 364 erofs_off_t m_pa; 364 365 unsigned int m_deviceid;
+44 -36
fs/erofs/super.c
··· 203 203 struct erofs_device_info *dif; 204 204 int id, err = 0; 205 205 206 - sbi->total_blocks = sbi->primarydevice_blocks; 206 + sbi->total_blocks = sbi->dif0.blocks; 207 207 if (!erofs_sb_has_device_table(sbi)) 208 208 ondisk_extradevs = 0; 209 209 else ··· 307 307 sbi->sb_size); 308 308 goto out; 309 309 } 310 - sbi->primarydevice_blocks = le32_to_cpu(dsb->blocks); 310 + sbi->dif0.blocks = le32_to_cpu(dsb->blocks); 311 311 sbi->meta_blkaddr = le32_to_cpu(dsb->meta_blkaddr); 312 312 #ifdef CONFIG_EROFS_FS_XATTR 313 313 sbi->xattr_blkaddr = le32_to_cpu(dsb->xattr_blkaddr); ··· 364 364 } 365 365 366 366 enum { 367 - Opt_user_xattr, 368 - Opt_acl, 369 - Opt_cache_strategy, 370 - Opt_dax, 371 - Opt_dax_enum, 372 - Opt_device, 373 - Opt_fsid, 374 - Opt_domain_id, 367 + Opt_user_xattr, Opt_acl, Opt_cache_strategy, Opt_dax, Opt_dax_enum, 368 + Opt_device, Opt_fsid, Opt_domain_id, Opt_directio, 375 369 Opt_err 376 370 }; 377 371 ··· 392 398 fsparam_string("device", Opt_device), 393 399 fsparam_string("fsid", Opt_fsid), 394 400 fsparam_string("domain_id", Opt_domain_id), 401 + fsparam_flag_no("directio", Opt_directio), 395 402 {} 396 403 }; 397 404 ··· 506 511 errorfc(fc, "%s option not supported", erofs_fs_parameters[opt].name); 507 512 break; 508 513 #endif 514 + case Opt_directio: 515 + #ifdef CONFIG_EROFS_FS_BACKED_BY_FILE 516 + if (result.boolean) 517 + set_opt(&sbi->opt, DIRECT_IO); 518 + else 519 + clear_opt(&sbi->opt, DIRECT_IO); 520 + #else 521 + errorfc(fc, "%s option not supported", erofs_fs_parameters[opt].name); 522 + #endif 523 + break; 509 524 default: 510 525 return -ENOPARAM; 511 526 } ··· 607 602 return -EINVAL; 608 603 } 609 604 610 - sbi->dax_dev = fs_dax_get_by_bdev(sb->s_bdev, 611 - &sbi->dax_part_off, 612 - NULL, NULL); 605 + sbi->dif0.dax_dev = fs_dax_get_by_bdev(sb->s_bdev, 606 + &sbi->dif0.dax_part_off, NULL, NULL); 613 607 } 614 608 615 609 err = erofs_read_superblock(sb); ··· 631 627 } 632 628 633 629 if (test_opt(&sbi->opt, DAX_ALWAYS)) { 
634 - if (!sbi->dax_dev) { 630 + if (!sbi->dif0.dax_dev) { 635 631 errorfc(fc, "DAX unsupported by block device. Turning off DAX."); 636 632 clear_opt(&sbi->opt, DAX_ALWAYS); 637 633 } else if (sbi->blkszbits != PAGE_SHIFT) { ··· 707 703 GET_TREE_BDEV_QUIET_LOOKUP : 0); 708 704 #ifdef CONFIG_EROFS_FS_BACKED_BY_FILE 709 705 if (ret == -ENOTBLK) { 706 + struct file *file; 707 + 710 708 if (!fc->source) 711 709 return invalf(fc, "No source specified"); 712 - sbi->fdev = filp_open(fc->source, O_RDONLY | O_LARGEFILE, 0); 713 - if (IS_ERR(sbi->fdev)) 714 - return PTR_ERR(sbi->fdev); 710 + file = filp_open(fc->source, O_RDONLY | O_LARGEFILE, 0); 711 + if (IS_ERR(file)) 712 + return PTR_ERR(file); 713 + sbi->dif0.file = file; 715 714 716 - if (S_ISREG(file_inode(sbi->fdev)->i_mode) && 717 - sbi->fdev->f_mapping->a_ops->read_folio) 715 + if (S_ISREG(file_inode(sbi->dif0.file)->i_mode) && 716 + sbi->dif0.file->f_mapping->a_ops->read_folio) 718 717 return get_tree_nodev(fc, erofs_fc_fill_super); 719 - fput(sbi->fdev); 720 718 } 721 719 #endif 722 720 return ret; ··· 769 763 kfree(devs); 770 764 } 771 765 766 + static void erofs_sb_free(struct erofs_sb_info *sbi) 767 + { 768 + erofs_free_dev_context(sbi->devs); 769 + kfree(sbi->fsid); 770 + kfree(sbi->domain_id); 771 + if (sbi->dif0.file) 772 + fput(sbi->dif0.file); 773 + kfree(sbi); 774 + } 775 + 772 776 static void erofs_fc_free(struct fs_context *fc) 773 777 { 774 778 struct erofs_sb_info *sbi = fc->s_fs_info; 775 779 776 - if (!sbi) 777 - return; 778 - 779 - erofs_free_dev_context(sbi->devs); 780 - kfree(sbi->fsid); 781 - kfree(sbi->domain_id); 782 - kfree(sbi); 780 + if (sbi) /* free here if an error occurs before transferring to sb */ 781 + erofs_sb_free(sbi); 783 782 } 784 783 785 784 static const struct fs_context_operations erofs_context_ops = { ··· 820 809 { 821 810 struct erofs_sb_info *sbi = EROFS_SB(sb); 822 811 823 - if ((IS_ENABLED(CONFIG_EROFS_FS_ONDEMAND) && sbi->fsid) || sbi->fdev) 812 + if 
((IS_ENABLED(CONFIG_EROFS_FS_ONDEMAND) && sbi->fsid) || 813 + sbi->dif0.file) 824 814 kill_anon_super(sb); 825 815 else 826 816 kill_block_super(sb); 827 - 828 - erofs_free_dev_context(sbi->devs); 829 - fs_put_dax(sbi->dax_dev, NULL); 817 + fs_put_dax(sbi->dif0.dax_dev, NULL); 830 818 erofs_fscache_unregister_fs(sb); 831 - kfree(sbi->fsid); 832 - kfree(sbi->domain_id); 833 - if (sbi->fdev) 834 - fput(sbi->fdev); 835 - kfree(sbi); 819 + erofs_sb_free(sbi); 836 820 sb->s_fs_info = NULL; 837 821 } 838 822 ··· 953 947 seq_puts(seq, ",dax=always"); 954 948 if (test_opt(opt, DAX_NEVER)) 955 949 seq_puts(seq, ",dax=never"); 950 + if (erofs_is_fileio_mode(sbi) && test_opt(opt, DIRECT_IO)) 951 + seq_puts(seq, ",directio"); 956 952 #ifdef CONFIG_EROFS_FS_ONDEMAND 957 953 if (sbi->fsid) 958 954 seq_printf(seq, ",fsid=%s", sbi->fsid);
+2 -2
fs/erofs/zdata.c
··· 1792 1792 erofs_fscache_submit_bio(bio); 1793 1793 else 1794 1794 submit_bio(bio); 1795 - if (memstall) 1796 - psi_memstall_leave(&pflags); 1797 1795 } 1796 + if (memstall) 1797 + psi_memstall_leave(&pflags); 1798 1798 1799 1799 /* 1800 1800 * although background is preferred, no one is pending for submission.
+4 -3
fs/erofs/zutil.c
··· 230 230 struct erofs_sb_info *const sbi = EROFS_SB(sb); 231 231 232 232 mutex_lock(&sbi->umount_mutex); 233 - /* clean up all remaining pclusters in memory */ 234 - z_erofs_shrink_scan(sbi, ~0UL); 235 - 233 + while (!xa_empty(&sbi->managed_pslots)) { 234 + z_erofs_shrink_scan(sbi, ~0UL); 235 + cond_resched(); 236 + } 236 237 spin_lock(&erofs_sb_list_lock); 237 238 list_del(&sbi->list); 238 239 spin_unlock(&erofs_sb_list_lock);
+2 -1
fs/exfat/dir.c
··· 122 122 type = exfat_get_entry_type(ep); 123 123 if (type == TYPE_UNUSED) { 124 124 brelse(bh); 125 - break; 125 + goto out; 126 126 } 127 127 128 128 if (type != TYPE_FILE && type != TYPE_DIR) { ··· 170 170 } 171 171 } 172 172 173 + out: 173 174 dir_entry->namebuf.lfn[0] = '\0'; 174 175 *cpos = EXFAT_DEN_TO_B(dentry); 175 176 return 0;
+10
fs/exfat/fatent.c
··· 216 216 217 217 if (err) 218 218 goto dec_used_clus; 219 + 220 + if (num_clusters >= sbi->num_clusters - EXFAT_FIRST_CLUSTER) { 221 + /* 222 + * The cluster chain includes a loop, scan the 223 + * bitmap to get the number of used clusters. 224 + */ 225 + exfat_count_used_clusters(sb, &sbi->used_clusters); 226 + 227 + return 0; 228 + } 219 229 } while (clu != EXFAT_EOF_CLUSTER); 220 230 } 221 231
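The fs/exfat/fatent.c fix bounds the cluster-chain walk: a chain over N clusters can legitimately contain at most N links, so visiting more than N proves a cycle, and the code falls back to counting used clusters from the allocation bitmap. The detection idea can be modeled standalone (here `next[]` plays the role of the FAT; names are illustrative):

```c
#include <assert.h>

#define EOF_CLUSTER 0xFFFFFFFFu

/* Walk the chain starting at `start`; if the number of hops ever
 * exceeds the total cluster count, the chain must loop. */
static int chain_has_loop(const unsigned int *next, unsigned int nclusters,
                          unsigned int start)
{
    unsigned int clu = start, steps = 0;

    while (clu != EOF_CLUSTER) {
        if (++steps > nclusters)
            return 1;   /* more hops than clusters exist: a cycle */
        clu = next[clu];
    }
    return 0;
}
```

This costs no extra memory, unlike Floyd-style cycle detection, because the total cluster count already gives a hard upper bound on any valid chain length.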
+6
fs/exfat/file.c
··· 545 545 while (pos < new_valid_size) { 546 546 u32 len; 547 547 struct folio *folio; 548 + unsigned long off; 548 549 549 550 len = PAGE_SIZE - (pos & (PAGE_SIZE - 1)); 550 551 if (pos + len > new_valid_size) ··· 555 554 if (err) 556 555 goto out; 557 556 557 + off = offset_in_folio(folio, pos); 558 + folio_zero_new_buffers(folio, off, off + len); 559 + 558 560 err = ops->write_end(file, mapping, pos, len, len, folio, NULL); 559 561 if (err < 0) 560 562 goto out; ··· 566 562 balance_dirty_pages_ratelimited(mapping); 567 563 cond_resched(); 568 564 } 565 + 566 + return 0; 569 567 570 568 out: 571 569 return err;
+2 -2
fs/exfat/namei.c
··· 330 330 331 331 while ((dentry = exfat_search_empty_slot(sb, &hint_femp, p_dir, 332 332 num_entries, es)) < 0) { 333 - if (dentry == -EIO) 334 - break; 333 + if (dentry != -ENOSPC) 334 + return dentry; 335 335 336 336 if (exfat_check_max_dentries(inode)) 337 337 return -ENOSPC;
+1
fs/file.c
··· 22 22 #include <linux/close_range.h> 23 23 #include <linux/file_ref.h> 24 24 #include <net/sock.h> 25 + #include <linux/init_task.h> 25 26 26 27 #include "internal.h" 27 28
+2
fs/fuse/dir.c
··· 1681 1681 */ 1682 1682 if (ff->open_flags & (FOPEN_STREAM | FOPEN_NONSEEKABLE)) 1683 1683 nonseekable_open(inode, file); 1684 + if (!(ff->open_flags & FOPEN_KEEP_CACHE)) 1685 + invalidate_inode_pages2(inode->i_mapping); 1684 1686 } 1685 1687 1686 1688 return err;
+19 -12
fs/fuse/file.c
··· 1541 1541 */ 1542 1542 struct page **pages = kzalloc(max_pages * sizeof(struct page *), 1543 1543 GFP_KERNEL); 1544 - if (!pages) 1545 - return -ENOMEM; 1544 + if (!pages) { 1545 + ret = -ENOMEM; 1546 + goto out; 1547 + } 1546 1548 1547 1549 while (nbytes < *nbytesp && nr_pages < max_pages) { 1548 1550 unsigned nfolios, i; ··· 1559 1557 1560 1558 nbytes += ret; 1561 1559 1562 - ret += start; 1563 - /* Currently, all folios in FUSE are one page */ 1564 - nfolios = DIV_ROUND_UP(ret, PAGE_SIZE); 1560 + nfolios = DIV_ROUND_UP(ret + start, PAGE_SIZE); 1565 1561 1566 - ap->descs[ap->num_folios].offset = start; 1567 - fuse_folio_descs_length_init(ap->descs, ap->num_folios, nfolios); 1568 - for (i = 0; i < nfolios; i++) 1569 - ap->folios[i + ap->num_folios] = page_folio(pages[i]); 1562 + for (i = 0; i < nfolios; i++) { 1563 + struct folio *folio = page_folio(pages[i]); 1564 + unsigned int offset = start + 1565 + (folio_page_idx(folio, pages[i]) << PAGE_SHIFT); 1566 + unsigned int len = min_t(unsigned int, ret, PAGE_SIZE - start); 1570 1567 1571 - ap->num_folios += nfolios; 1572 - ap->descs[ap->num_folios - 1].length -= 1573 - (PAGE_SIZE - ret) & (PAGE_SIZE - 1); 1568 + ap->descs[ap->num_folios].offset = offset; 1569 + ap->descs[ap->num_folios].length = len; 1570 + ap->folios[ap->num_folios] = folio; 1571 + start = 0; 1572 + ret -= len; 1573 + ap->num_folios++; 1574 + } 1575 + 1574 1576 nr_pages += nfolios; 1575 1577 } 1576 1578 kfree(pages); ··· 1590 1584 else 1591 1585 ap->args.out_pages = true; 1592 1586 1587 + out: 1593 1588 *nbytesp = nbytes; 1594 1589 1595 1590 return ret < 0 ? ret : 0;
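The fs/fuse/file.c hunk replaces the single tail-length adjustment with a per-folio loop: `ret` bytes landing at offset `start` within the first page are described as (offset, length) pairs, where only the first segment is shortened by `start` and the last carries the remainder. The splitting arithmetic can be sketched on its own (a simplified model with fixed-size pages; names are illustrative):

```c
#include <assert.h>

#define PG_SIZE 4096u

/* Split `ret` bytes starting at in-page offset `start` into per-page
 * segment lengths; returns the number of segments written to lens[]. */
static unsigned int split_range(unsigned int start, unsigned int ret,
                                unsigned int lens[], unsigned int max)
{
    unsigned int n = 0;

    while (ret && n < max) {
        unsigned int len = PG_SIZE - start;

        if (len > ret)
            len = ret;
        lens[n++] = len;
        ret -= len;
        start = 0;      /* only the first segment has an offset */
    }
    return n;
}
```

For example, 5000 bytes at offset 100 split into two segments (3996 then 1004), which is the shape the patched loop now records per folio descriptor.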
+3 -1
fs/hfs/super.c
··· 349 349 goto bail_no_root; 350 350 res = hfs_cat_find_brec(sb, HFS_ROOT_CNID, &fd); 351 351 if (!res) { 352 - if (fd.entrylength > sizeof(rec) || fd.entrylength < 0) { 352 + if (fd.entrylength != sizeof(rec.dir)) { 353 353 res = -EIO; 354 354 goto bail_hfs_find; 355 355 } 356 356 hfs_bnode_read(fd.bnode, &rec, fd.entryoffset, fd.entrylength); 357 + if (rec.type != HFS_CDR_DIR) 358 + res = -EIO; 357 359 } 358 360 if (res) 359 361 goto bail_hfs_find;
+1 -1
fs/hugetlbfs/inode.c
··· 825 825 error = PTR_ERR(folio); 826 826 goto out; 827 827 } 828 - folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size)); 828 + folio_zero_user(folio, addr); 829 829 __folio_mark_uptodate(folio); 830 830 error = hugetlb_add_to_page_cache(folio, mapping, index); 831 831 if (unlikely(error)) {
+58 -10
fs/iomap/buffered-io.c
··· 1138 1138 start_byte, end_byte, iomap, punch); 1139 1139 1140 1140 /* move offset to start of next folio in range */ 1141 - start_byte = folio_next_index(folio) << PAGE_SHIFT; 1141 + start_byte = folio_pos(folio) + folio_size(folio); 1142 1142 folio_unlock(folio); 1143 1143 folio_put(folio); 1144 1144 } ··· 1774 1774 */ 1775 1775 static int iomap_add_to_ioend(struct iomap_writepage_ctx *wpc, 1776 1776 struct writeback_control *wbc, struct folio *folio, 1777 - struct inode *inode, loff_t pos, unsigned len) 1777 + struct inode *inode, loff_t pos, loff_t end_pos, 1778 + unsigned len) 1778 1779 { 1779 1780 struct iomap_folio_state *ifs = folio->private; 1780 1781 size_t poff = offset_in_folio(folio, pos); ··· 1794 1793 1795 1794 if (ifs) 1796 1795 atomic_add(len, &ifs->write_bytes_pending); 1796 + 1797 + /* 1798 + * Clamp io_offset and io_size to the incore EOF so that ondisk 1799 + * file size updates in the ioend completion are byte-accurate. 1800 + * This avoids recovering files with zeroed tail regions when 1801 + * writeback races with appending writes: 1802 + * 1803 + * Thread 1: Thread 2: 1804 + * ------------ ----------- 1805 + * write [A, A+B] 1806 + * update inode size to A+B 1807 + * submit I/O [A, A+BS] 1808 + * write [A+B, A+B+C] 1809 + * update inode size to A+B+C 1810 + * <I/O completes, updates disk size to min(A+B+C, A+BS)> 1811 + * <power failure> 1812 + * 1813 + * After reboot: 1814 + * 1) with A+B+C < A+BS, the file has zero padding in range 1815 + * [A+B, A+B+C] 1816 + * 1817 + * |< Block Size (BS) >| 1818 + * |DDDDDDDDDDDD0000000000000| 1819 + * ^ ^ ^ 1820 + * A A+B A+B+C 1821 + * (EOF) 1822 + * 1823 + * 2) with A+B+C > A+BS, the file has zero padding in range 1824 + * [A+B, A+BS] 1825 + * 1826 + * |< Block Size (BS) >|< Block Size (BS) >| 1827 + * |DDDDDDDDDDDD0000000000000|00000000000000000000000000| 1828 + * ^ ^ ^ ^ 1829 + * A A+B A+BS A+B+C 1830 + * (EOF) 1831 + * 1832 + * D = Valid Data 1833 + * 0 = Zero Padding 1834 + * 1835 + * Note 
that this defeats the ability to chain the ioends of 1836 + * appending writes. 1837 + */ 1797 1838 wpc->ioend->io_size += len; 1839 + if (wpc->ioend->io_offset + wpc->ioend->io_size > end_pos) 1840 + wpc->ioend->io_size = end_pos - wpc->ioend->io_offset; 1841 + 1798 1842 wbc_account_cgroup_owner(wbc, folio, len); 1799 1843 return 0; 1800 1844 } 1801 1845 1802 1846 static int iomap_writepage_map_blocks(struct iomap_writepage_ctx *wpc, 1803 1847 struct writeback_control *wbc, struct folio *folio, 1804 - struct inode *inode, u64 pos, unsigned dirty_len, 1805 - unsigned *count) 1848 + struct inode *inode, u64 pos, u64 end_pos, 1849 + unsigned dirty_len, unsigned *count) 1806 1850 { 1807 1851 int error; 1808 1852 ··· 1872 1826 break; 1873 1827 default: 1874 1828 error = iomap_add_to_ioend(wpc, wbc, folio, inode, pos, 1875 - map_len); 1829 + end_pos, map_len); 1876 1830 if (!error) 1877 1831 (*count)++; 1878 1832 break; ··· 1943 1897 * remaining memory is zeroed when mapped, and writes to that 1944 1898 * region are not written out to the file. 1945 1899 * 1946 - * Also adjust the writeback range to skip all blocks entirely 1947 - * beyond i_size. 1900 + * Also adjust the end_pos to the end of file and skip writeback 1901 + * for all blocks entirely beyond i_size. 1948 1902 */ 1949 1903 folio_zero_segment(folio, poff, folio_size(folio)); 1950 - *end_pos = round_up(isize, i_blocksize(inode)); 1904 + *end_pos = isize; 1951 1905 } 1952 1906 1953 1907 return true; ··· 1960 1914 struct inode *inode = folio->mapping->host; 1961 1915 u64 pos = folio_pos(folio); 1962 1916 u64 end_pos = pos + folio_size(folio); 1917 + u64 end_aligned = 0; 1963 1918 unsigned count = 0; 1964 1919 int error = 0; 1965 1920 u32 rlen; ··· 2002 1955 /* 2003 1956 * Walk through the folio to find dirty areas to write back. 
2004 1957 */ 2005 - while ((rlen = iomap_find_dirty_range(folio, &pos, end_pos))) { 1958 + end_aligned = round_up(end_pos, i_blocksize(inode)); 1959 + while ((rlen = iomap_find_dirty_range(folio, &pos, end_aligned))) { 2006 1960 error = iomap_writepage_map_blocks(wpc, wbc, folio, inode, 2007 - pos, rlen, &count); 1961 + pos, end_pos, rlen, &count); 2008 1962 if (error) 2009 1963 break; 2010 1964 pos += rlen;
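The heart of the fs/iomap/buffered-io.c fix is the two-line clamp under the long comment: an ioend must never claim completion beyond the in-core EOF (`end_pos`), or a crash between a racing append and this writeback could surface zero padding as file data. The clamp is a plain min, modeled here as a standalone helper (the field names mirror the kernel's, but this is an isolated sketch):

```c
#include <assert.h>
#include <stdint.h>

/* Clamp an ioend's size so io_offset + io_size never exceeds end_pos. */
static uint64_t clamp_io_size(uint64_t io_offset, uint64_t io_size,
                              uint64_t end_pos)
{
    if (io_offset + io_size > end_pos)
        io_size = end_pos - io_offset;
    return io_size;
}
```

As the diff's comment notes, the cost of byte-accurate on-disk size updates is that ioends of appending writes can no longer be chained.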
+2 -2
fs/jbd2/commit.c
··· 772 772 /* 773 773 * If the journal is not located on the file system device, 774 774 * then we must flush the file system device before we issue 775 - * the commit record 775 + * the commit record and update the journal tail sequence. 776 776 */ 777 - if (commit_transaction->t_need_data_flush && 777 + if ((commit_transaction->t_need_data_flush || update_tail) && 778 778 (journal->j_fs_dev != journal->j_dev) && 779 779 (journal->j_flags & JBD2_BARRIER)) 780 780 blkdev_issue_flush(journal->j_fs_dev);
+1 -1
fs/jbd2/revoke.c
··· 654 654 set_buffer_jwrite(descriptor); 655 655 BUFFER_TRACE(descriptor, "write"); 656 656 set_buffer_dirty(descriptor); 657 - write_dirty_buffer(descriptor, REQ_SYNC); 657 + write_dirty_buffer(descriptor, JBD2_JOURNAL_REQ_FLAGS); 658 658 } 659 659 #endif 660 660
+9 -6
fs/mount.h
··· 38 38 struct dentry *mnt_mountpoint; 39 39 struct vfsmount mnt; 40 40 union { 41 + struct rb_node mnt_node; /* node in the ns->mounts rbtree */ 41 42 struct rcu_head mnt_rcu; 42 43 struct llist_node mnt_llist; 43 44 }; ··· 52 51 struct list_head mnt_child; /* and going through their mnt_child */ 53 52 struct list_head mnt_instance; /* mount instance on sb->s_mounts */ 54 53 const char *mnt_devname; /* Name of device e.g. /dev/dsk/hda1 */ 55 - union { 56 - struct rb_node mnt_node; /* Under ns->mounts */ 57 - struct list_head mnt_list; 58 - }; 54 + struct list_head mnt_list; 59 55 struct list_head mnt_expire; /* link in fs-specific expiry list */ 60 56 struct list_head mnt_share; /* circular list of shared mounts */ 61 57 struct list_head mnt_slave_list;/* list of slave mounts */ ··· 143 145 return ns->seq == 0; 144 146 } 145 147 148 + static inline bool mnt_ns_attached(const struct mount *mnt) 149 + { 150 + return !RB_EMPTY_NODE(&mnt->mnt_node); 151 + } 152 + 146 153 static inline void move_from_ns(struct mount *mnt, struct list_head *dt_list) 147 154 { 148 - WARN_ON(!(mnt->mnt.mnt_flags & MNT_ONRB)); 149 - mnt->mnt.mnt_flags &= ~MNT_ONRB; 155 + WARN_ON(!mnt_ns_attached(mnt)); 150 156 rb_erase(&mnt->mnt_node, &mnt->mnt_ns->mounts); 157 + RB_CLEAR_NODE(&mnt->mnt_node); 151 158 list_add_tail(&mnt->mnt_list, dt_list); 152 159 } 153 160
+14 -10
fs/namespace.c
··· 344 344 INIT_HLIST_NODE(&mnt->mnt_mp_list); 345 345 INIT_LIST_HEAD(&mnt->mnt_umounting); 346 346 INIT_HLIST_HEAD(&mnt->mnt_stuck_children); 347 + RB_CLEAR_NODE(&mnt->mnt_node); 347 348 mnt->mnt.mnt_idmap = &nop_mnt_idmap; 348 349 } 349 350 return mnt; ··· 1125 1124 struct rb_node **link = &ns->mounts.rb_node; 1126 1125 struct rb_node *parent = NULL; 1127 1126 1128 - WARN_ON(mnt->mnt.mnt_flags & MNT_ONRB); 1127 + WARN_ON(mnt_ns_attached(mnt)); 1129 1128 mnt->mnt_ns = ns; 1130 1129 while (*link) { 1131 1130 parent = *link; ··· 1136 1135 } 1137 1136 rb_link_node(&mnt->mnt_node, parent, link); 1138 1137 rb_insert_color(&mnt->mnt_node, &ns->mounts); 1139 - mnt->mnt.mnt_flags |= MNT_ONRB; 1140 1138 } 1141 1139 1142 1140 /* ··· 1305 1305 } 1306 1306 1307 1307 mnt->mnt.mnt_flags = old->mnt.mnt_flags; 1308 - mnt->mnt.mnt_flags &= ~(MNT_WRITE_HOLD|MNT_MARKED|MNT_INTERNAL|MNT_ONRB); 1308 + mnt->mnt.mnt_flags &= ~(MNT_WRITE_HOLD|MNT_MARKED|MNT_INTERNAL); 1309 1309 1310 1310 atomic_inc(&sb->s_active); 1311 1311 mnt->mnt.mnt_idmap = mnt_idmap_get(mnt_idmap(&old->mnt)); ··· 1763 1763 /* Gather the mounts to umount */ 1764 1764 for (p = mnt; p; p = next_mnt(p, mnt)) { 1765 1765 p->mnt.mnt_flags |= MNT_UMOUNT; 1766 - if (p->mnt.mnt_flags & MNT_ONRB) 1766 + if (mnt_ns_attached(p)) 1767 1767 move_from_ns(p, &tmp_list); 1768 1768 else 1769 1769 list_move(&p->mnt_list, &tmp_list); ··· 1912 1912 1913 1913 event++; 1914 1914 if (flags & MNT_DETACH) { 1915 - if (mnt->mnt.mnt_flags & MNT_ONRB || 1916 - !list_empty(&mnt->mnt_list)) 1915 + if (mnt_ns_attached(mnt) || !list_empty(&mnt->mnt_list)) 1917 1916 umount_tree(mnt, UMOUNT_PROPAGATE); 1918 1917 retval = 0; 1919 1918 } else { 1920 1919 shrink_submounts(mnt); 1921 1920 retval = -EBUSY; 1922 1921 if (!propagate_mount_busy(mnt, 2)) { 1923 - if (mnt->mnt.mnt_flags & MNT_ONRB || 1924 - !list_empty(&mnt->mnt_list)) 1922 + if (mnt_ns_attached(mnt) || !list_empty(&mnt->mnt_list)) 1925 1923 umount_tree(mnt, UMOUNT_PROPAGATE|UMOUNT_SYNC); 
1926 1924 retval = 0; 1927 1925 } ··· 2053 2055 2054 2056 static bool is_mnt_ns_file(struct dentry *dentry) 2055 2057 { 2058 + struct ns_common *ns; 2059 + 2056 2060 /* Is this a proxy for a mount namespace? */ 2057 - return dentry->d_op == &ns_dentry_operations && 2058 - dentry->d_fsdata == &mntns_operations; 2061 + if (dentry->d_op != &ns_dentry_operations) 2062 + return false; 2063 + 2064 + ns = d_inode(dentry)->i_private; 2065 + 2066 + return ns->ops == &mntns_operations; 2059 2067 } 2060 2068 2061 2069 struct ns_common *from_mnt_ns(struct mnt_namespace *mnt)
+16 -12
fs/netfs/buffered_read.c
··· 275 275 netfs_stat(&netfs_n_rh_download); 276 276 if (rreq->netfs_ops->prepare_read) { 277 277 ret = rreq->netfs_ops->prepare_read(subreq); 278 - if (ret < 0) { 279 - atomic_dec(&rreq->nr_outstanding); 280 - netfs_put_subrequest(subreq, false, 281 - netfs_sreq_trace_put_cancel); 282 - break; 283 - } 278 + if (ret < 0) 279 + goto prep_failed; 284 280 trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); 285 281 } 286 282 287 283 slice = netfs_prepare_read_iterator(subreq); 288 - if (slice < 0) { 289 - atomic_dec(&rreq->nr_outstanding); 290 - netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel); 291 - ret = slice; 292 - break; 293 - } 284 + if (slice < 0) 285 + goto prep_iter_failed; 294 286 295 287 rreq->netfs_ops->issue_read(subreq); 296 288 goto done; ··· 294 302 trace_netfs_sreq(subreq, netfs_sreq_trace_submit); 295 303 netfs_stat(&netfs_n_rh_zero); 296 304 slice = netfs_prepare_read_iterator(subreq); 305 + if (slice < 0) 306 + goto prep_iter_failed; 297 307 __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); 298 308 netfs_read_subreq_terminated(subreq, 0, false); 299 309 goto done; ··· 304 310 if (source == NETFS_READ_FROM_CACHE) { 305 311 trace_netfs_sreq(subreq, netfs_sreq_trace_submit); 306 312 slice = netfs_prepare_read_iterator(subreq); 313 + if (slice < 0) 314 + goto prep_iter_failed; 307 315 netfs_read_cache_to_pagecache(rreq, subreq); 308 316 goto done; 309 317 } 310 318 311 319 pr_err("Unexpected read source %u\n", source); 312 320 WARN_ON_ONCE(1); 321 + break; 322 + 323 + prep_iter_failed: 324 + ret = slice; 325 + prep_failed: 326 + subreq->error = ret; 327 + atomic_dec(&rreq->nr_outstanding); 328 + netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel); 313 329 break; 314 330 315 331 done:
+6 -2
fs/netfs/direct_write.c
··· 67 67 * allocate a sufficiently large bvec array and may shorten the 68 68 * request. 69 69 */ 70 - if (async || user_backed_iter(iter)) { 70 + if (user_backed_iter(iter)) { 71 71 n = netfs_extract_user_iter(iter, len, &wreq->iter, 0); 72 72 if (n < 0) { 73 73 ret = n; ··· 77 77 wreq->direct_bv_count = n; 78 78 wreq->direct_bv_unpin = iov_iter_extract_will_pin(iter); 79 79 } else { 80 + /* If this is a kernel-generated async DIO request, 81 + * assume that any resources the iterator points to 82 + * (eg. a bio_vec array) will persist till the end of 83 + * the op. 84 + */ 80 85 wreq->iter = *iter; 81 86 } 82 87 ··· 109 104 trace_netfs_rreq(wreq, netfs_rreq_trace_wait_ip); 110 105 wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS, 111 106 TASK_UNINTERRUPTIBLE); 112 - smp_rmb(); /* Read error/transferred after RIP flag */ 113 107 ret = wreq->error; 114 108 if (ret == 0) { 115 109 ret = wreq->transferred;
+19 -14
fs/netfs/read_collect.c
··· 62 62 } else { 63 63 trace_netfs_folio(folio, netfs_folio_trace_read_done); 64 64 } 65 + 66 + folioq_clear(folioq, slot); 65 67 } else { 66 68 // TODO: Use of PG_private_2 is deprecated. 67 69 if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) 68 70 netfs_pgpriv2_mark_copy_to_cache(subreq, rreq, folioq, slot); 71 + else 72 + folioq_clear(folioq, slot); 69 73 } 70 74 71 75 if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) { ··· 81 77 folio_unlock(folio); 82 78 } 83 79 } 84 - 85 - folioq_clear(folioq, slot); 86 80 } 87 81 88 82 /* ··· 249 247 250 248 /* Deal with the trickiest case: that this subreq is in the middle of a 251 249 * folio, not touching either edge, but finishes first. In such a 252 - * case, we donate to the previous subreq, if there is one, so that the 253 - * donation is only handled when that completes - and remove this 254 - * subreq from the list. 250 + * case, we donate to the previous subreq, if there is one and if it is 251 + * contiguous, so that the donation is only handled when that completes 252 + * - and remove this subreq from the list. 255 253 * 256 254 * If the previous subreq finished first, we will have acquired their 257 255 * donation and should be able to unlock folios and/or donate nextwards. 
258 256 */ 259 257 if (!subreq->consumed && 260 258 !prev_donated && 261 - !list_is_first(&subreq->rreq_link, &rreq->subrequests)) { 259 + !list_is_first(&subreq->rreq_link, &rreq->subrequests) && 260 + subreq->start == prev->start + prev->len) { 262 261 prev = list_prev_entry(subreq, rreq_link); 263 262 WRITE_ONCE(prev->next_donated, prev->next_donated + subreq->len); 264 263 subreq->start += subreq->len; ··· 381 378 task_io_account_read(rreq->transferred); 382 379 383 380 trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip); 384 - clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags); 385 - wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS); 381 + clear_and_wake_up_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags); 386 382 387 383 trace_netfs_rreq(rreq, netfs_rreq_trace_done); 388 384 netfs_clear_subrequests(rreq, false); ··· 440 438 rreq->origin == NETFS_READPAGE || 441 439 rreq->origin == NETFS_READ_FOR_WRITE)) { 442 440 netfs_consume_read_data(subreq, was_async); 443 - __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); 441 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); 444 442 } 445 443 } 446 444 EXPORT_SYMBOL(netfs_read_subreq_progress); ··· 499 497 rreq->origin == NETFS_READPAGE || 500 498 rreq->origin == NETFS_READ_FOR_WRITE)) { 501 499 netfs_consume_read_data(subreq, was_async); 502 - __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); 500 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); 503 501 rreq->transferred += subreq->transferred; 504 502 } ··· 513 511 } else { 514 512 trace_netfs_sreq(subreq, netfs_sreq_trace_short); 515 513 if (subreq->transferred > subreq->consumed) { 516 - __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 517 - __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); 518 - set_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags); 519 - } else if (!__test_and_set_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags)) { 514 + /* If we didn't read new data, abandon retry. */ 515 + if (subreq->retry_count && 516 + test_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags)) { 517 + __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 518 + set_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags); 519 + } 520 + } else if (test_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags)) { 520 521 __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 521 522 set_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags); 522 523 } else {
+4
fs/netfs/read_pgpriv2.c
··· 170 170 171 171 trace_netfs_write(wreq, netfs_write_trace_copy_to_cache); 172 172 netfs_stat(&netfs_n_wh_copy_to_cache); 173 + if (!wreq->io_streams[1].avail) { 174 + netfs_put_request(wreq, false, netfs_rreq_trace_put_return); 175 + goto couldnt_start; 176 + } 173 177 174 178 for (;;) { 175 179 error = netfs_pgpriv2_copy_folio(wreq, folio);
+7 -4
fs/netfs/read_retry.c
··· 49 49 * up to the first permanently failed one. 50 50 */ 51 51 if (!rreq->netfs_ops->prepare_read && 52 - !test_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags)) { 52 + !rreq->cache_resources.ops) { 53 53 struct netfs_io_subrequest *subreq; 54 54 55 55 list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { 56 56 if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) 57 57 break; 58 58 if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { 59 + __clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); 60 + subreq->retry_count++; 59 61 netfs_reset_iter(subreq); 60 62 netfs_reissue_read(rreq, subreq); 61 63 } ··· 139 137 stream0->sreq_max_len = subreq->len; 140 138 141 139 __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 142 - __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); 140 + __clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); 141 + subreq->retry_count++; 143 142 144 143 spin_lock_bh(&rreq->lock); 145 144 list_add_tail(&subreq->rreq_link, &rreq->subrequests); ··· 152 149 BUG_ON(!len); 153 150 154 151 /* Renegotiate max_len (rsize) */ 155 - if (rreq->netfs_ops->prepare_read(subreq) < 0) { 152 + if (rreq->netfs_ops->prepare_read && 153 + rreq->netfs_ops->prepare_read(subreq) < 0) { 156 154 trace_netfs_sreq(subreq, netfs_sreq_trace_reprep_failed); 157 155 __set_bit(NETFS_SREQ_FAILED, &subreq->flags); 158 156 } ··· 217 213 subreq->error = -ENOMEM; 218 214 __clear_bit(NETFS_SREQ_FAILED, &subreq->flags); 219 215 __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 220 - __clear_bit(NETFS_SREQ_RETRYING, &subreq->flags); 221 216 } 222 217 spin_lock_bh(&rreq->lock); 223 218 list_splice_tail_init(&queue, &rreq->subrequests);
+5 -9
fs/netfs/write_collect.c
··· 179 179 struct iov_iter source = subreq->io_iter; 180 180 181 181 iov_iter_revert(&source, subreq->len - source.count); 182 - __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); 183 182 netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); 184 183 netfs_reissue_write(stream, subreq, &source); 185 184 } ··· 233 234 /* Renegotiate max_len (wsize) */ 234 235 trace_netfs_sreq(subreq, netfs_sreq_trace_retry); 235 236 __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 236 - __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); 237 + subreq->retry_count++; 237 238 stream->prepare_write(subreq); 238 239 239 240 part = min(len, stream->sreq_max_len); ··· 278 279 subreq->start = start; 279 280 subreq->debug_index = atomic_inc_return(&wreq->subreq_counter); 280 281 subreq->stream_nr = to->stream_nr; 281 - __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); 282 + subreq->retry_count = 1; 282 283 283 284 trace_netfs_sreq_ref(wreq->debug_id, subreq->debug_index, 284 285 refcount_read(&subreq->ref), ··· 500 501 goto need_retry; 501 502 if ((notes & MADE_PROGRESS) && test_bit(NETFS_RREQ_PAUSE, &wreq->flags)) { 502 503 trace_netfs_rreq(wreq, netfs_rreq_trace_unpause); 503 - clear_bit_unlock(NETFS_RREQ_PAUSE, &wreq->flags); 504 - wake_up_bit(&wreq->flags, NETFS_RREQ_PAUSE); 504 + clear_and_wake_up_bit(NETFS_RREQ_PAUSE, &wreq->flags); 505 505 } 506 506 507 507 if (notes & NEED_REASSESS) { ··· 603 605 604 606 _debug("finished"); 605 607 trace_netfs_rreq(wreq, netfs_rreq_trace_wake_ip); 606 - clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &wreq->flags); 607 - wake_up_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS); 608 + clear_and_wake_up_bit(NETFS_RREQ_IN_PROGRESS, &wreq->flags); 608 609 609 610 if (wreq->iocb) { 610 611 size_t written = min(wreq->transferred, wreq->len); ··· 711 714 712 715 trace_netfs_sreq(subreq, netfs_sreq_trace_terminated); 713 716 714 - clear_bit_unlock(NETFS_SREQ_IN_PROGRESS, &subreq->flags); 715 - wake_up_bit(&subreq->flags, NETFS_SREQ_IN_PROGRESS); 717 + clear_and_wake_up_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); 716 718 717 719 /* If we are at the head of the queue, wake up the collector, 718 720 * transferring a ref to it if we were the ones to do so.
+2
fs/netfs/write_issue.c
··· 244 244 iov_iter_advance(source, size); 245 245 iov_iter_truncate(&subreq->io_iter, size); 246 246 247 + subreq->retry_count++; 248 + __clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); 247 249 __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); 248 250 netfs_do_issue_write(stream, subreq); 249 251 }
+8 -1
fs/nfs/fscache.c
··· 263 263 static atomic_t nfs_netfs_debug_id; 264 264 static int nfs_netfs_init_request(struct netfs_io_request *rreq, struct file *file) 265 265 { 266 + if (!file) { 267 + if (WARN_ON_ONCE(rreq->origin != NETFS_PGPRIV2_COPY_TO_CACHE)) 268 + return -EIO; 269 + return 0; 270 + } 271 + 266 272 rreq->netfs_priv = get_nfs_open_context(nfs_file_open_context(file)); 267 273 rreq->debug_id = atomic_inc_return(&nfs_netfs_debug_id); 268 274 /* [DEPRECATED] Use PG_private_2 to mark folio being written to the cache. */ ··· 280 274 281 275 static void nfs_netfs_free_request(struct netfs_io_request *rreq) 282 276 { 283 - put_nfs_open_context(rreq->netfs_priv); 277 + if (rreq->netfs_priv) 278 + put_nfs_open_context(rreq->netfs_priv); 284 279 } 285 280 286 281 static struct nfs_netfs_io_data *nfs_netfs_alloc(struct netfs_io_subrequest *sreq)
+1 -1
fs/nfs/pnfs.c
··· 1308 1308 enum pnfs_iomode *iomode) 1309 1309 { 1310 1310 /* Serialise LAYOUTGET/LAYOUTRETURN */ 1311 - if (atomic_read(&lo->plh_outstanding) != 0) 1311 + if (atomic_read(&lo->plh_outstanding) != 0 && lo->plh_return_seq == 0) 1312 1312 return false; 1313 1313 if (test_and_set_bit(NFS_LAYOUT_RETURN_LOCK, &lo->plh_flags)) 1314 1314 return false;
+1
fs/nfs/super.c
··· 73 73 #include "nfs.h" 74 74 #include "netns.h" 75 75 #include "sysfs.h" 76 + #include "nfs4idmap.h" 76 77 77 78 #define NFSDBG_FACILITY NFSDBG_VFS 78 79
+6 -25
fs/nfsd/export.c
··· 40 40 #define EXPKEY_HASHMAX (1 << EXPKEY_HASHBITS) 41 41 #define EXPKEY_HASHMASK (EXPKEY_HASHMAX -1) 42 42 43 - static void expkey_put_work(struct work_struct *work) 43 + static void expkey_put(struct kref *ref) 44 44 { 45 - struct svc_expkey *key = 46 - container_of(to_rcu_work(work), struct svc_expkey, ek_rcu_work); 45 + struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref); 47 46 48 47 if (test_bit(CACHE_VALID, &key->h.flags) && 49 48 !test_bit(CACHE_NEGATIVE, &key->h.flags)) 50 49 path_put(&key->ek_path); 51 50 auth_domain_put(key->ek_client); 52 - kfree(key); 53 - } 54 - 55 - static void expkey_put(struct kref *ref) 56 - { 57 - struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref); 58 - 59 - INIT_RCU_WORK(&key->ek_rcu_work, expkey_put_work); 60 - queue_rcu_work(system_wq, &key->ek_rcu_work); 51 + kfree_rcu(key, ek_rcu); 61 52 } 62 53 63 54 static int expkey_upcall(struct cache_detail *cd, struct cache_head *h) ··· 355 364 EXP_STATS_COUNTERS_NUM); 356 365 } 357 366 358 - static void svc_export_put_work(struct work_struct *work) 367 + static void svc_export_put(struct kref *ref) 359 368 { 360 - struct svc_export *exp = 361 - container_of(to_rcu_work(work), struct svc_export, ex_rcu_work); 362 - 369 + struct svc_export *exp = container_of(ref, struct svc_export, h.ref); 363 370 path_put(&exp->ex_path); 364 371 auth_domain_put(exp->ex_client); 365 372 nfsd4_fslocs_free(&exp->ex_fslocs); 366 373 export_stats_destroy(exp->ex_stats); 367 374 kfree(exp->ex_stats); 368 375 kfree(exp->ex_uuid); 369 - kfree(exp); 370 - } 371 - 372 - static void svc_export_put(struct kref *ref) 373 - { 374 - struct svc_export *exp = container_of(ref, struct svc_export, h.ref); 375 - 376 - INIT_RCU_WORK(&exp->ex_rcu_work, svc_export_put_work); 377 - queue_rcu_work(system_wq, &exp->ex_rcu_work); 376 + kfree_rcu(exp, ex_rcu); 378 377 } 379 378 380 379 static int svc_export_upcall(struct cache_detail *cd, struct cache_head *h)
+2 -2
fs/nfsd/export.h
··· 75 75 u32 ex_layout_types; 76 76 struct nfsd4_deviceid_map *ex_devid_map; 77 77 struct cache_detail *cd; 78 - struct rcu_work ex_rcu_work; 78 + struct rcu_head ex_rcu; 79 79 unsigned long ex_xprtsec_modes; 80 80 struct export_stats *ex_stats; 81 81 }; ··· 92 92 u32 ek_fsid[6]; 93 93 94 94 struct path ek_path; 95 - struct rcu_work ek_rcu_work; 95 + struct rcu_head ek_rcu; 96 96 }; 97 97 98 98 #define EX_ISSYNC(exp) (!((exp)->ex_flags & NFSEXP_ASYNC))
+1 -3
fs/nfsd/nfs4callback.c
··· 1100 1100 args.authflavor = clp->cl_cred.cr_flavor; 1101 1101 clp->cl_cb_ident = conn->cb_ident; 1102 1102 } else { 1103 - if (!conn->cb_xprt) 1103 + if (!conn->cb_xprt || !ses) 1104 1104 return -EINVAL; 1105 1105 clp->cl_cb_session = ses; 1106 1106 args.bc_xprt = conn->cb_xprt; ··· 1522 1522 ses = c->cn_session; 1523 1523 } 1524 1524 spin_unlock(&clp->cl_lock); 1525 - if (!c) 1526 - return; 1527 1525 1528 1526 err = setup_callback_client(clp, &conn, ses); 1529 1527 if (err) {
+8 -5
fs/nfsd/nfs4proc.c
··· 1347 1347 { 1348 1348 if (!refcount_dec_and_test(&copy->refcount)) 1349 1349 return; 1350 - atomic_dec(&copy->cp_nn->pending_async_copies); 1351 1350 kfree(copy->cp_src); 1352 1351 kfree(copy); 1353 1352 } ··· 1869 1870 set_bit(NFSD4_COPY_F_COMPLETED, &copy->cp_flags); 1870 1871 trace_nfsd_copy_async_done(copy); 1871 1872 nfsd4_send_cb_offload(copy); 1873 + atomic_dec(&copy->cp_nn->pending_async_copies); 1872 1874 return 0; 1873 1875 } 1874 1876 ··· 1927 1927 /* Arbitrary cap on number of pending async copy operations */ 1928 1928 if (atomic_inc_return(&nn->pending_async_copies) > 1929 1929 (int)rqstp->rq_pool->sp_nrthreads) 1930 - goto out_err; 1930 + goto out_dec_async_copy_err; 1931 1931 async_copy->cp_src = kmalloc(sizeof(*async_copy->cp_src), GFP_KERNEL); 1932 1932 if (!async_copy->cp_src) 1933 - goto out_err; 1933 + goto out_dec_async_copy_err; 1934 1934 if (!nfs4_init_copy_state(nn, copy)) 1935 - goto out_err; 1935 + goto out_dec_async_copy_err; 1936 1936 memcpy(&result->cb_stateid, &copy->cp_stateid.cs_stid, 1937 1937 sizeof(result->cb_stateid)); 1938 1938 dup_copy_fields(copy, async_copy); 1939 1939 async_copy->copy_task = kthread_create(nfsd4_do_async_copy, 1940 1940 async_copy, "%s", "copy thread"); 1941 1941 if (IS_ERR(async_copy->copy_task)) 1942 - goto out_err; 1942 + goto out_dec_async_copy_err; 1943 1943 spin_lock(&async_copy->cp_clp->async_lock); 1944 1944 list_add(&async_copy->copies, 1945 1945 &async_copy->cp_clp->async_copies); ··· 1954 1954 trace_nfsd_copy_done(copy, status); 1955 1955 release_copy_files(copy); 1956 1956 return status; 1957 + out_dec_async_copy_err: 1958 + if (async_copy) 1959 + atomic_dec(&nn->pending_async_copies); 1957 1960 out_err: 1958 1961 if (nfsd4_ssc_is_inter(copy)) { 1959 1962 /*
+1
fs/nilfs2/btnode.c
··· 35 35 ii->i_flags = 0; 36 36 memset(&ii->i_bmap_data, 0, sizeof(struct nilfs_bmap)); 37 37 mapping_set_gfp_mask(btnc_inode->i_mapping, GFP_NOFS); 38 + btnc_inode->i_mapping->a_ops = &nilfs_buffer_cache_aops; 38 39 } 39 40 40 41 void nilfs_btnode_cache_clear(struct address_space *btnc)
+1 -1
fs/nilfs2/gcinode.c
··· 163 163 164 164 inode->i_mode = S_IFREG; 165 165 mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS); 166 - inode->i_mapping->a_ops = &empty_aops; 166 + inode->i_mapping->a_ops = &nilfs_buffer_cache_aops; 167 167 168 168 ii->i_flags = 0; 169 169 nilfs_bmap_init_gc(ii->i_bmap);
+12 -1
fs/nilfs2/inode.c
··· 276 276 .is_partially_uptodate = block_is_partially_uptodate, 277 277 }; 278 278 279 + const struct address_space_operations nilfs_buffer_cache_aops = { 280 + .invalidate_folio = block_invalidate_folio, 281 + }; 282 + 279 283 static int nilfs_insert_inode_locked(struct inode *inode, 280 284 struct nilfs_root *root, 281 285 unsigned long ino) ··· 548 544 inode = nilfs_iget_locked(sb, root, ino); 549 545 if (unlikely(!inode)) 550 546 return ERR_PTR(-ENOMEM); 551 - if (!(inode->i_state & I_NEW)) 547 + 548 + if (!(inode->i_state & I_NEW)) { 549 + if (!inode->i_nlink) { 550 + iput(inode); 551 + return ERR_PTR(-ESTALE); 552 + } 552 553 return inode; 554 + } 553 555 554 556 err = __nilfs_read_inode(sb, root, ino, inode); 555 557 if (unlikely(err)) { ··· 685 675 NILFS_I(s_inode)->i_flags = 0; 686 676 memset(NILFS_I(s_inode)->i_bmap, 0, sizeof(struct nilfs_bmap)); 687 677 mapping_set_gfp_mask(s_inode->i_mapping, GFP_NOFS); 678 + s_inode->i_mapping->a_ops = &nilfs_buffer_cache_aops; 688 679 689 680 err = nilfs_attach_btree_node_cache(s_inode); 690 681 if (unlikely(err)) {
+5
fs/nilfs2/namei.c
··· 67 67 inode = NULL; 68 68 } else { 69 69 inode = nilfs_iget(dir->i_sb, NILFS_I(dir)->i_root, ino); 70 + if (inode == ERR_PTR(-ESTALE)) { 71 + nilfs_error(dir->i_sb, 72 + "deleted inode referenced: %lu", ino); 73 + return ERR_PTR(-EIO); 74 + } 70 75 } 71 76 72 77 return d_splice_alias(inode, dentry);
+1
fs/nilfs2/nilfs.h
··· 401 401 extern const struct inode_operations nilfs_file_inode_operations; 402 402 extern const struct file_operations nilfs_file_operations; 403 403 extern const struct address_space_operations nilfs_aops; 404 + extern const struct address_space_operations nilfs_buffer_cache_aops; 404 405 extern const struct inode_operations nilfs_dir_inode_operations; 405 406 extern const struct inode_operations nilfs_special_inode_operations; 406 407 extern const struct inode_operations nilfs_symlink_inode_operations;
+1 -3
fs/notify/fdinfo.c
··· 47 47 size = f->handle_bytes >> 2; 48 48 49 49 ret = exportfs_encode_fid(inode, (struct fid *)f->f_handle, &size); 50 - if ((ret == FILEID_INVALID) || (ret < 0)) { 51 - WARN_ONCE(1, "Can't encode file handler for inotify: %d\n", ret); 50 + if ((ret == FILEID_INVALID) || (ret < 0)) 52 51 return; 53 - } 54 52 55 53 f->handle_type = ret; 56 54 f->handle_bytes = size * sizeof(u32);
+5 -22
fs/ocfs2/localalloc.c
··· 971 971 start = count = 0; 972 972 left = le32_to_cpu(alloc->id1.bitmap1.i_total); 973 973 974 - while ((bit_off = ocfs2_find_next_zero_bit(bitmap, left, start)) < 975 - left) { 976 - if (bit_off == start) { 974 + while (1) { 975 + bit_off = ocfs2_find_next_zero_bit(bitmap, left, start); 976 + if ((bit_off < left) && (bit_off == start)) { 977 977 count++; 978 978 start++; 979 979 continue; ··· 998 998 } 999 999 } 1000 1000 1001 + if (bit_off >= left) 1002 + break; 1001 1003 count = 1; 1002 1004 start = bit_off + 1; 1003 - } 1004 - 1005 - /* clear the contiguous bits until the end boundary */ 1006 - if (count) { 1007 - blkno = la_start_blk + 1008 - ocfs2_clusters_to_blocks(osb->sb, 1009 - start - count); 1010 - 1011 - trace_ocfs2_sync_local_to_main_free( 1012 - count, start - count, 1013 - (unsigned long long)la_start_blk, 1014 - (unsigned long long)blkno); 1015 - 1016 - status = ocfs2_release_clusters(handle, 1017 - main_bm_inode, 1018 - main_bm_bh, blkno, 1019 - count); 1020 - if (status < 0) 1021 - mlog_errno(status); 1022 1005 } 1023 1006 1024 1007 bail:
+1 -1
fs/ocfs2/quota_global.c
··· 893 893 int status = 0; 894 894 895 895 trace_ocfs2_get_next_id(from_kqid(&init_user_ns, *qid), type); 896 - if (!sb_has_quota_loaded(sb, type)) { 896 + if (!sb_has_quota_active(sb, type)) { 897 897 status = -ESRCH; 898 898 goto out; 899 899 }
+1
fs/ocfs2/quota_local.c
··· 867 867 brelse(oinfo->dqi_libh); 868 868 brelse(oinfo->dqi_lqi_bh); 869 869 kfree(oinfo); 870 + info->dqi_priv = NULL; 870 871 return status; 871 872 } 872 873
+8 -8
fs/overlayfs/copy_up.c
··· 415 415 return err; 416 416 } 417 417 418 - struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct dentry *real, 418 + struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct inode *realinode, 419 419 bool is_upper) 420 420 { 421 421 struct ovl_fh *fh; 422 422 int fh_type, dwords; 423 423 int buflen = MAX_HANDLE_SZ; 424 - uuid_t *uuid = &real->d_sb->s_uuid; 424 + uuid_t *uuid = &realinode->i_sb->s_uuid; 425 425 int err; 426 426 427 427 /* Make sure the real fid stays 32bit aligned */ ··· 438 438 * the price or reconnecting the dentry. 439 439 */ 440 440 dwords = buflen >> 2; 441 - fh_type = exportfs_encode_fh(real, (void *)fh->fb.fid, &dwords, 0); 441 + fh_type = exportfs_encode_inode_fh(realinode, (void *)fh->fb.fid, 442 + &dwords, NULL, 0); 442 443 buflen = (dwords << 2); 443 444 444 445 err = -EIO; 445 - if (WARN_ON(fh_type < 0) || 446 - WARN_ON(buflen > MAX_HANDLE_SZ) || 447 - WARN_ON(fh_type == FILEID_INVALID)) 446 + if (fh_type < 0 || fh_type == FILEID_INVALID || 447 + WARN_ON(buflen > MAX_HANDLE_SZ)) 448 448 goto out_err; 449 449 450 450 fh->fb.version = OVL_FH_VERSION; ··· 480 480 if (!ovl_can_decode_fh(origin->d_sb)) 481 481 return NULL; 482 482 483 - return ovl_encode_real_fh(ofs, origin, false); 483 + return ovl_encode_real_fh(ofs, d_inode(origin), false); 484 484 } 485 485 486 486 int ovl_set_origin_fh(struct ovl_fs *ofs, const struct ovl_fh *fh, ··· 505 505 const struct ovl_fh *fh; 506 506 int err; 507 507 508 - fh = ovl_encode_real_fh(ofs, upper, true); 508 + fh = ovl_encode_real_fh(ofs, d_inode(upper), true); 509 509 if (IS_ERR(fh)) 510 510 return PTR_ERR(fh); 511 511
+27 -22
fs/overlayfs/export.c
··· 176 176 * 177 177 * Return 0 for upper file handle, > 0 for lower file handle or < 0 on error. 178 178 */ 179 - static int ovl_check_encode_origin(struct dentry *dentry) 179 + static int ovl_check_encode_origin(struct inode *inode) 180 180 { 181 - struct ovl_fs *ofs = OVL_FS(dentry->d_sb); 181 + struct ovl_fs *ofs = OVL_FS(inode->i_sb); 182 182 bool decodable = ofs->config.nfs_export; 183 + struct dentry *dentry; 184 + int err; 183 185 184 186 /* No upper layer? */ 185 187 if (!ovl_upper_mnt(ofs)) 186 188 return 1; 187 189 188 190 /* Lower file handle for non-upper non-decodable */ 189 - if (!ovl_dentry_upper(dentry) && !decodable) 191 + if (!ovl_inode_upper(inode) && !decodable) 190 192 return 1; 191 193 192 194 /* Upper file handle for pure upper */ 193 - if (!ovl_dentry_lower(dentry)) 195 + if (!ovl_inode_lower(inode)) 194 196 return 0; 195 197 196 198 /* 197 199 * Root is never indexed, so if there's an upper layer, encode upper for 198 200 * root. 199 201 */ 200 - if (dentry == dentry->d_sb->s_root) 202 + if (inode == d_inode(inode->i_sb->s_root)) 201 203 return 0; 202 204 203 205 /* 204 206 * Upper decodable file handle for non-indexed upper. 205 207 */ 206 - if (ovl_dentry_upper(dentry) && decodable && 207 - !ovl_test_flag(OVL_INDEX, d_inode(dentry))) 208 + if (ovl_inode_upper(inode) && decodable && 209 + !ovl_test_flag(OVL_INDEX, inode)) 208 210 return 0; 209 211 210 212 /* ··· 215 213 * ovl_connect_layer() will try to make origin's layer "connected" by 216 214 * copying up a "connectable" ancestor. 
217 215 */ 218 - if (d_is_dir(dentry) && decodable) 219 - return ovl_connect_layer(dentry); 216 + if (!decodable || !S_ISDIR(inode->i_mode)) 217 + return 1; 218 + 219 + dentry = d_find_any_alias(inode); 220 + if (!dentry) 221 + return -ENOENT; 222 + 223 + err = ovl_connect_layer(dentry); 224 + dput(dentry); 225 + if (err < 0) 226 + return err; 220 227 221 228 /* Lower file handle for indexed and non-upper dir/non-dir */ 222 229 return 1; 223 230 } 224 231 225 - static int ovl_dentry_to_fid(struct ovl_fs *ofs, struct dentry *dentry, 232 + static int ovl_dentry_to_fid(struct ovl_fs *ofs, struct inode *inode, 226 233 u32 *fid, int buflen) 227 234 { 228 235 struct ovl_fh *fh = NULL; ··· 242 231 * Check if we should encode a lower or upper file handle and maybe 243 232 * copy up an ancestor to make lower file handle connectable. 244 233 */ 245 - err = enc_lower = ovl_check_encode_origin(dentry); 234 + err = enc_lower = ovl_check_encode_origin(inode); 246 235 if (enc_lower < 0) 247 236 goto fail; 248 237 249 238 /* Encode an upper or lower file handle */ 250 - fh = ovl_encode_real_fh(ofs, enc_lower ? ovl_dentry_lower(dentry) : 251 - ovl_dentry_upper(dentry), !enc_lower); 239 + fh = ovl_encode_real_fh(ofs, enc_lower ? ovl_inode_lower(inode) : 240 + ovl_inode_upper(inode), !enc_lower); 252 241 if (IS_ERR(fh)) 253 242 return PTR_ERR(fh); ··· 262 251 return err; 263 252 264 253 fail: 265 - pr_warn_ratelimited("failed to encode file handle (%pd2, err=%i)\n", 266 - dentry, err); 254 + pr_warn_ratelimited("failed to encode file handle (ino=%lu, err=%i)\n", 255 + inode->i_ino, err); 267 256 goto out; 268 257 } 269 258 ··· 271 260 struct inode *parent) 272 261 { 273 262 struct ovl_fs *ofs = OVL_FS(inode->i_sb); 274 - struct dentry *dentry; 275 263 int bytes, buflen = *max_len << 2; 276 264 277 265 /* TODO: encode connectable file handles */ 278 266 if (parent) 279 267 return FILEID_INVALID; 280 268 281 - dentry = d_find_any_alias(inode); 282 - if (!dentry) 283 - return FILEID_INVALID; 284 - 285 - bytes = ovl_dentry_to_fid(ofs, dentry, fid, buflen); 286 - dput(dentry); 269 + bytes = ovl_dentry_to_fid(ofs, inode, fid, buflen); 287 270 if (bytes <= 0) 288 271 return FILEID_INVALID; 289 272
+2 -2
fs/overlayfs/namei.c
··· 542 542 struct ovl_fh *fh; 543 543 int err; 544 544 545 - fh = ovl_encode_real_fh(ofs, real, is_upper); 545 + fh = ovl_encode_real_fh(ofs, d_inode(real), is_upper); 546 546 err = PTR_ERR(fh); 547 547 if (IS_ERR(fh)) { 548 548 fh = NULL; ··· 738 738 struct ovl_fh *fh; 739 739 int err; 740 740 741 - fh = ovl_encode_real_fh(ofs, origin, false); 741 + fh = ovl_encode_real_fh(ofs, d_inode(origin), false); 742 742 if (IS_ERR(fh)) 743 743 return PTR_ERR(fh); 744 744
+1 -1
fs/overlayfs/overlayfs.h
··· 865 865 int ovl_maybe_copy_up(struct dentry *dentry, int flags); 866 866 int ovl_copy_xattr(struct super_block *sb, const struct path *path, struct dentry *new); 867 867 int ovl_set_attr(struct ovl_fs *ofs, struct dentry *upper, struct kstat *stat); 868 - struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct dentry *real, 868 + struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct inode *realinode, 869 869 bool is_upper); 870 870 struct ovl_fh *ovl_get_origin_fh(struct ovl_fs *ofs, struct dentry *origin); 871 871 int ovl_set_origin_fh(struct ovl_fs *ofs, const struct ovl_fh *fh,
+1 -1
fs/proc/task_mmu.c
··· 1810 1810 } 1811 1811 1812 1812 for (; addr != end; addr += PAGE_SIZE, idx++) { 1813 - unsigned long cur_flags = flags; 1813 + u64 cur_flags = flags; 1814 1814 pagemap_entry_t pme; 1815 1815 1816 1816 if (folio && (flags & PM_PRESENT) &&
+4 -7
fs/qnx6/inode.c
··· 179 179 */ 180 180 static const char *qnx6_checkroot(struct super_block *s) 181 181 { 182 - static char match_root[2][3] = {".\0\0", "..\0"}; 183 - int i, error = 0; 182 + int error = 0; 184 183 struct qnx6_dir_entry *dir_entry; 185 184 struct inode *root = d_inode(s->s_root); 186 185 struct address_space *mapping = root->i_mapping; ··· 188 189 if (IS_ERR(folio)) 189 190 return "error reading root directory"; 190 191 dir_entry = kmap_local_folio(folio, 0); 191 - for (i = 0; i < 2; i++) { 192 - /* maximum 3 bytes - due to match_root limitation */ 193 - if (strncmp(dir_entry[i].de_fname, match_root[i], 3)) 194 - error = 1; 195 - } 192 + if (memcmp(dir_entry[0].de_fname, ".", 2) || 193 + memcmp(dir_entry[1].de_fname, "..", 3)) 194 + error = 1; 196 195 folio_release_kmap(folio, dir_entry); 197 196 if (error) 198 197 return "error reading root directory.";
-1
fs/smb/client/Kconfig
··· 2 2 config CIFS 3 3 tristate "SMB3 and CIFS support (advanced network filesystem)" 4 4 depends on INET 5 - select NETFS_SUPPORT 6 5 select NLS 7 6 select NLS_UCS2_UTILS 8 7 select CRYPTO
+1 -1
fs/smb/client/cifsfs.c
··· 398 398 cifs_inode = alloc_inode_sb(sb, cifs_inode_cachep, GFP_KERNEL); 399 399 if (!cifs_inode) 400 400 return NULL; 401 - cifs_inode->cifsAttrs = 0x20; /* default */ 401 + cifs_inode->cifsAttrs = ATTR_ARCHIVE; /* default */ 402 402 cifs_inode->time = 0; 403 403 /* 404 404 * Until the file is open and we have gotten oplock info back from the
-2
fs/smb/client/cifsproto.h
··· 614 614 void cifs_free_hash(struct shash_desc **sdesc); 615 615 616 616 int cifs_try_adding_channels(struct cifs_ses *ses); 617 - bool is_server_using_iface(struct TCP_Server_Info *server, 618 - struct cifs_server_iface *iface); 619 617 bool is_ses_using_iface(struct cifs_ses *ses, struct cifs_server_iface *iface); 620 618 void cifs_ses_mark_for_reconnect(struct cifs_ses *ses); 621 619
+9 -4
fs/smb/client/cifssmb.c
··· 1319 1319 } 1320 1320 1321 1321 if (rdata->result == -ENODATA) { 1322 - __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 1323 1322 rdata->result = 0; 1323 + __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 1324 1324 } else { 1325 1325 size_t trans = rdata->subreq.transferred + rdata->got_bytes; 1326 1326 if (trans < rdata->subreq.len && 1327 1327 rdata->subreq.start + trans == ictx->remote_i_size) { 1328 - __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 1329 1328 rdata->result = 0; 1329 + __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 1330 + } else if (rdata->got_bytes > 0) { 1331 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &rdata->subreq.flags); 1330 1332 } 1331 1333 } 1332 1334 ··· 1672 1670 if (written > wdata->subreq.len) 1673 1671 written &= 0xFFFF; 1674 1672 1675 - if (written < wdata->subreq.len) 1673 + if (written < wdata->subreq.len) { 1676 1674 result = -ENOSPC; 1677 - else 1675 + } else { 1678 1676 result = written; 1677 + if (written > 0) 1678 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &wdata->subreq.flags); 1679 + } 1679 1680 break; 1680 1681 case MID_REQUEST_SUBMITTED: 1681 1682 case MID_RETRY_NEEDED:
+26 -10
fs/smb/client/connect.c
··· 987 987 msleep(125); 988 988 if (cifs_rdma_enabled(server)) 989 989 smbd_destroy(server); 990 + 990 991 if (server->ssocket) { 991 992 sock_release(server->ssocket); 992 993 server->ssocket = NULL; 994 + 995 + /* Release netns reference for the socket. */ 996 + put_net(cifs_net_ns(server)); 993 997 } 994 998 995 999 if (!list_empty(&server->pending_mid_q)) { ··· 1041 1037 */ 1042 1038 } 1043 1039 1040 + /* Release netns reference for this server. */ 1044 1041 put_net(cifs_net_ns(server)); 1045 1042 kfree(server->leaf_fullpath); 1046 1043 kfree(server); ··· 1718 1713 1719 1714 tcp_ses->ops = ctx->ops; 1720 1715 tcp_ses->vals = ctx->vals; 1716 + 1717 + /* Grab netns reference for this server. */ 1721 1718 cifs_set_net_ns(tcp_ses, get_net(current->nsproxy->net_ns)); 1722 1719 1723 1720 tcp_ses->conn_id = atomic_inc_return(&tcpSesNextId); ··· 1851 1844 out_err_crypto_release: 1852 1845 cifs_crypto_secmech_release(tcp_ses); 1853 1846 1847 + /* Release netns reference for this server. */ 1854 1848 put_net(cifs_net_ns(tcp_ses)); 1855 1849 1856 1850 out_err: ··· 1860 1852 cifs_put_tcp_session(tcp_ses->primary_server, false); 1861 1853 kfree(tcp_ses->hostname); 1862 1854 kfree(tcp_ses->leaf_fullpath); 1863 - if (tcp_ses->ssocket) 1855 + if (tcp_ses->ssocket) { 1864 1856 sock_release(tcp_ses->ssocket); 1857 + put_net(cifs_net_ns(tcp_ses)); 1858 + } 1865 1859 kfree(tcp_ses); 1866 1860 } 1867 1861 return ERR_PTR(rc); ··· 3141 3131 socket = server->ssocket; 3142 3132 } else { 3143 3133 struct net *net = cifs_net_ns(server); 3144 - struct sock *sk; 3145 3134 3146 - rc = __sock_create(net, sfamily, SOCK_STREAM, 3147 - IPPROTO_TCP, &server->ssocket, 1); 3135 + rc = sock_create_kern(net, sfamily, SOCK_STREAM, IPPROTO_TCP, &server->ssocket); 3148 3136 if (rc < 0) { 3149 3137 cifs_server_dbg(VFS, "Error %d creating socket\n", rc); 3150 3138 return rc; 3151 3139 } 3152 3140 3153 - sk = server->ssocket->sk; 3154 - __netns_tracker_free(net, &sk->ns_tracker, false); 3155 - sk->sk_net_refcnt = 1; 3156 - get_net_track(net, &sk->ns_tracker, GFP_KERNEL); 3157 - sock_inuse_add(net, 1); 3141 + /* 3142 + * Grab netns reference for the socket. 3143 + * 3144 + * It'll be released here, on error, or in clean_demultiplex_info() upon server 3145 + * teardown. 3146 + */ 3147 + get_net(net); 3158 3148 3159 3149 /* BB other socket options to set KEEPALIVE, NODELAY? */ 3160 3150 cifs_dbg(FYI, "Socket created\n"); ··· 3168 3158 } 3169 3159 3170 3160 rc = bind_socket(server); 3171 - if (rc < 0) 3161 + if (rc < 0) { 3162 + put_net(cifs_net_ns(server)); 3172 3163 return rc; 3164 + } 3173 3165 3174 3166 /* 3175 3167 * Eventually check for other socket options to change from ··· 3208 3196 if (rc < 0) { 3209 3197 cifs_dbg(FYI, "Error %d connecting to server\n", rc); 3210 3198 trace_smb3_connect_err(server->hostname, server->conn_id, &server->dstaddr, rc); 3199 + put_net(cifs_net_ns(server)); 3211 3200 sock_release(socket); 3212 3201 server->ssocket = NULL; 3213 3202 return rc; ··· 3216 3203 trace_smb3_connect_done(server->hostname, server->conn_id, &server->dstaddr); 3217 3204 if (sport == htons(RFC1001_PORT)) 3218 3205 rc = ip_rfc1001_connect(server); 3206 + 3207 + if (rc < 0) 3208 + put_net(cifs_net_ns(server)); 3219 3209 3220 3210 return rc; 3221 3211 }
+5 -1
fs/smb/client/file.c
··· 990 990 } 991 991 992 992 /* Get the cached handle as SMB2 close is deferred */ 993 - rc = cifs_get_readable_path(tcon, full_path, &cfile); 993 + if (OPEN_FMODE(file->f_flags) & FMODE_WRITE) { 994 + rc = cifs_get_writable_path(tcon, full_path, FIND_WR_FSUID_ONLY, &cfile); 995 + } else { 996 + rc = cifs_get_readable_path(tcon, full_path, &cfile); 997 + } 994 998 if (rc == 0) { 995 999 if (file->f_flags == cfile->f_flags) { 996 1000 file->private_data = cfile;
+18 -1
fs/smb/client/namespace.c
··· 196 196 struct smb3_fs_context tmp; 197 197 char *full_path; 198 198 struct vfsmount *mnt; 199 + struct cifs_sb_info *mntpt_sb; 200 + struct cifs_ses *ses; 199 201 200 202 if (IS_ROOT(mntpt)) 201 203 return ERR_PTR(-ESTALE); 202 204 203 - cur_ctx = CIFS_SB(mntpt->d_sb)->ctx; 205 + mntpt_sb = CIFS_SB(mntpt->d_sb); 206 + ses = cifs_sb_master_tcon(mntpt_sb)->ses; 207 + cur_ctx = mntpt_sb->ctx; 208 + 209 + /* 210 + * At this point, the root session should be in the mntpt sb. We should 211 + * bring the sb context passwords in sync with the root session's 212 + * passwords. This would help prevent unnecessary retries and password 213 + * swaps for automounts. 214 + */ 215 + mutex_lock(&ses->session_mutex); 216 + rc = smb3_sync_session_ctx_passwords(mntpt_sb, ses); 217 + mutex_unlock(&ses->session_mutex); 218 + 219 + if (rc) 220 + return ERR_PTR(rc); 204 221 205 222 fc = fs_context_for_submount(path->mnt->mnt_sb->s_type, mntpt); 206 223 if (IS_ERR(fc))
-25
fs/smb/client/sess.c
··· 27 27 cifs_ses_add_channel(struct cifs_ses *ses, 28 28 struct cifs_server_iface *iface); 29 29 30 - bool 31 - is_server_using_iface(struct TCP_Server_Info *server, 32 - struct cifs_server_iface *iface) 33 - { 34 - struct sockaddr_in *i4 = (struct sockaddr_in *)&iface->sockaddr; 35 - struct sockaddr_in6 *i6 = (struct sockaddr_in6 *)&iface->sockaddr; 36 - struct sockaddr_in *s4 = (struct sockaddr_in *)&server->dstaddr; 37 - struct sockaddr_in6 *s6 = (struct sockaddr_in6 *)&server->dstaddr; 38 - 39 - if (server->dstaddr.ss_family != iface->sockaddr.ss_family) 40 - return false; 41 - if (server->dstaddr.ss_family == AF_INET) { 42 - if (s4->sin_addr.s_addr != i4->sin_addr.s_addr) 43 - return false; 44 - } else if (server->dstaddr.ss_family == AF_INET6) { 45 - if (memcmp(&s6->sin6_addr, &i6->sin6_addr, 46 - sizeof(i6->sin6_addr)) != 0) 47 - return false; 48 - } else { 49 - /* unknown family.. */ 50 - return false; 51 - } 52 - return true; 53 - } 54 - 55 30 bool is_ses_using_iface(struct cifs_ses *ses, struct cifs_server_iface *iface) 56 31 { 57 32 int i;
+10 -4
fs/smb/client/smb2pdu.c
··· 4615 4615 __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 4616 4616 rdata->result = 0; 4617 4617 } 4618 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &rdata->subreq.flags); 4618 4619 } 4619 4620 trace_smb3_rw_credits(rreq_debug_id, subreq_debug_index, rdata->credits.value, 4620 4621 server->credits, server->in_flight, ··· 4841 4840 if (written > wdata->subreq.len) 4842 4841 written &= 0xFFFF; 4843 4842 4844 - if (written < wdata->subreq.len) 4843 + cifs_stats_bytes_written(tcon, written); 4844 + 4845 + if (written < wdata->subreq.len) { 4845 4846 wdata->result = -ENOSPC; 4846 - else 4847 + } else if (written > 0) { 4847 4848 wdata->subreq.len = written; 4849 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &wdata->subreq.flags); 4850 + } 4848 4851 break; 4849 4852 case MID_REQUEST_SUBMITTED: 4850 4853 case MID_RETRY_NEEDED: ··· 5017 5012 } 5018 5013 #endif 5019 5014 5020 - if (test_bit(NETFS_SREQ_RETRYING, &wdata->subreq.flags)) 5015 + if (wdata->subreq.retry_count > 0) 5021 5016 smb2_set_replay(server, &rqst); 5022 5017 5023 5018 cifs_dbg(FYI, "async write at %llu %u bytes iter=%zx\n", ··· 5161 5156 cifs_dbg(VFS, "Send error in write = %d\n", rc); 5162 5157 } else { 5163 5158 *nbytes = le32_to_cpu(rsp->DataLength); 5159 + cifs_stats_bytes_written(io_parms->tcon, *nbytes); 5164 5160 trace_smb3_write_done(0, 0, xid, 5165 5161 req->PersistentFileId, 5166 5162 io_parms->tcon->tid, ··· 6210 6204 req->StructureSize = cpu_to_le16(36); 6211 6205 total_len += 12; 6212 6206 6213 - memcpy(req->LeaseKey, lease_key, 16); 6207 + memcpy(req->LeaseKey, lease_key, SMB2_LEASE_KEY_SIZE); 6214 6208 req->LeaseState = lease_state; 6215 6209 6216 6210 flags |= CIFS_NO_RSP_BUF;
+14 -4
fs/smb/server/connection.c
··· 70 70 atomic_set(&conn->req_running, 0); 71 71 atomic_set(&conn->r_count, 0); 72 72 atomic_set(&conn->refcnt, 1); 73 - atomic_set(&conn->mux_smb_requests, 0); 74 73 conn->total_credits = 1; 75 74 conn->outstanding_credits = 0; ··· 119 120 if (conn->ops->get_cmd_val(work) != SMB2_CANCEL_HE) 120 121 requests_queue = &conn->requests; 121 122 123 + atomic_inc(&conn->req_running); 122 124 if (requests_queue) { 123 - atomic_inc(&conn->req_running); 124 125 spin_lock(&conn->request_lock); 125 126 list_add_tail(&work->request_entry, requests_queue); 126 127 spin_unlock(&conn->request_lock); ··· 131 132 { 132 133 struct ksmbd_conn *conn = work->conn; 134 135 + atomic_dec(&conn->req_running); 136 + if (waitqueue_active(&conn->req_running_q)) 137 + wake_up(&conn->req_running_q); 138 + 134 139 if (list_empty(&work->request_entry) && 135 140 list_empty(&work->async_request_entry)) 136 141 return; 137 142 138 - atomic_dec(&conn->req_running); 139 143 spin_lock(&conn->request_lock); 140 144 list_del_init(&work->request_entry); 141 145 spin_unlock(&conn->request_lock); ··· 310 308 { 311 309 struct ksmbd_conn *conn = (struct ksmbd_conn *)p; 312 310 struct ksmbd_transport *t = conn->transport; 313 - unsigned int pdu_size, max_allowed_pdu_size; 311 + unsigned int pdu_size, max_allowed_pdu_size, max_req; 314 312 char hdr_buf[4] = {0,}; 315 313 int size; 316 314 ··· 320 318 if (t->ops->prepare && t->ops->prepare(t)) 321 319 goto out; 322 320 321 + max_req = server_conf.max_inflight_req; 323 322 conn->last_active = jiffies; 324 323 set_freezable(); 325 324 while (ksmbd_conn_alive(conn)) { ··· 329 326 330 327 kvfree(conn->request_buf); 331 328 conn->request_buf = NULL; 329 + 330 + recheck: 331 + if (atomic_read(&conn->req_running) + 1 > max_req) { 332 + wait_event_interruptible(conn->req_running_q, 333 + atomic_read(&conn->req_running) < max_req); 334 + goto recheck; 335 + } 332 336 333 337 size = t->ops->read(t, hdr_buf, sizeof(hdr_buf), -1); 334 338 if (size != sizeof(hdr_buf))
-1
fs/smb/server/connection.h
··· 107 107 __le16 signing_algorithm; 108 108 bool binding; 109 109 atomic_t refcnt; 110 - atomic_t mux_smb_requests; 111 110 }; 112 111 113 112 struct ksmbd_conn_ops {
+1 -6
fs/smb/server/server.c
··· 270 270 271 271 ksmbd_conn_try_dequeue_request(work); 272 272 ksmbd_free_work_struct(work); 273 - atomic_dec(&conn->mux_smb_requests); 274 273 /* 275 274 * Checking waitqueue to dropping pending requests on 276 275 * disconnection. waitqueue_active is safe because it ··· 298 299 err = ksmbd_init_smb_server(conn); 299 300 if (err) 300 301 return 0; 301 - 302 - if (atomic_inc_return(&conn->mux_smb_requests) >= conn->vals->max_credits) { 303 - atomic_dec_return(&conn->mux_smb_requests); 304 - return -ENOSPC; 305 - } 306 302 307 303 work = ksmbd_alloc_work_struct(); 308 304 if (!work) { ··· 361 367 server_conf.auth_mechs |= KSMBD_AUTH_KRB5 | 362 368 KSMBD_AUTH_MSKRB5; 363 369 #endif 370 + server_conf.max_inflight_req = SMB2_MAX_CREDITS; 364 371 return 0; 365 372 } 366 373
+1
fs/smb/server/server.h
··· 42 42 struct smb_sid domain_sid; 43 43 unsigned int auth_mechs; 44 44 unsigned int max_connections; 45 + unsigned int max_inflight_req; 45 46 46 47 char *conf[SERVER_CONF_WORK_GROUP + 1]; 47 48 struct task_struct *dh_task;
+45
fs/smb/server/smb2pdu.c
··· 695 695 struct smb2_hdr *rsp_hdr; 696 696 struct ksmbd_work *in_work = ksmbd_alloc_work_struct(); 697 697 698 + if (!in_work) 699 + return; 700 + 698 701 if (allocate_interim_rsp_buf(in_work)) { 699 702 pr_err("smb_allocate_rsp_buf failed!\n"); 700 703 ksmbd_free_work_struct(in_work); ··· 1100 1097 return rc; 1101 1098 } 1102 1099 1100 + ksmbd_conn_lock(conn); 1103 1101 smb2_buf_len = get_rfc1002_len(work->request_buf); 1104 1102 smb2_neg_size = offsetof(struct smb2_negotiate_req, Dialects); 1105 1103 if (smb2_neg_size > smb2_buf_len) { ··· 1251 1247 ksmbd_conn_set_need_negotiate(conn); 1252 1248 1253 1249 err_out: 1250 + ksmbd_conn_unlock(conn); 1254 1251 if (rc) 1255 1252 rsp->hdr.Status = STATUS_INSUFFICIENT_RESOURCES; 1256 1253 ··· 3994 3989 posix_info->DeviceId = cpu_to_le32(ksmbd_kstat->kstat->rdev); 3995 3990 posix_info->HardLinks = cpu_to_le32(ksmbd_kstat->kstat->nlink); 3996 3991 posix_info->Mode = cpu_to_le32(ksmbd_kstat->kstat->mode & 0777); 3992 + switch (ksmbd_kstat->kstat->mode & S_IFMT) { 3993 + case S_IFDIR: 3994 + posix_info->Mode |= cpu_to_le32(POSIX_TYPE_DIR << POSIX_FILETYPE_SHIFT); 3995 + break; 3996 + case S_IFLNK: 3997 + posix_info->Mode |= cpu_to_le32(POSIX_TYPE_SYMLINK << POSIX_FILETYPE_SHIFT); 3998 + break; 3999 + case S_IFCHR: 4000 + posix_info->Mode |= cpu_to_le32(POSIX_TYPE_CHARDEV << POSIX_FILETYPE_SHIFT); 4001 + break; 4002 + case S_IFBLK: 4003 + posix_info->Mode |= cpu_to_le32(POSIX_TYPE_BLKDEV << POSIX_FILETYPE_SHIFT); 4004 + break; 4005 + case S_IFIFO: 4006 + posix_info->Mode |= cpu_to_le32(POSIX_TYPE_FIFO << POSIX_FILETYPE_SHIFT); 4007 + break; 4008 + case S_IFSOCK: 4009 + posix_info->Mode |= cpu_to_le32(POSIX_TYPE_SOCKET << POSIX_FILETYPE_SHIFT); 4010 + } 4011 + 3997 4012 posix_info->Inode = cpu_to_le64(ksmbd_kstat->kstat->ino); 3998 4013 posix_info->DosAttributes = 3999 4014 S_ISDIR(ksmbd_kstat->kstat->mode) ? 
··· 5204 5179 file_info->AllocationSize = cpu_to_le64(stat.blocks << 9); 5205 5180 file_info->HardLinks = cpu_to_le32(stat.nlink); 5206 5181 file_info->Mode = cpu_to_le32(stat.mode & 0777); 5182 + switch (stat.mode & S_IFMT) { 5183 + case S_IFDIR: 5184 + file_info->Mode |= cpu_to_le32(POSIX_TYPE_DIR << POSIX_FILETYPE_SHIFT); 5185 + break; 5186 + case S_IFLNK: 5187 + file_info->Mode |= cpu_to_le32(POSIX_TYPE_SYMLINK << POSIX_FILETYPE_SHIFT); 5188 + break; 5189 + case S_IFCHR: 5190 + file_info->Mode |= cpu_to_le32(POSIX_TYPE_CHARDEV << POSIX_FILETYPE_SHIFT); 5191 + break; 5192 + case S_IFBLK: 5193 + file_info->Mode |= cpu_to_le32(POSIX_TYPE_BLKDEV << POSIX_FILETYPE_SHIFT); 5194 + break; 5195 + case S_IFIFO: 5196 + file_info->Mode |= cpu_to_le32(POSIX_TYPE_FIFO << POSIX_FILETYPE_SHIFT); 5197 + break; 5198 + case S_IFSOCK: 5199 + file_info->Mode |= cpu_to_le32(POSIX_TYPE_SOCKET << POSIX_FILETYPE_SHIFT); 5200 + } 5201 + 5207 5202 file_info->DeviceId = cpu_to_le32(stat.rdev); 5208 5203 5209 5204 /*
+10
fs/smb/server/smb2pdu.h
··· 502 502 return buf + 4; 503 503 } 504 504 505 + #define POSIX_TYPE_FILE 0 506 + #define POSIX_TYPE_DIR 1 507 + #define POSIX_TYPE_SYMLINK 2 508 + #define POSIX_TYPE_CHARDEV 3 509 + #define POSIX_TYPE_BLKDEV 4 510 + #define POSIX_TYPE_FIFO 5 511 + #define POSIX_TYPE_SOCKET 6 512 + 513 + #define POSIX_FILETYPE_SHIFT 12 514 + 505 515 #endif /* _SMB2PDU_H */
+4 -1
fs/smb/server/transport_ipc.c
··· 319 319 init_smb2_max_write_size(req->smb2_max_write); 320 320 if (req->smb2_max_trans) 321 321 init_smb2_max_trans_size(req->smb2_max_trans); 322 - if (req->smb2_max_credits) 322 + if (req->smb2_max_credits) { 323 323 init_smb2_max_credits(req->smb2_max_credits); 324 + server_conf.max_inflight_req = 325 + req->smb2_max_credits; 326 + } 324 327 if (req->smbd_max_io_size) 325 328 init_smbd_max_io_size(req->smbd_max_io_size); 326 329
+1 -2
fs/smb/server/transport_rdma.c
··· 2283 2283 2284 2284 ibdev = ib_device_get_by_netdev(netdev, RDMA_DRIVER_UNKNOWN); 2285 2285 if (ibdev) { 2286 - if (rdma_frwr_is_supported(&ibdev->attrs)) 2287 - rdma_capable = true; 2286 + rdma_capable = rdma_frwr_is_supported(&ibdev->attrs); 2288 2287 ib_device_put(ibdev); 2289 2288 } 2290 2289 }
+2 -1
fs/smb/server/vfs.c
··· 1264 1264 filepath, 1265 1265 flags, 1266 1266 path); 1267 + if (!is_last) 1268 + next[0] = '/'; 1267 1269 if (err) 1268 1270 goto out2; 1269 1271 else if (is_last) ··· 1273 1271 path_put(parent_path); 1274 1272 *parent_path = *path; 1275 1273 1276 - next[0] = '/'; 1277 1274 remain_len -= filename_len + 1; 1278 1275 } 1279 1276
+1 -1
fs/xfs/libxfs/xfs_rtgroup.h
··· 272 272 } 273 273 274 274 # define xfs_rtgroup_extents(mp, rgno) (0) 275 - # define xfs_update_last_rtgroup_size(mp, rgno) (-EOPNOTSUPP) 275 + # define xfs_update_last_rtgroup_size(mp, rgno) (0) 276 276 # define xfs_rtgroup_lock(rtg, gf) ((void)0) 277 277 # define xfs_rtgroup_unlock(rtg, gf) ((void)0) 278 278 # define xfs_rtgroup_trans_join(tp, rtg, gf) ((void)0)
+2 -1
fs/xfs/xfs_dquot.c
··· 87 87 } 88 88 spin_unlock(&qlip->qli_lock); 89 89 if (bp) { 90 + xfs_buf_lock(bp); 90 91 list_del_init(&qlip->qli_item.li_bio_list); 91 - xfs_buf_rele(bp); 92 + xfs_buf_relse(bp); 92 93 } 93 94 } 94 95
+2
include/clocksource/hyperv_timer.h
··· 38 38 extern unsigned long hv_get_tsc_pfn(void); 39 39 extern struct ms_hyperv_tsc_page *hv_get_tsc_page(void); 40 40 41 + extern void hv_adj_sched_clock_offset(u64 offset); 42 + 41 43 static __always_inline bool 42 44 hv_read_tsc_page_tsc(const struct ms_hyperv_tsc_page *tsc_pg, 43 45 u64 *cur_tsc, u64 *time)
+2 -4
include/kvm/arm_pmu.h
··· 53 53 void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu); 54 54 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu); 55 55 void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu); 56 - void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val); 57 - void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val); 56 + void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val); 58 57 void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu); 59 58 void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu); 60 59 bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu); ··· 126 127 static inline void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu) {} 127 128 static inline void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {} 128 129 static inline void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu) {} 129 - static inline void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val) {} 130 - static inline void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val) {} 130 + static inline void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val) {} 131 131 static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {} 132 132 static inline void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {} 133 133 static inline bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu)
+7 -2
include/linux/alloc_tag.h
··· 63 63 #else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ 64 64 65 65 static inline bool is_codetag_empty(union codetag_ref *ref) { return false; } 66 - static inline void set_codetag_empty(union codetag_ref *ref) {} 66 + 67 + static inline void set_codetag_empty(union codetag_ref *ref) 68 + { 69 + if (ref) 70 + ref->ct = NULL; 71 + } 67 72 68 73 #endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ 69 74 ··· 140 135 #ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG 141 136 static inline void alloc_tag_add_check(union codetag_ref *ref, struct alloc_tag *tag) 142 137 { 143 - WARN_ONCE(ref && ref->ct, 138 + WARN_ONCE(ref && ref->ct && !is_codetag_empty(ref), 144 139 "alloc_tag was not cleared (got tag for %s:%u)\n", 145 140 ref->ct->filename, ref->ct->lineno); 146 141
+8 -5
include/linux/arm_ffa.h
··· 166 166 return dev_get_drvdata(&fdev->dev); 167 167 } 168 168 169 + struct ffa_partition_info; 170 + 169 171 #if IS_REACHABLE(CONFIG_ARM_FFA_TRANSPORT) 170 - struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id, 171 - const struct ffa_ops *ops); 172 + struct ffa_device * 173 + ffa_device_register(const struct ffa_partition_info *part_info, 174 + const struct ffa_ops *ops); 172 175 void ffa_device_unregister(struct ffa_device *ffa_dev); 173 176 int ffa_driver_register(struct ffa_driver *driver, struct module *owner, 174 177 const char *mod_name); ··· 179 176 bool ffa_device_is_valid(struct ffa_device *ffa_dev); 180 177 181 178 #else 182 - static inline 183 - struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id, 184 - const struct ffa_ops *ops) 179 + static inline struct ffa_device * 180 + ffa_device_register(const struct ffa_partition_info *part_info, 181 + const struct ffa_ops *ops) 185 182 { 186 183 return NULL; 187 184 }
+1 -1
include/linux/bus/stm32_firewall_device.h
··· 115 115 #else /* CONFIG_STM32_FIREWALL */ 116 116 117 117 int stm32_firewall_get_firewall(struct device_node *np, struct stm32_firewall *firewall, 118 - unsigned int nb_firewall); 118 + unsigned int nb_firewall) 119 119 { 120 120 return -ENODEV; 121 121 }
+6
include/linux/cacheinfo.h
··· 155 155 156 156 #ifndef CONFIG_ARCH_HAS_CPU_CACHE_ALIASING 157 157 #define cpu_dcache_is_aliasing() false 158 + #define cpu_icache_is_aliasing() cpu_dcache_is_aliasing() 158 159 #else 159 160 #include <asm/cachetype.h> 161 + 162 + #ifndef cpu_icache_is_aliasing 163 + #define cpu_icache_is_aliasing() cpu_dcache_is_aliasing() 164 + #endif 165 + 160 166 #endif 161 167 162 168 #endif /* _LINUX_CACHEINFO_H */
+27 -12
include/linux/compiler.h
··· 216 216 217 217 #endif /* __KERNEL__ */ 218 218 219 - /* 220 - * Force the compiler to emit 'sym' as a symbol, so that we can reference 221 - * it from inline assembler. Necessary in case 'sym' could be inlined 222 - * otherwise, or eliminated entirely due to lack of references that are 223 - * visible to the compiler. 224 - */ 225 - #define ___ADDRESSABLE(sym, __attrs) \ 226 - static void * __used __attrs \ 227 - __UNIQUE_ID(__PASTE(__addressable_,sym)) = (void *)(uintptr_t)&sym; 228 - #define __ADDRESSABLE(sym) \ 229 - ___ADDRESSABLE(sym, __section(".discard.addressable")) 230 - 231 219 /** 232 220 * offset_to_ptr - convert a relative memory offset to an absolute pointer 233 221 * @off: the address of the 32-bit offset value ··· 226 238 } 227 239 228 240 #endif /* __ASSEMBLY__ */ 241 + 242 + #ifdef CONFIG_64BIT 243 + #define ARCH_SEL(a,b) a 244 + #else 245 + #define ARCH_SEL(a,b) b 246 + #endif 247 + 248 + /* 249 + * Force the compiler to emit 'sym' as a symbol, so that we can reference 250 + * it from inline assembler. Necessary in case 'sym' could be inlined 251 + * otherwise, or eliminated entirely due to lack of references that are 252 + * visible to the compiler. 253 + */ 254 + #define ___ADDRESSABLE(sym, __attrs) \ 255 + static void * __used __attrs \ 256 + __UNIQUE_ID(__PASTE(__addressable_,sym)) = (void *)(uintptr_t)&sym; 257 + 258 + #define __ADDRESSABLE(sym) \ 259 + ___ADDRESSABLE(sym, __section(".discard.addressable")) 260 + 261 + #define __ADDRESSABLE_ASM(sym) \ 262 + .pushsection .discard.addressable,"aw"; \ 263 + .align ARCH_SEL(8,4); \ 264 + ARCH_SEL(.quad, .long) __stringify(sym); \ 265 + .popsection; 266 + 267 + #define __ADDRESSABLE_ASM_STR(sym) __stringify(__ADDRESSABLE_ASM(sym)) 229 268 230 269 #ifdef __CHECKER__ 231 270 #define __BUILD_BUG_ON_ZERO_MSG(e, msg) (0)
+10 -3
include/linux/dmaengine.h
··· 84 84 DMA_TRANS_NONE, 85 85 }; 86 86 87 - /** 87 + /* 88 88 * Interleaved Transfer Request 89 89 * ---------------------------- 90 90 * A chunk is collection of contiguous bytes to be transferred. ··· 223 223 }; 224 224 225 225 /** 226 - * enum pq_check_flags - result of async_{xor,pq}_zero_sum operations 226 + * enum sum_check_flags - result of async_{xor,pq}_zero_sum operations 227 227 * @SUM_CHECK_P_RESULT - 1 if xor zero sum error, 0 otherwise 228 228 * @SUM_CHECK_Q_RESULT - 1 if reed-solomon zero sum error, 0 otherwise 229 229 */ ··· 286 286 * pointer to the engine's metadata area 287 287 * 4. Read out the metadata from the pointer 288 288 * 289 - * Note: the two mode is not compatible and clients must use one mode for a 289 + * Warning: the two modes are not compatible and clients must use one mode for a 290 290 * descriptor. 291 291 */ 292 292 enum dma_desc_metadata_mode { ··· 594 594 * @phys: physical address of the descriptor 595 595 * @chan: target channel for this operation 596 596 * @tx_submit: accept the descriptor, assign ordered cookie and mark the 597 + * @desc_free: driver's callback function to free a resusable descriptor 598 + * after completion 597 599 * descriptor pending. To be pushed on .issue_pending() call 598 600 * @callback: routine to call after this operation is complete 601 + * @callback_result: error result from a DMA transaction 599 602 * @callback_param: general parameter to pass to the callback routine 603 + * @unmap: hook for generic DMA unmap data 600 604 * @desc_metadata_mode: core managed metadata mode to protect mixed use of 601 605 * DESC_METADATA_CLIENT or DESC_METADATA_ENGINE. Otherwise 602 606 * DESC_METADATA_NONE ··· 831 827 * @device_prep_dma_memset: prepares a memset operation 832 828 * @device_prep_dma_memset_sg: prepares a memset operation over a scatter list 833 829 * @device_prep_dma_interrupt: prepares an end of chain interrupt operation 830 + * @device_prep_peripheral_dma_vec: prepares a scatter-gather DMA transfer, 831 + * where the address and size of each segment is located in one entry of 832 + * the dma_vec array. 834 833 * @device_prep_slave_sg: prepares a slave dma operation 835 834 * @device_prep_dma_cyclic: prepare a cyclic dma operation suitable for audio. 836 835 * The function takes a buffer of size buf_len. The callback function will
+13 -1
include/linux/fortify-string.h
··· 616 616 return false; 617 617 } 618 618 619 + /* 620 + * To work around what seems to be an optimizer bug, the macro arguments 621 + * need to have const copies or the values end up changed by the time they 622 + * reach fortify_warn_once(). See commit 6f7630b1b5bc ("fortify: Capture 623 + * __bos() results in const temp vars") for more details. 624 + */ 619 625 #define __fortify_memcpy_chk(p, q, size, p_size, q_size, \ 620 626 p_size_field, q_size_field, op) ({ \ 621 627 const size_t __fortify_size = (size_t)(size); \ ··· 629 623 const size_t __q_size = (q_size); \ 630 624 const size_t __p_size_field = (p_size_field); \ 631 625 const size_t __q_size_field = (q_size_field); \ 626 + /* Keep a mutable version of the size for the final copy. */ \ 627 + size_t __copy_size = __fortify_size; \ 632 628 fortify_warn_once(fortify_memcpy_chk(__fortify_size, __p_size, \ 633 629 __q_size, __p_size_field, \ 634 630 __q_size_field, FORTIFY_FUNC_ ##op), \ ··· 638 630 __fortify_size, \ 639 631 "field \"" #p "\" at " FILE_LINE, \ 640 632 __p_size_field); \ 641 - __underlying_##op(p, q, __fortify_size); \ 633 + /* Hide only the run-time size from value range tracking to */ \ 634 + /* silence compile-time false positive bounds warnings. */ \ 635 + if (!__builtin_constant_p(__copy_size)) \ 636 + OPTIMIZER_HIDE_VAR(__copy_size); \ 637 + __underlying_##op(p, q, __copy_size); \ 642 638 }) 643 639 644 640 /*
+7 -1
include/linux/highmem.h
··· 224 224 struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma, 225 225 unsigned long vaddr) 226 226 { 227 - return vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr); 227 + struct folio *folio; 228 + 229 + folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vaddr); 230 + if (folio && user_alloc_needs_zeroing()) 231 + clear_user_highpage(&folio->page, vaddr); 232 + 233 + return folio; 228 234 } 229 235 #endif 230 236
+1
include/linux/hyperv.h
··· 1559 1559 void *channel; 1560 1560 void (*util_cb)(void *); 1561 1561 int (*util_init)(struct hv_util_service *); 1562 + int (*util_init_transport)(void); 1562 1563 void (*util_deinit)(void); 1563 1564 int (*util_pre_suspend)(void); 1564 1565 int (*util_pre_resume)(void);
+13 -3
include/linux/if_vlan.h
··· 585 585 * vlan_get_protocol - get protocol EtherType. 586 586 * @skb: skbuff to query 587 587 * @type: first vlan protocol 588 + * @mac_offset: MAC offset 588 589 * @depth: buffer to store length of eth and vlan tags in bytes 589 590 * 590 591 * Returns the EtherType of the packet, regardless of whether it is 591 592 * vlan encapsulated (normal or hardware accelerated) or not. 592 593 */ 593 - static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type, 594 - int *depth) 594 + static inline __be16 __vlan_get_protocol_offset(const struct sk_buff *skb, 595 + __be16 type, 596 + int mac_offset, 597 + int *depth) 595 598 { 596 599 unsigned int vlan_depth = skb->mac_len, parse_depth = VLAN_MAX_DEPTH; 597 600 ··· 613 610 do { 614 611 struct vlan_hdr vhdr, *vh; 615 612 616 - vh = skb_header_pointer(skb, vlan_depth, sizeof(vhdr), &vhdr); 613 + vh = skb_header_pointer(skb, mac_offset + vlan_depth, 614 + sizeof(vhdr), &vhdr); 617 615 if (unlikely(!vh || !--parse_depth)) 618 616 return 0; 619 617 ··· 627 623 *depth = vlan_depth; 628 624 629 625 return type; 626 + } 627 + 628 + static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type, 629 + int *depth) 630 + { 631 + return __vlan_get_protocol_offset(skb, type, 0, depth); 630 632 } 631 633 632 634 /**
+1 -3
include/linux/io_uring.h
··· 15 15 16 16 static inline void io_uring_files_cancel(void) 17 17 { 18 - if (current->io_uring) { 19 - io_uring_unreg_ringfd(); 18 + if (current->io_uring) 20 19 __io_uring_cancel(false); 21 - } 22 20 } 23 21 static inline void io_uring_task_cancel(void) 24 22 {
+10
include/linux/io_uring/cmd.h
··· 18 18 u8 pdu[32]; /* available inline for free use */ 19 19 }; 20 20 21 + struct io_uring_cmd_data { 22 + struct io_uring_sqe sqes[2]; 23 + void *op_data; 24 + }; 25 + 21 26 static inline const void *io_uring_sqe_cmd(const struct io_uring_sqe *sqe) 22 27 { 23 28 return sqe->cmd; ··· 116 111 static inline struct task_struct *io_uring_cmd_get_task(struct io_uring_cmd *cmd) 117 112 { 118 113 return cmd_to_io_kiocb(cmd)->tctx->task; 114 + } 115 + 116 + static inline struct io_uring_cmd_data *io_uring_cmd_get_async_data(struct io_uring_cmd *cmd) 117 + { 118 + return cmd_to_io_kiocb(cmd)->async_data; 119 119 } 120 120 121 121 #endif /* _LINUX_IO_URING_CMD_H */
+1 -1
include/linux/io_uring_types.h
··· 345 345 346 346 /* timeouts */ 347 347 struct { 348 - spinlock_t timeout_lock; 348 + raw_spinlock_t timeout_lock; 349 349 struct list_head timeout_list; 350 350 struct list_head ltimeout_list; 351 351 unsigned cq_last_tm_flush;
+1 -1
include/linux/iomap.h
··· 335 335 u16 io_type; 336 336 u16 io_flags; /* IOMAP_F_* */ 337 337 struct inode *io_inode; /* file being written to */ 338 - size_t io_size; /* size of the extent */ 338 + size_t io_size; /* size of data within eof */ 339 339 loff_t io_offset; /* offset in the file */ 340 340 sector_t io_sector; /* start sector of ioend */ 341 341 struct bio io_bio; /* MUST BE LAST! */
+14
include/linux/memfd.h
··· 7 7 #ifdef CONFIG_MEMFD_CREATE 8 8 extern long memfd_fcntl(struct file *file, unsigned int cmd, unsigned int arg); 9 9 struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx); 10 + unsigned int *memfd_file_seals_ptr(struct file *file); 10 11 #else 11 12 static inline long memfd_fcntl(struct file *f, unsigned int c, unsigned int a) 12 13 { ··· 17 16 { 18 17 return ERR_PTR(-EINVAL); 19 18 } 19 + 20 + static inline unsigned int *memfd_file_seals_ptr(struct file *file) 21 + { 22 + return NULL; 23 + } 20 24 #endif 25 + 26 + /* Retrieve memfd seals associated with the file, if any. */ 27 + static inline unsigned int memfd_file_seals(struct file *file) 28 + { 29 + unsigned int *sealsp = memfd_file_seals_ptr(file); 30 + 31 + return sealsp ? *sealsp : 0; 32 + } 21 33 22 34 #endif /* __LINUX_MEMFD_H */
+7
include/linux/mlx5/driver.h
··· 524 524 * creation/deletion on drivers rescan. Unset during device attach. 525 525 */ 526 526 MLX5_PRIV_FLAGS_DETACH = 1 << 2, 527 + MLX5_PRIV_FLAGS_SWITCH_LEGACY = 1 << 3, 527 528 }; 528 529 529 530 struct mlx5_adev { ··· 1201 1200 static inline bool mlx5_core_is_vf(const struct mlx5_core_dev *dev) 1202 1201 { 1203 1202 return dev->coredev_type == MLX5_COREDEV_VF; 1203 + } 1204 + 1205 + static inline bool mlx5_core_same_coredev_type(const struct mlx5_core_dev *dev1, 1206 + const struct mlx5_core_dev *dev2) 1207 + { 1208 + return dev1->coredev_type == dev2->coredev_type; 1204 1209 } 1205 1210 1206 1211 static inline bool mlx5_core_is_ecpf(const struct mlx5_core_dev *dev)
+3 -1
include/linux/mlx5/mlx5_ifc.h
··· 2119 2119 u8 migration_in_chunks[0x1]; 2120 2120 u8 reserved_at_d1[0x1]; 2121 2121 u8 sf_eq_usage[0x1]; 2122 - u8 reserved_at_d3[0xd]; 2122 + u8 reserved_at_d3[0x5]; 2123 + u8 multiplane[0x1]; 2124 + u8 reserved_at_d9[0x7]; 2123 2125 2124 2126 u8 cross_vhca_object_to_object_supported[0x20]; 2125 2127
+69 -19
include/linux/mm.h
··· 31 31 #include <linux/kasan.h> 32 32 #include <linux/memremap.h> 33 33 #include <linux/slab.h> 34 + #include <linux/cacheinfo.h> 34 35 35 36 struct mempolicy; 36 37 struct anon_vma; ··· 3011 3010 lruvec_stat_sub_folio(folio, NR_PAGETABLE); 3012 3011 } 3013 3012 3014 - pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp); 3013 + pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp); 3014 + static inline pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, 3015 + pmd_t *pmdvalp) 3016 + { 3017 + pte_t *pte; 3018 + 3019 + __cond_lock(RCU, pte = ___pte_offset_map(pmd, addr, pmdvalp)); 3020 + return pte; 3021 + } 3015 3022 static inline pte_t *pte_offset_map(pmd_t *pmd, unsigned long addr) 3016 3023 { 3017 3024 return __pte_offset_map(pmd, addr, NULL); ··· 3032 3023 { 3033 3024 pte_t *pte; 3034 3025 3035 - __cond_lock(*ptlp, pte = __pte_offset_map_lock(mm, pmd, addr, ptlp)); 3026 + __cond_lock(RCU, __cond_lock(*ptlp, 3027 + pte = __pte_offset_map_lock(mm, pmd, addr, ptlp))); 3036 3028 return pte; 3037 3029 } 3038 3030 ··· 3125 3115 if (!pmd_ptlock_init(ptdesc)) 3126 3116 return false; 3127 3117 __folio_set_pgtable(folio); 3118 + ptdesc_pmd_pts_init(ptdesc); 3128 3119 lruvec_stat_add_folio(folio, NR_PAGETABLE); 3129 3120 return true; 3130 3121 } ··· 4102 4091 static inline void mem_dump_obj(void *object) {} 4103 4092 #endif 4104 4093 4094 + static inline bool is_write_sealed(int seals) 4095 + { 4096 + return seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE); 4097 + } 4098 + 4099 + /** 4100 + * is_readonly_sealed - Checks whether write-sealed but mapped read-only, 4101 + * in which case writes should be disallowing moving 4102 + * forwards. 4103 + * @seals: the seals to check 4104 + * @vm_flags: the VMA flags to check 4105 + * 4106 + * Returns whether readonly sealed, in which case writess should be disallowed 4107 + * going forward. 
4108 + */ 4109 + static inline bool is_readonly_sealed(int seals, vm_flags_t vm_flags) 4110 + { 4111 + /* 4112 + * Since an F_SEAL_[FUTURE_]WRITE sealed memfd can be mapped as 4113 + * MAP_SHARED and read-only, take care to not allow mprotect to 4114 + * revert protections on such mappings. Do this only for shared 4115 + * mappings. For private mappings, don't need to mask 4116 + * VM_MAYWRITE as we still want them to be COW-writable. 4117 + */ 4118 + if (is_write_sealed(seals) && 4119 + ((vm_flags & (VM_SHARED | VM_WRITE)) == VM_SHARED)) 4120 + return true; 4121 + 4122 + return false; 4123 + } 4124 + 4105 4125 /** 4106 4126 * seal_check_write - Check for F_SEAL_WRITE or F_SEAL_FUTURE_WRITE flags and 4107 4127 * handle them. ··· 4144 4102 */ 4145 4103 static inline int seal_check_write(int seals, struct vm_area_struct *vma) 4146 4104 { 4147 - if (seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE)) { 4148 - /* 4149 - * New PROT_WRITE and MAP_SHARED mmaps are not allowed when 4150 - * write seals are active. 4151 - */ 4152 - if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE)) 4153 - return -EPERM; 4105 + if (!is_write_sealed(seals)) 4106 + return 0; 4154 4107 4155 - /* 4156 - * Since an F_SEAL_[FUTURE_]WRITE sealed memfd can be mapped as 4157 - * MAP_SHARED and read-only, take care to not allow mprotect to 4158 - * revert protections on such mappings. Do this only for shared 4159 - * mappings. For private mappings, don't need to mask 4160 - * VM_MAYWRITE as we still want them to be COW-writable. 4161 - */ 4162 - if (vma->vm_flags & VM_SHARED) 4163 - vm_flags_clear(vma, VM_MAYWRITE); 4164 - } 4108 + /* 4109 + * New PROT_WRITE and MAP_SHARED mmaps are not allowed when 4110 + * write seals are active. 
4111 + */ 4112 + if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE)) 4113 + return -EPERM; 4165 4114 4166 4115 return 0; 4167 4116 } ··· 4207 4174 return 0; 4208 4175 } 4209 4176 #endif 4177 + 4178 + /* 4179 + * user_alloc_needs_zeroing checks if a user folio from page allocator needs to 4180 + * be zeroed or not. 4181 + */ 4182 + static inline bool user_alloc_needs_zeroing(void) 4183 + { 4184 + /* 4185 + * for user folios, arch with cache aliasing requires cache flush and 4186 + * arc changes folio->flags to make icache coherent with dcache, so 4187 + * always return false to make caller use 4188 + * clear_user_page()/clear_user_highpage(). 4189 + */ 4190 + return cpu_dcache_is_aliasing() || cpu_icache_is_aliasing() || 4191 + !static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, 4192 + &init_on_alloc); 4193 + } 4210 4194 4211 4195 int arch_get_shadow_stack_status(struct task_struct *t, unsigned long __user *status); 4212 4196 int arch_set_shadow_stack_status(struct task_struct *t, unsigned long status);
+30
include/linux/mm_types.h
··· 445 445 * @pt_index: Used for s390 gmap. 446 446 * @pt_mm: Used for x86 pgds. 447 447 * @pt_frag_refcount: For fragmented page table tracking. Powerpc only. 448 + * @pt_share_count: Used for HugeTLB PMD page table share count. 448 449 * @_pt_pad_2: Padding to ensure proper alignment. 449 450 * @ptl: Lock for the page table. 450 451 * @__page_type: Same as page->page_type. Unused for page tables. ··· 472 471 pgoff_t pt_index; 473 472 struct mm_struct *pt_mm; 474 473 atomic_t pt_frag_refcount; 474 + #ifdef CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING 475 + atomic_t pt_share_count; 476 + #endif 475 477 }; 476 478 477 479 union { ··· 519 515 #define page_ptdesc(p) (_Generic((p), \ 520 516 const struct page *: (const struct ptdesc *)(p), \ 521 517 struct page *: (struct ptdesc *)(p))) 518 + 519 + #ifdef CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING 520 + static inline void ptdesc_pmd_pts_init(struct ptdesc *ptdesc) 521 + { 522 + atomic_set(&ptdesc->pt_share_count, 0); 523 + } 524 + 525 + static inline void ptdesc_pmd_pts_inc(struct ptdesc *ptdesc) 526 + { 527 + atomic_inc(&ptdesc->pt_share_count); 528 + } 529 + 530 + static inline void ptdesc_pmd_pts_dec(struct ptdesc *ptdesc) 531 + { 532 + atomic_dec(&ptdesc->pt_share_count); 533 + } 534 + 535 + static inline int ptdesc_pmd_pts_count(struct ptdesc *ptdesc) 536 + { 537 + return atomic_read(&ptdesc->pt_share_count); 538 + } 539 + #else 540 + static inline void ptdesc_pmd_pts_init(struct ptdesc *ptdesc) 541 + { 542 + } 543 + #endif 522 544 523 545 /* 524 546 * Used for sizing the vmemmap region on some architectures
+1 -2
include/linux/mount.h
··· 50 50 #define MNT_ATIME_MASK (MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME ) 51 51 52 52 #define MNT_INTERNAL_FLAGS (MNT_SHARED | MNT_WRITE_HOLD | MNT_INTERNAL | \ 53 - MNT_DOOMED | MNT_SYNC_UMOUNT | MNT_MARKED | MNT_ONRB) 53 + MNT_DOOMED | MNT_SYNC_UMOUNT | MNT_MARKED) 54 54 55 55 #define MNT_INTERNAL 0x4000 56 56 ··· 64 64 #define MNT_SYNC_UMOUNT 0x2000000 65 65 #define MNT_MARKED 0x4000000 66 66 #define MNT_UMOUNT 0x8000000 67 - #define MNT_ONRB 0x10000000 68 67 69 68 struct vfsmount { 70 69 struct dentry *mnt_root; /* root of the mounted tree */
+3 -4
include/linux/netfs.h
··· 185 185 short error; /* 0 or error that occurred */ 186 186 unsigned short debug_index; /* Index in list (for debugging output) */ 187 187 unsigned int nr_segs; /* Number of segs in io_iter */ 188 + u8 retry_count; /* The number of retries (0 on initial pass) */ 188 189 enum netfs_io_source source; /* Where to read from/write to */ 189 190 unsigned char stream_nr; /* I/O stream this belongs to */ 190 191 unsigned char curr_folioq_slot; /* Folio currently being read */ ··· 195 194 #define NETFS_SREQ_COPY_TO_CACHE 0 /* Set if should copy the data to the cache */ 196 195 #define NETFS_SREQ_CLEAR_TAIL 1 /* Set if the rest of the read should be cleared */ 197 196 #define NETFS_SREQ_SEEK_DATA_READ 3 /* Set if ->read() should SEEK_DATA first */ 198 - #define NETFS_SREQ_NO_PROGRESS 4 /* Set if we didn't manage to read any data */ 197 + #define NETFS_SREQ_MADE_PROGRESS 4 /* Set if we transferred at least some data */ 199 198 #define NETFS_SREQ_ONDEMAND 5 /* Set if it's from on-demand read mode */ 200 199 #define NETFS_SREQ_BOUNDARY 6 /* Set if ends on hard boundary (eg. 
ceph object) */ 201 200 #define NETFS_SREQ_HIT_EOF 7 /* Set if short due to EOF */ 202 201 #define NETFS_SREQ_IN_PROGRESS 8 /* Unlocked when the subrequest completes */ 203 202 #define NETFS_SREQ_NEED_RETRY 9 /* Set if the filesystem requests a retry */ 204 - #define NETFS_SREQ_RETRYING 10 /* Set if we're retrying */ 205 - #define NETFS_SREQ_FAILED 11 /* Set if the subreq failed unretryably */ 203 + #define NETFS_SREQ_FAILED 10 /* Set if the subreq failed unretryably */ 206 204 }; 207 205 208 206 enum netfs_io_origin { ··· 269 269 size_t prev_donated; /* Fallback for subreq->prev_donated */ 270 270 refcount_t ref; 271 271 unsigned long flags; 272 - #define NETFS_RREQ_COPY_TO_CACHE 1 /* Need to write to the cache */ 273 272 #define NETFS_RREQ_NO_UNLOCK_FOLIO 2 /* Don't unlock no_unlock_folio on completion */ 274 273 #define NETFS_RREQ_DONT_UNLOCK_FOLIOS 3 /* Don't unlock the folios on completion */ 275 274 #define NETFS_RREQ_FAILED 4 /* The request failed */
+2 -10
include/linux/page-flags.h
··· 862 862 ClearPageHead(page); 863 863 } 864 864 FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE) 865 - FOLIO_TEST_FLAG(partially_mapped, FOLIO_SECOND_PAGE) 866 - /* 867 - * PG_partially_mapped is protected by deferred_split split_queue_lock, 868 - * so its safe to use non-atomic set/clear. 869 - */ 870 - __FOLIO_SET_FLAG(partially_mapped, FOLIO_SECOND_PAGE) 871 - __FOLIO_CLEAR_FLAG(partially_mapped, FOLIO_SECOND_PAGE) 865 + FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE) 872 866 #else 873 867 FOLIO_FLAG_FALSE(large_rmappable) 874 - FOLIO_TEST_FLAG_FALSE(partially_mapped) 875 - __FOLIO_SET_FLAG_NOOP(partially_mapped) 876 - __FOLIO_CLEAR_FLAG_NOOP(partially_mapped) 868 + FOLIO_FLAG_FALSE(partially_mapped) 877 869 #endif 878 870 879 871 #define PG_head_mask ((1UL << PG_head))
+1 -4
include/linux/percpu-defs.h
··· 221 221 } while (0) 222 222 223 223 #define PERCPU_PTR(__p) \ 224 - ({ \ 225 - unsigned long __pcpu_ptr = (__force unsigned long)(__p); \ 226 - (typeof(*(__p)) __force __kernel *)(__pcpu_ptr); \ 227 - }) 224 + (typeof(*(__p)) __force __kernel *)((__force unsigned long)(__p)) 228 225 229 226 #ifdef CONFIG_SMP 230 227
+2
include/linux/platform_data/amd_qdma.h
··· 26 26 * @max_mm_channels: Maximum number of MM DMA channels in each direction 27 27 * @device_map: DMA slave map 28 28 * @irq_index: The index of first IRQ 29 + * @dma_dev: The device pointer for dma operations 29 30 */ 30 31 struct qdma_platdata { 31 32 u32 max_mm_channels; 32 33 u32 irq_index; 33 34 struct dma_slave_map *device_map; 35 + struct device *dma_dev; 34 36 }; 35 37 36 38 #endif /* _PLATDATA_AMD_QDMA_H */
+12 -14
include/linux/poll.h
··· 25 25 26 26 struct poll_table_struct; 27 27 28 - /* 28 + /* 29 29 * structures and helpers for f_op->poll implementations 30 30 */ 31 31 typedef void (*poll_queue_proc)(struct file *, wait_queue_head_t *, struct poll_table_struct *); 32 32 33 33 /* 34 - * Do not touch the structure directly, use the access functions 35 - * poll_does_not_wait() and poll_requested_events() instead. 34 + * Do not touch the structure directly, use the access function 35 + * poll_requested_events() instead. 36 36 */ 37 37 typedef struct poll_table_struct { 38 38 poll_queue_proc _qproc; ··· 41 41 42 42 static inline void poll_wait(struct file * filp, wait_queue_head_t * wait_address, poll_table *p) 43 43 { 44 - if (p && p->_qproc && wait_address) 44 + if (p && p->_qproc) { 45 45 p->_qproc(filp, wait_address, p); 46 - } 47 - 48 - /* 49 - * Return true if it is guaranteed that poll will not wait. This is the case 50 - * if the poll() of another file descriptor in the set got an event, so there 51 - * is no need for waiting. 52 - */ 53 - static inline bool poll_does_not_wait(const poll_table *p) 54 - { 55 - return p == NULL || p->_qproc == NULL; 46 + /* 47 + * This memory barrier is paired in the wq_has_sleeper(). 48 + * See the comment above prepare_to_wait(), we need to 49 + * ensure that subsequent tests in this thread can't be 50 + * reordered with __add_wait_queue() in _qproc() paths. 51 + */ 52 + smp_mb(); 53 + } 56 54 } 57 55 58 56 /*
+32 -45
include/linux/regulator/consumer.h
··· 168 168 void regulator_put(struct regulator *regulator); 169 169 void devm_regulator_put(struct regulator *regulator); 170 170 171 - #if IS_ENABLED(CONFIG_OF) 172 - struct regulator *__must_check of_regulator_get_optional(struct device *dev, 173 - struct device_node *node, 174 - const char *id); 175 - struct regulator *__must_check devm_of_regulator_get_optional(struct device *dev, 176 - struct device_node *node, 177 - const char *id); 178 - #else 179 - static inline struct regulator *__must_check of_regulator_get_optional(struct device *dev, 180 - struct device_node *node, 181 - const char *id) 182 - { 183 - return ERR_PTR(-ENODEV); 184 - } 185 - 186 - static inline struct regulator *__must_check devm_of_regulator_get_optional(struct device *dev, 187 - struct device_node *node, 188 - const char *id) 189 - { 190 - return ERR_PTR(-ENODEV); 191 - } 192 - #endif 193 - 194 171 int regulator_register_supply_alias(struct device *dev, const char *id, 195 172 struct device *alias_dev, 196 173 const char *alias_id); ··· 200 223 201 224 int __must_check regulator_bulk_get(struct device *dev, int num_consumers, 202 225 struct regulator_bulk_data *consumers); 203 - int __must_check of_regulator_bulk_get_all(struct device *dev, struct device_node *np, 204 - struct regulator_bulk_data **consumers); 205 226 int __must_check devm_regulator_bulk_get(struct device *dev, int num_consumers, 206 227 struct regulator_bulk_data *consumers); 207 228 void devm_regulator_bulk_put(struct regulator_bulk_data *consumers); ··· 348 373 return ERR_PTR(-ENODEV); 349 374 } 350 375 351 - static inline struct regulator *__must_check of_regulator_get_optional(struct device *dev, 352 - struct device_node *node, 353 - const char *id) 354 - { 355 - return ERR_PTR(-ENODEV); 356 - } 357 - 358 - static inline struct regulator *__must_check devm_of_regulator_get_optional(struct device *dev, 359 - struct device_node *node, 360 - const char *id) 361 - { 362 - return ERR_PTR(-ENODEV); 363 - } 364 - 365 376 
static inline void regulator_put(struct regulator *regulator) 366 377 { 367 378 } ··· 440 479 441 480 static inline int devm_regulator_bulk_get(struct device *dev, int num_consumers, 442 481 struct regulator_bulk_data *consumers) 443 - { 444 - return 0; 445 - } 446 - 447 - static inline int of_regulator_bulk_get_all(struct device *dev, struct device_node *np, 448 - struct regulator_bulk_data **consumers) 449 482 { 450 483 return 0; 451 484 } ··· 653 698 { 654 699 return false; 655 700 } 701 + #endif 702 + 703 + #if IS_ENABLED(CONFIG_OF) && IS_ENABLED(CONFIG_REGULATOR) 704 + struct regulator *__must_check of_regulator_get_optional(struct device *dev, 705 + struct device_node *node, 706 + const char *id); 707 + struct regulator *__must_check devm_of_regulator_get_optional(struct device *dev, 708 + struct device_node *node, 709 + const char *id); 710 + int __must_check of_regulator_bulk_get_all(struct device *dev, struct device_node *np, 711 + struct regulator_bulk_data **consumers); 712 + #else 713 + static inline struct regulator *__must_check of_regulator_get_optional(struct device *dev, 714 + struct device_node *node, 715 + const char *id) 716 + { 717 + return ERR_PTR(-ENODEV); 718 + } 719 + 720 + static inline struct regulator *__must_check devm_of_regulator_get_optional(struct device *dev, 721 + struct device_node *node, 722 + const char *id) 723 + { 724 + return ERR_PTR(-ENODEV); 725 + } 726 + 727 + static inline int of_regulator_bulk_get_all(struct device *dev, struct device_node *np, 728 + struct regulator_bulk_data **consumers) 729 + { 730 + return 0; 731 + } 732 + 656 733 #endif 657 734 658 735 static inline int regulator_set_voltage_triplet(struct regulator *regulator,
+2 -1
include/linux/sched.h
··· 1637 1637 * We're lying here, but rather than expose a completely new task state 1638 1638 * to userspace, we can make this appear as if the task has gone through 1639 1639 * a regular rt_mutex_lock() call. 1640 + * Report frozen tasks as uninterruptible. 1640 1641 */ 1641 - if (tsk_state & TASK_RTLOCK_WAIT) 1642 + if ((tsk_state & TASK_RTLOCK_WAIT) || (tsk_state & TASK_FROZEN)) 1642 1643 state = TASK_UNINTERRUPTIBLE; 1643 1644 1644 1645 return fls(state);
+8 -3
include/linux/skmsg.h
··· 317 317 kfree_skb(skb); 318 318 } 319 319 320 - static inline void sk_psock_queue_msg(struct sk_psock *psock, 320 + static inline bool sk_psock_queue_msg(struct sk_psock *psock, 321 321 struct sk_msg *msg) 322 322 { 323 + bool ret; 324 + 323 325 spin_lock_bh(&psock->ingress_lock); 324 - if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) 326 + if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) { 325 327 list_add_tail(&msg->list, &psock->ingress_msg); 326 - else { 328 + ret = true; 329 + } else { 327 330 sk_msg_free(psock->sk, msg); 328 331 kfree(msg); 332 + ret = false; 329 333 } 330 334 spin_unlock_bh(&psock->ingress_lock); 335 + return ret; 331 336 } 332 337 333 338 static inline struct sk_msg *sk_psock_dequeue_msg(struct sk_psock *psock)
+6
include/linux/static_call.h
··· 160 160 161 161 #ifdef CONFIG_HAVE_STATIC_CALL_INLINE 162 162 163 + extern int static_call_initialized; 164 + 163 165 extern int __init static_call_init(void); 164 166 165 167 extern void static_call_force_reinit(void); ··· 227 225 228 226 #elif defined(CONFIG_HAVE_STATIC_CALL) 229 227 228 + #define static_call_initialized 0 229 + 230 230 static inline int static_call_init(void) { return 0; } 231 231 232 232 #define DEFINE_STATIC_CALL(name, _func) \ ··· 284 280 EXPORT_SYMBOL_GPL(STATIC_CALL_TRAMP(name)) 285 281 286 282 #else /* Generic implementation */ 283 + 284 + #define static_call_initialized 0 287 285 288 286 static inline int static_call_init(void) { return 0; } 289 287
+6 -2
include/linux/trace_events.h
··· 273 273 const char *name; 274 274 const int size; 275 275 const int align; 276 - const int is_signed; 276 + const unsigned int is_signed:1; 277 + unsigned int needs_test:1; 277 278 const int filter_type; 278 279 const int len; 279 280 }; ··· 325 324 TRACE_EVENT_FL_EPROBE_BIT, 326 325 TRACE_EVENT_FL_FPROBE_BIT, 327 326 TRACE_EVENT_FL_CUSTOM_BIT, 327 + TRACE_EVENT_FL_TEST_STR_BIT, 328 328 }; 329 329 330 330 /* ··· 342 340 * CUSTOM - Event is a custom event (to be attached to an exsiting tracepoint) 343 341 * This is set when the custom event has not been attached 344 342 * to a tracepoint yet, then it is cleared when it is. 343 + * TEST_STR - The event has a "%s" that points to a string outside the event 345 344 */ 346 345 enum { 347 346 TRACE_EVENT_FL_CAP_ANY = (1 << TRACE_EVENT_FL_CAP_ANY_BIT), ··· 355 352 TRACE_EVENT_FL_EPROBE = (1 << TRACE_EVENT_FL_EPROBE_BIT), 356 353 TRACE_EVENT_FL_FPROBE = (1 << TRACE_EVENT_FL_FPROBE_BIT), 357 354 TRACE_EVENT_FL_CUSTOM = (1 << TRACE_EVENT_FL_CUSTOM_BIT), 355 + TRACE_EVENT_FL_TEST_STR = (1 << TRACE_EVENT_FL_TEST_STR_BIT), 358 356 }; 359 357 360 358 #define TRACE_EVENT_FL_UKPROBE (TRACE_EVENT_FL_KPROBE | TRACE_EVENT_FL_UPROBE) ··· 364 360 struct list_head list; 365 361 struct trace_event_class *class; 366 362 union { 367 - char *name; 363 + const char *name; 368 364 /* Set TRACE_EVENT_FL_TRACEPOINT flag when using "tp" */ 369 365 struct tracepoint *tp; 370 366 };
+3 -3
include/linux/vermagic.h
··· 15 15 #else 16 16 #define MODULE_VERMAGIC_SMP "" 17 17 #endif 18 - #ifdef CONFIG_PREEMPT_BUILD 19 - #define MODULE_VERMAGIC_PREEMPT "preempt " 20 - #elif defined(CONFIG_PREEMPT_RT) 18 + #ifdef CONFIG_PREEMPT_RT 21 19 #define MODULE_VERMAGIC_PREEMPT "preempt_rt " 20 + #elif defined(CONFIG_PREEMPT_BUILD) 21 + #define MODULE_VERMAGIC_PREEMPT "preempt " 22 22 #else 23 23 #define MODULE_VERMAGIC_PREEMPT "" 24 24 #endif
+1 -1
include/linux/vmstat.h
··· 515 515 516 516 static inline const char *lru_list_name(enum lru_list lru) 517 517 { 518 - return node_stat_name(NR_LRU_BASE + lru) + 3; // skip "nr_" 518 + return node_stat_name(NR_LRU_BASE + (enum node_stat_item)lru) + 3; // skip "nr_" 519 519 } 520 520 521 521 #if defined(CONFIG_VM_EVENT_COUNTERS) || defined(CONFIG_MEMCG)
+1 -1
include/net/inet_connection_sock.h
··· 282 282 283 283 static inline int inet_csk_reqsk_queue_is_full(const struct sock *sk) 284 284 { 285 - return inet_csk_reqsk_queue_len(sk) >= READ_ONCE(sk->sk_max_ack_backlog); 285 + return inet_csk_reqsk_queue_len(sk) > READ_ONCE(sk->sk_max_ack_backlog); 286 286 } 287 287 288 288 bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req);
+5 -2
include/net/netfilter/nf_tables.h
··· 733 733 /** 734 734 * struct nft_set_ext - set extensions 735 735 * 736 - * @genmask: generation mask 736 + * @genmask: generation mask, but also flags (see NFT_SET_ELEM_DEAD_BIT) 737 737 * @offset: offsets of individual extension types 738 738 * @data: beginning of extension data 739 + * 740 + * This structure must be aligned to word size, otherwise atomic bitops 741 + * on genmask field can cause alignment failure on some archs. 739 742 */ 740 743 struct nft_set_ext { 741 744 u8 genmask; 742 745 u8 offset[NFT_SET_EXT_NUM]; 743 746 char data[]; 744 - }; 747 + } __aligned(BITS_PER_LONG / 8); 745 748 746 749 static inline void nft_set_ext_prepare(struct nft_set_ext_tmpl *tmpl) 747 750 {
+15 -12
include/net/sock.h
··· 1527 1527 } 1528 1528 1529 1529 static inline bool 1530 - sk_rmem_schedule(struct sock *sk, struct sk_buff *skb, int size) 1530 + __sk_rmem_schedule(struct sock *sk, int size, bool pfmemalloc) 1531 1531 { 1532 1532 int delta; 1533 1533 ··· 1535 1535 return true; 1536 1536 delta = size - sk->sk_forward_alloc; 1537 1537 return delta <= 0 || __sk_mem_schedule(sk, delta, SK_MEM_RECV) || 1538 - skb_pfmemalloc(skb); 1538 + pfmemalloc; 1539 + } 1540 + 1541 + static inline bool 1542 + sk_rmem_schedule(struct sock *sk, struct sk_buff *skb, int size) 1543 + { 1544 + return __sk_rmem_schedule(sk, size, skb_pfmemalloc(skb)); 1539 1545 } 1540 1546 1541 1547 static inline int sk_unused_reserved_mem(const struct sock *sk) ··· 2297 2291 } 2298 2292 2299 2293 /** 2300 - * sock_poll_wait - place memory barrier behind the poll_wait call. 2294 + * sock_poll_wait - wrapper for the poll_wait call. 2301 2295 * @filp: file 2302 2296 * @sock: socket to wait on 2303 2297 * @p: poll_table ··· 2307 2301 static inline void sock_poll_wait(struct file *filp, struct socket *sock, 2308 2302 poll_table *p) 2309 2303 { 2310 - if (!poll_does_not_wait(p)) { 2311 - poll_wait(filp, &sock->wq.wait, p); 2312 - /* We need to be sure we are in sync with the 2313 - * socket flags modification. 2314 - * 2315 - * This memory barrier is paired in the wq_has_sleeper. 2316 - */ 2317 - smp_mb(); 2318 - } 2304 + /* Provides a barrier we need to be sure we are in sync 2305 + * with the socket flags modification. 2306 + * 2307 + * This memory barrier is paired in the wq_has_sleeper. 2308 + */ 2309 + poll_wait(filp, &sock->wq.wait, p); 2319 2310 } 2320 2311 2321 2312 static inline void skb_set_hash_from_sk(struct sk_buff *skb, struct sock *sk)
+26 -24
include/uapi/linux/mptcp_pm.h
··· 12 12 /** 13 13 * enum mptcp_event_type 14 14 * @MPTCP_EVENT_UNSPEC: unused event 15 - * @MPTCP_EVENT_CREATED: token, family, saddr4 | saddr6, daddr4 | daddr6, 16 - * sport, dport A new MPTCP connection has been created. It is the good time 17 - * to allocate memory and send ADD_ADDR if needed. Depending on the 15 + * @MPTCP_EVENT_CREATED: A new MPTCP connection has been created. It is the 16 + * good time to allocate memory and send ADD_ADDR if needed. Depending on the 18 17 * traffic-patterns it can take a long time until the MPTCP_EVENT_ESTABLISHED 19 - * is sent. 20 - * @MPTCP_EVENT_ESTABLISHED: token, family, saddr4 | saddr6, daddr4 | daddr6, 21 - * sport, dport A MPTCP connection is established (can start new subflows). 22 - * @MPTCP_EVENT_CLOSED: token A MPTCP connection has stopped. 23 - * @MPTCP_EVENT_ANNOUNCED: token, rem_id, family, daddr4 | daddr6 [, dport] A 24 - * new address has been announced by the peer. 25 - * @MPTCP_EVENT_REMOVED: token, rem_id An address has been lost by the peer. 26 - * @MPTCP_EVENT_SUB_ESTABLISHED: token, family, loc_id, rem_id, saddr4 | 27 - * saddr6, daddr4 | daddr6, sport, dport, backup, if_idx [, error] A new 28 - * subflow has been established. 'error' should not be set. 29 - * @MPTCP_EVENT_SUB_CLOSED: token, family, loc_id, rem_id, saddr4 | saddr6, 30 - * daddr4 | daddr6, sport, dport, backup, if_idx [, error] A subflow has been 31 - * closed. An error (copy of sk_err) could be set if an error has been 32 - * detected for this subflow. 33 - * @MPTCP_EVENT_SUB_PRIORITY: token, family, loc_id, rem_id, saddr4 | saddr6, 34 - * daddr4 | daddr6, sport, dport, backup, if_idx [, error] The priority of a 35 - * subflow has changed. 'error' should not be set. 36 - * @MPTCP_EVENT_LISTENER_CREATED: family, sport, saddr4 | saddr6 A new PM 37 - * listener is created. 38 - * @MPTCP_EVENT_LISTENER_CLOSED: family, sport, saddr4 | saddr6 A PM listener 39 - * is closed. 18 + * is sent. 
Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6, 19 + * sport, dport, server-side. 20 + * @MPTCP_EVENT_ESTABLISHED: A MPTCP connection is established (can start new 21 + * subflows). Attributes: token, family, saddr4 | saddr6, daddr4 | daddr6, 22 + * sport, dport, server-side. 23 + * @MPTCP_EVENT_CLOSED: A MPTCP connection has stopped. Attribute: token. 24 + * @MPTCP_EVENT_ANNOUNCED: A new address has been announced by the peer. 25 + * Attributes: token, rem_id, family, daddr4 | daddr6 [, dport]. 26 + * @MPTCP_EVENT_REMOVED: An address has been lost by the peer. Attributes: 27 + * token, rem_id. 28 + * @MPTCP_EVENT_SUB_ESTABLISHED: A new subflow has been established. 'error' 29 + * should not be set. Attributes: token, family, loc_id, rem_id, saddr4 | 30 + * saddr6, daddr4 | daddr6, sport, dport, backup, if_idx [, error]. 31 + * @MPTCP_EVENT_SUB_CLOSED: A subflow has been closed. An error (copy of 32 + * sk_err) could be set if an error has been detected for this subflow. 33 + * Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | 34 + * daddr6, sport, dport, backup, if_idx [, error]. 35 + * @MPTCP_EVENT_SUB_PRIORITY: The priority of a subflow has changed. 'error' 36 + * should not be set. Attributes: token, family, loc_id, rem_id, saddr4 | 37 + * saddr6, daddr4 | daddr6, sport, dport, backup, if_idx [, error]. 38 + * @MPTCP_EVENT_LISTENER_CREATED: A new PM listener is created. Attributes: 39 + * family, sport, saddr4 | saddr6. 40 + * @MPTCP_EVENT_LISTENER_CLOSED: A PM listener is closed. Attributes: family, 41 + * sport, saddr4 | saddr6. 40 42 */ 41 43 enum mptcp_event_type { 42 44 MPTCP_EVENT_UNSPEC,
+10 -3
include/uapi/linux/stddef.h
··· 8 8 #define __always_inline inline 9 9 #endif 10 10 11 + /* Not all C++ standards support type declarations inside an anonymous union */ 12 + #ifndef __cplusplus 13 + #define __struct_group_tag(TAG) TAG 14 + #else 15 + #define __struct_group_tag(TAG) 16 + #endif 17 + 11 18 /** 12 19 * __struct_group() - Create a mirrored named and anonyomous struct 13 20 * ··· 27 20 * and size: one anonymous and one named. The former's members can be used 28 21 * normally without sub-struct naming, and the latter can be used to 29 22 * reason about the start, end, and size of the group of struct members. 30 - * The named struct can also be explicitly tagged for layer reuse, as well 31 - * as both having struct attributes appended. 23 + * The named struct can also be explicitly tagged for layer reuse (C only), 24 + * as well as both having struct attributes appended. 32 25 */ 33 26 #define __struct_group(TAG, NAME, ATTRS, MEMBERS...) \ 34 27 union { \ 35 28 struct { MEMBERS } ATTRS; \ 36 - struct TAG { MEMBERS } ATTRS NAME; \ 29 + struct __struct_group_tag(TAG) { MEMBERS } ATTRS NAME; \ 37 30 } ATTRS 38 31 39 32 #ifdef __cplusplus
+2 -2
include/uapi/linux/thermal.h
··· 3 3 #define _UAPI_LINUX_THERMAL_H 4 4 5 5 #define THERMAL_NAME_LENGTH 20 6 - #define THERMAL_THRESHOLD_WAY_UP BIT(0) 7 - #define THERMAL_THRESHOLD_WAY_DOWN BIT(1) 6 + #define THERMAL_THRESHOLD_WAY_UP 0x1 7 + #define THERMAL_THRESHOLD_WAY_DOWN 0x2 8 8 9 9 enum thermal_device_mode { 10 10 THERMAL_DEVICE_DISABLED = 0,
-2
include/ufs/ufshcd.h
··· 329 329 * @program_key: program or evict an inline encryption key 330 330 * @fill_crypto_prdt: initialize crypto-related fields in the PRDT 331 331 * @event_notify: called to notify important events 332 - * @reinit_notify: called to notify reinit of UFSHCD during max gear switch 333 332 * @mcq_config_resource: called to configure MCQ platform resources 334 333 * @get_hba_mac: reports maximum number of outstanding commands supported by 335 334 * the controller. Should be implemented for UFSHCI 4.0 or later ··· 380 381 void *prdt, unsigned int num_segments); 381 382 void (*event_notify)(struct ufs_hba *hba, 382 383 enum ufs_event_type evt, void *data); 383 - void (*reinit_notify)(struct ufs_hba *); 384 384 int (*mcq_config_resource)(struct ufs_hba *hba); 385 385 int (*get_hba_mac)(struct ufs_hba *hba); 386 386 int (*op_runtime_config)(struct ufs_hba *hba);
+7 -9
io_uring/eventfd.c
··· 33 33 kfree(ev_fd); 34 34 } 35 35 36 + static void io_eventfd_put(struct io_ev_fd *ev_fd) 37 + { 38 + if (refcount_dec_and_test(&ev_fd->refs)) 39 + call_rcu(&ev_fd->rcu, io_eventfd_free); 40 + } 41 + 36 42 static void io_eventfd_do_signal(struct rcu_head *rcu) 37 43 { 38 44 struct io_ev_fd *ev_fd = container_of(rcu, struct io_ev_fd, rcu); 39 45 40 46 eventfd_signal_mask(ev_fd->cq_ev_fd, EPOLL_URING_WAKE); 41 - 42 - if (refcount_dec_and_test(&ev_fd->refs)) 43 - io_eventfd_free(rcu); 44 - } 45 - 46 - static void io_eventfd_put(struct io_ev_fd *ev_fd) 47 - { 48 - if (refcount_dec_and_test(&ev_fd->refs)) 49 - call_rcu(&ev_fd->rcu, io_eventfd_free); 47 + io_eventfd_put(ev_fd); 50 48 } 51 49 52 50 static void io_eventfd_release(struct io_ev_fd *ev_fd, bool put_ref)
+17 -16
io_uring/io_uring.c
··· 215 215 struct io_ring_ctx *ctx = head->ctx; 216 216 217 217 /* protect against races with linked timeouts */ 218 - spin_lock_irq(&ctx->timeout_lock); 218 + raw_spin_lock_irq(&ctx->timeout_lock); 219 219 matched = io_match_linked(head); 220 - spin_unlock_irq(&ctx->timeout_lock); 220 + raw_spin_unlock_irq(&ctx->timeout_lock); 221 221 } else { 222 222 matched = io_match_linked(head); 223 223 } ··· 320 320 ret |= io_alloc_cache_init(&ctx->rw_cache, IO_ALLOC_CACHE_MAX, 321 321 sizeof(struct io_async_rw)); 322 322 ret |= io_alloc_cache_init(&ctx->uring_cache, IO_ALLOC_CACHE_MAX, 323 - sizeof(struct uring_cache)); 323 + sizeof(struct io_uring_cmd_data)); 324 324 spin_lock_init(&ctx->msg_lock); 325 325 ret |= io_alloc_cache_init(&ctx->msg_cache, IO_ALLOC_CACHE_MAX, 326 326 sizeof(struct io_kiocb)); ··· 333 333 init_waitqueue_head(&ctx->cq_wait); 334 334 init_waitqueue_head(&ctx->poll_wq); 335 335 spin_lock_init(&ctx->completion_lock); 336 - spin_lock_init(&ctx->timeout_lock); 336 + raw_spin_lock_init(&ctx->timeout_lock); 337 337 INIT_WQ_LIST(&ctx->iopoll_list); 338 338 INIT_LIST_HEAD(&ctx->io_buffers_comp); 339 339 INIT_LIST_HEAD(&ctx->defer_list); ··· 498 498 if (req->flags & REQ_F_LINK_TIMEOUT) { 499 499 struct io_ring_ctx *ctx = req->ctx; 500 500 501 - spin_lock_irq(&ctx->timeout_lock); 501 + raw_spin_lock_irq(&ctx->timeout_lock); 502 502 io_for_each_link(cur, req) 503 503 io_prep_async_work(cur); 504 - spin_unlock_irq(&ctx->timeout_lock); 504 + raw_spin_unlock_irq(&ctx->timeout_lock); 505 505 } else { 506 506 io_for_each_link(cur, req) 507 507 io_prep_async_work(cur); ··· 514 514 struct io_uring_task *tctx = req->tctx; 515 515 516 516 BUG_ON(!tctx); 517 - BUG_ON(!tctx->io_wq); 517 + 518 + if ((current->flags & PF_KTHREAD) || !tctx->io_wq) { 519 + io_req_task_queue_fail(req, -ECANCELED); 520 + return; 521 + } 518 522 519 523 /* init ->work of the whole link before punting */ 520 524 io_prep_async_link(req); ··· 1226 1222 1227 1223 /* SQPOLL doesn't need the 
task_work added, it'll run it itself */ 1228 1224 if (ctx->flags & IORING_SETUP_SQPOLL) { 1229 - struct io_sq_data *sqd = ctx->sq_data; 1230 - 1231 - if (sqd->thread) 1232 - __set_notify_signal(sqd->thread); 1225 + __set_notify_signal(tctx->task); 1233 1226 return; 1234 1227 } 1235 1228 ··· 2810 2809 2811 2810 if (unlikely(!ctx->poll_activated)) 2812 2811 io_activate_pollwq(ctx); 2813 - 2814 - poll_wait(file, &ctx->poll_wq, wait); 2815 2812 /* 2816 - * synchronizes with barrier from wq_has_sleeper call in 2817 - * io_commit_cqring 2813 + * provides mb() which pairs with barrier from wq_has_sleeper 2814 + * call in io_commit_cqring 2818 2815 */ 2819 - smp_rmb(); 2816 + poll_wait(file, &ctx->poll_wq, wait); 2817 + 2820 2818 if (!io_sqring_full(ctx)) 2821 2819 mask |= EPOLLOUT | EPOLLWRNORM; 2822 2820 ··· 3214 3214 3215 3215 void __io_uring_cancel(bool cancel_all) 3216 3216 { 3217 + io_uring_unreg_ringfd(); 3217 3218 io_uring_cancel_generic(cancel_all, NULL); 3218 3219 } 3219 3220
+4 -3
io_uring/io_uring.h
··· 125 125 #if defined(CONFIG_PROVE_LOCKING) 126 126 lockdep_assert(in_task()); 127 127 128 + if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) 129 + lockdep_assert_held(&ctx->uring_lock); 130 + 128 131 if (ctx->flags & IORING_SETUP_IOPOLL) { 129 132 lockdep_assert_held(&ctx->uring_lock); 130 133 } else if (!ctx->task_complete) { ··· 139 136 * Not from an SQE, as those cannot be submitted, but via 140 137 * updating tagged resources. 141 138 */ 142 - if (percpu_ref_is_dying(&ctx->refs)) 143 - lockdep_assert(current_work()); 144 - else 139 + if (!percpu_ref_is_dying(&ctx->refs)) 145 140 lockdep_assert(current == ctx->submitter_task); 146 141 } 147 142 #endif
+3 -1
io_uring/kbuf.c
··· 139 139 struct io_uring_buf_ring *br = bl->buf_ring; 140 140 __u16 tail, head = bl->head; 141 141 struct io_uring_buf *buf; 142 + void __user *ret; 142 143 143 144 tail = smp_load_acquire(&br->tail); 144 145 if (unlikely(tail == head)) ··· 154 153 req->flags |= REQ_F_BUFFER_RING | REQ_F_BUFFERS_COMMIT; 155 154 req->buf_list = bl; 156 155 req->buf_index = buf->bid; 156 + ret = u64_to_user_ptr(buf->addr); 157 157 158 158 if (issue_flags & IO_URING_F_UNLOCKED || !io_file_can_poll(req)) { 159 159 /* ··· 170 168 io_kbuf_commit(req, bl, *len, 1); 171 169 req->buf_list = NULL; 172 170 } 173 - return u64_to_user_ptr(buf->addr); 171 + return ret; 174 172 } 175 173 176 174 void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
+1
io_uring/net.c
··· 754 754 if (req->opcode == IORING_OP_RECV) { 755 755 kmsg->msg.msg_name = NULL; 756 756 kmsg->msg.msg_namelen = 0; 757 + kmsg->msg.msg_inq = 0; 757 758 kmsg->msg.msg_control = NULL; 758 759 kmsg->msg.msg_get_inq = 1; 759 760 kmsg->msg.msg_controllen = 0;
+2 -1
io_uring/opdef.c
··· 7 7 #include <linux/fs.h> 8 8 #include <linux/file.h> 9 9 #include <linux/io_uring.h> 10 + #include <linux/io_uring/cmd.h> 10 11 11 12 #include "io_uring.h" 12 13 #include "opdef.h" ··· 415 414 .plug = 1, 416 415 .iopoll = 1, 417 416 .iopoll_queue = 1, 418 - .async_size = 2 * sizeof(struct io_uring_sqe), 417 + .async_size = sizeof(struct io_uring_cmd_data), 419 418 .prep = io_uring_cmd_prep, 420 419 .issue = io_uring_cmd, 421 420 },
+3
io_uring/register.c
··· 414 414 if (ctx->flags & IORING_SETUP_SINGLE_ISSUER && 415 415 current != ctx->submitter_task) 416 416 return -EEXIST; 417 + /* limited to DEFER_TASKRUN for now */ 418 + if (!(ctx->flags & IORING_SETUP_DEFER_TASKRUN)) 419 + return -EINVAL; 417 420 if (copy_from_user(&p, arg, sizeof(p))) 418 421 return -EFAULT; 419 422 if (p.flags & ~RESIZE_FLAGS)
+2
io_uring/rw.c
··· 983 983 io_kbuf_recycle(req, issue_flags); 984 984 if (ret < 0) 985 985 req_set_fail(req); 986 + } else if (!(req->flags & REQ_F_APOLL_MULTISHOT)) { 987 + cflags = io_put_kbuf(req, ret, issue_flags); 986 988 } else { 987 989 /* 988 990 * Any successful return value will keep the multishot read
+11 -1
io_uring/sqpoll.c
··· 268 268 DEFINE_WAIT(wait); 269 269 270 270 /* offload context creation failed, just exit */ 271 - if (!current->io_uring) 271 + if (!current->io_uring) { 272 + mutex_lock(&sqd->lock); 273 + sqd->thread = NULL; 274 + mutex_unlock(&sqd->lock); 272 275 goto err_out; 276 + } 273 277 274 278 snprintf(buf, sizeof(buf), "iou-sqp-%d", sqd->task_pid); 275 279 set_task_comm(current, buf); ··· 409 405 __cold int io_sq_offload_create(struct io_ring_ctx *ctx, 410 406 struct io_uring_params *p) 411 407 { 408 + struct task_struct *task_to_put = NULL; 412 409 int ret; 413 410 414 411 /* Retain compatibility with failing for an invalid attach attempt */ ··· 485 480 } 486 481 487 482 sqd->thread = tsk; 483 + task_to_put = get_task_struct(tsk); 488 484 ret = io_uring_alloc_task_context(tsk, ctx); 489 485 wake_up_new_task(tsk); 490 486 if (ret) ··· 496 490 goto err; 497 491 } 498 492 493 + if (task_to_put) 494 + put_task_struct(task_to_put); 499 495 return 0; 500 496 err_sqpoll: 501 497 complete(&ctx->sq_data->exited); 502 498 err: 503 499 io_sq_thread_finish(ctx); 500 + if (task_to_put) 501 + put_task_struct(task_to_put); 504 502 return ret; 505 503 } 506 504
+54 -35
io_uring/timeout.c
··· 74 74 if (!io_timeout_finish(timeout, data)) { 75 75 if (io_req_post_cqe(req, -ETIME, IORING_CQE_F_MORE)) { 76 76 /* re-arm timer */ 77 - spin_lock_irq(&ctx->timeout_lock); 77 + raw_spin_lock_irq(&ctx->timeout_lock); 78 78 list_add(&timeout->list, ctx->timeout_list.prev); 79 79 hrtimer_start(&data->timer, timespec64_to_ktime(data->ts), data->mode); 80 - spin_unlock_irq(&ctx->timeout_lock); 80 + raw_spin_unlock_irq(&ctx->timeout_lock); 81 81 return; 82 82 } 83 83 } ··· 85 85 io_req_task_complete(req, ts); 86 86 } 87 87 88 - static bool io_kill_timeout(struct io_kiocb *req, int status) 88 + static __cold bool io_flush_killed_timeouts(struct list_head *list, int err) 89 + { 90 + if (list_empty(list)) 91 + return false; 92 + 93 + while (!list_empty(list)) { 94 + struct io_timeout *timeout; 95 + struct io_kiocb *req; 96 + 97 + timeout = list_first_entry(list, struct io_timeout, list); 98 + list_del_init(&timeout->list); 99 + req = cmd_to_io_kiocb(timeout); 100 + if (err) 101 + req_set_fail(req); 102 + io_req_queue_tw_complete(req, err); 103 + } 104 + 105 + return true; 106 + } 107 + 108 + static void io_kill_timeout(struct io_kiocb *req, struct list_head *list) 89 109 __must_hold(&req->ctx->timeout_lock) 90 110 { 91 111 struct io_timeout_data *io = req->async_data; ··· 113 93 if (hrtimer_try_to_cancel(&io->timer) != -1) { 114 94 struct io_timeout *timeout = io_kiocb_to_cmd(req, struct io_timeout); 115 95 116 - if (status) 117 - req_set_fail(req); 118 96 atomic_set(&req->ctx->cq_timeouts, 119 97 atomic_read(&req->ctx->cq_timeouts) + 1); 120 - list_del_init(&timeout->list); 121 - io_req_queue_tw_complete(req, status); 122 - return true; 98 + list_move_tail(&timeout->list, list); 123 99 } 124 - return false; 125 100 } 126 101 127 102 __cold void io_flush_timeouts(struct io_ring_ctx *ctx) 128 103 { 129 - u32 seq; 130 104 struct io_timeout *timeout, *tmp; 105 + LIST_HEAD(list); 106 + u32 seq; 131 107 132 - spin_lock_irq(&ctx->timeout_lock); 108 + 
raw_spin_lock_irq(&ctx->timeout_lock); 133 109 seq = ctx->cached_cq_tail - atomic_read(&ctx->cq_timeouts); 134 110 135 111 list_for_each_entry_safe(timeout, tmp, &ctx->timeout_list, list) { ··· 147 131 if (events_got < events_needed) 148 132 break; 149 133 150 - io_kill_timeout(req, 0); 134 + io_kill_timeout(req, &list); 151 135 } 152 136 ctx->cq_last_tm_flush = seq; 153 - spin_unlock_irq(&ctx->timeout_lock); 137 + raw_spin_unlock_irq(&ctx->timeout_lock); 138 + io_flush_killed_timeouts(&list, 0); 154 139 } 155 140 156 141 static void io_req_tw_fail_links(struct io_kiocb *link, struct io_tw_state *ts) ··· 217 200 } else if (req->flags & REQ_F_LINK_TIMEOUT) { 218 201 struct io_ring_ctx *ctx = req->ctx; 219 202 220 - spin_lock_irq(&ctx->timeout_lock); 203 + raw_spin_lock_irq(&ctx->timeout_lock); 221 204 link = io_disarm_linked_timeout(req); 222 - spin_unlock_irq(&ctx->timeout_lock); 205 + raw_spin_unlock_irq(&ctx->timeout_lock); 223 206 if (link) 224 207 io_req_queue_tw_complete(link, -ECANCELED); 225 208 } ··· 255 238 struct io_ring_ctx *ctx = req->ctx; 256 239 unsigned long flags; 257 240 258 - spin_lock_irqsave(&ctx->timeout_lock, flags); 241 + raw_spin_lock_irqsave(&ctx->timeout_lock, flags); 259 242 list_del_init(&timeout->list); 260 243 atomic_set(&req->ctx->cq_timeouts, 261 244 atomic_read(&req->ctx->cq_timeouts) + 1); 262 - spin_unlock_irqrestore(&ctx->timeout_lock, flags); 245 + raw_spin_unlock_irqrestore(&ctx->timeout_lock, flags); 263 246 264 247 if (!(data->flags & IORING_TIMEOUT_ETIME_SUCCESS)) 265 248 req_set_fail(req); ··· 302 285 { 303 286 struct io_kiocb *req; 304 287 305 - spin_lock_irq(&ctx->timeout_lock); 288 + raw_spin_lock_irq(&ctx->timeout_lock); 306 289 req = io_timeout_extract(ctx, cd); 307 - spin_unlock_irq(&ctx->timeout_lock); 290 + raw_spin_unlock_irq(&ctx->timeout_lock); 308 291 309 292 if (IS_ERR(req)) 310 293 return PTR_ERR(req); ··· 347 330 struct io_ring_ctx *ctx = req->ctx; 348 331 unsigned long flags; 349 332 350 - 
spin_lock_irqsave(&ctx->timeout_lock, flags); 333 + raw_spin_lock_irqsave(&ctx->timeout_lock, flags); 351 334 prev = timeout->head; 352 335 timeout->head = NULL; 353 336 ··· 362 345 } 363 346 list_del(&timeout->list); 364 347 timeout->prev = prev; 365 - spin_unlock_irqrestore(&ctx->timeout_lock, flags); 348 + raw_spin_unlock_irqrestore(&ctx->timeout_lock, flags); 366 349 367 350 req->io_task_work.func = io_req_task_link_timeout; 368 351 io_req_task_work_add(req); ··· 427 410 428 411 timeout->off = 0; /* noseq */ 429 412 data = req->async_data; 413 + data->ts = *ts; 414 + 430 415 list_add_tail(&timeout->list, &ctx->timeout_list); 431 416 hrtimer_init(&data->timer, io_timeout_get_clock(data), mode); 432 417 data->timer.function = io_timeout_fn; 433 - hrtimer_start(&data->timer, timespec64_to_ktime(*ts), mode); 418 + hrtimer_start(&data->timer, timespec64_to_ktime(data->ts), mode); 434 419 return 0; 435 420 } 436 421 ··· 491 472 } else { 492 473 enum hrtimer_mode mode = io_translate_timeout_mode(tr->flags); 493 474 494 - spin_lock_irq(&ctx->timeout_lock); 475 + raw_spin_lock_irq(&ctx->timeout_lock); 495 476 if (tr->ltimeout) 496 477 ret = io_linked_timeout_update(ctx, tr->addr, &tr->ts, mode); 497 478 else 498 479 ret = io_timeout_update(ctx, tr->addr, &tr->ts, mode); 499 - spin_unlock_irq(&ctx->timeout_lock); 480 + raw_spin_unlock_irq(&ctx->timeout_lock); 500 481 } 501 482 502 483 if (ret < 0) ··· 591 572 struct list_head *entry; 592 573 u32 tail, off = timeout->off; 593 574 594 - spin_lock_irq(&ctx->timeout_lock); 575 + raw_spin_lock_irq(&ctx->timeout_lock); 595 576 596 577 /* 597 578 * sqe->off holds how many events that need to occur for this ··· 630 611 list_add(&timeout->list, entry); 631 612 data->timer.function = io_timeout_fn; 632 613 hrtimer_start(&data->timer, timespec64_to_ktime(data->ts), data->mode); 633 - spin_unlock_irq(&ctx->timeout_lock); 614 + raw_spin_unlock_irq(&ctx->timeout_lock); 634 615 return IOU_ISSUE_SKIP_COMPLETE; 635 616 } 636 617 ··· 639 
620 struct io_timeout *timeout = io_kiocb_to_cmd(req, struct io_timeout); 640 621 struct io_ring_ctx *ctx = req->ctx; 641 622 642 - spin_lock_irq(&ctx->timeout_lock); 623 + raw_spin_lock_irq(&ctx->timeout_lock); 643 624 /* 644 625 * If the back reference is NULL, then our linked request finished 645 626 * before we got a chance to setup the timer ··· 652 633 data->mode); 653 634 list_add_tail(&timeout->list, &ctx->ltimeout_list); 654 635 } 655 - spin_unlock_irq(&ctx->timeout_lock); 636 + raw_spin_unlock_irq(&ctx->timeout_lock); 656 637 /* drop submission reference */ 657 638 io_put_req(req); 658 639 } ··· 680 661 bool cancel_all) 681 662 { 682 663 struct io_timeout *timeout, *tmp; 683 - int canceled = 0; 664 + LIST_HEAD(list); 684 665 685 666 /* 686 667 * completion_lock is needed for io_match_task(). Take it before 687 668 * timeout_lockfirst to keep locking ordering. 688 669 */ 689 670 spin_lock(&ctx->completion_lock); 690 - spin_lock_irq(&ctx->timeout_lock); 671 + raw_spin_lock_irq(&ctx->timeout_lock); 691 672 list_for_each_entry_safe(timeout, tmp, &ctx->timeout_list, list) { 692 673 struct io_kiocb *req = cmd_to_io_kiocb(timeout); 693 674 694 - if (io_match_task(req, tctx, cancel_all) && 695 - io_kill_timeout(req, -ECANCELED)) 696 - canceled++; 675 + if (io_match_task(req, tctx, cancel_all)) 676 + io_kill_timeout(req, &list); 697 677 } 698 - spin_unlock_irq(&ctx->timeout_lock); 678 + raw_spin_unlock_irq(&ctx->timeout_lock); 699 679 spin_unlock(&ctx->completion_lock); 700 - return canceled != 0; 680 + 681 + return io_flush_killed_timeouts(&list, -ECANCELED); 701 682 }
+16 -7
io_uring/uring_cmd.c
··· 16 16 #include "rsrc.h" 17 17 #include "uring_cmd.h" 18 18 19 - static struct uring_cache *io_uring_async_get(struct io_kiocb *req) 19 + static struct io_uring_cmd_data *io_uring_async_get(struct io_kiocb *req) 20 20 { 21 21 struct io_ring_ctx *ctx = req->ctx; 22 - struct uring_cache *cache; 22 + struct io_uring_cmd_data *cache; 23 23 24 24 cache = io_alloc_cache_get(&ctx->uring_cache); 25 25 if (cache) { 26 + cache->op_data = NULL; 26 27 req->flags |= REQ_F_ASYNC_DATA; 27 28 req->async_data = cache; 28 29 return cache; 29 30 } 30 - if (!io_alloc_async_data(req)) 31 - return req->async_data; 31 + if (!io_alloc_async_data(req)) { 32 + cache = req->async_data; 33 + cache->op_data = NULL; 34 + return cache; 35 + } 32 36 return NULL; 33 37 } 34 38 35 39 static void io_req_uring_cleanup(struct io_kiocb *req, unsigned int issue_flags) 36 40 { 37 41 struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd); 38 - struct uring_cache *cache = req->async_data; 42 + struct io_uring_cmd_data *cache = req->async_data; 43 + 44 + if (cache->op_data) { 45 + kfree(cache->op_data); 46 + cache->op_data = NULL; 47 + } 39 48 40 49 if (issue_flags & IO_URING_F_UNLOCKED) 41 50 return; ··· 192 183 const struct io_uring_sqe *sqe) 193 184 { 194 185 struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd); 195 - struct uring_cache *cache; 186 + struct io_uring_cmd_data *cache; 196 187 197 188 cache = io_uring_async_get(req); 198 189 if (unlikely(!cache)) ··· 269 260 270 261 ret = file->f_op->uring_cmd(ioucmd, issue_flags); 271 262 if (ret == -EAGAIN) { 272 - struct uring_cache *cache = req->async_data; 263 + struct io_uring_cmd_data *cache = req->async_data; 273 264 274 265 if (ioucmd->sqe != (void *) cache) 275 266 memcpy(cache, ioucmd->sqe, uring_sqe_size(req->ctx));
-4
io_uring/uring_cmd.h
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 - struct uring_cache { 4 - struct io_uring_sqe sqes[2]; 5 - }; 6 - 7 3 int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags); 8 4 int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe); 9 5
+5 -1
kernel/bpf/verifier.c
··· 21281 21281 * changed in some incompatible and hard to support 21282 21282 * way, it's fine to back out this inlining logic 21283 21283 */ 21284 + #ifdef CONFIG_SMP 21284 21285 insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number); 21285 21286 insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0); 21286 21287 insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0); 21287 21288 cnt = 3; 21288 - 21289 + #else 21290 + insn_buf[0] = BPF_ALU32_REG(BPF_XOR, BPF_REG_0, BPF_REG_0); 21291 + cnt = 1; 21292 + #endif 21289 21293 new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt); 21290 21294 if (!new_prog) 21291 21295 return -ENOMEM;
+11 -33
kernel/cgroup/cpuset.c
··· 197 197 198 198 /* 199 199 * There are two global locks guarding cpuset structures - cpuset_mutex and 200 - * callback_lock. We also require taking task_lock() when dereferencing a 201 - * task's cpuset pointer. See "The task_lock() exception", at the end of this 202 - * comment. The cpuset code uses only cpuset_mutex. Other kernel subsystems 203 - * can use cpuset_lock()/cpuset_unlock() to prevent change to cpuset 200 + * callback_lock. The cpuset code uses only cpuset_mutex. Other kernel 201 + * subsystems can use cpuset_lock()/cpuset_unlock() to prevent change to cpuset 204 202 * structures. Note that cpuset_mutex needs to be a mutex as it is used in 205 203 * paths that rely on priority inheritance (e.g. scheduler - on RT) for 206 204 * correctness. ··· 227 229 * The cpuset_common_seq_show() handlers only hold callback_lock across 228 230 * small pieces of code, such as when reading out possibly multi-word 229 231 * cpumasks and nodemasks. 230 - * 231 - * Accessing a task's cpuset should be done in accordance with the 232 - * guidelines for accessing subsystem state in kernel/cgroup.c 233 232 */ 234 233 235 234 static DEFINE_MUTEX(cpuset_mutex); ··· 885 890 */ 886 891 if (cgrpv2) { 887 892 for (i = 0; i < ndoms; i++) { 888 - cpumask_copy(doms[i], csa[i]->effective_cpus); 893 + /* 894 + * The top cpuset may contain some boot time isolated 895 + * CPUs that need to be excluded from the sched domain. 
896 + */ 897 + if (csa[i] == &top_cpuset) 898 + cpumask_and(doms[i], csa[i]->effective_cpus, 899 + housekeeping_cpumask(HK_TYPE_DOMAIN)); 900 + else 901 + cpumask_copy(doms[i], csa[i]->effective_cpus); 889 902 if (dattr) 890 903 dattr[i] = SD_ATTR_INIT; 891 904 } ··· 3124 3121 int retval = -ENODEV; 3125 3122 3126 3123 buf = strstrip(buf); 3127 - 3128 - /* 3129 - * CPU or memory hotunplug may leave @cs w/o any execution 3130 - * resources, in which case the hotplug code asynchronously updates 3131 - * configuration and transfers all tasks to the nearest ancestor 3132 - * which can execute. 3133 - * 3134 - * As writes to "cpus" or "mems" may restore @cs's execution 3135 - * resources, wait for the previously scheduled operations before 3136 - * proceeding, so that we don't end up keep removing tasks added 3137 - * after execution capability is restored. 3138 - * 3139 - * cpuset_handle_hotplug may call back into cgroup core asynchronously 3140 - * via cgroup_transfer_tasks() and waiting for it from a cgroupfs 3141 - * operation like this one can lead to a deadlock through kernfs 3142 - * active_ref protection. Let's break the protection. Losing the 3143 - * protection is okay as we check whether @cs is online after 3144 - * grabbing cpuset_mutex anyway. This only happens on the legacy 3145 - * hierarchies. 3146 - */ 3147 - css_get(&cs->css); 3148 - kernfs_break_active_protection(of->kn); 3149 - 3150 3124 cpus_read_lock(); 3151 3125 mutex_lock(&cpuset_mutex); 3152 3126 if (!is_cpuset_online(cs)) ··· 3156 3176 out_unlock: 3157 3177 mutex_unlock(&cpuset_mutex); 3158 3178 cpus_read_unlock(); 3159 - kernfs_unbreak_active_protection(of->kn); 3160 - css_put(&cs->css); 3161 3179 flush_workqueue(cpuset_migrate_mm_wq); 3162 3180 return retval ?: nbytes; 3163 3181 }
+1 -1
kernel/events/uprobes.c
··· 1915 1915 if (!utask) 1916 1916 return; 1917 1917 1918 + t->utask = NULL; 1918 1919 WARN_ON_ONCE(utask->active_uprobe || utask->xol_vaddr); 1919 1920 1920 1921 timer_delete_sync(&utask->ri_timer); ··· 1925 1924 ri = free_ret_instance(ri, true /* cleanup_hprobe */); 1926 1925 1927 1926 kfree(utask); 1928 - t->utask = NULL; 1929 1927 } 1930 1928 1931 1929 #define RI_TIMER_PERIOD (HZ / 10) /* 100 ms */
+6 -7
kernel/fork.c
··· 639 639 LIST_HEAD(uf); 640 640 VMA_ITERATOR(vmi, mm, 0); 641 641 642 - uprobe_start_dup_mmap(); 643 - if (mmap_write_lock_killable(oldmm)) { 644 - retval = -EINTR; 645 - goto fail_uprobe_end; 646 - } 642 + if (mmap_write_lock_killable(oldmm)) 643 + return -EINTR; 647 644 flush_cache_dup_mm(oldmm); 648 645 uprobe_dup_mmap(oldmm, mm); 649 646 /* ··· 779 782 dup_userfaultfd_complete(&uf); 780 783 else 781 784 dup_userfaultfd_fail(&uf); 782 - fail_uprobe_end: 783 - uprobe_end_dup_mmap(); 784 785 return retval; 785 786 786 787 fail_nomem_anon_vma_fork: ··· 1687 1692 if (!mm_init(mm, tsk, mm->user_ns)) 1688 1693 goto fail_nomem; 1689 1694 1695 + uprobe_start_dup_mmap(); 1690 1696 err = dup_mmap(mm, oldmm); 1691 1697 if (err) 1692 1698 goto free_pt; 1699 + uprobe_end_dup_mmap(); 1693 1700 1694 1701 mm->hiwater_rss = get_mm_rss(mm); 1695 1702 mm->hiwater_vm = mm->total_vm; ··· 1706 1709 mm->binfmt = NULL; 1707 1710 mm_init_owner(mm, NULL); 1708 1711 mmput(mm); 1712 + if (err) 1713 + uprobe_end_dup_mmap(); 1709 1714 1710 1715 fail_nomem: 1711 1716 return NULL;
+1
kernel/gen_kheaders.sh
··· 89 89 90 90 # Create archive and try to normalize metadata for reproducibility. 91 91 tar "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \ 92 + --exclude=".__afs*" --exclude=".nfs*" \ 92 93 --owner=0 --group=0 --sort=name --numeric-owner --mode=u=rw,go=r,a+X \ 93 94 -I $XZ -cf $tarfile -C $cpio_dir/ . > /dev/null 94 95
+1 -1
kernel/kcov.c
··· 166 166 * Unlike in_serving_softirq(), this function returns false when called during 167 167 * a hardirq or an NMI that happened in the softirq context. 168 168 */ 169 - static inline bool in_softirq_really(void) 169 + static __always_inline bool in_softirq_really(void) 170 170 { 171 171 return in_serving_softirq() && !in_hardirq() && !in_nmi(); 172 172 }
+16 -2
kernel/locking/rtmutex.c
··· 1292 1292 */ 1293 1293 get_task_struct(owner); 1294 1294 1295 + preempt_disable(); 1295 1296 raw_spin_unlock_irq(&lock->wait_lock); 1297 + /* wake up any tasks on the wake_q before calling rt_mutex_adjust_prio_chain */ 1298 + wake_up_q(wake_q); 1299 + wake_q_init(wake_q); 1300 + preempt_enable(); 1301 + 1296 1302 1297 1303 res = rt_mutex_adjust_prio_chain(owner, chwalk, lock, 1298 1304 next_lock, waiter, task); ··· 1602 1596 * or TASK_UNINTERRUPTIBLE) 1603 1597 * @timeout: the pre-initialized and started timer, or NULL for none 1604 1598 * @waiter: the pre-initialized rt_mutex_waiter 1599 + * @wake_q: wake_q of tasks to wake when we drop the lock->wait_lock 1605 1600 * 1606 1601 * Must be called with lock->wait_lock held and interrupts disabled 1607 1602 */ ··· 1610 1603 struct ww_acquire_ctx *ww_ctx, 1611 1604 unsigned int state, 1612 1605 struct hrtimer_sleeper *timeout, 1613 - struct rt_mutex_waiter *waiter) 1606 + struct rt_mutex_waiter *waiter, 1607 + struct wake_q_head *wake_q) 1614 1608 __releases(&lock->wait_lock) __acquires(&lock->wait_lock) 1615 1609 { 1616 1610 struct rt_mutex *rtm = container_of(lock, struct rt_mutex, rtmutex); ··· 1642 1634 owner = rt_mutex_owner(lock); 1643 1635 else 1644 1636 owner = NULL; 1637 + preempt_disable(); 1645 1638 raw_spin_unlock_irq(&lock->wait_lock); 1639 + if (wake_q) { 1640 + wake_up_q(wake_q); 1641 + wake_q_init(wake_q); 1642 + } 1643 + preempt_enable(); 1646 1644 1647 1645 if (!owner || !rtmutex_spin_on_owner(lock, waiter, owner)) 1648 1646 rt_mutex_schedule(); ··· 1722 1708 1723 1709 ret = task_blocks_on_rt_mutex(lock, waiter, current, ww_ctx, chwalk, wake_q); 1724 1710 if (likely(!ret)) 1725 - ret = rt_mutex_slowlock_block(lock, ww_ctx, state, NULL, waiter); 1711 + ret = rt_mutex_slowlock_block(lock, ww_ctx, state, NULL, waiter, wake_q); 1726 1712 1727 1713 if (likely(!ret)) { 1728 1714 /* acquired the lock */
+1 -1
kernel/locking/rtmutex_api.c
··· 383 383 raw_spin_lock_irq(&lock->wait_lock); 384 384 /* sleep on the mutex */ 385 385 set_current_state(TASK_INTERRUPTIBLE); 386 - ret = rt_mutex_slowlock_block(lock, NULL, TASK_INTERRUPTIBLE, to, waiter); 386 + ret = rt_mutex_slowlock_block(lock, NULL, TASK_INTERRUPTIBLE, to, waiter, NULL); 387 387 /* 388 388 * try_to_take_rt_mutex() sets the waiter bit unconditionally. We might 389 389 * have to fix that up.
+68 -21
kernel/sched/ext.c
··· 2747 2747 { 2748 2748 struct scx_dsp_ctx *dspc = this_cpu_ptr(scx_dsp_ctx); 2749 2749 bool prev_on_scx = prev->sched_class == &ext_sched_class; 2750 + bool prev_on_rq = prev->scx.flags & SCX_TASK_QUEUED; 2750 2751 int nr_loops = SCX_DSP_MAX_LOOPS; 2751 2752 2752 2753 lockdep_assert_rq_held(rq); ··· 2780 2779 * See scx_ops_disable_workfn() for the explanation on the 2781 2780 * bypassing test. 2782 2781 */ 2783 - if ((prev->scx.flags & SCX_TASK_QUEUED) && 2784 - prev->scx.slice && !scx_rq_bypassing(rq)) { 2782 + if (prev_on_rq && prev->scx.slice && !scx_rq_bypassing(rq)) { 2785 2783 rq->scx.flags |= SCX_RQ_BAL_KEEP; 2786 2784 goto has_tasks; 2787 2785 } ··· 2813 2813 2814 2814 flush_dispatch_buf(rq); 2815 2815 2816 + if (prev_on_rq && prev->scx.slice) { 2817 + rq->scx.flags |= SCX_RQ_BAL_KEEP; 2818 + goto has_tasks; 2819 + } 2816 2820 if (rq->scx.local_dsq.nr) 2817 2821 goto has_tasks; 2818 2822 if (consume_global_dsq(rq)) ··· 2842 2838 * Didn't find another task to run. Keep running @prev unless 2843 2839 * %SCX_OPS_ENQ_LAST is in effect. 
2844 2840 */ 2845 - if ((prev->scx.flags & SCX_TASK_QUEUED) && 2846 - (!static_branch_unlikely(&scx_ops_enq_last) || 2841 + if (prev_on_rq && (!static_branch_unlikely(&scx_ops_enq_last) || 2847 2842 scx_rq_bypassing(rq))) { 2848 2843 rq->scx.flags |= SCX_RQ_BAL_KEEP; 2849 2844 goto has_tasks; ··· 3037 3034 */ 3038 3035 if (p->scx.slice && !scx_rq_bypassing(rq)) { 3039 3036 dispatch_enqueue(&rq->scx.local_dsq, p, SCX_ENQ_HEAD); 3040 - return; 3037 + goto switch_class; 3041 3038 } 3042 3039 3043 3040 /* ··· 3054 3051 } 3055 3052 } 3056 3053 3054 + switch_class: 3057 3055 if (next && next->sched_class != &ext_sched_class) 3058 3056 switch_class(rq, next); 3059 3057 } ··· 3590 3586 cpumask_copy(idle_masks.smt, cpu_online_mask); 3591 3587 } 3592 3588 3593 - void __scx_update_idle(struct rq *rq, bool idle) 3589 + static void update_builtin_idle(int cpu, bool idle) 3594 3590 { 3595 - int cpu = cpu_of(rq); 3596 - 3597 - if (SCX_HAS_OP(update_idle) && !scx_rq_bypassing(rq)) { 3598 - SCX_CALL_OP(SCX_KF_REST, update_idle, cpu_of(rq), idle); 3599 - if (!static_branch_unlikely(&scx_builtin_idle_enabled)) 3600 - return; 3601 - } 3602 - 3603 3591 if (idle) 3604 3592 cpumask_set_cpu(cpu, idle_masks.cpu); 3605 3593 else ··· 3616 3620 } 3617 3621 } 3618 3622 #endif 3623 + } 3624 + 3625 + /* 3626 + * Update the idle state of a CPU to @idle. 3627 + * 3628 + * If @do_notify is true, ops.update_idle() is invoked to notify the scx 3629 + * scheduler of an actual idle state transition (idle to busy or vice 3630 + * versa). If @do_notify is false, only the idle state in the idle masks is 3631 + * refreshed without invoking ops.update_idle(). 3632 + * 3633 + * This distinction is necessary, because an idle CPU can be "reserved" and 3634 + * awakened via scx_bpf_pick_idle_cpu() + scx_bpf_kick_cpu(), marking it as 3635 + * busy even if no tasks are dispatched. In this case, the CPU may return 3636 + * to idle without a true state transition. 
Refreshing the idle masks 3637 + * without invoking ops.update_idle() ensures accurate idle state tracking 3638 + * while avoiding unnecessary updates and maintaining balanced state 3639 + * transitions. 3640 + */ 3641 + void __scx_update_idle(struct rq *rq, bool idle, bool do_notify) 3642 + { 3643 + int cpu = cpu_of(rq); 3644 + 3645 + lockdep_assert_rq_held(rq); 3646 + 3647 + /* 3648 + * Trigger ops.update_idle() only when transitioning from a task to 3649 + * the idle thread and vice versa. 3650 + * 3651 + * Idle transitions are indicated by do_notify being set to true, 3652 + * managed by put_prev_task_idle()/set_next_task_idle(). 3653 + */ 3654 + if (SCX_HAS_OP(update_idle) && do_notify && !scx_rq_bypassing(rq)) 3655 + SCX_CALL_OP(SCX_KF_REST, update_idle, cpu_of(rq), idle); 3656 + 3657 + /* 3658 + * Update the idle masks: 3659 + * - for real idle transitions (do_notify == true) 3660 + * - for idle-to-idle transitions (indicated by the previous task 3661 + * being the idle thread, managed by pick_task_idle()) 3662 + * 3663 + * Skip updating idle masks if the previous task is not the idle 3664 + * thread, since set_next_task_idle() has already handled it when 3665 + * transitioning from a task to the idle thread (calling this 3666 + * function with do_notify == true). 3667 + * 3668 + * In this way we can avoid updating the idle masks twice, 3669 + * unnecessarily. 
3670 + */ 3671 + if (static_branch_likely(&scx_builtin_idle_enabled)) 3672 + if (do_notify || is_idle_task(rq->curr)) 3673 + update_builtin_idle(cpu, idle); 3619 3674 } 3620 3675 3621 3676 static void handle_hotplug(struct rq *rq, bool online) ··· 4791 4744 */ 4792 4745 for_each_possible_cpu(cpu) { 4793 4746 struct rq *rq = cpu_rq(cpu); 4794 - struct rq_flags rf; 4795 4747 struct task_struct *p, *n; 4796 4748 4797 - rq_lock(rq, &rf); 4749 + raw_spin_rq_lock(rq); 4798 4750 4799 4751 if (bypass) { 4800 4752 WARN_ON_ONCE(rq->scx.flags & SCX_RQ_BYPASSING); ··· 4809 4763 * sees scx_rq_bypassing() before moving tasks to SCX. 4810 4764 */ 4811 4765 if (!scx_enabled()) { 4812 - rq_unlock_irqrestore(rq, &rf); 4766 + raw_spin_rq_unlock(rq); 4813 4767 continue; 4814 4768 } 4815 4769 ··· 4829 4783 sched_enq_and_set_task(&ctx); 4830 4784 } 4831 4785 4832 - rq_unlock(rq, &rf); 4833 - 4834 4786 /* resched to restore ticks and idle state */ 4835 - resched_cpu(cpu); 4787 + if (cpu_online(cpu) || cpu == smp_processor_id()) 4788 + resched_curr(rq); 4789 + 4790 + raw_spin_rq_unlock(rq); 4836 4791 } 4837 4792 4838 4793 atomic_dec(&scx_ops_breather_depth); ··· 7060 7013 return -ENOENT; 7061 7014 7062 7015 INIT_LIST_HEAD(&kit->cursor.node); 7063 - kit->cursor.flags |= SCX_DSQ_LNODE_ITER_CURSOR | flags; 7016 + kit->cursor.flags = SCX_DSQ_LNODE_ITER_CURSOR | flags; 7064 7017 kit->cursor.priv = READ_ONCE(kit->dsq->seq); 7065 7018 7066 7019 return 0;
+4 -4
kernel/sched/ext.h
··· 57 57 #endif /* CONFIG_SCHED_CLASS_EXT */ 58 58 59 59 #if defined(CONFIG_SCHED_CLASS_EXT) && defined(CONFIG_SMP) 60 - void __scx_update_idle(struct rq *rq, bool idle); 60 + void __scx_update_idle(struct rq *rq, bool idle, bool do_notify); 61 61 62 - static inline void scx_update_idle(struct rq *rq, bool idle) 62 + static inline void scx_update_idle(struct rq *rq, bool idle, bool do_notify) 63 63 { 64 64 if (scx_enabled()) 65 - __scx_update_idle(rq, idle); 65 + __scx_update_idle(rq, idle, do_notify); 66 66 } 67 67 #else 68 - static inline void scx_update_idle(struct rq *rq, bool idle) {} 68 + static inline void scx_update_idle(struct rq *rq, bool idle, bool do_notify) {} 69 69 #endif 70 70 71 71 #ifdef CONFIG_CGROUP_SCHED
+3 -2
kernel/sched/idle.c
··· 452 452 static void put_prev_task_idle(struct rq *rq, struct task_struct *prev, struct task_struct *next) 453 453 { 454 454 dl_server_update_idle_time(rq, prev); 455 - scx_update_idle(rq, false); 455 + scx_update_idle(rq, false, true); 456 456 } 457 457 458 458 static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first) 459 459 { 460 460 update_idle_core(rq); 461 - scx_update_idle(rq, true); 461 + scx_update_idle(rq, true, true); 462 462 schedstat_inc(rq->sched_goidle); 463 463 next->se.exec_start = rq_clock_task(rq); 464 464 } 465 465 466 466 struct task_struct *pick_task_idle(struct rq *rq) 467 467 { 468 + scx_update_idle(rq, true, false); 468 469 return rq->idle; 469 470 } 470 471
+1 -1
kernel/static_call_inline.c
··· 15 15 extern struct static_call_tramp_key __start_static_call_tramp_key[], 16 16 __stop_static_call_tramp_key[]; 17 17 18 - static int static_call_initialized; 18 + int static_call_initialized; 19 19 20 20 /* 21 21 * Must be called before early_initcall() to be effective.
+8 -2
kernel/trace/fgraph.c
··· 833 833 #endif 834 834 { 835 835 for_each_set_bit(i, &bitmap, sizeof(bitmap) * BITS_PER_BYTE) { 836 - struct fgraph_ops *gops = fgraph_array[i]; 836 + struct fgraph_ops *gops = READ_ONCE(fgraph_array[i]); 837 837 838 838 if (gops == &fgraph_stub) 839 839 continue; ··· 1215 1215 static int start_graph_tracing(void) 1216 1216 { 1217 1217 unsigned long **ret_stack_list; 1218 - int ret; 1218 + int ret, cpu; 1219 1219 1220 1220 ret_stack_list = kcalloc(FTRACE_RETSTACK_ALLOC_SIZE, 1221 1221 sizeof(*ret_stack_list), GFP_KERNEL); 1222 1222 1223 1223 if (!ret_stack_list) 1224 1224 return -ENOMEM; 1225 + 1226 + /* The cpu_boot init_task->ret_stack will never be freed */ 1227 + for_each_online_cpu(cpu) { 1228 + if (!idle_task(cpu)->ret_stack) 1229 + ftrace_graph_init_idle_task(idle_task(cpu), cpu); 1230 + } 1225 1231 1226 1232 do { 1227 1233 ret = alloc_retstack_tasklist(ret_stack_list);
+2 -6
kernel/trace/ftrace.c
··· 902 902 } 903 903 904 904 static struct fgraph_ops fprofiler_ops = { 905 - .ops = { 906 - .flags = FTRACE_OPS_FL_INITIALIZED, 907 - INIT_OPS_HASH(fprofiler_ops.ops) 908 - }, 909 905 .entryfunc = &profile_graph_entry, 910 906 .retfunc = &profile_graph_return, 911 907 }; 912 908 913 909 static int register_ftrace_profiler(void) 914 910 { 911 + ftrace_ops_set_global_filter(&fprofiler_ops.ops); 915 912 return register_ftrace_graph(&fprofiler_ops); 916 913 } 917 914 ··· 919 922 #else 920 923 static struct ftrace_ops ftrace_profile_ops __read_mostly = { 921 924 .func = function_profile_call, 922 - .flags = FTRACE_OPS_FL_INITIALIZED, 923 - INIT_OPS_HASH(ftrace_profile_ops) 924 925 }; 925 926 926 927 static int register_ftrace_profiler(void) 927 928 { 929 + ftrace_ops_set_global_filter(&ftrace_profile_ops); 928 930 return register_ftrace_function(&ftrace_profile_ops); 929 931 } 930 932
+5 -1
kernel/trace/ring_buffer.c
··· 7019 7019 lockdep_assert_held(&cpu_buffer->mapping_lock); 7020 7020 7021 7021 nr_subbufs = cpu_buffer->nr_pages + 1; /* + reader-subbuf */ 7022 - nr_pages = ((nr_subbufs + 1) << subbuf_order) - pgoff; /* + meta-page */ 7022 + nr_pages = ((nr_subbufs + 1) << subbuf_order); /* + meta-page */ 7023 + if (nr_pages <= pgoff) 7024 + return -EINVAL; 7025 + 7026 + nr_pages -= pgoff; 7023 7027 7024 7028 nr_vma_pages = vma_pages(vma); 7025 7029 if (!nr_vma_pages || nr_vma_pages > nr_pages)
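The ring_buffer.c hunk above validates `pgoff` against the page count before subtracting, instead of subtracting first. With unsigned arithmetic, subtracting an oversized offset underflows to a huge value that later range checks cannot catch. A sketch of the guarded-subtraction pattern (names are illustrative, not the kernel's):

```c
#include <stddef.h>

/* Sketch of the fix's pattern: validate the offset before subtracting,
 * because "total - pgoff" on unsigned types wraps around when
 * pgoff > total and would sail past subsequent bounds checks. */
static long pages_after_offset(unsigned long total_pages, unsigned long pgoff)
{
	if (total_pages <= pgoff)
		return -1;	/* the kernel returns -EINVAL here */
	return (long)(total_pages - pgoff);
}
```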
+65 -202
kernel/trace/trace.c
··· 3611 3611 } 3612 3612 3613 3613 /* Returns true if the string is safe to dereference from an event */ 3614 - static bool trace_safe_str(struct trace_iterator *iter, const char *str, 3615 - bool star, int len) 3614 + static bool trace_safe_str(struct trace_iterator *iter, const char *str) 3616 3615 { 3617 3616 unsigned long addr = (unsigned long)str; 3618 3617 struct trace_event *trace_event; 3619 3618 struct trace_event_call *event; 3620 - 3621 - /* Ignore strings with no length */ 3622 - if (star && !len) 3623 - return true; 3624 3619 3625 3620 /* OK if part of the event data */ 3626 3621 if ((addr >= (unsigned long)iter->ent) && ··· 3656 3661 return false; 3657 3662 } 3658 3663 3659 - static DEFINE_STATIC_KEY_FALSE(trace_no_verify); 3660 - 3661 - static int test_can_verify_check(const char *fmt, ...) 3662 - { 3663 - char buf[16]; 3664 - va_list ap; 3665 - int ret; 3666 - 3667 - /* 3668 - * The verifier is dependent on vsnprintf() modifies the va_list 3669 - * passed to it, where it is sent as a reference. Some architectures 3670 - * (like x86_32) passes it by value, which means that vsnprintf() 3671 - * does not modify the va_list passed to it, and the verifier 3672 - * would then need to be able to understand all the values that 3673 - * vsnprintf can use. If it is passed by value, then the verifier 3674 - * is disabled. 
3675 - */ 3676 - va_start(ap, fmt); 3677 - vsnprintf(buf, 16, "%d", ap); 3678 - ret = va_arg(ap, int); 3679 - va_end(ap); 3680 - 3681 - return ret; 3682 - } 3683 - 3684 - static void test_can_verify(void) 3685 - { 3686 - if (!test_can_verify_check("%d %d", 0, 1)) { 3687 - pr_info("trace event string verifier disabled\n"); 3688 - static_branch_inc(&trace_no_verify); 3689 - } 3690 - } 3691 - 3692 3664 /** 3693 - * trace_check_vprintf - Check dereferenced strings while writing to the seq buffer 3665 + * ignore_event - Check dereferenced fields while writing to the seq buffer 3694 3666 * @iter: The iterator that holds the seq buffer and the event being printed 3695 - * @fmt: The format used to print the event 3696 - * @ap: The va_list holding the data to print from @fmt. 3697 3667 * 3698 - * This writes the data into the @iter->seq buffer using the data from 3699 - * @fmt and @ap. If the format has a %s, then the source of the string 3700 - * is examined to make sure it is safe to print, otherwise it will 3701 - * warn and print "[UNSAFE MEMORY]" in place of the dereferenced string 3702 - * pointer. 3668 + * At boot up, test_event_printk() will flag any event that dereferences 3669 + * a string with "%s" that does exist in the ring buffer. It may still 3670 + * be valid, as the string may point to a static string in the kernel 3671 + * rodata that never gets freed. But if the string pointer is pointing 3672 + * to something that was allocated, there's a chance that it can be freed 3673 + * by the time the user reads the trace. This would cause a bad memory 3674 + * access by the kernel and possibly crash the system. 3675 + * 3676 + * This function will check if the event has any fields flagged as needing 3677 + * to be checked at runtime and perform those checks. 3678 + * 3679 + * If it is found that a field is unsafe, it will write into the @iter->seq 3680 + * a message stating what was found to be unsafe. 
3681 + * 3682 + * @return: true if the event is unsafe and should be ignored, 3683 + * false otherwise. 3703 3684 */ 3704 - void trace_check_vprintf(struct trace_iterator *iter, const char *fmt, 3705 - va_list ap) 3685 + bool ignore_event(struct trace_iterator *iter) 3706 3686 { 3707 - long text_delta = 0; 3708 - long data_delta = 0; 3709 - const char *p = fmt; 3710 - const char *str; 3711 - bool good; 3712 - int i, j; 3687 + struct ftrace_event_field *field; 3688 + struct trace_event *trace_event; 3689 + struct trace_event_call *event; 3690 + struct list_head *head; 3691 + struct trace_seq *seq; 3692 + const void *ptr; 3713 3693 3714 - if (WARN_ON_ONCE(!fmt)) 3715 - return; 3694 + trace_event = ftrace_find_event(iter->ent->type); 3716 3695 3717 - if (static_branch_unlikely(&trace_no_verify)) 3718 - goto print; 3696 + seq = &iter->seq; 3719 3697 3720 - /* 3721 - * When the kernel is booted with the tp_printk command line 3722 - * parameter, trace events go directly through to printk(). 3723 - * It also is checked by this function, but it does not 3724 - * have an associated trace_array (tr) for it. 
3725 - */ 3726 - if (iter->tr) { 3727 - text_delta = iter->tr->text_delta; 3728 - data_delta = iter->tr->data_delta; 3698 + if (!trace_event) { 3699 + trace_seq_printf(seq, "EVENT ID %d NOT FOUND?\n", iter->ent->type); 3700 + return true; 3729 3701 } 3730 3702 3731 - /* Don't bother checking when doing a ftrace_dump() */ 3732 - if (iter->fmt == static_fmt_buf) 3733 - goto print; 3703 + event = container_of(trace_event, struct trace_event_call, event); 3704 + if (!(event->flags & TRACE_EVENT_FL_TEST_STR)) 3705 + return false; 3734 3706 3735 - while (*p) { 3736 - bool star = false; 3737 - int len = 0; 3707 + head = trace_get_fields(event); 3708 + if (!head) { 3709 + trace_seq_printf(seq, "FIELDS FOR EVENT '%s' NOT FOUND?\n", 3710 + trace_event_name(event)); 3711 + return true; 3712 + } 3738 3713 3739 - j = 0; 3714 + /* Offsets are from the iter->ent that points to the raw event */ 3715 + ptr = iter->ent; 3740 3716 3741 - /* 3742 - * We only care about %s and variants 3743 - * as well as %p[sS] if delta is non-zero 3744 - */ 3745 - for (i = 0; p[i]; i++) { 3746 - if (i + 1 >= iter->fmt_size) { 3747 - /* 3748 - * If we can't expand the copy buffer, 3749 - * just print it. 
3750 - */ 3751 - if (!trace_iter_expand_format(iter)) 3752 - goto print; 3753 - } 3717 + list_for_each_entry(field, head, link) { 3718 + const char *str; 3719 + bool good; 3754 3720 3755 - if (p[i] == '\\' && p[i+1]) { 3756 - i++; 3757 - continue; 3758 - } 3759 - if (p[i] == '%') { 3760 - /* Need to test cases like %08.*s */ 3761 - for (j = 1; p[i+j]; j++) { 3762 - if (isdigit(p[i+j]) || 3763 - p[i+j] == '.') 3764 - continue; 3765 - if (p[i+j] == '*') { 3766 - star = true; 3767 - continue; 3768 - } 3769 - break; 3770 - } 3771 - if (p[i+j] == 's') 3772 - break; 3773 - 3774 - if (text_delta && p[i+1] == 'p' && 3775 - ((p[i+2] == 's' || p[i+2] == 'S'))) 3776 - break; 3777 - 3778 - star = false; 3779 - } 3780 - j = 0; 3781 - } 3782 - /* If no %s found then just print normally */ 3783 - if (!p[i]) 3784 - break; 3785 - 3786 - /* Copy up to the %s, and print that */ 3787 - strncpy(iter->fmt, p, i); 3788 - iter->fmt[i] = '\0'; 3789 - trace_seq_vprintf(&iter->seq, iter->fmt, ap); 3790 - 3791 - /* Add delta to %pS pointers */ 3792 - if (p[i+1] == 'p') { 3793 - unsigned long addr; 3794 - char fmt[4]; 3795 - 3796 - fmt[0] = '%'; 3797 - fmt[1] = 'p'; 3798 - fmt[2] = p[i+2]; /* Either %ps or %pS */ 3799 - fmt[3] = '\0'; 3800 - 3801 - addr = va_arg(ap, unsigned long); 3802 - addr += text_delta; 3803 - trace_seq_printf(&iter->seq, fmt, (void *)addr); 3804 - 3805 - p += i + 3; 3721 + if (!field->needs_test) 3806 3722 continue; 3807 - } 3808 3723 3809 - /* 3810 - * If iter->seq is full, the above call no longer guarantees 3811 - * that ap is in sync with fmt processing, and further calls 3812 - * to va_arg() can return wrong positional arguments. 3813 - * 3814 - * Ensure that ap is no longer used in this case. 
3815 - */ 3816 - if (iter->seq.full) { 3817 - p = ""; 3818 - break; 3819 - } 3724 + str = *(const char **)(ptr + field->offset); 3820 3725 3821 - if (star) 3822 - len = va_arg(ap, int); 3823 - 3824 - /* The ap now points to the string data of the %s */ 3825 - str = va_arg(ap, const char *); 3826 - 3827 - good = trace_safe_str(iter, str, star, len); 3828 - 3829 - /* Could be from the last boot */ 3830 - if (data_delta && !good) { 3831 - str += data_delta; 3832 - good = trace_safe_str(iter, str, star, len); 3833 - } 3726 + good = trace_safe_str(iter, str); 3834 3727 3835 3728 /* 3836 3729 * If you hit this warning, it is likely that the ··· 3729 3846 * instead. See samples/trace_events/trace-events-sample.h 3730 3847 * for reference. 3731 3848 */ 3732 - if (WARN_ONCE(!good, "fmt: '%s' current_buffer: '%s'", 3733 - fmt, seq_buf_str(&iter->seq.seq))) { 3734 - int ret; 3735 - 3736 - /* Try to safely read the string */ 3737 - if (star) { 3738 - if (len + 1 > iter->fmt_size) 3739 - len = iter->fmt_size - 1; 3740 - if (len < 0) 3741 - len = 0; 3742 - ret = copy_from_kernel_nofault(iter->fmt, str, len); 3743 - iter->fmt[len] = 0; 3744 - star = false; 3745 - } else { 3746 - ret = strncpy_from_kernel_nofault(iter->fmt, str, 3747 - iter->fmt_size); 3748 - } 3749 - if (ret < 0) 3750 - trace_seq_printf(&iter->seq, "(0x%px)", str); 3751 - else 3752 - trace_seq_printf(&iter->seq, "(0x%px:%s)", 3753 - str, iter->fmt); 3754 - str = "[UNSAFE-MEMORY]"; 3755 - strcpy(iter->fmt, "%s"); 3756 - } else { 3757 - strncpy(iter->fmt, p + i, j + 1); 3758 - iter->fmt[j+1] = '\0'; 3849 + if (WARN_ONCE(!good, "event '%s' has unsafe pointer field '%s'", 3850 + trace_event_name(event), field->name)) { 3851 + trace_seq_printf(seq, "EVENT %s: HAS UNSAFE POINTER FIELD '%s'\n", 3852 + trace_event_name(event), field->name); 3853 + return true; 3759 3854 } 3760 - if (star) 3761 - trace_seq_printf(&iter->seq, iter->fmt, len, str); 3762 - else 3763 - trace_seq_printf(&iter->seq, iter->fmt, str); 3764 - 3765 
- p += i + j + 1; 3766 3855 } 3767 - print: 3768 - if (*p) 3769 - trace_seq_vprintf(&iter->seq, p, ap); 3856 + return false; 3770 3857 } 3771 3858 3772 3859 const char *trace_event_format(struct trace_iterator *iter, const char *fmt) ··· 4206 4353 if (event) { 4207 4354 if (tr->trace_flags & TRACE_ITER_FIELDS) 4208 4355 return print_event_fields(iter, event); 4356 + /* 4357 + * For TRACE_EVENT() events, the print_fmt is not 4358 + * safe to use if the array has delta offsets 4359 + * Force printing via the fields. 4360 + */ 4361 + if ((tr->text_delta || tr->data_delta) && 4362 + event->type > __TRACE_LAST_TYPE) 4363 + return print_event_fields(iter, event); 4364 + 4209 4365 return event->funcs->trace(iter, sym_flags, event); 4210 4366 } 4211 4367 ··· 5086 5224 struct trace_array *tr = file_inode(filp)->i_private; 5087 5225 cpumask_var_t tracing_cpumask_new; 5088 5226 int err; 5227 + 5228 + if (count == 0 || count > KMALLOC_MAX_SIZE) 5229 + return -EINVAL; 5089 5230 5090 5231 if (!zalloc_cpumask_var(&tracing_cpumask_new, GFP_KERNEL)) 5091 5232 return -ENOMEM; ··· 10641 10776 apply_trace_boot_options(); 10642 10777 10643 10778 register_snapshot_cmd(); 10644 - 10645 - test_can_verify(); 10646 10779 10647 10780 return 0; 10648 10781
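The trace.c rewrite above replaces runtime re-parsing of the format string with per-field checks: boot-time analysis flags string fields as `needs_test`, and `ignore_event()` then only verifies those flagged pointers before printing. The core safety test kept from `trace_safe_str()` is whether the pointer lands inside memory known to outlive the event, such as the event record itself. A simplified sketch of that containment check (bounds are illustrative; the kernel also accepts rodata and registered tracepoint strings):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of the trace_safe_str() idea: a string pointer embedded in an
 * event is considered safe if it points inside the event record itself,
 * since that memory lives as long as the ring buffer entry. */
static bool str_in_event(const void *ent, size_t ent_len, const char *str)
{
	uintptr_t a  = (uintptr_t)str;
	uintptr_t lo = (uintptr_t)ent;

	return a >= lo && a < lo + ent_len;
}
```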
+3 -3
kernel/trace/trace.h
··· 667 667 668 668 bool trace_is_tracepoint_string(const char *str); 669 669 const char *trace_event_format(struct trace_iterator *iter, const char *fmt); 670 - void trace_check_vprintf(struct trace_iterator *iter, const char *fmt, 671 - va_list ap) __printf(2, 0); 672 670 char *trace_iter_expand_format(struct trace_iterator *iter); 671 + bool ignore_event(struct trace_iterator *iter); 673 672 674 673 int trace_empty(struct trace_iterator *iter); 675 674 ··· 1412 1413 int filter_type; 1413 1414 int offset; 1414 1415 int size; 1415 - int is_signed; 1416 + unsigned int is_signed:1; 1417 + unsigned int needs_test:1; 1416 1418 int len; 1417 1419 }; 1418 1420
+189 -50
kernel/trace/trace_events.c
··· 82 82 } 83 83 84 84 static struct ftrace_event_field * 85 - __find_event_field(struct list_head *head, char *name) 85 + __find_event_field(struct list_head *head, const char *name) 86 86 { 87 87 struct ftrace_event_field *field; 88 88 ··· 114 114 115 115 static int __trace_define_field(struct list_head *head, const char *type, 116 116 const char *name, int offset, int size, 117 - int is_signed, int filter_type, int len) 117 + int is_signed, int filter_type, int len, 118 + int need_test) 118 119 { 119 120 struct ftrace_event_field *field; 120 121 ··· 134 133 field->offset = offset; 135 134 field->size = size; 136 135 field->is_signed = is_signed; 136 + field->needs_test = need_test; 137 137 field->len = len; 138 138 139 139 list_add(&field->link, head); ··· 153 151 154 152 head = trace_get_fields(call); 155 153 return __trace_define_field(head, type, name, offset, size, 156 - is_signed, filter_type, 0); 154 + is_signed, filter_type, 0, 0); 157 155 } 158 156 EXPORT_SYMBOL_GPL(trace_define_field); 159 157 160 158 static int trace_define_field_ext(struct trace_event_call *call, const char *type, 161 159 const char *name, int offset, int size, int is_signed, 162 - int filter_type, int len) 160 + int filter_type, int len, int need_test) 163 161 { 164 162 struct list_head *head; 165 163 ··· 168 166 169 167 head = trace_get_fields(call); 170 168 return __trace_define_field(head, type, name, offset, size, 171 - is_signed, filter_type, len); 169 + is_signed, filter_type, len, need_test); 172 170 } 173 171 174 172 #define __generic_field(type, item, filter_type) \ 175 173 ret = __trace_define_field(&ftrace_generic_fields, #type, \ 176 174 #item, 0, 0, is_signed_type(type), \ 177 - filter_type, 0); \ 175 + filter_type, 0, 0); \ 178 176 if (ret) \ 179 177 return ret; 180 178 ··· 183 181 "common_" #item, \ 184 182 offsetof(typeof(ent), item), \ 185 183 sizeof(ent.item), \ 186 - is_signed_type(type), FILTER_OTHER, 0); \ 184 + is_signed_type(type), FILTER_OTHER, \ 185 + 0, 0); 
\ 187 186 if (ret) \ 188 187 return ret; 189 188 ··· 247 244 return tail->offset + tail->size; 248 245 } 249 246 250 - /* 251 - * Check if the referenced field is an array and return true, 252 - * as arrays are OK to dereference. 253 - */ 254 - static bool test_field(const char *fmt, struct trace_event_call *call) 247 + 248 + static struct trace_event_fields *find_event_field(const char *fmt, 249 + struct trace_event_call *call) 255 250 { 256 251 struct trace_event_fields *field = call->class->fields_array; 257 - const char *array_descriptor; 258 252 const char *p = fmt; 259 253 int len; 260 254 261 255 if (!(len = str_has_prefix(fmt, "REC->"))) 262 - return false; 256 + return NULL; 263 257 fmt += len; 264 258 for (p = fmt; *p; p++) { 265 259 if (!isalnum(*p) && *p != '_') ··· 265 265 len = p - fmt; 266 266 267 267 for (; field->type; field++) { 268 - if (strncmp(field->name, fmt, len) || 269 - field->name[len]) 268 + if (strncmp(field->name, fmt, len) || field->name[len]) 270 269 continue; 271 - array_descriptor = strchr(field->type, '['); 272 - /* This is an array and is OK to dereference. */ 273 - return array_descriptor != NULL; 270 + 271 + return field; 272 + } 273 + return NULL; 274 + } 275 + 276 + /* 277 + * Check if the referenced field is an array and return true, 278 + * as arrays are OK to dereference. 279 + */ 280 + static bool test_field(const char *fmt, struct trace_event_call *call) 281 + { 282 + struct trace_event_fields *field; 283 + 284 + field = find_event_field(fmt, call); 285 + if (!field) 286 + return false; 287 + 288 + /* This is an array and is OK to dereference. 
*/ 289 + return strchr(field->type, '[') != NULL; 290 + } 291 + 292 + /* Look for a string within an argument */ 293 + static bool find_print_string(const char *arg, const char *str, const char *end) 294 + { 295 + const char *r; 296 + 297 + r = strstr(arg, str); 298 + return r && r < end; 299 + } 300 + 301 + /* Return true if the argument pointer is safe */ 302 + static bool process_pointer(const char *fmt, int len, struct trace_event_call *call) 303 + { 304 + const char *r, *e, *a; 305 + 306 + e = fmt + len; 307 + 308 + /* Find the REC-> in the argument */ 309 + r = strstr(fmt, "REC->"); 310 + if (r && r < e) { 311 + /* 312 + * Addresses of events on the buffer, or an array on the buffer is 313 + * OK to dereference. There's ways to fool this, but 314 + * this is to catch common mistakes, not malicious code. 315 + */ 316 + a = strchr(fmt, '&'); 317 + if ((a && (a < r)) || test_field(r, call)) 318 + return true; 319 + } else if (find_print_string(fmt, "__get_dynamic_array(", e)) { 320 + return true; 321 + } else if (find_print_string(fmt, "__get_rel_dynamic_array(", e)) { 322 + return true; 323 + } else if (find_print_string(fmt, "__get_dynamic_array_len(", e)) { 324 + return true; 325 + } else if (find_print_string(fmt, "__get_rel_dynamic_array_len(", e)) { 326 + return true; 327 + } else if (find_print_string(fmt, "__get_sockaddr(", e)) { 328 + return true; 329 + } else if (find_print_string(fmt, "__get_rel_sockaddr(", e)) { 330 + return true; 274 331 } 275 332 return false; 333 + } 334 + 335 + /* Return true if the string is safe */ 336 + static bool process_string(const char *fmt, int len, struct trace_event_call *call) 337 + { 338 + struct trace_event_fields *field; 339 + const char *r, *e, *s; 340 + 341 + e = fmt + len; 342 + 343 + /* 344 + * There are several helper functions that return strings. 345 + * If the argument contains a function, then assume its field is valid. 
346 + * It is considered that the argument has a function if it has: 347 + * alphanumeric or '_' before a parenthesis. 348 + */ 349 + s = fmt; 350 + do { 351 + r = strstr(s, "("); 352 + if (!r || r >= e) 353 + break; 354 + for (int i = 1; r - i >= s; i++) { 355 + char ch = *(r - i); 356 + if (isspace(ch)) 357 + continue; 358 + if (isalnum(ch) || ch == '_') 359 + return true; 360 + /* Anything else, this isn't a function */ 361 + break; 362 + } 363 + /* A function could be wrapped in parethesis, try the next one */ 364 + s = r + 1; 365 + } while (s < e); 366 + 367 + /* 368 + * Check for arrays. If the argument has: foo[REC->val] 369 + * then it is very likely that foo is an array of strings 370 + * that are safe to use. 371 + */ 372 + r = strstr(s, "["); 373 + if (r && r < e) { 374 + r = strstr(r, "REC->"); 375 + if (r && r < e) 376 + return true; 377 + } 378 + 379 + /* 380 + * If there's any strings in the argument consider this arg OK as it 381 + * could be: REC->field ? "foo" : "bar" and we don't want to get into 382 + * verifying that logic here. 
383 + */ 384 + if (find_print_string(fmt, "\"", e)) 385 + return true; 386 + 387 + /* Dereferenced strings are also valid like any other pointer */ 388 + if (process_pointer(fmt, len, call)) 389 + return true; 390 + 391 + /* Make sure the field is found */ 392 + field = find_event_field(fmt, call); 393 + if (!field) 394 + return false; 395 + 396 + /* Test this field's string before printing the event */ 397 + call->flags |= TRACE_EVENT_FL_TEST_STR; 398 + field->needs_test = 1; 399 + 400 + return true; 276 401 } 277 402 278 403 /* ··· 409 284 static void test_event_printk(struct trace_event_call *call) 410 285 { 411 286 u64 dereference_flags = 0; 287 + u64 string_flags = 0; 412 288 bool first = true; 413 - const char *fmt, *c, *r, *a; 289 + const char *fmt; 414 290 int parens = 0; 415 291 char in_quote = 0; 416 292 int start_arg = 0; 417 293 int arg = 0; 418 - int i; 294 + int i, e; 419 295 420 296 fmt = call->print_fmt; 421 297 ··· 500 374 star = true; 501 375 continue; 502 376 } 503 - if ((fmt[i + j] == 's') && star) 504 - arg++; 377 + if ((fmt[i + j] == 's')) { 378 + if (star) 379 + arg++; 380 + if (WARN_ONCE(arg == 63, 381 + "Too many args for event: %s", 382 + trace_event_name(call))) 383 + return; 384 + dereference_flags |= 1ULL << arg; 385 + string_flags |= 1ULL << arg; 386 + } 505 387 break; 506 388 } 507 389 break; ··· 537 403 case ',': 538 404 if (in_quote || parens) 539 405 continue; 406 + e = i; 540 407 i++; 541 408 while (isspace(fmt[i])) 542 409 i++; 543 - start_arg = i; 544 - if (!(dereference_flags & (1ULL << arg))) 545 - goto next_arg; 546 410 547 - /* Find the REC-> in the argument */ 548 - c = strchr(fmt + i, ','); 549 - r = strstr(fmt + i, "REC->"); 550 - if (r && (!c || r < c)) { 551 - /* 552 - * Addresses of events on the buffer, 553 - * or an array on the buffer is 554 - * OK to dereference. 555 - * There's ways to fool this, but 556 - * this is to catch common mistakes, 557 - * not malicious code. 
558 - */ 559 - a = strchr(fmt + i, '&'); 560 - if ((a && (a < r)) || test_field(r, call)) 561 - dereference_flags &= ~(1ULL << arg); 562 - } else if ((r = strstr(fmt + i, "__get_dynamic_array(")) && 563 - (!c || r < c)) { 564 - dereference_flags &= ~(1ULL << arg); 565 - } else if ((r = strstr(fmt + i, "__get_sockaddr(")) && 566 - (!c || r < c)) { 567 - dereference_flags &= ~(1ULL << arg); 411 + /* 412 + * If start_arg is zero, then this is the start of the 413 + * first argument. The processing of the argument happens 414 + * when the end of the argument is found, as it needs to 415 + * handle paranthesis and such. 416 + */ 417 + if (!start_arg) { 418 + start_arg = i; 419 + /* Balance out the i++ in the for loop */ 420 + i--; 421 + continue; 568 422 } 569 423 570 - next_arg: 571 - i--; 424 + if (dereference_flags & (1ULL << arg)) { 425 + if (string_flags & (1ULL << arg)) { 426 + if (process_string(fmt + start_arg, e - start_arg, call)) 427 + dereference_flags &= ~(1ULL << arg); 428 + } else if (process_pointer(fmt + start_arg, e - start_arg, call)) 429 + dereference_flags &= ~(1ULL << arg); 430 + } 431 + 432 + start_arg = i; 572 433 arg++; 434 + /* Balance out the i++ in the for loop */ 435 + i--; 573 436 } 437 + } 438 + 439 + if (dereference_flags & (1ULL << arg)) { 440 + if (string_flags & (1ULL << arg)) { 441 + if (process_string(fmt + start_arg, i - start_arg, call)) 442 + dereference_flags &= ~(1ULL << arg); 443 + } else if (process_pointer(fmt + start_arg, i - start_arg, call)) 444 + dereference_flags &= ~(1ULL << arg); 574 445 } 575 446 576 447 /* ··· 2610 2471 ret = trace_define_field_ext(call, field->type, field->name, 2611 2472 offset, field->size, 2612 2473 field->is_signed, field->filter_type, 2613 - field->len); 2474 + field->len, field->needs_test); 2614 2475 if (WARN_ON_ONCE(ret)) { 2615 2476 pr_err("error code is %d\n", ret); 2616 2477 break;
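The new `process_string()` in the trace_events.c hunk classifies each `%s` argument with a set of heuristics; one of them treats the argument as safe if it calls a helper function, detected as an identifier character appearing (skipping whitespace) immediately before a `(`. A simplified standalone version of just that heuristic (the kernel additionally bounds the scan to the argument and checks array indexing, string literals, and event fields):

```c
#include <stdbool.h>
#include <ctype.h>
#include <string.h>

/* Sketch of one process_string() heuristic: the argument "has a
 * function" if some '(' is preceded, ignoring whitespace, by an
 * alphanumeric or '_' character. A '(' preceded by anything else is
 * treated as grouping, and the scan moves on to the next '('. */
static bool arg_has_function(const char *s)
{
	const char *r = s;

	while ((r = strchr(r, '(')) != NULL) {
		for (const char *p = r - 1; p >= s; p--) {
			if (isspace((unsigned char)*p))
				continue;
			if (isalnum((unsigned char)*p) || *p == '_')
				return true;
			break;	/* anything else: not a call */
		}
		r++;	/* wrapping parenthesis, try the next one */
	}
	return false;
}
```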
+2 -1
kernel/trace/trace_functions.c
··· 176 176 tracing_reset_online_cpus(&tr->array_buffer); 177 177 } 178 178 179 - #ifdef CONFIG_FUNCTION_GRAPH_TRACER 179 + /* fregs are guaranteed not to be NULL if HAVE_DYNAMIC_FTRACE_WITH_ARGS is set */ 180 + #if defined(CONFIG_FUNCTION_GRAPH_TRACER) && defined(CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS) 180 181 static __always_inline unsigned long 181 182 function_get_true_parent_ip(unsigned long parent_ip, struct ftrace_regs *fregs) 182 183 {
+5 -3
kernel/trace/trace_kprobe.c
··· 725 725 726 726 static struct notifier_block trace_kprobe_module_nb = { 727 727 .notifier_call = trace_kprobe_module_callback, 728 - .priority = 1 /* Invoked after kprobe module callback */ 728 + .priority = 2 /* Invoked after kprobe and jump_label module callback */ 729 729 }; 730 730 static int trace_kprobe_register_module_notifier(void) 731 731 { ··· 940 940 } 941 941 /* a symbol specified */ 942 942 symbol = kstrdup(argv[1], GFP_KERNEL); 943 - if (!symbol) 944 - return -ENOMEM; 943 + if (!symbol) { 944 + ret = -ENOMEM; 945 + goto error; 946 + } 945 947 946 948 tmp = strchr(symbol, '%'); 947 949 if (tmp) {
+5 -1
kernel/trace/trace_output.c
··· 317 317 318 318 void trace_event_printf(struct trace_iterator *iter, const char *fmt, ...) 319 319 { 320 + struct trace_seq *s = &iter->seq; 320 321 va_list ap; 321 322 323 + if (ignore_event(iter)) 324 + return; 325 + 322 326 va_start(ap, fmt); 323 - trace_check_vprintf(iter, trace_event_format(iter, fmt), ap); 327 + trace_seq_vprintf(s, trace_event_format(iter, fmt), ap); 324 328 va_end(ap); 325 329 } 326 330 EXPORT_SYMBOL(trace_event_printf);
+21 -9
kernel/workqueue.c
··· 2508 2508 return; 2509 2509 } 2510 2510 2511 + WARN_ON_ONCE(cpu != WORK_CPU_UNBOUND && !cpu_online(cpu)); 2511 2512 dwork->wq = wq; 2512 2513 dwork->cpu = cpu; 2513 2514 timer->expires = jiffies + delay; ··· 2533 2532 * @wq: workqueue to use 2534 2533 * @dwork: work to queue 2535 2534 * @delay: number of jiffies to wait before queueing 2535 + * 2536 + * We queue the delayed_work to a specific CPU, for non-zero delays the 2537 + * caller must ensure it is online and can't go away. Callers that fail 2538 + * to ensure this, may get @dwork->timer queued to an offlined CPU and 2539 + * this will prevent queueing of @dwork->work unless the offlined CPU 2540 + * becomes online again. 2536 2541 * 2537 2542 * Return: %false if @work was already on a queue, %true otherwise. If 2538 2543 * @delay is zero and @dwork is idle, it will be scheduled for immediate ··· 3687 3680 * check_flush_dependency - check for flush dependency sanity 3688 3681 * @target_wq: workqueue being flushed 3689 3682 * @target_work: work item being flushed (NULL for workqueue flushes) 3683 + * @from_cancel: are we called from the work cancel path 3690 3684 * 3691 3685 * %current is trying to flush the whole @target_wq or @target_work on it. 3692 - * If @target_wq doesn't have %WQ_MEM_RECLAIM, verify that %current is not 3693 - * reclaiming memory or running on a workqueue which doesn't have 3694 - * %WQ_MEM_RECLAIM as that can break forward-progress guarantee leading to 3695 - * a deadlock. 3686 + * If this is not the cancel path (which implies work being flushed is either 3687 + * already running, or will not be at all), check if @target_wq doesn't have 3688 + * %WQ_MEM_RECLAIM and verify that %current is not reclaiming memory or running 3689 + * on a workqueue which doesn't have %WQ_MEM_RECLAIM as that can break forward- 3690 + * progress guarantee leading to a deadlock. 
3696 3691 */ 3697 3692 static void check_flush_dependency(struct workqueue_struct *target_wq, 3698 - struct work_struct *target_work) 3693 + struct work_struct *target_work, 3694 + bool from_cancel) 3699 3695 { 3700 - work_func_t target_func = target_work ? target_work->func : NULL; 3696 + work_func_t target_func; 3701 3697 struct worker *worker; 3702 3698 3703 - if (target_wq->flags & WQ_MEM_RECLAIM) 3699 + if (from_cancel || target_wq->flags & WQ_MEM_RECLAIM) 3704 3700 return; 3705 3701 3706 3702 worker = current_wq_worker(); 3703 + target_func = target_work ? target_work->func : NULL; 3707 3704 3708 3705 WARN_ONCE(current->flags & PF_MEMALLOC, 3709 3706 "workqueue: PF_MEMALLOC task %d(%s) is flushing !WQ_MEM_RECLAIM %s:%ps", ··· 3991 3980 list_add_tail(&this_flusher.list, &wq->flusher_overflow); 3992 3981 } 3993 3982 3994 - check_flush_dependency(wq, NULL); 3983 + check_flush_dependency(wq, NULL, false); 3995 3984 3996 3985 mutex_unlock(&wq->mutex); 3997 3986 ··· 4166 4155 } 4167 4156 4168 4157 wq = pwq->wq; 4169 - check_flush_dependency(wq, work); 4158 + check_flush_dependency(wq, work, from_cancel); 4170 4159 4171 4160 insert_wq_barrier(pwq, barr, work, worker); 4172 4161 raw_spin_unlock_irq(&pool->lock); ··· 5652 5641 } while (activated); 5653 5642 } 5654 5643 5644 + __printf(1, 0) 5655 5645 static struct workqueue_struct *__alloc_workqueue(const char *fmt, 5656 5646 unsigned int flags, 5657 5647 int max_active, va_list args)
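The workqueue.c change above passes `from_cancel` into `check_flush_dependency()` and skips the `WQ_MEM_RECLAIM` forward-progress check on the cancel path, where the flushed work is either already running or will never run, so no reclaim dependency can form. A toy model of the resulting decision logic (flags and the "would warn" outcome are simplified stand-ins for the kernel's checks):

```c
#include <stdbool.h>

#define WQ_MEM_RECLAIM 0x1	/* illustrative flag value */

/* Model of the updated check: bail out early on the cancel path or when
 * the target queue guarantees forward progress; otherwise a reclaiming
 * task flushing a !WQ_MEM_RECLAIM queue is a potential deadlock. */
static bool flush_would_warn(unsigned int target_wq_flags,
			     bool current_in_reclaim, bool from_cancel)
{
	if (from_cancel || (target_wq_flags & WQ_MEM_RECLAIM))
		return false;
	return current_in_reclaim;
}
```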
+36 -5
lib/alloc_tag.c
··· 209 209 return; 210 210 } 211 211 212 + /* 213 + * Clear tag references to avoid debug warning when using 214 + * __alloc_tag_ref_set() with non-empty reference. 215 + */ 216 + set_codetag_empty(&ref_old); 217 + set_codetag_empty(&ref_new); 218 + 212 219 /* swap tags */ 213 220 __alloc_tag_ref_set(&ref_old, tag_new); 214 221 update_page_tag_ref(handle_old, &ref_old); ··· 408 401 409 402 static int vm_module_tags_populate(void) 410 403 { 411 - unsigned long phys_size = vm_module_tags->nr_pages << PAGE_SHIFT; 404 + unsigned long phys_end = ALIGN_DOWN(module_tags.start_addr, PAGE_SIZE) + 405 + (vm_module_tags->nr_pages << PAGE_SHIFT); 406 + unsigned long new_end = module_tags.start_addr + module_tags.size; 412 407 413 - if (phys_size < module_tags.size) { 408 + if (phys_end < new_end) { 414 409 struct page **next_page = vm_module_tags->pages + vm_module_tags->nr_pages; 415 - unsigned long addr = module_tags.start_addr + phys_size; 410 + unsigned long old_shadow_end = ALIGN(phys_end, MODULE_ALIGN); 411 + unsigned long new_shadow_end = ALIGN(new_end, MODULE_ALIGN); 416 412 unsigned long more_pages; 417 413 unsigned long nr; 418 414 419 - more_pages = ALIGN(module_tags.size - phys_size, PAGE_SIZE) >> PAGE_SHIFT; 415 + more_pages = ALIGN(new_end - phys_end, PAGE_SIZE) >> PAGE_SHIFT; 420 416 nr = alloc_pages_bulk_array_node(GFP_KERNEL | __GFP_NOWARN, 421 417 NUMA_NO_NODE, more_pages, next_page); 422 418 if (nr < more_pages || 423 - vmap_pages_range(addr, addr + (nr << PAGE_SHIFT), PAGE_KERNEL, 419 + vmap_pages_range(phys_end, phys_end + (nr << PAGE_SHIFT), PAGE_KERNEL, 424 420 next_page, PAGE_SHIFT) < 0) { 425 421 /* Clean up and error out */ 426 422 for (int i = 0; i < nr; i++) 427 423 __free_page(next_page[i]); 428 424 return -ENOMEM; 429 425 } 426 + 430 427 vm_module_tags->nr_pages += nr; 428 + 429 + /* 430 + * Kasan allocates 1 byte of shadow for every 8 bytes of data. 
431 + * When kasan_alloc_module_shadow allocates shadow memory, 432 + * its unit of allocation is a page. 433 + * Therefore, here we need to align to MODULE_ALIGN. 434 + */ 435 + if (old_shadow_end < new_shadow_end) 436 + kasan_alloc_module_shadow((void *)old_shadow_end, 437 + new_shadow_end - old_shadow_end, 438 + GFP_KERNEL); 431 439 } 440 + 441 + /* 442 + * Mark the pages as accessible, now that they are mapped. 443 + * With hardware tag-based KASAN, marking is skipped for 444 + * non-VM_ALLOC mappings, see __kasan_unpoison_vmalloc(). 445 + */ 446 + kasan_unpoison_vmalloc((void *)module_tags.start_addr, 447 + new_end - module_tags.start_addr, 448 + KASAN_VMALLOC_PROT_NORMAL); 432 449 433 450 return 0; 434 451 }
+1
lib/maple_tree.c
··· 4354 4354 ret = 1; 4355 4355 } 4356 4356 if (ret < 0 && range_lo > min) { 4357 + mas_reset(mas); 4357 4358 ret = mas_empty_area(mas, min, range_hi, 1); 4358 4359 if (ret == 0) 4359 4360 ret = 1;
+9 -1
mm/damon/core.c
··· 868 868 NUMA_NO_NODE); 869 869 if (!new_scheme) 870 870 return -ENOMEM; 871 + err = damos_commit(new_scheme, src_scheme); 872 + if (err) { 873 + damon_destroy_scheme(new_scheme); 874 + return err; 875 + } 871 876 damon_add_scheme(dst, new_scheme); 872 877 } 873 878 return 0; ··· 966 961 return -ENOMEM; 967 962 err = damon_commit_target(new_target, false, 968 963 src_target, damon_target_has_pid(src)); 969 - if (err) 964 + if (err) { 965 + damon_destroy_target(new_target); 970 966 return err; 967 + } 968 + damon_add_target(dst, new_target); 971 969 } 972 970 return 0; 973 971 }
-9
mm/filemap.c
··· 124 124 * ->private_lock (zap_pte_range->block_dirty_folio) 125 125 */ 126 126 127 - static void mapping_set_update(struct xa_state *xas, 128 - struct address_space *mapping) 129 - { 130 - if (dax_mapping(mapping) || shmem_mapping(mapping)) 131 - return; 132 - xas_set_update(xas, workingset_update_node); 133 - xas_set_lru(xas, &shadow_nodes); 134 - } 135 - 136 127 static void page_cache_delete(struct address_space *mapping, 137 128 struct folio *folio, void *shadow) 138 129 {
+10 -9
mm/huge_memory.c
··· 1176 1176 folio_throttle_swaprate(folio, gfp); 1177 1177 1178 1178 /* 1179 - * When a folio is not zeroed during allocation (__GFP_ZERO not used), 1180 - * folio_zero_user() is used to make sure that the page corresponding 1181 - * to the faulting address will be hot in the cache after zeroing. 1179 + * When a folio is not zeroed during allocation (__GFP_ZERO not used) 1180 + * or user folios require special handling, folio_zero_user() is used to 1181 + * make sure that the page corresponding to the faulting address will be 1182 + * hot in the cache after zeroing. 1182 1183 */ 1183 - if (!alloc_zeroed()) 1184 + if (user_alloc_needs_zeroing()) 1184 1185 folio_zero_user(folio, addr); 1185 1186 /* 1186 1187 * The memory barrier inside __folio_mark_uptodate makes sure that ··· 3577 3576 !list_empty(&folio->_deferred_list)) { 3578 3577 ds_queue->split_queue_len--; 3579 3578 if (folio_test_partially_mapped(folio)) { 3580 - __folio_clear_partially_mapped(folio); 3579 + folio_clear_partially_mapped(folio); 3581 3580 mod_mthp_stat(folio_order(folio), 3582 3581 MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1); 3583 3582 } ··· 3689 3688 if (!list_empty(&folio->_deferred_list)) { 3690 3689 ds_queue->split_queue_len--; 3691 3690 if (folio_test_partially_mapped(folio)) { 3692 - __folio_clear_partially_mapped(folio); 3691 + folio_clear_partially_mapped(folio); 3693 3692 mod_mthp_stat(folio_order(folio), 3694 3693 MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1); 3695 3694 } ··· 3733 3732 spin_lock_irqsave(&ds_queue->split_queue_lock, flags); 3734 3733 if (partially_mapped) { 3735 3734 if (!folio_test_partially_mapped(folio)) { 3736 - __folio_set_partially_mapped(folio); 3735 + folio_set_partially_mapped(folio); 3737 3736 if (folio_test_pmd_mappable(folio)) 3738 3737 count_vm_event(THP_DEFERRED_SPLIT_PAGE); 3739 3738 count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED); ··· 3826 3825 } else { 3827 3826 /* We lost race with folio_put() */ 3828 3827 if 
(folio_test_partially_mapped(folio)) { 3829 - __folio_clear_partially_mapped(folio); 3828 + folio_clear_partially_mapped(folio); 3830 3829 mod_mthp_stat(folio_order(folio), 3831 3830 MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1); 3832 3831 } ··· 4169 4168 size_t input_len = strlen(input_buf); 4170 4169 4171 4170 tok = strsep(&buf, ","); 4172 - if (tok) { 4171 + if (tok && buf) { 4173 4172 strscpy(file_path, tok); 4174 4173 } else { 4175 4174 ret = -EINVAL;
+9 -12
mm/hugetlb.c
··· 5340 5340 break; 5341 5341 } 5342 5342 ret = copy_user_large_folio(new_folio, pte_folio, 5343 - ALIGN_DOWN(addr, sz), dst_vma); 5343 + addr, dst_vma); 5344 5344 folio_put(pte_folio); 5345 5345 if (ret) { 5346 5346 folio_put(new_folio); ··· 6643 6643 *foliop = NULL; 6644 6644 goto out; 6645 6645 } 6646 - ret = copy_user_large_folio(folio, *foliop, 6647 - ALIGN_DOWN(dst_addr, size), dst_vma); 6646 + ret = copy_user_large_folio(folio, *foliop, dst_addr, dst_vma); 6648 6647 folio_put(*foliop); 6649 6648 *foliop = NULL; 6650 6649 if (ret) { ··· 7211 7212 spte = hugetlb_walk(svma, saddr, 7212 7213 vma_mmu_pagesize(svma)); 7213 7214 if (spte) { 7214 - get_page(virt_to_page(spte)); 7215 + ptdesc_pmd_pts_inc(virt_to_ptdesc(spte)); 7215 7216 break; 7216 7217 } 7217 7218 } ··· 7226 7227 (pmd_t *)((unsigned long)spte & PAGE_MASK)); 7227 7228 mm_inc_nr_pmds(mm); 7228 7229 } else { 7229 - put_page(virt_to_page(spte)); 7230 + ptdesc_pmd_pts_dec(virt_to_ptdesc(spte)); 7230 7231 } 7231 7232 spin_unlock(&mm->page_table_lock); 7232 7233 out: ··· 7238 7239 /* 7239 7240 * unmap huge page backed by shared pte. 7240 7241 * 7241 - * Hugetlb pte page is ref counted at the time of mapping. If pte is shared 7242 - * indicated by page_count > 1, unmap is achieved by clearing pud and 7243 - * decrementing the ref count. If count == 1, the pte page is not shared. 7244 - * 7245 7242 * Called with page table lock held. 
7246 7243 * 7247 7244 * returns: 1 successfully unmapped a shared pte page ··· 7246 7251 int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma, 7247 7252 unsigned long addr, pte_t *ptep) 7248 7253 { 7254 + unsigned long sz = huge_page_size(hstate_vma(vma)); 7249 7255 pgd_t *pgd = pgd_offset(mm, addr); 7250 7256 p4d_t *p4d = p4d_offset(pgd, addr); 7251 7257 pud_t *pud = pud_offset(p4d, addr); 7252 7258 7253 7259 i_mmap_assert_write_locked(vma->vm_file->f_mapping); 7254 7260 hugetlb_vma_assert_locked(vma); 7255 - BUG_ON(page_count(virt_to_page(ptep)) == 0); 7256 - if (page_count(virt_to_page(ptep)) == 1) 7261 + if (sz != PMD_SIZE) 7262 + return 0; 7263 + if (!ptdesc_pmd_pts_count(virt_to_ptdesc(ptep))) 7257 7264 return 0; 7258 7265 7259 7266 pud_clear(pud); 7260 - put_page(virt_to_page(ptep)); 7267 + ptdesc_pmd_pts_dec(virt_to_ptdesc(ptep)); 7261 7268 mm_dec_nr_pmds(mm); 7262 7269 return 1; 7263 7270 }
+6 -6
mm/internal.h
··· 1285 1285 void touch_pmd(struct vm_area_struct *vma, unsigned long addr, 1286 1286 pmd_t *pmd, bool write); 1287 1287 1288 - static inline bool alloc_zeroed(void) 1289 - { 1290 - return static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, 1291 - &init_on_alloc); 1292 - } 1293 - 1294 1288 /* 1295 1289 * Parses a string with mem suffixes into its order. Useful to parse kernel 1296 1290 * parameters. ··· 1504 1510 /* Only track the nodes of mappings with shadow entries */ 1505 1511 void workingset_update_node(struct xa_node *node); 1506 1512 extern struct list_lru shadow_nodes; 1513 + #define mapping_set_update(xas, mapping) do { \ 1514 + if (!dax_mapping(mapping) && !shmem_mapping(mapping)) { \ 1515 + xas_set_update(xas, workingset_update_node); \ 1516 + xas_set_lru(xas, &shadow_nodes); \ 1517 + } \ 1518 + } while (0) 1507 1519 1508 1520 /* mremap.c */ 1509 1521 unsigned long move_page_tables(struct vm_area_struct *vma,
+3
mm/khugepaged.c
··· 19 19 #include <linux/rcupdate_wait.h> 20 20 #include <linux/swapops.h> 21 21 #include <linux/shmem_fs.h> 22 + #include <linux/dax.h> 22 23 #include <linux/ksm.h> 23 24 24 25 #include <asm/tlb.h> ··· 1837 1836 result = alloc_charge_folio(&new_folio, mm, cc); 1838 1837 if (result != SCAN_SUCCEED) 1839 1838 goto out; 1839 + 1840 + mapping_set_update(&xas, mapping); 1840 1841 1841 1842 __folio_set_locked(new_folio); 1842 1843 if (is_shmem)
+1 -1
mm/kmemleak.c
··· 373 373 374 374 for (i = 0; i < nr_entries; i++) { 375 375 void *ptr = (void *)entries[i]; 376 - warn_or_seq_printf(seq, " [<%pK>] %pS\n", ptr, ptr); 376 + warn_or_seq_printf(seq, " %pS\n", ptr); 377 377 } 378 378 } 379 379
+1 -1
mm/list_lru.c
··· 77 77 spin_lock(&l->lock); 78 78 nr_items = READ_ONCE(l->nr_items); 79 79 if (likely(nr_items != LONG_MIN)) { 80 - WARN_ON(nr_items < 0); 81 80 rcu_read_unlock(); 82 81 return l; 83 82 } ··· 449 450 450 451 list_splice_init(&src->list, &dst->list); 451 452 if (src->nr_items) { 453 + WARN_ON(src->nr_items < 0); 452 454 dst->nr_items += src->nr_items; 453 455 set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru)); 454 456 }
+1 -1
mm/memfd.c
··· 170 170 return error; 171 171 } 172 172 173 - static unsigned int *memfd_file_seals_ptr(struct file *file) 173 + unsigned int *memfd_file_seals_ptr(struct file *file) 174 174 { 175 175 if (shmem_file(file)) 176 176 return &SHMEM_I(file_inode(file))->seals;
+10 -8
mm/memory.c
··· 4733 4733 folio_throttle_swaprate(folio, gfp); 4734 4734 /* 4735 4735 * When a folio is not zeroed during allocation 4736 - * (__GFP_ZERO not used), folio_zero_user() is used 4737 - * to make sure that the page corresponding to the 4738 - * faulting address will be hot in the cache after 4739 - * zeroing. 4736 + * (__GFP_ZERO not used) or user folios require special 4737 + * handling, folio_zero_user() is used to make sure 4738 + * that the page corresponding to the faulting address 4739 + * will be hot in the cache after zeroing. 4740 4740 */ 4741 - if (!alloc_zeroed()) 4741 + if (user_alloc_needs_zeroing()) 4742 4742 folio_zero_user(folio, vmf->address); 4743 4743 return folio; 4744 4744 } ··· 6815 6815 return 0; 6816 6816 } 6817 6817 6818 - static void clear_gigantic_page(struct folio *folio, unsigned long addr, 6818 + static void clear_gigantic_page(struct folio *folio, unsigned long addr_hint, 6819 6819 unsigned int nr_pages) 6820 6820 { 6821 + unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(folio)); 6821 6822 int i; 6822 6823 6823 6824 might_sleep(); ··· 6852 6851 } 6853 6852 6854 6853 static int copy_user_gigantic_page(struct folio *dst, struct folio *src, 6855 - unsigned long addr, 6854 + unsigned long addr_hint, 6856 6855 struct vm_area_struct *vma, 6857 6856 unsigned int nr_pages) 6858 6857 { 6859 - int i; 6858 + unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(dst)); 6860 6859 struct page *dst_page; 6861 6860 struct page *src_page; 6861 + int i; 6862 6862 6863 6863 for (i = 0; i < nr_pages; i++) { 6864 6864 dst_page = folio_page(dst, i);
+5 -1
mm/mmap.c
··· 47 47 #include <linux/oom.h> 48 48 #include <linux/sched/mm.h> 49 49 #include <linux/ksm.h> 50 + #include <linux/memfd.h> 50 51 51 52 #include <linux/uaccess.h> 52 53 #include <asm/cacheflush.h> ··· 369 368 370 369 if (file) { 371 370 struct inode *inode = file_inode(file); 371 + unsigned int seals = memfd_file_seals(file); 372 372 unsigned long flags_mask; 373 373 374 374 if (!file_mmap_ok(file, inode, pgoff, len)) ··· 410 408 vm_flags |= VM_SHARED | VM_MAYSHARE; 411 409 if (!(file->f_mode & FMODE_WRITE)) 412 410 vm_flags &= ~(VM_MAYWRITE | VM_SHARED); 411 + else if (is_readonly_sealed(seals, vm_flags)) 412 + vm_flags &= ~VM_MAYWRITE; 413 413 fallthrough; 414 414 case MAP_PRIVATE: 415 415 if (!(file->f_mode & FMODE_READ)) ··· 892 888 893 889 if (get_area) { 894 890 addr = get_area(file, addr, len, pgoff, flags); 895 - } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) 891 + } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && !file 896 892 && !addr /* no hint */ 897 893 && IS_ALIGNED(len, PMD_SIZE)) { 898 894 /* Ensures that larger anonymous mappings are THP aligned. */
+4 -2
mm/page_alloc.c
··· 1238 1238 if (order > pageblock_order) 1239 1239 order = pageblock_order; 1240 1240 1241 - while (pfn != end) { 1241 + do { 1242 1242 int mt = get_pfnblock_migratetype(page, pfn); 1243 1243 1244 1244 __free_one_page(page, pfn, zone, order, mt, fpi); 1245 1245 pfn += 1 << order; 1246 + if (pfn == end) 1247 + break; 1246 1248 page = pfn_to_page(pfn); 1247 - } 1249 + } while (1); 1248 1250 } 1249 1251 1250 1252 static void free_one_page(struct zone *zone, struct page *page,
+1 -1
mm/pgtable-generic.c
··· 279 279 static void pmdp_get_lockless_end(unsigned long irqflags) { } 280 280 #endif 281 281 282 - pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp) 282 + pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp) 283 283 { 284 284 unsigned long irqflags; 285 285 pmd_t pmdval;
+5 -1
mm/readahead.c
··· 646 646 1UL << order); 647 647 if (index == expected) { 648 648 ra->start += ra->size; 649 - ra->size = get_next_ra_size(ra, max_pages); 649 + /* 650 + * In the case of MADV_HUGEPAGE, the actual size might exceed 651 + * the readahead window. 652 + */ 653 + ra->size = max(ra->size, get_next_ra_size(ra, max_pages)); 650 654 ra->async_size = ra->size; 651 655 goto readit; 652 656 }
+16 -13
mm/shmem.c
··· 787 787 } 788 788 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ 789 789 790 + static void shmem_update_stats(struct folio *folio, int nr_pages) 791 + { 792 + if (folio_test_pmd_mappable(folio)) 793 + __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr_pages); 794 + __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr_pages); 795 + __lruvec_stat_mod_folio(folio, NR_SHMEM, nr_pages); 796 + } 797 + 790 798 /* 791 799 * Somewhat like filemap_add_folio, but error if expected item has gone. 792 800 */ ··· 829 821 xas_store(&xas, folio); 830 822 if (xas_error(&xas)) 831 823 goto unlock; 832 - if (folio_test_pmd_mappable(folio)) 833 - __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr); 834 - __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr); 835 - __lruvec_stat_mod_folio(folio, NR_SHMEM, nr); 824 + shmem_update_stats(folio, nr); 836 825 mapping->nrpages += nr; 837 826 unlock: 838 827 xas_unlock_irq(&xas); ··· 857 852 error = shmem_replace_entry(mapping, folio->index, folio, radswap); 858 853 folio->mapping = NULL; 859 854 mapping->nrpages -= nr; 860 - __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr); 861 - __lruvec_stat_mod_folio(folio, NR_SHMEM, -nr); 855 + shmem_update_stats(folio, -nr); 862 856 xa_unlock_irq(&mapping->i_pages); 863 857 folio_put_refs(folio, nr); 864 858 BUG_ON(error); ··· 1535 1531 !shmem_falloc->waitq && 1536 1532 index >= shmem_falloc->start && 1537 1533 index < shmem_falloc->next) 1538 - shmem_falloc->nr_unswapped++; 1534 + shmem_falloc->nr_unswapped += nr_pages; 1539 1535 else 1540 1536 shmem_falloc = NULL; 1541 1537 spin_unlock(&inode->i_lock); ··· 1689 1685 unsigned long mask = READ_ONCE(huge_shmem_orders_always); 1690 1686 unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size); 1691 1687 unsigned long vm_flags = vma ? vma->vm_flags : 0; 1688 + pgoff_t aligned_index; 1692 1689 bool global_huge; 1693 1690 loff_t i_size; 1694 1691 int order; ··· 1724 1719 /* Allow mTHP that will be fully within i_size. 
*/ 1725 1720 order = highest_order(within_size_orders); 1726 1721 while (within_size_orders) { 1727 - index = round_up(index + 1, order); 1722 + aligned_index = round_up(index + 1, 1 << order); 1728 1723 i_size = round_up(i_size_read(inode), PAGE_SIZE); 1729 - if (i_size >> PAGE_SHIFT >= index) { 1724 + if (i_size >> PAGE_SHIFT >= aligned_index) { 1730 1725 mask |= within_size_orders; 1731 1726 break; 1732 1727 } ··· 1974 1969 } 1975 1970 if (!error) { 1976 1971 mem_cgroup_replace_folio(old, new); 1977 - __lruvec_stat_mod_folio(new, NR_FILE_PAGES, nr_pages); 1978 - __lruvec_stat_mod_folio(new, NR_SHMEM, nr_pages); 1979 - __lruvec_stat_mod_folio(old, NR_FILE_PAGES, -nr_pages); 1980 - __lruvec_stat_mod_folio(old, NR_SHMEM, -nr_pages); 1972 + shmem_update_stats(new, nr_pages); 1973 + shmem_update_stats(old, -nr_pages); 1981 1974 } 1982 1975 xa_unlock_irq(&swap_mapping->i_pages); 1983 1976
+1 -6
mm/util.c
··· 297 297 { 298 298 char *p; 299 299 300 - /* 301 - * Always use GFP_KERNEL, since copy_from_user() can sleep and 302 - * cause pagefault, which makes it pointless to use GFP_NOFS 303 - * or GFP_ATOMIC. 304 - */ 305 - p = kmalloc_track_caller(len + 1, GFP_KERNEL); 300 + p = kmem_buckets_alloc_track_caller(user_buckets, len + 1, GFP_USER | __GFP_NOWARN); 306 301 if (!p) 307 302 return ERR_PTR(-ENOMEM); 308 303
+4 -1
mm/vma.c
··· 2460 2460 2461 2461 /* If flags changed, we might be able to merge, so try again. */ 2462 2462 if (map.retry_merge) { 2463 + struct vm_area_struct *merged; 2463 2464 VMG_MMAP_STATE(vmg, &map, vma); 2464 2465 2465 2466 vma_iter_config(map.vmi, map.addr, map.end); 2466 - vma_merge_existing_range(&vmg); 2467 + merged = vma_merge_existing_range(&vmg); 2468 + if (merged) 2469 + vma = merged; 2467 2470 } 2468 2471 2469 2472 __mmap_complete(&map, vma);
+4 -2
mm/vmalloc.c
··· 3374 3374 struct page *page = vm->pages[i]; 3375 3375 3376 3376 BUG_ON(!page); 3377 - mod_memcg_page_state(page, MEMCG_VMALLOC, -1); 3377 + if (!(vm->flags & VM_MAP_PUT_PAGES)) 3378 + mod_memcg_page_state(page, MEMCG_VMALLOC, -1); 3378 3379 /* 3379 3380 * High-order allocs for huge vmallocs are split, so 3380 3381 * can be freed as an array of order-0 allocations ··· 3383 3382 __free_page(page); 3384 3383 cond_resched(); 3385 3384 } 3386 - atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages); 3385 + if (!(vm->flags & VM_MAP_PUT_PAGES)) 3386 + atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages); 3387 3387 kvfree(vm->pages); 3388 3388 kfree(vm); 3389 3389 }
+8 -1
mm/vmscan.c
··· 374 374 if (can_reclaim_anon_pages(NULL, zone_to_nid(zone), NULL)) 375 375 nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) + 376 376 zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON); 377 - 377 + /* 378 + * If there are no reclaimable file-backed or anonymous pages, 379 + * ensure zones with sufficient free pages are not skipped. 380 + * This prevents zones like DMA32 from being ignored in reclaim 381 + * scenarios where they can still help alleviate memory pressure. 382 + */ 383 + if (nr == 0) 384 + nr = zone_page_state_snapshot(zone, NR_FREE_PAGES); 378 385 return nr; 379 386 } 380 387
+16 -3
mm/zswap.c
··· 880 880 return 0; 881 881 } 882 882 883 + /* Prevent CPU hotplug from freeing up the per-CPU acomp_ctx resources */ 884 + static struct crypto_acomp_ctx *acomp_ctx_get_cpu(struct crypto_acomp_ctx __percpu *acomp_ctx) 885 + { 886 + cpus_read_lock(); 887 + return raw_cpu_ptr(acomp_ctx); 888 + } 889 + 890 + static void acomp_ctx_put_cpu(void) 891 + { 892 + cpus_read_unlock(); 893 + } 894 + 883 895 static bool zswap_compress(struct page *page, struct zswap_entry *entry, 884 896 struct zswap_pool *pool) 885 897 { ··· 905 893 gfp_t gfp; 906 894 u8 *dst; 907 895 908 - acomp_ctx = raw_cpu_ptr(pool->acomp_ctx); 909 - 896 + acomp_ctx = acomp_ctx_get_cpu(pool->acomp_ctx); 910 897 mutex_lock(&acomp_ctx->mutex); 911 898 912 899 dst = acomp_ctx->buffer; ··· 961 950 zswap_reject_alloc_fail++; 962 951 963 952 mutex_unlock(&acomp_ctx->mutex); 953 + acomp_ctx_put_cpu(); 964 954 return comp_ret == 0 && alloc_ret == 0; 965 955 } 966 956 ··· 972 960 struct crypto_acomp_ctx *acomp_ctx; 973 961 u8 *src; 974 962 975 - acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx); 963 + acomp_ctx = acomp_ctx_get_cpu(entry->pool->acomp_ctx); 976 964 mutex_lock(&acomp_ctx->mutex); 977 965 978 966 src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO); ··· 1002 990 1003 991 if (src != acomp_ctx->buffer) 1004 992 zpool_unmap_handle(zpool, entry->handle); 993 + acomp_ctx_put_cpu(); 1005 994 } 1006 995 1007 996 /*********************************
+2 -2
net/802/psnap.c
··· 55 55 goto drop; 56 56 57 57 rcu_read_lock(); 58 - proto = find_snap_client(skb_transport_header(skb)); 58 + proto = find_snap_client(skb->data); 59 59 if (proto) { 60 60 /* Pass the frame on. */ 61 - skb->transport_header += 5; 62 61 skb_pull_rcsum(skb, 5); 62 + skb_reset_transport_header(skb); 63 63 rc = proto->rcvfunc(skb, dev, &snap_packet_type, orig_dev); 64 64 } 65 65 rcu_read_unlock();
+6 -5
net/bluetooth/hci_sync.c
··· 1031 1031 1032 1032 static int hci_set_random_addr_sync(struct hci_dev *hdev, bdaddr_t *rpa) 1033 1033 { 1034 - /* If we're advertising or initiating an LE connection we can't 1035 - * go ahead and change the random address at this time. This is 1036 - * because the eventual initiator address used for the 1034 + /* If a random_addr has been set we're advertising or initiating an LE 1035 + * connection we can't go ahead and change the random address at this 1036 + * time. This is because the eventual initiator address used for the 1037 1037 * subsequently created connection will be undefined (some 1038 1038 * controllers use the new address and others the one we had 1039 1039 * when the operation started). ··· 1041 1041 * In this kind of scenario skip the update and let the random 1042 1042 * address be updated at the next cycle. 1043 1043 */ 1044 - if (hci_dev_test_flag(hdev, HCI_LE_ADV) || 1045 - hci_lookup_le_connect(hdev)) { 1044 + if (bacmp(&hdev->random_addr, BDADDR_ANY) && 1045 + (hci_dev_test_flag(hdev, HCI_LE_ADV) || 1046 + hci_lookup_le_connect(hdev))) { 1046 1047 bt_dev_dbg(hdev, "Deferring random address update"); 1047 1048 hci_dev_set_flag(hdev, HCI_RPA_EXPIRED); 1048 1049 return 0;
+36 -2
net/bluetooth/mgmt.c
··· 7655 7655 mgmt_event(MGMT_EV_DEVICE_ADDED, hdev, &ev, sizeof(ev), sk); 7656 7656 } 7657 7657 7658 + static void add_device_complete(struct hci_dev *hdev, void *data, int err) 7659 + { 7660 + struct mgmt_pending_cmd *cmd = data; 7661 + struct mgmt_cp_add_device *cp = cmd->param; 7662 + 7663 + if (!err) { 7664 + device_added(cmd->sk, hdev, &cp->addr.bdaddr, cp->addr.type, 7665 + cp->action); 7666 + device_flags_changed(NULL, hdev, &cp->addr.bdaddr, 7667 + cp->addr.type, hdev->conn_flags, 7668 + PTR_UINT(cmd->user_data)); 7669 + } 7670 + 7671 + mgmt_cmd_complete(cmd->sk, hdev->id, MGMT_OP_ADD_DEVICE, 7672 + mgmt_status(err), &cp->addr, sizeof(cp->addr)); 7673 + mgmt_pending_free(cmd); 7674 + } 7675 + 7658 7676 static int add_device_sync(struct hci_dev *hdev, void *data) 7659 7677 { 7660 7678 return hci_update_passive_scan_sync(hdev); ··· 7681 7663 static int add_device(struct sock *sk, struct hci_dev *hdev, 7682 7664 void *data, u16 len) 7683 7665 { 7666 + struct mgmt_pending_cmd *cmd; 7684 7667 struct mgmt_cp_add_device *cp = data; 7685 7668 u8 auto_conn, addr_type; 7686 7669 struct hci_conn_params *params; ··· 7762 7743 current_flags = params->flags; 7763 7744 } 7764 7745 7765 - err = hci_cmd_sync_queue(hdev, add_device_sync, NULL, NULL); 7766 - if (err < 0) 7746 + cmd = mgmt_pending_new(sk, MGMT_OP_ADD_DEVICE, hdev, data, len); 7747 + if (!cmd) { 7748 + err = -ENOMEM; 7767 7749 goto unlock; 7750 + } 7751 + 7752 + cmd->user_data = UINT_PTR(current_flags); 7753 + 7754 + err = hci_cmd_sync_queue(hdev, add_device_sync, cmd, 7755 + add_device_complete); 7756 + if (err < 0) { 7757 + err = mgmt_cmd_complete(sk, hdev->id, MGMT_OP_ADD_DEVICE, 7758 + MGMT_STATUS_FAILED, &cp->addr, 7759 + sizeof(cp->addr)); 7760 + mgmt_pending_free(cmd); 7761 + } 7762 + 7763 + goto unlock; 7768 7764 7769 7765 added: 7770 7766 device_added(sk, hdev, &cp->addr.bdaddr, cp->addr.type, cp->action);
+2 -2
net/bluetooth/rfcomm/tty.c
··· 201 201 struct device_attribute *attr, char *buf) 202 202 { 203 203 struct rfcomm_dev *dev = dev_get_drvdata(tty_dev); 204 - return sprintf(buf, "%pMR\n", &dev->dst); 204 + return sysfs_emit(buf, "%pMR\n", &dev->dst); 205 205 } 206 206 207 207 static ssize_t channel_show(struct device *tty_dev, 208 208 struct device_attribute *attr, char *buf) 209 209 { 210 210 struct rfcomm_dev *dev = dev_get_drvdata(tty_dev); 211 - return sprintf(buf, "%d\n", dev->channel); 211 + return sysfs_emit(buf, "%d\n", dev->channel); 212 212 } 213 213 214 214 static DEVICE_ATTR_RO(address);
+2
net/ceph/osd_client.c
··· 1173 1173 1174 1174 int __ceph_alloc_sparse_ext_map(struct ceph_osd_req_op *op, int cnt) 1175 1175 { 1176 + WARN_ON(op->op != CEPH_OSD_OP_SPARSE_READ); 1177 + 1176 1178 op->extent.sparse_ext_cnt = cnt; 1177 1179 op->extent.sparse_ext = kmalloc_array(cnt, 1178 1180 sizeof(*op->extent.sparse_ext),
+33 -14
net/core/dev.c
··· 753 753 } 754 754 EXPORT_SYMBOL_GPL(dev_fill_forward_path); 755 755 756 + /* must be called under rcu_read_lock(), as we dont take a reference */ 757 + static struct napi_struct *napi_by_id(unsigned int napi_id) 758 + { 759 + unsigned int hash = napi_id % HASH_SIZE(napi_hash); 760 + struct napi_struct *napi; 761 + 762 + hlist_for_each_entry_rcu(napi, &napi_hash[hash], napi_hash_node) 763 + if (napi->napi_id == napi_id) 764 + return napi; 765 + 766 + return NULL; 767 + } 768 + 769 + /* must be called under rcu_read_lock(), as we dont take a reference */ 770 + struct napi_struct *netdev_napi_by_id(struct net *net, unsigned int napi_id) 771 + { 772 + struct napi_struct *napi; 773 + 774 + napi = napi_by_id(napi_id); 775 + if (!napi) 776 + return NULL; 777 + 778 + if (WARN_ON_ONCE(!napi->dev)) 779 + return NULL; 780 + if (!net_eq(net, dev_net(napi->dev))) 781 + return NULL; 782 + 783 + return napi; 784 + } 785 + 756 786 /** 757 787 * __dev_get_by_name - find a device by its name 758 788 * @net: the applicable net namespace ··· 3672 3642 3673 3643 if (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) { 3674 3644 if (vlan_get_protocol(skb) == htons(ETH_P_IPV6) && 3675 - skb_network_header_len(skb) != sizeof(struct ipv6hdr)) 3645 + skb_network_header_len(skb) != sizeof(struct ipv6hdr) && 3646 + !ipv6_has_hopopt_jumbo(skb)) 3676 3647 goto sw_checksum; 3648 + 3677 3649 switch (skb->csum_offset) { 3678 3650 case offsetof(struct tcphdr, check): 3679 3651 case offsetof(struct udphdr, check): ··· 6322 6290 return ret; 6323 6291 } 6324 6292 EXPORT_SYMBOL(napi_complete_done); 6325 - 6326 - /* must be called under rcu_read_lock(), as we dont take a reference */ 6327 - struct napi_struct *napi_by_id(unsigned int napi_id) 6328 - { 6329 - unsigned int hash = napi_id % HASH_SIZE(napi_hash); 6330 - struct napi_struct *napi; 6331 - 6332 - hlist_for_each_entry_rcu(napi, &napi_hash[hash], napi_hash_node) 6333 - if (napi->napi_id == napi_id) 6334 - return napi; 6335 - 6336 - return 
NULL; 6337 - } 6338 6293 6339 6294 static void skb_defer_free_flush(struct softnet_data *sd) 6340 6295 {
+2 -1
net/core/dev.h
··· 22 22 23 23 extern int netdev_flow_limit_table_len; 24 24 25 + struct napi_struct *netdev_napi_by_id(struct net *net, unsigned int napi_id); 26 + 25 27 #ifdef CONFIG_PROC_FS 26 28 int __init dev_proc_init(void); 27 29 #else ··· 271 269 static inline void xdp_do_check_flushed(struct napi_struct *napi) { } 272 270 #endif 273 271 274 - struct napi_struct *napi_by_id(unsigned int napi_id); 275 272 void kick_defer_list_purge(struct softnet_data *sd, unsigned int cpu); 276 273 277 274 #define XMIT_RECURSION_LIMIT 8
+15 -6
net/core/filter.c
··· 3734 3734 3735 3735 static u32 __bpf_skb_min_len(const struct sk_buff *skb) 3736 3736 { 3737 - u32 min_len = skb_network_offset(skb); 3737 + int offset = skb_network_offset(skb); 3738 + u32 min_len = 0; 3738 3739 3739 - if (skb_transport_header_was_set(skb)) 3740 - min_len = skb_transport_offset(skb); 3741 - if (skb->ip_summed == CHECKSUM_PARTIAL) 3742 - min_len = skb_checksum_start_offset(skb) + 3743 - skb->csum_offset + sizeof(__sum16); 3740 + if (offset > 0) 3741 + min_len = offset; 3742 + if (skb_transport_header_was_set(skb)) { 3743 + offset = skb_transport_offset(skb); 3744 + if (offset > 0) 3745 + min_len = offset; 3746 + } 3747 + if (skb->ip_summed == CHECKSUM_PARTIAL) { 3748 + offset = skb_checksum_start_offset(skb) + 3749 + skb->csum_offset + sizeof(__sum16); 3750 + if (offset > 0) 3751 + min_len = offset; 3752 + } 3744 3753 return min_len; 3745 3754 } 3746 3755
+18 -18
net/core/netdev-genl.c
··· 167 167 void *hdr; 168 168 pid_t pid; 169 169 170 - if (WARN_ON_ONCE(!napi->dev)) 171 - return -EINVAL; 172 170 if (!(napi->dev->flags & IFF_UP)) 173 171 return 0; 174 172 ··· 174 176 if (!hdr) 175 177 return -EMSGSIZE; 176 178 177 - if (napi->napi_id >= MIN_NAPI_ID && 178 - nla_put_u32(rsp, NETDEV_A_NAPI_ID, napi->napi_id)) 179 + if (nla_put_u32(rsp, NETDEV_A_NAPI_ID, napi->napi_id)) 179 180 goto nla_put_failure; 180 181 181 182 if (nla_put_u32(rsp, NETDEV_A_NAPI_IFINDEX, napi->dev->ifindex)) ··· 232 235 rtnl_lock(); 233 236 rcu_read_lock(); 234 237 235 - napi = napi_by_id(napi_id); 238 + napi = netdev_napi_by_id(genl_info_net(info), napi_id); 236 239 if (napi) { 237 240 err = netdev_nl_napi_fill_one(rsp, napi, info); 238 241 } else { ··· 243 246 rcu_read_unlock(); 244 247 rtnl_unlock(); 245 248 246 - if (err) 249 + if (err) { 247 250 goto err_free_msg; 251 + } else if (!rsp->len) { 252 + err = -ENOENT; 253 + goto err_free_msg; 254 + } 248 255 249 256 return genlmsg_reply(rsp, info); 250 257 ··· 269 268 return err; 270 269 271 270 list_for_each_entry(napi, &netdev->napi_list, dev_list) { 271 + if (napi->napi_id < MIN_NAPI_ID) 272 + continue; 272 273 if (ctx->napi_id && napi->napi_id >= ctx->napi_id) 273 274 continue; 274 275 ··· 353 350 rtnl_lock(); 354 351 rcu_read_lock(); 355 352 356 - napi = napi_by_id(napi_id); 353 + napi = netdev_napi_by_id(genl_info_net(info), napi_id); 357 354 if (napi) { 358 355 err = netdev_nl_napi_set_config(napi, info); 359 356 } else { ··· 433 430 netdev_nl_queue_fill(struct sk_buff *rsp, struct net_device *netdev, u32 q_idx, 434 431 u32 q_type, const struct genl_info *info) 435 432 { 436 - int err = 0; 433 + int err; 437 434 438 435 if (!(netdev->flags & IFF_UP)) 439 - return err; 436 + return -ENOENT; 440 437 441 438 err = netdev_nl_queue_validate(netdev, q_idx, q_type); 442 439 if (err) ··· 491 488 struct netdev_nl_dump_ctx *ctx) 492 489 { 493 490 int err = 0; 494 - int i; 495 491 496 492 if (!(netdev->flags & IFF_UP)) 497 493 
return err; 498 494 499 - for (i = ctx->rxq_idx; i < netdev->real_num_rx_queues;) { 500 - err = netdev_nl_queue_fill_one(rsp, netdev, i, 495 + for (; ctx->rxq_idx < netdev->real_num_rx_queues; ctx->rxq_idx++) { 496 + err = netdev_nl_queue_fill_one(rsp, netdev, ctx->rxq_idx, 501 497 NETDEV_QUEUE_TYPE_RX, info); 502 498 if (err) 503 499 return err; 504 - ctx->rxq_idx = i++; 505 500 } 506 - for (i = ctx->txq_idx; i < netdev->real_num_tx_queues;) { 507 - err = netdev_nl_queue_fill_one(rsp, netdev, i, 501 + for (; ctx->txq_idx < netdev->real_num_tx_queues; ctx->txq_idx++) { 502 + err = netdev_nl_queue_fill_one(rsp, netdev, ctx->txq_idx, 508 503 NETDEV_QUEUE_TYPE_TX, info); 509 504 if (err) 510 505 return err; 511 - ctx->txq_idx = i++; 512 506 } 513 507 514 508 return err; ··· 671 671 i, info); 672 672 if (err) 673 673 return err; 674 - ctx->rxq_idx = i++; 674 + ctx->rxq_idx = ++i; 675 675 } 676 676 i = ctx->txq_idx; 677 677 while (ops->get_queue_stats_tx && i < netdev->real_num_tx_queues) { ··· 679 679 i, info); 680 680 if (err) 681 681 return err; 682 - ctx->txq_idx = i++; 682 + ctx->txq_idx = ++i; 683 683 } 684 684 685 685 ctx->rxq_idx = 0;
+3 -2
net/core/rtnetlink.c
··· 3819 3819 } 3820 3820 3821 3821 static struct net *rtnl_get_peer_net(const struct rtnl_link_ops *ops, 3822 + struct nlattr *tbp[], 3822 3823 struct nlattr *data[], 3823 3824 struct netlink_ext_ack *extack) 3824 3825 { ··· 3827 3826 int err; 3828 3827 3829 3828 if (!data || !data[ops->peer_type]) 3830 - return NULL; 3829 + return rtnl_link_get_net_ifla(tbp); 3831 3830 3832 3831 err = rtnl_nla_parse_ifinfomsg(tb, data[ops->peer_type], extack); 3833 3832 if (err < 0) ··· 3972 3971 } 3973 3972 3974 3973 if (ops->peer_type) { 3975 - peer_net = rtnl_get_peer_net(ops, data, extack); 3974 + peer_net = rtnl_get_peer_net(ops, tb, data, extack); 3976 3975 if (IS_ERR(peer_net)) { 3977 3976 ret = PTR_ERR(peer_net); 3978 3977 goto put_ops;
+8 -3
net/core/skmsg.c
··· 369 369 struct sk_msg *msg, u32 bytes) 370 370 { 371 371 int ret = -ENOSPC, i = msg->sg.curr; 372 + u32 copy, buf_size, copied = 0; 372 373 struct scatterlist *sge; 373 - u32 copy, buf_size; 374 374 void *to; 375 375 376 376 do { ··· 397 397 goto out; 398 398 } 399 399 bytes -= copy; 400 + copied += copy; 400 401 if (!bytes) 401 402 break; 402 403 msg->sg.copybreak = 0; ··· 405 404 } while (i != msg->sg.end); 406 405 out: 407 406 msg->sg.curr = i; 408 - return ret; 407 + return (ret < 0) ? ret : copied; 409 408 } 410 409 EXPORT_SYMBOL_GPL(sk_msg_memcopy_from_iter); 411 410 ··· 446 445 if (likely(!peek)) { 447 446 sge->offset += copy; 448 447 sge->length -= copy; 449 - if (!msg_rx->skb) 448 + if (!msg_rx->skb) { 450 449 sk_mem_uncharge(sk, copy); 450 + atomic_sub(copy, &sk->sk_rmem_alloc); 451 + } 451 452 msg_rx->sg.size -= copy; 452 453 453 454 if (!sge->length) { ··· 775 772 776 773 list_for_each_entry_safe(msg, tmp, &psock->ingress_msg, list) { 777 774 list_del(&msg->list); 775 + if (!msg->skb) 776 + atomic_sub(msg->sg.size, &psock->sk->sk_rmem_alloc); 778 777 sk_msg_free(psock->sk, msg); 779 778 kfree(msg); 780 779 }
+4 -1
net/core/sock.c
··· 1295 1295 sk->sk_reuse = (valbool ? SK_CAN_REUSE : SK_NO_REUSE); 1296 1296 break; 1297 1297 case SO_REUSEPORT: 1298 - sk->sk_reuseport = valbool; 1298 + if (valbool && !sk_is_inet(sk)) 1299 + ret = -EOPNOTSUPP; 1300 + else 1301 + sk->sk_reuseport = valbool; 1299 1302 break; 1300 1303 case SO_DONTROUTE: 1301 1304 sock_valbool_flag(sk, SOCK_LOCALROUTE, valbool);
+11 -5
net/dsa/tag.h
··· 138 138 * dsa_software_vlan_untag: Software VLAN untagging in DSA receive path 139 139 * @skb: Pointer to socket buffer (packet) 140 140 * 141 - * Receive path method for switches which cannot avoid tagging all packets 142 - * towards the CPU port. Called when ds->untag_bridge_pvid (legacy) or 143 - * ds->untag_vlan_aware_bridge_pvid is set to true. 141 + * Receive path method for switches which send some packets as VLAN-tagged 142 + * towards the CPU port (generally from VLAN-aware bridge ports) even when the 143 + * packet was not tagged on the wire. Called when ds->untag_bridge_pvid 144 + * (legacy) or ds->untag_vlan_aware_bridge_pvid is set to true. 144 145 * 145 146 * As a side effect of this method, any VLAN tag from the skb head is moved 146 147 * to hwaccel. ··· 150 149 { 151 150 struct dsa_port *dp = dsa_user_to_port(skb->dev); 152 151 struct net_device *br = dsa_port_bridge_dev_get(dp); 153 - u16 vid; 152 + u16 vid, proto; 153 + int err; 154 154 155 155 /* software untagging for standalone ports not yet necessary */ 156 156 if (!br) 157 157 return skb; 158 158 159 + err = br_vlan_get_proto(br, &proto); 160 + if (err) 161 + return skb; 162 + 159 163 /* Move VLAN tag from data to hwaccel */ 160 - if (!skb_vlan_tag_present(skb)) { 164 + if (!skb_vlan_tag_present(skb) && skb->protocol == htons(proto)) { 161 165 skb = skb_vlan_untag(skb); 162 166 if (!skb) 163 167 return NULL;
+3 -3
net/ipv4/ip_tunnel.c
··· 294 294 295 295 ip_tunnel_init_flow(&fl4, iph->protocol, iph->daddr, 296 296 iph->saddr, tunnel->parms.o_key, 297 - iph->tos & INET_DSCP_MASK, dev_net(dev), 297 + iph->tos & INET_DSCP_MASK, tunnel->net, 298 298 tunnel->parms.link, tunnel->fwmark, 0, 0); 299 299 rt = ip_route_output_key(tunnel->net, &fl4); 300 300 ··· 611 611 } 612 612 ip_tunnel_init_flow(&fl4, proto, key->u.ipv4.dst, key->u.ipv4.src, 613 613 tunnel_id_to_key32(key->tun_id), 614 - tos & INET_DSCP_MASK, dev_net(dev), 0, skb->mark, 614 + tos & INET_DSCP_MASK, tunnel->net, 0, skb->mark, 615 615 skb_get_hash(skb), key->flow_flags); 616 616 617 617 if (!tunnel_hlen) ··· 774 774 775 775 ip_tunnel_init_flow(&fl4, protocol, dst, tnl_params->saddr, 776 776 tunnel->parms.o_key, tos & INET_DSCP_MASK, 777 - dev_net(dev), READ_ONCE(tunnel->parms.link), 777 + tunnel->net, READ_ONCE(tunnel->parms.link), 778 778 tunnel->fwmark, skb_get_hash(skb), 0); 779 779 780 780 if (ip_tunnel_encap(skb, &tunnel->encap, &protocol, &fl4) < 0)
+8 -6
net/ipv4/tcp_bpf.c
··· 49 49 sge = sk_msg_elem(msg, i); 50 50 size = (apply && apply_bytes < sge->length) ? 51 51 apply_bytes : sge->length; 52 - if (!sk_wmem_schedule(sk, size)) { 52 + if (!__sk_rmem_schedule(sk, size, false)) { 53 53 if (!copied) 54 54 ret = -ENOMEM; 55 55 break; 56 56 } 57 57 58 58 sk_mem_charge(sk, size); 59 + atomic_add(size, &sk->sk_rmem_alloc); 59 60 sk_msg_xfer(tmp, msg, i, size); 60 61 copied += size; 61 62 if (sge->length) ··· 75 74 76 75 if (!ret) { 77 76 msg->sg.start = i; 78 - sk_psock_queue_msg(psock, tmp); 77 + if (!sk_psock_queue_msg(psock, tmp)) 78 + atomic_sub(copied, &sk->sk_rmem_alloc); 79 79 sk_psock_data_ready(sk, psock); 80 80 } else { 81 81 sk_msg_free(sk, tmp); ··· 495 493 static int tcp_bpf_sendmsg(struct sock *sk, struct msghdr *msg, size_t size) 496 494 { 497 495 struct sk_msg tmp, *msg_tx = NULL; 498 - int copied = 0, err = 0; 496 + int copied = 0, err = 0, ret = 0; 499 497 struct sk_psock *psock; 500 498 long timeo; 501 499 int flags; ··· 538 536 copy = msg_tx->sg.size - osize; 539 537 } 540 538 541 - err = sk_msg_memcopy_from_iter(sk, &msg->msg_iter, msg_tx, 539 + ret = sk_msg_memcopy_from_iter(sk, &msg->msg_iter, msg_tx, 542 540 copy); 543 - if (err < 0) { 541 + if (ret < 0) { 544 542 sk_msg_trim(sk, msg_tx, osize); 545 543 goto out_err; 546 544 } 547 545 548 - copied += copy; 546 + copied += ret; 549 547 if (psock->cork_bytes) { 550 548 if (size > psock->cork_bytes) 551 549 psock->cork_bytes = 0;
+1
net/ipv4/tcp_input.c
··· 7328 7328 if (unlikely(!inet_csk_reqsk_queue_hash_add(sk, req, 7329 7329 req->timeout))) { 7330 7330 reqsk_free(req); 7331 + dst_release(dst); 7331 7332 return 0; 7332 7333 } 7333 7334
+1 -1
net/ipv4/tcp_ipv4.c
··· 896 896 sock_net_set(ctl_sk, net); 897 897 if (sk) { 898 898 ctl_sk->sk_mark = (sk->sk_state == TCP_TIME_WAIT) ? 899 - inet_twsk(sk)->tw_mark : sk->sk_mark; 899 + inet_twsk(sk)->tw_mark : READ_ONCE(sk->sk_mark); 900 900 ctl_sk->sk_priority = (sk->sk_state == TCP_TIME_WAIT) ? 901 901 inet_twsk(sk)->tw_priority : READ_ONCE(sk->sk_priority); 902 902 transmit_time = tcp_transmit_time(sk);
+11 -5
net/ipv6/ila/ila_xlat.c
··· 195 195 }, 196 196 }; 197 197 198 + static DEFINE_MUTEX(ila_mutex); 199 + 198 200 static int ila_add_mapping(struct net *net, struct ila_xlat_params *xp) 199 201 { 200 202 struct ila_net *ilan = net_generic(net, ila_net_id); ··· 204 202 spinlock_t *lock = ila_get_lock(ilan, xp->ip.locator_match); 205 203 int err = 0, order; 206 204 207 - if (!ilan->xlat.hooks_registered) { 205 + if (!READ_ONCE(ilan->xlat.hooks_registered)) { 208 206 /* We defer registering net hooks in the namespace until the 209 207 * first mapping is added. 210 208 */ 211 - err = nf_register_net_hooks(net, ila_nf_hook_ops, 212 - ARRAY_SIZE(ila_nf_hook_ops)); 209 + mutex_lock(&ila_mutex); 210 + if (!ilan->xlat.hooks_registered) { 211 + err = nf_register_net_hooks(net, ila_nf_hook_ops, 212 + ARRAY_SIZE(ila_nf_hook_ops)); 213 + if (!err) 214 + WRITE_ONCE(ilan->xlat.hooks_registered, true); 215 + } 216 + mutex_unlock(&ila_mutex); 213 217 if (err) 214 218 return err; 215 - 216 - ilan->xlat.hooks_registered = true; 217 219 } 218 220 219 221 ila = kzalloc(sizeof(*ila), GFP_KERNEL);
+1 -1
net/llc/llc_input.c
··· 124 124 if (unlikely(!pskb_may_pull(skb, llc_len))) 125 125 return 0; 126 126 127 - skb->transport_header += llc_len; 128 127 skb_pull(skb, llc_len); 128 + skb_reset_transport_header(skb); 129 129 if (skb->protocol == htons(ETH_P_802_2)) { 130 130 __be16 pdulen; 131 131 s32 data_size;
+4
net/mac802154/iface.c
··· 684 684 ASSERT_RTNL(); 685 685 686 686 mutex_lock(&sdata->local->iflist_mtx); 687 + if (list_empty(&sdata->local->interfaces)) { 688 + mutex_unlock(&sdata->local->iflist_mtx); 689 + return; 690 + } 687 691 list_del_rcu(&sdata->list); 688 692 mutex_unlock(&sdata->local->iflist_mtx); 689 693
+26 -10
net/mctp/route.c
··· 374 374 msk = NULL; 375 375 rc = -EINVAL; 376 376 377 - /* we may be receiving a locally-routed packet; drop source sk 378 - * accounting 377 + /* We may be receiving a locally-routed packet; drop source sk 378 + * accounting. 379 + * 380 + * From here, we will either queue the skb - either to a frag_queue, or 381 + * to a receiving socket. When that succeeds, we clear the skb pointer; 382 + * a non-NULL skb on exit will be otherwise unowned, and hence 383 + * kfree_skb()-ed. 379 384 */ 380 385 skb_orphan(skb); 381 386 ··· 439 434 * pending key. 440 435 */ 441 436 if (flags & MCTP_HDR_FLAG_EOM) { 442 - sock_queue_rcv_skb(&msk->sk, skb); 437 + rc = sock_queue_rcv_skb(&msk->sk, skb); 438 + if (!rc) 439 + skb = NULL; 443 440 if (key) { 444 441 /* we've hit a pending reassembly; not much we 445 442 * can do but drop it ··· 450 443 MCTP_TRACE_KEY_REPLIED); 451 444 key = NULL; 452 445 } 453 - rc = 0; 454 446 goto out_unlock; 455 447 } 456 448 ··· 476 470 * this function. 477 471 */ 478 472 rc = mctp_key_add(key, msk); 479 - if (!rc) 473 + if (!rc) { 480 474 trace_mctp_key_acquire(key); 475 + skb = NULL; 476 + } 481 477 482 478 /* we don't need to release key->lock on exit, so 483 479 * clean up here and suppress the unlock via ··· 497 489 key = NULL; 498 490 } else { 499 491 rc = mctp_frag_queue(key, skb); 492 + if (!rc) 493 + skb = NULL; 500 494 } 501 495 } 502 496 ··· 513 503 else 514 504 rc = mctp_frag_queue(key, skb); 515 505 506 + if (rc) 507 + goto out_unlock; 508 + 509 + /* we've queued; the queue owns the skb now */ 510 + skb = NULL; 511 + 516 512 /* end of message? deliver to socket, and we're done with 517 513 * the reassembly/response key 518 514 */ 519 - if (!rc && flags & MCTP_HDR_FLAG_EOM) { 520 - sock_queue_rcv_skb(key->sk, key->reasm_head); 521 - key->reasm_head = NULL; 515 + if (flags & MCTP_HDR_FLAG_EOM) { 516 + rc = sock_queue_rcv_skb(key->sk, key->reasm_head); 517 + if (!rc) 518 + key->reasm_head = NULL; 522 519 __mctp_key_done_in(key, net, f, MCTP_TRACE_KEY_REPLIED); 523 520 key = NULL; 524 521 } ··· 544 527 if (any_key) 545 528 mctp_key_unref(any_key); 546 529 out: 547 - if (rc) 548 - kfree_skb(skb); 530 + kfree_skb(skb); 549 531 return rc; 550 532 } 551 533
+86
net/mctp/test/route-test.c
··· 837 837 mctp_test_route_input_multiple_nets_key_fini(test, &t2); 838 838 } 839 839 840 + /* Input route to socket, using a single-packet message, where sock delivery 841 + * fails. Ensure we're handling the failure appropriately. 842 + */ 843 + static void mctp_test_route_input_sk_fail_single(struct kunit *test) 844 + { 845 + const struct mctp_hdr hdr = RX_HDR(1, 10, 8, FL_S | FL_E | FL_TO); 846 + struct mctp_test_route *rt; 847 + struct mctp_test_dev *dev; 848 + struct socket *sock; 849 + struct sk_buff *skb; 850 + int rc; 851 + 852 + __mctp_route_test_init(test, &dev, &rt, &sock, MCTP_NET_ANY); 853 + 854 + /* No rcvbuf space, so delivery should fail. __sock_set_rcvbuf will 855 + * clamp the minimum to SOCK_MIN_RCVBUF, so we open-code this. 856 + */ 857 + lock_sock(sock->sk); 858 + WRITE_ONCE(sock->sk->sk_rcvbuf, 0); 859 + release_sock(sock->sk); 860 + 861 + skb = mctp_test_create_skb(&hdr, 10); 862 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, skb); 863 + skb_get(skb); 864 + 865 + mctp_test_skb_set_dev(skb, dev); 866 + 867 + /* do route input, which should fail */ 868 + rc = mctp_route_input(&rt->rt, skb); 869 + KUNIT_EXPECT_NE(test, rc, 0); 870 + 871 + /* we should hold the only reference to skb */ 872 + KUNIT_EXPECT_EQ(test, refcount_read(&skb->users), 1); 873 + kfree_skb(skb); 874 + 875 + __mctp_route_test_fini(test, dev, rt, sock); 876 + } 877 + 878 + /* Input route to socket, using a fragmented message, where sock delivery fails.
879 + */ 880 + static void mctp_test_route_input_sk_fail_frag(struct kunit *test) 881 + { 882 + const struct mctp_hdr hdrs[2] = { RX_FRAG(FL_S, 0), RX_FRAG(FL_E, 1) }; 883 + struct mctp_test_route *rt; 884 + struct mctp_test_dev *dev; 885 + struct sk_buff *skbs[2]; 886 + struct socket *sock; 887 + unsigned int i; 888 + int rc; 889 + 890 + __mctp_route_test_init(test, &dev, &rt, &sock, MCTP_NET_ANY); 891 + 892 + lock_sock(sock->sk); 893 + WRITE_ONCE(sock->sk->sk_rcvbuf, 0); 894 + release_sock(sock->sk); 895 + 896 + for (i = 0; i < ARRAY_SIZE(skbs); i++) { 897 + skbs[i] = mctp_test_create_skb(&hdrs[i], 10); 898 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, skbs[i]); 899 + skb_get(skbs[i]); 900 + 901 + mctp_test_skb_set_dev(skbs[i], dev); 902 + } 903 + 904 + /* first route input should succeed, we're only queueing to the 905 + * frag list 906 + */ 907 + rc = mctp_route_input(&rt->rt, skbs[0]); 908 + KUNIT_EXPECT_EQ(test, rc, 0); 909 + 910 + /* final route input should fail to deliver to the socket */ 911 + rc = mctp_route_input(&rt->rt, skbs[1]); 912 + KUNIT_EXPECT_NE(test, rc, 0); 913 + 914 + /* we should hold the only reference to both skbs */ 915 + KUNIT_EXPECT_EQ(test, refcount_read(&skbs[0]->users), 1); 916 + kfree_skb(skbs[0]); 917 + 918 + KUNIT_EXPECT_EQ(test, refcount_read(&skbs[1]->users), 1); 919 + kfree_skb(skbs[1]); 920 + 921 + __mctp_route_test_fini(test, dev, rt, sock); 922 + } 923 + 840 924 #if IS_ENABLED(CONFIG_MCTP_FLOWS) 841 925 842 926 static void mctp_test_flow_init(struct kunit *test, ··· 1137 1053 mctp_route_input_sk_reasm_gen_params), 1138 1054 KUNIT_CASE_PARAM(mctp_test_route_input_sk_keys, 1139 1055 mctp_route_input_sk_keys_gen_params), 1056 + KUNIT_CASE(mctp_test_route_input_sk_fail_single), 1057 + KUNIT_CASE(mctp_test_route_input_sk_fail_frag), 1140 1058 KUNIT_CASE(mctp_test_route_input_multiple_nets_bind), 1141 1059 KUNIT_CASE(mctp_test_route_input_multiple_nets_key), 1142 1060 KUNIT_CASE(mctp_test_packet_flow),
+9 -8
net/mptcp/ctrl.c
··· 102 102 } 103 103 104 104 #ifdef CONFIG_SYSCTL 105 - static int mptcp_set_scheduler(const struct net *net, const char *name) 105 + static int mptcp_set_scheduler(char *scheduler, const char *name) 106 106 { 107 - struct mptcp_pernet *pernet = mptcp_get_pernet(net); 108 107 struct mptcp_sched_ops *sched; 109 108 int ret = 0; 110 109 111 110 rcu_read_lock(); 112 111 sched = mptcp_sched_find(name); 113 112 if (sched) 114 - strscpy(pernet->scheduler, name, MPTCP_SCHED_NAME_MAX); 113 + strscpy(scheduler, name, MPTCP_SCHED_NAME_MAX); 115 114 else 116 115 ret = -ENOENT; 117 116 rcu_read_unlock(); ··· 121 122 static int proc_scheduler(const struct ctl_table *ctl, int write, 122 123 void *buffer, size_t *lenp, loff_t *ppos) 123 124 { 124 - const struct net *net = current->nsproxy->net_ns; 125 + char (*scheduler)[MPTCP_SCHED_NAME_MAX] = ctl->data; 125 126 char val[MPTCP_SCHED_NAME_MAX]; 126 127 struct ctl_table tbl = { 127 128 .data = val, ··· 129 130 }; 130 131 int ret; 131 132 132 - strscpy(val, mptcp_get_scheduler(net), MPTCP_SCHED_NAME_MAX); 133 + strscpy(val, *scheduler, MPTCP_SCHED_NAME_MAX); 133 134 134 135 ret = proc_dostring(&tbl, write, buffer, lenp, ppos); 135 136 if (write && ret == 0) 136 - ret = mptcp_set_scheduler(net, val); 137 + ret = mptcp_set_scheduler(*scheduler, val); 137 138 138 139 return ret; 139 140 } ··· 160 161 int write, void *buffer, size_t *lenp, 161 162 loff_t *ppos) 162 163 { 163 - struct mptcp_pernet *pernet = mptcp_get_pernet(current->nsproxy->net_ns); 164 + struct mptcp_pernet *pernet = container_of(table->data, 165 + struct mptcp_pernet, 166 + blackhole_timeout); 164 167 int ret; 165 168 166 169 ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos); ··· 229 228 { 230 229 .procname = "available_schedulers", 231 230 .maxlen = MPTCP_SCHED_BUF_MAX, 232 - .mode = 0644, 231 + .mode = 0444, 233 232 .proc_handler = proc_available_schedulers, 234 233 }, 235 234 {
+7
net/mptcp/options.c
··· 667 667 &echo, &drop_other_suboptions)) 668 668 return false; 669 669 670 + /* 671 + * Later on, mptcp_write_options() will enforce mutually exclusion with 672 + * DSS, bail out if such option is set and we can't drop it. 673 + */ 670 674 if (drop_other_suboptions) 671 675 remaining += opt_size; 676 + else if (opts->suboptions & OPTION_MPTCP_DSS) 677 + return false; 678 + 672 679 len = mptcp_add_addr_len(opts->addr.family, echo, !!opts->addr.port); 673 680 if (remaining < len) 674 681 return false;
+12 -11
net/mptcp/protocol.c
··· 136 136 int delta; 137 137 138 138 if (MPTCP_SKB_CB(from)->offset || 139 + ((to->len + from->len) > (sk->sk_rcvbuf >> 3)) || 139 140 !skb_try_coalesce(to, from, &fragstolen, &delta)) 140 141 return false; 141 142 ··· 529 528 mptcp_subflow_send_ack(mptcp_subflow_tcp_sock(subflow)); 530 529 } 531 530 532 - static void mptcp_subflow_cleanup_rbuf(struct sock *ssk) 531 + static void mptcp_subflow_cleanup_rbuf(struct sock *ssk, int copied) 533 532 { 534 533 bool slow; 535 534 536 535 slow = lock_sock_fast(ssk); 537 536 if (tcp_can_send_ack(ssk)) 538 - tcp_cleanup_rbuf(ssk, 1); 537 + tcp_cleanup_rbuf(ssk, copied); 539 538 unlock_sock_fast(ssk, slow); 540 539 } 541 540 ··· 552 551 (ICSK_ACK_PUSHED2 | ICSK_ACK_PUSHED))); 553 552 } 554 553 555 - static void mptcp_cleanup_rbuf(struct mptcp_sock *msk) 554 + static void mptcp_cleanup_rbuf(struct mptcp_sock *msk, int copied) 556 555 { 557 556 int old_space = READ_ONCE(msk->old_wspace); 558 557 struct mptcp_subflow_context *subflow; ··· 560 559 int space = __mptcp_space(sk); 561 560 bool cleanup, rx_empty; 562 561 563 - cleanup = (space > 0) && (space >= (old_space << 1)); 564 - rx_empty = !__mptcp_rmem(sk); 562 + cleanup = (space > 0) && (space >= (old_space << 1)) && copied; 563 + rx_empty = !__mptcp_rmem(sk) && copied; 565 564 566 565 mptcp_for_each_subflow(msk, subflow) { 567 566 struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 568 567 569 568 if (cleanup || mptcp_subflow_could_cleanup(ssk, rx_empty)) 570 - mptcp_subflow_cleanup_rbuf(ssk); 569 + mptcp_subflow_cleanup_rbuf(ssk, copied); 571 570 } 572 571 } 573 572 ··· 1940 1939 goto out; 1941 1940 } 1942 1941 1942 + static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied); 1943 + 1943 1944 static int __mptcp_recvmsg_mskq(struct mptcp_sock *msk, 1944 1945 struct msghdr *msg, 1945 1946 size_t len, int flags, ··· 1995 1992 break; 1996 1993 } 1997 1994 1995 + mptcp_rcv_space_adjust(msk, copied); 1998 1996 return copied; 1999 1997 } 2000 1998 ··· 2221 2217 2222 2218 copied += bytes_read; 2223 2219 2224 - /* be sure to advertise window change */ 2225 - mptcp_cleanup_rbuf(msk); 2226 - 2227 2220 if (skb_queue_empty(&msk->receive_queue) && __mptcp_move_skbs(msk)) 2228 2221 continue; 2229 2222 ··· 2269 2268 } 2270 2269 2271 2270 pr_debug("block timeout %ld\n", timeo); 2272 - mptcp_rcv_space_adjust(msk, copied); 2271 + mptcp_cleanup_rbuf(msk, copied); 2273 2272 err = sk_wait_data(sk, &timeo, NULL); 2274 2273 if (err < 0) { 2275 2274 err = copied ? : err; ··· 2277 2276 } 2278 2277 } 2279 2278 2280 - mptcp_rcv_space_adjust(msk, copied); 2279 + mptcp_cleanup_rbuf(msk, copied); 2281 2280 2282 2281 out_err: 2283 2282 if (cmsg_flags && copied >= 0) {
+3
net/netfilter/ipset/ip_set_list_set.c
··· 611 611 return true; 612 612 } 613 613 614 + static struct lock_class_key list_set_lockdep_key; 615 + 614 616 static int 615 617 list_set_create(struct net *net, struct ip_set *set, struct nlattr *tb[], 616 618 u32 flags) ··· 629 627 if (size < IP_SET_LIST_MIN_SIZE) 630 628 size = IP_SET_LIST_MIN_SIZE; 631 629 630 + lockdep_set_class(&set->lock, &list_set_lockdep_key); 632 631 set->variant = &set_variant; 633 632 set->dsize = ip_set_elem_len(set, tb, sizeof(struct set_elem), 634 633 __alignof__(struct set_elem));
+2 -2
net/netfilter/ipvs/ip_vs_conn.c
··· 1495 1495 max_avail -= 2; /* ~4 in hash row */ 1496 1496 max_avail -= 1; /* IPVS up to 1/2 of mem */ 1497 1497 max_avail -= order_base_2(sizeof(struct ip_vs_conn)); 1498 - max = clamp(max, min, max_avail); 1499 - ip_vs_conn_tab_bits = clamp_val(ip_vs_conn_tab_bits, min, max); 1498 + max = clamp(max_avail, min, max); 1499 + ip_vs_conn_tab_bits = clamp(ip_vs_conn_tab_bits, min, max); 1500 1500 ip_vs_conn_tab_size = 1 << ip_vs_conn_tab_bits; 1501 1501 ip_vs_conn_tab_mask = ip_vs_conn_tab_size - 1; 1502 1502
+4 -1
net/netfilter/nf_conntrack_core.c
··· 2517 2517 struct hlist_nulls_head *hash; 2518 2518 unsigned int nr_slots, i; 2519 2519 2520 - if (*sizep > (UINT_MAX / sizeof(struct hlist_nulls_head))) 2520 + if (*sizep > (INT_MAX / sizeof(struct hlist_nulls_head))) 2521 2521 return NULL; 2522 2522 2523 2523 BUILD_BUG_ON(sizeof(struct hlist_nulls_head) != sizeof(struct hlist_head)); 2524 2524 nr_slots = *sizep = roundup(*sizep, PAGE_SIZE / sizeof(struct hlist_nulls_head)); 2525 + 2526 + if (nr_slots > (INT_MAX / sizeof(struct hlist_nulls_head))) 2527 + return NULL; 2525 2528 2526 2529 hash = kvcalloc(nr_slots, sizeof(struct hlist_nulls_head), GFP_KERNEL); 2527 2530
+11 -4
net/netfilter/nf_tables_api.c
··· 8822 8822 } 8823 8823 8824 8824 static void __nft_unregister_flowtable_net_hooks(struct net *net, 8825 + struct nft_flowtable *flowtable, 8825 8826 struct list_head *hook_list, 8826 8827 bool release_netdev) 8827 8828 { ··· 8830 8829 8831 8830 list_for_each_entry_safe(hook, next, hook_list, list) { 8832 8831 nf_unregister_net_hook(net, &hook->ops); 8832 + flowtable->data.type->setup(&flowtable->data, hook->ops.dev, 8833 + FLOW_BLOCK_UNBIND); 8833 8834 if (release_netdev) { 8834 8835 list_del(&hook->list); 8835 8836 kfree_rcu(hook, rcu); ··· 8840 8837 } 8841 8838 8842 8839 static void nft_unregister_flowtable_net_hooks(struct net *net, 8840 + struct nft_flowtable *flowtable, 8843 8841 struct list_head *hook_list) 8844 8842 { 8845 - __nft_unregister_flowtable_net_hooks(net, hook_list, false); 8843 + __nft_unregister_flowtable_net_hooks(net, flowtable, hook_list, false); 8846 8844 } 8847 8845 8848 8846 static int nft_register_flowtable_net_hooks(struct net *net, ··· 9485 9481 9486 9482 flowtable->data.type->free(&flowtable->data); 9487 9483 list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) { 9488 - flowtable->data.type->setup(&flowtable->data, hook->ops.dev, 9489 - FLOW_BLOCK_UNBIND); 9490 9484 list_del_rcu(&hook->list); 9491 9485 kfree_rcu(hook, rcu); 9492 9486 } ··· 10872 10870 &nft_trans_flowtable_hooks(trans), 10873 10871 trans->msg_type); 10874 10872 nft_unregister_flowtable_net_hooks(net, 10873 + nft_trans_flowtable(trans), 10875 10874 &nft_trans_flowtable_hooks(trans)); 10876 10875 } else { 10877 10876 list_del_rcu(&nft_trans_flowtable(trans)->list); ··· 10881 10878 NULL, 10882 10879 trans->msg_type); 10883 10880 nft_unregister_flowtable_net_hooks(net, 10881 + nft_trans_flowtable(trans), 10884 10882 &nft_trans_flowtable(trans)->hook_list); 10885 10883 } 10886 10884 break; ··· 11144 11140 case NFT_MSG_NEWFLOWTABLE: 11145 11141 if (nft_trans_flowtable_update(trans)) { 11146 11142 nft_unregister_flowtable_net_hooks(net, 11143 + nft_trans_flowtable(trans), 11147 11144 &nft_trans_flowtable_hooks(trans)); 11148 11145 } else { 11149 11146 nft_use_dec_restore(&table->use); 11150 11147 list_del_rcu(&nft_trans_flowtable(trans)->list); 11151 11148 nft_unregister_flowtable_net_hooks(net, 11149 + nft_trans_flowtable(trans), 11152 11150 &nft_trans_flowtable(trans)->hook_list); 11153 11151 } 11154 11152 break; ··· 11743 11737 list_for_each_entry(chain, &table->chains, list) 11744 11738 __nf_tables_unregister_hook(net, table, chain, true); 11745 11739 list_for_each_entry(flowtable, &table->flowtables, list) 11746 - __nft_unregister_flowtable_net_hooks(net, &flowtable->hook_list, 11740 + __nft_unregister_flowtable_net_hooks(net, flowtable, 11741 + &flowtable->hook_list, 11747 11742 true); 11748 11743 } 11749 11744
+6
net/netrom/nr_route.c
··· 754 754 int ret; 755 755 struct sk_buff *skbn; 756 756 757 + /* 758 + * Reject malformed packets early. Check that it contains at least 2 759 + * addresses and 1 byte more for Time-To-Live 760 + */ 761 + if (skb->len < 2 * sizeof(ax25_address) + 1) 762 + return 0; 757 763 758 764 nr_src = (ax25_address *)(skb->data + 0); 759 765 nr_dest = (ax25_address *)(skb->data + 7);
+7 -21
net/packet/af_packet.c
··· 538 538 return packet_lookup_frame(po, rb, rb->head, status); 539 539 } 540 540 541 - static u16 vlan_get_tci(struct sk_buff *skb, struct net_device *dev) 541 + static u16 vlan_get_tci(const struct sk_buff *skb, struct net_device *dev) 542 542 { 543 - u8 *skb_orig_data = skb->data; 544 - int skb_orig_len = skb->len; 545 543 struct vlan_hdr vhdr, *vh; 546 544 unsigned int header_len; 547 545 ··· 560 562 else 561 563 return 0; 562 564 563 - skb_push(skb, skb->data - skb_mac_header(skb)); 564 - vh = skb_header_pointer(skb, header_len, sizeof(vhdr), &vhdr); 565 - if (skb_orig_data != skb->data) { 566 - skb->data = skb_orig_data; 567 - skb->len = skb_orig_len; 568 - } 565 + vh = skb_header_pointer(skb, skb_mac_offset(skb) + header_len, 566 + sizeof(vhdr), &vhdr); 569 567 if (unlikely(!vh)) 570 568 return 0; 571 569 572 570 return ntohs(vh->h_vlan_TCI); 573 571 } 574 572 575 - static __be16 vlan_get_protocol_dgram(struct sk_buff *skb) 573 + static __be16 vlan_get_protocol_dgram(const struct sk_buff *skb) 576 574 { 577 575 __be16 proto = skb->protocol; 578 576 579 - if (unlikely(eth_type_vlan(proto))) { 580 - u8 *skb_orig_data = skb->data; 581 - int skb_orig_len = skb->len; 582 - 583 - skb_push(skb, skb->data - skb_mac_header(skb)); 584 - proto = __vlan_get_protocol(skb, proto, NULL); 585 - if (skb_orig_data != skb->data) { 586 - skb->data = skb_orig_data; 587 - skb->len = skb_orig_len; 588 - } 589 - } 577 + if (unlikely(eth_type_vlan(proto))) 578 + proto = __vlan_get_protocol_offset(skb, proto, 579 + skb_mac_offset(skb), NULL); 590 580 591 581 return proto; 592 582 }
+6 -3
net/psample/psample.c
··· 393 393 nla_total_size_64bit(sizeof(u64)) + /* timestamp */ 394 394 nla_total_size(sizeof(u16)) + /* protocol */ 395 395 (md->user_cookie_len ? 396 - nla_total_size(md->user_cookie_len) : 0); /* user cookie */ 396 + nla_total_size(md->user_cookie_len) : 0) + /* user cookie */ 397 + (md->rate_as_probability ? 398 + nla_total_size(0) : 0); /* rate as probability */ 397 399 398 400 #ifdef CONFIG_INET 399 401 tun_info = skb_tunnel_info(skb); ··· 500 498 md->user_cookie)) 501 499 goto error; 502 500 503 - if (md->rate_as_probability) 504 - nla_put_flag(nl_skb, PSAMPLE_ATTR_SAMPLE_PROBABILITY); 501 + if (md->rate_as_probability && 502 + nla_put_flag(nl_skb, PSAMPLE_ATTR_SAMPLE_PROBABILITY)) 503 + goto error; 505 504 506 505 genlmsg_end(nl_skb, data); 507 506 genlmsg_multicast_netns(&psample_nl_family, group->net, nl_skb, 0,
+32 -7
net/rds/tcp.c
··· 61 61 62 62 static struct kmem_cache *rds_tcp_conn_slab; 63 63 64 - static int rds_tcp_skbuf_handler(const struct ctl_table *ctl, int write, 65 - void *buffer, size_t *lenp, loff_t *fpos); 64 + static int rds_tcp_sndbuf_handler(const struct ctl_table *ctl, int write, 65 + void *buffer, size_t *lenp, loff_t *fpos); 66 + static int rds_tcp_rcvbuf_handler(const struct ctl_table *ctl, int write, 67 + void *buffer, size_t *lenp, loff_t *fpos); 66 68 67 69 static int rds_tcp_min_sndbuf = SOCK_MIN_SNDBUF; 68 70 static int rds_tcp_min_rcvbuf = SOCK_MIN_RCVBUF; ··· 76 74 /* data is per-net pointer */ 77 75 .maxlen = sizeof(int), 78 76 .mode = 0644, 79 - .proc_handler = rds_tcp_skbuf_handler, 77 + .proc_handler = rds_tcp_sndbuf_handler, 80 78 .extra1 = &rds_tcp_min_sndbuf, 81 79 }, 82 80 #define RDS_TCP_RCVBUF 1 ··· 85 83 /* data is per-net pointer */ 86 84 .maxlen = sizeof(int), 87 85 .mode = 0644, 88 - .proc_handler = rds_tcp_skbuf_handler, 86 + .proc_handler = rds_tcp_rcvbuf_handler, 89 87 .extra1 = &rds_tcp_min_rcvbuf, 90 88 }, 91 89 }; ··· 684 682 spin_unlock_irq(&rds_tcp_conn_lock); 685 683 } 686 684 687 - static int rds_tcp_skbuf_handler(const struct ctl_table *ctl, int write, 685 + static int rds_tcp_skbuf_handler(struct rds_tcp_net *rtn, 686 + const struct ctl_table *ctl, int write, 688 687 void *buffer, size_t *lenp, loff_t *fpos) 689 688 { 690 - struct net *net = current->nsproxy->net_ns; 691 689 int err; 692 690 693 691 err = proc_dointvec_minmax(ctl, write, buffer, lenp, fpos); ··· 696 694 *(int *)(ctl->extra1)); 697 695 return err; 698 696 } 699 - if (write) 697 + 698 + if (write && rtn->rds_tcp_listen_sock && rtn->rds_tcp_listen_sock->sk) { 699 + struct net *net = sock_net(rtn->rds_tcp_listen_sock->sk); 700 + 700 701 rds_tcp_sysctl_reset(net); 702 + } 703 + 701 704 return 0; 705 + } 706 + 707 + static int rds_tcp_sndbuf_handler(const struct ctl_table *ctl, int write, 708 + void *buffer, size_t *lenp, loff_t *fpos) 709 + { 710 + struct rds_tcp_net *rtn = container_of(ctl->data, struct rds_tcp_net, 711 + sndbuf_size); 712 + 713 + return rds_tcp_skbuf_handler(rtn, ctl, write, buffer, lenp, fpos); 714 + } 715 + 716 + static int rds_tcp_rcvbuf_handler(const struct ctl_table *ctl, int write, 717 + void *buffer, size_t *lenp, loff_t *fpos) 718 + { 719 + struct rds_tcp_net *rtn = container_of(ctl->data, struct rds_tcp_net, 720 + rcvbuf_size); 721 + 722 + return rds_tcp_skbuf_handler(rtn, ctl, write, buffer, lenp, fpos); 702 723 } 703 724 704 725 static void rds_tcp_exit(void)
+2 -1
net/sched/cls_flow.c
··· 356 356 [TCA_FLOW_KEYS] = { .type = NLA_U32 }, 357 357 [TCA_FLOW_MODE] = { .type = NLA_U32 }, 358 358 [TCA_FLOW_BASECLASS] = { .type = NLA_U32 }, 359 - [TCA_FLOW_RSHIFT] = { .type = NLA_U32 }, 359 + [TCA_FLOW_RSHIFT] = NLA_POLICY_MAX(NLA_U32, 360 + 31 /* BITS_PER_U32 - 1 */), 360 361 [TCA_FLOW_ADDEND] = { .type = NLA_U32 }, 361 362 [TCA_FLOW_MASK] = { .type = NLA_U32 }, 362 363 [TCA_FLOW_XOR] = { .type = NLA_U32 },
+75 -65
net/sched/sch_cake.c
··· 627 627 return (flow_mode & CAKE_FLOW_DUAL_DST) == CAKE_FLOW_DUAL_DST; 628 628 } 629 629 630 + static void cake_dec_srchost_bulk_flow_count(struct cake_tin_data *q, 631 + struct cake_flow *flow, 632 + int flow_mode) 633 + { 634 + if (likely(cake_dsrc(flow_mode) && 635 + q->hosts[flow->srchost].srchost_bulk_flow_count)) 636 + q->hosts[flow->srchost].srchost_bulk_flow_count--; 637 + } 638 + 639 + static void cake_inc_srchost_bulk_flow_count(struct cake_tin_data *q, 640 + struct cake_flow *flow, 641 + int flow_mode) 642 + { 643 + if (likely(cake_dsrc(flow_mode) && 644 + q->hosts[flow->srchost].srchost_bulk_flow_count < CAKE_QUEUES)) 645 + q->hosts[flow->srchost].srchost_bulk_flow_count++; 646 + } 647 + 648 + static void cake_dec_dsthost_bulk_flow_count(struct cake_tin_data *q, 649 + struct cake_flow *flow, 650 + int flow_mode) 651 + { 652 + if (likely(cake_ddst(flow_mode) && 653 + q->hosts[flow->dsthost].dsthost_bulk_flow_count)) 654 + q->hosts[flow->dsthost].dsthost_bulk_flow_count--; 655 + } 656 + 657 + static void cake_inc_dsthost_bulk_flow_count(struct cake_tin_data *q, 658 + struct cake_flow *flow, 659 + int flow_mode) 660 + { 661 + if (likely(cake_ddst(flow_mode) && 662 + q->hosts[flow->dsthost].dsthost_bulk_flow_count < CAKE_QUEUES)) 663 + q->hosts[flow->dsthost].dsthost_bulk_flow_count++; 664 + } 665 + 666 + static u16 cake_get_flow_quantum(struct cake_tin_data *q, 667 + struct cake_flow *flow, 668 + int flow_mode) 669 + { 670 + u16 host_load = 1; 671 + 672 + if (cake_dsrc(flow_mode)) 673 + host_load = max(host_load, 674 + q->hosts[flow->srchost].srchost_bulk_flow_count); 675 + 676 + if (cake_ddst(flow_mode)) 677 + host_load = max(host_load, 678 + q->hosts[flow->dsthost].dsthost_bulk_flow_count); 679 + 680 + /* The get_random_u16() is a way to apply dithering to avoid 681 + * accumulating roundoff errors 682 + */ 683 + return (q->flow_quantum * quantum_div[host_load] + 684 + get_random_u16()) >> 16; 685 + } 686 + 630 687 static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb, 631 688 int flow_mode, u16 flow_override, u16 host_override) 632 689 { ··· 830 773 allocate_dst = cake_ddst(flow_mode); 831 774 832 775 if (q->flows[outer_hash + k].set == CAKE_SET_BULK) { 833 - if (allocate_src) 834 - q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--; 835 - if (allocate_dst) 836 - q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--; 776 + cake_dec_srchost_bulk_flow_count(q, &q->flows[outer_hash + k], flow_mode); 777 + cake_dec_dsthost_bulk_flow_count(q, &q->flows[outer_hash + k], flow_mode); 837 778 } 838 779 found: 839 780 /* reserve queue for future packets in same flow */ ··· 856 801 q->hosts[outer_hash + k].srchost_tag = srchost_hash; 857 802 found_src: 858 803 srchost_idx = outer_hash + k; 859 - if (q->flows[reduced_hash].set == CAKE_SET_BULK) 860 - q->hosts[srchost_idx].srchost_bulk_flow_count++; 861 804 q->flows[reduced_hash].srchost = srchost_idx; 805 + 806 + if (q->flows[reduced_hash].set == CAKE_SET_BULK) 807 + cake_inc_srchost_bulk_flow_count(q, &q->flows[reduced_hash], flow_mode); 862 808 } 863 809 864 810 if (allocate_dst) { ··· 880 824 q->hosts[outer_hash + k].dsthost_tag = dsthost_hash; 881 825 found_dst: 882 826 dsthost_idx = outer_hash + k; 883 - if (q->flows[reduced_hash].set == CAKE_SET_BULK) 884 - q->hosts[dsthost_idx].dsthost_bulk_flow_count++; 885 827 q->flows[reduced_hash].dsthost = dsthost_idx; 828 + 829 + if (q->flows[reduced_hash].set == CAKE_SET_BULK) 830 + cake_inc_dsthost_bulk_flow_count(q, &q->flows[reduced_hash], flow_mode); 886 831 } 887 832 } 888 833 ··· 1896 1839 1897 1840 /* flowchain */ 1898 1841 if (!flow->set || flow->set == CAKE_SET_DECAYING) { 1899 - struct cake_host *srchost = &b->hosts[flow->srchost]; 1900 - struct cake_host *dsthost = &b->hosts[flow->dsthost]; 1901 - u16 host_load = 1; 1902 - 1903 1842 if (!flow->set) { 1904 1843 list_add_tail(&flow->flowchain, &b->new_flows); 1905 1844 } else { ··· 1905 1852 flow->set = CAKE_SET_SPARSE; 1906 1853 b->sparse_flow_count++; 1907 1854 1908 - if (cake_dsrc(q->flow_mode)) 1909 - host_load = max(host_load, srchost->srchost_bulk_flow_count); 1910 - 1911 - if (cake_ddst(q->flow_mode)) 1912 - host_load = max(host_load, dsthost->dsthost_bulk_flow_count); 1913 - 1914 - flow->deficit = (b->flow_quantum * 1915 - quantum_div[host_load]) >> 16; 1855 + flow->deficit = cake_get_flow_quantum(b, flow, q->flow_mode); 1916 1856 } else if (flow->set == CAKE_SET_SPARSE_WAIT) { 1917 - struct cake_host *srchost = &b->hosts[flow->srchost]; 1918 - struct cake_host *dsthost = &b->hosts[flow->dsthost]; 1919 - 1920 1857 /* this flow was empty, accounted as a sparse flow, but actually 1921 1858 * in the bulk rotation. 1922 1859 */ ··· 1914 1871 b->sparse_flow_count--; 1915 1872 b->bulk_flow_count++; 1916 1873 1917 - if (cake_dsrc(q->flow_mode)) 1918 - srchost->srchost_bulk_flow_count++; 1919 - 1920 - if (cake_ddst(q->flow_mode)) 1921 - dsthost->dsthost_bulk_flow_count++; 1922 - 1874 + cake_inc_srchost_bulk_flow_count(b, flow, q->flow_mode); 1875 + cake_inc_dsthost_bulk_flow_count(b, flow, q->flow_mode); 1923 1876 } 1924 1877 1925 1878 if (q->buffer_used > q->buffer_max_used) ··· 1972 1933 { 1973 1934 struct cake_sched_data *q = qdisc_priv(sch); 1974 1935 struct cake_tin_data *b = &q->tins[q->cur_tin]; 1975 - struct cake_host *srchost, *dsthost; 1976 1936 ktime_t now = ktime_get(); 1977 1937 struct cake_flow *flow; 1978 1938 struct list_head *head; 1979 1939 bool first_flow = true; 1980 1940 struct sk_buff *skb; 1981 - u16 host_load; 1982 1941 u64 delay; 1983 1942 u32 len; 1984 1943 ··· 2076 2039 q->cur_flow = flow - b->flows; 2077 2040 first_flow = false; 2078 2041 2079 - /* triple isolation (modified DRR++) */ 2080 - srchost = &b->hosts[flow->srchost]; 2081 - dsthost = &b->hosts[flow->dsthost]; 2082 - host_load = 1; 2083 - 2084 2042 /* flow isolation (DRR++) */ 2085 2043 if (flow->deficit <= 0) { 2086 2044 /* Keep all flows with deficits out of the sparse and decaying ··· 2087 2055 b->sparse_flow_count--; 2088 2056 b->bulk_flow_count++; 2089 2057 2090 - if (cake_dsrc(q->flow_mode)) 2091 - srchost->srchost_bulk_flow_count++; 2092 - 2093 - if (cake_ddst(q->flow_mode)) 2094 - dsthost->dsthost_bulk_flow_count++; 2058 + cake_inc_srchost_bulk_flow_count(b, flow, q->flow_mode); 2059 + cake_inc_dsthost_bulk_flow_count(b, flow, q->flow_mode); 2095 2060 2096 2061 flow->set = CAKE_SET_BULK; 2097 2062 } else { ··· 2100 2071 } 2101 2072 } 2102 2073 2103 - if (cake_dsrc(q->flow_mode)) 2104 - host_load = max(host_load, srchost->srchost_bulk_flow_count); 2105 - 2106 - if (cake_ddst(q->flow_mode)) 2107 - host_load = max(host_load, dsthost->dsthost_bulk_flow_count); 2108 - 2109 - WARN_ON(host_load > CAKE_QUEUES); 2110 - 2111 - /* The get_random_u16() is a way to apply dithering to avoid 2112 - * accumulating roundoff errors 2113 - */ 2114 - flow->deficit += (b->flow_quantum * quantum_div[host_load] + 2115 - get_random_u16()) >> 16; 2074 + flow->deficit += cake_get_flow_quantum(b, flow, q->flow_mode); 2116 2075 list_move_tail(&flow->flowchain, &b->old_flows); 2117 2076 2118 2077 goto retry; ··· 2124 2107 if (flow->set == CAKE_SET_BULK) { 2125 2108 b->bulk_flow_count--; 2126 2109 2127 - if (cake_dsrc(q->flow_mode)) 2128 - srchost->srchost_bulk_flow_count--; 2129 - 2130 - if (cake_ddst(q->flow_mode)) 2131 - dsthost->dsthost_bulk_flow_count--; 2110 + cake_dec_srchost_bulk_flow_count(b, flow, q->flow_mode); 2111 + cake_dec_dsthost_bulk_flow_count(b, flow, q->flow_mode); 2132 2112 2133 2113 b->decaying_flow_count++; 2134 2114 } else if (flow->set == CAKE_SET_SPARSE || ··· 2143 2129 else if (flow->set == CAKE_SET_BULK) { 2144 2130 b->bulk_flow_count--; 2145 2131 2146 - if (cake_dsrc(q->flow_mode)) 2147 - srchost->srchost_bulk_flow_count--; 2148 - 2149 - if (cake_ddst(q->flow_mode)) 2150 - dsthost->dsthost_bulk_flow_count--; 2151 - 2132 + cake_dec_srchost_bulk_flow_count(b, flow, q->flow_mode); 2133 + cake_dec_dsthost_bulk_flow_count(b, flow,
q->flow_mode); 2152 2134 } else 2153 2135 b->decaying_flow_count--; 2154 2136
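The dithering in `cake_get_flow_quantum()` above can be modeled in userspace. The idea: `quantum_div[host_load]` is a Q16 reciprocal of the host load, and adding a uniform 16-bit random value before the right shift makes the average computed deficit equal the exact quotient, so truncation errors do not accumulate across rounds. This is an illustrative sketch, not kernel code; `quantum_div_q16` stands in for CAKE's precomputed table.

```c
#include <assert.h>
#include <stdint.h>

/* Q16 reciprocal of the host load; CAKE keeps these in a table. */
static uint32_t quantum_div_q16(uint16_t host_load)
{
    return 0x10000 / host_load;
}

/* Deficit grant with dithering: adding a uniform value in [0, 65535]
 * before the >>16 makes the long-run average equal quantum/host_load
 * instead of always rounding down. */
static uint16_t dithered_quantum(uint16_t quantum, uint16_t host_load,
                                 uint16_t dither)
{
    return (uint16_t)(((uint32_t)quantum * quantum_div_q16(host_load) +
                       dither) >> 16);
}
```

With a 1514-byte quantum shared by three bulk flows of one host, the grant alternates between 504 and 505 bytes depending on the dither, averaging the exact 1514/3.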
+2 -1
net/sctp/associola.c
··· 137 137 = 5 * asoc->rto_max; 138 138 139 139 asoc->timeouts[SCTP_EVENT_TIMEOUT_SACK] = asoc->sackdelay; 140 - asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] = sp->autoclose * HZ; 140 + asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] = 141 + (unsigned long)sp->autoclose * HZ; 141 142 142 143 /* Initializes the timers */ 143 144 for (i = SCTP_EVENT_TIMEOUT_NONE; i < SCTP_NUM_TIMEOUT_TYPES; ++i)
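The cast added above matters because `autoclose * HZ` is evaluated in 32-bit arithmetic before being widened to the `unsigned long` jiffies value, so large autoclose settings wrap. A minimal userspace demonstration (values and the `HZ` constant are illustrative; assumes an LP64 platform where `unsigned long` is 64-bit):

```c
#include <assert.h>

#define HZ 1000
typedef unsigned int u32;

/* Old behavior: u32 * int is a 32-bit multiply that wraps, and only the
 * wrapped result is widened to unsigned long. */
static unsigned long timeout_wrapping(u32 autoclose)
{
    return autoclose * HZ;
}

/* Fixed behavior: widen one operand first so the multiply is 64-bit. */
static unsigned long timeout_fixed(u32 autoclose)
{
    return (unsigned long)autoclose * HZ;
}
```

5,000,000 seconds times HZ=1000 is 5e9, which exceeds 2^32 and wraps to 705,032,704 in the unfixed version.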
+8 -6
net/sctp/sysctl.c
··· 387 387 static int proc_sctp_do_hmac_alg(const struct ctl_table *ctl, int write, 388 388 void *buffer, size_t *lenp, loff_t *ppos) 389 389 { 390 - struct net *net = current->nsproxy->net_ns; 390 + struct net *net = container_of(ctl->data, struct net, 391 + sctp.sctp_hmac_alg); 391 392 struct ctl_table tbl; 392 393 bool changed = false; 393 394 char *none = "none"; ··· 433 432 static int proc_sctp_do_rto_min(const struct ctl_table *ctl, int write, 434 433 void *buffer, size_t *lenp, loff_t *ppos) 435 434 { 436 - struct net *net = current->nsproxy->net_ns; 435 + struct net *net = container_of(ctl->data, struct net, sctp.rto_min); 437 436 unsigned int min = *(unsigned int *) ctl->extra1; 438 437 unsigned int max = *(unsigned int *) ctl->extra2; 439 438 struct ctl_table tbl; ··· 461 460 static int proc_sctp_do_rto_max(const struct ctl_table *ctl, int write, 462 461 void *buffer, size_t *lenp, loff_t *ppos) 463 462 { 464 - struct net *net = current->nsproxy->net_ns; 463 + struct net *net = container_of(ctl->data, struct net, sctp.rto_max); 465 464 unsigned int min = *(unsigned int *) ctl->extra1; 466 465 unsigned int max = *(unsigned int *) ctl->extra2; 467 466 struct ctl_table tbl; ··· 499 498 static int proc_sctp_do_auth(const struct ctl_table *ctl, int write, 500 499 void *buffer, size_t *lenp, loff_t *ppos) 501 500 { 502 - struct net *net = current->nsproxy->net_ns; 501 + struct net *net = container_of(ctl->data, struct net, sctp.auth_enable); 503 502 struct ctl_table tbl; 504 503 int new_value, ret; 505 504 ··· 528 527 static int proc_sctp_do_udp_port(const struct ctl_table *ctl, int write, 529 528 void *buffer, size_t *lenp, loff_t *ppos) 530 529 { 531 - struct net *net = current->nsproxy->net_ns; 530 + struct net *net = container_of(ctl->data, struct net, sctp.udp_port); 532 531 unsigned int min = *(unsigned int *)ctl->extra1; 533 532 unsigned int max = *(unsigned int *)ctl->extra2; 534 533 struct ctl_table tbl; ··· 569 568 static int 
proc_sctp_do_probe_interval(const struct ctl_table *ctl, int write, 570 569 void *buffer, size_t *lenp, loff_t *ppos) 571 570 { 572 - struct net *net = current->nsproxy->net_ns; 571 + struct net *net = container_of(ctl->data, struct net, 572 + sctp.probe_interval); 573 573 struct ctl_table tbl; 574 574 int ret, new_value; 575 575
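The pattern behind these sysctl fixes is recovering the owning object from a pointer to one of its members: `ctl->data` points at the per-namespace field, so `container_of()` yields the right `struct net` even when the write comes in via another process's proc mount, whereas `current->nsproxy->net_ns` could name a different namespace. A userspace sketch with illustrative stand-in types:

```c
#include <assert.h>
#include <stddef.h>

/* Userspace equivalent of the kernel's container_of(). */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct sctp_cfg { unsigned int rto_min; unsigned int rto_max; };
struct net_ns   { int id; struct sctp_cfg sctp; };

/* A ctl_table entry stores &net->sctp.rto_min in its data field; the
 * handler can recover the namespace from that pointer alone. */
static struct net_ns *net_from_rto_min(unsigned int *data)
{
    return container_of(data, struct net_ns, sctp.rto_min);
}

static int demo(void)
{
    struct net_ns ns = { .id = 7 };
    return net_from_rto_min(&ns.sctp.rto_min)->id;
}
```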
+16 -2
net/smc/af_smc.c
··· 2032 2032 if (pclc->hdr.typev1 == SMC_TYPE_N) 2033 2033 return 0; 2034 2034 pclc_prfx = smc_clc_proposal_get_prefix(pclc); 2035 + if (!pclc_prfx) 2036 + return -EPROTO; 2035 2037 if (smc_clc_prfx_match(newclcsock, pclc_prfx)) 2036 2038 return SMC_CLC_DECL_DIFFPREFIX; 2037 2039 ··· 2147 2145 pclc_smcd = smc_get_clc_msg_smcd(pclc); 2148 2146 smc_v2_ext = smc_get_clc_v2_ext(pclc); 2149 2147 smcd_v2_ext = smc_get_clc_smcd_v2_ext(smc_v2_ext); 2148 + if (!pclc_smcd || !smc_v2_ext || !smcd_v2_ext) 2149 + goto not_found; 2150 2150 2151 2151 mutex_lock(&smcd_dev_list.mutex); 2152 2152 if (pclc_smcd->ism.chid) { ··· 2225 2221 int rc = 0; 2226 2222 2227 2223 /* check if ISM V1 is available */ 2228 - if (!(ini->smcd_version & SMC_V1) || !smcd_indicated(ini->smc_type_v1)) 2224 + if (!(ini->smcd_version & SMC_V1) || 2225 + !smcd_indicated(ini->smc_type_v1) || 2226 + !pclc_smcd) 2229 2227 goto not_found; 2230 2228 ini->is_smcd = true; /* prepare ISM check */ 2231 2229 ini->ism_peer_gid[0].gid = ntohll(pclc_smcd->ism.gid); ··· 2278 2272 goto not_found; 2279 2273 2280 2274 smc_v2_ext = smc_get_clc_v2_ext(pclc); 2281 - if (!smc_clc_match_eid(ini->negotiated_eid, smc_v2_ext, NULL, NULL)) 2275 + if (!smc_v2_ext || 2276 + !smc_clc_match_eid(ini->negotiated_eid, smc_v2_ext, NULL, NULL)) 2282 2277 goto not_found; 2283 2278 2284 2279 /* prepare RDMA check */ ··· 2888 2881 } else { 2889 2882 sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk); 2890 2883 set_bit(SOCK_NOSPACE, &sk->sk_socket->flags); 2884 + 2885 + if (sk->sk_state != SMC_INIT) { 2886 + /* Race breaker the same way as tcp_poll(). */ 2887 + smp_mb__after_atomic(); 2888 + if (atomic_read(&smc->conn.sndbuf_space)) 2889 + mask |= EPOLLOUT | EPOLLWRNORM; 2890 + } 2891 2891 } 2892 2892 if (atomic_read(&smc->conn.bytes_to_rcv)) 2893 2893 mask |= EPOLLIN | EPOLLRDNORM;
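The poll race breaker added above mirrors `tcp_poll()`: after publishing "I am about to sleep" (the NOSPACE bit), issue a full barrier and re-read the send-buffer space, so a writer that freed space between the first check and the bit set is not missed. A single-threaded userspace sketch of the control flow, with illustrative names and C11 atomics standing in for the kernel primitives:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define EPOLLOUT_BIT 0x4

struct sock_model {
    atomic_int  sndbuf_space;  /* bytes free, updated by the wakeup side */
    atomic_bool nospace;       /* "poller is about to sleep" flag */
};

static int poll_writable(struct sock_model *sk)
{
    int mask = 0;

    if (atomic_load(&sk->sndbuf_space) > 0)
        return EPOLLOUT_BIT;

    atomic_store(&sk->nospace, true);
    /* pairs with a barrier on the side that frees space, then tests
     * nospace before deciding whether to issue a wakeup */
    atomic_thread_fence(memory_order_seq_cst);
    if (atomic_load(&sk->sndbuf_space) > 0)  /* race-breaker re-check */
        mask |= EPOLLOUT_BIT;
    return mask;
}
```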
+16 -1
net/smc/smc_clc.c
··· 352 352 struct smc_clc_msg_hdr *hdr = &pclc->hdr; 353 353 struct smc_clc_v2_extension *v2_ext; 354 354 355 - v2_ext = smc_get_clc_v2_ext(pclc); 356 355 pclc_prfx = smc_clc_proposal_get_prefix(pclc); 356 + if (!pclc_prfx || 357 + pclc_prfx->ipv6_prefixes_cnt > SMC_CLC_MAX_V6_PREFIX) 358 + return false; 359 + 357 360 if (hdr->version == SMC_V1) { 358 361 if (hdr->typev1 == SMC_TYPE_N) 359 362 return false; ··· 368 365 sizeof(struct smc_clc_msg_trail)) 369 366 return false; 370 367 } else { 368 + v2_ext = smc_get_clc_v2_ext(pclc); 369 + if ((hdr->typev2 != SMC_TYPE_N && 370 + (!v2_ext || v2_ext->hdr.eid_cnt > SMC_CLC_MAX_UEID)) || 371 + (smcd_indicated(hdr->typev2) && 372 + v2_ext->hdr.ism_gid_cnt > SMCD_CLC_MAX_V2_GID_ENTRIES)) 373 + return false; 374 + 371 375 if (ntohs(hdr->length) != 372 376 sizeof(*pclc) + 373 377 sizeof(struct smc_clc_msg_smcd) + ··· 774 764 SMC_CLC_RECV_BUF_LEN : datlen; 775 765 iov_iter_kvec(&msg.msg_iter, ITER_DEST, &vec, 1, recvlen); 776 766 len = sock_recvmsg(smc->clcsock, &msg, krflags); 767 + if (len < recvlen) { 768 + smc->sk.sk_err = EPROTO; 769 + reason_code = -EPROTO; 770 + goto out; 771 + } 777 772 datlen -= len; 778 773 } 779 774 if (clcm->type == SMC_CLC_DECLINE) {
+19 -3
net/smc/smc_clc.h
··· 336 336 static inline struct smc_clc_msg_proposal_prefix * 337 337 smc_clc_proposal_get_prefix(struct smc_clc_msg_proposal *pclc) 338 338 { 339 + u16 offset = ntohs(pclc->iparea_offset); 340 + 341 + if (offset > sizeof(struct smc_clc_msg_smcd)) 342 + return NULL; 339 343 return (struct smc_clc_msg_proposal_prefix *) 340 - ((u8 *)pclc + sizeof(*pclc) + ntohs(pclc->iparea_offset)); 344 + ((u8 *)pclc + sizeof(*pclc) + offset); 341 345 } 342 346 343 347 static inline bool smcr_indicated(int smc_type) ··· 380 376 smc_get_clc_v2_ext(struct smc_clc_msg_proposal *prop) 381 377 { 382 378 struct smc_clc_msg_smcd *prop_smcd = smc_get_clc_msg_smcd(prop); 379 + u16 max_offset; 383 380 384 - if (!prop_smcd || !ntohs(prop_smcd->v2_ext_offset)) 381 + max_offset = offsetof(struct smc_clc_msg_proposal_area, pclc_v2_ext) - 382 + offsetof(struct smc_clc_msg_proposal_area, pclc_smcd) - 383 + offsetofend(struct smc_clc_msg_smcd, v2_ext_offset); 384 + 385 + if (!prop_smcd || !ntohs(prop_smcd->v2_ext_offset) || 386 + ntohs(prop_smcd->v2_ext_offset) > max_offset) 385 387 return NULL; 386 388 387 389 return (struct smc_clc_v2_extension *) ··· 400 390 static inline struct smc_clc_smcd_v2_extension * 401 391 smc_get_clc_smcd_v2_ext(struct smc_clc_v2_extension *prop_v2ext) 402 392 { 393 + u16 max_offset = offsetof(struct smc_clc_msg_proposal_area, pclc_smcd_v2_ext) - 394 + offsetof(struct smc_clc_msg_proposal_area, pclc_v2_ext) - 395 + offsetof(struct smc_clc_v2_extension, hdr) - 396 + offsetofend(struct smc_clnt_opts_area_hdr, smcd_v2_ext_offset); 397 + 403 398 if (!prop_v2ext) 404 399 return NULL; 405 - if (!ntohs(prop_v2ext->hdr.smcd_v2_ext_offset)) 400 + if (!ntohs(prop_v2ext->hdr.smcd_v2_ext_offset) || 401 + ntohs(prop_v2ext->hdr.smcd_v2_ext_offset) > max_offset) 406 402 return NULL; 407 403 408 404 return (struct smc_clc_smcd_v2_extension *)
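The header changes above bound untrusted on-wire offsets by computing, from the struct layout itself, the largest value that still points inside the proposal area. A much-reduced sketch of the same `offsetof`/`offsetofend` arithmetic (the layout here is illustrative, far smaller than the real SMC proposal area):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define offsetofend(TYPE, MEMBER) \
    (offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))

struct hdr_part { uint16_t v2_ext_offset; uint8_t rsvd[6]; };
struct ext_part { uint8_t body[16]; };
struct area {                     /* model of the proposal area */
    struct hdr_part hdr;
    struct ext_part ext;
};

/* The wire offset is counted from just past the offset field itself,
 * so the maximum legal value is the distance from the end of that
 * field to the start of the extension. */
static int v2_ext_offset_valid(uint16_t off)
{
    uint16_t max_off = offsetof(struct area, ext) -
                       offsetofend(struct hdr_part, v2_ext_offset);
    return off <= max_off;
}
```

Anything beyond `max_off` would make the later pointer arithmetic step outside the received buffer, which is exactly what the `smc_get_clc_v2_ext()` check now rejects.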
+7 -2
net/smc/smc_core.c
··· 1818 1818 { 1819 1819 if (smc_link_downing(&lnk->state)) { 1820 1820 trace_smcr_link_down(lnk, __builtin_return_address(0)); 1821 - schedule_work(&lnk->link_down_wrk); 1821 + smcr_link_hold(lnk); /* smcr_link_put in link_down_wrk */ 1822 + if (!schedule_work(&lnk->link_down_wrk)) 1823 + smcr_link_put(lnk); 1822 1824 } 1823 1825 } 1824 1826 ··· 1852 1850 struct smc_link_group *lgr = link->lgr; 1853 1851 1854 1852 if (list_empty(&lgr->list)) 1855 - return; 1853 + goto out; 1856 1854 wake_up_all(&lgr->llc_msg_waiter); 1857 1855 down_write(&lgr->llc_conf_mutex); 1858 1856 smcr_link_down(link); 1859 1857 up_write(&lgr->llc_conf_mutex); 1858 + 1859 + out: 1860 + smcr_link_put(link); /* smcr_link_hold by schedulers of link_down_work */ 1860 1861 } 1861 1862 1862 1863 static int smc_vlan_by_tcpsk_walk(struct net_device *lower_dev,
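The smc_core.c change follows a standard pattern: take a reference before scheduling deferred work, and drop it immediately if `schedule_work()` reports the work was already queued, so exactly one reference is held per pending execution. A userspace model with stand-in types (none of these are kernel APIs):

```c
#include <assert.h>
#include <stdbool.h>

struct link_model {
    int  refs;
    bool work_pending;
};

/* Like schedule_work(): returns false if already queued. */
static bool fake_schedule_work(struct link_model *l)
{
    if (l->work_pending)
        return false;
    l->work_pending = true;
    return true;
}

static void link_hold(struct link_model *l) { l->refs++; }
static void link_put(struct link_model *l)  { l->refs--; }

static void link_down(struct link_model *l)
{
    link_hold(l);                /* put in the work function */
    if (!fake_schedule_work(l))
        link_put(l);             /* queued work already holds a ref */
}

static void work_fn(struct link_model *l)
{
    l->work_pending = false;
    /* ... handle the link-down event ... */
    link_put(l);                 /* matches link_hold in link_down() */
}
```

Without the conditional put, every extra `link_down()` while the work is pending would leak a reference; without the hold, the work could run after the last reference is gone.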
+1 -1
net/tls/tls_sw.c
··· 458 458 459 459 tx_err: 460 460 if (rc < 0 && rc != -EAGAIN) 461 - tls_err_abort(sk, -EBADMSG); 461 + tls_err_abort(sk, rc); 462 462 463 463 return rc; 464 464 }
+2 -2
rust/kernel/net/phy.rs
··· 860 860 /// ]; 861 861 /// #[cfg(MODULE)] 862 862 /// #[no_mangle] 863 - /// static __mod_mdio__phydev_device_table: [::kernel::bindings::mdio_device_id; 2] = _DEVICE_TABLE; 863 + /// static __mod_device_table__mdio__phydev: [::kernel::bindings::mdio_device_id; 2] = _DEVICE_TABLE; 864 864 /// ``` 865 865 #[macro_export] 866 866 macro_rules! module_phy_driver { ··· 883 883 884 884 #[cfg(MODULE)] 885 885 #[no_mangle] 886 - static __mod_mdio__phydev_device_table: [$crate::bindings::mdio_device_id; 886 + static __mod_device_table__mdio__phydev: [$crate::bindings::mdio_device_id; 887 887 $crate::module_phy_driver!(@count_devices $($dev),+) + 1] = _DEVICE_TABLE; 888 888 }; 889 889
+16 -2
rust/kernel/workqueue.rs
··· 519 519 impl{T} HasWork<Self> for ClosureWork<T> { self.work } 520 520 } 521 521 522 - // SAFETY: TODO. 522 + // SAFETY: The `__enqueue` implementation in RawWorkItem uses a `work_struct` initialized with the 523 + // `run` method of this trait as the function pointer because: 524 + // - `__enqueue` gets the `work_struct` from the `Work` field, using `T::raw_get_work`. 525 + // - The only safe way to create a `Work` object is through `Work::new`. 526 + // - `Work::new` makes sure that `T::Pointer::run` is passed to `init_work_with_key`. 527 + // - Finally `Work` and `RawWorkItem` guarantee that the correct `Work` field 528 + // will be used because of the ID const generic bound. This makes sure that `T::raw_get_work` 529 + // uses the correct offset for the `Work` field, and `Work::new` picks the correct 530 + // implementation of `WorkItemPointer` for `Arc<T>`. 523 531 unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Arc<T> 524 532 where 525 533 T: WorkItem<ID, Pointer = Self>, ··· 545 537 } 546 538 } 547 539 548 - // SAFETY: TODO. 540 + // SAFETY: The `work_struct` raw pointer is guaranteed to be valid for the duration of the call to 541 + // the closure because we get it from an `Arc`, which means that the ref count will be at least 1, 542 + // and we don't drop the `Arc` ourselves. If `queue_work_on` returns true, it is further guaranteed 543 + // to be valid until a call to the function pointer in `work_struct` because we leak the memory it 544 + // points to, and only reclaim it if the closure returns false, or in `WorkItemPointer::run`, which 545 + // is what the function pointer in the `work_struct` must be pointing to, according to the safety 546 + // requirements of `WorkItemPointer`. 549 547 unsafe impl<T, const ID: u64> RawWorkItem<ID> for Arc<T> 550 548 where 551 549 T: WorkItem<ID, Pointer = Self>,
+2 -2
scripts/mksysmap
··· 26 26 # (do not forget a space before each pattern) 27 27 28 28 # local symbols for ARM, MIPS, etc. 29 - / \\$/d 29 + / \$/d 30 30 31 31 # local labels, .LBB, .Ltmpxxx, .L__unnamed_xx, .LASANPC, etc. 32 32 / \.L/d ··· 39 39 / __pi_\.L/d 40 40 41 41 # arm64 local symbols in non-VHE KVM namespace 42 - / __kvm_nvhe_\\$/d 42 + / __kvm_nvhe_\$/d 43 43 / __kvm_nvhe_\.L/d 44 44 45 45 # lld arm/aarch64/mips thunks
+17 -19
scripts/mod/file2alias.c
··· 132 132 * based at address m. 133 133 */ 134 134 #define DEF_FIELD(m, devid, f) \ 135 - typeof(((struct devid *)0)->f) f = TO_NATIVE(*(typeof(f) *)((m) + OFF_##devid##_##f)) 135 + typeof(((struct devid *)0)->f) f = \ 136 + get_unaligned_native((typeof(f) *)((m) + OFF_##devid##_##f)) 136 137 137 138 /* Define a variable f that holds the address of field f of struct devid 138 139 * based at address m. Due to the way typeof works, for a field of type ··· 601 600 static void do_pcmcia_entry(struct module *mod, void *symval) 602 601 { 603 602 char alias[256] = {}; 604 - unsigned int i; 603 + 605 604 DEF_FIELD(symval, pcmcia_device_id, match_flags); 606 605 DEF_FIELD(symval, pcmcia_device_id, manf_id); 607 606 DEF_FIELD(symval, pcmcia_device_id, card_id); ··· 609 608 DEF_FIELD(symval, pcmcia_device_id, function); 610 609 DEF_FIELD(symval, pcmcia_device_id, device_no); 611 610 DEF_FIELD_ADDR(symval, pcmcia_device_id, prod_id_hash); 612 - 613 - for (i=0; i<4; i++) { 614 - (*prod_id_hash)[i] = TO_NATIVE((*prod_id_hash)[i]); 615 - } 616 611 617 612 ADD(alias, "m", match_flags & PCMCIA_DEV_ID_MATCH_MANF_ID, 618 613 manf_id); ··· 620 623 function); 621 624 ADD(alias, "pfn", match_flags & PCMCIA_DEV_ID_MATCH_DEVICE_NO, 622 625 device_no); 623 - ADD(alias, "pa", match_flags & PCMCIA_DEV_ID_MATCH_PROD_ID1, (*prod_id_hash)[0]); 624 - ADD(alias, "pb", match_flags & PCMCIA_DEV_ID_MATCH_PROD_ID2, (*prod_id_hash)[1]); 625 - ADD(alias, "pc", match_flags & PCMCIA_DEV_ID_MATCH_PROD_ID3, (*prod_id_hash)[2]); 626 - ADD(alias, "pd", match_flags & PCMCIA_DEV_ID_MATCH_PROD_ID4, (*prod_id_hash)[3]); 626 + ADD(alias, "pa", match_flags & PCMCIA_DEV_ID_MATCH_PROD_ID1, 627 + get_unaligned_native(*prod_id_hash + 0)); 628 + ADD(alias, "pb", match_flags & PCMCIA_DEV_ID_MATCH_PROD_ID2, 629 + get_unaligned_native(*prod_id_hash + 1)); 630 + ADD(alias, "pc", match_flags & PCMCIA_DEV_ID_MATCH_PROD_ID3, 631 + get_unaligned_native(*prod_id_hash + 2)); 632 + ADD(alias, "pd", match_flags & 
PCMCIA_DEV_ID_MATCH_PROD_ID4, 633 + get_unaligned_native(*prod_id_hash + 3)); 627 634 628 635 module_alias_printf(mod, true, "pcmcia:%s", alias); 629 636 } ··· 655 654 { 656 655 unsigned int i; 657 656 658 - for (i = min / BITS_PER_LONG; i < max / BITS_PER_LONG + 1; i++) 659 - arr[i] = TO_NATIVE(arr[i]); 660 - for (i = min; i < max; i++) 661 - if (arr[i / BITS_PER_LONG] & (1ULL << (i%BITS_PER_LONG))) 657 + for (i = min; i <= max; i++) 658 + if (get_unaligned_native(arr + i / BITS_PER_LONG) & 659 + (1ULL << (i % BITS_PER_LONG))) 662 660 sprintf(alias + strlen(alias), "%X,*", i); 663 661 } 664 662 ··· 812 812 * Each byte of the guid will be represented by two hex characters 813 813 * in the name. 814 814 */ 815 - 816 815 static void do_vmbus_entry(struct module *mod, void *symval) 817 816 { 818 - int i; 819 817 DEF_FIELD_ADDR(symval, hv_vmbus_device_id, guid); 820 - char guid_name[(sizeof(*guid) + 1) * 2]; 818 + char guid_name[sizeof(*guid) * 2 + 1]; 821 819 822 - for (i = 0; i < (sizeof(*guid) * 2); i += 2) 823 - sprintf(&guid_name[i], "%02x", TO_NATIVE((guid->b)[i/2])); 820 + for (int i = 0; i < sizeof(*guid); i++) 821 + sprintf(&guid_name[i * 2], "%02x", guid->b[i]); 824 822 825 823 module_alias_printf(mod, false, "vmbus:%s", guid_name); 826 824 }
+21 -20
scripts/mod/modpost.c
··· 155 155 /* A list of all modules we processed */ 156 156 LIST_HEAD(modules); 157 157 158 - static struct module *find_module(const char *modname) 158 + static struct module *find_module(const char *filename, const char *modname) 159 159 { 160 160 struct module *mod; 161 161 162 162 list_for_each_entry(mod, &modules, list) { 163 - if (strcmp(mod->name, modname) == 0) 163 + if (!strcmp(mod->dump_file, filename) && 164 + !strcmp(mod->name, modname)) 164 165 return mod; 165 166 } 166 167 return NULL; ··· 1138 1137 { 1139 1138 switch (r_type) { 1140 1139 case R_386_32: 1141 - return TO_NATIVE(*location); 1140 + return get_unaligned_native(location); 1142 1141 case R_386_PC32: 1143 - return TO_NATIVE(*location) + 4; 1142 + return get_unaligned_native(location) + 4; 1144 1143 } 1145 1144 1146 1145 return (Elf_Addr)(-1); ··· 1161 1160 switch (r_type) { 1162 1161 case R_ARM_ABS32: 1163 1162 case R_ARM_REL32: 1164 - inst = TO_NATIVE(*(uint32_t *)loc); 1163 + inst = get_unaligned_native((uint32_t *)loc); 1165 1164 return inst + sym->st_value; 1166 1165 case R_ARM_MOVW_ABS_NC: 1167 1166 case R_ARM_MOVT_ABS: 1168 - inst = TO_NATIVE(*(uint32_t *)loc); 1167 + inst = get_unaligned_native((uint32_t *)loc); 1169 1168 offset = sign_extend32(((inst & 0xf0000) >> 4) | (inst & 0xfff), 1170 1169 15); 1171 1170 return offset + sym->st_value; 1172 1171 case R_ARM_PC24: 1173 1172 case R_ARM_CALL: 1174 1173 case R_ARM_JUMP24: 1175 - inst = TO_NATIVE(*(uint32_t *)loc); 1174 + inst = get_unaligned_native((uint32_t *)loc); 1176 1175 offset = sign_extend32((inst & 0x00ffffff) << 2, 25); 1177 1176 return offset + sym->st_value + 8; 1178 1177 case R_ARM_THM_MOVW_ABS_NC: 1179 1178 case R_ARM_THM_MOVT_ABS: 1180 - upper = TO_NATIVE(*(uint16_t *)loc); 1181 - lower = TO_NATIVE(*((uint16_t *)loc + 1)); 1179 + upper = get_unaligned_native((uint16_t *)loc); 1180 + lower = get_unaligned_native((uint16_t *)loc + 1); 1182 1181 offset = sign_extend32(((upper & 0x000f) << 12) | 1183 1182 ((upper & 0x0400) 
<< 1) | 1184 1183 ((lower & 0x7000) >> 4) | ··· 1195 1194 * imm11 = lower[10:0] 1196 1195 * imm32 = SignExtend(S:J2:J1:imm6:imm11:'0') 1197 1196 */ 1198 - upper = TO_NATIVE(*(uint16_t *)loc); 1199 - lower = TO_NATIVE(*((uint16_t *)loc + 1)); 1197 + upper = get_unaligned_native((uint16_t *)loc); 1198 + lower = get_unaligned_native((uint16_t *)loc + 1); 1200 1199 1201 1200 sign = (upper >> 10) & 1; 1202 1201 j1 = (lower >> 13) & 1; ··· 1219 1218 * I2 = NOT(J2 XOR S) 1220 1219 * imm32 = SignExtend(S:I1:I2:imm10:imm11:'0') 1221 1220 */ 1222 - upper = TO_NATIVE(*(uint16_t *)loc); 1223 - lower = TO_NATIVE(*((uint16_t *)loc + 1)); 1221 + upper = get_unaligned_native((uint16_t *)loc); 1222 + lower = get_unaligned_native((uint16_t *)loc + 1); 1224 1223 1225 1224 sign = (upper >> 10) & 1; 1226 1225 j1 = (lower >> 13) & 1; ··· 1241 1240 { 1242 1241 uint32_t inst; 1243 1242 1244 - inst = TO_NATIVE(*location); 1243 + inst = get_unaligned_native(location); 1245 1244 switch (r_type) { 1246 1245 case R_MIPS_LO16: 1247 1246 return inst & 0xffff; ··· 2031 2030 continue; 2032 2031 } 2033 2032 2034 - mod = find_module(modname); 2033 + mod = find_module(fname, modname); 2035 2034 if (!mod) { 2036 2035 mod = new_module(modname, strlen(modname)); 2037 - mod->from_dump = true; 2036 + mod->dump_file = fname; 2038 2037 } 2039 2038 s = sym_add_exported(symname, mod, gpl_only, namespace); 2040 2039 sym_set_crc(s, crc); ··· 2053 2052 struct symbol *sym; 2054 2053 2055 2054 list_for_each_entry(mod, &modules, list) { 2056 - if (mod->from_dump) 2055 + if (mod->dump_file) 2057 2056 continue; 2058 2057 list_for_each_entry(sym, &mod->exported_symbols, list) { 2059 2058 if (trim_unused_exports && !sym->used) ··· 2077 2076 2078 2077 list_for_each_entry(mod, &modules, list) { 2079 2078 2080 - if (mod->from_dump || list_empty(&mod->missing_namespaces)) 2079 + if (mod->dump_file || list_empty(&mod->missing_namespaces)) 2081 2080 continue; 2082 2081 2083 2082 buf_printf(&ns_deps_buf, "%s.ko:", mod->name); 
··· 2195 2194 read_symbols_from_files(files_source); 2196 2195 2197 2196 list_for_each_entry(mod, &modules, list) { 2198 - if (mod->from_dump || mod->is_vmlinux) 2197 + if (mod->dump_file || mod->is_vmlinux) 2199 2198 continue; 2200 2199 2201 2200 check_modname_len(mod); ··· 2206 2205 handle_white_list_exports(unused_exports_white_list); 2207 2206 2208 2207 list_for_each_entry(mod, &modules, list) { 2209 - if (mod->from_dump) 2208 + if (mod->dump_file) 2210 2209 continue; 2211 2210 2212 2211 if (mod->is_vmlinux)
+16 -1
scripts/mod/modpost.h
··· 65 65 #define TO_NATIVE(x) \ 66 66 (target_is_big_endian == host_is_big_endian ? x : bswap(x)) 67 67 68 + #define __get_unaligned_t(type, ptr) ({ \ 69 + const struct { type x; } __attribute__((__packed__)) *__pptr = \ 70 + (typeof(__pptr))(ptr); \ 71 + __pptr->x; \ 72 + }) 73 + 74 + #define get_unaligned(ptr) __get_unaligned_t(typeof(*(ptr)), (ptr)) 75 + 76 + #define get_unaligned_native(ptr) \ 77 + ({ \ 78 + typeof(*(ptr)) _val = get_unaligned(ptr); \ 79 + TO_NATIVE(_val); \ 80 + }) 81 + 68 82 #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0])) 69 83 70 84 #define strstarts(str, prefix) (strncmp(str, prefix, strlen(prefix)) == 0) ··· 109 95 /** 110 96 * struct module - represent a module (vmlinux or *.ko) 111 97 * 98 + * @dump_file: path to the .symvers file if loaded from a file 112 99 * @aliases: list head for module_aliases 113 100 */ 114 101 struct module { 115 102 struct list_head list; 116 103 struct list_head exported_symbols; 117 104 struct list_head unresolved_symbols; 105 + const char *dump_file; 118 106 bool is_gpl_compatible; 119 - bool from_dump; /* true if module was loaded from *.symvers */ 120 107 bool is_vmlinux; 121 108 bool seen; 122 109 bool has_init;
+1 -1
scripts/package/PKGBUILD
··· 103 103 104 104 _package-api-headers() { 105 105 pkgdesc="Kernel headers sanitized for use in userspace" 106 - provides=(linux-api-headers) 106 + provides=(linux-api-headers="${pkgver}") 107 107 conflicts=(linux-api-headers) 108 108 109 109 _prologue
+6
scripts/package/builddeb
··· 63 63 esac 64 64 cp "$(${MAKE} -s -f ${srctree}/Makefile image_name)" "${pdir}/${installed_image_path}" 65 65 66 + if [ "${ARCH}" != um ]; then 67 + install_maint_scripts "${pdir}" 68 + fi 69 + } 70 + 71 + install_maint_scripts () { 66 72 # Install the maintainer scripts 67 73 # Note: hook scripts under /etc/kernel are also executed by official Debian 68 74 # kernel packages, as well as kernel packages built using make-kpkg.
+7
scripts/package/mkdebian
··· 70 70 debarch=sh4$(if_enabled_echo CONFIG_CPU_BIG_ENDIAN eb) 71 71 fi 72 72 ;; 73 + um) 74 + if is_enabled CONFIG_64BIT; then 75 + debarch=amd64 76 + else 77 + debarch=i386 78 + fi 79 + ;; 73 80 esac 74 81 if [ -z "$debarch" ]; then 75 82 debarch=$(dpkg-architecture -qDEB_HOST_ARCH)
+4 -1
scripts/sorttable.h
··· 110 110 111 111 static int orc_sort_cmp(const void *_a, const void *_b) 112 112 { 113 - struct orc_entry *orc_a; 113 + struct orc_entry *orc_a, *orc_b; 114 114 const int *a = g_orc_ip_table + *(int *)_a; 115 115 const int *b = g_orc_ip_table + *(int *)_b; 116 116 unsigned long a_val = orc_ip(a); ··· 128 128 * whitelisted .o files which didn't get objtool generation. 129 129 */ 130 130 orc_a = g_orc_table + (a - g_orc_ip_table); 131 + orc_b = g_orc_table + (b - g_orc_ip_table); 132 + if (orc_a->type == ORC_TYPE_UNDEFINED && orc_b->type == ORC_TYPE_UNDEFINED) 133 + return 0; 131 134 return orc_a->type == ORC_TYPE_UNDEFINED ? -1 : 1; 132 135 } 133 136
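The sorttable fix above addresses comparator consistency: a `qsort()` comparator that returns -1 or 1 for a "terminator" kind but never 0 when both sides are terminators violates the required ordering (it would claim a < b and b < a simultaneously), and the C library may misbehave. A simplified model of the corrected comparator (the entry layout is illustrative, not the real `orc_entry`):

```c
#include <assert.h>
#include <stdlib.h>

enum kind { KIND_UNDEFINED = 0, KIND_REGULAR = 1 };

struct entry { enum kind type; int ip; };

static int entry_cmp(const void *pa, const void *pb)
{
    const struct entry *a = pa, *b = pb;

    if (a->type == KIND_UNDEFINED || b->type == KIND_UNDEFINED) {
        /* the case the patch adds: two terminators compare equal */
        if (a->type == KIND_UNDEFINED && b->type == KIND_UNDEFINED)
            return 0;
        return a->type == KIND_UNDEFINED ? -1 : 1;
    }
    return (a->ip > b->ip) - (a->ip < b->ip);
}
```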
+34 -27
security/selinux/avc.c
··· 174 174 * using a linked list for extended_perms_decision lookup because the list is 175 175 * always small. i.e. less than 5, typically 1 176 176 */ 177 - static struct extended_perms_decision *avc_xperms_decision_lookup(u8 driver, 178 - struct avc_xperms_node *xp_node) 177 + static struct extended_perms_decision * 178 + avc_xperms_decision_lookup(u8 driver, u8 base_perm, 179 + struct avc_xperms_node *xp_node) 179 180 { 180 181 struct avc_xperms_decision_node *xpd_node; 181 182 182 183 list_for_each_entry(xpd_node, &xp_node->xpd_head, xpd_list) { 183 - if (xpd_node->xpd.driver == driver) 184 + if (xpd_node->xpd.driver == driver && 185 + xpd_node->xpd.base_perm == base_perm) 184 186 return &xpd_node->xpd; 185 187 } 186 188 return NULL; ··· 207 205 } 208 206 209 207 static void avc_xperms_allow_perm(struct avc_xperms_node *xp_node, 210 - u8 driver, u8 perm) 208 + u8 driver, u8 base_perm, u8 perm) 211 209 { 212 210 struct extended_perms_decision *xpd; 213 211 security_xperm_set(xp_node->xp.drivers.p, driver); 214 - xpd = avc_xperms_decision_lookup(driver, xp_node); 212 + xp_node->xp.base_perms |= base_perm; 213 + xpd = avc_xperms_decision_lookup(driver, base_perm, xp_node); 215 214 if (xpd && xpd->allowed) 216 215 security_xperm_set(xpd->allowed->p, perm); 217 216 } ··· 248 245 static void avc_copy_xperms_decision(struct extended_perms_decision *dest, 249 246 struct extended_perms_decision *src) 250 247 { 248 + dest->base_perm = src->base_perm; 251 249 dest->driver = src->driver; 252 250 dest->used = src->used; 253 251 if (dest->used & XPERMS_ALLOWED) ··· 276 272 */ 277 273 u8 i = perm >> 5; 278 274 275 + dest->base_perm = src->base_perm; 279 276 dest->used = src->used; 280 277 if (dest->used & XPERMS_ALLOWED) 281 278 dest->allowed->p[i] = src->allowed->p[i]; ··· 362 357 363 358 memcpy(dest->xp.drivers.p, src->xp.drivers.p, sizeof(dest->xp.drivers.p)); 364 359 dest->xp.len = src->xp.len; 360 + dest->xp.base_perms = src->xp.base_perms; 365 361 366 362 /* for each 
source xpd allocate a destination xpd and copy */ 367 363 list_for_each_entry(src_xpd, &src->xpd_head, xpd_list) { ··· 813 807 * @event : Updating event 814 808 * @perms : Permission mask bits 815 809 * @driver: xperm driver information 810 + * @base_perm: the base permission associated with the extended permission 816 811 * @xperm: xperm permissions 817 812 * @ssid: AVC entry source sid 818 813 * @tsid: AVC entry target sid ··· 827 820 * otherwise, this function updates the AVC entry. The original AVC-entry object 828 821 * will release later by RCU. 829 822 */ 830 - static int avc_update_node(u32 event, u32 perms, u8 driver, u8 xperm, u32 ssid, 831 - u32 tsid, u16 tclass, u32 seqno, 832 - struct extended_perms_decision *xpd, 833 - u32 flags) 823 + static int avc_update_node(u32 event, u32 perms, u8 driver, u8 base_perm, 824 + u8 xperm, u32 ssid, u32 tsid, u16 tclass, u32 seqno, 825 + struct extended_perms_decision *xpd, u32 flags) 834 826 { 835 827 u32 hvalue; 836 828 int rc = 0; ··· 886 880 case AVC_CALLBACK_GRANT: 887 881 node->ae.avd.allowed |= perms; 888 882 if (node->ae.xp_node && (flags & AVC_EXTENDED_PERMS)) 889 - avc_xperms_allow_perm(node->ae.xp_node, driver, xperm); 883 + avc_xperms_allow_perm(node->ae.xp_node, driver, base_perm, xperm); 890 884 break; 891 885 case AVC_CALLBACK_TRY_REVOKE: 892 886 case AVC_CALLBACK_REVOKE: ··· 993 987 avc_insert(ssid, tsid, tclass, avd, xp_node); 994 988 } 995 989 996 - static noinline int avc_denied(u32 ssid, u32 tsid, 997 - u16 tclass, u32 requested, 998 - u8 driver, u8 xperm, unsigned int flags, 999 - struct av_decision *avd) 990 + static noinline int avc_denied(u32 ssid, u32 tsid, u16 tclass, u32 requested, 991 + u8 driver, u8 base_perm, u8 xperm, 992 + unsigned int flags, struct av_decision *avd) 1000 993 { 1001 994 if (flags & AVC_STRICT) 1002 995 return -EACCES; ··· 1004 999 !(avd->flags & AVD_FLAGS_PERMISSIVE)) 1005 1000 return -EACCES; 1006 1001 1007 - avc_update_node(AVC_CALLBACK_GRANT, requested, driver, 1002 
+ avc_update_node(AVC_CALLBACK_GRANT, requested, driver, base_perm, 1008 1003 xperm, ssid, tsid, tclass, avd->seqno, NULL, flags); 1009 1004 return 0; 1010 1005 } ··· 1017 1012 * driver field is used to specify which set contains the permission. 1018 1013 */ 1019 1014 int avc_has_extended_perms(u32 ssid, u32 tsid, u16 tclass, u32 requested, 1020 - u8 driver, u8 xperm, struct common_audit_data *ad) 1015 + u8 driver, u8 base_perm, u8 xperm, 1016 + struct common_audit_data *ad) 1021 1017 { 1022 1018 struct avc_node *node; 1023 1019 struct av_decision avd; ··· 1053 1047 local_xpd.auditallow = &auditallow; 1054 1048 local_xpd.dontaudit = &dontaudit; 1055 1049 1056 - xpd = avc_xperms_decision_lookup(driver, xp_node); 1050 + xpd = avc_xperms_decision_lookup(driver, base_perm, xp_node); 1057 1051 if (unlikely(!xpd)) { 1058 1052 /* 1059 1053 * Compute the extended_perms_decision only if the driver 1060 - * is flagged 1054 + * is flagged and the base permission is known. 1061 1055 */ 1062 - if (!security_xperm_test(xp_node->xp.drivers.p, driver)) { 1056 + if (!security_xperm_test(xp_node->xp.drivers.p, driver) || 1057 + !(xp_node->xp.base_perms & base_perm)) { 1063 1058 avd.allowed &= ~requested; 1064 1059 goto decision; 1065 1060 } 1066 1061 rcu_read_unlock(); 1067 - security_compute_xperms_decision(ssid, tsid, tclass, 1068 - driver, &local_xpd); 1062 + security_compute_xperms_decision(ssid, tsid, tclass, driver, 1063 + base_perm, &local_xpd); 1069 1064 rcu_read_lock(); 1070 - avc_update_node(AVC_CALLBACK_ADD_XPERMS, requested, 1071 - driver, xperm, ssid, tsid, tclass, avd.seqno, 1065 + avc_update_node(AVC_CALLBACK_ADD_XPERMS, requested, driver, 1066 + base_perm, xperm, ssid, tsid, tclass, avd.seqno, 1072 1067 &local_xpd, 0); 1073 1068 } else { 1074 1069 avc_quick_copy_xperms_decision(xperm, &local_xpd, xpd); ··· 1082 1075 decision: 1083 1076 denied = requested & ~(avd.allowed); 1084 1077 if (unlikely(denied)) 1085 - rc = avc_denied(ssid, tsid, tclass, requested, 1086 - 
driver, xperm, AVC_EXTENDED_PERMS, &avd); 1078 + rc = avc_denied(ssid, tsid, tclass, requested, driver, 1079 + base_perm, xperm, AVC_EXTENDED_PERMS, &avd); 1087 1080 1088 1081 rcu_read_unlock(); 1089 1082 ··· 1117 1110 avc_compute_av(ssid, tsid, tclass, avd, &xp_node); 1118 1111 denied = requested & ~(avd->allowed); 1119 1112 if (unlikely(denied)) 1120 - return avc_denied(ssid, tsid, tclass, requested, 0, 0, 1113 + return avc_denied(ssid, tsid, tclass, requested, 0, 0, 0, 1121 1114 flags, avd); 1122 1115 return 0; 1123 1116 } ··· 1165 1158 rcu_read_unlock(); 1166 1159 1167 1160 if (unlikely(denied)) 1168 - return avc_denied(ssid, tsid, tclass, requested, 0, 0, 1161 + return avc_denied(ssid, tsid, tclass, requested, 0, 0, 0, 1169 1162 flags, avd); 1170 1163 return 0; 1171 1164 }
+3 -3
security/selinux/hooks.c
··· 3688 3688 return 0; 3689 3689 3690 3690 isec = inode_security(inode); 3691 - rc = avc_has_extended_perms(ssid, isec->sid, isec->sclass, 3692 - requested, driver, xperm, &ad); 3691 + rc = avc_has_extended_perms(ssid, isec->sid, isec->sclass, requested, 3692 + driver, AVC_EXT_IOCTL, xperm, &ad); 3693 3693 out: 3694 3694 return rc; 3695 3695 } ··· 5952 5952 xperm = nlmsg_type & 0xff; 5953 5953 5954 5954 return avc_has_extended_perms(current_sid(), sksec->sid, sksec->sclass, 5955 - perms, driver, xperm, &ad); 5955 + perms, driver, AVC_EXT_NLMSG, xperm, &ad); 5956 5956 } 5957 5957 5958 5958 static int selinux_netlink_send(struct sock *sk, struct sk_buff *skb)
+4 -1
security/selinux/include/avc.h
··· 136 136 int avc_has_perm(u32 ssid, u32 tsid, u16 tclass, u32 requested, 137 137 struct common_audit_data *auditdata); 138 138 139 + #define AVC_EXT_IOCTL (1 << 0) /* Cache entry for an ioctl extended permission */ 140 + #define AVC_EXT_NLMSG (1 << 1) /* Cache entry for an nlmsg extended permission */ 139 141 int avc_has_extended_perms(u32 ssid, u32 tsid, u16 tclass, u32 requested, 140 - u8 driver, u8 perm, struct common_audit_data *ad); 142 + u8 driver, u8 base_perm, u8 perm, 143 + struct common_audit_data *ad); 141 144 142 145 u32 avc_policy_seqno(void); 143 146
+3
security/selinux/include/security.h
··· 239 239 struct extended_perms_decision { 240 240 u8 used; 241 241 u8 driver; 242 + u8 base_perm; 242 243 struct extended_perms_data *allowed; 243 244 struct extended_perms_data *auditallow; 244 245 struct extended_perms_data *dontaudit; ··· 247 246 248 247 struct extended_perms { 249 248 u16 len; /* length associated decision chain */ 249 + u8 base_perms; /* which base permissions are covered */ 250 250 struct extended_perms_data drivers; /* flag drivers that are used */ 251 251 }; 252 252 ··· 259 257 struct extended_perms *xperms); 260 258 261 259 void security_compute_xperms_decision(u32 ssid, u32 tsid, u16 tclass, u8 driver, 260 + u8 base_perm, 262 261 struct extended_perms_decision *xpermd); 263 262 264 263 void security_compute_av_user(u32 ssid, u32 tsid, u16 tclass,
+27 -9
security/selinux/ss/services.c
··· 582 582 } 583 583 584 584 /* 585 - * Flag which drivers have permissions. 585 + * Flag which drivers have permissions and which base permissions are covered. 586 586 */ 587 587 void services_compute_xperms_drivers( 588 588 struct extended_perms *xperms, ··· 592 592 593 593 switch (node->datum.u.xperms->specified) { 594 594 case AVTAB_XPERMS_IOCTLDRIVER: 595 + xperms->base_perms |= AVC_EXT_IOCTL; 595 596 /* if one or more driver has all permissions allowed */ 596 597 for (i = 0; i < ARRAY_SIZE(xperms->drivers.p); i++) 597 598 xperms->drivers.p[i] |= node->datum.u.xperms->perms.p[i]; 598 599 break; 599 600 case AVTAB_XPERMS_IOCTLFUNCTION: 601 + xperms->base_perms |= AVC_EXT_IOCTL; 602 + /* if allowing permissions within a driver */ 603 + security_xperm_set(xperms->drivers.p, 604 + node->datum.u.xperms->driver); 605 + break; 600 606 case AVTAB_XPERMS_NLMSG: 607 + xperms->base_perms |= AVC_EXT_NLMSG; 601 608 /* if allowing permissions within a driver */ 602 609 security_xperm_set(xperms->drivers.p, 603 610 node->datum.u.xperms->driver); ··· 638 631 avd->auditallow = 0; 639 632 avd->auditdeny = 0xffffffff; 640 633 if (xperms) { 641 - memset(&xperms->drivers, 0, sizeof(xperms->drivers)); 642 - xperms->len = 0; 634 + memset(xperms, 0, sizeof(*xperms)); 643 635 } 644 636 645 637 if (unlikely(!tclass || tclass > policydb->p_classes.nprim)) { ··· 975 969 { 976 970 switch (node->datum.u.xperms->specified) { 977 971 case AVTAB_XPERMS_IOCTLFUNCTION: 978 - case AVTAB_XPERMS_NLMSG: 979 - if (xpermd->driver != node->datum.u.xperms->driver) 972 + if (xpermd->base_perm != AVC_EXT_IOCTL || 973 + xpermd->driver != node->datum.u.xperms->driver) 980 974 return; 981 975 break; 982 976 case AVTAB_XPERMS_IOCTLDRIVER: 983 - if (!security_xperm_test(node->datum.u.xperms->perms.p, 984 - xpermd->driver)) 977 + if (xpermd->base_perm != AVC_EXT_IOCTL || 978 + !security_xperm_test(node->datum.u.xperms->perms.p, 979 + xpermd->driver)) 980 + return; 981 + break; 982 + case AVTAB_XPERMS_NLMSG: 
983 + if (xpermd->base_perm != AVC_EXT_NLMSG || 984 + xpermd->driver != node->datum.u.xperms->driver) 985 985 return; 986 986 break; 987 987 default: 988 - BUG(); 988 + pr_warn_once( 989 + "SELinux: unknown extended permission (%u) will be ignored\n", 990 + node->datum.u.xperms->specified); 991 + return; 989 992 } 990 993 991 994 if (node->key.specified == AVTAB_XPERMS_ALLOWED) { ··· 1013 998 &node->datum.u.xperms->perms, 1014 999 xpermd->dontaudit); 1015 1000 } else { 1016 - BUG(); 1001 + pr_warn_once("SELinux: unknown specified key (%u)\n", 1002 + node->key.specified); 1017 1003 } 1018 1004 } 1019 1005 ··· 1022 1006 u32 tsid, 1023 1007 u16 orig_tclass, 1024 1008 u8 driver, 1009 + u8 base_perm, 1025 1010 struct extended_perms_decision *xpermd) 1026 1011 { 1027 1012 struct selinux_policy *policy; ··· 1036 1019 struct ebitmap_node *snode, *tnode; 1037 1020 unsigned int i, j; 1038 1021 1022 + xpermd->base_perm = base_perm; 1039 1023 xpermd->driver = driver; 1040 1024 xpermd->used = 0; 1041 1025 memset(xpermd->allowed->p, 0, sizeof(xpermd->allowed->p));
+26 -17
sound/core/compress_offload.c
··· 1025 1025 static int snd_compr_task_new(struct snd_compr_stream *stream, struct snd_compr_task *utask) 1026 1026 { 1027 1027 struct snd_compr_task_runtime *task; 1028 - int retval; 1028 + int retval, fd_i, fd_o; 1029 1029 1030 1030 if (stream->runtime->total_tasks >= stream->runtime->fragments) 1031 1031 return -EBUSY; ··· 1039 1039 retval = stream->ops->task_create(stream, task); 1040 1040 if (retval < 0) 1041 1041 goto cleanup; 1042 - utask->input_fd = dma_buf_fd(task->input, O_WRONLY|O_CLOEXEC); 1043 - if (utask->input_fd < 0) { 1044 - retval = utask->input_fd; 1042 + /* similar functionality as in dma_buf_fd(), but ensure that both 1043 + file descriptors are allocated before fd_install() */ 1044 + if (!task->input || !task->input->file || !task->output || !task->output->file) { 1045 + retval = -EINVAL; 1045 1046 goto cleanup; 1046 1047 } 1047 - utask->output_fd = dma_buf_fd(task->output, O_RDONLY|O_CLOEXEC); 1048 - if (utask->output_fd < 0) { 1049 - retval = utask->output_fd; 1048 + fd_i = get_unused_fd_flags(O_WRONLY|O_CLOEXEC); 1049 + if (fd_i < 0) 1050 + goto cleanup; 1051 + fd_o = get_unused_fd_flags(O_RDONLY|O_CLOEXEC); 1052 + if (fd_o < 0) { 1053 + put_unused_fd(fd_i); 1050 1054 goto cleanup; 1051 1055 } 1052 1056 /* keep dmabuf reference until freed with task free ioctl */ 1053 - dma_buf_get(utask->input_fd); 1054 - dma_buf_get(utask->output_fd); 1057 + get_dma_buf(task->input); 1058 + get_dma_buf(task->output); 1059 + fd_install(fd_i, task->input->file); 1060 + fd_install(fd_o, task->output->file); 1061 + utask->input_fd = fd_i; 1062 + utask->output_fd = fd_o; 1055 1063 list_add_tail(&task->list, &stream->runtime->tasks); 1056 1064 stream->runtime->total_tasks++; 1057 1065 return 0; ··· 1077 1069 return -EPERM; 1078 1070 task = memdup_user((void __user *)arg, sizeof(*task)); 1079 1071 if (IS_ERR(task)) 1080 - return PTR_ERR(no_free_ptr(task)); 1072 + return PTR_ERR(task); 1081 1073 retval = snd_compr_task_new(stream, task); 1082 1074 if (retval >= 
0) 1083 1075 if (copy_to_user((void __user *)arg, task, sizeof(*task))) ··· 1138 1130 return -EPERM; 1139 1131 task = memdup_user((void __user *)arg, sizeof(*task)); 1140 1132 if (IS_ERR(task)) 1141 - return PTR_ERR(no_free_ptr(task)); 1133 + return PTR_ERR(task); 1142 1134 retval = snd_compr_task_start(stream, task); 1143 1135 if (retval >= 0) 1144 1136 if (copy_to_user((void __user *)arg, task, sizeof(*task))) ··· 1182 1174 static int snd_compr_task_seq(struct snd_compr_stream *stream, unsigned long arg, 1183 1175 snd_compr_seq_func_t fcn) 1184 1176 { 1185 - struct snd_compr_task_runtime *task; 1177 + struct snd_compr_task_runtime *task, *temp; 1186 1178 __u64 seqno; 1187 1179 int retval; 1188 1180 1189 1181 if (stream->runtime->state != SNDRV_PCM_STATE_SETUP) 1190 1182 return -EPERM; 1191 - retval = get_user(seqno, (__u64 __user *)arg); 1192 - if (retval < 0) 1193 - return retval; 1183 + retval = copy_from_user(&seqno, (__u64 __user *)arg, sizeof(seqno)); 1184 + if (retval) 1185 + return -EFAULT; 1194 1186 retval = 0; 1195 1187 if (seqno == 0) { 1196 - list_for_each_entry_reverse(task, &stream->runtime->tasks, list) 1188 + list_for_each_entry_safe_reverse(task, temp, &stream->runtime->tasks, list) 1197 1189 fcn(stream, task); 1198 1190 } else { 1199 1191 task = snd_compr_find_task(stream, seqno); ··· 1229 1221 return -EPERM; 1230 1222 status = memdup_user((void __user *)arg, sizeof(*status)); 1231 1223 if (IS_ERR(status)) 1232 - return PTR_ERR(no_free_ptr(status)); 1224 + return PTR_ERR(status); 1233 1225 retval = snd_compr_task_status(stream, status); 1234 1226 if (retval >= 0) 1235 1227 if (copy_to_user((void __user *)arg, status, sizeof(*status))) ··· 1255 1247 } 1256 1248 EXPORT_SYMBOL_GPL(snd_compr_task_finished); 1257 1249 1250 + MODULE_IMPORT_NS("DMA_BUF"); 1258 1251 #endif /* CONFIG_SND_COMPRESS_ACCEL */ 1259 1252 1260 1253 static long snd_compr_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+1 -1
sound/core/memalloc.c
··· 505 505 if (!p) 506 506 return NULL; 507 507 dmab->addr = dma_map_single(dmab->dev.dev, p, size, DMA_BIDIRECTIONAL); 508 - if (dmab->addr == DMA_MAPPING_ERROR) { 508 + if (dma_mapping_error(dmab->dev.dev, dmab->addr)) { 509 509 do_free_pages(dmab->area, size, true); 510 510 return NULL; 511 511 }
+2
sound/core/seq/oss/seq_oss_synth.c
··· 66 66 }; 67 67 68 68 static DEFINE_SPINLOCK(register_lock); 69 + static DEFINE_MUTEX(sysex_mutex); 69 70 70 71 /* 71 72 * prototypes ··· 498 497 if (!info) 499 498 return -ENXIO; 500 499 500 + guard(mutex)(&sysex_mutex); 501 501 sysex = info->sysex; 502 502 if (sysex == NULL) { 503 503 sysex = kzalloc(sizeof(*sysex), GFP_KERNEL);
+10 -4
sound/core/seq/seq_clientmgr.c
··· 1275 1275 if (client->type != client_info->type) 1276 1276 return -EINVAL; 1277 1277 1278 - /* check validity of midi_version field */ 1279 - if (client->user_pversion >= SNDRV_PROTOCOL_VERSION(1, 0, 3) && 1280 - client_info->midi_version > SNDRV_SEQ_CLIENT_UMP_MIDI_2_0) 1281 - return -EINVAL; 1278 + if (client->user_pversion >= SNDRV_PROTOCOL_VERSION(1, 0, 3)) { 1279 + /* check validity of midi_version field */ 1280 + if (client_info->midi_version > SNDRV_SEQ_CLIENT_UMP_MIDI_2_0) 1281 + return -EINVAL; 1282 + 1283 + /* check if UMP is supported in kernel */ 1284 + if (!IS_ENABLED(CONFIG_SND_SEQ_UMP) && 1285 + client_info->midi_version > 0) 1286 + return -EINVAL; 1287 + } 1282 1288 1283 1289 /* fill the info fields */ 1284 1290 if (client_info->name[0])
+1 -1
sound/core/ump.c
··· 1244 1244 1245 1245 num = 0; 1246 1246 for (i = 0; i < SNDRV_UMP_MAX_GROUPS; i++) 1247 - if ((group_maps & (1U << i)) && ump->groups[i].valid) 1247 + if (group_maps & (1U << i)) 1248 1248 ump->legacy_mapping[num++] = i; 1249 1249 1250 1250 return num;
+1
sound/pci/hda/patch_realtek.c
··· 11009 11009 SND_PCI_QUIRK(0xf111, 0x0001, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 11010 11010 SND_PCI_QUIRK(0xf111, 0x0006, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 11011 11011 SND_PCI_QUIRK(0xf111, 0x0009, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 11012 + SND_PCI_QUIRK(0xf111, 0x000c, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 11012 11013 11013 11014 #if 0 11014 11015 /* Below is a quirk table taken from the old code.
+4
sound/pci/hda/tas2781_hda_i2c.c
··· 142 142 } 143 143 sub = acpi_get_subsystem_id(ACPI_HANDLE(physdev)); 144 144 if (IS_ERR(sub)) { 145 + /* No subsys id in older tas2563 projects. */ 146 + if (!strncmp(hid, "INT8866", sizeof("INT8866"))) 147 + goto end_2563; 145 148 dev_err(p->dev, "Failed to get SUBSYS ID.\n"); 146 149 ret = PTR_ERR(sub); 147 150 goto err; ··· 167 164 p->speaker_id = NULL; 168 165 } 169 166 167 + end_2563: 170 168 acpi_dev_free_resource_list(&resources); 171 169 strscpy(p->dev_name, hid, sizeof(p->dev_name)); 172 170 put_device(physdev);
+1 -1
sound/sh/sh_dac_audio.c
··· 163 163 /* channel is not used (interleaved data) */ 164 164 struct snd_sh_dac *chip = snd_pcm_substream_chip(substream); 165 165 166 - if (copy_from_iter(chip->data_buffer + pos, src, count) != count) 166 + if (copy_from_iter(chip->data_buffer + pos, count, src) != count) 167 167 return -EFAULT; 168 168 chip->buffer_end = chip->data_buffer + pos + count; 169 169
+16 -1
sound/soc/amd/ps/pci-ps.c
··· 375 375 { 376 376 struct acpi_device *pdm_dev; 377 377 const union acpi_object *obj; 378 + acpi_handle handle; 379 + acpi_integer dmic_status; 378 380 u32 config; 379 381 bool is_dmic_dev = false; 380 382 bool is_sdw_dev = false; 383 + bool wov_en, dmic_en; 381 384 int ret; 385 + 386 + /* IF WOV entry not found, enable dmic based on acp-audio-device-type entry*/ 387 + wov_en = true; 388 + dmic_en = false; 382 389 383 390 config = readl(acp_data->acp63_base + ACP_PIN_CONFIG); 384 391 switch (config) { ··· 419 412 if (!acpi_dev_get_property(pdm_dev, "acp-audio-device-type", 420 413 ACPI_TYPE_INTEGER, &obj) && 421 414 obj->integer.value == ACP_DMIC_DEV) 422 - is_dmic_dev = true; 415 + dmic_en = true; 423 416 } 417 + 418 + handle = ACPI_HANDLE(&pci->dev); 419 + ret = acpi_evaluate_integer(handle, "_WOV", NULL, &dmic_status); 420 + if (!ACPI_FAILURE(ret)) 421 + wov_en = dmic_status; 424 422 } 423 + 424 + if (dmic_en && wov_en) 425 + is_dmic_dev = true; 425 426 426 427 if (acp_data->is_sdw_config) { 427 428 ret = acp_scan_sdw_devices(&pci->dev, ACP63_SDW_ADDR);
+6 -1
sound/soc/codecs/rt722-sdca.c
··· 1468 1468 0x008d); 1469 1469 /* check HP calibration FSM status */ 1470 1470 for (loop_check = 0; loop_check < chk_cnt; loop_check++) { 1471 + usleep_range(10000, 11000); 1471 1472 ret = rt722_sdca_index_read(rt722, RT722_VENDOR_CALI, 1472 1473 RT722_DAC_DC_CALI_CTL3, &calib_status); 1473 - if (ret < 0 || loop_check == chk_cnt) 1474 + if (ret < 0) 1474 1475 dev_dbg(&rt722->slave->dev, "calibration failed!, ret=%d\n", ret); 1475 1476 if ((calib_status & 0x0040) == 0x0) 1476 1477 break; 1477 1478 } 1479 + 1480 + if (loop_check == chk_cnt) 1481 + dev_dbg(&rt722->slave->dev, "%s, calibration time-out!\n", __func__); 1482 + 1478 1483 /* Set ADC09 power entity floating control */ 1479 1484 rt722_sdca_index_write(rt722, RT722_VENDOR_HDA_CTL, RT722_ADC0A_08_PDE_FLOAT_CTL, 1480 1485 0x2a12);
+1 -1
sound/soc/fsl/Kconfig
··· 29 29 config SND_SOC_FSL_MQS 30 30 tristate "Medium Quality Sound (MQS) module support" 31 31 depends on SND_SOC_FSL_SAI 32 + depends on IMX_SCMI_MISC_DRV || !IMX_SCMI_MISC_DRV 32 33 select REGMAP_MMIO 33 - select IMX_SCMI_MISC_DRV if IMX_SCMI_MISC_EXT !=n 34 34 help 35 35 Say Y if you want to add Medium Quality Sound (MQS) 36 36 support for the Freescale CPUs.
+20 -3
sound/soc/intel/boards/sof_sdw.c
··· 632 632 .callback = sof_sdw_quirk_cb, 633 633 .matches = { 634 634 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 635 - DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "233C") 635 + DMI_MATCH(DMI_PRODUCT_NAME, "21QB") 636 636 }, 637 637 /* Note this quirk excludes the CODEC mic */ 638 638 .driver_data = (void *)(SOC_SDW_CODEC_MIC), ··· 641 641 .callback = sof_sdw_quirk_cb, 642 642 .matches = { 643 643 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 644 - DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "233B") 644 + DMI_MATCH(DMI_PRODUCT_NAME, "21QA") 645 645 }, 646 - .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS), 646 + /* Note this quirk excludes the CODEC mic */ 647 + .driver_data = (void *)(SOC_SDW_CODEC_MIC), 648 + }, 649 + { 650 + .callback = sof_sdw_quirk_cb, 651 + .matches = { 652 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 653 + DMI_MATCH(DMI_PRODUCT_NAME, "21Q6") 654 + }, 655 + .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC), 656 + }, 657 + { 658 + .callback = sof_sdw_quirk_cb, 659 + .matches = { 660 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 661 + DMI_MATCH(DMI_PRODUCT_NAME, "21Q7") 662 + }, 663 + .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC), 647 664 }, 648 665 649 666 /* ArrowLake devices */
+2 -2
sound/soc/mediatek/common/mtk-afe-platform-driver.c
··· 120 120 struct mtk_base_afe *afe = snd_soc_component_get_drvdata(component); 121 121 122 122 size = afe->mtk_afe_hardware->buffer_bytes_max; 123 - snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV, 124 - afe->dev, size, size); 123 + snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV, afe->dev, 0, size); 124 + 125 125 return 0; 126 126 } 127 127 EXPORT_SYMBOL_GPL(mtk_afe_pcm_new);
+19 -6
sound/soc/sof/intel/hda-dai.c
··· 103 103 return sdai->platform_private; 104 104 } 105 105 106 - int hda_link_dma_cleanup(struct snd_pcm_substream *substream, struct hdac_ext_stream *hext_stream, 107 - struct snd_soc_dai *cpu_dai) 106 + static int 107 + hda_link_dma_cleanup(struct snd_pcm_substream *substream, 108 + struct hdac_ext_stream *hext_stream, 109 + struct snd_soc_dai *cpu_dai, bool release) 108 110 { 109 111 const struct hda_dai_widget_dma_ops *ops = hda_dai_get_ops(substream, cpu_dai); 110 112 struct sof_intel_hda_stream *hda_stream; ··· 128 126 if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { 129 127 stream_tag = hdac_stream(hext_stream)->stream_tag; 130 128 snd_hdac_ext_bus_link_clear_stream_id(hlink, stream_tag); 129 + } 130 + 131 + if (!release) { 132 + /* 133 + * Force stream reconfiguration without releasing the channel on 134 + * subsequent stream restart (without free), including LinkDMA 135 + * reset. 136 + * The stream is released via hda_dai_hw_free() 137 + */ 138 + hext_stream->link_prepared = 0; 139 + return 0; 131 140 } 132 141 133 142 if (ops->release_hext_stream) ··· 224 211 if (!hext_stream) 225 212 return 0; 226 213 227 - return hda_link_dma_cleanup(substream, hext_stream, cpu_dai); 214 + return hda_link_dma_cleanup(substream, hext_stream, cpu_dai, true); 228 215 } 229 216 230 217 static int __maybe_unused hda_dai_hw_params_data(struct snd_pcm_substream *substream, ··· 317 304 switch (cmd) { 318 305 case SNDRV_PCM_TRIGGER_STOP: 319 306 case SNDRV_PCM_TRIGGER_SUSPEND: 320 - ret = hda_link_dma_cleanup(substream, hext_stream, dai); 307 + ret = hda_link_dma_cleanup(substream, hext_stream, dai, 308 + cmd == SNDRV_PCM_TRIGGER_STOP ? false : true); 321 309 if (ret < 0) { 322 310 dev_err(sdev->dev, "%s: failed to clean up link DMA\n", __func__); 323 311 return ret; ··· 674 660 } 675 661 676 662 ret = hda_link_dma_cleanup(hext_stream->link_substream, 677 - hext_stream, 678 - cpu_dai); 663 + hext_stream, cpu_dai, true); 679 664 if (ret < 0) 680 665 return ret; 681 666 }
-2
sound/soc/sof/intel/hda.h
··· 1038 1038 hda_select_dai_widget_ops(struct snd_sof_dev *sdev, struct snd_sof_widget *swidget); 1039 1039 int hda_dai_config(struct snd_soc_dapm_widget *w, unsigned int flags, 1040 1040 struct snd_sof_dai_config_data *data); 1041 - int hda_link_dma_cleanup(struct snd_pcm_substream *substream, struct hdac_ext_stream *hext_stream, 1042 - struct snd_soc_dai *cpu_dai); 1043 1041 1044 1042 static inline struct snd_sof_dev *widget_to_sdev(struct snd_soc_dapm_widget *w) 1045 1043 {
+1 -1
sound/usb/mixer_us16x08.c
··· 687 687 struct usb_mixer_elem_info *elem = kcontrol->private_data; 688 688 struct snd_usb_audio *chip = elem->head.mixer->chip; 689 689 struct snd_us16x08_meter_store *store = elem->private_data; 690 - u8 meter_urb[64]; 690 + u8 meter_urb[64] = {0}; 691 691 692 692 switch (kcontrol->private_value) { 693 693 case 0: {
+3
tools/hv/.gitignore
··· 1 + hv_fcopy_uio_daemon 2 + hv_kvp_daemon 3 + hv_vss_daemon
+6 -6
tools/hv/hv_fcopy_uio_daemon.c
··· 35 35 #define WIN8_SRV_MINOR 1 36 36 #define WIN8_SRV_VERSION (WIN8_SRV_MAJOR << 16 | WIN8_SRV_MINOR) 37 37 38 - #define MAX_FOLDER_NAME 15 39 - #define MAX_PATH_LEN 15 40 38 #define FCOPY_UIO "/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio" 41 39 42 40 #define FCOPY_VER_COUNT 1 ··· 49 51 50 52 #define HV_RING_SIZE 0x4000 /* 16KB ring buffer size */ 51 53 52 - unsigned char desc[HV_RING_SIZE]; 54 + static unsigned char desc[HV_RING_SIZE]; 53 55 54 56 static int target_fd; 55 57 static char target_fname[PATH_MAX]; ··· 407 409 struct vmbus_br txbr, rxbr; 408 410 void *ring; 409 411 uint32_t len = HV_RING_SIZE; 410 - char uio_name[MAX_FOLDER_NAME] = {0}; 411 - char uio_dev_path[MAX_PATH_LEN] = {0}; 412 + char uio_name[NAME_MAX] = {0}; 413 + char uio_dev_path[PATH_MAX] = {0}; 412 414 413 415 static struct option long_options[] = { 414 416 {"help", no_argument, 0, 'h' }, ··· 466 468 */ 467 469 ret = pread(fcopy_fd, &tmp, sizeof(int), 0); 468 470 if (ret < 0) { 471 + if (errno == EINTR || errno == EAGAIN) 472 + continue; 469 473 syslog(LOG_ERR, "pread failed: %s", strerror(errno)); 470 - continue; 474 + goto close; 471 475 } 472 476 473 477 len = HV_RING_SIZE;
+2 -2
tools/hv/hv_get_dns_info.sh
··· 1 - #!/bin/bash 1 + #!/bin/sh 2 2 3 3 # This example script parses /etc/resolv.conf to retrive DNS information. 4 4 # In the interest of keeping the KVP daemon code free of distro specific ··· 10 10 # this script can be based on the Network Manager APIs for retrieving DNS 11 11 # entries. 12 12 13 - cat /etc/resolv.conf 2>/dev/null | awk '/^nameserver/ { print $2 }' 13 + exec awk '/^nameserver/ { print $2 }' /etc/resolv.conf 2>/dev/null
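The script now `exec`s awk directly on `/etc/resolv.conf` rather than piping `cat` into it, dropping both the bashism and the extra process. A quick sanity check of the same awk filter on sample input (temporary file, hypothetical nameserver addresses):

```shell
# Sanity-check the nameserver filter from hv_get_dns_info on sample input.
resolv=$(mktemp)
cat > "$resolv" <<'EOF'
# comment line
nameserver 10.0.0.1
search example.com
nameserver 10.0.0.2
EOF

# Same filter the script now execs directly (no cat pipeline needed).
awk '/^nameserver/ { print $2 }' "$resolv"   # prints 10.0.0.1 then 10.0.0.2
rm -f "$resolv"
```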
+5 -4
tools/hv/hv_kvp_daemon.c
··· 725 725 * . 726 726 */ 727 727 728 - sprintf(cmd, KVP_SCRIPTS_PATH "%s", "hv_get_dns_info"); 728 + sprintf(cmd, "exec %s %s", KVP_SCRIPTS_PATH "hv_get_dns_info", if_name); 729 729 730 730 /* 731 731 * Execute the command to gather DNS info. ··· 742 742 * Enabled: DHCP enabled. 743 743 */ 744 744 745 - sprintf(cmd, KVP_SCRIPTS_PATH "%s %s", "hv_get_dhcp_info", if_name); 745 + sprintf(cmd, "exec %s %s", KVP_SCRIPTS_PATH "hv_get_dhcp_info", if_name); 746 746 747 747 file = popen(cmd, "r"); 748 748 if (file == NULL) ··· 1606 1606 * invoke the external script to do its magic. 1607 1607 */ 1608 1608 1609 - str_len = snprintf(cmd, sizeof(cmd), KVP_SCRIPTS_PATH "%s %s %s", 1610 - "hv_set_ifconfig", if_filename, nm_filename); 1609 + str_len = snprintf(cmd, sizeof(cmd), "exec %s %s %s", 1610 + KVP_SCRIPTS_PATH "hv_set_ifconfig", 1611 + if_filename, nm_filename); 1611 1612 /* 1612 1613 * This is a little overcautious, but it's necessary to suppress some 1613 1614 * false warnings from gcc 8.0.1.
+1 -1
tools/hv/hv_set_ifconfig.sh
··· 81 81 82 82 cp $1 /etc/sysconfig/network-scripts/ 83 83 84 - chmod 600 $2 84 + umask 0177 85 85 interface=$(echo $2 | awk -F - '{ print $2 }') 86 86 filename="${2##*/}" 87 87
+11 -4
tools/include/uapi/linux/stddef.h
··· 8 8 #define __always_inline __inline__ 9 9 #endif 10 10 11 + /* Not all C++ standards support type declarations inside an anonymous union */ 12 + #ifndef __cplusplus 13 + #define __struct_group_tag(TAG) TAG 14 + #else 15 + #define __struct_group_tag(TAG) 16 + #endif 17 + 11 18 /** 12 19 * __struct_group() - Create a mirrored named and anonyomous struct 13 20 * ··· 27 20 * and size: one anonymous and one named. The former's members can be used 28 21 * normally without sub-struct naming, and the latter can be used to 29 22 * reason about the start, end, and size of the group of struct members. 30 - * The named struct can also be explicitly tagged for layer reuse, as well 31 - * as both having struct attributes appended. 23 + * The named struct can also be explicitly tagged for layer reuse (C only), 24 + * as well as both having struct attributes appended. 32 25 */ 33 26 #define __struct_group(TAG, NAME, ATTRS, MEMBERS...) \ 34 27 union { \ 35 28 struct { MEMBERS } ATTRS; \ 36 - struct TAG { MEMBERS } ATTRS NAME; \ 37 - } 29 + struct __struct_group_tag(TAG) { MEMBERS } ATTRS NAME; \ 30 + } ATTRS 38 31 39 32 /** 40 33 * __DECLARE_FLEX_ARRAY() - Declare a flexible array usable in a union
+3 -3
tools/net/ynl/lib/ynl.py
··· 556 556 if attr["type"] == 'nest': 557 557 nl_type |= Netlink.NLA_F_NESTED 558 558 attr_payload = b'' 559 - sub_attrs = SpaceAttrs(self.attr_sets[space], value, search_attrs) 559 + sub_space = attr['nested-attributes'] 560 + sub_attrs = SpaceAttrs(self.attr_sets[sub_space], value, search_attrs) 560 561 for subname, subvalue in value.items(): 561 - attr_payload += self._add_attr(attr['nested-attributes'], 562 - subname, subvalue, sub_attrs) 562 + attr_payload += self._add_attr(sub_space, subname, subvalue, sub_attrs) 563 563 elif attr["type"] == 'flag': 564 564 if not value: 565 565 # If value is absent or false then skip attribute creation.
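The ynl.py fix resolves the nested attribute space once and uses it both for building the sub-attribute search scope and for the recursive encode, instead of passing the parent space's attr set. A toy dict-based sketch of the corrected lookup (simplified schema; the real code goes through `SpaceAttrs` and `self.attr_sets`):

```python
# Toy schema: a "nest" attribute names the attr set its children live in.
attr_sets = {
    "main": {"config": {"type": "nest", "nested-attributes": "cfg"}},
    "cfg": {"mtu": {"type": "u32"}},
}

def encode(space, name, value):
    spec = attr_sets[space][name]
    if spec["type"] == "nest":
        # the fix: resolve the nested space once, recurse with it
        sub_space = spec["nested-attributes"]
        return {name: {k: encode(sub_space, k, v) for k, v in value.items()}}
    return value

print(encode("main", "config", {"mtu": 1500}))  # {'config': {'mtu': 1500}}
```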
+6 -3
tools/objtool/check.c
··· 3820 3820 break; 3821 3821 3822 3822 case INSN_CONTEXT_SWITCH: 3823 - if (func && (!next_insn || !next_insn->hint)) { 3824 - WARN_INSN(insn, "unsupported instruction in callable function"); 3825 - return 1; 3823 + if (func) { 3824 + if (!next_insn || !next_insn->hint) { 3825 + WARN_INSN(insn, "unsupported instruction in callable function"); 3826 + return 1; 3827 + } 3828 + break; 3826 3829 } 3827 3830 return 0; 3828 3831
+1
tools/objtool/noreturns.h
··· 19 19 NORETURN(arch_cpu_idle_dead) 20 20 NORETURN(bch2_trans_in_restart_error) 21 21 NORETURN(bch2_trans_restart_error) 22 + NORETURN(bch2_trans_unlocked_error) 22 23 NORETURN(cpu_bringup_and_idle) 23 24 NORETURN(cpu_startup_entry) 24 25 NORETURN(do_exit)
+3 -3
tools/sched_ext/include/scx/common.bpf.h
··· 40 40 void scx_bpf_dsq_insert_vtime(struct task_struct *p, u64 dsq_id, u64 slice, u64 vtime, u64 enq_flags) __ksym __weak; 41 41 u32 scx_bpf_dispatch_nr_slots(void) __ksym; 42 42 void scx_bpf_dispatch_cancel(void) __ksym; 43 - bool scx_bpf_dsq_move_to_local(u64 dsq_id) __ksym; 44 - void scx_bpf_dsq_move_set_slice(struct bpf_iter_scx_dsq *it__iter, u64 slice) __ksym; 45 - void scx_bpf_dsq_move_set_vtime(struct bpf_iter_scx_dsq *it__iter, u64 vtime) __ksym; 43 + bool scx_bpf_dsq_move_to_local(u64 dsq_id) __ksym __weak; 44 + void scx_bpf_dsq_move_set_slice(struct bpf_iter_scx_dsq *it__iter, u64 slice) __ksym __weak; 45 + void scx_bpf_dsq_move_set_vtime(struct bpf_iter_scx_dsq *it__iter, u64 vtime) __ksym __weak; 46 46 bool scx_bpf_dsq_move(struct bpf_iter_scx_dsq *it__iter, struct task_struct *p, u64 dsq_id, u64 enq_flags) __ksym __weak; 47 47 bool scx_bpf_dsq_move_vtime(struct bpf_iter_scx_dsq *it__iter, struct task_struct *p, u64 dsq_id, u64 enq_flags) __ksym __weak; 48 48 u32 scx_bpf_reenqueue_local(void) __ksym;
+1 -1
tools/sched_ext/scx_central.c
··· 97 97 SCX_BUG_ON(!cpuset, "Failed to allocate cpuset"); 98 98 CPU_ZERO(cpuset); 99 99 CPU_SET(skel->rodata->central_cpu, cpuset); 100 - SCX_BUG_ON(sched_setaffinity(0, sizeof(cpuset), cpuset), 100 + SCX_BUG_ON(sched_setaffinity(0, sizeof(*cpuset), cpuset), 101 101 "Failed to affinitize to central CPU %d (max %d)", 102 102 skel->rodata->central_cpu, skel->rodata->nr_cpu_ids - 1); 103 103 CPU_FREE(cpuset);
+1 -1
tools/testing/selftests/alsa/Makefile
··· 27 27 $(OUTPUT)/libatest.so: conf.c alsa-local.h 28 28 $(CC) $(CFLAGS) -shared -fPIC $< $(LDLIBS) -o $@ 29 29 30 - $(OUTPUT)/%: %.c $(TEST_GEN_PROGS_EXTENDED) alsa-local.h 30 + $(OUTPUT)/%: %.c $(OUTPUT)/libatest.so alsa-local.h 31 31 $(CC) $(CFLAGS) $< $(LDLIBS) -latest -o $@
+394
tools/testing/selftests/bpf/prog_tests/socket_helpers.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + #ifndef __SOCKET_HELPERS__ 4 + #define __SOCKET_HELPERS__ 5 + 6 + #include <linux/vm_sockets.h> 7 + 8 + /* include/linux/net.h */ 9 + #define SOCK_TYPE_MASK 0xf 10 + 11 + #define IO_TIMEOUT_SEC 30 12 + #define MAX_STRERR_LEN 256 13 + 14 + /* workaround for older vm_sockets.h */ 15 + #ifndef VMADDR_CID_LOCAL 16 + #define VMADDR_CID_LOCAL 1 17 + #endif 18 + 19 + /* include/linux/cleanup.h */ 20 + #define __get_and_null(p, nullvalue) \ 21 + ({ \ 22 + __auto_type __ptr = &(p); \ 23 + __auto_type __val = *__ptr; \ 24 + *__ptr = nullvalue; \ 25 + __val; \ 26 + }) 27 + 28 + #define take_fd(fd) __get_and_null(fd, -EBADF) 29 + 30 + /* Wrappers that fail the test on error and report it. */ 31 + 32 + #define _FAIL(errnum, fmt...) \ 33 + ({ \ 34 + error_at_line(0, (errnum), __func__, __LINE__, fmt); \ 35 + CHECK_FAIL(true); \ 36 + }) 37 + #define FAIL(fmt...) _FAIL(0, fmt) 38 + #define FAIL_ERRNO(fmt...) _FAIL(errno, fmt) 39 + #define FAIL_LIBBPF(err, msg) \ 40 + ({ \ 41 + char __buf[MAX_STRERR_LEN]; \ 42 + libbpf_strerror((err), __buf, sizeof(__buf)); \ 43 + FAIL("%s: %s", (msg), __buf); \ 44 + }) 45 + 46 + 47 + #define xaccept_nonblock(fd, addr, len) \ 48 + ({ \ 49 + int __ret = \ 50 + accept_timeout((fd), (addr), (len), IO_TIMEOUT_SEC); \ 51 + if (__ret == -1) \ 52 + FAIL_ERRNO("accept"); \ 53 + __ret; \ 54 + }) 55 + 56 + #define xbind(fd, addr, len) \ 57 + ({ \ 58 + int __ret = bind((fd), (addr), (len)); \ 59 + if (__ret == -1) \ 60 + FAIL_ERRNO("bind"); \ 61 + __ret; \ 62 + }) 63 + 64 + #define xclose(fd) \ 65 + ({ \ 66 + int __ret = close((fd)); \ 67 + if (__ret == -1) \ 68 + FAIL_ERRNO("close"); \ 69 + __ret; \ 70 + }) 71 + 72 + #define xconnect(fd, addr, len) \ 73 + ({ \ 74 + int __ret = connect((fd), (addr), (len)); \ 75 + if (__ret == -1) \ 76 + FAIL_ERRNO("connect"); \ 77 + __ret; \ 78 + }) 79 + 80 + #define xgetsockname(fd, addr, len) \ 81 + ({ \ 82 + int __ret = getsockname((fd), (addr), (len)); \ 83 + if 
(__ret == -1) \ 84 + FAIL_ERRNO("getsockname"); \ 85 + __ret; \ 86 + }) 87 + 88 + #define xgetsockopt(fd, level, name, val, len) \ 89 + ({ \ 90 + int __ret = getsockopt((fd), (level), (name), (val), (len)); \ 91 + if (__ret == -1) \ 92 + FAIL_ERRNO("getsockopt(" #name ")"); \ 93 + __ret; \ 94 + }) 95 + 96 + #define xlisten(fd, backlog) \ 97 + ({ \ 98 + int __ret = listen((fd), (backlog)); \ 99 + if (__ret == -1) \ 100 + FAIL_ERRNO("listen"); \ 101 + __ret; \ 102 + }) 103 + 104 + #define xsetsockopt(fd, level, name, val, len) \ 105 + ({ \ 106 + int __ret = setsockopt((fd), (level), (name), (val), (len)); \ 107 + if (__ret == -1) \ 108 + FAIL_ERRNO("setsockopt(" #name ")"); \ 109 + __ret; \ 110 + }) 111 + 112 + #define xsend(fd, buf, len, flags) \ 113 + ({ \ 114 + ssize_t __ret = send((fd), (buf), (len), (flags)); \ 115 + if (__ret == -1) \ 116 + FAIL_ERRNO("send"); \ 117 + __ret; \ 118 + }) 119 + 120 + #define xrecv_nonblock(fd, buf, len, flags) \ 121 + ({ \ 122 + ssize_t __ret = recv_timeout((fd), (buf), (len), (flags), \ 123 + IO_TIMEOUT_SEC); \ 124 + if (__ret == -1) \ 125 + FAIL_ERRNO("recv"); \ 126 + __ret; \ 127 + }) 128 + 129 + #define xsocket(family, sotype, flags) \ 130 + ({ \ 131 + int __ret = socket(family, sotype, flags); \ 132 + if (__ret == -1) \ 133 + FAIL_ERRNO("socket"); \ 134 + __ret; \ 135 + }) 136 + 137 + static inline void close_fd(int *fd) 138 + { 139 + if (*fd >= 0) 140 + xclose(*fd); 141 + } 142 + 143 + #define __close_fd __attribute__((cleanup(close_fd))) 144 + 145 + static inline struct sockaddr *sockaddr(struct sockaddr_storage *ss) 146 + { 147 + return (struct sockaddr *)ss; 148 + } 149 + 150 + static inline void init_addr_loopback4(struct sockaddr_storage *ss, 151 + socklen_t *len) 152 + { 153 + struct sockaddr_in *addr4 = memset(ss, 0, sizeof(*ss)); 154 + 155 + addr4->sin_family = AF_INET; 156 + addr4->sin_port = 0; 157 + addr4->sin_addr.s_addr = htonl(INADDR_LOOPBACK); 158 + *len = sizeof(*addr4); 159 + } 160 + 161 + static inline void 
init_addr_loopback6(struct sockaddr_storage *ss, 162 + socklen_t *len) 163 + { 164 + struct sockaddr_in6 *addr6 = memset(ss, 0, sizeof(*ss)); 165 + 166 + addr6->sin6_family = AF_INET6; 167 + addr6->sin6_port = 0; 168 + addr6->sin6_addr = in6addr_loopback; 169 + *len = sizeof(*addr6); 170 + } 171 + 172 + static inline void init_addr_loopback_vsock(struct sockaddr_storage *ss, 173 + socklen_t *len) 174 + { 175 + struct sockaddr_vm *addr = memset(ss, 0, sizeof(*ss)); 176 + 177 + addr->svm_family = AF_VSOCK; 178 + addr->svm_port = VMADDR_PORT_ANY; 179 + addr->svm_cid = VMADDR_CID_LOCAL; 180 + *len = sizeof(*addr); 181 + } 182 + 183 + static inline void init_addr_loopback(int family, struct sockaddr_storage *ss, 184 + socklen_t *len) 185 + { 186 + switch (family) { 187 + case AF_INET: 188 + init_addr_loopback4(ss, len); 189 + return; 190 + case AF_INET6: 191 + init_addr_loopback6(ss, len); 192 + return; 193 + case AF_VSOCK: 194 + init_addr_loopback_vsock(ss, len); 195 + return; 196 + default: 197 + FAIL("unsupported address family %d", family); 198 + } 199 + } 200 + 201 + static inline int enable_reuseport(int s, int progfd) 202 + { 203 + int err, one = 1; 204 + 205 + err = xsetsockopt(s, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)); 206 + if (err) 207 + return -1; 208 + err = xsetsockopt(s, SOL_SOCKET, SO_ATTACH_REUSEPORT_EBPF, &progfd, 209 + sizeof(progfd)); 210 + if (err) 211 + return -1; 212 + 213 + return 0; 214 + } 215 + 216 + static inline int socket_loopback_reuseport(int family, int sotype, int progfd) 217 + { 218 + struct sockaddr_storage addr; 219 + socklen_t len = 0; 220 + int err, s; 221 + 222 + init_addr_loopback(family, &addr, &len); 223 + 224 + s = xsocket(family, sotype, 0); 225 + if (s == -1) 226 + return -1; 227 + 228 + if (progfd >= 0) 229 + enable_reuseport(s, progfd); 230 + 231 + err = xbind(s, sockaddr(&addr), len); 232 + if (err) 233 + goto close; 234 + 235 + if (sotype & SOCK_DGRAM) 236 + return s; 237 + 238 + err = xlisten(s, SOMAXCONN); 239 + 
if (err) 240 + goto close; 241 + 242 + return s; 243 + close: 244 + xclose(s); 245 + return -1; 246 + } 247 + 248 + static inline int socket_loopback(int family, int sotype) 249 + { 250 + return socket_loopback_reuseport(family, sotype, -1); 251 + } 252 + 253 + static inline int poll_connect(int fd, unsigned int timeout_sec) 254 + { 255 + struct timeval timeout = { .tv_sec = timeout_sec }; 256 + fd_set wfds; 257 + int r, eval; 258 + socklen_t esize = sizeof(eval); 259 + 260 + FD_ZERO(&wfds); 261 + FD_SET(fd, &wfds); 262 + 263 + r = select(fd + 1, NULL, &wfds, NULL, &timeout); 264 + if (r == 0) 265 + errno = ETIME; 266 + if (r != 1) 267 + return -1; 268 + 269 + if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &eval, &esize) < 0) 270 + return -1; 271 + if (eval != 0) { 272 + errno = eval; 273 + return -1; 274 + } 275 + 276 + return 0; 277 + } 278 + 279 + static inline int poll_read(int fd, unsigned int timeout_sec) 280 + { 281 + struct timeval timeout = { .tv_sec = timeout_sec }; 282 + fd_set rfds; 283 + int r; 284 + 285 + FD_ZERO(&rfds); 286 + FD_SET(fd, &rfds); 287 + 288 + r = select(fd + 1, &rfds, NULL, NULL, &timeout); 289 + if (r == 0) 290 + errno = ETIME; 291 + 292 + return r == 1 ? 
0 : -1; 293 + } 294 + 295 + static inline int accept_timeout(int fd, struct sockaddr *addr, socklen_t *len, 296 + unsigned int timeout_sec) 297 + { 298 + if (poll_read(fd, timeout_sec)) 299 + return -1; 300 + 301 + return accept(fd, addr, len); 302 + } 303 + 304 + static inline int recv_timeout(int fd, void *buf, size_t len, int flags, 305 + unsigned int timeout_sec) 306 + { 307 + if (poll_read(fd, timeout_sec)) 308 + return -1; 309 + 310 + return recv(fd, buf, len, flags); 311 + } 312 + 313 + 314 + static inline int create_pair(int family, int sotype, int *p0, int *p1) 315 + { 316 + __close_fd int s, c = -1, p = -1; 317 + struct sockaddr_storage addr; 318 + socklen_t len = sizeof(addr); 319 + int err; 320 + 321 + s = socket_loopback(family, sotype); 322 + if (s < 0) 323 + return s; 324 + 325 + err = xgetsockname(s, sockaddr(&addr), &len); 326 + if (err) 327 + return err; 328 + 329 + c = xsocket(family, sotype, 0); 330 + if (c < 0) 331 + return c; 332 + 333 + err = connect(c, sockaddr(&addr), len); 334 + if (err) { 335 + if (errno != EINPROGRESS) { 336 + FAIL_ERRNO("connect"); 337 + return err; 338 + } 339 + 340 + err = poll_connect(c, IO_TIMEOUT_SEC); 341 + if (err) { 342 + FAIL_ERRNO("poll_connect"); 343 + return err; 344 + } 345 + } 346 + 347 + switch (sotype & SOCK_TYPE_MASK) { 348 + case SOCK_DGRAM: 349 + err = xgetsockname(c, sockaddr(&addr), &len); 350 + if (err) 351 + return err; 352 + 353 + err = xconnect(s, sockaddr(&addr), len); 354 + if (err) 355 + return err; 356 + 357 + *p0 = take_fd(s); 358 + break; 359 + case SOCK_STREAM: 360 + case SOCK_SEQPACKET: 361 + p = xaccept_nonblock(s, NULL, NULL); 362 + if (p < 0) 363 + return p; 364 + 365 + *p0 = take_fd(p); 366 + break; 367 + default: 368 + FAIL("Unsupported socket type %#x", sotype); 369 + return -EOPNOTSUPP; 370 + } 371 + 372 + *p1 = take_fd(c); 373 + return 0; 374 + } 375 + 376 + static inline int create_socket_pairs(int family, int sotype, int *c0, int *c1, 377 + int *p0, int *p1) 378 + { 379 + int 
err; 380 + 381 + err = create_pair(family, sotype, c0, p0); 382 + if (err) 383 + return err; 384 + 385 + err = create_pair(family, sotype, c1, p1); 386 + if (err) { 387 + close(*c0); 388 + close(*p0); 389 + } 390 + 391 + return err; 392 + } 393 + 394 + #endif // __SOCKET_HELPERS__
+51
tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
··· 12 12 #include "test_sockmap_progs_query.skel.h" 13 13 #include "test_sockmap_pass_prog.skel.h" 14 14 #include "test_sockmap_drop_prog.skel.h" 15 + #include "test_sockmap_change_tail.skel.h" 15 16 #include "bpf_iter_sockmap.skel.h" 16 17 17 18 #include "sockmap_helpers.h" ··· 644 643 test_sockmap_drop_prog__destroy(drop); 645 644 } 646 645 646 + static void test_sockmap_skb_verdict_change_tail(void) 647 + { 648 + struct test_sockmap_change_tail *skel; 649 + int err, map, verdict; 650 + int c1, p1, sent, recvd; 651 + int zero = 0; 652 + char buf[2]; 653 + 654 + skel = test_sockmap_change_tail__open_and_load(); 655 + if (!ASSERT_OK_PTR(skel, "open_and_load")) 656 + return; 657 + verdict = bpf_program__fd(skel->progs.prog_skb_verdict); 658 + map = bpf_map__fd(skel->maps.sock_map_rx); 659 + 660 + err = bpf_prog_attach(verdict, map, BPF_SK_SKB_STREAM_VERDICT, 0); 661 + if (!ASSERT_OK(err, "bpf_prog_attach")) 662 + goto out; 663 + err = create_pair(AF_INET, SOCK_STREAM, &c1, &p1); 664 + if (!ASSERT_OK(err, "create_pair()")) 665 + goto out; 666 + err = bpf_map_update_elem(map, &zero, &c1, BPF_NOEXIST); 667 + if (!ASSERT_OK(err, "bpf_map_update_elem(c1)")) 668 + goto out_close; 669 + sent = xsend(p1, "Tr", 2, 0); 670 + ASSERT_EQ(sent, 2, "xsend(p1)"); 671 + recvd = recv(c1, buf, 2, 0); 672 + ASSERT_EQ(recvd, 1, "recv(c1)"); 673 + ASSERT_EQ(skel->data->change_tail_ret, 0, "change_tail_ret"); 674 + 675 + sent = xsend(p1, "G", 1, 0); 676 + ASSERT_EQ(sent, 1, "xsend(p1)"); 677 + recvd = recv(c1, buf, 2, 0); 678 + ASSERT_EQ(recvd, 2, "recv(c1)"); 679 + ASSERT_EQ(skel->data->change_tail_ret, 0, "change_tail_ret"); 680 + 681 + sent = xsend(p1, "E", 1, 0); 682 + ASSERT_EQ(sent, 1, "xsend(p1)"); 683 + recvd = recv(c1, buf, 1, 0); 684 + ASSERT_EQ(recvd, 1, "recv(c1)"); 685 + ASSERT_EQ(skel->data->change_tail_ret, -EINVAL, "change_tail_ret"); 686 + 687 + out_close: 688 + close(c1); 689 + close(p1); 690 + out: 691 + test_sockmap_change_tail__destroy(skel); 692 + } 693 + 647 694 
static void test_sockmap_skb_verdict_peek_helper(int map) 648 695 { 649 696 int err, c1, p1, zero = 0, sent, recvd, avail; ··· 1107 1058 test_sockmap_skb_verdict_fionread(true); 1108 1059 if (test__start_subtest("sockmap skb_verdict fionread on drop")) 1109 1060 test_sockmap_skb_verdict_fionread(false); 1061 + if (test__start_subtest("sockmap skb_verdict change tail")) 1062 + test_sockmap_skb_verdict_change_tail(); 1110 1063 if (test__start_subtest("sockmap skb_verdict msg_f_peek")) 1111 1064 test_sockmap_skb_verdict_peek(); 1112 1065 if (test__start_subtest("sockmap skb_verdict msg_f_peek with link"))
+1 -384
tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h
··· 1 1 #ifndef __SOCKMAP_HELPERS__ 2 2 #define __SOCKMAP_HELPERS__ 3 3 4 - #include <linux/vm_sockets.h> 4 + #include "socket_helpers.h" 5 5 6 - /* include/linux/net.h */ 7 - #define SOCK_TYPE_MASK 0xf 8 - 9 - #define IO_TIMEOUT_SEC 30 10 - #define MAX_STRERR_LEN 256 11 6 #define MAX_TEST_NAME 80 12 7 13 - /* workaround for older vm_sockets.h */ 14 - #ifndef VMADDR_CID_LOCAL 15 - #define VMADDR_CID_LOCAL 1 16 - #endif 17 - 18 8 #define __always_unused __attribute__((__unused__)) 19 - 20 - /* include/linux/cleanup.h */ 21 - #define __get_and_null(p, nullvalue) \ 22 - ({ \ 23 - __auto_type __ptr = &(p); \ 24 - __auto_type __val = *__ptr; \ 25 - *__ptr = nullvalue; \ 26 - __val; \ 27 - }) 28 - 29 - #define take_fd(fd) __get_and_null(fd, -EBADF) 30 - 31 - #define _FAIL(errnum, fmt...) \ 32 - ({ \ 33 - error_at_line(0, (errnum), __func__, __LINE__, fmt); \ 34 - CHECK_FAIL(true); \ 35 - }) 36 - #define FAIL(fmt...) _FAIL(0, fmt) 37 - #define FAIL_ERRNO(fmt...) _FAIL(errno, fmt) 38 - #define FAIL_LIBBPF(err, msg) \ 39 - ({ \ 40 - char __buf[MAX_STRERR_LEN]; \ 41 - libbpf_strerror((err), __buf, sizeof(__buf)); \ 42 - FAIL("%s: %s", (msg), __buf); \ 43 - }) 44 - 45 - /* Wrappers that fail the test on error and report it. 
*/ 46 - 47 - #define xaccept_nonblock(fd, addr, len) \ 48 - ({ \ 49 - int __ret = \ 50 - accept_timeout((fd), (addr), (len), IO_TIMEOUT_SEC); \ 51 - if (__ret == -1) \ 52 - FAIL_ERRNO("accept"); \ 53 - __ret; \ 54 - }) 55 - 56 - #define xbind(fd, addr, len) \ 57 - ({ \ 58 - int __ret = bind((fd), (addr), (len)); \ 59 - if (__ret == -1) \ 60 - FAIL_ERRNO("bind"); \ 61 - __ret; \ 62 - }) 63 - 64 - #define xclose(fd) \ 65 - ({ \ 66 - int __ret = close((fd)); \ 67 - if (__ret == -1) \ 68 - FAIL_ERRNO("close"); \ 69 - __ret; \ 70 - }) 71 - 72 - #define xconnect(fd, addr, len) \ 73 - ({ \ 74 - int __ret = connect((fd), (addr), (len)); \ 75 - if (__ret == -1) \ 76 - FAIL_ERRNO("connect"); \ 77 - __ret; \ 78 - }) 79 - 80 - #define xgetsockname(fd, addr, len) \ 81 - ({ \ 82 - int __ret = getsockname((fd), (addr), (len)); \ 83 - if (__ret == -1) \ 84 - FAIL_ERRNO("getsockname"); \ 85 - __ret; \ 86 - }) 87 - 88 - #define xgetsockopt(fd, level, name, val, len) \ 89 - ({ \ 90 - int __ret = getsockopt((fd), (level), (name), (val), (len)); \ 91 - if (__ret == -1) \ 92 - FAIL_ERRNO("getsockopt(" #name ")"); \ 93 - __ret; \ 94 - }) 95 - 96 - #define xlisten(fd, backlog) \ 97 - ({ \ 98 - int __ret = listen((fd), (backlog)); \ 99 - if (__ret == -1) \ 100 - FAIL_ERRNO("listen"); \ 101 - __ret; \ 102 - }) 103 - 104 - #define xsetsockopt(fd, level, name, val, len) \ 105 - ({ \ 106 - int __ret = setsockopt((fd), (level), (name), (val), (len)); \ 107 - if (__ret == -1) \ 108 - FAIL_ERRNO("setsockopt(" #name ")"); \ 109 - __ret; \ 110 - }) 111 - 112 - #define xsend(fd, buf, len, flags) \ 113 - ({ \ 114 - ssize_t __ret = send((fd), (buf), (len), (flags)); \ 115 - if (__ret == -1) \ 116 - FAIL_ERRNO("send"); \ 117 - __ret; \ 118 - }) 119 - 120 - #define xrecv_nonblock(fd, buf, len, flags) \ 121 - ({ \ 122 - ssize_t __ret = recv_timeout((fd), (buf), (len), (flags), \ 123 - IO_TIMEOUT_SEC); \ 124 - if (__ret == -1) \ 125 - FAIL_ERRNO("recv"); \ 126 - __ret; \ 127 - }) 128 - 129 - #define 
xsocket(family, sotype, flags) \ 130 - ({ \ 131 - int __ret = socket(family, sotype, flags); \ 132 - if (__ret == -1) \ 133 - FAIL_ERRNO("socket"); \ 134 - __ret; \ 135 - }) 136 9 137 10 #define xbpf_map_delete_elem(fd, key) \ 138 11 ({ \ ··· 66 193 __ret; \ 67 194 }) 68 195 69 - static inline void close_fd(int *fd) 70 - { 71 - if (*fd >= 0) 72 - xclose(*fd); 73 - } 74 - 75 - #define __close_fd __attribute__((cleanup(close_fd))) 76 - 77 - static inline int poll_connect(int fd, unsigned int timeout_sec) 78 - { 79 - struct timeval timeout = { .tv_sec = timeout_sec }; 80 - fd_set wfds; 81 - int r, eval; 82 - socklen_t esize = sizeof(eval); 83 - 84 - FD_ZERO(&wfds); 85 - FD_SET(fd, &wfds); 86 - 87 - r = select(fd + 1, NULL, &wfds, NULL, &timeout); 88 - if (r == 0) 89 - errno = ETIME; 90 - if (r != 1) 91 - return -1; 92 - 93 - if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &eval, &esize) < 0) 94 - return -1; 95 - if (eval != 0) { 96 - errno = eval; 97 - return -1; 98 - } 99 - 100 - return 0; 101 - } 102 - 103 - static inline int poll_read(int fd, unsigned int timeout_sec) 104 - { 105 - struct timeval timeout = { .tv_sec = timeout_sec }; 106 - fd_set rfds; 107 - int r; 108 - 109 - FD_ZERO(&rfds); 110 - FD_SET(fd, &rfds); 111 - 112 - r = select(fd + 1, &rfds, NULL, NULL, &timeout); 113 - if (r == 0) 114 - errno = ETIME; 115 - 116 - return r == 1 ? 
0 : -1; 117 - } 118 - 119 - static inline int accept_timeout(int fd, struct sockaddr *addr, socklen_t *len, 120 - unsigned int timeout_sec) 121 - { 122 - if (poll_read(fd, timeout_sec)) 123 - return -1; 124 - 125 - return accept(fd, addr, len); 126 - } 127 - 128 - static inline int recv_timeout(int fd, void *buf, size_t len, int flags, 129 - unsigned int timeout_sec) 130 - { 131 - if (poll_read(fd, timeout_sec)) 132 - return -1; 133 - 134 - return recv(fd, buf, len, flags); 135 - } 136 - 137 - static inline void init_addr_loopback4(struct sockaddr_storage *ss, 138 - socklen_t *len) 139 - { 140 - struct sockaddr_in *addr4 = memset(ss, 0, sizeof(*ss)); 141 - 142 - addr4->sin_family = AF_INET; 143 - addr4->sin_port = 0; 144 - addr4->sin_addr.s_addr = htonl(INADDR_LOOPBACK); 145 - *len = sizeof(*addr4); 146 - } 147 - 148 - static inline void init_addr_loopback6(struct sockaddr_storage *ss, 149 - socklen_t *len) 150 - { 151 - struct sockaddr_in6 *addr6 = memset(ss, 0, sizeof(*ss)); 152 - 153 - addr6->sin6_family = AF_INET6; 154 - addr6->sin6_port = 0; 155 - addr6->sin6_addr = in6addr_loopback; 156 - *len = sizeof(*addr6); 157 - } 158 - 159 - static inline void init_addr_loopback_vsock(struct sockaddr_storage *ss, 160 - socklen_t *len) 161 - { 162 - struct sockaddr_vm *addr = memset(ss, 0, sizeof(*ss)); 163 - 164 - addr->svm_family = AF_VSOCK; 165 - addr->svm_port = VMADDR_PORT_ANY; 166 - addr->svm_cid = VMADDR_CID_LOCAL; 167 - *len = sizeof(*addr); 168 - } 169 - 170 - static inline void init_addr_loopback(int family, struct sockaddr_storage *ss, 171 - socklen_t *len) 172 - { 173 - switch (family) { 174 - case AF_INET: 175 - init_addr_loopback4(ss, len); 176 - return; 177 - case AF_INET6: 178 - init_addr_loopback6(ss, len); 179 - return; 180 - case AF_VSOCK: 181 - init_addr_loopback_vsock(ss, len); 182 - return; 183 - default: 184 - FAIL("unsupported address family %d", family); 185 - } 186 - } 187 - 188 - static inline struct sockaddr *sockaddr(struct sockaddr_storage 
*ss) 189 - { 190 - return (struct sockaddr *)ss; 191 - } 192 - 193 196 static inline int add_to_sockmap(int sock_mapfd, int fd1, int fd2) 194 197 { 195 198 u64 value; ··· 81 332 key = 1; 82 333 value = fd2; 83 334 return xbpf_map_update_elem(sock_mapfd, &key, &value, BPF_NOEXIST); 84 - } 85 - 86 - static inline int enable_reuseport(int s, int progfd) 87 - { 88 - int err, one = 1; 89 - 90 - err = xsetsockopt(s, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)); 91 - if (err) 92 - return -1; 93 - err = xsetsockopt(s, SOL_SOCKET, SO_ATTACH_REUSEPORT_EBPF, &progfd, 94 - sizeof(progfd)); 95 - if (err) 96 - return -1; 97 - 98 - return 0; 99 - } 100 - 101 - static inline int socket_loopback_reuseport(int family, int sotype, int progfd) 102 - { 103 - struct sockaddr_storage addr; 104 - socklen_t len = 0; 105 - int err, s; 106 - 107 - init_addr_loopback(family, &addr, &len); 108 - 109 - s = xsocket(family, sotype, 0); 110 - if (s == -1) 111 - return -1; 112 - 113 - if (progfd >= 0) 114 - enable_reuseport(s, progfd); 115 - 116 - err = xbind(s, sockaddr(&addr), len); 117 - if (err) 118 - goto close; 119 - 120 - if (sotype & SOCK_DGRAM) 121 - return s; 122 - 123 - err = xlisten(s, SOMAXCONN); 124 - if (err) 125 - goto close; 126 - 127 - return s; 128 - close: 129 - xclose(s); 130 - return -1; 131 - } 132 - 133 - static inline int socket_loopback(int family, int sotype) 134 - { 135 - return socket_loopback_reuseport(family, sotype, -1); 136 - } 137 - 138 - static inline int create_pair(int family, int sotype, int *p0, int *p1) 139 - { 140 - __close_fd int s, c = -1, p = -1; 141 - struct sockaddr_storage addr; 142 - socklen_t len = sizeof(addr); 143 - int err; 144 - 145 - s = socket_loopback(family, sotype); 146 - if (s < 0) 147 - return s; 148 - 149 - err = xgetsockname(s, sockaddr(&addr), &len); 150 - if (err) 151 - return err; 152 - 153 - c = xsocket(family, sotype, 0); 154 - if (c < 0) 155 - return c; 156 - 157 - err = connect(c, sockaddr(&addr), len); 158 - if (err) { 159 - if 
(errno != EINPROGRESS) { 160 - FAIL_ERRNO("connect"); 161 - return err; 162 - } 163 - 164 - err = poll_connect(c, IO_TIMEOUT_SEC); 165 - if (err) { 166 - FAIL_ERRNO("poll_connect"); 167 - return err; 168 - } 169 - } 170 - 171 - switch (sotype & SOCK_TYPE_MASK) { 172 - case SOCK_DGRAM: 173 - err = xgetsockname(c, sockaddr(&addr), &len); 174 - if (err) 175 - return err; 176 - 177 - err = xconnect(s, sockaddr(&addr), len); 178 - if (err) 179 - return err; 180 - 181 - *p0 = take_fd(s); 182 - break; 183 - case SOCK_STREAM: 184 - case SOCK_SEQPACKET: 185 - p = xaccept_nonblock(s, NULL, NULL); 186 - if (p < 0) 187 - return p; 188 - 189 - *p0 = take_fd(p); 190 - break; 191 - default: 192 - FAIL("Unsupported socket type %#x", sotype); 193 - return -EOPNOTSUPP; 194 - } 195 - 196 - *p1 = take_fd(c); 197 - return 0; 198 - } 199 - 200 - static inline int create_socket_pairs(int family, int sotype, int *c0, int *c1, 201 - int *p0, int *p1) 202 - { 203 - int err; 204 - 205 - err = create_pair(family, sotype, c0, p0); 206 - if (err) 207 - return err; 208 - 209 - err = create_pair(family, sotype, c1, p1); 210 - if (err) { 211 - close(*c0); 212 - close(*p0); 213 - } 214 - 215 - return err; 216 335 } 217 336 218 337 #endif // __SOCKMAP_HELPERS__
+62
tools/testing/selftests/bpf/prog_tests/tc_change_tail.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <error.h> 3 + #include <test_progs.h> 4 + #include <linux/pkt_cls.h> 5 + 6 + #include "test_tc_change_tail.skel.h" 7 + #include "socket_helpers.h" 8 + 9 + #define LO_IFINDEX 1 10 + 11 + void test_tc_change_tail(void) 12 + { 13 + LIBBPF_OPTS(bpf_tcx_opts, tcx_opts); 14 + struct test_tc_change_tail *skel = NULL; 15 + struct bpf_link *link; 16 + int c1, p1; 17 + char buf[2]; 18 + int ret; 19 + 20 + skel = test_tc_change_tail__open_and_load(); 21 + if (!ASSERT_OK_PTR(skel, "test_tc_change_tail__open_and_load")) 22 + return; 23 + 24 + link = bpf_program__attach_tcx(skel->progs.change_tail, LO_IFINDEX, 25 + &tcx_opts); 26 + if (!ASSERT_OK_PTR(link, "bpf_program__attach_tcx")) 27 + goto destroy; 28 + 29 + skel->links.change_tail = link; 30 + ret = create_pair(AF_INET, SOCK_DGRAM, &c1, &p1); 31 + if (!ASSERT_OK(ret, "create_pair")) 32 + goto destroy; 33 + 34 + ret = xsend(p1, "Tr", 2, 0); 35 + ASSERT_EQ(ret, 2, "xsend(p1)"); 36 + ret = recv(c1, buf, 2, 0); 37 + ASSERT_EQ(ret, 2, "recv(c1)"); 38 + ASSERT_EQ(skel->data->change_tail_ret, 0, "change_tail_ret"); 39 + 40 + ret = xsend(p1, "G", 1, 0); 41 + ASSERT_EQ(ret, 1, "xsend(p1)"); 42 + ret = recv(c1, buf, 2, 0); 43 + ASSERT_EQ(ret, 1, "recv(c1)"); 44 + ASSERT_EQ(skel->data->change_tail_ret, 0, "change_tail_ret"); 45 + 46 + ret = xsend(p1, "E", 1, 0); 47 + ASSERT_EQ(ret, 1, "xsend(p1)"); 48 + ret = recv(c1, buf, 1, 0); 49 + ASSERT_EQ(ret, 1, "recv(c1)"); 50 + ASSERT_EQ(skel->data->change_tail_ret, -EINVAL, "change_tail_ret"); 51 + 52 + ret = xsend(p1, "Z", 1, 0); 53 + ASSERT_EQ(ret, 1, "xsend(p1)"); 54 + ret = recv(c1, buf, 1, 0); 55 + ASSERT_EQ(ret, 1, "recv(c1)"); 56 + ASSERT_EQ(skel->data->change_tail_ret, -EINVAL, "change_tail_ret"); 57 + 58 + close(c1); 59 + close(p1); 60 + destroy: 61 + test_tc_change_tail__destroy(skel); 62 + }
+40
tools/testing/selftests/bpf/progs/test_sockmap_change_tail.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2024 ByteDance */ 3 + #include <linux/bpf.h> 4 + #include <bpf/bpf_helpers.h> 5 + 6 + struct { 7 + __uint(type, BPF_MAP_TYPE_SOCKMAP); 8 + __uint(max_entries, 1); 9 + __type(key, int); 10 + __type(value, int); 11 + } sock_map_rx SEC(".maps"); 12 + 13 + long change_tail_ret = 1; 14 + 15 + SEC("sk_skb") 16 + int prog_skb_verdict(struct __sk_buff *skb) 17 + { 18 + char *data, *data_end; 19 + 20 + bpf_skb_pull_data(skb, 1); 21 + data = (char *)(unsigned long)skb->data; 22 + data_end = (char *)(unsigned long)skb->data_end; 23 + 24 + if (data + 1 > data_end) 25 + return SK_PASS; 26 + 27 + if (data[0] == 'T') { /* Trim the packet */ 28 + change_tail_ret = bpf_skb_change_tail(skb, skb->len - 1, 0); 29 + return SK_PASS; 30 + } else if (data[0] == 'G') { /* Grow the packet */ 31 + change_tail_ret = bpf_skb_change_tail(skb, skb->len + 1, 0); 32 + return SK_PASS; 33 + } else if (data[0] == 'E') { /* Error */ 34 + change_tail_ret = bpf_skb_change_tail(skb, 65535, 0); 35 + return SK_PASS; 36 + } 37 + return SK_PASS; 38 + } 39 + 40 + char _license[] SEC("license") = "GPL";
+106
tools/testing/selftests/bpf/progs/test_tc_change_tail.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/bpf.h> 3 + #include <bpf/bpf_helpers.h> 4 + #include <linux/if_ether.h> 5 + #include <linux/in.h> 6 + #include <linux/ip.h> 7 + #include <linux/udp.h> 8 + #include <linux/pkt_cls.h> 9 + 10 + long change_tail_ret = 1; 11 + 12 + static __always_inline struct iphdr *parse_ip_header(struct __sk_buff *skb, int *ip_proto) 13 + { 14 + void *data_end = (void *)(long)skb->data_end; 15 + void *data = (void *)(long)skb->data; 16 + struct ethhdr *eth = data; 17 + struct iphdr *iph; 18 + 19 + /* Verify Ethernet header */ 20 + if ((void *)(data + sizeof(*eth)) > data_end) 21 + return NULL; 22 + 23 + /* Skip Ethernet header to get to IP header */ 24 + iph = (void *)(data + sizeof(struct ethhdr)); 25 + 26 + /* Verify IP header */ 27 + if ((void *)(data + sizeof(struct ethhdr) + sizeof(*iph)) > data_end) 28 + return NULL; 29 + 30 + /* Basic IP header validation */ 31 + if (iph->version != 4) /* Only support IPv4 */ 32 + return NULL; 33 + 34 + if (iph->ihl < 5) /* Minimum IP header length */ 35 + return NULL; 36 + 37 + *ip_proto = iph->protocol; 38 + return iph; 39 + } 40 + 41 + static __always_inline struct udphdr *parse_udp_header(struct __sk_buff *skb, struct iphdr *iph) 42 + { 43 + void *data_end = (void *)(long)skb->data_end; 44 + void *hdr = (void *)iph; 45 + struct udphdr *udp; 46 + 47 + /* Calculate UDP header position */ 48 + udp = hdr + (iph->ihl * 4); 49 + hdr = (void *)udp; 50 + 51 + /* Verify UDP header bounds */ 52 + if ((void *)(hdr + sizeof(*udp)) > data_end) 53 + return NULL; 54 + 55 + return udp; 56 + } 57 + 58 + SEC("tc/ingress") 59 + int change_tail(struct __sk_buff *skb) 60 + { 61 + int len = skb->len; 62 + struct udphdr *udp; 63 + struct iphdr *iph; 64 + void *data_end; 65 + char *payload; 66 + int ip_proto; 67 + 68 + bpf_skb_pull_data(skb, len); 69 + 70 + data_end = (void *)(long)skb->data_end; 71 + iph = parse_ip_header(skb, &ip_proto); 72 + if (!iph) 73 + return TCX_PASS; 74 + 75 + if (ip_proto 
!= IPPROTO_UDP) 76 + return TCX_PASS; 77 + 78 + udp = parse_udp_header(skb, iph); 79 + if (!udp) 80 + return TCX_PASS; 81 + 82 + payload = (char *)udp + (sizeof(struct udphdr)); 83 + if (payload + 1 > (char *)data_end) 84 + return TCX_PASS; 85 + 86 + if (payload[0] == 'T') { /* Trim the packet */ 87 + change_tail_ret = bpf_skb_change_tail(skb, len - 1, 0); 88 + if (!change_tail_ret) 89 + bpf_skb_change_tail(skb, len, 0); 90 + return TCX_PASS; 91 + } else if (payload[0] == 'G') { /* Grow the packet */ 92 + change_tail_ret = bpf_skb_change_tail(skb, len + 1, 0); 93 + if (!change_tail_ret) 94 + bpf_skb_change_tail(skb, len, 0); 95 + return TCX_PASS; 96 + } else if (payload[0] == 'E') { /* Error */ 97 + change_tail_ret = bpf_skb_change_tail(skb, 65535, 0); 98 + return TCX_PASS; 99 + } else if (payload[0] == 'Z') { /* Zero */ 100 + change_tail_ret = bpf_skb_change_tail(skb, 0, 0); 101 + return TCX_PASS; 102 + } 103 + return TCX_DROP; 104 + } 105 + 106 + char _license[] SEC("license") = "GPL";
+2
tools/testing/selftests/bpf/sdt.h
··· 102 102 # define STAP_SDT_ARG_CONSTRAINT nZr 103 103 # elif defined __arm__ 104 104 # define STAP_SDT_ARG_CONSTRAINT g 105 + # elif defined __loongarch__ 106 + # define STAP_SDT_ARG_CONSTRAINT nmr 105 107 # else 106 108 # define STAP_SDT_ARG_CONSTRAINT nor 107 109 # endif
+4
tools/testing/selftests/bpf/trace_helpers.c
··· 293 293 return 0; 294 294 } 295 295 #else 296 + # ifndef PROCMAP_QUERY_VMA_EXECUTABLE 297 + # define PROCMAP_QUERY_VMA_EXECUTABLE 0x04 298 + # endif 299 + 296 300 static int procmap_query(int fd, const void *addr, __u32 query_flags, size_t *start, size_t *offset, int *flags) 297 301 { 298 302 return -EOPNOTSUPP;
+19 -14
tools/testing/selftests/cgroup/test_cpuset_prs.sh
··· 86 86 87 87 # 88 88 # If isolated CPUs have been reserved at boot time (as shown in 89 - # cpuset.cpus.isolated), these isolated CPUs should be outside of CPUs 0-7 89 + # cpuset.cpus.isolated), these isolated CPUs should be outside of CPUs 0-8 90 90 # that will be used by this script for testing purpose. If not, some of 91 - # the tests may fail incorrectly. These isolated CPUs will also be removed 92 - # before being compared with the expected results. 91 + # the tests may fail incorrectly. These pre-isolated CPUs should stay in 92 + # an isolated state throughout the testing process for now. 93 93 # 94 94 BOOT_ISOLCPUS=$(cat $CGROUP2/cpuset.cpus.isolated) 95 95 if [[ -n "$BOOT_ISOLCPUS" ]] 96 96 then 97 - [[ $(echo $BOOT_ISOLCPUS | sed -e "s/[,-].*//") -le 7 ]] && 97 + [[ $(echo $BOOT_ISOLCPUS | sed -e "s/[,-].*//") -le 8 ]] && 98 98 skip_test "Pre-isolated CPUs ($BOOT_ISOLCPUS) overlap CPUs to be tested" 99 99 echo "Pre-isolated CPUs: $BOOT_ISOLCPUS" 100 100 fi ··· 684 684 fi 685 685 686 686 # 687 + # Appending pre-isolated CPUs 688 + # Even though CPU #8 isn't used for testing, it can't be pre-isolated 689 + # to make appending those CPUs easier. 
690 + # 691 + [[ -n "$BOOT_ISOLCPUS" ]] && { 692 + EXPECT_VAL=${EXPECT_VAL:+${EXPECT_VAL},}${BOOT_ISOLCPUS} 693 + EXPECT_VAL2=${EXPECT_VAL2:+${EXPECT_VAL2},}${BOOT_ISOLCPUS} 694 + } 695 + 696 + # 687 697 # Check cpuset.cpus.isolated cpumask 688 698 # 689 - if [[ -z "$BOOT_ISOLCPUS" ]] 690 - then 691 - ISOLCPUS=$(cat $ISCPUS) 692 - else 693 - ISOLCPUS=$(cat $ISCPUS | sed -e "s/,*$BOOT_ISOLCPUS//") 694 - fi 695 699 [[ "$EXPECT_VAL2" != "$ISOLCPUS" ]] && { 696 700 # Take a 50ms pause and try again 697 701 pause 0.05 ··· 735 731 fi 736 732 done 737 733 [[ "$ISOLCPUS" = *- ]] && ISOLCPUS=${ISOLCPUS}$LASTISOLCPU 738 - [[ -n "BOOT_ISOLCPUS" ]] && 739 - ISOLCPUS=$(echo $ISOLCPUS | sed -e "s/,*$BOOT_ISOLCPUS//") 740 734 741 735 [[ "$EXPECT_VAL" = "$ISOLCPUS" ]] 742 736 } ··· 838 836 # if available 839 837 [[ -n "$ICPUS" ]] && { 840 838 check_isolcpus $ICPUS 841 - [[ $? -ne 0 ]] && test_fail $I "isolated CPU" \ 842 - "Expect $ICPUS, get $ISOLCPUS instead" 839 + [[ $? -ne 0 ]] && { 840 + [[ -n "$BOOT_ISOLCPUS" ]] && ICPUS=${ICPUS},${BOOT_ISOLCPUS} 841 + test_fail $I "isolated CPU" \ 842 + "Expect $ICPUS, get $ISOLCPUS instead" 843 + } 843 844 } 844 845 reset_cgroup_states 845 846 #
+37 -14
tools/testing/selftests/drivers/net/queues.py
··· 1 1 #!/usr/bin/env python3 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 - from lib.py import ksft_run, ksft_exit, ksft_eq, KsftSkipEx 5 - from lib.py import EthtoolFamily, NetdevFamily 4 + from lib.py import ksft_disruptive, ksft_exit, ksft_run 5 + from lib.py import ksft_eq, ksft_raises, KsftSkipEx 6 + from lib.py import EthtoolFamily, NetdevFamily, NlError 6 7 from lib.py import NetDrvEnv 7 - from lib.py import cmd 8 + from lib.py import cmd, defer, ip 9 + import errno 8 10 import glob 9 11 10 12 11 - def sys_get_queues(ifname) -> int: 12 - folders = glob.glob(f'/sys/class/net/{ifname}/queues/rx-*') 13 + def sys_get_queues(ifname, qtype='rx') -> int: 14 + folders = glob.glob(f'/sys/class/net/{ifname}/queues/{qtype}-*') 13 15 return len(folders) 14 16 15 17 16 - def nl_get_queues(cfg, nl): 18 + def nl_get_queues(cfg, nl, qtype='rx'): 17 19 queues = nl.queue_get({'ifindex': cfg.ifindex}, dump=True) 18 20 if queues: 19 - return len([q for q in queues if q['type'] == 'rx']) 21 + return len([q for q in queues if q['type'] == qtype]) 20 22 return None 21 23 22 24 23 25 def get_queues(cfg, nl) -> None: 24 - queues = nl_get_queues(cfg, nl) 25 - if not queues: 26 - raise KsftSkipEx('queue-get not supported by device') 26 + snl = NetdevFamily(recv_size=4096) 27 27 28 - expected = sys_get_queues(cfg.dev['ifname']) 29 - ksft_eq(queues, expected) 28 + for qtype in ['rx', 'tx']: 29 + queues = nl_get_queues(cfg, snl, qtype) 30 + if not queues: 31 + raise KsftSkipEx('queue-get not supported by device') 32 + 33 + expected = sys_get_queues(cfg.dev['ifname'], qtype) 34 + ksft_eq(queues, expected) 30 35 31 36 32 37 def addremove_queues(cfg, nl) -> None: ··· 61 56 ksft_eq(queues, expected) 62 57 63 58 59 + @ksft_disruptive 60 + def check_down(cfg, nl) -> None: 61 + # Check the NAPI IDs before interface goes down and hides them 62 + napis = nl.napi_get({'ifindex': cfg.ifindex}, dump=True) 63 + 64 + ip(f"link set dev {cfg.dev['ifname']} down") 65 + defer(ip, f"link set dev 
{cfg.dev['ifname']} up") 66 + 67 + with ksft_raises(NlError) as cm: 68 + nl.queue_get({'ifindex': cfg.ifindex, 'id': 0, 'type': 'rx'}) 69 + ksft_eq(cm.exception.nl_msg.error, -errno.ENOENT) 70 + 71 + if napis: 72 + with ksft_raises(NlError) as cm: 73 + nl.napi_get({'id': napis[0]['id']}) 74 + ksft_eq(cm.exception.nl_msg.error, -errno.ENOENT) 75 + 76 + 64 77 def main() -> None: 65 - with NetDrvEnv(__file__, queue_count=3) as cfg: 66 - ksft_run([get_queues, addremove_queues], args=(cfg, NetdevFamily())) 78 + with NetDrvEnv(__file__, queue_count=100) as cfg: 79 + ksft_run([get_queues, addremove_queues, check_down], args=(cfg, NetdevFamily())) 67 80 ksft_exit() 68 81 69 82
+18 -1
tools/testing/selftests/drivers/net/stats.py
··· 110 110 ksft_ge(triple[1][key], triple[0][key], comment="bad key: " + key) 111 111 ksft_ge(triple[2][key], triple[1][key], comment="bad key: " + key) 112 112 113 + # Sanity check the dumps 114 + queues = NetdevFamily(recv_size=4096).qstats_get({"scope": "queue"}, dump=True) 115 + # Reformat the output into {ifindex: {rx: [id, id, ...], tx: [id, id, ...]}} 116 + parsed = {} 117 + for entry in queues: 118 + ifindex = entry["ifindex"] 119 + if ifindex not in parsed: 120 + parsed[ifindex] = {"rx":[], "tx": []} 121 + parsed[ifindex][entry["queue-type"]].append(entry['queue-id']) 122 + # Now, validate 123 + for ifindex, queues in parsed.items(): 124 + for qtype in ['rx', 'tx']: 125 + ksft_eq(len(queues[qtype]), len(set(queues[qtype])), 126 + comment="repeated queue keys") 127 + ksft_eq(len(queues[qtype]), max(queues[qtype]) + 1, 128 + comment="missing queue keys") 129 + 113 130 # Test invalid dumps 114 131 # 0 is invalid 115 132 with ksft_raises(NlError) as cm: ··· 175 158 176 159 177 160 def main() -> None: 178 - with NetDrvEnv(__file__) as cfg: 161 + with NetDrvEnv(__file__, queue_count=100) as cfg: 179 162 ksft_run([check_pause, check_fec, pkt_byte_sum, qstat_by_ifindex, 180 163 check_down], 181 164 args=(cfg, ))
-1
tools/testing/selftests/kvm/arm64/set_id_regs.c
··· 152 152 REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, BIGENDEL0, 0), 153 153 REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, SNSMEM, 0), 154 154 REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, BIGEND, 0), 155 - REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, ASIDBITS, 0), 156 155 REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, PARANGE, 0), 157 156 REG_FTR_END, 158 157 };
+171 -1
tools/testing/selftests/kvm/s390/ucontrol_test.c
··· 210 210 struct kvm_device_attr attr = { 211 211 .group = KVM_S390_VM_MEM_CTRL, 212 212 .attr = KVM_S390_VM_MEM_LIMIT_SIZE, 213 - .addr = (unsigned long)&limit, 213 + .addr = (u64)&limit, 214 214 }; 215 215 int rc; 216 + 217 + rc = ioctl(self->vm_fd, KVM_HAS_DEVICE_ATTR, &attr); 218 + EXPECT_EQ(0, rc); 216 219 217 220 rc = ioctl(self->vm_fd, KVM_GET_DEVICE_ATTR, &attr); 218 221 EXPECT_EQ(0, rc); ··· 636 633 ASSERT_EQ(skeyvalue & 0xfa, sync_regs->gprs[1]); 637 634 ASSERT_EQ(0, sync_regs->gprs[1] & 0x04); 638 635 uc_assert_diag44(self); 636 + } 637 + 638 + static char uc_flic_b[PAGE_SIZE]; 639 + static struct kvm_s390_io_adapter uc_flic_ioa = { .id = 0 }; 640 + static struct kvm_s390_io_adapter_req uc_flic_ioam = { .id = 0 }; 641 + static struct kvm_s390_ais_req uc_flic_asim = { .isc = 0 }; 642 + static struct kvm_s390_ais_all uc_flic_asima = { .simm = 0 }; 643 + static struct uc_flic_attr_test { 644 + char *name; 645 + struct kvm_device_attr a; 646 + int hasrc; 647 + int geterrno; 648 + int seterrno; 649 + } uc_flic_attr_tests[] = { 650 + { 651 + .name = "KVM_DEV_FLIC_GET_ALL_IRQS", 652 + .seterrno = EINVAL, 653 + .a = { 654 + .group = KVM_DEV_FLIC_GET_ALL_IRQS, 655 + .addr = (u64)&uc_flic_b, 656 + .attr = PAGE_SIZE, 657 + }, 658 + }, 659 + { 660 + .name = "KVM_DEV_FLIC_ENQUEUE", 661 + .geterrno = EINVAL, 662 + .a = { .group = KVM_DEV_FLIC_ENQUEUE, }, 663 + }, 664 + { 665 + .name = "KVM_DEV_FLIC_CLEAR_IRQS", 666 + .geterrno = EINVAL, 667 + .a = { .group = KVM_DEV_FLIC_CLEAR_IRQS, }, 668 + }, 669 + { 670 + .name = "KVM_DEV_FLIC_ADAPTER_REGISTER", 671 + .geterrno = EINVAL, 672 + .a = { 673 + .group = KVM_DEV_FLIC_ADAPTER_REGISTER, 674 + .addr = (u64)&uc_flic_ioa, 675 + }, 676 + }, 677 + { 678 + .name = "KVM_DEV_FLIC_ADAPTER_MODIFY", 679 + .geterrno = EINVAL, 680 + .seterrno = EINVAL, 681 + .a = { 682 + .group = KVM_DEV_FLIC_ADAPTER_MODIFY, 683 + .addr = (u64)&uc_flic_ioam, 684 + .attr = sizeof(uc_flic_ioam), 685 + }, 686 + }, 687 + { 688 + .name = 
"KVM_DEV_FLIC_CLEAR_IO_IRQ", 689 + .geterrno = EINVAL, 690 + .seterrno = EINVAL, 691 + .a = { 692 + .group = KVM_DEV_FLIC_CLEAR_IO_IRQ, 693 + .attr = 32, 694 + }, 695 + }, 696 + { 697 + .name = "KVM_DEV_FLIC_AISM", 698 + .geterrno = EINVAL, 699 + .seterrno = ENOTSUP, 700 + .a = { 701 + .group = KVM_DEV_FLIC_AISM, 702 + .addr = (u64)&uc_flic_asim, 703 + }, 704 + }, 705 + { 706 + .name = "KVM_DEV_FLIC_AIRQ_INJECT", 707 + .geterrno = EINVAL, 708 + .a = { .group = KVM_DEV_FLIC_AIRQ_INJECT, }, 709 + }, 710 + { 711 + .name = "KVM_DEV_FLIC_AISM_ALL", 712 + .geterrno = ENOTSUP, 713 + .seterrno = ENOTSUP, 714 + .a = { 715 + .group = KVM_DEV_FLIC_AISM_ALL, 716 + .addr = (u64)&uc_flic_asima, 717 + .attr = sizeof(uc_flic_asima), 718 + }, 719 + }, 720 + { 721 + .name = "KVM_DEV_FLIC_APF_ENABLE", 722 + .geterrno = EINVAL, 723 + .seterrno = EINVAL, 724 + .a = { .group = KVM_DEV_FLIC_APF_ENABLE, }, 725 + }, 726 + { 727 + .name = "KVM_DEV_FLIC_APF_DISABLE_WAIT", 728 + .geterrno = EINVAL, 729 + .seterrno = EINVAL, 730 + .a = { .group = KVM_DEV_FLIC_APF_DISABLE_WAIT, }, 731 + }, 732 + }; 733 + 734 + TEST_F(uc_kvm, uc_flic_attrs) 735 + { 736 + struct kvm_create_device cd = { .type = KVM_DEV_TYPE_FLIC }; 737 + struct kvm_device_attr attr; 738 + u64 value; 739 + int rc, i; 740 + 741 + rc = ioctl(self->vm_fd, KVM_CREATE_DEVICE, &cd); 742 + ASSERT_EQ(0, rc) TH_LOG("create device failed with err %s (%i)", 743 + strerror(errno), errno); 744 + 745 + for (i = 0; i < ARRAY_SIZE(uc_flic_attr_tests); i++) { 746 + TH_LOG("test %s", uc_flic_attr_tests[i].name); 747 + attr = (struct kvm_device_attr) { 748 + .group = uc_flic_attr_tests[i].a.group, 749 + .attr = uc_flic_attr_tests[i].a.attr, 750 + .addr = uc_flic_attr_tests[i].a.addr, 751 + }; 752 + if (attr.addr == 0) 753 + attr.addr = (u64)&value; 754 + 755 + rc = ioctl(cd.fd, KVM_HAS_DEVICE_ATTR, &attr); 756 + EXPECT_EQ(uc_flic_attr_tests[i].hasrc, !!rc) 757 + TH_LOG("expected dev attr missing %s", 758 + uc_flic_attr_tests[i].name); 759 + 760 + rc 
= ioctl(cd.fd, KVM_GET_DEVICE_ATTR, &attr); 761 + EXPECT_EQ(!!uc_flic_attr_tests[i].geterrno, !!rc) 762 + TH_LOG("get dev attr rc not expected on %s %s (%i)", 763 + uc_flic_attr_tests[i].name, 764 + strerror(errno), errno); 765 + if (uc_flic_attr_tests[i].geterrno) 766 + EXPECT_EQ(uc_flic_attr_tests[i].geterrno, errno) 767 + TH_LOG("get dev attr errno not expected on %s %s (%i)", 768 + uc_flic_attr_tests[i].name, 769 + strerror(errno), errno); 770 + 771 + rc = ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr); 772 + EXPECT_EQ(!!uc_flic_attr_tests[i].seterrno, !!rc) 773 + TH_LOG("set dev attr rc not expected on %s %s (%i)", 774 + uc_flic_attr_tests[i].name, 775 + strerror(errno), errno); 776 + if (uc_flic_attr_tests[i].seterrno) 777 + EXPECT_EQ(uc_flic_attr_tests[i].seterrno, errno) 778 + TH_LOG("set dev attr errno not expected on %s %s (%i)", 779 + uc_flic_attr_tests[i].name, 780 + strerror(errno), errno); 781 + } 782 + 783 + close(cd.fd); 784 + } 785 + 786 + TEST_F(uc_kvm, uc_set_gsi_routing) 787 + { 788 + struct kvm_irq_routing *routing = kvm_gsi_routing_create(); 789 + struct kvm_irq_routing_entry ue = { 790 + .type = KVM_IRQ_ROUTING_S390_ADAPTER, 791 + .gsi = 1, 792 + .u.adapter = (struct kvm_irq_routing_s390_adapter) { 793 + .ind_addr = 0, 794 + }, 795 + }; 796 + int rc; 797 + 798 + routing->entries[0] = ue; 799 + routing->nr = 1; 800 + rc = ioctl(self->vm_fd, KVM_SET_GSI_ROUTING, routing); 801 + ASSERT_EQ(-1, rc) TH_LOG("err %s (%i)", strerror(errno), errno); 802 + ASSERT_EQ(EINVAL, errno) TH_LOG("err %s (%i)", strerror(errno), errno); 639 803 } 640 804 641 805 TEST_HARNESS_MAIN
+55 -2
tools/testing/selftests/memfd/memfd_test.c
··· 9 9 #include <fcntl.h> 10 10 #include <linux/memfd.h> 11 11 #include <sched.h> 12 + #include <stdbool.h> 12 13 #include <stdio.h> 13 14 #include <stdlib.h> 14 15 #include <signal.h> ··· 271 270 p = mmap(NULL, 272 271 mfd_def_size, 273 272 PROT_READ | PROT_WRITE, 273 + MAP_SHARED, 274 + fd, 275 + 0); 276 + if (p == MAP_FAILED) { 277 + printf("mmap() failed: %m\n"); 278 + abort(); 279 + } 280 + 281 + return p; 282 + } 283 + 284 + static void *mfd_assert_mmap_read_shared(int fd) 285 + { 286 + void *p; 287 + 288 + p = mmap(NULL, 289 + mfd_def_size, 290 + PROT_READ, 274 291 MAP_SHARED, 275 292 fd, 276 293 0); ··· 998 979 close(fd); 999 980 } 1000 981 982 + static void test_seal_write_map_read_shared(void) 983 + { 984 + int fd; 985 + void *p; 986 + 987 + printf("%s SEAL-WRITE-MAP-READ\n", memfd_str); 988 + 989 + fd = mfd_assert_new("kern_memfd_seal_write_map_read", 990 + mfd_def_size, 991 + MFD_CLOEXEC | MFD_ALLOW_SEALING); 992 + 993 + mfd_assert_add_seals(fd, F_SEAL_WRITE); 994 + mfd_assert_has_seals(fd, F_SEAL_WRITE); 995 + 996 + p = mfd_assert_mmap_read_shared(fd); 997 + 998 + mfd_assert_read(fd); 999 + mfd_assert_read_shared(fd); 1000 + mfd_fail_write(fd); 1001 + 1002 + munmap(p, mfd_def_size); 1003 + close(fd); 1004 + } 1005 + 1001 1006 /* 1002 1007 * Test SEAL_SHRINK 1003 1008 * Test whether SEAL_SHRINK actually prevents shrinking ··· 1600 1557 close(fd); 1601 1558 } 1602 1559 1560 + static bool pid_ns_supported(void) 1561 + { 1562 + return access("/proc/self/ns/pid", F_OK) == 0; 1563 + } 1564 + 1603 1565 int main(int argc, char **argv) 1604 1566 { 1605 1567 pid_t pid; ··· 1635 1587 1636 1588 test_seal_write(); 1637 1589 test_seal_future_write(); 1590 + test_seal_write_map_read_shared(); 1638 1591 test_seal_shrink(); 1639 1592 test_seal_grow(); 1640 1593 test_seal_resize(); 1641 1594 1642 - test_sysctl_simple(); 1643 - test_sysctl_nested(); 1595 + if (pid_ns_supported()) { 1596 + test_sysctl_simple(); 1597 + test_sysctl_nested(); 1598 + } else { 1599 + 
printf("PID namespaces are not supported; skipping sysctl tests\n"); 1600 + } 1644 1601 1645 1602 test_share_dup("SHARE-DUP", ""); 1646 1603 test_share_mmap("SHARE-MMAP", "");
-1
tools/testing/selftests/net/forwarding/local_termination.sh
··· 7 7 NUM_NETIFS=2 8 8 PING_COUNT=1 9 9 REQUIRE_MTOOLS=yes 10 - REQUIRE_MZ=no 11 10 12 11 source lib.sh 13 12
+8 -8
tools/testing/selftests/net/lib/py/ynl.py
··· 32 32 # Set schema='' to avoid jsonschema validation, it's slow 33 33 # 34 34 class EthtoolFamily(YnlFamily): 35 - def __init__(self): 35 + def __init__(self, recv_size=0): 36 36 super().__init__((SPEC_PATH / Path('ethtool.yaml')).as_posix(), 37 - schema='') 37 + schema='', recv_size=recv_size) 38 38 39 39 40 40 class RtnlFamily(YnlFamily): 41 - def __init__(self): 41 + def __init__(self, recv_size=0): 42 42 super().__init__((SPEC_PATH / Path('rt_link.yaml')).as_posix(), 43 - schema='') 43 + schema='', recv_size=recv_size) 44 44 45 45 46 46 class NetdevFamily(YnlFamily): 47 - def __init__(self): 47 + def __init__(self, recv_size=0): 48 48 super().__init__((SPEC_PATH / Path('netdev.yaml')).as_posix(), 49 - schema='') 49 + schema='', recv_size=recv_size) 50 50 51 51 class NetshaperFamily(YnlFamily): 52 - def __init__(self): 52 + def __init__(self, recv_size=0): 53 53 super().__init__((SPEC_PATH / Path('net_shaper.yaml')).as_posix(), 54 - schema='') 54 + schema='', recv_size=recv_size)
+4 -2
tools/testing/selftests/net/openvswitch/openvswitch.sh
··· 171 171 ovs_add_if "$1" "$2" "$4" -u || return 1 172 172 fi 173 173 174 - [ $TRACING -eq 1 ] && ovs_netns_spawn_daemon "$1" "$ns" \ 175 - tcpdump -i any -s 65535 174 + if [ $TRACING -eq 1 ]; then 175 + ovs_netns_spawn_daemon "$1" "$3" tcpdump -l -i any -s 65535 176 + ovs_wait grep -q "listening on any" ${ovs_dir}/stderr 177 + fi 176 178 177 179 return 0 178 180 }
+22 -6
tools/testing/selftests/riscv/abi/pointer_masking.c
··· 185 185 } 186 186 } 187 187 188 + static bool pwrite_wrapper(int fd, void *buf, size_t count, const char *msg) 189 + { 190 + int ret = pwrite(fd, buf, count, 0); 191 + 192 + if (ret != count) { 193 + ksft_perror(msg); 194 + return false; 195 + } 196 + return true; 197 + } 198 + 188 199 static void test_tagged_addr_abi_sysctl(void) 189 200 { 201 + char *err_pwrite_msg = "failed to write to /proc/sys/abi/tagged_addr_disabled\n"; 190 202 char value; 191 203 int fd; 192 204 ··· 212 200 } 213 201 214 202 value = '1'; 215 - pwrite(fd, &value, 1, 0); 216 - ksft_test_result(set_tagged_addr_ctrl(min_pmlen, true) == -EINVAL, 217 - "sysctl disabled\n"); 203 + if (!pwrite_wrapper(fd, &value, 1, "write '1'")) 204 + ksft_test_result_fail(err_pwrite_msg); 205 + else 206 + ksft_test_result(set_tagged_addr_ctrl(min_pmlen, true) == -EINVAL, 207 + "sysctl disabled\n"); 218 208 219 209 value = '0'; 220 - pwrite(fd, &value, 1, 0); 221 - ksft_test_result(set_tagged_addr_ctrl(min_pmlen, true) == 0, 222 - "sysctl enabled\n"); 210 + if (!pwrite_wrapper(fd, &value, 1, "write '0'")) 211 + ksft_test_result_fail(err_pwrite_msg); 212 + else 213 + ksft_test_result(set_tagged_addr_ctrl(min_pmlen, true) == 0, 214 + "sysctl enabled\n"); 223 215 224 216 set_tagged_addr_ctrl(0, false); 225 217
+4
tools/testing/selftests/riscv/vector/v_initval_nolibc.c
··· 25 25 unsigned long vl; 26 26 char *datap, *tmp; 27 27 28 + ksft_set_plan(1); 29 + 28 30 datap = malloc(MAX_VSIZE); 29 31 if (!datap) { 30 32 ksft_test_result_fail("fail to allocate memory for size = %d\n", MAX_VSIZE); ··· 65 63 } 66 64 67 65 free(datap); 66 + 67 + ksft_test_result_pass("tests for v_initval_nolibc pass\n"); 68 68 ksft_exit_pass(); 69 69 return 0; 70 70 }
+2
tools/testing/selftests/riscv/vector/vstate_prctl.c
··· 76 76 long flag, expected; 77 77 long rc; 78 78 79 + ksft_set_plan(1); 80 + 79 81 pair.key = RISCV_HWPROBE_KEY_IMA_EXT_0; 80 82 rc = riscv_hwprobe(&pair, 1, 0, NULL, 0); 81 83 if (rc < 0) {
+1 -1
tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
··· 20 20 * If we dispatch to a bogus DSQ that will fall back to the 21 21 * builtin global DSQ, we fail gracefully. 22 22 */ 23 - scx_bpf_dispatch_vtime(p, 0xcafef00d, SCX_SLICE_DFL, 23 + scx_bpf_dsq_insert_vtime(p, 0xcafef00d, SCX_SLICE_DFL, 24 24 p->scx.dsq_vtime, 0); 25 25 return cpu; 26 26 }
+2 -2
tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
··· 17 17 18 18 if (cpu >= 0) { 19 19 /* Shouldn't be allowed to vtime dispatch to a builtin DSQ. */ 20 - scx_bpf_dispatch_vtime(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 21 - p->scx.dsq_vtime, 0); 20 + scx_bpf_dsq_insert_vtime(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 21 + p->scx.dsq_vtime, 0); 22 22 return cpu; 23 23 } 24 24
+5 -2
tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
··· 43 43 if (!p) 44 44 return; 45 45 46 - target = bpf_get_prandom_u32() % nr_cpus; 46 + if (p->nr_cpus_allowed == nr_cpus) 47 + target = bpf_get_prandom_u32() % nr_cpus; 48 + else 49 + target = scx_bpf_task_cpu(p); 47 50 48 - scx_bpf_dispatch(p, SCX_DSQ_LOCAL_ON | target, SCX_SLICE_DFL, 0); 51 + scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL_ON | target, SCX_SLICE_DFL, 0); 49 52 bpf_task_release(p); 50 53 } 51 54
+3 -2
tools/testing/selftests/sched_ext/dsp_local_on.c
··· 34 34 /* Just sleeping is fine, plenty of scheduling events happening */ 35 35 sleep(1); 36 36 37 - SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_ERROR)); 38 37 bpf_link__destroy(link); 38 + 39 + SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_UNREG)); 39 40 40 41 return SCX_TEST_PASS; 41 42 } ··· 51 50 struct scx_test dsp_local_on = { 52 51 .name = "dsp_local_on", 53 52 .description = "Verify we can directly dispatch tasks to a local DSQs " 54 - "from osp.dispatch()", 53 + "from ops.dispatch()", 55 54 .setup = setup, 56 55 .run = run, 57 56 .cleanup = cleanup,
+1 -1
tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
··· 31 31 /* Can only call from ops.select_cpu() */ 32 32 scx_bpf_select_cpu_dfl(p, 0, 0, &found); 33 33 34 - scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); 34 + scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); 35 35 } 36 36 37 37 SEC(".struct_ops.link")
+2 -2
tools/testing/selftests/sched_ext/exit.bpf.c
··· 33 33 if (exit_point == EXIT_ENQUEUE) 34 34 EXIT_CLEANLY(); 35 35 36 - scx_bpf_dispatch(p, DSQ_ID, SCX_SLICE_DFL, enq_flags); 36 + scx_bpf_dsq_insert(p, DSQ_ID, SCX_SLICE_DFL, enq_flags); 37 37 } 38 38 39 39 void BPF_STRUCT_OPS(exit_dispatch, s32 cpu, struct task_struct *p) ··· 41 41 if (exit_point == EXIT_DISPATCH) 42 42 EXIT_CLEANLY(); 43 43 44 - scx_bpf_consume(DSQ_ID); 44 + scx_bpf_dsq_move_to_local(DSQ_ID); 45 45 } 46 46 47 47 void BPF_STRUCT_OPS(exit_enable, struct task_struct *p)
+5 -3
tools/testing/selftests/sched_ext/maximal.bpf.c
··· 12 12 13 13 char _license[] SEC("license") = "GPL"; 14 14 15 + #define DSQ_ID 0 16 + 15 17 s32 BPF_STRUCT_OPS(maximal_select_cpu, struct task_struct *p, s32 prev_cpu, 16 18 u64 wake_flags) 17 19 { ··· 22 20 23 21 void BPF_STRUCT_OPS(maximal_enqueue, struct task_struct *p, u64 enq_flags) 24 22 { 25 - scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); 23 + scx_bpf_dsq_insert(p, DSQ_ID, SCX_SLICE_DFL, enq_flags); 26 24 } 27 25 28 26 void BPF_STRUCT_OPS(maximal_dequeue, struct task_struct *p, u64 deq_flags) ··· 30 28 31 29 void BPF_STRUCT_OPS(maximal_dispatch, s32 cpu, struct task_struct *prev) 32 30 { 33 - scx_bpf_consume(SCX_DSQ_GLOBAL); 31 + scx_bpf_dsq_move_to_local(DSQ_ID); 34 32 } 35 33 36 34 void BPF_STRUCT_OPS(maximal_runnable, struct task_struct *p, u64 enq_flags) ··· 125 123 126 124 s32 BPF_STRUCT_OPS_SLEEPABLE(maximal_init) 127 125 { 128 - return 0; 126 + return scx_bpf_create_dsq(DSQ_ID, -1); 129 127 } 130 128 131 129 void BPF_STRUCT_OPS(maximal_exit, struct scx_exit_info *info)
+1 -1
tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
··· 30 30 } 31 31 scx_bpf_put_idle_cpumask(idle_mask); 32 32 33 - scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); 33 + scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags); 34 34 } 35 35 36 36 SEC(".struct_ops.link")
+1 -1
tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
··· 67 67 saw_local = true; 68 68 } 69 69 70 - scx_bpf_dispatch(p, dsq_id, SCX_SLICE_DFL, enq_flags); 70 + scx_bpf_dsq_insert(p, dsq_id, SCX_SLICE_DFL, enq_flags); 71 71 } 72 72 73 73 s32 BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_init_task,
+1 -1
tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
··· 29 29 cpu = prev_cpu; 30 30 31 31 dispatch: 32 - scx_bpf_dispatch(p, dsq_id, SCX_SLICE_DFL, 0); 32 + scx_bpf_dsq_insert(p, dsq_id, SCX_SLICE_DFL, 0); 33 33 return cpu; 34 34 } 35 35
+1 -1
tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
··· 18 18 s32 prev_cpu, u64 wake_flags) 19 19 { 20 20 /* Dispatching to a random DSQ should fail. */ 21 - scx_bpf_dispatch(p, 0xcafef00d, SCX_SLICE_DFL, 0); 21 + scx_bpf_dsq_insert(p, 0xcafef00d, SCX_SLICE_DFL, 0); 22 22 23 23 return prev_cpu; 24 24 }
+2 -2
tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
··· 18 18 s32 prev_cpu, u64 wake_flags) 19 19 { 20 20 /* Dispatching twice in a row is disallowed. */ 21 - scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0); 22 - scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0); 21 + scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0); 22 + scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0); 23 23 24 24 return prev_cpu; 25 25 }
+4 -4
tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
··· 2 2 /* 3 3 * A scheduler that validates that enqueue flags are properly stored and 4 4 * applied at dispatch time when a task is directly dispatched from 5 - * ops.select_cpu(). We validate this by using scx_bpf_dispatch_vtime(), and 6 - * making the test a very basic vtime scheduler. 5 + * ops.select_cpu(). We validate this by using scx_bpf_dsq_insert_vtime(), 6 + * and making the test a very basic vtime scheduler. 7 7 * 8 8 * Copyright (c) 2024 Meta Platforms, Inc. and affiliates. 9 9 * Copyright (c) 2024 David Vernet <dvernet@meta.com> ··· 47 47 cpu = prev_cpu; 48 48 scx_bpf_test_and_clear_cpu_idle(cpu); 49 49 ddsp: 50 - scx_bpf_dispatch_vtime(p, VTIME_DSQ, SCX_SLICE_DFL, task_vtime(p), 0); 50 + scx_bpf_dsq_insert_vtime(p, VTIME_DSQ, SCX_SLICE_DFL, task_vtime(p), 0); 51 51 return cpu; 52 52 } 53 53 54 54 void BPF_STRUCT_OPS(select_cpu_vtime_dispatch, s32 cpu, struct task_struct *p) 55 55 { 56 - if (scx_bpf_consume(VTIME_DSQ)) 56 + if (scx_bpf_dsq_move_to_local(VTIME_DSQ)) 57 57 consumed = true; 58 58 } 59 59
+2 -2
tools/testing/selftests/tc-testing/tc-tests/filters/flow.json
··· 78 78 "setup": [ 79 79 "$TC qdisc add dev $DEV1 ingress" 80 80 ], 81 - "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 protocol ip flow map key dst rshift 0xff", 81 + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 protocol ip flow map key dst rshift 0x1f", 82 82 "expExitCode": "0", 83 83 "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 protocol ip prio 1 flow", 84 - "matchPattern": "filter parent ffff: protocol ip pref 1 flow chain [0-9]+ handle 0x1 map keys dst rshift 255 baseclass", 84 + "matchPattern": "filter parent ffff: protocol ip pref 1 flow chain [0-9]+ handle 0x1 map keys dst rshift 31 baseclass", 85 85 "matchCount": "1", 86 86 "teardown": [ 87 87 "$TC qdisc del dev $DEV1 ingress"
+96 -81
tools/tracing/rtla/src/timerlat_hist.c
··· 282 282 } 283 283 284 284 /* 285 + * format_summary_value - format a line of summary value (min, max or avg) 286 + * of hist data 287 + */ 288 + static void format_summary_value(struct trace_seq *seq, 289 + int count, 290 + unsigned long long val, 291 + bool avg) 292 + { 293 + if (count) 294 + trace_seq_printf(seq, "%9llu ", avg ? val / count : val); 295 + else 296 + trace_seq_printf(seq, "%9c ", '-'); 297 + } 298 + 299 + /* 285 300 * timerlat_print_summary - print the summary of the hist data to the output 286 301 */ 287 302 static void ··· 343 328 if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count) 344 329 continue; 345 330 346 - if (!params->no_irq) { 347 - if (data->hist[cpu].irq_count) 348 - trace_seq_printf(trace->seq, "%9llu ", 349 - data->hist[cpu].min_irq); 350 - else 351 - trace_seq_printf(trace->seq, " - "); 352 - } 331 + if (!params->no_irq) 332 + format_summary_value(trace->seq, 333 + data->hist[cpu].irq_count, 334 + data->hist[cpu].min_irq, 335 + false); 353 336 354 - if (!params->no_thread) { 355 - if (data->hist[cpu].thread_count) 356 - trace_seq_printf(trace->seq, "%9llu ", 357 - data->hist[cpu].min_thread); 358 - else 359 - trace_seq_printf(trace->seq, " - "); 360 - } 337 + if (!params->no_thread) 338 + format_summary_value(trace->seq, 339 + data->hist[cpu].thread_count, 340 + data->hist[cpu].min_thread, 341 + false); 361 342 362 - if (params->user_hist) { 363 - if (data->hist[cpu].user_count) 364 - trace_seq_printf(trace->seq, "%9llu ", 365 - data->hist[cpu].min_user); 366 - else 367 - trace_seq_printf(trace->seq, " - "); 368 - } 343 + if (params->user_hist) 344 + format_summary_value(trace->seq, 345 + data->hist[cpu].user_count, 346 + data->hist[cpu].min_user, 347 + false); 369 348 } 370 349 trace_seq_printf(trace->seq, "\n"); 371 350 ··· 373 364 if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count) 374 365 continue; 375 366 376 - if (!params->no_irq) { 377 - if (data->hist[cpu].irq_count) 378 - 
trace_seq_printf(trace->seq, "%9llu ", 379 - data->hist[cpu].sum_irq / data->hist[cpu].irq_count); 380 - else 381 - trace_seq_printf(trace->seq, " - "); 382 - } 367 + if (!params->no_irq) 368 + format_summary_value(trace->seq, 369 + data->hist[cpu].irq_count, 370 + data->hist[cpu].sum_irq, 371 + true); 383 372 384 - if (!params->no_thread) { 385 - if (data->hist[cpu].thread_count) 386 - trace_seq_printf(trace->seq, "%9llu ", 387 - data->hist[cpu].sum_thread / data->hist[cpu].thread_count); 388 - else 389 - trace_seq_printf(trace->seq, " - "); 390 - } 373 + if (!params->no_thread) 374 + format_summary_value(trace->seq, 375 + data->hist[cpu].thread_count, 376 + data->hist[cpu].sum_thread, 377 + true); 391 378 392 - if (params->user_hist) { 393 - if (data->hist[cpu].user_count) 394 - trace_seq_printf(trace->seq, "%9llu ", 395 - data->hist[cpu].sum_user / data->hist[cpu].user_count); 396 - else 397 - trace_seq_printf(trace->seq, " - "); 398 - } 379 + if (params->user_hist) 380 + format_summary_value(trace->seq, 381 + data->hist[cpu].user_count, 382 + data->hist[cpu].sum_user, 383 + true); 399 384 } 400 385 trace_seq_printf(trace->seq, "\n"); 401 386 ··· 403 400 if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count) 404 401 continue; 405 402 406 - if (!params->no_irq) { 407 - if (data->hist[cpu].irq_count) 408 - trace_seq_printf(trace->seq, "%9llu ", 409 - data->hist[cpu].max_irq); 410 - else 411 - trace_seq_printf(trace->seq, " - "); 412 - } 403 + if (!params->no_irq) 404 + format_summary_value(trace->seq, 405 + data->hist[cpu].irq_count, 406 + data->hist[cpu].max_irq, 407 + false); 413 408 414 - if (!params->no_thread) { 415 - if (data->hist[cpu].thread_count) 416 - trace_seq_printf(trace->seq, "%9llu ", 417 - data->hist[cpu].max_thread); 418 - else 419 - trace_seq_printf(trace->seq, " - "); 420 - } 409 + if (!params->no_thread) 410 + format_summary_value(trace->seq, 411 + data->hist[cpu].thread_count, 412 + data->hist[cpu].max_thread, 413 + false); 421 414 
422 - if (params->user_hist) { 423 - if (data->hist[cpu].user_count) 424 - trace_seq_printf(trace->seq, "%9llu ", 425 - data->hist[cpu].max_user); 426 - else 427 - trace_seq_printf(trace->seq, " - "); 428 - } 415 + if (params->user_hist) 416 + format_summary_value(trace->seq, 417 + data->hist[cpu].user_count, 418 + data->hist[cpu].max_user, 419 + false); 429 420 } 430 421 trace_seq_printf(trace->seq, "\n"); 431 422 trace_seq_do_printf(trace->seq); ··· 503 506 trace_seq_printf(trace->seq, "min: "); 504 507 505 508 if (!params->no_irq) 506 - trace_seq_printf(trace->seq, "%9llu ", 507 - sum.min_irq); 509 + format_summary_value(trace->seq, 510 + sum.irq_count, 511 + sum.min_irq, 512 + false); 508 513 509 514 if (!params->no_thread) 510 - trace_seq_printf(trace->seq, "%9llu ", 511 - sum.min_thread); 515 + format_summary_value(trace->seq, 516 + sum.thread_count, 517 + sum.min_thread, 518 + false); 512 519 513 520 if (params->user_hist) 514 - trace_seq_printf(trace->seq, "%9llu ", 515 - sum.min_user); 521 + format_summary_value(trace->seq, 522 + sum.user_count, 523 + sum.min_user, 524 + false); 516 525 517 526 trace_seq_printf(trace->seq, "\n"); 518 527 ··· 526 523 trace_seq_printf(trace->seq, "avg: "); 527 524 528 525 if (!params->no_irq) 529 - trace_seq_printf(trace->seq, "%9llu ", 530 - sum.sum_irq / sum.irq_count); 526 + format_summary_value(trace->seq, 527 + sum.irq_count, 528 + sum.sum_irq, 529 + true); 531 530 532 531 if (!params->no_thread) 533 - trace_seq_printf(trace->seq, "%9llu ", 534 - sum.sum_thread / sum.thread_count); 532 + format_summary_value(trace->seq, 533 + sum.thread_count, 534 + sum.sum_thread, 535 + true); 535 536 536 537 if (params->user_hist) 537 - trace_seq_printf(trace->seq, "%9llu ", 538 - sum.sum_user / sum.user_count); 538 + format_summary_value(trace->seq, 539 + sum.user_count, 540 + sum.sum_user, 541 + true); 539 542 540 543 trace_seq_printf(trace->seq, "\n"); 541 544 ··· 549 540 trace_seq_printf(trace->seq, "max: "); 550 541 551 542 if 
(!params->no_irq) 552 - trace_seq_printf(trace->seq, "%9llu ", 553 - sum.max_irq); 543 + format_summary_value(trace->seq, 544 + sum.irq_count, 545 + sum.max_irq, 546 + false); 554 547 555 548 if (!params->no_thread) 556 - trace_seq_printf(trace->seq, "%9llu ", 557 - sum.max_thread); 549 + format_summary_value(trace->seq, 550 + sum.thread_count, 551 + sum.max_thread, 552 + false); 558 553 559 554 if (params->user_hist) 560 - trace_seq_printf(trace->seq, "%9llu ", 561 - sum.max_user); 555 + format_summary_value(trace->seq, 556 + sum.user_count, 557 + sum.max_user, 558 + false); 562 559 563 560 trace_seq_printf(trace->seq, "\n"); 564 561 trace_seq_do_printf(trace->seq);
+1 -1
usr/include/Makefile
··· 78 78 cmd_hdrtest = \ 79 79 $(CC) $(c_flags) -fsyntax-only -x c /dev/null \ 80 80 $(if $(filter-out $(no-header-test), $*.h), -include $< -include $<); \ 81 - $(PERL) $(src)/headers_check.pl $(obj) $(SRCARCH) $<; \ 81 + $(PERL) $(src)/headers_check.pl $(obj) $<; \ 82 82 touch $@ 83 83 84 84 $(obj)/%.hdrtest: $(obj)/%.h FORCE
+2 -7
usr/include/headers_check.pl
··· 3 3 # 4 4 # headers_check.pl execute a number of trivial consistency checks 5 5 # 6 - # Usage: headers_check.pl dir arch [files...] 6 + # Usage: headers_check.pl dir [files...] 7 7 # dir: dir to look for included files 8 - # arch: architecture 9 8 # files: list of files to check 10 9 # 11 10 # The script reads the supplied files line by line and: ··· 22 23 use strict; 23 24 use File::Basename; 24 25 25 - my ($dir, $arch, @files) = @ARGV; 26 + my ($dir, @files) = @ARGV; 26 27 27 28 my $ret = 0; 28 29 my $line; ··· 53 54 my $inc = $1; 54 55 my $found; 55 56 $found = stat($dir . "/" . $inc); 56 - if (!$found) { 57 - $inc =~ s#asm/#asm-$arch/#; 58 - $found = stat($dir . "/" . $inc); 59 - } 60 57 if (!$found) { 61 58 printf STDERR "$filename:$lineno: included file '$inc' is not exported\n"; 62 59 $ret = 1;