Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge patch series "Add OP-TEE based RPMB driver for UFS devices"

Bean Huo <beanhuo@iokpp.de> says:

This patch series introduces OP-TEE based RPMB (Replay Protected
Memory Block) support for UFS devices, extending the kernel-level
secure storage capabilities that are currently available for eMMC
devices.

Previously, OP-TEE required a userspace supplicant to access RPMB
partitions, which created complex dependencies and reliability issues,
especially during early boot scenarios. Recent work by Linaro has
moved core supplicant functionality directly into the Linux kernel for
eMMC devices, eliminating userspace dependencies and enabling
immediate secure storage access. This series extends the same approach
to UFS devices, which are used in enterprise and mobile applications
that require secure storage capabilities.

Benefits:

- Eliminates dependency on userspace supplicant for UFS RPMB access

- Enables early boot secure storage access (e.g., fTPM, secure UEFI
variables)

- Provides kernel-level RPMB access as soon as the UFS driver is
initialized

- Removes complex initramfs dependencies and boot ordering
requirements

- Ensures reliable and deterministic secure storage operations

- Supports both built-in and modular fTPM configurations

Prerequisites:
--------------

This patch series depends on commit 7e8242405b94 ("rpmb: move struct
rpmb_frame to common header"), which was merged into mainline in
v6.18-rc2.
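If you are applying this series to another tree, one quick way to check that
the prerequisite commit is already present (a sketch; it assumes a local
clone in which the commit would be reachable from the checked-out branch) is:

```shell
# Succeeds (and prints "prerequisite present") only if 7e8242405b94
# is an ancestor of the currently checked-out commit.
git merge-base --is-ancestor 7e8242405b94 HEAD && echo "prerequisite present"
```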

Link: https://patch.msgid.link/20251107230518.4060231-1-beanhuo@iokpp.de
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

+4150 -2229
+1
.mailmap
··· 227 227 Dmitry Safonov <0x7f454c46@gmail.com> <d.safonov@partner.samsung.com> 228 228 Dmitry Safonov <0x7f454c46@gmail.com> <dsafonov@virtuozzo.com> 229 229 Domen Puncer <domen@coderock.org> 230 + Dong Aisheng <aisheng.dong@nxp.com> <b29396@freescale.com> 230 231 Douglas Gilbert <dougg@torque.net> 231 232 Drew Fustini <fustini@kernel.org> <drew@pdp7.com> 232 233 <duje@dujemihanovic.xyz> <duje.mihanovic@skole.hr>
+36
Documentation/devicetree/bindings/i2c/apm,xgene-slimpro-i2c.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/i2c/apm,xgene-slimpro-i2c.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: APM X-Gene SLIMpro Mailbox I2C 8 + 9 + maintainers: 10 + - Khuong Dinh <khuong@os.amperecomputing.com> 11 + 12 + description: 13 + An I2C controller accessed over the "SLIMpro" mailbox. 14 + 15 + allOf: 16 + - $ref: /schemas/i2c/i2c-controller.yaml# 17 + 18 + properties: 19 + compatible: 20 + const: apm,xgene-slimpro-i2c 21 + 22 + mboxes: 23 + maxItems: 1 24 + 25 + required: 26 + - compatible 27 + - mboxes 28 + 29 + unevaluatedProperties: false 30 + 31 + examples: 32 + - | 33 + i2c { 34 + compatible = "apm,xgene-slimpro-i2c"; 35 + mboxes = <&mailbox 0>; 36 + };
-15
Documentation/devicetree/bindings/i2c/i2c-xgene-slimpro.txt
··· 1 - APM X-Gene SLIMpro Mailbox I2C Driver 2 - 3 - An I2C controller accessed over the "SLIMpro" mailbox. 4 - 5 - Required properties : 6 - 7 - - compatible : should be "apm,xgene-slimpro-i2c" 8 - - mboxes : use the label reference for the mailbox as the first parameter. 9 - The second parameter is the channel number. 10 - 11 - Example : 12 - i2cslimpro { 13 - compatible = "apm,xgene-slimpro-i2c"; 14 - mboxes = <&mailbox 0>; 15 - };
+1
Documentation/devicetree/bindings/sound/fsl-asoc-card.yaml
··· 79 79 - fsl,imx-audio-nau8822 80 80 - fsl,imx-audio-sgtl5000 81 81 - fsl,imx-audio-si476x 82 + - fsl,imx-audio-tlv320 82 83 - fsl,imx-audio-tlv320aic31xx 83 84 - fsl,imx-audio-tlv320aic32x4 84 85 - fsl,imx-audio-wm8524
+1
Documentation/devicetree/bindings/sound/qcom,sm8250.yaml
··· 33 33 - qcom,apq8096-sndcard 34 34 - qcom,glymur-sndcard 35 35 - qcom,qcm6490-idp-sndcard 36 + - qcom,qcs615-sndcard 36 37 - qcom,qcs6490-rb3gen2-sndcard 37 38 - qcom,qcs8275-sndcard 38 39 - qcom,qcs9075-sndcard
+37 -6
Documentation/devicetree/bindings/sound/ti,tas2781.yaml
··· 24 24 Instruments Smart Amp speaker protection algorithm. The 25 25 integrated speaker voltage and current sense provides for real time 26 26 monitoring of loudspeaker behavior. 27 - The TAS5825/TAS5827 is a stereo, digital input Class-D audio 28 - amplifier optimized for efficiently driving high peak power into 29 - small loudspeakers. An integrated on-chip DSP supports Texas 30 - Instruments Smart Amp speaker protection algorithm. 27 + The TAS5802/TAS5815/TAS5825/TAS5827/TAS5828 is a stereo, digital input 28 + Class-D audio amplifier optimized for efficiently driving high peak 29 + power into small loudspeakers. An integrated on-chip DSP supports 30 + Texas Instruments Smart Amp speaker protection algorithm. 31 31 32 32 Specifications about the audio amplifier can be found at: 33 33 https://www.ti.com/lit/gpn/tas2120 ··· 35 35 https://www.ti.com/lit/gpn/tas2563 36 36 https://www.ti.com/lit/gpn/tas2572 37 37 https://www.ti.com/lit/gpn/tas2781 38 + https://www.ti.com/lit/gpn/tas5815 38 39 https://www.ti.com/lit/gpn/tas5825m 39 40 https://www.ti.com/lit/gpn/tas5827 41 + https://www.ti.com/lit/gpn/tas5828m 40 42 41 43 properties: 42 44 compatible: ··· 67 65 Protection and Audio Processing, 16/20/24/32bit stereo I2S or 68 66 multichannel TDM. 69 67 68 + ti,tas5802: 22-W, Inductor-Less, Digital Input, Closed-Loop Class-D 69 + Audio Amplifier with 96-Khz Extended Processing and Low Idle Power 70 + Dissipation. 71 + 72 + ti,tas5815: 30-W, Digital Input, Stereo, Closed-loop Class-D Audio 73 + Amplifier with 96 kHz Enhanced Processing 74 + 70 75 ti,tas5825: 38-W Stereo, Inductor-Less, Digital Input, Closed-Loop 4.5V 71 76 to 26.4V Class-D Audio Amplifier with 192-kHz Extended Audio Processing. 
72 77 73 - ti,tas5827: 47-W Stereo, Digital Input, High Efficiency Closed-Loop Class-D 74 - Amplifier with Class-H Algorithm 78 + ti,tas5827: 47-W Stereo, Digital Input, High Efficiency Closed-Loop 79 + Class-D Amplifier with Class-H Algorithm 80 + 81 + ti,tas5828: 50-W Stereo, Digital Input, High Efficiency Closed-Loop 82 + Class-D Amplifier with Hybrid-Pro Algorithm 75 83 oneOf: 76 84 - items: 77 85 - enum: ··· 92 80 - ti,tas2563 93 81 - ti,tas2570 94 82 - ti,tas2572 83 + - ti,tas5802 84 + - ti,tas5815 95 85 - ti,tas5825 96 86 - ti,tas5827 87 + - ti,tas5828 97 88 - const: ti,tas2781 98 89 - enum: 99 90 - ti,tas2781 ··· 197 182 compatible: 198 183 contains: 199 184 enum: 185 + - ti,tas5802 186 + - ti,tas5815 187 + then: 188 + properties: 189 + reg: 190 + maxItems: 4 191 + items: 192 + minimum: 0x54 193 + maximum: 0x57 194 + 195 + - if: 196 + properties: 197 + compatible: 198 + contains: 199 + enum: 200 200 - ti,tas5827 201 + - ti,tas5828 201 202 then: 202 203 properties: 203 204 reg:
+31 -30
Documentation/filesystems/ext4/directory.rst
··· 183 183 - det_checksum 184 184 - Directory leaf block checksum. 185 185 186 - The leaf directory block checksum is calculated against the FS UUID, the 187 - directory's inode number, the directory's inode generation number, and 188 - the entire directory entry block up to (but not including) the fake 189 - directory entry. 186 + The leaf directory block checksum is calculated against the FS UUID (or 187 + the checksum seed, if that feature is enabled for the fs), the directory's 188 + inode number, the directory's inode generation number, and the entire 189 + directory entry block up to (but not including) the fake directory entry. 190 190 191 191 Hash Tree Directories 192 192 ~~~~~~~~~~~~~~~~~~~~~ ··· 196 196 balanced tree keyed off a hash of the directory entry name. If the 197 197 EXT4_INDEX_FL (0x1000) flag is set in the inode, this directory uses a 198 198 hashed btree (htree) to organize and find directory entries. For 199 - backwards read-only compatibility with ext2, this tree is actually 200 - hidden inside the directory file, masquerading as “empty” directory data 201 - blocks! It was stated previously that the end of the linear directory 202 - entry table was signified with an entry pointing to inode 0; this is 203 - (ab)used to fool the old linear-scan algorithm into thinking that the 204 - rest of the directory block is empty so that it moves on. 199 + backwards read-only compatibility with ext2, interior tree nodes are actually 200 + hidden inside the directory file, masquerading as “empty” directory entries 201 + spanning the whole block. It was stated previously that directory entries 202 + with the inode set to 0 are treated as unused entries; this is (ab)used to 203 + fool the old linear-scan algorithm into skipping over those blocks containing 204 + the interior tree node data. 205 205 206 206 The root of the tree always lives in the first data block of the 207 207 directory. By ext2 custom, the '.' and '..' 
entries must appear at the ··· 209 209 ``struct ext4_dir_entry_2`` s and not stored in the tree. The rest of 210 210 the root node contains metadata about the tree and finally a hash->block 211 211 map to find nodes that are lower in the htree. If 212 - ``dx_root.info.indirect_levels`` is non-zero then the htree has two 213 - levels; the data block pointed to by the root node's map is an interior 214 - node, which is indexed by a minor hash. Interior nodes in this tree 215 - contains a zeroed out ``struct ext4_dir_entry_2`` followed by a 216 - minor_hash->block map to find leafe nodes. Leaf nodes contain a linear 217 - array of all ``struct ext4_dir_entry_2``; all of these entries 218 - (presumably) hash to the same value. If there is an overflow, the 219 - entries simply overflow into the next leaf node, and the 220 - least-significant bit of the hash (in the interior node map) that gets 221 - us to this next leaf node is set. 212 + ``dx_root.info.indirect_levels`` is non-zero then the htree has that many 213 + levels and the blocks pointed to by the root node's map are interior nodes. 214 + These interior nodes have a zeroed out ``struct ext4_dir_entry_2`` followed by 215 + a hash->block map to find nodes of the next level. Leaf nodes look like 216 + classic linear directory blocks, but all of its entries have a hash value 217 + equal or greater than the indicated hash of the parent node. 222 218 223 - To traverse the directory as a htree, the code calculates the hash of 224 - the desired file name and uses it to find the corresponding block 225 - number. If the tree is flat, the block is a linear array of directory 226 - entries that can be searched; otherwise, the minor hash of the file name 227 - is computed and used against this second block to find the corresponding 228 - third block number. That third block number will be a linear array of 229 - directory entries. 
219 + The actual hash value for an entry name is only 31 bits, the least-significant 220 + bit is set to 0. However, if there is a hash collision between directory 221 + entries, the least-significant bit may get set to 1 on interior nodes in the 222 + case where these two (or more) hash-colliding entries do not fit into one leaf 223 + node and must be split across multiple nodes. 224 + 225 + To look up a name in such a htree, the code calculates the hash of the desired 226 + file name and uses it to find the leaf node with the range of hash values the 227 + calculated hash falls into (in other words, a lookup works basically the same 228 + as it would in a B-Tree keyed by the hash value), and possibly also scanning 229 + the leaf nodes that follow (in tree order) in case of hash collisions. 230 230 231 231 To traverse the directory as a linear array (such as the old code does), 232 232 the code simply reads every data block in the directory. The blocks used ··· 319 319 * - 0x24 320 320 - __le32 321 321 - block 322 - - The block number (within the directory file) that goes with hash=0. 322 + - The block number (within the directory file) that lead to the left-most 323 + leaf node, i.e. the leaf containing entries with the lowest hash values. 323 324 * - 0x28 324 325 - struct dx_entry 325 326 - entries[0] ··· 443 442 * - 0x0 444 443 - u32 445 444 - dt_reserved 446 - - Zero. 445 + - Unused (but still part of the checksum curiously). 447 446 * - 0x4 448 447 - __le32 449 448 - dt_checksum ··· 451 450 452 451 The checksum is calculated against the FS UUID, the htree index header 453 452 (dx_root or dx_node), all of the htree indices (dx_entry) that are in 454 - use, and the tail block (dx_tail). 453 + use, and the tail block (dx_tail) with the dt_checksum initially set to 0.
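The descent described in the directory documentation above (a B-tree keyed by
the name hash, ending at the leaf whose hash range covers the lookup hash) can
be sketched with a toy in-memory model. This is purely illustrative: it uses
plain Python structures, not ext4's on-disk layout or its actual hash function.

```python
# Toy model of an htree lookup. Interior nodes are sorted lists of
# (start_hash, child_node) pairs, as in the dx_root/dx_node hash->block maps;
# leaf nodes are {name: hash} dicts standing in for linear directory blocks.
def htree_lookup(node, want_hash):
    """Return the names in the covering leaf whose hash equals want_hash."""
    while isinstance(node, list):
        # Descend to the right-most child whose start hash is <= want_hash,
        # exactly like a B-tree search keyed by the hash value.
        child = node[0][1]
        for start, c in node:
            if start <= want_hash:
                child = c
        node = child
    return [name for name, h in node.items() if h == want_hash]
```

Hash collisions would additionally require scanning the following leaf (in
tree order), which this sketch omits.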
+67 -4
Documentation/networking/can.rst
··· 1398 1398 Additionally CAN FD capable CAN controllers support up to 64 bytes of 1399 1399 payload. The representation of this length in can_frame.len and 1400 1400 canfd_frame.len for userspace applications and inside the Linux network 1401 - layer is a plain value from 0 .. 64 instead of the CAN 'data length code'. 1402 - The data length code was a 1:1 mapping to the payload length in the Classical 1403 - CAN frames anyway. The payload length to the bus-relevant DLC mapping is 1404 - only performed inside the CAN drivers, preferably with the helper 1401 + layer is a plain value from 0 .. 64 instead of the Classical CAN length 1402 + which ranges from 0 to 8. The payload length to the bus-relevant DLC mapping 1403 + is only performed inside the CAN drivers, preferably with the helper 1405 1404 functions can_fd_dlc2len() and can_fd_len2dlc(). 1406 1405 1407 1406 The CAN netdevice driver capabilities can be distinguished by the network ··· 1462 1463 Example when 'fd-non-iso on' is added on this switchable CAN FD adapter:: 1463 1464 1464 1465 can <FD,FD-NON-ISO> state ERROR-ACTIVE (berr-counter tx 0 rx 0) restart-ms 0 1466 + 1467 + 1468 + Transmitter Delay Compensation 1469 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1470 + 1471 + At high bit rates, the propagation delay from the TX pin to the RX pin of 1472 + the transceiver might become greater than the actual bit time causing 1473 + measurement errors: the RX pin would still be measuring the previous bit. 1474 + 1475 + The Transmitter Delay Compensation (thereafter, TDC) resolves this problem 1476 + by introducing a Secondary Sample Point (SSP) equal to the distance, in 1477 + minimum time quantum, from the start of the bit time on the TX pin to the 1478 + actual measurement on the RX pin. The SSP is calculated as the sum of two 1479 + configurable values: the TDC Value (TDCV) and the TDC offset (TDCO). 
1480 + 1481 + TDC, if supported by the device, can be configured together with CAN-FD 1482 + using the ip tool's "tdc-mode" argument as follow: 1483 + 1484 + **omitted** 1485 + When no "tdc-mode" option is provided, the kernel will automatically 1486 + decide whether TDC should be turned on, in which case it will 1487 + calculate a default TDCO and use the TDCV as measured by the 1488 + device. This is the recommended method to use TDC. 1489 + 1490 + **"tdc-mode off"** 1491 + TDC is explicitly disabled. 1492 + 1493 + **"tdc-mode auto"** 1494 + The user must provide the "tdco" argument. The TDCV will be 1495 + automatically calculated by the device. This option is only 1496 + available if the device supports the TDC-AUTO CAN controller mode. 1497 + 1498 + **"tdc-mode manual"** 1499 + The user must provide both the "tdco" and "tdcv" arguments. This 1500 + option is only available if the device supports the TDC-MANUAL CAN 1501 + controller mode. 1502 + 1503 + Note that some devices may offer an additional parameter: "tdcf" (TDC Filter 1504 + window). If supported by your device, this can be added as an optional 1505 + argument to either "tdc-mode auto" or "tdc-mode manual". 
1506 + 1507 + Example configuring a 500 kbit/s arbitration bitrate, a 5 Mbit/s data 1508 + bitrate, a TDCO of 15 minimum time quantum and a TDCV automatically measured 1509 + by the device:: 1510 + 1511 + $ ip link set can0 up type can bitrate 500000 \ 1512 + fd on dbitrate 4000000 \ 1513 + tdc-mode auto tdco 15 1514 + $ ip -details link show can0 1515 + 5: can0: <NOARP,UP,LOWER_UP,ECHO> mtu 72 qdisc pfifo_fast state UP \ 1516 + mode DEFAULT group default qlen 10 1517 + link/can promiscuity 0 allmulti 0 minmtu 72 maxmtu 72 1518 + can <FD,TDC-AUTO> state ERROR-ACTIVE restart-ms 0 1519 + bitrate 500000 sample-point 0.875 1520 + tq 12 prop-seg 69 phase-seg1 70 phase-seg2 20 sjw 10 brp 1 1521 + ES582.1/ES584.1: tseg1 2..256 tseg2 2..128 sjw 1..128 brp 1..512 \ 1522 + brp_inc 1 1523 + dbitrate 4000000 dsample-point 0.750 1524 + dtq 12 dprop-seg 7 dphase-seg1 7 dphase-seg2 5 dsjw 2 dbrp 1 1525 + tdco 15 tdcf 0 1526 + ES582.1/ES584.1: dtseg1 2..32 dtseg2 1..16 dsjw 1..8 dbrp 1..32 \ 1527 + dbrp_inc 1 1528 + tdco 0..127 tdcf 0..127 1529 + clock 80000000 1465 1530 1466 1531 1467 1532 Supported CAN Hardware
+3
Documentation/networking/seg6-sysctl.rst
··· 25 25 26 26 Default is 0. 27 27 28 + /proc/sys/net/ipv6/seg6_* variables: 29 + ==================================== 30 + 28 31 seg6_flowlabel - INTEGER 29 32 Controls the behaviour of computing the flowlabel of outer 30 33 IPv6 header in case of SR T.encaps
+75
Documentation/rust/coding-guidelines.rst
··· 38 38 individual files, and does not require a kernel configuration. Sometimes it may 39 39 even work with broken code. 40 40 41 + Imports 42 + ~~~~~~~ 43 + 44 + ``rustfmt``, by default, formats imports in a way that is prone to conflicts 45 + while merging and rebasing, since in some cases it condenses several items into 46 + the same line. For instance: 47 + 48 + .. code-block:: rust 49 + 50 + // Do not use this style. 51 + use crate::{ 52 + example1, 53 + example2::{example3, example4, example5}, 54 + example6, example7, 55 + example8::example9, 56 + }; 57 + 58 + Instead, the kernel uses a vertical layout that looks like this: 59 + 60 + .. code-block:: rust 61 + 62 + use crate::{ 63 + example1, 64 + example2::{ 65 + example3, 66 + example4, 67 + example5, // 68 + }, 69 + example6, 70 + example7, 71 + example8::example9, // 72 + }; 73 + 74 + That is, each item goes into its own line, and braces are used as soon as there 75 + is more than one item in a list. 76 + 77 + The trailing empty comment allows to preserve this formatting. Not only that, 78 + ``rustfmt`` will actually reformat imports vertically when the empty comment is 79 + added. That is, it is possible to easily reformat the original example into the 80 + expected style by running ``rustfmt`` on an input like: 81 + 82 + .. code-block:: rust 83 + 84 + // Do not use this style. 85 + use crate::{ 86 + example1, 87 + example2::{example3, example4, example5, // 88 + }, 89 + example6, example7, 90 + example8::example9, // 91 + }; 92 + 93 + The trailing empty comment works for nested imports, as shown above, as well as 94 + for single item imports -- this can be useful to minimize diffs within patch 95 + series: 96 + 97 + .. code-block:: rust 98 + 99 + use crate::{ 100 + example1, // 101 + }; 102 + 103 + The trailing empty comment works in any of the lines within the braces, but it 104 + is preferred to keep it in the last item, since it is reminiscent of the 105 + trailing comma in other formatters. 
Sometimes it may be simpler to avoid moving 106 + the comment several times within a patch series due to changes in the list. 107 + 108 + There may be cases where exceptions may need to be made, i.e. none of this is 109 + a hard rule. There is also code that is not migrated to this style yet, but 110 + please do not introduce code in other styles. 111 + 112 + Eventually, the goal is to get ``rustfmt`` to support this formatting style (or 113 + a similar one) automatically in a stable release without requiring the trailing 114 + empty comment. Thus, at some point, the goal is to remove those comments. 115 + 41 116 42 117 Comments 43 118 --------
+17 -3
Documentation/virt/kvm/api.rst
··· 1229 1229 KVM_SET_VCPU_EVENTS or otherwise) because such an exception is always delivered 1230 1230 directly to the virtual CPU). 1231 1231 1232 + Calling this ioctl on a vCPU that hasn't been initialized will return 1233 + -ENOEXEC. 1234 + 1232 1235 :: 1233 1236 1234 1237 struct kvm_vcpu_events { ··· 1312 1309 1313 1310 See KVM_GET_VCPU_EVENTS for the data structure. 1314 1311 1312 + Calling this ioctl on a vCPU that hasn't been initialized will return 1313 + -ENOEXEC. 1315 1314 1316 1315 4.33 KVM_GET_DEBUGREGS 1317 1316 ---------------------- ··· 6437 6432 guest_memfd range is not allowed (any number of memory regions can be bound to 6438 6433 a single guest_memfd file, but the bound ranges must not overlap). 6439 6434 6440 - When the capability KVM_CAP_GUEST_MEMFD_MMAP is supported, the 'flags' field 6441 - supports GUEST_MEMFD_FLAG_MMAP. Setting this flag on guest_memfd creation 6442 - enables mmap() and faulting of guest_memfd memory to host userspace. 6435 + The capability KVM_CAP_GUEST_MEMFD_FLAGS enumerates the `flags` that can be 6436 + specified via KVM_CREATE_GUEST_MEMFD. Currently defined flags: 6437 + 6438 + ============================ ================================================ 6439 + GUEST_MEMFD_FLAG_MMAP Enable using mmap() on the guest_memfd file 6440 + descriptor. 6441 + GUEST_MEMFD_FLAG_INIT_SHARED Make all memory in the file shared during 6442 + KVM_CREATE_GUEST_MEMFD (memory files created 6443 + without INIT_SHARED will be marked private). 6444 + Shared memory can be faulted into host userspace 6445 + page tables. Private memory cannot. 6446 + ============================ ================================================ 6443 6447 6444 6448 When the KVM MMU performs a PFN lookup to service a guest fault and the backing 6445 6449 guest_memfd has the GUEST_MEMFD_FLAG_MMAP set, then the fault will always be
+2 -1
Documentation/virt/kvm/devices/arm-vgic-v3.rst
··· 13 13 to inject interrupts to the VGIC instead of directly to CPUs. It is not 14 14 possible to create both a GICv3 and GICv2 on the same VM. 15 15 16 - Creating a guest GICv3 device requires a host GICv3 as well. 16 + Creating a guest GICv3 device requires a host GICv3 host, or a GICv5 host with 17 + support for FEAT_GCIE_LEGACY. 17 18 18 19 19 20 Groups:
+11
MAINTAINERS
··· 4804 4804 4805 4805 BROADCOM B53/SF2 ETHERNET SWITCH DRIVER 4806 4806 M: Florian Fainelli <florian.fainelli@broadcom.com> 4807 + M: Jonas Gorski <jonas.gorski@gmail.com> 4807 4808 L: netdev@vger.kernel.org 4808 4809 L: openwrt-devel@lists.openwrt.org (subscribers-only) 4809 4810 S: Supported ··· 18013 18012 X: net/rfkill/ 18014 18013 X: net/wireless/ 18015 18014 X: tools/testing/selftests/net/can/ 18015 + 18016 + NETWORKING [IOAM] 18017 + M: Justin Iurman <justin.iurman@uliege.be> 18018 + S: Maintained 18019 + F: Documentation/networking/ioam6* 18020 + F: include/linux/ioam6* 18021 + F: include/net/ioam6* 18022 + F: include/uapi/linux/ioam6* 18023 + F: net/ipv6/ioam6* 18024 + F: tools/testing/selftests/net/ioam6* 18016 18025 18017 18026 NETWORKING [IPSEC] 18018 18027 M: Steffen Klassert <steffen.klassert@secunet.com>
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 18 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc1 5 + EXTRAVERSION = -rc2 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+1
arch/Kconfig
··· 965 965 def_bool y 966 966 depends on HAVE_CFI_ICALL_NORMALIZE_INTEGERS 967 967 depends on RUSTC_VERSION >= 107900 968 + depends on ARM64 || X86_64 968 969 # With GCOV/KASAN we need this fix: https://github.com/rust-lang/rust/pull/129373 969 970 depends on (RUSTC_LLVM_VERSION >= 190103 && RUSTC_VERSION >= 108200) || \ 970 971 (!GCOV_KERNEL && !KASAN_GENERIC && !KASAN_SW_TAGS)
+1 -1
arch/arc/configs/axs101_defconfig
··· 88 88 CONFIG_MMC_SDHCI_PLTFM=y 89 89 CONFIG_MMC_DW=y 90 90 # CONFIG_IOMMU_SUPPORT is not set 91 - CONFIG_EXT3_FS=y 91 + CONFIG_EXT4_FS=y 92 92 CONFIG_MSDOS_FS=y 93 93 CONFIG_VFAT_FS=y 94 94 CONFIG_NTFS_FS=y
+1 -1
arch/arc/configs/axs103_defconfig
··· 86 86 CONFIG_MMC_SDHCI_PLTFM=y 87 87 CONFIG_MMC_DW=y 88 88 # CONFIG_IOMMU_SUPPORT is not set 89 - CONFIG_EXT3_FS=y 89 + CONFIG_EXT4_FS=y 90 90 CONFIG_MSDOS_FS=y 91 91 CONFIG_VFAT_FS=y 92 92 CONFIG_NTFS_FS=y
+1 -1
arch/arc/configs/axs103_smp_defconfig
··· 88 88 CONFIG_MMC_SDHCI_PLTFM=y 89 89 CONFIG_MMC_DW=y 90 90 # CONFIG_IOMMU_SUPPORT is not set 91 - CONFIG_EXT3_FS=y 91 + CONFIG_EXT4_FS=y 92 92 CONFIG_MSDOS_FS=y 93 93 CONFIG_VFAT_FS=y 94 94 CONFIG_NTFS_FS=y
+1 -1
arch/arc/configs/hsdk_defconfig
··· 77 77 CONFIG_DW_AXI_DMAC=y 78 78 CONFIG_IIO=y 79 79 CONFIG_TI_ADC108S102=y 80 - CONFIG_EXT3_FS=y 80 + CONFIG_EXT4_FS=y 81 81 CONFIG_VFAT_FS=y 82 82 CONFIG_TMPFS=y 83 83 CONFIG_NFS_FS=y
+1 -1
arch/arc/configs/vdk_hs38_defconfig
··· 74 74 CONFIG_USB_STORAGE=y 75 75 CONFIG_USB_SERIAL=y 76 76 # CONFIG_IOMMU_SUPPORT is not set 77 - CONFIG_EXT3_FS=y 77 + CONFIG_EXT4_FS=y 78 78 CONFIG_EXT4_FS=y 79 79 CONFIG_MSDOS_FS=y 80 80 CONFIG_VFAT_FS=y
+1 -1
arch/arc/configs/vdk_hs38_smp_defconfig
··· 81 81 CONFIG_UIO=y 82 82 CONFIG_UIO_PDRV_GENIRQ=y 83 83 # CONFIG_IOMMU_SUPPORT is not set 84 - CONFIG_EXT3_FS=y 84 + CONFIG_EXT4_FS=y 85 85 CONFIG_MSDOS_FS=y 86 86 CONFIG_VFAT_FS=y 87 87 CONFIG_NTFS_FS=y
+1 -2
arch/arm/configs/axm55xx_defconfig
··· 194 194 CONFIG_PL320_MBOX=y 195 195 # CONFIG_IOMMU_SUPPORT is not set 196 196 CONFIG_EXT2_FS=y 197 - CONFIG_EXT3_FS=y 198 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 197 + CONFIG_EXT4_FS=y 199 198 CONFIG_EXT4_FS=y 200 199 CONFIG_AUTOFS_FS=y 201 200 CONFIG_FUSE_FS=y
+2 -2
arch/arm/configs/bcm2835_defconfig
··· 154 154 CONFIG_EXT2_FS=y 155 155 CONFIG_EXT2_FS_XATTR=y 156 156 CONFIG_EXT2_FS_POSIX_ACL=y 157 - CONFIG_EXT3_FS=y 158 - CONFIG_EXT3_FS_POSIX_ACL=y 157 + CONFIG_EXT4_FS=y 158 + CONFIG_EXT4_FS_POSIX_ACL=y 159 159 CONFIG_FANOTIFY=y 160 160 CONFIG_MSDOS_FS=y 161 161 CONFIG_VFAT_FS=y
+1 -1
arch/arm/configs/davinci_all_defconfig
··· 228 228 CONFIG_PWM_TIECAP=m 229 229 CONFIG_PWM_TIEHRPWM=m 230 230 CONFIG_EXT2_FS=y 231 - CONFIG_EXT3_FS=y 231 + CONFIG_EXT4_FS=y 232 232 CONFIG_EXT4_FS_POSIX_ACL=y 233 233 CONFIG_XFS_FS=m 234 234 CONFIG_AUTOFS_FS=m
+2 -2
arch/arm/configs/dove_defconfig
··· 95 95 CONFIG_DMADEVICES=y 96 96 CONFIG_MV_XOR=y 97 97 CONFIG_EXT2_FS=y 98 - CONFIG_EXT3_FS=y 99 - # CONFIG_EXT3_FS_XATTR is not set 98 + CONFIG_EXT4_FS=y 99 + # CONFIG_EXT4_FS_XATTR is not set 100 100 CONFIG_EXT4_FS=y 101 101 CONFIG_ISO9660_FS=y 102 102 CONFIG_JOLIET=y
+2 -2
arch/arm/configs/ep93xx_defconfig
··· 103 103 CONFIG_DMADEVICES=y 104 104 CONFIG_EP93XX_DMA=y 105 105 CONFIG_EXT2_FS=y 106 - CONFIG_EXT3_FS=y 107 - # CONFIG_EXT3_FS_XATTR is not set 106 + CONFIG_EXT4_FS=y 107 + # CONFIG_EXT4_FS_XATTR is not set 108 108 CONFIG_EXT4_FS=y 109 109 CONFIG_VFAT_FS=y 110 110 CONFIG_TMPFS=y
+3 -3
arch/arm/configs/imx_v6_v7_defconfig
··· 436 436 CONFIG_EXT2_FS_XATTR=y 437 437 CONFIG_EXT2_FS_POSIX_ACL=y 438 438 CONFIG_EXT2_FS_SECURITY=y 439 - CONFIG_EXT3_FS=y 440 - CONFIG_EXT3_FS_POSIX_ACL=y 441 - CONFIG_EXT3_FS_SECURITY=y 439 + CONFIG_EXT4_FS=y 440 + CONFIG_EXT4_FS_POSIX_ACL=y 441 + CONFIG_EXT4_FS_SECURITY=y 442 442 CONFIG_QUOTA=y 443 443 CONFIG_QUOTA_NETLINK_INTERFACE=y 444 444 CONFIG_AUTOFS_FS=y
+2 -2
arch/arm/configs/ixp4xx_defconfig
··· 158 158 CONFIG_EXT2_FS=y 159 159 CONFIG_EXT2_FS_XATTR=y 160 160 CONFIG_EXT2_FS_POSIX_ACL=y 161 - CONFIG_EXT3_FS=y 162 - CONFIG_EXT3_FS_POSIX_ACL=y 161 + CONFIG_EXT4_FS=y 162 + CONFIG_EXT4_FS_POSIX_ACL=y 163 163 CONFIG_OVERLAY_FS=y 164 164 CONFIG_TMPFS=y 165 165 CONFIG_TMPFS_POSIX_ACL=y
+1 -1
arch/arm/configs/mmp2_defconfig
··· 53 53 CONFIG_RTC_DRV_MAX8925=y 54 54 # CONFIG_RESET_CONTROLLER is not set 55 55 CONFIG_EXT2_FS=y 56 - CONFIG_EXT3_FS=y 56 + CONFIG_EXT4_FS=y 57 57 CONFIG_EXT4_FS=y 58 58 # CONFIG_DNOTIFY is not set 59 59 CONFIG_MSDOS_FS=y
+1 -1
arch/arm/configs/moxart_defconfig
··· 113 113 CONFIG_DMADEVICES=y 114 114 CONFIG_MOXART_DMA=y 115 115 # CONFIG_IOMMU_SUPPORT is not set 116 - CONFIG_EXT3_FS=y 116 + CONFIG_EXT4_FS=y 117 117 CONFIG_TMPFS=y 118 118 CONFIG_CONFIGFS_FS=y 119 119 CONFIG_JFFS2_FS=y
+1 -1
arch/arm/configs/multi_v5_defconfig
··· 268 268 CONFIG_PWM_ATMEL_HLCDC_PWM=m 269 269 CONFIG_PWM_ATMEL_TCB=m 270 270 CONFIG_EXT2_FS=y 271 - CONFIG_EXT3_FS=y 271 + CONFIG_EXT4_FS=y 272 272 CONFIG_ISO9660_FS=m 273 273 CONFIG_JOLIET=y 274 274 CONFIG_UDF_FS=m
+2 -2
arch/arm/configs/mv78xx0_defconfig
··· 91 91 CONFIG_RTC_DRV_RS5C372=y 92 92 CONFIG_RTC_DRV_M41T80=y 93 93 CONFIG_EXT2_FS=y 94 - CONFIG_EXT3_FS=y 95 - # CONFIG_EXT3_FS_XATTR is not set 94 + CONFIG_EXT4_FS=y 95 + # CONFIG_EXT4_FS_XATTR is not set 96 96 CONFIG_EXT4_FS=m 97 97 CONFIG_ISO9660_FS=m 98 98 CONFIG_JOLIET=y
+1 -1
arch/arm/configs/mvebu_v5_defconfig
··· 168 168 CONFIG_STAGING=y 169 169 CONFIG_FB_XGI=y 170 170 CONFIG_EXT2_FS=y 171 - CONFIG_EXT3_FS=y 171 + CONFIG_EXT4_FS=y 172 172 CONFIG_ISO9660_FS=m 173 173 CONFIG_JOLIET=y 174 174 CONFIG_UDF_FS=m
+1 -1
arch/arm/configs/nhk8815_defconfig
··· 116 116 CONFIG_PWM=y 117 117 CONFIG_PWM_STMPE=y 118 118 CONFIG_EXT2_FS=y 119 - CONFIG_EXT3_FS=y 119 + CONFIG_EXT4_FS=y 120 120 CONFIG_FUSE_FS=y 121 121 CONFIG_MSDOS_FS=y 122 122 CONFIG_VFAT_FS=y
+1 -1
arch/arm/configs/omap1_defconfig
··· 184 184 CONFIG_RTC_CLASS=y 185 185 CONFIG_RTC_DRV_OMAP=y 186 186 CONFIG_EXT2_FS=y 187 - CONFIG_EXT3_FS=y 187 + CONFIG_EXT4_FS=y 188 188 # CONFIG_DNOTIFY is not set 189 189 CONFIG_AUTOFS_FS=y 190 190 CONFIG_ISO9660_FS=y
+1 -1
arch/arm/configs/omap2plus_defconfig
··· 679 679 CONFIG_COUNTER=m 680 680 CONFIG_TI_EQEP=m 681 681 CONFIG_EXT2_FS=y 682 - CONFIG_EXT3_FS=y 682 + CONFIG_EXT4_FS=y 683 683 CONFIG_EXT4_FS_SECURITY=y 684 684 CONFIG_FANOTIFY=y 685 685 CONFIG_QUOTA=y
+2 -2
arch/arm/configs/orion5x_defconfig
··· 115 115 CONFIG_DMADEVICES=y 116 116 CONFIG_MV_XOR=y 117 117 CONFIG_EXT2_FS=y 118 - CONFIG_EXT3_FS=y 119 - # CONFIG_EXT3_FS_XATTR is not set 118 + CONFIG_EXT4_FS=y 119 + # CONFIG_EXT4_FS_XATTR is not set 120 120 CONFIG_EXT4_FS=m 121 121 CONFIG_ISO9660_FS=m 122 122 CONFIG_JOLIET=y
+3 -3
arch/arm/configs/pxa_defconfig
··· 579 579 CONFIG_EXT2_FS_XATTR=y 580 580 CONFIG_EXT2_FS_POSIX_ACL=y 581 581 CONFIG_EXT2_FS_SECURITY=y 582 - CONFIG_EXT3_FS=y 583 - CONFIG_EXT3_FS_POSIX_ACL=y 584 - CONFIG_EXT3_FS_SECURITY=y 582 + CONFIG_EXT4_FS=y 583 + CONFIG_EXT4_FS_POSIX_ACL=y 584 + CONFIG_EXT4_FS_SECURITY=y 585 585 CONFIG_XFS_FS=m 586 586 CONFIG_AUTOFS_FS=m 587 587 CONFIG_FUSE_FS=m
+1 -1
arch/arm/configs/qcom_defconfig
··· 291 291 CONFIG_INTERCONNECT_QCOM_SDX55=m 292 292 CONFIG_EXT2_FS=y 293 293 CONFIG_EXT2_FS_XATTR=y 294 - CONFIG_EXT3_FS=y 294 + CONFIG_EXT4_FS=y 295 295 CONFIG_FUSE_FS=y 296 296 CONFIG_VFAT_FS=y 297 297 CONFIG_TMPFS=y
+1 -1
arch/arm/configs/rpc_defconfig
··· 77 77 CONFIG_RTC_CLASS=y 78 78 CONFIG_RTC_DRV_PCF8583=y 79 79 CONFIG_EXT2_FS=y 80 - CONFIG_EXT3_FS=y 80 + CONFIG_EXT4_FS=y 81 81 CONFIG_AUTOFS_FS=m 82 82 CONFIG_ISO9660_FS=y 83 83 CONFIG_JOLIET=y
+3 -3
arch/arm/configs/s3c6400_defconfig
··· 52 52 CONFIG_RTC_DRV_S3C=y 53 53 CONFIG_PWM=y 54 54 CONFIG_EXT2_FS=y 55 - CONFIG_EXT3_FS=y 56 - CONFIG_EXT3_FS_POSIX_ACL=y 57 - CONFIG_EXT3_FS_SECURITY=y 55 + CONFIG_EXT4_FS=y 56 + CONFIG_EXT4_FS_POSIX_ACL=y 57 + CONFIG_EXT4_FS_SECURITY=y 58 58 CONFIG_TMPFS=y 59 59 CONFIG_TMPFS_POSIX_ACL=y 60 60 CONFIG_CRAMFS=y
+1 -1
arch/arm/configs/sama7_defconfig
··· 201 201 CONFIG_RESET_CONTROLLER=y 202 202 CONFIG_NVMEM_MICROCHIP_OTPC=y 203 203 CONFIG_EXT2_FS=y 204 - CONFIG_EXT3_FS=y 204 + CONFIG_EXT4_FS=y 205 205 CONFIG_FANOTIFY=y 206 206 CONFIG_AUTOFS_FS=m 207 207 CONFIG_VFAT_FS=y
+1 -1
arch/arm/configs/socfpga_defconfig
··· 136 136 CONFIG_EXT2_FS=y 137 137 CONFIG_EXT2_FS_XATTR=y 138 138 CONFIG_EXT2_FS_POSIX_ACL=y 139 - CONFIG_EXT3_FS=y 139 + CONFIG_EXT4_FS=y 140 140 CONFIG_AUTOFS_FS=y 141 141 CONFIG_VFAT_FS=y 142 142 CONFIG_NTFS_FS=y
+2 -2
arch/arm/configs/spear13xx_defconfig
··· 84 84 CONFIG_EXT2_FS=y 85 85 CONFIG_EXT2_FS_XATTR=y 86 86 CONFIG_EXT2_FS_SECURITY=y 87 - CONFIG_EXT3_FS=y 88 - CONFIG_EXT3_FS_SECURITY=y 87 + CONFIG_EXT4_FS=y 88 + CONFIG_EXT4_FS_SECURITY=y 89 89 CONFIG_AUTOFS_FS=m 90 90 CONFIG_FUSE_FS=y 91 91 CONFIG_MSDOS_FS=m
+2 -2
arch/arm/configs/spear3xx_defconfig
··· 67 67 CONFIG_EXT2_FS=y 68 68 CONFIG_EXT2_FS_XATTR=y 69 69 CONFIG_EXT2_FS_SECURITY=y 70 - CONFIG_EXT3_FS=y 71 - CONFIG_EXT3_FS_SECURITY=y 70 + CONFIG_EXT4_FS=y 71 + CONFIG_EXT4_FS_SECURITY=y 72 72 CONFIG_AUTOFS_FS=m 73 73 CONFIG_MSDOS_FS=m 74 74 CONFIG_VFAT_FS=m
+2 -2
arch/arm/configs/spear6xx_defconfig
··· 53 53 CONFIG_EXT2_FS=y 54 54 CONFIG_EXT2_FS_XATTR=y 55 55 CONFIG_EXT2_FS_SECURITY=y 56 - CONFIG_EXT3_FS=y 57 - CONFIG_EXT3_FS_SECURITY=y 56 + CONFIG_EXT4_FS=y 57 + CONFIG_EXT4_FS_SECURITY=y 58 58 CONFIG_AUTOFS_FS=m 59 59 CONFIG_MSDOS_FS=m 60 60 CONFIG_VFAT_FS=m
+2 -2
arch/arm/configs/spitz_defconfig
··· 193 193 CONFIG_EXT2_FS_XATTR=y 194 194 CONFIG_EXT2_FS_POSIX_ACL=y 195 195 CONFIG_EXT2_FS_SECURITY=y 196 - CONFIG_EXT3_FS=y 197 - # CONFIG_EXT3_FS_XATTR is not set 196 + CONFIG_EXT4_FS=y 197 + # CONFIG_EXT4_FS_XATTR is not set 198 198 CONFIG_MSDOS_FS=y 199 199 CONFIG_VFAT_FS=y 200 200 CONFIG_TMPFS=y
+1 -1
arch/arm/configs/stm32_defconfig
··· 69 69 CONFIG_IIO=y 70 70 CONFIG_STM32_ADC_CORE=y 71 71 CONFIG_STM32_ADC=y 72 - CONFIG_EXT3_FS=y 72 + CONFIG_EXT4_FS=y 73 73 # CONFIG_FILE_LOCKING is not set 74 74 # CONFIG_DNOTIFY is not set 75 75 # CONFIG_INOTIFY_USER is not set
+3 -3
arch/arm/configs/tegra_defconfig
··· 319 319 CONFIG_EXT2_FS_XATTR=y 320 320 CONFIG_EXT2_FS_POSIX_ACL=y 321 321 CONFIG_EXT2_FS_SECURITY=y 322 - CONFIG_EXT3_FS=y 323 - CONFIG_EXT3_FS_POSIX_ACL=y 324 - CONFIG_EXT3_FS_SECURITY=y 322 + CONFIG_EXT4_FS=y 323 + CONFIG_EXT4_FS_POSIX_ACL=y 324 + CONFIG_EXT4_FS_SECURITY=y 325 325 # CONFIG_DNOTIFY is not set 326 326 CONFIG_VFAT_FS=y 327 327 CONFIG_TMPFS=y
+1 -1
arch/arm/configs/u8500_defconfig
··· 175 175 CONFIG_EXT2_FS_XATTR=y 176 176 CONFIG_EXT2_FS_POSIX_ACL=y 177 177 CONFIG_EXT2_FS_SECURITY=y 178 - CONFIG_EXT3_FS=y 178 + CONFIG_EXT4_FS=y 179 179 CONFIG_VFAT_FS=y 180 180 CONFIG_TMPFS=y 181 181 CONFIG_TMPFS_POSIX_ACL=y
+1 -1
arch/arm/configs/vexpress_defconfig
··· 120 120 CONFIG_VIRTIO_MMIO=y 121 121 CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y 122 122 CONFIG_EXT2_FS=y 123 - CONFIG_EXT3_FS=y 123 + CONFIG_EXT4_FS=y 124 124 CONFIG_VFAT_FS=y 125 125 CONFIG_TMPFS=y 126 126 CONFIG_JFFS2_FS=y
+32 -6
arch/arm64/include/asm/el2_setup.h
··· 24 24 * ID_AA64MMFR4_EL1.E2H0 < 0. On such CPUs HCR_EL2.E2H is RES1, but it 25 25 * can reset into an UNKNOWN state and might not read as 1 until it has 26 26 * been initialized explicitly. 27 - * 28 - * Fruity CPUs seem to have HCR_EL2.E2H set to RAO/WI, but 29 - * don't advertise it (they predate this relaxation). 30 - * 31 27 * Initalize HCR_EL2.E2H so that later code can rely upon HCR_EL2.E2H 32 28 * indicating whether the CPU is running in E2H mode. 33 29 */ 34 30 mrs_s x1, SYS_ID_AA64MMFR4_EL1 35 31 sbfx x1, x1, #ID_AA64MMFR4_EL1_E2H0_SHIFT, #ID_AA64MMFR4_EL1_E2H0_WIDTH 36 32 cmp x1, #0 37 - b.ge .LnVHE_\@ 33 + b.lt .LnE2H0_\@ 38 34 35 + /* 36 + * Unfortunately, HCR_EL2.E2H can be RES1 even if not advertised 37 + * as such via ID_AA64MMFR4_EL1.E2H0: 38 + * 39 + * - Fruity CPUs predate the !FEAT_E2H0 relaxation, and seem to 40 + * have HCR_EL2.E2H implemented as RAO/WI. 41 + * 42 + * - On CPUs that lack FEAT_FGT, a hypervisor can't trap guest 43 + * reads of ID_AA64MMFR4_EL1 to advertise !FEAT_E2H0. NV 44 + * guests on these hosts can write to HCR_EL2.E2H without 45 + * trapping to the hypervisor, but these writes have no 46 + * functional effect. 47 + * 48 + * Handle both cases by checking for an essential VHE property 49 + * (system register remapping) to decide whether we're 50 + * effectively VHE-only or not. 51 + */ 52 + msr_hcr_el2 x0 // Setup HCR_EL2 as nVHE 53 + isb 54 + mov x1, #1 // Write something to FAR_EL1 55 + msr far_el1, x1 56 + isb 57 + mov x1, #2 // Try to overwrite it via FAR_EL2 58 + msr far_el2, x1 59 + isb 60 + mrs x1, far_el1 // If we see the latest write in FAR_EL1, 61 + cmp x1, #2 // we can safely assume we are VHE only. 62 + b.ne .LnVHE_\@ // Otherwise, we know that nVHE works. 63 + 64 + .LnE2H0_\@: 39 65 orr x0, x0, #HCR_E2H 40 - .LnVHE_\@: 41 66 msr_hcr_el2 x0 42 67 isb 68 + .LnVHE_\@: 43 69 .endm 44 70 45 71 .macro __init_el2_sctlr
+50
arch/arm64/include/asm/kvm_host.h
··· 816 816 u64 hcrx_el2; 817 817 u64 mdcr_el2; 818 818 819 + struct { 820 + u64 r; 821 + u64 w; 822 + } fgt[__NR_FGT_GROUP_IDS__]; 823 + 819 824 /* Exception Information */ 820 825 struct kvm_vcpu_fault_info fault; 821 826 ··· 1605 1600 void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt); 1606 1601 void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *res1); 1607 1602 void check_feature_map(void); 1603 + void kvm_vcpu_load_fgt(struct kvm_vcpu *vcpu); 1608 1604 1605 + static __always_inline enum fgt_group_id __fgt_reg_to_group_id(enum vcpu_sysreg reg) 1606 + { 1607 + switch (reg) { 1608 + case HFGRTR_EL2: 1609 + case HFGWTR_EL2: 1610 + return HFGRTR_GROUP; 1611 + case HFGITR_EL2: 1612 + return HFGITR_GROUP; 1613 + case HDFGRTR_EL2: 1614 + case HDFGWTR_EL2: 1615 + return HDFGRTR_GROUP; 1616 + case HAFGRTR_EL2: 1617 + return HAFGRTR_GROUP; 1618 + case HFGRTR2_EL2: 1619 + case HFGWTR2_EL2: 1620 + return HFGRTR2_GROUP; 1621 + case HFGITR2_EL2: 1622 + return HFGITR2_GROUP; 1623 + case HDFGRTR2_EL2: 1624 + case HDFGWTR2_EL2: 1625 + return HDFGRTR2_GROUP; 1626 + default: 1627 + BUILD_BUG_ON(1); 1628 + } 1629 + } 1630 + 1631 + #define vcpu_fgt(vcpu, reg) \ 1632 + ({ \ 1633 + enum fgt_group_id id = __fgt_reg_to_group_id(reg); \ 1634 + u64 *p; \ 1635 + switch (reg) { \ 1636 + case HFGWTR_EL2: \ 1637 + case HDFGWTR_EL2: \ 1638 + case HFGWTR2_EL2: \ 1639 + case HDFGWTR2_EL2: \ 1640 + p = &(vcpu)->arch.fgt[id].w; \ 1641 + break; \ 1642 + default: \ 1643 + p = &(vcpu)->arch.fgt[id].r; \ 1644 + break; \ 1645 + } \ 1646 + \ 1647 + p; \ 1648 + }) 1609 1649 1610 1650 #endif /* __ARM64_KVM_HOST_H__ */
+10 -1
arch/arm64/include/asm/sysreg.h
··· 1220 1220 __val; \ 1221 1221 }) 1222 1222 1223 + /* 1224 + * The "Z" constraint combined with the "%x0" template should be enough 1225 + * to force XZR generation if (v) is a constant 0 value but LLVM does not 1226 + * yet understand that modifier/constraint combo so a conditional is required 1227 + * to nudge the compiler into using XZR as a source for a 0 constant value. 1228 + */ 1223 1229 #define write_sysreg_s(v, r) do { \ 1224 1230 u64 __val = (u64)(v); \ 1225 1231 u32 __maybe_unused __check_r = (u32)(r); \ 1226 - asm volatile(__msr_s(r, "%x0") : : "rZ" (__val)); \ 1232 + if (__builtin_constant_p(__val) && __val == 0) \ 1233 + asm volatile(__msr_s(r, "xzr")); \ 1234 + else \ 1235 + asm volatile(__msr_s(r, "%x0") : : "r" (__val)); \ 1227 1236 } while (0) 1228 1237 1229 1238 /*
+5 -3
arch/arm64/kernel/entry-common.c
··· 697 697 698 698 static void noinstr el0_softstp(struct pt_regs *regs, unsigned long esr) 699 699 { 700 + bool step_done; 701 + 700 702 if (!is_ttbr0_addr(regs->pc)) 701 703 arm64_apply_bp_hardening(); 702 704 ··· 709 707 * If we are stepping a suspended breakpoint there's nothing more to do: 710 708 * the single-step is complete. 711 709 */ 712 - if (!try_step_suspended_breakpoints(regs)) { 713 - local_daif_restore(DAIF_PROCCTX); 710 + step_done = try_step_suspended_breakpoints(regs); 711 + local_daif_restore(DAIF_PROCCTX); 712 + if (!step_done) 714 713 do_el0_softstep(esr, regs); 715 - } 716 714 arm64_exit_to_user_mode(regs); 717 715 } 718 716
+14 -91
arch/arm64/kvm/arch_timer.c
··· 66 66 67 67 u32 timer_get_ctl(struct arch_timer_context *ctxt) 68 68 { 69 - struct kvm_vcpu *vcpu = ctxt->vcpu; 69 + struct kvm_vcpu *vcpu = timer_context_to_vcpu(ctxt); 70 70 71 71 switch(arch_timer_ctx_index(ctxt)) { 72 72 case TIMER_VTIMER: ··· 85 85 86 86 u64 timer_get_cval(struct arch_timer_context *ctxt) 87 87 { 88 - struct kvm_vcpu *vcpu = ctxt->vcpu; 88 + struct kvm_vcpu *vcpu = timer_context_to_vcpu(ctxt); 89 89 90 90 switch(arch_timer_ctx_index(ctxt)) { 91 91 case TIMER_VTIMER: ··· 104 104 105 105 static void timer_set_ctl(struct arch_timer_context *ctxt, u32 ctl) 106 106 { 107 - struct kvm_vcpu *vcpu = ctxt->vcpu; 107 + struct kvm_vcpu *vcpu = timer_context_to_vcpu(ctxt); 108 108 109 109 switch(arch_timer_ctx_index(ctxt)) { 110 110 case TIMER_VTIMER: ··· 126 126 127 127 static void timer_set_cval(struct arch_timer_context *ctxt, u64 cval) 128 128 { 129 - struct kvm_vcpu *vcpu = ctxt->vcpu; 129 + struct kvm_vcpu *vcpu = timer_context_to_vcpu(ctxt); 130 130 131 131 switch(arch_timer_ctx_index(ctxt)) { 132 132 case TIMER_VTIMER: ··· 144 144 default: 145 145 WARN_ON(1); 146 146 } 147 - } 148 - 149 - static void timer_set_offset(struct arch_timer_context *ctxt, u64 offset) 150 - { 151 - if (!ctxt->offset.vm_offset) { 152 - WARN(offset, "timer %ld\n", arch_timer_ctx_index(ctxt)); 153 - return; 154 - } 155 - 156 - WRITE_ONCE(*ctxt->offset.vm_offset, offset); 157 147 } 158 148 159 149 u64 kvm_phys_timer_read(void) ··· 333 343 u64 ns; 334 344 335 345 ctx = container_of(hrt, struct arch_timer_context, hrtimer); 336 - vcpu = ctx->vcpu; 346 + vcpu = timer_context_to_vcpu(ctx); 337 347 338 348 trace_kvm_timer_hrtimer_expire(ctx); 339 349 ··· 426 436 * 427 437 * But hey, it's fast, right? 
428 438 */ 429 - if (is_hyp_ctxt(ctx->vcpu) && 430 - (ctx == vcpu_vtimer(ctx->vcpu) || ctx == vcpu_ptimer(ctx->vcpu))) { 439 + struct kvm_vcpu *vcpu = timer_context_to_vcpu(ctx); 440 + if (is_hyp_ctxt(vcpu) && 441 + (ctx == vcpu_vtimer(vcpu) || ctx == vcpu_ptimer(vcpu))) { 431 442 unsigned long val = timer_get_ctl(ctx); 432 443 __assign_bit(__ffs(ARCH_TIMER_CTRL_IT_STAT), &val, level); 433 444 timer_set_ctl(ctx, val); ··· 461 470 trace_kvm_timer_emulate(ctx, should_fire); 462 471 463 472 if (should_fire != ctx->irq.level) 464 - kvm_timer_update_irq(ctx->vcpu, should_fire, ctx); 473 + kvm_timer_update_irq(timer_context_to_vcpu(ctx), should_fire, ctx); 465 474 466 475 kvm_timer_update_status(ctx, should_fire); 467 476 ··· 489 498 490 499 static void timer_save_state(struct arch_timer_context *ctx) 491 500 { 492 - struct arch_timer_cpu *timer = vcpu_timer(ctx->vcpu); 501 + struct arch_timer_cpu *timer = vcpu_timer(timer_context_to_vcpu(ctx)); 493 502 enum kvm_arch_timers index = arch_timer_ctx_index(ctx); 494 503 unsigned long flags; 495 504 ··· 600 609 601 610 static void timer_restore_state(struct arch_timer_context *ctx) 602 611 { 603 - struct arch_timer_cpu *timer = vcpu_timer(ctx->vcpu); 612 + struct arch_timer_cpu *timer = vcpu_timer(timer_context_to_vcpu(ctx)); 604 613 enum kvm_arch_timers index = arch_timer_ctx_index(ctx); 605 614 unsigned long flags; 606 615 ··· 659 668 660 669 static void kvm_timer_vcpu_load_gic(struct arch_timer_context *ctx) 661 670 { 662 - struct kvm_vcpu *vcpu = ctx->vcpu; 671 + struct kvm_vcpu *vcpu = timer_context_to_vcpu(ctx); 663 672 bool phys_active = false; 664 673 665 674 /* ··· 668 677 * this point and the register restoration, we'll take the 669 678 * interrupt anyway. 
670 679 */ 671 - kvm_timer_update_irq(ctx->vcpu, kvm_timer_should_fire(ctx), ctx); 680 + kvm_timer_update_irq(vcpu, kvm_timer_should_fire(ctx), ctx); 672 681 673 682 if (irqchip_in_kernel(vcpu->kvm)) 674 683 phys_active = kvm_vgic_map_is_active(vcpu, timer_irq(ctx)); ··· 1054 1063 struct arch_timer_context *ctxt = vcpu_get_timer(vcpu, timerid); 1055 1064 struct kvm *kvm = vcpu->kvm; 1056 1065 1057 - ctxt->vcpu = vcpu; 1066 + ctxt->timer_id = timerid; 1058 1067 1059 1068 if (timerid == TIMER_VTIMER) 1060 1069 ctxt->offset.vm_offset = &kvm->arch.timer_data.voffset; ··· 1112 1121 disable_percpu_irq(host_ptimer_irq); 1113 1122 } 1114 1123 1115 - int kvm_arm_timer_set_reg(struct kvm_vcpu *vcpu, u64 regid, u64 value) 1116 - { 1117 - struct arch_timer_context *timer; 1118 - 1119 - switch (regid) { 1120 - case KVM_REG_ARM_TIMER_CTL: 1121 - timer = vcpu_vtimer(vcpu); 1122 - kvm_arm_timer_write(vcpu, timer, TIMER_REG_CTL, value); 1123 - break; 1124 - case KVM_REG_ARM_TIMER_CNT: 1125 - if (!test_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET, 1126 - &vcpu->kvm->arch.flags)) { 1127 - timer = vcpu_vtimer(vcpu); 1128 - timer_set_offset(timer, kvm_phys_timer_read() - value); 1129 - } 1130 - break; 1131 - case KVM_REG_ARM_TIMER_CVAL: 1132 - timer = vcpu_vtimer(vcpu); 1133 - kvm_arm_timer_write(vcpu, timer, TIMER_REG_CVAL, value); 1134 - break; 1135 - case KVM_REG_ARM_PTIMER_CTL: 1136 - timer = vcpu_ptimer(vcpu); 1137 - kvm_arm_timer_write(vcpu, timer, TIMER_REG_CTL, value); 1138 - break; 1139 - case KVM_REG_ARM_PTIMER_CNT: 1140 - if (!test_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET, 1141 - &vcpu->kvm->arch.flags)) { 1142 - timer = vcpu_ptimer(vcpu); 1143 - timer_set_offset(timer, kvm_phys_timer_read() - value); 1144 - } 1145 - break; 1146 - case KVM_REG_ARM_PTIMER_CVAL: 1147 - timer = vcpu_ptimer(vcpu); 1148 - kvm_arm_timer_write(vcpu, timer, TIMER_REG_CVAL, value); 1149 - break; 1150 - 1151 - default: 1152 - return -1; 1153 - } 1154 - 1155 - return 0; 1156 - } 1157 - 1158 1124 static u64 
read_timer_ctl(struct arch_timer_context *timer) 1159 1125 { 1160 1126 /* ··· 1126 1178 ctl |= ARCH_TIMER_CTRL_IT_STAT; 1127 1179 1128 1180 return ctl; 1129 - } 1130 - 1131 - u64 kvm_arm_timer_get_reg(struct kvm_vcpu *vcpu, u64 regid) 1132 - { 1133 - switch (regid) { 1134 - case KVM_REG_ARM_TIMER_CTL: 1135 - return kvm_arm_timer_read(vcpu, 1136 - vcpu_vtimer(vcpu), TIMER_REG_CTL); 1137 - case KVM_REG_ARM_TIMER_CNT: 1138 - return kvm_arm_timer_read(vcpu, 1139 - vcpu_vtimer(vcpu), TIMER_REG_CNT); 1140 - case KVM_REG_ARM_TIMER_CVAL: 1141 - return kvm_arm_timer_read(vcpu, 1142 - vcpu_vtimer(vcpu), TIMER_REG_CVAL); 1143 - case KVM_REG_ARM_PTIMER_CTL: 1144 - return kvm_arm_timer_read(vcpu, 1145 - vcpu_ptimer(vcpu), TIMER_REG_CTL); 1146 - case KVM_REG_ARM_PTIMER_CNT: 1147 - return kvm_arm_timer_read(vcpu, 1148 - vcpu_ptimer(vcpu), TIMER_REG_CNT); 1149 - case KVM_REG_ARM_PTIMER_CVAL: 1150 - return kvm_arm_timer_read(vcpu, 1151 - vcpu_ptimer(vcpu), TIMER_REG_CVAL); 1152 - } 1153 - return (u64)-1; 1154 1181 } 1155 1182 1156 1183 static u64 kvm_arm_timer_read(struct kvm_vcpu *vcpu,
+7
arch/arm64/kvm/arm.c
··· 642 642 vcpu->arch.hcr_el2 |= HCR_TWI; 643 643 644 644 vcpu_set_pauth_traps(vcpu); 645 + kvm_vcpu_load_fgt(vcpu); 645 646 646 647 if (is_protected_kvm_enabled()) { 647 648 kvm_call_hyp_nvhe(__pkvm_vcpu_load, ··· 1795 1794 case KVM_GET_VCPU_EVENTS: { 1796 1795 struct kvm_vcpu_events events; 1797 1796 1797 + if (!kvm_vcpu_initialized(vcpu)) 1798 + return -ENOEXEC; 1799 + 1798 1800 if (kvm_arm_vcpu_get_events(vcpu, &events)) 1799 1801 return -EINVAL; 1800 1802 ··· 1808 1804 } 1809 1805 case KVM_SET_VCPU_EVENTS: { 1810 1806 struct kvm_vcpu_events events; 1807 + 1808 + if (!kvm_vcpu_initialized(vcpu)) 1809 + return -ENOEXEC; 1811 1810 1812 1811 if (copy_from_user(&events, argp, sizeof(events))) 1813 1812 return -EFAULT;
+5 -2
arch/arm64/kvm/at.c
··· 91 91 case OP_AT_S1E2W: 92 92 case OP_AT_S1E2A: 93 93 return vcpu_el2_e2h_is_set(vcpu) ? TR_EL20 : TR_EL2; 94 - break; 95 94 default: 96 95 return (vcpu_el2_e2h_is_set(vcpu) && 97 96 vcpu_el2_tge_is_set(vcpu)) ? TR_EL20 : TR_EL10; ··· 1601 1602 .fn = match_s1_desc, 1602 1603 .priv = &dm, 1603 1604 }, 1604 - .regime = TR_EL10, 1605 1605 .as_el0 = false, 1606 1606 .pan = false, 1607 1607 }; 1608 1608 struct s1_walk_result wr = {}; 1609 1609 int ret; 1610 + 1611 + if (is_hyp_ctxt(vcpu)) 1612 + wi.regime = vcpu_el2_e2h_is_set(vcpu) ? TR_EL20 : TR_EL2; 1613 + else 1614 + wi.regime = TR_EL10; 1610 1615 1611 1616 ret = setup_s1_walk(vcpu, &wi, &wr, va); 1612 1617 if (ret)
+90
arch/arm64/kvm/config.c
··· 5 5 */ 6 6 7 7 #include <linux/kvm_host.h> 8 + #include <asm/kvm_emulate.h> 9 + #include <asm/kvm_nested.h> 8 10 #include <asm/sysreg.h> 9 11 10 12 /* ··· 1429 1427 *res0 = *res1 = 0; 1430 1428 break; 1431 1429 } 1430 + } 1431 + 1432 + static __always_inline struct fgt_masks *__fgt_reg_to_masks(enum vcpu_sysreg reg) 1433 + { 1434 + switch (reg) { 1435 + case HFGRTR_EL2: 1436 + return &hfgrtr_masks; 1437 + case HFGWTR_EL2: 1438 + return &hfgwtr_masks; 1439 + case HFGITR_EL2: 1440 + return &hfgitr_masks; 1441 + case HDFGRTR_EL2: 1442 + return &hdfgrtr_masks; 1443 + case HDFGWTR_EL2: 1444 + return &hdfgwtr_masks; 1445 + case HAFGRTR_EL2: 1446 + return &hafgrtr_masks; 1447 + case HFGRTR2_EL2: 1448 + return &hfgrtr2_masks; 1449 + case HFGWTR2_EL2: 1450 + return &hfgwtr2_masks; 1451 + case HFGITR2_EL2: 1452 + return &hfgitr2_masks; 1453 + case HDFGRTR2_EL2: 1454 + return &hdfgrtr2_masks; 1455 + case HDFGWTR2_EL2: 1456 + return &hdfgwtr2_masks; 1457 + default: 1458 + BUILD_BUG_ON(1); 1459 + } 1460 + } 1461 + 1462 + static __always_inline void __compute_fgt(struct kvm_vcpu *vcpu, enum vcpu_sysreg reg) 1463 + { 1464 + u64 fgu = vcpu->kvm->arch.fgu[__fgt_reg_to_group_id(reg)]; 1465 + struct fgt_masks *m = __fgt_reg_to_masks(reg); 1466 + u64 clear = 0, set = 0, val = m->nmask; 1467 + 1468 + set |= fgu & m->mask; 1469 + clear |= fgu & m->nmask; 1470 + 1471 + if (is_nested_ctxt(vcpu)) { 1472 + u64 nested = __vcpu_sys_reg(vcpu, reg); 1473 + set |= nested & m->mask; 1474 + clear |= ~nested & m->nmask; 1475 + } 1476 + 1477 + val |= set; 1478 + val &= ~clear; 1479 + *vcpu_fgt(vcpu, reg) = val; 1480 + } 1481 + 1482 + static void __compute_hfgwtr(struct kvm_vcpu *vcpu) 1483 + { 1484 + __compute_fgt(vcpu, HFGWTR_EL2); 1485 + 1486 + if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38)) 1487 + *vcpu_fgt(vcpu, HFGWTR_EL2) |= HFGWTR_EL2_TCR_EL1; 1488 + } 1489 + 1490 + static void __compute_hdfgwtr(struct kvm_vcpu *vcpu) 1491 + { 1492 + __compute_fgt(vcpu, HDFGWTR_EL2); 1493 + 
1494 + if (is_hyp_ctxt(vcpu)) 1495 + *vcpu_fgt(vcpu, HDFGWTR_EL2) |= HDFGWTR_EL2_MDSCR_EL1; 1496 + } 1497 + 1498 + void kvm_vcpu_load_fgt(struct kvm_vcpu *vcpu) 1499 + { 1500 + if (!cpus_have_final_cap(ARM64_HAS_FGT)) 1501 + return; 1502 + 1503 + __compute_fgt(vcpu, HFGRTR_EL2); 1504 + __compute_hfgwtr(vcpu); 1505 + __compute_fgt(vcpu, HFGITR_EL2); 1506 + __compute_fgt(vcpu, HDFGRTR_EL2); 1507 + __compute_hdfgwtr(vcpu); 1508 + __compute_fgt(vcpu, HAFGRTR_EL2); 1509 + 1510 + if (!cpus_have_final_cap(ARM64_HAS_FGT2)) 1511 + return; 1512 + 1513 + __compute_fgt(vcpu, HFGRTR2_EL2); 1514 + __compute_fgt(vcpu, HFGWTR2_EL2); 1515 + __compute_fgt(vcpu, HFGITR2_EL2); 1516 + __compute_fgt(vcpu, HDFGRTR2_EL2); 1517 + __compute_fgt(vcpu, HDFGWTR2_EL2); 1432 1518 }
+10 -5
arch/arm64/kvm/debug.c
··· 15 15 #include <asm/kvm_arm.h> 16 16 #include <asm/kvm_emulate.h> 17 17 18 + static int cpu_has_spe(u64 dfr0) 19 + { 20 + return cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_PMSVer_SHIFT) && 21 + !(read_sysreg_s(SYS_PMBIDR_EL1) & PMBIDR_EL1_P); 22 + } 23 + 18 24 /** 19 25 * kvm_arm_setup_mdcr_el2 - configure vcpu mdcr_el2 value 20 26 * ··· 83 77 *host_data_ptr(debug_brps) = SYS_FIELD_GET(ID_AA64DFR0_EL1, BRPs, dfr0); 84 78 *host_data_ptr(debug_wrps) = SYS_FIELD_GET(ID_AA64DFR0_EL1, WRPs, dfr0); 85 79 80 + if (cpu_has_spe(dfr0)) 81 + host_data_set_flag(HAS_SPE); 82 + 86 83 if (has_vhe()) 87 84 return; 88 - 89 - if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_PMSVer_SHIFT) && 90 - !(read_sysreg_s(SYS_PMBIDR_EL1) & PMBIDR_EL1_P)) 91 - host_data_set_flag(HAS_SPE); 92 85 93 86 /* Check if we have BRBE implemented and available at the host */ 94 87 if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_BRBE_SHIFT)) ··· 107 102 void kvm_debug_init_vhe(void) 108 103 { 109 104 /* Clear PMSCR_EL1.E{0,1}SPE which reset to UNKNOWN values. */ 110 - if (SYS_FIELD_GET(ID_AA64DFR0_EL1, PMSVer, read_sysreg(id_aa64dfr0_el1))) 105 + if (host_data_test_flag(HAS_SPE)) 111 106 write_sysreg_el1(0, SYS_PMSCR); 112 107 } 113 108
-70
arch/arm64/kvm/guest.c
··· 591 591 return copy_core_reg_indices(vcpu, NULL); 592 592 } 593 593 594 - static const u64 timer_reg_list[] = { 595 - KVM_REG_ARM_TIMER_CTL, 596 - KVM_REG_ARM_TIMER_CNT, 597 - KVM_REG_ARM_TIMER_CVAL, 598 - KVM_REG_ARM_PTIMER_CTL, 599 - KVM_REG_ARM_PTIMER_CNT, 600 - KVM_REG_ARM_PTIMER_CVAL, 601 - }; 602 - 603 - #define NUM_TIMER_REGS ARRAY_SIZE(timer_reg_list) 604 - 605 - static bool is_timer_reg(u64 index) 606 - { 607 - switch (index) { 608 - case KVM_REG_ARM_TIMER_CTL: 609 - case KVM_REG_ARM_TIMER_CNT: 610 - case KVM_REG_ARM_TIMER_CVAL: 611 - case KVM_REG_ARM_PTIMER_CTL: 612 - case KVM_REG_ARM_PTIMER_CNT: 613 - case KVM_REG_ARM_PTIMER_CVAL: 614 - return true; 615 - } 616 - return false; 617 - } 618 - 619 - static int copy_timer_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) 620 - { 621 - for (int i = 0; i < NUM_TIMER_REGS; i++) { 622 - if (put_user(timer_reg_list[i], uindices)) 623 - return -EFAULT; 624 - uindices++; 625 - } 626 - 627 - return 0; 628 - } 629 - 630 - static int set_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) 631 - { 632 - void __user *uaddr = (void __user *)(long)reg->addr; 633 - u64 val; 634 - int ret; 635 - 636 - ret = copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id)); 637 - if (ret != 0) 638 - return -EFAULT; 639 - 640 - return kvm_arm_timer_set_reg(vcpu, reg->id, val); 641 - } 642 - 643 - static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) 644 - { 645 - void __user *uaddr = (void __user *)(long)reg->addr; 646 - u64 val; 647 - 648 - val = kvm_arm_timer_get_reg(vcpu, reg->id); 649 - return copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)) ? 
-EFAULT : 0; 650 - } 651 - 652 594 static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu) 653 595 { 654 596 const unsigned int slices = vcpu_sve_slices(vcpu); ··· 666 724 res += num_sve_regs(vcpu); 667 725 res += kvm_arm_num_sys_reg_descs(vcpu); 668 726 res += kvm_arm_get_fw_num_regs(vcpu); 669 - res += NUM_TIMER_REGS; 670 727 671 728 return res; 672 729 } ··· 696 755 return ret; 697 756 uindices += kvm_arm_get_fw_num_regs(vcpu); 698 757 699 - ret = copy_timer_indices(vcpu, uindices); 700 - if (ret < 0) 701 - return ret; 702 - uindices += NUM_TIMER_REGS; 703 - 704 758 return kvm_arm_copy_sys_reg_indices(vcpu, uindices); 705 759 } 706 760 ··· 713 777 case KVM_REG_ARM64_SVE: return get_sve_reg(vcpu, reg); 714 778 } 715 779 716 - if (is_timer_reg(reg->id)) 717 - return get_timer_reg(vcpu, reg); 718 - 719 780 return kvm_arm_sys_reg_get_reg(vcpu, reg); 720 781 } 721 782 ··· 729 796 return kvm_arm_set_fw_reg(vcpu, reg); 730 797 case KVM_REG_ARM64_SVE: return set_sve_reg(vcpu, reg); 731 798 } 732 - 733 - if (is_timer_reg(reg->id)) 734 - return set_timer_reg(vcpu, reg); 735 799 736 800 return kvm_arm_sys_reg_set_reg(vcpu, reg); 737 801 }
+6 -1
arch/arm64/kvm/handle_exit.c
··· 147 147 if (esr & ESR_ELx_WFx_ISS_RV) { 148 148 u64 val, now; 149 149 150 - now = kvm_arm_timer_get_reg(vcpu, KVM_REG_ARM_TIMER_CNT); 150 + now = kvm_phys_timer_read(); 151 + if (is_hyp_ctxt(vcpu) && vcpu_el2_e2h_is_set(vcpu)) 152 + now -= timer_get_offset(vcpu_hvtimer(vcpu)); 153 + else 154 + now -= timer_get_offset(vcpu_vtimer(vcpu)); 155 + 151 156 val = vcpu_get_reg(vcpu, kvm_vcpu_sys_get_rt(vcpu)); 152 157 153 158 if (now >= val)
+17 -131
arch/arm64/kvm/hyp/include/hyp/switch.h
··· 195 195 __deactivate_cptr_traps_nvhe(vcpu); 196 196 } 197 197 198 - #define reg_to_fgt_masks(reg) \ 199 - ({ \ 200 - struct fgt_masks *m; \ 201 - switch(reg) { \ 202 - case HFGRTR_EL2: \ 203 - m = &hfgrtr_masks; \ 204 - break; \ 205 - case HFGWTR_EL2: \ 206 - m = &hfgwtr_masks; \ 207 - break; \ 208 - case HFGITR_EL2: \ 209 - m = &hfgitr_masks; \ 210 - break; \ 211 - case HDFGRTR_EL2: \ 212 - m = &hdfgrtr_masks; \ 213 - break; \ 214 - case HDFGWTR_EL2: \ 215 - m = &hdfgwtr_masks; \ 216 - break; \ 217 - case HAFGRTR_EL2: \ 218 - m = &hafgrtr_masks; \ 219 - break; \ 220 - case HFGRTR2_EL2: \ 221 - m = &hfgrtr2_masks; \ 222 - break; \ 223 - case HFGWTR2_EL2: \ 224 - m = &hfgwtr2_masks; \ 225 - break; \ 226 - case HFGITR2_EL2: \ 227 - m = &hfgitr2_masks; \ 228 - break; \ 229 - case HDFGRTR2_EL2: \ 230 - m = &hdfgrtr2_masks; \ 231 - break; \ 232 - case HDFGWTR2_EL2: \ 233 - m = &hdfgwtr2_masks; \ 234 - break; \ 235 - default: \ 236 - BUILD_BUG_ON(1); \ 237 - } \ 238 - \ 239 - m; \ 240 - }) 241 - 242 - #define compute_clr_set(vcpu, reg, clr, set) \ 243 - do { \ 244 - u64 hfg = __vcpu_sys_reg(vcpu, reg); \ 245 - struct fgt_masks *m = reg_to_fgt_masks(reg); \ 246 - set |= hfg & m->mask; \ 247 - clr |= ~hfg & m->nmask; \ 248 - } while(0) 249 - 250 - #define reg_to_fgt_group_id(reg) \ 251 - ({ \ 252 - enum fgt_group_id id; \ 253 - switch(reg) { \ 254 - case HFGRTR_EL2: \ 255 - case HFGWTR_EL2: \ 256 - id = HFGRTR_GROUP; \ 257 - break; \ 258 - case HFGITR_EL2: \ 259 - id = HFGITR_GROUP; \ 260 - break; \ 261 - case HDFGRTR_EL2: \ 262 - case HDFGWTR_EL2: \ 263 - id = HDFGRTR_GROUP; \ 264 - break; \ 265 - case HAFGRTR_EL2: \ 266 - id = HAFGRTR_GROUP; \ 267 - break; \ 268 - case HFGRTR2_EL2: \ 269 - case HFGWTR2_EL2: \ 270 - id = HFGRTR2_GROUP; \ 271 - break; \ 272 - case HFGITR2_EL2: \ 273 - id = HFGITR2_GROUP; \ 274 - break; \ 275 - case HDFGRTR2_EL2: \ 276 - case HDFGWTR2_EL2: \ 277 - id = HDFGRTR2_GROUP; \ 278 - break; \ 279 - default: \ 280 - BUILD_BUG_ON(1); \ 281 - } \ 
282 - \ 283 - id; \ 284 - }) 285 - 286 - #define compute_undef_clr_set(vcpu, kvm, reg, clr, set) \ 287 - do { \ 288 - u64 hfg = kvm->arch.fgu[reg_to_fgt_group_id(reg)]; \ 289 - struct fgt_masks *m = reg_to_fgt_masks(reg); \ 290 - set |= hfg & m->mask; \ 291 - clr |= hfg & m->nmask; \ 292 - } while(0) 293 - 294 - #define update_fgt_traps_cs(hctxt, vcpu, kvm, reg, clr, set) \ 295 - do { \ 296 - struct fgt_masks *m = reg_to_fgt_masks(reg); \ 297 - u64 c = clr, s = set; \ 298 - u64 val; \ 299 - \ 300 - ctxt_sys_reg(hctxt, reg) = read_sysreg_s(SYS_ ## reg); \ 301 - if (is_nested_ctxt(vcpu)) \ 302 - compute_clr_set(vcpu, reg, c, s); \ 303 - \ 304 - compute_undef_clr_set(vcpu, kvm, reg, c, s); \ 305 - \ 306 - val = m->nmask; \ 307 - val |= s; \ 308 - val &= ~c; \ 309 - write_sysreg_s(val, SYS_ ## reg); \ 310 - } while(0) 311 - 312 - #define update_fgt_traps(hctxt, vcpu, kvm, reg) \ 313 - update_fgt_traps_cs(hctxt, vcpu, kvm, reg, 0, 0) 314 - 315 198 static inline bool cpu_has_amu(void) 316 199 { 317 200 u64 pfr0 = read_sysreg_s(SYS_ID_AA64PFR0_EL1); ··· 203 320 ID_AA64PFR0_EL1_AMU_SHIFT); 204 321 } 205 322 323 + #define __activate_fgt(hctxt, vcpu, reg) \ 324 + do { \ 325 + ctxt_sys_reg(hctxt, reg) = read_sysreg_s(SYS_ ## reg); \ 326 + write_sysreg_s(*vcpu_fgt(vcpu, reg), SYS_ ## reg); \ 327 + } while (0) 328 + 206 329 static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu) 207 330 { 208 331 struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt); 209 - struct kvm *kvm = kern_hyp_va(vcpu->kvm); 210 332 211 333 if (!cpus_have_final_cap(ARM64_HAS_FGT)) 212 334 return; 213 335 214 - update_fgt_traps(hctxt, vcpu, kvm, HFGRTR_EL2); 215 - update_fgt_traps_cs(hctxt, vcpu, kvm, HFGWTR_EL2, 0, 216 - cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38) ? 
217 - HFGWTR_EL2_TCR_EL1_MASK : 0); 218 - update_fgt_traps(hctxt, vcpu, kvm, HFGITR_EL2); 219 - update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR_EL2); 220 - update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR_EL2); 336 + __activate_fgt(hctxt, vcpu, HFGRTR_EL2); 337 + __activate_fgt(hctxt, vcpu, HFGWTR_EL2); 338 + __activate_fgt(hctxt, vcpu, HFGITR_EL2); 339 + __activate_fgt(hctxt, vcpu, HDFGRTR_EL2); 340 + __activate_fgt(hctxt, vcpu, HDFGWTR_EL2); 221 341 222 342 if (cpu_has_amu()) 223 - update_fgt_traps(hctxt, vcpu, kvm, HAFGRTR_EL2); 343 + __activate_fgt(hctxt, vcpu, HAFGRTR_EL2); 224 344 225 345 if (!cpus_have_final_cap(ARM64_HAS_FGT2)) 226 346 return; 227 347 228 - update_fgt_traps(hctxt, vcpu, kvm, HFGRTR2_EL2); 229 - update_fgt_traps(hctxt, vcpu, kvm, HFGWTR2_EL2); 230 - update_fgt_traps(hctxt, vcpu, kvm, HFGITR2_EL2); 231 - update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR2_EL2); 232 - update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR2_EL2); 348 + __activate_fgt(hctxt, vcpu, HFGRTR2_EL2); 349 + __activate_fgt(hctxt, vcpu, HFGWTR2_EL2); 350 + __activate_fgt(hctxt, vcpu, HFGITR2_EL2); 351 + __activate_fgt(hctxt, vcpu, HDFGRTR2_EL2); 352 + __activate_fgt(hctxt, vcpu, HDFGWTR2_EL2); 233 353 } 234 354 235 355 #define __deactivate_fgt(htcxt, vcpu, reg) \
+1
arch/arm64/kvm/hyp/nvhe/pkvm.c
··· 172 172 173 173 /* Trust the host for non-protected vcpu features. */ 174 174 vcpu->arch.hcrx_el2 = host_vcpu->arch.hcrx_el2; 175 + memcpy(vcpu->arch.fgt, host_vcpu->arch.fgt, sizeof(vcpu->arch.fgt)); 175 176 return 0; 176 177 } 177 178
+6 -3
arch/arm64/kvm/nested.c
··· 1859 1859 { 1860 1860 u64 guest_mdcr = __vcpu_sys_reg(vcpu, MDCR_EL2); 1861 1861 1862 + if (is_nested_ctxt(vcpu)) 1863 + vcpu->arch.mdcr_el2 |= (guest_mdcr & NV_MDCR_GUEST_INCLUDE); 1862 1864 /* 1863 1865 * In yet another example where FEAT_NV2 is fscking broken, accesses 1864 1866 * to MDSCR_EL1 are redirected to the VNCR despite having an effect 1865 1867 * at EL2. Use a big hammer to apply sanity. 1868 + * 1869 + * Unless of course we have FEAT_FGT, in which case we can precisely 1870 + * trap MDSCR_EL1. 1866 1871 */ 1867 - if (is_hyp_ctxt(vcpu)) 1872 + else if (!cpus_have_final_cap(ARM64_HAS_FGT)) 1868 1873 vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA; 1869 - else 1870 - vcpu->arch.mdcr_el2 |= (guest_mdcr & NV_MDCR_GUEST_INCLUDE); 1871 1874 }
+105 -26
arch/arm64/kvm/sys_regs.c
··· 203 203 MAPPED_EL2_SYSREG(AMAIR_EL2, AMAIR_EL1, NULL ); 204 204 MAPPED_EL2_SYSREG(ELR_EL2, ELR_EL1, NULL ); 205 205 MAPPED_EL2_SYSREG(SPSR_EL2, SPSR_EL1, NULL ); 206 - MAPPED_EL2_SYSREG(ZCR_EL2, ZCR_EL1, NULL ); 207 206 MAPPED_EL2_SYSREG(CONTEXTIDR_EL2, CONTEXTIDR_EL1, NULL ); 208 207 MAPPED_EL2_SYSREG(SCTLR2_EL2, SCTLR2_EL1, NULL ); 209 208 case CNTHCTL_EL2: ··· 1594 1595 return true; 1595 1596 } 1596 1597 1597 - static bool access_hv_timer(struct kvm_vcpu *vcpu, 1598 - struct sys_reg_params *p, 1599 - const struct sys_reg_desc *r) 1598 + static int arch_timer_set_user(struct kvm_vcpu *vcpu, 1599 + const struct sys_reg_desc *rd, 1600 + u64 val) 1600 1601 { 1601 - if (!vcpu_el2_e2h_is_set(vcpu)) 1602 - return undef_access(vcpu, p, r); 1602 + switch (reg_to_encoding(rd)) { 1603 + case SYS_CNTV_CTL_EL0: 1604 + case SYS_CNTP_CTL_EL0: 1605 + case SYS_CNTHV_CTL_EL2: 1606 + case SYS_CNTHP_CTL_EL2: 1607 + val &= ~ARCH_TIMER_CTRL_IT_STAT; 1608 + break; 1609 + case SYS_CNTVCT_EL0: 1610 + if (!test_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET, &vcpu->kvm->arch.flags)) 1611 + timer_set_offset(vcpu_vtimer(vcpu), kvm_phys_timer_read() - val); 1612 + return 0; 1613 + case SYS_CNTPCT_EL0: 1614 + if (!test_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET, &vcpu->kvm->arch.flags)) 1615 + timer_set_offset(vcpu_ptimer(vcpu), kvm_phys_timer_read() - val); 1616 + return 0; 1617 + } 1603 1618 1604 - return access_arch_timer(vcpu, p, r); 1619 + __vcpu_assign_sys_reg(vcpu, rd->reg, val); 1620 + return 0; 1621 + } 1622 + 1623 + static int arch_timer_get_user(struct kvm_vcpu *vcpu, 1624 + const struct sys_reg_desc *rd, 1625 + u64 *val) 1626 + { 1627 + switch (reg_to_encoding(rd)) { 1628 + case SYS_CNTVCT_EL0: 1629 + *val = kvm_phys_timer_read() - timer_get_offset(vcpu_vtimer(vcpu)); 1630 + break; 1631 + case SYS_CNTPCT_EL0: 1632 + *val = kvm_phys_timer_read() - timer_get_offset(vcpu_ptimer(vcpu)); 1633 + break; 1634 + default: 1635 + *val = __vcpu_sys_reg(vcpu, rd->reg); 1636 + } 1637 + 1638 + return 0; 1605 
1639 } 1606 1640 1607 1641 static s64 kvm_arm64_ftr_safe_value(u32 id, const struct arm64_ftr_bits *ftrp, ··· 2539 2507 "trap of EL2 register redirected to EL1"); 2540 2508 } 2541 2509 2542 - #define EL2_REG_FILTERED(name, acc, rst, v, filter) { \ 2510 + #define SYS_REG_USER_FILTER(name, acc, rst, v, gu, su, filter) { \ 2543 2511 SYS_DESC(SYS_##name), \ 2544 2512 .access = acc, \ 2545 2513 .reset = rst, \ 2546 2514 .reg = name, \ 2515 + .get_user = gu, \ 2516 + .set_user = su, \ 2547 2517 .visibility = filter, \ 2548 2518 .val = v, \ 2549 2519 } 2520 + 2521 + #define EL2_REG_FILTERED(name, acc, rst, v, filter) \ 2522 + SYS_REG_USER_FILTER(name, acc, rst, v, NULL, NULL, filter) 2550 2523 2551 2524 #define EL2_REG(name, acc, rst, v) \ 2552 2525 EL2_REG_FILTERED(name, acc, rst, v, el2_visibility) ··· 2562 2525 #define EL2_REG_VNCR_GICv3(name) \ 2563 2526 EL2_REG_VNCR_FILT(name, hidden_visibility) 2564 2527 #define EL2_REG_REDIR(name, rst, v) EL2_REG(name, bad_redir_trap, rst, v) 2528 + 2529 + #define TIMER_REG(name, vis) \ 2530 + SYS_REG_USER_FILTER(name, access_arch_timer, reset_val, 0, \ 2531 + arch_timer_get_user, arch_timer_set_user, vis) 2565 2532 2566 2533 /* 2567 2534 * Since reset() callback and field val are not used for idregs, they will be ··· 2746 2705 2747 2706 if (guest_hyp_sve_traps_enabled(vcpu)) { 2748 2707 kvm_inject_nested_sve_trap(vcpu); 2749 - return true; 2708 + return false; 2750 2709 } 2751 2710 2752 2711 if (!p->is_write) { 2753 - p->regval = vcpu_read_sys_reg(vcpu, ZCR_EL2); 2712 + p->regval = __vcpu_sys_reg(vcpu, ZCR_EL2); 2754 2713 return true; 2755 2714 } 2756 2715 2757 2716 vq = SYS_FIELD_GET(ZCR_ELx, LEN, p->regval) + 1; 2758 2717 vq = min(vq, vcpu_sve_max_vq(vcpu)); 2759 - vcpu_write_sys_reg(vcpu, vq - 1, ZCR_EL2); 2760 - 2718 + __vcpu_assign_sys_reg(vcpu, ZCR_EL2, vq - 1); 2761 2719 return true; 2762 2720 } 2763 2721 ··· 2871 2831 const struct sys_reg_desc *rd) 2872 2832 { 2873 2833 return __el2_visibility(vcpu, rd, s1pie_visibility); 
2834 + } 2835 + 2836 + static unsigned int cnthv_visibility(const struct kvm_vcpu *vcpu, 2837 + const struct sys_reg_desc *rd) 2838 + { 2839 + if (vcpu_has_nv(vcpu) && 2840 + !vcpu_has_feature(vcpu, KVM_ARM_VCPU_HAS_EL2_E2H0)) 2841 + return 0; 2842 + 2843 + return REG_HIDDEN; 2874 2844 } 2875 2845 2876 2846 static bool access_mdcr(struct kvm_vcpu *vcpu, ··· 3532 3482 AMU_AMEVTYPER1_EL0(14), 3533 3483 AMU_AMEVTYPER1_EL0(15), 3534 3484 3535 - { SYS_DESC(SYS_CNTPCT_EL0), access_arch_timer }, 3536 - { SYS_DESC(SYS_CNTVCT_EL0), access_arch_timer }, 3485 + { SYS_DESC(SYS_CNTPCT_EL0), .access = access_arch_timer, 3486 + .get_user = arch_timer_get_user, .set_user = arch_timer_set_user }, 3487 + { SYS_DESC(SYS_CNTVCT_EL0), .access = access_arch_timer, 3488 + .get_user = arch_timer_get_user, .set_user = arch_timer_set_user }, 3537 3489 { SYS_DESC(SYS_CNTPCTSS_EL0), access_arch_timer }, 3538 3490 { SYS_DESC(SYS_CNTVCTSS_EL0), access_arch_timer }, 3539 3491 { SYS_DESC(SYS_CNTP_TVAL_EL0), access_arch_timer }, 3540 - { SYS_DESC(SYS_CNTP_CTL_EL0), access_arch_timer }, 3541 - { SYS_DESC(SYS_CNTP_CVAL_EL0), access_arch_timer }, 3492 + TIMER_REG(CNTP_CTL_EL0, NULL), 3493 + TIMER_REG(CNTP_CVAL_EL0, NULL), 3542 3494 3543 3495 { SYS_DESC(SYS_CNTV_TVAL_EL0), access_arch_timer }, 3544 - { SYS_DESC(SYS_CNTV_CTL_EL0), access_arch_timer }, 3545 - { SYS_DESC(SYS_CNTV_CVAL_EL0), access_arch_timer }, 3496 + TIMER_REG(CNTV_CTL_EL0, NULL), 3497 + TIMER_REG(CNTV_CVAL_EL0, NULL), 3546 3498 3547 3499 /* PMEVCNTRn_EL0 */ 3548 3500 PMU_PMEVCNTR_EL0(0), ··· 3742 3690 EL2_REG_VNCR(CNTVOFF_EL2, reset_val, 0), 3743 3691 EL2_REG(CNTHCTL_EL2, access_rw, reset_val, 0), 3744 3692 { SYS_DESC(SYS_CNTHP_TVAL_EL2), access_arch_timer }, 3745 - EL2_REG(CNTHP_CTL_EL2, access_arch_timer, reset_val, 0), 3746 - EL2_REG(CNTHP_CVAL_EL2, access_arch_timer, reset_val, 0), 3693 + TIMER_REG(CNTHP_CTL_EL2, el2_visibility), 3694 + TIMER_REG(CNTHP_CVAL_EL2, el2_visibility), 3747 3695 3748 - { SYS_DESC(SYS_CNTHV_TVAL_EL2), 
access_hv_timer }, 3749 - EL2_REG(CNTHV_CTL_EL2, access_hv_timer, reset_val, 0), 3750 - EL2_REG(CNTHV_CVAL_EL2, access_hv_timer, reset_val, 0), 3696 + { SYS_DESC(SYS_CNTHV_TVAL_EL2), access_arch_timer, .visibility = cnthv_visibility }, 3697 + TIMER_REG(CNTHV_CTL_EL2, cnthv_visibility), 3698 + TIMER_REG(CNTHV_CVAL_EL2, cnthv_visibility), 3751 3699 3752 3700 { SYS_DESC(SYS_CNTKCTL_EL12), access_cntkctl_el12 }, 3753 3701 ··· 5285 5233 } 5286 5234 } 5287 5235 5236 + static u64 kvm_one_reg_to_id(const struct kvm_one_reg *reg) 5237 + { 5238 + switch(reg->id) { 5239 + case KVM_REG_ARM_TIMER_CVAL: 5240 + return TO_ARM64_SYS_REG(CNTV_CVAL_EL0); 5241 + case KVM_REG_ARM_TIMER_CNT: 5242 + return TO_ARM64_SYS_REG(CNTVCT_EL0); 5243 + default: 5244 + return reg->id; 5245 + } 5246 + } 5247 + 5288 5248 int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg, 5289 5249 const struct sys_reg_desc table[], unsigned int num) 5290 5250 { 5291 5251 u64 __user *uaddr = (u64 __user *)(unsigned long)reg->addr; 5292 5252 const struct sys_reg_desc *r; 5253 + u64 id = kvm_one_reg_to_id(reg); 5293 5254 u64 val; 5294 5255 int ret; 5295 5256 5296 - r = id_to_sys_reg_desc(vcpu, reg->id, table, num); 5257 + r = id_to_sys_reg_desc(vcpu, id, table, num); 5297 5258 if (!r || sysreg_hidden(vcpu, r)) 5298 5259 return -ENOENT; 5299 5260 ··· 5339 5274 { 5340 5275 u64 __user *uaddr = (u64 __user *)(unsigned long)reg->addr; 5341 5276 const struct sys_reg_desc *r; 5277 + u64 id = kvm_one_reg_to_id(reg); 5342 5278 u64 val; 5343 5279 int ret; 5344 5280 5345 5281 if (get_user(val, uaddr)) 5346 5282 return -EFAULT; 5347 5283 5348 - r = id_to_sys_reg_desc(vcpu, reg->id, table, num); 5284 + r = id_to_sys_reg_desc(vcpu, id, table, num); 5349 5285 if (!r || sysreg_hidden(vcpu, r)) 5350 5286 return -ENOENT; 5351 5287 ··· 5406 5340 5407 5341 static bool copy_reg_to_user(const struct sys_reg_desc *reg, u64 __user **uind) 5408 5342 { 5343 + u64 idx; 5344 + 5409 5345 if (!*uind) 5410 5346 return 
true; 5411 5347 5412 - if (put_user(sys_reg_to_index(reg), *uind)) 5348 + switch (reg_to_encoding(reg)) { 5349 + case SYS_CNTV_CVAL_EL0: 5350 + idx = KVM_REG_ARM_TIMER_CVAL; 5351 + break; 5352 + case SYS_CNTVCT_EL0: 5353 + idx = KVM_REG_ARM_TIMER_CNT; 5354 + break; 5355 + default: 5356 + idx = sys_reg_to_index(reg); 5357 + } 5358 + 5359 + if (put_user(idx, *uind)) 5413 5360 return false; 5414 5361 5415 5362 (*uind)++;
+6
arch/arm64/kvm/sys_regs.h
··· 257 257 (val); \ 258 258 }) 259 259 260 + #define TO_ARM64_SYS_REG(r) ARM64_SYS_REG(sys_reg_Op0(SYS_ ## r), \ 261 + sys_reg_Op1(SYS_ ## r), \ 262 + sys_reg_CRn(SYS_ ## r), \ 263 + sys_reg_CRm(SYS_ ## r), \ 264 + sys_reg_Op2(SYS_ ## r)) 265 + 260 266 #endif /* __ARM64_KVM_SYS_REGS_LOCAL_H__ */
+4 -1
arch/arm64/kvm/vgic/vgic-v3.c
··· 297 297 { 298 298 struct vgic_v3_cpu_if *vgic_v3 = &vcpu->arch.vgic_cpu.vgic_v3; 299 299 300 + if (!vgic_is_v3(vcpu->kvm)) 301 + return; 302 + 300 303 /* Hide GICv3 sysreg if necessary */ 301 - if (!kvm_has_gicv3(vcpu->kvm)) { 304 + if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V2) { 302 305 vgic_v3->vgic_hcr |= (ICH_HCR_EL2_TALL0 | ICH_HCR_EL2_TALL1 | 303 306 ICH_HCR_EL2_TC); 304 307 return;
+3 -4
arch/hexagon/configs/comet_defconfig
··· 46 46 CONFIG_EXT2_FS_XATTR=y 47 47 CONFIG_EXT2_FS_POSIX_ACL=y 48 48 CONFIG_EXT2_FS_SECURITY=y 49 - CONFIG_EXT3_FS=y 50 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 51 - CONFIG_EXT3_FS_POSIX_ACL=y 52 - CONFIG_EXT3_FS_SECURITY=y 49 + CONFIG_EXT4_FS=y 50 + CONFIG_EXT4_FS_POSIX_ACL=y 51 + CONFIG_EXT4_FS_SECURITY=y 53 52 CONFIG_QUOTA=y 54 53 CONFIG_PROC_KCORE=y 55 54 CONFIG_TMPFS=y
+3 -3
arch/m68k/configs/stmark2_defconfig
··· 72 72 CONFIG_EXT2_FS_XATTR=y 73 73 CONFIG_EXT2_FS_POSIX_ACL=y 74 74 CONFIG_EXT2_FS_SECURITY=y 75 - CONFIG_EXT3_FS=y 76 - CONFIG_EXT3_FS_POSIX_ACL=y 77 - CONFIG_EXT3_FS_SECURITY=y 75 + CONFIG_EXT4_FS=y 76 + CONFIG_EXT4_FS_POSIX_ACL=y 77 + CONFIG_EXT4_FS_SECURITY=y 78 78 # CONFIG_FILE_LOCKING is not set 79 79 # CONFIG_DNOTIFY is not set 80 80 # CONFIG_INOTIFY_USER is not set
+1 -1
arch/microblaze/configs/mmu_defconfig
··· 73 73 CONFIG_UIO=y 74 74 CONFIG_UIO_PDRV_GENIRQ=y 75 75 CONFIG_UIO_DMEM_GENIRQ=y 76 - CONFIG_EXT3_FS=y 76 + CONFIG_EXT4_FS=y 77 77 # CONFIG_DNOTIFY is not set 78 78 CONFIG_TMPFS=y 79 79 CONFIG_CRAMFS=y
+3 -3
arch/mips/configs/bigsur_defconfig
··· 144 144 CONFIG_EXT2_FS_XATTR=y 145 145 CONFIG_EXT2_FS_POSIX_ACL=y 146 146 CONFIG_EXT2_FS_SECURITY=y 147 - CONFIG_EXT3_FS=m 148 - CONFIG_EXT3_FS_POSIX_ACL=y 149 - CONFIG_EXT3_FS_SECURITY=y 147 + CONFIG_EXT4_FS=m 148 + CONFIG_EXT4_FS_POSIX_ACL=y 149 + CONFIG_EXT4_FS_SECURITY=y 150 150 CONFIG_EXT4_FS=y 151 151 CONFIG_QUOTA=y 152 152 CONFIG_QUOTA_NETLINK_INTERFACE=y
+3 -3
arch/mips/configs/cobalt_defconfig
··· 59 59 CONFIG_EXT2_FS_XATTR=y 60 60 CONFIG_EXT2_FS_POSIX_ACL=y 61 61 CONFIG_EXT2_FS_SECURITY=y 62 - CONFIG_EXT3_FS=y 63 - CONFIG_EXT3_FS_POSIX_ACL=y 64 - CONFIG_EXT3_FS_SECURITY=y 62 + CONFIG_EXT4_FS=y 63 + CONFIG_EXT4_FS_POSIX_ACL=y 64 + CONFIG_EXT4_FS_SECURITY=y 65 65 CONFIG_PROC_KCORE=y 66 66 CONFIG_TMPFS=y 67 67 CONFIG_TMPFS_POSIX_ACL=y
+3 -3
arch/mips/configs/decstation_64_defconfig
··· 133 133 CONFIG_EXT2_FS_XATTR=y 134 134 CONFIG_EXT2_FS_POSIX_ACL=y 135 135 CONFIG_EXT2_FS_SECURITY=y 136 - CONFIG_EXT3_FS=y 137 - CONFIG_EXT3_FS_POSIX_ACL=y 138 - CONFIG_EXT3_FS_SECURITY=y 136 + CONFIG_EXT4_FS=y 137 + CONFIG_EXT4_FS_POSIX_ACL=y 138 + CONFIG_EXT4_FS_SECURITY=y 139 139 CONFIG_ISO9660_FS=y 140 140 CONFIG_JOLIET=y 141 141 CONFIG_PROC_KCORE=y
+3 -3
arch/mips/configs/decstation_defconfig
··· 129 129 CONFIG_EXT2_FS_XATTR=y 130 130 CONFIG_EXT2_FS_POSIX_ACL=y 131 131 CONFIG_EXT2_FS_SECURITY=y 132 - CONFIG_EXT3_FS=y 133 - CONFIG_EXT3_FS_POSIX_ACL=y 134 - CONFIG_EXT3_FS_SECURITY=y 132 + CONFIG_EXT4_FS=y 133 + CONFIG_EXT4_FS_POSIX_ACL=y 134 + CONFIG_EXT4_FS_SECURITY=y 135 135 CONFIG_ISO9660_FS=y 136 136 CONFIG_JOLIET=y 137 137 CONFIG_PROC_KCORE=y
+3 -3
arch/mips/configs/decstation_r4k_defconfig
··· 129 129 CONFIG_EXT2_FS_XATTR=y 130 130 CONFIG_EXT2_FS_POSIX_ACL=y 131 131 CONFIG_EXT2_FS_SECURITY=y 132 - CONFIG_EXT3_FS=y 133 - CONFIG_EXT3_FS_POSIX_ACL=y 134 - CONFIG_EXT3_FS_SECURITY=y 132 + CONFIG_EXT4_FS=y 133 + CONFIG_EXT4_FS_POSIX_ACL=y 134 + CONFIG_EXT4_FS_SECURITY=y 135 135 CONFIG_ISO9660_FS=y 136 136 CONFIG_JOLIET=y 137 137 CONFIG_PROC_KCORE=y
+1 -1
arch/mips/configs/fuloong2e_defconfig
··· 173 173 CONFIG_UIO=m 174 174 CONFIG_UIO_CIF=m 175 175 CONFIG_EXT2_FS=y 176 - CONFIG_EXT3_FS=y 176 + CONFIG_EXT4_FS=y 177 177 CONFIG_EXT4_FS_POSIX_ACL=y 178 178 CONFIG_EXT4_FS_SECURITY=y 179 179 CONFIG_AUTOFS_FS=y
+3 -3
arch/mips/configs/ip22_defconfig
··· 232 232 CONFIG_RTC_INTF_DEV_UIE_EMUL=y 233 233 CONFIG_RTC_DRV_DS1286=y 234 234 CONFIG_EXT2_FS=m 235 - CONFIG_EXT3_FS=y 236 - CONFIG_EXT3_FS_POSIX_ACL=y 237 - CONFIG_EXT3_FS_SECURITY=y 235 + CONFIG_EXT4_FS=y 236 + CONFIG_EXT4_FS_POSIX_ACL=y 237 + CONFIG_EXT4_FS_SECURITY=y 238 238 CONFIG_XFS_FS=m 239 239 CONFIG_XFS_QUOTA=y 240 240 CONFIG_QUOTA=y
+3 -3
arch/mips/configs/ip27_defconfig
··· 272 272 CONFIG_EXT2_FS_XATTR=y 273 273 CONFIG_EXT2_FS_POSIX_ACL=y 274 274 CONFIG_EXT2_FS_SECURITY=y 275 - CONFIG_EXT3_FS=y 276 - CONFIG_EXT3_FS_POSIX_ACL=y 277 - CONFIG_EXT3_FS_SECURITY=y 275 + CONFIG_EXT4_FS=y 276 + CONFIG_EXT4_FS_POSIX_ACL=y 277 + CONFIG_EXT4_FS_SECURITY=y 278 278 CONFIG_XFS_FS=m 279 279 CONFIG_XFS_QUOTA=y 280 280 CONFIG_XFS_POSIX_ACL=y
+3 -3
arch/mips/configs/ip28_defconfig
··· 49 49 CONFIG_INDYDOG=y 50 50 # CONFIG_VGA_CONSOLE is not set 51 51 CONFIG_EXT2_FS=y 52 - CONFIG_EXT3_FS=y 53 - CONFIG_EXT3_FS_POSIX_ACL=y 54 - CONFIG_EXT3_FS_SECURITY=y 52 + CONFIG_EXT4_FS=y 53 + CONFIG_EXT4_FS_POSIX_ACL=y 54 + CONFIG_EXT4_FS_SECURITY=y 55 55 CONFIG_QUOTA=y 56 56 CONFIG_PROC_KCORE=y 57 57 # CONFIG_PROC_PAGE_MONITOR is not set
+3 -3
arch/mips/configs/ip30_defconfig
··· 143 143 CONFIG_EXT2_FS_XATTR=y 144 144 CONFIG_EXT2_FS_POSIX_ACL=y 145 145 CONFIG_EXT2_FS_SECURITY=y 146 - CONFIG_EXT3_FS=y 147 - CONFIG_EXT3_FS_POSIX_ACL=y 148 - CONFIG_EXT3_FS_SECURITY=y 146 + CONFIG_EXT4_FS=y 147 + CONFIG_EXT4_FS_POSIX_ACL=y 148 + CONFIG_EXT4_FS_SECURITY=y 149 149 CONFIG_XFS_FS=m 150 150 CONFIG_XFS_QUOTA=y 151 151 CONFIG_XFS_POSIX_ACL=y
+3 -3
arch/mips/configs/ip32_defconfig
··· 89 89 CONFIG_EXT2_FS_XATTR=y 90 90 CONFIG_EXT2_FS_POSIX_ACL=y 91 91 CONFIG_EXT2_FS_SECURITY=y 92 - CONFIG_EXT3_FS=y 93 - CONFIG_EXT3_FS_POSIX_ACL=y 94 - CONFIG_EXT3_FS_SECURITY=y 92 + CONFIG_EXT4_FS=y 93 + CONFIG_EXT4_FS_POSIX_ACL=y 94 + CONFIG_EXT4_FS_SECURITY=y 95 95 CONFIG_QUOTA=y 96 96 CONFIG_QFMT_V1=m 97 97 CONFIG_QFMT_V2=m
+1 -1
arch/mips/configs/jazz_defconfig
··· 69 69 CONFIG_FRAMEBUFFER_CONSOLE=y 70 70 # CONFIG_HWMON is not set 71 71 CONFIG_EXT2_FS=m 72 - CONFIG_EXT3_FS=y 72 + CONFIG_EXT4_FS=y 73 73 CONFIG_XFS_FS=m 74 74 CONFIG_XFS_QUOTA=y 75 75 CONFIG_AUTOFS_FS=m
+3 -3
arch/mips/configs/lemote2f_defconfig
··· 226 226 CONFIG_LEDS_CLASS=y 227 227 CONFIG_STAGING=y 228 228 CONFIG_EXT2_FS=m 229 - CONFIG_EXT3_FS=y 230 - CONFIG_EXT3_FS_POSIX_ACL=y 231 - CONFIG_EXT3_FS_SECURITY=y 229 + CONFIG_EXT4_FS=y 230 + CONFIG_EXT4_FS_POSIX_ACL=y 231 + CONFIG_EXT4_FS_SECURITY=y 232 232 CONFIG_JFS_FS=m 233 233 CONFIG_JFS_POSIX_ACL=y 234 234 CONFIG_XFS_FS=m
+3 -3
arch/mips/configs/loongson2k_defconfig
··· 298 298 CONFIG_EXT2_FS_XATTR=y 299 299 CONFIG_EXT2_FS_POSIX_ACL=y 300 300 CONFIG_EXT2_FS_SECURITY=y 301 - CONFIG_EXT3_FS=y 302 - CONFIG_EXT3_FS_POSIX_ACL=y 303 - CONFIG_EXT3_FS_SECURITY=y 301 + CONFIG_EXT4_FS=y 302 + CONFIG_EXT4_FS_POSIX_ACL=y 303 + CONFIG_EXT4_FS_SECURITY=y 304 304 CONFIG_XFS_FS=y 305 305 CONFIG_XFS_QUOTA=y 306 306 CONFIG_XFS_POSIX_ACL=y
+3 -3
arch/mips/configs/loongson3_defconfig
··· 348 348 CONFIG_EXT2_FS_XATTR=y 349 349 CONFIG_EXT2_FS_POSIX_ACL=y 350 350 CONFIG_EXT2_FS_SECURITY=y 351 - CONFIG_EXT3_FS=y 352 - CONFIG_EXT3_FS_POSIX_ACL=y 353 - CONFIG_EXT3_FS_SECURITY=y 351 + CONFIG_EXT4_FS=y 352 + CONFIG_EXT4_FS_POSIX_ACL=y 353 + CONFIG_EXT4_FS_SECURITY=y 354 354 CONFIG_XFS_FS=y 355 355 CONFIG_XFS_POSIX_ACL=y 356 356 CONFIG_QUOTA=y
+1 -1
arch/mips/configs/malta_defconfig
··· 313 313 CONFIG_UIO=m 314 314 CONFIG_UIO_CIF=m 315 315 CONFIG_EXT2_FS=y 316 - CONFIG_EXT3_FS=y 316 + CONFIG_EXT4_FS=y 317 317 CONFIG_JFS_FS=m 318 318 CONFIG_JFS_POSIX_ACL=y 319 319 CONFIG_JFS_SECURITY=y
+1 -1
arch/mips/configs/malta_kvm_defconfig
··· 319 319 CONFIG_UIO=m 320 320 CONFIG_UIO_CIF=m 321 321 CONFIG_EXT2_FS=y 322 - CONFIG_EXT3_FS=y 322 + CONFIG_EXT4_FS=y 323 323 CONFIG_JFS_FS=m 324 324 CONFIG_JFS_POSIX_ACL=y 325 325 CONFIG_JFS_SECURITY=y
+1 -1
arch/mips/configs/malta_qemu_32r6_defconfig
··· 148 148 CONFIG_RTC_CLASS=y 149 149 CONFIG_RTC_DRV_CMOS=y 150 150 CONFIG_EXT2_FS=y 151 - CONFIG_EXT3_FS=y 151 + CONFIG_EXT4_FS=y 152 152 CONFIG_XFS_FS=y 153 153 CONFIG_XFS_QUOTA=y 154 154 CONFIG_XFS_POSIX_ACL=y
+1 -1
arch/mips/configs/maltaaprp_defconfig
··· 149 149 CONFIG_RTC_CLASS=y 150 150 CONFIG_RTC_DRV_CMOS=y 151 151 CONFIG_EXT2_FS=y 152 - CONFIG_EXT3_FS=y 152 + CONFIG_EXT4_FS=y 153 153 CONFIG_XFS_FS=y 154 154 CONFIG_XFS_QUOTA=y 155 155 CONFIG_XFS_POSIX_ACL=y
+3 -3
arch/mips/configs/maltasmvp_defconfig
··· 148 148 CONFIG_RTC_CLASS=y 149 149 CONFIG_RTC_DRV_CMOS=y 150 150 CONFIG_EXT2_FS=y 151 - CONFIG_EXT3_FS=y 152 - CONFIG_EXT3_FS_POSIX_ACL=y 153 - CONFIG_EXT3_FS_SECURITY=y 151 + CONFIG_EXT4_FS=y 152 + CONFIG_EXT4_FS_POSIX_ACL=y 153 + CONFIG_EXT4_FS_SECURITY=y 154 154 CONFIG_XFS_FS=y 155 155 CONFIG_XFS_QUOTA=y 156 156 CONFIG_XFS_POSIX_ACL=y
+1 -1
arch/mips/configs/maltasmvp_eva_defconfig
··· 152 152 CONFIG_RTC_CLASS=y 153 153 CONFIG_RTC_DRV_CMOS=y 154 154 CONFIG_EXT2_FS=y 155 - CONFIG_EXT3_FS=y 155 + CONFIG_EXT4_FS=y 156 156 CONFIG_XFS_FS=y 157 157 CONFIG_XFS_QUOTA=y 158 158 CONFIG_XFS_POSIX_ACL=y
+1 -1
arch/mips/configs/maltaup_defconfig
··· 148 148 CONFIG_RTC_CLASS=y 149 149 CONFIG_RTC_DRV_CMOS=y 150 150 CONFIG_EXT2_FS=y 151 - CONFIG_EXT3_FS=y 151 + CONFIG_EXT4_FS=y 152 152 CONFIG_XFS_FS=y 153 153 CONFIG_XFS_QUOTA=y 154 154 CONFIG_XFS_POSIX_ACL=y
+1 -1
arch/mips/configs/maltaup_xpa_defconfig
··· 319 319 CONFIG_UIO=m 320 320 CONFIG_UIO_CIF=m 321 321 CONFIG_EXT2_FS=y 322 - CONFIG_EXT3_FS=y 322 + CONFIG_EXT4_FS=y 323 323 CONFIG_JFS_FS=m 324 324 CONFIG_JFS_POSIX_ACL=y 325 325 CONFIG_JFS_SECURITY=y
+3 -3
arch/mips/configs/mtx1_defconfig
··· 595 595 CONFIG_EXT2_FS_XATTR=y 596 596 CONFIG_EXT2_FS_POSIX_ACL=y 597 597 CONFIG_EXT2_FS_SECURITY=y 598 - CONFIG_EXT3_FS=m 599 - CONFIG_EXT3_FS_POSIX_ACL=y 600 - CONFIG_EXT3_FS_SECURITY=y 598 + CONFIG_EXT4_FS=m 599 + CONFIG_EXT4_FS_POSIX_ACL=y 600 + CONFIG_EXT4_FS_SECURITY=y 601 601 CONFIG_QUOTA=y 602 602 CONFIG_AUTOFS_FS=y 603 603 CONFIG_FUSE_FS=m
+1 -1
arch/mips/configs/rm200_defconfig
··· 307 307 CONFIG_USB_LD=m 308 308 CONFIG_USB_TEST=m 309 309 CONFIG_EXT2_FS=m 310 - CONFIG_EXT3_FS=y 310 + CONFIG_EXT4_FS=y 311 311 CONFIG_XFS_FS=m 312 312 CONFIG_XFS_QUOTA=y 313 313 CONFIG_AUTOFS_FS=m
+1 -1
arch/openrisc/configs/or1klitex_defconfig
··· 38 38 # CONFIG_IOMMU_SUPPORT is not set 39 39 CONFIG_LITEX_SOC_CONTROLLER=y 40 40 CONFIG_EXT2_FS=y 41 - CONFIG_EXT3_FS=y 41 + CONFIG_EXT4_FS=y 42 42 CONFIG_MSDOS_FS=y 43 43 CONFIG_VFAT_FS=y 44 44 CONFIG_EXFAT_FS=y
+2 -2
arch/openrisc/configs/virt_defconfig
··· 94 94 CONFIG_VIRTIO_INPUT=y 95 95 CONFIG_VIRTIO_MMIO=y 96 96 CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y 97 - CONFIG_EXT3_FS=y 98 - CONFIG_EXT3_FS_POSIX_ACL=y 97 + CONFIG_EXT4_FS=y 98 + CONFIG_EXT4_FS_POSIX_ACL=y 99 99 # CONFIG_DNOTIFY is not set 100 100 CONFIG_MSDOS_FS=y 101 101 CONFIG_VFAT_FS=y
+2 -2
arch/parisc/configs/generic-32bit_defconfig
··· 232 232 CONFIG_EXT2_FS=y 233 233 CONFIG_EXT2_FS_XATTR=y 234 234 CONFIG_EXT2_FS_SECURITY=y 235 - CONFIG_EXT3_FS=y 236 - CONFIG_EXT3_FS_SECURITY=y 235 + CONFIG_EXT4_FS=y 236 + CONFIG_EXT4_FS_SECURITY=y 237 237 CONFIG_QUOTA=y 238 238 CONFIG_QUOTA_NETLINK_INTERFACE=y 239 239 CONFIG_QFMT_V2=y
+2 -2
arch/parisc/configs/generic-64bit_defconfig
··· 251 251 CONFIG_EXT2_FS=y 252 252 CONFIG_EXT2_FS_XATTR=y 253 253 CONFIG_EXT2_FS_SECURITY=y 254 - CONFIG_EXT3_FS=y 255 - CONFIG_EXT3_FS_SECURITY=y 254 + CONFIG_EXT4_FS=y 255 + CONFIG_EXT4_FS_SECURITY=y 256 256 CONFIG_XFS_FS=m 257 257 CONFIG_BTRFS_FS=m 258 258 CONFIG_QUOTA=y
+3
arch/powerpc/kernel/fadump.c
··· 1747 1747 { 1748 1748 phys_addr_t range_start, range_end; 1749 1749 1750 + if (!fw_dump.fadump_enabled) 1751 + return; 1752 + 1750 1753 if (!fw_dump.param_area_supported || fw_dump.dump_active) 1751 1754 return; 1752 1755
+4 -8
arch/powerpc/kvm/book3s_xive.c
··· 916 916 * it fires once. 917 917 */ 918 918 if (single_escalation) { 919 - struct irq_data *d = irq_get_irq_data(xc->esc_virq[prio]); 920 - struct xive_irq_data *xd = irq_data_get_irq_handler_data(d); 919 + struct xive_irq_data *xd = irq_get_chip_data(xc->esc_virq[prio]); 921 920 922 921 xive_vm_esb_load(xd, XIVE_ESB_SET_PQ_01); 923 922 vcpu->arch.xive_esc_raddr = xd->eoi_page; ··· 1611 1612 1612 1613 /* Grab info about irq */ 1613 1614 state->pt_number = hw_irq; 1614 - state->pt_data = irq_data_get_irq_handler_data(host_data); 1615 + state->pt_data = irq_data_get_irq_chip_data(host_data); 1615 1616 1616 1617 /* 1617 1618 * Configure the IRQ to match the existing configuration of ··· 1786 1787 */ 1787 1788 void xive_cleanup_single_escalation(struct kvm_vcpu *vcpu, int irq) 1788 1789 { 1789 - struct irq_data *d = irq_get_irq_data(irq); 1790 - struct xive_irq_data *xd = irq_data_get_irq_handler_data(d); 1790 + struct xive_irq_data *xd = irq_get_chip_data(irq); 1791 1791 1792 1792 /* 1793 1793 * This slightly odd sequence gives the right result ··· 2825 2827 i0, i1); 2826 2828 } 2827 2829 if (xc->esc_virq[i]) { 2828 - struct irq_data *d = irq_get_irq_data(xc->esc_virq[i]); 2829 - struct xive_irq_data *xd = 2830 - irq_data_get_irq_handler_data(d); 2830 + struct xive_irq_data *xd = irq_get_chip_data(xc->esc_virq[i]); 2831 2831 u64 pq = xive_vm_esb_load(xd, XIVE_ESB_GET); 2832 2832 2833 2833 seq_printf(m, " ESC %d %c%c EOI @%llx",
+1 -1
arch/powerpc/platforms/powernv/vas.c
··· 121 121 return -EINVAL; 122 122 } 123 123 124 - xd = irq_get_handler_data(vinst->virq); 124 + xd = irq_get_chip_data(vinst->virq); 125 125 if (!xd) { 126 126 pr_err("Inst%d: Invalid virq %d\n", 127 127 vinst->vas_id, vinst->virq);
+1 -2
arch/powerpc/platforms/pseries/msi.c
··· 443 443 */ 444 444 static void pseries_msi_ops_teardown(struct irq_domain *domain, msi_alloc_info_t *arg) 445 445 { 446 - struct msi_desc *desc = arg->desc; 447 - struct pci_dev *pdev = msi_desc_to_pci_dev(desc); 446 + struct pci_dev *pdev = to_pci_dev(domain->dev); 448 447 449 448 rtas_disable_msi(pdev); 450 449 }
+1 -1
arch/powerpc/sysdev/xive/common.c
··· 1580 1580 cpu, irq); 1581 1581 #endif 1582 1582 raw_spin_lock(&desc->lock); 1583 - xd = irq_desc_get_handler_data(desc); 1583 + xd = irq_desc_get_chip_data(desc); 1584 1584 1585 1585 /* 1586 1586 * Clear saved_p to indicate that it's no longer pending
+1 -1
arch/riscv/Kconfig
··· 29 29 select ARCH_HAS_DEBUG_VIRTUAL if MMU 30 30 select ARCH_HAS_DEBUG_VM_PGTABLE 31 31 select ARCH_HAS_DEBUG_WX 32 - select ARCH_HAS_ELF_CORE_EFLAGS 32 + select ARCH_HAS_ELF_CORE_EFLAGS if BINFMT_ELF && ELF_CORE 33 33 select ARCH_HAS_FAST_MULTIPLIER 34 34 select ARCH_HAS_FORTIFY_SOURCE 35 35 select ARCH_HAS_GCOV_PROFILE_ALL
+7 -2
arch/riscv/include/asm/kgdb.h
··· 3 3 #ifndef __ASM_KGDB_H_ 4 4 #define __ASM_KGDB_H_ 5 5 6 + #include <linux/build_bug.h> 7 + 6 8 #ifdef __KERNEL__ 7 9 8 10 #define GDB_SIZEOF_REG sizeof(unsigned long) 9 11 10 - #define DBG_MAX_REG_NUM (36) 11 - #define NUMREGBYTES ((DBG_MAX_REG_NUM) * GDB_SIZEOF_REG) 12 + #define DBG_MAX_REG_NUM 36 13 + #define NUMREGBYTES (DBG_MAX_REG_NUM * GDB_SIZEOF_REG) 12 14 #define CACHE_FLUSH_IS_SAFE 1 13 15 #define BUFMAX 2048 16 + static_assert(BUFMAX > NUMREGBYTES, 17 + "As per KGDB documentation, BUFMAX must be larger than NUMREGBYTES"); 14 18 #ifdef CONFIG_RISCV_ISA_C 15 19 #define BREAK_INSTR_SIZE 2 16 20 #else ··· 101 97 #define DBG_REG_STATUS_OFF 33 102 98 #define DBG_REG_BADADDR_OFF 34 103 99 #define DBG_REG_CAUSE_OFF 35 100 + /* NOTE: increase DBG_MAX_REG_NUM if you add more values here. */ 104 101 105 102 extern const char riscv_gdb_stub_feature[64]; 106 103
+1
arch/riscv/kernel/cpu-hotplug.c
··· 54 54 55 55 pr_notice("CPU%u: off\n", cpu); 56 56 57 + clear_tasks_mm_cpumask(cpu); 57 58 /* Verify from the firmware if the cpu is really stopped*/ 58 59 if (cpu_ops->cpu_is_stopped) 59 60 ret = cpu_ops->cpu_is_stopped(cpu);
+1 -1
arch/riscv/kernel/entry.S
··· 455 455 RISCV_PTR do_trap_ecall_s 456 456 RISCV_PTR do_trap_unknown 457 457 RISCV_PTR do_trap_ecall_m 458 - /* instruciton page fault */ 458 + /* instruction page fault */ 459 459 ALT_PAGE_FAULT(RISCV_PTR do_page_fault) 460 460 RISCV_PTR do_page_fault /* load page fault */ 461 461 RISCV_PTR do_trap_unknown
+9 -4
arch/riscv/kernel/probes/kprobes.c
··· 49 49 post_kprobe_handler(p, kcb, regs); 50 50 } 51 51 52 - static bool __kprobes arch_check_kprobe(struct kprobe *p) 52 + static bool __kprobes arch_check_kprobe(unsigned long addr) 53 53 { 54 - unsigned long tmp = (unsigned long)p->addr - p->offset; 55 - unsigned long addr = (unsigned long)p->addr; 54 + unsigned long tmp, offset; 55 + 56 + /* start iterating at the closest preceding symbol */ 57 + if (!kallsyms_lookup_size_offset(addr, NULL, &offset)) 58 + return false; 59 + 60 + tmp = addr - offset; 56 61 57 62 while (tmp <= addr) { 58 63 if (tmp == addr) ··· 76 71 if ((unsigned long)insn & 0x1) 77 72 return -EILSEQ; 78 73 79 - if (!arch_check_kprobe(p)) 74 + if (!arch_check_kprobe((unsigned long)p->addr)) 80 75 return -EILSEQ; 81 76 82 77 /* copy instruction */
+5 -2
arch/riscv/kernel/setup.c
··· 331 331 /* Parse the ACPI tables for possible boot-time configuration */ 332 332 acpi_boot_table_init(); 333 333 334 + if (acpi_disabled) { 334 335 #if IS_ENABLED(CONFIG_BUILTIN_DTB) 335 - unflatten_and_copy_device_tree(); 336 + unflatten_and_copy_device_tree(); 336 337 #else 337 - unflatten_device_tree(); 338 + unflatten_device_tree(); 338 339 #endif 340 + } 341 + 339 342 misc_mem_init(); 340 343 341 344 init_resources();
+2 -2
arch/riscv/kernel/tests/kprobes/test-kprobes.h
··· 11 11 #define KPROBE_TEST_MAGIC_LOWER 0x0000babe 12 12 #define KPROBE_TEST_MAGIC_UPPER 0xcafe0000 13 13 14 - #ifndef __ASSEMBLY__ 14 + #ifndef __ASSEMBLER__ 15 15 16 16 /* array of addresses to install kprobes */ 17 17 extern void *test_kprobes_addresses[]; ··· 19 19 /* array of functions that return KPROBE_TEST_MAGIC */ 20 20 extern long (*test_kprobes_functions[])(void); 21 21 22 - #endif /* __ASSEMBLY__ */ 22 + #endif /* __ASSEMBLER__ */ 23 23 24 24 #endif /* TEST_KPROBES_H */
+3 -4
arch/sh/configs/ap325rxa_defconfig
··· 81 81 CONFIG_EXT2_FS_XATTR=y 82 82 CONFIG_EXT2_FS_POSIX_ACL=y 83 83 CONFIG_EXT2_FS_SECURITY=y 84 - CONFIG_EXT3_FS=y 85 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 86 - CONFIG_EXT3_FS_POSIX_ACL=y 87 - CONFIG_EXT3_FS_SECURITY=y 84 + CONFIG_EXT4_FS=y 85 + CONFIG_EXT4_FS_POSIX_ACL=y 86 + CONFIG_EXT4_FS_SECURITY=y 88 87 CONFIG_VFAT_FS=y 89 88 CONFIG_PROC_KCORE=y 90 89 CONFIG_TMPFS=y
+1 -2
arch/sh/configs/apsh4a3a_defconfig
··· 60 60 CONFIG_LOGO=y 61 61 # CONFIG_USB_SUPPORT is not set 62 62 CONFIG_EXT2_FS=y 63 - CONFIG_EXT3_FS=y 64 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 63 + CONFIG_EXT4_FS=y 65 64 CONFIG_MSDOS_FS=y 66 65 CONFIG_VFAT_FS=y 67 66 CONFIG_NTFS_FS=y
+1 -2
arch/sh/configs/apsh4ad0a_defconfig
··· 88 88 CONFIG_USB_OHCI_HCD=y 89 89 CONFIG_USB_STORAGE=y 90 90 CONFIG_EXT2_FS=y 91 - CONFIG_EXT3_FS=y 92 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 91 + CONFIG_EXT4_FS=y 93 92 CONFIG_MSDOS_FS=y 94 93 CONFIG_VFAT_FS=y 95 94 CONFIG_NTFS_FS=y
+3 -4
arch/sh/configs/ecovec24_defconfig
··· 109 109 CONFIG_EXT2_FS_XATTR=y 110 110 CONFIG_EXT2_FS_POSIX_ACL=y 111 111 CONFIG_EXT2_FS_SECURITY=y 112 - CONFIG_EXT3_FS=y 113 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 114 - CONFIG_EXT3_FS_POSIX_ACL=y 115 - CONFIG_EXT3_FS_SECURITY=y 112 + CONFIG_EXT4_FS=y 113 + CONFIG_EXT4_FS_POSIX_ACL=y 114 + CONFIG_EXT4_FS_SECURITY=y 116 115 CONFIG_VFAT_FS=y 117 116 CONFIG_PROC_KCORE=y 118 117 CONFIG_TMPFS=y
+1 -2
arch/sh/configs/edosk7760_defconfig
··· 87 87 CONFIG_EXT2_FS=y 88 88 CONFIG_EXT2_FS_XATTR=y 89 89 CONFIG_EXT2_FS_XIP=y 90 - CONFIG_EXT3_FS=y 91 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 90 + CONFIG_EXT4_FS=y 92 91 CONFIG_TMPFS=y 93 92 CONFIG_TMPFS_POSIX_ACL=y 94 93 CONFIG_NFS_FS=y
+1 -2
arch/sh/configs/espt_defconfig
··· 59 59 CONFIG_USB_OHCI_HCD=y 60 60 CONFIG_USB_STORAGE=y 61 61 CONFIG_EXT2_FS=y 62 - CONFIG_EXT3_FS=y 63 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 62 + CONFIG_EXT4_FS=y 64 63 CONFIG_AUTOFS_FS=y 65 64 CONFIG_PROC_KCORE=y 66 65 CONFIG_TMPFS=y
+1 -2
arch/sh/configs/landisk_defconfig
··· 93 93 CONFIG_USB_EMI26=m 94 94 CONFIG_USB_SISUSBVGA=m 95 95 CONFIG_EXT2_FS=y 96 - CONFIG_EXT3_FS=y 97 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 96 + CONFIG_EXT4_FS=y 98 97 CONFIG_ISO9660_FS=m 99 98 CONFIG_MSDOS_FS=y 100 99 CONFIG_VFAT_FS=y
+1 -2
arch/sh/configs/lboxre2_defconfig
··· 49 49 CONFIG_HW_RANDOM=y 50 50 CONFIG_RTC_CLASS=y 51 51 CONFIG_EXT2_FS=y 52 - CONFIG_EXT3_FS=y 53 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 52 + CONFIG_EXT4_FS=y 54 53 CONFIG_MSDOS_FS=y 55 54 CONFIG_VFAT_FS=y 56 55 CONFIG_TMPFS=y
+2 -3
arch/sh/configs/magicpanelr2_defconfig
··· 64 64 # CONFIG_RTC_HCTOSYS is not set 65 65 CONFIG_RTC_DRV_SH=y 66 66 CONFIG_EXT2_FS=y 67 - CONFIG_EXT3_FS=y 68 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 69 - # CONFIG_EXT3_FS_XATTR is not set 67 + CONFIG_EXT4_FS=y 68 + # CONFIG_EXT4_FS_XATTR is not set 70 69 # CONFIG_DNOTIFY is not set 71 70 CONFIG_PROC_KCORE=y 72 71 CONFIG_TMPFS=y
+1 -2
arch/sh/configs/r7780mp_defconfig
··· 74 74 CONFIG_RTC_DRV_RS5C372=y 75 75 CONFIG_RTC_DRV_SH=y 76 76 CONFIG_EXT2_FS=y 77 - CONFIG_EXT3_FS=y 78 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 77 + CONFIG_EXT4_FS=y 79 78 CONFIG_FUSE_FS=m 80 79 CONFIG_MSDOS_FS=y 81 80 CONFIG_VFAT_FS=y
+1 -2
arch/sh/configs/r7785rp_defconfig
··· 69 69 CONFIG_RTC_DRV_RS5C372=y 70 70 CONFIG_RTC_DRV_SH=y 71 71 CONFIG_EXT2_FS=y 72 - CONFIG_EXT3_FS=y 73 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 72 + CONFIG_EXT4_FS=y 74 73 CONFIG_FUSE_FS=m 75 74 CONFIG_MSDOS_FS=y 76 75 CONFIG_VFAT_FS=y
+1 -2
arch/sh/configs/rsk7264_defconfig
··· 59 59 CONFIG_USB_STORAGE=y 60 60 CONFIG_USB_STORAGE_DEBUG=y 61 61 CONFIG_EXT2_FS=y 62 - CONFIG_EXT3_FS=y 63 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 62 + CONFIG_EXT4_FS=y 64 63 CONFIG_VFAT_FS=y 65 64 CONFIG_NFS_FS=y 66 65 CONFIG_NFS_V3=y
+1 -2
arch/sh/configs/rsk7269_defconfig
··· 43 43 CONFIG_USB_STORAGE=y 44 44 CONFIG_USB_STORAGE_DEBUG=y 45 45 CONFIG_EXT2_FS=y 46 - CONFIG_EXT3_FS=y 47 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 46 + CONFIG_EXT4_FS=y 48 47 CONFIG_VFAT_FS=y 49 48 CONFIG_NFS_FS=y 50 49 CONFIG_NFS_V3=y
+2 -3
arch/sh/configs/sdk7780_defconfig
··· 102 102 CONFIG_EXT2_FS=y 103 103 CONFIG_EXT2_FS_XATTR=y 104 104 CONFIG_EXT2_FS_POSIX_ACL=y 105 - CONFIG_EXT3_FS=y 106 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 107 - CONFIG_EXT3_FS_POSIX_ACL=y 105 + CONFIG_EXT4_FS=y 106 + CONFIG_EXT4_FS_POSIX_ACL=y 108 107 CONFIG_AUTOFS_FS=y 109 108 CONFIG_ISO9660_FS=y 110 109 CONFIG_MSDOS_FS=y
+1 -2
arch/sh/configs/sdk7786_defconfig
··· 161 161 # CONFIG_STAGING_EXCLUDE_BUILD is not set 162 162 CONFIG_EXT2_FS=y 163 163 CONFIG_EXT2_FS_XATTR=y 164 - CONFIG_EXT3_FS=y 165 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 164 + CONFIG_EXT4_FS=y 166 165 CONFIG_EXT4_FS=y 167 166 CONFIG_XFS_FS=y 168 167 CONFIG_BTRFS_FS=y
+1 -2
arch/sh/configs/se7343_defconfig
··· 84 84 CONFIG_USB_ISP116X_HCD=y 85 85 CONFIG_UIO=y 86 86 CONFIG_EXT2_FS=y 87 - CONFIG_EXT3_FS=y 88 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 87 + CONFIG_EXT4_FS=y 89 88 # CONFIG_DNOTIFY is not set 90 89 CONFIG_JFFS2_FS=y 91 90 CONFIG_CRAMFS=y
+1 -2
arch/sh/configs/se7712_defconfig
··· 83 83 CONFIG_EXT2_FS_XATTR=y 84 84 CONFIG_EXT2_FS_POSIX_ACL=y 85 85 CONFIG_EXT2_FS_SECURITY=y 86 - CONFIG_EXT3_FS=y 87 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 86 + CONFIG_EXT4_FS=y 88 87 # CONFIG_DNOTIFY is not set 89 88 CONFIG_JFFS2_FS=y 90 89 CONFIG_CRAMFS=y
+1 -2
arch/sh/configs/se7721_defconfig
··· 107 107 CONFIG_EXT2_FS_XATTR=y 108 108 CONFIG_EXT2_FS_POSIX_ACL=y 109 109 CONFIG_EXT2_FS_SECURITY=y 110 - CONFIG_EXT3_FS=y 111 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 110 + CONFIG_EXT4_FS=y 112 111 # CONFIG_DNOTIFY is not set 113 112 CONFIG_MSDOS_FS=y 114 113 CONFIG_VFAT_FS=y
+1 -2
arch/sh/configs/se7722_defconfig
··· 44 44 CONFIG_RTC_CLASS=y 45 45 CONFIG_RTC_DRV_SH=y 46 46 CONFIG_EXT2_FS=y 47 - CONFIG_EXT3_FS=y 48 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 47 + CONFIG_EXT4_FS=y 49 48 CONFIG_PROC_KCORE=y 50 49 CONFIG_TMPFS=y 51 50 CONFIG_HUGETLBFS=y
+3 -4
arch/sh/configs/se7724_defconfig
··· 110 110 CONFIG_EXT2_FS_XATTR=y 111 111 CONFIG_EXT2_FS_POSIX_ACL=y 112 112 CONFIG_EXT2_FS_SECURITY=y 113 - CONFIG_EXT3_FS=y 114 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 115 - CONFIG_EXT3_FS_POSIX_ACL=y 116 - CONFIG_EXT3_FS_SECURITY=y 113 + CONFIG_EXT4_FS=y 114 + CONFIG_EXT4_FS_POSIX_ACL=y 115 + CONFIG_EXT4_FS_SECURITY=y 117 116 CONFIG_VFAT_FS=y 118 117 CONFIG_PROC_KCORE=y 119 118 CONFIG_TMPFS=y
+2 -3
arch/sh/configs/sh03_defconfig
··· 57 57 CONFIG_SH_WDT=m 58 58 CONFIG_EXT2_FS=y 59 59 CONFIG_EXT2_FS_XATTR=y 60 - CONFIG_EXT3_FS=y 61 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 62 - CONFIG_EXT3_FS_POSIX_ACL=y 60 + CONFIG_EXT4_FS=y 61 + CONFIG_EXT4_FS_POSIX_ACL=y 63 62 CONFIG_AUTOFS_FS=y 64 63 CONFIG_ISO9660_FS=m 65 64 CONFIG_JOLIET=y
+1 -1
arch/sh/configs/sh2007_defconfig
··· 95 95 CONFIG_RTC_INTF_DEV_UIE_EMUL=y 96 96 CONFIG_DMADEVICES=y 97 97 CONFIG_TIMB_DMA=y 98 - CONFIG_EXT3_FS=y 98 + CONFIG_EXT4_FS=y 99 99 CONFIG_ISO9660_FS=y 100 100 CONFIG_JOLIET=y 101 101 CONFIG_ZISOFS=y
+1 -1
arch/sh/configs/sh7757lcr_defconfig
··· 64 64 CONFIG_MMC_SDHI=y 65 65 CONFIG_MMC_SH_MMCIF=y 66 66 CONFIG_EXT2_FS=y 67 - CONFIG_EXT3_FS=y 67 + CONFIG_EXT4_FS=y 68 68 CONFIG_ISO9660_FS=y 69 69 CONFIG_VFAT_FS=y 70 70 CONFIG_PROC_KCORE=y
+1 -2
arch/sh/configs/sh7763rdp_defconfig
··· 61 61 CONFIG_USB_STORAGE=y 62 62 CONFIG_MMC=y 63 63 CONFIG_EXT2_FS=y 64 - CONFIG_EXT3_FS=y 65 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 64 + CONFIG_EXT4_FS=y 66 65 CONFIG_AUTOFS_FS=y 67 66 CONFIG_MSDOS_FS=y 68 67 CONFIG_VFAT_FS=y
+1 -2
arch/sh/configs/sh7785lcr_32bit_defconfig
··· 113 113 CONFIG_DMADEVICES=y 114 114 CONFIG_UIO=m 115 115 CONFIG_EXT2_FS=y 116 - CONFIG_EXT3_FS=y 117 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 116 + CONFIG_EXT4_FS=y 118 117 CONFIG_MSDOS_FS=y 119 118 CONFIG_VFAT_FS=y 120 119 CONFIG_NTFS_FS=y
+1 -2
arch/sh/configs/sh7785lcr_defconfig
··· 90 90 CONFIG_RTC_CLASS=y 91 91 CONFIG_RTC_DRV_RS5C372=y 92 92 CONFIG_EXT2_FS=y 93 - CONFIG_EXT3_FS=y 94 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 93 + CONFIG_EXT4_FS=y 95 94 CONFIG_MSDOS_FS=y 96 95 CONFIG_VFAT_FS=y 97 96 CONFIG_NTFS_FS=y
+1 -2
arch/sh/configs/shx3_defconfig
··· 84 84 CONFIG_RTC_DRV_SH=y 85 85 CONFIG_UIO=m 86 86 CONFIG_EXT2_FS=y 87 - CONFIG_EXT3_FS=y 88 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 87 + CONFIG_EXT4_FS=y 89 88 CONFIG_PROC_KCORE=y 90 89 CONFIG_TMPFS=y 91 90 CONFIG_HUGETLBFS=y
+2 -3
arch/sh/configs/titan_defconfig
··· 215 215 CONFIG_RTC_CLASS=y 216 216 CONFIG_RTC_DRV_SH=m 217 217 CONFIG_EXT2_FS=y 218 - CONFIG_EXT3_FS=y 219 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 220 - # CONFIG_EXT3_FS_XATTR is not set 218 + CONFIG_EXT4_FS=y 219 + # CONFIG_EXT4_FS_XATTR is not set 221 220 CONFIG_XFS_FS=m 222 221 CONFIG_FUSE_FS=m 223 222 CONFIG_ISO9660_FS=m
+1 -2
arch/sh/configs/ul2_defconfig
··· 66 66 CONFIG_USB_STORAGE=y 67 67 CONFIG_MMC=y 68 68 CONFIG_EXT2_FS=y 69 - CONFIG_EXT3_FS=y 70 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 69 + CONFIG_EXT4_FS=y 71 70 CONFIG_VFAT_FS=y 72 71 CONFIG_PROC_KCORE=y 73 72 CONFIG_TMPFS=y
+1 -2
arch/sh/configs/urquell_defconfig
··· 114 114 CONFIG_RTC_DRV_SH=y 115 115 CONFIG_RTC_DRV_GENERIC=y 116 116 CONFIG_EXT2_FS=y 117 - CONFIG_EXT3_FS=y 118 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 117 + CONFIG_EXT4_FS=y 119 118 CONFIG_EXT4_FS=y 120 119 CONFIG_BTRFS_FS=y 121 120 CONFIG_MSDOS_FS=y
+3 -4
arch/sparc/configs/sparc64_defconfig
··· 187 187 CONFIG_EXT2_FS_XATTR=y 188 188 CONFIG_EXT2_FS_POSIX_ACL=y 189 189 CONFIG_EXT2_FS_SECURITY=y 190 - CONFIG_EXT3_FS=y 191 - # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 192 - CONFIG_EXT3_FS_POSIX_ACL=y 193 - CONFIG_EXT3_FS_SECURITY=y 190 + CONFIG_EXT4_FS=y 191 + CONFIG_EXT4_FS_POSIX_ACL=y 192 + CONFIG_EXT4_FS_SECURITY=y 194 193 CONFIG_PROC_KCORE=y 195 194 CONFIG_TMPFS=y 196 195 CONFIG_HUGETLBFS=y
+14 -2
arch/x86/kernel/cpu/amd.c
··· 1355 1355 return 0; 1356 1356 1357 1357 value = ioread32(addr); 1358 - iounmap(addr); 1359 1358 1360 1359 /* Value with "all bits set" is an error response and should be ignored. */ 1361 - if (value == U32_MAX) 1360 + if (value == U32_MAX) { 1361 + iounmap(addr); 1362 1362 return 0; 1363 + } 1364 + 1365 + /* 1366 + * Clear all reason bits so they won't be retained if the next reset 1367 + * does not update the register. Besides, some bits are never cleared by 1368 + * hardware so it's software's responsibility to clear them. 1369 + * 1370 + * Writing the value back effectively clears all reason bits as they are 1371 + * write-1-to-clear. 1372 + */ 1373 + iowrite32(value, addr); 1374 + iounmap(addr); 1363 1375 1364 1376 for (i = 0; i < ARRAY_SIZE(s5_reset_reason_txt); i++) { 1365 1377 if (!(value & BIT(i)))
+10 -4
arch/x86/kernel/cpu/resctrl/monitor.c
··· 242 242 u32 unused, u32 rmid, enum resctrl_event_id eventid, 243 243 u64 *val, void *ignored) 244 244 { 245 + struct rdt_hw_mon_domain *hw_dom = resctrl_to_arch_mon_dom(d); 245 246 int cpu = cpumask_any(&d->hdr.cpu_mask); 247 + struct arch_mbm_state *am; 246 248 u64 msr_val; 247 249 u32 prmid; 248 250 int ret; ··· 253 251 254 252 prmid = logical_rmid_to_physical_rmid(cpu, rmid); 255 253 ret = __rmid_read_phys(prmid, eventid, &msr_val); 256 - if (ret) 257 - return ret; 258 254 259 - *val = get_corrected_val(r, d, rmid, eventid, msr_val); 255 + if (!ret) { 256 + *val = get_corrected_val(r, d, rmid, eventid, msr_val); 257 + } else if (ret == -EINVAL) { 258 + am = get_arch_mbm_state(hw_dom, rmid, eventid); 259 + if (am) 260 + am->prev_msr = 0; 261 + } 260 262 261 - return 0; 263 + return ret; 262 264 } 263 265 264 266 static int __cntr_id_read(u32 cntr_id, u64 *val)
+5 -3
arch/x86/kvm/pmu.c
··· 108 108 bool is_intel = boot_cpu_data.x86_vendor == X86_VENDOR_INTEL; 109 109 int min_nr_gp_ctrs = pmu_ops->MIN_NR_GP_COUNTERS; 110 110 111 - perf_get_x86_pmu_capability(&kvm_host_pmu); 112 - 113 111 /* 114 112 * Hybrid PMUs don't play nice with virtualization without careful 115 113 * configuration by userspace, and KVM's APIs for reporting supported 116 114 * vPMU features do not account for hybrid PMUs. Disable vPMU support 117 115 * for hybrid PMUs until KVM gains a way to let userspace opt-in. 118 116 */ 119 - if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU)) 117 + if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU)) { 120 118 enable_pmu = false; 119 + memset(&kvm_host_pmu, 0, sizeof(kvm_host_pmu)); 120 + } else { 121 + perf_get_x86_pmu_capability(&kvm_host_pmu); 122 + } 121 123 122 124 if (enable_pmu) { 123 125 /*
+4 -3
arch/x86/kvm/x86.c
··· 13941 13941 13942 13942 #ifdef CONFIG_KVM_GUEST_MEMFD 13943 13943 /* 13944 - * KVM doesn't yet support mmap() on guest_memfd for VMs with private memory 13945 - * (the private vs. shared tracking needs to be moved into guest_memfd). 13944 + * KVM doesn't yet support initializing guest_memfd memory as shared for VMs 13945 + * with private memory (the private vs. shared tracking needs to be moved into 13946 + * guest_memfd). 13946 13947 */ 13947 - bool kvm_arch_supports_gmem_mmap(struct kvm *kvm) 13948 + bool kvm_arch_supports_gmem_init_shared(struct kvm *kvm) 13948 13949 { 13949 13950 return !kvm_arch_has_private_mem(kvm); 13950 13951 }
+1 -1
arch/x86/mm/pat/set_memory.c
··· 446 446 } 447 447 448 448 start = fix_addr(__cpa_addr(cpa, 0)); 449 - end = fix_addr(__cpa_addr(cpa, cpa->numpages)); 449 + end = start + cpa->numpages * PAGE_SIZE; 450 450 if (cpa->force_flush_all) 451 451 end = TLB_FLUSH_ALL; 452 452
+22 -2
arch/x86/mm/tlb.c
··· 911 911 * CR3 and cpu_tlbstate.loaded_mm are not all in sync. 912 912 */ 913 913 this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING); 914 - barrier(); 915 914 916 - /* Start receiving IPIs and then read tlb_gen (and LAM below) */ 915 + /* 916 + * Make sure this CPU is set in mm_cpumask() such that we'll 917 + * receive invalidation IPIs. 918 + * 919 + * Rely on the smp_mb() implied by cpumask_set_cpu()'s atomic 920 + * operation, or explicitly provide one. Such that: 921 + * 922 + * switch_mm_irqs_off() flush_tlb_mm_range() 923 + * smp_store_release(loaded_mm, SWITCHING); atomic64_inc_return(tlb_gen) 924 + * smp_mb(); // here // smp_mb() implied 925 + * atomic64_read(tlb_gen); this_cpu_read(loaded_mm); 926 + * 927 + * we properly order against flush_tlb_mm_range(), where the 928 + * loaded_mm load can happen in native_flush_tlb_multi() -> 929 + * should_flush_tlb(). 930 + * 931 + * This way switch_mm() must see the new tlb_gen or 932 + * flush_tlb_mm_range() must see the new loaded_mm, or both. 933 + */ 917 934 if (next != &init_mm && !cpumask_test_cpu(cpu, mm_cpumask(next))) 918 935 cpumask_set_cpu(cpu, mm_cpumask(next)); 936 + else 937 + smp_mb(); 938 + 919 939 next_tlb_gen = atomic64_read(&next->context.tlb_gen); 920 940 921 941 ns = choose_new_asid(next, next_tlb_gen);
+1 -1
arch/xtensa/configs/audio_kc705_defconfig
··· 103 103 # CONFIG_USB_SUPPORT is not set 104 104 CONFIG_COMMON_CLK_CDCE706=y 105 105 # CONFIG_IOMMU_SUPPORT is not set 106 - CONFIG_EXT3_FS=y 106 + CONFIG_EXT4_FS=y 107 107 CONFIG_EXT4_FS=y 108 108 CONFIG_FANOTIFY=y 109 109 CONFIG_VFAT_FS=y
+1 -1
arch/xtensa/configs/cadence_csp_defconfig
··· 80 80 # CONFIG_VGA_CONSOLE is not set 81 81 # CONFIG_USB_SUPPORT is not set 82 82 # CONFIG_IOMMU_SUPPORT is not set 83 - CONFIG_EXT3_FS=y 83 + CONFIG_EXT4_FS=y 84 84 CONFIG_FANOTIFY=y 85 85 CONFIG_VFAT_FS=y 86 86 CONFIG_PROC_KCORE=y
+1 -1
arch/xtensa/configs/generic_kc705_defconfig
··· 90 90 # CONFIG_VGA_CONSOLE is not set 91 91 # CONFIG_USB_SUPPORT is not set 92 92 # CONFIG_IOMMU_SUPPORT is not set 93 - CONFIG_EXT3_FS=y 93 + CONFIG_EXT4_FS=y 94 94 CONFIG_EXT4_FS=y 95 95 CONFIG_FANOTIFY=y 96 96 CONFIG_VFAT_FS=y
+1 -1
arch/xtensa/configs/nommu_kc705_defconfig
··· 91 91 CONFIG_SOFT_WATCHDOG=y 92 92 # CONFIG_VGA_CONSOLE is not set 93 93 # CONFIG_USB_SUPPORT is not set 94 - CONFIG_EXT3_FS=y 94 + CONFIG_EXT4_FS=y 95 95 CONFIG_EXT4_FS=y 96 96 CONFIG_FANOTIFY=y 97 97 CONFIG_VFAT_FS=y
+1 -1
arch/xtensa/configs/smp_lx200_defconfig
··· 94 94 # CONFIG_VGA_CONSOLE is not set 95 95 # CONFIG_USB_SUPPORT is not set 96 96 # CONFIG_IOMMU_SUPPORT is not set 97 - CONFIG_EXT3_FS=y 97 + CONFIG_EXT4_FS=y 98 98 CONFIG_EXT4_FS=y 99 99 CONFIG_FANOTIFY=y 100 100 CONFIG_VFAT_FS=y
+1 -1
arch/xtensa/configs/virt_defconfig
··· 76 76 CONFIG_VIRTIO_PCI=y 77 77 CONFIG_VIRTIO_INPUT=y 78 78 # CONFIG_IOMMU_SUPPORT is not set 79 - CONFIG_EXT3_FS=y 79 + CONFIG_EXT4_FS=y 80 80 CONFIG_FANOTIFY=y 81 81 CONFIG_VFAT_FS=y 82 82 CONFIG_PROC_KCORE=y
+1 -1
arch/xtensa/configs/xip_kc705_defconfig
··· 82 82 # CONFIG_VGA_CONSOLE is not set 83 83 # CONFIG_USB_SUPPORT is not set 84 84 # CONFIG_IOMMU_SUPPORT is not set 85 - CONFIG_EXT3_FS=y 85 + CONFIG_EXT4_FS=y 86 86 CONFIG_FANOTIFY=y 87 87 CONFIG_VFAT_FS=y 88 88 CONFIG_PROC_KCORE=y
+4 -9
block/blk-cgroup.c
··· 812 812 } 813 813 /* 814 814 * Similar to blkg_conf_open_bdev, but additionally freezes the queue, 815 - * acquires q->elevator_lock, and ensures the correct locking order 816 - * between q->elevator_lock and q->rq_qos_mutex. 815 + * ensures the correct locking order between freeze queue and q->rq_qos_mutex. 817 816 * 818 817 * This function returns negative error on failure. On success it returns 819 818 * memflags which must be saved and later passed to blkg_conf_exit_frozen ··· 833 834 * At this point, we haven’t started protecting anything related to QoS, 834 835 * so we release q->rq_qos_mutex here, which was first acquired in blkg_ 835 836 * conf_open_bdev. Later, we re-acquire q->rq_qos_mutex after freezing 836 - * the queue and acquiring q->elevator_lock to maintain the correct 837 - * locking order. 837 + * the queue to maintain the correct locking order. 838 838 */ 839 839 mutex_unlock(&ctx->bdev->bd_queue->rq_qos_mutex); 840 840 841 841 memflags = blk_mq_freeze_queue(ctx->bdev->bd_queue); 842 - mutex_lock(&ctx->bdev->bd_queue->elevator_lock); 843 842 mutex_lock(&ctx->bdev->bd_queue->rq_qos_mutex); 844 843 845 844 return memflags; ··· 992 995 EXPORT_SYMBOL_GPL(blkg_conf_exit); 993 996 994 997 /* 995 - * Similar to blkg_conf_exit, but also unfreezes the queue and releases 996 - * q->elevator_lock. Should be used when blkg_conf_open_bdev_frozen 997 - * is used to open the bdev. 998 + * Similar to blkg_conf_exit, but also unfreezes the queue. Should be used 999 + * when blkg_conf_open_bdev_frozen is used to open the bdev. 998 1000 */ 999 1001 void blkg_conf_exit_frozen(struct blkg_conf_ctx *ctx, unsigned long memflags) 1000 1002 { ··· 1001 1005 struct request_queue *q = ctx->bdev->bd_queue; 1002 1006 1003 1007 blkg_conf_exit(ctx); 1004 - mutex_unlock(&q->elevator_lock); 1005 1008 blk_mq_unfreeze_queue(q, memflags); 1006 1009 } 1007 1010 }
+1 -1
block/blk-mq-sched.c
··· 557 557 if (blk_mq_is_shared_tags(flags)) { 558 558 /* Shared tags are stored at index 0 in @et->tags. */ 559 559 q->sched_shared_tags = et->tags[0]; 560 - blk_mq_tag_update_sched_shared_tags(q); 560 + blk_mq_tag_update_sched_shared_tags(q, et->nr_requests); 561 561 } 562 562 563 563 queue_for_each_hw_ctx(q, hctx, i) {
+3 -2
block/blk-mq-tag.c
··· 622 622 sbitmap_queue_resize(&tags->bitmap_tags, size - set->reserved_tags); 623 623 } 624 624 625 - void blk_mq_tag_update_sched_shared_tags(struct request_queue *q) 625 + void blk_mq_tag_update_sched_shared_tags(struct request_queue *q, 626 + unsigned int nr) 626 627 { 627 628 sbitmap_queue_resize(&q->sched_shared_tags->bitmap_tags, 628 - q->nr_requests - q->tag_set->reserved_tags); 629 + nr - q->tag_set->reserved_tags); 629 630 } 630 631 631 632 /**
+1 -1
block/blk-mq.c
··· 4941 4941 * tags can't grow, see blk_mq_alloc_sched_tags(). 4942 4942 */ 4943 4943 if (q->elevator) 4944 - blk_mq_tag_update_sched_shared_tags(q); 4944 + blk_mq_tag_update_sched_shared_tags(q, nr); 4945 4945 else 4946 4946 blk_mq_tag_resize_shared_tags(set, nr); 4947 4947 } else if (!q->elevator) {
+2 -1
block/blk-mq.h
··· 186 186 void blk_mq_put_tags(struct blk_mq_tags *tags, int *tag_array, int nr_tags); 187 187 void blk_mq_tag_resize_shared_tags(struct blk_mq_tag_set *set, 188 188 unsigned int size); 189 - void blk_mq_tag_update_sched_shared_tags(struct request_queue *q); 189 + void blk_mq_tag_update_sched_shared_tags(struct request_queue *q, 190 + unsigned int nr); 190 191 191 192 void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool); 192 193 void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_tag_iter_fn *fn,
+2
drivers/accel/qaic/qaic.h
··· 97 97 * response queue's head and tail pointer of this DBC. 98 98 */ 99 99 void __iomem *dbc_base; 100 + /* Synchronizes access to Request queue's head and tail pointer */ 101 + struct mutex req_lock; 100 102 /* Head of list where each node is a memory handle queued in request queue */ 101 103 struct list_head xfer_list; 102 104 /* Synchronizes DBC readers during cleanup */
+1 -1
drivers/accel/qaic/qaic_control.c
··· 407 407 return -EINVAL; 408 408 remaining = in_trans->size - resources->xferred_dma_size; 409 409 if (remaining == 0) 410 - return 0; 410 + return -EINVAL; 411 411 412 412 if (check_add_overflow(xfer_start_addr, remaining, &end)) 413 413 return -EINVAL;
+10 -2
drivers/accel/qaic/qaic_data.c
··· 1356 1356 goto release_ch_rcu; 1357 1357 } 1358 1358 1359 + ret = mutex_lock_interruptible(&dbc->req_lock); 1360 + if (ret) 1361 + goto release_ch_rcu; 1362 + 1359 1363 head = readl(dbc->dbc_base + REQHP_OFF); 1360 1364 tail = readl(dbc->dbc_base + REQTP_OFF); 1361 1365 1362 1366 if (head == U32_MAX || tail == U32_MAX) { 1363 1367 /* PCI link error */ 1364 1368 ret = -ENODEV; 1365 - goto release_ch_rcu; 1369 + goto unlock_req_lock; 1366 1370 } 1367 1371 1368 1372 queue_level = head <= tail ? tail - head : dbc->nelem - (head - tail); ··· 1374 1370 ret = send_bo_list_to_device(qdev, file_priv, exec, args->hdr.count, is_partial, dbc, 1375 1371 head, &tail); 1376 1372 if (ret) 1377 - goto release_ch_rcu; 1373 + goto unlock_req_lock; 1378 1374 1379 1375 /* Finalize commit to hardware */ 1380 1376 submit_ts = ktime_get_ns(); 1381 1377 writel(tail, dbc->dbc_base + REQTP_OFF); 1378 + mutex_unlock(&dbc->req_lock); 1382 1379 1383 1380 update_profiling_data(file_priv, exec, args->hdr.count, is_partial, received_ts, 1384 1381 submit_ts, queue_level); ··· 1387 1382 if (datapath_polling) 1388 1383 schedule_work(&dbc->poll_work); 1389 1384 1385 + unlock_req_lock: 1386 + if (ret) 1387 + mutex_unlock(&dbc->req_lock); 1390 1388 release_ch_rcu: 1391 1389 srcu_read_unlock(&dbc->ch_lock, rcu_id); 1392 1390 unlock_dev_srcu:
+3 -2
drivers/accel/qaic/qaic_debugfs.c
··· 218 218 if (ret) 219 219 goto destroy_workqueue; 220 220 221 + dev_set_drvdata(&mhi_dev->dev, qdev); 222 + qdev->bootlog_ch = mhi_dev; 223 + 221 224 for (i = 0; i < BOOTLOG_POOL_SIZE; i++) { 222 225 msg = devm_kzalloc(&qdev->pdev->dev, sizeof(*msg), GFP_KERNEL); 223 226 if (!msg) { ··· 236 233 goto mhi_unprepare; 237 234 } 238 235 239 - dev_set_drvdata(&mhi_dev->dev, qdev); 240 - qdev->bootlog_ch = mhi_dev; 241 236 return 0; 242 237 243 238 mhi_unprepare:
+3
drivers/accel/qaic/qaic_drv.c
··· 454 454 return NULL; 455 455 init_waitqueue_head(&qdev->dbc[i].dbc_release); 456 456 INIT_LIST_HEAD(&qdev->dbc[i].bo_lists); 457 + ret = drmm_mutex_init(drm, &qdev->dbc[i].req_lock); 458 + if (ret) 459 + return NULL; 457 460 } 458 461 459 462 return qdev;
+4 -7
drivers/ata/libata-core.c
··· 2174 2174 } 2175 2175 2176 2176 version = get_unaligned_le16(&dev->gp_log_dir[0]); 2177 - if (version != 0x0001) { 2178 - ata_dev_err(dev, "Invalid log directory version 0x%04x\n", 2179 - version); 2180 - ata_clear_log_directory(dev); 2181 - dev->quirks |= ATA_QUIRK_NO_LOG_DIR; 2182 - return -EINVAL; 2183 - } 2177 + if (version != 0x0001) 2178 + ata_dev_warn_once(dev, 2179 + "Invalid log directory version 0x%04x\n", 2180 + version); 2184 2181 2185 2182 return 0; 2186 2183 }
+4 -1
drivers/char/ipmi/ipmi_msghandler.c
··· 2301 2301 if (supplied_recv) { 2302 2302 recv_msg = supplied_recv; 2303 2303 recv_msg->user = user; 2304 - if (user) 2304 + if (user) { 2305 2305 atomic_inc(&user->nr_msgs); 2306 + /* The put happens when the message is freed. */ 2307 + kref_get(&user->refcount); 2308 + } 2306 2309 } else { 2307 2310 recv_msg = ipmi_alloc_recv_msg(user); 2308 2311 if (IS_ERR(recv_msg))
+20 -9
drivers/char/tpm/tpm_crb.c
··· 133 133 { 134 134 return !(start_method == ACPI_TPM2_START_METHOD || 135 135 start_method == ACPI_TPM2_COMMAND_BUFFER_WITH_START_METHOD || 136 - start_method == ACPI_TPM2_COMMAND_BUFFER_WITH_ARM_SMC || 137 - start_method == ACPI_TPM2_CRB_WITH_ARM_FFA); 136 + start_method == ACPI_TPM2_COMMAND_BUFFER_WITH_ARM_SMC); 138 137 } 139 138 140 139 static bool crb_wait_for_reg_32(u32 __iomem *reg, u32 mask, u32 value, ··· 190 191 * 191 192 * Return: 0 always 192 193 */ 193 - static int __crb_go_idle(struct device *dev, struct crb_priv *priv) 194 + static int __crb_go_idle(struct device *dev, struct crb_priv *priv, int loc) 194 195 { 195 196 int rc; 196 197 ··· 198 199 return 0; 199 200 200 201 iowrite32(CRB_CTRL_REQ_GO_IDLE, &priv->regs_t->ctrl_req); 202 + 203 + if (priv->sm == ACPI_TPM2_CRB_WITH_ARM_FFA) { 204 + rc = tpm_crb_ffa_start(CRB_FFA_START_TYPE_COMMAND, loc); 205 + if (rc) 206 + return rc; 207 + } 201 208 202 209 rc = crb_try_pluton_doorbell(priv, true); 203 210 if (rc) ··· 225 220 struct device *dev = &chip->dev; 226 221 struct crb_priv *priv = dev_get_drvdata(dev); 227 222 228 - return __crb_go_idle(dev, priv); 223 + return __crb_go_idle(dev, priv, chip->locality); 229 224 } 230 225 231 226 /** ··· 243 238 * 244 239 * Return: 0 on success -ETIME on timeout; 245 240 */ 246 - static int __crb_cmd_ready(struct device *dev, struct crb_priv *priv) 241 + static int __crb_cmd_ready(struct device *dev, struct crb_priv *priv, int loc) 247 242 { 248 243 int rc; 249 244 ··· 251 246 return 0; 252 247 253 248 iowrite32(CRB_CTRL_REQ_CMD_READY, &priv->regs_t->ctrl_req); 249 + 250 + if (priv->sm == ACPI_TPM2_CRB_WITH_ARM_FFA) { 251 + rc = tpm_crb_ffa_start(CRB_FFA_START_TYPE_COMMAND, loc); 252 + if (rc) 253 + return rc; 254 + } 254 255 255 256 rc = crb_try_pluton_doorbell(priv, true); 256 257 if (rc) ··· 278 267 struct device *dev = &chip->dev; 279 268 struct crb_priv *priv = dev_get_drvdata(dev); 280 269 281 - return __crb_cmd_ready(dev, priv); 270 + return __crb_cmd_ready(dev, priv, chip->locality); 282 271 } 283 272 284 273 static int __crb_request_locality(struct device *dev, ··· 455 444 456 445 /* Seems to be necessary for every command */ 457 446 if (priv->sm == ACPI_TPM2_COMMAND_BUFFER_WITH_PLUTON) 458 - __crb_cmd_ready(&chip->dev, priv); 447 + __crb_cmd_ready(&chip->dev, priv, chip->locality); 459 448 460 449 memcpy_toio(priv->cmd, buf, len); 461 450 ··· 683 672 * PTT HW bug w/a: wake up the device to access 684 673 * possibly not retained registers. 685 674 */ 686 - ret = __crb_cmd_ready(dev, priv); 675 + ret = __crb_cmd_ready(dev, priv, 0); 687 676 if (ret) 688 677 goto out_relinquish_locality; 689 678 ··· 755 744 if (!ret) 756 745 priv->cmd_size = cmd_size; 757 746 758 - __crb_go_idle(dev, priv); 747 + __crb_go_idle(dev, priv, 0); 759 748 760 749 out_relinquish_locality: 761 750
+1 -1
drivers/cxl/acpi.c
··· 348 348 struct resource res; 349 349 int nid, rc; 350 350 351 - res = DEFINE_RES(start, size, 0); 351 + res = DEFINE_RES_MEM(start, size); 352 352 nid = phys_to_target_node(start); 353 353 354 354 rc = hmat_get_extended_linear_cache_size(&res, nid, &cache_size);
+3
drivers/cxl/core/features.c
··· 371 371 { 372 372 struct cxl_feat_entry *feat; 373 373 374 + if (!cxlfs || !cxlfs->entries) 375 + return ERR_PTR(-EOPNOTSUPP); 376 + 374 377 for (int i = 0; i < cxlfs->entries->num_features; i++) { 375 378 feat = &cxlfs->entries->ent[i]; 376 379 if (uuid_equal(uuid, &feat->uuid))
+14 -12
drivers/cxl/core/port.c
··· 1182 1182 if (rc) 1183 1183 return ERR_PTR(rc); 1184 1184 1185 + /* 1186 + * Setup port register if this is the first dport showed up. Having 1187 + * a dport also means that there is at least 1 active link. 1188 + */ 1189 + if (port->nr_dports == 1 && 1190 + port->component_reg_phys != CXL_RESOURCE_NONE) { 1191 + rc = cxl_port_setup_regs(port, port->component_reg_phys); 1192 + if (rc) { 1193 + xa_erase(&port->dports, (unsigned long)dport->dport_dev); 1194 + return ERR_PTR(rc); 1195 + } 1196 + port->component_reg_phys = CXL_RESOURCE_NONE; 1197 + } 1198 + 1185 1199 get_device(dport_dev); 1186 1200 rc = devm_add_action_or_reset(host, cxl_dport_remove, dport); 1187 1201 if (rc) ··· 1213 1199 dport->link_latency = cxl_pci_get_latency(to_pci_dev(dport_dev)); 1214 1200 1215 1201 cxl_debugfs_create_dport_dir(dport); 1216 - 1217 - /* 1218 - * Setup port register if this is the first dport showed up. Having 1219 - * a dport also means that there is at least 1 active link. 1220 - */ 1221 - if (port->nr_dports == 1 && 1222 - port->component_reg_phys != CXL_RESOURCE_NONE) { 1223 - rc = cxl_port_setup_regs(port, port->component_reg_phys); 1224 - if (rc) 1225 - return ERR_PTR(rc); 1226 - port->component_reg_phys = CXL_RESOURCE_NONE; 1227 - } 1228 1202 1229 1203 return dport; 1230 1204 }
+4 -7
drivers/cxl/core/region.c
··· 839 839 } 840 840 841 841 static bool region_res_match_cxl_range(const struct cxl_region_params *p, 842 - struct range *range) 842 + const struct range *range) 843 843 { 844 844 if (!p->res) 845 845 return false; ··· 3398 3398 p = &cxlr->params; 3399 3399 3400 3400 guard(rwsem_read)(&cxl_rwsem.region); 3401 - if (p->res && p->res->start == r->start && p->res->end == r->end) 3402 - return 1; 3403 - 3404 - return 0; 3401 + return region_res_match_cxl_range(p, r); 3405 3402 } 3406 3403 3407 3404 static int cxl_extended_linear_cache_resize(struct cxl_region *cxlr, ··· 3663 3666 3664 3667 if (offset < p->cache_size) { 3665 3668 dev_err(&cxlr->dev, 3666 - "Offset %#llx is within extended linear cache %pr\n", 3669 + "Offset %#llx is within extended linear cache %pa\n", 3667 3670 offset, &p->cache_size); 3668 3671 return -EINVAL; 3669 3672 } 3670 3673 3671 3674 region_size = resource_size(p->res); 3672 3675 if (offset >= region_size) { 3673 - dev_err(&cxlr->dev, "Offset %#llx exceeds region size %pr\n", 3676 + dev_err(&cxlr->dev, "Offset %#llx exceeds region size %pa\n", 3674 3677 offset, &region_size); 3675 3678 return -EINVAL; 3676 3679 }
+1 -1
drivers/cxl/core/trace.h
··· 1068 1068 __entry->hpa = cxl_dpa_to_hpa(cxlr, cxlmd, 1069 1069 __entry->dpa); 1070 1070 if (__entry->hpa != ULLONG_MAX && cxlr->params.cache_size) 1071 - __entry->hpa_alias0 = __entry->hpa + 1071 + __entry->hpa_alias0 = __entry->hpa - 1072 1072 cxlr->params.cache_size; 1073 1073 else 1074 1074 __entry->hpa_alias0 = ULLONG_MAX;
+22 -23
drivers/devfreq/event/rockchip-dfi.c
··· 20 20 #include <linux/of.h> 21 21 #include <linux/of_device.h> 22 22 #include <linux/bitfield.h> 23 + #include <linux/hw_bitfield.h> 23 24 #include <linux/bits.h> 24 25 #include <linux/perf_event.h> 25 26 ··· 31 30 32 31 #define DMC_MAX_CHANNELS 4 33 32 34 - #define HIWORD_UPDATE(val, mask) ((val) | (mask) << 16) 35 - 36 33 /* DDRMON_CTRL */ 37 34 #define DDRMON_CTRL 0x04 38 35 #define DDRMON_CTRL_LPDDR5 BIT(6) ··· 40 41 #define DDRMON_CTRL_LPDDR23 BIT(2) 41 42 #define DDRMON_CTRL_SOFTWARE_EN BIT(1) 42 43 #define DDRMON_CTRL_TIMER_CNT_EN BIT(0) 43 - #define DDRMON_CTRL_DDR_TYPE_MASK (DDRMON_CTRL_LPDDR5 | \ 44 - DDRMON_CTRL_DDR4 | \ 45 - DDRMON_CTRL_LPDDR4 | \ 46 - DDRMON_CTRL_LPDDR23) 47 44 #define DDRMON_CTRL_LP5_BANK_MODE_MASK GENMASK(8, 7) 48 45 49 46 #define DDRMON_CH0_WR_NUM 0x20 ··· 119 124 unsigned int count_multiplier; /* number of data clocks per count */ 120 125 }; 121 126 122 - static int rockchip_dfi_ddrtype_to_ctrl(struct rockchip_dfi *dfi, u32 *ctrl, 123 - u32 *mask) 127 + static int rockchip_dfi_ddrtype_to_ctrl(struct rockchip_dfi *dfi, u32 *ctrl) 124 128 { 125 129 u32 ddrmon_ver; 126 - 127 - *mask = DDRMON_CTRL_DDR_TYPE_MASK; 128 130 129 131 switch (dfi->ddr_type) { 130 132 case ROCKCHIP_DDRTYPE_LPDDR2: 131 133 case ROCKCHIP_DDRTYPE_LPDDR3: 132 - *ctrl = DDRMON_CTRL_LPDDR23; 134 + *ctrl = FIELD_PREP_WM16(DDRMON_CTRL_LPDDR23, 1) | 135 + FIELD_PREP_WM16(DDRMON_CTRL_LPDDR4, 0) | 136 + FIELD_PREP_WM16(DDRMON_CTRL_LPDDR5, 0); 133 137 break; 134 138 case ROCKCHIP_DDRTYPE_LPDDR4: 135 139 case ROCKCHIP_DDRTYPE_LPDDR4X: 136 - *ctrl = DDRMON_CTRL_LPDDR4; 140 + *ctrl = FIELD_PREP_WM16(DDRMON_CTRL_LPDDR23, 0) | 141 + FIELD_PREP_WM16(DDRMON_CTRL_LPDDR4, 1) | 142 + FIELD_PREP_WM16(DDRMON_CTRL_LPDDR5, 0); 137 143 break; 138 144 case ROCKCHIP_DDRTYPE_LPDDR5: 139 145 ddrmon_ver = readl_relaxed(dfi->regs); 140 146 if (ddrmon_ver < 0x40) { 141 - *ctrl = DDRMON_CTRL_LPDDR5 | dfi->lp5_bank_mode; 142 - *mask |= DDRMON_CTRL_LP5_BANK_MODE_MASK; 147 + *ctrl = FIELD_PREP_WM16(DDRMON_CTRL_LPDDR23, 0) | 148 + FIELD_PREP_WM16(DDRMON_CTRL_LPDDR4, 0) | 149 + FIELD_PREP_WM16(DDRMON_CTRL_LPDDR5, 1) | 150 + FIELD_PREP_WM16(DDRMON_CTRL_LP5_BANK_MODE_MASK, 151 + dfi->lp5_bank_mode); 143 152 break; 144 153 } 145 154 ··· 171 172 void __iomem *dfi_regs = dfi->regs; 172 173 int i, ret = 0; 173 174 u32 ctrl; 174 - u32 ctrl_mask; 175 175 176 176 mutex_lock(&dfi->mutex); 177 177 ··· 184 186 goto out; 185 187 } 186 188 187 - ret = rockchip_dfi_ddrtype_to_ctrl(dfi, &ctrl, &ctrl_mask); 189 + ret = rockchip_dfi_ddrtype_to_ctrl(dfi, &ctrl); 188 190 if (ret) 189 191 goto out; 190 192 ··· 194 196 continue; 195 197 196 198 /* clear DDRMON_CTRL setting */ 197 - writel_relaxed(HIWORD_UPDATE(0, DDRMON_CTRL_TIMER_CNT_EN | 198 - DDRMON_CTRL_SOFTWARE_EN | DDRMON_CTRL_HARDWARE_EN), 199 + writel_relaxed(FIELD_PREP_WM16(DDRMON_CTRL_TIMER_CNT_EN, 0) | 200 + FIELD_PREP_WM16(DDRMON_CTRL_SOFTWARE_EN, 0) | 201 + FIELD_PREP_WM16(DDRMON_CTRL_HARDWARE_EN, 0), 199 202 dfi_regs + i * dfi->ddrmon_stride + DDRMON_CTRL); 200 203 201 - writel_relaxed(HIWORD_UPDATE(ctrl, ctrl_mask), 202 - dfi_regs + i * dfi->ddrmon_stride + DDRMON_CTRL); 204 + writel_relaxed(ctrl, dfi_regs + i * dfi->ddrmon_stride + 205 + DDRMON_CTRL); 203 206 204 207 /* enable count, use software mode */ 205 - writel_relaxed(HIWORD_UPDATE(DDRMON_CTRL_SOFTWARE_EN, DDRMON_CTRL_SOFTWARE_EN), 208 + writel_relaxed(FIELD_PREP_WM16(DDRMON_CTRL_SOFTWARE_EN, 1), 206 209 dfi_regs + i * dfi->ddrmon_stride + DDRMON_CTRL); 207 210 208 211 if (dfi->ddrmon_ctrl_single) ··· 233 234 if (!(dfi->channel_mask & BIT(i))) 234 235 continue; 235 236 236 - writel_relaxed(HIWORD_UPDATE(0, DDRMON_CTRL_SOFTWARE_EN), 237 - dfi_regs + i * dfi->ddrmon_stride + DDRMON_CTRL); 237 + writel_relaxed(FIELD_PREP_WM16(DDRMON_CTRL_SOFTWARE_EN, 0), 238 + dfi_regs + i * dfi->ddrmon_stride + DDRMON_CTRL); 238 239 239 240 if (dfi->ddrmon_ctrl_single) 240 241 break;
+21
drivers/dpll/zl3073x/core.c
··· 1038 1038 int zl3073x_dev_start(struct zl3073x_dev *zldev, bool full) 1039 1039 { 1040 1040 struct zl3073x_dpll *zldpll; 1041 + u8 info; 1041 1042 int rc; 1043 + 1044 + rc = zl3073x_read_u8(zldev, ZL_REG_INFO, &info); 1045 + if (rc) { 1046 + dev_err(zldev->dev, "Failed to read device status info\n"); 1047 + return rc; 1048 + } 1049 + 1050 + if (!FIELD_GET(ZL_INFO_READY, info)) { 1051 + /* The ready bit indicates that the firmware was successfully 1052 + * configured and is ready for normal operation. If it is 1053 + * cleared then the configuration stored in flash is wrong 1054 + * or missing. In this situation the driver will expose 1055 + * only devlink interface to give an opportunity to flash 1056 + * the correct config. 1057 + */ 1058 + dev_info(zldev->dev, 1059 + "FW not fully ready - missing or corrupted config\n"); 1060 + 1061 + return 0; 1062 + } 1042 1063 1043 1064 if (full) { 1044 1065 /* Fetch device state */
+1 -1
drivers/dpll/zl3073x/fw.c
··· 37 37 static const struct zl3073x_fw_component_info component_info[] = { 38 38 [ZL_FW_COMPONENT_UTIL] = { 39 39 .name = "utility", 40 - .max_size = 0x2300, 40 + .max_size = 0x4000, 41 41 .load_addr = 0x20000000, 42 42 .flash_type = ZL3073X_FLASH_TYPE_NONE, 43 43 },
+3
drivers/dpll/zl3073x/regs.h
··· 67 67 * Register Page 0, General 68 68 **************************/ 69 69 70 + #define ZL_REG_INFO ZL_REG(0, 0x00, 1) 71 + #define ZL_INFO_READY BIT(7) 72 + 70 73 #define ZL_REG_ID ZL_REG(0, 0x01, 2) 71 74 #define ZL_REG_REVISION ZL_REG(0, 0x03, 2) 72 75 #define ZL_REG_FW_VER ZL_REG(0, 0x05, 2)
+1
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 1290 1290 bool debug_disable_gpu_ring_reset; 1291 1291 bool debug_vm_userptr; 1292 1292 bool debug_disable_ce_logs; 1293 + bool debug_enable_ce_cs; 1293 1294 1294 1295 /* Protection for the following isolation structure */ 1295 1296 struct mutex enforce_isolation_mutex;
+2 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
··· 2329 2329 int amdgpu_amdkfd_gpuvm_get_vm_fault_info(struct amdgpu_device *adev, 2330 2330 struct kfd_vm_fault_info *mem) 2331 2331 { 2332 - if (atomic_read(&adev->gmc.vm_fault_info_updated) == 1) { 2332 + if (atomic_read_acquire(&adev->gmc.vm_fault_info_updated) == 1) { 2333 2333 *mem = *adev->gmc.vm_fault_info; 2334 - mb(); /* make sure read happened */ 2335 - atomic_set(&adev->gmc.vm_fault_info_updated, 0); 2334 + atomic_set_release(&adev->gmc.vm_fault_info_updated, 0); 2336 2335 } 2337 2336 return 0; 2338 2337 }
+7 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
··· 364 364 if (p->uf_bo && ring->funcs->no_user_fence) 365 365 return -EINVAL; 366 366 367 + if (!p->adev->debug_enable_ce_cs && 368 + chunk_ib->flags & AMDGPU_IB_FLAG_CE) { 369 + dev_err_ratelimited(p->adev->dev, "CE CS is blocked, use debug=0x400 to override\n"); 370 + return -EINVAL; 371 + } 372 + 367 373 if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX && 368 374 chunk_ib->flags & AMDGPU_IB_FLAG_PREEMPT) { 369 375 if (chunk_ib->flags & AMDGPU_IB_FLAG_CE) ··· 708 702 */ 709 703 const s64 us_upper_bound = 200000; 710 704 711 - if (!adev->mm_stats.log2_max_MBps) { 705 + if ((!adev->mm_stats.log2_max_MBps) || !ttm_resource_manager_used(&adev->mman.vram_mgr.manager)) { 712 706 *max_bytes = 0; 713 707 *max_vis_bytes = 0; 714 708 return;
+7
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 1882 1882 1883 1883 static bool amdgpu_device_aspm_support_quirk(struct amdgpu_device *adev) 1884 1884 { 1885 + /* Enabling ASPM causes randoms hangs on Tahiti and Oland on Zen4. 1886 + * It's unclear if this is a platform-specific or GPU-specific issue. 1887 + * Disable ASPM on SI for the time being. 1888 + */ 1889 + if (adev->family == AMDGPU_FAMILY_SI) 1890 + return true; 1891 + 1885 1892 #if IS_ENABLED(CONFIG_X86) 1886 1893 struct cpuinfo_x86 *c = &cpu_data(0); 1887 1894
+17 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
··· 1033 1033 /* Until a uniform way is figured, get mask based on hwid */ 1034 1034 switch (hw_id) { 1035 1035 case VCN_HWID: 1036 - harvest = ((1 << inst) & adev->vcn.inst_mask) == 0; 1036 + /* VCN vs UVD+VCE */ 1037 + if (!amdgpu_ip_version(adev, VCE_HWIP, 0)) 1038 + harvest = ((1 << inst) & adev->vcn.inst_mask) == 0; 1037 1039 break; 1038 1040 case DMU_HWID: 1039 1041 if (adev->harvest_ip_mask & AMD_HARVEST_IP_DMU_MASK) ··· 2567 2565 amdgpu_discovery_init(adev); 2568 2566 vega10_reg_base_init(adev); 2569 2567 adev->sdma.num_instances = 2; 2568 + adev->sdma.sdma_mask = 3; 2570 2569 adev->gmc.num_umc = 4; 2570 + adev->gfx.xcc_mask = 1; 2571 2571 adev->ip_versions[MMHUB_HWIP][0] = IP_VERSION(9, 0, 0); 2572 2572 adev->ip_versions[ATHUB_HWIP][0] = IP_VERSION(9, 0, 0); 2573 2573 adev->ip_versions[OSSSYS_HWIP][0] = IP_VERSION(4, 0, 0); ··· 2596 2592 amdgpu_discovery_init(adev); 2597 2593 vega10_reg_base_init(adev); 2598 2594 adev->sdma.num_instances = 2; 2595 + adev->sdma.sdma_mask = 3; 2599 2596 adev->gmc.num_umc = 4; 2597 + adev->gfx.xcc_mask = 1; 2600 2598 adev->ip_versions[MMHUB_HWIP][0] = IP_VERSION(9, 3, 0); 2601 2599 adev->ip_versions[ATHUB_HWIP][0] = IP_VERSION(9, 3, 0); 2602 2600 adev->ip_versions[OSSSYS_HWIP][0] = IP_VERSION(4, 0, 1); ··· 2625 2619 amdgpu_discovery_init(adev); 2626 2620 vega10_reg_base_init(adev); 2627 2621 adev->sdma.num_instances = 1; 2622 + adev->sdma.sdma_mask = 1; 2628 2623 adev->vcn.num_vcn_inst = 1; 2629 2624 adev->gmc.num_umc = 2; 2625 + adev->gfx.xcc_mask = 1; 2630 2626 if (adev->apu_flags & AMD_APU_IS_RAVEN2) { 2631 2627 adev->ip_versions[MMHUB_HWIP][0] = IP_VERSION(9, 2, 0); 2632 2628 adev->ip_versions[ATHUB_HWIP][0] = IP_VERSION(9, 2, 0); ··· 2673 2665 amdgpu_discovery_init(adev); 2674 2666 vega20_reg_base_init(adev); 2675 2667 adev->sdma.num_instances = 2; 2668 + adev->sdma.sdma_mask = 3; 2676 2669 adev->gmc.num_umc = 8; 2670 + adev->gfx.xcc_mask = 1; 2677 2671 adev->ip_versions[MMHUB_HWIP][0] = IP_VERSION(9, 4, 0); 2678 2672 adev->ip_versions[ATHUB_HWIP][0] = IP_VERSION(9, 4, 0); 2679 2673 adev->ip_versions[OSSSYS_HWIP][0] = IP_VERSION(4, 2, 0); ··· 2703 2693 amdgpu_discovery_init(adev); 2704 2694 arct_reg_base_init(adev); 2705 2695 adev->sdma.num_instances = 8; 2696 + adev->sdma.sdma_mask = 0xff; 2706 2697 adev->vcn.num_vcn_inst = 2; 2707 2698 adev->gmc.num_umc = 8; 2699 + adev->gfx.xcc_mask = 1; 2708 2700 adev->ip_versions[MMHUB_HWIP][0] = IP_VERSION(9, 4, 1); 2709 2701 adev->ip_versions[ATHUB_HWIP][0] = IP_VERSION(9, 4, 1); 2710 2702 adev->ip_versions[OSSSYS_HWIP][0] = IP_VERSION(4, 2, 1); ··· 2738 2726 amdgpu_discovery_init(adev); 2739 2727 aldebaran_reg_base_init(adev); 2740 2728 adev->sdma.num_instances = 5; 2729 + adev->sdma.sdma_mask = 0x1f; 2741 2730 adev->vcn.num_vcn_inst = 2; 2742 2731 adev->gmc.num_umc = 4; 2732 + adev->gfx.xcc_mask = 1; 2743 2733 adev->ip_versions[MMHUB_HWIP][0] = IP_VERSION(9, 4, 2); 2744 2734 adev->ip_versions[ATHUB_HWIP][0] = IP_VERSION(9, 4, 2); 2745 2735 adev->ip_versions[OSSSYS_HWIP][0] = IP_VERSION(4, 4, 0); ··· 2776 2762 } else { 2777 2763 cyan_skillfish_reg_base_init(adev); 2778 2764 adev->sdma.num_instances = 2; 2765 + adev->sdma.sdma_mask = 3; 2766 + adev->gfx.xcc_mask = 1; 2779 2767 adev->ip_versions[MMHUB_HWIP][0] = IP_VERSION(2, 0, 3); 2780 2768 adev->ip_versions[ATHUB_HWIP][0] = IP_VERSION(2, 0, 3); 2781 2769 adev->ip_versions[OSSSYS_HWIP][0] = IP_VERSION(5, 0, 1);
+7 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 144 144 AMDGPU_DEBUG_DISABLE_GPU_RING_RESET = BIT(6), 145 145 AMDGPU_DEBUG_SMU_POOL = BIT(7), 146 146 AMDGPU_DEBUG_VM_USERPTR = BIT(8), 147 - AMDGPU_DEBUG_DISABLE_RAS_CE_LOG = BIT(9) 147 + AMDGPU_DEBUG_DISABLE_RAS_CE_LOG = BIT(9), 148 + AMDGPU_DEBUG_ENABLE_CE_CS = BIT(10) 148 149 }; 149 150 150 151 unsigned int amdgpu_vram_limit = UINT_MAX; ··· 2289 2288 if (amdgpu_debug_mask & AMDGPU_DEBUG_DISABLE_RAS_CE_LOG) { 2290 2289 pr_info("debug: disable kernel logs of correctable errors\n"); 2291 2290 adev->debug_disable_ce_logs = true; 2291 + } 2292 + 2293 + if (amdgpu_debug_mask & AMDGPU_DEBUG_ENABLE_CE_CS) { 2294 + pr_info("debug: allowing command submission to CE engine\n"); 2295 + adev->debug_enable_ce_cs = true; 2292 2296 } 2293 2297 } 2294 2298
+45 -9
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
···
  * @fence: fence of the ring to signal
  *
  */
-void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *fence)
+void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *af)
 {
-	dma_fence_set_error(&fence->base, -ETIME);
-	amdgpu_fence_write(fence->ring, fence->seq);
-	amdgpu_fence_process(fence->ring);
+	struct dma_fence *unprocessed;
+	struct dma_fence __rcu **ptr;
+	struct amdgpu_fence *fence;
+	struct amdgpu_ring *ring = af->ring;
+	unsigned long flags;
+	u32 seq, last_seq;
+
+	last_seq = amdgpu_fence_read(ring) & ring->fence_drv.num_fences_mask;
+	seq = ring->fence_drv.sync_seq & ring->fence_drv.num_fences_mask;
+
+	/* mark all fences from the guilty context with an error */
+	spin_lock_irqsave(&ring->fence_drv.lock, flags);
+	do {
+		last_seq++;
+		last_seq &= ring->fence_drv.num_fences_mask;
+
+		ptr = &ring->fence_drv.fences[last_seq];
+		rcu_read_lock();
+		unprocessed = rcu_dereference(*ptr);
+
+		if (unprocessed && !dma_fence_is_signaled_locked(unprocessed)) {
+			fence = container_of(unprocessed, struct amdgpu_fence, base);
+
+			if (fence == af)
+				dma_fence_set_error(&fence->base, -ETIME);
+			else if (fence->context == af->context)
+				dma_fence_set_error(&fence->base, -ECANCELED);
+		}
+		rcu_read_unlock();
+	} while (last_seq != seq);
+	spin_unlock_irqrestore(&ring->fence_drv.lock, flags);
+	/* signal the guilty fence */
+	amdgpu_fence_write(ring, af->seq);
+	amdgpu_fence_process(ring);
 }
 
 void amdgpu_fence_save_wptr(struct dma_fence *fence)
···
 	struct dma_fence *unprocessed;
 	struct dma_fence __rcu **ptr;
 	struct amdgpu_fence *fence;
-	u64 wptr, i, seqno;
+	u64 wptr;
+	u32 seq, last_seq;
 
-	seqno = amdgpu_fence_read(ring);
+	last_seq = amdgpu_fence_read(ring) & ring->fence_drv.num_fences_mask;
+	seq = ring->fence_drv.sync_seq & ring->fence_drv.num_fences_mask;
 	wptr = ring->fence_drv.signalled_wptr;
 	ring->ring_backup_entries_to_copy = 0;
 
-	for (i = seqno + 1; i <= ring->fence_drv.sync_seq; ++i) {
-		ptr = &ring->fence_drv.fences[i & ring->fence_drv.num_fences_mask];
+	do {
+		last_seq++;
+		last_seq &= ring->fence_drv.num_fences_mask;
+
+		ptr = &ring->fence_drv.fences[last_seq];
 		rcu_read_lock();
 		unprocessed = rcu_dereference(*ptr);
 
···
 			wptr = fence->wptr;
 		}
 		rcu_read_unlock();
-	}
+	} while (last_seq != seq);
 }
 
 /*
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
···
 	for (i = 0; i < adev->jpeg.num_jpeg_inst; ++i) {
 		for (j = 0; j < adev->jpeg.num_jpeg_rings; ++j) {
 			ring = &adev->jpeg.inst[i].ring_dec[j];
-			if (val & (BIT_ULL(1) << ((i * adev->jpeg.num_jpeg_rings) + j)))
+			if (val & (BIT_ULL((i * adev->jpeg.num_jpeg_rings) + j)))
 				ring->sched.ready = true;
 			else
 				ring->sched.ready = false;
+4 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
···
 		ui64 = atomic64_read(&adev->num_vram_cpu_page_faults);
 		return copy_to_user(out, &ui64, min(size, 8u)) ? -EFAULT : 0;
 	case AMDGPU_INFO_VRAM_USAGE:
-		ui64 = ttm_resource_manager_usage(&adev->mman.vram_mgr.manager);
+		ui64 = ttm_resource_manager_used(&adev->mman.vram_mgr.manager) ?
+			ttm_resource_manager_usage(&adev->mman.vram_mgr.manager) : 0;
 		return copy_to_user(out, &ui64, min(size, 8u)) ? -EFAULT : 0;
 	case AMDGPU_INFO_VIS_VRAM_USAGE:
 		ui64 = amdgpu_vram_mgr_vis_usage(&adev->mman.vram_mgr);
···
 		mem.vram.usable_heap_size = adev->gmc.real_vram_size -
 			atomic64_read(&adev->vram_pin_size) -
 			AMDGPU_VM_RESERVED_VRAM;
-		mem.vram.heap_usage =
-			ttm_resource_manager_usage(vram_man);
+		mem.vram.heap_usage = ttm_resource_manager_used(&adev->mman.vram_mgr.manager) ?
+			ttm_resource_manager_usage(vram_man) : 0;
 		mem.vram.max_allocation = mem.vram.usable_heap_size * 3 / 4;
 
 		mem.cpu_accessible_vram.total_heap_size =
+11 -9
drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
···
 		return -EINVAL;
 
 	/* Clear the doorbell array before detection */
-	memset(adev->mes.hung_queue_db_array_cpu_addr, 0,
+	memset(adev->mes.hung_queue_db_array_cpu_addr, AMDGPU_MES_INVALID_DB_OFFSET,
 	       adev->mes.hung_queue_db_array_size * sizeof(u32));
 	input.queue_type = queue_type;
 	input.detect_only = detect_only;
···
 		dev_err(adev->dev, "failed to detect and reset\n");
 	} else {
 		*hung_db_num = 0;
-		for (i = 0; i < adev->mes.hung_queue_db_array_size; i++) {
+		for (i = 0; i < adev->mes.hung_queue_hqd_info_offset; i++) {
 			if (db_array[i] != AMDGPU_MES_INVALID_DB_OFFSET) {
 				hung_db_array[i] = db_array[i];
 				*hung_db_num += 1;
 			}
 		}
+
+		/*
+		 * TODO: return HQD info for MES scheduled user compute queue reset cases
+		 * stored in hung_db_array hqd info offset to full array size
+		 */
 	}
 
 	return r;
···
 bool amdgpu_mes_suspend_resume_all_supported(struct amdgpu_device *adev)
 {
 	uint32_t mes_rev = adev->mes.sched_version & AMDGPU_MES_VERSION_MASK;
-	bool is_supported = false;
 
-	if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(11, 0, 0) &&
-	    amdgpu_ip_version(adev, GC_HWIP, 0) < IP_VERSION(12, 0, 0) &&
-	    mes_rev >= 0x63)
-		is_supported = true;
-
-	return is_supported;
+	return ((amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(11, 0, 0) &&
+		 amdgpu_ip_version(adev, GC_HWIP, 0) < IP_VERSION(12, 0, 0) &&
+		 mes_rev >= 0x63) ||
+		amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(12, 0, 0));
 }
 
 /* Fix me -- node_id is used to identify the correct MES instances in the future */
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
···
 	void *resource_1_addr[AMDGPU_MAX_MES_PIPES];
 
 	int hung_queue_db_array_size;
+	int hung_queue_hqd_info_offset;
 	struct amdgpu_bo *hung_queue_db_array_gpu_obj;
 	uint64_t hung_queue_db_array_gpu_addr;
 	void *hung_queue_db_array_cpu_addr;
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
···
 	if (r)
 		return r;
 
-	/* signal the fence of the bad job */
+	/* signal the guilty fence and set an error on all fences from the context */
 	if (guilty_fence)
 		amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	/* Re-emit the non-guilty commands */
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
···
 void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
 void amdgpu_fence_driver_set_error(struct amdgpu_ring *ring, int error);
 void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring);
-void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *fence);
+void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *af);
 void amdgpu_fence_save_wptr(struct dma_fence *fence);
 
 int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring);
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
···
 	vf2pf_info->driver_cert = 0;
 	vf2pf_info->os_info.all = 0;
 
-	vf2pf_info->fb_usage =
-		ttm_resource_manager_usage(&adev->mman.vram_mgr.manager) >> 20;
+	vf2pf_info->fb_usage = ttm_resource_manager_used(&adev->mman.vram_mgr.manager) ?
+		ttm_resource_manager_usage(&adev->mman.vram_mgr.manager) >> 20 : 0;
 	vf2pf_info->fb_vis_usage =
 		amdgpu_vram_mgr_vis_usage(&adev->mman.vram_mgr) >> 20;
 	vf2pf_info->fb_size = adev->gmc.real_vram_size >> 20;
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
···
 	    !adev->gmc.vram_vendor)
 		return 0;
 
+	if (!ttm_resource_manager_used(&adev->mman.vram_mgr.manager))
+		return 0;
+
 	return attr->mode;
 }
 
-2
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
···
 	unsigned vmid = AMDGPU_JOB_GET_VMID(job);
 	u32 header, control = 0;
 
-	BUG_ON(ib->flags & AMDGPU_IB_FLAG_CE);
-
 	header = PACKET3(PACKET3_INDIRECT_BUFFER, 2);
 
 	control |= ib->length_dw | (vmid << 24);
-2
drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
···
 	unsigned vmid = AMDGPU_JOB_GET_VMID(job);
 	u32 header, control = 0;
 
-	BUG_ON(ib->flags & AMDGPU_IB_FLAG_CE);
-
 	header = PACKET3(PACKET3_INDIRECT_BUFFER, 2);
 
 	control |= ib->length_dw | (vmid << 24);
+3 -4
drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
···
 				       GFP_KERNEL);
 	if (!adev->gmc.vm_fault_info)
 		return -ENOMEM;
-	atomic_set(&adev->gmc.vm_fault_info_updated, 0);
+	atomic_set_release(&adev->gmc.vm_fault_info_updated, 0);
 
 	return 0;
 }
···
 	vmid = REG_GET_FIELD(status, VM_CONTEXT1_PROTECTION_FAULT_STATUS,
 			     VMID);
 	if (amdgpu_amdkfd_is_kfd_vmid(adev, vmid)
-	    && !atomic_read(&adev->gmc.vm_fault_info_updated)) {
+	    && !atomic_read_acquire(&adev->gmc.vm_fault_info_updated)) {
 		struct kfd_vm_fault_info *info = adev->gmc.vm_fault_info;
 		u32 protections = REG_GET_FIELD(status,
 						VM_CONTEXT1_PROTECTION_FAULT_STATUS,
···
 		info->prot_read = protections & 0x8 ? true : false;
 		info->prot_write = protections & 0x10 ? true : false;
 		info->prot_exec = protections & 0x20 ? true : false;
-		mb();
-		atomic_set(&adev->gmc.vm_fault_info_updated, 1);
+		atomic_set_release(&adev->gmc.vm_fault_info_updated, 1);
 	}
 
 	return 0;
+3 -4
drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
···
 				       GFP_KERNEL);
 	if (!adev->gmc.vm_fault_info)
 		return -ENOMEM;
-	atomic_set(&adev->gmc.vm_fault_info_updated, 0);
+	atomic_set_release(&adev->gmc.vm_fault_info_updated, 0);
 
 	return 0;
 }
···
 	vmid = REG_GET_FIELD(status, VM_CONTEXT1_PROTECTION_FAULT_STATUS,
 			     VMID);
 	if (amdgpu_amdkfd_is_kfd_vmid(adev, vmid)
-	    && !atomic_read(&adev->gmc.vm_fault_info_updated)) {
+	    && !atomic_read_acquire(&adev->gmc.vm_fault_info_updated)) {
 		struct kfd_vm_fault_info *info = adev->gmc.vm_fault_info;
 		u32 protections = REG_GET_FIELD(status,
 						VM_CONTEXT1_PROTECTION_FAULT_STATUS,
···
 		info->prot_read = protections & 0x8 ? true : false;
 		info->prot_write = protections & 0x10 ? true : false;
 		info->prot_exec = protections & 0x20 ? true : false;
-		mb();
-		atomic_set(&adev->gmc.vm_fault_info_updated, 1);
+		atomic_set_release(&adev->gmc.vm_fault_info_updated, 1);
 	}
 
 	return 0;
+3 -3
drivers/gpu/drm/amd/amdgpu/mes_userqueue.c
···
 	struct amdgpu_userq_mgr *uqm, *tmp;
 	unsigned int hung_db_num = 0;
 	int queue_id, r, i;
-	u32 db_array[4];
+	u32 db_array[8];
 
-	if (db_array_size > 4) {
-		dev_err(adev->dev, "DB array size (%d vs 4) too small\n",
+	if (db_array_size > 8) {
+		dev_err(adev->dev, "DB array size (%d vs 8) too small\n",
 			db_array_size);
 		return -EINVAL;
 	}
+5 -3
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
···
 #define GFX_MES_DRAM_SIZE		0x80000
 #define MES11_HW_RESOURCE_1_SIZE (128 * AMDGPU_GPU_PAGE_SIZE)
 
-#define MES11_HUNG_DB_OFFSET_ARRAY_SIZE 4
+#define MES11_HUNG_DB_OFFSET_ARRAY_SIZE 8 /* [0:3] = db offset, [4:7] = hqd info */
+#define MES11_HUNG_HQD_INFO_OFFSET 4
 
 static void mes_v11_0_ring_set_wptr(struct amdgpu_ring *ring)
 {
···
 	struct amdgpu_device *adev = ip_block->adev;
 	int pipe, r;
 
-	adev->mes.hung_queue_db_array_size =
-		MES11_HUNG_DB_OFFSET_ARRAY_SIZE;
+	adev->mes.hung_queue_db_array_size = MES11_HUNG_DB_OFFSET_ARRAY_SIZE;
+	adev->mes.hung_queue_hqd_info_offset = MES11_HUNG_HQD_INFO_OFFSET;
+
 	for (pipe = 0; pipe < AMDGPU_MAX_MES_PIPES; pipe++) {
 		if (!adev->enable_mes_kiq && pipe == AMDGPU_MES_KIQ_PIPE)
 			continue;
+11 -4
drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
···
 
 #define MES_EOP_SIZE   2048
 
-#define MES12_HUNG_DB_OFFSET_ARRAY_SIZE 4
+#define MES12_HUNG_DB_OFFSET_ARRAY_SIZE 8 /* [0:3] = db offset [4:7] hqd info */
+#define MES12_HUNG_HQD_INFO_OFFSET 4
 
 static void mes_v12_0_ring_set_wptr(struct amdgpu_ring *ring)
 {
···
 			pipe, x_pkt->header.opcode);
 
 	r = amdgpu_fence_wait_polling(ring, seq, timeout);
-	if (r < 1 || !*status_ptr) {
+
+	/*
+	 * status_ptr[31:0] == 0 (fail) or status_ptr[63:0] == 1 (success).
+	 * If status_ptr[31:0] == 0 then status_ptr[63:32] will have debug error information.
+	 */
+	if (r < 1 || !(lower_32_bits(*status_ptr))) {
 
 		if (misc_op_str)
 			dev_err(adev->dev, "MES(%d) failed to respond to msg=%s (%s)\n",
···
 	struct amdgpu_device *adev = ip_block->adev;
 	int pipe, r;
 
-	adev->mes.hung_queue_db_array_size =
-		MES12_HUNG_DB_OFFSET_ARRAY_SIZE;
+	adev->mes.hung_queue_db_array_size = MES12_HUNG_DB_OFFSET_ARRAY_SIZE;
+	adev->mes.hung_queue_hqd_info_offset = MES12_HUNG_HQD_INFO_OFFSET;
+
 	for (pipe = 0; pipe < AMDGPU_MAX_MES_PIPES; pipe++) {
 		r = amdgpu_mes_init_microcode(adev, pipe);
 		if (r)
+21 -52
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
···
 	pr_debug_ratelimited("Evicting process pid %d queues\n",
 			     pdd->process->lead_thread->pid);
 
+	if (dqm->dev->kfd->shared_resources.enable_mes) {
+		pdd->last_evict_timestamp = get_jiffies_64();
+		retval = suspend_all_queues_mes(dqm);
+		if (retval) {
+			dev_err(dev, "Suspending all queues failed");
+			goto out;
+		}
+	}
+
 	/* Mark all queues as evicted. Deactivate all active queues on
 	 * the qpd.
 	 */
···
 		decrement_queue_count(dqm, qpd, q);
 
 		if (dqm->dev->kfd->shared_resources.enable_mes) {
-			int err;
-
-			err = remove_queue_mes(dqm, q, qpd);
-			if (err) {
+			retval = remove_queue_mes(dqm, q, qpd);
+			if (retval) {
 				dev_err(dev, "Failed to evict queue %d\n",
 					q->properties.queue_id);
-				retval = err;
+				goto out;
 			}
 		}
 	}
-	pdd->last_evict_timestamp = get_jiffies_64();
-	if (!dqm->dev->kfd->shared_resources.enable_mes)
+
+	if (!dqm->dev->kfd->shared_resources.enable_mes) {
+		pdd->last_evict_timestamp = get_jiffies_64();
 		retval = execute_queues_cpsch(dqm,
 					      qpd->is_debug ?
 					      KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES :
 					      KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0,
 					      USE_DEFAULT_GRACE_PERIOD);
+	} else {
+		retval = resume_all_queues_mes(dqm);
+		if (retval)
+			dev_err(dev, "Resuming all queues failed");
+	}
 
 out:
 	dqm_unlock(dqm);
···
 	return ret;
 }
 
-static int kfd_dqm_evict_pasid_mes(struct device_queue_manager *dqm,
-				   struct qcm_process_device *qpd)
-{
-	struct device *dev = dqm->dev->adev->dev;
-	int ret = 0;
-
-	/* Check if process is already evicted */
-	dqm_lock(dqm);
-	if (qpd->evicted) {
-		/* Increment the evicted count to make sure the
-		 * process stays evicted before its terminated.
-		 */
-		qpd->evicted++;
-		dqm_unlock(dqm);
-		goto out;
-	}
-	dqm_unlock(dqm);
-
-	ret = suspend_all_queues_mes(dqm);
-	if (ret) {
-		dev_err(dev, "Suspending all queues failed");
-		goto out;
-	}
-
-	ret = dqm->ops.evict_process_queues(dqm, qpd);
-	if (ret) {
-		dev_err(dev, "Evicting process queues failed");
-		goto out;
-	}
-
-	ret = resume_all_queues_mes(dqm);
-	if (ret)
-		dev_err(dev, "Resuming all queues failed");
-
-out:
-	return ret;
-}
-
 int kfd_evict_process_device(struct kfd_process_device *pdd)
 {
 	struct device_queue_manager *dqm;
 	struct kfd_process *p;
-	int ret = 0;
 
 	p = pdd->process;
 	dqm = pdd->dev->dqm;
 
 	WARN(debug_evictions, "Evicting pid %d", p->lead_thread->pid);
 
-	if (dqm->dev->kfd->shared_resources.enable_mes)
-		ret = kfd_dqm_evict_pasid_mes(dqm, &pdd->qpd);
-	else
-		ret = dqm->ops.evict_process_queues(dqm, &pdd->qpd);
-
-	return ret;
+	return dqm->ops.evict_process_queues(dqm, &pdd->qpd);
 }
 
 int reserve_debug_trap_vmid(struct device_queue_manager *dqm,
+4 -8
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
···
 
 	dc_hardware_init(adev->dm.dc);
 
-	adev->dm.restore_backlight = true;
-
 	adev->dm.hpd_rx_offload_wq = hpd_rx_irq_create_workqueue(adev);
 	if (!adev->dm.hpd_rx_offload_wq) {
 		drm_err(adev_to_drm(adev), "failed to create hpd rx offload workqueue.\n");
···
 	dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0);
 
 	dc_resume(dm->dc);
-	adev->dm.restore_backlight = true;
 
 	amdgpu_dm_irq_resume_early(adev);
 
···
 	bool mode_set_reset_required = false;
 	u32 i;
 	struct dc_commit_streams_params params = {dc_state->streams, dc_state->stream_count};
+	bool set_backlight_level = false;
 
 	/* Disable writeback */
 	for_each_old_connector_in_state(state, connector, old_con_state, i) {
···
 			acrtc->hw_mode = new_crtc_state->mode;
 			crtc->hwmode = new_crtc_state->mode;
 			mode_set_reset_required = true;
+			set_backlight_level = true;
 		} else if (modereset_required(new_crtc_state)) {
 			drm_dbg_atomic(dev,
 				       "Atomic commit: RESET. crtc id %d:[%p]\n",
···
 	 * to fix a flicker issue.
 	 * It will cause the dm->actual_brightness is not the current panel brightness
 	 * level. (the dm->brightness is the correct panel level)
-	 * So we set the backlight level with dm->brightness value after initial
-	 * set mode. Use restore_backlight flag to avoid setting backlight level
-	 * for every subsequent mode set.
+	 * So we set the backlight level with dm->brightness value after set mode
 	 */
-	if (dm->restore_backlight) {
+	if (set_backlight_level) {
 		for (i = 0; i < dm->num_of_edps; i++) {
 			if (dm->backlight_dev[i])
 				amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]);
 		}
-		dm->restore_backlight = false;
 	}
 }
 
-7
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
···
 	u32 actual_brightness[AMDGPU_DM_MAX_NUM_EDP];
 
 	/**
-	 * @restore_backlight:
-	 *
-	 * Flag to indicate whether to restore backlight after modeset.
-	 */
-	bool restore_backlight;
-
-	/**
 	 * @aux_hpd_discon_quirk:
 	 *
 	 * quirk for hpd discon while aux is on-going.
+5
drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
···
 	 * for these GPUs to calculate bandwidth requirements.
 	 */
 	if (high_pixelclock_count) {
+		/* Work around flickering lines at the bottom edge
+		 * of the screen when using a single 4K 60Hz monitor.
+		 */
+		disable_mclk_switching = true;
+
 		/* On Oland, we observe some flickering when two 4K 60Hz
 		 * displays are connected, possibly because voltage is too low.
 		 * Raise the voltage by requiring a higher SCLK.
+1 -2
drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
···
 		thermal_data->max = table_info->cac_dtp_table->usSoftwareShutdownTemp *
 			PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
 	else if (hwmgr->pp_table_version == PP_TABLE_V0)
-		thermal_data->max = data->thermal_temp_setting.temperature_shutdown *
-			PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
+		thermal_data->max = data->thermal_temp_setting.temperature_shutdown;
 
 	thermal_data->sw_ctf_threshold = thermal_data->max;
 
+10 -8
drivers/gpu/drm/ast/ast_mode.c
···
 static void ast_crtc_helper_atomic_enable(struct drm_crtc *crtc, struct drm_atomic_state *state)
 {
 	struct ast_device *ast = to_ast_device(crtc->dev);
+	u8 vgacr17 = 0x00;
+	u8 vgacrb6 = 0xff;
 
-	ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xb6, 0xfc, 0x00);
-	ast_set_index_reg_mask(ast, AST_IO_VGASRI, 0x01, 0xdf, 0x00);
+	vgacr17 |= AST_IO_VGACR17_SYNC_ENABLE;
+	vgacrb6 &= ~(AST_IO_VGACRB6_VSYNC_OFF | AST_IO_VGACRB6_HSYNC_OFF);
+
+	ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0x17, 0x7f, vgacr17);
+	ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xb6, 0xfc, vgacrb6);
 }
 
 static void ast_crtc_helper_atomic_disable(struct drm_crtc *crtc, struct drm_atomic_state *state)
 {
 	struct drm_crtc_state *old_crtc_state = drm_atomic_get_old_crtc_state(state, crtc);
 	struct ast_device *ast = to_ast_device(crtc->dev);
-	u8 vgacrb6;
+	u8 vgacr17 = 0xff;
 
-	ast_set_index_reg_mask(ast, AST_IO_VGASRI, 0x01, 0xdf, AST_IO_VGASR1_SD);
-
-	vgacrb6 = AST_IO_VGACRB6_VSYNC_OFF |
-		  AST_IO_VGACRB6_HSYNC_OFF;
-	ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xb6, 0xfc, vgacrb6);
+	vgacr17 &= ~AST_IO_VGACR17_SYNC_ENABLE;
+	ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0x17, 0x7f, vgacr17);
 
 	/*
 	 * HW cursors require the underlying primary plane and CRTC to
+1
drivers/gpu/drm/ast/ast_reg.h
···
 #define AST_IO_VGAGRI			(0x4E)
 
 #define AST_IO_VGACRI			(0x54)
+#define AST_IO_VGACR17_SYNC_ENABLE	BIT(7) /* called "Hardware reset" in docs */
 #define AST_IO_VGACR80_PASSWORD		(0xa8)
 #define AST_IO_VGACR99_VGAMEM_RSRV_MASK	GENMASK(1, 0)
 #define AST_IO_VGACRA1_VGAIO_DISABLED	BIT(1)
+1 -2
drivers/gpu/drm/bridge/lontium-lt9211.c
···
 	}
 
 	/* Test for known Chip ID. */
-	if (chipid[0] != REG_CHIPID0_VALUE || chipid[1] != REG_CHIPID1_VALUE ||
-	    chipid[2] != REG_CHIPID2_VALUE) {
+	if (chipid[0] != REG_CHIPID0_VALUE || chipid[1] != REG_CHIPID1_VALUE) {
 		dev_err(ctx->dev, "Unknown Chip ID: 0x%02x 0x%02x 0x%02x\n",
 			chipid[0], chipid[1], chipid[2]);
 		return -EINVAL;
+1 -1
drivers/gpu/drm/drm_draw.c
···
 
 void drm_draw_fill24(struct iosys_map *dmap, unsigned int dpitch,
 		     unsigned int height, unsigned int width,
-		     u16 color)
+		     u32 color)
 {
 	unsigned int y, x;
 
+1 -1
drivers/gpu/drm/drm_draw_internal.h
···
 
 void drm_draw_fill24(struct iosys_map *dmap, unsigned int dpitch,
 		     unsigned int height, unsigned int width,
-		     u16 color);
+		     u32 color);
 
 void drm_draw_fill32(struct iosys_map *dmap, unsigned int dpitch,
 		     unsigned int height, unsigned int width,
+20 -18
drivers/gpu/drm/i915/display/intel_fb.c
···
 	if (intel_fb_uses_dpt(fb))
 		intel_dpt_destroy(intel_fb->dpt_vm);
 
-	intel_frontbuffer_put(intel_fb->frontbuffer);
-
 	intel_fb_bo_framebuffer_fini(intel_fb_bo(fb));
+
+	intel_frontbuffer_put(intel_fb->frontbuffer);
 
 	kfree(intel_fb);
 }
···
 	int ret = -EINVAL;
 	int i;
 
+	/*
+	 * intel_frontbuffer_get() must be done before
+	 * intel_fb_bo_framebuffer_init() to avoid set_tiling vs. addfb race.
+	 */
+	intel_fb->frontbuffer = intel_frontbuffer_get(obj);
+	if (!intel_fb->frontbuffer)
+		return -ENOMEM;
+
 	ret = intel_fb_bo_framebuffer_init(fb, obj, mode_cmd);
 	if (ret)
-		return ret;
-
-	intel_fb->frontbuffer = intel_frontbuffer_get(obj);
-	if (!intel_fb->frontbuffer) {
-		ret = -ENOMEM;
-		goto err;
-	}
+		goto err_frontbuffer_put;
 
 	ret = -EINVAL;
 	if (!drm_any_plane_has_format(display->drm,
···
 		drm_dbg_kms(display->drm,
 			    "unsupported pixel format %p4cc / modifier 0x%llx\n",
 			    &mode_cmd->pixel_format, mode_cmd->modifier[0]);
-		goto err_frontbuffer_put;
+		goto err_bo_framebuffer_fini;
 	}
 
 	max_stride = intel_fb_max_stride(display, mode_cmd->pixel_format,
···
 			    mode_cmd->modifier[0] != DRM_FORMAT_MOD_LINEAR ?
 			    "tiled" : "linear",
 			    mode_cmd->pitches[0], max_stride);
-		goto err_frontbuffer_put;
+		goto err_bo_framebuffer_fini;
 	}
 
 	/* FIXME need to adjust LINOFF/TILEOFF accordingly. */
···
 		drm_dbg_kms(display->drm,
 			    "plane 0 offset (0x%08x) must be 0\n",
 			    mode_cmd->offsets[0]);
-		goto err_frontbuffer_put;
+		goto err_bo_framebuffer_fini;
 	}
 
 	drm_helper_mode_fill_fb_struct(display->drm, fb, info, mode_cmd);
···
 
 		if (mode_cmd->handles[i] != mode_cmd->handles[0]) {
 			drm_dbg_kms(display->drm, "bad plane %d handle\n", i);
-			goto err_frontbuffer_put;
+			goto err_bo_framebuffer_fini;
 		}
 
 		stride_alignment = intel_fb_stride_alignment(fb, i);
···
 			drm_dbg_kms(display->drm,
 				    "plane %d pitch (%d) must be at least %u byte aligned\n",
 				    i, fb->pitches[i], stride_alignment);
-			goto err_frontbuffer_put;
+			goto err_bo_framebuffer_fini;
 		}
 
 		if (intel_fb_is_gen12_ccs_aux_plane(fb, i)) {
···
 				drm_dbg_kms(display->drm,
 					    "ccs aux plane %d pitch (%d) must be %d\n",
 					    i, fb->pitches[i], ccs_aux_stride);
-				goto err_frontbuffer_put;
+				goto err_bo_framebuffer_fini;
 			}
 		}
 
···
 
 	ret = intel_fill_fb_info(display, intel_fb);
 	if (ret)
-		goto err_frontbuffer_put;
+		goto err_bo_framebuffer_fini;
 
 	if (intel_fb_uses_dpt(fb)) {
 		struct i915_address_space *vm;
···
 err_free_dpt:
 	if (intel_fb_uses_dpt(fb))
 		intel_dpt_destroy(intel_fb->dpt_vm);
+err_bo_framebuffer_fini:
+	intel_fb_bo_framebuffer_fini(obj);
 err_frontbuffer_put:
 	intel_frontbuffer_put(intel_fb->frontbuffer);
-err:
-	intel_fb_bo_framebuffer_fini(obj);
 	return ret;
 }
 
+9 -1
drivers/gpu/drm/i915/display/intel_frontbuffer.c
···
 	spin_unlock(&display->fb_tracking.lock);
 
 	i915_active_fini(&front->write);
+
+	drm_gem_object_put(obj);
 	kfree_rcu(front, rcu);
 }
···
 	if (!front)
 		return NULL;
 
+	drm_gem_object_get(obj);
+
 	front->obj = obj;
 	kref_init(&front->ref);
 	atomic_set(&front->bits, 0);
···
 	spin_lock(&display->fb_tracking.lock);
 	cur = intel_bo_set_frontbuffer(obj, front);
 	spin_unlock(&display->fb_tracking.lock);
-	if (cur != front)
+
+	if (cur != front) {
+		drm_gem_object_put(obj);
 		kfree(front);
+	}
+
 	return cur;
 }
 
+10 -2
drivers/gpu/drm/i915/display/intel_psr.c
···
 	struct intel_display *display = to_intel_display(intel_dp);
 
 	if (DISPLAY_VER(display) < 20 && intel_dp->psr.psr2_sel_fetch_enabled) {
+		/* Selective fetch prior LNL */
 		if (intel_dp->psr.psr2_sel_fetch_cff_enabled) {
 			/* can we turn CFF off? */
 			if (intel_dp->psr.busy_frontbuffer_bits == 0)
···
 		intel_psr_configure_full_frame_update(intel_dp);
 
 		intel_psr_force_update(intel_dp);
+	} else if (!intel_dp->psr.psr2_sel_fetch_enabled) {
+		/*
+		 * PSR1 on all platforms
+		 * PSR2 HW tracking
+		 * Panel Replay Full frame update
+		 */
+		intel_psr_force_update(intel_dp);
 	} else {
+		/* Selective update LNL onwards */
 		intel_psr_exit(intel_dp);
 	}
 
-	if ((!intel_dp->psr.psr2_sel_fetch_enabled || DISPLAY_VER(display) >= 20) &&
-	    !intel_dp->psr.busy_frontbuffer_bits)
+	if (!intel_dp->psr.active && !intel_dp->psr.busy_frontbuffer_bits)
 		queue_work(display->wq.unordered, &intel_dp->psr.work);
 }
 
-2
drivers/gpu/drm/i915/gem/i915_gem_object_frontbuffer.h
···
 
 	if (!front) {
 		RCU_INIT_POINTER(obj->frontbuffer, NULL);
-		drm_gem_object_put(intel_bo_to_drm_bo(obj));
 	} else if (rcu_access_pointer(obj->frontbuffer)) {
 		cur = rcu_dereference_protected(obj->frontbuffer, true);
 		kref_get(&cur->ref);
 	} else {
-		drm_gem_object_get(intel_bo_to_drm_bo(obj));
 		rcu_assign_pointer(obj->frontbuffer, front);
 	}
 
+8 -1
drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
···
 
 static void ct_try_receive_message(struct intel_guc_ct *ct)
 {
+	struct intel_guc *guc = ct_to_guc(ct);
 	int ret;
 
-	if (GEM_WARN_ON(!ct->enabled))
+	if (!ct->enabled) {
+		GEM_WARN_ON(!guc_to_gt(guc)->uc.reset_in_progress);
+		return;
+	}
+
+	/* When interrupt disabled, message handling is not expected */
+	if (!guc->interrupts.enabled)
 		return;
 
 	ret = ct_receive(ct);
+1
drivers/gpu/drm/panthor/panthor_fw.c
···
 	}
 
 	panthor_job_irq_suspend(&ptdev->fw->irq);
+	panthor_fw_stop(ptdev);
 }
 
 /**
+1 -1
drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
···
 		return format;
 
 	if (drm_rect_width(src) >> 16 < 4 || drm_rect_height(src) >> 16 < 4 ||
-	    drm_rect_width(dest) < 4 || drm_rect_width(dest) < 4) {
+	    drm_rect_width(dest) < 4 || drm_rect_height(dest) < 4) {
 		drm_err(vop2->drm, "Invalid size: %dx%d->%dx%d, min size is 4x4\n",
 			drm_rect_width(src) >> 16, drm_rect_height(src) >> 16,
 			drm_rect_width(dest), drm_rect_height(dest));
+7 -6
drivers/gpu/drm/scheduler/sched_main.c
···
 	dma_resv_assert_held(resv);
 
 	dma_resv_for_each_fence(&cursor, resv, usage, fence) {
-		/* Make sure to grab an additional ref on the added fence */
-		dma_fence_get(fence);
-		ret = drm_sched_job_add_dependency(job, fence);
-		if (ret) {
-			dma_fence_put(fence);
+		/*
+		 * As drm_sched_job_add_dependency always consumes the fence
+		 * reference (even when it fails), and dma_resv_for_each_fence
+		 * is not obtaining one, we need to grab one before calling.
+		 */
+		ret = drm_sched_job_add_dependency(job, dma_fence_get(fence));
+		if (ret)
 			return ret;
-		}
 	}
 	return 0;
 }
+1
drivers/gpu/drm/xe/regs/xe_gt_regs.h
···
 #define POWERGATE_ENABLE			XE_REG(0xa210)
 #define   RENDER_POWERGATE_ENABLE		REG_BIT(0)
 #define   MEDIA_POWERGATE_ENABLE		REG_BIT(1)
+#define   MEDIA_SAMPLERS_POWERGATE_ENABLE	REG_BIT(2)
 #define   VDN_HCP_POWERGATE_ENABLE(n)		REG_BIT(3 + 2 * (n))
 #define   VDN_MFXVDENC_POWERGATE_ENABLE(n)	REG_BIT(4 + 2 * (n))
 
+5
drivers/gpu/drm/xe/tests/xe_pci.c
···
 
 /**
  * xe_pci_fake_data_gen_params - Generate struct xe_pci_fake_data parameters
+ * @test: test context object
  * @prev: the pointer to the previous parameter to iterate from or NULL
  * @desc: output buffer with minimum size of KUNIT_PARAM_DESC_SIZE
  *
···
 
 /**
  * xe_pci_graphics_ip_gen_param - Generate graphics struct xe_ip parameters
+ * @test: test context object
  * @prev: the pointer to the previous parameter to iterate from or NULL
  * @desc: output buffer with minimum size of KUNIT_PARAM_DESC_SIZE
  *
···
 
 /**
  * xe_pci_media_ip_gen_param - Generate media struct xe_ip parameters
+ * @test: test context object
  * @prev: the pointer to the previous parameter to iterate from or NULL
  * @desc: output buffer with minimum size of KUNIT_PARAM_DESC_SIZE
  *
···
 
 /**
  * xe_pci_id_gen_param - Generate struct pci_device_id parameters
+ * @test: test context object
  * @prev: the pointer to the previous parameter to iterate from or NULL
  * @desc: output buffer with minimum size of KUNIT_PARAM_DESC_SIZE
  *
···
 
 /**
  * xe_pci_live_device_gen_param - Helper to iterate Xe devices as KUnit parameters
+ * @test: test context object
  * @prev: the previously returned value, or NULL for the first iteration
  * @desc: the buffer for a parameter name
  *
-8
drivers/gpu/drm/xe/xe_bo_evict.c
··· 182 182 183 183 static int xe_bo_restore_and_map_ggtt(struct xe_bo *bo) 184 184 { 185 - struct xe_device *xe = xe_bo_device(bo); 186 185 int ret; 187 186 188 187 ret = xe_bo_restore_pinned(bo); ··· 199 200 xe_ggtt_map_bo_unlocked(tile->mem.ggtt, bo); 200 201 } 201 202 } 202 - 203 - /* 204 - * We expect validate to trigger a move VRAM and our move code 205 - * should setup the iosys map. 206 - */ 207 - xe_assert(xe, !(bo->flags & XE_BO_FLAG_PINNED_LATE_RESTORE) || 208 - !iosys_map_is_null(&bo->vmap)); 209 203 210 204 return 0; 211 205 }
+1 -1
drivers/gpu/drm/xe/xe_device.c
··· 1070 1070 spin_lock(&gt->global_invl_lock); 1071 1071 1072 1072 xe_mmio_write32(&gt->mmio, XE2_GLOBAL_INVAL, 0x1); 1073 - if (xe_mmio_wait32(&gt->mmio, XE2_GLOBAL_INVAL, 0x1, 0x0, 500, NULL, true)) 1073 + if (xe_mmio_wait32(&gt->mmio, XE2_GLOBAL_INVAL, 0x1, 0x0, 1000, NULL, true)) 1074 1074 xe_gt_err_once(gt, "Global invalidation timeout\n"); 1075 1075 1076 1076 spin_unlock(&gt->global_invl_lock);
+8
drivers/gpu/drm/xe/xe_gt_idle.c
··· 124 124 if (xe_gt_is_main_type(gt)) 125 125 gtidle->powergate_enable |= RENDER_POWERGATE_ENABLE; 126 126 127 + if (MEDIA_VERx100(xe) >= 1100 && MEDIA_VERx100(xe) < 1255) 128 + gtidle->powergate_enable |= MEDIA_SAMPLERS_POWERGATE_ENABLE; 129 + 127 130 if (xe->info.platform != XE_DG1) { 128 131 for (i = XE_HW_ENGINE_VCS0, j = 0; i <= XE_HW_ENGINE_VCS7; ++i, ++j) { 129 132 if ((gt->info.engine_mask & BIT(i))) ··· 249 246 drm_printf(p, "Media Slice%d Power Gate Status: %s\n", n, 250 247 str_up_down(pg_status & media_slices[n].status_bit)); 251 248 } 249 + 250 + if (MEDIA_VERx100(xe) >= 1100 && MEDIA_VERx100(xe) < 1255) 251 + drm_printf(p, "Media Samplers Power Gating Enabled: %s\n", 252 + str_yes_no(pg_enabled & MEDIA_SAMPLERS_POWERGATE_ENABLE)); 253 + 252 254 return 0; 253 255 } 254 256
+12 -1
drivers/gpu/drm/xe/xe_guc_submit.c
··· 44 44 #include "xe_ring_ops_types.h" 45 45 #include "xe_sched_job.h" 46 46 #include "xe_trace.h" 47 + #include "xe_uc_fw.h" 47 48 #include "xe_vm.h" 48 49 49 50 static struct xe_guc * ··· 1490 1489 xe_gt_assert(guc_to_gt(guc), !(q->flags & EXEC_QUEUE_FLAG_PERMANENT)); 1491 1490 trace_xe_exec_queue_cleanup_entity(q); 1492 1491 1493 - if (exec_queue_registered(q)) 1492 + /* 1493 + * Expected state transitions for cleanup: 1494 + * - If the exec queue is registered and GuC firmware is running, we must first 1495 + * disable scheduling and deregister the queue to ensure proper teardown and 1496 + * resource release in the GuC, then destroy the exec queue on driver side. 1497 + * - If the GuC is already stopped (e.g., during driver unload or GPU reset), 1498 + * we cannot expect a response for the deregister request. In this case, 1499 + * it is safe to directly destroy the exec queue on driver side, as the GuC 1500 + * will not process further requests and all resources must be cleaned up locally. 1501 + */ 1502 + if (exec_queue_registered(q) && xe_uc_fw_is_running(&guc->fw)) 1494 1503 disable_scheduling_deregister(guc, q); 1495 1504 else 1496 1505 __guc_exec_queue_destroy(guc, q);
+4 -2
drivers/gpu/drm/xe/xe_migrate.c
··· 434 434 435 435 err = xe_migrate_lock_prepare_vm(tile, m, vm); 436 436 if (err) 437 - return err; 437 + goto err_out; 438 438 439 439 if (xe->info.has_usm) { 440 440 struct xe_hw_engine *hwe = xe_gt_hw_engine(primary_gt, ··· 2113 2113 if (current_bytes & ~PAGE_MASK) { 2114 2114 int pitch = 4; 2115 2115 2116 - current_bytes = min_t(int, current_bytes, S16_MAX * pitch); 2116 + current_bytes = min_t(int, current_bytes, 2117 + round_down(S16_MAX * pitch, 2118 + XE_CACHELINE_BYTES)); 2117 2119 } 2118 2120 2119 2121 __fence = xe_migrate_vram(m, current_bytes,
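The new clamp in the second hunk keeps a chunk both under the hardware's `S16_MAX * pitch` limit and a whole number of cachelines long. A small sketch of the arithmetic, with `ROUND_DOWN`/`MIN_T` re-created after the kernel helpers and `XE_CACHELINE_BYTES = 64` assumed for illustration:

```c
#include <assert.h>

/* Minimal re-creations of kernel helpers (round_down() requires a
 * power-of-two alignment); the cacheline size is an assumption here. */
#define ROUND_DOWN(x, y)	((x) & ~((y) - 1))
#define MIN_T(type, a, b)	((type)(a) < (type)(b) ? (type)(a) : (type)(b))

#define S16_MAX			32767
#define XE_CACHELINE_BYTES	64

/* The patched clamp: cap the chunk at S16_MAX * pitch, rounded down so
 * the cap itself is cacheline-aligned. */
static int clamp_chunk(int current_bytes, int pitch)
{
	return MIN_T(int, current_bytes,
		     ROUND_DOWN(S16_MAX * pitch, XE_CACHELINE_BYTES));
}
```

With `pitch = 4`, the old cap `S16_MAX * 4 = 131068` is not a multiple of 64; the rounded cap becomes 131008, which is why the `round_down()` was added.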
+2
drivers/gpu/drm/xe/xe_pci.c
··· 867 867 if (err) 868 868 return err; 869 869 870 + xe_vram_resize_bar(xe); 871 + 870 872 err = xe_device_probe_early(xe); 871 873 /* 872 874 * In Boot Survivability mode, no drm card is exposed and driver
+15 -2
drivers/gpu/drm/xe/xe_svm.c
··· 1034 1034 if (err) 1035 1035 return err; 1036 1036 1037 + dpagemap = xe_vma_resolve_pagemap(vma, tile); 1038 + if (!dpagemap && !ctx.devmem_only) 1039 + ctx.device_private_page_owner = NULL; 1037 1040 range = xe_svm_range_find_or_insert(vm, fault_addr, vma, &ctx); 1038 1041 1039 1042 if (IS_ERR(range)) ··· 1057 1054 1058 1055 range_debug(range, "PAGE FAULT"); 1059 1056 1060 - dpagemap = xe_vma_resolve_pagemap(vma, tile); 1061 1057 if (--migrate_try_count >= 0 && 1062 1058 xe_svm_range_needs_migrate_to_vram(range, vma, !!dpagemap || ctx.devmem_only)) { 1063 1059 ktime_t migrate_start = xe_svm_stats_ktime_get(); ··· 1075 1073 drm_dbg(&vm->xe->drm, 1076 1074 "VRAM allocation failed, falling back to retrying fault, asid=%u, errno=%pe\n", 1077 1075 vm->usm.asid, ERR_PTR(err)); 1078 - goto retry; 1076 + 1077 + /* 1078 + * In the devmem-only case, mixed mappings may 1079 + * be found. The get_pages function will fix 1080 + * these up to a single location, allowing the 1081 + * page fault handler to make forward progress. 1082 + */ 1083 + if (ctx.devmem_only) 1084 + goto get_pages; 1085 + else 1086 + goto retry; 1079 1087 } else { 1080 1088 drm_err(&vm->xe->drm, 1081 1089 "VRAM allocation failed, retry count exceeded, asid=%u, errno=%pe\n", ··· 1095 1083 } 1096 1084 } 1097 1085 1086 + get_pages: 1098 1087 get_pages_start = xe_svm_stats_ktime_get(); 1099 1088 1100 1089 range_debug(range, "GET PAGES");
+23 -9
drivers/gpu/drm/xe/xe_vm.c
··· 2832 2832 } 2833 2833 2834 2834 static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma, 2835 - bool validate) 2835 + bool res_evict, bool validate) 2836 2836 { 2837 2837 struct xe_bo *bo = xe_vma_bo(vma); 2838 2838 struct xe_vm *vm = xe_vma_vm(vma); ··· 2843 2843 err = drm_exec_lock_obj(exec, &bo->ttm.base); 2844 2844 if (!err && validate) 2845 2845 err = xe_bo_validate(bo, vm, 2846 - !xe_vm_in_preempt_fence_mode(vm), exec); 2846 + !xe_vm_in_preempt_fence_mode(vm) && 2847 + res_evict, exec); 2847 2848 } 2848 2849 2849 2850 return err; ··· 2914 2913 } 2915 2914 2916 2915 static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm, 2917 - struct xe_vma_op *op) 2916 + struct xe_vma_ops *vops, struct xe_vma_op *op) 2918 2917 { 2919 2918 int err = 0; 2919 + bool res_evict; 2920 + 2921 + /* 2922 + * We only allow evicting a BO within the VM if it is not part of an 2923 + * array of binds, as an array of binds can evict another BO within the 2924 + * bind. 2925 + */ 2926 + res_evict = !(vops->flags & XE_VMA_OPS_ARRAY_OF_BINDS); 2920 2927 2921 2928 switch (op->base.op) { 2922 2929 case DRM_GPUVA_OP_MAP: 2923 2930 if (!op->map.invalidate_on_bind) 2924 2931 err = vma_lock_and_validate(exec, op->map.vma, 2932 + res_evict, 2925 2933 !xe_vm_in_fault_mode(vm) || 2926 2934 op->map.immediate); 2927 2935 break; ··· 2941 2931 2942 2932 err = vma_lock_and_validate(exec, 2943 2933 gpuva_to_vma(op->base.remap.unmap->va), 2944 - false); 2934 + res_evict, false); 2945 2935 if (!err && op->remap.prev) 2946 - err = vma_lock_and_validate(exec, op->remap.prev, true); 2936 + err = vma_lock_and_validate(exec, op->remap.prev, 2937 + res_evict, true); 2947 2938 if (!err && op->remap.next) 2948 - err = vma_lock_and_validate(exec, op->remap.next, true); 2939 + err = vma_lock_and_validate(exec, op->remap.next, 2940 + res_evict, true); 2949 2941 break; 2950 2942 case DRM_GPUVA_OP_UNMAP: 2951 2943 err = check_ufence(gpuva_to_vma(op->base.unmap.va)); ··· 2956 2944 2957 
2945 err = vma_lock_and_validate(exec, 2958 2946 gpuva_to_vma(op->base.unmap.va), 2959 - false); 2947 + res_evict, false); 2960 2948 break; 2961 2949 case DRM_GPUVA_OP_PREFETCH: 2962 2950 { ··· 2971 2959 2972 2960 err = vma_lock_and_validate(exec, 2973 2961 gpuva_to_vma(op->base.prefetch.va), 2974 - false); 2962 + res_evict, false); 2975 2963 if (!err && !xe_vma_has_no_bo(vma)) 2976 2964 err = xe_bo_migrate(xe_vma_bo(vma), 2977 2965 region_to_mem_type[region], ··· 3017 3005 return err; 3018 3006 3019 3007 list_for_each_entry(op, &vops->list, link) { 3020 - err = op_lock_and_prep(exec, vm, op); 3008 + err = op_lock_and_prep(exec, vm, vops, op); 3021 3009 if (err) 3022 3010 return err; 3023 3011 } ··· 3650 3638 } 3651 3639 3652 3640 xe_vma_ops_init(&vops, vm, q, syncs, num_syncs); 3641 + if (args->num_binds > 1) 3642 + vops.flags |= XE_VMA_OPS_ARRAY_OF_BINDS; 3653 3643 for (i = 0; i < args->num_binds; ++i) { 3654 3644 u64 range = bind_ops[i].range; 3655 3645 u64 addr = bind_ops[i].addr;
+1
drivers/gpu/drm/xe/xe_vm_types.h
··· 476 476 /** @flag: signify the properties within xe_vma_ops*/ 477 477 #define XE_VMA_OPS_FLAG_HAS_SVM_PREFETCH BIT(0) 478 478 #define XE_VMA_OPS_FLAG_MADVISE BIT(1) 479 + #define XE_VMA_OPS_ARRAY_OF_BINDS BIT(2) 479 480 u32 flags; 480 481 #ifdef TEST_VM_OPS_ERROR 481 482 /** @inject_error: inject error to test error handling */
+26 -8
drivers/gpu/drm/xe/xe_vram.c
··· 26 26 27 27 #define BAR_SIZE_SHIFT 20 28 28 29 - static void 30 - _resize_bar(struct xe_device *xe, int resno, resource_size_t size) 29 + /* 30 + * Release all the BARs that could influence/block LMEMBAR resizing, i.e. 31 + * assigned IORESOURCE_MEM_64 BARs 32 + */ 33 + static void release_bars(struct pci_dev *pdev) 34 + { 35 + struct resource *res; 36 + int i; 37 + 38 + pci_dev_for_each_resource(pdev, res, i) { 39 + /* Resource already un-assigned, do not reset it */ 40 + if (!res->parent) 41 + continue; 42 + 43 + /* No need to release unrelated BARs */ 44 + if (!(res->flags & IORESOURCE_MEM_64)) 45 + continue; 46 + 47 + pci_release_resource(pdev, i); 48 + } 49 + } 50 + 51 + static void resize_bar(struct xe_device *xe, int resno, resource_size_t size) 31 52 { 32 53 struct pci_dev *pdev = to_pci_dev(xe->drm.dev); 33 54 int bar_size = pci_rebar_bytes_to_size(size); 34 55 int ret; 35 56 36 - if (pci_resource_len(pdev, resno)) 37 - pci_release_resource(pdev, resno); 57 + release_bars(pdev); 38 58 39 59 ret = pci_resize_resource(pdev, resno, bar_size); 40 60 if (ret) { ··· 70 50 * if force_vram_bar_size is set, attempt to set to the requested size 71 51 * else set to maximum possible size 72 52 */ 73 - static void resize_vram_bar(struct xe_device *xe) 53 + void xe_vram_resize_bar(struct xe_device *xe) 74 54 { 75 55 int force_vram_bar_size = xe_modparam.force_vram_bar_size; 76 56 struct pci_dev *pdev = to_pci_dev(xe->drm.dev); ··· 139 119 pci_read_config_dword(pdev, PCI_COMMAND, &pci_cmd); 140 120 pci_write_config_dword(pdev, PCI_COMMAND, pci_cmd & ~PCI_COMMAND_MEMORY); 141 121 142 - _resize_bar(xe, LMEM_BAR, rebar_size); 122 + resize_bar(xe, LMEM_BAR, rebar_size); 143 123 144 124 pci_assign_unassigned_bus_resources(pdev->bus); 145 125 pci_write_config_dword(pdev, PCI_COMMAND, pci_cmd); ··· 167 147 drm_err(&xe->drm, "pci resource is not valid\n"); 168 148 return -ENXIO; 169 149 } 170 - 171 - resize_vram_bar(xe); 172 150 173 151 lmem_bar->io_start = 
pci_resource_start(pdev, LMEM_BAR); 174 152 lmem_bar->io_size = pci_resource_len(pdev, LMEM_BAR);
+1
drivers/gpu/drm/xe/xe_vram.h
··· 11 11 struct xe_device; 12 12 struct xe_vram_region; 13 13 14 + void xe_vram_resize_bar(struct xe_device *xe); 14 15 int xe_vram_probe(struct xe_device *xe); 15 16 16 17 struct xe_vram_region *xe_vram_region_alloc(struct xe_device *xe, u8 id, u32 placement);
+1 -1
drivers/hid/Kconfig
··· 93 93 If unsure, say Y. 94 94 95 95 config HID_HAPTIC 96 - tristate "Haptic touchpad support" 96 + bool "Haptic touchpad support" 97 97 default n 98 98 help 99 99 Support for touchpads with force sensors and haptic actuators instead of a
+24 -3
drivers/hid/hid-cp2112.c
··· 689 689 count = cp2112_write_read_req(buf, addr, read_length, 690 690 command, NULL, 0); 691 691 } else { 692 - count = cp2112_write_req(buf, addr, command, 692 + /* Copy starts from data->block[1] so the length can 693 + * be at max I2C_SMBUS_BLOCK_MAX + 1 694 + */ 695 + 696 + if (data->block[0] > I2C_SMBUS_BLOCK_MAX + 1) 697 + count = -EINVAL; 698 + else 699 + count = cp2112_write_req(buf, addr, command, 693 700 data->block + 1, 694 701 data->block[0]); 695 702 } ··· 707 700 I2C_SMBUS_BLOCK_MAX, 708 701 command, NULL, 0); 709 702 } else { 710 - count = cp2112_write_req(buf, addr, command, 703 + /* data_length here is data->block[0] + 1 704 + * so make sure that data->block[0] is 705 + * less than or equal to I2C_SMBUS_BLOCK_MAX + 1 706 + */ 707 + if (data->block[0] > I2C_SMBUS_BLOCK_MAX + 1) 708 + count = -EINVAL; 709 + else 710 + count = cp2112_write_req(buf, addr, command, 711 711 data->block, 712 712 data->block[0] + 1); 713 713 } ··· 723 709 size = I2C_SMBUS_BLOCK_DATA; 724 710 read_write = I2C_SMBUS_READ; 725 711 726 - count = cp2112_write_read_req(buf, addr, I2C_SMBUS_BLOCK_MAX, 712 + /* data_length is data->block[0] + 1, so 713 + * data->block[0] should be less than or 714 + * equal to I2C_SMBUS_BLOCK_MAX + 1 715 + */ 716 + if (data->block[0] > I2C_SMBUS_BLOCK_MAX + 1) 717 + count = -EINVAL; 718 + else 719 + count = cp2112_write_read_req(buf, addr, I2C_SMBUS_BLOCK_MAX, 727 720 command, data->block, 728 721 data->block[0] + 1); 729 722 break;
+1 -1
drivers/hid/hid-debug.c
··· 2523 2523 { 0x85, 0x0088, "iDeviceName" }, 2524 2524 { 0x85, 0x0089, "iDeviceChemistry" }, 2525 2525 { 0x85, 0x008a, "ManufacturerData" }, 2526 - { 0x85, 0x008b, "Rechargable" }, 2526 + { 0x85, 0x008b, "Rechargeable" }, 2527 2527 { 0x85, 0x008c, "WarningCapacityLimit" }, 2528 2528 { 0x85, 0x008d, "CapacityGranularity1" }, 2529 2529 { 0x85, 0x008e, "CapacityGranularity2" },
+4
drivers/hid/hid-ids.h
··· 342 342 #define USB_DEVICE_ID_CODEMERCS_IOW_FIRST 0x1500 343 343 #define USB_DEVICE_ID_CODEMERCS_IOW_LAST 0x15ff 344 344 345 + #define USB_VENDOR_ID_COOLER_MASTER 0x2516 346 + #define USB_DEVICE_ID_COOLER_MASTER_MICE_DONGLE 0x01b7 347 + 345 348 #define USB_VENDOR_ID_CORSAIR 0x1b1c 346 349 #define USB_DEVICE_ID_CORSAIR_K90 0x1b02 347 350 #define USB_DEVICE_ID_CORSAIR_K70R 0x1b09 ··· 1435 1432 1436 1433 #define USB_VENDOR_ID_VRS 0x0483 1437 1434 #define USB_DEVICE_ID_VRS_DFP 0xa355 1435 + #define USB_DEVICE_ID_VRS_R295 0xa44c 1438 1436 1439 1437 #define USB_VENDOR_ID_VTL 0x0306 1440 1438 #define USB_DEVICE_ID_VTL_MULTITOUCH_FF3F 0xff3f
+4 -1
drivers/hid/hid-input.c
··· 635 635 return; 636 636 } 637 637 638 - if (value == 0 || value < dev->battery_min || value > dev->battery_max) 638 + if ((usage & HID_USAGE_PAGE) == HID_UP_DIGITIZER && value == 0) 639 + return; 640 + 641 + if (value < dev->battery_min || value > dev->battery_max) 639 642 return; 640 643 641 644 capacity = hidinput_scale_battery_capacity(dev, value);
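The hunk above narrows the old `value == 0` rejection: only on the Digitizer usage page does a raw 0 mean "not reporting", so it is dropped before the min/max range check, and a legitimate 0% from other pages survives. A sketch of that filter (illustrative names; the usage-page constants match the HID ones, `0xffff0000` page mask and `0x000d0000` for Digitizer):

```c
#include <assert.h>

/* Illustrative re-creation of the battery-capacity filter; not the
 * driver function itself. */
#define USAGE_PAGE_MASK	0xffff0000u
#define UP_DIGITIZER	0x000d0000u

static int accept_battery_value(unsigned int usage, int value,
				int min, int max)
{
	/* A digitizer reporting 0 means "unknown", not an empty battery */
	if ((usage & USAGE_PAGE_MASK) == UP_DIGITIZER && value == 0)
		return 0;
	/* Otherwise only the advertised range is trusted */
	if (value < min || value > max)
		return 0;
	return 1;
}
```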
+21
drivers/hid/hid-logitech-hidpp.c
··· 75 75 #define HIDPP_QUIRK_HIDPP_CONSUMER_VENDOR_KEYS BIT(27) 76 76 #define HIDPP_QUIRK_HI_RES_SCROLL_1P0 BIT(28) 77 77 #define HIDPP_QUIRK_WIRELESS_STATUS BIT(29) 78 + #define HIDPP_QUIRK_RESET_HI_RES_SCROLL BIT(30) 78 79 79 80 /* These are just aliases for now */ 80 81 #define HIDPP_QUIRK_KBD_SCROLL_WHEEL HIDPP_QUIRK_HIDPP_WHEELS ··· 194 193 void *private_data; 195 194 196 195 struct work_struct work; 196 + struct work_struct reset_hi_res_work; 197 197 struct kfifo delayed_work_fifo; 198 198 struct input_dev *delayed_input; 199 199 ··· 3838 3836 struct hidpp_report *answer = hidpp->send_receive_buf; 3839 3837 struct hidpp_report *report = (struct hidpp_report *)data; 3840 3838 int ret; 3839 + int last_online; 3841 3840 3842 3841 /* 3843 3842 * If the mutex is locked then we have a pending answer from a ··· 3880 3877 "See: https://gitlab.freedesktop.org/jwrdegoede/logitech-27mhz-keyboard-encryption-setup/\n"); 3881 3878 } 3882 3879 3880 + last_online = hidpp->battery.online; 3883 3881 if (hidpp->capabilities & HIDPP_CAPABILITY_HIDPP20_BATTERY) { 3884 3882 ret = hidpp20_battery_event_1000(hidpp, data, size); 3885 3883 if (ret != 0) ··· 3903 3899 ret = hidpp10_battery_event(hidpp, data, size); 3904 3900 if (ret != 0) 3905 3901 return ret; 3902 + } 3903 + 3904 + if (hidpp->quirks & HIDPP_QUIRK_RESET_HI_RES_SCROLL) { 3905 + if (last_online == 0 && hidpp->battery.online == 1) 3906 + schedule_work(&hidpp->reset_hi_res_work); 3906 3907 } 3907 3908 3908 3909 if (hidpp->quirks & HIDPP_QUIRK_HIDPP_WHEELS) { ··· 4283 4274 hidpp->delayed_input = input; 4284 4275 } 4285 4276 4277 + static void hidpp_reset_hi_res_handler(struct work_struct *work) 4278 + { 4279 + struct hidpp_device *hidpp = container_of(work, struct hidpp_device, reset_hi_res_work); 4280 + 4281 + hi_res_scroll_enable(hidpp); 4282 + } 4283 + 4286 4284 static DEVICE_ATTR(builtin_power_supply, 0000, NULL, NULL); 4287 4285 4288 4286 static struct attribute *sysfs_attrs[] = { ··· 4420 4404 } 4421 4405 4422 4406 
INIT_WORK(&hidpp->work, hidpp_connect_event); 4407 + INIT_WORK(&hidpp->reset_hi_res_work, hidpp_reset_hi_res_handler); 4423 4408 mutex_init(&hidpp->send_mutex); 4424 4409 init_waitqueue_head(&hidpp->wait); 4425 4410 ··· 4516 4499 4517 4500 hid_hw_stop(hdev); 4518 4501 cancel_work_sync(&hidpp->work); 4502 + cancel_work_sync(&hidpp->reset_hi_res_work); 4519 4503 mutex_destroy(&hidpp->send_mutex); 4520 4504 } 4521 4505 ··· 4564 4546 { /* Keyboard MX5500 (Bluetooth-receiver in HID proxy mode) */ 4565 4547 LDJ_DEVICE(0xb30b), 4566 4548 .driver_data = HIDPP_QUIRK_HIDPP_CONSUMER_VENDOR_KEYS }, 4549 + { /* Logitech G502 Lightspeed Wireless Gaming Mouse */ 4550 + LDJ_DEVICE(0x407f), 4551 + .driver_data = HIDPP_QUIRK_RESET_HI_RES_SCROLL }, 4567 4552 4568 4553 { LDJ_DEVICE(HID_ANY_ID) }, 4569 4554
+15 -13
drivers/hid/hid-multitouch.c
··· 94 94 TOUCHPAD_REPORT_ALL = TOUCHPAD_REPORT_BUTTONS | TOUCHPAD_REPORT_CONTACTS, 95 95 }; 96 96 97 - #define MT_IO_FLAGS_RUNNING 0 98 - #define MT_IO_FLAGS_ACTIVE_SLOTS 1 99 - #define MT_IO_FLAGS_PENDING_SLOTS 2 97 + #define MT_IO_SLOTS_MASK GENMASK(7, 0) /* reserve first 8 bits for slot tracking */ 98 + #define MT_IO_FLAGS_RUNNING 32 100 99 101 100 static const bool mtrue = true; /* default for true */ 102 101 static const bool mfalse; /* default for false */ ··· 171 172 struct timer_list release_timer; /* to release sticky fingers */ 172 173 struct hid_haptic_device *haptic; /* haptic related configuration */ 173 174 struct hid_device *hdev; /* hid_device we're attached to */ 174 - unsigned long mt_io_flags; /* mt flags (MT_IO_FLAGS_*) */ 175 + unsigned long mt_io_flags; /* mt flags (MT_IO_FLAGS_RUNNING) 176 + * first 8 bits are reserved for keeping the slot 177 + * states, this is fine because we only support up 178 + * to 250 slots (MT_MAX_MAXCONTACT) 179 + */ 175 180 __u8 inputmode_value; /* InputMode HID feature value */ 176 181 __u8 maxcontacts; 177 182 bool is_buttonpad; /* is this device a button pad? 
*/ ··· 989 986 990 987 for_each_set_bit(slotnum, app->pending_palm_slots, td->maxcontacts) { 991 988 clear_bit(slotnum, app->pending_palm_slots); 989 + clear_bit(slotnum, &td->mt_io_flags); 992 990 993 991 input_mt_slot(input, slotnum); 994 992 input_mt_report_slot_inactive(input); ··· 1023 1019 app->left_button_state = 0; 1024 1020 if (td->is_haptic_touchpad) 1025 1021 hid_haptic_pressure_reset(td->haptic); 1026 - 1027 - if (test_bit(MT_IO_FLAGS_ACTIVE_SLOTS, &td->mt_io_flags)) 1028 - set_bit(MT_IO_FLAGS_PENDING_SLOTS, &td->mt_io_flags); 1029 - else 1030 - clear_bit(MT_IO_FLAGS_PENDING_SLOTS, &td->mt_io_flags); 1031 - clear_bit(MT_IO_FLAGS_ACTIVE_SLOTS, &td->mt_io_flags); 1032 1022 } 1033 1023 1034 1024 static int mt_compute_timestamp(struct mt_application *app, __s32 value) ··· 1200 1202 input_event(input, EV_ABS, ABS_MT_TOUCH_MAJOR, major); 1201 1203 input_event(input, EV_ABS, ABS_MT_TOUCH_MINOR, minor); 1202 1204 1203 - set_bit(MT_IO_FLAGS_ACTIVE_SLOTS, &td->mt_io_flags); 1205 + set_bit(slotnum, &td->mt_io_flags); 1206 + } else { 1207 + clear_bit(slotnum, &td->mt_io_flags); 1204 1208 } 1205 1209 1206 1210 return 0; ··· 1337 1337 * defect. 
1338 1338 */ 1339 1339 if (app->quirks & MT_QUIRK_STICKY_FINGERS) { 1340 - if (test_bit(MT_IO_FLAGS_PENDING_SLOTS, &td->mt_io_flags)) 1340 + if (td->mt_io_flags & MT_IO_SLOTS_MASK) 1341 1341 mod_timer(&td->release_timer, 1342 1342 jiffies + msecs_to_jiffies(100)); 1343 1343 else ··· 1742 1742 case HID_CP_CONSUMER_CONTROL: 1743 1743 case HID_GD_WIRELESS_RADIO_CTLS: 1744 1744 case HID_GD_SYSTEM_MULTIAXIS: 1745 + case HID_DG_PEN: 1745 1746 /* already handled by hid core */ 1746 1747 break; 1747 1748 case HID_DG_TOUCHSCREEN: ··· 1814 1813 for (i = 0; i < mt->num_slots; i++) { 1815 1814 input_mt_slot(input_dev, i); 1816 1815 input_mt_report_slot_inactive(input_dev); 1816 + clear_bit(i, &td->mt_io_flags); 1817 1817 } 1818 1818 input_mt_sync_frame(input_dev); 1819 1819 input_sync(input_dev); ··· 1837 1835 */ 1838 1836 if (test_and_set_bit_lock(MT_IO_FLAGS_RUNNING, &td->mt_io_flags)) 1839 1837 return; 1840 - if (test_bit(MT_IO_FLAGS_PENDING_SLOTS, &td->mt_io_flags)) 1838 + if (td->mt_io_flags & MT_IO_SLOTS_MASK) 1841 1839 mt_release_contacts(hdev); 1842 1840 clear_bit_unlock(MT_IO_FLAGS_RUNNING, &td->mt_io_flags); 1843 1841 }
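The multitouch rework above replaces the single ACTIVE/PENDING flags with one bit per contact in the low bits of `mt_io_flags`, keeping the RUNNING state at a higher bit so `flags & MT_IO_SLOTS_MASK` directly answers "is any contact still down?". A toy model of that scheme (constants mirror the patch; this is a sketch, not the driver code, and assumes a 64-bit `unsigned long` for the bit-32 flag):

```c
#include <assert.h>

/* Sketch of the per-slot bit tracking from the hunk above. */
#define SLOTS_MASK	0xffUL		/* GENMASK(7, 0): slot-state bits */
#define RUNNING_BIT	32		/* like MT_IO_FLAGS_RUNNING */

static void slot_down(unsigned long *flags, int slot)
{
	*flags |= 1UL << slot;		/* contact reported in range */
}

static void slot_up(unsigned long *flags, int slot)
{
	*flags &= ~(1UL << slot);	/* contact released */
}

static int any_slot_active(unsigned long flags)
{
	return (flags & SLOTS_MASK) != 0;	/* RUNNING bit ignored */
}
```

This is what lets the sticky-finger timer decision become a plain mask test (`td->mt_io_flags & MT_IO_SLOTS_MASK`) instead of maintaining separate pending/active flags.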
+3 -3
drivers/hid/hid-nintendo.c
··· 1455 1455 ctlr->imu_avg_delta_ms; 1456 1456 ctlr->imu_timestamp_us += 1000 * ctlr->imu_avg_delta_ms; 1457 1457 if (dropped_pkts > JC_IMU_DROPPED_PKT_WARNING) { 1458 - hid_warn(ctlr->hdev, 1458 + hid_warn_ratelimited(ctlr->hdev, 1459 1459 "compensating for %u dropped IMU reports\n", 1460 1460 dropped_pkts); 1461 - hid_warn(ctlr->hdev, 1461 + hid_warn_ratelimited(ctlr->hdev, 1462 1462 "delta=%u avg_delta=%u\n", 1463 1463 delta, ctlr->imu_avg_delta_ms); 1464 1464 } ··· 2420 2420 struct joycon_input_report *report; 2421 2421 2422 2422 req.subcmd_id = JC_SUBCMD_REQ_DEV_INFO; 2423 - ret = joycon_send_subcmd(ctlr, &req, 0, HZ); 2423 + ret = joycon_send_subcmd(ctlr, &req, 0, 2 * HZ); 2424 2424 if (ret) { 2425 2425 hid_err(ctlr->hdev, "Failed to get joycon info; ret=%d\n", ret); 2426 2426 return ret;
+2
drivers/hid/hid-quirks.c
··· 57 57 { HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_FLIGHT_SIM_YOKE), HID_QUIRK_NOGET }, 58 58 { HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_PRO_PEDALS), HID_QUIRK_NOGET }, 59 59 { HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_PRO_THROTTLE), HID_QUIRK_NOGET }, 60 + { HID_USB_DEVICE(USB_VENDOR_ID_COOLER_MASTER, USB_DEVICE_ID_COOLER_MASTER_MICE_DONGLE), HID_QUIRK_ALWAYS_POLL }, 60 61 { HID_USB_DEVICE(USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K65RGB), HID_QUIRK_NO_INIT_REPORTS }, 61 62 { HID_USB_DEVICE(USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K65RGB_RAPIDFIRE), HID_QUIRK_NO_INIT_REPORTS | HID_QUIRK_ALWAYS_POLL }, 62 63 { HID_USB_DEVICE(USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K70RGB), HID_QUIRK_NO_INIT_REPORTS }, ··· 207 206 { HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC, USB_DEVICE_ID_UCLOGIC_TABLET_KNA5), HID_QUIRK_MULTI_INPUT }, 208 207 { HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC, USB_DEVICE_ID_UCLOGIC_TABLET_TWA60), HID_QUIRK_MULTI_INPUT }, 209 208 { HID_USB_DEVICE(USB_VENDOR_ID_UGTIZER, USB_DEVICE_ID_UGTIZER_TABLET_WP5540), HID_QUIRK_MULTI_INPUT }, 209 + { HID_USB_DEVICE(USB_VENDOR_ID_VRS, USB_DEVICE_ID_VRS_R295), HID_QUIRK_ALWAYS_POLL }, 210 210 { HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_MEDIA_TABLET_10_6_INCH), HID_QUIRK_MULTI_INPUT }, 211 211 { HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_MEDIA_TABLET_14_1_INCH), HID_QUIRK_MULTI_INPUT }, 212 212 { HID_USB_DEVICE(USB_VENDOR_ID_WALTOP, USB_DEVICE_ID_WALTOP_SIRIUS_BATTERY_FREE_TABLET), HID_QUIRK_MULTI_INPUT },
+1 -1
drivers/hid/intel-thc-hid/intel-quicki2c/pci-quicki2c.c
··· 466 466 dev_warn(qcdev->dev, 467 467 "Max frame size is smaller than hid max input length!"); 468 468 thc_i2c_set_rx_max_size(qcdev->thc_hw, 469 - le16_to_cpu(qcdev->i2c_max_frame_size)); 469 + qcdev->i2c_max_frame_size); 470 470 } 471 471 thc_i2c_rx_max_size_enable(qcdev->thc_hw, true); 472 472 }
+6
drivers/hid/intel-thc-hid/intel-quickspi/pci-quickspi.c
··· 33 33 .max_packet_size_value = MAX_PACKET_SIZE_VALUE_LNL, 34 34 }; 35 35 36 + struct quickspi_driver_data arl = { 37 + .max_packet_size_value = MAX_PACKET_SIZE_VALUE_MTL, 38 + }; 39 + 36 40 /* THC QuickSPI ACPI method to get device properties */ 37 41 /* HIDSPI Method: {6e2ac436-0fcf-41af-a265-b32a220dcfab} */ 38 42 static guid_t hidspi_guid = ··· 982 978 {PCI_DEVICE_DATA(INTEL, THC_PTL_U_DEVICE_ID_SPI_PORT2, &ptl), }, 983 979 {PCI_DEVICE_DATA(INTEL, THC_WCL_DEVICE_ID_SPI_PORT1, &ptl), }, 984 980 {PCI_DEVICE_DATA(INTEL, THC_WCL_DEVICE_ID_SPI_PORT2, &ptl), }, 981 + {PCI_DEVICE_DATA(INTEL, THC_ARL_DEVICE_ID_SPI_PORT1, &arl), }, 982 + {PCI_DEVICE_DATA(INTEL, THC_ARL_DEVICE_ID_SPI_PORT2, &arl), }, 985 983 {} 986 984 }; 987 985 MODULE_DEVICE_TABLE(pci, quickspi_pci_tbl);
+2
drivers/hid/intel-thc-hid/intel-quickspi/quickspi-dev.h
··· 21 21 #define PCI_DEVICE_ID_INTEL_THC_PTL_U_DEVICE_ID_SPI_PORT2 0xE44B 22 22 #define PCI_DEVICE_ID_INTEL_THC_WCL_DEVICE_ID_SPI_PORT1 0x4D49 23 23 #define PCI_DEVICE_ID_INTEL_THC_WCL_DEVICE_ID_SPI_PORT2 0x4D4B 24 + #define PCI_DEVICE_ID_INTEL_THC_ARL_DEVICE_ID_SPI_PORT1 0x7749 25 + #define PCI_DEVICE_ID_INTEL_THC_ARL_DEVICE_ID_SPI_PORT2 0x774B 24 26 25 27 /* HIDSPI special ACPI parameters DSM methods */ 26 28 #define ACPI_QUICKSPI_REVISION_NUM 2
+1 -2
drivers/hid/intel-thc-hid/intel-quickspi/quickspi-protocol.c
··· 280 280 281 281 qsdev->reset_ack = false; 282 282 283 - /* First interrupt uses level trigger to avoid missing interrupt */ 284 - thc_int_trigger_type_select(qsdev->thc_hw, false); 283 + thc_int_trigger_type_select(qsdev->thc_hw, true); 285 284 286 285 ret = acpi_tic_reset(qsdev); 287 286 if (ret)
-1
drivers/i2c/busses/i2c-amd-mp2.h
··· 207 207 208 208 static inline void amd_mp2_pm_runtime_put(struct amd_mp2_dev *mp2_dev) 209 209 { 210 - pm_runtime_mark_last_busy(&mp2_dev->pci_dev->dev); 211 210 pm_runtime_put_autosuspend(&mp2_dev->pci_dev->dev); 212 211 } 213 212
-1
drivers/i2c/busses/i2c-at91-core.c
··· 313 313 return ret; 314 314 } 315 315 316 - pm_runtime_mark_last_busy(dev); 317 316 pm_request_autosuspend(dev); 318 317 319 318 at91_init_twi_bus(twi_dev);
-1
drivers/i2c/busses/i2c-at91-master.c
··· 717 717 718 718 ret = (ret < 0) ? ret : num; 719 719 out: 720 - pm_runtime_mark_last_busy(dev->dev); 721 720 pm_runtime_put_autosuspend(dev->dev); 722 721 723 722 return ret;
-1
drivers/i2c/busses/i2c-cadence.c
··· 1128 1128 cdns_i2c_set_mode(CDNS_I2C_MODE_SLAVE, id); 1129 1129 #endif 1130 1130 1131 - pm_runtime_mark_last_busy(id->dev); 1132 1131 pm_runtime_put_autosuspend(id->dev); 1133 1132 return ret; 1134 1133 }
-2
drivers/i2c/busses/i2c-davinci.c
··· 543 543 ret = num; 544 544 545 545 out: 546 - pm_runtime_mark_last_busy(dev->dev); 547 546 pm_runtime_put_autosuspend(dev->dev); 548 547 549 548 return ret; ··· 820 821 if (r) 821 822 goto err_unuse_clocks; 822 823 823 - pm_runtime_mark_last_busy(dev->dev); 824 824 pm_runtime_put_autosuspend(dev->dev); 825 825 826 826 return 0;
-1
drivers/i2c/busses/i2c-designware-master.c
··· 901 901 i2c_dw_release_lock(dev); 902 902 903 903 done_nolock: 904 - pm_runtime_mark_last_busy(dev->dev); 905 904 pm_runtime_put_autosuspend(dev->dev); 906 905 907 906 return ret;
-1
drivers/i2c/busses/i2c-hix5hd2.c
··· 373 373 ret = num; 374 374 375 375 out: 376 - pm_runtime_mark_last_busy(priv->dev); 377 376 pm_runtime_put_autosuspend(priv->dev); 378 377 return ret; 379 378 }
-1
drivers/i2c/busses/i2c-i801.c
··· 930 930 */ 931 931 iowrite8(SMBHSTSTS_INUSE_STS | STATUS_FLAGS, SMBHSTSTS(priv)); 932 932 933 - pm_runtime_mark_last_busy(&priv->pci_dev->dev); 934 933 pm_runtime_put_autosuspend(&priv->pci_dev->dev); 935 934 return ret; 936 935 }
-3
drivers/i2c/busses/i2c-img-scb.c
··· 1131 1131 break; 1132 1132 } 1133 1133 1134 - pm_runtime_mark_last_busy(adap->dev.parent); 1135 1134 pm_runtime_put_autosuspend(adap->dev.parent); 1136 1135 1137 1136 return i2c->msg_status ? i2c->msg_status : num; ··· 1164 1165 "Unknown hardware revision (%d.%d.%d.%d)\n", 1165 1166 (rev >> 24) & 0xff, (rev >> 16) & 0xff, 1166 1167 (rev >> 8) & 0xff, rev & 0xff); 1167 - pm_runtime_mark_last_busy(i2c->adap.dev.parent); 1168 1168 pm_runtime_put_autosuspend(i2c->adap.dev.parent); 1169 1169 return -EINVAL; 1170 1170 } ··· 1315 1317 /* Perform a synchronous sequence to reset the bus */ 1316 1318 ret = img_i2c_reset_bus(i2c); 1317 1319 1318 - pm_runtime_mark_last_busy(i2c->adap.dev.parent); 1319 1320 pm_runtime_put_autosuspend(i2c->adap.dev.parent); 1320 1321 1321 1322 return ret;
-4
drivers/i2c/busses/i2c-imx-lpi2c.c
··· 363 363 return 0; 364 364 365 365 rpm_put: 366 - pm_runtime_mark_last_busy(lpi2c_imx->adapter.dev.parent); 367 366 pm_runtime_put_autosuspend(lpi2c_imx->adapter.dev.parent); 368 367 369 368 return ret; ··· 376 377 temp &= ~MCR_MEN; 377 378 writel(temp, lpi2c_imx->base + LPI2C_MCR); 378 379 379 - pm_runtime_mark_last_busy(lpi2c_imx->adapter.dev.parent); 380 380 pm_runtime_put_autosuspend(lpi2c_imx->adapter.dev.parent); 381 381 382 382 return 0; ··· 1460 1462 if (ret) 1461 1463 goto rpm_disable; 1462 1464 1463 - pm_runtime_mark_last_busy(&pdev->dev); 1464 1465 pm_runtime_put_autosuspend(&pdev->dev); 1465 1466 1466 1467 dev_info(&lpi2c_imx->adapter.dev, "LPI2C adapter registered\n"); ··· 1561 1564 1562 1565 static int lpi2c_resume(struct device *dev) 1563 1566 { 1564 - pm_runtime_mark_last_busy(dev); 1565 1567 pm_runtime_put_autosuspend(dev); 1566 1568 1567 1569 return 0;
-3
drivers/i2c/busses/i2c-imx.c
··· 1637 1637 1638 1638 result = i2c_imx_xfer_common(adapter, msgs, num, false); 1639 1639 1640 - pm_runtime_mark_last_busy(i2c_imx->adapter.dev.parent); 1641 1640 pm_runtime_put_autosuspend(i2c_imx->adapter.dev.parent); 1642 1641 1643 1642 return result; ··· 1821 1822 if (ret < 0) 1822 1823 goto clk_notifier_unregister; 1823 1824 1824 - pm_runtime_mark_last_busy(&pdev->dev); 1825 1825 pm_runtime_put_autosuspend(&pdev->dev); 1826 1826 1827 1827 dev_dbg(&i2c_imx->adapter.dev, "claimed irq %d\n", irq); ··· 1926 1928 1927 1929 static int i2c_imx_resume(struct device *dev) 1928 1930 { 1929 - pm_runtime_mark_last_busy(dev); 1930 1931 pm_runtime_put_autosuspend(dev); 1931 1932 1932 1933 return 0;
-1
drivers/i2c/busses/i2c-mv64xxx.c
··· 766 766 drv_data->num_msgs = 0; 767 767 drv_data->msgs = NULL; 768 768 769 - pm_runtime_mark_last_busy(&adap->dev); 770 769 pm_runtime_put_autosuspend(&adap->dev); 771 770 772 771 return ret;
-1
drivers/i2c/busses/i2c-nvidia-gpu.c
··· 216 216 if (status2 < 0) 217 217 dev_err(i2cd->dev, "i2c stop failed %d\n", status2); 218 218 } 219 - pm_runtime_mark_last_busy(i2cd->dev); 220 219 pm_runtime_put_autosuspend(i2cd->dev); 221 220 return status; 222 221 }
-3
drivers/i2c/busses/i2c-omap.c
··· 828 828 omap->set_mpu_wkup_lat(omap->dev, -1); 829 829 830 830 out: 831 - pm_runtime_mark_last_busy(omap->dev); 832 831 pm_runtime_put_autosuspend(omap->dev); 833 832 return r; 834 833 } ··· 1509 1510 dev_info(omap->dev, "bus %d rev%d.%d at %d kHz\n", adap->nr, 1510 1511 major, minor, omap->speed); 1511 1512 1512 - pm_runtime_mark_last_busy(omap->dev); 1513 1513 pm_runtime_put_autosuspend(omap->dev); 1514 1514 1515 1515 return 0; ··· 1603 1605 1604 1606 static int omap_i2c_resume(struct device *dev) 1605 1607 { 1606 - pm_runtime_mark_last_busy(dev); 1607 1608 pm_runtime_put_autosuspend(dev); 1608 1609 1609 1610 return 0;
-2
drivers/i2c/busses/i2c-qcom-cci.c
··· 450 450 ret = num; 451 451 452 452 err: 453 - pm_runtime_mark_last_busy(cci->dev); 454 453 pm_runtime_put_autosuspend(cci->dev); 455 454 456 455 return ret; ··· 507 508 static int __maybe_unused cci_resume(struct device *dev) 508 509 { 509 510 cci_resume_runtime(dev); 510 - pm_runtime_mark_last_busy(dev); 511 511 pm_request_autosuspend(dev); 512 512 513 513 return 0;
-1
drivers/i2c/busses/i2c-qcom-geni.c
··· 714 714 else 715 715 ret = geni_i2c_fifo_xfer(gi2c, msgs, num); 716 716 717 - pm_runtime_mark_last_busy(gi2c->se.dev); 718 717 pm_runtime_put_autosuspend(gi2c->se.dev); 719 718 gi2c->cur = NULL; 720 719 gi2c->err = 0;
-3
drivers/i2c/busses/i2c-qup.c
··· 1139 1139 ret = num; 1140 1140 out: 1141 1141 1142 - pm_runtime_mark_last_busy(qup->dev); 1143 1142 pm_runtime_put_autosuspend(qup->dev); 1144 1143 1145 1144 return ret; ··· 1623 1624 if (ret == 0) 1624 1625 ret = num; 1625 1626 out: 1626 - pm_runtime_mark_last_busy(qup->dev); 1627 1627 pm_runtime_put_autosuspend(qup->dev); 1628 1628 1629 1629 return ret; ··· 1989 1991 static int qup_i2c_resume(struct device *device) 1990 1992 { 1991 1993 qup_i2c_pm_resume_runtime(device); 1992 - pm_runtime_mark_last_busy(device); 1993 1994 pm_request_autosuspend(device); 1994 1995 return 0; 1995 1996 }
-2
drivers/i2c/busses/i2c-riic.c
··· 206 206 } 207 207 208 208 out: 209 - pm_runtime_mark_last_busy(dev); 210 209 pm_runtime_put_autosuspend(dev); 211 210 212 211 return riic->err ?: num; ··· 451 452 452 453 riic_clear_set_bit(riic, ICCR1_IICRST, 0, RIIC_ICCR1); 453 454 454 - pm_runtime_mark_last_busy(dev); 455 455 pm_runtime_put_autosuspend(dev); 456 456 return 0; 457 457 }
-1
drivers/i2c/busses/i2c-rzv2m.c
··· 372 372 ret = num; 373 373 374 374 out: 375 - pm_runtime_mark_last_busy(dev); 376 375 pm_runtime_put_autosuspend(dev); 377 376 378 377 return ret;
-2
drivers/i2c/busses/i2c-sprd.c
··· 302 302 ret = sprd_i2c_handle_msg(i2c_adap, &msgs[im++], 1); 303 303 304 304 err_msg: 305 - pm_runtime_mark_last_busy(i2c_dev->dev); 306 305 pm_runtime_put_autosuspend(i2c_dev->dev); 307 306 308 307 return ret < 0 ? ret : im; ··· 558 559 goto err_rpm_put; 559 560 } 560 561 561 - pm_runtime_mark_last_busy(i2c_dev->dev); 562 562 pm_runtime_put_autosuspend(i2c_dev->dev); 563 563 return 0; 564 564
-5
drivers/i2c/busses/i2c-stm32f7.c
··· 1761 1761 } 1762 1762 1763 1763 pm_free: 1764 - pm_runtime_mark_last_busy(i2c_dev->dev); 1765 1764 pm_runtime_put_autosuspend(i2c_dev->dev); 1766 1765 1767 1766 return (ret < 0) ? ret : num; ··· 1869 1870 } 1870 1871 1871 1872 pm_free: 1872 - pm_runtime_mark_last_busy(dev); 1873 1873 pm_runtime_put_autosuspend(dev); 1874 1874 return ret; 1875 1875 } ··· 1975 1977 if (!stm32f7_i2c_is_slave_registered(i2c_dev)) 1976 1978 stm32f7_i2c_enable_wakeup(i2c_dev, false); 1977 1979 1978 - pm_runtime_mark_last_busy(dev); 1979 1980 pm_runtime_put_autosuspend(dev); 1980 1981 1981 1982 return ret; ··· 2012 2015 stm32f7_i2c_enable_wakeup(i2c_dev, false); 2013 2016 } 2014 2017 2015 - pm_runtime_mark_last_busy(i2c_dev->dev); 2016 2018 pm_runtime_put_autosuspend(i2c_dev->dev); 2017 2019 2018 2020 return 0; ··· 2324 2328 2325 2329 dev_info(i2c_dev->dev, "STM32F7 I2C-%d bus adapter\n", adap->nr); 2326 2330 2327 - pm_runtime_mark_last_busy(i2c_dev->dev); 2328 2331 pm_runtime_put_autosuspend(i2c_dev->dev); 2329 2332 2330 2333 return 0;
+1
drivers/i2c/busses/i2c-usbio.c
··· 27 27 { "INTC1008" }, /* MTL */ 28 28 { "INTC10B3" }, /* ARL */ 29 29 { "INTC10B6" }, /* LNL */ 30 + { "INTC10D2" }, /* MTL-CVF */ 30 31 { "INTC10E3" }, /* PTL */ 31 32 { } 32 33 };
-1
drivers/i2c/busses/i2c-xiic.c
··· 1349 1349 mutex_unlock(&i2c->lock); 1350 1350 1351 1351 out: 1352 - pm_runtime_mark_last_busy(i2c->dev); 1353 1352 pm_runtime_put_autosuspend(i2c->dev); 1354 1353 return err; 1355 1354 }
+8 -4
drivers/mfd/ls2k-bmc-core.c
··· 469 469 return ret; 470 470 471 471 ddata = devm_kzalloc(&dev->dev, sizeof(*ddata), GFP_KERNEL); 472 - if (IS_ERR(ddata)) { 472 + if (!ddata) { 473 473 ret = -ENOMEM; 474 474 goto disable_pci; 475 475 } ··· 495 495 goto disable_pci; 496 496 } 497 497 498 - return devm_mfd_add_devices(&dev->dev, PLATFORM_DEVID_AUTO, 499 - ls2k_bmc_cells, ARRAY_SIZE(ls2k_bmc_cells), 500 - &dev->resource[0], 0, NULL); 498 + ret = devm_mfd_add_devices(&dev->dev, PLATFORM_DEVID_AUTO, 499 + ls2k_bmc_cells, ARRAY_SIZE(ls2k_bmc_cells), 500 + &dev->resource[0], 0, NULL); 501 + if (ret) 502 + goto disable_pci; 503 + 504 + return 0; 501 505 502 506 disable_pci: 503 507 pci_disable_device(dev);
+1 -1
drivers/misc/Kconfig
··· 106 106 107 107 config RPMB 108 108 tristate "RPMB partition interface" 109 - depends on MMC 109 + depends on MMC || SCSI_UFSHCD 110 110 help 111 111 Unified RPMB unit interface for RPMB capable devices such as eMMC and 112 112 UFS. Provides interface for in-kernel security controllers to access
+1 -1
drivers/misc/ocxl/afu_irq.c
··· 203 203 mutex_lock(&ctx->irq_lock); 204 204 irq = idr_find(&ctx->irq_idr, irq_id); 205 205 if (irq) { 206 - xd = irq_get_handler_data(irq->virq); 206 + xd = irq_get_chip_data(irq->virq); 207 207 addr = xd ? xd->trig_page : 0; 208 208 } 209 209 mutex_unlock(&ctx->irq_lock);
-42
drivers/mmc/core/block.c
··· 79 79 #define MMC_EXTRACT_INDEX_FROM_ARG(x) ((x & 0x00FF0000) >> 16) 80 80 #define MMC_EXTRACT_VALUE_FROM_ARG(x) ((x & 0x0000FF00) >> 8) 81 81 82 - /** 83 - * struct rpmb_frame - rpmb frame as defined by eMMC 5.1 (JESD84-B51) 84 - * 85 - * @stuff : stuff bytes 86 - * @key_mac : The authentication key or the message authentication 87 - * code (MAC) depending on the request/response type. 88 - * The MAC will be delivered in the last (or the only) 89 - * block of data. 90 - * @data : Data to be written or read by signed access. 91 - * @nonce : Random number generated by the host for the requests 92 - * and copied to the response by the RPMB engine. 93 - * @write_counter: Counter value for the total amount of the successful 94 - * authenticated data write requests made by the host. 95 - * @addr : Address of the data to be programmed to or read 96 - * from the RPMB. Address is the serial number of 97 - * the accessed block (half sector 256B). 98 - * @block_count : Number of blocks (half sectors, 256B) requested to be 99 - * read/programmed. 100 - * @result : Includes information about the status of the write counter 101 - * (valid, expired) and result of the access made to the RPMB. 102 - * @req_resp : Defines the type of request and response to/from the memory. 103 - * 104 - * The stuff bytes and big-endian properties are modeled to fit to the spec. 105 - */ 106 - struct rpmb_frame { 107 - u8 stuff[196]; 108 - u8 key_mac[32]; 109 - u8 data[256]; 110 - u8 nonce[16]; 111 - __be32 write_counter; 112 - __be16 addr; 113 - __be16 block_count; 114 - __be16 result; 115 - __be16 req_resp; 116 - } __packed; 117 - 118 - #define RPMB_PROGRAM_KEY 0x1 /* Program RPMB Authentication Key */ 119 - #define RPMB_GET_WRITE_COUNTER 0x2 /* Read RPMB write counter */ 120 - #define RPMB_WRITE_DATA 0x3 /* Write data to RPMB partition */ 121 - #define RPMB_READ_DATA 0x4 /* Read data from RPMB partition */ 122 - #define RPMB_RESULT_READ 0x5 /* Read result request (Internal) */ 123 - 124 82 #define RPMB_FRAME_SIZE sizeof(struct rpmb_frame) 125 83 #define CHECK_SIZE_NEQ(val) ((val) != sizeof(struct rpmb_frame)) 126 84 #define CHECK_SIZE_ALIGNED(val) IS_ALIGNED((val), sizeof(struct rpmb_frame))
+39 -29
drivers/net/can/m_can/m_can.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 // CAN bus driver for Bosch M_CAN controller 3 3 // Copyright (C) 2014 Freescale Semiconductor, Inc. 4 - // Dong Aisheng <b29396@freescale.com> 4 + // Dong Aisheng <aisheng.dong@nxp.com> 5 5 // Copyright (C) 2018-19 Texas Instruments Incorporated - http://www.ti.com/ 6 6 7 7 /* Bosch M_CAN user manual can be obtained from: ··· 812 812 u32 timestamp = 0; 813 813 814 814 switch (new_state) { 815 + case CAN_STATE_ERROR_ACTIVE: 816 + cdev->can.state = CAN_STATE_ERROR_ACTIVE; 817 + break; 815 818 case CAN_STATE_ERROR_WARNING: 816 819 /* error warning state */ 817 820 cdev->can.can_stats.error_warning++; ··· 844 841 __m_can_get_berr_counter(dev, &bec); 845 842 846 843 switch (new_state) { 844 + case CAN_STATE_ERROR_ACTIVE: 845 + cf->can_id |= CAN_ERR_CRTL | CAN_ERR_CNT; 846 + cf->data[1] = CAN_ERR_CRTL_ACTIVE; 847 + cf->data[6] = bec.txerr; 848 + cf->data[7] = bec.rxerr; 849 + break; 847 850 case CAN_STATE_ERROR_WARNING: 848 851 /* error warning state */ 849 852 cf->can_id |= CAN_ERR_CRTL | CAN_ERR_CNT; ··· 886 877 return 1; 887 878 } 888 879 889 - static int m_can_handle_state_errors(struct net_device *dev, u32 psr) 880 + static enum can_state 881 + m_can_state_get_by_psr(struct m_can_classdev *cdev) 882 + { 883 + u32 reg_psr; 884 + 885 + reg_psr = m_can_read(cdev, M_CAN_PSR); 886 + 887 + if (reg_psr & PSR_BO) 888 + return CAN_STATE_BUS_OFF; 889 + if (reg_psr & PSR_EP) 890 + return CAN_STATE_ERROR_PASSIVE; 891 + if (reg_psr & PSR_EW) 892 + return CAN_STATE_ERROR_WARNING; 893 + 894 + return CAN_STATE_ERROR_ACTIVE; 895 + } 896 + 897 + static int m_can_handle_state_errors(struct net_device *dev) 890 898 { 891 899 struct m_can_classdev *cdev = netdev_priv(dev); 892 - int work_done = 0; 900 + enum can_state new_state; 893 901 894 - if (psr & PSR_EW && cdev->can.state != CAN_STATE_ERROR_WARNING) { 895 - netdev_dbg(dev, "entered error warning state\n"); 896 - work_done += m_can_handle_state_change(dev, 897 - CAN_STATE_ERROR_WARNING); 898 - } 902 + new_state = m_can_state_get_by_psr(cdev); 903 + if (new_state == cdev->can.state) 904 + return 0; 899 905 900 - if (psr & PSR_EP && cdev->can.state != CAN_STATE_ERROR_PASSIVE) { 901 - netdev_dbg(dev, "entered error passive state\n"); 902 - work_done += m_can_handle_state_change(dev, 903 - CAN_STATE_ERROR_PASSIVE); 904 - } 905 - 906 - if (psr & PSR_BO && cdev->can.state != CAN_STATE_BUS_OFF) { 907 - netdev_dbg(dev, "entered error bus off state\n"); 908 - work_done += m_can_handle_state_change(dev, 909 - CAN_STATE_BUS_OFF); 910 - } 911 - 912 - return work_done; 906 + return m_can_handle_state_change(dev, new_state); 913 907 } 914 908 915 909 static void m_can_handle_other_err(struct net_device *dev, u32 irqstatus) ··· 1043 1031 } 1044 1032 1045 1033 if (irqstatus & IR_ERR_STATE) 1046 - work_done += m_can_handle_state_errors(dev, 1047 - m_can_read(cdev, M_CAN_PSR)); 1034 + work_done += m_can_handle_state_errors(dev); 1048 1035 1049 1036 if (irqstatus & IR_ERR_BUS_30X) 1050 1037 work_done += m_can_handle_bus_errors(dev, irqstatus, ··· 1617 1606 netdev_queue_set_dql_min_limit(netdev_get_tx_queue(cdev->net, 0), 1618 1607 cdev->tx_max_coalesced_frames); 1619 1608 1620 - cdev->can.state = CAN_STATE_ERROR_ACTIVE; 1609 + cdev->can.state = m_can_state_get_by_psr(cdev); 1621 1610 1622 1611 m_can_enable_all_interrupts(cdev); 1623 1612 ··· 2503 2492 } 2504 2493 2505 2494 m_can_clk_stop(cdev); 2495 + cdev->can.state = CAN_STATE_SLEEPING; 2506 2496 } 2507 2497 2508 2498 pinctrl_pm_select_sleep_state(dev); 2509 - 2510 - cdev->can.state = CAN_STATE_SLEEPING; 2511 2499 2512 2500 return ret; 2513 2501 } ··· 2519 2509 int ret = 0; 2520 2510 2521 2511 pinctrl_pm_select_default_state(dev); 2522 - 2523 - cdev->can.state = CAN_STATE_ERROR_ACTIVE; 2524 2512 2525 2513 if (netif_running(ndev)) { 2526 2514 ret = m_can_clk_start(cdev); ··· 2537 2529 if (cdev->ops->init) 2538 2530 ret = cdev->ops->init(cdev); 2539 2531 2532 + cdev->can.state = m_can_state_get_by_psr(cdev); 2533 + 2540 2534 m_can_write(cdev, M_CAN_IE, cdev->active_interrupts); 2541 2535 } else { 2542 2536 ret = m_can_start(ndev); ··· 2556 2546 } 2557 2547 EXPORT_SYMBOL_GPL(m_can_class_resume); 2558 2548 2559 - MODULE_AUTHOR("Dong Aisheng <b29396@freescale.com>"); 2549 + MODULE_AUTHOR("Dong Aisheng <aisheng.dong@nxp.com>"); 2560 2550 MODULE_AUTHOR("Dan Murphy <dmurphy@ti.com>"); 2561 2551 MODULE_LICENSE("GPL v2"); 2562 2552 MODULE_DESCRIPTION("CAN bus driver for Bosch M_CAN controller");
+3 -3
drivers/net/can/m_can/m_can_platform.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 // IOMapped CAN bus driver for Bosch M_CAN controller 3 3 // Copyright (C) 2014 Freescale Semiconductor, Inc. 4 - // Dong Aisheng <b29396@freescale.com> 4 + // Dong Aisheng <aisheng.dong@nxp.com> 5 5 // 6 6 // Copyright (C) 2018-19 Texas Instruments Incorporated - http://www.ti.com/ 7 7 ··· 180 180 struct m_can_classdev *mcan_class = &priv->cdev; 181 181 182 182 m_can_class_unregister(mcan_class); 183 - 183 + pm_runtime_disable(mcan_class->dev); 184 184 m_can_class_free_dev(mcan_class->net); 185 185 } 186 186 ··· 236 236 237 237 module_platform_driver(m_can_plat_driver); 238 238 239 - MODULE_AUTHOR("Dong Aisheng <b29396@freescale.com>"); 239 + MODULE_AUTHOR("Dong Aisheng <aisheng.dong@nxp.com>"); 240 240 MODULE_AUTHOR("Dan Murphy <dmurphy@ti.com>"); 241 241 MODULE_LICENSE("GPL v2"); 242 242 MODULE_DESCRIPTION("M_CAN driver for IO Mapped Bosch controllers");
+11 -12
drivers/net/can/usb/gs_usb.c
··· 289 289 #define GS_MAX_RX_URBS 30 290 290 #define GS_NAPI_WEIGHT 32 291 291 292 - /* Maximum number of interfaces the driver supports per device. 293 - * Current hardware only supports 3 interfaces. The future may vary. 294 - */ 295 - #define GS_MAX_INTF 3 296 - 297 292 struct gs_tx_context { 298 293 struct gs_can *dev; 299 294 unsigned int echo_id; ··· 319 324 320 325 /* usb interface struct */ 321 326 struct gs_usb { 322 - struct gs_can *canch[GS_MAX_INTF]; 323 327 struct usb_anchor rx_submitted; 324 328 struct usb_device *udev; 325 329 ··· 330 336 331 337 unsigned int hf_size_rx; 332 338 u8 active_channels; 339 + u8 channel_cnt; 333 340 334 341 unsigned int pipe_in; 335 342 unsigned int pipe_out; 343 + struct gs_can *canch[] __counted_by(channel_cnt); 336 344 }; 337 345 338 346 /* 'allocate' a tx context. ··· 595 599 } 596 600 597 601 /* device reports out of range channel id */ 598 - if (hf->channel >= GS_MAX_INTF) 602 + if (hf->channel >= parent->channel_cnt) 599 603 goto device_detach; 600 604 601 605 dev = parent->canch[hf->channel]; ··· 695 699 /* USB failure take down all interfaces */ 696 700 if (rc == -ENODEV) { 697 701 device_detach: 698 - for (rc = 0; rc < GS_MAX_INTF; rc++) { 702 + for (rc = 0; rc < parent->channel_cnt; rc++) { 699 703 if (parent->canch[rc]) 700 704 netif_device_detach(parent->canch[rc]->netdev); 701 705 } ··· 1245 1249 1246 1250 netdev->flags |= IFF_ECHO; /* we support full roundtrip echo */ 1247 1251 netdev->dev_id = channel; 1252 + netdev->dev_port = channel; 1248 1253 1249 1254 /* dev setup */ 1250 1255 strcpy(dev->bt_const.name, KBUILD_MODNAME); ··· 1457 1460 icount = dconf.icount + 1; 1458 1461 dev_info(&intf->dev, "Configuring for %u interfaces\n", icount); 1459 1462 1460 - if (icount > GS_MAX_INTF) { 1463 + if (icount > type_max(parent->channel_cnt)) { 1461 1464 dev_err(&intf->dev, 1462 1465 "Driver cannot handle more that %u CAN interfaces\n", 1463 - GS_MAX_INTF); 1466 + type_max(parent->channel_cnt)); 1464 1467 return -EINVAL; 1465 1468 } 1466 1469 1467 - parent = kzalloc(sizeof(*parent), GFP_KERNEL); 1470 + parent = kzalloc(struct_size(parent, canch, icount), GFP_KERNEL); 1468 1471 if (!parent) 1469 1472 return -ENOMEM; 1473 + 1474 + parent->channel_cnt = icount; 1470 1475 1471 1476 init_usb_anchor(&parent->rx_submitted); 1472 1477 ··· 1530 1531 return; 1531 1532 } 1532 1533 1533 - for (i = 0; i < GS_MAX_INTF; i++) 1534 1535 if (parent->canch[i]) 1535 1536 gs_destroy_candev(parent->canch[i]); 1536 1537
+15 -1
drivers/net/ethernet/airoha/airoha_eth.c
··· 1873 1873 #endif 1874 1874 } 1875 1875 1876 + static bool airoha_dev_tx_queue_busy(struct airoha_queue *q, u32 nr_frags) 1877 + { 1878 + u32 tail = q->tail <= q->head ? q->tail + q->ndesc : q->tail; 1879 + u32 index = q->head + nr_frags; 1880 + 1881 + /* completion napi can free out-of-order tx descriptors if hw QoS is 1882 + * enabled and packets with different priorities are queued to the same 1883 + * DMA ring. Take into account possible out-of-order reports checking 1884 + * if the tx queue is full using circular buffer head/tail pointers 1885 + * instead of the number of queued packets. 1886 + */ 1887 + return index >= tail; 1888 + } 1889 + 1876 1890 static netdev_tx_t airoha_dev_xmit(struct sk_buff *skb, 1877 1891 struct net_device *dev) 1878 1892 { ··· 1940 1926 txq = netdev_get_tx_queue(dev, qid); 1941 1927 nr_frags = 1 + skb_shinfo(skb)->nr_frags; 1942 1928 1943 - if (q->queued + nr_frags > q->ndesc) { 1929 + if (airoha_dev_tx_queue_busy(q, nr_frags)) { 1944 1930 /* not enough space in the queue */ 1945 1931 netif_tx_stop_queue(txq); 1946 1932 spin_unlock_bh(&q->lock);
-1
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
··· 1080 1080 1081 1081 static int xgbe_phy_reset(struct xgbe_prv_data *pdata) 1082 1082 { 1083 - pdata->phy_link = -1; 1084 1083 pdata->phy_speed = SPEED_UNKNOWN; 1085 1084 1086 1085 return pdata->phy_if.phy_reset(pdata);
+1
drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
··· 1555 1555 pdata->phy.duplex = DUPLEX_FULL; 1556 1556 } 1557 1557 1558 + pdata->phy_link = 0; 1558 1559 pdata->phy.link = 0; 1559 1560 1560 1561 pdata->phy.pause_autoneg = pdata->pause_autoneg;
+1 -4
drivers/net/ethernet/broadcom/tg3.c
··· 5803 5803 u32 current_speed = SPEED_UNKNOWN; 5804 5804 u8 current_duplex = DUPLEX_UNKNOWN; 5805 5805 bool current_link_up = false; 5806 - u32 local_adv, remote_adv, sgsr; 5806 + u32 local_adv = 0, remote_adv = 0, sgsr; 5807 5807 5808 5808 if ((tg3_asic_rev(tp) == ASIC_REV_5719 || 5809 5809 tg3_asic_rev(tp) == ASIC_REV_5720) && ··· 5943 5943 current_duplex = DUPLEX_FULL; 5944 5944 else 5945 5945 current_duplex = DUPLEX_HALF; 5946 - 5947 - local_adv = 0; 5948 - remote_adv = 0; 5949 5946 5950 5947 if (bmcr & BMCR_ANENABLE) { 5951 5948 u32 common;
+16 -7
drivers/net/ethernet/dlink/dl2k.c
··· 508 508 for (i = 0; i < RX_RING_SIZE; i++) { 509 509 /* Allocated fixed size of skbuff */ 510 510 struct sk_buff *skb; 511 + dma_addr_t addr; 511 512 512 513 skb = netdev_alloc_skb_ip_align(dev, np->rx_buf_sz); 513 514 np->rx_skbuff[i] = skb; 514 - if (!skb) { 515 - free_list(dev); 516 - return -ENOMEM; 517 - } 515 + if (!skb) 516 + goto err_free_list; 517 + 518 + addr = dma_map_single(&np->pdev->dev, skb->data, 519 + np->rx_buf_sz, DMA_FROM_DEVICE); 520 + if (dma_mapping_error(&np->pdev->dev, addr)) 521 + goto err_kfree_skb; 518 522 519 523 np->rx_ring[i].next_desc = cpu_to_le64(np->rx_ring_dma + 520 524 ((i + 1) % RX_RING_SIZE) * 521 525 sizeof(struct netdev_desc)); 522 526 /* Rubicon now supports 40 bits of addressing space. */ 523 - np->rx_ring[i].fraginfo = 524 - cpu_to_le64(dma_map_single(&np->pdev->dev, skb->data, 525 - np->rx_buf_sz, DMA_FROM_DEVICE)); 527 + np->rx_ring[i].fraginfo = cpu_to_le64(addr); 526 528 np->rx_ring[i].fraginfo |= cpu_to_le64((u64)np->rx_buf_sz << 48); 527 529 } 528 530 529 531 return 0; 532 + 533 + err_kfree_skb: 534 + dev_kfree_skb(np->rx_skbuff[i]); 535 + np->rx_skbuff[i] = NULL; 536 + err_free_list: 537 + free_list(dev); 538 + return -ENOMEM; 530 539 } 531 540 532 541 static void rio_hw_init(struct net_device *dev)
+2
drivers/net/ethernet/google/gve/gve.h
··· 100 100 */ 101 101 #define GVE_DQO_QPL_ONDEMAND_ALLOC_THRESHOLD 96 102 102 103 + #define GVE_DQO_RX_HWTSTAMP_VALID 0x1 104 + 103 105 /* Each slot in the desc ring has a 1:1 mapping to a slot in the data ring */ 104 106 struct gve_rx_desc_queue { 105 107 struct gve_rx_desc *desc_ring; /* the descriptor ring */
+2 -1
drivers/net/ethernet/google/gve/gve_desc_dqo.h
··· 236 236 237 237 u8 status_error1; 238 238 239 - __le16 reserved5; 239 + u8 reserved5; 240 + u8 ts_sub_nsecs_low; 240 241 __le16 buf_id; /* Buffer ID which was sent on the buffer queue. */ 241 242 242 243 union {
+11 -5
drivers/net/ethernet/google/gve/gve_rx_dqo.c
··· 456 456 * Note that this means if the time delta between packet reception and the last 457 457 * clock read is greater than ~2 seconds, this will provide invalid results. 458 458 */ 459 - static void gve_rx_skb_hwtstamp(struct gve_rx_ring *rx, u32 hwts) 459 + static void gve_rx_skb_hwtstamp(struct gve_rx_ring *rx, 460 + const struct gve_rx_compl_desc_dqo *desc) 460 461 { 461 462 u64 last_read = READ_ONCE(rx->gve->last_sync_nic_counter); 462 463 struct sk_buff *skb = rx->ctx.skb_head; 463 - u32 low = (u32)last_read; 464 - s32 diff = hwts - low; 464 + u32 ts, low; 465 + s32 diff; 465 466 466 - skb_hwtstamps(skb)->hwtstamp = ns_to_ktime(last_read + diff); 467 + if (desc->ts_sub_nsecs_low & GVE_DQO_RX_HWTSTAMP_VALID) { 468 + ts = le32_to_cpu(desc->ts); 469 + low = (u32)last_read; 470 + diff = ts - low; 471 + skb_hwtstamps(skb)->hwtstamp = ns_to_ktime(last_read + diff); 472 + } 467 473 } 468 474 469 475 static void gve_rx_free_skb(struct napi_struct *napi, struct gve_rx_ring *rx) ··· 950 944 gve_rx_skb_csum(rx->ctx.skb_head, desc, ptype); 951 945 952 946 if (rx->gve->ts_config.rx_filter == HWTSTAMP_FILTER_ALL) 953 - gve_rx_skb_hwtstamp(rx, le32_to_cpu(desc->ts)); 947 + gve_rx_skb_hwtstamp(rx, desc); 954 948 955 949 /* RSC packets must set gso_size otherwise the TCP stack will complain 956 950 * that packets are larger than MTU.
+3
drivers/net/ethernet/intel/idpf/idpf_ptp.c
··· 863 863 u64_stats_inc(&vport->tstamp_stats.flushed); 864 864 865 865 list_del(&ptp_tx_tstamp->list_member); 866 + if (ptp_tx_tstamp->skb) 867 + consume_skb(ptp_tx_tstamp->skb); 868 + 866 869 kfree(ptp_tx_tstamp); 867 870 } 868 871 u64_stats_update_end(&vport->tstamp_stats.stats_sync);
+1
drivers/net/ethernet/intel/idpf/idpf_virtchnl_ptp.c
··· 517 517 shhwtstamps.hwtstamp = ns_to_ktime(tstamp); 518 518 skb_tstamp_tx(ptp_tx_tstamp->skb, &shhwtstamps); 519 519 consume_skb(ptp_tx_tstamp->skb); 520 + ptp_tx_tstamp->skb = NULL; 520 521 521 522 list_add(&ptp_tx_tstamp->list_member, 522 523 &tx_tstamp_caps->latches_free);
+2 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 12101 12101 12102 12102 devl_port_unregister(&adapter->devlink_port); 12103 12103 devl_unlock(adapter->devlink); 12104 - devlink_free(adapter->devlink); 12105 12104 12106 12105 ixgbe_stop_ipsec_offload(adapter); 12107 12106 ixgbe_clear_interrupt_scheme(adapter); ··· 12136 12137 12137 12138 if (disable_dev) 12138 12139 pci_disable_device(pdev); 12140 + 12141 + devlink_free(adapter->devlink); 12139 12142 } 12140 12143 12141 12144 /**
+15
drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h
··· 50 50 ixgbe_mbox_api_12, /* API version 1.2, linux/freebsd VF driver */ 51 51 ixgbe_mbox_api_13, /* API version 1.3, linux/freebsd VF driver */ 52 52 ixgbe_mbox_api_14, /* API version 1.4, linux/freebsd VF driver */ 53 + ixgbe_mbox_api_15, /* API version 1.5, linux/freebsd VF driver */ 54 + ixgbe_mbox_api_16, /* API version 1.6, linux/freebsd VF driver */ 55 + ixgbe_mbox_api_17, /* API version 1.7, linux/freebsd VF driver */ 53 56 /* This value should always be last */ 54 57 ixgbe_mbox_api_unknown, /* indicates that API version is not known */ 55 58 }; ··· 89 86 90 87 #define IXGBE_VF_GET_LINK_STATE 0x10 /* get vf link state */ 91 88 89 + /* mailbox API, version 1.6 VF requests */ 90 + #define IXGBE_VF_GET_PF_LINK_STATE 0x11 /* request PF to send link info */ 91 + 92 + /* mailbox API, version 1.7 VF requests */ 93 + #define IXGBE_VF_FEATURES_NEGOTIATE 0x12 /* get features supported by PF */ 94 + 92 95 /* length of permanent address message returned from PF */ 93 96 #define IXGBE_VF_PERMADDR_MSG_LEN 4 94 97 /* word in permanent address message with the current multicast type */ ··· 104 95 105 96 #define IXGBE_VF_MBX_INIT_TIMEOUT 2000 /* number of retries on mailbox */ 106 97 #define IXGBE_VF_MBX_INIT_DELAY 500 /* microseconds between retries */ 98 + 99 + /* features negotiated between PF/VF */ 100 + #define IXGBEVF_PF_SUP_IPSEC BIT(0) 101 + #define IXGBEVF_PF_SUP_ESX_MBX BIT(1) 102 + 103 + #define IXGBE_SUPPORTED_FEATURES IXGBEVF_PF_SUP_IPSEC 107 104 108 105 struct ixgbe_hw; 109 106
+79
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
··· 510 510 case ixgbe_mbox_api_12: 511 511 case ixgbe_mbox_api_13: 512 512 case ixgbe_mbox_api_14: 513 + case ixgbe_mbox_api_16: 514 + case ixgbe_mbox_api_17: 513 515 /* Version 1.1 supports jumbo frames on VFs if PF has 514 516 * jumbo frames enabled which means legacy VFs are 515 517 * disabled ··· 1048 1046 case ixgbe_mbox_api_12: 1049 1047 case ixgbe_mbox_api_13: 1050 1048 case ixgbe_mbox_api_14: 1049 + case ixgbe_mbox_api_16: 1050 + case ixgbe_mbox_api_17: 1051 1051 adapter->vfinfo[vf].vf_api = api; 1052 1052 return 0; 1053 1053 default: ··· 1076 1072 case ixgbe_mbox_api_12: 1077 1073 case ixgbe_mbox_api_13: 1078 1074 case ixgbe_mbox_api_14: 1075 + case ixgbe_mbox_api_16: 1076 + case ixgbe_mbox_api_17: 1079 1077 break; 1080 1078 default: 1081 1079 return -1; ··· 1118 1112 1119 1113 /* verify the PF is supporting the correct API */ 1120 1114 switch (adapter->vfinfo[vf].vf_api) { 1115 + case ixgbe_mbox_api_17: 1116 + case ixgbe_mbox_api_16: 1121 1117 case ixgbe_mbox_api_14: 1122 1118 case ixgbe_mbox_api_13: 1123 1119 case ixgbe_mbox_api_12: ··· 1153 1145 1154 1146 /* verify the PF is supporting the correct API */ 1155 1147 switch (adapter->vfinfo[vf].vf_api) { 1148 + case ixgbe_mbox_api_17: 1149 + case ixgbe_mbox_api_16: 1156 1150 case ixgbe_mbox_api_14: 1157 1151 case ixgbe_mbox_api_13: 1158 1152 case ixgbe_mbox_api_12: ··· 1184 1174 fallthrough; 1185 1175 case ixgbe_mbox_api_13: 1186 1176 case ixgbe_mbox_api_14: 1177 + case ixgbe_mbox_api_16: 1178 + case ixgbe_mbox_api_17: 1187 1179 break; 1188 1180 default: 1189 1181 return -EOPNOTSUPP; ··· 1256 1244 case ixgbe_mbox_api_12: 1257 1245 case ixgbe_mbox_api_13: 1258 1246 case ixgbe_mbox_api_14: 1247 + case ixgbe_mbox_api_16: 1248 + case ixgbe_mbox_api_17: 1259 1249 break; 1260 1250 default: 1261 1251 return -EOPNOTSUPP; 1262 1252 } 1263 1253 1264 1254 *link_state = adapter->vfinfo[vf].link_enable; 1255 + 1256 + return 0; 1257 + } 1258 + 1259 + /** 1260 + * ixgbe_send_vf_link_status - send link status data to VF 1261 + * @adapter: pointer to adapter struct 1262 + * @msgbuf: pointer to message buffers 1263 + * @vf: VF identifier 1264 + * 1265 + * Reply for IXGBE_VF_GET_PF_LINK_STATE mbox command sending link status data. 1266 + * 1267 + * Return: 0 on success or -EOPNOTSUPP when operation is not supported. 1268 + */ 1269 + static int ixgbe_send_vf_link_status(struct ixgbe_adapter *adapter, 1270 + u32 *msgbuf, u32 vf) 1271 + { 1272 + struct ixgbe_hw *hw = &adapter->hw; 1273 + 1274 + switch (adapter->vfinfo[vf].vf_api) { 1275 + case ixgbe_mbox_api_16: 1276 + case ixgbe_mbox_api_17: 1277 + if (hw->mac.type != ixgbe_mac_e610) 1278 + return -EOPNOTSUPP; 1279 + break; 1280 + default: 1281 + return -EOPNOTSUPP; 1282 + } 1283 + /* Simply provide stored values as watchdog & link status events take 1284 + * care of its freshness. 1285 + */ 1286 + msgbuf[1] = adapter->link_speed; 1287 + msgbuf[2] = adapter->link_up; 1288 + 1289 + return 0; 1290 + } 1291 + 1292 + /** 1293 + * ixgbe_negotiate_vf_features - negotiate supported features with VF driver 1294 + * @adapter: pointer to adapter struct 1295 + * @msgbuf: pointer to message buffers 1296 + * @vf: VF identifier 1297 + * 1298 + * Return: 0 on success or -EOPNOTSUPP when operation is not supported. 1299 + */ 1300 + static int ixgbe_negotiate_vf_features(struct ixgbe_adapter *adapter, 1301 + u32 *msgbuf, u32 vf) 1302 + { 1303 + u32 features = msgbuf[1]; 1304 + 1305 + switch (adapter->vfinfo[vf].vf_api) { 1306 + case ixgbe_mbox_api_17: 1307 + break; 1308 + default: 1309 + return -EOPNOTSUPP; 1310 + } 1311 + 1312 + features &= IXGBE_SUPPORTED_FEATURES; 1313 + msgbuf[1] = features; 1265 1314 1266 1315 return 0; 1267 1316 } ··· 1400 1327 break; 1401 1328 case IXGBE_VF_IPSEC_DEL: 1402 1329 retval = ixgbe_ipsec_vf_del_sa(adapter, msgbuf, vf); 1330 + break; 1331 + case IXGBE_VF_GET_PF_LINK_STATE: 1332 + retval = ixgbe_send_vf_link_status(adapter, msgbuf, vf); 1333 + break; 1334 + case IXGBE_VF_FEATURES_NEGOTIATE: 1335 + retval = ixgbe_negotiate_vf_features(adapter, msgbuf, vf); 1403 1336 break; 1404 1337 default: 1405 1338 e_err(drv, "Unhandled Msg %8.8x\n", msgbuf[0]);
+1
drivers/net/ethernet/intel/ixgbevf/defines.h
··· 28 28 29 29 /* Link speed */ 30 30 typedef u32 ixgbe_link_speed; 31 + #define IXGBE_LINK_SPEED_UNKNOWN 0 31 32 #define IXGBE_LINK_SPEED_1GB_FULL 0x0020 32 33 #define IXGBE_LINK_SPEED_10GB_FULL 0x0080 33 34 #define IXGBE_LINK_SPEED_100_FULL 0x0008
+10
drivers/net/ethernet/intel/ixgbevf/ipsec.c
··· 273 273 adapter = netdev_priv(dev); 274 274 ipsec = adapter->ipsec; 275 275 276 + if (!(adapter->pf_features & IXGBEVF_PF_SUP_IPSEC)) 277 + return -EOPNOTSUPP; 278 + 276 279 if (xs->id.proto != IPPROTO_ESP && xs->id.proto != IPPROTO_AH) { 277 280 NL_SET_ERR_MSG_MOD(extack, "Unsupported protocol for IPsec offload"); 278 281 return -EINVAL; ··· 407 404 408 405 adapter = netdev_priv(dev); 409 406 ipsec = adapter->ipsec; 407 + 408 + if (!(adapter->pf_features & IXGBEVF_PF_SUP_IPSEC)) 409 + return; 410 410 411 411 if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN) { 412 412 sa_idx = xs->xso.offload_handle - IXGBE_IPSEC_BASE_RX_INDEX; ··· 618 612 size_t size; 619 613 620 614 switch (adapter->hw.api_version) { 615 + case ixgbe_mbox_api_17: 616 + if (!(adapter->pf_features & IXGBEVF_PF_SUP_IPSEC)) 617 + return; 618 + break; 621 619 case ixgbe_mbox_api_14: 622 620 break; 623 621 default:
+7
drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
··· 363 363 struct ixgbe_hw hw; 364 364 u16 msg_enable; 365 365 366 + u32 pf_features; 367 + #define IXGBEVF_PF_SUP_IPSEC BIT(0) 368 + #define IXGBEVF_PF_SUP_ESX_MBX BIT(1) 369 + 370 + #define IXGBEVF_SUPPORTED_FEATURES (IXGBEVF_PF_SUP_IPSEC | \ 371 + IXGBEVF_PF_SUP_ESX_MBX) 372 + 366 373 struct ixgbevf_hw_stats stats; 367 374 368 375 unsigned long state;
+33 -1
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
··· 2271 2271 adapter->stats.base_vfmprc = adapter->stats.last_vfmprc; 2272 2272 } 2273 2273 2274 + /** 2275 + * ixgbevf_set_features - Set features supported by PF 2276 + * @adapter: pointer to the adapter struct 2277 + * 2278 + * Negotiate with PF supported features and then set pf_features accordingly. 2279 + */ 2280 + static void ixgbevf_set_features(struct ixgbevf_adapter *adapter) 2281 + { 2282 + u32 *pf_features = &adapter->pf_features; 2283 + struct ixgbe_hw *hw = &adapter->hw; 2284 + int err; 2285 + 2286 + err = hw->mac.ops.negotiate_features(hw, pf_features); 2287 + if (err && err != -EOPNOTSUPP) 2288 + netdev_dbg(adapter->netdev, 2289 + "PF feature negotiation failed.\n"); 2290 + 2291 + /* Address also pre API 1.7 cases */ 2292 + if (hw->api_version == ixgbe_mbox_api_14) 2293 + *pf_features |= IXGBEVF_PF_SUP_IPSEC; 2294 + else if (hw->api_version == ixgbe_mbox_api_15) 2295 + *pf_features |= IXGBEVF_PF_SUP_ESX_MBX; 2296 + } 2297 + 2274 2298 static void ixgbevf_negotiate_api(struct ixgbevf_adapter *adapter) 2275 2299 { 2276 2300 struct ixgbe_hw *hw = &adapter->hw; 2277 2301 static const int api[] = { 2302 + ixgbe_mbox_api_17, 2303 + ixgbe_mbox_api_16, 2278 2304 ixgbe_mbox_api_15, 2279 2305 ixgbe_mbox_api_14, 2280 2306 ixgbe_mbox_api_13, ··· 2320 2294 idx++; 2321 2295 } 2322 2296 2323 - if (hw->api_version >= ixgbe_mbox_api_15) { 2297 + ixgbevf_set_features(adapter); 2298 + 2299 + if (adapter->pf_features & IXGBEVF_PF_SUP_ESX_MBX) { 2324 2300 hw->mbx.ops.init_params(hw); 2325 2301 memcpy(&hw->mbx.ops, &ixgbevf_mbx_ops, 2326 2302 sizeof(struct ixgbe_mbx_operations)); ··· 2679 2651 case ixgbe_mbox_api_13: 2680 2652 case ixgbe_mbox_api_14: 2681 2653 case ixgbe_mbox_api_15: 2654 + case ixgbe_mbox_api_16: 2655 + case ixgbe_mbox_api_17: 2682 2656 if (adapter->xdp_prog && 2683 2657 hw->mac.max_tx_queues == rss) 2684 2658 rss = rss > 3 ? 2 : 1; ··· 4675 4645 case ixgbe_mbox_api_13: 4676 4646 case ixgbe_mbox_api_14: 4677 4647 case ixgbe_mbox_api_15: 4648 + case ixgbe_mbox_api_16: 4649 + case ixgbe_mbox_api_17: 4678 4650 netdev->max_mtu = IXGBE_MAX_JUMBO_FRAME_SIZE - 4679 4651 (ETH_HLEN + ETH_FCS_LEN); 4680 4652 break;
+8
drivers/net/ethernet/intel/ixgbevf/mbx.h
··· 66 66 ixgbe_mbox_api_13, /* API version 1.3, linux/freebsd VF driver */ 67 67 ixgbe_mbox_api_14, /* API version 1.4, linux/freebsd VF driver */ 68 68 ixgbe_mbox_api_15, /* API version 1.5, linux/freebsd VF driver */ 69 + ixgbe_mbox_api_16, /* API version 1.6, linux/freebsd VF driver */ 70 + ixgbe_mbox_api_17, /* API version 1.7, linux/freebsd VF driver */ 69 71 /* This value should always be last */ 70 72 ixgbe_mbox_api_unknown, /* indicates that API version is not known */ 71 73 }; ··· 103 101 #define IXGBE_VF_IPSEC_DEL 0x0e 104 102 105 103 #define IXGBE_VF_GET_LINK_STATE 0x10 /* get vf link state */ 104 + 105 + /* mailbox API, version 1.6 VF requests */ 106 + #define IXGBE_VF_GET_PF_LINK_STATE 0x11 /* request PF to send link info */ 107 + 108 + /* mailbox API, version 1.7 VF requests */ 109 + #define IXGBE_VF_FEATURES_NEGOTIATE 0x12 /* get features supported by PF*/ 106 110 107 111 /* length of permanent address message returned from PF */ 108 112 #define IXGBE_VF_PERMADDR_MSG_LEN 4
+150 -32
drivers/net/ethernet/intel/ixgbevf/vf.c
··· 313 313 * is not supported for this device type. 314 314 */ 315 315 switch (hw->api_version) { 316 + case ixgbe_mbox_api_17: 317 + case ixgbe_mbox_api_16: 316 318 case ixgbe_mbox_api_15: 317 319 case ixgbe_mbox_api_14: 318 320 case ixgbe_mbox_api_13: ··· 384 382 * or if the operation is not supported for this device type. 385 383 */ 386 384 switch (hw->api_version) { 385 + case ixgbe_mbox_api_17: 386 + case ixgbe_mbox_api_16: 387 387 case ixgbe_mbox_api_15: 388 388 case ixgbe_mbox_api_14: 389 389 case ixgbe_mbox_api_13: ··· 556 552 case ixgbe_mbox_api_13: 557 553 case ixgbe_mbox_api_14: 558 554 case ixgbe_mbox_api_15: 555 + case ixgbe_mbox_api_16: 556 + case ixgbe_mbox_api_17: 559 557 break; 560 558 default: 561 559 return -EOPNOTSUPP; ··· 631 625 } 632 626 633 627 /** 628 + * ixgbevf_get_pf_link_state - Get PF's link status 629 + * @hw: pointer to the HW structure 630 + * @speed: link speed 631 + * @link_up: indicate if link is up/down 632 + * 633 + * Ask PF to provide link_up state and speed of the link. 634 + * 635 + * Return: IXGBE_ERR_MBX in the case of mailbox error, 636 + * -EOPNOTSUPP if the op is not supported or 0 on success. 637 + */ 638 + static int ixgbevf_get_pf_link_state(struct ixgbe_hw *hw, ixgbe_link_speed *speed, 639 + bool *link_up) 640 + { 641 + u32 msgbuf[3] = {}; 642 + int err; 643 + 644 + switch (hw->api_version) { 645 + case ixgbe_mbox_api_16: 646 + case ixgbe_mbox_api_17: 647 + break; 648 + default: 649 + return -EOPNOTSUPP; 650 + } 651 + 652 + msgbuf[0] = IXGBE_VF_GET_PF_LINK_STATE; 653 + 654 + err = ixgbevf_write_msg_read_ack(hw, msgbuf, msgbuf, 655 + ARRAY_SIZE(msgbuf)); 656 + if (err || (msgbuf[0] & IXGBE_VT_MSGTYPE_FAILURE)) { 657 + err = IXGBE_ERR_MBX; 658 + *speed = IXGBE_LINK_SPEED_UNKNOWN; 659 + /* No need to set @link_up to false as it will be done by 660 + * ixgbe_check_mac_link_vf(). 661 + */ 662 + } else { 663 + *speed = msgbuf[1]; 664 + *link_up = msgbuf[2]; 665 + } 666 + 667 + return err; 668 + } 669 + 670 + /** 671 + * ixgbevf_negotiate_features_vf - negotiate supported features with PF driver 672 + * @hw: pointer to the HW structure 673 + * @pf_features: bitmask of features supported by PF 674 + * 675 + * Return: IXGBE_ERR_MBX in the case of mailbox error, 676 + * -EOPNOTSUPP if the op is not supported or 0 on success. 677 + */ 678 + static int ixgbevf_negotiate_features_vf(struct ixgbe_hw *hw, u32 *pf_features) 679 + { 680 + u32 msgbuf[2] = {}; 681 + int err; 682 + 683 + switch (hw->api_version) { 684 + case ixgbe_mbox_api_17: 685 + break; 686 + default: 687 + return -EOPNOTSUPP; 688 + } 689 + 690 + msgbuf[0] = IXGBE_VF_FEATURES_NEGOTIATE; 691 + msgbuf[1] = IXGBEVF_SUPPORTED_FEATURES; 692 + 693 + err = ixgbevf_write_msg_read_ack(hw, msgbuf, msgbuf, 694 + ARRAY_SIZE(msgbuf)); 695 + 696 + if (err || (msgbuf[0] & IXGBE_VT_MSGTYPE_FAILURE)) { 697 + err = IXGBE_ERR_MBX; 698 + *pf_features = 0x0; 699 + } else { 700 + *pf_features = msgbuf[1]; 701 + } 702 + 703 + return err; 704 + } 705 + 706 + /** 634 707 * ixgbevf_set_vfta_vf - Set/Unset VLAN filter table address 635 708 * @hw: pointer to the HW structure 636 709 * @vlan: 12 bit VLAN ID ··· 741 656 742 657 mbx_err: 743 658 return err; 659 + } 660 + 661 + /** 662 + * ixgbe_read_vflinks - Read VFLINKS register 663 + * @hw: pointer to the HW structure 664 + * @speed: link speed 665 + * @link_up: indicate if link is up/down 666 + * 667 + * Get linkup status and link speed from the VFLINKS register. 668 + */ 669 + static void ixgbe_read_vflinks(struct ixgbe_hw *hw, ixgbe_link_speed *speed, 670 + bool *link_up) 671 + { 672 + u32 vflinks = IXGBE_READ_REG(hw, IXGBE_VFLINKS); 673 + 674 + /* if link status is down no point in checking to see if PF is up */ 675 + if (!(vflinks & IXGBE_LINKS_UP)) { 676 + *link_up = false; 677 + return; 678 + } 679 + 680 + /* for SFP+ modules and DA cables on 82599 it can take up to 500usecs 681 + * before the link status is correct 682 + */ 683 + if (hw->mac.type == ixgbe_mac_82599_vf) { 684 + for (int i = 0; i < 5; i++) { 685 + udelay(100); 686 + vflinks = IXGBE_READ_REG(hw, IXGBE_VFLINKS); 687 + 688 + if (!(vflinks & IXGBE_LINKS_UP)) { 689 + *link_up = false; 690 + return; 691 + } 692 + } 693 + } 694 + 695 + /* We reached this point so there's link */ 696 + *link_up = true; 697 + 698 + switch (vflinks & IXGBE_LINKS_SPEED_82599) { 699 + case IXGBE_LINKS_SPEED_10G_82599: 700 + *speed = IXGBE_LINK_SPEED_10GB_FULL; 701 + break; 702 + case IXGBE_LINKS_SPEED_1G_82599: 703 + *speed = IXGBE_LINK_SPEED_1GB_FULL; 704 + break; 705 + case IXGBE_LINKS_SPEED_100_82599: 706 + *speed = IXGBE_LINK_SPEED_100_FULL; 707 + break; 708 + default: 709 + *speed = IXGBE_LINK_SPEED_UNKNOWN; 710 + } 744 711 } 745 712 746 713 /** ··· 839 702 bool *link_up, 840 703 bool autoneg_wait_to_complete) 841 704 { 705 + struct ixgbevf_adapter *adapter = hw->back; 842 706 struct ixgbe_mbx_info *mbx = &hw->mbx; 843 707 struct ixgbe_mac_info *mac = &hw->mac; 844 708 s32 ret_val = 0; 845 - u32 links_reg; 846 709 u32 in_msg = 0; 847 710 848 711 /* If we were hit with a reset drop the link */ ··· 852 715 if (!mac->get_link_status) 853 716 goto out; 854 717 855 - /* if link status is down no point in checking to see if pf is up */ 856 - links_reg = IXGBE_READ_REG(hw, IXGBE_VFLINKS); 857 - if (!(links_reg & IXGBE_LINKS_UP)) 858 - goto out; 859 - 860 - /* for SFP+ modules and DA cables on 82599 it can take up to 500usecs 861 - * before the link status is correct 862 -
if (mac->type == ixgbe_mac_82599_vf) { 864 - int i; 865 - 866 - for (i = 0; i < 5; i++) { 867 - udelay(100); 868 - links_reg = IXGBE_READ_REG(hw, IXGBE_VFLINKS); 869 - 870 - if (!(links_reg & IXGBE_LINKS_UP)) 871 - goto out; 872 - } 873 - } 874 - 875 - switch (links_reg & IXGBE_LINKS_SPEED_82599) { 876 - case IXGBE_LINKS_SPEED_10G_82599: 877 - *speed = IXGBE_LINK_SPEED_10GB_FULL; 878 - break; 879 - case IXGBE_LINKS_SPEED_1G_82599: 880 - *speed = IXGBE_LINK_SPEED_1GB_FULL; 881 - break; 882 - case IXGBE_LINKS_SPEED_100_82599: 883 - *speed = IXGBE_LINK_SPEED_100_FULL; 884 - break; 718 + if (hw->mac.type == ixgbe_mac_e610_vf) { 719 + ret_val = ixgbevf_get_pf_link_state(hw, speed, link_up); 720 + if (ret_val) 721 + goto out; 722 + } else { 723 + ixgbe_read_vflinks(hw, speed, link_up); 724 + if (*link_up == false) 725 + goto out; 885 726 } 886 727 887 728 /* if the read failed it could just be a mailbox collision, best wait 888 729 * until we are called again and don't report an error 889 730 */ 890 731 if (mbx->ops.read(hw, &in_msg, 1)) { 891 - if (hw->api_version >= ixgbe_mbox_api_15) 732 + if (adapter->pf_features & IXGBEVF_PF_SUP_ESX_MBX) 892 733 mac->get_link_status = false; 893 734 goto out; 894 735 } ··· 1066 951 case ixgbe_mbox_api_13: 1067 952 case ixgbe_mbox_api_14: 1068 953 case ixgbe_mbox_api_15: 954 + case ixgbe_mbox_api_16: 955 + case ixgbe_mbox_api_17: 1069 956 break; 1070 957 default: 1071 958 return 0; ··· 1122 1005 .setup_link = ixgbevf_setup_mac_link_vf, 1123 1006 .check_link = ixgbevf_check_mac_link_vf, 1124 1007 .negotiate_api_version = ixgbevf_negotiate_api_version_vf, 1008 + .negotiate_features = ixgbevf_negotiate_features_vf, 1125 1009 .set_rar = ixgbevf_set_rar_vf, 1126 1010 .update_mc_addr_list = ixgbevf_update_mc_addr_list_vf, 1127 1011 .update_xcast_mode = ixgbevf_update_xcast_mode,
+1
drivers/net/ethernet/intel/ixgbevf/vf.h
··· 26 26 s32 (*stop_adapter)(struct ixgbe_hw *); 27 27 s32 (*get_bus_info)(struct ixgbe_hw *); 28 28 s32 (*negotiate_api_version)(struct ixgbe_hw *hw, int api); 29 + int (*negotiate_features)(struct ixgbe_hw *hw, u32 *pf_features); 29 30 30 31 /* Link */ 31 32 s32 (*setup_link)(struct ixgbe_hw *, ixgbe_link_speed, bool, bool);
+1
drivers/net/ethernet/marvell/octeontx2/af/cgx.c
··· 1981 1981 !is_cgx_mapped_to_nix(pdev->subsystem_device, cgx->cgx_id)) { 1982 1982 dev_notice(dev, "CGX %d not mapped to NIX, skipping probe\n", 1983 1983 cgx->cgx_id); 1984 + err = -ENODEV; 1984 1985 goto err_release_regions; 1985 1986 } 1986 1987
+6 -2
drivers/net/ethernet/mediatek/mtk_wed.c
··· 677 677 void *buf; 678 678 int s; 679 679 680 - page = __dev_alloc_page(GFP_KERNEL); 680 + page = __dev_alloc_page(GFP_KERNEL | GFP_DMA32); 681 681 if (!page) 682 682 return -ENOMEM; 683 683 ··· 800 800 struct page *page; 801 801 int s; 802 802 803 - page = __dev_alloc_page(GFP_KERNEL); 803 + page = __dev_alloc_page(GFP_KERNEL | GFP_DMA32); 804 804 if (!page) 805 805 return -ENOMEM; 806 806 ··· 2425 2425 dev->wdma_idx = hw->index; 2426 2426 dev->version = hw->version; 2427 2427 dev->hw->pcie_base = mtk_wed_get_pcie_base(dev); 2428 + 2429 + ret = dma_set_mask_and_coherent(hw->dev, DMA_BIT_MASK(32)); 2430 + if (ret) 2431 + goto out; 2428 2432 2429 2433 if (hw->eth->dma_dev == hw->eth->dev && 2430 2434 of_dma_is_coherent(hw->eth->dev->of_node))
+3 -2
drivers/net/ethernet/realtek/r8169_main.c
··· 4994 4994 if (!device_may_wakeup(tp_to_dev(tp))) 4995 4995 clk_prepare_enable(tp->clk); 4996 4996 4997 - /* Reportedly at least Asus X453MA truncates packets otherwise */ 4998 - if (tp->mac_version == RTL_GIGA_MAC_VER_37) 4997 + /* Some chip versions may truncate packets without this initialization */ 4998 + if (tp->mac_version == RTL_GIGA_MAC_VER_37 || 4999 + tp->mac_version == RTL_GIGA_MAC_VER_46) 4999 5000 rtl_init_rxcfg(tp); 5000 5001 5001 5002 return rtl8169_runtime_resume(device);
+7
drivers/net/netdevsim/netdev.c
··· 545 545 static int nsim_open(struct net_device *dev) 546 546 { 547 547 struct netdevsim *ns = netdev_priv(dev); 548 + struct netdevsim *peer; 548 549 int err; 549 550 550 551 netdev_assert_locked(dev); ··· 555 554 return err; 556 555 557 556 nsim_enable_napi(ns); 557 + 558 + peer = rtnl_dereference(ns->peer); 559 + if (peer && netif_running(peer->netdev)) { 560 + netif_carrier_on(dev); 561 + netif_carrier_on(peer->netdev); 562 + } 558 563 559 564 return 0; 560 565 }
+19 -1
drivers/net/phy/broadcom.c
··· 405 405 static int bcm54811_config_init(struct phy_device *phydev) 406 406 { 407 407 struct bcm54xx_phy_priv *priv = phydev->priv; 408 - int err, reg, exp_sync_ethernet; 408 + int err, reg, exp_sync_ethernet, aux_rgmii_en; 409 409 410 410 /* Enable CLK125 MUX on LED4 if ref clock is enabled. */ 411 411 if (!(phydev->dev_flags & PHY_BRCM_RX_REFCLK_UNUSED)) { ··· 431 431 err = bcm_phy_modify_exp(phydev, BCM_EXP_SYNC_ETHERNET, 432 432 BCM_EXP_SYNC_ETHERNET_MII_LITE, 433 433 exp_sync_ethernet); 434 + if (err < 0) 435 + return err; 436 + 437 + /* Enable RGMII if configured */ 438 + if (phy_interface_is_rgmii(phydev)) 439 + aux_rgmii_en = MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_EN | 440 + MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_SKEW_EN; 441 + else 442 + aux_rgmii_en = 0; 443 + 444 + /* Also writing Reserved bits 6:5 because the documentation requires 445 + * them to be written to 0b11 446 + */ 447 + err = bcm54xx_auxctl_write(phydev, 448 + MII_BCM54XX_AUXCTL_SHDWSEL_MISC, 449 + MII_BCM54XX_AUXCTL_MISC_WREN | 450 + aux_rgmii_en | 451 + MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RSVD); 434 452 if (err < 0) 435 453 return err; 436 454
+11 -12
drivers/net/phy/realtek/realtek_main.c
··· 633 633 str_enabled_disabled(val_rxdly)); 634 634 } 635 635 636 + if (!priv->has_phycr2) 637 + return 0; 638 + 636 639 /* Disable PHY-mode EEE so LPI is passed to the MAC */ 637 640 ret = phy_modify_paged(phydev, RTL8211F_PHYCR_PAGE, RTL8211F_PHYCR2, 638 641 RTL8211F_PHYCR2_PHY_EEE_ENABLE, 0); 639 642 if (ret) 640 643 return ret; 641 644 642 - if (priv->has_phycr2) { 643 - ret = phy_modify_paged(phydev, RTL8211F_PHYCR_PAGE, 644 - RTL8211F_PHYCR2, RTL8211F_CLKOUT_EN, 645 - priv->phycr2); 646 - if (ret < 0) { 647 - dev_err(dev, "clkout configuration failed: %pe\n", 648 - ERR_PTR(ret)); 649 - return ret; 650 - } 651 - 652 - return genphy_soft_reset(phydev); 645 + ret = phy_modify_paged(phydev, RTL8211F_PHYCR_PAGE, 646 + RTL8211F_PHYCR2, RTL8211F_CLKOUT_EN, 647 + priv->phycr2); 648 + if (ret < 0) { 649 + dev_err(dev, "clkout configuration failed: %pe\n", 650 + ERR_PTR(ret)); 651 + return ret; 653 652 } 654 653 655 - return 0; 654 + return genphy_soft_reset(phydev); 656 655 } 657 656 658 657 static int rtl821x_suspend(struct phy_device *phydev)
+11 -8
drivers/net/usb/lan78xx.c
··· 1175 1175 } 1176 1176 1177 1177 write_raw_eeprom_done: 1178 - if (dev->chipid == ID_REV_CHIP_ID_7800_) 1179 - return lan78xx_write_reg(dev, HW_CFG, saved); 1180 - 1181 - return 0; 1178 + if (dev->chipid == ID_REV_CHIP_ID_7800_) { 1179 + int rc = lan78xx_write_reg(dev, HW_CFG, saved); 1180 + /* If USB fails, there is nothing to do */ 1181 + if (rc < 0) 1182 + return rc; 1183 + } 1184 + return ret; 1182 1185 } 1183 1186 1184 1187 static int lan78xx_read_raw_otp(struct lan78xx_net *dev, u32 offset, ··· 3250 3247 } 3251 3248 } while (buf & HW_CFG_LRST_); 3252 3249 3253 - ret = lan78xx_init_mac_address(dev); 3254 - if (ret < 0) 3255 - return ret; 3256 - 3257 3250 /* save DEVID for later usage */ 3258 3251 ret = lan78xx_read_reg(dev, ID_REV, &buf); 3259 3252 if (ret < 0) ··· 3257 3258 3258 3259 dev->chipid = (buf & ID_REV_CHIP_ID_MASK_) >> 16; 3259 3260 dev->chiprev = buf & ID_REV_CHIP_REV_MASK_; 3261 + 3262 + ret = lan78xx_init_mac_address(dev); 3263 + if (ret < 0) 3264 + return ret; 3260 3265 3261 3266 /* Respond to the IN token with a NAK */ 3262 3267 ret = lan78xx_read_reg(dev, USB_CFG0, &buf);
+6 -1
drivers/net/usb/r8152.c
··· 10122 10122 ret = usb_register_device_driver(&rtl8152_cfgselector_driver, THIS_MODULE); 10123 10123 if (ret) 10124 10124 return ret; 10125 - return usb_register(&rtl8152_driver); 10125 + 10126 + ret = usb_register(&rtl8152_driver); 10127 + if (ret) 10128 + usb_deregister_device_driver(&rtl8152_cfgselector_driver); 10129 + 10130 + return ret; 10126 10131 } 10127 10132 10128 10133 static void __exit rtl8152_driver_exit(void)
+2
drivers/net/usb/usbnet.c
··· 702 702 struct sk_buff *skb; 703 703 int num = 0; 704 704 705 + local_bh_disable(); 705 706 clear_bit(EVENT_RX_PAUSED, &dev->flags); 706 707 707 708 while ((skb = skb_dequeue(&dev->rxq_pause)) != NULL) { ··· 711 710 } 712 711 713 712 queue_work(system_bh_wq, &dev->bh_work); 713 + local_bh_enable(); 714 714 715 715 netif_dbg(dev, rx_status, dev->net, 716 716 "paused rx queue disabled, %d skbs requeued\n", num);
+5 -1
drivers/nvme/host/auth.c
··· 36 36 u8 status; 37 37 u8 dhgroup_id; 38 38 u8 hash_id; 39 + u8 sc_c; 39 40 size_t hash_len; 40 41 u8 c1[64]; 41 42 u8 c2[64]; ··· 154 153 data->auth_protocol[0].dhchap.idlist[33] = NVME_AUTH_DHGROUP_4096; 155 154 data->auth_protocol[0].dhchap.idlist[34] = NVME_AUTH_DHGROUP_6144; 156 155 data->auth_protocol[0].dhchap.idlist[35] = NVME_AUTH_DHGROUP_8192; 156 + 157 + chap->sc_c = data->sc_c; 157 158 158 159 return size; 159 160 } ··· 492 489 ret = crypto_shash_update(shash, buf, 2); 493 490 if (ret) 494 491 goto out; 495 - memset(buf, 0, sizeof(buf)); 492 + *buf = chap->sc_c; 496 493 ret = crypto_shash_update(shash, buf, 1); 497 494 if (ret) 498 495 goto out; ··· 503 500 strlen(ctrl->opts->host->nqn)); 504 501 if (ret) 505 502 goto out; 503 + memset(buf, 0, sizeof(buf)); 506 504 ret = crypto_shash_update(shash, buf, 1); 507 505 if (ret) 508 506 goto out;
+4 -2
drivers/nvme/host/multipath.c
··· 182 182 struct nvme_ns *ns = rq->q->queuedata; 183 183 struct gendisk *disk = ns->head->disk; 184 184 185 - if (READ_ONCE(ns->head->subsys->iopolicy) == NVME_IOPOLICY_QD) { 185 + if ((READ_ONCE(ns->head->subsys->iopolicy) == NVME_IOPOLICY_QD) && 186 + !(nvme_req(rq)->flags & NVME_MPATH_CNT_ACTIVE)) { 186 187 atomic_inc(&ns->ctrl->nr_active); 187 188 nvme_req(rq)->flags |= NVME_MPATH_CNT_ACTIVE; 188 189 } 189 190 190 - if (!blk_queue_io_stat(disk->queue) || blk_rq_is_passthrough(rq)) 191 + if (!blk_queue_io_stat(disk->queue) || blk_rq_is_passthrough(rq) || 192 + (nvme_req(rq)->flags & NVME_MPATH_IO_STATS)) 191 193 return; 192 194 193 195 nvme_req(rq)->flags |= NVME_MPATH_IO_STATS;
+3
drivers/nvme/host/tcp.c
··· 1081 1081 queue = sk->sk_user_data; 1082 1082 if (likely(queue && sk_stream_is_writeable(sk))) { 1083 1083 clear_bit(SOCK_NOSPACE, &sk->sk_socket->flags); 1084 + /* Ensure pending TLS partial records are retried */ 1085 + if (nvme_tcp_queue_tls(queue)) 1086 + queue->write_space(sk); 1084 1087 queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work); 1085 1088 } 1086 1089 read_unlock_bh(&sk->sk_callback_lock);
+1
drivers/pci/Kconfig
··· 306 306 bool "VGA Arbitration" if EXPERT 307 307 default y 308 308 depends on (PCI && !S390) 309 + select SCREEN_INFO if X86 309 310 help 310 311 Some "legacy" VGA devices implemented on PCI typically have the same 311 312 hard-decoded addresses as they did on ISA. When multiple PCI devices
+1 -1
drivers/pci/controller/cadence/pcie-cadence-ep.c
··· 255 255 u16 flags, mme; 256 256 u8 cap; 257 257 258 - cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSIX); 258 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSI); 259 259 fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); 260 260 261 261 /* Validate that the MSI feature is actually enabled. */
+13
drivers/pci/controller/vmd.c
··· 192 192 data->chip->irq_unmask(data); 193 193 } 194 194 195 + static unsigned int vmd_pci_msi_startup(struct irq_data *data) 196 + { 197 + vmd_pci_msi_enable(data); 198 + return 0; 199 + } 200 + 195 201 static void vmd_irq_disable(struct irq_data *data) 196 202 { 197 203 struct vmd_irq *vmdirq = data->chip_data; ··· 214 208 { 215 209 data->chip->irq_mask(data); 216 210 vmd_irq_disable(data->parent_data); 211 + } 212 + 213 + static void vmd_pci_msi_shutdown(struct irq_data *data) 214 + { 215 + vmd_pci_msi_disable(data); 217 216 } 218 217 219 218 static struct irq_chip vmd_msi_controller = { ··· 320 309 if (!msi_lib_init_dev_msi_info(dev, domain, real_parent, info)) 321 310 return false; 322 311 312 + info->chip->irq_startup = vmd_pci_msi_startup; 313 + info->chip->irq_shutdown = vmd_pci_msi_shutdown; 323 314 info->chip->irq_enable = vmd_pci_msi_enable; 324 315 info->chip->irq_disable = vmd_pci_msi_disable; 325 316 return true;
+3 -10
drivers/pci/probe.c
··· 538 538 } 539 539 if (io) { 540 540 bridge->io_window = 1; 541 - pci_read_bridge_io(bridge, 542 - pci_resource_n(bridge, PCI_BRIDGE_IO_WINDOW), 543 - true); 541 + pci_read_bridge_io(bridge, &res, true); 544 542 } 545 543 546 - pci_read_bridge_mmio(bridge, 547 - pci_resource_n(bridge, PCI_BRIDGE_MEM_WINDOW), 548 - true); 544 + pci_read_bridge_mmio(bridge, &res, true); 549 545 550 546 /* 551 547 * DECchip 21050 pass 2 errata: the bridge may miss an address ··· 579 583 bridge->pref_64_window = 1; 580 584 } 581 585 582 - pci_read_bridge_mmio_pref(bridge, 583 - pci_resource_n(bridge, 584 - PCI_BRIDGE_PREF_MEM_WINDOW), 585 - true); 586 + pci_read_bridge_mmio_pref(bridge, &res, true); 586 587 } 587 588 588 589 void pci_read_bridge_bases(struct pci_bus *child)
+2 -4
drivers/pci/vgaarb.c
··· 556 556 557 557 static bool vga_is_firmware_default(struct pci_dev *pdev) 558 558 { 559 - #ifdef CONFIG_SCREEN_INFO 560 - struct screen_info *si = &screen_info; 561 - 562 - return pdev == screen_info_pci_dev(si); 559 + #if defined CONFIG_X86 560 + return pdev == screen_info_pci_dev(&screen_info); 563 561 #else 564 562 return false; 565 563 #endif
+1
drivers/ufs/core/Makefile
··· 2 2 3 3 obj-$(CONFIG_SCSI_UFSHCD) += ufshcd-core.o 4 4 ufshcd-core-y += ufshcd.o ufs-sysfs.o ufs-mcq.o 5 + ufshcd-core-$(CONFIG_RPMB) += ufs-rpmb.o 5 6 ufshcd-core-$(CONFIG_DEBUG_FS) += ufs-debugfs.o 6 7 ufshcd-core-$(CONFIG_SCSI_UFS_BSG) += ufs_bsg.o 7 8 ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o
+254
drivers/ufs/core/ufs-rpmb.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * UFS OP-TEE based RPMB Driver 4 + * 5 + * Copyright (C) 2025 Micron Technology, Inc. 6 + * Copyright (C) 2025 Qualcomm Technologies, Inc. 7 + * 8 + * Authors: 9 + * Bean Huo <beanhuo@micron.com> 10 + * Can Guo <can.guo@oss.qualcomm.com> 11 + */ 12 + 13 + #include <linux/module.h> 14 + #include <linux/device.h> 15 + #include <linux/kernel.h> 16 + #include <linux/types.h> 17 + #include <linux/rpmb.h> 18 + #include <linux/string.h> 19 + #include <linux/list.h> 20 + #include <ufs/ufshcd.h> 21 + #include <linux/unaligned.h> 22 + #include "ufshcd-priv.h" 23 + 24 + #define UFS_RPMB_SEC_PROTOCOL 0xEC /* JEDEC UFS application */ 25 + #define UFS_RPMB_SEC_PROTOCOL_ID 0x01 /* JEDEC UFS RPMB protocol ID, CDB byte3 */ 26 + 27 + static const struct bus_type ufs_rpmb_bus_type = { 28 + .name = "ufs_rpmb", 29 + }; 30 + 31 + /* UFS RPMB device structure */ 32 + struct ufs_rpmb_dev { 33 + u8 region_id; 34 + struct device dev; 35 + struct rpmb_dev *rdev; 36 + struct ufs_hba *hba; 37 + struct list_head node; 38 + }; 39 + 40 + static int ufs_sec_submit(struct ufs_hba *hba, u16 spsp, void *buffer, size_t len, bool send) 41 + { 42 + struct scsi_device *sdev = hba->ufs_rpmb_wlun; 43 + u8 cdb[12] = { }; 44 + 45 + cdb[0] = send ? SECURITY_PROTOCOL_OUT : SECURITY_PROTOCOL_IN; 46 + cdb[1] = UFS_RPMB_SEC_PROTOCOL; 47 + put_unaligned_be16(spsp, &cdb[2]); 48 + put_unaligned_be32(len, &cdb[6]); 49 + 50 + return scsi_execute_cmd(sdev, cdb, send ? 
REQ_OP_DRV_OUT : REQ_OP_DRV_IN, 51 + buffer, len, /*timeout=*/30 * HZ, 0, NULL); 52 + } 53 + 54 + /* UFS RPMB route frames implementation */ 55 + static int ufs_rpmb_route_frames(struct device *dev, u8 *req, unsigned int req_len, u8 *resp, 56 + unsigned int resp_len) 57 + { 58 + struct ufs_rpmb_dev *ufs_rpmb = dev_get_drvdata(dev); 59 + struct rpmb_frame *frm_out = (struct rpmb_frame *)req; 60 + bool need_result_read = true; 61 + u16 req_type, protocol_id; 62 + struct ufs_hba *hba; 63 + int ret; 64 + 65 + if (!ufs_rpmb) { 66 + dev_err(dev, "Missing driver data\n"); 67 + return -ENODEV; 68 + } 69 + 70 + hba = ufs_rpmb->hba; 71 + 72 + req_type = be16_to_cpu(frm_out->req_resp); 73 + 74 + switch (req_type) { 75 + case RPMB_PROGRAM_KEY: 76 + if (req_len != sizeof(struct rpmb_frame) || resp_len != sizeof(struct rpmb_frame)) 77 + return -EINVAL; 78 + break; 79 + case RPMB_GET_WRITE_COUNTER: 80 + if (req_len != sizeof(struct rpmb_frame) || resp_len != sizeof(struct rpmb_frame)) 81 + return -EINVAL; 82 + need_result_read = false; 83 + break; 84 + case RPMB_WRITE_DATA: 85 + if (req_len % sizeof(struct rpmb_frame) || resp_len != sizeof(struct rpmb_frame)) 86 + return -EINVAL; 87 + break; 88 + case RPMB_READ_DATA: 89 + if (req_len != sizeof(struct rpmb_frame) || resp_len % sizeof(struct rpmb_frame)) 90 + return -EINVAL; 91 + need_result_read = false; 92 + break; 93 + default: 94 + dev_err(dev, "Unknown request type=0x%04x\n", req_type); 95 + return -EINVAL; 96 + } 97 + 98 + protocol_id = ufs_rpmb->region_id << 8 | UFS_RPMB_SEC_PROTOCOL_ID; 99 + 100 + ret = ufs_sec_submit(hba, protocol_id, req, req_len, true); 101 + if (ret) { 102 + dev_err(dev, "Command failed with ret=%d\n", ret); 103 + return ret; 104 + } 105 + 106 + if (need_result_read) { 107 + struct rpmb_frame *frm_resp = (struct rpmb_frame *)resp; 108 + 109 + memset(frm_resp, 0, sizeof(*frm_resp)); 110 + frm_resp->req_resp = cpu_to_be16(RPMB_RESULT_READ); 111 + ret = ufs_sec_submit(hba, protocol_id, resp, resp_len, 
true); 112 + if (ret) { 113 + dev_err(dev, "Result read request failed with ret=%d\n", ret); 114 + return ret; 115 + } 116 + } 117 + 118 + if (!ret) { 119 + ret = ufs_sec_submit(hba, protocol_id, resp, resp_len, false); 120 + if (ret) 121 + dev_err(dev, "Response read failed with ret=%d\n", ret); 122 + } 123 + 124 + return ret; 125 + } 126 + 127 + static void ufs_rpmb_device_release(struct device *dev) 128 + { 129 + struct ufs_rpmb_dev *ufs_rpmb = dev_get_drvdata(dev); 130 + 131 + rpmb_dev_unregister(ufs_rpmb->rdev); 132 + } 133 + 134 + /* UFS RPMB device registration */ 135 + int ufs_rpmb_probe(struct ufs_hba *hba) 136 + { 137 + struct ufs_rpmb_dev *ufs_rpmb, *it, *tmp; 138 + struct rpmb_dev *rdev; 139 + char *cid = NULL; 140 + int region; 141 + u32 cap; 142 + int ret; 143 + 144 + if (!hba->ufs_rpmb_wlun || hba->dev_info.b_advanced_rpmb_en) { 145 + dev_info(hba->dev, "Skip OP-TEE RPMB registration\n"); 146 + return -ENODEV; 147 + } 148 + 149 + /* Check if device_id is available */ 150 + if (!hba->dev_info.device_id) { 151 + dev_err(hba->dev, "UFS Device ID not available\n"); 152 + return -EINVAL; 153 + } 154 + 155 + INIT_LIST_HEAD(&hba->rpmbs); 156 + 157 + struct rpmb_descr descr = { 158 + .type = RPMB_TYPE_UFS, 159 + .route_frames = ufs_rpmb_route_frames, 160 + .reliable_wr_count = hba->dev_info.rpmb_io_size, 161 + }; 162 + 163 + for (region = 0; region < ARRAY_SIZE(hba->dev_info.rpmb_region_size); region++) { 164 + cap = hba->dev_info.rpmb_region_size[region]; 165 + if (!cap) 166 + continue; 167 + 168 + ufs_rpmb = devm_kzalloc(hba->dev, sizeof(*ufs_rpmb), GFP_KERNEL); 169 + if (!ufs_rpmb) { 170 + ret = -ENOMEM; 171 + goto err_out; 172 + } 173 + 174 + ufs_rpmb->hba = hba; 175 + ufs_rpmb->dev.parent = &hba->ufs_rpmb_wlun->sdev_gendev; 176 + ufs_rpmb->dev.bus = &ufs_rpmb_bus_type; 177 + ufs_rpmb->dev.release = ufs_rpmb_device_release; 178 + dev_set_name(&ufs_rpmb->dev, "ufs_rpmb%d", region); 179 + 180 + /* Set driver data BEFORE device_register */ 181 + 
dev_set_drvdata(&ufs_rpmb->dev, ufs_rpmb); 182 + 183 + ret = device_register(&ufs_rpmb->dev); 184 + if (ret) { 185 + dev_err(hba->dev, "Failed to register UFS RPMB device %d\n", region); 186 + put_device(&ufs_rpmb->dev); 187 + goto err_out; 188 + } 189 + 190 + /* Create unique ID by appending region number to device_id */ 191 + cid = kasprintf(GFP_KERNEL, "%s-R%d", hba->dev_info.device_id, region); 192 + if (!cid) { 193 + device_unregister(&ufs_rpmb->dev); 194 + ret = -ENOMEM; 195 + goto err_out; 196 + } 197 + 198 + descr.dev_id = cid; 199 + descr.dev_id_len = strlen(cid); 200 + descr.capacity = cap; 201 + 202 + /* Register RPMB device */ 203 + rdev = rpmb_dev_register(&ufs_rpmb->dev, &descr); 204 + if (IS_ERR(rdev)) { 205 + dev_err(hba->dev, "Failed to register UFS RPMB device.\n"); 206 + device_unregister(&ufs_rpmb->dev); 207 + ret = PTR_ERR(rdev); 208 + goto err_out; 209 + } 210 + 211 + kfree(cid); 212 + cid = NULL; 213 + 214 + ufs_rpmb->rdev = rdev; 215 + ufs_rpmb->region_id = region; 216 + 217 + list_add_tail(&ufs_rpmb->node, &hba->rpmbs); 218 + 219 + dev_info(hba->dev, "UFS RPMB region %d registered (capacity=%u)\n", region, cap); 220 + } 221 + 222 + return 0; 223 + err_out: 224 + kfree(cid); 225 + list_for_each_entry_safe(it, tmp, &hba->rpmbs, node) { 226 + list_del(&it->node); 227 + device_unregister(&it->dev); 228 + } 229 + 230 + return ret; 231 + } 232 + 233 + /* UFS RPMB remove handler */ 234 + void ufs_rpmb_remove(struct ufs_hba *hba) 235 + { 236 + struct ufs_rpmb_dev *ufs_rpmb, *tmp; 237 + 238 + if (list_empty(&hba->rpmbs)) 239 + return; 240 + 241 + /* Remove all registered RPMB devices */ 242 + list_for_each_entry_safe(ufs_rpmb, tmp, &hba->rpmbs, node) { 243 + dev_info(hba->dev, "Removing UFS RPMB region %d\n", ufs_rpmb->region_id); 244 + /* Remove from list first */ 245 + list_del(&ufs_rpmb->node); 246 + /* Unregister device */ 247 + device_unregister(&ufs_rpmb->dev); 248 + } 249 + 250 + dev_info(hba->dev, "All UFS RPMB devices unregistered\n"); 251 
+ } 252 + 253 + MODULE_LICENSE("GPL v2"); 254 + MODULE_DESCRIPTION("OP-TEE UFS RPMB driver");
+23 -4
drivers/ufs/core/ufshcd-priv.h
··· 79 79 int ufshcd_try_to_abort_task(struct ufs_hba *hba, int tag); 80 80 void ufshcd_release_scsi_cmd(struct ufs_hba *hba, struct scsi_cmnd *cmd); 81 81 82 - #define SD_ASCII_STD true 83 - #define SD_RAW false 84 - int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index, 85 - u8 **buf, bool ascii); 82 + /** 83 + * enum ufs_descr_fmt - UFS string descriptor format 84 + * @SD_RAW: Raw UTF-16 format 85 + * @SD_ASCII_STD: Convert to null-terminated ASCII string 86 + */ 87 + enum ufs_descr_fmt { 88 + SD_RAW = 0, 89 + SD_ASCII_STD = 1, 90 + }; 86 91 92 + int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index, u8 **buf, enum ufs_descr_fmt fmt); 87 93 int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd); 88 94 int ufshcd_send_bsg_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd); 89 95 ··· 437 431 438 432 return val / sizeof(struct utp_transfer_req_desc); 439 433 } 434 + 435 + #if IS_ENABLED(CONFIG_RPMB) 436 + int ufs_rpmb_probe(struct ufs_hba *hba); 437 + void ufs_rpmb_remove(struct ufs_hba *hba); 438 + #else 439 + static inline int ufs_rpmb_probe(struct ufs_hba *hba) 440 + { 441 + return 0; 442 + } 443 + static inline void ufs_rpmb_remove(struct ufs_hba *hba) 444 + { 445 + } 446 + #endif 440 447 441 448 #endif /* _UFSHCD_PRIV_H_ */
+86 -10
drivers/ufs/core/ufshcd.c
··· 3770 3770 * @desc_index: descriptor index 3771 3771 * @buf: pointer to buffer where descriptor would be read, 3772 3772 * the caller should free the memory. 3773 - * @ascii: if true convert from unicode to ascii characters 3774 - * null terminated string. 3773 + * @fmt: if %SD_ASCII_STD, convert from UTF-16 to ASCII 3775 3774 * 3776 3775 * Return: 3777 3776 * * string size on success. 3778 3777 * * -ENOMEM: on allocation failure 3779 3778 * * -EINVAL: on a wrong parameter 3780 3779 */ 3781 - int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index, 3782 - u8 **buf, bool ascii) 3780 + int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index, u8 **buf, enum ufs_descr_fmt fmt) 3783 3781 { 3784 3782 struct uc_string_id *uc_str; 3785 3783 u8 *str; ··· 3806 3808 goto out; 3807 3809 } 3808 3810 3809 - if (ascii) { 3811 + if (fmt == SD_ASCII_STD) { 3810 3812 ssize_t ascii_len; 3811 3813 int i; 3812 3814 /* remove header and divide by 2 to move from UTF16 to UTF8 */ ··· 3832 3834 str[ret++] = '\0'; 3833 3835 3834 3836 } else { 3835 - str = kmemdup(uc_str, uc_str->len, GFP_KERNEL); 3837 + str = kmemdup(uc_str->uc, uc_str->len, GFP_KERNEL); 3836 3838 if (!str) { 3837 3839 ret = -ENOMEM; 3838 3840 goto out; ··· 5238 5240 desc_buf[UNIT_DESC_PARAM_LU_WR_PROTECT] == UFS_LU_POWER_ON_WP) 5239 5241 hba->dev_info.is_lu_power_on_wp = true; 5240 5242 5241 - /* In case of RPMB LU, check if advanced RPMB mode is enabled */ 5242 - if (desc_buf[UNIT_DESC_PARAM_UNIT_INDEX] == UFS_UPIU_RPMB_WLUN && 5243 - desc_buf[RPMB_UNIT_DESC_PARAM_REGION_EN] & BIT(4)) 5244 - hba->dev_info.b_advanced_rpmb_en = true; 5243 + /* In case of RPMB LU, check if advanced RPMB mode is enabled, and get region size */ 5244 + if (desc_buf[UNIT_DESC_PARAM_UNIT_INDEX] == UFS_UPIU_RPMB_WLUN) { 5245 + if (desc_buf[RPMB_UNIT_DESC_PARAM_REGION_EN] & BIT(4)) 5246 + hba->dev_info.b_advanced_rpmb_en = true; 5247 + hba->dev_info.rpmb_region_size[0] = desc_buf[RPMB_UNIT_DESC_PARAM_REGION0_SIZE]; 5248 + 
hba->dev_info.rpmb_region_size[1] = desc_buf[RPMB_UNIT_DESC_PARAM_REGION1_SIZE]; 5249 + hba->dev_info.rpmb_region_size[2] = desc_buf[RPMB_UNIT_DESC_PARAM_REGION2_SIZE]; 5250 + hba->dev_info.rpmb_region_size[3] = desc_buf[RPMB_UNIT_DESC_PARAM_REGION3_SIZE]; 5251 + } 5245 5252 5246 5253 5247 5254 kfree(desc_buf); ··· 8223 8220 ufshcd_upiu_wlun_to_scsi_wlun(UFS_UPIU_RPMB_WLUN), NULL); 8224 8221 if (IS_ERR(sdev_rpmb)) { 8225 8222 ret = PTR_ERR(sdev_rpmb); 8223 + hba->ufs_rpmb_wlun = NULL; 8224 + dev_err(hba->dev, "%s: RPMB WLUN not found\n", __func__); 8226 8225 goto remove_ufs_device_wlun; 8227 8226 } 8227 + hba->ufs_rpmb_wlun = sdev_rpmb; 8228 8228 ufshcd_blk_pm_runtime_init(sdev_rpmb); 8229 8229 scsi_device_put(sdev_rpmb); 8230 8230 ··· 8495 8489 dev_info->rtc_update_period = 0; 8496 8490 } 8497 8491 8492 + /** 8493 + * ufshcd_create_device_id - Generate unique device identifier string 8494 + * @hba: per-adapter instance 8495 + * @desc_buf: device descriptor buffer 8496 + * 8497 + * Creates a unique device ID string combining manufacturer ID, spec version, 8498 + * model name, serial number (as hex), device version, and manufacture date. 8499 + * 8500 + * Returns: Allocated device ID string on success, NULL on failure 8501 + */ 8502 + static char *ufshcd_create_device_id(struct ufs_hba *hba, u8 *desc_buf) 8503 + { 8504 + struct ufs_dev_info *dev_info = &hba->dev_info; 8505 + u16 manufacture_date; 8506 + u16 device_version; 8507 + u8 *serial_number; 8508 + char *serial_hex; 8509 + char *device_id; 8510 + u8 serial_index; 8511 + int serial_len; 8512 + int ret; 8513 + 8514 + serial_index = desc_buf[DEVICE_DESC_PARAM_SN]; 8515 + 8516 + ret = ufshcd_read_string_desc(hba, serial_index, &serial_number, SD_RAW); 8517 + if (ret < 0) { 8518 + dev_err(hba->dev, "Failed reading Serial Number. 
err = %d\n", ret); 8519 + return NULL; 8520 + } 8521 + 8522 + device_version = get_unaligned_be16(&desc_buf[DEVICE_DESC_PARAM_DEV_VER]); 8523 + manufacture_date = get_unaligned_be16(&desc_buf[DEVICE_DESC_PARAM_MANF_DATE]); 8524 + 8525 + serial_len = ret; 8526 + /* Allocate buffer for hex string: 2 chars per byte + null terminator */ 8527 + serial_hex = kzalloc(serial_len * 2 + 1, GFP_KERNEL); 8528 + if (!serial_hex) { 8529 + kfree(serial_number); 8530 + return NULL; 8531 + } 8532 + 8533 + bin2hex(serial_hex, serial_number, serial_len); 8534 + 8535 + /* 8536 + * Device ID format is ABI with secure world - do not change without firmware 8537 + * coordination. 8538 + */ 8539 + device_id = kasprintf(GFP_KERNEL, "%04X-%04X-%s-%s-%04X-%04X", 8540 + dev_info->wmanufacturerid, dev_info->wspecversion, 8541 + dev_info->model, serial_hex, device_version, 8542 + manufacture_date); 8543 + 8544 + kfree(serial_hex); 8545 + kfree(serial_number); 8546 + 8547 + if (!device_id) 8548 + dev_warn(hba->dev, "Failed to allocate unique device ID\n"); 8549 + 8550 + return device_id; 8551 + } 8552 + 8498 8553 static int ufs_get_device_desc(struct ufs_hba *hba) 8499 8554 { 8500 8555 struct ufs_dev_info *dev_info = &hba->dev_info; ··· 8618 8551 goto out; 8619 8552 } 8620 8553 8554 + /* Generate unique device ID */ 8555 + dev_info->device_id = ufshcd_create_device_id(hba, desc_buf); 8556 + 8621 8557 hba->luns_avail = desc_buf[DEVICE_DESC_PARAM_NUM_LU] + 8622 8558 desc_buf[DEVICE_DESC_PARAM_NUM_WLU]; 8623 8559 ··· 8656 8586 8657 8587 kfree(dev_info->model); 8658 8588 dev_info->model = NULL; 8589 + kfree(dev_info->device_id); 8590 + dev_info->device_id = NULL; 8659 8591 } 8660 8592 8661 8593 /** ··· 8800 8728 hba->dev_info.max_lu_supported = 32; 8801 8729 else if (desc_buf[GEOMETRY_DESC_PARAM_MAX_NUM_LUN] == 0) 8802 8730 hba->dev_info.max_lu_supported = 8; 8731 + 8732 + hba->dev_info.rpmb_io_size = desc_buf[GEOMETRY_DESC_PARAM_RPMB_RW_SIZE]; 8803 8733 8804 8734 out: 8805 8735 kfree(desc_buf); ··· 
8989 8915 8990 8916 ufs_bsg_probe(hba); 8991 8917 scsi_scan_host(hba->host); 8918 + ufs_rpmb_probe(hba); 8992 8919 8993 8920 out: 8994 8921 return ret; ··· 10545 10470 ufshcd_rpm_get_sync(hba); 10546 10471 ufs_hwmon_remove(hba); 10547 10472 ufs_bsg_remove(hba); 10473 + ufs_rpmb_remove(hba); 10548 10474 ufs_sysfs_remove_nodes(hba->dev); 10549 10475 cancel_delayed_work_sync(&hba->ufs_rtc_update_work); 10550 10476 blk_mq_destroy_queue(hba->tmf_queue);
+1 -1
fs/btrfs/extent_io.c
··· 973 973 { 974 974 const u64 ra_pos = readahead_pos(ractl); 975 975 const u64 ra_end = ra_pos + readahead_length(ractl); 976 - const u64 em_end = em->start + em->ram_bytes; 976 + const u64 em_end = em->start + em->len; 977 977 978 978 /* No expansion for holes and inline extents. */ 979 979 if (em->disk_bytenr > EXTENT_MAP_LAST_BYTE)
+8 -7
fs/btrfs/free-space-tree.c
··· 1106 1106 * If ret is 1 (no key found), it means this is an empty block group, 1107 1107 * without any extents allocated from it and there's no block group 1108 1108 * item (key BTRFS_BLOCK_GROUP_ITEM_KEY) located in the extent tree 1109 - * because we are using the block group tree feature, so block group 1110 - * items are stored in the block group tree. It also means there are no 1111 - * extents allocated for block groups with a start offset beyond this 1112 - * block group's end offset (this is the last, highest, block group). 1109 + * because we are using the block group tree feature (so block group 1110 + * items are stored in the block group tree) or this is a new block 1111 + * group created in the current transaction and its block group item 1112 + * was not yet inserted in the extent tree (that happens in 1113 + * btrfs_create_pending_block_groups() -> insert_block_group_item()). 1114 + * It also means there are no extents allocated for block groups with a 1115 + * start offset beyond this block group's end offset (this is the last, 1116 + * highest, block group). 1113 1117 */ 1114 - if (!btrfs_fs_compat_ro(trans->fs_info, BLOCK_GROUP_TREE)) 1115 - ASSERT(ret == 0); 1116 - 1117 1118 start = block_group->start; 1118 1119 end = block_group->start + block_group->length; 1119 1120 while (ret == 0) {
+1 -1
fs/btrfs/ioctl.c
··· 3740 3740 prealloc = kzalloc(sizeof(*prealloc), GFP_KERNEL); 3741 3741 if (!prealloc) { 3742 3742 ret = -ENOMEM; 3743 - goto drop_write; 3743 + goto out; 3744 3744 } 3745 3745 } 3746 3746
+7 -6
fs/btrfs/relocation.c
··· 3780 3780 /* 3781 3781 * Mark start of chunk relocation that is cancellable. Check if the cancellation 3782 3782 * has been requested meanwhile and don't start in that case. 3783 + * NOTE: if this returns an error, reloc_chunk_end() must not be called. 3783 3784 * 3784 3785 * Return: 3785 3786 * 0 success ··· 3797 3796 3798 3797 if (atomic_read(&fs_info->reloc_cancel_req) > 0) { 3799 3798 btrfs_info(fs_info, "chunk relocation canceled on start"); 3800 - /* 3801 - * On cancel, clear all requests but let the caller mark 3802 - * the end after cleanup operations. 3803 - */ 3799 + /* On cancel, clear all requests. */ 3800 + clear_and_wake_up_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags); 3804 3801 atomic_set(&fs_info->reloc_cancel_req, 0); 3805 3802 return -ECANCELED; 3806 3803 } ··· 3807 3808 3808 3809 /* 3809 3810 * Mark end of chunk relocation that is cancellable and wake any waiters. 3811 + * NOTE: call only if a previous call to reloc_chunk_start() succeeded. 3810 3812 */ 3811 3813 static void reloc_chunk_end(struct btrfs_fs_info *fs_info) 3812 3814 { 3815 + ASSERT(test_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags)); 3813 3816 /* Requested after start, clear bit first so any waiters can continue */ 3814 3817 if (atomic_read(&fs_info->reloc_cancel_req) > 0) 3815 3818 btrfs_info(fs_info, "chunk relocation canceled during operation"); ··· 4024 4023 if (err && rw) 4025 4024 btrfs_dec_block_group_ro(rc->block_group); 4026 4025 iput(rc->data_inode); 4026 + reloc_chunk_end(fs_info); 4027 4027 out_put_bg: 4028 4028 btrfs_put_block_group(bg); 4029 - reloc_chunk_end(fs_info); 4030 4029 free_reloc_control(rc); 4031 4030 return err; 4032 4031 } ··· 4209 4208 ret = ret2; 4210 4209 out_unset: 4211 4210 unset_reloc_control(rc); 4212 - out_end: 4213 4211 reloc_chunk_end(fs_info); 4212 + out_end: 4214 4213 free_reloc_control(rc); 4215 4214 out: 4216 4215 free_reloc_roots(&reloc_roots);
+2 -2
fs/btrfs/scrub.c
··· 694 694 695 695 /* stripe->folios[] is allocated by us and no highmem is allowed. */ 696 696 ASSERT(folio); 697 - ASSERT(!folio_test_partial_kmap(folio)); 697 + ASSERT(!folio_test_highmem(folio)); 698 698 return folio_address(folio) + offset_in_folio(folio, offset); 699 699 } 700 700 ··· 707 707 708 708 /* stripe->folios[] is allocated by us and no highmem is allowed. */ 709 709 ASSERT(folio); 710 - ASSERT(!folio_test_partial_kmap(folio)); 710 + ASSERT(!folio_test_highmem(folio)); 711 711 /* And the range must be contained inside the folio. */ 712 712 ASSERT(offset_in_folio(folio, offset) + fs_info->sectorsize <= folio_size(folio)); 713 713 return page_to_phys(folio_page(folio, 0)) + offset_in_folio(folio, offset);
+3 -1
fs/btrfs/send.c
··· 178 178 u64 cur_inode_rdev; 179 179 u64 cur_inode_last_extent; 180 180 u64 cur_inode_next_write_offset; 181 - struct fs_path cur_inode_path; 182 181 bool cur_inode_new; 183 182 bool cur_inode_new_gen; 184 183 bool cur_inode_deleted; ··· 304 305 305 306 struct btrfs_lru_cache dir_created_cache; 306 307 struct btrfs_lru_cache dir_utimes_cache; 308 + 309 + /* Must be last as it ends in a flexible-array member. */ 310 + struct fs_path cur_inode_path; 307 311 }; 308 312 309 313 struct pending_dir_move {
+1 -2
fs/btrfs/super.c
··· 1900 1900 return PTR_ERR(sb); 1901 1901 } 1902 1902 1903 - set_device_specific_options(fs_info); 1904 - 1905 1903 if (sb->s_root) { 1906 1904 /* 1907 1905 * Not the first mount of the fs thus got an existing super block. ··· 1944 1946 deactivate_locked_super(sb); 1945 1947 return -EACCES; 1946 1948 } 1949 + set_device_specific_options(fs_info); 1947 1950 bdev = fs_devices->latest_dev->bdev; 1948 1951 snprintf(sb->s_id, sizeof(sb->s_id), "%pg", bdev); 1949 1952 shrinker_debugfs_rename(sb->s_shrink, "sb-btrfs:%s", sb->s_id);
+1 -1
fs/btrfs/tree-checker.c
··· 1797 1797 struct btrfs_inode_extref *extref = (struct btrfs_inode_extref *)ptr; 1798 1798 u16 namelen; 1799 1799 1800 - if (unlikely(ptr + sizeof(*extref)) > end) { 1800 + if (unlikely(ptr + sizeof(*extref) > end)) { 1801 1801 inode_ref_err(leaf, slot, 1802 1802 "inode extref overflow, ptr %lu end %lu inode_extref size %zu", 1803 1803 ptr, end, sizeof(*extref));
+1 -1
fs/btrfs/zoned.c
··· 1753 1753 !fs_info->stripe_root) { 1754 1754 btrfs_err(fs_info, "zoned: data %s needs raid-stripe-tree", 1755 1755 btrfs_bg_type_to_raid_name(map->type)); 1756 - return -EINVAL; 1756 + ret = -EINVAL; 1757 1757 } 1758 1758 1759 1759 if (unlikely(cache->alloc_offset > cache->zone_capacity)) {
+1 -1
fs/coredump.c
··· 1468 1468 ssize_t retval; 1469 1469 char old_core_pattern[CORENAME_MAX_SIZE]; 1470 1470 1471 - if (write) 1471 + if (!write) 1472 1472 return proc_dostring(table, write, buffer, lenp, ppos); 1473 1473 1474 1474 retval = strscpy(old_core_pattern, core_pattern, CORENAME_MAX_SIZE);
+1 -1
fs/dax.c
··· 1725 1725 if (iov_iter_rw(iter) == WRITE) { 1726 1726 lockdep_assert_held_write(&iomi.inode->i_rwsem); 1727 1727 iomi.flags |= IOMAP_WRITE; 1728 - } else { 1728 + } else if (!sb_rdonly(iomi.inode->i_sb)) { 1729 1729 lockdep_assert_held(&iomi.inode->i_rwsem); 1730 1730 } 1731 1731
+2
fs/dcache.c
··· 2557 2557 spin_lock(&parent->d_lock); 2558 2558 new->d_parent = dget_dlock(parent); 2559 2559 hlist_add_head(&new->d_sib, &parent->d_children); 2560 + if (parent->d_flags & DCACHE_DISCONNECTED) 2561 + new->d_flags |= DCACHE_DISCONNECTED; 2560 2562 spin_unlock(&parent->d_lock); 2561 2563 2562 2564 retry:
+1 -1
fs/exec.c
··· 2048 2048 { 2049 2049 int error = proc_dointvec_minmax(table, write, buffer, lenp, ppos); 2050 2050 2051 - if (!error && !write) 2051 + if (!error && write) 2052 2052 validate_coredump_safety(); 2053 2053 return error; 2054 2054 }
-1
fs/exfat/exfat_fs.h
··· 29 29 enum { 30 30 NLS_NAME_NO_LOSSY = 0, /* no lossy */ 31 31 NLS_NAME_LOSSY = 1 << 0, /* just detected incorrect filename(s) */ 32 - NLS_NAME_OVERLEN = 1 << 1, /* the length is over than its limit */ 33 32 }; 34 33 35 34 #define EXFAT_HASH_BITS 8
+4 -3
fs/exfat/file.c
··· 509 509 static int exfat_ioctl_set_volume_label(struct super_block *sb, 510 510 unsigned long arg) 511 511 { 512 - int ret = 0, lossy; 513 - char label[FSLABEL_MAX]; 512 + int ret = 0, lossy, label_len; 513 + char label[FSLABEL_MAX] = {0}; 514 514 struct exfat_uni_name uniname; 515 515 516 516 if (!capable(CAP_SYS_ADMIN)) ··· 520 520 return -EFAULT; 521 521 522 522 memset(&uniname, 0, sizeof(uniname)); 523 + label_len = strnlen(label, FSLABEL_MAX - 1); 523 524 if (label[0]) { 524 - ret = exfat_nls_to_utf16(sb, label, FSLABEL_MAX, 525 + ret = exfat_nls_to_utf16(sb, label, label_len, 525 526 &uniname, &lossy); 526 527 if (ret < 0) 527 528 return ret;
+6 -2
fs/exfat/namei.c
··· 442 442 return namelen; /* return error value */ 443 443 444 444 if ((lossy && !lookup) || !namelen) 445 - return (lossy & NLS_NAME_OVERLEN) ? -ENAMETOOLONG : -EINVAL; 445 + return -EINVAL; 446 446 447 447 return 0; 448 448 } ··· 642 642 643 643 info->type = exfat_get_entry_type(ep); 644 644 info->attr = le16_to_cpu(ep->dentry.file.attr); 645 - info->size = le64_to_cpu(ep2->dentry.stream.valid_size); 646 645 info->valid_size = le64_to_cpu(ep2->dentry.stream.valid_size); 647 646 info->size = le64_to_cpu(ep2->dentry.stream.size); 647 + 648 + if (info->valid_size < 0) { 649 + exfat_fs_error(sb, "data valid size is invalid(%lld)", info->valid_size); 650 + return -EIO; 651 + } 648 652 649 653 if (unlikely(EXFAT_B_TO_CLU_ROUND_UP(info->size, sbi) > sbi->used_clusters)) { 650 654 exfat_fs_error(sb, "data size is invalid(%lld)", info->size);
-3
fs/exfat/nls.c
··· 616 616 unilen++; 617 617 } 618 618 619 - if (p_cstring[i] != '\0') 620 - lossy |= NLS_NAME_OVERLEN; 621 - 622 619 *uniname = '\0'; 623 620 p_uniname->name_len = unilen; 624 621 p_uniname->name_hash = exfat_calc_chksum16(upname, unilen << 1, 0,
+9 -2
fs/ext4/ext4_jbd2.c
··· 280 280 bh, is_metadata, inode->i_mode, 281 281 test_opt(inode->i_sb, DATA_FLAGS)); 282 282 283 - /* In the no journal case, we can just do a bforget and return */ 283 + /* 284 + * In the no journal case, we should wait for the ongoing buffer 285 + * to complete and do a forget. 286 + */ 284 287 if (!ext4_handle_valid(handle)) { 285 - bforget(bh); 288 + if (bh) { 289 + clear_buffer_dirty(bh); 290 + wait_on_buffer(bh); 291 + __bforget(bh); 292 + } 286 293 return 0; 287 294 } 288 295
+8
fs/ext4/inode.c
··· 5319 5319 } 5320 5320 ei->i_flags = le32_to_cpu(raw_inode->i_flags); 5321 5321 ext4_set_inode_flags(inode, true); 5322 + /* Detect invalid flag combination - can't have both inline data and extents */ 5323 + if (ext4_test_inode_flag(inode, EXT4_INODE_INLINE_DATA) && 5324 + ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) { 5325 + ext4_error_inode(inode, function, line, 0, 5326 + "inode has both inline data and extents flags"); 5327 + ret = -EFSCORRUPTED; 5328 + goto bad_inode; 5329 + } 5322 5330 inode->i_blocks = ext4_inode_blocks(raw_inode, ei); 5323 5331 ei->i_file_acl = le32_to_cpu(raw_inode->i_file_acl_lo); 5324 5332 if (ext4_has_feature_64bit(sb))
+2 -2
fs/ext4/orphan.c
··· 513 513 return; 514 514 for (i = 0; i < oi->of_blocks; i++) 515 515 brelse(oi->of_binfo[i].ob_bh); 516 - kfree(oi->of_binfo); 516 + kvfree(oi->of_binfo); 517 517 } 518 518 519 519 static struct ext4_orphan_block_tail *ext4_orphan_block_tail( ··· 637 637 out_free: 638 638 for (i--; i >= 0; i--) 639 639 brelse(oi->of_binfo[i].ob_bh); 640 - kfree(oi->of_binfo); 640 + kvfree(oi->of_binfo); 641 641 out_put: 642 642 iput(inode); 643 643 return ret;
+1 -1
fs/f2fs/data.c
··· 1497 1497 struct f2fs_dev_info *dev = &sbi->devs[bidx]; 1498 1498 1499 1499 map->m_bdev = dev->bdev; 1500 - map->m_pblk -= dev->start_blk; 1501 1500 map->m_len = min(map->m_len, dev->end_blk + 1 - map->m_pblk); 1501 + map->m_pblk -= dev->start_blk; 1502 1502 } else { 1503 1503 map->m_bdev = inode->i_sb->s_bdev; 1504 1504 }
+1 -1
fs/f2fs/super.c
··· 1820 1820 sb_end_intwrite(inode->i_sb); 1821 1821 1822 1822 spin_lock(&inode->i_lock); 1823 - iput(inode); 1823 + atomic_dec(&inode->i_count); 1824 1824 } 1825 1825 trace_f2fs_drop_inode(inode, 0); 1826 1826 return 0;
+6 -10
fs/file_attr.c
··· 84 84 int error; 85 85 86 86 if (!inode->i_op->fileattr_get) 87 - return -EOPNOTSUPP; 87 + return -ENOIOCTLCMD; 88 88 89 89 error = security_inode_file_getattr(dentry, fa); 90 90 if (error) ··· 270 270 int err; 271 271 272 272 if (!inode->i_op->fileattr_set) 273 - return -EOPNOTSUPP; 273 + return -ENOIOCTLCMD; 274 274 275 275 if (!inode_owner_or_capable(idmap, inode)) 276 276 return -EPERM; ··· 312 312 int err; 313 313 314 314 err = vfs_fileattr_get(file->f_path.dentry, &fa); 315 - if (err == -EOPNOTSUPP) 316 - err = -ENOIOCTLCMD; 317 315 if (!err) 318 316 err = put_user(fa.flags, argp); 319 317 return err; ··· 333 335 fileattr_fill_flags(&fa, flags); 334 336 err = vfs_fileattr_set(idmap, dentry, &fa); 335 337 mnt_drop_write_file(file); 336 - if (err == -EOPNOTSUPP) 337 - err = -ENOIOCTLCMD; 338 338 } 339 339 } 340 340 return err; ··· 345 349 int err; 346 350 347 351 err = vfs_fileattr_get(file->f_path.dentry, &fa); 348 - if (err == -EOPNOTSUPP) 349 - err = -ENOIOCTLCMD; 350 352 if (!err) 351 353 err = copy_fsxattr_to_user(&fa, argp); 352 354 ··· 365 371 if (!err) { 366 372 err = vfs_fileattr_set(idmap, dentry, &fa); 367 373 mnt_drop_write_file(file); 368 - if (err == -EOPNOTSUPP) 369 - err = -ENOIOCTLCMD; 370 374 } 371 375 } 372 376 return err; ··· 416 424 } 417 425 418 426 error = vfs_fileattr_get(filepath.dentry, &fa); 427 + if (error == -ENOIOCTLCMD || error == -ENOTTY) 428 + error = -EOPNOTSUPP; 419 429 if (error) 420 430 return error; 421 431 ··· 485 491 if (!error) { 486 492 error = vfs_fileattr_set(mnt_idmap(filepath.mnt), 487 493 filepath.dentry, &fa); 494 + if (error == -ENOIOCTLCMD || error == -ENOTTY) 495 + error = -EOPNOTSUPP; 488 496 mnt_drop_write(filepath.mnt); 489 497 } 490 498
+1 -1
fs/file_table.c
··· 192 192 f->f_sb_err = 0; 193 193 194 194 /* 195 - * We're SLAB_TYPESAFE_BY_RCU so initialize f_count last. While 195 + * We're SLAB_TYPESAFE_BY_RCU so initialize f_ref last. While 196 196 * fget-rcu pattern users need to be able to handle spurious 197 197 * refcount bumps we should reinitialize the reused file first. 198 198 */
-4
fs/fuse/ioctl.c
··· 536 536 cleanup: 537 537 fuse_priv_ioctl_cleanup(inode, ff); 538 538 539 - if (err == -ENOTTY) 540 - err = -EOPNOTSUPP; 541 539 return err; 542 540 } 543 541 ··· 572 574 cleanup: 573 575 fuse_priv_ioctl_cleanup(inode, ff); 574 576 575 - if (err == -ENOTTY) 576 - err = -EOPNOTSUPP; 577 577 return err; 578 578 }
+9 -4
fs/jbd2/transaction.c
··· 1659 1659 int drop_reserve = 0; 1660 1660 int err = 0; 1661 1661 int was_modified = 0; 1662 + int wait_for_writeback = 0; 1662 1663 1663 1664 if (is_handle_aborted(handle)) 1664 1665 return -EROFS; ··· 1783 1782 } 1784 1783 1785 1784 /* 1786 - * The buffer is still not written to disk, we should 1787 - * attach this buffer to current transaction so that the 1788 - * buffer can be checkpointed only after the current 1789 - * transaction commits. 1785 + * The buffer has not yet been written to disk. We should 1786 + * either clear the buffer or ensure that the ongoing I/O 1787 + * is completed, and attach this buffer to current 1788 + * transaction so that the buffer can be checkpointed only 1789 + * after the current transaction commits. 1790 1790 */ 1791 1791 clear_buffer_dirty(bh); 1792 + wait_for_writeback = 1; 1792 1793 __jbd2_journal_file_buffer(jh, transaction, BJ_Forget); 1793 1794 spin_unlock(&journal->j_list_lock); 1794 1795 } 1795 1796 drop: 1796 1797 __brelse(bh); 1797 1798 spin_unlock(&jh->b_state_lock); 1799 + if (wait_for_writeback) 1800 + wait_on_buffer(bh); 1798 1801 jbd2_journal_put_journal_head(jh); 1799 1802 if (drop_reserve) { 1800 1803 /* no need to reserve log space for this block -bzzz */
+21 -14
fs/nfs/flexfilelayout/flexfilelayout.c
··· 270 270 mirror->layout = NULL; 271 271 } 272 272 273 - static struct nfs4_ff_layout_mirror *ff_layout_alloc_mirror(gfp_t gfp_flags) 273 + static struct nfs4_ff_layout_mirror *ff_layout_alloc_mirror(u32 dss_count, 274 + gfp_t gfp_flags) 274 275 { 275 276 struct nfs4_ff_layout_mirror *mirror; 276 - u32 dss_id; 277 277 278 278 mirror = kzalloc(sizeof(*mirror), gfp_flags); 279 - if (mirror != NULL) { 280 - spin_lock_init(&mirror->lock); 281 - refcount_set(&mirror->ref, 1); 282 - INIT_LIST_HEAD(&mirror->mirrors); 283 - for (dss_id = 0; dss_id < mirror->dss_count; dss_id++) 284 - nfs_localio_file_init(&mirror->dss[dss_id].nfl); 279 + if (mirror == NULL) 280 + return NULL; 281 + 282 + spin_lock_init(&mirror->lock); 283 + refcount_set(&mirror->ref, 1); 284 + INIT_LIST_HEAD(&mirror->mirrors); 285 + 286 + mirror->dss_count = dss_count; 287 + mirror->dss = 288 + kcalloc(dss_count, sizeof(struct nfs4_ff_layout_ds_stripe), 289 + gfp_flags); 290 + if (mirror->dss == NULL) { 291 + kfree(mirror); 292 + return NULL; 285 293 } 294 + 295 + for (u32 dss_id = 0; dss_id < mirror->dss_count; dss_id++) 296 + nfs_localio_file_init(&mirror->dss[dss_id].nfl); 297 + 286 298 return mirror; 287 299 } 288 300 ··· 519 507 if (dss_count > 1 && stripe_unit == 0) 520 508 goto out_err_free; 521 509 522 - fls->mirror_array[i] = ff_layout_alloc_mirror(gfp_flags); 510 + fls->mirror_array[i] = ff_layout_alloc_mirror(dss_count, gfp_flags); 523 511 if (fls->mirror_array[i] == NULL) { 524 512 rc = -ENOMEM; 525 513 goto out_err_free; 526 514 } 527 - 528 - fls->mirror_array[i]->dss_count = dss_count; 529 - fls->mirror_array[i]->dss = 530 - kcalloc(dss_count, sizeof(struct nfs4_ff_layout_ds_stripe), 531 - gfp_flags); 532 515 533 516 for (dss_id = 0; dss_id < dss_count; dss_id++) { 534 517 dss_info = &fls->mirror_array[i]->dss[dss_id];
+1
fs/nfs/nfs4client.c
··· 222 222 clp->cl_state = 1 << NFS4CLNT_LEASE_EXPIRED; 223 223 clp->cl_mvops = nfs_v4_minor_ops[cl_init->minorversion]; 224 224 clp->cl_mig_gen = 1; 225 + clp->cl_last_renewal = jiffies; 225 226 #if IS_ENABLED(CONFIG_NFS_V4_1) 226 227 init_waitqueue_head(&clp->cl_lock_waitq); 227 228 #endif
+13
fs/nfs/nfs4proc.c
··· 3636 3636 } lr; 3637 3637 struct nfs_fattr fattr; 3638 3638 unsigned long timestamp; 3639 + unsigned short retrans; 3639 3640 }; 3640 3641 3641 3642 static void nfs4_free_closedata(void *data) ··· 3665 3664 .state = state, 3666 3665 .inode = calldata->inode, 3667 3666 .stateid = &calldata->arg.stateid, 3667 + .retrans = calldata->retrans, 3668 3668 }; 3669 3669 3670 3670 if (!nfs4_sequence_done(task, &calldata->res.seq_res)) ··· 3713 3711 default: 3714 3712 task->tk_status = nfs4_async_handle_exception(task, 3715 3713 server, task->tk_status, &exception); 3714 + calldata->retrans = exception.retrans; 3716 3715 if (exception.retry) 3717 3716 goto out_restart; 3718 3717 } ··· 5596 5593 .inode = hdr->inode, 5597 5594 .state = hdr->args.context->state, 5598 5595 .stateid = &hdr->args.stateid, 5596 + .retrans = hdr->retrans, 5599 5597 }; 5600 5598 task->tk_status = nfs4_async_handle_exception(task, 5601 5599 server, task->tk_status, &exception); 5600 + hdr->retrans = exception.retrans; 5602 5601 if (exception.retry) { 5603 5602 rpc_restart_call_prepare(task); 5604 5603 return -EAGAIN; ··· 5714 5709 .inode = hdr->inode, 5715 5710 .state = hdr->args.context->state, 5716 5711 .stateid = &hdr->args.stateid, 5712 + .retrans = hdr->retrans, 5717 5713 }; 5718 5714 task->tk_status = nfs4_async_handle_exception(task, 5719 5715 NFS_SERVER(inode), task->tk_status, 5720 5716 &exception); 5717 + hdr->retrans = exception.retrans; 5721 5718 if (exception.retry) { 5722 5719 rpc_restart_call_prepare(task); 5723 5720 return -EAGAIN; ··· 6733 6726 struct nfs_fh fh; 6734 6727 nfs4_stateid stateid; 6735 6728 unsigned long timestamp; 6729 + unsigned short retrans; 6736 6730 struct { 6737 6731 struct nfs4_layoutreturn_args arg; 6738 6732 struct nfs4_layoutreturn_res res; ··· 6754 6746 .inode = data->inode, 6755 6747 .stateid = &data->stateid, 6756 6748 .task_is_privileged = data->args.seq_args.sa_privileged, 6749 + .retrans = data->retrans, 6757 6750 }; 6758 6751 6759 6752 if 
(!nfs4_sequence_done(task, &data->res.seq_res)) ··· 6826 6817 task->tk_status = nfs4_async_handle_exception(task, 6827 6818 data->res.server, task->tk_status, 6828 6819 &exception); 6820 + data->retrans = exception.retrans; 6829 6821 if (exception.retry) 6830 6822 goto out_restart; 6831 6823 } ··· 7103 7093 struct file_lock fl; 7104 7094 struct nfs_server *server; 7105 7095 unsigned long timestamp; 7096 + unsigned short retrans; 7106 7097 }; 7107 7098 7108 7099 static struct nfs4_unlockdata *nfs4_alloc_unlockdata(struct file_lock *fl, ··· 7158 7147 struct nfs4_exception exception = { 7159 7148 .inode = calldata->lsp->ls_state->inode, 7160 7149 .stateid = &calldata->arg.stateid, 7150 + .retrans = calldata->retrans, 7161 7151 }; 7162 7152 7163 7153 if (!nfs4_sequence_done(task, &calldata->res.seq_res)) ··· 7192 7180 task->tk_status = nfs4_async_handle_exception(task, 7193 7181 calldata->server, task->tk_status, 7194 7182 &exception); 7183 + calldata->retrans = exception.retrans; 7195 7184 if (exception.retry) 7196 7185 rpc_restart_call_prepare(task); 7197 7186 }
+2 -1
fs/nfs/write.c
··· 1535 1535 /* Deal with the suid/sgid bit corner case */ 1536 1536 if (nfs_should_remove_suid(inode)) { 1537 1537 spin_lock(&inode->i_lock); 1538 - nfs_set_cache_invalid(inode, NFS_INO_INVALID_MODE); 1538 + nfs_set_cache_invalid(inode, NFS_INO_INVALID_MODE 1539 + | NFS_INO_REVAL_FORCED); 1539 1540 spin_unlock(&inode->i_lock); 1540 1541 } 1541 1542 return 0;
+8
fs/nfsd/flexfilelayout.c
··· 125 125 return 0; 126 126 } 127 127 128 + static __be32 129 + nfsd4_ff_proc_layoutcommit(struct inode *inode, struct svc_rqst *rqstp, 130 + struct nfsd4_layoutcommit *lcp) 131 + { 132 + return nfs_ok; 133 + } 134 + 128 135 const struct nfsd4_layout_ops ff_layout_ops = { 129 136 .notify_types = 130 137 NOTIFY_DEVICEID4_DELETE | NOTIFY_DEVICEID4_CHANGE, ··· 140 133 .encode_getdeviceinfo = nfsd4_ff_encode_getdeviceinfo, 141 134 .proc_layoutget = nfsd4_ff_proc_layoutget, 142 135 .encode_layoutget = nfsd4_ff_encode_layoutget, 136 + .proc_layoutcommit = nfsd4_ff_proc_layoutcommit, 143 137 };
+3 -1
fs/nsfs.c
··· 490 490 491 491 VFS_WARN_ON_ONCE(ns->ns_id != fid->ns_id); 492 492 VFS_WARN_ON_ONCE(ns->ns_type != fid->ns_type); 493 - VFS_WARN_ON_ONCE(ns->inum != fid->ns_inum); 493 + 494 + if (ns->inum != fid->ns_inum) 495 + return NULL; 494 496 495 497 if (!__ns_ref_get(ns)) 496 498 return NULL;
+1 -1
fs/overlayfs/copy_up.c
··· 178 178 err = ovl_real_fileattr_get(old, &oldfa); 179 179 if (err) { 180 180 /* Ntfs-3g returns -EINVAL for "no fileattr support" */ 181 - if (err == -EOPNOTSUPP || err == -EINVAL) 181 + if (err == -ENOTTY || err == -EINVAL) 182 182 return 0; 183 183 pr_warn("failed to retrieve lower fileattr (%pd2, err=%i)\n", 184 184 old->dentry, err);
-5
fs/overlayfs/file.c
··· 369 369 if (!ovl_should_sync(OVL_FS(inode->i_sb))) 370 370 ifl &= ~(IOCB_DSYNC | IOCB_SYNC); 371 371 372 - /* 373 - * Overlayfs doesn't support deferred completions, don't copy 374 - * this property in case it is set by the issuer. 375 - */ 376 - ifl &= ~IOCB_DIO_CALLER_COMP; 377 372 ret = backing_file_write_iter(realfile, iter, iocb, ifl, &ctx); 378 373 379 374 out_unlock:
+4 -1
fs/overlayfs/inode.c
··· 720 720 if (err) 721 721 return err; 722 722 723 - return vfs_fileattr_get(realpath->dentry, fa); 723 + err = vfs_fileattr_get(realpath->dentry, fa); 724 + if (err == -ENOIOCTLCMD) 725 + err = -ENOTTY; 726 + return err; 724 727 } 725 728 726 729 int ovl_fileattr_get(struct dentry *dentry, struct file_kattr *fa)
+3 -4
fs/smb/client/Kconfig
··· 5 5 select NLS 6 6 select NLS_UCS2_UTILS 7 7 select CRYPTO 8 - select CRYPTO_MD5 9 - select CRYPTO_SHA256 10 - select CRYPTO_SHA512 11 8 select CRYPTO_CMAC 12 - select CRYPTO_HMAC 13 9 select CRYPTO_AEAD2 14 10 select CRYPTO_CCM 15 11 select CRYPTO_GCM 16 12 select CRYPTO_ECB 17 13 select CRYPTO_AES 18 14 select CRYPTO_LIB_ARC4 15 + select CRYPTO_LIB_MD5 16 + select CRYPTO_LIB_SHA256 17 + select CRYPTO_LIB_SHA512 19 18 select KEYS 20 19 select DNS_RESOLVER 21 20 select ASN1
+2 -3
fs/smb/client/cifsacl.c
··· 339 339 sid_to_id(struct cifs_sb_info *cifs_sb, struct smb_sid *psid, 340 340 struct cifs_fattr *fattr, uint sidtype) 341 341 { 342 - int rc = 0; 343 342 struct key *sidkey; 344 343 char *sidstr; 345 344 const struct cred *saved_cred; ··· 445 446 * fails then we just fall back to using the ctx->linux_uid/linux_gid. 446 447 */ 447 448 got_valid_id: 448 - rc = 0; 449 449 if (sidtype == SIDOWNER) 450 450 fattr->cf_uid = fuid; 451 451 else 452 452 fattr->cf_gid = fgid; 453 - return rc; 453 + 454 + return 0; 454 455 } 455 456 456 457 int
+74 -127
fs/smb/client/cifsencrypt.c
··· 24 24 #include <linux/iov_iter.h> 25 25 #include <crypto/aead.h> 26 26 #include <crypto/arc4.h> 27 + #include <crypto/md5.h> 28 + #include <crypto/sha2.h> 27 29 28 - static size_t cifs_shash_step(void *iter_base, size_t progress, size_t len, 29 - void *priv, void *priv2) 30 + static int cifs_sig_update(struct cifs_calc_sig_ctx *ctx, 31 + const u8 *data, size_t len) 30 32 { 31 - struct shash_desc *shash = priv; 33 + if (ctx->md5) { 34 + md5_update(ctx->md5, data, len); 35 + return 0; 36 + } 37 + if (ctx->hmac) { 38 + hmac_sha256_update(ctx->hmac, data, len); 39 + return 0; 40 + } 41 + return crypto_shash_update(ctx->shash, data, len); 42 + } 43 + 44 + static int cifs_sig_final(struct cifs_calc_sig_ctx *ctx, u8 *out) 45 + { 46 + if (ctx->md5) { 47 + md5_final(ctx->md5, out); 48 + return 0; 49 + } 50 + if (ctx->hmac) { 51 + hmac_sha256_final(ctx->hmac, out); 52 + return 0; 53 + } 54 + return crypto_shash_final(ctx->shash, out); 55 + } 56 + 57 + static size_t cifs_sig_step(void *iter_base, size_t progress, size_t len, 58 + void *priv, void *priv2) 59 + { 60 + struct cifs_calc_sig_ctx *ctx = priv; 32 61 int ret, *pret = priv2; 33 62 34 - ret = crypto_shash_update(shash, iter_base, len); 63 + ret = cifs_sig_update(ctx, iter_base, len); 35 64 if (ret < 0) { 36 65 *pret = ret; 37 66 return len; ··· 71 42 /* 72 43 * Pass the data from an iterator into a hash. 
73 44 */ 74 - static int cifs_shash_iter(const struct iov_iter *iter, size_t maxsize, 75 - struct shash_desc *shash) 45 + static int cifs_sig_iter(const struct iov_iter *iter, size_t maxsize, 46 + struct cifs_calc_sig_ctx *ctx) 76 47 { 77 48 struct iov_iter tmp_iter = *iter; 78 49 int err = -EIO; 79 50 80 - if (iterate_and_advance_kernel(&tmp_iter, maxsize, shash, &err, 81 - cifs_shash_step) != maxsize) 51 + if (iterate_and_advance_kernel(&tmp_iter, maxsize, ctx, &err, 52 + cifs_sig_step) != maxsize) 82 53 return err; 83 54 return 0; 84 55 } 85 56 86 - int __cifs_calc_signature(struct smb_rqst *rqst, 87 - struct TCP_Server_Info *server, char *signature, 88 - struct shash_desc *shash) 57 + int __cifs_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server, 58 + char *signature, struct cifs_calc_sig_ctx *ctx) 89 59 { 90 60 int i; 91 61 ssize_t rc; ··· 110 82 return -EIO; 111 83 } 112 84 113 - rc = crypto_shash_update(shash, 114 - iov[i].iov_base, iov[i].iov_len); 85 + rc = cifs_sig_update(ctx, iov[i].iov_base, iov[i].iov_len); 115 86 if (rc) { 116 87 cifs_dbg(VFS, "%s: Could not update with payload\n", 117 88 __func__); ··· 118 91 } 119 92 } 120 93 121 - rc = cifs_shash_iter(&rqst->rq_iter, iov_iter_count(&rqst->rq_iter), shash); 94 + rc = cifs_sig_iter(&rqst->rq_iter, iov_iter_count(&rqst->rq_iter), ctx); 122 95 if (rc < 0) 123 96 return rc; 124 97 125 - rc = crypto_shash_final(shash, signature); 98 + rc = cifs_sig_final(ctx, signature); 126 99 if (rc) 127 100 cifs_dbg(VFS, "%s: Could not generate hash\n", __func__); 128 101 ··· 139 112 static int cifs_calc_signature(struct smb_rqst *rqst, 140 113 struct TCP_Server_Info *server, char *signature) 141 114 { 142 - int rc; 115 + struct md5_ctx ctx; 143 116 144 117 if (!rqst->rq_iov || !signature || !server) 145 118 return -EINVAL; 146 - 147 - rc = cifs_alloc_hash("md5", &server->secmech.md5); 148 - if (rc) 149 - return -1; 150 - 151 - rc = crypto_shash_init(server->secmech.md5); 152 - if (rc) { 153 - 
cifs_dbg(VFS, "%s: Could not init md5\n", __func__); 154 - return rc; 119 + if (fips_enabled) { 120 + cifs_dbg(VFS, 121 + "MD5 signature support is disabled due to FIPS\n"); 122 + return -EOPNOTSUPP; 155 123 } 156 124 157 - rc = crypto_shash_update(server->secmech.md5, 158 - server->session_key.response, server->session_key.len); 159 - if (rc) { 160 - cifs_dbg(VFS, "%s: Could not update with response\n", __func__); 161 - return rc; 162 - } 125 + md5_init(&ctx); 126 + md5_update(&ctx, server->session_key.response, server->session_key.len); 163 127 164 - return __cifs_calc_signature(rqst, server, signature, server->secmech.md5); 128 + return __cifs_calc_signature( 129 + rqst, server, signature, 130 + &(struct cifs_calc_sig_ctx){ .md5 = &ctx }); 165 131 } 166 132 167 133 /* must be called with server->srv_mutex held */ ··· 425 405 } 426 406 427 407 static int calc_ntlmv2_hash(struct cifs_ses *ses, char *ntlmv2_hash, 428 - const struct nls_table *nls_cp, struct shash_desc *hmacmd5) 408 + const struct nls_table *nls_cp) 429 409 { 430 - int rc = 0; 431 410 int len; 432 411 char nt_hash[CIFS_NTHASH_SIZE]; 412 + struct hmac_md5_ctx hmac_ctx; 433 413 __le16 *user; 434 414 wchar_t *domain; 435 415 wchar_t *server; ··· 437 417 /* calculate md4 hash of password */ 438 418 E_md4hash(ses->password, nt_hash, nls_cp); 439 419 440 - rc = crypto_shash_setkey(hmacmd5->tfm, nt_hash, CIFS_NTHASH_SIZE); 441 - if (rc) { 442 - cifs_dbg(VFS, "%s: Could not set NT hash as a key, rc=%d\n", __func__, rc); 443 - return rc; 444 - } 445 - 446 - rc = crypto_shash_init(hmacmd5); 447 - if (rc) { 448 - cifs_dbg(VFS, "%s: Could not init HMAC-MD5, rc=%d\n", __func__, rc); 449 - return rc; 450 - } 420 + hmac_md5_init_usingrawkey(&hmac_ctx, nt_hash, CIFS_NTHASH_SIZE); 451 421 452 422 /* convert ses->user_name to unicode */ 453 423 len = ses->user_name ? 
strlen(ses->user_name) : 0; ··· 452 442 *(u16 *)user = 0; 453 443 } 454 444 455 - rc = crypto_shash_update(hmacmd5, (char *)user, 2 * len); 445 + hmac_md5_update(&hmac_ctx, (const u8 *)user, 2 * len); 456 446 kfree(user); 457 - if (rc) { 458 - cifs_dbg(VFS, "%s: Could not update with user, rc=%d\n", __func__, rc); 459 - return rc; 460 - } 461 447 462 448 /* convert ses->domainName to unicode and uppercase */ 463 449 if (ses->domainName) { ··· 465 459 466 460 len = cifs_strtoUTF16((__le16 *)domain, ses->domainName, len, 467 461 nls_cp); 468 - rc = crypto_shash_update(hmacmd5, (char *)domain, 2 * len); 462 + hmac_md5_update(&hmac_ctx, (const u8 *)domain, 2 * len); 469 463 kfree(domain); 470 - if (rc) { 471 - cifs_dbg(VFS, "%s: Could not update with domain, rc=%d\n", __func__, rc); 472 - return rc; 473 - } 474 464 } else { 475 465 /* We use ses->ip_addr if no domain name available */ 476 466 len = strlen(ses->ip_addr); ··· 476 474 return -ENOMEM; 477 475 478 476 len = cifs_strtoUTF16((__le16 *)server, ses->ip_addr, len, nls_cp); 479 - rc = crypto_shash_update(hmacmd5, (char *)server, 2 * len); 477 + hmac_md5_update(&hmac_ctx, (const u8 *)server, 2 * len); 480 478 kfree(server); 481 - if (rc) { 482 - cifs_dbg(VFS, "%s: Could not update with server, rc=%d\n", __func__, rc); 483 - return rc; 484 - } 485 479 } 486 480 487 - rc = crypto_shash_final(hmacmd5, ntlmv2_hash); 488 - if (rc) 489 - cifs_dbg(VFS, "%s: Could not generate MD5 hash, rc=%d\n", __func__, rc); 490 - 491 - return rc; 481 + hmac_md5_final(&hmac_ctx, ntlmv2_hash); 482 + return 0; 492 483 } 493 484 494 - static int 495 - CalcNTLMv2_response(const struct cifs_ses *ses, char *ntlmv2_hash, struct shash_desc *hmacmd5) 485 + static void CalcNTLMv2_response(const struct cifs_ses *ses, char *ntlmv2_hash) 496 486 { 497 - int rc; 498 487 struct ntlmv2_resp *ntlmv2 = (struct ntlmv2_resp *) 499 488 (ses->auth_key.response + CIFS_SESS_KEY_SIZE); 500 489 unsigned int hash_len; ··· 494 501 hash_len = ses->auth_key.len - 
(CIFS_SESS_KEY_SIZE + 495 502 offsetof(struct ntlmv2_resp, challenge.key[0])); 496 503 497 - rc = crypto_shash_setkey(hmacmd5->tfm, ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE); 498 - if (rc) { 499 - cifs_dbg(VFS, "%s: Could not set NTLMv2 hash as a key, rc=%d\n", __func__, rc); 500 - return rc; 501 - } 502 - 503 - rc = crypto_shash_init(hmacmd5); 504 - if (rc) { 505 - cifs_dbg(VFS, "%s: Could not init HMAC-MD5, rc=%d\n", __func__, rc); 506 - return rc; 507 - } 508 - 509 504 if (ses->server->negflavor == CIFS_NEGFLAVOR_EXTENDED) 510 505 memcpy(ntlmv2->challenge.key, ses->ntlmssp->cryptkey, CIFS_SERVER_CHALLENGE_SIZE); 511 506 else 512 507 memcpy(ntlmv2->challenge.key, ses->server->cryptkey, CIFS_SERVER_CHALLENGE_SIZE); 513 508 514 - rc = crypto_shash_update(hmacmd5, ntlmv2->challenge.key, hash_len); 515 - if (rc) { 516 - cifs_dbg(VFS, "%s: Could not update with response, rc=%d\n", __func__, rc); 517 - return rc; 518 - } 519 - 520 - /* Note that the MD5 digest over writes anon.challenge_key.key */ 521 - rc = crypto_shash_final(hmacmd5, ntlmv2->ntlmv2_hash); 522 - if (rc) 523 - cifs_dbg(VFS, "%s: Could not generate MD5 hash, rc=%d\n", __func__, rc); 524 - 525 - return rc; 509 + /* Note that the HMAC-MD5 value overwrites ntlmv2->challenge.key */ 510 + hmac_md5_usingrawkey(ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE, 511 + ntlmv2->challenge.key, hash_len, 512 + ntlmv2->ntlmv2_hash); 526 513 } 527 514 528 515 /* ··· 559 586 int 560 587 setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp) 561 588 { 562 - struct shash_desc *hmacmd5 = NULL; 563 589 unsigned char *tiblob = NULL; /* target info blob */ 564 590 struct ntlmv2_resp *ntlmv2; 565 591 char ntlmv2_hash[16]; ··· 629 657 ntlmv2->client_chal = cc; 630 658 ntlmv2->reserved2 = 0; 631 659 632 - rc = cifs_alloc_hash("hmac(md5)", &hmacmd5); 633 - if (rc) { 634 - cifs_dbg(VFS, "Could not allocate HMAC-MD5, rc=%d\n", rc); 660 + if (fips_enabled) { 661 + cifs_dbg(VFS, "NTLMv2 support is disabled due to FIPS\n"); 662 + rc 
= -EOPNOTSUPP; 635 663 goto unlock; 636 664 } 637 665 638 666 /* calculate ntlmv2_hash */ 639 - rc = calc_ntlmv2_hash(ses, ntlmv2_hash, nls_cp, hmacmd5); 667 + rc = calc_ntlmv2_hash(ses, ntlmv2_hash, nls_cp); 640 668 if (rc) { 641 669 cifs_dbg(VFS, "Could not get NTLMv2 hash, rc=%d\n", rc); 642 670 goto unlock; 643 671 } 644 672 645 673 /* calculate first part of the client response (CR1) */ 646 - rc = CalcNTLMv2_response(ses, ntlmv2_hash, hmacmd5); 647 - if (rc) { 648 - cifs_dbg(VFS, "Could not calculate CR1, rc=%d\n", rc); 649 - goto unlock; 650 - } 674 + CalcNTLMv2_response(ses, ntlmv2_hash); 651 675 652 676 /* now calculate the session key for NTLMv2 */ 653 - rc = crypto_shash_setkey(hmacmd5->tfm, ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE); 654 - if (rc) { 655 - cifs_dbg(VFS, "%s: Could not set NTLMv2 hash as a key, rc=%d\n", __func__, rc); 656 - goto unlock; 657 - } 658 - 659 - rc = crypto_shash_init(hmacmd5); 660 - if (rc) { 661 - cifs_dbg(VFS, "%s: Could not init HMAC-MD5, rc=%d\n", __func__, rc); 662 - goto unlock; 663 - } 664 - 665 - rc = crypto_shash_update(hmacmd5, ntlmv2->ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE); 666 - if (rc) { 667 - cifs_dbg(VFS, "%s: Could not update with response, rc=%d\n", __func__, rc); 668 - goto unlock; 669 - } 670 - 671 - rc = crypto_shash_final(hmacmd5, ses->auth_key.response); 672 - if (rc) 673 - cifs_dbg(VFS, "%s: Could not generate MD5 hash, rc=%d\n", __func__, rc); 677 + hmac_md5_usingrawkey(ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE, 678 + ntlmv2->ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE, 679 + ses->auth_key.response); 680 + rc = 0; 674 681 unlock: 675 682 cifs_server_unlock(ses->server); 676 - cifs_free_hash(&hmacmd5); 677 683 setup_ntlmv2_rsp_ret: 678 684 kfree_sensitive(tiblob); 679 685 ··· 693 743 cifs_crypto_secmech_release(struct TCP_Server_Info *server) 694 744 { 695 745 cifs_free_hash(&server->secmech.aes_cmac); 696 - cifs_free_hash(&server->secmech.hmacsha256); 697 - cifs_free_hash(&server->secmech.md5); 698 - 
cifs_free_hash(&server->secmech.sha512); 699 746 700 747 if (server->secmech.enc) { 701 748 crypto_free_aead(server->secmech.enc);
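For readers following the cifsencrypt.c conversion: CalcNTLMv2_response and the session-key step now map onto plain library calls. A rough Python sketch of that dataflow (the function name and test inputs below are mine; real inputs come from the NTLMv2 challenge blob):

```python
import hmac
import hashlib

def ntlmv2_response_and_session_key(ntlmv2_hash: bytes,
                                    server_challenge: bytes,
                                    blob: bytes):
    """Sketch of CalcNTLMv2_response plus the session-key derivation.

    ntlmv2_hash keys an HMAC-MD5 over the server challenge and the
    client blob; the resulting proof then keys a second HMAC-MD5 whose
    output becomes the session key (auth_key.response in the kernel).
    """
    proof = hmac.new(ntlmv2_hash, server_challenge + blob,
                     hashlib.md5).digest()
    session_key = hmac.new(ntlmv2_hash, proof, hashlib.md5).digest()
    return proof, session_key
```

Note the patch also short-circuits both paths with -EOPNOTSUPP when fips_enabled is set, since MD5-based NTLMv2 is not FIPS-approved.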
-4
fs/smb/client/cifsfs.c
··· 2139 2139 "also older servers complying with the SNIA CIFS Specification)"); 2140 2140 MODULE_VERSION(CIFS_VERSION); 2141 2141 MODULE_SOFTDEP("ecb"); 2142 - MODULE_SOFTDEP("hmac"); 2143 - MODULE_SOFTDEP("md5"); 2144 2142 MODULE_SOFTDEP("nls"); 2145 2143 MODULE_SOFTDEP("aes"); 2146 2144 MODULE_SOFTDEP("cmac"); 2147 - MODULE_SOFTDEP("sha256"); 2148 - MODULE_SOFTDEP("sha512"); 2149 2145 MODULE_SOFTDEP("aead2"); 2150 2146 MODULE_SOFTDEP("ccm"); 2151 2147 MODULE_SOFTDEP("gcm");
+1 -21
fs/smb/client/cifsglob.h
··· 24 24 #include "cifsacl.h" 25 25 #include <crypto/internal/hash.h> 26 26 #include <uapi/linux/cifs/cifs_mount.h> 27 + #include "../common/cifsglob.h" 27 28 #include "../common/smb2pdu.h" 28 29 #include "smb2pdu.h" 29 30 #include <linux/filelock.h> ··· 222 221 223 222 /* crypto hashing related structure/fields, not specific to a sec mech */ 224 223 struct cifs_secmech { 225 - struct shash_desc *md5; /* md5 hash function, for CIFS/SMB1 signatures */ 226 - struct shash_desc *hmacsha256; /* hmac-sha256 hash function, for SMB2 signatures */ 227 - struct shash_desc *sha512; /* sha512 hash function, for SMB3.1.1 preauth hash */ 228 224 struct shash_desc *aes_cmac; /* block-cipher based MAC function, for SMB3 signatures */ 229 225 230 226 struct crypto_aead *enc; /* smb3 encryption AEAD TFM (AES-CCM and AES-GCM) */ ··· 700 702 return be32_to_cpu(*((__be32 *)buf)) & 0xffffff; 701 703 } 702 704 703 - static inline void 704 - inc_rfc1001_len(void *buf, int count) 705 - { 706 - be32_add_cpu((__be32 *)buf, count); 707 - } 708 - 709 705 struct TCP_Server_Info { 710 706 struct list_head tcp_ses_list; 711 707 struct list_head smb_ses_list; ··· 1012 1020 */ 1013 1021 #define CIFS_MAX_RFC1002_WSIZE ((1<<17) - 1 - sizeof(WRITE_REQ) + 4) 1014 1022 #define CIFS_MAX_RFC1002_RSIZE ((1<<17) - 1 - sizeof(READ_RSP) + 4) 1015 - 1016 - #define CIFS_DEFAULT_IOSIZE (1024 * 1024) 1017 1023 1018 1024 /* 1019 1025 * Windows only supports a max of 60kb reads and 65535 byte writes. 
Default to ··· 2138 2148 extern mempool_t cifs_io_subrequest_pool; 2139 2149 2140 2150 /* Operations for different SMB versions */ 2141 - #define SMB1_VERSION_STRING "1.0" 2142 - #define SMB20_VERSION_STRING "2.0" 2143 2151 #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY 2144 2152 extern struct smb_version_operations smb1_operations; 2145 2153 extern struct smb_version_values smb1_values; 2146 2154 extern struct smb_version_operations smb20_operations; 2147 2155 extern struct smb_version_values smb20_values; 2148 2156 #endif /* CIFS_ALLOW_INSECURE_LEGACY */ 2149 - #define SMB21_VERSION_STRING "2.1" 2150 2157 extern struct smb_version_operations smb21_operations; 2151 2158 extern struct smb_version_values smb21_values; 2152 - #define SMBDEFAULT_VERSION_STRING "default" 2153 2159 extern struct smb_version_values smbdefault_values; 2154 - #define SMB3ANY_VERSION_STRING "3" 2155 2160 extern struct smb_version_values smb3any_values; 2156 - #define SMB30_VERSION_STRING "3.0" 2157 2161 extern struct smb_version_operations smb30_operations; 2158 2162 extern struct smb_version_values smb30_values; 2159 - #define SMB302_VERSION_STRING "3.02" 2160 - #define ALT_SMB302_VERSION_STRING "3.0.2" 2161 2163 /*extern struct smb_version_operations smb302_operations;*/ /* not needed yet */ 2162 2164 extern struct smb_version_values smb302_values; 2163 - #define SMB311_VERSION_STRING "3.1.1" 2164 - #define ALT_SMB311_VERSION_STRING "3.11" 2165 2165 extern struct smb_version_operations smb311_operations; 2166 2166 extern struct smb_version_values smb311_values; 2167 2167
+7 -3
fs/smb/client/cifsproto.h
··· 632 632 struct cifs_sb_info *cifs_sb, 633 633 const unsigned char *path, char *pbuf, 634 634 unsigned int *pbytes_written); 635 - int __cifs_calc_signature(struct smb_rqst *rqst, 636 - struct TCP_Server_Info *server, char *signature, 637 - struct shash_desc *shash); 635 + struct cifs_calc_sig_ctx { 636 + struct md5_ctx *md5; 637 + struct hmac_sha256_ctx *hmac; 638 + struct shash_desc *shash; 639 + }; 640 + int __cifs_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server, 641 + char *signature, struct cifs_calc_sig_ctx *ctx); 638 642 enum securityEnum cifs_select_sectype(struct TCP_Server_Info *, 639 643 enum securityEnum); 640 644
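The new struct cifs_calc_sig_ctx lets a single signing helper accept whichever context the caller prepared (library MD5, library HMAC-SHA256, or a legacy shash). The shape of __cifs_calc_signature can be sketched in Python, where hashlib and hmac objects conveniently share one update()/digest() interface:

```python
import hashlib
import hmac

def calc_signature(iov, ctx):
    """Fold every buffer of a request into a caller-supplied hash
    context and return the digest. `ctx` is any object exposing
    update()/digest(); the caller's choice of context plays the role
    of cifs_calc_sig_ctx picking md5, hmac, or shash."""
    for buf in iov:
        ctx.update(buf)
    return ctx.digest()
```

This mirrors the design choice in the patch: the hash state moves onto the caller's stack and the shared helper stays agnostic about which mechanism signs the request.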
+4 -2
fs/smb/client/inode.c
··· 2431 2431 tcon = tlink_tcon(tlink); 2432 2432 server = tcon->ses->server; 2433 2433 2434 - if (!server->ops->rename) 2435 - return -ENOSYS; 2434 + if (!server->ops->rename) { 2435 + rc = -ENOSYS; 2436 + goto do_rename_exit; 2437 + } 2436 2438 2437 2439 /* try path-based rename first */ 2438 2440 rc = server->ops->rename(xid, tcon, from_dentry,
+3 -28
fs/smb/client/link.c
··· 5 5 * Author(s): Steve French (sfrench@us.ibm.com) 6 6 * 7 7 */ 8 + #include <crypto/md5.h> 8 9 #include <linux/fs.h> 9 10 #include <linux/stat.h> 10 11 #include <linux/slab.h> ··· 38 37 #define CIFS_MF_SYMLINK_MD5_ARGS(md5_hash) md5_hash 39 38 40 39 static int 41 - symlink_hash(unsigned int link_len, const char *link_str, u8 *md5_hash) 42 - { 43 - int rc; 44 - struct shash_desc *md5 = NULL; 45 - 46 - rc = cifs_alloc_hash("md5", &md5); 47 - if (rc) 48 - return rc; 49 - 50 - rc = crypto_shash_digest(md5, link_str, link_len, md5_hash); 51 - if (rc) 52 - cifs_dbg(VFS, "%s: Could not generate md5 hash\n", __func__); 53 - cifs_free_hash(&md5); 54 - return rc; 55 - } 56 - 57 - static int 58 40 parse_mf_symlink(const u8 *buf, unsigned int buf_len, unsigned int *_link_len, 59 41 char **_link_str) 60 42 { ··· 61 77 if (link_len > CIFS_MF_SYMLINK_LINK_MAXLEN) 62 78 return -EINVAL; 63 79 64 - rc = symlink_hash(link_len, link_str, md5_hash); 65 - if (rc) { 66 - cifs_dbg(FYI, "%s: MD5 hash failure: %d\n", __func__, rc); 67 - return rc; 68 - } 80 + md5(link_str, link_len, md5_hash); 69 81 70 82 scnprintf(md5_str2, sizeof(md5_str2), 71 83 CIFS_MF_SYMLINK_MD5_FORMAT, ··· 83 103 static int 84 104 format_mf_symlink(u8 *buf, unsigned int buf_len, const char *link_str) 85 105 { 86 - int rc; 87 106 unsigned int link_len; 88 107 unsigned int ofs; 89 108 u8 md5_hash[16]; ··· 95 116 if (link_len > CIFS_MF_SYMLINK_LINK_MAXLEN) 96 117 return -ENAMETOOLONG; 97 118 98 - rc = symlink_hash(link_len, link_str, md5_hash); 99 - if (rc) { 100 - cifs_dbg(FYI, "%s: MD5 hash failure: %d\n", __func__, rc); 101 - return rc; 102 - } 119 + md5(link_str, link_len, md5_hash); 103 120 104 121 scnprintf(buf, buf_len, 105 122 CIFS_MF_SYMLINK_LEN_FORMAT CIFS_MF_SYMLINK_MD5_FORMAT,
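With symlink_hash() gone, both the parse and format paths call the one-shot md5() library function. A loose Python sketch of how format_mf_symlink() uses it (the Minshall+French layout shown here is approximate, not byte-exact):

```python
import hashlib

def format_mf_symlink(link_str: bytes) -> bytes:
    """One-shot MD5 of the symlink target, as in the md5() library
    call that replaced symlink_hash(): no allocation, no error paths.
    Layout roughly follows the MF symlink convention of a magic line,
    a 4-digit decimal length, the MD5 as hex, then the target."""
    md5_hash = hashlib.md5(link_str).hexdigest().encode()
    return (b"XSym\n%04d\n" % len(link_str)) + md5_hash + b"\n" + link_str
```

parse_mf_symlink() does the inverse: recompute the MD5 over the stored target and compare it against the hex field to detect corruption.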
+17
fs/smb/client/misc.c
··· 916 916 char *data_end; 917 917 struct dfs_referral_level_3 *ref; 918 918 919 + if (rsp_size < sizeof(*rsp)) { 920 + cifs_dbg(VFS | ONCE, 921 + "%s: header is malformed (size is %u, must be %zu)\n", 922 + __func__, rsp_size, sizeof(*rsp)); 923 + rc = -EINVAL; 924 + goto parse_DFS_referrals_exit; 925 + } 926 + 919 927 *num_of_nodes = le16_to_cpu(rsp->NumberOfReferrals); 920 928 921 929 if (*num_of_nodes < 1) { 922 930 cifs_dbg(VFS | ONCE, "%s: [path=%s] num_referrals must be at least > 0, but we got %d\n", 923 931 __func__, searchName, *num_of_nodes); 924 932 rc = -ENOENT; 933 + goto parse_DFS_referrals_exit; 934 + } 935 + 936 + if (sizeof(*rsp) + *num_of_nodes * sizeof(REFERRAL3) > rsp_size) { 937 + cifs_dbg(VFS | ONCE, 938 + "%s: malformed buffer (size is %u, must be at least %zu)\n", 939 + __func__, rsp_size, 940 + sizeof(*rsp) + *num_of_nodes * sizeof(REFERRAL3)); 941 + rc = -EINVAL; 925 942 goto parse_DFS_referrals_exit; 926 943 } 927 944
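The two added checks in misc.c bound-check the referral response before dereferencing NumberOfReferrals fixed-size records. Sketched in Python with illustrative sizes (HDR_FMT and REFERRAL3_SIZE are stand-ins, not the on-wire SMB definitions):

```python
import struct

HDR_FMT = "<HH"      # hypothetical header: PathConsumed, NumberOfReferrals
REFERRAL3_SIZE = 34  # stand-in for sizeof(REFERRAL3)

def validate_referral_buf(buf: bytes) -> int:
    """Reject a buffer shorter than the fixed header, then reject one
    too short to hold all advertised referral records; on success
    return the record count."""
    hdr_size = struct.calcsize(HDR_FMT)
    if len(buf) < hdr_size:
        return -22                            # -EINVAL: malformed header
    _, num = struct.unpack_from(HDR_FMT, buf)
    if num < 1:
        return -2                             # -ENOENT: no referrals
    if hdr_size + num * REFERRAL3_SIZE > len(buf):
        return -22                            # -EINVAL: truncated records
    return num
```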
+1 -1
fs/smb/client/sess.c
··· 584 584 * to sign packets before we generate the channel signing key 585 585 * (we sign with the session key) 586 586 */ 587 - rc = smb311_crypto_shash_allocate(chan->server); 587 + rc = smb3_crypto_shash_allocate(chan->server); 588 588 if (rc) { 589 589 cifs_dbg(VFS, "%s: crypto alloc failed\n", __func__); 590 590 mutex_unlock(&ses->session_mutex);
+12 -41
fs/smb/client/smb2misc.c
··· 7 7 * Pavel Shilovsky (pshilovsky@samba.org) 2012 8 8 * 9 9 */ 10 + #include <crypto/sha2.h> 10 11 #include <linux/ctype.h> 11 12 #include "cifsglob.h" 12 13 #include "cifsproto.h" ··· 889 888 * @iov: array containing the SMB request we will send to the server 890 889 * @nvec: number of array entries for the iov 891 890 */ 892 - int 891 + void 893 892 smb311_update_preauth_hash(struct cifs_ses *ses, struct TCP_Server_Info *server, 894 893 struct kvec *iov, int nvec) 895 894 { 896 - int i, rc; 895 + int i; 897 896 struct smb2_hdr *hdr; 898 - struct shash_desc *sha512 = NULL; 897 + struct sha512_ctx sha_ctx; 899 898 900 899 hdr = (struct smb2_hdr *)iov[0].iov_base; 901 900 /* neg prot are always taken */ ··· 908 907 * and we can test it. Preauth requires 3.1.1 for now. 909 908 */ 910 909 if (server->dialect != SMB311_PROT_ID) 911 - return 0; 910 + return; 912 911 913 912 if (hdr->Command != SMB2_SESSION_SETUP) 914 - return 0; 913 + return; 915 914 916 915 /* skip last sess setup response */ 917 916 if ((hdr->Flags & SMB2_FLAGS_SERVER_TO_REDIR) 918 917 && (hdr->Status == NT_STATUS_OK 919 918 || (hdr->Status != 920 919 cpu_to_le32(NT_STATUS_MORE_PROCESSING_REQUIRED)))) 921 - return 0; 920 + return; 922 921 923 922 ok: 924 - rc = smb311_crypto_shash_allocate(server); 925 - if (rc) 926 - return rc; 927 - 928 - sha512 = server->secmech.sha512; 929 - rc = crypto_shash_init(sha512); 930 - if (rc) { 931 - cifs_dbg(VFS, "%s: Could not init sha512 shash\n", __func__); 932 - return rc; 933 - } 934 - 935 - rc = crypto_shash_update(sha512, ses->preauth_sha_hash, 936 - SMB2_PREAUTH_HASH_SIZE); 937 - if (rc) { 938 - cifs_dbg(VFS, "%s: Could not update sha512 shash\n", __func__); 939 - return rc; 940 - } 941 - 942 - for (i = 0; i < nvec; i++) { 943 - rc = crypto_shash_update(sha512, iov[i].iov_base, iov[i].iov_len); 944 - if (rc) { 945 - cifs_dbg(VFS, "%s: Could not update sha512 shash\n", 946 - __func__); 947 - return rc; 948 - } 949 - } 950 - 951 - rc = 
crypto_shash_final(sha512, ses->preauth_sha_hash); 952 - if (rc) { 953 - cifs_dbg(VFS, "%s: Could not finalize sha512 shash\n", 954 - __func__); 955 - return rc; 956 - } 957 - 958 - return 0; 923 + sha512_init(&sha_ctx); 924 + sha512_update(&sha_ctx, ses->preauth_sha_hash, SMB2_PREAUTH_HASH_SIZE); 925 + for (i = 0; i < nvec; i++) 926 + sha512_update(&sha_ctx, iov[i].iov_base, iov[i].iov_len); 927 + sha512_final(&sha_ctx, ses->preauth_sha_hash); 959 928 }
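The rewritten smb311_update_preauth_hash() can no longer fail, since the sha512 library calls return void; hence the signature change from int to void. The chaining it performs can be sketched in Python:

```python
import hashlib

def update_preauth_hash(prev_hash: bytes, iov) -> bytes:
    """SMB 3.1.1 preauth integrity chaining: each negotiate or
    session-setup message is absorbed as H = SHA-512(H_prev || msg),
    starting from 64 zero bytes at connection setup."""
    ctx = hashlib.sha512()
    ctx.update(prev_hash)
    for buf in iov:
        ctx.update(buf)
    return ctx.digest()
```

Usage: `h = update_preauth_hash(b"\x00" * 64, [negotiate_request_bytes])`, then feed each subsequent hash back in as prev_hash.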
+4 -4
fs/smb/client/smb2ops.c
··· 3212 3212 utf16_path = cifs_convert_path_to_utf16(path, cifs_sb); 3213 3213 if (!utf16_path) { 3214 3214 rc = -ENOMEM; 3215 - free_xid(xid); 3216 - return ERR_PTR(rc); 3215 + goto put_tlink; 3217 3216 } 3218 3217 3219 3218 oparms = (struct cifs_open_parms) { ··· 3244 3245 SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid); 3245 3246 } 3246 3247 3248 + put_tlink: 3247 3249 cifs_put_tlink(tlink); 3248 3250 free_xid(xid); 3249 3251 ··· 3285 3285 utf16_path = cifs_convert_path_to_utf16(path, cifs_sb); 3286 3286 if (!utf16_path) { 3287 3287 rc = -ENOMEM; 3288 - free_xid(xid); 3289 - return rc; 3288 + goto put_tlink; 3290 3289 } 3291 3290 3292 3291 oparms = (struct cifs_open_parms) { ··· 3306 3307 SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid); 3307 3308 } 3308 3309 3310 + put_tlink: 3309 3311 cifs_put_tlink(tlink); 3310 3312 free_xid(xid); 3311 3313 return rc;
+4 -4
fs/smb/client/smb2proto.h
··· 295 295 extern void smb2_copy_fs_info_to_kstatfs( 296 296 struct smb2_fs_full_size_info *pfs_inf, 297 297 struct kstatfs *kst); 298 - extern int smb311_crypto_shash_allocate(struct TCP_Server_Info *server); 299 - extern int smb311_update_preauth_hash(struct cifs_ses *ses, 300 - struct TCP_Server_Info *server, 301 - struct kvec *iov, int nvec); 298 + extern int smb3_crypto_shash_allocate(struct TCP_Server_Info *server); 299 + extern void smb311_update_preauth_hash(struct cifs_ses *ses, 300 + struct TCP_Server_Info *server, 301 + struct kvec *iov, int nvec); 302 302 extern int smb2_query_info_compound(const unsigned int xid, 303 303 struct cifs_tcon *tcon, 304 304 const char *path, u32 desired_access,
+27 -137
fs/smb/client/smb2transport.c
··· 19 19 #include <linux/mempool.h> 20 20 #include <linux/highmem.h> 21 21 #include <crypto/aead.h> 22 + #include <crypto/sha2.h> 22 23 #include "cifsglob.h" 23 24 #include "cifsproto.h" 24 25 #include "smb2proto.h" ··· 27 26 #include "../common/smb2status.h" 28 27 #include "smb2glob.h" 29 28 30 - static int 29 + int 31 30 smb3_crypto_shash_allocate(struct TCP_Server_Info *server) 32 31 { 33 32 struct cifs_secmech *p = &server->secmech; 34 - int rc; 35 33 36 - rc = cifs_alloc_hash("hmac(sha256)", &p->hmacsha256); 37 - if (rc) 38 - goto err; 39 - 40 - rc = cifs_alloc_hash("cmac(aes)", &p->aes_cmac); 41 - if (rc) 42 - goto err; 43 - 44 - return 0; 45 - err: 46 - cifs_free_hash(&p->hmacsha256); 47 - return rc; 34 + return cifs_alloc_hash("cmac(aes)", &p->aes_cmac); 48 35 } 49 - 50 - int 51 - smb311_crypto_shash_allocate(struct TCP_Server_Info *server) 52 - { 53 - struct cifs_secmech *p = &server->secmech; 54 - int rc = 0; 55 - 56 - rc = cifs_alloc_hash("hmac(sha256)", &p->hmacsha256); 57 - if (rc) 58 - return rc; 59 - 60 - rc = cifs_alloc_hash("cmac(aes)", &p->aes_cmac); 61 - if (rc) 62 - goto err; 63 - 64 - rc = cifs_alloc_hash("sha512", &p->sha512); 65 - if (rc) 66 - goto err; 67 - 68 - return 0; 69 - 70 - err: 71 - cifs_free_hash(&p->aes_cmac); 72 - cifs_free_hash(&p->hmacsha256); 73 - return rc; 74 - } 75 - 76 36 77 37 static 78 38 int smb3_get_sign_key(__u64 ses_id, struct TCP_Server_Info *server, u8 *key) ··· 215 253 { 216 254 int rc; 217 255 unsigned char smb2_signature[SMB2_HMACSHA256_SIZE]; 218 - unsigned char *sigptr = smb2_signature; 219 256 struct kvec *iov = rqst->rq_iov; 220 257 struct smb2_hdr *shdr = (struct smb2_hdr *)iov[0].iov_base; 221 - struct shash_desc *shash = NULL; 258 + struct hmac_sha256_ctx hmac_ctx; 222 259 struct smb_rqst drqst; 223 260 __u64 sid = le64_to_cpu(shdr->SessionId); 224 261 u8 key[SMB2_NTLMV2_SESSKEY_SIZE]; ··· 232 271 memset(smb2_signature, 0x0, SMB2_HMACSHA256_SIZE); 233 272 memset(shdr->Signature, 0x0, 
SMB2_SIGNATURE_SIZE); 234 273 235 - if (allocate_crypto) { 236 - rc = cifs_alloc_hash("hmac(sha256)", &shash); 237 - if (rc) { 238 - cifs_server_dbg(VFS, 239 - "%s: sha256 alloc failed\n", __func__); 240 - goto out; 241 - } 242 - } else { 243 - shash = server->secmech.hmacsha256; 244 - } 245 - 246 - rc = crypto_shash_setkey(shash->tfm, key, sizeof(key)); 247 - if (rc) { 248 - cifs_server_dbg(VFS, 249 - "%s: Could not update with response\n", 250 - __func__); 251 - goto out; 252 - } 253 - 254 - rc = crypto_shash_init(shash); 255 - if (rc) { 256 - cifs_server_dbg(VFS, "%s: Could not init sha256", __func__); 257 - goto out; 258 - } 274 + hmac_sha256_init_usingrawkey(&hmac_ctx, key, sizeof(key)); 259 275 260 276 /* 261 277 * For SMB2+, __cifs_calc_signature() expects to sign only the actual ··· 243 305 */ 244 306 drqst = *rqst; 245 307 if (drqst.rq_nvec >= 2 && iov[0].iov_len == 4) { 246 - rc = crypto_shash_update(shash, iov[0].iov_base, 247 - iov[0].iov_len); 248 - if (rc) { 249 - cifs_server_dbg(VFS, 250 - "%s: Could not update with payload\n", 251 - __func__); 252 - goto out; 253 - } 308 + hmac_sha256_update(&hmac_ctx, iov[0].iov_base, iov[0].iov_len); 254 309 drqst.rq_iov++; 255 310 drqst.rq_nvec--; 256 311 } 257 312 258 - rc = __cifs_calc_signature(&drqst, server, sigptr, shash); 313 + rc = __cifs_calc_signature( 314 + &drqst, server, smb2_signature, 315 + &(struct cifs_calc_sig_ctx){ .hmac = &hmac_ctx }); 259 316 if (!rc) 260 - memcpy(shdr->Signature, sigptr, SMB2_SIGNATURE_SIZE); 317 + memcpy(shdr->Signature, smb2_signature, SMB2_SIGNATURE_SIZE); 261 318 262 - out: 263 - if (allocate_crypto) 264 - cifs_free_hash(&shash); 265 319 return rc; 266 320 } 267 321 ··· 266 336 __u8 L256[4] = {0, 0, 1, 0}; 267 337 int rc = 0; 268 338 unsigned char prfhash[SMB2_HMACSHA256_SIZE]; 269 - unsigned char *hashptr = prfhash; 270 339 struct TCP_Server_Info *server = ses->server; 340 + struct hmac_sha256_ctx hmac_ctx; 271 341 272 342 memset(prfhash, 0x0, SMB2_HMACSHA256_SIZE); 273 
343 memset(key, 0x0, key_size); ··· 275 345 rc = smb3_crypto_shash_allocate(server); 276 346 if (rc) { 277 347 cifs_server_dbg(VFS, "%s: crypto alloc failed\n", __func__); 278 - goto smb3signkey_ret; 348 + return rc; 279 349 } 280 350 281 - rc = crypto_shash_setkey(server->secmech.hmacsha256->tfm, 282 - ses->auth_key.response, SMB2_NTLMV2_SESSKEY_SIZE); 283 - if (rc) { 284 - cifs_server_dbg(VFS, "%s: Could not set with session key\n", __func__); 285 - goto smb3signkey_ret; 286 - } 287 - 288 - rc = crypto_shash_init(server->secmech.hmacsha256); 289 - if (rc) { 290 - cifs_server_dbg(VFS, "%s: Could not init sign hmac\n", __func__); 291 - goto smb3signkey_ret; 292 - } 293 - 294 - rc = crypto_shash_update(server->secmech.hmacsha256, i, 4); 295 - if (rc) { 296 - cifs_server_dbg(VFS, "%s: Could not update with n\n", __func__); 297 - goto smb3signkey_ret; 298 - } 299 - 300 - rc = crypto_shash_update(server->secmech.hmacsha256, label.iov_base, label.iov_len); 301 - if (rc) { 302 - cifs_server_dbg(VFS, "%s: Could not update with label\n", __func__); 303 - goto smb3signkey_ret; 304 - } 305 - 306 - rc = crypto_shash_update(server->secmech.hmacsha256, &zero, 1); 307 - if (rc) { 308 - cifs_server_dbg(VFS, "%s: Could not update with zero\n", __func__); 309 - goto smb3signkey_ret; 310 - } 311 - 312 - rc = crypto_shash_update(server->secmech.hmacsha256, context.iov_base, context.iov_len); 313 - if (rc) { 314 - cifs_server_dbg(VFS, "%s: Could not update with context\n", __func__); 315 - goto smb3signkey_ret; 316 - } 351 + hmac_sha256_init_usingrawkey(&hmac_ctx, ses->auth_key.response, 352 + SMB2_NTLMV2_SESSKEY_SIZE); 353 + hmac_sha256_update(&hmac_ctx, i, 4); 354 + hmac_sha256_update(&hmac_ctx, label.iov_base, label.iov_len); 355 + hmac_sha256_update(&hmac_ctx, &zero, 1); 356 + hmac_sha256_update(&hmac_ctx, context.iov_base, context.iov_len); 317 357 318 358 if ((server->cipher_type == SMB2_ENCRYPTION_AES256_CCM) || 319 359 (server->cipher_type == SMB2_ENCRYPTION_AES256_GCM)) { 320 
- rc = crypto_shash_update(server->secmech.hmacsha256, L256, 4); 360 + hmac_sha256_update(&hmac_ctx, L256, 4); 321 361 } else { 322 - rc = crypto_shash_update(server->secmech.hmacsha256, L128, 4); 362 + hmac_sha256_update(&hmac_ctx, L128, 4); 323 363 } 324 - if (rc) { 325 - cifs_server_dbg(VFS, "%s: Could not update with L\n", __func__); 326 - goto smb3signkey_ret; 327 - } 364 + hmac_sha256_final(&hmac_ctx, prfhash); 328 365 329 - rc = crypto_shash_final(server->secmech.hmacsha256, hashptr); 330 - if (rc) { 331 - cifs_server_dbg(VFS, "%s: Could not generate sha256 hash\n", __func__); 332 - goto smb3signkey_ret; 333 - } 334 - 335 - memcpy(key, hashptr, key_size); 336 - 337 - smb3signkey_ret: 338 - return rc; 366 + memcpy(key, prfhash, key_size); 367 + return 0; 339 368 } 340 369 341 370 struct derivation { ··· 471 582 { 472 583 int rc; 473 584 unsigned char smb3_signature[SMB2_CMACAES_SIZE]; 474 - unsigned char *sigptr = smb3_signature; 475 585 struct kvec *iov = rqst->rq_iov; 476 586 struct smb2_hdr *shdr = (struct smb2_hdr *)iov[0].iov_base; 477 587 struct shash_desc *shash = NULL; ··· 531 643 drqst.rq_nvec--; 532 644 } 533 645 534 - rc = __cifs_calc_signature(&drqst, server, sigptr, shash); 646 + rc = __cifs_calc_signature( 647 + &drqst, server, smb3_signature, 648 + &(struct cifs_calc_sig_ctx){ .shash = shash }); 535 649 if (!rc) 536 - memcpy(shdr->Signature, sigptr, SMB2_SIGNATURE_SIZE); 650 + memcpy(shdr->Signature, smb3_signature, SMB2_SIGNATURE_SIZE); 537 651 538 652 out: 539 653 if (allocate_crypto)
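generate_key()'s hmac_sha256_* sequence is the SP 800-108 counter-mode KDF written out directly. A Python sketch (argument names are mine; label and context must already carry their trailing NULs, as in the kernel's iovecs, and the extra 0x00 separator is inserted here):

```python
import hmac
import hashlib

def smb3_kdf(session_key: bytes, label: bytes, context: bytes,
             out_len: int = 16) -> bytes:
    """One HMAC-SHA256 over i=1 || label || 0x00 || context || L,
    truncated to the key size. L is the output length in bits:
    128 for 16-byte keys, 256 for the AES-256 ciphers (the kernel's
    L128/L256 constants)."""
    i = (1).to_bytes(4, "big")
    L = (out_len * 8).to_bytes(4, "big")
    prf = hmac.new(session_key, i + label + b"\x00" + context + L,
                   hashlib.sha256).digest()
    return prf[:out_len]
```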
+220 -105
fs/smb/client/smbdirect.c
··· 1575 1575 disable_work_sync(&sc->disconnect_work); 1576 1576 1577 1577 log_rdma_event(INFO, "destroying rdma session\n"); 1578 - if (sc->status < SMBDIRECT_SOCKET_DISCONNECTING) { 1578 + if (sc->status < SMBDIRECT_SOCKET_DISCONNECTING) 1579 1579 smbd_disconnect_rdma_work(&sc->disconnect_work); 1580 + if (sc->status < SMBDIRECT_SOCKET_DISCONNECTED) { 1580 1581 log_rdma_event(INFO, "wait for transport being disconnected\n"); 1581 - wait_event_interruptible( 1582 - sc->status_wait, 1583 - sc->status == SMBDIRECT_SOCKET_DISCONNECTED); 1582 + wait_event(sc->status_wait, sc->status == SMBDIRECT_SOCKET_DISCONNECTED); 1583 + log_rdma_event(INFO, "waited for transport being disconnected\n"); 1584 1584 } 1585 1585 1586 1586 /* ··· 1624 1624 log_rdma_event(INFO, "free receive buffers\n"); 1625 1625 destroy_receive_buffers(sc); 1626 1626 1627 - /* 1628 - * For performance reasons, memory registration and deregistration 1629 - * are not locked by srv_mutex. It is possible some processes are 1630 - * blocked on transport srv_mutex while holding memory registration. 1631 - * Release the transport srv_mutex to allow them to hit the failure 1632 - * path when sending data, and then release memory registrations. 
1633 - */ 1634 1627 log_rdma_event(INFO, "freeing mr list\n"); 1635 - while (atomic_read(&sc->mr_io.used.count)) { 1636 - cifs_server_unlock(server); 1637 - msleep(1000); 1638 - cifs_server_lock(server); 1639 - } 1640 1628 destroy_mr_list(sc); 1641 1629 1642 1630 ib_free_cq(sc->ib.send_cq); ··· 2340 2352 } 2341 2353 } 2342 2354 2355 + static void smbd_mr_disable_locked(struct smbdirect_mr_io *mr) 2356 + { 2357 + struct smbdirect_socket *sc = mr->socket; 2358 + 2359 + lockdep_assert_held(&mr->mutex); 2360 + 2361 + if (mr->state == SMBDIRECT_MR_DISABLED) 2362 + return; 2363 + 2364 + if (mr->mr) 2365 + ib_dereg_mr(mr->mr); 2366 + if (mr->sgt.nents) 2367 + ib_dma_unmap_sg(sc->ib.dev, mr->sgt.sgl, mr->sgt.nents, mr->dir); 2368 + kfree(mr->sgt.sgl); 2369 + 2370 + mr->mr = NULL; 2371 + mr->sgt.sgl = NULL; 2372 + mr->sgt.nents = 0; 2373 + 2374 + mr->state = SMBDIRECT_MR_DISABLED; 2375 + } 2376 + 2377 + static void smbd_mr_free_locked(struct kref *kref) 2378 + { 2379 + struct smbdirect_mr_io *mr = 2380 + container_of(kref, struct smbdirect_mr_io, kref); 2381 + 2382 + lockdep_assert_held(&mr->mutex); 2383 + 2384 + /* 2385 + * smbd_mr_disable_locked() should already be called! 
2386 + */ 2387 + if (WARN_ON_ONCE(mr->state != SMBDIRECT_MR_DISABLED)) 2388 + smbd_mr_disable_locked(mr); 2389 + 2390 + mutex_unlock(&mr->mutex); 2391 + mutex_destroy(&mr->mutex); 2392 + kfree(mr); 2393 + } 2394 + 2343 2395 static void destroy_mr_list(struct smbdirect_socket *sc) 2344 2396 { 2345 2397 struct smbdirect_mr_io *mr, *tmp; 2398 + LIST_HEAD(all_list); 2399 + unsigned long flags; 2346 2400 2347 2401 disable_work_sync(&sc->mr_io.recovery_work); 2348 - list_for_each_entry_safe(mr, tmp, &sc->mr_io.all.list, list) { 2349 - if (mr->state == SMBDIRECT_MR_INVALIDATED) 2350 - ib_dma_unmap_sg(sc->ib.dev, mr->sgt.sgl, 2351 - mr->sgt.nents, mr->dir); 2352 - ib_dereg_mr(mr->mr); 2353 - kfree(mr->sgt.sgl); 2354 - kfree(mr); 2402 + 2403 + spin_lock_irqsave(&sc->mr_io.all.lock, flags); 2404 + list_splice_tail_init(&sc->mr_io.all.list, &all_list); 2405 + spin_unlock_irqrestore(&sc->mr_io.all.lock, flags); 2406 + 2407 + list_for_each_entry_safe(mr, tmp, &all_list, list) { 2408 + mutex_lock(&mr->mutex); 2409 + 2410 + smbd_mr_disable_locked(mr); 2411 + list_del(&mr->list); 2412 + mr->socket = NULL; 2413 + 2414 + /* 2415 + * No kref_put_mutex() as it's already locked. 2416 + * 2417 + * If smbd_mr_free_locked() is called 2418 + * and the mutex is unlocked and mr is gone, 2419 + * in that case kref_put() returned 1. 2420 + * 2421 + * If kref_put() returned 0 we know that 2422 + * smbd_mr_free_locked() didn't 2423 + * run. Not by us nor by anyone else, as we 2424 + * still hold the mutex, so we need to unlock. 2425 + * 2426 + * If the mr is still registered it will 2427 + * be dangling (detached from the connection 2428 + * waiting for smbd_deregister_mr() to be 2429 + * called in order to free the memory. 
2430 + */ 2431 + if (!kref_put(&mr->kref, smbd_mr_free_locked)) 2432 + mutex_unlock(&mr->mutex); 2355 2433 } 2356 2434 } 2357 2435 ··· 2431 2377 static int allocate_mr_list(struct smbdirect_socket *sc) 2432 2378 { 2433 2379 struct smbdirect_socket_parameters *sp = &sc->parameters; 2434 - int i; 2435 - struct smbdirect_mr_io *smbdirect_mr, *tmp; 2436 - 2437 - INIT_WORK(&sc->mr_io.recovery_work, smbd_mr_recovery_work); 2380 + struct smbdirect_mr_io *mr; 2381 + int ret; 2382 + u32 i; 2438 2383 2439 2384 if (sp->responder_resources == 0) { 2440 2385 log_rdma_mr(ERR, "responder_resources negotiated as 0\n"); ··· 2442 2389 2443 2390 /* Allocate more MRs (2x) than hardware responder_resources */ 2444 2391 for (i = 0; i < sp->responder_resources * 2; i++) { 2445 - smbdirect_mr = kzalloc(sizeof(*smbdirect_mr), GFP_KERNEL); 2446 - if (!smbdirect_mr) 2447 - goto cleanup_entries; 2448 - smbdirect_mr->mr = ib_alloc_mr(sc->ib.pd, sc->mr_io.type, 2449 - sp->max_frmr_depth); 2450 - if (IS_ERR(smbdirect_mr->mr)) { 2392 + mr = kzalloc(sizeof(*mr), GFP_KERNEL); 2393 + if (!mr) { 2394 + ret = -ENOMEM; 2395 + goto kzalloc_mr_failed; 2396 + } 2397 + 2398 + kref_init(&mr->kref); 2399 + mutex_init(&mr->mutex); 2400 + 2401 + mr->mr = ib_alloc_mr(sc->ib.pd, 2402 + sc->mr_io.type, 2403 + sp->max_frmr_depth); 2404 + if (IS_ERR(mr->mr)) { 2405 + ret = PTR_ERR(mr->mr); 2451 2406 log_rdma_mr(ERR, "ib_alloc_mr failed mr_type=%x max_frmr_depth=%x\n", 2452 2407 sc->mr_io.type, sp->max_frmr_depth); 2453 - goto out; 2408 + goto ib_alloc_mr_failed; 2454 2409 } 2455 - smbdirect_mr->sgt.sgl = kcalloc(sp->max_frmr_depth, 2456 - sizeof(struct scatterlist), 2457 - GFP_KERNEL); 2458 - if (!smbdirect_mr->sgt.sgl) { 2459 - log_rdma_mr(ERR, "failed to allocate sgl\n"); 2460 - ib_dereg_mr(smbdirect_mr->mr); 2461 - goto out; 2462 - } 2463 - smbdirect_mr->state = SMBDIRECT_MR_READY; 2464 - smbdirect_mr->socket = sc; 2465 2410 2466 - list_add_tail(&smbdirect_mr->list, &sc->mr_io.all.list); 2411 + mr->sgt.sgl = 
kcalloc(sp->max_frmr_depth, 2412 + sizeof(struct scatterlist), 2413 + GFP_KERNEL); 2414 + if (!mr->sgt.sgl) { 2415 + ret = -ENOMEM; 2416 + log_rdma_mr(ERR, "failed to allocate sgl\n"); 2417 + goto kcalloc_sgl_failed; 2418 + } 2419 + mr->state = SMBDIRECT_MR_READY; 2420 + mr->socket = sc; 2421 + 2422 + list_add_tail(&mr->list, &sc->mr_io.all.list); 2467 2423 atomic_inc(&sc->mr_io.ready.count); 2468 2424 } 2425 + 2426 + INIT_WORK(&sc->mr_io.recovery_work, smbd_mr_recovery_work); 2427 + 2469 2428 return 0; 2470 2429 2471 - out: 2472 - kfree(smbdirect_mr); 2473 - cleanup_entries: 2474 - list_for_each_entry_safe(smbdirect_mr, tmp, &sc->mr_io.all.list, list) { 2475 - list_del(&smbdirect_mr->list); 2476 - ib_dereg_mr(smbdirect_mr->mr); 2477 - kfree(smbdirect_mr->sgt.sgl); 2478 - kfree(smbdirect_mr); 2479 - } 2480 - return -ENOMEM; 2430 + kcalloc_sgl_failed: 2431 + ib_dereg_mr(mr->mr); 2432 + ib_alloc_mr_failed: 2433 + mutex_destroy(&mr->mutex); 2434 + kfree(mr); 2435 + kzalloc_mr_failed: 2436 + destroy_mr_list(sc); 2437 + return ret; 2481 2438 } 2482 2439 2483 2440 /* ··· 2521 2458 list_for_each_entry(ret, &sc->mr_io.all.list, list) { 2522 2459 if (ret->state == SMBDIRECT_MR_READY) { 2523 2460 ret->state = SMBDIRECT_MR_REGISTERED; 2461 + kref_get(&ret->kref); 2524 2462 spin_unlock_irqrestore(&sc->mr_io.all.lock, flags); 2525 2463 atomic_dec(&sc->mr_io.ready.count); 2526 2464 atomic_inc(&sc->mr_io.used.count); ··· 2568 2504 { 2569 2505 struct smbdirect_socket *sc = &info->socket; 2570 2506 struct smbdirect_socket_parameters *sp = &sc->parameters; 2571 - struct smbdirect_mr_io *smbdirect_mr; 2507 + struct smbdirect_mr_io *mr; 2572 2508 int rc, num_pages; 2573 - enum dma_data_direction dir; 2574 2509 struct ib_reg_wr *reg_wr; 2575 2510 2576 2511 num_pages = iov_iter_npages(iter, sp->max_frmr_depth + 1); ··· 2580 2517 return NULL; 2581 2518 } 2582 2519 2583 - smbdirect_mr = get_mr(sc); 2584 - if (!smbdirect_mr) { 2520 + mr = get_mr(sc); 2521 + if (!mr) { 2585 2522 
log_rdma_mr(ERR, "get_mr returning NULL\n"); 2586 2523 return NULL; 2587 2524 } 2588 2525 2589 - dir = writing ? DMA_FROM_DEVICE : DMA_TO_DEVICE; 2590 - smbdirect_mr->dir = dir; 2591 - smbdirect_mr->need_invalidate = need_invalidate; 2592 - smbdirect_mr->sgt.nents = 0; 2593 - smbdirect_mr->sgt.orig_nents = 0; 2526 + mutex_lock(&mr->mutex); 2527 + 2528 + mr->dir = writing ? DMA_FROM_DEVICE : DMA_TO_DEVICE; 2529 + mr->need_invalidate = need_invalidate; 2530 + mr->sgt.nents = 0; 2531 + mr->sgt.orig_nents = 0; 2594 2532 2595 2533 log_rdma_mr(INFO, "num_pages=0x%x count=0x%zx depth=%u\n", 2596 2534 num_pages, iov_iter_count(iter), sp->max_frmr_depth); 2597 - smbd_iter_to_mr(iter, &smbdirect_mr->sgt, sp->max_frmr_depth); 2535 + smbd_iter_to_mr(iter, &mr->sgt, sp->max_frmr_depth); 2598 2536 2599 - rc = ib_dma_map_sg(sc->ib.dev, smbdirect_mr->sgt.sgl, 2600 - smbdirect_mr->sgt.nents, dir); 2537 + rc = ib_dma_map_sg(sc->ib.dev, mr->sgt.sgl, mr->sgt.nents, mr->dir); 2601 2538 if (!rc) { 2602 2539 log_rdma_mr(ERR, "ib_dma_map_sg num_pages=%x dir=%x rc=%x\n", 2603 - num_pages, dir, rc); 2540 + num_pages, mr->dir, rc); 2604 2541 goto dma_map_error; 2605 2542 } 2606 2543 2607 - rc = ib_map_mr_sg(smbdirect_mr->mr, smbdirect_mr->sgt.sgl, 2608 - smbdirect_mr->sgt.nents, NULL, PAGE_SIZE); 2609 - if (rc != smbdirect_mr->sgt.nents) { 2544 + rc = ib_map_mr_sg(mr->mr, mr->sgt.sgl, mr->sgt.nents, NULL, PAGE_SIZE); 2545 + if (rc != mr->sgt.nents) { 2610 2546 log_rdma_mr(ERR, 2611 - "ib_map_mr_sg failed rc = %d nents = %x\n", 2612 - rc, smbdirect_mr->sgt.nents); 2547 + "ib_map_mr_sg failed rc = %d nents = %x\n", 2548 + rc, mr->sgt.nents); 2613 2549 goto map_mr_error; 2614 2550 } 2615 2551 2616 - ib_update_fast_reg_key(smbdirect_mr->mr, 2617 - ib_inc_rkey(smbdirect_mr->mr->rkey)); 2618 - reg_wr = &smbdirect_mr->wr; 2552 + ib_update_fast_reg_key(mr->mr, ib_inc_rkey(mr->mr->rkey)); 2553 + reg_wr = &mr->wr; 2619 2554 reg_wr->wr.opcode = IB_WR_REG_MR; 2620 - smbdirect_mr->cqe.done = 
register_mr_done; 2621 - reg_wr->wr.wr_cqe = &smbdirect_mr->cqe; 2555 + mr->cqe.done = register_mr_done; 2556 + reg_wr->wr.wr_cqe = &mr->cqe; 2622 2557 reg_wr->wr.num_sge = 0; 2623 2558 reg_wr->wr.send_flags = IB_SEND_SIGNALED; 2624 - reg_wr->mr = smbdirect_mr->mr; 2625 - reg_wr->key = smbdirect_mr->mr->rkey; 2559 + reg_wr->mr = mr->mr; 2560 + reg_wr->key = mr->mr->rkey; 2626 2561 reg_wr->access = writing ? 2627 2562 IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE : 2628 2563 IB_ACCESS_REMOTE_READ; ··· 2631 2570 * on the next ib_post_send when we actually send I/O to remote peer 2632 2571 */ 2633 2572 rc = ib_post_send(sc->ib.qp, &reg_wr->wr, NULL); 2634 - if (!rc) 2635 - return smbdirect_mr; 2573 + if (!rc) { 2574 + /* 2575 + * get_mr() gave us a reference 2576 + * via kref_get(&mr->kref), we keep that and let 2577 + * the caller use smbd_deregister_mr() 2578 + * to remove it again. 2579 + */ 2580 + mutex_unlock(&mr->mutex); 2581 + return mr; 2582 + } 2636 2583 2637 2584 log_rdma_mr(ERR, "ib_post_send failed rc=%x reg_wr->key=%x\n", 2638 2585 rc, reg_wr->key); 2639 2586 2640 2587 /* If all failed, attempt to recover this MR by setting it SMBDIRECT_MR_ERROR*/ 2641 2588 map_mr_error: 2642 - ib_dma_unmap_sg(sc->ib.dev, smbdirect_mr->sgt.sgl, 2643 - smbdirect_mr->sgt.nents, smbdirect_mr->dir); 2589 + ib_dma_unmap_sg(sc->ib.dev, mr->sgt.sgl, mr->sgt.nents, mr->dir); 2644 2590 2645 2591 dma_map_error: 2646 - smbdirect_mr->state = SMBDIRECT_MR_ERROR; 2592 + mr->sgt.nents = 0; 2593 + mr->state = SMBDIRECT_MR_ERROR; 2647 2594 if (atomic_dec_and_test(&sc->mr_io.used.count)) 2648 2595 wake_up(&sc->mr_io.cleanup.wait_queue); 2649 2596 2650 2597 smbd_disconnect_rdma_connection(sc); 2598 + 2599 + /* 2600 + * get_mr() gave us a reference 2601 + * via kref_get(&mr->kref), we need to remove it again 2602 + * on error. 2603 + * 2604 + * No kref_put_mutex() as it's already locked. 
2605 + * 2606 + * If smbd_mr_free_locked() is called 2607 + * and the mutex is unlocked and mr is gone, 2608 + * in that case kref_put() returned 1. 2609 + * 2610 + * If kref_put() returned 0 we know that 2611 + * smbd_mr_free_locked() didn't 2612 + * run. Not by us nor by anyone else, as we 2613 + * still hold the mutex, so we need to unlock. 2614 + */ 2615 + if (!kref_put(&mr->kref, smbd_mr_free_locked)) 2616 + mutex_unlock(&mr->mutex); 2651 2617 2652 2618 return NULL; 2653 2619 } ··· 2700 2612 * and we have to locally invalidate the buffer to prevent data is being 2701 2613 * modified by remote peer after upper layer consumes it 2702 2614 */ 2703 - int smbd_deregister_mr(struct smbdirect_mr_io *smbdirect_mr) 2615 + void smbd_deregister_mr(struct smbdirect_mr_io *mr) 2704 2616 { 2705 - struct ib_send_wr *wr; 2706 - struct smbdirect_socket *sc = smbdirect_mr->socket; 2707 - int rc = 0; 2617 + struct smbdirect_socket *sc = mr->socket; 2708 2618 2709 - if (smbdirect_mr->need_invalidate) { 2619 + mutex_lock(&mr->mutex); 2620 + if (mr->state == SMBDIRECT_MR_DISABLED) 2621 + goto put_kref; 2622 + 2623 + if (sc->status != SMBDIRECT_SOCKET_CONNECTED) { 2624 + smbd_mr_disable_locked(mr); 2625 + goto put_kref; 2626 + } 2627 + 2628 + if (mr->need_invalidate) { 2629 + struct ib_send_wr *wr = &mr->inv_wr; 2630 + int rc; 2631 + 2710 2632 /* Need to finish local invalidation before returning */ 2711 - wr = &smbdirect_mr->inv_wr; 2712 2633 wr->opcode = IB_WR_LOCAL_INV; 2713 - smbdirect_mr->cqe.done = local_inv_done; 2714 - wr->wr_cqe = &smbdirect_mr->cqe; 2634 + mr->cqe.done = local_inv_done; 2635 + wr->wr_cqe = &mr->cqe; 2715 2636 wr->num_sge = 0; 2716 - wr->ex.invalidate_rkey = smbdirect_mr->mr->rkey; 2637 + wr->ex.invalidate_rkey = mr->mr->rkey; 2717 2638 wr->send_flags = IB_SEND_SIGNALED; 2718 2639 2719 - init_completion(&smbdirect_mr->invalidate_done); 2640 + init_completion(&mr->invalidate_done); 2720 2641 rc = ib_post_send(sc->ib.qp, wr, NULL); 2721 2642 if (rc) { 2722 
2643 log_rdma_mr(ERR, "ib_post_send failed rc=%x\n", rc); 2644 + smbd_mr_disable_locked(mr); 2723 2645 smbd_disconnect_rdma_connection(sc); 2724 2646 goto done; 2725 2647 } 2726 - wait_for_completion(&smbdirect_mr->invalidate_done); 2727 - smbdirect_mr->need_invalidate = false; 2648 + wait_for_completion(&mr->invalidate_done); 2649 + mr->need_invalidate = false; 2728 2650 } else 2729 2651 /* 2730 2652 * For remote invalidation, just set it to SMBDIRECT_MR_INVALIDATED 2731 2653 * and defer to mr_recovery_work to recover the MR for next use 2732 2654 */ 2733 - smbdirect_mr->state = SMBDIRECT_MR_INVALIDATED; 2655 + mr->state = SMBDIRECT_MR_INVALIDATED; 2734 2656 2735 - if (smbdirect_mr->state == SMBDIRECT_MR_INVALIDATED) { 2736 - ib_dma_unmap_sg( 2737 - sc->ib.dev, smbdirect_mr->sgt.sgl, 2738 - smbdirect_mr->sgt.nents, 2739 - smbdirect_mr->dir); 2740 - smbdirect_mr->state = SMBDIRECT_MR_READY; 2657 + if (mr->sgt.nents) { 2658 + ib_dma_unmap_sg(sc->ib.dev, mr->sgt.sgl, mr->sgt.nents, mr->dir); 2659 + mr->sgt.nents = 0; 2660 + } 2661 + 2662 + if (mr->state == SMBDIRECT_MR_INVALIDATED) { 2663 + mr->state = SMBDIRECT_MR_READY; 2741 2664 if (atomic_inc_return(&sc->mr_io.ready.count) == 1) 2742 2665 wake_up(&sc->mr_io.ready.wait_queue); 2743 2666 } else ··· 2762 2663 if (atomic_dec_and_test(&sc->mr_io.used.count)) 2763 2664 wake_up(&sc->mr_io.cleanup.wait_queue); 2764 2665 2765 - return rc; 2666 + put_kref: 2667 + /* 2668 + * No kref_put_mutex() as it's already locked. 2669 + * 2670 + * If smbd_mr_free_locked() is called 2671 + * and the mutex is unlocked and mr is gone, 2672 + * in that case kref_put() returned 1. 2673 + * 2674 + * If kref_put() returned 0 we know that 2675 + * smbd_mr_free_locked() didn't 2676 + * run. Not by us nor by anyone else, as we 2677 + * still hold the mutex, so we need to unlock 2678 + * and keep the mr in SMBDIRECT_MR_READY or 2679 + * SMBDIRECT_MR_ERROR state. 
2680 + */ 2681 + if (!kref_put(&mr->kref, smbd_mr_free_locked)) 2682 + mutex_unlock(&mr->mutex); 2766 2683 } 2767 2684 2768 2685 static bool smb_set_sge(struct smb_extract_to_rdma *rdma,
+1 -1
fs/smb/client/smbdirect.h
··· 60 60 struct smbdirect_mr_io *smbd_register_mr( 61 61 struct smbd_connection *info, struct iov_iter *iter, 62 62 bool writing, bool need_invalidate); 63 - int smbd_deregister_mr(struct smbdirect_mr_io *mr); 63 + void smbd_deregister_mr(struct smbdirect_mr_io *mr); 64 64 65 65 #else 66 66 #define cifs_rdma_enabled(server) 0
-1
fs/smb/client/xattr.c
··· 178 178 memcpy(pacl, value, size); 179 179 if (pTcon->ses->server->ops->set_acl) { 180 180 int aclflags = 0; 181 - rc = 0; 182 181 183 182 switch (handler->flags) { 184 183 case XATTR_CIFS_NTSD_FULL:
+30
fs/smb/common/cifsglob.h
··· 1 + /* SPDX-License-Identifier: LGPL-2.1 */ 2 + /* 3 + * 4 + * Copyright (C) International Business Machines Corp., 2002,2008 5 + * Author(s): Steve French (sfrench@us.ibm.com) 6 + * Jeremy Allison (jra@samba.org) 7 + * 8 + */ 9 + #ifndef _COMMON_CIFS_GLOB_H 10 + #define _COMMON_CIFS_GLOB_H 11 + 12 + static inline void inc_rfc1001_len(void *buf, int count) 13 + { 14 + be32_add_cpu((__be32 *)buf, count); 15 + } 16 + 17 + #define SMB1_VERSION_STRING "1.0" 18 + #define SMB20_VERSION_STRING "2.0" 19 + #define SMB21_VERSION_STRING "2.1" 20 + #define SMBDEFAULT_VERSION_STRING "default" 21 + #define SMB3ANY_VERSION_STRING "3" 22 + #define SMB30_VERSION_STRING "3.0" 23 + #define SMB302_VERSION_STRING "3.02" 24 + #define ALT_SMB302_VERSION_STRING "3.0.2" 25 + #define SMB311_VERSION_STRING "3.1.1" 26 + #define ALT_SMB311_VERSION_STRING "3.11" 27 + 28 + #define CIFS_DEFAULT_IOSIZE (1024 * 1024) 29 + 30 + #endif /* _COMMON_CIFS_GLOB_H */
+10 -1
fs/smb/common/smbdirect/smbdirect_socket.h
··· 437 437 SMBDIRECT_MR_READY, 438 438 SMBDIRECT_MR_REGISTERED, 439 439 SMBDIRECT_MR_INVALIDATED, 440 - SMBDIRECT_MR_ERROR 440 + SMBDIRECT_MR_ERROR, 441 + SMBDIRECT_MR_DISABLED 441 442 }; 442 443 443 444 struct smbdirect_mr_io { 444 445 struct smbdirect_socket *socket; 445 446 struct ib_cqe cqe; 447 + 448 + /* 449 + * We can have up to two references: 450 + * 1. by the connection 451 + * 2. by the registration 452 + */ 453 + struct kref kref; 454 + struct mutex mutex; 446 455 447 456 struct list_head list; 448 457
+2 -5
fs/smb/server/mgmt/user_session.c
··· 147 147 int ksmbd_session_rpc_method(struct ksmbd_session *sess, int id) 148 148 { 149 149 struct ksmbd_session_rpc *entry; 150 - int method; 151 150 152 - down_read(&sess->rpc_lock); 151 + lockdep_assert_held(&sess->rpc_lock); 153 152 entry = xa_load(&sess->rpc_handle_list, id); 154 - method = entry ? entry->method : 0; 155 - up_read(&sess->rpc_lock); 156 153 157 - return method; 154 + return entry ? entry->method : 0; 158 155 } 159 156 160 157 void ksmbd_session_destroy(struct ksmbd_session *sess)
+10 -1
fs/smb/server/smb2pdu.c
··· 1806 1806 1807 1807 if (ksmbd_conn_need_reconnect(conn)) { 1808 1808 rc = -EFAULT; 1809 + ksmbd_user_session_put(sess); 1809 1810 sess = NULL; 1810 1811 goto out_err; 1811 1812 } ··· 4626 4625 * pipe without opening it, checking error condition here 4627 4626 */ 4628 4627 id = req->VolatileFileId; 4629 - if (!ksmbd_session_rpc_method(sess, id)) 4628 + 4629 + lockdep_assert_not_held(&sess->rpc_lock); 4630 + 4631 + down_read(&sess->rpc_lock); 4632 + if (!ksmbd_session_rpc_method(sess, id)) { 4633 + up_read(&sess->rpc_lock); 4630 4634 return -ENOENT; 4635 + } 4636 + up_read(&sess->rpc_lock); 4631 4637 4632 4638 ksmbd_debug(SMB, "FileInfoClass %u, FileId 0x%llx\n", 4633 4639 req->FileInfoClass, req->VolatileFileId); ··· 6832 6824 6833 6825 nbytes = ksmbd_vfs_read(work, fp, length, &offset, aux_payload_buf); 6834 6826 if (nbytes < 0) { 6827 + kvfree(aux_payload_buf); 6835 6828 err = nbytes; 6836 6829 goto out; 6837 6830 }
+1 -13
fs/smb/server/smb_common.h
··· 10 10 11 11 #include "glob.h" 12 12 #include "nterr.h" 13 + #include "../common/cifsglob.h" 13 14 #include "../common/smb2pdu.h" 14 15 #include "smb2pdu.h" 15 16 ··· 27 26 #define SMB311_PROT 6 28 27 #define BAD_PROT 0xFFFF 29 28 30 - #define SMB1_VERSION_STRING "1.0" 31 - #define SMB20_VERSION_STRING "2.0" 32 - #define SMB21_VERSION_STRING "2.1" 33 - #define SMB30_VERSION_STRING "3.0" 34 - #define SMB302_VERSION_STRING "3.02" 35 - #define SMB311_VERSION_STRING "3.1.1" 36 - 37 29 #define SMB_ECHO_INTERVAL (60 * HZ) 38 30 39 - #define CIFS_DEFAULT_IOSIZE (64 * 1024) 40 31 #define MAX_CIFS_SMALL_BUFFER_SIZE 448 /* big enough for most */ 41 32 42 33 #define MAX_STREAM_PROT_LEN 0x00FFFFFF ··· 456 463 static inline unsigned int get_rfc1002_len(void *buf) 457 464 { 458 465 return be32_to_cpu(*((__be32 *)buf)) & 0xffffff; 459 - } 460 - 461 - static inline void inc_rfc1001_len(void *buf, int count) 462 - { 463 - be32_add_cpu((__be32 *)buf, count); 464 466 } 465 467 #endif /* __SMB_COMMON_H__ */
+12
fs/smb/server/transport_ipc.c
··· 825 825 if (!msg) 826 826 return NULL; 827 827 828 + lockdep_assert_not_held(&sess->rpc_lock); 829 + 830 + down_read(&sess->rpc_lock); 828 831 msg->type = KSMBD_EVENT_RPC_REQUEST; 829 832 req = (struct ksmbd_rpc_command *)msg->payload; 830 833 req->handle = handle; ··· 836 833 req->flags |= KSMBD_RPC_WRITE_METHOD; 837 834 req->payload_sz = payload_sz; 838 835 memcpy(req->payload, payload, payload_sz); 836 + up_read(&sess->rpc_lock); 839 837 840 838 resp = ipc_msg_send_request(msg, req->handle); 841 839 ipc_msg_free(msg); ··· 853 849 if (!msg) 854 850 return NULL; 855 851 852 + lockdep_assert_not_held(&sess->rpc_lock); 853 + 854 + down_read(&sess->rpc_lock); 856 855 msg->type = KSMBD_EVENT_RPC_REQUEST; 857 856 req = (struct ksmbd_rpc_command *)msg->payload; 858 857 req->handle = handle; ··· 863 856 req->flags |= rpc_context_flags(sess); 864 857 req->flags |= KSMBD_RPC_READ_METHOD; 865 858 req->payload_sz = 0; 859 + up_read(&sess->rpc_lock); 866 860 867 861 resp = ipc_msg_send_request(msg, req->handle); 868 862 ipc_msg_free(msg); ··· 884 876 if (!msg) 885 877 return NULL; 886 878 879 + lockdep_assert_not_held(&sess->rpc_lock); 880 + 881 + down_read(&sess->rpc_lock); 887 882 msg->type = KSMBD_EVENT_RPC_REQUEST; 888 883 req = (struct ksmbd_rpc_command *)msg->payload; 889 884 req->handle = handle; ··· 895 884 req->flags |= KSMBD_RPC_IOCTL_METHOD; 896 885 req->payload_sz = payload_sz; 897 886 memcpy(req->payload, payload, payload_sz); 887 + up_read(&sess->rpc_lock); 898 888 899 889 resp = ipc_msg_send_request(msg, req->handle); 900 890 ipc_msg_free(msg);
+10 -10
fs/smb/server/transport_rdma.c
··· 1574 1574 get_buf_page_count(desc_buf, desc_buf_len), 1575 1575 msg->sg_list, SG_CHUNK_SIZE); 1576 1576 if (ret) { 1577 - kfree(msg); 1578 1577 ret = -ENOMEM; 1579 - goto out; 1578 + goto free_msg; 1580 1579 } 1581 1580 1582 1581 ret = get_sg_list(desc_buf, desc_buf_len, 1583 1582 msg->sgt.sgl, msg->sgt.orig_nents); 1584 - if (ret < 0) { 1585 - sg_free_table_chained(&msg->sgt, SG_CHUNK_SIZE); 1586 - kfree(msg); 1587 - goto out; 1588 - } 1583 + if (ret < 0) 1584 + goto free_table; 1589 1585 1590 1586 ret = rdma_rw_ctx_init(&msg->rdma_ctx, sc->ib.qp, sc->ib.qp->port, 1591 1587 msg->sgt.sgl, ··· 1592 1596 is_read ? DMA_FROM_DEVICE : DMA_TO_DEVICE); 1593 1597 if (ret < 0) { 1594 1598 pr_err("failed to init rdma_rw_ctx: %d\n", ret); 1595 - sg_free_table_chained(&msg->sgt, SG_CHUNK_SIZE); 1596 - kfree(msg); 1597 - goto out; 1599 + goto free_table; 1598 1600 } 1599 1601 1600 1602 list_add_tail(&msg->list, &msg_list); ··· 1624 1630 atomic_add(credits_needed, &sc->rw_io.credits.count); 1625 1631 wake_up(&sc->rw_io.credits.wait_queue); 1626 1632 return ret; 1633 + 1634 + free_table: 1635 + sg_free_table_chained(&msg->sgt, SG_CHUNK_SIZE); 1636 + free_msg: 1637 + kfree(msg); 1638 + goto out; 1627 1639 } 1628 1640 1629 1641 static int smb_direct_rdma_write(struct ksmbd_transport *t,
+1 -1
include/drm/drm_gpuvm.h
··· 1078 1078 */ 1079 1079 struct drm_gpuvm_map_req { 1080 1080 /** 1081 - * @op_map: struct drm_gpuva_op_map 1081 + * @map: struct drm_gpuva_op_map 1082 1082 */ 1083 1083 struct drm_gpuva_op_map map; 1084 1084 };
+16 -8
include/kvm/arm_arch_timer.h
··· 51 51 }; 52 52 53 53 struct arch_timer_context { 54 - struct kvm_vcpu *vcpu; 55 - 56 54 /* Emulated Timer (may be unused) */ 57 55 struct hrtimer hrtimer; 58 56 u64 ns_frac; ··· 68 70 struct { 69 71 bool level; 70 72 } irq; 73 + 74 + /* Who am I? */ 75 + enum kvm_arch_timers timer_id; 71 76 72 77 /* Duplicated state from arch_timer.c for convenience */ 73 78 u32 host_timer_irq; ··· 107 106 108 107 void kvm_timer_init_vm(struct kvm *kvm); 109 108 110 - u64 kvm_arm_timer_get_reg(struct kvm_vcpu *, u64 regid); 111 - int kvm_arm_timer_set_reg(struct kvm_vcpu *, u64 regid, u64 value); 112 - 113 109 int kvm_arm_timer_set_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr); 114 110 int kvm_arm_timer_get_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr); 115 111 int kvm_arm_timer_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr); ··· 125 127 #define vcpu_hvtimer(v) (&(v)->arch.timer_cpu.timers[TIMER_HVTIMER]) 126 128 #define vcpu_hptimer(v) (&(v)->arch.timer_cpu.timers[TIMER_HPTIMER]) 127 129 128 - #define arch_timer_ctx_index(ctx) ((ctx) - vcpu_timer((ctx)->vcpu)->timers) 129 - 130 - #define timer_vm_data(ctx) (&(ctx)->vcpu->kvm->arch.timer_data) 130 + #define arch_timer_ctx_index(ctx) ((ctx)->timer_id) 131 + #define timer_context_to_vcpu(ctx) container_of((ctx), struct kvm_vcpu, arch.timer_cpu.timers[(ctx)->timer_id]) 132 + #define timer_vm_data(ctx) (&(timer_context_to_vcpu(ctx)->kvm->arch.timer_data)) 131 133 #define timer_irq(ctx) (timer_vm_data(ctx)->ppi[arch_timer_ctx_index(ctx)]) 132 134 133 135 u64 kvm_arm_timer_read_sysreg(struct kvm_vcpu *vcpu, ··· 174 176 offset += *ctxt->offset.vcpu_offset; 175 177 176 178 return offset; 179 + } 180 + 181 + static inline void timer_set_offset(struct arch_timer_context *ctxt, u64 offset) 182 + { 183 + if (!ctxt->offset.vm_offset) { 184 + WARN(offset, "timer %d\n", arch_timer_ctx_index(ctxt)); 185 + return; 186 + } 187 + 188 + WRITE_ONCE(*ctxt->offset.vm_offset, offset); 177 189 } 178 190 179 191 
#endif
+4
include/linux/bpf.h
··· 2499 2499 #ifdef CONFIG_MEMCG 2500 2500 void *bpf_map_kmalloc_node(const struct bpf_map *map, size_t size, gfp_t flags, 2501 2501 int node); 2502 + void *bpf_map_kmalloc_nolock(const struct bpf_map *map, size_t size, gfp_t flags, 2503 + int node); 2502 2504 void *bpf_map_kzalloc(const struct bpf_map *map, size_t size, gfp_t flags); 2503 2505 void *bpf_map_kvcalloc(struct bpf_map *map, size_t n, size_t size, 2504 2506 gfp_t flags); ··· 2513 2511 */ 2514 2512 #define bpf_map_kmalloc_node(_map, _size, _flags, _node) \ 2515 2513 kmalloc_node(_size, _flags, _node) 2514 + #define bpf_map_kmalloc_nolock(_map, _size, _flags, _node) \ 2515 + kmalloc_nolock(_size, _flags, _node) 2516 2516 #define bpf_map_kzalloc(_map, _size, _flags) \ 2517 2517 kzalloc(_size, _flags) 2518 2518 #define bpf_map_kvcalloc(_map, _n, _size, _flags) \
+1
include/linux/brcmphy.h
··· 137 137 138 138 #define MII_BCM54XX_AUXCTL_SHDWSEL_MISC 0x07 139 139 #define MII_BCM54XX_AUXCTL_SHDWSEL_MISC_WIRESPEED_EN 0x0010 140 + #define MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RSVD 0x0060 140 141 #define MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_EN 0x0080 141 142 #define MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_SKEW_EN 0x0100 142 143 #define MII_BCM54XX_AUXCTL_MISC_FORCE_AMDIX 0x0200
+11
include/linux/hid.h
··· 1292 1292 #define hid_dbg_once(hid, fmt, ...) \ 1293 1293 dev_dbg_once(&(hid)->dev, fmt, ##__VA_ARGS__) 1294 1294 1295 + #define hid_err_ratelimited(hid, fmt, ...) \ 1296 + dev_err_ratelimited(&(hid)->dev, fmt, ##__VA_ARGS__) 1297 + #define hid_notice_ratelimited(hid, fmt, ...) \ 1298 + dev_notice_ratelimited(&(hid)->dev, fmt, ##__VA_ARGS__) 1299 + #define hid_warn_ratelimited(hid, fmt, ...) \ 1300 + dev_warn_ratelimited(&(hid)->dev, fmt, ##__VA_ARGS__) 1301 + #define hid_info_ratelimited(hid, fmt, ...) \ 1302 + dev_info_ratelimited(&(hid)->dev, fmt, ##__VA_ARGS__) 1303 + #define hid_dbg_ratelimited(hid, fmt, ...) \ 1304 + dev_dbg_ratelimited(&(hid)->dev, fmt, ##__VA_ARGS__) 1305 + 1295 1306 #endif
+11 -1
include/linux/kvm_host.h
··· 729 729 #endif 730 730 731 731 #ifdef CONFIG_KVM_GUEST_MEMFD 732 - bool kvm_arch_supports_gmem_mmap(struct kvm *kvm); 732 + bool kvm_arch_supports_gmem_init_shared(struct kvm *kvm); 733 + 734 + static inline u64 kvm_gmem_get_supported_flags(struct kvm *kvm) 735 + { 736 + u64 flags = GUEST_MEMFD_FLAG_MMAP; 737 + 738 + if (!kvm || kvm_arch_supports_gmem_init_shared(kvm)) 739 + flags |= GUEST_MEMFD_FLAG_INIT_SHARED; 740 + 741 + return flags; 742 + } 733 743 #endif 734 744 735 745 #ifndef kvm_arch_has_readonly_mem
+6
include/linux/libata.h
··· 1594 1594 #define ata_dev_dbg(dev, fmt, ...) \ 1595 1595 ata_dev_printk(debug, dev, fmt, ##__VA_ARGS__) 1596 1596 1597 + #define ata_dev_warn_once(dev, fmt, ...) \ 1598 + pr_warn_once("ata%u.%02u: " fmt, \ 1599 + (dev)->link->ap->print_id, \ 1600 + (dev)->link->pmp + (dev)->devno, \ 1601 + ##__VA_ARGS__) 1602 + 1597 1603 static inline void ata_print_version_once(const struct device *dev, 1598 1604 const char *version) 1599 1605 {
+1
include/linux/nfs_xdr.h
··· 1659 1659 void *netfs; 1660 1660 #endif 1661 1661 1662 + unsigned short retrans; 1662 1663 int pnfs_error; 1663 1664 int error; /* merge with pnfs_error */ 1664 1665 unsigned int good_bytes; /* boundary of good data */
+44
include/linux/rpmb.h
··· 61 61 62 62 #define to_rpmb_dev(x) container_of((x), struct rpmb_dev, dev) 63 63 64 + /** 65 + * struct rpmb_frame - RPMB frame structure for authenticated access 66 + * 67 + * @stuff : stuff bytes, a padding/reserved area of 196 bytes at the 68 + * beginning of the RPMB frame. They don't carry meaningful 69 + * data but are required to make the frame exactly 512 bytes. 70 + * @key_mac : The authentication key or the message authentication 71 + * code (MAC) depending on the request/response type. 72 + * The MAC will be delivered in the last (or the only) 73 + * block of data. 74 + * @data : Data to be written or read by signed access. 75 + * @nonce : Random number generated by the host for the requests 76 + * and copied to the response by the RPMB engine. 77 + * @write_counter: Counter value for the total number of successful 78 + * authenticated data write requests made by the host. 79 + * @addr : Address of the data to be programmed to or read 80 + * from the RPMB. Address is the serial number of 81 + * the accessed block (half sector 256B). 82 + * @block_count : Number of blocks (half sectors, 256B) requested to be 83 + * read/programmed. 84 + * @result : Includes information about the status of the write counter 85 + * (valid, expired) and the result of the access made to the RPMB. 86 + * @req_resp : Defines the type of request and response to/from the memory. 87 + * 88 + * The stuff bytes and big-endian properties are modeled to fit the spec.
89 + */ 90 + struct rpmb_frame { 91 + u8 stuff[196]; 92 + u8 key_mac[32]; 93 + u8 data[256]; 94 + u8 nonce[16]; 95 + __be32 write_counter; 96 + __be16 addr; 97 + __be16 block_count; 98 + __be16 result; 99 + __be16 req_resp; 100 + }; 101 + 102 + #define RPMB_PROGRAM_KEY 0x1 /* Program RPMB Authentication Key */ 103 + #define RPMB_GET_WRITE_COUNTER 0x2 /* Read RPMB write counter */ 104 + #define RPMB_WRITE_DATA 0x3 /* Write data to RPMB partition */ 105 + #define RPMB_READ_DATA 0x4 /* Read data from RPMB partition */ 106 + #define RPMB_RESULT_READ 0x5 /* Read result request (Internal) */ 107 + 64 108 #if IS_ENABLED(CONFIG_RPMB) 65 109 struct rpmb_dev *rpmb_dev_get(struct rpmb_dev *rdev); 66 110 void rpmb_dev_put(struct rpmb_dev *rdev);
+15
include/net/ip_tunnels.h
··· 611 611 int skb_tunnel_check_pmtu(struct sk_buff *skb, struct dst_entry *encap_dst, 612 612 int headroom, bool reply); 613 613 614 + static inline void ip_tunnel_adj_headroom(struct net_device *dev, 615 + unsigned int headroom) 616 + { 617 + /* we must cap headroom to some upperlimit, else pskb_expand_head 618 + * will overflow header offsets in skb_headers_offset_update(). 619 + */ 620 + const unsigned int max_allowed = 512; 621 + 622 + if (headroom > max_allowed) 623 + headroom = max_allowed; 624 + 625 + if (headroom > READ_ONCE(dev->needed_headroom)) 626 + WRITE_ONCE(dev->needed_headroom, headroom); 627 + } 628 + 614 629 int iptunnel_handle_offloads(struct sk_buff *skb, int gso_type_mask); 615 630 616 631 static inline int iptunnel_pull_offloads(struct sk_buff *skb)
+3
include/sound/tas2781.h
··· 120 120 TAS2570, 121 121 TAS2572, 122 122 TAS2781, 123 + TAS5802, 124 + TAS5815, 123 125 TAS5825, 124 126 TAS5827, 127 + TAS5828, 125 128 TAS_OTHERS, 126 129 }; 127 130
-21
include/uapi/drm/amdgpu_drm.h
··· 1555 1555 __u32 userq_num_slots; 1556 1556 }; 1557 1557 1558 - /* GFX metadata BO sizes and alignment info (in bytes) */ 1559 - struct drm_amdgpu_info_uq_fw_areas_gfx { 1560 - /* shadow area size */ 1561 - __u32 shadow_size; 1562 - /* shadow area base virtual mem alignment */ 1563 - __u32 shadow_alignment; 1564 - /* context save area size */ 1565 - __u32 csa_size; 1566 - /* context save area base virtual mem alignment */ 1567 - __u32 csa_alignment; 1568 - }; 1569 - 1570 - /* IP specific fw related information used in the 1571 - * subquery AMDGPU_INFO_UQ_FW_AREAS 1572 - */ 1573 - struct drm_amdgpu_info_uq_fw_areas { 1574 - union { 1575 - struct drm_amdgpu_info_uq_fw_areas_gfx gfx; 1576 - }; 1577 - }; 1578 - 1579 1558 struct drm_amdgpu_info_num_handles { 1580 1559 /** Max handles as supported by firmware for UVD */ 1581 1560 __u32 uvd_max_handles;
+3 -2
include/uapi/linux/kvm.h
··· 962 962 #define KVM_CAP_ARM_EL2_E2H0 241 963 963 #define KVM_CAP_RISCV_MP_STATE_RESET 242 964 964 #define KVM_CAP_ARM_CACHEABLE_PFNMAP_SUPPORTED 243 965 - #define KVM_CAP_GUEST_MEMFD_MMAP 244 965 + #define KVM_CAP_GUEST_MEMFD_FLAGS 244 966 966 967 967 struct kvm_irq_routing_irqchip { 968 968 __u32 irqchip; ··· 1599 1599 #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3) 1600 1600 1601 1601 #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd) 1602 - #define GUEST_MEMFD_FLAG_MMAP (1ULL << 0) 1602 + #define GUEST_MEMFD_FLAG_MMAP (1ULL << 0) 1603 + #define GUEST_MEMFD_FLAG_INIT_SHARED (1ULL << 1) 1603 1604 1604 1605 struct kvm_create_guest_memfd { 1605 1606 __u64 size;
+5
include/ufs/ufs.h
··· 651 651 u8 rtt_cap; /* bDeviceRTTCap */ 652 652 653 653 bool hid_sup; 654 + 655 + /* Unique device ID string (manufacturer+model+serial+version+date) */ 656 + char *device_id; 657 + u8 rpmb_io_size; 658 + u8 rpmb_region_size[4]; 654 659 }; 655 660 656 661 #endif /* End of Header */
+5 -7
include/ufs/ufshcd.h
··· 828 828 * @host: Scsi_Host instance of the driver 829 829 * @dev: device handle 830 830 * @ufs_device_wlun: WLUN that controls the entire UFS device. 831 + * @ufs_rpmb_wlun: RPMB WLUN SCSI device 831 832 * @hwmon_device: device instance registered with the hwmon core. 832 833 * @curr_dev_pwr_mode: active UFS device power mode. 833 834 * @uic_link_state: active state of the link to the UFS device. ··· 941 940 * @pm_qos_mutex: synchronizes PM QoS request and status updates 942 941 * @critical_health_count: count of critical health exceptions 943 942 * @dev_lvl_exception_count: count of device level exceptions since last reset 944 - * @dev_lvl_exception_id: vendor specific information about the 945 - * device level exception event. 943 + * @dev_lvl_exception_id: vendor specific information about the device level exception event. 944 + * @rpmbs: list of OP-TEE RPMB devices (one per RPMB region) 946 945 */ 947 946 struct ufs_hba { 948 947 void __iomem *mmio_base; ··· 960 959 struct Scsi_Host *host; 961 960 struct device *dev; 962 961 struct scsi_device *ufs_device_wlun; 962 + struct scsi_device *ufs_rpmb_wlun; 963 963 964 964 #ifdef CONFIG_SCSI_UFS_HWMON 965 965 struct device *hwmon_device; ··· 1114 1112 int critical_health_count; 1115 1113 atomic_t dev_lvl_exception_count; 1116 1114 u64 dev_lvl_exception_id; 1117 - 1118 1115 u32 vcc_off_delay_us; 1116 + struct list_head rpmbs; 1119 1117 }; 1120 1118 1121 1119 /** ··· 1429 1427 void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit); 1430 1428 void ufshcd_fixup_dev_quirks(struct ufs_hba *hba, 1431 1429 const struct ufs_dev_quirk *fixups); 1432 - #define SD_ASCII_STD true 1433 - #define SD_RAW false 1434 - int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index, 1435 - u8 **buf, bool ascii); 1436 1430 1437 1431 void ufshcd_hold(struct ufs_hba *hba); 1438 1432 void ufshcd_release(struct ufs_hba *hba);
+1 -7
io_uring/register.c
··· 421 421 if (unlikely(ret)) 422 422 return ret; 423 423 424 - /* nothing to do, but copy params back */ 425 - if (p.sq_entries == ctx->sq_entries && p.cq_entries == ctx->cq_entries) { 426 - if (copy_to_user(arg, &p, sizeof(p))) 427 - return -EFAULT; 428 - return 0; 429 - } 430 - 431 424 size = rings_size(p.flags, p.sq_entries, p.cq_entries, 432 425 &sq_array_offset); 433 426 if (size == SIZE_MAX) ··· 606 613 if (ret) 607 614 return ret; 608 615 if (copy_to_user(rd_uptr, &rd, sizeof(rd))) { 616 + guard(mutex)(&ctx->mmap_lock); 609 617 io_free_region(ctx, &ctx->param_region); 610 618 return -EFAULT; 611 619 }
+6 -2
io_uring/rw.c
··· 542 542 { 543 543 if (res == req->cqe.res) 544 544 return; 545 - if (res == -EAGAIN && io_rw_should_reissue(req)) { 545 + if ((res == -EOPNOTSUPP || res == -EAGAIN) && io_rw_should_reissue(req)) { 546 546 req->flags |= REQ_F_REISSUE | REQ_F_BL_NO_RECYCLE; 547 547 } else { 548 548 req_set_fail(req); ··· 655 655 if (ret >= 0 && req->flags & REQ_F_CUR_POS) 656 656 req->file->f_pos = rw->kiocb.ki_pos; 657 657 if (ret >= 0 && !(req->ctx->flags & IORING_SETUP_IOPOLL)) { 658 + u32 cflags = 0; 659 + 658 660 __io_complete_rw_common(req, ret); 659 661 /* 660 662 * Safe to call io_end from here as we're inline 661 663 * from the submission path. 662 664 */ 663 665 io_req_io_end(req); 664 - io_req_set_res(req, final_ret, io_put_kbuf(req, ret, sel->buf_list)); 666 + if (sel) 667 + cflags = io_put_kbuf(req, ret, sel->buf_list); 668 + io_req_set_res(req, final_ret, cflags); 665 669 io_req_rw_cleanup(req, issue_flags); 666 670 return IOU_COMPLETE; 667 671 } else {
+14 -11
kernel/bpf/helpers.c
··· 1215 1215 rcu_read_unlock_trace(); 1216 1216 } 1217 1217 1218 + static void bpf_async_cb_rcu_free(struct rcu_head *rcu) 1219 + { 1220 + struct bpf_async_cb *cb = container_of(rcu, struct bpf_async_cb, rcu); 1221 + 1222 + kfree_nolock(cb); 1223 + } 1224 + 1218 1225 static void bpf_wq_delete_work(struct work_struct *work) 1219 1226 { 1220 1227 struct bpf_work *w = container_of(work, struct bpf_work, delete_work); 1221 1228 1222 1229 cancel_work_sync(&w->work); 1223 1230 1224 - kfree_rcu(w, cb.rcu); 1231 + call_rcu(&w->cb.rcu, bpf_async_cb_rcu_free); 1225 1232 } 1226 1233 1227 1234 static void bpf_timer_delete_work(struct work_struct *work) ··· 1237 1230 1238 1231 /* Cancel the timer and wait for callback to complete if it was running. 1239 1232 * If hrtimer_cancel() can be safely called it's safe to call 1240 - * kfree_rcu(t) right after for both preallocated and non-preallocated 1233 + * call_rcu() right after for both preallocated and non-preallocated 1241 1234 * maps. The async->cb = NULL was already done and no code path can see 1242 1235 * address 't' anymore. Timer if armed for existing bpf_hrtimer before 1243 1236 * bpf_timer_cancel_and_free will have been cancelled. 1244 1237 */ 1245 1238 hrtimer_cancel(&t->timer); 1246 - kfree_rcu(t, cb.rcu); 1239 + call_rcu(&t->cb.rcu, bpf_async_cb_rcu_free); 1247 1240 } 1248 1241 1249 1242 static int __bpf_async_init(struct bpf_async_kern *async, struct bpf_map *map, u64 flags, ··· 1277 1270 goto out; 1278 1271 } 1279 1272 1280 - /* Allocate via bpf_map_kmalloc_node() for memcg accounting. Until 1281 - * kmalloc_nolock() is available, avoid locking issues by using 1282 - * __GFP_HIGH (GFP_ATOMIC & ~__GFP_RECLAIM). 1283 - */ 1284 - cb = bpf_map_kmalloc_node(map, size, __GFP_HIGH, map->numa_node); 1273 + cb = bpf_map_kmalloc_nolock(map, size, 0, map->numa_node); 1285 1274 if (!cb) { 1286 1275 ret = -ENOMEM; 1287 1276 goto out; ··· 1318 1315 * or pinned in bpffs. 
1319 1316 */ 1320 1317 WRITE_ONCE(async->cb, NULL); 1321 - kfree(cb); 1318 + kfree_nolock(cb); 1322 1319 ret = -EPERM; 1323 1320 } 1324 1321 out: ··· 1583 1580 * timer _before_ calling us, such that failing to cancel it here will 1584 1581 * cause it to possibly use struct hrtimer after freeing bpf_hrtimer. 1585 1582 * Therefore, we _need_ to cancel any outstanding timers before we do 1586 - * kfree_rcu, even though no more timers can be armed. 1583 + * call_rcu, even though no more timers can be armed. 1587 1584 * 1588 1585 * Moreover, we need to schedule work even if timer does not belong to 1589 1586 * the calling callback_fn, as on two different CPUs, we can end up in a ··· 1610 1607 * completion. 1611 1608 */ 1612 1609 if (hrtimer_try_to_cancel(&t->timer) >= 0) 1613 - kfree_rcu(t, cb.rcu); 1610 + call_rcu(&t->cb.rcu, bpf_async_cb_rcu_free); 1614 1611 else 1615 1612 queue_work(system_dfl_wq, &t->cb.delete_work); 1616 1613 } else {
+3 -1
kernel/bpf/liveness.c
··· 195 195 return ERR_PTR(-ENOMEM); 196 196 result->must_write_set = kvcalloc(subprog_sz, sizeof(*result->must_write_set), 197 197 GFP_KERNEL_ACCOUNT); 198 - if (!result->must_write_set) 198 + if (!result->must_write_set) { 199 + kvfree(result); 199 200 return ERR_PTR(-ENOMEM); 201 + } 200 202 memcpy(&result->callchain, callchain, sizeof(*callchain)); 201 203 result->insn_cnt = subprog_sz; 202 204 hash_add(liveness->func_instances, &result->hl_node, key);
+15
kernel/bpf/syscall.c
··· 520 520 return ptr; 521 521 } 522 522 523 + void *bpf_map_kmalloc_nolock(const struct bpf_map *map, size_t size, gfp_t flags, 524 + int node) 525 + { 526 + struct mem_cgroup *memcg, *old_memcg; 527 + void *ptr; 528 + 529 + memcg = bpf_map_get_memcg(map); 530 + old_memcg = set_active_memcg(memcg); 531 + ptr = kmalloc_nolock(size, flags | __GFP_ACCOUNT, node); 532 + set_active_memcg(old_memcg); 533 + mem_cgroup_put(memcg); 534 + 535 + return ptr; 536 + } 537 + 523 538 void *bpf_map_kzalloc(const struct bpf_map *map, size_t size, gfp_t flags) 524 539 { 525 540 struct mem_cgroup *memcg, *old_memcg;
+4 -4
kernel/events/core.c
··· 9403 9403 flags |= MAP_HUGETLB; 9404 9404 9405 9405 if (file) { 9406 - struct inode *inode; 9406 + const struct inode *inode; 9407 9407 dev_t dev; 9408 9408 9409 9409 buf = kmalloc(PATH_MAX, GFP_KERNEL); ··· 9416 9416 * need to add enough zero bytes after the string to handle 9417 9417 * the 64bit alignment we do later. 9418 9418 */ 9419 - name = file_path(file, buf, PATH_MAX - sizeof(u64)); 9419 + name = d_path(file_user_path(file), buf, PATH_MAX - sizeof(u64)); 9420 9420 if (IS_ERR(name)) { 9421 9421 name = "//toolong"; 9422 9422 goto cpy_name; 9423 9423 } 9424 - inode = file_inode(vma->vm_file); 9424 + inode = file_user_inode(vma->vm_file); 9425 9425 dev = inode->i_sb->s_dev; 9426 9426 ino = inode->i_ino; 9427 9427 gen = inode->i_generation; ··· 9492 9492 if (!filter->path.dentry) 9493 9493 return false; 9494 9494 9495 - if (d_inode(filter->path.dentry) != file_inode(file)) 9495 + if (d_inode(filter->path.dentry) != file_user_inode(file)) 9496 9496 return false; 9497 9497 9498 9498 if (filter->offset > offset + size)
+3 -3
kernel/events/uprobes.c
··· 2765 2765 2766 2766 handler_chain(uprobe, regs); 2767 2767 2768 + /* Try to optimize after first hit. */ 2769 + arch_uprobe_optimize(&uprobe->arch, bp_vaddr); 2770 + 2768 2771 /* 2769 2772 * If user decided to take execution elsewhere, it makes little sense 2770 2773 * to execute the original instruction, so let's skip it. 2771 2774 */ 2772 2775 if (instruction_pointer(regs) != bp_vaddr) 2773 2776 goto out; 2774 - 2775 - /* Try to optimize after first hit. */ 2776 - arch_uprobe_optimize(&uprobe->arch, bp_vaddr); 2777 2777 2778 2778 if (arch_uprobe_skip_sstep(&uprobe->arch, regs)) 2779 2779 goto out;
+2
kernel/sched/core.c
··· 8571 8571 sched_tick_stop(cpu); 8572 8572 8573 8573 rq_lock_irqsave(rq, &rf); 8574 + update_rq_clock(rq); 8574 8575 if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) { 8575 8576 WARN(true, "Dying CPU not properly vacated!"); 8576 8577 dump_rq_tasks(rq, KERN_WARNING); 8577 8578 } 8579 + dl_server_stop(&rq->fair_server); 8578 8580 rq_unlock_irqrestore(rq, &rf); 8579 8581 8580 8582 calc_load_migrate(rq);
+3
kernel/sched/deadline.c
··· 1582 1582 if (!dl_server(dl_se) || dl_se->dl_server_active) 1583 1583 return; 1584 1584 1585 + if (WARN_ON_ONCE(!cpu_online(cpu_of(rq)))) 1586 + return; 1587 + 1585 1588 dl_se->dl_server_active = 1; 1586 1589 enqueue_dl_entity(dl_se, ENQUEUE_WAKEUP); 1587 1590 if (!dl_task(dl_se->rq->curr) || dl_entity_preempt(dl_se, &rq->curr->dl))
+13 -13
kernel/sched/fair.c
··· 8920 8920 return p; 8921 8921 8922 8922 idle: 8923 - if (!rf) 8924 - return NULL; 8923 + if (rf) { 8924 + new_tasks = sched_balance_newidle(rq, rf); 8925 8925 8926 - new_tasks = sched_balance_newidle(rq, rf); 8926 + /* 8927 + * Because sched_balance_newidle() releases (and re-acquires) 8928 + * rq->lock, it is possible for any higher priority task to 8929 + * appear. In that case we must re-start the pick_next_entity() 8930 + * loop. 8931 + */ 8932 + if (new_tasks < 0) 8933 + return RETRY_TASK; 8927 8934 8928 - /* 8929 - * Because sched_balance_newidle() releases (and re-acquires) rq->lock, it is 8930 - * possible for any higher priority task to appear. In that case we 8931 - * must re-start the pick_next_entity() loop. 8932 - */ 8933 - if (new_tasks < 0) 8934 - return RETRY_TASK; 8935 - 8936 - if (new_tasks > 0) 8937 - goto again; 8935 + if (new_tasks > 0) 8936 + goto again; 8937 + } 8938 8938 8939 8939 /* 8940 8940 * rq is about to be idle, check if we need to update the
+12 -4
mm/slub.c
··· 2170 2170 struct slabobj_ext *obj_exts; 2171 2171 2172 2172 obj_exts = slab_obj_exts(slab); 2173 - if (!obj_exts) 2173 + if (!obj_exts) { 2174 + /* 2175 + * If obj_exts allocation failed, slab->obj_exts is set to 2176 + * OBJEXTS_ALLOC_FAIL. In this case, we end up here and should 2177 + * clear the flag. 2178 + */ 2179 + slab->obj_exts = 0; 2174 2180 return; 2181 + } 2175 2182 2176 2183 /* 2177 2184 * obj_exts was created with __GFP_NO_OBJ_EXT flag, therefore its ··· 6450 6443 slab = virt_to_slab(x); 6451 6444 s = slab->slab_cache; 6452 6445 6446 + /* Point 'x' back to the beginning of allocated object */ 6447 + x -= s->offset; 6448 + 6453 6449 /* 6454 6450 * We used freepointer in 'x' to link 'x' into df->objects. 6455 6451 * Clear it to NULL to avoid false positive detection 6456 6452 * of "Freepointer corruption". 6457 6453 */ 6458 - *(void **)x = NULL; 6454 + set_freepointer(s, x, NULL); 6459 6455 6460 - /* Point 'x' back to the beginning of allocated object */ 6461 - x -= s->offset; 6462 6456 __slab_free(s, slab, x, x, 1, _THIS_IP_); 6463 6457 } 6464 6458
+7 -18
net/bpf/test_run.c
··· 29 29 #include <trace/events/bpf_test_run.h> 30 30 31 31 struct bpf_test_timer { 32 - enum { NO_PREEMPT, NO_MIGRATE } mode; 33 32 u32 i; 34 33 u64 time_start, time_spent; 35 34 }; ··· 36 37 static void bpf_test_timer_enter(struct bpf_test_timer *t) 37 38 __acquires(rcu) 38 39 { 39 - rcu_read_lock(); 40 - if (t->mode == NO_PREEMPT) 41 - preempt_disable(); 42 - else 43 - migrate_disable(); 44 - 40 + rcu_read_lock_dont_migrate(); 45 41 t->time_start = ktime_get_ns(); 46 42 } 47 43 ··· 44 50 __releases(rcu) 45 51 { 46 52 t->time_start = 0; 47 - 48 - if (t->mode == NO_PREEMPT) 49 - preempt_enable(); 50 - else 51 - migrate_enable(); 52 - rcu_read_unlock(); 53 + rcu_read_unlock_migrate(); 53 54 } 54 55 55 56 static bool bpf_test_timer_continue(struct bpf_test_timer *t, int iterations, ··· 363 374 364 375 { 365 376 struct xdp_test_data xdp = { .batch_size = batch_size }; 366 - struct bpf_test_timer t = { .mode = NO_MIGRATE }; 377 + struct bpf_test_timer t = {}; 367 378 int ret; 368 379 369 380 if (!repeat) ··· 393 404 struct bpf_prog_array_item item = {.prog = prog}; 394 405 struct bpf_run_ctx *old_ctx; 395 406 struct bpf_cg_run_ctx run_ctx; 396 - struct bpf_test_timer t = { NO_MIGRATE }; 407 + struct bpf_test_timer t = {}; 397 408 enum bpf_cgroup_storage_type stype; 398 409 int ret; 399 410 ··· 1258 1269 goto free_ctx; 1259 1270 1260 1271 if (kattr->test.data_size_in - meta_sz < ETH_HLEN) 1261 - return -EINVAL; 1272 + goto free_ctx; 1262 1273 1263 1274 data = bpf_test_init(kattr, linear_sz, max_linear_sz, headroom, tailroom); 1264 1275 if (IS_ERR(data)) { ··· 1366 1377 const union bpf_attr *kattr, 1367 1378 union bpf_attr __user *uattr) 1368 1379 { 1369 - struct bpf_test_timer t = { NO_PREEMPT }; 1380 + struct bpf_test_timer t = {}; 1370 1381 u32 size = kattr->test.data_size_in; 1371 1382 struct bpf_flow_dissector ctx = {}; 1372 1383 u32 repeat = kattr->test.repeat; ··· 1434 1445 int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kattr, 1435 
1446 union bpf_attr __user *uattr) 1436 1447 { 1437 - struct bpf_test_timer t = { NO_PREEMPT }; 1448 + struct bpf_test_timer t = {}; 1438 1449 struct bpf_prog_array *progs = NULL; 1439 1450 struct bpf_sk_lookup_kern ctx = {}; 1440 1451 u32 repeat = kattr->test.repeat;
+2
net/can/j1939/main.c
··· 378 378 j1939_ecu_unmap_all(priv); 379 379 break; 380 380 case NETDEV_UNREGISTER: 381 + j1939_cancel_active_session(priv, NULL); 382 + j1939_sk_netdev_event_netdown(priv); 381 383 j1939_sk_netdev_event_unregister(priv); 382 384 break; 383 385 }
+35 -5
net/core/dev.c
··· 12176 12176 } 12177 12177 } 12178 12178 12179 + /* devices must be UP and netdev_lock()'d */ 12180 + static void netif_close_many_and_unlock(struct list_head *close_head) 12181 + { 12182 + struct net_device *dev, *tmp; 12183 + 12184 + netif_close_many(close_head, false); 12185 + 12186 + /* ... now unlock them */ 12187 + list_for_each_entry_safe(dev, tmp, close_head, close_list) { 12188 + netdev_unlock(dev); 12189 + list_del_init(&dev->close_list); 12190 + } 12191 + } 12192 + 12193 + static void netif_close_many_and_unlock_cond(struct list_head *close_head) 12194 + { 12195 + #ifdef CONFIG_LOCKDEP 12196 + /* We can only track up to MAX_LOCK_DEPTH locks per task. 12197 + * 12198 + * Reserve half the available slots for additional locks possibly 12199 + * taken by notifiers and (soft)irqs. 12200 + */ 12201 + unsigned int limit = MAX_LOCK_DEPTH / 2; 12202 + 12203 + if (lockdep_depth(current) > limit) 12204 + netif_close_many_and_unlock(close_head); 12205 + #endif 12206 + } 12207 + 12179 12208 void unregister_netdevice_many_notify(struct list_head *head, 12180 12209 u32 portid, const struct nlmsghdr *nlh) 12181 12210 { ··· 12237 12208 12238 12209 /* If device is running, close it first. Start with ops locked... */ 12239 12210 list_for_each_entry(dev, head, unreg_list) { 12211 + if (!(dev->flags & IFF_UP)) 12212 + continue; 12240 12213 if (netdev_need_ops_lock(dev)) { 12241 12214 list_add_tail(&dev->close_list, &close_head); 12242 12215 netdev_lock(dev); 12243 12216 } 12217 + netif_close_many_and_unlock_cond(&close_head); 12244 12218 } 12245 - netif_close_many(&close_head, true); 12246 - /* ... now unlock them and go over the rest. */ 12219 + netif_close_many_and_unlock(&close_head); 12220 + /* ... now go over the rest. 
*/ 12247 12221 list_for_each_entry(dev, head, unreg_list) { 12248 - if (netdev_need_ops_lock(dev)) 12249 - netdev_unlock(dev); 12250 - else 12222 + if (!netdev_need_ops_lock(dev)) 12251 12223 list_add_tail(&dev->close_list, &close_head); 12252 12224 } 12253 12225 netif_close_many(&close_head, true);
+10
net/core/gro_cells.c
··· 8 8 struct gro_cell { 9 9 struct sk_buff_head napi_skbs; 10 10 struct napi_struct napi; 11 + local_lock_t bh_lock; 11 12 }; 12 13 13 14 int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb) 14 15 { 15 16 struct net_device *dev = skb->dev; 17 + bool have_bh_lock = false; 16 18 struct gro_cell *cell; 17 19 int res; 18 20 ··· 27 25 goto unlock; 28 26 } 29 27 28 + local_lock_nested_bh(&gcells->cells->bh_lock); 29 + have_bh_lock = true; 30 30 cell = this_cpu_ptr(gcells->cells); 31 31 32 32 if (skb_queue_len(&cell->napi_skbs) > READ_ONCE(net_hotdata.max_backlog)) { ··· 42 38 __skb_queue_tail(&cell->napi_skbs, skb); 43 39 if (skb_queue_len(&cell->napi_skbs) == 1) 44 40 napi_schedule(&cell->napi); 41 + 42 + if (have_bh_lock) 43 + local_unlock_nested_bh(&gcells->cells->bh_lock); 45 44 46 45 res = NET_RX_SUCCESS; 47 46 ··· 61 54 struct sk_buff *skb; 62 55 int work_done = 0; 63 56 57 + __local_lock_nested_bh(&cell->bh_lock); 64 58 while (work_done < budget) { 65 59 skb = __skb_dequeue(&cell->napi_skbs); 66 60 if (!skb) ··· 72 64 73 65 if (work_done < budget) 74 66 napi_complete_done(napi, work_done); 67 + __local_unlock_nested_bh(&cell->bh_lock); 75 68 return work_done; 76 69 } 77 70 ··· 88 79 struct gro_cell *cell = per_cpu_ptr(gcells->cells, i); 89 80 90 81 __skb_queue_head_init(&cell->napi_skbs); 82 + local_lock_init(&cell->bh_lock); 91 83 92 84 set_bit(NAPI_STATE_NO_BUSY_POLL, &cell->napi.state); 93 85
+1
net/core/skbuff.c
··· 7200 7200 7201 7201 DEBUG_NET_WARN_ON_ONCE(skb_dst(skb)); 7202 7202 DEBUG_NET_WARN_ON_ONCE(skb->destructor); 7203 + DEBUG_NET_WARN_ON_ONCE(skb_nfct(skb)); 7203 7204 7204 7205 sdn = per_cpu_ptr(net_hotdata.skb_defer_nodes, cpu) + numa_node_id(); 7205 7206
-14
net/ipv4/ip_tunnel.c
··· 568 568 return 0; 569 569 } 570 570 571 - static void ip_tunnel_adj_headroom(struct net_device *dev, unsigned int headroom) 572 - { 573 - /* we must cap headroom to some upperlimit, else pskb_expand_head 574 - * will overflow header offsets in skb_headers_offset_update(). 575 - */ 576 - static const unsigned int max_allowed = 512; 577 - 578 - if (headroom > max_allowed) 579 - headroom = max_allowed; 580 - 581 - if (headroom > READ_ONCE(dev->needed_headroom)) 582 - WRITE_ONCE(dev->needed_headroom, headroom); 583 - } 584 - 585 571 void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev, 586 572 u8 proto, int tunnel_hlen) 587 573 {
+15 -4
net/ipv4/tcp_output.c
··· 2369 2369 u32 max_segs) 2370 2370 { 2371 2371 const struct inet_connection_sock *icsk = inet_csk(sk); 2372 - u32 send_win, cong_win, limit, in_flight; 2372 + u32 send_win, cong_win, limit, in_flight, threshold; 2373 + u64 srtt_in_ns, expected_ack, how_far_is_the_ack; 2373 2374 struct tcp_sock *tp = tcp_sk(sk); 2374 2375 struct sk_buff *head; 2375 2376 int win_divisor; ··· 2432 2431 head = tcp_rtx_queue_head(sk); 2433 2432 if (!head) 2434 2433 goto send_now; 2435 - delta = tp->tcp_clock_cache - head->tstamp; 2436 - /* If next ACK is likely to come too late (half srtt), do not defer */ 2437 - if ((s64)(delta - (u64)NSEC_PER_USEC * (tp->srtt_us >> 4)) < 0) 2434 + 2435 + srtt_in_ns = (u64)(NSEC_PER_USEC >> 3) * tp->srtt_us; 2436 + /* When is the ACK expected ? */ 2437 + expected_ack = head->tstamp + srtt_in_ns; 2438 + /* How far from now is the ACK expected ? */ 2439 + how_far_is_the_ack = expected_ack - tp->tcp_clock_cache; 2440 + 2441 + /* If next ACK is likely to come too late, 2442 + * ie in more than min(1ms, half srtt), do not defer. 2443 + */ 2444 + threshold = min(srtt_in_ns >> 1, NSEC_PER_MSEC); 2445 + 2446 + if ((s64)(how_far_is_the_ack - threshold) > 0) 2438 2447 goto send_now; 2439 2448 2440 2449 /* Ok, it looks like it is advisable to defer.
-2
net/ipv4/udp.c
··· 1851 1851 sk_peek_offset_bwd(sk, len); 1852 1852 1853 1853 if (!skb_shared(skb)) { 1854 - if (unlikely(udp_skb_has_head_state(skb))) 1855 - skb_release_head_state(skb); 1856 1854 skb_attempt_defer_free(skb); 1857 1855 return; 1858 1856 }
+1 -2
net/ipv6/ip6_tunnel.c
··· 1257 1257 */ 1258 1258 max_headroom = LL_RESERVED_SPACE(tdev) + sizeof(struct ipv6hdr) 1259 1259 + dst->header_len + t->hlen; 1260 - if (max_headroom > READ_ONCE(dev->needed_headroom)) 1261 - WRITE_ONCE(dev->needed_headroom, max_headroom); 1260 + ip_tunnel_adj_headroom(dev, max_headroom); 1262 1261 1263 1262 err = ip6_tnl_encap(skb, t, &proto, fl6); 1264 1263 if (err)
+2 -5
net/tls/tls_main.c
··· 255 255 if (msg->msg_flags & MSG_MORE) 256 256 return -EINVAL; 257 257 258 - rc = tls_handle_open_record(sk, msg->msg_flags); 259 - if (rc) 260 - return rc; 261 - 262 258 *record_type = *(unsigned char *)CMSG_DATA(cmsg); 263 - rc = 0; 259 + 260 + rc = tls_handle_open_record(sk, msg->msg_flags); 264 261 break; 265 262 default: 266 263 return -EINVAL;
+25 -6
net/tls/tls_sw.c
··· 1054 1054 if (ret == -EINPROGRESS) 1055 1055 num_async++; 1056 1056 else if (ret != -EAGAIN) 1057 - goto send_end; 1057 + goto end; 1058 1058 } 1059 1059 } 1060 1060 ··· 1112 1112 goto send_end; 1113 1113 tls_ctx->pending_open_record_frags = true; 1114 1114 1115 - if (sk_msg_full(msg_pl)) 1115 + if (sk_msg_full(msg_pl)) { 1116 1116 full_record = true; 1117 + sk_msg_trim(sk, msg_en, 1118 + msg_pl->sg.size + prot->overhead_size); 1119 + } 1117 1120 1118 1121 if (full_record || eor) 1119 1122 goto copied; ··· 1152 1149 } else if (ret != -EAGAIN) 1153 1150 goto send_end; 1154 1151 } 1152 + 1153 + /* Transmit if any encryptions have completed */ 1154 + if (test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask)) { 1155 + cancel_delayed_work(&ctx->tx_work.work); 1156 + tls_tx_records(sk, msg->msg_flags); 1157 + } 1158 + 1155 1159 continue; 1156 1160 rollback_iter: 1157 1161 copied -= try_to_copy; ··· 1214 1204 goto send_end; 1215 1205 } 1216 1206 } 1207 + 1208 + /* Transmit if any encryptions have completed */ 1209 + if (test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask)) { 1210 + cancel_delayed_work(&ctx->tx_work.work); 1211 + tls_tx_records(sk, msg->msg_flags); 1212 + } 1217 1213 } 1218 1214 1219 1215 continue; ··· 1239 1223 goto alloc_encrypted; 1240 1224 } 1241 1225 1226 + send_end: 1242 1227 if (!num_async) { 1243 - goto send_end; 1228 + goto end; 1244 1229 } else if (num_zc || eor) { 1245 1230 int err; 1246 1231 ··· 1259 1242 tls_tx_records(sk, msg->msg_flags); 1260 1243 } 1261 1244 1262 - send_end: 1245 + end: 1263 1246 ret = sk_stream_error(sk, msg->msg_flags, ret); 1264 1247 return copied > 0 ? copied : ret; 1265 1248 } ··· 1654 1637 1655 1638 if (unlikely(darg->async)) { 1656 1639 err = tls_strp_msg_hold(&ctx->strp, &ctx->async_hold); 1657 - if (err) 1658 - __skb_queue_tail(&ctx->async_hold, darg->skb); 1640 + if (err) { 1641 + err = tls_decrypt_async_wait(ctx); 1642 + darg->async = false; 1643 + } 1659 1644 return err; 1660 1645 } 1661 1646
+1 -1
rust/kernel/alloc/kvec.rs
··· 9 9 }; 10 10 use crate::{ 11 11 fmt, 12 - page::AsPageIter, 12 + page::AsPageIter, // 13 13 }; 14 14 use core::{ 15 15 borrow::{Borrow, BorrowMut},
+8 -2
rust/kernel/bitmap.rs
··· 166 166 fn deref(&self) -> &Bitmap { 167 167 let ptr = if self.nbits <= BITS_PER_LONG { 168 168 // SAFETY: Bitmap is represented inline. 169 - unsafe { core::ptr::addr_of!(self.repr.bitmap) } 169 + #[allow(unused_unsafe, reason = "Safe since Rust 1.92.0")] 170 + unsafe { 171 + core::ptr::addr_of!(self.repr.bitmap) 172 + } 170 173 } else { 171 174 // SAFETY: Bitmap is represented as array of `unsigned long`. 172 175 unsafe { self.repr.ptr.as_ptr() } ··· 185 182 fn deref_mut(&mut self) -> &mut Bitmap { 186 183 let ptr = if self.nbits <= BITS_PER_LONG { 187 184 // SAFETY: Bitmap is represented inline. 188 - unsafe { core::ptr::addr_of_mut!(self.repr.bitmap) } 185 + #[allow(unused_unsafe, reason = "Safe since Rust 1.92.0")] 186 + unsafe { 187 + core::ptr::addr_of_mut!(self.repr.bitmap) 188 + } 189 189 } else { 190 190 // SAFETY: Bitmap is represented as array of `unsigned long`. 191 191 unsafe { self.repr.ptr.as_ptr() }
+1 -2
rust/kernel/cpufreq.rs
··· 38 38 const CPUFREQ_NAME_LEN: usize = bindings::CPUFREQ_NAME_LEN as usize; 39 39 40 40 /// Default transition latency value in nanoseconds. 41 - pub const DEFAULT_TRANSITION_LATENCY_NS: u32 = 42 - bindings::CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS; 41 + pub const DEFAULT_TRANSITION_LATENCY_NS: u32 = bindings::CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS; 43 42 44 43 /// CPU frequency driver flags. 45 44 pub mod flags {
+2 -2
rust/kernel/fs/file.rs
··· 448 448 } 449 449 } 450 450 451 - /// Represents the `EBADF` error code. 451 + /// Represents the [`EBADF`] error code. 452 452 /// 453 - /// Used for methods that can only fail with `EBADF`. 453 + /// Used for methods that can only fail with [`EBADF`]. 454 454 #[derive(Copy, Clone, Eq, PartialEq)] 455 455 pub struct BadFdError; 456 456
+1 -1
sound/firewire/amdtp-stream.h
··· 32 32 * allows 5 times as large as IEC 61883-6 defines. 33 33 * @CIP_HEADER_WITHOUT_EOH: Only for in-stream. CIP Header doesn't include 34 34 * valid EOH. 35 - * @CIP_NO_HEADERS: a lack of headers in packets 35 + * @CIP_NO_HEADER: a lack of headers in packets 36 36 * @CIP_UNALIGHED_DBC: Only for in-stream. The value of dbc is not alighed to 37 37 * the value of current SYT_INTERVAL; e.g. initial value is not zero. 38 38 * @CIP_UNAWARE_SYT: For outgoing packet, the value in SYT field of CIP is 0xffff.
+2
sound/hda/codecs/realtek/alc269.c
··· 6397 6397 SND_PCI_QUIRK(0x103c, 0x854a, "HP EliteBook 830 G6", ALC285_FIXUP_HP_GPIO_LED), 6398 6398 SND_PCI_QUIRK(0x103c, 0x85c6, "HP Pavilion x360 Convertible 14-dy1xxx", ALC295_FIXUP_HP_MUTE_LED_COEFBIT11), 6399 6399 SND_PCI_QUIRK(0x103c, 0x85de, "HP Envy x360 13-ar0xxx", ALC285_FIXUP_HP_ENVY_X360), 6400 + SND_PCI_QUIRK(0x103c, 0x8603, "HP Omen 17-cb0xxx", ALC285_FIXUP_HP_MUTE_LED), 6401 + SND_PCI_QUIRK(0x103c, 0x860c, "HP ZBook 17 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT), 6400 6402 SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT), 6401 6403 SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT), 6402 6404 SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+2
sound/hda/codecs/side-codecs/cs35l41_hda.c
··· 1410 1410 1411 1411 if (cs35l41_dsm_supported(handle, CS35L41_DSM_GET_MUTE)) { 1412 1412 ret = acpi_evaluate_dsm(handle, &guid, 0, CS35L41_DSM_GET_MUTE, NULL); 1413 + if (!ret) 1414 + return -EINVAL; 1413 1415 mute = *ret->buffer.pointer; 1414 1416 dev_dbg(cs35l41->dev, "CS35L41_DSM_GET_MUTE: %d\n", mute); 1415 1417 }
+4
sound/hda/codecs/side-codecs/hda_component.c
··· 174 174 sm->match_str = match_str; 175 175 sm->index = i; 176 176 component_match_add(dev, &match, hda_comp_match_dev_name, sm); 177 + if (IS_ERR(match)) { 178 + codec_err(cdc, "Fail to add component %ld\n", PTR_ERR(match)); 179 + return PTR_ERR(match); 180 + } 177 181 } 178 182 179 183 ret = component_master_add_with_match(dev, ops, match);
+1
sound/hda/codecs/side-codecs/tas2781_hda_i2c.c
··· 669 669 */ 670 670 device_name = "TXNW5825"; 671 671 hda_priv->hda_chip_id = HDA_TAS5825; 672 + tas_hda->priv->chip_id = TAS5825; 672 673 } else { 673 674 return -ENODEV; 674 675 }
+1
sound/hda/controllers/intel.c
··· 2075 2075 { PCI_DEVICE_SUB(0x1022, 0x1487, 0x1043, 0x874f) }, /* ASUS ROG Zenith II / Strix */ 2076 2076 { PCI_DEVICE_SUB(0x1022, 0x1487, 0x1462, 0xcb59) }, /* MSI TRX40 Creator */ 2077 2077 { PCI_DEVICE_SUB(0x1022, 0x1487, 0x1462, 0xcb60) }, /* MSI TRX40 */ 2078 + { PCI_DEVICE_SUB(0x1022, 0x15e3, 0x1462, 0xee59) }, /* MSI X870E Tomahawk WiFi */ 2078 2079 {} 2079 2080 }; 2080 2081
+1 -1
sound/soc/amd/acp/acp-sdw-sof-mach.c
··· 176 176 cpus->dai_name = devm_kasprintf(dev, GFP_KERNEL, 177 177 "SDW%d Pin%d", 178 178 link_num, cpu_pin_id); 179 - dev_dbg(dev, "cpu->dai_name:%s\n", cpus->dai_name); 180 179 if (!cpus->dai_name) 181 180 return -ENOMEM; 181 + dev_dbg(dev, "cpu->dai_name:%s\n", cpus->dai_name); 182 182 183 183 codec_maps[j].cpu = 0; 184 184 codec_maps[j].codec = j;
+6 -6
sound/soc/codecs/idt821034.c
··· 548 548 return ret; 549 549 } 550 550 551 - static const DECLARE_TLV_DB_LINEAR(idt821034_gain_in, -6520, 1306); 552 - #define IDT821034_GAIN_IN_MIN_RAW 1 /* -65.20 dB -> 10^(-65.2/20.0) * 1820 = 1 */ 553 - #define IDT821034_GAIN_IN_MAX_RAW 8191 /* 13.06 dB -> 10^(13.06/20.0) * 1820 = 8191 */ 551 + static const DECLARE_TLV_DB_LINEAR(idt821034_gain_in, -300, 1300); 552 + #define IDT821034_GAIN_IN_MIN_RAW 1288 /* -3.0 dB -> 10^(-3.0/20.0) * 1820 = 1288 */ 553 + #define IDT821034_GAIN_IN_MAX_RAW 8130 /* 13.0 dB -> 10^(13.0/20.0) * 1820 = 8130 */ 554 554 #define IDT821034_GAIN_IN_INIT_RAW 1820 /* 0dB -> 10^(0/20) * 1820 = 1820 */ 555 555 556 - static const DECLARE_TLV_DB_LINEAR(idt821034_gain_out, -6798, 1029); 557 - #define IDT821034_GAIN_OUT_MIN_RAW 1 /* -67.98 dB -> 10^(-67.98/20.0) * 2506 = 1*/ 558 - #define IDT821034_GAIN_OUT_MAX_RAW 8191 /* 10.29 dB -> 10^(10.29/20.0) * 2506 = 8191 */ 556 + static const DECLARE_TLV_DB_LINEAR(idt821034_gain_out, -1300, 300); 557 + #define IDT821034_GAIN_OUT_MIN_RAW 561 /* -13.0 dB -> 10^(-13.0/20.0) * 2506 = 561 */ 558 + #define IDT821034_GAIN_OUT_MAX_RAW 3540 /* 3.0 dB -> 10^(3.0/20.0) * 2506 = 3540 */ 559 559 #define IDT821034_GAIN_OUT_INIT_RAW 2506 /* 0dB -> 10^(0/20) * 2506 = 2506 */ 560 560 561 561 static const struct snd_kcontrol_new idt821034_controls[] = {
+4 -2
sound/soc/codecs/max98090.c
··· 1234 1234 SND_SOC_DAPM_INPUT("DMIC4"), 1235 1235 1236 1236 SND_SOC_DAPM_SUPPLY("DMIC3_ENA", M98090_REG_DIGITAL_MIC_ENABLE, 1237 - M98090_DIGMIC3_SHIFT, 0, NULL, 0), 1237 + M98090_DIGMIC3_SHIFT, 0, max98090_shdn_event, 1238 + SND_SOC_DAPM_POST_PMU), 1238 1239 SND_SOC_DAPM_SUPPLY("DMIC4_ENA", M98090_REG_DIGITAL_MIC_ENABLE, 1239 - M98090_DIGMIC4_SHIFT, 0, NULL, 0), 1240 + M98090_DIGMIC4_SHIFT, 0, max98090_shdn_event, 1241 + SND_SOC_DAPM_POST_PMU), 1240 1242 }; 1241 1243 1242 1244 static const struct snd_soc_dapm_route max98090_dapm_routes[] = {
+82 -51
sound/soc/codecs/nau8821.c
··· 26 26 #include <sound/tlv.h> 27 27 #include "nau8821.h" 28 28 29 - #define NAU8821_JD_ACTIVE_HIGH BIT(0) 29 + #define NAU8821_QUIRK_JD_ACTIVE_HIGH BIT(0) 30 + #define NAU8821_QUIRK_JD_DB_BYPASS BIT(1) 30 31 31 32 static int nau8821_quirk; 32 33 static int quirk_override = -1; ··· 1022 1021 return active_high == is_high; 1023 1022 } 1024 1023 1025 - static void nau8821_int_status_clear_all(struct regmap *regmap) 1024 + static void nau8821_irq_status_clear(struct regmap *regmap, int active_irq) 1026 1025 { 1027 - int active_irq, clear_irq, i; 1026 + int clear_irq, i; 1028 1027 1029 - /* Reset the intrruption status from rightmost bit if the corres- 1030 - * ponding irq event occurs. 1028 + if (active_irq) { 1029 + regmap_write(regmap, NAU8821_R11_INT_CLR_KEY_STATUS, active_irq); 1030 + return; 1031 + } 1032 + 1033 + /* Reset the interruption status from rightmost bit if the 1034 + * corresponding irq event occurs. 1031 1035 */ 1032 1036 regmap_read(regmap, NAU8821_R10_IRQ_STATUS, &active_irq); 1033 1037 for (i = 0; i < NAU8821_REG_DATA_LEN; i++) { ··· 1058 1052 snd_soc_component_disable_pin(component, "MICBIAS"); 1059 1053 snd_soc_dapm_sync(dapm); 1060 1054 1061 - /* Clear all interruption status */ 1062 - nau8821_int_status_clear_all(regmap); 1063 - 1064 - /* Enable the insertion interruption, disable the ejection inter- 1065 - * ruption, and then bypass de-bounce circuit. 
1066 - */ 1055 + /* Disable & mask both insertion & ejection IRQs */ 1067 1056 regmap_update_bits(regmap, NAU8821_R12_INTERRUPT_DIS_CTRL, 1068 - NAU8821_IRQ_EJECT_DIS | NAU8821_IRQ_INSERT_DIS, 1069 - NAU8821_IRQ_EJECT_DIS); 1070 - /* Mask unneeded IRQs: 1 - disable, 0 - enable */ 1057 + NAU8821_IRQ_INSERT_DIS | NAU8821_IRQ_EJECT_DIS, 1058 + NAU8821_IRQ_INSERT_DIS | NAU8821_IRQ_EJECT_DIS); 1071 1059 regmap_update_bits(regmap, NAU8821_R0F_INTERRUPT_MASK, 1072 - NAU8821_IRQ_EJECT_EN | NAU8821_IRQ_INSERT_EN, 1073 - NAU8821_IRQ_EJECT_EN); 1060 + NAU8821_IRQ_INSERT_EN | NAU8821_IRQ_EJECT_EN, 1061 + NAU8821_IRQ_INSERT_EN | NAU8821_IRQ_EJECT_EN); 1074 1062 1063 + /* Clear all interruption status */ 1064 + nau8821_irq_status_clear(regmap, 0); 1065 + 1066 + /* Enable & unmask the insertion IRQ */ 1067 + regmap_update_bits(regmap, NAU8821_R12_INTERRUPT_DIS_CTRL, 1068 + NAU8821_IRQ_INSERT_DIS, 0); 1069 + regmap_update_bits(regmap, NAU8821_R0F_INTERRUPT_MASK, 1070 + NAU8821_IRQ_INSERT_EN, 0); 1071 + 1072 + /* Bypass de-bounce circuit */ 1075 1073 regmap_update_bits(regmap, NAU8821_R0D_JACK_DET_CTRL, 1076 1074 NAU8821_JACK_DET_DB_BYPASS, NAU8821_JACK_DET_DB_BYPASS); 1077 1075 ··· 1099 1089 NAU8821_IRQ_KEY_RELEASE_DIS | 1100 1090 NAU8821_IRQ_KEY_PRESS_DIS); 1101 1091 } 1102 - 1103 1092 } 1104 1093 1105 1094 static void nau8821_jdet_work(struct work_struct *work) 1106 1095 { 1107 1096 struct nau8821 *nau8821 = 1108 - container_of(work, struct nau8821, jdet_work); 1097 + container_of(work, struct nau8821, jdet_work.work); 1109 1098 struct snd_soc_dapm_context *dapm = nau8821->dapm; 1110 1099 struct snd_soc_component *component = snd_soc_dapm_to_component(dapm); 1111 1100 struct regmap *regmap = nau8821->regmap; 1112 1101 int jack_status_reg, mic_detected, event = 0, event_mask = 0; 1113 - 1114 - snd_soc_component_force_enable_pin(component, "MICBIAS"); 1115 - snd_soc_dapm_sync(dapm); 1116 - msleep(20); 1117 1102 1118 1103 regmap_read(regmap, NAU8821_R58_I2C_DEVICE_ID, 
&jack_status_reg); 1119 1104 mic_detected = !(jack_status_reg & NAU8821_KEYDET); ··· 1142 1137 snd_soc_component_disable_pin(component, "MICBIAS"); 1143 1138 snd_soc_dapm_sync(dapm); 1144 1139 } 1140 + 1145 1141 event_mask |= SND_JACK_HEADSET; 1146 1142 snd_soc_jack_report(nau8821->jack, event, event_mask); 1147 1143 } ··· 1151 1145 static void nau8821_setup_inserted_irq(struct nau8821 *nau8821) 1152 1146 { 1153 1147 struct regmap *regmap = nau8821->regmap; 1148 + 1149 + /* Disable & mask insertion IRQ */ 1150 + regmap_update_bits(regmap, NAU8821_R12_INTERRUPT_DIS_CTRL, 1151 + NAU8821_IRQ_INSERT_DIS, NAU8821_IRQ_INSERT_DIS); 1152 + regmap_update_bits(regmap, NAU8821_R0F_INTERRUPT_MASK, 1153 + NAU8821_IRQ_INSERT_EN, NAU8821_IRQ_INSERT_EN); 1154 + 1155 + /* Clear insert IRQ status */ 1156 + nau8821_irq_status_clear(regmap, NAU8821_JACK_INSERT_DETECTED); 1154 1157 1155 1158 /* Enable internal VCO needed for interruptions */ 1156 1159 if (nau8821->dapm->bias_level < SND_SOC_BIAS_PREPARE) ··· 1175 1160 regmap_update_bits(regmap, NAU8821_R1D_I2S_PCM_CTRL2, 1176 1161 NAU8821_I2S_MS_MASK, NAU8821_I2S_MS_SLAVE); 1177 1162 1178 - /* Not bypass de-bounce circuit */ 1179 - regmap_update_bits(regmap, NAU8821_R0D_JACK_DET_CTRL, 1180 - NAU8821_JACK_DET_DB_BYPASS, 0); 1163 + /* Do not bypass de-bounce circuit */ 1164 + if (!(nau8821_quirk & NAU8821_QUIRK_JD_DB_BYPASS)) 1165 + regmap_update_bits(regmap, NAU8821_R0D_JACK_DET_CTRL, 1166 + NAU8821_JACK_DET_DB_BYPASS, 0); 1181 1167 1168 + /* Unmask & enable the ejection IRQs */ 1182 1169 regmap_update_bits(regmap, NAU8821_R0F_INTERRUPT_MASK, 1183 - NAU8821_IRQ_EJECT_EN, 0); 1170 + NAU8821_IRQ_EJECT_EN, 0); 1184 1171 regmap_update_bits(regmap, NAU8821_R12_INTERRUPT_DIS_CTRL, 1185 - NAU8821_IRQ_EJECT_DIS, 0); 1172 + NAU8821_IRQ_EJECT_DIS, 0); 1186 1173 } 1187 1174 1188 1175 static irqreturn_t nau8821_interrupt(int irq, void *data) 1189 1176 { 1190 1177 struct nau8821 *nau8821 = (struct nau8821 *)data; 1191 1178 struct regmap *regmap = 
nau8821->regmap; 1192 - int active_irq, clear_irq = 0, event = 0, event_mask = 0; 1179 + struct snd_soc_component *component; 1180 + int active_irq, event = 0, event_mask = 0; 1193 1181 1194 1182 if (regmap_read(regmap, NAU8821_R10_IRQ_STATUS, &active_irq)) { 1195 1183 dev_err(nau8821->dev, "failed to read irq status\n"); ··· 1203 1185 1204 1186 if ((active_irq & NAU8821_JACK_EJECT_IRQ_MASK) == 1205 1187 NAU8821_JACK_EJECT_DETECTED) { 1188 + cancel_delayed_work_sync(&nau8821->jdet_work); 1206 1189 regmap_update_bits(regmap, NAU8821_R71_ANALOG_ADC_1, 1207 1190 NAU8821_MICDET_MASK, NAU8821_MICDET_DIS); 1208 1191 nau8821_eject_jack(nau8821); 1209 1192 event_mask |= SND_JACK_HEADSET; 1210 - clear_irq = NAU8821_JACK_EJECT_IRQ_MASK; 1211 1193 } else if (active_irq & NAU8821_KEY_SHORT_PRESS_IRQ) { 1212 1194 event |= NAU8821_BUTTON; 1213 1195 event_mask |= NAU8821_BUTTON; 1214 - clear_irq = NAU8821_KEY_SHORT_PRESS_IRQ; 1196 + nau8821_irq_status_clear(regmap, NAU8821_KEY_SHORT_PRESS_IRQ); 1215 1197 } else if (active_irq & NAU8821_KEY_RELEASE_IRQ) { 1216 1198 event_mask = NAU8821_BUTTON; 1217 - clear_irq = NAU8821_KEY_RELEASE_IRQ; 1199 + nau8821_irq_status_clear(regmap, NAU8821_KEY_RELEASE_IRQ); 1218 1200 } else if ((active_irq & NAU8821_JACK_INSERT_IRQ_MASK) == 1219 1201 NAU8821_JACK_INSERT_DETECTED) { 1202 + cancel_delayed_work_sync(&nau8821->jdet_work); 1220 1203 regmap_update_bits(regmap, NAU8821_R71_ANALOG_ADC_1, 1221 1204 NAU8821_MICDET_MASK, NAU8821_MICDET_EN); 1222 1205 if (nau8821_is_jack_inserted(regmap)) { 1223 - /* detect microphone and jack type */ 1224 - cancel_work_sync(&nau8821->jdet_work); 1225 - schedule_work(&nau8821->jdet_work); 1206 + /* Detect microphone and jack type */ 1207 + component = snd_soc_dapm_to_component(nau8821->dapm); 1208 + snd_soc_component_force_enable_pin(component, "MICBIAS"); 1209 + snd_soc_dapm_sync(nau8821->dapm); 1210 + schedule_delayed_work(&nau8821->jdet_work, msecs_to_jiffies(20)); 1226 1211 /* Turn off insertion interruption at 
manual mode */ 1227 - regmap_update_bits(regmap, 1228 - NAU8821_R12_INTERRUPT_DIS_CTRL, 1229 - NAU8821_IRQ_INSERT_DIS, 1230 - NAU8821_IRQ_INSERT_DIS); 1231 - regmap_update_bits(regmap, 1232 - NAU8821_R0F_INTERRUPT_MASK, 1233 - NAU8821_IRQ_INSERT_EN, 1234 - NAU8821_IRQ_INSERT_EN); 1235 1212 nau8821_setup_inserted_irq(nau8821); 1236 1213 } else { 1237 1214 dev_warn(nau8821->dev, 1238 1215 "Inserted IRQ fired but not connected\n"); 1239 1216 nau8821_eject_jack(nau8821); 1240 1217 } 1218 + } else { 1219 + /* Clear the rightmost interrupt */ 1220 + nau8821_irq_status_clear(regmap, active_irq); 1241 1221 } 1242 - 1243 - if (!clear_irq) 1244 - clear_irq = active_irq; 1245 - /* clears the rightmost interruption */ 1246 - regmap_write(regmap, NAU8821_R11_INT_CLR_KEY_STATUS, clear_irq); 1247 1222 1248 1223 if (event_mask) 1249 1224 snd_soc_jack_report(nau8821->jack, event, event_mask); ··· 1532 1521 nau8821_configure_sysclk(nau8821, NAU8821_CLK_DIS, 0); 1533 1522 if (nau8821->irq) { 1534 1523 /* Clear all interruption status */ 1535 - nau8821_int_status_clear_all(regmap); 1524 + nau8821_irq_status_clear(regmap, 0); 1536 1525 1537 1526 /* Enable both insertion and ejection interruptions, and then 1538 1527 * bypass de-bounce circuit. 
··· 1662 1651 1663 1652 nau8821->jack = jack; 1664 1653 /* Initiate jack detection work queue */ 1665 - INIT_WORK(&nau8821->jdet_work, nau8821_jdet_work); 1654 + INIT_DELAYED_WORK(&nau8821->jdet_work, nau8821_jdet_work); 1655 + 1666 1656 ret = devm_request_threaded_irq(nau8821->dev, nau8821->irq, NULL, 1667 1657 nau8821_interrupt, IRQF_TRIGGER_LOW | IRQF_ONESHOT, 1668 1658 "nau8821", nau8821); ··· 1868 1856 DMI_MATCH(DMI_SYS_VENDOR, "Positivo Tecnologia SA"), 1869 1857 DMI_MATCH(DMI_BOARD_NAME, "CW14Q01P-V2"), 1870 1858 }, 1871 - .driver_data = (void *)(NAU8821_JD_ACTIVE_HIGH), 1859 + .driver_data = (void *)(NAU8821_QUIRK_JD_ACTIVE_HIGH), 1860 + }, 1861 + { 1862 + /* Valve Steam Deck LCD */ 1863 + .matches = { 1864 + DMI_MATCH(DMI_SYS_VENDOR, "Valve"), 1865 + DMI_MATCH(DMI_PRODUCT_NAME, "Jupiter"), 1866 + }, 1867 + .driver_data = (void *)(NAU8821_QUIRK_JD_DB_BYPASS), 1868 + }, 1869 + { 1870 + /* Valve Steam Deck OLED */ 1871 + .matches = { 1872 + DMI_MATCH(DMI_SYS_VENDOR, "Valve"), 1873 + DMI_MATCH(DMI_PRODUCT_NAME, "Galileo"), 1874 + }, 1875 + .driver_data = (void *)(NAU8821_QUIRK_JD_DB_BYPASS), 1872 1876 }, 1873 1877 {} 1874 1878 }; ··· 1926 1898 1927 1899 nau8821_check_quirks(); 1928 1900 1929 - if (nau8821_quirk & NAU8821_JD_ACTIVE_HIGH) 1901 + if (nau8821_quirk & NAU8821_QUIRK_JD_ACTIVE_HIGH) 1930 1902 nau8821->jkdet_polarity = 0; 1903 + 1904 + if (nau8821_quirk & NAU8821_QUIRK_JD_DB_BYPASS) 1905 + dev_dbg(dev, "Force bypassing jack detection debounce circuit\n"); 1931 1906 1932 1907 nau8821_print_device_properties(nau8821); 1933 1908
+1 -1
sound/soc/codecs/nau8821.h
··· 561 561 struct regmap *regmap; 562 562 struct snd_soc_dapm_context *dapm; 563 563 struct snd_soc_jack *jack; 564 - struct work_struct jdet_work; 564 + struct delayed_work jdet_work; 565 565 int irq; 566 566 int clk_id; 567 567 int micbias_voltage;
+19 -2
sound/soc/codecs/tas2781-i2c.c
··· 108 108 { "tas2570", TAS2570 }, 109 109 { "tas2572", TAS2572 }, 110 110 { "tas2781", TAS2781 }, 111 + { "tas5802", TAS5802 }, 112 + { "tas5815", TAS5815 }, 111 113 { "tas5825", TAS5825 }, 112 114 { "tas5827", TAS5827 }, 115 + { "tas5828", TAS5828 }, 113 116 {} 114 117 }; 115 118 MODULE_DEVICE_TABLE(i2c, tasdevice_id); ··· 127 124 { .compatible = "ti,tas2570" }, 128 125 { .compatible = "ti,tas2572" }, 129 126 { .compatible = "ti,tas2781" }, 127 + { .compatible = "ti,tas5802" }, 128 + { .compatible = "ti,tas5815" }, 130 129 { .compatible = "ti,tas5825" }, 131 130 { .compatible = "ti,tas5827" }, 131 + { .compatible = "ti,tas5828" }, 132 132 {}, 133 133 }; 134 134 MODULE_DEVICE_TABLE(of, tasdevice_of_match); ··· 1671 1665 } 1672 1666 tas_priv->fw_state = TASDEVICE_DSP_FW_ALL_OK; 1673 1667 1674 - /* There is no calibration required for TAS5825/TAS5827. */ 1675 - if (tas_priv->chip_id < TAS5825) { 1668 + /* There is no calibration required for 1669 + * TAS5802/TAS5815/TAS5825/TAS5827/TAS5828. 1670 + */ 1671 + if (tas_priv->chip_id < TAS5802) { 1676 1672 ret = tasdevice_create_cali_ctrls(tas_priv); 1677 1673 if (ret) { 1678 1674 dev_err(tas_priv->dev, "cali controls error\n"); ··· 1728 1720 switch (tas_priv->chip_id) { 1729 1721 case TAS2563: 1730 1722 case TAS2781: 1723 + case TAS5802: 1724 + case TAS5815: 1731 1725 case TAS5825: 1732 1726 case TAS5827: 1727 + case TAS5828: 1733 1728 /* If DSP FW fail, DSP kcontrol won't be created. 
*/ 1734 1729 tasdevice_dsp_remove(tas_priv); 1735 1730 } ··· 1893 1882 p = (struct snd_kcontrol_new *)tas2781_snd_controls; 1894 1883 size = ARRAY_SIZE(tas2781_snd_controls); 1895 1884 break; 1885 + case TAS5802: 1886 + case TAS5815: 1896 1887 case TAS5825: 1897 1888 case TAS5827: 1889 + case TAS5828: 1898 1890 p = (struct snd_kcontrol_new *)tas5825_snd_controls; 1899 1891 size = ARRAY_SIZE(tas5825_snd_controls); 1900 1892 break; ··· 2068 2054 { "TXNW2570", TAS2570 }, 2069 2055 { "TXNW2572", TAS2572 }, 2070 2056 { "TXNW2781", TAS2781 }, 2057 + { "TXNW5802", TAS5802 }, 2058 + { "TXNW5815", TAS5815 }, 2071 2059 { "TXNW5825", TAS5825 }, 2072 2060 { "TXNW5827", TAS5827 }, 2061 + { "TXNW5828", TAS5828 }, 2073 2062 {}, 2074 2063 }; 2075 2064
+4 -18
sound/soc/codecs/wcd938x-sdw.c
··· 1207 1207 regcache_cache_only(wcd->regmap, true); 1208 1208 } 1209 1209 1210 - pm_runtime_set_autosuspend_delay(dev, 3000); 1211 - pm_runtime_use_autosuspend(dev); 1212 - pm_runtime_mark_last_busy(dev); 1213 - pm_runtime_set_active(dev); 1214 - pm_runtime_enable(dev); 1215 - 1216 1210 ret = component_add(dev, &wcd_sdw_component_ops); 1217 1211 if (ret) 1218 - goto err_disable_rpm; 1212 + return ret; 1213 + 1214 + /* Set suspended until the aggregate device is bound */ 1215 + pm_runtime_set_suspended(dev); 1219 1216 1220 1217 return 0; 1221 - 1222 - err_disable_rpm: 1223 - pm_runtime_disable(dev); 1224 - pm_runtime_set_suspended(dev); 1225 - pm_runtime_dont_use_autosuspend(dev); 1226 - 1227 - return ret; 1228 1218 } 1229 1219 1230 1220 static int wcd9380_remove(struct sdw_slave *pdev) ··· 1222 1232 struct device *dev = &pdev->dev; 1223 1233 1224 1234 component_del(dev, &wcd_sdw_component_ops); 1225 - 1226 - pm_runtime_disable(dev); 1227 - pm_runtime_set_suspended(dev); 1228 - pm_runtime_dont_use_autosuspend(dev); 1229 1235 1230 1236 return 0; 1231 1237 }
+1
sound/soc/qcom/sc8280xp.c
··· 192 192 193 193 static const struct of_device_id snd_sc8280xp_dt_match[] = { 194 194 {.compatible = "qcom,qcm6490-idp-sndcard", "qcm6490"}, 195 + {.compatible = "qcom,qcs615-sndcard", "qcs615"}, 195 196 {.compatible = "qcom,qcs6490-rb3gen2-sndcard", "qcs6490"}, 196 197 {.compatible = "qcom,qcs8275-sndcard", "qcs8300"}, 197 198 {.compatible = "qcom,qcs9075-sndcard", "sa8775p"},
+20
sound/soc/sdw_utils/soc_sdw_utils.c
··· 312 312 .dai_num = 1, 313 313 }, 314 314 { 315 + .part_id = 0x1321, 316 + .dais = { 317 + { 318 + .direction = {true, false}, 319 + .dai_name = "rt1320-aif1", 320 + .component_name = "rt1320", 321 + .dai_type = SOC_SDW_DAI_TYPE_AMP, 322 + .dailink = {SOC_SDW_AMP_OUT_DAI_ID, SOC_SDW_UNUSED_DAI_ID}, 323 + .init = asoc_sdw_rt_amp_init, 324 + .exit = asoc_sdw_rt_amp_exit, 325 + .rtd_init = asoc_sdw_rt_amp_spk_rtd_init, 326 + .controls = generic_spk_controls, 327 + .num_controls = ARRAY_SIZE(generic_spk_controls), 328 + .widgets = generic_spk_widgets, 329 + .num_widgets = ARRAY_SIZE(generic_spk_widgets), 330 + }, 331 + }, 332 + .dai_num = 1, 333 + }, 334 + { 315 335 .part_id = 0x714, 316 336 .version_id = 3, 317 337 .ignore_internal_dmic = true,
+8 -2
sound/usb/card.c
··· 891 891 */ 892 892 static int try_to_register_card(struct snd_usb_audio *chip, int ifnum) 893 893 { 894 + struct usb_interface *iface; 895 + 894 896 if (check_delayed_register_option(chip) == ifnum || 895 - chip->last_iface == ifnum || 896 - usb_interface_claimed(usb_ifnum_to_if(chip->dev, chip->last_iface))) 897 + chip->last_iface == ifnum) 897 898 return snd_card_register(chip->card); 899 + 900 + iface = usb_ifnum_to_if(chip->dev, chip->last_iface); 901 + if (iface && usb_interface_claimed(iface)) 902 + return snd_card_register(chip->card); 903 + 898 904 return 0; 899 905 } 900 906
+15
sound/usb/mixer.c
··· 1147 1147 } 1148 1148 break; 1149 1149 1150 + case USB_ID(0x045e, 0x070f): /* MS LifeChat LX-3000 Headset */ 1151 + if (!strcmp(kctl->id.name, "Speaker Playback Volume")) { 1152 + usb_audio_info(chip, 1153 + "set volume quirk for MS LifeChat LX-3000\n"); 1154 + cval->res = 192; 1155 + } 1156 + break; 1157 + 1150 1158 case USB_ID(0x0471, 0x0101): 1151 1159 case USB_ID(0x0471, 0x0104): 1152 1160 case USB_ID(0x0471, 0x0105): ··· 1195 1187 usb_audio_info(chip, 1196 1188 "set volume quirk for MOONDROP Quark2\n"); 1197 1189 cval->min = -14208; /* Mute under it */ 1190 + } 1191 + break; 1192 + case USB_ID(0x12d1, 0x3a07): /* Huawei Technologies Co., Ltd. CM-Q3 */ 1193 + if (!strcmp(kctl->id.name, "PCM Playback Volume")) { 1194 + usb_audio_info(chip, 1195 + "set volume quirk for Huawei Technologies Co., Ltd. CM-Q3\n"); 1196 + cval->min = -11264; /* Mute under it */ 1198 1197 } 1199 1198 break; 1200 1199 }
+5
sound/usb/quirks.c
··· 2153 2153 DEVICE_FLG(0x045e, 0x083c, /* MS USB Link headset */ 2154 2154 QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_CTL_MSG_DELAY | 2155 2155 QUIRK_FLAG_DISABLE_AUTOSUSPEND), 2156 + DEVICE_FLG(0x045e, 0x070f, /* MS LifeChat LX-3000 Headset */ 2157 + QUIRK_FLAG_MIXER_PLAYBACK_MIN_MUTE), 2156 2158 DEVICE_FLG(0x046d, 0x0807, /* Logitech Webcam C500 */ 2157 2159 QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384), 2158 2160 DEVICE_FLG(0x046d, 0x0808, /* Logitech Webcam C600 */ ··· 2182 2180 QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384), 2183 2181 DEVICE_FLG(0x046d, 0x09a4, /* Logitech QuickCam E 3500 */ 2184 2182 QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_IGNORE_CTL_ERROR), 2183 + DEVICE_FLG(0x046d, 0x0a8f, /* Logitech H390 headset */ 2184 + QUIRK_FLAG_CTL_MSG_DELAY_1M | 2185 + QUIRK_FLAG_MIXER_PLAYBACK_MIN_MUTE), 2185 2186 DEVICE_FLG(0x0499, 0x1506, /* Yamaha THR5 */ 2186 2187 QUIRK_FLAG_GENERIC_IMPLICIT_FB), 2187 2188 DEVICE_FLG(0x0499, 0x1509, /* Steinberg UR22 */
+9 -3
tools/testing/selftests/bpf/prog_tests/arg_parsing.c
··· 144 144 if (!ASSERT_OK(ferror(fp), "prepare tmp")) 145 145 goto out_fclose; 146 146 147 + if (!ASSERT_OK(fsync(fileno(fp)), "fsync tmp")) 148 + goto out_fclose; 149 + 147 150 init_test_filter_set(&set); 148 151 149 - ASSERT_OK(parse_test_list_file(tmpfile, &set, true), "parse file"); 152 + if (!ASSERT_OK(parse_test_list_file(tmpfile, &set, true), "parse file")) 153 + goto out_fclose; 150 154 151 - ASSERT_EQ(set.cnt, 4, "test count"); 155 + if (!ASSERT_EQ(set.cnt, 4, "test count")) 156 + goto out_free_set; 157 + 152 158 ASSERT_OK(strcmp("test_with_spaces", set.tests[0].name), "test 0 name"); 153 159 ASSERT_EQ(set.tests[0].subtest_cnt, 0, "test 0 subtest count"); 154 160 ASSERT_OK(strcmp("testA", set.tests[1].name), "test 1 name"); ··· 164 158 ASSERT_OK(strcmp("testB", set.tests[2].name), "test 2 name"); 165 159 ASSERT_OK(strcmp("testC_no_eof_newline", set.tests[3].name), "test 3 name"); 166 160 161 + out_free_set: 167 162 free_test_filter_set(&set); 168 - 169 163 out_fclose: 170 164 fclose(fp); 171 165 out_remove:
+7 -7
tools/testing/selftests/bpf/progs/verifier_global_ptr_args.c
··· 225 225 } 226 226 227 227 char mem[16]; 228 - u32 off; 228 + u32 offset; 229 229 230 230 SEC("tp_btf/sys_enter") 231 231 __success ··· 240 240 /* scalar to untrusted */ 241 241 subprog_untrusted(0); 242 242 /* variable offset to untrusted (map) */ 243 - subprog_untrusted((void *)mem + off); 243 + subprog_untrusted((void *)mem + offset); 244 244 /* variable offset to untrusted (trusted) */ 245 - subprog_untrusted((void *)bpf_get_current_task_btf() + off); 245 + subprog_untrusted((void *)bpf_get_current_task_btf() + offset); 246 246 return 0; 247 247 } 248 248 ··· 298 298 /* scalar to untrusted mem */ 299 299 subprog_void_untrusted(0); 300 300 /* variable offset to untrusted mem (map) */ 301 - subprog_void_untrusted((void *)mem + off); 301 + subprog_void_untrusted((void *)mem + offset); 302 302 /* variable offset to untrusted mem (trusted) */ 303 - subprog_void_untrusted(bpf_get_current_task_btf() + off); 303 + subprog_void_untrusted(bpf_get_current_task_btf() + offset); 304 304 /* variable offset to untrusted char/enum (map) */ 305 - subprog_char_untrusted(mem + off); 306 - subprog_enum_untrusted((void *)mem + off); 305 + subprog_char_untrusted(mem + offset); 306 + subprog_enum_untrusted((void *)mem + offset); 307 307 return 0; 308 308 } 309 309
+29 -11
tools/testing/selftests/drivers/net/hw/lib/py/__init__.py
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 + """ 4 + Driver test environment (hardware-only tests). 5 + NetDrvEnv and NetDrvEpEnv are the main environment classes. 6 + Former is for local host only tests, latter creates / connects 7 + to a remote endpoint. See NIPA wiki for more information about 8 + running and writing driver tests. 9 + """ 10 + 3 11 import sys 4 12 from pathlib import Path 5 13 ··· 16 8 try: 17 9 sys.path.append(KSFT_DIR.as_posix()) 18 10 19 - from net.lib.py import * 20 - from drivers.net.lib.py import * 21 - 22 11 # Import one by one to avoid pylint false positives 12 + from net.lib.py import NetNS, NetNSEnter, NetdevSimDev 23 13 from net.lib.py import EthtoolFamily, NetdevFamily, NetshaperFamily, \ 24 14 NlError, RtnlFamily, DevlinkFamily, PSPFamily 25 15 from net.lib.py import CmdExitFailure 26 - from net.lib.py import bkg, cmd, defer, ethtool, fd_read_timeout, ip, \ 27 - rand_port, tool, wait_port_listen 28 - from net.lib.py import fd_read_timeout 16 + from net.lib.py import bkg, cmd, bpftool, bpftrace, defer, ethtool, \ 17 + fd_read_timeout, ip, rand_port, wait_port_listen, wait_file 29 18 from net.lib.py import KsftSkipEx, KsftFailEx, KsftXfailEx 30 19 from net.lib.py import ksft_disruptive, ksft_exit, ksft_pr, ksft_run, \ 31 20 ksft_setup 32 21 from net.lib.py import ksft_eq, ksft_ge, ksft_in, ksft_is, ksft_lt, \ 33 22 ksft_ne, ksft_not_in, ksft_raises, ksft_true, ksft_gt, ksft_not_none 34 - from net.lib.py import NetNSEnter 35 - from drivers.net.lib.py import GenerateTraffic 23 + from drivers.net.lib.py import GenerateTraffic, Remote 36 24 from drivers.net.lib.py import NetDrvEnv, NetDrvEpEnv 25 + 26 + __all__ = ["NetNS", "NetNSEnter", "NetdevSimDev", 27 + "EthtoolFamily", "NetdevFamily", "NetshaperFamily", 28 + "NlError", "RtnlFamily", "DevlinkFamily", "PSPFamily", 29 + "CmdExitFailure", 30 + "bkg", "cmd", "bpftool", "bpftrace", "defer", "ethtool", 31 + "fd_read_timeout", "ip", "rand_port", 32 + "wait_port_listen", "wait_file", 33 
+ "KsftSkipEx", "KsftFailEx", "KsftXfailEx", 34 + "ksft_disruptive", "ksft_exit", "ksft_pr", "ksft_run", 35 + "ksft_setup", 36 + "ksft_eq", "ksft_ge", "ksft_in", "ksft_is", "ksft_lt", 37 + "ksft_ne", "ksft_not_in", "ksft_raises", "ksft_true", "ksft_gt", 38 + "ksft_not_none", "ksft_not_none", 39 + "NetDrvEnv", "NetDrvEpEnv", "GenerateTraffic", "Remote"] 37 40 except ModuleNotFoundError as e: 38 - ksft_pr("Failed importing `net` library from kernel sources") 39 - ksft_pr(str(e)) 40 - ktap_result(True, comment="SKIP") 41 + print("Failed importing `net` library from kernel sources") 42 + print(str(e)) 41 43 sys.exit(4)
+2 -2
tools/testing/selftests/drivers/net/lib/py/__init__.py
··· 22 22 NlError, RtnlFamily, DevlinkFamily, PSPFamily 23 23 from net.lib.py import CmdExitFailure 24 24 from net.lib.py import bkg, cmd, bpftool, bpftrace, defer, ethtool, \ 25 - fd_read_timeout, ip, rand_port, tool, wait_port_listen, wait_file 25 + fd_read_timeout, ip, rand_port, wait_port_listen, wait_file 26 26 from net.lib.py import KsftSkipEx, KsftFailEx, KsftXfailEx 27 27 from net.lib.py import ksft_disruptive, ksft_exit, ksft_pr, ksft_run, \ 28 28 ksft_setup ··· 34 34 "NlError", "RtnlFamily", "DevlinkFamily", "PSPFamily", 35 35 "CmdExitFailure", 36 36 "bkg", "cmd", "bpftool", "bpftrace", "defer", "ethtool", 37 - "fd_read_timeout", "ip", "rand_port", "tool", 37 + "fd_read_timeout", "ip", "rand_port", 38 38 "wait_port_listen", "wait_file", 39 39 "KsftSkipEx", "KsftFailEx", "KsftXfailEx", 40 40 "ksft_disruptive", "ksft_exit", "ksft_pr", "ksft_run",
+55
tools/testing/selftests/hid/tests/test_multitouch.py
··· 1752 1752 assert evdev.slots[0][libevdev.EV_ABS.ABS_MT_TRACKING_ID] == -1 1753 1753 1754 1754 1755 + @pytest.mark.skip_if_uhdev( 1756 + lambda uhdev: "Confidence" not in uhdev.fields, 1757 + "Device not compatible, missing Confidence usage", 1758 + ) 1759 + def test_mt_confidence_bad_multi_release(self): 1760 + """Check for the sticky finger being properly detected. 1761 + 1762 + We first inject 3 fingers, then release only the second. 1763 + After 100 ms, we should receive a generated event about the 1764 + 2 missing fingers being released. 1765 + """ 1766 + uhdev = self.uhdev 1767 + evdev = uhdev.get_evdev() 1768 + 1769 + # send 3 touches 1770 + t0 = Touch(1, 50, 10) 1771 + t1 = Touch(2, 150, 100) 1772 + t2 = Touch(3, 250, 200) 1773 + r = uhdev.event([t0, t1, t2]) 1774 + events = uhdev.next_sync_events() 1775 + self.debug_reports(r, uhdev, events) 1776 + 1777 + # release the second 1778 + t1.tipswitch = False 1779 + r = uhdev.event([t1]) 1780 + events = uhdev.next_sync_events() 1781 + self.debug_reports(r, uhdev, events) 1782 + 1783 + # only the second is released 1784 + assert evdev.slots[0][libevdev.EV_ABS.ABS_MT_TRACKING_ID] != -1 1785 + assert evdev.slots[1][libevdev.EV_ABS.ABS_MT_TRACKING_ID] == -1 1786 + assert evdev.slots[2][libevdev.EV_ABS.ABS_MT_TRACKING_ID] != -1 1787 + 1788 + # wait for the timer to kick in 1789 + time.sleep(0.2) 1790 + 1791 + events = uhdev.next_sync_events() 1792 + self.debug_reports([], uhdev, events) 1793 + 1794 + # now all 3 fingers are released 1795 + assert libevdev.InputEvent(libevdev.EV_KEY.BTN_TOUCH, 0) in events 1796 + assert evdev.slots[0][libevdev.EV_ABS.ABS_MT_TRACKING_ID] == -1 1797 + assert evdev.slots[1][libevdev.EV_ABS.ABS_MT_TRACKING_ID] == -1 1798 + assert evdev.slots[2][libevdev.EV_ABS.ABS_MT_TRACKING_ID] == -1 1799 + 1800 + 1755 1801 class TestElanXPS9360(BaseTest.TestWin8Multitouch): 1756 1802 def create_device(self): 1757 1803 return Digitizer( ··· 2131 2085 physical="Vendor Usage 1", 2132 2086 
input_info=(BusType.I2C, 0x06CB, 0xCE08), 2133 2087 rdesc="05 01 09 02 a1 01 85 02 09 01 a1 00 05 09 19 01 29 02 15 00 25 01 75 01 95 02 81 02 95 06 81 01 05 01 09 30 09 31 15 81 25 7f 75 08 95 02 81 06 c0 c0 05 01 09 02 a1 01 85 18 09 01 a1 00 05 09 19 01 29 03 46 00 00 15 00 25 01 75 01 95 03 81 02 95 05 81 01 05 01 09 30 09 31 15 81 25 7f 75 08 95 02 81 06 c0 c0 06 00 ff 09 02 a1 01 85 20 09 01 a1 00 09 03 15 00 26 ff 00 35 00 46 ff 00 75 08 95 05 81 02 c0 c0 05 0d 09 05 a1 01 85 03 05 0d 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 95 01 75 03 25 05 09 51 81 02 75 01 95 03 81 03 05 01 15 00 26 f8 04 75 10 55 0e 65 11 09 30 35 00 46 24 04 95 01 81 02 46 30 02 26 a0 02 09 31 81 02 c0 05 0d 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 95 01 75 03 25 05 09 51 81 02 75 01 95 03 81 03 05 01 15 00 26 f8 04 75 10 55 0e 65 11 09 30 35 00 46 24 04 95 01 81 02 46 30 02 26 a0 02 09 31 81 02 c0 05 0d 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 95 01 75 03 25 05 09 51 81 02 75 01 95 03 81 03 05 01 15 00 26 f8 04 75 10 55 0e 65 11 09 30 35 00 46 24 04 95 01 81 02 46 30 02 26 a0 02 09 31 81 02 c0 05 0d 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 95 01 75 03 25 05 09 51 81 02 75 01 95 03 81 03 05 01 15 00 26 f8 04 75 10 55 0e 65 11 09 30 35 00 46 24 04 95 01 81 02 46 30 02 26 a0 02 09 31 81 02 c0 05 0d 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 95 01 75 03 25 05 09 51 81 02 75 01 95 03 81 03 05 01 15 00 26 f8 04 75 10 55 0e 65 11 09 30 35 00 46 24 04 95 01 81 02 46 30 02 26 a0 02 09 31 81 02 c0 05 0d 55 0c 66 01 10 47 ff ff 00 00 27 ff ff 00 00 75 10 95 01 09 56 81 02 09 54 25 7f 95 01 75 08 81 02 05 09 09 01 25 01 75 01 95 01 81 02 95 07 81 03 05 0d 85 08 09 55 09 59 75 04 95 02 25 0f b1 02 85 0d 09 60 75 01 95 01 15 00 25 01 b1 02 95 07 b1 03 85 07 06 00 ff 09 c5 15 00 26 ff 00 75 08 96 00 01 b1 02 c0 05 0d 09 0e a1 01 85 04 09 22 a1 02 09 52 15 00 25 0a 75 08 95 01 b1 02 c0 09 22 a1 00 85 06 09 57 09 58 75 01 95 02 25 01 b1 02 
95 06 b1 03 c0 c0 06 00 ff 09 01 a1 01 85 09 09 02 15 00 26 ff 00 75 08 95 14 91 02 85 0a 09 03 15 00 26 ff 00 75 08 95 14 91 02 85 0b 09 04 15 00 26 ff 00 75 08 95 45 81 02 85 0c 09 05 15 00 26 ff 00 75 08 95 45 81 02 85 0f 09 06 15 00 26 ff 00 75 08 95 03 b1 02 85 0e 09 07 15 00 26 ff 00 75 08 95 01 b1 02 c0", 2088 + ) 2089 + 2090 + class Testsynaptics_06cb_ce26(TestWin8TSConfidence): 2091 + def create_device(self): 2092 + return PTP( 2093 + "uhid test synaptics_06cb_ce26", 2094 + max_contacts=5, 2095 + input_info=(BusType.I2C, 0x06CB, 0xCE26), 2096 + rdesc="05 01 09 02 a1 01 85 02 09 01 a1 00 05 09 19 01 29 02 15 00 25 01 75 01 95 02 81 02 95 06 81 01 05 01 09 30 09 31 15 81 25 7f 75 08 95 02 81 06 c0 c0 05 0d 09 05 a1 01 85 03 05 0d 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 95 01 75 03 25 05 09 51 81 02 75 01 95 03 81 03 05 01 15 00 26 45 05 75 10 55 0e 65 11 09 30 35 00 46 64 04 95 01 81 02 46 a2 02 26 29 03 09 31 81 02 c0 05 0d 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 95 01 75 03 25 05 09 51 81 02 75 01 95 03 81 03 05 01 15 00 26 45 05 75 10 55 0e 65 11 09 30 35 00 46 64 04 95 01 81 02 46 a2 02 26 29 03 09 31 81 02 c0 05 0d 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 95 01 75 03 25 05 09 51 81 02 75 01 95 03 81 03 05 01 15 00 26 45 05 75 10 55 0e 65 11 09 30 35 00 46 64 04 95 01 81 02 46 a2 02 26 29 03 09 31 81 02 c0 05 0d 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 95 01 75 03 25 05 09 51 81 02 75 01 95 03 81 03 05 01 15 00 26 45 05 75 10 55 0e 65 11 09 30 35 00 46 64 04 95 01 81 02 46 a2 02 26 29 03 09 31 81 02 c0 05 0d 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 95 01 75 03 25 05 09 51 81 02 75 01 95 03 81 03 05 01 15 00 26 45 05 75 10 55 0e 65 11 09 30 35 00 46 64 04 95 01 81 02 46 a2 02 26 29 03 09 31 81 02 c0 05 0d 55 0c 66 01 10 47 ff ff 00 00 27 ff ff 00 00 75 10 95 01 09 56 81 02 09 54 25 7f 95 01 75 08 81 02 05 09 09 01 25 01 75 01 95 01 81 02 95 07 81 03 05 0d 85 08 09 55 09 59 75 04 95 02 25 
0f b1 02 85 0d 09 60 75 01 95 01 15 00 25 01 b1 02 95 07 b1 03 85 07 06 00 ff 09 c5 15 00 26 ff 00 75 08 96 00 01 b1 02 c0 05 0d 09 0e a1 01 85 04 09 22 a1 02 09 52 15 00 25 0a 75 08 95 01 b1 02 c0 09 22 a1 00 85 06 09 57 09 58 75 01 95 02 25 01 b1 02 95 06 b1 03 c0 c0 06 00 ff 09 01 a1 01 85 09 09 02 15 00 26 ff 00 75 08 95 14 91 02 85 0a 09 03 15 00 26 ff 00 75 08 95 14 91 02 85 0b 09 04 15 00 26 ff 00 75 08 95 3d 81 02 85 0c 09 05 15 00 26 ff 00 75 08 95 3d 81 02 85 0f 09 06 15 00 26 ff 00 75 08 95 03 b1 02 85 0e 09 07 15 00 26 ff 00 75 08 95 01 b1 02 c0", 2134 2097 )
+1 -1
tools/testing/selftests/kvm/arm64/arch_timer_edge_cases.c
··· 1020 1020 { 1021 1021 const uint64_t MIN_ROLLOVER_SECS = 40ULL * 365 * 24 * 3600; 1022 1022 uint64_t freq = read_sysreg(CNTFRQ_EL0); 1023 - uint64_t width = ilog2(MIN_ROLLOVER_SECS * freq); 1023 + int width = ilog2(MIN_ROLLOVER_SECS * freq); 1024 1024 1025 1025 width = clamp(width, 56, 64); 1026 1026 CVAL_MAX = GENMASK_ULL(width - 1, 0);
+43
tools/testing/selftests/kvm/arm64/external_aborts.c
··· 359 359 kvm_vm_free(vm); 360 360 } 361 361 362 + static void test_serror_amo_guest(void) 363 + { 364 + /* 365 + * The ISB is entirely unnecessary (and highlights how FEAT_NV2 is borked) 366 + * since the write is redirected to memory. But don't write (intentionally) 367 + * broken code! 368 + */ 369 + sysreg_clear_set(hcr_el2, HCR_EL2_AMO | HCR_EL2_TGE, 0); 370 + isb(); 371 + 372 + GUEST_SYNC(0); 373 + GUEST_ASSERT(read_sysreg(isr_el1) & ISR_EL1_A); 374 + 375 + /* 376 + * KVM treats the effective value of AMO as 1 when 377 + * HCR_EL2.{E2H,TGE} = {1, 0}, meaning the SError will be taken when 378 + * unmasked. 379 + */ 380 + local_serror_enable(); 381 + isb(); 382 + local_serror_disable(); 383 + 384 + GUEST_FAIL("Should've taken pending SError exception"); 385 + } 386 + 387 + static void test_serror_amo(void) 388 + { 389 + struct kvm_vcpu *vcpu; 390 + struct kvm_vm *vm = vm_create_with_dabt_handler(&vcpu, test_serror_amo_guest, 391 + unexpected_dabt_handler); 392 + 393 + vm_install_exception_handler(vm, VECTOR_ERROR_CURRENT, expect_serror_handler); 394 + vcpu_run_expect_sync(vcpu); 395 + vcpu_inject_serror(vcpu); 396 + vcpu_run_expect_done(vcpu); 397 + kvm_vm_free(vm); 398 + } 399 + 362 400 int main(void) 363 401 { 364 402 test_mmio_abort(); ··· 407 369 test_serror_emulated(); 408 370 test_mmio_ease(); 409 371 test_s1ptw_abort(); 372 + 373 + if (!test_supports_el2()) 374 + return 0; 375 + 376 + test_serror_amo(); 410 377 }
+96 -3
tools/testing/selftests/kvm/arm64/get-reg-list.c
··· 65 65 REG_FEAT(SCTLR2_EL1, ID_AA64MMFR3_EL1, SCTLRX, IMP), 66 66 REG_FEAT(VDISR_EL2, ID_AA64PFR0_EL1, RAS, IMP), 67 67 REG_FEAT(VSESR_EL2, ID_AA64PFR0_EL1, RAS, IMP), 68 + REG_FEAT(VNCR_EL2, ID_AA64MMFR4_EL1, NV_frac, NV2_ONLY), 69 + REG_FEAT(CNTHV_CTL_EL2, ID_AA64MMFR1_EL1, VH, IMP), 70 + REG_FEAT(CNTHV_CVAL_EL2,ID_AA64MMFR1_EL1, VH, IMP), 68 71 }; 69 72 70 73 bool filter_reg(__u64 reg) ··· 348 345 KVM_REG_ARM_FW_FEAT_BMAP_REG(1), /* KVM_REG_ARM_STD_HYP_BMAP */ 349 346 KVM_REG_ARM_FW_FEAT_BMAP_REG(2), /* KVM_REG_ARM_VENDOR_HYP_BMAP */ 350 347 KVM_REG_ARM_FW_FEAT_BMAP_REG(3), /* KVM_REG_ARM_VENDOR_HYP_BMAP_2 */ 351 - ARM64_SYS_REG(3, 3, 14, 3, 1), /* CNTV_CTL_EL0 */ 352 - ARM64_SYS_REG(3, 3, 14, 3, 2), /* CNTV_CVAL_EL0 */ 353 - ARM64_SYS_REG(3, 3, 14, 0, 2), 348 + 349 + /* 350 + * EL0 Virtual Timer Registers 351 + * 352 + * WARNING: 353 + * KVM_REG_ARM_TIMER_CVAL and KVM_REG_ARM_TIMER_CNT are not defined 354 + * with the appropriate register encodings. Their values have been 355 + * accidentally swapped. As this is set API, the definitions here 356 + * must be used, rather than ones derived from the encodings. 
357 + */ 358 + KVM_ARM64_SYS_REG(SYS_CNTV_CTL_EL0), 359 + KVM_REG_ARM_TIMER_CVAL, 360 + KVM_REG_ARM_TIMER_CNT, 361 + 354 362 ARM64_SYS_REG(3, 0, 0, 0, 0), /* MIDR_EL1 */ 355 363 ARM64_SYS_REG(3, 0, 0, 0, 6), /* REVIDR_EL1 */ 356 364 ARM64_SYS_REG(3, 1, 0, 0, 1), /* CLIDR_EL1 */ ··· 769 755 SYS_REG(VSESR_EL2), 770 756 }; 771 757 758 + static __u64 el2_e2h0_regs[] = { 759 + /* Empty */ 760 + }; 761 + 772 762 #define BASE_SUBLIST \ 773 763 { "base", .regs = base_regs, .regs_n = ARRAY_SIZE(base_regs), } 774 764 #define VREGS_SUBLIST \ ··· 806 788 .feature = KVM_ARM_VCPU_HAS_EL2, \ 807 789 .regs = el2_regs, \ 808 790 .regs_n = ARRAY_SIZE(el2_regs), \ 791 + } 792 + #define EL2_E2H0_SUBLIST \ 793 + EL2_SUBLIST, \ 794 + { \ 795 + .name = "EL2 E2H0", \ 796 + .capability = KVM_CAP_ARM_EL2_E2H0, \ 797 + .feature = KVM_ARM_VCPU_HAS_EL2_E2H0, \ 798 + .regs = el2_e2h0_regs, \ 799 + .regs_n = ARRAY_SIZE(el2_e2h0_regs), \ 809 800 } 810 801 811 802 static struct vcpu_reg_list vregs_config = { ··· 924 897 }, 925 898 }; 926 899 900 + static struct vcpu_reg_list el2_e2h0_vregs_config = { 901 + .sublists = { 902 + BASE_SUBLIST, 903 + EL2_E2H0_SUBLIST, 904 + VREGS_SUBLIST, 905 + {0}, 906 + }, 907 + }; 908 + 909 + static struct vcpu_reg_list el2_e2h0_vregs_pmu_config = { 910 + .sublists = { 911 + BASE_SUBLIST, 912 + EL2_E2H0_SUBLIST, 913 + VREGS_SUBLIST, 914 + PMU_SUBLIST, 915 + {0}, 916 + }, 917 + }; 918 + 919 + static struct vcpu_reg_list el2_e2h0_sve_config = { 920 + .sublists = { 921 + BASE_SUBLIST, 922 + EL2_E2H0_SUBLIST, 923 + SVE_SUBLIST, 924 + {0}, 925 + }, 926 + }; 927 + 928 + static struct vcpu_reg_list el2_e2h0_sve_pmu_config = { 929 + .sublists = { 930 + BASE_SUBLIST, 931 + EL2_E2H0_SUBLIST, 932 + SVE_SUBLIST, 933 + PMU_SUBLIST, 934 + {0}, 935 + }, 936 + }; 937 + 938 + static struct vcpu_reg_list el2_e2h0_pauth_config = { 939 + .sublists = { 940 + BASE_SUBLIST, 941 + EL2_E2H0_SUBLIST, 942 + VREGS_SUBLIST, 943 + PAUTH_SUBLIST, 944 + {0}, 945 + }, 946 + }; 947 + 948 + static 
struct vcpu_reg_list el2_e2h0_pauth_pmu_config = { 949 + .sublists = { 950 + BASE_SUBLIST, 951 + EL2_E2H0_SUBLIST, 952 + VREGS_SUBLIST, 953 + PAUTH_SUBLIST, 954 + PMU_SUBLIST, 955 + {0}, 956 + }, 957 + }; 958 + 927 959 struct vcpu_reg_list *vcpu_configs[] = { 928 960 &vregs_config, 929 961 &vregs_pmu_config, ··· 997 911 &el2_sve_pmu_config, 998 912 &el2_pauth_config, 999 913 &el2_pauth_pmu_config, 914 + 915 + &el2_e2h0_vregs_config, 916 + &el2_e2h0_vregs_pmu_config, 917 + &el2_e2h0_sve_config, 918 + &el2_e2h0_sve_pmu_config, 919 + &el2_e2h0_pauth_config, 920 + &el2_e2h0_pauth_pmu_config, 1000 921 }; 1001 922 int vcpu_configs_n = ARRAY_SIZE(vcpu_configs);
+3
tools/testing/selftests/kvm/arm64/set_id_regs.c
··· 249 249 GUEST_REG_SYNC(SYS_ID_AA64ISAR2_EL1); 250 250 GUEST_REG_SYNC(SYS_ID_AA64ISAR3_EL1); 251 251 GUEST_REG_SYNC(SYS_ID_AA64PFR0_EL1); 252 + GUEST_REG_SYNC(SYS_ID_AA64PFR1_EL1); 252 253 GUEST_REG_SYNC(SYS_ID_AA64MMFR0_EL1); 253 254 GUEST_REG_SYNC(SYS_ID_AA64MMFR1_EL1); 254 255 GUEST_REG_SYNC(SYS_ID_AA64MMFR2_EL1); 255 256 GUEST_REG_SYNC(SYS_ID_AA64MMFR3_EL1); 256 257 GUEST_REG_SYNC(SYS_ID_AA64ZFR0_EL1); 258 + GUEST_REG_SYNC(SYS_MPIDR_EL1); 259 + GUEST_REG_SYNC(SYS_CLIDR_EL1); 257 260 GUEST_REG_SYNC(SYS_CTR_EL0); 258 261 GUEST_REG_SYNC(SYS_MIDR_EL1); 259 262 GUEST_REG_SYNC(SYS_REVIDR_EL1);
+2 -1
tools/testing/selftests/kvm/arm64/vgic_lpi_stress.c
··· 123 123 static void guest_code(size_t nr_lpis) 124 124 { 125 125 guest_setup_gic(); 126 + local_irq_enable(); 126 127 127 128 GUEST_SYNC(0); 128 129 ··· 332 331 { 333 332 int i; 334 333 335 - vcpus = malloc(test_data.nr_cpus * sizeof(struct kvm_vcpu)); 334 + vcpus = malloc(test_data.nr_cpus * sizeof(struct kvm_vcpu *)); 336 335 TEST_ASSERT(vcpus, "Failed to allocate vCPU array"); 337 336 338 337 vm = vm_create_with_vcpus(test_data.nr_cpus, guest_code, vcpus);
+92 -79
tools/testing/selftests/kvm/guest_memfd_test.c
··· 14 14 #include <linux/bitmap.h> 15 15 #include <linux/falloc.h> 16 16 #include <linux/sizes.h> 17 - #include <setjmp.h> 18 - #include <signal.h> 19 17 #include <sys/mman.h> 20 18 #include <sys/types.h> 21 19 #include <sys/stat.h> ··· 22 24 #include "test_util.h" 23 25 #include "ucall_common.h" 24 26 25 - static void test_file_read_write(int fd) 27 + static size_t page_size; 28 + 29 + static void test_file_read_write(int fd, size_t total_size) 26 30 { 27 31 char buf[64]; 28 32 ··· 38 38 "pwrite on a guest_mem fd should fail"); 39 39 } 40 40 41 - static void test_mmap_supported(int fd, size_t page_size, size_t total_size) 41 + static void test_mmap_cow(int fd, size_t size) 42 + { 43 + void *mem; 44 + 45 + mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0); 46 + TEST_ASSERT(mem == MAP_FAILED, "Copy-on-write not allowed by guest_memfd."); 47 + } 48 + 49 + static void test_mmap_supported(int fd, size_t total_size) 42 50 { 43 51 const char val = 0xaa; 44 52 char *mem; 45 53 size_t i; 46 54 int ret; 47 55 48 - mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0); 49 - TEST_ASSERT(mem == MAP_FAILED, "Copy-on-write not allowed by guest_memfd."); 50 - 51 - mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); 52 - TEST_ASSERT(mem != MAP_FAILED, "mmap() for guest_memfd should succeed."); 56 + mem = kvm_mmap(total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd); 53 57 54 58 memset(mem, val, total_size); 55 59 for (i = 0; i < total_size; i++) ··· 72 68 for (i = 0; i < total_size; i++) 73 69 TEST_ASSERT_EQ(READ_ONCE(mem[i]), val); 74 70 75 - ret = munmap(mem, total_size); 76 - TEST_ASSERT(!ret, "munmap() should succeed."); 71 + kvm_munmap(mem, total_size); 77 72 } 78 73 79 - static sigjmp_buf jmpbuf; 80 - void fault_sigbus_handler(int signum) 74 + static void test_fault_sigbus(int fd, size_t accessible_size, size_t map_size) 81 75 { 82 - siglongjmp(jmpbuf, 1); 83 - } 84 - 85 - static void test_fault_overflow(int fd, size_t 
page_size, size_t total_size) 86 - { 87 - struct sigaction sa_old, sa_new = { 88 - .sa_handler = fault_sigbus_handler, 89 - }; 90 - size_t map_size = total_size * 4; 91 76 const char val = 0xaa; 92 77 char *mem; 93 78 size_t i; 94 - int ret; 95 79 96 - mem = mmap(NULL, map_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); 97 - TEST_ASSERT(mem != MAP_FAILED, "mmap() for guest_memfd should succeed."); 80 + mem = kvm_mmap(map_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd); 98 81 99 - sigaction(SIGBUS, &sa_new, &sa_old); 100 - if (sigsetjmp(jmpbuf, 1) == 0) { 101 - memset(mem, 0xaa, map_size); 102 - TEST_ASSERT(false, "memset() should have triggered SIGBUS."); 103 - } 104 - sigaction(SIGBUS, &sa_old, NULL); 82 + TEST_EXPECT_SIGBUS(memset(mem, val, map_size)); 83 + TEST_EXPECT_SIGBUS((void)READ_ONCE(mem[accessible_size])); 105 84 106 - for (i = 0; i < total_size; i++) 85 + for (i = 0; i < accessible_size; i++) 107 86 TEST_ASSERT_EQ(READ_ONCE(mem[i]), val); 108 87 109 - ret = munmap(mem, map_size); 110 - TEST_ASSERT(!ret, "munmap() should succeed."); 88 + kvm_munmap(mem, map_size); 111 89 } 112 90 113 - static void test_mmap_not_supported(int fd, size_t page_size, size_t total_size) 91 + static void test_fault_overflow(int fd, size_t total_size) 92 + { 93 + test_fault_sigbus(fd, total_size, total_size * 4); 94 + } 95 + 96 + static void test_fault_private(int fd, size_t total_size) 97 + { 98 + test_fault_sigbus(fd, 0, total_size); 99 + } 100 + 101 + static void test_mmap_not_supported(int fd, size_t total_size) 114 102 { 115 103 char *mem; 116 104 ··· 113 117 TEST_ASSERT_EQ(mem, MAP_FAILED); 114 118 } 115 119 116 - static void test_file_size(int fd, size_t page_size, size_t total_size) 120 + static void test_file_size(int fd, size_t total_size) 117 121 { 118 122 struct stat sb; 119 123 int ret; ··· 124 128 TEST_ASSERT_EQ(sb.st_blksize, page_size); 125 129 } 126 130 127 - static void test_fallocate(int fd, size_t page_size, size_t total_size) 131 + static void 
test_fallocate(int fd, size_t total_size) 128 132 { 129 133 int ret; 130 134 ··· 161 165 TEST_ASSERT(!ret, "fallocate to restore punched hole should succeed"); 162 166 } 163 167 164 - static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size) 168 + static void test_invalid_punch_hole(int fd, size_t total_size) 165 169 { 166 170 struct { 167 171 off_t offset; ··· 192 196 } 193 197 194 198 static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm, 195 - uint64_t guest_memfd_flags, 196 - size_t page_size) 199 + uint64_t guest_memfd_flags) 197 200 { 198 201 size_t size; 199 202 int fd; ··· 209 214 { 210 215 int fd1, fd2, ret; 211 216 struct stat st1, st2; 212 - size_t page_size = getpagesize(); 213 217 214 218 fd1 = __vm_create_guest_memfd(vm, page_size, 0); 215 219 TEST_ASSERT(fd1 != -1, "memfd creation should succeed"); ··· 233 239 close(fd1); 234 240 } 235 241 236 - static void test_guest_memfd_flags(struct kvm_vm *vm, uint64_t valid_flags) 242 + static void test_guest_memfd_flags(struct kvm_vm *vm) 237 243 { 238 - size_t page_size = getpagesize(); 244 + uint64_t valid_flags = vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS); 239 245 uint64_t flag; 240 246 int fd; 241 247 ··· 254 260 } 255 261 } 256 262 257 - static void test_guest_memfd(unsigned long vm_type) 263 + #define gmem_test(__test, __vm, __flags) \ 264 + do { \ 265 + int fd = vm_create_guest_memfd(__vm, page_size * 4, __flags); \ 266 + \ 267 + test_##__test(fd, page_size * 4); \ 268 + close(fd); \ 269 + } while (0) 270 + 271 + static void __test_guest_memfd(struct kvm_vm *vm, uint64_t flags) 258 272 { 259 - uint64_t flags = 0; 260 - struct kvm_vm *vm; 261 - size_t total_size; 262 - size_t page_size; 263 - int fd; 264 - 265 - page_size = getpagesize(); 266 - total_size = page_size * 4; 267 - 268 - vm = vm_create_barebones_type(vm_type); 269 - 270 - if (vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_MMAP)) 271 - flags |= GUEST_MEMFD_FLAG_MMAP; 272 - 273 273 
test_create_guest_memfd_multiple(vm); 274 - test_create_guest_memfd_invalid_sizes(vm, flags, page_size); 274 + test_create_guest_memfd_invalid_sizes(vm, flags); 275 275 276 - fd = vm_create_guest_memfd(vm, total_size, flags); 277 - 278 - test_file_read_write(fd); 276 + gmem_test(file_read_write, vm, flags); 279 277 280 278 if (flags & GUEST_MEMFD_FLAG_MMAP) { 281 - test_mmap_supported(fd, page_size, total_size); 282 - test_fault_overflow(fd, page_size, total_size); 279 + if (flags & GUEST_MEMFD_FLAG_INIT_SHARED) { 280 + gmem_test(mmap_supported, vm, flags); 281 + gmem_test(fault_overflow, vm, flags); 282 + } else { 283 + gmem_test(fault_private, vm, flags); 284 + } 285 + 286 + gmem_test(mmap_cow, vm, flags); 283 287 } else { 284 - test_mmap_not_supported(fd, page_size, total_size); 288 + gmem_test(mmap_not_supported, vm, flags); 285 289 } 286 290 287 - test_file_size(fd, page_size, total_size); 288 - test_fallocate(fd, page_size, total_size); 289 - test_invalid_punch_hole(fd, page_size, total_size); 291 + gmem_test(file_size, vm, flags); 292 + gmem_test(fallocate, vm, flags); 293 + gmem_test(invalid_punch_hole, vm, flags); 294 + } 290 295 291 - test_guest_memfd_flags(vm, flags); 296 + static void test_guest_memfd(unsigned long vm_type) 297 + { 298 + struct kvm_vm *vm = vm_create_barebones_type(vm_type); 299 + uint64_t flags; 292 300 293 - close(fd); 301 + test_guest_memfd_flags(vm); 302 + 303 + __test_guest_memfd(vm, 0); 304 + 305 + flags = vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS); 306 + if (flags & GUEST_MEMFD_FLAG_MMAP) 307 + __test_guest_memfd(vm, GUEST_MEMFD_FLAG_MMAP); 308 + 309 + /* MMAP should always be supported if INIT_SHARED is supported. 
*/ 310 + if (flags & GUEST_MEMFD_FLAG_INIT_SHARED) 311 + __test_guest_memfd(vm, GUEST_MEMFD_FLAG_MMAP | 312 + GUEST_MEMFD_FLAG_INIT_SHARED); 313 + 294 314 kvm_vm_free(vm); 295 315 } 296 316 ··· 336 328 size_t size; 337 329 int fd, i; 338 330 339 - if (!kvm_has_cap(KVM_CAP_GUEST_MEMFD_MMAP)) 331 + if (!kvm_check_cap(KVM_CAP_GUEST_MEMFD_FLAGS)) 340 332 return; 341 333 342 334 vm = __vm_create_shape_with_one_vcpu(VM_SHAPE_DEFAULT, &vcpu, 1, guest_code); 343 335 344 - TEST_ASSERT(vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_MMAP), 345 - "Default VM type should always support guest_memfd mmap()"); 336 + TEST_ASSERT(vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS) & GUEST_MEMFD_FLAG_MMAP, 337 + "Default VM type should support MMAP, supported flags = 0x%x", 338 + vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS)); 339 + TEST_ASSERT(vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS) & GUEST_MEMFD_FLAG_INIT_SHARED, 340 + "Default VM type should support INIT_SHARED, supported flags = 0x%x", 341 + vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS)); 346 342 347 343 size = vm->page_size; 348 - fd = vm_create_guest_memfd(vm, size, GUEST_MEMFD_FLAG_MMAP); 344 + fd = vm_create_guest_memfd(vm, size, GUEST_MEMFD_FLAG_MMAP | 345 + GUEST_MEMFD_FLAG_INIT_SHARED); 349 346 vm_set_user_memory_region2(vm, slot, KVM_MEM_GUEST_MEMFD, gpa, size, NULL, fd, 0); 350 347 351 - mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); 352 - TEST_ASSERT(mem != MAP_FAILED, "mmap() on guest_memfd failed"); 348 + mem = kvm_mmap(size, PROT_READ | PROT_WRITE, MAP_SHARED, fd); 353 349 memset(mem, 0xaa, size); 354 - munmap(mem, size); 350 + kvm_munmap(mem, size); 355 351 356 352 virt_pg_map(vm, gpa, gpa); 357 353 vcpu_args_set(vcpu, 2, gpa, size); ··· 363 351 364 352 TEST_ASSERT_EQ(get_ucall(vcpu, NULL), UCALL_DONE); 365 353 366 - mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); 367 - TEST_ASSERT(mem != MAP_FAILED, "mmap() on guest_memfd failed"); 354 + mem = kvm_mmap(size, PROT_READ | PROT_WRITE, MAP_SHARED, 
fd); 368 355 for (i = 0; i < size; i++) 369 356 TEST_ASSERT_EQ(mem[i], 0xff); 370 357 ··· 376 365 unsigned long vm_types, vm_type; 377 366 378 367 TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD)); 368 + 369 + page_size = getpagesize(); 379 370 380 371 /* 381 372 * Not all architectures support KVM_CAP_VM_TYPES. However, those that
+11 -1
tools/testing/selftests/kvm/include/arm64/processor.h
··· 305 305 void test_disable_default_vgic(void); 306 306 307 307 bool vm_supports_el2(struct kvm_vm *vm); 308 - static bool vcpu_has_el2(struct kvm_vcpu *vcpu) 308 + 309 + static inline bool test_supports_el2(void) 310 + { 311 + struct kvm_vm *vm = vm_create(1); 312 + bool supported = vm_supports_el2(vm); 313 + 314 + kvm_vm_free(vm); 315 + return supported; 316 + } 317 + 318 + static inline bool vcpu_has_el2(struct kvm_vcpu *vcpu) 309 319 { 310 320 return vcpu->init.features[0] & BIT(KVM_ARM_VCPU_HAS_EL2); 311 321 }
+27
tools/testing/selftests/kvm/include/kvm_util.h
··· 286 286 #define __KVM_SYSCALL_ERROR(_name, _ret) \ 287 287 "%s failed, rc: %i errno: %i (%s)", (_name), (_ret), errno, strerror(errno) 288 288 289 + static inline void *__kvm_mmap(size_t size, int prot, int flags, int fd, 290 + off_t offset) 291 + { 292 + void *mem; 293 + 294 + mem = mmap(NULL, size, prot, flags, fd, offset); 295 + TEST_ASSERT(mem != MAP_FAILED, __KVM_SYSCALL_ERROR("mmap()", 296 + (int)(unsigned long)MAP_FAILED)); 297 + 298 + return mem; 299 + } 300 + 301 + static inline void *kvm_mmap(size_t size, int prot, int flags, int fd) 302 + { 303 + return __kvm_mmap(size, prot, flags, fd, 0); 304 + } 305 + 306 + static inline void kvm_munmap(void *mem, size_t size) 307 + { 308 + int ret; 309 + 310 + ret = munmap(mem, size); 311 + TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret)); 312 + } 313 + 289 314 /* 290 315 * Use the "inner", double-underscore macro when reporting errors from within 291 316 * other macros so that the name of ioctl() and not its literal numeric value ··· 1297 1272 bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr); 1298 1273 1299 1274 uint32_t guest_get_vcpuid(void); 1275 + 1276 + bool kvm_arch_has_default_irqchip(void); 1300 1277 1301 1278 #endif /* SELFTEST_KVM_UTIL_H */
+19
tools/testing/selftests/kvm/include/test_util.h
··· 8 8 #ifndef SELFTEST_KVM_TEST_UTIL_H 9 9 #define SELFTEST_KVM_TEST_UTIL_H 10 10 11 + #include <setjmp.h> 12 + #include <signal.h> 11 13 #include <stdlib.h> 12 14 #include <stdarg.h> 13 15 #include <stdbool.h> ··· 78 76 #define TEST_FAIL(fmt, ...) do { \ 79 77 TEST_ASSERT(false, fmt, ##__VA_ARGS__); \ 80 78 __builtin_unreachable(); \ 79 + } while (0) 80 + 81 + extern sigjmp_buf expect_sigbus_jmpbuf; 82 + void expect_sigbus_handler(int signum); 83 + 84 + #define TEST_EXPECT_SIGBUS(action) \ 85 + do { \ 86 + struct sigaction sa_old, sa_new = { \ 87 + .sa_handler = expect_sigbus_handler, \ 88 + }; \ 89 + \ 90 + sigaction(SIGBUS, &sa_new, &sa_old); \ 91 + if (sigsetjmp(expect_sigbus_jmpbuf, 1) == 0) { \ 92 + action; \ 93 + TEST_FAIL("'%s' should have triggered SIGBUS", #action); \ 94 + } \ 95 + sigaction(SIGBUS, &sa_old, NULL); \ 81 96 } while (0) 82 97 83 98 size_t parse_size(const char *size);
+11 -3
tools/testing/selftests/kvm/irqfd_test.c
··· 89 89 int main(int argc, char *argv[]) 90 90 { 91 91 pthread_t racing_thread; 92 + struct kvm_vcpu *unused; 92 93 int r, i; 93 94 94 - /* Create "full" VMs, as KVM_IRQFD requires an in-kernel IRQ chip. */ 95 - vm1 = vm_create(1); 96 - vm2 = vm_create(1); 95 + TEST_REQUIRE(kvm_arch_has_default_irqchip()); 96 + 97 + /* 98 + * Create "full" VMs, as KVM_IRQFD requires an in-kernel IRQ chip. Also 99 + * create an unused vCPU as certain architectures (like arm64) need to 100 + * complete IRQ chip initialization after all possible vCPUs for a VM 101 + * have been created. 102 + */ 103 + vm1 = vm_create_with_one_vcpu(&unused, NULL); 104 + vm2 = vm_create_with_one_vcpu(&unused, NULL); 97 105 98 106 WRITE_ONCE(__eventfd, kvm_new_eventfd()); 99 107
+5
tools/testing/selftests/kvm/lib/arm64/processor.c
··· 725 725 if (vm->arch.has_gic) 726 726 close(vm->arch.gic_fd); 727 727 } 728 + 729 + bool kvm_arch_has_default_irqchip(void) 730 + { 731 + return request_vgic && kvm_supports_vgic_v3(); 732 + }
+20 -29
tools/testing/selftests/kvm/lib/kvm_util.c
··· 741 741 int ret; 742 742 743 743 if (vcpu->dirty_gfns) { 744 - ret = munmap(vcpu->dirty_gfns, vm->dirty_ring_size); 745 - TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret)); 744 + kvm_munmap(vcpu->dirty_gfns, vm->dirty_ring_size); 746 745 vcpu->dirty_gfns = NULL; 747 746 } 748 747 749 - ret = munmap(vcpu->run, vcpu_mmap_sz()); 750 - TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret)); 748 + kvm_munmap(vcpu->run, vcpu_mmap_sz()); 751 749 752 750 ret = close(vcpu->fd); 753 751 TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("close()", ret)); ··· 781 783 static void __vm_mem_region_delete(struct kvm_vm *vm, 782 784 struct userspace_mem_region *region) 783 785 { 784 - int ret; 785 - 786 786 rb_erase(&region->gpa_node, &vm->regions.gpa_tree); 787 787 rb_erase(&region->hva_node, &vm->regions.hva_tree); 788 788 hash_del(&region->slot_node); 789 789 790 790 sparsebit_free(&region->unused_phy_pages); 791 791 sparsebit_free(&region->protected_phy_pages); 792 - ret = munmap(region->mmap_start, region->mmap_size); 793 - TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret)); 792 + kvm_munmap(region->mmap_start, region->mmap_size); 794 793 if (region->fd >= 0) { 795 794 /* There's an extra map when using shared memory. 
*/ 796 - ret = munmap(region->mmap_alias, region->mmap_size); 797 - TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret)); 795 + kvm_munmap(region->mmap_alias, region->mmap_size); 798 796 close(region->fd); 799 797 } 800 798 if (region->region.guest_memfd >= 0) ··· 1047 1053 region->fd = kvm_memfd_alloc(region->mmap_size, 1048 1054 src_type == VM_MEM_SRC_SHARED_HUGETLB); 1049 1055 1050 - region->mmap_start = mmap(NULL, region->mmap_size, 1051 - PROT_READ | PROT_WRITE, 1052 - vm_mem_backing_src_alias(src_type)->flag, 1053 - region->fd, 0); 1054 - TEST_ASSERT(region->mmap_start != MAP_FAILED, 1055 - __KVM_SYSCALL_ERROR("mmap()", (int)(unsigned long)MAP_FAILED)); 1056 + region->mmap_start = kvm_mmap(region->mmap_size, PROT_READ | PROT_WRITE, 1057 + vm_mem_backing_src_alias(src_type)->flag, 1058 + region->fd); 1056 1059 1057 1060 TEST_ASSERT(!is_backing_src_hugetlb(src_type) || 1058 1061 region->mmap_start == align_ptr_up(region->mmap_start, backing_src_pagesz), ··· 1120 1129 1121 1130 /* If shared memory, create an alias. 
*/ 1122 1131 if (region->fd >= 0) { 1123 - region->mmap_alias = mmap(NULL, region->mmap_size, 1124 - PROT_READ | PROT_WRITE, 1125 - vm_mem_backing_src_alias(src_type)->flag, 1126 - region->fd, 0); 1127 - TEST_ASSERT(region->mmap_alias != MAP_FAILED, 1128 - __KVM_SYSCALL_ERROR("mmap()", (int)(unsigned long)MAP_FAILED)); 1132 + region->mmap_alias = kvm_mmap(region->mmap_size, 1133 + PROT_READ | PROT_WRITE, 1134 + vm_mem_backing_src_alias(src_type)->flag, 1135 + region->fd); 1129 1136 1130 1137 /* Align host alias address */ 1131 1138 region->host_alias = align_ptr_up(region->mmap_alias, alignment); ··· 1333 1344 TEST_ASSERT(vcpu_mmap_sz() >= sizeof(*vcpu->run), "vcpu mmap size " 1334 1345 "smaller than expected, vcpu_mmap_sz: %zi expected_min: %zi", 1335 1346 vcpu_mmap_sz(), sizeof(*vcpu->run)); 1336 - vcpu->run = (struct kvm_run *) mmap(NULL, vcpu_mmap_sz(), 1337 - PROT_READ | PROT_WRITE, MAP_SHARED, vcpu->fd, 0); 1338 - TEST_ASSERT(vcpu->run != MAP_FAILED, 1339 - __KVM_SYSCALL_ERROR("mmap()", (int)(unsigned long)MAP_FAILED)); 1347 + vcpu->run = kvm_mmap(vcpu_mmap_sz(), PROT_READ | PROT_WRITE, 1348 + MAP_SHARED, vcpu->fd); 1340 1349 1341 1350 if (kvm_has_cap(KVM_CAP_BINARY_STATS_FD)) 1342 1351 vcpu->stats.fd = vcpu_get_stats_fd(vcpu); ··· 1781 1794 page_size * KVM_DIRTY_LOG_PAGE_OFFSET); 1782 1795 TEST_ASSERT(addr == MAP_FAILED, "Dirty ring mapped exec"); 1783 1796 1784 - addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, vcpu->fd, 1785 - page_size * KVM_DIRTY_LOG_PAGE_OFFSET); 1786 - TEST_ASSERT(addr != MAP_FAILED, "Dirty ring map failed"); 1797 + addr = __kvm_mmap(size, PROT_READ | PROT_WRITE, MAP_SHARED, vcpu->fd, 1798 + page_size * KVM_DIRTY_LOG_PAGE_OFFSET); 1787 1799 1788 1800 vcpu->dirty_gfns = addr; 1789 1801 vcpu->dirty_gfns_count = size / sizeof(struct kvm_dirty_gfn); ··· 2329 2343 2330 2344 pg = paddr >> vm->page_shift; 2331 2345 return sparsebit_is_set(region->protected_phy_pages, pg); 2346 + } 2347 + 2348 + __weak bool 
kvm_arch_has_default_irqchip(void) 2349 + { 2350 + return false; 2332 2351 }
+5
tools/testing/selftests/kvm/lib/s390/processor.c
··· 221 221 void assert_on_unhandled_exception(struct kvm_vcpu *vcpu) 222 222 { 223 223 } 224 + 225 + bool kvm_arch_has_default_irqchip(void) 226 + { 227 + return true; 228 + }
+7
tools/testing/selftests/kvm/lib/test_util.c
··· 18 18 19 19 #include "test_util.h" 20 20 21 + sigjmp_buf expect_sigbus_jmpbuf; 22 + 23 + void __attribute__((used)) expect_sigbus_handler(int signum) 24 + { 25 + siglongjmp(expect_sigbus_jmpbuf, 1); 26 + } 27 + 21 28 /* 22 29 * Random number generator that is usable from guest code. This is the 23 30 * Park-Miller LCG using standard constants.
+5
tools/testing/selftests/kvm/lib/x86/processor.c
··· 1318 1318 1319 1319 return ret; 1320 1320 } 1321 + 1322 + bool kvm_arch_has_default_irqchip(void) 1323 + { 1324 + return true; 1325 + }
+2 -3
tools/testing/selftests/kvm/mmu_stress_test.c
··· 339 339 TEST_ASSERT(max_gpa > (4 * slot_size), "MAXPHYADDR <4gb "); 340 340 341 341 fd = kvm_memfd_alloc(slot_size, hugepages); 342 - mem = mmap(NULL, slot_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); 343 - TEST_ASSERT(mem != MAP_FAILED, "mmap() failed"); 342 + mem = kvm_mmap(slot_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd); 344 343 345 344 TEST_ASSERT(!madvise(mem, slot_size, MADV_NOHUGEPAGE), "madvise() failed"); 346 345 ··· 412 413 for (slot = (slot - 1) & ~1ull; slot >= first_slot; slot -= 2) 413 414 vm_set_user_memory_region(vm, slot, 0, 0, 0, NULL); 414 415 415 - munmap(mem, slot_size / 2); 416 + kvm_munmap(mem, slot_size / 2); 416 417 417 418 /* Sanity check that the vCPUs actually ran. */ 418 419 for (i = 0; i < nr_vcpus; i++)
+114 -17
tools/testing/selftests/kvm/pre_fault_memory_test.c
··· 10 10 #include <test_util.h> 11 11 #include <kvm_util.h> 12 12 #include <processor.h> 13 + #include <pthread.h> 13 14 14 15 /* Arbitrarily chosen values */ 15 16 #define TEST_SIZE (SZ_2M + PAGE_SIZE) ··· 31 30 GUEST_DONE(); 32 31 } 33 32 34 - static void pre_fault_memory(struct kvm_vcpu *vcpu, u64 gpa, u64 size, 35 - u64 left) 33 + struct slot_worker_data { 34 + struct kvm_vm *vm; 35 + u64 gpa; 36 + uint32_t flags; 37 + bool worker_ready; 38 + bool prefault_ready; 39 + bool recreate_slot; 40 + }; 41 + 42 + static void *delete_slot_worker(void *__data) 43 + { 44 + struct slot_worker_data *data = __data; 45 + struct kvm_vm *vm = data->vm; 46 + 47 + WRITE_ONCE(data->worker_ready, true); 48 + 49 + while (!READ_ONCE(data->prefault_ready)) 50 + cpu_relax(); 51 + 52 + vm_mem_region_delete(vm, TEST_SLOT); 53 + 54 + while (!READ_ONCE(data->recreate_slot)) 55 + cpu_relax(); 56 + 57 + vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, data->gpa, 58 + TEST_SLOT, TEST_NPAGES, data->flags); 59 + 60 + return NULL; 61 + } 62 + 63 + static void pre_fault_memory(struct kvm_vcpu *vcpu, u64 base_gpa, u64 offset, 64 + u64 size, u64 expected_left, bool private) 36 65 { 37 66 struct kvm_pre_fault_memory range = { 38 - .gpa = gpa, 67 + .gpa = base_gpa + offset, 39 68 .size = size, 40 69 .flags = 0, 41 70 }; 42 - u64 prev; 71 + struct slot_worker_data data = { 72 + .vm = vcpu->vm, 73 + .gpa = base_gpa, 74 + .flags = private ? KVM_MEM_GUEST_MEMFD : 0, 75 + }; 76 + bool slot_recreated = false; 77 + pthread_t slot_worker; 43 78 int ret, save_errno; 79 + u64 prev; 44 80 45 - do { 81 + /* 82 + * Concurrently delete (and recreate) the slot to test KVM's handling 83 + * of a racing memslot deletion with prefaulting. 
84 + */ 85 + pthread_create(&slot_worker, NULL, delete_slot_worker, &data); 86 + 87 + while (!READ_ONCE(data.worker_ready)) 88 + cpu_relax(); 89 + 90 + WRITE_ONCE(data.prefault_ready, true); 91 + 92 + for (;;) { 46 93 prev = range.size; 47 94 ret = __vcpu_ioctl(vcpu, KVM_PRE_FAULT_MEMORY, &range); 48 95 save_errno = errno; ··· 98 49 "%sexpecting range.size to change on %s", 99 50 ret < 0 ? "not " : "", 100 51 ret < 0 ? "failure" : "success"); 101 - } while (ret >= 0 ? range.size : save_errno == EINTR); 102 52 103 - TEST_ASSERT(range.size == left, 104 - "Completed with %lld bytes left, expected %" PRId64, 105 - range.size, left); 53 + /* 54 + * Immediately retry prefaulting if KVM was interrupted by an 55 + * unrelated signal/event. 56 + */ 57 + if (ret < 0 && save_errno == EINTR) 58 + continue; 106 59 107 - if (left == 0) 108 - __TEST_ASSERT_VM_VCPU_IOCTL(!ret, "KVM_PRE_FAULT_MEMORY", ret, vcpu->vm); 60 + /* 61 + * Tell the worker to recreate the slot in order to complete 62 + * prefaulting (if prefault didn't already succeed before the 63 + * slot was deleted) and/or to prepare for the next testcase. 64 + * Wait for the worker to exit so that the next invocation of 65 + * prefaulting is guaranteed to complete (assuming no KVM bugs). 66 + */ 67 + if (!slot_recreated) { 68 + WRITE_ONCE(data.recreate_slot, true); 69 + pthread_join(slot_worker, NULL); 70 + slot_recreated = true; 71 + 72 + /* 73 + * Retry prefaulting to get a stable result, i.e. to 74 + * avoid seeing random EAGAIN failures. Don't retry if 75 + * prefaulting already succeeded, as KVM disallows 76 + * prefaulting with size=0, i.e. blindly retrying would 77 + * result in test failures due to EINVAL. KVM should 78 + * always return success if all bytes are prefaulted, 79 + * i.e. there is no need to guard against EAGAIN being 80 + * returned. 
81 + */ 82 + if (range.size) 83 + continue; 84 + } 85 + 86 + /* 87 + * All done if there are no remaining bytes to prefault, or if 88 + * prefaulting failed (EINTR was handled above, and EAGAIN due 89 + * to prefaulting a memslot that's being actively deleted should 90 + * be impossible since the memslot has already been recreated). 91 + */ 92 + if (!range.size || ret < 0) 93 + break; 94 + } 95 + 96 + TEST_ASSERT(range.size == expected_left, 97 + "Completed with %llu bytes left, expected %lu", 98 + range.size, expected_left); 99 + 100 + /* 101 + * Assert success if prefaulting the entire range should succeed, i.e. 102 + * complete with no bytes remaining. Otherwise prefaulting should have 103 + * failed due to ENOENT (due to RET_PF_EMULATE for emulated MMIO when 104 + * no memslot exists). 105 + */ 106 + if (!expected_left) 107 + TEST_ASSERT_VM_VCPU_IOCTL(!ret, KVM_PRE_FAULT_MEMORY, ret, vcpu->vm); 109 108 else 110 - /* No memory slot causes RET_PF_EMULATE. it results in -ENOENT. */ 111 - __TEST_ASSERT_VM_VCPU_IOCTL(ret && save_errno == ENOENT, 112 - "KVM_PRE_FAULT_MEMORY", ret, vcpu->vm); 109 + TEST_ASSERT_VM_VCPU_IOCTL(ret && save_errno == ENOENT, 110 + KVM_PRE_FAULT_MEMORY, ret, vcpu->vm); 113 111 } 114 112 115 113 static void __test_pre_fault_memory(unsigned long vm_type, bool private) ··· 193 97 194 98 if (private) 195 99 vm_mem_set_private(vm, guest_test_phys_mem, TEST_SIZE); 196 - pre_fault_memory(vcpu, guest_test_phys_mem, SZ_2M, 0); 197 - pre_fault_memory(vcpu, guest_test_phys_mem + SZ_2M, PAGE_SIZE * 2, PAGE_SIZE); 198 - pre_fault_memory(vcpu, guest_test_phys_mem + TEST_SIZE, PAGE_SIZE, PAGE_SIZE); 100 + 101 + pre_fault_memory(vcpu, guest_test_phys_mem, 0, SZ_2M, 0, private); 102 + pre_fault_memory(vcpu, guest_test_phys_mem, SZ_2M, PAGE_SIZE * 2, PAGE_SIZE, private); 103 + pre_fault_memory(vcpu, guest_test_phys_mem, TEST_SIZE, PAGE_SIZE, PAGE_SIZE, private); 199 104 200 105 vcpu_args_set(vcpu, 1, guest_test_virt_mem); 201 106 vcpu_run(vcpu);
+7 -9
tools/testing/selftests/kvm/s390/ucontrol_test.c
··· 142 142 self->kvm_run_size = ioctl(self->kvm_fd, KVM_GET_VCPU_MMAP_SIZE, NULL); 143 143 ASSERT_GE(self->kvm_run_size, sizeof(struct kvm_run)) 144 144 TH_LOG(KVM_IOCTL_ERROR(KVM_GET_VCPU_MMAP_SIZE, self->kvm_run_size)); 145 - self->run = (struct kvm_run *)mmap(NULL, self->kvm_run_size, 146 - PROT_READ | PROT_WRITE, MAP_SHARED, self->vcpu_fd, 0); 147 - ASSERT_NE(self->run, MAP_FAILED); 145 + self->run = kvm_mmap(self->kvm_run_size, PROT_READ | PROT_WRITE, 146 + MAP_SHARED, self->vcpu_fd); 148 147 /** 149 148 * For virtual cpus that have been created with S390 user controlled 150 149 * virtual machines, the resulting vcpu fd can be memory mapped at page 151 150 * offset KVM_S390_SIE_PAGE_OFFSET in order to obtain a memory map of 152 151 * the virtual cpu's hardware control block. 153 152 */ 154 - self->sie_block = (struct kvm_s390_sie_block *)mmap(NULL, PAGE_SIZE, 155 - PROT_READ | PROT_WRITE, MAP_SHARED, 156 - self->vcpu_fd, KVM_S390_SIE_PAGE_OFFSET << PAGE_SHIFT); 157 - ASSERT_NE(self->sie_block, MAP_FAILED); 153 + self->sie_block = __kvm_mmap(PAGE_SIZE, PROT_READ | PROT_WRITE, 154 + MAP_SHARED, self->vcpu_fd, 155 + KVM_S390_SIE_PAGE_OFFSET << PAGE_SHIFT); 158 156 159 157 TH_LOG("VM created %p %p", self->run, self->sie_block); 160 158 ··· 184 186 185 187 FIXTURE_TEARDOWN(uc_kvm) 186 188 { 187 - munmap(self->sie_block, PAGE_SIZE); 188 - munmap(self->run, self->kvm_run_size); 189 + kvm_munmap(self->sie_block, PAGE_SIZE); 190 + kvm_munmap(self->run, self->kvm_run_size); 189 191 close(self->vcpu_fd); 190 192 close(self->vm_fd); 191 193 close(self->kvm_fd);
+8 -9
tools/testing/selftests/kvm/set_memory_region_test.c
··· 433 433 pr_info("Adding slots 0..%i, each memory region with %dK size\n", 434 434 (max_mem_slots - 1), MEM_REGION_SIZE >> 10); 435 435 436 - mem = mmap(NULL, (size_t)max_mem_slots * MEM_REGION_SIZE + alignment, 437 - PROT_READ | PROT_WRITE, 438 - MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0); 439 - TEST_ASSERT(mem != MAP_FAILED, "Failed to mmap() host"); 436 + 437 + mem = kvm_mmap((size_t)max_mem_slots * MEM_REGION_SIZE + alignment, 438 + PROT_READ | PROT_WRITE, 439 + MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1); 440 440 mem_aligned = (void *)(((size_t) mem + alignment - 1) & ~(alignment - 1)); 441 441 442 442 for (slot = 0; slot < max_mem_slots; slot++) ··· 446 446 mem_aligned + (uint64_t)slot * MEM_REGION_SIZE); 447 447 448 448 /* Check it cannot be added memory slots beyond the limit */ 449 - mem_extra = mmap(NULL, MEM_REGION_SIZE, PROT_READ | PROT_WRITE, 450 - MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); 451 - TEST_ASSERT(mem_extra != MAP_FAILED, "Failed to mmap() host"); 449 + mem_extra = kvm_mmap(MEM_REGION_SIZE, PROT_READ | PROT_WRITE, 450 + MAP_PRIVATE | MAP_ANONYMOUS, -1); 452 451 453 452 ret = __vm_set_user_memory_region(vm, max_mem_slots, 0, 454 453 (uint64_t)max_mem_slots * MEM_REGION_SIZE, ··· 455 456 TEST_ASSERT(ret == -1 && errno == EINVAL, 456 457 "Adding one more memory slot should fail with EINVAL"); 457 458 458 - munmap(mem, (size_t)max_mem_slots * MEM_REGION_SIZE + alignment); 459 - munmap(mem_extra, MEM_REGION_SIZE); 459 + kvm_munmap(mem, (size_t)max_mem_slots * MEM_REGION_SIZE + alignment); 460 + kvm_munmap(mem_extra, MEM_REGION_SIZE); 460 461 kvm_vm_free(vm); 461 462 } 462 463
+26 -3
tools/testing/selftests/net/lib/py/__init__.py
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 + """ 4 + Python selftest helpers for netdev. 5 + """ 6 + 3 7 from .consts import KSRC 4 - from .ksft import * 8 + from .ksft import KsftFailEx, KsftSkipEx, KsftXfailEx, ksft_pr, ksft_eq, \ 9 + ksft_ne, ksft_true, ksft_not_none, ksft_in, ksft_not_in, ksft_is, \ 10 + ksft_ge, ksft_gt, ksft_lt, ksft_raises, ksft_busy_wait, \ 11 + ktap_result, ksft_disruptive, ksft_setup, ksft_run, ksft_exit 5 12 from .netns import NetNS, NetNSEnter 6 - from .nsim import * 7 - from .utils import * 13 + from .nsim import NetdevSim, NetdevSimDev 14 + from .utils import CmdExitFailure, fd_read_timeout, cmd, bkg, defer, \ 15 + bpftool, ip, ethtool, bpftrace, rand_port, wait_port_listen, wait_file 8 16 from .ynl import NlError, YnlFamily, EthtoolFamily, NetdevFamily, RtnlFamily, RtnlAddrFamily 9 17 from .ynl import NetshaperFamily, DevlinkFamily, PSPFamily 18 + 19 + __all__ = ["KSRC", 20 + "KsftFailEx", "KsftSkipEx", "KsftXfailEx", "ksft_pr", "ksft_eq", 21 + "ksft_ne", "ksft_true", "ksft_not_none", "ksft_in", "ksft_not_in", 22 + "ksft_is", "ksft_ge", "ksft_gt", "ksft_lt", "ksft_raises", 23 + "ksft_busy_wait", "ktap_result", "ksft_disruptive", "ksft_setup", 24 + "ksft_run", "ksft_exit", 25 + "NetNS", "NetNSEnter", 26 + "CmdExitFailure", "fd_read_timeout", "cmd", "bkg", "defer", 27 + "bpftool", "ip", "ethtool", "bpftrace", "rand_port", 28 + "wait_port_listen", "wait_file", 29 + "NetdevSim", "NetdevSimDev", 30 + "NetshaperFamily", "DevlinkFamily", "PSPFamily", "NlError", 31 + "YnlFamily", "EthtoolFamily", "NetdevFamily", "RtnlFamily", 32 + "RtnlAddrFamily"]
+2
tools/testing/selftests/net/rtnetlink.sh
··· 1466 1466 EOF 1467 1467 } 1468 1468 1469 + require_command jq 1470 + 1469 1471 #check for needed privileges 1470 1472 if [ "$(id -u)" -ne 0 ];then 1471 1473 end_test "SKIP: Need root privileges"
+65
tools/testing/selftests/net/tls.c
··· 564 564 EXPECT_EQ(memcmp(buf, test_str, send_len), 0); 565 565 } 566 566 567 + TEST_F(tls, cmsg_msg_more) 568 + { 569 + char *test_str = "test_read"; 570 + char record_type = 100; 571 + int send_len = 10; 572 + 573 + /* we don't allow MSG_MORE with non-DATA records */ 574 + EXPECT_EQ(tls_send_cmsg(self->fd, record_type, test_str, send_len, 575 + MSG_MORE), -1); 576 + EXPECT_EQ(errno, EINVAL); 577 + } 578 + 579 + TEST_F(tls, msg_more_then_cmsg) 580 + { 581 + char *test_str = "test_read"; 582 + char record_type = 100; 583 + int send_len = 10; 584 + char buf[10 * 2]; 585 + int ret; 586 + 587 + EXPECT_EQ(send(self->fd, test_str, send_len, MSG_MORE), send_len); 588 + EXPECT_EQ(recv(self->cfd, buf, send_len, MSG_DONTWAIT), -1); 589 + 590 + ret = tls_send_cmsg(self->fd, record_type, test_str, send_len, 0); 591 + EXPECT_EQ(ret, send_len); 592 + 593 + /* initial DATA record didn't get merged with the non-DATA record */ 594 + EXPECT_EQ(recv(self->cfd, buf, send_len * 2, 0), send_len); 595 + 596 + EXPECT_EQ(tls_recv_cmsg(_metadata, self->cfd, record_type, 597 + buf, sizeof(buf), MSG_WAITALL), 598 + send_len); 599 + } 600 + 567 601 TEST_F(tls, msg_more_unsent) 568 602 { 569 603 char const *test_str = "test_read"; ··· 945 911 EXPECT_EQ(read(p[0], mem_recv, send_len), send_len); 946 912 EXPECT_EQ(memcmp(mem_send, mem_recv, send_len), 0); 947 913 } 914 + 915 + #define MAX_FRAGS 48 916 + TEST_F(tls, splice_short) 917 + { 918 + struct iovec sendchar_iov; 919 + char read_buf[0x10000]; 920 + char sendbuf[0x100]; 921 + char sendchar = 'S'; 922 + int pipefds[2]; 923 + int i; 924 + 925 + sendchar_iov.iov_base = &sendchar; 926 + sendchar_iov.iov_len = 1; 927 + 928 + memset(sendbuf, 's', sizeof(sendbuf)); 929 + 930 + ASSERT_GE(pipe2(pipefds, O_NONBLOCK), 0); 931 + ASSERT_GE(fcntl(pipefds[0], F_SETPIPE_SZ, (MAX_FRAGS + 1) * 0x1000), 0); 932 + 933 + for (i = 0; i < MAX_FRAGS; i++) 934 + ASSERT_GE(vmsplice(pipefds[1], &sendchar_iov, 1, 0), 0); 935 + 936 + ASSERT_EQ(write(pipefds[1], 
sendbuf, sizeof(sendbuf)), sizeof(sendbuf)); 937 + 938 + EXPECT_EQ(splice(pipefds[0], NULL, self->fd, NULL, MAX_FRAGS + 0x1000, 0), 939 + MAX_FRAGS + sizeof(sendbuf)); 940 + EXPECT_EQ(recv(self->cfd, read_buf, sizeof(read_buf), 0), MAX_FRAGS + sizeof(sendbuf)); 941 + EXPECT_EQ(recv(self->cfd, read_buf, sizeof(read_buf), MSG_DONTWAIT), -1); 942 + EXPECT_EQ(errno, EAGAIN); 943 + } 944 + #undef MAX_FRAGS 948 945 949 946 TEST_F(tls, recvmsg_single) 950 947 {
+2
tools/testing/selftests/net/vlan_bridge_binding.sh
··· 249 249 do_test_binding_off : "on->off when upper down" 250 250 } 251 251 252 + require_command jq 253 + 252 254 trap defer_scopes_cleanup EXIT 253 255 setup_prepare 254 256 tests_run
+1
virt/kvm/Kconfig
··· 113 113 bool 114 114 115 115 config KVM_GUEST_MEMFD 116 + depends on KVM_GENERIC_MMU_NOTIFIER 116 117 select XARRAY_MULTI 117 118 bool 118 119
+49 -26
virt/kvm/guest_memfd.c
··· 102 102 return filemap_grab_folio(inode->i_mapping, index); 103 103 } 104 104 105 - static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start, 106 - pgoff_t end) 105 + static enum kvm_gfn_range_filter kvm_gmem_get_invalidate_filter(struct inode *inode) 106 + { 107 + if ((u64)inode->i_private & GUEST_MEMFD_FLAG_INIT_SHARED) 108 + return KVM_FILTER_SHARED; 109 + 110 + return KVM_FILTER_PRIVATE; 111 + } 112 + 113 + static void __kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start, 114 + pgoff_t end, 115 + enum kvm_gfn_range_filter attr_filter) 107 116 { 108 117 bool flush = false, found_memslot = false; 109 118 struct kvm_memory_slot *slot; ··· 127 118 .end = slot->base_gfn + min(pgoff + slot->npages, end) - pgoff, 128 119 .slot = slot, 129 120 .may_block = true, 130 - /* guest memfd is relevant to only private mappings. */ 131 - .attr_filter = KVM_FILTER_PRIVATE, 121 + .attr_filter = attr_filter, 132 122 }; 133 123 134 124 if (!found_memslot) { ··· 147 139 KVM_MMU_UNLOCK(kvm); 148 140 } 149 141 150 - static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start, 151 - pgoff_t end) 142 + static void kvm_gmem_invalidate_begin(struct inode *inode, pgoff_t start, 143 + pgoff_t end) 144 + { 145 + struct list_head *gmem_list = &inode->i_mapping->i_private_list; 146 + enum kvm_gfn_range_filter attr_filter; 147 + struct kvm_gmem *gmem; 148 + 149 + attr_filter = kvm_gmem_get_invalidate_filter(inode); 150 + 151 + list_for_each_entry(gmem, gmem_list, entry) 152 + __kvm_gmem_invalidate_begin(gmem, start, end, attr_filter); 153 + } 154 + 155 + static void __kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start, 156 + pgoff_t end) 152 157 { 153 158 struct kvm *kvm = gmem->kvm; 154 159 ··· 172 151 } 173 152 } 174 153 175 - static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len) 154 + static void kvm_gmem_invalidate_end(struct inode *inode, pgoff_t start, 155 + pgoff_t end) 176 156 { 177 157 struct list_head 
*gmem_list = &inode->i_mapping->i_private_list; 158 + struct kvm_gmem *gmem; 159 + 160 + list_for_each_entry(gmem, gmem_list, entry) 161 + __kvm_gmem_invalidate_end(gmem, start, end); 162 + } 163 + 164 + static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len) 165 + { 178 166 pgoff_t start = offset >> PAGE_SHIFT; 179 167 pgoff_t end = (offset + len) >> PAGE_SHIFT; 180 - struct kvm_gmem *gmem; 181 168 182 169 /* 183 170 * Bindings must be stable across invalidation to ensure the start+end ··· 193 164 */ 194 165 filemap_invalidate_lock(inode->i_mapping); 195 166 196 - list_for_each_entry(gmem, gmem_list, entry) 197 - kvm_gmem_invalidate_begin(gmem, start, end); 167 + kvm_gmem_invalidate_begin(inode, start, end); 198 168 199 169 truncate_inode_pages_range(inode->i_mapping, offset, offset + len - 1); 200 170 201 - list_for_each_entry(gmem, gmem_list, entry) 202 - kvm_gmem_invalidate_end(gmem, start, end); 171 + kvm_gmem_invalidate_end(inode, start, end); 203 172 204 173 filemap_invalidate_unlock(inode->i_mapping); 205 174 ··· 307 280 * Zap all SPTEs pointed at by this file. Do not free the backing 308 281 * memory, as its lifetime is associated with the inode, not the file. 
309 282 */ 310 - kvm_gmem_invalidate_begin(gmem, 0, -1ul); 311 - kvm_gmem_invalidate_end(gmem, 0, -1ul); 283 + __kvm_gmem_invalidate_begin(gmem, 0, -1ul, 284 + kvm_gmem_get_invalidate_filter(inode)); 285 + __kvm_gmem_invalidate_end(gmem, 0, -1ul); 312 286 313 287 list_del(&gmem->entry); 314 288 ··· 354 326 vm_fault_t ret = VM_FAULT_LOCKED; 355 327 356 328 if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode)) 329 + return VM_FAULT_SIGBUS; 330 + 331 + if (!((u64)inode->i_private & GUEST_MEMFD_FLAG_INIT_SHARED)) 357 332 return VM_FAULT_SIGBUS; 358 333 359 334 folio = kvm_gmem_get_folio(inode, vmf->pgoff); ··· 431 400 432 401 static int kvm_gmem_error_folio(struct address_space *mapping, struct folio *folio) 433 402 { 434 - struct list_head *gmem_list = &mapping->i_private_list; 435 - struct kvm_gmem *gmem; 436 403 pgoff_t start, end; 437 404 438 405 filemap_invalidate_lock_shared(mapping); ··· 438 409 start = folio->index; 439 410 end = start + folio_nr_pages(folio); 440 411 441 - list_for_each_entry(gmem, gmem_list, entry) 442 - kvm_gmem_invalidate_begin(gmem, start, end); 412 + kvm_gmem_invalidate_begin(mapping->host, start, end); 443 413 444 414 /* 445 415 * Do not truncate the range, what action is taken in response to the ··· 449 421 * error to userspace. 
450 422 */ 451 423 452 - list_for_each_entry(gmem, gmem_list, entry) 453 - kvm_gmem_invalidate_end(gmem, start, end); 424 + kvm_gmem_invalidate_end(mapping->host, start, end); 454 425 455 426 filemap_invalidate_unlock_shared(mapping); 456 427 ··· 485 458 .setattr = kvm_gmem_setattr, 486 459 }; 487 460 488 - bool __weak kvm_arch_supports_gmem_mmap(struct kvm *kvm) 461 + bool __weak kvm_arch_supports_gmem_init_shared(struct kvm *kvm) 489 462 { 490 463 return true; 491 464 } ··· 549 522 { 550 523 loff_t size = args->size; 551 524 u64 flags = args->flags; 552 - u64 valid_flags = 0; 553 525 554 - if (kvm_arch_supports_gmem_mmap(kvm)) 555 - valid_flags |= GUEST_MEMFD_FLAG_MMAP; 556 - 557 - if (flags & ~valid_flags) 526 + if (flags & ~kvm_gmem_get_supported_flags(kvm)) 558 527 return -EINVAL; 559 528 560 529 if (size <= 0 || !PAGE_ALIGNED(size))
+2 -2
virt/kvm/kvm_main.c
··· 4928 4928 #ifdef CONFIG_KVM_GUEST_MEMFD 4929 4929 case KVM_CAP_GUEST_MEMFD: 4930 4930 return 1; 4931 - case KVM_CAP_GUEST_MEMFD_MMAP: 4932 - return !kvm || kvm_arch_supports_gmem_mmap(kvm); 4931 + case KVM_CAP_GUEST_MEMFD_FLAGS: 4932 + return kvm_gmem_get_supported_flags(kvm); 4933 4933 #endif 4934 4934 default: 4935 4935 break;