···
-APM X-Gene SLIMpro Mailbox I2C Driver
-
-An I2C controller accessed over the "SLIMpro" mailbox.
-
-Required properties :
-
- - compatible : should be "apm,xgene-slimpro-i2c"
- - mboxes : use the label reference for the mailbox as the first parameter.
-            The second parameter is the channel number.
-
-Example :
-	i2cslimpro {
-		compatible = "apm,xgene-slimpro-i2c";
-		mboxes = <&mailbox 0>;
-	};
···
       Instruments Smart Amp speaker protection algorithm. The
       integrated speaker voltage and current sense provides for real time
       monitoring of loudspeaker behavior.
-      The TAS5825/TAS5827 is a stereo, digital input Class-D audio
-      amplifier optimized for efficiently driving high peak power into
-      small loudspeakers. An integrated on-chip DSP supports Texas
-      Instruments Smart Amp speaker protection algorithm.
+      The TAS5802/TAS5815/TAS5825/TAS5827/TAS5828 is a stereo, digital input
+      Class-D audio amplifier optimized for efficiently driving high peak
+      power into small loudspeakers. An integrated on-chip DSP supports
+      Texas Instruments Smart Amp speaker protection algorithm.

       Specifications about the audio amplifier can be found at:
         https://www.ti.com/lit/gpn/tas2120
···
         https://www.ti.com/lit/gpn/tas2563
         https://www.ti.com/lit/gpn/tas2572
         https://www.ti.com/lit/gpn/tas2781
+        https://www.ti.com/lit/gpn/tas5815
         https://www.ti.com/lit/gpn/tas5825m
         https://www.ti.com/lit/gpn/tas5827
+        https://www.ti.com/lit/gpn/tas5828m

 properties:
   compatible:
···
       Protection and Audio Processing, 16/20/24/32bit stereo I2S or
       multichannel TDM.

+      ti,tas5802: 22-W, Inductor-Less, Digital Input, Closed-Loop Class-D
+      Audio Amplifier with 96-kHz Extended Processing and Low Idle Power
+      Dissipation.
+
+      ti,tas5815: 30-W, Digital Input, Stereo, Closed-Loop Class-D Audio
+      Amplifier with 96-kHz Enhanced Processing
+
       ti,tas5825: 38-W Stereo, Inductor-Less, Digital Input, Closed-Loop 4.5V
       to 26.4V Class-D Audio Amplifier with 192-kHz Extended Audio Processing.

-      ti,tas5827: 47-W Stereo, Digital Input, High Efficiency Closed-Loop Class-D
-      Amplifier with Class-H Algorithm
+      ti,tas5827: 47-W Stereo, Digital Input, High Efficiency Closed-Loop
+      Class-D Amplifier with Class-H Algorithm
+
+      ti,tas5828: 50-W Stereo, Digital Input, High Efficiency Closed-Loop
+      Class-D Amplifier with Hybrid-Pro Algorithm
     oneOf:
       - items:
           - enum:
···
               - ti,tas2563
               - ti,tas2570
               - ti,tas2572
+              - ti,tas5802
+              - ti,tas5815
               - ti,tas5825
               - ti,tas5827
+              - ti,tas5828
           - const: ti,tas2781
       - enum:
           - ti,tas2781
···
         compatible:
           contains:
             enum:
+              - ti,tas5802
+              - ti,tas5815
+    then:
+      properties:
+        reg:
+          maxItems: 4
+          items:
+            minimum: 0x54
+            maximum: 0x57
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
               - ti,tas5827
+              - ti,tas5828
     then:
       properties:
         reg:
+31-30
Documentation/filesystems/ext4/directory.rst
···
     - det_checksum
     - Directory leaf block checksum.

-The leaf directory block checksum is calculated against the FS UUID, the
-directory's inode number, the directory's inode generation number, and
-the entire directory entry block up to (but not including) the fake
-directory entry.
+The leaf directory block checksum is calculated against the FS UUID (or
+the checksum seed, if that feature is enabled for the fs), the directory's
+inode number, the directory's inode generation number, and the entire
+directory entry block up to (but not including) the fake directory entry.

 Hash Tree Directories
 ~~~~~~~~~~~~~~~~~~~~~
···
 balanced tree keyed off a hash of the directory entry name. If the
 EXT4_INDEX_FL (0x1000) flag is set in the inode, this directory uses a
 hashed btree (htree) to organize and find directory entries. For
-backwards read-only compatibility with ext2, this tree is actually
-hidden inside the directory file, masquerading as “empty” directory data
-blocks! It was stated previously that the end of the linear directory
-entry table was signified with an entry pointing to inode 0; this is
-(ab)used to fool the old linear-scan algorithm into thinking that the
-rest of the directory block is empty so that it moves on.
+backwards read-only compatibility with ext2, interior tree nodes are actually
+hidden inside the directory file, masquerading as “empty” directory entries
+spanning the whole block. It was stated previously that directory entries
+with the inode set to 0 are treated as unused entries; this is (ab)used to
+fool the old linear-scan algorithm into skipping over those blocks containing
+the interior tree node data.

 The root of the tree always lives in the first data block of the
 directory. By ext2 custom, the '.' and '..' entries must appear at the
···
 ``struct ext4_dir_entry_2`` s and not stored in the tree. The rest of
 the root node contains metadata about the tree and finally a hash->block
 map to find nodes that are lower in the htree. If
-``dx_root.info.indirect_levels`` is non-zero then the htree has two
-levels; the data block pointed to by the root node's map is an interior
-node, which is indexed by a minor hash. Interior nodes in this tree
-contains a zeroed out ``struct ext4_dir_entry_2`` followed by a
-minor_hash->block map to find leafe nodes. Leaf nodes contain a linear
-array of all ``struct ext4_dir_entry_2``; all of these entries
-(presumably) hash to the same value. If there is an overflow, the
-entries simply overflow into the next leaf node, and the
-least-significant bit of the hash (in the interior node map) that gets
-us to this next leaf node is set.
+``dx_root.info.indirect_levels`` is non-zero then the htree has that many
+levels and the blocks pointed to by the root node's map are interior nodes.
+These interior nodes have a zeroed out ``struct ext4_dir_entry_2`` followed by
+a hash->block map to find nodes of the next level. Leaf nodes look like
+classic linear directory blocks, but all of their entries have a hash value
+equal to or greater than the indicated hash of the parent node.

-To traverse the directory as a htree, the code calculates the hash of
-the desired file name and uses it to find the corresponding block
-number. If the tree is flat, the block is a linear array of directory
-entries that can be searched; otherwise, the minor hash of the file name
-is computed and used against this second block to find the corresponding
-third block number. That third block number will be a linear array of
-directory entries.
+The actual hash value for an entry name is only 31 bits; the least-significant
+bit is set to 0. However, if there is a hash collision between directory
+entries, the least-significant bit may get set to 1 on interior nodes in the
+case where these two (or more) hash-colliding entries do not fit into one leaf
+node and must be split across multiple nodes.
+
+To look up a name in such a htree, the code calculates the hash of the desired
+file name and uses it to find the leaf node with the range of hash values the
+calculated hash falls into (in other words, a lookup works basically the same
+as it would in a B-Tree keyed by the hash value), possibly also scanning
+the leaf nodes that follow (in tree order) in case of hash collisions.

 To traverse the directory as a linear array (such as the old code does),
 the code simply reads every data block in the directory. The blocks used
···
    * - 0x24
      - __le32
      - block
-     - The block number (within the directory file) that goes with hash=0.
+     - The block number (within the directory file) that leads to the left-most
+       leaf node, i.e. the leaf containing entries with the lowest hash values.
    * - 0x28
      - struct dx_entry
      - entries[0]
···
    * - 0x0
      - u32
      - dt_reserved
-     - Zero.
+     - Unused (but curiously still part of the checksum).
    * - 0x4
      - __le32
      - dt_checksum
···

 The checksum is calculated against the FS UUID, the htree index header
 (dx_root or dx_node), all of the htree indices (dx_entry) that are in
-use, and the tail block (dx_tail).
+use, and the tail block (dx_tail) with the dt_checksum initially set to 0.
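The htree lookup described above boils down to choosing, in a sorted hash->block map, the last entry whose hash is less than or equal to the computed hash. A minimal sketch of that selection step (the `struct dx_entry` layout is simplified and `dx_find_block` is a hypothetical helper, not the kernel's implementation):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for ext4's on-disk dx_entry. */
struct dx_entry {
	uint32_t hash;   /* lowest hash reachable through this entry */
	uint32_t block;  /* logical block within the directory file */
};

/*
 * Pick the map entry whose hash range contains 'hash': the last entry
 * with entry.hash <= hash. Entry 0 covers the lowest hashes, matching
 * the "left-most leaf" role of dx_root's block field described above.
 */
static uint32_t dx_find_block(const struct dx_entry *map, size_t count,
			      uint32_t hash)
{
	size_t lo = 0, hi = count - 1;

	while (lo < hi) {
		size_t mid = (lo + hi + 1) / 2;

		if (map[mid].hash <= hash)
			lo = mid;	/* candidate range starts at or before hash */
		else
			hi = mid - 1;	/* range starts after hash; go left */
	}
	return map[lo].block;
}
```

On a collision, the real code may additionally walk the following leaves in tree order, as the text notes; this sketch only shows the map selection.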
+67-4
Documentation/networking/can.rst
···
 Additionally CAN FD capable CAN controllers support up to 64 bytes of
 payload. The representation of this length in can_frame.len and
 canfd_frame.len for userspace applications and inside the Linux network
-layer is a plain value from 0 .. 64 instead of the CAN 'data length code'.
-The data length code was a 1:1 mapping to the payload length in the Classical
-CAN frames anyway. The payload length to the bus-relevant DLC mapping is
-only performed inside the CAN drivers, preferably with the helper
+layer is a plain value from 0 .. 64 instead of the Classical CAN length
+which ranges from 0 to 8. The payload length to the bus-relevant DLC mapping
+is only performed inside the CAN drivers, preferably with the helper
 functions can_fd_dlc2len() and can_fd_len2dlc().

 The CAN netdevice driver capabilities can be distinguished by the network
···
 Example when 'fd-non-iso on' is added on this switchable CAN FD adapter::

   can <FD,FD-NON-ISO> state ERROR-ACTIVE (berr-counter tx 0 rx 0) restart-ms 0
+
+
+Transmitter Delay Compensation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+At high bit rates, the propagation delay from the TX pin to the RX pin of
+the transceiver might become greater than the actual bit time, causing
+measurement errors: the RX pin would still be measuring the previous bit.
+
+The Transmitter Delay Compensation (hereafter, TDC) resolves this problem
+by introducing a Secondary Sample Point (SSP) equal to the distance, in
+minimum time quanta, from the start of the bit time on the TX pin to the
+actual measurement on the RX pin. The SSP is calculated as the sum of two
+configurable values: the TDC Value (TDCV) and the TDC Offset (TDCO).
+
+TDC, if supported by the device, can be configured together with CAN FD
+using the ip tool's "tdc-mode" argument as follows:
+
+**omitted**
+  When no "tdc-mode" option is provided, the kernel will automatically
+  decide whether TDC should be turned on, in which case it will
+  calculate a default TDCO and use the TDCV as measured by the
+  device. This is the recommended way to use TDC.
+
+**"tdc-mode off"**
+  TDC is explicitly disabled.
+
+**"tdc-mode auto"**
+  The user must provide the "tdco" argument. The TDCV will be
+  automatically calculated by the device. This option is only
+  available if the device supports the TDC-AUTO CAN controller mode.
+
+**"tdc-mode manual"**
+  The user must provide both the "tdco" and "tdcv" arguments. This
+  option is only available if the device supports the TDC-MANUAL CAN
+  controller mode.
+
+Note that some devices may offer an additional parameter: "tdcf" (TDC Filter
+window). If supported by your device, this can be added as an optional
+argument to either "tdc-mode auto" or "tdc-mode manual".
+
+Example configuring a 500 kbit/s arbitration bitrate, a 4 Mbit/s data
+bitrate, a TDCO of 15 minimum time quanta and a TDCV automatically measured
+by the device::
+
+  $ ip link set can0 up type can bitrate 500000 \
+        fd on dbitrate 4000000 \
+        tdc-mode auto tdco 15
+  $ ip -details link show can0
+  5: can0: <NOARP,UP,LOWER_UP,ECHO> mtu 72 qdisc pfifo_fast state UP \
+        mode DEFAULT group default qlen 10
+    link/can promiscuity 0 allmulti 0 minmtu 72 maxmtu 72
+    can <FD,TDC-AUTO> state ERROR-ACTIVE restart-ms 0
+          bitrate 500000 sample-point 0.875
+          tq 12 prop-seg 69 phase-seg1 70 phase-seg2 20 sjw 10 brp 1
+          ES582.1/ES584.1: tseg1 2..256 tseg2 2..128 sjw 1..128 brp 1..512 \
+          brp_inc 1
+          dbitrate 4000000 dsample-point 0.750
+          dtq 12 dprop-seg 7 dphase-seg1 7 dphase-seg2 5 dsjw 2 dbrp 1
+          tdco 15 tdcf 0
+          ES582.1/ES584.1: dtseg1 2..32 dtseg2 1..16 dsjw 1..8 dbrp 1..32 \
+          dbrp_inc 1
+          tdco 0..127 tdcf 0..127
+          clock 80000000


 Supported CAN Hardware
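The SSP arithmetic in the text above is simply additive. A trivial sketch (the helper name `can_ssp_position` is hypothetical, not a kernel API; units are minimum time quanta):

```c
#include <stdint.h>

/*
 * Hypothetical helper: the Secondary Sample Point position is the sum
 * of the TDC Value (measured or user-supplied) and the TDC Offset, both
 * expressed in minimum time quanta, as described in the TDC section.
 */
static uint32_t can_ssp_position(uint32_t tdcv, uint32_t tdco)
{
	return tdcv + tdco;
}
```

With "tdc-mode auto", only TDCO comes from the user; TDCV is measured by the controller before the sum is formed.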
+3
Documentation/networking/seg6-sysctl.rst
···

 Default is 0.

+/proc/sys/net/ipv6/seg6_* variables:
+====================================
+
 seg6_flowlabel - INTEGER
 	Controls the behaviour of computing the flowlabel of outer
 	IPv6 header in case of SR T.encaps
+75
Documentation/rust/coding-guidelines.rst
···
 individual files, and does not require a kernel configuration. Sometimes it may
 even work with broken code.

+Imports
+~~~~~~~
+
+``rustfmt``, by default, formats imports in a way that is prone to conflicts
+while merging and rebasing, since in some cases it condenses several items into
+the same line. For instance:
+
+.. code-block:: rust
+
+	// Do not use this style.
+	use crate::{
+	    example1,
+	    example2::{example3, example4, example5},
+	    example6, example7,
+	    example8::example9,
+	};
+
+Instead, the kernel uses a vertical layout that looks like this:
+
+.. code-block:: rust
+
+	use crate::{
+	    example1,
+	    example2::{
+	        example3,
+	        example4,
+	        example5, //
+	    },
+	    example6,
+	    example7,
+	    example8::example9, //
+	};
+
+That is, each item goes into its own line, and braces are used as soon as there
+is more than one item in a list.
+
+The trailing empty comment makes it possible to preserve this formatting. Not
+only that, ``rustfmt`` will actually reformat imports vertically when the empty
+comment is added. That is, it is possible to easily reformat the original
+example into the expected style by running ``rustfmt`` on an input like:
+
+.. code-block:: rust
+
+	// Do not use this style.
+	use crate::{
+	    example1,
+	    example2::{example3, example4, example5, //
+	    },
+	    example6, example7,
+	    example8::example9, //
+	};
+
+The trailing empty comment works for nested imports, as shown above, as well as
+for single item imports -- this can be useful to minimize diffs within patch
+series:
+
+.. code-block:: rust
+
+	use crate::{
+	    example1, //
+	};
+
+The trailing empty comment works in any of the lines within the braces, but it
+is preferred to keep it in the last item, since it is reminiscent of the
+trailing comma in other formatters. Sometimes it may be simpler to avoid moving
+the comment several times within a patch series due to changes in the list.
+
+There may be cases where exceptions need to be made, i.e. none of this is
+a hard rule. There is also code that has not been migrated to this style yet,
+but please do not introduce code in other styles.
+
+Eventually, the goal is to get ``rustfmt`` to support this formatting style (or
+a similar one) automatically in a stable release without requiring the trailing
+empty comment. Thus, at some point, the goal is to remove those comments.
+

 Comments
 --------
+17-3
Documentation/virt/kvm/api.rst
···
 KVM_SET_VCPU_EVENTS or otherwise) because such an exception is always delivered
 directly to the virtual CPU).

+Calling this ioctl on a vCPU that hasn't been initialized will return
+-ENOEXEC.
+
 ::

   struct kvm_vcpu_events {
···
 See KVM_GET_VCPU_EVENTS for the data structure.

+Calling this ioctl on a vCPU that hasn't been initialized will return
+-ENOEXEC.
+

 4.33 KVM_GET_DEBUGREGS
 ----------------------
···
 guest_memfd range is not allowed (any number of memory regions can be bound to
 a single guest_memfd file, but the bound ranges must not overlap).

-When the capability KVM_CAP_GUEST_MEMFD_MMAP is supported, the 'flags' field
-supports GUEST_MEMFD_FLAG_MMAP. Setting this flag on guest_memfd creation
-enables mmap() and faulting of guest_memfd memory to host userspace.
+The capability KVM_CAP_GUEST_MEMFD_FLAGS enumerates the `flags` that can be
+specified via KVM_CREATE_GUEST_MEMFD. Currently defined flags:
+
+  ============================ ================================================
+  GUEST_MEMFD_FLAG_MMAP        Enable using mmap() on the guest_memfd file
+                               descriptor.
+  GUEST_MEMFD_FLAG_INIT_SHARED Make all memory in the file shared during
+                               KVM_CREATE_GUEST_MEMFD (memory files created
+                               without INIT_SHARED will be marked private).
+                               Shared memory can be faulted into host userspace
+                               page tables. Private memory cannot.
+  ============================ ================================================

 When the KVM MMU performs a PFN lookup to service a guest fault and the backing
 guest_memfd has the GUEST_MEMFD_FLAG_MMAP set, then the fault will always be
+2-1
Documentation/virt/kvm/devices/arm-vgic-v3.rst
···
 to inject interrupts to the VGIC instead of directly to CPUs. It is not
 possible to create both a GICv3 and GICv2 on the same VM.

-Creating a guest GICv3 device requires a host GICv3 as well.
+Creating a guest GICv3 device requires a GICv3 host, or a GICv5 host with
+support for FEAT_GCIE_LEGACY.


 Groups:
···
 	def_bool y
 	depends on HAVE_CFI_ICALL_NORMALIZE_INTEGERS
 	depends on RUSTC_VERSION >= 107900
+	depends on ARM64 || X86_64
 	# With GCOV/KASAN we need this fix: https://github.com/rust-lang/rust/pull/129373
 	depends on (RUSTC_LLVM_VERSION >= 190103 && RUSTC_VERSION >= 108200) || \
 	    (!GCOV_KERNEL && !KASAN_GENERIC && !KASAN_SW_TAGS)
+1-1
arch/arc/configs/axs101_defconfig
···
 CONFIG_MMC_SDHCI_PLTFM=y
 CONFIG_MMC_DW=y
 # CONFIG_IOMMU_SUPPORT is not set
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_NTFS_FS=y
+1-1
arch/arc/configs/axs103_defconfig
···
 CONFIG_MMC_SDHCI_PLTFM=y
 CONFIG_MMC_DW=y
 # CONFIG_IOMMU_SUPPORT is not set
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_NTFS_FS=y
+1-1
arch/arc/configs/axs103_smp_defconfig
···
 CONFIG_MMC_SDHCI_PLTFM=y
 CONFIG_MMC_DW=y
 # CONFIG_IOMMU_SUPPORT is not set
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_NTFS_FS=y
···
 CONFIG_USB_STORAGE=y
 CONFIG_USB_SERIAL=y
 # CONFIG_IOMMU_SUPPORT is not set
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
+1-1
arch/arc/configs/vdk_hs38_smp_defconfig
···
 CONFIG_UIO=y
 CONFIG_UIO_PDRV_GENIRQ=y
 # CONFIG_IOMMU_SUPPORT is not set
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_NTFS_FS=y
+1-2
arch/arm/configs/axm55xx_defconfig
···
 CONFIG_PL320_MBOX=y
 # CONFIG_IOMMU_SUPPORT is not set
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS=y
 CONFIG_AUTOFS_FS=y
 CONFIG_FUSE_FS=y
···
 CONFIG_DMADEVICES=y
 CONFIG_MV_XOR=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_FS_XATTR is not set
+CONFIG_EXT4_FS=y
+# CONFIG_EXT4_FS_XATTR is not set
 CONFIG_EXT4_FS=y
 CONFIG_ISO9660_FS=y
 CONFIG_JOLIET=y
+2-2
arch/arm/configs/ep93xx_defconfig
···
 CONFIG_DMADEVICES=y
 CONFIG_EP93XX_DMA=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_FS_XATTR is not set
+CONFIG_EXT4_FS=y
+# CONFIG_EXT4_FS_XATTR is not set
 CONFIG_EXT4_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
···
 CONFIG_RTC_DRV_MAX8925=y
 # CONFIG_RESET_CONTROLLER is not set
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS=y
 # CONFIG_DNOTIFY is not set
 CONFIG_MSDOS_FS=y
+1-1
arch/arm/configs/moxart_defconfig
···
 CONFIG_DMADEVICES=y
 CONFIG_MOXART_DMA=y
 # CONFIG_IOMMU_SUPPORT is not set
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 CONFIG_TMPFS=y
 CONFIG_CONFIGFS_FS=y
 CONFIG_JFFS2_FS=y
···
 CONFIG_RTC_DRV_RS5C372=y
 CONFIG_RTC_DRV_M41T80=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_FS_XATTR is not set
+CONFIG_EXT4_FS=y
+# CONFIG_EXT4_FS_XATTR is not set
 CONFIG_EXT4_FS=m
 CONFIG_ISO9660_FS=m
 CONFIG_JOLIET=y
···
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_OMAP=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 # CONFIG_DNOTIFY is not set
 CONFIG_AUTOFS_FS=y
 CONFIG_ISO9660_FS=y
···
 CONFIG_DMADEVICES=y
 CONFIG_MV_XOR=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_FS_XATTR is not set
+CONFIG_EXT4_FS=y
+# CONFIG_EXT4_FS_XATTR is not set
 CONFIG_EXT4_FS=m
 CONFIG_ISO9660_FS=m
 CONFIG_JOLIET=y
···
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT2_FS_POSIX_ACL=y
 CONFIG_EXT2_FS_SECURITY=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_FS_XATTR is not set
+CONFIG_EXT4_FS=y
+# CONFIG_EXT4_FS_XATTR is not set
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
+1-1
arch/arm/configs/stm32_defconfig
···
 CONFIG_IIO=y
 CONFIG_STM32_ADC_CORE=y
 CONFIG_STM32_ADC=y
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 # CONFIG_FILE_LOCKING is not set
 # CONFIG_DNOTIFY is not set
 # CONFIG_INOTIFY_USER is not set
+3-3
arch/arm/configs/tegra_defconfig
···
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT2_FS_POSIX_ACL=y
 CONFIG_EXT2_FS_SECURITY=y
-CONFIG_EXT3_FS=y
-CONFIG_EXT3_FS_POSIX_ACL=y
-CONFIG_EXT3_FS_SECURITY=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_POSIX_ACL=y
+CONFIG_EXT4_FS_SECURITY=y
 # CONFIG_DNOTIFY is not set
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
···
 	 * ID_AA64MMFR4_EL1.E2H0 < 0. On such CPUs HCR_EL2.E2H is RES1, but it
 	 * can reset into an UNKNOWN state and might not read as 1 until it has
 	 * been initialized explicitly.
-	 *
-	 * Fruity CPUs seem to have HCR_EL2.E2H set to RAO/WI, but
-	 * don't advertise it (they predate this relaxation).
-	 *
 	 * Initialize HCR_EL2.E2H so that later code can rely upon HCR_EL2.E2H
 	 * indicating whether the CPU is running in E2H mode.
 	 */
 	mrs_s	x1, SYS_ID_AA64MMFR4_EL1
 	sbfx	x1, x1, #ID_AA64MMFR4_EL1_E2H0_SHIFT, #ID_AA64MMFR4_EL1_E2H0_WIDTH
 	cmp	x1, #0
-	b.ge	.LnVHE_\@
+	b.lt	.LnE2H0_\@

+	/*
+	 * Unfortunately, HCR_EL2.E2H can be RES1 even if not advertised
+	 * as such via ID_AA64MMFR4_EL1.E2H0:
+	 *
+	 * - Fruity CPUs predate the !FEAT_E2H0 relaxation, and seem to
+	 *   have HCR_EL2.E2H implemented as RAO/WI.
+	 *
+	 * - On CPUs that lack FEAT_FGT, a hypervisor can't trap guest
+	 *   reads of ID_AA64MMFR4_EL1 to advertise !FEAT_E2H0. NV
+	 *   guests on these hosts can write to HCR_EL2.E2H without
+	 *   trapping to the hypervisor, but these writes have no
+	 *   functional effect.
+	 *
+	 * Handle both cases by checking for an essential VHE property
+	 * (system register remapping) to decide whether we're
+	 * effectively VHE-only or not.
+	 */
+	msr_hcr_el2 x0			// Setup HCR_EL2 as nVHE
+	isb
+	mov	x1, #1			// Write something to FAR_EL1
+	msr	far_el1, x1
+	isb
+	mov	x1, #2			// Try to overwrite it via FAR_EL2
+	msr	far_el2, x1
+	isb
+	mrs	x1, far_el1		// If we see the latest write in FAR_EL1,
+	cmp	x1, #2			// we can safely assume we are VHE only.
+	b.ne	.LnVHE_\@		// Otherwise, we know that nVHE works.
+
+.LnE2H0_\@:
 	orr	x0, x0, #HCR_E2H
-.LnVHE_\@:
 	msr_hcr_el2 x0
 	isb
+.LnVHE_\@:
 .endm

 .macro __init_el2_sctlr
+50
arch/arm64/include/asm/kvm_host.h
···
 	u64 hcrx_el2;
 	u64 mdcr_el2;

+	struct {
+		u64	r;
+		u64	w;
+	} fgt[__NR_FGT_GROUP_IDS__];
+
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
···
 void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt);
 void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *res1);
 void check_feature_map(void);
+void kvm_vcpu_load_fgt(struct kvm_vcpu *vcpu);

+static __always_inline enum fgt_group_id __fgt_reg_to_group_id(enum vcpu_sysreg reg)
+{
+	switch (reg) {
+	case HFGRTR_EL2:
+	case HFGWTR_EL2:
+		return HFGRTR_GROUP;
+	case HFGITR_EL2:
+		return HFGITR_GROUP;
+	case HDFGRTR_EL2:
+	case HDFGWTR_EL2:
+		return HDFGRTR_GROUP;
+	case HAFGRTR_EL2:
+		return HAFGRTR_GROUP;
+	case HFGRTR2_EL2:
+	case HFGWTR2_EL2:
+		return HFGRTR2_GROUP;
+	case HFGITR2_EL2:
+		return HFGITR2_GROUP;
+	case HDFGRTR2_EL2:
+	case HDFGWTR2_EL2:
+		return HDFGRTR2_GROUP;
+	default:
+		BUILD_BUG_ON(1);
+	}
+}
+
+#define vcpu_fgt(vcpu, reg)						\
+	({								\
+		enum fgt_group_id id = __fgt_reg_to_group_id(reg);	\
+		u64 *p;							\
+		switch (reg) {						\
+		case HFGWTR_EL2:					\
+		case HDFGWTR_EL2:					\
+		case HFGWTR2_EL2:					\
+		case HDFGWTR2_EL2:					\
+			p = &(vcpu)->arch.fgt[id].w;			\
+			break;						\
+		default:						\
+			p = &(vcpu)->arch.fgt[id].r;			\
+			break;						\
+		}							\
+									\
+		p;							\
+	})

 #endif /* __ARM64_KVM_HOST_H__ */
+10-1
arch/arm64/include/asm/sysreg.h
···
 	__val;								\
 })

+/*
+ * The "Z" constraint combined with the "%x0" template should be enough
+ * to force XZR generation if (v) is a constant 0 value but LLVM does not
+ * yet understand that modifier/constraint combo so a conditional is required
+ * to nudge the compiler into using XZR as a source for a 0 constant value.
+ */
 #define write_sysreg_s(v, r) do {					\
 	u64 __val = (u64)(v);						\
 	u32 __maybe_unused __check_r = (u32)(r);			\
-	asm volatile(__msr_s(r, "%x0") : : "rZ" (__val));		\
+	if (__builtin_constant_p(__val) && __val == 0)			\
+		asm volatile(__msr_s(r, "xzr"));			\
+	else								\
+		asm volatile(__msr_s(r, "%x0") : : "r" (__val));	\
 } while (0)

 /*
+5-3
arch/arm64/kernel/entry-common.c
···

 static void noinstr el0_softstp(struct pt_regs *regs, unsigned long esr)
 {
+	bool step_done;
+
 	if (!is_ttbr0_addr(regs->pc))
 		arm64_apply_bp_hardening();
···
 	 * If we are stepping a suspended breakpoint there's nothing more to do:
 	 * the single-step is complete.
 	 */
-	if (!try_step_suspended_breakpoints(regs)) {
-		local_daif_restore(DAIF_PROCCTX);
+	step_done = try_step_suspended_breakpoints(regs);
+	local_daif_restore(DAIF_PROCCTX);
+	if (!step_done)
 		do_el0_softstep(esr, regs);
-	}
 	arm64_exit_to_user_mode(regs);
 }
···
  */

 #include <linux/kvm_host.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_nested.h>
 #include <asm/sysreg.h>

 /*
···
 		*res0 = *res1 = 0;
 		break;
 	}
+}
+
+static __always_inline struct fgt_masks *__fgt_reg_to_masks(enum vcpu_sysreg reg)
+{
+	switch (reg) {
+	case HFGRTR_EL2:
+		return &hfgrtr_masks;
+	case HFGWTR_EL2:
+		return &hfgwtr_masks;
+	case HFGITR_EL2:
+		return &hfgitr_masks;
+	case HDFGRTR_EL2:
+		return &hdfgrtr_masks;
+	case HDFGWTR_EL2:
+		return &hdfgwtr_masks;
+	case HAFGRTR_EL2:
+		return &hafgrtr_masks;
+	case HFGRTR2_EL2:
+		return &hfgrtr2_masks;
+	case HFGWTR2_EL2:
+		return &hfgwtr2_masks;
+	case HFGITR2_EL2:
+		return &hfgitr2_masks;
+	case HDFGRTR2_EL2:
+		return &hdfgrtr2_masks;
+	case HDFGWTR2_EL2:
+		return &hdfgwtr2_masks;
+	default:
+		BUILD_BUG_ON(1);
+	}
+}
+
+static __always_inline void __compute_fgt(struct kvm_vcpu *vcpu, enum vcpu_sysreg reg)
+{
+	u64 fgu = vcpu->kvm->arch.fgu[__fgt_reg_to_group_id(reg)];
+	struct fgt_masks *m = __fgt_reg_to_masks(reg);
+	u64 clear = 0, set = 0, val = m->nmask;
+
+	set |= fgu & m->mask;
+	clear |= fgu & m->nmask;
+
+	if (is_nested_ctxt(vcpu)) {
+		u64 nested = __vcpu_sys_reg(vcpu, reg);
+
+		set |= nested & m->mask;
+		clear |= ~nested & m->nmask;
+	}
+
+	val |= set;
+	val &= ~clear;
+	*vcpu_fgt(vcpu, reg) = val;
+}
+
+static void __compute_hfgwtr(struct kvm_vcpu *vcpu)
+{
+	__compute_fgt(vcpu, HFGWTR_EL2);
+
+	if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
+		*vcpu_fgt(vcpu, HFGWTR_EL2) |= HFGWTR_EL2_TCR_EL1;
+}
+
+static void __compute_hdfgwtr(struct kvm_vcpu *vcpu)
+{
+	__compute_fgt(vcpu, HDFGWTR_EL2);
+
+	if (is_hyp_ctxt(vcpu))
+		*vcpu_fgt(vcpu, HDFGWTR_EL2) |= HDFGWTR_EL2_MDSCR_EL1;
+}
+
+void kvm_vcpu_load_fgt(struct kvm_vcpu *vcpu)
+{
+	if (!cpus_have_final_cap(ARM64_HAS_FGT))
+		return;
+
+	__compute_fgt(vcpu, HFGRTR_EL2);
+	__compute_hfgwtr(vcpu);
+	__compute_fgt(vcpu, HFGITR_EL2);
+	__compute_fgt(vcpu, HDFGRTR_EL2);
+	__compute_hdfgwtr(vcpu);
+	__compute_fgt(vcpu, HAFGRTR_EL2);
+
+	if (!cpus_have_final_cap(ARM64_HAS_FGT2))
+		return;
+
+	__compute_fgt(vcpu, HFGRTR2_EL2);
+	__compute_fgt(vcpu, HFGWTR2_EL2);
+	__compute_fgt(vcpu, HFGITR2_EL2);
+	__compute_fgt(vcpu, HDFGRTR2_EL2);
+	__compute_fgt(vcpu, HDFGWTR2_EL2);
 }
+10-5
arch/arm64/kvm/debug.c
···
 #include <asm/kvm_arm.h>
 #include <asm/kvm_emulate.h>

+static int cpu_has_spe(u64 dfr0)
+{
+	return cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_PMSVer_SHIFT) &&
+	       !(read_sysreg_s(SYS_PMBIDR_EL1) & PMBIDR_EL1_P);
+}
+
 /**
  * kvm_arm_setup_mdcr_el2 - configure vcpu mdcr_el2 value
  *
···
 	*host_data_ptr(debug_brps) = SYS_FIELD_GET(ID_AA64DFR0_EL1, BRPs, dfr0);
 	*host_data_ptr(debug_wrps) = SYS_FIELD_GET(ID_AA64DFR0_EL1, WRPs, dfr0);

+	if (cpu_has_spe(dfr0))
+		host_data_set_flag(HAS_SPE);
+
 	if (has_vhe())
 		return;
-
-	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_PMSVer_SHIFT) &&
-	    !(read_sysreg_s(SYS_PMBIDR_EL1) & PMBIDR_EL1_P))
-		host_data_set_flag(HAS_SPE);

 	/* Check if we have BRBE implemented and available at the host */
 	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_BRBE_SHIFT))
···
 void kvm_debug_init_vhe(void)
 {
 	/* Clear PMSCR_EL1.E{0,1}SPE which reset to UNKNOWN values. */
-	if (SYS_FIELD_GET(ID_AA64DFR0_EL1, PMSVer, read_sysreg(id_aa64dfr0_el1)))
+	if (host_data_test_flag(HAS_SPE))
 		write_sysreg_el1(0, SYS_PMSCR);
 }
-70
arch/arm64/kvm/guest.c
···
 	return copy_core_reg_indices(vcpu, NULL);
 }

-static const u64 timer_reg_list[] = {
-	KVM_REG_ARM_TIMER_CTL,
-	KVM_REG_ARM_TIMER_CNT,
-	KVM_REG_ARM_TIMER_CVAL,
-	KVM_REG_ARM_PTIMER_CTL,
-	KVM_REG_ARM_PTIMER_CNT,
-	KVM_REG_ARM_PTIMER_CVAL,
-};
-
-#define NUM_TIMER_REGS ARRAY_SIZE(timer_reg_list)
-
-static bool is_timer_reg(u64 index)
-{
-	switch (index) {
-	case KVM_REG_ARM_TIMER_CTL:
-	case KVM_REG_ARM_TIMER_CNT:
-	case KVM_REG_ARM_TIMER_CVAL:
-	case KVM_REG_ARM_PTIMER_CTL:
-	case KVM_REG_ARM_PTIMER_CNT:
-	case KVM_REG_ARM_PTIMER_CVAL:
-		return true;
-	}
-	return false;
-}
-
-static int copy_timer_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
-{
-	for (int i = 0; i < NUM_TIMER_REGS; i++) {
-		if (put_user(timer_reg_list[i], uindices))
-			return -EFAULT;
-		uindices++;
-	}
-
-	return 0;
-}
-
-static int set_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
-{
-	void __user *uaddr = (void __user *)(long)reg->addr;
-	u64 val;
-	int ret;
-
-	ret = copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id));
-	if (ret != 0)
-		return -EFAULT;
-
-	return kvm_arm_timer_set_reg(vcpu, reg->id, val);
-}
-
-static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
-{
-	void __user *uaddr = (void __user *)(long)reg->addr;
-	u64 val;
-
-	val = kvm_arm_timer_get_reg(vcpu, reg->id);
-	return copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)) ? -EFAULT : 0;
-}
-
 static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu)
 {
 	const unsigned int slices = vcpu_sve_slices(vcpu);
···
 	res += num_sve_regs(vcpu);
 	res += kvm_arm_num_sys_reg_descs(vcpu);
 	res += kvm_arm_get_fw_num_regs(vcpu);
-	res += NUM_TIMER_REGS;

 	return res;
 }
···
 		return ret;
 	uindices += kvm_arm_get_fw_num_regs(vcpu);

-	ret = copy_timer_indices(vcpu, uindices);
-	if (ret < 0)
-		return ret;
-	uindices += NUM_TIMER_REGS;
-
 	return kvm_arm_copy_sys_reg_indices(vcpu, uindices);
 }
···
 	case KVM_REG_ARM64_SVE:	return get_sve_reg(vcpu, reg);
 	}

-	if (is_timer_reg(reg->id))
-		return get_timer_reg(vcpu, reg);
-
 	return kvm_arm_sys_reg_get_reg(vcpu, reg);
 }
···
 		return kvm_arm_set_fw_reg(vcpu, reg);
 	case KVM_REG_ARM64_SVE:	return set_sve_reg(vcpu, reg);
 	}
-
-	if (is_timer_reg(reg->id))
-		return set_timer_reg(vcpu, reg);

 	return kvm_arm_sys_reg_set_reg(vcpu, reg);
 }
+6 -1
arch/arm64/kvm/handle_exit.c
···
 	if (esr & ESR_ELx_WFx_ISS_RV) {
 		u64 val, now;
 
-		now = kvm_arm_timer_get_reg(vcpu, KVM_REG_ARM_TIMER_CNT);
+		now = kvm_phys_timer_read();
+		if (is_hyp_ctxt(vcpu) && vcpu_el2_e2h_is_set(vcpu))
+			now -= timer_get_offset(vcpu_hvtimer(vcpu));
+		else
+			now -= timer_get_offset(vcpu_vtimer(vcpu));
+
 		val = vcpu_get_reg(vcpu, kvm_vcpu_sys_get_rt(vcpu));
 
 		if (now >= val)
+17 -131
arch/arm64/kvm/hyp/include/hyp/switch.h
···
 	__deactivate_cptr_traps_nvhe(vcpu);
 }
 
-#define reg_to_fgt_masks(reg) \
-	({ \
-		struct fgt_masks *m; \
-		switch(reg) { \
-		case HFGRTR_EL2: \
-			m = &hfgrtr_masks; \
-			break; \
-		case HFGWTR_EL2: \
-			m = &hfgwtr_masks; \
-			break; \
-		case HFGITR_EL2: \
-			m = &hfgitr_masks; \
-			break; \
-		case HDFGRTR_EL2: \
-			m = &hdfgrtr_masks; \
-			break; \
-		case HDFGWTR_EL2: \
-			m = &hdfgwtr_masks; \
-			break; \
-		case HAFGRTR_EL2: \
-			m = &hafgrtr_masks; \
-			break; \
-		case HFGRTR2_EL2: \
-			m = &hfgrtr2_masks; \
-			break; \
-		case HFGWTR2_EL2: \
-			m = &hfgwtr2_masks; \
-			break; \
-		case HFGITR2_EL2: \
-			m = &hfgitr2_masks; \
-			break; \
-		case HDFGRTR2_EL2: \
-			m = &hdfgrtr2_masks; \
-			break; \
-		case HDFGWTR2_EL2: \
-			m = &hdfgwtr2_masks; \
-			break; \
-		default: \
-			BUILD_BUG_ON(1); \
-		} \
- \
-		m; \
-	})
-
-#define compute_clr_set(vcpu, reg, clr, set) \
-	do { \
-		u64 hfg = __vcpu_sys_reg(vcpu, reg); \
-		struct fgt_masks *m = reg_to_fgt_masks(reg); \
-		set |= hfg & m->mask; \
-		clr |= ~hfg & m->nmask; \
-	} while(0)
-
-#define reg_to_fgt_group_id(reg) \
-	({ \
-		enum fgt_group_id id; \
-		switch(reg) { \
-		case HFGRTR_EL2: \
-		case HFGWTR_EL2: \
-			id = HFGRTR_GROUP; \
-			break; \
-		case HFGITR_EL2: \
-			id = HFGITR_GROUP; \
-			break; \
-		case HDFGRTR_EL2: \
-		case HDFGWTR_EL2: \
-			id = HDFGRTR_GROUP; \
-			break; \
-		case HAFGRTR_EL2: \
-			id = HAFGRTR_GROUP; \
-			break; \
-		case HFGRTR2_EL2: \
-		case HFGWTR2_EL2: \
-			id = HFGRTR2_GROUP; \
-			break; \
-		case HFGITR2_EL2: \
-			id = HFGITR2_GROUP; \
-			break; \
-		case HDFGRTR2_EL2: \
-		case HDFGWTR2_EL2: \
-			id = HDFGRTR2_GROUP; \
-			break; \
-		default: \
-			BUILD_BUG_ON(1); \
-		} \
- \
-		id; \
-	})
-
-#define compute_undef_clr_set(vcpu, kvm, reg, clr, set) \
-	do { \
-		u64 hfg = kvm->arch.fgu[reg_to_fgt_group_id(reg)]; \
-		struct fgt_masks *m = reg_to_fgt_masks(reg); \
-		set |= hfg & m->mask; \
-		clr |= hfg & m->nmask; \
-	} while(0)
-
-#define update_fgt_traps_cs(hctxt, vcpu, kvm, reg, clr, set) \
-	do { \
-		struct fgt_masks *m = reg_to_fgt_masks(reg); \
-		u64 c = clr, s = set; \
-		u64 val; \
- \
-		ctxt_sys_reg(hctxt, reg) = read_sysreg_s(SYS_ ## reg); \
-		if (is_nested_ctxt(vcpu)) \
-			compute_clr_set(vcpu, reg, c, s); \
- \
-		compute_undef_clr_set(vcpu, kvm, reg, c, s); \
- \
-		val = m->nmask; \
-		val |= s; \
-		val &= ~c; \
-		write_sysreg_s(val, SYS_ ## reg); \
-	} while(0)
-
-#define update_fgt_traps(hctxt, vcpu, kvm, reg) \
-	update_fgt_traps_cs(hctxt, vcpu, kvm, reg, 0, 0)
-
 static inline bool cpu_has_amu(void)
 {
 	u64 pfr0 = read_sysreg_s(SYS_ID_AA64PFR0_EL1);
···
 			ID_AA64PFR0_EL1_AMU_SHIFT);
 }
 
+#define __activate_fgt(hctxt, vcpu, reg) \
+	do { \
+		ctxt_sys_reg(hctxt, reg) = read_sysreg_s(SYS_ ## reg); \
+		write_sysreg_s(*vcpu_fgt(vcpu, reg), SYS_ ## reg); \
+	} while (0)
+
 static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt);
-	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
 
 	if (!cpus_have_final_cap(ARM64_HAS_FGT))
 		return;
 
-	update_fgt_traps(hctxt, vcpu, kvm, HFGRTR_EL2);
-	update_fgt_traps_cs(hctxt, vcpu, kvm, HFGWTR_EL2, 0,
-			    cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38) ?
-			    HFGWTR_EL2_TCR_EL1_MASK : 0);
-	update_fgt_traps(hctxt, vcpu, kvm, HFGITR_EL2);
-	update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR_EL2);
-	update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR_EL2);
+	__activate_fgt(hctxt, vcpu, HFGRTR_EL2);
+	__activate_fgt(hctxt, vcpu, HFGWTR_EL2);
+	__activate_fgt(hctxt, vcpu, HFGITR_EL2);
+	__activate_fgt(hctxt, vcpu, HDFGRTR_EL2);
+	__activate_fgt(hctxt, vcpu, HDFGWTR_EL2);
 
 	if (cpu_has_amu())
-		update_fgt_traps(hctxt, vcpu, kvm, HAFGRTR_EL2);
+		__activate_fgt(hctxt, vcpu, HAFGRTR_EL2);
 
 	if (!cpus_have_final_cap(ARM64_HAS_FGT2))
 		return;
 
-	update_fgt_traps(hctxt, vcpu, kvm, HFGRTR2_EL2);
-	update_fgt_traps(hctxt, vcpu, kvm, HFGWTR2_EL2);
-	update_fgt_traps(hctxt, vcpu, kvm, HFGITR2_EL2);
-	update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR2_EL2);
-	update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR2_EL2);
+	__activate_fgt(hctxt, vcpu, HFGRTR2_EL2);
+	__activate_fgt(hctxt, vcpu, HFGWTR2_EL2);
+	__activate_fgt(hctxt, vcpu, HFGITR2_EL2);
+	__activate_fgt(hctxt, vcpu, HDFGRTR2_EL2);
+	__activate_fgt(hctxt, vcpu, HDFGWTR2_EL2);
 }
 
 #define __deactivate_fgt(htcxt, vcpu, reg) \
+1
arch/arm64/kvm/hyp/nvhe/pkvm.c
···
 
 		/* Trust the host for non-protected vcpu features. */
 		vcpu->arch.hcrx_el2 = host_vcpu->arch.hcrx_el2;
+		memcpy(vcpu->arch.fgt, host_vcpu->arch.fgt, sizeof(vcpu->arch.fgt));
 		return 0;
 	}
+6 -3
arch/arm64/kvm/nested.c
···
 {
 	u64 guest_mdcr = __vcpu_sys_reg(vcpu, MDCR_EL2);
 
+	if (is_nested_ctxt(vcpu))
+		vcpu->arch.mdcr_el2 |= (guest_mdcr & NV_MDCR_GUEST_INCLUDE);
 	/*
 	 * In yet another example where FEAT_NV2 is fscking broken, accesses
 	 * to MDSCR_EL1 are redirected to the VNCR despite having an effect
 	 * at EL2. Use a big hammer to apply sanity.
+	 *
+	 * Unless of course we have FEAT_FGT, in which case we can precisely
+	 * trap MDSCR_EL1.
 	 */
-	if (is_hyp_ctxt(vcpu))
+	else if (!cpus_have_final_cap(ARM64_HAS_FGT))
 		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
-	else
-		vcpu->arch.mdcr_el2 |= (guest_mdcr & NV_MDCR_GUEST_INCLUDE);
 }
···
 {
 	struct vgic_v3_cpu_if *vgic_v3 = &vcpu->arch.vgic_cpu.vgic_v3;
 
+	if (!vgic_is_v3(vcpu->kvm))
+		return;
+
 	/* Hide GICv3 sysreg if necessary */
-	if (!kvm_has_gicv3(vcpu->kvm)) {
+	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V2) {
 		vgic_v3->vgic_hcr |= (ICH_HCR_EL2_TALL0 | ICH_HCR_EL2_TALL1 |
 				      ICH_HCR_EL2_TC);
 		return;
+3 -4
arch/hexagon/configs/comet_defconfig
···
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT2_FS_POSIX_ACL=y
 CONFIG_EXT2_FS_SECURITY=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
-CONFIG_EXT3_FS_POSIX_ACL=y
-CONFIG_EXT3_FS_SECURITY=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_POSIX_ACL=y
+CONFIG_EXT4_FS_SECURITY=y
 CONFIG_QUOTA=y
 CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
+3 -3
arch/m68k/configs/stmark2_defconfig
···
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT2_FS_POSIX_ACL=y
 CONFIG_EXT2_FS_SECURITY=y
-CONFIG_EXT3_FS=y
-CONFIG_EXT3_FS_POSIX_ACL=y
-CONFIG_EXT3_FS_SECURITY=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_POSIX_ACL=y
+CONFIG_EXT4_FS_SECURITY=y
 # CONFIG_FILE_LOCKING is not set
 # CONFIG_DNOTIFY is not set
 # CONFIG_INOTIFY_USER is not set
+1 -1
arch/microblaze/configs/mmu_defconfig
···
 CONFIG_UIO=y
 CONFIG_UIO_PDRV_GENIRQ=y
 CONFIG_UIO_DMEM_GENIRQ=y
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 # CONFIG_DNOTIFY is not set
 CONFIG_TMPFS=y
 CONFIG_CRAMFS=y
···
 CONFIG_INDYDOG=y
 # CONFIG_VGA_CONSOLE is not set
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-CONFIG_EXT3_FS_POSIX_ACL=y
-CONFIG_EXT3_FS_SECURITY=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_POSIX_ACL=y
+CONFIG_EXT4_FS_SECURITY=y
 CONFIG_QUOTA=y
 CONFIG_PROC_KCORE=y
 # CONFIG_PROC_PAGE_MONITOR is not set
···
 CONFIG_FRAMEBUFFER_CONSOLE=y
 # CONFIG_HWMON is not set
 CONFIG_EXT2_FS=m
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 CONFIG_XFS_FS=m
 CONFIG_XFS_QUOTA=y
 CONFIG_AUTOFS_FS=m
···
 # CONFIG_IOMMU_SUPPORT is not set
 CONFIG_LITEX_SOC_CONTROLLER=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_EXFAT_FS=y
+2 -2
arch/openrisc/configs/virt_defconfig
···
 CONFIG_VIRTIO_INPUT=y
 CONFIG_VIRTIO_MMIO=y
 CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
-CONFIG_EXT3_FS=y
-CONFIG_EXT3_FS_POSIX_ACL=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_POSIX_ACL=y
 # CONFIG_DNOTIFY is not set
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
···
 #ifndef __ASM_KGDB_H_
 #define __ASM_KGDB_H_
 
+#include <linux/build_bug.h>
+
 #ifdef __KERNEL__
 
 #define GDB_SIZEOF_REG sizeof(unsigned long)
 
-#define DBG_MAX_REG_NUM (36)
-#define NUMREGBYTES ((DBG_MAX_REG_NUM) * GDB_SIZEOF_REG)
+#define DBG_MAX_REG_NUM 36
+#define NUMREGBYTES (DBG_MAX_REG_NUM * GDB_SIZEOF_REG)
 #define CACHE_FLUSH_IS_SAFE 1
 #define BUFMAX 2048
+static_assert(BUFMAX > NUMREGBYTES,
+	      "As per KGDB documentation, BUFMAX must be larger than NUMREGBYTES");
 #ifdef CONFIG_RISCV_ISA_C
 #define BREAK_INSTR_SIZE 2
 #else
···
 #define DBG_REG_STATUS_OFF 33
 #define DBG_REG_BADADDR_OFF 34
 #define DBG_REG_CAUSE_OFF 35
+/* NOTE: increase DBG_MAX_REG_NUM if you add more values here. */
 
 extern const char riscv_gdb_stub_feature[64];
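The `static_assert` added above turns a size invariant into a build failure instead of a runtime surprise. A minimal userspace sketch of the same idea (the `PKT_BUF_MAX`/`REG_BYTES` names are mine, standing in for `BUFMAX`/`NUMREGBYTES`):

```c
#include <assert.h>  /* C11 static_assert */

/* Hypothetical packet-buffer and register-dump sizes. */
#define MAX_REG_NUM 36
#define REG_BYTES (MAX_REG_NUM * sizeof(unsigned long))
#define PKT_BUF_MAX 2048

/* Fails the build, not the boot, if the buffer can't hold all registers. */
static_assert(PKT_BUF_MAX > REG_BYTES,
	      "packet buffer must be larger than the register dump");
```

Because both operands are integer constant expressions, the compiler evaluates the comparison at translation time and the check costs nothing at runtime.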
+1
arch/riscv/kernel/cpu-hotplug.c
···
 
 	pr_notice("CPU%u: off\n", cpu);
 
+	clear_tasks_mm_cpumask(cpu);
 	/* Verify from the firmware if the cpu is really stopped*/
 	if (cpu_ops->cpu_is_stopped)
 		ret = cpu_ops->cpu_is_stopped(cpu);
···
 	post_kprobe_handler(p, kcb, regs);
 }
 
-static bool __kprobes arch_check_kprobe(struct kprobe *p)
+static bool __kprobes arch_check_kprobe(unsigned long addr)
 {
-	unsigned long tmp  = (unsigned long)p->addr - p->offset;
-	unsigned long addr = (unsigned long)p->addr;
+	unsigned long tmp, offset;
+
+	/* start iterating at the closest preceding symbol */
+	if (!kallsyms_lookup_size_offset(addr, NULL, &offset))
+		return false;
+
+	tmp = addr - offset;
 
 	while (tmp <= addr) {
 		if (tmp == addr)
···
 	if ((unsigned long)insn & 0x1)
 		return -EILSEQ;
 
-	if (!arch_check_kprobe(p))
+	if (!arch_check_kprobe((unsigned long)p->addr))
 		return -EILSEQ;
 
 	/* copy instruction */
+5 -2
arch/riscv/kernel/setup.c
···
 	/* Parse the ACPI tables for possible boot-time configuration */
 	acpi_boot_table_init();
 
+	if (acpi_disabled) {
 #if IS_ENABLED(CONFIG_BUILTIN_DTB)
-	unflatten_and_copy_device_tree();
+		unflatten_and_copy_device_tree();
 #else
-	unflatten_device_tree();
+		unflatten_device_tree();
 #endif
+	}
+
 	misc_mem_init();
 
 	init_resources();
+2 -2
arch/riscv/kernel/tests/kprobes/test-kprobes.h
···
 #define KPROBE_TEST_MAGIC_LOWER 0x0000babe
 #define KPROBE_TEST_MAGIC_UPPER 0xcafe0000
 
-#ifndef __ASSEMBLY__
+#ifndef __ASSEMBLER__
 
 /* array of addresses to install kprobes */
 extern void *test_kprobes_addresses[];
···
 /* array of functions that return KPROBE_TEST_MAGIC */
 extern long (*test_kprobes_functions[])(void);
 
-#endif /* __ASSEMBLY__ */
+#endif /* __ASSEMBLER__ */
 
 #endif /* TEST_KPROBES_H */
+3 -4
arch/sh/configs/ap325rxa_defconfig
···
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT2_FS_POSIX_ACL=y
 CONFIG_EXT2_FS_SECURITY=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
-CONFIG_EXT3_FS_POSIX_ACL=y
-CONFIG_EXT3_FS_SECURITY=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_POSIX_ACL=y
+CONFIG_EXT4_FS_SECURITY=y
 CONFIG_VFAT_FS=y
 CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
+1 -2
arch/sh/configs/apsh4a3a_defconfig
···
 CONFIG_LOGO=y
 # CONFIG_USB_SUPPORT is not set
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_NTFS_FS=y
+1 -2
arch/sh/configs/apsh4ad0a_defconfig
···
 CONFIG_USB_OHCI_HCD=y
 CONFIG_USB_STORAGE=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_NTFS_FS=y
+3 -4
arch/sh/configs/ecovec24_defconfig
···
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT2_FS_POSIX_ACL=y
 CONFIG_EXT2_FS_SECURITY=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
-CONFIG_EXT3_FS_POSIX_ACL=y
-CONFIG_EXT3_FS_SECURITY=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_POSIX_ACL=y
+CONFIG_EXT4_FS_SECURITY=y
 CONFIG_VFAT_FS=y
 CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
+1 -2
arch/sh/configs/edosk7760_defconfig
···
 CONFIG_EXT2_FS=y
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT2_FS_XIP=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_TMPFS=y
 CONFIG_TMPFS_POSIX_ACL=y
 CONFIG_NFS_FS=y
+1 -2
arch/sh/configs/espt_defconfig
···
 CONFIG_USB_OHCI_HCD=y
 CONFIG_USB_STORAGE=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_AUTOFS_FS=y
 CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
+1 -2
arch/sh/configs/landisk_defconfig
···
 CONFIG_USB_EMI26=m
 CONFIG_USB_SISUSBVGA=m
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_ISO9660_FS=m
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
+1 -2
arch/sh/configs/lboxre2_defconfig
···
 CONFIG_HW_RANDOM=y
 CONFIG_RTC_CLASS=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
+2 -3
arch/sh/configs/magicpanelr2_defconfig
···
 # CONFIG_RTC_HCTOSYS is not set
 CONFIG_RTC_DRV_SH=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
-# CONFIG_EXT3_FS_XATTR is not set
+CONFIG_EXT4_FS=y
+# CONFIG_EXT4_FS_XATTR is not set
 # CONFIG_DNOTIFY is not set
 CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
+1 -2
arch/sh/configs/r7780mp_defconfig
···
 CONFIG_RTC_DRV_RS5C372=y
 CONFIG_RTC_DRV_SH=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_FUSE_FS=m
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
+1 -2
arch/sh/configs/r7785rp_defconfig
···
 CONFIG_RTC_DRV_RS5C372=y
 CONFIG_RTC_DRV_SH=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_FUSE_FS=m
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
+1 -2
arch/sh/configs/rsk7264_defconfig
···
 CONFIG_USB_STORAGE=y
 CONFIG_USB_STORAGE_DEBUG=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_NFS_FS=y
 CONFIG_NFS_V3=y
+1 -2
arch/sh/configs/rsk7269_defconfig
···
 CONFIG_USB_STORAGE=y
 CONFIG_USB_STORAGE_DEBUG=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_NFS_FS=y
 CONFIG_NFS_V3=y
+2 -3
arch/sh/configs/sdk7780_defconfig
···
 CONFIG_EXT2_FS=y
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT2_FS_POSIX_ACL=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
-CONFIG_EXT3_FS_POSIX_ACL=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_POSIX_ACL=y
 CONFIG_AUTOFS_FS=y
 CONFIG_ISO9660_FS=y
 CONFIG_MSDOS_FS=y
+1 -2
arch/sh/configs/sdk7786_defconfig
···
 # CONFIG_STAGING_EXCLUDE_BUILD is not set
 CONFIG_EXT2_FS=y
 CONFIG_EXT2_FS_XATTR=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS=y
 CONFIG_XFS_FS=y
 CONFIG_BTRFS_FS=y
+1 -2
arch/sh/configs/se7343_defconfig
···
 CONFIG_USB_ISP116X_HCD=y
 CONFIG_UIO=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 # CONFIG_DNOTIFY is not set
 CONFIG_JFFS2_FS=y
 CONFIG_CRAMFS=y
+1 -2
arch/sh/configs/se7712_defconfig
···
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT2_FS_POSIX_ACL=y
 CONFIG_EXT2_FS_SECURITY=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 # CONFIG_DNOTIFY is not set
 CONFIG_JFFS2_FS=y
 CONFIG_CRAMFS=y
+1 -2
arch/sh/configs/se7721_defconfig
···
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT2_FS_POSIX_ACL=y
 CONFIG_EXT2_FS_SECURITY=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 # CONFIG_DNOTIFY is not set
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
+1 -2
arch/sh/configs/se7722_defconfig
···
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_SH=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
 CONFIG_HUGETLBFS=y
+3 -4
arch/sh/configs/se7724_defconfig
···
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT2_FS_POSIX_ACL=y
 CONFIG_EXT2_FS_SECURITY=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
-CONFIG_EXT3_FS_POSIX_ACL=y
-CONFIG_EXT3_FS_SECURITY=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_POSIX_ACL=y
+CONFIG_EXT4_FS_SECURITY=y
 CONFIG_VFAT_FS=y
 CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
+2 -3
arch/sh/configs/sh03_defconfig
···
 CONFIG_SH_WDT=m
 CONFIG_EXT2_FS=y
 CONFIG_EXT2_FS_XATTR=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
-CONFIG_EXT3_FS_POSIX_ACL=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_POSIX_ACL=y
 CONFIG_AUTOFS_FS=y
 CONFIG_ISO9660_FS=m
 CONFIG_JOLIET=y
···
 CONFIG_USB_STORAGE=y
 CONFIG_MMC=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_AUTOFS_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
+1 -2
arch/sh/configs/sh7785lcr_32bit_defconfig
···
 CONFIG_DMADEVICES=y
 CONFIG_UIO=m
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_NTFS_FS=y
+1 -2
arch/sh/configs/sh7785lcr_defconfig
···
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_RS5C372=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_NTFS_FS=y
+1 -2
arch/sh/configs/shx3_defconfig
···
 CONFIG_RTC_DRV_SH=y
 CONFIG_UIO=m
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
 CONFIG_HUGETLBFS=y
+2 -3
arch/sh/configs/titan_defconfig
···
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_SH=m
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
-# CONFIG_EXT3_FS_XATTR is not set
+CONFIG_EXT4_FS=y
+# CONFIG_EXT4_FS_XATTR is not set
 CONFIG_XFS_FS=m
 CONFIG_FUSE_FS=m
 CONFIG_ISO9660_FS=m
+1 -2
arch/sh/configs/ul2_defconfig
···
 CONFIG_USB_STORAGE=y
 CONFIG_MMC=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
+1 -2
arch/sh/configs/urquell_defconfig
···
 CONFIG_RTC_DRV_SH=y
 CONFIG_RTC_DRV_GENERIC=y
 CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS=y
 CONFIG_BTRFS_FS=y
 CONFIG_MSDOS_FS=y
+3 -4
arch/sparc/configs/sparc64_defconfig
···
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT2_FS_POSIX_ACL=y
 CONFIG_EXT2_FS_SECURITY=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
-CONFIG_EXT3_FS_POSIX_ACL=y
-CONFIG_EXT3_FS_SECURITY=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_POSIX_ACL=y
+CONFIG_EXT4_FS_SECURITY=y
 CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
 CONFIG_HUGETLBFS=y
+5 -3
arch/x86/kvm/pmu.c
···
 	bool is_intel = boot_cpu_data.x86_vendor == X86_VENDOR_INTEL;
 	int min_nr_gp_ctrs = pmu_ops->MIN_NR_GP_COUNTERS;
 
-	perf_get_x86_pmu_capability(&kvm_host_pmu);
-
 	/*
 	 * Hybrid PMUs don't play nice with virtualization without careful
 	 * configuration by userspace, and KVM's APIs for reporting supported
 	 * vPMU features do not account for hybrid PMUs. Disable vPMU support
 	 * for hybrid PMUs until KVM gains a way to let userspace opt-in.
 	 */
-	if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU))
+	if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU)) {
 		enable_pmu = false;
+		memset(&kvm_host_pmu, 0, sizeof(kvm_host_pmu));
+	} else {
+		perf_get_x86_pmu_capability(&kvm_host_pmu);
+	}
 
 	if (enable_pmu) {
 		/*
+4 -3
arch/x86/kvm/x86.c
···
 
 #ifdef CONFIG_KVM_GUEST_MEMFD
 /*
- * KVM doesn't yet support mmap() on guest_memfd for VMs with private memory
- * (the private vs. shared tracking needs to be moved into guest_memfd).
+ * KVM doesn't yet support initializing guest_memfd memory as shared for VMs
+ * with private memory (the private vs. shared tracking needs to be moved into
+ * guest_memfd).
  */
-bool kvm_arch_supports_gmem_mmap(struct kvm *kvm)
+bool kvm_arch_supports_gmem_init_shared(struct kvm *kvm)
 {
 	return !kvm_arch_has_private_mem(kvm);
 }
+1 -1
arch/xtensa/configs/audio_kc705_defconfig
···
 # CONFIG_USB_SUPPORT is not set
 CONFIG_COMMON_CLK_CDCE706=y
 # CONFIG_IOMMU_SUPPORT is not set
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS=y
 CONFIG_FANOTIFY=y
 CONFIG_VFAT_FS=y
+1 -1
arch/xtensa/configs/cadence_csp_defconfig
···
 # CONFIG_VGA_CONSOLE is not set
 # CONFIG_USB_SUPPORT is not set
 # CONFIG_IOMMU_SUPPORT is not set
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 CONFIG_FANOTIFY=y
 CONFIG_VFAT_FS=y
 CONFIG_PROC_KCORE=y
+1 -1
arch/xtensa/configs/generic_kc705_defconfig
···
 # CONFIG_VGA_CONSOLE is not set
 # CONFIG_USB_SUPPORT is not set
 # CONFIG_IOMMU_SUPPORT is not set
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS=y
 CONFIG_FANOTIFY=y
 CONFIG_VFAT_FS=y
+1 -1
arch/xtensa/configs/nommu_kc705_defconfig
···
 CONFIG_SOFT_WATCHDOG=y
 # CONFIG_VGA_CONSOLE is not set
 # CONFIG_USB_SUPPORT is not set
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS=y
 CONFIG_FANOTIFY=y
 CONFIG_VFAT_FS=y
+1 -1
arch/xtensa/configs/smp_lx200_defconfig
···
 # CONFIG_VGA_CONSOLE is not set
 # CONFIG_USB_SUPPORT is not set
 # CONFIG_IOMMU_SUPPORT is not set
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS=y
 CONFIG_FANOTIFY=y
 CONFIG_VFAT_FS=y
+1 -1
arch/xtensa/configs/virt_defconfig
···
 CONFIG_VIRTIO_PCI=y
 CONFIG_VIRTIO_INPUT=y
 # CONFIG_IOMMU_SUPPORT is not set
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 CONFIG_FANOTIFY=y
 CONFIG_VFAT_FS=y
 CONFIG_PROC_KCORE=y
+1 -1
arch/xtensa/configs/xip_kc705_defconfig
···
 # CONFIG_VGA_CONSOLE is not set
 # CONFIG_USB_SUPPORT is not set
 # CONFIG_IOMMU_SUPPORT is not set
-CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
 CONFIG_FANOTIFY=y
 CONFIG_VFAT_FS=y
 CONFIG_PROC_KCORE=y
+4 -9
block/blk-cgroup.c
···
 }
 /*
  * Similar to blkg_conf_open_bdev, but additionally freezes the queue,
- * acquires q->elevator_lock, and ensures the correct locking order
- * between q->elevator_lock and q->rq_qos_mutex.
+ * ensures the correct locking order between freeze queue and q->rq_qos_mutex.
  *
  * This function returns negative error on failure. On success it returns
  * memflags which must be saved and later passed to blkg_conf_exit_frozen
···
 	 * At this point, we haven't started protecting anything related to QoS,
 	 * so we release q->rq_qos_mutex here, which was first acquired in blkg_
 	 * conf_open_bdev. Later, we re-acquire q->rq_qos_mutex after freezing
-	 * the queue and acquiring q->elevator_lock to maintain the correct
-	 * locking order.
+	 * the queue to maintain the correct locking order.
 	 */
 	mutex_unlock(&ctx->bdev->bd_queue->rq_qos_mutex);
 
 	memflags = blk_mq_freeze_queue(ctx->bdev->bd_queue);
-	mutex_lock(&ctx->bdev->bd_queue->elevator_lock);
 	mutex_lock(&ctx->bdev->bd_queue->rq_qos_mutex);
 
 	return memflags;
···
 EXPORT_SYMBOL_GPL(blkg_conf_exit);
 
 /*
- * Similar to blkg_conf_exit, but also unfreezes the queue and releases
- * q->elevator_lock. Should be used when blkg_conf_open_bdev_frozen
- * is used to open the bdev.
+ * Similar to blkg_conf_exit, but also unfreezes the queue. Should be used
+ * when blkg_conf_open_bdev_frozen is used to open the bdev.
  */
 void blkg_conf_exit_frozen(struct blkg_conf_ctx *ctx, unsigned long memflags)
 {
···
 		struct request_queue *q = ctx->bdev->bd_queue;
 
 		blkg_conf_exit(ctx);
-		mutex_unlock(&q->elevator_lock);
 		blk_mq_unfreeze_queue(q, memflags);
 	}
 }
+1 -1
block/blk-mq-sched.c
···
 	if (blk_mq_is_shared_tags(flags)) {
 		/* Shared tags are stored at index 0 in @et->tags. */
 		q->sched_shared_tags = et->tags[0];
-		blk_mq_tag_update_sched_shared_tags(q);
+		blk_mq_tag_update_sched_shared_tags(q, et->nr_requests);
 	}
 
 	queue_for_each_hw_ctx(q, hctx, i) {
···
 		 * tags can't grow, see blk_mq_alloc_sched_tags().
 		 */
 		if (q->elevator)
-			blk_mq_tag_update_sched_shared_tags(q);
+			blk_mq_tag_update_sched_shared_tags(q, nr);
 		else
 			blk_mq_tag_resize_shared_tags(set, nr);
 	} else if (!q->elevator) {
+2 -1
block/blk-mq.h
···
 void blk_mq_put_tags(struct blk_mq_tags *tags, int *tag_array, int nr_tags);
 void blk_mq_tag_resize_shared_tags(struct blk_mq_tag_set *set,
 		unsigned int size);
-void blk_mq_tag_update_sched_shared_tags(struct request_queue *q);
+void blk_mq_tag_update_sched_shared_tags(struct request_queue *q,
+		unsigned int nr);
 
 void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool);
 void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_tag_iter_fn *fn,
+2
drivers/accel/qaic/qaic.h
···
 	 * response queue's head and tail pointer of this DBC.
 	 */
 	void __iomem *dbc_base;
+	/* Synchronizes access to Request queue's head and tail pointer */
+	struct mutex req_lock;
 	/* Head of list where each node is a memory handle queued in request queue */
 	struct list_head xfer_list;
 	/* Synchronizes DBC readers during cleanup */
···
 		goto release_ch_rcu;
 	}
 
+	ret = mutex_lock_interruptible(&dbc->req_lock);
+	if (ret)
+		goto release_ch_rcu;
+
 	head = readl(dbc->dbc_base + REQHP_OFF);
 	tail = readl(dbc->dbc_base + REQTP_OFF);
 
 	if (head == U32_MAX || tail == U32_MAX) {
 		/* PCI link error */
 		ret = -ENODEV;
-		goto release_ch_rcu;
+		goto unlock_req_lock;
 	}
 
 	queue_level = head <= tail ? tail - head : dbc->nelem - (head - tail);
···
 	ret = send_bo_list_to_device(qdev, file_priv, exec, args->hdr.count, is_partial, dbc,
 				     head, &tail);
 	if (ret)
-		goto release_ch_rcu;
+		goto unlock_req_lock;
 
 	/* Finalize commit to hardware */
 	submit_ts = ktime_get_ns();
 	writel(tail, dbc->dbc_base + REQTP_OFF);
+	mutex_unlock(&dbc->req_lock);
 
 	update_profiling_data(file_priv, exec, args->hdr.count, is_partial, received_ts,
 			      submit_ts, queue_level);
···
 	if (datapath_polling)
 		schedule_work(&dbc->poll_work);
 
+unlock_req_lock:
+	if (ret)
+		mutex_unlock(&dbc->req_lock);
 release_ch_rcu:
 	srcu_read_unlock(&dbc->ch_lock, rcu_id);
 unlock_dev_srcu:
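The `req_lock` added above closes a race: reading the ring's head/tail, computing free space, and publishing the new tail must be one atomic unit, or two submitters can claim the same slots. A userspace sketch of that pattern under a hypothetical `struct ring` (all names here are mine, not the driver's):

```c
#include <pthread.h>
#include <stdint.h>

/* Hypothetical request ring guarded by one lock, in the style of req_lock. */
struct ring {
	pthread_mutex_t lock;
	uint32_t head, tail, nelem;
};

/* Reserve n slots and advance tail under the lock; 0 on success, -1 if full. */
static int ring_submit(struct ring *r, uint32_t n)
{
	int ret = -1;

	pthread_mutex_lock(&r->lock);
	uint32_t used = (r->tail >= r->head) ? r->tail - r->head
					     : r->nelem - (r->head - r->tail);
	if (r->nelem - used > n) {		/* keep one slot empty */
		r->tail = (r->tail + n) % r->nelem;	/* "write REQTP" */
		ret = 0;
	}
	pthread_mutex_unlock(&r->lock);
	return ret;
}
```

Holding the lock from the head/tail reads through the tail write is what makes the fullness check and the reservation indivisible.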
+3 -2
drivers/accel/qaic/qaic_debugfs.c
···
 	if (ret)
 		goto destroy_workqueue;
 
+	dev_set_drvdata(&mhi_dev->dev, qdev);
+	qdev->bootlog_ch = mhi_dev;
+
 	for (i = 0; i < BOOTLOG_POOL_SIZE; i++) {
 		msg = devm_kzalloc(&qdev->pdev->dev, sizeof(*msg), GFP_KERNEL);
 		if (!msg) {
···
 		goto mhi_unprepare;
 	}
 
-	dev_set_drvdata(&mhi_dev->dev, qdev);
-	qdev->bootlog_ch = mhi_dev;
 	return 0;
 
 mhi_unprepare:
+3
drivers/accel/qaic/qaic_drv.c
···
 			return NULL;
 		init_waitqueue_head(&qdev->dbc[i].dbc_release);
 		INIT_LIST_HEAD(&qdev->dbc[i].bo_lists);
+		ret = drmm_mutex_init(drm, &qdev->dbc[i].req_lock);
+		if (ret)
+			return NULL;
 	}
 
 	return qdev;
+4 -7
drivers/ata/libata-core.c
···
 	}
 
 	version = get_unaligned_le16(&dev->gp_log_dir[0]);
-	if (version != 0x0001) {
-		ata_dev_err(dev, "Invalid log directory version 0x%04x\n",
-			    version);
-		ata_clear_log_directory(dev);
-		dev->quirks |= ATA_QUIRK_NO_LOG_DIR;
-		return -EINVAL;
-	}
+	if (version != 0x0001)
+		ata_dev_warn_once(dev,
+				  "Invalid log directory version 0x%04x\n",
+				  version);
 
 	return 0;
 }
+4 -1
drivers/char/ipmi/ipmi_msghandler.c
···
 	if (supplied_recv) {
 		recv_msg = supplied_recv;
 		recv_msg->user = user;
-		if (user)
+		if (user) {
 			atomic_inc(&user->nr_msgs);
+			/* The put happens when the message is freed. */
+			kref_get(&user->refcount);
+		}
 	} else {
 		recv_msg = ipmi_alloc_recv_msg(user);
 		if (IS_ERR(recv_msg))
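The `kref_get()` added above follows the standard lifetime rule: every object that stores a pointer to a refcounted structure takes its own reference, and drops it only when the pointer is destroyed. A simplified single-threaded sketch of that pairing (counter and names are mine; the kernel's `kref` uses atomics and a release callback):

```c
#include <stddef.h>

/* Toy refcounted "user": freed when the count drops to zero. */
struct user { int refcount; int freed; };

static void user_get(struct user *u) { u->refcount++; }

static void user_put(struct user *u)
{
	if (--u->refcount == 0)
		u->freed = 1;	/* the kref release() would run here */
}

/* A message pins its owning user for its whole lifetime. */
struct msg { struct user *user; };

static void msg_init(struct msg *m, struct user *u)
{
	m->user = u;
	user_get(u);		/* the put happens when the msg is freed */
}

static void msg_free(struct msg *m)
{
	user_put(m->user);
	m->user = NULL;
}
```

Without the get in `msg_init()`, the user could be freed while a message still holds a dangling pointer to it, which is exactly the use-after-free class the driver fix avoids.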
+20 -9
drivers/char/tpm/tpm_crb.c
···
 {
 	return !(start_method == ACPI_TPM2_START_METHOD ||
 		 start_method == ACPI_TPM2_COMMAND_BUFFER_WITH_START_METHOD ||
-		 start_method == ACPI_TPM2_COMMAND_BUFFER_WITH_ARM_SMC ||
-		 start_method == ACPI_TPM2_CRB_WITH_ARM_FFA);
+		 start_method == ACPI_TPM2_COMMAND_BUFFER_WITH_ARM_SMC);
 }
 
 static bool crb_wait_for_reg_32(u32 __iomem *reg, u32 mask, u32 value,
···
  *
  * Return: 0 always
  */
-static int __crb_go_idle(struct device *dev, struct crb_priv *priv)
+static int __crb_go_idle(struct device *dev, struct crb_priv *priv, int loc)
 {
 	int rc;
 
···
 		return 0;
 
 	iowrite32(CRB_CTRL_REQ_GO_IDLE, &priv->regs_t->ctrl_req);
+
+	if (priv->sm == ACPI_TPM2_CRB_WITH_ARM_FFA) {
+		rc = tpm_crb_ffa_start(CRB_FFA_START_TYPE_COMMAND, loc);
+		if (rc)
+			return rc;
+	}
 
 	rc = crb_try_pluton_doorbell(priv, true);
 	if (rc)
···
 	struct device *dev = &chip->dev;
 	struct crb_priv *priv = dev_get_drvdata(dev);
 
-	return __crb_go_idle(dev, priv);
+	return __crb_go_idle(dev, priv, chip->locality);
 }
 
 /**
···
  *
  * Return: 0 on success -ETIME on timeout;
  */
-static int __crb_cmd_ready(struct device *dev, struct crb_priv *priv)
+static int __crb_cmd_ready(struct device *dev, struct crb_priv *priv, int loc)
 {
 	int rc;
 
···
 		return 0;
 
 	iowrite32(CRB_CTRL_REQ_CMD_READY, &priv->regs_t->ctrl_req);
+
+	if (priv->sm == ACPI_TPM2_CRB_WITH_ARM_FFA) {
+		rc = tpm_crb_ffa_start(CRB_FFA_START_TYPE_COMMAND, loc);
+		if (rc)
+			return rc;
+	}
 
 	rc = crb_try_pluton_doorbell(priv, true);
 	if (rc)
···
 	struct device *dev = &chip->dev;
 	struct crb_priv *priv = dev_get_drvdata(dev);
 
-	return __crb_cmd_ready(dev, priv);
+	return __crb_cmd_ready(dev, priv, chip->locality);
 }
 
 static int __crb_request_locality(struct device *dev,
···
 
 	/* Seems to be necessary for every command */
 	if (priv->sm == ACPI_TPM2_COMMAND_BUFFER_WITH_PLUTON)
-		__crb_cmd_ready(&chip->dev, priv);
+		__crb_cmd_ready(&chip->dev, priv, chip->locality);
 
 	memcpy_toio(priv->cmd, buf, len);
···
 	 * PTT HW bug w/a: wake up the device to access
 	 * possibly not retained registers.
 	 */
-	ret = __crb_cmd_ready(dev, priv);
+	ret = __crb_cmd_ready(dev, priv, 0);
 	if (ret)
 		goto out_relinquish_locality;
···
 	if (!ret)
 		priv->cmd_size = cmd_size;
 
-	__crb_go_idle(dev, priv);
+	__crb_go_idle(dev, priv, 0);
 
 out_relinquish_locality:
+1-1
drivers/cxl/acpi.c
···
     struct resource res;
     int nid, rc;
 
-    res = DEFINE_RES(start, size, 0);
+    res = DEFINE_RES_MEM(start, size);
     nid = phys_to_target_node(start);
 
     rc = hmat_get_extended_linear_cache_size(&res, nid, &cache_size);
+3
drivers/cxl/core/features.c
···
 {
     struct cxl_feat_entry *feat;
 
+    if (!cxlfs || !cxlfs->entries)
+        return ERR_PTR(-EOPNOTSUPP);
+
     for (int i = 0; i < cxlfs->entries->num_features; i++) {
         feat = &cxlfs->entries->ent[i];
         if (uuid_equal(uuid, &feat->uuid))
+14-12
drivers/cxl/core/port.c
···
     if (rc)
         return ERR_PTR(rc);
 
+    /*
+     * Setup port register if this is the first dport showed up. Having
+     * a dport also means that there is at least 1 active link.
+     */
+    if (port->nr_dports == 1 &&
+        port->component_reg_phys != CXL_RESOURCE_NONE) {
+        rc = cxl_port_setup_regs(port, port->component_reg_phys);
+        if (rc) {
+            xa_erase(&port->dports, (unsigned long)dport->dport_dev);
+            return ERR_PTR(rc);
+        }
+        port->component_reg_phys = CXL_RESOURCE_NONE;
+    }
+
     get_device(dport_dev);
     rc = devm_add_action_or_reset(host, cxl_dport_remove, dport);
     if (rc)
···
         dport->link_latency = cxl_pci_get_latency(to_pci_dev(dport_dev));
 
     cxl_debugfs_create_dport_dir(dport);
-
-    /*
-     * Setup port register if this is the first dport showed up. Having
-     * a dport also means that there is at least 1 active link.
-     */
-    if (port->nr_dports == 1 &&
-        port->component_reg_phys != CXL_RESOURCE_NONE) {
-        rc = cxl_port_setup_regs(port, port->component_reg_phys);
-        if (rc)
-            return ERR_PTR(rc);
-        port->component_reg_phys = CXL_RESOURCE_NONE;
-    }
 
     return dport;
 }
+4-7
drivers/cxl/core/region.c
···
 }
 
 static bool region_res_match_cxl_range(const struct cxl_region_params *p,
-                                       struct range *range)
+                                       const struct range *range)
 {
     if (!p->res)
         return false;
···
     p = &cxlr->params;
 
     guard(rwsem_read)(&cxl_rwsem.region);
-    if (p->res && p->res->start == r->start && p->res->end == r->end)
-        return 1;
-
-    return 0;
+    return region_res_match_cxl_range(p, r);
 }
 
 static int cxl_extended_linear_cache_resize(struct cxl_region *cxlr,
···
 
     if (offset < p->cache_size) {
         dev_err(&cxlr->dev,
-                "Offset %#llx is within extended linear cache %pr\n",
+                "Offset %#llx is within extended linear cache %pa\n",
                 offset, &p->cache_size);
         return -EINVAL;
     }
 
     region_size = resource_size(p->res);
     if (offset >= region_size) {
-        dev_err(&cxlr->dev, "Offset %#llx exceeds region size %pr\n",
+        dev_err(&cxlr->dev, "Offset %#llx exceeds region size %pa\n",
                 offset, &region_size);
         return -EINVAL;
     }
···
 #include <linux/of.h>
 #include <linux/of_device.h>
 #include <linux/bitfield.h>
+#include <linux/hw_bitfield.h>
 #include <linux/bits.h>
 #include <linux/perf_event.h>
···
 
 #define DMC_MAX_CHANNELS 4
 
-#define HIWORD_UPDATE(val, mask) ((val) | (mask) << 16)
-
 /* DDRMON_CTRL */
 #define DDRMON_CTRL 0x04
 #define DDRMON_CTRL_LPDDR5 BIT(6)
···
 #define DDRMON_CTRL_LPDDR23 BIT(2)
 #define DDRMON_CTRL_SOFTWARE_EN BIT(1)
 #define DDRMON_CTRL_TIMER_CNT_EN BIT(0)
-#define DDRMON_CTRL_DDR_TYPE_MASK (DDRMON_CTRL_LPDDR5 | \
-                                   DDRMON_CTRL_DDR4 | \
-                                   DDRMON_CTRL_LPDDR4 | \
-                                   DDRMON_CTRL_LPDDR23)
 #define DDRMON_CTRL_LP5_BANK_MODE_MASK GENMASK(8, 7)
 
 #define DDRMON_CH0_WR_NUM 0x20
···
     unsigned int count_multiplier; /* number of data clocks per count */
 };
 
-static int rockchip_dfi_ddrtype_to_ctrl(struct rockchip_dfi *dfi, u32 *ctrl,
-                                        u32 *mask)
+static int rockchip_dfi_ddrtype_to_ctrl(struct rockchip_dfi *dfi, u32 *ctrl)
 {
     u32 ddrmon_ver;
-
-    *mask = DDRMON_CTRL_DDR_TYPE_MASK;
 
     switch (dfi->ddr_type) {
     case ROCKCHIP_DDRTYPE_LPDDR2:
     case ROCKCHIP_DDRTYPE_LPDDR3:
-        *ctrl = DDRMON_CTRL_LPDDR23;
+        *ctrl = FIELD_PREP_WM16(DDRMON_CTRL_LPDDR23, 1) |
+                FIELD_PREP_WM16(DDRMON_CTRL_LPDDR4, 0) |
+                FIELD_PREP_WM16(DDRMON_CTRL_LPDDR5, 0);
         break;
     case ROCKCHIP_DDRTYPE_LPDDR4:
     case ROCKCHIP_DDRTYPE_LPDDR4X:
-        *ctrl = DDRMON_CTRL_LPDDR4;
+        *ctrl = FIELD_PREP_WM16(DDRMON_CTRL_LPDDR23, 0) |
+                FIELD_PREP_WM16(DDRMON_CTRL_LPDDR4, 1) |
+                FIELD_PREP_WM16(DDRMON_CTRL_LPDDR5, 0);
         break;
     case ROCKCHIP_DDRTYPE_LPDDR5:
         ddrmon_ver = readl_relaxed(dfi->regs);
         if (ddrmon_ver < 0x40) {
-            *ctrl = DDRMON_CTRL_LPDDR5 | dfi->lp5_bank_mode;
-            *mask |= DDRMON_CTRL_LP5_BANK_MODE_MASK;
+            *ctrl = FIELD_PREP_WM16(DDRMON_CTRL_LPDDR23, 0) |
+                    FIELD_PREP_WM16(DDRMON_CTRL_LPDDR4, 0) |
+                    FIELD_PREP_WM16(DDRMON_CTRL_LPDDR5, 1) |
+                    FIELD_PREP_WM16(DDRMON_CTRL_LP5_BANK_MODE_MASK,
+                                    dfi->lp5_bank_mode);
             break;
         }
···
     void __iomem *dfi_regs = dfi->regs;
     int i, ret = 0;
     u32 ctrl;
-    u32 ctrl_mask;
 
     mutex_lock(&dfi->mutex);
···
         goto out;
     }
 
-    ret = rockchip_dfi_ddrtype_to_ctrl(dfi, &ctrl, &ctrl_mask);
+    ret = rockchip_dfi_ddrtype_to_ctrl(dfi, &ctrl);
     if (ret)
         goto out;
···
             continue;
 
         /* clear DDRMON_CTRL setting */
-        writel_relaxed(HIWORD_UPDATE(0, DDRMON_CTRL_TIMER_CNT_EN |
-                       DDRMON_CTRL_SOFTWARE_EN | DDRMON_CTRL_HARDWARE_EN),
+        writel_relaxed(FIELD_PREP_WM16(DDRMON_CTRL_TIMER_CNT_EN, 0) |
+                       FIELD_PREP_WM16(DDRMON_CTRL_SOFTWARE_EN, 0) |
+                       FIELD_PREP_WM16(DDRMON_CTRL_HARDWARE_EN, 0),
                        dfi_regs + i * dfi->ddrmon_stride + DDRMON_CTRL);
 
-        writel_relaxed(HIWORD_UPDATE(ctrl, ctrl_mask),
-                       dfi_regs + i * dfi->ddrmon_stride + DDRMON_CTRL);
+        writel_relaxed(ctrl, dfi_regs + i * dfi->ddrmon_stride +
+                       DDRMON_CTRL);
 
         /* enable count, use software mode */
-        writel_relaxed(HIWORD_UPDATE(DDRMON_CTRL_SOFTWARE_EN, DDRMON_CTRL_SOFTWARE_EN),
+        writel_relaxed(FIELD_PREP_WM16(DDRMON_CTRL_SOFTWARE_EN, 1),
                        dfi_regs + i * dfi->ddrmon_stride + DDRMON_CTRL);
 
         if (dfi->ddrmon_ctrl_single)
···
         if (!(dfi->channel_mask & BIT(i)))
             continue;
 
-        writel_relaxed(HIWORD_UPDATE(0, DDRMON_CTRL_SOFTWARE_EN),
-                       dfi_regs + i * dfi->ddrmon_stride + DDRMON_CTRL);
+        writel_relaxed(FIELD_PREP_WM16(DDRMON_CTRL_SOFTWARE_EN, 0),
+                       dfi_regs + i * dfi->ddrmon_stride + DDRMON_CTRL);
 
         if (dfi->ddrmon_ctrl_single)
             break;
+21
drivers/dpll/zl3073x/core.c
···
 int zl3073x_dev_start(struct zl3073x_dev *zldev, bool full)
 {
     struct zl3073x_dpll *zldpll;
+    u8 info;
     int rc;
+
+    rc = zl3073x_read_u8(zldev, ZL_REG_INFO, &info);
+    if (rc) {
+        dev_err(zldev->dev, "Failed to read device status info\n");
+        return rc;
+    }
+
+    if (!FIELD_GET(ZL_INFO_READY, info)) {
+        /* The ready bit indicates that the firmware was successfully
+         * configured and is ready for normal operation. If it is
+         * cleared then the configuration stored in flash is wrong
+         * or missing. In this situation the driver will expose
+         * only devlink interface to give an opportunity to flash
+         * the correct config.
+         */
+        dev_info(zldev->dev,
+                 "FW not fully ready - missing or corrupted config\n");
+
+        return 0;
+    }
 
     if (full) {
         /* Fetch device state */
···
     if (p->uf_bo && ring->funcs->no_user_fence)
         return -EINVAL;
 
+    if (!p->adev->debug_enable_ce_cs &&
+        chunk_ib->flags & AMDGPU_IB_FLAG_CE) {
+        dev_err_ratelimited(p->adev->dev, "CE CS is blocked, use debug=0x400 to override\n");
+        return -EINVAL;
+    }
+
     if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX &&
         chunk_ib->flags & AMDGPU_IB_FLAG_PREEMPT) {
         if (chunk_ib->flags & AMDGPU_IB_FLAG_CE)
···
      */
     const s64 us_upper_bound = 200000;
 
-    if (!adev->mm_stats.log2_max_MBps) {
+    if ((!adev->mm_stats.log2_max_MBps) || !ttm_resource_manager_used(&adev->mman.vram_mgr.manager)) {
         *max_bytes = 0;
         *max_vis_bytes = 0;
         return;
+7
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
···
 
 static bool amdgpu_device_aspm_support_quirk(struct amdgpu_device *adev)
 {
+    /* Enabling ASPM causes randoms hangs on Tahiti and Oland on Zen4.
+     * It's unclear if this is a platform-specific or GPU-specific issue.
+     * Disable ASPM on SI for the time being.
+     */
+    if (adev->family == AMDGPU_FAMILY_SI)
+        return true;
+
 #if IS_ENABLED(CONFIG_X86)
     struct cpuinfo_x86 *c = &cpu_data(0);
 
···
         return -EINVAL;
 
     /* Clear the doorbell array before detection */
-    memset(adev->mes.hung_queue_db_array_cpu_addr, 0,
+    memset(adev->mes.hung_queue_db_array_cpu_addr, AMDGPU_MES_INVALID_DB_OFFSET,
            adev->mes.hung_queue_db_array_size * sizeof(u32));
     input.queue_type = queue_type;
     input.detect_only = detect_only;
···
         dev_err(adev->dev, "failed to detect and reset\n");
     } else {
         *hung_db_num = 0;
-        for (i = 0; i < adev->mes.hung_queue_db_array_size; i++) {
+        for (i = 0; i < adev->mes.hung_queue_hqd_info_offset; i++) {
             if (db_array[i] != AMDGPU_MES_INVALID_DB_OFFSET) {
                 hung_db_array[i] = db_array[i];
                 *hung_db_num += 1;
             }
         }
+
+        /*
+         * TODO: return HQD info for MES scheduled user compute queue reset cases
+         * stored in hung_db_array hqd info offset to full array size
+         */
     }
 
     return r;
···
 bool amdgpu_mes_suspend_resume_all_supported(struct amdgpu_device *adev)
 {
     uint32_t mes_rev = adev->mes.sched_version & AMDGPU_MES_VERSION_MASK;
-    bool is_supported = false;
 
-    if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(11, 0, 0) &&
-        amdgpu_ip_version(adev, GC_HWIP, 0) < IP_VERSION(12, 0, 0) &&
-        mes_rev >= 0x63)
-        is_supported = true;
-
-    return is_supported;
+    return ((amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(11, 0, 0) &&
+             amdgpu_ip_version(adev, GC_HWIP, 0) < IP_VERSION(12, 0, 0) &&
+             mes_rev >= 0x63) ||
+            amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(12, 0, 0));
 }
 
 /* Fix me -- node_id is used to identify the correct MES instances in the future */
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
···
     void *resource_1_addr[AMDGPU_MAX_MES_PIPES];
 
     int hung_queue_db_array_size;
+    int hung_queue_hqd_info_offset;
     struct amdgpu_bo *hung_queue_db_array_gpu_obj;
     uint64_t hung_queue_db_array_gpu_addr;
     void *hung_queue_db_array_cpu_addr;
+1-1
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
···
     if (r)
         return r;
 
-    /* signal the fence of the bad job */
+    /* signal the guilty fence and set an error on all fences from the context */
     if (guilty_fence)
         amdgpu_fence_driver_guilty_force_completion(guilty_fence);
     /* Re-emit the non-guilty commands */
···
     pr_debug_ratelimited("Evicting process pid %d queues\n",
                          pdd->process->lead_thread->pid);
 
+    if (dqm->dev->kfd->shared_resources.enable_mes) {
+        pdd->last_evict_timestamp = get_jiffies_64();
+        retval = suspend_all_queues_mes(dqm);
+        if (retval) {
+            dev_err(dev, "Suspending all queues failed");
+            goto out;
+        }
+    }
+
     /* Mark all queues as evicted. Deactivate all active queues on
      * the qpd.
      */
···
         decrement_queue_count(dqm, qpd, q);
 
         if (dqm->dev->kfd->shared_resources.enable_mes) {
-            int err;
-
-            err = remove_queue_mes(dqm, q, qpd);
-            if (err) {
+            retval = remove_queue_mes(dqm, q, qpd);
+            if (retval) {
                 dev_err(dev, "Failed to evict queue %d\n",
                         q->properties.queue_id);
-                retval = err;
+                goto out;
             }
         }
     }
-    pdd->last_evict_timestamp = get_jiffies_64();
-    if (!dqm->dev->kfd->shared_resources.enable_mes)
+
+    if (!dqm->dev->kfd->shared_resources.enable_mes) {
+        pdd->last_evict_timestamp = get_jiffies_64();
         retval = execute_queues_cpsch(dqm,
                                       qpd->is_debug ?
                                       KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES :
                                       KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0,
                                       USE_DEFAULT_GRACE_PERIOD);
+    } else {
+        retval = resume_all_queues_mes(dqm);
+        if (retval)
+            dev_err(dev, "Resuming all queues failed");
+    }
 
 out:
     dqm_unlock(dqm);
···
     return ret;
 }
 
-static int kfd_dqm_evict_pasid_mes(struct device_queue_manager *dqm,
-                                   struct qcm_process_device *qpd)
-{
-    struct device *dev = dqm->dev->adev->dev;
-    int ret = 0;
-
-    /* Check if process is already evicted */
-    dqm_lock(dqm);
-    if (qpd->evicted) {
-        /* Increment the evicted count to make sure the
-         * process stays evicted before its terminated.
-         */
-        qpd->evicted++;
-        dqm_unlock(dqm);
-        goto out;
-    }
-    dqm_unlock(dqm);
-
-    ret = suspend_all_queues_mes(dqm);
-    if (ret) {
-        dev_err(dev, "Suspending all queues failed");
-        goto out;
-    }
-
-    ret = dqm->ops.evict_process_queues(dqm, qpd);
-    if (ret) {
-        dev_err(dev, "Evicting process queues failed");
-        goto out;
-    }
-
-    ret = resume_all_queues_mes(dqm);
-    if (ret)
-        dev_err(dev, "Resuming all queues failed");
-
-out:
-    return ret;
-}
-
 int kfd_evict_process_device(struct kfd_process_device *pdd)
 {
     struct device_queue_manager *dqm;
     struct kfd_process *p;
-    int ret = 0;
 
     p = pdd->process;
     dqm = pdd->dev->dqm;
 
     WARN(debug_evictions, "Evicting pid %d", p->lead_thread->pid);
 
-    if (dqm->dev->kfd->shared_resources.enable_mes)
-        ret = kfd_dqm_evict_pasid_mes(dqm, &pdd->qpd);
-    else
-        ret = dqm->ops.evict_process_queues(dqm, &pdd->qpd);
-
-    return ret;
+    return dqm->ops.evict_process_queues(dqm, &pdd->qpd);
 }
 
 int reserve_debug_trap_vmid(struct device_queue_manager *dqm,
+4-8
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
···
 
     dc_hardware_init(adev->dm.dc);
 
-    adev->dm.restore_backlight = true;
-
     adev->dm.hpd_rx_offload_wq = hpd_rx_irq_create_workqueue(adev);
     if (!adev->dm.hpd_rx_offload_wq) {
         drm_err(adev_to_drm(adev), "failed to create hpd rx offload workqueue.\n");
···
     dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0);
 
     dc_resume(dm->dc);
-    adev->dm.restore_backlight = true;
 
     amdgpu_dm_irq_resume_early(adev);
 
···
     bool mode_set_reset_required = false;
     u32 i;
     struct dc_commit_streams_params params = {dc_state->streams, dc_state->stream_count};
+    bool set_backlight_level = false;
 
     /* Disable writeback */
     for_each_old_connector_in_state(state, connector, old_con_state, i) {
···
             acrtc->hw_mode = new_crtc_state->mode;
             crtc->hwmode = new_crtc_state->mode;
             mode_set_reset_required = true;
+            set_backlight_level = true;
         } else if (modereset_required(new_crtc_state)) {
             drm_dbg_atomic(dev,
                            "Atomic commit: RESET. crtc id %d:[%p]\n",
···
      * to fix a flicker issue.
      * It will cause the dm->actual_brightness is not the current panel brightness
      * level. (the dm->brightness is the correct panel level)
-     * So we set the backlight level with dm->brightness value after initial
-     * set mode. Use restore_backlight flag to avoid setting backlight level
-     * for every subsequent mode set.
+     * So we set the backlight level with dm->brightness value after set mode
      */
-    if (dm->restore_backlight) {
+    if (set_backlight_level) {
         for (i = 0; i < dm->num_of_edps; i++) {
             if (dm->backlight_dev[i])
                 amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]);
         }
-        dm->restore_backlight = false;
     }
 }
-7
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
···
     u32 actual_brightness[AMDGPU_DM_MAX_NUM_EDP];
 
     /**
-     * @restore_backlight:
-     *
-     * Flag to indicate whether to restore backlight after modeset.
-     */
-    bool restore_backlight;
-
-    /**
      * @aux_hpd_discon_quirk:
      *
      * quirk for hpd discon while aux is on-going.
+5
drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
···
      * for these GPUs to calculate bandwidth requirements.
      */
     if (high_pixelclock_count) {
+        /* Work around flickering lines at the bottom edge
+         * of the screen when using a single 4K 60Hz monitor.
+         */
+        disable_mclk_switching = true;
+
         /* On Oland, we observe some flickering when two 4K 60Hz
          * displays are connected, possibly because voltage is too low.
          * Raise the voltage by requiring a higher SCLK.
···
     }
 
     /* Test for known Chip ID. */
-    if (chipid[0] != REG_CHIPID0_VALUE || chipid[1] != REG_CHIPID1_VALUE ||
-        chipid[2] != REG_CHIPID2_VALUE) {
+    if (chipid[0] != REG_CHIPID0_VALUE || chipid[1] != REG_CHIPID1_VALUE) {
         dev_err(ctx->dev, "Unknown Chip ID: 0x%02x 0x%02x 0x%02x\n",
                 chipid[0], chipid[1], chipid[2]);
         return -EINVAL;
+1-1
drivers/gpu/drm/drm_draw.c
···
 
 void drm_draw_fill24(struct iosys_map *dmap, unsigned int dpitch,
                      unsigned int height, unsigned int width,
-                     u16 color)
+                     u32 color)
 {
     unsigned int y, x;
 
+1-1
drivers/gpu/drm/drm_draw_internal.h
···
 
 void drm_draw_fill24(struct iosys_map *dmap, unsigned int dpitch,
                      unsigned int height, unsigned int width,
-                     u16 color);
+                     u32 color);
 
 void drm_draw_fill32(struct iosys_map *dmap, unsigned int dpitch,
                      unsigned int height, unsigned int width,
+20-18
drivers/gpu/drm/i915/display/intel_fb.c
···
     if (intel_fb_uses_dpt(fb))
         intel_dpt_destroy(intel_fb->dpt_vm);
 
-    intel_frontbuffer_put(intel_fb->frontbuffer);
-
     intel_fb_bo_framebuffer_fini(intel_fb_bo(fb));
+
+    intel_frontbuffer_put(intel_fb->frontbuffer);
 
     kfree(intel_fb);
 }
···
     int ret = -EINVAL;
     int i;
 
+    /*
+     * intel_frontbuffer_get() must be done before
+     * intel_fb_bo_framebuffer_init() to avoid set_tiling vs. addfb race.
+     */
+    intel_fb->frontbuffer = intel_frontbuffer_get(obj);
+    if (!intel_fb->frontbuffer)
+        return -ENOMEM;
+
     ret = intel_fb_bo_framebuffer_init(fb, obj, mode_cmd);
     if (ret)
-        return ret;
-
-    intel_fb->frontbuffer = intel_frontbuffer_get(obj);
-    if (!intel_fb->frontbuffer) {
-        ret = -ENOMEM;
-        goto err;
-    }
+        goto err_frontbuffer_put;
 
     ret = -EINVAL;
     if (!drm_any_plane_has_format(display->drm,
···
         drm_dbg_kms(display->drm,
                     "unsupported pixel format %p4cc / modifier 0x%llx\n",
                     &mode_cmd->pixel_format, mode_cmd->modifier[0]);
-        goto err_frontbuffer_put;
+        goto err_bo_framebuffer_fini;
     }
 
     max_stride = intel_fb_max_stride(display, mode_cmd->pixel_format,
···
                     mode_cmd->modifier[0] != DRM_FORMAT_MOD_LINEAR ?
                     "tiled" : "linear",
                     mode_cmd->pitches[0], max_stride);
-        goto err_frontbuffer_put;
+        goto err_bo_framebuffer_fini;
     }
 
     /* FIXME need to adjust LINOFF/TILEOFF accordingly. */
···
         drm_dbg_kms(display->drm,
                     "plane 0 offset (0x%08x) must be 0\n",
                     mode_cmd->offsets[0]);
-        goto err_frontbuffer_put;
+        goto err_bo_framebuffer_fini;
     }
 
     drm_helper_mode_fill_fb_struct(display->drm, fb, info, mode_cmd);
···
 
         if (mode_cmd->handles[i] != mode_cmd->handles[0]) {
             drm_dbg_kms(display->drm, "bad plane %d handle\n", i);
-            goto err_frontbuffer_put;
+            goto err_bo_framebuffer_fini;
         }
 
         stride_alignment = intel_fb_stride_alignment(fb, i);
···
             drm_dbg_kms(display->drm,
                         "plane %d pitch (%d) must be at least %u byte aligned\n",
                         i, fb->pitches[i], stride_alignment);
-            goto err_frontbuffer_put;
+            goto err_bo_framebuffer_fini;
         }
 
         if (intel_fb_is_gen12_ccs_aux_plane(fb, i)) {
···
                 drm_dbg_kms(display->drm,
                             "ccs aux plane %d pitch (%d) must be %d\n",
                             i, fb->pitches[i], ccs_aux_stride);
-                goto err_frontbuffer_put;
+                goto err_bo_framebuffer_fini;
             }
         }
 
···
 
     ret = intel_fill_fb_info(display, intel_fb);
     if (ret)
-        goto err_frontbuffer_put;
+        goto err_bo_framebuffer_fini;
 
     if (intel_fb_uses_dpt(fb)) {
         struct i915_address_space *vm;
···
 err_free_dpt:
     if (intel_fb_uses_dpt(fb))
         intel_dpt_destroy(intel_fb->dpt_vm);
+err_bo_framebuffer_fini:
+    intel_fb_bo_framebuffer_fini(obj);
 err_frontbuffer_put:
     intel_frontbuffer_put(intel_fb->frontbuffer);
-err:
-    intel_fb_bo_framebuffer_fini(obj);
     return ret;
 }
+9-1
drivers/gpu/drm/i915/display/intel_frontbuffer.c
···
     spin_unlock(&display->fb_tracking.lock);
 
     i915_active_fini(&front->write);
+
+    drm_gem_object_put(obj);
     kfree_rcu(front, rcu);
 }
···
     if (!front)
         return NULL;
 
+    drm_gem_object_get(obj);
+
     front->obj = obj;
     kref_init(&front->ref);
     atomic_set(&front->bits, 0);
···
     spin_lock(&display->fb_tracking.lock);
     cur = intel_bo_set_frontbuffer(obj, front);
     spin_unlock(&display->fb_tracking.lock);
-    if (cur != front)
+
+    if (cur != front) {
+        drm_gem_object_put(obj);
         kfree(front);
+    }
+
     return cur;
 }
+10-2
drivers/gpu/drm/i915/display/intel_psr.c
···
     struct intel_display *display = to_intel_display(intel_dp);
 
     if (DISPLAY_VER(display) < 20 && intel_dp->psr.psr2_sel_fetch_enabled) {
+        /* Selective fetch prior LNL */
         if (intel_dp->psr.psr2_sel_fetch_cff_enabled) {
             /* can we turn CFF off? */
             if (intel_dp->psr.busy_frontbuffer_bits == 0)
···
         intel_psr_configure_full_frame_update(intel_dp);
 
         intel_psr_force_update(intel_dp);
+    } else if (!intel_dp->psr.psr2_sel_fetch_enabled) {
+        /*
+         * PSR1 on all platforms
+         * PSR2 HW tracking
+         * Panel Replay Full frame update
+         */
+        intel_psr_force_update(intel_dp);
     } else {
+        /* Selective update LNL onwards */
         intel_psr_exit(intel_dp);
     }
 
-    if ((!intel_dp->psr.psr2_sel_fetch_enabled || DISPLAY_VER(display) >= 20) &&
-        !intel_dp->psr.busy_frontbuffer_bits)
+    if (!intel_dp->psr.active && !intel_dp->psr.busy_frontbuffer_bits)
         queue_work(display->wq.unordered, &intel_dp->psr.work);
 }
···
 
     if (!front) {
         RCU_INIT_POINTER(obj->frontbuffer, NULL);
-        drm_gem_object_put(intel_bo_to_drm_bo(obj));
     } else if (rcu_access_pointer(obj->frontbuffer)) {
         cur = rcu_dereference_protected(obj->frontbuffer, true);
         kref_get(&cur->ref);
     } else {
-        drm_gem_object_get(intel_bo_to_drm_bo(obj));
         rcu_assign_pointer(obj->frontbuffer, front);
     }
 
+8-1
drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
···
 
 static void ct_try_receive_message(struct intel_guc_ct *ct)
 {
+    struct intel_guc *guc = ct_to_guc(ct);
     int ret;
 
-    if (GEM_WARN_ON(!ct->enabled))
+    if (!ct->enabled) {
+        GEM_WARN_ON(!guc_to_gt(guc)->uc.reset_in_progress);
+        return;
+    }
+
+    /* When interrupt disabled, message handling is not expected */
+    if (!guc->interrupts.enabled)
         return;
 
     ret = ct_receive(ct);
···
     dma_resv_assert_held(resv);
 
     dma_resv_for_each_fence(&cursor, resv, usage, fence) {
-        /* Make sure to grab an additional ref on the added fence */
-        dma_fence_get(fence);
-        ret = drm_sched_job_add_dependency(job, fence);
-        if (ret) {
-            dma_fence_put(fence);
+        /*
+         * As drm_sched_job_add_dependency always consumes the fence
+         * reference (even when it fails), and dma_resv_for_each_fence
+         * is not obtaining one, we need to grab one before calling.
+         */
+        ret = drm_sched_job_add_dependency(job, dma_fence_get(fence));
+        if (ret)
             return ret;
-        }
     }
     return 0;
 }
···
 
 /**
  * xe_pci_fake_data_gen_params - Generate struct xe_pci_fake_data parameters
+ * @test: test context object
  * @prev: the pointer to the previous parameter to iterate from or NULL
  * @desc: output buffer with minimum size of KUNIT_PARAM_DESC_SIZE
  *
···
 
 /**
  * xe_pci_graphics_ip_gen_param - Generate graphics struct xe_ip parameters
+ * @test: test context object
  * @prev: the pointer to the previous parameter to iterate from or NULL
  * @desc: output buffer with minimum size of KUNIT_PARAM_DESC_SIZE
  *
···
 
 /**
  * xe_pci_media_ip_gen_param - Generate media struct xe_ip parameters
+ * @test: test context object
  * @prev: the pointer to the previous parameter to iterate from or NULL
  * @desc: output buffer with minimum size of KUNIT_PARAM_DESC_SIZE
  *
···
 
 /**
  * xe_pci_id_gen_param - Generate struct pci_device_id parameters
+ * @test: test context object
  * @prev: the pointer to the previous parameter to iterate from or NULL
  * @desc: output buffer with minimum size of KUNIT_PARAM_DESC_SIZE
  *
···
 
 /**
  * xe_pci_live_device_gen_param - Helper to iterate Xe devices as KUnit parameters
+ * @test: test context object
  * @prev: the previously returned value, or NULL for the first iteration
  * @desc: the buffer for a parameter name
  *
-8
drivers/gpu/drm/xe/xe_bo_evict.c
···
 
 static int xe_bo_restore_and_map_ggtt(struct xe_bo *bo)
 {
-    struct xe_device *xe = xe_bo_device(bo);
     int ret;
 
     ret = xe_bo_restore_pinned(bo);
···
             xe_ggtt_map_bo_unlocked(tile->mem.ggtt, bo);
         }
     }
-
-    /*
-     * We expect validate to trigger a move VRAM and our move code
-     * should setup the iosys map.
-     */
-    xe_assert(xe, !(bo->flags & XE_BO_FLAG_PINNED_LATE_RESTORE) ||
-              !iosys_map_is_null(&bo->vmap));
 
     return 0;
 }
···
     if (xe_gt_is_main_type(gt))
         gtidle->powergate_enable |= RENDER_POWERGATE_ENABLE;
 
+    if (MEDIA_VERx100(xe) >= 1100 && MEDIA_VERx100(xe) < 1255)
+        gtidle->powergate_enable |= MEDIA_SAMPLERS_POWERGATE_ENABLE;
+
     if (xe->info.platform != XE_DG1) {
         for (i = XE_HW_ENGINE_VCS0, j = 0; i <= XE_HW_ENGINE_VCS7; ++i, ++j) {
             if ((gt->info.engine_mask & BIT(i)))
···
         drm_printf(p, "Media Slice%d Power Gate Status: %s\n", n,
                    str_up_down(pg_status & media_slices[n].status_bit));
     }
+
+    if (MEDIA_VERx100(xe) >= 1100 && MEDIA_VERx100(xe) < 1255)
+        drm_printf(p, "Media Samplers Power Gating Enabled: %s\n",
+                   str_yes_no(pg_enabled & MEDIA_SAMPLERS_POWERGATE_ENABLE));
+
     return 0;
 }
+12-1
drivers/gpu/drm/xe/xe_guc_submit.c
···
 #include "xe_ring_ops_types.h"
 #include "xe_sched_job.h"
 #include "xe_trace.h"
+#include "xe_uc_fw.h"
 #include "xe_vm.h"
 
 static struct xe_guc *
···
     xe_gt_assert(guc_to_gt(guc), !(q->flags & EXEC_QUEUE_FLAG_PERMANENT));
     trace_xe_exec_queue_cleanup_entity(q);
 
-    if (exec_queue_registered(q))
+    /*
+     * Expected state transitions for cleanup:
+     * - If the exec queue is registered and GuC firmware is running, we must first
+     *   disable scheduling and deregister the queue to ensure proper teardown and
+     *   resource release in the GuC, then destroy the exec queue on driver side.
+     * - If the GuC is already stopped (e.g., during driver unload or GPU reset),
+     *   we cannot expect a response for the deregister request. In this case,
+     *   it is safe to directly destroy the exec queue on driver side, as the GuC
+     *   will not process further requests and all resources must be cleaned up locally.
+     */
+    if (exec_queue_registered(q) && xe_uc_fw_is_running(&guc->fw))
         disable_scheduling_deregister(guc, q);
     else
         __guc_exec_queue_destroy(guc, q);
+4-2
drivers/gpu/drm/xe/xe_migrate.c
···
 
     err = xe_migrate_lock_prepare_vm(tile, m, vm);
     if (err)
-        return err;
+        goto err_out;
 
     if (xe->info.has_usm) {
         struct xe_hw_engine *hwe = xe_gt_hw_engine(primary_gt,
···
         if (current_bytes & ~PAGE_MASK) {
             int pitch = 4;
 
-            current_bytes = min_t(int, current_bytes, S16_MAX * pitch);
+            current_bytes = min_t(int, current_bytes,
+                                  round_down(S16_MAX * pitch,
+                                             XE_CACHELINE_BYTES));
         }
 
         __fence = xe_migrate_vram(m, current_bytes,
+2
drivers/gpu/drm/xe/xe_pci.c
···
     if (err)
         return err;
 
+    xe_vram_resize_bar(xe);
+
     err = xe_device_probe_early(xe);
     /*
      * In Boot Survivability mode, no drm card is exposed and driver
+15-2
drivers/gpu/drm/xe/xe_svm.c
···
     if (err)
         return err;
 
+    dpagemap = xe_vma_resolve_pagemap(vma, tile);
+    if (!dpagemap && !ctx.devmem_only)
+        ctx.device_private_page_owner = NULL;
     range = xe_svm_range_find_or_insert(vm, fault_addr, vma, &ctx);
 
     if (IS_ERR(range))
···
 
     range_debug(range, "PAGE FAULT");
 
-    dpagemap = xe_vma_resolve_pagemap(vma, tile);
     if (--migrate_try_count >= 0 &&
         xe_svm_range_needs_migrate_to_vram(range, vma, !!dpagemap || ctx.devmem_only)) {
         ktime_t migrate_start = xe_svm_stats_ktime_get();
···
             drm_dbg(&vm->xe->drm,
                     "VRAM allocation failed, falling back to retrying fault, asid=%u, errno=%pe\n",
                     vm->usm.asid, ERR_PTR(err));
-            goto retry;
+
+            /*
+             * In the devmem-only case, mixed mappings may
+             * be found. The get_pages function will fix
+             * these up to a single location, allowing the
+             * page fault handler to make forward progress.
+             */
+            if (ctx.devmem_only)
+                goto get_pages;
+            else
+                goto retry;
         } else {
             drm_err(&vm->xe->drm,
                     "VRAM allocation failed, retry count exceeded, asid=%u, errno=%pe\n",
···
         }
     }
 
+get_pages:
     get_pages_start = xe_svm_stats_ktime_get();
 
     range_debug(range, "GET PAGES");
+23-9
drivers/gpu/drm/xe/xe_vm.c
···
 }
 
 static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
-				 bool validate)
+				 bool res_evict, bool validate)
 {
 	struct xe_bo *bo = xe_vma_bo(vma);
 	struct xe_vm *vm = xe_vma_vm(vma);
···
 		err = drm_exec_lock_obj(exec, &bo->ttm.base);
 		if (!err && validate)
 			err = xe_bo_validate(bo, vm,
-					     !xe_vm_in_preempt_fence_mode(vm), exec);
+					     !xe_vm_in_preempt_fence_mode(vm) &&
+					     res_evict, exec);
 	}
 
 	return err;
···
 }
 
 static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
-			    struct xe_vma_op *op)
+			    struct xe_vma_ops *vops, struct xe_vma_op *op)
 {
 	int err = 0;
+	bool res_evict;
+
+	/*
+	 * We only allow evicting a BO within the VM if it is not part of an
+	 * array of binds, as an array of binds can evict another BO within the
+	 * bind.
+	 */
+	res_evict = !(vops->flags & XE_VMA_OPS_ARRAY_OF_BINDS);
 
 	switch (op->base.op) {
 	case DRM_GPUVA_OP_MAP:
 		if (!op->map.invalidate_on_bind)
 			err = vma_lock_and_validate(exec, op->map.vma,
+						    res_evict,
 						    !xe_vm_in_fault_mode(vm) ||
 						    op->map.immediate);
 		break;
···
 		err = vma_lock_and_validate(exec,
 					    gpuva_to_vma(op->base.remap.unmap->va),
-					    false);
+					    res_evict, false);
 		if (!err && op->remap.prev)
-			err = vma_lock_and_validate(exec, op->remap.prev, true);
+			err = vma_lock_and_validate(exec, op->remap.prev,
+						    res_evict, true);
 		if (!err && op->remap.next)
-			err = vma_lock_and_validate(exec, op->remap.next, true);
+			err = vma_lock_and_validate(exec, op->remap.next,
+						    res_evict, true);
 		break;
 	case DRM_GPUVA_OP_UNMAP:
 		err = check_ufence(gpuva_to_vma(op->base.unmap.va));
···
 		err = vma_lock_and_validate(exec,
 					    gpuva_to_vma(op->base.unmap.va),
-					    false);
+					    res_evict, false);
 		break;
 	case DRM_GPUVA_OP_PREFETCH:
 	{
···
 		err = vma_lock_and_validate(exec,
 					    gpuva_to_vma(op->base.prefetch.va),
-					    false);
+					    res_evict, false);
 		if (!err && !xe_vma_has_no_bo(vma))
 			err = xe_bo_migrate(xe_vma_bo(vma),
 					    region_to_mem_type[region],
···
 		return err;
 
 	list_for_each_entry(op, &vops->list, link) {
-		err = op_lock_and_prep(exec, vm, op);
+		err = op_lock_and_prep(exec, vm, vops, op);
 		if (err)
 			return err;
 	}
···
 	}
 
 	xe_vma_ops_init(&vops, vm, q, syncs, num_syncs);
+	if (args->num_binds > 1)
+		vops.flags |= XE_VMA_OPS_ARRAY_OF_BINDS;
 	for (i = 0; i < args->num_binds; ++i) {
 		u64 range = bind_ops[i].range;
 		u64 addr = bind_ops[i].addr;
+1
drivers/gpu/drm/xe/xe_vm_types.h
···
 	/** @flag: signify the properties within xe_vma_ops*/
 #define XE_VMA_OPS_FLAG_HAS_SVM_PREFETCH	BIT(0)
 #define XE_VMA_OPS_FLAG_MADVISE			BIT(1)
+#define XE_VMA_OPS_ARRAY_OF_BINDS		BIT(2)
 	u32 flags;
 #ifdef TEST_VM_OPS_ERROR
 	/** @inject_error: inject error to test error handling */
+26-8
drivers/gpu/drm/xe/xe_vram.c
···
 
 #define BAR_SIZE_SHIFT 20
 
-static void
-_resize_bar(struct xe_device *xe, int resno, resource_size_t size)
+/*
+ * Release all the BARs that could influence/block LMEMBAR resizing, i.e.
+ * assigned IORESOURCE_MEM_64 BARs
+ */
+static void release_bars(struct pci_dev *pdev)
+{
+	struct resource *res;
+	int i;
+
+	pci_dev_for_each_resource(pdev, res, i) {
+		/* Resource already un-assigned, do not reset it */
+		if (!res->parent)
+			continue;
+
+		/* No need to release unrelated BARs */
+		if (!(res->flags & IORESOURCE_MEM_64))
+			continue;
+
+		pci_release_resource(pdev, i);
+	}
+}
+
+static void resize_bar(struct xe_device *xe, int resno, resource_size_t size)
 {
 	struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
 	int bar_size = pci_rebar_bytes_to_size(size);
 	int ret;
 
-	if (pci_resource_len(pdev, resno))
-		pci_release_resource(pdev, resno);
+	release_bars(pdev);
 
 	ret = pci_resize_resource(pdev, resno, bar_size);
 	if (ret) {
···
  * if force_vram_bar_size is set, attempt to set to the requested size
  * else set to maximum possible size
  */
-static void resize_vram_bar(struct xe_device *xe)
+void xe_vram_resize_bar(struct xe_device *xe)
 {
 	int force_vram_bar_size = xe_modparam.force_vram_bar_size;
 	struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
···
 	pci_read_config_dword(pdev, PCI_COMMAND, &pci_cmd);
 	pci_write_config_dword(pdev, PCI_COMMAND, pci_cmd & ~PCI_COMMAND_MEMORY);
 
-	_resize_bar(xe, LMEM_BAR, rebar_size);
+	resize_bar(xe, LMEM_BAR, rebar_size);
 
 	pci_assign_unassigned_bus_resources(pdev->bus);
 	pci_write_config_dword(pdev, PCI_COMMAND, pci_cmd);
···
 		drm_err(&xe->drm, "pci resource is not valid\n");
 		return -ENXIO;
 	}
-
-	resize_vram_bar(xe);
 
 	lmem_bar->io_start = pci_resource_start(pdev, LMEM_BAR);
 	lmem_bar->io_size = pci_resource_len(pdev, LMEM_BAR);
···
 	  If unsure, say Y.
 
 config HID_HAPTIC
-	tristate "Haptic touchpad support"
+	bool "Haptic touchpad support"
 	default n
 	help
 	  Support for touchpads with force sensors and haptic actuators instead of a
+24-3
drivers/hid/hid-cp2112.c
···
 		count = cp2112_write_read_req(buf, addr, read_length,
 					      command, NULL, 0);
 	} else {
-		count = cp2112_write_req(buf, addr, command,
+		/* Copy starts from data->block[1], so the length can
+		 * be at most I2C_SMBUS_BLOCK_MAX + 1
+		 */
+		if (data->block[0] > I2C_SMBUS_BLOCK_MAX + 1)
+			count = -EINVAL;
+		else
+			count = cp2112_write_req(buf, addr, command,
 					 data->block + 1,
 					 data->block[0]);
 	}
···
 					      I2C_SMBUS_BLOCK_MAX,
 					      command, NULL, 0);
 	} else {
-		count = cp2112_write_req(buf, addr, command,
+		/* data_length here is data->block[0] + 1,
+		 * so make sure data->block[0] is less than
+		 * or equal to I2C_SMBUS_BLOCK_MAX + 1
+		 */
+		if (data->block[0] > I2C_SMBUS_BLOCK_MAX + 1)
+			count = -EINVAL;
+		else
+			count = cp2112_write_req(buf, addr, command,
 					 data->block,
 					 data->block[0] + 1);
 	}
···
 		size = I2C_SMBUS_BLOCK_DATA;
 		read_write = I2C_SMBUS_READ;
 
-		count = cp2112_write_read_req(buf, addr, I2C_SMBUS_BLOCK_MAX,
+		/* data_length is data->block[0] + 1, so
+		 * data->block[0] should be less than or
+		 * equal to I2C_SMBUS_BLOCK_MAX + 1
+		 */
+		if (data->block[0] > I2C_SMBUS_BLOCK_MAX + 1)
+			count = -EINVAL;
+		else
+			count = cp2112_write_read_req(buf, addr, I2C_SMBUS_BLOCK_MAX,
 					      command, data->block,
 					      data->block[0] + 1);
 		break;
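The guards above all enforce the same invariant: an SMBus block transfer carries a length byte plus up to `I2C_SMBUS_BLOCK_MAX` data bytes, so any length byte above `I2C_SMBUS_BLOCK_MAX + 1` would overrun the staging buffer. A minimal sketch of that check, with the 32-byte limit taken from `<uapi/linux/i2c.h>`:

```c
#include <assert.h>
#include <stdint.h>

#define I2C_SMBUS_BLOCK_MAX 32	/* as in <uapi/linux/i2c.h> */

/* Mirror of the patch's bound check: accept a block-transfer length
 * byte only if length-byte-plus-data still fits the buffer. */
static int smbus_block_len_ok(uint8_t len_byte)
{
	return len_byte <= I2C_SMBUS_BLOCK_MAX + 1;
}
```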
···
 LIST_HEAD(__i2c_board_list);
 EXPORT_SYMBOL_GPL(__i2c_board_list);
 
-int __i2c_first_dynamic_bus_num __ro_after_init;
+int __i2c_first_dynamic_bus_num;
 EXPORT_SYMBOL_GPL(__i2c_first_dynamic_bus_num);
 
···
  * The board info passed can safely be __initdata, but be careful of embedded
  * pointers (for platform_data, functions, etc) since that won't be copied.
  */
-int __init i2c_register_board_info(int busnum, struct i2c_board_info const *info, unsigned len)
+int i2c_register_board_info(int busnum, struct i2c_board_info const *info, unsigned len)
 {
 	int status;
 
+2-2
drivers/irqchip/irq-aspeed-scu-ic.c
···
 	int irq, rc = 0;
 
 	scu_ic->base = of_iomap(node, 0);
-	if (IS_ERR(scu_ic->base)) {
-		rc = PTR_ERR(scu_ic->base);
+	if (!scu_ic->base) {
+		rc = -ENOMEM;
 		goto err;
 	}
 
+4-2
drivers/irqchip/irq-sifive-plic.c
···
 
 	priv = per_cpu_ptr(&plic_handlers, smp_processor_id())->priv;
 
-	for (i = 0; i < priv->nr_irqs; i++) {
+	/* irq ID 0 is reserved */
+	for (i = 1; i < priv->nr_irqs; i++) {
 		__assign_bit(i, priv->prio_save,
 			     readl(priv->regs + PRIORITY_BASE + i * PRIORITY_PER_ID));
 	}
···
 
 	priv = per_cpu_ptr(&plic_handlers, smp_processor_id())->priv;
 
-	for (i = 0; i < priv->nr_irqs; i++) {
+	/* irq ID 0 is reserved */
+	for (i = 1; i < priv->nr_irqs; i++) {
 		index = BIT_WORD(i);
 		writel((priv->prio_save[index] & BIT_MASK(i)) ? 1 : 0,
 		       priv->regs + PRIORITY_BASE + i * PRIORITY_PER_ID);
+8-4
drivers/mfd/ls2k-bmc-core.c
···
 		return ret;
 
 	ddata = devm_kzalloc(&dev->dev, sizeof(*ddata), GFP_KERNEL);
-	if (IS_ERR(ddata)) {
+	if (!ddata) {
 		ret = -ENOMEM;
 		goto disable_pci;
 	}
···
 		goto disable_pci;
 	}
 
-	return devm_mfd_add_devices(&dev->dev, PLATFORM_DEVID_AUTO,
-				    ls2k_bmc_cells, ARRAY_SIZE(ls2k_bmc_cells),
-				    &dev->resource[0], 0, NULL);
+	ret = devm_mfd_add_devices(&dev->dev, PLATFORM_DEVID_AUTO,
+				   ls2k_bmc_cells, ARRAY_SIZE(ls2k_bmc_cells),
+				   &dev->resource[0], 0, NULL);
+	if (ret)
+		goto disable_pci;
+
+	return 0;
 
 disable_pci:
 	pci_disable_device(dev);
···
 #define MMC_EXTRACT_INDEX_FROM_ARG(x) ((x & 0x00FF0000) >> 16)
 #define MMC_EXTRACT_VALUE_FROM_ARG(x) ((x & 0x0000FF00) >> 8)
 
-/**
- * struct rpmb_frame - rpmb frame as defined by eMMC 5.1 (JESD84-B51)
- *
- * @stuff        : stuff bytes
- * @key_mac      : The authentication key or the message authentication
- *                 code (MAC) depending on the request/response type.
- *                 The MAC will be delivered in the last (or the only)
- *                 block of data.
- * @data         : Data to be written or read by signed access.
- * @nonce        : Random number generated by the host for the requests
- *                 and copied to the response by the RPMB engine.
- * @write_counter: Counter value for the total amount of the successful
- *                 authenticated data write requests made by the host.
- * @addr         : Address of the data to be programmed to or read
- *                 from the RPMB. Address is the serial number of
- *                 the accessed block (half sector 256B).
- * @block_count  : Number of blocks (half sectors, 256B) requested to be
- *                 read/programmed.
- * @result       : Includes information about the status of the write counter
- *                 (valid, expired) and result of the access made to the RPMB.
- * @req_resp     : Defines the type of request and response to/from the memory.
- *
- * The stuff bytes and big-endian properties are modeled to fit to the spec.
- */
-struct rpmb_frame {
-	u8  stuff[196];
-	u8  key_mac[32];
-	u8  data[256];
-	u8  nonce[16];
-	__be32 write_counter;
-	__be16 addr;
-	__be16 block_count;
-	__be16 result;
-	__be16 req_resp;
-} __packed;
-
-#define RPMB_PROGRAM_KEY       0x1    /* Program RPMB Authentication Key */
-#define RPMB_GET_WRITE_COUNTER 0x2    /* Read RPMB write counter */
-#define RPMB_WRITE_DATA        0x3    /* Write data to RPMB partition */
-#define RPMB_READ_DATA         0x4    /* Read data from RPMB partition */
-#define RPMB_RESULT_READ       0x5    /* Read result request (Internal) */
-
 #define RPMB_FRAME_SIZE sizeof(struct rpmb_frame)
 #define CHECK_SIZE_NEQ(val) ((val) != sizeof(struct rpmb_frame))
 #define CHECK_SIZE_ALIGNED(val) IS_ALIGNED((val), sizeof(struct rpmb_frame))
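The `struct rpmb_frame` definition removed above is fixed at 512 bytes, one eMMC half-sector, which is what the remaining `RPMB_FRAME_SIZE` checks rely on. A portable sketch that re-states the layout and verifies the sizes add up (`__be16`/`__be32` become plain fixed-width integers here since only sizes are checked):

```c
#include <assert.h>
#include <stdint.h>

/* Portable re-statement of the 512-byte RPMB frame layout from
 * eMMC 5.1 (JESD84-B51): 196 + 32 + 256 + 16 + 4 + 2*4 = 512. */
struct rpmb_frame_sketch {
	uint8_t  stuff[196];
	uint8_t  key_mac[32];
	uint8_t  data[256];
	uint8_t  nonce[16];
	uint32_t write_counter;	/* __be32 in the kernel */
	uint16_t addr;		/* __be16 in the kernel */
	uint16_t block_count;
	uint16_t result;
	uint16_t req_resp;
} __attribute__((packed));
```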
+39-29
drivers/net/can/m_can/m_can.c
···
 // SPDX-License-Identifier: GPL-2.0
 // CAN bus driver for Bosch M_CAN controller
 // Copyright (C) 2014 Freescale Semiconductor, Inc.
-//	Dong Aisheng <b29396@freescale.com>
+//	Dong Aisheng <aisheng.dong@nxp.com>
 // Copyright (C) 2018-19 Texas Instruments Incorporated - http://www.ti.com/
 
 /* Bosch M_CAN user manual can be obtained from:
···
 	u32 timestamp = 0;
 
 	switch (new_state) {
+	case CAN_STATE_ERROR_ACTIVE:
+		cdev->can.state = CAN_STATE_ERROR_ACTIVE;
+		break;
 	case CAN_STATE_ERROR_WARNING:
 		/* error warning state */
 		cdev->can.can_stats.error_warning++;
···
 	__m_can_get_berr_counter(dev, &bec);
 
 	switch (new_state) {
+	case CAN_STATE_ERROR_ACTIVE:
+		cf->can_id |= CAN_ERR_CRTL | CAN_ERR_CNT;
+		cf->data[1] = CAN_ERR_CRTL_ACTIVE;
+		cf->data[6] = bec.txerr;
+		cf->data[7] = bec.rxerr;
+		break;
 	case CAN_STATE_ERROR_WARNING:
 		/* error warning state */
 		cf->can_id |= CAN_ERR_CRTL | CAN_ERR_CNT;
···
 	return 1;
 }
 
-static int m_can_handle_state_errors(struct net_device *dev, u32 psr)
+static enum can_state
+m_can_state_get_by_psr(struct m_can_classdev *cdev)
+{
+	u32 reg_psr;
+
+	reg_psr = m_can_read(cdev, M_CAN_PSR);
+
+	if (reg_psr & PSR_BO)
+		return CAN_STATE_BUS_OFF;
+	if (reg_psr & PSR_EP)
+		return CAN_STATE_ERROR_PASSIVE;
+	if (reg_psr & PSR_EW)
+		return CAN_STATE_ERROR_WARNING;
+
+	return CAN_STATE_ERROR_ACTIVE;
+}
+
+static int m_can_handle_state_errors(struct net_device *dev)
 {
 	struct m_can_classdev *cdev = netdev_priv(dev);
-	int work_done = 0;
+	enum can_state new_state;
 
-	if (psr & PSR_EW && cdev->can.state != CAN_STATE_ERROR_WARNING) {
-		netdev_dbg(dev, "entered error warning state\n");
-		work_done += m_can_handle_state_change(dev,
-						       CAN_STATE_ERROR_WARNING);
-	}
+	new_state = m_can_state_get_by_psr(cdev);
+	if (new_state == cdev->can.state)
+		return 0;
 
-	if (psr & PSR_EP && cdev->can.state != CAN_STATE_ERROR_PASSIVE) {
-		netdev_dbg(dev, "entered error passive state\n");
-		work_done += m_can_handle_state_change(dev,
-						       CAN_STATE_ERROR_PASSIVE);
-	}
-
-	if (psr & PSR_BO && cdev->can.state != CAN_STATE_BUS_OFF) {
-		netdev_dbg(dev, "entered error bus off state\n");
-		work_done += m_can_handle_state_change(dev,
-						       CAN_STATE_BUS_OFF);
-	}
-
-	return work_done;
+	return m_can_handle_state_change(dev, new_state);
 }
 
 static void m_can_handle_other_err(struct net_device *dev, u32 irqstatus)
···
 	}
 
 	if (irqstatus & IR_ERR_STATE)
-		work_done += m_can_handle_state_errors(dev,
-						       m_can_read(cdev, M_CAN_PSR));
+		work_done += m_can_handle_state_errors(dev);
 
 	if (irqstatus & IR_ERR_BUS_30X)
 		work_done += m_can_handle_bus_errors(dev, irqstatus,
···
 		netdev_queue_set_dql_min_limit(netdev_get_tx_queue(cdev->net, 0),
 					       cdev->tx_max_coalesced_frames);
 
-	cdev->can.state = CAN_STATE_ERROR_ACTIVE;
+	cdev->can.state = m_can_state_get_by_psr(cdev);
 
 	m_can_enable_all_interrupts(cdev);
 
···
 		}
 
 		m_can_clk_stop(cdev);
+		cdev->can.state = CAN_STATE_SLEEPING;
 	}
 
 	pinctrl_pm_select_sleep_state(dev);
-
-	cdev->can.state = CAN_STATE_SLEEPING;
 
 	return ret;
 }
···
 	int ret = 0;
 
 	pinctrl_pm_select_default_state(dev);
-
-	cdev->can.state = CAN_STATE_ERROR_ACTIVE;
 
 	if (netif_running(ndev)) {
 		ret = m_can_clk_start(cdev);
···
 		if (cdev->ops->init)
 			ret = cdev->ops->init(cdev);
 
+		cdev->can.state = m_can_state_get_by_psr(cdev);
+
 		m_can_write(cdev, M_CAN_IE, cdev->active_interrupts);
 	} else {
 		ret = m_can_start(ndev);
···
 }
 EXPORT_SYMBOL_GPL(m_can_class_resume);
 
-MODULE_AUTHOR("Dong Aisheng <b29396@freescale.com>");
+MODULE_AUTHOR("Dong Aisheng <aisheng.dong@nxp.com>");
 MODULE_AUTHOR("Dan Murphy <dmurphy@ti.com>");
 MODULE_LICENSE("GPL v2");
 MODULE_DESCRIPTION("CAN bus driver for Bosch M_CAN controller");
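The new `m_can_state_get_by_psr()` derives the CAN state from the protocol status register by severity: bus-off wins over error-passive, which wins over error-warning. A standalone sketch of that priority mapping (the `PSR_*` masks here follow the M_CAN PSR bit layout but are stand-ins for the driver's definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the driver's PSR_* masks. */
#define PSR_EP (1u << 5)	/* error passive */
#define PSR_EW (1u << 6)	/* error warning */
#define PSR_BO (1u << 7)	/* bus off */

enum can_state_sketch { ERROR_ACTIVE, ERROR_WARNING, ERROR_PASSIVE, BUS_OFF };

/* Mirror of m_can_state_get_by_psr(): the most severe flagged state
 * wins, so the checks run from bus-off down to error-warning. */
static enum can_state_sketch state_from_psr(uint32_t psr)
{
	if (psr & PSR_BO)
		return BUS_OFF;
	if (psr & PSR_EP)
		return ERROR_PASSIVE;
	if (psr & PSR_EW)
		return ERROR_WARNING;
	return ERROR_ACTIVE;
}
```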
···
 #define GS_MAX_RX_URBS 30
 #define GS_NAPI_WEIGHT 32
 
-/* Maximum number of interfaces the driver supports per device.
- * Current hardware only supports 3 interfaces. The future may vary.
- */
-#define GS_MAX_INTF 3
-
 struct gs_tx_context {
 	struct gs_can *dev;
 	unsigned int echo_id;
···
 
 /* usb interface struct */
 struct gs_usb {
-	struct gs_can *canch[GS_MAX_INTF];
 	struct usb_anchor rx_submitted;
 	struct usb_device *udev;
 
···
 
 	unsigned int hf_size_rx;
 	u8 active_channels;
+	u8 channel_cnt;
 
 	unsigned int pipe_in;
 	unsigned int pipe_out;
+	struct gs_can *canch[] __counted_by(channel_cnt);
 };
 
 /* 'allocate' a tx context.
···
 	}
 
 	/* device reports out of range channel id */
-	if (hf->channel >= GS_MAX_INTF)
+	if (hf->channel >= parent->channel_cnt)
 		goto device_detach;
 
 	dev = parent->canch[hf->channel];
···
 	/* USB failure take down all interfaces */
 	if (rc == -ENODEV) {
 device_detach:
-		for (rc = 0; rc < GS_MAX_INTF; rc++) {
+		for (rc = 0; rc < parent->channel_cnt; rc++) {
 			if (parent->canch[rc])
 				netif_device_detach(parent->canch[rc]->netdev);
 		}
···
 
 	netdev->flags |= IFF_ECHO; /* we support full roundtrip echo */
 	netdev->dev_id = channel;
+	netdev->dev_port = channel;
 
 	/* dev setup */
 	strcpy(dev->bt_const.name, KBUILD_MODNAME);
···
 	icount = dconf.icount + 1;
 	dev_info(&intf->dev, "Configuring for %u interfaces\n", icount);
 
-	if (icount > GS_MAX_INTF) {
+	if (icount > type_max(parent->channel_cnt)) {
 		dev_err(&intf->dev,
 			"Driver cannot handle more that %u CAN interfaces\n",
-			GS_MAX_INTF);
+			type_max(parent->channel_cnt));
 		return -EINVAL;
 	}
 
-	parent = kzalloc(sizeof(*parent), GFP_KERNEL);
+	parent = kzalloc(struct_size(parent, canch, icount), GFP_KERNEL);
 	if (!parent)
 		return -ENOMEM;
+
+	parent->channel_cnt = icount;
 
 	init_usb_anchor(&parent->rx_submitted);
 
···
 		return;
 	}
 
-	for (i = 0; i < GS_MAX_INTF; i++)
+	for (i = 0; i < parent->channel_cnt; i++)
 		if (parent->canch[i])
 			gs_destroy_candev(parent->canch[i]);
 
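The shape of the change above: a fixed `canch[GS_MAX_INTF]` array becomes a trailing flexible array sized at probe time, allocated with `struct_size()` and annotated `__counted_by()`. A userspace-style sketch of the same allocation pattern (plain arithmetic stands in for the kernel's overflow-checked `struct_size()`; the struct is a cut-down stand-in, not the driver's):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Cut-down stand-in for struct gs_usb with a flexible array member. */
struct gs_usb_sketch {
	uint8_t active_channels;
	uint8_t channel_cnt;
	void *canch[];		/* __counted_by(channel_cnt) in the kernel */
};

/* Allocate header plus icount trailing elements in one block, as
 * kzalloc(struct_size(parent, canch, icount), ...) does. */
static struct gs_usb_sketch *gs_usb_alloc(uint8_t icount)
{
	struct gs_usb_sketch *parent =
		calloc(1, sizeof(*parent) + icount * sizeof(parent->canch[0]));

	if (parent)
		parent->channel_cnt = icount;
	return parent;
}
```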
+15-1
drivers/net/ethernet/airoha/airoha_eth.c
···
 #endif
 }
 
+static bool airoha_dev_tx_queue_busy(struct airoha_queue *q, u32 nr_frags)
+{
+	u32 tail = q->tail <= q->head ? q->tail + q->ndesc : q->tail;
+	u32 index = q->head + nr_frags;
+
+	/* completion napi can free out-of-order tx descriptors if hw QoS is
+	 * enabled and packets with different priorities are queued to the same
+	 * DMA ring. Take into account possible out-of-order reports checking
+	 * if the tx queue is full using circular buffer head/tail pointers
+	 * instead of the number of queued packets.
+	 */
+	return index >= tail;
+}
+
 static netdev_tx_t airoha_dev_xmit(struct sk_buff *skb,
 				   struct net_device *dev)
 {
···
 	txq = netdev_get_tx_queue(dev, qid);
 	nr_frags = 1 + skb_shinfo(skb)->nr_frags;
 
-	if (q->queued + nr_frags > q->ndesc) {
+	if (airoha_dev_tx_queue_busy(q, nr_frags)) {
 		/* not enough space in the queue */
 		netif_tx_stop_queue(txq);
 		spin_unlock_bh(&q->lock);
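With out-of-order completions, a queued-packet counter no longer tracks ring occupancy, so the new helper derives fullness purely from the ring's head/tail indices. A standalone sketch of the same check, with the indices passed as plain parameters instead of a queue struct:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of airoha_dev_tx_queue_busy(): unwrap the tail index past
 * the head when it has wrapped, then test whether head plus the
 * descriptors needed for this skb would reach it. */
static int tx_queue_busy(uint32_t head, uint32_t tail, uint32_t ndesc,
			 uint32_t nr_frags)
{
	uint32_t t = tail <= head ? tail + ndesc : tail;

	return head + nr_frags >= t;
}
```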
···
  */
 #define GVE_DQO_QPL_ONDEMAND_ALLOC_THRESHOLD 96
 
+#define GVE_DQO_RX_HWTSTAMP_VALID 0x1
+
 /* Each slot in the desc ring has a 1:1 mapping to a slot in the data ring */
 struct gve_rx_desc_queue {
 	struct gve_rx_desc *desc_ring; /* the descriptor ring */
+2-1
drivers/net/ethernet/google/gve/gve_desc_dqo.h
···
 
 	u8 status_error1;
 
-	__le16 reserved5;
+	u8 reserved5;
+	u8 ts_sub_nsecs_low;
 	__le16 buf_id; /* Buffer ID which was sent on the buffer queue. */
 
 	union {
+11-5
drivers/net/ethernet/google/gve/gve_rx_dqo.c
···
  * Note that this means if the time delta between packet reception and the last
  * clock read is greater than ~2 seconds, this will provide invalid results.
  */
-static void gve_rx_skb_hwtstamp(struct gve_rx_ring *rx, u32 hwts)
+static void gve_rx_skb_hwtstamp(struct gve_rx_ring *rx,
+				const struct gve_rx_compl_desc_dqo *desc)
 {
 	u64 last_read = READ_ONCE(rx->gve->last_sync_nic_counter);
 	struct sk_buff *skb = rx->ctx.skb_head;
-	u32 low = (u32)last_read;
-	s32 diff = hwts - low;
+	u32 ts, low;
+	s32 diff;
 
-	skb_hwtstamps(skb)->hwtstamp = ns_to_ktime(last_read + diff);
+	if (desc->ts_sub_nsecs_low & GVE_DQO_RX_HWTSTAMP_VALID) {
+		ts = le32_to_cpu(desc->ts);
+		low = (u32)last_read;
+		diff = ts - low;
+		skb_hwtstamps(skb)->hwtstamp = ns_to_ktime(last_read + diff);
+	}
 }
 
 static void gve_rx_free_skb(struct napi_struct *napi, struct gve_rx_ring *rx)
···
 	gve_rx_skb_csum(rx->ctx.skb_head, desc, ptype);
 
 	if (rx->gve->ts_config.rx_filter == HWTSTAMP_FILTER_ALL)
-		gve_rx_skb_hwtstamp(rx, desc);
+		gve_rx_skb_hwtstamp(rx, desc);
 
 	/* RSC packets must set gso_size otherwise the TCP stack will complain
 	 * that packets are larger than MTU.
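The timestamp arithmetic kept by this change extends a 32-bit hardware counter to 64 bits: the signed difference between the NIC's low 32 bits and the low 32 bits of the last full counter read handles wraparound, as long as the delta stays within the signed 32-bit range (~2 s of nanoseconds, per the comment above). A standalone sketch of that extension:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of gve_rx_skb_hwtstamp()'s arithmetic: a signed 32-bit
 * difference against the low word of the last 64-bit counter read
 * extends the hardware's truncated timestamp, in either direction. */
static uint64_t extend_hw_timestamp(uint64_t last_read, uint32_t hwts)
{
	int32_t diff = (int32_t)(hwts - (uint32_t)last_read);

	return last_read + (int64_t)diff;
}
```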
+3
drivers/net/ethernet/intel/idpf/idpf_ptp.c
···
 		u64_stats_inc(&vport->tstamp_stats.flushed);
 
 		list_del(&ptp_tx_tstamp->list_member);
+		if (ptp_tx_tstamp->skb)
+			consume_skb(ptp_tx_tstamp->skb);
+
 		kfree(ptp_tx_tstamp);
 	}
 	u64_stats_update_end(&vport->tstamp_stats.stats_sync);
···
 	ixgbe_mbox_api_12,	/* API version 1.2, linux/freebsd VF driver */
 	ixgbe_mbox_api_13,	/* API version 1.3, linux/freebsd VF driver */
 	ixgbe_mbox_api_14,	/* API version 1.4, linux/freebsd VF driver */
+	ixgbe_mbox_api_15,	/* API version 1.5, linux/freebsd VF driver */
+	ixgbe_mbox_api_16,	/* API version 1.6, linux/freebsd VF driver */
+	ixgbe_mbox_api_17,	/* API version 1.7, linux/freebsd VF driver */
 	/* This value should always be last */
 	ixgbe_mbox_api_unknown,	/* indicates that API version is not known */
 };
···
 
 #define IXGBE_VF_GET_LINK_STATE 0x10 /* get vf link state */
 
+/* mailbox API, version 1.6 VF requests */
+#define IXGBE_VF_GET_PF_LINK_STATE	0x11 /* request PF to send link info */
+
+/* mailbox API, version 1.7 VF requests */
+#define IXGBE_VF_FEATURES_NEGOTIATE	0x12 /* get features supported by PF */
+
 /* length of permanent address message returned from PF */
 #define IXGBE_VF_PERMADDR_MSG_LEN 4
 /* word in permanent address message with the current multicast type */
···
 #define IXGBE_VF_MBX_INIT_TIMEOUT	2000 /* number of retries on mailbox */
 #define IXGBE_VF_MBX_INIT_DELAY		500  /* microseconds between retries */
 
+/* features negotiated between PF/VF */
+#define IXGBEVF_PF_SUP_IPSEC		BIT(0)
+#define IXGBEVF_PF_SUP_ESX_MBX		BIT(1)
+
+#define IXGBE_SUPPORTED_FEATURES	IXGBEVF_PF_SUP_IPSEC
 
 struct ixgbe_hw;
 
+79
drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
···
 	case ixgbe_mbox_api_12:
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_14:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
 		/* Version 1.1 supports jumbo frames on VFs if PF has
 		 * jumbo frames enabled which means legacy VFs are
 		 * disabled
···
 	case ixgbe_mbox_api_12:
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_14:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
 		adapter->vfinfo[vf].vf_api = api;
 		return 0;
 	default:
···
 	case ixgbe_mbox_api_12:
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_14:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
 		break;
 	default:
 		return -1;
···
 
 	/* verify the PF is supporting the correct API */
 	switch (adapter->vfinfo[vf].vf_api) {
+	case ixgbe_mbox_api_17:
+	case ixgbe_mbox_api_16:
 	case ixgbe_mbox_api_14:
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_12:
···
 
 	/* verify the PF is supporting the correct API */
 	switch (adapter->vfinfo[vf].vf_api) {
+	case ixgbe_mbox_api_17:
+	case ixgbe_mbox_api_16:
 	case ixgbe_mbox_api_14:
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_12:
···
 		fallthrough;
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_14:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
 		break;
 	default:
 		return -EOPNOTSUPP;
···
 	case ixgbe_mbox_api_12:
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_14:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
 		break;
 	default:
 		return -EOPNOTSUPP;
 	}
 
 	*link_state = adapter->vfinfo[vf].link_enable;
+
+	return 0;
+}
+
+/**
+ * ixgbe_send_vf_link_status - send link status data to VF
+ * @adapter: pointer to adapter struct
+ * @msgbuf: pointer to message buffers
+ * @vf: VF identifier
+ *
+ * Reply for IXGBE_VF_GET_PF_LINK_STATE mbox command sending link status data.
+ *
+ * Return: 0 on success or -EOPNOTSUPP when operation is not supported.
+ */
+static int ixgbe_send_vf_link_status(struct ixgbe_adapter *adapter,
+				     u32 *msgbuf, u32 vf)
+{
+	struct ixgbe_hw *hw = &adapter->hw;
+
+	switch (adapter->vfinfo[vf].vf_api) {
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
+		if (hw->mac.type != ixgbe_mac_e610)
+			return -EOPNOTSUPP;
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+	/* Simply provide stored values as watchdog & link status events take
+	 * care of its freshness.
+	 */
+	msgbuf[1] = adapter->link_speed;
+	msgbuf[2] = adapter->link_up;
+
+	return 0;
+}
+
+/**
+ * ixgbe_negotiate_vf_features - negotiate supported features with VF driver
+ * @adapter: pointer to adapter struct
+ * @msgbuf: pointer to message buffers
+ * @vf: VF identifier
+ *
+ * Return: 0 on success or -EOPNOTSUPP when operation is not supported.
+ */
+static int ixgbe_negotiate_vf_features(struct ixgbe_adapter *adapter,
+				       u32 *msgbuf, u32 vf)
+{
+	u32 features = msgbuf[1];
+
+	switch (adapter->vfinfo[vf].vf_api) {
+	case ixgbe_mbox_api_17:
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	features &= IXGBE_SUPPORTED_FEATURES;
+	msgbuf[1] = features;
 
 	return 0;
 }
···
 		break;
 	case IXGBE_VF_IPSEC_DEL:
 		retval = ixgbe_ipsec_vf_del_sa(adapter, msgbuf, vf);
+		break;
+	case IXGBE_VF_GET_PF_LINK_STATE:
+		retval = ixgbe_send_vf_link_status(adapter, msgbuf, vf);
+		break;
+	case IXGBE_VF_FEATURES_NEGOTIATE:
+		retval = ixgbe_negotiate_vf_features(adapter, msgbuf, vf);
 		break;
 	default:
 		e_err(drv, "Unhandled Msg %8.8x\n", msgbuf[0]);
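The negotiation added above is an intersection: the PF masks the VF's requested feature bitmap with its own supported set and echoes the result back, so both ends converge on the same features. A standalone sketch (the `PF_SUP_*` values mirror the `IXGBEVF_PF_SUP_*` bits added in this series but are local stand-ins):

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-ins for IXGBEVF_PF_SUP_* / IXGBE_SUPPORTED_FEATURES. */
#define PF_SUP_IPSEC	(1u << 0)
#define PF_SUP_ESX_MBX	(1u << 1)
#define PF_SUPPORTED	PF_SUP_IPSEC

/* Mirror of ixgbe_negotiate_vf_features(): reply with the
 * intersection of what the VF asked for and what the PF supports. */
static uint32_t negotiate_features(uint32_t vf_requested)
{
	return vf_requested & PF_SUPPORTED;
}
```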
···
 	adapter->stats.base_vfmprc = adapter->stats.last_vfmprc;
 }
 
+/**
+ * ixgbevf_set_features - Set features supported by PF
+ * @adapter: pointer to the adapter struct
+ *
+ * Negotiate with PF supported features and then set pf_features accordingly.
+ */
+static void ixgbevf_set_features(struct ixgbevf_adapter *adapter)
+{
+	u32 *pf_features = &adapter->pf_features;
+	struct ixgbe_hw *hw = &adapter->hw;
+	int err;
+
+	err = hw->mac.ops.negotiate_features(hw, pf_features);
+	if (err && err != -EOPNOTSUPP)
+		netdev_dbg(adapter->netdev,
+			   "PF feature negotiation failed.\n");
+
+	/* Address also pre API 1.7 cases */
+	if (hw->api_version == ixgbe_mbox_api_14)
+		*pf_features |= IXGBEVF_PF_SUP_IPSEC;
+	else if (hw->api_version == ixgbe_mbox_api_15)
+		*pf_features |= IXGBEVF_PF_SUP_ESX_MBX;
+}
+
 static void ixgbevf_negotiate_api(struct ixgbevf_adapter *adapter)
 {
 	struct ixgbe_hw *hw = &adapter->hw;
 	static const int api[] = {
+		ixgbe_mbox_api_17,
+		ixgbe_mbox_api_16,
 		ixgbe_mbox_api_15,
 		ixgbe_mbox_api_14,
 		ixgbe_mbox_api_13,
···
 		idx++;
 	}
 
-	if (hw->api_version >= ixgbe_mbox_api_15) {
+	ixgbevf_set_features(adapter);
+
+	if (adapter->pf_features & IXGBEVF_PF_SUP_ESX_MBX) {
 		hw->mbx.ops.init_params(hw);
 		memcpy(&hw->mbx.ops, &ixgbevf_mbx_ops,
 		       sizeof(struct ixgbe_mbx_operations));
···
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_14:
 	case ixgbe_mbox_api_15:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
 		if (adapter->xdp_prog &&
 		    hw->mac.max_tx_queues == rss)
 			rss = rss > 3 ? 2 : 1;
···
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_14:
 	case ixgbe_mbox_api_15:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
 		netdev->max_mtu = IXGBE_MAX_JUMBO_FRAME_SIZE -
 				  (ETH_HLEN + ETH_FCS_LEN);
 		break;
+8
drivers/net/ethernet/intel/ixgbevf/mbx.h
···
 	ixgbe_mbox_api_13,	/* API version 1.3, linux/freebsd VF driver */
 	ixgbe_mbox_api_14,	/* API version 1.4, linux/freebsd VF driver */
 	ixgbe_mbox_api_15,	/* API version 1.5, linux/freebsd VF driver */
+	ixgbe_mbox_api_16,	/* API version 1.6, linux/freebsd VF driver */
+	ixgbe_mbox_api_17,	/* API version 1.7, linux/freebsd VF driver */
 	/* This value should always be last */
 	ixgbe_mbox_api_unknown,	/* indicates that API version is not known */
 };
···
 #define IXGBE_VF_IPSEC_DEL	0x0e
 
 #define IXGBE_VF_GET_LINK_STATE 0x10 /* get vf link state */
+
+/* mailbox API, version 1.6 VF requests */
+#define IXGBE_VF_GET_PF_LINK_STATE	0x11 /* request PF to send link info */
+
+/* mailbox API, version 1.7 VF requests */
+#define IXGBE_VF_FEATURES_NEGOTIATE	0x12 /* get features supported by PF */
 
 /* length of permanent address message returned from PF */
 #define IXGBE_VF_PERMADDR_MSG_LEN 4
+150-32
drivers/net/ethernet/intel/ixgbevf/vf.c
···
	 * is not supported for this device type.
	 */
	switch (hw->api_version) {
+	case ixgbe_mbox_api_17:
+	case ixgbe_mbox_api_16:
	case ixgbe_mbox_api_15:
	case ixgbe_mbox_api_14:
	case ixgbe_mbox_api_13:
···
	 * or if the operation is not supported for this device type.
	 */
	switch (hw->api_version) {
+	case ixgbe_mbox_api_17:
+	case ixgbe_mbox_api_16:
	case ixgbe_mbox_api_15:
	case ixgbe_mbox_api_14:
	case ixgbe_mbox_api_13:
···
	case ixgbe_mbox_api_13:
	case ixgbe_mbox_api_14:
	case ixgbe_mbox_api_15:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
		break;
	default:
		return -EOPNOTSUPP;
···
 }
 
 /**
+ * ixgbevf_get_pf_link_state - Get PF's link status
+ * @hw: pointer to the HW structure
+ * @speed: link speed
+ * @link_up: indicate if link is up/down
+ *
+ * Ask PF to provide link_up state and speed of the link.
+ *
+ * Return: IXGBE_ERR_MBX in the case of mailbox error,
+ * -EOPNOTSUPP if the op is not supported or 0 on success.
+ */
+static int ixgbevf_get_pf_link_state(struct ixgbe_hw *hw, ixgbe_link_speed *speed,
+				     bool *link_up)
+{
+	u32 msgbuf[3] = {};
+	int err;
+
+	switch (hw->api_version) {
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	msgbuf[0] = IXGBE_VF_GET_PF_LINK_STATE;
+
+	err = ixgbevf_write_msg_read_ack(hw, msgbuf, msgbuf,
+					 ARRAY_SIZE(msgbuf));
+	if (err || (msgbuf[0] & IXGBE_VT_MSGTYPE_FAILURE)) {
+		err = IXGBE_ERR_MBX;
+		*speed = IXGBE_LINK_SPEED_UNKNOWN;
+		/* No need to set @link_up to false as it will be done by
+		 * ixgbe_check_mac_link_vf().
+		 */
+	} else {
+		*speed = msgbuf[1];
+		*link_up = msgbuf[2];
+	}
+
+	return err;
+}
+
+/**
+ * ixgbevf_negotiate_features_vf - negotiate supported features with PF driver
+ * @hw: pointer to the HW structure
+ * @pf_features: bitmask of features supported by PF
+ *
+ * Return: IXGBE_ERR_MBX in the case of mailbox error,
+ * -EOPNOTSUPP if the op is not supported or 0 on success.
+ */
+static int ixgbevf_negotiate_features_vf(struct ixgbe_hw *hw, u32 *pf_features)
+{
+	u32 msgbuf[2] = {};
+	int err;
+
+	switch (hw->api_version) {
+	case ixgbe_mbox_api_17:
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	msgbuf[0] = IXGBE_VF_FEATURES_NEGOTIATE;
+	msgbuf[1] = IXGBEVF_SUPPORTED_FEATURES;
+
+	err = ixgbevf_write_msg_read_ack(hw, msgbuf, msgbuf,
+					 ARRAY_SIZE(msgbuf));
+
+	if (err || (msgbuf[0] & IXGBE_VT_MSGTYPE_FAILURE)) {
+		err = IXGBE_ERR_MBX;
+		*pf_features = 0x0;
+	} else {
+		*pf_features = msgbuf[1];
+	}
+
+	return err;
+}
+
+/**
  * ixgbevf_set_vfta_vf - Set/Unset VLAN filter table address
  * @hw: pointer to the HW structure
  * @vlan: 12 bit VLAN ID
···
 
 mbx_err:
	return err;
+}
+
+/**
+ * ixgbe_read_vflinks - Read VFLINKS register
+ * @hw: pointer to the HW structure
+ * @speed: link speed
+ * @link_up: indicate if link is up/down
+ *
+ * Get linkup status and link speed from the VFLINKS register.
+ */
+static void ixgbe_read_vflinks(struct ixgbe_hw *hw, ixgbe_link_speed *speed,
+			       bool *link_up)
+{
+	u32 vflinks = IXGBE_READ_REG(hw, IXGBE_VFLINKS);
+
+	/* if link status is down no point in checking to see if PF is up */
+	if (!(vflinks & IXGBE_LINKS_UP)) {
+		*link_up = false;
+		return;
+	}
+
+	/* for SFP+ modules and DA cables on 82599 it can take up to 500usecs
+	 * before the link status is correct
+	 */
+	if (hw->mac.type == ixgbe_mac_82599_vf) {
+		for (int i = 0; i < 5; i++) {
+			udelay(100);
+			vflinks = IXGBE_READ_REG(hw, IXGBE_VFLINKS);
+
+			if (!(vflinks & IXGBE_LINKS_UP)) {
+				*link_up = false;
+				return;
+			}
+		}
+	}
+
+	/* We reached this point so there's link */
+	*link_up = true;
+
+	switch (vflinks & IXGBE_LINKS_SPEED_82599) {
+	case IXGBE_LINKS_SPEED_10G_82599:
+		*speed = IXGBE_LINK_SPEED_10GB_FULL;
+		break;
+	case IXGBE_LINKS_SPEED_1G_82599:
+		*speed = IXGBE_LINK_SPEED_1GB_FULL;
+		break;
+	case IXGBE_LINKS_SPEED_100_82599:
+		*speed = IXGBE_LINK_SPEED_100_FULL;
+		break;
+	default:
+		*speed = IXGBE_LINK_SPEED_UNKNOWN;
+	}
 }
 
 /**
···
			       bool *link_up,
			       bool autoneg_wait_to_complete)
 {
+	struct ixgbevf_adapter *adapter = hw->back;
	struct ixgbe_mbx_info *mbx = &hw->mbx;
	struct ixgbe_mac_info *mac = &hw->mac;
	s32 ret_val = 0;
-	u32 links_reg;
	u32 in_msg = 0;
 
	/* If we were hit with a reset drop the link */
···
	if (!mac->get_link_status)
		goto out;
 
-	/* if link status is down no point in checking to see if pf is up */
-	links_reg = IXGBE_READ_REG(hw, IXGBE_VFLINKS);
-	if (!(links_reg & IXGBE_LINKS_UP))
-		goto out;
-
-	/* for SFP+ modules and DA cables on 82599 it can take up to 500usecs
-	 * before the link status is correct
-	 */
-	if (mac->type == ixgbe_mac_82599_vf) {
-		int i;
-
-		for (i = 0; i < 5; i++) {
-			udelay(100);
-			links_reg = IXGBE_READ_REG(hw, IXGBE_VFLINKS);
-
-			if (!(links_reg & IXGBE_LINKS_UP))
-				goto out;
-		}
-	}
-
-	switch (links_reg & IXGBE_LINKS_SPEED_82599) {
-	case IXGBE_LINKS_SPEED_10G_82599:
-		*speed = IXGBE_LINK_SPEED_10GB_FULL;
-		break;
-	case IXGBE_LINKS_SPEED_1G_82599:
-		*speed = IXGBE_LINK_SPEED_1GB_FULL;
-		break;
-	case IXGBE_LINKS_SPEED_100_82599:
-		*speed = IXGBE_LINK_SPEED_100_FULL;
-		break;
+	if (hw->mac.type == ixgbe_mac_e610_vf) {
+		ret_val = ixgbevf_get_pf_link_state(hw, speed, link_up);
+		if (ret_val)
+			goto out;
+	} else {
+		ixgbe_read_vflinks(hw, speed, link_up);
+		if (*link_up == false)
+			goto out;
	}
 
	/* if the read failed it could just be a mailbox collision, best wait
	 * until we are called again and don't report an error
	 */
	if (mbx->ops.read(hw, &in_msg, 1)) {
-		if (hw->api_version >= ixgbe_mbox_api_15)
+		if (adapter->pf_features & IXGBEVF_PF_SUP_ESX_MBX)
			mac->get_link_status = false;
		goto out;
	}
···
	case ixgbe_mbox_api_13:
	case ixgbe_mbox_api_14:
	case ixgbe_mbox_api_15:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
		break;
	default:
		return 0;
···
	.setup_link = ixgbevf_setup_mac_link_vf,
	.check_link = ixgbevf_check_mac_link_vf,
	.negotiate_api_version = ixgbevf_negotiate_api_version_vf,
+	.negotiate_features = ixgbevf_negotiate_features_vf,
	.set_rar = ixgbevf_set_rar_vf,
	.update_mc_addr_list = ixgbevf_update_mc_addr_list_vf,
	.update_xcast_mode = ixgbevf_update_xcast_mode,
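The new `ixgbevf_negotiate_features_vf()` in the hunk above follows a common mailbox pattern: the VF offers a bitmask of the features it supports, keeps only the bits the PF echoes back, and assumes no features at all on any mailbox failure. A minimal userspace sketch of that intersection logic (the names and bit values here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

#define VF_SUP_FEATURE_A	(1u << 0)
#define VF_SUP_FEATURE_B	(1u << 1)
#define VF_SUPPORTED_FEATURES	(VF_SUP_FEATURE_A | VF_SUP_FEATURE_B)

/* Simulated PF reply: the PF answers with the subset of the offered
 * bits that it also supports. */
static uint32_t pf_reply(uint32_t offered, uint32_t pf_supported)
{
	return offered & pf_supported;
}

/* On any mailbox failure the VF must assume no features at all,
 * mirroring the "*pf_features = 0x0" error path above. */
static uint32_t negotiate_features(uint32_t pf_supported, int mbx_error)
{
	uint32_t offered = VF_SUPPORTED_FEATURES;

	if (mbx_error)
		return 0;
	return pf_reply(offered, pf_supported);
}
```

Because the result is always a subset of what both sides advertise, neither end can be tricked into enabling a feature the other does not understand.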
···
	    !is_cgx_mapped_to_nix(pdev->subsystem_device, cgx->cgx_id)) {
		dev_notice(dev, "CGX %d not mapped to NIX, skipping probe\n",
			   cgx->cgx_id);
+		err = -ENODEV;
		goto err_release_regions;
	}
 
+6-2
drivers/net/ethernet/mediatek/mtk_wed.c
···
	void *buf;
	int s;
 
-	page = __dev_alloc_page(GFP_KERNEL);
+	page = __dev_alloc_page(GFP_KERNEL | GFP_DMA32);
	if (!page)
		return -ENOMEM;
···
	struct page *page;
	int s;
 
-	page = __dev_alloc_page(GFP_KERNEL);
+	page = __dev_alloc_page(GFP_KERNEL | GFP_DMA32);
	if (!page)
		return -ENOMEM;
···
	dev->wdma_idx = hw->index;
	dev->version = hw->version;
	dev->hw->pcie_base = mtk_wed_get_pcie_base(dev);
+
+	ret = dma_set_mask_and_coherent(hw->dev, DMA_BIT_MASK(32));
+	if (ret)
+		goto out;
 
	if (hw->eth->dma_dev == hw->eth->dev &&
	    of_dma_is_coherent(hw->eth->dev->of_node))
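The mtk_wed hunk above constrains all buffers to 32-bit DMA addresses (`GFP_DMA32` plus a 32-bit DMA mask), because the WED hardware cannot address memory above 4 GiB. The mask test the DMA API performs is just a bitwise containment check; a small sketch of it:

```c
#include <assert.h>
#include <stdint.h>

#define DMA_BIT_MASK_32 ((1ULL << 32) - 1)

/* A device can reach an address only if no bit outside its DMA mask is
 * set, i.e. the address fits entirely inside the mask. */
static int dma_addr_ok(uint64_t addr, uint64_t mask)
{
	return (addr & ~mask) == 0;
}
```

Any buffer placed at or above `1ULL << 32` fails this test, which is why the allocation side must also request `GFP_DMA32` so the pages come from the low region in the first place.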
+3-2
drivers/net/ethernet/realtek/r8169_main.c
···
	if (!device_may_wakeup(tp_to_dev(tp)))
		clk_prepare_enable(tp->clk);
 
-	/* Reportedly at least Asus X453MA truncates packets otherwise */
-	if (tp->mac_version == RTL_GIGA_MAC_VER_37)
+	/* Some chip versions may truncate packets without this initialization */
+	if (tp->mac_version == RTL_GIGA_MAC_VER_37 ||
+	    tp->mac_version == RTL_GIGA_MAC_VER_46)
		rtl_init_rxcfg(tp);
 
	return rtl8169_runtime_resume(device);
···
 static int bcm54811_config_init(struct phy_device *phydev)
 {
	struct bcm54xx_phy_priv *priv = phydev->priv;
-	int err, reg, exp_sync_ethernet;
+	int err, reg, exp_sync_ethernet, aux_rgmii_en;
 
	/* Enable CLK125 MUX on LED4 if ref clock is enabled. */
	if (!(phydev->dev_flags & PHY_BRCM_RX_REFCLK_UNUSED)) {
···
	err = bcm_phy_modify_exp(phydev, BCM_EXP_SYNC_ETHERNET,
				 BCM_EXP_SYNC_ETHERNET_MII_LITE,
				 exp_sync_ethernet);
+	if (err < 0)
+		return err;
+
+	/* Enable RGMII if configured */
+	if (phy_interface_is_rgmii(phydev))
+		aux_rgmii_en = MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_EN |
+			       MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_SKEW_EN;
+	else
+		aux_rgmii_en = 0;
+
+	/* Also writing Reserved bits 6:5 because the documentation requires
+	 * them to be written to 0b11
+	 */
+	err = bcm54xx_auxctl_write(phydev,
+				   MII_BCM54XX_AUXCTL_SHDWSEL_MISC,
+				   MII_BCM54XX_AUXCTL_MISC_WREN |
+				   aux_rgmii_en |
+				   MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RSVD);
	if (err < 0)
		return err;
 
+11-12
drivers/net/phy/realtek/realtek_main.c
···
			   str_enabled_disabled(val_rxdly));
	}
 
+	if (!priv->has_phycr2)
+		return 0;
+
	/* Disable PHY-mode EEE so LPI is passed to the MAC */
	ret = phy_modify_paged(phydev, RTL8211F_PHYCR_PAGE, RTL8211F_PHYCR2,
			       RTL8211F_PHYCR2_PHY_EEE_ENABLE, 0);
	if (ret)
		return ret;
 
-	if (priv->has_phycr2) {
-		ret = phy_modify_paged(phydev, RTL8211F_PHYCR_PAGE,
-				       RTL8211F_PHYCR2, RTL8211F_CLKOUT_EN,
-				       priv->phycr2);
-		if (ret < 0) {
-			dev_err(dev, "clkout configuration failed: %pe\n",
-				ERR_PTR(ret));
-			return ret;
-		}
-
-		return genphy_soft_reset(phydev);
+	ret = phy_modify_paged(phydev, RTL8211F_PHYCR_PAGE,
+			       RTL8211F_PHYCR2, RTL8211F_CLKOUT_EN,
+			       priv->phycr2);
+	if (ret < 0) {
+		dev_err(dev, "clkout configuration failed: %pe\n",
+			ERR_PTR(ret));
+		return ret;
	}
 
-	return 0;
+	return genphy_soft_reset(phydev);
 }
 
 static int rtl821x_suspend(struct phy_device *phydev)
+11-8
drivers/net/usb/lan78xx.c
···
	}
 
 write_raw_eeprom_done:
-	if (dev->chipid == ID_REV_CHIP_ID_7800_)
-		return lan78xx_write_reg(dev, HW_CFG, saved);
-
-	return 0;
+	if (dev->chipid == ID_REV_CHIP_ID_7800_) {
+		int rc = lan78xx_write_reg(dev, HW_CFG, saved);
+		/* If USB fails, there is nothing to do */
+		if (rc < 0)
+			return rc;
+	}
+	return ret;
 }
 
 static int lan78xx_read_raw_otp(struct lan78xx_net *dev, u32 offset,
···
		}
	} while (buf & HW_CFG_LRST_);
 
-	ret = lan78xx_init_mac_address(dev);
-	if (ret < 0)
-		return ret;
-
	/* save DEVID for later usage */
	ret = lan78xx_read_reg(dev, ID_REV, &buf);
	if (ret < 0)
···
 
	dev->chipid = (buf & ID_REV_CHIP_ID_MASK_) >> 16;
	dev->chiprev = buf & ID_REV_CHIP_REV_MASK_;
+
+	ret = lan78xx_init_mac_address(dev);
+	if (ret < 0)
+		return ret;
 
	/* Respond to the IN token with a NAK */
	ret = lan78xx_read_reg(dev, USB_CFG0, &buf);
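The `write_raw_eeprom_done` change above is a first-error-wins fix: the cleanup write's status is returned only when the cleanup itself fails, and otherwise the earlier `ret` survives instead of being clobbered by the cleanup's success. The rule can be reduced to a tiny pure function:

```c
#include <assert.h>

/* Cleanup must not overwrite an earlier failure with its own success:
 * return the cleanup error if the cleanup failed, else the original
 * status. (Sketch of the lan78xx_write_raw_eeprom epilogue.) */
static int finish(int ret, int need_cleanup, int cleanup_rc)
{
	if (need_cleanup && cleanup_rc < 0)
		return cleanup_rc;	/* cleanup failure takes priority */
	return ret;			/* original status is preserved */
}
```

Before the fix, a failed EEPROM operation followed by a successful `HW_CFG` restore returned 0, silently hiding the failure from the caller.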
+6-1
drivers/net/usb/r8152.c
···
	ret = usb_register_device_driver(&rtl8152_cfgselector_driver, THIS_MODULE);
	if (ret)
		return ret;
-	return usb_register(&rtl8152_driver);
+
+	ret = usb_register(&rtl8152_driver);
+	if (ret)
+		usb_deregister_device_driver(&rtl8152_cfgselector_driver);
+
+	return ret;
 }
 
 static void __exit rtl8152_driver_exit(void)
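The r8152 init hunk fixes a classic two-stage registration bug: when the second registration fails, the first one was previously left registered. The shape of the fix is the standard unwind-on-error pattern, sketched here with stand-in register/unregister functions (the names are hypothetical, not the USB core's API):

```c
#include <assert.h>

static int registered_a, registered_b;

static int register_a(void) { registered_a = 1; return 0; }
static void unregister_a(void) { registered_a = 0; }

static int register_b(int fail)
{
	if (fail)
		return -1;
	registered_b = 1;
	return 0;
}

/* Mirrors the fixed init path: if B fails, unwind A before returning. */
static int driver_init(int fail_b)
{
	int ret;

	registered_a = registered_b = 0;	/* reset for the sketch */

	ret = register_a();
	if (ret)
		return ret;

	ret = register_b(fail_b);
	if (ret)
		unregister_a();	/* the missing unwind the patch adds */
	return ret;
}
```

Without the unwind, a failed `driver_init()` would leave the first driver registered with no module owner left to unregister it.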
···
	queue = sk->sk_user_data;
	if (likely(queue && sk_stream_is_writeable(sk))) {
		clear_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
+		/* Ensure pending TLS partial records are retried */
+		if (nvme_tcp_queue_tls(queue))
+			queue->write_space(sk);
		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
	}
	read_unlock_bh(&sk->sk_callback_lock);
+1
drivers/pci/Kconfig
···
	bool "VGA Arbitration" if EXPERT
	default y
	depends on (PCI && !S390)
+	select SCREEN_INFO if X86
	help
	  Some "legacy" VGA devices implemented on PCI typically have the same
	  hard-decoded addresses as they did on ISA. When multiple PCI devices
+1-1
drivers/pci/controller/cadence/pcie-cadence-ep.c
···
	u16 flags, mme;
	u8 cap;
 
-	cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSIX);
+	cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSI);
	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
 
	/* Validate that the MSI feature is actually enabled. */
···
 {
	const u64 ra_pos = readahead_pos(ractl);
	const u64 ra_end = ra_pos + readahead_length(ractl);
-	const u64 em_end = em->start + em->ram_bytes;
+	const u64 em_end = em->start + em->len;
 
	/* No expansion for holes and inline extents. */
	if (em->disk_bytenr > EXTENT_MAP_LAST_BYTE)
+8-7
fs/btrfs/free-space-tree.c
···
	 * If ret is 1 (no key found), it means this is an empty block group,
	 * without any extents allocated from it and there's no block group
	 * item (key BTRFS_BLOCK_GROUP_ITEM_KEY) located in the extent tree
-	 * because we are using the block group tree feature, so block group
-	 * items are stored in the block group tree. It also means there are no
-	 * extents allocated for block groups with a start offset beyond this
-	 * block group's end offset (this is the last, highest, block group).
+	 * because we are using the block group tree feature (so block group
+	 * items are stored in the block group tree) or this is a new block
+	 * group created in the current transaction and its block group item
+	 * was not yet inserted in the extent tree (that happens in
+	 * btrfs_create_pending_block_groups() -> insert_block_group_item()).
+	 * It also means there are no extents allocated for block groups with a
+	 * start offset beyond this block group's end offset (this is the last,
+	 * highest, block group).
	 */
-	if (!btrfs_fs_compat_ro(trans->fs_info, BLOCK_GROUP_TREE))
-		ASSERT(ret == 0);
-
	start = block_group->start;
	end = block_group->start + block_group->length;
	while (ret == 0) {
···
 /*
  * Mark start of chunk relocation that is cancellable. Check if the cancellation
  * has been requested meanwhile and don't start in that case.
+ * NOTE: if this returns an error, reloc_chunk_end() must not be called.
  *
  * Return:
  * 0             success
···
 
	if (atomic_read(&fs_info->reloc_cancel_req) > 0) {
		btrfs_info(fs_info, "chunk relocation canceled on start");
-		/*
-		 * On cancel, clear all requests but let the caller mark
-		 * the end after cleanup operations.
-		 */
+		/* On cancel, clear all requests. */
+		clear_and_wake_up_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags);
		atomic_set(&fs_info->reloc_cancel_req, 0);
		return -ECANCELED;
	}
···
 
 /*
  * Mark end of chunk relocation that is cancellable and wake any waiters.
+ * NOTE: call only if a previous call to reloc_chunk_start() succeeded.
  */
 static void reloc_chunk_end(struct btrfs_fs_info *fs_info)
 {
+	ASSERT(test_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags));
	/* Requested after start, clear bit first so any waiters can continue */
	if (atomic_read(&fs_info->reloc_cancel_req) > 0)
		btrfs_info(fs_info, "chunk relocation canceled during operation");
···
	if (err && rw)
		btrfs_dec_block_group_ro(rc->block_group);
	iput(rc->data_inode);
+	reloc_chunk_end(fs_info);
 out_put_bg:
	btrfs_put_block_group(bg);
-	reloc_chunk_end(fs_info);
	free_reloc_control(rc);
	return err;
 }
···
		ret = ret2;
 out_unset:
	unset_reloc_control(rc);
-out_end:
	reloc_chunk_end(fs_info);
+out_end:
	free_reloc_control(rc);
 out:
	free_reloc_roots(&reloc_roots);
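The relocation fix above changes the start/end contract: a failed `reloc_chunk_start()` now undoes its own state (clearing the RUNNING bit), and the caller must skip `reloc_chunk_end()`, which in turn asserts the bit is set. A reduced model of that ownership rule, with the flag as a plain bool:

```c
#include <assert.h>
#include <stdbool.h>

static bool running;	/* stands in for BTRFS_FS_RELOC_RUNNING */

/* Returns 0 on success. On cancellation it cleans up its own state and
 * the caller must NOT call chunk_end() (mirrors -ECANCELED path). */
static int chunk_start(bool cancel_requested)
{
	running = true;
	if (cancel_requested) {
		running = false;	/* start now cleans up after itself */
		return -1;
	}
	return 0;
}

/* Only legal after a successful chunk_start(), like the new ASSERT in
 * reloc_chunk_end(). */
static void chunk_end(void)
{
	assert(running);
	running = false;
}
```

The old contract ("caller marks the end even after a canceled start") made it easy to call end without a matching successful start; pushing cleanup into the failing start removes that asymmetry.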
+2-2
fs/btrfs/scrub.c
···
 
	/* stripe->folios[] is allocated by us and no highmem is allowed. */
	ASSERT(folio);
-	ASSERT(!folio_test_partial_kmap(folio));
+	ASSERT(!folio_test_highmem(folio));
	return folio_address(folio) + offset_in_folio(folio, offset);
 }
···
 
	/* stripe->folios[] is allocated by us and no highmem is allowed. */
	ASSERT(folio);
-	ASSERT(!folio_test_partial_kmap(folio));
+	ASSERT(!folio_test_highmem(folio));
	/* And the range must be contained inside the folio. */
	ASSERT(offset_in_folio(folio, offset) + fs_info->sectorsize <= folio_size(folio));
	return page_to_phys(folio_page(folio, 0)) + offset_in_folio(folio, offset);
+3-1
fs/btrfs/send.c
···
	u64 cur_inode_rdev;
	u64 cur_inode_last_extent;
	u64 cur_inode_next_write_offset;
-	struct fs_path cur_inode_path;
	bool cur_inode_new;
	bool cur_inode_new_gen;
	bool cur_inode_deleted;
···
 
	struct btrfs_lru_cache dir_created_cache;
	struct btrfs_lru_cache dir_utimes_cache;
+
+	/* Must be last as it ends in a flexible-array member. */
+	struct fs_path cur_inode_path;
 };
 
 struct pending_dir_move {
+1-2
fs/btrfs/super.c
···
		return PTR_ERR(sb);
	}
 
-	set_device_specific_options(fs_info);
-
	if (sb->s_root) {
		/*
		 * Not the first mount of the fs thus got an existing super block.
···
			deactivate_locked_super(sb);
			return -EACCES;
		}
+		set_device_specific_options(fs_info);
		bdev = fs_devices->latest_dev->bdev;
		snprintf(sb->s_id, sizeof(sb->s_id), "%pg", bdev);
		shrinker_debugfs_rename(sb->s_shrink, "sb-btrfs:%s", sb->s_id);
···
		  bh, is_metadata, inode->i_mode,
		  test_opt(inode->i_sb, DATA_FLAGS));
 
-	/* In the no journal case, we can just do a bforget and return */
+	/*
+	 * In the no journal case, we should wait for the ongoing buffer
+	 * to complete and do a forget.
+	 */
	if (!ext4_handle_valid(handle)) {
-		bforget(bh);
+		if (bh) {
+			clear_buffer_dirty(bh);
+			wait_on_buffer(bh);
+			__bforget(bh);
+		}
		return 0;
	}
 
+8
fs/ext4/inode.c
···
	}
	ei->i_flags = le32_to_cpu(raw_inode->i_flags);
	ext4_set_inode_flags(inode, true);
+	/* Detect invalid flag combination - can't have both inline data and extents */
+	if (ext4_test_inode_flag(inode, EXT4_INODE_INLINE_DATA) &&
+	    ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) {
+		ext4_error_inode(inode, function, line, 0,
+				 "inode has both inline data and extents flags");
+		ret = -EFSCORRUPTED;
+		goto bad_inode;
+	}
	inode->i_blocks = ext4_inode_blocks(raw_inode, ei);
	ei->i_file_acl = le32_to_cpu(raw_inode->i_file_acl_lo);
	if (ext4_has_feature_64bit(sb))
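The ext4 inode-load hunk rejects on-disk inodes that carry two mutually exclusive layout flags at once. The validation shape is generic: decode the flag word, then reject any combination of bits that cannot legally coexist. A sketch with illustrative bit positions and error value (not ext4's actual flag encoding):

```c
#include <assert.h>
#include <stdint.h>

#define INODE_EXTENTS		(1u << 0)	/* illustrative bit */
#define INODE_INLINE_DATA	(1u << 1)	/* illustrative bit */
#define EFSCORRUPTED		117		/* illustrative value */

/* An inode's data is stored either inline or via an extent tree, never
 * both; a flag word claiming both is corruption. */
static int validate_inode_flags(uint32_t flags)
{
	if ((flags & INODE_INLINE_DATA) && (flags & INODE_EXTENTS))
		return -EFSCORRUPTED;
	return 0;
}
```

Checking this immediately after decoding the flags, before any code paths branch on them, prevents later code from following two contradictory layout interpretations of the same inode.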
+2-2
fs/ext4/orphan.c
···
		return;
	for (i = 0; i < oi->of_blocks; i++)
		brelse(oi->of_binfo[i].ob_bh);
-	kfree(oi->of_binfo);
+	kvfree(oi->of_binfo);
 }
 
 static struct ext4_orphan_block_tail *ext4_orphan_block_tail(
···
 out_free:
	for (i--; i >= 0; i--)
		brelse(oi->of_binfo[i].ob_bh);
-	kfree(oi->of_binfo);
+	kvfree(oi->of_binfo);
 out_put:
	iput(inode);
	return ret;
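The `kfree` to `kvfree` switch above matters because a `kvmalloc`-style allocation may come from either of two allocators, and the free routine must detect which one. A userspace sketch of that contract, using a small header byte to record the origin (the kernel instead checks the address with `is_vmalloc_addr()`; this harness is purely illustrative):

```c
#include <assert.h>
#include <stdlib.h>

struct kv_hdr { int from_fallback; };

static int freed_from_fallback = -1;

/* Allocation may come from the primary allocator or a fallback; record
 * which one so the free side can route correctly. */
static void *kv_alloc(size_t n, int use_fallback)
{
	struct kv_hdr *h = malloc(sizeof(*h) + n);

	if (!h)
		return NULL;
	h->from_fallback = use_fallback;
	return h + 1;
}

/* A single free entry point detects the origin and frees accordingly. */
static void kv_free(void *p)
{
	struct kv_hdr *h = (struct kv_hdr *)p - 1;

	freed_from_fallback = h->from_fallback;
	free(h);
}

/* Demo: allocate, free, and report which allocator was used. */
static int demo(int use_fallback)
{
	void *p = kv_alloc(16, use_fallback);

	if (!p)
		return -1;
	kv_free(p);
	return freed_from_fallback;
}
```

Freeing a fallback (vmalloc-side) buffer with the primary allocator's free, as the pre-fix `kfree()` did, corrupts allocator state; the combined free is the only safe choice when the allocation site can pick either path.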
···
	int error;
 
	if (!inode->i_op->fileattr_get)
-		return -EOPNOTSUPP;
+		return -ENOIOCTLCMD;
 
	error = security_inode_file_getattr(dentry, fa);
	if (error)
···
	int err;
 
	if (!inode->i_op->fileattr_set)
-		return -EOPNOTSUPP;
+		return -ENOIOCTLCMD;
 
	if (!inode_owner_or_capable(idmap, inode))
		return -EPERM;
···
	int err;
 
	err = vfs_fileattr_get(file->f_path.dentry, &fa);
-	if (err == -EOPNOTSUPP)
-		err = -ENOIOCTLCMD;
	if (!err)
		err = put_user(fa.flags, argp);
	return err;
···
			fileattr_fill_flags(&fa, flags);
			err = vfs_fileattr_set(idmap, dentry, &fa);
			mnt_drop_write_file(file);
-			if (err == -EOPNOTSUPP)
-				err = -ENOIOCTLCMD;
		}
	}
	return err;
···
	int err;
 
	err = vfs_fileattr_get(file->f_path.dentry, &fa);
-	if (err == -EOPNOTSUPP)
-		err = -ENOIOCTLCMD;
	if (!err)
		err = copy_fsxattr_to_user(&fa, argp);
···
		if (!err) {
			err = vfs_fileattr_set(idmap, dentry, &fa);
			mnt_drop_write_file(file);
-			if (err == -EOPNOTSUPP)
-				err = -ENOIOCTLCMD;
		}
	}
	return err;
···
	}
 
	error = vfs_fileattr_get(filepath.dentry, &fa);
+	if (error == -ENOIOCTLCMD || error == -ENOTTY)
+		error = -EOPNOTSUPP;
	if (error)
		return error;
···
	if (!error) {
		error = vfs_fileattr_set(mnt_idmap(filepath.mnt),
					 filepath.dentry, &fa);
+		if (error == -ENOIOCTLCMD || error == -ENOTTY)
+			error = -EOPNOTSUPP;
		mnt_drop_write(filepath.mnt);
	}
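The fileattr hunk moves errno translation to the boundary: the core helpers now report "no handler" with the kernel-internal `-ENOIOCTLCMD`, and only the syscall entry points map internal codes to a UAPI errno, instead of sprinkling the conversion through every ioctl caller. A sketch of that layering (function names here are illustrative):

```c
#include <assert.h>
#include <errno.h>

#define ENOIOCTLCMD 515	/* kernel-internal "unknown ioctl" code */

/* Core helper: reports a missing handler with the internal code only,
 * so in-kernel callers can distinguish "unsupported" precisely. */
static int vfs_getattr_flags(int has_handler)
{
	if (!has_handler)
		return -ENOIOCTLCMD;
	return 0;
}

/* Syscall boundary: internal codes never leak to userspace; they are
 * translated to a single well-defined UAPI errno here. */
static int sys_file_getattr(int has_handler)
{
	int err = vfs_getattr_flags(has_handler);

	if (err == -ENOIOCTLCMD || err == -ENOTTY)
		err = -EOPNOTSUPP;
	return err;
}
```

Doing the mapping once at the boundary keeps internal callers (which may want to fall back on `-ENOIOCTLCMD`) and the userspace contract (`-EOPNOTSUPP`) from interfering with each other.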
+1-1
fs/file_table.c
···
	f->f_sb_err = 0;
 
	/*
-	 * We're SLAB_TYPESAFE_BY_RCU so initialize f_count last. While
+	 * We're SLAB_TYPESAFE_BY_RCU so initialize f_ref last. While
	 * fget-rcu pattern users need to be able to handle spurious
	 * refcount bumps we should reinitialize the reused file first.
	 */
···
	int drop_reserve = 0;
	int err = 0;
	int was_modified = 0;
+	int wait_for_writeback = 0;
 
	if (is_handle_aborted(handle))
		return -EROFS;
···
		}
 
		/*
-		 * The buffer is still not written to disk, we should
-		 * attach this buffer to current transaction so that the
-		 * buffer can be checkpointed only after the current
-		 * transaction commits.
+		 * The buffer has not yet been written to disk. We should
+		 * either clear the buffer or ensure that the ongoing I/O
+		 * is completed, and attach this buffer to current
+		 * transaction so that the buffer can be checkpointed only
+		 * after the current transaction commits.
		 */
		clear_buffer_dirty(bh);
+		wait_for_writeback = 1;
		__jbd2_journal_file_buffer(jh, transaction, BJ_Forget);
		spin_unlock(&journal->j_list_lock);
	}
 drop:
	__brelse(bh);
	spin_unlock(&jh->b_state_lock);
+	if (wait_for_writeback)
+		wait_on_buffer(bh);
	jbd2_journal_put_journal_head(jh);
	if (drop_reserve) {
		/* no need to reserve log space for this block -bzzz */
···
	/* Deal with the suid/sgid bit corner case */
	if (nfs_should_remove_suid(inode)) {
		spin_lock(&inode->i_lock);
-		nfs_set_cache_invalid(inode, NFS_INO_INVALID_MODE);
+		nfs_set_cache_invalid(inode, NFS_INO_INVALID_MODE
+					| NFS_INO_REVAL_FORCED);
		spin_unlock(&inode->i_lock);
	}
	return 0;
···
 
	VFS_WARN_ON_ONCE(ns->ns_id != fid->ns_id);
	VFS_WARN_ON_ONCE(ns->ns_type != fid->ns_type);
-	VFS_WARN_ON_ONCE(ns->inum != fid->ns_inum);
+
+	if (ns->inum != fid->ns_inum)
+		return NULL;
 
	if (!__ns_ref_get(ns))
		return NULL;
+1-1
fs/overlayfs/copy_up.c
···
	err = ovl_real_fileattr_get(old, &oldfa);
	if (err) {
		/* Ntfs-3g returns -EINVAL for "no fileattr support" */
-		if (err == -EOPNOTSUPP || err == -EINVAL)
+		if (err == -ENOTTY || err == -EINVAL)
			return 0;
		pr_warn("failed to retrieve lower fileattr (%pd2, err=%i)\n",
			old->dentry, err);
-5
fs/overlayfs/file.c
···
	if (!ovl_should_sync(OVL_FS(inode->i_sb)))
		ifl &= ~(IOCB_DSYNC | IOCB_SYNC);
 
-	/*
-	 * Overlayfs doesn't support deferred completions, don't copy
-	 * this property in case it is set by the issuer.
-	 */
-	ifl &= ~IOCB_DIO_CALLER_COMP;
	ret = backing_file_write_iter(realfile, iter, iocb, ifl, &ctx);
 
 out_unlock:
···
 sid_to_id(struct cifs_sb_info *cifs_sb, struct smb_sid *psid,
	  struct cifs_fattr *fattr, uint sidtype)
 {
-	int rc = 0;
	struct key *sidkey;
	char *sidstr;
	const struct cred *saved_cred;
···
	 * fails then we just fall back to using the ctx->linux_uid/linux_gid.
	 */
 got_valid_id:
-	rc = 0;
	if (sidtype == SIDOWNER)
		fattr->cf_uid = fuid;
	else
		fattr->cf_gid = fgid;
-	return rc;
+
+	return 0;
 }
 
 int
+74-127
fs/smb/client/cifsencrypt.c
···
 #include <linux/iov_iter.h>
 #include <crypto/aead.h>
 #include <crypto/arc4.h>
+#include <crypto/md5.h>
+#include <crypto/sha2.h>
 
-static size_t cifs_shash_step(void *iter_base, size_t progress, size_t len,
-			      void *priv, void *priv2)
+static int cifs_sig_update(struct cifs_calc_sig_ctx *ctx,
+			   const u8 *data, size_t len)
 {
-	struct shash_desc *shash = priv;
+	if (ctx->md5) {
+		md5_update(ctx->md5, data, len);
+		return 0;
+	}
+	if (ctx->hmac) {
+		hmac_sha256_update(ctx->hmac, data, len);
+		return 0;
+	}
+	return crypto_shash_update(ctx->shash, data, len);
+}
+
+static int cifs_sig_final(struct cifs_calc_sig_ctx *ctx, u8 *out)
+{
+	if (ctx->md5) {
+		md5_final(ctx->md5, out);
+		return 0;
+	}
+	if (ctx->hmac) {
+		hmac_sha256_final(ctx->hmac, out);
+		return 0;
+	}
+	return crypto_shash_final(ctx->shash, out);
+}
+
+static size_t cifs_sig_step(void *iter_base, size_t progress, size_t len,
+			    void *priv, void *priv2)
+{
+	struct cifs_calc_sig_ctx *ctx = priv;
	int ret, *pret = priv2;
 
-	ret = crypto_shash_update(shash, iter_base, len);
+	ret = cifs_sig_update(ctx, iter_base, len);
	if (ret < 0) {
		*pret = ret;
		return len;
···
 /*
  * Pass the data from an iterator into a hash.
  */
-static int cifs_shash_iter(const struct iov_iter *iter, size_t maxsize,
-			   struct shash_desc *shash)
+static int cifs_sig_iter(const struct iov_iter *iter, size_t maxsize,
+			 struct cifs_calc_sig_ctx *ctx)
 {
	struct iov_iter tmp_iter = *iter;
	int err = -EIO;
 
-	if (iterate_and_advance_kernel(&tmp_iter, maxsize, shash, &err,
-				       cifs_shash_step) != maxsize)
+	if (iterate_and_advance_kernel(&tmp_iter, maxsize, ctx, &err,
+				       cifs_sig_step) != maxsize)
		return err;
	return 0;
 }
 
-int __cifs_calc_signature(struct smb_rqst *rqst,
-			  struct TCP_Server_Info *server, char *signature,
-			  struct shash_desc *shash)
+int __cifs_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server,
+			  char *signature, struct cifs_calc_sig_ctx *ctx)
 {
	int i;
	ssize_t rc;
···
			return -EIO;
		}
 
-		rc = crypto_shash_update(shash,
-					 iov[i].iov_base, iov[i].iov_len);
+		rc = cifs_sig_update(ctx, iov[i].iov_base, iov[i].iov_len);
		if (rc) {
			cifs_dbg(VFS, "%s: Could not update with payload\n",
				 __func__);
···
		}
	}
 
-	rc = cifs_shash_iter(&rqst->rq_iter, iov_iter_count(&rqst->rq_iter), shash);
+	rc = cifs_sig_iter(&rqst->rq_iter, iov_iter_count(&rqst->rq_iter), ctx);
	if (rc < 0)
		return rc;
 
-	rc = crypto_shash_final(shash, signature);
+	rc = cifs_sig_final(ctx, signature);
	if (rc)
		cifs_dbg(VFS, "%s: Could not generate hash\n", __func__);
···
 static int cifs_calc_signature(struct smb_rqst *rqst,
			struct TCP_Server_Info *server, char *signature)
 {
-	int rc;
+	struct md5_ctx ctx;
 
	if (!rqst->rq_iov || !signature || !server)
		return -EINVAL;
-
-	rc = cifs_alloc_hash("md5", &server->secmech.md5);
-	if (rc)
-		return -1;
-
-	rc = crypto_shash_init(server->secmech.md5);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not init md5\n", __func__);
-		return rc;
+	if (fips_enabled) {
+		cifs_dbg(VFS,
+			 "MD5 signature support is disabled due to FIPS\n");
+		return -EOPNOTSUPP;
	}
 
-	rc = crypto_shash_update(server->secmech.md5,
-		server->session_key.response, server->session_key.len);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not update with response\n", __func__);
-		return rc;
-	}
+	md5_init(&ctx);
+	md5_update(&ctx, server->session_key.response, server->session_key.len);
 
-	return __cifs_calc_signature(rqst, server, signature, server->secmech.md5);
+	return __cifs_calc_signature(
+		rqst, server, signature,
+		&(struct cifs_calc_sig_ctx){ .md5 = &ctx });
 }
 
 /* must be called with server->srv_mutex held */
···
 }
 
 static int calc_ntlmv2_hash(struct cifs_ses *ses, char *ntlmv2_hash,
-			    const struct nls_table *nls_cp, struct shash_desc *hmacmd5)
+			    const struct nls_table *nls_cp)
 {
-	int rc = 0;
	int len;
	char nt_hash[CIFS_NTHASH_SIZE];
+	struct hmac_md5_ctx hmac_ctx;
	__le16 *user;
	wchar_t *domain;
	wchar_t *server;
···
	/* calculate md4 hash of password */
	E_md4hash(ses->password, nt_hash, nls_cp);
 
-	rc = crypto_shash_setkey(hmacmd5->tfm, nt_hash, CIFS_NTHASH_SIZE);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not set NT hash as a key, rc=%d\n", __func__, rc);
-		return rc;
-	}
-
-	rc = crypto_shash_init(hmacmd5);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not init HMAC-MD5, rc=%d\n", __func__, rc);
-		return rc;
-	}
+	hmac_md5_init_usingrawkey(&hmac_ctx, nt_hash, CIFS_NTHASH_SIZE);
 
	/* convert ses->user_name to unicode */
	len = ses->user_name ? strlen(ses->user_name) : 0;
···
		*(u16 *)user = 0;
	}
 
-	rc = crypto_shash_update(hmacmd5, (char *)user, 2 * len);
+	hmac_md5_update(&hmac_ctx, (const u8 *)user, 2 * len);
	kfree(user);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not update with user, rc=%d\n", __func__, rc);
-		return rc;
-	}
 
	/* convert ses->domainName to unicode and uppercase */
	if (ses->domainName) {
···
 
		len = cifs_strtoUTF16((__le16 *)domain, ses->domainName, len,
				      nls_cp);
-		rc = crypto_shash_update(hmacmd5, (char *)domain, 2 * len);
+		hmac_md5_update(&hmac_ctx, (const u8 *)domain, 2 * len);
		kfree(domain);
-		if (rc) {
-			cifs_dbg(VFS, "%s: Could not update with domain, rc=%d\n", __func__, rc);
-			return rc;
-		}
	} else {
		/* We use ses->ip_addr if no domain name available */
		len = strlen(ses->ip_addr);
···
			return -ENOMEM;
 
		len = cifs_strtoUTF16((__le16 *)server, ses->ip_addr, len, nls_cp);
-		rc = crypto_shash_update(hmacmd5, (char *)server, 2 * len);
+		hmac_md5_update(&hmac_ctx, (const u8 *)server, 2 * len);
		kfree(server);
-		if (rc) {
-			cifs_dbg(VFS, "%s: Could not update with server, rc=%d\n", __func__, rc);
-			return rc;
-		}
	}
 
-	rc = crypto_shash_final(hmacmd5, ntlmv2_hash);
-	if (rc)
-		cifs_dbg(VFS, "%s: Could not generate MD5 hash, rc=%d\n", __func__, rc);
-
-	return rc;
+	hmac_md5_final(&hmac_ctx, ntlmv2_hash);
+	return 0;
 }
 
-static int
-CalcNTLMv2_response(const struct cifs_ses *ses, char *ntlmv2_hash, struct shash_desc *hmacmd5)
+static void CalcNTLMv2_response(const struct cifs_ses *ses, char *ntlmv2_hash)
 {
-	int rc;
	struct ntlmv2_resp *ntlmv2 = (struct ntlmv2_resp *)
	    (ses->auth_key.response + CIFS_SESS_KEY_SIZE);
	unsigned int hash_len;
···
	hash_len = ses->auth_key.len - (CIFS_SESS_KEY_SIZE +
		offsetof(struct ntlmv2_resp, challenge.key[0]));
 
-	rc = crypto_shash_setkey(hmacmd5->tfm, ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not set NTLMv2 hash as a key, rc=%d\n", __func__, rc);
-		return rc;
-	}
-
-	rc = crypto_shash_init(hmacmd5);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not init HMAC-MD5, rc=%d\n", __func__, rc);
-		return rc;
-	}
-
	if (ses->server->negflavor == CIFS_NEGFLAVOR_EXTENDED)
		memcpy(ntlmv2->challenge.key, ses->ntlmssp->cryptkey, CIFS_SERVER_CHALLENGE_SIZE);
	else
		memcpy(ntlmv2->challenge.key, ses->server->cryptkey, CIFS_SERVER_CHALLENGE_SIZE);
 
-	rc = crypto_shash_update(hmacmd5, ntlmv2->challenge.key, hash_len);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not update with response, rc=%d\n", __func__, rc);
-		return rc;
-	}
-
-	/* Note that the MD5 digest over writes anon.challenge_key.key */
-	rc = crypto_shash_final(hmacmd5, ntlmv2->ntlmv2_hash);
-	if (rc)
-		cifs_dbg(VFS, "%s: Could not generate MD5 hash, rc=%d\n", __func__, rc);
-
-	return rc;
+	/* Note that the HMAC-MD5 value overwrites ntlmv2->challenge.key */
+	hmac_md5_usingrawkey(ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE,
+			     ntlmv2->challenge.key, hash_len,
+			     ntlmv2->ntlmv2_hash);
 }
 
 /*
···
 int
 setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
 {
-	struct shash_desc *hmacmd5 = NULL;
	unsigned char *tiblob = NULL; /* target info blob */
	struct ntlmv2_resp *ntlmv2;
	char ntlmv2_hash[16];
···
	ntlmv2->client_chal = cc;
	ntlmv2->reserved2 = 0;
 
-	rc = cifs_alloc_hash("hmac(md5)", &hmacmd5);
-	if (rc) {
-		cifs_dbg(VFS, "Could not allocate HMAC-MD5, rc=%d\n", rc);
+	if (fips_enabled) {
+		cifs_dbg(VFS, "NTLMv2 support is disabled due to FIPS\n");
+		rc = -EOPNOTSUPP;
		goto unlock;
	}
 
	/* calculate ntlmv2_hash */
-	rc = calc_ntlmv2_hash(ses, ntlmv2_hash, nls_cp, hmacmd5);
+	rc = calc_ntlmv2_hash(ses, ntlmv2_hash, nls_cp);
	if (rc) {
		cifs_dbg(VFS, "Could not get NTLMv2 hash, rc=%d\n", rc);
		goto unlock;
	}
 
	/* calculate first part of the client response (CR1) */
-	rc = CalcNTLMv2_response(ses, ntlmv2_hash, hmacmd5);
-	if (rc) {
-		cifs_dbg(VFS, "Could not calculate CR1, rc=%d\n", rc);
-		goto unlock;
-	}
+	CalcNTLMv2_response(ses, ntlmv2_hash);
 
	/* now calculate the session key for NTLMv2 */
-	rc = crypto_shash_setkey(hmacmd5->tfm, ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not set NTLMv2 hash as a key, rc=%d\n", __func__, rc);
-		goto unlock;
-	}
-
-	rc = crypto_shash_init(hmacmd5);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not init HMAC-MD5, rc=%d\n", __func__, rc);
-		goto unlock;
-	}
-
-	rc = crypto_shash_update(hmacmd5, ntlmv2->ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not update with response, rc=%d\n", __func__, rc);
-		goto unlock;
-	}
-
-	rc = crypto_shash_final(hmacmd5, ses->auth_key.response);
-	if (rc)
-		cifs_dbg(VFS, "%s: Could not generate MD5 hash, rc=%d\n", __func__, rc);
+	hmac_md5_usingrawkey(ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE,
+			     ntlmv2->ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE,
+			     ses->auth_key.response);
+	rc = 0;
 unlock:
	cifs_server_unlock(ses->server);
-	cifs_free_hash(&hmacmd5);
 setup_ntlmv2_rsp_ret:
	kfree_sensitive(tiblob);
···
 cifs_crypto_secmech_release(struct TCP_Server_Info *server)
 {
	cifs_free_hash(&server->secmech.aes_cmac);
-	cifs_free_hash(&server->secmech.hmacsha256);
-	cifs_free_hash(&server->secmech.md5);
-	cifs_free_hash(&server->secmech.sha512);
 
	if (server->secmech.enc) {
		crypto_free_aead(server->secmech.enc);
-4
fs/smb/client/cifsfs.c
···21392139 "also older servers complying with the SNIA CIFS Specification)");21402140MODULE_VERSION(CIFS_VERSION);21412141MODULE_SOFTDEP("ecb");21422142-MODULE_SOFTDEP("hmac");21432143-MODULE_SOFTDEP("md5");21442142MODULE_SOFTDEP("nls");21452143MODULE_SOFTDEP("aes");21462144MODULE_SOFTDEP("cmac");21472147-MODULE_SOFTDEP("sha256");21482148-MODULE_SOFTDEP("sha512");21492145MODULE_SOFTDEP("aead2");21502146MODULE_SOFTDEP("ccm");21512147MODULE_SOFTDEP("gcm");
+1-21
fs/smb/client/cifsglob.h
···2424#include "cifsacl.h"2525#include <crypto/internal/hash.h>2626#include <uapi/linux/cifs/cifs_mount.h>2727+#include "../common/cifsglob.h"2728#include "../common/smb2pdu.h"2829#include "smb2pdu.h"2930#include <linux/filelock.h>···222221223222/* crypto hashing related structure/fields, not specific to a sec mech */224223struct cifs_secmech {225225- struct shash_desc *md5; /* md5 hash function, for CIFS/SMB1 signatures */226226- struct shash_desc *hmacsha256; /* hmac-sha256 hash function, for SMB2 signatures */227227- struct shash_desc *sha512; /* sha512 hash function, for SMB3.1.1 preauth hash */228224 struct shash_desc *aes_cmac; /* block-cipher based MAC function, for SMB3 signatures */229225230226 struct crypto_aead *enc; /* smb3 encryption AEAD TFM (AES-CCM and AES-GCM) */···700702 return be32_to_cpu(*((__be32 *)buf)) & 0xffffff;701703}702704703703-static inline void704704-inc_rfc1001_len(void *buf, int count)705705-{706706- be32_add_cpu((__be32 *)buf, count);707707-}708708-709705struct TCP_Server_Info {710706 struct list_head tcp_ses_list;711707 struct list_head smb_ses_list;···10121020 */10131021#define CIFS_MAX_RFC1002_WSIZE ((1<<17) - 1 - sizeof(WRITE_REQ) + 4)10141022#define CIFS_MAX_RFC1002_RSIZE ((1<<17) - 1 - sizeof(READ_RSP) + 4)10151015-10161016-#define CIFS_DEFAULT_IOSIZE (1024 * 1024)1017102310181024/*10191025 * Windows only supports a max of 60kb reads and 65535 byte writes. 
Default to···21382148extern mempool_t cifs_io_subrequest_pool;2139214921402150/* Operations for different SMB versions */21412141-#define SMB1_VERSION_STRING "1.0"21422142-#define SMB20_VERSION_STRING "2.0"21432151#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY21442152extern struct smb_version_operations smb1_operations;21452153extern struct smb_version_values smb1_values;21462154extern struct smb_version_operations smb20_operations;21472155extern struct smb_version_values smb20_values;21482156#endif /* CIFS_ALLOW_INSECURE_LEGACY */21492149-#define SMB21_VERSION_STRING "2.1"21502157extern struct smb_version_operations smb21_operations;21512158extern struct smb_version_values smb21_values;21522152-#define SMBDEFAULT_VERSION_STRING "default"21532159extern struct smb_version_values smbdefault_values;21542154-#define SMB3ANY_VERSION_STRING "3"21552160extern struct smb_version_values smb3any_values;21562156-#define SMB30_VERSION_STRING "3.0"21572161extern struct smb_version_operations smb30_operations;21582162extern struct smb_version_values smb30_values;21592159-#define SMB302_VERSION_STRING "3.02"21602160-#define ALT_SMB302_VERSION_STRING "3.0.2"21612163/*extern struct smb_version_operations smb302_operations;*/ /* not needed yet */21622164extern struct smb_version_values smb302_values;21632163-#define SMB311_VERSION_STRING "3.1.1"21642164-#define ALT_SMB311_VERSION_STRING "3.11"21652165extern struct smb_version_operations smb311_operations;21662166extern struct smb_version_values smb311_values;21672167
···437437 SMBDIRECT_MR_READY,438438 SMBDIRECT_MR_REGISTERED,439439 SMBDIRECT_MR_INVALIDATED,440440- SMBDIRECT_MR_ERROR440440+ SMBDIRECT_MR_ERROR,441441+ SMBDIRECT_MR_DISABLED441442};442443443444struct smbdirect_mr_io {444445 struct smbdirect_socket *socket;445446 struct ib_cqe cqe;447447+448448+ /*449449+ * We can have up to two references:450450+ * 1. by the connection451451+ * 2. by the registration452452+ */453453+ struct kref kref;454454+ struct mutex mutex;446455447456 struct list_head list;448457
···499499 * Modules for add-on boards must use other calls.500500 */501501#ifdef CONFIG_I2C_BOARDINFO502502-int __init502502+int503503i2c_register_board_info(int busnum, struct i2c_board_info const *info,504504 unsigned n);505505#else
···16591659 void *netfs;16601660#endif1661166116621662+ unsigned short retrans;16621663 int pnfs_error;16631664 int error; /* merge with pnfs_error */16641665 unsigned int good_bytes; /* boundary of good data */
+44
include/linux/rpmb.h
···61616262#define to_rpmb_dev(x) container_of((x), struct rpmb_dev, dev)63636464+/**6565+ * struct rpmb_frame - RPMB frame structure for authenticated access6666+ *6767+ * @stuff : stuff bytes, a padding/reserved area of 196 bytes at the6868+ * beginning of the RPMB frame. They don’t carry meaningful6969+ * data but are required to make the frame exactly 512 bytes.7070+ * @key_mac : The authentication key or the message authentication7171+ * code (MAC) depending on the request/response type.7272+ * The MAC will be delivered in the last (or the only)7373+ * block of data.7474+ * @data : Data to be written or read by signed access.7575+ * @nonce : Random number generated by the host for the requests7676+ * and copied to the response by the RPMB engine.7777+ * @write_counter: Counter value for the total amount of the successful7878+ * authenticated data write requests made by the host.7979+ * @addr : Address of the data to be programmed to or read8080+ * from the RPMB. Address is the serial number of8181+ * the accessed block (half sector 256B).8282+ * @block_count : Number of blocks (half sectors, 256B) requested to be8383+ * read/programmed.8484+ * @result : Includes information about the status of the write counter8585+ * (valid, expired) and result of the access made to the RPMB.8686+ * @req_resp : Defines the type of request and response to/from the memory.8787+ *8888+ * The stuff bytes and big-endian properties are modeled to fit to the spec.8989+ */9090+struct rpmb_frame {9191+ u8 stuff[196];9292+ u8 key_mac[32];9393+ u8 data[256];9494+ u8 nonce[16];9595+ __be32 write_counter;9696+ __be16 addr;9797+ __be16 block_count;9898+ __be16 result;9999+ __be16 req_resp;100100+};101101+102102+#define RPMB_PROGRAM_KEY 0x1 /* Program RPMB Authentication Key */103103+#define RPMB_GET_WRITE_COUNTER 0x2 /* Read RPMB write counter */104104+#define RPMB_WRITE_DATA 0x3 /* Write data to RPMB partition */105105+#define RPMB_READ_DATA 0x4 /* Read data from RPMB partition 
*/106106+#define RPMB_RESULT_READ 0x5 /* Read result request (Internal) */107107+64108#if IS_ENABLED(CONFIG_RPMB)65109struct rpmb_dev *rpmb_dev_get(struct rpmb_dev *rdev);66110void rpmb_dev_put(struct rpmb_dev *rdev);
+15
include/net/ip_tunnels.h
···611611int skb_tunnel_check_pmtu(struct sk_buff *skb, struct dst_entry *encap_dst,612612 int headroom, bool reply);613613614614+static inline void ip_tunnel_adj_headroom(struct net_device *dev,615615+ unsigned int headroom)616616+{617617+ /* we must cap headroom to some upperlimit, else pskb_expand_head618618+ * will overflow header offsets in skb_headers_offset_update().619619+ */620620+ const unsigned int max_allowed = 512;621621+622622+ if (headroom > max_allowed)623623+ headroom = max_allowed;624624+625625+ if (headroom > READ_ONCE(dev->needed_headroom))626626+ WRITE_ONCE(dev->needed_headroom, headroom);627627+}628628+614629int iptunnel_handle_offloads(struct sk_buff *skb, int gso_type_mask);615630616631static inline int iptunnel_pull_offloads(struct sk_buff *skb)
···15551555 __u32 userq_num_slots;15561556};1557155715581558-/* GFX metadata BO sizes and alignment info (in bytes) */15591559-struct drm_amdgpu_info_uq_fw_areas_gfx {15601560- /* shadow area size */15611561- __u32 shadow_size;15621562- /* shadow area base virtual mem alignment */15631563- __u32 shadow_alignment;15641564- /* context save area size */15651565- __u32 csa_size;15661566- /* context save area base virtual mem alignment */15671567- __u32 csa_alignment;15681568-};15691569-15701570-/* IP specific fw related information used in the15711571- * subquery AMDGPU_INFO_UQ_FW_AREAS15721572- */15731573-struct drm_amdgpu_info_uq_fw_areas {15741574- union {15751575- struct drm_amdgpu_info_uq_fw_areas_gfx gfx;15761576- };15771577-};15781578-15791558struct drm_amdgpu_info_num_handles {15801559 /** Max handles as supported by firmware for UVD */15811560 __u32 uvd_max_handles;
···421421 if (unlikely(ret))422422 return ret;423423424424- /* nothing to do, but copy params back */425425- if (p.sq_entries == ctx->sq_entries && p.cq_entries == ctx->cq_entries) {426426- if (copy_to_user(arg, &p, sizeof(p)))427427- return -EFAULT;428428- return 0;429429- }430430-431424 size = rings_size(p.flags, p.sq_entries, p.cq_entries,432425 &sq_array_offset);433426 if (size == SIZE_MAX)···606613 if (ret)607614 return ret;608615 if (copy_to_user(rd_uptr, &rd, sizeof(rd))) {616616+ guard(mutex)(&ctx->mmap_lock);609617 io_free_region(ctx, &ctx->param_region);610618 return -EFAULT;611619 }
+6-2
io_uring/rw.c
···542542{543543 if (res == req->cqe.res)544544 return;545545- if (res == -EAGAIN && io_rw_should_reissue(req)) {545545+ if ((res == -EOPNOTSUPP || res == -EAGAIN) && io_rw_should_reissue(req)) {546546 req->flags |= REQ_F_REISSUE | REQ_F_BL_NO_RECYCLE;547547 } else {548548 req_set_fail(req);···655655 if (ret >= 0 && req->flags & REQ_F_CUR_POS)656656 req->file->f_pos = rw->kiocb.ki_pos;657657 if (ret >= 0 && !(req->ctx->flags & IORING_SETUP_IOPOLL)) {658658+ u32 cflags = 0;659659+658660 __io_complete_rw_common(req, ret);659661 /*660662 * Safe to call io_end from here as we're inline661663 * from the submission path.662664 */663665 io_req_io_end(req);664664- io_req_set_res(req, final_ret, io_put_kbuf(req, ret, sel->buf_list));666666+ if (sel)667667+ cflags = io_put_kbuf(req, ret, sel->buf_list);668668+ io_req_set_res(req, final_ret, cflags);665669 io_req_rw_cleanup(req, issue_flags);666670 return IOU_COMPLETE;667671 } else {
+14-11
kernel/bpf/helpers.c
···12111211 rcu_read_unlock_trace();12121212}1213121312141214+static void bpf_async_cb_rcu_free(struct rcu_head *rcu)12151215+{12161216+ struct bpf_async_cb *cb = container_of(rcu, struct bpf_async_cb, rcu);12171217+12181218+ kfree_nolock(cb);12191219+}12201220+12141221static void bpf_wq_delete_work(struct work_struct *work)12151222{12161223 struct bpf_work *w = container_of(work, struct bpf_work, delete_work);1217122412181225 cancel_work_sync(&w->work);1219122612201220- kfree_rcu(w, cb.rcu);12271227+ call_rcu(&w->cb.rcu, bpf_async_cb_rcu_free);12211228}1222122912231230static void bpf_timer_delete_work(struct work_struct *work)···1233122612341227 /* Cancel the timer and wait for callback to complete if it was running.12351228 * If hrtimer_cancel() can be safely called it's safe to call12361236- * kfree_rcu(t) right after for both preallocated and non-preallocated12291229+ * call_rcu() right after for both preallocated and non-preallocated12371230 * maps. The async->cb = NULL was already done and no code path can see12381231 * address 't' anymore. Timer if armed for existing bpf_hrtimer before12391232 * bpf_timer_cancel_and_free will have been cancelled.12401233 */12411234 hrtimer_cancel(&t->timer);12421242- kfree_rcu(t, cb.rcu);12351235+ call_rcu(&t->cb.rcu, bpf_async_cb_rcu_free);12431236}1244123712451238static int __bpf_async_init(struct bpf_async_kern *async, struct bpf_map *map, u64 flags,···12731266 goto out;12741267 }1275126812761276- /* Allocate via bpf_map_kmalloc_node() for memcg accounting. 
Until12771277- * kmalloc_nolock() is available, avoid locking issues by using12781278- * __GFP_HIGH (GFP_ATOMIC & ~__GFP_RECLAIM).12791279- */12801280- cb = bpf_map_kmalloc_node(map, size, __GFP_HIGH, map->numa_node);12691269+ cb = bpf_map_kmalloc_nolock(map, size, 0, map->numa_node);12811270 if (!cb) {12821271 ret = -ENOMEM;12831272 goto out;···13141311 * or pinned in bpffs.13151312 */13161313 WRITE_ONCE(async->cb, NULL);13171317- kfree(cb);13141314+ kfree_nolock(cb);13181315 ret = -EPERM;13191316 }13201317out:···15791576 * timer _before_ calling us, such that failing to cancel it here will15801577 * cause it to possibly use struct hrtimer after freeing bpf_hrtimer.15811578 * Therefore, we _need_ to cancel any outstanding timers before we do15821582- * kfree_rcu, even though no more timers can be armed.15791579+ * call_rcu, even though no more timers can be armed.15831580 *15841581 * Moreover, we need to schedule work even if timer does not belong to15851582 * the calling callback_fn, as on two different CPUs, we can end up in a···16061603 * completion.16071604 */16081605 if (hrtimer_try_to_cancel(&t->timer) >= 0)16091609- kfree_rcu(t, cb.rcu);16061606+ call_rcu(&t->cb.rcu, bpf_async_cb_rcu_free);16101607 else16111608 queue_work(system_dfl_wq, &t->cb.delete_work);16121609 } else {
···21702170 struct slabobj_ext *obj_exts;2171217121722172 obj_exts = slab_obj_exts(slab);21732173- if (!obj_exts)21732173+ if (!obj_exts) {21742174+ /*21752175+ * If obj_exts allocation failed, slab->obj_exts is set to21762176+ * OBJEXTS_ALLOC_FAIL. In this case, we end up here and should21772177+ * clear the flag.21782178+ */21792179+ slab->obj_exts = 0;21742180 return;21812181+ }2175218221762183 /*21772184 * obj_exts was created with __GFP_NO_OBJ_EXT flag, therefore its···64506443 slab = virt_to_slab(x);64516444 s = slab->slab_cache;6452644564466446+ /* Point 'x' back to the beginning of allocated object */64476447+ x -= s->offset;64486448+64536449 /*64546450 * We used freepointer in 'x' to link 'x' into df->objects.64556451 * Clear it to NULL to avoid false positive detection64566452 * of "Freepointer corruption".64576453 */64586458- *(void **)x = NULL;64546454+ set_freepointer(s, x, NULL);6459645564606460- /* Point 'x' back to the beginning of allocated object */64616461- x -= s->offset;64626456 __slab_free(s, slab, x, x, 1, _THIS_IP_);64636457 }64646458
+7-18
net/bpf/test_run.c
···2929#include <trace/events/bpf_test_run.h>30303131struct bpf_test_timer {3232- enum { NO_PREEMPT, NO_MIGRATE } mode;3332 u32 i;3433 u64 time_start, time_spent;3534};···3637static void bpf_test_timer_enter(struct bpf_test_timer *t)3738 __acquires(rcu)3839{3939- rcu_read_lock();4040- if (t->mode == NO_PREEMPT)4141- preempt_disable();4242- else4343- migrate_disable();4444-4040+ rcu_read_lock_dont_migrate();4541 t->time_start = ktime_get_ns();4642}4743···4450 __releases(rcu)4551{4652 t->time_start = 0;4747-4848- if (t->mode == NO_PREEMPT)4949- preempt_enable();5050- else5151- migrate_enable();5252- rcu_read_unlock();5353+ rcu_read_unlock_migrate();5354}54555556static bool bpf_test_timer_continue(struct bpf_test_timer *t, int iterations,···363374364375{365376 struct xdp_test_data xdp = { .batch_size = batch_size };366366- struct bpf_test_timer t = { .mode = NO_MIGRATE };377377+ struct bpf_test_timer t = {};367378 int ret;368379369380 if (!repeat)···393404 struct bpf_prog_array_item item = {.prog = prog};394405 struct bpf_run_ctx *old_ctx;395406 struct bpf_cg_run_ctx run_ctx;396396- struct bpf_test_timer t = { NO_MIGRATE };407407+ struct bpf_test_timer t = {};397408 enum bpf_cgroup_storage_type stype;398409 int ret;399410···13211332 goto free_ctx;1322133313231334 if (kattr->test.data_size_in - meta_sz < ETH_HLEN)13241324- return -EINVAL;13351335+ goto free_ctx;1325133613261337 data = bpf_test_init(kattr, linear_sz, max_linear_sz, headroom, tailroom);13271338 if (IS_ERR(data)) {···14291440 const union bpf_attr *kattr,14301441 union bpf_attr __user *uattr)14311442{14321432- struct bpf_test_timer t = { NO_PREEMPT };14431443+ struct bpf_test_timer t = {};14331444 u32 size = kattr->test.data_size_in;14341445 struct bpf_flow_dissector ctx = {};14351446 u32 repeat = kattr->test.repeat;···14971508int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kattr,14981509 union bpf_attr __user *uattr)14991510{15001500- struct bpf_test_timer t = { NO_PREEMPT 
};15111511+ struct bpf_test_timer t = {};15011512 struct bpf_prog_array *progs = NULL;15021513 struct bpf_sk_lookup_kern ctx = {};15031514 u32 repeat = kattr->test.repeat;
···1217612176 }1217712177}12178121781217912179+/* devices must be UP and netdev_lock()'d */1218012180+static void netif_close_many_and_unlock(struct list_head *close_head)1218112181+{1218212182+ struct net_device *dev, *tmp;1218312183+1218412184+ netif_close_many(close_head, false);1218512185+1218612186+ /* ... now unlock them */1218712187+ list_for_each_entry_safe(dev, tmp, close_head, close_list) {1218812188+ netdev_unlock(dev);1218912189+ list_del_init(&dev->close_list);1219012190+ }1219112191+}1219212192+1219312193+static void netif_close_many_and_unlock_cond(struct list_head *close_head)1219412194+{1219512195+#ifdef CONFIG_LOCKDEP1219612196+ /* We can only track up to MAX_LOCK_DEPTH locks per task.1219712197+ *1219812198+ * Reserve half the available slots for additional locks possibly1219912199+ * taken by notifiers and (soft)irqs.1220012200+ */1220112201+ unsigned int limit = MAX_LOCK_DEPTH / 2;1220212202+1220312203+ if (lockdep_depth(current) > limit)1220412204+ netif_close_many_and_unlock(close_head);1220512205+#endif1220612206+}1220712207+1217912208void unregister_netdevice_many_notify(struct list_head *head,1218012209 u32 portid, const struct nlmsghdr *nlh)1218112210{···12237122081223812209 /* If device is running, close it first. Start with ops locked... */1223912210 list_for_each_entry(dev, head, unreg_list) {1221112211+ if (!(dev->flags & IFF_UP))1221212212+ continue;1224012213 if (netdev_need_ops_lock(dev)) {1224112214 list_add_tail(&dev->close_list, &close_head);1224212215 netdev_lock(dev);1224312216 }1221712217+ netif_close_many_and_unlock_cond(&close_head);1224412218 }1224512245- netif_close_many(&close_head, true);1224612246- /* ... now unlock them and go over the rest. */1221912219+ netif_close_many_and_unlock(&close_head);1222012220+ /* ... now go over the rest. 
*/1224712221 list_for_each_entry(dev, head, unreg_list) {1224812248- if (netdev_need_ops_lock(dev))1224912249- netdev_unlock(dev);1225012250- else1222212222+ if (!netdev_need_ops_lock(dev))1225112223 list_add_tail(&dev->close_list, &close_head);1225212224 }1225312225 netif_close_many(&close_head, true);
···7200720072017201 DEBUG_NET_WARN_ON_ONCE(skb_dst(skb));72027202 DEBUG_NET_WARN_ON_ONCE(skb->destructor);72037203+ DEBUG_NET_WARN_ON_ONCE(skb_nfct(skb));7203720472047205 sdn = per_cpu_ptr(net_hotdata.skb_defer_nodes, cpu) + numa_node_id();72057206
-14
net/ipv4/ip_tunnel.c
···568568 return 0;569569}570570571571-static void ip_tunnel_adj_headroom(struct net_device *dev, unsigned int headroom)572572-{573573- /* we must cap headroom to some upperlimit, else pskb_expand_head574574- * will overflow header offsets in skb_headers_offset_update().575575- */576576- static const unsigned int max_allowed = 512;577577-578578- if (headroom > max_allowed)579579- headroom = max_allowed;580580-581581- if (headroom > READ_ONCE(dev->needed_headroom))582582- WRITE_ONCE(dev->needed_headroom, headroom);583583-}584584-585571void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,586572 u8 proto, int tunnel_hlen)587573{
+15-4
net/ipv4/tcp_output.c
···23692369 u32 max_segs)23702370{23712371 const struct inet_connection_sock *icsk = inet_csk(sk);23722372- u32 send_win, cong_win, limit, in_flight;23722372+ u32 send_win, cong_win, limit, in_flight, threshold;23732373+ u64 srtt_in_ns, expected_ack, how_far_is_the_ack;23732374 struct tcp_sock *tp = tcp_sk(sk);23742375 struct sk_buff *head;23752376 int win_divisor;···24322431 head = tcp_rtx_queue_head(sk);24332432 if (!head)24342433 goto send_now;24352435- delta = tp->tcp_clock_cache - head->tstamp;24362436- /* If next ACK is likely to come too late (half srtt), do not defer */24372437- if ((s64)(delta - (u64)NSEC_PER_USEC * (tp->srtt_us >> 4)) < 0)24342434+24352435+ srtt_in_ns = (u64)(NSEC_PER_USEC >> 3) * tp->srtt_us;24362436+ /* When is the ACK expected ? */24372437+ expected_ack = head->tstamp + srtt_in_ns;24382438+ /* How far from now is the ACK expected ? */24392439+ how_far_is_the_ack = expected_ack - tp->tcp_clock_cache;24402440+24412441+ /* If next ACK is likely to come too late,24422442+ * ie in more than min(1ms, half srtt), do not defer.24432443+ */24442444+ threshold = min(srtt_in_ns >> 1, NSEC_PER_MSEC);24452445+24462446+ if ((s64)(how_far_is_the_ack - threshold) > 0)24382447 goto send_now;2439244824402449 /* Ok, it looks like it is advisable to defer.
-2
net/ipv4/udp.c
···18511851 sk_peek_offset_bwd(sk, len);1852185218531853 if (!skb_shared(skb)) {18541854- if (unlikely(udp_skb_has_head_state(skb)))18551855- skb_release_head_state(skb);18561854 skb_attempt_defer_free(skb);18571855 return;18581856 }
···166166 fn deref(&self) -> &Bitmap {167167 let ptr = if self.nbits <= BITS_PER_LONG {168168 // SAFETY: Bitmap is represented inline.169169- unsafe { core::ptr::addr_of!(self.repr.bitmap) }169169+ #[allow(unused_unsafe, reason = "Safe since Rust 1.92.0")]170170+ unsafe {171171+ core::ptr::addr_of!(self.repr.bitmap)172172+ }170173 } else {171174 // SAFETY: Bitmap is represented as array of `unsigned long`.172175 unsafe { self.repr.ptr.as_ptr() }···185182 fn deref_mut(&mut self) -> &mut Bitmap {186183 let ptr = if self.nbits <= BITS_PER_LONG {187184 // SAFETY: Bitmap is represented inline.188188- unsafe { core::ptr::addr_of_mut!(self.repr.bitmap) }185185+ #[allow(unused_unsafe, reason = "Safe since Rust 1.92.0")]186186+ unsafe {187187+ core::ptr::addr_of_mut!(self.repr.bitmap)188188+ }189189 } else {190190 // SAFETY: Bitmap is represented as array of `unsigned long`.191191 unsafe { self.repr.ptr.as_ptr() }
+1-2
rust/kernel/cpufreq.rs
···3838const CPUFREQ_NAME_LEN: usize = bindings::CPUFREQ_NAME_LEN as usize;39394040/// Default transition latency value in nanoseconds.4141-pub const DEFAULT_TRANSITION_LATENCY_NS: u32 =4242- bindings::CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;4141+pub const DEFAULT_TRANSITION_LATENCY_NS: u32 = bindings::CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;43424443/// CPU frequency driver flags.4544pub mod flags {
+2-2
rust/kernel/fs/file.rs
···448448 }449449}450450451451-/// Represents the `EBADF` error code.451451+/// Represents the [`EBADF`] error code.452452///453453-/// Used for methods that can only fail with `EBADF`.453453+/// Used for methods that can only fail with [`EBADF`].454454#[derive(Copy, Clone, Eq, PartialEq)]455455pub struct BadFdError;456456
+1-1
sound/firewire/amdtp-stream.h
···3232 * allows 5 times as large as IEC 61883-6 defines.3333 * @CIP_HEADER_WITHOUT_EOH: Only for in-stream. CIP Header doesn't include3434 * valid EOH.3535- * @CIP_NO_HEADERS: a lack of headers in packets3535+ * @CIP_NO_HEADER: a lack of headers in packets3636 * @CIP_UNALIGHED_DBC: Only for in-stream. The value of dbc is not alighed to3737 * the value of current SYT_INTERVAL; e.g. initial value is not zero.3838 * @CIP_UNAWARE_SYT: For outgoing packet, the value in SYT field of CIP is 0xffff.
···18181919#include "test_util.h"20202121+sigjmp_buf expect_sigbus_jmpbuf;2222+2323+void __attribute__((used)) expect_sigbus_handler(int signum)2424+{2525+ siglongjmp(expect_sigbus_jmpbuf, 1);2626+}2727+2128/*2229 * Random number generator that is usable from guest code. This is the2330 * Park-Miller LCG using standard constants.