Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR.

No conflicts.

Adjacent changes:

net/core/page_pool_user.c
0b11b1c5c320 ("netdev: let netlink core handle -EMSGSIZE errors")
429679dcf7d9 ("page_pool: fix netlink dump stop/resume")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+2712 -1421
+2
Documentation/devicetree/bindings/net/renesas,ethertsn.yaml
··· 65 65 66 66 rx-internal-delay-ps: 67 67 enum: [0, 1800] 68 + default: 0 68 69 69 70 tx-internal-delay-ps: 70 71 enum: [0, 2000] 72 + default: 0 71 73 72 74 '#address-cells': 73 75 const: 1
+1 -1
Documentation/driver-api/dpll.rst
··· 545 545 to drive dpll with signal recovered from the PHY netdevice. 546 546 This is done by exposing a pin to the netdevice - attaching pin to the 547 547 netdevice itself with 548 - ``netdev_dpll_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin)``. 548 + ``dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin)``. 549 549 Exposed pin id handle ``DPLL_A_PIN_ID`` is then identifiable by the user 550 550 as it is attached to rtnetlink respond to get ``RTM_NEWLINK`` command in 551 551 nested attribute ``IFLA_DPLL_PIN``.
+1
Documentation/virt/hyperv/index.rst
··· 10 10 overview 11 11 vmbus 12 12 clocks 13 + vpci
+316
Documentation/virt/hyperv/vpci.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + PCI pass-thru devices 4 + ========================= 5 + In a Hyper-V guest VM, PCI pass-thru devices (also called 6 + virtual PCI devices, or vPCI devices) are physical PCI devices 7 + that are mapped directly into the VM's physical address space. 8 + Guest device drivers can interact directly with the hardware 9 + without intermediation by the host hypervisor. This approach 10 + provides higher bandwidth access to the device with lower 11 + latency, compared with devices that are virtualized by the 12 + hypervisor. The device should appear to the guest just as it 13 + would when running on bare metal, so no changes are required 14 + to the Linux device drivers for the device. 15 + 16 + Hyper-V terminology for vPCI devices is "Discrete Device 17 + Assignment" (DDA). Public documentation for Hyper-V DDA is 18 + available here: `DDA`_ 19 + 20 + .. _DDA: https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment 21 + 22 + DDA is typically used for storage controllers, such as NVMe, 23 + and for GPUs. A similar mechanism for NICs is called SR-IOV 24 + and produces the same benefits by allowing a guest device 25 + driver to interact directly with the hardware. See Hyper-V 26 + public documentation here: `SR-IOV`_ 27 + 28 + .. _SR-IOV: https://learn.microsoft.com/en-us/windows-hardware/drivers/network/overview-of-single-root-i-o-virtualization--sr-iov- 29 + 30 + This discussion of vPCI devices includes DDA and SR-IOV 31 + devices. 32 + 33 + Device Presentation 34 + ------------------- 35 + Hyper-V provides full PCI functionality for a vPCI device when 36 + it is operating, so the Linux device driver for the device can 37 + be used unchanged, provided it uses the correct Linux kernel 38 + APIs for accessing PCI config space and for other integration 39 + with Linux. 
But the initial detection of the PCI device and 40 + its integration with the Linux PCI subsystem must use Hyper-V 41 + specific mechanisms. Consequently, vPCI devices on Hyper-V 42 + have a dual identity. They are initially presented to Linux 43 + guests as VMBus devices via the standard VMBus "offer" 44 + mechanism, so they have a VMBus identity and appear under 45 + /sys/bus/vmbus/devices. The VMBus vPCI driver in Linux at 46 + drivers/pci/controller/pci-hyperv.c handles a newly introduced 47 + vPCI device by fabricating a PCI bus topology and creating all 48 + the normal PCI device data structures in Linux that would 49 + exist if the PCI device were discovered via ACPI on a bare- 50 + metal system. Once those data structures are set up, the 51 + device also has a normal PCI identity in Linux, and the normal 52 + Linux device driver for the vPCI device can function as if it 53 + were running in Linux on bare-metal. Because vPCI devices are 54 + presented dynamically through the VMBus offer mechanism, they 55 + do not appear in the Linux guest's ACPI tables. vPCI devices 56 + may be added to a VM or removed from a VM at any time during 57 + the life of the VM, and not just during initial boot. 58 + 59 + With this approach, the vPCI device is a VMBus device and a 60 + PCI device at the same time. In response to the VMBus offer 61 + message, the hv_pci_probe() function runs and establishes a 62 + VMBus connection to the vPCI VSP on the Hyper-V host. That 63 + connection has a single VMBus channel. The channel is used to 64 + exchange messages with the vPCI VSP for the purpose of setting 65 + up and configuring the vPCI device in Linux. Once the device 66 + is fully configured in Linux as a PCI device, the VMBus 67 + channel is used only if Linux changes the vCPU to be interrupted 68 + in the guest, or if the vPCI device is removed from 69 + the VM while the VM is running. 
The ongoing operation of the 70 + device happens directly between the Linux device driver for 71 + the device and the hardware, with VMBus and the VMBus channel 72 + playing no role. 73 + 74 + PCI Device Setup 75 + ---------------- 76 + PCI device setup follows a sequence that Hyper-V originally 77 + created for Windows guests, and that can be ill-suited for 78 + Linux guests due to differences in the overall structure of 79 + the Linux PCI subsystem compared with Windows. Nonetheless, 80 + with a bit of hackery in the Hyper-V virtual PCI driver for 81 + Linux, the virtual PCI device is set up in Linux so that 82 + generic Linux PCI subsystem code and the Linux driver for the 83 + device "just work". 84 + 85 + Each vPCI device is set up in Linux to be in its own PCI 86 + domain with a host bridge. The PCI domainID is derived from 87 + bytes 4 and 5 of the instance GUID assigned to the VMBus vPCI 88 + device. The Hyper-V host does not guarantee that these bytes 89 + are unique, so hv_pci_probe() has an algorithm to resolve 90 + collisions. The collision resolution is intended to be stable 91 + across reboots of the same VM so that the PCI domainIDs don't 92 + change, as the domainID appears in the user space 93 + configuration of some devices. 94 + 95 + hv_pci_probe() allocates a guest MMIO range to be used as PCI 96 + config space for the device. This MMIO range is communicated 97 + to the Hyper-V host over the VMBus channel as part of telling 98 + the host that the device is ready to enter d0. See 99 + hv_pci_enter_d0(). When the guest subsequently accesses this 100 + MMIO range, the Hyper-V host intercepts the accesses and maps 101 + them to the physical device PCI config space. 102 + 103 + hv_pci_probe() also gets BAR information for the device from 104 + the Hyper-V host, and uses this information to allocate MMIO 105 + space for the BARs. 
That MMIO space is then set up to be 106 + associated with the host bridge so that it works when generic 107 + PCI subsystem code in Linux processes the BARs. 108 + 109 + Finally, hv_pci_probe() creates the root PCI bus. At this 110 + point the Hyper-V virtual PCI driver hackery is done, and the 111 + normal Linux PCI machinery for scanning the root bus works to 112 + detect the device, to perform driver matching, and to 113 + initialize the driver and device. 114 + 115 + PCI Device Removal 116 + ------------------ 117 + A Hyper-V host may initiate removal of a vPCI device from a 118 + guest VM at any time during the life of the VM. The removal 119 + is instigated by an admin action taken on the Hyper-V host and 120 + is not under the control of the guest OS. 121 + 122 + A guest VM is notified of the removal by an unsolicited 123 + "Eject" message sent from the host to the guest over the VMBus 124 + channel associated with the vPCI device. Upon receipt of such 125 + a message, the Hyper-V virtual PCI driver in Linux 126 + asynchronously invokes Linux kernel PCI subsystem calls to 127 + shut down and remove the device. When those calls are 128 + complete, an "Ejection Complete" message is sent back to 129 + Hyper-V over the VMBus channel indicating that the device has 130 + been removed. At this point, Hyper-V sends a VMBus rescind 131 + message to the Linux guest, which the VMBus driver in Linux 132 + processes by removing the VMBus identity for the device. Once 133 + that processing is complete, all vestiges of the device having 134 + been present are gone from the Linux kernel. The rescind 135 + message also indicates to the guest that Hyper-V has stopped 136 + providing support for the vPCI device in the guest. If the 137 + guest were to attempt to access that device's MMIO space, it 138 + would be an invalid reference. Hypercalls affecting the device 139 + return errors, and any further messages sent in the VMBus 140 + channel are ignored. 
141 + 142 + After sending the Eject message, Hyper-V allows the guest VM 143 + 60 seconds to cleanly shut down the device and respond with 144 + Ejection Complete before sending the VMBus rescind 145 + message. If for any reason the Eject steps don't complete 146 + within the allowed 60 seconds, the Hyper-V host forcibly 147 + performs the rescind steps, which will likely result in 148 + cascading errors in the guest because the device is now no 149 + longer present from the guest standpoint and accessing the 150 + device MMIO space will fail. 151 + 152 + Because ejection is asynchronous and can happen at any point 153 + during the guest VM lifecycle, proper synchronization in the 154 + Hyper-V virtual PCI driver is very tricky. Ejection has been 155 + observed even before a newly offered vPCI device has been 156 + fully set up. The Hyper-V virtual PCI driver has been updated 157 + several times over the years to fix race conditions when 158 + ejections happen at inopportune times. Care must be taken when 159 + modifying this code to prevent re-introducing such problems. 160 + See comments in the code. 161 + 162 + Interrupt Assignment 163 + -------------------- 164 + The Hyper-V virtual PCI driver supports vPCI devices using 165 + MSI, multi-MSI, or MSI-X. Assigning the guest vCPU that will 166 + receive the interrupt for a particular MSI or MSI-X message is 167 + complex because of the way the Linux setup of IRQs maps onto 168 + the Hyper-V interfaces. For the single-MSI and MSI-X cases, 169 + Linux calls hv_compose_msi_msg() twice, with the first call 170 + containing a dummy vCPU and the second call containing the 171 + real vCPU. Furthermore, hv_irq_unmask() is finally called 172 + (on x86) or the GICD registers are set (on arm64) to specify 173 + the real vCPU again. Each of these three calls interacts 174 + with Hyper-V, which must decide which physical CPU should 175 + receive the interrupt before it is forwarded to the guest VM. 
176 + Unfortunately, the Hyper-V decision-making process is a bit 177 + limited, and can result in concentrating the physical 178 + interrupts on a single CPU, causing a performance bottleneck. 179 + See details about how this is resolved in the extensive 180 + comment above the function hv_compose_msi_req_get_cpu(). 181 + 182 + The Hyper-V virtual PCI driver implements the 183 + irq_chip.irq_compose_msi_msg function as hv_compose_msi_msg(). 184 + Unfortunately, on Hyper-V the implementation requires sending 185 + a VMBus message to the Hyper-V host and awaiting an interrupt 186 + indicating receipt of a reply message. Since 187 + irq_chip.irq_compose_msi_msg can be called with IRQ locks 188 + held, it doesn't work to do the normal sleep until awakened by 189 + the interrupt. Instead hv_compose_msi_msg() must send the 190 + VMBus message, and then poll for the completion message. As 191 + further complexity, the vPCI device could be ejected/rescinded 192 + while the polling is in progress, so this scenario must be 193 + detected as well. See comments in the code regarding this 194 + very tricky area. 195 + 196 + Most of the code in the Hyper-V virtual PCI driver (pci- 197 + hyperv.c) applies to Hyper-V and Linux guests running on x86 198 + and on arm64 architectures. But there are differences in how 199 + interrupt assignments are managed. On x86, the Hyper-V 200 + virtual PCI driver in the guest must make a hypercall to tell 201 + Hyper-V which guest vCPU should be interrupted by each 202 + MSI/MSI-X interrupt, and the x86 interrupt vector number that 203 + the x86_vector IRQ domain has picked for the interrupt. This 204 + hypercall is made by hv_arch_irq_unmask(). On arm64, the 205 + Hyper-V virtual PCI driver manages the allocation of an SPI 206 + for each MSI/MSI-X interrupt. The Hyper-V virtual PCI driver 207 + stores the allocated SPI in the architectural GICD registers, 208 + which Hyper-V emulates, so no hypercall is necessary as with 209 + x86. 
Hyper-V does not support using LPIs for vPCI devices in 210 + arm64 guest VMs because it does not emulate a GICv3 ITS. 211 + 212 + The Hyper-V virtual PCI driver in Linux supports vPCI devices 213 + whose drivers create managed or unmanaged Linux IRQs. If the 214 + smp_affinity for an unmanaged IRQ is updated via the /proc/irq 215 + interface, the Hyper-V virtual PCI driver is called to tell 216 + the Hyper-V host to change the interrupt targeting and 217 + everything works properly. However, on x86 if the x86_vector 218 + IRQ domain needs to reassign an interrupt vector due to 219 + running out of vectors on a CPU, there's no path to inform the 220 + Hyper-V host of the change, and things break. Fortunately, 221 + guest VMs operate in a constrained device environment where 222 + using all the vectors on a CPU doesn't happen. Since such a 223 + problem is only a theoretical concern rather than a practical 224 + concern, it has been left unaddressed. 225 + 226 + DMA 227 + --- 228 + By default, Hyper-V pins all guest VM memory in the host 229 + when the VM is created, and programs the physical IOMMU to 230 + allow the VM to have DMA access to all its memory. Hence 231 + it is safe to assign PCI devices to the VM, and allow the 232 + guest operating system to program the DMA transfers. The 233 + physical IOMMU prevents a malicious guest from initiating 234 + DMA to memory belonging to the host or to other VMs on the 235 + host. From the Linux guest standpoint, such DMA transfers 236 + are in "direct" mode since Hyper-V does not provide a virtual 237 + IOMMU in the guest. 238 + 239 + Hyper-V assumes that physical PCI devices always perform 240 + cache-coherent DMA. When running on x86, this behavior is 241 + required by the architecture. When running on arm64, the 242 + architecture allows for both cache-coherent and 243 + non-cache-coherent devices, with the behavior of each device 244 + specified in the ACPI DSDT. 
But when a PCI device is assigned 245 + to a guest VM, that device does not appear in the DSDT, so the 246 + Hyper-V VMBus driver propagates cache-coherency information 247 + from the VMBus node in the ACPI DSDT to all VMBus devices, 248 + including vPCI devices (since they have a dual identity as a VMBus 249 + device and as a PCI device). See vmbus_dma_configure(). 250 + Current Hyper-V versions always indicate that the VMBus is 251 + cache coherent, so vPCI devices on arm64 always get marked as 252 + cache coherent and the CPU does not perform any sync 253 + operations as part of dma_map/unmap_*() calls. 254 + 255 + vPCI protocol versions 256 + ---------------------- 257 + As previously described, during vPCI device setup and teardown, 258 + messages are passed over a VMBus channel between the Hyper-V 259 + host and the Hyper-V vPCI driver in the Linux guest. Some 260 + messages have been revised in newer versions of Hyper-V, so 261 + the guest and host must agree on the vPCI protocol version to 262 + be used. The version is negotiated when communication over 263 + the VMBus channel is first established. See 264 + hv_pci_protocol_negotiation(). Newer versions of the protocol 265 + extend support to VMs with more than 64 vCPUs, and provide 266 + additional information about the vPCI device, such as the 267 + guest virtual NUMA node to which it is most closely affined in 268 + the underlying hardware. 269 + 270 + Guest NUMA node affinity 271 + ------------------------ 272 + When the vPCI protocol version provides it, the guest NUMA 273 + node affinity of the vPCI device is stored as part of the Linux 274 + device information for subsequent use by the Linux driver. See 275 + hv_pci_assign_numa_node(). If the negotiated protocol version 276 + does not support the host providing NUMA affinity information, 277 + the Linux guest defaults the device NUMA node to 0. 
But even 278 + when the negotiated protocol version includes NUMA affinity 279 + information, the ability of the host to provide such 280 + information depends on certain host configuration options. If 281 + the guest receives NUMA node value "0", it could mean NUMA 282 + node 0, or it could mean "no information is available". 283 + Unfortunately it is not possible to distinguish the two cases 284 + from the guest side. 285 + 286 + PCI config space access in a CoCo VM 287 + ------------------------------------ 288 + Linux PCI device drivers access PCI config space using a 289 + standard set of functions provided by the Linux PCI subsystem. 290 + In Hyper-V guests these standard functions map to functions 291 + hv_pcifront_read_config() and hv_pcifront_write_config() 292 + in the Hyper-V virtual PCI driver. In normal VMs, 293 + these hv_pcifront_*() functions directly access the PCI config 294 + space, and the accesses trap to Hyper-V to be handled. 295 + But in CoCo VMs, memory encryption prevents Hyper-V 296 + from reading the guest instruction stream to emulate the 297 + access, so the hv_pcifront_*() functions must invoke 298 + hypercalls with explicit arguments describing the access to be 299 + made. 300 + 301 + Config Block back-channel 302 + ------------------------- 303 + The Hyper-V host and Hyper-V virtual PCI driver in Linux 304 + together implement a non-standard back-channel communication 305 + path between the host and guest. The back-channel path uses 306 + messages sent over the VMBus channel associated with the vPCI 307 + device. The functions hyperv_read_cfg_blk() and 308 + hyperv_write_cfg_blk() are the primary interfaces provided to 309 + other parts of the Linux kernel. As of this writing, these 310 + interfaces are used only by the Mellanox mlx5 driver to pass 311 + diagnostic data to a Hyper-V host running in the Azure public 312 + cloud. 
The functions hyperv_read_cfg_blk() and 313 + hyperv_write_cfg_blk() are implemented in a separate module 314 + (pci-hyperv-intf.c, under CONFIG_PCI_HYPERV_INTERFACE) that 315 + effectively stubs them out when running in non-Hyper-V 316 + environments.
+15 -34
MAINTAINERS
··· 1395 1395 1396 1396 ANALOGBITS PLL LIBRARIES 1397 1397 M: Paul Walmsley <paul.walmsley@sifive.com> 1398 + M: Samuel Holland <samuel.holland@sifive.com> 1398 1399 S: Supported 1399 1400 F: drivers/clk/analogbits/* 1400 1401 F: include/linux/clk/analogbits* ··· 2157 2156 M: Sascha Hauer <s.hauer@pengutronix.de> 2158 2157 R: Pengutronix Kernel Team <kernel@pengutronix.de> 2159 2158 R: Fabio Estevam <festevam@gmail.com> 2160 - R: NXP Linux Team <linux-imx@nxp.com> 2159 + L: imx@lists.linux.dev 2161 2160 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 2162 2161 S: Maintained 2163 2162 T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git ··· 8506 8505 M: Wei Fang <wei.fang@nxp.com> 8507 8506 R: Shenwei Wang <shenwei.wang@nxp.com> 8508 8507 R: Clark Wang <xiaoning.wang@nxp.com> 8509 - R: NXP Linux Team <linux-imx@nxp.com> 8508 + L: imx@lists.linux.dev 8510 8509 L: netdev@vger.kernel.org 8511 8510 S: Maintained 8512 8511 F: Documentation/devicetree/bindings/net/fsl,fec.yaml ··· 8541 8540 FREESCALE IMX LPI2C DRIVER 8542 8541 M: Dong Aisheng <aisheng.dong@nxp.com> 8543 8542 L: linux-i2c@vger.kernel.org 8544 - L: linux-imx@nxp.com 8543 + L: imx@lists.linux.dev 8545 8544 S: Maintained 8546 8545 F: Documentation/devicetree/bindings/i2c/i2c-imx-lpi2c.yaml 8547 8546 F: drivers/i2c/busses/i2c-imx-lpi2c.c ··· 15748 15747 NXP i.MX 7D/6SX/6UL/93 AND VF610 ADC DRIVER 15749 15748 M: Haibo Chen <haibo.chen@nxp.com> 15750 15749 L: linux-iio@vger.kernel.org 15751 - L: linux-imx@nxp.com 15750 + L: imx@lists.linux.dev 15752 15751 S: Maintained 15753 15752 F: Documentation/devicetree/bindings/iio/adc/fsl,imx7d-adc.yaml 15754 15753 F: Documentation/devicetree/bindings/iio/adc/fsl,vf610-adc.yaml ··· 15785 15784 NXP i.MX 8QXP ADC DRIVER 15786 15785 M: Cai Huoqing <cai.huoqing@linux.dev> 15787 15786 M: Haibo Chen <haibo.chen@nxp.com> 15788 - L: linux-imx@nxp.com 15787 + L: imx@lists.linux.dev 15789 15788 L: linux-iio@vger.kernel.org 15790 15789 S: 
Maintained 15791 15790 F: Documentation/devicetree/bindings/iio/adc/nxp,imx8qxp-adc.yaml ··· 15793 15792 15794 15793 NXP i.MX 8QXP/8QM JPEG V4L2 DRIVER 15795 15794 M: Mirela Rabulea <mirela.rabulea@nxp.com> 15796 - R: NXP Linux Team <linux-imx@nxp.com> 15795 + L: imx@lists.linux.dev 15797 15796 L: linux-media@vger.kernel.org 15798 15797 S: Maintained 15799 15798 F: Documentation/devicetree/bindings/media/nxp,imx8-jpeg.yaml ··· 15803 15802 M: Abel Vesa <abelvesa@kernel.org> 15804 15803 R: Peng Fan <peng.fan@nxp.com> 15805 15804 L: linux-clk@vger.kernel.org 15806 - L: linux-imx@nxp.com 15805 + L: imx@lists.linux.dev 15807 15806 S: Maintained 15808 15807 T: git git://git.kernel.org/pub/scm/linux/kernel/git/abelvesa/linux.git clk/imx 15809 15808 F: Documentation/devicetree/bindings/clock/imx* ··· 16764 16763 PCI DRIVER FOR FU740 16765 16764 M: Paul Walmsley <paul.walmsley@sifive.com> 16766 16765 M: Greentime Hu <greentime.hu@sifive.com> 16766 + M: Samuel Holland <samuel.holland@sifive.com> 16767 16767 L: linux-pci@vger.kernel.org 16768 16768 S: Maintained 16769 16769 F: Documentation/devicetree/bindings/pci/sifive,fu740-pcie.yaml ··· 19682 19680 19683 19681 SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) NXP i.MX DRIVER 19684 19682 M: Haibo Chen <haibo.chen@nxp.com> 19685 - L: linux-imx@nxp.com 19683 + L: imx@lists.linux.dev 19686 19684 L: linux-mmc@vger.kernel.org 19687 19685 S: Maintained 19688 19686 F: drivers/mmc/host/sdhci-esdhc-imx.c ··· 20017 20015 F: drivers/watchdog/simatic-ipc-wdt.c 20018 20016 20019 20017 SIFIVE DRIVERS 20020 - M: Palmer Dabbelt <palmer@dabbelt.com> 20021 20018 M: Paul Walmsley <paul.walmsley@sifive.com> 20019 + M: Samuel Holland <samuel.holland@sifive.com> 20022 20020 L: linux-riscv@lists.infradead.org 20023 20021 S: Supported 20024 - N: sifive 20025 - K: [^@]sifive 20026 - 20027 - SIFIVE CACHE DRIVER 20028 - M: Conor Dooley <conor@kernel.org> 20029 - L: linux-riscv@lists.infradead.org 20030 - S: Maintained 20031 - F: 
Documentation/devicetree/bindings/cache/sifive,ccache0.yaml 20032 - F: drivers/cache/sifive_ccache.c 20033 - 20034 - SIFIVE FU540 SYSTEM-ON-CHIP 20035 - M: Paul Walmsley <paul.walmsley@sifive.com> 20036 - M: Palmer Dabbelt <palmer@dabbelt.com> 20037 - L: linux-riscv@lists.infradead.org 20038 - S: Supported 20039 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/pjw/sifive.git 20040 - N: fu540 20041 - K: fu540 20042 - 20043 - SIFIVE PDMA DRIVER 20044 - M: Green Wan <green.wan@sifive.com> 20045 - S: Maintained 20046 - F: Documentation/devicetree/bindings/dma/sifive,fu540-c000-pdma.yaml 20047 20022 F: drivers/dma/sf-pdma/ 20048 - 20023 + N: sifive 20024 + K: fu[57]40 20025 + K: [^@]sifive 20049 20026 20050 20027 SILEAD TOUCHSCREEN DRIVER 20051 20028 M: Hans de Goede <hdegoede@redhat.com> ··· 20234 20253 F: drivers/net/ethernet/socionext/sni_ave.c 20235 20254 20236 20255 SOCIONEXT (SNI) NETSEC NETWORK DRIVER 20237 - M: Jassi Brar <jaswinder.singh@linaro.org> 20238 20256 M: Ilias Apalodimas <ilias.apalodimas@linaro.org> 20257 + M: Masahisa Kojima <kojima.masahisa@socionext.com> 20239 20258 L: netdev@vger.kernel.org 20240 20259 S: Maintained 20241 20260 F: Documentation/devicetree/bindings/net/socionext,synquacer-netsec.yaml
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 8 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc6 5 + EXTRAVERSION = -rc7 6 6 NAME = Hurr durr I'ma ninja sloth 7 7 8 8 # *DOCUMENTATION*
-26
arch/arm/boot/dts/nxp/imx/imx7s.dtsi
··· 834 834 <&clks IMX7D_LCDIF_PIXEL_ROOT_CLK>; 835 835 clock-names = "pix", "axi"; 836 836 status = "disabled"; 837 - 838 - port { 839 - #address-cells = <1>; 840 - #size-cells = <0>; 841 - 842 - lcdif_out_mipi_dsi: endpoint@0 { 843 - reg = <0>; 844 - remote-endpoint = <&mipi_dsi_in_lcdif>; 845 - }; 846 - }; 847 837 }; 848 838 849 839 mipi_csi: mipi-csi@30750000 { ··· 885 895 samsung,esc-clock-frequency = <20000000>; 886 896 samsung,pll-clock-frequency = <24000000>; 887 897 status = "disabled"; 888 - 889 - ports { 890 - #address-cells = <1>; 891 - #size-cells = <0>; 892 - 893 - port@0 { 894 - reg = <0>; 895 - #address-cells = <1>; 896 - #size-cells = <0>; 897 - 898 - mipi_dsi_in_lcdif: endpoint@0 { 899 - reg = <0>; 900 - remote-endpoint = <&lcdif_out_mipi_dsi>; 901 - }; 902 - }; 903 - }; 904 898 }; 905 899 }; 906 900
+1
arch/arm/configs/imx_v6_v7_defconfig
··· 297 297 CONFIG_LCD_CLASS_DEVICE=y 298 298 CONFIG_LCD_L4F00242T03=y 299 299 CONFIG_LCD_PLATFORM=y 300 + CONFIG_BACKLIGHT_CLASS_DEVICE=y 300 301 CONFIG_BACKLIGHT_PWM=y 301 302 CONFIG_BACKLIGHT_GPIO=y 302 303 CONFIG_FRAMEBUFFER_CONSOLE=y
+1
arch/arm64/boot/dts/allwinner/Makefile
··· 42 42 dtb-$(CONFIG_ARCH_SUNXI) += sun50i-h616-bigtreetech-pi.dtb 43 43 dtb-$(CONFIG_ARCH_SUNXI) += sun50i-h616-orangepi-zero2.dtb 44 44 dtb-$(CONFIG_ARCH_SUNXI) += sun50i-h616-x96-mate.dtb 45 + dtb-$(CONFIG_ARCH_SUNXI) += sun50i-h618-orangepi-zero2w.dtb 45 46 dtb-$(CONFIG_ARCH_SUNXI) += sun50i-h618-orangepi-zero3.dtb 46 47 dtb-$(CONFIG_ARCH_SUNXI) += sun50i-h618-transpeed-8k618-t.dtb
+1 -1
arch/arm64/boot/dts/freescale/imx8mp-dhcom-som.dtsi
··· 255 255 <&clk IMX8MP_AUDIO_PLL2_OUT>; 256 256 assigned-clock-parents = <&clk IMX8MP_AUDIO_PLL2_OUT>; 257 257 assigned-clock-rates = <13000000>, <13000000>, <156000000>; 258 - reset-gpios = <&gpio3 21 GPIO_ACTIVE_HIGH>; 258 + reset-gpios = <&gpio4 1 GPIO_ACTIVE_HIGH>; 259 259 status = "disabled"; 260 260 261 261 ports {
+1 -1
arch/arm64/boot/dts/freescale/imx8mp.dtsi
··· 1820 1820 compatible = "fsl,imx8mp-ldb"; 1821 1821 reg = <0x5c 0x4>, <0x128 0x4>; 1822 1822 reg-names = "ldb", "lvds"; 1823 - clocks = <&clk IMX8MP_CLK_MEDIA_LDB>; 1823 + clocks = <&clk IMX8MP_CLK_MEDIA_LDB_ROOT>; 1824 1824 clock-names = "ldb"; 1825 1825 assigned-clocks = <&clk IMX8MP_CLK_MEDIA_LDB>; 1826 1826 assigned-clock-parents = <&clk IMX8MP_VIDEO_PLL1_OUT>;
+1 -1
arch/arm64/boot/dts/nvidia/tegra234-p3737-0000+p3701-0000.dts
··· 175 175 status = "okay"; 176 176 177 177 phy-handle = <&mgbe0_phy>; 178 - phy-mode = "usxgmii"; 178 + phy-mode = "10gbase-r"; 179 179 180 180 mdio { 181 181 #address-cells = <1>;
+3 -3
arch/arm64/boot/dts/nvidia/tegra234.dtsi
··· 1459 1459 <&mc TEGRA234_MEMORY_CLIENT_MGBEAWR &emc>; 1460 1460 interconnect-names = "dma-mem", "write"; 1461 1461 iommus = <&smmu_niso0 TEGRA234_SID_MGBE>; 1462 - power-domains = <&bpmp TEGRA234_POWER_DOMAIN_MGBEA>; 1462 + power-domains = <&bpmp TEGRA234_POWER_DOMAIN_MGBEB>; 1463 1463 status = "disabled"; 1464 1464 }; 1465 1465 ··· 1493 1493 <&mc TEGRA234_MEMORY_CLIENT_MGBEBWR &emc>; 1494 1494 interconnect-names = "dma-mem", "write"; 1495 1495 iommus = <&smmu_niso0 TEGRA234_SID_MGBE_VF1>; 1496 - power-domains = <&bpmp TEGRA234_POWER_DOMAIN_MGBEB>; 1496 + power-domains = <&bpmp TEGRA234_POWER_DOMAIN_MGBEC>; 1497 1497 status = "disabled"; 1498 1498 }; 1499 1499 ··· 1527 1527 <&mc TEGRA234_MEMORY_CLIENT_MGBECWR &emc>; 1528 1528 interconnect-names = "dma-mem", "write"; 1529 1529 iommus = <&smmu_niso0 TEGRA234_SID_MGBE_VF2>; 1530 - power-domains = <&bpmp TEGRA234_POWER_DOMAIN_MGBEC>; 1530 + power-domains = <&bpmp TEGRA234_POWER_DOMAIN_MGBED>; 1531 1531 status = "disabled"; 1532 1532 }; 1533 1533
+6 -33
arch/arm64/boot/dts/qcom/msm8996.dtsi
··· 457 457 }; 458 458 }; 459 459 460 - mpm: interrupt-controller { 461 - compatible = "qcom,mpm"; 462 - qcom,rpm-msg-ram = <&apss_mpm>; 463 - interrupts = <GIC_SPI 171 IRQ_TYPE_EDGE_RISING>; 464 - mboxes = <&apcs_glb 1>; 465 - interrupt-controller; 466 - #interrupt-cells = <2>; 467 - #power-domain-cells = <0>; 468 - interrupt-parent = <&intc>; 469 - qcom,mpm-pin-count = <96>; 470 - qcom,mpm-pin-map = <2 184>, /* TSENS1 upper_lower_int */ 471 - <52 243>, /* DWC3_PRI ss_phy_irq */ 472 - <79 347>, /* DWC3_PRI hs_phy_irq */ 473 - <80 352>, /* DWC3_SEC hs_phy_irq */ 474 - <81 347>, /* QUSB2_PHY_PRI DP+DM */ 475 - <82 352>, /* QUSB2_PHY_SEC DP+DM */ 476 - <87 326>; /* SPMI */ 477 - }; 478 - 479 460 psci { 480 461 compatible = "arm,psci-1.0"; 481 462 method = "smc"; ··· 746 765 }; 747 766 748 767 rpm_msg_ram: sram@68000 { 749 - compatible = "qcom,rpm-msg-ram", "mmio-sram"; 768 + compatible = "qcom,rpm-msg-ram"; 750 769 reg = <0x00068000 0x6000>; 751 - #address-cells = <1>; 752 - #size-cells = <1>; 753 - ranges = <0 0x00068000 0x7000>; 754 - 755 - apss_mpm: sram@1b8 { 756 - reg = <0x1b8 0x48>; 757 - }; 758 770 }; 759 771 760 772 qfprom@74000 { ··· 830 856 reg = <0x004ad000 0x1000>, /* TM */ 831 857 <0x004ac000 0x1000>; /* SROT */ 832 858 #qcom,sensors = <8>; 833 - interrupts-extended = <&mpm 2 IRQ_TYPE_LEVEL_HIGH>, 834 - <&intc GIC_SPI 430 IRQ_TYPE_LEVEL_HIGH>; 859 + interrupts = <GIC_SPI 184 IRQ_TYPE_LEVEL_HIGH>, 860 + <GIC_SPI 430 IRQ_TYPE_LEVEL_HIGH>; 835 861 interrupt-names = "uplow", "critical"; 836 862 #thermal-sensor-cells = <1>; 837 863 }; ··· 1337 1363 interrupts = <GIC_SPI 208 IRQ_TYPE_LEVEL_HIGH>; 1338 1364 gpio-controller; 1339 1365 gpio-ranges = <&tlmm 0 0 150>; 1340 - wakeup-parent = <&mpm>; 1341 1366 #gpio-cells = <2>; 1342 1367 interrupt-controller; 1343 1368 #interrupt-cells = <2>; ··· 1864 1891 <0x0400a000 0x002100>; 1865 1892 reg-names = "core", "chnls", "obsrvr", "intr", "cnfg"; 1866 1893 interrupt-names = "periph_irq"; 1867 - interrupts-extended = 
<&mpm 87 IRQ_TYPE_LEVEL_HIGH>; 1894 + interrupts = <GIC_SPI 326 IRQ_TYPE_LEVEL_HIGH>; 1868 1895 qcom,ee = <0>; 1869 1896 qcom,channel = <0>; 1870 1897 #address-cells = <2>; ··· 3025 3052 #size-cells = <1>; 3026 3053 ranges; 3027 3054 3028 - interrupts-extended = <&mpm 79 IRQ_TYPE_LEVEL_HIGH>, 3029 - <&mpm 52 IRQ_TYPE_LEVEL_HIGH>; 3055 + interrupts = <GIC_SPI 347 IRQ_TYPE_LEVEL_HIGH>, 3056 + <GIC_SPI 243 IRQ_TYPE_LEVEL_HIGH>; 3030 3057 interrupt-names = "hs_phy_irq", "ss_phy_irq"; 3031 3058 3032 3059 clocks = <&gcc GCC_SYS_NOC_USB3_AXI_CLK>,
+2
arch/arm64/boot/dts/qcom/sc8280xp-crd.dts
··· 563 563 }; 564 564 565 565 &pcie4 { 566 + max-link-speed = <2>; 567 + 566 568 perst-gpios = <&tlmm 141 GPIO_ACTIVE_LOW>; 567 569 wake-gpios = <&tlmm 139 GPIO_ACTIVE_LOW>; 568 570
+2
arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
··· 722 722 }; 723 723 724 724 &pcie4 { 725 + max-link-speed = <2>; 726 + 725 727 perst-gpios = <&tlmm 141 GPIO_ACTIVE_LOW>; 726 728 wake-gpios = <&tlmm 139 GPIO_ACTIVE_LOW>; 727 729
+3
arch/arm64/boot/dts/qcom/sm6115.dtsi
··· 1304 1304 &config_noc SLAVE_QUP_0 RPM_ALWAYS_TAG>, 1305 1305 <&system_noc MASTER_QUP_0 RPM_ALWAYS_TAG 1306 1306 &bimc SLAVE_EBI_CH0 RPM_ALWAYS_TAG>; 1307 + interconnect-names = "qup-core", 1308 + "qup-config", 1309 + "qup-memory"; 1307 1310 #address-cells = <1>; 1308 1311 #size-cells = <0>; 1309 1312 status = "disabled";
+1 -1
arch/arm64/boot/dts/qcom/sm8650-mtp.dts
··· 622 622 623 623 &tlmm { 624 624 /* Reserved I/Os for NFC */ 625 - gpio-reserved-ranges = <32 8>; 625 + gpio-reserved-ranges = <32 8>, <74 1>; 626 626 627 627 disp0_reset_n_active: disp0-reset-n-active-state { 628 628 pins = "gpio133";
+1 -1
arch/arm64/boot/dts/qcom/sm8650-qrd.dts
··· 659 659 660 660 &tlmm { 661 661 /* Reserved I/Os for NFC */ 662 - gpio-reserved-ranges = <32 8>; 662 + gpio-reserved-ranges = <32 8>, <74 1>; 663 663 664 664 bt_default: bt-default-state { 665 665 bt-en-pins {
+2 -2
arch/powerpc/include/asm/rtas.h
··· 69 69 RTAS_FNIDX__IBM_READ_SLOT_RESET_STATE, 70 70 RTAS_FNIDX__IBM_READ_SLOT_RESET_STATE2, 71 71 RTAS_FNIDX__IBM_REMOVE_PE_DMA_WINDOW, 72 - RTAS_FNIDX__IBM_RESET_PE_DMA_WINDOWS, 72 + RTAS_FNIDX__IBM_RESET_PE_DMA_WINDOW, 73 73 RTAS_FNIDX__IBM_SCAN_LOG_DUMP, 74 74 RTAS_FNIDX__IBM_SET_DYNAMIC_INDICATOR, 75 75 RTAS_FNIDX__IBM_SET_EEH_OPTION, ··· 164 164 #define RTAS_FN_IBM_READ_SLOT_RESET_STATE rtas_fn_handle(RTAS_FNIDX__IBM_READ_SLOT_RESET_STATE) 165 165 #define RTAS_FN_IBM_READ_SLOT_RESET_STATE2 rtas_fn_handle(RTAS_FNIDX__IBM_READ_SLOT_RESET_STATE2) 166 166 #define RTAS_FN_IBM_REMOVE_PE_DMA_WINDOW rtas_fn_handle(RTAS_FNIDX__IBM_REMOVE_PE_DMA_WINDOW) 167 - #define RTAS_FN_IBM_RESET_PE_DMA_WINDOWS rtas_fn_handle(RTAS_FNIDX__IBM_RESET_PE_DMA_WINDOWS) 167 + #define RTAS_FN_IBM_RESET_PE_DMA_WINDOW rtas_fn_handle(RTAS_FNIDX__IBM_RESET_PE_DMA_WINDOW) 168 168 #define RTAS_FN_IBM_SCAN_LOG_DUMP rtas_fn_handle(RTAS_FNIDX__IBM_SCAN_LOG_DUMP) 169 169 #define RTAS_FN_IBM_SET_DYNAMIC_INDICATOR rtas_fn_handle(RTAS_FNIDX__IBM_SET_DYNAMIC_INDICATOR) 170 170 #define RTAS_FN_IBM_SET_EEH_OPTION rtas_fn_handle(RTAS_FNIDX__IBM_SET_EEH_OPTION)
+7 -2
arch/powerpc/kernel/rtas.c
··· 375 375 [RTAS_FNIDX__IBM_REMOVE_PE_DMA_WINDOW] = { 376 376 .name = "ibm,remove-pe-dma-window", 377 377 }, 378 - [RTAS_FNIDX__IBM_RESET_PE_DMA_WINDOWS] = { 379 - .name = "ibm,reset-pe-dma-windows", 378 + [RTAS_FNIDX__IBM_RESET_PE_DMA_WINDOW] = { 379 + /* 380 + * Note: PAPR+ v2.13 7.3.31.4.1 spells this as 381 + * "ibm,reset-pe-dma-windows" (plural), but RTAS 382 + * implementations use the singular form in practice. 383 + */ 384 + .name = "ibm,reset-pe-dma-window", 380 385 }, 381 386 [RTAS_FNIDX__IBM_SCAN_LOG_DUMP] = { 382 387 .name = "ibm,scan-log-dump",
+105 -51
arch/powerpc/platforms/pseries/iommu.c
··· 574 574 575 575 struct iommu_table_ops iommu_table_lpar_multi_ops; 576 576 577 - /* 578 - * iommu_table_setparms_lpar 579 - * 580 - * Function: On pSeries LPAR systems, return TCE table info, given a pci bus. 581 - */ 582 - static void iommu_table_setparms_lpar(struct pci_controller *phb, 583 - struct device_node *dn, 584 - struct iommu_table *tbl, 585 - struct iommu_table_group *table_group, 586 - const __be32 *dma_window) 587 - { 588 - unsigned long offset, size, liobn; 589 - 590 - of_parse_dma_window(dn, dma_window, &liobn, &offset, &size); 591 - 592 - iommu_table_setparms_common(tbl, phb->bus->number, liobn, offset, size, IOMMU_PAGE_SHIFT_4K, NULL, 593 - &iommu_table_lpar_multi_ops); 594 - 595 - 596 - table_group->tce32_start = offset; 597 - table_group->tce32_size = size; 598 - } 599 - 600 577 struct iommu_table_ops iommu_table_pseries_ops = { 601 578 .set = tce_build_pSeries, 602 579 .clear = tce_free_pSeries, ··· 701 724 * dynamic 64bit DMA window, walking up the device tree. 
702 725 */ 703 726 static struct device_node *pci_dma_find(struct device_node *dn, 704 - const __be32 **dma_window) 727 + struct dynamic_dma_window_prop *prop) 705 728 { 706 - const __be32 *dw = NULL; 729 + const __be32 *default_prop = NULL; 730 + const __be32 *ddw_prop = NULL; 731 + struct device_node *rdn = NULL; 732 + bool default_win = false, ddw_win = false; 707 733 708 734 for ( ; dn && PCI_DN(dn); dn = dn->parent) { 709 - dw = of_get_property(dn, "ibm,dma-window", NULL); 710 - if (dw) { 711 - if (dma_window) 712 - *dma_window = dw; 713 - return dn; 735 + default_prop = of_get_property(dn, "ibm,dma-window", NULL); 736 + if (default_prop) { 737 + rdn = dn; 738 + default_win = true; 714 739 } 715 - dw = of_get_property(dn, DIRECT64_PROPNAME, NULL); 716 - if (dw) 717 - return dn; 718 - dw = of_get_property(dn, DMA64_PROPNAME, NULL); 719 - if (dw) 720 - return dn; 740 + ddw_prop = of_get_property(dn, DIRECT64_PROPNAME, NULL); 741 + if (ddw_prop) { 742 + rdn = dn; 743 + ddw_win = true; 744 + break; 745 + } 746 + ddw_prop = of_get_property(dn, DMA64_PROPNAME, NULL); 747 + if (ddw_prop) { 748 + rdn = dn; 749 + ddw_win = true; 750 + break; 751 + } 752 + 753 + /* At least found default window, which is the case for normal boot */ 754 + if (default_win) 755 + break; 721 756 } 722 757 723 - return NULL; 758 + /* For PCI devices there will always be a DMA window, either on the device 759 + * or parent bus 760 + */ 761 + WARN_ON(!(default_win | ddw_win)); 762 + 763 + /* caller doesn't want to get DMA window property */ 764 + if (!prop) 765 + return rdn; 766 + 767 + /* parse DMA window property. During normal system boot, only default 768 + * DMA window is passed in OF. But, for kdump, a dedicated adapter might 769 + * have both default and DDW in FDT. In this scenario, DDW takes precedence 770 + * over default window. 
771 + */ 772 + if (ddw_win) { 773 + struct dynamic_dma_window_prop *p; 774 + 775 + p = (struct dynamic_dma_window_prop *)ddw_prop; 776 + prop->liobn = p->liobn; 777 + prop->dma_base = p->dma_base; 778 + prop->tce_shift = p->tce_shift; 779 + prop->window_shift = p->window_shift; 780 + } else if (default_win) { 781 + unsigned long offset, size, liobn; 782 + 783 + of_parse_dma_window(rdn, default_prop, &liobn, &offset, &size); 784 + 785 + prop->liobn = cpu_to_be32((u32)liobn); 786 + prop->dma_base = cpu_to_be64(offset); 787 + prop->tce_shift = cpu_to_be32(IOMMU_PAGE_SHIFT_4K); 788 + prop->window_shift = cpu_to_be32(order_base_2(size)); 789 + } 790 + 791 + return rdn; 724 792 } 725 793 726 794 static void pci_dma_bus_setup_pSeriesLP(struct pci_bus *bus) ··· 773 751 struct iommu_table *tbl; 774 752 struct device_node *dn, *pdn; 775 753 struct pci_dn *ppci; 776 - const __be32 *dma_window = NULL; 754 + struct dynamic_dma_window_prop prop; 777 755 778 756 dn = pci_bus_to_OF_node(bus); 779 757 780 758 pr_debug("pci_dma_bus_setup_pSeriesLP: setting up bus %pOF\n", 781 759 dn); 782 760 783 - pdn = pci_dma_find(dn, &dma_window); 761 + pdn = pci_dma_find(dn, &prop); 784 762 785 - if (dma_window == NULL) 786 - pr_debug(" no ibm,dma-window property !\n"); 763 + /* In PPC architecture, there will always be DMA window on bus or one of the 764 + * parent bus. During reboot, there will be ibm,dma-window property to 765 + * define DMA window. For kdump, there will at least be default window or DDW 766 + * or both. 
767 + */ 787 768 788 769 ppci = PCI_DN(pdn); 789 770 ··· 796 771 if (!ppci->table_group) { 797 772 ppci->table_group = iommu_pseries_alloc_group(ppci->phb->node); 798 773 tbl = ppci->table_group->tables[0]; 799 - if (dma_window) { 800 - iommu_table_setparms_lpar(ppci->phb, pdn, tbl, 801 - ppci->table_group, dma_window); 802 774 803 - if (!iommu_init_table(tbl, ppci->phb->node, 0, 0)) 804 - panic("Failed to initialize iommu table"); 805 - } 775 + iommu_table_setparms_common(tbl, ppci->phb->bus->number, 776 + be32_to_cpu(prop.liobn), 777 + be64_to_cpu(prop.dma_base), 778 + 1ULL << be32_to_cpu(prop.window_shift), 779 + be32_to_cpu(prop.tce_shift), NULL, 780 + &iommu_table_lpar_multi_ops); 781 + 782 + /* Only for normal boot with default window. Doesn't matter even 783 + * if we set these with DDW which is 64bit during kdump, since 784 + * these will not be used during kdump. 785 + */ 786 + ppci->table_group->tce32_start = be64_to_cpu(prop.dma_base); 787 + ppci->table_group->tce32_size = 1 << be32_to_cpu(prop.window_shift); 788 + 789 + if (!iommu_init_table(tbl, ppci->phb->node, 0, 0)) 790 + panic("Failed to initialize iommu table"); 791 + 806 792 iommu_register_group(ppci->table_group, 807 793 pci_domain_nr(bus), 0); 808 794 pr_debug(" created table: %p\n", ppci->table_group); ··· 1004 968 continue; 1005 969 } 1006 970 971 + /* If at the time of system initialization, there are DDWs in OF, 972 + * it means this is during kexec. DDW could be direct or dynamic. 973 + * We will just mark DDWs as "dynamic" since this is the kdump path, 974 + * no need to worry about performance. ddw_list_new_entry() will 975 + * set window->direct = false.
976 + */ 1007 977 window = ddw_list_new_entry(pdn, dma64); 1008 978 if (!window) { 1009 979 of_node_put(pdn); ··· 1566 1524 { 1567 1525 struct device_node *pdn, *dn; 1568 1526 struct iommu_table *tbl; 1569 - const __be32 *dma_window = NULL; 1570 1527 struct pci_dn *pci; 1528 + struct dynamic_dma_window_prop prop; 1571 1529 1572 1530 pr_debug("pci_dma_dev_setup_pSeriesLP: %s\n", pci_name(dev)); 1573 1531 ··· 1580 1538 dn = pci_device_to_OF_node(dev); 1581 1539 pr_debug(" node is %pOF\n", dn); 1582 1540 1583 - pdn = pci_dma_find(dn, &dma_window); 1541 + pdn = pci_dma_find(dn, &prop); 1584 1542 if (!pdn || !PCI_DN(pdn)) { 1585 1543 printk(KERN_WARNING "pci_dma_dev_setup_pSeriesLP: " 1586 1544 "no DMA window found for pci dev=%s dn=%pOF\n", ··· 1593 1551 if (!pci->table_group) { 1594 1552 pci->table_group = iommu_pseries_alloc_group(pci->phb->node); 1595 1553 tbl = pci->table_group->tables[0]; 1596 - iommu_table_setparms_lpar(pci->phb, pdn, tbl, 1597 - pci->table_group, dma_window); 1554 + 1555 + iommu_table_setparms_common(tbl, pci->phb->bus->number, 1556 + be32_to_cpu(prop.liobn), 1557 + be64_to_cpu(prop.dma_base), 1558 + 1ULL << be32_to_cpu(prop.window_shift), 1559 + be32_to_cpu(prop.tce_shift), NULL, 1560 + &iommu_table_lpar_multi_ops); 1561 + 1562 + /* Only for normal boot with default window. Doesn't matter even 1563 + * if we set these with DDW which is 64bit during kdump, since 1564 + * these will not be used during kdump. 1565 + */ 1566 + pci->table_group->tce32_start = be64_to_cpu(prop.dma_base); 1567 + pci->table_group->tce32_size = 1 << be32_to_cpu(prop.window_shift); 1598 1568 1599 1569 iommu_init_table(tbl, pci->phb->node, 0, 0); 1600 1570 iommu_register_group(pci->table_group,
-1
arch/riscv/Kconfig
··· 315 315 # https://reviews.llvm.org/D123515 316 316 def_bool y 317 317 depends on $(as-instr, .option arch$(comma) +m) 318 - depends on !$(as-instr, .option arch$(comma) -i) 319 318 320 319 source "arch/riscv/Kconfig.socs" 321 320 source "arch/riscv/Kconfig.errata"
+2
arch/riscv/include/asm/csr.h
··· 424 424 # define CSR_STATUS CSR_MSTATUS 425 425 # define CSR_IE CSR_MIE 426 426 # define CSR_TVEC CSR_MTVEC 427 + # define CSR_ENVCFG CSR_MENVCFG 427 428 # define CSR_SCRATCH CSR_MSCRATCH 428 429 # define CSR_EPC CSR_MEPC 429 430 # define CSR_CAUSE CSR_MCAUSE ··· 449 448 # define CSR_STATUS CSR_SSTATUS 450 449 # define CSR_IE CSR_SIE 451 450 # define CSR_TVEC CSR_STVEC 451 + # define CSR_ENVCFG CSR_SENVCFG 452 452 # define CSR_SCRATCH CSR_SSCRATCH 453 453 # define CSR_EPC CSR_SEPC 454 454 # define CSR_CAUSE CSR_SCAUSE
+5
arch/riscv/include/asm/ftrace.h
··· 25 25 26 26 #define ARCH_SUPPORTS_FTRACE_OPS 1 27 27 #ifndef __ASSEMBLY__ 28 + 29 + extern void *return_address(unsigned int level); 30 + 31 + #define ftrace_return_address(n) return_address(n) 32 + 28 33 void MCOUNT_NAME(void); 29 34 static inline unsigned long ftrace_call_adjust(unsigned long addr) 30 35 {
+2
arch/riscv/include/asm/hugetlb.h
··· 11 11 } 12 12 #define arch_clear_hugepage_flags arch_clear_hugepage_flags 13 13 14 + #ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION 14 15 bool arch_hugetlb_migration_supported(struct hstate *h); 15 16 #define arch_hugetlb_migration_supported arch_hugetlb_migration_supported 17 + #endif 16 18 17 19 #ifdef CONFIG_RISCV_ISA_SVNAPOT 18 20 #define __HAVE_ARCH_HUGE_PTE_CLEAR
+2
arch/riscv/include/asm/hwcap.h
··· 81 81 #define RISCV_ISA_EXT_ZTSO 72 82 82 #define RISCV_ISA_EXT_ZACAS 73 83 83 84 + #define RISCV_ISA_EXT_XLINUXENVCFG 127 85 + 84 86 #define RISCV_ISA_EXT_MAX 128 85 87 #define RISCV_ISA_EXT_INVALID U32_MAX 86 88
+17 -3
arch/riscv/include/asm/pgalloc.h
··· 95 95 __pud_free(mm, pud); 96 96 } 97 97 98 - #define __pud_free_tlb(tlb, pud, addr) pud_free((tlb)->mm, pud) 98 + #define __pud_free_tlb(tlb, pud, addr) \ 99 + do { \ 100 + if (pgtable_l4_enabled) { \ 101 + pagetable_pud_dtor(virt_to_ptdesc(pud)); \ 102 + tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pud)); \ 103 + } \ 104 + } while (0) 99 105 100 106 #define p4d_alloc_one p4d_alloc_one 101 107 static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr) ··· 130 124 __p4d_free(mm, p4d); 131 125 } 132 126 133 - #define __p4d_free_tlb(tlb, p4d, addr) p4d_free((tlb)->mm, p4d) 127 + #define __p4d_free_tlb(tlb, p4d, addr) \ 128 + do { \ 129 + if (pgtable_l5_enabled) \ 130 + tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(p4d)); \ 131 + } while (0) 134 132 #endif /* __PAGETABLE_PMD_FOLDED */ 135 133 136 134 static inline void sync_kernel_mappings(pgd_t *pgd) ··· 159 149 160 150 #ifndef __PAGETABLE_PMD_FOLDED 161 151 162 - #define __pmd_free_tlb(tlb, pmd, addr) pmd_free((tlb)->mm, pmd) 152 + #define __pmd_free_tlb(tlb, pmd, addr) \ 153 + do { \ 154 + pagetable_pmd_dtor(virt_to_ptdesc(pmd)); \ 155 + tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pmd)); \ 156 + } while (0) 163 157 164 158 #endif /* __PAGETABLE_PMD_FOLDED */ 165 159
+1 -1
arch/riscv/include/asm/pgtable-64.h
··· 136 136 * 10010 - IO Strongly-ordered, Non-cacheable, Non-bufferable, Shareable, Non-trustable 137 137 */ 138 138 #define _PAGE_PMA_THEAD ((1UL << 62) | (1UL << 61) | (1UL << 60)) 139 - #define _PAGE_NOCACHE_THEAD ((1UL < 61) | (1UL << 60)) 139 + #define _PAGE_NOCACHE_THEAD ((1UL << 61) | (1UL << 60)) 140 140 #define _PAGE_IO_THEAD ((1UL << 63) | (1UL << 60)) 141 141 #define _PAGE_MTMASK_THEAD (_PAGE_PMA_THEAD | _PAGE_IO_THEAD | (1UL << 59)) 142 142
+5 -1
arch/riscv/include/asm/pgtable.h
··· 84 84 * Define vmemmap for pfn_to_page & page_to_pfn calls. Needed if kernel 85 85 * is configured with CONFIG_SPARSEMEM_VMEMMAP enabled. 86 86 */ 87 - #define vmemmap ((struct page *)VMEMMAP_START) 87 + #define vmemmap ((struct page *)VMEMMAP_START - (phys_ram_base >> PAGE_SHIFT)) 88 88 89 89 #define PCI_IO_SIZE SZ_16M 90 90 #define PCI_IO_END VMEMMAP_START ··· 438 438 { 439 439 return pte; 440 440 } 441 + 442 + #define pte_leaf_size(pte) (pte_napot(pte) ? \ 443 + napot_cont_size(napot_cont_order(pte)) :\ 444 + PAGE_SIZE) 441 445 442 446 #ifdef CONFIG_NUMA_BALANCING 443 447 /*
+1
arch/riscv/include/asm/suspend.h
··· 14 14 struct pt_regs regs; 15 15 /* Saved and restored by high-level functions */ 16 16 unsigned long scratch; 17 + unsigned long envcfg; 17 18 unsigned long tvec; 18 19 unsigned long ie; 19 20 #ifdef CONFIG_MMU
+1 -60
arch/riscv/include/asm/vmalloc.h
··· 19 19 return true; 20 20 } 21 21 22 - #ifdef CONFIG_RISCV_ISA_SVNAPOT 23 - #include <linux/pgtable.h> 22 + #endif 24 23 25 - #define arch_vmap_pte_range_map_size arch_vmap_pte_range_map_size 26 - static inline unsigned long arch_vmap_pte_range_map_size(unsigned long addr, unsigned long end, 27 - u64 pfn, unsigned int max_page_shift) 28 - { 29 - unsigned long map_size = PAGE_SIZE; 30 - unsigned long size, order; 31 - 32 - if (!has_svnapot()) 33 - return map_size; 34 - 35 - for_each_napot_order_rev(order) { 36 - if (napot_cont_shift(order) > max_page_shift) 37 - continue; 38 - 39 - size = napot_cont_size(order); 40 - if (end - addr < size) 41 - continue; 42 - 43 - if (!IS_ALIGNED(addr, size)) 44 - continue; 45 - 46 - if (!IS_ALIGNED(PFN_PHYS(pfn), size)) 47 - continue; 48 - 49 - map_size = size; 50 - break; 51 - } 52 - 53 - return map_size; 54 - } 55 - 56 - #define arch_vmap_pte_supported_shift arch_vmap_pte_supported_shift 57 - static inline int arch_vmap_pte_supported_shift(unsigned long size) 58 - { 59 - int shift = PAGE_SHIFT; 60 - unsigned long order; 61 - 62 - if (!has_svnapot()) 63 - return shift; 64 - 65 - WARN_ON_ONCE(size >= PMD_SIZE); 66 - 67 - for_each_napot_order_rev(order) { 68 - if (napot_cont_size(order) > size) 69 - continue; 70 - 71 - if (!IS_ALIGNED(size, napot_cont_size(order))) 72 - continue; 73 - 74 - shift = napot_cont_shift(order); 75 - break; 76 - } 77 - 78 - return shift; 79 - } 80 - 81 - #endif /* CONFIG_RISCV_ISA_SVNAPOT */ 82 - #endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */ 83 24 #endif /* _ASM_RISCV_VMALLOC_H */
+2
arch/riscv/kernel/Makefile
··· 7 7 CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE) 8 8 CFLAGS_REMOVE_patch.o = $(CC_FLAGS_FTRACE) 9 9 CFLAGS_REMOVE_sbi.o = $(CC_FLAGS_FTRACE) 10 + CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE) 10 11 endif 11 12 CFLAGS_syscall_table.o += $(call cc-option,-Wno-override-init,) 12 13 CFLAGS_compat_syscall_table.o += $(call cc-option,-Wno-override-init,) ··· 47 46 obj-y += process.o 48 47 obj-y += ptrace.o 49 48 obj-y += reset.o 49 + obj-y += return_address.o 50 50 obj-y += setup.o 51 51 obj-y += signal.o 52 52 obj-y += syscall_table.o
+28 -3
arch/riscv/kernel/cpufeature.c
··· 24 24 #include <asm/hwprobe.h> 25 25 #include <asm/patch.h> 26 26 #include <asm/processor.h> 27 + #include <asm/sbi.h> 27 28 #include <asm/vector.h> 28 29 29 30 #include "copy-unaligned.h" ··· 203 202 }; 204 203 205 204 /* 205 + * While the [ms]envcfg CSRs were not defined until version 1.12 of the RISC-V 206 + * privileged ISA, the existence of the CSRs is implied by any extension which 207 + * specifies [ms]envcfg bit(s). Hence, we define a custom ISA extension for the 208 + * existence of the CSR, and treat it as a subset of those other extensions. 209 + */ 210 + static const unsigned int riscv_xlinuxenvcfg_exts[] = { 211 + RISCV_ISA_EXT_XLINUXENVCFG 212 + }; 213 + 214 + /* 206 215 * The canonical order of ISA extension names in the ISA string is defined in 207 216 * chapter 27 of the unprivileged specification. 208 217 * ··· 261 250 __RISCV_ISA_EXT_DATA(c, RISCV_ISA_EXT_c), 262 251 __RISCV_ISA_EXT_DATA(v, RISCV_ISA_EXT_v), 263 252 __RISCV_ISA_EXT_DATA(h, RISCV_ISA_EXT_h), 264 - __RISCV_ISA_EXT_DATA(zicbom, RISCV_ISA_EXT_ZICBOM), 265 - __RISCV_ISA_EXT_DATA(zicboz, RISCV_ISA_EXT_ZICBOZ), 253 + __RISCV_ISA_EXT_SUPERSET(zicbom, RISCV_ISA_EXT_ZICBOM, riscv_xlinuxenvcfg_exts), 254 + __RISCV_ISA_EXT_SUPERSET(zicboz, RISCV_ISA_EXT_ZICBOZ, riscv_xlinuxenvcfg_exts), 266 255 __RISCV_ISA_EXT_DATA(zicntr, RISCV_ISA_EXT_ZICNTR), 267 256 __RISCV_ISA_EXT_DATA(zicond, RISCV_ISA_EXT_ZICOND), 268 257 __RISCV_ISA_EXT_DATA(zicsr, RISCV_ISA_EXT_ZICSR), ··· 547 536 set_bit(RISCV_ISA_EXT_ZIFENCEI, isainfo->isa); 548 537 set_bit(RISCV_ISA_EXT_ZICNTR, isainfo->isa); 549 538 set_bit(RISCV_ISA_EXT_ZIHPM, isainfo->isa); 539 + } 540 + 541 + /* 542 + * "V" in ISA strings is ambiguous in practice: it should mean 543 + * just the standard V-1.0 but vendors aren't well behaved. 544 + * Many vendors with T-Head CPU cores which implement the 0.7.1 545 + * version of the vector specification put "v" into their DTs. 546 + * CPU cores with the ratified spec will contain non-zero 547 + * marchid. 
548 + */ 549 + if (acpi_disabled && riscv_cached_mvendorid(cpu) == THEAD_VENDOR_ID && 550 + riscv_cached_marchid(cpu) == 0x0) { 551 + this_hwcap &= ~isa2hwcap[RISCV_ISA_EXT_v]; 552 + clear_bit(RISCV_ISA_EXT_v, isainfo->isa); 550 553 } 551 554 552 555 /* ··· 975 950 void riscv_user_isa_enable(void) 976 951 { 977 952 if (riscv_cpu_has_extension_unlikely(smp_processor_id(), RISCV_ISA_EXT_ZICBOZ)) 978 - csr_set(CSR_SENVCFG, ENVCFG_CBZE); 953 + csr_set(CSR_ENVCFG, ENVCFG_CBZE); 979 954 } 980 955 981 956 #ifdef CONFIG_RISCV_ALTERNATIVE
+48
arch/riscv/kernel/return_address.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * This code comes from arch/arm64/kernel/return_address.c 4 + * 5 + * Copyright (C) 2023 SiFive. 6 + */ 7 + 8 + #include <linux/export.h> 9 + #include <linux/kprobes.h> 10 + #include <linux/stacktrace.h> 11 + 12 + struct return_address_data { 13 + unsigned int level; 14 + void *addr; 15 + }; 16 + 17 + static bool save_return_addr(void *d, unsigned long pc) 18 + { 19 + struct return_address_data *data = d; 20 + 21 + if (!data->level) { 22 + data->addr = (void *)pc; 23 + return false; 24 + } 25 + 26 + --data->level; 27 + 28 + return true; 29 + } 30 + NOKPROBE_SYMBOL(save_return_addr); 31 + 32 + noinline void *return_address(unsigned int level) 33 + { 34 + struct return_address_data data; 35 + 36 + data.level = level + 3; 37 + data.addr = NULL; 38 + 39 + arch_stack_walk(save_return_addr, &data, current, NULL); 40 + 41 + if (!data.level) 42 + return data.addr; 43 + else 44 + return NULL; 45 + 46 + } 47 + EXPORT_SYMBOL_GPL(return_address); 48 + NOKPROBE_SYMBOL(return_address);
+4
arch/riscv/kernel/suspend.c
··· 15 15 void suspend_save_csrs(struct suspend_context *context) 16 16 { 17 17 context->scratch = csr_read(CSR_SCRATCH); 18 + if (riscv_cpu_has_extension_unlikely(smp_processor_id(), RISCV_ISA_EXT_XLINUXENVCFG)) 19 + context->envcfg = csr_read(CSR_ENVCFG); 18 20 context->tvec = csr_read(CSR_TVEC); 19 21 context->ie = csr_read(CSR_IE); 20 22 ··· 38 36 void suspend_restore_csrs(struct suspend_context *context) 39 37 { 40 38 csr_write(CSR_SCRATCH, context->scratch); 39 + if (riscv_cpu_has_extension_unlikely(smp_processor_id(), RISCV_ISA_EXT_XLINUXENVCFG)) 40 + csr_write(CSR_ENVCFG, context->envcfg); 41 41 csr_write(CSR_TVEC, context->tvec); 42 42 csr_write(CSR_IE, context->ie); 43 43
+2
arch/riscv/mm/hugetlbpage.c
··· 426 426 return __hugetlb_valid_size(size); 427 427 } 428 428 429 + #ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION 429 430 bool arch_hugetlb_migration_supported(struct hstate *h) 430 431 { 431 432 return __hugetlb_valid_size(huge_page_size(h)); 432 433 } 434 + #endif 433 435 434 436 #ifdef CONFIG_CONTIG_ALLOC 435 437 static __init int gigantic_pages_init(void)
+7
arch/x86/hyperv/hv_vtl.c
··· 16 16 extern struct boot_params boot_params; 17 17 static struct real_mode_header hv_vtl_real_mode_header; 18 18 19 + static bool __init hv_vtl_msi_ext_dest_id(void) 20 + { 21 + return true; 22 + } 23 + 19 24 void __init hv_vtl_init_platform(void) 20 25 { 21 26 pr_info("Linux runs in Hyper-V Virtual Trust Level\n"); ··· 43 38 x86_platform.legacy.warm_reset = 0; 44 39 x86_platform.legacy.reserve_bios_regions = 0; 45 40 x86_platform.legacy.devices.pnpbios = 0; 41 + 42 + x86_init.hyper.msi_ext_dest_id = hv_vtl_msi_ext_dest_id; 46 43 } 47 44 48 45 static inline u64 hv_vtl_system_desc_base(struct ldttss_desc *desc)
+60 -5
arch/x86/hyperv/ivm.c
··· 15 15 #include <asm/io.h> 16 16 #include <asm/coco.h> 17 17 #include <asm/mem_encrypt.h> 18 + #include <asm/set_memory.h> 18 19 #include <asm/mshyperv.h> 19 20 #include <asm/hypervisor.h> 20 21 #include <asm/mtrr.h> ··· 504 503 } 505 504 506 505 /* 506 + * When transitioning memory between encrypted and decrypted, the caller 507 + * of set_memory_encrypted() or set_memory_decrypted() is responsible for 508 + * ensuring that the memory isn't in use and isn't referenced while the 509 + * transition is in progress. The transition has multiple steps, and the 510 + * memory is in an inconsistent state until all steps are complete. A 511 + * reference while the state is inconsistent could result in an exception 512 + * that can't be cleanly fixed up. 513 + * 514 + * But the Linux kernel load_unaligned_zeropad() mechanism could cause a 515 + * stray reference that can't be prevented by the caller, so Linux has 516 + * specific code to handle this case. But when the #VC and #VE exceptions 517 + * are routed to a paravisor, the specific code doesn't work. To avoid this 518 + * problem, mark the pages as "not present" while the transition is in 519 + * progress. If load_unaligned_zeropad() causes a stray reference, a normal 520 + * page fault is generated instead of #VC or #VE, and the page-fault-based 521 + * handlers for load_unaligned_zeropad() resolve the reference. When the 522 + * transition is complete, hv_vtom_set_host_visibility() marks the pages 523 + * as "present" again. 524 + */ 525 + static bool hv_vtom_clear_present(unsigned long kbuffer, int pagecount, bool enc) 526 + { 527 + return !set_memory_np(kbuffer, pagecount); 528 + } 529 + 530 + /* 507 531 * hv_vtom_set_host_visibility - Set specified memory visible to host. 508 532 * 509 533 * In Isolation VM, all guest memory is encrypted from host and guest ··· 541 515 enum hv_mem_host_visibility visibility = enc ?
542 516 VMBUS_PAGE_NOT_VISIBLE : VMBUS_PAGE_VISIBLE_READ_WRITE; 543 517 u64 *pfn_array; 518 + phys_addr_t paddr; 519 + void *vaddr; 544 520 int ret = 0; 545 521 bool result = true; 546 522 int i, pfn; 547 523 548 524 pfn_array = kmalloc(HV_HYP_PAGE_SIZE, GFP_KERNEL); 549 - if (!pfn_array) 550 - return false; 525 + if (!pfn_array) { 526 + result = false; 527 + goto err_set_memory_p; 528 + } 551 529 552 530 for (i = 0, pfn = 0; i < pagecount; i++) { 553 - pfn_array[pfn] = virt_to_hvpfn((void *)kbuffer + i * HV_HYP_PAGE_SIZE); 531 + /* 532 + * Use slow_virt_to_phys() because the PRESENT bit has been 533 + * temporarily cleared in the PTEs. slow_virt_to_phys() works 534 + * without the PRESENT bit while virt_to_hvpfn() or similar 535 + * does not. 536 + */ 537 + vaddr = (void *)kbuffer + (i * HV_HYP_PAGE_SIZE); 538 + paddr = slow_virt_to_phys(vaddr); 539 + pfn_array[pfn] = paddr >> HV_HYP_PAGE_SHIFT; 554 540 pfn++; 555 541 556 542 if (pfn == HV_MAX_MODIFY_GPA_REP_COUNT || i == pagecount - 1) { ··· 576 538 } 577 539 } 578 540 579 - err_free_pfn_array: 541 + err_free_pfn_array: 580 542 kfree(pfn_array); 543 + 544 + err_set_memory_p: 545 + /* 546 + * Set the PTE PRESENT bits again to revert what hv_vtom_clear_present() 547 + * did. Do this even if there is an error earlier in this function in 548 + * order to avoid leaving the memory range in a "broken" state. Setting 549 + * the PRESENT bits shouldn't fail, but return an error if it does. 550 + */ 551 + if (set_memory_p(kbuffer, pagecount)) 552 + result = false; 553 + 581 554 return result; 582 555 } 583 556 584 557 static bool hv_vtom_tlb_flush_required(bool private) 585 558 { 586 - return true; 559 + /* 560 + * Since hv_vtom_clear_present() marks the PTEs as "not present" 561 + * and flushes the TLB, they can't be in the TLB. That makes the 562 + * flush controlled by this function redundant, so return "false". 
563 + */ 564 + return false; 587 565 } 588 566 589 567 static bool hv_vtom_cache_flush_required(void) ··· 662 608 x86_platform.hyper.is_private_mmio = hv_is_private_mmio; 663 609 x86_platform.guest.enc_cache_flush_required = hv_vtom_cache_flush_required; 664 610 x86_platform.guest.enc_tlb_flush_required = hv_vtom_tlb_flush_required; 611 + x86_platform.guest.enc_status_change_prepare = hv_vtom_clear_present; 665 612 x86_platform.guest.enc_status_change_finish = hv_vtom_set_host_visibility; 666 613 667 614 /* Set WB as the default cache mode. */
+1
arch/x86/include/asm/set_memory.h
··· 47 47 int set_memory_wc(unsigned long addr, int numpages); 48 48 int set_memory_wb(unsigned long addr, int numpages); 49 49 int set_memory_np(unsigned long addr, int numpages); 50 + int set_memory_p(unsigned long addr, int numpages); 50 51 int set_memory_4k(unsigned long addr, int numpages); 51 52 int set_memory_encrypted(unsigned long addr, int numpages); 52 53 int set_memory_decrypted(unsigned long addr, int numpages);
+2 -2
arch/x86/kernel/cpu/common.c
··· 1589 1589 get_cpu_vendor(c); 1590 1590 get_cpu_cap(c); 1591 1591 setup_force_cpu_cap(X86_FEATURE_CPUID); 1592 + get_cpu_address_sizes(c); 1592 1593 cpu_parse_early_param(); 1593 1594 1594 1595 if (this_cpu->c_early_init) ··· 1602 1601 this_cpu->c_bsp_init(c); 1603 1602 } else { 1604 1603 setup_clear_cpu_cap(X86_FEATURE_CPUID); 1604 + get_cpu_address_sizes(c); 1605 1605 } 1606 - 1607 - get_cpu_address_sizes(c); 1608 1606 1609 1607 setup_force_cpu_cap(X86_FEATURE_ALWAYS); 1610 1608
+91 -87
arch/x86/kernel/cpu/intel.c
··· 184 184 return false; 185 185 } 186 186 187 + #define MSR_IA32_TME_ACTIVATE 0x982 188 + 189 + /* Helpers to access TME_ACTIVATE MSR */ 190 + #define TME_ACTIVATE_LOCKED(x) (x & 0x1) 191 + #define TME_ACTIVATE_ENABLED(x) (x & 0x2) 192 + 193 + #define TME_ACTIVATE_POLICY(x) ((x >> 4) & 0xf) /* Bits 7:4 */ 194 + #define TME_ACTIVATE_POLICY_AES_XTS_128 0 195 + 196 + #define TME_ACTIVATE_KEYID_BITS(x) ((x >> 32) & 0xf) /* Bits 35:32 */ 197 + 198 + #define TME_ACTIVATE_CRYPTO_ALGS(x) ((x >> 48) & 0xffff) /* Bits 63:48 */ 199 + #define TME_ACTIVATE_CRYPTO_AES_XTS_128 1 200 + 201 + /* Values for mktme_status (SW only construct) */ 202 + #define MKTME_ENABLED 0 203 + #define MKTME_DISABLED 1 204 + #define MKTME_UNINITIALIZED 2 205 + static int mktme_status = MKTME_UNINITIALIZED; 206 + 207 + static void detect_tme_early(struct cpuinfo_x86 *c) 208 + { 209 + u64 tme_activate, tme_policy, tme_crypto_algs; 210 + int keyid_bits = 0, nr_keyids = 0; 211 + static u64 tme_activate_cpu0 = 0; 212 + 213 + rdmsrl(MSR_IA32_TME_ACTIVATE, tme_activate); 214 + 215 + if (mktme_status != MKTME_UNINITIALIZED) { 216 + if (tme_activate != tme_activate_cpu0) { 217 + /* Broken BIOS? */ 218 + pr_err_once("x86/tme: configuration is inconsistent between CPUs\n"); 219 + pr_err_once("x86/tme: MKTME is not usable\n"); 220 + mktme_status = MKTME_DISABLED; 221 + 222 + /* Proceed. We may need to exclude bits from x86_phys_bits. 
*/ 223 + } 224 + } else { 225 + tme_activate_cpu0 = tme_activate; 226 + } 227 + 228 + if (!TME_ACTIVATE_LOCKED(tme_activate) || !TME_ACTIVATE_ENABLED(tme_activate)) { 229 + pr_info_once("x86/tme: not enabled by BIOS\n"); 230 + mktme_status = MKTME_DISABLED; 231 + return; 232 + } 233 + 234 + if (mktme_status != MKTME_UNINITIALIZED) 235 + goto detect_keyid_bits; 236 + 237 + pr_info("x86/tme: enabled by BIOS\n"); 238 + 239 + tme_policy = TME_ACTIVATE_POLICY(tme_activate); 240 + if (tme_policy != TME_ACTIVATE_POLICY_AES_XTS_128) 241 + pr_warn("x86/tme: Unknown policy is active: %#llx\n", tme_policy); 242 + 243 + tme_crypto_algs = TME_ACTIVATE_CRYPTO_ALGS(tme_activate); 244 + if (!(tme_crypto_algs & TME_ACTIVATE_CRYPTO_AES_XTS_128)) { 245 + pr_err("x86/mktme: No known encryption algorithm is supported: %#llx\n", 246 + tme_crypto_algs); 247 + mktme_status = MKTME_DISABLED; 248 + } 249 + detect_keyid_bits: 250 + keyid_bits = TME_ACTIVATE_KEYID_BITS(tme_activate); 251 + nr_keyids = (1UL << keyid_bits) - 1; 252 + if (nr_keyids) { 253 + pr_info_once("x86/mktme: enabled by BIOS\n"); 254 + pr_info_once("x86/mktme: %d KeyIDs available\n", nr_keyids); 255 + } else { 256 + pr_info_once("x86/mktme: disabled by BIOS\n"); 257 + } 258 + 259 + if (mktme_status == MKTME_UNINITIALIZED) { 260 + /* MKTME is usable */ 261 + mktme_status = MKTME_ENABLED; 262 + } 263 + 264 + /* 265 + * KeyID bits effectively lower the number of physical address 266 + * bits. Update cpuinfo_x86::x86_phys_bits accordingly. 267 + */ 268 + c->x86_phys_bits -= keyid_bits; 269 + } 270 + 187 271 static void early_init_intel(struct cpuinfo_x86 *c) 188 272 { 189 273 u64 misc_enable; ··· 406 322 */ 407 323 if (detect_extended_topology_early(c) < 0) 408 324 detect_ht_early(c); 325 + 326 + /* 327 + * Adjust the number of physical bits early because it affects the 328 + * valid bits of the MTRR mask registers. 
329 + */ 330 + if (cpu_has(c, X86_FEATURE_TME)) 331 + detect_tme_early(c); 409 332 } 410 333 411 334 static void bsp_init_intel(struct cpuinfo_x86 *c) ··· 573 482 #endif 574 483 } 575 484 576 - #define MSR_IA32_TME_ACTIVATE 0x982 577 - 578 - /* Helpers to access TME_ACTIVATE MSR */ 579 - #define TME_ACTIVATE_LOCKED(x) (x & 0x1) 580 - #define TME_ACTIVATE_ENABLED(x) (x & 0x2) 581 - 582 - #define TME_ACTIVATE_POLICY(x) ((x >> 4) & 0xf) /* Bits 7:4 */ 583 - #define TME_ACTIVATE_POLICY_AES_XTS_128 0 584 - 585 - #define TME_ACTIVATE_KEYID_BITS(x) ((x >> 32) & 0xf) /* Bits 35:32 */ 586 - 587 - #define TME_ACTIVATE_CRYPTO_ALGS(x) ((x >> 48) & 0xffff) /* Bits 63:48 */ 588 - #define TME_ACTIVATE_CRYPTO_AES_XTS_128 1 589 - 590 - /* Values for mktme_status (SW only construct) */ 591 - #define MKTME_ENABLED 0 592 - #define MKTME_DISABLED 1 593 - #define MKTME_UNINITIALIZED 2 594 - static int mktme_status = MKTME_UNINITIALIZED; 595 - 596 - static void detect_tme(struct cpuinfo_x86 *c) 597 - { 598 - u64 tme_activate, tme_policy, tme_crypto_algs; 599 - int keyid_bits = 0, nr_keyids = 0; 600 - static u64 tme_activate_cpu0 = 0; 601 - 602 - rdmsrl(MSR_IA32_TME_ACTIVATE, tme_activate); 603 - 604 - if (mktme_status != MKTME_UNINITIALIZED) { 605 - if (tme_activate != tme_activate_cpu0) { 606 - /* Broken BIOS? */ 607 - pr_err_once("x86/tme: configuration is inconsistent between CPUs\n"); 608 - pr_err_once("x86/tme: MKTME is not usable\n"); 609 - mktme_status = MKTME_DISABLED; 610 - 611 - /* Proceed. We may need to exclude bits from x86_phys_bits. 
*/ 612 - } 613 - } else { 614 - tme_activate_cpu0 = tme_activate; 615 - } 616 - 617 - if (!TME_ACTIVATE_LOCKED(tme_activate) || !TME_ACTIVATE_ENABLED(tme_activate)) { 618 - pr_info_once("x86/tme: not enabled by BIOS\n"); 619 - mktme_status = MKTME_DISABLED; 620 - return; 621 - } 622 - 623 - if (mktme_status != MKTME_UNINITIALIZED) 624 - goto detect_keyid_bits; 625 - 626 - pr_info("x86/tme: enabled by BIOS\n"); 627 - 628 - tme_policy = TME_ACTIVATE_POLICY(tme_activate); 629 - if (tme_policy != TME_ACTIVATE_POLICY_AES_XTS_128) 630 - pr_warn("x86/tme: Unknown policy is active: %#llx\n", tme_policy); 631 - 632 - tme_crypto_algs = TME_ACTIVATE_CRYPTO_ALGS(tme_activate); 633 - if (!(tme_crypto_algs & TME_ACTIVATE_CRYPTO_AES_XTS_128)) { 634 - pr_err("x86/mktme: No known encryption algorithm is supported: %#llx\n", 635 - tme_crypto_algs); 636 - mktme_status = MKTME_DISABLED; 637 - } 638 - detect_keyid_bits: 639 - keyid_bits = TME_ACTIVATE_KEYID_BITS(tme_activate); 640 - nr_keyids = (1UL << keyid_bits) - 1; 641 - if (nr_keyids) { 642 - pr_info_once("x86/mktme: enabled by BIOS\n"); 643 - pr_info_once("x86/mktme: %d KeyIDs available\n", nr_keyids); 644 - } else { 645 - pr_info_once("x86/mktme: disabled by BIOS\n"); 646 - } 647 - 648 - if (mktme_status == MKTME_UNINITIALIZED) { 649 - /* MKTME is usable */ 650 - mktme_status = MKTME_ENABLED; 651 - } 652 - 653 - /* 654 - * KeyID bits effectively lower the number of physical address 655 - * bits. Update cpuinfo_x86::x86_phys_bits accordingly. 656 - */ 657 - c->x86_phys_bits -= keyid_bits; 658 - } 659 - 660 485 static void init_cpuid_fault(struct cpuinfo_x86 *c) 661 486 { 662 487 u64 msr; ··· 708 701 srat_detect_node(c); 709 702 710 703 init_ia32_feat_ctl(c); 711 - 712 - if (cpu_has(c, X86_FEATURE_TME)) 713 - detect_tme(c); 714 704 715 705 init_intel_misc_features(c); 716 706
+5 -3
arch/x86/kernel/e820.c
··· 1017 1017 e820__range_update(pa_data, sizeof(*data)+data->len, E820_TYPE_RAM, E820_TYPE_RESERVED_KERN); 1018 1018 1019 1019 /* 1020 - * SETUP_EFI and SETUP_IMA are supplied by kexec and do not need 1021 - * to be reserved. 1020 + * SETUP_EFI, SETUP_IMA and SETUP_RNG_SEED are supplied by 1021 + * kexec and do not need to be reserved. 1022 1022 */ 1023 - if (data->type != SETUP_EFI && data->type != SETUP_IMA) 1023 + if (data->type != SETUP_EFI && 1024 + data->type != SETUP_IMA && 1025 + data->type != SETUP_RNG_SEED) 1024 1026 e820__range_update_kexec(pa_data, 1025 1027 sizeof(*data) + data->len, 1026 1028 E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);
+14 -10
arch/x86/mm/pat/set_memory.c
··· 755 755 * areas on 32-bit NUMA systems. The percpu areas can 756 756 * end up in this kind of memory, for instance. 757 757 * 758 - * This could be optimized, but it is only intended to be 759 - * used at initialization time, and keeping it 760 - * unoptimized should increase the testing coverage for 761 - * the more obscure platforms. 758 + * Note that as long as the PTEs are well-formed with correct PFNs, this 759 + * works without checking the PRESENT bit in the leaf PTE. This is unlike 760 + * the similar vmalloc_to_page() and derivatives. Callers may depend on 761 + * this behavior. 762 + * 763 + * This could be optimized, but it is only used in paths that are not perf 764 + * sensitive, and keeping it unoptimized should increase the testing coverage 765 + * for the more obscure platforms. 762 766 */ 763 767 phys_addr_t slow_virt_to_phys(void *__virt_addr) 764 768 { ··· 2045 2041 return rc; 2046 2042 } 2047 2043 2048 - static int set_memory_p(unsigned long *addr, int numpages) 2049 - { 2050 - return change_page_attr_set(addr, numpages, __pgprot(_PAGE_PRESENT), 0); 2051 - } 2052 - 2053 2044 /* Restore full speculative operation to the pfn. */ 2054 2045 int clear_mce_nospec(unsigned long pfn) 2055 2046 { 2056 2047 unsigned long addr = (unsigned long) pfn_to_kaddr(pfn); 2057 2048 2058 - return set_memory_p(&addr, 1); 2049 + return set_memory_p(addr, 1); 2059 2050 } 2060 2051 EXPORT_SYMBOL_GPL(clear_mce_nospec); 2061 2052 #endif /* CONFIG_X86_64 */ ··· 2101 2102 return change_page_attr_set_clr(&addr, numpages, __pgprot(0), 2102 2103 __pgprot(_PAGE_PRESENT), 0, 2103 2104 CPA_NO_CHECK_ALIAS, NULL); 2105 + } 2106 + 2107 + int set_memory_p(unsigned long addr, int numpages) 2108 + { 2109 + return change_page_attr_set(&addr, numpages, __pgprot(_PAGE_PRESENT), 0); 2104 2110 } 2105 2111 2106 2112 int set_memory_4k(unsigned long addr, int numpages)
+17 -17
drivers/crypto/allwinner/sun8i-ce/sun8i-ce-cipher.c
··· 299 299 return err; 300 300 } 301 301 302 - static void sun8i_ce_cipher_run(struct crypto_engine *engine, void *areq) 303 - { 304 - struct skcipher_request *breq = container_of(areq, struct skcipher_request, base); 305 - struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(breq); 306 - struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm); 307 - struct sun8i_ce_dev *ce = op->ce; 308 - struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(breq); 309 - int flow, err; 310 - 311 - flow = rctx->flow; 312 - err = sun8i_ce_run_task(ce, flow, crypto_tfm_alg_name(breq->base.tfm)); 313 - local_bh_disable(); 314 - crypto_finalize_skcipher_request(engine, breq, err); 315 - local_bh_enable(); 316 - } 317 - 318 302 static void sun8i_ce_cipher_unprepare(struct crypto_engine *engine, 319 303 void *async_req) 320 304 { ··· 344 360 dma_unmap_single(ce->dev, rctx->addr_key, op->keylen, DMA_TO_DEVICE); 345 361 } 346 362 363 + static void sun8i_ce_cipher_run(struct crypto_engine *engine, void *areq) 364 + { 365 + struct skcipher_request *breq = container_of(areq, struct skcipher_request, base); 366 + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(breq); 367 + struct sun8i_cipher_tfm_ctx *op = crypto_skcipher_ctx(tfm); 368 + struct sun8i_ce_dev *ce = op->ce; 369 + struct sun8i_cipher_req_ctx *rctx = skcipher_request_ctx(breq); 370 + int flow, err; 371 + 372 + flow = rctx->flow; 373 + err = sun8i_ce_run_task(ce, flow, crypto_tfm_alg_name(breq->base.tfm)); 374 + sun8i_ce_cipher_unprepare(engine, areq); 375 + local_bh_disable(); 376 + crypto_finalize_skcipher_request(engine, breq, err); 377 + local_bh_enable(); 378 + } 379 + 347 380 int sun8i_ce_cipher_do_one(struct crypto_engine *engine, void *areq) 348 381 { 349 382 int err = sun8i_ce_cipher_prepare(engine, areq); ··· 369 368 return err; 370 369 371 370 sun8i_ce_cipher_run(engine, areq); 372 - sun8i_ce_cipher_unprepare(engine, areq); 373 371 return 0; 374 372 } 375 373
+2 -2
drivers/crypto/rockchip/rk3288_crypto_ahash.c
··· 332 332 theend: 333 333 pm_runtime_put_autosuspend(rkc->dev); 334 334 335 + rk_hash_unprepare(engine, breq); 336 + 335 337 local_bh_disable(); 336 338 crypto_finalize_hash_request(engine, breq, err); 337 339 local_bh_enable(); 338 - 339 - rk_hash_unprepare(engine, breq); 340 340 341 341 return 0; 342 342 }
+17
drivers/dma/dw-edma/dw-edma-v0-core.c
··· 346 346 dw_edma_v0_write_ll_link(chunk, i, control, chunk->ll_region.paddr); 347 347 } 348 348 349 + static void dw_edma_v0_sync_ll_data(struct dw_edma_chunk *chunk) 350 + { 351 + /* 352 + * In case of remote eDMA engine setup, the DW PCIe RP/EP internal 353 + * configuration registers and application memory are normally accessed 354 + * over different buses. Ensure LL-data reaches the memory before the 355 + * doorbell register is toggled by issuing the dummy-read from the remote 356 + * LL memory in a hope that the MRd TLP will return only after the 357 + * last MWr TLP is completed 358 + */ 359 + if (!(chunk->chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL)) 360 + readl(chunk->ll_region.vaddr.io); 361 + } 362 + 349 363 static void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first) 350 364 { 351 365 struct dw_edma_chan *chan = chunk->chan; ··· 426 412 SET_CH_32(dw, chan->dir, chan->id, llp.msb, 427 413 upper_32_bits(chunk->ll_region.paddr)); 428 414 } 415 + 416 + dw_edma_v0_sync_ll_data(chunk); 417 + 429 418 /* Doorbell */ 430 419 SET_RW_32(dw, chan->dir, doorbell, 431 420 FIELD_PREP(EDMA_V0_DOORBELL_CH_MASK, chan->id));
+26 -13
drivers/dma/dw-edma/dw-hdma-v0-core.c
··· 65 65 66 66 static u16 dw_hdma_v0_core_ch_count(struct dw_edma *dw, enum dw_edma_dir dir) 67 67 { 68 - u32 num_ch = 0; 69 - int id; 70 - 71 - for (id = 0; id < HDMA_V0_MAX_NR_CH; id++) { 72 - if (GET_CH_32(dw, id, dir, ch_en) & BIT(0)) 73 - num_ch++; 74 - } 75 - 76 - if (num_ch > HDMA_V0_MAX_NR_CH) 77 - num_ch = HDMA_V0_MAX_NR_CH; 78 - 79 - return (u16)num_ch; 68 + /* 69 + * The HDMA IP have no way to know the number of hardware channels 70 + * available, we set it to maximum channels and let the platform 71 + * set the right number of channels. 72 + */ 73 + return HDMA_V0_MAX_NR_CH; 80 74 } 81 75 82 76 static enum dma_status dw_hdma_v0_core_ch_status(struct dw_edma_chan *chan) ··· 222 228 dw_hdma_v0_write_ll_link(chunk, i, control, chunk->ll_region.paddr); 223 229 } 224 230 231 + static void dw_hdma_v0_sync_ll_data(struct dw_edma_chunk *chunk) 232 + { 233 + /* 234 + * In case of remote HDMA engine setup, the DW PCIe RP/EP internal 235 + * configuration registers and application memory are normally accessed 236 + * over different buses. 
Ensure LL-data reaches the memory before the 237 + * doorbell register is toggled by issuing the dummy-read from the remote 238 + * LL memory in a hope that the MRd TLP will return only after the 239 + * last MWr TLP is completed 240 + */ 241 + if (!(chunk->chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL)) 242 + readl(chunk->ll_region.vaddr.io); 243 + } 244 + 225 245 static void dw_hdma_v0_core_start(struct dw_edma_chunk *chunk, bool first) 226 246 { 227 247 struct dw_edma_chan *chan = chunk->chan; ··· 250 242 /* Interrupt enable&unmask - done, abort */ 251 243 tmp = GET_CH_32(dw, chan->dir, chan->id, int_setup) | 252 244 HDMA_V0_STOP_INT_MASK | HDMA_V0_ABORT_INT_MASK | 253 - HDMA_V0_LOCAL_STOP_INT_EN | HDMA_V0_LOCAL_STOP_INT_EN; 245 + HDMA_V0_LOCAL_STOP_INT_EN | HDMA_V0_LOCAL_ABORT_INT_EN; 246 + if (!(dw->chip->flags & DW_EDMA_CHIP_LOCAL)) 247 + tmp |= HDMA_V0_REMOTE_STOP_INT_EN | HDMA_V0_REMOTE_ABORT_INT_EN; 254 248 SET_CH_32(dw, chan->dir, chan->id, int_setup, tmp); 255 249 /* Channel control */ 256 250 SET_CH_32(dw, chan->dir, chan->id, control1, HDMA_V0_LINKLIST_EN); ··· 266 256 /* Set consumer cycle */ 267 257 SET_CH_32(dw, chan->dir, chan->id, cycle_sync, 268 258 HDMA_V0_CONSUMER_CYCLE_STAT | HDMA_V0_CONSUMER_CYCLE_BIT); 259 + 260 + dw_hdma_v0_sync_ll_data(chunk); 261 + 269 262 /* Doorbell */ 270 263 SET_CH_32(dw, chan->dir, chan->id, doorbell, HDMA_V0_DOORBELL_START); 271 264 }
+1 -1
drivers/dma/dw-edma/dw-hdma-v0-regs.h
··· 15 15 #define HDMA_V0_LOCAL_ABORT_INT_EN BIT(6) 16 16 #define HDMA_V0_REMOTE_ABORT_INT_EN BIT(5) 17 17 #define HDMA_V0_LOCAL_STOP_INT_EN BIT(4) 18 - #define HDMA_V0_REMOTEL_STOP_INT_EN BIT(3) 18 + #define HDMA_V0_REMOTE_STOP_INT_EN BIT(3) 19 19 #define HDMA_V0_ABORT_INT_MASK BIT(2) 20 20 #define HDMA_V0_STOP_INT_MASK BIT(0) 21 21 #define HDMA_V0_LINKLIST_EN BIT(0)
+1 -1
drivers/dma/fsl-edma-common.c
··· 503 503 if (fsl_chan->is_multi_fifo) { 504 504 /* set mloff to support multiple fifo */ 505 505 burst = cfg->direction == DMA_DEV_TO_MEM ? 506 - cfg->src_addr_width : cfg->dst_addr_width; 506 + cfg->src_maxburst : cfg->dst_maxburst; 507 507 nbytes |= EDMA_V3_TCD_NBYTES_MLOFF(-(burst * 4)); 508 508 /* enable DMLOE/SMLOE */ 509 509 if (cfg->direction == DMA_MEM_TO_DEV) {
+3 -2
drivers/dma/fsl-edma-common.h
··· 30 30 #define EDMA_TCD_ATTR_SSIZE(x) (((x) & GENMASK(2, 0)) << 8) 31 31 #define EDMA_TCD_ATTR_SMOD(x) (((x) & GENMASK(4, 0)) << 11) 32 32 33 - #define EDMA_TCD_CITER_CITER(x) ((x) & GENMASK(14, 0)) 34 - #define EDMA_TCD_BITER_BITER(x) ((x) & GENMASK(14, 0)) 33 + #define EDMA_TCD_ITER_MASK GENMASK(14, 0) 34 + #define EDMA_TCD_CITER_CITER(x) ((x) & EDMA_TCD_ITER_MASK) 35 + #define EDMA_TCD_BITER_BITER(x) ((x) & EDMA_TCD_ITER_MASK) 35 36 36 37 #define EDMA_TCD_CSR_START BIT(0) 37 38 #define EDMA_TCD_CSR_INT_MAJOR BIT(1)
+3 -1
drivers/dma/fsl-edma-main.c
··· 10 10 */ 11 11 12 12 #include <dt-bindings/dma/fsl-edma.h> 13 + #include <linux/bitfield.h> 13 14 #include <linux/module.h> 14 15 #include <linux/interrupt.h> 15 16 #include <linux/clk.h> ··· 583 582 DMAENGINE_ALIGN_32_BYTES; 584 583 585 584 /* Per worst case 'nbytes = 1' take CITER as the max_seg_size */ 586 - dma_set_max_seg_size(fsl_edma->dma_dev.dev, 0x3fff); 585 + dma_set_max_seg_size(fsl_edma->dma_dev.dev, 586 + FIELD_GET(EDMA_TCD_ITER_MASK, EDMA_TCD_ITER_MASK)); 587 587 588 588 fsl_edma->dma_dev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT; 589 589
+20 -20
drivers/dma/fsl-qdma.c
··· 109 109 #define FSL_QDMA_CMD_WTHROTL_OFFSET 20 110 110 #define FSL_QDMA_CMD_DSEN_OFFSET 19 111 111 #define FSL_QDMA_CMD_LWC_OFFSET 16 112 + #define FSL_QDMA_CMD_PF BIT(17) 112 113 113 114 /* Field definition for Descriptor status */ 114 115 #define QDMA_CCDF_STATUS_RTE BIT(5) ··· 160 159 u8 addr_hi; 161 160 u8 __reserved1[2]; 162 161 u8 cfg8b_w1; 162 + } __packed; 163 + struct { 164 + __le32 __reserved2; 165 + __le32 cmd; 163 166 } __packed; 164 167 __le64 data; 165 168 }; ··· 359 354 static void fsl_qdma_comp_fill_memcpy(struct fsl_qdma_comp *fsl_comp, 360 355 dma_addr_t dst, dma_addr_t src, u32 len) 361 356 { 362 - u32 cmd; 363 357 struct fsl_qdma_format *sdf, *ddf; 364 358 struct fsl_qdma_format *ccdf, *csgf_desc, *csgf_src, *csgf_dest; 365 359 ··· 387 383 /* This entry is the last entry. */ 388 384 qdma_csgf_set_f(csgf_dest, len); 389 385 /* Descriptor Buffer */ 390 - cmd = cpu_to_le32(FSL_QDMA_CMD_RWTTYPE << 391 - FSL_QDMA_CMD_RWTTYPE_OFFSET); 392 - sdf->data = QDMA_SDDF_CMD(cmd); 386 + sdf->cmd = cpu_to_le32((FSL_QDMA_CMD_RWTTYPE << FSL_QDMA_CMD_RWTTYPE_OFFSET) | 387 + FSL_QDMA_CMD_PF); 393 388 394 - cmd = cpu_to_le32(FSL_QDMA_CMD_RWTTYPE << 395 - FSL_QDMA_CMD_RWTTYPE_OFFSET); 396 - cmd |= cpu_to_le32(FSL_QDMA_CMD_LWC << FSL_QDMA_CMD_LWC_OFFSET); 397 - ddf->data = QDMA_SDDF_CMD(cmd); 389 + ddf->cmd = cpu_to_le32((FSL_QDMA_CMD_RWTTYPE << FSL_QDMA_CMD_RWTTYPE_OFFSET) | 390 + (FSL_QDMA_CMD_LWC << FSL_QDMA_CMD_LWC_OFFSET)); 398 391 } 399 392 400 393 /* ··· 625 624 626 625 static int 627 626 fsl_qdma_queue_transfer_complete(struct fsl_qdma_engine *fsl_qdma, 628 - void *block, 627 + __iomem void *block, 629 628 int id) 630 629 { 631 630 bool duplicate; ··· 1197 1196 if (!fsl_qdma->queue) 1198 1197 return -ENOMEM; 1199 1198 1200 - ret = fsl_qdma_irq_init(pdev, fsl_qdma); 1201 - if (ret) 1202 - return ret; 1203 - 1204 1199 fsl_qdma->irq_base = platform_get_irq_byname(pdev, "qdma-queue0"); 1205 1200 if (fsl_qdma->irq_base < 0) 1206 1201 return fsl_qdma->irq_base; 
··· 1235 1238 1236 1239 platform_set_drvdata(pdev, fsl_qdma); 1237 1240 1238 - ret = dma_async_device_register(&fsl_qdma->dma_dev); 1239 - if (ret) { 1240 - dev_err(&pdev->dev, 1241 - "Can't register NXP Layerscape qDMA engine.\n"); 1242 - return ret; 1243 - } 1244 - 1245 1241 ret = fsl_qdma_reg_init(fsl_qdma); 1246 1242 if (ret) { 1247 1243 dev_err(&pdev->dev, "Can't Initialize the qDMA engine.\n"); 1244 + return ret; 1245 + } 1246 + 1247 + ret = fsl_qdma_irq_init(pdev, fsl_qdma); 1248 + if (ret) 1249 + return ret; 1250 + 1251 + ret = dma_async_device_register(&fsl_qdma->dma_dev); 1252 + if (ret) { 1253 + dev_err(&pdev->dev, "Can't register NXP Layerscape qDMA engine.\n"); 1248 1254 return ret; 1249 1255 } 1250 1256
+1 -1
drivers/dma/idxd/cdev.c
··· 345 345 spin_lock(&evl->lock); 346 346 status.bits = ioread64(idxd->reg_base + IDXD_EVLSTATUS_OFFSET); 347 347 t = status.tail; 348 - h = evl->head; 348 + h = status.head; 349 349 size = evl->size; 350 350 351 351 while (h != t) {
+1 -1
drivers/dma/idxd/debugfs.c
··· 68 68 69 69 spin_lock(&evl->lock); 70 70 71 - h = evl->head; 72 71 evl_status.bits = ioread64(idxd->reg_base + IDXD_EVLSTATUS_OFFSET); 73 72 t = evl_status.tail; 73 + h = evl_status.head; 74 74 evl_size = evl->size; 75 75 76 76 seq_printf(s, "Event Log head %u tail %u interrupt pending %u\n\n",
-1
drivers/dma/idxd/idxd.h
··· 300 300 unsigned int log_size; 301 301 /* The number of entries in the event log. */ 302 302 u16 size; 303 - u16 head; 304 303 unsigned long *bmap; 305 304 bool batch_fail[IDXD_MAX_BATCH_IDENT]; 306 305 };
+12 -3
drivers/dma/idxd/init.c
··· 343 343 static int idxd_init_evl(struct idxd_device *idxd) 344 344 { 345 345 struct device *dev = &idxd->pdev->dev; 346 + unsigned int evl_cache_size; 346 347 struct idxd_evl *evl; 348 + const char *idxd_name; 347 349 348 350 if (idxd->hw.gen_cap.evl_support == 0) 349 351 return 0; ··· 357 355 spin_lock_init(&evl->lock); 358 356 evl->size = IDXD_EVL_SIZE_MIN; 359 357 360 - idxd->evl_cache = kmem_cache_create(dev_name(idxd_confdev(idxd)), 361 - sizeof(struct idxd_evl_fault) + evl_ent_size(idxd), 362 - 0, 0, NULL); 358 + idxd_name = dev_name(idxd_confdev(idxd)); 359 + evl_cache_size = sizeof(struct idxd_evl_fault) + evl_ent_size(idxd); 360 + /* 361 + * Since completion record in evl_cache will be copied to user 362 + * when handling completion record page fault, need to create 363 + * the cache suitable for user copy. 364 + */ 365 + idxd->evl_cache = kmem_cache_create_usercopy(idxd_name, evl_cache_size, 366 + 0, 0, 0, evl_cache_size, 367 + NULL); 363 368 if (!idxd->evl_cache) { 364 369 kfree(evl); 365 370 return -ENOMEM;
+1 -2
drivers/dma/idxd/irq.c
··· 367 367 /* Clear interrupt pending bit */ 368 368 iowrite32(evl_status.bits_upper32, 369 369 idxd->reg_base + IDXD_EVLSTATUS_OFFSET + sizeof(u32)); 370 - h = evl->head; 371 370 evl_status.bits = ioread64(idxd->reg_base + IDXD_EVLSTATUS_OFFSET); 372 371 t = evl_status.tail; 372 + h = evl_status.head; 373 373 size = idxd->evl->size; 374 374 375 375 while (h != t) { ··· 378 378 h = (h + 1) % size; 379 379 } 380 380 381 - evl->head = h; 382 381 evl_status.head = h; 383 382 iowrite32(evl_status.bits_lower32, idxd->reg_base + IDXD_EVLSTATUS_OFFSET); 384 383 spin_unlock(&evl->lock);
-2
drivers/dma/ptdma/ptdma-dmaengine.c
··· 385 385 chan->vc.desc_free = pt_do_cleanup; 386 386 vchan_init(&chan->vc, dma_dev); 387 387 388 - dma_set_mask_and_coherent(pt->dev, DMA_BIT_MASK(64)); 389 - 390 388 ret = dma_async_device_register(dma_dev); 391 389 if (ret) 392 390 goto err_reg;
+20 -5
drivers/dpll/dpll_core.c
··· 44 44 void *priv; 45 45 }; 46 46 47 - struct dpll_pin *netdev_dpll_pin(const struct net_device *dev) 48 - { 49 - return rcu_dereference_rtnl(dev->dpll_pin); 50 - } 51 - 52 47 struct dpll_device *dpll_device_get_by_id(int id) 53 48 { 54 49 if (xa_get_mark(&dpll_device_xa, id, DPLL_REGISTERED)) ··· 509 514 kfree(pin); 510 515 return ERR_PTR(ret); 511 516 } 517 + 518 + static void dpll_netdev_pin_assign(struct net_device *dev, struct dpll_pin *dpll_pin) 519 + { 520 + rtnl_lock(); 521 + rcu_assign_pointer(dev->dpll_pin, dpll_pin); 522 + rtnl_unlock(); 523 + } 524 + 525 + void dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin) 526 + { 527 + WARN_ON(!dpll_pin); 528 + dpll_netdev_pin_assign(dev, dpll_pin); 529 + } 530 + EXPORT_SYMBOL(dpll_netdev_pin_set); 531 + 532 + void dpll_netdev_pin_clear(struct net_device *dev) 533 + { 534 + dpll_netdev_pin_assign(dev, NULL); 535 + } 536 + EXPORT_SYMBOL(dpll_netdev_pin_clear); 512 537 513 538 /** 514 539 * dpll_pin_get - find existing or create new dpll pin
+24 -14
drivers/dpll/dpll_netlink.c
··· 8 8 */ 9 9 #include <linux/module.h> 10 10 #include <linux/kernel.h> 11 + #include <linux/netdevice.h> 11 12 #include <net/genetlink.h> 12 13 #include "dpll_core.h" 13 14 #include "dpll_netlink.h" ··· 49 48 } 50 49 51 50 /** 52 - * dpll_msg_pin_handle_size - get size of pin handle attribute for given pin 53 - * @pin: pin pointer 54 - * 55 - * Return: byte size of pin handle attribute for given pin. 56 - */ 57 - size_t dpll_msg_pin_handle_size(struct dpll_pin *pin) 58 - { 59 - return pin ? nla_total_size(4) : 0; /* DPLL_A_PIN_ID */ 60 - } 61 - EXPORT_SYMBOL_GPL(dpll_msg_pin_handle_size); 62 - 63 - /** 64 51 * dpll_msg_add_pin_handle - attach pin handle attribute to a given message 65 52 * @msg: pointer to sk_buff message to attach a pin handle 66 53 * @pin: pin pointer ··· 57 68 * * 0 - success 58 69 * * -EMSGSIZE - no space in message to attach pin handle 59 70 */ 60 - int dpll_msg_add_pin_handle(struct sk_buff *msg, struct dpll_pin *pin) 71 + static int dpll_msg_add_pin_handle(struct sk_buff *msg, struct dpll_pin *pin) 61 72 { 62 73 if (!pin) 63 74 return 0; ··· 65 76 return -EMSGSIZE; 66 77 return 0; 67 78 } 68 - EXPORT_SYMBOL_GPL(dpll_msg_add_pin_handle); 79 + 80 + static struct dpll_pin *dpll_netdev_pin(const struct net_device *dev) 81 + { 82 + return rcu_dereference_rtnl(dev->dpll_pin); 83 + } 84 + 85 + /** 86 + * dpll_netdev_pin_handle_size - get size of pin handle attribute of a netdev 87 + * @dev: netdev from which to get the pin 88 + * 89 + * Return: byte size of pin handle attribute, or 0 if @dev has no pin. 90 + */ 91 + size_t dpll_netdev_pin_handle_size(const struct net_device *dev) 92 + { 93 + return dpll_netdev_pin(dev) ? nla_total_size(4) : 0; /* DPLL_A_PIN_ID */ 94 + } 95 + 96 + int dpll_netdev_add_pin_handle(struct sk_buff *msg, 97 + const struct net_device *dev) 98 + { 99 + return dpll_msg_add_pin_handle(msg, dpll_netdev_pin(dev)); 100 + } 69 101 70 102 static int 71 103 dpll_msg_add_mode(struct sk_buff *msg, struct dpll_device *dpll,
+13 -1
drivers/firewire/core-card.c
··· 500 500 fw_notice(card, "phy config: new root=%x, gap_count=%d\n", 501 501 new_root_id, gap_count); 502 502 fw_send_phy_config(card, new_root_id, generation, gap_count); 503 - reset_bus(card, true); 503 + /* 504 + * Where possible, use a short bus reset to minimize 505 + * disruption to isochronous transfers. But in the event 506 + * of a gap count inconsistency, use a long bus reset. 507 + * 508 + * As noted in 1394a 8.4.6.2, nodes on a mixed 1394/1394a bus 509 + * may set different gap counts after a bus reset. On a mixed 510 + * 1394/1394a bus, a short bus reset can get doubled. Some 511 + * nodes may treat the double reset as one bus reset and others 512 + * may treat it as two, causing a gap count inconsistency 513 + * again. Using a long bus reset prevents this. 514 + */ 515 + reset_bus(card, card->gap_count != 0); 504 516 /* Will allocate broadcast channel after the reset. */ 505 517 goto out; 506 518 }
+1 -1
drivers/firmware/efi/capsule-loader.c
··· 292 292 return -ENOMEM; 293 293 } 294 294 295 - cap_info->phys = kzalloc(sizeof(void *), GFP_KERNEL); 295 + cap_info->phys = kzalloc(sizeof(phys_addr_t), GFP_KERNEL); 296 296 if (!cap_info->phys) { 297 297 kfree(cap_info->pages); 298 298 kfree(cap_info);
+2 -1
drivers/firmware/microchip/mpfs-auto-update.c
··· 384 384 u32 *response_msg; 385 385 int ret; 386 386 387 - response_msg = devm_kzalloc(priv->dev, AUTO_UPDATE_FEATURE_RESP_SIZE * sizeof(response_msg), 387 + response_msg = devm_kzalloc(priv->dev, 388 + AUTO_UPDATE_FEATURE_RESP_SIZE * sizeof(*response_msg), 388 389 GFP_KERNEL); 389 390 if (!response_msg) 390 391 return -ENOMEM;
+2 -2
drivers/gpio/gpio-74x164.c
··· 127 127 if (IS_ERR(chip->gpiod_oe)) 128 128 return PTR_ERR(chip->gpiod_oe); 129 129 130 - gpiod_set_value_cansleep(chip->gpiod_oe, 1); 131 - 132 130 spi_set_drvdata(spi, chip); 133 131 134 132 chip->gpio_chip.label = spi->modalias; ··· 150 152 dev_err(&spi->dev, "Failed writing: %d\n", ret); 151 153 goto exit_destroy; 152 154 } 155 + 156 + gpiod_set_value_cansleep(chip->gpiod_oe, 1); 153 157 154 158 ret = gpiochip_add_data(&chip->gpio_chip, chip); 155 159 if (!ret)
+6 -6
drivers/gpio/gpiolib.c
··· 968 968 969 969 ret = gpiochip_irqchip_init_valid_mask(gc); 970 970 if (ret) 971 - goto err_remove_acpi_chip; 971 + goto err_free_hogs; 972 972 973 973 ret = gpiochip_irqchip_init_hw(gc); 974 974 if (ret) 975 - goto err_remove_acpi_chip; 975 + goto err_remove_irqchip_mask; 976 976 977 977 ret = gpiochip_add_irqchip(gc, lock_key, request_key); 978 978 if (ret) ··· 997 997 gpiochip_irqchip_remove(gc); 998 998 err_remove_irqchip_mask: 999 999 gpiochip_irqchip_free_valid_mask(gc); 1000 - err_remove_acpi_chip: 1001 - acpi_gpiochip_remove(gc); 1002 - err_remove_of_chip: 1000 + err_free_hogs: 1003 1001 gpiochip_free_hogs(gc); 1002 + acpi_gpiochip_remove(gc); 1003 + gpiochip_remove_pin_ranges(gc); 1004 + err_remove_of_chip: 1004 1005 of_gpiochip_remove(gc); 1005 1006 err_free_gpiochip_mask: 1006 - gpiochip_remove_pin_ranges(gc); 1007 1007 gpiochip_free_valid_mask(gc); 1008 1008 err_remove_from_list: 1009 1009 spin_lock_irqsave(&gpio_lock, flags);
+3 -2
drivers/gpu/drm/Kconfig
··· 199 199 config DRM_TTM_KUNIT_TEST 200 200 tristate "KUnit tests for TTM" if !KUNIT_ALL_TESTS 201 201 default n 202 - depends on DRM && KUNIT && MMU 202 + depends on DRM && KUNIT && MMU && (UML || COMPILE_TEST) 203 203 select DRM_TTM 204 204 select DRM_EXPORT_FOR_TESTS if m 205 205 select DRM_KUNIT_TEST_HELPERS ··· 207 207 help 208 208 Enables unit tests for TTM, a GPU memory manager subsystem used 209 209 to manage memory buffers. This option is mostly useful for kernel 210 - developers. 210 + developers. It depends on (UML || COMPILE_TEST) since no other driver 211 + which uses TTM can be loaded while running the tests. 211 212 212 213 If in doubt, say "N". 213 214
+25 -20
drivers/gpu/drm/amd/amdgpu/soc15.c
··· 574 574 return AMD_RESET_METHOD_MODE1; 575 575 } 576 576 577 + static bool soc15_need_reset_on_resume(struct amdgpu_device *adev) 578 + { 579 + u32 sol_reg; 580 + 581 + sol_reg = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81); 582 + 583 + /* Will reset for the following suspend abort cases. 584 + * 1) Only reset limit on APU side, dGPU hasn't checked yet. 585 + * 2) S3 suspend abort and TOS already launched. 586 + */ 587 + if (adev->flags & AMD_IS_APU && adev->in_s3 && 588 + !adev->suspend_complete && 589 + sol_reg) 590 + return true; 591 + 592 + return false; 593 + } 594 + 577 595 static int soc15_asic_reset(struct amdgpu_device *adev) 578 596 { 579 597 /* original raven doesn't have full asic reset */ 580 - if ((adev->apu_flags & AMD_APU_IS_RAVEN) || 581 - (adev->apu_flags & AMD_APU_IS_RAVEN2)) 598 + /* On the latest Raven, the GPU reset can be performed 599 + * successfully. So now, temporarily enable it for the 600 + * S3 suspend abort case. 601 + */ 602 + if (((adev->apu_flags & AMD_APU_IS_RAVEN) || 603 + (adev->apu_flags & AMD_APU_IS_RAVEN2)) && 604 + !soc15_need_reset_on_resume(adev)) 582 605 return 0; 583 606 584 607 switch (soc15_asic_reset_method(adev)) { ··· 1319 1296 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 1320 1297 1321 1298 return soc15_common_hw_fini(adev); 1322 - } 1323 - 1324 - static bool soc15_need_reset_on_resume(struct amdgpu_device *adev) 1325 - { 1326 - u32 sol_reg; 1327 - 1328 - sol_reg = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_81); 1329 - 1330 - /* Will reset for the following suspend abort cases. 1331 - * 1) Only reset limit on APU side, dGPU hasn't checked yet. 1332 - * 2) S3 suspend abort and TOS already launched. 1333 - */ 1334 - if (adev->flags & AMD_IS_APU && adev->in_s3 && 1335 - !adev->suspend_complete && 1336 - sol_reg) 1337 - return true; 1338 - 1339 - return false; 1340 1299 } 1341 1300 1342 1301 static int soc15_common_resume(void *handle)
+4 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
··· 67 67 /* Workaround for some monitors that do not clear DPCD 0x317 if FreeSync is unsupported */ 68 68 case drm_edid_encode_panel_id('A', 'U', 'O', 0xA7AB): 69 69 case drm_edid_encode_panel_id('A', 'U', 'O', 0xE69B): 70 + case drm_edid_encode_panel_id('B', 'O', 'E', 0x092A): 71 + case drm_edid_encode_panel_id('L', 'G', 'D', 0x06D1): 70 72 DRM_DEBUG_DRIVER("Clearing DPCD 0x317 on monitor with panel id %X\n", panel_id); 71 73 edid_caps->panel_patch.remove_sink_ext_caps = true; 72 74 break; ··· 122 120 123 121 edid_caps->edid_hdmi = connector->display_info.is_hdmi; 124 122 123 + apply_edid_quirks(edid_buf, edid_caps); 124 + 125 125 sad_count = drm_edid_to_sad((struct edid *) edid->raw_edid, &sads); 126 126 if (sad_count <= 0) 127 127 return result; ··· 149 145 edid_caps->speaker_flags = sadb[0]; 150 146 else 151 147 edid_caps->speaker_flags = DEFAULT_SPEAKER_LOCATION; 152 - 153 - apply_edid_quirks(edid_buf, edid_caps); 154 148 155 149 kfree(sads); 156 150 kfree(sadb);
+5
drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
··· 76 76 in_out_display_cfg->hw.DLGRefClkFreqMHz = 50; 77 77 } 78 78 for (j = 0; j < mode_support_info->DPPPerSurface[i]; j++) { 79 + if (i >= __DML2_WRAPPER_MAX_STREAMS_PLANES__) { 80 + dml_print("DML::%s: Index out of bounds: i=%d, __DML2_WRAPPER_MAX_STREAMS_PLANES__=%d\n", 81 + __func__, i, __DML2_WRAPPER_MAX_STREAMS_PLANES__); 82 + break; 83 + } 79 84 dml2->v20.scratch.dml_to_dc_pipe_mapping.dml_pipe_idx_to_stream_id[num_pipes] = dml2->v20.scratch.dml_to_dc_pipe_mapping.disp_cfg_to_stream_id[i]; 80 85 dml2->v20.scratch.dml_to_dc_pipe_mapping.dml_pipe_idx_to_stream_id_valid[num_pipes] = true; 81 86 dml2->v20.scratch.dml_to_dc_pipe_mapping.dml_pipe_idx_to_plane_id[num_pipes] = dml2->v20.scratch.dml_to_dc_pipe_mapping.disp_cfg_to_plane_id[i];
+29
drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
··· 6925 6925 return 0; 6926 6926 } 6927 6927 6928 + static int si_set_temperature_range(struct amdgpu_device *adev) 6929 + { 6930 + int ret; 6931 + 6932 + ret = si_thermal_enable_alert(adev, false); 6933 + if (ret) 6934 + return ret; 6935 + ret = si_thermal_set_temperature_range(adev, R600_TEMP_RANGE_MIN, R600_TEMP_RANGE_MAX); 6936 + if (ret) 6937 + return ret; 6938 + ret = si_thermal_enable_alert(adev, true); 6939 + if (ret) 6940 + return ret; 6941 + 6942 + return ret; 6943 + } 6944 + 6928 6945 static void si_dpm_disable(struct amdgpu_device *adev) 6929 6946 { 6930 6947 struct rv7xx_power_info *pi = rv770_get_pi(adev); ··· 7625 7608 7626 7609 static int si_dpm_late_init(void *handle) 7627 7610 { 7611 + int ret; 7612 + struct amdgpu_device *adev = (struct amdgpu_device *)handle; 7613 + 7614 + if (!adev->pm.dpm_enabled) 7615 + return 0; 7616 + 7617 + ret = si_set_temperature_range(adev); 7618 + if (ret) 7619 + return ret; 7620 + #if 0 //TODO ? 7621 + si_dpm_powergate_uvd(adev, true); 7622 + #endif 7628 7623 return 0; 7629 7624 } 7630 7625
+4 -5
drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
··· 1303 1303 if (default_power_limit) 1304 1304 *default_power_limit = power_limit; 1305 1305 1306 - if (smu->od_enabled) { 1306 + if (smu->od_enabled) 1307 1307 od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_11_0_ODSETTING_POWERPERCENTAGE]); 1308 - od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_ODSETTING_POWERPERCENTAGE]); 1309 - } else { 1308 + else 1310 1309 od_percent_upper = 0; 1311 - od_percent_lower = 100; 1312 - } 1310 + 1311 + od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_ODSETTING_POWERPERCENTAGE]); 1313 1312 1314 1313 dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n", 1315 1314 od_percent_upper, od_percent_lower, power_limit);
+4 -5
drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
··· 2357 2357 *default_power_limit = power_limit; 2358 2358 2359 2359 if (smu->od_enabled && 2360 - navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_POWER_LIMIT)) { 2360 + navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_POWER_LIMIT)) 2361 2361 od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_11_0_ODSETTING_POWERPERCENTAGE]); 2362 - od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_ODSETTING_POWERPERCENTAGE]); 2363 - } else { 2362 + else 2364 2363 od_percent_upper = 0; 2365 - od_percent_lower = 100; 2366 - } 2364 + 2365 + od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_ODSETTING_POWERPERCENTAGE]); 2367 2366 2368 2367 dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n", 2369 2368 od_percent_upper, od_percent_lower, power_limit);
+4 -5
drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
··· 640 640 if (default_power_limit) 641 641 *default_power_limit = power_limit; 642 642 643 - if (smu->od_enabled) { 643 + if (smu->od_enabled) 644 644 od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_11_0_7_ODSETTING_POWERPERCENTAGE]); 645 - od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_7_ODSETTING_POWERPERCENTAGE]); 646 - } else { 645 + else 647 646 od_percent_upper = 0; 648 - od_percent_lower = 100; 649 - } 647 + 648 + od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_11_0_7_ODSETTING_POWERPERCENTAGE]); 650 649 651 650 dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n", 652 651 od_percent_upper, od_percent_lower, power_limit);
+4 -5
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 2369 2369 if (default_power_limit) 2370 2370 *default_power_limit = power_limit; 2371 2371 2372 - if (smu->od_enabled) { 2372 + if (smu->od_enabled) 2373 2373 od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_13_0_0_ODSETTING_POWERPERCENTAGE]); 2374 - od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_13_0_0_ODSETTING_POWERPERCENTAGE]); 2375 - } else { 2374 + else 2376 2375 od_percent_upper = 0; 2377 - od_percent_lower = 100; 2378 - } 2376 + 2377 + od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_13_0_0_ODSETTING_POWERPERCENTAGE]); 2379 2378 2380 2379 dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n", 2381 2380 od_percent_upper, od_percent_lower, power_limit);
+4 -5
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
··· 2333 2333 if (default_power_limit) 2334 2334 *default_power_limit = power_limit; 2335 2335 2336 - if (smu->od_enabled) { 2336 + if (smu->od_enabled) 2337 2337 od_percent_upper = le32_to_cpu(powerplay_table->overdrive_table.max[SMU_13_0_7_ODSETTING_POWERPERCENTAGE]); 2338 - od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_13_0_7_ODSETTING_POWERPERCENTAGE]); 2339 - } else { 2338 + else 2340 2339 od_percent_upper = 0; 2341 - od_percent_lower = 100; 2342 - } 2340 + 2341 + od_percent_lower = le32_to_cpu(powerplay_table->overdrive_table.min[SMU_13_0_7_ODSETTING_POWERPERCENTAGE]); 2343 2342 2344 2343 dev_dbg(smu->adev->dev, "od percent upper:%d, od percent lower:%d (default power: %d)\n", 2345 2344 od_percent_upper, od_percent_lower, power_limit);
+55 -15
drivers/gpu/drm/bridge/aux-hpd-bridge.c
··· 25 25 ida_free(&drm_aux_hpd_bridge_ida, adev->id); 26 26 27 27 of_node_put(adev->dev.platform_data); 28 + of_node_put(adev->dev.of_node); 28 29 29 30 kfree(adev); 30 31 } 31 32 32 - static void drm_aux_hpd_bridge_unregister_adev(void *_adev) 33 + static void drm_aux_hpd_bridge_free_adev(void *_adev) 33 34 { 34 - struct auxiliary_device *adev = _adev; 35 - 36 - auxiliary_device_delete(adev); 37 - auxiliary_device_uninit(adev); 35 + auxiliary_device_uninit(_adev); 38 36 } 39 37 40 38 /** 41 - * drm_dp_hpd_bridge_register - Create a simple HPD DisplayPort bridge 39 + * devm_drm_dp_hpd_bridge_alloc - allocate a HPD DisplayPort bridge 42 40 * @parent: device instance providing this bridge 43 41 * @np: device node pointer corresponding to this bridge instance 44 42 * ··· 44 46 * DRM_MODE_CONNECTOR_DisplayPort, which terminates the bridge chain and is 45 47 * able to send the HPD events. 46 48 * 47 - * Return: device instance that will handle created bridge or an error code 48 - * encoded into the pointer. 
49 + * Return: bridge auxiliary device pointer or an error pointer 49 50 */ 50 - struct device *drm_dp_hpd_bridge_register(struct device *parent, 51 - struct device_node *np) 51 + struct auxiliary_device *devm_drm_dp_hpd_bridge_alloc(struct device *parent, struct device_node *np) 52 52 { 53 53 struct auxiliary_device *adev; 54 54 int ret; ··· 70 74 71 75 ret = auxiliary_device_init(adev); 72 76 if (ret) { 77 + of_node_put(adev->dev.platform_data); 78 + of_node_put(adev->dev.of_node); 73 79 ida_free(&drm_aux_hpd_bridge_ida, adev->id); 74 80 kfree(adev); 75 81 return ERR_PTR(ret); 76 82 } 77 83 78 - ret = auxiliary_device_add(adev); 79 - if (ret) { 80 - auxiliary_device_uninit(adev); 84 + ret = devm_add_action_or_reset(parent, drm_aux_hpd_bridge_free_adev, adev); 85 + if (ret) 81 86 return ERR_PTR(ret); 82 - } 83 87 84 - ret = devm_add_action_or_reset(parent, drm_aux_hpd_bridge_unregister_adev, adev); 88 + return adev; 89 + } 90 + EXPORT_SYMBOL_GPL(devm_drm_dp_hpd_bridge_alloc); 91 + 92 + static void drm_aux_hpd_bridge_del_adev(void *_adev) 93 + { 94 + auxiliary_device_delete(_adev); 95 + } 96 + 97 + /** 98 + * devm_drm_dp_hpd_bridge_add - register a HPD DisplayPort bridge 99 + * @dev: struct device to tie registration lifetime to 100 + * @adev: bridge auxiliary device to be registered 101 + * 102 + * Returns: zero on success or a negative errno 103 + */ 104 + int devm_drm_dp_hpd_bridge_add(struct device *dev, struct auxiliary_device *adev) 105 + { 106 + int ret; 107 + 108 + ret = auxiliary_device_add(adev); 109 + if (ret) 110 + return ret; 111 + 112 + return devm_add_action_or_reset(dev, drm_aux_hpd_bridge_del_adev, adev); 113 + } 114 + EXPORT_SYMBOL_GPL(devm_drm_dp_hpd_bridge_add); 115 + 116 + /** 117 + * drm_dp_hpd_bridge_register - allocate and register a HPD DisplayPort bridge 118 + * @parent: device instance providing this bridge 119 + * @np: device node pointer corresponding to this bridge instance 120 + * 121 + * Return: device instance that will handle created bridge or an error pointer
122 + */ 123 + struct device *drm_dp_hpd_bridge_register(struct device *parent, struct device_node *np) 124 + { 125 + struct auxiliary_device *adev; 126 + int ret; 127 + 128 + adev = devm_drm_dp_hpd_bridge_alloc(parent, np); 129 + if (IS_ERR(adev)) 130 + return ERR_CAST(adev); 131 + 132 + ret = devm_drm_dp_hpd_bridge_add(parent, adev); 85 133 if (ret) 86 134 return ERR_PTR(ret); 87 135

+15 -1
drivers/gpu/drm/drm_buddy.c
··· 332 332 u64 start, u64 end, 333 333 unsigned int order) 334 334 { 335 + u64 req_size = mm->chunk_size << order; 335 336 struct drm_buddy_block *block; 336 337 struct drm_buddy_block *buddy; 337 338 LIST_HEAD(dfs); ··· 367 366 368 367 if (drm_buddy_block_is_allocated(block)) 369 368 continue; 369 + 370 + if (block_start < start || block_end > end) { 371 + u64 adjusted_start = max(block_start, start); 372 + u64 adjusted_end = min(block_end, end); 373 + 374 + if (round_down(adjusted_end + 1, req_size) <= 375 + round_up(adjusted_start, req_size)) 376 + continue; 377 + } 370 378 371 379 if (contains(start, end, block_start, block_end) && 372 380 order == drm_buddy_block_order(block)) { ··· 771 761 return -EINVAL; 772 762 773 763 /* Actual range allocation */ 774 - if (start + size == end) 764 + if (start + size == end) { 765 + if (!IS_ALIGNED(start | end, min_block_size)) 766 + return -EINVAL; 767 + 775 768 return __drm_buddy_alloc_range(mm, start, size, NULL, blocks); 769 + } 776 770 777 771 original_size = size; 778 772 original_min_size = min_block_size;
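The new check in alloc_range_bias() above skips blocks that only partially overlap the requested range when the overlap cannot hold a single aligned chunk of the request size. A standalone sketch of that containment test (the helper names are mine; the kernel uses its own round_up/round_down macros and power-of-two sizes):

```c
#include <assert.h>
#include <stdint.h>

static uint64_t rnd_up(uint64_t x, uint64_t a)   { return (x + a - 1) & ~(a - 1); }
static uint64_t rnd_down(uint64_t x, uint64_t a) { return x & ~(a - 1); }

/*
 * Return 1 if the block [block_start, block_end] can still supply one
 * aligned chunk of req_size inside the bias range [start, end]; mirrors
 * the skip condition added to alloc_range_bias() (simplified).
 */
static int block_usable(uint64_t block_start, uint64_t block_end,
			uint64_t start, uint64_t end, uint64_t req_size)
{
	if (block_start < start || block_end > end) {
		uint64_t adjusted_start = block_start > start ? block_start : start;
		uint64_t adjusted_end = block_end < end ? block_end : end;

		if (rnd_down(adjusted_end + 1, req_size) <=
		    rnd_up(adjusted_start, req_size))
			return 0;	/* overlap too small for an aligned chunk */
	}
	return 1;
}
```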
+18 -2
drivers/gpu/drm/msm/dp/dp_display.c
··· 329 329 .unbind = dp_display_unbind, 330 330 }; 331 331 332 + static void dp_display_send_hpd_event(struct msm_dp *dp_display) 333 + { 334 + struct dp_display_private *dp; 335 + struct drm_connector *connector; 336 + 337 + dp = container_of(dp_display, struct dp_display_private, dp_display); 338 + 339 + connector = dp->dp_display.connector; 340 + drm_helper_hpd_irq_event(connector->dev); 341 + } 342 + 332 343 static int dp_display_send_hpd_notification(struct dp_display_private *dp, 333 344 bool hpd) 334 345 { 335 - struct drm_bridge *bridge = dp->dp_display.bridge; 346 + if ((hpd && dp->dp_display.link_ready) || 347 + (!hpd && !dp->dp_display.link_ready)) { 348 + drm_dbg_dp(dp->drm_dev, "HPD already %s\n", 349 + (hpd ? "on" : "off")); 350 + return 0; 351 + } 336 352 337 353 /* reset video pattern flag on disconnect */ 338 354 if (!hpd) { ··· 364 348 365 349 drm_dbg_dp(dp->drm_dev, "type=%d hpd=%d\n", 366 350 dp->dp_display.connector_type, hpd); 367 - drm_bridge_hpd_notify(bridge, dp->dp_display.link_ready); 351 + dp_display_send_hpd_event(&dp->dp_display); 368 352 369 353 return 0; 370 354 }
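The dp_display hunk above adds a debounce: a notification is only sent when the HPD state actually changes relative to link_ready. The predicate reduces to the following (a hypothetical helper, simplified from dp_display_send_hpd_notification()):

```c
#include <assert.h>
#include <stdbool.h>

/* True only when hpd differs from the current link_ready state. */
static bool hpd_notification_needed(bool hpd, bool link_ready)
{
	if ((hpd && link_ready) || (!hpd && !link_ready))
		return false;	/* HPD already on/off: nothing to report */
	return true;
}
```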
+1 -1
drivers/gpu/drm/nouveau/nouveau_abi16.c
··· 269 269 break; 270 270 case NOUVEAU_GETPARAM_VRAM_USED: { 271 271 struct ttm_resource_manager *vram_mgr = ttm_manager_type(&drm->ttm.bdev, TTM_PL_VRAM); 272 - getparam->value = (u64)ttm_resource_manager_usage(vram_mgr) << PAGE_SHIFT; 272 + getparam->value = (u64)ttm_resource_manager_usage(vram_mgr); 273 273 break; 274 274 } 275 275 default:
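The nouveau fix works because ttm_resource_manager_usage() already returns a byte count, so the removed `<< PAGE_SHIFT` inflated NOUVEAU_GETPARAM_VRAM_USED by a factor of the page size. A toy illustration (4 KiB pages assumed):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12	/* 4 KiB pages assumed; architecture dependent */

/* Old: treated the byte count as a page count and scaled it again. */
static uint64_t vram_used_old(uint64_t usage_bytes)
{
	return usage_bytes << PAGE_SHIFT;
}

/* New: ttm_resource_manager_usage() already reports bytes. */
static uint64_t vram_used_new(uint64_t usage_bytes)
{
	return usage_bytes;
}
```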
+2 -2
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
··· 1054 1054 /* Release the DMA buffers that were needed only for boot and init */ 1055 1055 nvkm_gsp_mem_dtor(gsp, &gsp->boot.fw); 1056 1056 nvkm_gsp_mem_dtor(gsp, &gsp->libos); 1057 - nvkm_gsp_mem_dtor(gsp, &gsp->rmargs); 1058 - nvkm_gsp_mem_dtor(gsp, &gsp->wpr_meta); 1059 1057 1060 1058 return ret; 1061 1059 } ··· 2161 2163 2162 2164 r535_gsp_dtor_fws(gsp); 2163 2165 2166 + nvkm_gsp_mem_dtor(gsp, &gsp->rmargs); 2167 + nvkm_gsp_mem_dtor(gsp, &gsp->wpr_meta); 2164 2168 nvkm_gsp_mem_dtor(gsp, &gsp->shm.mem); 2165 2169 nvkm_gsp_mem_dtor(gsp, &gsp->loginit); 2166 2170 nvkm_gsp_mem_dtor(gsp, &gsp->logintr);
+20 -3
drivers/gpu/drm/tegra/drm.c
··· 1243 1243 1244 1244 drm_mode_config_reset(drm); 1245 1245 1246 - err = drm_aperture_remove_framebuffers(&tegra_drm_driver); 1247 - if (err < 0) 1248 - goto hub; 1246 + /* 1247 + * Only take over from a potential firmware framebuffer if any CRTCs 1248 + * have been registered. This must not be a fatal error because there 1249 + * are other accelerators that are exposed via this driver. 1250 + * 1251 + * Another case where this happens is on Tegra234 where the display 1252 + * hardware is no longer part of the host1x complex, so this driver 1253 + * will not expose any modesetting features. 1254 + */ 1255 + if (drm->mode_config.num_crtc > 0) { 1256 + err = drm_aperture_remove_framebuffers(&tegra_drm_driver); 1257 + if (err < 0) 1258 + goto hub; 1259 + } else { 1260 + /* 1261 + * Indicate to userspace that this doesn't expose any display 1262 + * capabilities. 1263 + */ 1264 + drm->driver_features &= ~(DRIVER_MODESET | DRIVER_ATOMIC); 1265 + } 1249 1266 1250 1267 err = drm_dev_register(drm, 0); 1251 1268 if (err < 0)
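The Tegra change above makes aperture takeover conditional on having registered CRTCs and masks out the display features otherwise. The feature-mask part can be sketched as follows (the flag bit values are illustrative, not the real DRIVER_* values):

```c
#include <assert.h>

#define DRIVER_MODESET	(1u << 0)	/* illustrative bit positions */
#define DRIVER_ATOMIC	(1u << 1)
#define DRIVER_RENDER	(1u << 2)

/* Drop modesetting features when the device registered no CRTCs. */
static unsigned int effective_features(unsigned int features, int num_crtc)
{
	if (num_crtc == 0)
		features &= ~(DRIVER_MODESET | DRIVER_ATOMIC);
	return features;
}
```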
+218
drivers/gpu/drm/tests/drm_buddy_test.c
··· 14 14 15 15 #include "../lib/drm_random.h" 16 16 17 + static unsigned int random_seed; 18 + 17 19 static inline u64 get_size(int order, u64 chunk_size) 18 20 { 19 21 return (1 << order) * chunk_size; 22 + } 23 + 24 + static void drm_test_buddy_alloc_range_bias(struct kunit *test) 25 + { 26 + u32 mm_size, ps, bias_size, bias_start, bias_end, bias_rem; 27 + DRM_RND_STATE(prng, random_seed); 28 + unsigned int i, count, *order; 29 + struct drm_buddy mm; 30 + LIST_HEAD(allocated); 31 + 32 + bias_size = SZ_1M; 33 + ps = roundup_pow_of_two(prandom_u32_state(&prng) % bias_size); 34 + ps = max(SZ_4K, ps); 35 + mm_size = (SZ_8M-1) & ~(ps-1); /* Multiple roots */ 36 + 37 + kunit_info(test, "mm_size=%u, ps=%u\n", mm_size, ps); 38 + 39 + KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, ps), 40 + "buddy_init failed\n"); 41 + 42 + count = mm_size / bias_size; 43 + order = drm_random_order(count, &prng); 44 + KUNIT_EXPECT_TRUE(test, order); 45 + 46 + /* 47 + * Idea is to split the address space into uniform bias ranges, and then 48 + * in some random order allocate within each bias, using various 49 + * patterns within. This should detect if allocations leak out from a 50 + * given bias, for example. 
51 + */ 52 + 53 + for (i = 0; i < count; i++) { 54 + LIST_HEAD(tmp); 55 + u32 size; 56 + 57 + bias_start = order[i] * bias_size; 58 + bias_end = bias_start + bias_size; 59 + bias_rem = bias_size; 60 + 61 + /* internal round_up too big */ 62 + KUNIT_ASSERT_TRUE_MSG(test, 63 + drm_buddy_alloc_blocks(&mm, bias_start, 64 + bias_end, bias_size + ps, bias_size, 65 + &allocated, 66 + DRM_BUDDY_RANGE_ALLOCATION), 67 + "buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n", 68 + bias_start, bias_end, bias_size, bias_size); 69 + 70 + /* size too big */ 71 + KUNIT_ASSERT_TRUE_MSG(test, 72 + drm_buddy_alloc_blocks(&mm, bias_start, 73 + bias_end, bias_size + ps, ps, 74 + &allocated, 75 + DRM_BUDDY_RANGE_ALLOCATION), 76 + "buddy_alloc didn't fail with bias(%x-%x), size=%u, ps=%u\n", 77 + bias_start, bias_end, bias_size + ps, ps); 78 + 79 + /* bias range too small for size */ 80 + KUNIT_ASSERT_TRUE_MSG(test, 81 + drm_buddy_alloc_blocks(&mm, bias_start + ps, 82 + bias_end, bias_size, ps, 83 + &allocated, 84 + DRM_BUDDY_RANGE_ALLOCATION), 85 + "buddy_alloc didn't fail with bias(%x-%x), size=%u, ps=%u\n", 86 + bias_start + ps, bias_end, bias_size, ps); 87 + 88 + /* bias misaligned */ 89 + KUNIT_ASSERT_TRUE_MSG(test, 90 + drm_buddy_alloc_blocks(&mm, bias_start + ps, 91 + bias_end - ps, 92 + bias_size >> 1, bias_size >> 1, 93 + &allocated, 94 + DRM_BUDDY_RANGE_ALLOCATION), 95 + "buddy_alloc h didn't fail with bias(%x-%x), size=%u, ps=%u\n", 96 + bias_start + ps, bias_end - ps, bias_size >> 1, bias_size >> 1); 97 + 98 + /* single big page */ 99 + KUNIT_ASSERT_FALSE_MSG(test, 100 + drm_buddy_alloc_blocks(&mm, bias_start, 101 + bias_end, bias_size, bias_size, 102 + &tmp, 103 + DRM_BUDDY_RANGE_ALLOCATION), 104 + "buddy_alloc i failed with bias(%x-%x), size=%u, ps=%u\n", 105 + bias_start, bias_end, bias_size, bias_size); 106 + drm_buddy_free_list(&mm, &tmp); 107 + 108 + /* single page with internal round_up */ 109 + KUNIT_ASSERT_FALSE_MSG(test, 110 + drm_buddy_alloc_blocks(&mm, bias_start,
111 + bias_end, ps, bias_size, 112 + &tmp, 113 + DRM_BUDDY_RANGE_ALLOCATION), 114 + "buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n", 115 + bias_start, bias_end, ps, bias_size); 116 + drm_buddy_free_list(&mm, &tmp); 117 + 118 + /* random size within */ 119 + size = max(round_up(prandom_u32_state(&prng) % bias_rem, ps), ps); 120 + if (size) 121 + KUNIT_ASSERT_FALSE_MSG(test, 122 + drm_buddy_alloc_blocks(&mm, bias_start, 123 + bias_end, size, ps, 124 + &tmp, 125 + DRM_BUDDY_RANGE_ALLOCATION), 126 + "buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n", 127 + bias_start, bias_end, size, ps); 128 + 129 + bias_rem -= size; 130 + /* too big for current avail */ 131 + KUNIT_ASSERT_TRUE_MSG(test, 132 + drm_buddy_alloc_blocks(&mm, bias_start, 133 + bias_end, bias_rem + ps, ps, 134 + &allocated, 135 + DRM_BUDDY_RANGE_ALLOCATION), 136 + "buddy_alloc didn't fail with bias(%x-%x), size=%u, ps=%u\n", 137 + bias_start, bias_end, bias_rem + ps, ps); 138 + 139 + if (bias_rem) { 140 + /* random fill of the remainder */ 141 + size = max(round_up(prandom_u32_state(&prng) % bias_rem, ps), ps); 142 + size = max(size, ps); 143 + 144 + KUNIT_ASSERT_FALSE_MSG(test, 145 + drm_buddy_alloc_blocks(&mm, bias_start, 146 + bias_end, size, ps, 147 + &allocated, 148 + DRM_BUDDY_RANGE_ALLOCATION), 149 + "buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n", 150 + bias_start, bias_end, size, ps); 151 + /* 152 + * Intentionally allow some space to be left 153 + * unallocated, and ideally not always on the bias 154 + * boundaries. 155 + */ 156 + drm_buddy_free_list(&mm, &tmp); 157 + } else { 158 + list_splice_tail(&tmp, &allocated); 159 + } 160 + } 161 + 162 + kfree(order); 163 + drm_buddy_free_list(&mm, &allocated); 164 + drm_buddy_fini(&mm); 165 + 166 + /* 167 + * Something more free-form. Idea is to pick a random starting bias 168 + * range within the address space and then start filling it up. Also 169 + * randomly grow the bias range in both directions as we go along. This
170 + should give us bias start/end which is not always uniform like above, 171 + and in some cases will require the allocator to jump over already 172 + allocated nodes in the middle of the address space. 173 + */ 174 + 175 + KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_init(&mm, mm_size, ps), 176 + "buddy_init failed\n"); 177 + 178 + bias_start = round_up(prandom_u32_state(&prng) % (mm_size - ps), ps); 179 + bias_end = round_up(bias_start + prandom_u32_state(&prng) % (mm_size - bias_start), ps); 180 + bias_end = max(bias_end, bias_start + ps); 181 + bias_rem = bias_end - bias_start; 182 + 183 + do { 184 + u32 size = max(round_up(prandom_u32_state(&prng) % bias_rem, ps), ps); 185 + 186 + KUNIT_ASSERT_FALSE_MSG(test, 187 + drm_buddy_alloc_blocks(&mm, bias_start, 188 + bias_end, size, ps, 189 + &allocated, 190 + DRM_BUDDY_RANGE_ALLOCATION), 191 + "buddy_alloc failed with bias(%x-%x), size=%u, ps=%u\n", 192 + bias_start, bias_end, size); 193 + bias_rem -= size; 194 + 195 + /* 196 + * Try to randomly grow the bias range in both directions, or 197 + * only one, or perhaps don't grow at all.
198 + */ 199 + do { 200 + u32 old_bias_start = bias_start; 201 + u32 old_bias_end = bias_end; 202 + 203 + if (bias_start) 204 + bias_start -= round_up(prandom_u32_state(&prng) % bias_start, ps); 205 + if (bias_end != mm_size) 206 + bias_end += round_up(prandom_u32_state(&prng) % (mm_size - bias_end), ps); 207 + 208 + bias_rem += old_bias_start - bias_start; 209 + bias_rem += bias_end - old_bias_end; 210 + } while (!bias_rem && (bias_start || bias_end != mm_size)); 211 + } while (bias_rem); 212 + 213 + KUNIT_ASSERT_EQ(test, bias_start, 0); 214 + KUNIT_ASSERT_EQ(test, bias_end, mm_size); 215 + KUNIT_ASSERT_TRUE_MSG(test, 216 + drm_buddy_alloc_blocks(&mm, bias_start, bias_end, 217 + ps, ps, 218 + &allocated, 219 + DRM_BUDDY_RANGE_ALLOCATION), 220 + "buddy_alloc passed with bias(%x-%x), size=%u\n", 221 + bias_start, bias_end, ps); 222 + 223 + drm_buddy_free_list(&mm, &allocated); 224 + drm_buddy_fini(&mm); 20 225 } 21 226 22 227 static void drm_test_buddy_alloc_contiguous(struct kunit *test) ··· 567 362 drm_buddy_fini(&mm); 568 363 } 569 364 365 + static int drm_buddy_suite_init(struct kunit_suite *suite) 366 + { 367 + while (!random_seed) 368 + random_seed = get_random_u32(); 369 + 370 + kunit_info(suite, "Testing DRM buddy manager, with random_seed=0x%x\n", 371 + random_seed); 372 + 373 + return 0; 374 + } 375 + 570 376 static struct kunit_case drm_buddy_tests[] = { 571 377 KUNIT_CASE(drm_test_buddy_alloc_limit), 572 378 KUNIT_CASE(drm_test_buddy_alloc_optimistic), 573 379 KUNIT_CASE(drm_test_buddy_alloc_pessimistic), 574 380 KUNIT_CASE(drm_test_buddy_alloc_pathological), 575 381 KUNIT_CASE(drm_test_buddy_alloc_contiguous), 382 + KUNIT_CASE(drm_test_buddy_alloc_range_bias), 576 383 {} 577 384 }; 578 385 579 386 static struct kunit_suite drm_buddy_test_suite = { 580 387 .name = "drm_buddy", 388 + .suite_init = drm_buddy_suite_init, 581 389 .test_cases = drm_buddy_tests, 582 390 }; 583 391
+9 -2
drivers/gpu/drm/xe/xe_bo.c
··· 28 28 #include "xe_ttm_stolen_mgr.h" 29 29 #include "xe_vm.h" 30 30 31 + const char *const xe_mem_type_to_name[TTM_NUM_MEM_TYPES] = { 32 + [XE_PL_SYSTEM] = "system", 33 + [XE_PL_TT] = "gtt", 34 + [XE_PL_VRAM0] = "vram0", 35 + [XE_PL_VRAM1] = "vram1", 36 + [XE_PL_STOLEN] = "stolen" 37 + }; 38 + 31 39 static const struct ttm_place sys_placement_flags = { 32 40 .fpfn = 0, 33 41 .lpfn = 0, ··· 721 713 migrate = xe->tiles[0].migrate; 722 714 723 715 xe_assert(xe, migrate); 724 - 725 - trace_xe_bo_move(bo); 716 + trace_xe_bo_move(bo, new_mem->mem_type, old_mem_type, move_lacks_source); 726 717 xe_device_mem_access_get(xe); 727 718 728 719 if (xe_bo_is_pinned(bo) && !xe_bo_is_user(bo)) {
+1
drivers/gpu/drm/xe/xe_bo.h
··· 243 243 int xe_bo_restore_pinned(struct xe_bo *bo); 244 244 245 245 extern struct ttm_device_funcs xe_ttm_funcs; 246 + extern const char *const xe_mem_type_to_name[]; 246 247 247 248 int xe_gem_create_ioctl(struct drm_device *dev, void *data, 248 249 struct drm_file *file);
+2 -10
drivers/gpu/drm/xe/xe_drm_client.c
··· 131 131 132 132 static void show_meminfo(struct drm_printer *p, struct drm_file *file) 133 133 { 134 - static const char *const mem_type_to_name[TTM_NUM_MEM_TYPES] = { 135 - [XE_PL_SYSTEM] = "system", 136 - [XE_PL_TT] = "gtt", 137 - [XE_PL_VRAM0] = "vram0", 138 - [XE_PL_VRAM1] = "vram1", 139 - [4 ... 6] = NULL, 140 - [XE_PL_STOLEN] = "stolen" 141 - }; 142 134 struct drm_memory_stats stats[TTM_NUM_MEM_TYPES] = {}; 143 135 struct xe_file *xef = file->driver_priv; 144 136 struct ttm_device *bdev = &xef->xe->ttm; ··· 163 171 spin_unlock(&client->bos_lock); 164 172 165 173 for (mem_type = XE_PL_SYSTEM; mem_type < TTM_NUM_MEM_TYPES; ++mem_type) { 166 - if (!mem_type_to_name[mem_type]) 174 + if (!xe_mem_type_to_name[mem_type]) 167 175 continue; 168 176 169 177 man = ttm_manager_type(bdev, mem_type); ··· 174 182 DRM_GEM_OBJECT_RESIDENT | 175 183 (mem_type != XE_PL_SYSTEM ? 0 : 176 184 DRM_GEM_OBJECT_PURGEABLE), 177 - mem_type_to_name[mem_type]); 185 + xe_mem_type_to_name[mem_type]); 178 186 } 179 187 } 180 188 }
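The xe hunks above move the placement-name table out of show_meminfo() into a shared `xe_mem_type_to_name[]` so the new xe_bo_move tracepoint can reuse it; lookups must still skip unnamed slots. A sketch of the pattern (the index values below are illustrative, not the real XE_PL_* constants):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define NUM_MEM_TYPES 8

/* Sparse name table shared by several users; gaps stay NULL. */
static const char *const mem_type_to_name[NUM_MEM_TYPES] = {
	[0] = "system",
	[1] = "gtt",
	[2] = "vram0",
	[3] = "vram1",
	[7] = "stolen",
};

/* Callers skip placements without a name instead of dereferencing NULL. */
static const char *placement_name(unsigned int mem_type)
{
	if (mem_type >= NUM_MEM_TYPES || !mem_type_to_name[mem_type])
		return NULL;
	return mem_type_to_name[mem_type];
}
```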
+3 -85
drivers/gpu/drm/xe/xe_exec_queue.c
··· 309 309 return q->ops->set_timeslice(q, value); 310 310 } 311 311 312 - static int exec_queue_set_preemption_timeout(struct xe_device *xe, 313 - struct xe_exec_queue *q, u64 value, 314 - bool create) 315 - { 316 - u32 min = 0, max = 0; 317 - 318 - xe_exec_queue_get_prop_minmax(q->hwe->eclass, 319 - XE_EXEC_QUEUE_PREEMPT_TIMEOUT, &min, &max); 320 - 321 - if (xe_exec_queue_enforce_schedule_limit() && 322 - !xe_hw_engine_timeout_in_range(value, min, max)) 323 - return -EINVAL; 324 - 325 - return q->ops->set_preempt_timeout(q, value); 326 - } 327 - 328 - static int exec_queue_set_job_timeout(struct xe_device *xe, struct xe_exec_queue *q, 329 - u64 value, bool create) 330 - { 331 - u32 min = 0, max = 0; 332 - 333 - if (XE_IOCTL_DBG(xe, !create)) 334 - return -EINVAL; 335 - 336 - xe_exec_queue_get_prop_minmax(q->hwe->eclass, 337 - XE_EXEC_QUEUE_JOB_TIMEOUT, &min, &max); 338 - 339 - if (xe_exec_queue_enforce_schedule_limit() && 340 - !xe_hw_engine_timeout_in_range(value, min, max)) 341 - return -EINVAL; 342 - 343 - return q->ops->set_job_timeout(q, value); 344 - } 345 - 346 - static int exec_queue_set_acc_trigger(struct xe_device *xe, struct xe_exec_queue *q, 347 - u64 value, bool create) 348 - { 349 - if (XE_IOCTL_DBG(xe, !create)) 350 - return -EINVAL; 351 - 352 - if (XE_IOCTL_DBG(xe, !xe->info.has_usm)) 353 - return -EINVAL; 354 - 355 - q->usm.acc_trigger = value; 356 - 357 - return 0; 358 - } 359 - 360 - static int exec_queue_set_acc_notify(struct xe_device *xe, struct xe_exec_queue *q, 361 - u64 value, bool create) 362 - { 363 - if (XE_IOCTL_DBG(xe, !create)) 364 - return -EINVAL; 365 - 366 - if (XE_IOCTL_DBG(xe, !xe->info.has_usm)) 367 - return -EINVAL; 368 - 369 - q->usm.acc_notify = value; 370 - 371 - return 0; 372 - } 373 - 374 - static int exec_queue_set_acc_granularity(struct xe_device *xe, struct xe_exec_queue *q, 375 - u64 value, bool create) 376 - { 377 - if (XE_IOCTL_DBG(xe, !create)) 378 - return -EINVAL; 379 - 380 - if (XE_IOCTL_DBG(xe, !xe->info.has_usm))
381 - return -EINVAL; 382 - 383 - if (value > DRM_XE_ACC_GRANULARITY_64M) 384 - return -EINVAL; 385 - 386 - q->usm.acc_granularity = value; 387 - 388 - return 0; 389 - } 390 - 391 312 typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe, 392 313 struct xe_exec_queue *q, 393 314 u64 value, bool create); ··· 316 395 static const xe_exec_queue_set_property_fn exec_queue_set_property_funcs[] = { 317 396 [DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY] = exec_queue_set_priority, 318 397 [DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE] = exec_queue_set_timeslice, 319 - [DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT] = exec_queue_set_preemption_timeout, 320 - [DRM_XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT] = exec_queue_set_job_timeout, 321 - [DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_TRIGGER] = exec_queue_set_acc_trigger, 322 - [DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_NOTIFY] = exec_queue_set_acc_notify, 323 - [DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY] = exec_queue_set_acc_granularity, 324 398 }; 325 399 326 400 static int exec_queue_user_ext_set_property(struct xe_device *xe, ··· 334 418 335 419 if (XE_IOCTL_DBG(xe, ext.property >= 336 420 ARRAY_SIZE(exec_queue_set_property_funcs)) || 337 - XE_IOCTL_DBG(xe, ext.pad)) 421 + XE_IOCTL_DBG(xe, ext.pad) || 422 + XE_IOCTL_DBG(xe, ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY && 423 + ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE)) 338 424 return -EINVAL; 339 425 340 426 idx = array_index_nospec(ext.property, ARRAY_SIZE(exec_queue_set_property_funcs));
-10
drivers/gpu/drm/xe/xe_exec_queue_types.h
··· 150 150 spinlock_t lock; 151 151 } compute; 152 152 153 - /** @usm: unified shared memory state */ 154 - struct { 155 - /** @acc_trigger: access counter trigger */ 156 - u32 acc_trigger; 157 - /** @acc_notify: access counter notify */ 158 - u32 acc_notify; 159 - /** @acc_granularity: access counter granularity */ 160 - u32 acc_granularity; 161 - } usm; 162 - 163 153 /** @ops: submission backend exec queue operations */ 164 154 const struct xe_exec_queue_ops *ops; 165 155
+1 -1
drivers/gpu/drm/xe/xe_execlist.c
··· 212 212 static void xe_execlist_make_active(struct xe_execlist_exec_queue *exl) 213 213 { 214 214 struct xe_execlist_port *port = exl->port; 215 - enum xe_exec_queue_priority priority = exl->active_priority; 215 + enum xe_exec_queue_priority priority = exl->q->sched_props.priority; 216 216 217 217 XE_WARN_ON(priority == XE_EXEC_QUEUE_PRIORITY_UNSET); 218 218 XE_WARN_ON(priority < 0);
+12
drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
··· 247 247 248 248 xe_gt_assert(gt, vma); 249 249 250 + /* Execlists not supported */ 251 + if (gt_to_xe(gt)->info.force_execlist) { 252 + if (fence) 253 + __invalidation_fence_signal(fence); 254 + 255 + return 0; 256 + } 257 + 250 258 action[len++] = XE_GUC_ACTION_TLB_INVALIDATION; 251 259 action[len++] = 0; /* seqno, replaced in send_tlb_invalidation */ 252 260 if (!xe->info.has_range_tlb_invalidation) { ··· 324 316 struct xe_guc *guc = &gt->uc.guc; 325 317 struct drm_printer p = drm_err_printer(__func__); 326 318 int ret; 319 + 320 + /* Execlists not supported */ 321 + if (gt_to_xe(gt)->info.force_execlist) 322 + return 0; 327 323 328 324 /* 329 325 * XXX: See above, this algorithm only works if seqno are always in
+1 -9
drivers/gpu/drm/xe/xe_lrc.c
··· 682 682 683 683 #define PVC_CTX_ASID (0x2e + 1) 684 684 #define PVC_CTX_ACC_CTR_THOLD (0x2a + 1) 685 - #define ACC_GRANULARITY_S 20 686 - #define ACC_NOTIFY_S 16 687 685 688 686 int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe, 689 687 struct xe_exec_queue *q, struct xe_vm *vm, u32 ring_size) ··· 752 754 xe_lrc_write_ctx_reg(lrc, CTX_RING_CTL, 753 755 RING_CTL_SIZE(lrc->ring.size) | RING_VALID); 754 756 if (xe->info.has_asid && vm) 755 - xe_lrc_write_ctx_reg(lrc, PVC_CTX_ASID, 756 - (q->usm.acc_granularity << 757 - ACC_GRANULARITY_S) | vm->usm.asid); 758 - if (xe->info.has_usm && vm) 759 - xe_lrc_write_ctx_reg(lrc, PVC_CTX_ACC_CTR_THOLD, 760 - (q->usm.acc_notify << ACC_NOTIFY_S) | 761 - q->usm.acc_trigger); 757 + xe_lrc_write_ctx_reg(lrc, PVC_CTX_ASID, vm->usm.asid); 762 758 763 759 lrc->desc = LRC_VALID; 764 760 lrc->desc |= LRC_LEGACY_64B_CONTEXT << LRC_ADDRESSING_MODE_SHIFT;
+1 -1
drivers/gpu/drm/xe/xe_mmio.c
··· 105 105 106 106 pci_bus_for_each_resource(root, root_res, i) { 107 107 if (root_res && root_res->flags & (IORESOURCE_MEM | IORESOURCE_MEM_64) && 108 - root_res->start > 0x100000000ull) 108 + (u64)root_res->start > 0x100000000ul) 109 109 break; 110 110 } 111 111
+48 -10
drivers/gpu/drm/xe/xe_sync.c
··· 19 19 #include "xe_macros.h" 20 20 #include "xe_sched_job_types.h" 21 21 22 - struct user_fence { 22 + struct xe_user_fence { 23 23 struct xe_device *xe; 24 24 struct kref refcount; 25 25 struct dma_fence_cb cb; ··· 27 27 struct mm_struct *mm; 28 28 u64 __user *addr; 29 29 u64 value; 30 + int signalled; 30 31 }; 31 32 32 33 static void user_fence_destroy(struct kref *kref) 33 34 { 34 - struct user_fence *ufence = container_of(kref, struct user_fence, 35 - refcount); 35 + struct xe_user_fence *ufence = container_of(kref, struct xe_user_fence, 36 + refcount); 36 37 37 38 mmdrop(ufence->mm); 38 39 kfree(ufence); 39 40 } 40 41 41 - static void user_fence_get(struct user_fence *ufence) 42 + static void user_fence_get(struct xe_user_fence *ufence) 42 43 { 43 44 kref_get(&ufence->refcount); 44 45 } 45 46 46 - static void user_fence_put(struct user_fence *ufence) 47 + static void user_fence_put(struct xe_user_fence *ufence) 47 48 { 48 49 kref_put(&ufence->refcount, user_fence_destroy); 49 50 } 50 51 51 - static struct user_fence *user_fence_create(struct xe_device *xe, u64 addr, 52 - u64 value) 52 + static struct xe_user_fence *user_fence_create(struct xe_device *xe, u64 addr, 53 + u64 value) 53 54 { 54 - struct user_fence *ufence; 55 + struct xe_user_fence *ufence; 55 56 56 57 ufence = kmalloc(sizeof(*ufence), GFP_KERNEL); 57 58 if (!ufence) ··· 70 69 71 70 static void user_fence_worker(struct work_struct *w) 72 71 { 73 - struct user_fence *ufence = container_of(w, struct user_fence, worker); 72 + struct xe_user_fence *ufence = container_of(w, struct xe_user_fence, worker); 74 73 75 74 if (mmget_not_zero(ufence->mm)) { 76 75 kthread_use_mm(ufence->mm); ··· 81 80 } 82 81 83 82 wake_up_all(&ufence->xe->ufence_wq); 83 + WRITE_ONCE(ufence->signalled, 1); 84 84 user_fence_put(ufence); 85 85 } 86 86 87 - static void kick_ufence(struct user_fence *ufence, struct dma_fence *fence) 87 + static void kick_ufence(struct xe_user_fence *ufence, struct dma_fence *fence) 88 88 { 89 89 INIT_WORK(&ufence->worker, user_fence_worker);
90 90 queue_work(ufence->xe->ordered_wq, &ufence->worker); ··· 94 92 95 93 static void user_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb) 96 94 { 97 - struct user_fence *ufence = container_of(cb, struct user_fence, cb); 95 + struct xe_user_fence *ufence = container_of(cb, struct xe_user_fence, cb); 98 96 99 97 kick_ufence(ufence, fence); 100 98 } ··· 341 339 kfree(cf); 342 340 343 341 return ERR_PTR(-ENOMEM); 342 + } 343 + 344 + /** 345 + * xe_sync_ufence_get() - Get user fence from sync 346 + * @sync: input sync 347 + * 348 + * Get a user fence reference from sync. 349 + * 350 + * Return: xe_user_fence pointer with reference 351 + */ 352 + struct xe_user_fence *xe_sync_ufence_get(struct xe_sync_entry *sync) 353 + { 354 + user_fence_get(sync->ufence); 355 + 356 + return sync->ufence; 357 + } 358 + 359 + /** 360 + * xe_sync_ufence_put() - Put user fence reference 361 + * @ufence: user fence reference 362 + * 363 + */ 364 + void xe_sync_ufence_put(struct xe_user_fence *ufence) 365 + { 366 + user_fence_put(ufence); 367 + } 368 + 369 + /** 370 + * xe_sync_ufence_get_status() - Get user fence status 371 + * @ufence: user fence 372 + * 373 + * Return: 1 if signalled, 0 not signalled, <0 on error 374 + */ 375 + int xe_sync_ufence_get_status(struct xe_user_fence *ufence) 376 + { 377 + return READ_ONCE(ufence->signalled); 344 378 }
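The xe_sync hunk above exposes get/put/status helpers so a VMA can hold a reference to an attached user fence and refuse to unbind until it signals. A minimal model of that lifecycle (plain ints instead of kref and READ_ONCE/WRITE_ONCE, which the real code uses):

```c
#include <assert.h>
#include <stdlib.h>

struct ufence {
	int refcount;
	int signalled;
};

static struct ufence *ufence_create(void)
{
	struct ufence *f = calloc(1, sizeof(*f));

	if (f)
		f->refcount = 1;
	return f;
}

static struct ufence *ufence_get(struct ufence *f)
{
	f->refcount++;
	return f;
}

/* Returns 1 when the final reference was dropped and f was freed. */
static int ufence_put(struct ufence *f)
{
	if (--f->refcount == 0) {
		free(f);
		return 1;
	}
	return 0;
}

/* The worker sets this after writing the fence value back to userspace. */
static void ufence_signal(struct ufence *f)
{
	f->signalled = 1;
}
```

xe_vm_unbind_vma()-style code would check `signalled` and return -EBUSY while it is still 0, dropping its reference only once the fence has fired.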
+4
drivers/gpu/drm/xe/xe_sync.h
··· 38 38 return !!sync->ufence; 39 39 } 40 40 41 + struct xe_user_fence *xe_sync_ufence_get(struct xe_sync_entry *sync); 42 + void xe_sync_ufence_put(struct xe_user_fence *ufence); 43 + int xe_sync_ufence_get_status(struct xe_user_fence *ufence); 44 + 41 45 #endif
+1 -1
drivers/gpu/drm/xe/xe_sync_types.h
··· 18 18 struct drm_syncobj *syncobj; 19 19 struct dma_fence *fence; 20 20 struct dma_fence_chain *chain_fence; 21 - struct user_fence *ufence; 21 + struct xe_user_fence *ufence; 22 22 u64 addr; 23 23 u64 timeline_value; 24 24 u32 type;
+41 -18
drivers/gpu/drm/xe/xe_trace.h
··· 12 12 #include <linux/tracepoint.h> 13 13 #include <linux/types.h> 14 14 15 + #include "xe_bo.h" 15 16 #include "xe_bo_types.h" 16 17 #include "xe_exec_queue_types.h" 17 18 #include "xe_gpu_scheduler_types.h" ··· 27 26 TP_ARGS(fence), 28 27 29 28 TP_STRUCT__entry( 30 - __field(u64, fence) 29 + __field(struct xe_gt_tlb_invalidation_fence *, fence) 31 30 __field(int, seqno) 32 31 ), 33 32 34 33 TP_fast_assign( 35 - __entry->fence = (u64)fence; 34 + __entry->fence = fence; 36 35 __entry->seqno = fence->seqno; 37 36 ), 38 37 39 - TP_printk("fence=0x%016llx, seqno=%d", 38 + TP_printk("fence=%p, seqno=%d", 40 39 __entry->fence, __entry->seqno) 41 40 ); 42 41 ··· 83 82 TP_STRUCT__entry( 84 83 __field(size_t, size) 85 84 __field(u32, flags) 86 - __field(u64, vm) 85 + __field(struct xe_vm *, vm) 87 86 ), 88 87 89 88 TP_fast_assign( 90 89 __entry->size = bo->size; 91 90 __entry->flags = bo->flags; 92 - __entry->vm = (unsigned long)bo->vm; 91 + __entry->vm = bo->vm; 93 92 ), 94 93 95 - TP_printk("size=%zu, flags=0x%02x, vm=0x%016llx", 94 + TP_printk("size=%zu, flags=0x%02x, vm=%p", 96 95 __entry->size, __entry->flags, __entry->vm) 97 96 ); 98 97 ··· 101 100 TP_ARGS(bo) 102 101 ); 103 102 104 - DEFINE_EVENT(xe_bo, xe_bo_move, 105 - TP_PROTO(struct xe_bo *bo), 106 - TP_ARGS(bo) 103 + TRACE_EVENT(xe_bo_move, 104 + TP_PROTO(struct xe_bo *bo, uint32_t new_placement, uint32_t old_placement, 105 + bool move_lacks_source), 106 + TP_ARGS(bo, new_placement, old_placement, move_lacks_source), 107 + TP_STRUCT__entry( 108 + __field(struct xe_bo *, bo) 109 + __field(size_t, size) 110 + __field(u32, new_placement) 111 + __field(u32, old_placement) 112 + __array(char, device_id, 12) 113 + __field(bool, move_lacks_source) 114 + ), 115 + 116 + TP_fast_assign( 117 + __entry->bo = bo; 118 + __entry->size = bo->size; 119 + __entry->new_placement = new_placement; 120 + __entry->old_placement = old_placement; 121 + strscpy(__entry->device_id, dev_name(xe_bo_device(__entry->bo)->drm.dev), 12); 
122 + __entry->move_lacks_source = move_lacks_source; 123 + ), 124 + TP_printk("move_lacks_source:%s, migrate object %p [size %zu] from %s to %s device_id:%s", 125 + __entry->move_lacks_source ? "yes" : "no", __entry->bo, __entry->size, 126 + xe_mem_type_to_name[__entry->old_placement], 127 + xe_mem_type_to_name[__entry->new_placement], __entry->device_id) 107 128 ); 108 129 109 130 DECLARE_EVENT_CLASS(xe_exec_queue, ··· 350 327 TP_STRUCT__entry( 351 328 __field(u64, ctx) 352 329 __field(u32, seqno) 353 - __field(u64, fence) 330 + __field(struct xe_hw_fence *, fence) 354 331 ), 355 332 356 333 TP_fast_assign( 357 334 __entry->ctx = fence->dma.context; 358 335 __entry->seqno = fence->dma.seqno; 359 - __entry->fence = (unsigned long)fence; 336 + __entry->fence = fence; 360 337 ), 361 338 362 - TP_printk("ctx=0x%016llx, fence=0x%016llx, seqno=%u", 339 + TP_printk("ctx=0x%016llx, fence=%p, seqno=%u", 363 340 __entry->ctx, __entry->fence, __entry->seqno) 364 341 ); 365 342 ··· 388 365 TP_ARGS(vma), 389 366 390 367 TP_STRUCT__entry( 391 - __field(u64, vma) 368 + __field(struct xe_vma *, vma) 392 369 __field(u32, asid) 393 370 __field(u64, start) 394 371 __field(u64, end) ··· 396 373 ), 397 374 398 375 TP_fast_assign( 399 - __entry->vma = (unsigned long)vma; 376 + __entry->vma = vma; 400 377 __entry->asid = xe_vma_vm(vma)->usm.asid; 401 378 __entry->start = xe_vma_start(vma); 402 379 __entry->end = xe_vma_end(vma) - 1; 403 380 __entry->ptr = xe_vma_userptr(vma); 404 381 ), 405 382 406 - TP_printk("vma=0x%016llx, asid=0x%05x, start=0x%012llx, end=0x%012llx, ptr=0x%012llx,", 383 + TP_printk("vma=%p, asid=0x%05x, start=0x%012llx, end=0x%012llx, userptr=0x%012llx,", 407 384 __entry->vma, __entry->asid, __entry->start, 408 385 __entry->end, __entry->ptr) 409 386 ) ··· 488 465 TP_ARGS(vm), 489 466 490 467 TP_STRUCT__entry( 491 - __field(u64, vm) 468 + __field(struct xe_vm *, vm) 492 469 __field(u32, asid) 493 470 ), 494 471 495 472 TP_fast_assign( 496 - __entry->vm = (unsigned long)vm;
473 + __entry->vm = vm; 497 474 __entry->asid = vm->usm.asid; 498 475 ), 499 476 500 - TP_printk("vm=0x%016llx, asid=0x%05x", __entry->vm, 477 + TP_printk("vm=%p, asid=0x%05x", __entry->vm, 501 478 __entry->asid) 502 479 ); 503 480
+55 -25
drivers/gpu/drm/xe/xe_vm.c
··· 897 897 struct xe_device *xe = vm->xe; 898 898 bool read_only = xe_vma_read_only(vma); 899 899 900 + if (vma->ufence) { 901 + xe_sync_ufence_put(vma->ufence); 902 + vma->ufence = NULL; 903 + } 904 + 900 905 if (xe_vma_is_userptr(vma)) { 901 906 struct xe_userptr *userptr = &to_userptr_vma(vma)->userptr; 902 907 ··· 1613 1608 1614 1609 trace_xe_vma_unbind(vma); 1615 1610 1611 + if (vma->ufence) { 1612 + struct xe_user_fence * const f = vma->ufence; 1613 + 1614 + if (!xe_sync_ufence_get_status(f)) 1615 + return ERR_PTR(-EBUSY); 1616 + 1617 + vma->ufence = NULL; 1618 + xe_sync_ufence_put(f); 1619 + } 1620 + 1616 1621 if (number_tiles > 1) { 1617 1622 fences = kmalloc_array(number_tiles, sizeof(*fences), 1618 1623 GFP_KERNEL); ··· 1756 1741 return ERR_PTR(err); 1757 1742 } 1758 1743 1744 + static struct xe_user_fence * 1745 + find_ufence_get(struct xe_sync_entry *syncs, u32 num_syncs) 1746 + { 1747 + unsigned int i; 1748 + 1749 + for (i = 0; i < num_syncs; i++) { 1750 + struct xe_sync_entry *e = &syncs[i]; 1751 + 1752 + if (xe_sync_is_ufence(e)) 1753 + return xe_sync_ufence_get(e); 1754 + } 1755 + 1756 + return NULL; 1757 + } 1758 + 1759 1759 static int __xe_vm_bind(struct xe_vm *vm, struct xe_vma *vma, 1760 1760 struct xe_exec_queue *q, struct xe_sync_entry *syncs, 1761 1761 u32 num_syncs, bool immediate, bool first_op, ··· 1778 1748 { 1779 1749 struct dma_fence *fence; 1780 1750 struct xe_exec_queue *wait_exec_queue = to_wait_exec_queue(vm, q); 1751 + struct xe_user_fence *ufence; 1781 1752 1782 1753 xe_vm_assert_held(vm); 1754 + 1755 + ufence = find_ufence_get(syncs, num_syncs); 1756 + if (vma->ufence && ufence) 1757 + xe_sync_ufence_put(vma->ufence); 1758 + 1759 + vma->ufence = ufence ?: vma->ufence; 1783 1760 1784 1761 if (immediate) { 1785 1762 fence = xe_vm_bind_vma(vma, q, syncs, num_syncs, first_op, ··· 2154 2117 struct xe_vma_op *op = gpuva_op_to_vma_op(__op); 2155 2118 2156 2119 if (__op->op == DRM_GPUVA_OP_MAP) { 2157 - op->map.immediate = 2158 - flags 
& DRM_XE_VM_BIND_FLAG_IMMEDIATE; 2159 - op->map.read_only = 2160 - flags & DRM_XE_VM_BIND_FLAG_READONLY; 2161 2120 op->map.is_null = flags & DRM_XE_VM_BIND_FLAG_NULL; 2162 2121 op->map.pat_index = pat_index; 2163 2122 } else if (__op->op == DRM_GPUVA_OP_PREFETCH) { ··· 2346 2313 switch (op->base.op) { 2347 2314 case DRM_GPUVA_OP_MAP: 2348 2315 { 2349 - flags |= op->map.read_only ? 2350 - VMA_CREATE_FLAG_READ_ONLY : 0; 2351 2316 flags |= op->map.is_null ? 2352 2317 VMA_CREATE_FLAG_IS_NULL : 0; 2353 2318 ··· 2476 2445 case DRM_GPUVA_OP_MAP: 2477 2446 err = xe_vm_bind(vm, vma, op->q, xe_vma_bo(vma), 2478 2447 op->syncs, op->num_syncs, 2479 - op->map.immediate || !xe_vm_in_fault_mode(vm), 2448 + !xe_vm_in_fault_mode(vm), 2480 2449 op->flags & XE_VMA_OP_FIRST, 2481 2450 op->flags & XE_VMA_OP_LAST); 2482 2451 break; ··· 2751 2720 return 0; 2752 2721 } 2753 2722 2754 - #define SUPPORTED_FLAGS \ 2755 - (DRM_XE_VM_BIND_FLAG_READONLY | \ 2756 - DRM_XE_VM_BIND_FLAG_IMMEDIATE | DRM_XE_VM_BIND_FLAG_NULL) 2723 + #define SUPPORTED_FLAGS (DRM_XE_VM_BIND_FLAG_NULL | \ 2724 + DRM_XE_VM_BIND_FLAG_DUMPABLE) 2757 2725 #define XE_64K_PAGE_MASK 0xffffull 2758 2726 #define ALL_DRM_XE_SYNCS_FLAGS (DRM_XE_SYNCS_FLAG_WAIT_FOR_OP) 2759 - 2760 - #define MAX_BINDS 512 /* FIXME: Picking random upper limit */ 2761 2727 2762 2728 static int vm_bind_ioctl_check_args(struct xe_device *xe, 2763 2729 struct drm_xe_vm_bind *args, ··· 2767 2739 XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1])) 2768 2740 return -EINVAL; 2769 2741 2770 - if (XE_IOCTL_DBG(xe, args->extensions) || 2771 - XE_IOCTL_DBG(xe, args->num_binds > MAX_BINDS)) 2742 + if (XE_IOCTL_DBG(xe, args->extensions)) 2772 2743 return -EINVAL; 2773 2744 2774 2745 if (args->num_binds > 1) { 2775 2746 u64 __user *bind_user = 2776 2747 u64_to_user_ptr(args->vector_of_binds); 2777 2748 2778 - *bind_ops = kmalloc(sizeof(struct drm_xe_vm_bind_op) * 2779 - args->num_binds, GFP_KERNEL); 2749 + *bind_ops = kvmalloc_array(args->num_binds, 2750 + 
sizeof(struct drm_xe_vm_bind_op), 2751 + GFP_KERNEL | __GFP_ACCOUNT); 2780 2752 if (!*bind_ops) 2781 2753 return -ENOMEM; 2782 2754 ··· 2866 2838 2867 2839 free_bind_ops: 2868 2840 if (args->num_binds > 1) 2869 - kfree(*bind_ops); 2841 + kvfree(*bind_ops); 2870 2842 return err; 2871 2843 } 2872 2844 ··· 2954 2926 } 2955 2927 2956 2928 if (args->num_binds) { 2957 - bos = kcalloc(args->num_binds, sizeof(*bos), GFP_KERNEL); 2929 + bos = kvcalloc(args->num_binds, sizeof(*bos), 2930 + GFP_KERNEL | __GFP_ACCOUNT); 2958 2931 if (!bos) { 2959 2932 err = -ENOMEM; 2960 2933 goto release_vm_lock; 2961 2934 } 2962 2935 2963 - ops = kcalloc(args->num_binds, sizeof(*ops), GFP_KERNEL); 2936 + ops = kvcalloc(args->num_binds, sizeof(*ops), 2937 + GFP_KERNEL | __GFP_ACCOUNT); 2964 2938 if (!ops) { 2965 2939 err = -ENOMEM; 2966 2940 goto release_vm_lock; ··· 3103 3073 for (i = 0; bos && i < args->num_binds; ++i) 3104 3074 xe_bo_put(bos[i]); 3105 3075 3106 - kfree(bos); 3107 - kfree(ops); 3076 + kvfree(bos); 3077 + kvfree(ops); 3108 3078 if (args->num_binds > 1) 3109 - kfree(bind_ops); 3079 + kvfree(bind_ops); 3110 3080 3111 3081 return err; 3112 3082 ··· 3130 3100 if (q) 3131 3101 xe_exec_queue_put(q); 3132 3102 free_objs: 3133 - kfree(bos); 3134 - kfree(ops); 3103 + kvfree(bos); 3104 + kvfree(ops); 3135 3105 if (args->num_binds > 1) 3136 - kfree(bind_ops); 3106 + kvfree(bind_ops); 3137 3107 return err; 3138 3108 } 3139 3109
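The bind-ioctl paths above replace plain `kmalloc(sizeof(op) * num_binds)` (and the arbitrary `MAX_BINDS` cap) with `kvmalloc_array()`, which falls back to vmalloc for large counts and rejects multiplications that would overflow. A minimal userspace sketch of that overflow check (the helper name `alloc_array` is ours, not a kernel API):

```c
#include <stdint.h>
#include <stdlib.h>

/* Overflow-checked array allocation in the spirit of kvmalloc_array():
 * refuse n * size when the multiplication would wrap around. */
static void *alloc_array(size_t n, size_t size)
{
	if (size && n > SIZE_MAX / size)
		return NULL;	/* n * size would overflow */
	return malloc(n * size);
}
```

With the check in place, a hostile `num_binds` can no longer wrap the size computation into a small allocation.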
+7 -4
drivers/gpu/drm/xe/xe_vm_types.h
··· 19 19 20 20 struct xe_bo; 21 21 struct xe_sync_entry; 22 + struct xe_user_fence; 22 23 struct xe_vm; 23 24 24 25 #define XE_VMA_READ_ONLY DRM_GPUVA_USERBITS ··· 105 104 * @pat_index: The pat index to use when encoding the PTEs for this vma. 106 105 */ 107 106 u16 pat_index; 107 + 108 + /** 109 + * @ufence: The user fence that was provided with MAP. 110 + * Needs to be signalled before UNMAP can be processed. 111 + */ 112 + struct xe_user_fence *ufence; 108 113 }; 109 114 110 115 /** ··· 295 288 struct xe_vma_op_map { 296 289 /** @vma: VMA to map */ 297 290 struct xe_vma *vma; 298 - /** @immediate: Immediate bind */ 299 - bool immediate; 300 - /** @read_only: Read only */ 301 - bool read_only; 302 291 /** @is_null: is NULL binding */ 303 292 bool is_null; 304 293 /** @pat_index: The pat index to use for this operation. */
+9 -6
drivers/gpu/host1x/dev.c
··· 169 169 .num_sid_entries = ARRAY_SIZE(tegra186_sid_table), 170 170 .sid_table = tegra186_sid_table, 171 171 .reserve_vblank_syncpts = false, 172 + .skip_reset_assert = true, 172 173 }; 173 174 174 175 static const struct host1x_sid_entry tegra194_sid_table[] = { ··· 681 680 host1x_intr_stop(host); 682 681 host1x_syncpt_save(host); 683 682 684 - err = reset_control_bulk_assert(host->nresets, host->resets); 685 - if (err) { 686 - dev_err(dev, "failed to assert reset: %d\n", err); 687 - goto resume_host1x; 688 - } 683 + if (!host->info->skip_reset_assert) { 684 + err = reset_control_bulk_assert(host->nresets, host->resets); 685 + if (err) { 686 + dev_err(dev, "failed to assert reset: %d\n", err); 687 + goto resume_host1x; 688 + } 689 689 690 - usleep_range(1000, 2000); 690 + usleep_range(1000, 2000); 691 + } 691 692 692 693 clk_disable_unprepare(host->clk); 693 694 reset_control_bulk_release(host->nresets, host->resets);
+6
drivers/gpu/host1x/dev.h
··· 116 116 * the display driver disables VBLANK increments. 117 117 */ 118 118 bool reserve_vblank_syncpts; 119 + /* 120 + * On Tegra186, secure world applications may require access to 121 + * host1x during suspend/resume. To allow this, we need to leave 122 + * host1x not in reset. 123 + */ 124 + bool skip_reset_assert; 119 125 }; 120 126 121 127 struct host1x {
+66 -102
drivers/hv/channel.c
··· 322 322 323 323 pagecount = hv_gpadl_size(type, size) >> HV_HYP_PAGE_SHIFT; 324 324 325 - /* do we need a gpadl body msg */ 326 325 pfnsize = MAX_SIZE_CHANNEL_MESSAGE - 327 326 sizeof(struct vmbus_channel_gpadl_header) - 328 327 sizeof(struct gpa_range); 328 + pfncount = umin(pagecount, pfnsize / sizeof(u64)); 329 + 330 + msgsize = sizeof(struct vmbus_channel_msginfo) + 331 + sizeof(struct vmbus_channel_gpadl_header) + 332 + sizeof(struct gpa_range) + pfncount * sizeof(u64); 333 + msgheader = kzalloc(msgsize, GFP_KERNEL); 334 + if (!msgheader) 335 + return -ENOMEM; 336 + 337 + INIT_LIST_HEAD(&msgheader->submsglist); 338 + msgheader->msgsize = msgsize; 339 + 340 + gpadl_header = (struct vmbus_channel_gpadl_header *) 341 + msgheader->msg; 342 + gpadl_header->rangecount = 1; 343 + gpadl_header->range_buflen = sizeof(struct gpa_range) + 344 + pagecount * sizeof(u64); 345 + gpadl_header->range[0].byte_offset = 0; 346 + gpadl_header->range[0].byte_count = hv_gpadl_size(type, size); 347 + for (i = 0; i < pfncount; i++) 348 + gpadl_header->range[0].pfn_array[i] = hv_gpadl_hvpfn( 349 + type, kbuffer, size, send_offset, i); 350 + *msginfo = msgheader; 351 + 352 + pfnsum = pfncount; 353 + pfnleft = pagecount - pfncount; 354 + 355 + /* how many pfns can we fit in a body message */ 356 + pfnsize = MAX_SIZE_CHANNEL_MESSAGE - 357 + sizeof(struct vmbus_channel_gpadl_body); 329 358 pfncount = pfnsize / sizeof(u64); 330 359 331 - if (pagecount > pfncount) { 332 - /* we need a gpadl body */ 333 - /* fill in the header */ 360 + /* 361 + * If pfnleft is zero, everything fits in the header and no body 362 + * messages are needed 363 + */ 364 + while (pfnleft) { 365 + pfncurr = umin(pfncount, pfnleft); 334 366 msgsize = sizeof(struct vmbus_channel_msginfo) + 335 - sizeof(struct vmbus_channel_gpadl_header) + 336 - sizeof(struct gpa_range) + pfncount * sizeof(u64); 337 - msgheader = kzalloc(msgsize, GFP_KERNEL); 338 - if (!msgheader) 339 - goto nomem; 367 + sizeof(struct 
vmbus_channel_gpadl_body) + 368 + pfncurr * sizeof(u64); 369 + msgbody = kzalloc(msgsize, GFP_KERNEL); 340 370 341 - INIT_LIST_HEAD(&msgheader->submsglist); 342 - msgheader->msgsize = msgsize; 343 - 344 - gpadl_header = (struct vmbus_channel_gpadl_header *) 345 - msgheader->msg; 346 - gpadl_header->rangecount = 1; 347 - gpadl_header->range_buflen = sizeof(struct gpa_range) + 348 - pagecount * sizeof(u64); 349 - gpadl_header->range[0].byte_offset = 0; 350 - gpadl_header->range[0].byte_count = hv_gpadl_size(type, size); 351 - for (i = 0; i < pfncount; i++) 352 - gpadl_header->range[0].pfn_array[i] = hv_gpadl_hvpfn( 353 - type, kbuffer, size, send_offset, i); 354 - *msginfo = msgheader; 355 - 356 - pfnsum = pfncount; 357 - pfnleft = pagecount - pfncount; 358 - 359 - /* how many pfns can we fit */ 360 - pfnsize = MAX_SIZE_CHANNEL_MESSAGE - 361 - sizeof(struct vmbus_channel_gpadl_body); 362 - pfncount = pfnsize / sizeof(u64); 363 - 364 - /* fill in the body */ 365 - while (pfnleft) { 366 - if (pfnleft > pfncount) 367 - pfncurr = pfncount; 368 - else 369 - pfncurr = pfnleft; 370 - 371 - msgsize = sizeof(struct vmbus_channel_msginfo) + 372 - sizeof(struct vmbus_channel_gpadl_body) + 373 - pfncurr * sizeof(u64); 374 - msgbody = kzalloc(msgsize, GFP_KERNEL); 375 - 376 - if (!msgbody) { 377 - struct vmbus_channel_msginfo *pos = NULL; 378 - struct vmbus_channel_msginfo *tmp = NULL; 379 - /* 380 - * Free up all the allocated messages. 
381 - */ 382 - list_for_each_entry_safe(pos, tmp, 383 - &msgheader->submsglist, 384 - msglistentry) { 385 - 386 - list_del(&pos->msglistentry); 387 - kfree(pos); 388 - } 389 - 390 - goto nomem; 391 - } 392 - 393 - msgbody->msgsize = msgsize; 394 - gpadl_body = 395 - (struct vmbus_channel_gpadl_body *)msgbody->msg; 396 - 371 + if (!msgbody) { 372 + struct vmbus_channel_msginfo *pos = NULL; 373 + struct vmbus_channel_msginfo *tmp = NULL; 397 374 /* 398 - * Gpadl is u32 and we are using a pointer which could 399 - * be 64-bit 400 - * This is governed by the guest/host protocol and 401 - * so the hypervisor guarantees that this is ok. 375 + * Free up all the allocated messages. 402 376 */ 403 - for (i = 0; i < pfncurr; i++) 404 - gpadl_body->pfn[i] = hv_gpadl_hvpfn(type, 405 - kbuffer, size, send_offset, pfnsum + i); 377 + list_for_each_entry_safe(pos, tmp, 378 + &msgheader->submsglist, 379 + msglistentry) { 406 380 407 - /* add to msg header */ 408 - list_add_tail(&msgbody->msglistentry, 409 - &msgheader->submsglist); 410 - pfnsum += pfncurr; 411 - pfnleft -= pfncurr; 381 + list_del(&pos->msglistentry); 382 + kfree(pos); 383 + } 384 + kfree(msgheader); 385 + return -ENOMEM; 412 386 } 413 - } else { 414 - /* everything fits in a header */ 415 - msgsize = sizeof(struct vmbus_channel_msginfo) + 416 - sizeof(struct vmbus_channel_gpadl_header) + 417 - sizeof(struct gpa_range) + pagecount * sizeof(u64); 418 - msgheader = kzalloc(msgsize, GFP_KERNEL); 419 - if (msgheader == NULL) 420 - goto nomem; 421 387 422 - INIT_LIST_HEAD(&msgheader->submsglist); 423 - msgheader->msgsize = msgsize; 388 + msgbody->msgsize = msgsize; 389 + gpadl_body = (struct vmbus_channel_gpadl_body *)msgbody->msg; 424 390 425 - gpadl_header = (struct vmbus_channel_gpadl_header *) 426 - msgheader->msg; 427 - gpadl_header->rangecount = 1; 428 - gpadl_header->range_buflen = sizeof(struct gpa_range) + 429 - pagecount * sizeof(u64); 430 - gpadl_header->range[0].byte_offset = 0; 431 - 
gpadl_header->range[0].byte_count = hv_gpadl_size(type, size); 432 - for (i = 0; i < pagecount; i++) 433 - gpadl_header->range[0].pfn_array[i] = hv_gpadl_hvpfn( 434 - type, kbuffer, size, send_offset, i); 391 + /* 392 + * Gpadl is u32 and we are using a pointer which could 393 + * be 64-bit 394 + * This is governed by the guest/host protocol and 395 + * so the hypervisor guarantees that this is ok. 396 + */ 397 + for (i = 0; i < pfncurr; i++) 398 + gpadl_body->pfn[i] = hv_gpadl_hvpfn(type, 399 + kbuffer, size, send_offset, pfnsum + i); 435 400 436 - *msginfo = msgheader; 401 + /* add to msg header */ 402 + list_add_tail(&msgbody->msglistentry, &msgheader->submsglist); 403 + pfnsum += pfncurr; 404 + pfnleft -= pfncurr; 437 405 } 438 406 439 407 return 0; 440 - nomem: 441 - kfree(msgheader); 442 - kfree(msgbody); 443 - return -ENOMEM; 444 408 } 445 409 446 410 /*
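The rewritten `__vmbus_establish_gpadl()` path always builds the header with `umin(pagecount, header capacity)` PFNs and then emits body messages while `pfnleft` is nonzero, instead of duplicating the header-only case in an `else` branch. The chunking arithmetic can be sketched like this (the capacities are made-up numbers, not the real VMBus message sizes):

```c
#include <stddef.h>

static size_t umin_sz(size_t a, size_t b)
{
	return a < b ? a : b;
}

/* How many body messages follow the header once the header takes its
 * share of PFNs; mirrors the header-then-bodies loop in the patch. */
static int count_bodies(size_t pagecount, size_t hdr_cap, size_t body_cap)
{
	size_t left = pagecount - umin_sz(pagecount, hdr_cap);
	int bodies = 0;

	while (left) {
		left -= umin_sz(left, body_cap);
		bodies++;
	}
	return bodies;
}
```

When everything fits in the header, `left` starts at zero and the loop body never runs, which is exactly why the separate "everything fits" branch could be deleted.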
+30 -1
drivers/hv/hv_util.c
··· 296 296 spinlock_t lock; 297 297 } host_ts; 298 298 299 + static bool timesync_implicit; 300 + 301 + module_param(timesync_implicit, bool, 0644); 302 + MODULE_PARM_DESC(timesync_implicit, "If set treat SAMPLE as SYNC when clock is behind"); 303 + 299 304 static inline u64 reftime_to_ns(u64 reftime) 300 305 { 301 306 return (reftime - WLTIMEDELTA) * 100; ··· 350 345 } 351 346 352 347 /* 348 + * Due to a bug on Hyper-V hosts, the sync flag may not always be sent on resume. 349 + * Force a sync if the guest is behind. 350 + */ 351 + static inline bool hv_implicit_sync(u64 host_time) 352 + { 353 + struct timespec64 new_ts; 354 + struct timespec64 threshold_ts; 355 + 356 + new_ts = ns_to_timespec64(reftime_to_ns(host_time)); 357 + ktime_get_real_ts64(&threshold_ts); 358 + 359 + threshold_ts.tv_sec += 5; 360 + 361 + /* 362 + * If guest behind the host by 5 or more seconds. 363 + */ 364 + if (timespec64_compare(&new_ts, &threshold_ts) >= 0) 365 + return true; 366 + 367 + return false; 368 + } 369 + 370 + /* 353 371 * Synchronize time with host after reboot, restore, etc. 354 372 * 355 373 * ICTIMESYNCFLAG_SYNC flag bit indicates reboot, restore events of the VM. ··· 412 384 spin_unlock_irqrestore(&host_ts.lock, flags); 413 385 414 386 /* Schedule work to do do_settimeofday64() */ 415 - if (adj_flags & ICTIMESYNCFLAG_SYNC) 387 + if ((adj_flags & ICTIMESYNCFLAG_SYNC) || 388 + (timesync_implicit && hv_implicit_sync(host_ts.host_time))) 416 389 schedule_work(&adj_time_work); 417 390 } 418 391
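The new `hv_implicit_sync()` treats a SAMPLE as a SYNC when the host's time is at least 5 seconds ahead of the guest's. The comparison can be sketched in plain C (field widths simplified relative to `struct timespec64`):

```c
#include <stdbool.h>

struct ts64 {
	long long tv_sec;
	long tv_nsec;
};

static int ts64_cmp(const struct ts64 *a, const struct ts64 *b)
{
	if (a->tv_sec != b->tv_sec)
		return a->tv_sec < b->tv_sec ? -1 : 1;
	if (a->tv_nsec != b->tv_nsec)
		return a->tv_nsec < b->tv_nsec ? -1 : 1;
	return 0;
}

/* Sync when host >= guest + 5s, as in hv_implicit_sync(). */
static bool needs_implicit_sync(struct ts64 host, struct ts64 guest)
{
	guest.tv_sec += 5;	/* threshold, like threshold_ts */
	return ts64_cmp(&host, &guest) >= 0;
}
```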
+1 -1
drivers/hv/vmbus_drv.c
··· 988 988 }; 989 989 990 990 /* The one and only one */ 991 - static struct bus_type hv_bus = { 991 + static const struct bus_type hv_bus = { 992 992 .name = "vmbus", 993 993 .match = vmbus_match, 994 994 .shutdown = vmbus_shutdown,
+2 -2
drivers/iommu/iommu-sva.c
··· 117 117 if (ret) 118 118 goto out_free_domain; 119 119 domain->users = 1; 120 - refcount_set(&handle->users, 1); 121 120 list_add(&domain->next, &mm->iommu_mm->sva_domains); 122 - list_add(&handle->handle_item, &mm->iommu_mm->sva_handles); 123 121 124 122 out: 123 + refcount_set(&handle->users, 1); 124 + list_add(&handle->handle_item, &mm->iommu_mm->sva_handles); 125 125 mutex_unlock(&iommu_sva_lock); 126 126 handle->dev = dev; 127 127 handle->domain = domain;
+6 -3
drivers/iommu/iommufd/io_pagetable.c
··· 1330 1330 1331 1331 int iopt_add_access(struct io_pagetable *iopt, struct iommufd_access *access) 1332 1332 { 1333 + u32 new_id; 1333 1334 int rc; 1334 1335 1335 1336 down_write(&iopt->domains_rwsem); 1336 1337 down_write(&iopt->iova_rwsem); 1337 - rc = xa_alloc(&iopt->access_list, &access->iopt_access_list_id, access, 1338 - xa_limit_16b, GFP_KERNEL_ACCOUNT); 1338 + rc = xa_alloc(&iopt->access_list, &new_id, access, xa_limit_16b, 1339 + GFP_KERNEL_ACCOUNT); 1340 + 1339 1341 if (rc) 1340 1342 goto out_unlock; 1341 1343 1342 1344 rc = iopt_calculate_iova_alignment(iopt); 1343 1345 if (rc) { 1344 - xa_erase(&iopt->access_list, access->iopt_access_list_id); 1346 + xa_erase(&iopt->access_list, new_id); 1345 1347 goto out_unlock; 1346 1348 } 1349 + access->iopt_access_list_id = new_id; 1347 1350 1348 1351 out_unlock: 1349 1352 up_write(&iopt->iova_rwsem);
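The `iopt_add_access()` fix allocates the xarray slot into a local `new_id` and stores it in `access->iopt_access_list_id` only after `iopt_calculate_iova_alignment()` has succeeded, so a failing path never leaves a stale id in the object. A generic sketch of this publish-on-success shape (names and the error simulation are illustrative):

```c
#include <stdbool.h>

/*
 * Write the freshly allocated id into the long-lived object only after
 * every later fallible step succeeds; on failure the caller releases
 * new_id and the object is left untouched.
 */
static int register_access(unsigned int *published_id, unsigned int new_id,
			   bool later_step_ok)
{
	if (!later_step_ok)
		return -1;	/* new_id would be released here */
	*published_id = new_id;	/* commit only now */
	return 0;
}
```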
+48 -21
drivers/iommu/iommufd/selftest.c
··· 36 36 }, 37 37 }; 38 38 39 - static atomic_t mock_dev_num; 39 + static DEFINE_IDA(mock_dev_ida); 40 40 41 41 enum { 42 42 MOCK_DIRTY_TRACK = 1, ··· 63 63 * In syzkaller mode the 64 bit IOVA is converted into an nth area and offset 64 64 * value. This has a much smaller randomization space and syzkaller can hit it. 65 65 */ 66 - static unsigned long iommufd_test_syz_conv_iova(struct io_pagetable *iopt, 67 - u64 *iova) 66 + static unsigned long __iommufd_test_syz_conv_iova(struct io_pagetable *iopt, 67 + u64 *iova) 68 68 { 69 69 struct syz_layout { 70 70 __u32 nth_area; ··· 88 88 return 0; 89 89 } 90 90 91 + static unsigned long iommufd_test_syz_conv_iova(struct iommufd_access *access, 92 + u64 *iova) 93 + { 94 + unsigned long ret; 95 + 96 + mutex_lock(&access->ioas_lock); 97 + if (!access->ioas) { 98 + mutex_unlock(&access->ioas_lock); 99 + return 0; 100 + } 101 + ret = __iommufd_test_syz_conv_iova(&access->ioas->iopt, iova); 102 + mutex_unlock(&access->ioas_lock); 103 + return ret; 104 + } 105 + 91 106 void iommufd_test_syz_conv_iova_id(struct iommufd_ucmd *ucmd, 92 107 unsigned int ioas_id, u64 *iova, u32 *flags) 93 108 { ··· 115 100 ioas = iommufd_get_ioas(ucmd->ictx, ioas_id); 116 101 if (IS_ERR(ioas)) 117 102 return; 118 - *iova = iommufd_test_syz_conv_iova(&ioas->iopt, iova); 103 + *iova = __iommufd_test_syz_conv_iova(&ioas->iopt, iova); 119 104 iommufd_put_object(ucmd->ictx, &ioas->obj); 120 105 } 121 106 ··· 138 123 struct mock_dev { 139 124 struct device dev; 140 125 unsigned long flags; 126 + int id; 141 127 }; 142 128 143 129 struct selftest_obj { ··· 446 430 447 431 /* 448 432 * iommufd generates unmaps that must be a strict 449 - * superset of the map's performend So every starting 450 - * IOVA should have been an iova passed to map, and the 433 + * superset of the map's performend So every 434 + * starting/ending IOVA should have been an iova passed 435 + * to map. 
451 436 * 452 - * First IOVA must be present and have been a first IOVA 453 - * passed to map_pages 437 + * This simple logic doesn't work when the HUGE_PAGE is 438 + * turned on since the core code will automatically 439 + * switch between the two page sizes creating a break in 440 + * the unmap calls. The break can land in the middle of 441 + * contiguous IOVA. 454 442 */ 455 - if (first) { 456 - WARN_ON(ent && !(xa_to_value(ent) & 457 - MOCK_PFN_START_IOVA)); 458 - first = false; 443 + if (!(domain->pgsize_bitmap & MOCK_HUGE_PAGE_SIZE)) { 444 + if (first) { 445 + WARN_ON(ent && !(xa_to_value(ent) & 446 + MOCK_PFN_START_IOVA)); 447 + first = false; 448 + } 449 + if (pgcount == 1 && 450 + cur + MOCK_IO_PAGE_SIZE == pgsize) 451 + WARN_ON(ent && !(xa_to_value(ent) & 452 + MOCK_PFN_LAST_IOVA)); 459 453 } 460 - if (pgcount == 1 && cur + MOCK_IO_PAGE_SIZE == pgsize) 461 - WARN_ON(ent && !(xa_to_value(ent) & 462 - MOCK_PFN_LAST_IOVA)); 463 454 464 455 iova += MOCK_IO_PAGE_SIZE; 465 456 ret += MOCK_IO_PAGE_SIZE; ··· 654 631 { 655 632 struct mock_dev *mdev = container_of(dev, struct mock_dev, dev); 656 633 657 - atomic_dec(&mock_dev_num); 634 + ida_free(&mock_dev_ida, mdev->id); 658 635 kfree(mdev); 659 636 } 660 637 ··· 676 653 mdev->dev.release = mock_dev_release; 677 654 mdev->dev.bus = &iommufd_mock_bus_type.bus; 678 655 679 - rc = dev_set_name(&mdev->dev, "iommufd_mock%u", 680 - atomic_inc_return(&mock_dev_num)); 656 + rc = ida_alloc(&mock_dev_ida, GFP_KERNEL); 657 + if (rc < 0) 658 + goto err_put; 659 + mdev->id = rc; 660 + 661 + rc = dev_set_name(&mdev->dev, "iommufd_mock%u", mdev->id); 681 662 if (rc) 682 663 goto err_put; 683 664 ··· 1183 1156 } 1184 1157 1185 1158 if (flags & MOCK_FLAGS_ACCESS_SYZ) 1186 - iova = iommufd_test_syz_conv_iova(&staccess->access->ioas->iopt, 1159 + iova = iommufd_test_syz_conv_iova(staccess->access, 1187 1160 &cmd->access_pages.iova); 1188 1161 1189 1162 npages = (ALIGN(iova + length, PAGE_SIZE) - ··· 1285 1258 } 1286 1259 1287 1260 
if (flags & MOCK_FLAGS_ACCESS_SYZ) 1288 - iova = iommufd_test_syz_conv_iova(&staccess->access->ioas->iopt, 1289 - &cmd->access_rw.iova); 1261 + iova = iommufd_test_syz_conv_iova(staccess->access, 1262 + &cmd->access_rw.iova); 1290 1263 1291 1264 rc = iommufd_access_rw(staccess->access, iova, tmp, length, flags); 1292 1265 if (rc)
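Switching the mock devices from an `atomic_t` counter to an IDA means device numbers are reused after `ida_free()`, so names like `iommufd_mock0` stay dense across create/destroy cycles. A tiny bitmap-based sketch of that allocate-lowest/free behaviour (not the kernel's IDA implementation, just its contract for small ids):

```c
#include <limits.h>

static unsigned long ida_bits;	/* one bit per id, 0 = free */

/* Allocate the lowest free id, like ida_alloc(); -1 when exhausted. */
static int toy_ida_alloc(void)
{
	for (int id = 0; id < (int)(sizeof(ida_bits) * CHAR_BIT); id++) {
		if (!(ida_bits & (1UL << id))) {
			ida_bits |= 1UL << id;
			return id;
		}
	}
	return -1;
}

static void toy_ida_free(int id)
{
	ida_bits &= ~(1UL << id);
}
```

A bare incrementing counter would hand out 2 after freeing 0; the IDA hands 0 back out.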
+2
drivers/mmc/core/mmc.c
··· 1015 1015 static unsigned ext_csd_bits[] = { 1016 1016 EXT_CSD_BUS_WIDTH_8, 1017 1017 EXT_CSD_BUS_WIDTH_4, 1018 + EXT_CSD_BUS_WIDTH_1, 1018 1019 }; 1019 1020 static unsigned bus_widths[] = { 1020 1021 MMC_BUS_WIDTH_8, 1021 1022 MMC_BUS_WIDTH_4, 1023 + MMC_BUS_WIDTH_1, 1022 1024 }; 1023 1025 struct mmc_host *host = card->host; 1024 1026 unsigned idx, bus_width = 0;
+24
drivers/mmc/host/mmci_stm32_sdmmc.c
··· 225 225 struct scatterlist *sg; 226 226 int i; 227 227 228 + host->dma_in_progress = true; 229 + 228 230 if (!host->variant->dma_lli || data->sg_len == 1 || 229 231 idma->use_bounce_buffer) { 230 232 u32 dma_addr; ··· 265 263 return 0; 266 264 } 267 265 266 + static void sdmmc_idma_error(struct mmci_host *host) 267 + { 268 + struct mmc_data *data = host->data; 269 + struct sdmmc_idma *idma = host->dma_priv; 270 + 271 + if (!dma_inprogress(host)) 272 + return; 273 + 274 + writel_relaxed(0, host->base + MMCI_STM32_IDMACTRLR); 275 + host->dma_in_progress = false; 276 + data->host_cookie = 0; 277 + 278 + if (!idma->use_bounce_buffer) 279 + dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len, 280 + mmc_get_dma_dir(data)); 281 + } 282 + 268 283 static void sdmmc_idma_finalize(struct mmci_host *host, struct mmc_data *data) 269 284 { 285 + if (!dma_inprogress(host)) 286 + return; 287 + 270 288 writel_relaxed(0, host->base + MMCI_STM32_IDMACTRLR); 289 + host->dma_in_progress = false; 271 290 272 291 if (!data->host_cookie) 273 292 sdmmc_idma_unprep_data(host, data, 0); ··· 699 676 .dma_setup = sdmmc_idma_setup, 700 677 .dma_start = sdmmc_idma_start, 701 678 .dma_finalize = sdmmc_idma_finalize, 679 + .dma_error = sdmmc_idma_error, 702 680 .set_clkreg = mmci_sdmmc_set_clkreg, 703 681 .set_pwrreg = mmci_sdmmc_set_pwrreg, 704 682 .busy_complete = sdmmc_busy_complete,
+39 -9
drivers/mmc/host/sdhci-xenon-phy.c
··· 11 11 #include <linux/slab.h> 12 12 #include <linux/delay.h> 13 13 #include <linux/ktime.h> 14 + #include <linux/iopoll.h> 14 15 #include <linux/of_address.h> 15 16 16 17 #include "sdhci-pltfm.h" ··· 109 108 #define XENON_EMMC_5_0_PHY_LOGIC_TIMING_VALUE 0x5A54 110 109 #define XENON_EMMC_PHY_LOGIC_TIMING_ADJUST (XENON_EMMC_PHY_REG_BASE + 0x18) 111 110 #define XENON_LOGIC_TIMING_VALUE 0x00AA8977 111 + 112 + #define XENON_MAX_PHY_TIMEOUT_LOOPS 100 112 113 113 114 /* 114 115 * List offset of PHY registers and some special register values ··· 219 216 return 0; 220 217 } 221 218 219 + static int xenon_check_stability_internal_clk(struct sdhci_host *host) 220 + { 221 + u32 reg; 222 + int err; 223 + 224 + err = read_poll_timeout(sdhci_readw, reg, reg & SDHCI_CLOCK_INT_STABLE, 225 + 1100, 20000, false, host, SDHCI_CLOCK_CONTROL); 226 + if (err) 227 + dev_err(mmc_dev(host->mmc), "phy_init: Internal clock never stabilized.\n"); 228 + 229 + return err; 230 + } 231 + 222 232 /* 223 233 * eMMC 5.0/5.1 PHY init/re-init. 224 234 * eMMC PHY init should be executed after: ··· 247 231 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 248 232 struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host); 249 233 struct xenon_emmc_phy_regs *phy_regs = priv->emmc_phy_regs; 234 + 235 + int ret = xenon_check_stability_internal_clk(host); 236 + 237 + if (ret) 238 + return ret; 250 239 251 240 reg = sdhci_readl(host, phy_regs->timing_adj); 252 241 reg |= XENON_PHY_INITIALIZAION; ··· 280 259 /* get the wait time */ 281 260 wait /= clock; 282 261 wait++; 283 - /* wait for host eMMC PHY init completes */ 284 - udelay(wait); 285 262 286 - reg = sdhci_readl(host, phy_regs->timing_adj); 287 - reg &= XENON_PHY_INITIALIZAION; 288 - if (reg) { 263 + /* 264 + * AC5X spec says bit must be polled until zero. 265 + * We see cases in which timeout can take longer 266 + * than the standard calculation on AC5X, which is 267 + * expected following the spec comment above. 
268 + * According to the spec, we must wait as long as 269 + * it takes for that bit to toggle on AC5X. 270 + * Cap that with 100 delay loops so we won't get 271 + * stuck here forever: 272 + */ 273 + 274 + ret = read_poll_timeout(sdhci_readl, reg, 275 + !(reg & XENON_PHY_INITIALIZAION), 276 + wait, XENON_MAX_PHY_TIMEOUT_LOOPS * wait, 277 + false, host, phy_regs->timing_adj); 278 + if (ret) 289 279 dev_err(mmc_dev(host->mmc), "eMMC PHY init cannot complete after %d us\n", 290 - wait); 291 - return -ETIMEDOUT; 292 - } 280 + wait * XENON_MAX_PHY_TIMEOUT_LOOPS); 293 281 294 - return 0; 282 + return ret; 295 283 } 296 284 297 285 #define ARMADA_3700_SOC_PAD_1_8V 0x1
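The PHY-init fix replaces a single `udelay()` plus one readback with `read_poll_timeout()`, capped at `XENON_MAX_PHY_TIMEOUT_LOOPS` polls. The shape of such a bounded poll loop, sketched in userspace (the "register" here is just a countdown stub, and the inter-poll sleep is elided):

```c
#include <stdint.h>

/* Poll read_reg() until mask clears or max_polls attempts pass;
 * roughly what read_poll_timeout() does, minus the sleeping. */
static int poll_until_clear(uint32_t (*read_reg)(void), uint32_t mask,
			    int max_polls)
{
	for (int i = 0; i < max_polls; i++) {
		if (!(read_reg() & mask))
			return 0;
		/* real code sleeps `wait` microseconds here */
	}
	return -1;	/* timed out */
}

static int countdown = 3;
static uint32_t fake_reg(void)
{
	return countdown-- > 0 ? 0x1 : 0x0;	/* busy for 3 reads */
}
```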
+1 -1
drivers/net/bonding/bond_main.c
··· 1811 1811 1812 1812 ASSERT_RTNL(); 1813 1813 1814 - if (!bond_xdp_check(bond)) { 1814 + if (!bond_xdp_check(bond) || !bond_has_slaves(bond)) { 1815 1815 xdp_clear_features_flag(bond_dev); 1816 1816 return; 1817 1817 }
+2 -2
drivers/net/dsa/microchip/ksz8795.c
··· 49 49 mutex_lock(&dev->alu_mutex); 50 50 51 51 ctrl_addr = IND_ACC_TABLE(table) | addr; 52 - ret = ksz_write8(dev, regs[REG_IND_BYTE], data); 52 + ret = ksz_write16(dev, regs[REG_IND_CTRL_0], ctrl_addr); 53 53 if (!ret) 54 - ret = ksz_write16(dev, regs[REG_IND_CTRL_0], ctrl_addr); 54 + ret = ksz_write8(dev, regs[REG_IND_BYTE], data); 55 55 56 56 mutex_unlock(&dev->alu_mutex); 57 57
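The ksz8795 fix programs the indirect-access control/address register before writing the data byte; the old order wrote the data against whatever address the control register still held from the previous operation. A sketch of the corrected sequence against a toy register file (the register names follow the diff; the bus model is our simplification):

```c
#include <stdint.h>

static uint16_t ctrl_reg;		/* latched table/address */
static uint8_t  table_mem[256];		/* indirect destination */

static void write_ctrl(uint16_t ctrl_addr)
{
	ctrl_reg = ctrl_addr;
}

/* Data lands at whatever address is currently latched in the control
 * register, which is why the control write must come first. */
static void write_data(uint8_t data)
{
	table_mem[ctrl_reg & 0xff] = data;
}

static void ind_write8(uint16_t ctrl_addr, uint8_t data)
{
	write_ctrl(ctrl_addr);	/* address first... */
	write_data(data);	/* ...then data */
}
```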
+4 -8
drivers/net/ethernet/amd/pds_core/auxbus.c
··· 160 160 if (err < 0) { 161 161 dev_warn(cf->dev, "auxiliary_device_init of %s failed: %pe\n", 162 162 name, ERR_PTR(err)); 163 - goto err_out; 163 + kfree(padev); 164 + return ERR_PTR(err); 164 165 } 165 166 166 167 err = auxiliary_device_add(aux_dev); 167 168 if (err) { 168 169 dev_warn(cf->dev, "auxiliary_device_add of %s failed: %pe\n", 169 170 name, ERR_PTR(err)); 170 - goto err_out_uninit; 171 + auxiliary_device_uninit(aux_dev); 172 + return ERR_PTR(err); 171 173 } 172 174 173 175 return padev; 174 - 175 - err_out_uninit: 176 - auxiliary_device_uninit(aux_dev); 177 - err_out: 178 - kfree(padev); 179 - return ERR_PTR(err); 180 176 } 181 177 182 178 int pdsc_auxbus_dev_del(struct pdsc *cf, struct pdsc *pf)
+1 -1
drivers/net/ethernet/intel/e1000e/ich8lan.c
··· 2559 2559 hw->phy.ops.write_reg_page(hw, BM_RAR_H(i), 2560 2560 (u16)(mac_reg & 0xFFFF)); 2561 2561 hw->phy.ops.write_reg_page(hw, BM_RAR_CTRL(i), 2562 - FIELD_GET(E1000_RAH_AV, mac_reg)); 2562 + (u16)((mac_reg & E1000_RAH_AV) >> 16)); 2563 2563 } 2564 2564 2565 2565 e1000_disable_phy_wakeup_reg_access_bm(hw, &phy_reg);
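The e1000e revert swaps `FIELD_GET(E1000_RAH_AV, mac_reg)` back to `(u16)((mac_reg & E1000_RAH_AV) >> 16)`. `FIELD_GET()` shifts the field all the way down to bit 0, but this code copies the upper half of RAH into a 16-bit PHY register and needs the AV bit (bit 31) to land at bit 15, so the two expressions are not equivalent. Demonstrated in plain C:

```c
#include <stdint.h>

#define E1000_RAH_AV 0x80000000u	/* "address valid", bit 31 */

/* FIELD_GET-style extraction: field shifted down to bit 0. */
static uint16_t av_field_get(uint32_t rah)
{
	return (rah & E1000_RAH_AV) >> 31;
}

/* What the PHY wakeup copy actually needs: the upper half of RAH,
 * keeping AV in place at bit 15 of the 16-bit value. */
static uint16_t av_upper_half(uint32_t rah)
{
	return (uint16_t)((rah & E1000_RAH_AV) >> 16);
}
```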
+1 -1
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 13523 13523 return err; 13524 13524 13525 13525 i40e_queue_pair_disable_irq(vsi, queue_pair); 13526 + i40e_queue_pair_toggle_napi(vsi, queue_pair, false /* off */); 13526 13527 err = i40e_queue_pair_toggle_rings(vsi, queue_pair, false /* off */); 13527 13528 i40e_clean_rx_ring(vsi->rx_rings[queue_pair]); 13528 - i40e_queue_pair_toggle_napi(vsi, queue_pair, false /* off */); 13529 13529 i40e_queue_pair_clean_rings(vsi, queue_pair); 13530 13530 i40e_queue_pair_reset_stats(vsi, queue_pair); 13531 13531
+1 -2
drivers/net/ethernet/intel/i40e/i40e_prototype.h
··· 567 567 **/ 568 568 static inline bool i40e_is_fw_ver_eq(struct i40e_hw *hw, u16 maj, u16 min) 569 569 { 570 - return (hw->aq.fw_maj_ver > maj || 571 - (hw->aq.fw_maj_ver == maj && hw->aq.fw_min_ver == min)); 570 + return (hw->aq.fw_maj_ver == maj && hw->aq.fw_min_ver == min); 572 571 } 573 572 574 573 #endif /* _I40E_PROTOTYPE_H_ */
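The old `i40e_is_fw_ver_eq()` returned true for any newer major version, i.e. it behaved like ">=" despite its name; the fix makes it a strict equality on both components. Side by side in plain C:

```c
#include <stdbool.h>
#include <stdint.h>

/* Fixed semantics: equal only when both components match. */
static bool fw_ver_eq(uint16_t maj, uint16_t min,
		      uint16_t want_maj, uint16_t want_min)
{
	return maj == want_maj && min == want_min;
}

/* The buggy variant for contrast: ">=" masquerading as "==". */
static bool fw_ver_eq_buggy(uint16_t maj, uint16_t min,
			    uint16_t want_maj, uint16_t want_min)
{
	return maj > want_maj || (maj == want_maj && min == want_min);
}
```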
+3 -3
drivers/net/ethernet/intel/ice/ice_dpll.c
··· 1599 1599 } 1600 1600 if (WARN_ON_ONCE(!vsi || !vsi->netdev)) 1601 1601 return; 1602 - netdev_dpll_pin_clear(vsi->netdev); 1602 + dpll_netdev_pin_clear(vsi->netdev); 1603 1603 dpll_pin_put(rclk->pin); 1604 1604 } 1605 1605 ··· 1643 1643 } 1644 1644 if (WARN_ON((!vsi || !vsi->netdev))) 1645 1645 return -EINVAL; 1646 - netdev_dpll_pin_set(vsi->netdev, pf->dplls.rclk.pin); 1646 + dpll_netdev_pin_set(vsi->netdev, pf->dplls.rclk.pin); 1647 1647 1648 1648 return 0; 1649 1649 ··· 2122 2122 struct ice_dplls *d = &pf->dplls; 2123 2123 int err = 0; 2124 2124 2125 + mutex_init(&d->lock); 2125 2126 err = ice_dpll_init_info(pf, cgu); 2126 2127 if (err) 2127 2128 goto err_exit; ··· 2135 2134 err = ice_dpll_init_pins(pf, cgu); 2136 2135 if (err) 2137 2136 goto deinit_pps; 2138 - mutex_init(&d->lock); 2139 2137 if (cgu) { 2140 2138 err = ice_dpll_init_worker(pf); 2141 2139 if (err)
+1 -1
drivers/net/ethernet/intel/ice/ice_lib.c
··· 3008 3008 } 3009 3009 } 3010 3010 3011 - tx_ring_stats = vsi_stat->rx_ring_stats; 3011 + tx_ring_stats = vsi_stat->tx_ring_stats; 3012 3012 vsi_stat->tx_ring_stats = 3013 3013 krealloc_array(vsi_stat->tx_ring_stats, req_txq, 3014 3014 sizeof(*vsi_stat->tx_ring_stats),
+2
drivers/net/ethernet/intel/ice/ice_main.c
··· 7998 7998 pf_sw = pf->first_sw; 7999 7999 /* find the attribute in the netlink message */ 8000 8000 br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC); 8001 + if (!br_spec) 8002 + return -EINVAL; 8001 8003 8002 8004 nla_for_each_nested(attr, br_spec, rem) { 8003 8005 __u16 mode;
+9 -2
drivers/net/ethernet/intel/ice/ice_sriov.c
··· 1067 1067 struct ice_pf *pf = pci_get_drvdata(pdev); 1068 1068 u16 prev_msix, prev_queues, queues; 1069 1069 bool needs_rebuild = false; 1070 + struct ice_vsi *vsi; 1070 1071 struct ice_vf *vf; 1071 1072 int id; 1072 1073 ··· 1102 1101 if (!vf) 1103 1102 return -ENOENT; 1104 1103 1104 + vsi = ice_get_vf_vsi(vf); 1105 + if (!vsi) 1106 + return -ENOENT; 1107 + 1105 1108 prev_msix = vf->num_msix; 1106 1109 prev_queues = vf->num_vf_qs; 1107 1110 ··· 1126 1121 if (vf->first_vector_idx < 0) 1127 1122 goto unroll; 1128 1123 1129 - if (ice_vf_reconfig_vsi(vf)) { 1124 + if (ice_vf_reconfig_vsi(vf) || ice_vf_init_host_cfg(vf, vsi)) { 1130 1125 /* Try to rebuild with previous values */ 1131 1126 needs_rebuild = true; 1132 1127 goto unroll; ··· 1152 1147 if (vf->first_vector_idx < 0) 1153 1148 return -EINVAL; 1154 1149 1155 - if (needs_rebuild) 1150 + if (needs_rebuild) { 1156 1151 ice_vf_reconfig_vsi(vf); 1152 + ice_vf_init_host_cfg(vf, vsi); 1153 + } 1157 1154 1158 1155 ice_ena_vf_mappings(vf); 1159 1156 ice_put_vf(vf);
+1 -8
drivers/net/ethernet/intel/ice/ice_virtchnl.c
··· 440 440 vf->driver_caps = *(u32 *)msg; 441 441 else 442 442 vf->driver_caps = VIRTCHNL_VF_OFFLOAD_L2 | 443 - VIRTCHNL_VF_OFFLOAD_RSS_REG | 444 443 VIRTCHNL_VF_OFFLOAD_VLAN; 445 444 446 445 vfres->vf_cap_flags = VIRTCHNL_VF_OFFLOAD_L2; ··· 452 453 vfres->vf_cap_flags |= ice_vc_get_vlan_caps(hw, vf, vsi, 453 454 vf->driver_caps); 454 455 455 - if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) { 456 + if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RSS_PF) 456 457 vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_RSS_PF; 457 - } else { 458 - if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RSS_AQ) 459 - vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_RSS_AQ; 460 - else 461 - vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_RSS_REG; 462 - } 463 458 464 459 if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) 465 460 vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC;
-2
drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c
··· 13 13 * - opcodes needed by VF when caps are activated 14 14 * 15 15 * Caps that don't use new opcodes (no opcodes should be allowed): 16 - * - VIRTCHNL_VF_OFFLOAD_RSS_AQ 17 - * - VIRTCHNL_VF_OFFLOAD_RSS_REG 18 16 * - VIRTCHNL_VF_OFFLOAD_WB_ON_ITR 19 17 * - VIRTCHNL_VF_OFFLOAD_CRC 20 18 * - VIRTCHNL_VF_OFFLOAD_RX_POLLING
+5 -4
drivers/net/ethernet/intel/ice/ice_xsk.c
··· 179 179 return -EBUSY; 180 180 usleep_range(1000, 2000); 181 181 } 182 + 183 + ice_qvec_dis_irq(vsi, rx_ring, q_vector); 184 + ice_qvec_toggle_napi(vsi, q_vector, false); 185 + 182 186 netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx)); 183 187 184 188 ice_fill_txq_meta(vsi, tx_ring, &txq_meta); ··· 199 195 if (err) 200 196 return err; 201 197 } 202 - ice_qvec_dis_irq(vsi, rx_ring, q_vector); 203 - 204 198 err = ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, true); 205 199 if (err) 206 200 return err; 207 201 208 - ice_qvec_toggle_napi(vsi, q_vector, false); 209 202 ice_qp_clean_rings(vsi, q_idx); 210 203 ice_qp_reset_stats(vsi, q_idx); 211 204 ··· 246 245 if (err) 247 246 return err; 248 247 249 - clear_bit(ICE_CFG_BUSY, vsi->state); 250 248 ice_qvec_toggle_napi(vsi, q_vector, true); 251 249 ice_qvec_ena_irq(vsi, q_vector); 252 250 253 251 netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx)); 252 + clear_bit(ICE_CFG_BUSY, vsi->state); 254 253 255 254 return 0; 256 255 }
+2
drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
··· 1978 1978 set_bit(__IDPF_Q_POLL_MODE, vport->txqs[i]->flags); 1979 1979 1980 1980 /* schedule the napi to receive all the marker packets */ 1981 + local_bh_disable(); 1981 1982 for (i = 0; i < vport->num_q_vectors; i++) 1982 1983 napi_schedule(&vport->q_vectors[i].napi); 1984 + local_bh_enable(); 1983 1985 1984 1986 return idpf_wait_for_marker_event(vport); 1985 1987 }
+6 -7
drivers/net/ethernet/intel/igc/igc_main.c
··· 6488 6488 int cpu = smp_processor_id(); 6489 6489 struct netdev_queue *nq; 6490 6490 struct igc_ring *ring; 6491 - int i, drops; 6491 + int i, nxmit; 6492 6492 6493 6493 if (unlikely(!netif_carrier_ok(dev))) 6494 6494 return -ENETDOWN; ··· 6504 6504 /* Avoid transmit queue timeout since we share it with the slow path */ 6505 6505 txq_trans_cond_update(nq); 6506 6506 6507 - drops = 0; 6507 + nxmit = 0; 6508 6508 for (i = 0; i < num_frames; i++) { 6509 6509 int err; 6510 6510 struct xdp_frame *xdpf = frames[i]; 6511 6511 6512 6512 err = igc_xdp_init_tx_descriptor(ring, xdpf); 6513 - if (err) { 6514 - xdp_return_frame_rx_napi(xdpf); 6515 - drops++; 6516 - } 6513 + if (err) 6514 + break; 6515 + nxmit++; 6517 6516 } 6518 6517 6519 6518 if (flags & XDP_XMIT_FLUSH) ··· 6520 6521 6521 6522 __netif_tx_unlock(nq); 6522 6523 6523 - return num_frames - drops; 6524 + return nxmit; 6524 6525 } 6525 6526 6526 6527 static void igc_trigger_rxtxq_interrupt(struct igc_adapter *adapter,
+49 -7
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 2939 2939 static inline void ixgbe_irq_enable_queues(struct ixgbe_adapter *adapter,
2940 2940 u64 qmask)
2941 2941 {
2942 - u32 mask;
2943 2942 struct ixgbe_hw *hw = &adapter->hw;
2943 + u32 mask;
2944 2944 
2945 2945 switch (hw->mac.type) {
2946 2946 case ixgbe_mac_82598EB:
··· 10525 10525 }
10526 10526 
10527 10527 /**
10528 + * ixgbe_irq_disable_single - Disable single IRQ vector
10529 + * @adapter: adapter structure
10530 + * @ring: ring index
10531 + **/
10532 + static void ixgbe_irq_disable_single(struct ixgbe_adapter *adapter, u32 ring)
10533 + {
10534 + struct ixgbe_hw *hw = &adapter->hw;
10535 + u64 qmask = BIT_ULL(ring);
10536 + u32 mask;
10537 + 
10538 + switch (adapter->hw.mac.type) {
10539 + case ixgbe_mac_82598EB:
10540 + mask = qmask & IXGBE_EIMC_RTX_QUEUE;
10541 + IXGBE_WRITE_REG(&adapter->hw, IXGBE_EIMC, mask);
10542 + break;
10543 + case ixgbe_mac_82599EB:
10544 + case ixgbe_mac_X540:
10545 + case ixgbe_mac_X550:
10546 + case ixgbe_mac_X550EM_x:
10547 + case ixgbe_mac_x550em_a:
10548 + mask = (qmask & 0xFFFFFFFF);
10549 + if (mask)
10550 + IXGBE_WRITE_REG(hw, IXGBE_EIMS_EX(0), mask);
10551 + mask = (qmask >> 32);
10552 + if (mask)
10553 + IXGBE_WRITE_REG(hw, IXGBE_EIMS_EX(1), mask);
10554 + break;
10555 + default:
10556 + break;
10557 + }
10558 + IXGBE_WRITE_FLUSH(&adapter->hw);
10559 + if (adapter->flags & IXGBE_FLAG_MSIX_ENABLED)
10560 + synchronize_irq(adapter->msix_entries[ring].vector);
10561 + else
10562 + synchronize_irq(adapter->pdev->irq);
10563 + }
10564 + 
10565 + /**
10528 10566 * ixgbe_txrx_ring_disable - Disable Rx/Tx/XDP Tx rings
10529 10567 * @adapter: adapter structure
10530 10568 * @ring: ring index
··· 10578 10540 tx_ring = adapter->tx_ring[ring];
10579 10541 xdp_ring = adapter->xdp_ring[ring];
10542 + 
10543 + ixgbe_irq_disable_single(adapter, ring);
10544 + 
10545 + /* Rx/Tx/XDP Tx share the same napi context. */
10546 + napi_disable(&rx_ring->q_vector->napi);
10547 + 
10581 10548 ixgbe_disable_txr(adapter, tx_ring);
10582 10549 if (xdp_ring)
10583 10550 ixgbe_disable_txr(adapter, xdp_ring);
··· 10590 10547 
10591 10548 if (xdp_ring)
10592 10549 synchronize_rcu();
10593 - 
10594 - /* Rx/Tx/XDP Tx share the same napi context. */
10595 - napi_disable(&rx_ring->q_vector->napi);
10596 10550 
10597 10551 ixgbe_clean_tx_ring(tx_ring);
10598 10552 if (xdp_ring)
··· 10618 10578 tx_ring = adapter->tx_ring[ring];
10619 10579 xdp_ring = adapter->xdp_ring[ring];
10620 10580 
10621 - /* Rx/Tx/XDP Tx share the same napi context. */
10622 - napi_enable(&rx_ring->q_vector->napi);
10623 - 
10624 10581 ixgbe_configure_tx_ring(adapter, tx_ring);
10625 10582 if (xdp_ring)
10626 10583 ixgbe_configure_tx_ring(adapter, xdp_ring);
··· 10626 10589 clear_bit(__IXGBE_TX_DISABLED, &tx_ring->state);
10627 10590 if (xdp_ring)
10628 10591 clear_bit(__IXGBE_TX_DISABLED, &xdp_ring->state);
10592 + 
10593 + /* Rx/Tx/XDP Tx share the same napi context. */
10594 + napi_enable(&rx_ring->q_vector->napi);
10595 + ixgbe_irq_enable_queues(adapter, BIT_ULL(ring));
10596 + IXGBE_WRITE_FLUSH(&adapter->hw);
10629 10597 }
10630 10598 
10631 10599 /**
+6
drivers/net/ethernet/mellanox/mlx5/core/devlink.c
··· 157 157 return -EOPNOTSUPP; 158 158 } 159 159 160 + if (action == DEVLINK_RELOAD_ACTION_FW_ACTIVATE && 161 + !dev->priv.fw_reset) { 162 + NL_SET_ERR_MSG_MOD(extack, "FW activate is unsupported for this function"); 163 + return -EOPNOTSUPP; 164 + } 165 + 160 166 if (mlx5_core_is_pf(dev) && pci_num_vf(pdev)) 161 167 NL_SET_ERR_MSG_MOD(extack, "reload while VFs are present is unfavorable"); 162 168
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/dpll.c
··· 285 285 { 286 286 if (mdpll->tracking_netdev) 287 287 return; 288 - netdev_dpll_pin_set(netdev, mdpll->dpll_pin); 288 + dpll_netdev_pin_set(netdev, mdpll->dpll_pin); 289 289 mdpll->tracking_netdev = netdev; 290 290 } 291 291 ··· 293 293 { 294 294 if (!mdpll->tracking_netdev) 295 295 return; 296 - netdev_dpll_pin_clear(mdpll->tracking_netdev); 296 + dpll_netdev_pin_clear(mdpll->tracking_netdev); 297 297 mdpll->tracking_netdev = NULL; 298 298 } 299 299
+6 -6
drivers/net/ethernet/mellanox/mlx5/core/en/ptp.c
··· 42 42 43 43 WARN_ON_ONCE(tracker->inuse); 44 44 tracker->inuse = true; 45 - spin_lock(&list->tracker_list_lock); 45 + spin_lock_bh(&list->tracker_list_lock); 46 46 list_add_tail(&tracker->entry, &list->tracker_list_head); 47 - spin_unlock(&list->tracker_list_lock); 47 + spin_unlock_bh(&list->tracker_list_lock); 48 48 } 49 49 50 50 static void ··· 54 54 55 55 WARN_ON_ONCE(!tracker->inuse); 56 56 tracker->inuse = false; 57 - spin_lock(&list->tracker_list_lock); 57 + spin_lock_bh(&list->tracker_list_lock); 58 58 list_del(&tracker->entry); 59 - spin_unlock(&list->tracker_list_lock); 59 + spin_unlock_bh(&list->tracker_list_lock); 60 60 } 61 61 62 62 void mlx5e_ptpsq_track_metadata(struct mlx5e_ptpsq *ptpsq, u8 metadata) ··· 155 155 struct mlx5e_ptp_metadata_map *metadata_map = &ptpsq->metadata_map; 156 156 struct mlx5e_ptp_port_ts_cqe_tracker *pos, *n; 157 157 158 - spin_lock(&cqe_list->tracker_list_lock); 158 + spin_lock_bh(&cqe_list->tracker_list_lock); 159 159 list_for_each_entry_safe(pos, n, &cqe_list->tracker_list_head, entry) { 160 160 struct sk_buff *skb = 161 161 mlx5e_ptp_metadata_map_lookup(metadata_map, pos->metadata_id); ··· 170 170 pos->inuse = false; 171 171 list_del(&pos->entry); 172 172 } 173 - spin_unlock(&cqe_list->tracker_list_lock); 173 + spin_unlock_bh(&cqe_list->tracker_list_lock); 174 174 } 175 175 176 176 #define PTP_WQE_CTR2IDX(val) ((val) & ptpsq->ts_cqe_ctr_mask)
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en/tc/post_act.c
··· 37 37 38 38 if (!MLX5_CAP_FLOWTABLE_TYPE(priv->mdev, ignore_flow_level, table_type)) { 39 39 if (priv->mdev->coredev_type == MLX5_COREDEV_PF) 40 - mlx5_core_warn(priv->mdev, "firmware level support is missing\n"); 40 + mlx5_core_dbg(priv->mdev, "firmware flow level support is missing\n"); 41 41 err = -EOPNOTSUPP; 42 42 goto err_check; 43 43 }
+51 -31
drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
··· 310 310 mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out));
311 311 }
312 312 
313 - static void mlx5e_macsec_cleanup_sa(struct mlx5e_macsec *macsec,
314 - struct mlx5e_macsec_sa *sa,
315 - bool is_tx, struct net_device *netdev, u32 fs_id)
313 + static void mlx5e_macsec_cleanup_sa_fs(struct mlx5e_macsec *macsec,
314 + struct mlx5e_macsec_sa *sa, bool is_tx,
315 + struct net_device *netdev, u32 fs_id)
316 316 {
317 317 int action = (is_tx) ? MLX5_ACCEL_MACSEC_ACTION_ENCRYPT :
318 318 MLX5_ACCEL_MACSEC_ACTION_DECRYPT;
··· 322 322 
323 323 mlx5_macsec_fs_del_rule(macsec->mdev->macsec_fs, sa->macsec_rule, action, netdev,
324 324 fs_id);
325 - mlx5e_macsec_destroy_object(macsec->mdev, sa->macsec_obj_id);
326 325 sa->macsec_rule = NULL;
326 + }
327 + 
328 + static void mlx5e_macsec_cleanup_sa(struct mlx5e_macsec *macsec,
329 + struct mlx5e_macsec_sa *sa, bool is_tx,
330 + struct net_device *netdev, u32 fs_id)
331 + {
332 + mlx5e_macsec_cleanup_sa_fs(macsec, sa, is_tx, netdev, fs_id);
333 + mlx5e_macsec_destroy_object(macsec->mdev, sa->macsec_obj_id);
334 + }
335 + 
336 + static int mlx5e_macsec_init_sa_fs(struct macsec_context *ctx,
337 + struct mlx5e_macsec_sa *sa, bool encrypt,
338 + bool is_tx, u32 *fs_id)
339 + {
340 + struct mlx5e_priv *priv = macsec_netdev_priv(ctx->netdev);
341 + struct mlx5_macsec_fs *macsec_fs = priv->mdev->macsec_fs;
342 + struct mlx5_macsec_rule_attrs rule_attrs;
343 + union mlx5_macsec_rule *macsec_rule;
344 + 
345 + rule_attrs.macsec_obj_id = sa->macsec_obj_id;
346 + rule_attrs.sci = sa->sci;
347 + rule_attrs.assoc_num = sa->assoc_num;
348 + rule_attrs.action = (is_tx) ? MLX5_ACCEL_MACSEC_ACTION_ENCRYPT :
349 + MLX5_ACCEL_MACSEC_ACTION_DECRYPT;
350 + 
351 + macsec_rule = mlx5_macsec_fs_add_rule(macsec_fs, ctx, &rule_attrs, fs_id);
352 + if (!macsec_rule)
353 + return -ENOMEM;
354 + 
355 + sa->macsec_rule = macsec_rule;
356 + 
357 + return 0;
327 358 }
328 359 
329 360 static int mlx5e_macsec_init_sa(struct macsec_context *ctx,
··· 363 332 {
364 333 struct mlx5e_priv *priv = macsec_netdev_priv(ctx->netdev);
365 334 struct mlx5e_macsec *macsec = priv->macsec;
366 - struct mlx5_macsec_rule_attrs rule_attrs;
367 335 struct mlx5_core_dev *mdev = priv->mdev;
368 336 struct mlx5_macsec_obj_attrs obj_attrs;
369 - union mlx5_macsec_rule *macsec_rule;
370 337 int err;
371 338 
372 339 obj_attrs.next_pn = sa->next_pn;
··· 386 357 if (err)
387 358 return err;
388 359 
389 - rule_attrs.macsec_obj_id = sa->macsec_obj_id;
390 - rule_attrs.sci = sa->sci;
391 - rule_attrs.assoc_num = sa->assoc_num;
392 - rule_attrs.action = (is_tx) ? MLX5_ACCEL_MACSEC_ACTION_ENCRYPT :
393 - MLX5_ACCEL_MACSEC_ACTION_DECRYPT;
394 - 
395 - macsec_rule = mlx5_macsec_fs_add_rule(mdev->macsec_fs, ctx, &rule_attrs, fs_id);
396 - if (!macsec_rule) {
397 - err = -ENOMEM;
398 - goto destroy_macsec_object;
360 + if (sa->active) {
361 + err = mlx5e_macsec_init_sa_fs(ctx, sa, encrypt, is_tx, fs_id);
362 + if (err)
363 + goto destroy_macsec_object;
399 364 }
400 - 
401 - sa->macsec_rule = macsec_rule;
402 365 
403 366 return 0;
404 367 
··· 547 526 goto destroy_sa;
548 527 
549 528 macsec_device->tx_sa[assoc_num] = tx_sa;
550 - if (!secy->operational ||
551 - assoc_num != tx_sc->encoding_sa ||
552 - !tx_sa->active)
529 + if (!secy->operational)
553 530 goto out;
554 531 
555 532 err = mlx5e_macsec_init_sa(ctx, tx_sa, tx_sc->encrypt, true, NULL);
··· 614 595 goto out;
615 596 
616 597 if (ctx_tx_sa->active) {
617 - err = mlx5e_macsec_init_sa(ctx, tx_sa, tx_sc->encrypt, true, NULL);
598 + err = mlx5e_macsec_init_sa_fs(ctx, tx_sa, tx_sc->encrypt, true, NULL);
618 599 if (err)
619 600 goto out;
620 601 } else {
··· 623 604 goto out;
624 605 }
625 606 
626 - mlx5e_macsec_cleanup_sa(macsec, tx_sa, true, ctx->secy->netdev, 0);
607 + mlx5e_macsec_cleanup_sa_fs(macsec, tx_sa, true, ctx->secy->netdev, 0);
627 608 }
628 609 out:
629 610 mutex_unlock(&macsec->lock);
··· 1049 1030 goto out;
1050 1031 }
1051 1032 
1052 - mlx5e_macsec_cleanup_sa(macsec, rx_sa, false, ctx->secy->netdev,
1053 - rx_sc->sc_xarray_element->fs_id);
1033 + if (rx_sa->active)
1034 + mlx5e_macsec_cleanup_sa(macsec, rx_sa, false, ctx->secy->netdev,
1035 + rx_sc->sc_xarray_element->fs_id);
1054 1036 mlx5_destroy_encryption_key(macsec->mdev, rx_sa->enc_key_id);
1055 1037 kfree(rx_sa);
1056 1038 rx_sc->rx_sa[assoc_num] = NULL;
··· 1132 1112 if (!rx_sa || !rx_sa->macsec_rule)
1133 1113 continue;
1134 1114 
1135 - mlx5e_macsec_cleanup_sa(macsec, rx_sa, false, ctx->secy->netdev,
1136 - rx_sc->sc_xarray_element->fs_id);
1115 + mlx5e_macsec_cleanup_sa_fs(macsec, rx_sa, false, ctx->secy->netdev,
1116 + rx_sc->sc_xarray_element->fs_id);
1137 1117 }
1138 1118 }
1139 1119 
··· 1144 1124 continue;
1145 1125 
1146 1126 if (rx_sa->active) {
1147 - err = mlx5e_macsec_init_sa(ctx, rx_sa, true, false,
1148 - &rx_sc->sc_xarray_element->fs_id);
1127 + err = mlx5e_macsec_init_sa_fs(ctx, rx_sa, true, false,
1128 + &rx_sc->sc_xarray_element->fs_id);
1149 1129 if (err)
1150 1130 goto out;
1151 1131 }
··· 1198 1178 if (!tx_sa)
1199 1179 continue;
1200 1180 
1201 - mlx5e_macsec_cleanup_sa(macsec, tx_sa, true, ctx->secy->netdev, 0);
1181 + mlx5e_macsec_cleanup_sa_fs(macsec, tx_sa, true, ctx->secy->netdev, 0);
1202 1182 }
1203 1183 
1204 1184 for (i = 0; i < MACSEC_NUM_AN; ++i) {
··· 1207 1187 continue;
1208 1188 
1209 1189 if (tx_sa->assoc_num == tx_sc->encoding_sa && tx_sa->active) {
1210 - err = mlx5e_macsec_init_sa(ctx, tx_sa, tx_sc->encrypt, true, NULL);
1190 + err = mlx5e_macsec_init_sa_fs(ctx, tx_sa, tx_sc->encrypt, true, NULL);
1211 1191 if (err)
1212 1192 goto out;
1213 1193 }
+2
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
··· 401 401 mlx5e_skb_cb_hwtstamp_init(skb); 402 402 mlx5e_ptp_metadata_map_put(&sq->ptpsq->metadata_map, skb, 403 403 metadata_index); 404 + /* ensure skb is put on metadata_map before tracking the index */ 405 + wmb(); 404 406 mlx5e_ptpsq_track_metadata(sq->ptpsq, metadata_index); 405 407 if (!netif_tx_queue_stopped(sq->txq) && 406 408 mlx5e_ptpsq_metadata_freelist_empty(sq->ptpsq)) {
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c
··· 152 152 153 153 xa_for_each(&esw->offloads.vport_reps, i, rep) { 154 154 rpriv = rep->rep_data[REP_ETH].priv; 155 - if (!rpriv || !rpriv->netdev || !atomic_read(&rpriv->tc_ht.nelems)) 155 + if (!rpriv || !rpriv->netdev) 156 156 continue; 157 157 158 158 rhashtable_walk_enter(&rpriv->tc_ht, &iter);
+14 -32
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 535 535 }
536 536 
537 537 static bool
538 - esw_dests_to_vf_pf_vports(struct mlx5_flow_destination *dests, int max_dest)
538 + esw_dests_to_int_external(struct mlx5_flow_destination *dests, int max_dest)
539 539 {
540 - bool vf_dest = false, pf_dest = false;
540 + bool internal_dest = false, external_dest = false;
541 541 int i;
542 542 
543 543 for (i = 0; i < max_dest; i++) {
544 - if (dests[i].type != MLX5_FLOW_DESTINATION_TYPE_VPORT)
544 + if (dests[i].type != MLX5_FLOW_DESTINATION_TYPE_VPORT &&
545 + dests[i].type != MLX5_FLOW_DESTINATION_TYPE_UPLINK)
545 546 continue;
546 547 
547 - if (dests[i].vport.num == MLX5_VPORT_UPLINK)
548 - pf_dest = true;
548 + /* Uplink dest is external, but considered as internal
549 + * if there is reformat because firmware uses LB+hairpin to support it.
550 + */
551 + if (dests[i].vport.num == MLX5_VPORT_UPLINK &&
552 + !(dests[i].vport.flags & MLX5_FLOW_DEST_VPORT_REFORMAT_ID))
553 + external_dest = true;
549 554 else
550 - vf_dest = true;
555 + internal_dest = true;
551 556 
552 - if (vf_dest && pf_dest)
557 + if (internal_dest && external_dest)
553 558 return true;
554 559 }
555 560 
··· 700 695 
701 696 /* Header rewrite with combined wire+loopback in FDB is not allowed */
702 697 if ((flow_act.action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR) &&
703 - esw_dests_to_vf_pf_vports(dest, i)) {
698 + esw_dests_to_int_external(dest, i)) {
704 699 esw_warn(esw->dev,
705 - "FDB: Header rewrite with forwarding to both PF and VF is not allowed\n");
700 + "FDB: Header rewrite with forwarding to both internal and external dests is not allowed\n");
706 701 rule = ERR_PTR(-EINVAL);
707 702 goto err_esw_get;
708 703 }
··· 3663 3658 return 0;
3664 3659 }
3665 3660 
3666 - static bool esw_offloads_devlink_ns_eq_netdev_ns(struct devlink *devlink)
3667 - {
3668 - struct mlx5_core_dev *dev = devlink_priv(devlink);
3669 - struct net *devl_net, *netdev_net;
3670 - bool ret = false;
3671 - 
3672 - mutex_lock(&dev->mlx5e_res.uplink_netdev_lock);
3673 - if (dev->mlx5e_res.uplink_netdev) {
3674 - netdev_net = dev_net(dev->mlx5e_res.uplink_netdev);
3675 - devl_net = devlink_net(devlink);
3676 - ret = net_eq(devl_net, netdev_net);
3677 - }
3678 - mutex_unlock(&dev->mlx5e_res.uplink_netdev_lock);
3679 - return ret;
3680 - }
3681 - 
3682 3661 int mlx5_eswitch_block_mode(struct mlx5_core_dev *dev)
3683 3662 {
3684 3663 struct mlx5_eswitch *esw = dev->priv.eswitch;
··· 3706 3717 
3707 3718 if (esw_mode_from_devlink(mode, &mlx5_mode))
3708 3719 return -EINVAL;
3709 - 
3710 - if (mode == DEVLINK_ESWITCH_MODE_SWITCHDEV &&
3711 - !esw_offloads_devlink_ns_eq_netdev_ns(devlink)) {
3712 - NL_SET_ERR_MSG_MOD(extack,
3713 - "Can't change E-Switch mode to switchdev when netdev net namespace has diverged from the devlink's.");
3714 - return -EPERM;
3715 - }
3716 3720 
3717 3721 mlx5_lag_disable_change(esw->dev);
3718 3722 err = mlx5_esw_try_lock(esw);
+20 -2
drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
··· 703 703 { 704 704 struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset; 705 705 706 + if (!fw_reset) 707 + return; 708 + 706 709 MLX5_NB_INIT(&fw_reset->nb, fw_reset_event_notifier, GENERAL_EVENT); 707 710 mlx5_eq_notifier_register(dev, &fw_reset->nb); 708 711 } 709 712 710 713 void mlx5_fw_reset_events_stop(struct mlx5_core_dev *dev) 711 714 { 712 - mlx5_eq_notifier_unregister(dev, &dev->priv.fw_reset->nb); 715 + struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset; 716 + 717 + if (!fw_reset) 718 + return; 719 + 720 + mlx5_eq_notifier_unregister(dev, &fw_reset->nb); 713 721 } 714 722 715 723 void mlx5_drain_fw_reset(struct mlx5_core_dev *dev) 716 724 { 717 725 struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset; 726 + 727 + if (!fw_reset) 728 + return; 718 729 719 730 set_bit(MLX5_FW_RESET_FLAGS_DROP_NEW_REQUESTS, &fw_reset->reset_flags); 720 731 cancel_work_sync(&fw_reset->fw_live_patch_work); ··· 744 733 745 734 int mlx5_fw_reset_init(struct mlx5_core_dev *dev) 746 735 { 747 - struct mlx5_fw_reset *fw_reset = kzalloc(sizeof(*fw_reset), GFP_KERNEL); 736 + struct mlx5_fw_reset *fw_reset; 748 737 int err; 749 738 739 + if (!MLX5_CAP_MCAM_REG(dev, mfrl)) 740 + return 0; 741 + 742 + fw_reset = kzalloc(sizeof(*fw_reset), GFP_KERNEL); 750 743 if (!fw_reset) 751 744 return -ENOMEM; 752 745 fw_reset->wq = create_singlethread_workqueue("mlx5_fw_reset_events"); ··· 785 770 void mlx5_fw_reset_cleanup(struct mlx5_core_dev *dev) 786 771 { 787 772 struct mlx5_fw_reset *fw_reset = dev->priv.fw_reset; 773 + 774 + if (!fw_reset) 775 + return; 788 776 789 777 devl_params_unregister(priv_to_devlink(dev), 790 778 mlx5_fw_reset_devlink_params,
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/health.c
··· 452 452 struct health_buffer __iomem *h = health->health; 453 453 u8 synd = ioread8(&h->synd); 454 454 455 + devlink_fmsg_u8_pair_put(fmsg, "Syndrome", synd); 455 456 if (!synd) 456 457 return 0; 457 458 458 - devlink_fmsg_u8_pair_put(fmsg, "Syndrome", synd); 459 459 devlink_fmsg_string_pair_put(fmsg, "Description", hsynd_str(synd)); 460 460 461 461 return 0;
+2 -2
drivers/net/ethernet/microchip/sparx5/sparx5_mactable.c
··· 347 347 list) { 348 348 if ((vid == 0 || mact_entry->vid == vid) && 349 349 ether_addr_equal(addr, mact_entry->mac)) { 350 + sparx5_mact_forget(sparx5, addr, mact_entry->vid); 351 + 350 352 list_del(&mact_entry->list); 351 353 devm_kfree(sparx5->dev, mact_entry); 352 - 353 - sparx5_mact_forget(sparx5, addr, mact_entry->vid); 354 354 } 355 355 } 356 356 mutex_unlock(&sparx5->mact_lock);
+1 -1
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 294 294 txqueue, 295 295 netif_tx_queue_stopped(netif_txq), 296 296 jiffies_to_msecs(jiffies - trans_start), 297 - dql_avail(&netif_txq->dql), 297 + netdev_queue_dql_avail(netif_txq), 298 298 k3_cppi_desc_pool_avail(tx_chn->desc_pool)); 299 299 300 300 if (netif_tx_queue_stopped(netif_txq)) {
+16 -2
drivers/net/geneve.c
··· 221 221 struct genevehdr *gnvh = geneve_hdr(skb); 222 222 struct metadata_dst *tun_dst = NULL; 223 223 unsigned int len; 224 - int err = 0; 224 + int nh, err = 0; 225 225 void *oiph; 226 226 227 227 if (ip_tunnel_collect_metadata() || gs->collect_md) { ··· 272 272 skb->pkt_type = PACKET_HOST; 273 273 } 274 274 275 - oiph = skb_network_header(skb); 275 + /* Save offset of outer header relative to skb->head, 276 + * because we are going to reset the network header to the inner header 277 + * and might change skb->head. 278 + */ 279 + nh = skb_network_header(skb) - skb->head; 280 + 276 281 skb_reset_network_header(skb); 282 + 283 + if (!pskb_inet_may_pull(skb)) { 284 + DEV_STATS_INC(geneve->dev, rx_length_errors); 285 + DEV_STATS_INC(geneve->dev, rx_errors); 286 + goto drop; 287 + } 288 + 289 + /* Get the outer header. */ 290 + oiph = skb->head + nh; 277 291 278 292 if (geneve_get_sk_family(gs) == AF_INET) 279 293 err = IP_ECN_decapsulate(oiph, skb);
+2 -1
drivers/net/usb/lan78xx.c
··· 3135 3135 done: 3136 3136 mutex_unlock(&dev->dev_mutex); 3137 3137 3138 - usb_autopm_put_interface(dev->intf); 3138 + if (ret < 0) 3139 + usb_autopm_put_interface(dev->intf); 3139 3140 3140 3141 return ret; 3141 3142 }
+1 -1
drivers/of/property.c
··· 1304 1304 int index) 1305 1305 { 1306 1306 /* Return NULL for index > 0 to signify end of remote-endpoints. */ 1307 - if (!index || strcmp(prop_name, "remote-endpoint")) 1307 + if (index > 0 || strcmp(prop_name, "remote-endpoint")) 1308 1308 return NULL; 1309 1309 1310 1310 return of_graph_get_remote_port_parent(np);
+5 -13
drivers/perf/riscv_pmu.c
··· 150 150 struct riscv_pmu *rvpmu = to_riscv_pmu(event->pmu); 151 151 struct hw_perf_event *hwc = &event->hw; 152 152 153 - if (!rvpmu->ctr_get_width) 154 - /** 155 - * If the pmu driver doesn't support counter width, set it to default 156 - * maximum allowed by the specification. 157 - */ 158 - cwidth = 63; 159 - else { 160 - if (hwc->idx == -1) 161 - /* Handle init case where idx is not initialized yet */ 162 - cwidth = rvpmu->ctr_get_width(0); 163 - else 164 - cwidth = rvpmu->ctr_get_width(hwc->idx); 165 - } 153 + if (hwc->idx == -1) 154 + /* Handle init case where idx is not initialized yet */ 155 + cwidth = rvpmu->ctr_get_width(0); 156 + else 157 + cwidth = rvpmu->ctr_get_width(hwc->idx); 166 158 167 159 return GENMASK_ULL(cwidth, 0); 168 160 }
+9 -1
drivers/perf/riscv_pmu_legacy.c
··· 37 37 return pmu_legacy_ctr_get_idx(event); 38 38 } 39 39 40 + /* cycle & instret are always 64 bit, one bit less according to SBI spec */ 41 + static int pmu_legacy_ctr_get_width(int idx) 42 + { 43 + return 63; 44 + } 45 + 40 46 static u64 pmu_legacy_read_ctr(struct perf_event *event) 41 47 { 42 48 struct hw_perf_event *hwc = &event->hw; ··· 117 111 pmu->ctr_stop = NULL; 118 112 pmu->event_map = pmu_legacy_event_map; 119 113 pmu->ctr_get_idx = pmu_legacy_ctr_get_idx; 120 - pmu->ctr_get_width = NULL; 114 + pmu->ctr_get_width = pmu_legacy_ctr_get_width; 121 115 pmu->ctr_clear_idx = NULL; 122 116 pmu->ctr_read = pmu_legacy_read_ctr; 123 117 pmu->event_mapped = pmu_legacy_event_mapped; 124 118 pmu->event_unmapped = pmu_legacy_event_unmapped; 125 119 pmu->csr_index = pmu_legacy_csr_index; 120 + pmu->pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT; 121 + pmu->pmu.capabilities |= PERF_PMU_CAP_NO_EXCLUDE; 126 122 127 123 perf_pmu_register(&pmu->pmu, "cpu", PERF_TYPE_RAW); 128 124 }
+4 -4
drivers/perf/riscv_pmu_sbi.c
··· 512 512 513 513 if (event->hw.idx != -1) 514 514 csr_write(CSR_SCOUNTEREN, 515 - csr_read(CSR_SCOUNTEREN) | (1 << pmu_sbi_csr_index(event))); 515 + csr_read(CSR_SCOUNTEREN) | BIT(pmu_sbi_csr_index(event))); 516 516 } 517 517 518 518 static void pmu_sbi_reset_scounteren(void *arg) ··· 521 521 522 522 if (event->hw.idx != -1) 523 523 csr_write(CSR_SCOUNTEREN, 524 - csr_read(CSR_SCOUNTEREN) & ~(1 << pmu_sbi_csr_index(event))); 524 + csr_read(CSR_SCOUNTEREN) & ~BIT(pmu_sbi_csr_index(event))); 525 525 } 526 526 527 527 static void pmu_sbi_ctr_start(struct perf_event *event, u64 ival) ··· 731 731 /* compute hardware counter index */ 732 732 hidx = info->csr - CSR_CYCLE; 733 733 /* check if the corresponding bit is set in sscountovf */ 734 - if (!(overflow & (1 << hidx))) 734 + if (!(overflow & BIT(hidx))) 735 735 continue; 736 736 737 737 /* 738 738 * Keep a track of overflowed counters so that they can be started 739 739 * with updated initial value. 740 740 */ 741 - overflowed_ctrs |= 1 << lidx; 741 + overflowed_ctrs |= BIT(lidx); 742 742 hw_evt = &event->hw; 743 743 riscv_pmu_event_update(event); 744 744 perf_sample_data_init(&data, 0, hw_evt->last_period);
+1 -1
drivers/phy/freescale/phy-fsl-imx8-mipi-dphy.c
··· 706 706 return ret; 707 707 } 708 708 709 - priv->id = of_alias_get_id(np, "mipi_dphy"); 709 + priv->id = of_alias_get_id(np, "mipi-dphy"); 710 710 if (priv->id < 0) { 711 711 dev_err(dev, "Failed to get phy node alias id: %d\n", 712 712 priv->id);
+59 -101
drivers/phy/qualcomm/phy-qcom-eusb2-repeater.c
··· 37 37 #define EUSB2_TUNE_EUSB_EQU 0x5A 38 38 #define EUSB2_TUNE_EUSB_HS_COMP_CUR 0x5B 39 39 40 - #define QCOM_EUSB2_REPEATER_INIT_CFG(r, v) \ 41 - { \ 42 - .reg = r, \ 43 - .val = v, \ 44 - } 40 + enum eusb2_reg_layout { 41 + TUNE_EUSB_HS_COMP_CUR, 42 + TUNE_EUSB_EQU, 43 + TUNE_EUSB_SLEW, 44 + TUNE_USB2_HS_COMP_CUR, 45 + TUNE_USB2_PREEM, 46 + TUNE_USB2_EQU, 47 + TUNE_USB2_SLEW, 48 + TUNE_SQUELCH_U, 49 + TUNE_HSDISC, 50 + TUNE_RES_FSDIF, 51 + TUNE_IUSB2, 52 + TUNE_USB2_CROSSOVER, 53 + NUM_TUNE_FIELDS, 45 54 46 - enum reg_fields { 47 - F_TUNE_EUSB_HS_COMP_CUR, 48 - F_TUNE_EUSB_EQU, 49 - F_TUNE_EUSB_SLEW, 50 - F_TUNE_USB2_HS_COMP_CUR, 51 - F_TUNE_USB2_PREEM, 52 - F_TUNE_USB2_EQU, 53 - F_TUNE_USB2_SLEW, 54 - F_TUNE_SQUELCH_U, 55 - F_TUNE_HSDISC, 56 - F_TUNE_RES_FSDIF, 57 - F_TUNE_IUSB2, 58 - F_TUNE_USB2_CROSSOVER, 59 - F_NUM_TUNE_FIELDS, 55 + FORCE_VAL_5 = NUM_TUNE_FIELDS, 56 + FORCE_EN_5, 60 57 61 - F_FORCE_VAL_5 = F_NUM_TUNE_FIELDS, 62 - F_FORCE_EN_5, 58 + EN_CTL1, 63 59 64 - F_EN_CTL1, 65 - 66 - F_RPTR_STATUS, 67 - F_NUM_FIELDS, 68 - }; 69 - 70 - static struct reg_field eusb2_repeater_tune_reg_fields[F_NUM_FIELDS] = { 71 - [F_TUNE_EUSB_HS_COMP_CUR] = REG_FIELD(EUSB2_TUNE_EUSB_HS_COMP_CUR, 0, 1), 72 - [F_TUNE_EUSB_EQU] = REG_FIELD(EUSB2_TUNE_EUSB_EQU, 0, 1), 73 - [F_TUNE_EUSB_SLEW] = REG_FIELD(EUSB2_TUNE_EUSB_SLEW, 0, 1), 74 - [F_TUNE_USB2_HS_COMP_CUR] = REG_FIELD(EUSB2_TUNE_USB2_HS_COMP_CUR, 0, 1), 75 - [F_TUNE_USB2_PREEM] = REG_FIELD(EUSB2_TUNE_USB2_PREEM, 0, 2), 76 - [F_TUNE_USB2_EQU] = REG_FIELD(EUSB2_TUNE_USB2_EQU, 0, 1), 77 - [F_TUNE_USB2_SLEW] = REG_FIELD(EUSB2_TUNE_USB2_SLEW, 0, 1), 78 - [F_TUNE_SQUELCH_U] = REG_FIELD(EUSB2_TUNE_SQUELCH_U, 0, 2), 79 - [F_TUNE_HSDISC] = REG_FIELD(EUSB2_TUNE_HSDISC, 0, 2), 80 - [F_TUNE_RES_FSDIF] = REG_FIELD(EUSB2_TUNE_RES_FSDIF, 0, 2), 81 - [F_TUNE_IUSB2] = REG_FIELD(EUSB2_TUNE_IUSB2, 0, 3), 82 - [F_TUNE_USB2_CROSSOVER] = REG_FIELD(EUSB2_TUNE_USB2_CROSSOVER, 0, 2), 83 - 84 - [F_FORCE_VAL_5] = REG_FIELD(EUSB2_FORCE_VAL_5, 
0, 7), 85 - [F_FORCE_EN_5] = REG_FIELD(EUSB2_FORCE_EN_5, 0, 7), 86 - 87 - [F_EN_CTL1] = REG_FIELD(EUSB2_EN_CTL1, 0, 7), 88 - 89 - [F_RPTR_STATUS] = REG_FIELD(EUSB2_RPTR_STATUS, 0, 7), 60 + RPTR_STATUS, 61 + LAYOUT_SIZE, 90 62 }; 91 63 92 64 struct eusb2_repeater_cfg { ··· 70 98 71 99 struct eusb2_repeater { 72 100 struct device *dev; 73 - struct regmap_field *regs[F_NUM_FIELDS]; 101 + struct regmap *regmap; 74 102 struct phy *phy; 75 103 struct regulator_bulk_data *vregs; 76 104 const struct eusb2_repeater_cfg *cfg; 105 + u32 base; 77 106 enum phy_mode mode; 78 107 }; 79 108 ··· 82 109 "vdd18", "vdd3", 83 110 }; 84 111 85 - static const u32 pm8550b_init_tbl[F_NUM_TUNE_FIELDS] = { 86 - [F_TUNE_IUSB2] = 0x8, 87 - [F_TUNE_SQUELCH_U] = 0x3, 88 - [F_TUNE_USB2_PREEM] = 0x5, 112 + static const u32 pm8550b_init_tbl[NUM_TUNE_FIELDS] = { 113 + [TUNE_IUSB2] = 0x8, 114 + [TUNE_SQUELCH_U] = 0x3, 115 + [TUNE_USB2_PREEM] = 0x5, 89 116 }; 90 117 91 118 static const struct eusb2_repeater_cfg pm8550b_eusb2_cfg = { ··· 113 140 114 141 static int eusb2_repeater_init(struct phy *phy) 115 142 { 116 - struct reg_field *regfields = eusb2_repeater_tune_reg_fields; 117 143 struct eusb2_repeater *rptr = phy_get_drvdata(phy); 118 144 struct device_node *np = rptr->dev->of_node; 119 - u32 init_tbl[F_NUM_TUNE_FIELDS] = { 0 }; 120 - u8 override; 145 + struct regmap *regmap = rptr->regmap; 146 + const u32 *init_tbl = rptr->cfg->init_tbl; 147 + u8 tune_usb2_preem = init_tbl[TUNE_USB2_PREEM]; 148 + u8 tune_hsdisc = init_tbl[TUNE_HSDISC]; 149 + u8 tune_iusb2 = init_tbl[TUNE_IUSB2]; 150 + u32 base = rptr->base; 121 151 u32 val; 122 152 int ret; 123 - int i; 153 + 154 + of_property_read_u8(np, "qcom,tune-usb2-amplitude", &tune_iusb2); 155 + of_property_read_u8(np, "qcom,tune-usb2-disc-thres", &tune_hsdisc); 156 + of_property_read_u8(np, "qcom,tune-usb2-preem", &tune_usb2_preem); 124 157 125 158 ret = regulator_bulk_enable(rptr->cfg->num_vregs, rptr->vregs); 126 159 if (ret) 127 160 return ret; 128 161 
129 - regmap_field_update_bits(rptr->regs[F_EN_CTL1], EUSB2_RPTR_EN, EUSB2_RPTR_EN); 162 + regmap_write(regmap, base + EUSB2_EN_CTL1, EUSB2_RPTR_EN); 130 163 131 - for (i = 0; i < F_NUM_TUNE_FIELDS; i++) { 132 - if (init_tbl[i]) { 133 - regmap_field_update_bits(rptr->regs[i], init_tbl[i], init_tbl[i]); 134 - } else { 135 - /* Write 0 if there's no value set */ 136 - u32 mask = GENMASK(regfields[i].msb, regfields[i].lsb); 164 + regmap_write(regmap, base + EUSB2_TUNE_EUSB_HS_COMP_CUR, init_tbl[TUNE_EUSB_HS_COMP_CUR]); 165 + regmap_write(regmap, base + EUSB2_TUNE_EUSB_EQU, init_tbl[TUNE_EUSB_EQU]); 166 + regmap_write(regmap, base + EUSB2_TUNE_EUSB_SLEW, init_tbl[TUNE_EUSB_SLEW]); 167 + regmap_write(regmap, base + EUSB2_TUNE_USB2_HS_COMP_CUR, init_tbl[TUNE_USB2_HS_COMP_CUR]); 168 + regmap_write(regmap, base + EUSB2_TUNE_USB2_EQU, init_tbl[TUNE_USB2_EQU]); 169 + regmap_write(regmap, base + EUSB2_TUNE_USB2_SLEW, init_tbl[TUNE_USB2_SLEW]); 170 + regmap_write(regmap, base + EUSB2_TUNE_SQUELCH_U, init_tbl[TUNE_SQUELCH_U]); 171 + regmap_write(regmap, base + EUSB2_TUNE_RES_FSDIF, init_tbl[TUNE_RES_FSDIF]); 172 + regmap_write(regmap, base + EUSB2_TUNE_USB2_CROSSOVER, init_tbl[TUNE_USB2_CROSSOVER]); 137 173 138 - regmap_field_update_bits(rptr->regs[i], mask, 0); 139 - } 140 - } 141 - memcpy(init_tbl, rptr->cfg->init_tbl, sizeof(init_tbl)); 174 + regmap_write(regmap, base + EUSB2_TUNE_USB2_PREEM, tune_usb2_preem); 175 + regmap_write(regmap, base + EUSB2_TUNE_HSDISC, tune_hsdisc); 176 + regmap_write(regmap, base + EUSB2_TUNE_IUSB2, tune_iusb2); 142 177 143 - if (!of_property_read_u8(np, "qcom,tune-usb2-amplitude", &override)) 144 - init_tbl[F_TUNE_IUSB2] = override; 145 - 146 - if (!of_property_read_u8(np, "qcom,tune-usb2-disc-thres", &override)) 147 - init_tbl[F_TUNE_HSDISC] = override; 148 - 149 - if (!of_property_read_u8(np, "qcom,tune-usb2-preem", &override)) 150 - init_tbl[F_TUNE_USB2_PREEM] = override; 151 - 152 - for (i = 0; i < F_NUM_TUNE_FIELDS; i++) 153 - 
regmap_field_update_bits(rptr->regs[i], init_tbl[i], init_tbl[i]);
154 -
155 - ret = regmap_field_read_poll_timeout(rptr->regs[F_RPTR_STATUS],
156 - val, val & RPTR_OK, 10, 5);
178 + ret = regmap_read_poll_timeout(regmap, base + EUSB2_RPTR_STATUS, val, val & RPTR_OK, 10, 5);
157 179 if (ret)
158 180 dev_err(rptr->dev, "initialization timed-out\n");
159 181
··· 159 191 enum phy_mode mode, int submode)
160 192 {
161 193 struct eusb2_repeater *rptr = phy_get_drvdata(phy);
194 + struct regmap *regmap = rptr->regmap;
195 + u32 base = rptr->base;
162 196
163 197 switch (mode) {
164 198 case PHY_MODE_USB_HOST:
··· 169 199 * per eUSB 1.2 Spec. Implement a software workaround below until
170 200 * the PHY and controller are fixed to address this observation.
171 201 */
172 - regmap_field_update_bits(rptr->regs[F_FORCE_EN_5],
173 - F_CLK_19P2M_EN, F_CLK_19P2M_EN);
174 - regmap_field_update_bits(rptr->regs[F_FORCE_VAL_5],
175 - V_CLK_19P2M_EN, V_CLK_19P2M_EN);
202 + regmap_write(regmap, base + EUSB2_FORCE_EN_5, F_CLK_19P2M_EN);
203 + regmap_write(regmap, base + EUSB2_FORCE_VAL_5, V_CLK_19P2M_EN);
176 204 break;
177 205 case PHY_MODE_USB_DEVICE:
178 206 /*
··· 179 211 * repeater doesn't clear previous value due to shared
180 212 * regulators (say host <-> device mode switch).
181 213 */ 182 - regmap_field_update_bits(rptr->regs[F_FORCE_EN_5], 183 - F_CLK_19P2M_EN, 0); 184 - regmap_field_update_bits(rptr->regs[F_FORCE_VAL_5], 185 - V_CLK_19P2M_EN, 0); 214 + regmap_write(regmap, base + EUSB2_FORCE_EN_5, 0); 215 + regmap_write(regmap, base + EUSB2_FORCE_VAL_5, 0); 186 216 break; 187 217 default: 188 218 return -EINVAL; ··· 209 243 struct device *dev = &pdev->dev; 210 244 struct phy_provider *phy_provider; 211 245 struct device_node *np = dev->of_node; 212 - struct regmap *regmap; 213 - int i, ret; 214 246 u32 res; 247 + int ret; 215 248 216 249 rptr = devm_kzalloc(dev, sizeof(*rptr), GFP_KERNEL); 217 250 if (!rptr) ··· 223 258 if (!rptr->cfg) 224 259 return -EINVAL; 225 260 226 - regmap = dev_get_regmap(dev->parent, NULL); 227 - if (!regmap) 261 + rptr->regmap = dev_get_regmap(dev->parent, NULL); 262 + if (!rptr->regmap) 228 263 return -ENODEV; 229 264 230 265 ret = of_property_read_u32(np, "reg", &res); 231 266 if (ret < 0) 232 267 return ret; 233 268 234 - for (i = 0; i < F_NUM_FIELDS; i++) 235 - eusb2_repeater_tune_reg_fields[i].reg += res; 236 - 237 - ret = devm_regmap_field_bulk_alloc(dev, regmap, rptr->regs, 238 - eusb2_repeater_tune_reg_fields, 239 - F_NUM_FIELDS); 240 - if (ret) 241 - return ret; 269 + rptr->base = res; 242 270 243 271 ret = eusb2_repeater_init_vregs(rptr); 244 272 if (ret < 0) {
+1 -1
drivers/phy/qualcomm/phy-qcom-m31.c
··· 299 299
300 300 qphy->vreg = devm_regulator_get(dev, "vdda-phy");
301 301 if (IS_ERR(qphy->vreg))
302 - return dev_err_probe(dev, PTR_ERR(qphy->phy),
302 + return dev_err_probe(dev, PTR_ERR(qphy->vreg),
303 303 "failed to get vreg\n");
304 304
305 305 phy_set_drvdata(qphy->phy, qphy);
+5 -5
drivers/phy/qualcomm/phy-qcom-qmp-usb.c
··· 1556 1556 "vdda-phy", "vdda-pll", 1557 1557 }; 1558 1558 1559 - static const struct qmp_usb_offsets qmp_usb_offsets_ipq8074 = { 1559 + static const struct qmp_usb_offsets qmp_usb_offsets_v3 = { 1560 1560 .serdes = 0, 1561 1561 .pcs = 0x800, 1562 1562 .pcs_misc = 0x600, ··· 1572 1572 .rx = 0x400, 1573 1573 }; 1574 1574 1575 - static const struct qmp_usb_offsets qmp_usb_offsets_v3 = { 1575 + static const struct qmp_usb_offsets qmp_usb_offsets_v3_msm8996 = { 1576 1576 .serdes = 0, 1577 1577 .pcs = 0x600, 1578 1578 .tx = 0x200, ··· 1624 1624 static const struct qmp_phy_cfg ipq6018_usb3phy_cfg = { 1625 1625 .lanes = 1, 1626 1626 1627 - .offsets = &qmp_usb_offsets_ipq8074, 1627 + .offsets = &qmp_usb_offsets_v3, 1628 1628 1629 1629 .serdes_tbl = ipq9574_usb3_serdes_tbl, 1630 1630 .serdes_tbl_num = ARRAY_SIZE(ipq9574_usb3_serdes_tbl), ··· 1642 1642 static const struct qmp_phy_cfg ipq8074_usb3phy_cfg = { 1643 1643 .lanes = 1, 1644 1644 1645 - .offsets = &qmp_usb_offsets_ipq8074, 1645 + .offsets = &qmp_usb_offsets_v3, 1646 1646 1647 1647 .serdes_tbl = ipq8074_usb3_serdes_tbl, 1648 1648 .serdes_tbl_num = ARRAY_SIZE(ipq8074_usb3_serdes_tbl), ··· 1678 1678 static const struct qmp_phy_cfg msm8996_usb3phy_cfg = { 1679 1679 .lanes = 1, 1680 1680 1681 - .offsets = &qmp_usb_offsets_v3, 1681 + .offsets = &qmp_usb_offsets_v3_msm8996, 1682 1682 1683 1683 .serdes_tbl = msm8996_usb3_serdes_tbl, 1684 1684 .serdes_tbl_num = ARRAY_SIZE(msm8996_usb3_serdes_tbl),
+3 -1
drivers/platform/x86/amd/pmf/tee-if.c
··· 458 458 amd_pmf_hex_dump_pb(dev);
459 459
460 460 dev->prev_data = kzalloc(sizeof(*dev->prev_data), GFP_KERNEL);
461 - if (!dev->prev_data)
461 + if (!dev->prev_data) {
462 + ret = -ENOMEM;
462 463 goto error;
464 + }
463 465
464 466 ret = amd_pmf_start_policy_engine(dev);
465 467 if (ret)
+8 -15
drivers/platform/x86/p2sb.c
··· 20 20 #define P2SBC_HIDE BIT(8) 21 21 22 22 #define P2SB_DEVFN_DEFAULT PCI_DEVFN(31, 1) 23 + #define P2SB_DEVFN_GOLDMONT PCI_DEVFN(13, 0) 24 + #define SPI_DEVFN_GOLDMONT PCI_DEVFN(13, 2) 23 25 24 26 static const struct x86_cpu_id p2sb_cpu_ids[] = { 25 - X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT, PCI_DEVFN(13, 0)), 27 + X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT, P2SB_DEVFN_GOLDMONT), 26 28 {} 27 29 }; 28 30 ··· 100 98 101 99 static int p2sb_scan_and_cache(struct pci_bus *bus, unsigned int devfn) 102 100 { 103 - unsigned int slot, fn; 101 + /* Scan the P2SB device and cache its BAR0 */ 102 + p2sb_scan_and_cache_devfn(bus, devfn); 104 103 105 - if (PCI_FUNC(devfn) == 0) { 106 - /* 107 - * When function number of the P2SB device is zero, scan it and 108 - * other function numbers, and if devices are available, cache 109 - * their BAR0s. 110 - */ 111 - slot = PCI_SLOT(devfn); 112 - for (fn = 0; fn < NR_P2SB_RES_CACHE; fn++) 113 - p2sb_scan_and_cache_devfn(bus, PCI_DEVFN(slot, fn)); 114 - } else { 115 - /* Scan the P2SB device and cache its BAR0 */ 116 - p2sb_scan_and_cache_devfn(bus, devfn); 117 - } 104 + /* On Goldmont p2sb_bar() also gets called for the SPI controller */ 105 + if (devfn == P2SB_DEVFN_GOLDMONT) 106 + p2sb_scan_and_cache_devfn(bus, SPI_DEVFN_GOLDMONT); 118 107 119 108 if (!p2sb_valid_resource(&p2sb_resources[PCI_FUNC(devfn)].res)) 120 109 return -ENOENT;
+3
drivers/pmdomain/arm/scmi_perf_domain.c
··· 159 159 struct genpd_onecell_data *scmi_pd_data = dev_get_drvdata(dev);
160 160 int i;
161 161
162 + if (!scmi_pd_data)
163 + return;
164 +
162 165 of_genpd_del_provider(dev->of_node);
163 166
164 167 for (i = 0; i < scmi_pd_data->num_domains; i++)
+5 -2
drivers/pmdomain/qcom/rpmhpd.c
··· 692 692 unsigned int active_corner, sleep_corner; 693 693 unsigned int this_active_corner = 0, this_sleep_corner = 0; 694 694 unsigned int peer_active_corner = 0, peer_sleep_corner = 0; 695 + unsigned int peer_enabled_corner; 695 696 696 697 if (pd->state_synced) { 697 698 to_active_sleep(pd, corner, &this_active_corner, &this_sleep_corner); ··· 702 701 this_sleep_corner = pd->level_count - 1; 703 702 } 704 703 705 - if (peer && peer->enabled) 706 - to_active_sleep(peer, peer->corner, &peer_active_corner, 704 + if (peer && peer->enabled) { 705 + peer_enabled_corner = max(peer->corner, peer->enable_corner); 706 + to_active_sleep(peer, peer_enabled_corner, &peer_active_corner, 707 707 &peer_sleep_corner); 708 + } 708 709 709 710 active_corner = max(this_active_corner, peer_active_corner); 710 711
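The rpmhpd hunk above changes how a domain's corner vote is aggregated with its peer's: an enabled peer must vote with the higher of its requested corner and the corner it needs just to stay enabled. A standalone sketch of that arithmetic, using hypothetical flat parameters instead of the driver's structs:

```c
static inline unsigned int max_u(unsigned int a, unsigned int b)
{
	return a > b ? a : b;
}

/*
 * Aggregate the active-state corner for a domain and its peer. Without
 * taking peer_enable_corner into account, an enabled peer with a low
 * requested corner could drag the aggregate below what it needs to
 * remain enabled, which is the bug the patch above fixes.
 */
static unsigned int aggregate_active_corner(unsigned int this_corner,
					    int peer_enabled,
					    unsigned int peer_corner,
					    unsigned int peer_enable_corner)
{
	unsigned int peer_vote = 0;

	if (peer_enabled)
		peer_vote = max_u(peer_corner, peer_enable_corner);

	return max_u(this_corner, peer_vote);
}
```

A disabled peer contributes nothing, so the domain's own corner wins unchanged in that case.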
+1
drivers/power/supply/Kconfig
··· 978 978 config FUEL_GAUGE_MM8013
979 979 tristate "Mitsumi MM8013 fuel gauge driver"
980 980 depends on I2C
981 + select REGMAP_I2C
981 982 help
982 983 Say Y here to enable the Mitsumi MM8013 fuel gauge driver.
983 984 It enables the monitoring of many battery parameters, including
+3 -1
drivers/power/supply/bq27xxx_battery_i2c.c
··· 209 209 {
210 210 struct bq27xxx_device_info *di = i2c_get_clientdata(client);
211 211
212 - free_irq(client->irq, di);
212 + if (client->irq)
213 + free_irq(client->irq, di);
214 +
213 215 bq27xxx_battery_teardown(di);
214 216
215 217 mutex_lock(&battery_mutex);
+6 -1
drivers/scsi/mpi3mr/mpi3mr_transport.c
··· 1671 1671 void 1672 1672 mpi3mr_refresh_sas_ports(struct mpi3mr_ioc *mrioc) 1673 1673 { 1674 - struct host_port h_port[64]; 1674 + struct host_port *h_port = NULL; 1675 1675 int i, j, found, host_port_count = 0, port_idx; 1676 1676 u16 sz, attached_handle, ioc_status; 1677 1677 struct mpi3_sas_io_unit_page0 *sas_io_unit_pg0 = NULL; ··· 1685 1685 sas_io_unit_pg0 = kzalloc(sz, GFP_KERNEL); 1686 1686 if (!sas_io_unit_pg0) 1687 1687 return; 1688 + h_port = kcalloc(64, sizeof(struct host_port), GFP_KERNEL); 1689 + if (!h_port) 1690 + goto out; 1691 + 1688 1692 if (mpi3mr_cfg_get_sas_io_unit_pg0(mrioc, sas_io_unit_pg0, sz)) { 1689 1693 ioc_err(mrioc, "failure at %s:%d/%s()!\n", 1690 1694 __FILE__, __LINE__, __func__); ··· 1818 1814 } 1819 1815 } 1820 1816 out: 1817 + kfree(h_port); 1821 1818 kfree(sas_io_unit_pg0); 1822 1819 } 1823 1820
+3 -1
drivers/scsi/mpt3sas/mpt3sas_base.c
··· 7378 7378 return -EFAULT;
7379 7379 }
7380 7380
7381 - issue_diag_reset:
7381 + return 0;
7382 +
7383 + issue_diag_reset:
7382 7384 rc = _base_diag_reset(ioc);
7383 7385 return rc;
7384 7386 }
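The mpt3sas hunk above adds an explicit `return 0;` so the success path no longer falls through into the `issue_diag_reset:` label. A minimal standalone sketch of that control-flow bug, with hypothetical stand-in functions rather than the driver's real ones:

```c
#include <assert.h>

/* Counts how many times the (fake) reset path ran. */
static int diag_reset_count;

static int fake_diag_reset(void)
{
	diag_reset_count++;
	return 0;
}

/*
 * Without the early "return 0", a caller with ready == 1 would fall
 * straight through into issue_diag_reset and trigger a spurious reset.
 */
static int check_ready(int ready)
{
	if (!ready)
		goto issue_diag_reset;

	return 0;		/* the missing early return the fix adds */

issue_diag_reset:
	return fake_diag_reset();
}
```

The general lesson: any code path that ends just before a cleanup/retry label needs an explicit return, or the label's body silently becomes part of the success path.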
+11 -10
drivers/soc/qcom/pmic_glink.c
··· 265 265 266 266 pg->client_mask = *match_data; 267 267 268 + pg->pdr = pdr_handle_alloc(pmic_glink_pdr_callback, pg); 269 + if (IS_ERR(pg->pdr)) { 270 + ret = dev_err_probe(&pdev->dev, PTR_ERR(pg->pdr), 271 + "failed to initialize pdr\n"); 272 + return ret; 273 + } 274 + 268 275 if (pg->client_mask & BIT(PMIC_GLINK_CLIENT_UCSI)) { 269 276 ret = pmic_glink_add_aux_device(pg, &pg->ucsi_aux, "ucsi"); 270 277 if (ret) 271 - return ret; 278 + goto out_release_pdr_handle; 272 279 } 273 280 if (pg->client_mask & BIT(PMIC_GLINK_CLIENT_ALTMODE)) { 274 281 ret = pmic_glink_add_aux_device(pg, &pg->altmode_aux, "altmode"); ··· 288 281 goto out_release_altmode_aux; 289 282 } 290 283 291 - pg->pdr = pdr_handle_alloc(pmic_glink_pdr_callback, pg); 292 - if (IS_ERR(pg->pdr)) { 293 - ret = dev_err_probe(&pdev->dev, PTR_ERR(pg->pdr), "failed to initialize pdr\n"); 294 - goto out_release_aux_devices; 295 - } 296 - 297 284 service = pdr_add_lookup(pg->pdr, "tms/servreg", "msm/adsp/charger_pd"); 298 285 if (IS_ERR(service)) { 299 286 ret = dev_err_probe(&pdev->dev, PTR_ERR(service), 300 287 "failed adding pdr lookup for charger_pd\n"); 301 - goto out_release_pdr_handle; 288 + goto out_release_aux_devices; 302 289 } 303 290 304 291 mutex_lock(&__pmic_glink_lock); ··· 301 300 302 301 return 0; 303 302 304 - out_release_pdr_handle: 305 - pdr_handle_release(pg->pdr); 306 303 out_release_aux_devices: 307 304 if (pg->client_mask & BIT(PMIC_GLINK_CLIENT_BATT)) 308 305 pmic_glink_del_aux_device(pg, &pg->ps_aux); ··· 310 311 out_release_ucsi_aux: 311 312 if (pg->client_mask & BIT(PMIC_GLINK_CLIENT_UCSI)) 312 313 pmic_glink_del_aux_device(pg, &pg->ucsi_aux); 314 + out_release_pdr_handle: 315 + pdr_handle_release(pg->pdr); 313 316 314 317 return ret; 315 318 }
+2 -1
drivers/tee/optee/device.c
··· 90 90 if (rc) {
91 91 pr_err("device registration failed, err: %d\n", rc);
92 92 put_device(&optee_device->dev);
93 + return rc;
93 94 }
94 95
95 96 if (func == PTA_CMD_GET_DEVICES_SUPP)
96 97 device_create_file(&optee_device->dev,
97 98 &dev_attr_need_supplicant);
98 99
99 - return rc;
100 + return 0;
100 101 }
101 102
102 103 static int __optee_enumerate_devices(u32 func)
+3 -5
drivers/video/fbdev/core/fbcon.c
··· 2399 2399 struct fbcon_ops *ops = info->fbcon_par; 2400 2400 struct fbcon_display *p = &fb_display[vc->vc_num]; 2401 2401 int resize, ret, old_userfont, old_width, old_height, old_charcount; 2402 - char *old_data = NULL; 2402 + u8 *old_data = vc->vc_font.data; 2403 2403 2404 2404 resize = (w != vc->vc_font.width) || (h != vc->vc_font.height); 2405 - if (p->userfont) 2406 - old_data = vc->vc_font.data; 2407 2405 vc->vc_font.data = (void *)(p->fontdata = data); 2408 2406 old_userfont = p->userfont; 2409 2407 if ((p->userfont = userfont)) ··· 2435 2437 update_screen(vc); 2436 2438 } 2437 2439 2438 - if (old_data && (--REFCOUNT(old_data) == 0)) 2440 + if (old_userfont && (--REFCOUNT(old_data) == 0)) 2439 2441 kfree(old_data - FONT_EXTRA_WORDS * sizeof(int)); 2440 2442 return 0; 2441 2443 2442 2444 err_out: 2443 2445 p->fontdata = old_data; 2444 - vc->vc_font.data = (void *)old_data; 2446 + vc->vc_font.data = old_data; 2445 2447 2446 2448 if (userfont) { 2447 2449 p->userfont = old_userfont;
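The fbcon hunk above keys the old font's release on `old_userfont` rather than on a NULL check, relying on the userfont layout where `FONT_EXTRA_WORDS` header words (one of them a reference count) precede the glyph data. A hedged userspace sketch of that bookkeeping, with illustrative names rather than fbcon's:

```c
#include <stdlib.h>

#define FONT_EXTRA_WORDS 4
/* The reference count lives in the header word just before the data. */
#define REFCOUNT(p) (((int *)(p))[-1])

/* Allocate header + payload, hand out a pointer past the header. */
static void *userfont_alloc(size_t payload)
{
	int *p = calloc(1, FONT_EXTRA_WORDS * sizeof(int) + payload);

	if (!p)
		return NULL;
	p += FONT_EXTRA_WORDS;
	REFCOUNT(p) = 1;
	return p;
}

/* Drop a reference; free the whole allocation when it hits zero. */
static int userfont_put(void *data)
{
	if (--REFCOUNT(data) == 0) {
		free((int *)data - FONT_EXTRA_WORDS);
		return 1;	/* freed */
	}
	return 0;		/* still referenced */
}

/* Exercise one alloc/share/release cycle; returns 1 on success. */
static int userfont_demo(void)
{
	void *f = userfont_alloc(32);

	if (!f)
		return 0;
	REFCOUNT(f)++;			/* a second user takes a reference */
	if (userfont_put(f) != 0)	/* still held by the other user */
		return 0;
	return userfont_put(f) == 1;	/* last put frees the allocation */
}
```

Because only userfonts carry this header, applying `REFCOUNT()` to a built-in font's data would read memory it does not own, which is why the fix gates the put on `old_userfont`.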
-2
drivers/video/fbdev/hyperv_fb.c
··· 1010 1010 goto getmem_done;
1011 1011 }
1012 1012 pr_info("Unable to allocate enough contiguous physical memory on Gen 1 VM. Using MMIO instead.\n");
1013 - } else {
1014 - goto err1;
1015 1013 }
1016 1014
1017 1015 /*
+3 -1
fs/afs/dir.c
··· 479 479 dire->u.name[0] == '.' && 480 480 ctx->actor != afs_lookup_filldir && 481 481 ctx->actor != afs_lookup_one_filldir && 482 - memcmp(dire->u.name, ".__afs", 6) == 0) 482 + memcmp(dire->u.name, ".__afs", 6) == 0) { 483 + ctx->pos = blkoff + next * sizeof(union afs_xdr_dirent); 483 484 continue; 485 + } 484 486 485 487 /* found the next entry */ 486 488 if (!dir_emit(ctx, dire->u.name, nlen,
+6 -2
fs/aio.c
··· 589 589 590 590 void kiocb_set_cancel_fn(struct kiocb *iocb, kiocb_cancel_fn *cancel) 591 591 { 592 - struct aio_kiocb *req = container_of(iocb, struct aio_kiocb, rw); 593 - struct kioctx *ctx = req->ki_ctx; 592 + struct aio_kiocb *req; 593 + struct kioctx *ctx; 594 594 unsigned long flags; 595 595 596 596 /* ··· 600 600 if (!(iocb->ki_flags & IOCB_AIO_RW)) 601 601 return; 602 602 603 + req = container_of(iocb, struct aio_kiocb, rw); 604 + 603 605 if (WARN_ON_ONCE(!list_empty(&req->ki_list))) 604 606 return; 607 + 608 + ctx = req->ki_ctx; 605 609 606 610 spin_lock_irqsave(&ctx->ctx_lock, flags); 607 611 list_add_tail(&req->ki_list, &ctx->active_reqs);
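The aio hunk above defers the `container_of()` and the `ctx` dereference until after the `IOCB_AIO_RW` check, because for a non-aio iocb the containing `aio_kiocb` simply does not exist. A small sketch of that ordering rule, assuming toy types rather than the kernel's:

```c
#include <stddef.h>

/* Userspace stand-in for the kernel's container_of(). */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

#define FLAG_IS_AIO 0x1

struct kiocb_like {
	int flags;
};

struct aio_req {
	int magic;
	struct kiocb_like rw;	/* embedded member */
};

/* Returns the request's magic, or -1 when iocb is not aio-backed. */
static int get_magic(struct kiocb_like *iocb)
{
	struct aio_req *req;

	if (!(iocb->flags & FLAG_IS_AIO))
		return -1;	/* never derive the outer struct here */

	req = container_of(iocb, struct aio_req, rw);
	return req->magic;
}

/* One embedded iocb and one bare iocb; returns 1 when both behave. */
static int demo_aio(void)
{
	struct aio_req r = { 42, { FLAG_IS_AIO } };
	struct kiocb_like bare = { 0 };

	return get_magic(&r.rw) == 42 && get_magic(&bare) == -1;
}
```

Deriving the outer pointer is cheap, but anything computed from it (like `req->ki_ctx` in the original) is only valid once the discriminating flag has confirmed the object's real type.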
+11 -11
fs/btrfs/disk-io.c
··· 1307 1307 * 1308 1308 * @objectid: root id 1309 1309 * @anon_dev: preallocated anonymous block device number for new roots, 1310 - * pass 0 for new allocation. 1310 + * pass NULL for a new allocation. 1311 1311 * @check_ref: whether to check root item references, If true, return -ENOENT 1312 1312 * for orphan roots 1313 1313 */ 1314 1314 static struct btrfs_root *btrfs_get_root_ref(struct btrfs_fs_info *fs_info, 1315 - u64 objectid, dev_t anon_dev, 1315 + u64 objectid, dev_t *anon_dev, 1316 1316 bool check_ref) 1317 1317 { 1318 1318 struct btrfs_root *root; ··· 1342 1342 * that common but still possible. In that case, we just need 1343 1343 * to free the anon_dev. 1344 1344 */ 1345 - if (unlikely(anon_dev)) { 1346 - free_anon_bdev(anon_dev); 1347 - anon_dev = 0; 1345 + if (unlikely(anon_dev && *anon_dev)) { 1346 + free_anon_bdev(*anon_dev); 1347 + *anon_dev = 0; 1348 1348 } 1349 1349 1350 1350 if (check_ref && btrfs_root_refs(&root->root_item) == 0) { ··· 1366 1366 goto fail; 1367 1367 } 1368 1368 1369 - ret = btrfs_init_fs_root(root, anon_dev); 1369 + ret = btrfs_init_fs_root(root, anon_dev ? *anon_dev : 0); 1370 1370 if (ret) 1371 1371 goto fail; 1372 1372 ··· 1402 1402 * root's anon_dev to 0 to avoid a double free, once by btrfs_put_root() 1403 1403 * and once again by our caller. 
1404 1404 */ 1405 - if (anon_dev) 1405 + if (anon_dev && *anon_dev) 1406 1406 root->anon_dev = 0; 1407 1407 btrfs_put_root(root); 1408 1408 return ERR_PTR(ret); ··· 1418 1418 struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info, 1419 1419 u64 objectid, bool check_ref) 1420 1420 { 1421 - return btrfs_get_root_ref(fs_info, objectid, 0, check_ref); 1421 + return btrfs_get_root_ref(fs_info, objectid, NULL, check_ref); 1422 1422 } 1423 1423 1424 1424 /* ··· 1426 1426 * the anonymous block device id 1427 1427 * 1428 1428 * @objectid: tree objectid 1429 - * @anon_dev: if zero, allocate a new anonymous block device or use the 1430 - * parameter value 1429 + * @anon_dev: if NULL, allocate a new anonymous block device or use the 1430 + * parameter value if not NULL 1431 1431 */ 1432 1432 struct btrfs_root *btrfs_get_new_fs_root(struct btrfs_fs_info *fs_info, 1433 - u64 objectid, dev_t anon_dev) 1433 + u64 objectid, dev_t *anon_dev) 1434 1434 { 1435 1435 return btrfs_get_root_ref(fs_info, objectid, anon_dev, true); 1436 1436 }
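The disk-io.c change above switches `anon_dev` to a pointer so `btrfs_get_root_ref()` can clear the caller's copy whenever it consumes or frees the preallocated handle, closing a double-free window. A hedged sketch of that ownership-transfer pattern, with hypothetical stand-ins for `free_anon_bdev()`/`dev_t`:

```c
static int frees;

static void fake_free_handle(unsigned int h)
{
	(void)h;
	frees++;
}

/*
 * Callee side: consume *handle if present and always leave it cleared,
 * so the caller's cleanup path cannot free it a second time.
 */
static void take_or_free(unsigned int *handle, int cached_root_exists)
{
	if (!handle || !*handle)
		return;
	if (cached_root_exists)
		fake_free_handle(*handle);	/* not needed after all */
	/* else: ownership moved into the new root (not modeled here) */
	*handle = 0;
}

/* Caller side: frees the handle only if it still owns a nonzero one. */
static void caller_cleanup(unsigned int *handle)
{
	if (*handle)
		fake_free_handle(*handle);
}

/* Returns 1 when exactly one free happens across both paths. */
static int demo_no_double_free(void)
{
	unsigned int h = 5;

	frees = 0;
	take_or_free(&h, 1);	/* callee frees and clears */
	caller_cleanup(&h);	/* sees 0, frees nothing */
	return frees == 1 && h == 0;
}
```

With a by-value `dev_t`, the callee could free the device but had no way to tell the caller, so both sides believed they still owned it.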
+1 -1
fs/btrfs/disk-io.h
··· 61 61 struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info, 62 62 u64 objectid, bool check_ref); 63 63 struct btrfs_root *btrfs_get_new_fs_root(struct btrfs_fs_info *fs_info, 64 - u64 objectid, dev_t anon_dev); 64 + u64 objectid, dev_t *anon_dev); 65 65 struct btrfs_root *btrfs_get_fs_root_commit_root(struct btrfs_fs_info *fs_info, 66 66 struct btrfs_path *path, 67 67 u64 objectid);
+104 -20
fs/btrfs/extent_io.c
··· 2480 2480 struct fiemap_cache *cache, 2481 2481 u64 offset, u64 phys, u64 len, u32 flags) 2482 2482 { 2483 + u64 cache_end; 2483 2484 int ret = 0; 2484 2485 2485 2486 /* Set at the end of extent_fiemap(). */ ··· 2490 2489 goto assign; 2491 2490 2492 2491 /* 2493 - * Sanity check, extent_fiemap() should have ensured that new 2494 - * fiemap extent won't overlap with cached one. 2495 - * Not recoverable. 2492 + * When iterating the extents of the inode, at extent_fiemap(), we may 2493 + * find an extent that starts at an offset behind the end offset of the 2494 + * previous extent we processed. This happens if fiemap is called 2495 + * without FIEMAP_FLAG_SYNC and there are ordered extents completing 2496 + * while we call btrfs_next_leaf() (through fiemap_next_leaf_item()). 2496 2497 * 2497 - * NOTE: Physical address can overlap, due to compression 2498 + * For example we are in leaf X processing its last item, which is the 2499 + * file extent item for file range [512K, 1M[, and after 2500 + * btrfs_next_leaf() releases the path, there's an ordered extent that 2501 + * completes for the file range [768K, 2M[, and that results in trimming 2502 + * the file extent item so that it now corresponds to the file range 2503 + * [512K, 768K[ and a new file extent item is inserted for the file 2504 + * range [768K, 2M[, which may end up as the last item of leaf X or as 2505 + * the first item of the next leaf - in either case btrfs_next_leaf() 2506 + * will leave us with a path pointing to the new extent item, for the 2507 + * file range [768K, 2M[, since that's the first key that follows the 2508 + * last one we processed. So in order not to report overlapping extents 2509 + * to user space, we trim the length of the previously cached extent and 2510 + * emit it. 
2511 + * 2512 + * Upon calling btrfs_next_leaf() we may also find an extent with an 2513 + * offset smaller than or equals to cache->offset, and this happens 2514 + * when we had a hole or prealloc extent with several delalloc ranges in 2515 + * it, but after btrfs_next_leaf() released the path, delalloc was 2516 + * flushed and the resulting ordered extents were completed, so we can 2517 + * now have found a file extent item for an offset that is smaller than 2518 + * or equals to what we have in cache->offset. We deal with this as 2519 + * described below. 2498 2520 */ 2499 - if (cache->offset + cache->len > offset) { 2500 - WARN_ON(1); 2501 - return -EINVAL; 2521 + cache_end = cache->offset + cache->len; 2522 + if (cache_end > offset) { 2523 + if (offset == cache->offset) { 2524 + /* 2525 + * We cached a dealloc range (found in the io tree) for 2526 + * a hole or prealloc extent and we have now found a 2527 + * file extent item for the same offset. What we have 2528 + * now is more recent and up to date, so discard what 2529 + * we had in the cache and use what we have just found. 2530 + */ 2531 + goto assign; 2532 + } else if (offset > cache->offset) { 2533 + /* 2534 + * The extent range we previously found ends after the 2535 + * offset of the file extent item we found and that 2536 + * offset falls somewhere in the middle of that previous 2537 + * extent range. So adjust the range we previously found 2538 + * to end at the offset of the file extent item we have 2539 + * just found, since this extent is more up to date. 2540 + * Emit that adjusted range and cache the file extent 2541 + * item we have just found. This corresponds to the case 2542 + * where a previously found file extent item was split 2543 + * due to an ordered extent completing. 
2544 + */
2545 + cache->len = offset - cache->offset;
2546 + goto emit;
2547 + } else {
2548 + const u64 range_end = offset + len;
2549 +
2550 + /*
2551 + * The offset of the file extent item we have just found
2552 + * is behind the cached offset. This means we were
2553 + * processing a hole or prealloc extent for which we
2554 + * have found delalloc ranges (in the io tree), so what
2555 + * we have in the cache is the last delalloc range we
2556 + * found while the file extent item we found can be
2557 + * either for a whole delalloc range we previously
2558 + * emitted or only a part of that range.
2559 + *
2560 + * We have two cases here:
2561 + *
2562 + * 1) The file extent item's range ends at or behind the
2563 + * cached extent's end. In this case just ignore the
2564 + * current file extent item because we don't want to
2565 + * overlap with previous ranges that may have been
2566 + * emitted already;
2567 + *
2568 + * 2) The file extent item starts behind the currently
2569 + * cached extent but its end offset goes beyond the
2570 + * end offset of the cached extent. We don't want to
2571 + * overlap with a previous range that may have been
2572 + * emitted already, so we emit the currently cached
2573 + * extent and then partially store the current file
2574 + * extent item's range in the cache, for the subrange
2575 + * going from the cached extent's end to the end of the
2576 + * file extent item.
2577 + */ 2578 + if (range_end <= cache_end) 2579 + return 0; 2580 + 2581 + if (!(flags & (FIEMAP_EXTENT_ENCODED | FIEMAP_EXTENT_DELALLOC))) 2582 + phys += cache_end - offset; 2583 + 2584 + offset = cache_end; 2585 + len = range_end - cache_end; 2586 + goto emit; 2587 + } 2502 2588 } 2503 2589 2504 2590 /* ··· 2605 2517 return 0; 2606 2518 } 2607 2519 2520 + emit: 2608 2521 /* Not mergeable, need to submit cached one */ 2609 2522 ret = fiemap_fill_next_extent(fieinfo, cache->offset, cache->phys, 2610 2523 cache->len, cache->flags); ··· 2996 2907 range_end = round_up(start + len, sectorsize); 2997 2908 prev_extent_end = range_start; 2998 2909 2999 - btrfs_inode_lock(inode, BTRFS_ILOCK_SHARED); 3000 - 3001 2910 ret = fiemap_find_last_extent_offset(inode, path, &last_extent_end); 3002 2911 if (ret < 0) 3003 - goto out_unlock; 2912 + goto out; 3004 2913 btrfs_release_path(path); 3005 2914 3006 2915 path->reada = READA_FORWARD; 3007 2916 ret = fiemap_search_slot(inode, path, range_start); 3008 2917 if (ret < 0) { 3009 - goto out_unlock; 2918 + goto out; 3010 2919 } else if (ret > 0) { 3011 2920 /* 3012 2921 * No file extent item found, but we may have delalloc between ··· 3051 2964 backref_ctx, 0, 0, 0, 3052 2965 prev_extent_end, hole_end); 3053 2966 if (ret < 0) { 3054 - goto out_unlock; 2967 + goto out; 3055 2968 } else if (ret > 0) { 3056 2969 /* fiemap_fill_next_extent() told us to stop. */ 3057 2970 stopped = true; ··· 3107 3020 extent_gen, 3108 3021 backref_ctx); 3109 3022 if (ret < 0) 3110 - goto out_unlock; 3023 + goto out; 3111 3024 else if (ret > 0) 3112 3025 flags |= FIEMAP_EXTENT_SHARED; 3113 3026 } ··· 3118 3031 } 3119 3032 3120 3033 if (ret < 0) { 3121 - goto out_unlock; 3034 + goto out; 3122 3035 } else if (ret > 0) { 3123 3036 /* fiemap_fill_next_extent() told us to stop. 
*/ 3124 3037 stopped = true; ··· 3129 3042 next_item: 3130 3043 if (fatal_signal_pending(current)) { 3131 3044 ret = -EINTR; 3132 - goto out_unlock; 3045 + goto out; 3133 3046 } 3134 3047 3135 3048 ret = fiemap_next_leaf_item(inode, path); 3136 3049 if (ret < 0) { 3137 - goto out_unlock; 3050 + goto out; 3138 3051 } else if (ret > 0) { 3139 3052 /* No more file extent items for this inode. */ 3140 3053 break; ··· 3158 3071 &delalloc_cached_state, backref_ctx, 3159 3072 0, 0, 0, prev_extent_end, range_end - 1); 3160 3073 if (ret < 0) 3161 - goto out_unlock; 3074 + goto out; 3162 3075 prev_extent_end = range_end; 3163 3076 } 3164 3077 ··· 3196 3109 } 3197 3110 3198 3111 ret = emit_last_fiemap_cache(fieinfo, &cache); 3199 - 3200 - out_unlock: 3201 - btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED); 3202 3112 out: 3203 3113 free_extent_state(delalloc_cached_state); 3204 3114 btrfs_free_backref_share_ctx(backref_ctx);
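The long extent_io.c hunk above replaces a WARN-and-fail with range arithmetic that resolves overlaps between the cached extent and a newly found one. A simplified standalone sketch of just that arithmetic (no emitting, hypothetical names), given a cached extent [c_off, c_off+c_len) and a new extent [off, off+len):

```c
#include <stdint.h>

enum overlap { REPLACE_CACHED, TRIM_CACHED, SKIP_NEW, CLIP_NEW, NO_OVERLAP };

/*
 * Decide what survives an overlap. For TRIM_CACHED, *out_len returns the
 * trimmed cached length; for CLIP_NEW, *out_off/*out_len describe the
 * surviving tail of the new extent.
 */
static enum overlap resolve(uint64_t c_off, uint64_t c_len,
			    uint64_t off, uint64_t len,
			    uint64_t *out_off, uint64_t *out_len)
{
	uint64_t cache_end = c_off + c_len;

	*out_off = off;
	*out_len = len;

	if (cache_end <= off)
		return NO_OVERLAP;
	if (off == c_off)
		return REPLACE_CACHED;		/* newer data wins outright */
	if (off > c_off) {
		*out_len = off - c_off;		/* shorten the cached range */
		return TRIM_CACHED;
	}
	if (off + len <= cache_end)
		return SKIP_NEW;		/* fully covered already */
	*out_off = cache_end;			/* keep only the tail */
	*out_len = off + len - cache_end;
	return CLIP_NEW;
}

/* Returns 1 when the three interesting cases resolve as documented. */
static int demo_resolve(void)
{
	uint64_t o, l;

	/* cached [512,1024), new starts mid-cache at 768: trim cache to 256 */
	if (resolve(512, 512, 768, 1280, &o, &l) != TRIM_CACHED || l != 256)
		return 0;
	/* new range entirely inside the cached one: skip it */
	if (resolve(512, 512, 256, 512, &o, &l) != SKIP_NEW)
		return 0;
	/* new range starts before the cache but ends past it: keep the tail */
	if (resolve(512, 512, 256, 1024, &o, &l) != CLIP_NEW ||
	    o != 1024 || l != 256)
		return 0;
	return 1;
}
```

The kernel version additionally adjusts `phys` for non-encoded, non-delalloc extents when clipping, since the surviving tail starts deeper into the physical extent.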
+21 -1
fs/btrfs/inode.c
··· 7835 7835 static int btrfs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, 7836 7836 u64 start, u64 len) 7837 7837 { 7838 + struct btrfs_inode *btrfs_inode = BTRFS_I(inode); 7838 7839 int ret; 7839 7840 7840 7841 ret = fiemap_prep(inode, fieinfo, start, &len, 0); ··· 7861 7860 return ret; 7862 7861 } 7863 7862 7864 - return extent_fiemap(BTRFS_I(inode), fieinfo, start, len); 7863 + btrfs_inode_lock(btrfs_inode, BTRFS_ILOCK_SHARED); 7864 + 7865 + /* 7866 + * We did an initial flush to avoid holding the inode's lock while 7867 + * triggering writeback and waiting for the completion of IO and ordered 7868 + * extents. Now after we locked the inode we do it again, because it's 7869 + * possible a new write may have happened in between those two steps. 7870 + */ 7871 + if (fieinfo->fi_flags & FIEMAP_FLAG_SYNC) { 7872 + ret = btrfs_wait_ordered_range(inode, 0, LLONG_MAX); 7873 + if (ret) { 7874 + btrfs_inode_unlock(btrfs_inode, BTRFS_ILOCK_SHARED); 7875 + return ret; 7876 + } 7877 + } 7878 + 7879 + ret = extent_fiemap(btrfs_inode, fieinfo, start, len); 7880 + btrfs_inode_unlock(btrfs_inode, BTRFS_ILOCK_SHARED); 7881 + 7882 + return ret; 7865 7883 } 7866 7884 7867 7885 static int btrfs_writepages(struct address_space *mapping,
+1 -1
fs/btrfs/ioctl.c
··· 721 721 free_extent_buffer(leaf); 722 722 leaf = NULL; 723 723 724 - new_root = btrfs_get_new_fs_root(fs_info, objectid, anon_dev); 724 + new_root = btrfs_get_new_fs_root(fs_info, objectid, &anon_dev); 725 725 if (IS_ERR(new_root)) { 726 726 ret = PTR_ERR(new_root); 727 727 btrfs_abort_transaction(trans, ret);
+1 -1
fs/btrfs/transaction.c
··· 1834 1834 } 1835 1835 1836 1836 key.offset = (u64)-1; 1837 - pending->snap = btrfs_get_new_fs_root(fs_info, objectid, pending->anon_dev); 1837 + pending->snap = btrfs_get_new_fs_root(fs_info, objectid, &pending->anon_dev); 1838 1838 if (IS_ERR(pending->snap)) { 1839 1839 ret = PTR_ERR(pending->snap); 1840 1840 pending->snap = NULL;
+4 -3
fs/ceph/mdsmap.c
··· 380 380 ceph_decode_skip_8(p, end, bad_ext); 381 381 /* required_client_features */ 382 382 ceph_decode_skip_set(p, end, 64, bad_ext); 383 + /* bal_rank_mask */ 384 + ceph_decode_skip_string(p, end, bad_ext); 385 + } 386 + if (mdsmap_ev >= 18) { 383 387 ceph_decode_64_safe(p, end, m->m_max_xattr_size, bad_ext); 384 - } else { 385 - /* This forces the usage of the (sync) SETXATTR Op */ 386 - m->m_max_xattr_size = 0; 387 388 } 388 389 bad_ext: 389 390 doutc(cl, "m_enabled: %d, m_damaged: %d, m_num_laggy: %d\n",
+5 -1
fs/ceph/mdsmap.h
··· 27 27 u32 m_session_timeout; /* seconds */ 28 28 u32 m_session_autoclose; /* seconds */ 29 29 u64 m_max_file_size; 30 - u64 m_max_xattr_size; /* maximum size for xattrs blob */ 30 + /* 31 + * maximum size for xattrs blob. 32 + * Zeroed by default to force the usage of the (sync) SETXATTR Op. 33 + */ 34 + u64 m_max_xattr_size; 31 35 u32 m_max_mds; /* expected up:active mds number */ 32 36 u32 m_num_active_mds; /* actual up:active mds number */ 33 37 u32 possible_max_rank; /* possible max rank index */
+42 -3
fs/coredump.c
··· 872 872 loff_t pos; 873 873 ssize_t n; 874 874 875 + if (!page) 876 + return 0; 877 + 875 878 if (cprm->to_skip) { 876 879 if (!__dump_skip(cprm, cprm->to_skip)) 877 880 return 0; ··· 887 884 pos = file->f_pos; 888 885 bvec_set_page(&bvec, page, PAGE_SIZE, 0); 889 886 iov_iter_bvec(&iter, ITER_SOURCE, &bvec, 1, PAGE_SIZE); 890 - iov_iter_set_copy_mc(&iter); 891 887 n = __kernel_write_iter(cprm->file, &iter, &pos); 892 888 if (n != PAGE_SIZE) 893 889 return 0; ··· 897 895 return 1; 898 896 } 899 897 898 + /* 899 + * If we might get machine checks from kernel accesses during the 900 + * core dump, let's get those errors early rather than during the 901 + * IO. This is not performance-critical enough to warrant having 902 + * all the machine check logic in the iovec paths. 903 + */ 904 + #ifdef copy_mc_to_kernel 905 + 906 + #define dump_page_alloc() alloc_page(GFP_KERNEL) 907 + #define dump_page_free(x) __free_page(x) 908 + static struct page *dump_page_copy(struct page *src, struct page *dst) 909 + { 910 + void *buf = kmap_local_page(src); 911 + size_t left = copy_mc_to_kernel(page_address(dst), buf, PAGE_SIZE); 912 + kunmap_local(buf); 913 + return left ? NULL : dst; 914 + } 915 + 916 + #else 917 + 918 + /* We just want to return non-NULL; it's never used. 
*/ 919 + #define dump_page_alloc() ERR_PTR(-EINVAL) 920 + #define dump_page_free(x) ((void)(x)) 921 + static inline struct page *dump_page_copy(struct page *src, struct page *dst) 922 + { 923 + return src; 924 + } 925 + #endif 926 + 900 927 int dump_user_range(struct coredump_params *cprm, unsigned long start, 901 928 unsigned long len) 902 929 { 903 930 unsigned long addr; 931 + struct page *dump_page; 932 + 933 + dump_page = dump_page_alloc(); 934 + if (!dump_page) 935 + return 0; 904 936 905 937 for (addr = start; addr < start + len; addr += PAGE_SIZE) { 906 938 struct page *page; ··· 948 912 */ 949 913 page = get_dump_page(addr); 950 914 if (page) { 951 - int stop = !dump_emit_page(cprm, page); 915 + int stop = !dump_emit_page(cprm, dump_page_copy(page, dump_page)); 952 916 put_page(page); 953 - if (stop) 917 + if (stop) { 918 + dump_page_free(dump_page); 954 919 return 0; 920 + } 955 921 } else { 956 922 dump_skip(cprm, PAGE_SIZE); 957 923 } 958 924 } 925 + dump_page_free(dump_page); 959 926 return 1; 960 927 } 961 928 #endif
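The coredump hunk above bounces each dumped page through a kernel buffer using `copy_mc_to_kernel()`, which, like `memcpy`-style MC-safe copies, returns the number of bytes it could not copy. A hedged userspace sketch of that shape, where `fake_copy` simulates a machine-check-style partial failure:

```c
#include <string.h>

#define PG 64	/* toy "page" size */

/*
 * Copy up to n bytes, failing at byte fail_at; returns the number of
 * bytes left uncopied (0 on full success), mirroring the kernel calling
 * convention of copy_mc_to_kernel().
 */
static size_t fake_copy(void *dst, const void *src, size_t n, size_t fail_at)
{
	size_t ok = fail_at < n ? fail_at : n;

	memcpy(dst, src, ok);
	return n - ok;
}

/* Returns dst on a full copy, NULL when the source was partly unreadable. */
static char *page_copy(const char *src, char *dst, size_t fail_at)
{
	return fake_copy(dst, src, PG, fail_at) ? NULL : dst;
}

/* Returns 1 when a clean copy succeeds and a faulting copy yields NULL. */
static int demo_page_copy(void)
{
	char src[PG], dst[PG];

	memset(src, 'x', PG);
	if (page_copy(src, dst, PG) == NULL)	/* no fault: succeeds */
		return 0;
	return page_copy(src, dst, PG / 2) == NULL;	/* fault: NULL */
}
```

Surfacing the failure as NULL lets the caller treat an unreadable page like a hole (as `dump_emit_page()` above does by returning early), instead of threading machine-check handling through the whole iovec write path.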
+1 -1
fs/efivarfs/internal.h
··· 38 38 39 39 int efivar_init(int (*func)(efi_char16_t *, efi_guid_t, unsigned long, void *, 40 40 struct list_head *), 41 - void *data, bool duplicates, struct list_head *head); 41 + void *data, struct list_head *head); 42 42 43 43 int efivar_entry_add(struct efivar_entry *entry, struct list_head *head); 44 44 void __efivar_entry_add(struct efivar_entry *entry, struct list_head *head);
+1 -6
fs/efivarfs/super.c
··· 343 343 if (err) 344 344 return err; 345 345 346 - err = efivar_init(efivarfs_callback, (void *)sb, true, 347 - &sfi->efivarfs_list); 348 - if (err) 349 - efivar_entry_iter(efivarfs_destroy, &sfi->efivarfs_list, NULL); 350 - 351 - return err; 346 + return efivar_init(efivarfs_callback, sb, &sfi->efivarfs_list); 352 347 } 353 348 354 349 static int efivarfs_get_tree(struct fs_context *fc)
+13 -10
fs/efivarfs/vars.c
··· 361 361 * efivar_init - build the initial list of EFI variables 362 362 * @func: callback function to invoke for every variable 363 363 * @data: function-specific data to pass to @func 364 - * @duplicates: error if we encounter duplicates on @head? 365 364 * @head: initialised head of variable list 366 365 * 367 366 * Get every EFI variable from the firmware and invoke @func. @func ··· 370 371 */ 371 372 int efivar_init(int (*func)(efi_char16_t *, efi_guid_t, unsigned long, void *, 372 373 struct list_head *), 373 - void *data, bool duplicates, struct list_head *head) 374 + void *data, struct list_head *head) 374 375 { 375 - unsigned long variable_name_size = 1024; 376 + unsigned long variable_name_size = 512; 376 377 efi_char16_t *variable_name; 377 378 efi_status_t status; 378 379 efi_guid_t vendor_guid; ··· 389 390 goto free; 390 391 391 392 /* 392 - * Per EFI spec, the maximum storage allocated for both 393 - * the variable name and variable data is 1024 bytes. 393 + * A small set of old UEFI implementations reject sizes 394 + * above a certain threshold, the lowest seen in the wild 395 + * is 512. 394 396 */ 395 397 396 398 do { 397 - variable_name_size = 1024; 399 + variable_name_size = 512; 398 400 399 401 status = efivar_get_next_variable(&variable_name_size, 400 402 variable_name, ··· 413 413 * we'll ever see a different variable name, 414 414 * and may end up looping here forever. 
415 415 */ 416 - if (duplicates && 417 - variable_is_present(variable_name, &vendor_guid, 416 + if (variable_is_present(variable_name, &vendor_guid, 418 417 head)) { 419 418 dup_variable_bug(variable_name, &vendor_guid, 420 419 variable_name_size); ··· 431 432 break; 432 433 case EFI_NOT_FOUND: 433 434 break; 435 + case EFI_BUFFER_TOO_SMALL: 436 + pr_warn("efivars: Variable name size exceeds maximum (%lu > 512)\n", 437 + variable_name_size); 438 + status = EFI_NOT_FOUND; 439 + break; 434 440 default: 435 - printk(KERN_WARNING "efivars: get_next_variable: status=%lx\n", 436 - status); 441 + pr_warn("efivars: get_next_variable: status=%lx\n", status); 437 442 status = EFI_NOT_FOUND; 438 443 break; 439 444 }
+21 -14
fs/exfat/file.c
··· 35 35 if (new_num_clusters == num_clusters) 36 36 goto out; 37 37 38 - exfat_chain_set(&clu, ei->start_clu, num_clusters, ei->flags); 39 - ret = exfat_find_last_cluster(sb, &clu, &last_clu); 40 - if (ret) 41 - return ret; 38 + if (num_clusters) { 39 + exfat_chain_set(&clu, ei->start_clu, num_clusters, ei->flags); 40 + ret = exfat_find_last_cluster(sb, &clu, &last_clu); 41 + if (ret) 42 + return ret; 42 43 43 - clu.dir = (last_clu == EXFAT_EOF_CLUSTER) ? 44 - EXFAT_EOF_CLUSTER : last_clu + 1; 44 + clu.dir = last_clu + 1; 45 + } else { 46 + last_clu = EXFAT_EOF_CLUSTER; 47 + clu.dir = EXFAT_EOF_CLUSTER; 48 + } 49 + 45 50 clu.size = 0; 46 51 clu.flags = ei->flags; 47 52 ··· 56 51 return ret; 57 52 58 53 /* Append new clusters to chain */ 59 - if (clu.flags != ei->flags) { 60 - exfat_chain_cont_cluster(sb, ei->start_clu, num_clusters); 61 - ei->flags = ALLOC_FAT_CHAIN; 62 - } 63 - if (clu.flags == ALLOC_FAT_CHAIN) 64 - if (exfat_ent_set(sb, last_clu, clu.dir)) 65 - goto free_clu; 54 + if (num_clusters) { 55 + if (clu.flags != ei->flags) 56 + if (exfat_chain_cont_cluster(sb, ei->start_clu, num_clusters)) 57 + goto free_clu; 66 58 67 - if (num_clusters == 0) 59 + if (clu.flags == ALLOC_FAT_CHAIN) 60 + if (exfat_ent_set(sb, last_clu, clu.dir)) 61 + goto free_clu; 62 + } else 68 63 ei->start_clu = clu.dir; 64 + 65 + ei->flags = clu.flags; 69 66 70 67 out: 71 68 inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
-1
fs/xfs/xfs_super.c
··· 350 350 return -EINVAL; 351 351 } 352 352 353 - xfs_warn(mp, "DAX enabled. Warning: EXPERIMENTAL, use at your own risk"); 354 353 return 0; 355 354 356 355 disable_dax:
+15
include/drm/bridge/aux-bridge.h
··· 9 9 10 10 #include <drm/drm_connector.h> 11 11 12 + struct auxiliary_device; 13 + 12 14 #if IS_ENABLED(CONFIG_DRM_AUX_BRIDGE) 13 15 int drm_aux_bridge_register(struct device *parent); 14 16 #else ··· 21 19 #endif 22 20 23 21 #if IS_ENABLED(CONFIG_DRM_AUX_HPD_BRIDGE) 22 + struct auxiliary_device *devm_drm_dp_hpd_bridge_alloc(struct device *parent, struct device_node *np); 23 + int devm_drm_dp_hpd_bridge_add(struct device *dev, struct auxiliary_device *adev); 24 24 struct device *drm_dp_hpd_bridge_register(struct device *parent, 25 25 struct device_node *np); 26 26 void drm_aux_hpd_bridge_notify(struct device *dev, enum drm_connector_status status); 27 27 #else 28 + static inline struct auxiliary_device *devm_drm_dp_hpd_bridge_alloc(struct device *parent, 29 + struct device_node *np) 30 + { 31 + return NULL; 32 + } 33 + 34 + static inline int devm_drm_dp_hpd_bridge_add(struct auxiliary_device *adev) 35 + { 36 + return 0; 37 + } 38 + 28 39 static inline struct device *drm_dp_hpd_bridge_register(struct device *parent, 29 40 struct device_node *np) 30 41 {
+1 -1
include/linux/bvec.h
··· 83 83 84 84 unsigned int bi_bvec_done; /* number of bytes completed in 85 85 current bvec */ 86 - } __packed; 86 + } __packed __aligned(4); 87 87 88 88 struct bvec_iter_all { 89 89 struct bio_vec bv;
+13 -13
include/linux/dpll.h
··· 123 123 }; 124 124 125 125 #if IS_ENABLED(CONFIG_DPLL) 126 - size_t dpll_msg_pin_handle_size(struct dpll_pin *pin); 127 - int dpll_msg_add_pin_handle(struct sk_buff *msg, struct dpll_pin *pin); 126 + void dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin); 127 + void dpll_netdev_pin_clear(struct net_device *dev); 128 + 129 + size_t dpll_netdev_pin_handle_size(const struct net_device *dev); 130 + int dpll_netdev_add_pin_handle(struct sk_buff *msg, 131 + const struct net_device *dev); 128 132 #else 129 - static inline size_t dpll_msg_pin_handle_size(struct dpll_pin *pin) 133 + static inline void 134 + dpll_netdev_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin) { } 135 + static inline void dpll_netdev_pin_clear(struct net_device *dev) { } 136 + 137 + static inline size_t dpll_netdev_pin_handle_size(const struct net_device *dev) 130 138 { 131 139 return 0; 132 140 } 133 141 134 - static inline int dpll_msg_add_pin_handle(struct sk_buff *msg, struct dpll_pin *pin) 142 + static inline int 143 + dpll_netdev_add_pin_handle(struct sk_buff *msg, const struct net_device *dev) 135 144 { 136 145 return 0; 137 146 } ··· 178 169 int dpll_device_change_ntf(struct dpll_device *dpll); 179 170 180 171 int dpll_pin_change_ntf(struct dpll_pin *pin); 181 - 182 - #if !IS_ENABLED(CONFIG_DPLL) 183 - static inline struct dpll_pin *netdev_dpll_pin(const struct net_device *dev) 184 - { 185 - return NULL; 186 - } 187 - #else 188 - struct dpll_pin *netdev_dpll_pin(const struct net_device *dev); 189 - #endif 190 172 191 173 #endif
+21 -1
include/linux/hyperv.h
··· 164 164 u8 buffer[]; 165 165 } __packed; 166 166 167 + 168 + /* 169 + * If the requested ring buffer size is at least 8 times the size of the 170 + * header, steal space from the ring buffer for the header. Otherwise, add 171 + * space for the header so that is doesn't take too much of the ring buffer 172 + * space. 173 + * 174 + * The factor of 8 is somewhat arbitrary. The goal is to prevent adding a 175 + * relatively small header (4 Kbytes on x86) to a large-ish power-of-2 ring 176 + * buffer size (such as 128 Kbytes) and so end up making a nearly twice as 177 + * large allocation that will be almost half wasted. As a contrasting example, 178 + * on ARM64 with 64 Kbyte page size, we don't want to take 64 Kbytes for the 179 + * header from a 128 Kbyte allocation, leaving only 64 Kbytes for the ring. 180 + * In this latter case, we must add 64 Kbytes for the header and not worry 181 + * about what's wasted. 182 + */ 183 + #define VMBUS_HEADER_ADJ(payload_sz) \ 184 + ((payload_sz) >= 8 * sizeof(struct hv_ring_buffer) ? \ 185 + 0 : sizeof(struct hv_ring_buffer)) 186 + 167 187 /* Calculate the proper size of a ringbuffer, it must be page-aligned */ 168 - #define VMBUS_RING_SIZE(payload_sz) PAGE_ALIGN(sizeof(struct hv_ring_buffer) + \ 188 + #define VMBUS_RING_SIZE(payload_sz) PAGE_ALIGN(VMBUS_HEADER_ADJ(payload_sz) + \ 169 189 (payload_sz)) 170 190 171 191 struct hv_ring_buffer_info {
+3 -1
include/linux/mlx5/mlx5_ifc.h
··· 10261 10261 10262 10262 u8 regs_63_to_46[0x12]; 10263 10263 u8 mrtc[0x1]; 10264 - u8 regs_44_to_32[0xd]; 10264 + u8 regs_44_to_41[0x4]; 10265 + u8 mfrl[0x1]; 10266 + u8 regs_39_to_32[0x8]; 10265 10267 10266 10268 u8 regs_31_to_10[0x16]; 10267 10269 u8 mtmp[0x1];
+10 -4
include/linux/netdevice.h
··· 79 79 struct xdp_frame; 80 80 struct xdp_metadata_ops; 81 81 struct xdp_md; 82 - /* DPLL specific */ 83 - struct dpll_pin; 84 82 85 83 typedef u32 xdp_features_t; 86 84 ··· 3498 3500 #endif 3499 3501 } 3500 3502 3503 + static inline int netdev_queue_dql_avail(const struct netdev_queue *txq) 3504 + { 3505 + #ifdef CONFIG_BQL 3506 + /* Non-BQL migrated drivers will return 0, too. */ 3507 + return dql_avail(&txq->dql); 3508 + #else 3509 + return 0; 3510 + #endif 3511 + } 3512 + 3501 3513 /** 3502 3514 * netdev_txq_bql_enqueue_prefetchw - prefetch bql data for write 3503 3515 * @dev_queue: pointer to transmit queue ··· 4041 4033 int dev_get_port_parent_id(struct net_device *dev, 4042 4034 struct netdev_phys_item_id *ppid, bool recurse); 4043 4035 bool netdev_port_same_parent_id(struct net_device *a, struct net_device *b); 4044 - void netdev_dpll_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin); 4045 - void netdev_dpll_pin_clear(struct net_device *dev); 4046 4036 4047 4037 struct sk_buff *validate_xmit_skb_list(struct sk_buff *skb, struct net_device *dev, bool *again); 4048 4038 struct sk_buff *dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,
-16
include/linux/uio.h
··· 40 40 41 41 struct iov_iter { 42 42 u8 iter_type; 43 - bool copy_mc; 44 43 bool nofault; 45 44 bool data_source; 46 45 size_t iov_offset; ··· 247 248 248 249 #ifdef CONFIG_ARCH_HAS_COPY_MC 249 250 size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i); 250 - static inline void iov_iter_set_copy_mc(struct iov_iter *i) 251 - { 252 - i->copy_mc = true; 253 - } 254 - 255 - static inline bool iov_iter_is_copy_mc(const struct iov_iter *i) 256 - { 257 - return i->copy_mc; 258 - } 259 251 #else 260 252 #define _copy_mc_to_iter _copy_to_iter 261 - static inline void iov_iter_set_copy_mc(struct iov_iter *i) { } 262 - static inline bool iov_iter_is_copy_mc(const struct iov_iter *i) 263 - { 264 - return false; 265 - } 266 253 #endif 267 254 268 255 size_t iov_iter_zero(size_t bytes, struct iov_iter *); ··· 340 355 WARN_ON(direction & ~(READ | WRITE)); 341 356 *i = (struct iov_iter) { 342 357 .iter_type = ITER_UBUF, 343 - .copy_mc = false, 344 358 .data_source = direction, 345 359 .ubuf = buf, 346 360 .count = count,
+1 -6
include/net/sch_generic.h
··· 238 238 239 239 static inline int qdisc_avail_bulklimit(const struct netdev_queue *txq) 240 240 { 241 - #ifdef CONFIG_BQL 242 - /* Non-BQL migrated drivers will return 0, too. */ 243 - return dql_avail(&txq->dql); 244 - #else 245 - return 0; 246 - #endif 241 + return netdev_queue_dql_avail(txq); 247 242 } 248 243 249 244 struct Qdisc_class_ops {
+2
include/sound/soc-card.h
··· 30 30 31 31 struct snd_kcontrol *snd_soc_card_get_kcontrol(struct snd_soc_card *soc_card, 32 32 const char *name); 33 + struct snd_kcontrol *snd_soc_card_get_kcontrol_locked(struct snd_soc_card *soc_card, 34 + const char *name); 33 35 int snd_soc_card_jack_new(struct snd_soc_card *card, const char *id, int type, 34 36 struct snd_soc_jack *jack); 35 37 int snd_soc_card_jack_new_pins(struct snd_soc_card *card, const char *id,
+10 -10
include/trace/events/qdisc.h
··· 81 81 TP_ARGS(q), 82 82 83 83 TP_STRUCT__entry( 84 - __string( dev, qdisc_dev(q) ) 85 - __string( kind, q->ops->id ) 86 - __field( u32, parent ) 87 - __field( u32, handle ) 84 + __string( dev, qdisc_dev(q)->name ) 85 + __string( kind, q->ops->id ) 86 + __field( u32, parent ) 87 + __field( u32, handle ) 88 88 ), 89 89 90 90 TP_fast_assign( 91 - __assign_str(dev, qdisc_dev(q)); 91 + __assign_str(dev, qdisc_dev(q)->name); 92 92 __assign_str(kind, q->ops->id); 93 93 __entry->parent = q->parent; 94 94 __entry->handle = q->handle; ··· 106 106 TP_ARGS(q), 107 107 108 108 TP_STRUCT__entry( 109 - __string( dev, qdisc_dev(q) ) 110 - __string( kind, q->ops->id ) 111 - __field( u32, parent ) 112 - __field( u32, handle ) 109 + __string( dev, qdisc_dev(q)->name ) 110 + __string( kind, q->ops->id ) 111 + __field( u32, parent ) 112 + __field( u32, handle ) 113 113 ), 114 114 115 115 TP_fast_assign( 116 - __assign_str(dev, qdisc_dev(q)); 116 + __assign_str(dev, qdisc_dev(q)->name); 117 117 __assign_str(kind, q->ops->id); 118 118 __entry->parent = q->parent; 119 119 __entry->handle = q->handle;
+1 -20
include/uapi/drm/xe_drm.h
··· 831 831 * - %DRM_XE_VM_BIND_OP_PREFETCH 832 832 * 833 833 * and the @flags can be: 834 - * - %DRM_XE_VM_BIND_FLAG_READONLY 835 - * - %DRM_XE_VM_BIND_FLAG_ASYNC 836 - * - %DRM_XE_VM_BIND_FLAG_IMMEDIATE - Valid on a faulting VM only, do the 837 - * MAP operation immediately rather than deferring the MAP to the page 838 - * fault handler. 839 834 * - %DRM_XE_VM_BIND_FLAG_NULL - When the NULL flag is set, the page 840 835 * tables are setup with a special bit which indicates writes are 841 836 * dropped and all reads return zero. In the future, the NULL flags ··· 923 928 /** @op: Bind operation to perform */ 924 929 __u32 op; 925 930 926 - #define DRM_XE_VM_BIND_FLAG_READONLY (1 << 0) 927 - #define DRM_XE_VM_BIND_FLAG_IMMEDIATE (1 << 1) 928 931 #define DRM_XE_VM_BIND_FLAG_NULL (1 << 2) 932 + #define DRM_XE_VM_BIND_FLAG_DUMPABLE (1 << 3) 929 933 /** @flags: Bind flags */ 930 934 __u32 flags; 931 935 ··· 1039 1045 #define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY 0 1040 1046 #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY 0 1041 1047 #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE 1 1042 - #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PREEMPTION_TIMEOUT 2 1043 - #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_JOB_TIMEOUT 4 1044 - #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_TRIGGER 5 1045 - #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_NOTIFY 6 1046 - #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_ACC_GRANULARITY 7 1047 - /* Monitor 128KB contiguous region with 4K sub-granularity */ 1048 - #define DRM_XE_ACC_GRANULARITY_128K 0 1049 - /* Monitor 2MB contiguous region with 64KB sub-granularity */ 1050 - #define DRM_XE_ACC_GRANULARITY_2M 1 1051 - /* Monitor 16MB contiguous region with 512KB sub-granularity */ 1052 - #define DRM_XE_ACC_GRANULARITY_16M 2 1053 - /* Monitor 64MB contiguous region with 2M sub-granularity */ 1054 - #define DRM_XE_ACC_GRANULARITY_64M 3 1055 1048 1056 1049 /** @extensions: Pointer to the first extension struct, if any */ 1057 1050 __u64 extensions;
+2 -2
include/uapi/sound/asound.h
··· 142 142 * * 143 143 *****************************************************************************/ 144 144 145 - #define SNDRV_PCM_VERSION SNDRV_PROTOCOL_VERSION(2, 0, 16) 145 + #define SNDRV_PCM_VERSION SNDRV_PROTOCOL_VERSION(2, 0, 17) 146 146 147 147 typedef unsigned long snd_pcm_uframes_t; 148 148 typedef signed long snd_pcm_sframes_t; ··· 416 416 unsigned int rmask; /* W: requested masks */ 417 417 unsigned int cmask; /* R: changed masks */ 418 418 unsigned int info; /* R: Info flags for returned setup */ 419 - unsigned int msbits; /* R: used most significant bits */ 419 + unsigned int msbits; /* R: used most significant bits (in sample bit-width) */ 420 420 unsigned int rate_num; /* R: rate numerator */ 421 421 unsigned int rate_den; /* R: rate denominator */ 422 422 snd_pcm_uframes_t fifo_size; /* R: chip FIFO size in frames */
+1 -1
kernel/bpf/cpumap.c
··· 178 178 void **frames, int n, 179 179 struct xdp_cpumap_stats *stats) 180 180 { 181 - struct xdp_rxq_info rxq; 181 + struct xdp_rxq_info rxq = {}; 182 182 struct xdp_buff xdp; 183 183 int i, nframes = 0; 184 184
+3
kernel/bpf/verifier.c
··· 16715 16715 { 16716 16716 int i; 16717 16717 16718 + if (old->callback_depth > cur->callback_depth) 16719 + return false; 16720 + 16718 16721 for (i = 0; i < MAX_BPF_REG; i++) 16719 16722 if (!regsafe(env, &old->regs[i], &cur->regs[i], 16720 16723 &env->idmap_scratch, exact))
+4 -4
kernel/cgroup/cpuset.c
··· 2562 2562 update_partition_sd_lb(cs, old_prs); 2563 2563 out_free: 2564 2564 free_cpumasks(NULL, &tmp); 2565 - return 0; 2565 + return retval; 2566 2566 } 2567 2567 2568 2568 /** ··· 2598 2598 if (cpumask_equal(cs->exclusive_cpus, trialcs->exclusive_cpus)) 2599 2599 return 0; 2600 2600 2601 - if (alloc_cpumasks(NULL, &tmp)) 2602 - return -ENOMEM; 2603 - 2604 2601 if (*buf) 2605 2602 compute_effective_exclusive_cpumask(trialcs, NULL); 2606 2603 ··· 2611 2614 retval = validate_change(cs, trialcs); 2612 2615 if (retval) 2613 2616 return retval; 2617 + 2618 + if (alloc_cpumasks(NULL, &tmp)) 2619 + return -ENOMEM; 2614 2620 2615 2621 if (old_prs) { 2616 2622 if (cpumask_empty(trialcs->effective_xcpus)) {
+6 -8
kernel/trace/fprobe.c
··· 189 189 { 190 190 int size; 191 191 192 - if (num <= 0) 193 - return -EINVAL; 194 - 195 192 if (!fp->exit_handler) { 196 193 fp->rethook = NULL; 197 194 return 0; ··· 196 199 197 200 /* Initialize rethook if needed */ 198 201 if (fp->nr_maxactive) 199 - size = fp->nr_maxactive; 202 + num = fp->nr_maxactive; 200 203 else 201 - size = num * num_possible_cpus() * 2; 202 - if (size <= 0) 204 + num *= num_possible_cpus() * 2; 205 + if (num <= 0) 203 206 return -EINVAL; 204 207 208 + size = sizeof(struct fprobe_rethook_node) + fp->entry_data_size; 209 + 205 210 /* Initialize rethook */ 206 - fp->rethook = rethook_alloc((void *)fp, fprobe_exit_handler, 207 - sizeof(struct fprobe_rethook_node), size); 211 + fp->rethook = rethook_alloc((void *)fp, fprobe_exit_handler, size, num); 208 212 if (IS_ERR(fp->rethook)) 209 213 return PTR_ERR(fp->rethook); 210 214
-23
lib/iov_iter.c
··· 166 166 WARN_ON(direction & ~(READ | WRITE)); 167 167 *i = (struct iov_iter) { 168 168 .iter_type = ITER_IOVEC, 169 - .copy_mc = false, 170 169 .nofault = false, 171 170 .data_source = direction, 172 171 .__iov = iov, ··· 244 245 #endif /* CONFIG_ARCH_HAS_COPY_MC */ 245 246 246 247 static __always_inline 247 - size_t memcpy_from_iter_mc(void *iter_from, size_t progress, 248 - size_t len, void *to, void *priv2) 249 - { 250 - return copy_mc_to_kernel(to + progress, iter_from, len); 251 - } 252 - 253 - static size_t __copy_from_iter_mc(void *addr, size_t bytes, struct iov_iter *i) 254 - { 255 - if (unlikely(i->count < bytes)) 256 - bytes = i->count; 257 - if (unlikely(!bytes)) 258 - return 0; 259 - return iterate_bvec(i, bytes, addr, NULL, memcpy_from_iter_mc); 260 - } 261 - 262 - static __always_inline 263 248 size_t __copy_from_iter(void *addr, size_t bytes, struct iov_iter *i) 264 249 { 265 - if (unlikely(iov_iter_is_copy_mc(i))) 266 - return __copy_from_iter_mc(addr, bytes, i); 267 250 return iterate_and_advance(i, bytes, addr, 268 251 copy_from_user_iter, memcpy_from_iter); 269 252 } ··· 614 633 WARN_ON(direction & ~(READ | WRITE)); 615 634 *i = (struct iov_iter){ 616 635 .iter_type = ITER_KVEC, 617 - .copy_mc = false, 618 636 .data_source = direction, 619 637 .kvec = kvec, 620 638 .nr_segs = nr_segs, ··· 630 650 WARN_ON(direction & ~(READ | WRITE)); 631 651 *i = (struct iov_iter){ 632 652 .iter_type = ITER_BVEC, 633 - .copy_mc = false, 634 653 .data_source = direction, 635 654 .bvec = bvec, 636 655 .nr_segs = nr_segs, ··· 658 679 BUG_ON(direction & ~1); 659 680 *i = (struct iov_iter) { 660 681 .iter_type = ITER_XARRAY, 661 - .copy_mc = false, 662 682 .data_source = direction, 663 683 .xarray = xarray, 664 684 .xarray_start = start, ··· 681 703 BUG_ON(direction != READ); 682 704 *i = (struct iov_iter){ 683 705 .iter_type = ITER_DISCARD, 684 - .copy_mc = false, 685 706 .data_source = false, 686 707 .count = count, 687 708 .iov_offset = 0
-22
net/core/dev.c
··· 9122 9122 } 9123 9123 EXPORT_SYMBOL(netdev_port_same_parent_id); 9124 9124 9125 - static void netdev_dpll_pin_assign(struct net_device *dev, struct dpll_pin *dpll_pin) 9126 - { 9127 - #if IS_ENABLED(CONFIG_DPLL) 9128 - rtnl_lock(); 9129 - rcu_assign_pointer(dev->dpll_pin, dpll_pin); 9130 - rtnl_unlock(); 9131 - #endif 9132 - } 9133 - 9134 - void netdev_dpll_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin) 9135 - { 9136 - WARN_ON(!dpll_pin); 9137 - netdev_dpll_pin_assign(dev, dpll_pin); 9138 - } 9139 - EXPORT_SYMBOL(netdev_dpll_pin_set); 9140 - 9141 - void netdev_dpll_pin_clear(struct net_device *dev) 9142 - { 9143 - netdev_dpll_pin_assign(dev, NULL); 9144 - } 9145 - EXPORT_SYMBOL(netdev_dpll_pin_clear); 9146 - 9147 9125 /** 9148 9126 * dev_change_proto_down - set carrier according to proto_down. 9149 9127 *
+2 -1
net/core/page_pool_user.c
··· 94 94 state->pp_id = pool->user.id; 95 95 err = fill(skb, pool, info); 96 96 if (err) 97 - break; 97 + goto out; 98 98 } 99 99 100 100 state->pp_id = 0; 101 101 } 102 + out: 102 103 mutex_unlock(&page_pools_lock); 103 104 rtnl_unlock(); 104 105
+2 -2
net/core/rtnetlink.c
··· 1056 1056 { 1057 1057 size_t size = nla_total_size(0); /* nest IFLA_DPLL_PIN */ 1058 1058 1059 - size += dpll_msg_pin_handle_size(netdev_dpll_pin(dev)); 1059 + size += dpll_netdev_pin_handle_size(dev); 1060 1060 1061 1061 return size; 1062 1062 } ··· 1793 1793 if (!dpll_pin_nest) 1794 1794 return -EMSGSIZE; 1795 1795 1796 - ret = dpll_msg_add_pin_handle(skb, netdev_dpll_pin(dev)); 1796 + ret = dpll_netdev_add_pin_handle(skb, dev); 1797 1797 if (ret < 0) 1798 1798 goto nest_cancel; 1799 1799
+7 -14
net/ipv6/route.c
··· 5343 5343 err_nh = NULL; 5344 5344 list_for_each_entry(nh, &rt6_nh_list, next) { 5345 5345 err = __ip6_ins_rt(nh->fib6_info, info, extack); 5346 - fib6_info_release(nh->fib6_info); 5347 5346 5348 - if (!err) { 5349 - /* save reference to last route successfully inserted */ 5350 - rt_last = nh->fib6_info; 5351 - 5352 - /* save reference to first route for notification */ 5353 - if (!rt_notif) 5354 - rt_notif = nh->fib6_info; 5355 - } 5356 - 5357 - /* nh->fib6_info is used or freed at this point, reset to NULL*/ 5358 - nh->fib6_info = NULL; 5359 5347 if (err) { 5360 5348 if (replace && nhn) 5361 5349 NL_SET_ERR_MSG_MOD(extack, ··· 5351 5363 err_nh = nh; 5352 5364 goto add_errout; 5353 5365 } 5366 + /* save reference to last route successfully inserted */ 5367 + rt_last = nh->fib6_info; 5368 + 5369 + /* save reference to first route for notification */ 5370 + if (!rt_notif) 5371 + rt_notif = nh->fib6_info; 5354 5372 5355 5373 /* Because each route is added like a single route we remove 5356 5374 * these flags after the first nexthop: if there is a collision, ··· 5417 5423 5418 5424 cleanup: 5419 5425 list_for_each_entry_safe(nh, nh_safe, &rt6_nh_list, next) { 5420 - if (nh->fib6_info) 5421 - fib6_info_release(nh->fib6_info); 5426 + fib6_info_release(nh->fib6_info); 5422 5427 list_del(&nh->next); 5423 5428 kfree(nh); 5424 5429 }
+4
net/netfilter/nf_conntrack_h323_asn1.c
··· 533 533 /* Get fields bitmap */ 534 534 if (nf_h323_error_boundary(bs, 0, f->sz)) 535 535 return H323_ERROR_BOUND; 536 + if (f->sz > 32) 537 + return H323_ERROR_RANGE; 536 538 bmp = get_bitmap(bs, f->sz); 537 539 if (base) 538 540 *(unsigned int *)base = bmp; ··· 591 589 bmp2_len = get_bits(bs, 7) + 1; 592 590 if (nf_h323_error_boundary(bs, 0, bmp2_len)) 593 591 return H323_ERROR_BOUND; 592 + if (bmp2_len > 32) 593 + return H323_ERROR_RANGE; 594 594 bmp2 = get_bitmap(bs, bmp2_len); 595 595 bmp |= bmp2 >> f->sz; 596 596 if (base)
+7
net/netfilter/nf_tables_api.c
··· 5008 5008 if ((flags & (NFT_SET_EVAL | NFT_SET_OBJECT)) == 5009 5009 (NFT_SET_EVAL | NFT_SET_OBJECT)) 5010 5010 return -EOPNOTSUPP; 5011 + if ((flags & (NFT_SET_ANONYMOUS | NFT_SET_TIMEOUT | NFT_SET_EVAL)) == 5012 + (NFT_SET_ANONYMOUS | NFT_SET_TIMEOUT)) 5013 + return -EOPNOTSUPP; 5014 + if ((flags & (NFT_SET_CONSTANT | NFT_SET_TIMEOUT)) == 5015 + (NFT_SET_CONSTANT | NFT_SET_TIMEOUT)) 5016 + return -EOPNOTSUPP; 5011 5017 } 5012 5018 5013 5019 desc.dtype = 0; ··· 5437 5431 5438 5432 if (list_empty(&set->bindings) && nft_set_is_anonymous(set)) { 5439 5433 list_del_rcu(&set->list); 5434 + set->dead = 1; 5440 5435 if (event) 5441 5436 nf_tables_set_notify(ctx, set, NFT_MSG_DELSET, 5442 5437 GFP_KERNEL);
+5 -6
net/netfilter/nft_ct.c
··· 1256 1256 switch (priv->l3num) { 1257 1257 case NFPROTO_IPV4: 1258 1258 case NFPROTO_IPV6: 1259 - if (priv->l3num != ctx->family) 1260 - return -EINVAL; 1259 + if (priv->l3num == ctx->family || ctx->family == NFPROTO_INET) 1260 + break; 1261 1261 1262 - fallthrough; 1263 - case NFPROTO_INET: 1264 - break; 1262 + return -EINVAL; 1263 + case NFPROTO_INET: /* tuple.src.l3num supports NFPROTO_IPV4/6 only */ 1265 1264 default: 1266 - return -EOPNOTSUPP; 1265 + return -EAFNOSUPPORT; 1267 1266 } 1268 1267 1269 1268 priv->l4proto = nla_get_u8(tb[NFTA_CT_EXPECT_L4PROTO]);
+7 -7
net/netrom/af_netrom.c
··· 453 453 nr_init_timers(sk); 454 454 455 455 nr->t1 = 456 - msecs_to_jiffies(sysctl_netrom_transport_timeout); 456 + msecs_to_jiffies(READ_ONCE(sysctl_netrom_transport_timeout)); 457 457 nr->t2 = 458 - msecs_to_jiffies(sysctl_netrom_transport_acknowledge_delay); 458 + msecs_to_jiffies(READ_ONCE(sysctl_netrom_transport_acknowledge_delay)); 459 459 nr->n2 = 460 - msecs_to_jiffies(sysctl_netrom_transport_maximum_tries); 460 + msecs_to_jiffies(READ_ONCE(sysctl_netrom_transport_maximum_tries)); 461 461 nr->t4 = 462 - msecs_to_jiffies(sysctl_netrom_transport_busy_delay); 462 + msecs_to_jiffies(READ_ONCE(sysctl_netrom_transport_busy_delay)); 463 463 nr->idle = 464 - msecs_to_jiffies(sysctl_netrom_transport_no_activity_timeout); 465 - nr->window = sysctl_netrom_transport_requested_window_size; 464 + msecs_to_jiffies(READ_ONCE(sysctl_netrom_transport_no_activity_timeout)); 465 + nr->window = READ_ONCE(sysctl_netrom_transport_requested_window_size); 466 466 467 467 nr->bpqext = 1; 468 468 nr->state = NR_STATE_0; ··· 954 954 * G8PZT's Xrouter which is sending packets with command type 7 955 955 * as an extension of the protocol. 956 956 */ 957 - if (sysctl_netrom_reset_circuit && 957 + if (READ_ONCE(sysctl_netrom_reset_circuit) && 958 958 (frametype != NR_RESET || flags != 0)) 959 959 nr_transmit_reset(skb, 1); 960 960
+1 -1
net/netrom/nr_dev.c
··· 81 81 buff[6] |= AX25_SSSID_SPARE; 82 82 buff += AX25_ADDR_LEN; 83 83 84 - *buff++ = sysctl_netrom_network_ttl_initialiser; 84 + *buff++ = READ_ONCE(sysctl_netrom_network_ttl_initialiser); 85 85 86 86 *buff++ = NR_PROTO_IP; 87 87 *buff++ = NR_PROTO_IP;
+3 -3
net/netrom/nr_in.c
··· 97 97 break; 98 98 99 99 case NR_RESET: 100 - if (sysctl_netrom_reset_circuit) 100 + if (READ_ONCE(sysctl_netrom_reset_circuit)) 101 101 nr_disconnect(sk, ECONNRESET); 102 102 break; 103 103 ··· 128 128 break; 129 129 130 130 case NR_RESET: 131 - if (sysctl_netrom_reset_circuit) 131 + if (READ_ONCE(sysctl_netrom_reset_circuit)) 132 132 nr_disconnect(sk, ECONNRESET); 133 133 break; 134 134 ··· 262 262 break; 263 263 264 264 case NR_RESET: 265 - if (sysctl_netrom_reset_circuit) 265 + if (READ_ONCE(sysctl_netrom_reset_circuit)) 266 266 nr_disconnect(sk, ECONNRESET); 267 267 break; 268 268
+1 -1
net/netrom/nr_out.c
··· 204 204 dptr[6] |= AX25_SSSID_SPARE; 205 205 dptr += AX25_ADDR_LEN; 206 206 207 - *dptr++ = sysctl_netrom_network_ttl_initialiser; 207 + *dptr++ = READ_ONCE(sysctl_netrom_network_ttl_initialiser); 208 208 209 209 if (!nr_route_frame(skb, NULL)) { 210 210 kfree_skb(skb);
+4 -4
net/netrom/nr_route.c
··· 153 153 nr_neigh->digipeat = NULL; 154 154 nr_neigh->ax25 = NULL; 155 155 nr_neigh->dev = dev; 156 - nr_neigh->quality = sysctl_netrom_default_path_quality; 156 + nr_neigh->quality = READ_ONCE(sysctl_netrom_default_path_quality); 157 157 nr_neigh->locked = 0; 158 158 nr_neigh->count = 0; 159 159 nr_neigh->number = nr_neigh_no++; ··· 728 728 nr_neigh->ax25 = NULL; 729 729 ax25_cb_put(ax25); 730 730 731 - if (++nr_neigh->failed < sysctl_netrom_link_fails_count) { 731 + if (++nr_neigh->failed < READ_ONCE(sysctl_netrom_link_fails_count)) { 732 732 nr_neigh_put(nr_neigh); 733 733 return; 734 734 } ··· 766 766 if (ax25 != NULL) { 767 767 ret = nr_add_node(nr_src, "", &ax25->dest_addr, ax25->digipeat, 768 768 ax25->ax25_dev->dev, 0, 769 - sysctl_netrom_obsolescence_count_initialiser); 769 + READ_ONCE(sysctl_netrom_obsolescence_count_initialiser)); 770 770 if (ret) 771 771 return ret; 772 772 } ··· 780 780 return ret; 781 781 } 782 782 783 - if (!sysctl_netrom_routing_control && ax25 != NULL) 783 + if (!READ_ONCE(sysctl_netrom_routing_control) && ax25 != NULL) 784 784 return 0; 785 785 786 786 /* Its Time-To-Live has expired */
+3 -2
net/netrom/nr_subr.c
··· 182 182 *dptr++ = nr->my_id; 183 183 *dptr++ = frametype; 184 184 *dptr++ = nr->window; 185 - if (nr->bpqext) *dptr++ = sysctl_netrom_network_ttl_initialiser; 185 + if (nr->bpqext) 186 + *dptr++ = READ_ONCE(sysctl_netrom_network_ttl_initialiser); 186 187 break; 187 188 188 189 case NR_DISCREQ: ··· 237 236 dptr[6] |= AX25_SSSID_SPARE; 238 237 dptr += AX25_ADDR_LEN; 239 238 240 - *dptr++ = sysctl_netrom_network_ttl_initialiser; 239 + *dptr++ = READ_ONCE(sysctl_netrom_network_ttl_initialiser); 241 240 242 241 if (mine) { 243 242 *dptr++ = 0;
+3
net/rds/rdma.c
··· 301 301 kfree(sg); 302 302 } 303 303 ret = PTR_ERR(trans_private); 304 + /* Trigger connection so that its ready for the next retry */ 305 + if (ret == -ENODEV) 306 + rds_conn_connect_if_down(cp->cp_conn); 304 307 goto out; 305 308 } 306 309
+1 -5
net/rds/send.c
··· 1313 1313 1314 1314 /* Parse any control messages the user may have included. */ 1315 1315 ret = rds_cmsg_send(rs, rm, msg, &allocated_mr, &vct); 1316 - if (ret) { 1317 - /* Trigger connection so that its ready for the next retry */ 1318 - if (ret == -EAGAIN) 1319 - rds_conn_connect_if_down(conn); 1316 + if (ret) 1320 1317 goto out; 1321 - } 1322 1318 1323 1319 if (rm->rdma.op_active && !conn->c_trans->xmit_rdma) { 1324 1320 printk_ratelimited(KERN_NOTICE "rdma_op %p conn xmit_rdma %p\n",
+1 -1
net/xfrm/xfrm_device.c
··· 407 407 struct xfrm_dst *xdst = (struct xfrm_dst *)dst; 408 408 struct net_device *dev = x->xso.dev; 409 409 410 - if (!x->type_offload || x->encap) 410 + if (!x->type_offload) 411 411 return false; 412 412 413 413 if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET ||
+5 -1
net/xfrm/xfrm_output.c
··· 704 704 { 705 705 struct net *net = dev_net(skb_dst(skb)->dev); 706 706 struct xfrm_state *x = skb_dst(skb)->xfrm; 707 + int family; 707 708 int err; 708 709 709 - switch (x->outer_mode.family) { 710 + family = (x->xso.type != XFRM_DEV_OFFLOAD_PACKET) ? x->outer_mode.family 711 + : skb_dst(skb)->ops->family; 712 + 713 + switch (family) { 710 714 case AF_INET: 711 715 memset(IPCB(skb), 0, sizeof(*IPCB(skb))); 712 716 IPCB(skb)->flags |= IPSKB_XFRM_TRANSFORMED;
+4 -2
net/xfrm/xfrm_policy.c
··· 2694 2694 if (xfrm[i]->props.smark.v || xfrm[i]->props.smark.m) 2695 2695 mark = xfrm_smark_get(fl->flowi_mark, xfrm[i]); 2696 2696 2697 - family = xfrm[i]->props.family; 2697 + if (xfrm[i]->xso.type != XFRM_DEV_OFFLOAD_PACKET) 2698 + family = xfrm[i]->props.family; 2699 + 2698 2700 oif = fl->flowi_oif ? : fl->flowi_l3mdev; 2699 2701 dst = xfrm_dst_lookup(xfrm[i], tos, oif, 2700 2702 &saddr, &daddr, family, mark); ··· 3418 3416 } 3419 3417 3420 3418 fl4->flowi4_proto = flkeys->basic.ip_proto; 3421 - fl4->flowi4_tos = flkeys->ip.tos; 3419 + fl4->flowi4_tos = flkeys->ip.tos & ~INET_ECN_MASK; 3422 3420 } 3423 3421 3424 3422 #if IS_ENABLED(CONFIG_IPV6)
+3
net/xfrm/xfrm_user.c
··· 2017 2017 if (xp->xfrm_nr == 0) 2018 2018 return 0; 2019 2019 2020 + if (xp->xfrm_nr > XFRM_MAX_DEPTH) 2021 + return -ENOBUFS; 2022 + 2020 2023 for (i = 0; i < xp->xfrm_nr; i++) { 2021 2024 struct xfrm_user_tmpl *up = &vec[i]; 2022 2025 struct xfrm_tmpl *kp = &xp->xfrm_vec[i];
+1 -1
scripts/Kconfig.include
··· 33 33 34 34 # $(as-instr,<instr>) 35 35 # Return y if the assembler supports <instr>, n otherwise 36 - as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler-with-cpp -o /dev/null -) 36 + as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -Wa$(comma)--fatal-warnings -c -x assembler-with-cpp -o /dev/null -) 37 37 38 38 # check if $(CC) and $(LD) exist 39 39 $(error-if,$(failure,command -v $(CC)),C compiler '$(CC)' not found)
+1 -1
scripts/Makefile.compiler
··· 38 38 # Usage: aflags-y += $(call as-instr,instr,option1,option2) 39 39 40 40 as-instr = $(call try-run,\ 41 - printf "%b\n" "$(1)" | $(CC) -Werror $(CLANG_FLAGS) $(KBUILD_AFLAGS) -c -x assembler-with-cpp -o "$$TMP" -,$(2),$(3)) 41 + printf "%b\n" "$(1)" | $(CC) -Werror $(CLANG_FLAGS) $(KBUILD_AFLAGS) -Wa$(comma)--fatal-warnings -c -x assembler-with-cpp -o "$$TMP" -,$(2),$(3)) 42 42 43 43 # __cc-option 44 44 # Usage: MY_CFLAGS += $(call __cc-option,$(CC),$(MY_CFLAGS),-march=winchip-c6,-march=i586)
+2 -1
security/integrity/digsig.c
··· 179 179 KEY_ALLOC_NOT_IN_QUOTA); 180 180 if (IS_ERR(key)) { 181 181 rc = PTR_ERR(key); 182 - pr_err("Problem loading X.509 certificate %d\n", rc); 182 + if (id != INTEGRITY_KEYRING_MACHINE) 183 + pr_err("Problem loading X.509 certificate %d\n", rc); 183 184 } else { 184 185 pr_notice("Loaded X.509 cert '%s'\n", 185 186 key_ref_to_ptr(key)->description);
+2 -1
security/tomoyo/common.c
··· 2649 2649 { 2650 2650 int error = buffer_len; 2651 2651 size_t avail_len = buffer_len; 2652 - char *cp0 = head->write_buf; 2652 + char *cp0; 2653 2653 int idx; 2654 2654 2655 2655 if (!head->write) 2656 2656 return -EINVAL; 2657 2657 if (mutex_lock_interruptible(&head->io_sem)) 2658 2658 return -EINTR; 2659 + cp0 = head->write_buf; 2659 2660 head->read_user_buf_avail = 0; 2660 2661 idx = tomoyo_read_lock(); 2661 2662 /* Read a line and dispatch it to the policy handler. */
-1
sound/core/Makefile
··· 32 32 snd-ump-$(CONFIG_SND_UMP_LEGACY_RAWMIDI) += ump_convert.o 33 33 snd-timer-objs := timer.o 34 34 snd-hrtimer-objs := hrtimer.o 35 - snd-rtctimer-objs := rtctimer.o 36 35 snd-hwdep-objs := hwdep.o 37 36 snd-seq-device-objs := seq_device.o 38 37
+5
sound/core/pcm_native.c
··· 486 486 i = hw_param_interval_c(params, SNDRV_PCM_HW_PARAM_SAMPLE_BITS); 487 487 if (snd_interval_single(i)) 488 488 params->msbits = snd_interval_value(i); 489 + m = hw_param_mask_c(params, SNDRV_PCM_HW_PARAM_FORMAT); 490 + if (snd_mask_single(m)) { 491 + snd_pcm_format_t format = (__force snd_pcm_format_t)snd_mask_min(m); 492 + params->msbits = snd_pcm_format_width(format); 493 + } 489 494 } 490 495 491 496 if (params->msbits) {
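The pcm_native.c hunk above fills `msbits` from the format's bit width once the format mask is pinned to a single value. A sketch of the width lookup with a tiny hand-written table (the widths mirror common ALSA formats; in the kernel this is `snd_pcm_format_width()`, not the switch below):

```c
#include <assert.h>

enum fmt { FMT_S16_LE, FMT_S24_LE, FMT_S32_LE };

/* Meaningful bits per sample; S24_LE carries 24 valid bits in a
 * 32-bit container, so its width is 24, not 32. */
static int format_width(enum fmt f)
{
	switch (f) {
	case FMT_S16_LE: return 16;
	case FMT_S24_LE: return 24;
	case FMT_S32_LE: return 32;
	}
	return -1;
}
```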
+2 -2
sound/core/ump.c
··· 985 985 struct snd_ump_endpoint *ump = substream->rmidi->private_data; 986 986 int dir = substream->stream; 987 987 int group = ump->legacy_mapping[substream->number]; 988 - int err; 988 + int err = 0; 989 989 990 990 mutex_lock(&ump->open_mutex); 991 991 if (ump->legacy_substreams[dir][group]) { ··· 1009 1009 spin_unlock_irq(&ump->legacy_locks[dir]); 1010 1010 unlock: 1011 1011 mutex_unlock(&ump->open_mutex); 1012 - return 0; 1012 + return err; 1013 1013 } 1014 1014 1015 1015 static int snd_ump_legacy_close(struct snd_rawmidi_substream *substream)
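The ump.c hunk above fixes a classic pattern: an error path stored a code in `err`, but the function's final statement returned a literal `0`, silently discarding the failure. A minimal sketch of the corrected shape (function name and flag are illustrative, not the kernel API):

```c
#include <assert.h>
#include <errno.h>

/* Initialize err to 0, let error paths set it, and return err rather
 * than a hard-coded 0 so failures actually propagate to the caller. */
static int open_substream(int already_open)
{
	int err = 0;

	if (already_open)
		err = -EBUSY;   /* error path records the failure */

	/* ... setup work skipped when err is set ... */

	return err;             /* before the fix, this returned 0 */
}
```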
+1 -1
sound/firewire/amdtp-stream.c
··· 951 951 // to the reason. 952 952 unsigned int safe_cycle = increment_ohci_cycle_count(next_cycle, 953 953 IR_JUMBO_PAYLOAD_MAX_SKIP_CYCLES); 954 - lost = (compare_ohci_cycle_count(safe_cycle, cycle) > 0); 954 + lost = (compare_ohci_cycle_count(safe_cycle, cycle) < 0); 955 955 } 956 956 if (lost) { 957 957 dev_err(&s->unit->device, "Detect discontinuity of cycle: %d %d\n",
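The amdtp-stream.c hunk above flips the sign of a circular cycle-count comparison. The kernel's `compare_ohci_cycle_count()` is not shown in this hunk; the sketch below is a generic circular compare over a wrapping counter space, illustrating why the direction of the test matters (the modulus of 8 seconds x 8000 cycles matches OHCI isochronous cycle numbering):

```c
#include <assert.h>

#define CYCLE_MODULUS (8 * 8000)  /* OHCI: 3-bit seconds x 8000 cycles */

/* Compare two cycle counts in a circular space: lhs is "before" rhs
 * when the forward distance from lhs to rhs is less than half the
 * modulus. Returns <0, 0, >0 like the kernel helper. */
static int compare_cycles(int lhs, int rhs)
{
	int fwd = ((rhs - lhs) % CYCLE_MODULUS + CYCLE_MODULUS) % CYCLE_MODULUS;

	if (fwd == 0)
		return 0;
	return fwd < CYCLE_MODULUS / 2 ? -1 : 1;
}
```

With such a helper, `compare(safe_cycle, cycle) < 0` asks "is `cycle` past `safe_cycle`?", which is the condition the fix restores for declaring a packet lost.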
+32 -2
sound/pci/hda/patch_realtek.c
··· 3684 3684 int i, val; 3685 3685 int coef38, coef0d, coef36; 3686 3686 3687 + alc_write_coefex_idx(codec, 0x58, 0x00, 0x1888); /* write default value */ 3687 3688 alc_update_coef_idx(codec, 0x4a, 1<<15, 1<<15); /* Reset HP JD */ 3688 3689 coef38 = alc_read_coef_idx(codec, 0x38); /* Amp control */ 3689 3690 coef0d = alc_read_coef_idx(codec, 0x0d); /* Digital Misc control */ ··· 7445 7444 ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE, 7446 7445 ALC287_FIXUP_YOGA7_14ITL_SPEAKERS, 7447 7446 ALC298_FIXUP_LENOVO_C940_DUET7, 7447 + ALC287_FIXUP_LENOVO_14IRP8_DUETITL, 7448 7448 ALC287_FIXUP_13S_GEN2_SPEAKERS, 7449 7449 ALC256_FIXUP_SET_COEF_DEFAULTS, 7450 7450 ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE, ··· 7494 7492 id = ALC298_FIXUP_LENOVO_SPK_VOLUME; /* C940 */ 7495 7493 else 7496 7494 id = ALC287_FIXUP_YOGA7_14ITL_SPEAKERS; /* Duet 7 */ 7495 + __snd_hda_apply_fixup(codec, id, action, 0); 7496 + } 7497 + 7498 + /* A special fixup for Lenovo Slim/Yoga Pro 9 14IRP8 and Yoga DuetITL 2021; 7499 + * 14IRP8 PCI SSID will mistakenly be matched with the DuetITL codec SSID, 7500 + * so we need to apply a different fixup in this case. The only DuetITL codec 7501 + * SSID reported so far is the 17aa:3802 while the 14IRP8 has the 17aa:38be 7502 + * and 17aa:38bf. If it weren't for the PCI SSID, the 14IRP8 models would 7503 + * have matched correctly by their codecs. 
7504 + */ 7505 + static void alc287_fixup_lenovo_14irp8_duetitl(struct hda_codec *codec, 7506 + const struct hda_fixup *fix, 7507 + int action) 7508 + { 7509 + int id; 7510 + 7511 + if (codec->core.subsystem_id == 0x17aa3802) 7512 + id = ALC287_FIXUP_YOGA7_14ITL_SPEAKERS; /* DuetITL */ 7513 + else 7514 + id = ALC287_FIXUP_TAS2781_I2C; /* 14IRP8 */ 7497 7515 __snd_hda_apply_fixup(codec, id, action, 0); 7498 7516 } 7499 7517 ··· 9401 9379 .type = HDA_FIXUP_FUNC, 9402 9380 .v.func = alc298_fixup_lenovo_c940_duet7, 9403 9381 }, 9382 + [ALC287_FIXUP_LENOVO_14IRP8_DUETITL] = { 9383 + .type = HDA_FIXUP_FUNC, 9384 + .v.func = alc287_fixup_lenovo_14irp8_duetitl, 9385 + }, 9404 9386 [ALC287_FIXUP_13S_GEN2_SPEAKERS] = { 9405 9387 .type = HDA_FIXUP_VERBS, 9406 9388 .v.verbs = (const struct hda_verb[]) { ··· 9611 9585 .type = HDA_FIXUP_FUNC, 9612 9586 .v.func = tas2781_fixup_i2c, 9613 9587 .chained = true, 9614 - .chain_id = ALC269_FIXUP_THINKPAD_ACPI, 9588 + .chain_id = ALC285_FIXUP_THINKPAD_HEADSET_JACK, 9615 9589 }, 9616 9590 [ALC287_FIXUP_YOGA7_14ARB7_I2C] = { 9617 9591 .type = HDA_FIXUP_FUNC, ··· 9772 9746 SND_PCI_QUIRK(0x1028, 0x0c1c, "Dell Precision 3540", ALC236_FIXUP_DELL_DUAL_CODECS), 9773 9747 SND_PCI_QUIRK(0x1028, 0x0c1d, "Dell Precision 3440", ALC236_FIXUP_DELL_DUAL_CODECS), 9774 9748 SND_PCI_QUIRK(0x1028, 0x0c1e, "Dell Precision 3540", ALC236_FIXUP_DELL_DUAL_CODECS), 9749 + SND_PCI_QUIRK(0x1028, 0x0c28, "Dell Inspiron 16 Plus 7630", ALC295_FIXUP_DELL_INSPIRON_TOP_SPEAKERS), 9775 9750 SND_PCI_QUIRK(0x1028, 0x0c4d, "Dell", ALC287_FIXUP_CS35L41_I2C_4), 9776 9751 SND_PCI_QUIRK(0x1028, 0x0cbd, "Dell Oasis 13 CS MTL-U", ALC289_FIXUP_DELL_CS35L41_SPI_2), 9777 9752 SND_PCI_QUIRK(0x1028, 0x0cbe, "Dell Oasis 13 2-IN-1 MTL-U", ALC289_FIXUP_DELL_CS35L41_SPI_2), ··· 9929 9902 SND_PCI_QUIRK(0x103c, 0x8973, "HP EliteBook 860 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9930 9903 SND_PCI_QUIRK(0x103c, 0x8974, "HP EliteBook 840 Aero G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 
9931 9904 SND_PCI_QUIRK(0x103c, 0x8975, "HP EliteBook x360 840 Aero G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9905 + SND_PCI_QUIRK(0x103c, 0x897d, "HP mt440 Mobile Thin Client U74", ALC236_FIXUP_HP_GPIO_LED), 9932 9906 SND_PCI_QUIRK(0x103c, 0x8981, "HP Elite Dragonfly G3", ALC245_FIXUP_CS35L41_SPI_4), 9933 9907 SND_PCI_QUIRK(0x103c, 0x898e, "HP EliteBook 835 G9", ALC287_FIXUP_CS35L41_I2C_2), 9934 9908 SND_PCI_QUIRK(0x103c, 0x898f, "HP EliteBook 835 G9", ALC287_FIXUP_CS35L41_I2C_2), ··· 9955 9927 SND_PCI_QUIRK(0x103c, 0x8aa3, "HP ProBook 450 G9 (MB 8AA1)", ALC236_FIXUP_HP_GPIO_LED), 9956 9928 SND_PCI_QUIRK(0x103c, 0x8aa8, "HP EliteBook 640 G9 (MB 8AA6)", ALC236_FIXUP_HP_GPIO_LED), 9957 9929 SND_PCI_QUIRK(0x103c, 0x8aab, "HP EliteBook 650 G9 (MB 8AA9)", ALC236_FIXUP_HP_GPIO_LED), 9930 + SND_PCI_QUIRK(0x103c, 0x8ab9, "HP EliteBook 840 G8 (MB 8AB8)", ALC285_FIXUP_HP_GPIO_LED), 9958 9931 SND_PCI_QUIRK(0x103c, 0x8abb, "HP ZBook Firefly 14 G9", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9959 9932 SND_PCI_QUIRK(0x103c, 0x8ad1, "HP EliteBook 840 14 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9960 9933 SND_PCI_QUIRK(0x103c, 0x8ad2, "HP EliteBook 860 16 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9961 9934 SND_PCI_QUIRK(0x103c, 0x8b0f, "HP Elite mt645 G7 Mobile Thin Client U81", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 9962 9935 SND_PCI_QUIRK(0x103c, 0x8b2f, "HP 255 15.6 inch G10 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2), 9936 + SND_PCI_QUIRK(0x103c, 0x8b3f, "HP mt440 Mobile Thin Client U91", ALC236_FIXUP_HP_GPIO_LED), 9963 9937 SND_PCI_QUIRK(0x103c, 0x8b42, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9964 9938 SND_PCI_QUIRK(0x103c, 0x8b43, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9965 9939 SND_PCI_QUIRK(0x103c, 0x8b44, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), ··· 10277 10247 SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340), 10278 10248 SND_PCI_QUIRK(0x17aa, 0x334b, "Lenovo ThinkCentre M70 Gen5", ALC283_FIXUP_HEADSET_MIC), 10279 10249 SND_PCI_QUIRK(0x17aa, 0x3801, "Lenovo Yoga9 14IAP7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN), 10280 - SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), 10250 + SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga Pro 9 14IRP8 / DuetITL 2021", ALC287_FIXUP_LENOVO_14IRP8_DUETITL), 10281 10251 SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS), 10282 10252 SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940 / Yoga Duet 7", ALC298_FIXUP_LENOVO_C940_DUET7), 10283 10253 SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
+14
sound/soc/amd/yc/acp6x-mach.c
··· 203 203 .driver_data = &acp6x_card, 204 204 .matches = { 205 205 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 206 + DMI_MATCH(DMI_PRODUCT_NAME, "21J2"), 207 + } 208 + }, 209 + { 210 + .driver_data = &acp6x_card, 211 + .matches = { 212 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 213 + DMI_MATCH(DMI_PRODUCT_NAME, "21J0"), 214 + } 215 + }, 216 + { 217 + .driver_data = &acp6x_card, 218 + .matches = { 219 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 206 220 DMI_MATCH(DMI_PRODUCT_NAME, "21J5"), 207 221 } 208 222 },
+1
sound/soc/amd/yc/pci-acp6x.c
··· 162 162 /* Yellow Carp device check */ 163 163 switch (pci->revision) { 164 164 case 0x60: 165 + case 0x63: 165 166 case 0x6f: 166 167 break; 167 168 default:
+1 -1
sound/soc/codecs/cs35l45.c
··· 184 184 else 185 185 snprintf(name, SNDRV_CTL_ELEM_ID_NAME_MAXLEN, "%s", ctl_name); 186 186 187 - kcontrol = snd_soc_card_get_kcontrol(component->card, name); 187 + kcontrol = snd_soc_card_get_kcontrol_locked(component->card, name); 188 188 if (!kcontrol) { 189 189 dev_err(component->dev, "Can't find kcontrol %s\n", name); 190 190 return -EINVAL;
+1
sound/soc/codecs/cs35l56-shared.c
··· 335 335 EXPORT_SYMBOL_NS_GPL(cs35l56_wait_min_reset_pulse, SND_SOC_CS35L56_SHARED); 336 336 337 337 static const struct reg_sequence cs35l56_system_reset_seq[] = { 338 + REG_SEQ0(CS35L56_DSP1_HALO_STATE, 0), 338 339 REG_SEQ0(CS35L56_DSP_VIRTUAL1_MBOX_1, CS35L56_MBOX_CMD_SYSTEM_RESET), 339 340 }; 340 341
+1 -1
sound/soc/codecs/cs35l56.c
··· 114 114 name = full_name; 115 115 } 116 116 117 - kcontrol = snd_soc_card_get_kcontrol(dapm->card, name); 117 + kcontrol = snd_soc_card_get_kcontrol_locked(dapm->card, name); 118 118 if (!kcontrol) { 119 119 dev_warn(cs35l56->base.dev, "Could not find control %s\n", name); 120 120 continue;
+11 -1
sound/soc/fsl/fsl_xcvr.c
··· 174 174 struct snd_kcontrol *kctl; 175 175 bool enabled; 176 176 177 - kctl = snd_soc_card_get_kcontrol(card, name); 177 + lockdep_assert_held(&card->snd_card->controls_rwsem); 178 + 179 + kctl = snd_soc_card_get_kcontrol_locked(card, name); 178 180 if (kctl == NULL) 179 181 return -ENOENT; 180 182 ··· 578 576 xcvr->streams |= BIT(substream->stream); 579 577 580 578 if (!xcvr->soc_data->spdif_only) { 579 + struct snd_soc_card *card = dai->component->card; 580 + 581 581 /* Disable XCVR controls if there is stream started */ 582 + down_read(&card->snd_card->controls_rwsem); 582 583 fsl_xcvr_activate_ctl(dai, fsl_xcvr_mode_kctl.name, false); 583 584 fsl_xcvr_activate_ctl(dai, fsl_xcvr_arc_mode_kctl.name, false); 584 585 fsl_xcvr_activate_ctl(dai, fsl_xcvr_earc_capds_kctl.name, false); 586 + up_read(&card->snd_card->controls_rwsem); 585 587 } 586 588 587 589 return 0; ··· 604 598 /* Enable XCVR controls if there is no stream started */ 605 599 if (!xcvr->streams) { 606 600 if (!xcvr->soc_data->spdif_only) { 601 + struct snd_soc_card *card = dai->component->card; 602 + 603 + down_read(&card->snd_card->controls_rwsem); 607 604 fsl_xcvr_activate_ctl(dai, fsl_xcvr_mode_kctl.name, true); 608 605 fsl_xcvr_activate_ctl(dai, fsl_xcvr_arc_mode_kctl.name, 609 606 (xcvr->mode == FSL_XCVR_MODE_ARC)); 610 607 fsl_xcvr_activate_ctl(dai, fsl_xcvr_earc_capds_kctl.name, 611 608 (xcvr->mode == FSL_XCVR_MODE_EARC)); 609 + up_read(&card->snd_card->controls_rwsem); 612 610 } 613 611 ret = regmap_update_bits(xcvr->regmap, FSL_XCVR_EXT_IER0, 614 612 FSL_XCVR_IRQ_EARC_ALL, 0);
+1 -1
sound/soc/qcom/lpass-cdc-dma.c
··· 259 259 int cmd, struct snd_soc_dai *dai) 260 260 { 261 261 struct snd_soc_pcm_runtime *soc_runtime = snd_soc_substream_to_rtd(substream); 262 - struct lpaif_dmactl *dmactl; 262 + struct lpaif_dmactl *dmactl = NULL; 263 263 int ret = 0, id; 264 264 265 265 switch (cmd) {
+22 -2
sound/soc/soc-card.c
··· 5 5 // Copyright (C) 2019 Renesas Electronics Corp. 6 6 // Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> 7 7 // 8 + 9 + #include <linux/lockdep.h> 10 + #include <linux/rwsem.h> 8 11 #include <sound/soc.h> 9 12 #include <sound/jack.h> 10 13 ··· 29 26 return ret; 30 27 } 31 28 32 - struct snd_kcontrol *snd_soc_card_get_kcontrol(struct snd_soc_card *soc_card, 33 - const char *name) 29 + struct snd_kcontrol *snd_soc_card_get_kcontrol_locked(struct snd_soc_card *soc_card, 30 + const char *name) 34 31 { 35 32 struct snd_card *card = soc_card->snd_card; 36 33 struct snd_kcontrol *kctl; 34 + 35 + /* must be held read or write */ 36 + lockdep_assert_held(&card->controls_rwsem); 37 37 38 38 if (unlikely(!name)) 39 39 return NULL; ··· 45 39 if (!strncmp(kctl->id.name, name, sizeof(kctl->id.name))) 46 40 return kctl; 47 41 return NULL; 42 + } 43 + EXPORT_SYMBOL_GPL(snd_soc_card_get_kcontrol_locked); 44 + 45 + struct snd_kcontrol *snd_soc_card_get_kcontrol(struct snd_soc_card *soc_card, 46 + const char *name) 47 + { 48 + struct snd_card *card = soc_card->snd_card; 49 + struct snd_kcontrol *kctl; 50 + 51 + down_read(&card->controls_rwsem); 52 + kctl = snd_soc_card_get_kcontrol_locked(soc_card, name); 53 + up_read(&card->controls_rwsem); 54 + 55 + return kctl; 48 56 } 49 57 EXPORT_SYMBOL_GPL(snd_soc_card_get_kcontrol); 50 58
+2 -2
tools/testing/selftests/bpf/prog_tests/xdp_bonding.c
··· 511 511 if (!ASSERT_OK(err, "bond bpf_xdp_query")) 512 512 goto out; 513 513 514 - if (!ASSERT_EQ(query_opts.feature_flags, NETDEV_XDP_ACT_MASK, 514 + if (!ASSERT_EQ(query_opts.feature_flags, 0, 515 515 "bond query_opts.feature_flags")) 516 516 goto out; 517 517 ··· 601 601 if (!ASSERT_OK(err, "bond bpf_xdp_query")) 602 602 goto out; 603 603 604 - ASSERT_EQ(query_opts.feature_flags, NETDEV_XDP_ACT_MASK, 604 + ASSERT_EQ(query_opts.feature_flags, 0, 605 605 "bond query_opts.feature_flags"); 606 606 out: 607 607 bpf_link__destroy(link);
+70
tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
··· 239 239 return 1000 * a + b + c; 240 240 } 241 241 242 + struct iter_limit_bug_ctx { 243 + __u64 a; 244 + __u64 b; 245 + __u64 c; 246 + }; 247 + 248 + static __naked void iter_limit_bug_cb(void) 249 + { 250 + /* This is the same as C code below, but written 251 + * in assembly to control which branches are fall-through. 252 + * 253 + * switch (bpf_get_prandom_u32()) { 254 + * case 1: ctx->a = 42; break; 255 + * case 2: ctx->b = 42; break; 256 + * default: ctx->c = 42; break; 257 + * } 258 + */ 259 + asm volatile ( 260 + "r9 = r2;" 261 + "call %[bpf_get_prandom_u32];" 262 + "r1 = r0;" 263 + "r2 = 42;" 264 + "r0 = 0;" 265 + "if r1 == 0x1 goto 1f;" 266 + "if r1 == 0x2 goto 2f;" 267 + "*(u64 *)(r9 + 16) = r2;" 268 + "exit;" 269 + "1: *(u64 *)(r9 + 0) = r2;" 270 + "exit;" 271 + "2: *(u64 *)(r9 + 8) = r2;" 272 + "exit;" 273 + : 274 + : __imm(bpf_get_prandom_u32) 275 + : __clobber_all 276 + ); 277 + } 278 + 279 + SEC("tc") 280 + __failure 281 + __flag(BPF_F_TEST_STATE_FREQ) 282 + int iter_limit_bug(struct __sk_buff *skb) 283 + { 284 + struct iter_limit_bug_ctx ctx = { 7, 7, 7 }; 285 + 286 + bpf_loop(2, iter_limit_bug_cb, &ctx, 0); 287 + 288 + /* This is the same as C code below, 289 + * written in assembly to guarantee checks order. 290 + * 291 + * if (ctx.a == 42 && ctx.b == 42 && ctx.c == 7) 292 + * asm volatile("r1 /= 0;":::"r1"); 293 + */ 294 + asm volatile ( 295 + "r1 = *(u64 *)%[ctx_a];" 296 + "if r1 != 42 goto 1f;" 297 + "r1 = *(u64 *)%[ctx_b];" 298 + "if r1 != 42 goto 1f;" 299 + "r1 = *(u64 *)%[ctx_c];" 300 + "if r1 != 7 goto 1f;" 301 + "r1 /= 0;" 302 + "1:" 303 + : 304 + : [ctx_a]"m"(ctx.a), 305 + [ctx_b]"m"(ctx.b), 306 + [ctx_c]"m"(ctx.c) 307 + : "r1" 308 + ); 309 + return 0; 310 + } 311 + 242 312 char _license[] SEC("license") = "GPL";
+6 -9
tools/testing/selftests/net/mptcp/diag.sh
··· 69 69 else 70 70 echo "[ fail ] expected $expected found $nr" 71 71 mptcp_lib_result_fail "${msg}" 72 - ret=$test_cnt 72 + ret=${KSFT_FAIL} 73 73 fi 74 74 else 75 75 echo "[ ok ]" ··· 96 96 local expected=$1 97 97 local msg="$2" 98 98 99 - __chk_nr "ss -inmlHMON $ns | wc -l" "$expected" "$msg - mptcp" 0 100 - __chk_nr "ss -inmlHtON $ns | wc -l" "$expected" "$msg - subflows" 99 + __chk_nr "ss -nlHMON $ns | wc -l" "$expected" "$msg - mptcp" 0 100 + __chk_nr "ss -nlHtON $ns | wc -l" "$expected" "$msg - subflows" 101 101 } 102 102 103 103 wait_msk_nr() ··· 124 124 if [ $i -ge $timeout ]; then 125 125 echo "[ fail ] timeout while expecting $expected max $max last $nr" 126 126 mptcp_lib_result_fail "${msg} # timeout" 127 - ret=$test_cnt 127 + ret=${KSFT_FAIL} 128 128 elif [ $nr != $expected ]; then 129 129 echo "[ fail ] expected $expected found $nr" 130 130 mptcp_lib_result_fail "${msg} # unexpected result" 131 - ret=$test_cnt 131 + ret=${KSFT_FAIL} 132 132 else 133 133 echo "[ ok ]" 134 134 mptcp_lib_result_pass "${msg}" ··· 304 304 ip netns exec $ns ./mptcp_connect -p $((I + 20001)) \ 305 305 -t ${timeout_poll} -l 0.0.0.0 >/dev/null 2>&1 & 306 306 done 307 - 308 - for I in $(seq 1 $NR_SERVERS); do 309 - mptcp_lib_wait_local_port_listen $ns $((I + 20001)) 310 - done 307 + mptcp_lib_wait_local_port_listen $ns $((NR_SERVERS + 20001)) 311 308 312 309 chk_listener_nr $NR_SERVERS "many listener sockets" 313 310
+6 -10
tools/testing/selftests/powerpc/math/fpu_signal.c
··· 18 18 #include <pthread.h> 19 19 20 20 #include "utils.h" 21 + #include "fpu.h" 21 22 22 23 /* Number of times each thread should receive the signal */ 23 24 #define ITERATIONS 10 ··· 28 27 */ 29 28 #define THREAD_FACTOR 8 30 29 31 - __thread double darray[] = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 32 - 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 33 - 2.1}; 30 + __thread double darray[32]; 34 31 35 32 bool bad_context; 36 33 int threads_starting; ··· 42 43 ucontext_t *uc = context; 43 44 mcontext_t *mc = &uc->uc_mcontext; 44 45 45 - /* Only the non volatiles were loaded up */ 46 - for (i = 14; i < 32; i++) { 47 - if (mc->fp_regs[i] != darray[i - 14]) { 46 + // Don't check f30/f31, they're used as scratches in check_all_fprs() 47 + for (i = 0; i < 30; i++) { 48 + if (mc->fp_regs[i] != darray[i]) { 48 49 bad_context = true; 49 50 break; 50 51 } ··· 53 54 54 55 void *signal_fpu_c(void *p) 55 56 { 56 - int i; 57 57 long rc; 58 58 struct sigaction act; 59 59 act.sa_sigaction = signal_fpu_sig; ··· 62 64 return p; 63 65 64 66 srand(pthread_self()); 65 - for (i = 0; i < 21; i++) 66 - darray[i] = rand(); 67 - 67 + randomise_darray(darray, ARRAY_SIZE(darray)); 68 68 rc = preempt_fpu(darray, &threads_starting, &running); 69 69 70 70 return (void *) rc;