Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR.

Conflicts:

drivers/net/ethernet/broadcom/bnxt/bnxt.c
  1e7962114c10 ("bnxt_en: Restore PTP tx_avail count in case of skb_pad() error")
  165f87691a89 ("bnxt_en: add timestamping statistics support")

No adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+3840 -1851
-3
.editorconfig
··· 5 5 [{*.{awk,c,dts,dtsi,dtso,h,mk,s,S},Kconfig,Makefile,Makefile.*}] 6 6 charset = utf-8 7 7 end_of_line = lf 8 - trim_trailing_whitespace = true 9 8 insert_final_newline = true 10 9 indent_style = tab 11 10 indent_size = 8 ··· 12 13 [*.{json,py,rs}] 13 14 charset = utf-8 14 15 end_of_line = lf 15 - trim_trailing_whitespace = true 16 16 insert_final_newline = true 17 17 indent_style = space 18 18 indent_size = 4 ··· 24 26 [*.yaml] 25 27 charset = utf-8 26 28 end_of_line = lf 27 - trim_trailing_whitespace = unset 28 29 insert_final_newline = true 29 30 indent_style = space 30 31 indent_size = 2
+1
.mailmap
··· 608 608 Sricharan Ramabadhran <quic_srichara@quicinc.com> <sricharan@codeaurora.org> 609 609 Srinivas Ramana <quic_sramana@quicinc.com> <sramana@codeaurora.org> 610 610 Sriram R <quic_srirrama@quicinc.com> <srirrama@codeaurora.org> 611 + Stanislav Fomichev <sdf@fomichev.me> <sdf@google.com> 611 612 Stefan Wahren <wahrenst@gmx.net> <stefan.wahren@i2se.com> 612 613 Stéphane Witzmann <stephane.witzmann@ubpmes.univ-bpclermont.fr> 613 614 Stephen Hemminger <stephen@networkplumber.org> <shemminger@linux-foundation.org>
-6
Documentation/admin-guide/kernel-parameters.txt
··· 2192 2192 Format: 0 | 1 2193 2193 Default set by CONFIG_INIT_ON_FREE_DEFAULT_ON. 2194 2194 2195 - init_mlocked_on_free= [MM] Fill freed userspace memory with zeroes if 2196 - it was mlock'ed and not explicitly munlock'ed 2197 - afterwards. 2198 - Format: 0 | 1 2199 - Default set by CONFIG_INIT_MLOCKED_ON_FREE_DEFAULT_ON 2200 - 2201 2195 init_pkru= [X86] Specify the default memory protection keys rights 2202 2196 register contents for all processes. 0x55555554 by 2203 2197 default (disallow access to all but pkey 0). Can
+1 -1
Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml
··· 139 139 Voltage output range of the channel as <minimum, maximum> 140 140 Required connections: 141 141 Rfb1x for: 0 to 2.5 V; 0 to 3V; 0 to 5 V; 142 - Rfb2x for: 0 to 10 V; 2.5 to 7.5V; -5 to 5 V; 142 + Rfb2x for: 0 to 10 V; -2.5 to 7.5V; -5 to 5 V; 143 143 oneOf: 144 144 - items: 145 145 - const: 0
+1
Documentation/devicetree/bindings/usb/realtek,rts5411.yaml
··· 65 65 description: The hard wired USB devices 66 66 type: object 67 67 $ref: /schemas/usb/usb-device.yaml 68 + additionalProperties: true 68 69 69 70 required: 70 71 - peer-hub
+1
Documentation/userspace-api/index.rst
··· 32 32 seccomp_filter 33 33 landlock 34 34 lsm 35 + mfd_noexec 35 36 spec_ctrl 36 37 tee 37 38
+86
Documentation/userspace-api/mfd_noexec.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ================================== 4 + Introduction of non-executable mfd 5 + ================================== 6 + :Author: 7 + Daniel Verkamp <dverkamp@chromium.org> 8 + Jeff Xu <jeffxu@chromium.org> 9 + 10 + :Contributor: 11 + Aleksa Sarai <cyphar@cyphar.com> 12 + 13 + Since Linux introduced the memfd feature, memfds have always had their 14 + execute bit set, and the memfd_create() syscall doesn't allow setting 15 + it differently. 16 + 17 + However, in a secure-by-default system, such as ChromeOS, (where all 18 + executables should come from the rootfs, which is protected by verified 19 + boot), this executable nature of memfd opens a door for NoExec bypass 20 + and enables “confused deputy attack”. E.g, in VRP bug [1]: cros_vm 21 + process created a memfd to share the content with an external process, 22 + however the memfd is overwritten and used for executing arbitrary code 23 + and root escalation. [2] lists more VRP of this kind. 24 + 25 + On the other hand, executable memfd has its legit use: runc uses memfd’s 26 + seal and executable feature to copy the contents of the binary then 27 + execute them. For such a system, we need a solution to differentiate runc's 28 + use of executable memfds and an attacker's [3]. 29 + 30 + To address those above: 31 + - Let memfd_create() set X bit at creation time. 32 + - Let memfd be sealed for modifying X bit when NX is set. 33 + - Add a new pid namespace sysctl: vm.memfd_noexec to help applications in 34 + migrating and enforcing non-executable MFD. 35 + 36 + User API 37 + ======== 38 + ``int memfd_create(const char *name, unsigned int flags)`` 39 + 40 + ``MFD_NOEXEC_SEAL`` 41 + When MFD_NOEXEC_SEAL bit is set in the ``flags``, memfd is created 42 + with NX. F_SEAL_EXEC is set and the memfd can't be modified to 43 + add X later. MFD_ALLOW_SEALING is also implied. 44 + This is the most common case for the application to use memfd. 
45 + 46 + ``MFD_EXEC`` 47 + When MFD_EXEC bit is set in the ``flags``, memfd is created with X. 48 + 49 + Note: 50 + ``MFD_NOEXEC_SEAL`` implies ``MFD_ALLOW_SEALING``. In case that 51 + an app doesn't want sealing, it can add F_SEAL_SEAL after creation. 52 + 53 + 54 + Sysctl: 55 + ======== 56 + ``pid namespaced sysctl vm.memfd_noexec`` 57 + 58 + The new pid namespaced sysctl vm.memfd_noexec has 3 values: 59 + 60 + - 0: MEMFD_NOEXEC_SCOPE_EXEC 61 + memfd_create() without MFD_EXEC nor MFD_NOEXEC_SEAL acts like 62 + MFD_EXEC was set. 63 + 64 + - 1: MEMFD_NOEXEC_SCOPE_NOEXEC_SEAL 65 + memfd_create() without MFD_EXEC nor MFD_NOEXEC_SEAL acts like 66 + MFD_NOEXEC_SEAL was set. 67 + 68 + - 2: MEMFD_NOEXEC_SCOPE_NOEXEC_ENFORCED 69 + memfd_create() without MFD_NOEXEC_SEAL will be rejected. 70 + 71 + The sysctl allows finer control of memfd_create for old software that 72 + doesn't set the executable bit; for example, a container with 73 + vm.memfd_noexec=1 means the old software will create non-executable memfd 74 + by default while new software can create executable memfd by setting 75 + MFD_EXEC. 76 + 77 + The value of vm.memfd_noexec is passed to child namespace at creation 78 + time. In addition, the setting is hierarchical, i.e. during memfd_create, 79 + we will search from current ns to root ns and use the most restrictive 80 + setting. 81 + 82 + [1] https://crbug.com/1305267 83 + 84 + [2] https://bugs.chromium.org/p/chromium/issues/list?q=type%3Dbug-security%20memfd%20escalation&can=1 85 + 86 + [3] https://lwn.net/Articles/781013/
+15 -6
Documentation/virt/hyperv/clocks.rst
··· 62 62 space code performs the same algorithm of reading the TSC and 63 63 applying the scale and offset to get the constant 10 MHz clock. 64 64 65 - Linux clockevents are based on Hyper-V synthetic timer 0. While 66 - Hyper-V offers 4 synthetic timers for each CPU, Linux only uses 67 - timer 0. Interrupts from stimer0 are recorded on the "HVS" line in 68 - /proc/interrupts. Clockevents based on the virtualized PIT and 69 - local APIC timer also work, but the Hyper-V synthetic timer is 70 - preferred. 65 + Linux clockevents are based on Hyper-V synthetic timer 0 (stimer0). 66 + While Hyper-V offers 4 synthetic timers for each CPU, Linux only uses 67 + timer 0. In older versions of Hyper-V, an interrupt from stimer0 68 + results in a VMBus control message that is demultiplexed by 69 + vmbus_isr() as described in the Documentation/virt/hyperv/vmbus.rst 70 + documentation. In newer versions of Hyper-V, stimer0 interrupts can 71 + be mapped to an architectural interrupt, which is referred to as 72 + "Direct Mode". Linux prefers to use Direct Mode when available. Since 73 + x86/x64 doesn't support per-CPU interrupts, Direct Mode statically 74 + allocates an x86 interrupt vector (HYPERV_STIMER0_VECTOR) across all CPUs 75 + and explicitly codes it to call the stimer0 interrupt handler. Hence 76 + interrupts from stimer0 are recorded on the "HVS" line in /proc/interrupts 77 + rather than being associated with a Linux IRQ. Clockevents based on the 78 + virtualized PIT and local APIC timer also work, but Hyper-V stimer0 79 + is preferred. 71 80 72 81 The driver for the Hyper-V synthetic system clock and timers is 73 82 drivers/clocksource/hyperv_timer.c.
+11 -11
Documentation/virt/hyperv/overview.rst
··· 40 40 arm64, these synthetic registers must be accessed using explicit 41 41 hypercalls. 42 42 43 - * VMbus: VMbus is a higher-level software construct that is built on 43 + * VMBus: VMBus is a higher-level software construct that is built on 44 44 the other 3 mechanisms. It is a message passing interface between 45 45 the Hyper-V host and the Linux guest. It uses memory that is shared 46 46 between Hyper-V and the guest, along with various signaling ··· 54 54 55 55 .. _Hyper-V Top Level Functional Spec (TLFS): https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/tlfs/tlfs 56 56 57 - VMbus is not documented. This documentation provides a high-level 58 - overview of VMbus and how it works, but the details can be discerned 57 + VMBus is not documented. This documentation provides a high-level 58 + overview of VMBus and how it works, but the details can be discerned 59 59 only from the code. 60 60 61 61 Sharing Memory ··· 74 74 physical address space. How Hyper-V is told about the GPA or list 75 75 of GPAs varies. In some cases, a single GPA is written to a 76 76 synthetic register. In other cases, a GPA or list of GPAs is sent 77 - in a VMbus message. 77 + in a VMBus message. 78 78 79 79 * Hyper-V translates the GPAs into "real" physical memory addresses, 80 80 and creates a virtual mapping that it can use to access the memory. ··· 133 133 any hot-add CPUs. 134 134 135 135 A Linux guest CPU may be taken offline using the normal Linux 136 - mechanisms, provided no VMbus channel interrupts are assigned to 137 - the CPU. See the section on VMbus Interrupts for more details 138 - on how VMbus channel interrupts can be re-assigned to permit 136 + mechanisms, provided no VMBus channel interrupts are assigned to 137 + the CPU. See the section on VMBus Interrupts for more details 138 + on how VMBus channel interrupts can be re-assigned to permit 139 139 taking a CPU offline. 
140 140 141 141 32-bit and 64-bit ··· 169 169 via flags in synthetic MSRs that Hyper-V provides to the guest, 170 170 and the guest code tests these flags. 171 171 172 - VMbus has its own protocol version that is negotiated during the 173 - initial VMbus connection from the guest to Hyper-V. This version 172 + VMBus has its own protocol version that is negotiated during the 173 + initial VMBus connection from the guest to Hyper-V. This version 174 174 number is also output to dmesg during boot. This version number 175 175 is checked in a few places in the code to determine if specific 176 176 functionality is present. 177 177 178 - Furthermore, each synthetic device on VMbus also has a protocol 179 - version that is separate from the VMbus protocol version. Device 178 + Furthermore, each synthetic device on VMBus also has a protocol 179 + version that is separate from the VMBus protocol version. Device 180 180 drivers for these synthetic devices typically negotiate the device 181 181 protocol version, and may test that protocol version to determine 182 182 if specific device functionality is present.
+82 -59
Documentation/virt/hyperv/vmbus.rst
··· 1 1 .. SPDX-License-Identifier: GPL-2.0 2 2 3 - VMbus 3 + VMBus 4 4 ===== 5 - VMbus is a software construct provided by Hyper-V to guest VMs. It 5 + VMBus is a software construct provided by Hyper-V to guest VMs. It 6 6 consists of a control path and common facilities used by synthetic 7 7 devices that Hyper-V presents to guest VMs. The control path is 8 8 used to offer synthetic devices to the guest VM and, in some cases, ··· 12 12 signaling primitives to allow Hyper-V and the guest to interrupt 13 13 each other. 14 14 15 - VMbus is modeled in Linux as a bus, with the expected /sys/bus/vmbus 16 - entry in a running Linux guest. The VMbus driver (drivers/hv/vmbus_drv.c) 17 - establishes the VMbus control path with the Hyper-V host, then 15 + VMBus is modeled in Linux as a bus, with the expected /sys/bus/vmbus 16 + entry in a running Linux guest. The VMBus driver (drivers/hv/vmbus_drv.c) 17 + establishes the VMBus control path with the Hyper-V host, then 18 18 registers itself as a Linux bus driver. It implements the standard 19 19 bus functions for adding and removing devices to/from the bus. 20 20 ··· 49 49 the synthetic SCSI controller is "storvsc". These drivers contain 50 50 functions with names like "storvsc_connect_to_vsp". 51 51 52 - VMbus channels 52 + VMBus channels 53 53 -------------- 54 - An instance of a synthetic device uses VMbus channels to communicate 54 + An instance of a synthetic device uses VMBus channels to communicate 55 55 between the VSP and the VSC. Channels are bi-directional and used 56 56 for passing messages. Most synthetic devices use a single channel, 57 57 but the synthetic SCSI controller and synthetic NIC may use multiple ··· 73 73 actual ring. The size of the ring is determined by the VSC in the 74 74 guest and is specific to each synthetic device. The list of GPAs 75 75 making up the ring is communicated to the Hyper-V host over the 76 - VMbus control path as a GPA Descriptor List (GPADL). 
See function 76 + VMBus control path as a GPA Descriptor List (GPADL). See function 77 77 vmbus_establish_gpadl(). 78 78 79 79 Each ring buffer is mapped into contiguous Linux kernel virtual ··· 102 102 approximately 1280 Mbytes. For versions prior to Windows Server 103 103 2019, the limit is approximately 384 Mbytes. 104 104 105 - VMbus messages 106 - -------------- 107 - All VMbus messages have a standard header that includes the message 108 - length, the offset of the message payload, some flags, and a 105 + VMBus channel messages 106 + ---------------------- 107 + All messages sent in a VMBus channel have a standard header that includes 108 + the message length, the offset of the message payload, some flags, and a 109 109 transactionID. The portion of the message after the header is 110 110 unique to each VSP/VSC pair. 111 111 ··· 137 137 buffer. For example, the storvsc driver uses this approach to 138 138 specify the data buffers to/from which disk I/O is done. 139 139 140 - Three functions exist to send VMbus messages: 140 + Three functions exist to send VMBus channel messages: 141 141 142 142 1. vmbus_sendpacket(): Control-only messages and messages with 143 143 embedded data -- no GPAs ··· 154 154 and valid messages, and Linux drivers for synthetic devices did not 155 155 fully validate messages. With the introduction of processor 156 156 technologies that fully encrypt guest memory and that allow the 157 - guest to not trust the hypervisor (AMD SNP-SEV, Intel TDX), trusting 157 + guest to not trust the hypervisor (AMD SEV-SNP, Intel TDX), trusting 158 158 the Hyper-V host is no longer a valid assumption. The drivers for 159 - VMbus synthetic devices are being updated to fully validate any 159 + VMBus synthetic devices are being updated to fully validate any 160 160 values read from memory that is shared with Hyper-V, which includes 161 - messages from VMbus devices. To facilitate such validation, 161 + messages from VMBus devices. 
To facilitate such validation, 162 162 messages read by the guest from the "in" ring buffer are copied to a 163 163 temporary buffer that is not shared with Hyper-V. Validation is 164 164 performed in this temporary buffer without the risk of Hyper-V 165 165 maliciously modifying the message after it is validated but before 166 166 it is used. 167 167 168 - VMbus interrupts 168 + Synthetic Interrupt Controller (synic) 169 + -------------------------------------- 170 + Hyper-V provides each guest CPU with a synthetic interrupt controller 171 + that is used by VMBus for host-guest communication. While each synic 172 + defines 16 synthetic interrupts (SINT), Linux uses only one of the 16 173 + (VMBUS_MESSAGE_SINT). All interrupts related to communication between 174 + the Hyper-V host and a guest CPU use that SINT. 175 + 176 + The SINT is mapped to a single per-CPU architectural interrupt (i.e, 177 + an 8-bit x86/x64 interrupt vector, or an arm64 PPI INTID). Because 178 + each CPU in the guest has a synic and may receive VMBus interrupts, 179 + they are best modeled in Linux as per-CPU interrupts. This model works 180 + well on arm64 where a single per-CPU Linux IRQ is allocated for 181 + VMBUS_MESSAGE_SINT. This IRQ appears in /proc/interrupts as an IRQ labelled 182 + "Hyper-V VMbus". Since x86/x64 lacks support for per-CPU IRQs, an x86 183 + interrupt vector is statically allocated (HYPERVISOR_CALLBACK_VECTOR) 184 + across all CPUs and explicitly coded to call vmbus_isr(). In this case, 185 + there's no Linux IRQ, and the interrupts are visible in aggregate in 186 + /proc/interrupts on the "HYP" line. 187 + 188 + The synic provides the means to demultiplex the architectural interrupt into 189 + one or more logical interrupts and route the logical interrupt to the proper 190 + VMBus handler in Linux. This demultiplexing is done by vmbus_isr() and 191 + related functions that access synic data structures. 
192 + 193 + The synic is not modeled in Linux as an irq chip or irq domain, 194 + and the demultiplexed logical interrupts are not Linux IRQs. As such, 195 + they don't appear in /proc/interrupts or /proc/irq. The CPU 196 + affinity for one of these logical interrupts is controlled via an 197 + entry under /sys/bus/vmbus as described below. 198 + 199 + VMBus interrupts 169 200 ---------------- 170 - VMbus provides a mechanism for the guest to interrupt the host when 201 + VMBus provides a mechanism for the guest to interrupt the host when 171 202 the guest has queued new messages in a ring buffer. The host 172 203 expects that the guest will send an interrupt only when an "out" 173 204 ring buffer transitions from empty to non-empty. If the guest sends ··· 207 176 interrupts, the host may throttle that guest by suspending its 208 177 execution for a few seconds to prevent a denial-of-service attack. 209 178 210 - Similarly, the host will interrupt the guest when it sends a new 211 - message on the VMbus control path, or when a VMbus channel "in" ring 212 - buffer transitions from empty to non-empty. Each CPU in the guest 213 - may receive VMbus interrupts, so they are best modeled as per-CPU 214 - interrupts in Linux. This model works well on arm64 where a single 215 - per-CPU IRQ is allocated for VMbus. Since x86/x64 lacks support for 216 - per-CPU IRQs, an x86 interrupt vector is statically allocated (see 217 - HYPERVISOR_CALLBACK_VECTOR) across all CPUs and explicitly coded to 218 - call the VMbus interrupt service routine. These interrupts are 219 - visible in /proc/interrupts on the "HYP" line. 179 + Similarly, the host will interrupt the guest via the synic when 180 + it sends a new message on the VMBus control path, or when a VMBus 181 + channel "in" ring buffer transitions from empty to non-empty due to 182 + the host inserting a new VMBus channel message. 
The control message stream 183 + and each VMBus channel "in" ring buffer are separate logical interrupts 184 + that are demultiplexed by vmbus_isr(). It demultiplexes by first checking 185 + for channel interrupts by calling vmbus_chan_sched(), which looks at a synic 186 + bitmap to determine which channels have pending interrupts on this CPU. 187 + If multiple channels have pending interrupts for this CPU, they are 188 + processed sequentially. When all channel interrupts have been processed, 189 + vmbus_isr() checks for and processes any messages received on the VMBus 190 + control path. 220 191 221 - The guest CPU that a VMbus channel will interrupt is selected by the 192 + The guest CPU that a VMBus channel will interrupt is selected by the 222 193 guest when the channel is created, and the host is informed of that 223 - selection. VMbus devices are broadly grouped into two categories: 194 + selection. VMBus devices are broadly grouped into two categories: 224 195 225 - 1. "Slow" devices that need only one VMbus channel. The devices 196 + 1. "Slow" devices that need only one VMBus channel. The devices 226 197 (such as keyboard, mouse, heartbeat, and timesync) generate 227 - relatively few interrupts. Their VMbus channels are all 198 + relatively few interrupts. Their VMBus channels are all 228 199 assigned to interrupt the VMBUS_CONNECT_CPU, which is always 229 200 CPU 0. 230 201 231 - 2. "High speed" devices that may use multiple VMbus channels for 202 + 2. "High speed" devices that may use multiple VMBus channels for 232 203 higher parallelism and performance. These devices include the 233 - synthetic SCSI controller and synthetic NIC. Their VMbus 204 + synthetic SCSI controller and synthetic NIC. Their VMBus 234 205 channels interrupts are assigned to CPUs that are spread out 235 206 among the available CPUs in the VM so that interrupts on 236 207 multiple channels can be processed in parallel. 
237 208 238 - The assignment of VMbus channel interrupts to CPUs is done in the 209 + The assignment of VMBus channel interrupts to CPUs is done in the 239 210 function init_vp_index(). This assignment is done outside of the 240 211 normal Linux interrupt affinity mechanism, so the interrupts are 241 212 neither "unmanaged" nor "managed" interrupts. 242 213 243 - The CPU that a VMbus channel will interrupt can be seen in 214 + The CPU that a VMBus channel will interrupt can be seen in 244 215 /sys/bus/vmbus/devices/<deviceGUID>/ channels/<channelRelID>/cpu. 245 216 When running on later versions of Hyper-V, the CPU can be changed 246 - by writing a new value to this sysfs entry. Because the interrupt 247 - assignment is done outside of the normal Linux affinity mechanism, 248 - there are no entries in /proc/irq corresponding to individual 249 - VMbus channel interrupts. 217 + by writing a new value to this sysfs entry. Because VMBus channel 218 + interrupts are not Linux IRQs, there are no entries in /proc/interrupts 219 + or /proc/irq corresponding to individual VMBus channel interrupts. 250 220 251 221 An online CPU in a Linux guest may not be taken offline if it has 252 - VMbus channel interrupts assigned to it. Any such channel 222 + VMBus channel interrupts assigned to it. Any such channel 253 223 interrupts must first be manually reassigned to another CPU as 254 224 described above. When no channel interrupts are assigned to the 255 225 CPU, it can be taken offline. 256 226 257 - When a guest CPU receives a VMbus interrupt from the host, the 258 - function vmbus_isr() handles the interrupt. It first checks for 259 - channel interrupts by calling vmbus_chan_sched(), which looks at a 260 - bitmap setup by the host to determine which channels have pending 261 - interrupts on this CPU. If multiple channels have pending 262 - interrupts for this CPU, they are processed sequentially. 
When all 263 - channel interrupts have been processed, vmbus_isr() checks for and 264 - processes any message received on the VMbus control path. 265 - 266 - The VMbus channel interrupt handling code is designed to work 227 + The VMBus channel interrupt handling code is designed to work 267 228 correctly even if an interrupt is received on a CPU other than the 268 229 CPU assigned to the channel. Specifically, the code does not use 269 230 CPU-based exclusion for correctness. In normal operation, Hyper-V ··· 265 242 even if there is a time lag before Hyper-V starts interrupting the 266 243 new CPU. See comments in target_cpu_store(). 267 244 268 - VMbus device creation/deletion 245 + VMBus device creation/deletion 269 246 ------------------------------ 270 247 Hyper-V and the Linux guest have a separate message-passing path 271 248 that is used for synthetic device creation and deletion. This 272 - path does not use a VMbus channel. See vmbus_post_msg() and 249 + path does not use a VMBus channel. See vmbus_post_msg() and 273 250 vmbus_on_msg_dpc(). 274 251 275 252 The first step is for the guest to connect to the generic 276 - Hyper-V VMbus mechanism. As part of establishing this connection, 277 - the guest and Hyper-V agree on a VMbus protocol version they will 253 + Hyper-V VMBus mechanism. As part of establishing this connection, 254 + the guest and Hyper-V agree on a VMBus protocol version they will 278 255 use. This negotiation allows newer Linux kernels to run on older 279 256 Hyper-V versions, and vice versa. 280 257 281 258 The guest then tells Hyper-V to "send offers". Hyper-V sends an 282 259 offer message to the guest for each synthetic device that the VM 283 - is configured to have. Each VMbus device type has a fixed GUID 284 - known as the "class ID", and each VMbus device instance is also 260 + is configured to have. 
Each VMBus device type has a fixed GUID 261 + known as the "class ID", and each VMBus device instance is also 285 262 identified by a GUID. The offer message from Hyper-V contains 286 263 both GUIDs to uniquely (within the VM) identify the device. 287 264 There is one offer message for each device instance, so a VM with ··· 298 275 the device. Driver/device matching is performed using the standard 299 276 Linux mechanism. 300 277 301 - The device driver probe function opens the primary VMbus channel to 278 + The device driver probe function opens the primary VMBus channel to 302 279 the corresponding VSP. It allocates guest memory for the channel 303 280 ring buffers and shares the ring buffer with the Hyper-V host by 304 281 giving the host a list of GPAs for the ring buffer memory. See ··· 308 285 setup messages via the primary channel. These messages may include 309 286 negotiating the device protocol version to be used between the Linux 310 287 VSC and the VSP on the Hyper-V host. The setup messages may also 311 - include creating additional VMbus channels, which are somewhat 288 + include creating additional VMBus channels, which are somewhat 312 289 mis-named as "sub-channels" since they are functionally 313 290 equivalent to the primary channel once they are created. 314 291
+16 -17
MAINTAINERS
··· 3980 3980 R: Yonghong Song <yonghong.song@linux.dev> 3981 3981 R: John Fastabend <john.fastabend@gmail.com> 3982 3982 R: KP Singh <kpsingh@kernel.org> 3983 - R: Stanislav Fomichev <sdf@google.com> 3983 + R: Stanislav Fomichev <sdf@fomichev.me> 3984 3984 R: Hao Luo <haoluo@google.com> 3985 3985 R: Jiri Olsa <jolsa@kernel.org> 3986 3986 L: bpf@vger.kernel.org ··· 5295 5295 5296 5296 CLANG CONTROL FLOW INTEGRITY SUPPORT 5297 5297 M: Sami Tolvanen <samitolvanen@google.com> 5298 - M: Kees Cook <keescook@chromium.org> 5298 + M: Kees Cook <kees@kernel.org> 5299 5299 R: Nathan Chancellor <nathan@kernel.org> 5300 5300 L: llvm@lists.linux.dev 5301 5301 S: Supported ··· 8211 8211 8212 8212 EXEC & BINFMT API, ELF 8213 8213 R: Eric Biederman <ebiederm@xmission.com> 8214 - R: Kees Cook <keescook@chromium.org> 8214 + R: Kees Cook <kees@kernel.org> 8215 8215 L: linux-mm@kvack.org 8216 8216 S: Supported 8217 8217 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/execve ··· 8612 8612 F: drivers/net/ethernet/nvidia/* 8613 8613 8614 8614 FORTIFY_SOURCE 8615 - M: Kees Cook <keescook@chromium.org> 8615 + M: Kees Cook <kees@kernel.org> 8616 8616 L: linux-hardening@vger.kernel.org 8617 8617 S: Supported 8618 8618 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/hardening ··· 9102 9102 F: include/linux/platform_data/gsc_hwmon.h 9103 9103 9104 9104 GCC PLUGINS 9105 - M: Kees Cook <keescook@chromium.org> 9105 + M: Kees Cook <kees@kernel.org> 9106 9106 L: linux-hardening@vger.kernel.org 9107 9107 S: Maintained 9108 9108 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/hardening ··· 9236 9236 F: drivers/input/touchscreen/resistive-adc-touch.c 9237 9237 9238 9238 GENERIC STRING LIBRARY 9239 - M: Kees Cook <keescook@chromium.org> 9239 + M: Kees Cook <kees@kernel.org> 9240 9240 R: Andy Shevchenko <andy@kernel.org> 9241 9241 L: linux-hardening@vger.kernel.org 9242 9242 S: Supported ··· 11034 11034 11035 11035 
INTEL DRM XE DRIVER (Lunar Lake and newer) 11036 11036 M: Lucas De Marchi <lucas.demarchi@intel.com> 11037 - M: Oded Gabbay <ogabbay@kernel.org> 11038 11037 M: Thomas Hellström <thomas.hellstrom@linux.intel.com> 11038 + M: Rodrigo Vivi <rodrigo.vivi@intel.com> 11039 11039 L: intel-xe@lists.freedesktop.org 11040 11040 S: Supported 11041 11041 W: https://drm.pages.freedesktop.org/intel-docs/ ··· 11950 11950 F: usr/ 11951 11951 11952 11952 KERNEL HARDENING (not covered by other areas) 11953 - M: Kees Cook <keescook@chromium.org> 11953 + M: Kees Cook <kees@kernel.org> 11954 11954 R: Gustavo A. R. Silva <gustavoars@kernel.org> 11955 11955 L: linux-hardening@vger.kernel.org 11956 11956 S: Supported ··· 12479 12479 12480 12480 LEAKING_ADDRESSES 12481 12481 M: Tycho Andersen <tycho@tycho.pizza> 12482 - R: Kees Cook <keescook@chromium.org> 12482 + R: Kees Cook <kees@kernel.org> 12483 12483 L: linux-hardening@vger.kernel.org 12484 12484 S: Maintained 12485 12485 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/hardening ··· 12775 12775 F: arch/powerpc/platforms/83xx/ 12776 12776 12777 12777 LINUX KERNEL DUMP TEST MODULE (LKDTM) 12778 - M: Kees Cook <keescook@chromium.org> 12778 + M: Kees Cook <kees@kernel.org> 12779 12779 S: Maintained 12780 12780 F: drivers/misc/lkdtm/* 12781 12781 F: tools/testing/selftests/lkdtm/* ··· 12905 12905 F: drivers/media/usb/dvb-usb-v2/lmedm04* 12906 12906 12907 12907 LOADPIN SECURITY MODULE 12908 - M: Kees Cook <keescook@chromium.org> 12908 + M: Kees Cook <kees@kernel.org> 12909 12909 S: Supported 12910 12910 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/hardening 12911 12911 F: Documentation/admin-guide/LSM/LoadPin.rst ··· 17997 17997 17998 17998 PROC SYSCTL 17999 17999 M: Luis Chamberlain <mcgrof@kernel.org> 18000 - M: Kees Cook <keescook@chromium.org> 18000 + M: Kees Cook <kees@kernel.org> 18001 18001 M: Joel Granados <j.granados@samsung.com> 18002 18002 L: 
linux-kernel@vger.kernel.org 18003 18003 L: linux-fsdevel@vger.kernel.org ··· 18053 18053 F: drivers/net/pse-pd/ 18054 18054 18055 18055 PSTORE FILESYSTEM 18056 - M: Kees Cook <keescook@chromium.org> 18056 + M: Kees Cook <kees@kernel.org> 18057 18057 R: Tony Luck <tony.luck@intel.com> 18058 18058 R: Guilherme G. Piccoli <gpiccoli@igalia.com> 18059 18059 L: linux-hardening@vger.kernel.org ··· 20059 20059 F: drivers/media/cec/platform/seco/seco-cec.h 20060 20060 20061 20061 SECURE COMPUTING 20062 - M: Kees Cook <keescook@chromium.org> 20062 + M: Kees Cook <kees@kernel.org> 20063 20063 R: Andy Lutomirski <luto@amacapital.net> 20064 20064 R: Will Drewry <wad@chromium.org> 20065 20065 S: Supported ··· 22973 22973 F: include/uapi/linux/ublk_cmd.h 22974 22974 22975 22975 UBSAN 22976 - M: Kees Cook <keescook@chromium.org> 22976 + M: Kees Cook <kees@kernel.org> 22977 22977 R: Marco Elver <elver@google.com> 22978 22978 R: Andrey Konovalov <andreyknvl@gmail.com> 22979 22979 R: Andrey Ryabinin <ryabinin.a.a@gmail.com> ··· 23975 23975 M: Andrew Morton <akpm@linux-foundation.org> 23976 23976 R: Uladzislau Rezki <urezki@gmail.com> 23977 23977 R: Christoph Hellwig <hch@infradead.org> 23978 - R: Lorenzo Stoakes <lstoakes@gmail.com> 23979 23978 L: linux-mm@kvack.org 23980 23979 S: Maintained 23981 23980 W: http://www.linux-mm.org ··· 24810 24811 F: include/linux/yam.h 24811 24812 24812 24813 YAMA SECURITY MODULE 24813 - M: Kees Cook <keescook@chromium.org> 24814 + M: Kees Cook <kees@kernel.org> 24814 24815 S: Supported 24815 24816 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/hardening 24816 24817 F: Documentation/admin-guide/LSM/Yama.rst
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 10 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc3 5 + EXTRAVERSION = -rc4 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+13
arch/arm/include/asm/efi.h
··· 14 14 #include <asm/mach/map.h> 15 15 #include <asm/mmu_context.h> 16 16 #include <asm/ptrace.h> 17 + #include <asm/uaccess.h> 17 18 18 19 #ifdef CONFIG_EFI 19 20 void efi_init(void); ··· 25 24 26 25 #define arch_efi_call_virt_setup() efi_virtmap_load() 27 26 #define arch_efi_call_virt_teardown() efi_virtmap_unload() 27 + 28 + #ifdef CONFIG_CPU_TTBR0_PAN 29 + #undef arch_efi_call_virt 30 + #define arch_efi_call_virt(p, f, args...) ({ \ 31 + unsigned int flags = uaccess_save_and_enable(); \ 32 + efi_status_t res = _Generic((p)->f(args), \ 33 + efi_status_t: (p)->f(args), \ 34 + default: ((p)->f(args), EFI_ABORTED)); \ 35 + uaccess_restore(flags); \ 36 + res; \ 37 + }) 38 + #endif 28 39 29 40 #define ARCH_EFI_IRQ_FLAGS_MASK \ 30 41 (PSR_J_BIT | PSR_E_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT | \
+2
arch/arm64/kernel/efi.c
··· 9 9 10 10 #include <linux/efi.h> 11 11 #include <linux/init.h> 12 + #include <linux/kmemleak.h> 12 13 #include <linux/screen_info.h> 13 14 #include <linux/vmalloc.h> 14 15 ··· 214 213 return -ENOMEM; 215 214 } 216 215 216 + kmemleak_not_leak(p); 217 217 efi_rt_stack_top = p + THREAD_SIZE; 218 218 return 0; 219 219 }
+2 -1
arch/mips/bmips/setup.c
··· 110 110 * RAC flush causes kernel panics on BCM6358 when booting from TP1 111 111 * because the bootloader is not initializing it properly. 112 112 */ 113 - bmips_rac_flush_disable = !!(read_c0_brcm_cmt_local() & (1 << 31)); 113 + bmips_rac_flush_disable = !!(read_c0_brcm_cmt_local() & (1 << 31)) || 114 + !!BMIPS_GET_CBR(); 114 115 } 115 116 116 117 static void bcm6368_quirks(void)
+2 -2
arch/mips/pci/ops-rc32434.c
··· 112 112 * gives them time to settle 113 113 */ 114 114 if (where == PCI_VENDOR_ID) { 115 - if (ret == 0xffffffff || ret == 0x00000000 || 116 - ret == 0x0000ffff || ret == 0xffff0000) { 115 + if (*val == 0xffffffff || *val == 0x00000000 || 116 + *val == 0x0000ffff || *val == 0xffff0000) { 117 117 if (delay > 4) 118 118 return 0; 119 119 delay *= 2;
+4 -11
arch/parisc/include/asm/cacheflush.h
··· 31 31 void flush_cache_all(void); 32 32 void flush_cache_mm(struct mm_struct *mm); 33 33 34 - void flush_kernel_dcache_page_addr(const void *addr); 35 - 36 34 #define flush_kernel_dcache_range(start,size) \ 37 35 flush_kernel_dcache_range_asm((start), (start)+(size)); 38 36 37 + /* The only way to flush a vmap range is to flush whole cache */ 39 38 #define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1 40 39 void flush_kernel_vmap_range(void *vaddr, int size); 41 40 void invalidate_kernel_vmap_range(void *vaddr, int size); 42 41 43 - #define flush_cache_vmap(start, end) flush_cache_all() 42 + void flush_cache_vmap(unsigned long start, unsigned long end); 44 43 #define flush_cache_vmap_early(start, end) do { } while (0) 45 - #define flush_cache_vunmap(start, end) flush_cache_all() 44 + void flush_cache_vunmap(unsigned long start, unsigned long end); 46 45 47 46 void flush_dcache_folio(struct folio *folio); 48 47 #define flush_dcache_folio flush_dcache_folio ··· 76 77 void flush_cache_range(struct vm_area_struct *vma, 77 78 unsigned long start, unsigned long end); 78 79 79 - /* defined in pacache.S exported in cache.c used by flush_anon_page */ 80 - void flush_dcache_page_asm(unsigned long phys_addr, unsigned long vaddr); 81 - 82 80 #define ARCH_HAS_FLUSH_ANON_PAGE 83 81 void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr); 84 82 85 83 #define ARCH_HAS_FLUSH_ON_KUNMAP 86 - static inline void kunmap_flush_on_unmap(const void *addr) 87 - { 88 - flush_kernel_dcache_page_addr(addr); 89 - } 84 + void kunmap_flush_on_unmap(const void *addr); 90 85 91 86 #endif /* _PARISC_CACHEFLUSH_H */ 92 87
+12 -15
arch/parisc/include/asm/pgtable.h
··· 448 448 return pte; 449 449 } 450 450 451 + static inline pte_t ptep_get(pte_t *ptep) 452 + { 453 + return READ_ONCE(*ptep); 454 + } 455 + #define ptep_get ptep_get 456 + 451 457 static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) 452 458 { 453 459 pte_t pte; 454 460 455 - if (!pte_young(*ptep)) 456 - return 0; 457 - 458 - pte = *ptep; 461 + pte = ptep_get(ptep); 459 462 if (!pte_young(pte)) { 460 463 return 0; 461 464 } ··· 466 463 return 1; 467 464 } 468 465 466 + int ptep_clear_flush_young(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep); 467 + pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep); 468 + 469 469 struct mm_struct; 470 - static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep) 471 - { 472 - pte_t old_pte; 473 - 474 - old_pte = *ptep; 475 - set_pte(ptep, __pte(0)); 476 - 477 - return old_pte; 478 - } 479 - 480 470 static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep) 481 471 { 482 472 set_pte(ptep, pte_wrprotect(*ptep)); ··· 507 511 #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN 508 512 509 513 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG 510 - #define __HAVE_ARCH_PTEP_GET_AND_CLEAR 514 + #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH 515 + #define __HAVE_ARCH_PTEP_CLEAR_FLUSH 511 516 #define __HAVE_ARCH_PTEP_SET_WRPROTECT 512 517 #define __HAVE_ARCH_PTE_SAME 513 518
+263 -158
arch/parisc/kernel/cache.c
··· 20 20 #include <linux/sched.h> 21 21 #include <linux/sched/mm.h> 22 22 #include <linux/syscalls.h> 23 + #include <linux/vmalloc.h> 23 24 #include <asm/pdc.h> 24 25 #include <asm/cache.h> 25 26 #include <asm/cacheflush.h> ··· 32 31 #include <asm/mmu_context.h> 33 32 #include <asm/cachectl.h> 34 33 34 + #define PTR_PAGE_ALIGN_DOWN(addr) PTR_ALIGN_DOWN(addr, PAGE_SIZE) 35 + 36 + /* 37 + * When nonzero, use _PAGE_ACCESSED bit to try to reduce the number 38 + * of page flushes done flush_cache_page_if_present. There are some 39 + * pros and cons in using this option. It may increase the risk of 40 + * random segmentation faults. 41 + */ 42 + #define CONFIG_FLUSH_PAGE_ACCESSED 0 43 + 35 44 int split_tlb __ro_after_init; 36 45 int dcache_stride __ro_after_init; 37 46 int icache_stride __ro_after_init; 38 47 EXPORT_SYMBOL(dcache_stride); 39 48 49 + /* Internal implementation in arch/parisc/kernel/pacache.S */ 40 50 void flush_dcache_page_asm(unsigned long phys_addr, unsigned long vaddr); 41 51 EXPORT_SYMBOL(flush_dcache_page_asm); 42 52 void purge_dcache_page_asm(unsigned long phys_addr, unsigned long vaddr); 43 53 void flush_icache_page_asm(unsigned long phys_addr, unsigned long vaddr); 44 - 45 - /* Internal implementation in arch/parisc/kernel/pacache.S */ 46 54 void flush_data_cache_local(void *); /* flushes local data-cache only */ 47 55 void flush_instruction_cache_local(void); /* flushes local code-cache only */ 56 + 57 + static void flush_kernel_dcache_page_addr(const void *addr); 48 58 49 59 /* On some machines (i.e., ones with the Merced bus), there can be 50 60 * only a single PxTLB broadcast at a time; this must be guaranteed ··· 333 321 { 334 322 if (!static_branch_likely(&parisc_has_cache)) 335 323 return; 324 + 325 + /* 326 + * The TLB is the engine of coherence on parisc. The CPU is 327 + * entitled to speculate any page with a TLB mapping, so here 328 + * we kill the mapping then flush the page along a special flush 329 + * only alias mapping. 
This guarantees that the page is no-longer 330 + * in the cache for any process and nor may it be speculatively 331 + * read in (until the user or kernel specifically accesses it, 332 + * of course). 333 + */ 334 + flush_tlb_page(vma, vmaddr); 335 + 336 336 preempt_disable(); 337 337 flush_dcache_page_asm(physaddr, vmaddr); 338 338 if (vma->vm_flags & VM_EXEC) ··· 352 328 preempt_enable(); 353 329 } 354 330 355 - static void flush_user_cache_page(struct vm_area_struct *vma, unsigned long vmaddr) 331 + static void flush_kernel_dcache_page_addr(const void *addr) 356 332 { 357 - unsigned long flags, space, pgd, prot; 358 - #ifdef CONFIG_TLB_PTLOCK 359 - unsigned long pgd_lock; 360 - #endif 333 + unsigned long vaddr = (unsigned long)addr; 334 + unsigned long flags; 361 335 362 - vmaddr &= PAGE_MASK; 336 + /* Purge TLB entry to remove translation on all CPUs */ 337 + purge_tlb_start(flags); 338 + pdtlb(SR_KERNEL, addr); 339 + purge_tlb_end(flags); 363 340 341 + /* Use tmpalias flush to prevent data cache move-in */ 364 342 preempt_disable(); 365 - 366 - /* Set context for flush */ 367 - local_irq_save(flags); 368 - prot = mfctl(8); 369 - space = mfsp(SR_USER); 370 - pgd = mfctl(25); 371 - #ifdef CONFIG_TLB_PTLOCK 372 - pgd_lock = mfctl(28); 373 - #endif 374 - switch_mm_irqs_off(NULL, vma->vm_mm, NULL); 375 - local_irq_restore(flags); 376 - 377 - flush_user_dcache_range_asm(vmaddr, vmaddr + PAGE_SIZE); 378 - if (vma->vm_flags & VM_EXEC) 379 - flush_user_icache_range_asm(vmaddr, vmaddr + PAGE_SIZE); 380 - flush_tlb_page(vma, vmaddr); 381 - 382 - /* Restore previous context */ 383 - local_irq_save(flags); 384 - #ifdef CONFIG_TLB_PTLOCK 385 - mtctl(pgd_lock, 28); 386 - #endif 387 - mtctl(pgd, 25); 388 - mtsp(space, SR_USER); 389 - mtctl(prot, 8); 390 - local_irq_restore(flags); 391 - 343 + flush_dcache_page_asm(__pa(vaddr), vaddr); 392 344 preempt_enable(); 393 345 } 346 + 347 + static void flush_kernel_icache_page_addr(const void *addr) 348 + { 349 + unsigned long vaddr = 
(unsigned long)addr; 350 + unsigned long flags; 351 + 352 + /* Purge TLB entry to remove translation on all CPUs */ 353 + purge_tlb_start(flags); 354 + pdtlb(SR_KERNEL, addr); 355 + purge_tlb_end(flags); 356 + 357 + /* Use tmpalias flush to prevent instruction cache move-in */ 358 + preempt_disable(); 359 + flush_icache_page_asm(__pa(vaddr), vaddr); 360 + preempt_enable(); 361 + } 362 + 363 + void kunmap_flush_on_unmap(const void *addr) 364 + { 365 + flush_kernel_dcache_page_addr(addr); 366 + } 367 + EXPORT_SYMBOL(kunmap_flush_on_unmap); 394 368 395 369 void flush_icache_pages(struct vm_area_struct *vma, struct page *page, 396 370 unsigned int nr) ··· 397 375 398 376 for (;;) { 399 377 flush_kernel_dcache_page_addr(kaddr); 400 - flush_kernel_icache_page(kaddr); 378 + flush_kernel_icache_page_addr(kaddr); 401 379 if (--nr == 0) 402 380 break; 403 381 kaddr += PAGE_SIZE; 404 382 } 405 383 } 406 384 385 + /* 386 + * Walk page directory for MM to find PTEP pointer for address ADDR. 387 + */ 407 388 static inline pte_t *get_ptep(struct mm_struct *mm, unsigned long addr) 408 389 { 409 390 pte_t *ptep = NULL; ··· 433 408 { 434 409 return (pte_val(pte) & (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_NO_CACHE)) 435 410 == (_PAGE_PRESENT | _PAGE_ACCESSED); 411 + } 412 + 413 + /* 414 + * Return user physical address. Returns 0 if page is not present. 
415 + */ 416 + static inline unsigned long get_upa(struct mm_struct *mm, unsigned long addr) 417 + { 418 + unsigned long flags, space, pgd, prot, pa; 419 + #ifdef CONFIG_TLB_PTLOCK 420 + unsigned long pgd_lock; 421 + #endif 422 + 423 + /* Save context */ 424 + local_irq_save(flags); 425 + prot = mfctl(8); 426 + space = mfsp(SR_USER); 427 + pgd = mfctl(25); 428 + #ifdef CONFIG_TLB_PTLOCK 429 + pgd_lock = mfctl(28); 430 + #endif 431 + 432 + /* Set context for lpa_user */ 433 + switch_mm_irqs_off(NULL, mm, NULL); 434 + pa = lpa_user(addr); 435 + 436 + /* Restore previous context */ 437 + #ifdef CONFIG_TLB_PTLOCK 438 + mtctl(pgd_lock, 28); 439 + #endif 440 + mtctl(pgd, 25); 441 + mtsp(space, SR_USER); 442 + mtctl(prot, 8); 443 + local_irq_restore(flags); 444 + 445 + return pa; 436 446 } 437 447 438 448 void flush_dcache_folio(struct folio *folio) ··· 518 458 if (addr + nr * PAGE_SIZE > vma->vm_end) 519 459 nr = (vma->vm_end - addr) / PAGE_SIZE; 520 460 521 - if (parisc_requires_coherency()) { 522 - for (i = 0; i < nr; i++) { 523 - pte_t *ptep = get_ptep(vma->vm_mm, 524 - addr + i * PAGE_SIZE); 525 - if (!ptep) 526 - continue; 527 - if (pte_needs_flush(*ptep)) 528 - flush_user_cache_page(vma, 529 - addr + i * PAGE_SIZE); 530 - /* Optimise accesses to the same table? */ 531 - pte_unmap(ptep); 532 - } 533 - } else { 534 - /* 535 - * The TLB is the engine of coherence on parisc: 536 - * The CPU is entitled to speculate any page 537 - * with a TLB mapping, so here we kill the 538 - * mapping then flush the page along a special 539 - * flush only alias mapping. 
This guarantees that 540 - * the page is no-longer in the cache for any 541 - * process and nor may it be speculatively read 542 - * in (until the user or kernel specifically 543 - * accesses it, of course) 544 - */ 545 - for (i = 0; i < nr; i++) 546 - flush_tlb_page(vma, addr + i * PAGE_SIZE); 547 - if (old_addr == 0 || (old_addr & (SHM_COLOUR - 1)) 461 + if (old_addr == 0 || (old_addr & (SHM_COLOUR - 1)) 548 462 != (addr & (SHM_COLOUR - 1))) { 549 - for (i = 0; i < nr; i++) 550 - __flush_cache_page(vma, 551 - addr + i * PAGE_SIZE, 552 - (pfn + i) * PAGE_SIZE); 553 - /* 554 - * Software is allowed to have any number 555 - * of private mappings to a page. 556 - */ 557 - if (!(vma->vm_flags & VM_SHARED)) 558 - continue; 559 - if (old_addr) 560 - pr_err("INEQUIVALENT ALIASES 0x%lx and 0x%lx in file %pD\n", 561 - old_addr, addr, vma->vm_file); 562 - if (nr == folio_nr_pages(folio)) 563 - old_addr = addr; 564 - } 463 + for (i = 0; i < nr; i++) 464 + __flush_cache_page(vma, 465 + addr + i * PAGE_SIZE, 466 + (pfn + i) * PAGE_SIZE); 467 + /* 468 + * Software is allowed to have any number 469 + * of private mappings to a page. 
470 + */ 471 + if (!(vma->vm_flags & VM_SHARED)) 472 + continue; 473 + if (old_addr) 474 + pr_err("INEQUIVALENT ALIASES 0x%lx and 0x%lx in file %pD\n", 475 + old_addr, addr, vma->vm_file); 476 + if (nr == folio_nr_pages(folio)) 477 + old_addr = addr; 565 478 } 566 479 WARN_ON(++count == 4096); 567 480 } ··· 624 591 extern void clear_user_page_asm(void *, unsigned long); 625 592 extern void copy_user_page_asm(void *, void *, unsigned long); 626 593 627 - void flush_kernel_dcache_page_addr(const void *addr) 628 - { 629 - unsigned long flags; 630 - 631 - flush_kernel_dcache_page_asm(addr); 632 - purge_tlb_start(flags); 633 - pdtlb(SR_KERNEL, addr); 634 - purge_tlb_end(flags); 635 - } 636 - EXPORT_SYMBOL(flush_kernel_dcache_page_addr); 637 - 638 594 static void flush_cache_page_if_present(struct vm_area_struct *vma, 639 - unsigned long vmaddr, unsigned long pfn) 595 + unsigned long vmaddr) 640 596 { 597 + #if CONFIG_FLUSH_PAGE_ACCESSED 641 598 bool needs_flush = false; 642 - pte_t *ptep; 599 + pte_t *ptep, pte; 643 600 644 - /* 645 - * The pte check is racy and sometimes the flush will trigger 646 - * a non-access TLB miss. Hopefully, the page has already been 647 - * flushed. 
648 - */ 649 601 ptep = get_ptep(vma->vm_mm, vmaddr); 650 602 if (ptep) { 651 - needs_flush = pte_needs_flush(*ptep); 603 + pte = ptep_get(ptep); 604 + needs_flush = pte_needs_flush(pte); 652 605 pte_unmap(ptep); 653 606 } 654 607 if (needs_flush) 655 - flush_cache_page(vma, vmaddr, pfn); 608 + __flush_cache_page(vma, vmaddr, PFN_PHYS(pte_pfn(pte))); 609 + #else 610 + struct mm_struct *mm = vma->vm_mm; 611 + unsigned long physaddr = get_upa(mm, vmaddr); 612 + 613 + if (physaddr) 614 + __flush_cache_page(vma, vmaddr, PAGE_ALIGN_DOWN(physaddr)); 615 + #endif 656 616 } 657 617 658 618 void copy_user_highpage(struct page *to, struct page *from, ··· 655 629 656 630 kfrom = kmap_local_page(from); 657 631 kto = kmap_local_page(to); 658 - flush_cache_page_if_present(vma, vaddr, page_to_pfn(from)); 632 + __flush_cache_page(vma, vaddr, PFN_PHYS(page_to_pfn(from))); 659 633 copy_page_asm(kto, kfrom); 660 634 kunmap_local(kto); 661 635 kunmap_local(kfrom); ··· 664 638 void copy_to_user_page(struct vm_area_struct *vma, struct page *page, 665 639 unsigned long user_vaddr, void *dst, void *src, int len) 666 640 { 667 - flush_cache_page_if_present(vma, user_vaddr, page_to_pfn(page)); 641 + __flush_cache_page(vma, user_vaddr, PFN_PHYS(page_to_pfn(page))); 668 642 memcpy(dst, src, len); 669 - flush_kernel_dcache_range_asm((unsigned long)dst, (unsigned long)dst + len); 643 + flush_kernel_dcache_page_addr(PTR_PAGE_ALIGN_DOWN(dst)); 670 644 } 671 645 672 646 void copy_from_user_page(struct vm_area_struct *vma, struct page *page, 673 647 unsigned long user_vaddr, void *dst, void *src, int len) 674 648 { 675 - flush_cache_page_if_present(vma, user_vaddr, page_to_pfn(page)); 649 + __flush_cache_page(vma, user_vaddr, PFN_PHYS(page_to_pfn(page))); 676 650 memcpy(dst, src, len); 651 + flush_kernel_dcache_page_addr(PTR_PAGE_ALIGN_DOWN(src)); 677 652 } 678 653 679 654 /* __flush_tlb_range() ··· 708 681 709 682 static void flush_cache_pages(struct vm_area_struct *vma, unsigned long start, 
unsigned long end) 710 683 { 711 - unsigned long addr, pfn; 712 - pte_t *ptep; 684 + unsigned long addr; 713 685 714 - for (addr = start; addr < end; addr += PAGE_SIZE) { 715 - bool needs_flush = false; 716 - /* 717 - * The vma can contain pages that aren't present. Although 718 - * the pte search is expensive, we need the pte to find the 719 - * page pfn and to check whether the page should be flushed. 720 - */ 721 - ptep = get_ptep(vma->vm_mm, addr); 722 - if (ptep) { 723 - needs_flush = pte_needs_flush(*ptep); 724 - pfn = pte_pfn(*ptep); 725 - pte_unmap(ptep); 726 - } 727 - if (needs_flush) { 728 - if (parisc_requires_coherency()) { 729 - flush_user_cache_page(vma, addr); 730 - } else { 731 - if (WARN_ON(!pfn_valid(pfn))) 732 - return; 733 - __flush_cache_page(vma, addr, PFN_PHYS(pfn)); 734 - } 735 - } 736 - } 686 + for (addr = start; addr < end; addr += PAGE_SIZE) 687 + flush_cache_page_if_present(vma, addr); 737 688 } 738 689 739 690 static inline unsigned long mm_total_size(struct mm_struct *mm) ··· 762 757 if (WARN_ON(IS_ENABLED(CONFIG_SMP) && arch_irqs_disabled())) 763 758 return; 764 759 flush_tlb_range(vma, start, end); 765 - flush_cache_all(); 760 + if (vma->vm_flags & VM_EXEC) 761 + flush_cache_all(); 762 + else 763 + flush_data_cache(); 766 764 return; 767 765 } 768 766 769 - flush_cache_pages(vma, start, end); 767 + flush_cache_pages(vma, start & PAGE_MASK, end); 770 768 } 771 769 772 770 void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long pfn) 773 771 { 774 - if (WARN_ON(!pfn_valid(pfn))) 775 - return; 776 - if (parisc_requires_coherency()) 777 - flush_user_cache_page(vma, vmaddr); 778 - else 779 - __flush_cache_page(vma, vmaddr, PFN_PHYS(pfn)); 772 + __flush_cache_page(vma, vmaddr, PFN_PHYS(pfn)); 780 773 } 781 774 782 775 void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr) ··· 782 779 if (!PageAnon(page)) 783 780 return; 784 781 785 - if (parisc_requires_coherency()) { 786 - if 
(vma->vm_flags & VM_SHARED) 787 - flush_data_cache(); 788 - else 789 - flush_user_cache_page(vma, vmaddr); 782 + __flush_cache_page(vma, vmaddr, PFN_PHYS(page_to_pfn(page))); 783 + } 784 + 785 + int ptep_clear_flush_young(struct vm_area_struct *vma, unsigned long addr, 786 + pte_t *ptep) 787 + { 788 + pte_t pte = ptep_get(ptep); 789 + 790 + if (!pte_young(pte)) 791 + return 0; 792 + set_pte(ptep, pte_mkold(pte)); 793 + #if CONFIG_FLUSH_PAGE_ACCESSED 794 + __flush_cache_page(vma, addr, PFN_PHYS(pte_pfn(pte))); 795 + #endif 796 + return 1; 797 + } 798 + 799 + /* 800 + * After a PTE is cleared, we have no way to flush the cache for 801 + * the physical page. On PA8800 and PA8900 processors, these lines 802 + * can cause random cache corruption. Thus, we must flush the cache 803 + * as well as the TLB when clearing a PTE that's valid. 804 + */ 805 + pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long addr, 806 + pte_t *ptep) 807 + { 808 + struct mm_struct *mm = (vma)->vm_mm; 809 + pte_t pte = ptep_get_and_clear(mm, addr, ptep); 810 + unsigned long pfn = pte_pfn(pte); 811 + 812 + if (pfn_valid(pfn)) 813 + __flush_cache_page(vma, addr, PFN_PHYS(pfn)); 814 + else if (pte_accessible(mm, pte)) 815 + flush_tlb_page(vma, addr); 816 + 817 + return pte; 818 + } 819 + 820 + /* 821 + * The physical address for pages in the ioremap case can be obtained 822 + * from the vm_struct struct. I wasn't able to successfully handle the 823 + * vmalloc and vmap cases. We have an array of struct page pointers in 824 + * the uninitialized vmalloc case but the flush failed using page_to_pfn. 
825 + */ 826 + void flush_cache_vmap(unsigned long start, unsigned long end) 827 + { 828 + unsigned long addr, physaddr; 829 + struct vm_struct *vm; 830 + 831 + /* Prevent cache move-in */ 832 + flush_tlb_kernel_range(start, end); 833 + 834 + if (end - start >= parisc_cache_flush_threshold) { 835 + flush_cache_all(); 790 836 return; 791 837 } 792 838 793 - flush_tlb_page(vma, vmaddr); 794 - preempt_disable(); 795 - flush_dcache_page_asm(page_to_phys(page), vmaddr); 796 - preempt_enable(); 797 - } 839 + if (WARN_ON_ONCE(!is_vmalloc_addr((void *)start))) { 840 + flush_cache_all(); 841 + return; 842 + } 798 843 844 + vm = find_vm_area((void *)start); 845 + if (WARN_ON_ONCE(!vm)) { 846 + flush_cache_all(); 847 + return; 848 + } 849 + 850 + /* The physical addresses of IOREMAP regions are contiguous */ 851 + if (vm->flags & VM_IOREMAP) { 852 + physaddr = vm->phys_addr; 853 + for (addr = start; addr < end; addr += PAGE_SIZE) { 854 + preempt_disable(); 855 + flush_dcache_page_asm(physaddr, start); 856 + flush_icache_page_asm(physaddr, start); 857 + preempt_enable(); 858 + physaddr += PAGE_SIZE; 859 + } 860 + return; 861 + } 862 + 863 + flush_cache_all(); 864 + } 865 + EXPORT_SYMBOL(flush_cache_vmap); 866 + 867 + /* 868 + * The vm_struct has been retired and the page table is set up. The 869 + * last page in the range is a guard page. Its physical address can't 870 + * be determined using lpa, so there is no way to flush the range 871 + * using flush_dcache_page_asm. 872 + */ 873 + void flush_cache_vunmap(unsigned long start, unsigned long end) 874 + { 875 + /* Prevent cache move-in */ 876 + flush_tlb_kernel_range(start, end); 877 + flush_data_cache(); 878 + } 879 + EXPORT_SYMBOL(flush_cache_vunmap); 880 + 881 + /* 882 + * On systems with PA8800/PA8900 processors, there is no way to flush 883 + * a vmap range other than using the architected loop to flush the 884 + * entire cache. The page directory is not set up, so we can't use 885 + * fdc, etc. 
FDCE/FICE don't work to flush a portion of the cache. 886 + * L2 is physically indexed but FDCE/FICE instructions in virtual 887 + * mode output their virtual address on the core bus, not their 888 + * real address. As a result, the L2 cache index formed from the 889 + * virtual address will most likely not be the same as the L2 index 890 + * formed from the real address. 891 + */ 799 892 void flush_kernel_vmap_range(void *vaddr, int size) 800 893 { 801 894 unsigned long start = (unsigned long)vaddr; 802 895 unsigned long end = start + size; 803 896 804 - if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) && 805 - (unsigned long)size >= parisc_cache_flush_threshold) { 806 - flush_tlb_kernel_range(start, end); 807 - flush_data_cache(); 897 + flush_tlb_kernel_range(start, end); 898 + 899 + if (!static_branch_likely(&parisc_has_dcache)) 900 + return; 901 + 902 + /* If interrupts are disabled, we can only do local flush */ 903 + if (WARN_ON(IS_ENABLED(CONFIG_SMP) && arch_irqs_disabled())) { 904 + flush_data_cache_local(NULL); 808 905 return; 809 906 } 810 907 811 - flush_kernel_dcache_range_asm(start, end); 812 - flush_tlb_kernel_range(start, end); 908 + flush_data_cache(); 813 909 } 814 910 EXPORT_SYMBOL(flush_kernel_vmap_range); 815 911 ··· 920 818 /* Ensure DMA is complete */ 921 819 asm_syncdma(); 922 820 923 - if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) && 924 - (unsigned long)size >= parisc_cache_flush_threshold) { 925 - flush_tlb_kernel_range(start, end); 926 - flush_data_cache(); 821 + flush_tlb_kernel_range(start, end); 822 + 823 + if (!static_branch_likely(&parisc_has_dcache)) 824 + return; 825 + 826 + /* If interrupts are disabled, we can only do local flush */ 827 + if (WARN_ON(IS_ENABLED(CONFIG_SMP) && arch_irqs_disabled())) { 828 + flush_data_cache_local(NULL); 927 829 return; 928 830 } 929 831 930 - purge_kernel_dcache_range_asm(start, end); 931 - flush_tlb_kernel_range(start, end); 832 + flush_data_cache(); 932 833 } 933 834 
EXPORT_SYMBOL(invalidate_kernel_vmap_range); 934 835
+24 -3
arch/s390/boot/startup.c
··· 384 384 void startup_kernel(void) 385 385 { 386 386 unsigned long kernel_size = vmlinux.image_size + vmlinux.bss_size; 387 - unsigned long nokaslr_offset_phys = mem_safe_offset(); 387 + unsigned long nokaslr_offset_phys, kaslr_large_page_offset; 388 388 unsigned long amode31_lma = 0; 389 389 unsigned long max_physmem_end; 390 390 unsigned long asce_limit; ··· 393 393 394 394 fixup_vmlinux_info(); 395 395 setup_lpp(); 396 + 397 + /* 398 + * Non-randomized kernel physical start address must be _SEGMENT_SIZE 399 + * aligned (see blow). 400 + */ 401 + nokaslr_offset_phys = ALIGN(mem_safe_offset(), _SEGMENT_SIZE); 396 402 safe_addr = PAGE_ALIGN(nokaslr_offset_phys + kernel_size); 397 403 398 404 /* ··· 431 425 save_ipl_cert_comp_list(); 432 426 rescue_initrd(safe_addr, ident_map_size); 433 427 434 - if (kaslr_enabled()) 435 - __kaslr_offset_phys = randomize_within_range(kernel_size, THREAD_SIZE, 0, ident_map_size); 428 + /* 429 + * __kaslr_offset_phys must be _SEGMENT_SIZE aligned, so the lower 430 + * 20 bits (the offset within a large page) are zero. Copy the last 431 + * 20 bits of __kaslr_offset, which is THREAD_SIZE aligned, to 432 + * __kaslr_offset_phys. 433 + * 434 + * With this the last 20 bits of __kaslr_offset_phys and __kaslr_offset 435 + * are identical, which is required to allow for large mappings of the 436 + * kernel image. 437 + */ 438 + kaslr_large_page_offset = __kaslr_offset & ~_SEGMENT_MASK; 439 + if (kaslr_enabled()) { 440 + unsigned long end = ident_map_size - kaslr_large_page_offset; 441 + 442 + __kaslr_offset_phys = randomize_within_range(kernel_size, _SEGMENT_SIZE, 0, end); 443 + } 436 444 if (!__kaslr_offset_phys) 437 445 __kaslr_offset_phys = nokaslr_offset_phys; 446 + __kaslr_offset_phys |= kaslr_large_page_offset; 438 447 kaslr_adjust_vmlinux_info(__kaslr_offset_phys); 439 448 physmem_reserve(RR_VMLINUX, __kaslr_offset_phys, kernel_size); 440 449 deploy_kernel((void *)__kaslr_offset_phys);
+9 -3
arch/s390/boot/vmem.c
··· 261 261 262 262 static bool large_allowed(enum populate_mode mode) 263 263 { 264 - return (mode == POPULATE_DIRECT) || (mode == POPULATE_IDENTITY); 264 + return (mode == POPULATE_DIRECT) || (mode == POPULATE_IDENTITY) || (mode == POPULATE_KERNEL); 265 265 } 266 266 267 267 static bool can_large_pud(pud_t *pu_dir, unsigned long addr, unsigned long end, 268 268 enum populate_mode mode) 269 269 { 270 + unsigned long size = end - addr; 271 + 270 272 return machine.has_edat2 && large_allowed(mode) && 271 - IS_ALIGNED(addr, PUD_SIZE) && (end - addr) >= PUD_SIZE; 273 + IS_ALIGNED(addr, PUD_SIZE) && (size >= PUD_SIZE) && 274 + IS_ALIGNED(_pa(addr, size, mode), PUD_SIZE); 272 275 } 273 276 274 277 static bool can_large_pmd(pmd_t *pm_dir, unsigned long addr, unsigned long end, 275 278 enum populate_mode mode) 276 279 { 280 + unsigned long size = end - addr; 281 + 277 282 return machine.has_edat1 && large_allowed(mode) && 278 - IS_ALIGNED(addr, PMD_SIZE) && (end - addr) >= PMD_SIZE; 283 + IS_ALIGNED(addr, PMD_SIZE) && (size >= PMD_SIZE) && 284 + IS_ALIGNED(_pa(addr, size, mode), PMD_SIZE); 279 285 } 280 286 281 287 static void pgtable_pte_populate(pmd_t *pmd, unsigned long addr, unsigned long end,
+1
arch/s390/boot/vmlinux.lds.S
··· 109 109 #ifdef CONFIG_KERNEL_UNCOMPRESSED 110 110 . = ALIGN(PAGE_SIZE); 111 111 . += AMODE31_SIZE; /* .amode31 section */ 112 + . = ALIGN(1 << 20); /* _SEGMENT_SIZE */ 112 113 #else 113 114 . = ALIGN(8); 114 115 #endif
+34 -9
arch/s390/configs/debug_defconfig
··· 43 43 CONFIG_KEXEC=y 44 44 CONFIG_KEXEC_FILE=y 45 45 CONFIG_KEXEC_SIG=y 46 - CONFIG_CRASH_DUMP=y 47 46 CONFIG_LIVEPATCH=y 48 47 CONFIG_MARCH_Z13=y 49 48 CONFIG_NR_CPUS=512 ··· 50 51 CONFIG_HZ_100=y 51 52 CONFIG_CERT_STORE=y 52 53 CONFIG_EXPOLINE=y 54 + # CONFIG_EXPOLINE_EXTERN is not set 53 55 CONFIG_EXPOLINE_AUTO=y 54 56 CONFIG_CHSC_SCH=y 55 57 CONFIG_VFIO_CCW=m ··· 76 76 CONFIG_MODULE_UNLOAD_TAINT_TRACKING=y 77 77 CONFIG_MODVERSIONS=y 78 78 CONFIG_MODULE_SRCVERSION_ALL=y 79 + CONFIG_MODULE_SIG_SHA256=y 79 80 CONFIG_BLK_DEV_THROTTLING=y 80 81 CONFIG_BLK_WBT=y 81 82 CONFIG_BLK_CGROUP_IOLATENCY=y ··· 101 100 CONFIG_MEMORY_HOTREMOVE=y 102 101 CONFIG_KSM=y 103 102 CONFIG_TRANSPARENT_HUGEPAGE=y 104 - CONFIG_CMA_DEBUG=y 105 103 CONFIG_CMA_DEBUGFS=y 106 104 CONFIG_CMA_SYSFS=y 107 105 CONFIG_CMA_AREAS=7 ··· 119 119 CONFIG_XFRM_USER=m 120 120 CONFIG_NET_KEY=m 121 121 CONFIG_SMC_DIAG=m 122 + CONFIG_SMC_LO=y 122 123 CONFIG_INET=y 123 124 CONFIG_IP_MULTICAST=y 124 125 CONFIG_IP_ADVANCED_ROUTER=y ··· 134 133 CONFIG_IP_MROUTE_MULTIPLE_TABLES=y 135 134 CONFIG_IP_PIMSM_V1=y 136 135 CONFIG_IP_PIMSM_V2=y 137 - CONFIG_SYN_COOKIES=y 138 136 CONFIG_NET_IPVTI=m 139 137 CONFIG_INET_AH=m 140 138 CONFIG_INET_ESP=m ··· 167 167 CONFIG_NETFILTER_NETLINK_HOOK=m 168 168 CONFIG_NF_CONNTRACK=m 169 169 CONFIG_NF_CONNTRACK_SECMARK=y 170 + CONFIG_NF_CONNTRACK_ZONES=y 170 171 CONFIG_NF_CONNTRACK_PROCFS=y 171 172 CONFIG_NF_CONNTRACK_EVENTS=y 172 173 CONFIG_NF_CONNTRACK_TIMEOUT=y ··· 184 183 CONFIG_NF_CONNTRACK_TFTP=m 185 184 CONFIG_NF_CT_NETLINK=m 186 185 CONFIG_NF_CT_NETLINK_TIMEOUT=m 186 + CONFIG_NF_CT_NETLINK_HELPER=m 187 + CONFIG_NETFILTER_NETLINK_GLUE_CT=y 187 188 CONFIG_NF_TABLES=m 188 189 CONFIG_NF_TABLES_INET=y 190 + CONFIG_NF_TABLES_NETDEV=y 191 + CONFIG_NFT_NUMGEN=m 189 192 CONFIG_NFT_CT=m 193 + CONFIG_NFT_FLOW_OFFLOAD=m 194 + CONFIG_NFT_CONNLIMIT=m 190 195 CONFIG_NFT_LOG=m 191 196 CONFIG_NFT_LIMIT=m 197 + CONFIG_NFT_MASQ=m 198 + CONFIG_NFT_REDIR=m 192 199 CONFIG_NFT_NAT=m 200 + 
CONFIG_NFT_TUNNEL=m 201 + CONFIG_NFT_QUEUE=m 202 + CONFIG_NFT_QUOTA=m 193 203 CONFIG_NFT_REJECT=m 194 204 CONFIG_NFT_COMPAT=m 195 205 CONFIG_NFT_HASH=m 196 206 CONFIG_NFT_FIB_INET=m 197 - CONFIG_NETFILTER_XTABLES_COMPAT=y 207 + CONFIG_NFT_XFRM=m 208 + CONFIG_NFT_SOCKET=m 209 + CONFIG_NFT_OSF=m 210 + CONFIG_NFT_TPROXY=m 211 + CONFIG_NFT_SYNPROXY=m 212 + CONFIG_NFT_DUP_NETDEV=m 213 + CONFIG_NFT_FWD_NETDEV=m 214 + CONFIG_NFT_FIB_NETDEV=m 215 + CONFIG_NFT_REJECT_NETDEV=m 216 + CONFIG_NF_FLOW_TABLE_INET=m 217 + CONFIG_NF_FLOW_TABLE=m 218 + CONFIG_NF_FLOW_TABLE_PROCFS=y 198 219 CONFIG_NETFILTER_XT_SET=m 199 220 CONFIG_NETFILTER_XT_TARGET_AUDIT=m 200 221 CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m ··· 229 206 CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m 230 207 CONFIG_NETFILTER_XT_TARGET_LOG=m 231 208 CONFIG_NETFILTER_XT_TARGET_MARK=m 209 + CONFIG_NETFILTER_XT_TARGET_NETMAP=m 232 210 CONFIG_NETFILTER_XT_TARGET_NFLOG=m 233 211 CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m 212 + CONFIG_NETFILTER_XT_TARGET_REDIRECT=m 234 213 CONFIG_NETFILTER_XT_TARGET_TEE=m 235 214 CONFIG_NETFILTER_XT_TARGET_TPROXY=m 236 215 CONFIG_NETFILTER_XT_TARGET_TRACE=m ··· 241 216 CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m 242 217 CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m 243 218 CONFIG_NETFILTER_XT_MATCH_BPF=m 219 + CONFIG_NETFILTER_XT_MATCH_CGROUP=m 244 220 CONFIG_NETFILTER_XT_MATCH_CLUSTER=m 245 221 CONFIG_NETFILTER_XT_MATCH_COMMENT=m 246 222 CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m ··· 256 230 CONFIG_NETFILTER_XT_MATCH_ESP=m 257 231 CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m 258 232 CONFIG_NETFILTER_XT_MATCH_HELPER=m 233 + CONFIG_NETFILTER_XT_MATCH_IPCOMP=m 259 234 CONFIG_NETFILTER_XT_MATCH_IPRANGE=m 260 235 CONFIG_NETFILTER_XT_MATCH_IPVS=m 261 236 CONFIG_NETFILTER_XT_MATCH_LENGTH=m ··· 274 247 CONFIG_NETFILTER_XT_MATCH_RATEEST=m 275 248 CONFIG_NETFILTER_XT_MATCH_REALM=m 276 249 CONFIG_NETFILTER_XT_MATCH_RECENT=m 250 + CONFIG_NETFILTER_XT_MATCH_SOCKET=m 277 251 CONFIG_NETFILTER_XT_MATCH_STATE=m 278 252 
CONFIG_NETFILTER_XT_MATCH_STATISTIC=m 279 253 CONFIG_NETFILTER_XT_MATCH_STRING=m ··· 330 302 CONFIG_IP_NF_TARGET_TTL=m 331 303 CONFIG_IP_NF_RAW=m 332 304 CONFIG_IP_NF_SECURITY=m 333 - CONFIG_IP_NF_ARPTABLES=m 334 305 CONFIG_IP_NF_ARPFILTER=m 335 306 CONFIG_IP_NF_ARP_MANGLE=m 336 307 CONFIG_NFT_FIB_IPV6=m ··· 400 373 CONFIG_NET_ACT_GACT=m 401 374 CONFIG_GACT_PROB=y 402 375 CONFIG_NET_ACT_MIRRED=m 403 - CONFIG_NET_ACT_IPT=m 404 376 CONFIG_NET_ACT_NAT=m 405 377 CONFIG_NET_ACT_PEDIT=m 406 378 CONFIG_NET_ACT_SIMP=m ··· 488 462 CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y 489 463 CONFIG_DM_SWITCH=m 490 464 CONFIG_DM_INTEGRITY=m 465 + CONFIG_DM_VDO=m 491 466 CONFIG_NETDEVICES=y 492 467 CONFIG_BONDING=m 493 468 CONFIG_DUMMY=m ··· 601 574 CONFIG_WATCHDOG_NOWAYOUT=y 602 575 CONFIG_SOFT_WATCHDOG=m 603 576 CONFIG_DIAG288_WATCHDOG=m 604 - # CONFIG_DRM_DEBUG_MODESET_LOCK is not set 605 577 CONFIG_FB=y 606 578 # CONFIG_FB_DEVICE is not set 607 579 CONFIG_FRAMEBUFFER_CONSOLE=y ··· 671 645 CONFIG_VFAT_FS=m 672 646 CONFIG_EXFAT_FS=m 673 647 CONFIG_NTFS_FS=m 674 - CONFIG_NTFS_RW=y 675 648 CONFIG_PROC_KCORE=y 676 649 CONFIG_TMPFS=y 677 650 CONFIG_TMPFS_POSIX_ACL=y ··· 688 663 CONFIG_SQUASHFS_ZSTD=y 689 664 CONFIG_ROMFS_FS=m 690 665 CONFIG_NFS_FS=m 666 + CONFIG_NFS_V2=m 691 667 CONFIG_NFS_V3_ACL=y 692 668 CONFIG_NFS_V4=m 693 669 CONFIG_NFS_SWAP=y ··· 905 879 CONFIG_INTERVAL_TREE_TEST=m 906 880 CONFIG_PERCPU_TEST=m 907 881 CONFIG_ATOMIC64_SELFTEST=y 908 - CONFIG_STRING_SELFTEST=y 909 882 CONFIG_TEST_BITOPS=m 910 883 CONFIG_TEST_BPF=m
+34 -6
arch/s390/configs/defconfig
··· 41 41 CONFIG_KEXEC=y 42 42 CONFIG_KEXEC_FILE=y 43 43 CONFIG_KEXEC_SIG=y 44 - CONFIG_CRASH_DUMP=y 45 44 CONFIG_LIVEPATCH=y 46 45 CONFIG_MARCH_Z13=y 47 46 CONFIG_NR_CPUS=512 ··· 48 49 CONFIG_HZ_100=y 49 50 CONFIG_CERT_STORE=y 50 51 CONFIG_EXPOLINE=y 52 + # CONFIG_EXPOLINE_EXTERN is not set 51 53 CONFIG_EXPOLINE_AUTO=y 52 54 CONFIG_CHSC_SCH=y 53 55 CONFIG_VFIO_CCW=m ··· 71 71 CONFIG_MODULE_UNLOAD_TAINT_TRACKING=y 72 72 CONFIG_MODVERSIONS=y 73 73 CONFIG_MODULE_SRCVERSION_ALL=y 74 + CONFIG_MODULE_SIG_SHA256=y 74 75 CONFIG_BLK_DEV_THROTTLING=y 75 76 CONFIG_BLK_WBT=y 76 77 CONFIG_BLK_CGROUP_IOLATENCY=y ··· 111 110 CONFIG_XFRM_USER=m 112 111 CONFIG_NET_KEY=m 113 112 CONFIG_SMC_DIAG=m 113 + CONFIG_SMC_LO=y 114 114 CONFIG_INET=y 115 115 CONFIG_IP_MULTICAST=y 116 116 CONFIG_IP_ADVANCED_ROUTER=y ··· 126 124 CONFIG_IP_MROUTE_MULTIPLE_TABLES=y 127 125 CONFIG_IP_PIMSM_V1=y 128 126 CONFIG_IP_PIMSM_V2=y 129 - CONFIG_SYN_COOKIES=y 130 127 CONFIG_NET_IPVTI=m 131 128 CONFIG_INET_AH=m 132 129 CONFIG_INET_ESP=m ··· 159 158 CONFIG_NETFILTER_NETLINK_HOOK=m 160 159 CONFIG_NF_CONNTRACK=m 161 160 CONFIG_NF_CONNTRACK_SECMARK=y 161 + CONFIG_NF_CONNTRACK_ZONES=y 162 162 CONFIG_NF_CONNTRACK_PROCFS=y 163 163 CONFIG_NF_CONNTRACK_EVENTS=y 164 164 CONFIG_NF_CONNTRACK_TIMEOUT=y ··· 176 174 CONFIG_NF_CONNTRACK_TFTP=m 177 175 CONFIG_NF_CT_NETLINK=m 178 176 CONFIG_NF_CT_NETLINK_TIMEOUT=m 177 + CONFIG_NF_CT_NETLINK_HELPER=m 178 + CONFIG_NETFILTER_NETLINK_GLUE_CT=y 179 179 CONFIG_NF_TABLES=m 180 180 CONFIG_NF_TABLES_INET=y 181 + CONFIG_NF_TABLES_NETDEV=y 182 + CONFIG_NFT_NUMGEN=m 181 183 CONFIG_NFT_CT=m 184 + CONFIG_NFT_FLOW_OFFLOAD=m 185 + CONFIG_NFT_CONNLIMIT=m 182 186 CONFIG_NFT_LOG=m 183 187 CONFIG_NFT_LIMIT=m 188 + CONFIG_NFT_MASQ=m 189 + CONFIG_NFT_REDIR=m 184 190 CONFIG_NFT_NAT=m 191 + CONFIG_NFT_TUNNEL=m 192 + CONFIG_NFT_QUEUE=m 193 + CONFIG_NFT_QUOTA=m 185 194 CONFIG_NFT_REJECT=m 186 195 CONFIG_NFT_COMPAT=m 187 196 CONFIG_NFT_HASH=m 188 197 CONFIG_NFT_FIB_INET=m 189 - 
CONFIG_NETFILTER_XTABLES_COMPAT=y 198 + CONFIG_NFT_XFRM=m 199 + CONFIG_NFT_SOCKET=m 200 + CONFIG_NFT_OSF=m 201 + CONFIG_NFT_TPROXY=m 202 + CONFIG_NFT_SYNPROXY=m 203 + CONFIG_NFT_DUP_NETDEV=m 204 + CONFIG_NFT_FWD_NETDEV=m 205 + CONFIG_NFT_FIB_NETDEV=m 206 + CONFIG_NFT_REJECT_NETDEV=m 207 + CONFIG_NF_FLOW_TABLE_INET=m 208 + CONFIG_NF_FLOW_TABLE=m 209 + CONFIG_NF_FLOW_TABLE_PROCFS=y 190 210 CONFIG_NETFILTER_XT_SET=m 191 211 CONFIG_NETFILTER_XT_TARGET_AUDIT=m 192 212 CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m ··· 221 197 CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m 222 198 CONFIG_NETFILTER_XT_TARGET_LOG=m 223 199 CONFIG_NETFILTER_XT_TARGET_MARK=m 200 + CONFIG_NETFILTER_XT_TARGET_NETMAP=m 224 201 CONFIG_NETFILTER_XT_TARGET_NFLOG=m 225 202 CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m 203 + CONFIG_NETFILTER_XT_TARGET_REDIRECT=m 226 204 CONFIG_NETFILTER_XT_TARGET_TEE=m 227 205 CONFIG_NETFILTER_XT_TARGET_TPROXY=m 228 206 CONFIG_NETFILTER_XT_TARGET_TRACE=m ··· 233 207 CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m 234 208 CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m 235 209 CONFIG_NETFILTER_XT_MATCH_BPF=m 210 + CONFIG_NETFILTER_XT_MATCH_CGROUP=m 236 211 CONFIG_NETFILTER_XT_MATCH_CLUSTER=m 237 212 CONFIG_NETFILTER_XT_MATCH_COMMENT=m 238 213 CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m ··· 248 221 CONFIG_NETFILTER_XT_MATCH_ESP=m 249 222 CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m 250 223 CONFIG_NETFILTER_XT_MATCH_HELPER=m 224 + CONFIG_NETFILTER_XT_MATCH_IPCOMP=m 251 225 CONFIG_NETFILTER_XT_MATCH_IPRANGE=m 252 226 CONFIG_NETFILTER_XT_MATCH_IPVS=m 253 227 CONFIG_NETFILTER_XT_MATCH_LENGTH=m ··· 266 238 CONFIG_NETFILTER_XT_MATCH_RATEEST=m 267 239 CONFIG_NETFILTER_XT_MATCH_REALM=m 268 240 CONFIG_NETFILTER_XT_MATCH_RECENT=m 241 + CONFIG_NETFILTER_XT_MATCH_SOCKET=m 269 242 CONFIG_NETFILTER_XT_MATCH_STATE=m 270 243 CONFIG_NETFILTER_XT_MATCH_STATISTIC=m 271 244 CONFIG_NETFILTER_XT_MATCH_STRING=m ··· 322 293 CONFIG_IP_NF_TARGET_TTL=m 323 294 CONFIG_IP_NF_RAW=m 324 295 CONFIG_IP_NF_SECURITY=m 325 - CONFIG_IP_NF_ARPTABLES=m 326 296 
CONFIG_IP_NF_ARPFILTER=m 327 297 CONFIG_IP_NF_ARP_MANGLE=m 328 298 CONFIG_NFT_FIB_IPV6=m ··· 391 363 CONFIG_NET_ACT_GACT=m 392 364 CONFIG_GACT_PROB=y 393 365 CONFIG_NET_ACT_MIRRED=m 394 - CONFIG_NET_ACT_IPT=m 395 366 CONFIG_NET_ACT_NAT=m 396 367 CONFIG_NET_ACT_PEDIT=m 397 368 CONFIG_NET_ACT_SIMP=m ··· 479 452 CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG=y 480 453 CONFIG_DM_SWITCH=m 481 454 CONFIG_DM_INTEGRITY=m 455 + CONFIG_DM_VDO=m 482 456 CONFIG_NETDEVICES=y 483 457 CONFIG_BONDING=m 484 458 CONFIG_DUMMY=m ··· 658 630 CONFIG_VFAT_FS=m 659 631 CONFIG_EXFAT_FS=m 660 632 CONFIG_NTFS_FS=m 661 - CONFIG_NTFS_RW=y 662 633 CONFIG_PROC_KCORE=y 663 634 CONFIG_TMPFS=y 664 635 CONFIG_TMPFS_POSIX_ACL=y ··· 676 649 CONFIG_SQUASHFS_ZSTD=y 677 650 CONFIG_ROMFS_FS=m 678 651 CONFIG_NFS_FS=m 652 + CONFIG_NFS_V2=m 679 653 CONFIG_NFS_V3_ACL=y 680 654 CONFIG_NFS_V4=m 681 655 CONFIG_NFS_SWAP=y
+1 -4
arch/s390/configs/zfcpdump_defconfig
··· 9 9 CONFIG_BLK_DEV_INITRD=y 10 10 CONFIG_CC_OPTIMIZE_FOR_SIZE=y 11 11 CONFIG_KEXEC=y 12 - CONFIG_CRASH_DUMP=y 13 12 CONFIG_MARCH_Z13=y 14 13 CONFIG_NR_CPUS=2 15 14 CONFIG_HZ_100=y 16 15 # CONFIG_CHSC_SCH is not set 17 16 # CONFIG_SCM_BUS is not set 17 + # CONFIG_AP is not set 18 18 # CONFIG_PFAULT is not set 19 19 # CONFIG_S390_HYPFS is not set 20 20 # CONFIG_VIRTUALIZATION is not set 21 21 # CONFIG_S390_GUEST is not set 22 22 # CONFIG_SECCOMP is not set 23 - # CONFIG_GCC_PLUGINS is not set 24 23 # CONFIG_BLOCK_LEGACY_AUTOLOAD is not set 25 24 CONFIG_PARTITION_ADVANCED=y 26 25 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set 27 26 # CONFIG_SWAP is not set 28 27 # CONFIG_COMPAT_BRK is not set 29 - # CONFIG_COMPACTION is not set 30 - # CONFIG_MIGRATION is not set 31 28 CONFIG_NET=y 32 29 # CONFIG_IUCV is not set 33 30 # CONFIG_PCPU_DEV_REFCNT is not set
+2 -2
arch/x86/boot/compressed/Makefile
··· 105 105 106 106 vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o 107 107 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o 108 - vmlinux-objs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a 108 + vmlinux-libs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a 109 109 110 - $(obj)/vmlinux: $(vmlinux-objs-y) FORCE 110 + $(obj)/vmlinux: $(vmlinux-objs-y) $(vmlinux-libs-y) FORCE 111 111 $(call if_changed,ld) 112 112 113 113 OBJCOPYFLAGS_vmlinux.bin := -R .comment -S
-1
arch/x86/include/asm/efi.h
··· 401 401 struct efi_memory_map_data *data); 402 402 extern void __efi_memmap_free(u64 phys, unsigned long size, 403 403 unsigned long flags); 404 - #define __efi_memmap_free __efi_memmap_free 405 404 406 405 extern int __init efi_memmap_install(struct efi_memory_map_data *data); 407 406 extern int __init efi_memmap_split_count(efi_memory_desc_t *md,
+2 -2
arch/x86/include/asm/uaccess.h
··· 78 78 int __ret_gu; \ 79 79 register __inttype(*(ptr)) __val_gu asm("%"_ASM_DX); \ 80 80 __chk_user_ptr(ptr); \ 81 - asm volatile("call __" #fn "_%c4" \ 81 + asm volatile("call __" #fn "_%c[size]" \ 82 82 : "=a" (__ret_gu), "=r" (__val_gu), \ 83 83 ASM_CALL_CONSTRAINT \ 84 - : "0" (ptr), "i" (sizeof(*(ptr)))); \ 84 + : "0" (ptr), [size] "i" (sizeof(*(ptr)))); \ 85 85 instrument_get_user(__val_gu); \ 86 86 (x) = (__force __typeof__(*(ptr))) __val_gu; \ 87 87 __builtin_expect(__ret_gu, 0); \
+5 -1
arch/x86/lib/getuser.S
··· 44 44 or %rdx, %rax 45 45 .else 46 46 cmp $TASK_SIZE_MAX-\size+1, %eax 47 + .if \size != 8 47 48 jae .Lbad_get_user 49 + .else 50 + jae .Lbad_get_user_8 51 + .endif 48 52 sbb %edx, %edx /* array_index_mask_nospec() */ 49 53 and %edx, %eax 50 54 .endif ··· 158 154 #ifdef CONFIG_X86_32 159 155 SYM_CODE_START_LOCAL(__get_user_8_handle_exception) 160 156 ASM_CLAC 161 - bad_get_user_8: 157 + .Lbad_get_user_8: 162 158 xor %edx,%edx 163 159 xor %ecx,%ecx 164 160 mov $(-EFAULT),%_ASM_AX
+11 -1
arch/x86/platform/efi/memmap.c
··· 92 92 */ 93 93 int __init efi_memmap_install(struct efi_memory_map_data *data) 94 94 { 95 + unsigned long size = efi.memmap.desc_size * efi.memmap.nr_map; 96 + unsigned long flags = efi.memmap.flags; 97 + u64 phys = efi.memmap.phys_map; 98 + int ret; 99 + 95 100 efi_memmap_unmap(); 96 101 97 102 if (efi_enabled(EFI_PARAVIRT)) 98 103 return 0; 99 104 100 - return __efi_memmap_init(data); 105 + ret = __efi_memmap_init(data); 106 + if (ret) 107 + return ret; 108 + 109 + __efi_memmap_free(phys, size, flags); 110 + return 0; 101 111 } 102 112 103 113 /**
+24 -2
block/bio-integrity.c
··· 144 144 struct bio_integrity_payload *bip = bio_integrity(bio); 145 145 struct bio_set *bs = bio->bi_pool; 146 146 147 + if (bip->bip_flags & BIP_INTEGRITY_USER) 148 + return; 147 149 if (bip->bip_flags & BIP_BLOCK_INTEGRITY) 148 150 kfree(bvec_virt(bip->bip_vec)); 149 - else if (bip->bip_flags & BIP_INTEGRITY_USER) 150 - bio_integrity_unmap_user(bip); 151 151 152 152 __bio_integrity_free(bs, bip); 153 153 bio->bi_integrity = NULL; 154 154 bio->bi_opf &= ~REQ_INTEGRITY; 155 155 } 156 + 157 + /** 158 + * bio_integrity_unmap_free_user - Unmap and free bio user integrity payload 159 + * @bio: bio containing bip to be unmapped and freed 160 + * 161 + * Description: Used to unmap and free the user mapped integrity portion of a 162 + * bio. Submitter attaching the user integrity buffer is responsible for 163 + * unmapping and freeing it during completion. 164 + */ 165 + void bio_integrity_unmap_free_user(struct bio *bio) 166 + { 167 + struct bio_integrity_payload *bip = bio_integrity(bio); 168 + struct bio_set *bs = bio->bi_pool; 169 + 170 + if (WARN_ON_ONCE(!(bip->bip_flags & BIP_INTEGRITY_USER))) 171 + return; 172 + bio_integrity_unmap_user(bip); 173 + __bio_integrity_free(bs, bip); 174 + bio->bi_integrity = NULL; 175 + bio->bi_opf &= ~REQ_INTEGRITY; 176 + } 177 + EXPORT_SYMBOL(bio_integrity_unmap_free_user); 156 178 157 179 /** 158 180 * bio_integrity_add_page - Attach integrity metadata
+2 -1
block/blk-flush.c
··· 185 185 /* queue for flush */ 186 186 if (list_empty(pending)) 187 187 fq->flush_pending_since = jiffies; 188 - list_move_tail(&rq->queuelist, pending); 188 + list_add_tail(&rq->queuelist, pending); 189 189 break; 190 190 191 191 case REQ_FSEQ_DATA: ··· 263 263 unsigned int seq = blk_flush_cur_seq(rq); 264 264 265 265 BUG_ON(seq != REQ_FSEQ_PREFLUSH && seq != REQ_FSEQ_POSTFLUSH); 266 + list_del_init(&rq->queuelist); 266 267 blk_flush_complete_seq(rq, fq, seq, error); 267 268 } 268 269
+3
block/blk-zoned.c
··· 1552 1552 1553 1553 void disk_free_zone_resources(struct gendisk *disk) 1554 1554 { 1555 + if (!disk->zone_wplugs_pool) 1556 + return; 1557 + 1555 1558 cancel_work_sync(&disk->zone_wplugs_work); 1556 1559 1557 1560 if (disk->zone_wplugs_wq) {
+1 -1
block/sed-opal.c
··· 314 314 &key_type_user, key_name, true); 315 315 316 316 if (IS_ERR(kref)) 317 - ret = PTR_ERR(kref); 317 + return PTR_ERR(kref); 318 318 319 319 key = key_ref_to_ptr(kref); 320 320 down_read(&key->sem);
+4
drivers/acpi/acpica/acevents.h
··· 191 191 acpi_ev_execute_reg_methods(struct acpi_namespace_node *node, 192 192 acpi_adr_space_type space_id, u32 function); 193 193 194 + void 195 + acpi_ev_execute_orphan_reg_method(struct acpi_namespace_node *node, 196 + acpi_adr_space_type space_id); 197 + 194 198 acpi_status 195 199 acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function); 196 200
+1 -5
drivers/acpi/acpica/evregion.c
··· 20 20 21 21 /* Local prototypes */ 22 22 23 - static void 24 - acpi_ev_execute_orphan_reg_method(struct acpi_namespace_node *device_node, 25 - acpi_adr_space_type space_id); 26 - 27 23 static acpi_status 28 24 acpi_ev_reg_run(acpi_handle obj_handle, 29 25 u32 level, void *context, void **return_value); ··· 814 818 * 815 819 ******************************************************************************/ 816 820 817 - static void 821 + void 818 822 acpi_ev_execute_orphan_reg_method(struct acpi_namespace_node *device_node, 819 823 acpi_adr_space_type space_id) 820 824 {
+54
drivers/acpi/acpica/evxfregn.c
··· 306 306 } 307 307 308 308 ACPI_EXPORT_SYMBOL(acpi_execute_reg_methods) 309 + 310 + /******************************************************************************* 311 + * 312 + * FUNCTION: acpi_execute_orphan_reg_method 313 + * 314 + * PARAMETERS: device - Handle for the device 315 + * space_id - The address space ID 316 + * 317 + * RETURN: Status 318 + * 319 + * DESCRIPTION: Execute an "orphan" _REG method that appears under an ACPI 320 + * device. This is a _REG method that has no corresponding region 321 + * within the device's scope. 322 + * 323 + ******************************************************************************/ 324 + acpi_status 325 + acpi_execute_orphan_reg_method(acpi_handle device, acpi_adr_space_type space_id) 326 + { 327 + struct acpi_namespace_node *node; 328 + acpi_status status; 329 + 330 + ACPI_FUNCTION_TRACE(acpi_execute_orphan_reg_method); 331 + 332 + /* Parameter validation */ 333 + 334 + if (!device) { 335 + return_ACPI_STATUS(AE_BAD_PARAMETER); 336 + } 337 + 338 + status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE); 339 + if (ACPI_FAILURE(status)) { 340 + return_ACPI_STATUS(status); 341 + } 342 + 343 + /* Convert and validate the device handle */ 344 + 345 + node = acpi_ns_validate_handle(device); 346 + if (node) { 347 + 348 + /* 349 + * If an "orphan" _REG method is present in the device's scope 350 + * for the given address space ID, run it. 351 + */ 352 + 353 + acpi_ev_execute_orphan_reg_method(node, space_id); 354 + } else { 355 + status = AE_BAD_PARAMETER; 356 + } 357 + 358 + (void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE); 359 + return_ACPI_STATUS(status); 360 + } 361 + 362 + ACPI_EXPORT_SYMBOL(acpi_execute_orphan_reg_method)
+3
drivers/acpi/ec.c
··· 1507 1507 1508 1508 if (call_reg && !test_bit(EC_FLAGS_EC_REG_CALLED, &ec->flags)) { 1509 1509 acpi_execute_reg_methods(scope_handle, ACPI_ADR_SPACE_EC); 1510 + if (scope_handle != ec->handle) 1511 + acpi_execute_orphan_reg_method(ec->handle, ACPI_ADR_SPACE_EC); 1512 + 1510 1513 set_bit(EC_FLAGS_EC_REG_CALLED, &ec->flags); 1511 1514 } 1512 1515
+7 -1
drivers/acpi/thermal.c
··· 168 168 169 169 static int acpi_thermal_temp(struct acpi_thermal *tz, int temp_deci_k) 170 170 { 171 + int temp; 172 + 171 173 if (temp_deci_k == THERMAL_TEMP_INVALID) 172 174 return THERMAL_TEMP_INVALID; 173 175 174 - return deci_kelvin_to_millicelsius_with_offset(temp_deci_k, 176 + temp = deci_kelvin_to_millicelsius_with_offset(temp_deci_k, 175 177 tz->kelvin_offset); 178 + if (temp <= 0) 179 + return THERMAL_TEMP_INVALID; 180 + 181 + return temp; 176 182 } 177 183 178 184 static bool acpi_thermal_trip_valid(struct acpi_thermal_trip *acpi_trip)
+10 -14
drivers/acpi/x86/utils.c
··· 206 206 } 207 207 208 208 /* 209 - * AMD systems from Renoir and Lucienne *require* that the NVME controller 209 + * AMD systems from Renoir onwards *require* that the NVME controller 210 210 * is put into D3 over a Modern Standby / suspend-to-idle cycle. 211 211 * 212 212 * This is "typically" accomplished using the `StorageD3Enable` 213 213 * property in the _DSD that is checked via the `acpi_storage_d3` function 214 - * but this property was introduced after many of these systems launched 215 - * and most OEM systems don't have it in their BIOS. 214 + * but some OEM systems still don't have it in their BIOS. 216 215 * 217 216 * The Microsoft documentation for StorageD3Enable mentioned that Windows has 218 - * a hardcoded allowlist for D3 support, which was used for these platforms. 217 + * a hardcoded allowlist for D3 support as well as a registry key to override 218 + * the BIOS, which has been used for these cases. 219 219 * 220 220 * This allows quirking on Linux in a similar fashion. 221 221 * ··· 228 228 * https://bugzilla.kernel.org/show_bug.cgi?id=216773 229 229 * https://bugzilla.kernel.org/show_bug.cgi?id=217003 230 230 * 2) On at least one HP system StorageD3Enable is missing on the second NVME 231 - disk in the system. 231 + * disk in the system. 232 + * 3) On at least one HP Rembrandt system StorageD3Enable is missing on the only 233 + * NVME device. 
232 234  */
 233 - static const struct x86_cpu_id storage_d3_cpu_ids[] = {
 234 - X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 24, NULL), /* Picasso */
 235 - X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 96, NULL), /* Renoir */
 236 - X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 104, NULL), /* Lucienne */
 237 - X86_MATCH_VENDOR_FAM_MODEL(AMD, 25, 80, NULL), /* Cezanne */
 238 - {}
 239 - };
 240 -
 241 235 bool force_storage_d3(void)
 242 236 {
 243 - return x86_match_cpu(storage_d3_cpu_ids);
 237 + if (!cpu_feature_enabled(X86_FEATURE_ZEN))
 238 + return false;
 239 + return acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0;
 244 240 }
 245 241
 246 242 /*
+4 -4
drivers/ata/libata-scsi.c
··· 1831 1831 2 1832 1832 }; 1833 1833 1834 - /* set scsi removable (RMB) bit per ata bit, or if the 1835 - * AHCI port says it's external (Hotplug-capable, eSATA). 1834 + /* 1835 + * Set the SCSI Removable Media Bit (RMB) if the ATA removable media 1836 + * device bit (obsolete since ATA-8 ACS) is set. 1836 1837 */ 1837 - if (ata_id_removable(args->id) || 1838 - (args->dev->link->ap->pflags & ATA_PFLAG_EXTERNAL)) 1838 + if (ata_id_removable(args->id)) 1839 1839 hdr[1] |= (1 << 7); 1840 1840 1841 1841 if (args->dev->class == ATA_DEV_ZAC) {
+3 -45
drivers/base/core.c
··· 2739 2739 if (!env) 2740 2740 return -ENOMEM; 2741 2741 2742 + /* Synchronize with really_probe() */ 2743 + device_lock(dev); 2742 2744 /* let the kset specific function add its keys */ 2743 2745 retval = kset->uevent_ops->uevent(&dev->kobj, env); 2746 + device_unlock(dev); 2744 2747 if (retval) 2745 2748 goto out; 2746 2749 ··· 2848 2845 sysfs_remove_group(&dev->kobj, group); 2849 2846 } 2850 2847 2851 - static void devm_attr_groups_remove(struct device *dev, void *res) 2852 - { 2853 - union device_attr_group_devres *devres = res; 2854 - const struct attribute_group **groups = devres->groups; 2855 - 2856 - dev_dbg(dev, "%s: removing groups %p\n", __func__, groups); 2857 - sysfs_remove_groups(&dev->kobj, groups); 2858 - } 2859 - 2860 2848 /** 2861 2849 * devm_device_add_group - given a device, create a managed attribute group 2862 2850 * @dev: The device to create the group for ··· 2879 2885 return 0; 2880 2886 } 2881 2887 EXPORT_SYMBOL_GPL(devm_device_add_group); 2882 - 2883 - /** 2884 - * devm_device_add_groups - create a bunch of managed attribute groups 2885 - * @dev: The device to create the group for 2886 - * @groups: The attribute groups to create, NULL terminated 2887 - * 2888 - * This function creates a bunch of managed attribute groups. If an error 2889 - * occurs when creating a group, all previously created groups will be 2890 - * removed, unwinding everything back to the original state when this 2891 - * function was called. It will explicitly warn and error if any of the 2892 - * attribute files being created already exist. 2893 - * 2894 - * Returns 0 on success or error code from sysfs_create_group on failure. 
2895 - */ 2896 - int devm_device_add_groups(struct device *dev, 2897 - const struct attribute_group **groups) 2898 - { 2899 - union device_attr_group_devres *devres; 2900 - int error; 2901 - 2902 - devres = devres_alloc(devm_attr_groups_remove, 2903 - sizeof(*devres), GFP_KERNEL); 2904 - if (!devres) 2905 - return -ENOMEM; 2906 - 2907 - error = sysfs_create_groups(&dev->kobj, groups); 2908 - if (error) { 2909 - devres_free(devres); 2910 - return error; 2911 - } 2912 - 2913 - devres->groups = groups; 2914 - devres_add(dev, devres); 2915 - return 0; 2916 - } 2917 - EXPORT_SYMBOL_GPL(devm_device_add_groups); 2918 2888 2919 2889 static int device_add_attrs(struct device *dev) 2920 2890 {
+23
drivers/block/loop.c
··· 302 302 return 0; 303 303 } 304 304 305 + static void loop_clear_limits(struct loop_device *lo, int mode) 306 + { 307 + struct queue_limits lim = queue_limits_start_update(lo->lo_queue); 308 + 309 + if (mode & FALLOC_FL_ZERO_RANGE) 310 + lim.max_write_zeroes_sectors = 0; 311 + 312 + if (mode & FALLOC_FL_PUNCH_HOLE) { 313 + lim.max_hw_discard_sectors = 0; 314 + lim.discard_granularity = 0; 315 + } 316 + 317 + queue_limits_commit_update(lo->lo_queue, &lim); 318 + } 319 + 305 320 static int lo_fallocate(struct loop_device *lo, struct request *rq, loff_t pos, 306 321 int mode) 307 322 { ··· 335 320 ret = file->f_op->fallocate(file, mode, pos, blk_rq_bytes(rq)); 336 321 if (unlikely(ret && ret != -EINVAL && ret != -EOPNOTSUPP)) 337 322 return -EIO; 323 + 324 + /* 325 + * We initially configure the limits in a hope that fallocate is 326 + * supported and clear them here if that turns out not to be true. 327 + */ 328 + if (unlikely(ret == -EOPNOTSUPP)) 329 + loop_clear_limits(lo, mode); 330 + 338 331 return ret; 339 332 } 340 333
+22 -29
drivers/block/nbd.c
··· 589 589 } 590 590 591 591 /* 592 - * Returns BLK_STS_RESOURCE if the caller should retry after a delay. Returns 593 - * -EAGAIN if the caller should requeue @cmd. Returns -EIO if sending failed. 592 + * Returns BLK_STS_RESOURCE if the caller should retry after a delay. 593 + * Returns BLK_STS_IOERR if sending failed. 594 594 */ 595 - static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index) 595 + static blk_status_t nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, 596 + int index) 596 597 { 597 598 struct request *req = blk_mq_rq_from_pdu(cmd); 598 599 struct nbd_config *config = nbd->config; ··· 615 614 616 615 type = req_to_nbd_cmd_type(req); 617 616 if (type == U32_MAX) 618 - return -EIO; 617 + return BLK_STS_IOERR; 619 618 620 619 if (rq_data_dir(req) == WRITE && 621 620 (config->flags & NBD_FLAG_READ_ONLY)) { 622 621 dev_err_ratelimited(disk_to_dev(nbd->disk), 623 622 "Write on read-only\n"); 624 - return -EIO; 623 + return BLK_STS_IOERR; 625 624 } 626 625 627 626 if (req->cmd_flags & REQ_FUA) ··· 675 674 nsock->sent = sent; 676 675 } 677 676 set_bit(NBD_CMD_REQUEUED, &cmd->flags); 678 - return (__force int)BLK_STS_RESOURCE; 677 + return BLK_STS_RESOURCE; 679 678 } 680 679 dev_err_ratelimited(disk_to_dev(nbd->disk), 681 680 "Send control failed (result %d)\n", result); 682 - return -EAGAIN; 681 + goto requeue; 683 682 } 684 683 send_pages: 685 684 if (type != NBD_CMD_WRITE) ··· 716 715 nsock->pending = req; 717 716 nsock->sent = sent; 718 717 set_bit(NBD_CMD_REQUEUED, &cmd->flags); 719 - return (__force int)BLK_STS_RESOURCE; 718 + return BLK_STS_RESOURCE; 720 719 } 721 720 dev_err(disk_to_dev(nbd->disk), 722 721 "Send data failed (result %d)\n", 723 722 result); 724 - return -EAGAIN; 723 + goto requeue; 725 724 } 726 725 /* 727 726 * The completion might already have come in, ··· 738 737 trace_nbd_payload_sent(req, handle); 739 738 nsock->pending = NULL; 740 739 nsock->sent = 0; 741 - return 0; 740 + 
__set_bit(NBD_CMD_INFLIGHT, &cmd->flags); 741 + return BLK_STS_OK; 742 + 743 + requeue: 744 + /* retry on a different socket */ 745 + dev_err_ratelimited(disk_to_dev(nbd->disk), 746 + "Request send failed, requeueing\n"); 747 + nbd_mark_nsock_dead(nbd, nsock, 1); 748 + nbd_requeue_cmd(cmd); 749 + return BLK_STS_OK; 742 750 } 743 751 744 752 static int nbd_read_reply(struct nbd_device *nbd, struct socket *sock, ··· 1028 1018 struct nbd_device *nbd = cmd->nbd; 1029 1019 struct nbd_config *config; 1030 1020 struct nbd_sock *nsock; 1031 - int ret; 1021 + blk_status_t ret; 1032 1022 1033 1023 lockdep_assert_held(&cmd->lock); 1034 1024 ··· 1082 1072 ret = BLK_STS_OK; 1083 1073 goto out; 1084 1074 } 1085 - /* 1086 - * Some failures are related to the link going down, so anything that 1087 - * returns EAGAIN can be retried on a different socket. 1088 - */ 1089 1075 ret = nbd_send_cmd(nbd, cmd, index); 1090 - /* 1091 - * Access to this flag is protected by cmd->lock, thus it's safe to set 1092 - * the flag after nbd_send_cmd() succeed to send request to server. 1093 - */ 1094 - if (!ret) 1095 - __set_bit(NBD_CMD_INFLIGHT, &cmd->flags); 1096 - else if (ret == -EAGAIN) { 1097 - dev_err_ratelimited(disk_to_dev(nbd->disk), 1098 - "Request send failed, requeueing\n"); 1099 - nbd_mark_nsock_dead(nbd, nsock, 1); 1100 - nbd_requeue_cmd(cmd); 1101 - ret = BLK_STS_OK; 1102 - } 1103 1076 out: 1104 1077 mutex_unlock(&nsock->tx_lock); 1105 1078 nbd_config_put(nbd); 1106 - return ret < 0 ? BLK_STS_IOERR : (__force blk_status_t)ret; 1079 + return ret; 1107 1080 } 1108 1081 1109 1082 static blk_status_t nbd_queue_rq(struct blk_mq_hw_ctx *hctx,
+12 -7
drivers/cpufreq/intel_pstate.c
··· 1302 1302 1303 1303 no_turbo = !!clamp_t(int, input, 0, 1); 1304 1304 1305 - if (no_turbo == global.no_turbo) 1306 - goto unlock_driver; 1307 - 1308 - if (global.turbo_disabled) { 1309 - pr_notice_once("Turbo disabled by BIOS or unavailable on processor\n"); 1305 + WRITE_ONCE(global.turbo_disabled, turbo_is_disabled()); 1306 + if (global.turbo_disabled && !no_turbo) { 1307 + pr_notice("Turbo disabled by BIOS or unavailable on processor\n"); 1310 1308 count = -EPERM; 1309 + if (global.no_turbo) 1310 + goto unlock_driver; 1311 + else 1312 + no_turbo = 1; 1313 + } 1314 + 1315 + if (no_turbo == global.no_turbo) { 1311 1316 goto unlock_driver; 1312 1317 } 1313 1318 ··· 1767 1762 u32 vid; 1768 1763 1769 1764 val = (u64)pstate << 8; 1770 - if (READ_ONCE(global.no_turbo) && !global.turbo_disabled) 1765 + if (READ_ONCE(global.no_turbo) && !READ_ONCE(global.turbo_disabled)) 1771 1766 val |= (u64)1 << 32; 1772 1767 1773 1768 vid_fp = cpudata->vid.min + mul_fp( ··· 1932 1927 u64 val; 1933 1928 1934 1929 val = (u64)pstate << 8; 1935 - if (READ_ONCE(global.no_turbo) && !global.turbo_disabled) 1930 + if (READ_ONCE(global.no_turbo) && !READ_ONCE(global.turbo_disabled)) 1936 1931 val |= (u64)1 << 32; 1937 1932 1938 1933 return val;
+1 -1
drivers/firewire/Kconfig
··· 11 11 This is the new-generation IEEE 1394 (FireWire) driver stack 12 12 a.k.a. Juju, a new implementation designed for robustness and 13 13 simplicity. 14 - See http://ieee1394.wiki.kernel.org/index.php/Juju_Migration 14 + See http://ieee1394.docs.kernel.org/en/latest/migration.html 15 15 for information about migration from the older Linux 1394 stack 16 16 to the new driver stack. 17 17
+3 -3
drivers/firewire/core-card.c
··· 222 222 int reg = short_reset ? 5 : 1; 223 223 int bit = short_reset ? PHY_BUS_SHORT_RESET : PHY_BUS_RESET; 224 224 225 - trace_bus_reset_initiate(card->generation, short_reset); 225 + trace_bus_reset_initiate(card->index, card->generation, short_reset); 226 226 227 227 return card->driver->update_phy_reg(card, reg, 0, bit); 228 228 } 229 229 230 230 void fw_schedule_bus_reset(struct fw_card *card, bool delayed, bool short_reset) 231 231 { 232 - trace_bus_reset_schedule(card->generation, short_reset); 232 + trace_bus_reset_schedule(card->index, card->generation, short_reset); 233 233 234 234 /* We don't try hard to sort out requests of long vs. short resets. */ 235 235 card->br_short = short_reset; ··· 249 249 /* Delay for 2s after last reset per IEEE 1394 clause 8.2.1. */ 250 250 if (card->reset_jiffies != 0 && 251 251 time_before64(get_jiffies_64(), card->reset_jiffies + 2 * HZ)) { 252 - trace_bus_reset_postpone(card->generation, card->br_short); 252 + trace_bus_reset_postpone(card->index, card->generation, card->br_short); 253 253 254 254 if (!queue_delayed_work(fw_workqueue, &card->br_work, 2 * HZ)) 255 255 fw_card_put(card);
+3 -3
drivers/firewire/core-cdev.c
··· 1559 1559 struct client *e_client = e->client; 1560 1560 u32 rcode; 1561 1561 1562 - trace_async_phy_outbound_complete((uintptr_t)packet, status, packet->generation, 1562 + trace_async_phy_outbound_complete((uintptr_t)packet, card->index, status, packet->generation, 1563 1563 packet->timestamp); 1564 1564 1565 1565 switch (status) { ··· 1659 1659 memcpy(pp->data, a->data, sizeof(a->data)); 1660 1660 } 1661 1661 1662 - trace_async_phy_outbound_initiate((uintptr_t)&e->p, e->p.generation, e->p.header[1], 1663 - e->p.header[2]); 1662 + trace_async_phy_outbound_initiate((uintptr_t)&e->p, card->index, e->p.generation, 1663 + e->p.header[1], e->p.header[2]); 1664 1664 1665 1665 card->driver->send_request(card, &e->p); 1666 1666
+1 -1
drivers/firewire/core-topology.c
··· 508 508 struct fw_node *local_node; 509 509 unsigned long flags; 510 510 511 - trace_bus_reset_handle(generation, node_id, bm_abdicate, self_ids, self_id_count); 511 + trace_bus_reset_handle(card->index, generation, node_id, bm_abdicate, self_ids, self_id_count); 512 512 513 513 spin_lock_irqsave(&card->lock, flags); 514 514
+16 -14
drivers/firewire/core-transaction.c
··· 174 174 struct fw_transaction *t = 175 175 container_of(packet, struct fw_transaction, packet); 176 176 177 - trace_async_request_outbound_complete((uintptr_t)t, packet->generation, packet->speed, 178 - status, packet->timestamp); 177 + trace_async_request_outbound_complete((uintptr_t)t, card->index, packet->generation, 178 + packet->speed, status, packet->timestamp); 179 179 180 180 switch (status) { 181 181 case ACK_COMPLETE: ··· 398 398 399 399 spin_unlock_irqrestore(&card->lock, flags); 400 400 401 - trace_async_request_outbound_initiate((uintptr_t)t, generation, speed, t->packet.header, payload, 401 + trace_async_request_outbound_initiate((uintptr_t)t, card->index, generation, speed, 402 + t->packet.header, payload, 402 403 tcode_is_read_request(tcode) ? 0 : length / 4); 403 404 404 405 card->driver->send_request(card, &t->packet); ··· 464 463 static void transmit_phy_packet_callback(struct fw_packet *packet, 465 464 struct fw_card *card, int status) 466 465 { 467 - trace_async_phy_outbound_complete((uintptr_t)packet, packet->generation, status, 466 + trace_async_phy_outbound_complete((uintptr_t)packet, card->index, packet->generation, status, 468 467 packet->timestamp); 469 468 complete(&phy_config_done); 470 469 } ··· 504 503 phy_config_packet.generation = generation; 505 504 reinit_completion(&phy_config_done); 506 505 507 - trace_async_phy_outbound_initiate((uintptr_t)&phy_config_packet, 506 + trace_async_phy_outbound_initiate((uintptr_t)&phy_config_packet, card->index, 508 507 phy_config_packet.generation, phy_config_packet.header[1], 509 508 phy_config_packet.header[2]); 510 509 ··· 675 674 { 676 675 struct fw_request *request = container_of(packet, struct fw_request, response); 677 676 678 - trace_async_response_outbound_complete((uintptr_t)request, packet->generation, 677 + trace_async_response_outbound_complete((uintptr_t)request, card->index, packet->generation, 679 678 packet->speed, status, packet->timestamp); 680 679 681 680 // Decrease the 
reference count since not at in-flight. ··· 880 879 // Increase the reference count so that the object is kept during in-flight. 881 880 fw_request_get(request); 882 881 883 - trace_async_response_outbound_initiate((uintptr_t)request, request->response.generation, 884 - request->response.speed, request->response.header, 885 - data, data ? data_length / 4 : 0); 882 + trace_async_response_outbound_initiate((uintptr_t)request, card->index, 883 + request->response.generation, request->response.speed, 884 + request->response.header, data, 885 + data ? data_length / 4 : 0); 886 886 887 887 card->driver->send_response(card, &request->response); 888 888 } ··· 997 995 998 996 tcode = async_header_get_tcode(p->header); 999 997 if (tcode_is_link_internal(tcode)) { 1000 - trace_async_phy_inbound((uintptr_t)p, p->generation, p->ack, p->timestamp, 998 + trace_async_phy_inbound((uintptr_t)p, card->index, p->generation, p->ack, p->timestamp, 1001 999 p->header[1], p->header[2]); 1002 1000 fw_cdev_handle_phy_packet(card, p); 1003 1001 return; ··· 1009 1007 return; 1010 1008 } 1011 1009 1012 - trace_async_request_inbound((uintptr_t)request, p->generation, p->speed, p->ack, 1013 - p->timestamp, p->header, request->data, 1010 + trace_async_request_inbound((uintptr_t)request, card->index, p->generation, p->speed, 1011 + p->ack, p->timestamp, p->header, request->data, 1014 1012 tcode_is_read_request(tcode) ? 0 : request->length / 4); 1015 1013 1016 1014 offset = async_header_get_offset(p->header); ··· 1080 1078 } 1081 1079 spin_unlock_irqrestore(&card->lock, flags); 1082 1080 1083 - trace_async_response_inbound((uintptr_t)t, p->generation, p->speed, p->ack, p->timestamp, 1084 - p->header, data, data_length / 4); 1081 + trace_async_response_inbound((uintptr_t)t, card->index, p->generation, p->speed, p->ack, 1082 + p->timestamp, p->header, data, data_length / 4); 1085 1083 1086 1084 if (!t) { 1087 1085 timed_out:
-9
drivers/firmware/efi/memmap.c
··· 15 15 #include <asm/early_ioremap.h> 16 16 #include <asm/efi.h> 17 17 18 - #ifndef __efi_memmap_free 19 - #define __efi_memmap_free(phys, size, flags) do { } while (0) 20 - #endif 21 - 22 18 /** 23 19 * __efi_memmap_init - Common code for mapping the EFI memory map 24 20 * @data: EFI memory map data ··· 46 50 pr_err("Could not map the memory map!\n"); 47 51 return -ENOMEM; 48 52 } 49 - 50 - if (efi.memmap.flags & (EFI_MEMMAP_MEMBLOCK | EFI_MEMMAP_SLAB)) 51 - __efi_memmap_free(efi.memmap.phys_map, 52 - efi.memmap.desc_size * efi.memmap.nr_map, 53 - efi.memmap.flags); 54 53 55 54 map.phys_map = data->phys_map; 56 55 map.nr_map = data->size / data->desc_size;
+1
drivers/gpu/drm/Kconfig
··· 450 450 config DRM_WERROR 451 451 bool "Compile the drm subsystem with warnings as errors" 452 452 depends on DRM && EXPERT 453 + depends on !WERROR 453 454 default n 454 455 help 455 456 A kernel build should not cause any compiler warnings, and this
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
··· 108 108 109 109 memset(&bp, 0, sizeof(bp)); 110 110 *obj = NULL; 111 + flags |= AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE; 111 112 112 113 bp.size = size; 113 114 bp.byte_align = alignment;
-2
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
··· 604 604 if (!amdgpu_bo_support_uswc(bo->flags)) 605 605 bo->flags &= ~AMDGPU_GEM_CREATE_CPU_GTT_USWC; 606 606 607 - bo->flags |= AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE; 608 - 609 607 bo->tbo.bdev = &adev->mman.bdev; 610 608 if (bp->domain & (AMDGPU_GEM_DOMAIN_GWS | AMDGPU_GEM_DOMAIN_OA | 611 609 AMDGPU_GEM_DOMAIN_GDS))
-8
drivers/gpu/drm/arm/display/komeda/komeda_dev.c
··· 12 12 #include <linux/platform_device.h> 13 13 #include <linux/pm_runtime.h> 14 14 #include <linux/dma-mapping.h> 15 - #ifdef CONFIG_DEBUG_FS 16 15 #include <linux/debugfs.h> 17 16 #include <linux/seq_file.h> 18 - #endif 19 17 20 18 #include <drm/drm_print.h> 21 19 ··· 41 43 42 44 DEFINE_SHOW_ATTRIBUTE(komeda_register); 43 45 44 - #ifdef CONFIG_DEBUG_FS 45 46 static void komeda_debugfs_init(struct komeda_dev *mdev) 46 47 { 47 48 if (!debugfs_initialized()) ··· 52 55 debugfs_create_x16("err_verbosity", 0664, mdev->debugfs_root, 53 56 &mdev->err_verbosity); 54 57 } 55 - #endif 56 58 57 59 static ssize_t 58 60 core_id_show(struct device *dev, struct device_attribute *attr, char *buf) ··· 261 265 262 266 mdev->err_verbosity = KOMEDA_DEV_PRINT_ERR_EVENTS; 263 267 264 - #ifdef CONFIG_DEBUG_FS 265 268 komeda_debugfs_init(mdev); 266 - #endif 267 269 268 270 return mdev; 269 271 ··· 280 286 281 287 sysfs_remove_group(&dev->kobj, &komeda_sysfs_attr_group); 282 288 283 - #ifdef CONFIG_DEBUG_FS 284 289 debugfs_remove_recursive(mdev->debugfs_root); 285 - #endif 286 290 287 291 if (mdev->aclk) 288 292 clk_prepare_enable(mdev->aclk);
+1 -1
drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
··· 259 259 u32 avail_scalers; 260 260 261 261 pipe_st = komeda_pipeline_get_state(c->pipeline, state); 262 - if (!pipe_st) 262 + if (IS_ERR_OR_NULL(pipe_st)) 263 263 return NULL; 264 264 265 265 avail_scalers = (pipe_st->active_comps & KOMEDA_PIPELINE_SCALERS) ^
+5 -2
drivers/gpu/drm/bridge/panel.c
··· 360 360 361 361 static void devm_drm_panel_bridge_release(struct device *dev, void *res) 362 362 { 363 - struct drm_bridge **bridge = res; 363 + struct drm_bridge *bridge = *(struct drm_bridge **)res; 364 364 365 - drm_panel_bridge_remove(*bridge); 365 + if (!bridge) 366 + return; 367 + 368 + drm_bridge_remove(bridge); 366 369 } 367 370 368 371 /**
+6
drivers/gpu/drm/drm_panel_orientation_quirks.c
··· 202 202 DMI_MATCH(DMI_BOARD_NAME, "NEXT"), 203 203 }, 204 204 .driver_data = (void *)&lcd800x1280_rightside_up, 205 + }, { /* AYA NEO KUN */ 206 + .matches = { 207 + DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AYANEO"), 208 + DMI_MATCH(DMI_BOARD_NAME, "KUN"), 209 + }, 210 + .driver_data = (void *)&lcd1600x2560_rightside_up, 205 211 }, { /* Chuwi HiBook (CWI514) */ 206 212 .matches = { 207 213 DMI_MATCH(DMI_BOARD_VENDOR, "Hampoo"),
-1
drivers/gpu/drm/exynos/exynos_dp.c
··· 285 285 .remove_new = exynos_dp_remove, 286 286 .driver = { 287 287 .name = "exynos-dp", 288 - .owner = THIS_MODULE, 289 288 .pm = pm_ptr(&exynos_dp_pm_ops), 290 289 .of_match_table = exynos_dp_match, 291 290 },
+6 -1
drivers/gpu/drm/exynos/exynos_drm_vidi.c
··· 309 309 struct vidi_context *ctx = ctx_from_connector(connector); 310 310 struct edid *edid; 311 311 int edid_len; 312 + int count; 312 313 313 314 /* 314 315 * the edid data comes from user side and it would be set ··· 329 328 330 329 drm_connector_update_edid_property(connector, edid); 331 330 332 - return drm_add_edid_modes(connector, edid); 331 + count = drm_add_edid_modes(connector, edid); 332 + 333 + kfree(edid); 334 + 335 + return count; 333 336 } 334 337 335 338 static const struct drm_connector_helper_funcs vidi_connector_helper_funcs = {
+5 -2
drivers/gpu/drm/exynos/exynos_hdmi.c
··· 887 887 int ret; 888 888 889 889 if (!hdata->ddc_adpt) 890 - return 0; 890 + goto no_edid; 891 891 892 892 edid = drm_get_edid(connector, hdata->ddc_adpt); 893 893 if (!edid) 894 - return 0; 894 + goto no_edid; 895 895 896 896 hdata->dvi_mode = !connector->display_info.is_hdmi; 897 897 DRM_DEV_DEBUG_KMS(hdata->dev, "%s : width[%d] x height[%d]\n", ··· 906 906 kfree(edid); 907 907 908 908 return ret; 909 + 910 + no_edid: 911 + return drm_add_modes_noedid(connector, 640, 480); 909 912 } 910 913 911 914 static int hdmi_find_phy_conf(struct hdmi_context *hdata, u32 pixel_clock)
+8
drivers/gpu/drm/mediatek/mtk_drm_drv.c
··· 952 952 of_node_put(private->comp_node[i]); 953 953 } 954 954 955 + static void mtk_drm_shutdown(struct platform_device *pdev) 956 + { 957 + struct mtk_drm_private *private = platform_get_drvdata(pdev); 958 + 959 + drm_atomic_helper_shutdown(private->drm); 960 + } 961 + 955 962 static int mtk_drm_sys_prepare(struct device *dev) 956 963 { 957 964 struct mtk_drm_private *private = dev_get_drvdata(dev); ··· 990 983 static struct platform_driver mtk_drm_platform_driver = { 991 984 .probe = mtk_drm_probe, 992 985 .remove_new = mtk_drm_remove, 986 + .shutdown = mtk_drm_shutdown, 993 987 .driver = { 994 988 .name = "mediatek-drm", 995 989 .pm = &mtk_drm_pm_ops,
+1 -1
drivers/gpu/drm/nouveau/dispnv04/disp.c
··· 68 68 if (nv_two_heads(dev)) 69 69 NVWriteCRTC(dev, 1, NV_PCRTC_INTR_EN_0, 0); 70 70 71 - if (!runtime) 71 + if (!runtime && !drm->headless) 72 72 cancel_work_sync(&drm->hpd_work); 73 73 74 74 if (!suspend)
+1 -1
drivers/gpu/drm/nouveau/dispnv50/disp.c
··· 2680 2680 nv50_mstm_fini(nouveau_encoder(encoder)); 2681 2681 } 2682 2682 2683 - if (!runtime) 2683 + if (!runtime && !drm->headless) 2684 2684 cancel_work_sync(&drm->hpd_work); 2685 2685 } 2686 2686
-5
drivers/gpu/drm/nouveau/nouveau_bios.c
··· 43 43 #define BIOSLOG(sip, fmt, arg...) NV_DEBUG(sip->dev, fmt, ##arg) 44 44 #define LOG_OLD_VALUE(x) 45 45 46 - struct init_exec { 47 - bool execute; 48 - bool repeat; 49 - }; 50 - 51 46 static bool nv_cksum(const uint8_t *data, unsigned int length) 52 47 { 53 48 /*
+5 -1
drivers/gpu/drm/nouveau/nouveau_display.c
··· 450 450 { 451 451 struct nouveau_drm *drm = nouveau_drm(dev); 452 452 453 + if (drm->headless) 454 + return; 455 + 453 456 spin_lock_irq(&drm->hpd_lock); 454 457 drm->hpd_pending = ~0; 455 458 spin_unlock_irq(&drm->hpd_lock); ··· 638 635 } 639 636 drm_connector_list_iter_end(&conn_iter); 640 637 641 - if (!runtime) 638 + if (!runtime && !drm->headless) 642 639 cancel_work_sync(&drm->hpd_work); 643 640 644 641 drm_kms_helper_poll_disable(dev); ··· 732 729 /* no display hw */ 733 730 if (ret == -ENODEV) { 734 731 ret = 0; 732 + drm->headless = true; 735 733 goto disp_create_err; 736 734 } 737 735
+1
drivers/gpu/drm/nouveau/nouveau_drv.h
··· 276 276 /* modesetting */ 277 277 struct nvbios vbios; 278 278 struct nouveau_display *display; 279 + bool headless; 279 280 struct work_struct hpd_work; 280 281 spinlock_t hpd_lock; 281 282 u32 hpd_pending;
+8
drivers/gpu/drm/renesas/shmobile/shmob_drm_drv.c
··· 171 171 drm_kms_helper_poll_fini(ddev); 172 172 } 173 173 174 + static void shmob_drm_shutdown(struct platform_device *pdev) 175 + { 176 + struct shmob_drm_device *sdev = platform_get_drvdata(pdev); 177 + 178 + drm_atomic_helper_shutdown(&sdev->ddev); 179 + } 180 + 174 181 static int shmob_drm_probe(struct platform_device *pdev) 175 182 { 176 183 struct shmob_drm_platform_data *pdata = pdev->dev.platform_data; ··· 280 273 static struct platform_driver shmob_drm_platform_driver = { 281 274 .probe = shmob_drm_probe, 282 275 .remove_new = shmob_drm_remove, 276 + .shutdown = shmob_drm_shutdown, 283 277 .driver = { 284 278 .name = "shmob-drm", 285 279 .of_match_table = of_match_ptr(shmob_drm_of_table),
+8 -1
drivers/gpu/drm/xe/xe_gt_idle.c
··· 147 147 static void gt_idle_sysfs_fini(struct drm_device *drm, void *arg) 148 148 { 149 149 struct kobject *kobj = arg; 150 + struct xe_gt *gt = kobj_to_gt(kobj->parent); 151 + 152 + if (gt_to_xe(gt)->info.skip_guc_pc) { 153 + XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FW_GT)); 154 + xe_gt_idle_disable_c6(gt); 155 + xe_force_wake_put(gt_to_fw(gt), XE_FW_GT); 156 + } 150 157 151 158 sysfs_remove_files(kobj, gt_idle_attrs); 152 159 kobject_put(kobj); ··· 206 199 void xe_gt_idle_disable_c6(struct xe_gt *gt) 207 200 { 208 201 xe_device_assert_mem_access(gt_to_xe(gt)); 209 - xe_force_wake_assert_held(gt_to_fw(gt), XE_FORCEWAKE_ALL); 202 + xe_force_wake_assert_held(gt_to_fw(gt), XE_FW_GT); 210 203 211 204 xe_mmio_write32(gt, PG_ENABLE, 0); 212 205 xe_mmio_write32(gt, RC_CONTROL, 0);
+13 -2
drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
··· 1274 1274 struct xe_tile *tile; 1275 1275 unsigned int tid; 1276 1276 1277 + xe_assert(xe, IS_DGFX(xe)); 1278 + xe_assert(xe, IS_SRIOV_PF(xe)); 1279 + 1277 1280 for_each_tile(tile, xe, tid) { 1278 1281 lmtt = &tile->sriov.pf.lmtt; 1279 1282 xe_lmtt_drop_pages(lmtt, vfid); ··· 1294 1291 unsigned int gtid; 1295 1292 unsigned int tid; 1296 1293 int err; 1294 + 1295 + xe_assert(xe, IS_DGFX(xe)); 1296 + xe_assert(xe, IS_SRIOV_PF(xe)); 1297 1297 1298 1298 total = 0; 1299 1299 for_each_tile(tile, xe, tid) ··· 1343 1337 1344 1338 static void pf_release_vf_config_lmem(struct xe_gt *gt, struct xe_gt_sriov_config *config) 1345 1339 { 1340 + xe_gt_assert(gt, IS_DGFX(gt_to_xe(gt))); 1346 1341 xe_gt_assert(gt, !xe_gt_is_media_type(gt)); 1347 1342 lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt)); 1348 1343 ··· 1362 1355 int err; 1363 1356 1364 1357 xe_gt_assert(gt, vfid); 1358 + xe_gt_assert(gt, IS_DGFX(xe)); 1365 1359 xe_gt_assert(gt, !xe_gt_is_media_type(gt)); 1366 1360 1367 1361 size = round_up(size, pf_get_lmem_alignment(gt)); ··· 1753 1745 static void pf_release_vf_config(struct xe_gt *gt, unsigned int vfid) 1754 1746 { 1755 1747 struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid); 1748 + struct xe_device *xe = gt_to_xe(gt); 1756 1749 1757 1750 if (!xe_gt_is_media_type(gt)) { 1758 1751 pf_release_vf_config_ggtt(gt, config); 1759 - pf_release_vf_config_lmem(gt, config); 1760 - pf_update_vf_lmtt(gt_to_xe(gt), vfid); 1752 + if (IS_DGFX(xe)) { 1753 + pf_release_vf_config_lmem(gt, config); 1754 + pf_update_vf_lmtt(xe, vfid); 1755 + } 1761 1756 } 1762 1757 pf_release_config_ctxs(gt, config); 1763 1758 pf_release_config_dbs(gt, config);
-6
drivers/gpu/drm/xe/xe_guc_pc.c
··· 895 895 static void xe_guc_pc_fini(struct drm_device *drm, void *arg) 896 896 { 897 897 struct xe_guc_pc *pc = arg; 898 - struct xe_device *xe = pc_to_xe(pc); 899 - 900 - if (xe->info.skip_guc_pc) { 901 - xe_gt_idle_disable_c6(pc_to_gt(pc)); 902 - return; 903 - } 904 898 905 899 XE_WARN_ON(xe_force_wake_get(gt_to_fw(pc_to_gt(pc)), XE_FORCEWAKE_ALL)); 906 900 XE_WARN_ON(xe_guc_pc_gucrc_disable(pc));
+16 -2
drivers/gpu/drm/xe/xe_ring_ops.c
··· 80 80 return i; 81 81 } 82 82 83 + static int emit_flush_dw(u32 *dw, int i) 84 + { 85 + dw[i++] = MI_FLUSH_DW | MI_FLUSH_IMM_DW; 86 + dw[i++] = 0; 87 + dw[i++] = 0; 88 + dw[i++] = 0; 89 + 90 + return i; 91 + } 92 + 83 93 static int emit_flush_imm_ggtt(u32 addr, u32 value, bool invalidate_tlb, 84 94 u32 *dw, int i) 85 95 { ··· 244 234 245 235 i = emit_bb_start(batch_addr, ppgtt_flag, dw, i); 246 236 247 - if (job->user_fence.used) 237 + if (job->user_fence.used) { 238 + i = emit_flush_dw(dw, i); 248 239 i = emit_store_imm_ppgtt_posted(job->user_fence.addr, 249 240 job->user_fence.value, 250 241 dw, i); 242 + } 251 243 252 244 i = emit_flush_imm_ggtt(xe_lrc_seqno_ggtt_addr(lrc), seqno, false, dw, i); 253 245 ··· 305 293 306 294 i = emit_bb_start(batch_addr, ppgtt_flag, dw, i); 307 295 308 - if (job->user_fence.used) 296 + if (job->user_fence.used) { 297 + i = emit_flush_dw(dw, i); 309 298 i = emit_store_imm_ppgtt_posted(job->user_fence.addr, 310 299 job->user_fence.value, 311 300 dw, i); 301 + } 312 302 313 303 i = emit_flush_imm_ggtt(xe_lrc_seqno_ggtt_addr(lrc), seqno, false, dw, i); 314 304
+18 -19
drivers/hv/hv.c
··· 45 45 * This involves a hypercall. 46 46 */ 47 47 int hv_post_message(union hv_connection_id connection_id, 48 - enum hv_message_type message_type, 49 - void *payload, size_t payload_size) 48 + enum hv_message_type message_type, 49 + void *payload, size_t payload_size) 50 50 { 51 51 struct hv_input_post_message *aligned_msg; 52 52 unsigned long flags; ··· 86 86 status = HV_STATUS_INVALID_PARAMETER; 87 87 } else { 88 88 status = hv_do_hypercall(HVCALL_POST_MESSAGE, 89 - aligned_msg, NULL); 89 + aligned_msg, NULL); 90 90 } 91 91 92 92 local_irq_restore(flags); ··· 111 111 112 112 hv_context.hv_numa_map = kcalloc(nr_node_ids, sizeof(struct cpumask), 113 113 GFP_KERNEL); 114 - if (hv_context.hv_numa_map == NULL) { 114 + if (!hv_context.hv_numa_map) { 115 115 pr_err("Unable to allocate NUMA map\n"); 116 116 goto err; 117 117 } ··· 120 120 hv_cpu = per_cpu_ptr(hv_context.cpu_context, cpu); 121 121 122 122 tasklet_init(&hv_cpu->msg_dpc, 123 - vmbus_on_msg_dpc, (unsigned long) hv_cpu); 123 + vmbus_on_msg_dpc, (unsigned long)hv_cpu); 124 124 125 125 if (ms_hyperv.paravisor_present && hv_isolation_type_tdx()) { 126 126 hv_cpu->post_msg_page = (void *)get_zeroed_page(GFP_ATOMIC); 127 - if (hv_cpu->post_msg_page == NULL) { 127 + if (!hv_cpu->post_msg_page) { 128 128 pr_err("Unable to allocate post msg page\n"); 129 129 goto err; 130 130 } ··· 147 147 if (!ms_hyperv.paravisor_present && !hv_root_partition) { 148 148 hv_cpu->synic_message_page = 149 149 (void *)get_zeroed_page(GFP_ATOMIC); 150 - if (hv_cpu->synic_message_page == NULL) { 150 + if (!hv_cpu->synic_message_page) { 151 151 pr_err("Unable to allocate SYNIC message page\n"); 152 152 goto err; 153 153 } 154 154 155 155 hv_cpu->synic_event_page = 156 156 (void *)get_zeroed_page(GFP_ATOMIC); 157 - if (hv_cpu->synic_event_page == NULL) { 157 + if (!hv_cpu->synic_event_page) { 158 158 pr_err("Unable to allocate SYNIC event page\n"); 159 159 160 160 free_page((unsigned long)hv_cpu->synic_message_page); ··· 203 203 return ret; 204 204 } 205 205 206 - 207 206 void hv_synic_free(void) 208 207 { 209 208 int cpu, ret; 210 209 211 210 for_each_present_cpu(cpu) { 212 - struct hv_per_cpu_context *hv_cpu 213 - = per_cpu_ptr(hv_context.cpu_context, cpu); 211 + struct hv_per_cpu_context *hv_cpu = 212 + per_cpu_ptr(hv_context.cpu_context, cpu); 214 213 215 214 /* It's better to leak the page if the encryption fails. */ 216 215 if (ms_hyperv.paravisor_present && hv_isolation_type_tdx()) { ··· 261 262 */ 262 263 void hv_synic_enable_regs(unsigned int cpu) 263 264 { 264 - struct hv_per_cpu_context *hv_cpu 265 - = per_cpu_ptr(hv_context.cpu_context, cpu); 265 + struct hv_per_cpu_context *hv_cpu = 266 + per_cpu_ptr(hv_context.cpu_context, cpu); 266 267 union hv_synic_simp simp; 267 268 union hv_synic_siefp siefp; 268 269 union hv_synic_sint shared_sint; ··· 276 277 /* Mask out vTOM bit. ioremap_cache() maps decrypted */ 277 278 u64 base = (simp.base_simp_gpa << HV_HYP_PAGE_SHIFT) & 278 279 ~ms_hyperv.shared_gpa_boundary; 279 - hv_cpu->synic_message_page 280 - = (void *)ioremap_cache(base, HV_HYP_PAGE_SIZE); 280 + hv_cpu->synic_message_page = 281 + (void *)ioremap_cache(base, HV_HYP_PAGE_SIZE); 281 282 if (!hv_cpu->synic_message_page) 282 283 pr_err("Fail to map synic message page.\n"); 283 284 } else { ··· 295 296 /* Mask out vTOM bit. ioremap_cache() maps decrypted */ 296 297 u64 base = (siefp.base_siefp_gpa << HV_HYP_PAGE_SHIFT) & 297 298 ~ms_hyperv.shared_gpa_boundary; 298 - hv_cpu->synic_event_page 299 - = (void *)ioremap_cache(base, HV_HYP_PAGE_SIZE); 299 + hv_cpu->synic_event_page = 300 + (void *)ioremap_cache(base, HV_HYP_PAGE_SIZE); 300 301 if (!hv_cpu->synic_event_page) 301 302 pr_err("Fail to map synic event page.\n"); 302 303 } else { ··· 347 348 */ 348 349 void hv_synic_disable_regs(unsigned int cpu) 349 350 { 350 - struct hv_per_cpu_context *hv_cpu 351 - = per_cpu_ptr(hv_context.cpu_context, cpu); 351 + struct hv_per_cpu_context *hv_cpu = 352 + per_cpu_ptr(hv_context.cpu_context, cpu); 352 353 union hv_synic_sint shared_sint; 353 354 union hv_synic_simp simp; 354 355 union hv_synic_siefp siefp;
+80 -110
drivers/hv/hv_balloon.c
··· 25 25 #include <linux/notifier.h> 26 26 #include <linux/percpu_counter.h> 27 27 #include <linux/page_reporting.h> 28 + #include <linux/sizes.h> 28 29 29 30 #include <linux/hyperv.h> 30 31 #include <asm/hyperv-tlfs.h> ··· 41 40 * 42 41 * Begin protocol definitions. 43 42 */ 44 - 45 - 46 43 47 44 /* 48 45 * Protocol versions. The low word is the minor version, the high word the major ··· 70 71 DYNMEM_PROTOCOL_VERSION_CURRENT = DYNMEM_PROTOCOL_VERSION_WIN10 71 72 }; 72 73 73 - 74 - 75 74 /* 76 75 * Message Types 77 76 */ ··· 98 101 DM_VERSION_1_MAX = 12 99 102 }; 100 103 101 - 102 104 /* 103 105 * Structures defining the dynamic memory management 104 106 * protocol. ··· 110 114 }; 111 115 __u32 version; 112 116 } __packed; 113 - 114 117 115 118 union dm_caps { 116 119 struct { ··· 143 148 __u64 page_range; 144 149 } __packed; 145 150 146 - 147 - 148 151 /* 149 152 * The header for all dynamic memory messages: 150 153 * ··· 166 173 struct dm_header hdr; 167 174 __u8 data[]; /* enclosed message */ 168 175 } __packed; 169 - 170 176 171 177 /* 172 178 * Specific message types supporting the dynamic memory protocol. ··· 263 271 __u32 io_diff; 264 272 } __packed; 265 273 266 - 267 274 /* 268 275 * Message to ask the guest to allocate memory - balloon up message. 269 276 * This message is sent from the host to the guest. The guest may not be ··· 277 286 __u32 reservedz; 278 287 } __packed; 279 288 280 - 281 289 /* 282 290 * Balloon response message; this message is sent from the guest 283 291 * to the host in response to the balloon message. 284 292 * 285 293 * reservedz: Reserved; must be set to zero. 286 294 * more_pages: If FALSE, this is the last message of the transaction. 287 - * if TRUE there will atleast one more message from the guest. 295 + * if TRUE there will be at least one more message from the guest. 288 296 * 289 297 * range_count: The number of ranges in the range array. 290 298 * ··· 304 314 * to the guest to give guest more memory. 
305 315 * 306 316 * more_pages: If FALSE, this is the last message of the transaction. 307 - * if TRUE there will atleast one more message from the guest. 317 + * if TRUE there will be at least one more message from the guest. 308 318 * 309 319 * reservedz: Reserved; must be set to zero. 310 320 * ··· 331 341 struct dm_unballoon_response { 332 342 struct dm_header hdr; 333 343 } __packed; 334 - 335 344 336 345 /* 337 346 * Hot add request message. Message sent from the host to the guest. ··· 379 390 MAX_INFO_TYPE 380 391 }; 381 392 382 - 383 393 /* 384 394 * Header for the information message. 385 395 */ ··· 413 425 * The range start_pfn : end_pfn specifies the range 414 426 * that the host has asked us to hot add. The range 415 427 * start_pfn : ha_end_pfn specifies the range that we have 416 - * currently hot added. We hot add in multiples of 128M 417 - * chunks; it is possible that we may not be able to bring 418 - * online all the pages in the region. The range 428 + * currently hot added. We hot add in chunks equal to the 429 + * memory block size; it is possible that we may not be able 430 + * to bring online all the pages in the region. The range 419 431 * covered_start_pfn:covered_end_pfn defines the pages that can 420 - * be brough online. 432 + * be brought online. 
 421 433 */ 422 434 423 435 struct hv_hotadd_state { ··· 468 480 469 481 static int hv_hypercall_multi_failure; 470 482 471 - module_param(hot_add, bool, (S_IRUGO | S_IWUSR)); 483 + module_param(hot_add, bool, 0644); 472 484 MODULE_PARM_DESC(hot_add, "If set attempt memory hot_add"); 473 485 474 - module_param(pressure_report_delay, uint, (S_IRUGO | S_IWUSR)); 486 + module_param(pressure_report_delay, uint, 0644); 475 487 MODULE_PARM_DESC(pressure_report_delay, "Delay in secs in reporting pressure"); static atomic_t trans_id = ATOMIC_INIT(0); ··· 490 502 DM_INIT_ERROR 491 503 }; 492 504 493 - 494 505 static __u8 recv_buffer[HV_HYP_PAGE_SIZE]; 495 506 static __u8 balloon_up_send_buffer[HV_HYP_PAGE_SIZE]; 507 + 508 + static unsigned long ha_pages_in_chunk; 509 + #define HA_BYTES_IN_CHUNK (ha_pages_in_chunk << PAGE_SHIFT) 510 + 496 511 #define PAGES_IN_2M (2 * 1024 * 1024 / PAGE_SIZE) 497 - #define HA_CHUNK (128 * 1024 * 1024 / PAGE_SIZE) 498 512 499 513 struct hv_dynmem_device { 500 514 struct hv_device *dev; ··· 585 595 struct hv_hotadd_gap *gap; 586 596 587 597 /* The page is not backed. */ 588 - if ((pfn < has->covered_start_pfn) || (pfn >= has->covered_end_pfn)) 598 + if (pfn < has->covered_start_pfn || pfn >= has->covered_end_pfn) 589 599 return false; 590 600 591 601 /* Check for gaps. */ 592 602 list_for_each_entry(gap, &has->gap_list, list) { 593 - if ((pfn >= gap->start_pfn) && (pfn < gap->end_pfn)) 603 + if (pfn >= gap->start_pfn && pfn < gap->end_pfn) 594 604 return false; 595 605 ··· 714 724 unsigned long processed_pfn; 715 725 unsigned long total_pfn = pfn_count; 716 726 717 - for (i = 0; i < (size/HA_CHUNK); i++) { 718 - start_pfn = start + (i * HA_CHUNK); 727 + for (i = 0; i < (size/ha_pages_in_chunk); i++) { 728 + start_pfn = start + (i * ha_pages_in_chunk); 719 729 720 730 scoped_guard(spinlock_irqsave, &dm_device.ha_lock) { 721 - has->ha_end_pfn += HA_CHUNK; 722 - 723 - if (total_pfn > HA_CHUNK) { 724 - processed_pfn = HA_CHUNK; 725 - total_pfn -= HA_CHUNK; 726 - } else { 727 - processed_pfn = total_pfn; 728 - total_pfn = 0; 729 - } 730 - 731 - has->covered_end_pfn += processed_pfn; 731 + has->ha_end_pfn += ha_pages_in_chunk; 732 + processed_pfn = umin(total_pfn, ha_pages_in_chunk); 733 + total_pfn -= processed_pfn; 734 + has->covered_end_pfn += processed_pfn; 732 735 } 733 736 734 737 reinit_completion(&dm_device.ol_waitevent); 735 738 736 739 nid = memory_add_physaddr_to_nid(PFN_PHYS(start_pfn)); 737 740 ret = add_memory(nid, PFN_PHYS((start_pfn)), 738 - (HA_CHUNK << PAGE_SHIFT), MHP_MERGE_RESOURCE); 741 + HA_BYTES_IN_CHUNK, MHP_MERGE_RESOURCE); 739 742 740 743 if (ret) { 741 744 pr_err("hot_add memory failed error is %d\n", ret); ··· 743 760 do_hot_add = false; 744 761 } 745 762 scoped_guard(spinlock_irqsave, &dm_device.ha_lock) { 746 - has->ha_end_pfn -= HA_CHUNK; 763 + has->ha_end_pfn -= ha_pages_in_chunk; 747 764 has->covered_end_pfn -= processed_pfn; 748 765 } 749 766 break; ··· 770 787 guard(spinlock_irqsave)(&dm_device.ha_lock); 771 788 list_for_each_entry(has, &dm_device.ha_region_list, list) { 772 789 /* The page belongs to a different HAS. */ 773 - if ((pfn < has->start_pfn) || 774 - (pfn + (1UL << order) > has->end_pfn)) 790 + if (pfn < has->start_pfn || 791 + (pfn + (1UL << order) > has->end_pfn)) 775 792 continue; 776 793 777 794 hv_bring_pgs_online(has, pfn, 1UL << order); ··· 783 800 { 784 801 struct hv_hotadd_state *has; 785 802 struct hv_hotadd_gap *gap; 786 - unsigned long residual, new_inc; 803 + unsigned long residual; 787 804 int ret = 0; 788 805 789 806 guard(spinlock_irqsave)(&dm_device.ha_lock); ··· 819 836 * our current limit; extend it. 820 837 */ 821 838 if ((start_pfn + pfn_cnt) > has->end_pfn) { 839 + /* Extend the region by multiples of ha_pages_in_chunk */ 822 840 residual = (start_pfn + pfn_cnt - has->end_pfn); 823 - /* 824 - * Extend the region by multiples of HA_CHUNK. 825 - */ 826 - new_inc = (residual / HA_CHUNK) * HA_CHUNK; 827 - if (residual % HA_CHUNK) 828 - new_inc += HA_CHUNK; 829 - 830 - has->end_pfn += new_inc; 841 + has->end_pfn += ALIGN(residual, ha_pages_in_chunk); 831 842 } 832 843 833 844 ret = 1; ··· 832 855 } 833 856 834 857 static unsigned long handle_pg_range(unsigned long pg_start, 835 - unsigned long pg_count) 858 + unsigned long pg_count) 836 859 { 837 860 unsigned long start_pfn = pg_start; 838 861 unsigned long pfn_cnt = pg_count; ··· 843 866 unsigned long res = 0, flags; 844 867 845 868 pr_debug("Hot adding %lu pages starting at pfn 0x%lx.\n", pg_count, 846 - pg_start); 869 + pg_start); 847 870 848 871 spin_lock_irqsave(&dm_device.ha_lock, flags); 849 872 list_for_each_entry(has, &dm_device.ha_region_list, list) { ··· 879 902 if (start_pfn > has->start_pfn && 880 903 online_section_nr(pfn_to_section_nr(start_pfn))) 881 904 hv_bring_pgs_online(has, start_pfn, pgs_ol); 882 - 883 905 } 884 906 885 - if ((has->ha_end_pfn < has->end_pfn) && (pfn_cnt > 0)) { 907 + if (has->ha_end_pfn < has->end_pfn && pfn_cnt > 0) { 886 908 /* 887 909 * We have some residual hot add range 888 910 * that needs to be hot added; hot add 889 911 * it now. Hot add a multiple of 890 - * HA_CHUNK that fully covers the pages 912 + * ha_pages_in_chunk that fully covers the pages 891 913 * we have. 892 914 */ 893 915 size = (has->end_pfn - has->ha_end_pfn); 894 916 if (pfn_cnt <= size) { 895 - size = ((pfn_cnt / HA_CHUNK) * HA_CHUNK); 896 - if (pfn_cnt % HA_CHUNK) 897 - size += HA_CHUNK; 917 + size = ALIGN(pfn_cnt, ha_pages_in_chunk); 898 918 } else { 899 919 pfn_cnt = size; 900 920 } ··· 984 1010 rg_start = dm->ha_wrk.ha_region_range.finfo.start_page; 985 1011 rg_sz = dm->ha_wrk.ha_region_range.finfo.page_cnt; 986 1012 987 - if ((rg_start == 0) && (!dm->host_specified_ha_region)) { 988 - unsigned long region_size; 989 - unsigned long region_start; 990 - 1013 + if (rg_start == 0 && !dm->host_specified_ha_region) { 991 1014 /* 992 1015 * The host has not specified the hot-add region. 993 1016 * Based on the hot-add page range being specified, ··· 992 1021 * that need to be hot-added while ensuring the alignment 993 1022 * and size requirements of Linux as it relates to hot-add.
994 1023 */ 995 - region_size = (pfn_cnt / HA_CHUNK) * HA_CHUNK; 996 - if (pfn_cnt % HA_CHUNK) 997 - region_size += HA_CHUNK; 998 - 999 - region_start = (pg_start / HA_CHUNK) * HA_CHUNK; 1000 - 1001 - rg_start = region_start; 1002 - rg_sz = region_size; 1024 + rg_start = ALIGN_DOWN(pg_start, ha_pages_in_chunk); 1025 + rg_sz = ALIGN(pfn_cnt, ha_pages_in_chunk); 1003 1026 } 1004 1027 1005 1028 if (do_hot_add) 1006 1029 resp.page_count = process_hot_add(pg_start, pfn_cnt, 1007 - rg_start, rg_sz); 1030 + rg_start, rg_sz); 1008 1031 1009 1032 dm->num_pages_added += resp.page_count; 1010 1033 #endif ··· 1176 1211 sizeof(struct dm_status), 1177 1212 (unsigned long)NULL, 1178 1213 VM_PKT_DATA_INBAND, 0); 1179 - 1180 1214 } 1181 1215 1182 1216 static void free_balloon_pages(struct hv_dynmem_device *dm, 1183 - union dm_mem_page_range *range_array) 1217 + union dm_mem_page_range *range_array) 1184 1218 { 1185 1219 int num_pages = range_array->finfo.page_cnt; 1186 1220 __u64 start_frame = range_array->finfo.start_page; ··· 1194 1230 adjust_managed_page_count(pg, 1); 1195 1231 } 1196 1232 } 1197 - 1198 - 1199 1233 1200 1234 static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm, 1201 1235 unsigned int num_pages, ··· 1240 1278 page_to_pfn(pg); 1241 1279 bl_resp->range_array[i].finfo.page_cnt = alloc_unit; 1242 1280 bl_resp->hdr.size += sizeof(union dm_mem_page_range); 1243 - 1244 1281 } 1245 1282 1246 1283 return i * alloc_unit; ··· 1293 1332 1294 1333 if (num_ballooned == 0 || num_ballooned == num_pages) { 1295 1334 pr_debug("Ballooned %u out of %u requested pages.\n", 1296 - num_pages, dm_device.balloon_wrk.num_pages); 1335 + num_pages, dm_device.balloon_wrk.num_pages); 1297 1336 1298 1337 bl_resp->more_pages = 0; 1299 1338 done = true; ··· 1327 1366 1328 1367 for (i = 0; i < bl_resp->range_count; i++) 1329 1368 free_balloon_pages(&dm_device, 1330 - &bl_resp->range_array[i]); 1369 + &bl_resp->range_array[i]); 1331 1370 1332 1371 done = true; 1333 1372 } 1334 1373 
} 1335 - 1336 1374 } 1337 1375 1338 1376 static void balloon_down(struct hv_dynmem_device *dm, 1339 - struct dm_unballoon_request *req) 1377 + struct dm_unballoon_request *req) 1340 1378 { 1341 1379 union dm_mem_page_range *range_array = req->range_array; 1342 1380 int range_count = req->range_count; ··· 1349 1389 } 1350 1390 1351 1391 pr_debug("Freed %u ballooned pages.\n", 1352 - prev_pages_ballooned - dm->num_pages_ballooned); 1392 + prev_pages_ballooned - dm->num_pages_ballooned); 1353 1393 1354 1394 if (req->more_pages == 1) 1355 1395 return; ··· 1374 1414 struct hv_dynmem_device *dm = dm_dev; 1375 1415 1376 1416 while (!kthread_should_stop()) { 1377 - wait_for_completion_interruptible_timeout( 1378 - &dm_device.config_event, 1*HZ); 1417 + wait_for_completion_interruptible_timeout(&dm_device.config_event, 1 * HZ); 1379 1418 /* 1380 1419 * The host expects us to post information on the memory 1381 1420 * pressure every second. ··· 1398 1439 return 0; 1399 1440 } 1400 1441 1401 - 1402 1442 static void version_resp(struct hv_dynmem_device *dm, 1403 - struct dm_version_response *vresp) 1443 + struct dm_version_response *vresp) 1404 1444 { 1405 1445 struct dm_version_request version_req; 1406 1446 int ret; ··· 1460 1502 } 1461 1503 1462 1504 static void cap_resp(struct hv_dynmem_device *dm, 1463 - struct dm_capabilities_resp_msg *cap_resp) 1505 + struct dm_capabilities_resp_msg *cap_resp) 1464 1506 { 1465 1507 if (!cap_resp->is_accepted) { 1466 1508 pr_err("Capabilities not accepted by host\n"); ··· 1493 1535 switch (dm_hdr->type) { 1494 1536 case DM_VERSION_RESPONSE: 1495 1537 version_resp(dm, 1496 - (struct dm_version_response *)dm_msg); 1538 + (struct dm_version_response *)dm_msg); 1497 1539 break; 1498 1540 1499 1541 case DM_CAPABILITIES_RESPONSE: ··· 1523 1565 1524 1566 dm->state = DM_BALLOON_DOWN; 1525 1567 balloon_down(dm, 1526 - (struct dm_unballoon_request *)recv_buffer); 1568 + (struct dm_unballoon_request *)recv_buffer); 1527 1569 break; 1528 1570 1529 
1571 case DM_MEM_HOT_ADD_REQUEST: ··· 1561 1603 1562 1604 default: 1563 1605 pr_warn_ratelimited("Unhandled message: type: %d\n", dm_hdr->type); 1564 - 1565 1606 } 1566 1607 } 1567 - 1568 1608 } 1569 1609 1570 1610 #define HV_LARGE_REPORTING_ORDER 9 1571 1611 #define HV_LARGE_REPORTING_LEN (HV_HYP_PAGE_SIZE << \ 1572 1612 HV_LARGE_REPORTING_ORDER) 1573 1613 static int hv_free_page_report(struct page_reporting_dev_info *pr_dev_info, 1574 - struct scatterlist *sgl, unsigned int nents) 1614 + struct scatterlist *sgl, unsigned int nents) 1575 1615 { 1576 1616 unsigned long flags; 1577 1617 struct hv_memory_hint *hint; ··· 1604 1648 */ 1605 1649 1606 1650 /* page reporting for pages 2MB or higher */ 1607 - if (order >= HV_LARGE_REPORTING_ORDER ) { 1651 + if (order >= HV_LARGE_REPORTING_ORDER) { 1608 1652 range->page.largepage = 1; 1609 1653 range->page_size = HV_GPA_PAGE_RANGE_PAGE_SIZE_2MB; 1610 1654 range->base_large_pfn = page_to_hvpfn( ··· 1618 1662 range->page.additional_pages = 1619 1663 (sg->length / HV_HYP_PAGE_SIZE) - 1; 1620 1664 } 1621 - 1622 1665 } 1623 1666 1624 1667 status = hv_do_rep_hypercall(HV_EXT_CALL_MEMORY_HEAT_HINT, nents, 0, 1625 1668 hint, NULL); 1626 1669 local_irq_restore(flags); 1627 1670 if (!hv_result_success(status)) { 1628 - 1629 1671 pr_err("Cold memory discard hypercall failed with status %llx\n", 1630 - status); 1672 + status); 1631 1673 if (hv_hypercall_multi_failure > 0) 1632 1674 hv_hypercall_multi_failure++; 1633 1675 1634 1676 if (hv_result(status) == HV_STATUS_INVALID_PARAMETER) { 1635 1677 pr_err("Underlying Hyper-V does not support order less than 9. Hypercall failed\n"); 1636 1678 pr_err("Defaulting to page_reporting_order %d\n", 1637 - pageblock_order); 1679 + pageblock_order); 1638 1680 page_reporting_order = pageblock_order; 1639 1681 hv_hypercall_multi_failure++; 1640 1682 return -EINVAL; ··· 1666 1712 pr_err("Failed to enable cold memory discard: %d\n", ret); 1667 1713 } else { 1668 1714 pr_info("Cold memory discard hint enabled with order %d\n", 1669 - page_reporting_order); 1715 + page_reporting_order); 1670 1716 } 1671 1717 } 1672 1718 ··· 1749 1795 if (ret) 1750 1796 goto out; 1751 1797 1752 - t = wait_for_completion_timeout(&dm_device.host_event, 5*HZ); 1798 + t = wait_for_completion_timeout(&dm_device.host_event, 5 * HZ); 1753 1799 if (t == 0) { 1754 1800 ret = -ETIMEDOUT; 1755 1801 goto out; ··· 1785 1831 cap_msg.caps.cap_bits.hot_add = hot_add_enabled(); 1786 1832 1787 1833 /* 1788 - * Specify our alignment requirements as it relates 1789 - * memory hot-add. Specify 128MB alignment. 1834 + * Specify our alignment requirements for memory hot-add. The value is 1835 + * the log base 2 of the number of megabytes in a chunk. For example, 1836 + * with 256 MiB chunks, the value is 8. The number of MiB in a chunk 1837 + * must be a power of 2.
1790 1838 */ 1791 - cap_msg.caps.cap_bits.hot_add_alignment = 7; 1839 + cap_msg.caps.cap_bits.hot_add_alignment = 1840 + ilog2(HA_BYTES_IN_CHUNK / SZ_1M); 1792 1841 1793 1842 /* 1794 1843 * Currently the host does not use these ··· 1807 1850 if (ret) 1808 1851 goto out; 1809 1852 1810 - t = wait_for_completion_timeout(&dm_device.host_event, 5*HZ); 1853 + t = wait_for_completion_timeout(&dm_device.host_event, 5 * HZ); 1811 1854 if (t == 0) { 1812 1855 ret = -ETIMEDOUT; 1813 1856 goto out; ··· 1848 1891 char *sname; 1849 1892 1850 1893 seq_printf(f, "%-22s: %u.%u\n", "host_version", 1851 - DYNMEM_MAJOR_VERSION(dm->version), 1852 - DYNMEM_MINOR_VERSION(dm->version)); 1894 + DYNMEM_MAJOR_VERSION(dm->version), 1895 + DYNMEM_MINOR_VERSION(dm->version)); 1853 1896 1854 1897 seq_printf(f, "%-22s:", "capabilities"); 1855 1898 if (ballooning_enabled()) ··· 1898 1941 seq_printf(f, "%-22s: %u\n", "pages_ballooned", dm->num_pages_ballooned); 1899 1942 1900 1943 seq_printf(f, "%-22s: %lu\n", "total_pages_committed", 1901 - get_pages_committed(dm)); 1944 + get_pages_committed(dm)); 1902 1945 1903 1946 seq_printf(f, "%-22s: %llu\n", "max_dynamic_page_count", 1904 - dm->max_dynamic_page_count); 1947 + dm->max_dynamic_page_count); 1905 1948 1906 1949 return 0; 1907 1950 } ··· 1911 1954 static void hv_balloon_debugfs_init(struct hv_dynmem_device *b) 1912 1955 { 1913 1956 debugfs_create_file("hv-balloon", 0444, NULL, b, 1914 - &hv_balloon_debug_fops); 1957 + &hv_balloon_debug_fops); 1915 1958 } 1916 1959 1917 1960 static void hv_balloon_debugfs_exit(struct hv_dynmem_device *b) ··· 1941 1984 hot_add = false; 1942 1985 1943 1986 #ifdef CONFIG_MEMORY_HOTPLUG 1987 + /* 1988 + * Hot-add must operate in chunks that are of size equal to the 1989 + * memory block size because that's what the core add_memory() 1990 + * interface requires. The Hyper-V interface requires that the memory 1991 + * block size be a power of 2, which is guaranteed by the check in 1992 + * memory_dev_init(). 
1993 + */ 1994 + ha_pages_in_chunk = memory_block_size_bytes() / PAGE_SIZE; 1944 1995 do_hot_add = hot_add; 1945 1996 #else 1997 + /* 1998 + * Without MEMORY_HOTPLUG, the guest returns a failure status for all 1999 + * hot add requests from Hyper-V, and the chunk size is used only to 2000 + * specify alignment to Hyper-V as required by the host/guest protocol. 2001 + * Somewhat arbitrarily, use 128 MiB. 2002 + */ 2003 + ha_pages_in_chunk = SZ_128M / PAGE_SIZE; 1946 2004 do_hot_add = false; 1947 2005 #endif 1948 2006 dm_device.dev = dev; ··· 2069 2097 tasklet_enable(&hv_dev->channel->callback_event); 2070 2098 2071 2099 return 0; 2072 - 2073 2100 } 2074 2101 2075 2102 static int balloon_resume(struct hv_device *dev) ··· 2127 2156 2128 2157 static int __init init_balloon_drv(void) 2129 2158 { 2130 - 2131 2159 return vmbus_driver_register(&balloon_drv); 2132 2160 } 2133 2161
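The hot-add alignment value in the hunk above (log base 2 of the chunk size in MiB) can be sketched in plain user-space C. This is a hypothetical stand-alone sketch: `ilog2_ul`, `ha_alignment`, and the `SZ_*` constants are reimplemented here for illustration and are not the kernel's own helpers.

```c
#include <stdint.h>

#define SZ_1M   (1024UL * 1024UL)
#define SZ_128M (128UL * SZ_1M)

/* plain reimplementation of the kernel's ilog2() for this sketch */
static unsigned int ilog2_ul(unsigned long v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

/* hot_add_alignment reported to Hyper-V: log2 of the chunk size in MiB */
static unsigned int ha_alignment(unsigned long bytes_in_chunk)
{
	return ilog2_ul(bytes_in_chunk / SZ_1M);
}
```

With 128 MiB chunks this yields 7, matching the value that was previously hardcoded; 256 MiB chunks yield 8.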
+1 -2
drivers/i2c/busses/i2c-at91-slave.c
···
 106  106  
 107  107  static u32 at91_twi_func(struct i2c_adapter *adapter)
 108  108  {
 109       -	return I2C_FUNC_SLAVE | I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL
 110       -		| I2C_FUNC_SMBUS_READ_BLOCK_DATA;
      109  +	return I2C_FUNC_SLAVE;
 111  110  }
 112  111  
 113  112  static const struct i2c_algorithm at91_twi_algorithm_slave = {
+1 -1
drivers/i2c/busses/i2c-designware-slave.c
···
 220  220  
 221  221  void i2c_dw_configure_slave(struct dw_i2c_dev *dev)
 222  222  {
 223       -	dev->functionality = I2C_FUNC_SLAVE | DW_IC_DEFAULT_FUNCTIONALITY;
      223  +	dev->functionality = I2C_FUNC_SLAVE;
 224  224  
 225  225  	dev->slave_cfg = DW_IC_CON_RX_FIFO_FULL_HLD_CTRL |
 226  226  			 DW_IC_CON_RESTART_EN | DW_IC_CON_STOP_DET_IFADDRESSED;
+19 -18
drivers/iio/adc/ad7173.c
··· 145 145 unsigned int id; 146 146 char *name; 147 147 bool has_temp; 148 + bool has_input_buf; 148 149 bool has_int_ref; 149 150 bool has_ref2; 150 151 u8 num_gpios; ··· 213 212 .num_configs = 4, 214 213 .num_gpios = 2, 215 214 .has_temp = true, 215 + .has_input_buf = true, 216 216 .has_int_ref = true, 217 217 .clock = 2 * HZ_PER_MHZ, 218 218 .sinc5_data_rates = ad7173_sinc5_data_rates, 219 219 .num_sinc5_data_rates = ARRAY_SIZE(ad7173_sinc5_data_rates), 220 220 }, 221 221 [ID_AD7172_4] = { 222 + .name = "ad7172-4", 222 223 .id = AD7172_4_ID, 223 224 .num_inputs = 9, 224 225 .num_channels = 8, 225 226 .num_configs = 8, 226 227 .num_gpios = 4, 227 228 .has_temp = false, 229 + .has_input_buf = true, 228 230 .has_ref2 = true, 229 231 .clock = 2 * HZ_PER_MHZ, 230 232 .sinc5_data_rates = ad7173_sinc5_data_rates, ··· 241 237 .num_configs = 8, 242 238 .num_gpios = 4, 243 239 .has_temp = true, 240 + .has_input_buf = true, 244 241 .has_int_ref = true, 245 242 .has_ref2 = true, 246 243 .clock = 2 * HZ_PER_MHZ, ··· 256 251 .num_configs = 4, 257 252 .num_gpios = 2, 258 253 .has_temp = true, 254 + .has_input_buf = true, 259 255 .has_int_ref = true, 260 256 .clock = 16 * HZ_PER_MHZ, 261 257 .sinc5_data_rates = ad7175_sinc5_data_rates, 262 258 .num_sinc5_data_rates = ARRAY_SIZE(ad7175_sinc5_data_rates), 263 259 }, 264 260 [ID_AD7175_8] = { 261 + .name = "ad7175-8", 265 262 .id = AD7175_8_ID, 266 263 .num_inputs = 17, 267 264 .num_channels = 16, 268 265 .num_configs = 8, 269 266 .num_gpios = 4, 270 267 .has_temp = true, 268 + .has_input_buf = true, 271 269 .has_int_ref = true, 272 270 .has_ref2 = true, 273 271 .clock = 16 * HZ_PER_MHZ, ··· 285 277 .num_configs = 4, 286 278 .num_gpios = 2, 287 279 .has_temp = false, 280 + .has_input_buf = false, 288 281 .has_int_ref = true, 289 282 .clock = 16 * HZ_PER_MHZ, 290 283 .sinc5_data_rates = ad7175_sinc5_data_rates, 291 284 .num_sinc5_data_rates = ARRAY_SIZE(ad7175_sinc5_data_rates), 292 285 }, 293 286 [ID_AD7177_2] = { 287 + .name = 
"ad7177-2", 294 288 .id = AD7177_ID, 295 289 .num_inputs = 5, 296 290 .num_channels = 4, 297 291 .num_configs = 4, 298 292 .num_gpios = 2, 299 293 .has_temp = true, 294 + .has_input_buf = true, 300 295 .has_int_ref = true, 301 296 .clock = 16 * HZ_PER_MHZ, 302 297 .odr_start_value = AD7177_ODR_START_VALUE, ··· 543 532 unsigned int interface_mode = st->interface_mode; 544 533 int ret; 545 534 535 + interface_mode &= ~AD7173_INTERFACE_DATA_STAT; 546 536 interface_mode |= AD7173_INTERFACE_DATA_STAT_EN(append); 547 537 ret = ad_sd_write_reg(&st->sd, AD7173_REG_INTERFACE_MODE, 2, interface_mode); 548 538 if (ret) ··· 717 705 { 718 706 struct ad7173_state *st = iio_priv(indio_dev); 719 707 struct ad7173_channel_config *cfg; 720 - unsigned int freq, i, reg; 708 + unsigned int freq, i; 721 709 int ret; 722 710 723 711 ret = iio_device_claim_direct_mode(indio_dev); ··· 733 721 734 722 cfg = &st->channels[chan->address].cfg; 735 723 cfg->odr = i; 736 - 737 - if (!cfg->live) 738 - break; 739 - 740 - ret = ad_sd_read_reg(&st->sd, AD7173_REG_FILTER(cfg->cfg_slot), 2, &reg); 741 - if (ret) 742 - break; 743 - reg &= ~AD7173_FILTER_ODR0_MASK; 744 - reg |= FIELD_PREP(AD7173_FILTER_ODR0_MASK, i); 745 - ret = ad_sd_write_reg(&st->sd, AD7173_REG_FILTER(cfg->cfg_slot), 2, reg); 724 + cfg->live = false; 746 725 break; 747 726 748 727 default: ··· 795 792 .type = IIO_VOLTAGE, 796 793 .indexed = 1, 797 794 .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | 798 - BIT(IIO_CHAN_INFO_SCALE), 799 - .info_mask_shared_by_all = BIT(IIO_CHAN_INFO_SAMP_FREQ), 795 + BIT(IIO_CHAN_INFO_SCALE) | BIT(IIO_CHAN_INFO_SAMP_FREQ), 800 796 .scan_type = { 801 797 .sign = 'u', 802 798 .realbits = 24, ··· 806 804 807 805 static const struct iio_chan_spec ad7173_temp_iio_channel_template = { 808 806 .type = IIO_TEMP, 809 - .indexed = 1, 810 807 .channel = AD7173_AIN_TEMP_POS, 811 808 .channel2 = AD7173_AIN_TEMP_NEG, 812 809 .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | 813 - BIT(IIO_CHAN_INFO_SCALE) | 
BIT(IIO_CHAN_INFO_OFFSET), 814 - .info_mask_shared_by_all = BIT(IIO_CHAN_INFO_SAMP_FREQ), 810 + BIT(IIO_CHAN_INFO_SCALE) | BIT(IIO_CHAN_INFO_OFFSET) | 811 + BIT(IIO_CHAN_INFO_SAMP_FREQ), 815 812 .scan_type = { 816 813 .sign = 'u', 817 814 .realbits = 24, ··· 933 932 AD7173_CH_ADDRESS(chan_arr[chan_index].channel, 934 933 chan_arr[chan_index].channel2); 935 934 chan_st_priv->cfg.bipolar = false; 936 - chan_st_priv->cfg.input_buf = true; 935 + chan_st_priv->cfg.input_buf = st->info->has_input_buf; 937 936 chan_st_priv->cfg.ref_sel = AD7173_SETUP_REF_SEL_INT_REF; 938 937 st->adc_mode |= AD7173_ADC_MODE_REF_EN; 939 938 ··· 990 989 991 990 chan_st_priv->ain = AD7173_CH_ADDRESS(ain[0], ain[1]); 992 991 chan_st_priv->chan_reg = chan_index; 993 - chan_st_priv->cfg.input_buf = true; 992 + chan_st_priv->cfg.input_buf = st->info->has_input_buf; 994 993 chan_st_priv->cfg.odr = 0; 995 994 996 995 chan_st_priv->cfg.bipolar = fwnode_property_read_bool(child, "bipolar");
+2 -2
drivers/iio/adc/ad9467.c
···
 243  243  }
 244  244  
 245  245  static const struct iio_chan_spec ad9434_channels[] = {
 246       -	AD9467_CHAN(0, 0, 12, 'S'),
      246  +	AD9467_CHAN(0, 0, 12, 's'),
 247  247  };
 248  248  
 249  249  static const struct iio_chan_spec ad9467_channels[] = {
 250       -	AD9467_CHAN(0, 0, 16, 'S'),
      250  +	AD9467_CHAN(0, 0, 16, 's'),
 251  251  };
 252  252  
 253  253  static const struct ad9467_chip_info ad9467_chip_tbl = {
+5 -1
drivers/iio/common/inv_sensors/inv_sensors_timestamp.c
···
  60   60  int inv_sensors_timestamp_update_odr(struct inv_sensors_timestamp *ts,
  61   61  				     uint32_t period, bool fifo)
  62   62  {
       63  +	uint32_t mult;
       64  +
  63   65  	/* when FIFO is on, prevent odr change if one is already pending */
  64   66  	if (fifo && ts->new_mult != 0)
  65   67  		return -EAGAIN;
  66   68  
  67       -	ts->new_mult = period / ts->chip.clock_period;
       69  +	mult = period / ts->chip.clock_period;
       70  +	if (mult != ts->mult)
       71  +		ts->new_mult = mult;
  68   72  
  69   73  	return 0;
  70   74  }
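The fixed ODR-update logic above can be sketched outside the kernel. This is a hedged sketch: `struct ts_sketch` and `update_odr` are minimal stand-ins for the driver's `inv_sensors_timestamp` state, not the real types; only the control flow of the hunk is reproduced.

```c
#include <stdint.h>
#include <errno.h>

/* minimal stand-in for the relevant inv_sensors_timestamp fields */
struct ts_sketch {
	uint32_t clock_period;	/* chip clock period */
	uint32_t mult;		/* current ODR multiplier */
	uint32_t new_mult;	/* pending multiplier, 0 = none */
};

static int update_odr(struct ts_sketch *ts, uint32_t period, int fifo_on)
{
	uint32_t mult;

	/* when FIFO is on, prevent odr change if one is already pending */
	if (fifo_on && ts->new_mult != 0)
		return -EAGAIN;

	mult = period / ts->clock_period;
	/* the fix: only arm a pending change when the multiplier differs */
	if (mult != ts->mult)
		ts->new_mult = mult;

	return 0;
}
```

Requesting the ODR already in use no longer arms a spurious pending change, so a later, genuine change is not rejected with -EAGAIN.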
+1 -1
drivers/iio/dac/ad5592r-base.c
···
 415  415  		s64 tmp = *val * (3767897513LL / 25LL);
 416  416  		*val = div_s64_rem(tmp, 1000000000LL, val2);
 417  417  
 418       -		return IIO_VAL_INT_PLUS_MICRO;
      418  +		return IIO_VAL_INT_PLUS_NANO;
 419  419  	}
 420  420  
 421  421  	mutex_lock(&st->lock);
+3 -2
drivers/iio/imu/bmi323/bmi323_core.c
··· 1391 1391 &data->buffer.channels, 1392 1392 ARRAY_SIZE(data->buffer.channels)); 1393 1393 if (ret) 1394 - return IRQ_NONE; 1394 + goto out; 1395 1395 } else { 1396 1396 for_each_set_bit(bit, indio_dev->active_scan_mask, 1397 1397 BMI323_CHAN_MAX) { ··· 1400 1400 &data->buffer.channels[index++], 1401 1401 BMI323_BYTES_PER_SAMPLE); 1402 1402 if (ret) 1403 - return IRQ_NONE; 1403 + goto out; 1404 1404 } 1405 1405 } 1406 1406 1407 1407 iio_push_to_buffers_with_timestamp(indio_dev, &data->buffer, 1408 1408 iio_get_time_ns(indio_dev)); 1409 1409 1410 + out: 1410 1411 iio_trigger_notify_done(indio_dev->trig); 1411 1412 1412 1413 return IRQ_HANDLED;
-4
drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c
··· 130 130 /* update data FIFO write */ 131 131 inv_sensors_timestamp_apply_odr(ts, 0, 0, 0); 132 132 ret = inv_icm42600_buffer_set_fifo_en(st, fifo_en | st->fifo.en); 133 - if (ret) 134 - goto out_unlock; 135 - 136 - ret = inv_icm42600_buffer_update_watermark(st); 137 133 138 134 out_unlock: 139 135 mutex_unlock(&st->lock);
+17 -2
drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.c
··· 222 222 latency_accel = period_accel * wm_accel; 223 223 224 224 /* 0 value for watermark means that the sensor is turned off */ 225 + if (wm_gyro == 0 && wm_accel == 0) 226 + return 0; 227 + 225 228 if (latency_gyro == 0) { 226 229 watermark = wm_accel; 230 + st->fifo.watermark.eff_accel = wm_accel; 227 231 } else if (latency_accel == 0) { 228 232 watermark = wm_gyro; 233 + st->fifo.watermark.eff_gyro = wm_gyro; 229 234 } else { 230 235 /* compute the smallest latency that is a multiple of both */ 231 236 if (latency_gyro <= latency_accel) ··· 246 241 watermark = latency / period; 247 242 if (watermark < 1) 248 243 watermark = 1; 244 + /* update effective watermark */ 245 + st->fifo.watermark.eff_gyro = latency / period_gyro; 246 + if (st->fifo.watermark.eff_gyro < 1) 247 + st->fifo.watermark.eff_gyro = 1; 248 + st->fifo.watermark.eff_accel = latency / period_accel; 249 + if (st->fifo.watermark.eff_accel < 1) 250 + st->fifo.watermark.eff_accel = 1; 249 251 } 250 252 251 253 /* compute watermark value in bytes */ ··· 526 514 /* handle gyroscope timestamp and FIFO data parsing */ 527 515 if (st->fifo.nb.gyro > 0) { 528 516 ts = &gyro_st->ts; 529 - inv_sensors_timestamp_interrupt(ts, st->fifo.nb.gyro, 517 + inv_sensors_timestamp_interrupt(ts, st->fifo.watermark.eff_gyro, 530 518 st->timestamp.gyro); 531 519 ret = inv_icm42600_gyro_parse_fifo(st->indio_gyro); 532 520 if (ret) ··· 536 524 /* handle accelerometer timestamp and FIFO data parsing */ 537 525 if (st->fifo.nb.accel > 0) { 538 526 ts = &accel_st->ts; 539 - inv_sensors_timestamp_interrupt(ts, st->fifo.nb.accel, 527 + inv_sensors_timestamp_interrupt(ts, st->fifo.watermark.eff_accel, 540 528 st->timestamp.accel); 541 529 ret = inv_icm42600_accel_parse_fifo(st->indio_accel); 542 530 if (ret) ··· 588 576 { 589 577 unsigned int val; 590 578 int ret; 579 + 580 + st->fifo.watermark.eff_gyro = 1; 581 + st->fifo.watermark.eff_accel = 1; 591 582 592 583 /* 593 584 * Default FIFO configuration (bits 7 to 5)
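The effective-watermark computation added above follows one rule: once a common FIFO latency is chosen, each sensor's effective watermark is how many of its own samples fit in that latency, clamped to at least 1. A hedged sketch of just that clamp (the helper name `eff_watermark` is illustrative, not a driver function):

```c
/* samples of one sensor per chosen latency, never less than 1 */
static unsigned int eff_watermark(unsigned int latency, unsigned int period)
{
	unsigned int wm = latency / period;

	return wm < 1 ? 1 : wm;
}
```

Feeding this effective count (rather than the raw FIFO count) into `inv_sensors_timestamp_interrupt()` is what the later hunks in this file change.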
+2
drivers/iio/imu/inv_icm42600/inv_icm42600_buffer.h
··· 32 32 struct { 33 33 unsigned int gyro; 34 34 unsigned int accel; 35 + unsigned int eff_gyro; 36 + unsigned int eff_accel; 35 37 } watermark; 36 38 size_t count; 37 39 struct {
+1
drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
···
 537  537  	if (ret)
 538  538  		return ret;
 539  539  
      540  +	irq_type |= IRQF_ONESHOT;
 540  541  	return devm_request_threaded_irq(dev, irq, inv_icm42600_irq_timestamp,
 541  542  					 inv_icm42600_irq_handler, irq_type,
 542  543  					 "inv_icm42600", st);
-4
drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c
··· 130 130 /* update data FIFO write */ 131 131 inv_sensors_timestamp_apply_odr(ts, 0, 0, 0); 132 132 ret = inv_icm42600_buffer_set_fifo_en(st, fifo_en | st->fifo.en); 133 - if (ret) 134 - goto out_unlock; 135 - 136 - ret = inv_icm42600_buffer_update_watermark(st); 137 133 138 134 out_unlock: 139 135 mutex_unlock(&st->lock);
+2 -2
drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
···
 100  100  		goto end_session;
 101  101  	/* Each FIFO data contains all sensors, so same number for FIFO and sensor data */
 102  102  	fifo_period = NSEC_PER_SEC / INV_MPU6050_DIVIDER_TO_FIFO_RATE(st->chip_config.divider);
 103       -	inv_sensors_timestamp_interrupt(&st->timestamp, nb, pf->timestamp);
 104       -	inv_sensors_timestamp_apply_odr(&st->timestamp, fifo_period, nb, 0);
      103  +	inv_sensors_timestamp_interrupt(&st->timestamp, 1, pf->timestamp);
      104  +	inv_sensors_timestamp_apply_odr(&st->timestamp, fifo_period, 1, 0);
 105  105  
 106  106  	/* clear internal data buffer for avoiding kernel data leak */
 107  107  	memset(data, 0, sizeof(data));
+1
drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
···
 300  300  	if (!st->trig)
 301  301  		return -ENOMEM;
 302  302  
      303  +	irq_type |= IRQF_ONESHOT;
 303  304  	ret = devm_request_threaded_irq(&indio_dev->dev, st->irq,
 304  305  					&inv_mpu6050_interrupt_timestamp,
 305  306  					&inv_mpu6050_interrupt_handle,
+1 -1
drivers/iio/inkern.c
···
 721  721  			return ret;
 722  722  		*val *= scale;
 723  723  
 724       -		return 0;
      724  +		return ret;
 725  725  	} else {
 726  726  		ret = iio_channel_read(chan, val, NULL, IIO_CHAN_INFO_RAW);
 727  727  		if (ret < 0)
+5 -5
drivers/iio/pressure/bmp280-core.c
···
 1394  1394  
 1395  1395  	/*
 1396  1396  	 * Temperature is returned in Celsius degrees in fractional
 1397        -	 * form down 2^16. We rescale by x1000 to return milli Celsius
 1398        -	 * to respect IIO ABI.
       1397  +	 * form down 2^16. We rescale by x1000 to return millidegrees
       1398  +	 * Celsius to respect IIO ABI.
 1399  1399  	 */
 1400        -	*val = raw_temp * 1000;
 1401        -	*val2 = 16;
 1402        -	return IIO_VAL_FRACTIONAL_LOG2;
       1400  +	raw_temp = sign_extend32(raw_temp, 23);
       1401  +	*val = ((s64)raw_temp * 1000) / (1 << 16);
       1402  +	return IIO_VAL_INT;
 1403  1403  }
 1404  1404  
 1405  1405  static int bmp580_read_press(struct bmp280_data *data, int *val, int *val2)
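The fixed conversion above treats the raw value as a signed 24-bit fixed-point number in units of 1/2^16 °C, so millidegrees = sign_extend(raw) * 1000 / 65536. A user-space sketch (the helper names `sign_extend24` and `bmp580_temp_mdeg` are illustrative; `sign_extend24` stands in for the kernel's `sign_extend32(raw, 23)`):

```c
#include <stdint.h>

/* extend the sign bit of a 24-bit value to 32 bits */
static int32_t sign_extend24(uint32_t raw)
{
	return (int32_t)(raw << 8) >> 8;
}

/* raw reading (1/2^16 degrees C) -> millidegrees Celsius, per IIO ABI */
static int32_t bmp580_temp_mdeg(uint32_t raw_temp)
{
	return (int32_t)(((int64_t)sign_extend24(raw_temp) * 1000) / (1 << 16));
}
```

The sign extension is what the original code was missing: without it, negative temperatures came out as large positive values.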
+3 -3
drivers/iio/temperature/mlx90635.c
···
 947  947  				     "failed to allocate regmap\n");
 948  948  
 949  949  	regmap_ee = devm_regmap_init_i2c(client, &mlx90635_regmap_ee);
 950       -	if (IS_ERR(regmap))
 951       -		return dev_err_probe(&client->dev, PTR_ERR(regmap),
 952       -				     "failed to allocate regmap\n");
      950  +	if (IS_ERR(regmap_ee))
      951  +		return dev_err_probe(&client->dev, PTR_ERR(regmap_ee),
      952  +				     "failed to allocate EEPROM regmap\n");
 953  953  
 954  954  	mlx90635 = iio_priv(indio_dev);
 955  955  	i2c_set_clientdata(client, indio_dev);
+1 -1
drivers/iommu/amd/init.c
···
 3362  3362  	return 0;
 3363  3363  }
 3364  3364  
 3365        -int __init amd_iommu_enable_faulting(unsigned int cpu)
       3365  +int amd_iommu_enable_faulting(unsigned int cpu)
 3366  3366  {
 3367  3367  	/* We enable MSI later when PCI is initialized */
 3368  3368  	return 0;
-6
drivers/leds/led-class.c
···
 552  552  	led_init_core(led_cdev);
 553  553  
 554  554  #ifdef CONFIG_LEDS_TRIGGERS
 555       -	/*
 556       -	 * If no default trigger was given and hw_control_trigger is set,
 557       -	 * make it the default trigger.
 558       -	 */
 559       -	if (!led_cdev->default_trigger && led_cdev->hw_control_trigger)
 560       -		led_cdev->default_trigger = led_cdev->hw_control_trigger;
 561  555  	led_trigger_set_default(led_cdev);
 562  556  #endif
 563  557  
+1
drivers/mfd/axp20x.c
···
 210  210  
 211  211  static const struct regmap_range axp717_writeable_ranges[] = {
 212  212  	regmap_reg_range(AXP717_IRQ0_EN, AXP717_IRQ4_EN),
      213  +	regmap_reg_range(AXP717_IRQ0_STATE, AXP717_IRQ4_STATE),
 213  214  	regmap_reg_range(AXP717_DCDC_OUTPUT_CONTROL, AXP717_CPUSLDO_CONTROL),
 214  215  };
 215  216  
+7 -2
drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gp.c
··· 69 69 70 70 aux_bus->aux_device_wrapper[1] = kzalloc(sizeof(*aux_bus->aux_device_wrapper[1]), 71 71 GFP_KERNEL); 72 - if (!aux_bus->aux_device_wrapper[1]) 73 - return -ENOMEM; 72 + if (!aux_bus->aux_device_wrapper[1]) { 73 + retval = -ENOMEM; 74 + goto err_aux_dev_add_0; 75 + } 74 76 75 77 retval = ida_alloc(&gp_client_ida, GFP_KERNEL); 76 78 if (retval < 0) ··· 113 111 114 112 err_aux_dev_add_1: 115 113 auxiliary_device_uninit(&aux_bus->aux_device_wrapper[1]->aux_dev); 114 + goto err_aux_dev_add_0; 116 115 117 116 err_aux_dev_init_1: 118 117 ida_free(&gp_client_ida, aux_bus->aux_device_wrapper[1]->aux_dev.id); ··· 123 120 124 121 err_aux_dev_add_0: 125 122 auxiliary_device_uninit(&aux_bus->aux_device_wrapper[0]->aux_dev); 123 + goto err_ret; 126 124 127 125 err_aux_dev_init_0: 128 126 ida_free(&gp_client_ida, aux_bus->aux_device_wrapper[0]->aux_dev.id); ··· 131 127 err_ida_alloc_0: 132 128 kfree(aux_bus->aux_device_wrapper[0]); 133 129 130 + err_ret: 134 131 return retval; 135 132 } 136 133
+1 -1
drivers/misc/mei/main.c
···
 329  329  	}
 330  330  
 331  331  	if (!mei_cl_is_connected(cl)) {
 332       -		cl_err(dev, cl, "is not connected");
      332  +		cl_dbg(dev, cl, "is not connected");
 333  333  		rets = -ENODEV;
 334  334  		goto out;
 335  335  	}
+3 -1
drivers/misc/mei/pci-me.c
··· 385 385 } 386 386 387 387 err = mei_restart(dev); 388 - if (err) 388 + if (err) { 389 + free_irq(pdev->irq, dev); 389 390 return err; 391 + } 390 392 391 393 /* Start timer if stopped in suspend */ 392 394 schedule_delayed_work(&dev->timer_work, HZ);
+15 -24
drivers/misc/mei/platform-vsc.c
··· 399 399 400 400 static int mei_vsc_suspend(struct device *dev) 401 401 { 402 - struct mei_device *mei_dev = dev_get_drvdata(dev); 403 - struct mei_vsc_hw *hw = mei_dev_to_vsc_hw(mei_dev); 402 + struct mei_device *mei_dev; 403 + int ret = 0; 404 404 405 - mei_stop(mei_dev); 405 + mei_dev = dev_get_drvdata(dev); 406 + if (!mei_dev) 407 + return -ENODEV; 406 408 407 - mei_disable_interrupts(mei_dev); 409 + mutex_lock(&mei_dev->device_lock); 408 410 409 - vsc_tp_free_irq(hw->tp); 411 + if (!mei_write_is_idle(mei_dev)) 412 + ret = -EAGAIN; 410 413 411 - return 0; 414 + mutex_unlock(&mei_dev->device_lock); 415 + 416 + return ret; 412 417 } 413 418 414 419 static int mei_vsc_resume(struct device *dev) 415 420 { 416 - struct mei_device *mei_dev = dev_get_drvdata(dev); 417 - struct mei_vsc_hw *hw = mei_dev_to_vsc_hw(mei_dev); 418 - int ret; 421 + struct mei_device *mei_dev; 419 422 420 - ret = vsc_tp_request_irq(hw->tp); 421 - if (ret) 422 - return ret; 423 - 424 - ret = mei_restart(mei_dev); 425 - if (ret) 426 - goto err_free; 427 - 428 - /* start timer if stopped in suspend */ 429 - schedule_delayed_work(&mei_dev->timer_work, HZ); 423 + mei_dev = dev_get_drvdata(dev); 424 + if (!mei_dev) 425 + return -ENODEV; 430 426 431 427 return 0; 432 - 433 - err_free: 434 - vsc_tp_free_irq(hw->tp); 435 - 436 - return ret; 437 428 } 438 429 439 430 static DEFINE_SIMPLE_DEV_PM_OPS(mei_vsc_pm_ops, mei_vsc_suspend, mei_vsc_resume);
+1 -1
drivers/misc/mei/vsc-fw-loader.c
···
 252  252  {
 253  253  	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER };
 254  254  	union acpi_object obj = {
 255       -		.type = ACPI_TYPE_INTEGER,
      255  +		.integer.type = ACPI_TYPE_INTEGER,
 256  256  		.integer.value = 1,
 257  257  	};
 258  258  	struct acpi_object_list arg_list = {
+7 -5
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 736 736 return NETDEV_TX_OK; 737 737 738 738 tx_dma_error: 739 - if (BNXT_TX_PTP_IS_SET(lflags)) { 740 - atomic64_inc(&bp->ptp_cfg->stats.ts_err); 741 - atomic_inc(&bp->ptp_cfg->tx_avail); 742 - } 743 - 744 739 last_frag = i; 745 740 746 741 /* start back at beginning and unmap skb */ ··· 757 762 tx_free: 758 763 dev_kfree_skb_any(skb); 759 764 tx_kick_pending: 765 + if (BNXT_TX_PTP_IS_SET(lflags)) { 766 + atomic64_inc(&bp->ptp_cfg->stats.ts_err); 767 + atomic_inc(&bp->ptp_cfg->tx_avail); 768 + } 760 769 if (txr->kick_pending) 761 770 bnxt_txr_db_kick(bp, txr, txr->tx_prod); 762 771 txr->tx_buf_ring[txr->tx_prod].skb = NULL; ··· 9003 9004 memcpy(vf->mac_addr, resp->mac_address, ETH_ALEN); 9004 9005 #endif 9005 9006 } 9007 + bp->tso_max_segs = le16_to_cpu(resp->max_tso_segs); 9006 9008 9007 9009 hwrm_func_qcaps_exit: 9008 9010 hwrm_req_drop(bp, req); ··· 15371 15371 dev->priv_flags |= IFF_UNICAST_FLT; 15372 15372 15373 15373 netif_set_tso_max_size(dev, GSO_MAX_SIZE); 15374 + if (bp->tso_max_segs) 15375 + netif_set_tso_max_segs(dev, bp->tso_max_segs); 15374 15376 15375 15377 dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | 15376 15378 NETDEV_XDP_ACT_RX_SG;
+1
drivers/net/ethernet/broadcom/bnxt/bnxt.h
···
 2318  2318  	u8 rss_hash_key_updated:1;
 2319  2319  
 2320  2320  	u16 max_mtu;
       2321  +	u16 tso_max_segs;
 2321  2322  	u8 max_tc;
 2322  2323  	u8 max_lltc;	/* lossless TCs */
 2323  2324  	struct bnxt_queue_info q_info[BNXT_MAX_QUEUE];
+178 -133
drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
··· 2 2 * 3 3 * Copyright (c) 2014-2016 Broadcom Corporation 4 4 * Copyright (c) 2014-2018 Broadcom Limited 5 - * Copyright (c) 2018-2023 Broadcom Inc. 5 + * Copyright (c) 2018-2024 Broadcom Inc. 6 6 * 7 7 * This program is free software; you can redistribute it and/or modify 8 8 * it under the terms of the GNU General Public License as published by ··· 500 500 #define HWRM_TFC_IF_TBL_GET 0x399UL 501 501 #define HWRM_TFC_TBL_SCOPE_CONFIG_GET 0x39aUL 502 502 #define HWRM_TFC_RESC_USAGE_QUERY 0x39bUL 503 + #define HWRM_QUEUE_PFCWD_TIMEOUT_QCAPS 0x39cUL 504 + #define HWRM_QUEUE_PFCWD_TIMEOUT_CFG 0x39dUL 505 + #define HWRM_QUEUE_PFCWD_TIMEOUT_QCFG 0x39eUL 503 506 #define HWRM_SV 0x400UL 507 + #define HWRM_DBG_LOG_BUFFER_FLUSH 0xff0fUL 504 508 #define HWRM_DBG_READ_DIRECT 0xff10UL 505 509 #define HWRM_DBG_READ_INDIRECT 0xff11UL 506 510 #define HWRM_DBG_WRITE_DIRECT 0xff12UL ··· 613 609 #define HWRM_VERSION_MAJOR 1 614 610 #define HWRM_VERSION_MINOR 10 615 611 #define HWRM_VERSION_UPDATE 3 616 - #define HWRM_VERSION_RSVD 39 617 - #define HWRM_VERSION_STR "1.10.3.39" 612 + #define HWRM_VERSION_RSVD 44 613 + #define HWRM_VERSION_STR "1.10.3.44" 618 614 619 615 /* hwrm_ver_get_input (size:192b/24B) */ 620 616 struct hwrm_ver_get_input { ··· 668 664 #define VER_GET_RESP_DEV_CAPS_CFG_CFA_TFLIB_SUPPORTED 0x2000UL 669 665 #define VER_GET_RESP_DEV_CAPS_CFG_CFA_TRUFLOW_SUPPORTED 0x4000UL 670 666 #define VER_GET_RESP_DEV_CAPS_CFG_SECURE_BOOT_CAPABLE 0x8000UL 667 + #define VER_GET_RESP_DEV_CAPS_CFG_SECURE_SOC_CAPABLE 0x10000UL 671 668 u8 roce_fw_maj_8b; 672 669 u8 roce_fw_min_8b; 673 670 u8 roce_fw_bld_8b; ··· 848 843 #define ASYNC_EVENT_CMPL_EVENT_ID_HW_DOORBELL_RECOVERY_READ_ERROR 0x49UL 849 844 #define ASYNC_EVENT_CMPL_EVENT_ID_CTX_ERROR 0x4aUL 850 845 #define ASYNC_EVENT_CMPL_EVENT_ID_UDCC_SESSION_CHANGE 0x4bUL 851 - #define ASYNC_EVENT_CMPL_EVENT_ID_MAX_RGTR_EVENT_ID 0x4cUL 846 + #define ASYNC_EVENT_CMPL_EVENT_ID_DBG_BUF_PRODUCER 0x4cUL 847 + #define 
ASYNC_EVENT_CMPL_EVENT_ID_PEER_MMAP_CHANGE 0x4dUL 848 + #define ASYNC_EVENT_CMPL_EVENT_ID_MAX_RGTR_EVENT_ID 0x4eUL 852 849 #define ASYNC_EVENT_CMPL_EVENT_ID_FW_TRACE_MSG 0xfeUL 853 850 #define ASYNC_EVENT_CMPL_EVENT_ID_HWRM_ERROR 0xffUL 854 851 #define ASYNC_EVENT_CMPL_EVENT_ID_LAST ASYNC_EVENT_CMPL_EVENT_ID_HWRM_ERROR ··· 1333 1326 u8 timestamp_lo; 1334 1327 __le16 timestamp_hi; 1335 1328 __le32 event_data1; 1336 - #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_MASK 0xffUL 1337 - #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_SFT 0 1338 - #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_RESERVED 0x0UL 1339 - #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_PAUSE_STORM 0x1UL 1340 - #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_INVALID_SIGNAL 0x2UL 1341 - #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_NVM 0x3UL 1342 - #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_DOORBELL_DROP_THRESHOLD 0x4UL 1329 + #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_MASK 0xffUL 1330 + #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_SFT 0 1331 + #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_RESERVED 0x0UL 1332 + #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_PAUSE_STORM 0x1UL 1333 + #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_INVALID_SIGNAL 0x2UL 1334 + #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_NVM 0x3UL 1335 + #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_DOORBELL_DROP_THRESHOLD 0x4UL 1343 1336 #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_THERMAL_THRESHOLD 0x5UL 1344 1337 #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_DUAL_DATA_RATE_NOT_SUPPORTED 0x6UL 1345 1338 #define ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_LAST 
ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_DUAL_DATA_RATE_NOT_SUPPORTED ··· 1821 1814 #define FUNC_QCAPS_RESP_FLAGS_EXT2_SW_MAX_RESOURCE_LIMITS_SUPPORTED 0x800000UL 1822 1815 #define FUNC_QCAPS_RESP_FLAGS_EXT2_TF_INGRESS_NIC_FLOW_SUPPORTED 0x1000000UL 1823 1816 #define FUNC_QCAPS_RESP_FLAGS_EXT2_LPBK_STATS_SUPPORTED 0x2000000UL 1817 + #define FUNC_QCAPS_RESP_FLAGS_EXT2_TF_EGRESS_NIC_FLOW_SUPPORTED 0x4000000UL 1818 + #define FUNC_QCAPS_RESP_FLAGS_EXT2_MULTI_LOSSLESS_QUEUES_SUPPORTED 0x8000000UL 1819 + #define FUNC_QCAPS_RESP_FLAGS_EXT2_PEER_MMAP_SUPPORTED 0x10000000UL 1824 1820 __le16 tunnel_disable_flag; 1825 1821 #define FUNC_QCAPS_RESP_TUNNEL_DISABLE_FLAG_DISABLE_VXLAN 0x1UL 1826 1822 #define FUNC_QCAPS_RESP_TUNNEL_DISABLE_FLAG_DISABLE_NGE 0x2UL ··· 1838 1828 #define FUNC_QCAPS_RESP_XID_PARTITION_CAP_RX_CK 0x2UL 1839 1829 u8 device_serial_number[8]; 1840 1830 __le16 ctxs_per_partition; 1841 - u8 unused_2[2]; 1831 + __le16 max_tso_segs; 1842 1832 __le32 roce_vf_max_av; 1843 1833 __le32 roce_vf_max_cq; 1844 1834 __le32 roce_vf_max_mrw; ··· 2459 2449 #define FUNC_DRV_RGTR_REQ_FLAGS_NPAR_1_2_SUPPORT 0x200UL 2460 2450 #define FUNC_DRV_RGTR_REQ_FLAGS_ASYM_QUEUE_CFG_SUPPORT 0x400UL 2461 2451 #define FUNC_DRV_RGTR_REQ_FLAGS_TF_INGRESS_NIC_FLOW_MODE 0x800UL 2452 + #define FUNC_DRV_RGTR_REQ_FLAGS_TF_EGRESS_NIC_FLOW_MODE 0x1000UL 2462 2453 __le32 enables; 2463 2454 #define FUNC_DRV_RGTR_REQ_ENABLES_OS_TYPE 0x1UL 2464 2455 #define FUNC_DRV_RGTR_REQ_ENABLES_VER 0x2UL ··· 3671 3660 __le16 target_id; 3672 3661 __le64 resp_addr; 3673 3662 __le16 type; 3674 - #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_QP 0x0UL 3675 - #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_SRQ 0x1UL 3676 - #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_CQ 0x2UL 3677 - #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_VNIC 0x3UL 3678 - #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_STAT 0x4UL 3679 - #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_SP_TQM_RING 0x5UL 3680 - #define 
FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_FP_TQM_RING 0x6UL 3681 - #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_MRAV 0xeUL 3682 - #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_TIM 0xfUL 3683 - #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_MP_TQM_RING 0x15UL 3684 - #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_SQ_DB_SHADOW 0x16UL 3685 - #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_RQ_DB_SHADOW 0x17UL 3686 - #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_SRQ_DB_SHADOW 0x18UL 3687 - #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_CQ_DB_SHADOW 0x19UL 3688 - #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_TBL_SCOPE 0x1cUL 3689 - #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_XID_PARTITION 0x1dUL 3663 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_QP 0x0UL 3664 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_SRQ 0x1UL 3665 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_CQ 0x2UL 3666 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_VNIC 0x3UL 3667 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_STAT 0x4UL 3668 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_SP_TQM_RING 0x5UL 3669 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_FP_TQM_RING 0x6UL 3670 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_MRAV 0xeUL 3671 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_TIM 0xfUL 3672 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_TX_CK 0x13UL 3673 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_RX_CK 0x14UL 3674 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_MP_TQM_RING 0x15UL 3675 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_SQ_DB_SHADOW 0x16UL 3676 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_RQ_DB_SHADOW 0x17UL 3677 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_SRQ_DB_SHADOW 0x18UL 3678 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_CQ_DB_SHADOW 0x19UL 3679 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_TBL_SCOPE 0x1cUL 3680 + #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_XID_PARTITION 0x1dUL 3690 3681 #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_SRT_TRACE 0x1eUL 3691 3682 #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_SRT2_TRACE 0x1fUL 3692 3683 #define 
FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_CRT_TRACE 0x20UL ··· 3785 3772 __le16 seq_id; 3786 3773 __le16 resp_len; 3787 3774 __le16 type; 3788 - #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_QP 0x0UL 3789 - #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_SRQ 0x1UL 3790 - #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_CQ 0x2UL 3791 - #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_VNIC 0x3UL 3792 - #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_STAT 0x4UL 3793 - #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_SP_TQM_RING 0x5UL 3794 - #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_FP_TQM_RING 0x6UL 3795 - #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_MRAV 0xeUL 3796 - #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_TIM 0xfUL 3797 - #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_MP_TQM_RING 0x15UL 3798 - #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_TBL_SCOPE 0x1cUL 3799 - #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_XID_PARTITION 0x1dUL 3775 + #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_QP 0x0UL 3776 + #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_SRQ 0x1UL 3777 + #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_CQ 0x2UL 3778 + #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_VNIC 0x3UL 3779 + #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_STAT 0x4UL 3780 + #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_SP_TQM_RING 0x5UL 3781 + #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_FP_TQM_RING 0x6UL 3782 + #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_MRAV 0xeUL 3783 + #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_TIM 0xfUL 3784 + #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_TX_CK 0x13UL 3785 + #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_RX_CK 0x14UL 3786 + #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_MP_TQM_RING 0x15UL 3787 + #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_TBL_SCOPE 0x1cUL 3788 + #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_XID_PARTITION 0x1dUL 3800 3789 #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_SRT_TRACE 0x1eUL 3801 3790 #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_SRT2_TRACE 0x1fUL 3802 3791 #define 
FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_CRT_TRACE 0x20UL ··· 3891 3876 __le16 target_id; 3892 3877 __le64 resp_addr; 3893 3878 __le16 type; 3894 - #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_QP 0x0UL 3895 - #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SRQ 0x1UL 3896 - #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_CQ 0x2UL 3897 - #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_VNIC 0x3UL 3898 - #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_STAT 0x4UL 3899 - #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SP_TQM_RING 0x5UL 3900 - #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_FP_TQM_RING 0x6UL 3901 - #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_MRAV 0xeUL 3902 - #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_TIM 0xfUL 3903 - #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_MP_TQM_RING 0x15UL 3904 - #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SQ_DB_SHADOW 0x16UL 3905 - #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_RQ_DB_SHADOW 0x17UL 3906 - #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SRQ_DB_SHADOW 0x18UL 3907 - #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_CQ_DB_SHADOW 0x19UL 3908 - #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_TBL_SCOPE 0x1cUL 3909 - #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_XID_PARTITION 0x1dUL 3879 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_QP 0x0UL 3880 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SRQ 0x1UL 3881 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_CQ 0x2UL 3882 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_VNIC 0x3UL 3883 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_STAT 0x4UL 3884 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SP_TQM_RING 0x5UL 3885 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_FP_TQM_RING 0x6UL 3886 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_MRAV 0xeUL 3887 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_TIM 0xfUL 3888 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_TX_CK 0x13UL 3889 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_RX_CK 0x14UL 3890 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_MP_TQM_RING 0x15UL 3891 + 
#define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SQ_DB_SHADOW 0x16UL 3892 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_RQ_DB_SHADOW 0x17UL 3893 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SRQ_DB_SHADOW 0x18UL 3894 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_CQ_DB_SHADOW 0x19UL 3895 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_TBL_SCOPE 0x1cUL 3896 + #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_XID_PARTITION 0x1dUL 3910 3897 #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SRT_TRACE 0x1eUL 3911 3898 #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_SRT2_TRACE 0x1fUL 3912 3899 #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_CRT_TRACE 0x20UL ··· 3928 3911 __le16 seq_id; 3929 3912 __le16 resp_len; 3930 3913 __le16 type; 3931 - #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_QP 0x0UL 3932 - #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_SRQ 0x1UL 3933 - #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_CQ 0x2UL 3934 - #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_VNIC 0x3UL 3935 - #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_STAT 0x4UL 3936 - #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_SP_TQM_RING 0x5UL 3937 - #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_FP_TQM_RING 0x6UL 3938 - #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_MRAV 0xeUL 3939 - #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_TIM 0xfUL 3940 - #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_MP_TQM_RING 0x15UL 3941 - #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_SQ_DB_SHADOW 0x16UL 3942 - #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_RQ_DB_SHADOW 0x17UL 3943 - #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_SRQ_DB_SHADOW 0x18UL 3944 - #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_CQ_DB_SHADOW 0x19UL 3945 - #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_TBL_SCOPE 0x1cUL 3946 - #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_XID_PARTITION 0x1dUL 3914 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_QP 0x0UL 3915 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_SRQ 0x1UL 3916 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_CQ 0x2UL 
3917 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_VNIC 0x3UL 3918 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_STAT 0x4UL 3919 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_SP_TQM_RING 0x5UL 3920 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_FP_TQM_RING 0x6UL 3921 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_MRAV 0xeUL 3922 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_TIM 0xfUL 3923 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_TX_CK 0x13UL 3924 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_RX_CK 0x14UL 3925 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_MP_TQM_RING 0x15UL 3926 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_SQ_DB_SHADOW 0x16UL 3927 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_RQ_DB_SHADOW 0x17UL 3928 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_SRQ_DB_SHADOW 0x18UL 3929 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_CQ_DB_SHADOW 0x19UL 3930 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_TBL_SCOPE 0x1cUL 3931 + #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_XID_PARTITION 0x1dUL 3947 3932 #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_SRT_TRACE 0x1eUL 3948 3933 #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_SRT2_TRACE 0x1fUL 3949 3934 #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_CRT_TRACE 0x20UL ··· 4221 4202 #define PORT_PHY_CFG_REQ_FORCE_LINK_SPEEDS2_100GB_PAM4_112 0x3eaUL 4222 4203 #define PORT_PHY_CFG_REQ_FORCE_LINK_SPEEDS2_200GB_PAM4_112 0x7d2UL 4223 4204 #define PORT_PHY_CFG_REQ_FORCE_LINK_SPEEDS2_400GB_PAM4_112 0xfa2UL 4224 - #define PORT_PHY_CFG_REQ_FORCE_LINK_SPEEDS2_LAST PORT_PHY_CFG_REQ_FORCE_LINK_SPEEDS2_400GB_PAM4_112 4205 + #define PORT_PHY_CFG_REQ_FORCE_LINK_SPEEDS2_800GB_PAM4_112 0x1f42UL 4206 + #define PORT_PHY_CFG_REQ_FORCE_LINK_SPEEDS2_LAST PORT_PHY_CFG_REQ_FORCE_LINK_SPEEDS2_800GB_PAM4_112 4225 4207 __le16 auto_link_speeds2_mask; 4226 4208 #define PORT_PHY_CFG_REQ_AUTO_LINK_SPEEDS2_MASK_1GB 0x1UL 4227 4209 #define PORT_PHY_CFG_REQ_AUTO_LINK_SPEEDS2_MASK_10GB 0x2UL ··· 4237 4217 #define 
PORT_PHY_CFG_REQ_AUTO_LINK_SPEEDS2_MASK_100GB_PAM4_112 0x400UL 4238 4218 #define PORT_PHY_CFG_REQ_AUTO_LINK_SPEEDS2_MASK_200GB_PAM4_112 0x800UL 4239 4219 #define PORT_PHY_CFG_REQ_AUTO_LINK_SPEEDS2_MASK_400GB_PAM4_112 0x1000UL 4220 + #define PORT_PHY_CFG_REQ_AUTO_LINK_SPEEDS2_MASK_800GB_PAM4_112 0x2000UL 4240 4221 u8 unused_2[6]; 4241 4222 }; 4242 4223 ··· 4313 4292 #define PORT_PHY_QCFG_RESP_LINK_SPEED_100GB 0x3e8UL 4314 4293 #define PORT_PHY_QCFG_RESP_LINK_SPEED_200GB 0x7d0UL 4315 4294 #define PORT_PHY_QCFG_RESP_LINK_SPEED_400GB 0xfa0UL 4295 + #define PORT_PHY_QCFG_RESP_LINK_SPEED_800GB 0x1f40UL 4316 4296 #define PORT_PHY_QCFG_RESP_LINK_SPEED_10MB 0xffffUL 4317 4297 #define PORT_PHY_QCFG_RESP_LINK_SPEED_LAST PORT_PHY_QCFG_RESP_LINK_SPEED_10MB 4318 4298 u8 duplex_cfg; ··· 4473 4451 #define PORT_PHY_QCFG_RESP_PHY_TYPE_400G_BASESR4 0x35UL 4474 4452 #define PORT_PHY_QCFG_RESP_PHY_TYPE_400G_BASELR4 0x36UL 4475 4453 #define PORT_PHY_QCFG_RESP_PHY_TYPE_400G_BASEER4 0x37UL 4476 - #define PORT_PHY_QCFG_RESP_PHY_TYPE_LAST PORT_PHY_QCFG_RESP_PHY_TYPE_400G_BASEER4 4454 + #define PORT_PHY_QCFG_RESP_PHY_TYPE_800G_BASECR8 0x38UL 4455 + #define PORT_PHY_QCFG_RESP_PHY_TYPE_800G_BASESR8 0x39UL 4456 + #define PORT_PHY_QCFG_RESP_PHY_TYPE_800G_BASELR8 0x3aUL 4457 + #define PORT_PHY_QCFG_RESP_PHY_TYPE_800G_BASEER8 0x3bUL 4458 + #define PORT_PHY_QCFG_RESP_PHY_TYPE_800G_BASEFR8 0x3cUL 4459 + #define PORT_PHY_QCFG_RESP_PHY_TYPE_800G_BASEDR8 0x3dUL 4460 + #define PORT_PHY_QCFG_RESP_PHY_TYPE_LAST PORT_PHY_QCFG_RESP_PHY_TYPE_800G_BASEDR8 4477 4461 u8 media_type; 4478 4462 #define PORT_PHY_QCFG_RESP_MEDIA_TYPE_UNKNOWN 0x0UL 4479 4463 #define PORT_PHY_QCFG_RESP_MEDIA_TYPE_TP 0x1UL ··· 5077 5049 u8 valid; 5078 5050 }; 5079 5051 5080 - /* hwrm_port_lpbk_qstats_input (size:128b/16B) */ 5052 + /* hwrm_port_lpbk_qstats_input (size:256b/32B) */ 5081 5053 struct hwrm_port_lpbk_qstats_input { 5082 5054 __le16 req_type; 5083 5055 __le16 cmpl_ring; 5084 5056 __le16 seq_id; 5085 5057 __le16 target_id; 
5086 5058 __le64 resp_addr; 5059 + __le16 lpbk_stat_size; 5060 + u8 flags; 5061 + #define PORT_LPBK_QSTATS_REQ_FLAGS_COUNTER_MASK 0x1UL 5062 + u8 unused_0[5]; 5063 + __le64 lpbk_stat_host_addr; 5087 5064 }; 5088 5065 5089 - /* hwrm_port_lpbk_qstats_output (size:768b/96B) */ 5066 + /* hwrm_port_lpbk_qstats_output (size:128b/16B) */ 5090 5067 struct hwrm_port_lpbk_qstats_output { 5091 5068 __le16 error_code; 5092 5069 __le16 req_type; 5093 5070 __le16 seq_id; 5094 5071 __le16 resp_len; 5072 + __le16 lpbk_stat_size; 5073 + u8 unused_0[5]; 5074 + u8 valid; 5075 + }; 5076 + 5077 + /* port_lpbk_stats (size:640b/80B) */ 5078 + struct port_lpbk_stats { 5095 5079 __le64 lpbk_ucast_frames; 5096 5080 __le64 lpbk_mcast_frames; 5097 5081 __le64 lpbk_bcast_frames; 5098 5082 __le64 lpbk_ucast_bytes; 5099 5083 __le64 lpbk_mcast_bytes; 5100 5084 __le64 lpbk_bcast_bytes; 5101 - __le64 tx_stat_discard; 5102 - __le64 tx_stat_error; 5103 - __le64 rx_stat_discard; 5104 - __le64 rx_stat_error; 5105 - u8 unused_0[7]; 5106 - u8 valid; 5085 + __le64 lpbk_tx_discards; 5086 + __le64 lpbk_tx_errors; 5087 + __le64 lpbk_rx_discards; 5088 + __le64 lpbk_rx_errors; 5107 5089 }; 5108 5090 5109 5091 /* hwrm_port_ecn_qstats_input (size:256b/32B) */ ··· 5178 5140 u8 valid; 5179 5141 }; 5180 5142 5181 - /* hwrm_port_lpbk_clr_stats_input (size:128b/16B) */ 5143 + /* hwrm_port_lpbk_clr_stats_input (size:192b/24B) */ 5182 5144 struct hwrm_port_lpbk_clr_stats_input { 5183 5145 __le16 req_type; 5184 5146 __le16 cmpl_ring; 5185 5147 __le16 seq_id; 5186 5148 __le16 target_id; 5187 5149 __le64 resp_addr; 5150 + __le16 port_id; 5151 + u8 unused_0[6]; 5188 5152 }; 5189 5153 5190 5154 /* hwrm_port_lpbk_clr_stats_output (size:128b/16B) */ ··· 5327 5287 #define PORT_PHY_QCAPS_RESP_SUPPORTED_PAM4_SPEEDS_FORCE_MODE_100G 0x2UL 5328 5288 #define PORT_PHY_QCAPS_RESP_SUPPORTED_PAM4_SPEEDS_FORCE_MODE_200G 0x4UL 5329 5289 __le16 flags2; 5330 - #define PORT_PHY_QCAPS_RESP_FLAGS2_PAUSE_UNSUPPORTED 0x1UL 5331 - #define 
PORT_PHY_QCAPS_RESP_FLAGS2_PFC_UNSUPPORTED 0x2UL 5332 - #define PORT_PHY_QCAPS_RESP_FLAGS2_BANK_ADDR_SUPPORTED 0x4UL 5333 - #define PORT_PHY_QCAPS_RESP_FLAGS2_SPEEDS2_SUPPORTED 0x8UL 5290 + #define PORT_PHY_QCAPS_RESP_FLAGS2_PAUSE_UNSUPPORTED 0x1UL 5291 + #define PORT_PHY_QCAPS_RESP_FLAGS2_PFC_UNSUPPORTED 0x2UL 5292 + #define PORT_PHY_QCAPS_RESP_FLAGS2_BANK_ADDR_SUPPORTED 0x4UL 5293 + #define PORT_PHY_QCAPS_RESP_FLAGS2_SPEEDS2_SUPPORTED 0x8UL 5294 + #define PORT_PHY_QCAPS_RESP_FLAGS2_REMOTE_LPBK_UNSUPPORTED 0x10UL 5334 5295 u8 internal_port_cnt; 5335 5296 u8 unused_0; 5336 5297 __le16 supported_speeds2_force_mode; ··· 7484 7443 __le16 target_id; 7485 7444 __le64 resp_addr; 7486 7445 __le32 flags; 7487 - #define CFA_L2_FILTER_CFG_REQ_FLAGS_PATH 0x1UL 7488 - #define CFA_L2_FILTER_CFG_REQ_FLAGS_PATH_TX 0x0UL 7489 - #define CFA_L2_FILTER_CFG_REQ_FLAGS_PATH_RX 0x1UL 7490 - #define CFA_L2_FILTER_CFG_REQ_FLAGS_PATH_LAST CFA_L2_FILTER_CFG_REQ_FLAGS_PATH_RX 7491 - #define CFA_L2_FILTER_CFG_REQ_FLAGS_DROP 0x2UL 7492 - #define CFA_L2_FILTER_CFG_REQ_FLAGS_TRAFFIC_MASK 0xcUL 7493 - #define CFA_L2_FILTER_CFG_REQ_FLAGS_TRAFFIC_SFT 2 7494 - #define CFA_L2_FILTER_CFG_REQ_FLAGS_TRAFFIC_NO_ROCE_L2 (0x0UL << 2) 7495 - #define CFA_L2_FILTER_CFG_REQ_FLAGS_TRAFFIC_L2 (0x1UL << 2) 7496 - #define CFA_L2_FILTER_CFG_REQ_FLAGS_TRAFFIC_ROCE (0x2UL << 2) 7497 - #define CFA_L2_FILTER_CFG_REQ_FLAGS_TRAFFIC_LAST CFA_L2_FILTER_CFG_REQ_FLAGS_TRAFFIC_ROCE 7446 + #define CFA_L2_FILTER_CFG_REQ_FLAGS_PATH 0x1UL 7447 + #define CFA_L2_FILTER_CFG_REQ_FLAGS_PATH_TX 0x0UL 7448 + #define CFA_L2_FILTER_CFG_REQ_FLAGS_PATH_RX 0x1UL 7449 + #define CFA_L2_FILTER_CFG_REQ_FLAGS_PATH_LAST CFA_L2_FILTER_CFG_REQ_FLAGS_PATH_RX 7450 + #define CFA_L2_FILTER_CFG_REQ_FLAGS_DROP 0x2UL 7451 + #define CFA_L2_FILTER_CFG_REQ_FLAGS_TRAFFIC_MASK 0xcUL 7452 + #define CFA_L2_FILTER_CFG_REQ_FLAGS_TRAFFIC_SFT 2 7453 + #define CFA_L2_FILTER_CFG_REQ_FLAGS_TRAFFIC_NO_ROCE_L2 (0x0UL << 2) 7454 + #define 
CFA_L2_FILTER_CFG_REQ_FLAGS_TRAFFIC_L2 (0x1UL << 2) 7455 + #define CFA_L2_FILTER_CFG_REQ_FLAGS_TRAFFIC_ROCE (0x2UL << 2) 7456 + #define CFA_L2_FILTER_CFG_REQ_FLAGS_TRAFFIC_LAST CFA_L2_FILTER_CFG_REQ_FLAGS_TRAFFIC_ROCE 7498 7457 #define CFA_L2_FILTER_CFG_REQ_FLAGS_REMAP_OP_MASK 0x30UL 7499 7458 #define CFA_L2_FILTER_CFG_REQ_FLAGS_REMAP_OP_SFT 4 7500 7459 #define CFA_L2_FILTER_CFG_REQ_FLAGS_REMAP_OP_NO_UPDATE (0x0UL << 4) ··· 8561 8520 __le16 target_id; 8562 8521 __le64 resp_addr; 8563 8522 u8 tunnel_type; 8564 - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN 0x1UL 8565 - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_GENEVE 0x5UL 8566 - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL 8567 - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL 8568 - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL 8569 - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL 8570 - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_CUSTOM_GRE 0xdUL 8571 - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_ECPRI 0xeUL 8572 - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_SRV6 0xfUL 8573 - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_GPE 0x10UL 8574 - #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_GRE 0x11UL 8523 + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN 0x1UL 8524 + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_GENEVE 0x5UL 8525 + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL 8526 + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL 8527 + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL 8528 + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL 8529 + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_CUSTOM_GRE 0xdUL 8530 + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_ECPRI 0xeUL 8531 + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_SRV6 0xfUL 8532 + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_GPE 0x10UL 8533 + #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_GRE 0x11UL 8575 8534 
#define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_ULP_DYN_UPAR 0x12UL 8576 8535 #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_ULP_DYN_UPAR_RES01 0x13UL 8577 8536 #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_ULP_DYN_UPAR_RES02 0x14UL ··· 8617 8576 __le16 target_id; 8618 8577 __le64 resp_addr; 8619 8578 u8 tunnel_type; 8620 - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN 0x1UL 8621 - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_GENEVE 0x5UL 8622 - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL 8623 - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL 8624 - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL 8625 - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL 8626 - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_CUSTOM_GRE 0xdUL 8627 - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_ECPRI 0xeUL 8628 - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_SRV6 0xfUL 8629 - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE 0x10UL 8630 - #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_GRE 0x11UL 8579 + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN 0x1UL 8580 + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_GENEVE 0x5UL 8581 + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL 8582 + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL 8583 + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL 8584 + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL 8585 + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_CUSTOM_GRE 0xdUL 8586 + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_ECPRI 0xeUL 8587 + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_SRV6 0xfUL 8588 + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE 0x10UL 8589 + #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_GRE 0x11UL 8631 8590 #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_ULP_DYN_UPAR 0x12UL 8632 8591 #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_ULP_DYN_UPAR_RES01 0x13UL 8633 8592 #define 
TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_ULP_DYN_UPAR_RES02 0x14UL ··· 8676 8635 __le16 target_id; 8677 8636 __le64 resp_addr; 8678 8637 u8 tunnel_type; 8679 - #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN 0x1UL 8680 - #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE 0x5UL 8681 - #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL 8682 - #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL 8683 - #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL 8684 - #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL 8685 - #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_CUSTOM_GRE 0xdUL 8686 - #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_ECPRI 0xeUL 8687 - #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_SRV6 0xfUL 8688 - #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_GPE 0x10UL 8689 - #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GRE 0x11UL 8638 + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN 0x1UL 8639 + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GENEVE 0x5UL 8640 + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_V4 0x9UL 8641 + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL 8642 + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL 8643 + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL 8644 + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_CUSTOM_GRE 0xdUL 8645 + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_ECPRI 0xeUL 8646 + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_SRV6 0xfUL 8647 + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_GPE 0x10UL 8648 + #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_GRE 0x11UL 8690 8649 #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_ULP_DYN_UPAR 0x12UL 8691 8650 #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_ULP_DYN_UPAR_RES01 0x13UL 8692 8651 #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_ULP_DYN_UPAR_RES02 0x14UL ··· 9150 9109 #define STRUCT_HDR_STRUCT_ID_LLDP_GENERIC 0x424UL 9151 9110 #define STRUCT_HDR_STRUCT_ID_LLDP_DEVICE 0x426UL 9152 9111 #define 
STRUCT_HDR_STRUCT_ID_POWER_BKUP 0x427UL 9112 + #define STRUCT_HDR_STRUCT_ID_PEER_MMAP 0x429UL 9153 9113 #define STRUCT_HDR_STRUCT_ID_AFM_OPAQUE 0x1UL 9154 9114 #define STRUCT_HDR_STRUCT_ID_PORT_DESCRIPTION 0xaUL 9155 9115 #define STRUCT_HDR_STRUCT_ID_RSS_V2 0x64UL ··· 9800 9758 __le16 instance; 9801 9759 __le16 unused_0; 9802 9760 u8 seg_flags; 9761 + #define DBG_COREDUMP_INITIATE_REQ_SEG_FLAGS_LIVE_DATA 0x1UL 9762 + #define DBG_COREDUMP_INITIATE_REQ_SEG_FLAGS_CRASH_DATA 0x2UL 9763 + #define DBG_COREDUMP_INITIATE_REQ_SEG_FLAGS_COLLECT_CTX_L1_CACHE 0x4UL 9803 9764 u8 unused_1[7]; 9804 9765 }; 9805 9766 ··· 10478 10433 10479 10434 /* dbc_dbc (size:64b/8B) */ 10480 10435 struct dbc_dbc { 10481 - u32 index; 10436 + __le32 index; 10482 10437 #define DBC_DBC_INDEX_MASK 0xffffffUL 10483 10438 #define DBC_DBC_INDEX_SFT 0 10484 10439 #define DBC_DBC_EPOCH 0x1000000UL 10485 10440 #define DBC_DBC_TOGGLE_MASK 0x6000000UL 10486 10441 #define DBC_DBC_TOGGLE_SFT 25 10487 - u32 type_path_xid; 10442 + __le32 type_path_xid; 10488 10443 #define DBC_DBC_XID_MASK 0xfffffUL 10489 10444 #define DBC_DBC_XID_SFT 0 10490 10445 #define DBC_DBC_PATH_MASK 0x3000000UL
+21 -2
drivers/net/ethernet/intel/ice/ice_ddp.c
··· 1339 1339 1340 1340 for (i = 0; i < count; i++) { 1341 1341 bool last = false; 1342 + int try_cnt = 0; 1342 1343 int status; 1343 1344 1344 1345 bh = (struct ice_buf_hdr *)(bufs + start + i); ··· 1347 1346 if (indicate_last) 1348 1347 last = ice_is_last_download_buffer(bh, i, count); 1349 1348 1350 - status = ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE, last, 1351 - &offset, &info, NULL); 1349 + while (1) { 1350 + status = ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE, 1351 + last, &offset, &info, 1352 + NULL); 1353 + if (hw->adminq.sq_last_status != ICE_AQ_RC_ENOSEC && 1354 + hw->adminq.sq_last_status != ICE_AQ_RC_EBADSIG) 1355 + break; 1356 + 1357 + try_cnt++; 1358 + 1359 + if (try_cnt == 5) 1360 + break; 1361 + 1362 + msleep(20); 1363 + } 1364 + 1365 + if (try_cnt) 1366 + dev_dbg(ice_hw_to_dev(hw), 1367 + "ice_aq_download_pkg number of retries: %d\n", 1368 + try_cnt); 1352 1369 1353 1370 /* Save AQ status from download package */ 1354 1371 if (status) {
+9 -1
drivers/net/ethernet/intel/ice/ice_main.c
··· 805 805 } 806 806 807 807 switch (vsi->port_info->phy.link_info.link_speed) { 808 + case ICE_AQ_LINK_SPEED_200GB: 809 + speed = "200 G"; 810 + break; 808 811 case ICE_AQ_LINK_SPEED_100GB: 809 812 speed = "100 G"; 810 813 break; ··· 5567 5564 */ 5568 5565 disabled = ice_service_task_stop(pf); 5569 5566 5570 - ice_unplug_aux_dev(pf); 5567 + ice_deinit_rdma(pf); 5571 5568 5572 5569 /* Already suspended?, then there is nothing to do */ 5573 5570 if (test_and_set_bit(ICE_SUSPENDED, pf->state)) { ··· 5646 5643 ret = ice_reinit_interrupt_scheme(pf); 5647 5644 if (ret) 5648 5645 dev_err(dev, "Cannot restore interrupt scheme: %d\n", ret); 5646 + 5647 + ret = ice_init_rdma(pf); 5648 + if (ret) 5649 + dev_err(dev, "Reinitialize RDMA during resume failed: %d\n", 5650 + ret); 5649 5651 5650 5652 clear_bit(ICE_DOWN, pf->state); 5651 5653 /* Now perform PF reset and rebuild */
+4 -2
drivers/net/ethernet/intel/ice/ice_switch.c
··· 1899 1899 lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC || 1900 1900 lkup_type == ICE_SW_LKUP_PROMISC || 1901 1901 lkup_type == ICE_SW_LKUP_PROMISC_VLAN || 1902 - lkup_type == ICE_SW_LKUP_DFLT) { 1902 + lkup_type == ICE_SW_LKUP_DFLT || 1903 + lkup_type == ICE_SW_LKUP_LAST) { 1903 1904 sw_buf->res_type = cpu_to_le16(ICE_AQC_RES_TYPE_VSI_LIST_REP); 1904 1905 } else if (lkup_type == ICE_SW_LKUP_VLAN) { 1905 1906 if (opc == ice_aqc_opc_alloc_res) ··· 2923 2922 lkup_type == ICE_SW_LKUP_ETHERTYPE_MAC || 2924 2923 lkup_type == ICE_SW_LKUP_PROMISC || 2925 2924 lkup_type == ICE_SW_LKUP_PROMISC_VLAN || 2926 - lkup_type == ICE_SW_LKUP_DFLT) 2925 + lkup_type == ICE_SW_LKUP_DFLT || 2926 + lkup_type == ICE_SW_LKUP_LAST) 2927 2927 rule_type = remove ? ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR : 2928 2928 ICE_AQC_SW_RULES_T_VSI_LIST_SET; 2929 2929 else if (lkup_type == ICE_SW_LKUP_VLAN)
+4 -1
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
··· 4014 4014 } 4015 4015 } 4016 4016 4017 - skb = build_skb(data, frag_size); 4017 + if (frag_size) 4018 + skb = build_skb(data, frag_size); 4019 + else 4020 + skb = slab_build_skb(data); 4018 4021 if (!skb) { 4019 4022 netdev_warn(port->dev, "skb build failed\n"); 4020 4023 goto err_drop_frame;
+1 -2
drivers/net/ethernet/marvell/octeontx2/nic/Makefile
··· 9 9 rvu_nicpf-y := otx2_pf.o otx2_common.o otx2_txrx.o otx2_ethtool.o \ 10 10 otx2_flows.o otx2_tc.o cn10k.o otx2_dmac_flt.o \ 11 11 otx2_devlink.o qos_sq.o qos.o 12 - rvu_nicvf-y := otx2_vf.o otx2_devlink.o 12 + rvu_nicvf-y := otx2_vf.o 13 13 14 14 rvu_nicpf-$(CONFIG_DCB) += otx2_dcbnl.o 15 - rvu_nicvf-$(CONFIG_DCB) += otx2_dcbnl.o 16 15 rvu_nicpf-$(CONFIG_MACSEC) += cn10k_macsec.o 17 16 18 17 ccflags-y += -I$(srctree)/drivers/net/ethernet/marvell/octeontx2/af
+7
drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
··· 54 54 55 55 return 0; 56 56 } 57 + EXPORT_SYMBOL(otx2_pfc_txschq_config); 57 58 58 59 static int otx2_pfc_txschq_alloc_one(struct otx2_nic *pfvf, u8 prio) 59 60 { ··· 123 122 124 123 return 0; 125 124 } 125 + EXPORT_SYMBOL(otx2_pfc_txschq_alloc); 126 126 127 127 static int otx2_pfc_txschq_stop_one(struct otx2_nic *pfvf, u8 prio) 128 128 { ··· 262 260 263 261 return 0; 264 262 } 263 + EXPORT_SYMBOL(otx2_pfc_txschq_update); 265 264 266 265 int otx2_pfc_txschq_stop(struct otx2_nic *pfvf) 267 266 { ··· 285 282 286 283 return 0; 287 284 } 285 + EXPORT_SYMBOL(otx2_pfc_txschq_stop); 288 286 289 287 int otx2_config_priority_flow_ctrl(struct otx2_nic *pfvf) 290 288 { ··· 325 321 mutex_unlock(&pfvf->mbox.lock); 326 322 return err; 327 323 } 324 + EXPORT_SYMBOL(otx2_config_priority_flow_ctrl); 328 325 329 326 void otx2_update_bpid_in_rqctx(struct otx2_nic *pfvf, int vlan_prio, int qidx, 330 327 bool pfc_enable) ··· 390 385 "Updating BPIDs in CQ and Aura contexts of RQ%d failed with err %d\n", 391 386 qidx, err); 392 387 } 388 + EXPORT_SYMBOL(otx2_update_bpid_in_rqctx); 393 389 394 390 static int otx2_dcbnl_ieee_getpfc(struct net_device *dev, struct ieee_pfc *pfc) 395 391 { ··· 478 472 479 473 return 0; 480 474 } 475 + EXPORT_SYMBOL(otx2_dcbnl_set_ops);
+2
drivers/net/ethernet/marvell/octeontx2/nic/otx2_devlink.c
··· 113 113 devlink_free(dl); 114 114 return err; 115 115 } 116 + EXPORT_SYMBOL(otx2_register_dl); 116 117 117 118 void otx2_unregister_dl(struct otx2_nic *pfvf) 118 119 { ··· 125 124 ARRAY_SIZE(otx2_dl_params)); 126 125 devlink_free(dl); 127 126 } 127 + EXPORT_SYMBOL(otx2_unregister_dl);
+4 -1
drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
··· 1174 1174 1175 1175 if (skb_shinfo(skb)->gso_size && !is_hw_tso_supported(pfvf, skb)) { 1176 1176 /* Insert vlan tag before giving pkt to tso */ 1177 - if (skb_vlan_tag_present(skb)) 1177 + if (skb_vlan_tag_present(skb)) { 1178 1178 skb = __vlan_hwaccel_push_inside(skb); 1179 + if (!skb) 1180 + return true; 1181 + } 1179 1182 otx2_sq_append_tso(pfvf, sq, skb, qidx); 1180 1183 return true; 1181 1184 }
+40 -4
drivers/net/ethernet/microchip/lan743x_ethtool.c
··· 1127 1127 if (netdev->phydev) 1128 1128 phy_ethtool_get_wol(netdev->phydev, wol); 1129 1129 1130 - wol->supported |= WAKE_BCAST | WAKE_UCAST | WAKE_MCAST | 1131 - WAKE_MAGIC | WAKE_PHY | WAKE_ARP; 1130 + if (wol->supported != adapter->phy_wol_supported) 1131 + netif_warn(adapter, drv, adapter->netdev, 1132 + "PHY changed its supported WOL! old=%x, new=%x\n", 1133 + adapter->phy_wol_supported, wol->supported); 1134 + 1135 + wol->supported |= MAC_SUPPORTED_WAKES; 1132 1136 1133 1137 if (adapter->is_pci11x1x) 1134 1138 wol->supported |= WAKE_MAGICSECURE; ··· 1147 1143 { 1148 1144 struct lan743x_adapter *adapter = netdev_priv(netdev); 1149 1145 1146 + /* WAKE_MAGICSECURE is a modifier of and only valid together with 1147 + * WAKE_MAGIC 1148 + */ 1149 + if ((wol->wolopts & WAKE_MAGICSECURE) && !(wol->wolopts & WAKE_MAGIC)) 1150 + return -EINVAL; 1151 + 1152 + if (netdev->phydev) { 1153 + struct ethtool_wolinfo phy_wol; 1154 + int ret; 1155 + 1156 + phy_wol.wolopts = wol->wolopts & adapter->phy_wol_supported; 1157 + 1158 + /* If WAKE_MAGICSECURE was requested, filter out WAKE_MAGIC 1159 + * for PHYs that do not support WAKE_MAGICSECURE 1160 + */ 1161 + if (wol->wolopts & WAKE_MAGICSECURE && 1162 + !(adapter->phy_wol_supported & WAKE_MAGICSECURE)) 1163 + phy_wol.wolopts &= ~WAKE_MAGIC; 1164 + 1165 + ret = phy_ethtool_set_wol(netdev->phydev, &phy_wol); 1166 + if (ret && (ret != -EOPNOTSUPP)) 1167 + return ret; 1168 + 1169 + if (ret == -EOPNOTSUPP) 1170 + adapter->phy_wolopts = 0; 1171 + else 1172 + adapter->phy_wolopts = phy_wol.wolopts; 1173 + } else { 1174 + adapter->phy_wolopts = 0; 1175 + } 1176 + 1150 1177 adapter->wolopts = 0; 1178 + wol->wolopts &= ~adapter->phy_wolopts; 1151 1179 if (wol->wolopts & WAKE_UCAST) 1152 1180 adapter->wolopts |= WAKE_UCAST; 1153 1181 if (wol->wolopts & WAKE_MCAST) ··· 1200 1164 memset(adapter->sopass, 0, sizeof(u8) * SOPASS_MAX); 1201 1165 } 1202 1166 1167 + wol->wolopts = adapter->wolopts | adapter->phy_wolopts; 1203 1168 
device_set_wakeup_enable(&adapter->pdev->dev, (bool)wol->wolopts); 1204 1169 1205 - return netdev->phydev ? phy_ethtool_set_wol(netdev->phydev, wol) 1206 - : -ENETDOWN; 1170 + return 0; 1207 1171 } 1208 1172 #endif /* CONFIG_PM */ 1209 1173
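The set_wol path above partitions the requested wake options: bits the PHY accepted are removed from the set the MAC must arm, and the value reported back is the union of both. A sketch of that partitioning as pure bit arithmetic — the `WK_*` flags and `split_wol()` name are hypothetical, standing in for the ethtool `WAKE_*` bits:

```c
/* Hypothetical wake-option bits, mirroring the ethtool WAKE_* flags. */
#define WK_UCAST  0x01u
#define WK_MAGIC  0x02u
#define WK_PHY    0x04u

struct wol_split {
	unsigned int phy;	/* options the PHY will handle */
	unsigned int mac;	/* remainder armed in the MAC */
};

/* Offer the PHY everything it claims to support; the MAC covers the
 * rest. The two sets are disjoint and together equal the request.
 */
static struct wol_split split_wol(unsigned int requested,
				  unsigned int phy_supported)
{
	struct wol_split s;

	s.phy = requested & phy_supported;
	s.mac = requested & ~s.phy;
	return s;
}
```

The driver additionally demotes the PHY's share when `phy_ethtool_set_wol()` returns -EOPNOTSUPP, pushing everything back onto the MAC; the sketch omits that fallback.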
+40 -8
drivers/net/ethernet/microchip/lan743x_main.c
··· 3118 3118 if (ret) 3119 3119 goto close_tx; 3120 3120 } 3121 + 3122 + #ifdef CONFIG_PM 3123 + if (adapter->netdev->phydev) { 3124 + struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL }; 3125 + 3126 + phy_ethtool_get_wol(netdev->phydev, &wol); 3127 + adapter->phy_wol_supported = wol.supported; 3128 + adapter->phy_wolopts = wol.wolopts; 3129 + } 3130 + #endif 3131 + 3121 3132 return 0; 3122 3133 3123 3134 close_tx: ··· 3586 3575 3587 3576 /* clear wake settings */ 3588 3577 pmtctl = lan743x_csr_read(adapter, PMT_CTL); 3589 - pmtctl |= PMT_CTL_WUPS_MASK_; 3578 + pmtctl |= PMT_CTL_WUPS_MASK_ | PMT_CTL_RES_CLR_WKP_MASK_; 3590 3579 pmtctl &= ~(PMT_CTL_GPIO_WAKEUP_EN_ | PMT_CTL_EEE_WAKEUP_EN_ | 3591 3580 PMT_CTL_WOL_EN_ | PMT_CTL_MAC_D3_RX_CLK_OVR_ | 3592 3581 PMT_CTL_RX_FCT_RFE_D3_CLK_OVR_ | PMT_CTL_ETH_PHY_WAKE_EN_); ··· 3598 3587 3599 3588 pmtctl |= PMT_CTL_ETH_PHY_D3_COLD_OVR_ | PMT_CTL_ETH_PHY_D3_OVR_; 3600 3589 3601 - if (adapter->wolopts & WAKE_PHY) { 3602 - pmtctl |= PMT_CTL_ETH_PHY_EDPD_PLL_CTL_; 3590 + if (adapter->phy_wolopts) 3603 3591 pmtctl |= PMT_CTL_ETH_PHY_WAKE_EN_; 3604 - } 3592 + 3605 3593 if (adapter->wolopts & WAKE_MAGIC) { 3606 3594 wucsr |= MAC_WUCSR_MPEN_; 3607 3595 macrx |= MAC_RX_RXEN_; ··· 3696 3686 lan743x_csr_write(adapter, MAC_WUCSR2, 0); 3697 3687 lan743x_csr_write(adapter, MAC_WK_SRC, 0xFFFFFFFF); 3698 3688 3699 - if (adapter->wolopts) 3689 + if (adapter->wolopts || adapter->phy_wolopts) 3700 3690 lan743x_pm_set_wol(adapter); 3701 3691 3702 3692 if (adapter->is_pci11x1x) { ··· 3720 3710 struct pci_dev *pdev = to_pci_dev(dev); 3721 3711 struct net_device *netdev = pci_get_drvdata(pdev); 3722 3712 struct lan743x_adapter *adapter = netdev_priv(netdev); 3713 + u32 data; 3723 3714 int ret; 3724 3715 3725 3716 pci_set_power_state(pdev, PCI_D0); ··· 3739 3728 return ret; 3740 3729 } 3741 3730 3731 + ret = lan743x_csr_read(adapter, MAC_WK_SRC); 3732 + netif_dbg(adapter, drv, adapter->netdev, 3733 + "Wakeup source : 0x%08X\n", ret); 3734 + 3735 
+ /* Clear the wol configuration and status bits. Note that 3736 + * the status bits are "Write One to Clear (W1C)" 3737 + */ 3738 + data = MAC_WUCSR_EEE_TX_WAKE_ | MAC_WUCSR_EEE_RX_WAKE_ | 3739 + MAC_WUCSR_RFE_WAKE_FR_ | MAC_WUCSR_PFDA_FR_ | MAC_WUCSR_WUFR_ | 3740 + MAC_WUCSR_MPR_ | MAC_WUCSR_BCAST_FR_; 3741 + lan743x_csr_write(adapter, MAC_WUCSR, data); 3742 + 3743 + data = MAC_WUCSR2_NS_RCD_ | MAC_WUCSR2_ARP_RCD_ | 3744 + MAC_WUCSR2_IPV6_TCPSYN_RCD_ | MAC_WUCSR2_IPV4_TCPSYN_RCD_; 3745 + lan743x_csr_write(adapter, MAC_WUCSR2, data); 3746 + 3747 + data = MAC_WK_SRC_ETH_PHY_WK_ | MAC_WK_SRC_IPV6_TCPSYN_RCD_WK_ | 3748 + MAC_WK_SRC_IPV4_TCPSYN_RCD_WK_ | MAC_WK_SRC_EEE_TX_WK_ | 3749 + MAC_WK_SRC_EEE_RX_WK_ | MAC_WK_SRC_RFE_FR_WK_ | 3750 + MAC_WK_SRC_PFDA_FR_WK_ | MAC_WK_SRC_MP_FR_WK_ | 3751 + MAC_WK_SRC_BCAST_FR_WK_ | MAC_WK_SRC_WU_FR_WK_ | 3752 + MAC_WK_SRC_WK_FR_SAVED_; 3753 + lan743x_csr_write(adapter, MAC_WK_SRC, data); 3754 + 3742 3755 /* open netdev when netdev is at running state while resume. 3743 3756 * For instance, it is true when system wakesup after pm-suspend 3744 3757 * However, it is false when system wakes up after suspend GUI menu ··· 3771 3736 lan743x_netdev_open(netdev); 3772 3737 3773 3738 netif_device_attach(netdev); 3774 - ret = lan743x_csr_read(adapter, MAC_WK_SRC); 3775 - netif_info(adapter, drv, adapter->netdev, 3776 - "Wakeup source : 0x%08X\n", ret); 3777 3739 3778 3740 return 0; 3779 3741 }
+28
drivers/net/ethernet/microchip/lan743x_main.h
··· 61 61 #define PMT_CTL_RX_FCT_RFE_D3_CLK_OVR_ BIT(18) 62 62 #define PMT_CTL_GPIO_WAKEUP_EN_ BIT(15) 63 63 #define PMT_CTL_EEE_WAKEUP_EN_ BIT(13) 64 + #define PMT_CTL_RES_CLR_WKP_MASK_ GENMASK(9, 8) 64 65 #define PMT_CTL_READY_ BIT(7) 65 66 #define PMT_CTL_ETH_PHY_RST_ BIT(4) 66 67 #define PMT_CTL_WOL_EN_ BIT(3) ··· 228 227 #define MAC_WUCSR (0x140) 229 228 #define MAC_MP_SO_EN_ BIT(21) 230 229 #define MAC_WUCSR_RFE_WAKE_EN_ BIT(14) 230 + #define MAC_WUCSR_EEE_TX_WAKE_ BIT(13) 231 + #define MAC_WUCSR_EEE_RX_WAKE_ BIT(11) 232 + #define MAC_WUCSR_RFE_WAKE_FR_ BIT(9) 233 + #define MAC_WUCSR_PFDA_FR_ BIT(7) 234 + #define MAC_WUCSR_WUFR_ BIT(6) 235 + #define MAC_WUCSR_MPR_ BIT(5) 236 + #define MAC_WUCSR_BCAST_FR_ BIT(4) 231 237 #define MAC_WUCSR_PFDA_EN_ BIT(3) 232 238 #define MAC_WUCSR_WAKE_EN_ BIT(2) 233 239 #define MAC_WUCSR_MPEN_ BIT(1) 234 240 #define MAC_WUCSR_BCST_EN_ BIT(0) 235 241 236 242 #define MAC_WK_SRC (0x144) 243 + #define MAC_WK_SRC_ETH_PHY_WK_ BIT(17) 244 + #define MAC_WK_SRC_IPV6_TCPSYN_RCD_WK_ BIT(16) 245 + #define MAC_WK_SRC_IPV4_TCPSYN_RCD_WK_ BIT(15) 246 + #define MAC_WK_SRC_EEE_TX_WK_ BIT(14) 247 + #define MAC_WK_SRC_EEE_RX_WK_ BIT(13) 248 + #define MAC_WK_SRC_RFE_FR_WK_ BIT(12) 249 + #define MAC_WK_SRC_PFDA_FR_WK_ BIT(11) 250 + #define MAC_WK_SRC_MP_FR_WK_ BIT(10) 251 + #define MAC_WK_SRC_BCAST_FR_WK_ BIT(9) 252 + #define MAC_WK_SRC_WU_FR_WK_ BIT(8) 253 + #define MAC_WK_SRC_WK_FR_SAVED_ BIT(7) 254 + 237 255 #define MAC_MP_SO_HI (0x148) 238 256 #define MAC_MP_SO_LO (0x14C) 239 257 ··· 315 295 #define RFE_INDX(index) (0x580 + (index << 2)) 316 296 317 297 #define MAC_WUCSR2 (0x600) 298 + #define MAC_WUCSR2_NS_RCD_ BIT(7) 299 + #define MAC_WUCSR2_ARP_RCD_ BIT(6) 300 + #define MAC_WUCSR2_IPV6_TCPSYN_RCD_ BIT(5) 301 + #define MAC_WUCSR2_IPV4_TCPSYN_RCD_ BIT(4) 318 302 319 303 #define SGMII_ACC (0x720) 320 304 #define SGMII_ACC_SGMII_BZY_ BIT(31) ··· 1042 1018 LINK_2500_SLAVE 1043 1019 }; 1044 1020 1021 + #define MAC_SUPPORTED_WAKES (WAKE_BCAST | WAKE_UCAST | WAKE_MCAST | \ 1022 + WAKE_MAGIC | WAKE_ARP) 1045 1023 struct lan743x_adapter { 1046 1024 struct net_device *netdev; 1047 1025 struct mii_bus *mdiobus; ··· 1051 1025 #ifdef CONFIG_PM 1052 1026 u32 wolopts; 1053 1027 u8 sopass[SOPASS_MAX]; 1028 + u32 phy_wolopts; 1029 + u32 phy_wol_supported; 1054 1030 #endif 1055 1031 struct pci_dev *pdev; 1056 1032 struct lan743x_csr csr;
+2 -4
drivers/net/ethernet/qualcomm/qca_debug.c
··· 98 98 99 99 seq_printf(s, "IRQ : %d\n", 100 100 qca->spi_dev->irq); 101 - seq_printf(s, "INTR REQ : %u\n", 102 - qca->intr_req); 103 - seq_printf(s, "INTR SVC : %u\n", 104 - qca->intr_svc); 101 + seq_printf(s, "INTR : %lx\n", 102 + qca->intr); 105 103 106 104 seq_printf(s, "SPI max speed : %lu\n", 107 105 (unsigned long)qca->spi_dev->max_speed_hz);
+8 -8
drivers/net/ethernet/qualcomm/qca_spi.c
··· 35 35 36 36 #define MAX_DMA_BURST_LEN 5000 37 37 38 + #define SPI_INTR 0 39 + 38 40 /* Modules parameters */ 39 41 #define QCASPI_CLK_SPEED_MIN 1000000 40 42 #define QCASPI_CLK_SPEED_MAX 16000000 ··· 581 579 continue; 582 580 } 583 581 584 - if ((qca->intr_req == qca->intr_svc) && 582 + if (!test_bit(SPI_INTR, &qca->intr) && 585 583 !qca->txr.skb[qca->txr.head]) 586 584 schedule(); 587 585 588 586 set_current_state(TASK_RUNNING); 589 587 590 - netdev_dbg(qca->net_dev, "have work to do. int: %d, tx_skb: %p\n", 591 - qca->intr_req - qca->intr_svc, 588 + netdev_dbg(qca->net_dev, "have work to do. int: %lu, tx_skb: %p\n", 589 + qca->intr, 592 590 qca->txr.skb[qca->txr.head]); 593 591 594 592 qcaspi_qca7k_sync(qca, QCASPI_EVENT_UPDATE); ··· 602 600 msleep(QCASPI_QCA7K_REBOOT_TIME_MS); 603 601 } 604 602 605 - if (qca->intr_svc != qca->intr_req) { 606 - qca->intr_svc = qca->intr_req; 603 + if (test_and_clear_bit(SPI_INTR, &qca->intr)) { 607 604 start_spi_intr_handling(qca, &intr_cause); 608 605 609 606 if (intr_cause & SPI_INT_CPU_ON) { ··· 664 663 { 665 664 struct qcaspi *qca = data; 666 665 667 - qca->intr_req++; 666 + set_bit(SPI_INTR, &qca->intr); 668 667 if (qca->spi_thread) 669 668 wake_up_process(qca->spi_thread); 670 669 ··· 680 679 if (!qca) 681 680 return -EINVAL; 682 681 683 - qca->intr_req = 1; 684 - qca->intr_svc = 0; 682 + set_bit(SPI_INTR, &qca->intr); 685 683 qca->sync = QCASPI_SYNC_UNKNOWN; 686 684 qcafrm_fsm_init_spi(&qca->frm_handle); 687 685
+1 -2
drivers/net/ethernet/qualcomm/qca_spi.h
··· 81 81 struct qcafrm_handle frm_handle; 82 82 struct sk_buff *rx_skb; 83 83 84 - unsigned int intr_req; 85 - unsigned int intr_svc; 84 + unsigned long intr; 86 85 u16 reset_count; 87 86 88 87 #ifdef CONFIG_DEBUG_FS
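The qca_spi change above replaces the racy `intr_req`/`intr_svc` counter pair with a single `unsigned long intr` word driven by `set_bit()`/`test_and_clear_bit()`. A minimal user-space sketch of the same producer/consumer handshake, using C11 atomics in place of the kernel bitops (helper names here are illustrative, not the driver's):

```c
#include <stdatomic.h>
#include <stdbool.h>

#define SPI_INTR 0	/* bit index, as in the patched driver */

/* IRQ handler side: mark an interrupt as pending (kernel: set_bit()). */
static void intr_set(atomic_ulong *flags)
{
	atomic_fetch_or(flags, 1UL << SPI_INTR);
}

/* Thread side: atomically consume the pending bit, returning whether it
 * was set (kernel: test_and_clear_bit()). */
static bool intr_test_and_clear(atomic_ulong *flags)
{
	return atomic_fetch_and(flags, ~(1UL << SPI_INTR)) & (1UL << SPI_INTR);
}

/* One produce/consume round trip: the second consume must see no work. */
static bool demo_intr_roundtrip(void)
{
	atomic_ulong flags = 0;

	intr_set(&flags);
	return intr_test_and_clear(&flags) && !intr_test_and_clear(&flags);
}
```

The point of the conversion is that checking and consuming the pending state becomes one atomic step, so an interrupt arriving between two separate counter updates can no longer be lost.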
+5 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
··· 218 218 { 219 219 u32 num_snapshot, ts_status, tsync_int; 220 220 struct ptp_clock_event event; 221 + u32 acr_value, channel; 221 222 unsigned long flags; 222 223 u64 ptp_time; 223 224 int i; ··· 244 243 num_snapshot = (ts_status & GMAC_TIMESTAMP_ATSNS_MASK) >> 245 244 GMAC_TIMESTAMP_ATSNS_SHIFT; 246 245 246 + acr_value = readl(priv->ptpaddr + PTP_ACR); 247 + channel = ilog2(FIELD_GET(PTP_ACR_MASK, acr_value)); 248 + 247 249 for (i = 0; i < num_snapshot; i++) { 248 250 read_lock_irqsave(&priv->ptp_lock, flags); 249 251 get_ptptime(priv->ptpaddr, &ptp_time); 250 252 read_unlock_irqrestore(&priv->ptp_lock, flags); 251 253 event.type = PTP_CLOCK_EXTTS; 252 - event.index = 0; 254 + event.index = channel; 253 255 event.timestamp = ptp_time; 254 256 ptp_clock_event(priv->ptp_clock, &event); 255 257 }
+22 -18
drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
··· 358 358 359 359 port_transmit_rate_kbps = qopt->idleslope - qopt->sendslope; 360 360 361 - /* Port Transmit Rate and Speed Divider */ 362 - switch (div_s64(port_transmit_rate_kbps, 1000)) { 363 - case SPEED_10000: 364 - case SPEED_5000: 365 - ptr = 32; 366 - break; 367 - case SPEED_2500: 368 - case SPEED_1000: 369 - ptr = 8; 370 - break; 371 - case SPEED_100: 372 - ptr = 4; 373 - break; 374 - default: 375 - netdev_err(priv->dev, 376 - "Invalid portTransmitRate %lld (idleSlope - sendSlope)\n", 377 - port_transmit_rate_kbps); 378 - return -EINVAL; 361 + if (qopt->enable) { 362 + /* Port Transmit Rate and Speed Divider */ 363 + switch (div_s64(port_transmit_rate_kbps, 1000)) { 364 + case SPEED_10000: 365 + case SPEED_5000: 366 + ptr = 32; 367 + break; 368 + case SPEED_2500: 369 + case SPEED_1000: 370 + ptr = 8; 371 + break; 372 + case SPEED_100: 373 + ptr = 4; 374 + break; 375 + default: 376 + netdev_err(priv->dev, 377 + "Invalid portTransmitRate %lld (idleSlope - sendSlope)\n", 378 + port_transmit_rate_kbps); 379 + return -EINVAL; 380 + } 381 + } else { 382 + ptr = 0; 379 383 } 380 384 381 385 mode_to_use = priv->plat->tx_queues_cfg[queue].mode_to_use;
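The stmmac CBS fix above runs the portTransmitRate validation only while CBS is being enabled, and programs a divider of 0 on disable. The mapping can be sketched as a pure function (hypothetical helper name; the driver returns -EINVAL where this returns -1, and the SPEED_* values mirror include/uapi/linux/ethtool.h):

```c
#include <stdbool.h>

/* Link speeds in Mb/s, as in the ethtool UAPI. */
enum { SPEED_100 = 100, SPEED_1000 = 1000, SPEED_2500 = 2500,
       SPEED_5000 = 5000, SPEED_10000 = 10000 };

/* Map idleslope - sendslope (kb/s) to the CBS speed divider. When CBS
 * is being disabled the slopes are no longer meaningful, so skip the
 * validation and return 0, matching the patched driver logic. */
static int cbs_speed_divider(long long rate_kbps, bool enable)
{
	if (!enable)
		return 0;

	switch (rate_kbps / 1000) {
	case SPEED_10000:
	case SPEED_5000:
		return 32;
	case SPEED_2500:
	case SPEED_1000:
		return 8;
	case SPEED_100:
		return 4;
	default:
		return -1;	/* driver: netdev_err() + -EINVAL */
	}
}
```

Before the fix, tearing down CBS with zeroed slopes fell into the `default:` arm and the disable request itself was rejected.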
+34 -4
drivers/net/phy/dp83tg720.c
··· 17 17 #define DP83TG720S_PHY_RESET 0x1f 18 18 #define DP83TG720S_HW_RESET BIT(15) 19 19 20 + #define DP83TG720S_LPS_CFG3 0x18c 21 + /* Power modes are documented as bit fields but used as values */ 22 + /* Power Mode 0 is Normal mode */ 23 + #define DP83TG720S_LPS_CFG3_PWR_MODE_0 BIT(0) 24 + 20 25 #define DP83TG720S_RGMII_DELAY_CTRL 0x602 21 26 /* In RGMII mode, Enable or disable the internal delay for RXD */ 22 27 #define DP83TG720S_RGMII_RX_CLK_SEL BIT(1) ··· 36 31 37 32 static int dp83tg720_config_aneg(struct phy_device *phydev) 38 33 { 34 + int ret; 35 + 39 36 /* Autoneg is not supported and this PHY supports only one speed. 40 37 * We need to care only about master/slave configuration if it was 41 38 * changed by user. 42 39 */ 43 - return genphy_c45_pma_baset1_setup_master_slave(phydev); 40 + ret = genphy_c45_pma_baset1_setup_master_slave(phydev); 41 + if (ret) 42 + return ret; 43 + 44 + /* Re-read role configuration to make changes visible even if 45 + * the link is in administrative down state. 46 + */ 47 + return genphy_c45_pma_baset1_read_master_slave(phydev); 44 48 } 45 49 46 50 static int dp83tg720_read_status(struct phy_device *phydev) ··· 78 64 return ret; 79 65 80 66 /* After HW reset we need to restore master/slave configuration. 67 + * genphy_c45_pma_baset1_read_master_slave() call will be done 68 + * by the dp83tg720_config_aneg() function. 81 69 */ 82 70 ret = dp83tg720_config_aneg(phydev); 83 71 if (ret) ··· 170 154 */ 171 155 usleep_range(1000, 2000); 172 156 173 - if (phy_interface_is_rgmii(phydev)) 174 - return dp83tg720_config_rgmii_delay(phydev); 157 + if (phy_interface_is_rgmii(phydev)) { 158 + ret = dp83tg720_config_rgmii_delay(phydev); 159 + if (ret) 160 + return ret; 161 + } 175 162 176 - return 0; 163 + /* In case the PHY is bootstrapped in managed mode, we need to 164 + * wake it. 165 + */ 166 + ret = phy_write_mmd(phydev, MDIO_MMD_VEND2, DP83TG720S_LPS_CFG3, 167 + DP83TG720S_LPS_CFG3_PWR_MODE_0); 168 + if (ret) 169 + return ret; 170 + 171 + /* Make role configuration visible for ethtool on init and after 172 + * rest. 173 + */ 174 + return genphy_c45_pma_baset1_read_master_slave(phydev); 177 175 } 178 176 179 177 static struct phy_driver dp83tg720_driver[] = {
+38 -20
drivers/net/phy/mxl-gpy.c
··· 107 107 108 108 u8 fw_major; 109 109 u8 fw_minor; 110 + u32 wolopts; 110 111 111 112 /* It takes 3 seconds to fully switch out of loopback mode before 112 113 * it can safely re-enter loopback mode. Record the time when ··· 222 221 } 223 222 #endif 224 223 224 + static int gpy_ack_interrupt(struct phy_device *phydev) 225 + { 226 + int ret; 227 + 228 + /* Clear all pending interrupts */ 229 + ret = phy_read(phydev, PHY_ISTAT); 230 + return ret < 0 ? ret : 0; 231 + } 232 + 225 233 static int gpy_mbox_read(struct phy_device *phydev, u32 addr) 226 234 { 227 235 struct gpy_priv *priv = phydev->priv; ··· 272 262 273 263 static int gpy_config_init(struct phy_device *phydev) 274 264 { 275 - int ret; 276 - 277 - /* Mask all interrupts */ 278 - ret = phy_write(phydev, PHY_IMASK, 0); 279 - if (ret) 280 - return ret; 281 - 282 - /* Clear all pending interrupts */ 283 - ret = phy_read(phydev, PHY_ISTAT); 284 - return ret < 0 ? ret : 0; 265 + /* Nothing to configure. Configuration Requirement Placeholder */ 266 + return 0; 285 267 } 286 268 287 269 static int gpy21x_config_init(struct phy_device *phydev) ··· 629 627 630 628 static int gpy_config_intr(struct phy_device *phydev) 631 629 { 630 + struct gpy_priv *priv = phydev->priv; 632 631 u16 mask = 0; 632 + int ret; 633 + 634 + ret = gpy_ack_interrupt(phydev); 635 + if (ret) 636 + return ret; 633 637 634 638 if (phydev->interrupts == PHY_INTERRUPT_ENABLED) 635 639 mask = PHY_IMASK_MASK; 640 + 641 + if (priv->wolopts & WAKE_MAGIC) 642 + mask |= PHY_IMASK_WOL; 643 + 644 + if (priv->wolopts & WAKE_PHY) 645 + mask |= PHY_IMASK_LSTC; 636 646 637 647 return phy_write(phydev, PHY_IMASK, mask); 638 648 } ··· 692 678 struct ethtool_wolinfo *wol) 693 679 { 694 680 struct net_device *attach_dev = phydev->attached_dev; 681 + struct gpy_priv *priv = phydev->priv; 695 682 int ret; 696 683 697 684 if (wol->wolopts & WAKE_MAGIC) { ··· 740 725 ret = phy_read(phydev, PHY_ISTAT); 741 726 if (ret < 0) 742 727 return ret; 728 + 729 + priv->wolopts |= WAKE_MAGIC; 743 730 } else { 744 731 /* Disable magic packet matching */ 745 732 ret = phy_clear_bits_mmd(phydev, MDIO_MMD_VEND2, ··· 749 732 WOL_EN); 750 733 if (ret < 0) 751 734 return ret; 735 + 736 + /* Disable the WOL interrupt */ 737 + ret = phy_clear_bits(phydev, PHY_IMASK, PHY_IMASK_WOL); 738 + if (ret < 0) 739 + return ret; 740 + 741 + priv->wolopts &= ~WAKE_MAGIC; 752 742 } 753 743 754 744 if (wol->wolopts & WAKE_PHY) { ··· 772 748 if (ret & (PHY_IMASK_MASK & ~PHY_IMASK_LSTC)) 773 749 phy_trigger_machine(phydev); 774 750 751 + priv->wolopts |= WAKE_PHY; 775 752 return 0; 776 753 } 777 754 755 + priv->wolopts &= ~WAKE_PHY; 778 756 /* Disable the link state change interrupt */ 779 757 return phy_clear_bits(phydev, PHY_IMASK, PHY_IMASK_LSTC); 780 758 } ··· 784 758 static void gpy_get_wol(struct phy_device *phydev, 785 759 struct ethtool_wolinfo *wol) 786 760 { 787 - int ret; 761 + struct gpy_priv *priv = phydev->priv; 788 762 789 763 wol->supported = WAKE_MAGIC | WAKE_PHY; 790 - wol->wolopts = 0; 791 - 792 - ret = phy_read_mmd(phydev, MDIO_MMD_VEND2, VPSPEC2_WOL_CTL); 793 - if (ret & WOL_EN) 794 - wol->wolopts |= WAKE_MAGIC; 795 - 796 - ret = phy_read(phydev, PHY_IMASK); 797 - if (ret & PHY_IMASK_LSTC) 798 - wol->wolopts |= WAKE_PHY; 764 + wol->wolopts = priv->wolopts; 799 765 } 800 766 801 767 static int gpy_loopback(struct phy_device *phydev, bool enable)
+13 -5
drivers/net/usb/ax88179_178a.c
··· 174 174 u32 wol_supported; 175 175 u32 wolopts; 176 176 u8 disconnecting; 177 - u8 initialized; 178 177 }; 179 178 180 179 struct ax88179_int_data { ··· 1677 1678 1678 1679 static int ax88179_net_reset(struct usbnet *dev) 1679 1680 { 1680 - struct ax88179_data *ax179_data = dev->driver_priv; 1681 + u16 tmp16; 1681 1682 1682 - if (ax179_data->initialized) 1683 + ax88179_read_cmd(dev, AX_ACCESS_PHY, AX88179_PHY_ID, GMII_PHY_PHYSR, 1684 + 2, &tmp16); 1685 + if (tmp16) { 1686 + ax88179_read_cmd(dev, AX_ACCESS_MAC, AX_MEDIUM_STATUS_MODE, 1687 + 2, 2, &tmp16); 1688 + if (!(tmp16 & AX_MEDIUM_RECEIVE_EN)) { 1689 + tmp16 |= AX_MEDIUM_RECEIVE_EN; 1690 + ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_MEDIUM_STATUS_MODE, 1691 + 2, 2, &tmp16); 1692 + } 1693 + } else { 1683 1694 ax88179_reset(dev); 1684 - else 1685 - ax179_data->initialized = 1; 1695 + } 1686 1696 1687 1697 return 0; 1688 1698 }
+2 -1
drivers/net/usb/rtl8150.c
··· 778 778 struct ethtool_link_ksettings *ecmd) 779 779 { 780 780 rtl8150_t *dev = netdev_priv(netdev); 781 - short lpa, bmcr; 781 + short lpa = 0; 782 + short bmcr = 0; 782 783 u32 supported; 783 784 784 785 supported = (SUPPORTED_10baseT_Half |
+29 -3
drivers/net/virtio_net.c
··· 1388 1388 if (unlikely(hdr->hdr.gso_type)) 1389 1389 goto err_xdp; 1390 1390 1391 + /* Partially checksummed packets must be dropped. */ 1392 + if (unlikely(hdr->hdr.flags & VIRTIO_NET_HDR_F_NEEDS_CSUM)) 1393 + goto err_xdp; 1394 + 1391 1395 buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) + 1392 1396 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); 1393 1397 ··· 1709 1705 if (unlikely(hdr->hdr.gso_type)) 1710 1706 return NULL; 1711 1707 1708 + /* Partially checksummed packets must be dropped. */ 1709 + if (unlikely(hdr->hdr.flags & VIRTIO_NET_HDR_F_NEEDS_CSUM)) 1710 + return NULL; 1711 + 1712 1712 /* Now XDP core assumes frag size is PAGE_SIZE, but buffers 1713 1713 * with headroom may add hole in truesize, which 1714 1714 * make their length exceed PAGE_SIZE. So we disabled the ··· 1979 1971 struct net_device *dev = vi->dev; 1980 1972 struct sk_buff *skb; 1981 1973 struct virtio_net_common_hdr *hdr; 1974 + u8 flags; 1982 1975 1983 1976 if (unlikely(len < vi->hdr_len + ETH_HLEN)) { 1984 1977 pr_debug("%s: short packet %i\n", dev->name, len); ··· 1987 1978 virtnet_rq_free_buf(vi, rq, buf); 1988 1979 return; 1989 1980 } 1981 + 1982 + /* 1. Save the flags early, as the XDP program might overwrite them. 1983 + * These flags ensure packets marked as VIRTIO_NET_HDR_F_DATA_VALID 1984 + * stay valid after XDP processing. 1985 + * 2. XDP doesn't work with partially checksummed packets (refer to 1986 + * virtnet_xdp_set()), so packets marked as 1987 + * VIRTIO_NET_HDR_F_NEEDS_CSUM get dropped during XDP processing. 
1988 + */ 1989 + flags = ((struct virtio_net_common_hdr *)buf)->hdr.flags; 1990 1990 1991 1991 if (vi->mergeable_rx_bufs) 1992 1992 skb = receive_mergeable(dev, vi, rq, buf, ctx, len, xdp_xmit, ··· 2012 1994 if (dev->features & NETIF_F_RXHASH && vi->has_rss_hash_report) 2013 1995 virtio_skb_set_hash(&hdr->hash_v1_hdr, skb); 2014 1996 2015 - if (hdr->hdr.flags & VIRTIO_NET_HDR_F_DATA_VALID) 1997 + if (flags & VIRTIO_NET_HDR_F_DATA_VALID) 2016 1998 skb->ip_summed = CHECKSUM_UNNECESSARY; 2017 1999 2018 2000 if (virtio_net_hdr_to_skb(skb, &hdr->hdr, ··· 5717 5699 dev->features |= dev->hw_features & NETIF_F_ALL_TSO; 5718 5700 /* (!csum && gso) case will be fixed by register_netdev() */ 5719 5701 } 5720 - if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM)) 5721 - dev->features |= NETIF_F_RXCSUM; 5702 + 5703 + /* 1. With VIRTIO_NET_F_GUEST_CSUM negotiation, the driver doesn't 5704 + * need to calculate checksums for partially checksummed packets, 5705 + * as they're considered valid by the upper layer. 5706 + * 2. Without VIRTIO_NET_F_GUEST_CSUM negotiation, the driver only 5707 + * receives fully checksummed packets. The device may assist in 5708 + * validating these packets' checksums, so the driver won't have to. 5709 + */ 5710 + dev->features |= NETIF_F_RXCSUM; 5711 + 5722 5712 if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) || 5723 5713 virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6)) 5724 5714 dev->features |= NETIF_F_GRO_HW;
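The virtio_net change above snapshots `hdr.flags` before an XDP program can overwrite the header, drops partially checksummed (`VIRTIO_NET_HDR_F_NEEDS_CSUM`) packets on the XDP paths, and evaluates `VIRTIO_NET_HDR_F_DATA_VALID` from the saved copy. A small sketch of the resulting decision (flag values copied from include/uapi/linux/virtio_net.h; the helper and verdict enum are illustrative, not driver API):

```c
#include <stdbool.h>
#include <stdint.h>

#define VIRTIO_NET_HDR_F_NEEDS_CSUM 1	/* partial csum: start/offset valid */
#define VIRTIO_NET_HDR_F_DATA_VALID 2	/* device already validated csum */

enum rx_verdict { RX_DROP, RX_CSUM_UNNECESSARY, RX_CSUM_NONE };

/* Decide what to do with a received buffer. Partially checksummed
 * packets cannot be fixed up after XDP rewrites them, so they are
 * dropped when a program is attached; DATA_VALID is honored from the
 * flags saved before XDP ran. */
static enum rx_verdict rx_csum_verdict(uint8_t saved_flags, bool xdp_attached)
{
	if (xdp_attached && (saved_flags & VIRTIO_NET_HDR_F_NEEDS_CSUM))
		return RX_DROP;
	if (saved_flags & VIRTIO_NET_HDR_F_DATA_VALID)
		return RX_CSUM_UNNECESSARY;
	return RX_CSUM_NONE;
}
```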
+1 -1
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 4762 4762 4763 4763 if (fw_ver == IWL_FW_CMD_VER_UNKNOWN) { 4764 4764 ret = iwl_mvm_send_aux_roc_cmd(mvm, channel, vif, duration); 4765 - } else if (fw_ver == 3) { 4765 + } else if (fw_ver >= 3) { 4766 4766 ret = iwl_mvm_roc_add_cmd(mvm, channel, vif, duration, 4767 4767 ROC_ACTIVITY_HOTSPOT); 4768 4768 } else {
+1 -1
drivers/net/wireless/intel/iwlwifi/mvm/scan.c
··· 1830 1830 */ 1831 1831 if (!iwl_mvm_is_scan_fragmented(params->type)) { 1832 1832 if (!cfg80211_channel_is_psc(params->channels[i]) || 1833 - flags & IWL_UHB_CHAN_CFG_FLAG_PSC_CHAN_NO_LISTEN) { 1833 + psc_no_listen) { 1834 1834 if (unsolicited_probe_on_chan) { 1835 1835 max_s_ssids = 2; 1836 1836 max_bssids = 6;
+2
drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
··· 1238 1238 if (te_data->id >= SESSION_PROTECT_CONF_MAX_ID) { 1239 1239 IWL_DEBUG_TE(mvm, 1240 1240 "No remain on channel event\n"); 1241 + mutex_unlock(&mvm->mutex); 1241 1242 return; 1242 1243 } 1243 1244 ··· 1254 1253 te_data = iwl_mvm_get_roc_te(mvm); 1255 1254 if (!te_data) { 1256 1255 IWL_WARN(mvm, "No remain on channel event\n"); 1256 + mutex_unlock(&mvm->mutex); 1257 1257 return; 1258 1258 } 1259 1259
+6 -4
drivers/nvme/host/core.c
··· 998 998 clear_bit_unlock(0, &ctrl->discard_page_busy); 999 999 else 1000 1000 kfree(bvec_virt(&req->special_vec)); 1001 + req->rq_flags &= ~RQF_SPECIAL_PAYLOAD; 1001 1002 } 1002 1003 } 1003 1004 EXPORT_SYMBOL_GPL(nvme_cleanup_cmd); ··· 3960 3959 3961 3960 mutex_lock(&ctrl->namespaces_lock); 3962 3961 list_for_each_entry_safe(ns, next, &ctrl->namespaces, list) { 3963 - if (ns->head->ns_id > nsid) 3964 - list_splice_init_rcu(&ns->list, &rm_list, 3965 - synchronize_rcu); 3962 + if (ns->head->ns_id > nsid) { 3963 + list_del_rcu(&ns->list); 3964 + synchronize_srcu(&ctrl->srcu); 3965 + list_add_tail_rcu(&ns->list, &rm_list); 3966 + } 3966 3967 } 3967 3968 mutex_unlock(&ctrl->namespaces_lock); 3968 - synchronize_srcu(&ctrl->srcu); 3969 3969 3970 3970 list_for_each_entry_safe(ns, next, &rm_list, list) 3971 3971 nvme_ns_remove(ns);
+11 -4
drivers/nvme/host/ioctl.c
··· 111 111 return req; 112 112 } 113 113 114 + static void nvme_unmap_bio(struct bio *bio) 115 + { 116 + if (bio_integrity(bio)) 117 + bio_integrity_unmap_free_user(bio); 118 + blk_rq_unmap_user(bio); 119 + } 120 + 114 121 static int nvme_map_user_request(struct request *req, u64 ubuffer, 115 122 unsigned bufflen, void __user *meta_buffer, unsigned meta_len, 116 123 u32 meta_seed, struct io_uring_cmd *ioucmd, unsigned int flags) ··· 164 157 165 158 out_unmap: 166 159 if (bio) 167 - blk_rq_unmap_user(bio); 160 + nvme_unmap_bio(bio); 168 161 out: 169 162 blk_mq_free_request(req); 170 163 return ret; ··· 202 195 if (result) 203 196 *result = le64_to_cpu(nvme_req(req)->result.u64); 204 197 if (bio) 205 - blk_rq_unmap_user(bio); 198 + nvme_unmap_bio(bio); 206 199 blk_mq_free_request(req); 207 200 208 201 if (effects) ··· 413 406 struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd); 414 407 415 408 if (pdu->bio) 416 - blk_rq_unmap_user(pdu->bio); 409 + nvme_unmap_bio(pdu->bio); 417 410 io_uring_cmd_done(ioucmd, pdu->status, pdu->result, issue_flags); 418 411 } 419 412 ··· 439 432 */ 440 433 if (blk_rq_is_poll(req)) { 441 434 if (pdu->bio) 442 - blk_rq_unmap_user(pdu->bio); 435 + nvme_unmap_bio(pdu->bio); 443 436 io_uring_cmd_iopoll_done(ioucmd, pdu->result, pdu->status); 444 437 } else { 445 438 io_uring_cmd_do_in_task_lazy(ioucmd, nvme_uring_task_cb);
+1
drivers/nvme/target/core.c
··· 957 957 req->metadata_sg_cnt = 0; 958 958 req->transfer_len = 0; 959 959 req->metadata_len = 0; 960 + req->cqe->result.u64 = 0; 960 961 req->cqe->status = 0; 961 962 req->cqe->sq_head = 0; 962 963 req->ns = NULL;
-3
drivers/nvme/target/fabrics-cmd-auth.c
··· 333 333 pr_debug("%s: ctrl %d qid %d nvme status %x error loc %d\n", 334 334 __func__, ctrl->cntlid, req->sq->qid, 335 335 status, req->error_loc); 336 - req->cqe->result.u64 = 0; 337 336 if (req->sq->dhchap_step != NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2 && 338 337 req->sq->dhchap_step != NVME_AUTH_DHCHAP_MESSAGE_FAILURE2) { 339 338 unsigned long auth_expire_secs = ctrl->kato ? ctrl->kato : 120; ··· 515 516 status = nvmet_copy_to_sgl(req, 0, d, al); 516 517 kfree(d); 517 518 done: 518 - req->cqe->result.u64 = 0; 519 - 520 519 if (req->sq->dhchap_step == NVME_AUTH_DHCHAP_MESSAGE_SUCCESS2) 521 520 nvmet_auth_sq_free(req->sq); 522 521 else if (req->sq->dhchap_step == NVME_AUTH_DHCHAP_MESSAGE_FAILURE1) {
-6
drivers/nvme/target/fabrics-cmd.c
··· 226 226 if (status) 227 227 goto out; 228 228 229 - /* zero out initial completion result, assign values as needed */ 230 - req->cqe->result.u32 = 0; 231 - 232 229 if (c->recfmt != 0) { 233 230 pr_warn("invalid connect version (%d).\n", 234 231 le16_to_cpu(c->recfmt)); ··· 301 304 status = nvmet_copy_from_sgl(req, 0, d, sizeof(*d)); 302 305 if (status) 303 306 goto out; 304 - 305 - /* zero out initial completion result, assign values as needed */ 306 - req->cqe->result.u32 = 0; 307 307 308 308 if (c->recfmt != 0) { 309 309 pr_warn("invalid connect version (%d).\n",
+3 -3
drivers/nvme/target/passthru.c
··· 226 226 req->cmd->common.opcode == nvme_admin_identify) { 227 227 switch (req->cmd->identify.cns) { 228 228 case NVME_ID_CNS_CTRL: 229 - nvmet_passthru_override_id_ctrl(req); 229 + status = nvmet_passthru_override_id_ctrl(req); 230 230 break; 231 231 case NVME_ID_CNS_NS: 232 - nvmet_passthru_override_id_ns(req); 232 + status = nvmet_passthru_override_id_ns(req); 233 233 break; 234 234 case NVME_ID_CNS_NS_DESC_LIST: 235 - nvmet_passthru_override_id_descs(req); 235 + status = nvmet_passthru_override_id_descs(req); 236 236 break; 237 237 } 238 238 } else if (status < 0)
+7 -1
drivers/parport/parport_amiga.c
··· 229 229 parport_put_port(port); 230 230 } 231 231 232 - static struct platform_driver amiga_parallel_driver = { 232 + /* 233 + * amiga_parallel_remove() lives in .exit.text. For drivers registered via 234 + * module_platform_driver_probe() this is ok because they cannot get unbound at 235 + * runtime. So mark the driver struct with __refdata to prevent modpost 236 + * triggering a section mismatch warning. 237 + */ 238 + static struct platform_driver amiga_parallel_driver __refdata = { 233 239 .remove_new = __exit_p(amiga_parallel_remove), 234 240 .driver = { 235 241 .name = "amiga-parallel",
+1 -2
drivers/ptp/ptp_sysfs.c
··· 296 296 if (max < ptp->n_vclocks) 297 297 goto out; 298 298 299 - size = sizeof(int) * max; 300 - vclock_index = kzalloc(size, GFP_KERNEL); 299 + vclock_index = kcalloc(max, sizeof(int), GFP_KERNEL); 301 300 if (!vclock_index) { 302 301 err = -ENOMEM; 303 302 goto out;
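The ptp_sysfs change above swaps an open-coded `sizeof(int) * max` plus kzalloc() for kcalloc(), which fails the allocation instead of silently wrapping when the multiplication overflows. The guard kcalloc() applies internally can be sketched in user space (calloc() already performs it; the explicit form shows the idea):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Return false if n * size would overflow size_t -- the overflow check
 * that kcalloc()/calloc() perform before allocating. An overflowed
 * open-coded multiply would instead request a tiny buffer and invite a
 * heap overflow on the subsequent writes. */
static bool alloc_size_ok(size_t n, size_t size)
{
	return size == 0 || n <= SIZE_MAX / size;
}
```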
+1 -1
drivers/ras/amd/atl/internal.h
··· 224 224 225 225 int get_df_system_info(void); 226 226 int determine_node_id(struct addr_ctx *ctx, u8 socket_num, u8 die_num); 227 - int get_addr_hash_mi300(void); 227 + int get_umc_info_mi300(void); 228 228 229 229 int get_address_map(struct addr_ctx *ctx); 230 230
+1 -1
drivers/ras/amd/atl/system.c
··· 127 127 if (reg == DF_FUNC0_ID_MI300) { 128 128 df_cfg.flags.heterogeneous = 1; 129 129 130 - if (get_addr_hash_mi300()) 130 + if (get_umc_info_mi300()) 131 131 return -EINVAL; 132 132 } 133 133
+114 -46
drivers/ras/amd/atl/umc.c
··· 68 68 }; 69 69 70 70 #define NUM_BANK_BITS 4 71 + #define NUM_COL_BITS 5 72 + #define NUM_SID_BITS 2 71 73 72 74 static struct { 73 75 /* UMC::CH::AddrHashBank */ ··· 82 80 u8 bank_xor; 83 81 } addr_hash; 84 82 83 + static struct { 84 + u8 bank[NUM_BANK_BITS]; 85 + u8 col[NUM_COL_BITS]; 86 + u8 sid[NUM_SID_BITS]; 87 + u8 num_row_lo; 88 + u8 num_row_hi; 89 + u8 row_lo; 90 + u8 row_hi; 91 + u8 pc; 92 + } bit_shifts; 93 + 85 94 #define MI300_UMC_CH_BASE 0x90000 95 + #define MI300_ADDR_CFG (MI300_UMC_CH_BASE + 0x30) 96 + #define MI300_ADDR_SEL (MI300_UMC_CH_BASE + 0x40) 97 + #define MI300_COL_SEL_LO (MI300_UMC_CH_BASE + 0x50) 98 + #define MI300_ADDR_SEL_2 (MI300_UMC_CH_BASE + 0xA4) 86 99 #define MI300_ADDR_HASH_BANK0 (MI300_UMC_CH_BASE + 0xC8) 87 100 #define MI300_ADDR_HASH_PC (MI300_UMC_CH_BASE + 0xE0) 88 101 #define MI300_ADDR_HASH_PC2 (MI300_UMC_CH_BASE + 0xE4) ··· 107 90 #define ADDR_HASH_ROW_XOR GENMASK(31, 14) 108 91 #define ADDR_HASH_BANK_XOR GENMASK(5, 0) 109 92 93 + #define ADDR_CFG_NUM_ROW_LO GENMASK(11, 8) 94 + #define ADDR_CFG_NUM_ROW_HI GENMASK(15, 12) 95 + 96 + #define ADDR_SEL_BANK0 GENMASK(3, 0) 97 + #define ADDR_SEL_BANK1 GENMASK(7, 4) 98 + #define ADDR_SEL_BANK2 GENMASK(11, 8) 99 + #define ADDR_SEL_BANK3 GENMASK(15, 12) 100 + #define ADDR_SEL_BANK4 GENMASK(20, 16) 101 + #define ADDR_SEL_ROW_LO GENMASK(27, 24) 102 + #define ADDR_SEL_ROW_HI GENMASK(31, 28) 103 + 104 + #define COL_SEL_LO_COL0 GENMASK(3, 0) 105 + #define COL_SEL_LO_COL1 GENMASK(7, 4) 106 + #define COL_SEL_LO_COL2 GENMASK(11, 8) 107 + #define COL_SEL_LO_COL3 GENMASK(15, 12) 108 + #define COL_SEL_LO_COL4 GENMASK(19, 16) 109 + 110 + #define ADDR_SEL_2_BANK5 GENMASK(4, 0) 111 + #define ADDR_SEL_2_CHAN GENMASK(15, 12) 112 + 110 113 /* 111 114 * Read UMC::CH::AddrHash{Bank,PC,PC2} registers to get XOR bits used 112 - * for hashing. Do this during module init, since the values will not 113 - * change during run time. 115 + * for hashing. 
116 + * 117 + * Also, read UMC::CH::Addr{Cfg,Sel,Sel2} and UMC::CH:ColSelLo registers to 118 + * get the values needed to reconstruct the normalized address. Apply additional 119 + * offsets to the raw register values, as needed. 120 + * 121 + * Do this during module init, since the values will not change during run time. 114 122 * 115 123 * These registers are instantiated for each UMC across each AMD Node. 116 124 * However, they should be identically programmed due to the fixed hardware 117 125 * design of MI300 systems. So read the values from Node 0 UMC 0 and keep a 118 126 * single global structure for simplicity. 119 127 */ 120 - int get_addr_hash_mi300(void) 128 + int get_umc_info_mi300(void) 121 129 { 122 130 u32 temp; 123 131 int ret; ··· 172 130 173 131 addr_hash.bank_xor = FIELD_GET(ADDR_HASH_BANK_XOR, temp); 174 132 133 + ret = amd_smn_read(0, MI300_ADDR_CFG, &temp); 134 + if (ret) 135 + return ret; 136 + 137 + bit_shifts.num_row_hi = FIELD_GET(ADDR_CFG_NUM_ROW_HI, temp); 138 + bit_shifts.num_row_lo = 10 + FIELD_GET(ADDR_CFG_NUM_ROW_LO, temp); 139 + 140 + ret = amd_smn_read(0, MI300_ADDR_SEL, &temp); 141 + if (ret) 142 + return ret; 143 + 144 + bit_shifts.bank[0] = 5 + FIELD_GET(ADDR_SEL_BANK0, temp); 145 + bit_shifts.bank[1] = 5 + FIELD_GET(ADDR_SEL_BANK1, temp); 146 + bit_shifts.bank[2] = 5 + FIELD_GET(ADDR_SEL_BANK2, temp); 147 + bit_shifts.bank[3] = 5 + FIELD_GET(ADDR_SEL_BANK3, temp); 148 + /* Use BankBit4 for the SID0 position. */ 149 + bit_shifts.sid[0] = 5 + FIELD_GET(ADDR_SEL_BANK4, temp); 150 + bit_shifts.row_lo = 12 + FIELD_GET(ADDR_SEL_ROW_LO, temp); 151 + bit_shifts.row_hi = 24 + FIELD_GET(ADDR_SEL_ROW_HI, temp); 152 + 153 + ret = amd_smn_read(0, MI300_COL_SEL_LO, &temp); 154 + if (ret) 155 + return ret; 156 + 157 + bit_shifts.col[0] = 2 + FIELD_GET(COL_SEL_LO_COL0, temp); 158 + bit_shifts.col[1] = 2 + FIELD_GET(COL_SEL_LO_COL1, temp); 159 + bit_shifts.col[2] = 2 + FIELD_GET(COL_SEL_LO_COL2, temp); 160 + bit_shifts.col[3] = 2 + FIELD_GET(COL_SEL_LO_COL3, temp); 161 + bit_shifts.col[4] = 2 + FIELD_GET(COL_SEL_LO_COL4, temp); 162 + 163 + ret = amd_smn_read(0, MI300_ADDR_SEL_2, &temp); 164 + if (ret) 165 + return ret; 166 + 167 + /* Use BankBit5 for the SID1 position. */ 168 + bit_shifts.sid[1] = 5 + FIELD_GET(ADDR_SEL_2_BANK5, temp); 169 + bit_shifts.pc = 5 + FIELD_GET(ADDR_SEL_2_CHAN, temp); 170 + 175 171 return 0; 176 172 } ··· 226 146 * The MCA address format is as follows: 227 147 * MCA_ADDR[27:0] = {S[1:0], P[0], R[14:0], B[3:0], C[4:0], Z[0]} 228 148 * 229 - * The normalized address format is fixed in hardware and is as follows: 230 - * NA[30:0] = {S[1:0], R[13:0], C4, B[1:0], B[3:2], C[3:2], P, C[1:0], Z[4:0]} 231 - * 232 149 * Additionally, the PC and Bank bits may be hashed. This must be accounted for before 233 150 * reconstructing the normalized address. 
234 151 */ ··· 235 158 #define MI300_UMC_MCA_PC BIT(25) 236 159 #define MI300_UMC_MCA_SID GENMASK(27, 26) 237 160 238 - #define MI300_NA_COL_1_0 GENMASK(6, 5) 239 - #define MI300_NA_PC BIT(7) 240 - #define MI300_NA_COL_3_2 GENMASK(9, 8) 241 - #define MI300_NA_BANK_3_2 GENMASK(11, 10) 242 - #define MI300_NA_BANK_1_0 GENMASK(13, 12) 243 - #define MI300_NA_COL_4 BIT(14) 244 - #define MI300_NA_ROW GENMASK(28, 15) 245 - #define MI300_NA_SID GENMASK(30, 29) 246 161 static unsigned long convert_dram_to_norm_addr_mi300(unsigned long addr) 247 162 { 249 - u16 i, col, row, bank, pc, sid, temp; 163 + u16 i, col, row, bank, pc, sid; 164 + u32 temp; 250 165 251 166 col = FIELD_GET(MI300_UMC_MCA_COL, addr); 252 167 bank = FIELD_GET(MI300_UMC_MCA_BANK, addr); ··· 258 189 259 190 /* Calculate hash for PC bit. */ 260 191 if (addr_hash.pc.xor_enable) { 261 - /* Bits SID[1:0] act as Bank[6:5] for PC hash, so apply them here. */ 262 - bank |= sid << 5; 263 - 264 192 temp = bitwise_xor_bits(col & addr_hash.pc.col_xor); 265 193 temp ^= bitwise_xor_bits(row & addr_hash.pc.row_xor); 266 - temp ^= bitwise_xor_bits(bank & addr_hash.bank_xor); 194 + /* Bits SID[1:0] act as Bank[5:4] for PC hash, so apply them here. */ 195 + temp ^= bitwise_xor_bits((bank | sid << NUM_BANK_BITS) & addr_hash.bank_xor); 267 196 pc ^= temp; 268 - 269 - /* Drop SID bits for the sake of debug printing later. */ 270 - bank &= 0x1F; 271 197 } 272 198 273 199 /* Reconstruct the normalized address starting with NA[4:0] = 0 */ 274 200 addr = 0; 275 201 276 - /* NA[6:5] = Column[1:0] */ 277 - temp = col & 0x3; 278 - addr |= FIELD_PREP(MI300_NA_COL_1_0, temp); 202 + /* Column bits */ 203 + for (i = 0; i < NUM_COL_BITS; i++) { 204 + temp = (col >> i) & 0x1; 205 + addr |= temp << bit_shifts.col[i]; 206 + } 279 207 280 - /* NA[7] = PC */ 281 - addr |= FIELD_PREP(MI300_NA_PC, pc); 208 + /* Bank bits */ 209 + for (i = 0; i < NUM_BANK_BITS; i++) { 210 + temp = (bank >> i) & 0x1; 211 + addr |= temp << bit_shifts.bank[i]; 212 + } 282 213 283 - /* NA[9:8] = Column[3:2] */ 284 - temp = (col >> 2) & 0x3; 285 - addr |= FIELD_PREP(MI300_NA_COL_3_2, temp); 214 + /* Row lo bits */ 215 + for (i = 0; i < bit_shifts.num_row_lo; i++) { 216 + temp = (row >> i) & 0x1; 217 + addr |= temp << (i + bit_shifts.row_lo); 218 + } 286 219 287 - /* NA[11:10] = Bank[3:2] */ 288 - temp = (bank >> 2) & 0x3; 289 - addr |= FIELD_PREP(MI300_NA_BANK_3_2, temp); 220 + /* Row hi bits */ 221 + for (i = 0; i < bit_shifts.num_row_hi; i++) { 222 + temp = (row >> (i + bit_shifts.num_row_lo)) & 0x1; 223 + addr |= temp << (i + bit_shifts.row_hi); 224 + } 290 225 291 - /* NA[13:12] = Bank[1:0] */ 292 - temp = bank & 0x3; 293 - addr |= FIELD_PREP(MI300_NA_BANK_1_0, temp); 226 + /* PC bit */ 227 + addr |= pc << bit_shifts.pc; 294 228 295 - /* NA[14] = Column[4] */ 296 - temp = (col >> 4) & 0x1; 297 - addr |= FIELD_PREP(MI300_NA_COL_4, temp); 298 - 299 - /* NA[28:15] = Row[13:0] */ 300 - addr |= FIELD_PREP(MI300_NA_ROW, row); 301 - 302 - /* NA[30:29] = SID[1:0] */ 303 - addr |= FIELD_PREP(MI300_NA_SID, sid); 229 + /* SID bits */ 230 + for (i = 0; i < NUM_SID_BITS; i++) { 231 + temp = (sid >> i) & 0x1; 232 + addr |= temp << bit_shifts.sid[i]; 233 + } 304 234 305 235 pr_debug("Addr=0x%016lx", addr); 306 236 pr_debug("Bank=%u Row=%u Column=%u PC=%u SID=%u", bank, row, col, pc, sid);
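The reworked convert_dram_to_norm_addr_mi300() above places each field bit at a register-derived position instead of using fixed FIELD_PREP() masks. The core operation is scattering the bits of a value to arbitrary target positions; a standalone sketch with an invented shift table (the driver's tables come from the UMC::CH::Addr{Sel,Sel2}/ColSelLo reads):

```c
#include <stdint.h>

/* Scatter the low nbits of val into addr: source bit i lands at
 * position shift[i]. This is the loop shape used above for the
 * column, bank, and SID fields. */
static uint64_t scatter_bits(uint64_t addr, uint16_t val,
			     const uint8_t *shift, unsigned int nbits)
{
	unsigned int i;

	for (i = 0; i < nbits; i++)
		addr |= (uint64_t)((val >> i) & 0x1) << shift[i];
	return addr;
}

/* Example: place a 3-bit column value at hypothetical bit positions
 * 2, 5, and 7 (these positions are illustrative only). */
static uint64_t scatter_demo(uint16_t col)
{
	static const uint8_t col_shift[] = { 2, 5, 7 };

	return scatter_bits(0, col, col_shift, 3);
}
```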
+62
drivers/scsi/mpi3mr/mpi3mr_app.c
··· 2163 2163 } 2164 2164 static DEVICE_ATTR_RO(persistent_id); 2165 2165 2166 + /** 2167 + * sas_ncq_prio_supported_show - Indicate if device supports NCQ priority 2168 + * @dev: pointer to embedded device 2169 + * @attr: sas_ncq_prio_supported attribute descriptor 2170 + * @buf: the buffer returned 2171 + * 2172 + * A sysfs 'read-only' sdev attribute, only works with SATA devices 2173 + */ 2174 + static ssize_t 2175 + sas_ncq_prio_supported_show(struct device *dev, 2176 + struct device_attribute *attr, char *buf) 2177 + { 2178 + struct scsi_device *sdev = to_scsi_device(dev); 2179 + 2180 + return sysfs_emit(buf, "%d\n", sas_ata_ncq_prio_supported(sdev)); 2181 + } 2182 + static DEVICE_ATTR_RO(sas_ncq_prio_supported); 2183 + 2184 + /** 2185 + * sas_ncq_prio_enable_show - send prioritized io commands to device 2186 + * @dev: pointer to embedded device 2187 + * @attr: sas_ncq_prio_enable attribute descriptor 2188 + * @buf: the buffer returned 2189 + * 2190 + * A sysfs 'read/write' sdev attribute, only works with SATA devices 2191 + */ 2192 + static ssize_t 2193 + sas_ncq_prio_enable_show(struct device *dev, 2194 + struct device_attribute *attr, char *buf) 2195 + { 2196 + struct scsi_device *sdev = to_scsi_device(dev); 2197 + struct mpi3mr_sdev_priv_data *sdev_priv_data = sdev->hostdata; 2198 + 2199 + if (!sdev_priv_data) 2200 + return 0; 2201 + 2202 + return sysfs_emit(buf, "%d\n", sdev_priv_data->ncq_prio_enable); 2203 + } 2204 + 2205 + static ssize_t 2206 + sas_ncq_prio_enable_store(struct device *dev, 2207 + struct device_attribute *attr, 2208 + const char *buf, size_t count) 2209 + { 2210 + struct scsi_device *sdev = to_scsi_device(dev); 2211 + struct mpi3mr_sdev_priv_data *sdev_priv_data = sdev->hostdata; 2212 + bool ncq_prio_enable = 0; 2213 + 2214 + if (kstrtobool(buf, &ncq_prio_enable)) 2215 + return -EINVAL; 2216 + 2217 + if (!sas_ata_ncq_prio_supported(sdev)) 2218 + return -EINVAL; 2219 + 2220 + sdev_priv_data->ncq_prio_enable = ncq_prio_enable; 2221 + 2222 + return strlen(buf); 2223 + } 2224 + static DEVICE_ATTR_RW(sas_ncq_prio_enable); 2225 + 2166 2226 static struct attribute *mpi3mr_dev_attrs[] = { 2167 2227 &dev_attr_sas_address.attr, 2168 2228 &dev_attr_device_handle.attr, 2169 2229 &dev_attr_persistent_id.attr, 2230 + &dev_attr_sas_ncq_prio_supported.attr, 2231 + &dev_attr_sas_ncq_prio_enable.attr, 2170 2232 NULL, 2171 2233 }; 2172 2234
+19
drivers/scsi/mpt3sas/mpt3sas_base.c
··· 8512 8512 ioc->pd_handles_sz = (ioc->facts.MaxDevHandle / 8); 8513 8513 if (ioc->facts.MaxDevHandle % 8) 8514 8514 ioc->pd_handles_sz++; 8515 + /* 8516 + * pd_handles_sz should have, at least, the minimal room for 8517 + * set_bit()/test_bit(), otherwise out-of-memory touch may occur. 8518 + */ 8519 + ioc->pd_handles_sz = ALIGN(ioc->pd_handles_sz, sizeof(unsigned long)); 8520 + 8515 8521 ioc->pd_handles = kzalloc(ioc->pd_handles_sz, 8516 8522 GFP_KERNEL); 8517 8523 if (!ioc->pd_handles) { ··· 8535 8529 ioc->pend_os_device_add_sz = (ioc->facts.MaxDevHandle / 8); 8536 8530 if (ioc->facts.MaxDevHandle % 8) 8537 8531 ioc->pend_os_device_add_sz++; 8532 + 8533 + /* 8534 + * pend_os_device_add_sz should have, at least, the minimal room for 8535 + * set_bit()/test_bit(), otherwise out-of-memory may occur. 8536 + */ 8537 + ioc->pend_os_device_add_sz = ALIGN(ioc->pend_os_device_add_sz, 8538 + sizeof(unsigned long)); 8538 8539 ioc->pend_os_device_add = kzalloc(ioc->pend_os_device_add_sz, 8539 8540 GFP_KERNEL); 8540 8541 if (!ioc->pend_os_device_add) { ··· 8833 8820 if (ioc->facts.MaxDevHandle % 8) 8834 8821 pd_handles_sz++; 8835 8822 8823 + /* 8824 + * pd_handles should have, at least, the minimal room for 8825 + * set_bit()/test_bit(), otherwise out-of-memory touch may 8826 + * occur. 8827 + */ 8828 + pd_handles_sz = ALIGN(pd_handles_sz, sizeof(unsigned long)); 8836 8829 pd_handles = krealloc(ioc->pd_handles, pd_handles_sz, 8837 8830 GFP_KERNEL); 8838 8831 if (!pd_handles) {
-3
drivers/scsi/mpt3sas/mpt3sas_base.h
··· 2048 2048 mpt3sas_setup_direct_io(struct MPT3SAS_ADAPTER *ioc, struct scsi_cmnd *scmd, 2049 2049 struct _raid_device *raid_device, Mpi25SCSIIORequest_t *mpi_request); 2050 2050 2051 - /* NCQ Prio Handling Check */ 2052 - bool scsih_ncq_prio_supp(struct scsi_device *sdev); 2053 - 2054 2051 void mpt3sas_setup_debugfs(struct MPT3SAS_ADAPTER *ioc); 2055 2052 void mpt3sas_destroy_debugfs(struct MPT3SAS_ADAPTER *ioc); 2056 2053 void mpt3sas_init_debugfs(void);
+2 -2
drivers/scsi/mpt3sas/mpt3sas_ctl.c
··· 4088 4088 { 4089 4089 struct scsi_device *sdev = to_scsi_device(dev); 4090 4090 4091 - return sysfs_emit(buf, "%d\n", scsih_ncq_prio_supp(sdev)); 4091 + return sysfs_emit(buf, "%d\n", sas_ata_ncq_prio_supported(sdev)); 4092 4092 } 4093 4093 static DEVICE_ATTR_RO(sas_ncq_prio_supported); 4094 4094 ··· 4123 4123 if (kstrtobool(buf, &ncq_prio_enable)) 4124 4124 return -EINVAL; 4125 4125 4126 - if (!scsih_ncq_prio_supp(sdev)) 4126 + if (!sas_ata_ncq_prio_supported(sdev)) 4127 4127 return -EINVAL; 4128 4128 4129 4129 sas_device_priv_data->ncq_prio_enable = ncq_prio_enable;
-23
drivers/scsi/mpt3sas/mpt3sas_scsih.c
··· 12571 12571 return PCI_ERS_RESULT_RECOVERED; 12572 12572 } 12573 12573 12574 - /** 12575 - * scsih_ncq_prio_supp - Check for NCQ command priority support 12576 - * @sdev: scsi device struct 12577 - * 12578 - * This is called when a user indicates they would like to enable 12579 - * ncq command priorities. This works only on SATA devices. 12580 - */ 12581 - bool scsih_ncq_prio_supp(struct scsi_device *sdev) 12582 - { 12583 - struct scsi_vpd *vpd; 12584 - bool ncq_prio_supp = false; 12585 - 12586 - rcu_read_lock(); 12587 - vpd = rcu_dereference(sdev->vpd_pg89); 12588 - if (!vpd || vpd->len < 214) 12589 - goto out; 12590 - 12591 - ncq_prio_supp = (vpd->data[213] >> 4) & 1; 12592 - out: 12593 - rcu_read_unlock(); 12594 - 12595 - return ncq_prio_supp; 12596 - } 12597 12574 /* 12598 12575 * The pci device ids are defined in mpi/mpi2_cnfg.h. 12599 12576 */
+7
drivers/scsi/scsi.c
··· 673 673 sdev->use_10_for_rw = 0; 674 674 675 675 sdev->cdl_supported = 1; 676 + 677 + /* 678 + * If the device supports CDL, make sure that the current drive 679 + * feature status is consistent with the user controlled 680 + * cdl_enable state. 681 + */ 682 + scsi_cdl_enable(sdev, sdev->cdl_enable); 676 683 } else { 677 684 sdev->cdl_supported = 0; 678 685 }
+23
drivers/scsi/scsi_transport_sas.c
··· 416 416 } 417 417 EXPORT_SYMBOL_GPL(sas_is_tlr_enabled); 418 418 419 + /** 420 + * sas_ata_ncq_prio_supported - Check for ATA NCQ command priority support 421 + * @sdev: SCSI device 422 + * 423 + * Check if an ATA device supports NCQ priority using VPD page 89h (ATA 424 + * Information). Since this VPD page is implemented only for ATA devices, 425 + * this function always returns false for SCSI devices. 426 + */ 427 + bool sas_ata_ncq_prio_supported(struct scsi_device *sdev) 428 + { 429 + struct scsi_vpd *vpd; 430 + bool ncq_prio_supported = false; 431 + 432 + rcu_read_lock(); 433 + vpd = rcu_dereference(sdev->vpd_pg89); 434 + if (vpd && vpd->len >= 214) 435 + ncq_prio_supported = (vpd->data[213] >> 4) & 1; 436 + rcu_read_unlock(); 437 + 438 + return ncq_prio_supported; 439 + } 440 + EXPORT_SYMBOL_GPL(sas_ata_ncq_prio_supported); 441 + 419 442 /* 420 443 * SAS Phy attributes 421 444 */
+12 -5
drivers/scsi/sd.c
··· 3565 3565 3566 3566 static void sd_read_block_zero(struct scsi_disk *sdkp) 3567 3567 { 3568 - unsigned int buf_len = sdkp->device->sector_size; 3569 - char *buffer, cmd[10] = { }; 3568 + struct scsi_device *sdev = sdkp->device; 3569 + unsigned int buf_len = sdev->sector_size; 3570 + u8 *buffer, cmd[16] = { }; 3570 3571 3571 3572 buffer = kmalloc(buf_len, GFP_KERNEL); 3572 3573 if (!buffer) 3573 3574 return; 3574 3575 3575 - cmd[0] = READ_10; 3576 - put_unaligned_be32(0, &cmd[2]); /* Logical block address 0 */ 3577 - put_unaligned_be16(1, &cmd[7]); /* Transfer 1 logical block */ 3576 + if (sdev->use_16_for_rw) { 3577 + cmd[0] = READ_16; 3578 + put_unaligned_be64(0, &cmd[2]); /* Logical block address 0 */ 3579 + put_unaligned_be32(1, &cmd[10]);/* Transfer 1 logical block */ 3580 + } else { 3581 + cmd[0] = READ_10; 3582 + put_unaligned_be32(0, &cmd[2]); /* Logical block address 0 */ 3583 + put_unaligned_be16(1, &cmd[7]); /* Transfer 1 logical block */ 3584 + } 3578 3585 3579 3586 scsi_execute_cmd(sdkp->device, cmd, REQ_OP_DRV_IN, buffer, buf_len, 3580 3587 SD_TIMEOUT, sdkp->max_retries, NULL);
+2 -2
drivers/soc/tegra/fuse/fuse-tegra.c
··· 127 127 128 128 static int tegra_fuse_add_lookups(struct tegra_fuse *fuse) 129 129 { 130 - fuse->lookups = kmemdup_array(fuse->soc->lookups, sizeof(*fuse->lookups), 131 - fuse->soc->num_lookups, GFP_KERNEL); 130 + fuse->lookups = kmemdup_array(fuse->soc->lookups, fuse->soc->num_lookups, 131 + sizeof(*fuse->lookups), GFP_KERNEL); 132 132 if (!fuse->lookups) 133 133 return -ENOMEM; 134 134
+1 -1
drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
··· 1759 1759 if (err) 1760 1760 goto failed_platform_init; 1761 1761 1762 - vchiq_debugfs_init(); 1762 + vchiq_debugfs_init(&mgmt->state); 1763 1763 1764 1764 dev_dbg(&pdev->dev, "arm: platform initialised - version %d (min %d)\n", 1765 1765 VCHIQ_VERSION, VCHIQ_VERSION_MIN);
+4 -4
drivers/staging/vc04_services/interface/vchiq_arm/vchiq_debugfs.c
··· 42 42 43 43 static int vchiq_dump_show(struct seq_file *f, void *offset) 44 44 { 45 - struct vchiq_instance *instance = f->private; 45 + struct vchiq_state *state = f->private; 46 46 47 - vchiq_dump_state(f, instance->state); 47 + vchiq_dump_state(f, state); 48 48 49 49 return 0; 50 50 } ··· 121 121 debugfs_remove_recursive(node->dentry); 122 122 } 123 123 124 - void vchiq_debugfs_init(void) 124 + void vchiq_debugfs_init(struct vchiq_state *state) 125 125 { 126 126 vchiq_dbg_dir = debugfs_create_dir("vchiq", NULL); 127 127 vchiq_dbg_clients = debugfs_create_dir("clients", vchiq_dbg_dir); 128 128 129 - debugfs_create_file("state", S_IFREG | 0444, vchiq_dbg_dir, NULL, 129 + debugfs_create_file("state", S_IFREG | 0444, vchiq_dbg_dir, state, 130 130 &vchiq_dump_fops); 131 131 } 132 132
+1 -1
drivers/staging/vc04_services/interface/vchiq_arm/vchiq_debugfs.h
··· 10 10 struct dentry *dentry; 11 11 }; 12 12 13 - void vchiq_debugfs_init(void); 13 + void vchiq_debugfs_init(struct vchiq_state *state); 14 14 15 15 void vchiq_debugfs_deinit(void); 16 16
+17
drivers/thermal/gov_step_wise.c
··· 93 93 if (instance->initialized && old_target == instance->target) 94 94 continue; 95 95 96 + if (trip->type == THERMAL_TRIP_PASSIVE) { 97 + /* 98 + * If the target state for this thermal instance 99 + * changes from THERMAL_NO_TARGET to something else, 100 + * ensure that the zone temperature will be updated 101 + * (assuming enabled passive cooling) until it becomes 102 + * THERMAL_NO_TARGET again, or the cooling device may 103 + * not be reset to its initial state. 104 + */ 105 + if (old_target == THERMAL_NO_TARGET && 106 + instance->target != THERMAL_NO_TARGET) 107 + tz->passive++; 108 + else if (old_target != THERMAL_NO_TARGET && 109 + instance->target == THERMAL_NO_TARGET) 110 + tz->passive--; 111 + } 112 + 96 113 instance->initialized = true; 97 114 98 115 mutex_lock(&instance->cdev->lock);
+11 -2
drivers/thermal/thermal_core.c
··· 999 999 if (ret) 1000 1000 goto out_cdev_type; 1001 1001 1002 + /* 1003 + * The cooling device's current state is only needed for debug 1004 + * initialization below, so a failure to get it does not cause 1005 + * the entire cooling device initialization to fail. However, 1006 + * the debug will not work for the device if its initial state 1007 + * cannot be determined and drivers are responsible for ensuring 1008 + * that this will not happen. 1009 + */ 1002 1010 ret = cdev->ops->get_cur_state(cdev, &current_state); 1003 1011 if (ret) 1004 - goto out_cdev_type; 1012 + current_state = ULONG_MAX; 1005 1013 1006 1014 thermal_cooling_device_setup_sysfs(cdev); 1007 1015 ··· 1024 1016 return ERR_PTR(ret); 1025 1017 } 1026 1018 1027 - thermal_debug_cdev_add(cdev, current_state); 1019 + if (current_state <= cdev->max_state) 1020 + thermal_debug_cdev_add(cdev, current_state); 1028 1021 1029 1022 /* Add 'this' new cdev to the global cdev list */ 1030 1023 mutex_lock(&thermal_list_lock);
+3 -2
drivers/thunderbolt/debugfs.c
··· 943 943 debugfs_create_file("run", 0600, dir, port, &margining_run_fops); 944 944 debugfs_create_file("results", 0600, dir, port, &margining_results_fops); 945 945 debugfs_create_file("test", 0600, dir, port, &margining_test_fops); 946 - if (independent_voltage_margins(usb4) || 947 - (supports_time(usb4) && independent_time_margins(usb4))) 946 + if (independent_voltage_margins(usb4) == USB4_MARGIN_CAP_0_VOLTAGE_HL || 947 + (supports_time(usb4) && 948 + independent_time_margins(usb4) == USB4_MARGIN_CAP_1_TIME_LR)) 948 949 debugfs_create_file("margin", 0600, dir, port, &margining_margin_fops); 949 950 } 950 951
+16 -6
drivers/tty/n_tty.c
··· 1619 1619 else if (ldata->raw || (L_EXTPROC(tty) && !preops)) 1620 1620 n_tty_receive_buf_raw(tty, cp, fp, count); 1621 1621 else if (tty->closing && !L_EXTPROC(tty)) { 1622 - if (la_count > 0) 1622 + if (la_count > 0) { 1623 1623 n_tty_receive_buf_closing(tty, cp, fp, la_count, true); 1624 - if (count > la_count) 1625 - n_tty_receive_buf_closing(tty, cp, fp, count - la_count, false); 1624 + cp += la_count; 1625 + if (fp) 1626 + fp += la_count; 1627 + count -= la_count; 1628 + } 1629 + if (count > 0) 1630 + n_tty_receive_buf_closing(tty, cp, fp, count, false); 1626 1631 } else { 1627 - if (la_count > 0) 1632 + if (la_count > 0) { 1628 1633 n_tty_receive_buf_standard(tty, cp, fp, la_count, true); 1629 - if (count > la_count) 1630 - n_tty_receive_buf_standard(tty, cp, fp, count - la_count, false); 1634 + cp += la_count; 1635 + if (fp) 1636 + fp += la_count; 1637 + count -= la_count; 1638 + } 1639 + if (count > 0) 1640 + n_tty_receive_buf_standard(tty, cp, fp, count, false); 1631 1641 1632 1642 flush_echoes(tty); 1633 1643 if (tty->ops->flush_chars)
+34 -2
drivers/tty/serial/8250/8250_dw.c
··· 55 55 #define DW_UART_QUIRK_SKIP_SET_RATE BIT(2) 56 56 #define DW_UART_QUIRK_IS_DMA_FC BIT(3) 57 57 #define DW_UART_QUIRK_APMC0D08 BIT(4) 58 + #define DW_UART_QUIRK_CPR_VALUE BIT(5) 59 + 60 + struct dw8250_platform_data { 61 + u8 usr_reg; 62 + u32 cpr_value; 63 + unsigned int quirks; 64 + }; 65 + 66 + struct dw8250_data { 67 + struct dw8250_port_data data; 68 + const struct dw8250_platform_data *pdata; 69 + 70 + int msr_mask_on; 71 + int msr_mask_off; 72 + struct clk *clk; 73 + struct clk *pclk; 74 + struct notifier_block clk_notifier; 75 + struct work_struct clk_work; 76 + struct reset_control *rst; 77 + 78 + unsigned int skip_autocfg:1; 79 + unsigned int uart_16550_compatible:1; 80 + }; 81 + 82 + static inline struct dw8250_data *to_dw8250_data(struct dw8250_port_data *data) 83 + { 84 + return container_of(data, struct dw8250_data, data); 85 + } 58 86 59 87 static inline struct dw8250_data *clk_to_dw8250_data(struct notifier_block *nb) 60 88 { ··· 460 432 static void dw8250_quirks(struct uart_port *p, struct dw8250_data *data) 461 433 { 462 434 unsigned int quirks = data->pdata ? data->pdata->quirks : 0; 435 + u32 cpr_value = data->pdata ? data->pdata->cpr_value : 0; 436 + 437 + if (quirks & DW_UART_QUIRK_CPR_VALUE) 438 + data->data.cpr_value = cpr_value; 463 439 464 440 #ifdef CONFIG_64BIT 465 441 if (quirks & DW_UART_QUIRK_OCTEON) { ··· 746 714 747 715 static const struct dw8250_platform_data dw8250_renesas_rzn1_data = { 748 716 .usr_reg = DW_UART_USR, 749 - .cpr_val = 0x00012f32, 750 - .quirks = DW_UART_QUIRK_IS_DMA_FC, 717 + .cpr_value = 0x00012f32, 718 + .quirks = DW_UART_QUIRK_CPR_VALUE | DW_UART_QUIRK_IS_DMA_FC, 751 719 }; 752 720 753 721 static const struct dw8250_platform_data dw8250_starfive_jh7100_data = {
+1 -2
drivers/tty/serial/8250/8250_dwlib.c
··· 242 242 void dw8250_setup_port(struct uart_port *p) 243 243 { 244 244 struct dw8250_port_data *pd = p->private_data; 245 - struct dw8250_data *data = to_dw8250_data(pd); 246 245 struct uart_8250_port *up = up_to_u8250p(p); 247 246 u32 reg, old_dlf; 248 247 ··· 277 278 278 279 reg = dw8250_readl_ext(p, DW_UART_CPR); 279 280 if (!reg) { 280 - reg = data->pdata->cpr_val; 281 + reg = pd->cpr_value; 281 282 dev_dbg(p->dev, "CPR is not available, using 0x%08x instead\n", reg); 282 283 } 283 284 if (!reg)
+1 -32
drivers/tty/serial/8250/8250_dwlib.h
··· 2 2 /* Synopsys DesignWare 8250 library header file. */ 3 3 4 4 #include <linux/io.h> 5 - #include <linux/notifier.h> 6 5 #include <linux/types.h> 7 - #include <linux/workqueue.h> 8 6 9 7 #include "8250.h" 10 - 11 - struct clk; 12 - struct reset_control; 13 8 14 9 struct dw8250_port_data { 15 10 /* Port properties */ ··· 14 19 struct uart_8250_dma dma; 15 20 16 21 /* Hardware configuration */ 22 + u32 cpr_value; 17 23 u8 dlf_size; 18 24 19 25 /* RS485 variables */ 20 26 bool hw_rs485_support; 21 27 }; 22 28 23 - struct dw8250_platform_data { 24 - u8 usr_reg; 25 - u32 cpr_val; 26 - unsigned int quirks; 27 - }; 28 - 29 - struct dw8250_data { 30 - struct dw8250_port_data data; 31 - const struct dw8250_platform_data *pdata; 32 - 33 - int msr_mask_on; 34 - int msr_mask_off; 35 - struct clk *clk; 36 - struct clk *pclk; 37 - struct notifier_block clk_notifier; 38 - struct work_struct clk_work; 39 - struct reset_control *rst; 40 - 41 - unsigned int skip_autocfg:1; 42 - unsigned int uart_16550_compatible:1; 43 - }; 44 - 45 29 void dw8250_do_set_termios(struct uart_port *p, struct ktermios *termios, const struct ktermios *old); 46 30 void dw8250_setup_port(struct uart_port *p); 47 - 48 - static inline struct dw8250_data *to_dw8250_data(struct dw8250_port_data *data) 49 - { 50 - return container_of(data, struct dw8250_data, data); 51 - } 52 31 53 32 static inline u32 dw8250_readl_ext(struct uart_port *p, int offset) 54 33 {
+1
drivers/tty/serial/8250/8250_pxa.c
··· 125 125 uart.port.iotype = UPIO_MEM32; 126 126 uart.port.regshift = 2; 127 127 uart.port.fifosize = 64; 128 + uart.tx_loadsz = 32; 128 129 uart.dl_write = serial_pxa_dl_write; 129 130 130 131 ret = serial8250_register_8250_port(&uart);
+2 -1
drivers/tty/serial/Kconfig
··· 1023 1023 help 1024 1024 Support for console on SCCNXP serial ports. 1025 1025 1026 - config SERIAL_SC16IS7XX_CORE 1026 + config SERIAL_SC16IS7XX 1027 1027 tristate "NXP SC16IS7xx UART support" 1028 + depends on SPI_MASTER || I2C 1028 1029 select SERIAL_CORE 1029 1030 select SERIAL_SC16IS7XX_SPI if SPI_MASTER 1030 1031 select SERIAL_SC16IS7XX_I2C if I2C
+1 -1
drivers/tty/serial/Makefile
··· 75 75 obj-$(CONFIG_SERIAL_SAMSUNG) += samsung_tty.o 76 76 obj-$(CONFIG_SERIAL_SB1250_DUART) += sb1250-duart.o 77 77 obj-$(CONFIG_SERIAL_SCCNXP) += sccnxp.o 78 - obj-$(CONFIG_SERIAL_SC16IS7XX_CORE) += sc16is7xx.o 78 + obj-$(CONFIG_SERIAL_SC16IS7XX) += sc16is7xx.o 79 79 obj-$(CONFIG_SERIAL_SC16IS7XX_SPI) += sc16is7xx_spi.o 80 80 obj-$(CONFIG_SERIAL_SC16IS7XX_I2C) += sc16is7xx_i2c.o 81 81 obj-$(CONFIG_SERIAL_SH_SCI) += sh-sci.o
+1 -1
drivers/tty/serial/serial_core.c
··· 622 622 return -EL3HLT; 623 623 624 624 port = uart_port_lock(state, flags); 625 - if (WARN_ON_ONCE(!state->port.xmit_buf)) { 625 + if (!state->port.xmit_buf) { 626 626 uart_port_unlock(port, flags); 627 627 return 0; 628 628 }
+7
drivers/tty/serial/serial_port.c
··· 64 64 if (port->flags & UPF_DEAD) 65 65 return 0; 66 66 67 + /* 68 + * Nothing to do on pm_runtime_force_suspend(), see 69 + * DEFINE_RUNTIME_DEV_PM_OPS. 70 + */ 71 + if (!pm_runtime_enabled(dev)) 72 + return 0; 73 + 67 74 uart_port_lock_irqsave(port, &flags); 68 75 if (!port_dev->tx_enabled) { 69 76 uart_port_unlock_irqrestore(port, flags);
+3 -3
drivers/ufs/core/ufshcd.c
··· 1366 1366 * make sure that there are no outstanding requests when 1367 1367 * clock scaling is in progress 1368 1368 */ 1369 - ufshcd_scsi_block_requests(hba); 1369 + blk_mq_quiesce_tagset(&hba->host->tag_set); 1370 1370 mutex_lock(&hba->wb_mutex); 1371 1371 down_write(&hba->clk_scaling_lock); 1372 1372 ··· 1375 1375 ret = -EBUSY; 1376 1376 up_write(&hba->clk_scaling_lock); 1377 1377 mutex_unlock(&hba->wb_mutex); 1378 - ufshcd_scsi_unblock_requests(hba); 1378 + blk_mq_unquiesce_tagset(&hba->host->tag_set); 1379 1379 goto out; 1380 1380 } 1381 1381 ··· 1396 1396 1397 1397 mutex_unlock(&hba->wb_mutex); 1398 1398 1399 - ufshcd_scsi_unblock_requests(hba); 1399 + blk_mq_unquiesce_tagset(&hba->host->tag_set); 1400 1400 ufshcd_release(hba); 1401 1401 } 1402 1402
+1
drivers/usb/Makefile
··· 35 35 obj-$(CONFIG_USB_FSL_USB2) += host/ 36 36 obj-$(CONFIG_USB_FOTG210_HCD) += host/ 37 37 obj-$(CONFIG_USB_MAX3421_HCD) += host/ 38 + obj-$(CONFIG_USB_XEN_HCD) += host/ 38 39 39 40 obj-$(CONFIG_USB_C67X00_HCD) += c67x00/ 40 41
+4 -4
drivers/usb/chipidea/core.c
··· 1084 1084 return -ENODEV; 1085 1085 } 1086 1086 1087 + ret = ci_ulpi_init(ci); 1088 + if (ret) 1089 + return ret; 1090 + 1087 1091 if (ci->platdata->phy) { 1088 1092 ci->phy = ci->platdata->phy; 1089 1093 } else if (ci->platdata->usb_phy) { ··· 1141 1137 dev_err(dev, "unable to init phy: %d\n", ret); 1142 1138 goto ulpi_exit; 1143 1139 } 1144 - 1145 - ret = ci_ulpi_init(ci); 1146 - if (ret) 1147 - return ret; 1148 1140 1149 1141 ci->hw_bank.phys = res->start; 1150 1142
+5
drivers/usb/chipidea/ulpi.c
··· 68 68 if (ci->platdata->phy_mode != USBPHY_INTERFACE_MODE_ULPI) 69 69 return 0; 70 70 71 + /* 72 + * Set PORTSC correctly so we can read/write ULPI registers for 73 + * identification purposes 74 + */ 75 + hw_phymode_configure(ci); 71 76 72 77 ci->ulpi_ops.read = ci_ulpi_read; 73 78 ci->ulpi_ops.write = ci_ulpi_write;
+2 -2
drivers/usb/class/cdc-wdm.c
··· 266 266 dev_err(&desc->intf->dev, "Stall on int endpoint\n"); 267 267 goto sw; /* halt is cleared in work */ 268 268 default: 269 - dev_err(&desc->intf->dev, 269 + dev_err_ratelimited(&desc->intf->dev, 270 270 "nonzero urb status received: %d\n", status); 271 271 break; 272 272 } 273 273 } 274 274 275 275 if (urb->actual_length < sizeof(struct usb_cdc_notification)) { 276 - dev_err(&desc->intf->dev, "wdm_int_callback - %d bytes\n", 276 + dev_err_ratelimited(&desc->intf->dev, "wdm_int_callback - %d bytes\n", 277 277 urb->actual_length); 278 278 goto exit; 279 279 }
+7 -5
drivers/usb/core/hcd.c
··· 1623 1623 struct usb_hcd *hcd = bus_to_hcd(urb->dev->bus); 1624 1624 struct usb_anchor *anchor = urb->anchor; 1625 1625 int status = urb->unlinked; 1626 + unsigned long flags; 1626 1627 1627 1628 urb->hcpriv = NULL; 1628 1629 if (unlikely((urb->transfer_flags & URB_SHORT_NOT_OK) && ··· 1641 1640 /* pass ownership to the completion handler */ 1642 1641 urb->status = status; 1643 1642 /* 1644 - * This function can be called in task context inside another remote 1645 - * coverage collection section, but kcov doesn't support that kind of 1646 - * recursion yet. Only collect coverage in softirq context for now. 1643 + * Only collect coverage in the softirq context and disable interrupts 1644 + * to avoid scenarios with nested remote coverage collection sections 1645 + * that KCOV does not support. 1646 + * See the comment next to kcov_remote_start_usb_softirq() for details. 1647 1647 */ 1648 - kcov_remote_start_usb_softirq((u64)urb->dev->bus->busnum); 1648 + flags = kcov_remote_start_usb_softirq((u64)urb->dev->bus->busnum); 1649 1649 urb->complete(urb); 1650 - kcov_remote_stop_softirq(); 1650 + kcov_remote_stop_softirq(flags); 1651 1651 1652 1652 usb_anchor_resume_wakeups(anchor); 1653 1653 atomic_dec(&urb->use_count);
+7
drivers/usb/host/xhci-pci.c
··· 36 36 37 37 #define PCI_VENDOR_ID_ETRON 0x1b6f 38 38 #define PCI_DEVICE_ID_EJ168 0x7023 39 + #define PCI_DEVICE_ID_EJ188 0x7052 39 40 40 41 #define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI 0x8c31 41 42 #define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI 0x9c31 ··· 396 395 xhci->quirks |= XHCI_RESET_ON_RESUME; 397 396 xhci->quirks |= XHCI_BROKEN_STREAMS; 398 397 } 398 + if (pdev->vendor == PCI_VENDOR_ID_ETRON && 399 + pdev->device == PCI_DEVICE_ID_EJ188) { 400 + xhci->quirks |= XHCI_RESET_ON_RESUME; 401 + xhci->quirks |= XHCI_BROKEN_STREAMS; 402 + } 403 + 399 404 if (pdev->vendor == PCI_VENDOR_ID_RENESAS && 400 405 pdev->device == 0x0014) { 401 406 xhci->quirks |= XHCI_ZERO_64B_REGS;
+45 -14
drivers/usb/host/xhci-ring.c
··· 1031 1031 break; 1032 1032 case TD_DIRTY: /* TD is cached, clear it */ 1033 1033 case TD_HALTED: 1034 + case TD_CLEARING_CACHE_DEFERRED: 1035 + if (cached_td) { 1036 + if (cached_td->urb->stream_id != td->urb->stream_id) { 1037 + /* Multiple streams case, defer move dq */ 1038 + xhci_dbg(xhci, 1039 + "Move dq deferred: stream %u URB %p\n", 1040 + td->urb->stream_id, td->urb); 1041 + td->cancel_status = TD_CLEARING_CACHE_DEFERRED; 1042 + break; 1043 + } 1044 + 1045 + /* Should never happen, but clear the TD if it does */ 1046 + xhci_warn(xhci, 1047 + "Found multiple active URBs %p and %p in stream %u?\n", 1048 + td->urb, cached_td->urb, 1049 + td->urb->stream_id); 1050 + td_to_noop(xhci, ring, cached_td, false); 1051 + cached_td->cancel_status = TD_CLEARED; 1052 + } 1053 + 1034 1054 td->cancel_status = TD_CLEARING_CACHE; 1035 - if (cached_td) 1036 - /* FIXME stream case, several stopped rings */ 1037 - xhci_dbg(xhci, 1038 - "Move dq past stream %u URB %p instead of stream %u URB %p\n", 1039 - td->urb->stream_id, td->urb, 1040 - cached_td->urb->stream_id, cached_td->urb); 1041 1055 cached_td = td; 1042 1056 break; 1043 1057 } ··· 1071 1057 if (err) { 1072 1058 /* Failed to move past cached td, just set cached TDs to no-op */ 1073 1059 list_for_each_entry_safe(td, tmp_td, &ep->cancelled_td_list, cancelled_td_list) { 1074 - if (td->cancel_status != TD_CLEARING_CACHE) 1060 + /* 1061 + * Deferred TDs need to have the deq pointer set after the above command 1062 + * completes, so if that failed we just give up on all of them (and 1063 + * complain loudly since this could cause issues due to caching). 
1064 + */ 1065 + if (td->cancel_status != TD_CLEARING_CACHE && 1066 + td->cancel_status != TD_CLEARING_CACHE_DEFERRED) 1075 1067 continue; 1076 - xhci_dbg(xhci, "Failed to clear cancelled cached URB %p, mark clear anyway\n", 1077 - td->urb); 1068 + xhci_warn(xhci, "Failed to clear cancelled cached URB %p, mark clear anyway\n", 1069 + td->urb); 1078 1070 td_to_noop(xhci, ring, td, false); 1079 1071 td->cancel_status = TD_CLEARED; 1080 1072 } ··· 1366 1346 struct xhci_ep_ctx *ep_ctx; 1367 1347 struct xhci_slot_ctx *slot_ctx; 1368 1348 struct xhci_td *td, *tmp_td; 1349 + bool deferred = false; 1369 1350 1370 1351 ep_index = TRB_TO_EP_INDEX(le32_to_cpu(trb->generic.field[3])); 1371 1352 stream_id = TRB_TO_STREAM_ID(le32_to_cpu(trb->generic.field[2])); ··· 1453 1432 xhci_dbg(ep->xhci, "%s: Giveback cancelled URB %p TD\n", 1454 1433 __func__, td->urb); 1455 1434 xhci_td_cleanup(ep->xhci, td, ep_ring, td->status); 1435 + } else if (td->cancel_status == TD_CLEARING_CACHE_DEFERRED) { 1436 + deferred = true; 1456 1437 } else { 1457 1438 xhci_dbg(ep->xhci, "%s: Keep cancelled URB %p TD as cancel_status is %d\n", 1458 1439 __func__, td->urb, td->cancel_status); ··· 1464 1441 ep->ep_state &= ~SET_DEQ_PENDING; 1465 1442 ep->queued_deq_seg = NULL; 1466 1443 ep->queued_deq_ptr = NULL; 1467 - /* Restart any rings with pending URBs */ 1468 - ring_doorbell_for_active_rings(xhci, slot_id, ep_index); 1444 + 1445 + if (deferred) { 1446 + /* We have more streams to clear */ 1447 + xhci_dbg(ep->xhci, "%s: Pending TDs to clear, continuing with invalidation\n", 1448 + __func__); 1449 + xhci_invalidate_cancelled_tds(ep); 1450 + } else { 1451 + /* Restart any rings with pending URBs */ 1452 + xhci_dbg(ep->xhci, "%s: All TDs cleared, ring doorbell\n", __func__); 1453 + ring_doorbell_for_active_rings(xhci, slot_id, ep_index); 1454 + } 1469 1455 } 1470 1456 1471 1457 static void xhci_handle_cmd_reset_ep(struct xhci_hcd *xhci, int slot_id, ··· 2556 2524 goto finish_td; 2557 2525 case 
COMP_STOPPED_LENGTH_INVALID: 2558 2526 /* stopped on ep trb with invalid length, exclude it */ 2559 - ep_trb_len = 0; 2560 - remaining = 0; 2561 - break; 2527 + td->urb->actual_length = sum_trb_lengths(xhci, ep_ring, ep_trb); 2528 + goto finish_td; 2562 2529 case COMP_USB_TRANSACTION_ERROR: 2563 2530 if (xhci->quirks & XHCI_NO_SOFT_RETRY || 2564 2531 (ep->err_count++ > MAX_SOFT_RETRY) ||
+1
drivers/usb/host/xhci.h
··· 1276 1276 TD_DIRTY = 0, 1277 1277 TD_HALTED, 1278 1278 TD_CLEARING_CACHE, 1279 + TD_CLEARING_CACHE_DEFERRED, 1279 1280 TD_CLEARED, 1280 1281 }; 1281 1282
+6 -3
drivers/usb/storage/alauda.c
··· 105 105 unsigned char sense_key; 106 106 unsigned long sense_asc; /* additional sense code */ 107 107 unsigned long sense_ascq; /* additional sense code qualifier */ 108 + 109 + bool media_initialized; 108 110 }; 109 111 110 112 #define short_pack(lsb,msb) ( ((u16)(lsb)) | ( ((u16)(msb))<<8 ) ) ··· 478 476 } 479 477 480 478 /* Check for media change */ 481 - if (status[0] & 0x08) { 479 + if (status[0] & 0x08 || !info->media_initialized) { 482 480 usb_stor_dbg(us, "Media change detected\n"); 483 481 alauda_free_maps(&MEDIA_INFO(us)); 484 - alauda_init_media(us); 485 - 482 + rc = alauda_init_media(us); 483 + if (rc == USB_STOR_TRANSPORT_GOOD) 484 + info->media_initialized = true; 486 485 info->sense_key = UNIT_ATTENTION; 487 486 info->sense_asc = 0x28; 488 487 info->sense_ascq = 0x00;
+4 -1
drivers/usb/typec/tcpm/tcpm.c
··· 3014 3014 memcpy(caps.pdo, port->source_caps, sizeof(u32) * port->nr_source_caps); 3015 3015 caps.role = TYPEC_SOURCE; 3016 3016 3017 - if (cap) 3017 + if (cap) { 3018 3018 usb_power_delivery_unregister_capabilities(cap); 3019 + port->partner_source_caps = NULL; 3020 + } 3019 3021 3020 3022 cap = usb_power_delivery_register_capabilities(port->partner_pd, &caps); 3021 3023 if (IS_ERR(cap)) ··· 6174 6172 port->tcpc->set_bist_data(port->tcpc, false); 6175 6173 6176 6174 switch (port->state) { 6175 + case TOGGLING: 6177 6176 case ERROR_RECOVERY: 6178 6177 case PORT_RESET: 6179 6178 case PORT_RESET_WAIT_OFF:
+6 -1
drivers/usb/typec/ucsi/ucsi.c
··· 153 153 } 154 154 155 155 if (cci & UCSI_CCI_ERROR) { 156 - if (cmd == UCSI_GET_ERROR_STATUS) 156 + if (cmd == UCSI_GET_ERROR_STATUS) { 157 + ret = ucsi_acknowledge(ucsi, false); 158 + if (ret) 159 + return ret; 160 + 157 161 return -EIO; 162 + } 158 163 return ucsi_read_error(ucsi); 159 164 } 160 165
+1 -1
drivers/usb/typec/ucsi/ucsi_glink.c
··· 14 14 #include <linux/soc/qcom/pmic_glink.h> 15 15 #include "ucsi.h" 16 16 17 - #define PMIC_GLINK_MAX_PORTS 2 17 + #define PMIC_GLINK_MAX_PORTS 3 18 18 19 19 #define UCSI_BUF_SIZE 48 20 20
+7
drivers/vfio/device_cdev.c
··· 39 39 40 40 filep->private_data = df; 41 41 42 + /* 43 + * Use the pseudo fs inode on the device to link all mmaps 44 + * to the same address space, allowing us to unmap all vmas 45 + * associated to this device using unmap_mapping_range(). 46 + */ 47 + filep->f_mapping = device->inode->i_mapping; 48 + 42 49 return 0; 43 50 44 51 err_put_registration:
+7
drivers/vfio/group.c
··· 286 286 */ 287 287 filep->f_mode |= (FMODE_PREAD | FMODE_PWRITE); 288 288 289 + /* 290 + * Use the pseudo fs inode on the device to link all mmaps 291 + * to the same address space, allowing us to unmap all vmas 292 + * associated to this device using unmap_mapping_range(). 293 + */ 294 + filep->f_mapping = device->inode->i_mapping; 295 + 289 296 if (device->group->type == VFIO_NO_IOMMU) 290 297 dev_warn(device->dev, "vfio-noiommu device opened by user " 291 298 "(%s:%d)\n", current->comm, task_pid_nr(current));
+69 -208
drivers/vfio/pci/vfio_pci_core.c
··· 1610 1610 } 1611 1611 EXPORT_SYMBOL_GPL(vfio_pci_core_write); 1612 1612 1613 - /* Return 1 on zap and vma_lock acquired, 0 on contention (only with @try) */ 1614 - static int vfio_pci_zap_and_vma_lock(struct vfio_pci_core_device *vdev, bool try) 1613 + static void vfio_pci_zap_bars(struct vfio_pci_core_device *vdev) 1615 1614 { 1616 - struct vfio_pci_mmap_vma *mmap_vma, *tmp; 1615 + struct vfio_device *core_vdev = &vdev->vdev; 1616 + loff_t start = VFIO_PCI_INDEX_TO_OFFSET(VFIO_PCI_BAR0_REGION_INDEX); 1617 + loff_t end = VFIO_PCI_INDEX_TO_OFFSET(VFIO_PCI_ROM_REGION_INDEX); 1618 + loff_t len = end - start; 1617 1619 1618 - /* 1619 - * Lock ordering: 1620 - * vma_lock is nested under mmap_lock for vm_ops callback paths. 1621 - * The memory_lock semaphore is used by both code paths calling 1622 - * into this function to zap vmas and the vm_ops.fault callback 1623 - * to protect the memory enable state of the device. 1624 - * 1625 - * When zapping vmas we need to maintain the mmap_lock => vma_lock 1626 - * ordering, which requires using vma_lock to walk vma_list to 1627 - * acquire an mm, then dropping vma_lock to get the mmap_lock and 1628 - * reacquiring vma_lock. This logic is derived from similar 1629 - * requirements in uverbs_user_mmap_disassociate(). 1630 - * 1631 - * mmap_lock must always be the top-level lock when it is taken. 1632 - * Therefore we can only hold the memory_lock write lock when 1633 - * vma_list is empty, as we'd need to take mmap_lock to clear 1634 - * entries. vma_list can only be guaranteed empty when holding 1635 - * vma_lock, thus memory_lock is nested under vma_lock. 1636 - * 1637 - * This enables the vm_ops.fault callback to acquire vma_lock, 1638 - * followed by memory_lock read lock, while already holding 1639 - * mmap_lock without risk of deadlock. 
1640 - */ 1641 - while (1) { 1642 - struct mm_struct *mm = NULL; 1643 - 1644 - if (try) { 1645 - if (!mutex_trylock(&vdev->vma_lock)) 1646 - return 0; 1647 - } else { 1648 - mutex_lock(&vdev->vma_lock); 1649 - } 1650 - while (!list_empty(&vdev->vma_list)) { 1651 - mmap_vma = list_first_entry(&vdev->vma_list, 1652 - struct vfio_pci_mmap_vma, 1653 - vma_next); 1654 - mm = mmap_vma->vma->vm_mm; 1655 - if (mmget_not_zero(mm)) 1656 - break; 1657 - 1658 - list_del(&mmap_vma->vma_next); 1659 - kfree(mmap_vma); 1660 - mm = NULL; 1661 - } 1662 - if (!mm) 1663 - return 1; 1664 - mutex_unlock(&vdev->vma_lock); 1665 - 1666 - if (try) { 1667 - if (!mmap_read_trylock(mm)) { 1668 - mmput(mm); 1669 - return 0; 1670 - } 1671 - } else { 1672 - mmap_read_lock(mm); 1673 - } 1674 - if (try) { 1675 - if (!mutex_trylock(&vdev->vma_lock)) { 1676 - mmap_read_unlock(mm); 1677 - mmput(mm); 1678 - return 0; 1679 - } 1680 - } else { 1681 - mutex_lock(&vdev->vma_lock); 1682 - } 1683 - list_for_each_entry_safe(mmap_vma, tmp, 1684 - &vdev->vma_list, vma_next) { 1685 - struct vm_area_struct *vma = mmap_vma->vma; 1686 - 1687 - if (vma->vm_mm != mm) 1688 - continue; 1689 - 1690 - list_del(&mmap_vma->vma_next); 1691 - kfree(mmap_vma); 1692 - 1693 - zap_vma_ptes(vma, vma->vm_start, 1694 - vma->vm_end - vma->vm_start); 1695 - } 1696 - mutex_unlock(&vdev->vma_lock); 1697 - mmap_read_unlock(mm); 1698 - mmput(mm); 1699 - } 1620 + unmap_mapping_range(core_vdev->inode->i_mapping, start, len, true); 1700 1621 } 1701 1622 1702 1623 void vfio_pci_zap_and_down_write_memory_lock(struct vfio_pci_core_device *vdev) 1703 1624 { 1704 - vfio_pci_zap_and_vma_lock(vdev, false); 1705 1625 down_write(&vdev->memory_lock); 1706 - mutex_unlock(&vdev->vma_lock); 1626 + vfio_pci_zap_bars(vdev); 1707 1627 } 1708 1628 1709 1629 u16 vfio_pci_memory_lock_and_enable(struct vfio_pci_core_device *vdev) ··· 1645 1725 up_write(&vdev->memory_lock); 1646 1726 } 1647 1727 1648 - /* Caller holds vma_lock */ 1649 - static int 
__vfio_pci_add_vma(struct vfio_pci_core_device *vdev, 1650 - struct vm_area_struct *vma) 1651 - { 1652 - struct vfio_pci_mmap_vma *mmap_vma; 1653 - 1654 - mmap_vma = kmalloc(sizeof(*mmap_vma), GFP_KERNEL_ACCOUNT); 1655 - if (!mmap_vma) 1656 - return -ENOMEM; 1657 - 1658 - mmap_vma->vma = vma; 1659 - list_add(&mmap_vma->vma_next, &vdev->vma_list); 1660 - 1661 - return 0; 1662 - } 1663 - 1664 - /* 1665 - * Zap mmaps on open so that we can fault them in on access and therefore 1666 - * our vma_list only tracks mappings accessed since last zap. 1667 - */ 1668 - static void vfio_pci_mmap_open(struct vm_area_struct *vma) 1669 - { 1670 - zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start); 1671 - } 1672 - 1673 - static void vfio_pci_mmap_close(struct vm_area_struct *vma) 1728 + static unsigned long vma_to_pfn(struct vm_area_struct *vma) 1674 1729 { 1675 1730 struct vfio_pci_core_device *vdev = vma->vm_private_data; 1676 - struct vfio_pci_mmap_vma *mmap_vma; 1731 + int index = vma->vm_pgoff >> (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT); 1732 + u64 pgoff; 1677 1733 1678 - mutex_lock(&vdev->vma_lock); 1679 - list_for_each_entry(mmap_vma, &vdev->vma_list, vma_next) { 1680 - if (mmap_vma->vma == vma) { 1681 - list_del(&mmap_vma->vma_next); 1682 - kfree(mmap_vma); 1683 - break; 1684 - } 1685 - } 1686 - mutex_unlock(&vdev->vma_lock); 1734 + pgoff = vma->vm_pgoff & 1735 + ((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1); 1736 + 1737 + return (pci_resource_start(vdev->pdev, index) >> PAGE_SHIFT) + pgoff; 1687 1738 } 1688 1739 1689 1740 static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf) 1690 1741 { 1691 1742 struct vm_area_struct *vma = vmf->vma; 1692 1743 struct vfio_pci_core_device *vdev = vma->vm_private_data; 1693 - struct vfio_pci_mmap_vma *mmap_vma; 1694 - vm_fault_t ret = VM_FAULT_NOPAGE; 1744 + unsigned long pfn, pgoff = vmf->pgoff - vma->vm_pgoff; 1745 + unsigned long addr = vma->vm_start; 1746 + vm_fault_t ret = VM_FAULT_SIGBUS; 1695 1747 1696 - 
 mutex_lock(&vdev->vma_lock); 1748 + pfn = vma_to_pfn(vma); 1749 + 1697 1750 down_read(&vdev->memory_lock); 1698 1751 1699 - /* 1700 - * Memory region cannot be accessed if the low power feature is engaged 1701 - * or memory access is disabled. 1702 - */ 1703 - if (vdev->pm_runtime_engaged || !__vfio_pci_memory_enabled(vdev)) { 1704 - ret = VM_FAULT_SIGBUS; 1705 - goto up_out; 1706 - } 1752 + if (vdev->pm_runtime_engaged || !__vfio_pci_memory_enabled(vdev)) 1753 + goto out_unlock; 1754 + 1755 + ret = vmf_insert_pfn(vma, vmf->address, pfn + pgoff); 1756 + if (ret & VM_FAULT_ERROR) 1757 + goto out_unlock; 1707 1758 1708 1759 /* 1709 - * We populate the whole vma on fault, so we need to test whether 1710 - * the vma has already been mapped, such as for concurrent faults 1711 - * to the same vma. io_remap_pfn_range() will trigger a BUG_ON if 1712 - * we ask it to fill the same range again. 1760 + * Pre-fault the remainder of the vma, abort further insertions and 1761 + * suppress error if fault is encountered during pre-fault. 
1713 1762 */ 1714 - list_for_each_entry(mmap_vma, &vdev->vma_list, vma_next) { 1715 - if (mmap_vma->vma == vma) 1716 - goto up_out; 1763 + for (; addr < vma->vm_end; addr += PAGE_SIZE, pfn++) { 1764 + if (addr == vmf->address) 1765 + continue; 1766 + 1767 + if (vmf_insert_pfn(vma, addr, pfn) & VM_FAULT_ERROR) 1768 + break; 1717 1769 } 1718 1770 1719 - if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff, 1720 - vma->vm_end - vma->vm_start, 1721 - vma->vm_page_prot)) { 1722 - ret = VM_FAULT_SIGBUS; 1723 - zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start); 1724 - goto up_out; 1725 - } 1726 - 1727 - if (__vfio_pci_add_vma(vdev, vma)) { 1728 - ret = VM_FAULT_OOM; 1729 - zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start); 1730 - } 1731 - 1732 - up_out: 1771 + out_unlock: 1733 1772 up_read(&vdev->memory_lock); 1734 - mutex_unlock(&vdev->vma_lock); 1773 + 1735 1774 return ret; 1736 1775 } 1737 1776 1738 1777 static const struct vm_operations_struct vfio_pci_mmap_ops = { 1739 - .open = vfio_pci_mmap_open, 1740 - .close = vfio_pci_mmap_close, 1741 1778 .fault = vfio_pci_mmap_fault, 1742 1779 }; 1743 1780 ··· 1757 1880 1758 1881 vma->vm_private_data = vdev; 1759 1882 vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 1760 - vma->vm_pgoff = (pci_resource_start(pdev, index) >> PAGE_SHIFT) + pgoff; 1883 + vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot); 1761 1884 1762 1885 /* 1763 - * See remap_pfn_range(), called from vfio_pci_fault() but we can't 1764 - * change vm_flags within the fault handler. Set them now. 1886 + * Set vm_flags now, they should not be changed in the fault handler. 1887 + * We want the same flags and page protection (decrypted above) as 1888 + * io_remap_pfn_range() would set. 
1765 1889 * 1766 1890 * VM_ALLOW_ANY_UNCACHED: The VMA flag is implemented for ARM64, 1767 1891 * allowing KVM stage 2 device mapping attributes to use Normal-NC ··· 2080 2202 mutex_init(&vdev->ioeventfds_lock); 2081 2203 INIT_LIST_HEAD(&vdev->dummy_resources_list); 2082 2204 INIT_LIST_HEAD(&vdev->ioeventfds_list); 2083 - mutex_init(&vdev->vma_lock); 2084 - INIT_LIST_HEAD(&vdev->vma_list); 2085 2205 INIT_LIST_HEAD(&vdev->sriov_pfs_item); 2086 2206 init_rwsem(&vdev->memory_lock); 2087 2207 xa_init(&vdev->ctx); ··· 2095 2219 2096 2220 mutex_destroy(&vdev->igate); 2097 2221 mutex_destroy(&vdev->ioeventfds_lock); 2098 - mutex_destroy(&vdev->vma_lock); 2099 2222 kfree(vdev->region); 2100 2223 kfree(vdev->pm_save); 2101 2224 } ··· 2372 2497 return ret; 2373 2498 } 2374 2499 2375 - /* 2376 - * We need to get memory_lock for each device, but devices can share mmap_lock, 2377 - * therefore we need to zap and hold the vma_lock for each device, and only then 2378 - * get each memory_lock. 2379 - */ 2380 2500 static int vfio_pci_dev_set_hot_reset(struct vfio_device_set *dev_set, 2381 2501 struct vfio_pci_group_info *groups, 2382 2502 struct iommufd_ctx *iommufd_ctx) 2383 2503 { 2384 - struct vfio_pci_core_device *cur_mem; 2385 - struct vfio_pci_core_device *cur_vma; 2386 - struct vfio_pci_core_device *cur; 2504 + struct vfio_pci_core_device *vdev; 2387 2505 struct pci_dev *pdev; 2388 - bool is_mem = true; 2389 2506 int ret; 2390 2507 2391 2508 mutex_lock(&dev_set->lock); 2392 - cur_mem = list_first_entry(&dev_set->device_list, 2393 - struct vfio_pci_core_device, 2394 - vdev.dev_set_list); 2395 2509 2396 2510 pdev = vfio_pci_dev_set_resettable(dev_set); 2397 2511 if (!pdev) { ··· 2397 2533 if (ret) 2398 2534 goto err_unlock; 2399 2535 2400 - list_for_each_entry(cur_vma, &dev_set->device_list, vdev.dev_set_list) { 2536 + list_for_each_entry(vdev, &dev_set->device_list, vdev.dev_set_list) { 2401 2537 bool owned; 2402 2538 2403 2539 /* ··· 2421 2557 * Otherwise, reset is not 
allowed. 2422 2558 */ 2423 2559 if (iommufd_ctx) { 2424 - int devid = vfio_iommufd_get_dev_id(&cur_vma->vdev, 2560 + int devid = vfio_iommufd_get_dev_id(&vdev->vdev, 2425 2561 iommufd_ctx); 2426 2562 2427 2563 owned = (devid > 0 || devid == -ENOENT); 2428 2564 } else { 2429 - owned = vfio_dev_in_groups(&cur_vma->vdev, groups); 2565 + owned = vfio_dev_in_groups(&vdev->vdev, groups); 2430 2566 } 2431 2567 2432 2568 if (!owned) { 2433 2569 ret = -EINVAL; 2434 - goto err_undo; 2570 + break; 2435 2571 } 2436 2572 2437 2573 /* 2438 - * Locking multiple devices is prone to deadlock, runaway and 2439 - * unwind if we hit contention. 2574 + * Take the memory write lock for each device and zap BAR 2575 + * mappings to prevent the user accessing the device while in 2576 + * reset. Locking multiple devices is prone to deadlock, 2577 + * runaway and unwind if we hit contention. 2440 2578 */ 2441 - if (!vfio_pci_zap_and_vma_lock(cur_vma, true)) { 2579 + if (!down_write_trylock(&vdev->memory_lock)) { 2442 2580 ret = -EBUSY; 2443 - goto err_undo; 2581 + break; 2444 2582 } 2445 - } 2446 - cur_vma = NULL; 2447 2583 2448 - list_for_each_entry(cur_mem, &dev_set->device_list, vdev.dev_set_list) { 2449 - if (!down_write_trylock(&cur_mem->memory_lock)) { 2450 - ret = -EBUSY; 2451 - goto err_undo; 2452 - } 2453 - mutex_unlock(&cur_mem->vma_lock); 2584 + vfio_pci_zap_bars(vdev); 2454 2585 } 2455 - cur_mem = NULL; 2586 + 2587 + if (!list_entry_is_head(vdev, 2588 + &dev_set->device_list, vdev.dev_set_list)) { 2589 + vdev = list_prev_entry(vdev, vdev.dev_set_list); 2590 + goto err_undo; 2591 + } 2456 2592 2457 2593 /* 2458 2594 * The pci_reset_bus() will reset all the devices in the bus. ··· 2463 2599 * cause the PCI config space reset without restoring the original 2464 2600 * state (saved locally in 'vdev->pm_save'). 
2465 2601 */ 2466 - list_for_each_entry(cur, &dev_set->device_list, vdev.dev_set_list) 2467 - vfio_pci_set_power_state(cur, PCI_D0); 2602 + list_for_each_entry(vdev, &dev_set->device_list, vdev.dev_set_list) 2603 + vfio_pci_set_power_state(vdev, PCI_D0); 2468 2604 2469 2605 ret = pci_reset_bus(pdev); 2470 2606 2471 - err_undo: 2472 - list_for_each_entry(cur, &dev_set->device_list, vdev.dev_set_list) { 2473 - if (cur == cur_mem) 2474 - is_mem = false; 2475 - if (cur == cur_vma) 2476 - break; 2477 - if (is_mem) 2478 - up_write(&cur->memory_lock); 2479 - else 2480 - mutex_unlock(&cur->vma_lock); 2481 - } 2607 + vdev = list_last_entry(&dev_set->device_list, 2608 + struct vfio_pci_core_device, vdev.dev_set_list); 2482 2609 2483 - list_for_each_entry(cur, &dev_set->device_list, vdev.dev_set_list) 2484 - pm_runtime_put(&cur->pdev->dev); 2610 + err_undo: 2611 + list_for_each_entry_from_reverse(vdev, &dev_set->device_list, 2612 + vdev.dev_set_list) 2613 + up_write(&vdev->memory_lock); 2614 + 2615 + list_for_each_entry(vdev, &dev_set->device_list, vdev.dev_set_list) 2616 + pm_runtime_put(&vdev->pdev->dev); 2617 + 2485 2618 err_unlock: 2486 2619 mutex_unlock(&dev_set->lock); 2487 2620 return ret;
+44
drivers/vfio/vfio_main.c
··· 22 22 #include <linux/list.h> 23 23 #include <linux/miscdevice.h> 24 24 #include <linux/module.h> 25 + #include <linux/mount.h> 25 26 #include <linux/mutex.h> 26 27 #include <linux/pci.h> 28 + #include <linux/pseudo_fs.h> 27 29 #include <linux/rwsem.h> 28 30 #include <linux/sched.h> 29 31 #include <linux/slab.h> ··· 45 43 #define DRIVER_AUTHOR "Alex Williamson <alex.williamson@redhat.com>" 46 44 #define DRIVER_DESC "VFIO - User Level meta-driver" 47 45 46 + #define VFIO_MAGIC 0x5646494f /* "VFIO" */ 47 + 48 48 static struct vfio { 49 49 struct class *device_class; 50 50 struct ida device_ida; 51 + struct vfsmount *vfs_mount; 52 + int fs_count; 51 53 } vfio; 52 54 53 55 #ifdef CONFIG_VFIO_NOIOMMU ··· 192 186 if (device->ops->release) 193 187 device->ops->release(device); 194 188 189 + iput(device->inode); 190 + simple_release_fs(&vfio.vfs_mount, &vfio.fs_count); 195 191 kvfree(device); 196 192 } 197 193 ··· 236 228 } 237 229 EXPORT_SYMBOL_GPL(_vfio_alloc_device); 238 230 231 + static int vfio_fs_init_fs_context(struct fs_context *fc) 232 + { 233 + return init_pseudo(fc, VFIO_MAGIC) ? 0 : -ENOMEM; 234 + } 235 + 236 + static struct file_system_type vfio_fs_type = { 237 + .name = "vfio", 238 + .owner = THIS_MODULE, 239 + .init_fs_context = vfio_fs_init_fs_context, 240 + .kill_sb = kill_anon_super, 241 + }; 242 + 243 + static struct inode *vfio_fs_inode_new(void) 244 + { 245 + struct inode *inode; 246 + int ret; 247 + 248 + ret = simple_pin_fs(&vfio_fs_type, &vfio.vfs_mount, &vfio.fs_count); 249 + if (ret) 250 + return ERR_PTR(ret); 251 + 252 + inode = alloc_anon_inode(vfio.vfs_mount->mnt_sb); 253 + if (IS_ERR(inode)) 254 + simple_release_fs(&vfio.vfs_mount, &vfio.fs_count); 255 + 256 + return inode; 257 + } 258 + 239 259 /* 240 260 * Initialize a vfio_device so it can be registered to vfio core. 
241 261 */ ··· 282 246 init_completion(&device->comp); 283 247 device->dev = dev; 284 248 device->ops = ops; 249 + device->inode = vfio_fs_inode_new(); 250 + if (IS_ERR(device->inode)) { 251 + ret = PTR_ERR(device->inode); 252 + goto out_inode; 253 + } 285 254 286 255 if (ops->init) { 287 256 ret = ops->init(device); ··· 301 260 return 0; 302 261 303 262 out_uninit: 263 + iput(device->inode); 264 + simple_release_fs(&vfio.vfs_mount, &vfio.fs_count); 265 + out_inode: 304 266 vfio_release_device_set(device); 305 267 ida_free(&vfio.device_ida, device->index); 306 268 return ret;
+3 -1
fs/jfs/xattr.c
··· 557 557 558 558 size_check: 559 559 if (EALIST_SIZE(ea_buf->xattr) != ea_size) { 560 + int size = min_t(int, EALIST_SIZE(ea_buf->xattr), ea_size); 561 + 560 562 printk(KERN_ERR "ea_get: invalid extended attribute\n"); 561 563 print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, 562 - ea_buf->xattr, ea_size, 1); 564 + ea_buf->xattr, size, 1); 563 565 ea_release(inode, ea_buf); 564 566 rc = -EIO; 565 567 goto clean_up;
+107 -85
fs/ocfs2/journal.c
··· 479 479 return status; 480 480 } 481 481 482 - 483 - struct ocfs2_triggers { 484 - struct jbd2_buffer_trigger_type ot_triggers; 485 - int ot_offset; 486 - }; 487 - 488 482 static inline struct ocfs2_triggers *to_ocfs2_trigger(struct jbd2_buffer_trigger_type *triggers) 489 483 { 490 484 return container_of(triggers, struct ocfs2_triggers, ot_triggers); ··· 542 548 static void ocfs2_abort_trigger(struct jbd2_buffer_trigger_type *triggers, 543 549 struct buffer_head *bh) 544 550 { 551 + struct ocfs2_triggers *ot = to_ocfs2_trigger(triggers); 552 + 545 553 mlog(ML_ERROR, 546 554 "ocfs2_abort_trigger called by JBD2. bh = 0x%lx, " 547 555 "bh->b_blocknr = %llu\n", 548 556 (unsigned long)bh, 549 557 (unsigned long long)bh->b_blocknr); 550 558 551 - ocfs2_error(bh->b_assoc_map->host->i_sb, 559 + ocfs2_error(ot->sb, 552 560 "JBD2 has aborted our journal, ocfs2 cannot continue\n"); 553 561 } 554 562 555 - static struct ocfs2_triggers di_triggers = { 556 - .ot_triggers = { 557 - .t_frozen = ocfs2_frozen_trigger, 558 - .t_abort = ocfs2_abort_trigger, 559 - }, 560 - .ot_offset = offsetof(struct ocfs2_dinode, i_check), 561 - }; 563 + static void ocfs2_setup_csum_triggers(struct super_block *sb, 564 + enum ocfs2_journal_trigger_type type, 565 + struct ocfs2_triggers *ot) 566 + { 567 + BUG_ON(type >= OCFS2_JOURNAL_TRIGGER_COUNT); 562 568 563 - static struct ocfs2_triggers eb_triggers = { 564 - .ot_triggers = { 565 - .t_frozen = ocfs2_frozen_trigger, 566 - .t_abort = ocfs2_abort_trigger, 567 - }, 568 - .ot_offset = offsetof(struct ocfs2_extent_block, h_check), 569 - }; 569 + switch (type) { 570 + case OCFS2_JTR_DI: 571 + ot->ot_triggers.t_frozen = ocfs2_frozen_trigger; 572 + ot->ot_offset = offsetof(struct ocfs2_dinode, i_check); 573 + break; 574 + case OCFS2_JTR_EB: 575 + ot->ot_triggers.t_frozen = ocfs2_frozen_trigger; 576 + ot->ot_offset = offsetof(struct ocfs2_extent_block, h_check); 577 + break; 578 + case OCFS2_JTR_RB: 579 + ot->ot_triggers.t_frozen = 
ocfs2_frozen_trigger; 580 + ot->ot_offset = offsetof(struct ocfs2_refcount_block, rf_check); 581 + break; 582 + case OCFS2_JTR_GD: 583 + ot->ot_triggers.t_frozen = ocfs2_frozen_trigger; 584 + ot->ot_offset = offsetof(struct ocfs2_group_desc, bg_check); 585 + break; 586 + case OCFS2_JTR_DB: 587 + ot->ot_triggers.t_frozen = ocfs2_db_frozen_trigger; 588 + break; 589 + case OCFS2_JTR_XB: 590 + ot->ot_triggers.t_frozen = ocfs2_frozen_trigger; 591 + ot->ot_offset = offsetof(struct ocfs2_xattr_block, xb_check); 592 + break; 593 + case OCFS2_JTR_DQ: 594 + ot->ot_triggers.t_frozen = ocfs2_dq_frozen_trigger; 595 + break; 596 + case OCFS2_JTR_DR: 597 + ot->ot_triggers.t_frozen = ocfs2_frozen_trigger; 598 + ot->ot_offset = offsetof(struct ocfs2_dx_root_block, dr_check); 599 + break; 600 + case OCFS2_JTR_DL: 601 + ot->ot_triggers.t_frozen = ocfs2_frozen_trigger; 602 + ot->ot_offset = offsetof(struct ocfs2_dx_leaf, dl_check); 603 + break; 604 + case OCFS2_JTR_NONE: 605 + /* To make compiler happy... */ 606 + return; 607 + } 570 608 571 - static struct ocfs2_triggers rb_triggers = { 572 - .ot_triggers = { 573 - .t_frozen = ocfs2_frozen_trigger, 574 - .t_abort = ocfs2_abort_trigger, 575 - }, 576 - .ot_offset = offsetof(struct ocfs2_refcount_block, rf_check), 577 - }; 609 + ot->ot_triggers.t_abort = ocfs2_abort_trigger; 610 + ot->sb = sb; 611 + } 578 612 579 - static struct ocfs2_triggers gd_triggers = { 580 - .ot_triggers = { 581 - .t_frozen = ocfs2_frozen_trigger, 582 - .t_abort = ocfs2_abort_trigger, 583 - }, 584 - .ot_offset = offsetof(struct ocfs2_group_desc, bg_check), 585 - }; 613 + void ocfs2_initialize_journal_triggers(struct super_block *sb, 614 + struct ocfs2_triggers triggers[]) 615 + { 616 + enum ocfs2_journal_trigger_type type; 586 617 587 - static struct ocfs2_triggers db_triggers = { 588 - .ot_triggers = { 589 - .t_frozen = ocfs2_db_frozen_trigger, 590 - .t_abort = ocfs2_abort_trigger, 591 - }, 592 - }; 593 - 594 - static struct ocfs2_triggers xb_triggers = { 595 - 
.ot_triggers = { 596 - .t_frozen = ocfs2_frozen_trigger, 597 - .t_abort = ocfs2_abort_trigger, 598 - }, 599 - .ot_offset = offsetof(struct ocfs2_xattr_block, xb_check), 600 - }; 601 - 602 - static struct ocfs2_triggers dq_triggers = { 603 - .ot_triggers = { 604 - .t_frozen = ocfs2_dq_frozen_trigger, 605 - .t_abort = ocfs2_abort_trigger, 606 - }, 607 - }; 608 - 609 - static struct ocfs2_triggers dr_triggers = { 610 - .ot_triggers = { 611 - .t_frozen = ocfs2_frozen_trigger, 612 - .t_abort = ocfs2_abort_trigger, 613 - }, 614 - .ot_offset = offsetof(struct ocfs2_dx_root_block, dr_check), 615 - }; 616 - 617 - static struct ocfs2_triggers dl_triggers = { 618 - .ot_triggers = { 619 - .t_frozen = ocfs2_frozen_trigger, 620 - .t_abort = ocfs2_abort_trigger, 621 - }, 622 - .ot_offset = offsetof(struct ocfs2_dx_leaf, dl_check), 623 - }; 618 + for (type = OCFS2_JTR_DI; type < OCFS2_JOURNAL_TRIGGER_COUNT; type++) 619 + ocfs2_setup_csum_triggers(sb, type, &triggers[type]); 620 + } 624 621 625 622 static int __ocfs2_journal_access(handle_t *handle, 626 623 struct ocfs2_caching_info *ci, ··· 693 708 int ocfs2_journal_access_di(handle_t *handle, struct ocfs2_caching_info *ci, 694 709 struct buffer_head *bh, int type) 695 710 { 696 - return __ocfs2_journal_access(handle, ci, bh, &di_triggers, type); 711 + struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci)); 712 + 713 + return __ocfs2_journal_access(handle, ci, bh, 714 + &osb->s_journal_triggers[OCFS2_JTR_DI], 715 + type); 697 716 } 698 717 699 718 int ocfs2_journal_access_eb(handle_t *handle, struct ocfs2_caching_info *ci, 700 719 struct buffer_head *bh, int type) 701 720 { 702 - return __ocfs2_journal_access(handle, ci, bh, &eb_triggers, type); 721 + struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci)); 722 + 723 + return __ocfs2_journal_access(handle, ci, bh, 724 + &osb->s_journal_triggers[OCFS2_JTR_EB], 725 + type); 703 726 } 704 727 705 728 int ocfs2_journal_access_rb(handle_t *handle, struct 
ocfs2_caching_info *ci, 706 729 struct buffer_head *bh, int type) 707 730 { 708 - return __ocfs2_journal_access(handle, ci, bh, &rb_triggers, 731 + struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci)); 732 + 733 + return __ocfs2_journal_access(handle, ci, bh, 734 + &osb->s_journal_triggers[OCFS2_JTR_RB], 709 735 type); 710 736 } 711 737 712 738 int ocfs2_journal_access_gd(handle_t *handle, struct ocfs2_caching_info *ci, 713 739 struct buffer_head *bh, int type) 714 740 { 715 - return __ocfs2_journal_access(handle, ci, bh, &gd_triggers, type); 741 + struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci)); 742 + 743 + return __ocfs2_journal_access(handle, ci, bh, 744 + &osb->s_journal_triggers[OCFS2_JTR_GD], 745 + type); 716 746 } 717 747 718 748 int ocfs2_journal_access_db(handle_t *handle, struct ocfs2_caching_info *ci, 719 749 struct buffer_head *bh, int type) 720 750 { 721 - return __ocfs2_journal_access(handle, ci, bh, &db_triggers, type); 751 + struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci)); 752 + 753 + return __ocfs2_journal_access(handle, ci, bh, 754 + &osb->s_journal_triggers[OCFS2_JTR_DB], 755 + type); 722 756 } 723 757 724 758 int ocfs2_journal_access_xb(handle_t *handle, struct ocfs2_caching_info *ci, 725 759 struct buffer_head *bh, int type) 726 760 { 727 - return __ocfs2_journal_access(handle, ci, bh, &xb_triggers, type); 761 + struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci)); 762 + 763 + return __ocfs2_journal_access(handle, ci, bh, 764 + &osb->s_journal_triggers[OCFS2_JTR_XB], 765 + type); 728 766 } 729 767 730 768 int ocfs2_journal_access_dq(handle_t *handle, struct ocfs2_caching_info *ci, 731 769 struct buffer_head *bh, int type) 732 770 { 733 - return __ocfs2_journal_access(handle, ci, bh, &dq_triggers, type); 771 + struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci)); 772 + 773 + return __ocfs2_journal_access(handle, ci, bh, 774 + 
&osb->s_journal_triggers[OCFS2_JTR_DQ], 775 + type); 734 776 } 735 777 736 778 int ocfs2_journal_access_dr(handle_t *handle, struct ocfs2_caching_info *ci, 737 779 struct buffer_head *bh, int type) 738 780 { 739 - return __ocfs2_journal_access(handle, ci, bh, &dr_triggers, type); 781 + struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci)); 782 + 783 + return __ocfs2_journal_access(handle, ci, bh, 784 + &osb->s_journal_triggers[OCFS2_JTR_DR], 785 + type); 740 786 } 741 787 742 788 int ocfs2_journal_access_dl(handle_t *handle, struct ocfs2_caching_info *ci, 743 789 struct buffer_head *bh, int type) 744 790 { 745 - return __ocfs2_journal_access(handle, ci, bh, &dl_triggers, type); 791 + struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci)); 792 + 793 + return __ocfs2_journal_access(handle, ci, bh, 794 + &osb->s_journal_triggers[OCFS2_JTR_DL], 795 + type); 746 796 } 747 797 748 798 int ocfs2_journal_access(handle_t *handle, struct ocfs2_caching_info *ci, ··· 798 778 if (!is_handle_aborted(handle)) { 799 779 journal_t *journal = handle->h_transaction->t_journal; 800 780 801 - mlog(ML_ERROR, "jbd2_journal_dirty_metadata failed. " 802 - "Aborting transaction and journal.\n"); 781 + mlog(ML_ERROR, "jbd2_journal_dirty_metadata failed: " 782 + "handle type %u started at line %u, credits %u/%u " 783 + "errcode %d. Aborting transaction and journal.\n", 784 + handle->h_type, handle->h_line_no, 785 + handle->h_requested_credits, 786 + jbd2_handle_buffer_credits(handle), status); 803 787 handle->h_err = status; 804 788 jbd2_journal_abort_handle(handle); 805 789 jbd2_journal_abort(journal, status); 806 - ocfs2_abort(bh->b_assoc_map->host->i_sb, 807 - "Journal already aborted.\n"); 808 790 } 809 791 } 810 792 }
+27
fs/ocfs2/ocfs2.h
··· 284 284 #define OCFS2_OSB_ERROR_FS 0x0004 285 285 #define OCFS2_DEFAULT_ATIME_QUANTUM 60 286 286 287 + struct ocfs2_triggers { 288 + struct jbd2_buffer_trigger_type ot_triggers; 289 + int ot_offset; 290 + struct super_block *sb; 291 + }; 292 + 293 + enum ocfs2_journal_trigger_type { 294 + OCFS2_JTR_DI, 295 + OCFS2_JTR_EB, 296 + OCFS2_JTR_RB, 297 + OCFS2_JTR_GD, 298 + OCFS2_JTR_DB, 299 + OCFS2_JTR_XB, 300 + OCFS2_JTR_DQ, 301 + OCFS2_JTR_DR, 302 + OCFS2_JTR_DL, 303 + OCFS2_JTR_NONE /* This must be the last entry */ 304 + }; 305 + 306 + #define OCFS2_JOURNAL_TRIGGER_COUNT OCFS2_JTR_NONE 307 + 308 + void ocfs2_initialize_journal_triggers(struct super_block *sb, 309 + struct ocfs2_triggers triggers[]); 310 + 287 311 struct ocfs2_journal; 288 312 struct ocfs2_slot_info; 289 313 struct ocfs2_recovery_map; ··· 374 350 wait_queue_head_t checkpoint_event; 375 351 struct ocfs2_journal *journal; 376 352 unsigned long osb_commit_interval; 353 + 354 + /* Journal triggers for checksum */ 355 + struct ocfs2_triggers s_journal_triggers[OCFS2_JOURNAL_TRIGGER_COUNT]; 377 356 378 357 struct delayed_work la_enable_wq; 379 358
+3 -1
fs/ocfs2/super.c
··· 1075 1075 debugfs_create_file("fs_state", S_IFREG|S_IRUSR, osb->osb_debug_root, 1076 1076 osb, &ocfs2_osb_debug_fops); 1077 1077 1078 - if (ocfs2_meta_ecc(osb)) 1078 + if (ocfs2_meta_ecc(osb)) { 1079 + ocfs2_initialize_journal_triggers(sb, osb->s_journal_triggers); 1079 1080 ocfs2_blockcheck_stats_debugfs_install( &osb->osb_ecc_stats, 1080 1081 osb->osb_debug_root); 1082 + } 1081 1083 1082 1084 status = ocfs2_mount_volume(sb); 1083 1085 if (status < 0)
+10 -12
fs/smb/server/smb2pdu.c
··· 630 630 return name; 631 631 } 632 632 633 + if (*name == '\\') { 634 + pr_err("not allow directory name included leading slash\n"); 635 + kfree(name); 636 + return ERR_PTR(-EINVAL); 637 + } 638 + 633 639 ksmbd_conv_path_to_unix(name); 634 640 ksmbd_strip_last_slash(name); 635 641 return name; ··· 2367 2361 if (rc > 0) { 2368 2362 rc = ksmbd_vfs_remove_xattr(idmap, 2369 2363 path, 2370 - attr_name); 2364 + attr_name, 2365 + get_write); 2371 2366 2372 2367 if (rc < 0) { 2373 2368 ksmbd_debug(SMB, ··· 2383 2376 } else { 2384 2377 rc = ksmbd_vfs_setxattr(idmap, path, attr_name, value, 2385 2378 le16_to_cpu(eabuf->EaValueLength), 2386 - 0, true); 2379 + 0, get_write); 2387 2380 if (rc < 0) { 2388 2381 ksmbd_debug(SMB, 2389 2382 "ksmbd_vfs_setxattr is failed(%d)\n", ··· 2475 2468 !strncmp(&name[XATTR_USER_PREFIX_LEN], STREAM_PREFIX, 2476 2469 STREAM_PREFIX_LEN)) { 2477 2470 err = ksmbd_vfs_remove_xattr(idmap, path, 2478 - name); 2471 + name, true); 2479 2472 if (err) 2480 2473 ksmbd_debug(SMB, "remove xattr failed : %s\n", 2481 2474 name); ··· 2849 2842 } 2850 2843 2851 2844 if (req->NameLength) { 2852 - if ((req->CreateOptions & FILE_DIRECTORY_FILE_LE) && 2853 - *(char *)req->Buffer == '\\') { 2854 - pr_err("not allow directory name included leading slash\n"); 2855 - rc = -EINVAL; 2856 - goto err_out2; 2857 - } 2858 - 2859 2845 name = smb2_get_name((char *)req + le16_to_cpu(req->NameOffset), 2860 2846 le16_to_cpu(req->NameLength), 2861 2847 work->conn->local_nls); 2862 2848 if (IS_ERR(name)) { 2863 2849 rc = PTR_ERR(name); 2864 - if (rc != -ENOMEM) 2865 - rc = -ENOENT; 2866 2850 name = NULL; 2867 2851 goto err_out2; 2868 2852 }
+11 -6
fs/smb/server/vfs.c
··· 1058 1058 } 1059 1059 1060 1060 int ksmbd_vfs_remove_xattr(struct mnt_idmap *idmap, 1061 - const struct path *path, char *attr_name) 1061 + const struct path *path, char *attr_name, 1062 + bool get_write) 1062 1063 { 1063 1064 int err; 1064 1065 1065 - err = mnt_want_write(path->mnt); 1066 - if (err) 1067 - return err; 1066 + if (get_write == true) { 1067 + err = mnt_want_write(path->mnt); 1068 + if (err) 1069 + return err; 1070 + } 1068 1071 1069 1072 err = vfs_removexattr(idmap, path->dentry, attr_name); 1070 - mnt_drop_write(path->mnt); 1073 + 1074 + if (get_write == true) 1075 + mnt_drop_write(path->mnt); 1071 1076 1072 1077 return err; 1073 1078 } ··· 1385 1380 ksmbd_debug(SMB, "%s, len %zd\n", name, strlen(name)); 1386 1381 1387 1382 if (!strncmp(name, XATTR_NAME_SD, XATTR_NAME_SD_LEN)) { 1388 - err = ksmbd_vfs_remove_xattr(idmap, path, name); 1383 + err = ksmbd_vfs_remove_xattr(idmap, path, name, true); 1389 1384 if (err) 1390 1385 ksmbd_debug(SMB, "remove xattr failed : %s\n", name); 1391 1386 }
+2 -1
fs/smb/server/vfs.h
··· 114 114 int ksmbd_vfs_xattr_stream_name(char *stream_name, char **xattr_stream_name, 115 115 size_t *xattr_stream_name_size, int s_type); 116 116 int ksmbd_vfs_remove_xattr(struct mnt_idmap *idmap, 117 - const struct path *path, char *attr_name); 117 + const struct path *path, char *attr_name, 118 + bool get_write); 118 119 int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name, 119 120 unsigned int flags, struct path *parent_path, 120 121 struct path *path, bool caseless);
+2 -1
fs/smb/server/vfs_cache.c
··· 254 254 ci->m_flags &= ~S_DEL_ON_CLS_STREAM; 255 255 err = ksmbd_vfs_remove_xattr(file_mnt_idmap(filp), 256 256 &filp->f_path, 257 - fp->stream.name); 257 + fp->stream.name, 258 + true); 258 259 if (err) 259 260 pr_err("remove xattr failed : %s\n", 260 261 fp->stream.name);
+4 -3
fs/xfs/libxfs/xfs_sb.c
··· 1038 1038 * and hence we don't need have to update it here. 1039 1039 */ 1040 1040 if (xfs_has_lazysbcount(mp)) { 1041 - mp->m_sb.sb_icount = percpu_counter_sum(&mp->m_icount); 1041 + mp->m_sb.sb_icount = percpu_counter_sum_positive(&mp->m_icount); 1042 1042 mp->m_sb.sb_ifree = min_t(uint64_t, 1043 - percpu_counter_sum(&mp->m_ifree), 1043 + percpu_counter_sum_positive(&mp->m_ifree), 1044 1044 mp->m_sb.sb_icount); 1045 - mp->m_sb.sb_fdblocks = percpu_counter_sum(&mp->m_fdblocks); 1045 + mp->m_sb.sb_fdblocks = 1046 + percpu_counter_sum_positive(&mp->m_fdblocks); 1046 1047 } 1047 1048 1048 1049 xfs_sb_to_disk(bp->b_addr, &mp->m_sb);
+4
include/acpi/acpixf.h
··· 663 663 acpi_adr_space_type 664 664 space_id)) 665 665 ACPI_EXTERNAL_RETURN_STATUS(acpi_status 666 + acpi_execute_orphan_reg_method(acpi_handle device, 667 + acpi_adr_space_type 668 + space_id)) 669 + ACPI_EXTERNAL_RETURN_STATUS(acpi_status 666 670 acpi_remove_address_space_handler(acpi_handle 667 671 device, 668 672 acpi_adr_space_type
+4
include/linux/bio.h
··· 731 731 bip_for_each_vec(_bvl, _bio->bi_integrity, _iter) 732 732 733 733 int bio_integrity_map_user(struct bio *bio, void __user *ubuf, ssize_t len, u32 seed); 734 + void bio_integrity_unmap_free_user(struct bio *bio); 734 735 extern struct bio_integrity_payload *bio_integrity_alloc(struct bio *, gfp_t, unsigned int); 735 736 extern int bio_integrity_add_page(struct bio *, struct page *, unsigned int, unsigned int); 736 737 extern bool bio_integrity_prep(struct bio *); ··· 807 806 ssize_t len, u32 seed) 808 807 { 809 808 return -EINVAL; 809 + } 810 + static inline void bio_integrity_unmap_free_user(struct bio *bio) 811 + { 810 812 } 811 813 812 814 #endif /* CONFIG_BLK_DEV_INTEGRITY */
+2
include/linux/bpf_verifier.h
··· 746 746 /* Same as scratched_regs but for stack slots */ 747 747 u64 scratched_stack_slots; 748 748 u64 prev_log_pos, prev_insn_print_pos; 749 + /* buffer used to temporary hold constants as scalar registers */ 750 + struct bpf_reg_state fake_reg[2]; 749 751 /* buffer used to generate temporary string representations, 750 752 * e.g., in reg_type_str() to generate reg_type string 751 753 */
+1 -1
include/linux/btf.h
··· 82 82 * as to avoid issues such as the compiler inlining or eliding either a static 83 83 * kfunc, or a global kfunc in an LTO build. 84 84 */ 85 - #define __bpf_kfunc __used noinline 85 + #define __bpf_kfunc __used __retain noinline 86 86 87 87 #define __bpf_kfunc_start_defs() \ 88 88 __diag_push(); \
+23
include/linux/compiler_types.h
··· 143 143 # define __preserve_most 144 144 #endif 145 145 146 + /* 147 + * Annotating a function/variable with __retain tells the compiler to place 148 + * the object in its own section and set the flag SHF_GNU_RETAIN. This flag 149 + * instructs the linker to retain the object during garbage-cleanup or LTO 150 + * phases. 151 + * 152 + * Note that the __used macro is also used to prevent functions or data 153 + * being optimized out, but operates at the compiler/IR-level and may still 154 + * allow unintended removal of objects during linking. 155 + * 156 + * Optional: only supported since gcc >= 11, clang >= 13 157 + * 158 + * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-retain-function-attribute 159 + * clang: https://clang.llvm.org/docs/AttributeReference.html#retain 160 + */ 161 + #if __has_attribute(__retain__) && \ 162 + (defined(CONFIG_LD_DEAD_CODE_DATA_ELIMINATION) || \ 163 + defined(CONFIG_LTO_CLANG)) 164 + # define __retain __attribute__((__retain__)) 165 + #else 166 + # define __retain 167 + #endif 168 + 146 169 /* Compiler specific macros. */ 147 170 #ifdef __clang__ 148 171 #include <linux/compiler-clang.h>
-2
include/linux/device.h
··· 1220 1220 return device_remove_groups(dev, groups); 1221 1221 } 1222 1222 1223 - int __must_check devm_device_add_groups(struct device *dev, 1224 - const struct attribute_group **groups); 1225 1223 int __must_check devm_device_add_group(struct device *dev, 1226 1224 const struct attribute_group *grp); 1227 1225
+2 -1
include/linux/io_uring_types.h
··· 648 648 struct io_rsrc_node *rsrc_node; 649 649 650 650 atomic_t refs; 651 - atomic_t poll_refs; 651 + bool cancel_seq_set; 652 652 struct io_task_work io_task_work; 653 653 /* for polled requests, i.e. IORING_OP_POLL_ADD and async armed poll */ 654 654 struct hlist_node hash_node; ··· 657 657 /* opcode allocated if it needs to store data for async defer */ 658 658 void *async_data; 659 659 /* linked requests, IFF REQ_F_HARDLINK or REQ_F_LINK are set */ 660 + atomic_t poll_refs; 660 661 struct io_kiocb *link; 661 662 /* custom credentials, valid IFF REQ_F_CREDS is set */ 662 663 const struct cred *creds;
+40 -9
include/linux/kcov.h
··· 21 21 KCOV_MODE_TRACE_PC = 2, 22 22 /* Collecting comparison operands mode. */ 23 23 KCOV_MODE_TRACE_CMP = 3, 24 + /* The process owns a KCOV remote reference. */ 25 + KCOV_MODE_REMOTE = 4, 24 26 }; 25 27 26 28 #define KCOV_IN_CTXSW (1 << 30) ··· 57 55 58 56 /* 59 57 * The softirq flavor of kcov_remote_*() functions is introduced as a temporary 60 - * work around for kcov's lack of nested remote coverage sections support in 61 - * task context. Adding support for nested sections is tracked in: 62 - * https://bugzilla.kernel.org/show_bug.cgi?id=210337 58 + * workaround for KCOV's lack of nested remote coverage sections support. 59 + * 60 + * Adding support is tracked in https://bugzilla.kernel.org/show_bug.cgi?id=210337. 61 + * 62 + * kcov_remote_start_usb_softirq(): 63 + * 64 + * 1. Only collects coverage when called in the softirq context. This allows 65 + * avoiding nested remote coverage collection sections in the task context. 66 + * For example, USB/IP calls usb_hcd_giveback_urb() in the task context 67 + * within an existing remote coverage collection section. Thus, KCOV should 68 + * not attempt to start collecting coverage within the coverage collection 69 + * section in __usb_hcd_giveback_urb() in this case. 70 + * 71 + * 2. Disables interrupts for the duration of the coverage collection section. 72 + * This allows avoiding nested remote coverage collection sections in the 73 + * softirq context (a softirq might occur during the execution of a work in 74 + * the BH workqueue, which runs with in_serving_softirq() > 0). 75 + * For example, usb_giveback_urb_bh() runs in the BH workqueue with 76 + * interrupts enabled, so __usb_hcd_giveback_urb() might be interrupted in 77 + * the middle of its remote coverage collection section, and the interrupt 78 + * handler might invoke __usb_hcd_giveback_urb() again. 
63 79 */ 64 80 65 - static inline void kcov_remote_start_usb_softirq(u64 id) 81 + static inline unsigned long kcov_remote_start_usb_softirq(u64 id) 66 82 { 67 - if (in_serving_softirq()) 83 + unsigned long flags = 0; 84 + 85 + if (in_serving_softirq()) { 86 + local_irq_save(flags); 68 87 kcov_remote_start_usb(id); 88 + } 89 + 90 + return flags; 69 91 } 70 92 71 - static inline void kcov_remote_stop_softirq(void) 93 + static inline void kcov_remote_stop_softirq(unsigned long flags) 72 94 { 73 - if (in_serving_softirq()) 95 + if (in_serving_softirq()) { 74 96 kcov_remote_stop(); 97 + local_irq_restore(flags); 98 + } 75 99 } 76 100 77 101 #ifdef CONFIG_64BIT ··· 131 103 } 132 104 static inline void kcov_remote_start_common(u64 id) {} 133 105 static inline void kcov_remote_start_usb(u64 id) {} 134 - static inline void kcov_remote_start_usb_softirq(u64 id) {} 135 - static inline void kcov_remote_stop_softirq(void) {} 106 + static inline unsigned long kcov_remote_start_usb_softirq(u64 id) 107 + { 108 + return 0; 109 + } 110 + static inline void kcov_remote_stop_softirq(unsigned long flags) {} 136 111 137 112 #endif /* CONFIG_KCOV */ 138 113 #endif /* _LINUX_KCOV_H */
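The kcov.h change above makes the softirq helpers save and restore the interrupt state themselves, so callers must thread the returned flags from start to stop. A small userspace model of that calling convention (all names and the boolean IRQ/softirq state are stand-ins for the kernel's `local_irq_save()`/`in_serving_softirq()`, not real kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the new kcov_remote_*_softirq() contract. */
static bool irqs_enabled = true;  /* stand-in for the CPU IRQ flag */
static bool in_softirq;           /* stand-in for in_serving_softirq() */
static int active_sections;       /* open remote coverage sections */

static unsigned long model_remote_start_usb_softirq(void)
{
	unsigned long flags = 0;

	if (in_softirq) {
		flags = irqs_enabled;   /* local_irq_save(flags)... */
		irqs_enabled = false;   /* ...masks interrupts */
		active_sections++;      /* kcov_remote_start_usb(id) */
	}
	return flags;
}

static void model_remote_stop_softirq(unsigned long flags)
{
	if (in_softirq) {
		active_sections--;      /* kcov_remote_stop() */
		irqs_enabled = flags;   /* local_irq_restore(flags) */
	}
}
```

Outside softirq context both helpers are no-ops; inside it, interrupts stay masked for the whole section, which is what prevents the nested `__usb_hcd_giveback_urb()` case described in the comment.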
+1 -1
include/linux/lsm_hook_defs.h
··· 413 413 414 414 #ifdef CONFIG_AUDIT 415 415 LSM_HOOK(int, 0, audit_rule_init, u32 field, u32 op, char *rulestr, 416 - void **lsmrule) 416 + void **lsmrule, gfp_t gfp) 417 417 LSM_HOOK(int, 0, audit_rule_known, struct audit_krule *krule) 418 418 LSM_HOOK(int, 0, audit_rule_match, u32 secid, u32 field, u32 op, void *lsmrule) 419 419 LSM_HOOK(void, LSM_RET_VOID, audit_rule_free, void *lsmrule)
+1 -8
include/linux/mm.h
··· 3776 3776 static inline bool want_init_on_free(void) 3777 3777 { 3778 3778 return static_branch_maybe(CONFIG_INIT_ON_FREE_DEFAULT_ON, 3779 - &init_on_free); 3780 - } 3781 - 3782 - DECLARE_STATIC_KEY_MAYBE(CONFIG_INIT_MLOCKED_ON_FREE_DEFAULT_ON, init_mlocked_on_free); 3783 - static inline bool want_init_mlocked_on_free(void) 3784 - { 3785 - return static_branch_maybe(CONFIG_INIT_MLOCKED_ON_FREE_DEFAULT_ON, 3786 - &init_mlocked_on_free); 3779 + &init_on_free); 3787 3780 } 3788 3781 3789 3782 extern bool _debug_pagealloc_enabled_early;
+4
include/linux/pagemap.h
··· 381 381 */ 382 382 static inline bool mapping_large_folio_support(struct address_space *mapping) 383 383 { 384 + /* AS_LARGE_FOLIO_SUPPORT is only reasonable for pagecache folios */ 385 + VM_WARN_ONCE((unsigned long)mapping & PAGE_MAPPING_ANON, 386 + "Anonymous mapping always supports large folio"); 387 + 384 388 return IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && 385 389 test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags); 386 390 }
+8 -3
include/linux/pgalloc_tag.h
··· 37 37 38 38 static inline void put_page_tag_ref(union codetag_ref *ref) 39 39 { 40 + if (WARN_ON(!ref)) 41 + return; 42 + 40 43 page_ext_put(page_ext_from_codetag_ref(ref)); 41 44 } 42 45 ··· 105 102 union codetag_ref *ref = get_page_tag_ref(page); 106 103 107 104 alloc_tag_sub_check(ref); 108 - if (ref && ref->ct) 109 - tag = ct_to_alloc_tag(ref->ct); 110 - put_page_tag_ref(ref); 105 + if (ref) { 106 + if (ref->ct) 107 + tag = ct_to_alloc_tag(ref->ct); 108 + put_page_tag_ref(ref); 109 + } 111 110 } 112 111 113 112 return tag;
+3 -2
include/linux/security.h
··· 2048 2048 2049 2049 #ifdef CONFIG_AUDIT 2050 2050 #ifdef CONFIG_SECURITY 2051 - int security_audit_rule_init(u32 field, u32 op, char *rulestr, void **lsmrule); 2051 + int security_audit_rule_init(u32 field, u32 op, char *rulestr, void **lsmrule, 2052 + gfp_t gfp); 2052 2053 int security_audit_rule_known(struct audit_krule *krule); 2053 2054 int security_audit_rule_match(u32 secid, u32 field, u32 op, void *lsmrule); 2054 2055 void security_audit_rule_free(void *lsmrule); ··· 2057 2056 #else 2058 2057 2059 2058 static inline int security_audit_rule_init(u32 field, u32 op, char *rulestr, 2060 - void **lsmrule) 2059 + void **lsmrule, gfp_t gfp) 2061 2060 { 2062 2061 return 0; 2063 2062 }
+1 -1
include/linux/string.h
··· 289 289 290 290 extern void *kvmemdup(const void *src, size_t len, gfp_t gfp) __realloc_size(2); 291 291 extern char *kmemdup_nul(const char *s, size_t len, gfp_t gfp); 292 - extern void *kmemdup_array(const void *src, size_t element_size, size_t count, gfp_t gfp) 292 + extern void *kmemdup_array(const void *src, size_t count, size_t element_size, gfp_t gfp) 293 293 __realloc_size(2, 3); 294 294 295 295 /* lib/argv_split.c */
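The string.h hunk swaps `kmemdup_array()`'s middle arguments so the order is `(src, count, element_size, gfp)`, matching the kcalloc()-style count-before-size convention. A userspace sketch of that shape (the `_model` suffix is ours; `gfp` is dropped because plain `malloc()` has no allocation flags):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Userspace model of kmemdup_array() after this change:
 * count comes before element_size. */
static void *kmemdup_array_model(const void *src, size_t count,
				 size_t element_size)
{
	size_t bytes;
	void *p;

	/* reject count * element_size overflow, as the kernel helper must */
	if (element_size && count > (size_t)-1 / element_size)
		return NULL;
	bytes = count * element_size;
	p = malloc(bytes);
	if (p)
		memcpy(p, src, bytes);
	return p;
}
```

The fortify_kunit.c hunk later in this diff updates the caller to the same order, passing `1, len` instead of `len, 1`.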
+9
include/linux/sysfs.h
··· 750 750 { 751 751 return 0; 752 752 } 753 + 754 + static inline ssize_t sysfs_bin_attr_simple_read(struct file *file, 755 + struct kobject *kobj, 756 + struct bin_attribute *attr, 757 + char *buf, loff_t off, 758 + size_t count) 759 + { 760 + return 0; 761 + } 753 762 #endif /* CONFIG_SYSFS */ 754 763 755 764 static inline int __must_check sysfs_create_file(struct kobject *kobj,
+1
include/linux/vfio.h
··· 64 64 struct completion comp; 65 65 struct iommufd_access *iommufd_access; 66 66 void (*put_kvm)(struct kvm *kvm); 67 + struct inode *inode; 67 68 #if IS_ENABLED(CONFIG_IOMMUFD) 68 69 struct iommufd_device *iommufd_device; 69 70 u8 iommufd_attached:1;
-2
include/linux/vfio_pci_core.h
··· 93 93 struct list_head sriov_pfs_item; 94 94 struct vfio_pci_core_device *sriov_pf_core_dev; 95 95 struct notifier_block nb; 96 - struct mutex vma_lock; 97 - struct list_head vma_list; 98 96 struct rw_semaphore memory_lock; 99 97 }; 100 98
+3
include/net/netns/netfilter.h
··· 15 15 const struct nf_logger __rcu *nf_loggers[NFPROTO_NUMPROTO]; 16 16 #ifdef CONFIG_SYSCTL 17 17 struct ctl_table_header *nf_log_dir_header; 18 + #ifdef CONFIG_LWTUNNEL 19 + struct ctl_table_header *nf_lwtnl_dir_header; 20 + #endif 18 21 #endif 19 22 struct nf_hook_entries __rcu *hooks_ipv4[NF_INET_NUMHOOKS]; 20 23 struct nf_hook_entries __rcu *hooks_ipv6[NF_INET_NUMHOOKS];
+2
include/scsi/scsi_transport_sas.h
··· 200 200 void sas_disable_tlr(struct scsi_device *); 201 201 void sas_enable_tlr(struct scsi_device *); 202 202 203 + bool sas_ata_ncq_prio_supported(struct scsi_device *sdev); 204 + 203 205 extern struct sas_rphy *sas_end_device_alloc(struct sas_port *); 204 206 extern struct sas_rphy *sas_expander_alloc(struct sas_port *, enum sas_device_type); 205 207 void sas_rphy_free(struct sas_rphy *);
+69 -44
include/trace/events/firewire.h
··· 36 36 #define QUADLET_SIZE 4 37 37 38 38 DECLARE_EVENT_CLASS(async_outbound_initiate_template, 39 - TP_PROTO(u64 transaction, unsigned int generation, unsigned int scode, const u32 *header, const u32 *data, unsigned int data_count), 40 - TP_ARGS(transaction, generation, scode, header, data, data_count), 39 + TP_PROTO(u64 transaction, unsigned int card_index, unsigned int generation, unsigned int scode, const u32 *header, const u32 *data, unsigned int data_count), 40 + TP_ARGS(transaction, card_index, generation, scode, header, data, data_count), 41 41 TP_STRUCT__entry( 42 42 __field(u64, transaction) 43 + __field(u8, card_index) 43 44 __field(u8, generation) 44 45 __field(u8, scode) 45 46 __array(u32, header, ASYNC_HEADER_QUADLET_COUNT) ··· 48 47 ), 49 48 TP_fast_assign( 50 49 __entry->transaction = transaction; 50 + __entry->card_index = card_index; 51 51 __entry->generation = generation; 52 52 __entry->scode = scode; 53 53 memcpy(__entry->header, header, QUADLET_SIZE * ASYNC_HEADER_QUADLET_COUNT); ··· 56 54 ), 57 55 // This format is for the request subaction. 58 56 TP_printk( 59 - "transaction=0x%llx generation=%u scode=%u dst_id=0x%04x tlabel=%u tcode=%u src_id=0x%04x offset=0x%012llx header=%s data=%s", 57 + "transaction=0x%llx card_index=%u generation=%u scode=%u dst_id=0x%04x tlabel=%u tcode=%u src_id=0x%04x offset=0x%012llx header=%s data=%s", 60 58 __entry->transaction, 59 + __entry->card_index, 61 60 __entry->generation, 62 61 __entry->scode, 63 62 ASYNC_HEADER_GET_DESTINATION(__entry->header), ··· 74 71 75 72 // The value of status is one of ack codes and rcodes specific to Linux FireWire subsystem. 
76 73 DECLARE_EVENT_CLASS(async_outbound_complete_template, 77 - TP_PROTO(u64 transaction, unsigned int generation, unsigned int scode, unsigned int status, unsigned int timestamp), 78 - TP_ARGS(transaction, generation, scode, status, timestamp), 74 + TP_PROTO(u64 transaction, unsigned int card_index, unsigned int generation, unsigned int scode, unsigned int status, unsigned int timestamp), 75 + TP_ARGS(transaction, card_index, generation, scode, status, timestamp), 79 76 TP_STRUCT__entry( 80 77 __field(u64, transaction) 78 + __field(u8, card_index) 81 79 __field(u8, generation) 82 80 __field(u8, scode) 83 81 __field(u8, status) ··· 86 82 ), 87 83 TP_fast_assign( 88 84 __entry->transaction = transaction; 85 + __entry->card_index = card_index; 89 86 __entry->generation = generation; 90 87 __entry->scode = scode; 91 88 __entry->status = status; 92 89 __entry->timestamp = timestamp; 93 90 ), 94 91 TP_printk( 95 - "transaction=0x%llx generation=%u scode=%u status=%u timestamp=0x%04x", 92 + "transaction=0x%llx card_index=%u generation=%u scode=%u status=%u timestamp=0x%04x", 96 93 __entry->transaction, 94 + __entry->card_index, 97 95 __entry->generation, 98 96 __entry->scode, 99 97 __entry->status, ··· 105 99 106 100 // The value of status is one of ack codes and rcodes specific to Linux FireWire subsystem. 
107 101 DECLARE_EVENT_CLASS(async_inbound_template, 108 - TP_PROTO(u64 transaction, unsigned int generation, unsigned int scode, unsigned int status, unsigned int timestamp, const u32 *header, const u32 *data, unsigned int data_count), 109 - TP_ARGS(transaction, generation, scode, status, timestamp, header, data, data_count), 102 + TP_PROTO(u64 transaction, unsigned int card_index, unsigned int generation, unsigned int scode, unsigned int status, unsigned int timestamp, const u32 *header, const u32 *data, unsigned int data_count), 103 + TP_ARGS(transaction, card_index, generation, scode, status, timestamp, header, data, data_count), 110 104 TP_STRUCT__entry( 111 105 __field(u64, transaction) 106 + __field(u8, card_index) 112 107 __field(u8, generation) 113 108 __field(u8, scode) 114 109 __field(u8, status) ··· 119 112 ), 120 113 TP_fast_assign( 121 114 __entry->transaction = transaction; 115 + __entry->card_index = card_index; 122 116 __entry->generation = generation; 123 117 __entry->scode = scode; 124 118 __entry->status = status; ··· 129 121 ), 130 122 // This format is for the response subaction. 
131 123 TP_printk( 132 - "transaction=0x%llx generation=%u scode=%u status=%u timestamp=0x%04x dst_id=0x%04x tlabel=%u tcode=%u src_id=0x%04x rcode=%u header=%s data=%s", 124 + "transaction=0x%llx card_index=%u generation=%u scode=%u status=%u timestamp=0x%04x dst_id=0x%04x tlabel=%u tcode=%u src_id=0x%04x rcode=%u header=%s data=%s", 133 125 __entry->transaction, 126 + __entry->card_index, 134 127 __entry->generation, 135 128 __entry->scode, 136 129 __entry->status, ··· 148 139 ); 149 140 150 141 DEFINE_EVENT(async_outbound_initiate_template, async_request_outbound_initiate, 151 - TP_PROTO(u64 transaction, unsigned int generation, unsigned int scode, const u32 *header, const u32 *data, unsigned int data_count), 152 - TP_ARGS(transaction, generation, scode, header, data, data_count) 142 + TP_PROTO(u64 transaction, unsigned int card_index, unsigned int generation, unsigned int scode, const u32 *header, const u32 *data, unsigned int data_count), 143 + TP_ARGS(transaction, card_index, generation, scode, header, data, data_count) 153 144 ); 154 145 155 146 DEFINE_EVENT(async_outbound_complete_template, async_request_outbound_complete, 156 - TP_PROTO(u64 transaction, unsigned int generation, unsigned int scode, unsigned int status, unsigned int timestamp), 157 - TP_ARGS(transaction, generation, scode, status, timestamp) 147 + TP_PROTO(u64 transaction, unsigned int card_index, unsigned int generation, unsigned int scode, unsigned int status, unsigned int timestamp), 148 + TP_ARGS(transaction, card_index, generation, scode, status, timestamp) 158 149 ); 159 150 160 151 DEFINE_EVENT(async_inbound_template, async_response_inbound, 161 - TP_PROTO(u64 transaction, unsigned int generation, unsigned int scode, unsigned int status, unsigned int timestamp, const u32 *header, const u32 *data, unsigned int data_count), 162 - TP_ARGS(transaction, generation, scode, status, timestamp, header, data, data_count) 152 + TP_PROTO(u64 transaction, unsigned int card_index, unsigned int generation, unsigned int scode, unsigned int status, unsigned int timestamp, const u32 *header, const u32 *data, unsigned int data_count), 153 + TP_ARGS(transaction, card_index, generation, scode, status, timestamp, header, data, data_count) 163 154 ); 164 155 165 156 DEFINE_EVENT_PRINT(async_inbound_template, async_request_inbound, 166 - TP_PROTO(u64 transaction, unsigned int generation, unsigned int scode, unsigned int status, unsigned int timestamp, const u32 *header, const u32 *data, unsigned int data_count), 167 - TP_ARGS(transaction, generation, scode, status, timestamp, header, data, data_count), 157 + TP_PROTO(u64 transaction, unsigned int card_index, unsigned int generation, unsigned int scode, unsigned int status, unsigned int timestamp, const u32 *header, const u32 *data, unsigned int data_count), 158 + TP_ARGS(transaction, card_index, generation, scode, status, timestamp, header, data, data_count), 168 159 TP_printk( 169 - "transaction=0x%llx generation=%u scode=%u status=%u timestamp=0x%04x dst_id=0x%04x tlabel=%u tcode=%u src_id=0x%04x offset=0x%012llx header=%s data=%s", 160 + "transaction=0x%llx card_index=%u generation=%u scode=%u status=%u timestamp=0x%04x dst_id=0x%04x tlabel=%u tcode=%u src_id=0x%04x offset=0x%012llx header=%s data=%s", 170 161 __entry->transaction, 162 + __entry->card_index, 171 163 __entry->generation, 172 164 __entry->scode, 173 165 __entry->status, ··· 185 175 ); 186 176 187 177 DEFINE_EVENT_PRINT(async_outbound_initiate_template, async_response_outbound_initiate, 188 - TP_PROTO(u64 transaction, unsigned int generation, unsigned int scode, const u32 *header, const u32 *data, unsigned int data_count), 189 - TP_ARGS(transaction, generation, scode, header, data, data_count), 178 + TP_PROTO(u64 transaction, unsigned int card_index, unsigned int generation, unsigned int scode, const u32 *header, const u32 *data, unsigned int data_count), 179 + TP_ARGS(transaction, card_index, generation, scode, header, data, data_count), 190 180
TP_printk( 191 - "transaction=0x%llx generation=%u scode=%u dst_id=0x%04x tlabel=%u tcode=%u src_id=0x%04x rcode=%u header=%s data=%s", 181 + "transaction=0x%llx card_index=%u generation=%u scode=%u dst_id=0x%04x tlabel=%u tcode=%u src_id=0x%04x rcode=%u header=%s data=%s", 192 182 __entry->transaction, 183 + __entry->card_index, 193 184 __entry->generation, 194 185 __entry->scode, 195 186 ASYNC_HEADER_GET_DESTINATION(__entry->header), ··· 205 194 ); 206 195 207 196 DEFINE_EVENT(async_outbound_complete_template, async_response_outbound_complete, 208 - TP_PROTO(u64 transaction, unsigned int generation, unsigned int scode, unsigned int status, unsigned int timestamp), 209 - TP_ARGS(transaction, generation, scode, status, timestamp) 197 + TP_PROTO(u64 transaction, unsigned int card_index, unsigned int generation, unsigned int scode, unsigned int status, unsigned int timestamp), 198 + TP_ARGS(transaction, card_index, generation, scode, status, timestamp) 210 199 ); 211 200 212 201 #undef ASYNC_HEADER_GET_DESTINATION ··· 217 206 #undef ASYNC_HEADER_GET_RCODE 218 207 219 208 TRACE_EVENT(async_phy_outbound_initiate, 220 - TP_PROTO(u64 packet, unsigned int generation, u32 first_quadlet, u32 second_quadlet), 221 - TP_ARGS(packet, generation, first_quadlet, second_quadlet), 209 + TP_PROTO(u64 packet, unsigned int card_index, unsigned int generation, u32 first_quadlet, u32 second_quadlet), 210 + TP_ARGS(packet, card_index, generation, first_quadlet, second_quadlet), 222 211 TP_STRUCT__entry( 223 212 __field(u64, packet) 213 + __field(u8, card_index) 224 214 __field(u8, generation) 225 215 __field(u32, first_quadlet) 226 216 __field(u32, second_quadlet) 227 217 ), 228 218 TP_fast_assign( 229 219 __entry->packet = packet; 220 + __entry->card_index = card_index; 230 221 __entry->generation = generation; 231 222 __entry->first_quadlet = first_quadlet; 232 223 __entry->second_quadlet = second_quadlet 233 224 ), 234 225 TP_printk( 235 - "packet=0x%llx generation=%u first_quadlet=0x%08x second_quadlet=0x%08x", 226 + "packet=0x%llx card_index=%u generation=%u first_quadlet=0x%08x second_quadlet=0x%08x", 236 227 __entry->packet, 228 + __entry->card_index, 237 229 __entry->generation, 238 230 __entry->first_quadlet, 239 231 __entry->second_quadlet ··· 244 230 ); 245 231 246 232 TRACE_EVENT(async_phy_outbound_complete, 247 - TP_PROTO(u64 packet, unsigned int generation, unsigned int status, unsigned int timestamp), 248 - TP_ARGS(packet, generation, status, timestamp), 233 + TP_PROTO(u64 packet, unsigned int card_index, unsigned int generation, unsigned int status, unsigned int timestamp), 234 + TP_ARGS(packet, card_index, generation, status, timestamp), 249 235 TP_STRUCT__entry( 250 236 __field(u64, packet) 237 + __field(u8, card_index) 251 238 __field(u8, generation) 252 239 __field(u8, status) 253 240 __field(u16, timestamp) 254 241 ), 255 242 TP_fast_assign( 256 243 __entry->packet = packet; 244 + __entry->card_index = card_index; 257 245 __entry->generation = generation; 258 246 __entry->status = status; 259 247 __entry->timestamp = timestamp; 260 248 ), 261 249 TP_printk( 262 - "packet=0x%llx generation=%u status=%u timestamp=0x%04x", 250 + "packet=0x%llx card_index=%u generation=%u status=%u timestamp=0x%04x", 263 251 __entry->packet, 252 + __entry->card_index, 264 253 __entry->generation, 265 254 __entry->status, 266 255 __entry->timestamp ··· 271 254 ); 272 255 273 256 TRACE_EVENT(async_phy_inbound, 274 - TP_PROTO(u64 packet, unsigned int generation, unsigned int status, unsigned int timestamp, u32 first_quadlet, u32 second_quadlet), 275 - TP_ARGS(packet, generation, status, timestamp, first_quadlet, second_quadlet), 257 + TP_PROTO(u64 packet, unsigned int card_index, unsigned int generation, unsigned int status, unsigned int timestamp, u32 first_quadlet, u32 second_quadlet), 258 + TP_ARGS(packet, card_index, generation, status, timestamp, first_quadlet, second_quadlet), 276 259 TP_STRUCT__entry( 277 260 __field(u64, packet) 261 + __field(u8, card_index) 278 262 __field(u8, generation) 279 263 __field(u8, status) 280 264 __field(u16, timestamp) ··· 291 273 __entry->second_quadlet = second_quadlet 292 274 ), 293 275 TP_printk( 294 - "packet=0x%llx generation=%u status=%u timestamp=0x%04x first_quadlet=0x%08x second_quadlet=0x%08x", 276 + "packet=0x%llx card_index=%u generation=%u status=%u timestamp=0x%04x first_quadlet=0x%08x second_quadlet=0x%08x", 295 277 __entry->packet, 278 + __entry->card_index, 296 279 __entry->generation, 297 280 __entry->status, 298 281 __entry->timestamp, ··· 303 284 ); 304 285 305 286 DECLARE_EVENT_CLASS(bus_reset_arrange_template, 306 - TP_PROTO(unsigned int generation, bool short_reset), 307 - TP_ARGS(generation, short_reset), 287 + TP_PROTO(unsigned int card_index, unsigned int generation, bool short_reset), 288 + TP_ARGS(card_index, generation, short_reset), 308 289 TP_STRUCT__entry( 290 + __field(u8, card_index) 309 291 __field(u8, generation) 310 292 __field(bool, short_reset) 311 293 ), 312 294 TP_fast_assign( 295 + __entry->card_index = card_index; 313 296 __entry->generation = generation; 314 297 __entry->short_reset = short_reset; 315 298 ), 316 299 TP_printk( 317 - "generation=%u short_reset=%s", 300 + "card_index=%u generation=%u short_reset=%s", 301 + __entry->card_index, 318 302 __entry->generation, 319 303 __entry->short_reset ? "true" : "false" 320 304 ) 321 305 ); 322 306 323 307 DEFINE_EVENT(bus_reset_arrange_template, bus_reset_initiate, 324 - TP_PROTO(unsigned int generation, bool short_reset), 325 - TP_ARGS(generation, short_reset) 308 + TP_PROTO(unsigned int card_index, unsigned int generation, bool short_reset), 309 + TP_ARGS(card_index, generation, short_reset) 326 310 ); 327 311 328 312 DEFINE_EVENT(bus_reset_arrange_template, bus_reset_schedule, 329 - TP_PROTO(unsigned int generation, bool short_reset), 330 - TP_ARGS(generation, short_reset) 313 + TP_PROTO(unsigned int card_index, unsigned int generation, bool short_reset), 314 + TP_ARGS(card_index, generation, short_reset) 331 315 ); 332 316 333 317 DEFINE_EVENT(bus_reset_arrange_template, bus_reset_postpone, 334 - TP_PROTO(unsigned int generation, bool short_reset), 335 - TP_ARGS(generation, short_reset) 318 + TP_PROTO(unsigned int card_index, unsigned int generation, bool short_reset), 319 + TP_ARGS(card_index, generation, short_reset) 336 320 ); 337 321 338 322 TRACE_EVENT(bus_reset_handle, 339 - TP_PROTO(unsigned int generation, unsigned int node_id, bool bm_abdicate, u32 *self_ids, unsigned int self_id_count), 340 - TP_ARGS(generation, node_id, bm_abdicate, self_ids, self_id_count), 323 + TP_PROTO(unsigned int card_index, unsigned int generation, unsigned int node_id, bool bm_abdicate, u32 *self_ids, unsigned int self_id_count), 324 + TP_ARGS(card_index, generation, node_id, bm_abdicate, self_ids, self_id_count), 341 325 TP_STRUCT__entry( 326 + __field(u8, card_index) 342 327 __field(u8, generation) 343 328 __field(u8, node_id) 344 329 __field(bool, bm_abdicate) 345 330 __dynamic_array(u32, self_ids, self_id_count) 346 331 ), 347 332 TP_fast_assign( 333 + __entry->card_index = card_index; 348 334 __entry->generation = generation; 349 335 __entry->node_id = node_id; 350 336 __entry->bm_abdicate = bm_abdicate; 351 337 memcpy(__get_dynamic_array(self_ids), self_ids, __get_dynamic_array_len(self_ids)); 352 338 ), 353 339
TP_printk( 354 - "generation=%u node_id=0x%04x bm_abdicate=%s self_ids=%s", 340 + "card_index=%u generation=%u node_id=0x%04x bm_abdicate=%s self_ids=%s", 341 + __entry->card_index, 355 342 __entry->generation, 356 343 __entry->node_id, 357 344 __entry->bm_abdicate ? "true" : "false",
+1 -1
init/Kconfig
··· 883 883 884 884 config CC_NO_ARRAY_BOUNDS 885 885 bool 886 - default y if CC_IS_GCC && GCC_VERSION >= 100000 && GCC10_NO_ARRAY_BOUNDS 886 + default y if CC_IS_GCC && GCC_VERSION >= 90000 && GCC10_NO_ARRAY_BOUNDS 887 887 888 888 # Currently, disable -Wstringop-overflow for GCC globally. 889 889 config GCC_NO_STRINGOP_OVERFLOW
+2 -2
io_uring/cancel.h
··· 27 27 28 28 static inline bool io_cancel_match_sequence(struct io_kiocb *req, int sequence) 29 29 { 30 - if ((req->flags & REQ_F_CANCEL_SEQ) && sequence == req->work.cancel_seq) 30 + if (req->cancel_seq_set && sequence == req->work.cancel_seq) 31 31 return true; 32 32 33 - req->flags |= REQ_F_CANCEL_SEQ; 33 + req->cancel_seq_set = true; 34 34 req->work.cancel_seq = sequence; 35 35 return false; 36 36 }
+1
io_uring/io_uring.c
··· 2058 2058 req->file = NULL; 2059 2059 req->rsrc_node = NULL; 2060 2060 req->task = current; 2061 + req->cancel_seq_set = false; 2061 2062 2062 2063 if (unlikely(opcode >= IORING_OP_LAST)) { 2063 2064 req->opcode = 0;
+1
io_uring/rsrc.c
··· 249 249 250 250 ret = io_run_task_work_sig(ctx); 251 251 if (ret < 0) { 252 + __set_current_state(TASK_RUNNING); 252 253 mutex_lock(&ctx->uring_lock); 253 254 if (list_empty(&ctx->rsrc_ref_list)) 254 255 ret = 0;
+3 -2
kernel/auditfilter.c
··· 529 529 entry->rule.buflen += f_val; 530 530 f->lsm_str = str; 531 531 err = security_audit_rule_init(f->type, f->op, str, 532 - (void **)&f->lsm_rule); 532 + (void **)&f->lsm_rule, 533 + GFP_KERNEL); 533 534 /* Keep currently invalid fields around in case they 534 535 * become valid after a policy reload. */ 535 536 if (err == -EINVAL) { ··· 800 799 801 800 /* our own (refreshed) copy of lsm_rule */ 802 801 ret = security_audit_rule_init(df->type, df->op, df->lsm_str, 803 - (void **)&df->lsm_rule); 802 + (void **)&df->lsm_rule, GFP_KERNEL); 804 803 /* Keep currently invalid fields around in case they 805 804 * become valid after a policy reload. */ 806 805 if (ret == -EINVAL) {
+16 -9
kernel/bpf/verifier.c
··· 4549 4549 state->stack[spi].spilled_ptr.id = 0; 4550 4550 } else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) && 4551 4551 env->bpf_capable) { 4552 - struct bpf_reg_state fake_reg = {}; 4552 + struct bpf_reg_state *tmp_reg = &env->fake_reg[0]; 4553 4553 4554 - __mark_reg_known(&fake_reg, insn->imm); 4555 - fake_reg.type = SCALAR_VALUE; 4556 - save_register_state(env, state, spi, &fake_reg, size); 4554 + memset(tmp_reg, 0, sizeof(*tmp_reg)); 4555 + __mark_reg_known(tmp_reg, insn->imm); 4556 + tmp_reg->type = SCALAR_VALUE; 4557 + save_register_state(env, state, spi, tmp_reg, size); 4557 4558 } else if (reg && is_spillable_regtype(reg->type)) { 4558 4559 /* register containing pointer is being spilled into stack */ 4559 4560 if (size != BPF_REG_SIZE) { ··· 15114 15113 struct bpf_reg_state *regs = this_branch->frame[this_branch->curframe]->regs; 15115 15114 struct bpf_reg_state *dst_reg, *other_branch_regs, *src_reg = NULL; 15116 15115 struct bpf_reg_state *eq_branch_regs; 15117 - struct bpf_reg_state fake_reg = {}; 15118 15116 u8 opcode = BPF_OP(insn->code); 15119 15117 bool is_jmp32; 15120 15118 int pred = -1; ··· 15179 15179 verbose(env, "BPF_JMP/JMP32 uses reserved fields\n"); 15180 15180 return -EINVAL; 15181 15181 } 15182 - src_reg = &fake_reg; 15182 + src_reg = &env->fake_reg[0]; 15183 + memset(src_reg, 0, sizeof(*src_reg)); 15183 15184 src_reg->type = SCALAR_VALUE; 15184 15185 __mark_reg_known(src_reg, insn->imm); 15185 15186 } ··· 15240 15239 &other_branch_regs[insn->src_reg], 15241 15240 dst_reg, src_reg, opcode, is_jmp32); 15242 15241 } else /* BPF_SRC(insn->code) == BPF_K */ { 15242 + /* reg_set_min_max() can mangle the fake_reg. Make a copy 15243 + * so that these are two different memory locations. The 15244 + * src_reg is not used beyond here in context of K. 
15245 + */ 15246 + memcpy(&env->fake_reg[1], &env->fake_reg[0], 15247 + sizeof(env->fake_reg[0])); 15243 15248 err = reg_set_min_max(env, 15244 15249 &other_branch_regs[insn->dst_reg], 15245 - src_reg /* fake one */, 15246 - dst_reg, src_reg /* same fake one */, 15250 + &env->fake_reg[0], 15251 + dst_reg, &env->fake_reg[1], 15247 15252 opcode, is_jmp32); 15248 15253 } 15249 15254 if (err) ··· 20320 20313 goto next_insn; 20321 20314 } 20322 20315 20323 - #ifdef CONFIG_X86_64 20316 + #if defined(CONFIG_X86_64) && !defined(CONFIG_UML) 20324 20317 /* Implement bpf_get_smp_processor_id() inline. */ 20325 20318 if (insn->imm == BPF_FUNC_get_smp_processor_id && 20326 20319 prog->jit_requested && bpf_jit_supports_percpu_insn()) {
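The verifier hunk above copies `fake_reg[0]` into `fake_reg[1]` because, per its comment, `reg_set_min_max()` can mangle the fake register when the same object is passed for both roles. An illustrative sketch of that aliasing hazard (not verifier code; `narrow()` and `struct range` are hypothetical stand-ins):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for a helper that, like reg_set_min_max(),
 * mutates its destination while still reading its source. */
struct range { int min, max; };

static void narrow(struct range *dst, const struct range *src)
{
	dst->min = 0;                   /* clobbers *dst first... */
	dst->max = src->max - src->min; /* ...then reads *src: wrong if dst == src */
}

static int narrow_width_aliased(struct range r)
{
	narrow(&r, &r);                 /* one object used for both roles */
	return r.max;
}

static int narrow_width_copied(struct range r)
{
	struct range tmp;

	memcpy(&tmp, &r, sizeof(r));    /* two distinct memory locations */
	narrow(&r, &tmp);
	return r.max;
}
```

With `{min=3, max=7}` the copied variant yields the intended width 4, while the aliased variant reads its own clobbered `min` and yields 7, which is the class of bug the extra `memcpy()` in the hunk avoids.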
+3 -1
kernel/gcov/gcc_4_7.c
··· 18 18 #include <linux/mm.h> 19 19 #include "gcov.h" 20 20 21 - #if (__GNUC__ >= 10) 21 + #if (__GNUC__ >= 14) 22 + #define GCOV_COUNTERS 9 23 + #elif (__GNUC__ >= 10) 22 24 #define GCOV_COUNTERS 8 23 25 #elif (__GNUC__ >= 7) 24 26 #define GCOV_COUNTERS 9
+1
kernel/kcov.c
··· 632 632 return -EINVAL; 633 633 kcov->mode = mode; 634 634 t->kcov = kcov; 635 + t->kcov_mode = KCOV_MODE_REMOTE; 635 636 kcov->t = t; 636 637 kcov->remote = true; 637 638 kcov->remote_size = remote_arg->area_size;
+1
kernel/pid_namespace.c
··· 218 218 */ 219 219 do { 220 220 clear_thread_flag(TIF_SIGPENDING); 221 + clear_thread_flag(TIF_NOTIFY_SIGNAL); 221 222 rc = kernel_wait4(-1, NULL, __WALL, NULL); 222 223 } while (rc != -ECHILD); 223 224
+14 -28
kernel/time/tick-common.c
··· 178 178 } 179 179 } 180 180 181 - #ifdef CONFIG_NO_HZ_FULL 182 - static void giveup_do_timer(void *info) 183 - { 184 - int cpu = *(unsigned int *)info; 185 - 186 - WARN_ON(tick_do_timer_cpu != smp_processor_id()); 187 - 188 - tick_do_timer_cpu = cpu; 189 - } 190 - 191 - static void tick_take_do_timer_from_boot(void) 192 - { 193 - int cpu = smp_processor_id(); 194 - int from = tick_do_timer_boot_cpu; 195 - 196 - if (from >= 0 && from != cpu) 197 - smp_call_function_single(from, giveup_do_timer, &cpu, 1); 198 - } 199 - #endif 200 - 201 181 /* 202 182 * Setup the tick device 203 183 */ ··· 201 221 tick_next_period = ktime_get(); 202 222 #ifdef CONFIG_NO_HZ_FULL 203 223 /* 204 - * The boot CPU may be nohz_full, in which case set 205 - * tick_do_timer_boot_cpu so the first housekeeping 206 - * secondary that comes up will take do_timer from 207 - * us. 224 + * The boot CPU may be nohz_full, in which case the 225 + * first housekeeping secondary will take do_timer() 226 + * from it. 208 227 */ 209 228 if (tick_nohz_full_cpu(cpu)) 210 229 tick_do_timer_boot_cpu = cpu; 211 230 212 - } else if (tick_do_timer_boot_cpu != -1 && 213 - !tick_nohz_full_cpu(cpu)) { 214 - tick_take_do_timer_from_boot(); 231 + } else if (tick_do_timer_boot_cpu != -1 && !tick_nohz_full_cpu(cpu)) { 215 232 tick_do_timer_boot_cpu = -1; 216 - WARN_ON(READ_ONCE(tick_do_timer_cpu) != cpu); 233 + /* 234 + * The boot CPU will stay in periodic (NOHZ disabled) 235 + * mode until clocksource_done_booting() called after 236 + * smp_init() selects a high resolution clocksource and 237 + * timekeeping_notify() kicks the NOHZ stuff alive. 238 + * 239 + * So this WRITE_ONCE can only race with the READ_ONCE 240 + * check in tick_periodic() but this race is harmless. 241 + */ 242 + WRITE_ONCE(tick_do_timer_cpu, cpu); 217 243 #endif 218 244 } 219 245
+2 -2
kernel/trace/Kconfig
··· 1136 1136 1137 1137 config SYNTH_EVENT_GEN_TEST 1138 1138 tristate "Test module for in-kernel synthetic event generation" 1139 - depends on SYNTH_EVENTS 1139 + depends on SYNTH_EVENTS && m 1140 1140 help 1141 1141 This option creates a test module to check the base 1142 1142 functionality of in-kernel synthetic event definition and ··· 1149 1149 1150 1150 config KPROBE_EVENT_GEN_TEST 1151 1151 tristate "Test module for in-kernel kprobe event generation" 1152 - depends on KPROBE_EVENTS 1152 + depends on KPROBE_EVENTS && m 1153 1153 help 1154 1154 This option creates a test module to check the base 1155 1155 functionality of in-kernel kprobe event definition.
+1 -7
lib/Kconfig
··· 539 539 stack overflow. 540 540 541 541 config FORCE_NR_CPUS 542 - bool "Set number of CPUs at compile time" 543 - depends on SMP && EXPERT && !COMPILE_TEST 544 - help 545 - Say Yes if you have NR_CPUS set to an actual number of possible 546 - CPUs in your system, not to a default value. This forces the core 547 - code to rely on compile-time value and optimize kernel routines 548 - better. 542 + def_bool !SMP 549 543 550 544 config CPU_RMAP 551 545 bool
+13 -3
lib/alloc_tag.c
··· 227 227 }; 228 228 EXPORT_SYMBOL(page_alloc_tagging_ops); 229 229 230 + #ifdef CONFIG_SYSCTL 230 231 static struct ctl_table memory_allocation_profiling_sysctls[] = { 231 232 { 232 233 .procname = "mem_profiling", ··· 242 241 { } 243 242 }; 244 243 244 + static void __init sysctl_init(void) 245 + { 246 + if (!mem_profiling_support) 247 + memory_allocation_profiling_sysctls[0].mode = 0444; 248 + 249 + register_sysctl_init("vm", memory_allocation_profiling_sysctls); 250 + } 251 + #else /* CONFIG_SYSCTL */ 252 + static inline void sysctl_init(void) {} 253 + #endif /* CONFIG_SYSCTL */ 254 + 245 255 static int __init alloc_tag_init(void) 246 256 { 247 257 const struct codetag_type_desc desc = { ··· 265 253 if (IS_ERR(alloc_tag_cttype)) 266 254 return PTR_ERR(alloc_tag_cttype); 267 255 268 - if (!mem_profiling_support) 269 - memory_allocation_profiling_sysctls[0].mode = 0444; 270 - register_sysctl_init("vm", memory_allocation_profiling_sysctls); 256 + sysctl_init(); 271 257 procfs_init(); 272 258 273 259 return 0;
+1 -1
lib/fortify_kunit.c
··· 374 374 for (i = 0; i < ARRAY_SIZE(test_strs); i++) { \ 375 375 len = strlen(test_strs[i]); \ 376 376 KUNIT_EXPECT_EQ(test, __builtin_constant_p(len), 0); \ 377 - checker(len, kmemdup_array(test_strs[i], len, 1, gfp), \ 377 + checker(len, kmemdup_array(test_strs[i], 1, len, gfp), \ 378 378 kfree(p)); \ 379 379 checker(len, kmemdup(test_strs[i], len, gfp), \ 380 380 kfree(p)); \
+17 -3
lib/overflow_kunit.c
··· 1178 1178 s16 array[] __counted_by(counter); 1179 1179 }; 1180 1180 1181 + struct bar { 1182 + int a; 1183 + u32 counter; 1184 + s16 array[]; 1185 + }; 1186 + 1181 1187 static void DEFINE_FLEX_test(struct kunit *test) 1182 1188 { 1183 - DEFINE_RAW_FLEX(struct foo, two, array, 2); 1189 + /* Using _RAW_ on a __counted_by struct will initialize "counter" to zero */ 1190 + DEFINE_RAW_FLEX(struct foo, two_but_zero, array, 2); 1191 + #if __has_attribute(__counted_by__) 1192 + int expected_raw_size = sizeof(struct foo); 1193 + #else 1194 + int expected_raw_size = sizeof(struct foo) + 2 * sizeof(s16); 1195 + #endif 1196 + /* Without annotation, it will always be on-stack size. */ 1197 + DEFINE_RAW_FLEX(struct bar, two, array, 2); 1184 1198 DEFINE_FLEX(struct foo, eight, array, counter, 8); 1185 1199 DEFINE_FLEX(struct foo, empty, array, counter, 0); 1186 1200 1187 - KUNIT_EXPECT_EQ(test, __struct_size(two), 1188 - sizeof(struct foo) + sizeof(s16) + sizeof(s16)); 1201 + KUNIT_EXPECT_EQ(test, __struct_size(two_but_zero), expected_raw_size); 1202 + KUNIT_EXPECT_EQ(test, __struct_size(two), sizeof(struct bar) + 2 * sizeof(s16)); 1189 1203 KUNIT_EXPECT_EQ(test, __struct_size(eight), 24); 1190 1204 KUNIT_EXPECT_EQ(test, __struct_size(empty), sizeof(struct foo)); 1191 1205 }
+5 -26
mm/debug_vm_pgtable.c
··· 40 40 * Please refer Documentation/mm/arch_pgtable_helpers.rst for the semantics 41 41 * expectations that are being validated here. All future changes in here 42 42 * or the documentation need to be in sync. 43 - * 44 - * On s390 platform, the lower 4 bits are used to identify given page table 45 - * entry type. But these bits might affect the ability to clear entries with 46 - * pxx_clear() because of how dynamic page table folding works on s390. So 47 - * while loading up the entries do not change the lower 4 bits. It does not 48 - * have affect any other platform. Also avoid the 62nd bit on ppc64 that is 49 - * used to mark a pte entry. 50 43 */ 51 - #define S390_SKIP_MASK GENMASK(3, 0) 52 - #if __BITS_PER_LONG == 64 53 - #define PPC64_SKIP_MASK GENMASK(62, 62) 54 - #else 55 - #define PPC64_SKIP_MASK 0x0 56 - #endif 57 - #define ARCH_SKIP_MASK (S390_SKIP_MASK | PPC64_SKIP_MASK) 58 - #define RANDOM_ORVALUE (GENMASK(BITS_PER_LONG - 1, 0) & ~ARCH_SKIP_MASK) 59 44 #define RANDOM_NZVALUE GENMASK(7, 0) 60 45 61 46 struct pgtable_debug_args { ··· 496 511 return; 497 512 498 513 pr_debug("Validating PUD clear\n"); 499 - pud = __pud(pud_val(pud) | RANDOM_ORVALUE); 500 - WRITE_ONCE(*args->pudp, pud); 514 + WARN_ON(pud_none(pud)); 501 515 pud_clear(args->pudp); 502 516 pud = READ_ONCE(*args->pudp); 503 517 WARN_ON(!pud_none(pud)); ··· 532 548 return; 533 549 534 550 pr_debug("Validating P4D clear\n"); 535 - p4d = __p4d(p4d_val(p4d) | RANDOM_ORVALUE); 536 - WRITE_ONCE(*args->p4dp, p4d); 551 + WARN_ON(p4d_none(p4d)); 537 552 p4d_clear(args->p4dp); 538 553 p4d = READ_ONCE(*args->p4dp); 539 554 WARN_ON(!p4d_none(p4d)); ··· 565 582 return; 566 583 567 584 pr_debug("Validating PGD clear\n"); 568 - pgd = __pgd(pgd_val(pgd) | RANDOM_ORVALUE); 569 - WRITE_ONCE(*args->pgdp, pgd); 585 + WARN_ON(pgd_none(pgd)); 570 586 pgd_clear(args->pgdp); 571 587 pgd = READ_ONCE(*args->pgdp); 572 588 WARN_ON(!pgd_none(pgd)); ··· 616 634 if (WARN_ON(!args->ptep)) 617 635 return; 618 636 619 - 
#ifndef CONFIG_RISCV 620 - pte = __pte(pte_val(pte) | RANDOM_ORVALUE); 621 - #endif 622 637 set_pte_at(args->mm, args->vaddr, args->ptep, pte); 638 + WARN_ON(pte_none(pte)); 623 639 flush_dcache_page(page); 624 640 barrier(); 625 641 ptep_clear(args->mm, args->vaddr, args->ptep); ··· 630 650 pmd_t pmd = READ_ONCE(*args->pmdp); 631 651 632 652 pr_debug("Validating PMD clear\n"); 633 - pmd = __pmd(pmd_val(pmd) | RANDOM_ORVALUE); 634 - WRITE_ONCE(*args->pmdp, pmd); 653 + WARN_ON(pmd_none(pmd)); 635 654 pmd_clear(args->pmdp); 636 655 pmd = READ_ONCE(*args->pmdp); 637 656 WARN_ON(!pmd_none(pmd));
+17 -11
mm/huge_memory.c
··· 3009 3009 if (new_order >= folio_order(folio)) 3010 3010 return -EINVAL; 3011 3011 3012 - /* Cannot split anonymous THP to order-1 */ 3013 - if (new_order == 1 && folio_test_anon(folio)) { 3014 - VM_WARN_ONCE(1, "Cannot split to order-1 folio"); 3015 - return -EINVAL; 3016 - } 3017 - 3018 - if (new_order) { 3019 - /* Only swapping a whole PMD-mapped folio is supported */ 3020 - if (folio_test_swapcache(folio)) 3012 + if (folio_test_anon(folio)) { 3013 + /* order-1 is not supported for anonymous THP. */ 3014 + if (new_order == 1) { 3015 + VM_WARN_ONCE(1, "Cannot split to order-1 folio"); 3021 3016 return -EINVAL; 3017 + } 3018 + } else if (new_order) { 3022 3019 /* Split shmem folio to non-zero order not supported */ 3023 3020 if (shmem_mapping(folio->mapping)) { 3024 3021 VM_WARN_ONCE(1, 3025 3022 "Cannot split shmem folio to non-0 order"); 3026 3023 return -EINVAL; 3027 3024 } 3028 - /* No split if the file system does not support large folio */ 3029 - if (!mapping_large_folio_support(folio->mapping)) { 3025 + /* 3026 + * No split if the file system does not support large folio. 3027 + * Note that we might still have THPs in such mappings due to 3028 + * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping 3029 + * does not actually support large folios properly. 3030 + */ 3031 + if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && 3032 + !mapping_large_folio_support(folio->mapping)) { 3030 3033 VM_WARN_ONCE(1, 3031 3034 "Cannot split file folio to non-0 order"); 3032 3035 return -EINVAL; 3033 3036 } 3034 3037 } 3035 3038 3039 + /* Only swapping a whole PMD-mapped folio is supported */ 3040 + if (folio_test_swapcache(folio) && new_order) 3041 + return -EINVAL; 3036 3042 3037 3043 is_hzp = is_huge_zero_folio(folio); 3038 3044 if (is_hzp) {
-1
mm/internal.h
··· 588 588 extern void memblock_free_pages(struct page *page, unsigned long pfn, 589 589 unsigned int order); 590 590 extern void __free_pages_core(struct page *page, unsigned int order); 591 - extern void kernel_init_pages(struct page *page, int numpages); 592 591 593 592 /* 594 593 * This will have no effect, other than possibly generating a warning, if the
+1 -2
mm/memcontrol.c
··· 7745 7745 * @new: Replacement folio. 7746 7746 * 7747 7747 * Charge @new as a replacement folio for @old. @old will 7748 - * be uncharged upon free. This is only used by the page cache 7749 - * (in replace_page_cache_folio()). 7748 + * be uncharged upon free. 7750 7749 * 7751 7750 * Both folios must be locked, @new->mapping must be set up. 7752 7751 */
+10 -10
mm/memory.c
··· 1507 1507 if (unlikely(folio_mapcount(folio) < 0)) 1508 1508 print_bad_pte(vma, addr, ptent, page); 1509 1509 } 1510 - 1511 - if (want_init_mlocked_on_free() && folio_test_mlocked(folio) && 1512 - !delay_rmap && folio_test_anon(folio)) { 1513 - kernel_init_pages(page, folio_nr_pages(folio)); 1514 - } 1515 - 1516 1510 if (unlikely(__tlb_remove_folio_pages(tlb, page, nr, delay_rmap))) { 1517 1511 *force_flush = true; 1518 1512 *force_break = true; ··· 5100 5106 bool ignore_writable, bool pte_write_upgrade) 5101 5107 { 5102 5108 int nr = pte_pfn(fault_pte) - folio_pfn(folio); 5103 - unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start); 5104 - unsigned long end = min(vmf->address + (folio_nr_pages(folio) - nr) * PAGE_SIZE, vma->vm_end); 5105 - pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE; 5106 - unsigned long addr; 5109 + unsigned long start, end, addr = vmf->address; 5110 + unsigned long addr_start = addr - (nr << PAGE_SHIFT); 5111 + unsigned long pt_start = ALIGN_DOWN(addr, PMD_SIZE); 5112 + pte_t *start_ptep; 5113 + 5114 + /* Stay within the VMA and within the page table. */ 5115 + start = max3(addr_start, pt_start, vma->vm_start); 5116 + end = min3(addr_start + folio_size(folio), pt_start + PMD_SIZE, 5117 + vma->vm_end); 5118 + start_ptep = vmf->pte - ((addr - start) >> PAGE_SHIFT); 5107 5119 5108 5120 /* Restore all PTEs' mapping of the large folio */ 5109 5121 for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
+7 -1
mm/migrate.c
··· 1654 1654 1655 1655 /* 1656 1656 * The rare folio on the deferred split list should 1657 - * be split now. It should not count as a failure. 1657 + * be split now. It should not count as a failure: 1658 + * but increment nr_failed because, without doing so, 1659 + * migrate_pages() may report success with (split but 1660 + * unmigrated) pages still on its fromlist; whereas it 1661 + * always reports success when its fromlist is empty. 1662 + * 1658 1663 * Only check it without removing it from the list. 1659 1664 * Since the folio can be on deferred_split_scan() 1660 1665 * local list and removing it can cause the local list ··· 1674 1669 if (nr_pages > 2 && 1675 1670 !list_empty(&folio->_deferred_list)) { 1676 1671 if (try_split_folio(folio, split_folios) == 0) { 1672 + nr_failed++; 1677 1673 stats->nr_thp_split += is_thp; 1678 1674 stats->nr_split++; 1679 1675 continue;
+7 -36
mm/mm_init.c
··· 2523 2523 DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_FREE_DEFAULT_ON, init_on_free); 2524 2524 EXPORT_SYMBOL(init_on_free); 2525 2525 2526 - DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_MLOCKED_ON_FREE_DEFAULT_ON, init_mlocked_on_free); 2527 - EXPORT_SYMBOL(init_mlocked_on_free); 2528 - 2529 2526 static bool _init_on_alloc_enabled_early __read_mostly 2530 2527 = IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON); 2531 2528 static int __init early_init_on_alloc(char *buf) ··· 2539 2542 return kstrtobool(buf, &_init_on_free_enabled_early); 2540 2543 } 2541 2544 early_param("init_on_free", early_init_on_free); 2542 - 2543 - static bool _init_mlocked_on_free_enabled_early __read_mostly 2544 - = IS_ENABLED(CONFIG_INIT_MLOCKED_ON_FREE_DEFAULT_ON); 2545 - static int __init early_init_mlocked_on_free(char *buf) 2546 - { 2547 - return kstrtobool(buf, &_init_mlocked_on_free_enabled_early); 2548 - } 2549 - early_param("init_mlocked_on_free", early_init_mlocked_on_free); 2550 2545 2551 2546 DEFINE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled); 2552 2547 ··· 2567 2578 } 2568 2579 #endif 2569 2580 2570 - if ((_init_on_alloc_enabled_early || _init_on_free_enabled_early || 2571 - _init_mlocked_on_free_enabled_early) && 2581 + if ((_init_on_alloc_enabled_early || _init_on_free_enabled_early) && 2572 2582 page_poisoning_requested) { 2573 2583 pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, " 2574 - "will take precedence over init_on_alloc, init_on_free " 2575 - "and init_mlocked_on_free\n"); 2584 + "will take precedence over init_on_alloc and init_on_free\n"); 2576 2585 _init_on_alloc_enabled_early = false; 2577 2586 _init_on_free_enabled_early = false; 2578 - _init_mlocked_on_free_enabled_early = false; 2579 - } 2580 - 2581 - if (_init_mlocked_on_free_enabled_early && _init_on_free_enabled_early) { 2582 - pr_info("mem auto-init: init_on_free is on, " 2583 - "will take precedence over init_mlocked_on_free\n"); 2584 - _init_mlocked_on_free_enabled_early = false; 2585 2587 } 2586 2588 
2587 2589 if (_init_on_alloc_enabled_early) { ··· 2589 2609 static_branch_disable(&init_on_free); 2590 2610 } 2591 2611 2592 - if (_init_mlocked_on_free_enabled_early) { 2593 - want_check_pages = true; 2594 - static_branch_enable(&init_mlocked_on_free); 2595 - } else { 2596 - static_branch_disable(&init_mlocked_on_free); 2597 - } 2598 - 2599 - if (IS_ENABLED(CONFIG_KMSAN) && (_init_on_alloc_enabled_early || 2600 - _init_on_free_enabled_early || _init_mlocked_on_free_enabled_early)) 2601 - pr_info("mem auto-init: please make sure init_on_alloc, init_on_free and " 2602 - "init_mlocked_on_free are disabled when running KMSAN\n"); 2612 + if (IS_ENABLED(CONFIG_KMSAN) && 2613 + (_init_on_alloc_enabled_early || _init_on_free_enabled_early)) 2614 + pr_info("mem auto-init: please make sure init_on_alloc and init_on_free are disabled when running KMSAN\n"); 2603 2615 2604 2616 #ifdef CONFIG_DEBUG_PAGEALLOC 2605 2617 if (debug_pagealloc_enabled()) { ··· 2630 2658 else 2631 2659 stack = "off"; 2632 2660 2633 - pr_info("mem auto-init: stack:%s, heap alloc:%s, heap free:%s, mlocked free:%s\n", 2661 + pr_info("mem auto-init: stack:%s, heap alloc:%s, heap free:%s\n", 2634 2662 stack, want_init_on_alloc(GFP_KERNEL) ? "on" : "off", 2635 - want_init_on_free() ? "on" : "off", 2636 - want_init_mlocked_on_free() ? "on" : "off"); 2663 + want_init_on_free() ? "on" : "off"); 2637 2664 if (want_init_on_free()) 2638 2665 pr_info("mem auto-init: clearing system memory may take some time...\n"); 2639 2666 }
+1 -1
mm/page_alloc.c
··· 1016 1016 return page_kasan_tag(page) == KASAN_TAG_KERNEL; 1017 1017 } 1018 1018 1019 - void kernel_init_pages(struct page *page, int numpages) 1019 + static void kernel_init_pages(struct page *page, int numpages) 1020 1020 { 1021 1021 int i; 1022 1022
+10 -1
mm/page_table_check.c
··· 73 73 page = pfn_to_page(pfn); 74 74 page_ext = page_ext_get(page); 75 75 76 + if (!page_ext) 77 + return; 78 + 76 79 BUG_ON(PageSlab(page)); 77 80 anon = PageAnon(page); 78 81 ··· 113 110 page = pfn_to_page(pfn); 114 111 page_ext = page_ext_get(page); 115 112 113 + if (!page_ext) 114 + return; 115 + 116 116 BUG_ON(PageSlab(page)); 117 117 anon = PageAnon(page); 118 118 ··· 146 140 BUG_ON(PageSlab(page)); 147 141 148 142 page_ext = page_ext_get(page); 149 - BUG_ON(!page_ext); 143 + 144 + if (!page_ext) 145 + return; 146 + 150 147 for (i = 0; i < (1ul << order); i++) { 151 148 struct page_table_check *ptc = get_page_table_check(page_ext); 152 149
+1 -1
mm/shmem.c
··· 1786 1786 xa_lock_irq(&swap_mapping->i_pages); 1787 1787 error = shmem_replace_entry(swap_mapping, swap_index, old, new); 1788 1788 if (!error) { 1789 - mem_cgroup_migrate(old, new); 1789 + mem_cgroup_replace_folio(old, new); 1790 1790 __lruvec_stat_mod_folio(new, NR_FILE_PAGES, 1); 1791 1791 __lruvec_stat_mod_folio(new, NR_SHMEM, 1); 1792 1792 __lruvec_stat_mod_folio(old, NR_FILE_PAGES, -1);
+2 -2
mm/util.c
··· 139 139 * kmemdup_array - duplicate a given array. 140 140 * 141 141 * @src: array to duplicate. 142 - * @element_size: size of each element of array. 143 142 * @count: number of elements to duplicate from array. 143 + * @element_size: size of each element of array. 144 144 * @gfp: GFP mask to use. 145 145 * 146 146 * Return: duplicated array of @src or %NULL in case of error, 147 147 * result is physically contiguous. Use kfree() to free. 148 148 */ 149 - void *kmemdup_array(const void *src, size_t element_size, size_t count, gfp_t gfp) 149 + void *kmemdup_array(const void *src, size_t count, size_t element_size, gfp_t gfp) 150 150 { 151 151 return kmemdup(src, size_mul(element_size, count), gfp); 152 152 }
+5
net/core/filter.c
··· 1665 1665 static inline int __bpf_try_make_writable(struct sk_buff *skb, 1666 1666 unsigned int write_len) 1667 1667 { 1668 + #ifdef CONFIG_DEBUG_NET 1669 + /* Avoid a splat in pskb_may_pull_reason() */ 1670 + if (write_len > INT_MAX) 1671 + return -EINVAL; 1672 + #endif 1668 1673 return skb_ensure_writable(skb, write_len); 1669 1674 } 1670 1675
+7 -2
net/core/net_namespace.c
··· 693 693 * get_net_ns - increment the refcount of the network namespace 694 694 * @ns: common namespace (net) 695 695 * 696 - * Returns the net's common namespace. 696 + * Returns the net's common namespace or ERR_PTR() if ref is zero. 697 697 */ 698 698 struct ns_common *get_net_ns(struct ns_common *ns) 699 699 { 700 - return &get_net(container_of(ns, struct net, ns))->ns; 700 + struct net *net; 701 + 702 + net = maybe_get_net(container_of(ns, struct net, ns)); 703 + if (net) 704 + return &net->ns; 705 + return ERR_PTR(-EINVAL); 701 706 } 702 707 EXPORT_SYMBOL_GPL(get_net_ns); 703 708
+8 -8
net/core/netdev-genl.c
··· 59 59 nla_put_u64_64bit(rsp, NETDEV_A_DEV_XDP_RX_METADATA_FEATURES, 60 60 xdp_rx_meta, NETDEV_A_DEV_PAD) || 61 61 nla_put_u64_64bit(rsp, NETDEV_A_DEV_XSK_FEATURES, 62 - xsk_features, NETDEV_A_DEV_PAD)) { 63 - genlmsg_cancel(rsp, hdr); 64 - return -EINVAL; 65 - } 62 + xsk_features, NETDEV_A_DEV_PAD)) 63 + goto err_cancel_msg; 66 64 67 65 if (netdev->xdp_features & NETDEV_XDP_ACT_XSK_ZEROCOPY) { 68 66 if (nla_put_u32(rsp, NETDEV_A_DEV_XDP_ZC_MAX_SEGS, 69 - netdev->xdp_zc_max_segs)) { 70 - genlmsg_cancel(rsp, hdr); 71 - return -EINVAL; 72 - } 67 + netdev->xdp_zc_max_segs)) 68 + goto err_cancel_msg; 73 69 } 74 70 75 71 genlmsg_end(rsp, hdr); 76 72 77 73 return 0; 74 + 75 + err_cancel_msg: 76 + genlmsg_cancel(rsp, hdr); 77 + return -EMSGSIZE; 78 78 } 79 79 80 80 static void
+3
net/core/sock.c
··· 3751 3751 3752 3752 sk->sk_prot->unhash(sk); 3753 3753 3754 + if (sk->sk_socket) 3755 + sk->sk_socket->sk = NULL; 3756 + 3754 3757 /* 3755 3758 * In this point socket cannot receive new packets, but it is possible 3756 3759 * that some packets are in flight because some CPU runs receiver and
+54 -21
net/ipv4/cipso_ipv4.c
··· 1810 1810 return CIPSO_V4_HDR_LEN + ret_val; 1811 1811 } 1812 1812 1813 + static int cipso_v4_get_actual_opt_len(const unsigned char *data, int len) 1814 + { 1815 + int iter = 0, optlen = 0; 1816 + 1817 + /* determining the new total option length is tricky because of 1818 + * the padding necessary, the only thing i can think to do at 1819 + * this point is walk the options one-by-one, skipping the 1820 + * padding at the end to determine the actual option size and 1821 + * from there we can determine the new total option length 1822 + */ 1823 + while (iter < len) { 1824 + if (data[iter] == IPOPT_END) { 1825 + break; 1826 + } else if (data[iter] == IPOPT_NOP) { 1827 + iter++; 1828 + } else { 1829 + iter += data[iter + 1]; 1830 + optlen = iter; 1831 + } 1832 + } 1833 + return optlen; 1834 + } 1835 + 1813 1836 /** 1814 1837 * cipso_v4_sock_setattr - Add a CIPSO option to a socket 1815 1838 * @sk: the socket ··· 2009 1986 u8 cipso_len; 2010 1987 u8 cipso_off; 2011 1988 unsigned char *cipso_ptr; 2012 - int iter; 2013 1989 int optlen_new; 2014 1990 2015 1991 cipso_off = opt->opt.cipso - sizeof(struct iphdr); ··· 2028 2006 memmove(cipso_ptr, cipso_ptr + cipso_len, 2029 2007 opt->opt.optlen - cipso_off - cipso_len); 2030 2008 2031 - /* determining the new total option length is tricky because of 2032 - * the padding necessary, the only thing i can think to do at 2033 - * this point is walk the options one-by-one, skipping the 2034 - * padding at the end to determine the actual option size and 2035 - * from there we can determine the new total option length */ 2036 - iter = 0; 2037 - optlen_new = 0; 2038 - while (iter < opt->opt.optlen) 2039 - if (opt->opt.__data[iter] != IPOPT_NOP) { 2040 - iter += opt->opt.__data[iter + 1]; 2041 - optlen_new = iter; 2042 - } else 2043 - iter++; 2009 + optlen_new = cipso_v4_get_actual_opt_len(opt->opt.__data, 2010 + opt->opt.optlen); 2044 2011 hdr_delta = opt->opt.optlen; 2045 2012 opt->opt.optlen = (optlen_new + 3) & ~3; 2046 2013 
hdr_delta -= opt->opt.optlen; ··· 2249 2238 */ 2250 2239 int cipso_v4_skbuff_delattr(struct sk_buff *skb) 2251 2240 { 2252 - int ret_val; 2241 + int ret_val, cipso_len, hdr_len_actual, new_hdr_len_actual, new_hdr_len, 2242 + hdr_len_delta; 2253 2243 struct iphdr *iph; 2254 2244 struct ip_options *opt = &IPCB(skb)->opt; 2255 2245 unsigned char *cipso_ptr; ··· 2263 2251 if (ret_val < 0) 2264 2252 return ret_val; 2265 2253 2266 - /* the easiest thing to do is just replace the cipso option with noop 2267 - * options since we don't change the size of the packet, although we 2268 - * still need to recalculate the checksum */ 2269 - 2270 2254 iph = ip_hdr(skb); 2271 2255 cipso_ptr = (unsigned char *)iph + opt->cipso; 2272 - memset(cipso_ptr, IPOPT_NOOP, cipso_ptr[1]); 2256 + cipso_len = cipso_ptr[1]; 2257 + 2258 + hdr_len_actual = sizeof(struct iphdr) + 2259 + cipso_v4_get_actual_opt_len((unsigned char *)(iph + 1), 2260 + opt->optlen); 2261 + new_hdr_len_actual = hdr_len_actual - cipso_len; 2262 + new_hdr_len = (new_hdr_len_actual + 3) & ~3; 2263 + hdr_len_delta = (iph->ihl << 2) - new_hdr_len; 2264 + 2265 + /* 1. shift any options after CIPSO to the left */ 2266 + memmove(cipso_ptr, cipso_ptr + cipso_len, 2267 + new_hdr_len_actual - opt->cipso); 2268 + /* 2. move the whole IP header to its new place */ 2269 + memmove((unsigned char *)iph + hdr_len_delta, iph, new_hdr_len_actual); 2270 + /* 3. adjust the skb layout */ 2271 + skb_pull(skb, hdr_len_delta); 2272 + skb_reset_network_header(skb); 2273 + iph = ip_hdr(skb); 2274 + /* 4. re-fill new padding with IPOPT_END (may now be longer) */ 2275 + memset((unsigned char *)iph + new_hdr_len_actual, IPOPT_END, 2276 + new_hdr_len - new_hdr_len_actual); 2277 + 2278 + opt->optlen -= hdr_len_delta; 2273 2279 opt->cipso = 0; 2274 2280 opt->is_changed = 1; 2275 - 2281 + if (hdr_len_delta != 0) { 2282 + iph->ihl = new_hdr_len >> 2; 2283 + iph_set_totlen(iph, skb->len); 2284 + } 2276 2285 ip_send_check(iph); 2277 2286 2278 2287 return 0;
+4 -2
net/ipv4/tcp_ao.c
··· 1972 1972 first = true; 1973 1973 } 1974 1974 1975 - if (cmd.ao_required && tcp_ao_required_verify(sk)) 1976 - return -EKEYREJECTED; 1975 + if (cmd.ao_required && tcp_ao_required_verify(sk)) { 1976 + err = -EKEYREJECTED; 1977 + goto out; 1978 + } 1977 1979 1978 1980 /* For sockets in TCP_CLOSED it's possible set keys that aren't 1979 1981 * matching the future peer (address/port/VRF/etc),
+1
net/ipv4/tcp_input.c
··· 6305 6305 skb_rbtree_walk_from(data) 6306 6306 tcp_mark_skb_lost(sk, data); 6307 6307 tcp_xmit_retransmit_queue(sk); 6308 + tp->retrans_stamp = 0; 6308 6309 NET_INC_STATS(sock_net(sk), 6309 6310 LINUX_MIB_TCPFASTOPENACTIVEFAIL); 6310 6311 return true;
+2 -1
net/ipv6/ip6_fib.c
··· 2514 2514 goto out_kmem_cache_create; 2515 2515 2516 2516 ret = rtnl_register_module(THIS_MODULE, PF_INET6, RTM_GETROUTE, NULL, 2517 - inet6_dump_fib, RTNL_FLAG_DUMP_UNLOCKED); 2517 + inet6_dump_fib, RTNL_FLAG_DUMP_UNLOCKED | 2518 + RTNL_FLAG_DUMP_SPLIT_NLM_DONE); 2518 2519 if (ret) 2519 2520 goto out_unregister_subsys; 2520 2521
+3 -1
net/ipv6/route.c
··· 636 636 rcu_read_lock(); 637 637 last_probe = READ_ONCE(fib6_nh->last_probe); 638 638 idev = __in6_dev_get(dev); 639 + if (!idev) 640 + goto out; 639 641 neigh = __ipv6_neigh_lookup_noref(dev, nh_gw); 640 642 if (neigh) { 641 643 if (READ_ONCE(neigh->nud_state) & NUD_VALID) ··· 3603 3601 if (!dev) 3604 3602 goto out; 3605 3603 3606 - if (idev->cnf.disable_ipv6) { 3604 + if (!idev || idev->cnf.disable_ipv6) { 3607 3605 NL_SET_ERR_MSG(extack, "IPv6 is disabled on nexthop device"); 3608 3606 err = -EACCES; 3609 3607 goto out;
+4 -4
net/ipv6/seg6_local.c
··· 941 941 942 942 if (static_branch_unlikely(&nf_hooks_lwtunnel_enabled)) 943 943 return NF_HOOK(NFPROTO_IPV6, NF_INET_PRE_ROUTING, 944 - dev_net(skb->dev), NULL, skb, NULL, 945 - skb_dst(skb)->dev, input_action_end_dx6_finish); 944 + dev_net(skb->dev), NULL, skb, skb->dev, 945 + NULL, input_action_end_dx6_finish); 946 946 947 947 return input_action_end_dx6_finish(dev_net(skb->dev), NULL, skb); 948 948 drop: ··· 991 991 992 992 if (static_branch_unlikely(&nf_hooks_lwtunnel_enabled)) 993 993 return NF_HOOK(NFPROTO_IPV4, NF_INET_PRE_ROUTING, 994 - dev_net(skb->dev), NULL, skb, NULL, 995 - skb_dst(skb)->dev, input_action_end_dx4_finish); 994 + dev_net(skb->dev), NULL, skb, skb->dev, 995 + NULL, input_action_end_dx4_finish); 996 996 997 997 return input_action_end_dx4_finish(dev_net(skb->dev), NULL, skb); 998 998 drop:
+7 -1
net/ipv6/xfrm6_policy.c
··· 56 56 { 57 57 struct dst_entry *dst; 58 58 struct net_device *dev; 59 + struct inet6_dev *idev; 59 60 60 61 dst = xfrm6_dst_lookup(net, 0, oif, NULL, daddr, mark); 61 62 if (IS_ERR(dst)) 62 63 return -EHOSTUNREACH; 63 64 64 - dev = ip6_dst_idev(dst)->dev; 65 + idev = ip6_dst_idev(dst); 66 + if (!idev) { 67 + dst_release(dst); 68 + return -EHOSTUNREACH; 69 + } 70 + dev = idev->dev; 65 71 ipv6_dev_get_saddr(dev_net(dev), dev, &daddr->in6, 0, &saddr->in6); 66 72 dst_release(dst); 67 73 return 0;
+17
net/mac80211/driver-ops.c
··· 311 311 might_sleep(); 312 312 lockdep_assert_wiphy(local->hw.wiphy); 313 313 314 + /* 315 + * We should perhaps push emulate chanctx down and only 316 + * make it call ->config() when the chanctx is actually 317 + * assigned here (and unassigned below), but that's yet 318 + * another change to all drivers to add assign/unassign 319 + * emulation callbacks. Maybe later. 320 + */ 321 + if (sdata->vif.type == NL80211_IFTYPE_MONITOR && 322 + local->emulate_chanctx && 323 + !ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF)) 324 + return 0; 325 + 314 326 if (!check_sdata_in_driver(sdata)) 315 327 return -EIO; 316 328 ··· 349 337 { 350 338 might_sleep(); 351 339 lockdep_assert_wiphy(local->hw.wiphy); 340 + 341 + if (sdata->vif.type == NL80211_IFTYPE_MONITOR && 342 + local->emulate_chanctx && 343 + !ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF)) 344 + return; 352 345 353 346 if (!check_sdata_in_driver(sdata)) 354 347 return;
+10 -12
net/mac80211/iface.c
··· 686 686 ieee80211_del_virtual_monitor(local); 687 687 688 688 ieee80211_recalc_idle(local); 689 + ieee80211_recalc_offload(local); 689 690 690 691 if (!(sdata->u.mntr.flags & MONITOR_FLAG_ACTIVE)) 691 692 break; ··· 1122 1121 struct ieee80211_sub_if_data *sdata; 1123 1122 int ret; 1124 1123 1125 - if (!ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF)) 1126 - return 0; 1127 - 1128 1124 ASSERT_RTNL(); 1129 1125 lockdep_assert_wiphy(local->hw.wiphy); 1130 1126 ··· 1143 1145 1144 1146 ieee80211_set_default_queues(sdata); 1145 1147 1146 - ret = drv_add_interface(local, sdata); 1147 - if (WARN_ON(ret)) { 1148 - /* ok .. stupid driver, it asked for this! */ 1149 - kfree(sdata); 1150 - return ret; 1148 + if (ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF)) { 1149 + ret = drv_add_interface(local, sdata); 1150 + if (WARN_ON(ret)) { 1151 + /* ok .. stupid driver, it asked for this! */ 1152 + kfree(sdata); 1153 + return ret; 1154 + } 1151 1155 } 1152 1156 1153 1157 set_bit(SDATA_STATE_RUNNING, &sdata->state); ··· 1187 1187 { 1188 1188 struct ieee80211_sub_if_data *sdata; 1189 1189 1190 - if (!ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF)) 1191 - return; 1192 - 1193 1190 ASSERT_RTNL(); 1194 1191 lockdep_assert_wiphy(local->hw.wiphy); 1195 1192 ··· 1206 1209 1207 1210 ieee80211_link_release_channel(&sdata->deflink); 1208 1211 1209 - drv_remove_interface(local, sdata); 1212 + if (ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF)) 1213 + drv_remove_interface(local, sdata); 1210 1214 1211 1215 kfree(sdata); 1212 1216 }
+9 -8
net/mac80211/scan.c
··· 358 358 struct cfg80211_scan_request *req; 359 359 struct cfg80211_chan_def chandef; 360 360 u8 bands_used = 0; 361 - int i, ielen, n_chans; 361 + int i, ielen; 362 + u32 *n_chans; 362 363 u32 flags = 0; 363 364 364 365 req = rcu_dereference_protected(local->scan_req, ··· 369 368 return false; 370 369 371 370 if (ieee80211_hw_check(&local->hw, SINGLE_SCAN_ON_ALL_BANDS)) { 371 + local->hw_scan_req->req.n_channels = req->n_channels; 372 + 372 373 for (i = 0; i < req->n_channels; i++) { 373 374 local->hw_scan_req->req.channels[i] = req->channels[i]; 374 375 bands_used |= BIT(req->channels[i]->band); 375 376 } 376 - 377 - n_chans = req->n_channels; 378 377 } else { 379 378 do { 380 379 if (local->hw_scan_band == NUM_NL80211_BANDS) 381 380 return false; 382 381 383 - n_chans = 0; 382 + n_chans = &local->hw_scan_req->req.n_channels; 383 + *n_chans = 0; 384 384 385 385 for (i = 0; i < req->n_channels; i++) { 386 386 if (req->channels[i]->band != 387 387 local->hw_scan_band) 388 388 continue; 389 - local->hw_scan_req->req.channels[n_chans] = 389 + local->hw_scan_req->req.channels[(*n_chans)++] = 390 390 req->channels[i]; 391 - n_chans++; 391 + 392 392 bands_used |= BIT(req->channels[i]->band); 393 393 } 394 394 395 395 local->hw_scan_band++; 396 - } while (!n_chans); 396 + } while (!*n_chans); 397 397 } 398 398 399 - local->hw_scan_req->req.n_channels = n_chans; 400 399 ieee80211_prepare_scan_chandef(&chandef); 401 400 402 401 if (req->flags & NL80211_SCAN_FLAG_MIN_PREQ_CONTENT)
+1 -1
net/mac80211/util.c
··· 1843 1843 1844 1844 /* add interfaces */ 1845 1845 sdata = wiphy_dereference(local->hw.wiphy, local->monitor_sdata); 1846 - if (sdata) { 1846 + if (sdata && ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF)) { 1847 1847 /* in HW restart it exists already */ 1848 1848 WARN_ON(local->resuming); 1849 1849 res = drv_add_interface(local, sdata);
+11 -2
net/netfilter/core.c
··· 815 815 if (ret < 0) 816 816 goto err; 817 817 818 + #ifdef CONFIG_LWTUNNEL 819 + ret = netfilter_lwtunnel_init(); 820 + if (ret < 0) 821 + goto err_lwtunnel_pernet; 822 + #endif 818 823 ret = netfilter_log_init(); 819 824 if (ret < 0) 820 - goto err_pernet; 825 + goto err_log_pernet; 821 826 822 827 return 0; 823 - err_pernet: 828 + err_log_pernet: 829 + #ifdef CONFIG_LWTUNNEL 830 + netfilter_lwtunnel_fini(); 831 + err_lwtunnel_pernet: 832 + #endif 824 833 unregister_pernet_subsys(&netfilter_net_ops); 825 834 err: 826 835 return ret;
+6 -5
net/netfilter/ipset/ip_set_core.c
··· 53 53 MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_IPSET); 54 54 55 55 /* When the nfnl mutex or ip_set_ref_lock is held: */ 56 - #define ip_set_dereference(p) \ 57 - rcu_dereference_protected(p, \ 56 + #define ip_set_dereference(inst) \ 57 + rcu_dereference_protected((inst)->ip_set_list, \ 58 58 lockdep_nfnl_is_held(NFNL_SUBSYS_IPSET) || \ 59 - lockdep_is_held(&ip_set_ref_lock)) 59 + lockdep_is_held(&ip_set_ref_lock) || \ 60 + (inst)->is_deleted) 60 61 #define ip_set(inst, id) \ 61 - ip_set_dereference((inst)->ip_set_list)[id] 62 + ip_set_dereference(inst)[id] 62 63 #define ip_set_ref_netlink(inst,id) \ 63 64 rcu_dereference_raw((inst)->ip_set_list)[id] 64 65 #define ip_set_dereference_nfnl(p) \ ··· 1134 1133 if (!list) 1135 1134 goto cleanup; 1136 1135 /* nfnl mutex is held, both lists are valid */ 1137 - tmp = ip_set_dereference(inst->ip_set_list); 1136 + tmp = ip_set_dereference(inst); 1138 1137 memcpy(list, tmp, sizeof(struct ip_set *) * inst->ip_set_max); 1139 1138 rcu_assign_pointer(inst->ip_set_list, list); 1140 1139 /* Make sure all current packets have passed through */
-15
net/netfilter/nf_conntrack_standalone.c
··· 22 22 #include <net/netfilter/nf_conntrack_acct.h> 23 23 #include <net/netfilter/nf_conntrack_zones.h> 24 24 #include <net/netfilter/nf_conntrack_timestamp.h> 25 - #ifdef CONFIG_LWTUNNEL 26 - #include <net/netfilter/nf_hooks_lwtunnel.h> 27 - #endif 28 25 #include <linux/rculist_nulls.h> 29 26 30 27 static bool enable_hooks __read_mostly; ··· 609 612 NF_SYSCTL_CT_PROTO_TIMEOUT_GRE, 610 613 NF_SYSCTL_CT_PROTO_TIMEOUT_GRE_STREAM, 611 614 #endif 612 - #ifdef CONFIG_LWTUNNEL 613 - NF_SYSCTL_CT_LWTUNNEL, 614 - #endif 615 615 616 616 NF_SYSCTL_CT_LAST_SYSCTL, 617 617 }; ··· 938 944 .maxlen = sizeof(unsigned int), 939 945 .mode = 0644, 940 946 .proc_handler = proc_dointvec_jiffies, 941 - }, 942 - #endif 943 - #ifdef CONFIG_LWTUNNEL 944 - [NF_SYSCTL_CT_LWTUNNEL] = { 945 - .procname = "nf_hooks_lwtunnel", 946 - .data = NULL, 947 - .maxlen = sizeof(int), 948 - .mode = 0644, 949 - .proc_handler = nf_hooks_lwtunnel_sysctl_handler, 950 947 }, 951 948 #endif 952 949 };
+67
net/netfilter/nf_hooks_lwtunnel.c
··· 3 3 #include <linux/sysctl.h> 4 4 #include <net/lwtunnel.h> 5 5 #include <net/netfilter/nf_hooks_lwtunnel.h> 6 + #include <linux/netfilter.h> 7 + 8 + #include "nf_internals.h" 6 9 7 10 static inline int nf_hooks_lwtunnel_get(void) 8 11 { ··· 53 50 return ret; 54 51 } 55 52 EXPORT_SYMBOL_GPL(nf_hooks_lwtunnel_sysctl_handler); 53 + 54 + static struct ctl_table nf_lwtunnel_sysctl_table[] = { 55 + { 56 + .procname = "nf_hooks_lwtunnel", 57 + .data = NULL, 58 + .maxlen = sizeof(int), 59 + .mode = 0644, 60 + .proc_handler = nf_hooks_lwtunnel_sysctl_handler, 61 + }, 62 + }; 63 + 64 + static int __net_init nf_lwtunnel_net_init(struct net *net) 65 + { 66 + struct ctl_table_header *hdr; 67 + struct ctl_table *table; 68 + 69 + table = nf_lwtunnel_sysctl_table; 70 + if (!net_eq(net, &init_net)) { 71 + table = kmemdup(nf_lwtunnel_sysctl_table, 72 + sizeof(nf_lwtunnel_sysctl_table), 73 + GFP_KERNEL); 74 + if (!table) 75 + goto err_alloc; 76 + } 77 + 78 + hdr = register_net_sysctl_sz(net, "net/netfilter", table, 79 + ARRAY_SIZE(nf_lwtunnel_sysctl_table)); 80 + if (!hdr) 81 + goto err_reg; 82 + 83 + net->nf.nf_lwtnl_dir_header = hdr; 84 + 85 + return 0; 86 + err_reg: 87 + if (!net_eq(net, &init_net)) 88 + kfree(table); 89 + err_alloc: 90 + return -ENOMEM; 91 + } 92 + 93 + static void __net_exit nf_lwtunnel_net_exit(struct net *net) 94 + { 95 + const struct ctl_table *table; 96 + 97 + table = net->nf.nf_lwtnl_dir_header->ctl_table_arg; 98 + unregister_net_sysctl_table(net->nf.nf_lwtnl_dir_header); 99 + if (!net_eq(net, &init_net)) 100 + kfree(table); 101 + } 102 + 103 + static struct pernet_operations nf_lwtunnel_net_ops = { 104 + .init = nf_lwtunnel_net_init, 105 + .exit = nf_lwtunnel_net_exit, 106 + }; 107 + 108 + int __init netfilter_lwtunnel_init(void) 109 + { 110 + return register_pernet_subsys(&nf_lwtunnel_net_ops); 111 + } 112 + 113 + void netfilter_lwtunnel_fini(void) 114 + { 115 + unregister_pernet_subsys(&nf_lwtunnel_net_ops); 116 + } 56 117 #endif /* CONFIG_SYSCTL */
+6
net/netfilter/nf_internals.h
··· 29 29 /* nf_log.c */ 30 30 int __init netfilter_log_init(void); 31 31 32 + #ifdef CONFIG_LWTUNNEL 33 + /* nf_hooks_lwtunnel.c */ 34 + int __init netfilter_lwtunnel_init(void); 35 + void netfilter_lwtunnel_fini(void); 36 + #endif 37 + 32 38 /* core.c */ 33 39 void nf_hook_entries_delete_raw(struct nf_hook_entries __rcu **pp, 34 40 const struct nf_hook_ops *reg);
+2 -1
net/netrom/nr_timer.c
··· 121 121 is accepted() it isn't 'dead' so doesn't get removed. */ 122 122 if (sock_flag(sk, SOCK_DESTROY) || 123 123 (sk->sk_state == TCP_LISTEN && sock_flag(sk, SOCK_DEAD))) { 124 - sock_hold(sk); 124 + if (sk->sk_state == TCP_LISTEN) 125 + sock_hold(sk); 125 126 bh_unlock_sock(sk); 126 127 nr_destroy_socket(sk); 127 128 goto out;
+1 -2
net/sched/act_api.c
··· 830 830 u32 max; 831 831 832 832 if (*index) { 833 - again: 834 833 rcu_read_lock(); 835 834 p = idr_find(&idrinfo->action_idr, *index); 836 835 ··· 838 839 * index but did not assign the pointer yet. 839 840 */ 840 841 rcu_read_unlock(); 841 - goto again; 842 + return -EAGAIN; 842 843 } 843 844 844 845 if (!p) {
+11 -5
net/sched/act_ct.c
··· 41 41 static struct rhashtable zones_ht; 42 42 static DEFINE_MUTEX(zones_mutex); 43 43 44 + struct zones_ht_key { 45 + struct net *net; 46 + u16 zone; 47 + }; 48 + 44 49 struct tcf_ct_flow_table { 45 50 struct rhash_head node; /* In zones tables */ 46 51 47 52 struct rcu_work rwork; 48 53 struct nf_flowtable nf_ft; 49 54 refcount_t ref; 50 - u16 zone; 55 + struct zones_ht_key key; 51 56 52 57 bool dying; 53 58 }; 54 59 55 60 static const struct rhashtable_params zones_params = { 56 61 .head_offset = offsetof(struct tcf_ct_flow_table, node), 57 - .key_offset = offsetof(struct tcf_ct_flow_table, zone), 58 - .key_len = sizeof_field(struct tcf_ct_flow_table, zone), 62 + .key_offset = offsetof(struct tcf_ct_flow_table, key), 63 + .key_len = sizeof_field(struct tcf_ct_flow_table, key), 59 64 .automatic_shrinking = true, 60 65 }; 61 66 ··· 321 316 322 317 static int tcf_ct_flow_table_get(struct net *net, struct tcf_ct_params *params) 323 318 { 319 + struct zones_ht_key key = { .net = net, .zone = params->zone }; 324 320 struct tcf_ct_flow_table *ct_ft; 325 321 int err = -ENOMEM; 326 322 327 323 mutex_lock(&zones_mutex); 328 - ct_ft = rhashtable_lookup_fast(&zones_ht, &params->zone, zones_params); 324 + ct_ft = rhashtable_lookup_fast(&zones_ht, &key, zones_params); 329 325 if (ct_ft && refcount_inc_not_zero(&ct_ft->ref)) 330 326 goto out_unlock; 331 327 ··· 335 329 goto err_alloc; 336 330 refcount_set(&ct_ft->ref, 1); 337 331 338 - ct_ft->zone = params->zone; 332 + ct_ft->key = key; 339 333 err = rhashtable_insert_fast(&zones_ht, &ct_ft->node, zones_params); 340 334 if (err) 341 335 goto err_insert;
+1
net/tipc/node.c
··· 2105 2105 } else { 2106 2106 n = tipc_node_find_by_id(net, ehdr->id); 2107 2107 } 2108 + skb_dst_force(skb); 2108 2109 tipc_crypto_rcv(net, (n) ? n->crypto_rx : NULL, &skb, b); 2109 2110 if (!skb) 2110 2111 return;
+9 -3
net/wireless/scan.c
··· 3416 3416 wiphy = &rdev->wiphy; 3417 3417 3418 3418 /* Determine number of channels, needed to allocate creq */ 3419 - if (wreq && wreq->num_channels) 3419 + if (wreq && wreq->num_channels) { 3420 + /* Passed from userspace so should be checked */ 3421 + if (unlikely(wreq->num_channels > IW_MAX_FREQUENCIES)) 3422 + return -EINVAL; 3420 3423 n_channels = wreq->num_channels; 3421 - else 3424 + } else { 3422 3425 n_channels = ieee80211_get_num_supported_channels(wiphy); 3426 + } 3423 3427 3424 3428 creq = kzalloc(sizeof(*creq) + sizeof(struct cfg80211_ssid) + 3425 3429 n_channels * sizeof(void *), ··· 3497 3493 memcpy(creq->ssids[0].ssid, wreq->essid, wreq->essid_len); 3498 3494 creq->ssids[0].ssid_len = wreq->essid_len; 3499 3495 } 3500 - if (wreq->scan_type == IW_SCAN_TYPE_PASSIVE) 3496 + if (wreq->scan_type == IW_SCAN_TYPE_PASSIVE) { 3497 + creq->ssids = NULL; 3501 3498 creq->n_ssids = 0; 3499 + } 3502 3500 } 3503 3501 3504 3502 for (i = 0; i < NUM_NL80211_BANDS; i++)
-15
security/Kconfig.hardening
··· 255 255 touching "cold" memory areas. Most cases see 3-5% impact. Some 256 256 synthetic workloads have measured as high as 8%. 257 257 258 - config INIT_MLOCKED_ON_FREE_DEFAULT_ON 259 - bool "Enable mlocked memory zeroing on free" 260 - depends on !KMSAN 261 - help 262 - This config has the effect of setting "init_mlocked_on_free=1" 263 - on the kernel command line. If it is enabled, all mlocked process 264 - memory is zeroed when freed. This restriction to mlocked memory 265 - improves performance over "init_on_free" but can still be used to 266 - protect confidential data like key material from content exposures 267 - to other processes, as well as live forensics and cold boot attacks. 268 - Any non-mlocked memory is not cleared before it is reassigned. This 269 - configuration can be overwritten by setting "init_mlocked_on_free=0" 270 - on the command line. The "init_on_free" boot option takes 271 - precedence over "init_mlocked_on_free". 272 - 273 258 config CC_HAS_ZERO_CALL_USED_REGS 274 259 def_bool $(cc-option,-fzero-call-used-regs=used-gpr) 275 260 # https://github.com/ClangBuiltLinux/linux/issues/1766
+3 -3
security/apparmor/audit.c
··· 217 217 } 218 218 } 219 219 220 - int aa_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule) 220 + int aa_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule, gfp_t gfp) 221 221 { 222 222 struct aa_audit_rule *rule; 223 223 ··· 230 230 return -EINVAL; 231 231 } 232 232 233 - rule = kzalloc(sizeof(struct aa_audit_rule), GFP_KERNEL); 233 + rule = kzalloc(sizeof(struct aa_audit_rule), gfp); 234 234 235 235 if (!rule) 236 236 return -ENOMEM; 237 237 238 238 /* Currently rules are treated as coming from the root ns */ 239 239 rule->label = aa_label_parse(&root_ns->unconfined->label, rulestr, 240 - GFP_KERNEL, true, false); 240 + gfp, true, false); 241 241 if (IS_ERR(rule->label)) { 242 242 int err = PTR_ERR(rule->label); 243 243 aa_audit_rule_free(rule);
+1 -1
security/apparmor/include/audit.h
··· 200 200 } 201 201 202 202 void aa_audit_rule_free(void *vrule); 203 - int aa_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule); 203 + int aa_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule, gfp_t gfp); 204 204 int aa_audit_rule_known(struct audit_krule *rule); 205 205 int aa_audit_rule_match(u32 sid, u32 field, u32 op, void *vrule); 206 206
+1 -1
security/integrity/ima/ima.h
··· 546 546 #else 547 547 548 548 static inline int ima_filter_rule_init(u32 field, u32 op, char *rulestr, 549 - void **lsmrule) 549 + void **lsmrule, gfp_t gfp) 550 550 { 551 551 return -EINVAL; 552 552 }
+9 -6
security/integrity/ima/ima_policy.c
··· 401 401 kfree(entry); 402 402 } 403 403 404 - static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry) 404 + static struct ima_rule_entry *ima_lsm_copy_rule(struct ima_rule_entry *entry, 405 + gfp_t gfp) 405 406 { 406 407 struct ima_rule_entry *nentry; 407 408 int i; ··· 411 410 * Immutable elements are copied over as pointers and data; only 412 411 * lsm rules can change 413 412 */ 414 - nentry = kmemdup(entry, sizeof(*nentry), GFP_KERNEL); 413 + nentry = kmemdup(entry, sizeof(*nentry), gfp); 415 414 if (!nentry) 416 415 return NULL; 417 416 ··· 426 425 427 426 ima_filter_rule_init(nentry->lsm[i].type, Audit_equal, 428 427 nentry->lsm[i].args_p, 429 - &nentry->lsm[i].rule); 428 + &nentry->lsm[i].rule, 429 + gfp); 430 430 if (!nentry->lsm[i].rule) 431 431 pr_warn("rule for LSM \'%s\' is undefined\n", 432 432 nentry->lsm[i].args_p); ··· 440 438 int i; 441 439 struct ima_rule_entry *nentry; 442 440 443 - nentry = ima_lsm_copy_rule(entry); 441 + nentry = ima_lsm_copy_rule(entry, GFP_KERNEL); 444 442 if (!nentry) 445 443 return -ENOMEM; 446 444 ··· 666 664 } 667 665 668 666 if (rc == -ESTALE && !rule_reinitialized) { 669 - lsm_rule = ima_lsm_copy_rule(rule); 667 + lsm_rule = ima_lsm_copy_rule(rule, GFP_ATOMIC); 670 668 if (lsm_rule) { 671 669 rule_reinitialized = true; 672 670 goto retry; ··· 1142 1140 entry->lsm[lsm_rule].type = audit_type; 1143 1141 result = ima_filter_rule_init(entry->lsm[lsm_rule].type, Audit_equal, 1144 1142 entry->lsm[lsm_rule].args_p, 1145 - &entry->lsm[lsm_rule].rule); 1143 + &entry->lsm[lsm_rule].rule, 1144 + GFP_KERNEL); 1146 1145 if (!entry->lsm[lsm_rule].rule) { 1147 1146 pr_warn("rule for LSM \'%s\' is undefined\n", 1148 1147 entry->lsm[lsm_rule].args_p);
+4 -2
security/security.c
··· 5332 5332 * @op: rule operator 5333 5333 * @rulestr: rule context 5334 5334 * @lsmrule: receive buffer for audit rule struct 5335 + * @gfp: GFP flag used for kmalloc 5335 5336 * 5336 5337 * Allocate and initialize an LSM audit rule structure. 5337 5338 * 5338 5339 * Return: Return 0 if @lsmrule has been successfully set, -EINVAL in case of 5339 5340 * an invalid rule. 5340 5341 */ 5341 - int security_audit_rule_init(u32 field, u32 op, char *rulestr, void **lsmrule) 5342 + int security_audit_rule_init(u32 field, u32 op, char *rulestr, void **lsmrule, 5343 + gfp_t gfp) 5342 5344 { 5343 - return call_int_hook(audit_rule_init, field, op, rulestr, lsmrule); 5345 + return call_int_hook(audit_rule_init, field, op, rulestr, lsmrule, gfp); 5344 5346 } 5345 5347 5346 5348 /**
+3 -1
security/selinux/include/audit.h
··· 21 21 * @op: the operator the rule uses 22 22 * @rulestr: the text "target" of the rule 23 23 * @rule: pointer to the new rule structure returned via this 24 + * @gfp: GFP flag used for kmalloc 24 25 * 25 26 * Returns 0 if successful, -errno if not. On success, the rule structure 26 27 * will be allocated internally. The caller must free this structure with 27 28 * selinux_audit_rule_free() after use. 28 29 */ 29 - int selinux_audit_rule_init(u32 field, u32 op, char *rulestr, void **rule); 30 + int selinux_audit_rule_init(u32 field, u32 op, char *rulestr, void **rule, 31 + gfp_t gfp); 30 32 31 33 /** 32 34 * selinux_audit_rule_free - free an selinux audit rule structure.
+3 -2
security/selinux/ss/services.c
··· 3507 3507 } 3508 3508 } 3509 3509 3510 - int selinux_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule) 3510 + int selinux_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule, 3511 + gfp_t gfp) 3511 3512 { 3512 3513 struct selinux_state *state = &selinux_state; 3513 3514 struct selinux_policy *policy; ··· 3549 3548 return -EINVAL; 3550 3549 } 3551 3550 3552 - tmprule = kzalloc(sizeof(struct selinux_audit_rule), GFP_KERNEL); 3551 + tmprule = kzalloc(sizeof(struct selinux_audit_rule), gfp); 3553 3552 if (!tmprule) 3554 3553 return -ENOMEM; 3555 3554 context_init(&tmprule->au_ctxt);
+3 -1
security/smack/smack_lsm.c
··· 4693 4693 * @op: required testing operator (=, !=, >, <, ...) 4694 4694 * @rulestr: smack label to be audited 4695 4695 * @vrule: pointer to save our own audit rule representation 4696 + * @gfp: type of the memory for the allocation 4696 4697 * 4697 4698 * Prepare to audit cases where (@field @op @rulestr) is true. 4698 4699 * The label to be audited is created if necessay. 4699 4700 */ 4700 - static int smack_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule) 4701 + static int smack_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule, 4702 + gfp_t gfp) 4701 4703 { 4702 4704 struct smack_known *skp; 4703 4705 char **rule = (char **)vrule;
+1
security/yama/yama_lsm.c
··· 111 111 112 112 /** 113 113 * yama_relation_cleanup - remove invalid entries from the relation list 114 + * @work: unused 114 115 * 115 116 */ 116 117 static void yama_relation_cleanup(struct work_struct *work)
+2
sound/core/seq/seq_ump_convert.c
··· 1075 1075 system_ev_to_ump_midi1, system_ev_to_ump_midi2 }, 1076 1076 { SNDRV_SEQ_EVENT_SENSING, UMP_SYSTEM_STATUS_ACTIVE_SENSING, 1077 1077 system_ev_to_ump_midi1, system_ev_to_ump_midi2 }, 1078 + { SNDRV_SEQ_EVENT_RESET, UMP_SYSTEM_STATUS_RESET, 1079 + system_ev_to_ump_midi1, system_ev_to_ump_midi2 }, 1078 1080 }; 1079 1081 1080 1082 static const struct seq_ev_to_ump *find_ump_encoder(int type)
+1 -1
sound/hda/intel-dsp-config.c
··· 18 18 static int dsp_driver; 19 19 20 20 module_param(dsp_driver, int, 0444); 21 - MODULE_PARM_DESC(dsp_driver, "Force the DSP driver for Intel DSP (0=auto, 1=legacy, 2=SST, 3=SOF)"); 21 + MODULE_PARM_DESC(dsp_driver, "Force the DSP driver for Intel DSP (0=auto, 1=legacy, 2=SST, 3=SOF, 4=AVS)"); 22 22 23 23 #define FLAG_SST BIT(0) 24 24 #define FLAG_SOF BIT(1)
+1
sound/oss/dmasound/dmasound_core.c
··· 204 204 static unsigned int writeBufSize = DEFAULT_BUFF_SIZE ; /* in bytes */ 205 205 module_param(writeBufSize, int, 0); 206 206 207 + MODULE_DESCRIPTION("Atari/Amiga/Q40 core DMA sound driver"); 207 208 MODULE_LICENSE("GPL"); 208 209 209 210 static int sq_unit = -1;
+2
sound/pci/hda/Kconfig
··· 162 162 depends on ACPI || COMPILE_TEST 163 163 depends on SND_SOC 164 164 select FW_CS_DSP 165 + select SERIAL_MULTI_INSTANTIATE 165 166 select SND_HDA_GENERIC 166 167 select SND_SOC_CS35L56_SHARED 167 168 select SND_HDA_SCODEC_CS35L56 ··· 179 178 depends on ACPI || COMPILE_TEST 180 179 depends on SND_SOC 181 180 select FW_CS_DSP 181 + select SERIAL_MULTI_INSTANTIATE 182 182 select SND_HDA_GENERIC 183 183 select SND_SOC_CS35L56_SHARED 184 184 select SND_HDA_SCODEC_CS35L56
+3 -3
sound/pci/hda/cs35l41_hda.c
··· 1495 1495 if (comps[cs35l41->index].dev == dev) { 1496 1496 memset(&comps[cs35l41->index], 0, sizeof(*comps)); 1497 1497 sleep_flags = lock_system_sleep(); 1498 - device_link_remove(&comps->codec->core.dev, cs35l41->dev); 1498 + device_link_remove(&cs35l41->codec->core.dev, cs35l41->dev); 1499 1499 unlock_system_sleep(sleep_flags); 1500 1500 } 1501 1501 } ··· 2019 2019 { 2020 2020 struct cs35l41_hda *cs35l41 = dev_get_drvdata(dev); 2021 2021 2022 + component_del(cs35l41->dev, &cs35l41_hda_comp_ops); 2023 + 2022 2024 pm_runtime_get_sync(cs35l41->dev); 2023 2025 pm_runtime_dont_use_autosuspend(cs35l41->dev); 2024 2026 pm_runtime_disable(cs35l41->dev); 2025 2027 2026 2028 if (cs35l41->halo_initialized) 2027 2029 cs35l41_remove_dsp(cs35l41); 2028 - 2029 - component_del(cs35l41->dev, &cs35l41_hda_comp_ops); 2030 2030 2031 2031 acpi_dev_put(cs35l41->dacpi); 2032 2032
+8
sound/pci/hda/cs35l41_hda_property.c
··· 128 128 { "17AA38B5", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 0, 0, 0 }, 129 129 { "17AA38B6", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 0, 0, 0 }, 130 130 { "17AA38B7", 2, EXTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, 0, 0 }, 0, 1, -1, 0, 0, 0 }, 131 + { "17AA38C7", 4, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, CS35L41_LEFT, CS35L41_RIGHT }, 0, 2, -1, 1000, 4500, 24 }, 132 + { "17AA38C8", 4, INTERNAL, { CS35L41_LEFT, CS35L41_RIGHT, CS35L41_LEFT, CS35L41_RIGHT }, 0, 2, -1, 1000, 4500, 24 }, 133 + { "17AA38F9", 2, EXTERNAL, { CS35L41_RIGHT, CS35L41_LEFT, 0, 0 }, 0, 2, -1, 0, 0, 0 }, 134 + { "17AA38FA", 2, EXTERNAL, { CS35L41_RIGHT, CS35L41_LEFT, 0, 0 }, 0, 2, -1, 0, 0, 0 }, 131 135 {} 132 136 }; 133 137 ··· 533 529 { "CSC3551", "17AA38B5", generic_dsd_config }, 534 530 { "CSC3551", "17AA38B6", generic_dsd_config }, 535 531 { "CSC3551", "17AA38B7", generic_dsd_config }, 532 + { "CSC3551", "17AA38C7", generic_dsd_config }, 533 + { "CSC3551", "17AA38C8", generic_dsd_config }, 534 + { "CSC3551", "17AA38F9", generic_dsd_config }, 535 + { "CSC3551", "17AA38FA", generic_dsd_config }, 536 536 {} 537 537 }; 538 538
+7 -2
sound/pci/hda/cs35l56_hda.c
··· 735 735 if (comps[cs35l56->index].dev == dev) 736 736 memset(&comps[cs35l56->index], 0, sizeof(*comps)); 737 737 738 + cs35l56->codec = NULL; 739 + 738 740 dev_dbg(cs35l56->base.dev, "Unbound\n"); 739 741 } 740 742 ··· 841 839 return ret; 842 840 843 841 cs35l56->suspended = false; 842 + 843 + if (!cs35l56->codec) 844 + return 0; 844 845 845 846 ret = cs35l56_is_fw_reload_needed(&cs35l56->base); 846 847 dev_dbg(cs35l56->base.dev, "fw_reload_needed: %d\n", ret); ··· 1077 1072 { 1078 1073 struct cs35l56_hda *cs35l56 = dev_get_drvdata(dev); 1079 1074 1075 + component_del(cs35l56->base.dev, &cs35l56_hda_comp_ops); 1076 + 1080 1077 pm_runtime_dont_use_autosuspend(cs35l56->base.dev); 1081 1078 pm_runtime_get_sync(cs35l56->base.dev); 1082 1079 pm_runtime_disable(cs35l56->base.dev); 1083 - 1084 - component_del(cs35l56->base.dev, &cs35l56_hda_comp_ops); 1085 1080 1086 1081 cs_dsp_remove(&cs35l56->cs_dsp); 1087 1082
+35 -3
sound/pci/hda/patch_realtek.c
··· 583 583 switch (codec->core.vendor_id) { 584 584 case 0x10ec0236: 585 585 case 0x10ec0256: 586 + case 0x10ec0257: 586 587 case 0x19e58326: 587 588 case 0x10ec0283: 589 + case 0x10ec0285: 588 590 case 0x10ec0286: 591 + case 0x10ec0287: 589 592 case 0x10ec0288: 593 + case 0x10ec0295: 590 594 case 0x10ec0298: 591 595 alc_headset_mic_no_shutup(codec); 592 596 break; ··· 7524 7520 ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1, 7525 7521 ALC287_FIXUP_LENOVO_THKPAD_WH_ALC1318, 7526 7522 ALC256_FIXUP_CHROME_BOOK, 7523 + ALC287_FIXUP_LENOVO_14ARP8_LEGION_IAH7, 7527 7524 }; 7528 7525 /* A special fixup for Lenovo C940 and Yoga Duet 7; ··· 7561 7556 id = ALC287_FIXUP_YOGA7_14ITL_SPEAKERS; /* DuetITL */ 7562 7557 else 7563 7558 id = ALC287_FIXUP_TAS2781_I2C; /* 14IRP8 */ 7559 + __snd_hda_apply_fixup(codec, id, action, 0); 7560 + } 7561 + 7562 + /* Similar to above the Lenovo Yoga Pro 7 14ARP8 PCI SSID matches the codec SSID of the 7563 + Legion Y9000X 2022 IAH7.*/ 7564 + static void alc287_fixup_lenovo_14arp8_legion_iah7(struct hda_codec *codec, 7565 + const struct hda_fixup *fix, 7566 + int action) 7567 + { 7568 + int id; 7569 + 7570 + if (codec->core.subsystem_id == 0x17aa386e) 7571 + id = ALC287_FIXUP_CS35L41_I2C_2; /* Legion Y9000X 2022 IAH7 */ 7572 + else 7573 + id = ALC285_FIXUP_SPEAKER2_TO_DAC1; /* Yoga Pro 7 14ARP8 */ 7564 7574 __snd_hda_apply_fixup(codec, id, action, 0); 7565 7575 } 7566 7576 ··· 9678 9658 .chained = true, 9679 9659 .chain_id = ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK, 9680 9660 }, 9661 + [ALC287_FIXUP_LENOVO_14ARP8_LEGION_IAH7] = { 9662 + .type = HDA_FIXUP_FUNC, 9663 + .v.func = alc287_fixup_lenovo_14arp8_legion_iah7, 9664 + }, 9681 9665 [ALC287_FIXUP_YOGA9_14IMH9_BASS_SPK_PIN] = { 9682 9666 .type = HDA_FIXUP_FUNC, 9683 9667 .v.func = alc287_fixup_yoga9_14iap7_bass_spk_pin, ··· 10218 10194 SND_PCI_QUIRK(0x103c, 0x8c70, "HP EliteBook 835 G11", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED), 10219 10195 SND_PCI_QUIRK(0x103c, 0x8c71, "HP EliteBook 845 G11", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED), 10220 10196 SND_PCI_QUIRK(0x103c, 0x8c72, "HP EliteBook 865 G11", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED), 10197 + SND_PCI_QUIRK(0x103c, 0x8c7b, "HP ProBook 445 G11", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 10198 + SND_PCI_QUIRK(0x103c, 0x8c7c, "HP ProBook 445 G11", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 10199 + SND_PCI_QUIRK(0x103c, 0x8c7d, "HP ProBook 465 G11", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 10200 + SND_PCI_QUIRK(0x103c, 0x8c7e, "HP ProBook 465 G11", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 10221 10201 SND_PCI_QUIRK(0x103c, 0x8c89, "HP ProBook 460 G11", ALC236_FIXUP_HP_GPIO_LED), 10222 10202 SND_PCI_QUIRK(0x103c, 0x8c8a, "HP EliteBook 630", ALC236_FIXUP_HP_GPIO_LED), 10223 10203 SND_PCI_QUIRK(0x103c, 0x8c8c, "HP EliteBook 660", ALC236_FIXUP_HP_GPIO_LED), ··· 10530 10502 SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS), 10531 10503 SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940 / Yoga Duet 7", ALC298_FIXUP_LENOVO_C940_DUET7), 10532 10504 SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS), 10533 - SND_PCI_QUIRK(0x17aa, 0x3820, "Yoga Duet 7 13ITL6", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), 10505 + SND_PCI_QUIRK(0x17aa, 0x3820, "IdeaPad 330-17IKB 81DM", ALC269_FIXUP_ASPIRE_HEADSET_MIC), 10534 10506 SND_PCI_QUIRK(0x17aa, 0x3824, "Legion Y9000X 2020", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS), 10535 10507 SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF), 10536 10508 SND_PCI_QUIRK(0x17aa, 0x3834, "Lenovo IdeaPad Slim 9i 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), ··· 10544 10516 SND_PCI_QUIRK(0x17aa, 0x3865, "Lenovo 13X", ALC287_FIXUP_CS35L41_I2C_2), 10545 10517 SND_PCI_QUIRK(0x17aa, 0x3866, "Lenovo 13X", ALC287_FIXUP_CS35L41_I2C_2), 10546 10518 SND_PCI_QUIRK(0x17aa, 0x3869, "Lenovo Yoga7 14IAL7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN), 10547 - SND_PCI_QUIRK(0x17aa, 0x386e, "Legion Y9000X 2022 IAH7", ALC287_FIXUP_CS35L41_I2C_2), 10519 + SND_PCI_QUIRK(0x17aa, 0x386e, "Legion Y9000X 2022 IAH7 / Yoga Pro 7 14ARP8", ALC287_FIXUP_LENOVO_14ARP8_LEGION_IAH7), 10548 10520 SND_PCI_QUIRK(0x17aa, 0x386f, "Legion Pro 7/7i", ALC287_FIXUP_LENOVO_LEGION_7), 10549 10521 SND_PCI_QUIRK(0x17aa, 0x3870, "Lenovo Yoga 7 14ARB7", ALC287_FIXUP_YOGA7_14ARB7_I2C), 10550 10522 SND_PCI_QUIRK(0x17aa, 0x3877, "Lenovo Legion 7 Slim 16ARHA7", ALC287_FIXUP_CS35L41_I2C_2), ··· 10568 10540 SND_PCI_QUIRK(0x17aa, 0x38be, "Yoga S980-14.5 proX YC Dual", ALC287_FIXUP_TAS2781_I2C), 10569 10541 SND_PCI_QUIRK(0x17aa, 0x38bf, "Yoga S980-14.5 proX LX Dual", ALC287_FIXUP_TAS2781_I2C), 10570 10542 SND_PCI_QUIRK(0x17aa, 0x38c3, "Y980 DUAL", ALC287_FIXUP_TAS2781_I2C), 10543 + SND_PCI_QUIRK(0x17aa, 0x38c7, "Thinkbook 13x Gen 4", ALC287_FIXUP_CS35L41_I2C_4), 10544 + SND_PCI_QUIRK(0x17aa, 0x38c8, "Thinkbook 13x Gen 4", ALC287_FIXUP_CS35L41_I2C_4), 10571 10545 SND_PCI_QUIRK(0x17aa, 0x38cb, "Y790 YG DUAL", ALC287_FIXUP_TAS2781_I2C), 10572 10546 SND_PCI_QUIRK(0x17aa, 0x38cd, "Y790 VECO DUAL", ALC287_FIXUP_TAS2781_I2C), 10573 10547 SND_PCI_QUIRK(0x17aa, 0x38d2, "Lenovo Yoga 9 14IMH9", ALC287_FIXUP_YOGA9_14IMH9_BASS_SPK_PIN), 10574 10548 SND_PCI_QUIRK(0x17aa, 0x38d7, "Lenovo Yoga 9 14IMH9", ALC287_FIXUP_YOGA9_14IMH9_BASS_SPK_PIN), 10549 + SND_PCI_QUIRK(0x17aa, 0x38f9, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2), 10550 + SND_PCI_QUIRK(0x17aa, 0x38fa, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2), 10575 10551 SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), 10576 10552 SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC), 10577 10553 SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI), ··· 10613 10581 SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */ 10614 10582 SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802), 10615 10583 SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X), 10584 + SND_PCI_QUIRK(0x1c6c, 0x122a, "Positivo N14AP7", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 10616 10585 SND_PCI_QUIRK(0x1c6c, 0x1251, "Positivo N14KP6-TG", ALC288_FIXUP_DELL1_MIC_NO_PRESENCE), 10617 10586 SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_SET_COEF_DEFAULTS), 10618 10587 SND_PCI_QUIRK(0x1d05, 0x1096, "TongFang GMxMRxx", ALC269_FIXUP_NO_SHUTUP), ··· 10638 10605 SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", ALC256_FIXUP_INTEL_NUC10), 10639 10606 SND_PCI_QUIRK(0x8086, 0x3038, "Intel NUC 13", ALC295_FIXUP_CHROME_BOOK), 10640 10607 SND_PCI_QUIRK(0xf111, 0x0001, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 10641 - SND_PCI_QUIRK(0xf111, 0x0005, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 10642 10608 SND_PCI_QUIRK(0xf111, 0x0006, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE), 10643 10609 10644 10610 #if 0
+2 -2
sound/pci/hda/tas2781_hda_i2c.c
··· 777 777 { 778 778 struct tas2781_hda *tas_hda = dev_get_drvdata(dev); 779 779 780 + component_del(tas_hda->dev, &tas2781_hda_comp_ops); 781 + 780 782 pm_runtime_get_sync(tas_hda->dev); 781 783 pm_runtime_disable(tas_hda->dev); 782 - 783 - component_del(tas_hda->dev, &tas2781_hda_comp_ops); 784 784 785 785 pm_runtime_put_noidle(tas_hda->dev); 786 786
+1
tools/hv/Makefile
··· 17 17 MAKEFLAGS += -r 18 18 19 19 override CFLAGS += -O2 -Wall -g -D_GNU_SOURCE -I$(OUTPUT)include 20 + override CFLAGS += -Wno-address-of-packed-member 20 21 21 22 ALL_TARGETS := hv_kvp_daemon hv_vss_daemon 22 23 ifneq ($(ARCH), aarch64)
+2
tools/testing/selftests/bpf/prog_tests/verifier.c
··· 53 53 #include "verifier_movsx.skel.h" 54 54 #include "verifier_netfilter_ctx.skel.h" 55 55 #include "verifier_netfilter_retcode.skel.h" 56 + #include "verifier_or_jmp32_k.skel.h" 56 57 #include "verifier_precision.skel.h" 57 58 #include "verifier_prevent_map_lookup.skel.h" 58 59 #include "verifier_raw_stack.skel.h" ··· 172 171 void test_verifier_movsx(void) { RUN(verifier_movsx); } 173 172 void test_verifier_netfilter_ctx(void) { RUN(verifier_netfilter_ctx); } 174 173 void test_verifier_netfilter_retcode(void) { RUN(verifier_netfilter_retcode); } 174 + void test_verifier_or_jmp32_k(void) { RUN(verifier_or_jmp32_k); } 175 175 void test_verifier_precision(void) { RUN(verifier_precision); } 176 176 void test_verifier_prevent_map_lookup(void) { RUN(verifier_prevent_map_lookup); } 177 177 void test_verifier_raw_stack(void) { RUN(verifier_raw_stack); }
+41
tools/testing/selftests/bpf/progs/verifier_or_jmp32_k.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/bpf.h> 4 + #include <bpf/bpf_helpers.h> 5 + #include "bpf_misc.h" 6 + 7 + SEC("socket") 8 + __description("or_jmp32_k: bit ops + branch on unknown value") 9 + __failure 10 + __msg("R0 invalid mem access 'scalar'") 11 + __naked void or_jmp32_k(void) 12 + { 13 + asm volatile (" \ 14 + r0 = 0xffffffff; \ 15 + r0 /= 1; \ 16 + r1 = 0; \ 17 + w1 = -1; \ 18 + w1 >>= 1; \ 19 + w0 &= w1; \ 20 + w0 |= 2; \ 21 + if w0 != 0x7ffffffd goto l1; \ 22 + r0 = 1; \ 23 + exit; \ 24 + l3: \ 25 + r0 = 5; \ 26 + *(u64*)(r0 - 8) = r0; \ 27 + exit; \ 28 + l2: \ 29 + w0 -= 0xe; \ 30 + if w0 == 1 goto l3; \ 31 + r0 = 4; \ 32 + exit; \ 33 + l1: \ 34 + w0 -= 0x7ffffff0; \ 35 + if w0 s>= 0xe goto l2; \ 36 + r0 = 3; \ 37 + exit; \ 38 + " ::: __clobber_all); 39 + } 40 + 41 + char _license[] SEC("license") = "GPL";
+7 -1
tools/testing/selftests/drivers/net/virtio_net/config
··· 1 - CONFIG_VIRTIO_NET=y 1 + CONFIG_BPF_SYSCALL=y 2 + CONFIG_CGROUP_BPF=y 3 + CONFIG_IPV6=y 4 + CONFIG_IPV6_MULTIPLE_TABLES=y 5 + CONFIG_NET_L3_MASTER_DEV=y 6 + CONFIG_NET_VRF=m 2 7 CONFIG_VIRTIO_DEBUG=y 8 + CONFIG_VIRTIO_NET=y
+10 -1
tools/testing/selftests/fchmodat2/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-or-later 2 2 3 - CFLAGS += -Wall -O2 -g -fsanitize=address -fsanitize=undefined -static-libasan $(KHDR_INCLUDES) 3 + CFLAGS += -Wall -O2 -g -fsanitize=address -fsanitize=undefined $(KHDR_INCLUDES) 4 + 5 + # gcc requires -static-libasan in order to ensure that Address Sanitizer's 6 + # library is the first one loaded. However, clang already statically links the 7 + # Address Sanitizer if -fsanitize is specified. Therefore, simply omit 8 + # -static-libasan for clang builds. 9 + ifeq ($(LLVM),) 10 + CFLAGS += -static-libasan 11 + endif 12 + 4 13 TEST_GEN_PROGS := fchmodat2_test 5 14 6 15 include ../lib.mk
+10 -2
tools/testing/selftests/filesystems/statmount/statmount_test.c
··· 125 125 126 126 static void cleanup_namespace(void) 127 127 { 128 - fchdir(orig_root); 129 - chroot("."); 128 + int ret; 129 + 130 + ret = fchdir(orig_root); 131 + if (ret == -1) 132 + ksft_perror("fchdir to original root"); 133 + 134 + ret = chroot("."); 135 + if (ret == -1) 136 + ksft_perror("chroot to original root"); 137 + 130 138 umount2(root_mntpoint, MNT_DETACH); 131 139 rmdir(root_mntpoint); 132 140 }
+16 -8
tools/testing/selftests/mm/map_fixed_noreplace.c
··· 67 67 dump_maps(); 68 68 ksft_exit_fail_msg("Error: munmap failed!?\n"); 69 69 } 70 - ksft_test_result_pass("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p); 70 + ksft_print_msg("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p); 71 + ksft_test_result_pass("mmap() 5*PAGE_SIZE at base\n"); 71 72 72 73 addr = base_addr + page_size; 73 74 size = 3 * page_size; ··· 77 76 dump_maps(); 78 77 ksft_exit_fail_msg("Error: first mmap() failed unexpectedly\n"); 79 78 } 80 - ksft_test_result_pass("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p); 79 + ksft_print_msg("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p); 80 + ksft_test_result_pass("mmap() 3*PAGE_SIZE at base+PAGE_SIZE\n"); 81 81 82 82 /* 83 83 * Exact same mapping again: ··· 95 93 dump_maps(); 96 94 ksft_exit_fail_msg("Error:1: mmap() succeeded when it shouldn't have\n"); 97 95 } 98 - ksft_test_result_pass("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p); 96 + ksft_print_msg("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p); 97 + ksft_test_result_pass("mmap() 5*PAGE_SIZE at base\n"); 99 98 100 99 /* 101 100 * Second mapping contained within first: ··· 114 111 dump_maps(); 115 112 ksft_exit_fail_msg("Error:2: mmap() succeeded when it shouldn't have\n"); 116 113 } 117 - ksft_test_result_pass("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p); 114 + ksft_print_msg("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p); 115 + ksft_test_result_pass("mmap() 2*PAGE_SIZE at base+PAGE_SIZE\n"); 118 116 119 117 /* 120 118 * Overlap end of existing mapping: ··· 132 128 dump_maps(); 133 129 ksft_exit_fail_msg("Error:3: mmap() succeeded when it shouldn't have\n"); 134 130 } 135 - ksft_test_result_pass("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p); 131 + ksft_print_msg("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p); 132 + ksft_test_result_pass("mmap() 2*PAGE_SIZE at base+(3*PAGE_SIZE)\n"); 136 133 137 134 /* 138 135 * Overlap start of existing mapping: ··· 150 145 dump_maps(); 151 146 ksft_exit_fail_msg("Error:4: mmap() succeeded when it shouldn't have\n"); 152 147 } 153 - ksft_test_result_pass("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p); 148 + ksft_print_msg("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p); 149 + ksft_test_result_pass("mmap() 2*PAGE_SIZE bytes at base\n"); 154 150 155 151 /* 156 152 * Adjacent to start of existing mapping: ··· 168 162 dump_maps(); 169 163 ksft_exit_fail_msg("Error:5: mmap() failed when it shouldn't have\n"); 170 164 } 171 - ksft_test_result_pass("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p); 165 + ksft_print_msg("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p); 166 + ksft_test_result_pass("mmap() PAGE_SIZE at base\n"); 172 167 173 168 /* 174 169 * Adjacent to end of existing mapping: ··· 186 179 dump_maps(); 187 180 ksft_exit_fail_msg("Error:6: mmap() failed when it shouldn't have\n"); 188 181 } 189 - ksft_test_result_pass("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p); 182 + ksft_print_msg("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p); 183 + ksft_test_result_pass("mmap() PAGE_SIZE at base+(4*PAGE_SIZE)\n"); 190 184 191 185 addr = base_addr; 192 186 size = 5 * page_size;
+2
tools/testing/selftests/net/Makefile
··· 43 43 TEST_PROGS += srv6_end_next_csid_l3vpn_test.sh 44 44 TEST_PROGS += srv6_end_x_next_csid_l3vpn_test.sh 45 45 TEST_PROGS += srv6_end_flavors_test.sh 46 + TEST_PROGS += srv6_end_dx4_netfilter_test.sh 47 + TEST_PROGS += srv6_end_dx6_netfilter_test.sh 46 48 TEST_PROGS += vrf_strict_mode_test.sh 47 49 TEST_PROGS += arp_ndisc_evict_nocarrier.sh 48 50 TEST_PROGS += ndisc_unsolicited_na_test.sh
+2
tools/testing/selftests/net/config
··· 101 101 CONFIG_CRYPTO_ARIA=y 102 102 CONFIG_XFRM_INTERFACE=m 103 103 CONFIG_XFRM_USER=m 104 + CONFIG_IP_NF_MATCH_RPFILTER=m 105 + CONFIG_IP6_NF_MATCH_RPFILTER=m
+28 -18
tools/testing/selftests/net/mptcp/userspace_pm.sh
··· 160 160 local is_v6=$1 161 161 local app_port=$app4_port 162 162 local connect_addr="10.0.1.1" 163 + local client_addr="10.0.1.2" 163 164 local listen_addr="0.0.0.0" 164 165 if [ "$is_v6" = "v6" ] 165 166 then 166 167 connect_addr="dead:beef:1::1" 168 + client_addr="dead:beef:1::2" 167 169 listen_addr="::" 168 170 app_port=$app6_port 169 171 else ··· 208 206 [ "$server_serverside" = 1 ] 209 207 then 210 208 test_pass 209 + print_title "Connection info: ${client_addr}:${client_port} -> ${connect_addr}:${app_port}" 211 210 else 212 211 test_fail "Expected tokens (c:${client_token} - s:${server_token}) and server (c:${client_serverside} - s:${server_serverside})" 213 212 mptcp_lib_result_print_all_tap ··· 300 297 ip netns exec "$ns2"\ 301 298 ./pm_nl_ctl ann 10.0.2.2 token "$client4_token" id $client_addr_id dev\ 302 299 ns2eth1 303 - print_test "ADD_ADDR id:${client_addr_id} 10.0.2.2 (ns2) => ns1, reuse port" 300 + print_test "ADD_ADDR id:client 10.0.2.2 (ns2) => ns1, reuse port" 304 301 sleep 0.5 305 302 verify_announce_event $server_evts $ANNOUNCED $server4_token "10.0.2.2" $client_addr_id \ 306 303 "$client4_port" ··· 309 306 :>"$server_evts" 310 307 ip netns exec "$ns2" ./pm_nl_ctl ann\ 311 308 dead:beef:2::2 token "$client6_token" id $client_addr_id dev ns2eth1 312 - print_test "ADD_ADDR6 id:${client_addr_id} dead:beef:2::2 (ns2) => ns1, reuse port" 309 + print_test "ADD_ADDR6 id:client dead:beef:2::2 (ns2) => ns1, reuse port" 313 310 sleep 0.5 314 311 verify_announce_event "$server_evts" "$ANNOUNCED" "$server6_token" "dead:beef:2::2"\ 315 312 "$client_addr_id" "$client6_port" "v6" ··· 319 316 client_addr_id=$((client_addr_id+1)) 320 317 ip netns exec "$ns2" ./pm_nl_ctl ann 10.0.2.2 token "$client4_token" id\ 321 318 $client_addr_id dev ns2eth1 port $new4_port 322 - print_test "ADD_ADDR id:${client_addr_id} 10.0.2.2 (ns2) => ns1, new port" 319 + print_test "ADD_ADDR id:client+1 10.0.2.2 (ns2) => ns1, new port" 323 320 sleep 0.5 324 321 verify_announce_event 
"$server_evts" "$ANNOUNCED" "$server4_token" "10.0.2.2"\ 325 322 "$client_addr_id" "$new4_port" ··· 330 327 # ADD_ADDR from the server to client machine reusing the subflow port 331 328 ip netns exec "$ns1" ./pm_nl_ctl ann 10.0.2.1 token "$server4_token" id\ 332 329 $server_addr_id dev ns1eth2 333 - print_test "ADD_ADDR id:${server_addr_id} 10.0.2.1 (ns1) => ns2, reuse port" 330 + print_test "ADD_ADDR id:server 10.0.2.1 (ns1) => ns2, reuse port" 334 331 sleep 0.5 335 332 verify_announce_event "$client_evts" "$ANNOUNCED" "$client4_token" "10.0.2.1"\ 336 333 "$server_addr_id" "$app4_port" ··· 339 336 :>"$client_evts" 340 337 ip netns exec "$ns1" ./pm_nl_ctl ann dead:beef:2::1 token "$server6_token" id\ 341 338 $server_addr_id dev ns1eth2 342 - print_test "ADD_ADDR6 id:${server_addr_id} dead:beef:2::1 (ns1) => ns2, reuse port" 339 + print_test "ADD_ADDR6 id:server dead:beef:2::1 (ns1) => ns2, reuse port" 343 340 sleep 0.5 344 341 verify_announce_event "$client_evts" "$ANNOUNCED" "$client6_token" "dead:beef:2::1"\ 345 342 "$server_addr_id" "$app6_port" "v6" ··· 349 346 server_addr_id=$((server_addr_id+1)) 350 347 ip netns exec "$ns1" ./pm_nl_ctl ann 10.0.2.1 token "$server4_token" id\ 351 348 $server_addr_id dev ns1eth2 port $new4_port 352 - print_test "ADD_ADDR id:${server_addr_id} 10.0.2.1 (ns1) => ns2, new port" 349 + print_test "ADD_ADDR id:server+1 10.0.2.1 (ns1) => ns2, new port" 353 350 sleep 0.5 354 351 verify_announce_event "$client_evts" "$ANNOUNCED" "$client4_token" "10.0.2.1"\ 355 352 "$server_addr_id" "$new4_port" ··· 383 380 local invalid_token=$(( client4_token - 1 )) 384 381 ip netns exec "$ns2" ./pm_nl_ctl rem token $invalid_token id\ 385 382 $client_addr_id > /dev/null 2>&1 386 - print_test "RM_ADDR id:${client_addr_id} ns2 => ns1, invalid token" 383 + print_test "RM_ADDR id:client ns2 => ns1, invalid token" 387 384 local type 388 385 type=$(mptcp_lib_evts_get_info type "$server_evts") 389 386 if [ "$type" = "" ] ··· 397 394 local invalid_id=$(( 
client_addr_id + 1 )) 398 395 ip netns exec "$ns2" ./pm_nl_ctl rem token "$client4_token" id\ 399 396 $invalid_id > /dev/null 2>&1 400 - print_test "RM_ADDR id:${invalid_id} ns2 => ns1, invalid id" 397 + print_test "RM_ADDR id:client+1 ns2 => ns1, invalid id" 401 398 type=$(mptcp_lib_evts_get_info type "$server_evts") 402 399 if [ "$type" = "" ] 403 400 then ··· 410 407 :>"$server_evts" 411 408 ip netns exec "$ns2" ./pm_nl_ctl rem token "$client4_token" id\ 412 409 $client_addr_id 413 - print_test "RM_ADDR id:${client_addr_id} ns2 => ns1" 410 + print_test "RM_ADDR id:client ns2 => ns1" 414 411 sleep 0.5 415 412 verify_remove_event "$server_evts" "$REMOVED" "$server4_token" "$client_addr_id" 416 413 ··· 419 416 client_addr_id=$(( client_addr_id - 1 )) 420 417 ip netns exec "$ns2" ./pm_nl_ctl rem token "$client4_token" id\ 421 418 $client_addr_id 422 - print_test "RM_ADDR id:${client_addr_id} ns2 => ns1" 419 + print_test "RM_ADDR id:client-1 ns2 => ns1" 423 420 sleep 0.5 424 421 verify_remove_event "$server_evts" "$REMOVED" "$server4_token" "$client_addr_id" 425 422 ··· 427 424 :>"$server_evts" 428 425 ip netns exec "$ns2" ./pm_nl_ctl rem token "$client6_token" id\ 429 426 $client_addr_id 430 - print_test "RM_ADDR6 id:${client_addr_id} ns2 => ns1" 427 + print_test "RM_ADDR6 id:client-1 ns2 => ns1" 431 428 sleep 0.5 432 429 verify_remove_event "$server_evts" "$REMOVED" "$server6_token" "$client_addr_id" 433 430 ··· 437 434 # RM_ADDR from the server to client machine 438 435 ip netns exec "$ns1" ./pm_nl_ctl rem token "$server4_token" id\ 439 436 $server_addr_id 440 - print_test "RM_ADDR id:${server_addr_id} ns1 => ns2" 437 + print_test "RM_ADDR id:server ns1 => ns2" 441 438 sleep 0.5 442 439 verify_remove_event "$client_evts" "$REMOVED" "$client4_token" "$server_addr_id" 443 440 ··· 446 443 server_addr_id=$(( server_addr_id - 1 )) 447 444 ip netns exec "$ns1" ./pm_nl_ctl rem token "$server4_token" id\ 448 445 $server_addr_id 449 - print_test "RM_ADDR 
id:${server_addr_id} ns1 => ns2" 446 + print_test "RM_ADDR id:server-1 ns1 => ns2" 450 447 sleep 0.5 451 448 verify_remove_event "$client_evts" "$REMOVED" "$client4_token" "$server_addr_id" 452 449 ··· 454 451 :>"$client_evts" 455 452 ip netns exec "$ns1" ./pm_nl_ctl rem token "$server6_token" id\ 456 453 $server_addr_id 457 - print_test "RM_ADDR6 id:${server_addr_id} ns1 => ns2" 454 + print_test "RM_ADDR6 id:server-1 ns1 => ns2" 458 455 sleep 0.5 459 456 verify_remove_event "$client_evts" "$REMOVED" "$client6_token" "$server_addr_id" 460 457 } ··· 482 479 local locid 483 480 local remid 484 481 local info 482 + local e_dport_txt 485 483 486 - info="${e_saddr} (${e_from}) => ${e_daddr}:${e_dport} (${e_to})" 484 + # only display the fixed ports 485 + if [ "${e_dport}" -ge "${app4_port}" ] && [ "${e_dport}" -le "${app6_port}" ]; then 486 + e_dport_txt=":${e_dport}" 487 + fi 488 + 489 + info="${e_saddr} (${e_from}) => ${e_daddr}${e_dport_txt} (${e_to})" 487 490 488 491 if [ "$e_type" = "$SUB_ESTABLISHED" ] 489 492 then ··· 775 766 :>"$client_evts" 776 767 ip netns exec "$ns1" ./pm_nl_ctl ann 10.0.2.1 token "$server6_token" id\ 777 768 $server_addr_id dev ns1eth2 778 - print_test "ADD_ADDR4 id:${server_addr_id} 10.0.2.1 (ns1) => ns2, reuse port" 769 + print_test "ADD_ADDR4 id:server 10.0.2.1 (ns1) => ns2, reuse port" 779 770 sleep 0.5 780 771 verify_announce_event "$client_evts" "$ANNOUNCED" "$client6_token" "10.0.2.1"\ 781 772 "$server_addr_id" "$app6_port" ··· 870 861 local listener_pid=$! 
871 862 872 863 sleep 0.5 873 - print_test "CREATE_LISTENER 10.0.2.2:$client4_port" 864 + print_test "CREATE_LISTENER 10.0.2.2 (client port)" 874 865 verify_listener_events $client_evts $LISTENER_CREATED $AF_INET 10.0.2.2 $client4_port 875 866 876 867 # ADD_ADDR from client to server machine reusing the subflow port ··· 887 878 mptcp_lib_kill_wait $listener_pid 888 879 889 880 sleep 0.5 890 - print_test "CLOSE_LISTENER 10.0.2.2:$client4_port" 881 + print_test "CLOSE_LISTENER 10.0.2.2 (client port)" 891 882 verify_listener_events $client_evts $LISTENER_CLOSED $AF_INET 10.0.2.2 $client4_port 892 883 } 893 884 894 885 print_title "Make connections" 895 886 make_connection 896 887 make_connection "v6" 888 + print_title "Will be using address IDs ${client_addr_id} (client) and ${server_addr_id} (server)" 897 889 898 890 test_announce 899 891 test_remove
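The new `e_dport_txt` logic in the events-verification helper only appends the destination port when it falls inside the fixed application-port range, so ephemeral ports never leak into test output. A standalone sketch of that conditional, with made-up port numbers:

```shell
# Sketch of the "only display the fixed ports" conditional from the
# diff above; the port values here are made-up examples, not the ports
# the script actually assigns.
app4_port=10002
app6_port=10004
e_dport=10003            # inside [app4_port, app6_port]: displayed
e_dport_txt=""
if [ "${e_dport}" -ge "${app4_port}" ] && [ "${e_dport}" -le "${app6_port}" ]; then
	e_dport_txt=":${e_dport}"
fi
info="10.0.2.2${e_dport_txt}"
echo "${info}"
```

An ephemeral `e_dport` outside the range leaves `e_dport_txt` empty, and the address is printed without a port.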
+1 -1
tools/testing/selftests/net/openvswitch/openvswitch.sh
··· 1 - #!/bin/sh 1 + #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 # 4 4 # OVS kernel module self tests
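The shebang switch matters because the OVS test scripts rely on bash-only constructs; under a strictly POSIX `/bin/sh` (e.g. dash) these fail at parse or run time. A minimal illustration (the array contents are made up):

```shell
# Arrays and [[ ]] are bash features; a POSIX /bin/sh such as dash
# rejects both, which is why the shebang moved to bash.
ovs_netns=(client server)                 # array: bash-only
if [[ ${#ovs_netns[@]} -eq 2 ]]; then     # [[ ]]: bash-only
	summary="have ${#ovs_netns[@]} namespaces"
fi
echo "${summary}"
```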
+1 -1
tools/testing/selftests/net/openvswitch/ovs-dpctl.py
··· 531 531 for flat_act in parse_flat_map: 532 532 if parse_starts_block(actstr, flat_act[0], False): 533 533 actstr = actstr[len(flat_act[0]):] 534 - self["attrs"].append([flat_act[1]]) 534 + self["attrs"].append([flat_act[1], True]) 535 535 actstr = actstr[strspn(actstr, ", ") :] 536 536 parsed = True 537 537
+335
tools/testing/selftests/net/srv6_end_dx4_netfilter_test.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # author: Jianguo Wu <wujianguo@chinatelecom.cn> 5 + # 6 + # Mostly copied from tools/testing/selftests/net/srv6_end_dt4_l3vpn_test.sh. 7 + # 8 + # This script is designed for testing the support of netfilter hooks for 9 + # SRv6 End.DX4 behavior. 10 + # 11 + # Hereafter a network diagram is shown, where one tenant (named 100) offers 12 + # IPv4 L3 VPN services allowing hosts to communicate with each other across 13 + # an IPv6 network. 14 + # 15 + # Routers rt-1 and rt-2 implement IPv4 L3 VPN services leveraging the SRv6 16 + # architecture. The key components for such VPNs are: a) SRv6 Encap behavior, 17 + # b) SRv6 End.DX4 behavior. 18 + # 19 + # To explain how an IPv4 L3 VPN based on SRv6 works, let us briefly consider an 20 + # example where, within the same domain of tenant 100, the host hs-1 pings 21 + # the host hs-2. 22 + # 23 + # First of all, L2 reachability of the host hs-2 is taken into account by 24 + # the router rt-1 which acts as an ARP proxy. 25 + # 26 + # When the host hs-1 sends an IPv4 packet destined to hs-2, the router rt-1 27 + # receives the packet on the internal veth-t100 interface. rt-1 contains the 28 + # SRv6 Encap route for encapsulating the IPv4 packet in an IPv6 plus the Segment 29 + # Routing Header (SRH) packet. This packet is sent through the (IPv6) core 30 + # network up to the router rt-2 that receives it on its veth0 interface. 31 + # 32 + # The rt-2 router uses the 'localsid' routing table to process incoming 33 + # IPv6+SRH packets which belong to the VPN of the tenant 100. For each of these 34 + # packets, the SRv6 End.DX4 behavior removes the outer IPv6+SRH headers and 35 + # routes the packet to the specified nexthop. Afterwards, the packet is sent to 36 + # the host hs-2 through the veth-t100 interface. 37 + # 38 + # The ping response follows the same processing but this time the roles of rt-1 39 + # and rt-2 are swapped. 
40 + # 41 + # And when net.netfilter.nf_hooks_lwtunnel is set to 1 in rt-1 or rt-2, and a 42 + # rpfilter iptables rule is added, SRv6 packets will go through netfilter PREROUTING 43 + # hooks. 44 + # 45 + # 46 + # +-------------------+ +-------------------+ 47 + # | | | | 48 + # | hs-1 netns | | hs-2 netns | 49 + # | | | | 50 + # | +-------------+ | | +-------------+ | 51 + # | | veth0 | | | | veth0 | | 52 + # | | 10.0.0.1/24 | | | | 10.0.0.2/24 | | 53 + # | +-------------+ | | +-------------+ | 54 + # | . | | . | 55 + # +-------------------+ +-------------------+ 56 + # . . 57 + # . . 58 + # . . 59 + # +-----------------------------------+ +-----------------------------------+ 60 + # | . | | . | 61 + # | +---------------+ | | +---------------- | 62 + # | | veth-t100 | | | | veth-t100 | | 63 + # | | 10.0.0.11/24 | +----------+ | | +----------+ | 10.0.0.22/24 | | 64 + # | +-------+-------+ | route | | | | route | +-------+-------- | 65 + # | | table | | | | table | | 66 + # | +----------+ | | +----------+ | 67 + # | +--------------+ | | +--------------+ | 68 + # | | veth0 | | | | veth0 | | 69 + # | | 2001:11::1/64 |.|...|.| 2001:11::2/64 | | 70 + # | +--------------+ | | +--------------+ | 71 + # | | | | 72 + # | rt-1 netns | | rt-2 netns | 73 + # | | | | 74 + # +-----------------------------------+ +-----------------------------------+ 75 + # 76 + # ~~~~~~~~~~~~~~~~~~~~~~~~~ 77 + # | Network configuration | 78 + # ~~~~~~~~~~~~~~~~~~~~~~~~~ 79 + # 80 + # rt-1: localsid table 81 + # +----------------------------------------------------------------+ 82 + # |SID |Action | 83 + # +----------------------------------------------------------------+ 84 + # |fc00:21:100::6004|apply SRv6 End.DX4 nh4 10.0.0.1 dev veth-t100 | 85 + # +----------------------------------------------------------------+ 86 + # 87 + # rt-1: route table 88 + # +---------------------------------------------------+ 89 + # |host |Action | 90 + # +---------------------------------------------------+ 91 + 
# |10.0.0.2 |apply seg6 encap segs fc00:12:100::6004| 92 + # +---------------------------------------------------+ 93 + # |10.0.0.0/24|forward to dev veth_t100 | 94 + # +---------------------------------------------------+ 95 + # 96 + # 97 + # rt-2: localsid table 98 + # +---------------------------------------------------------------+ 99 + # |SID |Action | 100 + # +---------------------------------------------------------------+ 101 + # |fc00:12:100::6004|apply SRv6 End.DX4 nh4 10.0.0.2 dev veth-t100| 102 + # +---------------------------------------------------------------+ 103 + # 104 + # rt-2: route table 105 + # +---------------------------------------------------+ 106 + # |host |Action | 107 + # +---------------------------------------------------+ 108 + # |10.0.0.1 |apply seg6 encap segs fc00:21:100::6004| 109 + # +---------------------------------------------------+ 110 + # |10.0.0.0/24|forward to dev veth_t100 | 111 + # +---------------------------------------------------+ 112 + # 113 + 114 + # Kselftest framework requirement - SKIP code is 4. 
115 + ksft_skip=4 116 + 117 + readonly IPv6_RT_NETWORK=2001:11 118 + readonly IPv4_HS_NETWORK=10.0.0 119 + readonly SID_LOCATOR=fc00 120 + 121 + PING_TIMEOUT_SEC=4 122 + 123 + ret=0 124 + 125 + PAUSE_ON_FAIL=${PAUSE_ON_FAIL:=no} 126 + 127 + log_test() 128 + { 129 + local rc=$1 130 + local expected=$2 131 + local msg="$3" 132 + 133 + if [ ${rc} -eq ${expected} ]; then 134 + nsuccess=$((nsuccess+1)) 135 + printf "\n TEST: %-60s [ OK ]\n" "${msg}" 136 + else 137 + ret=1 138 + nfail=$((nfail+1)) 139 + printf "\n TEST: %-60s [FAIL]\n" "${msg}" 140 + if [ "${PAUSE_ON_FAIL}" = "yes" ]; then 141 + echo 142 + echo "hit enter to continue, 'q' to quit" 143 + read a 144 + [ "$a" = "q" ] && exit 1 145 + fi 146 + fi 147 + } 148 + 149 + print_log_test_results() 150 + { 151 + if [ "$TESTS" != "none" ]; then 152 + printf "\nTests passed: %3d\n" ${nsuccess} 153 + printf "Tests failed: %3d\n" ${nfail} 154 + fi 155 + } 156 + 157 + log_section() 158 + { 159 + echo 160 + echo "################################################################################" 161 + echo "TEST SECTION: $*" 162 + echo "################################################################################" 163 + } 164 + 165 + cleanup() 166 + { 167 + ip link del veth-rt-1 2>/dev/null || true 168 + ip link del veth-rt-2 2>/dev/null || true 169 + 170 + # destroy routers rt-* and hosts hs-* 171 + for ns in $(ip netns show | grep -E 'rt-*|hs-*'); do 172 + ip netns del ${ns} || true 173 + done 174 + } 175 + 176 + # Setup the basic networking for the routers 177 + setup_rt_networking() 178 + { 179 + local rt=$1 180 + local nsname=rt-${rt} 181 + 182 + ip netns add ${nsname} 183 + 184 + ip netns exec ${nsname} sysctl -wq net.ipv6.conf.all.accept_dad=0 185 + ip netns exec ${nsname} sysctl -wq net.ipv6.conf.default.accept_dad=0 186 + 187 + ip link set veth-rt-${rt} netns ${nsname} 188 + ip -netns ${nsname} link set veth-rt-${rt} name veth0 189 + 190 + ip -netns ${nsname} addr add ${IPv6_RT_NETWORK}::${rt}/64 dev veth0 nodad 
191 + ip -netns ${nsname} link set veth0 up 192 + ip -netns ${nsname} link set lo up 193 + 194 + ip netns exec ${nsname} sysctl -wq net.ipv4.ip_forward=1 195 + ip netns exec ${nsname} sysctl -wq net.ipv6.conf.all.forwarding=1 196 + } 197 + 198 + setup_rt_netfilter() 199 + { 200 + local rt=$1 201 + local nsname=rt-${rt} 202 + 203 + ip netns exec ${nsname} sysctl -wq net.netfilter.nf_hooks_lwtunnel=1 204 + ip netns exec ${nsname} iptables -t raw -A PREROUTING -m rpfilter --invert -j DROP 205 + } 206 + 207 + setup_hs() 208 + { 209 + local hs=$1 210 + local rt=$2 211 + local tid=$3 212 + local hsname=hs-${hs} 213 + local rtname=rt-${rt} 214 + local rtveth=veth-t${tid} 215 + 216 + # set the networking for the host 217 + ip netns add ${hsname} 218 + 219 + ip -netns ${hsname} link add veth0 type veth peer name ${rtveth} 220 + ip -netns ${hsname} link set ${rtveth} netns ${rtname} 221 + ip -netns ${hsname} addr add ${IPv4_HS_NETWORK}.${hs}/24 dev veth0 222 + ip -netns ${hsname} link set veth0 up 223 + ip -netns ${hsname} link set lo up 224 + 225 + ip -netns ${rtname} addr add ${IPv4_HS_NETWORK}.${rt}${hs}/24 dev ${rtveth} 226 + ip -netns ${rtname} link set ${rtveth} up 227 + 228 + ip netns exec ${rtname} sysctl -wq net.ipv4.conf.${rtveth}.proxy_arp=1 229 + } 230 + 231 + setup_vpn_config() 232 + { 233 + local hssrc=$1 234 + local rtsrc=$2 235 + local hsdst=$3 236 + local rtdst=$4 237 + local tid=$5 238 + 239 + local hssrc_name=hs-t${tid}-${hssrc} 240 + local hsdst_name=hs-t${tid}-${hsdst} 241 + local rtsrc_name=rt-${rtsrc} 242 + local rtdst_name=rt-${rtdst} 243 + local vpn_sid=${SID_LOCATOR}:${hssrc}${hsdst}:${tid}::6004 244 + 245 + # set the encap route for encapsulating packets which arrive from the 246 + # host hssrc and destined to the access router rtsrc. 
247 + ip -netns ${rtsrc_name} -4 route add ${IPv4_HS_NETWORK}.${hsdst}/32 \ 248 + encap seg6 mode encap segs ${vpn_sid} dev veth0 249 + ip -netns ${rtsrc_name} -6 route add ${vpn_sid}/128 \ 250 + via 2001:11::${rtdst} dev veth0 251 + 252 + # set the decap route for decapsulating packets which arrive from 253 + # the rtdst router and destined to the hsdst host. 254 + ip -netns ${rtdst_name} -6 route add ${vpn_sid}/128 \ 255 + encap seg6local action End.DX4 nh4 ${IPv4_HS_NETWORK}.${hsdst} dev veth-t${tid} 256 + } 257 + 258 + setup() 259 + { 260 + ip link add veth-rt-1 type veth peer name veth-rt-2 261 + # setup the networking for router rt-1 and router rt-2 262 + setup_rt_networking 1 263 + setup_rt_networking 2 264 + 265 + # setup two hosts for the tenant 100. 266 + # - host hs-1 is directly connected to the router rt-1; 267 + # - host hs-2 is directly connected to the router rt-2. 268 + setup_hs 1 1 100 269 + setup_hs 2 2 100 270 + 271 + # setup the IPv4 L3 VPN which connects the host hs-1 and host hs-2. 272 + setup_vpn_config 1 1 2 2 100 #args: src_host src_router dst_host dst_router tenant 273 + setup_vpn_config 2 2 1 1 100 274 + } 275 + 276 + check_hs_connectivity() 277 + { 278 + local hssrc=$1 279 + local hsdst=$2 280 + local tid=$3 281 + 282 + ip netns exec hs-${hssrc} ping -c 1 -W ${PING_TIMEOUT_SEC} \ 283 + ${IPv4_HS_NETWORK}.${hsdst} >/dev/null 2>&1 284 + } 285 + 286 + check_and_log_hs_connectivity() 287 + { 288 + local hssrc=$1 289 + local hsdst=$2 290 + local tid=$3 291 + 292 + check_hs_connectivity ${hssrc} ${hsdst} ${tid} 293 + log_test $? 
0 "Hosts connectivity: hs-${hssrc} -> hs-${hsdst} (tenant ${tid})" 294 + } 295 + 296 + host_tests() 297 + { 298 + log_section "SRv6 VPN connectivity test among hosts in the same tenant" 299 + 300 + check_and_log_hs_connectivity 1 2 100 301 + check_and_log_hs_connectivity 2 1 100 302 + } 303 + 304 + router_netfilter_tests() 305 + { 306 + log_section "SRv6 VPN connectivity test with netfilter enabled in routers" 307 + setup_rt_netfilter 1 308 + setup_rt_netfilter 2 309 + 310 + check_and_log_hs_connectivity 1 2 100 311 + check_and_log_hs_connectivity 2 1 100 312 + } 313 + 314 + if [ "$(id -u)" -ne 0 ];then 315 + echo "SKIP: Need root privileges" 316 + exit $ksft_skip 317 + fi 318 + 319 + if [ ! -x "$(command -v ip)" ]; then 320 + echo "SKIP: Could not run test without ip tool" 321 + exit $ksft_skip 322 + fi 323 + 324 + cleanup &>/dev/null 325 + 326 + setup 327 + 328 + host_tests 329 + router_netfilter_tests 330 + 331 + print_log_test_results 332 + 333 + cleanup &>/dev/null 334 + 335 + exit ${ret}
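The pair of commands at the heart of `setup_rt_netfilter()` above can be summarized as a single helper; the function is defined but deliberately not invoked here, since both commands require root inside the router's network namespace:

```shell
# The two knobs the new test flips, mirroring setup_rt_netfilter above;
# defined but not invoked here because both commands need root inside
# the router's netns.
enable_srv6_netfilter() {
	# route lwtunnel (SRv6) packets through the netfilter hooks
	sysctl -wq net.netfilter.nf_hooks_lwtunnel=1
	# drop packets that fail a reverse-path check in raw/PREROUTING
	iptables -t raw -A PREROUTING -m rpfilter --invert -j DROP
}
status="defined enable_srv6_netfilter"
echo "${status}"
```

With `nf_hooks_lwtunnel=0` (the default), SRv6-encapsulated packets bypass the PREROUTING hook entirely and the rpfilter rule never sees them.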
+340
tools/testing/selftests/net/srv6_end_dx6_netfilter_test.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # author: Jianguo Wu <wujianguo@chinatelecom.cn> 5 + # 6 + # Mostly copied from tools/testing/selftests/net/srv6_end_dt6_l3vpn_test.sh. 7 + # 8 + # This script is designed for testing the support of netfilter hooks for 9 + # SRv6 End.DX6 behavior. 10 + # 11 + # Hereafter a network diagram is shown, where one tenant (named 100) offers 12 + # IPv6 L3 VPN services allowing hosts to communicate with each other across 13 + # an IPv6 network. 14 + # 15 + # Routers rt-1 and rt-2 implement IPv6 L3 VPN services leveraging the SRv6 16 + # architecture. The key components for such VPNs are: a) SRv6 Encap behavior, 17 + # b) SRv6 End.DX6 behavior. 18 + # 19 + # To explain how an IPv6 L3 VPN based on SRv6 works, let us briefly consider an 20 + # example where, within the same domain of tenant 100, the host hs-1 pings 21 + # the host hs-2. 22 + # 23 + # First of all, L2 reachability of the host hs-2 is taken into account by 24 + # the router rt-1 which acts as an NDP proxy. 25 + # 26 + # When the host hs-1 sends an IPv6 packet destined to hs-2, the router rt-1 27 + # receives the packet on the internal veth-t100 interface. rt-1 contains the 28 + # SRv6 Encap route for encapsulating the IPv6 packet in an IPv6 plus the Segment 29 + # Routing Header (SRH) packet. This packet is sent through the (IPv6) core 30 + # network up to the router rt-2 that receives it on its veth0 interface. 31 + # 32 + # The rt-2 router uses the 'localsid' routing table to process incoming 33 + # IPv6+SRH packets which belong to the VPN of the tenant 100. For each of these 34 + # packets, the SRv6 End.DX6 behavior removes the outer IPv6+SRH headers and 35 + # routes the packet to the specified nexthop. Afterwards, the packet is sent to 36 + # the host hs-2 through the veth-t100 interface. 37 + # 38 + # The ping response follows the same processing but this time the roles of rt-1 39 + # and rt-2 are swapped. 
40 + # 41 + # And when net.netfilter.nf_hooks_lwtunnel is set to 1 in rt-1 or rt-2, and a 42 + # rpfilter iptables rule is added, SRv6 packets will go through netfilter PREROUTING 43 + # hooks. 44 + # 45 + # 46 + # +-------------------+ +-------------------+ 47 + # | | | | 48 + # | hs-1 netns | | hs-2 netns | 49 + # | | | | 50 + # | +-------------+ | | +-------------+ | 51 + # | | veth0 | | | | veth0 | | 52 + # | | cafe::1/64 | | | | cafe::2/64 | | 53 + # | +-------------+ | | +-------------+ | 54 + # | . | | . | 55 + # +-------------------+ +-------------------+ 56 + # . . 57 + # . . 58 + # . . 59 + # +-----------------------------------+ +-----------------------------------+ 60 + # | . | | . | 61 + # | +---------------+ | | +---------------- | 62 + # | | veth-t100 | | | | veth-t100 | | 63 + # | | cafe::11/64 | +----------+ | | +----------+ | cafe::22/64 | | 64 + # | +-------+-------+ | route | | | | route | +-------+-------- | 65 + # | | table | | | | table | | 66 + # | +----------+ | | +----------+ | 67 + # | +--------------+ | | +--------------+ | 68 + # | | veth0 | | | | veth0 | | 69 + # | | 2001:11::1/64 |.|...|.| 2001:11::2/64 | | 70 + # | +--------------+ | | +--------------+ | 71 + # | | | | 72 + # | rt-1 netns | | rt-2 netns | 73 + # | | | | 74 + # +-----------------------------------+ +-----------------------------------+ 75 + # 76 + # ~~~~~~~~~~~~~~~~~~~~~~~~~ 77 + # | Network configuration | 78 + # ~~~~~~~~~~~~~~~~~~~~~~~~~ 79 + # 80 + # rt-1: localsid table 81 + # +----------------------------------------------------------------+ 82 + # |SID |Action | 83 + # +----------------------------------------------------------------+ 84 + # |fc00:21:100::6004|apply SRv6 End.DX6 nh6 cafe::1 dev veth-t100 | 85 + # +----------------------------------------------------------------+ 86 + # 87 + # rt-1: route table 88 + # +---------------------------------------------------+ 89 + # |host |Action | 90 + # +---------------------------------------------------+ 91 + # 
|cafe::2 |apply seg6 encap segs fc00:12:100::6004| 92 + # +---------------------------------------------------+ 93 + # |cafe::/64 |forward to dev veth_t100 | 94 + # +---------------------------------------------------+ 95 + # 96 + # 97 + # rt-2: localsid table 98 + # +---------------------------------------------------------------+ 99 + # |SID |Action | 100 + # +---------------------------------------------------------------+ 101 + # |fc00:12:100::6004|apply SRv6 End.DX6 nh6 cafe::2 dev veth-t100 | 102 + # +---------------------------------------------------------------+ 103 + # 104 + # rt-2: route table 105 + # +---------------------------------------------------+ 106 + # |host |Action | 107 + # +---------------------------------------------------+ 108 + # |cafe::1 |apply seg6 encap segs fc00:21:100::6004| 109 + # +---------------------------------------------------+ 110 + # |cafe::/64 |forward to dev veth_t100 | 111 + # +---------------------------------------------------+ 112 + # 113 + 114 + # Kselftest framework requirement - SKIP code is 4. 
115 + ksft_skip=4 116 + 117 + readonly IPv6_RT_NETWORK=2001:11 118 + readonly IPv6_HS_NETWORK=cafe 119 + readonly SID_LOCATOR=fc00 120 + 121 + PING_TIMEOUT_SEC=4 122 + 123 + ret=0 124 + 125 + PAUSE_ON_FAIL=${PAUSE_ON_FAIL:=no} 126 + 127 + log_test() 128 + { 129 + local rc=$1 130 + local expected=$2 131 + local msg="$3" 132 + 133 + if [ ${rc} -eq ${expected} ]; then 134 + nsuccess=$((nsuccess+1)) 135 + printf "\n TEST: %-60s [ OK ]\n" "${msg}" 136 + else 137 + ret=1 138 + nfail=$((nfail+1)) 139 + printf "\n TEST: %-60s [FAIL]\n" "${msg}" 140 + if [ "${PAUSE_ON_FAIL}" = "yes" ]; then 141 + echo 142 + echo "hit enter to continue, 'q' to quit" 143 + read a 144 + [ "$a" = "q" ] && exit 1 145 + fi 146 + fi 147 + } 148 + 149 + print_log_test_results() 150 + { 151 + if [ "$TESTS" != "none" ]; then 152 + printf "\nTests passed: %3d\n" ${nsuccess} 153 + printf "Tests failed: %3d\n" ${nfail} 154 + fi 155 + } 156 + 157 + log_section() 158 + { 159 + echo 160 + echo "################################################################################" 161 + echo "TEST SECTION: $*" 162 + echo "################################################################################" 163 + } 164 + 165 + cleanup() 166 + { 167 + ip link del veth-rt-1 2>/dev/null || true 168 + ip link del veth-rt-2 2>/dev/null || true 169 + 170 + # destroy routers rt-* and hosts hs-* 171 + for ns in $(ip netns show | grep -E 'rt-*|hs-*'); do 172 + ip netns del ${ns} || true 173 + done 174 + } 175 + 176 + # Setup the basic networking for the routers 177 + setup_rt_networking() 178 + { 179 + local rt=$1 180 + local nsname=rt-${rt} 181 + 182 + ip netns add ${nsname} 183 + 184 + ip netns exec ${nsname} sysctl -wq net.ipv6.conf.all.accept_dad=0 185 + ip netns exec ${nsname} sysctl -wq net.ipv6.conf.default.accept_dad=0 186 + 187 + ip link set veth-rt-${rt} netns ${nsname} 188 + ip -netns ${nsname} link set veth-rt-${rt} name veth0 189 + 190 + ip -netns ${nsname} addr add ${IPv6_RT_NETWORK}::${rt}/64 dev veth0 nodad 
191 + ip -netns ${nsname} link set veth0 up 192 + ip -netns ${nsname} link set lo up 193 + 194 + ip netns exec ${nsname} sysctl -wq net.ipv6.conf.all.forwarding=1 195 + } 196 + 197 + setup_rt_netfilter() 198 + { 199 + local rt=$1 200 + local nsname=rt-${rt} 201 + 202 + ip netns exec ${nsname} sysctl -wq net.netfilter.nf_hooks_lwtunnel=1 203 + ip netns exec ${nsname} ip6tables -t raw -A PREROUTING -m rpfilter --invert -j DROP 204 + } 205 + 206 + setup_hs() 207 + { 208 + local hs=$1 209 + local rt=$2 210 + local tid=$3 211 + local hsname=hs-${hs} 212 + local rtname=rt-${rt} 213 + local rtveth=veth-t${tid} 214 + 215 + # set the networking for the host 216 + ip netns add ${hsname} 217 + 218 + ip -netns ${hsname} link add veth0 type veth peer name ${rtveth} 219 + ip -netns ${hsname} link set ${rtveth} netns ${rtname} 220 + ip -netns ${hsname} addr add ${IPv6_HS_NETWORK}::${hs}/64 dev veth0 nodad 221 + ip -netns ${hsname} link set veth0 up 222 + ip -netns ${hsname} link set lo up 223 + 224 + ip -netns ${rtname} addr add ${IPv6_HS_NETWORK}::${rt}${hs}/64 dev ${rtveth} 225 + ip -netns ${rtname} link set ${rtveth} up 226 + 227 + ip netns exec ${rtname} sysctl -wq net.ipv6.conf.all.accept_dad=0 228 + ip netns exec ${rtname} sysctl -wq net.ipv6.conf.default.accept_dad=0 229 + 230 + ip netns exec ${rtname} sysctl -wq net.ipv6.conf.${rtveth}.proxy_ndp=1 231 + } 232 + 233 + setup_vpn_config() 234 + { 235 + local hssrc=$1 236 + local rtsrc=$2 237 + local hsdst=$3 238 + local rtdst=$4 239 + local tid=$5 240 + 241 + local hssrc_name=hs-t${tid}-${hssrc} 242 + local hsdst_name=hs-t${tid}-${hsdst} 243 + local rtsrc_name=rt-${rtsrc} 244 + local rtdst_name=rt-${rtdst} 245 + local rtveth=veth-t${tid} 246 + local vpn_sid=${SID_LOCATOR}:${hssrc}${hsdst}:${tid}::6004 247 + 248 + ip -netns ${rtsrc_name} -6 neigh add proxy ${IPv6_HS_NETWORK}::${hsdst} dev ${rtveth} 249 + 250 + # set the encap route for encapsulating packets which arrive from the 251 + # host hssrc and destined to the access 
router rtsrc. 252 + ip -netns ${rtsrc_name} -6 route add ${IPv6_HS_NETWORK}::${hsdst}/128 \ 253 + encap seg6 mode encap segs ${vpn_sid} dev veth0 254 + ip -netns ${rtsrc_name} -6 route add ${vpn_sid}/128 \ 255 + via 2001:11::${rtdst} dev veth0 256 + 257 + # set the decap route for decapsulating packets which arrive from 258 + # the rtdst router and destined to the hsdst host. 259 + ip -netns ${rtdst_name} -6 route add ${vpn_sid}/128 \ 260 + encap seg6local action End.DX6 nh6 ${IPv6_HS_NETWORK}::${hsdst} dev veth-t${tid} 261 + } 262 + 263 + setup() 264 + { 265 + ip link add veth-rt-1 type veth peer name veth-rt-2 266 + # setup the networking for router rt-1 and router rt-2 267 + setup_rt_networking 1 268 + setup_rt_networking 2 269 + 270 + # setup two hosts for the tenant 100. 271 + # - host hs-1 is directly connected to the router rt-1; 272 + # - host hs-2 is directly connected to the router rt-2. 273 + setup_hs 1 1 100 274 + setup_hs 2 2 100 275 + 276 + # setup the IPv6 L3 VPN which connects the host hs-1 and host hs-2. 277 + setup_vpn_config 1 1 2 2 100 #args: src_host src_router dst_host dst_router tenant 278 + setup_vpn_config 2 2 1 1 100 279 + } 280 + 281 + check_hs_connectivity() 282 + { 283 + local hssrc=$1 284 + local hsdst=$2 285 + local tid=$3 286 + 287 + ip netns exec hs-${hssrc} ping -6 -c 1 -W ${PING_TIMEOUT_SEC} \ 288 + ${IPv6_HS_NETWORK}::${hsdst} >/dev/null 2>&1 289 + } 290 + 291 + check_and_log_hs_connectivity() 292 + { 293 + local hssrc=$1 294 + local hsdst=$2 295 + local tid=$3 296 + 297 + check_hs_connectivity ${hssrc} ${hsdst} ${tid} 298 + log_test $? 
0 "Hosts connectivity: hs-${hssrc} -> hs-${hsdst} (tenant ${tid})" 299 + } 300 + 301 + host_tests() 302 + { 303 + log_section "SRv6 VPN connectivity test among hosts in the same tenant" 304 + 305 + check_and_log_hs_connectivity 1 2 100 306 + check_and_log_hs_connectivity 2 1 100 307 + } 308 + 309 + router_netfilter_tests() 310 + { 311 + log_section "SRv6 VPN connectivity test with netfilter enabled in routers" 312 + setup_rt_netfilter 1 313 + setup_rt_netfilter 2 314 + 315 + check_and_log_hs_connectivity 1 2 100 316 + check_and_log_hs_connectivity 2 1 100 317 + } 318 + 319 + if [ "$(id -u)" -ne 0 ];then 320 + echo "SKIP: Need root privileges" 321 + exit $ksft_skip 322 + fi 323 + 324 + if [ ! -x "$(command -v ip)" ]; then 325 + echo "SKIP: Could not run test without ip tool" 326 + exit $ksft_skip 327 + fi 328 + 329 + cleanup &>/dev/null 330 + 331 + setup 332 + 333 + host_tests 334 + router_netfilter_tests 335 + 336 + print_log_test_results 337 + 338 + cleanup &>/dev/null 339 + 340 + exit ${ret}
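The dx6 variant exercises the same encap/decap mechanic as the dx4 script, just with `End.DX6`/`nh6`. The core route pair from `setup_vpn_config()`, sketched as a single helper (addresses taken from the script; defined but not run here, since the routes require root in the respective router netns):

```shell
# Encap/decap route pair for one direction of the tenant-100 VPN;
# defined but not invoked because adding routes needs root inside
# the rt-1/rt-2 network namespaces.
setup_srv6_vpn_routes() {
	# rt-1 side: steer traffic for hs-2 into an SRv6 tunnel towards the SID
	ip -6 route add cafe::2/128 encap seg6 mode encap segs fc00:12:100::6004 dev veth0
	# rt-2 side: End.DX6 pops the outer IPv6+SRH and forwards to the host
	ip -6 route add fc00:12:100::6004/128 encap seg6local action End.DX6 nh6 cafe::2 dev veth-t100
}
status="defined setup_srv6_vpn_routes"
echo "${status}"
```

The reverse direction mirrors this with the SID `fc00:21:100::6004` and nexthop `cafe::1`.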
+12 -2
tools/testing/selftests/openat2/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-or-later 2 2 3 - CFLAGS += -Wall -O2 -g -fsanitize=address -fsanitize=undefined -static-libasan 3 + CFLAGS += -Wall -O2 -g -fsanitize=address -fsanitize=undefined 4 4 TEST_GEN_PROGS := openat2_test resolve_test rename_attack_test 5 + 6 + # gcc requires -static-libasan in order to ensure that Address Sanitizer's 7 + # library is the first one loaded. However, clang already statically links the 8 + # Address Sanitizer if -fsanitize is specified. Therefore, simply omit 9 + # -static-libasan for clang builds. 10 + ifeq ($(LLVM),) 11 + CFLAGS += -static-libasan 12 + endif 13 + 14 + LOCAL_HDRS += helpers.h 5 15 6 16 include ../lib.mk 7 17 8 - $(TEST_GEN_PROGS): helpers.c helpers.h 18 + $(TEST_GEN_PROGS): helpers.c
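The Makefile change can be paraphrased as: keep the sanitizer flags for both compilers, and add `-static-libasan` only for gcc. A shell sketch of that decision (`LLVM` stands in for the kselftest make variable of the same name; empty means a gcc build):

```shell
# Mirror of the new ifeq ($(LLVM),) conditional: gcc needs
# -static-libasan so libasan is loaded first, while clang links its
# sanitizer runtime statically by default.
LLVM=""                                   # empty: gcc build
CFLAGS="-Wall -O2 -g -fsanitize=address -fsanitize=undefined"
if [ -z "${LLVM}" ]; then
	CFLAGS="${CFLAGS} -static-libasan"    # gcc only
fi
echo "${CFLAGS}"
```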
+3 -3
tools/testing/selftests/seccomp/seccomp_benchmark.c
··· 194 194 ksft_set_plan(7); 195 195 196 196 ksft_print_msg("Running on:\n"); 197 - ksft_print_msg(""); 197 + ksft_print_msg("%s", ""); 198 198 system("uname -a"); 199 199 200 200 ksft_print_msg("Current BPF sysctl settings:\n"); 201 201 /* Avoid using "sysctl" which may not be installed. */ 202 - ksft_print_msg(""); 202 + ksft_print_msg("%s", ""); 203 203 system("grep -H . /proc/sys/net/core/bpf_jit_enable"); 204 - ksft_print_msg(""); 204 + ksft_print_msg("%s", ""); 205 205 system("grep -H . /proc/sys/net/core/bpf_jit_harden"); 206 206 207 207 affinity();
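`ksft_print_msg()` is printf-like, so an empty string literal in the format position triggers compiler format-string warnings (e.g. `-Wformat-zero-length`); routing the empty string through a constant `"%s"` format, as the hunk does, avoids them. The same data-vs-format distinction, sketched with shell `printf`:

```shell
# Data must never sit in the format position: any % sequence it
# contains would be interpreted as a conversion directive.
msg='100%s done'
wrong="$(printf "${msg}")"      # %s in the data is parsed as a directive
right="$(printf '%s' "${msg}")" # data flows through a fixed "%s" format
printf '%s\n%s\n' "${wrong}" "${right}"
```

The "wrong" form silently drops the literal `%s` (it is evaluated against a missing argument), while the "right" form prints the data verbatim.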