Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge v6.13 into drm-next

A regression was caused by commit e4b5ccd392b9 ("drm/v3d: Ensure job
pointer is set to NULL after job completion"), but this commit is not
yet in next-fixes, so fast-forward to include it.

Note that this recreates Linus' merge in 96c84703f1cf ("Merge tag
'drm-next-2025-01-17' of https://gitlab.freedesktop.org/drm/kernel")
because I didn't want to backmerge a random point in the merge window.

Signed-off-by: Simona Vetter <simona.vetter@ffwll.ch>

+4401 -2641
+3
.mailmap
···
 Benjamin Poirier <benjamin.poirier@gmail.com> <bpoirier@suse.de>
 Benjamin Tissoires <bentiss@kernel.org> <benjamin.tissoires@gmail.com>
 Benjamin Tissoires <bentiss@kernel.org> <benjamin.tissoires@redhat.com>
+Bingwu Zhang <xtex@aosc.io> <xtexchooser@duck.com>
+Bingwu Zhang <xtex@aosc.io> <xtex@xtexx.eu.org>
 Bjorn Andersson <andersson@kernel.org> <bjorn@kryo.se>
 Bjorn Andersson <andersson@kernel.org> <bjorn.andersson@linaro.org>
 Bjorn Andersson <andersson@kernel.org> <bjorn.andersson@sonymobile.com>
···
 Enric Balletbo i Serra <eballetbo@kernel.org> <enric.balletbo@collabora.com>
 Enric Balletbo i Serra <eballetbo@kernel.org> <eballetbo@iseebcn.com>
 Erik Kaneda <erik.kaneda@intel.com> <erik.schmauss@intel.com>
+Ethan Carter Edwards <ethan@ethancedwards.com> Ethan Edwards <ethancarteredwards@gmail.com>
 Eugen Hristev <eugen.hristev@linaro.org> <eugen.hristev@microchip.com>
 Eugen Hristev <eugen.hristev@linaro.org> <eugen.hristev@collabora.com>
 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+12
CREDITS
···
 E: thomas.ab@samsung.com
 D: Samsung pin controller driver

+N: Jose Abreu
+E: jose.abreu@synopsys.com
+D: Synopsys DesignWare XPCS MDIO/PCS driver.
+
 N: Dragos Acostachioaie
 E: dragos@iname.com
 W: http://www.arbornet.org/~dragos
···
 S: Sterling Heights, Michigan 48313
 S: USA

+N: Andy Gospodarek
+E: andy@greyhouse.net
+D: Maintenance and contributions to the network interface bonding driver.
+
 N: Wolfgang Grandegger
 E: wg@grandegger.com
 D: Controller Area Network (device drivers)
···
 D: Author/maintainer of most DRM drivers (especially ATI, MGA)
 D: Core DRM templates, general DRM and 3D-related hacking
 S: No fixed address
+
+N: Woojung Huh
+E: woojung.huh@microchip.com
+D: Microchip LAN78XX USB Ethernet driver

 N: Kenn Humborg
 E: kenn@wombat.ie
+30 -40
Documentation/admin-guide/pm/cpuidle.rst
···
 the CPU will ask the processor hardware to enter), it attempts to predict the
 idle duration and uses the predicted value for idle state selection.

-It first obtains the time until the closest timer event with the assumption
-that the scheduler tick will be stopped.  That time, referred to as the *sleep
-length* in what follows, is the upper bound on the time before the next CPU
-wakeup.  It is used to determine the sleep length range, which in turn is needed
-to get the sleep length correction factor.
-
-The ``menu`` governor maintains two arrays of sleep length correction factors.
-One of them is used when tasks previously running on the given CPU are waiting
-for some I/O operations to complete and the other one is used when that is not
-the case.  Each array contains several correction factor values that correspond
-to different sleep length ranges organized so that each range represented in the
-array is approximately 10 times wider than the previous one.
-
-The correction factor for the given sleep length range (determined before
-selecting the idle state for the CPU) is updated after the CPU has been woken
-up and the closer the sleep length is to the observed idle duration, the closer
-to 1 the correction factor becomes (it must fall between 0 and 1 inclusive).
-The sleep length is multiplied by the correction factor for the range that it
-falls into to obtain the first approximation of the predicted idle duration.
-
-Next, the governor uses a simple pattern recognition algorithm to refine its
+It first uses a simple pattern recognition algorithm to obtain a preliminary
 idle duration prediction.  Namely, it saves the last 8 observed idle duration
 values and, when predicting the idle duration next time, it computes the average
 and variance of them.  If the variance is small (smaller than 400 square
···
 taken as the "typical interval" value and so on, until either the "typical
 interval" is determined or too many data points are disregarded, in which case
 the "typical interval" is assumed to equal "infinity" (the maximum unsigned
-integer value).  The "typical interval" computed this way is compared with the
-sleep length multiplied by the correction factor and the minimum of the two is
-taken as the predicted idle duration.
+integer value).

-Then, the governor computes an extra latency limit to help "interactive"
-workloads.  It uses the observation that if the exit latency of the selected
-idle state is comparable with the predicted idle duration, the total time spent
-in that state probably will be very short and the amount of energy to save by
-entering it will be relatively small, so likely it is better to avoid the
-overhead related to entering that state and exiting it.  Thus selecting a
-shallower state is likely to be a better option then.  The first approximation
-of the extra latency limit is the predicted idle duration itself which
-additionally is divided by a value depending on the number of tasks that
-previously ran on the given CPU and now they are waiting for I/O operations to
-complete.  The result of that division is compared with the latency limit coming
-from the power management quality of service, or `PM QoS <cpu-pm-qos_>`_,
-framework and the minimum of the two is taken as the limit for the idle states'
-exit latency.
+If the "typical interval" computed this way is long enough, the governor obtains
+the time until the closest timer event with the assumption that the scheduler
+tick will be stopped.  That time, referred to as the *sleep length* in what
+follows, is the upper bound on the time before the next CPU wakeup.  It is used
+to determine the sleep length range, which in turn is needed to get the sleep
+length correction factor.
+
+The ``menu`` governor maintains an array containing several correction factor
+values that correspond to different sleep length ranges organized so that each
+range represented in the array is approximately 10 times wider than the previous
+one.
+
+The correction factor for the given sleep length range (determined before
+selecting the idle state for the CPU) is updated after the CPU has been woken
+up and the closer the sleep length is to the observed idle duration, the closer
+to 1 the correction factor becomes (it must fall between 0 and 1 inclusive).
+The sleep length is multiplied by the correction factor for the range that it
+falls into to obtain an approximation of the predicted idle duration that is
+compared to the "typical interval" determined previously and the minimum of
+the two is taken as the idle duration prediction.
+
+If the "typical interval" value is small, which means that the CPU is likely
+to be woken up soon enough, the sleep length computation is skipped as it may
+be costly and the idle duration is simply predicted to equal the "typical
+interval" value.

 Now, the governor is ready to walk the list of idle states and choose one of
 them.  For this purpose, it compares the target residency of each state with
-the predicted idle duration and the exit latency of it with the computed latency
-limit.  It selects the state with the target residency closest to the predicted
+the predicted idle duration and the exit latency of it with the latency
+limit coming from the power management quality of service, or `PM QoS <cpu-pm-qos_>`_,
+framework.  It selects the state with the target residency closest to the predicted
 idle duration, but still below it, and exit latency that does not exceed the
 limit.
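The new ``menu`` governor flow described in this hunk (typical-interval pattern recognition first, sleep length and correction factor only when needed, then state selection against the PM QoS latency limit) can be sketched in Python. This is a simplified illustration of the logic as documented, not the kernel's actual C implementation; all names and the outlier-discarding loop are assumptions based on the text:

```python
def typical_interval(samples):
    """Average of the last 8 observed idle durations, discarding the largest
    outliers until the variance is small; 'infinity' if no stable value."""
    data = list(samples)
    while len(data) >= len(samples) // 2:
        avg = sum(data) / len(data)
        var = sum((x - avg) ** 2 for x in data) / len(data)
        if var < 400:  # variance small enough: the average is representative
            return avg
        data.remove(max(data))  # disregard the largest data point and retry
    return float("inf")  # "typical interval" assumed infinite

def predict_idle_duration(samples, sleep_length, correction):
    """Sleep length (time to next timer event) scaled by the correction
    factor for its range, capped by the typical interval."""
    return min(typical_interval(samples), sleep_length * correction)

def select_state(states, predicted, latency_limit):
    """Pick the state with the largest target residency still below the
    predicted idle duration, whose exit latency is within the PM QoS limit.
    `states` is a list of (target_residency, exit_latency) pairs."""
    best = None
    for residency, exit_latency in states:
        if residency <= predicted and exit_latency <= latency_limit:
            if best is None or residency > best[0]:
                best = (residency, exit_latency)
    return best
```

For example, with eight ~100 µs idle samples, a 500 µs sleep length and a 0.8 correction factor, the prediction is capped at 100 µs by the typical interval, steering the governor toward a shallower state.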
+18 -1
Documentation/devicetree/bindings/display/mediatek/mediatek,dp.yaml
···
   interrupts:
     maxItems: 1

+  '#sound-dai-cells':
+    const: 0
+
   ports:
     $ref: /schemas/graph.yaml#/properties/ports
     properties:
···
   - ports
   - max-linkrate-mhz

-additionalProperties: false
+allOf:
+  - $ref: /schemas/sound/dai-common.yaml#
+  - if:
+      not:
+        properties:
+          compatible:
+            contains:
+              enum:
+                - mediatek,mt8188-dp-tx
+                - mediatek,mt8195-dp-tx
+    then:
+      properties:
+        '#sound-dai-cells': false
+
+unevaluatedProperties: false

 examples:
   - |
+1
Documentation/devicetree/bindings/iio/st,st-sensors.yaml
···
           - st,lsm9ds0-gyro
       - description: STMicroelectronics Magnetometers
         enum:
+          - st,iis2mdc
           - st,lis2mdl
           - st,lis3mdl-magn
           - st,lsm303agr-magn
+1 -1
Documentation/devicetree/bindings/net/pse-pd/pse-controller.yaml
···
       List of phandles, each pointing to the power supply for the
       corresponding pairset named in 'pairset-names'. This property
       aligns with IEEE 802.3-2022, Section 33.2.3 and 145.2.4.
-      PSE Pinout Alternatives (as per IEEE 802.3-2022 Table 145\u20133)
+      PSE Pinout Alternatives (as per IEEE 802.3-2022 Table 145-3)
       |-----------|---------------|---------------|---------------|---------------|
       | Conductor | Alternative A | Alternative A | Alternative B | Alternative B |
       |           | (MDI-X)       | (MDI)         | (X)           | (S)           |
+292
Documentation/sound/codecs/cs35l56.rst
.. SPDX-License-Identifier: GPL-2.0-only

=====================================================================
Audio drivers for Cirrus Logic CS35L54/56/57 Boosted Smart Amplifiers
=====================================================================
:Copyright: 2025 Cirrus Logic, Inc. and
            Cirrus Logic International Semiconductor Ltd.

Contact: patches@opensource.cirrus.com

Summary
=======

The high-level summary of this document is:

**If you have a laptop that uses CS35L54/56/57 amplifiers but audio is not
working, DO NOT ATTEMPT TO USE FIRMWARE AND SETTINGS FROM ANOTHER LAPTOP,
EVEN IF THAT LAPTOP SEEMS SIMILAR.**

The CS35L54/56/57 amplifiers must be correctly configured for the power
supply voltage, speaker impedance, maximum speaker voltage/current, and
other external hardware connections.

The amplifiers feature advanced boost technology that increases the voltage
used to drive the speakers, while proprietary speaker protection algorithms
allow these boosted amplifiers to push the limits of the speakers without
causing damage. These **must** be configured correctly.

Supported Cirrus Logic amplifiers
---------------------------------

The cs35l56 drivers support:

* CS35L54
* CS35L56
* CS35L57

There are two drivers in the kernel:

*For systems using SoundWire*: sound/soc/codecs/cs35l56.c and associated files

*For systems using HDA*: sound/pci/hda/cs35l56_hda.c

Firmware
========

The amplifier is controlled and managed by firmware running on the internal
DSP. Firmware files are essential to enable the full capabilities of the
amplifier.

Firmware is distributed in the linux-firmware repository:
https://gitlab.com/kernel-firmware/linux-firmware.git

On most SoundWire systems the amplifier has a default minimum capability to
produce audio. However this will be:

* at low volume, to protect the speakers, since the speaker specifications
  and power supply voltages are unknown.
* a mono mix of left and right channels.

On some SoundWire systems that have both CS42L43 and CS35L56/57 the CS35L56/57
receive their audio from the CS42L43 instead of directly from the host
SoundWire interface. These systems can be identified by the CS42L43 showing
in dmesg as a SoundWire device, but the CS35L56/57 as SPI. On these systems
the firmware is *mandatory* to enable receiving the audio from the CS42L43.

On HDA systems the firmware is *mandatory* to enable HDA bridge mode. There
will not be any audio from the amplifiers without firmware.

Cirrus Logic firmware files
---------------------------

Each amplifier requires two firmware files. One file has a .wmfw suffix, the
other has a .bin suffix.

The firmware is customized by the OEM to match the hardware of each laptop,
and the firmware is specific to that laptop. Because of this, there are many
firmware files in linux-firmware for these amplifiers. Firmware files are
**not interchangeable between laptops**.

Cirrus Logic submits files for known laptops to the upstream linux-firmware
repository. Providing Cirrus Logic is aware of a particular laptop and has
permission from the manufacturer to publish the firmware, it will be pushed
to linux-firmware. You may need to upgrade to a newer release of
linux-firmware to obtain the firmware for your laptop.

**Important:** the Makefile for linux-firmware creates symlinks that are listed
in the WHENCE file. These symlinks are required for the CS35L56 driver to be
able to load the firmware.

How do I know which firmware file I should have?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All firmware file names are qualified with a unique "system ID". On normal
x86 PCs with PCI audio this is the Vendor Subsystem ID (SSID) of the host
PCI audio interface.

The SSID can be viewed using the lspci tool::

  lspci -v -nn | grep -A2 -i audio
  0000:00:1f.3 Audio device [0403]: Intel Corporation Meteor Lake-P HD Audio Controller [8086:7e28]
          Subsystem: Dell Meteor Lake-P HD Audio Controller [1028:0c63]

In this example the SSID is 10280c63.

The format of the firmware file names is:

  cs35lxx-b0-dsp1-misc-SSID[-spkidX]-ampN

Where:

* cs35lxx-b0 is the amplifier model and silicon revision. This information
  is logged by the driver during initialization.
* SSID is the 8-digit hexadecimal SSID value.
* ampN is the amplifier number (for example amp1). This is the same as
  the prefix on the ALSA control names except that it is always lower-case
  in the file name.
* spkidX is an optional part, used for laptops that have firmware
  configurations for different makes and models of internal speakers.

Sound Open Firmware and ALSA topology files
-------------------------------------------

All SoundWire systems will require a Sound Open Firmware (SOF) for the
host CPU audio DSP, together with an ALSA topology file (.tplg).

The SOF firmware will usually be provided by the manufacturer of the host
CPU (i.e. Intel or AMD). The .tplg file is normally part of the SOF firmware
release.

SOF binary builds are available from: https://github.com/thesofproject/sof-bin/releases

The main SOF source is here: https://github.com/thesofproject

ALSA-ucm configurations
-----------------------
Typically an appropriate ALSA-ucm configuration file is needed for
use-case managers and audio servers such as PipeWire.

Configuration files are available from the alsa-ucm-conf repository:
https://git.alsa-project.org/?p=alsa-ucm-conf.git

Kernel log messages
===================

SoundWire
---------
A successful initialization will look like this (this will be repeated for
each amplifier)::

  [ 7.568374] cs35l56 sdw:0:0:01fa:3556:01:0: supply VDD_P not found, using dummy regulator
  [ 7.605208] cs35l56 sdw:0:0:01fa:3556:01:0: supply VDD_IO not found, using dummy regulator
  [ 7.605313] cs35l56 sdw:0:0:01fa:3556:01:0: supply VDD_A not found, using dummy regulator
  [ 7.939279] cs35l56 sdw:0:0:01fa:3556:01:0: Cirrus Logic CS35L56 Rev B0 OTP3 fw:3.4.4 (patched=0)
  [ 7.947844] cs35l56 sdw:0:0:01fa:3556:01:0: Slave 4 state check1: UNATTACHED, status was 1
  [ 8.740280] cs35l56 sdw:0:0:01fa:3556:01:0: supply VDD_B not found, using dummy regulator
  [ 8.740552] cs35l56 sdw:0:0:01fa:3556:01:0: supply VDD_AMP not found, using dummy regulator
  [ 9.242164] cs35l56 sdw:0:0:01fa:3556:01:0: DSP1: cirrus/cs35l56-b0-dsp1-misc-xxxxxxxx.wmfw: format 3 timestamp 0x66b2b872
  [ 9.242173] cs35l56 sdw:0:0:01fa:3556:01:0: DSP1: cirrus/cs35l56-b0-dsp1-misc-xxxxxxxx.wmfw: Tue 05 Dec 2023 21:37:21 GMT Standard Time
  [ 9.991709] cs35l56 sdw:0:0:01fa:3556:01:0: DSP1: Firmware: 1a00d6 vendor: 0x2 v3.11.23, 41 algorithms
  [10.039098] cs35l56 sdw:0:0:01fa:3556:01:0: DSP1: cirrus/cs35l56-b0-dsp1-misc-xxxxxxxx-amp1.bin: v3.11.23
  [10.879235] cs35l56 sdw:0:0:01fa:3556:01:0: Slave 4 state check1: UNATTACHED, status was 1
  [11.401536] cs35l56 sdw:0:0:01fa:3556:01:0: Calibration applied

HDA
---
A successful initialization will look like this (this will be repeated for
each amplifier)::

  [ 6.306475] cs35l56-hda i2c-CSC3556:00-cs35l56-hda.0: Cirrus Logic CS35L56 Rev B0 OTP3 fw:3.4.4 (patched=0)
  [ 6.613892] cs35l56-hda i2c-CSC3556:00-cs35l56-hda.0: DSP system name: 'xxxxxxxx', amp name: 'AMP1'
  [ 8.266660] snd_hda_codec_cs8409 ehdaudio0D0: bound i2c-CSC3556:00-cs35l56-hda.0 (ops cs35l56_hda_comp_ops [snd_hda_scodec_cs35l56])
  [ 8.287525] cs35l56-hda i2c-CSC3556:00-cs35l56-hda.0: DSP1: cirrus/cs35l56-b0-dsp1-misc-xxxxxxxx.wmfw: format 3 timestamp 0x66b2b872
  [ 8.287528] cs35l56-hda i2c-CSC3556:00-cs35l56-hda.0: DSP1: cirrus/cs35l56-b0-dsp1-misc-xxxxxxxx.wmfw: Tue 05 Dec 2023 21:37:21 GMT Standard Time
  [ 9.984335] cs35l56-hda i2c-CSC3556:00-cs35l56-hda.0: DSP1: Firmware: 1a00d6 vendor: 0x2 v3.11.23, 41 algorithms
  [10.085797] cs35l56-hda i2c-CSC3556:00-cs35l56-hda.0: DSP1: cirrus/cs35l56-b0-dsp1-misc-xxxxxxxx-amp1.bin: v3.11.23
  [10.655237] cs35l56-hda i2c-CSC3556:00-cs35l56-hda.0: Calibration applied

Important messages
~~~~~~~~~~~~~~~~~~
Cirrus Logic CS35L56 Rev B0 OTP3 fw:3.4.4 (patched=0)
  Shows that the driver has been able to read device ID registers from the
  amplifier.

  * The actual amplifier type and silicon revision (CS35L56 B0 in this
    example) is shown, as read from the amplifier identification registers.
  * (patched=0) is normal, and indicates that the amplifier has been hard
    reset and is running default ROM firmware.
  * (patched=1) means that something has previously downloaded firmware
    to the amplifier and the driver does not have control of the RESET
    signal to be able to replace this preloaded firmware. This is normal
    for systems where the BIOS downloads firmware to the amplifiers
    before OS boot.
    This status can also be seen if the cs35l56 kernel module is unloaded
    and reloaded on a system where the driver does not have control of
    RESET. SoundWire systems typically do not give the driver control of
    RESET and only a BIOS (re)boot can reset the amplifiers.

DSP1: cirrus/cs35l56-b0-dsp1-misc-xxxxxxxx.wmfw
  Shows that a .wmfw firmware file was found and downloaded.

DSP1: cirrus/cs35l56-b0-dsp1-misc-xxxxxxxx-amp1.bin
  Shows that a .bin firmware file was found and downloaded.

Calibration applied
  Factory calibration data in EFI was written to the amplifier.

Error messages
==============
This section explains some of the error messages that the driver can log.

Algorithm coefficient version %d.%d.%d but expected %d.%d.%d
  The version of the .bin file content does not match the loaded firmware.
  Caused by a mismatched .wmfw and .bin file, or a .bin file that was found
  when the .wmfw was not.

No %s for algorithm %x
  The version of the .bin file content does not match the loaded firmware.
  Caused by a mismatched .wmfw and .bin file, or a .bin file that was found
  when the .wmfw was not.

.bin file required but not found
  The HDA driver did not find a .bin file that matches this hardware.

Calibration disabled due to missing firmware controls
  The driver was not able to write EFI calibration data to firmware
  registers. This typically means that either:

  * The driver did not find a suitable wmfw for this hardware, or
  * The amplifier has already been patched with firmware by something
    previously, and the driver does not have control of a hard RESET line
    to be able to reset the amplifier and download the firmware files it
    found. This situation is indicated by the device identification
    string in the kernel log showing "(patched=1)".

Failed to write calibration
  Same meaning and cause as "Calibration disabled due to missing firmware
  controls".

Failed to read calibration data from EFI
  Factory calibration data in EFI is missing, empty or corrupt.
  This is most likely to be caused by accidentally deleting the file from
  the EFI filesystem.

No calibration for silicon ID
  The factory calibration data in EFI does not match this hardware.
  The most likely cause is that an amplifier has been replaced on the
  motherboard without going through the manufacturer's calibration process
  to generate calibration data for the new amplifier.

Did not find any buses for CSCxxxx
  Only on HDA systems. The HDA codec driver found an ACPI entry for
  Cirrus Logic companion amps, but could not enumerate the ACPI entries for
  the I2C/SPI buses. The most likely causes of this are:

  * The relevant bus driver (I2C or SPI) is not part of the kernel.
  * The HDA codec driver was built-in to the kernel but the I2C/SPI
    bus driver is a module and so the HDA codec driver cannot call the
    bus driver functions.

init_completion timed out
  The SoundWire bus controller (host end) did not enumerate the amplifier.
  In other words, the ACPI says there is an amplifier but for some reason
  it was not detected on the bus.

No AF01 node
  Indicates an error in ACPI. A SoundWire system should have a Device()
  node named "AF01" but it was not found.

Failed to get spk-id-gpios
  ACPI says that the driver should request a GPIO but the driver was not
  able to get that GPIO. The most likely cause is that the kernel does not
  include the correct GPIO or PINCTRL driver for this system.

Failed to read spk-id
  ACPI says that the driver should request a GPIO but the driver was not
  able to read that GPIO.

Unexpected spk-id element count
  AF01 contains more speaker ID GPIO entries than the driver supports.

Overtemp error
  Amplifier overheat protection was triggered and the amplifier shut down
  to protect itself.

Amp short error
  The amplifier detected a short-circuit on the speaker output pins and shut
  down for protection. This would normally indicate a damaged speaker.

Hibernate wake failed
  The driver tried to wake the amplifier from its power-saving state but
  did not see the expected responses from the amplifier. This can be caused
  by using firmware that does not match the hardware.
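The cs35lxx-b0-dsp1-misc-SSID[-spkidX]-ampN naming scheme described in the document above can be made concrete with a small sketch. The helper function is hypothetical (not part of any driver or tool); the convention that the .wmfw name carries no -ampN suffix while the .bin does is taken from the example log lines above:

```python
def cs35l56_firmware_names(part, rev, ssid, amp, spk_id=None):
    """Compose the expected .wmfw/.bin firmware file names for one amplifier.

    part:   amplifier model, e.g. "cs35l56"
    rev:    silicon revision, e.g. "b0"
    ssid:   host audio Vendor Subsystem ID as an integer
    amp:    amplifier number N (lower-case "ampN" in the file name)
    spk_id: optional speaker ID X for the "-spkidX" part
    """
    base = f"{part}-{rev}-dsp1-misc-{ssid:08x}"
    spk = f"-spkid{spk_id}" if spk_id is not None else ""
    wmfw = f"{base}.wmfw"               # shared by all amps (per the logs)
    bin_ = f"{base}{spk}-amp{amp}.bin"  # one .bin per amplifier
    return wmfw, bin_

# SSID 1028:0c63 from the lspci example above -> "10280c63"
wmfw, bin_name = cs35l56_firmware_names("cs35l56", "b0", 0x10280c63, 1)
```

This matches the file names shown in the example kernel logs, with the SSID substituted for the "xxxxxxxx" placeholder.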
+9
Documentation/sound/codecs/index.rst
.. SPDX-License-Identifier: GPL-2.0

Codec-Specific Information
==========================

.. toctree::
   :maxdepth: 2

   cs35l56
+1
Documentation/sound/index.rst
···
    alsa-configuration
    hd-audio/index
    cards/index
+   codecs/index
    utimers

 .. only:: subproject and html
+6
Documentation/trace/ftrace.rst
···
         to draw a graph of function calls similar to C code
         source.

+        Note that the function graph tracer calculates the timings of when
+        the function starts and returns internally, and does so for each
+        instance. If two instances run the function graph tracer and trace
+        the same functions, the reported timings may be slightly off, as
+        each instance reads the timestamp separately rather than at the
+        same time.
+
   "blk"

         The block tracer. The tracer used by the blktrace user
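The timing caveat in the ftrace hunk above can be illustrated with a toy model (plain Python, not ftrace code): two tracer "instances" bracket the same function call, but each reads the timestamp for itself, and every read itself takes time:

```python
import itertools

# A fake monotonic clock where each read costs time: every call to
# read_timestamp() returns a value 3 "units" later than the previous one.
_ticks = itertools.count(0, 3)

def read_timestamp():
    return next(_ticks)

# Two instances bracket the same traced function; their reads interleave
# rather than happening at the same instant.
start_a = read_timestamp()   # instance A reads first:  t = 0
start_b = read_timestamp()   # instance B reads second: t = 3
# ... traced function body would run here ...
end_b = read_timestamp()     # instance B reads first:  t = 6
end_a = read_timestamp()     # instance A reads second: t = 9

duration_a = end_a - start_a
duration_b = end_b - start_b
```

Both instances measured the same call, yet ``duration_a`` and ``duration_b`` differ because each read happened at a different moment, which is the effect the documentation note warns about.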
+3
Documentation/virt/kvm/api.rst
···
   #define KVM_IRQ_ROUTING_HV_SINT 4
   #define KVM_IRQ_ROUTING_XEN_EVTCHN 5

+On s390, adding a KVM_IRQ_ROUTING_S390_ADAPTER is rejected on ucontrol VMs with
+error -EINVAL.
+
 flags:

 - KVM_MSI_VALID_DEVID: used along with KVM_IRQ_ROUTING_MSI routing entry
+4
Documentation/virt/kvm/devices/s390_flic.rst
···
     Enables async page faults for the guest. So in case of a major page fault
     the host is allowed to handle this async and continues the guest.

+    -EINVAL is returned when called on the FLIC of a ucontrol VM.
+
 KVM_DEV_FLIC_APF_DISABLE_WAIT
     Disables async page faults for the guest and waits until already pending
     async page faults are done. This is necessary to trigger a completion interrupt
     for every init interrupt before migrating the interrupt list.
+
+    -EINVAL is returned when called on the FLIC of a ucontrol VM.

 KVM_DEV_FLIC_ADAPTER_REGISTER
     Register an I/O adapter interrupt source. Takes a kvm_s390_io_adapter
+8 -15
MAINTAINERS
···
 M:	Shay Agroskin <shayagr@amazon.com>
 M:	Arthur Kiyanovski <akiyano@amazon.com>
 R:	David Arinzon <darinzon@amazon.com>
-R:	Noam Dagan <ndagan@amazon.com>
 R:	Saeed Bishara <saeedb@amazon.com>
 L:	netdev@vger.kernel.org
 S:	Supported
···
 N:	atmel

 ARM/Microchip Sparx5 SoC support
-M:	Lars Povlsen <lars.povlsen@microchip.com>
 M:	Steen Hegelund <Steen.Hegelund@microchip.com>
 M:	Daniel Machon <daniel.machon@microchip.com>
 M:	UNGLinuxDriver@microchip.com
···

 BONDING DRIVER
 M:	Jay Vosburgh <jv@jvosburgh.net>
-M:	Andy Gospodarek <andy@greyhouse.net>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	Documentation/networking/bonding.rst
···
 F:	drivers/net/ethernet/netronome/nfp/bpf/

 BPF JIT for POWERPC (32-BIT AND 64-BIT)
-M:	Michael Ellerman <mpe@ellerman.id.au>
 M:	Hari Bathini <hbathini@linux.ibm.com>
 M:	Christophe Leroy <christophe.leroy@csgroup.eu>
 R:	Naveen N Rao <naveen@kernel.org>
···
 L:	patches@opensource.cirrus.com
 S:	Maintained
 F:	Documentation/devicetree/bindings/sound/cirrus,cs*
+F:	Documentation/sound/codecs/cs*
 F:	drivers/mfd/cs42l43*
 F:	drivers/pinctrl/cirrus/pinctrl-cs42l43*
 F:	drivers/spi/spi-cs42l43*
···
 F:	arch/mips/kvm/

 KERNEL VIRTUAL MACHINE FOR POWERPC (KVM/powerpc)
-M:	Michael Ellerman <mpe@ellerman.id.au>
+M:	Madhavan Srinivasan <maddy@linux.ibm.com>
 R:	Nicholas Piggin <npiggin@gmail.com>
 L:	linuxppc-dev@lists.ozlabs.org
 L:	kvm@vger.kernel.org
···
 X:	drivers/macintosh/via-macii.c

 LINUX FOR POWERPC (32-BIT AND 64-BIT)
+M:	Madhavan Srinivasan <maddy@linux.ibm.com>
 M:	Michael Ellerman <mpe@ellerman.id.au>
 R:	Nicholas Piggin <npiggin@gmail.com>
 R:	Christophe Leroy <christophe.leroy@csgroup.eu>
 R:	Naveen N Rao <naveen@kernel.org>
-M:	Madhavan Srinivasan <maddy@linux.ibm.com>
 L:	linuxppc-dev@lists.ozlabs.org
 S:	Supported
 W:	https://github.com/linuxppc/wiki/wiki
···
 MEDIATEK ETHERNET DRIVER
 M:	Felix Fietkau <nbd@nbd.name>
 M:	Sean Wang <sean.wang@mediatek.com>
-M:	Mark Lee <Mark-MC.Lee@mediatek.com>
 M:	Lorenzo Bianconi <lorenzo@kernel.org>
 L:	netdev@vger.kernel.org
 S:	Maintained
···
 F:	sound/soc/sof/

 SOUND - GENERIC SOUND CARD (Simple-Audio-Card, Audio-Graph-Card)
+M:	Mark Brown <broonie@kernel.org>
 M:	Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
 S:	Supported
 L:	linux-sound@vger.kernel.org
···
 F:	drivers/phy/st/phy-stm32-combophy.c

 STMMAC ETHERNET DRIVER
-M:	Alexandre Torgue <alexandre.torgue@foss.st.com>
-M:	Jose Abreu <joabreu@synopsys.com>
 L:	netdev@vger.kernel.org
-S:	Supported
-W:	http://www.stlinux.com
+S:	Orphan
 F:	Documentation/networking/device_drivers/ethernet/stmicro/
 F:	drivers/net/ethernet/stmicro/stmmac/
···
 F:	drivers/net/ethernet/synopsys/

 SYNOPSYS DESIGNWARE ETHERNET XPCS DRIVER
-M:	Jose Abreu <Jose.Abreu@synopsys.com>
 L:	netdev@vger.kernel.org
-S:	Supported
+S:	Orphan
 F:	drivers/net/pcs/pcs-xpcs.c
 F:	drivers/net/pcs/pcs-xpcs.h
 F:	include/linux/pcs/pcs-xpcs.h
···

 TIPC NETWORK LAYER
 M:	Jon Maloy <jmaloy@redhat.com>
-M:	Ying Xue <ying.xue@windriver.com>
 L:	netdev@vger.kernel.org (core kernel code)
 L:	tipc-discussion@lists.sourceforge.net (user apps, general discussion)
 S:	Maintained
···
 F:	drivers/usb/isp1760/*

 USB LAN78XX ETHERNET DRIVER
-M:	Woojung Huh <woojung.huh@microchip.com>
+M:	Thangaraj Samynathan <Thangaraj.S@microchip.com>
+M:	Rengarajan Sundararajan <Rengarajan.S@microchip.com>
 M:	UNGLinuxDriver@microchip.com
 L:	netdev@vger.kernel.org
 S:	Maintained
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 13
 SUBLEVEL = 0
-EXTRAVERSION = -rc6
+EXTRAVERSION =
 NAME = Baby Opossum Posse

 # *DOCUMENTATION*
+1 -1
arch/arm/boot/dts/nxp/imx/imxrt1050.dtsi
···
 	reg = <0x402c0000 0x4000>;
 	interrupts = <110>;
 	clocks = <&clks IMXRT1050_CLK_IPG_PDOF>,
-		 <&clks IMXRT1050_CLK_OSC>,
+		 <&clks IMXRT1050_CLK_AHB_PODF>,
 		 <&clks IMXRT1050_CLK_USDHC1>;
 	clock-names = "ipg", "ahb", "per";
 	bus-width = <4>;
+1
arch/arm/configs/imx_v6_v7_defconfig
···
 CONFIG_SND_SOC_FSL_ASOC_CARD=y
 CONFIG_SND_SOC_AC97_CODEC=y
 CONFIG_SND_SOC_CS42XX8_I2C=y
+CONFIG_SND_SOC_SPDIF=y
 CONFIG_SND_SOC_TLV320AIC3X_I2C=y
 CONFIG_SND_SOC_WM8960=y
 CONFIG_SND_SOC_WM8962=y
+1 -1
arch/arm64/boot/dts/freescale/imx8-ss-audio.dtsi
···
 };

 esai0: esai@59010000 {
-	compatible = "fsl,imx8qm-esai";
+	compatible = "fsl,imx8qm-esai", "fsl,imx6ull-esai";
 	reg = <0x59010000 0x10000>;
 	interrupts = <GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>;
 	clocks = <&esai0_lpcg IMX_LPCG_CLK_4>,
+1 -1
arch/arm64/boot/dts/freescale/imx8qm-ss-audio.dtsi
··· 134 134 }; 135 135 136 136 esai1: esai@59810000 { 137 - compatible = "fsl,imx8qm-esai"; 137 + compatible = "fsl,imx8qm-esai", "fsl,imx6ull-esai"; 138 138 reg = <0x59810000 0x10000>; 139 139 interrupts = <GIC_SPI 411 IRQ_TYPE_LEVEL_HIGH>; 140 140 clocks = <&esai1_lpcg IMX_LPCG_CLK_0>,
+1 -1
arch/arm64/boot/dts/freescale/imx95.dtsi
··· 1673 1673 1674 1674 netcmix_blk_ctrl: syscon@4c810000 { 1675 1675 compatible = "nxp,imx95-netcmix-blk-ctrl", "syscon"; 1676 - reg = <0x0 0x4c810000 0x0 0x10000>; 1676 + reg = <0x0 0x4c810000 0x0 0x8>; 1677 1677 #clock-cells = <1>; 1678 1678 clocks = <&scmi_clk IMX95_CLK_BUSNETCMIX>; 1679 1679 assigned-clocks = <&scmi_clk IMX95_CLK_BUSNETCMIX>;
+3 -2
arch/arm64/boot/dts/qcom/sa8775p.dtsi
··· 2440 2440 2441 2441 qcom,cmb-element-bits = <32>; 2442 2442 qcom,cmb-msrs-num = <32>; 2443 + status = "disabled"; 2443 2444 2444 2445 out-ports { 2445 2446 port { ··· 6093 6092 <0x0 0x40000000 0x0 0xf20>, 6094 6093 <0x0 0x40000f20 0x0 0xa8>, 6095 6094 <0x0 0x40001000 0x0 0x4000>, 6096 - <0x0 0x40200000 0x0 0x100000>, 6095 + <0x0 0x40200000 0x0 0x1fe00000>, 6097 6096 <0x0 0x01c03000 0x0 0x1000>, 6098 6097 <0x0 0x40005000 0x0 0x2000>; 6099 6098 reg-names = "parf", "dbi", "elbi", "atu", "addr_space", ··· 6251 6250 <0x0 0x60000000 0x0 0xf20>, 6252 6251 <0x0 0x60000f20 0x0 0xa8>, 6253 6252 <0x0 0x60001000 0x0 0x4000>, 6254 - <0x0 0x60200000 0x0 0x100000>, 6253 + <0x0 0x60200000 0x0 0x1fe00000>, 6255 6254 <0x0 0x01c13000 0x0 0x1000>, 6256 6255 <0x0 0x60005000 0x0 0x2000>; 6257 6256 reg-names = "parf", "dbi", "elbi", "atu", "addr_space",
+8
arch/arm64/boot/dts/qcom/x1e78100-lenovo-thinkpad-t14s.dts
··· 773 773 status = "okay"; 774 774 }; 775 775 776 + &usb_1_ss0_dwc3 { 777 + dr_mode = "host"; 778 + }; 779 + 776 780 &usb_1_ss0_dwc3_hs { 777 781 remote-endpoint = <&pmic_glink_ss0_hs_in>; 778 782 }; ··· 803 799 804 800 &usb_1_ss1 { 805 801 status = "okay"; 802 + }; 803 + 804 + &usb_1_ss1_dwc3 { 805 + dr_mode = "host"; 806 806 }; 807 807 808 808 &usb_1_ss1_dwc3_hs {
+12
arch/arm64/boot/dts/qcom/x1e80100-crd.dts
··· 1197 1197 status = "okay"; 1198 1198 }; 1199 1199 1200 + &usb_1_ss0_dwc3 { 1201 + dr_mode = "host"; 1202 + }; 1203 + 1200 1204 &usb_1_ss0_dwc3_hs { 1201 1205 remote-endpoint = <&pmic_glink_ss0_hs_in>; 1202 1206 }; ··· 1229 1225 status = "okay"; 1230 1226 }; 1231 1227 1228 + &usb_1_ss1_dwc3 { 1229 + dr_mode = "host"; 1230 + }; 1231 + 1232 1232 &usb_1_ss1_dwc3_hs { 1233 1233 remote-endpoint = <&pmic_glink_ss1_hs_in>; 1234 1234 }; ··· 1259 1251 1260 1252 &usb_1_ss2 { 1261 1253 status = "okay"; 1254 + }; 1255 + 1256 + &usb_1_ss2_dwc3 { 1257 + dr_mode = "host"; 1262 1258 }; 1263 1259 1264 1260 &usb_1_ss2_dwc3_hs {
+1 -7
arch/arm64/boot/dts/qcom/x1e80100.dtsi
··· 2924 2924 #address-cells = <3>; 2925 2925 #size-cells = <2>; 2926 2926 ranges = <0x01000000 0x0 0x00000000 0x0 0x70200000 0x0 0x100000>, 2927 - <0x02000000 0x0 0x70300000 0x0 0x70300000 0x0 0x1d00000>; 2927 + <0x02000000 0x0 0x70300000 0x0 0x70300000 0x0 0x3d00000>; 2928 2928 bus-range = <0x00 0xff>; 2929 2929 2930 2930 dma-coherent; ··· 4066 4066 4067 4067 dma-coherent; 4068 4068 4069 - usb-role-switch; 4070 - 4071 4069 ports { 4072 4070 #address-cells = <1>; 4073 4071 #size-cells = <0>; ··· 4319 4321 4320 4322 dma-coherent; 4321 4323 4322 - usb-role-switch; 4323 - 4324 4324 ports { 4325 4325 #address-cells = <1>; 4326 4326 #size-cells = <0>; ··· 4416 4420 snps,usb3_lpm_capable; 4417 4421 4418 4422 dma-coherent; 4419 - 4420 - usb-role-switch; 4421 4423 4422 4424 ports { 4423 4425 #address-cells = <1>;
+1
arch/arm64/boot/dts/rockchip/rk3328.dtsi
··· 333 333 334 334 power-domain@RK3328_PD_HEVC { 335 335 reg = <RK3328_PD_HEVC>; 336 + clocks = <&cru SCLK_VENC_CORE>; 336 337 #power-domain-cells = <0>; 337 338 }; 338 339 power-domain@RK3328_PD_VIDEO {
+1
arch/arm64/boot/dts/rockchip/rk3568.dtsi
··· 350 350 assigned-clocks = <&pmucru CLK_PCIEPHY0_REF>; 351 351 assigned-clock-rates = <100000000>; 352 352 resets = <&cru SRST_PIPEPHY0>; 353 + reset-names = "phy"; 353 354 rockchip,pipe-grf = <&pipegrf>; 354 355 rockchip,pipe-phy-grf = <&pipe_phy_grf0>; 355 356 #phy-cells = <1>;
+2
arch/arm64/boot/dts/rockchip/rk356x-base.dtsi
··· 1681 1681 assigned-clocks = <&pmucru CLK_PCIEPHY1_REF>; 1682 1682 assigned-clock-rates = <100000000>; 1683 1683 resets = <&cru SRST_PIPEPHY1>; 1684 + reset-names = "phy"; 1684 1685 rockchip,pipe-grf = <&pipegrf>; 1685 1686 rockchip,pipe-phy-grf = <&pipe_phy_grf1>; 1686 1687 #phy-cells = <1>; ··· 1698 1697 assigned-clocks = <&pmucru CLK_PCIEPHY2_REF>; 1699 1698 assigned-clock-rates = <100000000>; 1700 1699 resets = <&cru SRST_PIPEPHY2>; 1700 + reset-names = "phy"; 1701 1701 rockchip,pipe-grf = <&pipegrf>; 1702 1702 rockchip,pipe-phy-grf = <&pipe_phy_grf2>; 1703 1703 #phy-cells = <1>;
+1 -1
arch/arm64/boot/dts/rockchip/rk3588-rock-5b.dts
··· 72 72 73 73 rfkill { 74 74 compatible = "rfkill-gpio"; 75 - label = "rfkill-pcie-wlan"; 75 + label = "rfkill-m2-wlan"; 76 76 radio-type = "wlan"; 77 77 shutdown-gpios = <&gpio4 RK_PA2 GPIO_ACTIVE_HIGH>; 78 78 };
+1
arch/arm64/boot/dts/rockchip/rk3588s-nanopi-r6.dtsi
··· 434 434 &sdmmc { 435 435 bus-width = <4>; 436 436 cap-sd-highspeed; 437 + cd-gpios = <&gpio0 RK_PA4 GPIO_ACTIVE_LOW>; 437 438 disable-wp; 438 439 max-frequency = <150000000>; 439 440 no-mmc;
-3
arch/arm64/kvm/hyp/nvhe/mem_protect.c
··· 783 783 if (tx->initiator.id == PKVM_ID_HOST && hyp_page_count((void *)addr)) 784 784 return -EBUSY; 785 785 786 - if (__hyp_ack_skip_pgtable_check(tx)) 787 - return 0; 788 - 789 786 return __hyp_check_page_state_range(addr, size, 790 787 PKVM_PAGE_SHARED_BORROWED); 791 788 }
+36 -55
arch/arm64/kvm/pmu-emul.c
··· 24 24 25 25 static void kvm_pmu_create_perf_event(struct kvm_pmc *pmc); 26 26 static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc); 27 + static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc); 27 28 28 29 static struct kvm_vcpu *kvm_pmc_to_vcpu(const struct kvm_pmc *pmc) 29 30 { ··· 328 327 return GENMASK(val - 1, 0) | BIT(ARMV8_PMU_CYCLE_IDX); 329 328 } 330 329 331 - /** 332 - * kvm_pmu_enable_counter_mask - enable selected PMU counters 333 - * @vcpu: The vcpu pointer 334 - * @val: the value guest writes to PMCNTENSET register 335 - * 336 - * Call perf_event_enable to start counting the perf event 337 - */ 338 - void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val) 330 + static void kvm_pmc_enable_perf_event(struct kvm_pmc *pmc) 339 331 { 340 - int i; 341 - if (!kvm_vcpu_has_pmu(vcpu)) 332 + if (!pmc->perf_event) { 333 + kvm_pmu_create_perf_event(pmc); 342 334 return; 343 - 344 - if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) || !val) 345 - return; 346 - 347 - for (i = 0; i < KVM_ARMV8_PMU_MAX_COUNTERS; i++) { 348 - struct kvm_pmc *pmc; 349 - 350 - if (!(val & BIT(i))) 351 - continue; 352 - 353 - pmc = kvm_vcpu_idx_to_pmc(vcpu, i); 354 - 355 - if (!pmc->perf_event) { 356 - kvm_pmu_create_perf_event(pmc); 357 - } else { 358 - perf_event_enable(pmc->perf_event); 359 - if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE) 360 - kvm_debug("fail to enable perf event\n"); 361 - } 362 335 } 336 + 337 + perf_event_enable(pmc->perf_event); 338 + if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE) 339 + kvm_debug("fail to enable perf event\n"); 363 340 } 364 341 365 - /** 366 - * kvm_pmu_disable_counter_mask - disable selected PMU counters 367 - * @vcpu: The vcpu pointer 368 - * @val: the value guest writes to PMCNTENCLR register 369 - * 370 - * Call perf_event_disable to stop counting the perf event 371 - */ 372 - void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val) 342 + static void kvm_pmc_disable_perf_event(struct kvm_pmc *pmc) 343 + { 344 + if (pmc->perf_event) 345 + perf_event_disable(pmc->perf_event); 346 + } 347 + 348 + void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val) 373 349 { 374 350 int i; 375 351 ··· 354 376 return; 355 377 356 378 for (i = 0; i < KVM_ARMV8_PMU_MAX_COUNTERS; i++) { 357 - struct kvm_pmc *pmc; 379 + struct kvm_pmc *pmc = kvm_vcpu_idx_to_pmc(vcpu, i); 358 380 359 381 if (!(val & BIT(i))) 360 382 continue; 361 383 362 - pmc = kvm_vcpu_idx_to_pmc(vcpu, i); 363 - 364 - if (pmc->perf_event) 365 - perf_event_disable(pmc->perf_event); 384 + if (kvm_pmu_counter_is_enabled(pmc)) 385 + kvm_pmc_enable_perf_event(pmc); 386 + else 387 + kvm_pmc_disable_perf_event(pmc); 366 388 } 389 + 390 + kvm_vcpu_pmu_restore_guest(vcpu); 367 391 } 368 392 369 393 /* ··· 606 626 if (!kvm_has_feat(vcpu->kvm, ID_AA64DFR0_EL1, PMUVer, V3P5)) 607 627 val &= ~ARMV8_PMU_PMCR_LP; 608 628 629 + /* Request a reload of the PMU to enable/disable affected counters */ 630 + if ((__vcpu_sys_reg(vcpu, PMCR_EL0) ^ val) & ARMV8_PMU_PMCR_E) 631 + kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu); 632 + 609 633 /* The reset bits don't indicate any state, and shouldn't be saved. */ 610 634 __vcpu_sys_reg(vcpu, PMCR_EL0) = val & ~(ARMV8_PMU_PMCR_C | ARMV8_PMU_PMCR_P); 611 - 612 - if (val & ARMV8_PMU_PMCR_E) { 613 - kvm_pmu_enable_counter_mask(vcpu, 614 - __vcpu_sys_reg(vcpu, PMCNTENSET_EL0)); 615 - } else { 616 - kvm_pmu_disable_counter_mask(vcpu, 617 - __vcpu_sys_reg(vcpu, PMCNTENSET_EL0)); 618 - } 619 635 620 636 if (val & ARMV8_PMU_PMCR_C) 621 637 kvm_pmu_set_counter_value(vcpu, ARMV8_PMU_CYCLE_IDX, 0); 622 638 623 639 if (val & ARMV8_PMU_PMCR_P) { 624 - unsigned long mask = kvm_pmu_accessible_counter_mask(vcpu); 625 - mask &= ~BIT(ARMV8_PMU_CYCLE_IDX); 640 + /* 641 + * Unlike other PMU sysregs, the controls in PMCR_EL0 always apply 642 + * to the 'guest' range of counters and never the 'hyp' range. 643 + */ 644 + unsigned long mask = kvm_pmu_implemented_counter_mask(vcpu) & 645 + ~kvm_pmu_hyp_counter_mask(vcpu) & 646 + ~BIT(ARMV8_PMU_CYCLE_IDX); 647 + 626 648 for_each_set_bit(i, &mask, 32) 627 649 kvm_pmu_set_pmc_value(kvm_vcpu_idx_to_pmc(vcpu, i), 0, true); 628 650 } 629 - kvm_vcpu_pmu_restore_guest(vcpu); 630 651 } 631 652 632 653 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc) ··· 891 910 { 892 911 u64 mask = kvm_pmu_implemented_counter_mask(vcpu); 893 912 894 - kvm_pmu_handle_pmcr(vcpu, kvm_vcpu_read_pmcr(vcpu)); 895 - 896 913 __vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= mask; 897 914 __vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= mask; 898 915 __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= mask; 916 + 917 + kvm_pmu_reprogram_counter_mask(vcpu, mask); 899 918 } 900 919 901 920 int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu)
+25 -7
arch/arm64/kvm/sys_regs.c
··· 1208 1208 mask = kvm_pmu_accessible_counter_mask(vcpu); 1209 1209 if (p->is_write) { 1210 1210 val = p->regval & mask; 1211 - if (r->Op2 & 0x1) { 1211 + if (r->Op2 & 0x1) 1212 1212 /* accessing PMCNTENSET_EL0 */ 1213 1213 __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) |= val; 1214 - kvm_pmu_enable_counter_mask(vcpu, val); 1215 - kvm_vcpu_pmu_restore_guest(vcpu); 1216 - } else { 1214 + else 1217 1215 /* accessing PMCNTENCLR_EL0 */ 1218 1216 __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= ~val; 1219 - kvm_pmu_disable_counter_mask(vcpu, val); 1220 - } 1217 + 1218 + kvm_pmu_reprogram_counter_mask(vcpu, val); 1221 1219 } else { 1222 1220 p->regval = __vcpu_sys_reg(vcpu, PMCNTENSET_EL0); 1223 1221 } ··· 2448 2450 return __el2_visibility(vcpu, rd, s1pie_visibility); 2449 2451 } 2450 2452 2453 + static bool access_mdcr(struct kvm_vcpu *vcpu, 2454 + struct sys_reg_params *p, 2455 + const struct sys_reg_desc *r) 2456 + { 2457 + u64 old = __vcpu_sys_reg(vcpu, MDCR_EL2); 2458 + 2459 + if (!access_rw(vcpu, p, r)) 2460 + return false; 2461 + 2462 + /* 2463 + * Request a reload of the PMU to enable/disable the counters affected 2464 + * by HPME. 2465 + */ 2466 + if ((old ^ __vcpu_sys_reg(vcpu, MDCR_EL2)) & MDCR_EL2_HPME) 2467 + kvm_make_request(KVM_REQ_RELOAD_PMU, vcpu); 2468 + 2469 + return true; 2470 + } 2471 + 2472 + 2451 2473 /* 2452 2474 * Architected system registers. 2453 2475 * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2 ··· 3001 2983 EL2_REG(SCTLR_EL2, access_rw, reset_val, SCTLR_EL2_RES1), 3002 2984 EL2_REG(ACTLR_EL2, access_rw, reset_val, 0), 3003 2985 EL2_REG_VNCR(HCR_EL2, reset_hcr, 0), 3004 - EL2_REG(MDCR_EL2, access_rw, reset_val, 0), 2986 + EL2_REG(MDCR_EL2, access_mdcr, reset_val, 0), 3005 2987 EL2_REG(CPTR_EL2, access_rw, reset_val, CPTR_NVHE_EL2_RES1), 3006 2988 EL2_REG_VNCR(HSTR_EL2, reset_val, 0), 3007 2989 EL2_REG_VNCR(HFGRTR_EL2, reset_val, 0),
+2
arch/powerpc/kvm/e500.h
··· 34 34 #define E500_TLB_BITMAP (1 << 30) 35 35 /* TLB1 entry is mapped by host TLB0 */ 36 36 #define E500_TLB_TLB0 (1 << 29) 37 + /* entry is writable on the host */ 38 + #define E500_TLB_WRITABLE (1 << 28) 37 39 /* bits [6-5] MAS2_X1 and MAS2_X0 and [4-0] bits for WIMGE */ 38 40 #define E500_TLB_MAS2_ATTR (0x7f) 39 41
+83 -116
arch/powerpc/kvm/e500_mmu_host.c
··· 45 45 return host_tlb_params[1].entries - tlbcam_index - 1; 46 46 } 47 47 48 - static inline u32 e500_shadow_mas3_attrib(u32 mas3, int usermode) 48 + static inline u32 e500_shadow_mas3_attrib(u32 mas3, bool writable, int usermode) 49 49 { 50 50 /* Mask off reserved bits. */ 51 51 mas3 &= MAS3_ATTRIB_MASK; 52 + 53 + if (!writable) 54 + mas3 &= ~(MAS3_UW|MAS3_SW); 52 55 53 56 #ifndef CONFIG_KVM_BOOKE_HV 54 57 if (!usermode) { ··· 245 242 return tlbe->mas7_3 & (MAS3_SW|MAS3_UW); 246 243 } 247 244 248 - static inline bool kvmppc_e500_ref_setup(struct tlbe_ref *ref, 245 + static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref, 249 246 struct kvm_book3e_206_tlb_entry *gtlbe, 250 - kvm_pfn_t pfn, unsigned int wimg) 247 + kvm_pfn_t pfn, unsigned int wimg, 248 + bool writable) 251 249 { 252 250 ref->pfn = pfn; 253 251 ref->flags = E500_TLB_VALID; 252 + if (writable) 253 + ref->flags |= E500_TLB_WRITABLE; 254 254 255 255 /* Use guest supplied MAS2_G and MAS2_E */ 256 256 ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg; 257 - 258 - return tlbe_is_writable(gtlbe); 259 257 } 260 258 261 259 static inline void kvmppc_e500_ref_release(struct tlbe_ref *ref) ··· 309 305 { 310 306 kvm_pfn_t pfn = ref->pfn; 311 307 u32 pr = vcpu->arch.shared->msr & MSR_PR; 308 + bool writable = !!(ref->flags & E500_TLB_WRITABLE); 312 309 313 310 BUG_ON(!(ref->flags & E500_TLB_VALID)); 314 311 ··· 317 312 stlbe->mas1 = MAS1_TSIZE(tsize) | get_tlb_sts(gtlbe) | MAS1_VALID; 318 313 stlbe->mas2 = (gvaddr & MAS2_EPN) | (ref->flags & E500_TLB_MAS2_ATTR); 319 314 stlbe->mas7_3 = ((u64)pfn << PAGE_SHIFT) | 320 - e500_shadow_mas3_attrib(gtlbe->mas7_3, pr); 315 + e500_shadow_mas3_attrib(gtlbe->mas7_3, writable, pr); 321 316 } 322 317 323 318 static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500, ··· 326 321 struct tlbe_ref *ref) 327 322 { 328 323 struct kvm_memory_slot *slot; 329 - unsigned long pfn = 0; /* silence GCC warning */ 324 + unsigned int psize; 325 + unsigned long pfn; 330 326 struct page *page = NULL; 331 327 unsigned long hva; 332 - int pfnmap = 0; 333 328 int tsize = BOOK3E_PAGESZ_4K; 334 329 int ret = 0; 335 330 unsigned long mmu_seq; 336 331 struct kvm *kvm = vcpu_e500->vcpu.kvm; 337 - unsigned long tsize_pages = 0; 338 332 pte_t *ptep; 339 333 unsigned int wimg = 0; 340 334 pgd_t *pgdir; ··· 355 351 slot = gfn_to_memslot(vcpu_e500->vcpu.kvm, gfn); 356 352 hva = gfn_to_hva_memslot(slot, gfn); 357 353 358 - if (tlbsel == 1) { 359 - struct vm_area_struct *vma; 360 - mmap_read_lock(kvm->mm); 361 - 362 - vma = find_vma(kvm->mm, hva); 363 - if (vma && hva >= vma->vm_start && 364 - (vma->vm_flags & VM_PFNMAP)) { 365 - /* 366 - * This VMA is a physically contiguous region (e.g. 367 - * /dev/mem) that bypasses normal Linux page 368 - * management. Find the overlap between the 369 - * vma and the memslot. 370 - */ 371 - 372 - unsigned long start, end; 373 - unsigned long slot_start, slot_end; 374 - 375 - pfnmap = 1; 376 - 377 - start = vma->vm_pgoff; 378 - end = start + 379 - vma_pages(vma); 380 - 381 - pfn = start + ((hva - vma->vm_start) >> PAGE_SHIFT); 382 - 383 - slot_start = pfn - (gfn - slot->base_gfn); 384 - slot_end = slot_start + slot->npages; 385 - 386 - if (start < slot_start) 387 - start = slot_start; 388 - if (end > slot_end) 389 - end = slot_end; 390 - 391 - tsize = (gtlbe->mas1 & MAS1_TSIZE_MASK) >> 392 - MAS1_TSIZE_SHIFT; 393 - 394 - /* 395 - * e500 doesn't implement the lowest tsize bit, 396 - * or 1K pages. 397 - */ 398 - tsize = max(BOOK3E_PAGESZ_4K, tsize & ~1); 399 - 400 - /* 401 - * Now find the largest tsize (up to what the guest 402 - * requested) that will cover gfn, stay within the 403 - * range, and for which gfn and pfn are mutually 404 - * aligned. 405 - */ 406 - 407 - for (; tsize > BOOK3E_PAGESZ_4K; tsize -= 2) { 408 - unsigned long gfn_start, gfn_end; 409 - tsize_pages = 1UL << (tsize - 2); 410 - 411 - gfn_start = gfn & ~(tsize_pages - 1); 412 - gfn_end = gfn_start + tsize_pages; 413 - 414 - if (gfn_start + pfn - gfn < start) 415 - continue; 416 - if (gfn_end + pfn - gfn > end) 417 - continue; 418 - if ((gfn & (tsize_pages - 1)) != 419 - (pfn & (tsize_pages - 1))) 420 - continue; 421 - 422 - gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1); 423 - pfn &= ~(tsize_pages - 1); 424 - break; 425 - } 426 - } else if (vma && hva >= vma->vm_start && 427 - is_vm_hugetlb_page(vma)) { 428 - unsigned long psize = vma_kernel_pagesize(vma); 429 - 430 - tsize = (gtlbe->mas1 & MAS1_TSIZE_MASK) >> 431 - MAS1_TSIZE_SHIFT; 432 - 433 - /* 434 - * Take the largest page size that satisfies both host 435 - * and guest mapping 436 - */ 437 - tsize = min(__ilog2(psize) - 10, tsize); 438 - 439 - /* 440 - * e500 doesn't implement the lowest tsize bit, 441 - * or 1K pages. 442 - */ 443 - tsize = max(BOOK3E_PAGESZ_4K, tsize & ~1); 444 - } 445 - 446 - mmap_read_unlock(kvm->mm); 447 - } 448 - 449 - if (likely(!pfnmap)) { 450 - tsize_pages = 1UL << (tsize + 10 - PAGE_SHIFT); 451 - pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, NULL, &page); 452 - if (is_error_noslot_pfn(pfn)) { 453 - if (printk_ratelimit()) 454 - pr_err("%s: real page not found for gfn %lx\n", 455 - __func__, (long)gfn); 456 - return -EINVAL; 457 - } 458 - 459 - /* Align guest and physical address to page map boundaries */ 460 - pfn &= ~(tsize_pages - 1); 461 - gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1); 354 + pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, &writable, &page); 355 + if (is_error_noslot_pfn(pfn)) { 356 + if (printk_ratelimit()) 357 + pr_err("%s: real page not found for gfn %lx\n", 358 + __func__, (long)gfn); 359 + return -EINVAL; 360 + } 462 361 } 463 361 464 362 spin_lock(&kvm->mmu_lock); ··· 378 472 * can't run hence pfn won't change. 379 473 */ 380 474 local_irq_save(flags); 381 - ptep = find_linux_pte(pgdir, hva, NULL, NULL); 475 + ptep = find_linux_pte(pgdir, hva, NULL, &psize); 382 476 if (ptep) { 383 477 pte_t pte = READ_ONCE(*ptep); 384 478 385 479 if (pte_present(pte)) { 386 480 wimg = (pte_val(pte) >> PTE_WIMGE_SHIFT) & 387 481 MAS2_WIMGE_MASK; 388 - local_irq_restore(flags); 389 482 } else { 390 483 local_irq_restore(flags); 391 484 pr_err_ratelimited("%s: pte not present: gfn %lx,pfn %lx\n", ··· 393 488 goto out; 394 489 } 395 490 } 396 - writable = kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg); 491 + local_irq_restore(flags); 397 492 493 + if (psize && tlbsel == 1) { 494 + unsigned long psize_pages, tsize_pages; 495 + unsigned long start, end; 496 + unsigned long slot_start, slot_end; 497 + 498 + psize_pages = 1UL << (psize - PAGE_SHIFT); 499 + start = pfn & ~(psize_pages - 1); 500 + end = start + psize_pages; 501 + 502 + slot_start = pfn - (gfn - slot->base_gfn); 503 + slot_end = slot_start + slot->npages; 504 + 505 + if (start < slot_start) 506 + start = slot_start; 507 + if (end > slot_end) 508 + end = slot_end; 509 + 510 + tsize = (gtlbe->mas1 & MAS1_TSIZE_MASK) >> 511 + MAS1_TSIZE_SHIFT; 512 + 513 + /* 514 + * Any page size that doesn't satisfy the host mapping 515 + * will fail the start and end tests. 516 + */ 517 + tsize = min(psize - PAGE_SHIFT + BOOK3E_PAGESZ_4K, tsize); 518 + 519 + /* 520 + * e500 doesn't implement the lowest tsize bit, 521 + * or 1K pages. 522 + */ 523 + tsize = max(BOOK3E_PAGESZ_4K, tsize & ~1); 524 + 525 + /* 526 + * Now find the largest tsize (up to what the guest 527 + * requested) that will cover gfn, stay within the 528 + * range, and for which gfn and pfn are mutually 529 + * aligned. 530 + */ 531 + 532 + for (; tsize > BOOK3E_PAGESZ_4K; tsize -= 2) { 533 + unsigned long gfn_start, gfn_end; 534 + tsize_pages = 1UL << (tsize - 2); 535 + 536 + gfn_start = gfn & ~(tsize_pages - 1); 537 + gfn_end = gfn_start + tsize_pages; 538 + 539 + if (gfn_start + pfn - gfn < start) 540 + continue; 541 + if (gfn_end + pfn - gfn > end) 542 + continue; 543 + if ((gfn & (tsize_pages - 1)) != 544 + (pfn & (tsize_pages - 1))) 545 + continue; 546 + 547 + gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1); 548 + pfn &= ~(tsize_pages - 1); 549 + break; 550 + } 551 + } 552 + 553 + kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg, writable); 398 554 kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize, 399 555 ref, gvaddr, stlbe); 556 + writable = tlbe_is_writable(stlbe); 400 557 401 558 /* Clear i-cache for new pages */ 402 559 kvmppc_mmu_flush_icache(pfn);
+1
arch/riscv/include/asm/page.h
··· 122 122 123 123 extern struct kernel_mapping kernel_map; 124 124 extern phys_addr_t phys_ram_base; 125 + extern unsigned long vmemmap_start_pfn; 125 126 126 127 #define is_kernel_mapping(x) \ 127 128 ((x) >= kernel_map.virt_addr && (x) < (kernel_map.virt_addr + kernel_map.size))
+1 -1
arch/riscv/include/asm/pgtable.h
··· 87 87 * Define vmemmap for pfn_to_page & page_to_pfn calls. Needed if kernel 88 88 * is configured with CONFIG_SPARSEMEM_VMEMMAP enabled. 89 89 */ 90 - #define vmemmap ((struct page *)VMEMMAP_START - (phys_ram_base >> PAGE_SHIFT)) 90 + #define vmemmap ((struct page *)VMEMMAP_START - vmemmap_start_pfn) 91 91 92 92 #define PCI_IO_SIZE SZ_16M 93 93 #define PCI_IO_END VMEMMAP_START
+1
arch/riscv/include/asm/sbi.h
··· 159 159 }; 160 160 161 161 #define RISCV_PMU_RAW_EVENT_MASK GENMASK_ULL(47, 0) 162 + #define RISCV_PMU_PLAT_FW_EVENT_MASK GENMASK_ULL(61, 0) 162 163 #define RISCV_PMU_RAW_EVENT_IDX 0x20000 163 164 #define RISCV_PLAT_FW_EVENT 0xFFFF 164 165
+4 -1
arch/riscv/include/asm/spinlock.h
··· 3 3 #ifndef __ASM_RISCV_SPINLOCK_H 4 4 #define __ASM_RISCV_SPINLOCK_H 5 5 6 - #ifdef CONFIG_RISCV_COMBO_SPINLOCKS 6 + #ifdef CONFIG_QUEUED_SPINLOCKS 7 7 #define _Q_PENDING_LOOPS (1 << 9) 8 + #endif 9 + 10 + #ifdef CONFIG_RISCV_COMBO_SPINLOCKS 8 11 9 12 #define __no_arch_spinlock_redefine 10 13 #include <asm/ticket_spinlock.h>
+11 -10
arch/riscv/kernel/entry.S
··· 23 23 REG_S a0, TASK_TI_A0(tp) 24 24 csrr a0, CSR_CAUSE 25 25 /* Exclude IRQs */ 26 - blt a0, zero, _new_vmalloc_restore_context_a0 26 + blt a0, zero, .Lnew_vmalloc_restore_context_a0 27 27 28 28 REG_S a1, TASK_TI_A1(tp) 29 29 /* Only check new_vmalloc if we are in page/protection fault */ 30 30 li a1, EXC_LOAD_PAGE_FAULT 31 - beq a0, a1, _new_vmalloc_kernel_address 31 + beq a0, a1, .Lnew_vmalloc_kernel_address 32 32 li a1, EXC_STORE_PAGE_FAULT 33 - beq a0, a1, _new_vmalloc_kernel_address 33 + beq a0, a1, .Lnew_vmalloc_kernel_address 34 34 li a1, EXC_INST_PAGE_FAULT 35 - bne a0, a1, _new_vmalloc_restore_context_a1 35 + bne a0, a1, .Lnew_vmalloc_restore_context_a1 36 36 37 - _new_vmalloc_kernel_address: 37 + .Lnew_vmalloc_kernel_address: 38 38 /* Is it a kernel address? */ 39 39 csrr a0, CSR_TVAL 40 - bge a0, zero, _new_vmalloc_restore_context_a1 40 + bge a0, zero, .Lnew_vmalloc_restore_context_a1 41 41 42 42 /* Check if a new vmalloc mapping appeared that could explain the trap */ 43 43 REG_S a2, TASK_TI_A2(tp) ··· 69 69 /* Check the value of new_vmalloc for this cpu */ 70 70 REG_L a2, 0(a0) 71 71 and a2, a2, a1 72 - beq a2, zero, _new_vmalloc_restore_context 72 + beq a2, zero, .Lnew_vmalloc_restore_context 73 73 74 74 /* Atomically reset the current cpu bit in new_vmalloc */ 75 75 amoxor.d a0, a1, (a0) ··· 83 83 csrw CSR_SCRATCH, x0 84 84 sret 85 85 86 - _new_vmalloc_restore_context: 86 + .Lnew_vmalloc_restore_context: 87 87 REG_L a2, TASK_TI_A2(tp) 88 - _new_vmalloc_restore_context_a1: 88 + .Lnew_vmalloc_restore_context_a1: 89 89 REG_L a1, TASK_TI_A1(tp) 90 - _new_vmalloc_restore_context_a0: 90 + .Lnew_vmalloc_restore_context_a0: 91 91 REG_L a0, TASK_TI_A0(tp) 92 92 .endm 93 93 ··· 278 278 #else 279 279 sret 280 280 #endif 281 + SYM_INNER_LABEL(ret_from_exception_end, SYM_L_GLOBAL) 281 282 SYM_CODE_END(ret_from_exception) 282 283 ASM_NOKPROBE(ret_from_exception) 283 284
+4 -14
arch/riscv/kernel/module.c
··· 23 23 24 24 struct relocation_head { 25 25 struct hlist_node node; 26 - struct list_head *rel_entry; 26 + struct list_head rel_entry; 27 27 void *location; 28 28 }; 29 29 ··· 634 634 location = rel_head_iter->location; 635 635 list_for_each_entry_safe(rel_entry_iter, 636 636 rel_entry_iter_tmp, 637 - rel_head_iter->rel_entry, 637 + &rel_head_iter->rel_entry, 638 638 head) { 639 639 curr_type = rel_entry_iter->type; 640 640 reloc_handlers[curr_type].reloc_handler( ··· 704 704 return -ENOMEM; 705 705 } 706 706 707 - rel_head->rel_entry = 708 - kmalloc(sizeof(struct list_head), GFP_KERNEL); 709 - 710 - if (!rel_head->rel_entry) { 711 - kfree(entry); 712 - kfree(rel_head); 713 - return -ENOMEM; 714 - } 715 - 716 - INIT_LIST_HEAD(rel_head->rel_entry); 707 + INIT_LIST_HEAD(&rel_head->rel_entry); 717 708 rel_head->location = location; 718 709 INIT_HLIST_NODE(&rel_head->node); 719 710 if (!current_head->first) { ··· 713 722 714 723 if (!bucket) { 715 724 kfree(entry); 716 - kfree(rel_head->rel_entry); 717 725 kfree(rel_head); 718 726 return -ENOMEM; 719 727 } ··· 725 735 } 726 736 727 737 /* Add relocation to head of discovered rel_head */ 728 - list_add_tail(&entry->head, rel_head->rel_entry); 738 + list_add_tail(&entry->head, &rel_head->rel_entry); 729 739 730 740 return 0; 731 741 }
+1 -1
arch/riscv/kernel/probes/kprobes.c
··· 30 30 p->ainsn.api.restore = (unsigned long)p->addr + len; 31 31 32 32 patch_text_nosync(p->ainsn.api.insn, &p->opcode, len); 33 - patch_text_nosync(p->ainsn.api.insn + len, &insn, GET_INSN_LENGTH(insn)); 33 + patch_text_nosync((void *)p->ainsn.api.insn + len, &insn, GET_INSN_LENGTH(insn)); 34 34 } 35 35 36 36 static void __kprobes arch_prepare_simulate(struct kprobe *p)
+3 -1
arch/riscv/kernel/stacktrace.c
··· 17 17 #ifdef CONFIG_FRAME_POINTER 18 18 19 19 extern asmlinkage void handle_exception(void); 20 + extern unsigned long ret_from_exception_end; 20 21 21 22 static inline int fp_is_valid(unsigned long fp, unsigned long sp) 22 23 { ··· 72 71 fp = frame->fp; 73 72 pc = ftrace_graph_ret_addr(current, &graph_idx, frame->ra, 74 73 &frame->ra); 75 - if (pc == (unsigned long)handle_exception) { 74 + if (pc >= (unsigned long)handle_exception && 75 + pc < (unsigned long)&ret_from_exception_end) { 76 76 if (unlikely(!__kernel_text_address(pc) || !fn(arg, pc))) 77 77 break; 78 78
+3 -3
arch/riscv/kernel/traps.c
··· 35 35 36 36 int show_unhandled_signals = 1; 37 37 38 - static DEFINE_SPINLOCK(die_lock); 38 + static DEFINE_RAW_SPINLOCK(die_lock); 39 39 40 40 static int copy_code(struct pt_regs *regs, u16 *val, const u16 *insns) 41 41 { ··· 81 81 82 82 oops_enter(); 83 83 84 - spin_lock_irqsave(&die_lock, flags); 84 + raw_spin_lock_irqsave(&die_lock, flags); 85 85 console_verbose(); 86 86 bust_spinlocks(1); 87 87 ··· 100 100 101 101 bust_spinlocks(0); 102 102 add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE); 103 - spin_unlock_irqrestore(&die_lock, flags); 103 + raw_spin_unlock_irqrestore(&die_lock, flags); 104 104 oops_exit(); 105 105 106 106 if (in_interrupt())
+16 -1
arch/riscv/mm/init.c
··· 33 33 #include <asm/pgtable.h> 34 34 #include <asm/sections.h> 35 35 #include <asm/soc.h> 36 + #include <asm/sparsemem.h> 36 37 #include <asm/tlbflush.h> 37 38 38 39 #include "../kernel/head.h" ··· 62 61 63 62 phys_addr_t phys_ram_base __ro_after_init; 64 63 EXPORT_SYMBOL(phys_ram_base); 64 + 65 + #ifdef CONFIG_SPARSEMEM_VMEMMAP 66 + #define VMEMMAP_ADDR_ALIGN (1ULL << SECTION_SIZE_BITS) 67 + 68 + unsigned long vmemmap_start_pfn __ro_after_init; 69 + EXPORT_SYMBOL(vmemmap_start_pfn); 70 + #endif 65 71 66 72 unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] 67 73 __page_aligned_bss; ··· 248 240 * Make sure we align the start of the memory on a PMD boundary so that 249 241 * at worst, we map the linear mapping with PMD mappings. 250 242 */ 251 - if (!IS_ENABLED(CONFIG_XIP_KERNEL)) 243 + if (!IS_ENABLED(CONFIG_XIP_KERNEL)) { 252 244 phys_ram_base = memblock_start_of_DRAM() & PMD_MASK; 245 + #ifdef CONFIG_SPARSEMEM_VMEMMAP 246 + vmemmap_start_pfn = round_down(phys_ram_base, VMEMMAP_ADDR_ALIGN) >> PAGE_SHIFT; 247 + #endif 248 + } 253 249 254 250 /* 255 251 * In 64-bit, any use of __va/__pa before this point is wrong as we ··· 1113 1101 kernel_map.xiprom_sz = (uintptr_t)(&_exiprom) - (uintptr_t)(&_xiprom); 1114 1102 1115 1103 phys_ram_base = CONFIG_PHYS_RAM_BASE; 1104 + #ifdef CONFIG_SPARSEMEM_VMEMMAP 1105 + vmemmap_start_pfn = round_down(phys_ram_base, VMEMMAP_ADDR_ALIGN) >> PAGE_SHIFT; 1106 + #endif 1116 1107 kernel_map.phys_addr = (uintptr_t)CONFIG_PHYS_RAM_BASE; 1117 1108 kernel_map.size = (uintptr_t)(&_end) - (uintptr_t)(&_start); 1118 1109
+6
arch/s390/kvm/interrupt.c
··· 2678 2678 kvm_s390_clear_float_irqs(dev->kvm); 2679 2679 break; 2680 2680 case KVM_DEV_FLIC_APF_ENABLE: 2681 + if (kvm_is_ucontrol(dev->kvm)) 2682 + return -EINVAL; 2681 2683 dev->kvm->arch.gmap->pfault_enabled = 1; 2682 2684 break; 2683 2685 case KVM_DEV_FLIC_APF_DISABLE_WAIT: 2686 + if (kvm_is_ucontrol(dev->kvm)) 2687 + return -EINVAL; 2684 2688 dev->kvm->arch.gmap->pfault_enabled = 0; 2685 2689 /* 2686 2690 * Make sure no async faults are in transition when ··· 2898 2894 switch (ue->type) { 2899 2895 /* we store the userspace addresses instead of the guest addresses */ 2900 2896 case KVM_IRQ_ROUTING_S390_ADAPTER: 2897 + if (kvm_is_ucontrol(kvm)) 2898 + return -EINVAL; 2901 2899 e->set = set_adapter_int; 2902 2900 uaddr = gmap_translate(kvm->arch.gmap, ue->u.adapter.summary_addr); 2903 2901 if (uaddr == -EFAULT)
+1 -1
arch/s390/kvm/vsie.c
··· 854 854 static void unpin_scb(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page, 855 855 gpa_t gpa) 856 856 { 857 - hpa_t hpa = (hpa_t) vsie_page->scb_o; 857 + hpa_t hpa = virt_to_phys(vsie_page->scb_o); 858 858 859 859 if (hpa) 860 860 unpin_guest_page(vcpu->kvm, gpa, hpa);
-1
arch/x86/Kconfig
··· 83 83 select ARCH_HAS_DMA_OPS if GART_IOMMU || XEN 84 84 select ARCH_HAS_EARLY_DEBUG if KGDB 85 85 select ARCH_HAS_ELF_RANDOMIZE 86 - select ARCH_HAS_EXECMEM_ROX if X86_64 87 86 select ARCH_HAS_FAST_MULTIPLIER 88 87 select ARCH_HAS_FORTIFY_SOURCE 89 88 select ARCH_HAS_GCOV_PROFILE_ALL
+1 -1
arch/x86/include/asm/special_insns.h
··· 217 217 218 218 #define nop() asm volatile ("nop") 219 219 220 - static inline void serialize(void) 220 + static __always_inline void serialize(void) 221 221 { 222 222 /* Instruction opcode for SERIALIZE; supported in binutils >= 2.35. */ 223 223 asm volatile(".byte 0xf, 0x1, 0xe8" ::: "memory");
+2 -1
arch/x86/kernel/fpu/regset.c
··· 190 190 struct fpu *fpu = &target->thread.fpu; 191 191 struct cet_user_state *cetregs; 192 192 193 - if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK)) 193 + if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK) || 194 + !ssp_active(target, regset)) 194 195 return -ENODEV; 195 196 196 197 sync_fpstate(fpu);
+7 -1
arch/x86/kernel/fred.c
··· 50 50 FRED_CONFIG_ENTRYPOINT(asm_fred_entrypoint_user)); 51 51 52 52 wrmsrl(MSR_IA32_FRED_STKLVLS, 0); 53 - wrmsrl(MSR_IA32_FRED_RSP0, 0); 53 + 54 + /* 55 + * After a CPU offline/online cycle, the FRED RSP0 MSR should be 56 + * resynchronized with its per-CPU cache. 57 + */ 58 + wrmsrl(MSR_IA32_FRED_RSP0, __this_cpu_read(fred_rsp0)); 59 + 54 60 wrmsrl(MSR_IA32_FRED_RSP1, 0); 55 61 wrmsrl(MSR_IA32_FRED_RSP2, 0); 56 62 wrmsrl(MSR_IA32_FRED_RSP3, 0);
-1
arch/x86/kernel/static_call.c
··· 175 175 noinstr void __static_call_update_early(void *tramp, void *func) 176 176 { 177 177 BUG_ON(system_state != SYSTEM_BOOTING); 178 - BUG_ON(!early_boot_irqs_disabled); 179 178 BUG_ON(static_call_initialized); 180 179 __text_gen_insn(tramp, JMP32_INSN_OPCODE, tramp, func, JMP32_INSN_SIZE); 181 180 sync_core();
+2 -1
arch/x86/mm/init.c
··· 1080 1080 1081 1081 start = MODULES_VADDR + offset; 1082 1082 1083 - if (IS_ENABLED(CONFIG_ARCH_HAS_EXECMEM_ROX)) { 1083 + if (IS_ENABLED(CONFIG_ARCH_HAS_EXECMEM_ROX) && 1084 + cpu_feature_enabled(X86_FEATURE_PSE)) { 1084 1085 pgprot = PAGE_KERNEL_ROX; 1085 1086 flags = EXECMEM_KASAN_SHADOW | EXECMEM_ROX_CACHE; 1086 1087 } else {
+10 -2
block/bfq-iosched.c
··· 6844 6844 if (new_bfqq == waker_bfqq) { 6845 6845 /* 6846 6846 * If waker_bfqq is in the merge chain, and current 6847 - * is the only procress. 6847 + * is the only process, waker_bfqq can be freed. 6848 6848 */ 6849 6849 if (bfqq_process_refs(waker_bfqq) == 1) 6850 6850 return NULL; 6851 - break; 6851 + 6852 + return waker_bfqq; 6852 6853 } 6853 6854 6854 6855 new_bfqq = new_bfqq->new_bfqq; 6855 6856 } 6857 + 6858 + /* 6859 + * If waker_bfqq is not in the merge chain, and its process reference 6860 + * count is 0, waker_bfqq can be freed. 6861 + */ 6862 + if (bfqq_process_refs(waker_bfqq) == 0) 6863 + return NULL; 6856 6864 6857 6865 return waker_bfqq; 6858 6866 }
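The two exit conditions in the hunk above can be exercised in isolation. The following is a standalone sketch, not the real bfq code: `toy_bfqq`, `check_waker()` and the `process_refs` field are made-up stand-ins for `struct bfq_queue`, the `new_bfqq` chain walk and `bfqq_process_refs()`:

```c
#include <stddef.h>

/* Toy stand-in for bfq_queue: only the fields the walk needs. */
struct toy_bfqq {
	struct toy_bfqq *new_bfqq;	/* next queue in the merge chain */
	int process_refs;		/* stand-in for bfqq_process_refs() */
};

/*
 * Return waker if it is still safe to hand back, NULL if it may be on
 * its way to being freed.
 */
static struct toy_bfqq *check_waker(struct toy_bfqq *chain,
				    struct toy_bfqq *waker)
{
	struct toy_bfqq *q;

	for (q = chain; q; q = q->new_bfqq) {
		if (q == waker) {
			/* In the chain: unsafe if current is the only process. */
			return waker->process_refs == 1 ? NULL : waker;
		}
	}
	/* Not in the chain: unsafe once its process references hit 0. */
	return waker->process_refs == 0 ? NULL : waker;
}
```

The second check is the one the hunk adds: a queue that is outside the merge chain with no remaining process references must not be returned.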
+27 -22
drivers/acpi/acpi_video.c
··· 610 610 return 0; 611 611 } 612 612 613 + /** 614 + * acpi_video_device_EDID() - Get EDID from ACPI _DDC 615 + * @device: video output device (LCD, CRT, ..) 616 + * @edid: address for returned EDID pointer 617 + * @length: _DDC length to request (must be a multiple of 128) 618 + * 619 + * Get EDID from ACPI _DDC. On success, a pointer to the EDID data is written 620 + * to the @edid address, and the length of the EDID is returned. The caller is 621 + * responsible for freeing the edid pointer. 622 + * 623 + * Return the length of EDID (positive value) on success or error (negative 624 + * value). 625 + */ 613 626 static int 614 - acpi_video_device_EDID(struct acpi_video_device *device, 615 - union acpi_object **edid, int length) 627 + acpi_video_device_EDID(struct acpi_video_device *device, void **edid, int length) 616 628 { 617 - int status; 629 + acpi_status status; 618 630 struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; 619 631 union acpi_object *obj; 620 632 union acpi_object arg0 = { ACPI_TYPE_INTEGER }; 621 633 struct acpi_object_list args = { 1, &arg0 }; 622 - 634 + int ret; 623 635 624 636 *edid = NULL; 625 637 ··· 648 636 649 637 obj = buffer.pointer; 650 638 651 - if (obj && obj->type == ACPI_TYPE_BUFFER) 652 - *edid = obj; 653 - else { 639 + if (obj && obj->type == ACPI_TYPE_BUFFER) { 640 + *edid = kmemdup(obj->buffer.pointer, obj->buffer.length, GFP_KERNEL); 641 + ret = *edid ? 
obj->buffer.length : -ENOMEM; 642 + } else { 654 643 acpi_handle_debug(device->dev->handle, 655 644 "Invalid _DDC data for length %d\n", length); 656 - status = -EFAULT; 657 - kfree(obj); 645 + ret = -EFAULT; 658 646 } 659 647 660 - return status; 648 + kfree(obj); 649 + return ret; 661 650 } 662 651 663 652 /* bus */ ··· 1448 1435 { 1449 1436 struct acpi_video_bus *video; 1450 1437 struct acpi_video_device *video_device; 1451 - union acpi_object *buffer = NULL; 1452 - acpi_status status; 1453 - int i, length; 1438 + int i, length, ret; 1454 1439 1455 1440 if (!device || !acpi_driver_data(device)) 1456 1441 return -EINVAL; ··· 1488 1477 } 1489 1478 1490 1479 for (length = 512; length > 0; length -= 128) { 1491 - status = acpi_video_device_EDID(video_device, &buffer, 1492 - length); 1493 - if (ACPI_SUCCESS(status)) 1494 - break; 1480 + ret = acpi_video_device_EDID(video_device, edid, length); 1481 + if (ret > 0) 1482 + return ret; 1495 1483 } 1496 - if (!length) 1497 - continue; 1498 - 1499 - *edid = buffer->buffer.pointer; 1500 - return length; 1501 1484 } 1502 1485 1503 1486 return -ENODEV;
+21 -3
drivers/acpi/resource.c
··· 441 441 }, 442 442 }, 443 443 { 444 + /* Asus Vivobook X1504VAP */ 445 + .matches = { 446 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 447 + DMI_MATCH(DMI_BOARD_NAME, "X1504VAP"), 448 + }, 449 + }, 450 + { 444 451 /* Asus Vivobook X1704VAP */ 445 452 .matches = { 446 453 DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), ··· 653 646 DMI_MATCH(DMI_BOARD_NAME, "GMxHGxx"), 654 647 }, 655 648 }, 649 + { 650 + /* 651 + * TongFang GM5HG0A in case of the SKIKK Vanaheim relabel the 652 + * board-name is changed, so check OEM strings instead. Note 653 + * OEM string matches are always exact matches. 654 + * https://bugzilla.kernel.org/show_bug.cgi?id=219614 655 + */ 656 + .matches = { 657 + DMI_EXACT_MATCH(DMI_OEM_STRING, "GM5HG0A"), 658 + }, 659 + }, 656 660 { } 657 661 }; 658 662 ··· 689 671 for (i = 0; i < ARRAY_SIZE(override_table); i++) { 690 672 const struct irq_override_cmp *entry = &override_table[i]; 691 673 692 - if (dmi_check_system(entry->system) && 693 - entry->irq == gsi && 674 + if (entry->irq == gsi && 694 675 entry->triggering == triggering && 695 676 entry->polarity == polarity && 696 - entry->shareable == shareable) 677 + entry->shareable == shareable && 678 + dmi_check_system(entry->system)) 697 679 return entry->override; 698 680 } 699 681
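The reordering at the end of this hunk relies on C's short-circuit `&&`: the cheap integer comparisons now run first, so the comparatively expensive `dmi_check_system()` table scan is only reached for an entry whose GSI/trigger/polarity/shareable tuple already matches. A toy illustration (names here are invented, not the kernel API):

```c
#include <stdbool.h>

static int dmi_calls;	/* counts how often the "expensive" check runs */

static bool fake_dmi_check(void)	/* stand-in for dmi_check_system() */
{
	dmi_calls++;
	return true;
}

/* Cheap field comparison first, firmware-table scan last. */
static bool entry_matches(int entry_irq, int gsi)
{
	return entry_irq == gsi && fake_dmi_check();
}
```

With the operands swapped, every override-table entry would pay for a DMI scan on every lookup, even when the IRQ numbers already rule it out.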
+20 -4
drivers/base/topology.c
··· 27 27 loff_t off, size_t count) \ 28 28 { \ 29 29 struct device *dev = kobj_to_dev(kobj); \ 30 + cpumask_var_t mask; \ 31 + ssize_t n; \ 30 32 \ 31 - return cpumap_print_bitmask_to_buf(buf, topology_##mask(dev->id), \ 32 - off, count); \ 33 + if (!alloc_cpumask_var(&mask, GFP_KERNEL)) \ 34 + return -ENOMEM; \ 35 + \ 36 + cpumask_copy(mask, topology_##mask(dev->id)); \ 37 + n = cpumap_print_bitmask_to_buf(buf, mask, off, count); \ 38 + free_cpumask_var(mask); \ 39 + \ 40 + return n; \ 33 41 } \ 34 42 \ 35 43 static ssize_t name##_list_read(struct file *file, struct kobject *kobj, \ ··· 45 37 loff_t off, size_t count) \ 46 38 { \ 47 39 struct device *dev = kobj_to_dev(kobj); \ 40 + cpumask_var_t mask; \ 41 + ssize_t n; \ 48 42 \ 49 - return cpumap_print_list_to_buf(buf, topology_##mask(dev->id), \ 50 - off, count); \ 43 + if (!alloc_cpumask_var(&mask, GFP_KERNEL)) \ 44 + return -ENOMEM; \ 45 + \ 46 + cpumask_copy(mask, topology_##mask(dev->id)); \ 47 + n = cpumap_print_list_to_buf(buf, mask, off, count); \ 48 + free_cpumask_var(mask); \ 49 + \ 50 + return n; \ 51 51 } 52 52 53 53 define_id_show_func(physical_package_id, "%d");
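The point of the change above is that formatting now operates on a private snapshot (`cpumask_copy()` into a freshly allocated `cpumask_var_t`) rather than on the live topology mask, which hotplug can rewrite mid-print. A minimal userspace sketch of the same snapshot-then-format pattern, using an invented 64-bit "mask" instead of the kernel's cpumask API:

```c
#include <stdio.h>
#include <string.h>

/* Toy 64-bit "cpumask"; the kernel uses cpumask_var_t + cpumask_copy(). */
typedef unsigned long long toy_mask_t;

static toy_mask_t online_cpus;	/* may be updated concurrently in the kernel */

/* Format from a private snapshot so the source can change underneath us. */
static int snapshot_and_format(const toy_mask_t *src, char *buf, size_t len)
{
	toy_mask_t snap = *src;		/* analogous to cpumask_copy() */

	return snprintf(buf, len, "%llx", snap);
}
```

Once the copy is taken, the formatting loop only ever sees one consistent value, no matter what happens to `online_cpus`.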
+1
drivers/block/zram/zram_drv.c
··· 1468 1468 zram->mem_pool = zs_create_pool(zram->disk->disk_name); 1469 1469 if (!zram->mem_pool) { 1470 1470 vfree(zram->table); 1471 + zram->table = NULL; 1471 1472 return false; 1472 1473 } 1473 1474
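The one-line zram fix is the classic "clear the pointer after freeing it" pattern: a later teardown path that frees `zram->table` when it is non-NULL would otherwise double-free it. A toy model (struct and function names are illustrative, not zram's):

```c
#include <stdlib.h>

struct toy_zram {
	void *table;
	void *mem_pool;
};

/*
 * On pool-creation failure, free the table AND clear the pointer so a
 * later teardown (which frees table if non-NULL) cannot double-free it.
 */
static int toy_init_pool(struct toy_zram *z, int pool_ok)
{
	z->table = malloc(16);
	if (!z->table)
		return 0;
	if (!pool_ok) {			/* zs_create_pool() failed */
		free(z->table);
		z->table = NULL;	/* the one-line fix in the hunk */
		return 0;
	}
	z->mem_pool = (void *)1;	/* pretend pool creation succeeded */
	return 1;
}
```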
+7
drivers/bluetooth/btmtk.c
··· 1472 1472 1473 1473 int btmtk_usb_shutdown(struct hci_dev *hdev) 1474 1474 { 1475 + struct btmtk_data *data = hci_get_priv(hdev); 1475 1476 struct btmtk_hci_wmt_params wmt_params; 1476 1477 u8 param = 0; 1477 1478 int err; 1479 + 1480 + err = usb_autopm_get_interface(data->intf); 1481 + if (err < 0) 1482 + return err; 1478 1483 1479 1484 /* Disable the device */ 1480 1485 wmt_params.op = BTMTK_WMT_FUNC_CTRL; ··· 1491 1486 err = btmtk_usb_hci_wmt_sync(hdev, &wmt_params); 1492 1487 if (err < 0) { 1493 1488 bt_dev_err(hdev, "Failed to send wmt func ctrl (%d)", err); 1489 + usb_autopm_put_interface(data->intf); 1494 1490 return err; 1495 1491 } 1496 1492 1493 + usb_autopm_put_interface(data->intf); 1497 1494 return 0; 1498 1495 } 1499 1496 EXPORT_SYMBOL_GPL(btmtk_usb_shutdown);
+1
drivers/bluetooth/btnxpuart.c
··· 1381 1381 1382 1382 while ((skb = nxp_dequeue(nxpdev))) { 1383 1383 len = serdev_device_write_buf(serdev, skb->data, skb->len); 1384 + serdev_device_wait_until_sent(serdev, 0); 1384 1385 hdev->stat.byte_tx += len; 1385 1386 1386 1387 skb_pull(skb, len);
+1 -1
drivers/bus/mhi/host/pci_generic.c
··· 917 917 return err; 918 918 } 919 919 920 - mhi_cntrl->regs = pcim_iomap_region(pdev, 1 << bar_num, pci_name(pdev)); 920 + mhi_cntrl->regs = pcim_iomap_region(pdev, bar_num, pci_name(pdev)); 921 921 if (IS_ERR(mhi_cntrl->regs)) { 922 922 err = PTR_ERR(mhi_cntrl->regs); 923 923 dev_err(&pdev->dev, "failed to map pci region: %d\n", err);
+2 -2
drivers/cpufreq/Kconfig
··· 325 325 This adds the CPUFreq driver support for Freescale QorIQ SoCs 326 326 which are capable of changing the CPU's frequency dynamically. 327 327 328 - endif 329 - 330 328 config ACPI_CPPC_CPUFREQ 331 329 tristate "CPUFreq driver based on the ACPI CPPC spec" 332 330 depends on ACPI_PROCESSOR ··· 352 354 by using CPPC delivered and reference performance counters. 353 355 354 356 If in doubt, say N. 357 + 358 + endif 355 359 356 360 endmenu
+2 -2
drivers/cpuidle/cpuidle-riscv-sbi.c
··· 504 504 int cpu, ret; 505 505 struct cpuidle_driver *drv; 506 506 struct cpuidle_device *dev; 507 - struct device_node *np, *pds_node; 507 + struct device_node *pds_node; 508 508 509 509 /* Detect OSI support based on CPU DT nodes */ 510 510 sbi_cpuidle_use_osi = true; 511 511 for_each_possible_cpu(cpu) { 512 - np = of_cpu_device_node_get(cpu); 512 + struct device_node *np __free(device_node) = of_cpu_device_node_get(cpu); 513 513 if (np && 514 514 of_property_present(np, "power-domains") && 515 515 of_property_present(np, "power-domain-names")) {
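The `__free(device_node)` annotation above is the kernel's scope-based cleanup helper: the node reference is dropped automatically when `np` goes out of scope at the end of each loop iteration, so no explicit `of_node_put()` is needed on any exit path. Under the hood it is built on the compiler's `cleanup` attribute, which can be demonstrated outside the kernel (toy names, GCC/Clang only):

```c
static int put_calls;	/* counts automatic "reference drops" */

/* Cleanup handler receives a pointer to the annotated variable. */
static void put_node(int **p)
{
	if (*p)
		put_calls++;
}

static int probe(void)
{
	int dummy = 0;
	int cpu;

	for (cpu = 0; cpu < 3; cpu++) {
		/* Dropped automatically at the end of each iteration. */
		__attribute__((cleanup(put_node))) int *np = &dummy;
		(void)np;	/* "use" the node */
	}
	return put_calls;
}
```

Each iteration gets its own scoped reference, which is exactly why the hunk can move the `np` declaration inside the loop and delete the shared outer variable.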
+48 -43
drivers/cpuidle/governors/teo.c
··· 10 10 * DOC: teo-description 11 11 * 12 12 * The idea of this governor is based on the observation that on many systems 13 - * timer events are two or more orders of magnitude more frequent than any 14 - * other interrupts, so they are likely to be the most significant cause of CPU 15 - * wakeups from idle states. Moreover, information about what happened in the 16 - * (relatively recent) past can be used to estimate whether or not the deepest 17 - * idle state with target residency within the (known) time till the closest 18 - * timer event, referred to as the sleep length, is likely to be suitable for 19 - * the upcoming CPU idle period and, if not, then which of the shallower idle 20 - * states to choose instead of it. 13 + * timer interrupts are two or more orders of magnitude more frequent than any 14 + * other interrupt types, so they are likely to dominate CPU wakeup patterns. 15 + * Moreover, in principle, the time when the next timer event is going to occur 16 + * can be determined at the idle state selection time, although doing that may 17 + * be costly, so it can be regarded as the most reliable source of information 18 + * for idle state selection. 21 19 * 22 - * Of course, non-timer wakeup sources are more important in some use cases 23 - * which can be covered by taking a few most recent idle time intervals of the 24 - * CPU into account. However, even in that context it is not necessary to 25 - * consider idle duration values greater than the sleep length, because the 26 - * closest timer will ultimately wake up the CPU anyway unless it is woken up 27 - * earlier. 20 + * Of course, non-timer wakeup sources are more important in some use cases, 21 + * but even then it is generally unnecessary to consider idle duration values 22 + * greater than the time till the next timer event, referred to as the sleep 23 + * length in what follows, because the closest timer will ultimately wake up the 24 + * CPU anyway unless it is woken up earlier. 
28 25 * 29 - * Thus this governor estimates whether or not the prospective idle duration of 30 - * a CPU is likely to be significantly shorter than the sleep length and selects 31 - * an idle state for it accordingly. 26 + * However, since obtaining the sleep length may be costly, the governor first 27 + * checks if it can select a shallow idle state using wakeup pattern information 28 + * from recent times, in which case it can do without knowing the sleep length 29 + * at all. For this purpose, it counts CPU wakeup events and looks for an idle 30 + * state whose target residency has not exceeded the idle duration (measured 31 + * after wakeup) in the majority of relevant recent cases. If the target 32 + * residency of that state is small enough, it may be used right away and the 33 + * sleep length need not be determined. 32 34 * 33 35 * The computations carried out by this governor are based on using bins whose 34 36 * boundaries are aligned with the target residency parameter values of the CPU ··· 41 39 * idle state 2, the third bin spans from the target residency of idle state 2 42 40 * up to, but not including, the target residency of idle state 3 and so on. 43 41 * The last bin spans from the target residency of the deepest idle state 44 - * supplied by the driver to infinity. 42 + * supplied by the driver to the scheduler tick period length or to infinity if 43 + * the tick period length is less than the target residency of that state. In 44 + * the latter case, the governor also counts events with the measured idle 45 + * duration between the tick period length and the target residency of the 46 + * deepest idle state. 45 47 * 46 48 * Two metrics called "hits" and "intercepts" are associated with each bin. 
47 49 * They are updated every time before selecting an idle state for the given CPU ··· 55 49 * sleep length and the idle duration measured after CPU wakeup fall into the 56 50 * same bin (that is, the CPU appears to wake up "on time" relative to the sleep 57 51 * length). In turn, the "intercepts" metric reflects the relative frequency of 58 - * situations in which the measured idle duration is so much shorter than the 59 - * sleep length that the bin it falls into corresponds to an idle state 60 - * shallower than the one whose bin is fallen into by the sleep length (these 61 - * situations are referred to as "intercepts" below). 52 + * non-timer wakeup events for which the measured idle duration falls into a bin 53 + * that corresponds to an idle state shallower than the one whose bin is fallen 54 + * into by the sleep length (these events are also referred to as "intercepts" 55 + * below). 62 56 * 63 57 * In order to select an idle state for a CPU, the governor takes the following 64 58 * steps (modulo the possible latency constraint that must be taken into account 65 59 * too): 66 60 * 67 - * 1. Find the deepest CPU idle state whose target residency does not exceed 68 - * the current sleep length (the candidate idle state) and compute 2 sums as 69 - * follows: 61 + * 1. Find the deepest enabled CPU idle state (the candidate idle state) and 62 + * compute 2 sums as follows: 70 63 * 71 - * - The sum of the "hits" and "intercepts" metrics for the candidate state 72 - * and all of the deeper idle states (it represents the cases in which the 73 - * CPU was idle long enough to avoid being intercepted if the sleep length 74 - * had been equal to the current one). 64 + * - The sum of the "hits" metric for all of the idle states shallower than 65 + * the candidate one (it represents the cases in which the CPU was likely 66 + * woken up by a timer). 
75 67 * 76 - * The sum of the "intercepts" metrics for all of the idle states shallower 77 - * than the candidate one (it represents the cases in which the CPU was not 78 - * idle long enough to avoid being intercepted if the sleep length had been 79 - * equal to the current one). 68 + * - The sum of the "intercepts" metric for all of the idle states shallower 69 + * than the candidate one (it represents the cases in which the CPU was 70 + * likely woken up by a non-timer wakeup source). 80 71 * 81 - * 2. If the second sum is greater than the first one the CPU is likely to wake 82 - * up early, so look for an alternative idle state to select. 72 + * 2. If the second sum computed in step 1 is greater than a half of the sum of 73 + * both metrics for the candidate state bin and all subsequent bins (if any), 74 + * a shallower idle state is likely to be more suitable, so look for it. 83 75 * 84 - * - Traverse the idle states shallower than the candidate one in the 76 + * - Traverse the enabled idle states shallower than the candidate one in the 85 77 * descending order. 86 78 * 87 79 * - For each of them compute the sum of the "intercepts" metrics over all 88 80 * of the idle states between it and the candidate one (including the 89 81 * former and excluding the latter). 90 82 * 91 - * - If each of these sums that needs to be taken into account (because the 92 - * check related to it has indicated that the CPU is likely to wake up 93 - * early) is greater than a half of the corresponding sum computed in step 94 - * 1 (which means that the target residency of the state in question had 95 - * not exceeded the idle duration in over a half of the relevant cases), 96 - * select the given idle state instead of the candidate one. 97 85 * 98 - * 3. By default, select the candidate state. 83 + * - If this sum is greater than a half of the second sum computed in step 1, 84 + * use the given idle state as the new candidate one. 86 + * 3. 
If the current candidate state is state 0 or its target residency is short 87 + * enough, return it and prevent the scheduler tick from being stopped. 88 + * 89 + * 4. Obtain the sleep length value and check if it is below the target 90 + * residency of the current candidate state, in which case a new shallower 91 + * candidate state needs to be found, so look for it. 99 92 */ 100 93 101 94 #include <linux/cpuidle.h>
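The reworked DOC comment above describes the selection loop in prose. A rough, self-contained sketch of steps 1 and 2 (names and the fixed bin count are invented; the real governor also handles disabled states, latency limits, and the tick boundary) looks like this:

```c
#define NR_STATES 4

struct toy_bin {
	unsigned int hits, intercepts;
};

static int toy_select(const struct toy_bin bins[NR_STATES])
{
	int candidate = NR_STATES - 1;	/* start from the deepest state */
	unsigned int intercepts = 0, tail;
	int i;

	/* Step 1: intercepts summed over bins shallower than the candidate
	 * (the hits sum is also computed there, but used for the tick
	 * decision, which this sketch omits). */
	for (i = 0; i < candidate; i++)
		intercepts += bins[i].intercepts;

	/* Both metrics for the candidate bin itself. */
	tail = bins[candidate].hits + bins[candidate].intercepts;

	/* Step 2: look for a shallower state if intercepts dominate. */
	if (2 * intercepts > tail) {
		unsigned int sum = 0;

		for (i = candidate - 1; i >= 0; i--) {
			sum += bins[i].intercepts;
			if (2 * sum > intercepts) {
				candidate = i;
				break;
			}
		}
	}
	return candidate;
}
```

A history dominated by timer "hits" keeps the deepest state; a history dominated by shallow "intercepts" walks the candidate down until a state covers a majority of them.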
+3 -3
drivers/gpio/gpio-loongson-64bit.c
··· 237 237 static const struct loongson_gpio_chip_data loongson_gpio_ls2k2000_data2 = { 238 238 .label = "ls2k2000_gpio", 239 239 .mode = BIT_CTRL_MODE, 240 - .conf_offset = 0x84, 241 - .in_offset = 0x88, 242 - .out_offset = 0x80, 240 + .conf_offset = 0x4, 241 + .in_offset = 0x8, 242 + .out_offset = 0x0, 243 243 }; 244 244 245 245 static const struct loongson_gpio_chip_data loongson_gpio_ls3a5000_data = {
+41 -7
drivers/gpio/gpio-sim.c
··· 1027 1027 dev->pdev = NULL; 1028 1028 } 1029 1029 1030 + static void 1031 + gpio_sim_device_lockup_configfs(struct gpio_sim_device *dev, bool lock) 1032 + { 1033 + struct configfs_subsystem *subsys = dev->group.cg_subsys; 1034 + struct gpio_sim_bank *bank; 1035 + struct gpio_sim_line *line; 1036 + 1037 + /* 1038 + * The device only needs to depend on leaf line entries. This is 1039 + * sufficient to lock up all the configfs entries that the 1040 + * instantiated, alive device depends on. 1041 + */ 1042 + list_for_each_entry(bank, &dev->bank_list, siblings) { 1043 + list_for_each_entry(line, &bank->line_list, siblings) { 1044 + if (lock) 1045 + WARN_ON(configfs_depend_item_unlocked( 1046 + subsys, &line->group.cg_item)); 1047 + else 1048 + configfs_undepend_item_unlocked( 1049 + &line->group.cg_item); 1050 + } 1051 + } 1052 + } 1053 + 1030 1054 static ssize_t 1031 1055 gpio_sim_device_config_live_store(struct config_item *item, 1032 1056 const char *page, size_t count) ··· 1063 1039 if (ret) 1064 1040 return ret; 1065 1041 1066 - guard(mutex)(&dev->lock); 1042 + if (live) 1043 + gpio_sim_device_lockup_configfs(dev, true); 1067 1044 1068 - if (live == gpio_sim_device_is_live(dev)) 1069 - ret = -EPERM; 1070 - else if (live) 1071 - ret = gpio_sim_device_activate(dev); 1072 - else 1073 - gpio_sim_device_deactivate(dev); 1045 + scoped_guard(mutex, &dev->lock) { 1046 + if (live == gpio_sim_device_is_live(dev)) 1047 + ret = -EPERM; 1048 + else if (live) 1049 + ret = gpio_sim_device_activate(dev); 1050 + else 1051 + gpio_sim_device_deactivate(dev); 1052 + } 1053 + 1054 + /* 1055 + * Undepend is required only if device disablement (live == 0) 1056 + * succeeds or if device enablement (live == 1) fails. 1057 + */ 1058 + if (live == !!ret) 1059 + gpio_sim_device_lockup_configfs(dev, false); 1074 1060 1075 1061 return ret ?: count; 1076 1062 }
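The `live == !!ret` test at the end of this hunk compresses the comment's rule (undepend when disablement succeeded or when enablement failed) into one comparison. Its truth table can be checked directly with a toy helper:

```c
#include <stdbool.h>

/*
 * Undepend is needed when disablement (live == 0) succeeded (ret == 0)
 * or enablement (live == 1) failed (ret != 0); both cases collapse to
 * the single comparison used in the hunk.
 */
static bool needs_undepend(int live, int ret)
{
	return live == !!ret;
}
```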
+70 -23
drivers/gpio/gpio-virtuser.c
··· 1410 1410 size_t num_entries = gpio_virtuser_get_lookup_count(dev); 1411 1411 struct gpio_virtuser_lookup_entry *entry; 1412 1412 struct gpio_virtuser_lookup *lookup; 1413 - unsigned int i = 0; 1413 + unsigned int i = 0, idx; 1414 1414 1415 1415 lockdep_assert_held(&dev->lock); 1416 1416 ··· 1424 1424 return -ENOMEM; 1425 1425 1426 1426 list_for_each_entry(lookup, &dev->lookup_list, siblings) { 1427 + idx = 0; 1427 1428 list_for_each_entry(entry, &lookup->entry_list, siblings) { 1428 - table->table[i] = 1429 + table->table[i++] = 1429 1430 GPIO_LOOKUP_IDX(entry->key, 1430 1431 entry->offset < 0 ? U16_MAX : entry->offset, 1431 - lookup->con_id, i, entry->flags); 1432 - i++; 1432 + lookup->con_id, idx++, entry->flags); 1433 1433 } 1434 1434 } 1435 1435 ··· 1437 1437 dev->lookup_table = no_free_ptr(table); 1438 1438 1439 1439 return 0; 1440 + } 1441 + 1442 + static void 1443 + gpio_virtuser_remove_lookup_table(struct gpio_virtuser_device *dev) 1444 + { 1445 + gpiod_remove_lookup_table(dev->lookup_table); 1446 + kfree(dev->lookup_table->dev_id); 1447 + kfree(dev->lookup_table); 1448 + dev->lookup_table = NULL; 1440 1449 } 1441 1450 1442 1451 static struct fwnode_handle * ··· 1496 1487 pdevinfo.fwnode = swnode; 1497 1488 1498 1489 ret = gpio_virtuser_make_lookup_table(dev); 1499 - if (ret) { 1500 - fwnode_remove_software_node(swnode); 1501 - return ret; 1502 - } 1490 + if (ret) 1491 + goto err_remove_swnode; 1503 1492 1504 1493 reinit_completion(&dev->probe_completion); 1505 1494 dev->driver_bound = false; ··· 1505 1498 1506 1499 pdev = platform_device_register_full(&pdevinfo); 1507 1500 if (IS_ERR(pdev)) { 1501 + ret = PTR_ERR(pdev); 1508 1502 bus_unregister_notifier(&platform_bus_type, &dev->bus_notifier); 1509 - fwnode_remove_software_node(swnode); 1510 - return PTR_ERR(pdev); 1503 + goto err_remove_lookup_table; 1511 1504 } 1512 1505 1513 1506 wait_for_completion(&dev->probe_completion); 1514 1507 bus_unregister_notifier(&platform_bus_type, &dev->bus_notifier); 
1515 1508 1516 1509 if (!dev->driver_bound) { 1517 - platform_device_unregister(pdev); 1518 - fwnode_remove_software_node(swnode); 1519 - return -ENXIO; 1510 + ret = -ENXIO; 1511 + goto err_unregister_pdev; 1520 1512 } 1521 1513 1522 1514 dev->pdev = pdev; 1523 1515 1524 1516 return 0; 1517 + 1518 + err_unregister_pdev: 1519 + platform_device_unregister(pdev); 1520 + err_remove_lookup_table: 1521 + gpio_virtuser_remove_lookup_table(dev); 1522 + err_remove_swnode: 1523 + fwnode_remove_software_node(swnode); 1524 + 1525 + return ret; 1525 1526 } 1526 1527 1527 1528 static void ··· 1541 1526 1542 1527 swnode = dev_fwnode(&dev->pdev->dev); 1543 1528 platform_device_unregister(dev->pdev); 1529 + gpio_virtuser_remove_lookup_table(dev); 1544 1530 fwnode_remove_software_node(swnode); 1545 1531 dev->pdev = NULL; 1546 - gpiod_remove_lookup_table(dev->lookup_table); 1547 - kfree(dev->lookup_table); 1532 + } 1533 + 1534 + static void 1535 + gpio_virtuser_device_lockup_configfs(struct gpio_virtuser_device *dev, bool lock) 1536 + { 1537 + struct configfs_subsystem *subsys = dev->group.cg_subsys; 1538 + struct gpio_virtuser_lookup_entry *entry; 1539 + struct gpio_virtuser_lookup *lookup; 1540 + 1541 + /* 1542 + * The device only needs to depend on leaf lookup entries. This is 1543 + * sufficient to lock up all the configfs entries that the 1544 + * instantiated, alive device depends on. 
1545 + */ 1546 + list_for_each_entry(lookup, &dev->lookup_list, siblings) { 1547 + list_for_each_entry(entry, &lookup->entry_list, siblings) { 1548 + if (lock) 1549 + WARN_ON(configfs_depend_item_unlocked( 1550 + subsys, &entry->group.cg_item)); 1551 + else 1552 + configfs_undepend_item_unlocked( 1553 + &entry->group.cg_item); 1554 + } 1555 + } 1548 1556 } 1549 1557 1550 1558 static ssize_t ··· 1582 1544 if (ret) 1583 1545 return ret; 1584 1546 1585 - guard(mutex)(&dev->lock); 1586 - 1587 - if (live == gpio_virtuser_device_is_live(dev)) 1588 - return -EPERM; 1589 - 1590 1547 if (live) 1591 - ret = gpio_virtuser_device_activate(dev); 1592 - else 1593 - gpio_virtuser_device_deactivate(dev); 1548 + gpio_virtuser_device_lockup_configfs(dev, true); 1549 + 1550 + scoped_guard(mutex, &dev->lock) { 1551 + if (live == gpio_virtuser_device_is_live(dev)) 1552 + ret = -EPERM; 1553 + else if (live) 1554 + ret = gpio_virtuser_device_activate(dev); 1555 + else 1556 + gpio_virtuser_device_deactivate(dev); 1557 + } 1558 + 1559 + /* 1560 + * Undepend is required only if device disablement (live == 0) 1561 + * succeeds or if device enablement (live == 1) fails. 1562 + */ 1563 + if (live == !!ret) 1564 + gpio_virtuser_device_lockup_configfs(dev, false); 1594 1565 1595 1566 return ret ?: count; 1596 1567 }
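The `err_unregister_pdev`/`err_remove_lookup_table`/`err_remove_swnode` labels added above form a standard kernel unwind ladder: jumping to a label undoes exactly the steps that had succeeded before the failure, in reverse order. The shape is easy to model with a toy function (entirely invented, three abstract "steps"):

```c
/* Kernel-style unwind ladder: each label undoes the steps completed
 * before the failure point, in reverse order. */
static int toy_setup(int fail_at, int *steps_done)
{
	int ret = 0;

	*steps_done = 0;
	if (fail_at == 1) { ret = -1; goto out; }
	(*steps_done)++;			/* step 1 done */
	if (fail_at == 2) { ret = -1; goto err_undo1; }
	(*steps_done)++;			/* step 2 done */
	if (fail_at == 3) { ret = -1; goto err_undo2; }
	(*steps_done)++;			/* step 3 done */
	return 0;

err_undo2:
	(*steps_done)--;
err_undo1:
	(*steps_done)--;
out:
	return ret;
}
```

Falling through from one error label into the next is what guarantees that a late failure rolls back all earlier steps, which is precisely what the original code missed for the lookup table.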
+16 -16
drivers/gpio/gpio-xilinx.c
··· 65 65 DECLARE_BITMAP(state, 64); 66 66 DECLARE_BITMAP(last_irq_read, 64); 67 67 DECLARE_BITMAP(dir, 64); 68 - spinlock_t gpio_lock; /* For serializing operations */ 68 + raw_spinlock_t gpio_lock; /* For serializing operations */ 69 69 int irq; 70 70 DECLARE_BITMAP(enable, 64); 71 71 DECLARE_BITMAP(rising_edge, 64); ··· 179 179 struct xgpio_instance *chip = gpiochip_get_data(gc); 180 180 int bit = xgpio_to_bit(chip, gpio); 181 181 182 - spin_lock_irqsave(&chip->gpio_lock, flags); 182 + raw_spin_lock_irqsave(&chip->gpio_lock, flags); 183 183 184 184 /* Write to GPIO signal and set its direction to output */ 185 185 __assign_bit(bit, chip->state, val); 186 186 187 187 xgpio_write_ch(chip, XGPIO_DATA_OFFSET, bit, chip->state); 188 188 189 - spin_unlock_irqrestore(&chip->gpio_lock, flags); 189 + raw_spin_unlock_irqrestore(&chip->gpio_lock, flags); 190 190 } 191 191 192 192 /** ··· 210 210 bitmap_remap(hw_mask, mask, chip->sw_map, chip->hw_map, 64); 211 211 bitmap_remap(hw_bits, bits, chip->sw_map, chip->hw_map, 64); 212 212 213 - spin_lock_irqsave(&chip->gpio_lock, flags); 213 + raw_spin_lock_irqsave(&chip->gpio_lock, flags); 214 214 215 215 bitmap_replace(state, chip->state, hw_bits, hw_mask, 64); 216 216 ··· 218 218 219 219 bitmap_copy(chip->state, state, 64); 220 220 221 - spin_unlock_irqrestore(&chip->gpio_lock, flags); 221 + raw_spin_unlock_irqrestore(&chip->gpio_lock, flags); 222 222 } 223 223 224 224 /** ··· 236 236 struct xgpio_instance *chip = gpiochip_get_data(gc); 237 237 int bit = xgpio_to_bit(chip, gpio); 238 238 239 - spin_lock_irqsave(&chip->gpio_lock, flags); 239 + raw_spin_lock_irqsave(&chip->gpio_lock, flags); 240 240 241 241 /* Set the GPIO bit in shadow register and set direction as input */ 242 242 __set_bit(bit, chip->dir); 243 243 xgpio_write_ch(chip, XGPIO_TRI_OFFSET, bit, chip->dir); 244 244 245 - spin_unlock_irqrestore(&chip->gpio_lock, flags); 245 + raw_spin_unlock_irqrestore(&chip->gpio_lock, flags); 246 246 247 247 return 0; 248 248 } 
··· 265 265 struct xgpio_instance *chip = gpiochip_get_data(gc); 266 266 int bit = xgpio_to_bit(chip, gpio); 267 267 268 - spin_lock_irqsave(&chip->gpio_lock, flags); 268 + raw_spin_lock_irqsave(&chip->gpio_lock, flags); 269 269 270 270 /* Write state of GPIO signal */ 271 271 __assign_bit(bit, chip->state, val); ··· 275 275 __clear_bit(bit, chip->dir); 276 276 xgpio_write_ch(chip, XGPIO_TRI_OFFSET, bit, chip->dir); 277 277 278 - spin_unlock_irqrestore(&chip->gpio_lock, flags); 278 + raw_spin_unlock_irqrestore(&chip->gpio_lock, flags); 279 279 280 280 return 0; 281 281 } ··· 398 398 int bit = xgpio_to_bit(chip, irq_offset); 399 399 u32 mask = BIT(bit / 32), temp; 400 400 401 - spin_lock_irqsave(&chip->gpio_lock, flags); 401 + raw_spin_lock_irqsave(&chip->gpio_lock, flags); 402 402 403 403 __clear_bit(bit, chip->enable); 404 404 ··· 408 408 temp &= ~mask; 409 409 xgpio_writereg(chip->regs + XGPIO_IPIER_OFFSET, temp); 410 410 } 411 - spin_unlock_irqrestore(&chip->gpio_lock, flags); 411 + raw_spin_unlock_irqrestore(&chip->gpio_lock, flags); 412 412 413 413 gpiochip_disable_irq(&chip->gc, irq_offset); 414 414 } ··· 428 428 429 429 gpiochip_enable_irq(&chip->gc, irq_offset); 430 430 431 - spin_lock_irqsave(&chip->gpio_lock, flags); 431 + raw_spin_lock_irqsave(&chip->gpio_lock, flags); 432 432 433 433 __set_bit(bit, chip->enable); 434 434 ··· 447 447 xgpio_writereg(chip->regs + XGPIO_IPIER_OFFSET, val); 448 448 } 449 449 450 - spin_unlock_irqrestore(&chip->gpio_lock, flags); 450 + raw_spin_unlock_irqrestore(&chip->gpio_lock, flags); 451 451 } 452 452 453 453 /** ··· 512 512 513 513 chained_irq_enter(irqchip, desc); 514 514 515 - spin_lock(&chip->gpio_lock); 515 + raw_spin_lock(&chip->gpio_lock); 516 516 517 517 xgpio_read_ch_all(chip, XGPIO_DATA_OFFSET, all); 518 518 ··· 529 529 bitmap_copy(chip->last_irq_read, all, 64); 530 530 bitmap_or(all, rising, falling, 64); 531 531 532 - spin_unlock(&chip->gpio_lock); 532 + raw_spin_unlock(&chip->gpio_lock); 533 533 534 534 
dev_dbg(gc->parent, "IRQ rising %*pb falling %*pb\n", 64, rising, 64, falling); 535 535 ··· 620 620 bitmap_set(chip->hw_map, 0, width[0]); 621 621 bitmap_set(chip->hw_map, 32, width[1]); 622 622 623 - spin_lock_init(&chip->gpio_lock); 623 + raw_spin_lock_init(&chip->gpio_lock); 624 624 625 625 chip->gc.base = -1; 626 626 chip->gc.ngpio = bitmap_weight(chip->hw_map, 64);
+3 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
··· 715 715 void amdgpu_amdkfd_set_compute_idle(struct amdgpu_device *adev, bool idle) 716 716 { 717 717 enum amd_powergating_state state = idle ? AMD_PG_STATE_GATE : AMD_PG_STATE_UNGATE; 718 - if (IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 11 && 719 - ((adev->mes.kiq_version & AMDGPU_MES_VERSION_MASK) <= 64)) { 718 + if ((IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 11 && 719 + ((adev->mes.kiq_version & AMDGPU_MES_VERSION_MASK) <= 64)) || 720 + (IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 12)) { 720 721 pr_debug("GFXOFF is %s\n", idle ? "enabled" : "disabled"); 721 722 amdgpu_gfx_off_ctrl(adev, idle); 722 723 } else if ((IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 9) &&
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_fw_attestation.c
··· 122 122 if (adev->flags & AMD_IS_APU) 123 123 return 0; 124 124 125 + if (amdgpu_ip_version(adev, MP0_HWIP, 0) == IP_VERSION(14, 0, 2) || 126 + amdgpu_ip_version(adev, MP0_HWIP, 0) == IP_VERSION(14, 0, 3)) 127 + return 0; 128 + 125 129 if (adev->asic_type >= CHIP_SIENNA_CICHLID) 126 130 return 1; 127 131
+10 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
··· 2054 2054 { 2055 2055 struct amdgpu_device *adev = ring->adev; 2056 2056 u32 idx; 2057 + bool sched_work = false; 2057 2058 2058 2059 if (!adev->gfx.enable_cleaner_shader) 2059 2060 return; ··· 2073 2072 mutex_lock(&adev->enforce_isolation_mutex); 2074 2073 if (adev->enforce_isolation[idx]) { 2075 2074 if (adev->kfd.init_complete) 2076 - amdgpu_gfx_kfd_sch_ctrl(adev, idx, false); 2075 + sched_work = true; 2077 2076 } 2078 2077 mutex_unlock(&adev->enforce_isolation_mutex); 2078 + 2079 + if (sched_work) 2080 + amdgpu_gfx_kfd_sch_ctrl(adev, idx, false); 2079 2081 } 2080 2082 2081 2083 /** ··· 2094 2090 { 2095 2091 struct amdgpu_device *adev = ring->adev; 2096 2092 u32 idx; 2093 + bool sched_work = false; 2097 2094 2098 2095 if (!adev->gfx.enable_cleaner_shader) 2099 2096 return; ··· 2110 2105 mutex_lock(&adev->enforce_isolation_mutex); 2111 2106 if (adev->enforce_isolation[idx]) { 2112 2107 if (adev->kfd.init_complete) 2113 - amdgpu_gfx_kfd_sch_ctrl(adev, idx, true); 2108 + sched_work = true; 2114 2109 } 2115 2110 mutex_unlock(&adev->enforce_isolation_mutex); 2111 + 2112 + if (sched_work) 2113 + amdgpu_gfx_kfd_sch_ctrl(adev, idx, true); 2116 2114 } 2117 2115 2118 2116 /*
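Both hunks above apply the same lock-ordering fix: record the decision while holding `enforce_isolation_mutex`, then call `amdgpu_gfx_kfd_sch_ctrl()` only after dropping it. A minimal model of the pattern (the "lock" is a simple depth counter; all names are invented):

```c
static int lock_depth;		/* stand-in for enforce_isolation_mutex */
static int calls, calls_under_lock;
static int isolation_enabled;

static void heavy_ctrl(void)	/* stand-in for amdgpu_gfx_kfd_sch_ctrl() */
{
	calls++;
	if (lock_depth)
		calls_under_lock++;	/* would risk deadlock in the kernel */
}

static void ring_begin_use(void)
{
	int sched_work = 0;

	lock_depth++;			/* mutex_lock() */
	if (isolation_enabled)
		sched_work = 1;		/* only record the decision here */
	lock_depth--;			/* mutex_unlock() */

	if (sched_work)
		heavy_ctrl();		/* invoked with no locks held */
}
```

The invariant worth asserting is that the heavy call never observes the lock held, which is the property the kernel fix restores.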
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
··· 191 191 need_ctx_switch = ring->current_ctx != fence_ctx; 192 192 if (ring->funcs->emit_pipeline_sync && job && 193 193 ((tmp = amdgpu_sync_get_fence(&job->explicit_sync)) || 194 - (amdgpu_sriov_vf(adev) && need_ctx_switch) || 195 - amdgpu_vm_need_pipeline_sync(ring, job))) { 194 + need_ctx_switch || amdgpu_vm_need_pipeline_sync(ring, job))) { 195 + 196 196 need_pipe_sync = true; 197 197 198 198 if (tmp)
+2 -2
drivers/gpu/drm/bridge/ite-it6263.c
··· 854 854 it->lvds_i2c = devm_i2c_new_dummy_device(dev, client->adapter, 855 855 LVDS_INPUT_CTRL_I2C_ADDR); 856 856 if (IS_ERR(it->lvds_i2c)) 857 - dev_err_probe(it->dev, PTR_ERR(it->lvds_i2c), 858 - "failed to allocate I2C device for LVDS\n"); 857 + return dev_err_probe(it->dev, PTR_ERR(it->lvds_i2c), 858 + "failed to allocate I2C device for LVDS\n"); 859 859 860 860 it->lvds_regmap = devm_regmap_init_i2c(it->lvds_i2c, 861 861 &it6263_lvds_regmap_config);
+3
drivers/gpu/drm/display/drm_bridge_connector.c
··· 596 596 return ERR_PTR(-EINVAL); 597 597 598 598 if (bridge_connector->bridge_hdmi) { 599 + if (!connector->ycbcr_420_allowed) 600 + supported_formats &= ~BIT(HDMI_COLORSPACE_YUV420); 601 + 599 602 bridge = bridge_connector->bridge_hdmi; 600 603 601 604 ret = drmm_connector_hdmi_init(drm, connector,
+4
drivers/gpu/drm/drm_bridge.c
··· 207 207 { 208 208 mutex_init(&bridge->hpd_mutex); 209 209 210 + if (bridge->ops & DRM_BRIDGE_OP_HDMI) 211 + bridge->ycbcr_420_allowed = !!(bridge->supported_formats & 212 + BIT(HDMI_COLORSPACE_YUV420)); 213 + 210 214 mutex_lock(&bridge_lock); 211 215 list_add_tail(&bridge->list, &bridge_list); 212 216 mutex_unlock(&bridge_lock);
+3
drivers/gpu/drm/drm_connector.c
··· 592 592 if (!supported_formats || !(supported_formats & BIT(HDMI_COLORSPACE_RGB))) 593 593 return -EINVAL; 594 594 595 + if (connector->ycbcr_420_allowed != !!(supported_formats & BIT(HDMI_COLORSPACE_YUV420))) 596 + return -EINVAL; 597 + 595 598 if (!(max_bpc == 8 || max_bpc == 10 || max_bpc == 12)) 596 599 return -EINVAL; 597 600
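The drm_connector.c check above requires ycbcr_420_allowed to agree with whether YUV420 appears in the supported-formats bitmask; `!!` collapses the mask test to 0/1 so it compares cleanly with a bool. A standalone sketch of the same predicate (the local BIT macro and enum are illustrative stand-ins for the kernel's, and -1 stands in for -EINVAL):

```c
#include <assert.h>
#include <stdbool.h>

#define BIT(n) (1UL << (n))

/* Local stand-ins mirroring the shape of linux/hdmi.h, for illustration only */
enum { HDMI_COLORSPACE_RGB, HDMI_COLORSPACE_YUV422,
       HDMI_COLORSPACE_YUV444, HDMI_COLORSPACE_YUV420 };

static int hdmi_init_check(bool ycbcr_420_allowed, unsigned long supported_formats)
{
	/* RGB support is mandatory */
	if (!supported_formats || !(supported_formats & BIT(HDMI_COLORSPACE_RGB)))
		return -1;

	/* the flag and the bitmask must tell the same story about YUV420 */
	if (ycbcr_420_allowed != !!(supported_formats & BIT(HDMI_COLORSPACE_YUV420)))
		return -1;

	return 0;
}
```

The KUnit parameter table added later in this series exercises exactly these four allowed/format combinations.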
+1 -1
drivers/gpu/drm/i915/display/intel_fb.c
··· 1694 1694 * arithmetic related to alignment and offset calculation. 1695 1695 */ 1696 1696 if (is_gen12_ccs_cc_plane(&fb->base, i)) { 1697 - if (IS_ALIGNED(fb->base.offsets[i], PAGE_SIZE)) 1697 + if (IS_ALIGNED(fb->base.offsets[i], 64)) 1698 1698 continue; 1699 1699 else 1700 1700 return -EINVAL;
-5
drivers/gpu/drm/mediatek/Kconfig
··· 14 14 select DRM_BRIDGE_CONNECTOR 15 15 select DRM_MIPI_DSI 16 16 select DRM_PANEL 17 - select MEMORY 18 - select MTK_SMI 19 - select PHY_MTK_MIPI_DSI 20 17 select VIDEOMODE_HELPERS 21 18 help 22 19 Choose this option if you have a Mediatek SoCs. ··· 24 27 config DRM_MEDIATEK_DP 25 28 tristate "DRM DPTX Support for MediaTek SoCs" 26 29 depends on DRM_MEDIATEK 27 - select PHY_MTK_DP 28 30 select DRM_DISPLAY_HELPER 29 31 select DRM_DISPLAY_DP_HELPER 30 32 select DRM_DISPLAY_DP_AUX_BUS ··· 34 38 tristate "DRM HDMI Support for Mediatek SoCs" 35 39 depends on DRM_MEDIATEK 36 40 select SND_SOC_HDMI_CODEC if SND_SOC 37 - select PHY_MTK_HDMI 38 41 help 39 42 DRM/KMS HDMI driver for Mediatek SoCs
+19 -6
drivers/gpu/drm/mediatek/mtk_crtc.c
··· 112 112 113 113 drm_crtc_handle_vblank(&mtk_crtc->base); 114 114 115 + #if IS_REACHABLE(CONFIG_MTK_CMDQ) 116 + if (mtk_crtc->cmdq_client.chan) 117 + return; 118 + #endif 119 + 115 120 spin_lock_irqsave(&mtk_crtc->config_lock, flags); 116 121 if (!mtk_crtc->config_updating && mtk_crtc->pending_needs_vblank) { 117 122 mtk_crtc_finish_page_flip(mtk_crtc); ··· 289 284 state = to_mtk_crtc_state(mtk_crtc->base.state); 290 285 291 286 spin_lock_irqsave(&mtk_crtc->config_lock, flags); 292 - if (mtk_crtc->config_updating) { 293 - spin_unlock_irqrestore(&mtk_crtc->config_lock, flags); 287 + if (mtk_crtc->config_updating) 294 288 goto ddp_cmdq_cb_out; 295 - } 296 289 297 290 state->pending_config = false; 298 291 ··· 318 315 mtk_crtc->pending_async_planes = false; 319 316 } 320 317 321 - spin_unlock_irqrestore(&mtk_crtc->config_lock, flags); 322 - 323 318 ddp_cmdq_cb_out: 319 + 320 + if (mtk_crtc->pending_needs_vblank) { 321 + mtk_crtc_finish_page_flip(mtk_crtc); 322 + mtk_crtc->pending_needs_vblank = false; 323 + } 324 + 325 + spin_unlock_irqrestore(&mtk_crtc->config_lock, flags); 324 326 325 327 mtk_crtc->cmdq_vblank_cnt = 0; 326 328 wake_up(&mtk_crtc->cb_blocking_queue); ··· 614 606 */ 615 607 mtk_crtc->cmdq_vblank_cnt = 3; 616 608 609 + spin_lock_irqsave(&mtk_crtc->config_lock, flags); 610 + mtk_crtc->config_updating = false; 611 + spin_unlock_irqrestore(&mtk_crtc->config_lock, flags); 612 + 617 613 mbox_send_message(mtk_crtc->cmdq_client.chan, cmdq_handle); 618 614 mbox_client_txdone(mtk_crtc->cmdq_client.chan, 0); 619 615 } 620 - #endif 616 + #else 621 617 spin_lock_irqsave(&mtk_crtc->config_lock, flags); 622 618 mtk_crtc->config_updating = false; 623 619 spin_unlock_irqrestore(&mtk_crtc->config_lock, flags); 620 + #endif 624 621 625 622 mutex_unlock(&mtk_crtc->hw_lock); 626 623 }
+39 -30
drivers/gpu/drm/mediatek/mtk_disp_ovl.c
··· 460 460 } 461 461 } 462 462 463 + static void mtk_ovl_afbc_layer_config(struct mtk_disp_ovl *ovl, 464 + unsigned int idx, 465 + struct mtk_plane_pending_state *pending, 466 + struct cmdq_pkt *cmdq_pkt) 467 + { 468 + unsigned int pitch_msb = pending->pitch >> 16; 469 + unsigned int hdr_pitch = pending->hdr_pitch; 470 + unsigned int hdr_addr = pending->hdr_addr; 471 + 472 + if (pending->modifier != DRM_FORMAT_MOD_LINEAR) { 473 + mtk_ddp_write_relaxed(cmdq_pkt, hdr_addr, &ovl->cmdq_reg, ovl->regs, 474 + DISP_REG_OVL_HDR_ADDR(ovl, idx)); 475 + mtk_ddp_write_relaxed(cmdq_pkt, 476 + OVL_PITCH_MSB_2ND_SUBBUF | pitch_msb, 477 + &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx)); 478 + mtk_ddp_write_relaxed(cmdq_pkt, hdr_pitch, &ovl->cmdq_reg, ovl->regs, 479 + DISP_REG_OVL_HDR_PITCH(ovl, idx)); 480 + } else { 481 + mtk_ddp_write_relaxed(cmdq_pkt, pitch_msb, 482 + &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx)); 483 + } 484 + } 485 + 463 486 void mtk_ovl_layer_config(struct device *dev, unsigned int idx, 464 487 struct mtk_plane_state *state, 465 488 struct cmdq_pkt *cmdq_pkt) ··· 490 467 struct mtk_disp_ovl *ovl = dev_get_drvdata(dev); 491 468 struct mtk_plane_pending_state *pending = &state->pending; 492 469 unsigned int addr = pending->addr; 493 - unsigned int hdr_addr = pending->hdr_addr; 494 - unsigned int pitch = pending->pitch; 495 - unsigned int hdr_pitch = pending->hdr_pitch; 470 + unsigned int pitch_lsb = pending->pitch & GENMASK(15, 0); 496 471 unsigned int fmt = pending->format; 472 + unsigned int rotation = pending->rotation; 497 473 unsigned int offset = (pending->y << 16) | pending->x; 498 474 unsigned int src_size = (pending->height << 16) | pending->width; 499 475 unsigned int blend_mode = state->base.pixel_blend_mode; 500 476 unsigned int ignore_pixel_alpha = 0; 501 477 unsigned int con; 502 - bool is_afbc = pending->modifier != DRM_FORMAT_MOD_LINEAR; 503 - union overlay_pitch { 504 - struct split_pitch { 505 - u16 lsb; 506 - u16 msb; 507 - } split_pitch; 508 - u32 pitch; 509 - } overlay_pitch; 510 - 511 - overlay_pitch.pitch = pitch; 512 478 513 479 if (!pending->enable) { 514 480 mtk_ovl_layer_off(dev, idx, cmdq_pkt); ··· 525 513 ignore_pixel_alpha = OVL_CONST_BLEND; 526 514 } 527 515 528 - if (pending->rotation & DRM_MODE_REFLECT_Y) { 516 + /* 517 + * Treat rotate 180 as flip x + flip y, and XOR the original rotation value 518 + * to flip x + flip y to support both in the same time. 519 + */ 520 + if (rotation & DRM_MODE_ROTATE_180) 521 + rotation ^= DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y; 522 + 523 + if (rotation & DRM_MODE_REFLECT_Y) { 529 524 con |= OVL_CON_VIRT_FLIP; 530 525 addr += (pending->height - 1) * pending->pitch; 531 526 } 532 527 533 - if (pending->rotation & DRM_MODE_REFLECT_X) { 528 + if (rotation & DRM_MODE_REFLECT_X) { 534 529 con |= OVL_CON_HORZ_FLIP; 535 530 addr += pending->pitch - 1; 536 531 } 537 532 538 533 if (ovl->data->supports_afbc) 539 - mtk_ovl_set_afbc(ovl, cmdq_pkt, idx, is_afbc); 534 + mtk_ovl_set_afbc(ovl, cmdq_pkt, idx, 535 + pending->modifier != DRM_FORMAT_MOD_LINEAR); 540 536 541 537 mtk_ddp_write_relaxed(cmdq_pkt, con, &ovl->cmdq_reg, ovl->regs, 542 538 DISP_REG_OVL_CON(idx)); 543 - mtk_ddp_write_relaxed(cmdq_pkt, overlay_pitch.split_pitch.lsb | ignore_pixel_alpha, 539 + mtk_ddp_write_relaxed(cmdq_pkt, pitch_lsb | ignore_pixel_alpha, 544 540 &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH(idx)); 545 541 mtk_ddp_write_relaxed(cmdq_pkt, src_size, &ovl->cmdq_reg, ovl->regs, 546 542 DISP_REG_OVL_SRC_SIZE(idx)); ··· 557 537 mtk_ddp_write_relaxed(cmdq_pkt, addr, &ovl->cmdq_reg, ovl->regs, 558 538 DISP_REG_OVL_ADDR(ovl, idx)); 559 539 560 - if (is_afbc) { 561 - mtk_ddp_write_relaxed(cmdq_pkt, hdr_addr, &ovl->cmdq_reg, ovl->regs, 562 - DISP_REG_OVL_HDR_ADDR(ovl, idx)); 563 - mtk_ddp_write_relaxed(cmdq_pkt, 564 - OVL_PITCH_MSB_2ND_SUBBUF | overlay_pitch.split_pitch.msb, 565 - &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx)); 566 - mtk_ddp_write_relaxed(cmdq_pkt, hdr_pitch, &ovl->cmdq_reg, ovl->regs, 567 - DISP_REG_OVL_HDR_PITCH(ovl, idx)); 568 - } else { 569 - mtk_ddp_write_relaxed(cmdq_pkt, 570 - overlay_pitch.split_pitch.msb, 571 - &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx)); 572 - } 540 + if (ovl->data->supports_afbc) 541 + mtk_ovl_afbc_layer_config(ovl, idx, pending, cmdq_pkt); 573 542 574 543 mtk_ovl_set_bit_depth(dev, idx, fmt, cmdq_pkt); 575 544 mtk_ovl_layer_on(dev, idx, cmdq_pkt);
+27 -19
drivers/gpu/drm/mediatek/mtk_dp.c
··· 543 543 enum dp_pixelformat color_format) 544 544 { 545 545 u32 val; 546 - 547 - /* update MISC0 */ 548 - mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3034, 549 - color_format << DP_TEST_COLOR_FORMAT_SHIFT, 550 - DP_TEST_COLOR_FORMAT_MASK); 546 + u32 misc0_color; 551 547 552 548 switch (color_format) { 553 549 case DP_PIXELFORMAT_YUV422: 554 550 val = PIXEL_ENCODE_FORMAT_DP_ENC0_P0_YCBCR422; 551 + misc0_color = DP_COLOR_FORMAT_YCbCr422; 555 552 break; 556 553 case DP_PIXELFORMAT_RGB: 557 554 val = PIXEL_ENCODE_FORMAT_DP_ENC0_P0_RGB; 555 + misc0_color = DP_COLOR_FORMAT_RGB; 558 556 break; 559 557 default: 560 558 drm_warn(mtk_dp->drm_dev, "Unsupported color format: %d\n", 561 559 color_format); 562 560 return -EINVAL; 563 561 } 562 + 563 + /* update MISC0 */ 564 + mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3034, 565 + misc0_color, 566 + DP_TEST_COLOR_FORMAT_MASK); 564 567 565 568 mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_303C, 566 569 val, PIXEL_ENCODE_FORMAT_DP_ENC0_P0_MASK); ··· 2123 2120 struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge); 2124 2121 enum drm_connector_status ret = connector_status_disconnected; 2125 2122 bool enabled = mtk_dp->enabled; 2126 - u8 sink_count = 0; 2127 2123 2128 2124 if (!mtk_dp->train_info.cable_plugged_in) 2129 2125 return ret; ··· 2137 2135 * function, we just need to check the HPD connection to check 2138 2136 * whether we connect to a sink device. 2139 2137 */ 2140 - drm_dp_dpcd_readb(&mtk_dp->aux, DP_SINK_COUNT, &sink_count); 2141 - if (DP_GET_SINK_COUNT(sink_count)) 2138 + 2139 + if (drm_dp_read_sink_count(&mtk_dp->aux) > 0) 2142 2140 ret = connector_status_connected; 2143 2141 2144 2142 if (!enabled) ··· 2433 2431 { 2434 2432 struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge); 2435 2433 u32 bpp = info->color_formats & DRM_COLOR_FORMAT_YCBCR422 ? 16 : 24; 2436 - u32 rate = min_t(u32, drm_dp_max_link_rate(mtk_dp->rx_cap) * 2437 - drm_dp_max_lane_count(mtk_dp->rx_cap), 2438 - drm_dp_bw_code_to_link_rate(mtk_dp->max_linkrate) * 2439 - mtk_dp->max_lanes); 2434 + u32 lane_count_min = mtk_dp->train_info.lane_count; 2435 + u32 rate = drm_dp_bw_code_to_link_rate(mtk_dp->train_info.link_rate) * 2436 + lane_count_min; 2440 2437 2441 - if (rate < mode->clock * bpp / 8) 2438 + /* 2439 + * FEC overhead is approximately 2.4% from DP 1.4a spec 2.2.1.4.2. 2440 + * The down-spread amplitude shall either be disabled (0.0%) or up 2441 + * to 0.5% from 1.4a 3.5.2.6. Add up to approximately 3% total overhead. 2442 + * 2443 + * Because rate is already divided by 10, 2444 + * mode->clock does not need to be multiplied by 10 2445 + */ 2446 + if ((rate * 97 / 100) < (mode->clock * bpp / 8)) 2442 2447 return MODE_CLOCK_HIGH; 2443 2448 2444 2449 return MODE_OK; ··· 2486 2477 struct drm_display_mode *mode = &crtc_state->adjusted_mode; 2487 2478 struct drm_display_info *display_info = 2488 2479 &conn_state->connector->display_info; 2489 - u32 rate = min_t(u32, drm_dp_max_link_rate(mtk_dp->rx_cap) * 2490 - drm_dp_max_lane_count(mtk_dp->rx_cap), 2491 - drm_dp_bw_code_to_link_rate(mtk_dp->max_linkrate) * 2492 - mtk_dp->max_lanes); 2480 + u32 lane_count_min = mtk_dp->train_info.lane_count; 2481 + u32 rate = drm_dp_bw_code_to_link_rate(mtk_dp->train_info.link_rate) * 2482 + lane_count_min; 2493 2483 2494 2484 *num_input_fmts = 0; 2495 2485 ··· 2497 2489 * datarate of YUV422 and sink device supports YUV422, we output YUV422 2498 2490 * format. Use this condition, we can support more resolution. 2499 2491 */ 2500 - if ((rate < (mode->clock * 24 / 8)) && 2501 - (rate > (mode->clock * 16 / 8)) && 2492 + if (((rate * 97 / 100) < (mode->clock * 24 / 8)) && 2493 + ((rate * 97 / 100) > (mode->clock * 16 / 8)) && 2502 2494 (display_info->color_formats & DRM_COLOR_FORMAT_YCBCR422)) { 2503 2495 input_fmts = kcalloc(1, sizeof(*input_fmts), GFP_KERNEL); 2504 2496 if (!input_fmts)
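The mtk_dp hunks above gate a mode on the trained link budget minus roughly 3% overhead (FEC ~2.4% plus up to 0.5% down-spread). A standalone sketch of that arithmetic; the function name and unit choices are illustrative, and per the diff's comment "rate" (link rate times lane count) is already scaled so mode->clock in kHz needs no extra factor of 10:

```c
#include <assert.h>
#include <stdbool.h>

/* Does the mode's required data rate fit into the link's usable payload
 * after subtracting ~3% for FEC and down-spread overhead? */
static bool dp_mode_clock_fits(unsigned long rate, unsigned long clock_khz,
			       unsigned int bpp)
{
	/* usable payload after ~3% overhead vs. required byte rate */
	return (rate * 97 / 100) >= (clock_khz * bpp / 8);
}
```

This is why the diff replaces the bare `rate < mode->clock * bpp / 8` comparison with `rate * 97 / 100`: without the margin, a mode that nominally fits can still fail once FEC is enabled.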
+9 -4
drivers/gpu/drm/mediatek/mtk_drm_drv.c
··· 372 372 struct mtk_drm_private *temp_drm_priv; 373 373 struct device_node *phandle = dev->parent->of_node; 374 374 const struct of_device_id *of_id; 375 + struct device_node *node; 375 376 struct device *drm_dev; 376 377 unsigned int cnt = 0; 377 378 int i, j; 378 379 379 - for_each_child_of_node_scoped(phandle->parent, node) { 380 + for_each_child_of_node(phandle->parent, node) { 380 381 struct platform_device *pdev; 381 382 382 383 of_id = of_match_node(mtk_drm_of_ids, node); ··· 406 405 if (temp_drm_priv->mtk_drm_bound) 407 406 cnt++; 408 407 409 - if (cnt == MAX_CRTC) 408 + if (cnt == MAX_CRTC) { 409 + of_node_put(node); 410 410 break; 411 + } 411 412 } 412 413 413 414 if (drm_priv->data->mmsys_dev_num == cnt) { ··· 674 671 err_free: 675 672 private->drm = NULL; 676 673 drm_dev_put(drm); 674 + for (i = 0; i < private->data->mmsys_dev_num; i++) 675 + private->all_drm_private[i]->drm = NULL; 677 676 return ret; 678 677 } 679 678 ··· 903 898 const unsigned int **out_path, 904 899 unsigned int *out_path_len) 905 900 { 906 - struct device_node *next, *prev, *vdo = dev->parent->of_node; 901 + struct device_node *next = NULL, *prev, *vdo = dev->parent->of_node; 907 902 unsigned int temp_path[DDP_COMPONENT_DRM_ID_MAX] = { 0 }; 908 903 unsigned int *final_ddp_path; 909 904 unsigned short int idx = 0; ··· 1092 1087 /* No devicetree graphs support: go with hardcoded paths if present */ 1093 1088 dev_dbg(dev, "Using hardcoded paths for MMSYS %u\n", mtk_drm_data->mmsys_id); 1094 1089 private->data = mtk_drm_data; 1095 - }; 1090 + } 1096 1091 1097 1092 private->all_drm_private = devm_kmalloc_array(dev, private->data->mmsys_dev_num, 1098 1093 sizeof(*private->all_drm_private),
+30 -19
drivers/gpu/drm/mediatek/mtk_dsi.c
··· 139 139 #define CLK_HS_POST GENMASK(15, 8) 140 140 #define CLK_HS_EXIT GENMASK(23, 16) 141 141 142 - #define DSI_VM_CMD_CON 0x130 142 + /* DSI_VM_CMD_CON */ 143 143 #define VM_CMD_EN BIT(0) 144 144 #define TS_VFP_EN BIT(5) 145 145 146 - #define DSI_SHADOW_DEBUG 0x190U 146 + /* DSI_SHADOW_DEBUG */ 147 147 #define FORCE_COMMIT BIT(0) 148 148 #define BYPASS_SHADOW BIT(1) 149 149 ··· 187 187 188 188 struct mtk_dsi_driver_data { 189 189 const u32 reg_cmdq_off; 190 + const u32 reg_vm_cmd_off; 191 + const u32 reg_shadow_dbg_off; 190 192 bool has_shadow_ctl; 191 193 bool has_size_ctl; 192 194 bool cmdq_long_packet_ctl; ··· 248 246 u32 data_rate_mhz = DIV_ROUND_UP(dsi->data_rate, HZ_PER_MHZ); 249 247 struct mtk_phy_timing *timing = &dsi->phy_timing; 250 248 251 - timing->lpx = (80 * data_rate_mhz / (8 * 1000)) + 1; 252 - timing->da_hs_prepare = (59 * data_rate_mhz + 4 * 1000) / 8000 + 1; 253 - timing->da_hs_zero = (163 * data_rate_mhz + 11 * 1000) / 8000 + 1 - 249 + timing->lpx = (60 * data_rate_mhz / (8 * 1000)) + 1; 250 + timing->da_hs_prepare = (80 * data_rate_mhz + 4 * 1000) / 8000; 251 + timing->da_hs_zero = (170 * data_rate_mhz + 10 * 1000) / 8000 + 1 - 254 252 timing->da_hs_prepare; 255 - timing->da_hs_trail = (78 * data_rate_mhz + 7 * 1000) / 8000 + 1; 253 + timing->da_hs_trail = timing->da_hs_prepare + 1; 256 254 257 - timing->ta_go = 4 * timing->lpx; 258 - timing->ta_sure = 3 * timing->lpx / 2; 259 - timing->ta_get = 5 * timing->lpx; 260 - timing->da_hs_exit = (118 * data_rate_mhz / (8 * 1000)) + 1; 255 + timing->ta_go = 4 * timing->lpx - 2; 256 + timing->ta_sure = timing->lpx + 2; 257 + timing->ta_get = 4 * timing->lpx; 258 + timing->da_hs_exit = 2 * timing->lpx + 1; 261 259 262 - timing->clk_hs_prepare = (57 * data_rate_mhz / (8 * 1000)) + 1; 263 - timing->clk_hs_post = (65 * data_rate_mhz + 53 * 1000) / 8000 + 1; 264 - timing->clk_hs_trail = (78 * data_rate_mhz + 7 * 1000) / 8000 + 1; 265 - timing->clk_hs_zero = (330 * data_rate_mhz / (8 * 1000)) + 1 - 266 - timing->clk_hs_prepare; 267 - timing->clk_hs_exit = (118 * data_rate_mhz / (8 * 1000)) + 1; 260 + timing->clk_hs_prepare = 70 * data_rate_mhz / (8 * 1000); 261 + timing->clk_hs_post = timing->clk_hs_prepare + 8; 262 + timing->clk_hs_trail = timing->clk_hs_prepare; 263 + timing->clk_hs_zero = timing->clk_hs_trail * 4; 264 + timing->clk_hs_exit = 2 * timing->clk_hs_trail; 268 265 269 266 timcon0 = FIELD_PREP(LPX, timing->lpx) | 270 267 FIELD_PREP(HS_PREP, timing->da_hs_prepare) | ··· 368 367 369 368 static void mtk_dsi_set_vm_cmd(struct mtk_dsi *dsi) 370 369 { 371 - mtk_dsi_mask(dsi, DSI_VM_CMD_CON, VM_CMD_EN, VM_CMD_EN); 372 - mtk_dsi_mask(dsi, DSI_VM_CMD_CON, TS_VFP_EN, TS_VFP_EN); 370 + mtk_dsi_mask(dsi, dsi->driver_data->reg_vm_cmd_off, VM_CMD_EN, VM_CMD_EN); 371 + mtk_dsi_mask(dsi, dsi->driver_data->reg_vm_cmd_off, TS_VFP_EN, TS_VFP_EN); 373 372 } 374 373 375 374 static void mtk_dsi_rxtx_control(struct mtk_dsi *dsi) ··· 715 714 716 715 if (dsi->driver_data->has_shadow_ctl) 717 716 writel(FORCE_COMMIT | BYPASS_SHADOW, 718 - dsi->regs + DSI_SHADOW_DEBUG); 717 + dsi->regs + dsi->driver_data->reg_shadow_dbg_off); 719 718 720 719 mtk_dsi_reset_engine(dsi); 721 720 mtk_dsi_phy_timconfig(dsi); ··· 1264 1263 1265 1264 static const struct mtk_dsi_driver_data mt8173_dsi_driver_data = { 1266 1265 .reg_cmdq_off = 0x200, 1266 + .reg_vm_cmd_off = 0x130, 1267 + .reg_shadow_dbg_off = 0x190 1267 1268 }; 1268 1269 1269 1270 static const struct mtk_dsi_driver_data mt2701_dsi_driver_data = { 1270 1271 .reg_cmdq_off = 0x180, 1272 + .reg_vm_cmd_off = 0x130, 1273 + .reg_shadow_dbg_off = 0x190 1271 1274 }; 1272 1275 1273 1276 static const struct mtk_dsi_driver_data mt8183_dsi_driver_data = { 1274 1277 .reg_cmdq_off = 0x200, 1278 + .reg_vm_cmd_off = 0x130, 1279 + .reg_shadow_dbg_off = 0x190, 1275 1280 .has_shadow_ctl = true, 1276 1281 .has_size_ctl = true, 1277 1282 }; 1278 1283 1279 1284 static const struct mtk_dsi_driver_data mt8186_dsi_driver_data = { 1280 1285 .reg_cmdq_off = 0xd00, 1286 + .reg_vm_cmd_off = 0x200, 1287 + .reg_shadow_dbg_off = 0xc00, 1281 1288 .has_shadow_ctl = true, 1282 1289 .has_size_ctl = true, 1283 1290 }; 1284 1291 1285 1292 static const struct mtk_dsi_driver_data mt8188_dsi_driver_data = { 1286 1293 .reg_cmdq_off = 0xd00, 1294 + .reg_vm_cmd_off = 0x200, 1295 + .reg_shadow_dbg_off = 0xc00, 1287 1296 .has_shadow_ctl = true, 1288 1297 .has_size_ctl = true, 1289 1298 .cmdq_long_packet_ctl = true,
+1 -1
drivers/gpu/drm/nouveau/nouveau_acpi.c
··· 384 384 if (ret < 0) 385 385 return NULL; 386 386 387 - return kmemdup(edid, EDID_LENGTH, GFP_KERNEL); 387 + return edid; 388 388 } 389 389 390 390 bool nouveau_acpi_video_backlight_use_native(void)
+4 -2
drivers/gpu/drm/nouveau/nouveau_fence.c
··· 387 387 if (f) { 388 388 struct nouveau_channel *prev; 389 389 bool must_wait = true; 390 + bool local; 390 391 391 392 rcu_read_lock(); 392 393 prev = rcu_dereference(f->channel); 393 - if (prev && (prev == chan || 394 - fctx->sync(f, prev, chan) == 0)) 394 + local = prev && prev->cli->drm == chan->cli->drm; 395 + if (local && (prev == chan || 396 + fctx->sync(f, prev, chan) == 0)) 395 397 must_wait = false; 396 398 rcu_read_unlock(); 397 399 if (!must_wait)
+1
drivers/gpu/drm/nouveau/nvkm/engine/disp/mcp77.c
··· 31 31 .state = g94_sor_state, 32 32 .power = nv50_sor_power, 33 33 .clock = nv50_sor_clock, 34 + .bl = &nv50_sor_bl, 34 35 .hdmi = &g84_sor_hdmi, 35 36 .dp = &g94_sor_dp, 36 37 };
+60
drivers/gpu/drm/tests/drm_connector_test.c
··· 1095 1095 KUNIT_EXPECT_LT(test, ret, 0); 1096 1096 } 1097 1097 1098 + struct drm_connector_hdmi_init_formats_yuv420_allowed_test { 1099 + unsigned long supported_formats; 1100 + bool yuv420_allowed; 1101 + int expected_result; 1102 + }; 1103 + 1104 + #define YUV420_ALLOWED_TEST(_formats, _allowed, _result) \ 1105 + { \ 1106 + .supported_formats = BIT(HDMI_COLORSPACE_RGB) | (_formats), \ 1107 + .yuv420_allowed = _allowed, \ 1108 + .expected_result = _result, \ 1109 + } 1110 + 1111 + static const struct drm_connector_hdmi_init_formats_yuv420_allowed_test 1112 + drm_connector_hdmi_init_formats_yuv420_allowed_tests[] = { 1113 + YUV420_ALLOWED_TEST(BIT(HDMI_COLORSPACE_YUV420), true, 0), 1114 + YUV420_ALLOWED_TEST(BIT(HDMI_COLORSPACE_YUV420), false, -EINVAL), 1115 + YUV420_ALLOWED_TEST(BIT(HDMI_COLORSPACE_YUV422), true, -EINVAL), 1116 + YUV420_ALLOWED_TEST(BIT(HDMI_COLORSPACE_YUV422), false, 0), 1117 + }; 1118 + 1119 + static void 1120 + drm_connector_hdmi_init_formats_yuv420_allowed_desc(const struct drm_connector_hdmi_init_formats_yuv420_allowed_test *t, 1121 + char *desc) 1122 + { 1123 + sprintf(desc, "supported_formats=0x%lx yuv420_allowed=%d", 1124 + t->supported_formats, t->yuv420_allowed); 1125 + } 1126 + 1127 + KUNIT_ARRAY_PARAM(drm_connector_hdmi_init_formats_yuv420_allowed, 1128 + drm_connector_hdmi_init_formats_yuv420_allowed_tests, 1129 + drm_connector_hdmi_init_formats_yuv420_allowed_desc); 1130 + 1131 + /* 1132 + * Test that the registration of an HDMI connector succeeds only when 1133 + * the presence of YUV420 in the supported formats matches the value 1134 + * of the ycbcr_420_allowed flag. 1135 + */ 1136 + static void drm_test_connector_hdmi_init_formats_yuv420_allowed(struct kunit *test) 1137 + { 1138 + const struct drm_connector_hdmi_init_formats_yuv420_allowed_test *params; 1139 + struct drm_connector_init_priv *priv = test->priv; 1140 + int ret; 1141 + 1142 + params = test->param_value; 1143 + priv->connector.ycbcr_420_allowed = params->yuv420_allowed; 1144 + 1145 + ret = drmm_connector_hdmi_init(&priv->drm, &priv->connector, 1146 + "Vendor", "Product", 1147 + &dummy_funcs, 1148 + &dummy_hdmi_funcs, 1149 + DRM_MODE_CONNECTOR_HDMIA, 1150 + &priv->ddc, 1151 + params->supported_formats, 1152 + 8); 1153 + KUNIT_EXPECT_EQ(test, ret, params->expected_result); 1154 + } 1155 + 1098 1156 /* 1099 1157 * Test that the registration of an HDMI connector with an HDMI 1100 1158 * connector type succeeds. ··· 1244 1186 KUNIT_CASE(drm_test_connector_hdmi_init_bpc_null), 1245 1187 KUNIT_CASE(drm_test_connector_hdmi_init_formats_empty), 1246 1188 KUNIT_CASE(drm_test_connector_hdmi_init_formats_no_rgb), 1189 + KUNIT_CASE_PARAM(drm_test_connector_hdmi_init_formats_yuv420_allowed, 1190 + drm_connector_hdmi_init_formats_yuv420_allowed_gen_params), 1247 1191 KUNIT_CASE(drm_test_connector_hdmi_init_null_ddc), 1248 1192 KUNIT_CASE(drm_test_connector_hdmi_init_null_product), 1249 1193 KUNIT_CASE(drm_test_connector_hdmi_init_null_vendor),
+1 -2
drivers/gpu/drm/tests/drm_kunit_helpers.c
··· 320 320 } 321 321 322 322 /** 323 - * drm_kunit_display_mode_from_cea_vic() - return a mode for CEA VIC 324 - for a KUnit test 323 + * drm_kunit_display_mode_from_cea_vic() - return a mode for CEA VIC for a KUnit test 325 324 * @test: The test context object 326 325 * @dev: DRM device 327 326 * @video_code: CEA VIC of the mode
+4
drivers/gpu/drm/v3d/v3d_irq.c
··· 108 108 v3d_job_update_stats(&v3d->bin_job->base, V3D_BIN); 109 109 trace_v3d_bcl_irq(&v3d->drm, fence->seqno); 110 110 dma_fence_signal(&fence->base); 111 + v3d->bin_job = NULL; 111 112 status = IRQ_HANDLED; 112 113 } 113 114 ··· 119 118 v3d_job_update_stats(&v3d->render_job->base, V3D_RENDER); 120 119 trace_v3d_rcl_irq(&v3d->drm, fence->seqno); 121 120 dma_fence_signal(&fence->base); 121 + v3d->render_job = NULL; 122 122 status = IRQ_HANDLED; 123 123 } 124 124 ··· 130 128 v3d_job_update_stats(&v3d->csd_job->base, V3D_CSD); 131 129 trace_v3d_csd_irq(&v3d->drm, fence->seqno); 132 130 dma_fence_signal(&fence->base); 131 + v3d->csd_job = NULL; 133 132 status = IRQ_HANDLED; 134 133 } 135 134 ··· 168 165 v3d_job_update_stats(&v3d->tfu_job->base, V3D_TFU); 169 166 trace_v3d_tfu_irq(&v3d->drm, fence->seqno); 170 167 dma_fence_signal(&fence->base); 168 + v3d->tfu_job = NULL; 171 169 status = IRQ_HANDLED; 172 170 } 173 171
+2 -2
drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
··· 228 228 VMW_BO_DOMAIN_VRAM, 229 229 VMW_BO_DOMAIN_VRAM); 230 230 buf->places[0].lpfn = PFN_UP(bo->resource->size); 231 - buf->busy_places[0].lpfn = PFN_UP(bo->resource->size); 232 231 ret = ttm_bo_validate(bo, &buf->placement, &ctx); 233 232 234 233 /* For some reason we didn't end up at the start of vram */ ··· 442 443 443 444 if (params->pin) 444 445 ttm_bo_pin(&vmw_bo->tbo); 445 - ttm_bo_unreserve(&vmw_bo->tbo); 446 + if (!params->keep_resv) 447 + ttm_bo_unreserve(&vmw_bo->tbo); 446 448 447 449 return 0; 448 450 }
+2 -2
drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
··· 56 56 u32 domain; 57 57 u32 busy_domain; 58 58 enum ttm_bo_type bo_type; 59 - size_t size; 60 59 bool pin; 60 + bool keep_resv; 61 + size_t size; 61 62 struct dma_resv *resv; 62 63 struct sg_table *sg; 63 64 }; ··· 84 83 85 84 struct ttm_placement placement; 86 85 struct ttm_place places[5]; 87 - struct ttm_place busy_places[5]; 88 86 89 87 /* Protected by reservation */ 90 88 struct ttm_bo_kmap_obj map;
+2 -5
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 403 403 .busy_domain = VMW_BO_DOMAIN_SYS, 404 404 .bo_type = ttm_bo_type_kernel, 405 405 .size = PAGE_SIZE, 406 - .pin = true 406 + .pin = true, 407 + .keep_resv = true, 407 408 }; 408 409 409 410 /* ··· 415 414 ret = vmw_bo_create(dev_priv, &bo_params, &vbo); 416 415 if (unlikely(ret != 0)) 417 416 return ret; 418 - 419 - ret = ttm_bo_reserve(&vbo->tbo, false, true, NULL); 420 - BUG_ON(ret != 0); 421 - vmw_bo_pin_reserved(vbo, true); 422 417 423 418 ret = ttm_bo_kmap(&vbo->tbo, 0, 1, &map); 424 419 if (likely(ret == 0)) {
+1
drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
··· 206 206 .bo_type = ttm_bo_type_sg, 207 207 .size = attach->dmabuf->size, 208 208 .pin = false, 209 + .keep_resv = true, 209 210 .resv = attach->dmabuf->resv, 210 211 .sg = table, 211 212
+15 -5
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 750 750 struct vmw_plane_state *old_vps = vmw_plane_state_to_vps(old_state); 751 751 struct vmw_bo *old_bo = NULL; 752 752 struct vmw_bo *new_bo = NULL; 753 + struct ww_acquire_ctx ctx; 753 754 s32 hotspot_x, hotspot_y; 754 755 int ret; 755 756 ··· 770 769 if (du->cursor_surface) 771 770 du->cursor_age = du->cursor_surface->snooper.age; 772 771 772 + ww_acquire_init(&ctx, &reservation_ww_class); 773 + 773 774 if (!vmw_user_object_is_null(&old_vps->uo)) { 774 775 old_bo = vmw_user_object_buffer(&old_vps->uo); 775 - ret = ttm_bo_reserve(&old_bo->tbo, false, false, NULL); 776 + ret = ttm_bo_reserve(&old_bo->tbo, false, false, &ctx); 776 777 if (ret != 0) 777 778 return; 778 779 } ··· 782 779 if (!vmw_user_object_is_null(&vps->uo)) { 783 780 new_bo = vmw_user_object_buffer(&vps->uo); 784 781 if (old_bo != new_bo) { 785 - ret = ttm_bo_reserve(&new_bo->tbo, false, false, NULL); 786 - if (ret != 0) 782 + ret = ttm_bo_reserve(&new_bo->tbo, false, false, &ctx); 783 + if (ret != 0) { 784 + if (old_bo) { 785 + ttm_bo_unreserve(&old_bo->tbo); 786 + ww_acquire_fini(&ctx); 787 + } 787 788 return; 789 + } 788 790 } else { 789 791 new_bo = NULL; 790 792 } ··· 811 803 hotspot_x, hotspot_y); 812 804 } 813 805 814 - if (old_bo) 815 - ttm_bo_unreserve(&old_bo->tbo); 816 806 if (new_bo) 817 807 ttm_bo_unreserve(&new_bo->tbo); 808 + if (old_bo) 809 + ttm_bo_unreserve(&old_bo->tbo); 810 + 811 + ww_acquire_fini(&ctx); 818 812 819 813 du->cursor_x = new_state->crtc_x + du->set_gui_x; 820 814 du->cursor_y = new_state->crtc_y + du->set_gui_y;
+2 -5
drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
··· 896 896 .busy_domain = VMW_BO_DOMAIN_SYS, 897 897 .bo_type = ttm_bo_type_device, 898 898 .size = size, 899 - .pin = true 899 + .pin = true, 900 + .keep_resv = true, 900 901 }; 901 902 902 903 if (!vmw_shader_id_ok(user_key, shader_type)) ··· 906 905 ret = vmw_bo_create(dev_priv, &bo_params, &buf); 907 906 if (unlikely(ret != 0)) 908 907 goto out; 909 - 910 - ret = ttm_bo_reserve(&buf->tbo, false, true, NULL); 911 - if (unlikely(ret != 0)) 912 - goto no_reserve; 913 908 914 909 /* Map and copy shader bytecode. */ 915 910 ret = ttm_bo_kmap(&buf->tbo, 0, PFN_UP(size), &map);
+2 -3
drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
··· 572 572 .busy_domain = domain, 573 573 .bo_type = ttm_bo_type_kernel, 574 574 .size = bo_size, 575 - .pin = true 575 + .pin = true, 576 + .keep_resv = true, 576 577 }; 577 578 578 579 ret = vmw_bo_create(dev_priv, &bo_params, &vbo); 579 580 if (unlikely(ret != 0)) 580 581 return ret; 581 582 582 - ret = ttm_bo_reserve(&vbo->tbo, false, true, NULL); 583 - BUG_ON(ret != 0); 584 583 ret = vmw_ttm_populate(vbo->tbo.bdev, vbo->tbo.ttm, &ctx); 585 584 if (likely(ret == 0)) { 586 585 struct vmw_ttm_tt *vmw_tt =
+3 -4
drivers/gpu/drm/xe/tests/xe_bo.c
··· 264 264 * however seems quite fragile not to also restart the GT. Try 265 265 * to do that here by triggering a GT reset. 266 266 */ 267 - for_each_gt(__gt, xe, id) { 268 - xe_gt_reset_async(__gt); 269 - flush_work(&__gt->reset.worker); 270 - } 267 + for_each_gt(__gt, xe, id) 268 + xe_gt_reset(__gt); 269 + 271 270 if (err) { 272 271 KUNIT_FAIL(test, "restore kernel err=%pe\n", 273 272 ERR_PTR(err));
+1 -2
drivers/gpu/drm/xe/tests/xe_mocs.c
··· 162 162 if (flags & HAS_LNCF_MOCS) 163 163 read_l3cc_table(gt, &mocs.table); 164 164 165 - xe_gt_reset_async(gt); 166 - flush_work(&gt->reset.worker); 165 + xe_gt_reset(gt); 167 166 168 167 kunit_info(test, "mocs_reset_test after reset\n"); 169 168 if (flags & HAS_GLOBAL_MOCS)
+25
drivers/gpu/drm/xe/xe_gt.h
··· 57 57 void xe_gt_remove(struct xe_gt *gt); 58 58 59 59 /** 60 + * xe_gt_wait_for_reset - wait for gt's async reset to finalize. 61 + * @gt: GT structure 62 + * Return: 63 + * %true if it waited for the work to finish execution, 64 + * %false if there was no scheduled reset or it was done. 65 + */ 66 + static inline bool xe_gt_wait_for_reset(struct xe_gt *gt) 67 + { 68 + return flush_work(&gt->reset.worker); 69 + } 70 + 71 + /** 72 + * xe_gt_reset - perform synchronous reset 73 + * @gt: GT structure 74 + * Return: 75 + * %true if it waited for the reset to finish, 76 + * %false if there was no scheduled reset. 77 + */ 78 + static inline bool xe_gt_reset(struct xe_gt *gt) 79 + { 80 + xe_gt_reset_async(gt); 81 + return xe_gt_wait_for_reset(gt); 82 + } 83 + 84 + /** 60 85 * xe_gt_any_hw_engine_by_reset_domain - scan the list of engines and return the 61 86 * first that matches the same reset domain as @class 62 87 * @gt: GT structure
+1 -1
drivers/gpu/drm/xe/xe_gt_ccs_mode.c
··· 150 150 xe_gt_info(gt, "Setting compute mode to %d\n", num_engines); 151 151 gt->ccs_mode = num_engines; 152 152 xe_gt_record_user_engines(gt); 153 - xe_gt_reset_async(gt); 153 + xe_gt_reset(gt); 154 154 } 155 155 156 156 mutex_unlock(&xe->drm.filelist_mutex);
+1 -3
drivers/gpu/drm/xe/xe_gt_debugfs.c
··· 132 132 static int force_reset_sync(struct xe_gt *gt, struct drm_printer *p) 133 133 { 134 134 xe_pm_runtime_get(gt_to_xe(gt)); 135 - xe_gt_reset_async(gt); 135 + xe_gt_reset(gt); 136 136 xe_pm_runtime_put(gt_to_xe(gt)); 137 - 138 - flush_work(&gt->reset.worker); 139 137 140 138 return 0; 141 139 }
+1 -1
drivers/gpu/drm/xe/xe_hw_engine.c
··· 422 422 * Bspec: 72161 423 423 */ 424 424 const u8 mocs_write_idx = gt->mocs.uc_index; 425 - const u8 mocs_read_idx = hwe->class == XE_ENGINE_CLASS_COMPUTE && 425 + const u8 mocs_read_idx = hwe->class == XE_ENGINE_CLASS_COMPUTE && IS_DGFX(xe) && 426 426 (GRAPHICS_VER(xe) >= 20 || xe->info.platform == XE_PVC) ? 427 427 gt->mocs.wb_index : gt->mocs.uc_index; 428 428 u32 ring_cmd_cctl_val = REG_FIELD_PREP(CMD_CCTL_WRITE_OVERRIDE_MASK, mocs_write_idx) |
+1
drivers/gpu/drm/xe/xe_oa.c
··· 2163 2163 { .start = 0x5194, .end = 0x5194 }, /* SYS_MEM_LAT_MEASURE_MERTF_GRP_3D */ 2164 2164 { .start = 0x8704, .end = 0x8704 }, /* LMEM_LAT_MEASURE_MCFG_GRP */ 2165 2165 { .start = 0xB1BC, .end = 0xB1BC }, /* L3_BANK_LAT_MEASURE_LBCF_GFX */ 2166 + { .start = 0xD0E0, .end = 0xD0F4 }, /* VISACTL */ 2166 2167 { .start = 0xE18C, .end = 0xE18C }, /* SAMPLER_MODE */ 2167 2168 { .start = 0xE590, .end = 0xE590 }, /* TDL_LSC_LAT_MEASURE_TDL_GFX */ 2168 2169 { .start = 0x13000, .end = 0x137FC }, /* PES_0_PESL0 - PES_63_UPPER_PESL3 */
+1 -1
drivers/hwmon/acpi_power_meter.c
··· 682 682 683 683 /* _PMD method is optional. */ 684 684 res = read_domain_devices(resource); 685 - if (res != -ENODEV) 685 + if (res && res != -ENODEV) 686 686 return res; 687 687 688 688 if (resource->caps.flags & POWER_METER_CAN_MEASURE) {
+6 -2
drivers/hwmon/drivetemp.c
··· 165 165 { 166 166 u8 scsi_cmd[MAX_COMMAND_SIZE]; 167 167 enum req_op op; 168 + int err; 168 169 169 170 memset(scsi_cmd, 0, sizeof(scsi_cmd)); 170 171 scsi_cmd[0] = ATA_16; ··· 193 192 scsi_cmd[12] = lba_high; 194 193 scsi_cmd[14] = ata_command; 195 194 196 - return scsi_execute_cmd(st->sdev, scsi_cmd, op, st->smartdata, 197 - ATA_SECT_SIZE, HZ, 5, NULL); 195 + err = scsi_execute_cmd(st->sdev, scsi_cmd, op, st->smartdata, 196 + ATA_SECT_SIZE, 10 * HZ, 5, NULL); 197 + if (err > 0) 198 + err = -EIO; 199 + return err; 198 200 } 199 201 200 202 static int drivetemp_ata_command(struct drivetemp_data *st, u8 feature,
+1 -1
drivers/hwmon/ltc2991.c
··· 125 125 126 126 /* Vx-Vy, 19.075uV/LSB */ 127 127 *val = DIV_ROUND_CLOSEST(sign_extend32(reg_val, 14) * 19075, 128 - st->r_sense_uohm[channel]); 128 + (s32)st->r_sense_uohm[channel]); 129 129 130 130 return 0; 131 131 }
+4 -3
drivers/hwmon/tmp513.c
··· 207 207 *val = sign_extend32(regval, 208 208 reg == TMP51X_SHUNT_CURRENT_RESULT ? 209 209 16 - tmp51x_get_pga_shift(data) : 15); 210 - *val = DIV_ROUND_CLOSEST(*val * 10 * MILLI, data->shunt_uohms); 210 + *val = DIV_ROUND_CLOSEST(*val * 10 * (long)MILLI, (long)data->shunt_uohms); 211 + 211 212 break; 212 213 case TMP51X_BUS_VOLTAGE_RESULT: 213 214 case TMP51X_BUS_VOLTAGE_H_LIMIT: ··· 224 223 case TMP51X_BUS_CURRENT_RESULT: 225 224 // Current = (ShuntVoltage * CalibrationRegister) / 4096 226 225 *val = sign_extend32(regval, 15) * (long)data->curr_lsb_ua; 227 - *val = DIV_ROUND_CLOSEST(*val, MILLI); 226 + *val = DIV_ROUND_CLOSEST(*val, (long)MILLI); 228 227 break; 229 228 case TMP51X_LOCAL_TEMP_RESULT: 230 229 case TMP51X_REMOTE_TEMP_RESULT_1: ··· 264 263 * The user enter current value and we convert it to 265 264 * voltage. 1lsb = 10uV 266 265 */ 267 - val = DIV_ROUND_CLOSEST(val * data->shunt_uohms, 10 * MILLI); 266 + val = DIV_ROUND_CLOSEST(val * (long)data->shunt_uohms, 10 * (long)MILLI); 268 267 max_val = U16_MAX >> tmp51x_get_pga_shift(data); 269 268 regval = clamp_val(val, -max_val, max_val); 270 269 break;
+15 -5
drivers/i2c/busses/i2c-rcar.c
··· 130 130 #define ID_P_PM_BLOCKED BIT(31) 131 131 #define ID_P_MASK GENMASK(31, 27) 132 132 133 + #define ID_SLAVE_NACK BIT(0) 134 + 133 135 enum rcar_i2c_type { 134 136 I2C_RCAR_GEN1, 135 137 I2C_RCAR_GEN2, ··· 168 166 int irq; 169 167 170 168 struct i2c_client *host_notify_client; 169 + u8 slave_flags; 171 170 }; 172 171 173 172 #define rcar_i2c_priv_to_dev(p) ((p)->adap.dev.parent) ··· 658 655 { 659 656 u32 ssr_raw, ssr_filtered; 660 657 u8 value; 658 + int ret; 661 659 662 660 ssr_raw = rcar_i2c_read(priv, ICSSR) & 0xff; 663 661 ssr_filtered = ssr_raw & rcar_i2c_read(priv, ICSIER); ··· 674 670 rcar_i2c_write(priv, ICRXTX, value); 675 671 rcar_i2c_write(priv, ICSIER, SDE | SSR | SAR); 676 672 } else { 677 - i2c_slave_event(priv->slave, I2C_SLAVE_WRITE_REQUESTED, &value); 673 + ret = i2c_slave_event(priv->slave, I2C_SLAVE_WRITE_REQUESTED, &value); 674 + if (ret) 675 + priv->slave_flags |= ID_SLAVE_NACK; 676 + 678 677 rcar_i2c_read(priv, ICRXTX); /* dummy read */ 679 678 rcar_i2c_write(priv, ICSIER, SDR | SSR | SAR); 680 679 } ··· 690 683 if (ssr_filtered & SSR) { 691 684 i2c_slave_event(priv->slave, I2C_SLAVE_STOP, &value); 692 685 rcar_i2c_write(priv, ICSCR, SIE | SDBS); /* clear our NACK */ 686 + priv->slave_flags &= ~ID_SLAVE_NACK; 693 687 rcar_i2c_write(priv, ICSIER, SAR); 694 688 rcar_i2c_write(priv, ICSSR, ~SSR & 0xff); 695 689 } 696 690 697 691 /* master wants to write to us */ 698 692 if (ssr_filtered & SDR) { 699 - int ret; 700 - 701 693 value = rcar_i2c_read(priv, ICRXTX); 702 694 ret = i2c_slave_event(priv->slave, I2C_SLAVE_WRITE_RECEIVED, &value); 703 - /* Send NACK in case of error */ 704 - rcar_i2c_write(priv, ICSCR, SIE | SDBS | (ret < 0 ? FNA : 0)); 695 + if (ret) 696 + priv->slave_flags |= ID_SLAVE_NACK; 697 + 698 + /* Send NACK in case of error, but it will come 1 byte late :( */ 699 + rcar_i2c_write(priv, ICSCR, SIE | SDBS | 700 + (priv->slave_flags & ID_SLAVE_NACK ? FNA : 0)); 705 701 rcar_i2c_write(priv, ICSSR, ~SDR & 0xff); 706 702 707 703
+1 -1
drivers/i2c/i2c-atr.c
··· 412 412 dev_name(dev), ret); 413 413 break; 414 414 415 - case BUS_NOTIFY_DEL_DEVICE: 415 + case BUS_NOTIFY_REMOVED_DEVICE: 416 416 i2c_atr_detach_client(client->adapter, client); 417 417 break; 418 418
+1
drivers/i2c/i2c-core-base.c
··· 1562 1562 res = device_add(&adap->dev); 1563 1563 if (res) { 1564 1564 pr_err("adapter '%s': can't register device (%d)\n", adap->name, res); 1565 + put_device(&adap->dev); 1565 1566 goto out_list; 1566 1567 } 1567 1568
+15 -4
drivers/i2c/i2c-slave-testunit.c
··· 38 38 39 39 enum testunit_flags { 40 40 TU_FLAG_IN_PROCESS, 41 + TU_FLAG_NACK, 41 42 }; 42 43 43 44 struct testunit_data { ··· 91 90 92 91 switch (event) { 93 92 case I2C_SLAVE_WRITE_REQUESTED: 94 - if (test_bit(TU_FLAG_IN_PROCESS, &tu->flags)) 95 - return -EBUSY; 93 + if (test_bit(TU_FLAG_IN_PROCESS | TU_FLAG_NACK, &tu->flags)) { 94 + ret = -EBUSY; 95 + break; 96 + } 96 97 97 98 memset(tu->regs, 0, TU_NUM_REGS); 98 99 tu->reg_idx = 0; ··· 102 99 break; 103 100 104 101 case I2C_SLAVE_WRITE_RECEIVED: 105 - if (test_bit(TU_FLAG_IN_PROCESS, &tu->flags)) 106 - return -EBUSY; 102 + if (test_bit(TU_FLAG_IN_PROCESS | TU_FLAG_NACK, &tu->flags)) { 103 + ret = -EBUSY; 104 + break; 105 + } 107 106 108 107 if (tu->reg_idx < TU_NUM_REGS) 109 108 tu->regs[tu->reg_idx] = *val; ··· 134 129 * here because we still need them in the workqueue! 135 130 */ 136 131 tu->reg_idx = 0; 132 + 133 + clear_bit(TU_FLAG_NACK, &tu->flags); 137 134 break; 138 135 139 136 case I2C_SLAVE_READ_PROCESSED: ··· 157 150 tu->regs[TU_REG_CMD] : 0; 158 151 break; 159 152 } 153 + 154 + /* If an error occurred somewhen, we NACK everything until next STOP */ 155 + if (ret) 156 + set_bit(TU_FLAG_NACK, &tu->flags); 160 157 161 158 return ret; 162 159 }
+4 -2
drivers/i2c/muxes/i2c-demux-pinctrl.c
··· 68 68 } 69 69 70 70 /* 71 - * Check if there are pinctrl states at all. Note: we cant' use 71 + * Check if there are pinctrl states at all. Note: we can't use 72 72 * devm_pinctrl_get_select() because we need to distinguish between 73 73 * the -ENODEV from devm_pinctrl_get() and pinctrl_lookup_state(). 74 74 */ ··· 261 261 pm_runtime_no_callbacks(&pdev->dev); 262 262 263 263 /* switch to first parent as active master */ 264 - i2c_demux_activate_master(priv, 0); 264 + err = i2c_demux_activate_master(priv, 0); 265 + if (err) 266 + goto err_rollback; 265 267 266 268 err = device_create_file(&pdev->dev, &dev_attr_available_masters); 267 269 if (err)
+68 -30
drivers/iio/adc/ad4695.c
··· 91 91 #define AD4695_T_WAKEUP_SW_MS 3 92 92 #define AD4695_T_REFBUF_MS 100 93 93 #define AD4695_T_REGCONFIG_NS 20 94 + #define AD4695_T_SCK_CNV_DELAY_NS 80 94 95 #define AD4695_REG_ACCESS_SCLK_HZ (10 * MEGA) 95 96 96 97 /* Max number of voltage input channels. */ ··· 133 132 unsigned int vref_mv; 134 133 /* Common mode input pin voltage. */ 135 134 unsigned int com_mv; 136 - /* 1 per voltage and temperature chan plus 1 xfer to trigger 1st CNV */ 137 - struct spi_transfer buf_read_xfer[AD4695_MAX_CHANNELS + 2]; 135 + /* 136 + * 2 per voltage and temperature chan plus 1 xfer to trigger 1st 137 + * CNV. Excluding the trigger xfer, every 2nd xfer only serves 138 + * to control CS and add a delay between the last SCLK and next 139 + * CNV rising edges. 140 + */ 141 + struct spi_transfer buf_read_xfer[AD4695_MAX_CHANNELS * 2 + 3]; 138 142 struct spi_message buf_read_msg; 139 143 /* Raw conversion data received. */ 140 144 u8 buf[ALIGN((AD4695_MAX_CHANNELS + 2) * AD4695_MAX_CHANNEL_SIZE, ··· 429 423 u8 temp_chan_bit = st->chip_info->num_voltage_inputs; 430 424 u32 bit, num_xfer, num_slots; 431 425 u32 temp_en = 0; 432 - int ret; 426 + int ret, rx_buf_offset = 0; 433 427 434 428 /* 435 429 * We are using the advanced sequencer since it is the only way to read ··· 455 449 iio_for_each_active_channel(indio_dev, bit) { 456 450 xfer = &st->buf_read_xfer[num_xfer]; 457 451 xfer->bits_per_word = 16; 458 - xfer->rx_buf = &st->buf[(num_xfer - 1) * 2]; 452 + xfer->rx_buf = &st->buf[rx_buf_offset]; 459 453 xfer->len = 2; 460 - xfer->cs_change = 1; 461 - xfer->cs_change_delay.value = AD4695_T_CONVERT_NS; 462 - xfer->cs_change_delay.unit = SPI_DELAY_UNIT_NSECS; 454 + rx_buf_offset += xfer->len; 463 455 464 456 if (bit == temp_chan_bit) { 465 457 temp_en = 1; ··· 472 468 } 473 469 474 470 num_xfer++; 471 + 472 + /* 473 + * We need to add a blank xfer in data reads, to meet the timing 474 + * requirement of a minimum delay between the last SCLK rising 475 + * edge and the CS deassert. 476 + */ 477 + xfer = &st->buf_read_xfer[num_xfer]; 478 + xfer->delay.value = AD4695_T_SCK_CNV_DELAY_NS; 479 + xfer->delay.unit = SPI_DELAY_UNIT_NSECS; 480 + xfer->cs_change = 1; 481 + xfer->cs_change_delay.value = AD4695_T_CONVERT_NS; 482 + xfer->cs_change_delay.unit = SPI_DELAY_UNIT_NSECS; 483 + 484 + num_xfer++; 475 485 } 476 486 477 487 /* 478 488 * The advanced sequencer requires that at least 2 slots are enabled. 479 489 * Since slot 0 is always used for other purposes, we need only 1 480 - * enabled voltage channel to meet this requirement. If the temperature 481 - * channel is the only enabled channel, we need to add one more slot 482 - * in the sequence but not read from it. 490 + * enabled voltage channel to meet this requirement. If the temperature 491 + * channel is the only enabled channel, we need to add one more slot in 492 + * the sequence but not read from it. This is because the temperature 493 + * sensor is sampled at the end of the channel sequence in advanced 494 + * sequencer mode (see datasheet page 38). 495 + * 496 + * From the iio_for_each_active_channel() block above, we now have an 497 + * xfer with data followed by a blank xfer to allow us to meet the 498 + * timing spec, so move both of those up before adding an extra to 499 + * handle the temperature-only case. 483 500 */ 484 501 if (num_slots < 2) { 485 - /* move last xfer so we can insert one more xfer before it */ 486 - st->buf_read_xfer[num_xfer] = *xfer; 502 + /* Move last two xfers */ 503 + st->buf_read_xfer[num_xfer] = st->buf_read_xfer[num_xfer - 1]; 504 + st->buf_read_xfer[num_xfer - 1] = st->buf_read_xfer[num_xfer - 2]; 487 505 num_xfer++; 488 506 489 - /* modify 2nd to last xfer for extra slot */ 507 + /* Modify inserted xfer for extra slot. */ 508 + xfer = &st->buf_read_xfer[num_xfer - 3]; 490 509 memset(xfer, 0, sizeof(*xfer)); 491 510 xfer->cs_change = 1; 492 511 xfer->delay.value = st->chip_info->t_acq_ns; ··· 526 499 return ret; 527 500 528 501 num_slots++; 502 + 503 + /* 504 + * We still want to point at the last xfer when finished, so 505 + * update the pointer. 506 + */ 507 + xfer = &st->buf_read_xfer[num_xfer - 1]; 529 508 } 530 509 531 510 /* ··· 616 583 */ 617 584 static int ad4695_read_one_sample(struct ad4695_state *st, unsigned int address) 618 585 { 619 - struct spi_transfer xfer[2] = { }; 620 - int ret, i = 0; 586 + struct spi_transfer xfers[2] = { 587 + { 588 + .speed_hz = AD4695_REG_ACCESS_SCLK_HZ, 589 + .bits_per_word = 16, 590 + .tx_buf = &st->cnv_cmd, 591 + .len = 2, 592 + }, 593 + { 594 + /* Required delay between last SCLK and CNV/CS */ 595 + .delay.value = AD4695_T_SCK_CNV_DELAY_NS, 596 + .delay.unit = SPI_DELAY_UNIT_NSECS, 597 + } 598 + }; 599 + int ret; 621 600 622 601 ret = ad4695_set_single_cycle_mode(st, address); 623 602 if (ret) ··· 637 592 638 593 /* 639 594 * Setting the first channel to the temperature channel isn't supported 640 - * in single-cycle mode, so we have to do an extra xfer to read the 641 - * temperature. 595 + * in single-cycle mode, so we have to do an extra conversion to read 596 + * the temperature. 642 597 */ 643 598 if (address == AD4695_CMD_TEMP_CHAN) { 644 - /* We aren't reading, so we can make this a short xfer. */ 645 - st->cnv_cmd2 = AD4695_CMD_TEMP_CHAN << 3; 646 - xfer[0].tx_buf = &st->cnv_cmd2; 647 - xfer[0].len = 1; 648 - xfer[0].cs_change = 1; 649 - xfer[0].cs_change_delay.value = AD4695_T_CONVERT_NS; 650 - xfer[0].cs_change_delay.unit = SPI_DELAY_UNIT_NSECS; 599 + st->cnv_cmd = AD4695_CMD_TEMP_CHAN << 11; 651 600 652 - i = 1; 601 + ret = spi_sync_transfer(st->spi, xfers, ARRAY_SIZE(xfers)); 602 + if (ret) 603 + return ret; 653 604 } 654 605 655 606 /* Then read the result and exit conversion mode. */ 656 607 st->cnv_cmd = AD4695_CMD_EXIT_CNV_MODE << 11; 657 - xfer[i].bits_per_word = 16; 658 - xfer[i].tx_buf = &st->cnv_cmd; 659 - xfer[i].rx_buf = &st->raw_data; 660 - xfer[i].len = 2; 608 + xfers[0].rx_buf = &st->raw_data; 661 609 662 - return spi_sync_transfer(st->spi, xfer, i + 1); 610 + return spi_sync_transfer(st->spi, xfers, ARRAY_SIZE(xfers)); 663 611 } 664 613 static int ad4695_read_raw(struct iio_dev *indio_dev,
+3
drivers/iio/adc/ad7124.c
··· 917 917 * set all channels to this default value. 918 918 */ 919 919 ad7124_set_channel_odr(st, i, 10); 920 + 921 + /* Disable all channels to prevent unintended conversions. */ 922 + ad_sd_write_reg(&st->sd, AD7124_CHANNEL(i), 2, 0); 920 923 } 921 924 922 925 ret = ad_sd_write_reg(&st->sd, AD7124_ADC_CONTROL, 2, st->adc_control);
+6 -4
drivers/iio/adc/ad7173.c
··· 200 200 201 201 struct ad7173_state { 202 202 struct ad_sigma_delta sd; 203 + struct ad_sigma_delta_info sigma_delta_info; 203 204 const struct ad7173_device_info *info; 204 205 struct ad7173_channel *channels; 205 206 struct regulator_bulk_data regulators[3]; ··· 754 753 return ad_sd_write_reg(sd, AD7173_REG_CH(chan), 2, 0); 755 754 } 756 755 757 - static struct ad_sigma_delta_info ad7173_sigma_delta_info = { 756 + static const struct ad_sigma_delta_info ad7173_sigma_delta_info = { 758 757 .set_channel = ad7173_set_channel, 759 758 .append_status = ad7173_append_status, 760 759 .disable_all = ad7173_disable_all, ··· 1404 1403 if (ret < 0) 1405 1404 return dev_err_probe(dev, ret, "Interrupt 'rdy' is required\n"); 1406 1405 1407 - ad7173_sigma_delta_info.irq_line = ret; 1406 + st->sigma_delta_info.irq_line = ret; 1408 1407 1409 1408 return ad7173_fw_parse_channel_config(indio_dev); 1410 1409 } ··· 1437 1436 spi->mode = SPI_MODE_3; 1438 1437 spi_setup(spi); 1439 1438 1440 - ad7173_sigma_delta_info.num_slots = st->info->num_configs; 1441 - ret = ad_sd_init(&st->sd, indio_dev, spi, &ad7173_sigma_delta_info); 1439 + st->sigma_delta_info = ad7173_sigma_delta_info; 1440 + st->sigma_delta_info.num_slots = st->info->num_configs; 1441 + ret = ad_sd_init(&st->sd, indio_dev, spi, &st->sigma_delta_info); 1442 1442 if (ret) 1443 1443 return ret; 1444 1444
+12 -3
drivers/iio/adc/ad9467.c
··· 895 895 return 0; 896 896 } 897 897 898 - static struct iio_info ad9467_info = { 898 + static const struct iio_info ad9467_info = { 899 899 .read_raw = ad9467_read_raw, 900 900 .write_raw = ad9467_write_raw, 901 901 .update_scan_mode = ad9467_update_scan_mode, 902 902 .debugfs_reg_access = ad9467_reg_access, 903 903 .read_avail = ad9467_read_avail, 904 + }; 905 + 906 + /* Same as above, but without .read_avail */ 907 + static const struct iio_info ad9467_info_no_read_avail = { 908 + .read_raw = ad9467_read_raw, 909 + .write_raw = ad9467_write_raw, 910 + .update_scan_mode = ad9467_update_scan_mode, 911 + .debugfs_reg_access = ad9467_reg_access, 904 912 }; 905 913 906 914 static int ad9467_scale_fill(struct ad9467_state *st) ··· 1222 1214 } 1223 1215 1224 1216 if (st->info->num_scales > 1) 1225 - ad9467_info.read_avail = ad9467_read_avail; 1217 + indio_dev->info = &ad9467_info; 1218 + else 1219 + indio_dev->info = &ad9467_info_no_read_avail; 1226 1220 indio_dev->name = st->info->name; 1227 1221 indio_dev->channels = st->info->channels; 1228 1222 indio_dev->num_channels = st->info->num_channels; 1229 - indio_dev->info = &ad9467_info; 1230 1223 1231 1224 ret = ad9467_iio_backend_get(st); 1232 1225 if (ret)
+1 -1
drivers/iio/adc/at91_adc.c
··· 979 979 return ret; 980 980 981 981 err: 982 - input_free_device(st->ts_input); 982 + input_free_device(input); 983 983 return ret; 984 984 } 985 985
+2
drivers/iio/adc/rockchip_saradc.c
··· 368 368 int ret; 369 369 int i, j = 0; 370 370 371 + memset(&data, 0, sizeof(data)); 372 + 371 373 mutex_lock(&info->lock); 372 374 373 375 iio_for_each_active_channel(i_dev, i) {
+8 -5
drivers/iio/adc/stm32-dfsdm-adc.c
··· 691 691 return -EINVAL; 692 692 } 693 693 694 - ret = fwnode_property_read_string(node, "label", &ch->datasheet_name); 695 - if (ret < 0) { 696 - dev_err(&indio_dev->dev, 697 - " Error parsing 'label' for idx %d\n", ch->channel); 698 - return ret; 694 + if (fwnode_property_present(node, "label")) { 695 + /* label is optional */ 696 + ret = fwnode_property_read_string(node, "label", &ch->datasheet_name); 697 + if (ret < 0) { 698 + dev_err(&indio_dev->dev, 699 + " Error parsing 'label' for idx %d\n", ch->channel); 700 + return ret; 701 + } 699 702 } 700 703 701 704 df_ch = &dfsdm->ch_list[ch->channel];
+3 -1
drivers/iio/adc/ti-ads1119.c
··· 500 500 struct iio_dev *indio_dev = pf->indio_dev; 501 501 struct ads1119_state *st = iio_priv(indio_dev); 502 502 struct { 503 - unsigned int sample; 503 + s16 sample; 504 504 s64 timestamp __aligned(8); 505 505 } scan; 506 506 unsigned int index; 507 507 int ret; 508 + 509 + memset(&scan, 0, sizeof(scan)); 508 510 509 511 if (!iio_trigger_using_own(indio_dev)) { 510 512 index = find_first_bit(indio_dev->active_scan_mask,
+2 -2
drivers/iio/adc/ti-ads124s08.c
··· 183 183 struct ads124s_private *priv = iio_priv(indio_dev); 184 184 185 185 if (priv->reset_gpio) { 186 - gpiod_set_value(priv->reset_gpio, 0); 186 + gpiod_set_value_cansleep(priv->reset_gpio, 0); 187 187 udelay(200); 188 - gpiod_set_value(priv->reset_gpio, 1); 188 + gpiod_set_value_cansleep(priv->reset_gpio, 1); 189 189 } else { 190 190 return ads124s_write_cmd(indio_dev, ADS124S08_CMD_RESET); 191 191 }
+2
drivers/iio/adc/ti-ads1298.c
··· 613 613 } 614 614 indio_dev->name = devm_kasprintf(dev, GFP_KERNEL, "ads129%u%s", 615 615 indio_dev->num_channels, suffix); 616 + if (!indio_dev->name) 617 + return -ENOMEM; 616 618 617 619 /* Enable internal test signal, double amplitude, double frequency */ 618 620 ret = regmap_write(priv->regmap, ADS1298_REG_CONFIG2,
+1 -1
drivers/iio/adc/ti-ads8688.c
··· 381 381 struct iio_poll_func *pf = p; 382 382 struct iio_dev *indio_dev = pf->indio_dev; 383 383 /* Ensure naturally aligned timestamp */ 384 - u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)] __aligned(8); 384 + u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)] __aligned(8) = { }; 385 385 int i, j = 0; 386 386 387 387 iio_for_each_active_channel(indio_dev, i) {
+1 -1
drivers/iio/dummy/iio_simple_dummy_buffer.c
··· 48 48 int i = 0, j; 49 49 u16 *data; 50 50 51 - data = kmalloc(indio_dev->scan_bytes, GFP_KERNEL); 51 + data = kzalloc(indio_dev->scan_bytes, GFP_KERNEL); 52 52 if (!data) 53 53 goto done; 54 54
+9 -2
drivers/iio/gyro/fxas21002c_core.c
··· 730 730 int ret; 731 731 732 732 mutex_lock(&data->lock); 733 - ret = regmap_bulk_read(data->regmap, FXAS21002C_REG_OUT_X_MSB, 734 - data->buffer, CHANNEL_SCAN_MAX * sizeof(s16)); 733 + ret = fxas21002c_pm_get(data); 735 734 if (ret < 0) 736 735 goto out_unlock; 737 736 737 + ret = regmap_bulk_read(data->regmap, FXAS21002C_REG_OUT_X_MSB, 738 + data->buffer, CHANNEL_SCAN_MAX * sizeof(s16)); 739 + if (ret < 0) 740 + goto out_pm_put; 741 + 738 742 iio_push_to_buffers_with_timestamp(indio_dev, data->buffer, 739 743 data->timestamp); 744 + 745 + out_pm_put: 746 + fxas21002c_pm_put(data); 740 747 741 748 out_unlock: 742 749 mutex_unlock(&data->lock);
+1
drivers/iio/imu/inv_icm42600/inv_icm42600.h
··· 403 403 typedef int (*inv_icm42600_bus_setup)(struct inv_icm42600_state *); 404 404 405 405 extern const struct regmap_config inv_icm42600_regmap_config; 406 + extern const struct regmap_config inv_icm42600_spi_regmap_config; 406 407 extern const struct dev_pm_ops inv_icm42600_pm_ops; 407 408 408 409 const struct iio_mount_matrix *
+21 -1
drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
··· 87 87 }; 88 88 EXPORT_SYMBOL_NS_GPL(inv_icm42600_regmap_config, "IIO_ICM42600"); 89 89 90 + /* define specific regmap for SPI not supporting burst write */ 91 + const struct regmap_config inv_icm42600_spi_regmap_config = { 92 + .name = "inv_icm42600", 93 + .reg_bits = 8, 94 + .val_bits = 8, 95 + .max_register = 0x4FFF, 96 + .ranges = inv_icm42600_regmap_ranges, 97 + .num_ranges = ARRAY_SIZE(inv_icm42600_regmap_ranges), 98 + .volatile_table = inv_icm42600_regmap_volatile_accesses, 99 + .rd_noinc_table = inv_icm42600_regmap_rd_noinc_accesses, 100 + .cache_type = REGCACHE_RBTREE, 101 + .use_single_write = true, 102 + }; 103 + EXPORT_SYMBOL_NS_GPL(inv_icm42600_spi_regmap_config, "IIO_ICM42600"); 104 + 90 105 struct inv_icm42600_hw { 91 106 uint8_t whoami; 92 107 const char *name; ··· 829 814 static int inv_icm42600_resume(struct device *dev) 830 815 { 831 816 struct inv_icm42600_state *st = dev_get_drvdata(dev); 817 + struct inv_icm42600_sensor_state *gyro_st = iio_priv(st->indio_gyro); 818 + struct inv_icm42600_sensor_state *accel_st = iio_priv(st->indio_accel); 832 819 int ret; 833 820 834 821 mutex_lock(&st->lock); ··· 851 834 goto out_unlock; 852 835 853 836 /* restore FIFO data streaming */ 854 - if (st->fifo.on) 837 + if (st->fifo.on) { 838 + inv_sensors_timestamp_reset(&gyro_st->ts); 839 + inv_sensors_timestamp_reset(&accel_st->ts); 855 840 ret = regmap_write(st->map, INV_ICM42600_REG_FIFO_CONFIG, 856 841 INV_ICM42600_FIFO_CONFIG_STREAM); 842 + } 857 843 858 844 out_unlock: 859 845 mutex_unlock(&st->lock);
+2 -1
drivers/iio/imu/inv_icm42600/inv_icm42600_spi.c
··· 59 59 return -EINVAL; 60 60 chip = (uintptr_t)match; 61 61 62 - regmap = devm_regmap_init_spi(spi, &inv_icm42600_regmap_config); 62 + /* use SPI specific regmap */ 63 + regmap = devm_regmap_init_spi(spi, &inv_icm42600_spi_regmap_config); 63 64 if (IS_ERR(regmap)) 64 65 return PTR_ERR(regmap); 65 66
+1 -1
drivers/iio/imu/kmx61.c
··· 1193 1193 struct kmx61_data *data = kmx61_get_data(indio_dev); 1194 1194 int bit, ret, i = 0; 1195 1195 u8 base; 1196 - s16 buffer[8]; 1196 + s16 buffer[8] = { }; 1197 1197 1198 1198 if (indio_dev == data->acc_indio_dev) 1199 1199 base = KMX61_ACC_XOUT_L;
+1 -1
drivers/iio/inkern.c
··· 500 500 return_ptr(chans); 501 501 502 502 error_free_chans: 503 - for (i = 0; i < nummaps; i++) 503 + for (i = 0; i < mapind; i++) 504 504 iio_device_put(chans[i].indio_dev); 505 505 return ERR_PTR(ret); 506 506 }
+2
drivers/iio/light/bh1745.c
··· 746 746 int i; 747 747 int j = 0; 748 748 749 + memset(&scan, 0, sizeof(scan)); 750 + 749 751 iio_for_each_active_channel(indio_dev, i) { 750 752 ret = regmap_bulk_read(data->regmap, BH1745_RED_LSB + 2 * i, 751 753 &value, 2);
+1 -1
drivers/iio/light/vcnl4035.c
··· 105 105 struct iio_dev *indio_dev = pf->indio_dev; 106 106 struct vcnl4035_data *data = iio_priv(indio_dev); 107 107 /* Ensure naturally aligned timestamp */ 108 - u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)] __aligned(8); 108 + u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)] __aligned(8) = { }; 109 109 int ret; 110 110 111 111 ret = regmap_read(data->regmap, VCNL4035_ALS_DATA, (int *)buffer);
+2
drivers/iio/pressure/zpa2326.c
··· 586 586 } sample; 587 587 int err; 588 588 589 + memset(&sample, 0, sizeof(sample)); 590 + 589 591 if (test_bit(0, indio_dev->active_scan_mask)) { 590 592 /* Get current pressure from hardware FIFO. */ 591 593 err = zpa2326_dequeue_pressure(indio_dev, &sample.pressure);
+2
drivers/iio/temperature/tmp006.c
··· 252 252 } scan; 253 253 s32 ret; 254 254 255 + memset(&scan, 0, sizeof(scan)); 256 + 255 257 ret = i2c_smbus_read_word_data(data->client, TMP006_VOBJECT); 256 258 if (ret < 0) 257 259 goto err;
+1 -1
drivers/iio/test/Kconfig
··· 5 5 6 6 # Keep in alphabetical order 7 7 config IIO_GTS_KUNIT_TEST 8 - tristate "Test IIO formatting functions" if !KUNIT_ALL_TESTS 8 + tristate "Test IIO gain-time-scale helpers" if !KUNIT_ALL_TESTS 9 9 depends on KUNIT 10 10 select IIO_GTS_HELPER 11 11 select TEST_KUNIT_DEVICE_HELPERS
+4
drivers/iio/test/iio-test-rescale.c
··· 652 652 int rel_ppm; 653 653 int ret; 654 654 655 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buff); 656 + 655 657 rescale.numerator = t->numerator; 656 658 rescale.denominator = t->denominator; 657 659 rescale.offset = t->offset; ··· 682 680 struct rescale rescale; 683 681 int values[2]; 684 682 int ret; 683 + 684 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buff_off); 685 685 686 686 rescale.numerator = t->numerator; 687 687 rescale.denominator = t->denominator;
+10
drivers/interconnect/icc-clk.c
··· 116 116 } 117 117 118 118 node->name = devm_kasprintf(dev, GFP_KERNEL, "%s_master", data[i].name); 119 + if (!node->name) { 120 + ret = -ENOMEM; 121 + goto err; 122 + } 123 + 119 124 node->data = &qp->clocks[i]; 120 125 icc_node_add(node, provider); 121 126 /* link to the next node, slave */ ··· 134 129 } 135 130 136 131 node->name = devm_kasprintf(dev, GFP_KERNEL, "%s_slave", data[i].name); 132 + if (!node->name) { 133 + ret = -ENOMEM; 134 + goto err; 135 + } 136 + 137 137 /* no data for slave node */ 138 138 icc_node_add(node, provider); 139 139 onecell->nodes[j++] = node;
+1 -1
drivers/interconnect/qcom/icc-rpm.c
··· 503 503 GFP_KERNEL); 504 504 if (!data) 505 505 return -ENOMEM; 506 + data->num_nodes = num_nodes; 506 507 507 508 qp->num_intf_clks = cd_num; 508 509 for (i = 0; i < cd_num; i++) ··· 598 597 599 598 data->nodes[i] = node; 600 599 } 601 - data->num_nodes = num_nodes; 602 600 603 601 clk_bulk_disable_unprepare(qp->num_intf_clks, qp->intf_clks); 604 602
+1 -1
drivers/irqchip/irq-gic-v3-its.c
··· 2045 2045 if (!is_v4(its_dev->its)) 2046 2046 return -EINVAL; 2047 2047 2048 - guard(raw_spinlock_irq)(&its_dev->event_map.vlpi_lock); 2048 + guard(raw_spinlock)(&its_dev->event_map.vlpi_lock); 2049 2049 2050 2050 /* Unmap request? */ 2051 2051 if (!info)
+1 -1
drivers/irqchip/irq-gic-v3.c
··· 1522 1522 static int gic_cpu_pm_notifier(struct notifier_block *self, 1523 1523 unsigned long cmd, void *v) 1524 1524 { 1525 - if (cmd == CPU_PM_EXIT) { 1525 + if (cmd == CPU_PM_EXIT || cmd == CPU_PM_ENTER_FAILED) { 1526 1526 if (gic_dist_security_disabled()) 1527 1527 gic_enable_redist(true); 1528 1528 gic_cpu_sys_reg_enable();
+2 -1
drivers/irqchip/irq-sunxi-nmi.c
··· 186 186 gc->chip_types[0].chip.irq_unmask = irq_gc_mask_set_bit; 187 187 gc->chip_types[0].chip.irq_eoi = irq_gc_ack_set_bit; 188 188 gc->chip_types[0].chip.irq_set_type = sunxi_sc_nmi_set_type; 189 - gc->chip_types[0].chip.flags = IRQCHIP_EOI_THREADED | IRQCHIP_EOI_IF_HANDLED; 189 + gc->chip_types[0].chip.flags = IRQCHIP_EOI_THREADED | IRQCHIP_EOI_IF_HANDLED | 190 + IRQCHIP_SKIP_SET_WAKE; 190 191 gc->chip_types[0].regs.ack = reg_offs->pend; 191 192 gc->chip_types[0].regs.mask = reg_offs->enable; 192 193 gc->chip_types[0].regs.type = reg_offs->ctrl;
+1 -3
drivers/irqchip/irqchip.c
··· 35 35 int platform_irqchip_probe(struct platform_device *pdev) 36 36 { 37 37 struct device_node *np = pdev->dev.of_node; 38 - struct device_node *par_np = of_irq_find_parent(np); 38 + struct device_node *par_np __free(device_node) = of_irq_find_parent(np); 39 39 of_irq_init_cb_t irq_init_cb = of_device_get_match_data(&pdev->dev); 40 40 41 41 if (!irq_init_cb) { 42 - of_node_put(par_np); 43 42 return -EINVAL; 44 43 } 45 44 ··· 54 55 * interrupt controller can check for specific domains as necessary. 55 56 */ 56 57 if (par_np && !irq_find_matching_host(par_np, DOMAIN_BUS_ANY)) { 57 - of_node_put(par_np); 58 58 return -EPROBE_DEFER; 59 59 } 60 60
+1 -1
drivers/md/dm-ebs-target.c
··· 442 442 static struct target_type ebs_target = { 443 443 .name = "ebs", 444 444 .version = {1, 0, 1}, 445 - .features = DM_TARGET_PASSES_INTEGRITY, 445 + .features = 0, 446 446 .module = THIS_MODULE, 447 447 .ctr = ebs_ctr, 448 448 .dtr = ebs_dtr,
+2 -3
drivers/md/dm-thin.c
··· 2332 2332 struct thin_c *tc = NULL; 2333 2333 2334 2334 rcu_read_lock(); 2335 - if (!list_empty(&pool->active_thins)) { 2336 - tc = list_entry_rcu(pool->active_thins.next, struct thin_c, list); 2335 + tc = list_first_or_null_rcu(&pool->active_thins, struct thin_c, list); 2336 + if (tc) 2337 2337 thin_get(tc); 2338 - } 2339 2338 rcu_read_unlock(); 2340 2339 2341 2340 return tc;
+30 -29
drivers/md/dm-verity-fec.c
··· 40 40 } 41 41 42 42 /* 43 - * Decode an RS block using Reed-Solomon. 44 - */ 45 - static int fec_decode_rs8(struct dm_verity *v, struct dm_verity_fec_io *fio, 46 - u8 *data, u8 *fec, int neras) 47 - { 48 - int i; 49 - uint16_t par[DM_VERITY_FEC_RSM - DM_VERITY_FEC_MIN_RSN]; 50 - 51 - for (i = 0; i < v->fec->roots; i++) 52 - par[i] = fec[i]; 53 - 54 - return decode_rs8(fio->rs, data, par, v->fec->rsn, NULL, neras, 55 - fio->erasures, 0, NULL); 56 - } 57 - 58 - /* 59 43 * Read error-correcting codes for the requested RS block. Returns a pointer 60 44 * to the data block. Caller is responsible for releasing buf. 61 45 */ 62 46 static u8 *fec_read_parity(struct dm_verity *v, u64 rsb, int index, 63 - unsigned int *offset, struct dm_buffer **buf, 64 - unsigned short ioprio) 47 + unsigned int *offset, unsigned int par_buf_offset, 48 + struct dm_buffer **buf, unsigned short ioprio) 65 49 { 66 50 u64 position, block, rem; 67 51 u8 *res; 68 52 53 + /* We have already part of parity bytes read, skip to the next block */ 54 + if (par_buf_offset) 55 + index++; 56 + 69 57 position = (index + rsb) * v->fec->roots; 70 58 block = div64_u64_rem(position, v->fec->io_size, &rem); 71 - *offset = (unsigned int)rem; 59 + *offset = par_buf_offset ? 0 : (unsigned int)rem; 72 60 73 61 res = dm_bufio_read_with_ioprio(v->fec->bufio, block, buf, ioprio); 74 62 if (IS_ERR(res)) { ··· 116 128 { 117 129 int r, corrected = 0, res; 118 130 struct dm_buffer *buf; 119 - unsigned int n, i, offset; 131 + unsigned int n, i, j, offset, par_buf_offset = 0; 132 + uint16_t par_buf[DM_VERITY_FEC_RSM - DM_VERITY_FEC_MIN_RSN]; 120 133 u8 *par, *block; 121 134 struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size); 122 135 123 - par = fec_read_parity(v, rsb, block_offset, &offset, &buf, bio_prio(bio)); 136 + par = fec_read_parity(v, rsb, block_offset, &offset, 137 + par_buf_offset, &buf, bio_prio(bio)); 124 138 if (IS_ERR(par)) 125 139 return PTR_ERR(par); 126 140 ··· 132 142 */ 133 143 fec_for_each_buffer_rs_block(fio, n, i) { 134 144 block = fec_buffer_rs_block(v, fio, n, i); 135 - res = fec_decode_rs8(v, fio, block, &par[offset], neras); 145 + for (j = 0; j < v->fec->roots - par_buf_offset; j++) 146 + par_buf[par_buf_offset + j] = par[offset + j]; 147 + /* Decode an RS block using Reed-Solomon */ 148 + res = decode_rs8(fio->rs, block, par_buf, v->fec->rsn, 149 + NULL, neras, fio->erasures, 0, NULL); 136 150 if (res < 0) { 137 151 r = res; 138 152 goto error; ··· 149 155 if (block_offset >= 1 << v->data_dev_block_bits) 150 156 goto done; 151 157 152 - /* read the next block when we run out of parity bytes */ 153 - offset += v->fec->roots; 158 + /* Read the next block when we run out of parity bytes */ 159 + offset += (v->fec->roots - par_buf_offset); 160 + /* Check if parity bytes are split between blocks */ 161 + if (offset < v->fec->io_size && (offset + v->fec->roots) > v->fec->io_size) { 162 + par_buf_offset = v->fec->io_size - offset; 163 + for (j = 0; j < par_buf_offset; j++) 164 + par_buf[j] = par[offset + j]; 165 + offset += par_buf_offset; 166 + } else 167 + par_buf_offset = 0; 168 + 154 169 if (offset >= v->fec->io_size) { 155 170 dm_bufio_release(buf); 156 171 157 - par = fec_read_parity(v, rsb, block_offset, &offset, &buf, bio_prio(bio)); 172 + par = fec_read_parity(v, rsb, block_offset, &offset, 173 + par_buf_offset, &buf, bio_prio(bio)); 158 174 if (IS_ERR(par)) 159 175 return PTR_ERR(par); 160 176 } ··· 728 724 return -E2BIG; 729 725 } 730 726 731 - if ((f->roots << SECTOR_SHIFT) & ((1 << v->data_dev_block_bits) - 1)) 732 - f->io_size = 1 << v->data_dev_block_bits; 733 - else 734 - f->io_size = v->fec->roots << SECTOR_SHIFT; 727 + f->io_size = 1 << v->data_dev_block_bits; 735 728 736 729 f->bufio = dm_bufio_client_create(f->dev->bdev, 737 730 f->io_size,
+12 -7
drivers/md/persistent-data/dm-array.c
···
 	if (c->block)
 		unlock_ablock(c->info, c->block);
 
-	c->block = NULL;
-	c->ab = NULL;
 	c->index = 0;
 
 	r = dm_btree_cursor_get_value(&c->cursor, &key, &value_le);
 	if (r) {
 		DMERR("dm_btree_cursor_get_value failed");
-		dm_btree_cursor_end(&c->cursor);
+		goto out;
 
 	} else {
 		r = get_ablock(c->info, le64_to_cpu(value_le), &c->block, &c->ab);
 		if (r) {
 			DMERR("get_ablock failed");
-			dm_btree_cursor_end(&c->cursor);
+			goto out;
 		}
 	}
 
+	return 0;
+
+out:
+	dm_btree_cursor_end(&c->cursor);
+	c->block = NULL;
+	c->ab = NULL;
 	return r;
 }
···
 
 void dm_array_cursor_end(struct dm_array_cursor *c)
 {
-	if (c->block) {
+	if (c->block)
 		unlock_ablock(c->info, c->block);
-		dm_btree_cursor_end(&c->cursor);
-	}
+
+	dm_btree_cursor_end(&c->cursor);
 }
 EXPORT_SYMBOL_GPL(dm_array_cursor_end);
···
 	}
 
 	count -= remaining;
+	c->index += (remaining - 1);
 	r = dm_array_cursor_next(c);
 
 } while (!r);
+2 -2
drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gpio.c
···
 		pci1xxx_assign_bit(priv->reg_base, OPENDRAIN_OFFSET(offset), (offset % 32), true);
 		break;
 	default:
-		ret = -EOPNOTSUPP;
+		ret = -ENOTSUPP;
 		break;
 	}
 	spin_unlock_irqrestore(&priv->lock, flags);
···
 			writel(BIT(bit), priv->reg_base + INTR_STATUS_OFFSET(gpiobank));
 			spin_unlock_irqrestore(&priv->lock, flags);
 			irq = irq_find_mapping(gc->irq.domain, (bit + (gpiobank * 32)));
-			generic_handle_irq(irq);
+			handle_nested_irq(irq);
 		}
 	}
 	spin_lock_irqsave(&priv->lock, flags);
+1 -1
drivers/mtd/spi-nor/core.c
···
 	op->addr.buswidth = spi_nor_get_protocol_addr_nbits(proto);
 
 	if (op->dummy.nbytes)
-		op->dummy.buswidth = spi_nor_get_protocol_data_nbits(proto);
+		op->dummy.buswidth = spi_nor_get_protocol_addr_nbits(proto);
 
 	if (op->data.nbytes)
 		op->data.buswidth = spi_nor_get_protocol_data_nbits(proto);
+1 -1
drivers/net/ethernet/amd/pds_core/devlink.c
···
 	if (err && err != -EIO)
 		return err;
 
-	listlen = fw_list.num_fw_slots;
+	listlen = min(fw_list.num_fw_slots, ARRAY_SIZE(fw_list.fw_names));
 	for (i = 0; i < listlen; i++) {
 		if (i < ARRAY_SIZE(fw_slotnames))
 			strscpy(buf, fw_slotnames[i], sizeof(buf));
+2 -17
drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
···
 
 static bool xgbe_phy_finisar_phy_quirks(struct xgbe_prv_data *pdata)
 {
-	__ETHTOOL_DECLARE_LINK_MODE_MASK(supported) = { 0, };
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
 	unsigned int phy_id = phy_data->phydev->phy_id;
 
···
 	phy_write(phy_data->phydev, 0x04, 0x0d01);
 	phy_write(phy_data->phydev, 0x00, 0x9140);
 
-	linkmode_set_bit_array(phy_10_100_features_array,
-			       ARRAY_SIZE(phy_10_100_features_array),
-			       supported);
-	linkmode_set_bit_array(phy_gbit_features_array,
-			       ARRAY_SIZE(phy_gbit_features_array),
-			       supported);
-
-	linkmode_copy(phy_data->phydev->supported, supported);
+	linkmode_copy(phy_data->phydev->supported, PHY_GBIT_FEATURES);
 
 	phy_support_asym_pause(phy_data->phydev);
···
 
 static bool xgbe_phy_belfuse_phy_quirks(struct xgbe_prv_data *pdata)
 {
-	__ETHTOOL_DECLARE_LINK_MODE_MASK(supported) = { 0, };
 	struct xgbe_phy_data *phy_data = pdata->phy_data;
 	struct xgbe_sfp_eeprom *sfp_eeprom = &phy_data->sfp_eeprom;
 	unsigned int phy_id = phy_data->phydev->phy_id;
···
 	reg = phy_read(phy_data->phydev, 0x00);
 	phy_write(phy_data->phydev, 0x00, reg & ~0x00800);
 
-	linkmode_set_bit_array(phy_10_100_features_array,
-			       ARRAY_SIZE(phy_10_100_features_array),
-			       supported);
-	linkmode_set_bit_array(phy_gbit_features_array,
-			       ARRAY_SIZE(phy_gbit_features_array),
-			       supported);
-	linkmode_copy(phy_data->phydev->supported, supported);
+	linkmode_copy(phy_data->phydev->supported, PHY_GBIT_FEATURES);
 	phy_support_asym_pause(phy_data->phydev);
 
 	netif_dbg(pdata, drv, pdata->netdev,
+53 -10
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
 	return 0;
 }
 
+static bool bnxt_vnic_is_active(struct bnxt *bp)
+{
+	struct bnxt_vnic_info *vnic = &bp->vnic_info[0];
+
+	return vnic->fw_vnic_id != INVALID_HW_RING_ID && vnic->mru > 0;
+}
+
 static irqreturn_t bnxt_msix(int irq, void *dev_instance)
 {
 	struct bnxt_napi *bnapi = dev_instance;
···
 			break;
 		}
 	}
-	if (bp->flags & BNXT_FLAG_DIM) {
+	if ((bp->flags & BNXT_FLAG_DIM) && bnxt_vnic_is_active(bp)) {
 		struct dim_sample dim_sample = {};
 
 		dim_update_sample(cpr->event_ctr,
···
poll_done:
 	cpr_rx = &cpr->cp_ring_arr[0];
 	if (cpr_rx->cp_ring_type == BNXT_NQ_HDL_TYPE_RX &&
-	    (bp->flags & BNXT_FLAG_DIM)) {
+	    (bp->flags & BNXT_FLAG_DIM) && bnxt_vnic_is_active(bp)) {
 		struct dim_sample dim_sample = {};
 
 		dim_update_sample(cpr->event_ctr,
···
 /* Changing allocation mode of RX rings.
  * TODO: Update when extending xdp_rxq_info to support allocation modes.
  */
-int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
+static void __bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
 {
 	struct net_device *dev = bp->dev;
···
 			bp->rx_skb_func = bnxt_rx_page_skb;
 		}
 		bp->rx_dir = DMA_BIDIRECTIONAL;
-		/* Disable LRO or GRO_HW */
-		netdev_update_features(dev);
 	} else {
 		dev->max_mtu = bp->max_mtu;
 		bp->flags &= ~BNXT_FLAG_RX_PAGE_MODE;
 		bp->rx_dir = DMA_FROM_DEVICE;
 		bp->rx_skb_func = bnxt_rx_skb;
 	}
-	return 0;
+}
+
+void bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
+{
+	__bnxt_set_rx_skb_mode(bp, page_mode);
+
+	if (!page_mode) {
+		int rx, tx;
+
+		bnxt_get_max_rings(bp, &rx, &tx, true);
+		if (rx > 1) {
+			bp->flags &= ~BNXT_FLAG_NO_AGG_RINGS;
+			bp->dev->hw_features |= NETIF_F_LRO;
+		}
+	}
+
+	/* Update LRO and GRO_HW availability */
+	netdev_update_features(bp->dev);
 }
 
 static void bnxt_free_vnic_attributes(struct bnxt *bp)
···
 	return rc;
 }
 
+static void bnxt_cancel_dim(struct bnxt *bp)
+{
+	int i;
+
+	/* DIM work is initialized in bnxt_enable_napi(). Proceed only
+	 * if NAPI is enabled.
+	 */
+	if (!bp->bnapi || test_bit(BNXT_STATE_NAPI_DISABLED, &bp->state))
+		return;
+
+	/* Make sure NAPI sees that the VNIC is disabled */
+	synchronize_net();
+	for (i = 0; i < bp->rx_nr_rings; i++) {
+		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
+		struct bnxt_napi *bnapi = rxr->bnapi;
+
+		cancel_work_sync(&bnapi->cp_ring.dim.work);
+	}
+}
+
 static int hwrm_ring_free_send_msg(struct bnxt *bp,
 				   struct bnxt_ring_struct *ring,
 				   u32 ring_type, int cmpl_ring_id)
···
 		}
 	}
 
+	bnxt_cancel_dim(bp);
 	for (i = 0; i < bp->rx_nr_rings; i++) {
 		bnxt_hwrm_rx_ring_free(bp, &bp->rx_ring[i], close_path);
 		bnxt_hwrm_rx_agg_ring_free(bp, &bp->rx_ring[i], close_path);
···
 		if (bnapi->in_reset)
 			cpr->sw_stats->rx.rx_resets++;
 		napi_disable(&bnapi->napi);
-		if (bnapi->rx_ring)
-			cancel_work_sync(&cpr->dim.work);
 	}
 }
···
 		bnxt_hwrm_vnic_update(bp, vnic,
 				      VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
 	}
-
+	/* Make sure NAPI sees that the VNIC is disabled */
+	synchronize_net();
 	rxr = &bp->rx_ring[idx];
+	cancel_work_sync(&rxr->bnapi->cp_ring.dim.work);
 	bnxt_hwrm_rx_ring_free(bp, rxr, false);
 	bnxt_hwrm_rx_agg_ring_free(bp, rxr, false);
 	rxr->rx_next_cons = 0;
···
 	if (bp->max_fltr < BNXT_MAX_FLTR)
 		bp->max_fltr = BNXT_MAX_FLTR;
 	bnxt_init_l2_fltr_tbl(bp);
-	bnxt_set_rx_skb_mode(bp, false);
+	__bnxt_set_rx_skb_mode(bp, false);
 	bnxt_set_tpa_flags(bp);
 	bnxt_set_ring_params(bp);
 	bnxt_rdma_aux_device_init(bp);
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt.h
···
 bool bnxt_bs_trace_avail(struct bnxt *bp, u16 type);
 void bnxt_set_tpa_flags(struct bnxt *bp);
 void bnxt_set_ring_params(struct bnxt *);
-int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode);
+void bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode);
 void bnxt_insert_usr_fltr(struct bnxt *bp, struct bnxt_filter_base *fltr);
 void bnxt_del_one_usr_fltr(struct bnxt *bp, struct bnxt_filter_base *fltr);
 int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp, unsigned long *bmap,
+2 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
···
 
 	rc = hwrm_req_replace(bp, req, fw_msg->msg, fw_msg->msg_len);
 	if (rc)
-		return rc;
+		goto drop_req;
 
 	hwrm_req_timeout(bp, req, fw_msg->timeout);
 	resp = hwrm_req_hold(bp, req);
···
 
 		memcpy(fw_msg->resp, resp, resp_len);
 	}
+drop_req:
 	hwrm_req_drop(bp, req);
 	return rc;
 }
-7
drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
···
 		bnxt_set_rx_skb_mode(bp, true);
 		xdp_features_set_redirect_target(dev, true);
 	} else {
-		int rx, tx;
-
 		xdp_features_clear_redirect_target(dev);
 		bnxt_set_rx_skb_mode(bp, false);
-		bnxt_get_max_rings(bp, &rx, &tx, true);
-		if (rx > 1) {
-			bp->flags &= ~BNXT_FLAG_NO_AGG_RINGS;
-			bp->dev->hw_features |= NETIF_F_LRO;
-		}
 	}
 	bp->tx_nr_rings_xdp = tx_xdp;
 	bp->tx_nr_rings = bp->tx_nr_rings_per_tc * tc + tx_xdp;
+4 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
···
 	struct adapter *adap = container_of(t, struct adapter, tids);
 	struct sk_buff *skb;
 
-	WARN_ON(tid_out_of_range(&adap->tids, tid));
+	if (tid_out_of_range(&adap->tids, tid)) {
+		dev_err(adap->pdev_dev, "tid %d out of range\n", tid);
+		return;
+	}
 
 	if (t->tid_tab[tid - adap->tids.tid_base]) {
 		t->tid_tab[tid - adap->tids.tid_base] = NULL;
+14 -5
drivers/net/ethernet/freescale/fec_main.c
···
 		fec_enet_tx_queue(ndev, i, budget);
 }
 
-static void fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq,
+static int fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq,
 				struct bufdesc *bdp, int index)
 {
 	struct page *new_page;
 	dma_addr_t phys_addr;
 
 	new_page = page_pool_dev_alloc_pages(rxq->page_pool);
-	WARN_ON(!new_page);
-	rxq->rx_skb_info[index].page = new_page;
+	if (unlikely(!new_page))
+		return -ENOMEM;
 
+	rxq->rx_skb_info[index].page = new_page;
 	rxq->rx_skb_info[index].offset = FEC_ENET_XDP_HEADROOM;
 	phys_addr = page_pool_get_dma_addr(new_page) + FEC_ENET_XDP_HEADROOM;
 	bdp->cbd_bufaddr = cpu_to_fec32(phys_addr);
+
+	return 0;
 }
 
 static u32
···
 	int cpu = smp_processor_id();
 	struct xdp_buff xdp;
 	struct page *page;
+	__fec32 cbd_bufaddr;
 	u32 sub_len = 4;
 
 #if !defined(CONFIG_M5272)
···
 
 		index = fec_enet_get_bd_index(bdp, &rxq->bd);
 		page = rxq->rx_skb_info[index].page;
+		cbd_bufaddr = bdp->cbd_bufaddr;
+		if (fec_enet_update_cbd(rxq, bdp, index)) {
+			ndev->stats.rx_dropped++;
+			goto rx_processing_done;
+		}
+
 		dma_sync_single_for_cpu(&fep->pdev->dev,
-					fec32_to_cpu(bdp->cbd_bufaddr),
+					fec32_to_cpu(cbd_bufaddr),
 					pkt_len,
 					DMA_FROM_DEVICE);
 		prefetch(page_address(page));
-		fec_enet_update_cbd(rxq, bdp, index);
 
 		if (xdp_prog) {
 			xdp_buff_clear_frags_flag(&xdp);
+9 -5
drivers/net/ethernet/google/gve/gve_main.c
···
 
 static void gve_set_netdev_xdp_features(struct gve_priv *priv)
 {
+	xdp_features_t xdp_features;
+
 	if (priv->queue_format == GVE_GQI_QPL_FORMAT) {
-		priv->dev->xdp_features = NETDEV_XDP_ACT_BASIC;
-		priv->dev->xdp_features |= NETDEV_XDP_ACT_REDIRECT;
-		priv->dev->xdp_features |= NETDEV_XDP_ACT_NDO_XMIT;
-		priv->dev->xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY;
+		xdp_features = NETDEV_XDP_ACT_BASIC;
+		xdp_features |= NETDEV_XDP_ACT_REDIRECT;
+		xdp_features |= NETDEV_XDP_ACT_NDO_XMIT;
+		xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY;
 	} else {
-		priv->dev->xdp_features = 0;
+		xdp_features = 0;
 	}
+
+	xdp_set_features_flag(priv->dev, xdp_features);
 }
 
 static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device)
-3
drivers/net/ethernet/hisilicon/hns3/hnae3.h
···
 
 	u8 netdev_flags;
 	struct dentry *hnae3_dbgfs;
-	/* protects concurrent contention between debugfs commands */
-	struct mutex dbgfs_lock;
-	char **dbgfs_buf;
 
 	/* Network interface message level enabled bits */
 	u32 msg_enable;
+31 -65
drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
···
 static ssize_t hns3_dbg_read(struct file *filp, char __user *buffer,
 			     size_t count, loff_t *ppos)
 {
-	struct hns3_dbg_data *dbg_data = filp->private_data;
+	char *buf = filp->private_data;
+
+	return simple_read_from_buffer(buffer, count, ppos, buf, strlen(buf));
+}
+
+static int hns3_dbg_open(struct inode *inode, struct file *filp)
+{
+	struct hns3_dbg_data *dbg_data = inode->i_private;
 	struct hnae3_handle *handle = dbg_data->handle;
 	struct hns3_nic_priv *priv = handle->priv;
-	ssize_t size = 0;
-	char **save_buf;
-	char *read_buf;
 	u32 index;
+	char *buf;
 	int ret;
+
+	if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) ||
+	    test_bit(HNS3_NIC_STATE_RESETTING, &priv->state))
+		return -EBUSY;
 
 	ret = hns3_dbg_get_cmd_index(dbg_data, &index);
 	if (ret)
 		return ret;
 
-	mutex_lock(&handle->dbgfs_lock);
-	save_buf = &handle->dbgfs_buf[index];
+	buf = kvzalloc(hns3_dbg_cmd[index].buf_len, GFP_KERNEL);
+	if (!buf)
+		return -ENOMEM;
 
-	if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) ||
-	    test_bit(HNS3_NIC_STATE_RESETTING, &priv->state)) {
-		ret = -EBUSY;
-		goto out;
+	ret = hns3_dbg_read_cmd(dbg_data, hns3_dbg_cmd[index].cmd,
+				buf, hns3_dbg_cmd[index].buf_len);
+	if (ret) {
+		kvfree(buf);
+		return ret;
 	}
 
-	if (*save_buf) {
-		read_buf = *save_buf;
-	} else {
-		read_buf = kvzalloc(hns3_dbg_cmd[index].buf_len, GFP_KERNEL);
-		if (!read_buf) {
-			ret = -ENOMEM;
-			goto out;
-		}
+	filp->private_data = buf;
+	return 0;
+}
 
-		/* save the buffer addr until the last read operation */
-		*save_buf = read_buf;
-
-		/* get data ready for the first time to read */
-		ret = hns3_dbg_read_cmd(dbg_data, hns3_dbg_cmd[index].cmd,
-					read_buf, hns3_dbg_cmd[index].buf_len);
-		if (ret)
-			goto out;
-	}
-
-	size = simple_read_from_buffer(buffer, count, ppos, read_buf,
-				       strlen(read_buf));
-	if (size > 0) {
-		mutex_unlock(&handle->dbgfs_lock);
-		return size;
-	}
-
-out:
-	/* free the buffer for the last read operation */
-	if (*save_buf) {
-		kvfree(*save_buf);
-		*save_buf = NULL;
-	}
-
-	mutex_unlock(&handle->dbgfs_lock);
-	return ret;
+static int hns3_dbg_release(struct inode *inode, struct file *filp)
+{
+	kvfree(filp->private_data);
+	filp->private_data = NULL;
+	return 0;
 }
 
 static const struct file_operations hns3_dbg_fops = {
 	.owner = THIS_MODULE,
-	.open = simple_open,
+	.open = hns3_dbg_open,
 	.read = hns3_dbg_read,
+	.release = hns3_dbg_release,
 };
 
 static int hns3_dbg_bd_file_init(struct hnae3_handle *handle, u32 cmd)
···
 	int ret;
 	u32 i;
 
-	handle->dbgfs_buf = devm_kcalloc(&handle->pdev->dev,
-					 ARRAY_SIZE(hns3_dbg_cmd),
-					 sizeof(*handle->dbgfs_buf),
-					 GFP_KERNEL);
-	if (!handle->dbgfs_buf)
-		return -ENOMEM;
-
 	hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry =
 			debugfs_create_dir(name, hns3_dbgfs_root);
 	handle->hnae3_dbgfs = hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry;
···
 		hns3_dbg_dentry[i].dentry =
 			debugfs_create_dir(hns3_dbg_dentry[i].name,
 					   handle->hnae3_dbgfs);
-
-	mutex_init(&handle->dbgfs_lock);
 
 	for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++) {
 		if ((hns3_dbg_cmd[i].cmd == HNAE3_DBG_CMD_TM_NODES &&
···
out:
 	debugfs_remove_recursive(handle->hnae3_dbgfs);
 	handle->hnae3_dbgfs = NULL;
-	mutex_destroy(&handle->dbgfs_lock);
 	return ret;
 }
 
 void hns3_dbg_uninit(struct hnae3_handle *handle)
 {
-	u32 i;
-
 	debugfs_remove_recursive(handle->hnae3_dbgfs);
 	handle->hnae3_dbgfs = NULL;
-
-	for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++)
-		if (handle->dbgfs_buf[i]) {
-			kvfree(handle->dbgfs_buf[i]);
-			handle->dbgfs_buf[i] = NULL;
-		}
-
-	mutex_destroy(&handle->dbgfs_lock);
 }
 
 void hns3_dbg_register_debugfs(const char *debugfs_dir_name)
-1
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
···
 		return ret;
 	}
 
-	netdev->features = features;
 	return 0;
 }
 
+36 -9
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
···
 #include <linux/etherdevice.h>
 #include <linux/init.h>
 #include <linux/interrupt.h>
+#include <linux/irq.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/netdevice.h>
···
 	return ret;
 }
 
+static void hclge_set_reset_pending(struct hclge_dev *hdev,
+				    enum hnae3_reset_type reset_type)
+{
+	/* When an incorrect reset type is executed, the get_reset_level
+	 * function generates the HNAE3_NONE_RESET flag. As a result, this
+	 * type do not need to pending.
+	 */
+	if (reset_type != HNAE3_NONE_RESET)
+		set_bit(reset_type, &hdev->reset_pending);
+}
+
 static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
 {
 	u32 cmdq_src_reg, msix_src_reg, hw_err_src_reg;
···
 	 */
 	if (BIT(HCLGE_VECTOR0_IMPRESET_INT_B) & msix_src_reg) {
 		dev_info(&hdev->pdev->dev, "IMP reset interrupt\n");
-		set_bit(HNAE3_IMP_RESET, &hdev->reset_pending);
+		hclge_set_reset_pending(hdev, HNAE3_IMP_RESET);
 		set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state);
 		*clearval = BIT(HCLGE_VECTOR0_IMPRESET_INT_B);
 		hdev->rst_stats.imp_rst_cnt++;
···
 	if (BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B) & msix_src_reg) {
 		dev_info(&hdev->pdev->dev, "global reset interrupt\n");
 		set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state);
-		set_bit(HNAE3_GLOBAL_RESET, &hdev->reset_pending);
+		hclge_set_reset_pending(hdev, HNAE3_GLOBAL_RESET);
 		*clearval = BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B);
 		hdev->rst_stats.global_rst_cnt++;
 		return HCLGE_VECTOR0_EVENT_RST;
···
 	snprintf(hdev->misc_vector.name, HNAE3_INT_NAME_LEN, "%s-misc-%s",
 		 HCLGE_NAME, pci_name(hdev->pdev));
 	ret = request_irq(hdev->misc_vector.vector_irq, hclge_misc_irq_handle,
-			  0, hdev->misc_vector.name, hdev);
+			  IRQF_NO_AUTOEN, hdev->misc_vector.name, hdev);
 	if (ret) {
 		hclge_free_vector(hdev, 0);
 		dev_err(&hdev->pdev->dev, "request misc irq(%d) fail\n",
···
 	case HNAE3_FUNC_RESET:
 		dev_info(&pdev->dev, "PF reset requested\n");
 		/* schedule again to check later */
-		set_bit(HNAE3_FUNC_RESET, &hdev->reset_pending);
+		hclge_set_reset_pending(hdev, HNAE3_FUNC_RESET);
 		hclge_reset_task_schedule(hdev);
 		break;
 	default:
···
 		rst_level = HNAE3_FLR_RESET;
 		clear_bit(HNAE3_FLR_RESET, addr);
 	}
+
+	clear_bit(HNAE3_NONE_RESET, addr);
 
 	if (hdev->reset_type != HNAE3_NONE_RESET &&
 	    rst_level < hdev->reset_type)
···
 		return false;
 	} else if (hdev->rst_stats.reset_fail_cnt < MAX_RESET_FAIL_CNT) {
 		hdev->rst_stats.reset_fail_cnt++;
-		set_bit(hdev->reset_type, &hdev->reset_pending);
+		hclge_set_reset_pending(hdev, hdev->reset_type);
 		dev_info(&hdev->pdev->dev,
 			 "re-schedule reset task(%u)\n",
 			 hdev->rst_stats.reset_fail_cnt);
···
 static void hclge_set_def_reset_request(struct hnae3_ae_dev *ae_dev,
 					enum hnae3_reset_type rst_type)
 {
+#define HCLGE_SUPPORT_RESET_TYPE \
+	(BIT(HNAE3_FLR_RESET) | BIT(HNAE3_FUNC_RESET) | \
+	 BIT(HNAE3_GLOBAL_RESET) | BIT(HNAE3_IMP_RESET))
+
 	struct hclge_dev *hdev = ae_dev->priv;
+
+	if (!(BIT(rst_type) & HCLGE_SUPPORT_RESET_TYPE)) {
+		/* To prevent reset triggered by hclge_reset_event */
+		set_bit(HNAE3_NONE_RESET, &hdev->default_reset_request);
+		dev_warn(&hdev->pdev->dev, "unsupported reset type %d\n",
+			 rst_type);
+		return;
+	}
 
 	set_bit(rst_type, &hdev->default_reset_request);
 }
···
 
 	hclge_init_rxd_adv_layout(hdev);
 
-	/* Enable MISC vector(vector0) */
-	hclge_enable_vector(&hdev->misc_vector, true);
-
 	ret = hclge_init_wol(hdev);
 	if (ret)
 		dev_warn(&pdev->dev,
···
 
 	hclge_state_init(hdev);
 	hdev->last_reset_time = jiffies;
+
+	/* Enable MISC vector(vector0) */
+	enable_irq(hdev->misc_vector.vector_irq);
+	hclge_enable_vector(&hdev->misc_vector, true);
 
 	dev_info(&hdev->pdev->dev, "%s driver initialization finished.\n",
 		 HCLGE_DRIVER_NAME);
···
 
 	/* Disable MISC vector(vector0) */
 	hclge_enable_vector(&hdev->misc_vector, false);
-	synchronize_irq(hdev->misc_vector.vector_irq);
+	disable_irq(hdev->misc_vector.vector_irq);
 
 	/* Disable all hw interrupts */
 	hclge_config_mac_tnl_int(hdev, false);
+3
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
···
 	struct hclge_dev *hdev = vport->back;
 	struct hclge_ptp *ptp = hdev->ptp;
 
+	if (!ptp)
+		return false;
+
 	if (!test_bit(HCLGE_PTP_FLAG_TX_EN, &ptp->flags) ||
 	    test_and_set_bit(HCLGE_STATE_PTP_TX_HANDLING, &hdev->state)) {
 		ptp->tx_skipped++;
+5 -4
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.c
···
 static int hclge_fetch_pf_reg(struct hclge_dev *hdev, void *data,
 			      struct hnae3_knic_private_info *kinfo)
 {
-#define HCLGE_RING_REG_OFFSET		0x200
 #define HCLGE_RING_INT_REG_OFFSET	0x4
 
+	struct hnae3_queue *tqp;
 	int i, j, reg_num;
 	int data_num_sum;
 	u32 *reg = data;
···
 	reg_num = ARRAY_SIZE(ring_reg_addr_list);
 	for (j = 0; j < kinfo->num_tqps; j++) {
 		reg += hclge_reg_get_tlv(HCLGE_REG_TAG_RING, reg_num, reg);
+		tqp = kinfo->tqp[j];
 		for (i = 0; i < reg_num; i++)
-			*reg++ = hclge_read_dev(&hdev->hw,
-						ring_reg_addr_list[i] +
-						HCLGE_RING_REG_OFFSET * j);
+			*reg++ = readl_relaxed(tqp->io_base -
+					       HCLGE_TQP_REG_OFFSET +
+					       ring_reg_addr_list[i]);
 	}
 	data_num_sum += (reg_num + HCLGE_REG_TLV_SPACE) * kinfo->num_tqps;
 
+34 -7
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
···
 	return ret;
 }
 
+static void hclgevf_set_reset_pending(struct hclgevf_dev *hdev,
+				      enum hnae3_reset_type reset_type)
+{
+	/* When an incorrect reset type is executed, the get_reset_level
+	 * function generates the HNAE3_NONE_RESET flag. As a result, this
+	 * type do not need to pending.
+	 */
+	if (reset_type != HNAE3_NONE_RESET)
+		set_bit(reset_type, &hdev->reset_pending);
+}
+
 static int hclgevf_reset_wait(struct hclgevf_dev *hdev)
 {
 #define HCLGEVF_RESET_WAIT_US	20000
···
 		 hdev->rst_stats.rst_fail_cnt);
 
 	if (hdev->rst_stats.rst_fail_cnt < HCLGEVF_RESET_MAX_FAIL_CNT)
-		set_bit(hdev->reset_type, &hdev->reset_pending);
+		hclgevf_set_reset_pending(hdev, hdev->reset_type);
 
 	if (hclgevf_is_reset_pending(hdev)) {
 		set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
···
 		clear_bit(HNAE3_FLR_RESET, addr);
 	}
 
+	clear_bit(HNAE3_NONE_RESET, addr);
+
 	return rst_level;
 }
···
 	struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev);
 	struct hclgevf_dev *hdev = ae_dev->priv;
 
-	dev_info(&hdev->pdev->dev, "received reset request from VF enet\n");
-
 	if (hdev->default_reset_request)
 		hdev->reset_level =
 			hclgevf_get_reset_level(&hdev->default_reset_request);
 	else
 		hdev->reset_level = HNAE3_VF_FUNC_RESET;
+
+	dev_info(&hdev->pdev->dev, "received reset request from VF enet, reset level is %d\n",
+		 hdev->reset_level);
 
 	/* reset of this VF requested */
 	set_bit(HCLGEVF_RESET_REQUESTED, &hdev->reset_state);
···
 static void hclgevf_set_def_reset_request(struct hnae3_ae_dev *ae_dev,
 					  enum hnae3_reset_type rst_type)
 {
+#define HCLGEVF_SUPPORT_RESET_TYPE \
+	(BIT(HNAE3_VF_RESET) | BIT(HNAE3_VF_FUNC_RESET) | \
+	 BIT(HNAE3_VF_PF_FUNC_RESET) | BIT(HNAE3_VF_FULL_RESET) | \
+	 BIT(HNAE3_FLR_RESET) | BIT(HNAE3_VF_EXP_RESET))
+
 	struct hclgevf_dev *hdev = ae_dev->priv;
 
+	if (!(BIT(rst_type) & HCLGEVF_SUPPORT_RESET_TYPE)) {
+		/* To prevent reset triggered by hclge_reset_event */
+		set_bit(HNAE3_NONE_RESET, &hdev->default_reset_request);
+		dev_info(&hdev->pdev->dev, "unsupported reset type %d\n",
+			 rst_type);
+		return;
+	}
 	set_bit(rst_type, &hdev->default_reset_request);
 }
···
 	 */
 	if (hdev->reset_attempts > HCLGEVF_MAX_RESET_ATTEMPTS_CNT) {
 		/* prepare for full reset of stack + pcie interface */
-		set_bit(HNAE3_VF_FULL_RESET, &hdev->reset_pending);
+		hclgevf_set_reset_pending(hdev, HNAE3_VF_FULL_RESET);
 
 		/* "defer" schedule the reset task again */
 		set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
 	} else {
 		hdev->reset_attempts++;
 
-		set_bit(hdev->reset_level, &hdev->reset_pending);
+		hclgevf_set_reset_pending(hdev, hdev->reset_level);
 		set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
 	}
 	hclgevf_reset_task_schedule(hdev);
···
 		rst_ing_reg = hclgevf_read_dev(&hdev->hw, HCLGEVF_RST_ING);
 		dev_info(&hdev->pdev->dev,
 			 "receive reset interrupt 0x%x!\n", rst_ing_reg);
-		set_bit(HNAE3_VF_RESET, &hdev->reset_pending);
+		hclgevf_set_reset_pending(hdev, HNAE3_VF_RESET);
 		set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
 		set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state);
 		*clearval = ~(1U << HCLGEVF_VECTOR0_RST_INT_B);
···
 	clear_bit(HCLGEVF_STATE_RST_FAIL, &hdev->state);
 
 	INIT_DELAYED_WORK(&hdev->service_task, hclgevf_service_task);
+	/* timer needs to be initialized before misc irq */
+	timer_setup(&hdev->reset_timer, hclgevf_reset_timer, 0);
 
 	mutex_init(&hdev->mbx_resp.mbx_mutex);
 	sema_init(&hdev->reset_sem, 1);
···
 		 HCLGEVF_DRIVER_NAME);
 
 	hclgevf_task_schedule(hdev, round_jiffies_relative(HZ));
-	timer_setup(&hdev->reset_timer, hclgevf_reset_timer, 0);
 
 	return 0;
 
+5 -4
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.c
···
 void hclgevf_get_regs(struct hnae3_handle *handle, u32 *version,
 		      void *data)
 {
-#define HCLGEVF_RING_REG_OFFSET		0x200
 #define HCLGEVF_RING_INT_REG_OFFSET	0x4
 
 	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
+	struct hnae3_queue *tqp;
 	int i, j, reg_um;
 	u32 *reg = data;
···
 	reg_um = ARRAY_SIZE(ring_reg_addr_list);
 	for (j = 0; j < hdev->num_tqps; j++) {
 		reg += hclgevf_reg_get_tlv(HCLGEVF_REG_TAG_RING, reg_um, reg);
+		tqp = &hdev->htqp[j].q;
 		for (i = 0; i < reg_um; i++)
-			*reg++ = hclgevf_read_dev(&hdev->hw,
-						  ring_reg_addr_list[i] +
-						  HCLGEVF_RING_REG_OFFSET * j);
+			*reg++ = readl_relaxed(tqp->io_base -
+					       HCLGEVF_TQP_REG_OFFSET +
+					       ring_reg_addr_list[i]);
 	}
 
 	reg_um = ARRAY_SIZE(tqp_intr_reg_addr_list);
+3
drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
···
 #define ICE_AQC_PORT_OPT_MAX_LANE_25G	5
 #define ICE_AQC_PORT_OPT_MAX_LANE_50G	6
 #define ICE_AQC_PORT_OPT_MAX_LANE_100G	7
+#define ICE_AQC_PORT_OPT_MAX_LANE_200G	8
 
 	u8 global_scid[2];
 	u8 phy_scid[2];
···
 	__le32 count;
 	struct ice_aqc_get_pkg_info pkg_info[];
 };
+
+#define ICE_AQC_GET_CGU_MAX_PHASE_ADJ	GENMASK(30, 0)
 
 /* Get CGU abilities command response data structure (indirect 0x0C61) */
 struct ice_aqc_get_cgu_abilities {
+51
drivers/net/ethernet/intel/ice/ice_common.c
···
 }
 
 /**
+ * ice_get_phy_lane_number - Get PHY lane number for current adapter
+ * @hw: pointer to the hw struct
+ *
+ * Return: PHY lane number on success, negative error code otherwise.
+ */
+int ice_get_phy_lane_number(struct ice_hw *hw)
+{
+	struct ice_aqc_get_port_options_elem *options;
+	unsigned int lport = 0;
+	unsigned int lane;
+	int err;
+
+	options = kcalloc(ICE_AQC_PORT_OPT_MAX, sizeof(*options), GFP_KERNEL);
+	if (!options)
+		return -ENOMEM;
+
+	for (lane = 0; lane < ICE_MAX_PORT_PER_PCI_DEV; lane++) {
+		u8 options_count = ICE_AQC_PORT_OPT_MAX;
+		u8 speed, active_idx, pending_idx;
+		bool active_valid, pending_valid;
+
+		err = ice_aq_get_port_options(hw, options, &options_count, lane,
+					      true, &active_idx, &active_valid,
+					      &pending_idx, &pending_valid);
+		if (err)
+			goto err;
+
+		if (!active_valid)
+			continue;
+
+		speed = options[active_idx].max_lane_speed;
+		/* If we don't get speed for this lane, it's unoccupied */
+		if (speed > ICE_AQC_PORT_OPT_MAX_LANE_200G)
+			continue;
+
+		if (hw->pf_id == lport) {
+			kfree(options);
+			return lane;
+		}
+
+		lport++;
+	}
+
+	/* PHY lane not found */
+	err = -ENXIO;
+err:
+	kfree(options);
+	return err;
+}
+
+/**
 * ice_aq_sff_eeprom
 * @hw: pointer to the HW struct
 * @lport: bits [7:0] = logical port, bit [8] = logical port valid
+1
drivers/net/ethernet/intel/ice/ice_common.h
···
193 193 int
194 194 ice_aq_set_port_option(struct ice_hw *hw, u8 lport, u8 lport_valid,
195 195 u8 new_option);
196 + int ice_get_phy_lane_number(struct ice_hw *hw);
196 197 int
197 198 ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr,
198 199 u16 mem_addr, u8 page, u8 set_page, u8 *data, u8 length,
+23 -12
drivers/net/ethernet/intel/ice/ice_dpll.c
···
2065 2065 }
2066 2066 
2067 2067 /**
2068 + * ice_dpll_phase_range_set - initialize phase adjust range helper
2069 + * @range: pointer to phase adjust range struct to be initialized
2070 + * @phase_adj: a value to be used as min(-)/max(+) boundary
2071 + */
2072 + static void ice_dpll_phase_range_set(struct dpll_pin_phase_adjust_range *range,
2073 + u32 phase_adj)
2074 + {
2075 + range->min = -phase_adj;
2076 + range->max = phase_adj;
2077 + }
2078 + 
2079 + /**
2068 2080 * ice_dpll_init_info_pins_generic - initializes generic pins info
2069 2081 * @pf: board private structure
2070 2082 * @input: if input pins initialized
···
2117 2105 for (i = 0; i < pin_num; i++) {
2118 2106 pins[i].idx = i;
2119 2107 pins[i].prop.board_label = labels[i];
2120 - pins[i].prop.phase_range.min = phase_adj_max;
2121 - pins[i].prop.phase_range.max = -phase_adj_max;
2108 + ice_dpll_phase_range_set(&pins[i].prop.phase_range,
2109 + phase_adj_max);
2122 2110 pins[i].prop.capabilities = cap;
2123 2111 pins[i].pf = pf;
2124 2112 ret = ice_dpll_pin_state_update(pf, &pins[i], pin_type, NULL);
···
2164 2152 struct ice_hw *hw = &pf->hw;
2165 2153 struct ice_dpll_pin *pins;
2166 2154 unsigned long caps;
2155 + u32 phase_adj_max;
2167 2156 u8 freq_supp_num;
2168 2157 bool input;
2169 2158 
···
2172 2159 case ICE_DPLL_PIN_TYPE_INPUT:
2173 2160 pins = pf->dplls.inputs;
2174 2161 num_pins = pf->dplls.num_inputs;
2162 + phase_adj_max = pf->dplls.input_phase_adj_max;
2175 2163 input = true;
2176 2164 break;
2177 2165 case ICE_DPLL_PIN_TYPE_OUTPUT:
2178 2166 pins = pf->dplls.outputs;
2179 2167 num_pins = pf->dplls.num_outputs;
2168 + phase_adj_max = pf->dplls.output_phase_adj_max;
2180 2169 input = false;
2181 2170 break;
2182 2171 default:
···
2203 2188 return ret;
2204 2189 caps |= (DPLL_PIN_CAPABILITIES_PRIORITY_CAN_CHANGE |
2205 2190 DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE);
2206 - pins[i].prop.phase_range.min =
2207 - pf->dplls.input_phase_adj_max;
2208 - pins[i].prop.phase_range.max =
2209 - -pf->dplls.input_phase_adj_max;
2210 2191 } else {
2211 - pins[i].prop.phase_range.min =
2212 - pf->dplls.output_phase_adj_max;
2213 - pins[i].prop.phase_range.max =
2214 - -pf->dplls.output_phase_adj_max;
2215 2192 ret = ice_cgu_get_output_pin_state_caps(hw, i, &caps);
2216 2193 if (ret)
2217 2194 return ret;
2218 2195 }
2196 + ice_dpll_phase_range_set(&pins[i].prop.phase_range,
2197 + phase_adj_max);
2219 2198 pins[i].prop.capabilities = caps;
2220 2199 ret = ice_dpll_pin_state_update(pf, &pins[i], pin_type, NULL);
2221 2200 if (ret)
···
2317 2308 dp->dpll_idx = abilities.pps_dpll_idx;
2318 2309 d->num_inputs = abilities.num_inputs;
2319 2310 d->num_outputs = abilities.num_outputs;
2320 - d->input_phase_adj_max = le32_to_cpu(abilities.max_in_phase_adj);
2321 - d->output_phase_adj_max = le32_to_cpu(abilities.max_out_phase_adj);
2311 + d->input_phase_adj_max = le32_to_cpu(abilities.max_in_phase_adj) &
2312 + ICE_AQC_GET_CGU_MAX_PHASE_ADJ;
2313 + d->output_phase_adj_max = le32_to_cpu(abilities.max_out_phase_adj) &
2314 + ICE_AQC_GET_CGU_MAX_PHASE_ADJ;
2322 2315 
2323 2316 alloc_size = sizeof(*d->inputs) * d->num_inputs;
2324 2317 d->inputs = kzalloc(alloc_size, GFP_KERNEL);
+3 -3
drivers/net/ethernet/intel/ice/ice_main.c
···
1144 1144 if (link_up == old_link && link_speed == old_link_speed)
1145 1145 return 0;
1146 1146 
1147 - ice_ptp_link_change(pf, pf->hw.pf_id, link_up);
1147 + ice_ptp_link_change(pf, link_up);
1148 1148 
1149 1149 if (ice_is_dcb_active(pf)) {
1150 1150 if (test_bit(ICE_FLAG_DCB_ENA, pf->flags))
···
6790 6790 ice_print_link_msg(vsi, true);
6791 6791 netif_tx_start_all_queues(vsi->netdev);
6792 6792 netif_carrier_on(vsi->netdev);
6793 - ice_ptp_link_change(pf, pf->hw.pf_id, true);
6793 + ice_ptp_link_change(pf, true);
6794 6794 }
6795 6795 
6796 6796 /* Perform an initial read of the statistics registers now to
···
7260 7260 
7261 7261 if (vsi->netdev) {
7262 7262 vlan_err = ice_vsi_del_vlan_zero(vsi);
7263 - ice_ptp_link_change(vsi->back, vsi->back->hw.pf_id, false);
7263 + ice_ptp_link_change(vsi->back, false);
7264 7264 netif_carrier_off(vsi->netdev);
7265 7265 netif_tx_disable(vsi->netdev);
7266 7266 }
+9 -14
drivers/net/ethernet/intel/ice/ice_ptp.c
···
1388 1388 /**
1389 1389 * ice_ptp_link_change - Reconfigure PTP after link status change
1390 1390 * @pf: Board private structure
1391 - * @port: Port for which the PHY start is set
1392 1391 * @linkup: Link is up or down
1393 1392 */
1394 - void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
1393 + void ice_ptp_link_change(struct ice_pf *pf, bool linkup)
1395 1394 {
1396 1395 struct ice_ptp_port *ptp_port;
1397 1396 struct ice_hw *hw = &pf->hw;
···
1398 1399 if (pf->ptp.state != ICE_PTP_READY)
1399 1400 return;
1400 1401 
1401 - if (WARN_ON_ONCE(port >= hw->ptp.num_lports))
1402 - return;
1403 - 
1404 1402 ptp_port = &pf->ptp.port;
1405 - if (ice_is_e825c(hw) && hw->ptp.is_2x50g_muxed_topo)
1406 - port *= 2;
1407 - if (WARN_ON_ONCE(ptp_port->port_num != port))
1408 - return;
1409 1403 
1410 1404 /* Update cached link status for this port immediately */
1411 1405 ptp_port->link_up = linkup;
···
3156 3164 {
3157 3165 struct ice_ptp *ptp = &pf->ptp;
3158 3166 struct ice_hw *hw = &pf->hw;
3159 - int err;
3167 + int lane_num, err;
3160 3168 
3161 3169 ptp->state = ICE_PTP_INITIALIZING;
3162 3170 
3171 + lane_num = ice_get_phy_lane_number(hw);
3172 + if (lane_num < 0) {
3173 + err = lane_num;
3174 + goto err_exit;
3175 + }
3176 + 
3177 + ptp->port.port_num = (u8)lane_num;
3163 3178 ice_ptp_init_hw(hw);
3164 3179 
3165 3180 ice_ptp_init_tx_interrupt_mode(pf);
···
3186 3187 err = ice_ptp_setup_pf(pf);
3187 3188 if (err)
3188 3189 goto err_exit;
3189 - 
3190 - ptp->port.port_num = hw->pf_id;
3191 - if (ice_is_e825c(hw) && hw->ptp.is_2x50g_muxed_topo)
3192 - ptp->port.port_num = hw->pf_id * 2;
3193 3190 
3194 3191 err = ice_ptp_init_port(pf, &ptp->port);
3195 3192 if (err)
+2 -2
drivers/net/ethernet/intel/ice/ice_ptp.h
···
310 310 enum ice_reset_req reset_type);
311 311 void ice_ptp_init(struct ice_pf *pf);
312 312 void ice_ptp_release(struct ice_pf *pf);
313 - void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup);
313 + void ice_ptp_link_change(struct ice_pf *pf, bool linkup);
314 314 #else /* IS_ENABLED(CONFIG_PTP_1588_CLOCK) */
315 315 static inline int ice_ptp_set_ts_config(struct ice_pf *pf, struct ifreq *ifr)
316 316 {
···
358 358 }
359 359 static inline void ice_ptp_init(struct ice_pf *pf) { }
360 360 static inline void ice_ptp_release(struct ice_pf *pf) { }
361 - static inline void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
361 + static inline void ice_ptp_link_change(struct ice_pf *pf, bool linkup)
362 362 {
363 363 }
364 364 
+3 -3
drivers/net/ethernet/intel/ice/ice_ptp_consts.h
···
131 131 .rx_offset = {
132 132 .serdes = 0xffffeb27, /* -10.42424 */
133 133 .no_fec = 0xffffcccd, /* -25.6 */
134 - .fc = 0xfffe0014, /* -255.96 */
134 + .fc = 0xfffc557b, /* -469.26 */
135 135 .sfd = 0x4a4, /* 2.32 */
136 136 .bs_ds = 0x32 /* 0.0969697 */
137 137 }
···
761 761 /* rx_desk_rsgb_par */
762 762 644531250, /* 644.53125 MHz Reed Solomon gearbox */
763 763 /* tx_desk_rsgb_pcs */
764 - 644531250, /* 644.53125 MHz Reed Solomon gearbox */
764 + 390625000, /* 390.625 MHz Reed Solomon gearbox */
765 765 /* rx_desk_rsgb_pcs */
766 - 644531250, /* 644.53125 MHz Reed Solomon gearbox */
766 + 390625000, /* 390.625 MHz Reed Solomon gearbox */
767 767 /* tx_fixed_delay */
768 768 1620,
769 769 /* pmd_adj_divisor */
+141 -122
drivers/net/ethernet/intel/ice/ice_ptp_hw.c
···
901 901 */
902 902 
903 903 /**
904 + * ice_ptp_get_dest_dev_e825 - get destination PHY for given port number
905 + * @hw: pointer to the HW struct
906 + * @port: destination port
907 + *
908 + * Return: destination sideband queue PHY device.
909 + */
910 + static enum ice_sbq_msg_dev ice_ptp_get_dest_dev_e825(struct ice_hw *hw,
911 + u8 port)
912 + {
913 + /* On a single complex E825, PHY 0 is always destination device phy_0
914 + * and PHY 1 is phy_0_peer.
915 + */
916 + if (port >= hw->ptp.ports_per_phy)
917 + return eth56g_phy_1;
918 + else
919 + return eth56g_phy_0;
920 + }
921 + 
922 + /**
904 923 * ice_write_phy_eth56g - Write a PHY port register
905 924 * @hw: pointer to the HW struct
906 - * @phy_idx: PHY index
925 + * @port: destination port
907 926 * @addr: PHY register address
908 927 * @val: Value to write
909 928 *
910 929 * Return: 0 on success, other error codes when failed to write to PHY
911 930 */
912 - static int ice_write_phy_eth56g(struct ice_hw *hw, u8 phy_idx, u32 addr,
913 - u32 val)
931 + static int ice_write_phy_eth56g(struct ice_hw *hw, u8 port, u32 addr, u32 val)
914 932 {
915 - struct ice_sbq_msg_input phy_msg;
933 + struct ice_sbq_msg_input msg = {
934 + .dest_dev = ice_ptp_get_dest_dev_e825(hw, port),
935 + .opcode = ice_sbq_msg_wr,
936 + .msg_addr_low = lower_16_bits(addr),
937 + .msg_addr_high = upper_16_bits(addr),
938 + .data = val
939 + };
916 940 int err;
917 941 
918 - phy_msg.opcode = ice_sbq_msg_wr;
919 - 
920 - phy_msg.msg_addr_low = lower_16_bits(addr);
921 - phy_msg.msg_addr_high = upper_16_bits(addr);
922 - 
923 - phy_msg.data = val;
924 - phy_msg.dest_dev = hw->ptp.phy.eth56g.phy_addr[phy_idx];
925 - 
926 - err = ice_sbq_rw_reg(hw, &phy_msg, ICE_AQ_FLAG_RD);
927 - 
942 + err = ice_sbq_rw_reg(hw, &msg, ICE_AQ_FLAG_RD);
928 943 if (err)
929 944 ice_debug(hw, ICE_DBG_PTP, "PTP failed to send msg to phy %d\n",
930 945 err);
···
950 935 /**
951 936 * ice_read_phy_eth56g - Read a PHY port register
952 937 * @hw: pointer to the HW struct
953 - * @phy_idx: PHY index
938 + * @port: destination port
954 939 * @addr: PHY register address
955 940 * @val: Value to write
956 941 *
957 942 * Return: 0 on success, other error codes when failed to read from PHY
958 943 */
959 - static int ice_read_phy_eth56g(struct ice_hw *hw, u8 phy_idx, u32 addr,
960 - u32 *val)
944 + static int ice_read_phy_eth56g(struct ice_hw *hw, u8 port, u32 addr, u32 *val)
961 945 {
962 - struct ice_sbq_msg_input phy_msg;
946 + struct ice_sbq_msg_input msg = {
947 + .dest_dev = ice_ptp_get_dest_dev_e825(hw, port),
948 + .opcode = ice_sbq_msg_rd,
949 + .msg_addr_low = lower_16_bits(addr),
950 + .msg_addr_high = upper_16_bits(addr)
951 + };
963 952 int err;
964 953 
965 - phy_msg.opcode = ice_sbq_msg_rd;
966 - 
967 - phy_msg.msg_addr_low = lower_16_bits(addr);
968 - phy_msg.msg_addr_high = upper_16_bits(addr);
969 - 
970 - phy_msg.data = 0;
971 - phy_msg.dest_dev = hw->ptp.phy.eth56g.phy_addr[phy_idx];
972 - 
973 - err = ice_sbq_rw_reg(hw, &phy_msg, ICE_AQ_FLAG_RD);
974 - if (err) {
954 + err = ice_sbq_rw_reg(hw, &msg, ICE_AQ_FLAG_RD);
955 + if (err)
975 956 ice_debug(hw, ICE_DBG_PTP, "PTP failed to send msg to phy %d\n",
976 957 err);
977 - return err;
978 - }
958 + else
959 + *val = msg.data;
979 960 
980 - *val = phy_msg.data;
981 - 
982 - return 0;
961 + return err;
983 962 }
984 963 
985 964 /**
986 965 * ice_phy_res_address_eth56g - Calculate a PHY port register address
987 - * @port: Port number to be written
966 + * @hw: pointer to the HW struct
967 + * @lane: Lane number to be written
988 968 * @res_type: resource type (register/memory)
989 969 * @offset: Offset from PHY port register base
990 970 * @addr: The result address
···
988 978 * * %0 - success
989 979 * * %EINVAL - invalid port number or resource type
990 980 */
991 - static int ice_phy_res_address_eth56g(u8 port, enum eth56g_res_type res_type,
992 - u32 offset, u32 *addr)
981 + static int ice_phy_res_address_eth56g(struct ice_hw *hw, u8 lane,
982 + enum eth56g_res_type res_type,
983 + u32 offset,
984 + u32 *addr)
993 985 {
994 - u8 lane = port % ICE_PORTS_PER_QUAD;
995 - u8 phy = ICE_GET_QUAD_NUM(port);
996 - 
997 986 if (res_type >= NUM_ETH56G_PHY_RES)
998 987 return -EINVAL;
999 988 
1000 - *addr = eth56g_phy_res[res_type].base[phy] +
989 + /* Lanes 4..7 are in fact 0..3 on a second PHY */
990 + lane %= hw->ptp.ports_per_phy;
991 + *addr = eth56g_phy_res[res_type].base[0] +
1001 992 lane * eth56g_phy_res[res_type].step + offset;
993 + 
1002 994 return 0;
1003 995 }
···
1020 1008 static int ice_write_port_eth56g(struct ice_hw *hw, u8 port, u32 offset,
1021 1009 u32 val, enum eth56g_res_type res_type)
1022 1010 {
1023 - u8 phy_port = port % hw->ptp.ports_per_phy;
1024 - u8 phy_idx = port / hw->ptp.ports_per_phy;
1025 1011 u32 addr;
1026 1012 int err;
1027 1013 
1028 1014 if (port >= hw->ptp.num_lports)
1029 1015 return -EINVAL;
1030 1016 
1031 - err = ice_phy_res_address_eth56g(phy_port, res_type, offset, &addr);
1017 + err = ice_phy_res_address_eth56g(hw, port, res_type, offset, &addr);
1032 1018 if (err)
1033 1019 return err;
1034 1020 
1035 - return ice_write_phy_eth56g(hw, phy_idx, addr, val);
1021 + return ice_write_phy_eth56g(hw, port, addr, val);
1036 1022 }
1037 1023 
1038 1024 /**
···
1049 1039 static int ice_read_port_eth56g(struct ice_hw *hw, u8 port, u32 offset,
1050 1040 u32 *val, enum eth56g_res_type res_type)
1051 1041 {
1052 - u8 phy_port = port % hw->ptp.ports_per_phy;
1053 - u8 phy_idx = port / hw->ptp.ports_per_phy;
1054 1042 u32 addr;
1055 1043 int err;
1056 1044 
1057 1045 if (port >= hw->ptp.num_lports)
1058 1046 return -EINVAL;
1059 1047 
1060 - err = ice_phy_res_address_eth56g(phy_port, res_type, offset, &addr);
1048 + err = ice_phy_res_address_eth56g(hw, port, res_type, offset, &addr);
1061 1049 if (err)
1062 1050 return err;
1063 1051 
1064 - return ice_read_phy_eth56g(hw, phy_idx, addr, val);
1052 + return ice_read_phy_eth56g(hw, port, addr, val);
1065 1053 }
1066 1054 
1067 1055 /**
···
1206 1198 u32 val)
1207 1199 {
1208 1200 return ice_write_port_eth56g(hw, port, offset, val, ETH56G_PHY_MEM_PTP);
1201 + }
1202 + 
1203 + /**
1204 + * ice_write_quad_ptp_reg_eth56g - Write a PHY quad register
1205 + * @hw: pointer to the HW struct
1206 + * @offset: PHY register offset
1207 + * @port: Port number
1208 + * @val: Value to write
1209 + *
1210 + * Return:
1211 + * * %0 - success
1212 + * * %EIO - invalid port number or resource type
1213 + * * %other - failed to write to PHY
1214 + */
1215 + static int ice_write_quad_ptp_reg_eth56g(struct ice_hw *hw, u8 port,
1216 + u32 offset, u32 val)
1217 + {
1218 + u32 addr;
1219 + 
1220 + if (port >= hw->ptp.num_lports)
1221 + return -EIO;
1222 + 
1223 + addr = eth56g_phy_res[ETH56G_PHY_REG_PTP].base[0] + offset;
1224 + 
1225 + return ice_write_phy_eth56g(hw, port, addr, val);
1226 + }
1227 + 
1228 + /**
1229 + * ice_read_quad_ptp_reg_eth56g - Read a PHY quad register
1230 + * @hw: pointer to the HW struct
1231 + * @offset: PHY register offset
1232 + * @port: Port number
1233 + * @val: Value to read
1234 + *
1235 + * Return:
1236 + * * %0 - success
1237 + * * %EIO - invalid port number or resource type
1238 + * * %other - failed to read from PHY
1239 + */
1240 + static int ice_read_quad_ptp_reg_eth56g(struct ice_hw *hw, u8 port,
1241 + u32 offset, u32 *val)
1242 + {
1243 + u32 addr;
1244 + 
1245 + if (port >= hw->ptp.num_lports)
1246 + return -EIO;
1247 + 
1248 + addr = eth56g_phy_res[ETH56G_PHY_REG_PTP].base[0] + offset;
1249 + 
1250 + return ice_read_phy_eth56g(hw, port, addr, val);
1209 1251 }
1210 1252 
1211 1253 /**
···
1977 1919 */
1978 1920 static int ice_phy_cfg_parpcs_eth56g(struct ice_hw *hw, u8 port)
1979 1921 {
1980 - u8 port_blk = port & ~(ICE_PORTS_PER_QUAD - 1);
1981 1922 u32 val;
1982 1923 int err;
1983 1924 
···
1991 1934 switch (ice_phy_get_speed_eth56g(&hw->port_info->phy.link_info)) {
1992 1935 case ICE_ETH56G_LNK_SPD_1G:
1993 1936 case ICE_ETH56G_LNK_SPD_2_5G:
1994 - err = ice_read_ptp_reg_eth56g(hw, port_blk,
1995 - PHY_GPCS_CONFIG_REG0, &val);
1937 + err = ice_read_quad_ptp_reg_eth56g(hw, port,
1938 + PHY_GPCS_CONFIG_REG0, &val);
1996 1939 if (err) {
1997 1940 ice_debug(hw, ICE_DBG_PTP, "Failed to read PHY_GPCS_CONFIG_REG0, status: %d",
1998 1941 err);
···
2003 1946 val |= FIELD_PREP(PHY_GPCS_CONFIG_REG0_TX_THR_M,
2004 1947 ICE_ETH56G_NOMINAL_TX_THRESH);
2005 1948 
2006 - err = ice_write_ptp_reg_eth56g(hw, port_blk,
2007 - PHY_GPCS_CONFIG_REG0, val);
1949 + err = ice_write_quad_ptp_reg_eth56g(hw, port,
1950 + PHY_GPCS_CONFIG_REG0, val);
2008 1951 if (err) {
2009 1952 ice_debug(hw, ICE_DBG_PTP, "Failed to write PHY_GPCS_CONFIG_REG0, status: %d",
2010 1953 err);
···
2045 1988 */
2046 1989 int ice_phy_cfg_ptp_1step_eth56g(struct ice_hw *hw, u8 port)
2047 1990 {
2048 - u8 port_blk = port & ~(ICE_PORTS_PER_QUAD - 1);
2049 - u8 blk_port = port & (ICE_PORTS_PER_QUAD - 1);
1991 + u8 quad_lane = port % ICE_PORTS_PER_QUAD;
1992 + u32 addr, val, peer_delay;
2050 1993 bool enable, sfd_ena;
2051 - u32 val, peer_delay;
2052 1994 int err;
2053 1995 
2054 1996 enable = hw->ptp.phy.eth56g.onestep_ena;
2055 1997 peer_delay = hw->ptp.phy.eth56g.peer_delay;
2056 1998 sfd_ena = hw->ptp.phy.eth56g.sfd_ena;
2057 1999 
2058 - /* PHY_PTP_1STEP_CONFIG */
2059 - err = ice_read_ptp_reg_eth56g(hw, port_blk, PHY_PTP_1STEP_CONFIG, &val);
2000 + addr = PHY_PTP_1STEP_CONFIG;
2001 + err = ice_read_quad_ptp_reg_eth56g(hw, port, addr, &val);
2060 2002 if (err)
2061 2003 return err;
2062 2004 
2063 2005 if (enable)
2064 - val |= blk_port;
2006 + val |= BIT(quad_lane);
2065 2007 else
2066 - val &= ~blk_port;
2008 + val &= ~BIT(quad_lane);
2067 2009 
2068 2010 val &= ~(PHY_PTP_1STEP_T1S_UP64_M | PHY_PTP_1STEP_T1S_DELTA_M);
2069 2011 
2070 - err = ice_write_ptp_reg_eth56g(hw, port_blk, PHY_PTP_1STEP_CONFIG, val);
2012 + err = ice_write_quad_ptp_reg_eth56g(hw, port, addr, val);
2071 2013 if (err)
2072 2014 return err;
2073 2015 
2074 - /* PHY_PTP_1STEP_PEER_DELAY */
2016 + addr = PHY_PTP_1STEP_PEER_DELAY(quad_lane);
2075 2017 val = FIELD_PREP(PHY_PTP_1STEP_PD_DELAY_M, peer_delay);
2076 2018 if (peer_delay)
2077 2019 val |= PHY_PTP_1STEP_PD_ADD_PD_M;
2078 2020 val |= PHY_PTP_1STEP_PD_DLY_V_M;
2079 - err = ice_write_ptp_reg_eth56g(hw, port_blk,
2080 - PHY_PTP_1STEP_PEER_DELAY(blk_port), val);
2021 + err = ice_write_quad_ptp_reg_eth56g(hw, port, addr, val);
2081 2022 if (err)
2082 2023 return err;
2083 2024 
2084 2025 val &= ~PHY_PTP_1STEP_PD_DLY_V_M;
2085 - err = ice_write_ptp_reg_eth56g(hw, port_blk,
2086 - PHY_PTP_1STEP_PEER_DELAY(blk_port), val);
2026 + err = ice_write_quad_ptp_reg_eth56g(hw, port, addr, val);
2087 2027 if (err)
2088 2028 return err;
2089 2029 
2090 - /* PHY_MAC_XIF_MODE */
2091 - err = ice_read_mac_reg_eth56g(hw, port, PHY_MAC_XIF_MODE, &val);
2030 + addr = PHY_MAC_XIF_MODE;
2031 + err = ice_read_mac_reg_eth56g(hw, port, addr, &val);
2092 2032 if (err)
2093 2033 return err;
2094 2034 
···
2105 2051 FIELD_PREP(PHY_MAC_XIF_TS_BIN_MODE_M, enable) |
2106 2052 FIELD_PREP(PHY_MAC_XIF_TS_SFD_ENA_M, sfd_ena);
2107 2053 
2108 - return ice_write_mac_reg_eth56g(hw, port, PHY_MAC_XIF_MODE, val);
2054 + return ice_write_mac_reg_eth56g(hw, port, addr, val);
2109 2055 }
2110 2056 
2111 2057 /**
···
2147 2093 bool fc, bool rs,
2148 2094 enum ice_eth56g_link_spd spd)
2149 2095 {
2150 - u8 port_offset = port & (ICE_PORTS_PER_QUAD - 1);
2151 - u8 port_blk = port & ~(ICE_PORTS_PER_QUAD - 1);
2152 2096 u32 bitslip;
2153 2097 int err;
2154 2098 
2155 2099 if (!bs || rs)
2156 2100 return 0;
2157 2101 
2158 - if (spd == ICE_ETH56G_LNK_SPD_1G || spd == ICE_ETH56G_LNK_SPD_2_5G)
2102 + if (spd == ICE_ETH56G_LNK_SPD_1G || spd == ICE_ETH56G_LNK_SPD_2_5G) {
2159 2103 err = ice_read_gpcs_reg_eth56g(hw, port, PHY_GPCS_BITSLIP,
2160 2104 &bitslip);
2161 - else
2162 - err = ice_read_ptp_reg_eth56g(hw, port_blk,
2163 - PHY_REG_SD_BIT_SLIP(port_offset),
2164 - &bitslip);
2105 + } else {
2106 + u8 quad_lane = port % ICE_PORTS_PER_QUAD;
2107 + u32 addr;
2108 + 
2109 + addr = PHY_REG_SD_BIT_SLIP(quad_lane);
2110 + err = ice_read_quad_ptp_reg_eth56g(hw, port, addr, &bitslip);
2111 + }
2165 2112 if (err)
2166 2113 return 0;
2167 2114 
···
2722 2667 }
2723 2668 
2724 2669 /**
2725 - * ice_is_muxed_topo - detect breakout 2x50G topology for E825C
2726 - * @hw: pointer to the HW struct
2727 - *
2728 - * Return: true if it's 2x50 breakout topology, false otherwise
2729 - */
2730 - static bool ice_is_muxed_topo(struct ice_hw *hw)
2731 - {
2732 - u8 link_topo;
2733 - bool mux;
2734 - u32 val;
2735 - 
2736 - val = rd32(hw, GLGEN_SWITCH_MODE_CONFIG);
2737 - mux = FIELD_GET(GLGEN_SWITCH_MODE_CONFIG_25X4_QUAD_M, val);
2738 - val = rd32(hw, GLGEN_MAC_LINK_TOPO);
2739 - link_topo = FIELD_GET(GLGEN_MAC_LINK_TOPO_LINK_TOPO_M, val);
2740 - 
2741 - return (mux && link_topo == ICE_LINK_TOPO_UP_TO_2_LINKS);
2742 - }
2743 - 
2744 - /**
2745 - * ice_ptp_init_phy_e825c - initialize PHY parameters
2670 + * ice_ptp_init_phy_e825 - initialize PHY parameters
2746 2671 * @hw: pointer to the HW struct
2747 2672 */
2748 - static void ice_ptp_init_phy_e825c(struct ice_hw *hw)
2673 + static void ice_ptp_init_phy_e825(struct ice_hw *hw)
2749 2674 {
2750 2675 struct ice_ptp_hw *ptp = &hw->ptp;
2751 2676 struct ice_eth56g_params *params;
2752 - u8 phy;
2677 + u32 phy_rev;
2678 + int err;
2753 2679 
2754 2680 ptp->phy_model = ICE_PHY_ETH56G;
2755 2681 params = &ptp->phy.eth56g;
2756 2682 params->onestep_ena = false;
2757 2683 params->peer_delay = 0;
2758 2684 params->sfd_ena = false;
2759 - params->phy_addr[0] = eth56g_phy_0;
2760 - params->phy_addr[1] = eth56g_phy_1;
2761 2685 params->num_phys = 2;
2762 2686 ptp->ports_per_phy = 4;
2763 2687 ptp->num_lports = params->num_phys * ptp->ports_per_phy;
2764 2688 
2765 2689 ice_sb_access_ena_eth56g(hw, true);
2766 - for (phy = 0; phy < params->num_phys; phy++) {
2767 - u32 phy_rev;
2768 - int err;
2769 - 
2770 - err = ice_read_phy_eth56g(hw, phy, PHY_REG_REVISION, &phy_rev);
2771 - if (err || phy_rev != PHY_REVISION_ETH56G) {
2772 - ptp->phy_model = ICE_PHY_UNSUP;
2773 - return;
2774 - }
2775 - }
2776 - 
2777 - ptp->is_2x50g_muxed_topo = ice_is_muxed_topo(hw);
2690 + err = ice_read_phy_eth56g(hw, hw->pf_id, PHY_REG_REVISION, &phy_rev);
2691 + if (err || phy_rev != PHY_REVISION_ETH56G)
2692 + ptp->phy_model = ICE_PHY_UNSUP;
2778 2693 }
2779 2694 
2780 2695 /* E822 family functions
···
2763 2738 struct ice_sbq_msg_input *msg, u8 port,
2764 2739 u16 offset)
2765 2740 {
2766 - int phy_port, phy, quadtype;
2741 + int phy_port, quadtype;
2767 2742 
2768 2743 phy_port = port % hw->ptp.ports_per_phy;
2769 - phy = port / hw->ptp.ports_per_phy;
2770 2744 quadtype = ICE_GET_QUAD_NUM(port) %
2771 2745 ICE_GET_QUAD_NUM(hw->ptp.ports_per_phy);
2772 2746 
···
2777 2753 msg->msg_addr_high = P_Q1_H(P_4_BASE + offset, phy_port);
2778 2754 }
2779 2755 
2780 - if (phy == 0)
2781 - msg->dest_dev = rmn_0;
2782 - else if (phy == 1)
2783 - msg->dest_dev = rmn_1;
2784 - else
2785 - msg->dest_dev = rmn_2;
2756 + msg->dest_dev = rmn_0;
2786 2757 }
2787 2758 
2788 2759 /**
···
5497 5478 else if (ice_is_e810(hw))
5498 5479 ice_ptp_init_phy_e810(ptp);
5499 5480 else if (ice_is_e825c(hw))
5500 - ice_ptp_init_phy_e825c(hw);
5481 + ice_ptp_init_phy_e825(hw);
5501 5482 else
5502 5483 ptp->phy_model = ICE_PHY_UNSUP;
5503 5484 }
-2
drivers/net/ethernet/intel/ice/ice_type.h
···
850 850 
851 851 struct ice_eth56g_params {
852 852 u8 num_phys;
853 - u8 phy_addr[2];
854 853 bool onestep_ena;
855 854 bool sfd_ena;
856 855 u32 peer_delay;
···
880 881 union ice_phy_params phy;
881 882 u8 num_lports;
882 883 u8 ports_per_phy;
883 - bool is_2x50g_muxed_topo;
884 884 };
885 885 
886 886 /* Port hardware description */
+6
drivers/net/ethernet/intel/igc/igc_base.c
···
68 68 u32 eecd = rd32(IGC_EECD);
69 69 u16 size;
70 70 
71 + /* failed to read reg and got all F's */
72 + if (!(~eecd))
73 + return -ENXIO;
74 + 
71 75 size = FIELD_GET(IGC_EECD_SIZE_EX_MASK, eecd);
72 76 
73 77 /* Added to a constant, "size" becomes the left-shift value
···
225 221 
226 222 /* NVM initialization */
227 223 ret_val = igc_init_nvm_params_base(hw);
224 + if (ret_val)
225 + goto out;
228 226 switch (hw->mac.type) {
229 227 case igc_i225:
230 228 ret_val = igc_init_nvm_params_i225(hw);
+1
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
···
1013 1013 complete(&ent->done);
1014 1014 }
1015 1015 up(&cmd->vars.sem);
1016 + complete(&ent->slotted);
1016 1017 return;
1017 1018 }
1018 1019 } else {
+12 -10
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
···
724 724 /* check esn */
725 725 if (x->props.flags & XFRM_STATE_ESN)
726 726 mlx5e_ipsec_update_esn_state(sa_entry);
727 + else
728 + /* According to RFC4303, section "3.3.3. Sequence Number Generation",
729 + * the first packet sent using a given SA will contain a sequence
730 + * number of 1.
731 + */
732 + sa_entry->esn_state.esn = 1;
727 733 
728 734 mlx5e_ipsec_build_accel_xfrm_attrs(sa_entry, &sa_entry->attrs);
729 735 
···
774 768 MLX5_IPSEC_RESCHED);
775 769 
776 770 if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET &&
777 - x->props.mode == XFRM_MODE_TUNNEL)
778 - xa_set_mark(&ipsec->sadb, sa_entry->ipsec_obj_id,
779 - MLX5E_IPSEC_TUNNEL_SA);
771 + x->props.mode == XFRM_MODE_TUNNEL) {
772 + xa_lock_bh(&ipsec->sadb);
773 + __xa_set_mark(&ipsec->sadb, sa_entry->ipsec_obj_id,
774 + MLX5E_IPSEC_TUNNEL_SA);
775 + xa_unlock_bh(&ipsec->sadb);
776 + }
780 777 
781 778 out:
782 779 x->xso.offload_handle = (unsigned long)sa_entry;
···
806 797 static void mlx5e_xfrm_del_state(struct xfrm_state *x)
807 798 {
808 799 struct mlx5e_ipsec_sa_entry *sa_entry = to_ipsec_sa_entry(x);
809 - struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs;
810 800 struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
811 801 struct mlx5e_ipsec_sa_entry *old;
812 802 
···
814 806 
815 807 old = xa_erase_bh(&ipsec->sadb, sa_entry->ipsec_obj_id);
816 808 WARN_ON(old != sa_entry);
817 - 
818 - if (attrs->mode == XFRM_MODE_TUNNEL &&
819 - attrs->type == XFRM_DEV_OFFLOAD_PACKET)
820 - /* Make sure that no ARP requests are running in parallel */
821 - flush_workqueue(ipsec->wq);
822 - 
823 809 }
824 810 
825 811 static void mlx5e_xfrm_free_state(struct xfrm_state *x)
+5 -7
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
···
1718 1718 goto err_alloc;
1719 1719 }
1720 1720 
1721 - if (attrs->family == AF_INET)
1722 - setup_fte_addr4(spec, &attrs->saddr.a4, &attrs->daddr.a4);
1723 - else
1724 - setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6);
1725 - 
1726 1721 setup_fte_no_frags(spec);
1727 1722 setup_fte_upper_proto_match(spec, &attrs->upspec);
1728 1723 
1729 1724 switch (attrs->type) {
1730 1725 case XFRM_DEV_OFFLOAD_CRYPTO:
1726 + if (attrs->family == AF_INET)
1727 + setup_fte_addr4(spec, &attrs->saddr.a4, &attrs->daddr.a4);
1728 + else
1729 + setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6);
1731 1730 setup_fte_spi(spec, attrs->spi, false);
1732 1731 setup_fte_esp(spec);
1733 1732 setup_fte_reg_a(spec);
1734 1733 break;
1735 1734 case XFRM_DEV_OFFLOAD_PACKET:
1736 - if (attrs->reqid)
1737 - setup_fte_reg_c4(spec, attrs->reqid);
1735 + setup_fte_reg_c4(spec, attrs->reqid);
1738 1736 err = setup_pkt_reformat(ipsec, attrs, &flow_act);
1739 1737 if (err)
1740 1738 goto err_pkt_reformat;
+8 -3
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
···
91 91 EXPORT_SYMBOL_GPL(mlx5_ipsec_device_caps);
92 92 
93 93 static void mlx5e_ipsec_packet_setup(void *obj, u32 pdn,
94 - struct mlx5_accel_esp_xfrm_attrs *attrs)
94 + struct mlx5e_ipsec_sa_entry *sa_entry)
95 95 {
96 + struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs;
96 97 void *aso_ctx;
97 98 
98 99 aso_ctx = MLX5_ADDR_OF(ipsec_obj, obj, ipsec_aso);
···
121 120 * active.
122 121 */
123 122 MLX5_SET(ipsec_obj, obj, aso_return_reg, MLX5_IPSEC_ASO_REG_C_4_5);
124 - if (attrs->dir == XFRM_DEV_OFFLOAD_OUT)
123 + if (attrs->dir == XFRM_DEV_OFFLOAD_OUT) {
125 124 MLX5_SET(ipsec_aso, aso_ctx, mode, MLX5_IPSEC_ASO_INC_SN);
125 + if (!attrs->replay_esn.trigger)
126 + MLX5_SET(ipsec_aso, aso_ctx, mode_parameter,
127 + sa_entry->esn_state.esn);
128 + }
126 129 
127 130 if (attrs->lft.hard_packet_limit != XFRM_INF) {
128 131 MLX5_SET(ipsec_aso, aso_ctx, remove_flow_pkt_cnt,
···
180 175 
181 176 res = &mdev->mlx5e_res.hw_objs;
182 177 if (attrs->type == XFRM_DEV_OFFLOAD_PACKET)
183 - mlx5e_ipsec_packet_setup(obj, res->pdn, attrs);
178 + mlx5e_ipsec_packet_setup(obj, res->pdn, sa_entry);
184 179 
185 180 err = mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out));
186 181 if (!err)
+1
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
···
2709 2709 break;
2710 2710 case MLX5_FLOW_NAMESPACE_RDMA_TX:
2711 2711 root_ns = steering->rdma_tx_root_ns;
2712 + prio = RDMA_TX_BYPASS_PRIO;
2712 2713 break;
2713 2714 case MLX5_FLOW_NAMESPACE_RDMA_RX_COUNTERS:
2714 2715 root_ns = steering->rdma_rx_root_ns;
+3 -1
drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c
···
530 530 set_tt_map(port_sel, hash_type);
531 531 err = mlx5_lag_create_definers(ldev, hash_type, ports);
532 532 if (err)
533 - return err;
533 + goto clear_port_sel;
534 534 
535 535 if (port_sel->tunnel) {
536 536 err = mlx5_lag_create_inner_ttc_table(ldev);
···
549 549 mlx5_destroy_ttc_table(port_sel->inner.ttc);
550 550 destroy_definers:
551 551 mlx5_lag_destroy_definers(ldev);
552 + clear_port_sel:
553 + memset(port_sel, 0, sizeof(*port_sel));
552 554 return err;
553 555 }
554 556 
+1
drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c
···
257 257 return 0;
258 258 
259 259 esw_err:
260 + mlx5_sf_function_id_erase(table, sf);
260 261 mlx5_sf_free(table, sf);
261 262 return err;
262 263 }
+12 -12
drivers/net/ethernet/mellanox/mlx5/core/wc.c
···
382 382 
383 383 bool mlx5_wc_support_get(struct mlx5_core_dev *mdev)
384 384 {
385 + struct mutex *wc_state_lock = &mdev->wc_state_lock;
385 386 struct mlx5_core_dev *parent = NULL;
386 387 
387 388 if (!MLX5_CAP_GEN(mdev, bf)) {
···
401 400 */
402 401 goto out;
403 402 
404 - mutex_lock(&mdev->wc_state_lock);
403 + #ifdef CONFIG_MLX5_SF
404 + if (mlx5_core_is_sf(mdev)) {
405 + parent = mdev->priv.parent_mdev;
406 + wc_state_lock = &parent->wc_state_lock;
407 + }
408 + #endif
409 + 
410 + mutex_lock(wc_state_lock);
405 411 
406 412 if (mdev->wc_state != MLX5_WC_STATE_UNINITIALIZED)
407 413 goto unlock;
408 414 
409 - #ifdef CONFIG_MLX5_SF
410 - if (mlx5_core_is_sf(mdev))
411 - parent = mdev->priv.parent_mdev;
412 - #endif
413 - 
414 415 if (parent) {
415 - mutex_lock(&parent->wc_state_lock);
416 - 
417 416 mlx5_core_test_wc(parent);
418 417 
419 418 mlx5_core_dbg(mdev, "parent set wc_state=%d\n",
420 419 parent->wc_state);
421 420 mdev->wc_state = parent->wc_state;
422 421 
423 - mutex_unlock(&parent->wc_state_lock);
422 + } else {
423 + mlx5_core_test_wc(mdev);
424 424 }
425 425 
426 - mlx5_core_test_wc(mdev);
427 - 
428 426 unlock:
429 - mutex_unlock(&mdev->wc_state_lock);
427 + mutex_unlock(wc_state_lock);
430 428 out:
431 429 mlx5_core_dbg(mdev, "wc_state=%d\n", mdev->wc_state);
432 430 
-1
drivers/net/ethernet/meta/fbnic/Makefile
···
13 13 fbnic_ethtool.o \
14 14 fbnic_fw.o \
15 15 fbnic_hw_stats.o \
16 - fbnic_hwmon.o \
17 16 fbnic_irq.o \
18 17 fbnic_mac.o \
19 18 fbnic_netdev.o \
-5
drivers/net/ethernet/meta/fbnic/fbnic.h
···
20 20 struct device *dev;
21 21 struct net_device *netdev;
22 22 struct dentry *dbg_fbd;
23 - struct device *hwmon;
24 23 
25 24 u32 __iomem *uc_addr0;
26 25 u32 __iomem *uc_addr4;
···
32 33 
33 34 struct fbnic_fw_mbx mbx[FBNIC_IPC_MBX_INDICES];
34 35 struct fbnic_fw_cap fw_cap;
35 - struct fbnic_fw_completion *cmpl_data;
36 36 /* Lock protecting Tx Mailbox queue to prevent possible races */
37 37 spinlock_t fw_tx_lock;
38 38 
···
139 141 
140 142 int fbnic_fw_enable_mbx(struct fbnic_dev *fbd);
141 143 void fbnic_fw_disable_mbx(struct fbnic_dev *fbd);
142 - 
143 - void fbnic_hwmon_register(struct fbnic_dev *fbd);
144 - void fbnic_hwmon_unregister(struct fbnic_dev *fbd);
145 144 
146 145 int fbnic_pcs_irq_enable(struct fbnic_dev *fbd);
147 146 void fbnic_pcs_irq_disable(struct fbnic_dev *fbd);
-7
drivers/net/ethernet/meta/fbnic/fbnic_fw.h
···
44 44 u8 link_fec;
45 45 };
46 46 
47 - struct fbnic_fw_completion {
48 - struct {
49 - s32 millivolts;
50 - s32 millidegrees;
51 - } tsene;
52 - };
53 - 
54 47 void fbnic_mbx_init(struct fbnic_dev *fbd);
55 48 void fbnic_mbx_clean(struct fbnic_dev *fbd);
56 49 void fbnic_mbx_poll(struct fbnic_dev *fbd);
-81
drivers/net/ethernet/meta/fbnic/fbnic_hwmon.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - /* Copyright (c) Meta Platforms, Inc. and affiliates. */ 3 - 4 - #include <linux/hwmon.h> 5 - 6 - #include "fbnic.h" 7 - #include "fbnic_mac.h" 8 - 9 - static int fbnic_hwmon_sensor_id(enum hwmon_sensor_types type) 10 - { 11 - if (type == hwmon_temp) 12 - return FBNIC_SENSOR_TEMP; 13 - if (type == hwmon_in) 14 - return FBNIC_SENSOR_VOLTAGE; 15 - 16 - return -EOPNOTSUPP; 17 - } 18 - 19 - static umode_t fbnic_hwmon_is_visible(const void *drvdata, 20 - enum hwmon_sensor_types type, 21 - u32 attr, int channel) 22 - { 23 - if (type == hwmon_temp && attr == hwmon_temp_input) 24 - return 0444; 25 - if (type == hwmon_in && attr == hwmon_in_input) 26 - return 0444; 27 - 28 - return 0; 29 - } 30 - 31 - static int fbnic_hwmon_read(struct device *dev, enum hwmon_sensor_types type, 32 - u32 attr, int channel, long *val) 33 - { 34 - struct fbnic_dev *fbd = dev_get_drvdata(dev); 35 - const struct fbnic_mac *mac = fbd->mac; 36 - int id; 37 - 38 - id = fbnic_hwmon_sensor_id(type); 39 - return id < 0 ? id : mac->get_sensor(fbd, id, val); 40 - } 41 - 42 - static const struct hwmon_ops fbnic_hwmon_ops = { 43 - .is_visible = fbnic_hwmon_is_visible, 44 - .read = fbnic_hwmon_read, 45 - }; 46 - 47 - static const struct hwmon_channel_info *fbnic_hwmon_info[] = { 48 - HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT), 49 - HWMON_CHANNEL_INFO(in, HWMON_I_INPUT), 50 - NULL 51 - }; 52 - 53 - static const struct hwmon_chip_info fbnic_chip_info = { 54 - .ops = &fbnic_hwmon_ops, 55 - .info = fbnic_hwmon_info, 56 - }; 57 - 58 - void fbnic_hwmon_register(struct fbnic_dev *fbd) 59 - { 60 - if (!IS_REACHABLE(CONFIG_HWMON)) 61 - return; 62 - 63 - fbd->hwmon = hwmon_device_register_with_info(fbd->dev, "fbnic", 64 - fbd, &fbnic_chip_info, 65 - NULL); 66 - if (IS_ERR(fbd->hwmon)) { 67 - dev_notice(fbd->dev, 68 - "Failed to register hwmon device %pe\n", 69 - fbd->hwmon); 70 - fbd->hwmon = NULL; 71 - } 72 - } 73 - 74 - void fbnic_hwmon_unregister(struct fbnic_dev *fbd) 75 - { 76 - if (!IS_REACHABLE(CONFIG_HWMON) || !fbd->hwmon) 77 - return; 78 - 79 - hwmon_device_unregister(fbd->hwmon); 80 - fbd->hwmon = NULL; 81 - }
-22
drivers/net/ethernet/meta/fbnic/fbnic_mac.c
··· 686 686 MAC_STAT_TX_BROADCAST); 687 687 } 688 688 689 - static int fbnic_mac_get_sensor_asic(struct fbnic_dev *fbd, int id, long *val) 690 - { 691 - struct fbnic_fw_completion fw_cmpl; 692 - s32 *sensor; 693 - 694 - switch (id) { 695 - case FBNIC_SENSOR_TEMP: 696 - sensor = &fw_cmpl.tsene.millidegrees; 697 - break; 698 - case FBNIC_SENSOR_VOLTAGE: 699 - sensor = &fw_cmpl.tsene.millivolts; 700 - break; 701 - default: 702 - return -EINVAL; 703 - } 704 - 705 - *val = *sensor; 706 - 707 - return 0; 708 - } 709 - 710 689 static const struct fbnic_mac fbnic_mac_asic = { 711 690 .init_regs = fbnic_mac_init_regs, 712 691 .pcs_enable = fbnic_pcs_enable_asic, ··· 695 716 .get_eth_mac_stats = fbnic_mac_get_eth_mac_stats, 696 717 .link_down = fbnic_mac_link_down_asic, 697 718 .link_up = fbnic_mac_link_up_asic, 698 - .get_sensor = fbnic_mac_get_sensor_asic, 699 719 }; 700 720 701 721 /**
-7
drivers/net/ethernet/meta/fbnic/fbnic_mac.h
··· 47 47 #define FBNIC_LINK_MODE_PAM4 (FBNIC_LINK_50R1) 48 48 #define FBNIC_LINK_MODE_MASK (FBNIC_LINK_AUTO - 1) 49 49 50 - enum fbnic_sensor_id { 51 - FBNIC_SENSOR_TEMP, /* Temp in millidegrees Centigrade */ 52 - FBNIC_SENSOR_VOLTAGE, /* Voltage in millivolts */ 53 - }; 54 - 55 50 /* This structure defines the interface hooks for the MAC. The MAC hooks 56 51 * will be configured as a const struct provided with a set of function 57 52 * pointers. ··· 83 88 84 89 void (*link_down)(struct fbnic_dev *fbd); 85 90 void (*link_up)(struct fbnic_dev *fbd, bool tx_pause, bool rx_pause); 86 - 87 - int (*get_sensor)(struct fbnic_dev *fbd, int id, long *val); 88 91 }; 89 92 90 93 int fbnic_mac_init(struct fbnic_dev *fbd);
-3
drivers/net/ethernet/meta/fbnic/fbnic_pci.c
··· 296 296 /* Capture snapshot of hardware stats so netdev can calculate delta */ 297 297 fbnic_reset_hw_stats(fbd); 298 298 299 - fbnic_hwmon_register(fbd); 300 - 301 299 if (!fbd->dsn) { 302 300 dev_warn(&pdev->dev, "Reading serial number failed\n"); 303 301 goto init_failure_mode; ··· 358 360 fbnic_netdev_free(fbd); 359 361 } 360 362 361 - fbnic_hwmon_unregister(fbd); 362 363 fbnic_dbg_fbd_exit(fbd); 363 364 fbnic_devlink_unregister(fbd); 364 365 fbnic_fw_disable_mbx(fbd);
+2 -2
drivers/net/ethernet/microsoft/mana/gdma_main.c
··· 1656 1656 1657 1657 static void __exit mana_driver_exit(void) 1658 1658 { 1659 - debugfs_remove(mana_debugfs_root); 1660 - 1661 1659 pci_unregister_driver(&mana_driver); 1660 + 1661 + debugfs_remove(mana_debugfs_root); 1662 1662 } 1663 1663 1664 1664 module_init(mana_driver_init);
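The mana hunk above is a plain ordering fix: module exit must tear down in the reverse order of init, so the PCI driver (whose remove paths may still touch entries under the debugfs root) is unregistered before the root is removed. A minimal userspace sketch of the reverse-order teardown pattern, using hypothetical stub functions that just record call order:

```c
#include <assert.h>
#include <string.h>

static char call_log[64];

/* Stubs standing in for debugfs root create/remove and PCI driver
 * register/unregister; they only record the order they were called in. */
static void debugfs_root_create(void) { strcat(call_log, "dbgfs+ "); }
static void debugfs_root_remove(void) { strcat(call_log, "dbgfs- "); }
static void pci_drv_register(void)    { strcat(call_log, "drv+ "); }
static void pci_drv_unregister(void)  { strcat(call_log, "drv- "); }

/* init: create the debugfs root first, then register the driver
 * that creates entries under it */
static void mod_init(void)
{
	debugfs_root_create();
	pci_drv_register();
}

/* exit: strict reverse order -- unregistering the driver may still
 * remove per-device entries under the root, so the root goes last */
static void mod_exit(void)
{
	pci_drv_unregister();
	debugfs_root_remove();
}
```

The pre-fix code removed the root first, leaving the unregister path to operate on freed debugfs state.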
+2 -1
drivers/net/ethernet/netronome/nfp/bpf/offload.c
··· 458 458 map_id_full = be64_to_cpu(cbe->map_ptr); 459 459 map_id = map_id_full; 460 460 461 - if (len < sizeof(struct cmsg_bpf_event) + pkt_size + data_size) 461 + if (size_add(pkt_size, data_size) > INT_MAX || 462 + len < sizeof(struct cmsg_bpf_event) + pkt_size + data_size) 462 463 return -EINVAL; 463 464 if (cbe->hdr.ver != NFP_CCM_ABI_VERSION) 464 465 return -EINVAL;
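The `size_add()` guard added in the nfp hunk above matters because `pkt_size` and `data_size` come off the wire: their 32-bit sum can wrap and slip past the `len <` comparison. A standalone sketch of the same overflow-safe validation, where widening to 64 bits stands in for the kernel's saturating `size_add()` helper and `HDR_SIZE` is a made-up stand-in for `sizeof(struct cmsg_bpf_event)`:

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

#define HDR_SIZE 24u /* hypothetical stand-in for sizeof(struct cmsg_bpf_event) */

/* Widening add: a 64-bit sum of two 32-bit values cannot wrap, which is
 * the property the kernel code gets from size_add(). */
static uint64_t wide_add(uint32_t a, uint32_t b)
{
	return (uint64_t)a + b;
}

/* Validate attacker-controlled sizes against the received length.
 * Returns 1 if the event fits in the buffer, 0 otherwise. */
static int event_len_ok(uint32_t len, uint32_t pkt_size, uint32_t data_size)
{
	if (wide_add(pkt_size, data_size) > INT_MAX)
		return 0; /* reject sums that would overflow int math later */
	if (len < HDR_SIZE + wide_add(pkt_size, data_size))
		return 0; /* event does not fit in the received buffer */
	return 1;
}
```

Without the first check, something like `pkt_size = UINT32_MAX - 10` plus `data_size = 100` wraps to a small 32-bit sum and passes a naive `len <` comparison.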
-44
drivers/net/ethernet/realtek/r8169_main.c
··· 16 16 #include <linux/clk.h> 17 17 #include <linux/delay.h> 18 18 #include <linux/ethtool.h> 19 - #include <linux/hwmon.h> 20 19 #include <linux/phy.h> 21 20 #include <linux/if_vlan.h> 22 21 #include <linux/in.h> ··· 5346 5347 return false; 5347 5348 } 5348 5349 5349 - static umode_t r8169_hwmon_is_visible(const void *drvdata, 5350 - enum hwmon_sensor_types type, 5351 - u32 attr, int channel) 5352 - { 5353 - return 0444; 5354 - } 5355 - 5356 - static int r8169_hwmon_read(struct device *dev, enum hwmon_sensor_types type, 5357 - u32 attr, int channel, long *val) 5358 - { 5359 - struct rtl8169_private *tp = dev_get_drvdata(dev); 5360 - int val_raw; 5361 - 5362 - val_raw = phy_read_paged(tp->phydev, 0xbd8, 0x12) & 0x3ff; 5363 - if (val_raw >= 512) 5364 - val_raw -= 1024; 5365 - 5366 - *val = 1000 * val_raw / 2; 5367 - 5368 - return 0; 5369 - } 5370 - 5371 - static const struct hwmon_ops r8169_hwmon_ops = { 5372 - .is_visible = r8169_hwmon_is_visible, 5373 - .read = r8169_hwmon_read, 5374 - }; 5375 - 5376 - static const struct hwmon_channel_info * const r8169_hwmon_info[] = { 5377 - HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT), 5378 - NULL 5379 - }; 5380 - 5381 - static const struct hwmon_chip_info r8169_hwmon_chip_info = { 5382 - .ops = &r8169_hwmon_ops, 5383 - .info = r8169_hwmon_info, 5384 - }; 5385 - 5386 5350 static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) 5387 5351 { 5388 5352 struct rtl8169_private *tp; ··· 5525 5563 if (rc) 5526 5564 return rc; 5527 5565 5528 - /* The temperature sensor is available from RTl8125B */ 5529 - if (IS_REACHABLE(CONFIG_HWMON) && tp->mac_version >= RTL_GIGA_MAC_VER_63) 5530 - /* ignore errors */ 5531 - devm_hwmon_device_register_with_info(&pdev->dev, "nic_temp", tp, 5532 - &r8169_hwmon_chip_info, 5533 - NULL); 5534 5566 rc = register_netdev(dev); 5535 5567 if (rc) 5536 5568 return rc;
+1 -1
drivers/net/ethernet/realtek/rtase/rtase_main.c
··· 1827 1827 1828 1828 for (i = 0; i < tp->int_nums; i++) { 1829 1829 irq = pci_irq_vector(pdev, i); 1830 - if (!irq) { 1830 + if (irq < 0) { 1831 1831 pci_disable_msix(pdev); 1832 1832 return irq; 1833 1833 }
+1
drivers/net/ethernet/renesas/ravb_main.c
··· 2763 2763 .net_features = NETIF_F_RXCSUM, 2764 2764 .stats_len = ARRAY_SIZE(ravb_gstrings_stats), 2765 2765 .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3, 2766 + .tx_max_frame_size = SZ_2K, 2766 2767 .rx_max_frame_size = SZ_2K, 2767 2768 .rx_buffer_size = SZ_2K + 2768 2769 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
+11 -3
drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 + #include <linux/iommu.h> 2 3 #include <linux/platform_device.h> 3 4 #include <linux/of.h> 4 5 #include <linux/module.h> ··· 19 18 20 19 struct reset_control *rst_mac; 21 20 struct reset_control *rst_pcs; 21 + 22 + u32 iommu_sid; 22 23 23 24 void __iomem *hv; 24 25 void __iomem *regs; ··· 53 50 #define MGBE_WRAP_COMMON_INTR_ENABLE 0x8704 54 51 #define MAC_SBD_INTR BIT(2) 55 52 #define MGBE_WRAP_AXI_ASID0_CTRL 0x8400 56 - #define MGBE_SID 0x6 57 53 58 54 static int __maybe_unused tegra_mgbe_suspend(struct device *dev) 59 55 { ··· 86 84 writel(MAC_SBD_INTR, mgbe->regs + MGBE_WRAP_COMMON_INTR_ENABLE); 87 85 88 86 /* Program SID */ 89 - writel(MGBE_SID, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL); 87 + writel(mgbe->iommu_sid, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL); 90 88 91 89 value = readl(mgbe->xpcs + XPCS_WRAP_UPHY_STATUS); 92 90 if ((value & XPCS_WRAP_UPHY_STATUS_TX_P_UP) == 0) { ··· 243 241 if (IS_ERR(mgbe->xpcs)) 244 242 return PTR_ERR(mgbe->xpcs); 245 243 244 + /* get controller's stream id from iommu property in device tree */ 245 + if (!tegra_dev_iommu_get_stream_id(mgbe->dev, &mgbe->iommu_sid)) { 246 + dev_err(mgbe->dev, "failed to get iommu stream id\n"); 247 + return -EINVAL; 248 + } 249 + 246 250 res.addr = mgbe->regs; 247 251 res.irq = irq; 248 252 ··· 354 346 writel(MAC_SBD_INTR, mgbe->regs + MGBE_WRAP_COMMON_INTR_ENABLE); 355 347 356 348 /* Program SID */ 357 - writel(MGBE_SID, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL); 349 + writel(mgbe->iommu_sid, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL); 358 350 359 351 plat->flags |= STMMAC_FLAG_SERDES_UP_AFTER_PHY_LINKUP; 360 352
+7 -7
drivers/net/ethernet/ti/cpsw_ale.c
··· 127 127 128 128 static inline int cpsw_ale_get_field(u32 *ale_entry, u32 start, u32 bits) 129 129 { 130 - int idx, idx2; 130 + int idx, idx2, index; 131 131 u32 hi_val = 0; 132 132 133 133 idx = start / 32; 134 134 idx2 = (start + bits - 1) / 32; 135 135 /* Check if bits to be fetched exceed a word */ 136 136 if (idx != idx2) { 137 - idx2 = 2 - idx2; /* flip */ 138 - hi_val = ale_entry[idx2] << ((idx2 * 32) - start); 137 + index = 2 - idx2; /* flip */ 138 + hi_val = ale_entry[index] << ((idx2 * 32) - start); 139 139 } 140 140 start -= idx * 32; 141 141 idx = 2 - idx; /* flip */ ··· 145 145 static inline void cpsw_ale_set_field(u32 *ale_entry, u32 start, u32 bits, 146 146 u32 value) 147 147 { 148 - int idx, idx2; 148 + int idx, idx2, index; 149 149 150 150 value &= BITMASK(bits); 151 151 idx = start / 32; 152 152 idx2 = (start + bits - 1) / 32; 153 153 /* Check if bits to be set exceed a word */ 154 154 if (idx != idx2) { 155 - idx2 = 2 - idx2; /* flip */ 156 - ale_entry[idx2] &= ~(BITMASK(bits + start - (idx2 * 32))); 157 - ale_entry[idx2] |= (value >> ((idx2 * 32) - start)); 155 + index = 2 - idx2; /* flip */ 156 + ale_entry[index] &= ~(BITMASK(bits + start - (idx2 * 32))); 157 + ale_entry[index] |= (value >> ((idx2 * 32) - start)); 158 158 } 159 159 start -= idx * 32; 160 160 idx = 2 - idx; /* flip */
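The cpsw_ale hunk above stops `cpsw_ale_get_field()`/`cpsw_ale_set_field()` from overwriting `idx2` before it is used in the `(idx2 * 32) - start` shift; a separate `index` now carries the flipped array position. A userspace sketch of the corrected extraction (the `BITMASK()` definition here is an assumption mirroring the driver macro; the layout is the driver's flipped word order, logical word `i` stored at `ale_entry[2 - i]`):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed equivalent of the driver's BITMASK() macro. */
#define BITMASK(bits) ((bits) == 32 ? 0xffffffffu : ((1u << (bits)) - 1u))

/* Extract 'bits' bits starting at logical bit 'start' from a 3-word ALE
 * entry stored in flipped word order. Mirrors the fixed
 * cpsw_ale_get_field(): idx2 keeps its logical value for the shift
 * amount while 'index' holds the flipped array position. */
static uint32_t ale_get_field(const uint32_t *ale_entry, uint32_t start,
			      uint32_t bits)
{
	uint32_t idx = start / 32;
	uint32_t idx2 = (start + bits - 1) / 32;
	uint32_t hi_val = 0;

	if (idx != idx2) { /* field spans two words */
		uint32_t index = 2 - idx2; /* flip */
		hi_val = ale_entry[index] << (idx2 * 32 - start);
	}
	start -= idx * 32;
	idx = 2 - idx; /* flip */
	return (hi_val + (ale_entry[idx] >> start)) & BITMASK(bits);
}
```

The old code only misbehaved for fields spanning into the last word (`idx2 == 2`), where the flipped value 0 turned `(idx2 * 32) - start` into a negative shift.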
+11 -13
drivers/net/ethernet/wangxun/libwx/wx_hw.c
··· 334 334 status = read_poll_timeout(rd32, hicr, hicr & WX_MNG_MBOX_CTL_FWRDY, 1000, 335 335 timeout * 1000, false, wx, WX_MNG_MBOX_CTL); 336 336 337 + buf[0] = rd32(wx, WX_MNG_MBOX); 338 + if ((buf[0] & 0xff0000) >> 16 == 0x80) { 339 + wx_err(wx, "Unknown FW command: 0x%x\n", buffer[0] & 0xff); 340 + status = -EINVAL; 341 + goto rel_out; 342 + } 343 + 337 344 /* Check command completion */ 338 345 if (status) { 339 - wx_dbg(wx, "Command has failed with no status valid.\n"); 340 - 341 - buf[0] = rd32(wx, WX_MNG_MBOX); 342 - if ((buffer[0] & 0xff) != (~buf[0] >> 24)) { 343 - status = -EINVAL; 344 - goto rel_out; 345 - } 346 - if ((buf[0] & 0xff0000) >> 16 == 0x80) { 347 - wx_dbg(wx, "It's unknown cmd.\n"); 348 - status = -EINVAL; 349 - goto rel_out; 350 - } 351 - 346 + wx_err(wx, "Command has failed with no status valid.\n"); 352 347 wx_dbg(wx, "write value:\n"); 353 348 for (i = 0; i < dword_len; i++) 354 349 wx_dbg(wx, "%x ", buffer[i]); 355 350 wx_dbg(wx, "read value:\n"); 356 351 for (i = 0; i < dword_len; i++) 357 352 wx_dbg(wx, "%x ", buf[i]); 353 + wx_dbg(wx, "\ncheck: %x %x\n", buffer[0] & 0xff, ~buf[0] >> 24); 354 + 355 + goto rel_out; 358 356 } 359 357 360 358 if (!return_data)
+6
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
··· 2056 2056 return -EBUSY; 2057 2057 } 2058 2058 2059 + if (ecoalesce->rx_max_coalesced_frames > 255 || 2060 + ecoalesce->tx_max_coalesced_frames > 255) { 2061 + NL_SET_ERR_MSG(extack, "frames must be less than 256"); 2062 + return -EINVAL; 2063 + } 2064 + 2059 2065 if (ecoalesce->rx_max_coalesced_frames) 2060 2066 lp->coalesce_count_rx = ecoalesce->rx_max_coalesced_frames; 2061 2067 if (ecoalesce->rx_coalesce_usecs)
+17 -9
drivers/net/gtp.c
··· 1524 1524 goto out_encap; 1525 1525 } 1526 1526 1527 - gn = net_generic(dev_net(dev), gtp_net_id); 1528 - list_add_rcu(&gtp->list, &gn->gtp_dev_list); 1527 + gn = net_generic(src_net, gtp_net_id); 1528 + list_add(&gtp->list, &gn->gtp_dev_list); 1529 1529 dev->priv_destructor = gtp_destructor; 1530 1530 1531 1531 netdev_dbg(dev, "registered new GTP interface\n"); ··· 1551 1551 hlist_for_each_entry_safe(pctx, next, &gtp->tid_hash[i], hlist_tid) 1552 1552 pdp_context_delete(pctx); 1553 1553 1554 - list_del_rcu(&gtp->list); 1554 + list_del(&gtp->list); 1555 1555 unregister_netdevice_queue(dev, head); 1556 1556 } 1557 1557 ··· 2271 2271 struct gtp_dev *last_gtp = (struct gtp_dev *)cb->args[2], *gtp; 2272 2272 int i, j, bucket = cb->args[0], skip = cb->args[1]; 2273 2273 struct net *net = sock_net(skb->sk); 2274 + struct net_device *dev; 2274 2275 struct pdp_ctx *pctx; 2275 - struct gtp_net *gn; 2276 - 2277 - gn = net_generic(net, gtp_net_id); 2278 2276 2279 2277 if (cb->args[4]) 2280 2278 return 0; 2281 2279 2282 2280 rcu_read_lock(); 2283 - list_for_each_entry_rcu(gtp, &gn->gtp_dev_list, list) { 2281 + for_each_netdev_rcu(net, dev) { 2282 + if (dev->rtnl_link_ops != &gtp_link_ops) 2283 + continue; 2284 + 2285 + gtp = netdev_priv(dev); 2286 + 2284 2287 if (last_gtp && last_gtp != gtp) 2285 2288 continue; 2286 2289 else ··· 2478 2475 2479 2476 list_for_each_entry(net, net_list, exit_list) { 2480 2477 struct gtp_net *gn = net_generic(net, gtp_net_id); 2481 - struct gtp_dev *gtp; 2478 + struct gtp_dev *gtp, *gtp_next; 2479 + struct net_device *dev; 2482 2480 2483 - list_for_each_entry(gtp, &gn->gtp_dev_list, list) 2481 + for_each_netdev(net, dev) 2482 + if (dev->rtnl_link_ops == &gtp_link_ops) 2483 + gtp_dellink(dev, dev_to_kill); 2484 + 2485 + list_for_each_entry_safe(gtp, gtp_next, &gn->gtp_dev_list, list) 2484 2486 gtp_dellink(gtp->dev, dev_to_kill); 2485 2487 } 2486 2488 }
+5 -1
drivers/net/ieee802154/ca8210.c
··· 3072 3072 spi_set_drvdata(priv->spi, priv); 3073 3073 if (IS_ENABLED(CONFIG_IEEE802154_CA8210_DEBUGFS)) { 3074 3074 cascoda_api_upstream = ca8210_test_int_driver_write; 3075 - ca8210_test_interface_init(priv); 3075 + ret = ca8210_test_interface_init(priv); 3076 + if (ret) { 3077 + dev_crit(&spi_device->dev, "ca8210_test_interface_init failed\n"); 3078 + goto error; 3079 + } 3076 3080 } else { 3077 3081 cascoda_api_upstream = NULL; 3078 3082 }
+4
drivers/net/mctp/mctp-i3c.c
··· 125 125 126 126 xfer.data.in = skb_put(skb, mi->mrl); 127 127 128 + /* Make sure netif_rx() is read in the same order as i3c. */ 129 + mutex_lock(&mi->lock); 128 130 rc = i3c_device_do_priv_xfers(mi->i3c, &xfer, 1); 129 131 if (rc < 0) 130 132 goto err; ··· 168 166 stats->rx_dropped++; 169 167 } 170 168 169 + mutex_unlock(&mi->lock); 171 170 return 0; 172 171 err: 172 + mutex_unlock(&mi->lock); 173 173 kfree_skb(skb); 174 174 return rc; 175 175 }
+3 -1
drivers/net/pcs/pcs-xpcs.c
··· 684 684 if (ret < 0) 685 685 return ret; 686 686 687 - mask = DW_VR_MII_DIG_CTRL1_MAC_AUTO_SW; 687 + val = 0; 688 + mask = DW_VR_MII_DIG_CTRL1_2G5_EN | DW_VR_MII_DIG_CTRL1_MAC_AUTO_SW; 689 + 688 690 if (neg_mode == PHYLINK_PCS_NEG_INBAND_ENABLED) 689 691 val = DW_VR_MII_DIG_CTRL1_MAC_AUTO_SW; 690 692
+10 -5
drivers/net/pfcp.c
··· 206 206 goto exit_del_pfcp_sock; 207 207 } 208 208 209 - pn = net_generic(dev_net(dev), pfcp_net_id); 210 - list_add_rcu(&pfcp->list, &pn->pfcp_dev_list); 209 + pn = net_generic(net, pfcp_net_id); 210 + list_add(&pfcp->list, &pn->pfcp_dev_list); 211 211 212 212 netdev_dbg(dev, "registered new PFCP interface\n"); 213 213 ··· 224 224 { 225 225 struct pfcp_dev *pfcp = netdev_priv(dev); 226 226 227 - list_del_rcu(&pfcp->list); 227 + list_del(&pfcp->list); 228 228 unregister_netdevice_queue(dev, head); 229 229 } 230 230 ··· 247 247 static void __net_exit pfcp_net_exit(struct net *net) 248 248 { 249 249 struct pfcp_net *pn = net_generic(net, pfcp_net_id); 250 - struct pfcp_dev *pfcp; 250 + struct pfcp_dev *pfcp, *pfcp_next; 251 + struct net_device *dev; 251 252 LIST_HEAD(list); 252 253 253 254 rtnl_lock(); 254 - list_for_each_entry(pfcp, &pn->pfcp_dev_list, list) 255 + for_each_netdev(net, dev) 256 + if (dev->rtnl_link_ops == &pfcp_link_ops) 257 + pfcp_dellink(dev, &list); 258 + 259 + list_for_each_entry_safe(pfcp, pfcp_next, &pn->pfcp_dev_list, list) 255 260 pfcp_dellink(pfcp->dev, &list); 256 261 257 262 unregister_netdevice_many(&list);
+15 -3
drivers/of/address.c
··· 340 340 return of_property_present(np, "#address-cells") && (of_bus_n_addr_cells(np) == 3); 341 341 } 342 342 343 + static int of_bus_default_match(struct device_node *np) 344 + { 345 + /* 346 + * Check for presence first since of_bus_n_addr_cells() will warn when 347 + * walking parent nodes. 348 + */ 349 + return of_property_present(np, "#address-cells"); 350 + } 351 + 343 352 /* 344 353 * Array of bus specific translators 345 354 */ ··· 393 384 { 394 385 .name = "default", 395 386 .addresses = "reg", 396 - .match = NULL, 387 + .match = of_bus_default_match, 397 388 .count_cells = of_bus_default_count_cells, 398 389 .map = of_bus_default_map, 399 390 .translate = of_bus_default_translate, ··· 408 399 for (i = 0; i < ARRAY_SIZE(of_busses); i++) 409 400 if (!of_busses[i].match || of_busses[i].match(np)) 410 401 return &of_busses[i]; 411 - BUG(); 412 402 return NULL; 413 403 } 414 404 ··· 529 521 if (parent == NULL) 530 522 return OF_BAD_ADDR; 531 523 bus = of_match_bus(parent); 524 + if (!bus) 525 + return OF_BAD_ADDR; 532 526 533 527 /* Count address cells & copy address locally */ 534 528 bus->count_cells(dev, &na, &ns); ··· 574 564 575 565 /* Get new parent bus and counts */ 576 566 pbus = of_match_bus(parent); 567 + if (!pbus) 568 + return OF_BAD_ADDR; 577 569 pbus->count_cells(dev, &pna, &pns); 578 570 if (!OF_CHECK_COUNTS(pna, pns)) { 579 571 pr_err("Bad cell count for %pOF\n", dev); ··· 715 703 716 704 /* match the parent's bus type */ 717 705 bus = of_match_bus(parent); 718 - if (strcmp(bus->name, "pci") && (bar_no >= 0)) 706 + if (!bus || (strcmp(bus->name, "pci") && (bar_no >= 0))) 719 707 return NULL; 720 708 721 709 /* Get "reg" or "assigned-addresses" property */
+13
drivers/of/unittest-data/tests-platform.dtsi
··· 34 34 }; 35 35 }; 36 36 }; 37 + 38 + platform-tests-2 { 39 + // No #address-cells or #size-cells 40 + node { 41 + #address-cells = <1>; 42 + #size-cells = <1>; 43 + 44 + test-device@100 { 45 + compatible = "test-sub-device"; 46 + reg = <0x100 1>; 47 + }; 48 + }; 49 + }; 37 50 }; 38 51 };
+14
drivers/of/unittest.c
··· 1380 1380 static void __init of_unittest_reg(void) 1381 1381 { 1382 1382 struct device_node *np; 1383 + struct resource res; 1383 1384 int ret; 1384 1385 u64 addr, size; 1385 1386 ··· 1397 1396 np, addr); 1398 1397 1399 1398 of_node_put(np); 1399 + 1400 + np = of_find_node_by_path("/testcase-data/platform-tests-2/node/test-device@100"); 1401 + if (!np) { 1402 + pr_err("missing testcase data\n"); 1403 + return; 1404 + } 1405 + 1406 + ret = of_address_to_resource(np, 0, &res); 1407 + unittest(ret == -EINVAL, "of_address_to_resource(%pOF) expected error on untranslatable address\n", 1408 + np); 1409 + 1410 + of_node_put(np); 1411 + 1400 1412 } 1401 1413 1402 1414 struct of_unittest_expected_res {
+16 -9
drivers/pci/pcie/bwctrl.c
··· 303 303 if (ret) 304 304 return ret; 305 305 306 - ret = devm_request_irq(&srv->device, srv->irq, pcie_bwnotif_irq, 307 - IRQF_SHARED, "PCIe bwctrl", srv); 308 - if (ret) 309 - return ret; 310 - 311 306 scoped_guard(rwsem_write, &pcie_bwctrl_setspeed_rwsem) { 312 307 scoped_guard(rwsem_write, &pcie_bwctrl_lbms_rwsem) { 313 - port->link_bwctrl = no_free_ptr(data); 308 + port->link_bwctrl = data; 309 + 310 + ret = request_irq(srv->irq, pcie_bwnotif_irq, 311 + IRQF_SHARED, "PCIe bwctrl", srv); 312 + if (ret) { 313 + port->link_bwctrl = NULL; 314 + return ret; 315 + } 316 + 314 317 pcie_bwnotif_enable(srv); 315 318 } 316 319 } ··· 334 331 335 332 pcie_cooling_device_unregister(data->cdev); 336 333 337 - pcie_bwnotif_disable(srv->port); 334 + scoped_guard(rwsem_write, &pcie_bwctrl_setspeed_rwsem) { 335 + scoped_guard(rwsem_write, &pcie_bwctrl_lbms_rwsem) { 336 + pcie_bwnotif_disable(srv->port); 338 337 339 - scoped_guard(rwsem_write, &pcie_bwctrl_setspeed_rwsem) 340 - scoped_guard(rwsem_write, &pcie_bwctrl_lbms_rwsem) 338 + free_irq(srv->irq, srv); 339 + 341 340 srv->port->link_bwctrl = NULL; 341 + } 342 + } 342 343 } 343 344 344 345 static int pcie_bwnotif_suspend(struct pcie_device *srv)
+12 -10
drivers/perf/riscv_pmu_sbi.c
··· 507 507 { 508 508 u32 type = event->attr.type; 509 509 u64 config = event->attr.config; 510 - u64 raw_config_val; 511 - int ret; 510 + int ret = -ENOENT; 512 511 513 512 /* 514 513 * Ensure we are finished checking standard hardware events for ··· 527 528 case PERF_TYPE_RAW: 528 529 /* 529 530 * As per SBI specification, the upper 16 bits must be unused 530 - * for a raw event. 531 + * for a hardware raw event. 531 532 * Bits 63:62 are used to distinguish between raw events 532 533 * 00 - Hardware raw event 533 534 * 10 - SBI firmware events 534 535 * 11 - Risc-V platform specific firmware event 535 536 */ 536 - raw_config_val = config & RISCV_PMU_RAW_EVENT_MASK; 537 + 537 538 switch (config >> 62) { 538 539 case 0: 539 - ret = RISCV_PMU_RAW_EVENT_IDX; 540 - *econfig = raw_config_val; 540 + /* Return error any bits [48-63] is set as it is not allowed by the spec */ 541 + if (!(config & ~RISCV_PMU_RAW_EVENT_MASK)) { 542 + *econfig = config & RISCV_PMU_RAW_EVENT_MASK; 543 + ret = RISCV_PMU_RAW_EVENT_IDX; 544 + } 541 545 break; 542 546 case 2: 543 - ret = (raw_config_val & 0xFFFF) | 544 - (SBI_PMU_EVENT_TYPE_FW << 16); 547 + ret = (config & 0xFFFF) | (SBI_PMU_EVENT_TYPE_FW << 16); 545 548 break; 546 549 case 3: 547 550 /* ··· 552 551 * Event data - raw event encoding 553 552 */ 554 553 ret = SBI_PMU_EVENT_TYPE_FW << 16 | RISCV_PLAT_FW_EVENT; 555 - *econfig = raw_config_val; 554 + *econfig = config & RISCV_PMU_PLAT_FW_EVENT_MASK; 555 + break; 556 + default: 556 557 break; 557 558 } 558 559 break; 559 560 default: 560 - ret = -ENOENT; 561 561 break; 562 562 } 563 563
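The rewritten `PERF_TYPE_RAW` handling above rejects hardware raw events with reserved bits [48-61] set instead of silently masking them off. A sketch of the bits 63:62 dispatch with hypothetical mask and type constants (the real `RISCV_PMU_RAW_EVENT_MASK`, `RISCV_PMU_RAW_EVENT_IDX`, and `SBI_PMU_EVENT_TYPE_FW` values live in the RISC-V PMU headers):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical constants for illustration only. */
#define RAW_EVENT_MASK 0x0000ffffffffffffull /* assumed: bits 47:0 */
#define EVENT_TYPE_FW  0xfull                /* assumed firmware type id */

/* Decode a raw perf event config. Returns the event index (>= 0) and
 * fills *econfig, or -1 when the encoding is invalid/unsupported. */
static int64_t decode_raw_config(uint64_t config, uint64_t *econfig)
{
	int64_t ret = -1;

	switch (config >> 62) {
	case 0: /* hardware raw event: reserved bits 61:48 must be clear */
		if (!(config & ~RAW_EVENT_MASK)) {
			*econfig = config & RAW_EVENT_MASK;
			ret = 0; /* stands in for RISCV_PMU_RAW_EVENT_IDX */
		}
		break;
	case 2: /* SBI firmware event: low 16 bits are the event id */
		ret = (int64_t)((config & 0xffff) | (EVENT_TYPE_FW << 16));
		break;
	default: /* encodings 1 and 3 not modeled in this sketch */
		break;
	}
	return ret;
}
```

Initializing `ret` to the error value and only overwriting it on a valid encoding is what lets the rewritten switch drop the per-branch `ret = -ENOENT` assignments.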
+7 -1
drivers/platform/x86/amd/pmc/pmc.c
··· 947 947 { 948 948 struct amd_pmc_dev *pdev = dev_get_drvdata(dev); 949 949 950 + /* 951 + * Must be called only from the same set of dev_pm_ops handlers 952 + * as i8042_pm_suspend() is called: currently just from .suspend. 953 + */ 950 954 if (pdev->disable_8042_wakeup && !disable_workarounds) { 951 955 int rc = amd_pmc_wa_irq1(pdev); 952 956 ··· 963 959 return 0; 964 960 } 965 961 966 - static DEFINE_SIMPLE_DEV_PM_OPS(amd_pmc_pm, amd_pmc_suspend_handler, NULL); 962 + static const struct dev_pm_ops amd_pmc_pm = { 963 + .suspend = amd_pmc_suspend_handler, 964 + }; 967 965 968 966 static const struct pci_device_id pmc_pci_ids[] = { 969 967 { PCI_DEVICE(PCI_VENDOR_ID_AMD, AMD_CPU_ID_PS) },
+3 -2
drivers/platform/x86/dell/dell-uart-backlight.c
··· 283 283 init_waitqueue_head(&dell_bl->wait_queue); 284 284 dell_bl->dev = dev; 285 285 286 + serdev_device_set_drvdata(serdev, dell_bl); 287 + serdev_device_set_client_ops(serdev, &dell_uart_bl_serdev_ops); 288 + 286 289 ret = devm_serdev_device_open(dev, serdev); 287 290 if (ret) 288 291 return dev_err_probe(dev, ret, "opening UART device\n"); ··· 293 290 /* 9600 bps, no flow control, these are the default but set them to be sure */ 294 291 serdev_device_set_baudrate(serdev, 9600); 295 292 serdev_device_set_flow_control(serdev, false); 296 - serdev_device_set_drvdata(serdev, dell_bl); 297 - serdev_device_set_client_ops(serdev, &dell_uart_bl_serdev_ops); 298 293 299 294 get_version[0] = DELL_SOF(GET_CMD_LEN); 300 295 get_version[1] = CMD_GET_VERSION;
+4
drivers/platform/x86/intel/pmc/core_ssram.c
··· 269 269 /* 270 270 * The secondary PMC BARS (which are behind hidden PCI devices) 271 271 * are read from fixed offsets in MMIO of the primary PMC BAR. 272 + * If a device is not present, the value will be 0. 272 273 */ 273 274 ssram_base = get_base(tmp_ssram, offset); 275 + if (!ssram_base) 276 + return 0; 277 + 274 278 ssram = ioremap(ssram_base, SSRAM_HDR_SIZE); 275 279 if (!ssram) 276 280 return -ENOMEM;
+1
drivers/platform/x86/intel/speed_select_if/isst_if_common.c
··· 804 804 static const struct x86_cpu_id isst_cpu_ids[] = { 805 805 X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, SST_HPM_SUPPORTED), 806 806 X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, SST_HPM_SUPPORTED), 807 + X86_MATCH_VFM(INTEL_ATOM_DARKMONT_X, SST_HPM_SUPPORTED), 807 808 X86_MATCH_VFM(INTEL_EMERALDRAPIDS_X, 0), 808 809 X86_MATCH_VFM(INTEL_GRANITERAPIDS_D, SST_HPM_SUPPORTED), 809 810 X86_MATCH_VFM(INTEL_GRANITERAPIDS_X, SST_HPM_SUPPORTED),
+1
drivers/platform/x86/intel/tpmi_power_domains.c
··· 81 81 X86_MATCH_VFM(INTEL_GRANITERAPIDS_X, NULL), 82 82 X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, NULL), 83 83 X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, NULL), 84 + X86_MATCH_VFM(INTEL_ATOM_DARKMONT_X, NULL), 84 85 X86_MATCH_VFM(INTEL_GRANITERAPIDS_D, NULL), 85 86 X86_MATCH_VFM(INTEL_PANTHERCOVE_X, NULL), 86 87 {}
+3 -2
drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c
··· 199 199 if (ret) 200 200 return ret; 201 201 202 + serdev_device_set_drvdata(serdev, fc); 203 + serdev_device_set_client_ops(serdev, &yt2_1380_fc_serdev_ops); 204 + 202 205 ret = devm_serdev_device_open(dev, serdev); 203 206 if (ret) 204 207 return dev_err_probe(dev, ret, "opening UART device\n"); 205 208 206 209 serdev_device_set_baudrate(serdev, 600); 207 210 serdev_device_set_flow_control(serdev, false); 208 - serdev_device_set_drvdata(serdev, fc); 209 - serdev_device_set_client_ops(serdev, &yt2_1380_fc_serdev_ops); 210 211 211 212 ret = devm_extcon_register_notifier_all(dev, fc->extcon, &fc->nb); 212 213 if (ret)
+1 -1
drivers/pmdomain/imx/imx8mp-blk-ctrl.c
··· 770 770 771 771 of_genpd_del_provider(pdev->dev.of_node); 772 772 773 - for (i = 0; bc->onecell_data.num_domains; i++) { 773 + for (i = 0; i < bc->onecell_data.num_domains; i++) { 774 774 struct imx8mp_blk_ctrl_domain *domain = &bc->domains[i]; 775 775 776 776 pm_genpd_remove(&domain->genpd);
+1
drivers/reset/reset-rzg2l-usbphy-ctrl.c
··· 176 176 vdev->dev.parent = dev; 177 177 priv->vdev = vdev; 178 178 179 + device_set_of_node_from_dev(&vdev->dev, dev); 179 180 error = platform_device_add(vdev); 180 181 if (error) 181 182 goto err_device_put;
+3
drivers/scsi/scsi_lib.c
··· 210 210 struct scsi_sense_hdr sshdr; 211 211 enum sam_status status; 212 212 213 + if (!scmd->result) 214 + return 0; 215 + 213 216 if (!failures) 214 217 return 0; 215 218
+3 -1
drivers/scsi/scsi_transport_iscsi.c
··· 4104 4104 } 4105 4105 do { 4106 4106 /* 4107 - * special case for GET_STATS: 4107 + * special case for GET_STATS, GET_CHAP and GET_HOST_STATS: 4108 4108 * on success - sending reply and stats from 4109 4109 * inside of if_recv_msg(), 4110 4110 * on error - fall through. ··· 4112 4112 if (ev->type == ISCSI_UEVENT_GET_STATS && !err) 4113 4113 break; 4114 4114 if (ev->type == ISCSI_UEVENT_GET_CHAP && !err) 4115 + break; 4116 + if (ev->type == ISCSI_UEVENT_GET_HOST_STATS && !err) 4115 4117 break; 4116 4118 err = iscsi_if_send_reply(portid, nlh->nlmsg_type, 4117 4119 ev, sizeof(*ev));
+6 -2
drivers/staging/gpib/Kconfig
··· 65 65 depends on ISA_BUS || PCI || PCMCIA 66 66 depends on HAS_IOPORT 67 67 depends on !X86_PAE 68 + depends on PCMCIA || !PCMCIA 69 + depends on HAS_IOPORT_MAP 68 70 select GPIB_COMMON 69 71 select GPIB_NEC7210 70 72 help ··· 91 89 depends on HAS_IOPORT 92 90 depends on ISA_BUS || PCI || PCMCIA 93 91 depends on !X86_PAE 92 + depends on PCMCIA || !PCMCIA 94 93 select GPIB_COMMON 95 94 select GPIB_NEC7210 96 95 help ··· 180 177 config GPIB_INES 181 178 tristate "INES" 182 179 depends on PCI || ISA_BUS || PCMCIA 180 + depends on PCMCIA || !PCMCIA 183 181 depends on HAS_IOPORT 184 182 depends on !X86_PAE 185 183 select GPIB_COMMON ··· 203 199 called cb7210. 204 200 205 201 config GPIB_PCMCIA 206 - bool "PCMCIA/Cardbus support for NI MC and Ines boards" 207 - depends on PCCARD && (GPIB_NI_PCI_ISA || GPIB_CB7210 || GPIB_INES) 202 + def_bool y 203 + depends on PCMCIA && (GPIB_NI_PCI_ISA || GPIB_CB7210 || GPIB_INES) 208 204 help 209 205 Enable PCMCIA/CArdbus support for National Instruments, 210 206 measurement computing boards and Ines boards.
+1 -1
drivers/staging/gpib/agilent_82350b/Makefile
··· 1 1 2 - obj-m += agilent_82350b.o 2 + obj-$(CONFIG_GPIB_AGILENT_82350B) += agilent_82350b.o
+2 -2
drivers/staging/gpib/agilent_82350b/agilent_82350b.c
··· 700 700 GPIB_82350A_REGION)); 701 701 dev_dbg(board->gpib_dev, "%s: gpib base address remapped to 0x%p\n", 702 702 driver_name, a_priv->gpib_base); 703 - tms_priv->iobase = a_priv->gpib_base + TMS9914_BASE_REG; 703 + tms_priv->mmiobase = a_priv->gpib_base + TMS9914_BASE_REG; 704 704 a_priv->sram_base = ioremap(pci_resource_start(a_priv->pci_device, 705 705 SRAM_82350A_REGION), 706 706 pci_resource_len(a_priv->pci_device, ··· 724 724 pci_resource_len(a_priv->pci_device, GPIB_REGION)); 725 725 dev_dbg(board->gpib_dev, "%s: gpib base address remapped to 0x%p\n", 726 726 driver_name, a_priv->gpib_base); 727 - tms_priv->iobase = a_priv->gpib_base + TMS9914_BASE_REG; 727 + tms_priv->mmiobase = a_priv->gpib_base + TMS9914_BASE_REG; 728 728 a_priv->sram_base = ioremap(pci_resource_start(a_priv->pci_device, SRAM_REGION), 729 729 pci_resource_len(a_priv->pci_device, SRAM_REGION)); 730 730 dev_dbg(board->gpib_dev, "%s: sram base address remapped to 0x%p\n",
+1 -1
drivers/staging/gpib/agilent_82357a/Makefile
··· 1 1 2 - obj-m += agilent_82357a.o 2 + obj-$(CONFIG_GPIB_AGILENT_82357A) += agilent_82357a.o 3 3 4 4
+1 -1
drivers/staging/gpib/cb7210/Makefile
··· 1 1 ccflags-$(CONFIG_GPIB_PCMCIA) := -DGPIB_PCMCIA 2 - obj-m += cb7210.o 2 + obj-$(CONFIG_GPIB_CB7210) += cb7210.o 3 3 4 4
+6 -6
drivers/staging/gpib/cb7210/cb7210.c
··· 971 971 switch (cb_priv->pci_chip) { 972 972 case PCI_CHIP_AMCC_S5933: 973 973 cb_priv->amcc_iobase = pci_resource_start(cb_priv->pci_device, 0); 974 - nec_priv->iobase = (void *)(pci_resource_start(cb_priv->pci_device, 1)); 974 + nec_priv->iobase = pci_resource_start(cb_priv->pci_device, 1); 975 975 cb_priv->fifo_iobase = pci_resource_start(cb_priv->pci_device, 2); 976 976 break; 977 977 case PCI_CHIP_QUANCOM: 978 - nec_priv->iobase = (void *)(pci_resource_start(cb_priv->pci_device, 0)); 979 - cb_priv->fifo_iobase = (unsigned long)nec_priv->iobase; 978 + nec_priv->iobase = pci_resource_start(cb_priv->pci_device, 0); 979 + cb_priv->fifo_iobase = nec_priv->iobase; 980 980 break; 981 981 default: 982 982 pr_err("cb7210: bug! unhandled pci_chip=%i\n", cb_priv->pci_chip); ··· 1040 1040 return retval; 1041 1041 cb_priv = board->private_data; 1042 1042 nec_priv = &cb_priv->nec7210_priv; 1043 - if (request_region((unsigned long)config->ibbase, cb7210_iosize, "cb7210") == 0) { 1044 - pr_err("gpib: ioports starting at 0x%p are already in use\n", config->ibbase); 1043 + if (request_region(config->ibbase, cb7210_iosize, "cb7210") == 0) { 1044 + pr_err("gpib: ioports starting at 0x%u are already in use\n", config->ibbase); 1045 1045 return -EIO; 1046 1046 } 1047 1047 nec_priv->iobase = config->ibbase; ··· 1471 1471 (unsigned long)curr_dev->resource[0]->start); 1472 1472 return -EIO; 1473 1473 } 1474 - nec_priv->iobase = (void *)(unsigned long)curr_dev->resource[0]->start; 1474 + nec_priv->iobase = curr_dev->resource[0]->start; 1475 1475 cb_priv->fifo_iobase = curr_dev->resource[0]->start; 1476 1476 1477 1477 if (request_irq(curr_dev->irq, cb7210_interrupt, IRQF_SHARED,
+2 -2
drivers/staging/gpib/cb7210/cb7210.h
··· 113 113 HS_STATUS = 0x8, /* HS_STATUS register */ 114 114 }; 115 115 116 - static inline unsigned long nec7210_iobase(const struct cb7210_priv *cb_priv) 116 + static inline u32 nec7210_iobase(const struct cb7210_priv *cb_priv) 117 117 { 118 - return (unsigned long)(cb_priv->nec7210_priv.iobase); 118 + return cb_priv->nec7210_priv.iobase; 119 119 } 120 120 121 121 static inline int cb7210_page_in_bits(unsigned int page)
+1 -1
drivers/staging/gpib/cec/Makefile
··· 1 1 2 - obj-m += cec_gpib.o 2 + obj-$(CONFIG_GPIB_CEC_PCI) += cec_gpib.o 3 3
+2 -2
drivers/staging/gpib/cec/cec_gpib.c
··· 297 297 298 298 cec_priv->plx_iobase = pci_resource_start(cec_priv->pci_device, 1); 299 299 pr_info(" plx9050 base address 0x%lx\n", cec_priv->plx_iobase); 300 - nec_priv->iobase = (void *)(pci_resource_start(cec_priv->pci_device, 3)); 301 - pr_info(" nec7210 base address 0x%p\n", nec_priv->iobase); 300 + nec_priv->iobase = pci_resource_start(cec_priv->pci_device, 3); 301 + pr_info(" nec7210 base address 0x%x\n", nec_priv->iobase); 302 302 303 303 isr_flags |= IRQF_SHARED; 304 304 if (request_irq(cec_priv->pci_device->irq, cec_interrupt, isr_flags, "pci-gpib", board)) {
+2 -52
drivers/staging/gpib/common/gpib_os.c
··· 116 116 return 0; 117 117 } 118 118 119 - void writeb_wrapper(unsigned int value, void *address) 120 - { 121 - writeb(value, address); 122 - }; 123 - EXPORT_SYMBOL(writeb_wrapper); 124 - 125 - void writew_wrapper(unsigned int value, void *address) 126 - { 127 - writew(value, address); 128 - }; 129 - EXPORT_SYMBOL(writew_wrapper); 130 - 131 - unsigned int readb_wrapper(void *address) 132 - { 133 - return readb(address); 134 - }; 135 - EXPORT_SYMBOL(readb_wrapper); 136 - 137 - unsigned int readw_wrapper(void *address) 138 - { 139 - return readw(address); 140 - }; 141 - EXPORT_SYMBOL(readw_wrapper); 142 - 143 - #ifdef CONFIG_HAS_IOPORT 144 - void outb_wrapper(unsigned int value, void *address) 145 - { 146 - outb(value, (unsigned long)(address)); 147 - }; 148 - EXPORT_SYMBOL(outb_wrapper); 149 - 150 - void outw_wrapper(unsigned int value, void *address) 151 - { 152 - outw(value, (unsigned long)(address)); 153 - }; 154 - EXPORT_SYMBOL(outw_wrapper); 155 - 156 - unsigned int inb_wrapper(void *address) 157 - { 158 - return inb((unsigned long)(address)); 159 - }; 160 - EXPORT_SYMBOL(inb_wrapper); 161 - 162 - unsigned int inw_wrapper(void *address) 163 - { 164 - return inw((unsigned long)(address)); 165 - }; 166 - EXPORT_SYMBOL(inw_wrapper); 167 - #endif 168 - 169 119 /* this is a function instead of a constant because of Suse 170 120 * defining HZ to be a function call to get_hz() 171 121 */ ··· 486 536 return -1; 487 537 } 488 538 489 - if (pad > MAX_GPIB_PRIMARY_ADDRESS || sad > MAX_GPIB_SECONDARY_ADDRESS) { 539 + if (pad > MAX_GPIB_PRIMARY_ADDRESS || sad > MAX_GPIB_SECONDARY_ADDRESS || sad < -1) { 490 540 pr_err("gpib: bad address for serial poll"); 491 541 return -1; 492 542 } ··· 1573 1623 1574 1624 if (WARN_ON_ONCE(sizeof(void *) > sizeof(base_addr))) 1575 1625 return -EFAULT; 1576 - config->ibbase = (void *)(unsigned long)(base_addr); 1626 + config->ibbase = base_addr; 1577 1627 1578 1628 return 0; 1579 1629 }
+1 -1
drivers/staging/gpib/eastwood/Makefile
··· 1 1 2 - obj-m += fluke_gpib.o 2 + obj-$(CONFIG_GPIB_FLUKE) += fluke_gpib.o 3 3
+6 -6
drivers/staging/gpib/eastwood/fluke_gpib.c
··· 1011 1011 } 1012 1012 e_priv->gpib_iomem_res = res; 1013 1013 1014 - nec_priv->iobase = ioremap(e_priv->gpib_iomem_res->start, 1014 + nec_priv->mmiobase = ioremap(e_priv->gpib_iomem_res->start, 1015 1015 resource_size(e_priv->gpib_iomem_res)); 1016 - pr_info("gpib: iobase %lx remapped to %p, length=%d\n", 1017 - (unsigned long)e_priv->gpib_iomem_res->start, 1018 - nec_priv->iobase, (int)resource_size(e_priv->gpib_iomem_res)); 1019 - if (!nec_priv->iobase) { 1016 + pr_info("gpib: mmiobase %llx remapped to %p, length=%d\n", 1017 + (u64)e_priv->gpib_iomem_res->start, 1018 + nec_priv->mmiobase, (int)resource_size(e_priv->gpib_iomem_res)); 1019 + if (!nec_priv->mmiobase) { 1020 1020 dev_err(&fluke_gpib_pdev->dev, "Could not map I/O memory\n"); 1021 1021 return -ENOMEM; 1022 1022 } ··· 1107 1107 gpib_free_pseudo_irq(board); 1108 1108 nec_priv = &e_priv->nec7210_priv; 1109 1109 1110 - if (nec_priv->iobase) { 1110 + if (nec_priv->mmiobase) { 1111 1111 fluke_paged_write_byte(e_priv, 0, ISR0_IMR0, ISR0_IMR0_PAGE); 1112 1112 nec7210_board_reset(nec_priv, board); 1113 1113 }
+2 -2
drivers/staging/gpib/eastwood/fluke_gpib.h
··· 72 72 { 73 73 u8 retval; 74 74 75 - retval = readl(nec_priv->iobase + register_num * nec_priv->offset); 75 + retval = readl(nec_priv->mmiobase + register_num * nec_priv->offset); 76 76 return retval; 77 77 } 78 78 ··· 80 80 static inline void fluke_write_byte_nolock(struct nec7210_priv *nec_priv, uint8_t data, 81 81 int register_num) 82 82 { 83 - writel(data, nec_priv->iobase + register_num * nec_priv->offset); 83 + writel(data, nec_priv->mmiobase + register_num * nec_priv->offset); 84 84 } 85 85 86 86 static inline uint8_t fluke_paged_read_byte(struct fluke_priv *e_priv,
+14 -13
drivers/staging/gpib/fmh_gpib/fmh_gpib.c
··· 24 24 #include <linux/slab.h> 25 25 26 26 MODULE_LICENSE("GPL"); 27 + MODULE_DESCRIPTION("GPIB Driver for fmh_gpib_core"); 28 + MODULE_AUTHOR("Frank Mori Hess <fmh6jj@gmail.com>"); 27 29 28 30 static irqreturn_t fmh_gpib_interrupt(int irq, void *arg); 29 31 static int fmh_gpib_attach_holdoff_all(gpib_board_t *board, const gpib_board_config_t *config); ··· 1421 1419 } 1422 1420 e_priv->gpib_iomem_res = res; 1423 1421 1424 - nec_priv->iobase = ioremap(e_priv->gpib_iomem_res->start, 1422 + nec_priv->mmiobase = ioremap(e_priv->gpib_iomem_res->start, 1425 1423 resource_size(e_priv->gpib_iomem_res)); 1426 - if (!nec_priv->iobase) { 1424 + if (!nec_priv->mmiobase) { 1427 1425 dev_err(board->dev, "Could not map I/O memory for gpib\n"); 1428 1426 return -ENOMEM; 1429 1427 } 1430 - dev_info(board->dev, "iobase 0x%lx remapped to %p, length=%ld\n", 1431 - (unsigned long)e_priv->gpib_iomem_res->start, 1432 - nec_priv->iobase, (unsigned long)resource_size(e_priv->gpib_iomem_res)); 1428 + dev_info(board->dev, "iobase %pr remapped to %p\n", 1429 + e_priv->gpib_iomem_res, nec_priv->mmiobase); 1433 1430 1434 1431 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dma_fifos"); 1435 1432 if (!res) { ··· 1508 1507 free_irq(e_priv->irq, board); 1509 1508 if (e_priv->fifo_base) 1510 1509 fifos_write(e_priv, 0, FIFO_CONTROL_STATUS_REG); 1511 - if (nec_priv->iobase) { 1510 + if (nec_priv->mmiobase) { 1512 1511 write_byte(nec_priv, 0, ISR0_IMR0_REG); 1513 1512 nec7210_board_reset(nec_priv, board); 1514 1513 } 1515 1514 if (e_priv->fifo_base) 1516 1515 iounmap(e_priv->fifo_base); 1517 - if (nec_priv->iobase) 1518 - iounmap(nec_priv->iobase); 1516 + if (nec_priv->mmiobase) 1517 + iounmap(nec_priv->mmiobase); 1519 1518 if (e_priv->dma_port_res) { 1520 1519 release_mem_region(e_priv->dma_port_res->start, 1521 1520 resource_size(e_priv->dma_port_res)); ··· 1565 1564 e_priv->gpib_iomem_res = &pci_device->resource[gpib_control_status_pci_resource_index]; 1566 1565 e_priv->dma_port_res = &pci_device->resource[gpib_fifo_pci_resource_index]; 1567 1566 1568 - nec_priv->iobase = ioremap(pci_resource_start(pci_device, 1567 + nec_priv->mmiobase = ioremap(pci_resource_start(pci_device, 1569 1568 gpib_control_status_pci_resource_index), 1570 1569 pci_resource_len(pci_device, 1571 1570 gpib_control_status_pci_resource_index)); 1572 1571 dev_info(board->dev, "base address for gpib control/status registers remapped to 0x%p\n", 1573 1572 nec_priv->mmiobase); 1574 1573 1575 1574 if (e_priv->dma_port_res->flags & IORESOURCE_MEM) { 1576 1575 e_priv->fifo_base = ioremap(pci_resource_start(pci_device, ··· 1633 1632 free_irq(e_priv->irq, board); 1634 1633 if (e_priv->fifo_base) 1635 1634 fifos_write(e_priv, 0, FIFO_CONTROL_STATUS_REG); 1636 - if (nec_priv->iobase) { 1635 + if (nec_priv->mmiobase) { 1637 1636 write_byte(nec_priv, 0, ISR0_IMR0_REG); 1638 1637 nec7210_board_reset(nec_priv, board); 1639 1638 } 1640 1639 if (e_priv->fifo_base) 1641 1640 iounmap(e_priv->fifo_base); 1642 - if (nec_priv->iobase) 1643 - iounmap(nec_priv->iobase); 1641 + if (nec_priv->mmiobase) 1642 + iounmap(nec_priv->mmiobase); 1644 1643 if (e_priv->dma_port_res || e_priv->gpib_iomem_res) 1645 1644 pci_release_regions(to_pci_dev(board->dev)); 1646 1645 if (board->dev)
+2 -2
drivers/staging/gpib/fmh_gpib/fmh_gpib.h
··· 127 127 static inline uint8_t gpib_cs_read_byte(struct nec7210_priv *nec_priv, 128 128 unsigned int register_num) 129 129 { 130 - return readb(nec_priv->iobase + register_num * nec_priv->offset); 130 + return readb(nec_priv->mmiobase + register_num * nec_priv->offset); 131 131 } 132 132 133 133 static inline void gpib_cs_write_byte(struct nec7210_priv *nec_priv, uint8_t data, 134 134 unsigned int register_num) 135 135 { 136 - writeb(data, nec_priv->iobase + register_num * nec_priv->offset); 136 + writeb(data, nec_priv->mmiobase + register_num * nec_priv->offset); 137 137 } 138 138 139 139 static inline uint16_t fifos_read(struct fmh_priv *fmh_priv, int register_num)
+1 -1
drivers/staging/gpib/gpio/Makefile
··· 1 1 2 - obj-m += gpib_bitbang.o 2 + obj-$(CONFIG_GPIB_GPIO) += gpib_bitbang.o 3 3 4 4
+1 -1
drivers/staging/gpib/gpio/gpib_bitbang.c
··· 315 315 enum listener_function_state listener_state; 316 316 }; 317 317 318 - inline long usec_diff(struct timespec64 *a, struct timespec64 *b); 318 + static inline long usec_diff(struct timespec64 *a, struct timespec64 *b); 319 319 static void bb_buffer_print(unsigned char *buffer, size_t length, int cmd, int eoi); 320 320 static void set_data_lines(u8 byte); 321 321 static u8 get_data_lines(void);
+1 -1
drivers/staging/gpib/hp_82335/Makefile
··· 1 1 2 - obj-m += hp82335.o 2 + obj-$(CONFIG_GPIB_HP82335) += hp82335.o 3 3 4 4
+11 -10
drivers/staging/gpib/hp_82335/hp82335.c
··· 9 9 */ 10 10 11 11 #include "hp82335.h" 12 + #include <linux/io.h> 12 13 #include <linux/ioport.h> 13 14 #include <linux/sched.h> 14 15 #include <linux/module.h> ··· 234 233 { 235 234 struct tms9914_priv *tms_priv = &hp_priv->tms9914_priv; 236 235 237 - writeb(0, tms_priv->iobase + HPREG_INTR_CLEAR); 236 + writeb(0, tms_priv->mmiobase + HPREG_INTR_CLEAR); 238 237 } 239 238 240 239 int hp82335_attach(gpib_board_t *board, const gpib_board_config_t *config) ··· 242 241 struct hp82335_priv *hp_priv; 243 242 struct tms9914_priv *tms_priv; 244 243 int retval; 245 - const unsigned long upper_iomem_base = (unsigned long)config->ibbase + hp82335_rom_size; 244 + const unsigned long upper_iomem_base = config->ibbase + hp82335_rom_size; 246 245 247 246 board->status = 0; 248 247 ··· 254 253 tms_priv->write_byte = hp82335_write_byte; 255 254 tms_priv->offset = 1; 256 255 257 - switch ((unsigned long)(config->ibbase)) { 256 + switch (config->ibbase) { 258 257 case 0xc4000: 259 258 case 0xc8000: 260 259 case 0xcc000: ··· 272 271 case 0xfc000: 273 272 break; 274 273 default: 275 - pr_err("hp82335: invalid base io address 0x%p\n", config->ibbase); 274 + pr_err("hp82335: invalid base io address 0x%u\n", config->ibbase); 276 275 return -EINVAL; 277 276 } 278 277 if (!request_mem_region(upper_iomem_base, hp82335_upper_iomem_size, "hp82335")) { ··· 281 280 return -EBUSY; 282 281 } 283 282 hp_priv->raw_iobase = upper_iomem_base; 284 - tms_priv->iobase = ioremap(upper_iomem_base, hp82335_upper_iomem_size); 283 + tms_priv->mmiobase = ioremap(upper_iomem_base, hp82335_upper_iomem_size); 285 284 pr_info("hp82335: upper half of 82335 iomem region 0x%lx remapped to 0x%p\n", 286 285 hp_priv->raw_iobase, tms_priv->mmiobase); 287 286 288 287 retval = request_irq(config->ibirq, hp82335_interrupt, 0, "hp82335", board); 289 288 if (retval) { ··· 297 296 298 297 hp82335_clear_interrupt(hp_priv); 299 298 300 - writeb(INTR_ENABLE, tms_priv->iobase + HPREG_CCR); 299 + writeb(INTR_ENABLE, tms_priv->mmiobase + HPREG_CCR); 301 300 302 301 tms9914_online(board, tms_priv); 303 302 ··· 313 312 tms_priv = &hp_priv->tms9914_priv; 314 313 if (hp_priv->irq) 315 314 free_irq(hp_priv->irq, board); 316 - if (tms_priv->iobase) { 317 - writeb(0, tms_priv->iobase + HPREG_CCR); 315 + if (tms_priv->mmiobase) { 316 + writeb(0, tms_priv->mmiobase + HPREG_CCR); 318 317 tms9914_board_reset(tms_priv); 319 - iounmap((void *)tms_priv->iobase); 318 + iounmap(tms_priv->mmiobase); 320 319 } 321 320 if (hp_priv->raw_iobase) 322 321 release_mem_region(hp_priv->raw_iobase, hp82335_upper_iomem_size);
+1 -1
drivers/staging/gpib/hp_82341/Makefile
··· 1 1 2 - obj-m += hp_82341.o 2 + obj-$(CONFIG_GPIB_HP82341) += hp_82341.o
+8 -8
drivers/staging/gpib/hp_82341/hp_82341.c
··· 473 473 474 474 static uint8_t hp_82341_read_byte(struct tms9914_priv *priv, unsigned int register_num) 475 475 { 476 - return inb((unsigned long)(priv->iobase) + register_num); 476 + return inb(priv->iobase + register_num); 477 477 } 478 478 479 479 static void hp_82341_write_byte(struct tms9914_priv *priv, uint8_t data, unsigned int register_num) 480 480 { 481 - outb(data, (unsigned long)(priv->iobase) + register_num); 481 + outb(data, priv->iobase + register_num); 482 482 } 483 483 484 484 static int hp_82341_find_isapnp_board(struct pnp_dev **dev) ··· 682 682 { 683 683 struct hp_82341_priv *hp_priv; 684 684 struct tms9914_priv *tms_priv; 685 - unsigned long start_addr; 686 - void *iobase; 685 + u32 start_addr; 686 + u32 iobase; 687 687 int irq; 688 688 int i; 689 689 int retval; ··· 704 704 if (retval < 0) 705 705 return retval; 706 706 hp_priv->pnp_dev = dev; 707 - iobase = (void *)(pnp_port_start(dev, 0)); 707 + iobase = pnp_port_start(dev, 0); 708 708 irq = pnp_irq(dev, 0); 709 709 hp_priv->hw_version = HW_VERSION_82341D; 710 710 hp_priv->io_region_offset = 0x8; ··· 714 714 hp_priv->hw_version = HW_VERSION_82341C; 715 715 hp_priv->io_region_offset = 0x400; 716 716 } 717 - pr_info("hp_82341: base io 0x%p\n", iobase); 717 + pr_info("hp_82341: base io 0x%u\n", iobase); 718 718 for (i = 0; i < hp_82341_num_io_regions; ++i) { 719 - start_addr = (unsigned long)(iobase) + i * hp_priv->io_region_offset; 719 + start_addr = iobase + i * hp_priv->io_region_offset; 720 720 if (!request_region(start_addr, hp_82341_region_iosize, "hp_82341")) { 721 721 pr_err("hp_82341: failed to allocate io ports 0x%lx-0x%lx\n", 722 722 start_addr, ··· 725 725 } 726 726 hp_priv->iobase[i] = start_addr; 727 727 } 728 - tms_priv->iobase = (void *)(hp_priv->iobase[2]); 728 + tms_priv->iobase = hp_priv->iobase[2]; 729 729 if (hp_priv->hw_version == HW_VERSION_82341D) { 730 730 retval = isapnp_cfg_begin(hp_priv->pnp_dev->card->number, 731 731 hp_priv->pnp_dev->number);
+1 -11
drivers/staging/gpib/include/gpibP.h
··· 16 16 17 17 #include <linux/fs.h> 18 18 #include <linux/interrupt.h> 19 + #include <linux/io.h> 19 20 20 21 void gpib_register_driver(gpib_interface_t *interface, struct module *mod); 21 22 void gpib_unregister_driver(gpib_interface_t *interface); ··· 35 34 extern gpib_board_t board_array[GPIB_MAX_NUM_BOARDS]; 36 35 37 36 extern struct list_head registered_drivers; 38 - 39 - #include <linux/io.h> 40 - 41 - void writeb_wrapper(unsigned int value, void *address); 42 - unsigned int readb_wrapper(void *address); 43 - void outb_wrapper(unsigned int value, void *address); 44 - unsigned int inb_wrapper(void *address); 45 - void writew_wrapper(unsigned int value, void *address); 46 - unsigned int readw_wrapper(void *address); 47 - void outw_wrapper(unsigned int value, void *address); 48 - unsigned int inw_wrapper(void *address); 49 37 50 38 #endif // _GPIB_P_H 51 39
+2 -1
drivers/staging/gpib/include/gpib_types.h
··· 31 31 void *init_data; 32 32 int init_data_length; 33 33 /* IO base address to use for non-pnp cards (set by core, driver should make local copy) */ 34 - void *ibbase; 34 + u32 ibbase; 35 + void __iomem *mmibbase; 35 36 /* IRQ to use for non-pnp cards (set by core, driver should make local copy) */ 36 37 unsigned int ibirq; 37 38 /* dma channel to use for non-pnp cards (set by core, driver should make local copy) */
+4 -1
drivers/staging/gpib/include/nec7210.h
··· 18 18 19 19 /* struct used to provide variables local to a nec7210 chip */ 20 20 struct nec7210_priv { 21 - void *iobase; 21 + #ifdef CONFIG_HAS_IOPORT 22 + u32 iobase; 23 + #endif 24 + void __iomem *mmiobase; 22 25 unsigned int offset; // offset between successive nec7210 io addresses 23 26 unsigned int dma_channel; 24 27 u8 *dma_buffer;
+4 -1
drivers/staging/gpib/include/tms9914.h
··· 20 20 21 21 /* struct used to provide variables local to a tms9914 chip */ 22 22 struct tms9914_priv { 23 - void *iobase; 23 + #ifdef CONFIG_HAS_IOPORT 24 + u32 iobase; 25 + #endif 26 + void __iomem *mmiobase; 24 27 unsigned int offset; // offset between successive tms9914 io addresses 25 28 unsigned int dma_channel; 26 29 // software copy of bits written to interrupt mask registers
+1 -1
drivers/staging/gpib/ines/Makefile
··· 1 1 ccflags-$(CONFIG_GPIB_PCMCIA) := -DGPIB_PCMCIA 2 - obj-m += ines_gpib.o 2 + obj-$(CONFIG_GPIB_INES) += ines_gpib.o 3 3 4 4
+2 -2
drivers/staging/gpib/ines/ines.h
··· 83 83 /* inb/outb wrappers */ 84 84 static inline unsigned int ines_inb(struct ines_priv *priv, unsigned int register_number) 85 85 { 86 - return inb((unsigned long)(priv->nec7210_priv.iobase) + 86 + return inb(priv->nec7210_priv.iobase + 87 87 register_number * priv->nec7210_priv.offset); 88 88 } 89 89 90 90 static inline void ines_outb(struct ines_priv *priv, unsigned int value, 91 91 unsigned int register_number) 92 92 { 93 - outb(value, (unsigned long)(priv->nec7210_priv.iobase) + 93 + outb(value, priv->nec7210_priv.iobase + 94 94 register_number * priv->nec7210_priv.offset); 95 95 } 96 96
+11 -11
drivers/staging/gpib/ines/ines_gpib.c
··· 273 273 struct nec7210_priv *nec_priv = &priv->nec7210_priv; 274 274 275 275 if (priv->pci_chip_type == PCI_CHIP_QUANCOM) { 276 - if ((inb((unsigned long)nec_priv->iobase + 276 + if ((inb(nec_priv->iobase + 277 277 QUANCOM_IRQ_CONTROL_STATUS_REG) & 278 278 QUANCOM_IRQ_ASSERTED_BIT)) 279 - outb(QUANCOM_IRQ_ENABLE_BIT, (unsigned long)(nec_priv->iobase) + 279 + outb(QUANCOM_IRQ_ENABLE_BIT, nec_priv->iobase + 280 280 QUANCOM_IRQ_CONTROL_STATUS_REG); 281 281 } 282 282 ··· 780 780 781 781 if (pci_request_regions(ines_priv->pci_device, "ines-gpib")) 782 782 return -1; 783 - nec_priv->iobase = (void *)(pci_resource_start(ines_priv->pci_device, 784 - found_id.gpib_region)); 783 + nec_priv->iobase = pci_resource_start(ines_priv->pci_device, 784 + found_id.gpib_region); 785 785 786 786 ines_priv->pci_chip_type = found_id.pci_chip_type; 787 787 nec_priv->offset = found_id.io_offset; ··· 840 840 } 841 841 break; 842 842 case PCI_CHIP_QUANCOM: 843 - outb(QUANCOM_IRQ_ENABLE_BIT, (unsigned long)(nec_priv->iobase) + 843 + outb(QUANCOM_IRQ_ENABLE_BIT, nec_priv->iobase + 844 844 QUANCOM_IRQ_CONTROL_STATUS_REG); 845 845 break; 846 846 case PCI_CHIP_QUICKLOGIC5030: ··· 899 899 ines_priv = board->private_data; 900 900 nec_priv = &ines_priv->nec7210_priv; 901 901 902 - if (!request_region((unsigned long)config->ibbase, ines_isa_iosize, "ines_gpib")) { 903 - pr_err("ines_gpib: ioports at 0x%p already in use\n", config->ibbase); 902 + if (!request_region(config->ibbase, ines_isa_iosize, "ines_gpib")) { 903 + pr_err("ines_gpib: ioports at 0x%x already in use\n", config->ibbase); 904 904 return -1; 905 905 } 906 906 nec_priv->iobase = config->ibbase; ··· 931 931 break; 932 932 case PCI_CHIP_QUANCOM: 933 933 if (nec_priv->iobase) 934 - outb(0, (unsigned long)(nec_priv->iobase) + 934 + outb(0, nec_priv->iobase + 935 935 QUANCOM_IRQ_CONTROL_STATUS_REG); 936 936 break; 937 937 default: ··· 960 960 free_irq(ines_priv->irq, board); 961 961 if (nec_priv->iobase) { 962 962 nec7210_board_reset(nec_priv, board); 963 - release_region((unsigned long)(nec_priv->iobase), ines_isa_iosize); 963 + release_region(nec_priv->iobase, ines_isa_iosize); 964 964 } 965 965 } 966 966 ines_free_private(board); ··· 1355 1355 return -1; 1356 1356 } 1357 1357 1358 - nec_priv->iobase = (void *)(unsigned long)curr_dev->resource[0]->start; 1358 + nec_priv->iobase = curr_dev->resource[0]->start; 1359 1359 1360 1360 nec7210_board_reset(nec_priv, board); 1361 1361 ··· 1410 1410 free_irq(ines_priv->irq, board); 1411 1411 if (nec_priv->iobase) { 1412 1412 nec7210_board_reset(nec_priv, board); 1413 - release_region((unsigned long)(nec_priv->iobase), ines_pcmcia_iosize); 1413 + release_region(nec_priv->iobase, ines_pcmcia_iosize); 1414 1414 } 1415 1415 } 1416 1416 ines_free_private(board);
+1 -1
drivers/staging/gpib/lpvo_usb_gpib/Makefile
··· 1 1 2 - obj-m += lpvo_usb_gpib.o 2 + obj-$(CONFIG_GPIB_LPVO) += lpvo_usb_gpib.o 3 3
+9 -9
drivers/staging/gpib/lpvo_usb_gpib/lpvo_usb_gpib.c
··· 99 99 #define USB_GPIB_DEBUG_ON "\nIBDE\xAA\n" 100 100 #define USB_GPIB_SET_LISTEN "\nIBDT0\n" 101 101 #define USB_GPIB_SET_TALK "\nIBDT1\n" 102 - #define USB_GPIB_SET_LINES "\nIBDC\n" 103 - #define USB_GPIB_SET_DATA "\nIBDM\n" 102 + #define USB_GPIB_SET_LINES "\nIBDC.\n" 103 + #define USB_GPIB_SET_DATA "\nIBDM.\n" 104 104 #define USB_GPIB_READ_LINES "\nIBD?C\n" 105 105 #define USB_GPIB_READ_DATA "\nIBD?M\n" 106 106 #define USB_GPIB_READ_BUS "\nIBD??\n" ··· 210 210 * (unix time in sec and NANOsec) 211 211 */ 212 212 213 - inline int usec_diff(struct timespec64 *a, struct timespec64 *b) 213 + static inline int usec_diff(struct timespec64 *a, struct timespec64 *b) 214 214 { 215 215 return ((a->tv_sec - b->tv_sec) * 1000000 + 216 216 (a->tv_nsec - b->tv_nsec) / 1000); ··· 436 436 static int usb_gpib_attach(gpib_board_t *board, const gpib_board_config_t *config) 437 437 { 438 438 int retval, j; 439 - int base = (long)config->ibbase; 439 + u32 base = config->ibbase; 440 440 char *device_path; 441 441 int match; 442 442 struct usb_device *udev; ··· 589 589 size_t *bytes_written) 590 590 { 591 591 int i, retval; 592 - char command[6] = "IBc\n"; 592 + char command[6] = "IBc.\n"; 593 593 594 594 DIA_LOG(1, "enter %p\n", board); 595 595 ··· 608 608 } 609 609 610 610 /** 611 - * disable_eos() - Disable END on eos byte (END on EOI only) 611 + * usb_gpib_disable_eos() - Disable END on eos byte (END on EOI only) 612 612 * 613 613 * @board: the gpib_board data area for this gpib interface 614 614 * ··· 624 624 } 625 625 626 626 /** 627 - * enable_eos() - Enable END for reads when eos byte is received. 627 + * usb_gpib_enable_eos() - Enable END for reads when eos byte is received. 628 628 * 629 629 * @board: the gpib_board data area for this gpib interface 630 630 * @eos_byte: the 'eos' byte ··· 647 647 } 648 648 649 649 /** 650 - * go_to_standby() - De-assert ATN 650 + * usb_gpib_go_to_standby() - De-assert ATN 651 651 * 652 652 * @board: the gpib_board data area for this gpib interface 653 653 */ ··· 664 664 } 665 665 666 666 /** 667 - * interface_clear() - Assert or de-assert IFC 667 + * usb_gpib_interface_clear() - Assert or de-assert IFC 668 668 * 669 669 * @board: the gpib_board data area for this gpib interface 670 670 * assert: 1: assert IFC; 0: de-assert IFC
+8 -8
drivers/staging/gpib/nec7210/nec7210.c
··· 1035 1035 /* wrappers for io */ 1036 1036 uint8_t nec7210_ioport_read_byte(struct nec7210_priv *priv, unsigned int register_num) 1037 1037 { 1038 - return inb((unsigned long)(priv->iobase) + register_num * priv->offset); 1038 + return inb(priv->iobase + register_num * priv->offset); 1039 1039 } 1040 1040 EXPORT_SYMBOL(nec7210_ioport_read_byte); 1041 1041 ··· 1047 1047 */ 1048 1048 nec7210_locking_ioport_write_byte(priv, data, register_num); 1049 1049 else 1050 - outb(data, (unsigned long)(priv->iobase) + register_num * priv->offset); 1050 + outb(data, priv->iobase + register_num * priv->offset); 1051 1051 } 1052 1052 EXPORT_SYMBOL(nec7210_ioport_write_byte); 1053 1053 ··· 1058 1058 unsigned long flags; 1059 1059 1060 1060 spin_lock_irqsave(&priv->register_page_lock, flags); 1061 - retval = inb((unsigned long)(priv->iobase) + register_num * priv->offset); 1061 + retval = inb(priv->iobase + register_num * priv->offset); 1062 1062 spin_unlock_irqrestore(&priv->register_page_lock, flags); 1063 1063 return retval; 1064 1064 } ··· 1072 1072 spin_lock_irqsave(&priv->register_page_lock, flags); 1073 1073 if (register_num == AUXMR) 1074 1074 udelay(1); 1075 - outb(data, (unsigned long)(priv->iobase) + register_num * priv->offset); 1075 + outb(data, priv->iobase + register_num * priv->offset); 1076 1076 spin_unlock_irqrestore(&priv->register_page_lock, flags); 1077 1077 } 1078 1078 EXPORT_SYMBOL(nec7210_locking_ioport_write_byte); ··· 1080 1080 1081 1081 uint8_t nec7210_iomem_read_byte(struct nec7210_priv *priv, unsigned int register_num) 1082 1082 { 1083 - return readb(priv->iobase + register_num * priv->offset); 1083 + return readb(priv->mmiobase + register_num * priv->offset); 1084 1084 } 1085 1085 EXPORT_SYMBOL(nec7210_iomem_read_byte); 1086 1086 ··· 1092 1092 */ 1093 1093 nec7210_locking_iomem_write_byte(priv, data, register_num); 1094 1094 else 1095 - writeb(data, priv->iobase + register_num * priv->offset); 1095 + writeb(data, priv->mmiobase + register_num * priv->offset); 1096 1096 } 1097 1097 EXPORT_SYMBOL(nec7210_iomem_write_byte); 1098 1098 ··· 1102 1102 unsigned long flags; 1103 1103 1104 1104 spin_lock_irqsave(&priv->register_page_lock, flags); 1105 - retval = readb(priv->iobase + register_num * priv->offset); 1105 + retval = readb(priv->mmiobase + register_num * priv->offset); 1106 1106 spin_unlock_irqrestore(&priv->register_page_lock, flags); 1107 1107 return retval; 1108 1108 } ··· 1116 1116 spin_lock_irqsave(&priv->register_page_lock, flags); 1117 1117 if (register_num == AUXMR) 1118 1118 udelay(1); 1119 - writeb(data, priv->iobase + register_num * priv->offset); 1119 + writeb(data, priv->mmiobase + register_num * priv->offset); 1120 1120 spin_unlock_irqrestore(&priv->register_page_lock, flags); 1121 1121 } 1122 1122 EXPORT_SYMBOL(nec7210_locking_iomem_write_byte);
+1 -1
drivers/staging/gpib/ni_usb/Makefile
··· 1 1 2 - obj-m += ni_usb_gpib.o 2 + obj-$(CONFIG_GPIB_NI_USB) += ni_usb_gpib.o 3 3 4 4
+1 -1
drivers/staging/gpib/pc2/Makefile
··· 1 1 2 - obj-m += pc2_gpib.o 2 + obj-$(CONFIG_GPIB_PC2) += pc2_gpib.o 3 3 4 4 5 5
+8 -8
drivers/staging/gpib/pc2/pc2_gpib.c
··· 426 426 nec_priv = &pc2_priv->nec7210_priv; 427 427 nec_priv->offset = pc2_reg_offset; 428 428 429 - if (request_region((unsigned long)config->ibbase, pc2_iosize, "pc2") == 0) { 429 + if (request_region(config->ibbase, pc2_iosize, "pc2") == 0) { 430 430 pr_err("gpib: ioports are already in use\n"); 431 431 return -1; 432 432 } ··· 471 471 free_irq(pc2_priv->irq, board); 472 472 if (nec_priv->iobase) { 473 473 nec7210_board_reset(nec_priv, board); 474 - release_region((unsigned long)(nec_priv->iobase), pc2_iosize); 474 + release_region(nec_priv->iobase, pc2_iosize); 475 475 } 476 476 if (nec_priv->dma_buffer) { 477 477 dma_free_coherent(board->dev, nec_priv->dma_buffer_length, ··· 498 498 nec_priv = &pc2_priv->nec7210_priv; 499 499 nec_priv->offset = pc2a_reg_offset; 500 500 501 - switch ((unsigned long)(config->ibbase)) { 501 + switch (config->ibbase) { 502 502 case 0x02e1: 503 503 case 0x22e1: 504 504 case 0x42e1: 505 505 case 0x62e1: 506 506 break; 507 507 default: 508 - pr_err("PCIIa base range invalid, must be one of 0x[0246]2e1, but is 0x%p\n", 508 + pr_err("PCIIa base range invalid, must be one of 0x[0246]2e1, but is 0x%d\n", 509 509 config->ibbase); 510 510 return -1; 511 511 } ··· 522 522 unsigned int err = 0; 523 523 524 524 for (i = 0; i < num_registers; i++) { 525 - if (check_region((unsigned long)config->ibbase + i * pc2a_reg_offset, 1)) 525 + if (check_region(config->ibbase + i * pc2a_reg_offset, 1)) 526 526 err++; 527 527 } 528 528 if (config->ibirq && check_region(pc2a_clear_intr_iobase + config->ibirq, 1)) ··· 533 533 } 534 534 #endif 535 535 for (i = 0; i < num_registers; i++) { 536 - if (!request_region((unsigned long)config->ibbase + 536 + if (!request_region(config->ibbase + 537 537 i * pc2a_reg_offset, 1, "pc2a")) { 538 538 pr_err("gpib: ioports are already in use"); 539 539 for (j = 0; j < i; j++) 540 - release_region((unsigned long)(config->ibbase) + 540 + release_region(config->ibbase + 541 541 j * pc2a_reg_offset, 1); return -1; 542 542 } 543 543 ··· 608 608 if (nec_priv->iobase) { 609 609 nec7210_board_reset(nec_priv, board); 610 610 for (i = 0; i < num_registers; i++) 611 - release_region((unsigned long)nec_priv->iobase + 611 + release_region(nec_priv->iobase + 612 612 i * pc2a_reg_offset, 1); 613 613 } 614 614 if (pc2_priv->clear_intr_addr)
+1 -1
drivers/staging/gpib/tms9914/Makefile
··· 1 1 2 - obj-m += tms9914.o 2 + obj-$(CONFIG_GPIB_TMS9914) += tms9914.o 3 3 4 4 5 5
+4 -4
drivers/staging/gpib/tms9914/tms9914.c
··· 866 866 // wrapper for inb 867 867 uint8_t tms9914_ioport_read_byte(struct tms9914_priv *priv, unsigned int register_num) 868 868 { 869 - return inb((unsigned long)(priv->iobase) + register_num * priv->offset); 869 + return inb(priv->iobase + register_num * priv->offset); 870 870 } 871 871 EXPORT_SYMBOL_GPL(tms9914_ioport_read_byte); 872 872 873 873 // wrapper for outb 874 874 void tms9914_ioport_write_byte(struct tms9914_priv *priv, uint8_t data, unsigned int register_num) 875 875 { 876 - outb(data, (unsigned long)(priv->iobase) + register_num * priv->offset); 876 + outb(data, priv->iobase + register_num * priv->offset); 877 877 if (register_num == AUXCR) 878 878 udelay(1); 879 879 } ··· 883 883 // wrapper for readb 884 884 uint8_t tms9914_iomem_read_byte(struct tms9914_priv *priv, unsigned int register_num) 885 885 { 886 - return readb(priv->iobase + register_num * priv->offset); 886 + return readb(priv->mmiobase + register_num * priv->offset); 887 887 } 888 888 EXPORT_SYMBOL_GPL(tms9914_iomem_read_byte); 889 889 890 890 // wrapper for writeb 891 891 void tms9914_iomem_write_byte(struct tms9914_priv *priv, uint8_t data, unsigned int register_num) 892 892 { 893 - writeb(data, priv->iobase + register_num * priv->offset); 893 + writeb(data, priv->mmiobase + register_num * priv->offset); 894 894 if (register_num == AUXCR) 895 895 udelay(1); 896 896 }
+1 -1
drivers/staging/gpib/tnt4882/Makefile
··· 1 1 ccflags-$(CONFIG_GPIB_PCMCIA) := -DGPIB_PCMCIA 2 - obj-m += tnt4882.o 2 + obj-$(CONFIG_GPIB_NI_PCI_ISA) += tnt4882.o 3 3 4 4 tnt4882-objs := tnt4882_gpib.o mite.o 5 5
-69
drivers/staging/gpib/tnt4882/mite.c
··· 148 148 } 149 149 pr_info("\n"); 150 150 } 151 - 152 - int mite_bytes_transferred(struct mite_struct *mite, int chan) 153 - { 154 - int dar, fcr; 155 - 156 - dar = readl(mite->mite_io_addr + MITE_DAR + CHAN_OFFSET(chan)); 157 - fcr = readl(mite->mite_io_addr + MITE_FCR + CHAN_OFFSET(chan)) & 0x000000FF; 158 - return dar - fcr; 159 - } 160 - 161 - int mite_dma_tcr(struct mite_struct *mite) 162 - { 163 - int tcr; 164 - int lkar; 165 - 166 - lkar = readl(mite->mite_io_addr + CHAN_OFFSET(0) + MITE_LKAR); 167 - tcr = readl(mite->mite_io_addr + CHAN_OFFSET(0) + MITE_TCR); 168 - MDPRINTK("lkar=0x%08x tcr=%d\n", lkar, tcr); 169 - 170 - return tcr; 171 - } 172 - 173 - void mite_dma_disarm(struct mite_struct *mite) 174 - { 175 - int chor; 176 - 177 - /* disarm */ 178 - chor = CHOR_ABORT; 179 - writel(chor, mite->mite_io_addr + CHAN_OFFSET(0) + MITE_CHOR); 180 - } 181 - 182 - void mite_dump_regs(struct mite_struct *mite) 183 - { 184 - void *addr = 0; 185 - unsigned long temp = 0; 186 - 187 - pr_info("mite address is =0x%p\n", mite->mite_io_addr); 188 - 189 - addr = mite->mite_io_addr + MITE_CHOR + CHAN_OFFSET(0); 190 - pr_info("mite status[CHOR]at 0x%p =0x%08lx\n", addr, temp = readl(addr)); 191 - //mite_decode(mite_CHOR_strings,temp); 192 - addr = mite->mite_io_addr + MITE_CHCR + CHAN_OFFSET(0); 193 - pr_info("mite status[CHCR]at 0x%p =0x%08lx\n", addr, temp = readl(addr)); 194 - //mite_decode(mite_CHCR_strings,temp); 195 - addr = mite->mite_io_addr + MITE_TCR + CHAN_OFFSET(0); 196 - pr_info("mite status[TCR] at 0x%p =0x%08x\n", addr, readl(addr)); 197 - addr = mite->mite_io_addr + MITE_MCR + CHAN_OFFSET(0); 198 - pr_info("mite status[MCR] at 0x%p =0x%08lx\n", addr, temp = readl(addr)); 199 - //mite_decode(mite_MCR_strings,temp); 200 - addr = mite->mite_io_addr + MITE_MAR + CHAN_OFFSET(0); 201 - pr_info("mite status[MAR] at 0x%p =0x%08x\n", addr, readl(addr)); 202 - addr = mite->mite_io_addr + MITE_DCR + CHAN_OFFSET(0); 203 - pr_info("mite status[DCR] at 0x%p =0x%08lx\n", addr, temp = readl(addr)); 204 - //mite_decode(mite_CR_strings,temp); 205 - addr = mite->mite_io_addr + MITE_DAR + CHAN_OFFSET(0); 206 - pr_info("mite status[DAR] at 0x%p =0x%08x\n", addr, readl(addr)); 207 - addr = mite->mite_io_addr + MITE_LKCR + CHAN_OFFSET(0); 208 - pr_info("mite status[LKCR]at 0x%p =0x%08lx\n", addr, temp = readl(addr)); 209 - //mite_decode(mite_CR_strings,temp); 210 - addr = mite->mite_io_addr + MITE_LKAR + CHAN_OFFSET(0); 211 - pr_info("mite status[LKAR]at 0x%p =0x%08x\n", addr, readl(addr)); 212 - 213 - addr = mite->mite_io_addr + MITE_CHSR + CHAN_OFFSET(0); 214 - pr_info("mite status[CHSR]at 0x%p =0x%08lx\n", addr, temp = readl(addr)); 215 - //mite_decode(mite_CHSR_strings,temp); 216 - addr = mite->mite_io_addr + MITE_FCR + CHAN_OFFSET(0); 217 - pr_info("mite status[FCR] at 0x%p =0x%08x\n\n", addr, readl(addr)); 218 - } 219 -
+2 -11
drivers/staging/gpib/tnt4882/mite.h
··· 34 34 35 35 struct pci_dev *pcidev; 36 36 unsigned long mite_phys_addr; 37 - void *mite_io_addr; 37 + void __iomem *mite_io_addr; 38 38 unsigned long daq_phys_addr; 39 - void *daq_io_addr; 39 + void __iomem *daq_io_addr; 40 40 41 41 int DMA_CheckNearEnd; 42 42 ··· 60 60 int mite_setup(struct mite_struct *mite); 61 61 void mite_unsetup(struct mite_struct *mite); 62 62 void mite_list_devices(void); 63 - 64 - int mite_dma_tcr(struct mite_struct *mite); 65 - 66 - void mite_dma_arm(struct mite_struct *mite); 67 - void mite_dma_disarm(struct mite_struct *mite); 68 - 69 - void mite_dump_regs(struct mite_struct *mite); 70 - void mite_setregs(struct mite_struct *mite, unsigned long ll_start, int chan, int dir); 71 - int mite_bytes_transferred(struct mite_struct *mite, int chan); 72 63 73 64 #define CHAN_OFFSET(x) (0x100 * (x)) 74 65
+30 -37
drivers/staging/gpib/tnt4882/tnt4882_gpib.c
··· 45 45 unsigned short imr0_bits; 46 46 unsigned short imr3_bits; 47 47 unsigned short auxg_bits; // bits written to auxiliary register G 48 - void (*io_writeb)(unsigned int value, void *address); 49 - void (*io_writew)(unsigned int value, void *address); 50 - unsigned int (*io_readb)(void *address); 51 - unsigned int (*io_readw)(void *address); 52 48 }; 53 49 54 50 // interface functions ··· 100 104 /* paged io */ 101 105 static inline unsigned int tnt_paged_readb(struct tnt4882_priv *priv, unsigned long offset) 102 106 { 103 - priv->io_writeb(AUX_PAGEIN, priv->nec7210_priv.iobase + AUXMR * priv->nec7210_priv.offset); 107 + iowrite8(AUX_PAGEIN, priv->nec7210_priv.mmiobase + AUXMR * priv->nec7210_priv.offset); 104 108 udelay(1); 105 - return priv->io_readb(priv->nec7210_priv.iobase + offset); 109 + return ioread8(priv->nec7210_priv.mmiobase + offset); 106 110 } 107 111 108 112 static inline void tnt_paged_writeb(struct tnt4882_priv *priv, unsigned int value, 109 113 unsigned long offset) 110 114 { 111 - priv->io_writeb(AUX_PAGEIN, priv->nec7210_priv.iobase + AUXMR * priv->nec7210_priv.offset); 115 + iowrite8(AUX_PAGEIN, priv->nec7210_priv.mmiobase + AUXMR * priv->nec7210_priv.offset); 112 116 udelay(1); 113 - priv->io_writeb(value, priv->nec7210_priv.iobase + offset); 117 + iowrite8(value, priv->nec7210_priv.mmiobase + offset); 114 118 } 115 119 116 120 /* readb/writeb wrappers */ 117 121 static inline unsigned short tnt_readb(struct tnt4882_priv *priv, unsigned long offset) 118 122 { 119 - void *address = priv->nec7210_priv.iobase + offset; 123 + void *address = priv->nec7210_priv.mmiobase + offset; 120 124 unsigned long flags; 121 125 unsigned short retval; 122 126 spinlock_t *register_lock = &priv->nec7210_priv.register_page_lock; ··· 130 134 switch (priv->nec7210_priv.type) { 131 135 case TNT4882: 132 136 case TNT5004: 133 - retval = priv->io_readb(address); 137 + retval = ioread8(address); 134 138 break; 135 139 case NAT4882: 136 140 retval = tnt_paged_readb(priv, offset - tnt_pagein_offset); ··· 145 149 } 146 150 break; 147 151 default: 148 - retval = priv->io_readb(address); 152 + retval = ioread8(address); 149 153 break; 150 154 } 151 155 spin_unlock_irqrestore(register_lock, flags); ··· 154 158 155 159 static inline void tnt_writeb(struct tnt4882_priv *priv, unsigned short value, unsigned long offset) 156 160 { 157 - void *address = priv->nec7210_priv.iobase + offset; 161 + void *address = priv->nec7210_priv.mmiobase + offset; 158 162 unsigned long flags; 159 163 spinlock_t *register_lock = &priv->nec7210_priv.register_page_lock; 160 164 ··· 166 170 switch (priv->nec7210_priv.type) { 167 171 case TNT4882: 168 172 case TNT5004: 169 - priv->io_writeb(value, address); 173 + iowrite8(value, address); 170 174 break; 171 175 case NAT4882: 172 176 tnt_paged_writeb(priv, value, offset - tnt_pagein_offset); ··· 179 183 } 180 184 break; 181 185 default: 182 - priv->io_writeb(value, address); 186 + iowrite8(value, address); 183 187 break; 184 188 } 185 189 spin_unlock_irqrestore(register_lock, flags); ··· 284 288 while (fifo_word_available(tnt_priv) && count + 2 <= num_bytes) { 285 289 short word; 286 290 287 - word = tnt_priv->io_readw(nec_priv->iobase + FIFOB); 291 + word = ioread16(nec_priv->mmiobase + FIFOB); 288 292 buffer[count++] = word & 0xff; 289 293 buffer[count++] = (word >> 8) & 0xff; 290 294 } ··· 569 573 word = buffer[count++] & 0xff; 570 574 if (count < length) 571 575 word |= (buffer[count++] << 8) & 0xff00; 572 - tnt_priv->io_writew(word, nec_priv->iobase + FIFOB); 576 + iowrite16(word, nec_priv->mmiobase + FIFOB); 573 577 } 574 578 // avoid unnecessary HR_NFF interrupts 575 579 // tnt_priv->imr3_bits |= HR_NFF; ··· 1265 1269 if (tnt4882_allocate_private(board)) 1266 1270 return -ENOMEM; 1267 1271 tnt_priv = board->private_data; 1268 - tnt_priv->io_writeb = writeb_wrapper; 1269 - tnt_priv->io_readb = readb_wrapper; 1270 - tnt_priv->io_writew = writew_wrapper; 1271 - tnt_priv->io_readw = readw_wrapper; 1272 1272 nec_priv = &tnt_priv->nec7210_priv; 1273 1273 nec_priv->type = TNT4882; 1274 1274 nec_priv->read_byte = nec7210_locking_iomem_read_byte; ··· 1316 1324 return retval; 1317 1325 } 1318 1326 1319 - nec_priv->iobase = tnt_priv->mite->daq_io_addr; 1327 + nec_priv->mmiobase = tnt_priv->mite->daq_io_addr; 1320 1328 1321 1329 // get irq 1322 1330 if (request_irq(mite_irq(tnt_priv->mite), tnt4882_interrupt, isr_flags, ··· 1351 1359 if (tnt_priv) { 1352 1360 nec_priv = &tnt_priv->nec7210_priv; 1353 1361 1354 - if (nec_priv->iobase) 1362 + if (nec_priv->mmiobase) 1355 1363 tnt4882_board_reset(tnt_priv, board); 1356 1364 if (tnt_priv->irq) 1357 1365 free_irq(tnt_priv->irq, board); ··· 1392 1400 struct tnt4882_priv *tnt_priv; 1393 1401 struct nec7210_priv *nec_priv; 1394 1402 int isr_flags = 0; 1395 - void *iobase; 1403 + u32 iobase; 1396 1404 int irq; 1397 1405 1398 1406 board->status = 0; ··· 1400 1408 if (tnt4882_allocate_private(board)) 1401 1409 return -ENOMEM; 1402 1410 tnt_priv = board->private_data; 1403 - tnt_priv->io_writeb = outb_wrapper; 1404 - tnt_priv->io_readb = inb_wrapper; 1405 - tnt_priv->io_writew = outw_wrapper; 1406 - tnt_priv->io_readw = inw_wrapper; 1407 1411 nec_priv = &tnt_priv->nec7210_priv; 1408 1412 nec_priv->type = chipset; 1409 1413 nec_priv->read_byte = nec7210_locking_ioport_read_byte; ··· 1415 1427 if (retval < 0) 1416 1428 return retval; 1417 1429 tnt_priv->pnp_dev = dev; 1418 - iobase = (void *)(pnp_port_start(dev, 0)); 1430 + iobase = pnp_port_start(dev, 0); 1419 1431 irq = pnp_irq(dev, 0); 1420 1432 } else { 1421 1433 iobase = config->ibbase; 1422 1434 irq = config->ibirq; 1423 1435 } 1424 1436 // allocate ioports 1425 - if (!request_region((unsigned long)(iobase), atgpib_iosize, "atgpib")) { 1437 + if (!request_region(iobase, atgpib_iosize, "atgpib")) { 1426 1438 pr_err("tnt4882: failed to allocate ioports\n"); 1427 1439 return -1; 1428 1440 } 1429 - nec_priv->iobase = iobase; 1441 + nec_priv->mmiobase = ioport_map(iobase, atgpib_iosize); 1442 + if (!nec_priv->mmiobase) 1443 + return -1; 1430 1444 1431 1445 // get irq 1432 1446 if (request_irq(irq, tnt4882_interrupt, isr_flags, "atgpib", board)) { ··· 1468 1478 tnt4882_board_reset(tnt_priv, board); 1469 1479 if (tnt_priv->irq) 1470 1480 free_irq(tnt_priv->irq, board); 1481 + if (nec_priv->mmiobase) 1482 + ioport_unmap(nec_priv->mmiobase); 1471 1483 if (nec_priv->iobase) 1472 - release_region((unsigned long)(nec_priv->iobase), atgpib_iosize); 1484 + release_region(nec_priv->iobase, atgpib_iosize); 1473 1485 if (tnt_priv->pnp_dev) 1474 1486 pnp_device_detach(tnt_priv->pnp_dev); 1475 1487 } ··· 1809 1817 if (tnt4882_allocate_private(board)) 1810 1818 return -ENOMEM; 1811 1819 tnt_priv = board->private_data; 1812 - tnt_priv->io_writeb = outb_wrapper; 1813 - tnt_priv->io_readb = inb_wrapper; 1814 - tnt_priv->io_writew = outw_wrapper; 1815 - tnt_priv->io_readw = inw_wrapper; 1816 1820 nec_priv = &tnt_priv->nec7210_priv; 1817 1821 nec_priv->type = TNT4882; 1818 1822 nec_priv->read_byte = nec7210_locking_ioport_read_byte; ··· 1823 1835 return -EIO; 1824 1836 } 1825 1837 1826 - nec_priv->iobase = (void *)(unsigned long)curr_dev->resource[0]->start; 1838 + nec_priv->mmiobase = ioport_map(curr_dev->resource[0]->start, 1839 + resource_size(curr_dev->resource[0])); 1840 + if (!nec_priv->mmiobase) 1841 + return -1; 1827 1842 1828 1843 // get irq 1829 1844 if (request_irq(curr_dev->irq, tnt4882_interrupt, isr_flags, "tnt4882", board)) { ··· 1851 1860 nec_priv = &tnt_priv->nec7210_priv; 1852 1861 if (tnt_priv->irq) 1853 1862 free_irq(tnt_priv->irq, board); 1863 + if (nec_priv->mmiobase) 1864 + ioport_unmap(nec_priv->mmiobase); 1854 1865 if (nec_priv->iobase) { 1855 1866 tnt4882_board_reset(tnt_priv, board); 1856 - release_region((unsigned long)nec_priv->iobase, pcmcia_gpib_iosize); 1867 + release_region(nec_priv->iobase, pcmcia_gpib_iosize); 1857 1868 } 1858 1869 } 1859 1870 tnt4882_free_private(board);
+1 -1
drivers/staging/iio/frequency/ad9832.c
··· 158 158 static int ad9832_write_phase(struct ad9832_state *st, 159 159 unsigned long addr, unsigned long phase) 160 160 { 161 - if (phase > BIT(AD9832_PHASE_BITS)) 161 + if (phase >= BIT(AD9832_PHASE_BITS)) 162 162 return -EINVAL; 163 163 164 164 st->phase_data[0] = cpu_to_be16((AD9832_CMD_PHA8BITSW << CMD_SHIFT) |
+1 -1
drivers/staging/iio/frequency/ad9834.c
··· 131 131 static int ad9834_write_phase(struct ad9834_state *st, 132 132 unsigned long addr, unsigned long phase) 133 133 { 134 - if (phase > BIT(AD9834_PHASE_BITS)) 134 + if (phase >= BIT(AD9834_PHASE_BITS)) 135 135 return -EINVAL; 136 136 st->data = cpu_to_be16(addr | phase); 137 137
+1
drivers/thermal/thermal_of.c
··· 160 160 return ERR_PTR(ret); 161 161 } 162 162 163 + of_node_put(sensor_specs.np); 163 164 if ((sensor == sensor_specs.np) && id == (sensor_specs.args_count ? 164 165 sensor_specs.args[0] : 0)) { 165 166 pr_debug("sensor %pOFn id=%d belongs to %pOFn\n", sensor, id, child);
+3
drivers/tty/serial/8250/8250_core.c
··· 812 812 uart->dl_write = up->dl_write; 813 813 814 814 if (uart->port.type != PORT_8250_CIR) { 815 + if (uart_console_registered(&uart->port)) 816 + pm_runtime_get_sync(uart->port.dev); 817 + 815 818 if (serial8250_isa_config != NULL) 816 819 serial8250_isa_config(0, &uart->port, 817 820 &uart->capabilities);
+2 -2
drivers/tty/serial/imx.c
··· 2692 2692 { 2693 2693 u32 ucr3; 2694 2694 2695 - uart_port_lock(&sport->port); 2695 + uart_port_lock_irq(&sport->port); 2696 2696 2697 2697 ucr3 = imx_uart_readl(sport, UCR3); 2698 2698 if (on) { ··· 2714 2714 imx_uart_writel(sport, ucr1, UCR1); 2715 2715 } 2716 2716 2717 - uart_port_unlock(&sport->port); 2717 + uart_port_unlock_irq(&sport->port); 2718 2718 } 2719 2719 2720 2720 static int imx_uart_suspend_noirq(struct device *dev)
+2 -2
drivers/tty/serial/stm32-usart.c
··· 1051 1051 const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs; 1052 1052 unsigned long flags; 1053 1053 1054 - spin_lock_irqsave(&port->lock, flags); 1054 + uart_port_lock_irqsave(port, &flags); 1055 1055 1056 1056 if (break_state) 1057 1057 stm32_usart_set_bits(port, ofs->rqr, USART_RQR_SBKRQ); 1058 1058 else 1059 1059 stm32_usart_clr_bits(port, ofs->rqr, USART_RQR_SBKRQ); 1060 1060 1061 - spin_unlock_irqrestore(&port->lock, flags); 1061 + uart_port_unlock_irqrestore(port, flags); 1062 1062 } 1063 1063 1064 1064 static int stm32_usart_startup(struct uart_port *port)
-6
drivers/ufs/core/ufshcd-priv.h
··· 237 237 hba->vops->config_scaling_param(hba, p, data); 238 238 } 239 239 240 - static inline void ufshcd_vops_reinit_notify(struct ufs_hba *hba) 241 - { 242 - if (hba->vops && hba->vops->reinit_notify) 243 - hba->vops->reinit_notify(hba); 244 - } 245 - 246 240 static inline int ufshcd_vops_mcq_config_resource(struct ufs_hba *hba) 247 241 { 248 242 if (hba->vops && hba->vops->mcq_config_resource)
+6 -4
drivers/ufs/core/ufshcd.c
··· 8858 8858 ufshcd_device_reset(hba); 8859 8859 ufs_put_device_desc(hba); 8860 8860 ufshcd_hba_stop(hba); 8861 - ufshcd_vops_reinit_notify(hba); 8862 8861 ret = ufshcd_hba_enable(hba); 8863 8862 if (ret) { 8864 8863 dev_err(hba->dev, "Host controller enable failed\n"); ··· 10590 10591 } 10591 10592 10592 10593 /* 10593 - * Set the default power management level for runtime and system PM. 10594 + * Set the default power management level for runtime and system PM if 10595 + * not set by the host controller drivers. 10594 10596 * Default power saving mode is to keep UFS link in Hibern8 state 10595 10597 * and UFS device in sleep state. 10596 10598 */ 10597 - hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state( 10599 + if (!hba->rpm_lvl) 10600 + hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state( 10598 10601 UFS_SLEEP_PWR_MODE, 10599 10602 UIC_LINK_HIBERN8_STATE); 10600 - hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state( 10603 + if (!hba->spm_lvl) 10604 + hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state( 10601 10605 UFS_SLEEP_PWR_MODE, 10602 10606 UIC_LINK_HIBERN8_STATE); 10603 10607
+19 -12
drivers/ufs/host/ufs-qcom.c
··· 368 368 if (ret) 369 369 return ret; 370 370 371 + if (phy->power_count) { 372 + phy_power_off(phy); 373 + phy_exit(phy); 374 + } 375 + 371 376 /* phy initialization - calibrate the phy */ 372 377 ret = phy_init(phy); 373 378 if (ret) { ··· 871 866 */ 872 867 static void ufs_qcom_advertise_quirks(struct ufs_hba *hba) 873 868 { 869 + const struct ufs_qcom_drvdata *drvdata = of_device_get_match_data(hba->dev); 874 870 struct ufs_qcom_host *host = ufshcd_get_variant(hba); 875 871 876 872 if (host->hw_ver.major == 0x2) ··· 880 874 if (host->hw_ver.major > 0x3) 881 875 hba->quirks |= UFSHCD_QUIRK_REINIT_AFTER_MAX_GEAR_SWITCH; 882 876 883 - if (of_device_is_compatible(hba->dev->of_node, "qcom,sm8550-ufshc") || 884 - of_device_is_compatible(hba->dev->of_node, "qcom,sm8650-ufshc")) 885 - hba->quirks |= UFSHCD_QUIRK_BROKEN_LSDBS_CAP; 877 + if (drvdata && drvdata->quirks) 878 + hba->quirks |= drvdata->quirks; 886 879 } 887 880 888 881 static void ufs_qcom_set_phy_gear(struct ufs_qcom_host *host) ··· 1069 1064 struct device *dev = hba->dev; 1070 1065 struct ufs_qcom_host *host; 1071 1066 struct ufs_clk_info *clki; 1067 + const struct ufs_qcom_drvdata *drvdata = of_device_get_match_data(hba->dev); 1072 1068 1073 1069 host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL); 1074 1070 if (!host) ··· 1148 1142 /* Failure is non-fatal */ 1149 1143 dev_warn(dev, "%s: failed to configure the testbus %d\n", 1150 1144 __func__, err); 1145 + 1146 + if (drvdata && drvdata->no_phy_retention) 1147 + hba->spm_lvl = UFS_PM_LVL_5; 1151 1148 1152 1149 return 0; 1153 1150 ··· 1588 1579 } 1589 1580 #endif 1590 1581 1591 - static void ufs_qcom_reinit_notify(struct ufs_hba *hba) 1592 - { 1593 - struct ufs_qcom_host *host = ufshcd_get_variant(hba); 1594 - 1595 - phy_power_off(host->generic_phy); 1596 - } 1597 - 1598 1582 /* Resources */ 1599 1583 static const struct ufshcd_res_info ufs_res_info[RES_MAX] = { 1600 1584 {.name = "ufs_mem",}, ··· 1827 1825 .device_reset = ufs_qcom_device_reset, 1828 1826 .config_scaling_param = ufs_qcom_config_scaling_param, 1829 1827 .program_key = ufs_qcom_ice_program_key, 1830 - .reinit_notify = ufs_qcom_reinit_notify, 1831 1828 .mcq_config_resource = ufs_qcom_mcq_config_resource, 1832 1829 .get_hba_mac = ufs_qcom_get_hba_mac, 1833 1830 .op_runtime_config = ufs_qcom_op_runtime_config, ··· 1869 1868 platform_device_msi_free_irqs_all(hba->dev); 1870 1869 } 1871 1870 1871 + static const struct ufs_qcom_drvdata ufs_qcom_sm8550_drvdata = { 1872 + .quirks = UFSHCD_QUIRK_BROKEN_LSDBS_CAP, 1873 + .no_phy_retention = true, 1874 + }; 1875 + 1872 1876 static const struct of_device_id ufs_qcom_of_match[] __maybe_unused = { 1873 1877 { .compatible = "qcom,ufshc" }, 1874 - { .compatible = "qcom,sm8550-ufshc" }, 1878 + { .compatible = "qcom,sm8550-ufshc", .data = &ufs_qcom_sm8550_drvdata }, 1879 + { .compatible = "qcom,sm8650-ufshc", .data = &ufs_qcom_sm8550_drvdata }, 1875 1880 {}, 1876 1881 }; 1877 1882 MODULE_DEVICE_TABLE(of, ufs_qcom_of_match);
+5
drivers/ufs/host/ufs-qcom.h
··· 217 217 bool esi_enabled; 218 218 }; 219 219 220 + struct ufs_qcom_drvdata { 221 + enum ufshcd_quirks quirks; 222 + bool no_phy_retention; 223 + }; 224 + 220 225 static inline u32 221 226 ufs_qcom_get_debug_reg_offset(struct ufs_qcom_host *host, u32 reg) 222 227 {
+17 -8
drivers/usb/chipidea/ci_hdrc_imx.c
··· 370 370 data->pinctrl = devm_pinctrl_get(dev); 371 371 if (PTR_ERR(data->pinctrl) == -ENODEV) 372 372 data->pinctrl = NULL; 373 - else if (IS_ERR(data->pinctrl)) 374 - return dev_err_probe(dev, PTR_ERR(data->pinctrl), 373 + else if (IS_ERR(data->pinctrl)) { 374 + ret = dev_err_probe(dev, PTR_ERR(data->pinctrl), 375 375 "pinctrl get failed\n"); 376 + goto err_put; 377 + } 376 378 377 379 data->hsic_pad_regulator = 378 380 devm_regulator_get_optional(dev, "hsic"); 379 381 if (PTR_ERR(data->hsic_pad_regulator) == -ENODEV) { 380 382 /* no pad regulator is needed */ 381 383 data->hsic_pad_regulator = NULL; 382 - } else if (IS_ERR(data->hsic_pad_regulator)) 384 - return dev_err_probe(dev, PTR_ERR(data->hsic_pad_regulator), 383 + } else if (IS_ERR(data->hsic_pad_regulator)) { 385 + ret = dev_err_probe(dev, PTR_ERR(data->hsic_pad_regulator), 384 386 "Get HSIC pad regulator error\n"); 387 + goto err_put; 388 + } 385 389 386 390 if (data->hsic_pad_regulator) { 387 391 ret = regulator_enable(data->hsic_pad_regulator); 388 392 if (ret) { 389 393 dev_err(dev, 390 394 "Failed to enable HSIC pad regulator\n"); 391 - return ret; 395 + goto err_put; 392 396 } 393 397 } 394 398 } ··· 406 402 dev_err(dev, 407 403 "pinctrl_hsic_idle lookup failed, err=%ld\n", 408 404 PTR_ERR(pinctrl_hsic_idle)); 409 - return PTR_ERR(pinctrl_hsic_idle); 405 + ret = PTR_ERR(pinctrl_hsic_idle); 406 + goto err_put; 410 407 } 411 408 412 409 ret = pinctrl_select_state(data->pinctrl, pinctrl_hsic_idle); 413 410 if (ret) { 414 411 dev_err(dev, "hsic_idle select failed, err=%d\n", ret); 415 - return ret; 412 + goto err_put; 416 413 } 417 414 418 415 data->pinctrl_hsic_active = pinctrl_lookup_state(data->pinctrl, ··· 422 417 dev_err(dev, 423 418 "pinctrl_hsic_active lookup failed, err=%ld\n", 424 419 PTR_ERR(data->pinctrl_hsic_active)); 425 - return PTR_ERR(data->pinctrl_hsic_active); 420 + ret = PTR_ERR(data->pinctrl_hsic_active); 421 + goto err_put; 426 422 } 427 423 } 428 424 ··· 533 527 if (pdata.flags & CI_HDRC_PMQOS) 534 528 cpu_latency_qos_remove_request(&data->pm_qos_req); 535 529 data->ci_pdev = NULL; 530 + err_put: 531 + put_device(data->usbmisc_data->dev); 536 532 return ret; 537 533 } 538 534 ··· 559 551 if (data->hsic_pad_regulator) 560 552 regulator_disable(data->hsic_pad_regulator); 561 553 } 554 + put_device(data->usbmisc_data->dev); 562 555 } 563 556 564 557 static void ci_hdrc_imx_shutdown(struct platform_device *pdev)
+4 -3
drivers/usb/class/usblp.c
··· 1337 1337 if (protocol < USBLP_FIRST_PROTOCOL || protocol > USBLP_LAST_PROTOCOL) 1338 1338 return -EINVAL; 1339 1339 1340 + alts = usblp->protocol[protocol].alt_setting; 1341 + if (alts < 0) 1342 + return -EINVAL; 1343 + 1340 1344 /* Don't unnecessarily set the interface if there's a single alt. */ 1341 1345 if (usblp->intf->num_altsetting > 1) { 1342 - alts = usblp->protocol[protocol].alt_setting; 1343 - if (alts < 0) 1344 - return -EINVAL; 1345 1346 r = usb_set_interface(usblp->dev, usblp->ifnum, alts); 1346 1347 if (r < 0) { 1347 1348 printk(KERN_ERR "usblp: can't set desired altsetting %d on interface %d\n",
+4 -2
drivers/usb/core/hub.c
··· 2663 2663 err = sysfs_create_link(&udev->dev.kobj, 2664 2664 &port_dev->dev.kobj, "port"); 2665 2665 if (err) 2666 - goto fail; 2666 + goto out_del_dev; 2667 2667 2668 2668 err = sysfs_create_link(&port_dev->dev.kobj, 2669 2669 &udev->dev.kobj, "device"); 2670 2670 if (err) { 2671 2671 sysfs_remove_link(&udev->dev.kobj, "port"); 2672 - goto fail; 2672 + goto out_del_dev; 2673 2673 } 2674 2674 2675 2675 if (!test_and_set_bit(port1, hub->child_usage_bits)) ··· 2683 2683 pm_runtime_put_sync_autosuspend(&udev->dev); 2684 2684 return err; 2685 2685 2686 + out_del_dev: 2687 + device_del(&udev->dev); 2686 2688 fail: 2687 2689 usb_set_device_state(udev, USB_STATE_NOTATTACHED); 2688 2690 pm_runtime_disable(&udev->dev);
+4 -3
drivers/usb/core/port.c
··· 453 453 static void usb_port_shutdown(struct device *dev) 454 454 { 455 455 struct usb_port *port_dev = to_usb_port(dev); 456 + struct usb_device *udev = port_dev->child; 456 457 457 - if (port_dev->child) { 458 - usb_disable_usb2_hardware_lpm(port_dev->child); 459 - usb_unlocked_disable_lpm(port_dev->child); 458 + if (udev && !udev->port_is_suspended) { 459 + usb_disable_usb2_hardware_lpm(udev); 460 + usb_unlocked_disable_lpm(udev); 460 461 } 461 462 } 462 463
+1
drivers/usb/dwc3/core.h
··· 464 464 #define DWC3_DCTL_TRGTULST_SS_INACT (DWC3_DCTL_TRGTULST(6)) 465 465 466 466 /* These apply for core versions 1.94a and later */ 467 + #define DWC3_DCTL_NYET_THRES_MASK (0xf << 20) 467 468 #define DWC3_DCTL_NYET_THRES(n) (((n) & 0xf) << 20) 468 469 469 470 #define DWC3_DCTL_KEEP_CONNECT BIT(19)
+1
drivers/usb/dwc3/dwc3-am62.c
··· 309 309 310 310 pm_runtime_put_sync(dev); 311 311 pm_runtime_disable(dev); 312 + pm_runtime_dont_use_autosuspend(dev); 312 313 pm_runtime_set_suspended(dev); 313 314 } 314 315
+3 -1
drivers/usb/dwc3/gadget.c
··· 4195 4195 WARN_ONCE(DWC3_VER_IS_PRIOR(DWC3, 240A) && dwc->has_lpm_erratum, 4196 4196 "LPM Erratum not available on dwc3 revisions < 2.40a\n"); 4197 4197 4198 - if (dwc->has_lpm_erratum && !DWC3_VER_IS_PRIOR(DWC3, 240A)) 4198 + if (dwc->has_lpm_erratum && !DWC3_VER_IS_PRIOR(DWC3, 240A)) { 4199 + reg &= ~DWC3_DCTL_NYET_THRES_MASK; 4199 4200 reg |= DWC3_DCTL_NYET_THRES(dwc->lpm_nyet_threshold); 4201 + } 4200 4202 4201 4203 dwc3_gadget_dctl_write_safe(dwc, reg); 4202 4204 } else {
+2 -2
drivers/usb/gadget/Kconfig
··· 211 211 212 212 config USB_F_MIDI2 213 213 tristate 214 + select SND_UMP 215 + select SND_UMP_LEGACY_RAWMIDI 214 216 215 217 config USB_F_HID 216 218 tristate ··· 447 445 depends on USB_CONFIGFS 448 446 depends on SND 449 447 select USB_LIBCOMPOSITE 450 - select SND_UMP 451 - select SND_UMP_LEGACY_RAWMIDI 452 448 select USB_F_MIDI2 453 449 help 454 450 The MIDI 2.0 function driver provides the generic emulated
+5 -1
drivers/usb/gadget/configfs.c
··· 827 827 { 828 828 struct gadget_string *string = to_gadget_string(item); 829 829 int size = min(sizeof(string->string), len + 1); 830 + ssize_t cpy_len; 830 831 831 832 if (len > USB_MAX_STRING_LEN) 832 833 return -EINVAL; 833 834 834 - return strscpy(string->string, page, size); 835 + cpy_len = strscpy(string->string, page, size); 836 + if (cpy_len > 0 && string->string[cpy_len - 1] == '\n') 837 + string->string[cpy_len - 1] = 0; 838 + return len; 835 839 } 836 840 CONFIGFS_ATTR(gadget_string_, s); 837 841
+1 -1
drivers/usb/gadget/function/f_fs.c
··· 2285 2285 struct usb_gadget_strings **lang; 2286 2286 int first_id; 2287 2287 2288 - if (WARN_ON(ffs->state != FFS_ACTIVE 2288 + if ((ffs->state != FFS_ACTIVE 2289 2289 || test_and_set_bit(FFS_FL_BOUND, &ffs->flags))) 2290 2290 return -EBADFD; 2291 2291
+1
drivers/usb/gadget/function/f_uac2.c
··· 1185 1185 uac2->as_in_alt = 0; 1186 1186 } 1187 1187 1188 + std_ac_if_desc.bNumEndpoints = 0; 1188 1189 if (FUOUT_EN(uac2_opts) || FUIN_EN(uac2_opts)) { 1189 1190 uac2->int_ep = usb_ep_autoconfig(gadget, &fs_ep_int_desc); 1190 1191 if (!uac2->int_ep) {
+4 -4
drivers/usb/gadget/function/u_serial.c
··· 1420 1420 /* REVISIT as above: how best to track this? */ 1421 1421 port->port_line_coding = gser->port_line_coding; 1422 1422 1423 + /* disable endpoints, aborting down any active I/O */ 1424 + usb_ep_disable(gser->out); 1425 + usb_ep_disable(gser->in); 1426 + 1423 1427 port->port_usb = NULL; 1424 1428 gser->ioport = NULL; 1425 1429 if (port->port.count > 0) { ··· 1434 1430 port->suspended = false; 1435 1431 spin_unlock(&port->port_lock); 1436 1432 spin_unlock_irqrestore(&serial_port_lock, flags); 1437 - 1438 - /* disable endpoints, aborting down any active I/O */ 1439 - usb_ep_disable(gser->out); 1440 - usb_ep_disable(gser->in); 1441 1433 1442 1434 /* finally, free any unused/unusable I/O buffers */ 1443 1435 spin_lock_irqsave(&port->port_lock, flags);
+2 -1
drivers/usb/host/xhci-plat.c
··· 290 290 291 291 hcd->tpl_support = of_usb_host_tpl_support(sysdev->of_node); 292 292 293 - if (priv && (priv->quirks & XHCI_SKIP_PHY_INIT)) 293 + if ((priv && (priv->quirks & XHCI_SKIP_PHY_INIT)) || 294 + (xhci->quirks & XHCI_SKIP_PHY_INIT)) 294 295 hcd->skip_phy_initialization = 1; 295 296 296 297 if (priv && (priv->quirks & XHCI_SG_TRB_CACHE_SIZE_QUIRK))
+1
drivers/usb/serial/cp210x.c
··· 223 223 { USB_DEVICE(0x19CF, 0x3000) }, /* Parrot NMEA GPS Flight Recorder */ 224 224 { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */ 225 225 { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */ 226 + { USB_DEVICE(0x1B93, 0x1013) }, /* Phoenix Contact UPS Device */ 226 227 { USB_DEVICE(0x1BA4, 0x0002) }, /* Silicon Labs 358x factory default */ 227 228 { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */ 228 229 { USB_DEVICE(0x1D6F, 0x0010) }, /* Seluxit ApS RF Dongle */
+3 -1
drivers/usb/serial/option.c
··· 621 621 622 622 /* MeiG Smart Technology products */ 623 623 #define MEIGSMART_VENDOR_ID 0x2dee 624 - /* MeiG Smart SRM825L based on Qualcomm 315 */ 624 + /* MeiG Smart SRM815/SRM825L based on Qualcomm 315 */ 625 625 #define MEIGSMART_PRODUCT_SRM825L 0x4d22 626 626 /* MeiG Smart SLM320 based on UNISOC UIS8910 */ 627 627 #define MEIGSMART_PRODUCT_SLM320 0x4d41 ··· 2405 2405 { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) }, 2406 2406 { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM320, 0xff, 0, 0) }, 2407 2407 { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM770A, 0xff, 0, 0) }, 2408 + { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0, 0) }, 2408 2409 { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x30) }, 2409 2410 { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x40) }, 2410 2411 { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x60) }, ··· 2413 2412 .driver_info = NCTRL(1) }, 2414 2413 { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0640, 0xff), /* TCL IK512 ECM */ 2415 2414 .driver_info = NCTRL(3) }, 2415 + { USB_DEVICE_INTERFACE_CLASS(0x2949, 0x8700, 0xff) }, /* Neoway N723-EA */ 2416 2416 { } /* Terminating entry */ 2417 2417 }; 2418 2418 MODULE_DEVICE_TABLE(usb, option_ids);
+7
drivers/usb/storage/unusual_devs.h
··· 255 255 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 256 256 US_FL_MAX_SECTORS_64 ), 257 257 258 + /* Added by Lubomir Rintel <lkundrak@v3.sk>, a very fine chap */ 259 + UNUSUAL_DEV( 0x0421, 0x06c2, 0x0000, 0x0406, 260 + "Nokia", 261 + "Nokia 208", 262 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 263 + US_FL_MAX_SECTORS_64 ), 264 + 258 265 #ifdef NO_SDDR09 259 266 UNUSUAL_DEV( 0x0436, 0x0005, 0x0100, 0x0100, 260 267 "Microtech",
+2 -2
drivers/usb/typec/tcpm/maxim_contaminant.c
··· 135 135 136 136 mv = max_contaminant_read_adc_mv(chip, channel, sleep_msec, raw, true); 137 137 if (mv < 0) 138 - return ret; 138 + return mv; 139 139 140 140 /* OVP enable */ 141 141 ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL2, CCOVPDIS, 0); ··· 157 157 158 158 mv = max_contaminant_read_adc_mv(chip, channel, sleep_msec, raw, true); 159 159 if (mv < 0) 160 - return ret; 160 + return mv; 161 161 /* Disable current source */ 162 162 ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL2, SBURPCTRL, 0); 163 163 if (ret < 0)
+15 -10
drivers/usb/typec/tcpm/tcpci.c
··· 700 700 701 701 tcpci->alert_mask = reg; 702 702 703 - return tcpci_write16(tcpci, TCPC_ALERT_MASK, reg); 703 + return 0; 704 704 } 705 705 706 706 irqreturn_t tcpci_irq(struct tcpci *tcpci) ··· 923 923 924 924 chip->data.set_orientation = err; 925 925 926 + chip->tcpci = tcpci_register_port(&client->dev, &chip->data); 927 + if (IS_ERR(chip->tcpci)) 928 + return PTR_ERR(chip->tcpci); 929 + 926 930 err = devm_request_threaded_irq(&client->dev, client->irq, NULL, 927 931 _tcpci_irq, 928 932 IRQF_SHARED | IRQF_ONESHOT, 929 933 dev_name(&client->dev), chip); 930 934 if (err < 0) 931 - return err; 935 + goto unregister_port; 932 936 933 - /* 934 - * Disable irq while registering port. If irq is configured as an edge 935 - * irq this allow to keep track and process the irq as soon as it is enabled. 936 - */ 937 - disable_irq(client->irq); 938 - chip->tcpci = tcpci_register_port(&client->dev, &chip->data); 939 - enable_irq(client->irq); 937 + /* Enable chip interrupts at last */ 938 + err = tcpci_write16(chip->tcpci, TCPC_ALERT_MASK, chip->tcpci->alert_mask); 939 + if (err < 0) 940 + goto unregister_port; 940 941 941 - return PTR_ERR_OR_ZERO(chip->tcpci); 942 + return 0; 943 + 944 + unregister_port: 945 + tcpci_unregister_port(chip->tcpci); 946 + return err; 942 947 } 943 948 944 949 static void tcpci_remove(struct i2c_client *client)
+2 -2
drivers/usb/typec/ucsi/ucsi_ccg.c
··· 646 646 UCSI_CMD_CONNECTOR_MASK; 647 647 if (con_index == 0) { 648 648 ret = -EINVAL; 649 - goto unlock; 649 + goto err_put; 650 650 } 651 651 con = &uc->ucsi->connector[con_index - 1]; 652 652 ucsi_ccg_update_set_new_cam_cmd(uc, con, &command); ··· 654 654 655 655 ret = ucsi_sync_control_common(ucsi, command); 656 656 657 + err_put: 657 658 pm_runtime_put_sync(uc->dev); 658 - unlock: 659 659 mutex_unlock(&uc->lock); 660 660 661 661 return ret;
+5
drivers/usb/typec/ucsi/ucsi_glink.c
··· 185 185 struct pmic_glink_ucsi *ucsi = ucsi_get_drvdata(con->ucsi); 186 186 int orientation; 187 187 188 + if (!UCSI_CONSTAT(con, CONNECTED)) { 189 + typec_set_orientation(con->port, TYPEC_ORIENTATION_NONE); 190 + return; 191 + } 192 + 188 193 if (con->num > PMIC_GLINK_MAX_PORTS || 189 194 !ucsi->port_orientation[con->num - 1]) 190 195 return;
+9 -8
drivers/vfio/pci/vfio_pci_core.c
··· 1661 1661 unsigned long pfn, pgoff = vmf->pgoff - vma->vm_pgoff; 1662 1662 vm_fault_t ret = VM_FAULT_SIGBUS; 1663 1663 1664 - if (order && (vmf->address & ((PAGE_SIZE << order) - 1) || 1664 + pfn = vma_to_pfn(vma) + pgoff; 1665 + 1666 + if (order && (pfn & ((1 << order) - 1) || 1667 + vmf->address & ((PAGE_SIZE << order) - 1) || 1665 1668 vmf->address + (PAGE_SIZE << order) > vma->vm_end)) { 1666 1669 ret = VM_FAULT_FALLBACK; 1667 1670 goto out; 1668 1671 } 1669 - 1670 - pfn = vma_to_pfn(vma); 1671 1672 1672 1673 down_read(&vdev->memory_lock); 1673 1674 ··· 1677 1676 1678 1677 switch (order) { 1679 1678 case 0: 1680 - ret = vmf_insert_pfn(vma, vmf->address, pfn + pgoff); 1679 + ret = vmf_insert_pfn(vma, vmf->address, pfn); 1681 1680 break; 1682 1681 #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP 1683 1682 case PMD_ORDER: 1684 - ret = vmf_insert_pfn_pmd(vmf, __pfn_to_pfn_t(pfn + pgoff, 1685 - PFN_DEV), false); 1683 + ret = vmf_insert_pfn_pmd(vmf, 1684 + __pfn_to_pfn_t(pfn, PFN_DEV), false); 1686 1685 break; 1687 1686 #endif 1688 1687 #ifdef CONFIG_ARCH_SUPPORTS_PUD_PFNMAP 1689 1688 case PUD_ORDER: 1690 - ret = vmf_insert_pfn_pud(vmf, __pfn_to_pfn_t(pfn + pgoff, 1691 - PFN_DEV), false); 1689 + ret = vmf_insert_pfn_pud(vmf, 1690 + __pfn_to_pfn_t(pfn, PFN_DEV), false); 1692 1691 break; 1693 1692 #endif 1694 1693 default:
+5 -1
fs/9p/vfs_addr.c
··· 57 57 int err, len; 58 58 59 59 len = p9_client_write(fid, subreq->start, &subreq->io_iter, &err); 60 + if (len > 0) 61 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); 60 62 netfs_write_subrequest_terminated(subreq, len ?: err, false); 61 63 } 62 64 ··· 82 80 if (pos + total >= i_size_read(rreq->inode)) 83 81 __set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags); 84 82 85 - if (!err) 83 + if (!err) { 86 84 subreq->transferred += total; 85 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); 86 + } 87 87 88 88 netfs_read_subreq_terminated(subreq, err, false); 89 89 }
+4 -2
fs/afs/addr_prefs.c
··· 413 413 414 414 do { 415 415 argc = afs_split_string(&buf, argv, ARRAY_SIZE(argv)); 416 - if (argc < 0) 417 - return argc; 416 + if (argc < 0) { 417 + ret = argc; 418 + goto done; 419 + } 418 420 if (argc < 2) 419 421 goto inval; 420 422
+1 -1
fs/afs/afs.h
··· 10 10 11 11 #include <linux/in.h> 12 12 13 - #define AFS_MAXCELLNAME 256 /* Maximum length of a cell name */ 13 + #define AFS_MAXCELLNAME 253 /* Maximum length of a cell name (DNS limited) */ 14 14 #define AFS_MAXVOLNAME 64 /* Maximum length of a volume name */ 15 15 #define AFS_MAXNSERVERS 8 /* Maximum servers in a basic volume record */ 16 16 #define AFS_NMAXNSERVERS 13 /* Maximum servers in a N/U-class volume record */
+1
fs/afs/afs_vl.h
··· 13 13 #define AFS_VL_PORT 7003 /* volume location service port */ 14 14 #define VL_SERVICE 52 /* RxRPC service ID for the Volume Location service */ 15 15 #define YFS_VL_SERVICE 2503 /* Service ID for AuriStor upgraded VL service */ 16 + #define YFS_VL_MAXCELLNAME 256 /* Maximum length of a cell name in YFS protocol */ 16 17 17 18 enum AFSVL_Operations { 18 19 VLGETENTRYBYID = 503, /* AFS Get VLDB entry by ID */
+6 -2
fs/afs/vl_alias.c
··· 253 253 static int yfs_check_canonical_cell_name(struct afs_cell *cell, struct key *key) 254 254 { 255 255 struct afs_cell *master; 256 + size_t name_len; 256 257 char *cell_name; 257 258 258 259 cell_name = afs_vl_get_cell_name(cell, key); ··· 265 264 return 0; 266 265 } 267 266 268 - master = afs_lookup_cell(cell->net, cell_name, strlen(cell_name), 269 - NULL, false); 267 + name_len = strlen(cell_name); 268 + if (!name_len || name_len > AFS_MAXCELLNAME) 269 + master = ERR_PTR(-EOPNOTSUPP); 270 + else 271 + master = afs_lookup_cell(cell->net, cell_name, name_len, NULL, false); 270 272 kfree(cell_name); 271 273 if (IS_ERR(master)) 272 274 return PTR_ERR(master);
+1 -1
fs/afs/vlclient.c
··· 697 697 return ret; 698 698 699 699 namesz = ntohl(call->tmp); 700 - if (namesz > AFS_MAXCELLNAME) 700 + if (namesz > YFS_VL_MAXCELLNAME) 701 701 return afs_protocol_error(call, afs_eproto_cellname_len); 702 702 paddedsz = (namesz + 3) & ~3; 703 703 call->count = namesz;
+4 -1
fs/afs/write.c
··· 122 122 if (subreq->debug_index == 3) 123 123 return netfs_write_subrequest_terminated(subreq, -ENOANO, false); 124 124 125 - if (!test_bit(NETFS_SREQ_RETRYING, &subreq->flags)) { 125 + if (!subreq->retry_count) { 126 126 set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 127 127 return netfs_write_subrequest_terminated(subreq, -EAGAIN, false); 128 128 } ··· 149 149 afs_wait_for_operation(op); 150 150 ret = afs_put_operation(op); 151 151 switch (ret) { 152 + case 0: 153 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); 154 + break; 152 155 case -EACCES: 153 156 case -EPERM: 154 157 case -ENOKEY:
+68 -60
fs/btrfs/ioctl.c
··· 4878 4878 return ret; 4879 4879 } 4880 4880 4881 + struct btrfs_uring_encoded_data { 4882 + struct btrfs_ioctl_encoded_io_args args; 4883 + struct iovec iovstack[UIO_FASTIOV]; 4884 + struct iovec *iov; 4885 + struct iov_iter iter; 4886 + }; 4887 + 4881 4888 static int btrfs_uring_encoded_read(struct io_uring_cmd *cmd, unsigned int issue_flags) 4882 4889 { 4883 4890 size_t copy_end_kernel = offsetofend(struct btrfs_ioctl_encoded_io_args, flags); 4884 4891 size_t copy_end; 4885 - struct btrfs_ioctl_encoded_io_args args = { 0 }; 4886 4892 int ret; 4887 4893 u64 disk_bytenr, disk_io_size; 4888 4894 struct file *file; 4889 4895 struct btrfs_inode *inode; 4890 4896 struct btrfs_fs_info *fs_info; 4891 4897 struct extent_io_tree *io_tree; 4892 - struct iovec iovstack[UIO_FASTIOV]; 4893 - struct iovec *iov = iovstack; 4894 - struct iov_iter iter; 4895 4898 loff_t pos; 4896 4899 struct kiocb kiocb; 4897 4900 struct extent_state *cached_state = NULL; 4898 4901 u64 start, lockend; 4899 4902 void __user *sqe_addr; 4903 + struct btrfs_uring_encoded_data *data = io_uring_cmd_get_async_data(cmd)->op_data; 4900 4904 4901 4905 if (!capable(CAP_SYS_ADMIN)) { 4902 4906 ret = -EPERM; ··· 4914 4910 4915 4911 if (issue_flags & IO_URING_F_COMPAT) { 4916 4912 #if defined(CONFIG_64BIT) && defined(CONFIG_COMPAT) 4917 - struct btrfs_ioctl_encoded_io_args_32 args32; 4918 - 4919 4913 copy_end = offsetofend(struct btrfs_ioctl_encoded_io_args_32, flags); 4920 - if (copy_from_user(&args32, sqe_addr, copy_end)) { 4921 - ret = -EFAULT; 4922 - goto out_acct; 4923 - } 4924 - args.iov = compat_ptr(args32.iov); 4925 - args.iovcnt = args32.iovcnt; 4926 - args.offset = args32.offset; 4927 - args.flags = args32.flags; 4928 4914 #else 4929 4915 return -ENOTTY; 4930 4916 #endif 4931 4917 } else { 4932 4918 copy_end = copy_end_kernel; 4933 - if (copy_from_user(&args, sqe_addr, copy_end)) { 4934 - ret = -EFAULT; 4919 + } 4920 + 4921 + if (!data) { 4922 + data = kzalloc(sizeof(*data), GFP_NOFS); 4923 + if (!data) {
4924 + ret = -ENOMEM; 4935 4925 goto out_acct; 4926 + } 4927 + 4928 + io_uring_cmd_get_async_data(cmd)->op_data = data; 4929 + 4930 + if (issue_flags & IO_URING_F_COMPAT) { 4931 + #if defined(CONFIG_64BIT) && defined(CONFIG_COMPAT) 4932 + struct btrfs_ioctl_encoded_io_args_32 args32; 4933 + 4934 + if (copy_from_user(&args32, sqe_addr, copy_end)) { 4935 + ret = -EFAULT; 4936 + goto out_acct; 4937 + } 4938 + 4939 + data->args.iov = compat_ptr(args32.iov); 4940 + data->args.iovcnt = args32.iovcnt; 4941 + data->args.offset = args32.offset; 4942 + data->args.flags = args32.flags; 4943 + #endif 4944 + } else { 4945 + if (copy_from_user(&data->args, sqe_addr, copy_end)) { 4946 + ret = -EFAULT; 4947 + goto out_acct; 4948 + } 4949 + } 4950 + 4951 + if (data->args.flags != 0) { 4952 + ret = -EINVAL; 4953 + goto out_acct; 4954 + } 4955 + 4956 + data->iov = data->iovstack; 4957 + ret = import_iovec(ITER_DEST, data->args.iov, data->args.iovcnt, 4958 + ARRAY_SIZE(data->iovstack), &data->iov, 4959 + &data->iter); 4960 + if (ret < 0) 4961 + goto out_acct; 4962 + 4963 + if (iov_iter_count(&data->iter) == 0) { 4964 + ret = 0; 4965 + goto out_free; 4936 4966 } 4937 4967 } 4938 4968 4939 - if (args.flags != 0) 4940 - return -EINVAL; 4941 - 4942 - ret = import_iovec(ITER_DEST, args.iov, args.iovcnt, ARRAY_SIZE(iovstack), 4943 - &iov, &iter); 4944 - if (ret < 0) 4945 - goto out_acct; 4946 - 4947 - if (iov_iter_count(&iter) == 0) { 4948 - ret = 0; 4949 - goto out_free; 4950 - } 4951 - 4952 - pos = args.offset; 4953 - ret = rw_verify_area(READ, file, &pos, args.len); 4969 + pos = data->args.offset; 4970 + ret = rw_verify_area(READ, file, &pos, data->args.len); 4954 4971 if (ret < 0) 4955 4972 goto out_free; ··· 4984 4959 start = ALIGN_DOWN(pos, fs_info->sectorsize); 4985 4960 lockend = start + BTRFS_MAX_UNCOMPRESSED - 1; 4986 4961 4987 - ret = btrfs_encoded_read(&kiocb, &iter, &args, &cached_state, 4962 + ret = btrfs_encoded_read(&kiocb, &data->iter, &data->args, &cached_state,
4988 4963 &disk_bytenr, &disk_io_size); 4989 4964 if (ret < 0 && ret != -EIOCBQUEUED) 4990 4965 goto out_free; 4991 4966 4992 4967 file_accessed(file); 4993 4968 4994 - if (copy_to_user(sqe_addr + copy_end, (const char *)&args + copy_end_kernel, 4995 - sizeof(args) - copy_end_kernel)) { 4969 + if (copy_to_user(sqe_addr + copy_end, 4970 + (const char *)&data->args + copy_end_kernel, 4971 + sizeof(data->args) - copy_end_kernel)) { 4996 4972 if (ret == -EIOCBQUEUED) { 4997 4973 unlock_extent(io_tree, start, lockend, &cached_state); 4998 4974 btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED); ··· 5003 4977 } 5004 4978 5005 4979 if (ret == -EIOCBQUEUED) { 5006 - u64 count; 5007 - 5008 - /* 5009 - * If we've optimized things by storing the iovecs on the stack, 5010 - * undo this. 5011 - */ 5012 - if (!iov) { 5013 - iov = kmalloc(sizeof(struct iovec) * args.iovcnt, GFP_NOFS); 5014 - if (!iov) { 5015 - unlock_extent(io_tree, start, lockend, &cached_state); 5016 - btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED); 5017 - ret = -ENOMEM; 5018 - goto out_acct; 5019 - } 5020 - 5021 - memcpy(iov, iovstack, sizeof(struct iovec) * args.iovcnt); 5022 - } 5023 - 5024 - count = min_t(u64, iov_iter_count(&iter), disk_io_size); 4980 + u64 count = min_t(u64, iov_iter_count(&data->iter), disk_io_size); 5025 4981 5026 4982 /* Match ioctl by not returning past EOF if uncompressed. */
5027 - if (!args.compression) 5028 - count = min_t(u64, count, args.len); 4983 + if (!data->args.compression) 4984 + count = min_t(u64, count, data->args.len); 5029 4985 5030 - ret = btrfs_uring_read_extent(&kiocb, &iter, start, lockend, 5031 - cached_state, disk_bytenr, 5032 - disk_io_size, count, 5033 - args.compression, iov, cmd); 4986 + ret = btrfs_uring_read_extent(&kiocb, &data->iter, start, lockend, 4987 + cached_state, disk_bytenr, disk_io_size, 4988 + count, data->args.compression, 4989 + data->iov, cmd); 5034 4990 5035 4991 goto out_acct; 5036 4992 } 5037 4993 5038 4994 out_free: 5039 - kfree(iov); 4995 + kfree(data->iov); 5040 4996 5041 4997 out_acct: 5042 4998 if (ret > 0)
+4
fs/btrfs/scrub.c
··· 1541 1541 u64 extent_gen; 1542 1542 int ret; 1543 1543 1544 + if (unlikely(!extent_root)) { 1545 + btrfs_err(fs_info, "no valid extent root for scrub"); 1546 + return -EUCLEAN; 1547 + } 1544 1548 memset(stripe->sectors, 0, sizeof(struct scrub_sector_verification) * 1545 1549 stripe->nr_sectors); 1546 1550 scrub_stripe_reset_bitmaps(stripe);
+4
fs/btrfs/volumes.c
··· 797 797 if (ret) 798 798 goto out; 799 799 resolved_path = d_path(&path, path_buf, PATH_MAX); 800 + if (IS_ERR(resolved_path)) { 801 + ret = PTR_ERR(resolved_path); 802 + goto out; 803 + } 800 804 ret = strscpy(canonical, resolved_path, PATH_MAX); 801 805 out: 802 806 kfree(path_buf);
+2 -2
fs/btrfs/zlib.c
··· 174 174 copy_page(workspace->buf + i * PAGE_SIZE, 175 175 data_in); 176 176 start += PAGE_SIZE; 177 - workspace->strm.avail_in = 178 - (in_buf_folios << PAGE_SHIFT); 179 177 } 180 178 workspace->strm.next_in = workspace->buf; 179 + workspace->strm.avail_in = min(bytes_left, 180 + in_buf_folios << PAGE_SHIFT); 181 181 } else { 182 182 unsigned int pg_off; 183 183 unsigned int cur_len;
+3 -2
fs/btrfs/zoned.c
··· 748 748 (u64)lim->max_segments << PAGE_SHIFT), 749 749 fs_info->sectorsize); 750 750 fs_info->fs_devices->chunk_alloc_policy = BTRFS_CHUNK_ALLOC_ZONED; 751 - if (fs_info->max_zone_append_size < fs_info->max_extent_size) 752 - fs_info->max_extent_size = fs_info->max_zone_append_size; 751 + 752 + fs_info->max_extent_size = min_not_zero(fs_info->max_extent_size, 753 + fs_info->max_zone_append_size); 753 754 754 755 /* 755 756 * Check mount options here, because we might change fs_info->zoned
+7 -7
fs/cachefiles/daemon.c
··· 15 15 #include <linux/namei.h> 16 16 #include <linux/poll.h> 17 17 #include <linux/mount.h> 18 + #include <linux/security.h> 18 19 #include <linux/statfs.h> 19 20 #include <linux/ctype.h> 20 21 #include <linux/string.h> ··· 577 576 */ 578 577 static int cachefiles_daemon_secctx(struct cachefiles_cache *cache, char *args) 579 578 { 580 - char *secctx; 579 + int err; 581 580 582 581 _enter(",%s", args); 583 582 ··· 586 585 return -EINVAL; 587 586 } 588 587 589 - if (cache->secctx) { 588 + if (cache->have_secid) { 590 589 pr_err("Second security context specified\n"); 591 590 return -EINVAL; 592 591 } 593 592 594 - secctx = kstrdup(args, GFP_KERNEL); 595 - if (!secctx) 596 - return -ENOMEM; 593 + err = security_secctx_to_secid(args, strlen(args), &cache->secid); 594 + if (err) 595 + return err; 597 596 598 - cache->secctx = secctx; 597 + cache->have_secid = true; 599 598 return 0; 600 599 } 601 600 ··· 821 820 put_cred(cache->cache_cred); 822 821 823 822 kfree(cache->rootdirname); 824 - kfree(cache->secctx); 825 823 kfree(cache->tag); 826 824 827 825 _leave("");
+2 -1
fs/cachefiles/internal.h
··· 122 122 #define CACHEFILES_STATE_CHANGED 3 /* T if state changed (poll trigger) */ 123 123 #define CACHEFILES_ONDEMAND_MODE 4 /* T if in on-demand read mode */ 124 124 char *rootdirname; /* name of cache root directory */ 125 - char *secctx; /* LSM security context */ 126 125 char *tag; /* cache binding tag */ 127 126 refcount_t unbind_pincount;/* refcount to do daemon unbind */ 128 127 struct xarray reqs; /* xarray of pending on-demand requests */ ··· 129 130 struct xarray ondemand_ids; /* xarray for ondemand_id allocation */ 130 131 u32 ondemand_id_next; 131 132 u32 msg_id_next; 133 + u32 secid; /* LSM security id */ 134 + bool have_secid; /* whether "secid" was set */ 132 135 }; 133 136 134 137 static inline bool cachefiles_in_ondemand_mode(struct cachefiles_cache *cache)
+3 -3
fs/cachefiles/security.c
··· 18 18 struct cred *new; 19 19 int ret; 20 20 21 - _enter("{%s}", cache->secctx); 21 + _enter("{%u}", cache->have_secid ? cache->secid : 0); 22 22 23 23 new = prepare_kernel_cred(current); 24 24 if (!new) { ··· 26 26 goto error; 27 27 } 28 28 29 - if (cache->secctx) { 30 - ret = set_security_override_from_ctx(new, cache->secctx); 29 + if (cache->have_secid) { 30 + ret = set_security_override(new, cache->secid); 31 31 if (ret < 0) { 32 32 put_cred(new); 33 33 pr_err("Security denies permission to nominate security context: error %d\n",
+51 -23
fs/debugfs/file.c
··· 64 64 } 65 65 EXPORT_SYMBOL_GPL(debugfs_real_fops); 66 66 67 - /** 68 - * debugfs_file_get - mark the beginning of file data access 69 - * @dentry: the dentry object whose data is being accessed. 70 - * 71 - * Up to a matching call to debugfs_file_put(), any successive call 72 - * into the file removing functions debugfs_remove() and 73 - * debugfs_remove_recursive() will block. Since associated private 74 - * file data may only get freed after a successful return of any of 75 - * the removal functions, you may safely access it after a successful 76 - * call to debugfs_file_get() without worrying about lifetime issues. 77 - * 78 - * If -%EIO is returned, the file has already been removed and thus, 79 - * it is not safe to access any of its data. If, on the other hand, 80 - * it is allowed to access the file data, zero is returned. 81 - */ 82 - int debugfs_file_get(struct dentry *dentry) 67 + enum dbgfs_get_mode { 68 + DBGFS_GET_ALREADY, 69 + DBGFS_GET_REGULAR, 70 + DBGFS_GET_SHORT, 71 + }; 72 + 73 + static int __debugfs_file_get(struct dentry *dentry, enum dbgfs_get_mode mode) 83 74 { 84 75 struct debugfs_fsdata *fsd; 85 76 void *d_fsd; ··· 87 96 if (!((unsigned long)d_fsd & DEBUGFS_FSDATA_IS_REAL_FOPS_BIT)) { 88 97 fsd = d_fsd; 89 98 } else { 99 + if (WARN_ON(mode == DBGFS_GET_ALREADY)) 100 + return -EINVAL; 101 + 90 102 fsd = kmalloc(sizeof(*fsd), GFP_KERNEL); 91 103 if (!fsd) 92 104 return -ENOMEM; 93 105 94 - if ((unsigned long)d_fsd & DEBUGFS_FSDATA_IS_SHORT_FOPS_BIT) { 106 + if (mode == DBGFS_GET_SHORT) { 95 107 fsd->real_fops = NULL; 96 108 fsd->short_fops = (void *)((unsigned long)d_fsd & 97 - ~(DEBUGFS_FSDATA_IS_REAL_FOPS_BIT | 98 - DEBUGFS_FSDATA_IS_SHORT_FOPS_BIT)); 109 + ~DEBUGFS_FSDATA_IS_REAL_FOPS_BIT); 99 110 } else { 100 111 fsd->real_fops = (void *)((unsigned long)d_fsd & 101 112 ~DEBUGFS_FSDATA_IS_REAL_FOPS_BIT); ··· 130 137 return -EIO; 131 138 132 139 return 0; 140 + } 141 + 142 + /** 143 + * debugfs_file_get - mark the beginning of file data access
144 + * @dentry: the dentry object whose data is being accessed. 145 + * 146 + * Up to a matching call to debugfs_file_put(), any successive call 147 + * into the file removing functions debugfs_remove() and 148 + * debugfs_remove_recursive() will block. Since associated private 149 + * file data may only get freed after a successful return of any of 150 + * the removal functions, you may safely access it after a successful 151 + * call to debugfs_file_get() without worrying about lifetime issues. 152 + * 153 + * If -%EIO is returned, the file has already been removed and thus, 154 + * it is not safe to access any of its data. If, on the other hand, 155 + * it is allowed to access the file data, zero is returned. 156 + */ 157 + int debugfs_file_get(struct dentry *dentry) 158 + { 159 + return __debugfs_file_get(dentry, DBGFS_GET_ALREADY); 133 160 } 134 161 EXPORT_SYMBOL_GPL(debugfs_file_get); 135 162 ··· 280 267 const struct file_operations *real_fops = NULL; 281 268 int r; 282 269 283 - r = debugfs_file_get(dentry); 270 + r = __debugfs_file_get(dentry, DBGFS_GET_REGULAR); 284 271 if (r) 285 272 return r == -EIO ? -ENOENT : r; 286 273 ··· 437 424 proxy_fops->unlocked_ioctl = full_proxy_unlocked_ioctl; 438 425 } 439 426 440 - static int full_proxy_open(struct inode *inode, struct file *filp) 427 + static int full_proxy_open(struct inode *inode, struct file *filp, 428 + enum dbgfs_get_mode mode) 441 429 { 442 430 struct dentry *dentry = F_DENTRY(filp); 443 431 const struct file_operations *real_fops; ··· 446 432 struct debugfs_fsdata *fsd; 447 433 int r; 448 434 449 - r = debugfs_file_get(dentry); 435 + r = __debugfs_file_get(dentry, mode); 450 436 if (r) 451 437 return r == -EIO ? -ENOENT : r;
452 438 ··· 505 491 return r; 506 492 } 507 493 494 + static int full_proxy_open_regular(struct inode *inode, struct file *filp) 495 + { 496 + return full_proxy_open(inode, filp, DBGFS_GET_REGULAR); 497 + } 498 + 508 499 const struct file_operations debugfs_full_proxy_file_operations = { 509 - .open = full_proxy_open, 500 + .open = full_proxy_open_regular, 501 + }; 502 + 503 + static int full_proxy_open_short(struct inode *inode, struct file *filp) 504 + { 505 + return full_proxy_open(inode, filp, DBGFS_GET_SHORT); 506 + } 507 + 508 + const struct file_operations debugfs_full_short_proxy_file_operations = { 509 + .open = full_proxy_open_short, 510 510 }; 511 511 512 512 ssize_t debugfs_attr_read(struct file *file, char __user *buf,
+5 -8
fs/debugfs/inode.c
··· 229 229 return; 230 230 231 231 /* check it wasn't a dir (no fsdata) or automount (no real_fops) */ 232 - if (fsd && fsd->real_fops) { 232 + if (fsd && (fsd->real_fops || fsd->short_fops)) { 233 233 WARN_ON(!list_empty(&fsd->cancellations)); 234 234 mutex_destroy(&fsd->cancellations_mtx); 235 235 } ··· 455 455 const struct file_operations *fops) 456 456 { 457 457 if (WARN_ON((unsigned long)fops & 458 - (DEBUGFS_FSDATA_IS_SHORT_FOPS_BIT | 459 - DEBUGFS_FSDATA_IS_REAL_FOPS_BIT))) 458 + DEBUGFS_FSDATA_IS_REAL_FOPS_BIT)) 460 459 return ERR_PTR(-EINVAL); 461 460 462 461 return __debugfs_create_file(name, mode, parent, data, ··· 470 471 const struct debugfs_short_fops *fops) 471 472 { 472 473 if (WARN_ON((unsigned long)fops & 473 - (DEBUGFS_FSDATA_IS_SHORT_FOPS_BIT | 474 - DEBUGFS_FSDATA_IS_REAL_FOPS_BIT))) 474 + DEBUGFS_FSDATA_IS_REAL_FOPS_BIT)) 475 475 return ERR_PTR(-EINVAL); 476 476 477 477 return __debugfs_create_file(name, mode, parent, data, 478 - fops ? &debugfs_full_proxy_file_operations : 478 + fops ? &debugfs_full_short_proxy_file_operations : 479 479 &debugfs_noop_file_operations, 480 - (const void *)((unsigned long)fops | 481 - DEBUGFS_FSDATA_IS_SHORT_FOPS_BIT)); 480 + fops); 482 481 } 483 482 EXPORT_SYMBOL_GPL(debugfs_create_file_short); 484 483
+1 -5
fs/debugfs/internal.h
··· 15 15 extern const struct file_operations debugfs_noop_file_operations; 16 16 extern const struct file_operations debugfs_open_proxy_file_operations; 17 17 extern const struct file_operations debugfs_full_proxy_file_operations; 18 + extern const struct file_operations debugfs_full_short_proxy_file_operations; 18 19 19 20 struct debugfs_fsdata { 20 21 const struct file_operations *real_fops; ··· 41 40 * pointer gets its lowest bit set. 42 41 */ 43 42 #define DEBUGFS_FSDATA_IS_REAL_FOPS_BIT BIT(0) 44 - /* 45 - * A dentry's ->d_fsdata, when pointing to real fops, is with 46 - * short fops instead of full fops. 47 - */ 48 - #define DEBUGFS_FSDATA_IS_SHORT_FOPS_BIT BIT(1) 49 43 50 44 /* Access BITS */ 51 45 #define DEBUGFS_ALLOW_API BIT(0)
+2 -1
fs/exfat/dir.c
··· 122 122 type = exfat_get_entry_type(ep); 123 123 if (type == TYPE_UNUSED) { 124 124 brelse(bh); 125 - break; 125 + goto out; 126 126 } 127 127 128 128 if (type != TYPE_FILE && type != TYPE_DIR) { ··· 170 170 } 171 171 } 172 172 173 + out: 173 174 dir_entry->namebuf.lfn[0] = '\0'; 174 175 *cpos = EXFAT_DEN_TO_B(dentry); 175 176 return 0;
+10
fs/exfat/fatent.c
··· 216 216 217 217 if (err) 218 218 goto dec_used_clus; 219 + 220 + if (num_clusters >= sbi->num_clusters - EXFAT_FIRST_CLUSTER) { 221 + /* 222 + * The cluster chain includes a loop, scan the 223 + * bitmap to get the number of used clusters. 224 + */ 225 + exfat_count_used_clusters(sb, &sbi->used_clusters); 226 + 227 + return 0; 228 + } 219 229 } while (clu != EXFAT_EOF_CLUSTER); 220 230 } 221 231
+6
fs/exfat/file.c
··· 545 545 while (pos < new_valid_size) { 546 546 u32 len; 547 547 struct folio *folio; 548 + unsigned long off; 548 549 549 550 len = PAGE_SIZE - (pos & (PAGE_SIZE - 1)); 550 551 if (pos + len > new_valid_size) ··· 555 554 if (err) 556 555 goto out; 557 556 557 + off = offset_in_folio(folio, pos); 558 + folio_zero_new_buffers(folio, off, off + len); 559 + 558 560 err = ops->write_end(file, mapping, pos, len, len, folio, NULL); 559 561 if (err < 0) 560 562 goto out; ··· 566 562 balance_dirty_pages_ratelimited(mapping); 567 563 cond_resched(); 568 564 } 565 + 566 + return 0; 569 567 570 568 out: 571 569 return err;
+2 -2
fs/exfat/namei.c
··· 330 330 331 331 while ((dentry = exfat_search_empty_slot(sb, &hint_femp, p_dir, 332 332 num_entries, es)) < 0) { 333 - if (dentry == -EIO) 334 - break; 333 + if (dentry != -ENOSPC) 334 + return dentry; 335 335 336 336 if (exfat_check_max_dentries(inode)) 337 337 return -ENOSPC;
+1
fs/file.c
··· 22 22 #include <linux/close_range.h> 23 23 #include <linux/file_ref.h> 24 24 #include <net/sock.h> 25 + #include <linux/init_task.h> 25 26 26 27 #include "internal.h" 27 28
+2
fs/fuse/dir.c
··· 1681 1681 */ 1682 1682 if (ff->open_flags & (FOPEN_STREAM | FOPEN_NONSEEKABLE)) 1683 1683 nonseekable_open(inode, file); 1684 + if (!(ff->open_flags & FOPEN_KEEP_CACHE)) 1685 + invalidate_inode_pages2(inode->i_mapping); 1684 1686 } 1685 1687 1686 1688 return err;
+19 -12
fs/fuse/file.c
··· 1541 1541 */ 1542 1542 struct page **pages = kzalloc(max_pages * sizeof(struct page *), 1543 1543 GFP_KERNEL); 1544 - if (!pages) 1545 - return -ENOMEM; 1544 + if (!pages) { 1545 + ret = -ENOMEM; 1546 + goto out; 1547 + } 1546 1548 1547 1549 while (nbytes < *nbytesp && nr_pages < max_pages) { 1548 1550 unsigned nfolios, i; ··· 1559 1557 1560 1558 nbytes += ret; 1561 1559 1562 - ret += start; 1563 - /* Currently, all folios in FUSE are one page */ 1564 - nfolios = DIV_ROUND_UP(ret, PAGE_SIZE); 1560 + nfolios = DIV_ROUND_UP(ret + start, PAGE_SIZE); 1565 1561 1566 - ap->descs[ap->num_folios].offset = start; 1567 - fuse_folio_descs_length_init(ap->descs, ap->num_folios, nfolios); 1568 - for (i = 0; i < nfolios; i++) 1569 - ap->folios[i + ap->num_folios] = page_folio(pages[i]); 1562 + for (i = 0; i < nfolios; i++) { 1563 + struct folio *folio = page_folio(pages[i]); 1564 + unsigned int offset = start + 1565 + (folio_page_idx(folio, pages[i]) << PAGE_SHIFT); 1566 + unsigned int len = min_t(unsigned int, ret, PAGE_SIZE - start); 1570 1567 1571 - ap->num_folios += nfolios; 1572 - ap->descs[ap->num_folios - 1].length -= 1573 - (PAGE_SIZE - ret) & (PAGE_SIZE - 1); 1568 + ap->descs[ap->num_folios].offset = offset; 1569 + ap->descs[ap->num_folios].length = len; 1570 + ap->folios[ap->num_folios] = folio; 1571 + start = 0; 1572 + ret -= len; 1573 + ap->num_folios++; 1574 + } 1575 + 1574 1576 nr_pages += nfolios; 1575 1577 } 1576 1578 kfree(pages); ··· 1590 1584 else 1591 1585 ap->args.out_pages = true; 1592 1586 1587 + out: 1593 1588 *nbytesp = nbytes; 1594 1589 1595 1590 return ret < 0 ? ret : 0;
+3 -1
fs/hfs/super.c
··· 349 349 goto bail_no_root; 350 350 res = hfs_cat_find_brec(sb, HFS_ROOT_CNID, &fd); 351 351 if (!res) { 352 - if (fd.entrylength > sizeof(rec) || fd.entrylength < 0) { 352 + if (fd.entrylength != sizeof(rec.dir)) { 353 353 res = -EIO; 354 354 goto bail_hfs_find; 355 355 } 356 356 hfs_bnode_read(fd.bnode, &rec, fd.entryoffset, fd.entrylength); 357 + if (rec.type != HFS_CDR_DIR) 358 + res = -EIO; 357 359 } 358 360 if (res) 359 361 goto bail_hfs_find;
+58 -10
fs/iomap/buffered-io.c
··· 1138 1138 start_byte, end_byte, iomap, punch); 1139 1139 1140 1140 /* move offset to start of next folio in range */ 1141 - start_byte = folio_next_index(folio) << PAGE_SHIFT; 1141 + start_byte = folio_pos(folio) + folio_size(folio); 1142 1142 folio_unlock(folio); 1143 1143 folio_put(folio); 1144 1144 } ··· 1774 1774 */ 1775 1775 static int iomap_add_to_ioend(struct iomap_writepage_ctx *wpc, 1776 1776 struct writeback_control *wbc, struct folio *folio, 1777 - struct inode *inode, loff_t pos, unsigned len) 1777 + struct inode *inode, loff_t pos, loff_t end_pos, 1778 + unsigned len) 1778 1779 { 1779 1780 struct iomap_folio_state *ifs = folio->private; 1780 1781 size_t poff = offset_in_folio(folio, pos); ··· 1794 1793 1795 1794 if (ifs) 1796 1795 atomic_add(len, &ifs->write_bytes_pending); 1796 + 1797 + /* 1798 + * Clamp io_offset and io_size to the incore EOF so that ondisk 1799 + * file size updates in the ioend completion are byte-accurate. 1800 + * This avoids recovering files with zeroed tail regions when 1801 + * writeback races with appending writes: 1802 + * 1803 + * Thread 1: Thread 2: 1804 + * ------------ ----------- 1805 + * write [A, A+B] 1806 + * update inode size to A+B 1807 + * submit I/O [A, A+BS] 1808 + * write [A+B, A+B+C] 1809 + * update inode size to A+B+C 1810 + * <I/O completes, updates disk size to min(A+B+C, A+BS)> 1811 + * <power failure> 1812 + * 1813 + * After reboot: 1814 + * 1) with A+B+C < A+BS, the file has zero padding in range 1815 + * [A+B, A+B+C] 1816 + * 1817 + * |< Block Size (BS) >| 1818 + * |DDDDDDDDDDDD0000000000000| 1819 + * ^ ^ ^ 1820 + * A A+B A+B+C 1821 + * (EOF) 1822 + * 1823 + * 2) with A+B+C > A+BS, the file has zero padding in range 1824 + * [A+B, A+BS] 1825 + * 1826 + * |< Block Size (BS) >|< Block Size (BS) >| 1827 + * |DDDDDDDDDDDD0000000000000|00000000000000000000000000| 1828 + * ^ ^ ^ ^ 1829 + * A A+B A+BS A+B+C 1830 + * (EOF) 1831 + * 1832 + * D = Valid Data 1833 + * 0 = Zero Padding 1834 + * 1835 + * Note that this defeats the ability to chain the ioends of
1836 + appending writes. 1837 + */ 1797 1838 wpc->ioend->io_size += len; 1839 + if (wpc->ioend->io_offset + wpc->ioend->io_size > end_pos) 1840 + wpc->ioend->io_size = end_pos - wpc->ioend->io_offset; 1841 + 1798 1842 wbc_account_cgroup_owner(wbc, folio, len); 1799 1843 return 0; 1800 1844 } 1801 1845 1802 1846 static int iomap_writepage_map_blocks(struct iomap_writepage_ctx *wpc, 1803 1847 struct writeback_control *wbc, struct folio *folio, 1804 - struct inode *inode, u64 pos, unsigned dirty_len, 1848 + struct inode *inode, u64 pos, u64 end_pos, 1805 - unsigned *count) 1849 + unsigned dirty_len, unsigned *count) 1806 1850 { 1807 1851 int error; ··· 1872 1826 break; 1873 1827 default: 1874 1828 error = iomap_add_to_ioend(wpc, wbc, folio, inode, pos, 1875 1829 end_pos, map_len); 1876 1830 if (!error) 1877 1831 (*count)++; 1878 1832 break; ··· 1943 1897 * remaining memory is zeroed when mapped, and writes to that 1944 1898 * region are not written out to the file. 1945 1899 * 1946 - * Also adjust the writeback range to skip all blocks entirely 1947 - * beyond i_size. 1900 + * Also adjust the end_pos to the end of file and skip writeback 1901 + * for all blocks entirely beyond i_size. 1948 1902 */ 1949 1903 folio_zero_segment(folio, poff, folio_size(folio)); 1950 - *end_pos = round_up(isize, i_blocksize(inode)); 1904 + *end_pos = isize; 1951 1905 1952 1906 return true; ··· 1960 1914 struct inode *inode = folio->mapping->host; 1961 1915 u64 pos = folio_pos(folio); 1962 1916 u64 end_pos = pos + folio_size(folio); 1917 + u64 end_aligned = 0; 1963 1918 unsigned count = 0; 1964 1919 int error = 0; 1965 1920 u32 rlen; ··· 2002 1955 /* 2003 1956 * Walk through the folio to find dirty areas to write back.
2004 1957 */ 2005 - while ((rlen = iomap_find_dirty_range(folio, &pos, end_pos))) { 1958 + end_aligned = round_up(end_pos, i_blocksize(inode)); 1959 + while ((rlen = iomap_find_dirty_range(folio, &pos, end_aligned))) { 2006 1960 error = iomap_writepage_map_blocks(wpc, wbc, folio, inode, 2007 - pos, rlen, &count); 1961 + pos, end_pos, rlen, &count); 2008 1962 if (error) 2009 1963 break; 2010 1964 pos += rlen;
+2 -2
fs/jbd2/commit.c
··· 772 772 /* 773 773 * If the journal is not located on the file system device, 774 774 * then we must flush the file system device before we issue 775 - * the commit record 775 + * the commit record and update the journal tail sequence. 776 776 */ 777 - if (commit_transaction->t_need_data_flush && 777 + if ((commit_transaction->t_need_data_flush || update_tail) && 778 778 (journal->j_fs_dev != journal->j_dev) && 779 779 (journal->j_flags & JBD2_BARRIER)) 780 780 blkdev_issue_flush(journal->j_fs_dev);
+1 -1
fs/jbd2/revoke.c
··· 654 654 set_buffer_jwrite(descriptor); 655 655 BUFFER_TRACE(descriptor, "write"); 656 656 set_buffer_dirty(descriptor); 657 - write_dirty_buffer(descriptor, REQ_SYNC); 657 + write_dirty_buffer(descriptor, JBD2_JOURNAL_REQ_FLAGS); 658 658 } 659 659 #endif 660 660
+9 -6
fs/mount.h
··· 38 38 struct dentry *mnt_mountpoint; 39 39 struct vfsmount mnt; 40 40 union { 41 + struct rb_node mnt_node; /* node in the ns->mounts rbtree */ 41 42 struct rcu_head mnt_rcu; 42 43 struct llist_node mnt_llist; 43 44 }; ··· 52 51 struct list_head mnt_child; /* and going through their mnt_child */ 53 52 struct list_head mnt_instance; /* mount instance on sb->s_mounts */ 54 53 const char *mnt_devname; /* Name of device e.g. /dev/dsk/hda1 */ 55 - union { 56 - struct rb_node mnt_node; /* Under ns->mounts */ 57 - struct list_head mnt_list; 58 - }; 54 + struct list_head mnt_list; 59 55 struct list_head mnt_expire; /* link in fs-specific expiry list */ 60 56 struct list_head mnt_share; /* circular list of shared mounts */ 61 57 struct list_head mnt_slave_list;/* list of slave mounts */ ··· 143 145 return ns->seq == 0; 144 146 } 145 147 148 + static inline bool mnt_ns_attached(const struct mount *mnt) 149 + { 150 + return !RB_EMPTY_NODE(&mnt->mnt_node); 151 + } 152 + 146 153 static inline void move_from_ns(struct mount *mnt, struct list_head *dt_list) 147 154 { 148 - WARN_ON(!(mnt->mnt.mnt_flags & MNT_ONRB)); 149 - mnt->mnt.mnt_flags &= ~MNT_ONRB; 155 + WARN_ON(!mnt_ns_attached(mnt)); 150 156 rb_erase(&mnt->mnt_node, &mnt->mnt_ns->mounts); 157 + RB_CLEAR_NODE(&mnt->mnt_node); 151 158 list_add_tail(&mnt->mnt_list, dt_list); 152 159 } 153 160
+14 -10
fs/namespace.c
··· 344 344 INIT_HLIST_NODE(&mnt->mnt_mp_list); 345 345 INIT_LIST_HEAD(&mnt->mnt_umounting); 346 346 INIT_HLIST_HEAD(&mnt->mnt_stuck_children); 347 + RB_CLEAR_NODE(&mnt->mnt_node); 347 348 mnt->mnt.mnt_idmap = &nop_mnt_idmap; 348 349 } 349 350 return mnt; ··· 1125 1124 struct rb_node **link = &ns->mounts.rb_node; 1126 1125 struct rb_node *parent = NULL; 1127 1126 1128 - WARN_ON(mnt->mnt.mnt_flags & MNT_ONRB); 1127 + WARN_ON(mnt_ns_attached(mnt)); 1129 1128 mnt->mnt_ns = ns; 1130 1129 while (*link) { 1131 1130 parent = *link; ··· 1136 1135 } 1137 1136 rb_link_node(&mnt->mnt_node, parent, link); 1138 1137 rb_insert_color(&mnt->mnt_node, &ns->mounts); 1139 - mnt->mnt.mnt_flags |= MNT_ONRB; 1140 1138 } 1141 1139 1142 1140 /* ··· 1305 1305 } 1306 1306 1307 1307 mnt->mnt.mnt_flags = old->mnt.mnt_flags; 1308 - mnt->mnt.mnt_flags &= ~(MNT_WRITE_HOLD|MNT_MARKED|MNT_INTERNAL|MNT_ONRB); 1308 + mnt->mnt.mnt_flags &= ~(MNT_WRITE_HOLD|MNT_MARKED|MNT_INTERNAL); 1309 1309 1310 1310 atomic_inc(&sb->s_active); 1311 1311 mnt->mnt.mnt_idmap = mnt_idmap_get(mnt_idmap(&old->mnt)); ··· 1763 1763 /* Gather the mounts to umount */ 1764 1764 for (p = mnt; p; p = next_mnt(p, mnt)) { 1765 1765 p->mnt.mnt_flags |= MNT_UMOUNT; 1766 - if (p->mnt.mnt_flags & MNT_ONRB) 1766 + if (mnt_ns_attached(p)) 1767 1767 move_from_ns(p, &tmp_list); 1768 1768 else 1769 1769 list_move(&p->mnt_list, &tmp_list); ··· 1912 1912 1913 1913 event++; 1914 1914 if (flags & MNT_DETACH) { 1915 - if (mnt->mnt.mnt_flags & MNT_ONRB || 1916 - !list_empty(&mnt->mnt_list)) 1915 + if (mnt_ns_attached(mnt) || !list_empty(&mnt->mnt_list)) 1917 1916 umount_tree(mnt, UMOUNT_PROPAGATE); 1918 1917 retval = 0; 1919 1918 } else { 1920 1919 shrink_submounts(mnt); 1921 1920 retval = -EBUSY; 1922 1921 if (!propagate_mount_busy(mnt, 2)) { 1923 - if (mnt->mnt.mnt_flags & MNT_ONRB || 1924 - !list_empty(&mnt->mnt_list)) 1922 + if (mnt_ns_attached(mnt) || !list_empty(&mnt->mnt_list)) 1925 1923 umount_tree(mnt, UMOUNT_PROPAGATE|UMOUNT_SYNC); 
1926 1924 retval = 0; 1927 1925 } ··· 2053 2055 2054 2056 static bool is_mnt_ns_file(struct dentry *dentry) 2055 2057 { 2058 + struct ns_common *ns; 2059 + 2056 2060 /* Is this a proxy for a mount namespace? */ 2057 - return dentry->d_op == &ns_dentry_operations && 2058 - dentry->d_fsdata == &mntns_operations; 2061 + if (dentry->d_op != &ns_dentry_operations) 2062 + return false; 2063 + 2064 + ns = d_inode(dentry)->i_private; 2065 + 2066 + return ns->ops == &mntns_operations; 2059 2067 } 2060 2068 2061 2069 struct ns_common *from_mnt_ns(struct mnt_namespace *mnt)
+16 -12
fs/netfs/buffered_read.c
··· 275 275 netfs_stat(&netfs_n_rh_download); 276 276 if (rreq->netfs_ops->prepare_read) { 277 277 ret = rreq->netfs_ops->prepare_read(subreq); 278 - if (ret < 0) { 279 - atomic_dec(&rreq->nr_outstanding); 280 - netfs_put_subrequest(subreq, false, 281 - netfs_sreq_trace_put_cancel); 282 - break; 283 - } 278 + if (ret < 0) 279 + goto prep_failed; 284 280 trace_netfs_sreq(subreq, netfs_sreq_trace_prepare); 285 281 } 286 282 287 283 slice = netfs_prepare_read_iterator(subreq); 288 - if (slice < 0) { 289 - atomic_dec(&rreq->nr_outstanding); 290 - netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel); 291 - ret = slice; 292 - break; 293 - } 284 + if (slice < 0) 285 + goto prep_iter_failed; 294 286 295 287 rreq->netfs_ops->issue_read(subreq); 296 288 goto done; ··· 294 302 trace_netfs_sreq(subreq, netfs_sreq_trace_submit); 295 303 netfs_stat(&netfs_n_rh_zero); 296 304 slice = netfs_prepare_read_iterator(subreq); 305 + if (slice < 0) 306 + goto prep_iter_failed; 297 307 __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); 298 308 netfs_read_subreq_terminated(subreq, 0, false); 299 309 goto done; ··· 304 310 if (source == NETFS_READ_FROM_CACHE) { 305 311 trace_netfs_sreq(subreq, netfs_sreq_trace_submit); 306 312 slice = netfs_prepare_read_iterator(subreq); 313 + if (slice < 0) 314 + goto prep_iter_failed; 307 315 netfs_read_cache_to_pagecache(rreq, subreq); 308 316 goto done; 309 317 } 310 318 311 319 pr_err("Unexpected read source %u\n", source); 312 320 WARN_ON_ONCE(1); 321 + break; 322 + 323 + prep_iter_failed: 324 + ret = slice; 325 + prep_failed: 326 + subreq->error = ret; 327 + atomic_dec(&rreq->nr_outstanding); 328 + netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel); 313 329 break; 314 330 315 331 done:
+6 -2
fs/netfs/direct_write.c
··· 67 67 * allocate a sufficiently large bvec array and may shorten the 68 68 * request. 69 69 */ 70 - if (async || user_backed_iter(iter)) { 70 + if (user_backed_iter(iter)) { 71 71 n = netfs_extract_user_iter(iter, len, &wreq->iter, 0); 72 72 if (n < 0) { 73 73 ret = n; ··· 77 77 wreq->direct_bv_count = n; 78 78 wreq->direct_bv_unpin = iov_iter_extract_will_pin(iter); 79 79 } else { 80 + /* If this is a kernel-generated async DIO request, 81 + * assume that any resources the iterator points to 82 + * (eg. a bio_vec array) will persist till the end of 83 + * the op. 84 + */ 80 85 wreq->iter = *iter; 81 86 } 82 87 ··· 109 104 trace_netfs_rreq(wreq, netfs_rreq_trace_wait_ip); 110 105 wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS, 111 106 TASK_UNINTERRUPTIBLE); 112 - smp_rmb(); /* Read error/transferred after RIP flag */ 113 107 ret = wreq->error; 114 108 if (ret == 0) { 115 109 ret = wreq->transferred;
+19 -14
fs/netfs/read_collect.c
··· 62 62 } else { 63 63 trace_netfs_folio(folio, netfs_folio_trace_read_done); 64 64 } 65 + 66 + folioq_clear(folioq, slot); 65 67 } else { 66 68 // TODO: Use of PG_private_2 is deprecated. 67 69 if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) 68 70 netfs_pgpriv2_mark_copy_to_cache(subreq, rreq, folioq, slot); 71 + else 72 + folioq_clear(folioq, slot); 69 73 } 70 74 71 75 if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) { ··· 81 77 folio_unlock(folio); 82 78 } 83 79 } 84 - 85 - folioq_clear(folioq, slot); 86 80 } 87 81 88 82 /* ··· 249 247 250 248 /* Deal with the trickiest case: that this subreq is in the middle of a 251 249 * folio, not touching either edge, but finishes first. In such a 252 - * case, we donate to the previous subreq, if there is one, so that the 253 - * donation is only handled when that completes - and remove this 254 - * subreq from the list. 250 + * case, we donate to the previous subreq, if there is one and if it is 251 + * contiguous, so that the donation is only handled when that completes 252 + * - and remove this subreq from the list. 255 253 * 256 254 * If the previous subreq finished first, we will have acquired their 257 255 * donation and should be able to unlock folios and/or donate nextwards. 
258 256 */ 259 257 if (!subreq->consumed && 260 258 !prev_donated && 261 - !list_is_first(&subreq->rreq_link, &rreq->subrequests)) { 259 + !list_is_first(&subreq->rreq_link, &rreq->subrequests) && 260 + subreq->start == prev->start + prev->len) { 262 261 prev = list_prev_entry(subreq, rreq_link); 263 262 WRITE_ONCE(prev->next_donated, prev->next_donated + subreq->len); 264 263 subreq->start += subreq->len; ··· 381 378 task_io_account_read(rreq->transferred); 382 379 383 380 trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip); 384 - clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags); 385 - wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS); 381 + clear_and_wake_up_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags); 386 382 387 383 trace_netfs_rreq(rreq, netfs_rreq_trace_done); 388 384 netfs_clear_subrequests(rreq, false); ··· 440 438 rreq->origin == NETFS_READPAGE || 441 439 rreq->origin == NETFS_READ_FOR_WRITE)) { 442 440 netfs_consume_read_data(subreq, was_async); 443 - __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); 441 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); 444 442 } 445 443 } 446 444 EXPORT_SYMBOL(netfs_read_subreq_progress); ··· 499 497 rreq->origin == NETFS_READPAGE || 500 498 rreq->origin == NETFS_READ_FOR_WRITE)) { 501 499 netfs_consume_read_data(subreq, was_async); 502 - __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); 500 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); 503 501 rreq->transferred += subreq->transferred; 504 502 } ··· 513 511 } else { 514 512 trace_netfs_sreq(subreq, netfs_sreq_trace_short); 515 513 if (subreq->transferred > subreq->consumed) { 516 - __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 517 - __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags); 518 - set_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags); 519 - } else if (!__test_and_set_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags)) { 514 + /* If we didn't read new data, abandon retry. */ 515 + if (subreq->retry_count && 516 + test_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags)) { 517 + __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 518 + set_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags); 519 + } 520 + } else if (test_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags)) { 520 521 __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 521 522 set_bit(NETFS_RREQ_NEED_RETRY, &rreq->flags); 522 523 } else {
+4
fs/netfs/read_pgpriv2.c
··· 170 170 171 171 trace_netfs_write(wreq, netfs_write_trace_copy_to_cache); 172 172 netfs_stat(&netfs_n_wh_copy_to_cache); 173 + if (!wreq->io_streams[1].avail) { 174 + netfs_put_request(wreq, false, netfs_rreq_trace_put_return); 175 + goto couldnt_start; 176 + } 173 177 174 178 for (;;) { 175 179 error = netfs_pgpriv2_copy_folio(wreq, folio);
+7 -4
fs/netfs/read_retry.c
··· 49 49 * up to the first permanently failed one. 50 50 */ 51 51 if (!rreq->netfs_ops->prepare_read && 52 - !test_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags)) { 52 + !rreq->cache_resources.ops) { 53 53 struct netfs_io_subrequest *subreq; 54 54 55 55 list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { 56 56 if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) 57 57 break; 58 58 if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { 59 + __clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); 60 + subreq->retry_count++; 59 61 netfs_reset_iter(subreq); 60 62 netfs_reissue_read(rreq, subreq); 61 63 } ··· 139 137 stream0->sreq_max_len = subreq->len; 140 138 141 139 __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 142 - __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); 140 + __clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); 141 + subreq->retry_count++; 143 142 144 143 spin_lock_bh(&rreq->lock); 145 144 list_add_tail(&subreq->rreq_link, &rreq->subrequests); ··· 152 149 BUG_ON(!len); 153 150 154 151 /* Renegotiate max_len (rsize) */ 155 - if (rreq->netfs_ops->prepare_read(subreq) < 0) { 152 + if (rreq->netfs_ops->prepare_read && 153 + rreq->netfs_ops->prepare_read(subreq) < 0) { 156 154 trace_netfs_sreq(subreq, netfs_sreq_trace_reprep_failed); 157 155 __set_bit(NETFS_SREQ_FAILED, &subreq->flags); 158 156 } ··· 217 213 subreq->error = -ENOMEM; 218 214 __clear_bit(NETFS_SREQ_FAILED, &subreq->flags); 219 215 __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 220 - __clear_bit(NETFS_SREQ_RETRYING, &subreq->flags); 221 216 } 222 217 spin_lock_bh(&rreq->lock); 223 218 list_splice_tail_init(&queue, &rreq->subrequests);
+5 -9
fs/netfs/write_collect.c
··· 179 179 struct iov_iter source = subreq->io_iter; 180 180 181 181 iov_iter_revert(&source, subreq->len - source.count); 182 - __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); 183 182 netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); 184 183 netfs_reissue_write(stream, subreq, &source); 185 184 } ··· 233 234 /* Renegotiate max_len (wsize) */ 234 235 trace_netfs_sreq(subreq, netfs_sreq_trace_retry); 235 236 __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags); 236 - __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); 237 + subreq->retry_count++; 237 238 stream->prepare_write(subreq); 238 239 239 240 part = min(len, stream->sreq_max_len); ··· 278 279 subreq->start = start; 279 280 subreq->debug_index = atomic_inc_return(&wreq->subreq_counter); 280 281 subreq->stream_nr = to->stream_nr; 281 - __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); 282 + subreq->retry_count = 1; 282 283 283 284 trace_netfs_sreq_ref(wreq->debug_id, subreq->debug_index, 284 285 refcount_read(&subreq->ref), ··· 500 501 goto need_retry; 501 502 if ((notes & MADE_PROGRESS) && test_bit(NETFS_RREQ_PAUSE, &wreq->flags)) { 502 503 trace_netfs_rreq(wreq, netfs_rreq_trace_unpause); 503 - clear_bit_unlock(NETFS_RREQ_PAUSE, &wreq->flags); 504 - wake_up_bit(&wreq->flags, NETFS_RREQ_PAUSE); 504 + clear_and_wake_up_bit(NETFS_RREQ_PAUSE, &wreq->flags); 505 505 } 506 506 507 507 if (notes & NEED_REASSESS) { ··· 603 605 604 606 _debug("finished"); 605 607 trace_netfs_rreq(wreq, netfs_rreq_trace_wake_ip); 606 - clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &wreq->flags); 607 - wake_up_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS); 608 + clear_and_wake_up_bit(NETFS_RREQ_IN_PROGRESS, &wreq->flags); 608 609 609 610 if (wreq->iocb) { 610 611 size_t written = min(wreq->transferred, wreq->len); ··· 711 714 712 715 trace_netfs_sreq(subreq, netfs_sreq_trace_terminated); 713 716 714 - clear_bit_unlock(NETFS_SREQ_IN_PROGRESS, &subreq->flags); 715 - wake_up_bit(&subreq->flags, NETFS_SREQ_IN_PROGRESS); 717 + 
clear_and_wake_up_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); 716 718 717 719 /* If we are at the head of the queue, wake up the collector, 718 720 * transferring a ref to it if we were the ones to do so.
+2
fs/netfs/write_issue.c
··· 244 244 iov_iter_advance(source, size); 245 245 iov_iter_truncate(&subreq->io_iter, size); 246 246 247 + subreq->retry_count++; 248 + __clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags); 247 249 __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags); 248 250 netfs_do_issue_write(stream, subreq); 249 251 }
+8 -1
fs/nfs/fscache.c
··· 263 263 static atomic_t nfs_netfs_debug_id; 264 264 static int nfs_netfs_init_request(struct netfs_io_request *rreq, struct file *file) 265 265 { 266 + if (!file) { 267 + if (WARN_ON_ONCE(rreq->origin != NETFS_PGPRIV2_COPY_TO_CACHE)) 268 + return -EIO; 269 + return 0; 270 + } 271 + 266 272 rreq->netfs_priv = get_nfs_open_context(nfs_file_open_context(file)); 267 273 rreq->debug_id = atomic_inc_return(&nfs_netfs_debug_id); 268 274 /* [DEPRECATED] Use PG_private_2 to mark folio being written to the cache. */ ··· 280 274 281 275 static void nfs_netfs_free_request(struct netfs_io_request *rreq) 282 276 { 283 - put_nfs_open_context(rreq->netfs_priv); 277 + if (rreq->netfs_priv) 278 + put_nfs_open_context(rreq->netfs_priv); 284 279 } 285 280 286 281 static struct nfs_netfs_io_data *nfs_netfs_alloc(struct netfs_io_subrequest *sreq)
+1 -3
fs/notify/fdinfo.c
··· 47 47 size = f->handle_bytes >> 2; 48 48 49 49 ret = exportfs_encode_fid(inode, (struct fid *)f->f_handle, &size); 50 - if ((ret == FILEID_INVALID) || (ret < 0)) { 51 - WARN_ONCE(1, "Can't encode file handler for inotify: %d\n", ret); 50 + if ((ret == FILEID_INVALID) || (ret < 0)) 52 51 return; 53 - } 54 52 55 53 f->handle_type = ret; 56 54 f->handle_bytes = size * sizeof(u32);
+21 -4
fs/ocfs2/dir.c
··· 1065 1065 { 1066 1066 struct buffer_head *bh; 1067 1067 struct ocfs2_dir_entry *res_dir = NULL; 1068 + int ret = 0; 1068 1069 1069 1070 if (ocfs2_dir_indexed(dir)) 1070 1071 return ocfs2_find_entry_dx(name, namelen, dir, lookup); 1071 1072 1073 + if (unlikely(i_size_read(dir) <= 0)) { 1074 + ret = -EFSCORRUPTED; 1075 + mlog_errno(ret); 1076 + goto out; 1077 + } 1072 1078 /* 1073 1079 * The unindexed dir code only uses part of the lookup 1074 1080 * structure, so there's no reason to push it down further 1075 1081 * than this. 1076 1082 */ 1077 - if (OCFS2_I(dir)->ip_dyn_features & OCFS2_INLINE_DATA_FL) 1083 + if (OCFS2_I(dir)->ip_dyn_features & OCFS2_INLINE_DATA_FL) { 1084 + if (unlikely(i_size_read(dir) > dir->i_sb->s_blocksize)) { 1085 + ret = -EFSCORRUPTED; 1086 + mlog_errno(ret); 1087 + goto out; 1088 + } 1078 1089 bh = ocfs2_find_entry_id(name, namelen, dir, &res_dir); 1079 - else 1090 + } else { 1080 1091 bh = ocfs2_find_entry_el(name, namelen, dir, &res_dir); 1092 + } 1081 1093 1082 1094 if (bh == NULL) 1083 1095 return -ENOENT; 1084 1096 1085 1097 lookup->dl_leaf_bh = bh; 1086 1098 lookup->dl_entry = res_dir; 1087 - return 0; 1099 + out: 1100 + return ret; 1088 1101 } 1089 1102 1090 1103 /* ··· 2023 2010 * 2024 2011 * Return 0 if the name does not exist 2025 2012 * Return -EEXIST if the directory contains the name 2013 + * Return -EFSCORRUPTED if found corruption 2026 2014 * 2027 2015 * Callers should have i_rwsem + a cluster lock on dir 2028 2016 */ ··· 2037 2023 trace_ocfs2_check_dir_for_entry( 2038 2024 (unsigned long long)OCFS2_I(dir)->ip_blkno, namelen, name); 2039 2025 2040 - if (ocfs2_find_entry(name, namelen, dir, &lookup) == 0) { 2026 + ret = ocfs2_find_entry(name, namelen, dir, &lookup); 2027 + if (ret == 0) { 2041 2028 ret = -EEXIST; 2042 2029 mlog_errno(ret); 2030 + } else if (ret == -ENOENT) { 2031 + ret = 0; 2043 2032 } 2044 2033 2045 2034 ocfs2_free_dir_lookup_result(&lookup);
+8 -8
fs/overlayfs/copy_up.c
··· 415 415 return err; 416 416 } 417 417 418 - struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct dentry *real, 418 + struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct inode *realinode, 419 419 bool is_upper) 420 420 { 421 421 struct ovl_fh *fh; 422 422 int fh_type, dwords; 423 423 int buflen = MAX_HANDLE_SZ; 424 - uuid_t *uuid = &real->d_sb->s_uuid; 424 + uuid_t *uuid = &realinode->i_sb->s_uuid; 425 425 int err; 426 426 427 427 /* Make sure the real fid stays 32bit aligned */ ··· 438 438 * the price or reconnecting the dentry. 439 439 */ 440 440 dwords = buflen >> 2; 441 - fh_type = exportfs_encode_fh(real, (void *)fh->fb.fid, &dwords, 0); 441 + fh_type = exportfs_encode_inode_fh(realinode, (void *)fh->fb.fid, 442 + &dwords, NULL, 0); 442 443 buflen = (dwords << 2); 443 444 444 445 err = -EIO; 445 - if (WARN_ON(fh_type < 0) || 446 - WARN_ON(buflen > MAX_HANDLE_SZ) || 447 - WARN_ON(fh_type == FILEID_INVALID)) 446 + if (fh_type < 0 || fh_type == FILEID_INVALID || 447 + WARN_ON(buflen > MAX_HANDLE_SZ)) 448 448 goto out_err; 449 449 450 450 fh->fb.version = OVL_FH_VERSION; ··· 480 480 if (!ovl_can_decode_fh(origin->d_sb)) 481 481 return NULL; 482 482 483 - return ovl_encode_real_fh(ofs, origin, false); 483 + return ovl_encode_real_fh(ofs, d_inode(origin), false); 484 484 } 485 485 486 486 int ovl_set_origin_fh(struct ovl_fs *ofs, const struct ovl_fh *fh, ··· 505 505 const struct ovl_fh *fh; 506 506 int err; 507 507 508 - fh = ovl_encode_real_fh(ofs, upper, true); 508 + fh = ovl_encode_real_fh(ofs, d_inode(upper), true); 509 509 if (IS_ERR(fh)) 510 510 return PTR_ERR(fh); 511 511
+27 -22
fs/overlayfs/export.c
··· 176 176 * 177 177 * Return 0 for upper file handle, > 0 for lower file handle or < 0 on error. 178 178 */ 179 - static int ovl_check_encode_origin(struct dentry *dentry) 179 + static int ovl_check_encode_origin(struct inode *inode) 180 180 { 181 - struct ovl_fs *ofs = OVL_FS(dentry->d_sb); 181 + struct ovl_fs *ofs = OVL_FS(inode->i_sb); 182 182 bool decodable = ofs->config.nfs_export; 183 + struct dentry *dentry; 184 + int err; 183 185 184 186 /* No upper layer? */ 185 187 if (!ovl_upper_mnt(ofs)) 186 188 return 1; 187 189 188 190 /* Lower file handle for non-upper non-decodable */ 189 - if (!ovl_dentry_upper(dentry) && !decodable) 191 + if (!ovl_inode_upper(inode) && !decodable) 190 192 return 1; 191 193 192 194 /* Upper file handle for pure upper */ 193 - if (!ovl_dentry_lower(dentry)) 195 + if (!ovl_inode_lower(inode)) 194 196 return 0; 195 197 196 198 /* 197 199 * Root is never indexed, so if there's an upper layer, encode upper for 198 200 * root. 199 201 */ 200 - if (dentry == dentry->d_sb->s_root) 202 + if (inode == d_inode(inode->i_sb->s_root)) 201 203 return 0; 202 204 203 205 /* 204 206 * Upper decodable file handle for non-indexed upper. 205 207 */ 206 - if (ovl_dentry_upper(dentry) && decodable && 207 - !ovl_test_flag(OVL_INDEX, d_inode(dentry))) 208 + if (ovl_inode_upper(inode) && decodable && 209 + !ovl_test_flag(OVL_INDEX, inode)) 208 210 return 0; 209 211 210 212 /* ··· 215 213 * ovl_connect_layer() will try to make origin's layer "connected" by 216 214 * copying up a "connectable" ancestor. 
217 215 */ 218 - if (d_is_dir(dentry) && decodable) 219 - return ovl_connect_layer(dentry); 216 + if (!decodable || !S_ISDIR(inode->i_mode)) 217 + return 1; 218 + 219 + dentry = d_find_any_alias(inode); 220 + if (!dentry) 221 + return -ENOENT; 222 + 223 + err = ovl_connect_layer(dentry); 224 + dput(dentry); 225 + if (err < 0) 226 + return err; 220 227 221 228 /* Lower file handle for indexed and non-upper dir/non-dir */ 222 229 return 1; 223 230 } 224 231 225 - static int ovl_dentry_to_fid(struct ovl_fs *ofs, struct dentry *dentry, 232 + static int ovl_dentry_to_fid(struct ovl_fs *ofs, struct inode *inode, 226 233 u32 *fid, int buflen) 227 234 { 228 235 struct ovl_fh *fh = NULL; ··· 242 231 * Check if we should encode a lower or upper file handle and maybe 243 232 * copy up an ancestor to make lower file handle connectable. 244 233 */ 245 - err = enc_lower = ovl_check_encode_origin(dentry); 234 + err = enc_lower = ovl_check_encode_origin(inode); 246 235 if (enc_lower < 0) 247 236 goto fail; 248 237 249 238 /* Encode an upper or lower file handle */ 250 - fh = ovl_encode_real_fh(ofs, enc_lower ? ovl_dentry_lower(dentry) : 251 - ovl_dentry_upper(dentry), !enc_lower); 239 + fh = ovl_encode_real_fh(ofs, enc_lower ? ovl_inode_lower(inode) : 240 + ovl_inode_upper(inode), !enc_lower); 252 241 if (IS_ERR(fh)) 253 242 return PTR_ERR(fh); 254 243 ··· 262 251 return err; 263 252 264 253 fail: 265 - pr_warn_ratelimited("failed to encode file handle (%pd2, err=%i)\n", 266 - dentry, err); 254 + pr_warn_ratelimited("failed to encode file handle (ino=%lu, err=%i)\n", 255 + inode->i_ino, err); 267 256 goto out; 268 257 } 269 258 ··· 271 260 struct inode *parent) 272 261 { 273 262 struct ovl_fs *ofs = OVL_FS(inode->i_sb); 274 - struct dentry *dentry; 275 263 int bytes, buflen = *max_len << 2; 276 264 277 265 /* TODO: encode connectable file handles */ 278 266 if (parent) 279 267 return FILEID_INVALID; 280 268 281 - dentry = d_find_any_alias(inode); 282 - if (!dentry) 283 - return FILEID_INVALID; 284 - 285 - bytes = ovl_dentry_to_fid(ofs, dentry, fid, buflen); 286 - dput(dentry); 269 + bytes = ovl_dentry_to_fid(ofs, inode, fid, buflen); 287 270 if (bytes <= 0) 288 271 return FILEID_INVALID; 289 272
+2 -2
fs/overlayfs/namei.c
··· 542 542 struct ovl_fh *fh; 543 543 int err; 544 544 545 - fh = ovl_encode_real_fh(ofs, real, is_upper); 545 + fh = ovl_encode_real_fh(ofs, d_inode(real), is_upper); 546 546 err = PTR_ERR(fh); 547 547 if (IS_ERR(fh)) { 548 548 fh = NULL; ··· 738 738 struct ovl_fh *fh; 739 739 int err; 740 740 741 - fh = ovl_encode_real_fh(ofs, origin, false); 741 + fh = ovl_encode_real_fh(ofs, d_inode(origin), false); 742 742 if (IS_ERR(fh)) 743 743 return PTR_ERR(fh); 744 744
+1 -1
fs/overlayfs/overlayfs.h
··· 865 865 int ovl_maybe_copy_up(struct dentry *dentry, int flags); 866 866 int ovl_copy_xattr(struct super_block *sb, const struct path *path, struct dentry *new); 867 867 int ovl_set_attr(struct ovl_fs *ofs, struct dentry *upper, struct kstat *stat); 868 - struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct dentry *real, 868 + struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct inode *realinode, 869 869 bool is_upper); 870 870 struct ovl_fh *ovl_get_origin_fh(struct ovl_fs *ofs, struct dentry *origin); 871 871 int ovl_set_origin_fh(struct ovl_fs *ofs, const struct ovl_fh *fh,
+2
fs/proc/vmcore.c
··· 404 404 if (!iov_iter_count(iter)) 405 405 return acc; 406 406 } 407 + 408 + cond_resched(); 407 409 } 408 410 409 411 return acc;
+4 -7
fs/qnx6/inode.c
··· 179 179 */ 180 180 static const char *qnx6_checkroot(struct super_block *s) 181 181 { 182 - static char match_root[2][3] = {".\0\0", "..\0"}; 183 - int i, error = 0; 182 + int error = 0; 184 183 struct qnx6_dir_entry *dir_entry; 185 184 struct inode *root = d_inode(s->s_root); 186 185 struct address_space *mapping = root->i_mapping; ··· 188 189 if (IS_ERR(folio)) 189 190 return "error reading root directory"; 190 191 dir_entry = kmap_local_folio(folio, 0); 191 - for (i = 0; i < 2; i++) { 192 - /* maximum 3 bytes - due to match_root limitation */ 193 - if (strncmp(dir_entry[i].de_fname, match_root[i], 3)) 194 - error = 1; 195 - } 192 + if (memcmp(dir_entry[0].de_fname, ".", 2) || 193 + memcmp(dir_entry[1].de_fname, "..", 3)) 194 + error = 1; 196 195 folio_release_kmap(folio, dir_entry); 197 196 if (error) 198 197 return "error reading root directory.";
+19 -5
fs/smb/client/cifssmb.c
··· 152 152 spin_unlock(&ses->ses_lock); 153 153 154 154 rc = cifs_negotiate_protocol(0, ses, server); 155 - if (!rc) 155 + if (!rc) { 156 156 rc = cifs_setup_session(0, ses, server, ses->local_nls); 157 + if ((rc == -EACCES) || (rc == -EHOSTDOWN) || (rc == -EKEYREVOKED)) { 158 + /* 159 + * Try alternate password for next reconnect if an alternate 160 + * password is available. 161 + */ 162 + if (ses->password2) 163 + swap(ses->password2, ses->password); 164 + } 165 + } 157 166 158 167 /* do we need to reconnect tcon? */ 159 168 if (rc || !tcon->need_reconnect) { ··· 1328 1319 } 1329 1320 1330 1321 if (rdata->result == -ENODATA) { 1331 - __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 1332 1322 rdata->result = 0; 1323 + __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 1333 1324 } else { 1334 1325 size_t trans = rdata->subreq.transferred + rdata->got_bytes; 1335 1326 if (trans < rdata->subreq.len && 1336 1327 rdata->subreq.start + trans == ictx->remote_i_size) { 1337 - __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 1338 1328 rdata->result = 0; 1329 + __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 1330 + } else if (rdata->got_bytes > 0) { 1331 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &rdata->subreq.flags); 1339 1332 } 1340 1333 } 1341 1334 ··· 1681 1670 if (written > wdata->subreq.len) 1682 1671 written &= 0xFFFF; 1683 1672 1684 - if (written < wdata->subreq.len) 1673 + if (written < wdata->subreq.len) { 1685 1674 result = -ENOSPC; 1686 - else 1675 + } else { 1687 1676 result = written; 1677 + if (written > 0) 1678 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &wdata->subreq.flags); 1679 + } 1688 1680 break; 1689 1681 case MID_REQUEST_SUBMITTED: 1690 1682 case MID_RETRY_NEEDED:
+1 -2
fs/smb/client/connect.c
··· 1044 1044 /* Release netns reference for this server. */ 1045 1045 put_net(cifs_net_ns(server)); 1046 1046 kfree(server->leaf_fullpath); 1047 + kfree(server->hostname); 1047 1048 kfree(server); 1048 1049 1049 1050 length = atomic_dec_return(&tcpSesAllocCount); ··· 1671 1670 kfree_sensitive(server->session_key.response); 1672 1671 server->session_key.response = NULL; 1673 1672 server->session_key.len = 0; 1674 - kfree(server->hostname); 1675 - server->hostname = NULL; 1676 1673 1677 1674 task = xchg(&server->tsk, NULL); 1678 1675 if (task)
+18 -1
fs/smb/client/namespace.c
··· 196 196 struct smb3_fs_context tmp; 197 197 char *full_path; 198 198 struct vfsmount *mnt; 199 + struct cifs_sb_info *mntpt_sb; 200 + struct cifs_ses *ses; 199 201 200 202 if (IS_ROOT(mntpt)) 201 203 return ERR_PTR(-ESTALE); 202 204 203 - cur_ctx = CIFS_SB(mntpt->d_sb)->ctx; 205 + mntpt_sb = CIFS_SB(mntpt->d_sb); 206 + ses = cifs_sb_master_tcon(mntpt_sb)->ses; 207 + cur_ctx = mntpt_sb->ctx; 208 + 209 + /* 210 + * At this point, the root session should be in the mntpt sb. We should 211 + * bring the sb context passwords in sync with the root session's 212 + * passwords. This would help prevent unnecessary retries and password 213 + * swaps for automounts. 214 + */ 215 + mutex_lock(&ses->session_mutex); 216 + rc = smb3_sync_session_ctx_passwords(mntpt_sb, ses); 217 + mutex_unlock(&ses->session_mutex); 218 + 219 + if (rc) 220 + return ERR_PTR(rc); 204 221 205 222 fc = fs_context_for_submount(path->mnt->mnt_sb->s_type, mntpt); 206 223 if (IS_ERR(fc))
+6 -3
fs/smb/client/smb2pdu.c
··· 4615 4615 __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 4616 4616 rdata->result = 0; 4617 4617 } 4618 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &rdata->subreq.flags); 4618 4619 } 4619 4620 trace_smb3_rw_credits(rreq_debug_id, subreq_debug_index, rdata->credits.value, 4620 4621 server->credits, server->in_flight, ··· 4843 4842 4844 4843 cifs_stats_bytes_written(tcon, written); 4845 4844 4846 - if (written < wdata->subreq.len) 4845 + if (written < wdata->subreq.len) { 4847 4846 wdata->result = -ENOSPC; 4848 - else 4847 + } else if (written > 0) { 4849 4848 wdata->subreq.len = written; 4849 + __set_bit(NETFS_SREQ_MADE_PROGRESS, &wdata->subreq.flags); 4850 + } 4850 4851 break; 4851 4852 case MID_REQUEST_SUBMITTED: 4852 4853 case MID_RETRY_NEEDED: ··· 5017 5014 } 5018 5015 #endif 5019 5016 5020 - if (test_bit(NETFS_SREQ_RETRYING, &wdata->subreq.flags)) 5017 + if (wdata->subreq.retry_count > 0) 5021 5018 smb2_set_replay(server, &rqst); 5022 5019 5023 5020 cifs_dbg(FYI, "async write at %llu %u bytes iter=%zx\n",
+43
fs/smb/server/smb2pdu.c
··· 695 695 struct smb2_hdr *rsp_hdr; 696 696 struct ksmbd_work *in_work = ksmbd_alloc_work_struct(); 697 697 698 + if (!in_work) 699 + return; 700 + 698 701 if (allocate_interim_rsp_buf(in_work)) { 699 702 pr_err("smb_allocate_rsp_buf failed!\n"); 700 703 ksmbd_free_work_struct(in_work); ··· 3994 3991 posix_info->DeviceId = cpu_to_le32(ksmbd_kstat->kstat->rdev); 3995 3992 posix_info->HardLinks = cpu_to_le32(ksmbd_kstat->kstat->nlink); 3996 3993 posix_info->Mode = cpu_to_le32(ksmbd_kstat->kstat->mode & 0777); 3994 + switch (ksmbd_kstat->kstat->mode & S_IFMT) { 3995 + case S_IFDIR: 3996 + posix_info->Mode |= cpu_to_le32(POSIX_TYPE_DIR << POSIX_FILETYPE_SHIFT); 3997 + break; 3998 + case S_IFLNK: 3999 + posix_info->Mode |= cpu_to_le32(POSIX_TYPE_SYMLINK << POSIX_FILETYPE_SHIFT); 4000 + break; 4001 + case S_IFCHR: 4002 + posix_info->Mode |= cpu_to_le32(POSIX_TYPE_CHARDEV << POSIX_FILETYPE_SHIFT); 4003 + break; 4004 + case S_IFBLK: 4005 + posix_info->Mode |= cpu_to_le32(POSIX_TYPE_BLKDEV << POSIX_FILETYPE_SHIFT); 4006 + break; 4007 + case S_IFIFO: 4008 + posix_info->Mode |= cpu_to_le32(POSIX_TYPE_FIFO << POSIX_FILETYPE_SHIFT); 4009 + break; 4010 + case S_IFSOCK: 4011 + posix_info->Mode |= cpu_to_le32(POSIX_TYPE_SOCKET << POSIX_FILETYPE_SHIFT); 4012 + } 4013 + 3997 4014 posix_info->Inode = cpu_to_le64(ksmbd_kstat->kstat->ino); 3998 4015 posix_info->DosAttributes = 3999 4016 S_ISDIR(ksmbd_kstat->kstat->mode) ? 
··· 5204 5181 file_info->AllocationSize = cpu_to_le64(stat.blocks << 9); 5205 5182 file_info->HardLinks = cpu_to_le32(stat.nlink); 5206 5183 file_info->Mode = cpu_to_le32(stat.mode & 0777); 5184 + switch (stat.mode & S_IFMT) { 5185 + case S_IFDIR: 5186 + file_info->Mode |= cpu_to_le32(POSIX_TYPE_DIR << POSIX_FILETYPE_SHIFT); 5187 + break; 5188 + case S_IFLNK: 5189 + file_info->Mode |= cpu_to_le32(POSIX_TYPE_SYMLINK << POSIX_FILETYPE_SHIFT); 5190 + break; 5191 + case S_IFCHR: 5192 + file_info->Mode |= cpu_to_le32(POSIX_TYPE_CHARDEV << POSIX_FILETYPE_SHIFT); 5193 + break; 5194 + case S_IFBLK: 5195 + file_info->Mode |= cpu_to_le32(POSIX_TYPE_BLKDEV << POSIX_FILETYPE_SHIFT); 5196 + break; 5197 + case S_IFIFO: 5198 + file_info->Mode |= cpu_to_le32(POSIX_TYPE_FIFO << POSIX_FILETYPE_SHIFT); 5199 + break; 5200 + case S_IFSOCK: 5201 + file_info->Mode |= cpu_to_le32(POSIX_TYPE_SOCKET << POSIX_FILETYPE_SHIFT); 5202 + } 5203 + 5207 5204 file_info->DeviceId = cpu_to_le32(stat.rdev); 5208 5205 5209 5206 /*
+10
fs/smb/server/smb2pdu.h
··· 502 502 return buf + 4; 503 503 } 504 504 505 + #define POSIX_TYPE_FILE 0 506 + #define POSIX_TYPE_DIR 1 507 + #define POSIX_TYPE_SYMLINK 2 508 + #define POSIX_TYPE_CHARDEV 3 509 + #define POSIX_TYPE_BLKDEV 4 510 + #define POSIX_TYPE_FIFO 5 511 + #define POSIX_TYPE_SOCKET 6 512 + 513 + #define POSIX_FILETYPE_SHIFT 12 514 + 505 515 #endif /* _SMB2PDU_H */
+1 -2
fs/smb/server/transport_rdma.c
··· 2283 2283 2284 2284 ibdev = ib_device_get_by_netdev(netdev, RDMA_DRIVER_UNKNOWN); 2285 2285 if (ibdev) { 2286 - if (rdma_frwr_is_supported(&ibdev->attrs)) 2287 - rdma_capable = true; 2286 + rdma_capable = rdma_frwr_is_supported(&ibdev->attrs); 2288 2287 ib_device_put(ibdev); 2289 2288 } 2290 2289 }
+2 -1
fs/smb/server/vfs.c
··· 1264 1264 filepath, 1265 1265 flags, 1266 1266 path); 1267 + if (!is_last) 1268 + next[0] = '/'; 1267 1269 if (err) 1268 1270 goto out2; 1269 1271 else if (is_last) ··· 1273 1271 path_put(parent_path); 1274 1272 *parent_path = *path; 1275 1273 1276 - next[0] = '/'; 1277 1274 remain_len -= filename_len + 1; 1278 1275 } 1279 1276
+1 -1
fs/xfs/libxfs/xfs_rtgroup.h
··· 272 272 } 273 273 274 274 # define xfs_rtgroup_extents(mp, rgno) (0) 275 - # define xfs_update_last_rtgroup_size(mp, rgno) (-EOPNOTSUPP) 275 + # define xfs_update_last_rtgroup_size(mp, rgno) (0) 276 276 # define xfs_rtgroup_lock(rtg, gf) ((void)0) 277 277 # define xfs_rtgroup_unlock(rtg, gf) ((void)0) 278 278 # define xfs_rtgroup_trans_join(tp, rtg, gf) ((void)0)
+2 -1
fs/xfs/xfs_dquot.c
··· 87 87 } 88 88 spin_unlock(&qlip->qli_lock); 89 89 if (bp) { 90 + xfs_buf_lock(bp); 90 91 list_del_init(&qlip->qli_item.li_bio_list); 91 - xfs_buf_rele(bp); 92 + xfs_buf_relse(bp); 92 93 } 93 94 } 94 95
+2 -4
include/kvm/arm_pmu.h
··· 53 53 void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu); 54 54 void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu); 55 55 void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu); 56 - void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val); 57 - void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val); 56 + void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val); 58 57 void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu); 59 58 void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu); 60 59 bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu); ··· 126 127 static inline void kvm_pmu_vcpu_init(struct kvm_vcpu *vcpu) {} 127 128 static inline void kvm_pmu_vcpu_reset(struct kvm_vcpu *vcpu) {} 128 129 static inline void kvm_pmu_vcpu_destroy(struct kvm_vcpu *vcpu) {} 129 - static inline void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu, u64 val) {} 130 - static inline void kvm_pmu_enable_counter_mask(struct kvm_vcpu *vcpu, u64 val) {} 130 + static inline void kvm_pmu_reprogram_counter_mask(struct kvm_vcpu *vcpu, u64 val) {} 131 131 static inline void kvm_pmu_flush_hwstate(struct kvm_vcpu *vcpu) {} 132 132 static inline void kvm_pmu_sync_hwstate(struct kvm_vcpu *vcpu) {} 133 133 static inline bool kvm_pmu_should_notify_user(struct kvm_vcpu *vcpu)
+1 -1
include/linux/bus/stm32_firewall_device.h
··· 115 115 #else /* CONFIG_STM32_FIREWALL */ 116 116 117 117 int stm32_firewall_get_firewall(struct device_node *np, struct stm32_firewall *firewall, 118 - unsigned int nb_firewall); 118 + unsigned int nb_firewall) 119 119 { 120 120 return -ENODEV; 121 121 }
+1
include/linux/hrtimer.h
··· 386 386 extern void sysrq_timer_list_show(void); 387 387 388 388 int hrtimers_prepare_cpu(unsigned int cpu); 389 + int hrtimers_cpu_starting(unsigned int cpu); 389 390 #ifdef CONFIG_HOTPLUG_CPU 390 391 int hrtimers_cpu_dying(unsigned int cpu); 391 392 #else
+10
include/linux/io_uring/cmd.h
··· 18 18 u8 pdu[32]; /* available inline for free use */ 19 19 }; 20 20 21 + struct io_uring_cmd_data { 22 + struct io_uring_sqe sqes[2]; 23 + void *op_data; 24 + }; 25 + 21 26 static inline const void *io_uring_sqe_cmd(const struct io_uring_sqe *sqe) 22 27 { 23 28 return sqe->cmd; ··· 116 111 static inline struct task_struct *io_uring_cmd_get_task(struct io_uring_cmd *cmd) 117 112 { 118 113 return cmd_to_io_kiocb(cmd)->tctx->task; 114 + } 115 + 116 + static inline struct io_uring_cmd_data *io_uring_cmd_get_async_data(struct io_uring_cmd *cmd) 117 + { 118 + return cmd_to_io_kiocb(cmd)->async_data; 119 119 } 120 120 121 121 #endif /* _LINUX_IO_URING_CMD_H */
+1 -1
include/linux/iomap.h
··· 335 335 u16 io_type; 336 336 u16 io_flags; /* IOMAP_F_* */ 337 337 struct inode *io_inode; /* file being written to */ 338 - size_t io_size; /* size of the extent */ 338 + size_t io_size; /* size of data within eof */ 339 339 loff_t io_offset; /* offset in the file */ 340 340 sector_t io_sector; /* start sector of ioend */ 341 341 struct bio io_bio; /* MUST BE LAST! */
+2 -1
include/linux/module.h
··· 773 773 774 774 static inline void *module_writable_address(struct module *mod, void *loc) 775 775 { 776 - if (!IS_ENABLED(CONFIG_ARCH_HAS_EXECMEM_ROX) || !mod) 776 + if (!IS_ENABLED(CONFIG_ARCH_HAS_EXECMEM_ROX) || !mod || 777 + mod->state != MODULE_STATE_UNFORMED) 777 778 return loc; 778 779 return __module_writable_address(mod, loc); 779 780 }
+1 -2
include/linux/mount.h
··· 50 50 #define MNT_ATIME_MASK (MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME ) 51 51 52 52 #define MNT_INTERNAL_FLAGS (MNT_SHARED | MNT_WRITE_HOLD | MNT_INTERNAL | \ 53 - MNT_DOOMED | MNT_SYNC_UMOUNT | MNT_MARKED | MNT_ONRB) 53 + MNT_DOOMED | MNT_SYNC_UMOUNT | MNT_MARKED) 54 54 55 55 #define MNT_INTERNAL 0x4000 56 56 ··· 64 64 #define MNT_SYNC_UMOUNT 0x2000000 65 65 #define MNT_MARKED 0x4000000 66 66 #define MNT_UMOUNT 0x8000000 67 - #define MNT_ONRB 0x10000000 68 67 69 68 struct vfsmount { 70 69 struct dentry *mnt_root; /* root of the mounted tree */
+3 -4
include/linux/netfs.h
··· 185 185 short error; /* 0 or error that occurred */ 186 186 unsigned short debug_index; /* Index in list (for debugging output) */ 187 187 unsigned int nr_segs; /* Number of segs in io_iter */ 188 + u8 retry_count; /* The number of retries (0 on initial pass) */ 188 189 enum netfs_io_source source; /* Where to read from/write to */ 189 190 unsigned char stream_nr; /* I/O stream this belongs to */ 190 191 unsigned char curr_folioq_slot; /* Folio currently being read */ ··· 195 194 #define NETFS_SREQ_COPY_TO_CACHE 0 /* Set if should copy the data to the cache */ 196 195 #define NETFS_SREQ_CLEAR_TAIL 1 /* Set if the rest of the read should be cleared */ 197 196 #define NETFS_SREQ_SEEK_DATA_READ 3 /* Set if ->read() should SEEK_DATA first */ 198 - #define NETFS_SREQ_NO_PROGRESS 4 /* Set if we didn't manage to read any data */ 197 + #define NETFS_SREQ_MADE_PROGRESS 4 /* Set if we transferred at least some data */ 199 198 #define NETFS_SREQ_ONDEMAND 5 /* Set if it's from on-demand read mode */ 200 199 #define NETFS_SREQ_BOUNDARY 6 /* Set if ends on hard boundary (eg. ceph object) */ 201 200 #define NETFS_SREQ_HIT_EOF 7 /* Set if short due to EOF */ 202 201 #define NETFS_SREQ_IN_PROGRESS 8 /* Unlocked when the subrequest completes */ 203 202 #define NETFS_SREQ_NEED_RETRY 9 /* Set if the filesystem requests a retry */ 204 - #define NETFS_SREQ_RETRYING 10 /* Set if we're retrying */ 205 - #define NETFS_SREQ_FAILED 11 /* Set if the subreq failed unretryably */ 203 + #define NETFS_SREQ_FAILED 10 /* Set if the subreq failed unretryably */ 206 204 }; 207 205 208 206 enum netfs_io_origin { ··· 269 269 size_t prev_donated; /* Fallback for subreq->prev_donated */ 270 270 refcount_t ref; 271 271 unsigned long flags; 272 - #define NETFS_RREQ_COPY_TO_CACHE 1 /* Need to write to the cache */ 273 272 #define NETFS_RREQ_NO_UNLOCK_FOLIO 2 /* Don't unlock no_unlock_folio on completion */ 274 273 #define NETFS_RREQ_DONT_UNLOCK_FOLIOS 3 /* Don't unlock the folios on completion */ 275 274 #define NETFS_RREQ_FAILED 4 /* The request failed */
+12 -14
include/linux/poll.h
··· 25 25 26 26 struct poll_table_struct; 27 27 28 - /* 28 + /* 29 29 * structures and helpers for f_op->poll implementations 30 30 */ 31 31 typedef void (*poll_queue_proc)(struct file *, wait_queue_head_t *, struct poll_table_struct *); 32 32 33 33 /* 34 - * Do not touch the structure directly, use the access functions 35 - * poll_does_not_wait() and poll_requested_events() instead. 34 + * Do not touch the structure directly, use the access function 35 + * poll_requested_events() instead. 36 36 */ 37 37 typedef struct poll_table_struct { 38 38 poll_queue_proc _qproc; ··· 41 41 42 42 static inline void poll_wait(struct file * filp, wait_queue_head_t * wait_address, poll_table *p) 43 43 { 44 - if (p && p->_qproc && wait_address) 44 + if (p && p->_qproc) { 45 45 p->_qproc(filp, wait_address, p); 46 - } 47 - 48 - /* 49 - * Return true if it is guaranteed that poll will not wait. This is the case 50 - * if the poll() of another file descriptor in the set got an event, so there 51 - * is no need for waiting. 52 - */ 53 - static inline bool poll_does_not_wait(const poll_table *p) 54 - { 55 - return p == NULL || p->_qproc == NULL; 46 + /* 47 + * This memory barrier is paired in the wq_has_sleeper(). 48 + * See the comment above prepare_to_wait(), we need to 49 + * ensure that subsequent tests in this thread can't be 50 + * reordered with __add_wait_queue() in _qproc() paths. 51 + */ 52 + smp_mb(); 53 + } 56 54 } 57 55 58 56 /*
+6 -6
include/linux/pruss_driver.h
··· 144 144 static inline int pruss_cfg_get_gpmux(struct pruss *pruss, 145 145 enum pruss_pru_id pru_id, u8 *mux) 146 146 { 147 - return ERR_PTR(-EOPNOTSUPP); 147 + return -EOPNOTSUPP; 148 148 } 149 149 150 150 static inline int pruss_cfg_set_gpmux(struct pruss *pruss, 151 151 enum pruss_pru_id pru_id, u8 mux) 152 152 { 153 - return ERR_PTR(-EOPNOTSUPP); 153 + return -EOPNOTSUPP; 154 154 } 155 155 156 156 static inline int pruss_cfg_gpimode(struct pruss *pruss, 157 157 enum pruss_pru_id pru_id, 158 158 enum pruss_gpi_mode mode) 159 159 { 160 - return ERR_PTR(-EOPNOTSUPP); 160 + return -EOPNOTSUPP; 161 161 } 162 162 163 163 static inline int pruss_cfg_miirt_enable(struct pruss *pruss, bool enable) 164 164 { 165 - return ERR_PTR(-EOPNOTSUPP); 165 + return -EOPNOTSUPP; 166 166 } 167 167 168 168 static inline int pruss_cfg_xfr_enable(struct pruss *pruss, 169 169 enum pru_type pru_type, 170 - bool enable); 170 + bool enable) 171 171 { 172 - return ERR_PTR(-EOPNOTSUPP); 172 + return -EOPNOTSUPP; 173 173 } 174 174 175 175 #endif /* CONFIG_TI_PRUSS */
+32 -45
include/linux/regulator/consumer.h
··· 168 168 void regulator_put(struct regulator *regulator); 169 169 void devm_regulator_put(struct regulator *regulator); 170 170 171 - #if IS_ENABLED(CONFIG_OF) 172 - struct regulator *__must_check of_regulator_get_optional(struct device *dev, 173 - struct device_node *node, 174 - const char *id); 175 - struct regulator *__must_check devm_of_regulator_get_optional(struct device *dev, 176 - struct device_node *node, 177 - const char *id); 178 - #else 179 - static inline struct regulator *__must_check of_regulator_get_optional(struct device *dev, 180 - struct device_node *node, 181 - const char *id) 182 - { 183 - return ERR_PTR(-ENODEV); 184 - } 185 - 186 - static inline struct regulator *__must_check devm_of_regulator_get_optional(struct device *dev, 187 - struct device_node *node, 188 - const char *id) 189 - { 190 - return ERR_PTR(-ENODEV); 191 - } 192 - #endif 193 - 194 171 int regulator_register_supply_alias(struct device *dev, const char *id, 195 172 struct device *alias_dev, 196 173 const char *alias_id); ··· 200 223 201 224 int __must_check regulator_bulk_get(struct device *dev, int num_consumers, 202 225 struct regulator_bulk_data *consumers); 203 - int __must_check of_regulator_bulk_get_all(struct device *dev, struct device_node *np, 204 - struct regulator_bulk_data **consumers); 205 226 int __must_check devm_regulator_bulk_get(struct device *dev, int num_consumers, 206 227 struct regulator_bulk_data *consumers); 207 228 void devm_regulator_bulk_put(struct regulator_bulk_data *consumers); ··· 348 373 return ERR_PTR(-ENODEV); 349 374 } 350 375 351 - static inline struct regulator *__must_check of_regulator_get_optional(struct device *dev, 352 - struct device_node *node, 353 - const char *id) 354 - { 355 - return ERR_PTR(-ENODEV); 356 - } 357 - 358 - static inline struct regulator *__must_check devm_of_regulator_get_optional(struct device *dev, 359 - struct device_node *node, 360 - const char *id) 361 - { 362 - return ERR_PTR(-ENODEV); 363 - } 364 - 365 376 static inline void regulator_put(struct regulator *regulator) 366 377 { 367 378 } ··· 440 479 441 480 static inline int devm_regulator_bulk_get(struct device *dev, int num_consumers, 442 481 struct regulator_bulk_data *consumers) 443 - { 444 - return 0; 445 - } 446 - 447 - static inline int of_regulator_bulk_get_all(struct device *dev, struct device_node *np, 448 - struct regulator_bulk_data **consumers) 449 482 { 450 483 return 0; 451 484 } ··· 653 698 { 654 699 return false; 655 700 } 701 + #endif 702 + 703 + #if IS_ENABLED(CONFIG_OF) && IS_ENABLED(CONFIG_REGULATOR) 704 + struct regulator *__must_check of_regulator_get_optional(struct device *dev, 705 + struct device_node *node, 706 + const char *id); 707 + struct regulator *__must_check devm_of_regulator_get_optional(struct device *dev, 708 + struct device_node *node, 709 + const char *id); 710 + int __must_check of_regulator_bulk_get_all(struct device *dev, struct device_node *np, 711 + struct regulator_bulk_data **consumers); 712 + #else 713 + static inline struct regulator *__must_check of_regulator_get_optional(struct device *dev, 714 + struct device_node *node, 715 + const char *id) 716 + { 717 + return ERR_PTR(-ENODEV); 718 + } 719 + 720 + static inline struct regulator *__must_check devm_of_regulator_get_optional(struct device *dev, 721 + struct device_node *node, 722 + const char *id) 723 + { 724 + return ERR_PTR(-ENODEV); 725 + } 726 + 727 + static inline int of_regulator_bulk_get_all(struct device *dev, struct device_node *np, 728 + struct regulator_bulk_data **consumers) 729 + { 730 + return 0; 731 + } 732 + 656 733 #endif 657 734 658 735 static inline int regulator_set_voltage_triplet(struct regulator *regulator,
+1 -1
include/linux/seccomp.h
··· 55 55 56 56 #ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER 57 57 static inline int secure_computing(void) { return 0; } 58 - static inline int __secure_computing(const struct seccomp_data *sd) { return 0; } 59 58 #else 60 59 static inline void secure_computing_strict(int this_syscall) { return; } 61 60 #endif 61 + static inline int __secure_computing(const struct seccomp_data *sd) { return 0; } 62 62 63 63 static inline long prctl_get_seccomp(void) 64 64 {
+12
include/linux/userfaultfd_k.h
··· 247 247 vma_is_shmem(vma); 248 248 } 249 249 250 + static inline bool vma_has_uffd_without_event_remap(struct vm_area_struct *vma) 251 + { 252 + struct userfaultfd_ctx *uffd_ctx = vma->vm_userfaultfd_ctx.ctx; 253 + 254 + return uffd_ctx && (uffd_ctx->features & UFFD_FEATURE_EVENT_REMAP) == 0; 255 + } 256 + 250 257 extern int dup_userfaultfd(struct vm_area_struct *, struct list_head *); 251 258 extern void dup_userfaultfd_complete(struct list_head *); 252 259 void dup_userfaultfd_fail(struct list_head *); ··· 405 398 } 406 399 407 400 static inline bool userfaultfd_wp_async(struct vm_area_struct *vma) 401 + { 402 + return false; 403 + } 404 + 405 + static inline bool vma_has_uffd_without_event_remap(struct vm_area_struct *vma) 408 406 { 409 407 return false; 410 408 }
-8
include/net/busy_poll.h
··· 174 174 #endif 175 175 } 176 176 177 - static inline void sk_mark_napi_id_once_xdp(struct sock *sk, 178 - const struct xdp_buff *xdp) 179 - { 180 - #ifdef CONFIG_NET_RX_BUSY_POLL 181 - __sk_mark_napi_id_once(sk, xdp->rxq->napi_id); 182 - #endif 183 - } 184 - 185 177 #endif /* _LINUX_NET_BUSY_POLL_H */
+1 -1
include/net/inet_connection_sock.h
··· 282 282 283 283 static inline int inet_csk_reqsk_queue_is_full(const struct sock *sk) 284 284 { 285 - return inet_csk_reqsk_queue_len(sk) >= READ_ONCE(sk->sk_max_ack_backlog); 285 + return inet_csk_reqsk_queue_len(sk) > READ_ONCE(sk->sk_max_ack_backlog); 286 286 } 287 287 288 288 bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req);
+1 -1
include/net/page_pool/helpers.h
··· 294 294 295 295 static inline void page_pool_ref_netmem(netmem_ref netmem) 296 296 { 297 - atomic_long_inc(&netmem_to_page(netmem)->pp_ref_count); 297 + atomic_long_inc(netmem_get_pp_ref_count_ref(netmem)); 298 298 } 299 299 300 300 static inline void page_pool_ref_page(struct page *page)
+7 -10
include/net/sock.h
··· 2297 2297 } 2298 2298 2299 2299 /** 2300 - * sock_poll_wait - place memory barrier behind the poll_wait call. 2300 + * sock_poll_wait - wrapper for the poll_wait call. 2301 2301 * @filp: file 2302 2302 * @sock: socket to wait on 2303 2303 * @p: poll_table ··· 2307 2307 static inline void sock_poll_wait(struct file *filp, struct socket *sock, 2308 2308 poll_table *p) 2309 2309 { 2310 - if (!poll_does_not_wait(p)) { 2311 - poll_wait(filp, &sock->wq.wait, p); 2312 - /* We need to be sure we are in sync with the 2313 - * socket flags modification. 2314 - * 2315 - * This memory barrier is paired in the wq_has_sleeper. 2316 - */ 2317 - smp_mb(); 2318 - } 2310 + /* Provides a barrier we need to be sure we are in sync 2311 + * with the socket flags modification. 2312 + * 2313 + * This memory barrier is paired in the wq_has_sleeper. 2314 + */ 2315 + poll_wait(filp, &sock->wq.wait, p); 2319 2316 } 2320 2317 2321 2318 static inline void skb_set_hash_from_sk(struct sk_buff *skb, struct sock *sk)
-1
include/net/xdp.h
··· 62 62 u32 queue_index; 63 63 u32 reg_state; 64 64 struct xdp_mem_info mem; 65 - unsigned int napi_id; 66 65 u32 frag_size; 67 66 } ____cacheline_aligned; /* perf critical, avoid false-sharing */ 68 67
-14
include/net/xdp_sock_drv.h
··· 59 59 xp_fill_cb(pool, desc); 60 60 } 61 61 62 - static inline unsigned int xsk_pool_get_napi_id(struct xsk_buff_pool *pool) 63 - { 64 - #ifdef CONFIG_NET_RX_BUSY_POLL 65 - return pool->heads[0].xdp.rxq->napi_id; 66 - #else 67 - return 0; 68 - #endif 69 - } 70 - 71 62 static inline void xsk_pool_dma_unmap(struct xsk_buff_pool *pool, 72 63 unsigned long attrs) 73 64 { ··· 295 304 static inline void xsk_pool_fill_cb(struct xsk_buff_pool *pool, 296 305 struct xsk_cb_desc *desc) 297 306 { 298 - } 299 - 300 - static inline unsigned int xsk_pool_get_napi_id(struct xsk_buff_pool *pool) 301 - { 302 - return 0; 303 307 } 304 308 305 309 static inline void xsk_pool_dma_unmap(struct xsk_buff_pool *pool,
+1 -1
include/trace/events/hugetlbfs.h
··· 23 23 TP_fast_assign( 24 24 __entry->dev = inode->i_sb->s_dev; 25 25 __entry->ino = inode->i_ino; 26 - __entry->dir = dir->i_ino; 26 + __entry->dir = dir ? dir->i_ino : 0; 27 27 __entry->mode = mode; 28 28 ), 29 29
+63
include/trace/events/mmflags.h
··· 13 13 * Thus most bits set go first. 14 14 */ 15 15 16 + /* These define the values that are enums (the bits) */ 17 + #define TRACE_GFP_FLAGS_GENERAL \ 18 + TRACE_GFP_EM(DMA) \ 19 + TRACE_GFP_EM(HIGHMEM) \ 20 + TRACE_GFP_EM(DMA32) \ 21 + TRACE_GFP_EM(MOVABLE) \ 22 + TRACE_GFP_EM(RECLAIMABLE) \ 23 + TRACE_GFP_EM(HIGH) \ 24 + TRACE_GFP_EM(IO) \ 25 + TRACE_GFP_EM(FS) \ 26 + TRACE_GFP_EM(ZERO) \ 27 + TRACE_GFP_EM(DIRECT_RECLAIM) \ 28 + TRACE_GFP_EM(KSWAPD_RECLAIM) \ 29 + TRACE_GFP_EM(WRITE) \ 30 + TRACE_GFP_EM(NOWARN) \ 31 + TRACE_GFP_EM(RETRY_MAYFAIL) \ 32 + TRACE_GFP_EM(NOFAIL) \ 33 + TRACE_GFP_EM(NORETRY) \ 34 + TRACE_GFP_EM(MEMALLOC) \ 35 + TRACE_GFP_EM(COMP) \ 36 + TRACE_GFP_EM(NOMEMALLOC) \ 37 + TRACE_GFP_EM(HARDWALL) \ 38 + TRACE_GFP_EM(THISNODE) \ 39 + TRACE_GFP_EM(ACCOUNT) \ 40 + TRACE_GFP_EM(ZEROTAGS) 41 + 42 + #ifdef CONFIG_KASAN_HW_TAGS 43 + # define TRACE_GFP_FLAGS_KASAN \ 44 + TRACE_GFP_EM(SKIP_ZERO) \ 45 + TRACE_GFP_EM(SKIP_KASAN) 46 + #else 47 + # define TRACE_GFP_FLAGS_KASAN 48 + #endif 49 + 50 + #ifdef CONFIG_LOCKDEP 51 + # define TRACE_GFP_FLAGS_LOCKDEP \ 52 + TRACE_GFP_EM(NOLOCKDEP) 53 + #else 54 + # define TRACE_GFP_FLAGS_LOCKDEP 55 + #endif 56 + 57 + #ifdef CONFIG_SLAB_OBJ_EXT 58 + # define TRACE_GFP_FLAGS_SLAB \ 59 + TRACE_GFP_EM(NO_OBJ_EXT) 60 + #else 61 + # define TRACE_GFP_FLAGS_SLAB 62 + #endif 63 + 64 + #define TRACE_GFP_FLAGS \ 65 + TRACE_GFP_FLAGS_GENERAL \ 66 + TRACE_GFP_FLAGS_KASAN \ 67 + TRACE_GFP_FLAGS_LOCKDEP \ 68 + TRACE_GFP_FLAGS_SLAB 69 + 70 + #undef TRACE_GFP_EM 71 + #define TRACE_GFP_EM(a) TRACE_DEFINE_ENUM(___GFP_##a##_BIT); 72 + 73 + TRACE_GFP_FLAGS 74 + 75 + /* Just in case these are ever used */ 76 + TRACE_DEFINE_ENUM(___GFP_UNUSED_BIT); 77 + TRACE_DEFINE_ENUM(___GFP_LAST_BIT); 78 + 16 79 #define gfpflag_string(flag) {(__force unsigned long)flag, #flag} 17 80 18 81 #define __def_gfpflag_names \
-2
include/ufs/ufshcd.h
··· 329 329 * @program_key: program or evict an inline encryption key 330 330 * @fill_crypto_prdt: initialize crypto-related fields in the PRDT 331 331 * @event_notify: called to notify important events 332 - * @reinit_notify: called to notify reinit of UFSHCD during max gear switch 333 332 * @mcq_config_resource: called to configure MCQ platform resources 334 333 * @get_hba_mac: reports maximum number of outstanding commands supported by 335 334 * the controller. Should be implemented for UFSHCI 4.0 or later ··· 380 381 void *prdt, unsigned int num_segments); 381 382 void (*event_notify)(struct ufs_hba *hba, 382 383 enum ufs_event_type evt, void *data); 383 - void (*reinit_notify)(struct ufs_hba *); 384 384 int (*mcq_config_resource)(struct ufs_hba *hba); 385 385 int (*get_hba_mac)(struct ufs_hba *hba); 386 386 int (*op_runtime_config)(struct ufs_hba *hba);
+7 -9
io_uring/eventfd.c
··· 33 33 kfree(ev_fd); 34 34 } 35 35 36 + static void io_eventfd_put(struct io_ev_fd *ev_fd) 37 + { 38 + if (refcount_dec_and_test(&ev_fd->refs)) 39 + call_rcu(&ev_fd->rcu, io_eventfd_free); 40 + } 41 + 36 42 static void io_eventfd_do_signal(struct rcu_head *rcu) 37 43 { 38 44 struct io_ev_fd *ev_fd = container_of(rcu, struct io_ev_fd, rcu); 39 45 40 46 eventfd_signal_mask(ev_fd->cq_ev_fd, EPOLL_URING_WAKE); 41 - 42 - if (refcount_dec_and_test(&ev_fd->refs)) 43 - io_eventfd_free(rcu); 44 - } 45 - 46 - static void io_eventfd_put(struct io_ev_fd *ev_fd) 47 - { 48 - if (refcount_dec_and_test(&ev_fd->refs)) 49 - call_rcu(&ev_fd->rcu, io_eventfd_free); 47 + io_eventfd_put(ev_fd); 50 48 } 51 49 52 50 static void io_eventfd_release(struct io_ev_fd *ev_fd, bool put_ref)
+6 -10
io_uring/io_uring.c
··· 320 320 ret |= io_alloc_cache_init(&ctx->rw_cache, IO_ALLOC_CACHE_MAX, 321 321 sizeof(struct io_async_rw)); 322 322 ret |= io_alloc_cache_init(&ctx->uring_cache, IO_ALLOC_CACHE_MAX, 323 - sizeof(struct uring_cache)); 323 + sizeof(struct io_uring_cmd_data)); 324 324 spin_lock_init(&ctx->msg_lock); 325 325 ret |= io_alloc_cache_init(&ctx->msg_cache, IO_ALLOC_CACHE_MAX, 326 326 sizeof(struct io_kiocb)); ··· 1226 1226 1227 1227 /* SQPOLL doesn't need the task_work added, it'll run it itself */ 1228 1228 if (ctx->flags & IORING_SETUP_SQPOLL) { 1229 - struct io_sq_data *sqd = ctx->sq_data; 1230 - 1231 - if (sqd->thread) 1232 - __set_notify_signal(sqd->thread); 1229 + __set_notify_signal(tctx->task); 1233 1230 return; 1234 1231 } 1235 1232 ··· 2810 2813 2811 2814 if (unlikely(!ctx->poll_activated)) 2812 2815 io_activate_pollwq(ctx); 2813 - 2814 - poll_wait(file, &ctx->poll_wq, wait); 2815 2816 /* 2816 - * synchronizes with barrier from wq_has_sleeper call in 2817 - * io_commit_cqring 2817 + * provides mb() which pairs with barrier from wq_has_sleeper 2818 + * call in io_commit_cqring 2818 2819 */ 2819 - smp_rmb(); 2820 + poll_wait(file, &ctx->poll_wq, wait); 2821 + 2820 2822 if (!io_sqring_full(ctx)) 2821 2823 mask |= EPOLLOUT | EPOLLWRNORM; 2822 2824
+4 -3
io_uring/io_uring.h
··· 125 125 #if defined(CONFIG_PROVE_LOCKING) 126 126 lockdep_assert(in_task()); 127 127 128 + if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) 129 + lockdep_assert_held(&ctx->uring_lock); 130 + 128 131 if (ctx->flags & IORING_SETUP_IOPOLL) { 129 132 lockdep_assert_held(&ctx->uring_lock); 130 133 } else if (!ctx->task_complete) { ··· 139 136 * Not from an SQE, as those cannot be submitted, but via 140 137 * updating tagged resources. 141 138 */ 142 - if (percpu_ref_is_dying(&ctx->refs)) 143 - lockdep_assert(current_work()); 144 - else 139 + if (!percpu_ref_is_dying(&ctx->refs)) 145 140 lockdep_assert(current == ctx->submitter_task); 146 141 } 147 142 #endif
+2 -1
io_uring/opdef.c
··· 7 7 #include <linux/fs.h> 8 8 #include <linux/file.h> 9 9 #include <linux/io_uring.h> 10 + #include <linux/io_uring/cmd.h> 10 11 11 12 #include "io_uring.h" 12 13 #include "opdef.h" ··· 415 414 .plug = 1, 416 415 .iopoll = 1, 417 416 .iopoll_queue = 1, 418 - .async_size = 2 * sizeof(struct io_uring_sqe), 417 + .async_size = sizeof(struct io_uring_cmd_data), 419 418 .prep = io_uring_cmd_prep, 420 419 .issue = io_uring_cmd, 421 420 },
+31 -21
io_uring/register.c
··· 405 405 { 406 406 struct io_ring_ctx_rings o = { }, n = { }, *to_free = NULL; 407 407 size_t size, sq_array_offset; 408 + unsigned i, tail, old_head; 408 409 struct io_uring_params p; 409 - unsigned i, tail; 410 410 void *ptr; 411 411 int ret; 412 412 ··· 449 449 if (IS_ERR(n.rings)) 450 450 return PTR_ERR(n.rings); 451 451 452 - n.rings->sq_ring_mask = p.sq_entries - 1; 453 - n.rings->cq_ring_mask = p.cq_entries - 1; 454 - n.rings->sq_ring_entries = p.sq_entries; 455 - n.rings->cq_ring_entries = p.cq_entries; 452 + /* 453 + * At this point n.rings is shared with userspace, just like o.rings 454 + * is as well. While we don't expect userspace to modify it while 455 + * a resize is in progress, and it's most likely that userspace will 456 + * shoot itself in the foot if it does, we can't always assume good 457 + * intent... Use read/write once helpers from here on to indicate the 458 + * shared nature of it. 459 + */ 460 + WRITE_ONCE(n.rings->sq_ring_mask, p.sq_entries - 1); 461 + WRITE_ONCE(n.rings->cq_ring_mask, p.cq_entries - 1); 462 + WRITE_ONCE(n.rings->sq_ring_entries, p.sq_entries); 463 + WRITE_ONCE(n.rings->cq_ring_entries, p.cq_entries); 456 464 457 465 if (copy_to_user(arg, &p, sizeof(p))) { 458 466 io_register_free_rings(&p, &n); ··· 517 509 * rings can't hold what is already there, then fail the operation. 518 510 */ 519 511 n.sq_sqes = ptr; 520 - tail = o.rings->sq.tail; 521 - if (tail - o.rings->sq.head > p.sq_entries) 512 + tail = READ_ONCE(o.rings->sq.tail); 513 + old_head = READ_ONCE(o.rings->sq.head); 514 + if (tail - old_head > p.sq_entries) 522 515 goto overflow; 523 - for (i = o.rings->sq.head; i < tail; i++) { 516 + for (i = old_head; i < tail; i++) { 524 517 unsigned src_head = i & (ctx->sq_entries - 1); 525 - unsigned dst_head = i & n.rings->sq_ring_mask; 518 + unsigned dst_head = i & (p.sq_entries - 1); 526 519 527 520 n.sq_sqes[dst_head] = o.sq_sqes[src_head]; 528 521 } 529 - n.rings->sq.head = o.rings->sq.head; 530 - n.rings->sq.tail = o.rings->sq.tail; 522 + WRITE_ONCE(n.rings->sq.head, READ_ONCE(o.rings->sq.head)); 523 + WRITE_ONCE(n.rings->sq.tail, READ_ONCE(o.rings->sq.tail)); 531 524 532 - tail = o.rings->cq.tail; 533 - if (tail - o.rings->cq.head > p.cq_entries) { 525 + tail = READ_ONCE(o.rings->cq.tail); 526 + old_head = READ_ONCE(o.rings->cq.head); 527 + if (tail - old_head > p.cq_entries) { 534 528 overflow: 535 529 /* restore old rings, and return -EOVERFLOW via cleanup path */ 536 530 ctx->rings = o.rings; ··· 541 531 ret = -EOVERFLOW; 542 532 goto out; 543 533 } 544 - for (i = o.rings->cq.head; i < tail; i++) { 534 + for (i = old_head; i < tail; i++) { 545 535 unsigned src_head = i & (ctx->cq_entries - 1); 546 - unsigned dst_head = i & n.rings->cq_ring_mask; 536 + unsigned dst_head = i & (p.cq_entries - 1); 547 537 548 538 n.rings->cqes[dst_head] = o.rings->cqes[src_head]; 549 539 } 550 - n.rings->cq.head = o.rings->cq.head; 551 - n.rings->cq.tail = o.rings->cq.tail; 540 + WRITE_ONCE(n.rings->cq.head, READ_ONCE(o.rings->cq.head)); 541 + WRITE_ONCE(n.rings->cq.tail, READ_ONCE(o.rings->cq.tail)); 552 542 /* invalidate cached cqe refill */ 553 543 ctx->cqe_cached = ctx->cqe_sentinel = NULL; 554 544 555 - n.rings->sq_dropped = o.rings->sq_dropped; 556 - n.rings->sq_flags = o.rings->sq_flags; 557 - n.rings->cq_flags = o.rings->cq_flags; 558 - n.rings->cq_overflow = o.rings->cq_overflow; 545 + WRITE_ONCE(n.rings->sq_dropped, READ_ONCE(o.rings->sq_dropped)); 546 + WRITE_ONCE(n.rings->sq_flags, READ_ONCE(o.rings->sq_flags)); 547 + WRITE_ONCE(n.rings->cq_flags, READ_ONCE(o.rings->cq_flags)); 548 + WRITE_ONCE(n.rings->cq_overflow, READ_ONCE(o.rings->cq_overflow)); 559 549 560 550 /* all done, store old pointers and assign new ones */ 561 551 if (!(ctx->flags & IORING_SETUP_NO_SQARRAY))
+1 -9
io_uring/rsrc.c
··· 997 997 dst_node = io_rsrc_node_alloc(ctx, IORING_RSRC_BUFFER); 998 998 if (!dst_node) { 999 999 ret = -ENOMEM; 1000 - goto out_put_free; 1000 + goto out_unlock; 1001 1001 } 1002 1002 1003 1003 refcount_inc(&src_node->buf->refs); ··· 1033 1033 mutex_lock(&src_ctx->uring_lock); 1034 1034 /* someone raced setting up buffers, dump ours */ 1035 1035 ret = -EBUSY; 1036 - out_put_free: 1037 - i = data.nr; 1038 - while (i--) { 1039 - if (data.nodes[i]) { 1040 - io_buffer_unmap(src_ctx, data.nodes[i]); 1041 - kfree(data.nodes[i]); 1042 - } 1043 - } 1044 1036 out_unlock: 1045 1037 io_rsrc_data_free(ctx, &data); 1046 1038 mutex_unlock(&src_ctx->uring_lock);
+5 -1
io_uring/sqpoll.c
··· 268 268 DEFINE_WAIT(wait); 269 269 270 270 /* offload context creation failed, just exit */ 271 - if (!current->io_uring) 271 + if (!current->io_uring) { 272 + mutex_lock(&sqd->lock); 273 + sqd->thread = NULL; 274 + mutex_unlock(&sqd->lock); 272 275 goto err_out; 276 + } 273 277 274 278 snprintf(buf, sizeof(buf), "iou-sqp-%d", sqd->task_pid); 275 279 set_task_comm(current, buf);
+3 -1
io_uring/timeout.c
··· 427 427 428 428 timeout->off = 0; /* noseq */ 429 429 data = req->async_data; 430 + data->ts = *ts; 431 + 430 432 list_add_tail(&timeout->list, &ctx->timeout_list); 431 433 hrtimer_init(&data->timer, io_timeout_get_clock(data), mode); 432 434 data->timer.function = io_timeout_fn; 433 - hrtimer_start(&data->timer, timespec64_to_ktime(*ts), mode); 435 + hrtimer_start(&data->timer, timespec64_to_ktime(data->ts), mode); 434 436 return 0; 435 437 } 436 438
+16 -7
io_uring/uring_cmd.c
··· 16 16 #include "rsrc.h" 17 17 #include "uring_cmd.h" 18 18 19 - static struct uring_cache *io_uring_async_get(struct io_kiocb *req) 19 + static struct io_uring_cmd_data *io_uring_async_get(struct io_kiocb *req) 20 20 { 21 21 struct io_ring_ctx *ctx = req->ctx; 22 - struct uring_cache *cache; 22 + struct io_uring_cmd_data *cache; 23 23 24 24 cache = io_alloc_cache_get(&ctx->uring_cache); 25 25 if (cache) { 26 + cache->op_data = NULL; 26 27 req->flags |= REQ_F_ASYNC_DATA; 27 28 req->async_data = cache; 28 29 return cache; 29 30 } 30 - if (!io_alloc_async_data(req)) 31 - return req->async_data; 31 + if (!io_alloc_async_data(req)) { 32 + cache = req->async_data; 33 + cache->op_data = NULL; 34 + return cache; 35 + } 32 36 return NULL; 33 37 } 34 38 35 39 static void io_req_uring_cleanup(struct io_kiocb *req, unsigned int issue_flags) 36 40 { 37 41 struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd); 38 - struct uring_cache *cache = req->async_data; 42 + struct io_uring_cmd_data *cache = req->async_data; 43 + 44 + if (cache->op_data) { 45 + kfree(cache->op_data); 46 + cache->op_data = NULL; 47 + } 39 48 40 49 if (issue_flags & IO_URING_F_UNLOCKED) 41 50 return; ··· 192 183 const struct io_uring_sqe *sqe) 193 184 { 194 185 struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd); 195 - struct uring_cache *cache; 186 + struct io_uring_cmd_data *cache; 196 187 197 188 cache = io_uring_async_get(req); 198 189 if (unlikely(!cache)) ··· 269 260 270 261 ret = file->f_op->uring_cmd(ioucmd, issue_flags); 271 262 if (ret == -EAGAIN) { 272 - struct uring_cache *cache = req->async_data; 263 + struct io_uring_cmd_data *cache = req->async_data; 273 264 274 265 if (ioucmd->sqe != (void *) cache) 275 266 memcpy(cache, ioucmd->sqe, uring_sqe_size(req->ctx));
-4
io_uring/uring_cmd.h
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 - struct uring_cache { 4 - struct io_uring_sqe sqes[2]; 5 - }; 6 - 7 3 int io_uring_cmd(struct io_kiocb *req, unsigned int issue_flags); 8 4 int io_uring_cmd_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe); 9 5
+11 -33
kernel/cgroup/cpuset.c
··· 197 197 198 198 /* 199 199 * There are two global locks guarding cpuset structures - cpuset_mutex and 200 - * callback_lock. We also require taking task_lock() when dereferencing a 201 - * task's cpuset pointer. See "The task_lock() exception", at the end of this 202 - * comment. The cpuset code uses only cpuset_mutex. Other kernel subsystems 203 - * can use cpuset_lock()/cpuset_unlock() to prevent change to cpuset 200 + * callback_lock. The cpuset code uses only cpuset_mutex. Other kernel 201 + * subsystems can use cpuset_lock()/cpuset_unlock() to prevent change to cpuset 204 202 * structures. Note that cpuset_mutex needs to be a mutex as it is used in 205 203 * paths that rely on priority inheritance (e.g. scheduler - on RT) for 206 204 * correctness. ··· 227 229 * The cpuset_common_seq_show() handlers only hold callback_lock across 228 230 * small pieces of code, such as when reading out possibly multi-word 229 231 * cpumasks and nodemasks. 230 - * 231 - * Accessing a task's cpuset should be done in accordance with the 232 - * guidelines for accessing subsystem state in kernel/cgroup.c 233 232 */ 234 233 235 234 static DEFINE_MUTEX(cpuset_mutex); ··· 885 890 */ 886 891 if (cgrpv2) { 887 892 for (i = 0; i < ndoms; i++) { 888 - cpumask_copy(doms[i], csa[i]->effective_cpus); 893 + /* 894 + * The top cpuset may contain some boot time isolated 895 + * CPUs that need to be excluded from the sched domain. 896 + */ 897 + if (csa[i] == &top_cpuset) 898 + cpumask_and(doms[i], csa[i]->effective_cpus, 899 + housekeeping_cpumask(HK_TYPE_DOMAIN)); 900 + else 901 + cpumask_copy(doms[i], csa[i]->effective_cpus); 889 902 if (dattr) 890 903 dattr[i] = SD_ATTR_INIT; 891 904 } ··· 3124 3121 int retval = -ENODEV; 3125 3122 3126 3123 buf = strstrip(buf); 3127 - 3128 - /* 3129 - * CPU or memory hotunplug may leave @cs w/o any execution 3130 - * resources, in which case the hotplug code asynchronously updates 3131 - * configuration and transfers all tasks to the nearest ancestor 3132 - * which can execute. 3133 - * 3134 - * As writes to "cpus" or "mems" may restore @cs's execution 3135 - * resources, wait for the previously scheduled operations before 3136 - * proceeding, so that we don't end up keep removing tasks added 3137 - * after execution capability is restored. 3138 - * 3139 - * cpuset_handle_hotplug may call back into cgroup core asynchronously 3140 - * via cgroup_transfer_tasks() and waiting for it from a cgroupfs 3141 - * operation like this one can lead to a deadlock through kernfs 3142 - * active_ref protection. Let's break the protection. Losing the 3143 - * protection is okay as we check whether @cs is online after 3144 - * grabbing cpuset_mutex anyway. This only happens on the legacy 3145 - * hierarchies. 3146 - */ 3147 - css_get(&cs->css); 3148 - kernfs_break_active_protection(of->kn); 3149 - 3150 3124 cpus_read_lock(); 3151 3125 mutex_lock(&cpuset_mutex); 3152 3126 if (!is_cpuset_online(cs)) ··· 3156 3176 out_unlock: 3157 3177 mutex_unlock(&cpuset_mutex); 3158 3178 cpus_read_unlock(); 3159 - kernfs_unbreak_active_protection(of->kn); 3160 - css_put(&cs->css); 3161 3179 flush_workqueue(cpuset_migrate_mm_wq); 3162 3180 return retval ?: nbytes; 3163 3181 }
+1 -1
kernel/cpu.c
··· 2179 2179 }, 2180 2180 [CPUHP_AP_HRTIMERS_DYING] = { 2181 2181 .name = "hrtimers:dying", 2182 - .startup.single = NULL, 2182 + .startup.single = hrtimers_cpu_starting, 2183 2183 .teardown.single = hrtimers_cpu_dying, 2184 2184 }, 2185 2185 [CPUHP_AP_TICK_DYING] = {
+1 -1
kernel/events/uprobes.c
··· 1915 1915 if (!utask) 1916 1916 return; 1917 1917 1918 + t->utask = NULL; 1918 1919 WARN_ON_ONCE(utask->active_uprobe || utask->xol_vaddr); 1919 1920 1920 1921 timer_delete_sync(&utask->ri_timer); ··· 1925 1924 ri = free_ret_instance(ri, true /* cleanup_hprobe */); 1926 1925 1927 1926 kfree(utask); 1928 - t->utask = NULL; 1929 1927 } 1930 1928 1931 1929 #define RI_TIMER_PERIOD (HZ / 10) /* 100 ms */
+1
kernel/gen_kheaders.sh
··· 89 89 90 90 # Create archive and try to normalize metadata for reproducibility. 91 91 tar "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \ 92 + --exclude=".__afs*" --exclude=".nfs*" \ 92 93 --owner=0 --group=0 --sort=name --numeric-owner --mode=u=rw,go=r,a+X \ 93 94 -I $XZ -cf $tarfile -C $cpio_dir/ . > /dev/null 94 95
+67 -20
kernel/sched/ext.c
··· 2747 2747 { 2748 2748 struct scx_dsp_ctx *dspc = this_cpu_ptr(scx_dsp_ctx); 2749 2749 bool prev_on_scx = prev->sched_class == &ext_sched_class; 2750 + bool prev_on_rq = prev->scx.flags & SCX_TASK_QUEUED; 2750 2751 int nr_loops = SCX_DSP_MAX_LOOPS; 2751 2752 2752 2753 lockdep_assert_rq_held(rq); ··· 2780 2779 * See scx_ops_disable_workfn() for the explanation on the 2781 2780 * bypassing test. 2782 2781 */ 2783 - if ((prev->scx.flags & SCX_TASK_QUEUED) && 2784 - prev->scx.slice && !scx_rq_bypassing(rq)) { 2782 + if (prev_on_rq && prev->scx.slice && !scx_rq_bypassing(rq)) { 2785 2783 rq->scx.flags |= SCX_RQ_BAL_KEEP; 2786 2784 goto has_tasks; 2787 2785 } ··· 2813 2813 2814 2814 flush_dispatch_buf(rq); 2815 2815 2816 + if (prev_on_rq && prev->scx.slice) { 2817 + rq->scx.flags |= SCX_RQ_BAL_KEEP; 2818 + goto has_tasks; 2819 + } 2816 2820 if (rq->scx.local_dsq.nr) 2817 2821 goto has_tasks; 2818 2822 if (consume_global_dsq(rq)) ··· 2842 2838 * Didn't find another task to run. Keep running @prev unless 2843 2839 * %SCX_OPS_ENQ_LAST is in effect. 
2844 2840 */ 2845 - if ((prev->scx.flags & SCX_TASK_QUEUED) && 2846 - (!static_branch_unlikely(&scx_ops_enq_last) || 2841 + if (prev_on_rq && (!static_branch_unlikely(&scx_ops_enq_last) || 2847 2842 scx_rq_bypassing(rq))) { 2848 2843 rq->scx.flags |= SCX_RQ_BAL_KEEP; 2849 2844 goto has_tasks; ··· 3037 3034 */ 3038 3035 if (p->scx.slice && !scx_rq_bypassing(rq)) { 3039 3036 dispatch_enqueue(&rq->scx.local_dsq, p, SCX_ENQ_HEAD); 3040 - return; 3037 + goto switch_class; 3041 3038 } 3042 3039 3043 3040 /* ··· 3054 3051 } 3055 3052 } 3056 3053 3054 + switch_class: 3057 3055 if (next && next->sched_class != &ext_sched_class) 3058 3056 switch_class(rq, next); 3059 3057 } ··· 3590 3586 cpumask_copy(idle_masks.smt, cpu_online_mask); 3591 3587 } 3592 3588 3593 - void __scx_update_idle(struct rq *rq, bool idle) 3589 + static void update_builtin_idle(int cpu, bool idle) 3594 3590 { 3595 - int cpu = cpu_of(rq); 3596 - 3597 - if (SCX_HAS_OP(update_idle) && !scx_rq_bypassing(rq)) { 3598 - SCX_CALL_OP(SCX_KF_REST, update_idle, cpu_of(rq), idle); 3599 - if (!static_branch_unlikely(&scx_builtin_idle_enabled)) 3600 - return; 3601 - } 3602 - 3603 3591 if (idle) 3604 3592 cpumask_set_cpu(cpu, idle_masks.cpu); 3605 3593 else ··· 3616 3620 } 3617 3621 } 3618 3622 #endif 3623 + } 3624 + 3625 + /* 3626 + * Update the idle state of a CPU to @idle. 3627 + * 3628 + * If @do_notify is true, ops.update_idle() is invoked to notify the scx 3629 + * scheduler of an actual idle state transition (idle to busy or vice 3630 + * versa). If @do_notify is false, only the idle state in the idle masks is 3631 + * refreshed without invoking ops.update_idle(). 3632 + * 3633 + * This distinction is necessary, because an idle CPU can be "reserved" and 3634 + * awakened via scx_bpf_pick_idle_cpu() + scx_bpf_kick_cpu(), marking it as 3635 + * busy even if no tasks are dispatched. In this case, the CPU may return 3636 + * to idle without a true state transition. 
Refreshing the idle masks 3637 + * without invoking ops.update_idle() ensures accurate idle state tracking 3638 + * while avoiding unnecessary updates and maintaining balanced state 3639 + * transitions. 3640 + */ 3641 + void __scx_update_idle(struct rq *rq, bool idle, bool do_notify) 3642 + { 3643 + int cpu = cpu_of(rq); 3644 + 3645 + lockdep_assert_rq_held(rq); 3646 + 3647 + /* 3648 + * Trigger ops.update_idle() only when transitioning from a task to 3649 + * the idle thread and vice versa. 3650 + * 3651 + * Idle transitions are indicated by do_notify being set to true, 3652 + * managed by put_prev_task_idle()/set_next_task_idle(). 3653 + */ 3654 + if (SCX_HAS_OP(update_idle) && do_notify && !scx_rq_bypassing(rq)) 3655 + SCX_CALL_OP(SCX_KF_REST, update_idle, cpu_of(rq), idle); 3656 + 3657 + /* 3658 + * Update the idle masks: 3659 + * - for real idle transitions (do_notify == true) 3660 + * - for idle-to-idle transitions (indicated by the previous task 3661 + * being the idle thread, managed by pick_task_idle()) 3662 + * 3663 + * Skip updating idle masks if the previous task is not the idle 3664 + * thread, since set_next_task_idle() has already handled it when 3665 + * transitioning from a task to the idle thread (calling this 3666 + * function with do_notify == true). 3667 + * 3668 + * In this way we can avoid updating the idle masks twice, 3669 + * unnecessarily. 
3670 + */ 3671 + if (static_branch_likely(&scx_builtin_idle_enabled)) 3672 + if (do_notify || is_idle_task(rq->curr)) 3673 + update_builtin_idle(cpu, idle); 3619 3674 } 3620 3675 3621 3676 static void handle_hotplug(struct rq *rq, bool online) ··· 4791 4744 */ 4792 4745 for_each_possible_cpu(cpu) { 4793 4746 struct rq *rq = cpu_rq(cpu); 4794 - struct rq_flags rf; 4795 4747 struct task_struct *p, *n; 4796 4748 4797 - rq_lock(rq, &rf); 4749 + raw_spin_rq_lock(rq); 4798 4750 4799 4751 if (bypass) { 4800 4752 WARN_ON_ONCE(rq->scx.flags & SCX_RQ_BYPASSING); ··· 4809 4763 * sees scx_rq_bypassing() before moving tasks to SCX. 4810 4764 */ 4811 4765 if (!scx_enabled()) { 4812 - rq_unlock(rq, &rf); 4766 + raw_spin_rq_unlock(rq); 4813 4767 continue; 4814 4768 } 4815 4769 ··· 4829 4783 sched_enq_and_set_task(&ctx); 4830 4784 } 4831 4785 4832 - rq_unlock(rq, &rf); 4833 - 4834 4786 /* resched to restore ticks and idle state */ 4835 - resched_cpu(cpu); 4787 + if (cpu_online(cpu) || cpu == smp_processor_id()) 4788 + resched_curr(rq); 4789 + 4790 + raw_spin_rq_unlock(rq); 4836 4791 } 4837 4792 4838 4793 atomic_dec(&scx_ops_breather_depth);
+4 -4
kernel/sched/ext.h
··· 57 57 #endif /* CONFIG_SCHED_CLASS_EXT */ 58 58 59 59 #if defined(CONFIG_SCHED_CLASS_EXT) && defined(CONFIG_SMP) 60 - void __scx_update_idle(struct rq *rq, bool idle); 60 + void __scx_update_idle(struct rq *rq, bool idle, bool do_notify); 61 61 62 - static inline void scx_update_idle(struct rq *rq, bool idle) 62 + static inline void scx_update_idle(struct rq *rq, bool idle, bool do_notify) 63 63 { 64 64 if (scx_enabled()) 65 - __scx_update_idle(rq, idle); 65 + __scx_update_idle(rq, idle, do_notify); 66 66 } 67 67 #else 68 - static inline void scx_update_idle(struct rq *rq, bool idle) {} 68 + static inline void scx_update_idle(struct rq *rq, bool idle, bool do_notify) {} 69 69 #endif 70 70 71 71 #ifdef CONFIG_CGROUP_SCHED
+23 -128
kernel/sched/fair.c
··· 689 689 * 690 690 * XXX could add max_slice to the augmented data to track this. 691 691 */ 692 - static s64 entity_lag(u64 avruntime, struct sched_entity *se) 692 + static void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se) 693 693 { 694 694 s64 vlag, limit; 695 695 696 - vlag = avruntime - se->vruntime; 697 - limit = calc_delta_fair(max_t(u64, 2*se->slice, TICK_NSEC), se); 698 - 699 - return clamp(vlag, -limit, limit); 700 - } 701 - 702 - static void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se) 703 - { 704 696 SCHED_WARN_ON(!se->on_rq); 705 697 706 - se->vlag = entity_lag(avg_vruntime(cfs_rq), se); 698 + vlag = avg_vruntime(cfs_rq) - se->vruntime; 699 + limit = calc_delta_fair(max_t(u64, 2*se->slice, TICK_NSEC), se); 700 + 701 + se->vlag = clamp(vlag, -limit, limit); 707 702 } 708 703 709 704 /* ··· 3769 3774 dequeue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) { } 3770 3775 #endif 3771 3776 3772 - static void reweight_eevdf(struct sched_entity *se, u64 avruntime, 3773 - unsigned long weight) 3774 - { 3775 - unsigned long old_weight = se->load.weight; 3776 - s64 vlag, vslice; 3777 - 3778 - /* 3779 - * VRUNTIME 3780 - * -------- 3781 - * 3782 - * COROLLARY #1: The virtual runtime of the entity needs to be 3783 - * adjusted if re-weight at !0-lag point. 3784 - * 3785 - * Proof: For contradiction assume this is not true, so we can 3786 - * re-weight without changing vruntime at !0-lag point. 
3787 - * 3788 - * Weight VRuntime Avg-VRuntime 3789 - * before w v V 3790 - * after w' v' V' 3791 - * 3792 - * Since lag needs to be preserved through re-weight: 3793 - * 3794 - * lag = (V - v)*w = (V'- v')*w', where v = v' 3795 - * ==> V' = (V - v)*w/w' + v (1) 3796 - * 3797 - * Let W be the total weight of the entities before reweight, 3798 - * since V' is the new weighted average of entities: 3799 - * 3800 - * V' = (WV + w'v - wv) / (W + w' - w) (2) 3801 - * 3802 - * by using (1) & (2) we obtain: 3803 - * 3804 - * (WV + w'v - wv) / (W + w' - w) = (V - v)*w/w' + v 3805 - * ==> (WV-Wv+Wv+w'v-wv)/(W+w'-w) = (V - v)*w/w' + v 3806 - * ==> (WV - Wv)/(W + w' - w) + v = (V - v)*w/w' + v 3807 - * ==> (V - v)*W/(W + w' - w) = (V - v)*w/w' (3) 3808 - * 3809 - * Since we are doing at !0-lag point which means V != v, we 3810 - * can simplify (3): 3811 - * 3812 - * ==> W / (W + w' - w) = w / w' 3813 - * ==> Ww' = Ww + ww' - ww 3814 - * ==> W * (w' - w) = w * (w' - w) 3815 - * ==> W = w (re-weight indicates w' != w) 3816 - * 3817 - * So the cfs_rq contains only one entity, hence vruntime of 3818 - * the entity @v should always equal to the cfs_rq's weighted 3819 - * average vruntime @V, which means we will always re-weight 3820 - * at 0-lag point, thus breach assumption. Proof completed. 3821 - * 3822 - * 3823 - * COROLLARY #2: Re-weight does NOT affect weighted average 3824 - * vruntime of all the entities. 3825 - * 3826 - * Proof: According to corollary #1, Eq. 
(1) should be: 3827 - * 3828 - * (V - v)*w = (V' - v')*w' 3829 - * ==> v' = V' - (V - v)*w/w' (4) 3830 - * 3831 - * According to the weighted average formula, we have: 3832 - * 3833 - * V' = (WV - wv + w'v') / (W - w + w') 3834 - * = (WV - wv + w'(V' - (V - v)w/w')) / (W - w + w') 3835 - * = (WV - wv + w'V' - Vw + wv) / (W - w + w') 3836 - * = (WV + w'V' - Vw) / (W - w + w') 3837 - * 3838 - * ==> V'*(W - w + w') = WV + w'V' - Vw 3839 - * ==> V' * (W - w) = (W - w) * V (5) 3840 - * 3841 - * If the entity is the only one in the cfs_rq, then reweight 3842 - * always occurs at 0-lag point, so V won't change. Or else 3843 - * there are other entities, hence W != w, then Eq. (5) turns 3844 - * into V' = V. So V won't change in either case, proof done. 3845 - * 3846 - * 3847 - * So according to corollary #1 & #2, the effect of re-weight 3848 - * on vruntime should be: 3849 - * 3850 - * v' = V' - (V - v) * w / w' (4) 3851 - * = V - (V - v) * w / w' 3852 - * = V - vl * w / w' 3853 - * = V - vl' 3854 - */ 3855 - if (avruntime != se->vruntime) { 3856 - vlag = entity_lag(avruntime, se); 3857 - vlag = div_s64(vlag * old_weight, weight); 3858 - se->vruntime = avruntime - vlag; 3859 - } 3860 - 3861 - /* 3862 - * DEADLINE 3863 - * -------- 3864 - * 3865 - * When the weight changes, the virtual time slope changes and 3866 - * we should adjust the relative virtual deadline accordingly. 
3867 - * 3868 - * d' = v' + (d - v)*w/w' 3869 - * = V' - (V - v)*w/w' + (d - v)*w/w' 3870 - * = V - (V - v)*w/w' + (d - v)*w/w' 3871 - * = V + (d - V)*w/w' 3872 - */ 3873 - vslice = (s64)(se->deadline - avruntime); 3874 - vslice = div_s64(vslice * old_weight, weight); 3875 - se->deadline = avruntime + vslice; 3876 - } 3777 + static void place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags); 3877 3778 3878 3779 static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, 3879 3780 unsigned long weight) 3880 3781 { 3881 3782 bool curr = cfs_rq->curr == se; 3882 - u64 avruntime; 3883 3783 3884 3784 if (se->on_rq) { 3885 3785 /* commit outstanding execution time */ 3886 3786 update_curr(cfs_rq); 3887 - avruntime = avg_vruntime(cfs_rq); 3787 + update_entity_lag(cfs_rq, se); 3788 + se->deadline -= se->vruntime; 3789 + se->rel_deadline = 1; 3888 3790 if (!curr) 3889 3791 __dequeue_entity(cfs_rq, se); 3890 3792 update_load_sub(&cfs_rq->load, se->load.weight); 3891 3793 } 3892 3794 dequeue_load_avg(cfs_rq, se); 3893 3795 3894 - if (se->on_rq) { 3895 - reweight_eevdf(se, avruntime, weight); 3896 - } else { 3897 - /* 3898 - * Because we keep se->vlag = V - v_i, while: lag_i = w_i*(V - v_i), 3899 - * we need to scale se->vlag when w_i changes. 3900 - */ 3901 - se->vlag = div_s64(se->vlag * se->load.weight, weight); 3902 - } 3796 + /* 3797 + * Because we keep se->vlag = V - v_i, while: lag_i = w_i*(V - v_i), 3798 + * we need to scale se->vlag when w_i changes. 
3799 + */ 3800 + se->vlag = div_s64(se->vlag * se->load.weight, weight); 3801 + if (se->rel_deadline) 3802 + se->deadline = div_s64(se->deadline * se->load.weight, weight); 3903 3803 3904 3804 update_load_set(&se->load, weight); 3905 3805 ··· 3809 3919 enqueue_load_avg(cfs_rq, se); 3810 3920 if (se->on_rq) { 3811 3921 update_load_add(&cfs_rq->load, se->load.weight); 3922 + place_entity(cfs_rq, se, 0); 3812 3923 if (!curr) 3813 3924 __enqueue_entity(cfs_rq, se); 3814 3925 ··· 3956 4065 struct cfs_rq *gcfs_rq = group_cfs_rq(se); 3957 4066 long shares; 3958 4067 3959 - if (!gcfs_rq) 4068 + /* 4069 + * When a group becomes empty, preserve its weight. This matters for 4070 + * DELAY_DEQUEUE. 4071 + */ 4072 + if (!gcfs_rq || !gcfs_rq->load.weight) 3960 4073 return; 3961 4074 3962 4075 if (throttled_hierarchy(gcfs_rq)) ··· 5254 5359 5255 5360 se->vruntime = vruntime - lag; 5256 5361 5257 - if (sched_feat(PLACE_REL_DEADLINE) && se->rel_deadline) { 5362 + if (se->rel_deadline) { 5258 5363 se->deadline += se->vruntime; 5259 5364 se->rel_deadline = 0; 5260 5365 return;
+3 -2
kernel/sched/idle.c
··· 452 452 static void put_prev_task_idle(struct rq *rq, struct task_struct *prev, struct task_struct *next) 453 453 { 454 454 dl_server_update_idle_time(rq, prev); 455 - scx_update_idle(rq, false); 455 + scx_update_idle(rq, false, true); 456 456 } 457 457 458 458 static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first) 459 459 { 460 460 update_idle_core(rq); 461 - scx_update_idle(rq, true); 461 + scx_update_idle(rq, true, true); 462 462 schedstat_inc(rq->sched_goidle); 463 463 next->se.exec_start = rq_clock_task(rq); 464 464 } 465 465 466 466 struct task_struct *pick_task_idle(struct rq *rq) 467 467 { 468 + scx_update_idle(rq, true, false); 468 469 return rq->idle; 469 470 } 470 471
+28 -9
kernel/signal.c
··· 2007 2007 2008 2008 if (!list_empty(&q->list)) { 2009 2009 /* 2010 - * If task group is exiting with the signal already pending, 2011 - * wait for __exit_signal() to do its job. Otherwise if 2012 - * ignored, it's not supposed to be queued. Try to survive. 2010 + * The signal was ignored and blocked. The timer 2011 + * expiry queued it because blocked signals are 2012 + * queued independent of the ignored state. 2013 + * 2014 + * The unblocking set SIGPENDING, but the signal 2015 + * was not yet dequeued from the pending list. 2016 + * So prepare_signal() sees unblocked and ignored, 2017 + * which ends up here. Leave it queued like a 2018 + * regular signal. 2019 + * 2020 + * The same happens when the task group is exiting 2021 + * and the signal is already queued. 2022 + * prepare_signal() treats SIGNAL_GROUP_EXIT as 2023 + * ignored independent of its queued state. This 2024 + * gets cleaned up in __exit_signal(). 2013 2025 */ 2014 - WARN_ON_ONCE(!(t->signal->flags & SIGNAL_GROUP_EXIT)); 2015 2026 goto out; 2016 2027 } 2017 2028 ··· 2057 2046 goto out; 2058 2047 } 2059 2048 2060 - /* This should never happen and leaks a reference count */ 2061 - if (WARN_ON_ONCE(!hlist_unhashed(&tmr->ignored_list))) 2062 - hlist_del_init(&tmr->ignored_list); 2063 - 2064 2049 if (unlikely(!list_empty(&q->list))) { 2065 2050 /* This holds a reference count already */ 2066 2051 result = TRACE_SIGNAL_ALREADY_PENDING; 2067 2052 goto out; 2068 2053 } 2069 2054 2070 - posixtimer_sigqueue_getref(q); 2055 + /* 2056 + * If the signal is on the ignore list, it got blocked after it was 2057 + * ignored earlier. But nothing lifted the ignore. Move it back to 2058 + * the pending list to be consistent with the regular signal 2059 + * handling. This already holds a reference count. 2060 + * 2061 + * If it's not on the ignore list acquire a reference count. 
2062 + */ 2063 + if (likely(hlist_unhashed(&tmr->ignored_list))) 2064 + posixtimer_sigqueue_getref(q); 2065 + else 2066 + hlist_del_init(&tmr->ignored_list); 2067 + 2071 2068 posixtimer_queue_sigqueue(q, t, tmr->it_pid_type); 2072 2069 result = TRACE_SIGNAL_DELIVERED; 2073 2070 out:
+10 -1
kernel/time/hrtimer.c
··· 2202 2202 } 2203 2203 2204 2204 cpu_base->cpu = cpu; 2205 + hrtimer_cpu_base_init_expiry_lock(cpu_base); 2206 + return 0; 2207 + } 2208 + 2209 + int hrtimers_cpu_starting(unsigned int cpu) 2210 + { 2211 + struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases); 2212 + 2213 + /* Clear out any left over state from a CPU down operation */ 2205 2214 cpu_base->active_bases = 0; 2206 2215 cpu_base->hres_active = 0; 2207 2216 cpu_base->hang_detected = 0; ··· 2219 2210 cpu_base->expires_next = KTIME_MAX; 2220 2211 cpu_base->softirq_expires_next = KTIME_MAX; 2221 2212 cpu_base->online = 1; 2222 - hrtimer_cpu_base_init_expiry_lock(cpu_base); 2223 2213 return 0; 2224 2214 } 2225 2215 ··· 2294 2286 void __init hrtimers_init(void) 2295 2287 { 2296 2288 hrtimers_prepare_cpu(smp_processor_id()); 2289 + hrtimers_cpu_starting(smp_processor_id()); 2297 2290 open_softirq(HRTIMER_SOFTIRQ, hrtimer_run_softirq); 2298 2291 }
+55 -9
kernel/time/timer_migration.c
··· 534 534 break; 535 535 536 536 child = group; 537 - group = group->parent; 537 + /* 538 + * Pairs with the store release on group connection 539 + * to make sure group initialization is visible. 540 + */ 541 + group = READ_ONCE(group->parent); 538 542 data->childmask = child->groupmask; 543 + WARN_ON_ONCE(!data->childmask); 539 544 } while (group); 540 545 } 541 546 ··· 569 564 while ((node = timerqueue_getnext(&group->events))) { 570 565 evt = container_of(node, struct tmigr_event, nextevt); 571 566 572 - if (!evt->ignore) { 567 + if (!READ_ONCE(evt->ignore)) { 573 568 WRITE_ONCE(group->next_expiry, evt->nextevt.expires); 574 569 return evt; 575 570 } ··· 665 660 * lock is held while updating the ignore flag in idle path. So this 666 661 * state change will not be lost. 667 662 */ 668 - group->groupevt.ignore = true; 663 + WRITE_ONCE(group->groupevt.ignore, true); 669 664 670 665 return walk_done; 671 666 } ··· 726 721 union tmigr_state childstate, groupstate; 727 722 bool remote = data->remote; 728 723 bool walk_done = false; 724 + bool ignore; 729 725 u64 nextexp; 730 726 731 727 if (child) { ··· 745 739 nextexp = child->next_expiry; 746 740 evt = &child->groupevt; 747 741 748 - evt->ignore = (nextexp == KTIME_MAX) ? true : false; 742 + /* 743 + * This can race with concurrent idle exit (activate). 744 + * If the current writer wins, a useless remote expiration may 745 + * be scheduled. If the activate wins, the event is properly 746 + * ignored. 747 + */ 748 + ignore = (nextexp == KTIME_MAX) ? true : false; 749 + WRITE_ONCE(evt->ignore, ignore); 749 750 } else { 750 751 nextexp = data->nextexp; 751 752 752 753 first_childevt = evt = data->evt; 754 + ignore = evt->ignore; 753 755 754 756 /* 755 757 * Walking the hierarchy is required in any case when a ··· 783 769 * first event information of the group is updated properly and 784 770 * also handled properly, so skip this fast return path. 
785 771 */ 786 - if (evt->ignore && !remote && group->parent) 772 + if (ignore && !remote && group->parent) 787 773 return true; 788 774 789 775 raw_spin_lock(&group->lock); ··· 797 783 * queue when the expiry time changed only or when it could be ignored. 798 784 */ 799 785 if (timerqueue_node_queued(&evt->nextevt)) { 800 - if ((evt->nextevt.expires == nextexp) && !evt->ignore) { 786 + if ((evt->nextevt.expires == nextexp) && !ignore) { 801 787 /* Make sure not to miss a new CPU event with the same expiry */ 802 788 evt->cpu = first_childevt->cpu; 803 789 goto check_toplvl; ··· 807 793 WRITE_ONCE(group->next_expiry, KTIME_MAX); 808 794 } 809 795 810 - if (evt->ignore) { 796 + if (ignore) { 811 797 /* 812 798 * When the next child event could be ignored (nextexp is 813 799 * KTIME_MAX) and there was no remote timer handling before or ··· 1501 1487 s.seq = 0; 1502 1488 atomic_set(&group->migr_state, s.state); 1503 1489 1490 + /* 1491 + * If this is a new top-level, prepare its groupmask in advance. 1492 + * This avoids accidents where yet another new top-level is 1493 + * created in the future and made visible before the current groupmask. 1494 + */ 1495 + if (list_empty(&tmigr_level_list[lvl])) { 1496 + group->groupmask = BIT(0); 1497 + /* 1498 + * The previous top level has prepared its groupmask already, 1499 + * simply account it as the first child. 1500 + */ 1501 + if (lvl > 0) 1502 + group->num_children = 1; 1503 + } 1504 + 1504 1505 timerqueue_init_head(&group->events); 1505 1506 timerqueue_init(&group->groupevt.nextevt); 1506 1507 group->groupevt.nextevt.expires = KTIME_MAX; ··· 1579 1550 raw_spin_lock_irq(&child->lock); 1580 1551 raw_spin_lock_nested(&parent->lock, SINGLE_DEPTH_NESTING); 1581 1552 1582 - child->parent = parent; 1583 - child->groupmask = BIT(parent->num_children++); 1553 + if (activate) { 1554 + /* 1555 + * @child is the old top and @parent the new one. 
In this 1556 + * case groupmask is pre-initialized and @child already 1557 + * accounted, along with its new sibling corresponding to the 1558 + * CPU going up. 1559 + */ 1560 + WARN_ON_ONCE(child->groupmask != BIT(0) || parent->num_children != 2); 1561 + } else { 1562 + /* Adding @child for the CPU going up to @parent. */ 1563 + child->groupmask = BIT(parent->num_children++); 1564 + } 1565 + 1566 + /* 1567 + * Make sure parent initialization is visible before publishing it to a 1568 + * racing CPU entering/exiting idle. This RELEASE barrier enforces an 1569 + * address dependency that pairs with the READ_ONCE() in __walk_groups(). 1570 + */ 1571 + smp_store_release(&child->parent, parent); 1584 1572 1585 1573 raw_spin_unlock(&parent->lock); 1586 1574 raw_spin_unlock_irq(&child->lock);
+1
kernel/trace/trace.c
··· 4122 4122 preempt_model_none() ? "server" : 4123 4123 preempt_model_voluntary() ? "desktop" : 4124 4124 preempt_model_full() ? "preempt" : 4125 + preempt_model_lazy() ? "lazy" : 4125 4126 preempt_model_rt() ? "preempt_rt" : 4126 4127 "unknown", 4127 4128 /* These are reserved for later use */
+14
kernel/trace/trace_irqsoff.c
··· 182 182 struct trace_array_cpu *data; 183 183 unsigned long flags; 184 184 unsigned int trace_ctx; 185 + u64 *calltime; 185 186 int ret; 186 187 187 188 if (ftrace_graph_ignore_func(gops, trace)) ··· 200 199 if (!func_prolog_dec(tr, &data, &flags)) 201 200 return 0; 202 201 202 + calltime = fgraph_reserve_data(gops->idx, sizeof(*calltime)); 203 + if (!calltime) 204 + return 0; 205 + 206 + *calltime = trace_clock_local(); 207 + 203 208 trace_ctx = tracing_gen_ctx_flags(flags); 204 209 ret = __trace_graph_entry(tr, trace, trace_ctx); 205 210 atomic_dec(&data->disabled); ··· 220 213 struct trace_array_cpu *data; 221 214 unsigned long flags; 222 215 unsigned int trace_ctx; 216 + u64 *calltime; 217 + int size; 223 218 224 219 ftrace_graph_addr_finish(gops, trace); 225 220 226 221 if (!func_prolog_dec(tr, &data, &flags)) 227 222 return; 223 + 224 + calltime = fgraph_retrieve_data(gops->idx, &size); 225 + if (!calltime) 226 + return; 227 + trace->calltime = *calltime; 228 228 229 229 trace_ctx = tracing_gen_ctx_flags(flags); 230 230 __trace_graph_return(tr, trace, trace_ctx);
+4 -2
kernel/trace/trace_kprobe.c
··· 940 940 } 941 941 /* a symbol specified */ 942 942 symbol = kstrdup(argv[1], GFP_KERNEL); 943 - if (!symbol) 944 - return -ENOMEM; 943 + if (!symbol) { 944 + ret = -ENOMEM; 945 + goto error; 946 + } 945 947 946 948 tmp = strchr(symbol, '%'); 947 949 if (tmp) {
+14
kernel/trace/trace_sched_wakeup.c
··· 118 118 struct trace_array *tr = wakeup_trace; 119 119 struct trace_array_cpu *data; 120 120 unsigned int trace_ctx; 121 + u64 *calltime; 121 122 int ret = 0; 122 123 123 124 if (ftrace_graph_ignore_func(gops, trace)) ··· 136 135 if (!func_prolog_preempt_disable(tr, &data, &trace_ctx)) 137 136 return 0; 138 137 138 + calltime = fgraph_reserve_data(gops->idx, sizeof(*calltime)); 139 + if (!calltime) 140 + return 0; 141 + 142 + *calltime = trace_clock_local(); 143 + 139 144 ret = __trace_graph_entry(tr, trace, trace_ctx); 140 145 atomic_dec(&data->disabled); 141 146 preempt_enable_notrace(); ··· 155 148 struct trace_array *tr = wakeup_trace; 156 149 struct trace_array_cpu *data; 157 150 unsigned int trace_ctx; 151 + u64 *calltime; 152 + int size; 158 153 159 154 ftrace_graph_addr_finish(gops, trace); 160 155 161 156 if (!func_prolog_preempt_disable(tr, &data, &trace_ctx)) 162 157 return; 158 + 159 + calltime = fgraph_retrieve_data(gops->idx, &size); 160 + if (!calltime) 161 + return; 162 + trace->calltime = *calltime; 163 163 164 164 __trace_graph_return(tr, trace, trace_ctx); 165 165 atomic_dec(&data->disabled);
+7
kernel/workqueue.c
··· 2508 2508 return; 2509 2509 } 2510 2510 2511 + WARN_ON_ONCE(cpu != WORK_CPU_UNBOUND && !cpu_online(cpu)); 2511 2512 dwork->wq = wq; 2512 2513 dwork->cpu = cpu; 2513 2514 timer->expires = jiffies + delay; ··· 2533 2532 * @wq: workqueue to use 2534 2533 * @dwork: work to queue 2535 2534 * @delay: number of jiffies to wait before queueing 2535 + * 2536 + * We queue the delayed_work to a specific CPU, for non-zero delays the 2537 + * caller must ensure it is online and can't go away. Callers that fail 2538 + * to ensure this, may get @dwork->timer queued to an offlined CPU and 2539 + * this will prevent queueing of @dwork->work unless the offlined CPU 2540 + * becomes online again. 2536 2541 * 2537 2542 * Return: %false if @work was already on a queue, %true otherwise. If 2538 2543 * @delay is zero and @dwork is idle, it will be scheduled for immediate
+3
lib/alloc_tag.c
··· 195 195 union codetag_ref ref_old, ref_new; 196 196 struct alloc_tag *tag_old, *tag_new; 197 197 198 + if (!mem_alloc_profiling_enabled()) 199 + return; 200 + 198 201 tag_old = pgalloc_tag_get(&old->page); 199 202 if (!tag_old) 200 203 return;
+2 -2
mm/filemap.c
··· 1523 1523 /* Must be in bottom byte for x86 to work */ 1524 1524 BUILD_BUG_ON(PG_uptodate > 7); 1525 1525 VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); 1526 - VM_BUG_ON_FOLIO(folio_test_uptodate(folio), folio); 1526 + VM_BUG_ON_FOLIO(success && folio_test_uptodate(folio), folio); 1527 1527 1528 1528 if (likely(success)) 1529 1529 mask |= 1 << PG_uptodate; ··· 2996 2996 if (ops->is_partially_uptodate(folio, offset, bsz) == 2997 2997 seek_data) 2998 2998 break; 2999 - start = (start + bsz) & ~(bsz - 1); 2999 + start = (start + bsz) & ~((u64)bsz - 1); 3000 3000 offset += bsz; 3001 3001 } while (offset < folio_size(folio)); 3002 3002 unlock:
+12
mm/huge_memory.c
··· 2206 2206 return pmd; 2207 2207 } 2208 2208 2209 + static pmd_t clear_uffd_wp_pmd(pmd_t pmd) 2210 + { 2211 + if (pmd_present(pmd)) 2212 + pmd = pmd_clear_uffd_wp(pmd); 2213 + else if (is_swap_pmd(pmd)) 2214 + pmd = pmd_swp_clear_uffd_wp(pmd); 2215 + 2216 + return pmd; 2217 + } 2218 + 2209 2219 bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr, 2210 2220 unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd) 2211 2221 { ··· 2254 2244 pgtable_trans_huge_deposit(mm, new_pmd, pgtable); 2255 2245 } 2256 2246 pmd = move_soft_dirty_pmd(pmd); 2247 + if (vma_has_uffd_without_event_remap(vma)) 2248 + pmd = clear_uffd_wp_pmd(pmd); 2257 2249 set_pmd_at(mm, new_addr, new_pmd, pmd); 2258 2250 if (force_flush) 2259 2251 flush_pmd_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
+13 -1
mm/hugetlb.c
··· 5402 5402 unsigned long new_addr, pte_t *src_pte, pte_t *dst_pte, 5403 5403 unsigned long sz) 5404 5404 { 5405 + bool need_clear_uffd_wp = vma_has_uffd_without_event_remap(vma); 5405 5406 struct hstate *h = hstate_vma(vma); 5406 5407 struct mm_struct *mm = vma->vm_mm; 5407 5408 spinlock_t *src_ptl, *dst_ptl; ··· 5419 5418 spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING); 5420 5419 5421 5420 pte = huge_ptep_get_and_clear(mm, old_addr, src_pte); 5422 - set_huge_pte_at(mm, new_addr, dst_pte, pte, sz); 5421 + 5422 + if (need_clear_uffd_wp && pte_marker_uffd_wp(pte)) 5423 + huge_pte_clear(mm, new_addr, dst_pte, sz); 5424 + else { 5425 + if (need_clear_uffd_wp) { 5426 + if (pte_present(pte)) 5427 + pte = huge_pte_clear_uffd_wp(pte); 5428 + else if (is_swap_pte(pte)) 5429 + pte = pte_swp_clear_uffd_wp(pte); 5430 + } 5431 + set_huge_pte_at(mm, new_addr, dst_pte, pte, sz); 5432 + } 5423 5433 5424 5434 if (src_ptl != dst_ptl) 5425 5435 spin_unlock(src_ptl);
+2 -2
mm/khugepaged.c
··· 2422 2422 VM_BUG_ON(khugepaged_scan.address < hstart || 2423 2423 khugepaged_scan.address + HPAGE_PMD_SIZE > 2424 2424 hend); 2425 - if (IS_ENABLED(CONFIG_SHMEM) && vma->vm_file) { 2425 + if (IS_ENABLED(CONFIG_SHMEM) && !vma_is_anonymous(vma)) { 2426 2426 struct file *file = get_file(vma->vm_file); 2427 2427 pgoff_t pgoff = linear_page_index(vma, 2428 2428 khugepaged_scan.address); ··· 2768 2768 mmap_assert_locked(mm); 2769 2769 memset(cc->node_load, 0, sizeof(cc->node_load)); 2770 2770 nodes_clear(cc->alloc_nmask); 2771 - if (IS_ENABLED(CONFIG_SHMEM) && vma->vm_file) { 2771 + if (IS_ENABLED(CONFIG_SHMEM) && !vma_is_anonymous(vma)) { 2772 2772 struct file *file = get_file(vma->vm_file); 2773 2773 pgoff_t pgoff = linear_page_index(vma, addr); 2774 2774
+1 -1
mm/kmemleak.c
··· 1093 1093 pr_debug("%s(0x%px, %zu)\n", __func__, ptr, size); 1094 1094 1095 1095 if (kmemleak_enabled && ptr && !IS_ERR_PCPU(ptr)) 1096 - create_object_percpu((__force unsigned long)ptr, size, 0, gfp); 1096 + create_object_percpu((__force unsigned long)ptr, size, 1, gfp); 1097 1097 } 1098 1098 EXPORT_SYMBOL_GPL(kmemleak_alloc_percpu); 1099 1099
+2 -1
mm/mempolicy.c
··· 2268 2268 2269 2269 page = __alloc_pages_noprof(gfp, order, nid, nodemask); 2270 2270 2271 - if (unlikely(pol->mode == MPOL_INTERLEAVE) && page) { 2271 + if (unlikely(pol->mode == MPOL_INTERLEAVE || 2272 + pol->mode == MPOL_WEIGHTED_INTERLEAVE) && page) { 2272 2273 /* skip NUMA_INTERLEAVE_HIT update if numa stats is disabled */ 2273 2274 if (static_branch_likely(&vm_numa_stat_key) && 2274 2275 page_to_nid(page) == nid) {
+31 -1
mm/mremap.c
··· 138 138 struct vm_area_struct *new_vma, pmd_t *new_pmd, 139 139 unsigned long new_addr, bool need_rmap_locks) 140 140 { 141 + bool need_clear_uffd_wp = vma_has_uffd_without_event_remap(vma); 141 142 struct mm_struct *mm = vma->vm_mm; 142 143 pte_t *old_pte, *new_pte, pte; 143 144 pmd_t dummy_pmdval; ··· 217 216 force_flush = true; 218 217 pte = move_pte(pte, old_addr, new_addr); 219 218 pte = move_soft_dirty_pte(pte); 220 - set_pte_at(mm, new_addr, new_pte, pte); 219 + 220 + if (need_clear_uffd_wp && pte_marker_uffd_wp(pte)) 221 + pte_clear(mm, new_addr, new_pte); 222 + else { 223 + if (need_clear_uffd_wp) { 224 + if (pte_present(pte)) 225 + pte = pte_clear_uffd_wp(pte); 226 + else if (is_swap_pte(pte)) 227 + pte = pte_swp_clear_uffd_wp(pte); 228 + } 229 + set_pte_at(mm, new_addr, new_pte, pte); 230 + } 221 231 } 222 232 223 233 arch_leave_lazy_mmu_mode(); ··· 290 278 if (WARN_ON_ONCE(!pmd_none(*new_pmd))) 291 279 return false; 292 280 281 + /* If this pmd belongs to a uffd vma with remap events disabled, we need 282 + * to ensure that the uffd-wp state is cleared from all pgtables. This 283 + * means recursing into lower page tables in move_page_tables(), and we 284 + * can reuse the existing code if we simply treat the entry as "not 285 + * moved". 286 + */ 287 + if (vma_has_uffd_without_event_remap(vma)) 288 + return false; 289 + 293 290 /* 294 291 * We don't have to worry about the ordering of src and dst 295 292 * ptlocks because exclusive mmap_lock prevents deadlock. ··· 352 331 * should have released it. 353 332 */ 354 333 if (WARN_ON_ONCE(!pud_none(*new_pud))) 334 + return false; 335 + 336 + /* If this pud belongs to a uffd vma with remap events disabled, we need 337 + * to ensure that the uffd-wp state is cleared from all pgtables. This 338 + * means recursing into lower page tables in move_page_tables(), and we 339 + * can reuse the existing code if we simply treat the entry as "not 340 + * moved". 
341 + */ 342 + if (vma_has_uffd_without_event_remap(vma)) 355 343 return false; 356 344 357 345 /*
+8 -2
mm/page-writeback.c
··· 692 692 unsigned long ratio; 693 693 694 694 global_dirty_limits(&background_thresh, &dirty_thresh); 695 + if (!dirty_thresh) 696 + return -EINVAL; 695 697 ratio = div64_u64(pages * 100ULL * BDI_RATIO_SCALE, dirty_thresh); 696 698 697 699 return ratio; ··· 792 790 { 793 791 int ret; 794 792 unsigned long pages = min_bytes >> PAGE_SHIFT; 795 - unsigned long min_ratio; 793 + long min_ratio; 796 794 797 795 ret = bdi_check_pages_limit(pages); 798 796 if (ret) 799 797 return ret; 800 798 801 799 min_ratio = bdi_ratio_from_pages(pages); 800 + if (min_ratio < 0) 801 + return min_ratio; 802 802 return __bdi_set_min_ratio(bdi, min_ratio); 803 803 } 804 804 ··· 813 809 { 814 810 int ret; 815 811 unsigned long pages = max_bytes >> PAGE_SHIFT; 816 - unsigned long max_ratio; 812 + long max_ratio; 817 813 818 814 ret = bdi_check_pages_limit(pages); 819 815 if (ret) 820 816 return ret; 821 817 822 818 max_ratio = bdi_ratio_from_pages(pages); 819 + if (max_ratio < 0) 820 + return max_ratio; 823 821 return __bdi_set_max_ratio(bdi, max_ratio); 824 822 } 825 823
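The page-writeback change above adds a zero check before dividing by `dirty_thresh` and widens `min_ratio`/`max_ratio` to a signed type so the error can propagate. A minimal stand-alone sketch of the same shape (the constants and names below are illustrative, not the kernel's BDI_RATIO_SCALE plumbing):

```c
#include <stdint.h>

#define DEMO_RATIO_SCALE 10000ULL  /* stand-in for BDI_RATIO_SCALE */
#define DEMO_EINVAL 22             /* stand-in errno value */

/* Pages as a scaled percentage of the dirty threshold. Returns a
 * negative value when the threshold is zero, mirroring the guard the
 * patch adds in front of the division. */
static int64_t ratio_from_pages(uint64_t pages, uint64_t dirty_thresh)
{
	if (!dirty_thresh)
		return -DEMO_EINVAL;
	return (int64_t)(pages * 100ULL * DEMO_RATIO_SCALE / dirty_thresh);
}
```

Callers must receive the result in a signed variable, which is exactly why the patch changes `unsigned long min_ratio` to `long`.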
+3
mm/page_alloc.c
··· 5692 5692 zone->present_pages, zone_batchsize(zone)); 5693 5693 } 5694 5694 5695 + static void setup_per_zone_lowmem_reserve(void); 5696 + 5695 5697 void adjust_managed_page_count(struct page *page, long count) 5696 5698 { 5697 5699 atomic_long_add(count, &page_zone(page)->managed_pages); 5698 5700 totalram_pages_add(count); 5701 + setup_per_zone_lowmem_reserve(); 5699 5702 } 5700 5703 EXPORT_SYMBOL(adjust_managed_page_count); 5701 5704
+1 -1
mm/shmem.c
··· 4368 4368 bool latest_version) 4369 4369 { 4370 4370 struct shmem_options *ctx = fc->fs_private; 4371 - unsigned int version = UTF8_LATEST; 4371 + int version = UTF8_LATEST; 4372 4372 struct unicode_map *encoding; 4373 4373 char *version_str = param->string + 5; 4374 4374
+3
mm/vmscan.c
··· 4642 4642 reset_batch_size(walk); 4643 4643 } 4644 4644 4645 + __mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(), 4646 + stat.nr_demoted); 4647 + 4645 4648 item = PGSTEAL_KSWAPD + reclaimer_offset(); 4646 4649 if (!cgroup_reclaim(sc)) 4647 4650 __count_vm_events(item, reclaimed);
+11 -1
mm/vmstat.c
··· 2122 2122 { 2123 2123 int cpu; 2124 2124 2125 - for_each_possible_cpu(cpu) 2125 + for_each_possible_cpu(cpu) { 2126 2126 INIT_DEFERRABLE_WORK(per_cpu_ptr(&vmstat_work, cpu), 2127 2127 vmstat_update); 2128 + 2129 + /* 2130 + * For secondary CPUs during CPU hotplug scenarios, 2131 + * vmstat_cpu_online() will enable the work. 2132 + * mm/vmstat:online enables and disables vmstat_work 2133 + * symmetrically during CPU hotplug events. 2134 + */ 2135 + if (!cpu_online(cpu)) 2136 + disable_delayed_work_sync(&per_cpu(vmstat_work, cpu)); 2137 + } 2128 2138 2129 2139 schedule_delayed_work(&shepherd, 2130 2140 round_jiffies_relative(sysctl_stat_interval));
+57 -34
mm/zswap.c
··· 251 251 struct zswap_pool *pool; 252 252 char name[38]; /* 'zswap' + 32 char (max) num + \0 */ 253 253 gfp_t gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM; 254 - int ret; 254 + int ret, cpu; 255 255 256 256 if (!zswap_has_pool) { 257 257 /* if either are unset, pool initialization failed, and we ··· 284 284 pr_err("percpu alloc failed\n"); 285 285 goto error; 286 286 } 287 + 288 + for_each_possible_cpu(cpu) 289 + mutex_init(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex); 287 290 288 291 ret = cpuhp_state_add_instance(CPUHP_MM_ZSWP_POOL_PREPARE, 289 292 &pool->node); ··· 820 817 { 821 818 struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node); 822 819 struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu); 823 - struct crypto_acomp *acomp; 824 - struct acomp_req *req; 820 + struct crypto_acomp *acomp = NULL; 821 + struct acomp_req *req = NULL; 822 + u8 *buffer = NULL; 825 823 int ret; 826 824 827 - mutex_init(&acomp_ctx->mutex); 828 - 829 - acomp_ctx->buffer = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, cpu_to_node(cpu)); 830 - if (!acomp_ctx->buffer) 831 - return -ENOMEM; 825 + buffer = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, cpu_to_node(cpu)); 826 + if (!buffer) { 827 + ret = -ENOMEM; 828 + goto fail; 829 + } 832 830 833 831 acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu)); 834 832 if (IS_ERR(acomp)) { 835 833 pr_err("could not alloc crypto acomp %s : %ld\n", 836 834 pool->tfm_name, PTR_ERR(acomp)); 837 835 ret = PTR_ERR(acomp); 838 - goto acomp_fail; 836 + goto fail; 839 837 } 840 - acomp_ctx->acomp = acomp; 841 - acomp_ctx->is_sleepable = acomp_is_async(acomp); 842 838 843 - req = acomp_request_alloc(acomp_ctx->acomp); 839 + req = acomp_request_alloc(acomp); 844 840 if (!req) { 845 841 pr_err("could not alloc crypto acomp_request %s\n", 846 842 pool->tfm_name); 847 843 ret = -ENOMEM; 848 - goto req_fail; 844 + goto fail; 849 845 } 850 - acomp_ctx->req = req; 851 846 847 + /* 848 + * Only hold the mutex 
after completing allocations, otherwise we may 849 + * recurse into zswap through reclaim and attempt to hold the mutex 850 + * again resulting in a deadlock. 851 + */ 852 + mutex_lock(&acomp_ctx->mutex); 852 853 crypto_init_wait(&acomp_ctx->wait); 854 + 853 855 /* 854 856 * if the backend of acomp is async zip, crypto_req_done() will wakeup 855 857 * crypto_wait_req(); if the backend of acomp is scomp, the callback ··· 863 855 acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, 864 856 crypto_req_done, &acomp_ctx->wait); 865 857 858 + acomp_ctx->buffer = buffer; 859 + acomp_ctx->acomp = acomp; 860 + acomp_ctx->is_sleepable = acomp_is_async(acomp); 861 + acomp_ctx->req = req; 862 + mutex_unlock(&acomp_ctx->mutex); 866 863 return 0; 867 864 868 - req_fail: 869 - crypto_free_acomp(acomp_ctx->acomp); 870 - acomp_fail: 871 - kfree(acomp_ctx->buffer); 865 + fail: 866 + if (acomp) 867 + crypto_free_acomp(acomp); 868 + kfree(buffer); 872 869 return ret; 873 870 } 874 871 ··· 882 869 struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node); 883 870 struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu); 884 871 872 + mutex_lock(&acomp_ctx->mutex); 885 873 if (!IS_ERR_OR_NULL(acomp_ctx)) { 886 874 if (!IS_ERR_OR_NULL(acomp_ctx->req)) 887 875 acomp_request_free(acomp_ctx->req); 876 + acomp_ctx->req = NULL; 888 877 if (!IS_ERR_OR_NULL(acomp_ctx->acomp)) 889 878 crypto_free_acomp(acomp_ctx->acomp); 890 879 kfree(acomp_ctx->buffer); 891 880 } 881 + mutex_unlock(&acomp_ctx->mutex); 892 882 893 883 return 0; 894 884 } 895 885 896 - /* Prevent CPU hotplug from freeing up the per-CPU acomp_ctx resources */ 897 - static struct crypto_acomp_ctx *acomp_ctx_get_cpu(struct crypto_acomp_ctx __percpu *acomp_ctx) 886 + static struct crypto_acomp_ctx *acomp_ctx_get_cpu_lock(struct zswap_pool *pool) 898 887 { 899 - cpus_read_lock(); 900 - return raw_cpu_ptr(acomp_ctx); 888 + struct crypto_acomp_ctx *acomp_ctx; 889 + 890 + for (;;) { 891 + acomp_ctx = 
raw_cpu_ptr(pool->acomp_ctx); 892 + mutex_lock(&acomp_ctx->mutex); 893 + if (likely(acomp_ctx->req)) 894 + return acomp_ctx; 895 + /* 896 + * It is possible that we were migrated to a different CPU after 897 + * getting the per-CPU ctx but before the mutex was acquired. If 898 + * the old CPU got offlined, zswap_cpu_comp_dead() could have 899 + * already freed ctx->req (among other things) and set it to 900 + * NULL. Just try again on the new CPU that we ended up on. 901 + */ 902 + mutex_unlock(&acomp_ctx->mutex); 903 + } 901 904 } 902 905 903 - static void acomp_ctx_put_cpu(void) 906 + static void acomp_ctx_put_unlock(struct crypto_acomp_ctx *acomp_ctx) 904 907 { 905 - cpus_read_unlock(); 908 + mutex_unlock(&acomp_ctx->mutex); 906 909 } 907 910 908 911 static bool zswap_compress(struct page *page, struct zswap_entry *entry, ··· 934 905 gfp_t gfp; 935 906 u8 *dst; 936 907 937 - acomp_ctx = acomp_ctx_get_cpu(pool->acomp_ctx); 938 - mutex_lock(&acomp_ctx->mutex); 939 - 908 + acomp_ctx = acomp_ctx_get_cpu_lock(pool); 940 909 dst = acomp_ctx->buffer; 941 910 sg_init_table(&input, 1); 942 911 sg_set_page(&input, page, PAGE_SIZE, 0); ··· 987 960 else if (alloc_ret) 988 961 zswap_reject_alloc_fail++; 989 962 990 - mutex_unlock(&acomp_ctx->mutex); 991 - acomp_ctx_put_cpu(); 963 + acomp_ctx_put_unlock(acomp_ctx); 992 964 return comp_ret == 0 && alloc_ret == 0; 993 965 } 994 966 ··· 998 972 struct crypto_acomp_ctx *acomp_ctx; 999 973 u8 *src; 1000 974 1001 - acomp_ctx = acomp_ctx_get_cpu(entry->pool->acomp_ctx); 1002 - mutex_lock(&acomp_ctx->mutex); 1003 - 975 + acomp_ctx = acomp_ctx_get_cpu_lock(entry->pool); 1004 976 src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO); 1005 977 /* 1006 978 * If zpool_map_handle is atomic, we cannot reliably utilize its mapped buffer ··· 1022 998 acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, PAGE_SIZE); 1023 999 BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait)); 1024 1000 
BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE); 1025 - mutex_unlock(&acomp_ctx->mutex); 1026 1001 1027 1002 if (src != acomp_ctx->buffer) 1028 1003 zpool_unmap_handle(zpool, entry->handle); 1029 - acomp_ctx_put_cpu(); 1004 + acomp_ctx_put_unlock(acomp_ctx); 1030 1005 } 1031 1006 1032 1007 /*********************************
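The new `acomp_ctx_get_cpu_lock()` loop above locks the per-CPU context and then re-checks `ctx->req`, retrying if CPU hotplug tore the context down between `raw_cpu_ptr()` and `mutex_lock()`. A hedged sketch of that lock-then-revalidate pattern, with a stub `pick()` standing in for `raw_cpu_ptr()` (all names here are invented for illustration):

```c
#include <pthread.h>
#include <stddef.h>

/* Illustrative per-CPU-style context. As in the zswap change, 'req'
 * doubles as the liveness marker: the teardown path NULLs it under
 * the mutex when a CPU goes offline. */
struct demo_ctx {
	pthread_mutex_t mutex;
	void *req;              /* NULL once the context was torn down */
};

static struct demo_ctx dead_ctx = { PTHREAD_MUTEX_INITIALIZER, NULL };
static struct demo_ctx live_ctx = { PTHREAD_MUTEX_INITIALIZER, &live_ctx };

/* First pick lands on a torn-down context, as a migration race would. */
static struct demo_ctx *demo_pick(void)
{
	static int calls;

	return calls++ ? &live_ctx : &dead_ctx;
}

/* Lock, revalidate, retry: the shape of acomp_ctx_get_cpu_lock().
 * The caller gets the context back with its mutex held. */
static struct demo_ctx *ctx_get_lock(struct demo_ctx *(*pick)(void))
{
	for (;;) {
		struct demo_ctx *ctx = pick();

		pthread_mutex_lock(&ctx->mutex);
		if (ctx->req)
			return ctx;
		/* raced with teardown: drop the lock and pick again */
		pthread_mutex_unlock(&ctx->mutex);
	}
}
```

Pairing the lock with an `acomp_ctx_put_unlock()`-style helper, as the patch does, keeps the lock/unlock pairing obvious at every call site.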
+2 -2
net/802/psnap.c
··· 55 55 goto drop; 56 56 57 57 rcu_read_lock(); 58 - proto = find_snap_client(skb_transport_header(skb)); 58 + proto = find_snap_client(skb->data); 59 59 if (proto) { 60 60 /* Pass the frame on. */ 61 - skb->transport_header += 5; 62 61 skb_pull_rcsum(skb, 5); 62 + skb_reset_transport_header(skb); 63 63 rc = proto->rcvfunc(skb, dev, &snap_packet_type, orig_dev); 64 64 } 65 65 rcu_read_unlock();
+6 -5
net/bluetooth/hci_sync.c
··· 1031 1031 1032 1032 static int hci_set_random_addr_sync(struct hci_dev *hdev, bdaddr_t *rpa) 1033 1033 { 1034 - /* If we're advertising or initiating an LE connection we can't 1035 - * go ahead and change the random address at this time. This is 1036 - * because the eventual initiator address used for the 1034 + /* If a random_addr has been set and we're advertising or initiating 1035 + * an LE connection, we can't go ahead and change the random address 1036 + * at this time. This is because the eventual initiator address used for the 1037 1037 * subsequently created connection will be undefined (some 1038 1038 * controllers use the new address and others the one we had 1039 1039 * when the operation started). ··· 1041 1041 * In this kind of scenario skip the update and let the random 1042 1042 * address be updated at the next cycle. 1043 1043 */ 1044 - if (hci_dev_test_flag(hdev, HCI_LE_ADV) || 1045 - hci_lookup_le_connect(hdev)) { 1044 + if (bacmp(&hdev->random_addr, BDADDR_ANY) && 1045 + (hci_dev_test_flag(hdev, HCI_LE_ADV) || 1046 + hci_lookup_le_connect(hdev))) { 1046 1047 bt_dev_dbg(hdev, "Deferring random address update"); 1047 1048 hci_dev_set_flag(hdev, HCI_RPA_EXPIRED); 1048 1049 return 0
+36 -2
net/bluetooth/mgmt.c
··· 7655 7655 mgmt_event(MGMT_EV_DEVICE_ADDED, hdev, &ev, sizeof(ev), sk); 7656 7656 } 7657 7657 7658 + static void add_device_complete(struct hci_dev *hdev, void *data, int err) 7659 + { 7660 + struct mgmt_pending_cmd *cmd = data; 7661 + struct mgmt_cp_add_device *cp = cmd->param; 7662 + 7663 + if (!err) { 7664 + device_added(cmd->sk, hdev, &cp->addr.bdaddr, cp->addr.type, 7665 + cp->action); 7666 + device_flags_changed(NULL, hdev, &cp->addr.bdaddr, 7667 + cp->addr.type, hdev->conn_flags, 7668 + PTR_UINT(cmd->user_data)); 7669 + } 7670 + 7671 + mgmt_cmd_complete(cmd->sk, hdev->id, MGMT_OP_ADD_DEVICE, 7672 + mgmt_status(err), &cp->addr, sizeof(cp->addr)); 7673 + mgmt_pending_free(cmd); 7674 + } 7675 + 7658 7676 static int add_device_sync(struct hci_dev *hdev, void *data) 7659 7677 { 7660 7678 return hci_update_passive_scan_sync(hdev); ··· 7681 7663 static int add_device(struct sock *sk, struct hci_dev *hdev, 7682 7664 void *data, u16 len) 7683 7665 { 7666 + struct mgmt_pending_cmd *cmd; 7684 7667 struct mgmt_cp_add_device *cp = data; 7685 7668 u8 auto_conn, addr_type; 7686 7669 struct hci_conn_params *params; ··· 7762 7743 current_flags = params->flags; 7763 7744 } 7764 7745 7765 - err = hci_cmd_sync_queue(hdev, add_device_sync, NULL, NULL); 7766 - if (err < 0) 7746 + cmd = mgmt_pending_new(sk, MGMT_OP_ADD_DEVICE, hdev, data, len); 7747 + if (!cmd) { 7748 + err = -ENOMEM; 7767 7749 goto unlock; 7750 + } 7751 + 7752 + cmd->user_data = UINT_PTR(current_flags); 7753 + 7754 + err = hci_cmd_sync_queue(hdev, add_device_sync, cmd, 7755 + add_device_complete); 7756 + if (err < 0) { 7757 + err = mgmt_cmd_complete(sk, hdev->id, MGMT_OP_ADD_DEVICE, 7758 + MGMT_STATUS_FAILED, &cp->addr, 7759 + sizeof(cp->addr)); 7760 + mgmt_pending_free(cmd); 7761 + } 7762 + 7763 + goto unlock; 7768 7764 7769 7765 added: 7770 7766 device_added(sk, hdev, &cp->addr.bdaddr, cp->addr.type, cp->action);
+2 -2
net/bluetooth/rfcomm/tty.c
··· 201 201 struct device_attribute *attr, char *buf) 202 202 { 203 203 struct rfcomm_dev *dev = dev_get_drvdata(tty_dev); 204 - return sprintf(buf, "%pMR\n", &dev->dst); 204 + return sysfs_emit(buf, "%pMR\n", &dev->dst); 205 205 } 206 206 207 207 static ssize_t channel_show(struct device *tty_dev, 208 208 struct device_attribute *attr, char *buf) 209 209 { 210 210 struct rfcomm_dev *dev = dev_get_drvdata(tty_dev); 211 - return sprintf(buf, "%d\n", dev->channel); 211 + return sysfs_emit(buf, "%d\n", dev->channel); 212 212 } 213 213 214 214 static DEVICE_ATTR_RO(address);
+30 -13
net/core/dev.c
··· 753 753 } 754 754 EXPORT_SYMBOL_GPL(dev_fill_forward_path); 755 755 756 + /* must be called under rcu_read_lock(), as we dont take a reference */ 757 + static struct napi_struct *napi_by_id(unsigned int napi_id) 758 + { 759 + unsigned int hash = napi_id % HASH_SIZE(napi_hash); 760 + struct napi_struct *napi; 761 + 762 + hlist_for_each_entry_rcu(napi, &napi_hash[hash], napi_hash_node) 763 + if (napi->napi_id == napi_id) 764 + return napi; 765 + 766 + return NULL; 767 + } 768 + 769 + /* must be called under rcu_read_lock(), as we dont take a reference */ 770 + struct napi_struct *netdev_napi_by_id(struct net *net, unsigned int napi_id) 771 + { 772 + struct napi_struct *napi; 773 + 774 + napi = napi_by_id(napi_id); 775 + if (!napi) 776 + return NULL; 777 + 778 + if (WARN_ON_ONCE(!napi->dev)) 779 + return NULL; 780 + if (!net_eq(net, dev_net(napi->dev))) 781 + return NULL; 782 + 783 + return napi; 784 + } 785 + 756 786 /** 757 787 * __dev_get_by_name - find a device by its name 758 788 * @net: the applicable net namespace ··· 6322 6292 return ret; 6323 6293 } 6324 6294 EXPORT_SYMBOL(napi_complete_done); 6325 - 6326 - /* must be called under rcu_read_lock(), as we dont take a reference */ 6327 - struct napi_struct *napi_by_id(unsigned int napi_id) 6328 - { 6329 - unsigned int hash = napi_id % HASH_SIZE(napi_hash); 6330 - struct napi_struct *napi; 6331 - 6332 - hlist_for_each_entry_rcu(napi, &napi_hash[hash], napi_hash_node) 6333 - if (napi->napi_id == napi_id) 6334 - return napi; 6335 - 6336 - return NULL; 6337 - } 6338 6295 6339 6296 static void skb_defer_free_flush(struct softnet_data *sd) 6340 6297 {
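`napi_by_id()` above is a plain modulo-hashed bucket walk, now made static, with `netdev_napi_by_id()` layered on top to add the netns check. The bucket lookup itself can be sketched with an ordinary singly linked chain (illustrative names, not the kernel's hlist API):

```c
#include <stddef.h>

#define DEMO_HASH_SIZE 8u

struct demo_napi {
	unsigned int napi_id;
	struct demo_napi *next;   /* bucket chain */
};

static struct demo_napi *demo_hash[DEMO_HASH_SIZE];

/* Insert at the head of the id's bucket. */
static void demo_napi_add(struct demo_napi *n)
{
	unsigned int b = n->napi_id % DEMO_HASH_SIZE;

	n->next = demo_hash[b];
	demo_hash[b] = n;
}

/* Walk only the bucket the id maps to, as napi_by_id() does with
 * napi_id % HASH_SIZE(napi_hash). */
static struct demo_napi *demo_napi_by_id(unsigned int napi_id)
{
	struct demo_napi *n;

	for (n = demo_hash[napi_id % DEMO_HASH_SIZE]; n; n = n->next)
		if (n->napi_id == napi_id)
			return n;
	return NULL;
}

/* Tiny self-check: ids 5 and 13 collide in bucket 5 (13 % 8 == 5). */
static int demo_run(void)
{
	static struct demo_napi a = { 5, NULL }, b = { 13, NULL };

	demo_napi_add(&a);
	demo_napi_add(&b);
	return demo_napi_by_id(5) == &a && demo_napi_by_id(13) == &b &&
	       demo_napi_by_id(21) == NULL; /* same bucket, absent */
}
```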
+2 -1
net/core/dev.h
··· 22 22 23 23 extern int netdev_flow_limit_table_len; 24 24 25 + struct napi_struct *netdev_napi_by_id(struct net *net, unsigned int napi_id); 26 + 25 27 #ifdef CONFIG_PROC_FS 26 28 int __init dev_proc_init(void); 27 29 #else ··· 271 269 static inline void xdp_do_check_flushed(struct napi_struct *napi) { } 272 270 #endif 273 271 274 - struct napi_struct *napi_by_id(unsigned int napi_id); 275 272 void kick_defer_list_purge(struct softnet_data *sd, unsigned int cpu); 276 273 277 274 #define XMIT_RECURSION_LIMIT 8
+18 -12
net/core/filter.c
··· 11251 11251 bool is_sockarray = map->map_type == BPF_MAP_TYPE_REUSEPORT_SOCKARRAY; 11252 11252 struct sock_reuseport *reuse; 11253 11253 struct sock *selected_sk; 11254 + int err; 11254 11255 11255 11256 selected_sk = map->ops->map_lookup_elem(map, key); 11256 11257 if (!selected_sk) ··· 11259 11258 11260 11259 reuse = rcu_dereference(selected_sk->sk_reuseport_cb); 11261 11260 if (!reuse) { 11262 - /* Lookup in sock_map can return TCP ESTABLISHED sockets. */ 11263 - if (sk_is_refcounted(selected_sk)) 11264 - sock_put(selected_sk); 11265 - 11266 11261 /* reuseport_array has only sk with non NULL sk_reuseport_cb. 11267 11262 * The only (!reuse) case here is - the sk has already been 11268 11263 * unhashed (e.g. by close()), so treat it as -ENOENT. ··· 11266 11269 * Other maps (e.g. sock_map) do not provide this guarantee and 11267 11270 * the sk may never be in the reuseport group to begin with. 11268 11271 */ 11269 - return is_sockarray ? -ENOENT : -EINVAL; 11272 + err = is_sockarray ? -ENOENT : -EINVAL; 11273 + goto error; 11270 11274 } 11271 11275 11272 11276 if (unlikely(reuse->reuseport_id != reuse_kern->reuseport_id)) { 11273 11277 struct sock *sk = reuse_kern->sk; 11274 11278 11275 - if (sk->sk_protocol != selected_sk->sk_protocol) 11276 - return -EPROTOTYPE; 11277 - else if (sk->sk_family != selected_sk->sk_family) 11278 - return -EAFNOSUPPORT; 11279 - 11280 - /* Catch all. Likely bound to a different sockaddr. */ 11281 - return -EBADFD; 11279 + if (sk->sk_protocol != selected_sk->sk_protocol) { 11280 + err = -EPROTOTYPE; 11281 + } else if (sk->sk_family != selected_sk->sk_family) { 11282 + err = -EAFNOSUPPORT; 11283 + } else { 11284 + /* Catch all. Likely bound to a different sockaddr. */ 11285 + err = -EBADFD; 11286 + } 11287 + goto error; 11282 11288 } 11283 11289 11284 11290 reuse_kern->selected_sk = selected_sk; 11285 11291 11286 11292 return 0; 11293 + error: 11294 + /* Lookup in sock_map can return TCP ESTABLISHED sockets. 
*/ 11295 + if (sk_is_refcounted(selected_sk)) 11296 + sock_put(selected_sk); 11297 + 11298 + return err; 11287 11299 } 11288 11300 11289 11301 static const struct bpf_func_proto sk_select_reuseport_proto = {
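The `sk_select_reuseport()` rework above funnels every failure through one `error:` label so `sock_put()` runs exactly once, instead of being duplicated (and previously missed) on some paths. A stand-alone sketch of that single-exit pattern (the names and error values below are made up, not the BPF ones):

```c
/* Illustrative object with a manual refcount. */
struct demo_sock {
	int refcnt;
	int proto;
	int has_group;   /* stands in for a non-NULL sk_reuseport_cb */
};

enum { DEMO_ENOENT = 2, DEMO_EPROTOTYPE = 91 };

/* Validate a looked-up socket. All failure paths funnel through one
 * 'error' label so the reference is dropped exactly once. */
static int demo_select(struct demo_sock *sk, int want_proto)
{
	int err;

	if (!sk->has_group) {
		err = -DEMO_ENOENT;       /* already left the group */
		goto error;
	}
	if (sk->proto != want_proto) {
		err = -DEMO_EPROTOTYPE;
		goto error;
	}
	return 0;                         /* success: caller keeps the ref */

error:
	sk->refcnt--;                     /* single release point */
	return err;
}

static int demo_check(void)
{
	struct demo_sock sk = { 1, 6, 0 };

	if (demo_select(&sk, 6) != -DEMO_ENOENT || sk.refcnt != 0)
		return 0;
	sk = (struct demo_sock){ 1, 17, 1 };
	if (demo_select(&sk, 6) != -DEMO_EPROTOTYPE || sk.refcnt != 0)
		return 0;
	sk = (struct demo_sock){ 1, 6, 1 };
	if (demo_select(&sk, 6) != 0 || sk.refcnt != 1)
		return 0;
	return 1;
}
```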
+12 -2
net/core/netdev-genl-gen.c
··· 197 197 [NETDEV_NLGRP_PAGE_POOL] = { "page-pool", }, 198 198 }; 199 199 200 + static void __netdev_nl_sock_priv_init(void *priv) 201 + { 202 + netdev_nl_sock_priv_init(priv); 203 + } 204 + 205 + static void __netdev_nl_sock_priv_destroy(void *priv) 206 + { 207 + netdev_nl_sock_priv_destroy(priv); 208 + } 209 + 200 210 struct genl_family netdev_nl_family __ro_after_init = { 201 211 .name = NETDEV_FAMILY_NAME, 202 212 .version = NETDEV_FAMILY_VERSION, ··· 218 208 .mcgrps = netdev_nl_mcgrps, 219 209 .n_mcgrps = ARRAY_SIZE(netdev_nl_mcgrps), 220 210 .sock_priv_size = sizeof(struct list_head), 221 - .sock_priv_init = (void *)netdev_nl_sock_priv_init, 222 - .sock_priv_destroy = (void *)netdev_nl_sock_priv_destroy, 211 + .sock_priv_init = __netdev_nl_sock_priv_init, 212 + .sock_priv_destroy = __netdev_nl_sock_priv_destroy, 223 213 };
+5 -6
net/core/netdev-genl.c
··· 167 167 void *hdr; 168 168 pid_t pid; 169 169 170 - if (WARN_ON_ONCE(!napi->dev)) 171 - return -EINVAL; 172 170 if (!(napi->dev->flags & IFF_UP)) 173 171 return 0; 174 172 ··· 174 176 if (!hdr) 175 177 return -EMSGSIZE; 176 178 177 - if (napi->napi_id >= MIN_NAPI_ID && 178 - nla_put_u32(rsp, NETDEV_A_NAPI_ID, napi->napi_id)) 179 + if (nla_put_u32(rsp, NETDEV_A_NAPI_ID, napi->napi_id)) 179 180 goto nla_put_failure; 180 181 181 182 if (nla_put_u32(rsp, NETDEV_A_NAPI_IFINDEX, napi->dev->ifindex)) ··· 232 235 rtnl_lock(); 233 236 rcu_read_lock(); 234 237 235 - napi = napi_by_id(napi_id); 238 + napi = netdev_napi_by_id(genl_info_net(info), napi_id); 236 239 if (napi) { 237 240 err = netdev_nl_napi_fill_one(rsp, napi, info); 238 241 } else { ··· 269 272 return err; 270 273 271 274 list_for_each_entry(napi, &netdev->napi_list, dev_list) { 275 + if (napi->napi_id < MIN_NAPI_ID) 276 + continue; 272 277 if (ctx->napi_id && napi->napi_id >= ctx->napi_id) 273 278 continue; 274 279 ··· 353 354 rtnl_lock(); 354 355 rcu_read_lock(); 355 356 356 - napi = napi_by_id(napi_id); 357 + napi = netdev_napi_by_id(genl_info_net(info), napi_id); 357 358 if (napi) { 358 359 err = netdev_nl_napi_set_config(napi, info); 359 360 } else {
+5 -5
net/core/netpoll.c
··· 627 627 const struct net_device_ops *ops; 628 628 int err; 629 629 630 + skb_queue_head_init(&np->skb_pool); 631 + 630 632 if (ndev->priv_flags & IFF_DISABLE_NETPOLL) { 631 633 np_err(np, "%s doesn't support polling, aborting\n", 632 634 ndev->name); ··· 664 662 strscpy(np->dev_name, ndev->name, IFNAMSIZ); 665 663 npinfo->netpoll = np; 666 664 665 + /* fill up the skb queue */ 666 + refill_skbs(np); 667 + 667 668 /* last thing to do is link it to the net device structure */ 668 669 rcu_assign_pointer(ndev->npinfo, npinfo); 669 670 ··· 685 680 bool ip_overwritten = false; 686 681 struct in_device *in_dev; 687 682 int err; 688 - 689 - skb_queue_head_init(&np->skb_pool); 690 683 691 684 rtnl_lock(); 692 685 if (np->dev_name[0]) { ··· 784 781 #endif 785 782 } 786 783 } 787 - 788 - /* fill up the skb queue */ 789 - refill_skbs(np); 790 784 791 785 err = __netpoll_setup(np, ndev); 792 786 if (err)
+3 -3
net/core/pktgen.c
··· 851 851 unsigned long weight; 852 852 unsigned long size; 853 853 854 + if (pkt_dev->n_imix_entries >= MAX_IMIX_ENTRIES) 855 + return -E2BIG; 856 + 854 857 len = num_arg(&buffer[i], max_digits, &size); 855 858 if (len < 0) 856 859 return len; ··· 883 880 884 881 i++; 885 882 pkt_dev->n_imix_entries++; 886 - 887 - if (pkt_dev->n_imix_entries > MAX_IMIX_ENTRIES) 888 - return -E2BIG; 889 883 } while (c == ' '); 890 884 891 885 return i;
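The pktgen fix moves the `MAX_IMIX_ENTRIES` test ahead of the write, so the array can never be indexed one past its end before `-E2BIG` is returned. The reordering can be shown in isolation (names and the error value are illustrative):

```c
enum { DEMO_MAX_ENTRIES = 4, DEMO_E2BIG = 7 };

/* Append sizes into a fixed array, rejecting the request *before*
 * storing a fifth entry rather than after writing past the limit. */
static int demo_collect(const unsigned long *sizes, unsigned int n,
			unsigned long *out, unsigned int *n_out)
{
	unsigned int i;

	*n_out = 0;
	for (i = 0; i < n; i++) {
		/* check first, then store: 'out' is never overrun */
		if (*n_out >= DEMO_MAX_ENTRIES)
			return -DEMO_E2BIG;
		out[(*n_out)++] = sizes[i];
	}
	return 0;
}

static int demo_check(void)
{
	unsigned long in[5] = { 64, 128, 256, 512, 1024 };
	unsigned long out[DEMO_MAX_ENTRIES];
	unsigned int n;

	if (demo_collect(in, 4, out, &n) != 0 || n != 4 || out[3] != 512)
		return 0;
	if (demo_collect(in, 5, out, &n) != -DEMO_E2BIG)
		return 0;
	return 1;
}
```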
-1
net/core/xdp.c
··· 186 186 xdp_rxq_info_init(xdp_rxq); 187 187 xdp_rxq->dev = dev; 188 188 xdp_rxq->queue_index = queue_index; 189 - xdp_rxq->napi_id = napi_id; 190 189 xdp_rxq->frag_size = frag_size; 191 190 192 191 xdp_rxq->reg_state = REG_STATE_REGISTERED;
+1
net/ipv4/route.c
··· 2445 2445 net_warn_ratelimited("martian destination %pI4 from %pI4, dev %s\n", 2446 2446 &daddr, &saddr, dev->name); 2447 2447 #endif 2448 + goto out; 2448 2449 2449 2450 e_nobufs: 2450 2451 reason = SKB_DROP_REASON_NOMEM;
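The one-line route.c fix adds a `goto out` so the martian-destination path no longer falls through into the `e_nobufs` label. C labels do not terminate control flow, which a tiny sketch makes concrete (the return codes are arbitrary stand-ins for the drop reasons):

```c
static int demo_classify(int martian, int nobufs)
{
	int reason = 0;          /* "delivered" */

	if (nobufs)
		goto e_nobufs;
	if (martian)
		goto martian_destination;
	goto out;

martian_destination:
	reason = 1;              /* log the martian, then... */
	goto out;                /* ...the explicit jump the patch adds */

e_nobufs:
	reason = 2;              /* SKB_DROP_REASON_NOMEM analogue */
out:
	return reason;
}
```

Without the `goto out` after `reason = 1`, a martian packet would fall into `e_nobufs` and report the wrong reason, which is exactly the bug class the patch closes.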
+1 -1
net/ipv4/tcp_ipv4.c
··· 896 896 sock_net_set(ctl_sk, net); 897 897 if (sk) { 898 898 ctl_sk->sk_mark = (sk->sk_state == TCP_TIME_WAIT) ? 899 - inet_twsk(sk)->tw_mark : sk->sk_mark; 899 + inet_twsk(sk)->tw_mark : READ_ONCE(sk->sk_mark); 900 900 ctl_sk->sk_priority = (sk->sk_state == TCP_TIME_WAIT) ? 901 901 inet_twsk(sk)->tw_priority : READ_ONCE(sk->sk_priority); 902 902 transmit_time = tcp_transmit_time(sk);
+27 -19
net/ipv4/udp.c
··· 533 533 return NULL; 534 534 } 535 535 536 - /* In hash4, rehash can happen in connect(), where hash4_cnt keeps unchanged. */ 536 + /* udp_rehash4() only checks hslot4, and hash4_cnt is not processed. */ 537 537 static void udp_rehash4(struct udp_table *udptable, struct sock *sk, 538 538 u16 newhash4) 539 539 { ··· 582 582 struct net *net = sock_net(sk); 583 583 struct udp_table *udptable; 584 584 585 - /* Connected udp socket can re-connect to another remote address, 586 - * so rehash4 is needed. 585 + /* Connected udp socket can re-connect to another remote address, which 586 + * will be handled by rehash. Thus no need to redo hash4 here. 587 587 */ 588 - udptable = net->ipv4.udp_table; 589 - if (udp_hashed4(sk)) { 590 - udp_rehash4(udptable, sk, hash); 588 + if (udp_hashed4(sk)) 591 589 return; 592 - } 593 590 591 + udptable = net->ipv4.udp_table; 594 592 hslot = udp_hashslot(udptable, net, udp_sk(sk)->udp_port_hash); 595 593 hslot2 = udp_hashslot2(udptable, udp_sk(sk)->udp_portaddr_hash); 596 594 hslot4 = udp_hashslot4(udptable, hash); ··· 2171 2173 struct udp_table *udptable = udp_get_table_prot(sk); 2172 2174 struct udp_hslot *hslot, *hslot2, *nhslot2; 2173 2175 2176 + hslot = udp_hashslot(udptable, sock_net(sk), 2177 + udp_sk(sk)->udp_port_hash); 2174 2178 hslot2 = udp_hashslot2(udptable, udp_sk(sk)->udp_portaddr_hash); 2175 2179 nhslot2 = udp_hashslot2(udptable, newhash); 2176 2180 udp_sk(sk)->udp_portaddr_hash = newhash; 2177 2181 2178 2182 if (hslot2 != nhslot2 || 2179 2183 rcu_access_pointer(sk->sk_reuseport_cb)) { 2180 - hslot = udp_hashslot(udptable, sock_net(sk), 2181 - udp_sk(sk)->udp_port_hash); 2182 2184 /* we must lock primary chain too */ 2183 2185 spin_lock_bh(&hslot->lock); 2184 2186 if (rcu_access_pointer(sk->sk_reuseport_cb)) ··· 2197 2199 spin_unlock(&nhslot2->lock); 2198 2200 } 2199 2201 2200 - if (udp_hashed4(sk)) { 2201 - udp_rehash4(udptable, sk, newhash4); 2202 + spin_unlock_bh(&hslot->lock); 2203 + } 2202 2204 2203 - if (hslot2 != 
nhslot2) { 2204 - spin_lock(&hslot2->lock); 2205 - udp_hash4_dec(hslot2); 2206 - spin_unlock(&hslot2->lock); 2205 + /* Now process hash4 if necessary: 2206 + * (1) update hslot4; 2207 + * (2) update hslot2->hash4_cnt. 2208 + * Note that hslot2/hslot4 should be checked separately, as 2209 + * either of them may change with the other unchanged. 2210 + */ 2211 + if (udp_hashed4(sk)) { 2212 + spin_lock_bh(&hslot->lock); 2207 2213 2208 - spin_lock(&nhslot2->lock); 2209 - udp_hash4_inc(nhslot2); 2210 - spin_unlock(&nhslot2->lock); 2211 - } 2214 + udp_rehash4(udptable, sk, newhash4); 2215 + if (hslot2 != nhslot2) { 2216 + spin_lock(&hslot2->lock); 2217 + udp_hash4_dec(hslot2); 2218 + spin_unlock(&hslot2->lock); 2219 + 2220 + spin_lock(&nhslot2->lock); 2221 + udp_hash4_inc(nhslot2); 2222 + spin_unlock(&nhslot2->lock); 2212 2223 } 2224 + 2213 2225 spin_unlock_bh(&hslot->lock); 2214 2226 } 2215 2227 }
+4
net/mac802154/iface.c
··· 684 684 ASSERT_RTNL(); 685 685 686 686 mutex_lock(&sdata->local->iflist_mtx); 687 + if (list_empty(&sdata->local->interfaces)) { 688 + mutex_unlock(&sdata->local->iflist_mtx); 689 + return; 690 + } 687 691 list_del_rcu(&sdata->list); 688 692 mutex_unlock(&sdata->local->iflist_mtx); 689 693
+9 -8
net/mptcp/ctrl.c
··· 102 102 } 103 103 104 104 #ifdef CONFIG_SYSCTL 105 - static int mptcp_set_scheduler(const struct net *net, const char *name) 105 + static int mptcp_set_scheduler(char *scheduler, const char *name) 106 106 { 107 - struct mptcp_pernet *pernet = mptcp_get_pernet(net); 108 107 struct mptcp_sched_ops *sched; 109 108 int ret = 0; 110 109 111 110 rcu_read_lock(); 112 111 sched = mptcp_sched_find(name); 113 112 if (sched) 114 - strscpy(pernet->scheduler, name, MPTCP_SCHED_NAME_MAX); 113 + strscpy(scheduler, name, MPTCP_SCHED_NAME_MAX); 115 114 else 116 115 ret = -ENOENT; 117 116 rcu_read_unlock(); ··· 121 122 static int proc_scheduler(const struct ctl_table *ctl, int write, 122 123 void *buffer, size_t *lenp, loff_t *ppos) 123 124 { 124 - const struct net *net = current->nsproxy->net_ns; 125 + char (*scheduler)[MPTCP_SCHED_NAME_MAX] = ctl->data; 125 126 char val[MPTCP_SCHED_NAME_MAX]; 126 127 struct ctl_table tbl = { 127 128 .data = val, ··· 129 130 }; 130 131 int ret; 131 132 132 - strscpy(val, mptcp_get_scheduler(net), MPTCP_SCHED_NAME_MAX); 133 + strscpy(val, *scheduler, MPTCP_SCHED_NAME_MAX); 133 134 134 135 ret = proc_dostring(&tbl, write, buffer, lenp, ppos); 135 136 if (write && ret == 0) 136 - ret = mptcp_set_scheduler(net, val); 137 + ret = mptcp_set_scheduler(*scheduler, val); 137 138 138 139 return ret; 139 140 } ··· 160 161 int write, void *buffer, size_t *lenp, 161 162 loff_t *ppos) 162 163 { 163 - struct mptcp_pernet *pernet = mptcp_get_pernet(current->nsproxy->net_ns); 164 + struct mptcp_pernet *pernet = container_of(table->data, 165 + struct mptcp_pernet, 166 + blackhole_timeout); 164 167 int ret; 165 168 166 169 ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos); ··· 229 228 { 230 229 .procname = "available_schedulers", 231 230 .maxlen = MPTCP_SCHED_BUF_MAX, 232 - .mode = 0644, 231 + .mode = 0444, 233 232 .proc_handler = proc_available_schedulers, 234 233 }, 235 234 {
+4 -2
net/mptcp/options.c
··· 607 607 } 608 608 opts->ext_copy.use_ack = 1; 609 609 opts->suboptions = OPTION_MPTCP_DSS; 610 - WRITE_ONCE(msk->old_wspace, __mptcp_space((struct sock *)msk)); 611 610 612 611 /* Add kind/length/subtype/flag overhead if mapping is not populated */ 613 612 if (dss_size == 0) ··· 1287 1288 } 1288 1289 MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_RCVWNDCONFLICT); 1289 1290 } 1290 - return; 1291 + goto update_wspace; 1291 1292 } 1292 1293 1293 1294 if (rcv_wnd_new != rcv_wnd_old) { ··· 1312 1313 th->window = htons(new_win); 1313 1314 MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_RCVWNDSHARED); 1314 1315 } 1316 + 1317 + update_wspace: 1318 + WRITE_ONCE(msk->old_wspace, tp->rcv_wnd); 1315 1319 } 1316 1320 1317 1321 __sum16 __mptcp_make_csum(u64 data_seq, u32 subflow_seq, u16 data_len, __wsum sum)
+7 -2
net/mptcp/protocol.h
··· 760 760 761 761 static inline bool mptcp_epollin_ready(const struct sock *sk) 762 762 { 763 + u64 data_avail = mptcp_data_avail(mptcp_sk(sk)); 764 + 765 + if (!data_avail) 766 + return false; 767 + 763 768 /* mptcp doesn't have to deal with small skbs in the receive queue, 764 - * at it can always coalesce them 769 + * as it can always coalesce them 765 770 */ 766 - return (mptcp_data_avail(mptcp_sk(sk)) >= sk->sk_rcvlowat) || 771 + return (data_avail >= sk->sk_rcvlowat) || 767 772 (mem_cgroup_sockets_enabled && sk->sk_memcg && 768 773 mem_cgroup_under_socket_pressure(sk->sk_memcg)) || 769 774 READ_ONCE(tcp_memory_pressure);
+2
net/ncsi/internal.h
··· 289 289 ncsi_dev_state_config_sp = 0x0301, 290 290 ncsi_dev_state_config_cis, 291 291 ncsi_dev_state_config_oem_gma, 292 + ncsi_dev_state_config_apply_mac, 292 293 ncsi_dev_state_config_clear_vids, 293 294 ncsi_dev_state_config_svf, 294 295 ncsi_dev_state_config_ev, ··· 323 322 #define NCSI_DEV_RESHUFFLE 4 324 323 #define NCSI_DEV_RESET 8 /* Reset state of NC */ 325 324 unsigned int gma_flag; /* OEM GMA flag */ 325 + struct sockaddr pending_mac; /* MAC address received from GMA */ 326 326 spinlock_t lock; /* Protect the NCSI device */ 327 327 unsigned int package_probe_id;/* Current ID during probe */ 328 328 unsigned int package_num; /* Number of packages */
+14 -2
net/ncsi/ncsi-manage.c
··· 1038 1038 : ncsi_dev_state_config_clear_vids; 1039 1039 break; 1040 1040 case ncsi_dev_state_config_oem_gma: 1041 - nd->state = ncsi_dev_state_config_clear_vids; 1041 + nd->state = ncsi_dev_state_config_apply_mac; 1042 1042 1043 1043 nca.package = np->id; 1044 1044 nca.channel = nc->id; ··· 1050 1050 nca.type = NCSI_PKT_CMD_OEM; 1051 1051 ret = ncsi_gma_handler(&nca, nc->version.mf_id); 1052 1052 } 1053 - if (ret < 0) 1053 + if (ret < 0) { 1054 + nd->state = ncsi_dev_state_config_clear_vids; 1054 1055 schedule_work(&ndp->work); 1056 + } 1055 1057 1056 1058 break; 1059 + case ncsi_dev_state_config_apply_mac: 1060 + rtnl_lock(); 1061 + ret = dev_set_mac_address(dev, &ndp->pending_mac, NULL); 1062 + rtnl_unlock(); 1063 + if (ret < 0) 1064 + netdev_warn(dev, "NCSI: Writing MAC address to device failed\n"); 1065 + 1066 + nd->state = ncsi_dev_state_config_clear_vids; 1067 + 1068 + fallthrough; 1057 1069 case ncsi_dev_state_config_clear_vids: 1058 1070 case ncsi_dev_state_config_svf: 1059 1071 case ncsi_dev_state_config_ev:
+6 -13
net/ncsi/ncsi-rsp.c
··· 628 628 static int ncsi_rsp_handler_oem_gma(struct ncsi_request *nr, int mfr_id) 629 629 { 630 630 struct ncsi_dev_priv *ndp = nr->ndp; 631 + struct sockaddr *saddr = &ndp->pending_mac; 631 632 struct net_device *ndev = ndp->ndev.dev; 632 633 struct ncsi_rsp_oem_pkt *rsp; 633 - struct sockaddr saddr; 634 634 u32 mac_addr_off = 0; 635 - int ret = 0; 636 635 637 636 /* Get the response header */ 638 637 rsp = (struct ncsi_rsp_oem_pkt *)skb_network_header(nr->rsp); 639 638 640 - saddr.sa_family = ndev->type; 641 639 ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE; 642 640 if (mfr_id == NCSI_OEM_MFR_BCM_ID) 643 641 mac_addr_off = BCM_MAC_ADDR_OFFSET; ··· 644 646 else if (mfr_id == NCSI_OEM_MFR_INTEL_ID) 645 647 mac_addr_off = INTEL_MAC_ADDR_OFFSET; 646 648 647 - memcpy(saddr.sa_data, &rsp->data[mac_addr_off], ETH_ALEN); 649 + saddr->sa_family = ndev->type; 650 + memcpy(saddr->sa_data, &rsp->data[mac_addr_off], ETH_ALEN); 648 651 if (mfr_id == NCSI_OEM_MFR_BCM_ID || mfr_id == NCSI_OEM_MFR_INTEL_ID) 649 - eth_addr_inc((u8 *)saddr.sa_data); 650 - if (!is_valid_ether_addr((const u8 *)saddr.sa_data)) 652 + eth_addr_inc((u8 *)saddr->sa_data); 653 + if (!is_valid_ether_addr((const u8 *)saddr->sa_data)) 651 654 return -ENXIO; 652 655 653 656 /* Set the flag for GMA command which should only be called once */ 654 657 ndp->gma_flag = 1; 655 658 656 - rtnl_lock(); 657 - ret = dev_set_mac_address(ndev, &saddr, NULL); 658 - rtnl_unlock(); 659 - if (ret < 0) 660 - netdev_warn(ndev, "NCSI: 'Writing mac address to device failed\n"); 661 - 662 - return ret; 659 + return 0; 663 660 } 664 661 665 662 /* Response handler for Mellanox card */
+4 -1
net/netfilter/nf_conntrack_core.c
··· 2517 2517 struct hlist_nulls_head *hash; 2518 2518 unsigned int nr_slots, i; 2519 2519 2520 - if (*sizep > (UINT_MAX / sizeof(struct hlist_nulls_head))) 2520 + if (*sizep > (INT_MAX / sizeof(struct hlist_nulls_head))) 2521 2521 return NULL; 2522 2522 2523 2523 BUILD_BUG_ON(sizeof(struct hlist_nulls_head) != sizeof(struct hlist_head)); 2524 2524 nr_slots = *sizep = roundup(*sizep, PAGE_SIZE / sizeof(struct hlist_nulls_head)); 2525 + 2526 + if (nr_slots > (INT_MAX / sizeof(struct hlist_nulls_head))) 2527 + return NULL; 2525 2528 2526 2529 hash = kvcalloc(nr_slots, sizeof(struct hlist_nulls_head), GFP_KERNEL); 2527 2530
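The nf_conntrack_core.c hunk above bounds the hash-table request against `INT_MAX / sizeof(bucket)` both before and after `roundup()`, because rounding up to a whole page of buckets can push an in-range request over the limit. A standalone sketch of that pattern (hypothetical constants and helper name, not the kernel code):

```c
#include <limits.h>

#define BUCKET_SIZE 8u       /* stand-in for sizeof(struct hlist_nulls_head) */
#define SLOTS_PER_PAGE 512u  /* stand-in for PAGE_SIZE / BUCKET_SIZE */

/* Round a requested slot count up to a whole page worth of buckets,
 * returning 0 if the request is (or becomes, after rounding) too large
 * for the allocation-size computation to stay within INT_MAX bytes. */
static unsigned int safe_nr_slots(unsigned int requested)
{
    unsigned int max = INT_MAX / BUCKET_SIZE;
    unsigned int nr;

    if (requested > max)
        return 0; /* reject before rounding */

    /* equivalent of roundup(requested, SLOTS_PER_PAGE); the addition
     * cannot wrap because requested <= INT_MAX / 8 */
    nr = ((requested + SLOTS_PER_PAGE - 1) / SLOTS_PER_PAGE) * SLOTS_PER_PAGE;

    if (nr > max)
        return 0; /* rounding pushed the count over the bound */
    return nr;
}
```

The second check is the one the patch adds: without it, a request just under the limit rounds up past it and the byte-size multiply overflows.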
+11 -4
net/netfilter/nf_tables_api.c
··· 8822 8822 } 8823 8823 8824 8824 static void __nft_unregister_flowtable_net_hooks(struct net *net, 8825 + struct nft_flowtable *flowtable, 8825 8826 struct list_head *hook_list, 8826 8827 bool release_netdev) 8827 8828 { ··· 8830 8829 8831 8830 list_for_each_entry_safe(hook, next, hook_list, list) { 8832 8831 nf_unregister_net_hook(net, &hook->ops); 8832 + flowtable->data.type->setup(&flowtable->data, hook->ops.dev, 8833 + FLOW_BLOCK_UNBIND); 8833 8834 if (release_netdev) { 8834 8835 list_del(&hook->list); 8835 8836 kfree_rcu(hook, rcu); ··· 8840 8837 } 8841 8838 8842 8839 static void nft_unregister_flowtable_net_hooks(struct net *net, 8840 + struct nft_flowtable *flowtable, 8843 8841 struct list_head *hook_list) 8844 8842 { 8845 - __nft_unregister_flowtable_net_hooks(net, hook_list, false); 8843 + __nft_unregister_flowtable_net_hooks(net, flowtable, hook_list, false); 8846 8844 } 8847 8845 8848 8846 static int nft_register_flowtable_net_hooks(struct net *net, ··· 9485 9481 9486 9482 flowtable->data.type->free(&flowtable->data); 9487 9483 list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) { 9488 - flowtable->data.type->setup(&flowtable->data, hook->ops.dev, 9489 - FLOW_BLOCK_UNBIND); 9490 9484 list_del_rcu(&hook->list); 9491 9485 kfree_rcu(hook, rcu); 9492 9486 } ··· 10872 10870 &nft_trans_flowtable_hooks(trans), 10873 10871 trans->msg_type); 10874 10872 nft_unregister_flowtable_net_hooks(net, 10873 + nft_trans_flowtable(trans), 10875 10874 &nft_trans_flowtable_hooks(trans)); 10876 10875 } else { 10877 10876 list_del_rcu(&nft_trans_flowtable(trans)->list); ··· 10881 10878 NULL, 10882 10879 trans->msg_type); 10883 10880 nft_unregister_flowtable_net_hooks(net, 10881 + nft_trans_flowtable(trans), 10884 10882 &nft_trans_flowtable(trans)->hook_list); 10885 10883 } 10886 10884 break; ··· 11144 11140 case NFT_MSG_NEWFLOWTABLE: 11145 11141 if (nft_trans_flowtable_update(trans)) { 11146 11142 nft_unregister_flowtable_net_hooks(net, 11143 + nft_trans_flowtable(trans), 11147 11144 &nft_trans_flowtable_hooks(trans)); 11148 11145 } else { 11149 11146 nft_use_dec_restore(&table->use); 11150 11147 list_del_rcu(&nft_trans_flowtable(trans)->list); 11151 11148 nft_unregister_flowtable_net_hooks(net, 11149 + nft_trans_flowtable(trans), 11152 11150 &nft_trans_flowtable(trans)->hook_list); 11153 11151 } 11154 11152 break; ··· 11743 11737 list_for_each_entry(chain, &table->chains, list) 11744 11738 __nf_tables_unregister_hook(net, table, chain, true); 11745 11739 list_for_each_entry(flowtable, &table->flowtables, list) 11746 - __nft_unregister_flowtable_net_hooks(net, &flowtable->hook_list, 11740 + __nft_unregister_flowtable_net_hooks(net, flowtable, 11741 + &flowtable->hook_list, 11747 11742 true); 11748 11743 } 11749 11744
+3 -1
net/openvswitch/actions.c
··· 934 934 { 935 935 struct vport *vport = ovs_vport_rcu(dp, out_port); 936 936 937 - if (likely(vport && netif_carrier_ok(vport->dev))) { 937 + if (likely(vport && 938 + netif_running(vport->dev) && 939 + netif_carrier_ok(vport->dev))) { 938 940 u16 mru = OVS_CB(skb)->mru; 939 941 u32 cutlen = OVS_CB(skb)->cutlen; 940 942

+32 -7
net/rds/tcp.c
··· 61 61 62 62 static struct kmem_cache *rds_tcp_conn_slab; 63 63 64 - static int rds_tcp_skbuf_handler(const struct ctl_table *ctl, int write, 65 - void *buffer, size_t *lenp, loff_t *fpos); 64 + static int rds_tcp_sndbuf_handler(const struct ctl_table *ctl, int write, 65 + void *buffer, size_t *lenp, loff_t *fpos); 66 + static int rds_tcp_rcvbuf_handler(const struct ctl_table *ctl, int write, 67 + void *buffer, size_t *lenp, loff_t *fpos); 66 68 67 69 static int rds_tcp_min_sndbuf = SOCK_MIN_SNDBUF; 68 70 static int rds_tcp_min_rcvbuf = SOCK_MIN_RCVBUF; ··· 76 74 /* data is per-net pointer */ 77 75 .maxlen = sizeof(int), 78 76 .mode = 0644, 79 - .proc_handler = rds_tcp_skbuf_handler, 77 + .proc_handler = rds_tcp_sndbuf_handler, 80 78 .extra1 = &rds_tcp_min_sndbuf, 81 79 }, 82 80 #define RDS_TCP_RCVBUF 1 ··· 85 83 /* data is per-net pointer */ 86 84 .maxlen = sizeof(int), 87 85 .mode = 0644, 88 - .proc_handler = rds_tcp_skbuf_handler, 86 + .proc_handler = rds_tcp_rcvbuf_handler, 89 87 .extra1 = &rds_tcp_min_rcvbuf, 90 88 }, 91 89 }; ··· 684 682 spin_unlock_irq(&rds_tcp_conn_lock); 685 683 } 686 684 687 - static int rds_tcp_skbuf_handler(const struct ctl_table *ctl, int write, 685 + static int rds_tcp_skbuf_handler(struct rds_tcp_net *rtn, 686 + const struct ctl_table *ctl, int write, 688 687 void *buffer, size_t *lenp, loff_t *fpos) 689 688 { 690 - struct net *net = current->nsproxy->net_ns; 691 689 int err; 692 690 693 691 err = proc_dointvec_minmax(ctl, write, buffer, lenp, fpos); ··· 696 694 *(int *)(ctl->extra1)); 697 695 return err; 698 696 } 699 - if (write) 697 + 698 + if (write && rtn->rds_tcp_listen_sock && rtn->rds_tcp_listen_sock->sk) { 699 + struct net *net = sock_net(rtn->rds_tcp_listen_sock->sk); 700 + 700 701 rds_tcp_sysctl_reset(net); 702 + } 703 + 701 704 return 0; 705 + } 706 + 707 + static int rds_tcp_sndbuf_handler(const struct ctl_table *ctl, int write, 708 + void *buffer, size_t *lenp, loff_t *fpos) 709 + { 710 + struct rds_tcp_net *rtn = container_of(ctl->data, struct rds_tcp_net, 711 + sndbuf_size); 712 + 713 + return rds_tcp_skbuf_handler(rtn, ctl, write, buffer, lenp, fpos); 714 + } 715 + 716 + static int rds_tcp_rcvbuf_handler(const struct ctl_table *ctl, int write, 717 + void *buffer, size_t *lenp, loff_t *fpos) 718 + { 719 + struct rds_tcp_net *rtn = container_of(ctl->data, struct rds_tcp_net, 720 + rcvbuf_size); 721 + 722 + return rds_tcp_skbuf_handler(rtn, ctl, write, buffer, lenp, fpos); 702 723 } 703 724 704 725 static void rds_tcp_exit(void)
+2 -1
net/sched/cls_flow.c
··· 356 356 [TCA_FLOW_KEYS] = { .type = NLA_U32 }, 357 357 [TCA_FLOW_MODE] = { .type = NLA_U32 }, 358 358 [TCA_FLOW_BASECLASS] = { .type = NLA_U32 }, 359 - [TCA_FLOW_RSHIFT] = { .type = NLA_U32 }, 359 + [TCA_FLOW_RSHIFT] = NLA_POLICY_MAX(NLA_U32, 360 + 31 /* BITS_PER_U32 - 1 */), 360 361 [TCA_FLOW_ADDEND] = { .type = NLA_U32 }, 361 362 [TCA_FLOW_MASK] = { .type = NLA_U32 }, 362 363 [TCA_FLOW_XOR] = { .type = NLA_U32 },
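The cls_flow policy change above caps `TCA_FLOW_RSHIFT` at 31 because shifting a 32-bit value by 32 or more is undefined behavior in C (on x86 the hardware masks the count, so `x >> 32` can silently act as `x >> 0`). A minimal sketch of validating an attribute the same way, with hypothetical helper names:

```c
#include <stdbool.h>
#include <stdint.h>

/* Accept a right-shift amount only if it is a defined shift for a u32,
 * mirroring NLA_POLICY_MAX(NLA_U32, 31) in the patch. */
static bool flow_rshift_ok(uint32_t rshift)
{
    return rshift <= 31; /* BITS_PER_U32 - 1 */
}

/* Apply the classifier's right shift only after validation; an
 * out-of-range count leaves the key untouched instead of invoking UB. */
static uint32_t flow_apply_rshift(uint32_t keys, uint32_t rshift)
{
    return flow_rshift_ok(rshift) ? (keys >> rshift) : keys;
}
```

Rejecting the value at policy-parse time, as the patch does, is preferable to masking it later: userspace gets an immediate error instead of a silently different classification.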
+75 -65
net/sched/sch_cake.c
··· 627 627 return (flow_mode & CAKE_FLOW_DUAL_DST) == CAKE_FLOW_DUAL_DST; 628 628 } 629 629 630 + static void cake_dec_srchost_bulk_flow_count(struct cake_tin_data *q, 631 + struct cake_flow *flow, 632 + int flow_mode) 633 + { 634 + if (likely(cake_dsrc(flow_mode) && 635 + q->hosts[flow->srchost].srchost_bulk_flow_count)) 636 + q->hosts[flow->srchost].srchost_bulk_flow_count--; 637 + } 638 + 639 + static void cake_inc_srchost_bulk_flow_count(struct cake_tin_data *q, 640 + struct cake_flow *flow, 641 + int flow_mode) 642 + { 643 + if (likely(cake_dsrc(flow_mode) && 644 + q->hosts[flow->srchost].srchost_bulk_flow_count < CAKE_QUEUES)) 645 + q->hosts[flow->srchost].srchost_bulk_flow_count++; 646 + } 647 + 648 + static void cake_dec_dsthost_bulk_flow_count(struct cake_tin_data *q, 649 + struct cake_flow *flow, 650 + int flow_mode) 651 + { 652 + if (likely(cake_ddst(flow_mode) && 653 + q->hosts[flow->dsthost].dsthost_bulk_flow_count)) 654 + q->hosts[flow->dsthost].dsthost_bulk_flow_count--; 655 + } 656 + 657 + static void cake_inc_dsthost_bulk_flow_count(struct cake_tin_data *q, 658 + struct cake_flow *flow, 659 + int flow_mode) 660 + { 661 + if (likely(cake_ddst(flow_mode) && 662 + q->hosts[flow->dsthost].dsthost_bulk_flow_count < CAKE_QUEUES)) 663 + q->hosts[flow->dsthost].dsthost_bulk_flow_count++; 664 + } 665 + 666 + static u16 cake_get_flow_quantum(struct cake_tin_data *q, 667 + struct cake_flow *flow, 668 + int flow_mode) 669 + { 670 + u16 host_load = 1; 671 + 672 + if (cake_dsrc(flow_mode)) 673 + host_load = max(host_load, 674 + q->hosts[flow->srchost].srchost_bulk_flow_count); 675 + 676 + if (cake_ddst(flow_mode)) 677 + host_load = max(host_load, 678 + q->hosts[flow->dsthost].dsthost_bulk_flow_count); 679 + 680 + /* The get_random_u16() is a way to apply dithering to avoid 681 + * accumulating roundoff errors 682 + */ 683 + return (q->flow_quantum * quantum_div[host_load] + 684 + get_random_u16()) >> 16; 685 + } 686 + 630 687 static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb, 631 688 int flow_mode, u16 flow_override, u16 host_override) 632 689 { ··· 830 773 allocate_dst = cake_ddst(flow_mode); 831 774 832 775 if (q->flows[outer_hash + k].set == CAKE_SET_BULK) { 833 - if (allocate_src) 834 - q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--; 835 - if (allocate_dst) 836 - q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--; 776 + cake_dec_srchost_bulk_flow_count(q, &q->flows[outer_hash + k], flow_mode); 777 + cake_dec_dsthost_bulk_flow_count(q, &q->flows[outer_hash + k], flow_mode); 837 778 } 838 779 found: 839 780 /* reserve queue for future packets in same flow */ ··· 856 801 q->hosts[outer_hash + k].srchost_tag = srchost_hash; 857 802 found_src: 858 803 srchost_idx = outer_hash + k; 859 - if (q->flows[reduced_hash].set == CAKE_SET_BULK) 860 - q->hosts[srchost_idx].srchost_bulk_flow_count++; 861 804 q->flows[reduced_hash].srchost = srchost_idx; 805 + 806 + if (q->flows[reduced_hash].set == CAKE_SET_BULK) 807 + cake_inc_srchost_bulk_flow_count(q, &q->flows[reduced_hash], flow_mode); 862 808 } 863 809 864 810 if (allocate_dst) { ··· 880 824 q->hosts[outer_hash + k].dsthost_tag = dsthost_hash; 881 825 found_dst: 882 826 dsthost_idx = outer_hash + k; 883 - if (q->flows[reduced_hash].set == CAKE_SET_BULK) 884 - q->hosts[dsthost_idx].dsthost_bulk_flow_count++; 885 827 q->flows[reduced_hash].dsthost = dsthost_idx; 828 + 829 + if (q->flows[reduced_hash].set == CAKE_SET_BULK) 830 + cake_inc_dsthost_bulk_flow_count(q, &q->flows[reduced_hash], flow_mode); 886 831 } 887 832 } 888 833 ··· 1896 1839 1897 1840 /* flowchain */ 1898 1841 if (!flow->set || flow->set == CAKE_SET_DECAYING) { 1899 - struct cake_host *srchost = &b->hosts[flow->srchost]; 1900 - struct cake_host *dsthost = &b->hosts[flow->dsthost]; 1901 - u16 host_load = 1; 1902 - 1903 1842 if (!flow->set) { 1904 1843 list_add_tail(&flow->flowchain, &b->new_flows); 1905 1844 } else { ··· 1905 1852 flow->set = CAKE_SET_SPARSE; 1906 1853 b->sparse_flow_count++; 1907 1854 1908 - if (cake_dsrc(q->flow_mode)) 1909 - host_load = max(host_load, srchost->srchost_bulk_flow_count); 1910 - 1911 - if (cake_ddst(q->flow_mode)) 1912 - host_load = max(host_load, dsthost->dsthost_bulk_flow_count); 1913 - 1914 - flow->deficit = (b->flow_quantum * 1915 - quantum_div[host_load]) >> 16; 1855 + flow->deficit = cake_get_flow_quantum(b, flow, q->flow_mode); 1916 1856 } else if (flow->set == CAKE_SET_SPARSE_WAIT) { 1917 - struct cake_host *srchost = &b->hosts[flow->srchost]; 1918 - struct cake_host *dsthost = &b->hosts[flow->dsthost]; 1919 - 1920 1857 /* this flow was empty, accounted as a sparse flow, but actually 1921 1858 * in the bulk rotation. 1922 1859 */ ··· 1914 1871 b->sparse_flow_count--; 1915 1872 b->bulk_flow_count++; 1916 1873 1917 - if (cake_dsrc(q->flow_mode)) 1918 - srchost->srchost_bulk_flow_count++; 1919 - 1920 - if (cake_ddst(q->flow_mode)) 1921 - dsthost->dsthost_bulk_flow_count++; 1922 - 1874 + cake_inc_srchost_bulk_flow_count(b, flow, q->flow_mode); 1875 + cake_inc_dsthost_bulk_flow_count(b, flow, q->flow_mode); 1923 1876 } 1924 1877 1925 1878 if (q->buffer_used > q->buffer_max_used) ··· 1972 1933 { 1973 1934 struct cake_sched_data *q = qdisc_priv(sch); 1974 1935 struct cake_tin_data *b = &q->tins[q->cur_tin]; 1975 - struct cake_host *srchost, *dsthost; 1976 1936 ktime_t now = ktime_get(); 1977 1937 struct cake_flow *flow; 1978 1938 struct list_head *head; 1979 1939 bool first_flow = true; 1980 1940 struct sk_buff *skb; 1981 - u16 host_load; 1982 1941 u64 delay; 1983 1942 u32 len; 1984 1943 ··· 2076 2039 q->cur_flow = flow - b->flows; 2077 2040 first_flow = false; 2078 2041 2079 - /* triple isolation (modified DRR++) */ 2080 - srchost = &b->hosts[flow->srchost]; 2081 - dsthost = &b->hosts[flow->dsthost]; 2082 - host_load = 1; 2083 - 2084 2042 /* flow isolation (DRR++) */ 2085 2043 if (flow->deficit <= 0) { 2086 2044 /* Keep all flows with deficits out of the sparse and decaying ··· 2087 2055 b->sparse_flow_count--; 2088 2056 b->bulk_flow_count++; 2089 2057 2090 - if (cake_dsrc(q->flow_mode)) 2091 - srchost->srchost_bulk_flow_count++; 2092 - 2093 - if (cake_ddst(q->flow_mode)) 2094 - dsthost->dsthost_bulk_flow_count++; 2058 + cake_inc_srchost_bulk_flow_count(b, flow, q->flow_mode); 2059 + cake_inc_dsthost_bulk_flow_count(b, flow, q->flow_mode); 2095 2060 2096 2061 flow->set = CAKE_SET_BULK; 2097 2062 } else { ··· 2100 2071 } 2101 2072 } 2102 2073 2103 - if (cake_dsrc(q->flow_mode)) 2104 - host_load = max(host_load, srchost->srchost_bulk_flow_count); 2105 - 2106 - if (cake_ddst(q->flow_mode)) 2107 - host_load = max(host_load, dsthost->dsthost_bulk_flow_count); 2108 - 2109 - WARN_ON(host_load > CAKE_QUEUES); 2110 - 2111 - /* The get_random_u16() is a way to apply dithering to avoid 2112 - * accumulating roundoff errors 2113 - */ 2114 - flow->deficit += (b->flow_quantum * quantum_div[host_load] + 2115 - get_random_u16()) >> 16; 2074 + flow->deficit += cake_get_flow_quantum(b, flow, q->flow_mode); 2116 2075 list_move_tail(&flow->flowchain, &b->old_flows); 2117 2076 2118 2077 goto retry; ··· 2124 2107 if (flow->set == CAKE_SET_BULK) { 2125 2108 b->bulk_flow_count--; 2126 2109 2127 - if (cake_dsrc(q->flow_mode)) 2128 - srchost->srchost_bulk_flow_count--; 2129 - 2130 - if (cake_ddst(q->flow_mode)) 2131 - dsthost->dsthost_bulk_flow_count--; 2110 + cake_dec_srchost_bulk_flow_count(b, flow, q->flow_mode); 2111 + cake_dec_dsthost_bulk_flow_count(b, flow, q->flow_mode); 2132 2112 2133 2113 b->decaying_flow_count++; 2134 2114 } else if (flow->set == CAKE_SET_SPARSE || ··· 2143 2129 else if (flow->set == CAKE_SET_BULK) { 2144 2130 b->bulk_flow_count--; 2145 2131 2146 - if (cake_dsrc(q->flow_mode)) 2147 - srchost->srchost_bulk_flow_count--; 2148 - 2149 - if (cake_ddst(q->flow_mode)) 2150 - dsthost->dsthost_bulk_flow_count--; 2151 - 2132 + cake_dec_srchost_bulk_flow_count(b, flow, q->flow_mode); 2133 + cake_dec_dsthost_bulk_flow_count(b, flow, q->flow_mode); 2152 2134 } else 2153 2135 b->decaying_flow_count--; 2154 2136
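The sch_cake refactor above moves the per-host bulk-flow accounting into inc/dec helpers that clamp the counters to the range [0, CAKE_QUEUES], so a miscounted flow can no longer underflow a counter or index past `quantum_div[]`. The clamping idea in isolation (a simplified sketch, not the kernel's structs):

```c
#include <stdint.h>

#define CAKE_QUEUES 1024u /* upper bound, as in sch_cake */

/* Increment a per-host bulk-flow counter, saturating at CAKE_QUEUES
 * instead of growing past the bounds of the quantum lookup table. */
static void host_count_inc(uint16_t *count)
{
    if (*count < CAKE_QUEUES)
        (*count)++;
}

/* Decrement, saturating at zero instead of wrapping to 65535. */
static void host_count_dec(uint16_t *count)
{
    if (*count)
        (*count)--;
}
```

Centralizing the checks in two helpers also removes the repeated `cake_dsrc()`/`cake_ddst()` boilerplate that the old open-coded increments carried at every call site.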
+8 -6
net/sctp/sysctl.c
··· 387 387 static int proc_sctp_do_hmac_alg(const struct ctl_table *ctl, int write, 388 388 void *buffer, size_t *lenp, loff_t *ppos) 389 389 { 390 - struct net *net = current->nsproxy->net_ns; 390 + struct net *net = container_of(ctl->data, struct net, 391 + sctp.sctp_hmac_alg); 391 392 struct ctl_table tbl; 392 393 bool changed = false; 393 394 char *none = "none"; ··· 433 432 static int proc_sctp_do_rto_min(const struct ctl_table *ctl, int write, 434 433 void *buffer, size_t *lenp, loff_t *ppos) 435 434 { 436 - struct net *net = current->nsproxy->net_ns; 435 + struct net *net = container_of(ctl->data, struct net, sctp.rto_min); 437 436 unsigned int min = *(unsigned int *) ctl->extra1; 438 437 unsigned int max = *(unsigned int *) ctl->extra2; 439 438 struct ctl_table tbl; ··· 461 460 static int proc_sctp_do_rto_max(const struct ctl_table *ctl, int write, 462 461 void *buffer, size_t *lenp, loff_t *ppos) 463 462 { 464 - struct net *net = current->nsproxy->net_ns; 463 + struct net *net = container_of(ctl->data, struct net, sctp.rto_max); 465 464 unsigned int min = *(unsigned int *) ctl->extra1; 466 465 unsigned int max = *(unsigned int *) ctl->extra2; 467 466 struct ctl_table tbl; ··· 499 498 static int proc_sctp_do_auth(const struct ctl_table *ctl, int write, 500 499 void *buffer, size_t *lenp, loff_t *ppos) 501 500 { 502 - struct net *net = current->nsproxy->net_ns; 501 + struct net *net = container_of(ctl->data, struct net, sctp.auth_enable); 503 502 struct ctl_table tbl; 504 503 int new_value, ret; 505 504 ··· 528 527 static int proc_sctp_do_udp_port(const struct ctl_table *ctl, int write, 529 528 void *buffer, size_t *lenp, loff_t *ppos) 530 529 { 531 - struct net *net = current->nsproxy->net_ns; 530 + struct net *net = container_of(ctl->data, struct net, sctp.udp_port); 532 531 unsigned int min = *(unsigned int *)ctl->extra1; 533 532 unsigned int max = *(unsigned int *)ctl->extra2; 534 533 struct ctl_table tbl; ··· 569 568 static int proc_sctp_do_probe_interval(const struct ctl_table *ctl, int write, 570 569 void *buffer, size_t *lenp, loff_t *ppos) 571 570 { 572 - struct net *net = current->nsproxy->net_ns; 571 + struct net *net = container_of(ctl->data, struct net, 572 + sctp.probe_interval); 573 573 struct ctl_table tbl; 574 574 int ret, new_value; 575 575
+1 -1
net/tls/tls_sw.c
··· 458 458 459 459 tx_err: 460 460 if (rc < 0 && rc != -EAGAIN) 461 - tls_err_abort(sk, -EBADMSG); 461 + tls_err_abort(sk, rc); 462 462 463 463 return rc; 464 464 }
+18
net/vmw_vsock/af_vsock.c
··· 491 491 */ 492 492 vsk->transport->release(vsk); 493 493 vsock_deassign_transport(vsk); 494 + 495 + /* transport's release() and destruct() can touch some socket 496 + * state, since we are reassigning the socket to a new transport 497 + * during vsock_connect(), let's reset these fields to have a 498 + * clean state. 499 + */ 500 + sock_reset_flag(sk, SOCK_DONE); 501 + sk->sk_state = TCP_CLOSE; 502 + vsk->peer_shutdown = 0; 494 503 } 495 504 496 505 /* We increase the module refcnt to prevent the transport unloading ··· 879 870 880 871 s64 vsock_stream_has_data(struct vsock_sock *vsk) 881 872 { 873 + if (WARN_ON(!vsk->transport)) 874 + return 0; 875 + 882 876 return vsk->transport->stream_has_data(vsk); 883 877 } 884 878 EXPORT_SYMBOL_GPL(vsock_stream_has_data); ··· 889 877 s64 vsock_connectible_has_data(struct vsock_sock *vsk) 890 878 { 891 879 struct sock *sk = sk_vsock(vsk); 880 + 881 + if (WARN_ON(!vsk->transport)) 882 + return 0; 892 883 893 884 if (sk->sk_type == SOCK_SEQPACKET) 894 885 return vsk->transport->seqpacket_has_data(vsk); ··· 902 887 903 888 s64 vsock_stream_has_space(struct vsock_sock *vsk) 904 889 { 890 + if (WARN_ON(!vsk->transport)) 891 + return 0; 892 + 905 893 return vsk->transport->stream_has_space(vsk); 906 894 } 907 895 EXPORT_SYMBOL_GPL(vsock_stream_has_space);
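The af_vsock.c hunks above guard every transport callback with a NULL check because, during `vsock_connect()`, the socket can be transiently deassigned from its transport; calling through `vsk->transport` in that window would dereference NULL. The guard pattern reduced to a sketch (hypothetical `toy_*` types; the kernel additionally wraps the check in `WARN_ON()`):

```c
#include <stddef.h>
#include <stdint.h>

/* Toy stand-ins for the vsock transport indirection, much simpler
 * than the kernel's struct vsock_transport. */
struct toy_transport {
    int64_t (*stream_has_data)(void *priv);
};

struct toy_vsock {
    struct toy_transport *transport; /* NULL while being reassigned */
    void *priv;
};

/* Mirror of the guard added in af_vsock.c: a socket with no transport
 * assigned reports zero queued data instead of crashing. */
static int64_t toy_stream_has_data(struct toy_vsock *vsk)
{
    if (!vsk->transport)
        return 0;
    return vsk->transport->stream_has_data(vsk->priv);
}

/* Example transport callback, for demonstration only. */
static int64_t toy_has_five(void *priv)
{
    (void)priv;
    return 5;
}
```

Returning a safe default keeps callers like poll paths working while the reassignment is in flight, rather than forcing every caller to take a lock around the pointer.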
+27 -11
net/vmw_vsock/virtio_transport_common.c
··· 26 26 /* Threshold for detecting small packets to copy */ 27 27 #define GOOD_COPY_LEN 128 28 28 29 + static void virtio_transport_cancel_close_work(struct vsock_sock *vsk, 30 + bool cancel_timeout); 31 + 29 32 static const struct virtio_transport * 30 33 virtio_transport_get_ops(struct vsock_sock *vsk) 31 34 { ··· 1112 1109 { 1113 1110 struct virtio_vsock_sock *vvs = vsk->trans; 1114 1111 1112 + virtio_transport_cancel_close_work(vsk, true); 1113 + 1115 1114 kfree(vvs); 1116 1115 vsk->trans = NULL; 1117 1116 } ··· 1209 1204 } 1210 1205 } 1211 1206 1207 + static void virtio_transport_cancel_close_work(struct vsock_sock *vsk, 1208 + bool cancel_timeout) 1209 + { 1210 + struct sock *sk = sk_vsock(vsk); 1211 + 1212 + if (vsk->close_work_scheduled && 1213 + (!cancel_timeout || cancel_delayed_work(&vsk->close_work))) { 1214 + vsk->close_work_scheduled = false; 1215 + 1216 + virtio_transport_remove_sock(vsk); 1217 + 1218 + /* Release refcnt obtained when we scheduled the timeout */ 1219 + sock_put(sk); 1220 + } 1221 + } 1222 + 1212 1223 static void virtio_transport_do_close(struct vsock_sock *vsk, 1213 1224 bool cancel_timeout) 1214 1225 { ··· 1236 1215 sk->sk_state = TCP_CLOSING; 1237 1216 sk->sk_state_change(sk); 1238 1217 1239 - if (vsk->close_work_scheduled && 1240 - (!cancel_timeout || cancel_delayed_work(&vsk->close_work))) { 1241 - vsk->close_work_scheduled = false; 1242 - 1243 - virtio_transport_remove_sock(vsk); 1244 - 1245 - /* Release refcnt obtained when we scheduled the timeout */ 1246 - sock_put(sk); 1247 - } 1218 + virtio_transport_cancel_close_work(vsk, cancel_timeout); 1248 1219 } 1249 1220 1250 1221 static void virtio_transport_close_timeout(struct work_struct *work) ··· 1641 1628 1642 1629 lock_sock(sk); 1643 1630 1644 - /* Check if sk has been closed before lock_sock */ 1645 - if (sock_flag(sk, SOCK_DONE)) { 1631 + /* Check if sk has been closed or assigned to another transport before 1632 + * lock_sock (note: listener sockets are not assigned to any transport) 1633 + */ 1634 + if (sock_flag(sk, SOCK_DONE) || 1635 + (sk->sk_state != TCP_LISTEN && vsk->transport != &t->transport)) { 1646 1636 (void)virtio_transport_reset_no_sock(t, skb); 1647 1637 release_sock(sk); 1648 1638 sock_put(sk);
+9
net/vmw_vsock/vsock_bpf.c
··· 77 77 size_t len, int flags, int *addr_len) 78 78 { 79 79 struct sk_psock *psock; 80 + struct vsock_sock *vsk; 80 81 int copied; 81 82 82 83 psock = sk_psock_get(sk); ··· 85 84 return __vsock_recvmsg(sk, msg, len, flags); 86 85 87 86 lock_sock(sk); 87 + vsk = vsock_sk(sk); 88 + 89 + if (!vsk->transport) { 90 + copied = -ENODEV; 91 + goto out; 92 + } 93 + 88 94 if (vsock_has_data(sk, psock) && sk_psock_queue_empty(psock)) { 89 95 release_sock(sk); 90 96 sk_psock_put(sk, psock); ··· 116 108 copied = sk_msg_recvmsg(sk, psock, msg, len, flags); 117 109 } 118 110 111 + out: 119 112 release_sock(sk); 120 113 sk_psock_put(sk, psock); 121 114
+9 -5
net/xdp/xsk.c
··· 322 322 return -ENOSPC; 323 323 } 324 324 325 - sk_mark_napi_id_once_xdp(&xs->sk, xdp); 326 325 return 0; 327 326 } 328 327 ··· 907 908 if (unlikely(!xs->tx)) 908 909 return -ENOBUFS; 909 910 910 - if (sk_can_busy_loop(sk)) { 911 - if (xs->zc) 912 - __sk_mark_napi_id_once(sk, xsk_pool_get_napi_id(xs->pool)); 911 + if (sk_can_busy_loop(sk)) 913 912 sk_busy_loop(sk, 1); /* only support non-blocking sockets */ 914 - } 915 913 916 914 if (xs->zc && xsk_no_wakeup(sk)) 917 915 return 0; ··· 1293 1297 xs->sg = !!(xs->umem->flags & XDP_UMEM_SG_FLAG); 1294 1298 xs->queue_id = qid; 1295 1299 xp_add_xsk(xs->pool, xs); 1300 + 1301 + if (xs->zc && qid < dev->real_num_rx_queues) { 1302 + struct netdev_rx_queue *rxq; 1303 + 1304 + rxq = __netif_get_rx_queue(dev, qid); 1305 + if (rxq->napi) 1306 + __sk_mark_napi_id_once(sk, rxq->napi->napi_id); 1307 + } 1296 1308 1297 1309 out_unlock: 1298 1310 if (err) {
+14 -2
scripts/decode_stacktrace.sh
··· 286 286 last=$(( $last - 1 )) 287 287 fi 288 288 289 + # Extract info after the symbol if present. E.g.: 290 + # func_name+0x54/0x80 (P) 291 + # ^^^ 292 + # The regex assumes only uppercase letters will be used. To be 293 + # extended if needed. 294 + local info_str="" 295 + if [[ ${words[$last]} =~ \([A-Z]*\) ]]; then 296 + info_str=${words[$last]} 297 + unset words[$last] 298 + last=$(( $last - 1 )) 299 + fi 300 + 289 301 if [[ ${words[$last]} =~ \[([^]]+)\] ]]; then 290 302 module=${words[$last]} 291 303 # some traces format is "(%pS)", which like "(foo+0x0/0x1 [bar])" ··· 325 313 # Add up the line number to the symbol 326 314 if [[ -z ${module} ]] 327 315 then 328 - echo "${words[@]}" "$symbol" 316 + echo "${words[@]}" "$symbol ${info_str}" 329 317 else 330 - echo "${words[@]}" "$symbol $module" 318 + echo "${words[@]}" "$symbol $module ${info_str}" 331 319 fi 332 320 } 333 321
+34 -27
security/selinux/avc.c
··· 174 174 * using a linked list for extended_perms_decision lookup because the list is 175 175 * always small. i.e. less than 5, typically 1 176 176 */ 177 - static struct extended_perms_decision *avc_xperms_decision_lookup(u8 driver, 178 - struct avc_xperms_node *xp_node) 177 + static struct extended_perms_decision * 178 + avc_xperms_decision_lookup(u8 driver, u8 base_perm, 179 + struct avc_xperms_node *xp_node) 179 180 { 180 181 struct avc_xperms_decision_node *xpd_node; 181 182 182 183 list_for_each_entry(xpd_node, &xp_node->xpd_head, xpd_list) { 183 - if (xpd_node->xpd.driver == driver) 184 + if (xpd_node->xpd.driver == driver && 185 + xpd_node->xpd.base_perm == base_perm) 184 186 return &xpd_node->xpd; 185 187 } 186 188 return NULL; ··· 207 205 } 208 206 209 207 static void avc_xperms_allow_perm(struct avc_xperms_node *xp_node, 210 - u8 driver, u8 perm) 208 + u8 driver, u8 base_perm, u8 perm) 211 209 { 212 210 struct extended_perms_decision *xpd; 213 211 security_xperm_set(xp_node->xp.drivers.p, driver); 214 - xpd = avc_xperms_decision_lookup(driver, xp_node); 212 + xp_node->xp.base_perms |= base_perm; 213 + xpd = avc_xperms_decision_lookup(driver, base_perm, xp_node); 215 214 if (xpd && xpd->allowed) 216 215 security_xperm_set(xpd->allowed->p, perm); 217 216 } ··· 248 245 static void avc_copy_xperms_decision(struct extended_perms_decision *dest, 249 246 struct extended_perms_decision *src) 250 247 { 248 + dest->base_perm = src->base_perm; 251 249 dest->driver = src->driver; 252 250 dest->used = src->used; 253 251 if (dest->used & XPERMS_ALLOWED) ··· 276 272 */ 277 273 u8 i = perm >> 5; 278 274 275 + dest->base_perm = src->base_perm; 279 276 dest->used = src->used; 280 277 if (dest->used & XPERMS_ALLOWED) 281 278 dest->allowed->p[i] = src->allowed->p[i]; ··· 362 357 363 358 memcpy(dest->xp.drivers.p, src->xp.drivers.p, sizeof(dest->xp.drivers.p)); 364 359 dest->xp.len = src->xp.len; 360 + dest->xp.base_perms = src->xp.base_perms; 365 361 366 362 /* for each source xpd allocate a destination xpd and copy */ 367 363 list_for_each_entry(src_xpd, &src->xpd_head, xpd_list) { ··· 813 807 * @event : Updating event 814 808 * @perms : Permission mask bits 815 809 * @driver: xperm driver information 810 + * @base_perm: the base permission associated with the extended permission 816 811 * @xperm: xperm permissions 817 812 * @ssid: AVC entry source sid 818 813 * @tsid: AVC entry target sid ··· 827 820 * otherwise, this function updates the AVC entry. The original AVC-entry object 828 821 * will release later by RCU. 829 822 */ 830 - static int avc_update_node(u32 event, u32 perms, u8 driver, u8 xperm, u32 ssid, 831 - u32 tsid, u16 tclass, u32 seqno, 832 - struct extended_perms_decision *xpd, 833 - u32 flags) 823 + static int avc_update_node(u32 event, u32 perms, u8 driver, u8 base_perm, 824 + u8 xperm, u32 ssid, u32 tsid, u16 tclass, u32 seqno, 825 + struct extended_perms_decision *xpd, u32 flags) 834 826 { 835 827 u32 hvalue; 836 828 int rc = 0; ··· 886 880 case AVC_CALLBACK_GRANT: 887 881 node->ae.avd.allowed |= perms; 888 882 if (node->ae.xp_node && (flags & AVC_EXTENDED_PERMS)) 889 - avc_xperms_allow_perm(node->ae.xp_node, driver, xperm); 883 + avc_xperms_allow_perm(node->ae.xp_node, driver, base_perm, xperm); 890 884 break; 891 885 case AVC_CALLBACK_TRY_REVOKE: 892 886 case AVC_CALLBACK_REVOKE: ··· 993 987 avc_insert(ssid, tsid, tclass, avd, xp_node); 994 988 } 995 989 996 - static noinline int avc_denied(u32 ssid, u32 tsid, 997 - u16 tclass, u32 requested, 998 - u8 driver, u8 xperm, unsigned int flags, 999 - struct av_decision *avd) 990 + static noinline int avc_denied(u32 ssid, u32 tsid, u16 tclass, u32 requested, 991 + u8 driver, u8 base_perm, u8 xperm, 992 + unsigned int flags, struct av_decision *avd) 1000 993 { 1001 994 if (flags & AVC_STRICT) 1002 995 return -EACCES; ··· 1004 999 !(avd->flags & AVD_FLAGS_PERMISSIVE)) 1005 1000 return -EACCES; 1006 1001 1007 - avc_update_node(AVC_CALLBACK_GRANT, requested, driver, 1002 + avc_update_node(AVC_CALLBACK_GRANT, requested, driver, base_perm, 1008 1003 xperm, ssid, tsid, tclass, avd->seqno, NULL, flags); 1009 1004 return 0; 1010 1005 } ··· 1017 1012 * driver field is used to specify which set contains the permission. 1018 1013 */ 1019 1014 int avc_has_extended_perms(u32 ssid, u32 tsid, u16 tclass, u32 requested, 1020 - u8 driver, u8 xperm, struct common_audit_data *ad) 1015 + u8 driver, u8 base_perm, u8 xperm, 1016 + struct common_audit_data *ad) 1021 1017 { 1022 1018 struct avc_node *node; 1023 1019 struct av_decision avd; ··· 1053 1047 local_xpd.auditallow = &auditallow; 1054 1048 local_xpd.dontaudit = &dontaudit; 1055 1049 1056 - xpd = avc_xperms_decision_lookup(driver, xp_node); 1050 + xpd = avc_xperms_decision_lookup(driver, base_perm, xp_node); 1057 1051 if (unlikely(!xpd)) { 1058 1052 /* 1059 1053 * Compute the extended_perms_decision only if the driver 1060 - * is flagged 1054 + * is flagged and the base permission is known. 1061 1055 */ 1062 - if (!security_xperm_test(xp_node->xp.drivers.p, driver)) { 1056 + if (!security_xperm_test(xp_node->xp.drivers.p, driver) || 1057 + !(xp_node->xp.base_perms & base_perm)) { 1063 1058 avd.allowed &= ~requested; 1064 1059 goto decision; 1065 1060 } 1066 1061 rcu_read_unlock(); 1067 - security_compute_xperms_decision(ssid, tsid, tclass, 1068 - driver, &local_xpd); 1062 + security_compute_xperms_decision(ssid, tsid, tclass, driver, 1063 + base_perm, &local_xpd); 1069 1064 rcu_read_lock(); 1070 - avc_update_node(AVC_CALLBACK_ADD_XPERMS, requested, 1071 - driver, xperm, ssid, tsid, tclass, avd.seqno, 1065 + avc_update_node(AVC_CALLBACK_ADD_XPERMS, requested, driver, 1066 + base_perm, xperm, ssid, tsid, tclass, avd.seqno, 1072 1067 &local_xpd, 0); 1073 1068 } else { 1074 1069 avc_quick_copy_xperms_decision(xperm, &local_xpd, xpd); ··· 1082 1075 decision: 1083 1076 denied = requested & ~(avd.allowed); 1084 1077 if (unlikely(denied)) 1085 - rc = avc_denied(ssid, tsid, tclass, requested, 1086 - driver, xperm, AVC_EXTENDED_PERMS, &avd); 1078 + rc = avc_denied(ssid, tsid, tclass, requested, driver, 1079 + base_perm, xperm, AVC_EXTENDED_PERMS, &avd); 1087 1080 1088 1081 rcu_read_unlock(); 1089 1082 ··· 1117 1110 avc_compute_av(ssid, tsid, tclass, avd, &xp_node); 1118 1111 denied = requested & ~(avd->allowed); 1119 1112 if (unlikely(denied)) 1120 - return avc_denied(ssid, tsid, tclass, requested, 0, 0, 1113 + return avc_denied(ssid, tsid, tclass, requested, 0, 0, 0, 1121 1114 flags, avd); 1122 1115 return 0; 1123 1116 } ··· 1165 1158 rcu_read_unlock(); 1166 1159 1167 1160 if (unlikely(denied)) 1168 - return avc_denied(ssid, tsid, tclass, requested, 0, 0, 1161 + return avc_denied(ssid, tsid, tclass, requested, 0, 0, 0, 1169 1162 flags, avd); 1170 1163 return 0; 1171 1164 }
+3 -3
security/selinux/hooks.c
··· 3688 3688 return 0; 3689 3689 3690 3690 isec = inode_security(inode); 3691 - rc = avc_has_extended_perms(ssid, isec->sid, isec->sclass, 3692 - requested, driver, xperm, &ad); 3691 + rc = avc_has_extended_perms(ssid, isec->sid, isec->sclass, requested, 3692 + driver, AVC_EXT_IOCTL, xperm, &ad); 3693 3693 out: 3694 3694 return rc; 3695 3695 } ··· 5952 5952 xperm = nlmsg_type & 0xff; 5953 5953 5954 5954 return avc_has_extended_perms(current_sid(), sksec->sid, sksec->sclass, 5955 - perms, driver, xperm, &ad); 5955 + perms, driver, AVC_EXT_NLMSG, xperm, &ad); 5956 5956 } 5957 5957 5958 5958 static int selinux_netlink_send(struct sock *sk, struct sk_buff *skb)
+4 -1
security/selinux/include/avc.h
··· 136 136 int avc_has_perm(u32 ssid, u32 tsid, u16 tclass, u32 requested, 137 137 struct common_audit_data *auditdata); 138 138 139 + #define AVC_EXT_IOCTL (1 << 0) /* Cache entry for an ioctl extended permission */ 140 + #define AVC_EXT_NLMSG (1 << 1) /* Cache entry for an nlmsg extended permission */ 139 141 int avc_has_extended_perms(u32 ssid, u32 tsid, u16 tclass, u32 requested, 140 - u8 driver, u8 perm, struct common_audit_data *ad); 142 + u8 driver, u8 base_perm, u8 perm, 143 + struct common_audit_data *ad); 141 144 142 145 u32 avc_policy_seqno(void); 143 146
+3
security/selinux/include/security.h
··· 239 239 struct extended_perms_decision { 240 240 u8 used; 241 241 u8 driver; 242 + u8 base_perm; 242 243 struct extended_perms_data *allowed; 243 244 struct extended_perms_data *auditallow; 244 245 struct extended_perms_data *dontaudit; ··· 247 246 248 247 struct extended_perms { 249 248 u16 len; /* length associated decision chain */ 249 + u8 base_perms; /* which base permissions are covered */ 250 250 struct extended_perms_data drivers; /* flag drivers that are used */ 251 251 }; 252 252 ··· 259 257 struct extended_perms *xperms); 260 258 261 259 void security_compute_xperms_decision(u32 ssid, u32 tsid, u16 tclass, u8 driver, 260 + u8 base_perm, 262 261 struct extended_perms_decision *xpermd); 263 262 264 263 void security_compute_av_user(u32 ssid, u32 tsid, u16 tclass,
+21 -7
security/selinux/ss/services.c
··· 582 582 } 583 583 584 584 /* 585 - * Flag which drivers have permissions. 585 + * Flag which drivers have permissions and which base permissions are covered. 586 586 */ 587 587 void services_compute_xperms_drivers( 588 588 struct extended_perms *xperms, ··· 592 592 593 593 switch (node->datum.u.xperms->specified) { 594 594 case AVTAB_XPERMS_IOCTLDRIVER: 595 + xperms->base_perms |= AVC_EXT_IOCTL; 595 596 /* if one or more driver has all permissions allowed */ 596 597 for (i = 0; i < ARRAY_SIZE(xperms->drivers.p); i++) 597 598 xperms->drivers.p[i] |= node->datum.u.xperms->perms.p[i]; 598 599 break; 599 600 case AVTAB_XPERMS_IOCTLFUNCTION: 601 + xperms->base_perms |= AVC_EXT_IOCTL; 602 + /* if allowing permissions within a driver */ 603 + security_xperm_set(xperms->drivers.p, 604 + node->datum.u.xperms->driver); 605 + break; 600 606 case AVTAB_XPERMS_NLMSG: 607 + xperms->base_perms |= AVC_EXT_NLMSG; 601 608 /* if allowing permissions within a driver */ 602 609 security_xperm_set(xperms->drivers.p, 603 610 node->datum.u.xperms->driver); ··· 638 631 avd->auditallow = 0; 639 632 avd->auditdeny = 0xffffffff; 640 633 if (xperms) { 641 - memset(&xperms->drivers, 0, sizeof(xperms->drivers)); 642 - xperms->len = 0; 634 + memset(xperms, 0, sizeof(*xperms)); 643 635 } 644 636 645 637 if (unlikely(!tclass || tclass > policydb->p_classes.nprim)) { ··· 975 969 { 976 970 switch (node->datum.u.xperms->specified) { 977 971 case AVTAB_XPERMS_IOCTLFUNCTION: 978 - case AVTAB_XPERMS_NLMSG: 979 - if (xpermd->driver != node->datum.u.xperms->driver) 972 + if (xpermd->base_perm != AVC_EXT_IOCTL || 973 + xpermd->driver != node->datum.u.xperms->driver) 980 974 return; 981 975 break; 982 976 case AVTAB_XPERMS_IOCTLDRIVER: 983 - if (!security_xperm_test(node->datum.u.xperms->perms.p, 984 - xpermd->driver)) 977 + if (xpermd->base_perm != AVC_EXT_IOCTL || 978 + !security_xperm_test(node->datum.u.xperms->perms.p, 979 + xpermd->driver)) 980 + return; 981 + break; 982 + case AVTAB_XPERMS_NLMSG: 
983 + if (xpermd->base_perm != AVC_EXT_NLMSG || 984 + xpermd->driver != node->datum.u.xperms->driver) 985 985 return; 986 986 break; 987 987 default: ··· 1022 1010 u32 tsid, 1023 1011 u16 orig_tclass, 1024 1012 u8 driver, 1013 + u8 base_perm, 1025 1014 struct extended_perms_decision *xpermd) 1026 1015 { 1027 1016 struct selinux_policy *policy; ··· 1036 1023 struct ebitmap_node *snode, *tnode; 1037 1024 unsigned int i, j; 1038 1025 1026 + xpermd->base_perm = base_perm; 1039 1027 xpermd->driver = driver; 1040 1028 xpermd->used = 0; 1041 1029 memset(xpermd->allowed->p, 0, sizeof(xpermd->allowed->p));
+5 -2
sound/pci/hda/patch_realtek.c
··· 10641 10641 SND_PCI_QUIRK(0x1043, 0x1e1f, "ASUS Vivobook 15 X1504VAP", ALC2XX_FIXUP_HEADSET_MIC), 10642 10642 SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS), 10643 10643 SND_PCI_QUIRK(0x1043, 0x1e5e, "ASUS ROG Strix G513", ALC294_FIXUP_ASUS_G513_PINS), 10644 + SND_PCI_QUIRK(0x1043, 0x1e63, "ASUS H7606W", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1), 10645 + SND_PCI_QUIRK(0x1043, 0x1e83, "ASUS GA605W", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1), 10644 10646 SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401), 10645 10647 SND_PCI_QUIRK(0x1043, 0x1eb3, "ASUS Ally RCLA72", ALC287_FIXUP_TAS2781_I2C), 10646 10648 SND_PCI_QUIRK(0x1043, 0x1ed3, "ASUS HN7306W", ALC287_FIXUP_CS35L41_I2C_2), ··· 10932 10930 SND_PCI_QUIRK(0x17aa, 0x38e0, "Yoga Y990 Intel VECO Dual", ALC287_FIXUP_TAS2781_I2C), 10933 10931 SND_PCI_QUIRK(0x17aa, 0x38f8, "Yoga Book 9i", ALC287_FIXUP_TAS2781_I2C), 10934 10932 SND_PCI_QUIRK(0x17aa, 0x38df, "Y990 YG DUAL", ALC287_FIXUP_TAS2781_I2C), 10935 - SND_PCI_QUIRK(0x17aa, 0x38f9, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2), 10936 - SND_PCI_QUIRK(0x17aa, 0x38fa, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2), 10933 + SND_PCI_QUIRK(0x17aa, 0x38f9, "Thinkbook 16P Gen5", ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD), 10934 + SND_PCI_QUIRK(0x17aa, 0x38fa, "Thinkbook 16P Gen5", ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD), 10937 10935 SND_PCI_QUIRK(0x17aa, 0x38fd, "ThinkBook plus Gen5 Hybrid", ALC287_FIXUP_TAS2781_I2C), 10938 10936 SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), 10939 10937 SND_PCI_QUIRK(0x17aa, 0x3913, "Lenovo 145", ALC236_FIXUP_LENOVO_INV_DMIC), ··· 10997 10995 SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC), 10998 10996 SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC), 10999 10997 SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC), 10998 + 
SND_PCI_QUIRK(0x1f66, 0x0105, "Ayaneo Portable Game Player", ALC287_FIXUP_CS35L41_I2C_2), 11000 10999 SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 11001 11000 SND_PCI_QUIRK(0x2782, 0x0228, "Infinix ZERO BOOK 13", ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13), 11002 11001 SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO),
+1
sound/soc/codecs/Kconfig
··· 2451 2451 2452 2452 config SND_SOC_WM8994 2453 2453 tristate 2454 + depends on MFD_WM8994 2454 2455 2455 2456 config SND_SOC_WM8995 2456 2457 tristate
+1
sound/soc/codecs/cs42l43.c
··· 2404 2404 2405 2405 static const struct dev_pm_ops cs42l43_codec_pm_ops = { 2406 2406 RUNTIME_PM_OPS(NULL, cs42l43_codec_runtime_resume, NULL) 2407 + SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume) 2407 2408 }; 2408 2409 2409 2410 static const struct platform_device_id cs42l43_codec_id_table[] = {
+9 -1
sound/soc/codecs/es8316.c
··· 39 39 struct snd_soc_jack *jack; 40 40 int irq; 41 41 unsigned int sysclk; 42 - unsigned int allowed_rates[ARRAY_SIZE(supported_mclk_lrck_ratios)]; 42 + /* ES83xx supports halving the MCLK so it supports twice as many rates 43 + */ 44 + unsigned int allowed_rates[ARRAY_SIZE(supported_mclk_lrck_ratios) * 2]; 43 45 struct snd_pcm_hw_constraint_list sysclk_constraints; 44 46 bool jd_inverted; 45 47 }; ··· 388 386 389 387 if (freq % ratio == 0) 390 388 es8316->allowed_rates[count++] = freq / ratio; 389 + 390 + /* We also check if the halved MCLK produces a valid rate 391 + * since the codec supports halving the MCLK. 392 + */ 393 + if ((freq / ratio) % 2 == 0) 394 + es8316->allowed_rates[count++] = freq / ratio / 2; 391 395 } 392 396 393 397 if (count) {
+19 -4
sound/soc/codecs/es8326.c
··· 616 616 0x0F, 0x0F); 617 617 if (es8326->version > ES8326_VERSION_B) { 618 618 regmap_update_bits(es8326->regmap, ES8326_VMIDSEL, 0x40, 0x40); 619 - regmap_update_bits(es8326->regmap, ES8326_ANA_MICBIAS, 0x70, 0x10); 619 + regmap_update_bits(es8326->regmap, ES8326_ANA_MICBIAS, 0x70, 0x30); 620 620 } 621 621 } 622 622 } else { ··· 631 631 regmap_write(es8326->regmap, ES8326_HPR_OFFSET_INI, offset_r); 632 632 es8326->calibrated = true; 633 633 } 634 + regmap_update_bits(es8326->regmap, ES8326_CLK_INV, 0xc0, 0x00); 635 + regmap_update_bits(es8326->regmap, ES8326_CLK_MUX, 0x80, 0x00); 634 636 if (direction == SNDRV_PCM_STREAM_PLAYBACK) { 635 637 regmap_update_bits(es8326->regmap, ES8326_DAC_DSM, 0x01, 0x01); 636 638 usleep_range(1000, 5000); ··· 647 645 } else { 648 646 msleep(300); 649 647 if (es8326->version > ES8326_VERSION_B) { 650 - regmap_update_bits(es8326->regmap, ES8326_ANA_MICBIAS, 0x70, 0x50); 648 + regmap_update_bits(es8326->regmap, ES8326_ANA_MICBIAS, 0x70, 0x70); 651 649 regmap_update_bits(es8326->regmap, ES8326_VMIDSEL, 0x40, 0x00); 652 650 } 653 651 regmap_update_bits(es8326->regmap, ES8326_ADC_MUTE, ··· 678 676 regmap_write(es8326->regmap, ES8326_ANA_PDN, 0x00); 679 677 regmap_update_bits(es8326->regmap, ES8326_CLK_CTL, 0x20, 0x20); 680 678 regmap_update_bits(es8326->regmap, ES8326_RESET, 0x02, 0x00); 679 + if (es8326->version > ES8326_VERSION_B) { 680 + regmap_update_bits(es8326->regmap, ES8326_VMIDSEL, 0x40, 0x40); 681 + regmap_update_bits(es8326->regmap, ES8326_ANA_MICBIAS, 0x70, 0x30); 682 + } 681 683 break; 682 684 case SND_SOC_BIAS_PREPARE: 683 685 break; ··· 689 683 regmap_write(es8326->regmap, ES8326_ANA_PDN, 0x3b); 690 684 regmap_update_bits(es8326->regmap, ES8326_CLK_CTL, 0x20, 0x00); 691 685 regmap_write(es8326->regmap, ES8326_SDINOUT1_IO, ES8326_IO_INPUT); 686 + if (es8326->version > ES8326_VERSION_B) { 687 + regmap_update_bits(es8326->regmap, ES8326_VMIDSEL, 0x40, 0x40); 688 + regmap_update_bits(es8326->regmap, ES8326_ANA_MICBIAS, 
0x70, 0x10); 689 + } 690 + regmap_update_bits(es8326->regmap, ES8326_CLK_INV, 0xc0, 0xc0); 691 + regmap_update_bits(es8326->regmap, ES8326_CLK_MUX, 0x80, 0x80); 692 692 break; 693 693 case SND_SOC_BIAS_OFF: 694 694 clk_disable_unprepare(es8326->mclk); ··· 785 773 case 0x6f: 786 774 case 0x4b: 787 775 /* button volume up */ 788 - cur_button = SND_JACK_BTN_1; 776 + if ((iface == 0x6f) && (es8326->version > ES8326_VERSION_B)) 777 + cur_button = SND_JACK_BTN_0; 778 + else 779 + cur_button = SND_JACK_BTN_1; 789 780 break; 790 781 case 0x27: 791 782 /* button volume down */ ··· 1097 1082 regmap_write(es8326->regmap, ES8326_ADC2_SRC, 0x66); 1098 1083 es8326_disable_micbias(es8326->component); 1099 1084 if (es8326->version > ES8326_VERSION_B) { 1100 - regmap_update_bits(es8326->regmap, ES8326_ANA_MICBIAS, 0x73, 0x13); 1085 + regmap_update_bits(es8326->regmap, ES8326_ANA_MICBIAS, 0x73, 0x10); 1101 1086 regmap_update_bits(es8326->regmap, ES8326_VMIDSEL, 0x40, 0x40); 1102 1087 } 1103 1088
+1 -1
sound/soc/codecs/tas2781-i2c.c
··· 78 78 X2781_CL_STT_VAL(TAS2781_PRM_INT_MASK_REG, 0xfe, false), 79 79 X2781_CL_STT_VAL(TAS2781_PRM_CLK_CFG_REG, 0xdd, false), 80 80 X2781_CL_STT_VAL(TAS2781_PRM_RSVD_REG, 0x20, false), 81 - X2781_CL_STT_VAL(TAS2781_PRM_TEST_57_REG, 0x14, false), 81 + X2781_CL_STT_VAL(TAS2781_PRM_TEST_57_REG, 0x14, true), 82 82 X2781_CL_STT_VAL(TAS2781_PRM_TEST_62_REG, 0x45, true), 83 83 X2781_CL_STT_VAL(TAS2781_PRM_PVDD_UVLO_REG, 0x03, false), 84 84 X2781_CL_STT_VAL(TAS2781_PRM_CHNL_0_REG, 0xa8, false),
+23 -5
sound/soc/renesas/rcar/adg.c
··· 374 374 return 0; 375 375 } 376 376 377 - void rsnd_adg_clk_control(struct rsnd_priv *priv, int enable) 377 + int rsnd_adg_clk_control(struct rsnd_priv *priv, int enable) 378 378 { 379 379 struct rsnd_adg *adg = rsnd_priv_to_adg(priv); 380 380 struct rsnd_mod *adg_mod = rsnd_mod_get(adg); 381 381 struct clk *clk; 382 - int i; 382 + int ret = 0, i; 383 383 384 384 if (enable) { 385 385 rsnd_mod_bset(adg_mod, BRGCKR, 0x80770000, adg->ckr); ··· 389 389 390 390 for_each_rsnd_clkin(clk, adg, i) { 391 391 if (enable) { 392 - clk_prepare_enable(clk); 392 + ret = clk_prepare_enable(clk); 393 393 394 394 /* 395 395 * We shouldn't use clk_get_rate() under 396 396 * atomic context. Let's keep it when 397 397 * rsnd_adg_clk_enable() was called 398 398 */ 399 + if (ret < 0) 400 + break; 401 + 399 402 adg->clkin_rate[i] = clk_get_rate(clk); 400 403 } else { 401 - clk_disable_unprepare(clk); 404 + if (adg->clkin_rate[i]) 405 + clk_disable_unprepare(clk); 406 + 407 + adg->clkin_rate[i] = 0; 402 408 } 403 409 } 410 + 411 + /* 412 + * rsnd_adg_clk_enable() might return error (_disable() will not). 413 + * We need to rollback in such case 414 + */ 415 + if (ret < 0) 416 + rsnd_adg_clk_disable(priv); 417 + 418 + return ret; 404 419 } 405 420 406 421 static struct clk *rsnd_adg_create_null_clk(struct rsnd_priv *priv, ··· 768 753 if (ret) 769 754 return ret; 770 755 771 - rsnd_adg_clk_enable(priv); 756 + ret = rsnd_adg_clk_enable(priv); 757 + if (ret) 758 + return ret; 759 + 772 760 rsnd_adg_clk_dbg_info(priv, NULL); 773 761 774 762 return 0;
+1 -3
sound/soc/renesas/rcar/core.c
··· 2086 2086 { 2087 2087 struct rsnd_priv *priv = dev_get_drvdata(dev); 2088 2088 2089 - rsnd_adg_clk_enable(priv); 2090 - 2091 - return 0; 2089 + return rsnd_adg_clk_enable(priv); 2092 2090 } 2093 2091 2094 2092 static const struct dev_pm_ops rsnd_pm_ops = {
+1 -1
sound/soc/renesas/rcar/rsnd.h
··· 608 608 struct rsnd_dai_stream *io); 609 609 #define rsnd_adg_clk_enable(priv) rsnd_adg_clk_control(priv, 1) 610 610 #define rsnd_adg_clk_disable(priv) rsnd_adg_clk_control(priv, 0) 611 - void rsnd_adg_clk_control(struct rsnd_priv *priv, int enable); 611 + int rsnd_adg_clk_control(struct rsnd_priv *priv, int enable); 612 612 void rsnd_adg_clk_dbg_info(struct rsnd_priv *priv, struct seq_file *m); 613 613 614 614 /*
+4 -2
sound/soc/samsung/Kconfig
··· 127 127 128 128 config SND_SOC_SAMSUNG_ARIES_WM8994 129 129 tristate "SoC I2S Audio support for WM8994 on Aries" 130 - depends on SND_SOC_SAMSUNG && MFD_WM8994 && IIO && EXTCON 130 + depends on SND_SOC_SAMSUNG && I2C && IIO && EXTCON 131 131 select SND_SOC_BT_SCO 132 + select MFD_WM8994 132 133 select SND_SOC_WM8994 133 134 select SND_SAMSUNG_I2S 134 135 help ··· 141 140 142 141 config SND_SOC_SAMSUNG_MIDAS_WM1811 143 142 tristate "SoC I2S Audio support for Midas boards" 144 - depends on SND_SOC_SAMSUNG && IIO 143 + depends on SND_SOC_SAMSUNG && I2C && IIO 145 144 select SND_SAMSUNG_I2S 145 + select MFD_WM8994 146 146 select SND_SOC_WM8994 147 147 help 148 148 Say Y if you want to add support for SoC audio on the Midas boards.
+13 -3
tools/net/ynl/ynl-gen-c.py
··· 2384 2384 if not kernel_can_gen_family_struct(family): 2385 2385 return 2386 2386 2387 + if 'sock-priv' in family.kernel_family: 2388 + # Generate "trampolines" to make CFI happy 2389 + cw.write_func("static void", f"__{family.c_name}_nl_sock_priv_init", 2390 + [f"{family.c_name}_nl_sock_priv_init(priv);"], 2391 + ["void *priv"]) 2392 + cw.nl() 2393 + cw.write_func("static void", f"__{family.c_name}_nl_sock_priv_destroy", 2394 + [f"{family.c_name}_nl_sock_priv_destroy(priv);"], 2395 + ["void *priv"]) 2396 + cw.nl() 2397 + 2387 2398 cw.block_start(f"struct genl_family {family.ident_name}_nl_family __ro_after_init =") 2388 2399 cw.p('.name\t\t= ' + family.fam_key + ',') 2389 2400 cw.p('.version\t= ' + family.ver_key + ',') ··· 2412 2401 cw.p(f'.n_mcgrps\t= ARRAY_SIZE({family.c_name}_nl_mcgrps),') 2413 2402 if 'sock-priv' in family.kernel_family: 2414 2403 cw.p(f'.sock_priv_size\t= sizeof({family.kernel_family["sock-priv"]}),') 2415 - # Force cast here, actual helpers take pointer to the real type. 2416 - cw.p(f'.sock_priv_init\t= (void *){family.c_name}_nl_sock_priv_init,') 2417 - cw.p(f'.sock_priv_destroy = (void *){family.c_name}_nl_sock_priv_destroy,') 2404 + cw.p(f'.sock_priv_init\t= __{family.c_name}_nl_sock_priv_init,') 2405 + cw.p(f'.sock_priv_destroy = __{family.c_name}_nl_sock_priv_destroy,') 2418 2406 cw.block_end(';') 2419 2407 2420 2408
+19 -14
tools/testing/selftests/cgroup/test_cpuset_prs.sh
··· 86 86 87 87 # 88 88 # If isolated CPUs have been reserved at boot time (as shown in 89 - # cpuset.cpus.isolated), these isolated CPUs should be outside of CPUs 0-7 89 + # cpuset.cpus.isolated), these isolated CPUs should be outside of CPUs 0-8 90 90 # that will be used by this script for testing purpose. If not, some of 91 - # the tests may fail incorrectly. These isolated CPUs will also be removed 92 - # before being compared with the expected results. 91 + # the tests may fail incorrectly. These pre-isolated CPUs should stay in 92 + # an isolated state throughout the testing process for now. 93 93 # 94 94 BOOT_ISOLCPUS=$(cat $CGROUP2/cpuset.cpus.isolated) 95 95 if [[ -n "$BOOT_ISOLCPUS" ]] 96 96 then 97 - [[ $(echo $BOOT_ISOLCPUS | sed -e "s/[,-].*//") -le 7 ]] && 97 + [[ $(echo $BOOT_ISOLCPUS | sed -e "s/[,-].*//") -le 8 ]] && 98 98 skip_test "Pre-isolated CPUs ($BOOT_ISOLCPUS) overlap CPUs to be tested" 99 99 echo "Pre-isolated CPUs: $BOOT_ISOLCPUS" 100 100 fi ··· 684 684 fi 685 685 686 686 # 687 + # Appending pre-isolated CPUs 688 + # Even though CPU #8 isn't used for testing, it can't be pre-isolated 689 + # to make appending those CPUs easier. 
690 + # 691 + [[ -n "$BOOT_ISOLCPUS" ]] && { 692 + EXPECT_VAL=${EXPECT_VAL:+${EXPECT_VAL},}${BOOT_ISOLCPUS} 693 + EXPECT_VAL2=${EXPECT_VAL2:+${EXPECT_VAL2},}${BOOT_ISOLCPUS} 694 + } 695 + 696 + # 687 697 # Check cpuset.cpus.isolated cpumask 688 698 # 689 - if [[ -z "$BOOT_ISOLCPUS" ]] 690 - then 691 - ISOLCPUS=$(cat $ISCPUS) 692 - else 693 - ISOLCPUS=$(cat $ISCPUS | sed -e "s/,*$BOOT_ISOLCPUS//") 694 - fi 695 699 [[ "$EXPECT_VAL2" != "$ISOLCPUS" ]] && { 696 700 # Take a 50ms pause and try again 697 701 pause 0.05 ··· 735 731 fi 736 732 done 737 733 [[ "$ISOLCPUS" = *- ]] && ISOLCPUS=${ISOLCPUS}$LASTISOLCPU 738 - [[ -n "BOOT_ISOLCPUS" ]] && 739 - ISOLCPUS=$(echo $ISOLCPUS | sed -e "s/,*$BOOT_ISOLCPUS//") 740 734 741 735 [[ "$EXPECT_VAL" = "$ISOLCPUS" ]] 742 736 } ··· 838 836 # if available 839 837 [[ -n "$ICPUS" ]] && { 840 838 check_isolcpus $ICPUS 841 - [[ $? -ne 0 ]] && test_fail $I "isolated CPU" \ 842 - "Expect $ICPUS, get $ISOLCPUS instead" 839 + [[ $? -ne 0 ]] && { 840 + [[ -n "$BOOT_ISOLCPUS" ]] && ICPUS=${ICPUS},${BOOT_ISOLCPUS} 841 + test_fail $I "isolated CPU" \ 842 + "Expect $ICPUS, get $ISOLCPUS instead" 843 + } 843 844 } 844 845 reset_cgroup_states 845 846 #
+6 -3
tools/testing/selftests/drivers/net/netdevsim/tc-mq-visibility.sh
··· 58 58 ethtool -L $NDEV combined 4 59 59 n_child_assert 4 "One real queue, rest default" 60 60 61 - # Graft some 62 - tcq replace parent 100:1 handle 204: 63 - n_child_assert 3 "Grafted" 61 + # Remove real one 62 + tcq del parent 100:4 handle 204: 63 + 64 + # Replace default with pfifo 65 + tcq replace parent 100:1 handle 205: pfifo limit 1000 66 + n_child_assert 3 "Deleting real one, replacing default one with pfifo" 64 67 65 68 ethtool -L $NDEV combined 1 66 69 n_child_assert 1 "Grafted, one"
-1
tools/testing/selftests/kvm/aarch64/set_id_regs.c
··· 152 152 REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, BIGENDEL0, 0), 153 153 REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, SNSMEM, 0), 154 154 REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, BIGEND, 0), 155 - REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, ASIDBITS, 0), 156 155 REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, PARANGE, 0), 157 156 REG_FTR_END, 158 157 };
+171 -1
tools/testing/selftests/kvm/s390x/ucontrol_test.c
··· 210 210 struct kvm_device_attr attr = { 211 211 .group = KVM_S390_VM_MEM_CTRL, 212 212 .attr = KVM_S390_VM_MEM_LIMIT_SIZE, 213 - .addr = (unsigned long)&limit, 213 + .addr = (u64)&limit, 214 214 }; 215 215 int rc; 216 + 217 + rc = ioctl(self->vm_fd, KVM_HAS_DEVICE_ATTR, &attr); 218 + EXPECT_EQ(0, rc); 216 219 217 220 rc = ioctl(self->vm_fd, KVM_GET_DEVICE_ATTR, &attr); 218 221 EXPECT_EQ(0, rc); ··· 636 633 ASSERT_EQ(skeyvalue & 0xfa, sync_regs->gprs[1]); 637 634 ASSERT_EQ(0, sync_regs->gprs[1] & 0x04); 638 635 uc_assert_diag44(self); 636 + } 637 + 638 + static char uc_flic_b[PAGE_SIZE]; 639 + static struct kvm_s390_io_adapter uc_flic_ioa = { .id = 0 }; 640 + static struct kvm_s390_io_adapter_req uc_flic_ioam = { .id = 0 }; 641 + static struct kvm_s390_ais_req uc_flic_asim = { .isc = 0 }; 642 + static struct kvm_s390_ais_all uc_flic_asima = { .simm = 0 }; 643 + static struct uc_flic_attr_test { 644 + char *name; 645 + struct kvm_device_attr a; 646 + int hasrc; 647 + int geterrno; 648 + int seterrno; 649 + } uc_flic_attr_tests[] = { 650 + { 651 + .name = "KVM_DEV_FLIC_GET_ALL_IRQS", 652 + .seterrno = EINVAL, 653 + .a = { 654 + .group = KVM_DEV_FLIC_GET_ALL_IRQS, 655 + .addr = (u64)&uc_flic_b, 656 + .attr = PAGE_SIZE, 657 + }, 658 + }, 659 + { 660 + .name = "KVM_DEV_FLIC_ENQUEUE", 661 + .geterrno = EINVAL, 662 + .a = { .group = KVM_DEV_FLIC_ENQUEUE, }, 663 + }, 664 + { 665 + .name = "KVM_DEV_FLIC_CLEAR_IRQS", 666 + .geterrno = EINVAL, 667 + .a = { .group = KVM_DEV_FLIC_CLEAR_IRQS, }, 668 + }, 669 + { 670 + .name = "KVM_DEV_FLIC_ADAPTER_REGISTER", 671 + .geterrno = EINVAL, 672 + .a = { 673 + .group = KVM_DEV_FLIC_ADAPTER_REGISTER, 674 + .addr = (u64)&uc_flic_ioa, 675 + }, 676 + }, 677 + { 678 + .name = "KVM_DEV_FLIC_ADAPTER_MODIFY", 679 + .geterrno = EINVAL, 680 + .seterrno = EINVAL, 681 + .a = { 682 + .group = KVM_DEV_FLIC_ADAPTER_MODIFY, 683 + .addr = (u64)&uc_flic_ioam, 684 + .attr = sizeof(uc_flic_ioam), 685 + }, 686 + }, 687 + { 688 + .name = 
"KVM_DEV_FLIC_CLEAR_IO_IRQ", 689 + .geterrno = EINVAL, 690 + .seterrno = EINVAL, 691 + .a = { 692 + .group = KVM_DEV_FLIC_CLEAR_IO_IRQ, 693 + .attr = 32, 694 + }, 695 + }, 696 + { 697 + .name = "KVM_DEV_FLIC_AISM", 698 + .geterrno = EINVAL, 699 + .seterrno = ENOTSUP, 700 + .a = { 701 + .group = KVM_DEV_FLIC_AISM, 702 + .addr = (u64)&uc_flic_asim, 703 + }, 704 + }, 705 + { 706 + .name = "KVM_DEV_FLIC_AIRQ_INJECT", 707 + .geterrno = EINVAL, 708 + .a = { .group = KVM_DEV_FLIC_AIRQ_INJECT, }, 709 + }, 710 + { 711 + .name = "KVM_DEV_FLIC_AISM_ALL", 712 + .geterrno = ENOTSUP, 713 + .seterrno = ENOTSUP, 714 + .a = { 715 + .group = KVM_DEV_FLIC_AISM_ALL, 716 + .addr = (u64)&uc_flic_asima, 717 + .attr = sizeof(uc_flic_asima), 718 + }, 719 + }, 720 + { 721 + .name = "KVM_DEV_FLIC_APF_ENABLE", 722 + .geterrno = EINVAL, 723 + .seterrno = EINVAL, 724 + .a = { .group = KVM_DEV_FLIC_APF_ENABLE, }, 725 + }, 726 + { 727 + .name = "KVM_DEV_FLIC_APF_DISABLE_WAIT", 728 + .geterrno = EINVAL, 729 + .seterrno = EINVAL, 730 + .a = { .group = KVM_DEV_FLIC_APF_DISABLE_WAIT, }, 731 + }, 732 + }; 733 + 734 + TEST_F(uc_kvm, uc_flic_attrs) 735 + { 736 + struct kvm_create_device cd = { .type = KVM_DEV_TYPE_FLIC }; 737 + struct kvm_device_attr attr; 738 + u64 value; 739 + int rc, i; 740 + 741 + rc = ioctl(self->vm_fd, KVM_CREATE_DEVICE, &cd); 742 + ASSERT_EQ(0, rc) TH_LOG("create device failed with err %s (%i)", 743 + strerror(errno), errno); 744 + 745 + for (i = 0; i < ARRAY_SIZE(uc_flic_attr_tests); i++) { 746 + TH_LOG("test %s", uc_flic_attr_tests[i].name); 747 + attr = (struct kvm_device_attr) { 748 + .group = uc_flic_attr_tests[i].a.group, 749 + .attr = uc_flic_attr_tests[i].a.attr, 750 + .addr = uc_flic_attr_tests[i].a.addr, 751 + }; 752 + if (attr.addr == 0) 753 + attr.addr = (u64)&value; 754 + 755 + rc = ioctl(cd.fd, KVM_HAS_DEVICE_ATTR, &attr); 756 + EXPECT_EQ(uc_flic_attr_tests[i].hasrc, !!rc) 757 + TH_LOG("expected dev attr missing %s", 758 + uc_flic_attr_tests[i].name); 759 + 760 + rc 
= ioctl(cd.fd, KVM_GET_DEVICE_ATTR, &attr); 761 + EXPECT_EQ(!!uc_flic_attr_tests[i].geterrno, !!rc) 762 + TH_LOG("get dev attr rc not expected on %s %s (%i)", 763 + uc_flic_attr_tests[i].name, 764 + strerror(errno), errno); 765 + if (uc_flic_attr_tests[i].geterrno) 766 + EXPECT_EQ(uc_flic_attr_tests[i].geterrno, errno) 767 + TH_LOG("get dev attr errno not expected on %s %s (%i)", 768 + uc_flic_attr_tests[i].name, 769 + strerror(errno), errno); 770 + 771 + rc = ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr); 772 + EXPECT_EQ(!!uc_flic_attr_tests[i].seterrno, !!rc) 773 + TH_LOG("set sev attr rc not expected on %s %s (%i)", 774 + uc_flic_attr_tests[i].name, 775 + strerror(errno), errno); 776 + if (uc_flic_attr_tests[i].seterrno) 777 + EXPECT_EQ(uc_flic_attr_tests[i].seterrno, errno) 778 + TH_LOG("set dev attr errno not expected on %s %s (%i)", 779 + uc_flic_attr_tests[i].name, 780 + strerror(errno), errno); 781 + } 782 + 783 + close(cd.fd); 784 + } 785 + 786 + TEST_F(uc_kvm, uc_set_gsi_routing) 787 + { 788 + struct kvm_irq_routing *routing = kvm_gsi_routing_create(); 789 + struct kvm_irq_routing_entry ue = { 790 + .type = KVM_IRQ_ROUTING_S390_ADAPTER, 791 + .gsi = 1, 792 + .u.adapter = (struct kvm_irq_routing_s390_adapter) { 793 + .ind_addr = 0, 794 + }, 795 + }; 796 + int rc; 797 + 798 + routing->entries[0] = ue; 799 + routing->nr = 1; 800 + rc = ioctl(self->vm_fd, KVM_SET_GSI_ROUTING, routing); 801 + ASSERT_EQ(-1, rc) TH_LOG("err %s (%i)", strerror(errno), errno); 802 + ASSERT_EQ(EINVAL, errno) TH_LOG("err %s (%i)", strerror(errno), errno); 639 803 } 640 804 641 805 TEST_HARNESS_MAIN
+4 -4
tools/testing/selftests/mm/cow.c
··· 758 758 } 759 759 760 760 /* Populate a base page. */ 761 - memset(mem, 0, pagesize); 761 + memset(mem, 1, pagesize); 762 762 763 763 if (swapout) { 764 764 madvise(mem, pagesize, MADV_PAGEOUT); ··· 824 824 * Try to populate a THP. Touch the first sub-page and test if 825 825 * we get the last sub-page populated automatically. 826 826 */ 827 - mem[0] = 0; 827 + mem[0] = 1; 828 828 if (!pagemap_is_populated(pagemap_fd, mem + thpsize - pagesize)) { 829 829 ksft_test_result_skip("Did not get a THP populated\n"); 830 830 goto munmap; 831 831 } 832 - memset(mem, 0, thpsize); 832 + memset(mem, 1, thpsize); 833 833 834 834 size = thpsize; 835 835 switch (thp_run) { ··· 1012 1012 } 1013 1013 1014 1014 /* Populate an huge page. */ 1015 - memset(mem, 0, hugetlbsize); 1015 + memset(mem, 1, hugetlbsize); 1016 1016 1017 1017 /* 1018 1018 * We need a total of two hugetlb pages to handle COW/unsharing
+32 -11
tools/testing/selftests/net/mptcp/mptcp_connect.c
··· 25 25 #include <sys/types.h> 26 26 #include <sys/mman.h> 27 27 28 + #include <arpa/inet.h> 29 + 28 30 #include <netdb.h> 29 31 #include <netinet/in.h> 30 32 ··· 1213 1211 exit(1); 1214 1212 } 1215 1213 1216 - void xdisconnect(int fd, int addrlen) 1214 + void xdisconnect(int fd) 1217 1215 { 1218 - struct sockaddr_storage empty; 1216 + socklen_t addrlen = sizeof(struct sockaddr_storage); 1217 + struct sockaddr_storage addr, empty; 1219 1218 int msec_sleep = 10; 1220 - int queued = 1; 1221 - int i; 1219 + void *raw_addr; 1220 + int i, cmdlen; 1221 + char cmd[128]; 1222 + 1223 + /* get the local address and convert it to string */ 1224 + if (getsockname(fd, (struct sockaddr *)&addr, &addrlen) < 0) 1225 + xerror("getsockname"); 1226 + 1227 + if (addr.ss_family == AF_INET) 1228 + raw_addr = &(((struct sockaddr_in *)&addr)->sin_addr); 1229 + else if (addr.ss_family == AF_INET6) 1230 + raw_addr = &(((struct sockaddr_in6 *)&addr)->sin6_addr); 1231 + else 1232 + xerror("bad family"); 1233 + 1234 + strcpy(cmd, "ss -M | grep -q "); 1235 + cmdlen = strlen(cmd); 1236 + if (!inet_ntop(addr.ss_family, raw_addr, &cmd[cmdlen], 1237 + sizeof(cmd) - cmdlen)) 1238 + xerror("inet_ntop"); 1222 1239 1223 1240 shutdown(fd, SHUT_WR); 1224 1241 1225 - /* while until the pending data is completely flushed, the later 1242 + /* 1243 + * wait until the pending data is completely flushed and all 1244 + * the MPTCP sockets reached the closed status. 1226 1245 * disconnect will bypass/ignore/drop any pending data. 
1227 1246 */ 1228 1247 for (i = 0; ; i += msec_sleep) { 1229 - if (ioctl(fd, SIOCOUTQ, &queued) < 0) 1230 - xerror("can't query out socket queue: %d", errno); 1231 - 1232 - if (!queued) 1248 + /* closed socket are not listed by 'ss' */ 1249 + if (system(cmd) != 0) 1233 1250 break; 1234 1251 1235 1252 if (i > poll_timeout) ··· 1302 1281 return ret; 1303 1282 1304 1283 if (cfg_truncate > 0) { 1305 - xdisconnect(fd, peer->ai_addrlen); 1284 + xdisconnect(fd); 1306 1285 } else if (--cfg_repeat > 0) { 1307 - xdisconnect(fd, peer->ai_addrlen); 1286 + xdisconnect(fd); 1308 1287 1309 1288 /* the socket could be unblocking at this point, we need the 1310 1289 * connect to be blocking
+22 -6
tools/testing/selftests/riscv/abi/pointer_masking.c
··· 185 185 } 186 186 } 187 187 188 + static bool pwrite_wrapper(int fd, void *buf, size_t count, const char *msg) 189 + { 190 + int ret = pwrite(fd, buf, count, 0); 191 + 192 + if (ret != count) { 193 + ksft_perror(msg); 194 + return false; 195 + } 196 + return true; 197 + } 198 + 188 199 static void test_tagged_addr_abi_sysctl(void) 189 200 { 201 + char *err_pwrite_msg = "failed to write to /proc/sys/abi/tagged_addr_disabled\n"; 190 202 char value; 191 203 int fd; 192 204 ··· 212 200 } 213 201 214 202 value = '1'; 215 - pwrite(fd, &value, 1, 0); 216 - ksft_test_result(set_tagged_addr_ctrl(min_pmlen, true) == -EINVAL, 217 - "sysctl disabled\n"); 203 + if (!pwrite_wrapper(fd, &value, 1, "write '1'")) 204 + ksft_test_result_fail(err_pwrite_msg); 205 + else 206 + ksft_test_result(set_tagged_addr_ctrl(min_pmlen, true) == -EINVAL, 207 + "sysctl disabled\n"); 218 208 219 209 value = '0'; 220 - pwrite(fd, &value, 1, 0); 221 - ksft_test_result(set_tagged_addr_ctrl(min_pmlen, true) == 0, 222 - "sysctl enabled\n"); 210 + if (!pwrite_wrapper(fd, &value, 1, "write '0'")) 211 + ksft_test_result_fail(err_pwrite_msg); 212 + else 213 + ksft_test_result(set_tagged_addr_ctrl(min_pmlen, true) == 0, 214 + "sysctl enabled\n"); 223 215 224 216 set_tagged_addr_ctrl(0, false); 225 217
+4
tools/testing/selftests/riscv/vector/v_initval_nolibc.c
··· 25 25 unsigned long vl; 26 26 char *datap, *tmp; 27 27 28 + ksft_set_plan(1); 29 + 28 30 datap = malloc(MAX_VSIZE); 29 31 if (!datap) { 30 32 ksft_test_result_fail("fail to allocate memory for size = %d\n", MAX_VSIZE); ··· 65 63 } 66 64 67 65 free(datap); 66 + 67 + ksft_test_result_pass("tests for v_initval_nolibc pass\n"); 68 68 ksft_exit_pass(); 69 69 return 0; 70 70 }
+2
tools/testing/selftests/riscv/vector/vstate_prctl.c
··· 76 76 long flag, expected; 77 77 long rc; 78 78 79 + ksft_set_plan(1); 80 + 79 81 pair.key = RISCV_HWPROBE_KEY_IMA_EXT_0; 80 82 rc = riscv_hwprobe(&pair, 1, 0, NULL, 0); 81 83 if (rc < 0) {
+2 -2
tools/testing/selftests/tc-testing/tc-tests/filters/flow.json
··· 78 78 "setup": [ 79 79 "$TC qdisc add dev $DEV1 ingress" 80 80 ], 81 - "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 protocol ip flow map key dst rshift 0xff", 81 + "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 protocol ip flow map key dst rshift 0x1f", 82 82 "expExitCode": "0", 83 83 "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 protocol ip prio 1 flow", 84 - "matchPattern": "filter parent ffff: protocol ip pref 1 flow chain [0-9]+ handle 0x1 map keys dst rshift 255 baseclass", 84 + "matchPattern": "filter parent ffff: protocol ip pref 1 flow chain [0-9]+ handle 0x1 map keys dst rshift 31 baseclass", 85 85 "matchCount": "1", 86 86 "teardown": [ 87 87 "$TC qdisc del dev $DEV1 ingress"
+1 -1
tools/testing/shared/linux/maple_tree.h
··· 2 2 #define atomic_t int32_t 3 3 #define atomic_inc(x) uatomic_inc(x) 4 4 #define atomic_read(x) uatomic_read(x) 5 - #define atomic_set(x, y) do {} while (0) 5 + #define atomic_set(x, y) uatomic_set(x, y) 6 6 #define U8_MAX UCHAR_MAX 7 7 #include "../../../../include/linux/maple_tree.h"
+1 -1
tools/testing/vma/linux/atomic.h
··· 6 6 #define atomic_t int32_t 7 7 #define atomic_inc(x) uatomic_inc(x) 8 8 #define atomic_read(x) uatomic_read(x) 9 - #define atomic_set(x, y) do {} while (0) 9 + #define atomic_set(x, y) uatomic_set(x, y) 10 10 #define U8_MAX UCHAR_MAX 11 11 12 12 #endif /* _LINUX_ATOMIC_H */