Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 4.17-rc3 into char-misc-next

We want the fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+4219 -2419
+1 -1
Documentation/devicetree/bindings/serial/amlogic,meson-uart.txt
··· 21 21 - interrupts : identifier to the device interrupt 22 22 - clocks : a list of phandle + clock-specifier pairs, one for each 23 23 entry in clock names. 24 - - clocks-names : 24 + - clock-names : 25 25 * "xtal" for external xtal clock identifier 26 26 * "pclk" for the bus core clock, either the clk81 clock or the gate clock 27 27 * "baud" for the source of the baudrate generator, can be either the xtal
+1 -1
Documentation/devicetree/bindings/serial/mvebu-uart.txt
··· 24 24 - Must contain two elements for the extended variant of the IP 25 25 (marvell,armada-3700-uart-ext): "uart-tx" and "uart-rx", 26 26 respectively the UART TX interrupt and the UART RX interrupt. A 27 - corresponding interrupts-names property must be defined. 27 + corresponding interrupt-names property must be defined. 28 28 - For backward compatibility reasons, a single element interrupts 29 29 property is also supported for the standard variant of the IP, 30 30 containing only the UART sum interrupt. This form is deprecated
+2
Documentation/devicetree/bindings/serial/renesas,sci-serial.txt
··· 17 17 - "renesas,scifa-r8a7745" for R8A7745 (RZ/G1E) SCIFA compatible UART. 18 18 - "renesas,scifb-r8a7745" for R8A7745 (RZ/G1E) SCIFB compatible UART. 19 19 - "renesas,hscif-r8a7745" for R8A7745 (RZ/G1E) HSCIF compatible UART. 20 + - "renesas,scif-r8a77470" for R8A77470 (RZ/G1C) SCIF compatible UART. 21 + - "renesas,hscif-r8a77470" for R8A77470 (RZ/G1C) HSCIF compatible UART. 20 22 - "renesas,scif-r8a7778" for R8A7778 (R-Car M1) SCIF compatible UART. 21 23 - "renesas,scif-r8a7779" for R8A7779 (R-Car H1) SCIF compatible UART. 22 24 - "renesas,scif-r8a7790" for R8A7790 (R-Car H2) SCIF compatible UART.
+4 -1
Documentation/devicetree/bindings/usb/usb-xhci.txt
··· 28 28 - interrupts: one XHCI interrupt should be described here. 29 29 30 30 Optional properties: 31 - - clocks: reference to a clock 31 + - clocks: reference to the clocks 32 + - clock-names: mandatory if there is a second clock, in this case 33 + the name must be "core" for the first clock and "reg" for the 34 + second one 32 35 - usb2-lpm-disable: indicate if we don't want to enable USB2 HW LPM 33 36 - usb3-lpm-capable: determines if platform is USB3 LPM capable 34 37 - quirk-broken-port-ped: set if the controller has broken port disable mechanism
+8 -8
Documentation/driver-api/firmware/request_firmware.rst
··· 17 17 18 18 request_firmware 19 19 ---------------- 20 - .. kernel-doc:: drivers/base/firmware_class.c 20 + .. kernel-doc:: drivers/base/firmware_loader/main.c 21 21 :functions: request_firmware 22 22 23 23 request_firmware_direct 24 24 ----------------------- 25 - .. kernel-doc:: drivers/base/firmware_class.c 25 + .. kernel-doc:: drivers/base/firmware_loader/main.c 26 26 :functions: request_firmware_direct 27 27 28 28 request_firmware_into_buf 29 29 ------------------------- 30 - .. kernel-doc:: drivers/base/firmware_class.c 30 + .. kernel-doc:: drivers/base/firmware_loader/main.c 31 31 :functions: request_firmware_into_buf 32 32 33 33 Asynchronous firmware requests ··· 41 41 42 42 request_firmware_nowait 43 43 ----------------------- 44 - .. kernel-doc:: drivers/base/firmware_class.c 44 + .. kernel-doc:: drivers/base/firmware_loader/main.c 45 45 :functions: request_firmware_nowait 46 46 47 47 Special optimizations on reboot ··· 50 50 Some devices have an optimization in place to enable the firmware to be 51 51 retained during system reboot. When such optimizations are used the driver 52 52 author must ensure the firmware is still available on resume from suspend, 53 - this can be done with firmware_request_cache() insted of requesting for the 54 - firmare to be loaded. 53 + this can be done with firmware_request_cache() instead of requesting for the 54 + firmware to be loaded. 55 55 56 56 firmware_request_cache() 57 - ----------------------- 58 - .. kernel-doc:: drivers/base/firmware_class.c 57 + ------------------------ 58 + .. kernel-doc:: drivers/base/firmware_loader/main.c 59 59 :functions: firmware_request_cache 60 60 61 61 request firmware API expected driver use
+1 -1
Documentation/driver-api/infrastructure.rst
··· 28 28 .. kernel-doc:: drivers/base/node.c 29 29 :internal: 30 30 31 - .. kernel-doc:: drivers/base/firmware_class.c 31 + .. kernel-doc:: drivers/base/firmware_loader/main.c 32 32 :export: 33 33 34 34 .. kernel-doc:: drivers/base/transport_class.c
+1 -1
Documentation/driver-api/usb/typec.rst
··· 210 210 role. USB Type-C Connector Class does not supply separate API for them. The 211 211 port drivers can use USB Role Class API with those. 212 212 213 - Illustration of the muxes behind a connector that supports an alternate mode: 213 + Illustration of the muxes behind a connector that supports an alternate mode:: 214 214 215 215 ------------------------ 216 216 | Connector |
+14 -18
Documentation/i2c/dev-interface
··· 9 9 the i2c-tools package. 10 10 11 11 I2C device files are character device files with major device number 89 12 - and a minor device number corresponding to the number assigned as 13 - explained above. They should be called "i2c-%d" (i2c-0, i2c-1, ..., 12 + and a minor device number corresponding to the number assigned as 13 + explained above. They should be called "i2c-%d" (i2c-0, i2c-1, ..., 14 14 i2c-10, ...). All 256 minor device numbers are reserved for i2c. 15 15 16 16 ··· 23 23 #include <linux/i2c-dev.h> 24 24 #include <i2c/smbus.h> 25 25 26 - (Please note that there are two files named "i2c-dev.h" out there. One is 27 - distributed with the Linux kernel and the other one is included in the 28 - source tree of i2c-tools. They used to be different in content but since 2012 29 - they're identical. You should use "linux/i2c-dev.h"). 30 - 31 26 Now, you have to decide which adapter you want to access. You should 32 27 inspect /sys/class/i2c-dev/ or run "i2cdetect -l" to decide this. 33 28 Adapter numbers are assigned somewhat dynamically, so you can not ··· 33 38 int file; 34 39 int adapter_nr = 2; /* probably dynamically determined */ 35 40 char filename[20]; 36 - 41 + 37 42 snprintf(filename, 19, "/dev/i2c-%d", adapter_nr); 38 43 file = open(filename, O_RDWR); 39 44 if (file < 0) { ··· 67 72 /* res contains the read word */ 68 73 } 69 74 70 - /* Using I2C Write, equivalent of 71 - i2c_smbus_write_word_data(file, reg, 0x6543) */ 75 + /* 76 + * Using I2C Write, equivalent of 77 + * i2c_smbus_write_word_data(file, reg, 0x6543) 78 + */ 72 79 buf[0] = reg; 73 80 buf[1] = 0x43; 74 81 buf[2] = 0x65; ··· 137 140 set in each message, overriding the values set with the above ioctl's. 138 141 139 142 ioctl(file, I2C_SMBUS, struct i2c_smbus_ioctl_data *args) 140 - Not meant to be called directly; instead, use the access functions 141 - below. 143 + If possible, use the provided i2c_smbus_* methods described below instead 144 + of issuing direct ioctls. 142 145 143 146 You can do plain i2c transactions by using read(2) and write(2) calls. 144 147 You do not need to pass the address byte; instead, set it through 145 148 ioctl I2C_SLAVE before you try to access the device. 146 149 147 - You can do SMBus level transactions (see documentation file smbus-protocol 150 + You can do SMBus level transactions (see documentation file smbus-protocol 148 151 for details) through the following functions: 149 152 __s32 i2c_smbus_write_quick(int file, __u8 value); 150 153 __s32 i2c_smbus_read_byte(int file); ··· 155 158 __s32 i2c_smbus_write_word_data(int file, __u8 command, __u16 value); 156 159 __s32 i2c_smbus_process_call(int file, __u8 command, __u16 value); 157 160 __s32 i2c_smbus_read_block_data(int file, __u8 command, __u8 *values); 158 - __s32 i2c_smbus_write_block_data(int file, __u8 command, __u8 length, 161 + __s32 i2c_smbus_write_block_data(int file, __u8 command, __u8 length, 159 162 __u8 *values); 160 163 All these transactions return -1 on failure; you can read errno to see 161 164 what happened. The 'write' transactions return 0 on success; the ··· 163 166 returns the number of values read. The block buffers need not be longer 164 167 than 32 bytes. 165 168 166 - The above functions are all inline functions, that resolve to calls to 167 - the i2c_smbus_access function, that on its turn calls a specific ioctl 168 - with the data in a specific format. Read the source code if you 169 - want to know what happens behind the screens. 169 + The above functions are made available by linking against the libi2c library, 170 + which is provided by the i2c-tools project. See: 171 + https://git.kernel.org/pub/scm/utils/i2c-tools/i2c-tools.git/. 170 172 171 173 172 174 Implementation details
-2
Documentation/ioctl/ioctl-number.txt
··· 217 217 'd' 02-40 pcmcia/ds.h conflict! 218 218 'd' F0-FF linux/digi1.h 219 219 'e' all linux/digi1.h conflict! 220 - 'e' 00-1F drivers/net/irda/irtty-sir.h conflict! 221 220 'f' 00-1F linux/ext2_fs.h conflict! 222 221 'f' 00-1F linux/ext3_fs.h conflict! 223 222 'f' 00-0F fs/jfs/jfs_dinode.h conflict! ··· 246 247 'm' all linux/synclink.h conflict! 247 248 'm' 00-19 drivers/message/fusion/mptctl.h conflict! 248 249 'm' 00 drivers/scsi/megaraid/megaraid_ioctl.h conflict! 249 - 'm' 00-1F net/irda/irmod.h conflict! 250 250 'n' 00-7F linux/ncp_fs.h and fs/ncpfs/ioctl.c 251 251 'n' 80-8F uapi/linux/nilfs2_api.h NILFS2 252 252 'n' E0-FF linux/matroxfb.h matroxfb
-15
Documentation/networking/ip-sysctl.txt
··· 2126 2126 2127 2127 Default: 10 2128 2128 2129 - 2130 - UNDOCUMENTED: 2131 - 2132 - /proc/sys/net/irda/* 2133 - fast_poll_increase FIXME 2134 - warn_noreply_time FIXME 2135 - discovery_slots FIXME 2136 - slot_timeout FIXME 2137 - max_baud_rate FIXME 2138 - discovery_timeout FIXME 2139 - lap_keepalive_time FIXME 2140 - max_noreply_time FIXME 2141 - max_tx_data_size FIXME 2142 - max_tx_window FIXME 2143 - min_tx_turn_time FIXME
+1 -1
Documentation/power/suspend-and-cpuhotplug.txt
··· 168 168 169 169 [Please bear in mind that the kernel requests the microcode images from 170 170 userspace, using the request_firmware() function defined in 171 - drivers/base/firmware_class.c] 171 + drivers/base/firmware_loader/main.c] 172 172 173 173 174 174 a. When all the CPUs are identical:
-3
Documentation/process/magic-number.rst
··· 157 157 OSS sound drivers have their magic numbers constructed from the soundcard PCI 158 158 ID - these are not listed here as well. 159 159 160 - IrDA subsystem also uses large number of own magic numbers, see 161 - ``include/net/irda/irda.h`` for a complete list of them. 162 - 163 160 HFS is another larger user of magic numbers - you can find them in 164 161 ``fs/hfs/hfs.h``.
+11 -3
Documentation/trace/ftrace.rst
··· 461 461 and ticks at the same rate as the hardware clocksource. 462 462 463 463 boot: 464 - Same as mono. Used to be a separate clock which accounted 465 - for the time spent in suspend while CLOCK_MONOTONIC did 466 - not. 464 + This is the boot clock (CLOCK_BOOTTIME) and is based on the 465 + fast monotonic clock, but also accounts for time spent in 466 + suspend. Since the clock access is designed for use in 467 + tracing in the suspend path, some side effects are possible 468 + if clock is accessed after the suspend time is accounted before 469 + the fast mono clock is updated. In this case, the clock update 470 + appears to happen slightly sooner than it normally would have. 471 + Also on 32-bit systems, it's possible that the 64-bit boot offset 472 + sees a partial update. These effects are rare and post 473 + processing should be able to handle them. See comments in the 474 + ktime_get_boot_fast_ns() function for more information. 467 475 468 476 To set a clock, simply echo the clock name into this file:: 469 477
+8 -1
Documentation/virtual/kvm/api.txt
··· 1960 1960 ARM 64-bit FP registers have the following id bit patterns: 1961 1961 0x4030 0000 0012 0 <regno:12> 1962 1962 1963 + ARM firmware pseudo-registers have the following bit pattern: 1964 + 0x4030 0000 0014 <regno:16> 1965 + 1963 1966 1964 1967 arm64 registers are mapped using the lower 32 bits. The upper 16 of 1965 1968 that is the register group type, or coprocessor number: ··· 1978 1975 1979 1976 arm64 system registers have the following id bit patterns: 1980 1977 0x6030 0000 0013 <op0:2> <op1:3> <crn:4> <crm:4> <op2:3> 1978 + 1979 + arm64 firmware pseudo-registers have the following bit pattern: 1980 + 0x6030 0000 0014 <regno:16> 1981 1981 1982 1982 1983 1983 MIPS registers are mapped using the lower 32 bits. The upper 16 of that is ··· 2516 2510 and execute guest code when KVM_RUN is called. 2517 2511 - KVM_ARM_VCPU_EL1_32BIT: Starts the CPU in a 32bit mode. 2518 2512 Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only). 2519 - - KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU. 2513 + - KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 (or a future revision 2514 + backward compatible with v0.2) for the CPU. 2520 2515 Depends on KVM_CAP_ARM_PSCI_0_2. 2521 2516 - KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU. 2522 2517 Depends on KVM_CAP_ARM_PMU_V3.
+30
Documentation/virtual/kvm/arm/psci.txt
··· 1 + KVM implements the PSCI (Power State Coordination Interface) 2 + specification in order to provide services such as CPU on/off, reset 3 + and power-off to the guest. 4 + 5 + The PSCI specification is regularly updated to provide new features, 6 + and KVM implements these updates if they make sense from a virtualization 7 + point of view. 8 + 9 + This means that a guest booted on two different versions of KVM can 10 + observe two different "firmware" revisions. This could cause issues if 11 + a given guest is tied to a particular PSCI revision (unlikely), or if 12 + a migration causes a different PSCI version to be exposed out of the 13 + blue to an unsuspecting guest. 14 + 15 + In order to remedy this situation, KVM exposes a set of "firmware 16 + pseudo-registers" that can be manipulated using the GET/SET_ONE_REG 17 + interface. These registers can be saved/restored by userspace, and set 18 + to a convenient value if required. 19 + 20 + The following register is defined: 21 + 22 + * KVM_REG_ARM_PSCI_VERSION: 23 + 24 + - Only valid if the vcpu has the KVM_ARM_VCPU_PSCI_0_2 feature set 25 + (and thus has already been initialized) 26 + - Returns the current PSCI version on GET_ONE_REG (defaulting to the 27 + highest PSCI version implemented by KVM and compatible with v0.2) 28 + - Allows any PSCI version implemented by KVM and compatible with 29 + v0.2 to be set with SET_ONE_REG 30 + - Affects the whole VM (even if the register view is per-vcpu)
+9 -18
MAINTAINERS
··· 564 564 F: drivers/media/dvb-frontends/af9033* 565 565 566 566 AFFS FILE SYSTEM 567 + M: David Sterba <dsterba@suse.com> 567 568 L: linux-fsdevel@vger.kernel.org 568 - S: Orphan 569 + S: Odd Fixes 569 570 F: Documentation/filesystems/affs.txt 570 571 F: fs/affs/ 571 572 ··· 906 905 M: Laura Abbott <labbott@redhat.com> 907 906 M: Sumit Semwal <sumit.semwal@linaro.org> 908 907 L: devel@driverdev.osuosl.org 908 + L: dri-devel@lists.freedesktop.org 909 + L: linaro-mm-sig@lists.linaro.org (moderated for non-subscribers) 909 910 S: Supported 910 911 F: drivers/staging/android/ion 911 912 F: drivers/staging/android/uapi/ion.h ··· 1211 1208 ARM/ARTPEC MACHINE SUPPORT 1212 1209 M: Jesper Nilsson <jesper.nilsson@axis.com> 1213 1210 M: Lars Persson <lars.persson@axis.com> 1214 - M: Niklas Cassel <niklas.cassel@axis.com> 1215 1211 S: Maintained 1216 1212 L: linux-arm-kernel@axis.com 1217 1213 F: arch/arm/mach-artpec ··· 2619 2617 F: drivers/net/hamradio/baycom* 2620 2618 2621 2619 BCACHE (BLOCK LAYER CACHE) 2622 - M: Michael Lyle <mlyle@lyle.org> 2620 + M: Coly Li <colyli@suse.de> 2623 2621 M: Kent Overstreet <kent.overstreet@gmail.com> 2624 2622 L: linux-bcache@vger.kernel.org 2625 2623 W: http://bcache.evilpiepirate.org ··· 7413 7411 F: include/uapi/linux/ipx.h 7414 7412 F: drivers/staging/ipx/ 7415 7413 7416 - IRDA SUBSYSTEM 7417 - M: Samuel Ortiz <samuel@sortiz.org> 7418 - L: irda-users@lists.sourceforge.net (subscribers-only) 7419 - L: netdev@vger.kernel.org 7420 - W: http://irda.sourceforge.net/ 7421 - S: Obsolete 7422 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/sameo/irda-2.6.git 7423 - F: Documentation/networking/irda.txt 7424 - F: drivers/staging/irda/ 7425 - 7426 7414 IRQ DOMAINS (IRQ NUMBER MAPPING LIBRARY) 7427 7415 M: Marc Zyngier <marc.zyngier@arm.com> 7428 7416 S: Maintained ··· 7745 7753 F: arch/x86/kvm/svm.c 7746 7754 7747 7755 KERNEL VIRTUAL MACHINE FOR ARM (KVM/arm) 7748 - M: Christoffer Dall <christoffer.dall@linaro.org> 7756 + M: Christoffer Dall <christoffer.dall@arm.com> 7749 7757 M: Marc Zyngier <marc.zyngier@arm.com> 7750 7758 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 7751 7759 L: kvmarm@lists.cs.columbia.edu ··· 7759 7767 F: include/kvm/arm_* 7760 7768 7761 7769 KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64) 7762 - M: Christoffer Dall <christoffer.dall@linaro.org> 7770 + M: Christoffer Dall <christoffer.dall@arm.com> 7763 7771 M: Marc Zyngier <marc.zyngier@arm.com> 7764 7772 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 7765 7773 L: kvmarm@lists.cs.columbia.edu ··· 10901 10909 F: drivers/pci/dwc/ 10902 10910 10903 10911 PCIE DRIVER FOR AXIS ARTPEC 10904 - M: Niklas Cassel <niklas.cassel@axis.com> 10905 10912 M: Jesper Nilsson <jesper.nilsson@axis.com> 10906 10913 L: linux-arm-kernel@axis.com 10907 10914 L: linux-pci@vger.kernel.org ··· 13956 13965 M: Andreas Noever <andreas.noever@gmail.com> 13957 13966 M: Michael Jamet <michael.jamet@intel.com> 13958 13967 M: Mika Westerberg <mika.westerberg@linux.intel.com> 13959 - M: Yehezkel Bernat <yehezkel.bernat@intel.com> 13968 + M: Yehezkel Bernat <YehezkelShB@gmail.com> 13960 13969 T: git git://git.kernel.org/pub/scm/linux/kernel/git/westeri/thunderbolt.git 13961 13970 S: Maintained 13962 13971 F: Documentation/admin-guide/thunderbolt.rst ··· 13966 13975 THUNDERBOLT NETWORK DRIVER 13967 13976 M: Michael Jamet <michael.jamet@intel.com> 13968 13977 M: Mika Westerberg <mika.westerberg@linux.intel.com> 13969 - M: Yehezkel Bernat <yehezkel.bernat@intel.com> 13978 + M: Yehezkel Bernat <YehezkelShB@gmail.com> 13970 13979 L: netdev@vger.kernel.org 13971 13980 S: Maintained 13972 13981 F: drivers/net/thunderbolt.c
+1 -1
Makefile
··· 2 2 VERSION = 4 3 3 PATCHLEVEL = 17 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc2 5 + EXTRAVERSION = -rc3 6 6 NAME = Fearless Coyote 7 7 8 8 # *DOCUMENTATION*
+14 -14
arch/arm/boot/dts/gemini-nas4220b.dts
··· 134 134 function = "gmii"; 135 135 groups = "gmii_gmac0_grp"; 136 136 }; 137 - /* Settings come from OpenWRT */ 137 + /* Settings come from OpenWRT, pins on SL3516 */ 138 138 conf0 { 139 - pins = "R8 GMAC0 RXDV", "U11 GMAC1 RXDV"; 139 + pins = "V8 GMAC0 RXDV", "T10 GMAC1 RXDV"; 140 140 skew-delay = <0>; 141 141 }; 142 142 conf1 { 143 - pins = "T8 GMAC0 RXC", "T11 GMAC1 RXC"; 143 + pins = "Y7 GMAC0 RXC", "Y11 GMAC1 RXC"; 144 144 skew-delay = <15>; 145 145 }; 146 146 conf2 { 147 - pins = "P8 GMAC0 TXEN", "V11 GMAC1 TXEN"; 147 + pins = "T8 GMAC0 TXEN", "W11 GMAC1 TXEN"; 148 148 skew-delay = <7>; 149 149 }; 150 150 conf3 { 151 - pins = "V7 GMAC0 TXC"; 151 + pins = "U8 GMAC0 TXC"; 152 152 skew-delay = <11>; 153 153 }; 154 154 conf4 { 155 - pins = "P10 GMAC1 TXC"; 155 + pins = "V11 GMAC1 TXC"; 156 156 skew-delay = <10>; 157 157 }; 158 158 conf5 { 159 159 /* The data lines all have default skew */ 160 - pins = "U8 GMAC0 RXD0", "V8 GMAC0 RXD1", 161 - "P9 GMAC0 RXD2", "R9 GMAC0 RXD3", 162 - "U7 GMAC0 TXD0", "T7 GMAC0 TXD1", 163 - "R7 GMAC0 TXD2", "P7 GMAC0 TXD3", 164 - "R11 GMAC1 RXD0", "P11 GMAC1 RXD1", 165 - "V12 GMAC1 RXD2", "U12 GMAC1 RXD3", 166 - "R10 GMAC1 TXD0", "T10 GMAC1 TXD1", 167 - "U10 GMAC1 TXD2", "V10 GMAC1 TXD3"; 160 + pins = "W8 GMAC0 RXD0", "V9 GMAC0 RXD1", 161 + "Y8 GMAC0 RXD2", "U9 GMAC0 RXD3", 162 + "T7 GMAC0 TXD0", "U6 GMAC0 TXD1", 163 + "V7 GMAC0 TXD2", "U7 GMAC0 TXD3", 164 + "Y12 GMAC1 RXD0", "V12 GMAC1 RXD1", 165 + "T11 GMAC1 RXD2", "W12 GMAC1 RXD3", 166 + "U10 GMAC1 TXD0", "Y10 GMAC1 TXD1", 167 + "W10 GMAC1 TXD2", "T9 GMAC1 TXD3"; 168 168 skew-delay = <7>; 169 169 }; 170 170 /* Set up drive strength on GMAC0 to 16 mA */
+4 -4
arch/arm/boot/dts/omap4.dtsi
··· 163 163 164 164 cm2: cm2@8000 { 165 165 compatible = "ti,omap4-cm2", "simple-bus"; 166 - reg = <0x8000 0x3000>; 166 + reg = <0x8000 0x2000>; 167 167 #address-cells = <1>; 168 168 #size-cells = <1>; 169 - ranges = <0 0x8000 0x3000>; 169 + ranges = <0 0x8000 0x2000>; 170 170 171 171 cm2_clocks: clocks { 172 172 #address-cells = <1>; ··· 250 250 251 251 prm: prm@6000 { 252 252 compatible = "ti,omap4-prm"; 253 - reg = <0x6000 0x3000>; 253 + reg = <0x6000 0x2000>; 254 254 interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>; 255 255 #address-cells = <1>; 256 256 #size-cells = <1>; 257 - ranges = <0 0x6000 0x3000>; 257 + ranges = <0 0x6000 0x2000>; 258 258 259 259 prm_clocks: clocks { 260 260 #address-cells = <1>;
+25 -2
arch/arm/configs/gemini_defconfig
··· 1 1 # CONFIG_LOCALVERSION_AUTO is not set 2 2 CONFIG_SYSVIPC=y 3 3 CONFIG_NO_HZ_IDLE=y 4 + CONFIG_HIGH_RES_TIMERS=y 4 5 CONFIG_BSD_PROCESS_ACCT=y 5 6 CONFIG_USER_NS=y 6 7 CONFIG_RELAY=y ··· 13 12 CONFIG_PCI=y 14 13 CONFIG_PREEMPT=y 15 14 CONFIG_AEABI=y 15 + CONFIG_HIGHMEM=y 16 + CONFIG_CMA=y 16 17 CONFIG_CMDLINE="console=ttyS0,115200n8" 17 18 CONFIG_KEXEC=y 18 19 CONFIG_BINFMT_MISC=y 19 20 CONFIG_PM=y 21 + CONFIG_NET=y 22 + CONFIG_UNIX=y 23 + CONFIG_INET=y 20 24 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 21 25 CONFIG_DEVTMPFS=y 22 26 CONFIG_MTD=y 23 27 CONFIG_MTD_BLOCK=y 24 28 CONFIG_MTD_CFI=y 29 + CONFIG_MTD_JEDECPROBE=y 25 30 CONFIG_MTD_CFI_INTELEXT=y 26 31 CONFIG_MTD_CFI_AMDSTD=y 27 32 CONFIG_MTD_CFI_STAA=y ··· 40 33 # CONFIG_SCSI_LOWLEVEL is not set 41 34 CONFIG_ATA=y 42 35 CONFIG_PATA_FTIDE010=y 36 + CONFIG_NETDEVICES=y 37 + CONFIG_GEMINI_ETHERNET=y 38 + CONFIG_MDIO_BITBANG=y 39 + CONFIG_MDIO_GPIO=y 40 + CONFIG_REALTEK_PHY=y 43 41 CONFIG_INPUT_EVDEV=y 44 42 CONFIG_KEYBOARD_GPIO=y 45 43 # CONFIG_INPUT_MOUSE is not set ··· 55 43 CONFIG_SERIAL_8250_RUNTIME_UARTS=1 56 44 CONFIG_SERIAL_OF_PLATFORM=y 57 45 # CONFIG_HW_RANDOM is not set 58 - # CONFIG_HWMON is not set 46 + CONFIG_I2C_GPIO=y 47 + CONFIG_SPI=y 48 + CONFIG_SPI_GPIO=y 49 + CONFIG_SENSORS_GPIO_FAN=y 50 + CONFIG_SENSORS_LM75=y 51 + CONFIG_THERMAL=y 59 52 CONFIG_WATCHDOG=y 60 - CONFIG_GEMINI_WATCHDOG=y 53 + CONFIG_REGULATOR=y 54 + CONFIG_REGULATOR_FIXED_VOLTAGE=y 55 + CONFIG_DRM=y 56 + CONFIG_DRM_PANEL_ILITEK_IL9322=y 57 + CONFIG_DRM_TVE200=y 58 + CONFIG_LOGO=y 61 59 CONFIG_USB=y 62 60 CONFIG_USB_MON=y 63 61 CONFIG_USB_FOTG210_HCD=y ··· 76 54 CONFIG_LEDS_CLASS=y 77 55 CONFIG_LEDS_GPIO=y 78 56 CONFIG_LEDS_TRIGGERS=y 57 + CONFIG_LEDS_TRIGGER_DISK=y 79 58 CONFIG_LEDS_TRIGGER_HEARTBEAT=y 80 59 CONFIG_RTC_CLASS=y 81 60 CONFIG_DMADEVICES=y
+1
arch/arm/configs/socfpga_defconfig
··· 57 57 CONFIG_MTD_NAND=y 58 58 CONFIG_MTD_NAND_DENALI_DT=y 59 59 CONFIG_MTD_SPI_NOR=y 60 + # CONFIG_MTD_SPI_NOR_USE_4K_SECTORS is not set 60 61 CONFIG_SPI_CADENCE_QUADSPI=y 61 62 CONFIG_OF_OVERLAY=y 62 63 CONFIG_OF_CONFIGFS=y
+3
arch/arm/include/asm/kvm_host.h
··· 77 77 /* Interrupt controller */ 78 78 struct vgic_dist vgic; 79 79 int max_vcpus; 80 + 81 + /* Mandated version of PSCI */ 82 + u32 psci_version; 80 83 }; 81 84 82 85 #define KVM_NR_MEM_OBJS 40
+6
arch/arm/include/uapi/asm/kvm.h
··· 195 195 #define KVM_REG_ARM_VFP_FPINST 0x1009 196 196 #define KVM_REG_ARM_VFP_FPINST2 0x100A 197 197 198 + /* KVM-as-firmware specific pseudo-registers */ 199 + #define KVM_REG_ARM_FW (0x0014 << KVM_REG_ARM_COPROC_SHIFT) 200 + #define KVM_REG_ARM_FW_REG(r) (KVM_REG_ARM | KVM_REG_SIZE_U64 | \ 201 + KVM_REG_ARM_FW | ((r) & 0xffff)) 202 + #define KVM_REG_ARM_PSCI_VERSION KVM_REG_ARM_FW_REG(0) 203 + 198 204 /* Device Control API: ARM VGIC */ 199 205 #define KVM_DEV_ARM_VGIC_GRP_ADDR 0 200 206 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1
+13
arch/arm/kvm/guest.c
··· 22 22 #include <linux/module.h> 23 23 #include <linux/vmalloc.h> 24 24 #include <linux/fs.h> 25 + #include <kvm/arm_psci.h> 25 26 #include <asm/cputype.h> 26 27 #include <linux/uaccess.h> 27 28 #include <asm/kvm.h> ··· 177 176 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu) 178 177 { 179 178 return num_core_regs() + kvm_arm_num_coproc_regs(vcpu) 179 + + kvm_arm_get_fw_num_regs(vcpu) 180 180 + NUM_TIMER_REGS; 181 181 } 182 182 ··· 198 196 uindices++; 199 197 } 200 198 199 + ret = kvm_arm_copy_fw_reg_indices(vcpu, uindices); 200 + if (ret) 201 + return ret; 202 + uindices += kvm_arm_get_fw_num_regs(vcpu); 203 + 201 204 ret = copy_timer_indices(vcpu, uindices); 202 205 if (ret) 203 206 return ret; ··· 221 214 if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) 222 215 return get_core_reg(vcpu, reg); 223 216 217 + if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW) 218 + return kvm_arm_get_fw_reg(vcpu, reg); 219 + 224 220 if (is_timer_reg(reg->id)) 225 221 return get_timer_reg(vcpu, reg); 226 222 ··· 239 229 /* Register group 16 means we set a core register. */ 240 230 if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) 241 231 return set_core_reg(vcpu, reg); 232 + 233 + if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW) 234 + return kvm_arm_set_fw_reg(vcpu, reg); 242 235 243 236 if (is_timer_reg(reg->id)) 244 237 return set_timer_reg(vcpu, reg);
+1 -5
arch/arm/mach-omap2/Makefile
··· 243 243 include/generated/ti-pm-asm-offsets.h: arch/arm/mach-omap2/pm-asm-offsets.s FORCE 244 244 $(call filechk,offsets,__TI_PM_ASM_OFFSETS_H__) 245 245 246 - # For rule to generate ti-emif-asm-offsets.h dependency 247 - include drivers/memory/Makefile.asm-offsets 248 - 249 - arch/arm/mach-omap2/sleep33xx.o: include/generated/ti-pm-asm-offsets.h include/generated/ti-emif-asm-offsets.h 250 - arch/arm/mach-omap2/sleep43xx.o: include/generated/ti-pm-asm-offsets.h include/generated/ti-emif-asm-offsets.h 246 + $(obj)/sleep33xx.o $(obj)/sleep43xx.o: include/generated/ti-pm-asm-offsets.h
+3
arch/arm/mach-omap2/pm-asm-offsets.c
··· 7 7 8 8 #include <linux/kbuild.h> 9 9 #include <linux/platform_data/pm33xx.h> 10 + #include <linux/ti-emif-sram.h> 10 11 11 12 int main(void) 12 13 { 14 + ti_emif_asm_offsets(); 15 + 13 16 DEFINE(AMX3_PM_WFI_FLAGS_OFFSET, 14 17 offsetof(struct am33xx_pm_sram_data, wfi_flags)); 15 18 DEFINE(AMX3_PM_L2_AUX_CTRL_VAL_OFFSET,
-1
arch/arm/mach-omap2/sleep33xx.S
··· 6 6 * Dave Gerlach, Vaibhav Bedia 7 7 */ 8 8 9 - #include <generated/ti-emif-asm-offsets.h> 10 9 #include <generated/ti-pm-asm-offsets.h> 11 10 #include <linux/linkage.h> 12 11 #include <linux/ti-emif-sram.h>
-1
arch/arm/mach-omap2/sleep43xx.S
··· 6 6 * Dave Gerlach, Vaibhav Bedia 7 7 */ 8 8 9 - #include <generated/ti-emif-asm-offsets.h> 10 9 #include <generated/ti-pm-asm-offsets.h> 11 10 #include <linux/linkage.h> 12 11 #include <linux/ti-emif-sram.h>
+2 -2
arch/arm/mach-s3c24xx/mach-jive.c
··· 427 427 .dev_id = "spi_gpio", 428 428 .table = { 429 429 GPIO_LOOKUP("GPIOB", 4, 430 - "gpio-sck", GPIO_ACTIVE_HIGH), 430 + "sck", GPIO_ACTIVE_HIGH), 431 431 GPIO_LOOKUP("GPIOB", 9, 432 - "gpio-mosi", GPIO_ACTIVE_HIGH), 432 + "mosi", GPIO_ACTIVE_HIGH), 433 433 GPIO_LOOKUP("GPIOH", 10, 434 434 "cs", GPIO_ACTIVE_HIGH), 435 435 { },
+4
arch/arm64/Makefile
··· 56 56 KBUILD_CFLAGS += $(call cc-option,-mabi=lp64) 57 57 KBUILD_AFLAGS += $(call cc-option,-mabi=lp64) 58 58 59 + ifeq ($(cc-name),clang) 60 + KBUILD_CFLAGS += -DCONFIG_ARCH_SUPPORTS_INT128 61 + else 59 62 KBUILD_CFLAGS += $(call cc-ifversion, -ge, 0500, -DCONFIG_ARCH_SUPPORTS_INT128) 63 + endif 60 64 61 65 ifeq ($(CONFIG_CPU_BIG_ENDIAN), y) 62 66 KBUILD_CPPFLAGS += -mbig-endian
+4
arch/arm64/boot/dts/amlogic/meson-gx-p23x-q20x.dtsi
··· 212 212 pinctrl-0 = <&uart_ao_a_pins>; 213 213 pinctrl-names = "default"; 214 214 }; 215 + 216 + &usb0 { 217 + status = "okay"; 218 + };
+12
arch/arm64/boot/dts/amlogic/meson-gxl-s905x-libretech-cc.dts
··· 271 271 pinctrl-0 = <&uart_ao_a_pins>; 272 272 pinctrl-names = "default"; 273 273 }; 274 + 275 + &usb0 { 276 + status = "okay"; 277 + }; 278 + 279 + &usb2_phy0 { 280 + /* 281 + * even though the schematics don't show it: 282 + * HDMI_5V is also used as supply for the USB VBUS. 283 + */ 284 + phy-supply = <&hdmi_5v>; 285 + };
+4
arch/arm64/boot/dts/amlogic/meson-gxl-s905x-nexbox-a95x.dts
··· 215 215 pinctrl-0 = <&uart_ao_a_pins>; 216 216 pinctrl-names = "default"; 217 217 }; 218 + 219 + &usb0 { 220 + status = "okay"; 221 + };
+4
arch/arm64/boot/dts/amlogic/meson-gxl-s905x-p212.dtsi
··· 185 185 pinctrl-0 = <&uart_ao_a_pins>; 186 186 pinctrl-names = "default"; 187 187 }; 188 + 189 + &usb0 { 190 + status = "okay"; 191 + };
+61
arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
··· 20 20 no-map; 21 21 }; 22 22 }; 23 + 24 + soc { 25 + usb0: usb@c9000000 { 26 + status = "disabled"; 27 + compatible = "amlogic,meson-gxl-dwc3"; 28 + #address-cells = <2>; 29 + #size-cells = <2>; 30 + ranges; 31 + 32 + clocks = <&clkc CLKID_USB>; 33 + clock-names = "usb_general"; 34 + resets = <&reset RESET_USB_OTG>; 35 + reset-names = "usb_otg"; 36 + 37 + dwc3: dwc3@c9000000 { 38 + compatible = "snps,dwc3"; 39 + reg = <0x0 0xc9000000 0x0 0x100000>; 40 + interrupts = <GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>; 41 + dr_mode = "host"; 42 + maximum-speed = "high-speed"; 43 + snps,dis_u2_susphy_quirk; 44 + phys = <&usb3_phy>, <&usb2_phy0>, <&usb2_phy1>; 45 + }; 46 + }; 47 + }; 48 + }; 49 + 50 + &apb { 51 + usb2_phy0: phy@78000 { 52 + compatible = "amlogic,meson-gxl-usb2-phy"; 53 + #phy-cells = <0>; 54 + reg = <0x0 0x78000 0x0 0x20>; 55 + clocks = <&clkc CLKID_USB>; 56 + clock-names = "phy"; 57 + resets = <&reset RESET_USB_OTG>; 58 + reset-names = "phy"; 59 + status = "okay"; 60 + }; 61 + 62 + usb2_phy1: phy@78020 { 63 + compatible = "amlogic,meson-gxl-usb2-phy"; 64 + #phy-cells = <0>; 65 + reg = <0x0 0x78020 0x0 0x20>; 66 + clocks = <&clkc CLKID_USB>; 67 + clock-names = "phy"; 68 + resets = <&reset RESET_USB_OTG>; 69 + reset-names = "phy"; 70 + status = "okay"; 71 + }; 72 + 73 + usb3_phy: phy@78080 { 74 + compatible = "amlogic,meson-gxl-usb3-phy"; 75 + #phy-cells = <0>; 76 + reg = <0x0 0x78080 0x0 0x20>; 77 + interrupts = <GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>; 78 + clocks = <&clkc CLKID_USB>, <&clkc_AO CLKID_AO_CEC_32K>; 79 + clock-names = "phy", "peripheral"; 80 + resets = <&reset RESET_USB_OTG>, <&reset RESET_USB_OTG>; 81 + reset-names = "phy", "peripheral"; 82 + status = "okay"; 83 + }; 23 84 }; 24 85 25 86 &ethmac {
+4
arch/arm64/boot/dts/amlogic/meson-gxm-khadas-vim2.dts
··· 406 406 status = "okay"; 407 407 vref-supply = <&vddio_ao18>; 408 408 }; 409 + 410 + &usb0 { 411 + status = "okay"; 412 + };
+17
arch/arm64/boot/dts/amlogic/meson-gxm.dtsi
··· 80 80 }; 81 81 }; 82 82 83 + &apb { 84 + usb2_phy2: phy@78040 { 85 + compatible = "amlogic,meson-gxl-usb2-phy"; 86 + #phy-cells = <0>; 87 + reg = <0x0 0x78040 0x0 0x20>; 88 + clocks = <&clkc CLKID_USB>; 89 + clock-names = "phy"; 90 + resets = <&reset RESET_USB_OTG>; 91 + reset-names = "phy"; 92 + status = "okay"; 93 + }; 94 + }; 95 + 83 96 &clkc_AO { 84 97 compatible = "amlogic,meson-gxm-aoclkc", "amlogic,meson-gx-aoclkc"; 85 98 }; ··· 112 99 113 100 &hdmi_tx { 114 101 compatible = "amlogic,meson-gxm-dw-hdmi", "amlogic,meson-gx-dw-hdmi"; 102 + }; 103 + 104 + &dwc3 { 105 + phys = <&usb3_phy>, <&usb2_phy0>, <&usb2_phy1>, <&usb2_phy2>; 115 106 };
-2
arch/arm64/boot/dts/arm/juno-motherboard.dtsi
··· 56 56 57 57 gpio_keys { 58 58 compatible = "gpio-keys"; 59 - #address-cells = <1>; 60 - #size-cells = <0>; 61 59 62 60 power-button { 63 61 debounce_interval = <50>;
+40 -40
arch/arm64/boot/dts/broadcom/stingray/stingray-sata.dtsi
··· 36 36 #size-cells = <1>; 37 37 ranges = <0x0 0x0 0x67d00000 0x00800000>; 38 38 39 - sata0: ahci@210000 { 39 + sata0: ahci@0 { 40 40 compatible = "brcm,iproc-ahci", "generic-ahci"; 41 - reg = <0x00210000 0x1000>; 41 + reg = <0x00000000 0x1000>; 42 42 reg-names = "ahci"; 43 - interrupts = <GIC_SPI 339 IRQ_TYPE_LEVEL_HIGH>; 43 + interrupts = <GIC_SPI 321 IRQ_TYPE_LEVEL_HIGH>; 44 44 #address-cells = <1>; 45 45 #size-cells = <0>; 46 46 status = "disabled"; ··· 52 52 }; 53 53 }; 54 54 55 - sata_phy0: sata_phy@212100 { 55 + sata_phy0: sata_phy@2100 { 56 56 compatible = "brcm,iproc-sr-sata-phy"; 57 - reg = <0x00212100 0x1000>; 57 + reg = <0x00002100 0x1000>; 58 58 reg-names = "phy"; 59 59 #address-cells = <1>; 60 60 #size-cells = <0>; ··· 66 66 }; 67 67 }; 68 68 69 - sata1: ahci@310000 { 69 + sata1: ahci@10000 { 70 70 compatible = "brcm,iproc-ahci", "generic-ahci"; 71 - reg = <0x00310000 0x1000>; 71 + reg = <0x00010000 0x1000>; 72 72 reg-names = "ahci"; 73 - interrupts = <GIC_SPI 347 IRQ_TYPE_LEVEL_HIGH>; 73 + interrupts = <GIC_SPI 323 IRQ_TYPE_LEVEL_HIGH>; 74 74 #address-cells = <1>; 75 75 #size-cells = <0>; 76 76 status = "disabled"; ··· 82 82 }; 83 83 }; 84 84 85 - sata_phy1: sata_phy@312100 { 85 + sata_phy1: sata_phy@12100 { 86 86 compatible = "brcm,iproc-sr-sata-phy"; 87 - reg = <0x00312100 0x1000>; 87 + reg = <0x00012100 0x1000>; 88 88 reg-names = "phy"; 89 89 #address-cells = <1>; 90 90 #size-cells = <0>; ··· 96 96 }; 97 97 }; 98 98 99 - sata2: ahci@120000 { 99 + sata2: ahci@20000 { 100 100 compatible = "brcm,iproc-ahci", "generic-ahci"; 101 - reg = <0x00120000 0x1000>; 101 + reg = <0x00020000 0x1000>; 102 102 reg-names = "ahci"; 103 - interrupts = <GIC_SPI 333 IRQ_TYPE_LEVEL_HIGH>; 103 + interrupts = <GIC_SPI 325 IRQ_TYPE_LEVEL_HIGH>; 104 104 #address-cells = <1>; 105 105 #size-cells = <0>; 106 106 status = "disabled"; ··· 112 112 }; 113 113 }; 114 114 115 - sata_phy2: sata_phy@122100 { 115 + sata_phy2: sata_phy@22100 { 116 116 compatible = "brcm,iproc-sr-sata-phy"; 117 - reg = <0x00122100 0x1000>; 117 + reg = <0x00022100 0x1000>; 118 118 reg-names = "phy"; 119 119 #address-cells = <1>; 120 120 #size-cells = <0>; ··· 126 126 }; 127 127 }; 128 128 129 - sata3: ahci@130000 { 129 + sata3: ahci@30000 { 130 130 compatible = "brcm,iproc-ahci", "generic-ahci"; 131 - reg = <0x00130000 0x1000>; 131 + reg = <0x00030000 0x1000>; 132 132 reg-names = "ahci"; 133 - interrupts = <GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>; 133 + interrupts = <GIC_SPI 327 IRQ_TYPE_LEVEL_HIGH>; 134 134 #address-cells = <1>; 135 135 #size-cells = <0>; 136 136 status = "disabled"; ··· 142 142 }; 143 143 }; 144 144 145 - sata_phy3: sata_phy@132100 { 145 + sata_phy3: sata_phy@32100 { 146 146 compatible = "brcm,iproc-sr-sata-phy"; 147 - reg = <0x00132100 0x1000>; 147 + reg = <0x00032100 0x1000>; 148 148 reg-names = "phy"; 149 149 #address-cells = <1>; 150 150 #size-cells = <0>; ··· 156 156 }; 157 157 }; 158 158 159 - sata4: ahci@330000 { 159 + sata4: ahci@100000 { 160 160 compatible = "brcm,iproc-ahci", "generic-ahci"; 161 - reg = <0x00330000 0x1000>; 161 + reg = <0x00100000 0x1000>; 162 162 reg-names = "ahci"; 163 - interrupts = <GIC_SPI 351 IRQ_TYPE_LEVEL_HIGH>; 163 + interrupts = <GIC_SPI 329 IRQ_TYPE_LEVEL_HIGH>; 164 164 #address-cells = <1>; 165 165 #size-cells = <0>; 166 166 status = "disabled"; ··· 172 172 }; 173 173 }; 174 174 175 - sata_phy4: sata_phy@332100 { 175 + sata_phy4: sata_phy@102100 { 176 176 compatible = "brcm,iproc-sr-sata-phy"; 177 - reg = <0x00332100 0x1000>; 177 + reg = <0x00102100 0x1000>; 178 178 reg-names = "phy"; 179 179 #address-cells = <1>; 180 180 #size-cells = <0>; ··· 186 186 }; 187 187 }; 188 188 189 - sata5: ahci@400000 { 189 + sata5: ahci@110000 { 190 190 compatible = "brcm,iproc-ahci", "generic-ahci"; 191 - reg = <0x00400000 0x1000>; 191 + reg = <0x00110000 0x1000>; 192 192 reg-names = "ahci"; 193 - interrupts = <GIC_SPI 353 IRQ_TYPE_LEVEL_HIGH>; 193 + interrupts = <GIC_SPI 331 IRQ_TYPE_LEVEL_HIGH>; 194 194 #address-cells = <1>; 195 195 #size-cells = <0>; 196 196 status = "disabled"; ··· 202 202 }; 203 203 }; 204 204 205 - sata_phy5: sata_phy@402100 { 205 + sata_phy5: sata_phy@112100 { 206 206 compatible = "brcm,iproc-sr-sata-phy"; 207 - reg = <0x00402100 0x1000>; 207 + reg = <0x00112100 0x1000>; 208 208 reg-names = "phy"; 209 209 #address-cells = <1>; 210 210 #size-cells = <0>; ··· 216 216 }; 217 217 }; 218 218 219 - sata6: ahci@410000 { 219 + sata6: ahci@120000 { 220 220 compatible = "brcm,iproc-ahci", "generic-ahci"; 221 - reg = <0x00410000 0x1000>; 221 + reg = <0x00120000 0x1000>; 222 222 reg-names = "ahci"; 223 - interrupts = <GIC_SPI 355 IRQ_TYPE_LEVEL_HIGH>; 223 + interrupts = <GIC_SPI 333 IRQ_TYPE_LEVEL_HIGH>; 224 224 #address-cells = <1>; 225 225 #size-cells = <0>; 226 226 status = "disabled"; ··· 232 232 }; 233 233 }; 234 234 235 - sata_phy6: sata_phy@412100 { 235 + sata_phy6: sata_phy@122100 { 236 236 compatible = "brcm,iproc-sr-sata-phy"; 237 - reg = <0x00412100 0x1000>; 237 + reg = <0x00122100 0x1000>; 238 238 reg-names = "phy"; 239 239 #address-cells = <1>; 240 240 #size-cells = <0>; ··· 246 246 }; 247 247 }; 248 248 249 - sata7: ahci@420000 { 249 + sata7: ahci@130000 { 250 250 compatible = "brcm,iproc-ahci", "generic-ahci"; 251 - reg = <0x00420000 0x1000>; 251 + reg = <0x00130000 0x1000>; 252 252 reg-names = "ahci"; 253 - interrupts = <GIC_SPI 357 IRQ_TYPE_LEVEL_HIGH>; 253 + interrupts = <GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>; 254 254 #address-cells = <1>; 255 255 #size-cells = <0>; 256 256 status = "disabled"; ··· 262 262 }; 263 263 }; 264 264 265 - sata_phy7: sata_phy@422100 { 265 + sata_phy7: sata_phy@132100 { 266 266 compatible = "brcm,iproc-sr-sata-phy"; 267 - reg = <0x00422100 0x1000>; 267 + reg = <0x00132100 0x1000>; 268 268 reg-names = "phy"; 269 269 #address-cells = <1>; 270 270 #size-cells = <0>;
+3
arch/arm64/include/asm/kvm_host.h
··· 75 75 76 76 /* Interrupt controller */ 77 77 struct vgic_dist vgic; 78 + 79 + /* Mandated version of PSCI */ 80 + u32 psci_version; 78 81 }; 79 82 80 83 #define KVM_NR_MEM_OBJS 40
+1 -1
arch/arm64/include/asm/module.h
··· 39 39 u64 module_emit_plt_entry(struct module *mod, void *loc, const Elf64_Rela *rela, 40 40 Elf64_Sym *sym); 41 41 42 - u64 module_emit_adrp_veneer(struct module *mod, void *loc, u64 val); 42 + u64 module_emit_veneer_for_adrp(struct module *mod, void *loc, u64 val); 43 43 44 44 #ifdef CONFIG_RANDOMIZE_BASE 45 45 extern u64 module_alloc_base;
+2 -2
arch/arm64/include/asm/pgtable.h
··· 230 230 } 231 231 } 232 232 233 - extern void __sync_icache_dcache(pte_t pteval, unsigned long addr); 233 + extern void __sync_icache_dcache(pte_t pteval); 234 234 235 235 /* 236 236 * PTE bits configuration in the presence of hardware Dirty Bit Management ··· 253 253 pte_t old_pte; 254 254 255 255 if (pte_present(pte) && pte_user_exec(pte) && !pte_special(pte)) 256 - __sync_icache_dcache(pte, addr); 256 + __sync_icache_dcache(pte); 257 257 258 258 /* 259 259 * If the existing pte is valid, check for potential race with
+6
arch/arm64/include/uapi/asm/kvm.h
··· 206 206 #define KVM_REG_ARM_TIMER_CNT ARM64_SYS_REG(3, 3, 14, 3, 2) 207 207 #define KVM_REG_ARM_TIMER_CVAL ARM64_SYS_REG(3, 3, 14, 0, 2) 208 208 209 + /* KVM-as-firmware specific pseudo-registers */ 210 + #define KVM_REG_ARM_FW (0x0014 << KVM_REG_ARM_COPROC_SHIFT) 211 + #define KVM_REG_ARM_FW_REG(r) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \ 212 + KVM_REG_ARM_FW | ((r) & 0xffff)) 213 + #define KVM_REG_ARM_PSCI_VERSION KVM_REG_ARM_FW_REG(0) 214 + 209 215 /* Device Control API: ARM VGIC */ 210 216 #define KVM_DEV_ARM_VGIC_GRP_ADDR 0 211 217 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1
+1
arch/arm64/kernel/cpufeature.c
··· 868 868 static const struct midr_range kpti_safe_list[] = { 869 869 MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2), 870 870 MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN), 871 + { /* sentinel */ } 871 872 }; 872 873 char const *str = "command line option"; 873 874
+1 -1
arch/arm64/kernel/module-plts.c
··· 43 43 } 44 44 45 45 #ifdef CONFIG_ARM64_ERRATUM_843419 46 - u64 module_emit_adrp_veneer(struct module *mod, void *loc, u64 val) 46 + u64 module_emit_veneer_for_adrp(struct module *mod, void *loc, u64 val) 47 47 { 48 48 struct mod_plt_sec *pltsec = !in_init(mod, loc) ? &mod->arch.core : 49 49 &mod->arch.init;
+1 -1
arch/arm64/kernel/module.c
··· 215 215 insn &= ~BIT(31); 216 216 } else { 217 217 /* out of range for ADR -> emit a veneer */ 218 - val = module_emit_adrp_veneer(mod, place, val & ~0xfff); 218 + val = module_emit_veneer_for_adrp(mod, place, val & ~0xfff); 219 219 if (!val) 220 220 return -ENOEXEC; 221 221 insn = aarch64_insn_gen_branch_imm((u64)place, val,
+10 -10
arch/arm64/kernel/ptrace.c
··· 25 25 #include <linux/sched/signal.h> 26 26 #include <linux/sched/task_stack.h> 27 27 #include <linux/mm.h> 28 + #include <linux/nospec.h> 28 29 #include <linux/smp.h> 29 30 #include <linux/ptrace.h> 30 31 #include <linux/user.h> ··· 250 249 251 250 switch (note_type) { 252 251 case NT_ARM_HW_BREAK: 253 - if (idx < ARM_MAX_BRP) 254 - bp = tsk->thread.debug.hbp_break[idx]; 252 + if (idx >= ARM_MAX_BRP) 253 + goto out; 254 + idx = array_index_nospec(idx, ARM_MAX_BRP); 255 + bp = tsk->thread.debug.hbp_break[idx]; 255 256 break; 256 257 case NT_ARM_HW_WATCH: 257 - if (idx < ARM_MAX_WRP) 258 - bp = tsk->thread.debug.hbp_watch[idx]; 258 + if (idx >= ARM_MAX_WRP) 259 + goto out; 260 + idx = array_index_nospec(idx, ARM_MAX_WRP); 261 + bp = tsk->thread.debug.hbp_watch[idx]; 259 262 break; 260 263 } 261 264 265 + out: 262 266 return bp; 263 267 } 264 268 ··· 1464 1458 { 1465 1459 int ret; 1466 1460 u32 kdata; 1467 - mm_segment_t old_fs = get_fs(); 1468 1461 1469 - set_fs(KERNEL_DS); 1470 1462 /* Watchpoint */ 1471 1463 if (num < 0) { 1472 1464 ret = compat_ptrace_hbp_get(NT_ARM_HW_WATCH, tsk, num, &kdata); ··· 1475 1471 } else { 1476 1472 ret = compat_ptrace_hbp_get(NT_ARM_HW_BREAK, tsk, num, &kdata); 1477 1473 } 1478 - set_fs(old_fs); 1479 1474 1480 1475 if (!ret) 1481 1476 ret = put_user(kdata, data); ··· 1487 1484 { 1488 1485 int ret; 1489 1486 u32 kdata = 0; 1490 - mm_segment_t old_fs = get_fs(); 1491 1487 1492 1488 if (num == 0) 1493 1489 return 0; ··· 1495 1493 if (ret) 1496 1494 return ret; 1497 1495 1498 - set_fs(KERNEL_DS); 1499 1496 if (num < 0) 1500 1497 ret = compat_ptrace_hbp_set(NT_ARM_HW_WATCH, tsk, num, &kdata); 1501 1498 else 1502 1499 ret = compat_ptrace_hbp_set(NT_ARM_HW_BREAK, tsk, num, &kdata); 1503 - set_fs(old_fs); 1504 1500 1505 1501 return ret; 1506 1502 }
+2 -1
arch/arm64/kernel/traps.c
··· 277 277 * If we were single stepping, we want to get the step exception after 278 278 * we return from the trap. 279 279 */ 280 - user_fastforward_single_step(current); 280 + if (user_mode(regs)) 281 + user_fastforward_single_step(current); 281 282 } 282 283 283 284 static LIST_HEAD(undef_hook);
+13 -1
arch/arm64/kvm/guest.c
··· 25 25 #include <linux/module.h> 26 26 #include <linux/vmalloc.h> 27 27 #include <linux/fs.h> 28 + #include <kvm/arm_psci.h> 28 29 #include <asm/cputype.h> 29 30 #include <linux/uaccess.h> 30 31 #include <asm/kvm.h> ··· 206 205 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu) 207 206 { 208 207 return num_core_regs() + kvm_arm_num_sys_reg_descs(vcpu) 209 - + NUM_TIMER_REGS; 208 + + kvm_arm_get_fw_num_regs(vcpu) + NUM_TIMER_REGS; 210 209 } 211 210 212 211 /** ··· 226 225 uindices++; 227 226 } 228 227 228 + ret = kvm_arm_copy_fw_reg_indices(vcpu, uindices); 229 + if (ret) 230 + return ret; 231 + uindices += kvm_arm_get_fw_num_regs(vcpu); 232 + 229 233 ret = copy_timer_indices(vcpu, uindices); 230 234 if (ret) 231 235 return ret; ··· 249 243 if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) 250 244 return get_core_reg(vcpu, reg); 251 245 246 + if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW) 247 + return kvm_arm_get_fw_reg(vcpu, reg); 248 + 252 249 if (is_timer_reg(reg->id)) 253 250 return get_timer_reg(vcpu, reg); 254 251 ··· 267 258 /* Register group 16 means we set a core register. */ 268 259 if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_CORE) 269 260 return set_core_reg(vcpu, reg); 261 + 262 + if ((reg->id & KVM_REG_ARM_COPROC_MASK) == KVM_REG_ARM_FW) 263 + return kvm_arm_set_fw_reg(vcpu, reg); 270 264 271 265 if (is_timer_reg(reg->id)) 272 266 return set_timer_reg(vcpu, reg);
+2 -4
arch/arm64/kvm/sys_regs.c
··· 996 996 997 997 if (id == SYS_ID_AA64PFR0_EL1) { 998 998 if (val & (0xfUL << ID_AA64PFR0_SVE_SHIFT)) 999 - pr_err_once("kvm [%i]: SVE unsupported for guests, suppressing\n", 1000 - task_pid_nr(current)); 999 + kvm_debug("SVE unsupported for guests, suppressing\n"); 1001 1000 1002 1001 val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT); 1003 1002 } else if (id == SYS_ID_AA64MMFR1_EL1) { 1004 1003 if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT)) 1005 - pr_err_once("kvm [%i]: LORegions unsupported for guests, suppressing\n", 1006 - task_pid_nr(current)); 1004 + kvm_debug("LORegions unsupported for guests, suppressing\n"); 1007 1005 1008 1006 val &= ~(0xfUL << ID_AA64MMFR1_LOR_SHIFT); 1009 1007 }
+4
arch/arm64/lib/Makefile
··· 19 19 -fcall-saved-x13 -fcall-saved-x14 -fcall-saved-x15 \ 20 20 -fcall-saved-x18 -fomit-frame-pointer 21 21 CFLAGS_REMOVE_atomic_ll_sc.o := -pg 22 + GCOV_PROFILE_atomic_ll_sc.o := n 23 + KASAN_SANITIZE_atomic_ll_sc.o := n 24 + KCOV_INSTRUMENT_atomic_ll_sc.o := n 25 + UBSAN_SANITIZE_atomic_ll_sc.o := n 22 26 23 27 lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
+1 -1
arch/arm64/mm/flush.c
··· 58 58 flush_ptrace_access(vma, page, uaddr, dst, len); 59 59 } 60 60 61 - void __sync_icache_dcache(pte_t pte, unsigned long addr) 61 + void __sync_icache_dcache(pte_t pte) 62 62 { 63 63 struct page *page = pte_page(pte); 64 64
+1 -1
arch/powerpc/include/asm/powernv.h
··· 15 15 extern void powernv_set_nmmu_ptcr(unsigned long ptcr); 16 16 extern struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev, 17 17 unsigned long flags, 18 - struct npu_context *(*cb)(struct npu_context *, void *), 18 + void (*cb)(struct npu_context *, void *), 19 19 void *priv); 20 20 extern void pnv_npu2_destroy_context(struct npu_context *context, 21 21 struct pci_dev *gpdev);
+2 -5
arch/powerpc/kernel/mce_power.c
··· 441 441 if (pfn != ULONG_MAX) { 442 442 *phys_addr = 443 443 (pfn << PAGE_SHIFT); 444 - handled = 1; 445 444 } 446 445 } 447 446 } ··· 531 532 * kernel/exception-64s.h 532 533 */ 533 534 if (get_paca()->in_mce < MAX_MCE_DEPTH) 534 - if (!mce_find_instr_ea_and_pfn(regs, addr, 535 - phys_addr)) 536 - handled = 1; 535 + mce_find_instr_ea_and_pfn(regs, addr, phys_addr); 537 536 } 538 537 found = 1; 539 538 } ··· 569 572 const struct mce_ierror_table itable[]) 570 573 { 571 574 struct mce_error_info mce_err = { 0 }; 572 - uint64_t addr, phys_addr; 575 + uint64_t addr, phys_addr = ULONG_MAX; 573 576 uint64_t srr1 = regs->msr; 574 577 long handled; 575 578
+42 -7
arch/powerpc/kernel/smp.c
··· 566 566 #endif 567 567 568 568 #ifdef CONFIG_NMI_IPI 569 - static void stop_this_cpu(struct pt_regs *regs) 570 - #else 569 + static void nmi_stop_this_cpu(struct pt_regs *regs) 570 + { 571 + /* 572 + * This is a special case because it never returns, so the NMI IPI 573 + * handling would never mark it as done, which makes any later 574 + * smp_send_nmi_ipi() call spin forever. Mark it done now. 575 + * 576 + * IRQs are already hard disabled by the smp_handle_nmi_ipi. 577 + */ 578 + nmi_ipi_lock(); 579 + nmi_ipi_busy_count--; 580 + nmi_ipi_unlock(); 581 + 582 + /* Remove this CPU */ 583 + set_cpu_online(smp_processor_id(), false); 584 + 585 + spin_begin(); 586 + while (1) 587 + spin_cpu_relax(); 588 + } 589 + 590 + void smp_send_stop(void) 591 + { 592 + smp_send_nmi_ipi(NMI_IPI_ALL_OTHERS, nmi_stop_this_cpu, 1000000); 593 + } 594 + 595 + #else /* CONFIG_NMI_IPI */ 596 + 571 597 static void stop_this_cpu(void *dummy) 572 - #endif 573 598 { 574 599 /* Remove this CPU */ 575 600 set_cpu_online(smp_processor_id(), false); ··· 607 582 608 583 void smp_send_stop(void) 609 584 { 610 - #ifdef CONFIG_NMI_IPI 611 - smp_send_nmi_ipi(NMI_IPI_ALL_OTHERS, stop_this_cpu, 1000000); 612 - #else 585 + static bool stopped = false; 586 + 587 + /* 588 + * Prevent waiting on csd lock from a previous smp_send_stop. 589 + * This is racy, but in general callers try to do the right 590 + * thing and only fire off one smp_send_stop (e.g., see 591 + * kernel/panic.c) 592 + */ 593 + if (stopped) 594 + return; 595 + 596 + stopped = true; 597 + 613 598 smp_call_function(stop_this_cpu, NULL, 0); 614 - #endif 615 599 } 600 + #endif /* CONFIG_NMI_IPI */ 616 601 617 602 struct thread_info *current_set[NR_CPUS]; 618 603
+7
arch/powerpc/kvm/booke.c
··· 305 305 kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_FP_UNAVAIL); 306 306 } 307 307 308 + #ifdef CONFIG_ALTIVEC 309 + void kvmppc_core_queue_vec_unavail(struct kvm_vcpu *vcpu) 310 + { 311 + kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_ALTIVEC_UNAVAIL); 312 + } 313 + #endif 314 + 308 315 void kvmppc_core_queue_dec(struct kvm_vcpu *vcpu) 309 316 { 310 317 kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_DECREMENTER);
+2
arch/powerpc/mm/mem.c
··· 133 133 start, start + size, rc); 134 134 return -EFAULT; 135 135 } 136 + flush_inval_dcache_range(start, start + size); 136 137 137 138 return __add_pages(nid, start_pfn, nr_pages, altmap, want_memblock); 138 139 } ··· 160 159 161 160 /* Remove htab bolted mappings for this section of memory */ 162 161 start = (unsigned long)__va(start); 162 + flush_inval_dcache_range(start, start + size); 163 163 ret = remove_section_mapping(start, start + size); 164 164 165 165 /* Ensure all vmalloc mappings are flushed in case they also
-17
arch/powerpc/platforms/powernv/memtrace.c
··· 82 82 .open = simple_open, 83 83 }; 84 84 85 - static void flush_memory_region(u64 base, u64 size) 86 - { 87 - unsigned long line_size = ppc64_caches.l1d.size; 88 - u64 end = base + size; 89 - u64 addr; 90 - 91 - base = round_down(base, line_size); 92 - end = round_up(end, line_size); 93 - 94 - for (addr = base; addr < end; addr += line_size) 95 - asm volatile("dcbf 0,%0" : "=r" (addr) :: "memory"); 96 - } 97 - 98 85 static int check_memblock_online(struct memory_block *mem, void *arg) 99 86 { 100 87 if (mem->state != MEM_ONLINE) ··· 118 131 119 132 walk_memory_range(start_pfn, end_pfn, (void *)MEM_OFFLINE, 120 133 change_memblock_state); 121 - 122 - /* RCU grace period? */ 123 - flush_memory_region((u64)__va(start_pfn << PAGE_SHIFT), 124 - nr_pages << PAGE_SHIFT); 125 134 126 135 lock_device_hotplug(); 127 136 remove_memory(nid, start_pfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
+73 -15
arch/powerpc/platforms/powernv/npu-dma.c
··· 34 34 #define npu_to_phb(x) container_of(x, struct pnv_phb, npu) 35 35 36 36 /* 37 + * spinlock to protect initialisation of an npu_context for a particular 38 + * mm_struct. 39 + */ 40 + static DEFINE_SPINLOCK(npu_context_lock); 41 + 42 + /* 43 + * When an address shootdown range exceeds this threshold we invalidate the 44 + * entire TLB on the GPU for the given PID rather than each specific address in 45 + * the range. 46 + */ 47 + #define ATSD_THRESHOLD (2*1024*1024) 48 + 49 + /* 37 50 * Other types of TCE cache invalidation are not functional in the 38 51 * hardware. 39 52 */ ··· 414 401 bool nmmu_flush; 415 402 416 403 /* Callback to stop translation requests on a given GPU */ 417 - struct npu_context *(*release_cb)(struct npu_context *, void *); 404 + void (*release_cb)(struct npu_context *context, void *priv); 418 405 419 406 /* 420 407 * Private pointer passed to the above callback for usage by ··· 684 671 struct npu_context *npu_context = mn_to_npu_context(mn); 685 672 unsigned long address; 686 673 687 - for (address = start; address < end; address += PAGE_SIZE) 688 - mmio_invalidate(npu_context, 1, address, false); 674 + if (end - start > ATSD_THRESHOLD) { 675 + /* 676 + * Just invalidate the entire PID if the address range is too 677 + * large. 678 + */ 679 + mmio_invalidate(npu_context, 0, 0, true); 680 + } else { 681 + for (address = start; address < end; address += PAGE_SIZE) 682 + mmio_invalidate(npu_context, 1, address, false); 689 683 690 - /* Do the flush only on the final addess == end */ 691 - mmio_invalidate(npu_context, 1, address, true); 684 + /* Do the flush only on the final addess == end */ 685 + mmio_invalidate(npu_context, 1, address, true); 686 + } 692 687 } 693 688 694 689 static const struct mmu_notifier_ops nv_nmmu_notifier_ops = { ··· 717 696 * Returns an error if there no contexts are currently available or a 718 697 * npu_context which should be passed to pnv_npu2_handle_fault(). 
719 698 * 720 - * mmap_sem must be held in write mode. 699 + * mmap_sem must be held in write mode and must not be called from interrupt 700 + * context. 721 701 */ 722 702 struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev, 723 703 unsigned long flags, 724 - struct npu_context *(*cb)(struct npu_context *, void *), 704 + void (*cb)(struct npu_context *, void *), 725 705 void *priv) 726 706 { 727 707 int rc; ··· 765 743 /* 766 744 * Setup the NPU context table for a particular GPU. These need to be 767 745 * per-GPU as we need the tables to filter ATSDs when there are no 768 - * active contexts on a particular GPU. 746 + * active contexts on a particular GPU. It is safe for these to be 747 + * called concurrently with destroy as the OPAL call takes appropriate 748 + * locks and refcounts on init/destroy. 769 749 */ 770 750 rc = opal_npu_init_context(nphb->opal_id, mm->context.id, flags, 771 751 PCI_DEVID(gpdev->bus->number, gpdev->devfn)); ··· 778 754 * We store the npu pci device so we can more easily get at the 779 755 * associated npus. 780 756 */ 757 + spin_lock(&npu_context_lock); 781 758 npu_context = mm->context.npu_context; 759 + if (npu_context) { 760 + if (npu_context->release_cb != cb || 761 + npu_context->priv != priv) { 762 + spin_unlock(&npu_context_lock); 763 + opal_npu_destroy_context(nphb->opal_id, mm->context.id, 764 + PCI_DEVID(gpdev->bus->number, 765 + gpdev->devfn)); 766 + return ERR_PTR(-EINVAL); 767 + } 768 + 769 + WARN_ON(!kref_get_unless_zero(&npu_context->kref)); 770 + } 771 + spin_unlock(&npu_context_lock); 772 + 782 773 if (!npu_context) { 774 + /* 775 + * We can set up these fields without holding the 776 + * npu_context_lock as the npu_context hasn't been returned to 777 + * the caller meaning it can't be destroyed. Parallel allocation 778 + * is protected against by mmap_sem. 
779 + */ 783 780 rc = -ENOMEM; 784 781 npu_context = kzalloc(sizeof(struct npu_context), GFP_KERNEL); 785 782 if (npu_context) { ··· 819 774 } 820 775 821 776 mm->context.npu_context = npu_context; 822 - } else { 823 - WARN_ON(!kref_get_unless_zero(&npu_context->kref)); 824 777 } 825 778 826 779 npu_context->release_cb = cb; ··· 857 814 mm_context_remove_copro(npu_context->mm); 858 815 859 816 npu_context->mm->context.npu_context = NULL; 860 - mmu_notifier_unregister(&npu_context->mn, 861 - npu_context->mm); 862 - 863 - kfree(npu_context); 864 817 } 865 818 819 + /* 820 + * Destroy a context on the given GPU. May free the npu_context if it is no 821 + * longer active on any GPUs. Must not be called from interrupt context. 822 + */ 866 823 void pnv_npu2_destroy_context(struct npu_context *npu_context, 867 824 struct pci_dev *gpdev) 868 825 { 826 + int removed; 869 827 struct pnv_phb *nphb; 870 828 struct npu *npu; 871 829 struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0); ··· 888 844 WRITE_ONCE(npu_context->npdev[npu->index][nvlink_index], NULL); 889 845 opal_npu_destroy_context(nphb->opal_id, npu_context->mm->context.id, 890 846 PCI_DEVID(gpdev->bus->number, gpdev->devfn)); 891 - kref_put(&npu_context->kref, pnv_npu2_release_context); 847 + spin_lock(&npu_context_lock); 848 + removed = kref_put(&npu_context->kref, pnv_npu2_release_context); 849 + spin_unlock(&npu_context_lock); 850 + 851 + /* 852 + * We need to do this outside of pnv_npu2_release_context so that it is 853 + * outside the spinlock as mmu_notifier_destroy uses SRCU. 854 + */ 855 + if (removed) { 856 + mmu_notifier_unregister(&npu_context->mn, 857 + npu_context->mm); 858 + 859 + kfree(npu_context); 860 + } 861 + 892 862 } 893 863 EXPORT_SYMBOL(pnv_npu2_destroy_context); 894 864
+5 -3
arch/powerpc/platforms/powernv/opal-rtc.c
··· 48 48 49 49 while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) { 50 50 rc = opal_rtc_read(&__y_m_d, &__h_m_s_ms); 51 - if (rc == OPAL_BUSY_EVENT) 51 + if (rc == OPAL_BUSY_EVENT) { 52 + mdelay(OPAL_BUSY_DELAY_MS); 52 53 opal_poll_events(NULL); 53 - else if (rc == OPAL_BUSY) 54 - mdelay(10); 54 + } else if (rc == OPAL_BUSY) { 55 + mdelay(OPAL_BUSY_DELAY_MS); 56 + } 55 57 } 56 58 if (rc != OPAL_SUCCESS) 57 59 return 0;
+1 -3
arch/riscv/Kconfig
··· 11 11 select ARCH_WANT_FRAME_POINTERS 12 12 select CLONE_BACKWARDS 13 13 select COMMON_CLK 14 + select DMA_DIRECT_OPS 14 15 select GENERIC_CLOCKEVENTS 15 16 select GENERIC_CPU_DEVICES 16 17 select GENERIC_IRQ_SHOW ··· 89 88 90 89 config HAVE_KPROBES 91 90 def_bool n 92 - 93 - config DMA_DIRECT_OPS 94 - def_bool y 95 91 96 92 menu "Platform type" 97 93
-1
arch/riscv/include/asm/Kbuild
··· 15 15 generic-y += futex.h 16 16 generic-y += hardirq.h 17 17 generic-y += hash.h 18 - generic-y += handle_irq.h 19 18 generic-y += hw_irq.h 20 19 generic-y += ioctl.h 21 20 generic-y += ioctls.h
+1 -1
arch/riscv/kernel/vdso/Makefile
··· 52 52 # Add -lgcc so rv32 gets static muldi3 and lshrdi3 definitions. 53 53 # Make sure only to export the intended __vdso_xxx symbol offsets. 54 54 quiet_cmd_vdsold = VDSOLD $@ 55 - cmd_vdsold = $(CC) $(KCFLAGS) -nostdlib $(SYSCFLAGS_$(@F)) \ 55 + cmd_vdsold = $(CC) $(KCFLAGS) $(call cc-option, -no-pie) -nostdlib $(SYSCFLAGS_$(@F)) \ 56 56 -Wl,-T,$(filter-out FORCE,$^) -o $@.tmp -lgcc && \ 57 57 $(CROSS_COMPILE)objcopy \ 58 58 $(patsubst %, -G __vdso_%, $(vdso-syms)) $@.tmp $@
+3
arch/s390/include/asm/thread_info.h
··· 45 45 void arch_release_task_struct(struct task_struct *tsk); 46 46 int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src); 47 47 48 + void arch_setup_new_exec(void); 49 + #define arch_setup_new_exec arch_setup_new_exec 50 + 48 51 #endif 49 52 50 53 /*
+2 -2
arch/s390/kernel/module.c
··· 465 465 apply_alternatives(aseg, aseg + s->sh_size); 466 466 467 467 if (IS_ENABLED(CONFIG_EXPOLINE) && 468 - (!strcmp(".nospec_call_table", secname))) 468 + (!strncmp(".s390_indirect", secname, 14))) 469 469 nospec_revert(aseg, aseg + s->sh_size); 470 470 471 471 if (IS_ENABLED(CONFIG_EXPOLINE) && 472 - (!strcmp(".nospec_return_table", secname))) 472 + (!strncmp(".s390_return", secname, 12))) 473 473 nospec_revert(aseg, aseg + s->sh_size); 474 474 } 475 475
+4 -4
arch/s390/kernel/perf_cpum_cf_events.c
··· 123 123 CPUMF_EVENT_ATTR(cf_zec12, TX_NC_TABORT, 0x00b1); 124 124 CPUMF_EVENT_ATTR(cf_zec12, TX_C_TABORT_NO_SPECIAL, 0x00b2); 125 125 CPUMF_EVENT_ATTR(cf_zec12, TX_C_TABORT_SPECIAL, 0x00b3); 126 - CPUMF_EVENT_ATTR(cf_z13, L1D_WRITES_RO_EXCL, 0x0080); 126 + CPUMF_EVENT_ATTR(cf_z13, L1D_RO_EXCL_WRITES, 0x0080); 127 127 CPUMF_EVENT_ATTR(cf_z13, DTLB1_WRITES, 0x0081); 128 128 CPUMF_EVENT_ATTR(cf_z13, DTLB1_MISSES, 0x0082); 129 129 CPUMF_EVENT_ATTR(cf_z13, DTLB1_HPAGE_WRITES, 0x0083); ··· 179 179 CPUMF_EVENT_ATTR(cf_z13, TX_C_TABORT_SPECIAL, 0x00dc); 180 180 CPUMF_EVENT_ATTR(cf_z13, MT_DIAG_CYCLES_ONE_THR_ACTIVE, 0x01c0); 181 181 CPUMF_EVENT_ATTR(cf_z13, MT_DIAG_CYCLES_TWO_THR_ACTIVE, 0x01c1); 182 - CPUMF_EVENT_ATTR(cf_z14, L1D_WRITES_RO_EXCL, 0x0080); 182 + CPUMF_EVENT_ATTR(cf_z14, L1D_RO_EXCL_WRITES, 0x0080); 183 183 CPUMF_EVENT_ATTR(cf_z14, DTLB2_WRITES, 0x0081); 184 184 CPUMF_EVENT_ATTR(cf_z14, DTLB2_MISSES, 0x0082); 185 185 CPUMF_EVENT_ATTR(cf_z14, DTLB2_HPAGE_WRITES, 0x0083); ··· 371 371 }; 372 372 373 373 static struct attribute *cpumcf_z13_pmu_event_attr[] __initdata = { 374 - CPUMF_EVENT_PTR(cf_z13, L1D_WRITES_RO_EXCL), 374 + CPUMF_EVENT_PTR(cf_z13, L1D_RO_EXCL_WRITES), 375 375 CPUMF_EVENT_PTR(cf_z13, DTLB1_WRITES), 376 376 CPUMF_EVENT_PTR(cf_z13, DTLB1_MISSES), 377 377 CPUMF_EVENT_PTR(cf_z13, DTLB1_HPAGE_WRITES), ··· 431 431 }; 432 432 433 433 static struct attribute *cpumcf_z14_pmu_event_attr[] __initdata = { 434 - CPUMF_EVENT_PTR(cf_z14, L1D_WRITES_RO_EXCL), 434 + CPUMF_EVENT_PTR(cf_z14, L1D_RO_EXCL_WRITES), 435 435 CPUMF_EVENT_PTR(cf_z14, DTLB2_WRITES), 436 436 CPUMF_EVENT_PTR(cf_z14, DTLB2_MISSES), 437 437 CPUMF_EVENT_PTR(cf_z14, DTLB2_HPAGE_WRITES),
+10
arch/s390/kernel/process.c
··· 29 29 #include <linux/random.h> 30 30 #include <linux/export.h> 31 31 #include <linux/init_task.h> 32 + #include <asm/cpu_mf.h> 32 33 #include <asm/io.h> 33 34 #include <asm/processor.h> 34 35 #include <asm/vtimer.h> ··· 47 46 48 47 void flush_thread(void) 49 48 { 49 + } 50 + 51 + void arch_setup_new_exec(void) 52 + { 53 + if (S390_lowcore.current_pid != current->pid) { 54 + S390_lowcore.current_pid = current->pid; 55 + if (test_facility(40)) 56 + lpp(&S390_lowcore.lpp); 57 + } 50 58 } 51 59 52 60 void arch_release_task_struct(struct task_struct *tsk)
+9
arch/s390/kernel/uprobes.c
··· 150 150 return orig; 151 151 } 152 152 153 + bool arch_uretprobe_is_alive(struct return_instance *ret, enum rp_check ctx, 154 + struct pt_regs *regs) 155 + { 156 + if (ctx == RP_CHECK_CHAIN_CALL) 157 + return user_stack_pointer(regs) <= ret->stack; 158 + else 159 + return user_stack_pointer(regs) < ret->stack; 160 + } 161 + 153 162 /* Instruction Emulation */ 154 163 155 164 static void adjust_psw_addr(psw_t *psw, unsigned long len)
+4
arch/x86/Kconfig
··· 52 52 select ARCH_HAS_DEVMEM_IS_ALLOWED 53 53 select ARCH_HAS_ELF_RANDOMIZE 54 54 select ARCH_HAS_FAST_MULTIPLIER 55 + select ARCH_HAS_FILTER_PGPROT 55 56 select ARCH_HAS_FORTIFY_SOURCE 56 57 select ARCH_HAS_GCOV_PROFILE_ALL 57 58 select ARCH_HAS_KCOV if X86_64 ··· 272 271 def_bool y 273 272 274 273 config ARCH_HAS_CACHE_LINE_SIZE 274 + def_bool y 275 + 276 + config ARCH_HAS_FILTER_PGPROT 275 277 def_bool y 276 278 277 279 config HAVE_SETUP_PER_CPU_AREA
+4 -4
arch/x86/entry/entry_64_compat.S
··· 84 84 pushq %rdx /* pt_regs->dx */ 85 85 pushq %rcx /* pt_regs->cx */ 86 86 pushq $-ENOSYS /* pt_regs->ax */ 87 - pushq $0 /* pt_regs->r8 = 0 */ 87 + pushq %r8 /* pt_regs->r8 */ 88 88 xorl %r8d, %r8d /* nospec r8 */ 89 - pushq $0 /* pt_regs->r9 = 0 */ 89 + pushq %r9 /* pt_regs->r9 */ 90 90 xorl %r9d, %r9d /* nospec r9 */ 91 - pushq $0 /* pt_regs->r10 = 0 */ 91 + pushq %r10 /* pt_regs->r10 */ 92 92 xorl %r10d, %r10d /* nospec r10 */ 93 - pushq $0 /* pt_regs->r11 = 0 */ 93 + pushq %r11 /* pt_regs->r11 */ 94 94 xorl %r11d, %r11d /* nospec r11 */ 95 95 pushq %rbx /* pt_regs->rbx */ 96 96 xorl %ebx, %ebx /* nospec rbx */
+6 -3
arch/x86/events/intel/core.c
··· 3339 3339 3340 3340 cpuc->lbr_sel = NULL; 3341 3341 3342 - flip_smm_bit(&x86_pmu.attr_freeze_on_smi); 3342 + if (x86_pmu.version > 1) 3343 + flip_smm_bit(&x86_pmu.attr_freeze_on_smi); 3343 3344 3344 3345 if (!cpuc->shared_regs) 3345 3346 return; ··· 3503 3502 .cpu_dying = intel_pmu_cpu_dying, 3504 3503 }; 3505 3504 3505 + static struct attribute *intel_pmu_attrs[]; 3506 + 3506 3507 static __initconst const struct x86_pmu intel_pmu = { 3507 3508 .name = "Intel", 3508 3509 .handle_irq = intel_pmu_handle_irq, ··· 3535 3532 3536 3533 .format_attrs = intel_arch3_formats_attr, 3537 3534 .events_sysfs_show = intel_event_sysfs_show, 3535 + 3536 + .attrs = intel_pmu_attrs, 3538 3537 3539 3538 .cpu_prepare = intel_pmu_cpu_prepare, 3540 3539 .cpu_starting = intel_pmu_cpu_starting, ··· 3916 3911 3917 3912 x86_pmu.max_pebs_events = min_t(unsigned, MAX_PEBS_EVENTS, x86_pmu.num_counters); 3918 3913 3919 - 3920 - x86_pmu.attrs = intel_pmu_attrs; 3921 3914 /* 3922 3915 * Quirk: v2 perfmon does not report fixed-purpose events, so 3923 3916 * assume at least 3 events, when not running in a hypervisor:
+1
arch/x86/include/asm/cpufeatures.h
··· 320 320 #define X86_FEATURE_AVX512_VPOPCNTDQ (16*32+14) /* POPCNT for vectors of DW/QW */ 321 321 #define X86_FEATURE_LA57 (16*32+16) /* 5-level page tables */ 322 322 #define X86_FEATURE_RDPID (16*32+22) /* RDPID instruction */ 323 + #define X86_FEATURE_CLDEMOTE (16*32+25) /* CLDEMOTE instruction */ 323 324 324 325 /* AMD-defined CPU features, CPUID level 0x80000007 (EBX), word 17 */ 325 326 #define X86_FEATURE_OVERFLOW_RECOV (17*32+ 0) /* MCA overflow recovery support */
+17 -2
arch/x86/include/asm/ftrace.h
··· 46 46 #endif /* CONFIG_FUNCTION_TRACER */ 47 47 48 48 49 - #if !defined(__ASSEMBLY__) && !defined(COMPILE_OFFSETS) 49 + #ifndef __ASSEMBLY__ 50 + 51 + #define ARCH_HAS_SYSCALL_MATCH_SYM_NAME 52 + static inline bool arch_syscall_match_sym_name(const char *sym, const char *name) 53 + { 54 + /* 55 + * Compare the symbol name with the system call name. Skip the 56 + * "__x64_sys", "__ia32_sys" or simple "sys" prefix. 57 + */ 58 + return !strcmp(sym + 3, name + 3) || 59 + (!strncmp(sym, "__x64_", 6) && !strcmp(sym + 9, name + 3)) || 60 + (!strncmp(sym, "__ia32_", 7) && !strcmp(sym + 10, name + 3)); 61 + } 62 + 63 + #ifndef COMPILE_OFFSETS 50 64 51 65 #if defined(CONFIG_FTRACE_SYSCALLS) && defined(CONFIG_IA32_EMULATION) 52 66 #include <asm/compat.h> ··· 81 67 return false; 82 68 } 83 69 #endif /* CONFIG_FTRACE_SYSCALLS && CONFIG_IA32_EMULATION */ 84 - #endif /* !__ASSEMBLY__ && !COMPILE_OFFSETS */ 70 + #endif /* !COMPILE_OFFSETS */ 71 + #endif /* !__ASSEMBLY__ */ 85 72 86 73 #endif /* _ASM_X86_FTRACE_H */
-7
arch/x86/include/asm/irq_vectors.h
··· 34 34 * (0x80 is the syscall vector, 0x30-0x3f are for ISA) 35 35 */ 36 36 #define FIRST_EXTERNAL_VECTOR 0x20 37 - /* 38 - * We start allocating at 0x21 to spread out vectors evenly between 39 - * priority levels. (0x80 is the syscall vector) 40 - */ 41 - #define VECTOR_OFFSET_START 1 42 37 43 38 /* 44 39 * Reserve the lowest usable vector (and hence lowest priority) 0x20 for ··· 113 118 #else 114 119 #define FIRST_SYSTEM_VECTOR NR_VECTORS 115 120 #endif 116 - 117 - #define FPU_IRQ 13 118 121 119 122 /* 120 123 * Size the maximum number of interrupts.
+1 -1
arch/x86/include/asm/jailhouse_para.h
··· 1 - /* SPDX-License-Identifier: GPL2.0 */ 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 2 3 3 /* 4 4 * Jailhouse paravirt detection
+5
arch/x86/include/asm/pgtable.h
··· 601 601 602 602 #define canon_pgprot(p) __pgprot(massage_pgprot(p)) 603 603 604 + static inline pgprot_t arch_filter_pgprot(pgprot_t prot) 605 + { 606 + return canon_pgprot(prot); 607 + } 608 + 604 609 static inline int is_new_memtype_allowed(u64 paddr, unsigned long size, 605 610 enum page_cache_mode pcm, 606 611 enum page_cache_mode new_pcm)
+4 -4
arch/x86/include/asm/pgtable_64_types.h
··· 105 105 #define LDT_PGD_ENTRY (pgtable_l5_enabled ? LDT_PGD_ENTRY_L5 : LDT_PGD_ENTRY_L4) 106 106 #define LDT_BASE_ADDR (LDT_PGD_ENTRY << PGDIR_SHIFT) 107 107 108 - #define __VMALLOC_BASE_L4 0xffffc90000000000 109 - #define __VMALLOC_BASE_L5 0xffa0000000000000 108 + #define __VMALLOC_BASE_L4 0xffffc90000000000UL 109 + #define __VMALLOC_BASE_L5 0xffa0000000000000UL 110 110 111 111 #define VMALLOC_SIZE_TB_L4 32UL 112 112 #define VMALLOC_SIZE_TB_L5 12800UL 113 113 114 - #define __VMEMMAP_BASE_L4 0xffffea0000000000 115 - #define __VMEMMAP_BASE_L5 0xffd4000000000000 114 + #define __VMEMMAP_BASE_L4 0xffffea0000000000UL 115 + #define __VMEMMAP_BASE_L5 0xffd4000000000000UL 116 116 117 117 #ifdef CONFIG_DYNAMIC_MEMORY_LAYOUT 118 118 # define VMALLOC_START vmalloc_base
+31
arch/x86/include/uapi/asm/msgbuf.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
2 + #ifndef __ASM_X64_MSGBUF_H
3 + #define __ASM_X64_MSGBUF_H
4 +
5 + #if !defined(__x86_64__) || !defined(__ILP32__)
1 6 #include <asm-generic/msgbuf.h>
7 + #else
8 + /*
9 + * The msqid64_ds structure for x86 architecture with x32 ABI.
10 + *
11 + * On x86-32 and x86-64 we can just use the generic definition, but
12 + * x32 uses the same binary layout as x86_64, which is different
13 + * from other 32-bit architectures.
14 + */
15 +
16 + struct msqid64_ds {
17 + struct ipc64_perm msg_perm;
18 + __kernel_time_t msg_stime; /* last msgsnd time */
19 + __kernel_time_t msg_rtime; /* last msgrcv time */
20 + __kernel_time_t msg_ctime; /* last change time */
21 + __kernel_ulong_t msg_cbytes; /* current number of bytes on queue */
22 + __kernel_ulong_t msg_qnum; /* number of messages in queue */
23 + __kernel_ulong_t msg_qbytes; /* max number of bytes on queue */
24 + __kernel_pid_t msg_lspid; /* pid of last msgsnd */
25 + __kernel_pid_t msg_lrpid; /* last receive pid */
26 + __kernel_ulong_t __unused4;
27 + __kernel_ulong_t __unused5;
28 + };
29 +
30 + #endif
31 +
32 + #endif /* __ASM_X64_MSGBUF_H */
+42
arch/x86/include/uapi/asm/shmbuf.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
2 + #ifndef __ASM_X86_SHMBUF_H
3 + #define __ASM_X86_SHMBUF_H
4 +
5 + #if !defined(__x86_64__) || !defined(__ILP32__)
1 6 #include <asm-generic/shmbuf.h>
7 + #else
8 + /*
9 + * The shmid64_ds structure for x86 architecture with x32 ABI.
10 + *
11 + * On x86-32 and x86-64 we can just use the generic definition, but
12 + * x32 uses the same binary layout as x86_64, which is different
13 + * from other 32-bit architectures.
14 + */
15 +
16 + struct shmid64_ds {
17 + struct ipc64_perm shm_perm; /* operation perms */
18 + size_t shm_segsz; /* size of segment (bytes) */
19 + __kernel_time_t shm_atime; /* last attach time */
20 + __kernel_time_t shm_dtime; /* last detach time */
21 + __kernel_time_t shm_ctime; /* last change time */
22 + __kernel_pid_t shm_cpid; /* pid of creator */
23 + __kernel_pid_t shm_lpid; /* pid of last operator */
24 + __kernel_ulong_t shm_nattch; /* no. of current attaches */
25 + __kernel_ulong_t __unused4;
26 + __kernel_ulong_t __unused5;
27 + };
28 +
29 + struct shminfo64 {
30 + __kernel_ulong_t shmmax;
31 + __kernel_ulong_t shmmin;
32 + __kernel_ulong_t shmmni;
33 + __kernel_ulong_t shmseg;
34 + __kernel_ulong_t shmall;
35 + __kernel_ulong_t __unused1;
36 + __kernel_ulong_t __unused2;
37 + __kernel_ulong_t __unused3;
38 + __kernel_ulong_t __unused4;
39 + };
40 +
41 + #endif
42 +
43 + #endif /* __ASM_X86_SHMBUF_H */
+3
arch/x86/kernel/cpu/intel.c
··· 835 835 { 0x5d, TLB_DATA_4K_4M, 256, " TLB_DATA 4 KByte and 4 MByte pages" }, 836 836 { 0x61, TLB_INST_4K, 48, " TLB_INST 4 KByte pages, full associative" }, 837 837 { 0x63, TLB_DATA_1G, 4, " TLB_DATA 1 GByte pages, 4-way set associative" }, 838 + { 0x6b, TLB_DATA_4K, 256, " TLB_DATA 4 KByte pages, 8-way associative" }, 839 + { 0x6c, TLB_DATA_2M_4M, 128, " TLB_DATA 2 MByte or 4 MByte pages, 8-way associative" }, 840 + { 0x6d, TLB_DATA_1G, 16, " TLB_DATA 1 GByte pages, fully associative" }, 838 841 { 0x76, TLB_INST_2M_4M, 8, " TLB_INST 2-MByte or 4-MByte pages, fully associative" }, 839 842 { 0xb0, TLB_INST_4K, 128, " TLB_INST 4 KByte pages, 4-way set associative" }, 840 843 { 0xb1, TLB_INST_2M_4M, 4, " TLB_INST 2M pages, 4-way, 8 entries or 4M pages, 4-way entries" },
+2 -4
arch/x86/kernel/cpu/microcode/core.c
··· 564 564 apply_microcode_local(&err); 565 565 spin_unlock(&update_lock); 566 566 567 + /* siblings return UCODE_OK because their engine got updated already */ 567 568 if (err > UCODE_NFOUND) { 568 569 pr_warn("Error reloading microcode on CPU %d\n", cpu); 569 - return -1; 570 - /* siblings return UCODE_OK because their engine got updated already */ 570 + ret = -1; 571 571 } else if (err == UCODE_UPDATED || err == UCODE_OK) { 572 572 ret = 1; 573 - } else { 574 - return ret; 575 573 } 576 574 577 575 /*
-2
arch/x86/kernel/cpu/microcode/intel.c
··· 485 485 */ 486 486 static void save_mc_for_early(u8 *mc, unsigned int size) 487 487 { 488 - #ifdef CONFIG_HOTPLUG_CPU 489 488 /* Synchronization during CPU hotplug. */ 490 489 static DEFINE_MUTEX(x86_cpu_microcode_mutex); 491 490 ··· 494 495 show_saved_mc(); 495 496 496 497 mutex_unlock(&x86_cpu_microcode_mutex); 497 - #endif 498 498 } 499 499 500 500 static bool load_builtin_intel_microcode(struct cpio_data *cp)
+1 -1
arch/x86/kernel/jailhouse.c
··· 1 - // SPDX-License-Identifier: GPL2.0 1 + // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 3 * Jailhouse paravirt_ops implementation 4 4 *
+6
arch/x86/kernel/setup.c
··· 50 50 #include <linux/init_ohci1394_dma.h> 51 51 #include <linux/kvm_para.h> 52 52 #include <linux/dma-contiguous.h> 53 + #include <xen/xen.h> 53 54 54 55 #include <linux/errno.h> 55 56 #include <linux/kernel.h> ··· 533 532 if (ret != 0 || crash_size <= 0) 534 533 return; 535 534 high = true; 535 + } 536 + 537 + if (xen_pv_domain()) { 538 + pr_info("Ignoring crashkernel for a Xen PV domain\n"); 539 + return; 536 540 } 537 541 538 542 /* 0 means: find the address automatically */
+2
arch/x86/kernel/smpboot.c
··· 1571 1571 void *mwait_ptr; 1572 1572 int i; 1573 1573 1574 + if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) 1575 + return; 1574 1576 if (!this_cpu_has(X86_FEATURE_MWAIT)) 1575 1577 return; 1576 1578 if (!this_cpu_has(X86_FEATURE_CLFLUSH))
+4 -10
arch/x86/kvm/vmx.c
··· 4544 4544 __vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, invalidate_gpa); 4545 4545 } 4546 4546 4547 - static void vmx_flush_tlb_ept_only(struct kvm_vcpu *vcpu) 4548 - { 4549 - if (enable_ept) 4550 - vmx_flush_tlb(vcpu, true); 4551 - } 4552 - 4553 4547 static void vmx_decache_cr0_guest_bits(struct kvm_vcpu *vcpu) 4554 4548 { 4555 4549 ulong cr0_guest_owned_bits = vcpu->arch.cr0_guest_owned_bits; ··· 9272 9278 } else { 9273 9279 sec_exec_control &= ~SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE; 9274 9280 sec_exec_control |= SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES; 9275 - vmx_flush_tlb_ept_only(vcpu); 9281 + vmx_flush_tlb(vcpu, true); 9276 9282 } 9277 9283 vmcs_write32(SECONDARY_VM_EXEC_CONTROL, sec_exec_control); 9278 9284 ··· 9300 9306 !nested_cpu_has2(get_vmcs12(&vmx->vcpu), 9301 9307 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) { 9302 9308 vmcs_write64(APIC_ACCESS_ADDR, hpa); 9303 - vmx_flush_tlb_ept_only(vcpu); 9309 + vmx_flush_tlb(vcpu, true); 9304 9310 } 9305 9311 } 9306 9312 ··· 11214 11220 } 11215 11221 } else if (nested_cpu_has2(vmcs12, 11216 11222 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) { 11217 - vmx_flush_tlb_ept_only(vcpu); 11223 + vmx_flush_tlb(vcpu, true); 11218 11224 } 11219 11225 11220 11226 /* ··· 12067 12073 } else if (!nested_cpu_has_ept(vmcs12) && 12068 12074 nested_cpu_has2(vmcs12, 12069 12075 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) { 12070 - vmx_flush_tlb_ept_only(vcpu); 12076 + vmx_flush_tlb(vcpu, true); 12071 12077 } 12072 12078 12073 12079 /* This is needed for same reason as it was needed in prepare_vmcs02 */
-7
arch/x86/kvm/x86.h
··· 302 302 __rem; \ 303 303 }) 304 304 305 - #define KVM_X86_DISABLE_EXITS_MWAIT (1 << 0) 306 - #define KVM_X86_DISABLE_EXITS_HTL (1 << 1) 307 - #define KVM_X86_DISABLE_EXITS_PAUSE (1 << 2) 308 - #define KVM_X86_DISABLE_VALID_EXITS (KVM_X86_DISABLE_EXITS_MWAIT | \ 309 - KVM_X86_DISABLE_EXITS_HTL | \ 310 - KVM_X86_DISABLE_EXITS_PAUSE) 311 - 312 305 static inline bool kvm_mwait_in_guest(struct kvm *kvm) 313 306 { 314 307 return kvm->arch.mwait_in_guest;
+36 -16
arch/x86/mm/pageattr.c
··· 93 93 static inline void split_page_count(int level) { }
94 94 #endif
95 95
96 + static inline int
97 + within(unsigned long addr, unsigned long start, unsigned long end)
98 + {
99 + return addr >= start && addr < end;
100 + }
101 +
102 + static inline int
103 + within_inclusive(unsigned long addr, unsigned long start, unsigned long end)
104 + {
105 + return addr >= start && addr <= end;
106 + }
107 +
96 108 #ifdef CONFIG_X86_64
97 109
98 110 static inline unsigned long highmap_start_pfn(void)
··· 118 106 return __pa_symbol(roundup(_brk_end, PMD_SIZE) - 1) >> PAGE_SHIFT;
119 107 }
120 108
109 + static bool __cpa_pfn_in_highmap(unsigned long pfn)
110 + {
111 + /*
112 + * Kernel text has an alias mapping at a high address, known
113 + * here as "highmap".
114 + */
115 + return within_inclusive(pfn, highmap_start_pfn(), highmap_end_pfn());
116 + }
117 +
118 + #else
119 +
120 + static bool __cpa_pfn_in_highmap(unsigned long pfn)
121 + {
122 + /* There is no highmap on 32-bit */
123 + return false;
124 + }
125 +
121 126 #endif
122 -
123 - static inline int
124 - within(unsigned long addr, unsigned long start, unsigned long end)
125 - {
126 - return addr >= start && addr < end;
127 - }
128 -
129 - static inline int
130 - within_inclusive(unsigned long addr, unsigned long start, unsigned long end)
131 - {
132 - return addr >= start && addr <= end;
133 - }
134 127
135 128 /*
136 129 * Flushing functions
··· 189 172
190 173 static void cpa_flush_all(unsigned long cache)
191 174 {
192 - BUG_ON(irqs_disabled());
175 + BUG_ON(irqs_disabled() && !early_boot_irqs_disabled);
193 176
194 177 on_each_cpu(__cpa_flush_all, (void *) cache, 1);
195 178 }
··· 253 236 unsigned long do_wbinvd = cache && numpages >= 1024; /* 4M threshold */
254 237 #endif
255 238
256 - BUG_ON(irqs_disabled());
239 + BUG_ON(irqs_disabled() && !early_boot_irqs_disabled);
257 240
258 241 on_each_cpu(__cpa_flush_all, (void *) do_wbinvd, 1);
259 242
··· 1200 1183 cpa->numpages = 1;
1201 1184 cpa->pfn = __pa(vaddr) >> PAGE_SHIFT;
1202 1185 return 0;
1186 +
1187 + } else if (__cpa_pfn_in_highmap(cpa->pfn)) {
1188 + /* Faults in the highmap are OK, so do not warn: */
1189 + return -EFAULT;
1203 1190 } else {
1204 1191 WARN(1, KERN_WARNING "CPA: called for zero pte. "
1205 1192 "vaddr = %lx cpa->vaddr = %lx\n", vaddr,
··· 1356 1335 * to touch the high mapped kernel as well:
1357 1336 */
1358 1337 if (!within(vaddr, (unsigned long)_text, _brk_end) &&
1359 - within_inclusive(cpa->pfn, highmap_start_pfn(),
1360 - highmap_end_pfn())) {
1338 + __cpa_pfn_in_highmap(cpa->pfn)) {
1361 1339 unsigned long temp_cpa_vaddr = (cpa->pfn << PAGE_SHIFT) +
1362 1340 __START_KERNEL_map - phys_base;
1363 1341 alias_cpa = *cpa;
+23 -3
arch/x86/mm/pti.c
··· 421 421 if (boot_cpu_has(X86_FEATURE_K8)) 422 422 return false; 423 423 424 + /* 425 + * RANDSTRUCT derives its hardening benefits from the 426 + * attacker's lack of knowledge about the layout of kernel 427 + * data structures. Keep the kernel image non-global in 428 + * cases where RANDSTRUCT is in use to help keep the layout a 429 + * secret. 430 + */ 431 + if (IS_ENABLED(CONFIG_GCC_PLUGIN_RANDSTRUCT)) 432 + return false; 433 + 424 434 return true; 425 435 } 426 436 ··· 440 430 */ 441 431 void pti_clone_kernel_text(void) 442 432 { 433 + /* 434 + * rodata is part of the kernel image and is normally 435 + * readable on the filesystem or on the web. But, do not 436 + * clone the areas past rodata, they might contain secrets. 437 + */ 443 438 unsigned long start = PFN_ALIGN(_text); 444 - unsigned long end = ALIGN((unsigned long)_end, PMD_PAGE_SIZE); 439 + unsigned long end = (unsigned long)__end_rodata_hpage_align; 445 440 446 441 if (!pti_kernel_image_global_ok()) 447 442 return; 448 443 444 + pr_debug("mapping partial kernel image into user address space\n"); 445 + 446 + /* 447 + * Note that this will undo _some_ of the work that 448 + * pti_set_kernel_image_nonglobal() did to clear the 449 + * global bit. 450 + */ 449 451 pti_clone_pmds(start, end, _PAGE_RW); 450 452 } 451 453 ··· 479 457 480 458 if (pti_kernel_image_global_ok()) 481 459 return; 482 - 483 - pr_debug("set kernel image non-global\n"); 484 460 485 461 set_memory_nonglobal(start, (end - start) >> PAGE_SHIFT); 486 462 }
+9 -1
block/bfq-iosched.c
··· 4934 4934 bool new_queue = false; 4935 4935 bool bfqq_already_existing = false, split = false; 4936 4936 4937 - if (!rq->elv.icq) 4937 + /* 4938 + * Even if we don't have an icq attached, we should still clear 4939 + * the scheduler pointers, as they might point to previously 4940 + * allocated bic/bfqq structs. 4941 + */ 4942 + if (!rq->elv.icq) { 4943 + rq->elv.priv[0] = rq->elv.priv[1] = NULL; 4938 4944 return; 4945 + } 4946 + 4939 4947 bic = icq_to_bic(rq->elv.icq); 4940 4948 4941 4949 spin_lock_irq(&bfqd->lock);
+12 -16
block/blk-cgroup.c
··· 1177 1177 1178 1178 preloaded = !radix_tree_preload(GFP_KERNEL); 1179 1179 1180 - /* 1181 - * Make sure the root blkg exists and count the existing blkgs. As 1182 - * @q is bypassing at this point, blkg_lookup_create() can't be 1183 - * used. Open code insertion. 1184 - */ 1180 + /* Make sure the root blkg exists. */ 1185 1181 rcu_read_lock(); 1186 1182 spin_lock_irq(q->queue_lock); 1187 1183 blkg = blkg_create(&blkcg_root, q, new_blkg); 1184 + if (IS_ERR(blkg)) 1185 + goto err_unlock; 1186 + q->root_blkg = blkg; 1187 + q->root_rl.blkg = blkg; 1188 1188 spin_unlock_irq(q->queue_lock); 1189 1189 rcu_read_unlock(); 1190 1190 1191 1191 if (preloaded) 1192 1192 radix_tree_preload_end(); 1193 - 1194 - if (IS_ERR(blkg)) 1195 - return PTR_ERR(blkg); 1196 - 1197 - q->root_blkg = blkg; 1198 - q->root_rl.blkg = blkg; 1199 1193 1200 1194 ret = blk_throtl_init(q); 1201 1195 if (ret) { ··· 1198 1204 spin_unlock_irq(q->queue_lock); 1199 1205 } 1200 1206 return ret; 1207 + 1208 + err_unlock: 1209 + spin_unlock_irq(q->queue_lock); 1210 + rcu_read_unlock(); 1211 + if (preloaded) 1212 + radix_tree_preload_end(); 1213 + return PTR_ERR(blkg); 1201 1214 } 1202 1215 1203 1216 /** ··· 1411 1410 __clear_bit(pol->plid, q->blkcg_pols); 1412 1411 1413 1412 list_for_each_entry(blkg, &q->blkg_list, q_node) { 1414 - /* grab blkcg lock too while removing @pd from @blkg */ 1415 - spin_lock(&blkg->blkcg->lock); 1416 - 1417 1413 if (blkg->pd[pol->plid]) { 1418 1414 if (!blkg->pd[pol->plid]->offline && 1419 1415 pol->pd_offline_fn) { ··· 1420 1422 pol->pd_free_fn(blkg->pd[pol->plid]); 1421 1423 blkg->pd[pol->plid] = NULL; 1422 1424 } 1423 - 1424 - spin_unlock(&blkg->blkcg->lock); 1425 1425 } 1426 1426 1427 1427 spin_unlock_irq(q->queue_lock);
+8 -7
block/blk-core.c
··· 201 201 rq->part = NULL; 202 202 seqcount_init(&rq->gstate_seq); 203 203 u64_stats_init(&rq->aborted_gstate_sync); 204 + /* 205 + * See comment of blk_mq_init_request 206 + */ 207 + WRITE_ONCE(rq->gstate, MQ_RQ_GEN_INC); 204 208 } 205 209 EXPORT_SYMBOL(blk_rq_init); 206 210 ··· 919 915 920 916 while (true) { 921 917 bool success = false; 922 - int ret; 923 918 924 919 rcu_read_lock(); 925 920 if (percpu_ref_tryget_live(&q->q_usage_counter)) { ··· 950 947 */ 951 948 smp_rmb(); 952 949 953 - ret = wait_event_interruptible(q->mq_freeze_wq, 954 - (atomic_read(&q->mq_freeze_depth) == 0 && 955 - (preempt || !blk_queue_preempt_only(q))) || 956 - blk_queue_dying(q)); 950 + wait_event(q->mq_freeze_wq, 951 + (atomic_read(&q->mq_freeze_depth) == 0 && 952 + (preempt || !blk_queue_preempt_only(q))) || 953 + blk_queue_dying(q)); 957 954 if (blk_queue_dying(q)) 958 955 return -ENODEV; 959 - if (ret) 960 - return ret; 961 956 } 962 957 } 963 958
+38 -3
block/blk-mq.c
··· 2042 2042
2043 2043 seqcount_init(&rq->gstate_seq);
2044 2044 u64_stats_init(&rq->aborted_gstate_sync);
2045 + /*
2046 + * start gstate with gen 1 instead of 0, otherwise it will be equal
2047 + * to aborted_gstate, and be identified timed out by
2048 + * blk_mq_terminate_expired.
2049 + */
2050 + WRITE_ONCE(rq->gstate, MQ_RQ_GEN_INC);
2051 +
2045 2052 return 0;
2046 2053 }
··· 2336 2329
2337 2330 static void blk_mq_map_swqueue(struct request_queue *q)
2338 2331 {
2339 - unsigned int i;
2332 + unsigned int i, hctx_idx;
2340 2333 struct blk_mq_hw_ctx *hctx;
2341 2334 struct blk_mq_ctx *ctx;
2342 2335 struct blk_mq_tag_set *set = q->tag_set;
··· 2353 2346
2354 2347 /*
2355 2348 * Map software to hardware queues.
2349 + *
2350 + * If the cpu isn't present, the cpu is mapped to first hctx.
2356 2351 */
2357 2352 for_each_possible_cpu(i) {
2353 + hctx_idx = q->mq_map[i];
2354 + /* unmapped hw queue can be remapped after CPU topo changed */
2355 + if (!set->tags[hctx_idx] &&
2356 + !__blk_mq_alloc_rq_map(set, hctx_idx)) {
2357 + /*
2358 + * If tags initialization fail for some hctx,
2359 + * that hctx won't be brought online. In this
2360 + * case, remap the current ctx to hctx[0] which
2361 + * is guaranteed to always have tags allocated
2362 + */
2363 + q->mq_map[i] = 0;
2364 + }
2365 +
2358 2366 ctx = per_cpu_ptr(q->queue_ctx, i);
2359 2367 hctx = blk_mq_map_queue(q, i);
2360 2368
··· 2381 2359 mutex_unlock(&q->sysfs_lock);
2382 2360
2383 2361 queue_for_each_hw_ctx(q, hctx, i) {
2384 - /* every hctx should get mapped by at least one CPU */
2385 - WARN_ON(!hctx->nr_ctx);
2362 + /*
2363 + * If no software queues are mapped to this hardware queue,
2364 + * disable it and free the request entries.
2365 + */
2366 + if (!hctx->nr_ctx) {
2367 + /* Never unmap queue 0. We need it as a
2368 + * fallback in case of a new remap fails
2369 + * allocation
2370 + */
2371 + if (i && set->tags[i])
2372 + blk_mq_free_map_and_requests(set, i);
2373 +
2374 + hctx->tags = NULL;
2375 + continue;
2376 + }
2386 2377
2387 2378 hctx->tags = set->tags[i];
2388 2379 WARN_ON(!hctx->tags);
+3
block/blk-mq.h
··· 7 7 8 8 struct blk_mq_tag_set; 9 9 10 + /** 11 + * struct blk_mq_ctx - State for a software queue facing the submitting CPUs 12 + */ 10 13 struct blk_mq_ctx { 11 14 struct { 12 15 spinlock_t lock;
+8 -3
crypto/api.c
··· 204 204 205 205 down_read(&crypto_alg_sem); 206 206 alg = __crypto_alg_lookup(name, type | test, mask | test); 207 - if (!alg && test) 208 - alg = __crypto_alg_lookup(name, type, mask) ? 209 - ERR_PTR(-ELIBBAD) : NULL; 207 + if (!alg && test) { 208 + alg = __crypto_alg_lookup(name, type, mask); 209 + if (alg && !crypto_is_larval(alg)) { 210 + /* Test failed */ 211 + crypto_mod_put(alg); 212 + alg = ERR_PTR(-ELIBBAD); 213 + } 214 + } 210 215 up_read(&crypto_alg_sem); 211 216 212 217 return alg;
+2
crypto/drbg.c
··· 1134 1134 if (!drbg) 1135 1135 return; 1136 1136 kzfree(drbg->Vbuf); 1137 + drbg->Vbuf = NULL; 1137 1138 drbg->V = NULL; 1138 1139 kzfree(drbg->Cbuf); 1140 + drbg->Cbuf = NULL; 1139 1141 drbg->C = NULL; 1140 1142 kzfree(drbg->scratchpadbuf); 1141 1143 drbg->scratchpadbuf = NULL;
+25 -2
drivers/acpi/acpi_video.c
··· 2123 2123 return opregion; 2124 2124 } 2125 2125 2126 + static bool dmi_is_desktop(void) 2127 + { 2128 + const char *chassis_type; 2129 + 2130 + chassis_type = dmi_get_system_info(DMI_CHASSIS_TYPE); 2131 + if (!chassis_type) 2132 + return false; 2133 + 2134 + if (!strcmp(chassis_type, "3") || /* 3: Desktop */ 2135 + !strcmp(chassis_type, "4") || /* 4: Low Profile Desktop */ 2136 + !strcmp(chassis_type, "5") || /* 5: Pizza Box */ 2137 + !strcmp(chassis_type, "6") || /* 6: Mini Tower */ 2138 + !strcmp(chassis_type, "7") || /* 7: Tower */ 2139 + !strcmp(chassis_type, "11")) /* 11: Main Server Chassis */ 2140 + return true; 2141 + 2142 + return false; 2143 + } 2144 + 2126 2145 int acpi_video_register(void) 2127 2146 { 2128 2147 int ret = 0; ··· 2162 2143 * win8 ready (where we also prefer the native backlight driver, so 2163 2144 * normally the acpi_video code should not register there anyways). 2164 2145 */ 2165 - if (only_lcd == -1) 2166 - only_lcd = acpi_osi_is_win8(); 2146 + if (only_lcd == -1) { 2147 + if (dmi_is_desktop() && acpi_osi_is_win8()) 2148 + only_lcd = true; 2149 + else 2150 + only_lcd = false; 2151 + } 2167 2152 2168 2153 dmi_check_system(video_dmi_table); 2169 2154
+49 -10
drivers/acpi/acpi_watchdog.c
··· 12 12 #define pr_fmt(fmt) "ACPI: watchdog: " fmt
13 13
14 14 #include <linux/acpi.h>
15 + #include <linux/dmi.h>
15 16 #include <linux/ioport.h>
16 17 #include <linux/platform_device.h>
17 18
18 19 #include "internal.h"
20 +
21 + static const struct dmi_system_id acpi_watchdog_skip[] = {
22 + {
23 + /*
24 + * On Lenovo Z50-70 there are two issues with the WDAT
25 + * table. First some of the instructions use RTC SRAM
26 + * to store persistent information. This does not work well
27 + * with Linux RTC driver. Second, more important thing is
28 + * that the instructions do not actually reset the system.
29 + *
30 + * On this particular system iTCO_wdt seems to work just
31 + * fine so we prefer that over WDAT for now.
32 + *
33 + * See also https://bugzilla.kernel.org/show_bug.cgi?id=199033.
34 + */
35 + .ident = "Lenovo Z50-70",
36 + .matches = {
37 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
38 + DMI_MATCH(DMI_PRODUCT_NAME, "20354"),
39 + DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Z50-70"),
40 + },
41 + },
42 + {}
43 + };
44 +
45 + static const struct acpi_table_wdat *acpi_watchdog_get_wdat(void)
46 + {
47 + const struct acpi_table_wdat *wdat = NULL;
48 + acpi_status status;
49 +
50 + if (acpi_disabled)
51 + return NULL;
52 +
53 + if (dmi_check_system(acpi_watchdog_skip))
54 + return NULL;
55 +
56 + status = acpi_get_table(ACPI_SIG_WDAT, 0,
57 + (struct acpi_table_header **)&wdat);
58 + if (ACPI_FAILURE(status)) {
59 + /* It is fine if there is no WDAT */
60 + return NULL;
61 + }
62 +
63 + return wdat;
64 + }
19 65
20 66 /**
21 67 * Returns true if this system should prefer ACPI based watchdog instead of
··· 69 23 */
70 24 bool acpi_has_watchdog(void)
71 25 {
72 - struct acpi_table_header hdr;
73 -
74 - if (acpi_disabled)
75 - return false;
76 -
77 - return ACPI_SUCCESS(acpi_get_table_header(ACPI_SIG_WDAT, 0, &hdr));
26 + return !!acpi_watchdog_get_wdat();
78 27 }
79 28 EXPORT_SYMBOL_GPL(acpi_has_watchdog);
··· 82 41 struct platform_device *pdev;
83 42 struct resource *resources;
84 43 size_t nresources = 0;
85 - acpi_status status;
86 44 int i;
87 45
88 - status = acpi_get_table(ACPI_SIG_WDAT, 0,
89 - (struct acpi_table_header **)&wdat);
90 - if (ACPI_FAILURE(status)) {
46 + wdat = acpi_watchdog_get_wdat();
47 + if (!wdat) {
91 48 /* It is fine if there is no WDAT */
92 49 return;
93 50 }
+23 -1
drivers/acpi/button.c
··· 635 635 NULL, 0644); 636 636 MODULE_PARM_DESC(lid_init_state, "Behavior for reporting LID initial state"); 637 637 638 - module_acpi_driver(acpi_button_driver); 638 + static int acpi_button_register_driver(struct acpi_driver *driver) 639 + { 640 + /* 641 + * Modules such as nouveau.ko and i915.ko have a link time dependency 642 + * on acpi_lid_open(), and would therefore not be loadable on ACPI 643 + * capable kernels booted in non-ACPI mode if the return value of 644 + * acpi_bus_register_driver() is returned from here with ACPI disabled 645 + * when this driver is built as a module. 646 + */ 647 + if (acpi_disabled) 648 + return 0; 649 + 650 + return acpi_bus_register_driver(driver); 651 + } 652 + 653 + static void acpi_button_unregister_driver(struct acpi_driver *driver) 654 + { 655 + if (!acpi_disabled) 656 + acpi_bus_unregister_driver(driver); 657 + } 658 + 659 + module_driver(acpi_button_driver, acpi_button_register_driver, 660 + acpi_button_unregister_driver);
+1 -1
drivers/acpi/scan.c
··· 2166 2166 acpi_cmos_rtc_init(); 2167 2167 acpi_container_init(); 2168 2168 acpi_memory_hotplug_init(); 2169 + acpi_watchdog_init(); 2169 2170 acpi_pnp_init(); 2170 2171 acpi_int340x_thermal_init(); 2171 2172 acpi_amba_init(); 2172 - acpi_watchdog_init(); 2173 2173 acpi_init_lpit(); 2174 2174 2175 2175 acpi_scan_add_handler(&generic_device_handler);
+13
drivers/acpi/sleep.c
··· 364 364 DMI_MATCH(DMI_PRODUCT_NAME, "XPS 13 9360"), 365 365 }, 366 366 }, 367 + /* 368 + * ThinkPad X1 Tablet(2016) cannot do suspend-to-idle using 369 + * the Low Power S0 Idle firmware interface (see 370 + * https://bugzilla.kernel.org/show_bug.cgi?id=199057). 371 + */ 372 + { 373 + .callback = init_no_lps0, 374 + .ident = "ThinkPad X1 Tablet(2016)", 375 + .matches = { 376 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 377 + DMI_MATCH(DMI_PRODUCT_NAME, "20GGA00L00"), 378 + }, 379 + }, 367 380 {}, 368 381 }; 369 382
+11 -6
drivers/amba/bus.c
··· 69 69 struct device_attribute *attr, char *buf) 70 70 { 71 71 struct amba_device *dev = to_amba_device(_dev); 72 + ssize_t len; 72 73 73 - if (!dev->driver_override) 74 - return 0; 75 - 76 - return sprintf(buf, "%s\n", dev->driver_override); 74 + device_lock(_dev); 75 + len = sprintf(buf, "%s\n", dev->driver_override); 76 + device_unlock(_dev); 77 + return len; 77 78 } 78 79 79 80 static ssize_t driver_override_store(struct device *_dev, ··· 82 81 const char *buf, size_t count) 83 82 { 84 83 struct amba_device *dev = to_amba_device(_dev); 85 - char *driver_override, *old = dev->driver_override, *cp; 84 + char *driver_override, *old, *cp; 86 85 87 - if (count > PATH_MAX) 86 + /* We need to keep extra room for a newline */ 87 + if (count >= (PAGE_SIZE - 1)) 88 88 return -EINVAL; 89 89 90 90 driver_override = kstrndup(buf, count, GFP_KERNEL); ··· 96 94 if (cp) 97 95 *cp = '\0'; 98 96 97 + device_lock(_dev); 98 + old = dev->driver_override; 99 99 if (strlen(driver_override)) { 100 100 dev->driver_override = driver_override; 101 101 } else { 102 102 kfree(driver_override); 103 103 dev->driver_override = NULL; 104 104 } 105 + device_unlock(_dev); 105 106 106 107 kfree(old); 107 108
+8
drivers/android/binder.c
··· 2839 2839 else 2840 2840 return_error = BR_DEAD_REPLY; 2841 2841 mutex_unlock(&context->context_mgr_node_lock); 2842 + if (target_node && target_proc == proc) { 2843 + binder_user_error("%d:%d got transaction to context manager from process owning it\n", 2844 + proc->pid, thread->pid); 2845 + return_error = BR_FAILED_REPLY; 2846 + return_error_param = -EINVAL; 2847 + return_error_line = __LINE__; 2848 + goto err_invalid_target_handle; 2849 + } 2842 2850 } 2843 2851 if (!target_node) { 2844 2852 /*
+3 -2
drivers/base/dma-coherent.c
··· 312 312 * This checks whether the memory was allocated from the per-device 313 313 * coherent memory pool and if so, maps that memory to the provided vma. 314 314 * 315 - * Returns 1 if we correctly mapped the memory, or 0 if the caller should 316 - * proceed with mapping memory from generic pools. 315 + * Returns 1 if @vaddr belongs to the device coherent pool and the caller 316 + * should return @ret, or 0 if they should proceed with mapping memory from 317 + * generic areas. 317 318 */ 318 319 int dma_mmap_from_dev_coherent(struct device *dev, struct vm_area_struct *vma, 319 320 void *vaddr, size_t size, int *ret)
+2 -4
drivers/base/dma-mapping.c
··· 226 226 #ifndef CONFIG_ARCH_NO_COHERENT_DMA_MMAP 227 227 unsigned long user_count = vma_pages(vma); 228 228 unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; 229 - unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr)); 230 229 unsigned long off = vma->vm_pgoff; 231 230 232 231 vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); ··· 233 234 if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret)) 234 235 return ret; 235 236 236 - if (off < count && user_count <= (count - off)) { 237 + if (off < count && user_count <= (count - off)) 237 238 ret = remap_pfn_range(vma, vma->vm_start, 238 - pfn + off, 239 + page_to_pfn(virt_to_page(cpu_addr)) + off, 239 240 user_count << PAGE_SHIFT, 240 241 vma->vm_page_prot); 241 - } 242 242 #endif /* !CONFIG_ARCH_NO_COHERENT_DMA_MMAP */ 243 243 244 244 return ret;
+2 -2
drivers/base/firmware_loader/fallback.c
··· 537 537 } 538 538 539 539 /** 540 - * fw_load_sysfs_fallback - load a firmware via the syfs fallback mechanism 541 - * @fw_sysfs: firmware syfs information for the firmware to load 540 + * fw_load_sysfs_fallback - load a firmware via the sysfs fallback mechanism 541 + * @fw_sysfs: firmware sysfs information for the firmware to load 542 542 * @opt_flags: flags of options, FW_OPT_* 543 543 * @timeout: timeout to wait for the load 544 544 *
+1 -1
drivers/base/firmware_loader/fallback.h
··· 6 6 #include <linux/device.h> 7 7 8 8 /** 9 - * struct firmware_fallback_config - firmware fallback configuratioon settings 9 + * struct firmware_fallback_config - firmware fallback configuration settings 10 10 * 11 11 * Helps describe and fine tune the fallback mechanism. 12 12 *
+43 -21
drivers/block/loop.c
··· 451 451 static void lo_complete_rq(struct request *rq) 452 452 { 453 453 struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq); 454 + blk_status_t ret = BLK_STS_OK; 454 455 455 - if (unlikely(req_op(cmd->rq) == REQ_OP_READ && cmd->use_aio && 456 - cmd->ret >= 0 && cmd->ret < blk_rq_bytes(cmd->rq))) { 457 - struct bio *bio = cmd->rq->bio; 458 - 459 - bio_advance(bio, cmd->ret); 460 - zero_fill_bio(bio); 456 + if (!cmd->use_aio || cmd->ret < 0 || cmd->ret == blk_rq_bytes(rq) || 457 + req_op(rq) != REQ_OP_READ) { 458 + if (cmd->ret < 0) 459 + ret = BLK_STS_IOERR; 460 + goto end_io; 461 461 } 462 462 463 - blk_mq_end_request(rq, cmd->ret < 0 ? BLK_STS_IOERR : BLK_STS_OK); 463 + /* 464 + * Short READ - if we got some data, advance our request and 465 + * retry it. If we got no data, end the rest with EIO. 466 + */ 467 + if (cmd->ret) { 468 + blk_update_request(rq, BLK_STS_OK, cmd->ret); 469 + cmd->ret = 0; 470 + blk_mq_requeue_request(rq, true); 471 + } else { 472 + if (cmd->use_aio) { 473 + struct bio *bio = rq->bio; 474 + 475 + while (bio) { 476 + zero_fill_bio(bio); 477 + bio = bio->bi_next; 478 + } 479 + } 480 + ret = BLK_STS_IOERR; 481 + end_io: 482 + blk_mq_end_request(rq, ret); 483 + } 464 484 } 465 485 466 486 static void lo_rw_aio_do_completion(struct loop_cmd *cmd) 467 487 { 488 + struct request *rq = blk_mq_rq_from_pdu(cmd); 489 + 468 490 if (!atomic_dec_and_test(&cmd->ref)) 469 491 return; 470 492 kfree(cmd->bvec); 471 493 cmd->bvec = NULL; 472 - blk_mq_complete_request(cmd->rq); 494 + blk_mq_complete_request(rq); 473 495 } 474 496 475 497 static void lo_rw_aio_complete(struct kiocb *iocb, long ret, long ret2) ··· 509 487 { 510 488 struct iov_iter iter; 511 489 struct bio_vec *bvec; 512 - struct request *rq = cmd->rq; 490 + struct request *rq = blk_mq_rq_from_pdu(cmd); 513 491 struct bio *bio = rq->bio; 514 492 struct file *file = lo->lo_backing_file; 515 493 unsigned int offset; ··· 1724 1702 static blk_status_t loop_queue_rq(struct blk_mq_hw_ctx *hctx, 1725 1703 
const struct blk_mq_queue_data *bd) 1726 1704 { 1727 - struct loop_cmd *cmd = blk_mq_rq_to_pdu(bd->rq); 1728 - struct loop_device *lo = cmd->rq->q->queuedata; 1705 + struct request *rq = bd->rq; 1706 + struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq); 1707 + struct loop_device *lo = rq->q->queuedata; 1729 1708 1730 - blk_mq_start_request(bd->rq); 1709 + blk_mq_start_request(rq); 1731 1710 1732 1711 if (lo->lo_state != Lo_bound) 1733 1712 return BLK_STS_IOERR; 1734 1713 1735 - switch (req_op(cmd->rq)) { 1714 + switch (req_op(rq)) { 1736 1715 case REQ_OP_FLUSH: 1737 1716 case REQ_OP_DISCARD: 1738 1717 case REQ_OP_WRITE_ZEROES: ··· 1746 1723 1747 1724 /* always use the first bio's css */ 1748 1725 #ifdef CONFIG_BLK_CGROUP 1749 - if (cmd->use_aio && cmd->rq->bio && cmd->rq->bio->bi_css) { 1750 - cmd->css = cmd->rq->bio->bi_css; 1726 + if (cmd->use_aio && rq->bio && rq->bio->bi_css) { 1727 + cmd->css = rq->bio->bi_css; 1751 1728 css_get(cmd->css); 1752 1729 } else 1753 1730 #endif ··· 1759 1736 1760 1737 static void loop_handle_cmd(struct loop_cmd *cmd) 1761 1738 { 1762 - const bool write = op_is_write(req_op(cmd->rq)); 1763 - struct loop_device *lo = cmd->rq->q->queuedata; 1739 + struct request *rq = blk_mq_rq_from_pdu(cmd); 1740 + const bool write = op_is_write(req_op(rq)); 1741 + struct loop_device *lo = rq->q->queuedata; 1764 1742 int ret = 0; 1765 1743 1766 1744 if (write && (lo->lo_flags & LO_FLAGS_READ_ONLY)) { ··· 1769 1745 goto failed; 1770 1746 } 1771 1747 1772 - ret = do_req_filebacked(lo, cmd->rq); 1748 + ret = do_req_filebacked(lo, rq); 1773 1749 failed: 1774 1750 /* complete non-aio request */ 1775 1751 if (!cmd->use_aio || ret) { 1776 1752 cmd->ret = ret ? 
-EIO : 0; 1777 - blk_mq_complete_request(cmd->rq); 1753 + blk_mq_complete_request(rq); 1778 1754 } 1779 1755 } 1780 1756 ··· 1791 1767 { 1792 1768 struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq); 1793 1769 1794 - cmd->rq = rq; 1795 1770 kthread_init_work(&cmd->work, loop_queue_work); 1796 - 1797 1771 return 0; 1798 1772 } 1799 1773
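The reworked `lo_complete_rq()` above handles a short READ by accounting for the bytes already transferred (`blk_update_request()`) and requeueing the remainder, and ends the request with EIO only when no progress was made. A minimal userspace sketch of the same advance-and-retry idea, with hypothetical names standing in for the block-layer helpers:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical helper mirroring the short-read handling in
 * lo_complete_rq(): on a partial transfer, account for the bytes we
 * got and retry the remainder; on zero progress, zero-fill what is
 * left (as the loop driver zero-fills the bios) and report an error.
 *
 * read_fn stands in for the backing-store read; it may return fewer
 * bytes than requested, 0 for "no progress", or -1 for a hard error. */
typedef long (*read_fn)(void *cookie, char *buf, size_t len);

static int read_full(read_fn fn, void *cookie, char *buf, size_t len)
{
	while (len) {
		long n = fn(cookie, buf, len);

		if (n < 0)
			return -1;           /* hard error */
		if (n == 0) {
			memset(buf, 0, len); /* no progress: zero-fill rest */
			return -1;
		}
		buf += n;                    /* short read: advance ... */
		len -= (size_t)n;            /* ... and retry the remainder */
	}
	return 0;
}
```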
-1
drivers/block/loop.h
··· 66 66 67 67 struct loop_cmd { 68 68 struct kthread_work work; 69 - struct request *rq; 70 69 bool use_aio; /* use AIO interface to handle I/O */ 71 70 atomic_t ref; /* only for aio */ 72 71 long ret;
+22 -27
drivers/block/swim.c
··· 110 110 /* Select values for swim_select and swim_readbit */ 111 111 112 112 #define READ_DATA_0 0x074 113 - #define TWOMEG_DRIVE 0x075 113 + #define ONEMEG_DRIVE 0x075 114 114 #define SINGLE_SIDED 0x076 115 115 #define DRIVE_PRESENT 0x077 116 116 #define DISK_IN 0x170 ··· 118 118 #define TRACK_ZERO 0x172 119 119 #define TACHO 0x173 120 120 #define READ_DATA_1 0x174 121 - #define MFM_MODE 0x175 121 + #define GCR_MODE 0x175 122 122 #define SEEK_COMPLETE 0x176 123 - #define ONEMEG_MEDIA 0x177 123 + #define TWOMEG_MEDIA 0x177 124 124 125 125 /* Bits in handshake register */ 126 126 ··· 612 612 struct floppy_struct *g; 613 613 fs->disk_in = 1; 614 614 fs->write_protected = swim_readbit(base, WRITE_PROT); 615 - fs->type = swim_readbit(base, ONEMEG_MEDIA); 616 615 617 616 if (swim_track00(base)) 618 617 printk(KERN_ERR ··· 619 620 620 621 swim_track00(base); 621 622 623 + fs->type = swim_readbit(base, TWOMEG_MEDIA) ? 624 + HD_MEDIA : DD_MEDIA; 625 + fs->head_number = swim_readbit(base, SINGLE_SIDED) ? 
1 : 2; 622 626 get_floppy_geometry(fs, 0, &g); 623 627 fs->total_secs = g->size; 624 628 fs->secpercyl = g->head * g->sect; ··· 648 646 649 647 swim_write(base, setup, S_IBM_DRIVE | S_FCLK_DIV2); 650 648 udelay(10); 651 - swim_drive(base, INTERNAL_DRIVE); 649 + swim_drive(base, fs->location); 652 650 swim_motor(base, ON); 653 651 swim_action(base, SETMFM); 654 652 if (fs->ejected) ··· 657 655 err = -ENXIO; 658 656 goto out; 659 657 } 658 + 659 + set_capacity(fs->disk, fs->total_secs); 660 660 661 661 if (mode & FMODE_NDELAY) 662 662 return 0; ··· 731 727 if (copy_to_user((void __user *) param, (void *) &floppy_type, 732 728 sizeof(struct floppy_struct))) 733 729 return -EFAULT; 734 - break; 735 - 736 - default: 737 - printk(KERN_DEBUG "SWIM floppy_ioctl: unknown cmd %d\n", 738 - cmd); 739 - return -ENOSYS; 730 + return 0; 740 731 } 741 - return 0; 732 + return -ENOTTY; 742 733 } 743 734 744 735 static int floppy_getgeo(struct block_device *bdev, struct hd_geometry *geo) ··· 794 795 struct swim_priv *swd = data; 795 796 int drive = (*part & 3); 796 797 797 - if (drive > swd->floppy_count) 798 + if (drive >= swd->floppy_count) 798 799 return NULL; 799 800 800 801 *part = 0; ··· 812 813 813 814 swim_motor(base, OFF); 814 815 815 - if (swim_readbit(base, SINGLE_SIDED)) 816 - fs->head_number = 1; 817 - else 818 - fs->head_number = 2; 816 + fs->type = HD_MEDIA; 817 + fs->head_number = 2; 818 + 819 819 fs->ref_count = 0; 820 820 fs->ejected = 1; 821 821 ··· 832 834 /* scan floppy drives */ 833 835 834 836 swim_drive(base, INTERNAL_DRIVE); 835 - if (swim_readbit(base, DRIVE_PRESENT)) 837 + if (swim_readbit(base, DRIVE_PRESENT) && 838 + !swim_readbit(base, ONEMEG_DRIVE)) 836 839 swim_add_floppy(swd, INTERNAL_DRIVE); 837 840 swim_drive(base, EXTERNAL_DRIVE); 838 - if (swim_readbit(base, DRIVE_PRESENT)) 841 + if (swim_readbit(base, DRIVE_PRESENT) && 842 + !swim_readbit(base, ONEMEG_DRIVE)) 839 843 swim_add_floppy(swd, EXTERNAL_DRIVE); 840 844 841 845 /* register floppy drives 
*/ ··· 861 861 &swd->lock); 862 862 if (!swd->unit[drive].disk->queue) { 863 863 err = -ENOMEM; 864 - put_disk(swd->unit[drive].disk); 865 864 goto exit_put_disks; 866 865 } 867 866 blk_queue_bounce_limit(swd->unit[drive].disk->queue, ··· 910 911 goto out; 911 912 } 912 913 913 - swim_base = ioremap(res->start, resource_size(res)); 914 + swim_base = (struct swim __iomem *)res->start; 914 915 if (!swim_base) { 915 916 ret = -ENOMEM; 916 917 goto out_release_io; ··· 922 923 if (!get_swim_mode(swim_base)) { 923 924 printk(KERN_INFO "SWIM device not found !\n"); 924 925 ret = -ENODEV; 925 - goto out_iounmap; 926 + goto out_release_io; 926 927 } 927 928 928 929 /* set platform driver data */ ··· 930 931 swd = kzalloc(sizeof(struct swim_priv), GFP_KERNEL); 931 932 if (!swd) { 932 933 ret = -ENOMEM; 933 - goto out_iounmap; 934 + goto out_release_io; 934 935 } 935 936 platform_set_drvdata(dev, swd); 936 937 ··· 944 945 945 946 out_kfree: 946 947 kfree(swd); 947 - out_iounmap: 948 - iounmap(swim_base); 949 948 out_release_io: 950 949 release_mem_region(res->start, resource_size(res)); 951 950 out: ··· 970 973 971 974 for (drive = 0; drive < swd->floppy_count; drive++) 972 975 floppy_eject(&swd->unit[drive]); 973 - 974 - iounmap(swd->base); 975 976 976 977 res = platform_get_resource(dev, IORESOURCE_MEM, 0); 977 978 if (res)
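The `swim_floppy_init()` hunk above changes `drive > swd->floppy_count` to `drive >= swd->floppy_count`: with N units, valid indices run 0..N-1, so `>` alone lets the first out-of-range index through. A sketch of the corrected check, with a hypothetical lookup in place of the swim unit array:

```c
#include <stddef.h>

/* Hypothetical lookup mirroring the swim.c bounds fix: with `count`
 * units, valid indices are 0 .. count-1, so `index == count` must be
 * rejected too -- a bare `>` admits one element past the end. */
static const int *unit_lookup(const int *units, size_t count, size_t index)
{
	if (index >= count)	/* was `>` in the buggy version */
		return NULL;
	return &units[index];
}
```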
+3 -3
drivers/block/swim3.c
··· 148 148 #define MOTOR_ON 2 149 149 #define RELAX 3 /* also eject in progress */ 150 150 #define READ_DATA_0 4 151 - #define TWOMEG_DRIVE 5 151 + #define ONEMEG_DRIVE 5 152 152 #define SINGLE_SIDED 6 /* drive or diskette is 4MB type? */ 153 153 #define DRIVE_PRESENT 7 154 154 #define DISK_IN 8 ··· 156 156 #define TRACK_ZERO 10 157 157 #define TACHO 11 158 158 #define READ_DATA_1 12 159 - #define MFM_MODE 13 159 + #define GCR_MODE 13 160 160 #define SEEK_COMPLETE 14 161 - #define ONEMEG_MEDIA 15 161 + #define TWOMEG_MEDIA 15 162 162 163 163 /* Definitions of values used in writing and formatting */ 164 164 #define DATA_ESCAPE 0x99
+1
drivers/bus/Kconfig
··· 33 33 bool "Support for ISA I/O space on HiSilicon Hip06/7" 34 34 depends on ARM64 && (ARCH_HISI || COMPILE_TEST) 35 35 select INDIRECT_PIO 36 + select MFD_CORE if ACPI 36 37 help 37 38 Driver to enable I/O access to devices attached to the Low Pin 38 39 Count bus on the HiSilicon Hip06/7 SoC.
+1 -1
drivers/cdrom/cdrom.c
··· 2371 2371 if (!CDROM_CAN(CDC_SELECT_DISC) || arg == CDSL_CURRENT) 2372 2372 return media_changed(cdi, 1); 2373 2373 2374 - if ((unsigned int)arg >= cdi->capacity) 2374 + if (arg >= cdi->capacity) 2375 2375 return -EINVAL; 2376 2376 2377 2377 info = kmalloc(sizeof(*info), GFP_KERNEL);
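The cdrom.c change above drops a `(unsigned int)` cast from the capacity check. The reason the cast is harmful: on LP64 targets a 64-bit `arg` truncated to 32 bits can wrap to a small value and slip past the bounds test. A self-contained illustration (the types and helper names here are illustrative, not the driver's):

```c
/* Why removing the cast matters: truncating a 64-bit value to
 * unsigned int before a bounds check lets any value that is a
 * multiple of 2^32 (plus a small offset) pass as "in range". */
static int in_range_buggy(unsigned long arg, unsigned int capacity)
{
	return (unsigned int)arg < capacity;	/* truncates arg first */
}

static int in_range_fixed(unsigned long arg, unsigned int capacity)
{
	return arg < capacity;			/* full-width comparison */
}
```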
+42 -6
drivers/char/random.c
··· 261 261 #include <linux/ptrace.h> 262 262 #include <linux/workqueue.h> 263 263 #include <linux/irq.h> 264 + #include <linux/ratelimit.h> 264 265 #include <linux/syscalls.h> 265 266 #include <linux/completion.h> 266 267 #include <linux/uuid.h> ··· 438 437 __u32 tmp[CHACHA20_BLOCK_WORDS], int used); 439 438 static void process_random_ready_list(void); 440 439 static void _get_random_bytes(void *buf, int nbytes); 440 + 441 + static struct ratelimit_state unseeded_warning = 442 + RATELIMIT_STATE_INIT("warn_unseeded_randomness", HZ, 3); 443 + static struct ratelimit_state urandom_warning = 444 + RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3); 445 + 446 + static int ratelimit_disable __read_mostly; 447 + 448 + module_param_named(ratelimit_disable, ratelimit_disable, int, 0644); 449 + MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression"); 441 450 442 451 /********************************************************************** 443 452 * ··· 800 789 } 801 790 802 791 #ifdef CONFIG_NUMA 803 - static void numa_crng_init(void) 792 + static void do_numa_crng_init(struct work_struct *work) 804 793 { 805 794 int i; 806 795 struct crng_state *crng; ··· 820 809 kfree(pool[i]); 821 810 kfree(pool); 822 811 } 812 + } 813 + 814 + static DECLARE_WORK(numa_crng_init_work, do_numa_crng_init); 815 + 816 + static void numa_crng_init(void) 817 + { 818 + schedule_work(&numa_crng_init_work); 823 819 } 824 820 #else 825 821 static void numa_crng_init(void) {} ··· 943 925 process_random_ready_list(); 944 926 wake_up_interruptible(&crng_init_wait); 945 927 pr_notice("random: crng init done\n"); 928 + if (unseeded_warning.missed) { 929 + pr_notice("random: %d get_random_xx warning(s) missed " 930 + "due to ratelimiting\n", 931 + unseeded_warning.missed); 932 + unseeded_warning.missed = 0; 933 + } 934 + if (urandom_warning.missed) { 935 + pr_notice("random: %d urandom warning(s) missed " 936 + "due to ratelimiting\n", 937 + urandom_warning.missed); 938 + 
urandom_warning.missed = 0; 939 + } 946 940 } 947 941 } 948 942 ··· 1595 1565 #ifndef CONFIG_WARN_ALL_UNSEEDED_RANDOM 1596 1566 print_once = true; 1597 1567 #endif 1598 - pr_notice("random: %s called from %pS with crng_init=%d\n", 1599 - func_name, caller, crng_init); 1568 + if (__ratelimit(&unseeded_warning)) 1569 + pr_notice("random: %s called from %pS with crng_init=%d\n", 1570 + func_name, caller, crng_init); 1600 1571 } 1601 1572 1602 1573 /* ··· 1791 1760 init_std_data(&blocking_pool); 1792 1761 crng_initialize(&primary_crng); 1793 1762 crng_global_init_time = jiffies; 1763 + if (ratelimit_disable) { 1764 + urandom_warning.interval = 0; 1765 + unseeded_warning.interval = 0; 1766 + } 1794 1767 return 0; 1795 1768 } 1796 1769 early_initcall(rand_initialize); ··· 1862 1827 1863 1828 if (!crng_ready() && maxwarn > 0) { 1864 1829 maxwarn--; 1865 - printk(KERN_NOTICE "random: %s: uninitialized urandom read " 1866 - "(%zd bytes read)\n", 1867 - current->comm, nbytes); 1830 + if (__ratelimit(&urandom_warning)) 1831 + printk(KERN_NOTICE "random: %s: uninitialized " 1832 + "urandom read (%zd bytes read)\n", 1833 + current->comm, nbytes); 1868 1834 spin_lock_irqsave(&primary_crng.lock, flags); 1869 1835 crng_init_cnt = 0; 1870 1836 spin_unlock_irqrestore(&primary_crng.lock, flags);
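The random.c hunks above gate the unseeded/urandom warnings behind `struct ratelimit_state` (burst of 3 per HZ by default, with the `missed` count reported once the CRNG initializes, and `interval = 0` disabling suppression). A single-threaded userspace sketch of that fixed-window scheme — not the kernel's lib/ratelimit.c, which uses jiffies and a spinlock:

```c
#include <stdbool.h>

/* Minimal fixed-window rate limiter in the spirit of the kernel's
 * struct ratelimit_state: allow `burst` events per `interval` ticks,
 * and count what was suppressed in `missed`. The caller supplies the
 * tick counter; interval == 0 disables limiting entirely, matching
 * the ratelimit_disable handling above. */
struct ratelimit {
	unsigned long interval;	/* window length in ticks; 0 = disabled */
	unsigned long burst;	/* events allowed per window */
	unsigned long begin;	/* start tick of current window */
	unsigned long printed;	/* events allowed so far this window */
	unsigned long missed;	/* events suppressed so far */
};

static bool ratelimit_ok(struct ratelimit *rs, unsigned long now)
{
	if (!rs->interval)
		return true;
	if (now - rs->begin >= rs->interval) {
		rs->begin = now;	/* open a new window */
		rs->printed = 0;
	}
	if (rs->printed < rs->burst) {
		rs->printed++;
		return true;
	}
	rs->missed++;
	return false;
}
```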
+71 -86
drivers/char/virtio_console.c
··· 422 422 } 423 423 } 424 424 425 - static struct port_buffer *alloc_buf(struct virtqueue *vq, size_t buf_size, 425 + static struct port_buffer *alloc_buf(struct virtio_device *vdev, size_t buf_size, 426 426 int pages) 427 427 { 428 428 struct port_buffer *buf; ··· 445 445 return buf; 446 446 } 447 447 448 - if (is_rproc_serial(vq->vdev)) { 448 + if (is_rproc_serial(vdev)) { 449 449 /* 450 450 * Allocate DMA memory from ancestor. When a virtio 451 451 * device is created by remoteproc, the DMA memory is 452 452 * associated with the grandparent device: 453 453 * vdev => rproc => platform-dev. 454 454 */ 455 - if (!vq->vdev->dev.parent || !vq->vdev->dev.parent->parent) 455 + if (!vdev->dev.parent || !vdev->dev.parent->parent) 456 456 goto free_buf; 457 - buf->dev = vq->vdev->dev.parent->parent; 457 + buf->dev = vdev->dev.parent->parent; 458 458 459 459 /* Increase device refcnt to avoid freeing it */ 460 460 get_device(buf->dev); ··· 838 838 839 839 count = min((size_t)(32 * 1024), count); 840 840 841 - buf = alloc_buf(port->out_vq, count, 0); 841 + buf = alloc_buf(port->portdev->vdev, count, 0); 842 842 if (!buf) 843 843 return -ENOMEM; 844 844 ··· 957 957 if (ret < 0) 958 958 goto error_out; 959 959 960 - buf = alloc_buf(port->out_vq, 0, pipe->nrbufs); 960 + buf = alloc_buf(port->portdev->vdev, 0, pipe->nrbufs); 961 961 if (!buf) { 962 962 ret = -ENOMEM; 963 963 goto error_out; ··· 1374 1374 1375 1375 nr_added_bufs = 0; 1376 1376 do { 1377 - buf = alloc_buf(vq, PAGE_SIZE, 0); 1377 + buf = alloc_buf(vq->vdev, PAGE_SIZE, 0); 1378 1378 if (!buf) 1379 1379 break; 1380 1380 ··· 1402 1402 { 1403 1403 char debugfs_name[16]; 1404 1404 struct port *port; 1405 - struct port_buffer *buf; 1406 1405 dev_t devt; 1407 1406 unsigned int nr_added_bufs; 1408 1407 int err; ··· 1512 1513 return 0; 1513 1514 1514 1515 free_inbufs: 1515 - while ((buf = virtqueue_detach_unused_buf(port->in_vq))) 1516 - free_buf(buf, true); 1517 1516 free_device: 1518 1517 
device_destroy(pdrvdata.class, port->dev->devt); 1519 1518 free_cdev: ··· 1536 1539 1537 1540 static void remove_port_data(struct port *port) 1538 1541 { 1539 - struct port_buffer *buf; 1540 - 1541 1542 spin_lock_irq(&port->inbuf_lock); 1542 1543 /* Remove unused data this port might have received. */ 1543 1544 discard_port_data(port); 1544 1545 spin_unlock_irq(&port->inbuf_lock); 1545 1546 1546 - /* Remove buffers we queued up for the Host to send us data in. */ 1547 - do { 1548 - spin_lock_irq(&port->inbuf_lock); 1549 - buf = virtqueue_detach_unused_buf(port->in_vq); 1550 - spin_unlock_irq(&port->inbuf_lock); 1551 - if (buf) 1552 - free_buf(buf, true); 1553 - } while (buf); 1554 - 1555 1547 spin_lock_irq(&port->outvq_lock); 1556 1548 reclaim_consumed_buffers(port); 1557 1549 spin_unlock_irq(&port->outvq_lock); 1558 - 1559 - /* Free pending buffers from the out-queue. */ 1560 - do { 1561 - spin_lock_irq(&port->outvq_lock); 1562 - buf = virtqueue_detach_unused_buf(port->out_vq); 1563 - spin_unlock_irq(&port->outvq_lock); 1564 - if (buf) 1565 - free_buf(buf, true); 1566 - } while (buf); 1567 1550 } 1568 1551 1569 1552 /* ··· 1768 1791 spin_unlock(&portdev->c_ivq_lock); 1769 1792 } 1770 1793 1794 + static void flush_bufs(struct virtqueue *vq, bool can_sleep) 1795 + { 1796 + struct port_buffer *buf; 1797 + unsigned int len; 1798 + 1799 + while ((buf = virtqueue_get_buf(vq, &len))) 1800 + free_buf(buf, can_sleep); 1801 + } 1802 + 1771 1803 static void out_intr(struct virtqueue *vq) 1772 1804 { 1773 1805 struct port *port; 1774 1806 1775 1807 port = find_port_by_vq(vq->vdev->priv, vq); 1776 - if (!port) 1808 + if (!port) { 1809 + flush_bufs(vq, false); 1777 1810 return; 1811 + } 1778 1812 1779 1813 wake_up_interruptible(&port->waitqueue); 1780 1814 } ··· 1796 1808 unsigned long flags; 1797 1809 1798 1810 port = find_port_by_vq(vq->vdev->priv, vq); 1799 - if (!port) 1811 + if (!port) { 1812 + flush_bufs(vq, false); 1800 1813 return; 1814 + } 1801 1815 1802 1816 
spin_lock_irqsave(&port->inbuf_lock, flags); 1803 1817 port->inbuf = get_inbuf(port); ··· 1974 1984 1975 1985 static void remove_vqs(struct ports_device *portdev) 1976 1986 { 1987 + struct virtqueue *vq; 1988 + 1989 + virtio_device_for_each_vq(portdev->vdev, vq) { 1990 + struct port_buffer *buf; 1991 + 1992 + flush_bufs(vq, true); 1993 + while ((buf = virtqueue_detach_unused_buf(vq))) 1994 + free_buf(buf, true); 1995 + } 1977 1996 portdev->vdev->config->del_vqs(portdev->vdev); 1978 1997 kfree(portdev->in_vqs); 1979 1998 kfree(portdev->out_vqs); 1980 1999 } 1981 2000 1982 - static void remove_controlq_data(struct ports_device *portdev) 2001 + static void virtcons_remove(struct virtio_device *vdev) 1983 2002 { 1984 - struct port_buffer *buf; 1985 - unsigned int len; 2003 + struct ports_device *portdev; 2004 + struct port *port, *port2; 1986 2005 1987 - if (!use_multiport(portdev)) 1988 - return; 2006 + portdev = vdev->priv; 1989 2007 1990 - while ((buf = virtqueue_get_buf(portdev->c_ivq, &len))) 1991 - free_buf(buf, true); 2008 + spin_lock_irq(&pdrvdata_lock); 2009 + list_del(&portdev->list); 2010 + spin_unlock_irq(&pdrvdata_lock); 1992 2011 1993 - while ((buf = virtqueue_detach_unused_buf(portdev->c_ivq))) 1994 - free_buf(buf, true); 2012 + /* Disable interrupts for vqs */ 2013 + vdev->config->reset(vdev); 2014 + /* Finish up work that's lined up */ 2015 + if (use_multiport(portdev)) 2016 + cancel_work_sync(&portdev->control_work); 2017 + else 2018 + cancel_work_sync(&portdev->config_work); 2019 + 2020 + list_for_each_entry_safe(port, port2, &portdev->ports, list) 2021 + unplug_port(port); 2022 + 2023 + unregister_chrdev(portdev->chr_major, "virtio-portsdev"); 2024 + 2025 + /* 2026 + * When yanking out a device, we immediately lose the 2027 + * (device-side) queues. So there's no point in keeping the 2028 + * guest side around till we drop our final reference. 
This 2029 + * also means that any ports which are in an open state will 2030 + * have to just stop using the port, as the vqs are going 2031 + * away. 2032 + */ 2033 + remove_vqs(portdev); 2034 + kfree(portdev); 1995 2035 } 1996 2036 1997 2037 /* ··· 2090 2070 2091 2071 spin_lock_init(&portdev->ports_lock); 2092 2072 INIT_LIST_HEAD(&portdev->ports); 2073 + INIT_LIST_HEAD(&portdev->list); 2093 2074 2094 2075 virtio_device_ready(portdev->vdev); 2095 2076 ··· 2108 2087 if (!nr_added_bufs) { 2109 2088 dev_err(&vdev->dev, 2110 2089 "Error allocating buffers for control queue\n"); 2111 - err = -ENOMEM; 2112 - goto free_vqs; 2090 + /* 2091 + * The host might want to notify mgmt sw about device 2092 + * add failure. 2093 + */ 2094 + __send_control_msg(portdev, VIRTIO_CONSOLE_BAD_ID, 2095 + VIRTIO_CONSOLE_DEVICE_READY, 0); 2096 + /* Device was functional: we need full cleanup. */ 2097 + virtcons_remove(vdev); 2098 + return -ENOMEM; 2113 2099 } 2114 2100 } else { 2115 2101 /* ··· 2147 2119 2148 2120 return 0; 2149 2121 2150 - free_vqs: 2151 - /* The host might want to notify mgmt sw about device add failure */ 2152 - __send_control_msg(portdev, VIRTIO_CONSOLE_BAD_ID, 2153 - VIRTIO_CONSOLE_DEVICE_READY, 0); 2154 - remove_vqs(portdev); 2155 2122 free_chrdev: 2156 2123 unregister_chrdev(portdev->chr_major, "virtio-portsdev"); 2157 2124 free: 2158 2125 kfree(portdev); 2159 2126 fail: 2160 2127 return err; 2161 - } 2162 - 2163 - static void virtcons_remove(struct virtio_device *vdev) 2164 - { 2165 - struct ports_device *portdev; 2166 - struct port *port, *port2; 2167 - 2168 - portdev = vdev->priv; 2169 - 2170 - spin_lock_irq(&pdrvdata_lock); 2171 - list_del(&portdev->list); 2172 - spin_unlock_irq(&pdrvdata_lock); 2173 - 2174 - /* Disable interrupts for vqs */ 2175 - vdev->config->reset(vdev); 2176 - /* Finish up work that's lined up */ 2177 - if (use_multiport(portdev)) 2178 - cancel_work_sync(&portdev->control_work); 2179 - else 2180 - cancel_work_sync(&portdev->config_work); 
2181 - 2182 - list_for_each_entry_safe(port, port2, &portdev->ports, list) 2183 - unplug_port(port); 2184 - 2185 - unregister_chrdev(portdev->chr_major, "virtio-portsdev"); 2186 - 2187 - /* 2188 - * When yanking out a device, we immediately lose the 2189 - * (device-side) queues. So there's no point in keeping the 2190 - * guest side around till we drop our final reference. This 2191 - * also means that any ports which are in an open state will 2192 - * have to just stop using the port, as the vqs are going 2193 - * away. 2194 - */ 2195 - remove_controlq_data(portdev); 2196 - remove_vqs(portdev); 2197 - kfree(portdev); 2198 2128 } 2199 2129 2200 2130 static struct virtio_device_id id_table[] = { ··· 2195 2209 */ 2196 2210 if (use_multiport(portdev)) 2197 2211 virtqueue_disable_cb(portdev->c_ivq); 2198 - remove_controlq_data(portdev); 2199 2212 2200 2213 list_for_each_entry(port, &portdev->ports, list) { 2201 2214 virtqueue_disable_cb(port->in_vq);
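The virtio_console rework above moves buffer reclamation into `remove_vqs()`: every queue is flushed of completed buffers and then stripped of still-queued ones before the queues are deleted. The pattern itself — drain each queue fully, freeing as you go, before tearing the container down — can be sketched with plain lists (the types here are hypothetical stand-ins for `virtqueue`/`port_buffer`):

```c
#include <stdlib.h>

/* Drain-then-free sketch: pop and release every buffer still sitting
 * on a queue before the queue itself goes away, so nothing leaks when
 * the device is yanked. */
struct buf {
	struct buf *next;
};

struct queue {
	struct buf *pending;
};

static size_t drain_queue(struct queue *q)
{
	size_t freed = 0;

	while (q->pending) {
		struct buf *b = q->pending;

		q->pending = b->next;
		free(b);
		freed++;
	}
	return freed;
}
```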
-10
drivers/cpufreq/Kconfig.arm
··· 71 71 72 72 Say Y, if you have a Broadcom SoC with AVS support for DFS or DVFS. 73 73 74 - config ARM_BRCMSTB_AVS_CPUFREQ_DEBUG 75 - bool "Broadcom STB AVS CPUfreq driver sysfs debug capability" 76 - depends on ARM_BRCMSTB_AVS_CPUFREQ 77 - help 78 - Enabling this option turns on debug support via sysfs under 79 - /sys/kernel/debug/brcmstb-avs-cpufreq. It is possible to read all and 80 - write some AVS mailbox registers through sysfs entries. 81 - 82 - If in doubt, say N. 83 - 84 74 config ARM_EXYNOS5440_CPUFREQ 85 75 tristate "SAMSUNG EXYNOS5440" 86 76 depends on SOC_EXYNOS5440
+1 -322
drivers/cpufreq/brcmstb-avs-cpufreq.c
··· 49 49 #include <linux/platform_device.h> 50 50 #include <linux/semaphore.h> 51 51 52 - #ifdef CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG 53 - #include <linux/ctype.h> 54 - #include <linux/debugfs.h> 55 - #include <linux/slab.h> 56 - #include <linux/uaccess.h> 57 - #endif 58 - 59 52 /* Max number of arguments AVS calls take */ 60 53 #define AVS_MAX_CMD_ARGS 4 61 54 /* ··· 175 182 void __iomem *base; 176 183 void __iomem *avs_intr_base; 177 184 struct device *dev; 178 - #ifdef CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG 179 - struct dentry *debugfs; 180 - #endif 181 185 struct completion done; 182 186 struct semaphore sem; 183 187 struct pmap pmap; 184 188 }; 185 - 186 - #ifdef CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG 187 - 188 - enum debugfs_format { 189 - DEBUGFS_NORMAL, 190 - DEBUGFS_FLOAT, 191 - DEBUGFS_REV, 192 - }; 193 - 194 - struct debugfs_data { 195 - struct debugfs_entry *entry; 196 - struct private_data *priv; 197 - }; 198 - 199 - struct debugfs_entry { 200 - char *name; 201 - u32 offset; 202 - fmode_t mode; 203 - enum debugfs_format format; 204 - }; 205 - 206 - #define DEBUGFS_ENTRY(name, mode, format) { \ 207 - #name, AVS_MBOX_##name, mode, format \ 208 - } 209 - 210 - /* 211 - * These are used for debugfs only. Otherwise we use AVS_MBOX_PARAM() directly. 212 - */ 213 - #define AVS_MBOX_PARAM1 AVS_MBOX_PARAM(0) 214 - #define AVS_MBOX_PARAM2 AVS_MBOX_PARAM(1) 215 - #define AVS_MBOX_PARAM3 AVS_MBOX_PARAM(2) 216 - #define AVS_MBOX_PARAM4 AVS_MBOX_PARAM(3) 217 - 218 - /* 219 - * This table stores the name, access permissions and offset for each hardware 220 - * register and is used to generate debugfs entries. 
221 - */ 222 - static struct debugfs_entry debugfs_entries[] = { 223 - DEBUGFS_ENTRY(COMMAND, S_IWUSR, DEBUGFS_NORMAL), 224 - DEBUGFS_ENTRY(STATUS, S_IWUSR, DEBUGFS_NORMAL), 225 - DEBUGFS_ENTRY(VOLTAGE0, 0, DEBUGFS_FLOAT), 226 - DEBUGFS_ENTRY(TEMP0, 0, DEBUGFS_FLOAT), 227 - DEBUGFS_ENTRY(PV0, 0, DEBUGFS_FLOAT), 228 - DEBUGFS_ENTRY(MV0, 0, DEBUGFS_FLOAT), 229 - DEBUGFS_ENTRY(PARAM1, S_IWUSR, DEBUGFS_NORMAL), 230 - DEBUGFS_ENTRY(PARAM2, S_IWUSR, DEBUGFS_NORMAL), 231 - DEBUGFS_ENTRY(PARAM3, S_IWUSR, DEBUGFS_NORMAL), 232 - DEBUGFS_ENTRY(PARAM4, S_IWUSR, DEBUGFS_NORMAL), 233 - DEBUGFS_ENTRY(REVISION, 0, DEBUGFS_REV), 234 - DEBUGFS_ENTRY(PSTATE, 0, DEBUGFS_NORMAL), 235 - DEBUGFS_ENTRY(HEARTBEAT, 0, DEBUGFS_NORMAL), 236 - DEBUGFS_ENTRY(MAGIC, S_IWUSR, DEBUGFS_NORMAL), 237 - DEBUGFS_ENTRY(SIGMA_HVT, 0, DEBUGFS_NORMAL), 238 - DEBUGFS_ENTRY(SIGMA_SVT, 0, DEBUGFS_NORMAL), 239 - DEBUGFS_ENTRY(VOLTAGE1, 0, DEBUGFS_FLOAT), 240 - DEBUGFS_ENTRY(TEMP1, 0, DEBUGFS_FLOAT), 241 - DEBUGFS_ENTRY(PV1, 0, DEBUGFS_FLOAT), 242 - DEBUGFS_ENTRY(MV1, 0, DEBUGFS_FLOAT), 243 - DEBUGFS_ENTRY(FREQUENCY, 0, DEBUGFS_NORMAL), 244 - }; 245 - 246 - static int brcm_avs_target_index(struct cpufreq_policy *, unsigned int); 247 - 248 - static char *__strtolower(char *s) 249 - { 250 - char *p; 251 - 252 - for (p = s; *p; p++) 253 - *p = tolower(*p); 254 - 255 - return s; 256 - } 257 - 258 - #endif /* CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG */ 259 189 260 190 static void __iomem *__map_region(const char *name) 261 191 { ··· 431 515 432 516 return table; 433 517 } 434 - 435 - #ifdef CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG 436 - 437 - #define MANT(x) (unsigned int)(abs((x)) / 1000) 438 - #define FRAC(x) (unsigned int)(abs((x)) - abs((x)) / 1000 * 1000) 439 - 440 - static int brcm_avs_debug_show(struct seq_file *s, void *data) 441 - { 442 - struct debugfs_data *dbgfs = s->private; 443 - void __iomem *base; 444 - u32 val, offset; 445 - 446 - if (!dbgfs) { 447 - seq_puts(s, "No device pointer\n"); 448 - return 0; 449 - 
} 450 - 451 - base = dbgfs->priv->base; 452 - offset = dbgfs->entry->offset; 453 - val = readl(base + offset); 454 - switch (dbgfs->entry->format) { 455 - case DEBUGFS_NORMAL: 456 - seq_printf(s, "%u\n", val); 457 - break; 458 - case DEBUGFS_FLOAT: 459 - seq_printf(s, "%d.%03d\n", MANT(val), FRAC(val)); 460 - break; 461 - case DEBUGFS_REV: 462 - seq_printf(s, "%c.%c.%c.%c\n", (val >> 24 & 0xff), 463 - (val >> 16 & 0xff), (val >> 8 & 0xff), 464 - val & 0xff); 465 - break; 466 - } 467 - seq_printf(s, "0x%08x\n", val); 468 - 469 - return 0; 470 - } 471 - 472 - #undef MANT 473 - #undef FRAC 474 - 475 - static ssize_t brcm_avs_seq_write(struct file *file, const char __user *buf, 476 - size_t size, loff_t *ppos) 477 - { 478 - struct seq_file *s = file->private_data; 479 - struct debugfs_data *dbgfs = s->private; 480 - struct private_data *priv = dbgfs->priv; 481 - void __iomem *base, *avs_intr_base; 482 - bool use_issue_command = false; 483 - unsigned long val, offset; 484 - char str[128]; 485 - int ret; 486 - char *str_ptr = str; 487 - 488 - if (size >= sizeof(str)) 489 - return -E2BIG; 490 - 491 - memset(str, 0, sizeof(str)); 492 - ret = copy_from_user(str, buf, size); 493 - if (ret) 494 - return ret; 495 - 496 - base = priv->base; 497 - avs_intr_base = priv->avs_intr_base; 498 - offset = dbgfs->entry->offset; 499 - /* 500 - * Special case writing to "command" entry only: if the string starts 501 - * with a 'c', we use the driver's __issue_avs_command() function. 502 - * Otherwise, we perform a raw write. This should allow testing of raw 503 - * access as well as using the higher level function. (Raw access 504 - * doesn't clear the firmware return status after issuing the command.) 505 - */ 506 - if (str_ptr[0] == 'c' && offset == AVS_MBOX_COMMAND) { 507 - use_issue_command = true; 508 - str_ptr++; 509 - } 510 - if (kstrtoul(str_ptr, 0, &val) != 0) 511 - return -EINVAL; 512 - 513 - /* 514 - * Setting the P-state is a special case. 
We need to update the CPU 515 - * frequency we report. 516 - */ 517 - if (val == AVS_CMD_SET_PSTATE) { 518 - struct cpufreq_policy *policy; 519 - unsigned int pstate; 520 - 521 - policy = cpufreq_cpu_get(smp_processor_id()); 522 - /* Read back the P-state we are about to set */ 523 - pstate = readl(base + AVS_MBOX_PARAM(0)); 524 - if (use_issue_command) { 525 - ret = brcm_avs_target_index(policy, pstate); 526 - return ret ? ret : size; 527 - } 528 - policy->cur = policy->freq_table[pstate].frequency; 529 - } 530 - 531 - if (use_issue_command) { 532 - ret = __issue_avs_command(priv, val, false, NULL); 533 - } else { 534 - /* Locking here is not perfect, but is only for debug. */ 535 - ret = down_interruptible(&priv->sem); 536 - if (ret) 537 - return ret; 538 - 539 - writel(val, base + offset); 540 - /* We have to wake up the firmware to process a command. */ 541 - if (offset == AVS_MBOX_COMMAND) 542 - writel(AVS_CPU_L2_INT_MASK, 543 - avs_intr_base + AVS_CPU_L2_SET0); 544 - up(&priv->sem); 545 - } 546 - 547 - return ret ? ret : size; 548 - } 549 - 550 - static struct debugfs_entry *__find_debugfs_entry(const char *name) 551 - { 552 - int i; 553 - 554 - for (i = 0; i < ARRAY_SIZE(debugfs_entries); i++) 555 - if (strcasecmp(debugfs_entries[i].name, name) == 0) 556 - return &debugfs_entries[i]; 557 - 558 - return NULL; 559 - } 560 - 561 - static int brcm_avs_debug_open(struct inode *inode, struct file *file) 562 - { 563 - struct debugfs_data *data; 564 - fmode_t fmode; 565 - int ret; 566 - 567 - /* 568 - * seq_open(), which is called by single_open(), clears "write" access. 569 - * We need write access to some files, so we preserve our access mode 570 - * and restore it. 571 - */ 572 - fmode = file->f_mode; 573 - /* 574 - * Check access permissions even for root. We don't want to be writing 575 - * to read-only registers. Access for regular users has already been 576 - * checked by the VFS layer. 
577 - */ 578 - if ((fmode & FMODE_WRITER) && !(inode->i_mode & S_IWUSR)) 579 - return -EACCES; 580 - 581 - data = kmalloc(sizeof(*data), GFP_KERNEL); 582 - if (!data) 583 - return -ENOMEM; 584 - /* 585 - * We use the same file system operations for all our debug files. To 586 - * produce specific output, we look up the file name upon opening a 587 - * debugfs entry and map it to a memory offset. This offset is then used 588 - * in the generic "show" function to read a specific register. 589 - */ 590 - data->entry = __find_debugfs_entry(file->f_path.dentry->d_iname); 591 - data->priv = inode->i_private; 592 - 593 - ret = single_open(file, brcm_avs_debug_show, data); 594 - if (ret) 595 - kfree(data); 596 - file->f_mode = fmode; 597 - 598 - return ret; 599 - } 600 - 601 - static int brcm_avs_debug_release(struct inode *inode, struct file *file) 602 - { 603 - struct seq_file *seq_priv = file->private_data; 604 - struct debugfs_data *data = seq_priv->private; 605 - 606 - kfree(data); 607 - return single_release(inode, file); 608 - } 609 - 610 - static const struct file_operations brcm_avs_debug_ops = { 611 - .open = brcm_avs_debug_open, 612 - .read = seq_read, 613 - .write = brcm_avs_seq_write, 614 - .llseek = seq_lseek, 615 - .release = brcm_avs_debug_release, 616 - }; 617 - 618 - static void brcm_avs_cpufreq_debug_init(struct platform_device *pdev) 619 - { 620 - struct private_data *priv = platform_get_drvdata(pdev); 621 - struct dentry *dir; 622 - int i; 623 - 624 - if (!priv) 625 - return; 626 - 627 - dir = debugfs_create_dir(BRCM_AVS_CPUFREQ_NAME, NULL); 628 - if (IS_ERR_OR_NULL(dir)) 629 - return; 630 - priv->debugfs = dir; 631 - 632 - for (i = 0; i < ARRAY_SIZE(debugfs_entries); i++) { 633 - /* 634 - * The DEBUGFS_ENTRY macro generates uppercase strings. We 635 - * convert them to lowercase before creating the debugfs 636 - * entries. 
637 - */ 638 - char *entry = __strtolower(debugfs_entries[i].name); 639 - fmode_t mode = debugfs_entries[i].mode; 640 - 641 - if (!debugfs_create_file(entry, S_IFREG | S_IRUGO | mode, 642 - dir, priv, &brcm_avs_debug_ops)) { 643 - priv->debugfs = NULL; 644 - debugfs_remove_recursive(dir); 645 - break; 646 - } 647 - } 648 - } 649 - 650 - static void brcm_avs_cpufreq_debug_exit(struct platform_device *pdev) 651 - { 652 - struct private_data *priv = platform_get_drvdata(pdev); 653 - 654 - if (priv && priv->debugfs) { 655 - debugfs_remove_recursive(priv->debugfs); 656 - priv->debugfs = NULL; 657 - } 658 - } 659 - 660 - #else 661 - 662 - static void brcm_avs_cpufreq_debug_init(struct platform_device *pdev) {} 663 - static void brcm_avs_cpufreq_debug_exit(struct platform_device *pdev) {} 664 - 665 - #endif /* CONFIG_ARM_BRCMSTB_AVS_CPUFREQ_DEBUG */ 666 518 667 519 /* 668 520 * To ensure the right firmware is running we need to ··· 700 1016 return ret; 701 1017 702 1018 brcm_avs_driver.driver_data = pdev; 703 - ret = cpufreq_register_driver(&brcm_avs_driver); 704 - if (!ret) 705 - brcm_avs_cpufreq_debug_init(pdev); 706 1019 707 - return ret; 1020 + return cpufreq_register_driver(&brcm_avs_driver); 708 1021 } 709 1022 710 1023 static int brcm_avs_cpufreq_remove(struct platform_device *pdev) ··· 712 1031 ret = cpufreq_unregister_driver(&brcm_avs_driver); 713 1032 if (ret) 714 1033 return ret; 715 - 716 - brcm_avs_cpufreq_debug_exit(pdev); 717 1034 718 1035 priv = platform_get_drvdata(pdev); 719 1036 iounmap(priv->base);
+11 -3
drivers/cpufreq/powernv-cpufreq.c
··· 679 679 680 680 if (!spin_trylock(&gpstates->gpstate_lock)) 681 681 return; 682 + /* 683 + * If the timer has migrated to the different cpu then bring 684 + * it back to one of the policy->cpus 685 + */ 686 + if (!cpumask_test_cpu(raw_smp_processor_id(), policy->cpus)) { 687 + gpstates->timer.expires = jiffies + msecs_to_jiffies(1); 688 + add_timer_on(&gpstates->timer, cpumask_first(policy->cpus)); 689 + spin_unlock(&gpstates->gpstate_lock); 690 + return; 691 + } 682 692 683 693 /* 684 694 * If PMCR was last updated was using fast_swtich then ··· 728 718 if (gpstate_idx != gpstates->last_lpstate_idx) 729 719 queue_gpstate_timer(gpstates); 730 720 721 + set_pstate(&freq_data); 731 722 spin_unlock(&gpstates->gpstate_lock); 732 - 733 - /* Timer may get migrated to a different cpu on cpu hot unplug */ 734 - smp_call_function_any(policy->cpus, set_pstate, &freq_data, 1); 735 723 } 736 724 737 725 /*
+1 -1
drivers/firmware/arm_scmi/clock.c
··· 284 284 struct clock_info *ci = handle->clk_priv; 285 285 struct scmi_clock_info *clk = ci->clk + clk_id; 286 286 287 - if (!clk->name || !clk->name[0]) 287 + if (!clk->name[0]) 288 288 return NULL; 289 289 290 290 return clk;
+1 -1
drivers/fpga/altera-ps-spi.c
··· 249 249 250 250 conf->data = of_id->data; 251 251 conf->spi = spi; 252 - conf->config = devm_gpiod_get(&spi->dev, "nconfig", GPIOD_OUT_HIGH); 252 + conf->config = devm_gpiod_get(&spi->dev, "nconfig", GPIOD_OUT_LOW); 253 253 if (IS_ERR(conf->config)) { 254 254 dev_err(&spi->dev, "Failed to get config gpio: %ld\n", 255 255 PTR_ERR(conf->config));
+5 -2
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
··· 1459 1459 static const u32 vgpr_init_regs[] = 1460 1460 { 1461 1461 mmCOMPUTE_STATIC_THREAD_MGMT_SE0, 0xffffffff, 1462 - mmCOMPUTE_RESOURCE_LIMITS, 0, 1462 + mmCOMPUTE_RESOURCE_LIMITS, 0x1000000, /* CU_GROUP_COUNT=1 */ 1463 1463 mmCOMPUTE_NUM_THREAD_X, 256*4, 1464 1464 mmCOMPUTE_NUM_THREAD_Y, 1, 1465 1465 mmCOMPUTE_NUM_THREAD_Z, 1, 1466 + mmCOMPUTE_PGM_RSRC1, 0x100004f, /* VGPRS=15 (64 logical VGPRs), SGPRS=1 (16 SGPRs), BULKY=1 */ 1466 1467 mmCOMPUTE_PGM_RSRC2, 20, 1467 1468 mmCOMPUTE_USER_DATA_0, 0xedcedc00, 1468 1469 mmCOMPUTE_USER_DATA_1, 0xedcedc01, ··· 1480 1479 static const u32 sgpr1_init_regs[] = 1481 1480 { 1482 1481 mmCOMPUTE_STATIC_THREAD_MGMT_SE0, 0x0f, 1483 - mmCOMPUTE_RESOURCE_LIMITS, 0x1000000, 1482 + mmCOMPUTE_RESOURCE_LIMITS, 0x1000000, /* CU_GROUP_COUNT=1 */ 1484 1483 mmCOMPUTE_NUM_THREAD_X, 256*5, 1485 1484 mmCOMPUTE_NUM_THREAD_Y, 1, 1486 1485 mmCOMPUTE_NUM_THREAD_Z, 1, 1486 + mmCOMPUTE_PGM_RSRC1, 0x240, /* SGPRS=9 (80 GPRS) */ 1487 1487 mmCOMPUTE_PGM_RSRC2, 20, 1488 1488 mmCOMPUTE_USER_DATA_0, 0xedcedc00, 1489 1489 mmCOMPUTE_USER_DATA_1, 0xedcedc01, ··· 1505 1503 mmCOMPUTE_NUM_THREAD_X, 256*5, 1506 1504 mmCOMPUTE_NUM_THREAD_Y, 1, 1507 1505 mmCOMPUTE_NUM_THREAD_Z, 1, 1506 + mmCOMPUTE_PGM_RSRC1, 0x240, /* SGPRS=9 (80 GPRS) */ 1508 1507 mmCOMPUTE_PGM_RSRC2, 20, 1509 1508 mmCOMPUTE_USER_DATA_0, 0xedcedc00, 1510 1509 mmCOMPUTE_USER_DATA_1, 0xedcedc01,
+1
drivers/gpu/drm/amd/amdkfd/Kconfig
··· 6 6 tristate "HSA kernel driver for AMD GPU devices" 7 7 depends on DRM_AMDGPU && X86_64 8 8 imply AMD_IOMMU_V2 9 + select MMU_NOTIFIER 9 10 help 10 11 Enable this if you want to use HSA features on AMD GPU devices.
+9 -8
drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
··· 749 749 struct timespec64 time; 750 750 751 751 dev = kfd_device_by_id(args->gpu_id); 752 - if (dev == NULL) 753 - return -EINVAL; 754 - 755 - /* Reading GPU clock counter from KGD */ 756 - args->gpu_clock_counter = 757 - dev->kfd2kgd->get_gpu_clock_counter(dev->kgd); 752 + if (dev) 753 + /* Reading GPU clock counter from KGD */ 754 + args->gpu_clock_counter = 755 + dev->kfd2kgd->get_gpu_clock_counter(dev->kgd); 756 + else 757 + /* Node without GPU resource */ 758 + args->gpu_clock_counter = 0; 758 759 759 760 /* No access to rdtsc. Using raw monotonic time */ 760 761 getrawmonotonic64(&time); ··· 1148 1147 return ret; 1149 1148 } 1150 1149 1151 - bool kfd_dev_is_large_bar(struct kfd_dev *dev) 1150 + static bool kfd_dev_is_large_bar(struct kfd_dev *dev) 1152 1151 { 1153 1152 struct kfd_local_mem_info mem_info; 1154 1153 ··· 1422 1421 1423 1422 pdd = kfd_get_process_device_data(dev, p); 1424 1423 if (!pdd) { 1425 - err = PTR_ERR(pdd); 1424 + err = -EINVAL; 1426 1425 goto bind_process_to_device_failed; 1427 1426 } 1428 1427
+9 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 4557 4557 struct amdgpu_dm_connector *aconnector = NULL; 4558 4558 struct drm_connector_state *new_con_state = NULL; 4559 4559 struct dm_connector_state *dm_conn_state = NULL; 4560 + struct drm_plane_state *new_plane_state = NULL; 4560 4561 4561 4562 new_stream = NULL; 4562 4563 4563 4564 dm_old_crtc_state = to_dm_crtc_state(old_crtc_state); 4564 4565 dm_new_crtc_state = to_dm_crtc_state(new_crtc_state); 4565 4566 acrtc = to_amdgpu_crtc(crtc); 4567 + 4568 + new_plane_state = drm_atomic_get_new_plane_state(state, new_crtc_state->crtc->primary); 4569 + 4570 + if (new_crtc_state->enable && new_plane_state && !new_plane_state->fb) { 4571 + ret = -EINVAL; 4572 + goto fail; 4573 + } 4566 4574 4567 4575 aconnector = amdgpu_dm_find_first_crtc_matching_connector(state, crtc); 4568 4576 ··· 4768 4760 if (!dm_old_crtc_state->stream) 4769 4761 continue; 4770 4762 4771 - DRM_DEBUG_DRIVER("Disabling DRM plane: %d on DRM crtc %d\n", 4763 + DRM_DEBUG_ATOMIC("Disabling DRM plane: %d on DRM crtc %d\n", 4772 4764 plane->base.id, old_plane_crtc->base.id); 4773 4765 4774 4766 if (!dc_remove_plane_from_context(
+3 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
··· 329 329 { 330 330 int src; 331 331 struct irq_list_head *lh; 332 + unsigned long irq_table_flags; 332 333 DRM_DEBUG_KMS("DM_IRQ: releasing resources.\n"); 333 - 334 334 for (src = 0; src < DAL_IRQ_SOURCES_NUMBER; src++) { 335 - 335 + DM_IRQ_TABLE_LOCK(adev, irq_table_flags); 336 336 /* The handler was removed from the table, 337 337 * it means it is safe to flush all the 'work' 338 338 * (because no code can schedule a new one). */ 339 339 lh = &adev->dm.irq_handler_list_low_tab[src]; 340 + DM_IRQ_TABLE_UNLOCK(adev, irq_table_flags); 340 341 flush_work(&lh->work); 341 342 } 342 343 }
+22 -32
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
··· 161 161 struct amdgpu_dm_connector *amdgpu_dm_connector = to_amdgpu_dm_connector(connector); 162 162 struct amdgpu_encoder *amdgpu_encoder = amdgpu_dm_connector->mst_encoder; 163 163 164 + if (amdgpu_dm_connector->edid) { 165 + kfree(amdgpu_dm_connector->edid); 166 + amdgpu_dm_connector->edid = NULL; 167 + } 168 + 164 169 drm_encoder_cleanup(&amdgpu_encoder->base); 165 170 kfree(amdgpu_encoder); 166 171 drm_connector_cleanup(connector); ··· 186 181 void dm_dp_mst_dc_sink_create(struct drm_connector *connector) 187 182 { 188 183 struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector); 189 - struct edid *edid; 190 184 struct dc_sink *dc_sink; 191 185 struct dc_sink_init_data init_params = { 192 186 .link = aconnector->dc_link, 193 187 .sink_signal = SIGNAL_TYPE_DISPLAY_PORT_MST }; 188 + 189 + /* FIXME none of this is safe. we shouldn't touch aconnector here in 190 + * atomic_check 191 + */ 194 192 195 193 /* 196 194 * TODO: Need to further figure out why ddc.algo is NULL while MST port exists ··· 201 193 if (!aconnector->port || !aconnector->port->aux.ddc.algo) 202 194 return; 203 195 204 - edid = drm_dp_mst_get_edid(connector, &aconnector->mst_port->mst_mgr, aconnector->port); 205 - 206 - if (!edid) { 207 - drm_mode_connector_update_edid_property( 208 - &aconnector->base, 209 - NULL); 210 - return; 211 - } 212 - 213 - aconnector->edid = edid; 196 + ASSERT(aconnector->edid); 214 197 215 198 dc_sink = dc_link_add_remote_sink( 216 199 aconnector->dc_link, ··· 214 215 215 216 amdgpu_dm_add_sink_to_freesync_module( 216 217 connector, aconnector->edid); 217 - 218 - drm_mode_connector_update_edid_property( 219 - &aconnector->base, aconnector->edid); 220 218 } 221 219 222 220 static int dm_dp_mst_get_modes(struct drm_connector *connector) ··· 226 230 227 231 if (!aconnector->edid) { 228 232 struct edid *edid; 229 - struct dc_sink *dc_sink; 230 - struct dc_sink_init_data init_params = { 231 - .link = aconnector->dc_link, 232 - .sink_signal = SIGNAL_TYPE_DISPLAY_PORT_MST }; 233 233 edid = drm_dp_mst_get_edid(connector, &aconnector->mst_port->mst_mgr, aconnector->port); 234 234 235 235 if (!edid) { ··· 236 244 } 237 245 238 246 aconnector->edid = edid; 247 + } 239 248 249 + if (!aconnector->dc_sink) { 250 + struct dc_sink *dc_sink; 251 + struct dc_sink_init_data init_params = { 252 + .link = aconnector->dc_link, 253 + .sink_signal = SIGNAL_TYPE_DISPLAY_PORT_MST }; 240 254 dc_sink = dc_link_add_remote_sink( 241 255 aconnector->dc_link, 242 - (uint8_t *)edid, 243 - (edid->extensions + 1) * EDID_LENGTH, 256 + (uint8_t *)aconnector->edid, 257 + (aconnector->edid->extensions + 1) * EDID_LENGTH, 244 258 &init_params); 245 259 246 260 dc_sink->priv = aconnector; ··· 254 256 255 257 if (aconnector->dc_sink) 256 258 amdgpu_dm_add_sink_to_freesync_module( 257 - connector, edid); 258 - 259 - drm_mode_connector_update_edid_property( 260 - &aconnector->base, edid); 259 + connector, aconnector->edid); 261 260 } 261 + 262 + drm_mode_connector_update_edid_property( 263 + &aconnector->base, aconnector->edid); 262 264 263 265 ret = drm_add_edid_modes(connector, aconnector->edid); 264 266 ··· 422 424 dc_sink_release(aconnector->dc_sink); 423 425 aconnector->dc_sink = NULL; 424 426 } 425 - if (aconnector->edid) { 426 - kfree(aconnector->edid); 427 - aconnector->edid = NULL; 428 - } 429 - 430 - drm_mode_connector_update_edid_property( 431 - &aconnector->base, 432 - NULL); 433 427 434 428 aconnector->mst_connected = false; 435 429 }
+3 -8
drivers/gpu/drm/drm_edid.c
··· 4451 4451 info->max_tmds_clock = 0; 4452 4452 info->dvi_dual = false; 4453 4453 info->has_hdmi_infoframe = false; 4454 + memset(&info->hdmi, 0, sizeof(info->hdmi)); 4454 4455 4455 4456 info->non_desktop = 0; 4456 4457 } ··· 4463 4462 4464 4463 u32 quirks = edid_get_quirks(edid); 4465 4464 4465 + drm_reset_display_info(connector); 4466 + 4466 4467 info->width_mm = edid->width_cm * 10; 4467 4468 info->height_mm = edid->height_cm * 10; 4468 - 4469 - /* driver figures it out in this case */ 4470 - info->bpc = 0; 4471 - info->color_formats = 0; 4472 - info->cea_rev = 0; 4473 - info->max_tmds_clock = 0; 4474 - info->dvi_dual = false; 4475 - info->has_hdmi_infoframe = false; 4476 4469 4477 4470 info->non_desktop = !!(quirks & EDID_QUIRK_NON_DESKTOP); 4478 4471
+14 -2
drivers/gpu/drm/i915/intel_cdclk.c
··· 2140 2140 } 2141 2141 } 2142 2142 2143 - /* According to BSpec, "The CD clock frequency must be at least twice 2143 + /* 2144 + * According to BSpec, "The CD clock frequency must be at least twice 2144 2145 * the frequency of the Azalia BCLK." and BCLK is 96 MHz by default. 2146 + * 2147 + * FIXME: Check the actual, not default, BCLK being used. 2148 + * 2149 + * FIXME: This does not depend on ->has_audio because the higher CDCLK 2150 + * is required for audio probe, also when there are no audio capable 2151 + * displays connected at probe time. This leads to unnecessarily high 2152 + * CDCLK when audio is not required. 2153 + * 2154 + * FIXME: This limit is only applied when there are displays connected 2155 + * at probe time. If we probe without displays, we'll still end up using 2156 + * the platform minimum CDCLK, failing audio probe. 2145 2157 */ 2146 - if (crtc_state->has_audio && INTEL_GEN(dev_priv) >= 9) 2158 + if (INTEL_GEN(dev_priv) >= 9) 2147 2159 min_cdclk = max(2 * 96000, min_cdclk); 2148 2160 2149 2161 /*
+2 -2
drivers/gpu/drm/i915/intel_drv.h
··· 49 49 * check the condition before the timeout. 50 50 */ 51 51 #define __wait_for(OP, COND, US, Wmin, Wmax) ({ \ 52 - unsigned long timeout__ = jiffies + usecs_to_jiffies(US) + 1; \ 52 + const ktime_t end__ = ktime_add_ns(ktime_get_raw(), 1000ll * (US)); \ 53 53 long wait__ = (Wmin); /* recommended min for usleep is 10 us */ \ 54 54 int ret__; \ 55 55 might_sleep(); \ 56 56 for (;;) { \ 57 - bool expired__ = time_after(jiffies, timeout__); \ 57 + const bool expired__ = ktime_after(ktime_get_raw(), end__); \ 58 58 OP; \ 59 59 if (COND) { \ 60 60 ret__ = 0; \
+1 -1
drivers/gpu/drm/i915/intel_fbdev.c
··· 806 806 return; 807 807 808 808 intel_fbdev_sync(ifbdev); 809 - if (ifbdev->vma) 809 + if (ifbdev->vma || ifbdev->helper.deferred_setup) 810 810 drm_fb_helper_hotplug_event(&ifbdev->helper); 811 811 } 812 812
+5 -6
drivers/gpu/drm/i915/intel_runtime_pm.c
··· 641 641 642 642 DRM_DEBUG_KMS("Enabling DC6\n"); 643 643 644 - gen9_set_dc_state(dev_priv, DC_STATE_EN_UPTO_DC6); 644 + /* Wa Display #1183: skl,kbl,cfl */ 645 + if (IS_GEN9_BC(dev_priv)) 646 + I915_WRITE(GEN8_CHICKEN_DCPR_1, I915_READ(GEN8_CHICKEN_DCPR_1) | 647 + SKL_SELECT_ALTERNATE_DC_EXIT); 645 648 649 + gen9_set_dc_state(dev_priv, DC_STATE_EN_UPTO_DC6); 646 650 } 647 651 648 652 void skl_disable_dc6(struct drm_i915_private *dev_priv) 649 653 { 650 654 DRM_DEBUG_KMS("Disabling DC6\n"); 651 - 652 - /* Wa Display #1183: skl,kbl,cfl */ 653 - if (IS_GEN9_BC(dev_priv)) 654 - I915_WRITE(GEN8_CHICKEN_DCPR_1, I915_READ(GEN8_CHICKEN_DCPR_1) | 655 - SKL_SELECT_ALTERNATE_DC_EXIT); 656 655 657 656 gen9_set_dc_state(dev_priv, DC_STATE_DISABLE); 658 657 }
+1
drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c
··· 351 351 352 352 spin_lock_irqsave(&dev->event_lock, flags); 353 353 mdp4_crtc->event = crtc->state->event; 354 + crtc->state->event = NULL; 354 355 spin_unlock_irqrestore(&dev->event_lock, flags); 355 356 356 357 blend_setup(crtc);
+1
drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
··· 708 708 709 709 spin_lock_irqsave(&dev->event_lock, flags); 710 710 mdp5_crtc->event = crtc->state->event; 711 + crtc->state->event = NULL; 711 712 spin_unlock_irqrestore(&dev->event_lock, flags); 712 713 713 714 /*
+2 -1
drivers/gpu/drm/msm/disp/mdp_format.c
··· 171 171 return i; 172 172 } 173 173 174 - const struct msm_format *mdp_get_format(struct msm_kms *kms, uint32_t format) 174 + const struct msm_format *mdp_get_format(struct msm_kms *kms, uint32_t format, 175 + uint64_t modifier) 175 176 { 176 177 int i; 177 178 for (i = 0; i < ARRAY_SIZE(formats); i++) {
+1 -1
drivers/gpu/drm/msm/disp/mdp_kms.h
··· 98 98 #define MDP_FORMAT_IS_YUV(mdp_format) ((mdp_format)->is_yuv) 99 99 100 100 uint32_t mdp_get_formats(uint32_t *formats, uint32_t max_formats, bool rgb_only); 101 - const struct msm_format *mdp_get_format(struct msm_kms *kms, uint32_t format); 101 + const struct msm_format *mdp_get_format(struct msm_kms *kms, uint32_t format, uint64_t modifier); 102 102 103 103 /* MDP capabilities */ 104 104 #define MDP_CAP_SMP BIT(0) /* Shared Memory Pool */
+12 -4
drivers/gpu/drm/msm/dsi/dsi_host.c
··· 173 173 174 174 bool registered; 175 175 bool power_on; 176 + bool enabled; 176 177 int irq; 177 178 }; 178 179 ··· 776 775 switch (mipi_fmt) { 777 776 case MIPI_DSI_FMT_RGB888: return CMD_DST_FORMAT_RGB888; 778 777 case MIPI_DSI_FMT_RGB666_PACKED: 779 - case MIPI_DSI_FMT_RGB666: return VID_DST_FORMAT_RGB666; 778 + case MIPI_DSI_FMT_RGB666: return CMD_DST_FORMAT_RGB666; 780 779 case MIPI_DSI_FMT_RGB565: return CMD_DST_FORMAT_RGB565; 781 780 default: return CMD_DST_FORMAT_RGB888; 782 781 } ··· 987 986 988 987 static void dsi_wait4video_done(struct msm_dsi_host *msm_host) 989 988 { 989 + u32 ret = 0; 990 + struct device *dev = &msm_host->pdev->dev; 991 + 990 992 dsi_intr_ctrl(msm_host, DSI_IRQ_MASK_VIDEO_DONE, 1); 991 993 992 994 reinit_completion(&msm_host->video_comp); 993 995 994 - wait_for_completion_timeout(&msm_host->video_comp, 996 + ret = wait_for_completion_timeout(&msm_host->video_comp, 995 997 msecs_to_jiffies(70)); 998 + 999 + if (ret <= 0) 1000 + dev_err(dev, "wait for video done timed out\n"); 996 1001 997 1002 dsi_intr_ctrl(msm_host, DSI_IRQ_MASK_VIDEO_DONE, 0); 998 1003 } ··· 1008 1001 if (!(msm_host->mode_flags & MIPI_DSI_MODE_VIDEO)) 1009 1002 return; 1010 1003 1011 - if (msm_host->power_on) { 1004 + if (msm_host->power_on && msm_host->enabled) { 1012 1005 dsi_wait4video_done(msm_host); 1013 1006 /* delay 4 ms to skip BLLP */ 1014 1007 usleep_range(2000, 4000); ··· 2210 2203 * pm_runtime_put_autosuspend(&msm_host->pdev->dev); 2211 2204 * } 2212 2205 */ 2213 - 2206 + msm_host->enabled = true; 2214 2207 return 0; 2215 2208 } 2216 2209 ··· 2218 2211 { 2219 2212 struct msm_dsi_host *msm_host = to_msm_dsi_host(host); 2220 2213 2214 + msm_host->enabled = false; 2221 2215 dsi_op_mode_config(msm_host, 2222 2216 !!(msm_host->mode_flags & MIPI_DSI_MODE_VIDEO), false); 2223 2217
+109
drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
··· 265 265 return 0; 266 266 } 267 267 268 + int msm_dsi_dphy_timing_calc_v3(struct msm_dsi_dphy_timing *timing, 269 + struct msm_dsi_phy_clk_request *clk_req) 270 + { 271 + const unsigned long bit_rate = clk_req->bitclk_rate; 272 + const unsigned long esc_rate = clk_req->escclk_rate; 273 + s32 ui, ui_x8, lpx; 274 + s32 tmax, tmin; 275 + s32 pcnt0 = 50; 276 + s32 pcnt1 = 50; 277 + s32 pcnt2 = 10; 278 + s32 pcnt3 = 30; 279 + s32 pcnt4 = 10; 280 + s32 pcnt5 = 2; 281 + s32 coeff = 1000; /* Precision, should avoid overflow */ 282 + s32 hb_en, hb_en_ckln; 283 + s32 temp; 284 + 285 + if (!bit_rate || !esc_rate) 286 + return -EINVAL; 287 + 288 + timing->hs_halfbyte_en = 0; 289 + hb_en = 0; 290 + timing->hs_halfbyte_en_ckln = 0; 291 + hb_en_ckln = 0; 292 + 293 + ui = mult_frac(NSEC_PER_MSEC, coeff, bit_rate / 1000); 294 + ui_x8 = ui << 3; 295 + lpx = mult_frac(NSEC_PER_MSEC, coeff, esc_rate / 1000); 296 + 297 + temp = S_DIV_ROUND_UP(38 * coeff, ui_x8); 298 + tmin = max_t(s32, temp, 0); 299 + temp = (95 * coeff) / ui_x8; 300 + tmax = max_t(s32, temp, 0); 301 + timing->clk_prepare = linear_inter(tmax, tmin, pcnt0, 0, false); 302 + 303 + temp = 300 * coeff - (timing->clk_prepare << 3) * ui; 304 + tmin = S_DIV_ROUND_UP(temp, ui_x8) - 1; 305 + tmax = (tmin > 255) ? 511 : 255; 306 + timing->clk_zero = linear_inter(tmax, tmin, pcnt5, 0, false); 307 + 308 + tmin = DIV_ROUND_UP(60 * coeff + 3 * ui, ui_x8); 309 + temp = 105 * coeff + 12 * ui - 20 * coeff; 310 + tmax = (temp + 3 * ui) / ui_x8; 311 + timing->clk_trail = linear_inter(tmax, tmin, pcnt3, 0, false); 312 + 313 + temp = S_DIV_ROUND_UP(40 * coeff + 4 * ui, ui_x8); 314 + tmin = max_t(s32, temp, 0); 315 + temp = (85 * coeff + 6 * ui) / ui_x8; 316 + tmax = max_t(s32, temp, 0); 317 + timing->hs_prepare = linear_inter(tmax, tmin, pcnt1, 0, false); 318 + 319 + temp = 145 * coeff + 10 * ui - (timing->hs_prepare << 3) * ui; 320 + tmin = S_DIV_ROUND_UP(temp, ui_x8) - 1; 321 + tmax = 255; 322 + timing->hs_zero = linear_inter(tmax, tmin, pcnt4, 0, false); 323 + 324 + tmin = DIV_ROUND_UP(60 * coeff + 4 * ui, ui_x8) - 1; 325 + temp = 105 * coeff + 12 * ui - 20 * coeff; 326 + tmax = (temp / ui_x8) - 1; 327 + timing->hs_trail = linear_inter(tmax, tmin, pcnt3, 0, false); 328 + 329 + temp = 50 * coeff + ((hb_en << 2) - 8) * ui; 330 + timing->hs_rqst = S_DIV_ROUND_UP(temp, ui_x8); 331 + 332 + tmin = DIV_ROUND_UP(100 * coeff, ui_x8) - 1; 333 + tmax = 255; 334 + timing->hs_exit = linear_inter(tmax, tmin, pcnt2, 0, false); 335 + 336 + temp = 50 * coeff + ((hb_en_ckln << 2) - 8) * ui; 337 + timing->hs_rqst_ckln = S_DIV_ROUND_UP(temp, ui_x8); 338 + 339 + temp = 60 * coeff + 52 * ui - 43 * ui; 340 + tmin = DIV_ROUND_UP(temp, ui_x8) - 1; 341 + tmax = 63; 342 + timing->shared_timings.clk_post = 343 + linear_inter(tmax, tmin, pcnt2, 0, false); 344 + 345 + temp = 8 * ui + (timing->clk_prepare << 3) * ui; 346 + temp += (((timing->clk_zero + 3) << 3) + 11) * ui; 347 + temp += hb_en_ckln ? (((timing->hs_rqst_ckln << 3) + 4) * ui) : 348 + (((timing->hs_rqst_ckln << 3) + 8) * ui); 349 + tmin = S_DIV_ROUND_UP(temp, ui_x8) - 1; 350 + tmax = 63; 351 + if (tmin > tmax) { 352 + temp = linear_inter(tmax << 1, tmin, pcnt2, 0, false); 353 + timing->shared_timings.clk_pre = temp >> 1; 354 + timing->shared_timings.clk_pre_inc_by_2 = 1; 355 + } else { 356 + timing->shared_timings.clk_pre = 357 + linear_inter(tmax, tmin, pcnt2, 0, false); 358 + timing->shared_timings.clk_pre_inc_by_2 = 0; 359 + } 360 + 361 + timing->ta_go = 3; 362 + timing->ta_sure = 0; 363 + timing->ta_get = 4; 364 + 365 + DBG("%d, %d, %d, %d, %d, %d, %d, %d, %d, %d, %d, %d, %d, %d, %d, %d", 366 + timing->shared_timings.clk_pre, timing->shared_timings.clk_post, 367 + timing->shared_timings.clk_pre_inc_by_2, timing->clk_zero, 368 + timing->clk_trail, timing->clk_prepare, timing->hs_exit, 369 + timing->hs_zero, timing->hs_prepare, timing->hs_trail, 370 + timing->hs_rqst, timing->hs_rqst_ckln, timing->hs_halfbyte_en, 371 + timing->hs_halfbyte_en_ckln, timing->hs_prep_dly, 372 + timing->hs_prep_dly_ckln); 373 + 374 + return 0; 375 + } 376 + 268 377 void msm_dsi_phy_set_src_pll(struct msm_dsi_phy *phy, int pll_id, u32 reg, 269 378 u32 bit_mask) 270 379 {
+2
drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
··· 101 101 struct msm_dsi_phy_clk_request *clk_req); 102 102 int msm_dsi_dphy_timing_calc_v2(struct msm_dsi_dphy_timing *timing, 103 103 struct msm_dsi_phy_clk_request *clk_req); 104 + int msm_dsi_dphy_timing_calc_v3(struct msm_dsi_dphy_timing *timing, 105 + struct msm_dsi_phy_clk_request *clk_req); 104 106 void msm_dsi_phy_set_src_pll(struct msm_dsi_phy *phy, int pll_id, u32 reg, 105 107 u32 bit_mask); 106 108 int msm_dsi_phy_init_common(struct msm_dsi_phy *phy);
-28
drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
··· 79 79 dsi_phy_write(lane_base + REG_DSI_10nm_PHY_LN_TX_DCTRL(3), 0x04); 80 80 } 81 81 82 - static int msm_dsi_dphy_timing_calc_v3(struct msm_dsi_dphy_timing *timing, 83 - struct msm_dsi_phy_clk_request *clk_req) 84 - { 85 - /* 86 - * TODO: These params need to be computed, they're currently hardcoded 87 - * for a 1440x2560@60Hz panel with a byteclk of 100.618 Mhz, and a 88 - * default escape clock of 19.2 Mhz. 89 - */ 90 - 91 - timing->hs_halfbyte_en = 0; 92 - timing->clk_zero = 0x1c; 93 - timing->clk_prepare = 0x07; 94 - timing->clk_trail = 0x07; 95 - timing->hs_exit = 0x23; 96 - timing->hs_zero = 0x21; 97 - timing->hs_prepare = 0x07; 98 - timing->hs_trail = 0x07; 99 - timing->hs_rqst = 0x05; 100 - timing->ta_sure = 0x00; 101 - timing->ta_go = 0x03; 102 - timing->ta_get = 0x04; 103 - 104 - timing->shared_timings.clk_pre = 0x2d; 105 - timing->shared_timings.clk_post = 0x0d; 106 - 107 - return 0; 108 - } 109 - 110 82 static int dsi_10nm_phy_enable(struct msm_dsi_phy *phy, int src_pll_id, 111 83 struct msm_dsi_phy_clk_request *clk_req) 112 84 {
+2 -1
drivers/gpu/drm/msm/msm_fb.c
··· 183 183 hsub = drm_format_horz_chroma_subsampling(mode_cmd->pixel_format); 184 184 vsub = drm_format_vert_chroma_subsampling(mode_cmd->pixel_format); 185 185 186 - format = kms->funcs->get_format(kms, mode_cmd->pixel_format); 186 + format = kms->funcs->get_format(kms, mode_cmd->pixel_format, 187 + mode_cmd->modifier[0]); 187 188 if (!format) { 188 189 dev_err(dev->dev, "unsupported pixel format: %4.4s\n", 189 190 (char *)&mode_cmd->pixel_format);
+2 -9
drivers/gpu/drm/msm/msm_fbdev.c
··· 92 92 93 93 if (IS_ERR(fb)) { 94 94 dev_err(dev->dev, "failed to allocate fb\n"); 95 - ret = PTR_ERR(fb); 96 - goto fail; 95 + return PTR_ERR(fb); 97 96 } 98 97 99 98 bo = msm_framebuffer_bo(fb, 0); ··· 150 151 151 152 fail_unlock: 152 153 mutex_unlock(&dev->struct_mutex); 153 - fail: 154 - 155 - if (ret) { 156 - if (fb) 157 - drm_framebuffer_remove(fb); 158 - } 159 - 154 + drm_framebuffer_remove(fb); 160 155 return ret; 161 156 } 162 157
+11 -9
drivers/gpu/drm/msm/msm_gem.c
··· 132 132 struct msm_gem_object *msm_obj = to_msm_bo(obj); 133 133 134 134 if (msm_obj->pages) { 135 - /* For non-cached buffers, ensure the new pages are clean 136 - * because display controller, GPU, etc. are not coherent: 137 - */ 138 - if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED)) 139 - dma_unmap_sg(obj->dev->dev, msm_obj->sgt->sgl, 140 - msm_obj->sgt->nents, DMA_BIDIRECTIONAL); 135 + if (msm_obj->sgt) { 136 + /* For non-cached buffers, ensure the new 137 + * pages are clean because display controller, 138 + * GPU, etc. are not coherent: 139 + */ 140 + if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED)) 141 + dma_unmap_sg(obj->dev->dev, msm_obj->sgt->sgl, 142 + msm_obj->sgt->nents, 143 + DMA_BIDIRECTIONAL); 141 144 142 - if (msm_obj->sgt) 143 145 sg_free_table(msm_obj->sgt); 144 - 145 - kfree(msm_obj->sgt); 146 + kfree(msm_obj->sgt); 147 + } 146 148 147 149 if (use_pages(obj)) 148 150 drm_gem_put_pages(obj, msm_obj->pages, true, false);
+4 -1
drivers/gpu/drm/msm/msm_kms.h
··· 48 48 /* functions to wait for atomic commit completed on each CRTC */ 49 49 void (*wait_for_crtc_commit_done)(struct msm_kms *kms, 50 50 struct drm_crtc *crtc); 51 + /* get msm_format w/ optional format modifiers from drm_mode_fb_cmd2 */ 52 + const struct msm_format *(*get_format)(struct msm_kms *kms, 53 + const uint32_t format, 54 + const uint64_t modifiers); 51 55 /* misc: */ 52 - const struct msm_format *(*get_format)(struct msm_kms *kms, uint32_t format); 53 56 long (*round_pixclk)(struct msm_kms *kms, unsigned long rate, 54 57 struct drm_encoder *encoder); 55 58 int (*set_split_display)(struct msm_kms *kms,
+2 -4
drivers/gpu/drm/qxl/qxl_cmd.c
··· 179 179 uint32_t type, bool interruptible) 180 180 { 181 181 struct qxl_command cmd; 182 - struct qxl_bo_list *entry = list_first_entry(&release->bos, struct qxl_bo_list, tv.head); 183 182 184 183 cmd.type = type; 185 - cmd.data = qxl_bo_physical_address(qdev, to_qxl_bo(entry->tv.bo), release->release_offset); 184 + cmd.data = qxl_bo_physical_address(qdev, release->release_bo, release->release_offset); 186 185 187 186 return qxl_ring_push(qdev->command_ring, &cmd, interruptible); 188 187 } ··· 191 192 uint32_t type, bool interruptible) 192 193 { 193 194 struct qxl_command cmd; 194 - struct qxl_bo_list *entry = list_first_entry(&release->bos, struct qxl_bo_list, tv.head); 195 195 196 196 cmd.type = type; 197 - cmd.data = qxl_bo_physical_address(qdev, to_qxl_bo(entry->tv.bo), release->release_offset); 197 + cmd.data = qxl_bo_physical_address(qdev, release->release_bo, release->release_offset); 198 198 199 199 return qxl_ring_push(qdev->cursor_ring, &cmd, interruptible); 200 200 }
+1
drivers/gpu/drm/qxl/qxl_drv.h
··· 167 167 168 168 int id; 169 169 int type; 170 + struct qxl_bo *release_bo; 170 171 uint32_t release_offset; 171 172 uint32_t surface_release_id; 172 173 struct ww_acquire_ctx ticket;
+2 -2
drivers/gpu/drm/qxl/qxl_ioctl.c
··· 182 182 goto out_free_reloc; 183 183 184 184 /* TODO copy slow path code from i915 */ 185 - fb_cmd = qxl_bo_kmap_atomic_page(qdev, cmd_bo, (release->release_offset & PAGE_SIZE)); 185 + fb_cmd = qxl_bo_kmap_atomic_page(qdev, cmd_bo, (release->release_offset & PAGE_MASK)); 186 186 unwritten = __copy_from_user_inatomic_nocache 187 - (fb_cmd + sizeof(union qxl_release_info) + (release->release_offset & ~PAGE_SIZE), 187 + (fb_cmd + sizeof(union qxl_release_info) + (release->release_offset & ~PAGE_MASK), 188 188 u64_to_user_ptr(cmd->command), cmd->command_size); 189 189 190 190 {
+9 -9
drivers/gpu/drm/qxl/qxl_release.c
··· 173 173 list_del(&entry->tv.head); 174 174 kfree(entry); 175 175 } 176 + release->release_bo = NULL; 176 177 } 177 178 178 179 void ··· 297 296 { 298 297 if (surface_cmd_type == QXL_SURFACE_CMD_DESTROY && create_rel) { 299 298 int idr_ret; 300 - struct qxl_bo_list *entry = list_first_entry(&create_rel->bos, struct qxl_bo_list, tv.head); 301 299 struct qxl_bo *bo; 302 300 union qxl_release_info *info; 303 301 ··· 304 304 idr_ret = qxl_release_alloc(qdev, QXL_RELEASE_SURFACE_CMD, release); 305 305 if (idr_ret < 0) 306 306 return idr_ret; 307 - bo = to_qxl_bo(entry->tv.bo); 307 + bo = create_rel->release_bo; 308 308 309 + (*release)->release_bo = bo; 309 310 (*release)->release_offset = create_rel->release_offset + 64; 310 311 311 312 qxl_release_list_add(*release, bo); ··· 366 365 367 366 bo = qxl_bo_ref(qdev->current_release_bo[cur_idx]); 368 367 368 + (*release)->release_bo = bo; 369 369 (*release)->release_offset = qdev->current_release_bo_offset[cur_idx] * release_size_per_bo[cur_idx]; 370 370 qdev->current_release_bo_offset[cur_idx]++; 371 371 ··· 410 408 { 411 409 void *ptr; 412 410 union qxl_release_info *info; 413 - struct qxl_bo_list *entry = list_first_entry(&release->bos, struct qxl_bo_list, tv.head); 414 - struct qxl_bo *bo = to_qxl_bo(entry->tv.bo); 411 + struct qxl_bo *bo = release->release_bo; 415 412 416 - ptr = qxl_bo_kmap_atomic_page(qdev, bo, release->release_offset & PAGE_SIZE); 413 + ptr = qxl_bo_kmap_atomic_page(qdev, bo, release->release_offset & PAGE_MASK); 417 414 if (!ptr) 418 415 return NULL; 419 - info = ptr + (release->release_offset & ~PAGE_SIZE); 416 + info = ptr + (release->release_offset & ~PAGE_MASK); 420 417 return info; 421 418 } 422 419 ··· 423 422 struct qxl_release *release, 424 423 union qxl_release_info *info) 425 424 { 426 - struct qxl_bo_list *entry = list_first_entry(&release->bos, struct qxl_bo_list, tv.head); 427 425 struct qxl_bo *bo = release->release_bo; 428 426 void *ptr; 429 427 430 428 ptr = ((void *)info) - (release->release_offset & ~PAGE_MASK); 431 429 qxl_bo_kunmap_atomic_page(qdev, bo, ptr); 432 430 433 431
-55
drivers/gpu/drm/sun4i/sun4i_lvds.c
··· 94 94 } 95 95 } 96 96 97 - static enum drm_mode_status sun4i_lvds_encoder_mode_valid(struct drm_encoder *crtc, 98 - const struct drm_display_mode *mode) 99 - { 100 - struct sun4i_lvds *lvds = drm_encoder_to_sun4i_lvds(crtc); 101 - struct sun4i_tcon *tcon = lvds->tcon; 102 - u32 hsync = mode->hsync_end - mode->hsync_start; 103 - u32 vsync = mode->vsync_end - mode->vsync_start; 104 - unsigned long rate = mode->clock * 1000; 105 - long rounded_rate; 106 - 107 - DRM_DEBUG_DRIVER("Validating modes...\n"); 108 - 109 - if (hsync < 1) 110 - return MODE_HSYNC_NARROW; 111 - 112 - if (hsync > 0x3ff) 113 - return MODE_HSYNC_WIDE; 114 - 115 - if ((mode->hdisplay < 1) || (mode->htotal < 1)) 116 - return MODE_H_ILLEGAL; 117 - 118 - if ((mode->hdisplay > 0x7ff) || (mode->htotal > 0xfff)) 119 - return MODE_BAD_HVALUE; 120 - 121 - DRM_DEBUG_DRIVER("Horizontal parameters OK\n"); 122 - 123 - if (vsync < 1) 124 - return MODE_VSYNC_NARROW; 125 - 126 - if (vsync > 0x3ff) 127 - return MODE_VSYNC_WIDE; 128 - 129 - if ((mode->vdisplay < 1) || (mode->vtotal < 1)) 130 - return MODE_V_ILLEGAL; 131 - 132 - if ((mode->vdisplay > 0x7ff) || (mode->vtotal > 0xfff)) 133 - return MODE_BAD_VVALUE; 134 - 135 - DRM_DEBUG_DRIVER("Vertical parameters OK\n"); 136 - 137 - tcon->dclk_min_div = 7; 138 - tcon->dclk_max_div = 7; 139 - rounded_rate = clk_round_rate(tcon->dclk, rate); 140 - if (rounded_rate < rate) 141 - return MODE_CLOCK_LOW; 142 - 143 - if (rounded_rate > rate) 144 - return MODE_CLOCK_HIGH; 145 - 146 - DRM_DEBUG_DRIVER("Clock rate OK\n"); 147 - 148 - return MODE_OK; 149 - } 150 - 151 97 static const struct drm_encoder_helper_funcs sun4i_lvds_enc_helper_funcs = { 152 98 .disable = sun4i_lvds_encoder_disable, 153 99 .enable = sun4i_lvds_encoder_enable, 154 - .mode_valid = sun4i_lvds_encoder_mode_valid, 155 100 }; 156 101 157 102 static const struct drm_encoder_funcs sun4i_lvds_enc_funcs = {
+2 -2
drivers/gpu/drm/virtio/virtgpu_vq.c
··· 293 293 ret = virtqueue_add_sgs(vq, sgs, outcnt, incnt, vbuf, GFP_ATOMIC); 294 294 if (ret == -ENOSPC) { 295 295 spin_unlock(&vgdev->ctrlq.qlock); 296 - wait_event(vgdev->ctrlq.ack_queue, vq->num_free); 296 + wait_event(vgdev->ctrlq.ack_queue, vq->num_free >= outcnt + incnt); 297 297 spin_lock(&vgdev->ctrlq.qlock); 298 298 goto retry; 299 299 } else { ··· 368 368 ret = virtqueue_add_sgs(vq, sgs, outcnt, 0, vbuf, GFP_ATOMIC); 369 369 if (ret == -ENOSPC) { 370 370 spin_unlock(&vgdev->cursorq.qlock); 371 - wait_event(vgdev->cursorq.ack_queue, vq->num_free); 371 + wait_event(vgdev->cursorq.ack_queue, vq->num_free >= outcnt); 372 372 spin_lock(&vgdev->cursorq.qlock); 373 373 goto retry; 374 374 } else {
+14 -3
drivers/hwmon/k10temp.c
··· 40 40 #define PCI_DEVICE_ID_AMD_17H_DF_F3 0x1463 41 41 #endif 42 42 43 + #ifndef PCI_DEVICE_ID_AMD_17H_RR_NB 44 + #define PCI_DEVICE_ID_AMD_17H_RR_NB 0x15d0 45 + #endif 46 + 43 47 /* CPUID function 0x80000001, ebx */ 44 48 #define CPUID_PKGTYPE_MASK 0xf0000000 45 49 #define CPUID_PKGTYPE_F 0x00000000 ··· 76 72 struct pci_dev *pdev; 77 73 void (*read_tempreg)(struct pci_dev *pdev, u32 *regval); 78 74 int temp_offset; 75 + u32 temp_adjust_mask; 79 76 }; 80 77 81 78 struct tctl_offset { ··· 89 84 { 0x17, "AMD Ryzen 5 1600X", 20000 }, 90 85 { 0x17, "AMD Ryzen 7 1700X", 20000 }, 91 86 { 0x17, "AMD Ryzen 7 1800X", 20000 }, 87 + { 0x17, "AMD Ryzen 7 2700X", 10000 }, 92 88 { 0x17, "AMD Ryzen Threadripper 1950X", 27000 }, 93 89 { 0x17, "AMD Ryzen Threadripper 1920X", 27000 }, 94 90 { 0x17, "AMD Ryzen Threadripper 1900X", 27000 }, ··· 135 129 136 130 data->read_tempreg(data->pdev, &regval); 137 131 temp = (regval >> 21) * 125; 132 + if (regval & data->temp_adjust_mask) 133 + temp -= 49000; 138 134 if (temp > data->temp_offset) 139 135 temp -= data->temp_offset; 140 136 else ··· 267 259 data->pdev = pdev; 268 260 269 261 if (boot_cpu_data.x86 == 0x15 && (boot_cpu_data.x86_model == 0x60 || 270 - boot_cpu_data.x86_model == 0x70)) 262 + boot_cpu_data.x86_model == 0x70)) { 271 263 data->read_tempreg = read_tempreg_nb_f15; 272 - else if (boot_cpu_data.x86 == 0x17) 264 + } else if (boot_cpu_data.x86 == 0x17) { 265 + data->temp_adjust_mask = 0x80000; 273 266 data->read_tempreg = read_tempreg_nb_f17; 274 - else 267 + } else { 275 268 data->read_tempreg = read_tempreg_pci; 269 + } 276 270 277 271 for (i = 0; i < ARRAY_SIZE(tctl_offset_table); i++) { 278 272 const struct tctl_offset *entry = &tctl_offset_table[i]; ··· 302 292 { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_NB_F3) }, 303 293 { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F3) }, 304 294 { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_DF_F3) }, 295 + { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_17H_RR_NB) }, 305 296 {} 306 297 }; 307 298 MODULE_DEVICE_TABLE(pci, k10temp_id_table);
+2 -2
drivers/hwmon/nct6683.c
··· 1380 1380 /* Activate logical device if needed */ 1381 1381 val = superio_inb(sioaddr, SIO_REG_ENABLE); 1382 1382 if (!(val & 0x01)) { 1383 - pr_err("EC is disabled\n"); 1384 - goto fail; 1383 + pr_warn("Forcibly enabling EC access. Data may be unusable.\n"); 1384 + superio_outb(sioaddr, SIO_REG_ENABLE, val | 0x01); 1385 1385 } 1386 1386 1387 1387 superio_exit(sioaddr);
+4 -1
drivers/hwmon/scmi-hwmon.c
··· 170 170 scmi_chip_info.info = ptr_scmi_ci; 171 171 chip_info = &scmi_chip_info; 172 172 173 - for (type = 0; type < hwmon_max && nr_count[type]; type++) { 173 + for (type = 0; type < hwmon_max; type++) { 174 + if (!nr_count[type]) 175 + continue; 176 + 174 177 scmi_hwmon_add_chan_info(scmi_hwmon_chan, dev, nr_count[type], 175 178 type, hwmon_attributes[type]); 176 179 *ptr_scmi_ci++ = scmi_hwmon_chan++;
-3
drivers/i2c/busses/Kconfig
··· 707 707 config I2C_MT65XX 708 708 tristate "MediaTek I2C adapter" 709 709 depends on ARCH_MEDIATEK || COMPILE_TEST 710 - depends on HAS_DMA 711 710 help 712 711 This selects the MediaTek(R) Integrated Inter Circuit bus driver 713 712 for MT65xx and MT81xx. ··· 884 885 885 886 config I2C_SH_MOBILE 886 887 tristate "SuperH Mobile I2C Controller" 887 - depends on HAS_DMA 888 888 depends on ARCH_SHMOBILE || ARCH_RENESAS || COMPILE_TEST 889 889 help 890 890 If you say yes to this option, support will be included for the ··· 1096 1098 1097 1099 config I2C_RCAR 1098 1100 tristate "Renesas R-Car I2C Controller" 1099 - depends on HAS_DMA 1100 1101 depends on ARCH_RENESAS || COMPILE_TEST 1101 1102 select I2C_SLAVE 1102 1103 help
+18 -4
drivers/i2c/busses/i2c-sprd.c
··· 86 86 u32 count; 87 87 int irq; 88 88 int err; 89 + bool is_suspended; 89 90 }; 90 91 91 92 static void sprd_i2c_set_count(struct sprd_i2c *i2c_dev, u32 count) ··· 284 283 struct sprd_i2c *i2c_dev = i2c_adap->algo_data; 285 284 int im, ret; 286 285 286 + if (i2c_dev->is_suspended) 287 + return -EBUSY; 288 + 287 289 ret = pm_runtime_get_sync(i2c_dev->dev); 288 290 if (ret < 0) 289 291 return ret; ··· 368 364 struct sprd_i2c *i2c_dev = dev_id; 369 365 struct i2c_msg *msg = i2c_dev->msg; 370 366 bool ack = !(readl(i2c_dev->base + I2C_STATUS) & I2C_RX_ACK); 371 - u32 i2c_count = readl(i2c_dev->base + I2C_COUNT); 372 367 u32 i2c_tran; 373 368 374 369 if (msg->flags & I2C_M_RD) 375 370 i2c_tran = i2c_dev->count >= I2C_FIFO_FULL_THLD; 376 371 else 377 - i2c_tran = i2c_count; 372 + i2c_tran = i2c_dev->count; 378 373 379 374 /* 380 375 * If we got one ACK from slave when writing data, and we did not ··· 411 408 { 412 409 struct sprd_i2c *i2c_dev = dev_id; 413 410 struct i2c_msg *msg = i2c_dev->msg; 414 - u32 i2c_count = readl(i2c_dev->base + I2C_COUNT); 415 411 bool ack = !(readl(i2c_dev->base + I2C_STATUS) & I2C_RX_ACK); 416 412 u32 i2c_tran; 417 413 418 414 if (msg->flags & I2C_M_RD) 419 415 i2c_tran = i2c_dev->count >= I2C_FIFO_FULL_THLD; 420 416 else 421 - i2c_tran = i2c_count; 417 + i2c_tran = i2c_dev->count; 422 418 423 419 /* 424 420 * If we did not get one ACK from slave when writing data, then we ··· 588 586 589 587 static int __maybe_unused sprd_i2c_suspend_noirq(struct device *pdev) 590 588 { 589 + struct sprd_i2c *i2c_dev = dev_get_drvdata(pdev); 590 + 591 + i2c_lock_adapter(&i2c_dev->adap); 592 + i2c_dev->is_suspended = true; 593 + i2c_unlock_adapter(&i2c_dev->adap); 594 + 591 595 return pm_runtime_force_suspend(pdev); 592 596 } 593 597 594 598 static int __maybe_unused sprd_i2c_resume_noirq(struct device *pdev) 595 599 { 600 + struct sprd_i2c *i2c_dev = dev_get_drvdata(pdev); 601 + 602 + i2c_lock_adapter(&i2c_dev->adap); 603 + i2c_dev->is_suspended = false; 604 + i2c_unlock_adapter(&i2c_dev->adap); 605 + 596 606 return pm_runtime_force_resume(pdev); 597 607 }
+1 -1
drivers/i2c/i2c-dev.c
··· 280 280 */ 281 281 if (msgs[i].flags & I2C_M_RECV_LEN) { 282 282 if (!(msgs[i].flags & I2C_M_RD) || 283 - msgs[i].buf[0] < 1 || 283 + msgs[i].len < 1 || msgs[i].buf[0] < 1 || 284 284 msgs[i].len < msgs[i].buf[0] + 285 285 I2C_SMBUS_BLOCK_MAX) { 286 286 res = -EINVAL;
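The one-line i2c-dev fix above adds a `msgs[i].len < 1` test so that `msgs[i].buf[0]` is never read on a zero-length message. The full validity condition can be sketched with simplified types (the real code uses `struct i2c_msg`; this stand-in is illustrative only):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define I2C_M_RD		0x0001
#define I2C_SMBUS_BLOCK_MAX	32

/* Simplified stand-in for struct i2c_msg */
struct msg { uint16_t flags; uint16_t len; uint8_t *buf; };

/* A receive-length message is acceptable only if it is a read, its
 * length is checked *before* buf[0] is dereferenced (the fix above),
 * the block-length slot holds at least 1, and the buffer can hold the
 * length byte plus a maximal SMBus block. */
static bool recv_len_ok(const struct msg *m)
{
	return (m->flags & I2C_M_RD) &&
	       m->len >= 1 && m->buf[0] >= 1 &&
	       m->len >= m->buf[0] + I2C_SMBUS_BLOCK_MAX;
}
```

Note the short-circuit ordering: `len >= 1` must come before the `buf[0]` reads, which is exactly what the patch restores.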
+6 -1
drivers/input/evdev.c
··· 31 31 enum evdev_clock_type { 32 32 EV_CLK_REAL = 0, 33 33 EV_CLK_MONO, 34 + EV_CLK_BOOT, 34 35 EV_CLK_MAX 35 36 }; 36 37 ··· 198 197 case CLOCK_REALTIME: 199 198 clk_type = EV_CLK_REAL; 200 199 break; 201 - case CLOCK_BOOTTIME: 202 200 case CLOCK_MONOTONIC: 203 201 clk_type = EV_CLK_MONO; 202 + break; 203 + case CLOCK_BOOTTIME: 204 + clk_type = EV_CLK_BOOT; 204 205 break; 205 206 default: 206 207 return -EINVAL; ··· 314 311 315 312 ev_time[EV_CLK_MONO] = ktime_get(); 316 313 ev_time[EV_CLK_REAL] = ktime_mono_to_real(ev_time[EV_CLK_MONO]); 314 + ev_time[EV_CLK_BOOT] = ktime_mono_to_any(ev_time[EV_CLK_MONO], 315 + TK_OFFS_BOOT); 317 316 318 317 rcu_read_lock(); 319 318
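The evdev change above stops folding CLOCK_BOOTTIME into the monotonic slot and gives it its own event-time entry. The clockid-to-slot mapping reduces to the sketch below (the clockid constants are the standard Linux values, defined locally so the sketch is self-contained):

```c
#include <assert.h>

/* Slots mirroring enum evdev_clock_type after the change */
enum { EV_CLK_REAL = 0, EV_CLK_MONO, EV_CLK_BOOT, EV_CLK_MAX };

/* Standard Linux clockid values, defined locally for the sketch */
#define CLOCK_REALTIME_ID	0
#define CLOCK_MONOTONIC_ID	1
#define CLOCK_BOOTTIME_ID	7

static int evdev_clk_slot(int clkid)
{
	switch (clkid) {
	case CLOCK_REALTIME_ID:
		return EV_CLK_REAL;
	case CLOCK_MONOTONIC_ID:
		return EV_CLK_MONO;
	case CLOCK_BOOTTIME_ID:
		return EV_CLK_BOOT;	/* no longer aliases EV_CLK_MONO */
	default:
		return -1;		/* -EINVAL in the driver */
	}
}
```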
+1 -71
drivers/memory/emif-asm-offsets.c
··· 16 16 17 17 int main(void) 18 18 { 19 - DEFINE(EMIF_SDCFG_VAL_OFFSET, 20 - offsetof(struct emif_regs_amx3, emif_sdcfg_val)); 21 - DEFINE(EMIF_TIMING1_VAL_OFFSET, 22 - offsetof(struct emif_regs_amx3, emif_timing1_val)); 23 - DEFINE(EMIF_TIMING2_VAL_OFFSET, 24 - offsetof(struct emif_regs_amx3, emif_timing2_val)); 25 - DEFINE(EMIF_TIMING3_VAL_OFFSET, 26 - offsetof(struct emif_regs_amx3, emif_timing3_val)); 27 - DEFINE(EMIF_REF_CTRL_VAL_OFFSET, 28 - offsetof(struct emif_regs_amx3, emif_ref_ctrl_val)); 29 - DEFINE(EMIF_ZQCFG_VAL_OFFSET, 30 - offsetof(struct emif_regs_amx3, emif_zqcfg_val)); 31 - DEFINE(EMIF_PMCR_VAL_OFFSET, 32 - offsetof(struct emif_regs_amx3, emif_pmcr_val)); 33 - DEFINE(EMIF_PMCR_SHDW_VAL_OFFSET, 34 - offsetof(struct emif_regs_amx3, emif_pmcr_shdw_val)); 35 - DEFINE(EMIF_RD_WR_LEVEL_RAMP_CTRL_OFFSET, 36 - offsetof(struct emif_regs_amx3, emif_rd_wr_level_ramp_ctrl)); 37 - DEFINE(EMIF_RD_WR_EXEC_THRESH_OFFSET, 38 - offsetof(struct emif_regs_amx3, emif_rd_wr_exec_thresh)); 39 - DEFINE(EMIF_COS_CONFIG_OFFSET, 40 - offsetof(struct emif_regs_amx3, emif_cos_config)); 41 - DEFINE(EMIF_PRIORITY_TO_COS_MAPPING_OFFSET, 42 - offsetof(struct emif_regs_amx3, emif_priority_to_cos_mapping)); 43 - DEFINE(EMIF_CONNECT_ID_SERV_1_MAP_OFFSET, 44 - offsetof(struct emif_regs_amx3, emif_connect_id_serv_1_map)); 45 - DEFINE(EMIF_CONNECT_ID_SERV_2_MAP_OFFSET, 46 - offsetof(struct emif_regs_amx3, emif_connect_id_serv_2_map)); 47 - DEFINE(EMIF_OCP_CONFIG_VAL_OFFSET, 48 - offsetof(struct emif_regs_amx3, emif_ocp_config_val)); 49 - DEFINE(EMIF_LPDDR2_NVM_TIM_OFFSET, 50 - offsetof(struct emif_regs_amx3, emif_lpddr2_nvm_tim)); 51 - DEFINE(EMIF_LPDDR2_NVM_TIM_SHDW_OFFSET, 52 - offsetof(struct emif_regs_amx3, emif_lpddr2_nvm_tim_shdw)); 53 - DEFINE(EMIF_DLL_CALIB_CTRL_VAL_OFFSET, 54 - offsetof(struct emif_regs_amx3, emif_dll_calib_ctrl_val)); 55 - DEFINE(EMIF_DLL_CALIB_CTRL_VAL_SHDW_OFFSET, 56 - offsetof(struct emif_regs_amx3, emif_dll_calib_ctrl_val_shdw)); 57 - 
DEFINE(EMIF_DDR_PHY_CTLR_1_OFFSET, 58 - offsetof(struct emif_regs_amx3, emif_ddr_phy_ctlr_1)); 59 - DEFINE(EMIF_EXT_PHY_CTRL_VALS_OFFSET, 60 - offsetof(struct emif_regs_amx3, emif_ext_phy_ctrl_vals)); 61 - DEFINE(EMIF_REGS_AMX3_SIZE, sizeof(struct emif_regs_amx3)); 62 - 63 - BLANK(); 64 - 65 - DEFINE(EMIF_PM_BASE_ADDR_VIRT_OFFSET, 66 - offsetof(struct ti_emif_pm_data, ti_emif_base_addr_virt)); 67 - DEFINE(EMIF_PM_BASE_ADDR_PHYS_OFFSET, 68 - offsetof(struct ti_emif_pm_data, ti_emif_base_addr_phys)); 69 - DEFINE(EMIF_PM_CONFIG_OFFSET, 70 - offsetof(struct ti_emif_pm_data, ti_emif_sram_config)); 71 - DEFINE(EMIF_PM_REGS_VIRT_OFFSET, 72 - offsetof(struct ti_emif_pm_data, regs_virt)); 73 - DEFINE(EMIF_PM_REGS_PHYS_OFFSET, 74 - offsetof(struct ti_emif_pm_data, regs_phys)); 75 - DEFINE(EMIF_PM_DATA_SIZE, sizeof(struct ti_emif_pm_data)); 76 - 77 - BLANK(); 78 - 79 - DEFINE(EMIF_PM_SAVE_CONTEXT_OFFSET, 80 - offsetof(struct ti_emif_pm_functions, save_context)); 81 - DEFINE(EMIF_PM_RESTORE_CONTEXT_OFFSET, 82 - offsetof(struct ti_emif_pm_functions, restore_context)); 83 - DEFINE(EMIF_PM_ENTER_SR_OFFSET, 84 - offsetof(struct ti_emif_pm_functions, enter_sr)); 85 - DEFINE(EMIF_PM_EXIT_SR_OFFSET, 86 - offsetof(struct ti_emif_pm_functions, exit_sr)); 87 - DEFINE(EMIF_PM_ABORT_SR_OFFSET, 88 - offsetof(struct ti_emif_pm_functions, abort_sr)); 89 - DEFINE(EMIF_PM_FUNCTIONS_SIZE, sizeof(struct ti_emif_pm_functions)); 19 + ti_emif_asm_offsets(); 90 20 91 21 return 0; 92 22 }
+1
drivers/message/fusion/mptsas.c
··· 1994 1994 .cmd_per_lun = 7, 1995 1995 .use_clustering = ENABLE_CLUSTERING, 1996 1996 .shost_attrs = mptscsih_host_attrs, 1997 + .no_write_same = 1, 1997 1998 }; 1998 1999 1999 2000 static int mptsas_get_linkerrors(struct sas_phy *phy)
+28 -5
drivers/mtd/chips/cfi_cmdset_0001.c
··· 45 45 #define I82802AB 0x00ad 46 46 #define I82802AC 0x00ac 47 47 #define PF38F4476 0x881c 48 + #define M28F00AP30 0x8963 48 49 /* STMicroelectronics chips */ 49 50 #define M50LPW080 0x002F 50 51 #define M50FLW080A 0x0080 ··· 374 373 if (cfi->mfr == CFI_MFR_INTEL && 375 374 cfi->id == PF38F4476 && extp->MinorVersion == '3') 376 375 extp->MinorVersion = '1'; 376 + } 377 + 378 + static int cfi_is_micron_28F00AP30(struct cfi_private *cfi, struct flchip *chip) 379 + { 380 + /* 381 + * Micron(was Numonyx) 1Gbit bottom boot are buggy w.r.t 382 + * Erase Supend for their small Erase Blocks(0x8000) 383 + */ 384 + if (cfi->mfr == CFI_MFR_INTEL && cfi->id == M28F00AP30) 385 + return 1; 386 + return 0; 377 387 } 378 388 379 389 static inline struct cfi_pri_intelext * ··· 843 831 (mode == FL_WRITING && (cfip->SuspendCmdSupport & 1)))) 844 832 goto sleep; 845 833 834 + /* Do not allow suspend iff read/write to EB address */ 835 + if ((adr & chip->in_progress_block_mask) == 836 + chip->in_progress_block_addr) 837 + goto sleep; 838 + 839 + /* do not suspend small EBs, buggy Micron Chips */ 840 + if (cfi_is_micron_28F00AP30(cfi, chip) && 841 + (chip->in_progress_block_mask == ~(0x8000-1))) 842 + goto sleep; 846 843 847 844 /* Erase suspend */ 848 - map_write(map, CMD(0xB0), adr); 845 + map_write(map, CMD(0xB0), chip->in_progress_block_addr); 849 846 850 847 /* If the flash has finished erasing, then 'erase suspend' 851 848 * appears to make some (28F320) flash devices switch to 852 849 * 'read' mode. Make sure that we switch to 'read status' 853 850 * mode so we get the right data. 
--rmk 854 851 */ 855 - map_write(map, CMD(0x70), adr); 852 + map_write(map, CMD(0x70), chip->in_progress_block_addr); 856 853 chip->oldstate = FL_ERASING; 857 854 chip->state = FL_ERASE_SUSPENDING; 858 855 chip->erase_suspended = 1; 859 856 for (;;) { 860 - status = map_read(map, adr); 857 + status = map_read(map, chip->in_progress_block_addr); 861 858 if (map_word_andequal(map, status, status_OK, status_OK)) 862 859 break; 863 860 ··· 1062 1041 sending the 0x70 (Read Status) command to an erasing 1063 1042 chip and expecting it to be ignored, that's what we 1064 1043 do. */ 1065 - map_write(map, CMD(0xd0), adr); 1066 - map_write(map, CMD(0x70), adr); 1044 + map_write(map, CMD(0xd0), chip->in_progress_block_addr); 1045 + map_write(map, CMD(0x70), chip->in_progress_block_addr); 1067 1046 chip->oldstate = FL_READY; 1068 1047 chip->state = FL_ERASING; 1069 1048 break; ··· 1954 1933 map_write(map, CMD(0xD0), adr); 1955 1934 chip->state = FL_ERASING; 1956 1935 chip->erase_suspended = 0; 1936 + chip->in_progress_block_addr = adr; 1937 + chip->in_progress_block_mask = ~(len - 1); 1957 1938 1958 1939 ret = INVAL_CACHE_AND_WAIT(map, chip, adr, 1959 1940 adr, len,
+6 -3
drivers/mtd/chips/cfi_cmdset_0002.c
··· 816 816 (mode == FL_WRITING && (cfip->EraseSuspend & 0x2)))) 817 817 goto sleep; 818 818 819 - /* We could check to see if we're trying to access the sector 820 - * that is currently being erased. However, no user will try 821 - * anything like that so we just wait for the timeout. */ 819 + /* Do not allow suspend iff read/write to EB address */ 820 + if ((adr & chip->in_progress_block_mask) == 821 + chip->in_progress_block_addr) 822 + goto sleep; 822 823 823 824 /* Erase suspend */ 824 825 /* It's harmless to issue the Erase-Suspend and Erase-Resume ··· 2268 2267 chip->state = FL_ERASING; 2269 2268 chip->erase_suspended = 0; 2270 2269 chip->in_progress_block_addr = adr; 2270 + chip->in_progress_block_mask = ~(map->size - 1); 2271 2271 2272 2272 INVALIDATE_CACHE_UDELAY(map, chip, 2273 2273 adr, map->size, ··· 2358 2356 chip->state = FL_ERASING; 2359 2357 chip->erase_suspended = 0; 2360 2358 chip->in_progress_block_addr = adr; 2359 + chip->in_progress_block_mask = ~(len - 1); 2361 2360 2362 2361 INVALIDATE_CACHE_UDELAY(map, chip, 2363 2362 adr, len,
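Both command-set patches above lean on the same invariant: for a power-of-two erase block, `in_progress_block_mask = ~(len - 1)` rounds any address inside the erasing block down to the block's base, which is what the new suspend check compares against. A minimal sketch of that test:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* For a power-of-two erase block, masking with ~(len - 1) rounds an
 * address down to its block base; the suspend paths above refuse to
 * suspend when the access targets the block currently being erased. */
static bool in_erasing_block(uint32_t adr, uint32_t block_addr,
			     uint32_t block_len)
{
	uint32_t mask = ~(block_len - 1);

	return (adr & mask) == block_addr;
}
```

The Micron 28F00AP30 quirk above then keys off the mask itself: `~(0x8000 - 1)` identifies the small 32 KiB blocks for which erase suspend is skipped entirely.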
-3
drivers/mtd/nand/core.c
··· 162 162 ret = nanddev_erase(nand, &pos); 163 163 if (ret) { 164 164 einfo->fail_addr = nanddev_pos_to_offs(nand, &pos); 165 - einfo->state = MTD_ERASE_FAILED; 166 165 167 166 return ret; 168 167 } 169 168 170 169 nanddev_pos_next_eraseblock(nand, &pos); 171 170 } 172 - 173 - einfo->state = MTD_ERASE_DONE; 174 171 175 172 return 0; 176 173 }
+8 -17
drivers/mtd/nand/raw/marvell_nand.c
··· 2299 2299 /* 2300 2300 * The legacy "num-cs" property indicates the number of CS on the only 2301 2301 * chip connected to the controller (legacy bindings does not support 2302 - * more than one chip). CS are only incremented one by one while the RB 2303 - * pin is always the #0. 2302 + * more than one chip). The CS and RB pins are always the #0. 2304 2303 * 2305 2304 * When not using legacy bindings, a couple of "reg" and "nand-rb" 2306 2305 * properties must be filled. For each chip, expressed as a subnode, 2307 2306 * "reg" points to the CS lines and "nand-rb" to the RB line. 2308 2307 */ 2309 - if (pdata) { 2308 + if (pdata || nfc->caps->legacy_of_bindings) { 2310 2309 nsels = 1; 2311 - } else if (nfc->caps->legacy_of_bindings && 2312 - !of_get_property(np, "num-cs", &nsels)) { 2313 - dev_err(dev, "missing num-cs property\n"); 2314 - return -EINVAL; 2315 - } else if (!of_get_property(np, "reg", &nsels)) { 2316 - dev_err(dev, "missing reg property\n"); 2317 - return -EINVAL; 2318 - } 2319 - 2320 - if (!pdata) 2321 - nsels /= sizeof(u32); 2322 - if (!nsels) { 2323 - dev_err(dev, "invalid reg property size\n"); 2324 - return -EINVAL; 2310 + } else { 2311 + nsels = of_property_count_elems_of_size(np, "reg", sizeof(u32)); 2312 + if (nsels <= 0) { 2313 + dev_err(dev, "missing/invalid reg property\n"); 2314 + return -EINVAL; 2315 + } 2325 2316 } 2326 2317 2327 2318 /* Alloc the nand chip structure */
+1 -1
drivers/mtd/nand/raw/tango_nand.c
··· 645 645 646 646 writel_relaxed(MODE_RAW, nfc->pbus_base + PBUS_PAD_MODE); 647 647 648 - clk = clk_get(&pdev->dev, NULL); 648 + clk = devm_clk_get(&pdev->dev, NULL); 649 649 if (IS_ERR(clk)) 650 650 return PTR_ERR(clk); 651 651
+17 -2
drivers/mtd/spi-nor/cadence-quadspi.c
··· 501 501 void __iomem *reg_base = cqspi->iobase; 502 502 void __iomem *ahb_base = cqspi->ahb_base; 503 503 unsigned int remaining = n_rx; 504 + unsigned int mod_bytes = n_rx % 4; 504 505 unsigned int bytes_to_read = 0; 506 + u8 *rxbuf_end = rxbuf + n_rx; 505 507 int ret = 0; 506 508 507 509 writel(from_addr, reg_base + CQSPI_REG_INDIRECTRDSTARTADDR); ··· 532 530 } 533 531 534 532 while (bytes_to_read != 0) { 533 + unsigned int word_remain = round_down(remaining, 4); 534 + 535 535 bytes_to_read *= cqspi->fifo_width; 536 536 bytes_to_read = bytes_to_read > remaining ? 537 537 remaining : bytes_to_read; 538 - ioread32_rep(ahb_base, rxbuf, 539 - DIV_ROUND_UP(bytes_to_read, 4)); 538 + bytes_to_read = round_down(bytes_to_read, 4); 539 + /* Read 4 byte word chunks then single bytes */ 540 + if (bytes_to_read) { 541 + ioread32_rep(ahb_base, rxbuf, 542 + (bytes_to_read / 4)); 543 + } else if (!word_remain && mod_bytes) { 544 + unsigned int temp = ioread32(ahb_base); 545 + 546 + bytes_to_read = mod_bytes; 547 + memcpy(rxbuf, &temp, min((unsigned int) 548 + (rxbuf_end - rxbuf), 549 + bytes_to_read)); 550 + } 540 551 rxbuf += bytes_to_read; 541 552 remaining -= bytes_to_read; 542 553 bytes_to_read = cqspi_get_rd_sram_level(cqspi);
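The cadence-quadspi rework above exists because `ioread32_rep` always moves whole 32-bit words, so a read whose length is not a multiple of four previously wrote past the caller's buffer. The shape of the fix, sketched against a plain memory FIFO (names illustrative; the driver interleaves this with SRAM-level polling, which the sketch omits):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the fixed drain logic: copy whole 32-bit words first, then
 * satisfy a 1-3 byte remainder from one final word read, clamped so
 * the destination buffer is never overrun. */
static void drain_words_then_tail(const uint32_t *fifo, uint8_t *dst,
				  unsigned int n_rx)
{
	unsigned int words = n_rx / 4, tail = n_rx % 4;

	memcpy(dst, fifo, words * 4);		/* word-sized chunks */
	if (tail) {
		uint32_t last = fifo[words];	/* one extra word read */
		memcpy(dst + words * 4, &last, tail);
	}
}
```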
+1 -2
drivers/net/bonding/bond_main.c
··· 1660 1660 } /* switch(bond_mode) */ 1661 1661 1662 1662 #ifdef CONFIG_NET_POLL_CONTROLLER 1663 - slave_dev->npinfo = bond->dev->npinfo; 1664 - if (slave_dev->npinfo) { 1663 + if (bond->dev->npinfo) { 1665 1664 if (slave_enable_netpoll(new_slave)) { 1666 1665 netdev_info(bond_dev, "master_dev is using netpoll, but new slave device does not support netpoll\n"); 1667 1666 res = -EBUSY;
+8
drivers/net/ethernet/amd/xgbe/xgbe-common.h
··· 1321 1321 #define MDIO_VEND2_AN_STAT 0x8002 1322 1322 #endif 1323 1323 1324 + #ifndef MDIO_VEND2_PMA_CDR_CONTROL 1325 + #define MDIO_VEND2_PMA_CDR_CONTROL 0x8056 1326 + #endif 1327 + 1324 1328 #ifndef MDIO_CTRL1_SPEED1G 1325 1329 #define MDIO_CTRL1_SPEED1G (MDIO_CTRL1_SPEED10G & ~BMCR_SPEED100) 1326 1330 #endif ··· 1372 1368 #define XGBE_AN_CL37_PCS_MODE_SGMII 0x04 1373 1369 #define XGBE_AN_CL37_TX_CONFIG_MASK 0x08 1374 1370 #define XGBE_AN_CL37_MII_CTRL_8BIT 0x0100 1371 + 1372 + #define XGBE_PMA_CDR_TRACK_EN_MASK 0x01 1373 + #define XGBE_PMA_CDR_TRACK_EN_OFF 0x00 1374 + #define XGBE_PMA_CDR_TRACK_EN_ON 0x01 1375 1375 1376 1376 /* Bit setting and getting macros 1377 1377 * The get macro will extract the current bit field value from within
+16
drivers/net/ethernet/amd/xgbe/xgbe-debugfs.c
··· 519 519 "debugfs_create_file failed\n"); 520 520 } 521 521 522 + if (pdata->vdata->an_cdr_workaround) { 523 + pfile = debugfs_create_bool("an_cdr_workaround", 0600, 524 + pdata->xgbe_debugfs, 525 + &pdata->debugfs_an_cdr_workaround); 526 + if (!pfile) 527 + netdev_err(pdata->netdev, 528 + "debugfs_create_bool failed\n"); 529 + 530 + pfile = debugfs_create_bool("an_cdr_track_early", 0600, 531 + pdata->xgbe_debugfs, 532 + &pdata->debugfs_an_cdr_track_early); 533 + if (!pfile) 534 + netdev_err(pdata->netdev, 535 + "debugfs_create_bool failed\n"); 536 + } 537 + 522 538 kfree(buf); 523 539 } 524 540
+1
drivers/net/ethernet/amd/xgbe/xgbe-main.c
··· 349 349 XGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1); 350 350 351 351 /* Call MDIO/PHY initialization routine */ 352 + pdata->debugfs_an_cdr_workaround = pdata->vdata->an_cdr_workaround; 352 353 ret = pdata->phy_if.phy_init(pdata); 353 354 if (ret) 354 355 return ret;
+19 -5
drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
··· 432 432 xgbe_an73_set(pdata, false, false); 433 433 xgbe_an73_disable_interrupts(pdata); 434 434 435 + pdata->an_start = 0; 436 + 435 437 netif_dbg(pdata, link, pdata->netdev, "CL73 AN disabled\n"); 436 438 } 437 439 438 440 static void xgbe_an_restart(struct xgbe_prv_data *pdata) 439 441 { 442 + if (pdata->phy_if.phy_impl.an_pre) 443 + pdata->phy_if.phy_impl.an_pre(pdata); 444 + 440 445 switch (pdata->an_mode) { 441 446 case XGBE_AN_MODE_CL73: 442 447 case XGBE_AN_MODE_CL73_REDRV: ··· 458 453 459 454 static void xgbe_an_disable(struct xgbe_prv_data *pdata) 460 455 { 456 + if (pdata->phy_if.phy_impl.an_post) 457 + pdata->phy_if.phy_impl.an_post(pdata); 458 + 461 459 switch (pdata->an_mode) { 462 460 case XGBE_AN_MODE_CL73: 463 461 case XGBE_AN_MODE_CL73_REDRV: ··· 513 505 XMDIO_WRITE(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL, 514 506 reg); 515 507 516 - if (pdata->phy_if.phy_impl.kr_training_post) 517 - pdata->phy_if.phy_impl.kr_training_post(pdata); 518 - 519 508 netif_dbg(pdata, link, pdata->netdev, 520 509 "KR training initiated\n"); 510 + 511 + if (pdata->phy_if.phy_impl.kr_training_post) 512 + pdata->phy_if.phy_impl.kr_training_post(pdata); 521 513 } 522 514 523 515 return XGBE_AN_PAGE_RECEIVED; ··· 645 637 return XGBE_AN_NO_LINK; 646 638 } 647 639 648 - xgbe_an73_disable(pdata); 640 + xgbe_an_disable(pdata); 649 641 650 642 xgbe_switch_mode(pdata); 651 643 652 - xgbe_an73_restart(pdata); 644 + xgbe_an_restart(pdata); 653 645 654 646 return XGBE_AN_INCOMPAT_LINK; 655 647 } ··· 828 820 pdata->an_result = pdata->an_state; 829 821 pdata->an_state = XGBE_AN_READY; 830 822 823 + if (pdata->phy_if.phy_impl.an_post) 824 + pdata->phy_if.phy_impl.an_post(pdata); 825 + 831 826 netif_dbg(pdata, link, pdata->netdev, "CL37 AN result: %s\n", 832 827 xgbe_state_as_string(pdata->an_result)); 833 828 } ··· 913 902 pdata->kr_state = XGBE_RX_BPA; 914 903 pdata->kx_state = XGBE_RX_BPA; 915 904 pdata->an_start = 0; 905 + 906 + if (pdata->phy_if.phy_impl.an_post) 907 + 
pdata->phy_if.phy_impl.an_post(pdata); 916 908 917 909 netif_dbg(pdata, link, pdata->netdev, "CL73 AN result: %s\n", 918 910 xgbe_state_as_string(pdata->an_result));
+2
drivers/net/ethernet/amd/xgbe/xgbe-pci.c
··· 456 456 .irq_reissue_support = 1, 457 457 .tx_desc_prefetch = 5, 458 458 .rx_desc_prefetch = 5, 459 + .an_cdr_workaround = 1, 459 460 }; 460 461 461 462 static const struct xgbe_version_data xgbe_v2b = { ··· 471 470 .irq_reissue_support = 1, 472 471 .tx_desc_prefetch = 5, 473 472 .rx_desc_prefetch = 5, 473 + .an_cdr_workaround = 1, 474 474 }; 475 475 476 476 static const struct pci_device_id xgbe_pci_table[] = {
+178 -18
drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
··· 147 147 /* Rate-change complete wait/retry count */ 148 148 #define XGBE_RATECHANGE_COUNT 500 149 149 150 + /* CDR delay values for KR support (in usec) */ 151 + #define XGBE_CDR_DELAY_INIT 10000 152 + #define XGBE_CDR_DELAY_INC 10000 153 + #define XGBE_CDR_DELAY_MAX 100000 154 + 155 + /* RRC frequency during link status check */ 156 + #define XGBE_RRC_FREQUENCY 10 157 + 150 158 enum xgbe_port_mode { 151 159 XGBE_PORT_MODE_RSVD = 0, 152 160 XGBE_PORT_MODE_BACKPLANE, ··· 253 245 #define XGBE_SFP_BASE_VENDOR_SN 4 254 246 #define XGBE_SFP_BASE_VENDOR_SN_LEN 16 255 247 248 + #define XGBE_SFP_EXTD_OPT1 1 249 + #define XGBE_SFP_EXTD_OPT1_RX_LOS BIT(1) 250 + #define XGBE_SFP_EXTD_OPT1_TX_FAULT BIT(3) 251 + 256 252 #define XGBE_SFP_EXTD_DIAG 28 257 253 #define XGBE_SFP_EXTD_DIAG_ADDR_CHANGE BIT(2) 258 254 ··· 336 324 337 325 unsigned int sfp_gpio_address; 338 326 unsigned int sfp_gpio_mask; 327 + unsigned int sfp_gpio_inputs; 339 328 unsigned int sfp_gpio_rx_los; 340 329 unsigned int sfp_gpio_tx_fault; 341 330 unsigned int sfp_gpio_mod_absent; ··· 368 355 unsigned int redrv_addr; 369 356 unsigned int redrv_lane; 370 357 unsigned int redrv_model; 358 + 359 + /* KR AN support */ 360 + unsigned int phy_cdr_notrack; 361 + unsigned int phy_cdr_delay; 371 362 }; 372 363 373 364 /* I2C, MDIO and GPIO lines are muxed, so only one device at a time */ ··· 991 974 phy_data->sfp_phy_avail = 1; 992 975 } 993 976 977 + static bool xgbe_phy_check_sfp_rx_los(struct xgbe_phy_data *phy_data) 978 + { 979 + u8 *sfp_extd = phy_data->sfp_eeprom.extd; 980 + 981 + if (!(sfp_extd[XGBE_SFP_EXTD_OPT1] & XGBE_SFP_EXTD_OPT1_RX_LOS)) 982 + return false; 983 + 984 + if (phy_data->sfp_gpio_mask & XGBE_GPIO_NO_RX_LOS) 985 + return false; 986 + 987 + if (phy_data->sfp_gpio_inputs & (1 << phy_data->sfp_gpio_rx_los)) 988 + return true; 989 + 990 + return false; 991 + } 992 + 993 + static bool xgbe_phy_check_sfp_tx_fault(struct xgbe_phy_data *phy_data) 994 + { 995 + u8 *sfp_extd = 
phy_data->sfp_eeprom.extd; 996 + 997 + if (!(sfp_extd[XGBE_SFP_EXTD_OPT1] & XGBE_SFP_EXTD_OPT1_TX_FAULT)) 998 + return false; 999 + 1000 + if (phy_data->sfp_gpio_mask & XGBE_GPIO_NO_TX_FAULT) 1001 + return false; 1002 + 1003 + if (phy_data->sfp_gpio_inputs & (1 << phy_data->sfp_gpio_tx_fault)) 1004 + return true; 1005 + 1006 + return false; 1007 + } 1008 + 1009 + static bool xgbe_phy_check_sfp_mod_absent(struct xgbe_phy_data *phy_data) 1010 + { 1011 + if (phy_data->sfp_gpio_mask & XGBE_GPIO_NO_MOD_ABSENT) 1012 + return false; 1013 + 1014 + if (phy_data->sfp_gpio_inputs & (1 << phy_data->sfp_gpio_mod_absent)) 1015 + return true; 1016 + 1017 + return false; 1018 + } 1019 + 994 1020 static bool xgbe_phy_belfuse_parse_quirks(struct xgbe_prv_data *pdata) 995 1021 { 996 1022 struct xgbe_phy_data *phy_data = pdata->phy_data; ··· 1078 1018 1079 1019 if (sfp_base[XGBE_SFP_BASE_EXT_ID] != XGBE_SFP_EXT_ID_SFP) 1080 1020 return; 1021 + 1022 + /* Update transceiver signals (eeprom extd/options) */ 1023 + phy_data->sfp_tx_fault = xgbe_phy_check_sfp_tx_fault(phy_data); 1024 + phy_data->sfp_rx_los = xgbe_phy_check_sfp_rx_los(phy_data); 1081 1025 1082 1026 if (xgbe_phy_sfp_parse_quirks(pdata)) 1083 1027 return; ··· 1248 1184 static void xgbe_phy_sfp_signals(struct xgbe_prv_data *pdata) 1249 1185 { 1250 1186 struct xgbe_phy_data *phy_data = pdata->phy_data; 1251 - unsigned int gpio_input; 1252 1187 u8 gpio_reg, gpio_ports[2]; 1253 1188 int ret; 1254 1189 ··· 1262 1199 return; 1263 1200 } 1264 1201 1265 - gpio_input = (gpio_ports[1] << 8) | gpio_ports[0]; 1202 + phy_data->sfp_gpio_inputs = (gpio_ports[1] << 8) | gpio_ports[0]; 1266 1203 1267 - if (phy_data->sfp_gpio_mask & XGBE_GPIO_NO_MOD_ABSENT) { 1268 - /* No GPIO, just assume the module is present for now */ 1269 - phy_data->sfp_mod_absent = 0; 1270 - } else { 1271 - if (!(gpio_input & (1 << phy_data->sfp_gpio_mod_absent))) 1272 - phy_data->sfp_mod_absent = 0; 1273 - } 1274 - 1275 - if (!(phy_data->sfp_gpio_mask & 
XGBE_GPIO_NO_RX_LOS) && 1276 - (gpio_input & (1 << phy_data->sfp_gpio_rx_los))) 1277 - phy_data->sfp_rx_los = 1; 1278 - 1279 - if (!(phy_data->sfp_gpio_mask & XGBE_GPIO_NO_TX_FAULT) && 1280 - (gpio_input & (1 << phy_data->sfp_gpio_tx_fault))) 1281 - phy_data->sfp_tx_fault = 1; 1204 + phy_data->sfp_mod_absent = xgbe_phy_check_sfp_mod_absent(phy_data); 1282 1205 } 1283 1206 1284 1207 static void xgbe_phy_sfp_mod_absent(struct xgbe_prv_data *pdata) ··· 2410 2361 return 1; 2411 2362 2412 2363 /* No link, attempt a receiver reset cycle */ 2413 - if (phy_data->rrc_count++) { 2364 + if (phy_data->rrc_count++ > XGBE_RRC_FREQUENCY) { 2414 2365 phy_data->rrc_count = 0; 2415 2366 xgbe_phy_rrc(pdata); 2416 2367 } ··· 2718 2669 return true; 2719 2670 } 2720 2671 2672 + static void xgbe_phy_cdr_track(struct xgbe_prv_data *pdata) 2673 + { 2674 + struct xgbe_phy_data *phy_data = pdata->phy_data; 2675 + 2676 + if (!pdata->debugfs_an_cdr_workaround) 2677 + return; 2678 + 2679 + if (!phy_data->phy_cdr_notrack) 2680 + return; 2681 + 2682 + usleep_range(phy_data->phy_cdr_delay, 2683 + phy_data->phy_cdr_delay + 500); 2684 + 2685 + XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_VEND2_PMA_CDR_CONTROL, 2686 + XGBE_PMA_CDR_TRACK_EN_MASK, 2687 + XGBE_PMA_CDR_TRACK_EN_ON); 2688 + 2689 + phy_data->phy_cdr_notrack = 0; 2690 + } 2691 + 2692 + static void xgbe_phy_cdr_notrack(struct xgbe_prv_data *pdata) 2693 + { 2694 + struct xgbe_phy_data *phy_data = pdata->phy_data; 2695 + 2696 + if (!pdata->debugfs_an_cdr_workaround) 2697 + return; 2698 + 2699 + if (phy_data->phy_cdr_notrack) 2700 + return; 2701 + 2702 + XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_VEND2_PMA_CDR_CONTROL, 2703 + XGBE_PMA_CDR_TRACK_EN_MASK, 2704 + XGBE_PMA_CDR_TRACK_EN_OFF); 2705 + 2706 + xgbe_phy_rrc(pdata); 2707 + 2708 + phy_data->phy_cdr_notrack = 1; 2709 + } 2710 + 2711 + static void xgbe_phy_kr_training_post(struct xgbe_prv_data *pdata) 2712 + { 2713 + if (!pdata->debugfs_an_cdr_track_early) 2714 + 
xgbe_phy_cdr_track(pdata); 2715 + } 2716 + 2717 + static void xgbe_phy_kr_training_pre(struct xgbe_prv_data *pdata) 2718 + { 2719 + if (pdata->debugfs_an_cdr_track_early) 2720 + xgbe_phy_cdr_track(pdata); 2721 + } 2722 + 2723 + static void xgbe_phy_an_post(struct xgbe_prv_data *pdata) 2724 + { 2725 + struct xgbe_phy_data *phy_data = pdata->phy_data; 2726 + 2727 + switch (pdata->an_mode) { 2728 + case XGBE_AN_MODE_CL73: 2729 + case XGBE_AN_MODE_CL73_REDRV: 2730 + if (phy_data->cur_mode != XGBE_MODE_KR) 2731 + break; 2732 + 2733 + xgbe_phy_cdr_track(pdata); 2734 + 2735 + switch (pdata->an_result) { 2736 + case XGBE_AN_READY: 2737 + case XGBE_AN_COMPLETE: 2738 + break; 2739 + default: 2740 + if (phy_data->phy_cdr_delay < XGBE_CDR_DELAY_MAX) 2741 + phy_data->phy_cdr_delay += XGBE_CDR_DELAY_INC; 2742 + else 2743 + phy_data->phy_cdr_delay = XGBE_CDR_DELAY_INIT; 2744 + break; 2745 + } 2746 + break; 2747 + default: 2748 + break; 2749 + } 2750 + } 2751 + 2752 + static void xgbe_phy_an_pre(struct xgbe_prv_data *pdata) 2753 + { 2754 + struct xgbe_phy_data *phy_data = pdata->phy_data; 2755 + 2756 + switch (pdata->an_mode) { 2757 + case XGBE_AN_MODE_CL73: 2758 + case XGBE_AN_MODE_CL73_REDRV: 2759 + if (phy_data->cur_mode != XGBE_MODE_KR) 2760 + break; 2761 + 2762 + xgbe_phy_cdr_notrack(pdata); 2763 + break; 2764 + default: 2765 + break; 2766 + } 2767 + } 2768 + 2721 2769 static void xgbe_phy_stop(struct xgbe_prv_data *pdata) 2722 2770 { 2723 2771 struct xgbe_phy_data *phy_data = pdata->phy_data; ··· 2825 2679 /* Reset SFP data */ 2826 2680 xgbe_phy_sfp_reset(phy_data); 2827 2681 xgbe_phy_sfp_mod_absent(pdata); 2682 + 2683 + /* Reset CDR support */ 2684 + xgbe_phy_cdr_track(pdata); 2828 2685 2829 2686 /* Power off the PHY */ 2830 2687 xgbe_phy_power_off(pdata); ··· 2860 2711 2861 2712 /* Start in highest supported mode */ 2862 2713 xgbe_phy_set_mode(pdata, phy_data->start_mode); 2714 + 2715 + /* Reset CDR support */ 2716 + xgbe_phy_cdr_track(pdata); 2863 2717 2864 2718 /* After 
starting the I2C controller, we can check for an SFP */ 2865 2719 switch (phy_data->port_mode) { ··· 3171 3019 } 3172 3020 } 3173 3021 3022 + phy_data->phy_cdr_delay = XGBE_CDR_DELAY_INIT; 3023 + 3174 3024 /* Register for driving external PHYs */ 3175 3025 mii = devm_mdiobus_alloc(pdata->dev); 3176 3026 if (!mii) { ··· 3225 3071 phy_impl->an_advertising = xgbe_phy_an_advertising; 3226 3072 3227 3073 phy_impl->an_outcome = xgbe_phy_an_outcome; 3074 + 3075 + phy_impl->an_pre = xgbe_phy_an_pre; 3076 + phy_impl->an_post = xgbe_phy_an_post; 3077 + 3078 + phy_impl->kr_training_pre = xgbe_phy_kr_training_pre; 3079 + phy_impl->kr_training_post = xgbe_phy_kr_training_post; 3228 3080 }
+9
drivers/net/ethernet/amd/xgbe/xgbe.h
··· 833 833 /* This structure represents implementation specific routines for an 834 834 * implementation of a PHY. All routines are required unless noted below. 835 835 * Optional routines: 836 + * an_pre, an_post 836 837 * kr_training_pre, kr_training_post 837 838 */ 838 839 struct xgbe_phy_impl_if { ··· 875 874 876 875 /* Process results of auto-negotiation */ 877 876 enum xgbe_mode (*an_outcome)(struct xgbe_prv_data *); 877 + 878 + /* Pre/Post auto-negotiation support */ 879 + void (*an_pre)(struct xgbe_prv_data *); 880 + void (*an_post)(struct xgbe_prv_data *); 878 881 879 882 /* Pre/Post KR training enablement support */ 880 883 void (*kr_training_pre)(struct xgbe_prv_data *); ··· 994 989 unsigned int irq_reissue_support; 995 990 unsigned int tx_desc_prefetch; 996 991 unsigned int rx_desc_prefetch; 992 + unsigned int an_cdr_workaround; 997 993 }; 998 994 999 995 struct xgbe_vxlan_data { ··· 1263 1257 unsigned int debugfs_xprop_reg; 1264 1258 1265 1259 unsigned int debugfs_xi2c_reg; 1260 + 1261 + bool debugfs_an_cdr_workaround; 1262 + bool debugfs_an_cdr_track_early; 1266 1263 }; 1267 1264 1268 1265 /* Function prototypes*/
+2 -2
drivers/net/ethernet/ibm/ibmvnic.c
··· 1128 1128 if (!adapter->rx_pool) 1129 1129 return; 1130 1130 1131 - rx_scrqs = be32_to_cpu(adapter->login_rsp_buf->num_rxadd_subcrqs); 1131 + rx_scrqs = adapter->num_active_rx_pools; 1132 1132 rx_entries = adapter->req_rx_add_entries_per_subcrq; 1133 1133 1134 1134 /* Free any remaining skbs in the rx buffer pools */ ··· 1177 1177 if (!adapter->tx_pool || !adapter->tso_pool) 1178 1178 return; 1179 1179 1180 - tx_scrqs = be32_to_cpu(adapter->login_rsp_buf->num_txsubm_subcrqs); 1180 + tx_scrqs = adapter->num_active_tx_pools; 1181 1181 1182 1182 /* Free any remaining skbs in the tx buffer pools */ 1183 1183 for (i = 0; i < tx_scrqs; i++) {
+1 -1
drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
··· 586 586 #define ICE_LG_ACT_MIRROR_VSI_ID_S 3 587 587 #define ICE_LG_ACT_MIRROR_VSI_ID_M (0x3FF << ICE_LG_ACT_MIRROR_VSI_ID_S) 588 588 589 - /* Action type = 5 - Large Action */ 589 + /* Action type = 5 - Generic Value */ 590 590 #define ICE_LG_ACT_GENERIC 0x5 591 591 #define ICE_LG_ACT_GENERIC_VALUE_S 3 592 592 #define ICE_LG_ACT_GENERIC_VALUE_M (0xFFFF << ICE_LG_ACT_GENERIC_VALUE_S)
+17 -5
drivers/net/ethernet/intel/ice/ice_common.c
··· 78 78 struct ice_aq_desc desc; 79 79 enum ice_status status; 80 80 u16 flags; 81 + u8 i; 81 82 82 83 cmd = &desc.params.mac_read; 83 84 ··· 99 98 return ICE_ERR_CFG; 100 99 } 101 100 102 - ether_addr_copy(hw->port_info->mac.lan_addr, resp->mac_addr); 103 - ether_addr_copy(hw->port_info->mac.perm_addr, resp->mac_addr); 101 + /* A single port can report up to two (LAN and WoL) addresses */ 102 + for (i = 0; i < cmd->num_addr; i++) 103 + if (resp[i].addr_type == ICE_AQC_MAN_MAC_ADDR_TYPE_LAN) { 104 + ether_addr_copy(hw->port_info->mac.lan_addr, 105 + resp[i].mac_addr); 106 + ether_addr_copy(hw->port_info->mac.perm_addr, 107 + resp[i].mac_addr); 108 + break; 109 + } 110 + 104 111 return 0; 105 112 } 106 113 ··· 473 464 if (status) 474 465 goto err_unroll_sched; 475 466 476 - /* Get port MAC information */ 477 - mac_buf_len = sizeof(struct ice_aqc_manage_mac_read_resp); 478 - mac_buf = devm_kzalloc(ice_hw_to_dev(hw), mac_buf_len, GFP_KERNEL); 467 + /* Get MAC information */ 468 + /* A single port can report up to two (LAN and WoL) addresses */ 469 + mac_buf = devm_kcalloc(ice_hw_to_dev(hw), 2, 470 + sizeof(struct ice_aqc_manage_mac_read_resp), 471 + GFP_KERNEL); 472 + mac_buf_len = 2 * sizeof(struct ice_aqc_manage_mac_read_resp); 479 473 480 474 if (!mac_buf) { 481 475 status = ICE_ERR_NO_MEMORY;
-2
drivers/net/ethernet/intel/ice/ice_hw_autogen.h
··· 121 121 #define PFINT_FW_CTL_CAUSE_ENA_S 30 122 122 #define PFINT_FW_CTL_CAUSE_ENA_M BIT(PFINT_FW_CTL_CAUSE_ENA_S) 123 123 #define PFINT_OICR 0x0016CA00 124 - #define PFINT_OICR_INTEVENT_S 0 125 - #define PFINT_OICR_INTEVENT_M BIT(PFINT_OICR_INTEVENT_S) 126 124 #define PFINT_OICR_HLP_RDY_S 14 127 125 #define PFINT_OICR_HLP_RDY_M BIT(PFINT_OICR_HLP_RDY_S) 128 126 #define PFINT_OICR_CPM_RDY_S 15
-4
drivers/net/ethernet/intel/ice/ice_main.c
··· 1722 1722 oicr = rd32(hw, PFINT_OICR); 1723 1723 ena_mask = rd32(hw, PFINT_OICR_ENA); 1724 1724 1725 - if (!(oicr & PFINT_OICR_INTEVENT_M)) 1726 - goto ena_intr; 1727 - 1728 1725 if (oicr & PFINT_OICR_GRST_M) { 1729 1726 u32 reset; 1730 1727 /* we have a reset warning */ ··· 1779 1782 } 1780 1783 ret = IRQ_HANDLED; 1781 1784 1782 - ena_intr: 1783 1785 /* re-enable interrupt causes that are not handled during this pass */ 1784 1786 wr32(hw, PFINT_OICR_ENA, ena_mask); 1785 1787 if (!test_bit(__ICE_DOWN, pf->state)) {
+2 -2
drivers/net/ethernet/intel/ice/ice_sched.c
··· 751 751 u16 num_added = 0; 752 752 u32 temp; 753 753 754 + *num_nodes_added = 0; 755 + 754 756 if (!num_nodes) 755 757 return status; 756 758 757 759 if (!parent || layer < hw->sw_entry_point_layer) 758 760 return ICE_ERR_PARAM; 759 - 760 - *num_nodes_added = 0; 761 761 762 762 /* max children per node per layer */ 763 763 max_child_nodes =
+16 -1
drivers/net/ethernet/intel/igb/igb_main.c
··· 1700 1700 WARN_ON(hw->mac.type != e1000_i210); 1701 1701 WARN_ON(queue < 0 || queue > 1); 1702 1702 1703 - if (enable) { 1703 + if (enable || queue == 0) { 1704 + /* i210 does not allow the queue 0 to be in the Strict 1705 + * Priority mode while the Qav mode is enabled, so, 1706 + * instead of disabling strict priority mode, we give 1707 + * queue 0 the maximum of credits possible. 1708 + * 1709 + * See section 8.12.19 of the i210 datasheet, "Note: 1710 + * Queue0 QueueMode must be set to 1b when 1711 + * TransmitMode is set to Qav." 1712 + */ 1713 + if (queue == 0 && !enable) { 1714 + /* max "linkspeed" idleslope in kbps */ 1715 + idleslope = 1000000; 1716 + hicredit = ETH_FRAME_LEN; 1717 + } 1718 + 1704 1719 set_tx_desc_fetch_prio(hw, queue, TX_QUEUE_PRIO_HIGH); 1705 1720 set_queue_mode(hw, queue, QUEUE_MODE_STREAM_RESERVATION); 1706 1721
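The igb hunk above works around an i210 restriction: queue 0 cannot be left in strict-priority mode while Qav transmit mode is enabled, so when queue 0 carries no reservation it is kept in stream-reservation mode but given the full link's credits. A standalone sketch of just that decision — struct and helper name are ours, not the driver's:

```c
#include <assert.h>

#define ETH_FRAME_LEN 1514

struct cbs {
	int enable;	/* does this queue have a CBS reservation? */
	int queue;
	int idleslope;	/* kbps */
	int hicredit;	/* bytes */
};

/* Sketch of the decision in the hunk above: while Qav is on, queue 0
 * must stay in stream-reservation mode, so a "disabled" queue 0 gets
 * the whole link's bandwidth as its idleslope instead of leaving
 * reservation mode (per the i210 datasheet note quoted in the diff). */
static void i210_pick_credits(struct cbs *p)
{
	if (p->queue == 0 && !p->enable) {
		p->idleslope = 1000000;	/* max "linkspeed" idleslope, kbps */
		p->hicredit = ETH_FRAME_LEN;
	}
}
```

Queues with a real reservation keep whatever idleslope/hicredit the user configured; only the unreserved queue 0 is overridden.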
+1 -1
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
··· 3420 3420 if (!err) 3421 3421 continue; 3422 3422 hw_dbg(&adapter->hw, "Allocation for XDP Queue %u failed\n", j); 3423 - break; 3423 + goto err_setup_tx; 3424 3424 } 3425 3425 3426 3426 return 0;
+46 -34
drivers/net/ethernet/sfc/ef10.c
··· 3999 3999 atomic_set(&efx->active_queues, 0); 4000 4000 } 4001 4001 4002 - static bool efx_ef10_filter_equal(const struct efx_filter_spec *left, 4003 - const struct efx_filter_spec *right) 4004 - { 4005 - if ((left->match_flags ^ right->match_flags) | 4006 - ((left->flags ^ right->flags) & 4007 - (EFX_FILTER_FLAG_RX | EFX_FILTER_FLAG_TX))) 4008 - return false; 4009 - 4010 - return memcmp(&left->outer_vid, &right->outer_vid, 4011 - sizeof(struct efx_filter_spec) - 4012 - offsetof(struct efx_filter_spec, outer_vid)) == 0; 4013 - } 4014 - 4015 - static unsigned int efx_ef10_filter_hash(const struct efx_filter_spec *spec) 4016 - { 4017 - BUILD_BUG_ON(offsetof(struct efx_filter_spec, outer_vid) & 3); 4018 - return jhash2((const u32 *)&spec->outer_vid, 4019 - (sizeof(struct efx_filter_spec) - 4020 - offsetof(struct efx_filter_spec, outer_vid)) / 4, 4021 - 0); 4022 - /* XXX should we randomise the initval? */ 4023 - } 4024 - 4025 4002 /* Decide whether a filter should be exclusive or else should allow 4026 4003 * delivery to additional recipients. Currently we decide that 4027 4004 * filters for specific local unicast MAC and IP addresses are ··· 4323 4346 goto out_unlock; 4324 4347 match_pri = rc; 4325 4348 4326 - hash = efx_ef10_filter_hash(spec); 4349 + hash = efx_filter_spec_hash(spec); 4327 4350 is_mc_recip = efx_filter_is_mc_recipient(spec); 4328 4351 if (is_mc_recip) 4329 4352 bitmap_zero(mc_rem_map, EFX_EF10_FILTER_SEARCH_LIMIT); ··· 4355 4378 if (!saved_spec) { 4356 4379 if (ins_index < 0) 4357 4380 ins_index = i; 4358 - } else if (efx_ef10_filter_equal(spec, saved_spec)) { 4381 + } else if (efx_filter_spec_equal(spec, saved_spec)) { 4359 4382 if (spec->priority < saved_spec->priority && 4360 4383 spec->priority != EFX_FILTER_PRI_AUTO) { 4361 4384 rc = -EPERM; ··· 4739 4762 static bool efx_ef10_filter_rfs_expire_one(struct efx_nic *efx, u32 flow_id, 4740 4763 unsigned int filter_idx) 4741 4764 { 4765 + struct efx_filter_spec *spec, saved_spec; 4742 4766 struct efx_ef10_filter_table *table; 4743 - struct efx_filter_spec *spec; 4744 - bool ret; 4767 + struct efx_arfs_rule *rule = NULL; 4768 + bool ret = true, force = false; 4769 + u16 arfs_id; 4745 4770 4746 4771 down_read(&efx->filter_sem); 4747 4772 table = efx->filter_state; 4748 4773 down_write(&table->lock); 4749 4774 spec = efx_ef10_filter_entry_spec(table, filter_idx); 4750 4775 4751 - if (!spec || spec->priority != EFX_FILTER_PRI_HINT) { 4752 - ret = true; 4776 + if (!spec || spec->priority != EFX_FILTER_PRI_HINT) 4753 4777 goto out_unlock; 4754 - } 4755 4778 4756 - if (!rps_may_expire_flow(efx->net_dev, spec->dmaq_id, flow_id, 0)) { 4779 + spin_lock_bh(&efx->rps_hash_lock); 4780 + if (!efx->rps_hash_table) { 4781 + /* In the absence of the table, we always return 0 to ARFS. */ 4782 + arfs_id = 0; 4783 + } else { 4784 + rule = efx_rps_hash_find(efx, spec); 4785 + if (!rule) 4786 + /* ARFS table doesn't know of this filter, so remove it */ 4787 + goto expire; 4788 + arfs_id = rule->arfs_id; 4789 + ret = efx_rps_check_rule(rule, filter_idx, &force); 4790 + if (force) 4791 + goto expire; 4792 + if (!ret) { 4793 + spin_unlock_bh(&efx->rps_hash_lock); 4794 + goto out_unlock; 4795 + } 4796 + } 4797 + if (!rps_may_expire_flow(efx->net_dev, spec->dmaq_id, flow_id, arfs_id)) 4757 4798 ret = false; 4758 - goto out_unlock; 4759 - } 4760 - 4799 + else if (rule) 4800 + rule->filter_id = EFX_ARFS_FILTER_ID_REMOVING; 4801 + expire: 4802 + saved_spec = *spec; /* remove operation will kfree spec */ 4803 + spin_unlock_bh(&efx->rps_hash_lock); 4804 + /* At this point (since we dropped the lock), another thread might queue 4805 + * up a fresh insertion request (but the actual insertion will be held 4806 + * up by our possession of the filter table lock). In that case, it 4807 + * will set rule->filter_id to EFX_ARFS_FILTER_ID_PENDING, meaning that 4808 + * the rule is not removed by efx_rps_hash_del() below. 4809 + */ 4761 4810 ret = efx_ef10_filter_remove_internal(efx, 1U << spec->priority, 4762 4811 filter_idx, true) == 0; 4812 + /* While we can't safely dereference rule (we dropped the lock), we can 4813 + * still test it for NULL. 4814 + */ 4815 + if (ret && rule) { 4816 + /* Expiring, so remove entry from ARFS table */ 4817 + spin_lock_bh(&efx->rps_hash_lock); 4818 + efx_rps_hash_del(efx, &saved_spec); 4819 + spin_unlock_bh(&efx->rps_hash_lock); 4820 + } 4763 4821 out_unlock: 4764 4822 up_write(&table->lock); 4765 4823 up_read(&efx->filter_sem);
+143
drivers/net/ethernet/sfc/efx.c
··· 3027 3027 mutex_init(&efx->mac_lock); 3028 3028 #ifdef CONFIG_RFS_ACCEL 3029 3029 mutex_init(&efx->rps_mutex); 3030 + spin_lock_init(&efx->rps_hash_lock); 3031 + /* Failure to allocate is not fatal, but may degrade ARFS performance */ 3032 + efx->rps_hash_table = kcalloc(EFX_ARFS_HASH_TABLE_SIZE, 3033 + sizeof(*efx->rps_hash_table), GFP_KERNEL); 3030 3034 #endif 3031 3035 efx->phy_op = &efx_dummy_phy_operations; 3032 3036 efx->mdio.dev = net_dev; ··· 3074 3070 { 3075 3071 int i; 3076 3072 3073 + #ifdef CONFIG_RFS_ACCEL 3074 + kfree(efx->rps_hash_table); 3075 + #endif 3076 + 3077 3077 for (i = 0; i < EFX_MAX_CHANNELS; i++) 3078 3078 kfree(efx->channel[i]); 3079 3079 ··· 3099 3091 stats[GENERIC_STAT_rx_nodesc_trunc] = n_rx_nodesc_trunc; 3100 3092 stats[GENERIC_STAT_rx_noskb_drops] = atomic_read(&efx->n_rx_noskb_drops); 3101 3093 } 3094 + 3095 + bool efx_filter_spec_equal(const struct efx_filter_spec *left, 3096 + const struct efx_filter_spec *right) 3097 + { 3098 + if ((left->match_flags ^ right->match_flags) | 3099 + ((left->flags ^ right->flags) & 3100 + (EFX_FILTER_FLAG_RX | EFX_FILTER_FLAG_TX))) 3101 + return false; 3102 + 3103 + return memcmp(&left->outer_vid, &right->outer_vid, 3104 + sizeof(struct efx_filter_spec) - 3105 + offsetof(struct efx_filter_spec, outer_vid)) == 0; 3106 + } 3107 + 3108 + u32 efx_filter_spec_hash(const struct efx_filter_spec *spec) 3109 + { 3110 + BUILD_BUG_ON(offsetof(struct efx_filter_spec, outer_vid) & 3); 3111 + return jhash2((const u32 *)&spec->outer_vid, 3112 + (sizeof(struct efx_filter_spec) - 3113 + offsetof(struct efx_filter_spec, outer_vid)) / 4, 3114 + 0); 3115 + } 3116 + 3117 + #ifdef CONFIG_RFS_ACCEL 3118 + bool efx_rps_check_rule(struct efx_arfs_rule *rule, unsigned int filter_idx, 3119 + bool *force) 3120 + { 3121 + if (rule->filter_id == EFX_ARFS_FILTER_ID_PENDING) { 3122 + /* ARFS is currently updating this entry, leave it */ 3123 + return false; 3124 + } 3125 + if (rule->filter_id == EFX_ARFS_FILTER_ID_ERROR) { 
3126 + /* ARFS tried and failed to update this, so it's probably out 3127 + * of date. Remove the filter and the ARFS rule entry. 3128 + */ 3129 + rule->filter_id = EFX_ARFS_FILTER_ID_REMOVING; 3130 + *force = true; 3131 + return true; 3132 + } else if (WARN_ON(rule->filter_id != filter_idx)) { /* can't happen */ 3133 + /* ARFS has moved on, so old filter is not needed. Since we did 3134 + * not mark the rule with EFX_ARFS_FILTER_ID_REMOVING, it will 3135 + * not be removed by efx_rps_hash_del() subsequently. 3136 + */ 3137 + *force = true; 3138 + return true; 3139 + } 3140 + /* Remove it iff ARFS wants to. */ 3141 + return true; 3142 + } 3143 + 3144 + struct hlist_head *efx_rps_hash_bucket(struct efx_nic *efx, 3145 + const struct efx_filter_spec *spec) 3146 + { 3147 + u32 hash = efx_filter_spec_hash(spec); 3148 + 3149 + WARN_ON(!spin_is_locked(&efx->rps_hash_lock)); 3150 + if (!efx->rps_hash_table) 3151 + return NULL; 3152 + return &efx->rps_hash_table[hash % EFX_ARFS_HASH_TABLE_SIZE]; 3153 + } 3154 + 3155 + struct efx_arfs_rule *efx_rps_hash_find(struct efx_nic *efx, 3156 + const struct efx_filter_spec *spec) 3157 + { 3158 + struct efx_arfs_rule *rule; 3159 + struct hlist_head *head; 3160 + struct hlist_node *node; 3161 + 3162 + head = efx_rps_hash_bucket(efx, spec); 3163 + if (!head) 3164 + return NULL; 3165 + hlist_for_each(node, head) { 3166 + rule = container_of(node, struct efx_arfs_rule, node); 3167 + if (efx_filter_spec_equal(spec, &rule->spec)) 3168 + return rule; 3169 + } 3170 + return NULL; 3171 + } 3172 + 3173 + struct efx_arfs_rule *efx_rps_hash_add(struct efx_nic *efx, 3174 + const struct efx_filter_spec *spec, 3175 + bool *new) 3176 + { 3177 + struct efx_arfs_rule *rule; 3178 + struct hlist_head *head; 3179 + struct hlist_node *node; 3180 + 3181 + head = efx_rps_hash_bucket(efx, spec); 3182 + if (!head) 3183 + return NULL; 3184 + hlist_for_each(node, head) { 3185 + rule = container_of(node, struct efx_arfs_rule, node); 3186 + if (efx_filter_spec_equal(spec, &rule->spec)) { 3187 + *new = false; 3188 + return rule; 3189 + } 3190 + } 3191 + rule = kmalloc(sizeof(*rule), GFP_ATOMIC); 3192 + *new = true; 3193 + if (rule) { 3194 + memcpy(&rule->spec, spec, sizeof(rule->spec)); 3195 + hlist_add_head(&rule->node, head); 3196 + } 3197 + return rule; 3198 + } 3199 + 3200 + void efx_rps_hash_del(struct efx_nic *efx, const struct efx_filter_spec *spec) 3201 + { 3202 + struct efx_arfs_rule *rule; 3203 + struct hlist_head *head; 3204 + struct hlist_node *node; 3205 + 3206 + head = efx_rps_hash_bucket(efx, spec); 3207 + if (WARN_ON(!head)) 3208 + return; 3209 + hlist_for_each(node, head) { 3210 + rule = container_of(node, struct efx_arfs_rule, node); 3211 + if (efx_filter_spec_equal(spec, &rule->spec)) { 3212 + /* Someone already reused the entry. We know that if 3213 + * this check doesn't fire (i.e. filter_id == REMOVING) 3214 + * then the REMOVING mark was put there by our caller, 3215 + * because caller is holding a lock on filter table and 3216 + * only holders of that lock set REMOVING. 3217 + */ 3218 + if (rule->filter_id != EFX_ARFS_FILTER_ID_REMOVING) 3219 + return; 3220 + hlist_del(node); 3221 + kfree(rule); 3222 + return; 3223 + } 3224 + } 3225 + /* We didn't find it. */ 3226 + WARN_ON(1); 3227 + } 3228 + #endif 3102 3229 3103 3230 /* RSS contexts. We're using linked lists and crappy O(n) algorithms, because 3104 3231 * (a) this is an infrequent control-plane operation and (b) n is small (max 64)
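The new `efx_rps_hash_bucket()` above reduces the 32-bit `jhash2` of the filter spec modulo the table size, which is chosen so the bucket array fills one 4kB page. A trivially checkable sketch of that bucket selection:

```c
#include <assert.h>
#include <stdint.h>

/* Size chosen in the hunk above so the bucket array is one page:
 * 512 eight-byte hlist heads on 64-bit = 4kB. */
#define EFX_ARFS_HASH_TABLE_SIZE 512

/* Bucket selection as in efx_rps_hash_bucket(): the 32-bit hash of the
 * spec (jhash2 over everything from outer_vid onward) is reduced
 * modulo the table size. */
static unsigned int rps_bucket(uint32_t hash)
{
	return hash % EFX_ARFS_HASH_TABLE_SIZE;
}
```

Because 512 is a power of two, the modulo compiles down to masking the low 9 bits of the hash.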
+21
drivers/net/ethernet/sfc/efx.h
··· 186 186 #endif 187 187 bool efx_filter_is_mc_recipient(const struct efx_filter_spec *spec); 188 188 189 + bool efx_filter_spec_equal(const struct efx_filter_spec *left, 190 + const struct efx_filter_spec *right); 191 + u32 efx_filter_spec_hash(const struct efx_filter_spec *spec); 192 + 193 + #ifdef CONFIG_RFS_ACCEL 194 + bool efx_rps_check_rule(struct efx_arfs_rule *rule, unsigned int filter_idx, 195 + bool *force); 196 + 197 + struct efx_arfs_rule *efx_rps_hash_find(struct efx_nic *efx, 198 + const struct efx_filter_spec *spec); 199 + 200 + /* @new is written to indicate if entry was newly added (true) or if an old 201 + * entry was found and returned (false). 202 + */ 203 + struct efx_arfs_rule *efx_rps_hash_add(struct efx_nic *efx, 204 + const struct efx_filter_spec *spec, 205 + bool *new); 206 + 207 + void efx_rps_hash_del(struct efx_nic *efx, const struct efx_filter_spec *spec); 208 + #endif 209 + 189 210 /* RSS contexts */ 190 211 struct efx_rss_context *efx_alloc_rss_context_entry(struct efx_nic *efx); 191 212 struct efx_rss_context *efx_find_rss_context_entry(struct efx_nic *efx, u32 id);
+34 -7
drivers/net/ethernet/sfc/farch.c
··· 2905 2905 { 2906 2906 struct efx_farch_filter_state *state = efx->filter_state; 2907 2907 struct efx_farch_filter_table *table; 2908 - bool ret = false; 2908 + bool ret = false, force = false; 2909 + u16 arfs_id; 2909 2910 2910 2911 down_write(&state->lock); 2912 + spin_lock_bh(&efx->rps_hash_lock); 2911 2913 table = &state->table[EFX_FARCH_FILTER_TABLE_RX_IP]; 2912 2914 if (test_bit(index, table->used_bitmap) && 2913 - table->spec[index].priority == EFX_FILTER_PRI_HINT && 2914 - rps_may_expire_flow(efx->net_dev, table->spec[index].dmaq_id, 2915 - flow_id, 0)) { 2916 - efx_farch_filter_table_clear_entry(efx, table, index); 2917 - ret = true; 2918 - } 2915 + table->spec[index].priority == EFX_FILTER_PRI_HINT) { 2916 + struct efx_arfs_rule *rule = NULL; 2917 + struct efx_filter_spec spec; 2919 2918 2919 + efx_farch_filter_to_gen_spec(&spec, &table->spec[index]); 2920 + if (!efx->rps_hash_table) { 2921 + /* In the absence of the table, we always returned 0 to 2922 + * ARFS, so use the same to query it. 2923 + */ 2924 + arfs_id = 0; 2925 + } else { 2926 + rule = efx_rps_hash_find(efx, &spec); 2927 + if (!rule) { 2928 + /* ARFS table doesn't know of this filter, remove it */ 2929 + force = true; 2930 + } else { 2931 + arfs_id = rule->arfs_id; 2932 + if (!efx_rps_check_rule(rule, index, &force)) 2933 + goto out_unlock; 2934 + } 2935 + } 2936 + if (force || rps_may_expire_flow(efx->net_dev, spec.dmaq_id, 2937 + flow_id, arfs_id)) { 2938 + if (rule) 2939 + rule->filter_id = EFX_ARFS_FILTER_ID_REMOVING; 2940 + efx_rps_hash_del(efx, &spec); 2941 + efx_farch_filter_table_clear_entry(efx, table, index); 2942 + ret = true; 2943 + } 2944 + } 2945 + out_unlock: 2946 + spin_unlock_bh(&efx->rps_hash_lock); 2920 2947 up_write(&state->lock); 2921 2948 return ret; 2922 2949 }
+36
drivers/net/ethernet/sfc/net_driver.h
··· 734 734 }; 735 735 736 736 #ifdef CONFIG_RFS_ACCEL 737 + /* Order of these is important, since filter_id >= %EFX_ARFS_FILTER_ID_PENDING 738 + * is used to test if filter does or will exist. 739 + */ 740 + #define EFX_ARFS_FILTER_ID_PENDING -1 741 + #define EFX_ARFS_FILTER_ID_ERROR -2 742 + #define EFX_ARFS_FILTER_ID_REMOVING -3 743 + /** 744 + * struct efx_arfs_rule - record of an ARFS filter and its IDs 745 + * @node: linkage into hash table 746 + * @spec: details of the filter (used as key for hash table). Use efx->type to 747 + * determine which member to use. 748 + * @rxq_index: channel to which the filter will steer traffic. 749 + * @arfs_id: filter ID which was returned to ARFS 750 + * @filter_id: index in software filter table. May be 751 + * %EFX_ARFS_FILTER_ID_PENDING if filter was not inserted yet, 752 + * %EFX_ARFS_FILTER_ID_ERROR if filter insertion failed, or 753 + * %EFX_ARFS_FILTER_ID_REMOVING if expiry is currently removing the filter. 754 + */ 755 + struct efx_arfs_rule { 756 + struct hlist_node node; 757 + struct efx_filter_spec spec; 758 + u16 rxq_index; 759 + u16 arfs_id; 760 + s32 filter_id; 761 + }; 762 + 763 + /* Size chosen so that the table is one page (4kB) */ 764 + #define EFX_ARFS_HASH_TABLE_SIZE 512 765 + 737 766 /** 738 767 * struct efx_async_filter_insertion - Request to asynchronously insert a filter 739 768 * @net_dev: Reference to the netdevice ··· 902 873 * @rps_expire_channel's @rps_flow_id 903 874 * @rps_slot_map: bitmap of in-flight entries in @rps_slot 904 875 * @rps_slot: array of ARFS insertion requests for efx_filter_rfs_work() 876 + * @rps_hash_lock: Protects ARFS filter mapping state (@rps_hash_table and 877 + * @rps_next_id). 878 + * @rps_hash_table: Mapping between ARFS filters and their various IDs 879 + * @rps_next_id: next arfs_id for an ARFS filter 905 880 * @active_queues: Count of RX and TX queues that haven't been flushed and drained. 
906 881 * @rxq_flush_pending: Count of number of receive queues that need to be flushed. 907 882 * Decremented when the efx_flush_rx_queue() is called. ··· 1062 1029 unsigned int rps_expire_index; 1063 1030 unsigned long rps_slot_map; 1064 1031 struct efx_async_filter_insertion rps_slot[EFX_RPS_MAX_IN_FLIGHT]; 1032 + spinlock_t rps_hash_lock; 1033 + struct hlist_head *rps_hash_table; 1034 + u32 rps_next_id; 1065 1035 #endif 1066 1036 1067 1037 atomic_t active_queues;
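The comment in the hunk above notes that the ordering of the three sentinel IDs matters: a single comparison `filter_id >= EFX_ARFS_FILTER_ID_PENDING` answers "does this filter exist or will it soon?" because real table indices are non-negative, PENDING is -1, and the two "dead" states sort below it. A minimal sketch of that predicate:

```c
#include <assert.h>
#include <stdbool.h>

/* Same values and ordering as in the hunk above. */
#define EFX_ARFS_FILTER_ID_PENDING	-1
#define EFX_ARFS_FILTER_ID_ERROR	-2
#define EFX_ARFS_FILTER_ID_REMOVING	-3

/* A filter "does or will exist" iff its id is a real table index (>= 0)
 * or the PENDING sentinel; ERROR and REMOVING deliberately sort below
 * PENDING so one comparison covers all states. */
static bool filter_exists_or_pending(int filter_id)
{
	return filter_id >= EFX_ARFS_FILTER_ID_PENDING;
}
```

This is exactly the test the insertion path uses to skip re-inserting a filter whose existing or pending entry already steers to the right queue.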
+57 -5
drivers/net/ethernet/sfc/rx.c
··· 834 834 struct efx_nic *efx = netdev_priv(req->net_dev); 835 835 struct efx_channel *channel = efx_get_channel(efx, req->rxq_index); 836 836 int slot_idx = req - efx->rps_slot; 837 + struct efx_arfs_rule *rule; 838 + u16 arfs_id = 0; 837 839 int rc; 838 840 839 841 rc = efx->type->filter_insert(efx, &req->spec, true); 842 + if (efx->rps_hash_table) { 843 + spin_lock_bh(&efx->rps_hash_lock); 844 + rule = efx_rps_hash_find(efx, &req->spec); 845 + /* The rule might have already gone, if someone else's request 846 + * for the same spec was already worked and then expired before 847 + * we got around to our work. In that case we have nothing 848 + * tying us to an arfs_id, meaning that as soon as the filter 849 + * is considered for expiry it will be removed. 850 + */ 851 + if (rule) { 852 + if (rc < 0) 853 + rule->filter_id = EFX_ARFS_FILTER_ID_ERROR; 854 + else 855 + rule->filter_id = rc; 856 + arfs_id = rule->arfs_id; 857 + } 858 + spin_unlock_bh(&efx->rps_hash_lock); 859 + } 840 860 if (rc >= 0) { 841 861 /* Remember this so we can check whether to expire the filter 842 862 * later. ··· 868 848 869 849 if (req->spec.ether_type == htons(ETH_P_IP)) 870 850 netif_info(efx, rx_status, efx->net_dev, 871 - "steering %s %pI4:%u:%pI4:%u to queue %u [flow %u filter %d]\n", 851 + "steering %s %pI4:%u:%pI4:%u to queue %u [flow %u filter %d id %u]\n", 872 852 (req->spec.ip_proto == IPPROTO_TCP) ? "TCP" : "UDP", 873 853 req->spec.rem_host, ntohs(req->spec.rem_port), 874 854 req->spec.loc_host, ntohs(req->spec.loc_port), 875 - req->rxq_index, req->flow_id, rc); 855 + req->rxq_index, req->flow_id, rc, arfs_id); 876 856 else 877 857 netif_info(efx, rx_status, efx->net_dev, 878 - "steering %s [%pI6]:%u:[%pI6]:%u to queue %u [flow %u filter %d]\n", 858 + "steering %s [%pI6]:%u:[%pI6]:%u to queue %u [flow %u filter %d id %u]\n", 879 859 (req->spec.ip_proto == IPPROTO_TCP) ? "TCP" : "UDP", 880 860 req->spec.rem_host, ntohs(req->spec.rem_port), 881 861 req->spec.loc_host, ntohs(req->spec.loc_port), 882 - req->rxq_index, req->flow_id, rc); 862 + req->rxq_index, req->flow_id, rc, arfs_id); 883 863 } 884 864 885 865 /* Release references */ ··· 892 872 { 893 873 struct efx_nic *efx = netdev_priv(net_dev); 894 874 struct efx_async_filter_insertion *req; 875 + struct efx_arfs_rule *rule; 895 876 struct flow_keys fk; 896 877 int slot_idx; 878 + bool new; 897 879 int rc; 898 880 899 881 /* find a free slot */ ··· 948 926 req->spec.rem_port = fk.ports.src; 949 927 req->spec.loc_port = fk.ports.dst; 950 928 929 + if (efx->rps_hash_table) { 930 + /* Add it to ARFS hash table */ 931 + spin_lock(&efx->rps_hash_lock); 932 + rule = efx_rps_hash_add(efx, &req->spec, &new); 933 + if (!rule) { 934 + rc = -ENOMEM; 935 + goto out_unlock; 936 + } 937 + if (new) 938 + rule->arfs_id = efx->rps_next_id++ % RPS_NO_FILTER; 939 + rc = rule->arfs_id; 940 + /* Skip if existing or pending filter already does the right thing */ 941 + if (!new && rule->rxq_index == rxq_index && 942 + rule->filter_id >= EFX_ARFS_FILTER_ID_PENDING) 943 + goto out_unlock; 944 + rule->rxq_index = rxq_index; 945 + rule->filter_id = EFX_ARFS_FILTER_ID_PENDING; 946 + spin_unlock(&efx->rps_hash_lock); 947 + } else { 948 + /* Without an ARFS hash table, we just use arfs_id 0 for all 949 + * filters. This means if multiple flows hash to the same 950 + * flow_id, all but the most recently touched will be eligible 951 + * for expiry. 952 + */ 953 + rc = 0; 954 + } 955 + 956 + /* Queue the request */ 951 957 dev_hold(req->net_dev = net_dev); 952 958 INIT_WORK(&req->work, efx_filter_rfs_work); 953 959 req->rxq_index = rxq_index; 954 960 req->flow_id = flow_id; 955 961 schedule_work(&req->work); 956 - return 0; 962 + return rc; 963 + out_unlock: 964 + spin_unlock(&efx->rps_hash_lock); 957 965 out_clear: 958 966 clear_bit(slot_idx, &efx->rps_slot_map); 959 967 return rc;
+1 -1
drivers/net/ethernet/ti/cpsw.c
··· 129 129 130 130 #define RX_PRIORITY_MAPPING 0x76543210 131 131 #define TX_PRIORITY_MAPPING 0x33221100 132 - #define CPDMA_TX_PRIORITY_MAP 0x01234567 132 + #define CPDMA_TX_PRIORITY_MAP 0x76543210 133 133 134 134 #define CPSW_VLAN_AWARE BIT(1) 135 135 #define CPSW_RX_VLAN_ENCAP BIT(2)
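The cpsw fix above changes `CPDMA_TX_PRIORITY_MAP` from `0x01234567` to `0x76543210`: the register packs one 4-bit priority per DMA channel (channel n in bits `[4n+3:4n]`), so the old value gave channel 0 priority 7 and channel 7 priority 0, inverting the intended order. A standalone sketch of the nibble layout — the helper name is ours, not the driver's:

```c
#include <assert.h>
#include <stdint.h>

/* Each CPDMA channel n has a 4-bit priority field at bits [4n+3:4n] of
 * the priority-map register. Extract channel n's priority. */
static unsigned int cpdma_chan_prio(uint32_t map, unsigned int chan)
{
	return (map >> (chan * 4)) & 0xf;
}
```

With the corrected map, each channel's priority equals its channel number, matching `RX_PRIORITY_MAPPING` a few lines above.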
+9
drivers/net/phy/marvell.c
··· 1393 1393 if (err < 0) 1394 1394 goto error; 1395 1395 1396 + /* If WOL event happened once, the LED[2] interrupt pin 1397 + * will not be cleared unless we read the interrupt status 1398 + * register. If interrupts are in use, the normal interrupt 1399 + * handling will clear the WOL event. Clear the WOL event 1400 + * before enabling it if !phy_interrupt_is_valid() 1401 + */ 1402 + if (!phy_interrupt_is_valid(phydev)) 1403 + phy_read(phydev, MII_M1011_IEVENT); 1404 + 1396 1405 /* Enable the WOL interrupt */ 1397 1406 err = __phy_modify(phydev, MII_88E1318S_PHY_CSIER, 0, 1398 1407 MII_88E1318S_PHY_CSIER_WOL_EIE);
+4
drivers/net/ppp/pppoe.c
··· 620 620 lock_sock(sk); 621 621 622 622 error = -EINVAL; 623 + 624 + if (sockaddr_len != sizeof(struct sockaddr_pppox)) 625 + goto end; 626 + 623 627 if (sp->sa_protocol != PX_PROTO_OE) 624 628 goto end; 625 629
+12 -7
drivers/net/team/team.c
··· 1072 1072 } 1073 1073 1074 1074 #ifdef CONFIG_NET_POLL_CONTROLLER 1075 - static int team_port_enable_netpoll(struct team *team, struct team_port *port) 1075 + static int __team_port_enable_netpoll(struct team_port *port) 1076 1076 { 1077 1077 struct netpoll *np; 1078 1078 int err; 1079 - 1080 - if (!team->dev->npinfo) 1081 - return 0; 1082 1079 1083 1080 np = kzalloc(sizeof(*np), GFP_KERNEL); 1084 1081 if (!np) ··· 1088 1091 } 1089 1092 port->np = np; 1090 1093 return err; 1094 + } 1095 + 1096 + static int team_port_enable_netpoll(struct team_port *port) 1097 + { 1098 + if (!port->team->dev->npinfo) 1099 + return 0; 1100 + 1101 + return __team_port_enable_netpoll(port); 1091 1102 } 1092 1103 1093 1104 static void team_port_disable_netpoll(struct team_port *port) ··· 1112 1107 kfree(np); 1113 1108 } 1114 1109 #else 1115 - static int team_port_enable_netpoll(struct team *team, struct team_port *port) 1110 + static int team_port_enable_netpoll(struct team_port *port) 1116 1111 { 1117 1112 return 0; 1118 1113 } ··· 1226 1221 goto err_vids_add; 1227 1222 } 1228 1223 1229 - err = team_port_enable_netpoll(team, port); 1224 + err = team_port_enable_netpoll(port); 1230 1225 if (err) { 1231 1226 netdev_err(dev, "Failed to enable netpoll on device %s\n", 1232 1227 portname); ··· 1923 1918 1924 1919 mutex_lock(&team->lock); 1925 1920 list_for_each_entry(port, &team->port_list, list) { 1926 - err = team_port_enable_netpoll(team, port); 1921 + err = __team_port_enable_netpoll(port); 1927 1922 if (err) { 1928 1923 __team_netpoll_cleanup(team); 1929 1924 break;
+5 -2
drivers/of/fdt.c
··· 942 942 int offset; 943 943 const char *p, *q, *options = NULL; 944 944 int l; 945 - const struct earlycon_id *match; 945 + const struct earlycon_id **p_match; 946 946 const void *fdt = initial_boot_params; 947 947 948 948 offset = fdt_path_offset(fdt, "/chosen"); ··· 969 969 return 0; 970 970 } 971 971 972 - for (match = __earlycon_table; match < __earlycon_table_end; match++) { 972 + for (p_match = __earlycon_table; p_match < __earlycon_table_end; 973 + p_match++) { 974 + const struct earlycon_id *match = *p_match; 975 + 973 976 if (!match->compatible[0]) 974 977 continue; 975 978
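The of/fdt change above follows a rework that made `__earlycon_table` an array of *pointers* to `earlycon_id` entries rather than the entries themselves, so the loop now dereferences twice. A minimal standalone sketch of that double indirection — the stand-in struct and helper are illustrative, not the kernel's:

```c
#include <assert.h>
#include <string.h>

struct earlycon_id {
	const char *compatible;
};

static const struct earlycon_id uart0 = { "ns16550a" };
/* Table of pointers, like the reworked __earlycon_table. */
static const struct earlycon_id *table[] = { &uart0 };

/* Iterate a [start, end) table of pointers, dereferencing each slot to
 * reach the entry, as the fixed loop above does. */
static int count_compatible(const struct earlycon_id **start,
			    const struct earlycon_id **end,
			    const char *compat)
{
	const struct earlycon_id **p_match;
	int n = 0;

	for (p_match = start; p_match < end; p_match++) {
		const struct earlycon_id *match = *p_match;

		if (!strcmp(match->compatible, compat))
			n++;
	}
	return n;
}
```

Before the fix, the loop treated the pointer slots as if they were the structs themselves, reading garbage fields.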
+1 -1
drivers/pci/dwc/pcie-kirin.c
··· 486 486 return ret; 487 487 488 488 kirin_pcie->gpio_id_reset = of_get_named_gpio(dev->of_node, 489 - "reset-gpio", 0); 489 + "reset-gpios", 0); 490 490 if (kirin_pcie->gpio_id_reset < 0) 491 491 return -ENODEV; 492 492
+30 -23
drivers/pci/host/pci-aardvark.c
··· 29 29 #define PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT 5 30 30 #define PCIE_CORE_DEV_CTRL_STATS_SNOOP_DISABLE (0 << 11) 31 31 #define PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SIZE_SHIFT 12 32 + #define PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SZ 0x2 32 33 #define PCIE_CORE_LINK_CTRL_STAT_REG 0xd0 33 34 #define PCIE_CORE_LINK_L0S_ENTRY BIT(0) 34 35 #define PCIE_CORE_LINK_TRAINING BIT(5) ··· 101 100 #define PCIE_ISR1_MASK_REG (CONTROL_BASE_ADDR + 0x4C) 102 101 #define PCIE_ISR1_POWER_STATE_CHANGE BIT(4) 103 102 #define PCIE_ISR1_FLUSH BIT(5) 104 - #define PCIE_ISR1_ALL_MASK GENMASK(5, 4) 103 + #define PCIE_ISR1_INTX_ASSERT(val) BIT(8 + (val)) 104 + #define PCIE_ISR1_ALL_MASK GENMASK(11, 4) 105 105 #define PCIE_MSI_ADDR_LOW_REG (CONTROL_BASE_ADDR + 0x50) 106 106 #define PCIE_MSI_ADDR_HIGH_REG (CONTROL_BASE_ADDR + 0x54) 107 107 #define PCIE_MSI_STATUS_REG (CONTROL_BASE_ADDR + 0x58) ··· 174 172 #define PCIE_CONFIG_WR_TYPE0 0xa 175 173 #define PCIE_CONFIG_WR_TYPE1 0xb 176 174 177 - /* PCI_BDF shifts 8bit, so we need extra 4bit shift */ 178 - #define PCIE_BDF(dev) (dev << 4) 179 175 #define PCIE_CONF_BUS(bus) (((bus) & 0xff) << 20) 180 176 #define PCIE_CONF_DEV(dev) (((dev) & 0x1f) << 15) 181 177 #define PCIE_CONF_FUNC(fun) (((fun) & 0x7) << 12) ··· 296 296 reg = PCIE_CORE_DEV_CTRL_STATS_RELAX_ORDER_DISABLE | 297 297 (7 << PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT) | 298 298 PCIE_CORE_DEV_CTRL_STATS_SNOOP_DISABLE | 299 - PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SIZE_SHIFT; 299 + (PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SZ << 300 + PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SIZE_SHIFT); 300 301 advk_writel(pcie, reg, PCIE_CORE_DEV_CTRL_STATS_REG); 301 302 302 303 /* Program PCIe Control 2 to disable strict ordering */ ··· 438 437 u32 reg; 439 438 int ret; 440 439 441 - if (PCI_SLOT(devfn) != 0) { 440 + if ((bus->number == pcie->root_bus_nr) && PCI_SLOT(devfn) != 0) { 442 441 *val = 0xffffffff; 443 442 return PCIBIOS_DEVICE_NOT_FOUND; 444 443 } ··· 457 456 advk_writel(pcie, reg, PIO_CTRL); 458 457 459 458 /* Program the address registers */ 460 - reg = PCIE_BDF(devfn) | PCIE_CONF_REG(where); 459 + reg = PCIE_CONF_ADDR(bus->number, devfn, where); 461 460 advk_writel(pcie, reg, PIO_ADDR_LS); 462 461 advk_writel(pcie, 0, PIO_ADDR_MS); 463 462 ··· 492 491 int offset; 493 492 int ret; 494 493 495 - if (PCI_SLOT(devfn) != 0) 494 + if ((bus->number == pcie->root_bus_nr) && PCI_SLOT(devfn) != 0) 496 495 return PCIBIOS_DEVICE_NOT_FOUND; 497 496 498 497 if (where % size) ··· 610 609 irq_hw_number_t hwirq = irqd_to_hwirq(d); 611 610 u32 mask; 612 611 613 - mask = advk_readl(pcie, PCIE_ISR0_MASK_REG); 614 - mask |= PCIE_ISR0_INTX_ASSERT(hwirq); 615 - advk_writel(pcie, mask, PCIE_ISR0_MASK_REG); 612 + mask = advk_readl(pcie, PCIE_ISR1_MASK_REG); 613 + mask |= PCIE_ISR1_INTX_ASSERT(hwirq); 614 + advk_writel(pcie, mask, PCIE_ISR1_MASK_REG); 616 615 } 617 616 618 617 static void advk_pcie_irq_unmask(struct irq_data *d) ··· 621 620 irq_hw_number_t hwirq = irqd_to_hwirq(d); 622 621 u32 mask; 623 622 624 - mask = advk_readl(pcie, PCIE_ISR0_MASK_REG); 625 - mask &= ~PCIE_ISR0_INTX_ASSERT(hwirq); 626 - advk_writel(pcie, mask, PCIE_ISR0_MASK_REG); 623 + mask = advk_readl(pcie, PCIE_ISR1_MASK_REG); 624 + mask &= ~PCIE_ISR1_INTX_ASSERT(hwirq); 625 + advk_writel(pcie, mask, PCIE_ISR1_MASK_REG); 627 626 } 628 627 629 628 static int advk_pcie_irq_map(struct irq_domain *h, ··· 766 765 767 766 static void advk_pcie_handle_int(struct advk_pcie *pcie) 768 767 { 769 - u32 val, mask, status; 768 + u32 isr0_val, isr0_mask, isr0_status; 769 + u32 isr1_val, isr1_mask, isr1_status; 770 770 int i, virq; 771 771 772 - val = advk_readl(pcie, PCIE_ISR0_REG); 773 - mask = advk_readl(pcie, PCIE_ISR0_MASK_REG); 774 - status = val & ((~mask) & PCIE_ISR0_ALL_MASK); 772 + isr0_val = advk_readl(pcie, PCIE_ISR0_REG); 773 + isr0_mask = advk_readl(pcie, PCIE_ISR0_MASK_REG); 774 + isr0_status = isr0_val & ((~isr0_mask) & PCIE_ISR0_ALL_MASK); 775 775 776 - if (!status) { 777 - advk_writel(pcie, val, PCIE_ISR0_REG); 776 + isr1_val = advk_readl(pcie, PCIE_ISR1_REG); 777 + isr1_mask = advk_readl(pcie, PCIE_ISR1_MASK_REG); 778 + isr1_status = isr1_val & ((~isr1_mask) & PCIE_ISR1_ALL_MASK); 779 + 780 + if (!isr0_status && !isr1_status) { 781 + advk_writel(pcie, isr0_val, PCIE_ISR0_REG); 782 + advk_writel(pcie, isr1_val, PCIE_ISR1_REG); 778 783 return; 779 784 } 780 785 781 786 /* Process MSI interrupts */ 782 - if (status & PCIE_ISR0_MSI_INT_PENDING) 787 + if (isr0_status & PCIE_ISR0_MSI_INT_PENDING) 783 788 advk_pcie_handle_msi(pcie); 784 789 785 790 /* Process legacy interrupts */ 786 791 for (i = 0; i < PCI_NUM_INTX; i++) { 787 - if (!(status & PCIE_ISR0_INTX_ASSERT(i))) 792 + if (!(isr1_status & PCIE_ISR1_INTX_ASSERT(i))) 788 793 continue; 789 794 790 - advk_writel(pcie, PCIE_ISR0_INTX_ASSERT(i), 791 - PCIE_ISR0_REG); 795 + advk_writel(pcie, PCIE_ISR1_INTX_ASSERT(i), 796 + PCIE_ISR1_REG); 792 797 793 798 virq = irq_find_mapping(pcie->irq_domain, i); 794 799 generic_handle_irq(virq);

+3 -2
drivers/pci/pci-driver.c
··· 958 958 * devices should not be touched during freeze/thaw transitions, 959 959 * however. 960 960 */ 961 - if (!dev_pm_test_driver_flags(dev, DPM_FLAG_SMART_SUSPEND)) 961 + if (!dev_pm_smart_suspend_and_suspended(dev)) { 962 962 pm_runtime_resume(dev); 963 + pci_dev->state_saved = false; 964 + } 963 965 964 - pci_dev->state_saved = false; 965 966 if (pm->freeze) { 966 967 int error; 967 968
+2 -2
drivers/pci/pci.c
··· 5273 5273 bw_avail = pcie_bandwidth_available(dev, &limiting_dev, &speed, &width); 5274 5274 5275 5275 if (bw_avail >= bw_cap) 5276 - pci_info(dev, "%u.%03u Gb/s available bandwidth (%s x%d link)\n", 5276 + pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth (%s x%d link)\n", 5277 5277 bw_cap / 1000, bw_cap % 1000, 5278 5278 PCIE_SPEED2STR(speed_cap), width_cap); 5279 5279 else 5280 - pci_info(dev, "%u.%03u Gb/s available bandwidth, limited by %s x%d link at %s (capable of %u.%03u Gb/s with %s x%d link)\n", 5280 + pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth, limited by %s x%d link at %s (capable of %u.%03u Gb/s with %s x%d link)\n", 5281 5281 bw_avail / 1000, bw_avail % 1000, 5282 5282 PCIE_SPEED2STR(speed), width, 5283 5283 limiting_dev ? pci_name(limiting_dev) : "<unknown>",
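The messages above render bandwidth values (kept in Mb/s by the PCI core's bandwidth helpers) as Gb/s via the `"%u.%03u"` idiom: integer division by 1000 for the whole part, modulo 1000 zero-padded for the fraction. A minimal sketch of that formatting:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* bw is in Mb/s; render as Gb/s with three decimals, matching the
 * "%u.%03u Gb/s" format used in the hunk above. */
static void format_gbps(unsigned int bw_mbps, char *buf, size_t len)
{
	snprintf(buf, len, "%u.%03u Gb/s", bw_mbps / 1000, bw_mbps % 1000);
}
```

The `%03u` padding matters: without it, 8 Gb/s and 8 Mb/s of headroom would both print as "8.8".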
+23 -14
drivers/rtc/rtc-opal.c
··· 57 57 58 58 static int opal_get_rtc_time(struct device *dev, struct rtc_time *tm) 59 59 { 60 - long rc = OPAL_BUSY; 60 + s64 rc = OPAL_BUSY; 61 61 int retries = 10; 62 62 u32 y_m_d; 63 63 u64 h_m_s_ms; ··· 66 66 67 67 while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) { 68 68 rc = opal_rtc_read(&__y_m_d, &__h_m_s_ms); 69 - if (rc == OPAL_BUSY_EVENT) 69 + if (rc == OPAL_BUSY_EVENT) { 70 + msleep(OPAL_BUSY_DELAY_MS); 70 71 opal_poll_events(NULL); 71 - else if (retries-- && (rc == OPAL_HARDWARE 72 - || rc == OPAL_INTERNAL_ERROR)) 73 - msleep(10); 74 - else if (rc != OPAL_BUSY && rc != OPAL_BUSY_EVENT) 75 - break; 72 + } else if (rc == OPAL_BUSY) { 73 + msleep(OPAL_BUSY_DELAY_MS); 74 + } else if (rc == OPAL_HARDWARE || rc == OPAL_INTERNAL_ERROR) { 75 + if (retries--) { 76 + msleep(10); /* Wait 10ms before retry */ 77 + rc = OPAL_BUSY; /* go around again */ 78 + } 79 + } 76 80 } 77 81 78 82 if (rc != OPAL_SUCCESS) ··· 91 87 92 88 static int opal_set_rtc_time(struct device *dev, struct rtc_time *tm) 93 89 { 94 - long rc = OPAL_BUSY; 90 + s64 rc = OPAL_BUSY; 95 91 int retries = 10; 96 92 u32 y_m_d = 0; 97 93 u64 h_m_s_ms = 0; 98 94 99 95 tm_to_opal(tm, &y_m_d, &h_m_s_ms); 96 + 100 97 while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) { 101 98 rc = opal_rtc_write(y_m_d, h_m_s_ms); 102 - if (rc == OPAL_BUSY_EVENT) 99 + if (rc == OPAL_BUSY_EVENT) { 100 + msleep(OPAL_BUSY_DELAY_MS); 103 101 opal_poll_events(NULL); 104 - else if (retries-- && (rc == OPAL_HARDWARE 105 - || rc == OPAL_INTERNAL_ERROR)) 106 - msleep(10); 107 - else if (rc != OPAL_BUSY && rc != OPAL_BUSY_EVENT) 108 - break; 102 + } else if (rc == OPAL_BUSY) { 103 + msleep(OPAL_BUSY_DELAY_MS); 104 + } else if (rc == OPAL_HARDWARE || rc == OPAL_INTERNAL_ERROR) { 105 + if (retries--) { 106 + msleep(10); /* Wait 10ms before retry */ 107 + rc = OPAL_BUSY; /* go around again */ 108 + } 109 + } 109 110 } 110 111 111 112 return rc == OPAL_SUCCESS ? 0 : -EIO;
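The rtc-opal rewrite above separates the two failure classes: `OPAL_BUSY`/`OPAL_BUSY_EVENT` mean "firmware is making progress, sleep and retry" (unbounded), while `OPAL_HARDWARE`/`OPAL_INTERNAL_ERROR` get a bounded 10-retry budget by resetting `rc` to `OPAL_BUSY`. A self-contained sketch of that retry shape with a simulated firmware call — constants match opal-api.h to the best of our knowledge, and `opal_poll_events()` handling is omitted for brevity:

```c
#include <assert.h>

/* Return codes as in opal-api.h (values believed correct). */
enum {
	OPAL_SUCCESS		= 0,
	OPAL_BUSY		= -2,
	OPAL_HARDWARE		= -6,
	OPAL_INTERNAL_ERROR	= -11,
	OPAL_BUSY_EVENT		= -12,
};

/* Simulated firmware call: returns OPAL_BUSY busy_left times, then
 * succeeds. Stands in for opal_rtc_read()/opal_rtc_write(). */
static int busy_left;
static int fake_opal_busy_then_ok(void)
{
	return busy_left-- > 0 ? OPAL_BUSY : OPAL_SUCCESS;
}

static int fake_opal_hard_error(void)
{
	return OPAL_HARDWARE;
}

/* Retry pattern mirroring the rewritten loop: BUSY retries sleep and go
 * around unconditionally; hard errors consume one of 10 retries by
 * resetting rc to OPAL_BUSY. sleeps counts the msleep() calls. */
static int opal_retry(int (*call)(void), int *sleeps)
{
	int retries = 10;
	int rc = OPAL_BUSY;

	while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
		rc = call();
		if (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) {
			(*sleeps)++;		/* msleep(OPAL_BUSY_DELAY_MS) */
		} else if (rc == OPAL_HARDWARE || rc == OPAL_INTERNAL_ERROR) {
			if (retries--) {
				(*sleeps)++;	/* msleep(10) before retry */
				rc = OPAL_BUSY;	/* go around again */
			}
		}
	}
	return rc;
}
```

The key difference from the old loop is that a hard error no longer falls through the `else if` chain and spin-retries without sleeping; each path either sleeps or exits.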
+11 -2
drivers/s390/block/dasd_alias.c
··· 592 592 int dasd_alias_add_device(struct dasd_device *device) 593 593 { 594 594 struct dasd_eckd_private *private = device->private; 595 - struct alias_lcu *lcu; 595 + __u8 uaddr = private->uid.real_unit_addr; 596 + struct alias_lcu *lcu = private->lcu; 596 597 unsigned long flags; 597 598 int rc; 598 599 599 - lcu = private->lcu; 600 600 rc = 0; 601 601 spin_lock_irqsave(&lcu->lock, flags); 602 + /* 603 + * Check if device and lcu type differ. If so, the uac data may be 604 + * outdated and needs to be updated. 605 + */ 606 + if (private->uid.type != lcu->uac->unit[uaddr].ua_type) { 607 + lcu->flags |= UPDATE_PENDING; 608 + DBF_DEV_EVENT(DBF_WARNING, device, "%s", 609 + "uid type mismatch - trigger rescan"); 610 + } 602 611 if (!(lcu->flags & UPDATE_PENDING)) { 603 612 rc = _add_device_to_lcu(lcu, device, device); 604 613 if (rc)
+11 -3
drivers/s390/cio/chsc.c
··· 452 452 453 453 static void chsc_process_sei_res_acc(struct chsc_sei_nt0_area *sei_area) 454 454 { 455 + struct channel_path *chp; 455 456 struct chp_link link; 456 457 struct chp_id chpid; 457 458 int status; ··· 465 464 chpid.id = sei_area->rsid; 466 465 /* allocate a new channel path structure, if needed */ 467 466 status = chp_get_status(chpid); 468 - if (status < 0) 469 - chp_new(chpid); 470 - else if (!status) 467 + if (!status) 471 468 return; 469 + 470 + if (status < 0) { 471 + chp_new(chpid); 472 + } else { 473 + chp = chpid_to_chp(chpid); 474 + mutex_lock(&chp->lock); 475 + chp_update_desc(chp); 476 + mutex_unlock(&chp->lock); 477 + } 472 478 memset(&link, 0, sizeof(struct chp_link)); 473 479 link.chpid = chpid; 474 480 if ((sei_area->vf & 0xc0) != 0) {
+12 -7
drivers/s390/cio/vfio_ccw_fsm.c
··· 20 20 int ccode; 21 21 __u8 lpm; 22 22 unsigned long flags; 23 + int ret; 23 24 24 25 sch = private->sch; 25 26 26 27 spin_lock_irqsave(sch->lock, flags); 27 28 private->state = VFIO_CCW_STATE_BUSY; 28 - spin_unlock_irqrestore(sch->lock, flags); 29 29 30 30 orb = cp_get_orb(&private->cp, (u32)(addr_t)sch, sch->lpm); 31 31 ··· 38 38 * Initialize device status information 39 39 */ 40 40 sch->schib.scsw.cmd.actl |= SCSW_ACTL_START_PEND; 41 - return 0; 41 + ret = 0; 42 + break; 42 43 case 1: /* Status pending */ 43 44 case 2: /* Busy */ 44 - return -EBUSY; 45 + ret = -EBUSY; 46 + break; 45 47 case 3: /* Device/path not operational */ 46 48 { 47 49 lpm = orb->cmd.lpm; ··· 53 51 sch->lpm = 0; 54 52 55 53 if (cio_update_schib(sch)) 56 - return -ENODEV; 57 - 58 - return sch->lpm ? -EACCES : -ENODEV; 54 + ret = -ENODEV; 55 + else 56 + ret = sch->lpm ? -EACCES : -ENODEV; 57 + break; 59 58 } 60 59 default: 61 - return ccode; 60 + ret = ccode; 62 61 } 62 + spin_unlock_irqrestore(sch->lock, flags); 63 + return ret; 63 64 } 64 65 65 66 static void fsm_notoper(struct vfio_ccw_private *private,
-2
drivers/s390/net/qeth_core.h
··· 557 557 enum qeth_cmd_buffer_state { 558 558 BUF_STATE_FREE, 559 559 BUF_STATE_LOCKED, 560 - BUF_STATE_PROCESSED, 561 560 }; 562 561 563 562 enum qeth_cq { ··· 600 601 struct qeth_cmd_buffer iob[QETH_CMD_BUFFER_NO]; 601 602 atomic_t irq_pending; 602 603 int io_buf_no; 603 - int buf_no; 604 604 }; 605 605 606 606 /**
+71 -87
drivers/s390/net/qeth_core_main.c
··· 706 706 qeth_put_reply(reply);
707 707 }
708 708 spin_unlock_irqrestore(&card->lock, flags);
709 - atomic_set(&card->write.irq_pending, 0);
710 709 }
711 710 EXPORT_SYMBOL_GPL(qeth_clear_ipacmd_list);
712 711
··· 817 818
818 819 for (cnt = 0; cnt < QETH_CMD_BUFFER_NO; cnt++)
819 820 qeth_release_buffer(channel, &channel->iob[cnt]);
820 - channel->buf_no = 0;
821 821 channel->io_buf_no = 0;
822 822 }
823 823 EXPORT_SYMBOL_GPL(qeth_clear_cmd_buffers);
··· 922 924 kfree(channel->iob[cnt].data);
923 925 return -ENOMEM;
924 926 }
925 - channel->buf_no = 0;
926 927 channel->io_buf_no = 0;
927 928 atomic_set(&channel->irq_pending, 0);
928 929 spin_lock_init(&channel->iob_lock);
··· 1097 1100 {
1098 1101 int rc;
1099 1102 int cstat, dstat;
1100 - struct qeth_cmd_buffer *buffer;
1103 + struct qeth_cmd_buffer *iob = NULL;
1101 1104 struct qeth_channel *channel;
1102 1105 struct qeth_card *card;
1103 - struct qeth_cmd_buffer *iob;
1104 - __u8 index;
1105 -
1106 - if (__qeth_check_irb_error(cdev, intparm, irb))
1107 - return;
1108 - cstat = irb->scsw.cmd.cstat;
1109 - dstat = irb->scsw.cmd.dstat;
1110 1106
1111 1107 card = CARD_FROM_CDEV(cdev);
1112 1108 if (!card)
··· 1117 1127 channel = &card->data;
1118 1128 QETH_CARD_TEXT(card, 5, "data");
1119 1129 }
1130 +
1131 + if (qeth_intparm_is_iob(intparm))
1132 + iob = (struct qeth_cmd_buffer *) __va((addr_t)intparm);
1133 +
1134 + if (__qeth_check_irb_error(cdev, intparm, irb)) {
1135 + /* IO was terminated, free its resources. */
1136 + if (iob)
1137 + qeth_release_buffer(iob->channel, iob);
1138 + atomic_set(&channel->irq_pending, 0);
1139 + wake_up(&card->wait_q);
1140 + return;
1141 + }
1142 +
1120 1143 atomic_set(&channel->irq_pending, 0);
1121 1144
1122 1145 if (irb->scsw.cmd.fctl & (SCSW_FCTL_CLEAR_FUNC))
··· 1153 1150 /* we don't have to handle this further */
1154 1151 intparm = 0;
1155 1152 }
1153 +
1154 + cstat = irb->scsw.cmd.cstat;
1155 + dstat = irb->scsw.cmd.dstat;
1156 +
1156 1157 if ((dstat & DEV_STAT_UNIT_EXCEP) ||
1157 1158 (dstat & DEV_STAT_UNIT_CHECK) ||
1158 1159 (cstat)) {
··· 1189 1182 channel->state = CH_STATE_RCD_DONE;
1190 1183 goto out;
1191 1184 }
1192 - if (intparm) {
1193 - buffer = (struct qeth_cmd_buffer *) __va((addr_t)intparm);
1194 - buffer->state = BUF_STATE_PROCESSED;
1195 - }
1196 1185 if (channel == &card->data)
1197 1186 return;
1198 1187 if (channel == &card->read &&
1199 1188 channel->state == CH_STATE_UP)
1200 1189 __qeth_issue_next_read(card);
1201 1190
1202 - iob = channel->iob;
1203 - index = channel->buf_no;
1204 - while (iob[index].state == BUF_STATE_PROCESSED) {
1205 - if (iob[index].callback != NULL)
1206 - iob[index].callback(channel, iob + index);
1191 + if (iob && iob->callback)
1192 + iob->callback(iob->channel, iob);
1207 1193
1208 - index = (index + 1) % QETH_CMD_BUFFER_NO;
1209 - }
1210 - channel->buf_no = index;
1211 1194 out:
1212 1195 wake_up(&card->wait_q);
1213 1196 return;
··· 1867 1870 atomic_cmpxchg(&channel->irq_pending, 0, 1) == 0);
1868 1871 QETH_DBF_TEXT(SETUP, 6, "noirqpnd");
1869 1872 spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags);
1870 - rc = ccw_device_start(channel->ccwdev,
1871 - &channel->ccw, (addr_t) iob, 0, 0);
1873 + rc = ccw_device_start_timeout(channel->ccwdev, &channel->ccw,
1874 + (addr_t) iob, 0, 0, QETH_TIMEOUT);
1872 1875 spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags);
1873 1876
1874 1877 if (rc) {
··· 1885 1888 if (channel->state != CH_STATE_UP) {
1886 1889 rc = -ETIME;
1887 1890 QETH_DBF_TEXT_(SETUP, 2, "3err%d", rc); 1888 - qeth_clear_cmd_buffers(channel); 1889 1891 } else 1890 1892 rc = 0; 1891 1893 return rc; ··· 1938 1942 atomic_cmpxchg(&channel->irq_pending, 0, 1) == 0); 1939 1943 QETH_DBF_TEXT(SETUP, 6, "noirqpnd"); 1940 1944 spin_lock_irqsave(get_ccwdev_lock(channel->ccwdev), flags); 1941 - rc = ccw_device_start(channel->ccwdev, 1942 - &channel->ccw, (addr_t) iob, 0, 0); 1945 + rc = ccw_device_start_timeout(channel->ccwdev, &channel->ccw, 1946 + (addr_t) iob, 0, 0, QETH_TIMEOUT); 1943 1947 spin_unlock_irqrestore(get_ccwdev_lock(channel->ccwdev), flags); 1944 1948 1945 1949 if (rc) { ··· 1960 1964 QETH_DBF_MESSAGE(2, "%s IDX activate timed out\n", 1961 1965 dev_name(&channel->ccwdev->dev)); 1962 1966 QETH_DBF_TEXT_(SETUP, 2, "2err%d", -ETIME); 1963 - qeth_clear_cmd_buffers(channel); 1964 1967 return -ETIME; 1965 1968 } 1966 1969 return qeth_idx_activate_get_answer(channel, idx_reply_cb); ··· 2161 2166 2162 2167 QETH_CARD_TEXT(card, 6, "noirqpnd"); 2163 2168 spin_lock_irqsave(get_ccwdev_lock(card->write.ccwdev), flags); 2164 - rc = ccw_device_start(card->write.ccwdev, &card->write.ccw, 2165 - (addr_t) iob, 0, 0); 2169 + rc = ccw_device_start_timeout(CARD_WDEV(card), &card->write.ccw, 2170 + (addr_t) iob, 0, 0, event_timeout); 2166 2171 spin_unlock_irqrestore(get_ccwdev_lock(card->write.ccwdev), flags); 2167 2172 if (rc) { 2168 2173 QETH_DBF_MESSAGE(2, "%s qeth_send_control_data: " ··· 2194 2199 } 2195 2200 } 2196 2201 2197 - if (reply->rc == -EIO) 2198 - goto error; 2199 2202 rc = reply->rc; 2200 2203 qeth_put_reply(reply); 2201 2204 return rc; ··· 2204 2211 list_del_init(&reply->list); 2205 2212 spin_unlock_irqrestore(&reply->card->lock, flags); 2206 2213 atomic_inc(&reply->received); 2207 - error: 2208 - atomic_set(&card->write.irq_pending, 0); 2209 - qeth_release_buffer(iob->channel, iob); 2210 - card->write.buf_no = (card->write.buf_no + 1) % QETH_CMD_BUFFER_NO; 2211 2214 rc = reply->rc; 2212 2215 qeth_put_reply(reply); 
2213 2216 return rc; ··· 3022 3033 return rc; 3023 3034 } 3024 3035 3025 - static int qeth_default_setadapterparms_cb(struct qeth_card *card, 3026 - struct qeth_reply *reply, unsigned long data) 3036 + static int qeth_setadpparms_inspect_rc(struct qeth_ipa_cmd *cmd) 3027 3037 { 3028 - struct qeth_ipa_cmd *cmd; 3029 - 3030 - QETH_CARD_TEXT(card, 4, "defadpcb"); 3031 - 3032 - cmd = (struct qeth_ipa_cmd *) data; 3033 - if (cmd->hdr.return_code == 0) 3038 + if (!cmd->hdr.return_code) 3034 3039 cmd->hdr.return_code = 3035 3040 cmd->data.setadapterparms.hdr.return_code; 3036 - return 0; 3041 + return cmd->hdr.return_code; 3037 3042 } 3038 3043 3039 3044 static int qeth_query_setadapterparms_cb(struct qeth_card *card, 3040 3045 struct qeth_reply *reply, unsigned long data) 3041 3046 { 3042 - struct qeth_ipa_cmd *cmd; 3047 + struct qeth_ipa_cmd *cmd = (struct qeth_ipa_cmd *) data; 3043 3048 3044 3049 QETH_CARD_TEXT(card, 3, "quyadpcb"); 3050 + if (qeth_setadpparms_inspect_rc(cmd)) 3051 + return 0; 3045 3052 3046 - cmd = (struct qeth_ipa_cmd *) data; 3047 3053 if (cmd->data.setadapterparms.data.query_cmds_supp.lan_type & 0x7f) { 3048 3054 card->info.link_type = 3049 3055 cmd->data.setadapterparms.data.query_cmds_supp.lan_type; ··· 3046 3062 } 3047 3063 card->options.adp.supported_funcs = 3048 3064 cmd->data.setadapterparms.data.query_cmds_supp.supported_cmds; 3049 - return qeth_default_setadapterparms_cb(card, reply, (unsigned long)cmd); 3065 + return 0; 3050 3066 } 3051 3067 3052 3068 static struct qeth_cmd_buffer *qeth_get_adapter_cmd(struct qeth_card *card, ··· 3138 3154 static int qeth_query_switch_attributes_cb(struct qeth_card *card, 3139 3155 struct qeth_reply *reply, unsigned long data) 3140 3156 { 3141 - struct qeth_ipa_cmd *cmd; 3142 - struct qeth_switch_info *sw_info; 3157 + struct qeth_ipa_cmd *cmd = (struct qeth_ipa_cmd *) data; 3143 3158 struct qeth_query_switch_attributes *attrs; 3159 + struct qeth_switch_info *sw_info; 3144 3160 3145 3161 
QETH_CARD_TEXT(card, 2, "qswiatcb"); 3146 - cmd = (struct qeth_ipa_cmd *) data; 3147 - sw_info = (struct qeth_switch_info *)reply->param; 3148 - if (cmd->data.setadapterparms.hdr.return_code == 0) { 3149 - attrs = &cmd->data.setadapterparms.data.query_switch_attributes; 3150 - sw_info->capabilities = attrs->capabilities; 3151 - sw_info->settings = attrs->settings; 3152 - QETH_CARD_TEXT_(card, 2, "%04x%04x", sw_info->capabilities, 3153 - sw_info->settings); 3154 - } 3155 - qeth_default_setadapterparms_cb(card, reply, (unsigned long) cmd); 3162 + if (qeth_setadpparms_inspect_rc(cmd)) 3163 + return 0; 3156 3164 3165 + sw_info = (struct qeth_switch_info *)reply->param; 3166 + attrs = &cmd->data.setadapterparms.data.query_switch_attributes; 3167 + sw_info->capabilities = attrs->capabilities; 3168 + sw_info->settings = attrs->settings; 3169 + QETH_CARD_TEXT_(card, 2, "%04x%04x", sw_info->capabilities, 3170 + sw_info->settings); 3157 3171 return 0; 3158 3172 } 3159 3173 ··· 4189 4207 static int qeth_setadp_promisc_mode_cb(struct qeth_card *card, 4190 4208 struct qeth_reply *reply, unsigned long data) 4191 4209 { 4192 - struct qeth_ipa_cmd *cmd; 4210 + struct qeth_ipa_cmd *cmd = (struct qeth_ipa_cmd *) data; 4193 4211 struct qeth_ipacmd_setadpparms *setparms; 4194 4212 4195 4213 QETH_CARD_TEXT(card, 4, "prmadpcb"); 4196 4214 4197 - cmd = (struct qeth_ipa_cmd *) data; 4198 4215 setparms = &(cmd->data.setadapterparms); 4199 - 4200 - qeth_default_setadapterparms_cb(card, reply, (unsigned long)cmd); 4201 - if (cmd->hdr.return_code) { 4216 + if (qeth_setadpparms_inspect_rc(cmd)) { 4202 4217 QETH_CARD_TEXT_(card, 4, "prmrc%x", cmd->hdr.return_code); 4203 4218 setparms->data.mode = SET_PROMISC_MODE_OFF; 4204 4219 } ··· 4265 4286 static int qeth_setadpparms_change_macaddr_cb(struct qeth_card *card, 4266 4287 struct qeth_reply *reply, unsigned long data) 4267 4288 { 4268 - struct qeth_ipa_cmd *cmd; 4289 + struct qeth_ipa_cmd *cmd = (struct qeth_ipa_cmd *) data; 4269 4290 4270 4291 
QETH_CARD_TEXT(card, 4, "chgmaccb");
4292 + if (qeth_setadpparms_inspect_rc(cmd))
4293 + return 0;
4271 4294
4272 - cmd = (struct qeth_ipa_cmd *) data;
4273 4295 if (!card->options.layer2 ||
4274 4296 !(card->info.mac_bits & QETH_LAYER2_MAC_READ)) {
4275 4297 ether_addr_copy(card->dev->dev_addr,
4276 4298 cmd->data.setadapterparms.data.change_addr.addr);
4277 4299 card->info.mac_bits |= QETH_LAYER2_MAC_READ;
4278 4300 }
4279 - qeth_default_setadapterparms_cb(card, reply, (unsigned long) cmd);
4280 4301 return 0;
4281 4302 }
··· 4307 4328 static int qeth_setadpparms_set_access_ctrl_cb(struct qeth_card *card,
4308 4329 struct qeth_reply *reply, unsigned long data)
4309 4330 {
4310 - struct qeth_ipa_cmd *cmd;
4331 + struct qeth_ipa_cmd *cmd = (struct qeth_ipa_cmd *) data;
4311 4332 struct qeth_set_access_ctrl *access_ctrl_req;
4312 4333 int fallback = *(int *)reply->param;
4313 4334
4314 4335 QETH_CARD_TEXT(card, 4, "setaccb");
4336 + if (cmd->hdr.return_code)
4337 + return 0;
4338 + qeth_setadpparms_inspect_rc(cmd);
4315 4339
4316 - cmd = (struct qeth_ipa_cmd *) data;
4317 4340 access_ctrl_req = &cmd->data.setadapterparms.data.set_access_ctrl;
4318 4341 QETH_DBF_TEXT_(SETUP, 2, "setaccb");
4319 4342 QETH_DBF_TEXT_(SETUP, 2, "%s", card->gdev->dev.kobj.name);
··· 4388 4407 card->options.isolation = card->options.prev_isolation;
4389 4408 break;
4390 4409 }
4391 - qeth_default_setadapterparms_cb(card, reply, (unsigned long) cmd);
4392 4410 return 0;
4393 4411 }
··· 4675 4695 static int qeth_setadpparms_query_oat_cb(struct qeth_card *card,
4676 4696 struct qeth_reply *reply, unsigned long data)
4677 4697 {
4678 - struct qeth_ipa_cmd *cmd;
4698 + struct qeth_ipa_cmd *cmd = (struct qeth_ipa_cmd *)data;
4679 4699 struct qeth_qoat_priv *priv;
4680 4700 char *resdata;
4681 4701 int resdatalen;
4682 4702
4683 4703 QETH_CARD_TEXT(card, 3, "qoatcb");
4704 + if (qeth_setadpparms_inspect_rc(cmd))
4705 + return 0;
4684 4706
4685 - cmd = (struct qeth_ipa_cmd *)data;
4686 4707 priv = (struct qeth_qoat_priv *)reply->param;
4687 4708 resdatalen = cmd->data.setadapterparms.hdr.cmdlength;
4688 4709 resdata = (char *)data + 28;
··· 4777 4796 static int qeth_query_card_info_cb(struct qeth_card *card,
4778 4797 struct qeth_reply *reply, unsigned long data)
4779 4798 {
4780 - struct qeth_ipa_cmd *cmd;
4799 + struct carrier_info *carrier_info = (struct carrier_info *)reply->param;
4800 + struct qeth_ipa_cmd *cmd = (struct qeth_ipa_cmd *)data;
4781 4801 struct qeth_query_card_info *card_info;
4782 - struct carrier_info *carrier_info;
4783 4802
4784 4803 QETH_CARD_TEXT(card, 2, "qcrdincb");
4785 - carrier_info = (struct carrier_info *)reply->param;
4786 - cmd = (struct qeth_ipa_cmd *)data;
4787 - card_info = &cmd->data.setadapterparms.data.card_info;
4788 - if (cmd->data.setadapterparms.hdr.return_code == 0) {
4789 - carrier_info->card_type = card_info->card_type;
4790 - carrier_info->port_mode = card_info->port_mode;
4791 - carrier_info->port_speed = card_info->port_speed;
4792 - }
4804 + if (qeth_setadpparms_inspect_rc(cmd))
4805 + return 0;
4793 4806
4794 - qeth_default_setadapterparms_cb(card, reply, (unsigned long) cmd);
4807 + card_info = &cmd->data.setadapterparms.data.card_info;
4808 + carrier_info->card_type = card_info->card_type;
4809 + carrier_info->port_mode = card_info->port_mode;
4810 + carrier_info->port_speed = card_info->port_speed;
4795 4811 return 0;
4796 4812 }
··· 4835 4857 goto out;
4836 4858 }
4837 4859
4838 - ccw_device_get_id(CARD_DDEV(card), &id);
4860 + ccw_device_get_id(CARD_RDEV(card), &id);
4839 4861 request->resp_buf_len = sizeof(*response);
4840 4862 request->resp_version = DIAG26C_VERSION2;
4841 4863 request->op_code = DIAG26C_GET_MAC;
··· 6541 6563 mutex_init(&qeth_mod_mutex);
6542 6564
6543 6565 qeth_wq = create_singlethread_workqueue("qeth_wq");
6566 + if (!qeth_wq) {
6567 + rc = -ENOMEM;
6568 + goto out_err;
6569 + }
6544 6570
6545 6571 rc = qeth_register_dbf_views();
6546 6572 if (rc)
6547 - goto out_err;
6573 + goto dbf_err;
6548 6574 qeth_core_root_dev = root_device_register("qeth");
6549 6575 rc = PTR_ERR_OR_ZERO(qeth_core_root_dev);
6550 6576 if (rc)
··· 6585 6603 root_device_unregister(qeth_core_root_dev);
6586 6604 register_err:
6587 6605 qeth_unregister_dbf_views();
6606 + dbf_err:
6607 + destroy_workqueue(qeth_wq);
6588 6608 out_err:
6589 6609 pr_err("Initializing the qeth device driver failed\n");
6590 6610 return rc;
+12
drivers/s390/net/qeth_core_mpc.h
··· 35 35 #define QETH_HALT_CHANNEL_PARM -11 36 36 #define QETH_RCD_PARM -12 37 37 38 + static inline bool qeth_intparm_is_iob(unsigned long intparm) 39 + { 40 + switch (intparm) { 41 + case QETH_CLEAR_CHANNEL_PARM: 42 + case QETH_HALT_CHANNEL_PARM: 43 + case QETH_RCD_PARM: 44 + case 0: 45 + return false; 46 + } 47 + return true; 48 + } 49 + 38 50 /*****************************************************************************/ 39 51 /* IP Assist related definitions */ 40 52 /*****************************************************************************/
+33 -26
drivers/s390/net/qeth_l2_main.c
··· 121 121 QETH_CARD_TEXT(card, 2, "L2Setmac");
122 122 rc = qeth_l2_send_setdelmac(card, mac, IPA_CMD_SETVMAC);
123 123 if (rc == 0) {
124 - card->info.mac_bits |= QETH_LAYER2_MAC_REGISTERED;
125 - ether_addr_copy(card->dev->dev_addr, mac);
126 124 dev_info(&card->gdev->dev,
127 - "MAC address %pM successfully registered on device %s\n",
128 - card->dev->dev_addr, card->dev->name);
125 + "MAC address %pM successfully registered on device %s\n",
126 + mac, card->dev->name);
129 127 } else {
130 - card->info.mac_bits &= ~QETH_LAYER2_MAC_REGISTERED;
131 128 switch (rc) {
132 129 case -EEXIST:
133 130 dev_warn(&card->gdev->dev,
··· 136 139 break;
137 140 }
138 141 }
139 - return rc;
140 - }
141 -
142 - static int qeth_l2_send_delmac(struct qeth_card *card, __u8 *mac)
143 - {
144 - int rc;
145 -
146 - QETH_CARD_TEXT(card, 2, "L2Delmac");
147 - if (!(card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED))
148 - return 0;
149 - rc = qeth_l2_send_setdelmac(card, mac, IPA_CMD_DELVMAC);
150 - if (rc == 0)
151 - card->info.mac_bits &= ~QETH_LAYER2_MAC_REGISTERED;
152 142 return rc;
153 143 }
··· 503 519 {
504 520 struct sockaddr *addr = p;
505 521 struct qeth_card *card = dev->ml_priv;
522 + u8 old_addr[ETH_ALEN];
506 523 int rc = 0;
507 524
508 525 QETH_CARD_TEXT(card, 3, "setmac");
··· 515 530 return -EOPNOTSUPP;
516 531 }
517 532 QETH_CARD_HEX(card, 3, addr->sa_data, ETH_ALEN);
533 + if (!is_valid_ether_addr(addr->sa_data))
534 + return -EADDRNOTAVAIL;
535 +
518 536 if (qeth_wait_for_threads(card, QETH_RECOVER_THREAD)) {
519 537 QETH_CARD_TEXT(card, 3, "setmcREC");
520 538 return -ERESTARTSYS;
521 539 }
522 - rc = qeth_l2_send_delmac(card, &card->dev->dev_addr[0]);
523 - if (!rc || (rc == -ENOENT))
524 - rc = qeth_l2_send_setmac(card, addr->sa_data);
525 - return rc ? -EINVAL : 0;
540 +
541 + if (!qeth_card_hw_is_reachable(card)) {
542 + ether_addr_copy(dev->dev_addr, addr->sa_data);
543 + return 0;
544 + }
545 +
546 + /* don't register the same address twice */
547 + if (ether_addr_equal_64bits(dev->dev_addr, addr->sa_data) &&
548 + (card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED))
549 + return 0;
550 +
551 + /* add the new address, switch over, drop the old */
552 + rc = qeth_l2_send_setmac(card, addr->sa_data);
553 + if (rc)
554 + return rc;
555 + ether_addr_copy(old_addr, dev->dev_addr);
556 + ether_addr_copy(dev->dev_addr, addr->sa_data);
557 +
558 + if (card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED)
559 + qeth_l2_remove_mac(card, old_addr);
560 + card->info.mac_bits |= QETH_LAYER2_MAC_REGISTERED;
561 + return 0;
526 562 }
527 563
528 564 static void qeth_promisc_to_bridge(struct qeth_card *card)
··· 1073 1067 goto out_remove;
1074 1068 }
1075 1069
1076 - if (card->info.type != QETH_CARD_TYPE_OSN)
1077 - qeth_l2_send_setmac(card, &card->dev->dev_addr[0]);
1070 + if (card->info.type != QETH_CARD_TYPE_OSN &&
1071 + !qeth_l2_send_setmac(card, card->dev->dev_addr))
1072 + card->info.mac_bits |= QETH_LAYER2_MAC_REGISTERED;
1078 1073
1079 1074 if (qeth_is_diagass_supported(card, QETH_DIAGS_CMD_TRAP)) {
1080 1075 if (card->info.hwtrap &&
··· 1345 1338 qeth_prepare_control_data(card, len, iob);
1346 1339 QETH_CARD_TEXT(card, 6, "osnoirqp");
1347 1340 spin_lock_irqsave(get_ccwdev_lock(card->write.ccwdev), flags);
1348 - rc = ccw_device_start(card->write.ccwdev, &card->write.ccw,
1349 - (addr_t) iob, 0, 0);
1341 + rc = ccw_device_start_timeout(CARD_WDEV(card), &card->write.ccw,
1342 + (addr_t) iob, 0, 0, QETH_IPA_TIMEOUT);
1350 1343 spin_unlock_irqrestore(get_ccwdev_lock(card->write.ccwdev), flags);
1351 1344 if (rc) {
1352 1345 QETH_DBF_MESSAGE(2, "qeth_osn_send_control_data: "
+1 -1
drivers/scsi/fnic/fnic_trace.c
··· 296 296 "Number of Abort FW Timeouts: %lld\n" 297 297 "Number of Abort IO NOT Found: %lld\n" 298 298 299 - "Abord issued times: \n" 299 + "Abort issued times: \n" 300 300 " < 6 sec : %lld\n" 301 301 " 6 sec - 20 sec : %lld\n" 302 302 " 20 sec - 30 sec : %lld\n"
+3 -3
drivers/scsi/megaraid/megaraid_sas_fusion.c
··· 1124 1124 goto fail_fw_init; 1125 1125 } 1126 1126 1127 - ret = 0; 1127 + return 0; 1128 1128 1129 1129 fail_fw_init: 1130 1130 dev_err(&instance->pdev->dev, 1131 - "Init cmd return status %s for SCSI host %d\n", 1132 - ret ? "FAILED" : "SUCCESS", instance->host->host_no); 1131 + "Init cmd return status FAILED for SCSI host %d\n", 1132 + instance->host->host_no); 1133 1133 1134 1134 return ret; 1135 1135 }
+24 -9
drivers/scsi/scsi_debug.c
··· 234 234 #define F_INV_OP 0x200 235 235 #define F_FAKE_RW 0x400 236 236 #define F_M_ACCESS 0x800 /* media access */ 237 - #define F_LONG_DELAY 0x1000 237 + #define F_SSU_DELAY 0x1000 238 + #define F_SYNC_DELAY 0x2000 238 239 239 240 #define FF_RESPOND (F_RL_WLUN_OK | F_SKIP_UA | F_DELAY_OVERR) 240 241 #define FF_MEDIA_IO (F_M_ACCESS | F_FAKE_RW) 241 242 #define FF_SA (F_SA_HIGH | F_SA_LOW) 243 + #define F_LONG_DELAY (F_SSU_DELAY | F_SYNC_DELAY) 242 244 243 245 #define SDEBUG_MAX_PARTS 4 244 246 ··· 512 510 }; 513 511 514 512 static const struct opcode_info_t sync_cache_iarr[] = { 515 - {0, 0x91, 0, F_LONG_DELAY | F_M_ACCESS, resp_sync_cache, NULL, 513 + {0, 0x91, 0, F_SYNC_DELAY | F_M_ACCESS, resp_sync_cache, NULL, 516 514 {16, 0x6, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 517 515 0xff, 0xff, 0xff, 0xff, 0x3f, 0xc7} }, /* SYNC_CACHE (16) */ 518 516 }; ··· 555 553 resp_write_dt0, write_iarr, /* WRITE(16) */ 556 554 {16, 0xfa, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 557 555 0xff, 0xff, 0xff, 0xff, 0xff, 0xc7} }, 558 - {0, 0x1b, 0, F_LONG_DELAY, resp_start_stop, NULL,/* START STOP UNIT */ 556 + {0, 0x1b, 0, F_SSU_DELAY, resp_start_stop, NULL,/* START STOP UNIT */ 559 557 {6, 0x1, 0, 0xf, 0xf7, 0xc7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0} }, 560 558 {ARRAY_SIZE(sa_in_16_iarr), 0x9e, 0x10, F_SA_LOW | F_D_IN, 561 559 resp_readcap16, sa_in_16_iarr, /* SA_IN(16), READ CAPACITY(16) */ ··· 608 606 resp_write_same_10, write_same_iarr, /* WRITE SAME(10) */ 609 607 {10, 0xff, 0xff, 0xff, 0xff, 0xff, 0x3f, 0xff, 0xff, 0xc7, 0, 610 608 0, 0, 0, 0, 0} }, 611 - {ARRAY_SIZE(sync_cache_iarr), 0x35, 0, F_LONG_DELAY | F_M_ACCESS, 609 + {ARRAY_SIZE(sync_cache_iarr), 0x35, 0, F_SYNC_DELAY | F_M_ACCESS, 612 610 resp_sync_cache, sync_cache_iarr, 613 611 {10, 0x7, 0xff, 0xff, 0xff, 0xff, 0x3f, 0xff, 0xff, 0xc7, 0, 0, 614 612 0, 0, 0, 0} }, /* SYNC_CACHE (10) */ ··· 669 667 static bool sdebug_any_injecting_opt; 670 668 static bool sdebug_verbose; 671 669 static bool have_dif_prot; 
670 + static bool write_since_sync;
672 671 static bool sdebug_statistics = DEF_STATISTICS;
673 672
674 673 static unsigned int sdebug_store_sectors;
··· 1610 1607 {
1611 1608 unsigned char *cmd = scp->cmnd;
1612 1609 int power_cond, stop;
1610 + bool changing;
1613 1611
1614 1612 power_cond = (cmd[4] & 0xf0) >> 4;
1615 1613 if (power_cond) {
··· 1618 1614 return check_condition_result;
1619 1615 }
1620 1616 stop = !(cmd[4] & 1);
1617 + changing = atomic_read(&devip->stopped) == !stop;
1621 1618 atomic_xchg(&devip->stopped, stop);
1622 - return (cmd[1] & 0x1) ? SDEG_RES_IMMED_MASK : 0; /* check IMMED bit */
1619 + if (!changing || cmd[1] & 0x1) /* state unchanged or IMMED set */
1620 + return SDEG_RES_IMMED_MASK;
1621 + else
1622 + return 0;
1623 1623 }
1624 1624
1625 1625 static sector_t get_sdebug_capacity(void)
··· 2481 2473 if (do_write) {
2482 2474 sdb = scsi_out(scmd);
2483 2475 dir = DMA_TO_DEVICE;
2476 + write_since_sync = true;
2484 2477 } else {
2485 2478 sdb = scsi_in(scmd);
2486 2479 dir = DMA_FROM_DEVICE;
··· 3592 3583 static int resp_sync_cache(struct scsi_cmnd *scp,
3593 3584 struct sdebug_dev_info *devip)
3594 3585 {
3586 + int res = 0;
3595 3587 u64 lba;
3596 3588 u32 num_blocks;
3597 3589 u8 *cmd = scp->cmnd;
··· 3608 3598 mk_sense_buffer(scp, ILLEGAL_REQUEST, LBA_OUT_OF_RANGE, 0);
3609 3599 return check_condition_result;
3610 3600 }
3611 - return (cmd[1] & 0x2) ? SDEG_RES_IMMED_MASK : 0; /* check IMMED bit */
3601 + if (!write_since_sync || cmd[1] & 0x2)
3602 + res = SDEG_RES_IMMED_MASK;
3603 + else /* delay if write_since_sync and IMMED clear */
3604 + write_since_sync = false;
3605 + return res;
3612 3606 }
3613 3607
3614 3608 #define RL_BUCKET_ELEMS 8
··· 5791 5777 return schedule_resp(scp, devip, errsts, pfp, 0, 0);
5792 5778 else if ((sdebug_jdelay || sdebug_ndelay) && (flags & F_LONG_DELAY)) {
5793 5779 /*
5794 - * If any delay is active, want F_LONG_DELAY to be at least 1
5780 + * If any delay is active, for F_SSU_DELAY want at least 1
5795 5781 * second and if sdebug_jdelay>0 want a long delay of that
5796 - * many seconds.
5782 + * many seconds; for F_SYNC_DELAY want 1/20 of that.
5797 5783 */
5798 5784 int jdelay = (sdebug_jdelay < 2) ? 1 : sdebug_jdelay;
5785 + int denom = (flags & F_SYNC_DELAY) ? 20 : 1;
5799 5786
5800 - jdelay = mult_frac(USER_HZ * jdelay, HZ, USER_HZ);
5787 + jdelay = mult_frac(USER_HZ * jdelay, HZ, denom * USER_HZ);
5801 5788 return schedule_resp(scp, devip, errsts, pfp, jdelay, 0);
5802 5789 } else
5803 5790 return schedule_resp(scp, devip, errsts, pfp, sdebug_jdelay,
+18 -11
drivers/scsi/scsi_transport_iscsi.c
··· 2322 2322 return nlmsg_multicast(nls, skb, 0, group, gfp); 2323 2323 } 2324 2324 2325 + static int 2326 + iscsi_unicast_skb(struct sk_buff *skb, u32 portid) 2327 + { 2328 + return nlmsg_unicast(nls, skb, portid); 2329 + } 2330 + 2325 2331 int iscsi_recv_pdu(struct iscsi_cls_conn *conn, struct iscsi_hdr *hdr, 2326 2332 char *data, uint32_t data_size) 2327 2333 { ··· 2530 2524 EXPORT_SYMBOL_GPL(iscsi_ping_comp_event); 2531 2525 2532 2526 static int 2533 - iscsi_if_send_reply(uint32_t group, int seq, int type, int done, int multi, 2534 - void *payload, int size) 2527 + iscsi_if_send_reply(u32 portid, int type, void *payload, int size) 2535 2528 { 2536 2529 struct sk_buff *skb; 2537 2530 struct nlmsghdr *nlh; 2538 2531 int len = nlmsg_total_size(size); 2539 - int flags = multi ? NLM_F_MULTI : 0; 2540 - int t = done ? NLMSG_DONE : type; 2541 2532 2542 2533 skb = alloc_skb(len, GFP_ATOMIC); 2543 2534 if (!skb) { ··· 2542 2539 return -ENOMEM; 2543 2540 } 2544 2541 2545 - nlh = __nlmsg_put(skb, 0, 0, t, (len - sizeof(*nlh)), 0); 2546 - nlh->nlmsg_flags = flags; 2542 + nlh = __nlmsg_put(skb, 0, 0, type, (len - sizeof(*nlh)), 0); 2547 2543 memcpy(nlmsg_data(nlh), payload, size); 2548 - return iscsi_multicast_skb(skb, group, GFP_ATOMIC); 2544 + return iscsi_unicast_skb(skb, portid); 2549 2545 } 2550 2546 2551 2547 static int ··· 3472 3470 iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group) 3473 3471 { 3474 3472 int err = 0; 3473 + u32 portid; 3475 3474 struct iscsi_uevent *ev = nlmsg_data(nlh); 3476 3475 struct iscsi_transport *transport = NULL; 3477 3476 struct iscsi_internal *priv; ··· 3493 3490 if (!try_module_get(transport->owner)) 3494 3491 return -EINVAL; 3495 3492 3493 + portid = NETLINK_CB(skb).portid; 3494 + 3496 3495 switch (nlh->nlmsg_type) { 3497 3496 case ISCSI_UEVENT_CREATE_SESSION: 3498 3497 err = iscsi_if_create_session(priv, ep, ev, 3499 - NETLINK_CB(skb).portid, 3498 + portid, 3500 3499 ev->u.c_session.initial_cmdsn, 3501 3500 
ev->u.c_session.cmds_max, 3502 3501 ev->u.c_session.queue_depth); ··· 3511 3506 } 3512 3507 3513 3508 err = iscsi_if_create_session(priv, ep, ev, 3514 - NETLINK_CB(skb).portid, 3509 + portid, 3515 3510 ev->u.c_bound_session.initial_cmdsn, 3516 3511 ev->u.c_bound_session.cmds_max, 3517 3512 ev->u.c_bound_session.queue_depth); ··· 3669 3664 static void 3670 3665 iscsi_if_rx(struct sk_buff *skb) 3671 3666 { 3667 + u32 portid = NETLINK_CB(skb).portid; 3668 + 3672 3669 mutex_lock(&rx_queue_mutex); 3673 3670 while (skb->len >= NLMSG_HDRLEN) { 3674 3671 int err; ··· 3706 3699 break; 3707 3700 if (ev->type == ISCSI_UEVENT_GET_CHAP && !err) 3708 3701 break; 3709 - err = iscsi_if_send_reply(group, nlh->nlmsg_seq, 3710 - nlh->nlmsg_type, 0, 0, ev, sizeof(*ev)); 3702 + err = iscsi_if_send_reply(portid, nlh->nlmsg_type, 3703 + ev, sizeof(*ev)); 3711 3704 } while (err < 0 && err != -ECONNREFUSED && err != -ESRCH); 3712 3705 skb_pull(skb, rlen); 3713 3706 }
+2
drivers/scsi/sd.c
··· 2121 2121 break; /* standby */ 2122 2122 if (sshdr.asc == 4 && sshdr.ascq == 0xc) 2123 2123 break; /* unavailable */ 2124 + if (sshdr.asc == 4 && sshdr.ascq == 0x1b) 2125 + break; /* sanitize in progress */ 2124 2126 /* 2125 2127 * Issue command to spin up drive when not ready 2126 2128 */
+80 -56
drivers/scsi/sd_zbc.c
··· 400 400 * 401 401 * Check that all zones of the device are equal. The last zone can however 402 402 * be smaller. The zone size must also be a power of two number of LBAs. 403 + * 404 + * Returns the zone size in bytes upon success or an error code upon failure. 403 405 */ 404 - static int sd_zbc_check_zone_size(struct scsi_disk *sdkp) 406 + static s64 sd_zbc_check_zone_size(struct scsi_disk *sdkp) 405 407 { 406 408 u64 zone_blocks = 0; 407 409 sector_t block = 0; ··· 413 411 unsigned int list_length; 414 412 int ret; 415 413 u8 same; 416 - 417 - sdkp->zone_blocks = 0; 418 414 419 415 /* Get a buffer */ 420 416 buf = kmalloc(SD_ZBC_BUF_SIZE, GFP_KERNEL); ··· 445 445 446 446 /* Parse zone descriptors */ 447 447 while (rec < buf + buf_len) { 448 - zone_blocks = get_unaligned_be64(&rec[8]); 449 - if (sdkp->zone_blocks == 0) { 450 - sdkp->zone_blocks = zone_blocks; 451 - } else if (zone_blocks != sdkp->zone_blocks && 452 - (block + zone_blocks < sdkp->capacity 453 - || zone_blocks > sdkp->zone_blocks)) { 454 - zone_blocks = 0; 448 + u64 this_zone_blocks = get_unaligned_be64(&rec[8]); 449 + 450 + if (zone_blocks == 0) { 451 + zone_blocks = this_zone_blocks; 452 + } else if (this_zone_blocks != zone_blocks && 453 + (block + this_zone_blocks < sdkp->capacity 454 + || this_zone_blocks > zone_blocks)) { 455 + this_zone_blocks = 0; 455 456 goto out; 456 457 } 457 - block += zone_blocks; 458 + block += this_zone_blocks; 458 459 rec += 64; 459 460 } 460 461 ··· 467 466 } 468 467 469 468 } while (block < sdkp->capacity); 470 - 471 - zone_blocks = sdkp->zone_blocks; 472 469 473 470 out: 474 471 if (!zone_blocks) { ··· 487 488 "Zone size too large\n"); 488 489 ret = -ENODEV; 489 490 } else { 490 - sdkp->zone_blocks = zone_blocks; 491 - sdkp->zone_shift = ilog2(zone_blocks); 491 + ret = zone_blocks; 492 492 } 493 493 494 494 out_free: ··· 498 500 499 501 /** 500 502 * sd_zbc_alloc_zone_bitmap - Allocate a zone bitmap (one bit per zone). 
501 - * @sdkp: The disk of the bitmap 503 + * @nr_zones: Number of zones to allocate space for. 504 + * @numa_node: NUMA node to allocate the memory from. 502 505 */ 503 - static inline unsigned long *sd_zbc_alloc_zone_bitmap(struct scsi_disk *sdkp) 506 + static inline unsigned long * 507 + sd_zbc_alloc_zone_bitmap(u32 nr_zones, int numa_node) 504 508 { 505 - struct request_queue *q = sdkp->disk->queue; 506 - 507 - return kzalloc_node(BITS_TO_LONGS(sdkp->nr_zones) 508 - * sizeof(unsigned long), 509 - GFP_KERNEL, q->node); 509 + return kzalloc_node(BITS_TO_LONGS(nr_zones) * sizeof(unsigned long), 510 + GFP_KERNEL, numa_node); 510 511 } 511 512 512 513 /** ··· 513 516 * @sdkp: disk used 514 517 * @buf: report reply buffer 515 518 * @buflen: length of @buf 519 + * @zone_shift: logarithm base 2 of the number of blocks in a zone 516 520 * @seq_zones_bitmap: bitmap of sequential zones to set 517 521 * 518 522 * Parse reported zone descriptors in @buf to identify sequential zones and ··· 523 525 * Return the LBA after the last zone reported. 524 526 */ 525 527 static sector_t sd_zbc_get_seq_zones(struct scsi_disk *sdkp, unsigned char *buf, 526 - unsigned int buflen, 528 + unsigned int buflen, u32 zone_shift, 527 529 unsigned long *seq_zones_bitmap) 528 530 { 529 531 sector_t lba, next_lba = sdkp->capacity; ··· 542 544 if (type != ZBC_ZONE_TYPE_CONV && 543 545 cond != ZBC_ZONE_COND_READONLY && 544 546 cond != ZBC_ZONE_COND_OFFLINE) 545 - set_bit(lba >> sdkp->zone_shift, seq_zones_bitmap); 547 + set_bit(lba >> zone_shift, seq_zones_bitmap); 546 548 next_lba = lba + get_unaligned_be64(&rec[8]); 547 549 rec += 64; 548 550 } ··· 551 553 } 552 554 553 555 /** 554 - * sd_zbc_setup_seq_zones_bitmap - Initialize the disk seq zone bitmap. 556 + * sd_zbc_setup_seq_zones_bitmap - Initialize a seq zone bitmap. 
555 557 * @sdkp: target disk 558 + * @zone_shift: logarithm base 2 of the number of blocks in a zone 559 + * @nr_zones: number of zones to set up a seq zone bitmap for 556 560 * 557 561 * Allocate a zone bitmap and initialize it by identifying sequential zones. 558 562 */ 559 - static int sd_zbc_setup_seq_zones_bitmap(struct scsi_disk *sdkp) 563 + static unsigned long * 564 + sd_zbc_setup_seq_zones_bitmap(struct scsi_disk *sdkp, u32 zone_shift, 565 + u32 nr_zones) 560 566 { 561 567 struct request_queue *q = sdkp->disk->queue; 562 568 unsigned long *seq_zones_bitmap; ··· 568 566 unsigned char *buf; 569 567 int ret = -ENOMEM; 570 568 571 - seq_zones_bitmap = sd_zbc_alloc_zone_bitmap(sdkp); 569 + seq_zones_bitmap = sd_zbc_alloc_zone_bitmap(nr_zones, q->node); 572 570 if (!seq_zones_bitmap) 573 - return -ENOMEM; 571 + return ERR_PTR(-ENOMEM); 574 572 575 573 buf = kmalloc(SD_ZBC_BUF_SIZE, GFP_KERNEL); 576 574 if (!buf) ··· 581 579 if (ret) 582 580 goto out; 583 581 lba = sd_zbc_get_seq_zones(sdkp, buf, SD_ZBC_BUF_SIZE, 584 - seq_zones_bitmap); 582 + zone_shift, seq_zones_bitmap); 585 583 } 586 584 587 585 if (lba != sdkp->capacity) { ··· 593 591 kfree(buf); 594 592 if (ret) { 595 593 kfree(seq_zones_bitmap); 596 - return ret; 594 + return ERR_PTR(ret); 597 595 } 598 - 599 - q->seq_zones_bitmap = seq_zones_bitmap; 600 - 601 - return 0; 596 + return seq_zones_bitmap; 602 597 } 603 598 604 599 static void sd_zbc_cleanup(struct scsi_disk *sdkp) ··· 611 612 q->nr_zones = 0; 612 613 } 613 614 614 - static int sd_zbc_setup(struct scsi_disk *sdkp) 615 + static int sd_zbc_setup(struct scsi_disk *sdkp, u32 zone_blocks) 615 616 { 616 617 struct request_queue *q = sdkp->disk->queue; 618 + u32 zone_shift = ilog2(zone_blocks); 619 + u32 nr_zones; 617 620 int ret; 618 621 619 - /* READ16/WRITE16 is mandatory for ZBC disks */ 620 - sdkp->device->use_16_for_rw = 1; 621 - sdkp->device->use_10_for_rw = 0; 622 - 623 622 /* chunk_sectors indicates the zone size */ 624 - 
blk_queue_chunk_sectors(sdkp->disk->queue, 625 - logical_to_sectors(sdkp->device, sdkp->zone_blocks)); 626 - sdkp->nr_zones = 627 - round_up(sdkp->capacity, sdkp->zone_blocks) >> sdkp->zone_shift; 623 + blk_queue_chunk_sectors(q, 624 + logical_to_sectors(sdkp->device, zone_blocks)); 625 + nr_zones = round_up(sdkp->capacity, zone_blocks) >> zone_shift; 628 626 629 627 /* 630 628 * Initialize the device request queue information if the number 631 629 * of zones changed. 632 630 */ 633 - if (sdkp->nr_zones != q->nr_zones) { 631 + if (nr_zones != sdkp->nr_zones || nr_zones != q->nr_zones) { 632 + unsigned long *seq_zones_wlock = NULL, *seq_zones_bitmap = NULL; 633 + size_t zone_bitmap_size; 634 634 635 - sd_zbc_cleanup(sdkp); 636 - 637 - q->nr_zones = sdkp->nr_zones; 638 - if (sdkp->nr_zones) { 639 - q->seq_zones_wlock = sd_zbc_alloc_zone_bitmap(sdkp); 640 - if (!q->seq_zones_wlock) { 635 + if (nr_zones) { 636 + seq_zones_wlock = sd_zbc_alloc_zone_bitmap(nr_zones, 637 + q->node); 638 + if (!seq_zones_wlock) { 641 639 ret = -ENOMEM; 642 640 goto err; 643 641 } 644 642 645 - ret = sd_zbc_setup_seq_zones_bitmap(sdkp); 646 - if (ret) { 647 - sd_zbc_cleanup(sdkp); 643 + seq_zones_bitmap = sd_zbc_setup_seq_zones_bitmap(sdkp, 644 + zone_shift, nr_zones); 645 + if (IS_ERR(seq_zones_bitmap)) { 646 + ret = PTR_ERR(seq_zones_bitmap); 647 + kfree(seq_zones_wlock); 648 648 goto err; 649 649 } 650 650 } 651 + zone_bitmap_size = BITS_TO_LONGS(nr_zones) * 652 + sizeof(unsigned long); 653 + blk_mq_freeze_queue(q); 654 + if (q->nr_zones != nr_zones) { 655 + /* READ16/WRITE16 is mandatory for ZBC disks */ 656 + sdkp->device->use_16_for_rw = 1; 657 + sdkp->device->use_10_for_rw = 0; 651 658 659 + sdkp->zone_blocks = zone_blocks; 660 + sdkp->zone_shift = zone_shift; 661 + sdkp->nr_zones = nr_zones; 662 + q->nr_zones = nr_zones; 663 + swap(q->seq_zones_wlock, seq_zones_wlock); 664 + swap(q->seq_zones_bitmap, seq_zones_bitmap); 665 + } else if (memcmp(q->seq_zones_bitmap, seq_zones_bitmap, 
666 + zone_bitmap_size) != 0) { 667 + memcpy(q->seq_zones_bitmap, seq_zones_bitmap, 668 + zone_bitmap_size); 669 + } 670 + blk_mq_unfreeze_queue(q); 671 + kfree(seq_zones_wlock); 672 + kfree(seq_zones_bitmap); 652 673 } 653 674 654 675 return 0; ··· 680 661 681 662 int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buf) 682 663 { 664 + int64_t zone_blocks; 683 665 int ret; 684 666 685 667 if (!sd_is_zoned(sdkp)) ··· 717 697 * Check zone size: only devices with a constant zone size (except 718 698 * an eventual last runt zone) that is a power of 2 are supported. 719 699 */ 720 - ret = sd_zbc_check_zone_size(sdkp); 721 - if (ret) 700 + zone_blocks = sd_zbc_check_zone_size(sdkp); 701 + ret = -EFBIG; 702 + if (zone_blocks != (u32)zone_blocks) 703 + goto err; 704 + ret = zone_blocks; 705 + if (ret < 0) 722 706 goto err; 723 707 724 708 /* The drive satisfies the kernel restrictions: set it up */ 725 - ret = sd_zbc_setup(sdkp); 709 + ret = sd_zbc_setup(sdkp, zone_blocks); 726 710 if (ret) 727 711 goto err; 728 712
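Editor's note: the sd_zbc.c refactor above changes `sd_zbc_check_zone_size()` to return a value-or-errno packed into a signed 64-bit integer, and the caller rejects sizes that do not fit the 32-bit `zone_blocks` field with a round-trip cast. A minimal userspace sketch of that pattern (names like `check_zone_size` are illustrative, and the error check is ordered error-first for clarity rather than matching the kernel code line for line):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Illustrative stand-in for sd_zbc_check_zone_size(): returns the zone
 * size in blocks on success, or a negative errno on failure, packed in
 * one signed 64-bit value. */
static int64_t check_zone_size(uint64_t reported_blocks)
{
	if (reported_blocks == 0)
		return -ENODEV;		/* no valid zone size found */
	return (int64_t)reported_blocks;
}

/* Caller-side validation in the spirit of sd_zbc_read_zones(): the
 * round-trip cast to u32 detects values too large for a 32-bit field. */
static int consume_zone_size(uint64_t reported_blocks, uint32_t *out)
{
	int64_t zone_blocks = check_zone_size(reported_blocks);

	if (zone_blocks < 0)
		return (int)zone_blocks;	/* propagate the errno */
	if (zone_blocks != (uint32_t)zone_blocks)
		return -EFBIG;			/* does not fit in 32 bits */
	*out = (uint32_t)zone_blocks;
	return 0;
}
```

The single return channel avoids the output parameter the old `int` version needed, at the cost of a range check at the call site.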
+40
drivers/scsi/ufs/ufshcd.c
··· 276 276 *val = ' '; 277 277 } 278 278 279 + static void ufshcd_add_cmd_upiu_trace(struct ufs_hba *hba, unsigned int tag, 280 + const char *str) 281 + { 282 + struct utp_upiu_req *rq = hba->lrb[tag].ucd_req_ptr; 283 + 284 + trace_ufshcd_upiu(dev_name(hba->dev), str, &rq->header, &rq->sc.cdb); 285 + } 286 + 287 + static void ufshcd_add_query_upiu_trace(struct ufs_hba *hba, unsigned int tag, 288 + const char *str) 289 + { 290 + struct utp_upiu_req *rq = hba->lrb[tag].ucd_req_ptr; 291 + 292 + trace_ufshcd_upiu(dev_name(hba->dev), str, &rq->header, &rq->qr); 293 + } 294 + 295 + static void ufshcd_add_tm_upiu_trace(struct ufs_hba *hba, unsigned int tag, 296 + const char *str) 297 + { 298 + struct utp_task_req_desc *descp; 299 + struct utp_upiu_task_req *task_req; 300 + int off = (int)tag - hba->nutrs; 301 + 302 + descp = &hba->utmrdl_base_addr[off]; 303 + task_req = (struct utp_upiu_task_req *)descp->task_req_upiu; 304 + trace_ufshcd_upiu(dev_name(hba->dev), str, &task_req->header, 305 + &task_req->input_param1); 306 + } 307 + 279 308 static void ufshcd_add_command_trace(struct ufs_hba *hba, 280 309 unsigned int tag, const char *str) 281 310 { ··· 313 284 u32 intr, doorbell; 314 285 struct ufshcd_lrb *lrbp; 315 286 int transfer_len = -1; 287 + 288 + /* trace UPIU also */ 289 + ufshcd_add_cmd_upiu_trace(hba, tag, str); 316 290 317 291 if (!trace_ufshcd_command_enabled()) 318 292 return; ··· 2582 2550 2583 2551 hba->dev_cmd.complete = &wait; 2584 2552 2553 + ufshcd_add_query_upiu_trace(hba, tag, "query_send"); 2585 2554 /* Make sure descriptors are ready before ringing the doorbell */ 2586 2555 wmb(); 2587 2556 spin_lock_irqsave(hba->host->host_lock, flags); ··· 2591 2558 spin_unlock_irqrestore(hba->host->host_lock, flags); 2592 2559 2593 2560 err = ufshcd_wait_for_dev_cmd(hba, lrbp, timeout); 2561 + 2562 + ufshcd_add_query_upiu_trace(hba, tag, 2563 + err ? 
"query_complete_err" : "query_complete"); 2594 2564 2595 2565 out_put_tag: 2596 2566 ufshcd_put_dev_cmd_tag(hba, tag); ··· 5479 5443 5480 5444 spin_unlock_irqrestore(host->host_lock, flags); 5481 5445 5446 + ufshcd_add_tm_upiu_trace(hba, task_tag, "tm_send"); 5447 + 5482 5448 /* wait until the task management command is completed */ 5483 5449 err = wait_event_timeout(hba->tm_wq, 5484 5450 test_bit(free_slot, &hba->tm_condition), 5485 5451 msecs_to_jiffies(TM_CMD_TIMEOUT)); 5486 5452 if (!err) { 5453 + ufshcd_add_tm_upiu_trace(hba, task_tag, "tm_complete_err"); 5487 5454 dev_err(hba->dev, "%s: task management cmd 0x%.2x timed-out\n", 5488 5455 __func__, tm_function); 5489 5456 if (ufshcd_clear_tm_cmd(hba, free_slot)) ··· 5495 5456 err = -ETIMEDOUT; 5496 5457 } else { 5497 5458 err = ufshcd_task_req_compl(hba, free_slot, tm_response); 5459 + ufshcd_add_tm_upiu_trace(hba, task_tag, "tm_complete"); 5498 5460 } 5499 5461 5500 5462 clear_bit(free_slot, &hba->tm_condition);
+1 -1
drivers/slimbus/messaging.c
··· 183 183 0, 1, 2, 3, 3, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7 184 184 }; 185 185 186 - clamp(code, 1, (int)ARRAY_SIZE(sizetocode)); 186 + code = clamp(code, 1, (int)ARRAY_SIZE(sizetocode)); 187 187 188 188 return sizetocode[code - 1]; 189 189 }
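Editor's note: the messaging.c fix above is a classic `clamp()` misuse — the macro returns the clamped value and does not modify its argument, so the result must be assigned back. A self-contained sketch (the `clamp` macro here is a simplified userspace stand-in for the kernel's, and the index functions are illustrative):

```c
#include <assert.h>

/* Simplified stand-in for the kernel's clamp(): evaluates to the
 * clamped value; it does NOT modify its argument in place. */
#define clamp(val, lo, hi) \
	((val) < (lo) ? (lo) : ((val) > (hi) ? (hi) : (val)))

static int buggy_index(int code)
{
	(void)clamp(code, 1, 16);	/* result discarded: no effect */
	return code;
}

static int fixed_index(int code)
{
	code = clamp(code, 1, 16);	/* assign the result back */
	return code;
}
```

Without the assignment, an out-of-range `code` flows straight into the `sizetocode[code - 1]` lookup, which for `code == 0` indexes the array at -1.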
+1 -1
drivers/soc/bcm/raspberrypi-power.c
··· 45 45 struct rpi_power_domain_packet { 46 46 u32 domain; 47 47 u32 on; 48 - } __packet; 48 + }; 49 49 50 50 /* 51 51 * Asks the firmware to enable or disable power on a specific power
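Editor's note: `__packet` in the raspberrypi-power.c diff above is not a kernel attribute — as written it merely declared a stray variable of the struct type, and the fix drops it because two `u32` members are naturally aligned anyway. For cases where packing does matter, a sketch of what `__packed` (GCC/Clang `__attribute__((packed))`) actually changes, assuming a typical ABI where `uint32_t` has 4-byte alignment:

```c
#include <stdint.h>

/* With a byte followed by a u32, the compiler normally inserts three
 * bytes of padding so the u32 lands on a 4-byte boundary; the packed
 * attribute removes that padding. */
struct loose {
	uint8_t  tag;
	uint32_t value;
};

struct tight {
	uint8_t  tag;
	uint32_t value;
} __attribute__((packed));
```

Because unknown attribute spellings like `__packet` can silently do nothing (or, as here, parse as a declarator), layouts intended for a wire or mailbox protocol are worth asserting with `sizeof`/`offsetof` at build time.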
+1 -1
drivers/staging/wilc1000/host_interface.c
··· 1390 1390 } 1391 1391 1392 1392 if (hif_drv->usr_conn_req.ies) { 1393 - conn_info.req_ies = kmemdup(conn_info.req_ies, 1393 + conn_info.req_ies = kmemdup(hif_drv->usr_conn_req.ies, 1394 1394 hif_drv->usr_conn_req.ies_len, 1395 1395 GFP_KERNEL); 1396 1396 if (conn_info.req_ies)
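Editor's note: the wilc1000 fix above is a copy-from-the-wrong-buffer bug — `kmemdup()` was passed the destination field (uninitialized at that point) instead of the saved request IEs. A userspace sketch of the corrected direction, with a `memdup()` helper standing in for `kmemdup()` and the struct names illustrative:

```c
#include <stdlib.h>
#include <string.h>

/* Userspace stand-in for kmemdup(): allocate and copy len bytes. */
static void *memdup(const void *src, size_t len)
{
	void *p = malloc(len);

	if (p)
		memcpy(p, src, len);
	return p;
}

struct conn_req  { unsigned char *ies;     size_t ies_len;     };
struct conn_info { unsigned char *req_ies; size_t req_ies_len; };

/* Correct direction: duplicate from the still-valid saved request,
 * never from the destination field being filled in. */
static void fill_info(struct conn_info *info, const struct conn_req *req)
{
	info->req_ies = memdup(req->ies, req->ies_len);
	info->req_ies_len = req->ies_len;
}
```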
+2
drivers/target/target_core_pscsi.c
··· 890 890 bytes = min(bytes, data_len); 891 891 892 892 if (!bio) { 893 + new_bio: 893 894 nr_vecs = min_t(int, BIO_MAX_PAGES, nr_pages); 894 895 nr_pages -= nr_vecs; 895 896 /* ··· 932 931 * be allocated with pscsi_get_bio() above. 933 932 */ 934 933 bio = NULL; 934 + goto new_bio; 935 935 } 936 936 937 937 data_len -= bytes;
+22 -1
drivers/tty/n_gsm.c
··· 121 121 struct mutex mutex; 122 122 123 123 /* Link layer */ 124 + int mode; 125 + #define DLCI_MODE_ABM 0 /* Normal Asynchronous Balanced Mode */ 126 + #define DLCI_MODE_ADM 1 /* Asynchronous Disconnected Mode */ 124 127 spinlock_t lock; /* Protects the internal state */ 125 128 struct timer_list t1; /* Retransmit timer for SABM and UA */ 126 129 int retries; ··· 1367 1364 ctrl->data = data; 1368 1365 ctrl->len = clen; 1369 1366 gsm->pending_cmd = ctrl; 1370 - gsm->cretries = gsm->n2; 1367 + 1368 + /* If DLCI0 is in ADM mode skip retries, it won't respond */ 1369 + if (gsm->dlci[0]->mode == DLCI_MODE_ADM) 1370 + gsm->cretries = 1; 1371 + else 1372 + gsm->cretries = gsm->n2; 1373 + 1371 1374 mod_timer(&gsm->t2_timer, jiffies + gsm->t2 * HZ / 100); 1372 1375 gsm_control_transmit(gsm, ctrl); 1373 1376 spin_unlock_irqrestore(&gsm->control_lock, flags); ··· 1481 1472 if (debug & 8) 1482 1473 pr_info("DLCI %d opening in ADM mode.\n", 1483 1474 dlci->addr); 1475 + dlci->mode = DLCI_MODE_ADM; 1484 1476 gsm_dlci_open(dlci); 1485 1477 } else { 1486 1478 gsm_dlci_close(dlci); ··· 2871 2861 static int gsm_carrier_raised(struct tty_port *port) 2872 2862 { 2873 2863 struct gsm_dlci *dlci = container_of(port, struct gsm_dlci, port); 2864 + struct gsm_mux *gsm = dlci->gsm; 2865 + 2874 2866 /* Not yet open so no carrier info */ 2875 2867 if (dlci->state != DLCI_OPEN) 2876 2868 return 0; 2877 2869 if (debug & 2) 2878 2870 return 1; 2871 + 2872 + /* 2873 + * Basic mode with control channel in ADM mode may not respond 2874 + * to CMD_MSC at all and modem_rx is empty. 2875 + */ 2876 + if (gsm->encoding == 0 && gsm->dlci[0]->mode == DLCI_MODE_ADM && 2877 + !dlci->modem_rx) 2878 + return 1; 2879 + 2879 2880 return dlci->modem_rx & TIOCM_CD; 2880 2881 } 2881 2882
+4 -2
drivers/tty/serial/earlycon.c
··· 169 169 */ 170 170 int __init setup_earlycon(char *buf) 171 171 { 172 - const struct earlycon_id *match; 172 + const struct earlycon_id **p_match; 173 173 174 174 if (!buf || !buf[0]) 175 175 return -EINVAL; ··· 177 177 if (early_con.flags & CON_ENABLED) 178 178 return -EALREADY; 179 179 180 - for (match = __earlycon_table; match < __earlycon_table_end; match++) { 180 + for (p_match = __earlycon_table; p_match < __earlycon_table_end; 181 + p_match++) { 182 + const struct earlycon_id *match = *p_match; 181 183 size_t len = strlen(match->name); 182 184 183 185 if (strncmp(buf, match->name, len))
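Editor's note: the earlycon.c change above follows the linker-section table switching from an array of `struct earlycon_id` entries to an array of *pointers* to entries, so the cursor becomes a pointer-to-pointer and each element needs one extra dereference. A sketch of the iteration shape over an ordinary array (the `earlycon_id_demo` type and table contents are illustrative, not the real section):

```c
#include <stddef.h>
#include <string.h>

struct earlycon_id_demo {
	const char *name;
};

static const struct earlycon_id_demo uart_a = { "uart8250" };
static const struct earlycon_id_demo uart_b = { "pl011" };

/* The table holds pointers to entries, mirroring the new section
 * layout, so matching walks a pointer-to-pointer cursor. */
static const struct earlycon_id_demo *table[] = { &uart_a, &uart_b };
#define table_end (table + sizeof(table) / sizeof(table[0]))

static const struct earlycon_id_demo *find_earlycon(const char *buf)
{
	const struct earlycon_id_demo **p_match;

	for (p_match = table; p_match < table_end; p_match++) {
		const struct earlycon_id_demo *match = *p_match;

		if (strncmp(buf, match->name, strlen(match->name)) == 0)
			return match;
	}
	return NULL;
}
```

Keeping a local `match = *p_match` at the top of the loop, as the kernel patch does, lets the rest of the body stay unchanged.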
+18 -1
drivers/tty/serial/imx.c
··· 316 316 * differ from the value that was last written. As it only 317 317 * clears after being set, reread conditionally. 318 318 */ 319 - if (sport->ucr2 & UCR2_SRST) 319 + if (!(sport->ucr2 & UCR2_SRST)) 320 320 sport->ucr2 = readl(sport->port.membase + offset); 321 321 return sport->ucr2; 322 322 break; ··· 1833 1833 rs485conf->flags &= ~SER_RS485_ENABLED; 1834 1834 1835 1835 if (rs485conf->flags & SER_RS485_ENABLED) { 1836 + /* Enable receiver if low-active RTS signal is requested */ 1837 + if (sport->have_rtscts && !sport->have_rtsgpio && 1838 + !(rs485conf->flags & SER_RS485_RTS_ON_SEND)) 1839 + rs485conf->flags |= SER_RS485_RX_DURING_TX; 1840 + 1836 1841 /* disable transmitter */ 1837 1842 ucr2 = imx_uart_readl(sport, UCR2); 1838 1843 if (rs485conf->flags & SER_RS485_RTS_AFTER_SEND) ··· 2269 2264 if (sport->port.rs485.flags & SER_RS485_ENABLED && 2270 2265 (!sport->have_rtscts && !sport->have_rtsgpio)) 2271 2266 dev_err(&pdev->dev, "no RTS control, disabling rs485\n"); 2267 + 2268 + /* 2269 + * If using the i.MX UART RTS/CTS control then the RTS (CTS_B) 2270 + * signal cannot be set low during transmission in case the 2271 + * receiver is off (limitation of the i.MX UART IP). 2272 + */ 2273 + if (sport->port.rs485.flags & SER_RS485_ENABLED && 2274 + sport->have_rtscts && !sport->have_rtsgpio && 2275 + (!(sport->port.rs485.flags & SER_RS485_RTS_ON_SEND) && 2276 + !(sport->port.rs485.flags & SER_RS485_RX_DURING_TX))) 2277 + dev_err(&pdev->dev, 2278 + "low-active RTS not possible when receiver is off, enabling receiver\n"); 2272 2279 2273 2280 imx_uart_rs485_config(&sport->port, &sport->port.rs485); 2274 2281
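Editor's note: the first imx.c hunk above inverts a cached-register refresh condition — the shadow copy of UCR2 must be reread from hardware only *while* the self-setting SRST bit is still clear, because once observed set it never changes back. A toy model of that caching rule (the simulated register and helper names are illustrative):

```c
#include <stdint.h>

#define UCR2_SRST (1u << 0)	/* set by "hardware" when reset completes */

static uint32_t hw_reg = UCR2_SRST;	/* simulated hardware register */
static uint32_t cached;			/* driver's shadow copy */

/* Refresh the shadow copy only while it still shows SRST clear; after
 * the bit has been seen set, the cached value stays authoritative. */
static uint32_t read_ucr2(void)
{
	if (!(cached & UCR2_SRST))
		cached = hw_reg;
	return cached;
}
```

With the old (non-negated) test, the driver rereads exactly when the cache is already trustworthy and keeps a stale value in the one case that matters.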
-1
drivers/tty/serial/mvebu-uart.c
··· 495 495 termios->c_iflag |= old->c_iflag & ~(INPCK | IGNPAR); 496 496 termios->c_cflag &= CREAD | CBAUD; 497 497 termios->c_cflag |= old->c_cflag & ~(CREAD | CBAUD); 498 - termios->c_lflag = old->c_lflag; 499 498 } 500 499 501 500 spin_unlock_irqrestore(&port->lock, flags);
+6 -4
drivers/tty/serial/qcom_geni_serial.c
··· 1022 1022 struct qcom_geni_serial_port *port; 1023 1023 struct uart_port *uport; 1024 1024 struct resource *res; 1025 + int irq; 1025 1026 1026 1027 if (pdev->dev.of_node) 1027 1028 line = of_alias_get_id(pdev->dev.of_node, "serial"); ··· 1062 1061 port->rx_fifo_depth = DEF_FIFO_DEPTH_WORDS; 1063 1062 port->tx_fifo_width = DEF_FIFO_WIDTH_BITS; 1064 1063 1065 - uport->irq = platform_get_irq(pdev, 0); 1066 - if (uport->irq < 0) { 1067 - dev_err(&pdev->dev, "Failed to get IRQ %d\n", uport->irq); 1068 - return uport->irq; 1064 + irq = platform_get_irq(pdev, 0); 1065 + if (irq < 0) { 1066 + dev_err(&pdev->dev, "Failed to get IRQ %d\n", irq); 1067 + return irq; 1069 1068 } 1069 + uport->irq = irq; 1070 1070 1071 1071 uport->private_data = &qcom_geni_console_driver; 1072 1072 platform_set_drvdata(pdev, port);
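Editor's note: the qcom_geni_serial.c fix above addresses a sign-truncation bug — `uport->irq` is an `unsigned int`, so storing `platform_get_irq()`'s negative errno there makes the `< 0` check always false. The fix keeps the return value in a signed local first. A sketch with a fake lookup function (the names and the `-ENXIO` failure value are illustrative):

```c
#include <errno.h>

struct fake_uart_port {
	unsigned int irq;	/* unsigned, like the kernel's uart_port */
};

static int fake_platform_get_irq(int present)
{
	return present ? 42 : -ENXIO;
}

/* Check the signed return value before the narrowing store; testing
 * uport->irq < 0 after assignment can never fire. */
static int probe_irq(struct fake_uart_port *uport, int present)
{
	int irq = fake_platform_get_irq(present);

	if (irq < 0)
		return irq;
	uport->irq = irq;
	return 0;
}
```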
+1 -1
drivers/tty/serial/xilinx_uartps.c
··· 1181 1181 /* only set baud if specified on command line - otherwise 1182 1182 * assume it has been initialized by a boot loader. 1183 1183 */ 1184 - if (device->baud) { 1184 + if (port->uartclk && device->baud) { 1185 1185 u32 cd = 0, bdiv = 0; 1186 1186 u32 mr; 1187 1187 int div8;
+4 -1
drivers/tty/tty_io.c
··· 2816 2816 2817 2817 kref_init(&tty->kref); 2818 2818 tty->magic = TTY_MAGIC; 2819 - tty_ldisc_init(tty); 2819 + if (tty_ldisc_init(tty)) { 2820 + kfree(tty); 2821 + return NULL; 2822 + } 2820 2823 tty->session = NULL; 2821 2824 tty->pgrp = NULL; 2822 2825 mutex_init(&tty->legacy_mutex);
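Editor's note: the tty_io.c change above converts a panic on line-discipline setup failure into ordinary error unwinding — the half-built tty is freed and the constructor returns NULL. A compact sketch of that construct-or-unwind pattern (the types and the `fail` knob are illustrative):

```c
#include <errno.h>
#include <stdlib.h>

struct ldisc { int num; };
struct tty   { struct ldisc *ld; };

/* Hypothetical sub-initializer that can fail. */
static int ldisc_init(struct tty *t, int fail)
{
	if (fail)
		return -ENOMEM;
	t->ld = malloc(sizeof(*t->ld));
	return t->ld ? 0 : -ENOMEM;
}

/* On sub-init failure, free the partially constructed object and
 * report failure to the caller instead of dying. */
static struct tty *alloc_tty(int fail)
{
	struct tty *t = calloc(1, sizeof(*t));

	if (!t)
		return NULL;
	if (ldisc_init(t, fail)) {
		free(t);
		return NULL;
	}
	return t;
}
```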
+13 -16
drivers/tty/tty_ldisc.c
··· 176 176 return ERR_CAST(ldops); 177 177 } 178 178 179 - ld = kmalloc(sizeof(struct tty_ldisc), GFP_KERNEL); 180 - if (ld == NULL) { 181 - put_ldops(ldops); 182 - return ERR_PTR(-ENOMEM); 183 - } 184 - 179 + /* 180 + * There is no way to handle allocation failure of only 16 bytes. 181 + * Let's simplify error handling and save more memory. 182 + */ 183 + ld = kmalloc(sizeof(struct tty_ldisc), GFP_KERNEL | __GFP_NOFAIL); 185 184 ld->ops = ldops; 186 185 ld->tty = tty; 187 186 ··· 526 527 static void tty_ldisc_restore(struct tty_struct *tty, struct tty_ldisc *old) 527 528 { 528 529 /* There is an outstanding reference here so this is safe */ 529 - old = tty_ldisc_get(tty, old->ops->num); 530 - WARN_ON(IS_ERR(old)); 531 - tty->ldisc = old; 532 - tty_set_termios_ldisc(tty, old->ops->num); 533 - if (tty_ldisc_open(tty, old) < 0) { 534 - tty_ldisc_put(old); 530 + if (tty_ldisc_failto(tty, old->ops->num) < 0) { 531 + const char *name = tty_name(tty); 532 + 533 + pr_warn("Falling back ldisc for %s.\n", name); 535 534 /* The traditional behaviour is to fall back to N_TTY, we 536 535 want to avoid falling back to N_NULL unless we have no 537 536 choice to avoid the risk of breaking anything */ 538 537 if (tty_ldisc_failto(tty, N_TTY) < 0 && 539 538 tty_ldisc_failto(tty, N_NULL) < 0) 540 - panic("Couldn't open N_NULL ldisc for %s.", 541 - tty_name(tty)); 539 + panic("Couldn't open N_NULL ldisc for %s.", name); 542 540 } 543 541 } 544 542 ··· 820 824 * the tty structure is not completely set up when this call is made. 821 825 */ 822 826 823 - void tty_ldisc_init(struct tty_struct *tty) 827 + int tty_ldisc_init(struct tty_struct *tty) 824 828 { 825 829 struct tty_ldisc *ld = tty_ldisc_get(tty, N_TTY); 826 830 if (IS_ERR(ld)) 827 - panic("n_tty: init_tty"); 831 + return PTR_ERR(ld); 828 832 tty->ldisc = ld; 833 + return 0; 829 834 } 830 835 831 836 /**
+23 -49
drivers/uio/uio_hv_generic.c
··· 19 19 * # echo -n "ed963694-e847-4b2a-85af-bc9cfc11d6f3" \ 20 20 * > /sys/bus/vmbus/drivers/uio_hv_generic/bind 21 21 */ 22 - 22 + #define DEBUG 1 23 23 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 24 24 25 25 #include <linux/device.h> ··· 94 94 */ 95 95 static void hv_uio_channel_cb(void *context) 96 96 { 97 - struct hv_uio_private_data *pdata = context; 98 - struct hv_device *dev = pdata->device; 97 + struct vmbus_channel *chan = context; 98 + struct hv_device *hv_dev = chan->device_obj; 99 + struct hv_uio_private_data *pdata = hv_get_drvdata(hv_dev); 99 100 100 - dev->channel->inbound.ring_buffer->interrupt_mask = 1; 101 + chan->inbound.ring_buffer->interrupt_mask = 1; 101 102 virt_mb(); 102 103 103 104 uio_event_notify(&pdata->info); ··· 122 121 uio_event_notify(&pdata->info); 123 122 } 124 123 125 - /* 126 - * Handle fault when looking for sub channel ring buffer 127 - * Subchannel ring buffer is same as resource 0 which is main ring buffer 128 - * This is derived from uio_vma_fault 124 + /* Sysfs API to allow mmap of the ring buffers 125 + * The ring buffer is allocated as contiguous memory by vmbus_open 129 126 */ 130 - static int hv_uio_vma_fault(struct vm_fault *vmf) 131 - { 132 - struct vm_area_struct *vma = vmf->vma; 133 - void *ring_buffer = vma->vm_private_data; 134 - struct page *page; 135 - void *addr; 136 - 137 - addr = ring_buffer + (vmf->pgoff << PAGE_SHIFT); 138 - page = virt_to_page(addr); 139 - get_page(page); 140 - vmf->page = page; 141 - return 0; 142 - } 143 - 144 - static const struct vm_operations_struct hv_uio_vm_ops = { 145 - .fault = hv_uio_vma_fault, 146 - }; 147 - 148 - /* Sysfs API to allow mmap of the ring buffers */ 149 127 static int hv_uio_ring_mmap(struct file *filp, struct kobject *kobj, 150 128 struct bin_attribute *attr, 151 129 struct vm_area_struct *vma) 152 130 { 153 131 struct vmbus_channel *channel 154 132 = container_of(kobj, struct vmbus_channel, kobj); 155 - unsigned long requested_pages, actual_pages; 133 + struct 
hv_device *dev = channel->primary_channel->device_obj; 134 + u16 q_idx = channel->offermsg.offer.sub_channel_index; 156 135 157 - if (vma->vm_end < vma->vm_start) 158 - return -EINVAL; 136 + dev_dbg(&dev->device, "mmap channel %u pages %#lx at %#lx\n", 137 + q_idx, vma_pages(vma), vma->vm_pgoff); 159 138 160 - /* only allow 0 for now */ 161 - if (vma->vm_pgoff > 0) 162 - return -EINVAL; 163 - 164 - requested_pages = vma_pages(vma); 165 - actual_pages = 2 * HV_RING_SIZE; 166 - if (requested_pages > actual_pages) 167 - return -EINVAL; 168 - 169 - vma->vm_private_data = channel->ringbuffer_pages; 170 - vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP; 171 - vma->vm_ops = &hv_uio_vm_ops; 172 - return 0; 139 + return vm_iomap_memory(vma, virt_to_phys(channel->ringbuffer_pages), 140 + channel->ringbuffer_pagecount << PAGE_SHIFT); 173 141 } 174 142 175 - static struct bin_attribute ring_buffer_bin_attr __ro_after_init = { 143 + static const struct bin_attribute ring_buffer_bin_attr = { 176 144 .attr = { 177 145 .name = "ring", 178 146 .mode = 0600, 179 - /* size is set at init time */ 180 147 }, 148 + .size = 2 * HV_RING_SIZE * PAGE_SIZE, 181 149 .mmap = hv_uio_ring_mmap, 182 150 }; 183 151 184 - /* Callback from VMBUS subystem when new channel created. */ 152 + /* Callback from VMBUS subsystem when new channel created. 
*/ 185 153 static void 186 154 hv_uio_new_channel(struct vmbus_channel *new_sc) 187 155 { 188 156 struct hv_device *hv_dev = new_sc->primary_channel->device_obj; 189 157 struct device *device = &hv_dev->device; 190 - struct hv_uio_private_data *pdata = hv_get_drvdata(hv_dev); 191 158 const size_t ring_bytes = HV_RING_SIZE * PAGE_SIZE; 192 159 int ret; 193 160 194 161 /* Create host communication ring */ 195 162 ret = vmbus_open(new_sc, ring_bytes, ring_bytes, NULL, 0, 196 - hv_uio_channel_cb, pdata); 163 + hv_uio_channel_cb, new_sc); 197 164 if (ret) { 198 165 dev_err(device, "vmbus_open subchannel failed: %d\n", ret); 199 166 return; ··· 203 234 204 235 ret = vmbus_open(dev->channel, HV_RING_SIZE * PAGE_SIZE, 205 236 HV_RING_SIZE * PAGE_SIZE, NULL, 0, 206 - hv_uio_channel_cb, pdata); 237 + hv_uio_channel_cb, dev->channel); 207 238 if (ret) 208 239 goto fail; 209 240 ··· 294 325 295 326 vmbus_set_chn_rescind_callback(dev->channel, hv_uio_rescind); 296 327 vmbus_set_sc_create_callback(dev->channel, hv_uio_new_channel); 328 + 329 + ret = sysfs_create_bin_file(&dev->channel->kobj, &ring_buffer_bin_attr); 330 + if (ret) 331 + dev_notice(&dev->device, 332 + "sysfs create ring bin file failed; %d\n", ret); 297 333 298 334 hv_set_drvdata(dev, pdata); 299 335
+1
drivers/usb/Kconfig
··· 207 207 208 208 config USB_ROLE_SWITCH 209 209 tristate 210 + select USB_COMMON 210 211 211 212 endif # USB_SUPPORT
+13 -6
drivers/usb/core/hcd.c
··· 2262 2262 hcd->state = HC_STATE_SUSPENDED; 2263 2263 2264 2264 if (!PMSG_IS_AUTO(msg)) 2265 - usb_phy_roothub_power_off(hcd->phy_roothub); 2265 + usb_phy_roothub_suspend(hcd->self.sysdev, 2266 + hcd->phy_roothub); 2266 2267 2267 2268 /* Did we race with a root-hub wakeup event? */ 2268 2269 if (rhdev->do_remote_wakeup) { ··· 2303 2302 } 2304 2303 2305 2304 if (!PMSG_IS_AUTO(msg)) { 2306 - status = usb_phy_roothub_power_on(hcd->phy_roothub); 2305 + status = usb_phy_roothub_resume(hcd->self.sysdev, 2306 + hcd->phy_roothub); 2307 2307 if (status) 2308 2308 return status; 2309 2309 } ··· 2346 2344 } 2347 2345 } else { 2348 2346 hcd->state = old_state; 2349 - usb_phy_roothub_power_off(hcd->phy_roothub); 2347 + usb_phy_roothub_suspend(hcd->self.sysdev, hcd->phy_roothub); 2350 2348 dev_dbg(&rhdev->dev, "bus %s fail, err %d\n", 2351 2349 "resume", status); 2352 2350 if (status != -ESHUTDOWN) ··· 2379 2377 2380 2378 spin_lock_irqsave (&hcd_root_hub_lock, flags); 2381 2379 if (hcd->rh_registered) { 2380 + pm_wakeup_event(&hcd->self.root_hub->dev, 0); 2382 2381 set_bit(HCD_FLAG_WAKEUP_PENDING, &hcd->flags); 2383 2382 queue_work(pm_wq, &hcd->wakeup_work); 2384 2383 } ··· 2761 2758 } 2762 2759 2763 2760 if (!hcd->skip_phy_initialization && usb_hcd_is_primary_hcd(hcd)) { 2764 - hcd->phy_roothub = usb_phy_roothub_init(hcd->self.sysdev); 2761 + hcd->phy_roothub = usb_phy_roothub_alloc(hcd->self.sysdev); 2765 2762 if (IS_ERR(hcd->phy_roothub)) { 2766 2763 retval = PTR_ERR(hcd->phy_roothub); 2767 - goto err_phy_roothub_init; 2764 + goto err_phy_roothub_alloc; 2768 2765 } 2766 + 2767 + retval = usb_phy_roothub_init(hcd->phy_roothub); 2768 + if (retval) 2769 + goto err_phy_roothub_alloc; 2769 2770 2770 2771 retval = usb_phy_roothub_power_on(hcd->phy_roothub); 2771 2772 if (retval) ··· 2943 2936 usb_phy_roothub_power_off(hcd->phy_roothub); 2944 2937 err_usb_phy_roothub_power_on: 2945 2938 usb_phy_roothub_exit(hcd->phy_roothub); 2946 - err_phy_roothub_init: 2939 + 
err_phy_roothub_alloc: 2947 2940 if (hcd->remove_phy && hcd->usb_phy) { 2948 2941 usb_phy_shutdown(hcd->usb_phy); 2949 2942 usb_put_phy(hcd->usb_phy);
+9 -1
drivers/usb/core/hub.c
··· 653 653 unsigned int portnum) 654 654 { 655 655 struct usb_hub *hub; 656 + struct usb_port *port_dev; 656 657 657 658 if (!hdev) 658 659 return; 659 660 660 661 hub = usb_hub_to_struct_hub(hdev); 661 662 if (hub) { 663 + port_dev = hub->ports[portnum - 1]; 664 + if (port_dev && port_dev->child) 665 + pm_wakeup_event(&port_dev->child->dev, 0); 666 + 662 667 set_bit(portnum, hub->wakeup_bits); 663 668 kick_hub_wq(hub); 664 669 } ··· 3439 3434 3440 3435 /* Skip the initial Clear-Suspend step for a remote wakeup */ 3441 3436 status = hub_port_status(hub, port1, &portstatus, &portchange); 3442 - if (status == 0 && !port_is_suspended(hub, portstatus)) 3437 + if (status == 0 && !port_is_suspended(hub, portstatus)) { 3438 + if (portchange & USB_PORT_STAT_C_SUSPEND) 3439 + pm_wakeup_event(&udev->dev, 0); 3443 3440 goto SuspendCleared; 3441 + } 3444 3442 3445 3443 /* see 7.1.7.7; affects power usage, but not budgeting */ 3446 3444 if (hub_is_superspeed(hub->hdev))
+66 -27
drivers/usb/core/phy.c
··· 19 19 struct list_head list; 20 20 }; 21 21 22 - static struct usb_phy_roothub *usb_phy_roothub_alloc(struct device *dev) 23 - { 24 - struct usb_phy_roothub *roothub_entry; 25 - 26 - roothub_entry = devm_kzalloc(dev, sizeof(*roothub_entry), GFP_KERNEL); 27 - if (!roothub_entry) 28 - return ERR_PTR(-ENOMEM); 29 - 30 - INIT_LIST_HEAD(&roothub_entry->list); 31 - 32 - return roothub_entry; 33 - } 34 - 35 22 static int usb_phy_roothub_add_phy(struct device *dev, int index, 36 23 struct list_head *list) 37 24 { ··· 32 45 return PTR_ERR(phy); 33 46 } 34 47 35 - roothub_entry = usb_phy_roothub_alloc(dev); 36 - if (IS_ERR(roothub_entry)) 37 - return PTR_ERR(roothub_entry); 48 + roothub_entry = devm_kzalloc(dev, sizeof(*roothub_entry), GFP_KERNEL); 49 + if (!roothub_entry) 50 + return -ENOMEM; 51 + 52 + INIT_LIST_HEAD(&roothub_entry->list); 38 53 39 54 roothub_entry->phy = phy; 40 55 ··· 45 56 return 0; 46 57 } 47 58 48 - struct usb_phy_roothub *usb_phy_roothub_init(struct device *dev) 59 + struct usb_phy_roothub *usb_phy_roothub_alloc(struct device *dev) 49 60 { 50 61 struct usb_phy_roothub *phy_roothub; 51 - struct usb_phy_roothub *roothub_entry; 52 - struct list_head *head; 53 62 int i, num_phys, err; 63 + 64 + if (!IS_ENABLED(CONFIG_GENERIC_PHY)) 65 + return NULL; 54 66 55 67 num_phys = of_count_phandle_with_args(dev->of_node, "phys", 56 68 "#phy-cells"); 57 69 if (num_phys <= 0) 58 70 return NULL; 59 71 60 - phy_roothub = usb_phy_roothub_alloc(dev); 61 - if (IS_ERR(phy_roothub)) 62 - return phy_roothub; 72 + phy_roothub = devm_kzalloc(dev, sizeof(*phy_roothub), GFP_KERNEL); 73 + if (!phy_roothub) 74 + return ERR_PTR(-ENOMEM); 75 + 76 + INIT_LIST_HEAD(&phy_roothub->list); 63 77 64 78 for (i = 0; i < num_phys; i++) { 65 79 err = usb_phy_roothub_add_phy(dev, i, &phy_roothub->list); 66 80 if (err) 67 - goto err_out; 81 + return ERR_PTR(err); 68 82 } 83 + 84 + return phy_roothub; 85 + } 86 + EXPORT_SYMBOL_GPL(usb_phy_roothub_alloc); 87 + 88 + int 
usb_phy_roothub_init(struct usb_phy_roothub *phy_roothub) 89 + { 90 + struct usb_phy_roothub *roothub_entry; 91 + struct list_head *head; 92 + int err; 93 + 94 + if (!phy_roothub) 95 + return 0; 69 96 70 97 head = &phy_roothub->list; 71 98 ··· 91 86 goto err_exit_phys; 92 87 } 93 88 94 - return phy_roothub; 89 + return 0; 95 90 96 91 err_exit_phys: 97 92 list_for_each_entry_continue_reverse(roothub_entry, head, list) 98 93 phy_exit(roothub_entry->phy); 99 94 100 - err_out: 101 - return ERR_PTR(err); 95 + return err; 102 96 } 103 97 EXPORT_SYMBOL_GPL(usb_phy_roothub_init); 104 98 ··· 115 111 list_for_each_entry(roothub_entry, head, list) { 116 112 err = phy_exit(roothub_entry->phy); 117 113 if (err) 118 - ret = ret; 114 + ret = err; 119 115 } 120 116 121 117 return ret; ··· 160 156 phy_power_off(roothub_entry->phy); 161 157 } 162 158 EXPORT_SYMBOL_GPL(usb_phy_roothub_power_off); 159 + 160 + int usb_phy_roothub_suspend(struct device *controller_dev, 161 + struct usb_phy_roothub *phy_roothub) 162 + { 163 + usb_phy_roothub_power_off(phy_roothub); 164 + 165 + /* keep the PHYs initialized so the device can wake up the system */ 166 + if (device_may_wakeup(controller_dev)) 167 + return 0; 168 + 169 + return usb_phy_roothub_exit(phy_roothub); 170 + } 171 + EXPORT_SYMBOL_GPL(usb_phy_roothub_suspend); 172 + 173 + int usb_phy_roothub_resume(struct device *controller_dev, 174 + struct usb_phy_roothub *phy_roothub) 175 + { 176 + int err; 177 + 178 + /* if the device can't wake up the system _exit was called */ 179 + if (!device_may_wakeup(controller_dev)) { 180 + err = usb_phy_roothub_init(phy_roothub); 181 + if (err) 182 + return err; 183 + } 184 + 185 + err = usb_phy_roothub_power_on(phy_roothub); 186 + 187 + /* undo _init if _power_on failed */ 188 + if (err && !device_may_wakeup(controller_dev)) 189 + usb_phy_roothub_exit(phy_roothub); 190 + 191 + return err; 192 + } 193 + EXPORT_SYMBOL_GPL(usb_phy_roothub_resume);
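Editor's note: the new `usb_phy_roothub_suspend()`/`usb_phy_roothub_resume()` pair above encodes a wakeup-aware policy — always power the PHYs off on suspend, but skip the full exit when the controller may wake the system, and mirror that choice exactly on resume. A toy state machine showing the symmetry (all names and the integer state encoding are illustrative):

```c
#include <stdbool.h>

/* PHY state for the sketch: 0 = exited, 1 = initialized, 2 = powered. */
static int phy_state = 2;

static int phy_sim_power_off(void) { if (phy_state == 2) phy_state = 1; return 0; }
static int phy_sim_power_on(void)  { if (phy_state == 1) phy_state = 2; return 0; }
static int phy_sim_exit(void)      { phy_state = 0; return 0; }
static int phy_sim_init(void)      { if (phy_state == 0) phy_state = 1; return 0; }

/* Always power off; keep the PHY initialized only when it may be
 * needed to wake the system. */
static int phy_suspend(bool may_wakeup)
{
	phy_sim_power_off();
	if (may_wakeup)
		return 0;
	return phy_sim_exit();
}

/* Resume must undo exactly what suspend did for this wakeup setting. */
static int phy_resume(bool may_wakeup)
{
	if (!may_wakeup) {
		int err = phy_sim_init();

		if (err)
			return err;
	}
	return phy_sim_power_on();
}
```

The key invariant is that both halves branch on the same `may_wakeup` predicate, so init/exit calls always stay balanced.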
+21 -1
drivers/usb/core/phy.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0+ */ 2 + /* 3 + * USB roothub wrapper 4 + * 5 + * Copyright (C) 2018 Martin Blumenstingl <martin.blumenstingl@googlemail.com> 6 + */ 7 + 8 + #ifndef __USB_CORE_PHY_H_ 9 + #define __USB_CORE_PHY_H_ 10 + 11 + struct device; 1 12 struct usb_phy_roothub; 2 13 3 - struct usb_phy_roothub *usb_phy_roothub_init(struct device *dev); 14 + struct usb_phy_roothub *usb_phy_roothub_alloc(struct device *dev); 15 + 16 + int usb_phy_roothub_init(struct usb_phy_roothub *phy_roothub); 4 17 int usb_phy_roothub_exit(struct usb_phy_roothub *phy_roothub); 5 18 6 19 int usb_phy_roothub_power_on(struct usb_phy_roothub *phy_roothub); 7 20 void usb_phy_roothub_power_off(struct usb_phy_roothub *phy_roothub); 21 + 22 + int usb_phy_roothub_suspend(struct device *controller_dev, 23 + struct usb_phy_roothub *phy_roothub); 24 + int usb_phy_roothub_resume(struct device *controller_dev, 25 + struct usb_phy_roothub *phy_roothub); 26 + 27 + #endif /* __USB_CORE_PHY_H_ */
+3
drivers/usb/core/quirks.c
··· 186 186 { USB_DEVICE(0x03f0, 0x0701), .driver_info = 187 187 USB_QUIRK_STRING_FETCH_255 }, 188 188 189 + /* HP v222w 16GB Mini USB Drive */ 190 + { USB_DEVICE(0x03f0, 0x3f40), .driver_info = USB_QUIRK_DELAY_INIT }, 191 + 189 192 /* Creative SB Audigy 2 NX */ 190 193 { USB_DEVICE(0x041e, 0x3020), .driver_info = USB_QUIRK_RESET_RESUME }, 191 194
+5 -3
drivers/usb/host/xhci-dbgtty.c
··· 320 320 321 321 void xhci_dbc_tty_unregister_driver(void) 322 322 { 323 - tty_unregister_driver(dbc_tty_driver); 324 - put_tty_driver(dbc_tty_driver); 325 - dbc_tty_driver = NULL; 323 + if (dbc_tty_driver) { 324 + tty_unregister_driver(dbc_tty_driver); 325 + put_tty_driver(dbc_tty_driver); 326 + dbc_tty_driver = NULL; 327 + } 326 328 } 327 329 328 330 static void dbc_rx_push(unsigned long _port)
+4 -1
drivers/usb/host/xhci-pci.c
··· 126 126 if (pdev->vendor == PCI_VENDOR_ID_AMD && usb_amd_find_chipset_info()) 127 127 xhci->quirks |= XHCI_AMD_PLL_FIX; 128 128 129 - if (pdev->vendor == PCI_VENDOR_ID_AMD && pdev->device == 0x43bb) 129 + if (pdev->vendor == PCI_VENDOR_ID_AMD && 130 + (pdev->device == 0x15e0 || 131 + pdev->device == 0x15e1 || 132 + pdev->device == 0x43bb)) 130 133 xhci->quirks |= XHCI_SUSPEND_DELAY; 131 134 132 135 if (pdev->vendor == PCI_VENDOR_ID_AMD)
+23 -9
drivers/usb/host/xhci-plat.c
··· 157 157 struct resource *res; 158 158 struct usb_hcd *hcd; 159 159 struct clk *clk; 160 + struct clk *reg_clk; 160 161 int ret; 161 162 int irq; 162 163 ··· 227 226 hcd->rsrc_len = resource_size(res); 228 227 229 228 /* 230 - * Not all platforms have a clk so it is not an error if the 231 - * clock does not exists. 229 + * Not all platforms have clks so it is not an error if the 230 + * clock do not exist. 232 231 */ 232 + reg_clk = devm_clk_get(&pdev->dev, "reg"); 233 + if (!IS_ERR(reg_clk)) { 234 + ret = clk_prepare_enable(reg_clk); 235 + if (ret) 236 + goto put_hcd; 237 + } else if (PTR_ERR(reg_clk) == -EPROBE_DEFER) { 238 + ret = -EPROBE_DEFER; 239 + goto put_hcd; 240 + } 241 + 233 242 clk = devm_clk_get(&pdev->dev, NULL); 234 243 if (!IS_ERR(clk)) { 235 244 ret = clk_prepare_enable(clk); 236 245 if (ret) 237 - goto put_hcd; 246 + goto disable_reg_clk; 238 247 } else if (PTR_ERR(clk) == -EPROBE_DEFER) { 239 248 ret = -EPROBE_DEFER; 240 - goto put_hcd; 249 + goto disable_reg_clk; 241 250 } 242 251 243 252 xhci = hcd_to_xhci(hcd); ··· 263 252 device_wakeup_enable(hcd->self.controller); 264 253 265 254 xhci->clk = clk; 255 + xhci->reg_clk = reg_clk; 266 256 xhci->main_hcd = hcd; 267 257 xhci->shared_hcd = __usb_create_hcd(driver, sysdev, &pdev->dev, 268 258 dev_name(&pdev->dev), hcd); ··· 332 320 usb_put_hcd(xhci->shared_hcd); 333 321 334 322 disable_clk: 335 - if (!IS_ERR(clk)) 336 - clk_disable_unprepare(clk); 323 + clk_disable_unprepare(clk); 324 + 325 + disable_reg_clk: 326 + clk_disable_unprepare(reg_clk); 337 327 338 328 put_hcd: 339 329 usb_put_hcd(hcd); ··· 352 338 struct usb_hcd *hcd = platform_get_drvdata(dev); 353 339 struct xhci_hcd *xhci = hcd_to_xhci(hcd); 354 340 struct clk *clk = xhci->clk; 341 + struct clk *reg_clk = xhci->reg_clk; 355 342 356 343 xhci->xhc_state |= XHCI_STATE_REMOVING; 357 344 ··· 362 347 usb_remove_hcd(hcd); 363 348 usb_put_hcd(xhci->shared_hcd); 364 349 365 - if (!IS_ERR(clk)) 366 - clk_disable_unprepare(clk); 350 + clk_disable_unprepare(clk); 351 + clk_disable_unprepare(reg_clk); 367 352 usb_put_hcd(hcd); 368 353 369 354 pm_runtime_set_suspended(&dev->dev); ··· 435 420 static struct platform_driver usb_xhci_driver = { 436 421 .probe = xhci_plat_probe, 437 422 .remove = xhci_plat_remove, 438 - .shutdown = usb_hcd_platform_shutdown, 439 423 .driver = { 440 424 .name = "xhci-hcd", 441 425 .pm = &xhci_plat_pm_ops,
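The probe path above uses the kernel's "optional clock" idiom: a missing clock is not an error, but `-EPROBE_DEFER` (the provider is not ready yet) must abort the probe so it can be retried. A plain-C sketch of that triage, where a non-negative value stands in for a valid `struct clk *` and a negative value for an error pointer (this encoding is an assumption of the sketch, not the clk API):

```c
#include <assert.h>

#define SKETCH_EPROBE_DEFER 517	/* same value as the kernel's EPROBE_DEFER */

/* Returns 0 on success (setting *enabled if a clock was found), or
 * -SKETCH_EPROBE_DEFER when the probe must be retried later. */
static int maybe_enable_clk(int clk_or_err, int *enabled)
{
	*enabled = 0;
	if (clk_or_err >= 0) {		/* clock present: it must be enabled */
		*enabled = 1;
		return 0;
	}
	if (clk_or_err == -SKETCH_EPROBE_DEFER)
		return -SKETCH_EPROBE_DEFER;	/* provider not ready yet */
	return 0;			/* clock genuinely absent: not an error */
}
```

This is why the hunk only checks `PTR_ERR(reg_clk) == -EPROBE_DEFER` and silently ignores every other error from `devm_clk_get()`.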
+2 -1
drivers/usb/host/xhci.h
··· 1729 1729 int page_shift; 1730 1730 /* msi-x vectors */ 1731 1731 int msix_count; 1732 - /* optional clock */ 1732 + /* optional clocks */ 1733 1733 struct clk *clk; 1734 + struct clk *reg_clk; 1734 1735 /* data structures */ 1735 1736 struct xhci_device_context_array *dcbaa; 1736 1737 struct xhci_ring *cmd_ring;
-2
drivers/usb/musb/musb_dsps.c
··· 451 451 if (!rev) 452 452 return -ENODEV; 453 453 454 - usb_phy_init(musb->xceiv); 455 454 if (IS_ERR(musb->phy)) { 456 455 musb->phy = NULL; 457 456 } else { ··· 500 501 struct dsps_glue *glue = dev_get_drvdata(dev->parent); 501 502 502 503 del_timer_sync(&musb->dev_timer); 503 - usb_phy_shutdown(musb->xceiv); 504 504 phy_power_off(musb->phy); 505 505 phy_exit(musb->phy); 506 506 debugfs_remove_recursive(glue->dbgfs_root);
+1
drivers/usb/musb/musb_host.c
··· 2754 2754 hcd->self.otg_port = 1; 2755 2755 musb->xceiv->otg->host = &hcd->self; 2756 2756 hcd->power_budget = 2 * (power_budget ? : 250); 2757 + hcd->skip_phy_initialization = 1; 2757 2758 2758 2759 ret = usb_add_hcd(hcd, 0, 0); 2759 2760 if (ret < 0)
+1
drivers/usb/serial/Kconfig
··· 62 62 - Fundamental Software dongle. 63 63 - Google USB serial devices 64 64 - HP4x calculators 65 + - Libtransistor USB console 65 66 - a number of Motorola phones 66 67 - Motorola Tetra devices 67 68 - Novatel Wireless GPS receivers
+1
drivers/usb/serial/cp210x.c
··· 214 214 { USB_DEVICE(0x3195, 0xF190) }, /* Link Instruments MSO-19 */ 215 215 { USB_DEVICE(0x3195, 0xF280) }, /* Link Instruments MSO-28 */ 216 216 { USB_DEVICE(0x3195, 0xF281) }, /* Link Instruments MSO-28 */ 217 + { USB_DEVICE(0x3923, 0x7A0B) }, /* National Instruments USB Serial Console */ 217 218 { USB_DEVICE(0x413C, 0x9500) }, /* DW700 GPS USB interface */ 218 219 { } /* Terminating Entry */ 219 220 };
+2 -1
drivers/usb/serial/ftdi_sio.c
··· 1898 1898 return ftdi_jtag_probe(serial); 1899 1899 1900 1900 if (udev->product && 1901 - (!strcmp(udev->product, "BeagleBone/XDS100V2") || 1901 + (!strcmp(udev->product, "Arrow USB Blaster") || 1902 + !strcmp(udev->product, "BeagleBone/XDS100V2") || 1902 1903 !strcmp(udev->product, "SNAP Connect E10"))) 1903 1904 return ftdi_jtag_probe(serial); 1904 1905
+7
drivers/usb/serial/usb-serial-simple.c
··· 63 63 0x01) } 64 64 DEVICE(google, GOOGLE_IDS); 65 65 66 + /* Libtransistor USB console */ 67 + #define LIBTRANSISTOR_IDS() \ 68 + { USB_DEVICE(0x1209, 0x8b00) } 69 + DEVICE(libtransistor, LIBTRANSISTOR_IDS); 70 + 66 71 /* ViVOpay USB Serial Driver */ 67 72 #define VIVOPAY_IDS() \ 68 73 { USB_DEVICE(0x1d5f, 0x1004) } /* ViVOpay 8800 */ ··· 115 110 &funsoft_device, 116 111 &flashloader_device, 117 112 &google_device, 113 + &libtransistor_device, 118 114 &vivopay_device, 119 115 &moto_modem_device, 120 116 &motorola_tetra_device, ··· 132 126 FUNSOFT_IDS(), 133 127 FLASHLOADER_IDS(), 134 128 GOOGLE_IDS(), 129 + LIBTRANSISTOR_IDS(), 135 130 VIVOPAY_IDS(), 136 131 MOTO_IDS(), 137 132 MOTOROLA_TETRA_IDS(),
+1 -1
drivers/usb/typec/ucsi/Makefile
··· 5 5 6 6 typec_ucsi-y := ucsi.o 7 7 8 - typec_ucsi-$(CONFIG_FTRACE) += trace.o 8 + typec_ucsi-$(CONFIG_TRACING) += trace.o 9 9 10 10 obj-$(CONFIG_UCSI_ACPI) += ucsi_acpi.o
+1 -1
drivers/usb/typec/ucsi/ucsi.c
··· 28 28 * difficult to estimate the time it takes for the system to process the command 29 29 * before it is actually passed to the PPM. 30 30 */ 31 - #define UCSI_TIMEOUT_MS 1000 31 + #define UCSI_TIMEOUT_MS 5000 32 32 33 33 /* 34 34 * UCSI_SWAP_TIMEOUT_MS - Timeout for role swap requests
+5
drivers/usb/usbip/stub_main.c
··· 186 186 if (!bid) 187 187 return -ENODEV; 188 188 189 + /* device_attach() callers should hold parent lock for USB */ 190 + if (bid->udev->dev.parent) 191 + device_lock(bid->udev->dev.parent); 189 192 ret = device_attach(&bid->udev->dev); 193 + if (bid->udev->dev.parent) 194 + device_unlock(bid->udev->dev.parent); 190 195 if (ret < 0) { 191 196 dev_err(&bid->udev->dev, "rebind failed\n"); 192 197 return ret;
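The stub_main hunk brackets `device_attach()` with the parent's `device_lock()`/`device_unlock()`, taking the lock only when a parent exists. A dependency-free sketch of that bracketing rule, with stand-in names and a flag in place of a real lock:

```c
#include <assert.h>
#include <stddef.h>

/* A "lock" here is just a flag toggled to show the bracketing. */
struct fake_lock { int held; };

struct fake_dev {
	struct fake_lock *parent_lock;	/* NULL when the device has no parent */
	int attached;
};

static int rebind_device(struct fake_dev *d)
{
	if (d->parent_lock)
		d->parent_lock->held = 1;	/* device_lock(parent) */
	d->attached = 1;			/* stands in for device_attach() */
	if (d->parent_lock)
		d->parent_lock->held = 0;	/* device_unlock(parent) */
	return 0;
}
```

Both the lock and the unlock are guarded by the same NULL check, so a parentless device still rebinds without touching a lock.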
+1 -1
drivers/usb/usbip/usbip_common.h
··· 243 243 #define VUDC_EVENT_ERROR_USB (USBIP_EH_SHUTDOWN | USBIP_EH_UNUSABLE) 244 244 #define VUDC_EVENT_ERROR_MALLOC (USBIP_EH_SHUTDOWN | USBIP_EH_UNUSABLE) 245 245 246 - #define VDEV_EVENT_REMOVED (USBIP_EH_SHUTDOWN | USBIP_EH_BYE) 246 + #define VDEV_EVENT_REMOVED (USBIP_EH_SHUTDOWN | USBIP_EH_RESET | USBIP_EH_BYE) 247 247 #define VDEV_EVENT_DOWN (USBIP_EH_SHUTDOWN | USBIP_EH_RESET) 248 248 #define VDEV_EVENT_ERROR_TCP (USBIP_EH_SHUTDOWN | USBIP_EH_RESET) 249 249 #define VDEV_EVENT_ERROR_MALLOC (USBIP_EH_SHUTDOWN | USBIP_EH_UNUSABLE)
-4
drivers/usb/usbip/usbip_event.c
··· 91 91 unset_event(ud, USBIP_EH_UNUSABLE); 92 92 } 93 93 94 - /* Stop the error handler. */ 95 - if (ud->event & USBIP_EH_BYE) 96 - usbip_dbg_eh("removed %p\n", ud); 97 - 98 94 wake_up(&ud->eh_waitq); 99 95 } 100 96 }
+13
drivers/usb/usbip/vhci_hcd.c
··· 354 354 usbip_dbg_vhci_rh(" ClearHubFeature\n"); 355 355 break; 356 356 case ClearPortFeature: 357 + if (rhport < 0) 358 + goto error; 357 359 switch (wValue) { 358 360 case USB_PORT_FEAT_SUSPEND: 359 361 if (hcd->speed == HCD_USB3) { ··· 513 511 goto error; 514 512 } 515 513 514 + if (rhport < 0) 515 + goto error; 516 + 516 517 vhci_hcd->port_status[rhport] |= USB_PORT_STAT_SUSPEND; 517 518 break; 518 519 case USB_PORT_FEAT_POWER: 519 520 usbip_dbg_vhci_rh( 520 521 " SetPortFeature: USB_PORT_FEAT_POWER\n"); 522 + if (rhport < 0) 523 + goto error; 521 524 if (hcd->speed == HCD_USB3) 522 525 vhci_hcd->port_status[rhport] |= USB_SS_PORT_STAT_POWER; 523 526 else ··· 531 524 case USB_PORT_FEAT_BH_PORT_RESET: 532 525 usbip_dbg_vhci_rh( 533 526 " SetPortFeature: USB_PORT_FEAT_BH_PORT_RESET\n"); 527 + if (rhport < 0) 528 + goto error; 534 529 /* Applicable only for USB3.0 hub */ 535 530 if (hcd->speed != HCD_USB3) { 536 531 pr_err("USB_PORT_FEAT_BH_PORT_RESET req not " ··· 543 534 case USB_PORT_FEAT_RESET: 544 535 usbip_dbg_vhci_rh( 545 536 " SetPortFeature: USB_PORT_FEAT_RESET\n"); 537 + if (rhport < 0) 538 + goto error; 546 539 /* if it's already enabled, disable */ 547 540 if (hcd->speed == HCD_USB3) { 548 541 vhci_hcd->port_status[rhport] = 0; ··· 565 554 default: 566 555 usbip_dbg_vhci_rh(" SetPortFeature: default %d\n", 567 556 wValue); 557 + if (rhport < 0) 558 + goto error; 568 559 if (hcd->speed == HCD_USB3) { 569 560 if ((vhci_hcd->port_status[rhport] & 570 561 USB_SS_PORT_STAT_POWER) != 0) {
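Every `SetPortFeature`/`ClearPortFeature` branch above now rejects a negative `rhport` before it is used to index `port_status[]`. The check condensed into one helper (port count and names are stand-ins for the sketch):

```c
#include <assert.h>

#define SKETCH_NPORTS 8		/* stand-in for the vhci port count */

/* Validate the decoded roothub port index before it indexes the status
 * array, as the added "if (rhport < 0) goto error;" checks do above. */
static int set_port_feature(unsigned int *port_status, int rhport,
			    unsigned int feature)
{
	if (rhport < 0 || rhport >= SKETCH_NPORTS)
		return -1;		/* the error-label path */
	port_status[rhport] |= feature;
	return 0;
}
```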
+38 -32
drivers/virt/vboxguest/vboxguest_core.c
··· 114 114 } 115 115 116 116 out: 117 - kfree(req); 117 + vbg_req_free(req, sizeof(*req)); 118 118 kfree(pages); 119 119 } 120 120 ··· 144 144 145 145 rc = vbg_req_perform(gdev, req); 146 146 147 - kfree(req); 147 + vbg_req_free(req, sizeof(*req)); 148 148 149 149 if (rc < 0) { 150 150 vbg_err("%s error: %d\n", __func__, rc); ··· 214 214 ret = vbg_status_code_to_errno(rc); 215 215 216 216 out_free: 217 - kfree(req2); 218 - kfree(req1); 217 + vbg_req_free(req2, sizeof(*req2)); 218 + vbg_req_free(req1, sizeof(*req1)); 219 219 return ret; 220 220 } 221 221 ··· 245 245 if (rc == VERR_NOT_IMPLEMENTED) /* Compatibility with older hosts. */ 246 246 rc = VINF_SUCCESS; 247 247 248 - kfree(req); 248 + vbg_req_free(req, sizeof(*req)); 249 249 250 250 return vbg_status_code_to_errno(rc); 251 251 } ··· 431 431 rc = vbg_req_perform(gdev, req); 432 432 do_div(req->interval_ns, 1000000); /* ns -> ms */ 433 433 gdev->heartbeat_interval_ms = req->interval_ns; 434 - kfree(req); 434 + vbg_req_free(req, sizeof(*req)); 435 435 436 436 return vbg_status_code_to_errno(rc); 437 437 } ··· 454 454 if (ret < 0) 455 455 return ret; 456 456 457 - /* 458 - * Preallocate the request to use it from the timer callback because: 459 - * 1) on Windows vbg_req_alloc must be called at IRQL <= APC_LEVEL 460 - * and the timer callback runs at DISPATCH_LEVEL; 461 - * 2) avoid repeated allocations. 
462 - */ 463 457 gdev->guest_heartbeat_req = vbg_req_alloc( 464 458 sizeof(*gdev->guest_heartbeat_req), 465 459 VMMDEVREQ_GUEST_HEARTBEAT); ··· 475 481 { 476 482 del_timer_sync(&gdev->heartbeat_timer); 477 483 vbg_heartbeat_host_config(gdev, false); 478 - kfree(gdev->guest_heartbeat_req); 479 - 484 + vbg_req_free(gdev->guest_heartbeat_req, 485 + sizeof(*gdev->guest_heartbeat_req)); 480 486 } 481 487 482 488 /** ··· 537 543 if (rc < 0) 538 544 vbg_err("%s error, rc: %d\n", __func__, rc); 539 545 540 - kfree(req); 546 + vbg_req_free(req, sizeof(*req)); 541 547 return vbg_status_code_to_errno(rc); 542 548 } 543 549 ··· 611 617 612 618 out: 613 619 mutex_unlock(&gdev->session_mutex); 614 - kfree(req); 620 + vbg_req_free(req, sizeof(*req)); 615 621 616 622 return ret; 617 623 } ··· 636 642 if (rc < 0) 637 643 vbg_err("%s error, rc: %d\n", __func__, rc); 638 644 639 - kfree(req); 645 + vbg_req_free(req, sizeof(*req)); 640 646 return vbg_status_code_to_errno(rc); 641 647 } ··· 706 712 707 713 out: 708 714 mutex_unlock(&gdev->session_mutex); 709 - kfree(req); 715 + vbg_req_free(req, sizeof(*req)); 710 716 711 717 return ret; 712 718 } ··· 727 733 728 734 rc = vbg_req_perform(gdev, req); 729 735 ret = vbg_status_code_to_errno(rc); 730 - if (ret) 736 + if (ret) { 737 + vbg_err("%s error: %d\n", __func__, rc); 731 738 goto out; 739 + } 732 740 733 741 snprintf(gdev->host_version, sizeof(gdev->host_version), "%u.%u.%ur%u", 734 742 req->major, req->minor, req->build, req->revision); ··· 745 749 } 746 750 747 751 out: 748 - kfree(req); 752 + vbg_req_free(req, sizeof(*req)); 749 753 return ret; 750 754 } ··· 843 847 return 0; 844 848 845 849 err_free_reqs: 846 - kfree(gdev->mouse_status_req); 847 - kfree(gdev->ack_events_req); 848 - kfree(gdev->cancel_req); 849 - kfree(gdev->mem_balloon.change_req); 850 - kfree(gdev->mem_balloon.get_req); 850 + vbg_req_free(gdev->mouse_status_req, 851 + sizeof(*gdev->mouse_status_req)); 852 + vbg_req_free(gdev->ack_events_req, 853 + sizeof(*gdev->ack_events_req)); 854 + vbg_req_free(gdev->cancel_req, 855 + sizeof(*gdev->cancel_req)); 856 + vbg_req_free(gdev->mem_balloon.change_req, 857 + sizeof(*gdev->mem_balloon.change_req)); 858 + vbg_req_free(gdev->mem_balloon.get_req, 859 + sizeof(*gdev->mem_balloon.get_req)); 851 860 return ret; 852 861 } ··· 873 872 vbg_reset_host_capabilities(gdev); 874 873 vbg_core_set_mouse_status(gdev, 0); 875 874 876 - kfree(gdev->mouse_status_req); 877 - kfree(gdev->ack_events_req); 878 - kfree(gdev->cancel_req); 879 - kfree(gdev->mem_balloon.change_req); 880 - kfree(gdev->mem_balloon.get_req); 875 + vbg_req_free(gdev->mouse_status_req, 876 + sizeof(*gdev->mouse_status_req)); 877 + vbg_req_free(gdev->ack_events_req, 878 + sizeof(*gdev->ack_events_req)); 879 + vbg_req_free(gdev->cancel_req, 880 + sizeof(*gdev->cancel_req)); 881 + vbg_req_free(gdev->mem_balloon.change_req, 882 + sizeof(*gdev->mem_balloon.change_req)); 883 + vbg_req_free(gdev->mem_balloon.get_req, 884 + sizeof(*gdev->mem_balloon.get_req)); 881 885 } 882 886 883 887 /** ··· 1421 1415 req->flags = dump->u.in.flags; 1422 1416 dump->hdr.rc = vbg_req_perform(gdev, req); 1423 1417 1424 - kfree(req); 1418 + vbg_req_free(req, sizeof(*req)); 1425 1419 return 0; 1426 1420 } 1427 1421 ··· 1519 1513 if (rc < 0) 1520 1514 vbg_err("%s error, rc: %d\n", __func__, rc); 1521 1515 1522 - kfree(req); 1516 + vbg_req_free(req, sizeof(*req)); 1523 1517 return vbg_status_code_to_errno(rc); 1524 1518 1525 1519
+9
drivers/virt/vboxguest/vboxguest_core.h
··· 171 171 172 172 void vbg_linux_mouse_event(struct vbg_dev *gdev); 173 173 174 + /* Private (non exported) functions form vboxguest_utils.c */ 175 + void *vbg_req_alloc(size_t len, enum vmmdev_request_type req_type); 176 + void vbg_req_free(void *req, size_t len); 177 + int vbg_req_perform(struct vbg_dev *gdev, void *req); 178 + int vbg_hgcm_call32( 179 + struct vbg_dev *gdev, u32 client_id, u32 function, u32 timeout_ms, 180 + struct vmmdev_hgcm_function_parameter32 *parm32, u32 parm_count, 181 + int *vbox_status); 182 + 174 183 #endif
+16 -3
drivers/virt/vboxguest/vboxguest_linux.c
··· 87 87 struct vbg_session *session = filp->private_data; 88 88 size_t returned_size, size; 89 89 struct vbg_ioctl_hdr hdr; 90 + bool is_vmmdev_req; 90 91 int ret = 0; 91 92 void *buf; 92 93 ··· 107 106 if (size > SZ_16M) 108 107 return -E2BIG; 109 108 110 - /* __GFP_DMA32 because IOCTL_VMMDEV_REQUEST passes this to the host */ 111 - buf = kmalloc(size, GFP_KERNEL | __GFP_DMA32); 109 + /* 110 + * IOCTL_VMMDEV_REQUEST needs the buffer to be below 4G to avoid 111 + * the need for a bounce-buffer and another copy later on. 112 + */ 113 + is_vmmdev_req = (req & ~IOCSIZE_MASK) == VBG_IOCTL_VMMDEV_REQUEST(0) || 114 + req == VBG_IOCTL_VMMDEV_REQUEST_BIG; 115 + 116 + if (is_vmmdev_req) 117 + buf = vbg_req_alloc(size, VBG_IOCTL_HDR_TYPE_DEFAULT); 118 + else 119 + buf = kmalloc(size, GFP_KERNEL); 112 120 if (!buf) 113 121 return -ENOMEM; 114 122 ··· 142 132 ret = -EFAULT; 143 133 144 134 out: 145 - kfree(buf); 135 + if (is_vmmdev_req) 136 + vbg_req_free(buf, size); 137 + else 138 + kfree(buf); 146 139 147 140 return ret; 148 141 }
+13 -4
drivers/virt/vboxguest/vboxguest_utils.c
··· 65 65 void *vbg_req_alloc(size_t len, enum vmmdev_request_type req_type) 66 66 { 67 67 struct vmmdev_request_header *req; 68 + int order = get_order(PAGE_ALIGN(len)); 68 69 69 - req = kmalloc(len, GFP_KERNEL | __GFP_DMA32); 70 + req = (void *)__get_free_pages(GFP_KERNEL | GFP_DMA32, order); 70 71 if (!req) 71 72 return NULL; 72 73 ··· 81 80 req->reserved2 = 0; 82 81 83 82 return req; 83 + } 84 + 85 + void vbg_req_free(void *req, size_t len) 86 + { 87 + if (!req) 88 + return; 89 + 90 + free_pages((unsigned long)req, get_order(PAGE_ALIGN(len))); 84 91 } 85 92 86 93 /* Note this function returns a VBox status code, not a negative errno!! */ ··· 146 137 rc = hgcm_connect->header.result; 147 138 } 148 139 149 - kfree(hgcm_connect); 140 + vbg_req_free(hgcm_connect, sizeof(*hgcm_connect)); 150 141 151 142 *vbox_status = rc; 152 143 return 0; ··· 175 166 if (rc >= 0) 176 167 rc = hgcm_disconnect->header.result; 177 168 178 - kfree(hgcm_disconnect); 169 + vbg_req_free(hgcm_disconnect, sizeof(*hgcm_disconnect)); 179 170 180 171 *vbox_status = rc; 181 172 return 0; ··· 632 623 } 633 624 634 625 if (!leak_it) 635 - kfree(call); 626 + vbg_req_free(call, size); 636 627 637 628 free_bounce_bufs: 638 629 if (bounce_bufs) {
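`vbg_req_alloc()` above switches from `kmalloc()` to whole pages of order `get_order(PAGE_ALIGN(len))`, and `vbg_req_free()` releases the same order. A userspace re-derivation of that order arithmetic (the macro names are prefixed `SKETCH_` because they shadow kernel definitions; 4 KiB pages assumed):

```c
#include <assert.h>
#include <stddef.h>

#define SKETCH_PAGE_SIZE 4096UL
#define SKETCH_PAGE_ALIGN(x) \
	(((x) + SKETCH_PAGE_SIZE - 1) & ~(SKETCH_PAGE_SIZE - 1))

/* Smallest order n such that (PAGE_SIZE << n) covers the page-aligned
 * length -- the same value the kernel's get_order() computes, so alloc
 * and free agree on how many pages were handed out. */
static int order_for(size_t len)
{
	int order = 0;
	size_t aligned = SKETCH_PAGE_ALIGN(len);

	while ((SKETCH_PAGE_SIZE << order) < aligned)
		order++;
	return order;
}
```

Because the free side recomputes the order from the same `len`, the caller must pass the original allocation size to `vbg_req_free()`, which is exactly why every `kfree(req)` in the hunks above grew a `sizeof(*req)` argument.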
+25 -3
fs/ceph/xattr.c
··· 228 228 229 229 static bool ceph_vxattrcb_quota_exists(struct ceph_inode_info *ci) 230 230 { 231 - return (ci->i_max_files || ci->i_max_bytes); 231 + bool ret = false; 232 + spin_lock(&ci->i_ceph_lock); 233 + if ((ci->i_max_files || ci->i_max_bytes) && 234 + ci->i_vino.snap == CEPH_NOSNAP && 235 + ci->i_snap_realm && 236 + ci->i_snap_realm->ino == ci->i_vino.ino) 237 + ret = true; 238 + spin_unlock(&ci->i_ceph_lock); 239 + return ret; 232 240 } 233 241 234 242 static size_t ceph_vxattrcb_quota(struct ceph_inode_info *ci, char *val, ··· 1016 1008 char *newval = NULL; 1017 1009 struct ceph_inode_xattr *xattr = NULL; 1018 1010 int required_blob_size; 1011 + bool check_realm = false; 1019 1012 bool lock_snap_rwsem = false; 1020 1013 1021 1014 if (ceph_snap(inode) != CEPH_NOSNAP) 1022 1015 return -EROFS; 1023 1016 1024 1017 vxattr = ceph_match_vxattr(inode, name); 1025 - if (vxattr && vxattr->readonly) 1026 - return -EOPNOTSUPP; 1018 + if (vxattr) { 1019 + if (vxattr->readonly) 1020 + return -EOPNOTSUPP; 1021 + if (value && !strncmp(vxattr->name, "ceph.quota", 10)) 1022 + check_realm = true; 1023 + } 1027 1024 1028 1025 /* pass any unhandled ceph.* xattrs through to the MDS */ 1029 1026 if (!strncmp(name, XATTR_CEPH_PREFIX, XATTR_CEPH_PREFIX_LEN)) ··· 1122 1109 err = -EBUSY; 1123 1110 } else { 1124 1111 err = ceph_sync_setxattr(inode, name, value, size, flags); 1112 + if (err >= 0 && check_realm) { 1113 + /* check if snaprealm was created for quota inode */ 1114 + spin_lock(&ci->i_ceph_lock); 1115 + if ((ci->i_max_files || ci->i_max_bytes) && 1116 + !(ci->i_snap_realm && 1117 + ci->i_snap_realm->ino == ci->i_vino.ino)) 1118 + err = -EOPNOTSUPP; 1119 + spin_unlock(&ci->i_ceph_lock); 1120 + } 1125 1121 } 1126 1122 out: 1127 1123 ceph_free_cap_flush(prealloc_cf);
+3
fs/cifs/cifssmb.c
··· 455 455 server->sign = true; 456 456 } 457 457 458 + if (cifs_rdma_enabled(server) && server->sign) 459 + cifs_dbg(VFS, "Signing is enabled, and RDMA read/write will be disabled"); 460 + 458 461 return 0; 459 462 } 460 463
+16 -16
fs/cifs/connect.c
··· 2959 2959 } 2960 2960 } 2961 2961 2962 + if (volume_info->seal) { 2963 + if (ses->server->vals->protocol_id == 0) { 2964 + cifs_dbg(VFS, 2965 + "SMB3 or later required for encryption\n"); 2966 + rc = -EOPNOTSUPP; 2967 + goto out_fail; 2968 + } else if (tcon->ses->server->capabilities & 2969 + SMB2_GLOBAL_CAP_ENCRYPTION) 2970 + tcon->seal = true; 2971 + else { 2972 + cifs_dbg(VFS, "Encryption is not supported on share\n"); 2973 + rc = -EOPNOTSUPP; 2974 + goto out_fail; 2975 + } 2976 + } 2977 + 2962 2978 /* 2963 2979 * BB Do we need to wrap session_mutex around this TCon call and Unix 2964 2980 * SetFS as we do on SessSetup and reconnect? ··· 3021 3005 goto out_fail; 3022 3006 } 3023 3007 tcon->use_resilient = true; 3024 - } 3025 - 3026 - if (volume_info->seal) { 3027 - if (ses->server->vals->protocol_id == 0) { 3028 - cifs_dbg(VFS, 3029 - "SMB3 or later required for encryption\n"); 3030 - rc = -EOPNOTSUPP; 3031 - goto out_fail; 3032 - } else if (tcon->ses->server->capabilities & 3033 - SMB2_GLOBAL_CAP_ENCRYPTION) 3034 - tcon->seal = true; 3035 - else { 3036 - cifs_dbg(VFS, "Encryption is not supported on share\n"); 3037 - rc = -EOPNOTSUPP; 3038 - goto out_fail; 3039 - } 3040 3008 } 3041 3009 3042 3010 /*
+14 -4
fs/cifs/smb2ops.c
··· 252 252 wsize = volume_info->wsize ? volume_info->wsize : CIFS_DEFAULT_IOSIZE; 253 253 wsize = min_t(unsigned int, wsize, server->max_write); 254 254 #ifdef CONFIG_CIFS_SMB_DIRECT 255 - if (server->rdma) 256 - wsize = min_t(unsigned int, 255 + if (server->rdma) { 256 + if (server->sign) 257 + wsize = min_t(unsigned int, 258 + wsize, server->smbd_conn->max_fragmented_send_size); 259 + else 260 + wsize = min_t(unsigned int, 257 261 wsize, server->smbd_conn->max_readwrite_size); 262 + } 258 263 #endif 259 264 if (!(server->capabilities & SMB2_GLOBAL_CAP_LARGE_MTU)) 260 265 wsize = min_t(unsigned int, wsize, SMB2_MAX_BUFFER_SIZE); ··· 277 272 rsize = volume_info->rsize ? volume_info->rsize : CIFS_DEFAULT_IOSIZE; 278 273 rsize = min_t(unsigned int, rsize, server->max_read); 279 274 #ifdef CONFIG_CIFS_SMB_DIRECT 280 - if (server->rdma) 281 - rsize = min_t(unsigned int, 275 + if (server->rdma) { 276 + if (server->sign) 277 + rsize = min_t(unsigned int, 278 + rsize, server->smbd_conn->max_fragmented_recv_size); 279 + else 280 + rsize = min_t(unsigned int, 282 281 rsize, server->smbd_conn->max_readwrite_size); 282 + } 283 283 #endif 284 284 285 285 if (!(server->capabilities & SMB2_GLOBAL_CAP_LARGE_MTU))
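The wsize/rsize hunks pick a different cap depending on signing: with signing enabled, RDMA read/write is off, so the fragmented send/receive limit applies instead of the direct `max_readwrite_size`. The selection rule isolated into one function (the field names mirror the hunk; the values in the test are invented for the sketch):

```c
#include <assert.h>

/* Cap a requested I/O size for an SMB Direct connection: no cap without
 * RDMA; the fragmented limit when signing forces non-RDMA transfers;
 * the direct RDMA limit otherwise. */
static unsigned int cap_iosize(unsigned int requested, int rdma, int sign,
			       unsigned int max_fragmented,
			       unsigned int max_readwrite)
{
	unsigned int cap;

	if (!rdma)
		return requested;
	cap = sign ? max_fragmented : max_readwrite;
	return requested < cap ? requested : cap;
}
```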
+7 -6
fs/cifs/smb2pdu.c
··· 383 383 build_encrypt_ctxt(struct smb2_encryption_neg_context *pneg_ctxt) 384 384 { 385 385 pneg_ctxt->ContextType = SMB2_ENCRYPTION_CAPABILITIES; 386 - pneg_ctxt->DataLength = cpu_to_le16(6); 387 - pneg_ctxt->CipherCount = cpu_to_le16(2); 388 - pneg_ctxt->Ciphers[0] = SMB2_ENCRYPTION_AES128_GCM; 389 - pneg_ctxt->Ciphers[1] = SMB2_ENCRYPTION_AES128_CCM; 386 + pneg_ctxt->DataLength = cpu_to_le16(4); /* Cipher Count + le16 cipher */ 387 + pneg_ctxt->CipherCount = cpu_to_le16(1); 388 + /* pneg_ctxt->Ciphers[0] = SMB2_ENCRYPTION_AES128_GCM;*/ /* not supported yet */ 389 + pneg_ctxt->Ciphers[0] = SMB2_ENCRYPTION_AES128_CCM; 390 390 } 391 391 392 392 static void ··· 444 444 return -EINVAL; 445 445 } 446 446 server->cipher_type = ctxt->Ciphers[0]; 447 + server->capabilities |= SMB2_GLOBAL_CAP_ENCRYPTION; 447 448 return 0; 448 449 } 449 450 ··· 2591 2590 * If we want to do a RDMA write, fill in and append 2592 2591 * smbd_buffer_descriptor_v1 to the end of read request 2593 2592 */ 2594 - if (server->rdma && rdata && 2593 + if (server->rdma && rdata && !server->sign && 2595 2594 rdata->bytes >= server->smbd_conn->rdma_readwrite_threshold) { 2596 2595 2597 2596 struct smbd_buffer_descriptor_v1 *v1; ··· 2969 2968 * If we want to do a server RDMA read, fill in and append 2970 2969 * smbd_buffer_descriptor_v1 to the end of write request 2971 2970 */ 2972 - if (server->rdma && wdata->bytes >= 2971 + if (server->rdma && !server->sign && wdata->bytes >= 2973 2972 server->smbd_conn->rdma_readwrite_threshold) { 2974 2973 2975 2974 struct smbd_buffer_descriptor_v1 *v1;
+1 -1
fs/cifs/smb2pdu.h
··· 297 297 __le16 DataLength; 298 298 __le32 Reserved; 299 299 __le16 CipherCount; /* AES-128-GCM and AES-128-CCM */ 300 - __le16 Ciphers[2]; /* Ciphers[0] since only one used now */ 300 + __le16 Ciphers[1]; /* Ciphers[0] since only one used now */ 301 301 } __packed; 302 302 303 303 struct smb2_negotiate_rsp {
+12 -24
fs/cifs/smbdirect.c
··· 2086 2086 int start, i, j; 2087 2087 int max_iov_size = 2088 2088 info->max_send_size - sizeof(struct smbd_data_transfer); 2089 - struct kvec iov[SMBDIRECT_MAX_SGE]; 2089 + struct kvec *iov; 2090 2090 int rc; 2091 2091 2092 2092 info->smbd_send_pending++; ··· 2096 2096 } 2097 2097 2098 2098 /* 2099 - * This usually means a configuration error 2100 - * We use RDMA read/write for packet size > rdma_readwrite_threshold 2101 - * as long as it's properly configured we should never get into this 2102 - * situation 2103 - */ 2104 - if (rqst->rq_nvec + rqst->rq_npages > SMBDIRECT_MAX_SGE) { 2105 - log_write(ERR, "maximum send segment %x exceeding %x\n", 2106 - rqst->rq_nvec + rqst->rq_npages, SMBDIRECT_MAX_SGE); 2107 - rc = -EINVAL; 2108 - goto done; 2109 - } 2110 - 2111 - /* 2112 - * Remove the RFC1002 length defined in MS-SMB2 section 2.1 2113 - * It is used only for TCP transport 2099 + * Skip the RFC1002 length defined in MS-SMB2 section 2.1 2100 + * It is used only for TCP transport in the iov[0] 2114 2101 * In future we may want to add a transport layer under protocol 2115 2102 * layer so this will only be issued to TCP transport 2116 2103 */ 2117 - iov[0].iov_base = (char *)rqst->rq_iov[0].iov_base + 4; 2118 - iov[0].iov_len = rqst->rq_iov[0].iov_len - 4; 2119 - buflen += iov[0].iov_len; 2104 + 2105 + if (rqst->rq_iov[0].iov_len != 4) { 2106 + log_write(ERR, "expected the pdu length in 1st iov, but got %zu\n", rqst->rq_iov[0].iov_len); 2107 + return -EINVAL; 2108 + } 2109 + iov = &rqst->rq_iov[1]; 2120 2110 2121 2111 /* total up iov array first */ 2122 - for (i = 1; i < rqst->rq_nvec; i++) { 2123 - iov[i].iov_base = rqst->rq_iov[i].iov_base; 2124 - iov[i].iov_len = rqst->rq_iov[i].iov_len; 2112 + for (i = 0; i < rqst->rq_nvec-1; i++) { 2125 2113 buflen += iov[i].iov_len; 2126 2114 } 2127 2115 ··· 2186 2198 goto done; 2187 2199 } 2188 2200 i++; 2189 - if (i == rqst->rq_nvec) 2201 + if (i == rqst->rq_nvec-1) 2190 2202 break; 2191 2203 } 2192 2204 start = i; 2193 2205 buflen = 0; 2194 2206 } else { 2195 2207 i++; 2196 - if (i == rqst->rq_nvec) { 2208 + if (i == rqst->rq_nvec-1) { 2197 2209 /* send out all remaining vecs */ 2198 2210 remaining_data_length -= buflen; 2199 2211 log_write(INFO,
+6 -3
fs/cifs/transport.c
··· 753 753 goto out; 754 754 755 755 #ifdef CONFIG_CIFS_SMB311 756 - if (ses->status == CifsNew) 756 + if ((ses->status == CifsNew) || (optype & CIFS_NEG_OP)) 757 757 smb311_update_preauth_hash(ses, rqst->rq_iov+1, 758 758 rqst->rq_nvec-1); 759 759 #endif ··· 798 798 *resp_buf_type = CIFS_SMALL_BUFFER; 799 799 800 800 #ifdef CONFIG_CIFS_SMB311 801 - if (ses->status == CifsNew) { 801 + if ((ses->status == CifsNew) || (optype & CIFS_NEG_OP)) { 802 802 struct kvec iov = { 803 803 .iov_base = buf + 4, 804 804 .iov_len = get_rfc1002_length(buf) ··· 834 834 if (n_vec + 1 > CIFS_MAX_IOV_SIZE) { 835 835 new_iov = kmalloc(sizeof(struct kvec) * (n_vec + 1), 836 836 GFP_KERNEL); 837 - if (!new_iov) 837 + if (!new_iov) { 838 + /* otherwise cifs_send_recv below sets resp_buf_type */ 839 + *resp_buf_type = CIFS_NO_BUFFER; 838 840 return -ENOMEM; 841 + } 839 842 } else 840 843 new_iov = s_iov; 841 844
+5 -4
fs/ext4/balloc.c
··· 321 321 struct ext4_sb_info *sbi = EXT4_SB(sb); 322 322 ext4_grpblk_t offset; 323 323 ext4_grpblk_t next_zero_bit; 324 + ext4_grpblk_t max_bit = EXT4_CLUSTERS_PER_GROUP(sb); 324 325 ext4_fsblk_t blk; 325 326 ext4_fsblk_t group_first_block; 326 327 ··· 339 338 /* check whether block bitmap block number is set */ 340 339 blk = ext4_block_bitmap(sb, desc); 341 340 offset = blk - group_first_block; 342 - if (offset < 0 || EXT4_B2C(sbi, offset) >= sb->s_blocksize || 341 + if (offset < 0 || EXT4_B2C(sbi, offset) >= max_bit || 343 342 !ext4_test_bit(EXT4_B2C(sbi, offset), bh->b_data)) 344 343 /* bad block bitmap */ 345 344 return blk; ··· 347 346 /* check whether the inode bitmap block number is set */ 348 347 blk = ext4_inode_bitmap(sb, desc); 349 348 offset = blk - group_first_block; 350 - if (offset < 0 || EXT4_B2C(sbi, offset) >= sb->s_blocksize || 349 + if (offset < 0 || EXT4_B2C(sbi, offset) >= max_bit || 351 350 !ext4_test_bit(EXT4_B2C(sbi, offset), bh->b_data)) 352 351 /* bad block bitmap */ 353 352 return blk; ··· 355 354 /* check whether the inode table block number is set */ 356 355 blk = ext4_inode_table(sb, desc); 357 356 offset = blk - group_first_block; 358 - if (offset < 0 || EXT4_B2C(sbi, offset) >= sb->s_blocksize || 359 - EXT4_B2C(sbi, offset + sbi->s_itb_per_group) >= sb->s_blocksize) 357 + if (offset < 0 || EXT4_B2C(sbi, offset) >= max_bit || 358 + EXT4_B2C(sbi, offset + sbi->s_itb_per_group) >= max_bit) 360 359 return blk; 361 360 next_zero_bit = ext4_find_next_zero_bit(bh->b_data, 362 361 EXT4_B2C(sbi, offset + sbi->s_itb_per_group),
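The balloc fix replaces `sb->s_blocksize` with `EXT4_CLUSTERS_PER_GROUP(sb)` as the upper bound on bitmap offsets: the bitmap has one bit per cluster in the group, so the block size (typically 4096) was the wrong and far-too-small limit, misreporting valid offsets as corruption. The corrected check reduced to its bounds test (values in the test are a typical 4 KiB-block layout, chosen for the sketch):

```c
#include <assert.h>

/* An offset into a group's bitmap is valid iff it falls inside
 * [0, clusters-per-group), the max_bit introduced by the fix above. */
static int offset_in_group(long offset, long max_bit)
{
	return offset >= 0 && offset < max_bit;
}
```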
+11 -5
fs/ext4/extents.c
··· 5329 5329 stop = le32_to_cpu(extent->ee_block); 5330 5330 5331 5331 /* 5332 - * In case of left shift, Don't start shifting extents until we make 5333 - * sure the hole is big enough to accommodate the shift. 5332 + * For left shifts, make sure the hole on the left is big enough to 5333 + * accommodate the shift. For right shifts, make sure the last extent 5334 + * won't be shifted beyond EXT_MAX_BLOCKS. 5334 5335 */ 5335 5336 if (SHIFT == SHIFT_LEFT) { 5336 5337 path = ext4_find_extent(inode, start - 1, &path, ··· 5351 5350 5352 5351 if ((start == ex_start && shift > ex_start) || 5353 5352 (shift > start - ex_end)) { 5354 - ext4_ext_drop_refs(path); 5355 - kfree(path); 5356 - return -EINVAL; 5353 + ret = -EINVAL; 5354 + goto out; 5355 + } 5356 + } else { 5357 + if (shift > EXT_MAX_BLOCKS - 5358 + (stop + ext4_ext_get_actual_len(extent))) { 5359 + ret = -EINVAL; 5360 + goto out; 5357 5361 } 5358 5362 } 5359 5363
+1
fs/ext4/super.c
··· 5886 5886 MODULE_AUTHOR("Remy Card, Stephen Tweedie, Andrew Morton, Andreas Dilger, Theodore Ts'o and others"); 5887 5887 MODULE_DESCRIPTION("Fourth Extended Filesystem"); 5888 5888 MODULE_LICENSE("GPL"); 5889 + MODULE_SOFTDEP("pre: crc32c"); 5889 5890 module_init(ext4_init_fs) 5890 5891 module_exit(ext4_exit_fs)
+1
fs/jbd2/transaction.c
··· 532 532 */ 533 533 ret = start_this_handle(journal, handle, GFP_NOFS); 534 534 if (ret < 0) { 535 + handle->h_journal = journal; 535 536 jbd2_journal_free_reserved(handle); 536 537 return ret; 537 538 }
+1 -1
include/asm-generic/vmlinux.lds.h
··· 188 188 #endif 189 189 190 190 #ifdef CONFIG_SERIAL_EARLYCON 191 - #define EARLYCON_TABLE() STRUCT_ALIGN(); \ 191 + #define EARLYCON_TABLE() . = ALIGN(8); \ 192 192 VMLINUX_SYMBOL(__earlycon_table) = .; \ 193 193 KEEP(*(__earlycon_table)) \ 194 194 VMLINUX_SYMBOL(__earlycon_table_end) = .;
+14 -2
include/kvm/arm_psci.h
··· 37 37 * Our PSCI implementation stays the same across versions from 38 38 * v0.2 onward, only adding the few mandatory functions (such 39 39 * as FEATURES with 1.0) that are required by newer 40 - * revisions. It is thus safe to return the latest. 40 + * revisions. It is thus safe to return the latest, unless 41 + * userspace has instructed us otherwise. 41 42 */ 42 - if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features)) 43 + if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features)) { 44 + if (vcpu->kvm->arch.psci_version) 45 + return vcpu->kvm->arch.psci_version; 46 + 43 47 return KVM_ARM_PSCI_LATEST; 48 + } 44 49 45 50 return KVM_ARM_PSCI_0_1; 46 51 } 47 52 48 53 49 54 int kvm_hvc_call_handler(struct kvm_vcpu *vcpu); 55 + 56 + struct kvm_one_reg; 57 + 58 + int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu); 59 + int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices); 60 + int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); 61 + int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); 50 62 51 63 #endif /* __KVM_ARM_PSCI_H__ */
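The arm_psci hunk changes the version-reporting rule: a PSCI-0.2-capable vcpu returns the userspace-pinned `psci_version` when one has been set, and the latest version otherwise. That rule as a pure function (the numeric encodings are stand-ins for this sketch, not the real `KVM_ARM_PSCI_*` values):

```c
#include <assert.h>

#define SKETCH_PSCI_0_1		0x00001u
#define SKETCH_PSCI_LATEST	0x10000u

/* 0 for "pinned" means "userspace never set a version", matching the
 * zero-initialized psci_version field checked in the hunk above. */
static unsigned int psci_version(int has_psci_0_2, unsigned int pinned)
{
	if (has_psci_0_2)
		return pinned ? pinned : SKETCH_PSCI_LATEST;
	return SKETCH_PSCI_0_1;
}
```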
+3
include/linux/blk-mq.h
··· 9 9 struct blk_mq_tags; 10 10 struct blk_flush_queue; 11 11 12 + /** 13 + * struct blk_mq_hw_ctx - State for a hardware queue facing the hardware block device 14 + */ 12 15 struct blk_mq_hw_ctx { 13 16 struct { 14 17 spinlock_t lock;
+6
include/linux/blkdev.h
··· 605 605 * initialized by the low level device driver (e.g. scsi/sd.c). 606 606 * Stacking drivers (device mappers) may or may not initialize 607 607 * these fields. 608 + * 609 + * Reads of this information must be protected with blk_queue_enter() / 610 + * blk_queue_exit(). Modifying this information is only allowed while 611 + * no requests are being processed. See also blk_mq_freeze_queue() and 612 + * blk_mq_unfreeze_queue(). 608 613 */ 609 614 unsigned int nr_zones; 610 615 unsigned long *seq_zones_bitmap; ··· 742 737 #define blk_queue_quiesced(q) test_bit(QUEUE_FLAG_QUIESCED, &(q)->queue_flags) 743 738 #define blk_queue_preempt_only(q) \ 744 739 test_bit(QUEUE_FLAG_PREEMPT_ONLY, &(q)->queue_flags) 740 + #define blk_queue_fua(q) test_bit(QUEUE_FLAG_FUA, &(q)->queue_flags) 745 741 746 742 extern int blk_set_preempt_only(struct request_queue *q); 747 743 extern void blk_clear_preempt_only(struct request_queue *q);
+2 -2
include/linux/bpf.h
··· 339 339 void bpf_prog_array_delete_safe(struct bpf_prog_array __rcu *progs, 340 340 struct bpf_prog *old_prog); 341 341 int bpf_prog_array_copy_info(struct bpf_prog_array __rcu *array, 342 - __u32 __user *prog_ids, u32 request_cnt, 343 - __u32 __user *prog_cnt); 342 + u32 *prog_ids, u32 request_cnt, 343 + u32 *prog_cnt); 344 344 int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array, 345 345 struct bpf_prog *exclude_prog, 346 346 struct bpf_prog *include_prog,
+4 -2
include/linux/device.h
··· 256 256 * automatically. 257 257 * @pm: Power management operations of the device which matched 258 258 * this driver. 259 - * @coredump: Called through sysfs to initiate a device coredump. 259 + * @coredump: Called when sysfs entry is written to. The device driver 260 + * is expected to call the dev_coredump API resulting in a 261 + * uevent. 260 262 * @p: Driver core's private data, no one other than the driver 261 263 * core can touch this. 262 264 * ··· 290 288 const struct attribute_group **groups; 291 289 292 290 const struct dev_pm_ops *pm; 293 - int (*coredump) (struct device *dev); 291 + void (*coredump) (struct device *dev); 294 292 295 293 struct driver_private *p; 296 294 };
+2
include/linux/ethtool.h
··· 310 310 * fields should be ignored (use %__ETHTOOL_LINK_MODE_MASK_NBITS 311 311 * instead of the latter), any change to them will be overwritten 312 312 * by kernel. Returns a negative error code or zero. 313 + * @get_fecparam: Get the network device Forward Error Correction parameters. 314 + * @set_fecparam: Set the network device Forward Error Correction parameters. 313 315 * 314 316 * All operations are optional (i.e. the function pointer may be set 315 317 * to %NULL) and callers must take this into account. Callers must
+1 -3
include/linux/fsnotify_backend.h
··· 217 217 union { /* Object pointer [lock] */ 218 218 struct inode *inode; 219 219 struct vfsmount *mnt; 220 - }; 221 - union { 222 - struct hlist_head list; 223 220 /* Used listing heads to free after srcu period expires */ 224 221 struct fsnotify_mark_connector *destroy_next; 225 222 }; 223 + struct hlist_head list; 226 224 }; 227 225 228 226 /*
+2
include/linux/hrtimer.h
··· 161 161 enum hrtimer_base_type { 162 162 HRTIMER_BASE_MONOTONIC, 163 163 HRTIMER_BASE_REALTIME, 164 + HRTIMER_BASE_BOOTTIME, 164 165 HRTIMER_BASE_TAI, 165 166 HRTIMER_BASE_MONOTONIC_SOFT, 166 167 HRTIMER_BASE_REALTIME_SOFT, 168 + HRTIMER_BASE_BOOTTIME_SOFT, 167 169 HRTIMER_BASE_TAI_SOFT, 168 170 HRTIMER_MAX_CLOCK_BASES, 169 171 };
+1
include/linux/mtd/flashchip.h
··· 85 85 unsigned int write_suspended:1; 86 86 unsigned int erase_suspended:1; 87 87 unsigned long in_progress_block_addr; 88 + unsigned long in_progress_block_mask; 88 89 89 90 struct mutex mutex; 90 91 wait_queue_head_t wq; /* Wait on here when we're waiting for the chip
+14 -7
include/linux/serial_core.h
··· 351 351 char name[16]; 352 352 char compatible[128]; 353 353 int (*setup)(struct earlycon_device *, const char *options); 354 - } __aligned(32); 354 + }; 355 355 356 - extern const struct earlycon_id __earlycon_table[]; 357 - extern const struct earlycon_id __earlycon_table_end[]; 356 + extern const struct earlycon_id *__earlycon_table[]; 357 + extern const struct earlycon_id *__earlycon_table_end[]; 358 358 359 359 #if defined(CONFIG_SERIAL_EARLYCON) && !defined(MODULE) 360 360 #define EARLYCON_USED_OR_UNUSED __used ··· 362 362 #define EARLYCON_USED_OR_UNUSED __maybe_unused 363 363 #endif 364 364 365 - #define OF_EARLYCON_DECLARE(_name, compat, fn) \ 366 - static const struct earlycon_id __UNIQUE_ID(__earlycon_##_name) \ 367 - EARLYCON_USED_OR_UNUSED __section(__earlycon_table) \ 365 + #define _OF_EARLYCON_DECLARE(_name, compat, fn, unique_id) \ 366 + static const struct earlycon_id unique_id \ 367 + EARLYCON_USED_OR_UNUSED __initconst \ 368 368 = { .name = __stringify(_name), \ 369 369 .compatible = compat, \ 370 - .setup = fn } 370 + .setup = fn }; \ 371 + static const struct earlycon_id EARLYCON_USED_OR_UNUSED \ 372 + __section(__earlycon_table) \ 373 + * const __PASTE(__p, unique_id) = &unique_id 374 + 375 + #define OF_EARLYCON_DECLARE(_name, compat, fn) \ 376 + _OF_EARLYCON_DECLARE(_name, compat, fn, \ 377 + __UNIQUE_ID(__earlycon_##_name)) 371 378 372 379 #define EARLYCON_DECLARE(_name, fn) OF_EARLYCON_DECLARE(_name, "", fn) 373 380
+2 -2
include/linux/stringhash.h
··· 50 50 * losing bits). This also has the property (wanted by the dcache) 51 51 * that the msbits make a good hash table index. 52 52 */ 53 - static inline unsigned long end_name_hash(unsigned long hash) 53 + static inline unsigned int end_name_hash(unsigned long hash) 54 54 { 55 - return __hash_32((unsigned int)hash); 55 + return hash_long(hash, 32); 56 56 } 57 57 58 58 /*
+75
include/linux/ti-emif-sram.h
··· 60 60 u32 abort_sr; 61 61 } __packed __aligned(8); 62 62 63 + static inline void ti_emif_asm_offsets(void) 64 + { 65 + DEFINE(EMIF_SDCFG_VAL_OFFSET, 66 + offsetof(struct emif_regs_amx3, emif_sdcfg_val)); 67 + DEFINE(EMIF_TIMING1_VAL_OFFSET, 68 + offsetof(struct emif_regs_amx3, emif_timing1_val)); 69 + DEFINE(EMIF_TIMING2_VAL_OFFSET, 70 + offsetof(struct emif_regs_amx3, emif_timing2_val)); 71 + DEFINE(EMIF_TIMING3_VAL_OFFSET, 72 + offsetof(struct emif_regs_amx3, emif_timing3_val)); 73 + DEFINE(EMIF_REF_CTRL_VAL_OFFSET, 74 + offsetof(struct emif_regs_amx3, emif_ref_ctrl_val)); 75 + DEFINE(EMIF_ZQCFG_VAL_OFFSET, 76 + offsetof(struct emif_regs_amx3, emif_zqcfg_val)); 77 + DEFINE(EMIF_PMCR_VAL_OFFSET, 78 + offsetof(struct emif_regs_amx3, emif_pmcr_val)); 79 + DEFINE(EMIF_PMCR_SHDW_VAL_OFFSET, 80 + offsetof(struct emif_regs_amx3, emif_pmcr_shdw_val)); 81 + DEFINE(EMIF_RD_WR_LEVEL_RAMP_CTRL_OFFSET, 82 + offsetof(struct emif_regs_amx3, emif_rd_wr_level_ramp_ctrl)); 83 + DEFINE(EMIF_RD_WR_EXEC_THRESH_OFFSET, 84 + offsetof(struct emif_regs_amx3, emif_rd_wr_exec_thresh)); 85 + DEFINE(EMIF_COS_CONFIG_OFFSET, 86 + offsetof(struct emif_regs_amx3, emif_cos_config)); 87 + DEFINE(EMIF_PRIORITY_TO_COS_MAPPING_OFFSET, 88 + offsetof(struct emif_regs_amx3, emif_priority_to_cos_mapping)); 89 + DEFINE(EMIF_CONNECT_ID_SERV_1_MAP_OFFSET, 90 + offsetof(struct emif_regs_amx3, emif_connect_id_serv_1_map)); 91 + DEFINE(EMIF_CONNECT_ID_SERV_2_MAP_OFFSET, 92 + offsetof(struct emif_regs_amx3, emif_connect_id_serv_2_map)); 93 + DEFINE(EMIF_OCP_CONFIG_VAL_OFFSET, 94 + offsetof(struct emif_regs_amx3, emif_ocp_config_val)); 95 + DEFINE(EMIF_LPDDR2_NVM_TIM_OFFSET, 96 + offsetof(struct emif_regs_amx3, emif_lpddr2_nvm_tim)); 97 + DEFINE(EMIF_LPDDR2_NVM_TIM_SHDW_OFFSET, 98 + offsetof(struct emif_regs_amx3, emif_lpddr2_nvm_tim_shdw)); 99 + DEFINE(EMIF_DLL_CALIB_CTRL_VAL_OFFSET, 100 + offsetof(struct emif_regs_amx3, emif_dll_calib_ctrl_val)); 101 + DEFINE(EMIF_DLL_CALIB_CTRL_VAL_SHDW_OFFSET, 102 + offsetof(struct emif_regs_amx3, emif_dll_calib_ctrl_val_shdw)); 103 + DEFINE(EMIF_DDR_PHY_CTLR_1_OFFSET, 104 + offsetof(struct emif_regs_amx3, emif_ddr_phy_ctlr_1)); 105 + DEFINE(EMIF_EXT_PHY_CTRL_VALS_OFFSET, 106 + offsetof(struct emif_regs_amx3, emif_ext_phy_ctrl_vals)); 107 + DEFINE(EMIF_REGS_AMX3_SIZE, sizeof(struct emif_regs_amx3)); 108 + 109 + BLANK(); 110 + 111 + DEFINE(EMIF_PM_BASE_ADDR_VIRT_OFFSET, 112 + offsetof(struct ti_emif_pm_data, ti_emif_base_addr_virt)); 113 + DEFINE(EMIF_PM_BASE_ADDR_PHYS_OFFSET, 114 + offsetof(struct ti_emif_pm_data, ti_emif_base_addr_phys)); 115 + DEFINE(EMIF_PM_CONFIG_OFFSET, 116 + offsetof(struct ti_emif_pm_data, ti_emif_sram_config)); 117 + DEFINE(EMIF_PM_REGS_VIRT_OFFSET, 118 + offsetof(struct ti_emif_pm_data, regs_virt)); 119 + DEFINE(EMIF_PM_REGS_PHYS_OFFSET, 120 + offsetof(struct ti_emif_pm_data, regs_phys)); 121 + DEFINE(EMIF_PM_DATA_SIZE, sizeof(struct ti_emif_pm_data)); 122 + 123 + BLANK(); 124 + 125 + DEFINE(EMIF_PM_SAVE_CONTEXT_OFFSET, 126 + offsetof(struct ti_emif_pm_functions, save_context)); 127 + DEFINE(EMIF_PM_RESTORE_CONTEXT_OFFSET, 128 + offsetof(struct ti_emif_pm_functions, restore_context)); 129 + DEFINE(EMIF_PM_ENTER_SR_OFFSET, 130 + offsetof(struct ti_emif_pm_functions, enter_sr)); 131 + DEFINE(EMIF_PM_EXIT_SR_OFFSET, 132 + offsetof(struct ti_emif_pm_functions, exit_sr)); 133 + DEFINE(EMIF_PM_ABORT_SR_OFFSET, 134 + offsetof(struct ti_emif_pm_functions, abort_sr)); 135 + DEFINE(EMIF_PM_FUNCTIONS_SIZE, sizeof(struct ti_emif_pm_functions)); 136 + } 137 + 63 138 struct gen_pool; 64 139 65 140 int ti_emif_copy_pm_function_table(struct gen_pool *sram_pool, void *dst);
-2
include/linux/timekeeper_internal.h
··· 52 52 * @offs_real: Offset clock monotonic -> clock realtime 53 53 * @offs_boot: Offset clock monotonic -> clock boottime 54 54 * @offs_tai: Offset clock monotonic -> clock tai 55 - * @time_suspended: Accumulated suspend time 56 55 * @tai_offset: The current UTC to TAI offset in seconds 57 56 * @clock_was_set_seq: The sequence number of clock was set events 58 57 * @cs_was_changed_seq: The sequence number of clocksource change events ··· 94 95 ktime_t offs_real; 95 96 ktime_t offs_boot; 96 97 ktime_t offs_tai; 97 - ktime_t time_suspended; 98 98 s32 tai_offset; 99 99 unsigned int clock_was_set_seq; 100 100 u8 cs_was_changed_seq;
+25 -12
include/linux/timekeeping.h
··· 33 33 extern time64_t ktime_get_seconds(void); 34 34 extern time64_t __ktime_get_real_seconds(void); 35 35 extern time64_t ktime_get_real_seconds(void); 36 - extern void ktime_get_active_ts64(struct timespec64 *ts); 37 36 38 37 extern int __getnstimeofday64(struct timespec64 *tv); 39 38 extern void getnstimeofday64(struct timespec64 *tv); 40 39 extern void getboottime64(struct timespec64 *ts); 41 40 42 - #define ktime_get_real_ts64(ts) getnstimeofday64(ts) 43 - 44 - /* Clock BOOTTIME compatibility wrappers */ 45 - static inline void get_monotonic_boottime64(struct timespec64 *ts) 46 - { 47 - ktime_get_ts64(ts); 48 - } 41 + #define ktime_get_real_ts64(ts) getnstimeofday64(ts) 49 42 50 43 /* 51 44 * ktime_t based interfaces 52 45 */ 46 + 53 47 enum tk_offsets { 54 48 TK_OFFS_REAL, 49 + TK_OFFS_BOOT, 55 50 TK_OFFS_TAI, 56 51 TK_OFFS_MAX, 57 52 }; ··· 57 62 extern ktime_t ktime_get_raw(void); 58 63 extern u32 ktime_get_resolution_ns(void); 59 64 60 - /* Clock BOOTTIME compatibility wrappers */ 61 - static inline ktime_t ktime_get_boottime(void) { return ktime_get(); } 62 - static inline u64 ktime_get_boot_ns(void) { return ktime_get(); } 63 - 64 65 /** 65 66 * ktime_get_real - get the real (wall-) time in ktime_t format 66 67 */ 67 68 static inline ktime_t ktime_get_real(void) 68 69 { 69 70 return ktime_get_with_offset(TK_OFFS_REAL); 71 + } 72 + 73 + /** 74 + * ktime_get_boottime - Returns monotonic time since boot in ktime_t format 75 + * 76 + * This is similar to CLOCK_MONTONIC/ktime_get, but also includes the 77 + * time spent in suspend. 78 + */ 79 + static inline ktime_t ktime_get_boottime(void) 80 + { 81 + return ktime_get_with_offset(TK_OFFS_BOOT); 70 82 } 71 83 72 84 /** ··· 102 100 return ktime_to_ns(ktime_get_real()); 103 101 } 104 102 103 + static inline u64 ktime_get_boot_ns(void) 104 + { 105 + return ktime_to_ns(ktime_get_boottime()); 106 + } 107 + 105 108 static inline u64 ktime_get_tai_ns(void) 106 109 { 107 110 return ktime_to_ns(ktime_get_clocktai()); ··· 119 112 120 113 extern u64 ktime_get_mono_fast_ns(void); 121 114 extern u64 ktime_get_raw_fast_ns(void); 115 + extern u64 ktime_get_boot_fast_ns(void); 122 116 extern u64 ktime_get_real_fast_ns(void); 123 117 124 118 /* 125 119 * timespec64 interfaces utilizing the ktime based ones 126 120 */ 121 + static inline void get_monotonic_boottime64(struct timespec64 *ts) 122 + { 123 + *ts = ktime_to_timespec64(ktime_get_boottime()); 124 + } 125 + 127 126 static inline void timekeeping_clocktai64(struct timespec64 *ts) 128 127 { 129 128 *ts = ktime_to_timespec64(ktime_get_clocktai());
+1 -1
include/linux/tty.h
··· 701 701 extern int tty_set_ldisc(struct tty_struct *tty, int disc); 702 702 extern int tty_ldisc_setup(struct tty_struct *tty, struct tty_struct *o_tty); 703 703 extern void tty_ldisc_release(struct tty_struct *tty); 704 - extern void tty_ldisc_init(struct tty_struct *tty); 704 + extern int __must_check tty_ldisc_init(struct tty_struct *tty); 705 705 extern void tty_ldisc_deinit(struct tty_struct *tty); 706 706 extern int tty_ldisc_receive_buf(struct tty_ldisc *ld, const unsigned char *p, 707 707 char *f, int count);
-23
include/linux/vbox_utils.h
··· 24 24 #define vbg_debug pr_debug 25 25 #endif 26 26 27 - /** 28 - * Allocate memory for generic request and initialize the request header. 29 - * 30 - * Return: the allocated memory 31 - * @len: Size of memory block required for the request. 32 - * @req_type: The generic request type. 33 - */ 34 - void *vbg_req_alloc(size_t len, enum vmmdev_request_type req_type); 35 - 36 - /** 37 - * Perform a generic request. 38 - * 39 - * Return: VBox status code 40 - * @gdev: The Guest extension device. 41 - * @req: Pointer to the request structure. 42 - */ 43 - int vbg_req_perform(struct vbg_dev *gdev, void *req); 44 - 45 27 int vbg_hgcm_connect(struct vbg_dev *gdev, 46 28 struct vmmdev_hgcm_service_location *loc, 47 29 u32 *client_id, int *vbox_status); ··· 33 51 int vbg_hgcm_call(struct vbg_dev *gdev, u32 client_id, u32 function, 34 52 u32 timeout_ms, struct vmmdev_hgcm_function_parameter *parms, 35 53 u32 parm_count, int *vbox_status); 36 - 37 - int vbg_hgcm_call32( 38 - struct vbg_dev *gdev, u32 client_id, u32 function, u32 timeout_ms, 39 - struct vmmdev_hgcm_function_parameter32 *parm32, u32 parm_count, 40 - int *vbox_status); 41 54 42 55 /** 43 56 * Convert a VirtualBox status code to a standard Linux kernel return value.
+3
include/linux/virtio.h
··· 157 157 int virtio_device_restore(struct virtio_device *dev); 158 158 #endif 159 159 160 + #define virtio_device_for_each_vq(vdev, vq) \ 161 + list_for_each_entry(vq, &vdev->vqs, list) 162 + 160 163 /** 161 164 * virtio_driver - operations for a virtio I/O driver 162 165 * @driver: underlying device driver (populate name and owner).
+2 -1
include/net/ife.h
··· 12 12 void *ife_encode(struct sk_buff *skb, u16 metalen); 13 13 void *ife_decode(struct sk_buff *skb, u16 *metalen); 14 14 15 - void *ife_tlv_meta_decode(void *skbdata, u16 *attrtype, u16 *dlen, u16 *totlen); 15 + void *ife_tlv_meta_decode(void *skbdata, const void *ifehdr_end, u16 *attrtype, 16 + u16 *dlen, u16 *totlen); 16 17 int ife_tlv_meta_encode(void *skbdata, u16 attrtype, u16 dlen, 17 18 const void *dval); 18 19
+1
include/net/llc_conn.h
··· 97 97 98 98 struct sock *llc_sk_alloc(struct net *net, int family, gfp_t priority, 99 99 struct proto *prot, int kern); 100 + void llc_sk_stop_all_timers(struct sock *sk, bool sync); 100 101 void llc_sk_free(struct sock *sk); 101 102 102 103 void llc_sk_reset(struct sock *sk);
-2
include/scsi/scsi_dbg.h
··· 11 11 extern void scsi_print_command(struct scsi_cmnd *); 12 12 extern size_t __scsi_format_command(char *, size_t, 13 13 const unsigned char *, size_t); 14 - extern void scsi_show_extd_sense(const struct scsi_device *, const char *, 15 - unsigned char, unsigned char); 16 14 extern void scsi_print_sense_hdr(const struct scsi_device *, const char *, 17 15 const struct scsi_sense_hdr *); 18 16 extern void scsi_print_sense(const struct scsi_cmnd *);
+2 -2
include/soc/bcm2835/raspberrypi-firmware.h
··· 143 143 static inline int rpi_firmware_property(struct rpi_firmware *fw, u32 tag, 144 144 void *data, size_t len) 145 145 { 146 - return 0; 146 + return -ENOSYS; 147 147 } 148 148 149 149 static inline int rpi_firmware_property_list(struct rpi_firmware *fw, 150 150 void *data, size_t tag_size) 151 151 { 152 - return 0; 152 + return -ENOSYS; 153 153 } 154 154 155 155 static inline struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node)
+5 -2
include/sound/control.h
··· 23 23 */ 24 24 25 25 #include <linux/wait.h> 26 + #include <linux/nospec.h> 26 27 #include <sound/asound.h> 27 28 28 29 #define snd_kcontrol_chip(kcontrol) ((kcontrol)->private_data) ··· 149 148 150 149 static inline unsigned int snd_ctl_get_ioffnum(struct snd_kcontrol *kctl, struct snd_ctl_elem_id *id) 151 150 { 152 - return id->numid - kctl->id.numid; 151 + unsigned int ioff = id->numid - kctl->id.numid; 152 + return array_index_nospec(ioff, kctl->count); 153 153 } 154 154 155 155 static inline unsigned int snd_ctl_get_ioffidx(struct snd_kcontrol *kctl, struct snd_ctl_elem_id *id) 156 156 { 157 - return id->index - kctl->id.index; 157 + unsigned int ioff = id->index - kctl->id.index; 158 + return array_index_nospec(ioff, kctl->count); 158 159 } 159 160 160 161 static inline unsigned int snd_ctl_get_ioff(struct snd_kcontrol *kctl, struct snd_ctl_elem_id *id)
+27
include/trace/events/ufs.h
··· 257 257 ) 258 258 ); 259 259 260 + TRACE_EVENT(ufshcd_upiu, 261 + TP_PROTO(const char *dev_name, const char *str, void *hdr, void *tsf), 262 + 263 + TP_ARGS(dev_name, str, hdr, tsf), 264 + 265 + TP_STRUCT__entry( 266 + __string(dev_name, dev_name) 267 + __string(str, str) 268 + __array(unsigned char, hdr, 12) 269 + __array(unsigned char, tsf, 16) 270 + ), 271 + 272 + TP_fast_assign( 273 + __assign_str(dev_name, dev_name); 274 + __assign_str(str, str); 275 + memcpy(__entry->hdr, hdr, sizeof(__entry->hdr)); 276 + memcpy(__entry->tsf, tsf, sizeof(__entry->tsf)); 277 + ), 278 + 279 + TP_printk( 280 + "%s: %s: HDR:%s, CDB:%s", 281 + __get_str(str), __get_str(dev_name), 282 + __print_hex(__entry->hdr, sizeof(__entry->hdr)), 283 + __print_hex(__entry->tsf, sizeof(__entry->tsf)) 284 + ) 285 + ); 286 + 260 287 #endif /* if !defined(_TRACE_UFS_H) || defined(TRACE_HEADER_MULTI_READ) */ 261 288 262 289 /* This part must be outside protection */
+2
include/trace/events/workqueue.h
··· 25 25 TP_printk("work struct %p", __entry->work) 26 26 ); 27 27 28 + struct pool_workqueue; 29 + 28 30 /** 29 31 * workqueue_queue_work - called when a work gets queued 30 32 * @req_cpu: the requested cpu
+7
include/uapi/linux/kvm.h
··· 676 676 __u8 pad[36]; 677 677 }; 678 678 679 + #define KVM_X86_DISABLE_EXITS_MWAIT (1 << 0) 680 + #define KVM_X86_DISABLE_EXITS_HTL (1 << 1) 681 + #define KVM_X86_DISABLE_EXITS_PAUSE (1 << 2) 682 + #define KVM_X86_DISABLE_VALID_EXITS (KVM_X86_DISABLE_EXITS_MWAIT | \ 683 + KVM_X86_DISABLE_EXITS_HTL | \ 684 + KVM_X86_DISABLE_EXITS_PAUSE) 685 + 679 686 /* for KVM_ENABLE_CAP */ 680 687 struct kvm_enable_cap { 681 688 /* in */
-18
include/uapi/linux/sysctl.h
··· 780 780 NET_BRIDGE_NF_FILTER_PPPOE_TAGGED = 5, 781 781 }; 782 782 783 - /* proc/sys/net/irda */ 784 - enum { 785 - NET_IRDA_DISCOVERY=1, 786 - NET_IRDA_DEVNAME=2, 787 - NET_IRDA_DEBUG=3, 788 - NET_IRDA_FAST_POLL=4, 789 - NET_IRDA_DISCOVERY_SLOTS=5, 790 - NET_IRDA_DISCOVERY_TIMEOUT=6, 791 - NET_IRDA_SLOT_TIMEOUT=7, 792 - NET_IRDA_MAX_BAUD_RATE=8, 793 - NET_IRDA_MIN_TX_TURN_TIME=9, 794 - NET_IRDA_MAX_TX_DATA_SIZE=10, 795 - NET_IRDA_MAX_TX_WINDOW=11, 796 - NET_IRDA_MAX_NOREPLY_TIME=12, 797 - NET_IRDA_WARN_NOREPLY_TIME=13, 798 - NET_IRDA_LAP_KEEPALIVE_TIME=14, 799 - }; 800 - 801 783 802 784 /* CTL_FS names: */ 803 785 enum
-1
include/uapi/linux/time.h
··· 73 73 */ 74 74 #define CLOCK_SGI_CYCLE 10 75 75 #define CLOCK_TAI 11 76 - #define CLOCK_MONOTONIC_ACTIVE 12 77 76 78 77 #define MAX_CLOCKS 16 79 78 #define CLOCKS_MASK (CLOCK_REALTIME | CLOCK_MONOTONIC)
+15
include/uapi/linux/virtio_balloon.h
··· 57 57 #define VIRTIO_BALLOON_S_HTLB_PGFAIL 9 /* Hugetlb page allocation failures */ 58 58 #define VIRTIO_BALLOON_S_NR 10 59 59 60 + #define VIRTIO_BALLOON_S_NAMES_WITH_PREFIX(VIRTIO_BALLOON_S_NAMES_prefix) { \ 61 + VIRTIO_BALLOON_S_NAMES_prefix "swap-in", \ 62 + VIRTIO_BALLOON_S_NAMES_prefix "swap-out", \ 63 + VIRTIO_BALLOON_S_NAMES_prefix "major-faults", \ 64 + VIRTIO_BALLOON_S_NAMES_prefix "minor-faults", \ 65 + VIRTIO_BALLOON_S_NAMES_prefix "free-memory", \ 66 + VIRTIO_BALLOON_S_NAMES_prefix "total-memory", \ 67 + VIRTIO_BALLOON_S_NAMES_prefix "available-memory", \ 68 + VIRTIO_BALLOON_S_NAMES_prefix "disk-caches", \ 69 + VIRTIO_BALLOON_S_NAMES_prefix "hugetlb-allocations", \ 70 + VIRTIO_BALLOON_S_NAMES_prefix "hugetlb-failures" \ 71 + } 72 + 73 + #define VIRTIO_BALLOON_S_NAMES VIRTIO_BALLOON_S_NAMES_WITH_PREFIX("") 74 + 60 75 /* 61 76 * Memory statistics structure. 62 77 * Driver fills an array of these structures and passes to device.
+29 -16
kernel/bpf/core.c
··· 1572 1572 return cnt; 1573 1573 } 1574 1574 1575 + static bool bpf_prog_array_copy_core(struct bpf_prog **prog, 1576 + u32 *prog_ids, 1577 + u32 request_cnt) 1578 + { 1579 + int i = 0; 1580 + 1581 + for (; *prog; prog++) { 1582 + if (*prog == &dummy_bpf_prog.prog) 1583 + continue; 1584 + prog_ids[i] = (*prog)->aux->id; 1585 + if (++i == request_cnt) { 1586 + prog++; 1587 + break; 1588 + } 1589 + } 1590 + 1591 + return !!(*prog); 1592 + } 1593 + 1575 1594 int bpf_prog_array_copy_to_user(struct bpf_prog_array __rcu *progs, 1576 1595 __u32 __user *prog_ids, u32 cnt) 1577 1596 { 1578 1597 struct bpf_prog **prog; 1579 1598 unsigned long err = 0; 1580 - u32 i = 0, *ids; 1581 1599 bool nospc; 1600 + u32 *ids; 1582 1601 1583 1602 /* users of this function are doing: 1584 1603 * cnt = bpf_prog_array_length(); ··· 1614 1595 return -ENOMEM; 1615 1596 rcu_read_lock(); 1616 1597 prog = rcu_dereference(progs)->progs; 1617 - for (; *prog; prog++) { 1618 - if (*prog == &dummy_bpf_prog.prog) 1619 - continue; 1620 - ids[i] = (*prog)->aux->id; 1621 - if (++i == cnt) { 1622 - prog++; 1623 - break; 1624 - } 1625 - } 1626 - nospc = !!(*prog); 1598 + nospc = bpf_prog_array_copy_core(prog, ids, cnt); 1627 1599 rcu_read_unlock(); 1628 1600 err = copy_to_user(prog_ids, ids, cnt * sizeof(u32)); 1629 1601 kfree(ids); ··· 1693 1683 } 1694 1684 1695 1685 int bpf_prog_array_copy_info(struct bpf_prog_array __rcu *array, 1696 - __u32 __user *prog_ids, u32 request_cnt, 1697 - __u32 __user *prog_cnt) 1686 + u32 *prog_ids, u32 request_cnt, 1687 + u32 *prog_cnt) 1698 1688 { 1689 + struct bpf_prog **prog; 1699 1690 u32 cnt = 0; 1700 1691 1701 1692 if (array) 1702 1693 cnt = bpf_prog_array_length(array); 1703 1694 1704 - if (copy_to_user(prog_cnt, &cnt, sizeof(cnt))) 1705 - return -EFAULT; 1695 + *prog_cnt = cnt; 1706 1696 1707 1697 /* return early if user requested only program count or nothing to copy */ 1708 1698 if (!request_cnt || !cnt) 1709 1699 return 0; 1710 1700 1711 - return bpf_prog_array_copy_to_user(array, prog_ids, request_cnt); 1701 + /* this function is called under trace/bpf_trace.c: bpf_event_mutex */ 1702 + prog = rcu_dereference_check(array, 1)->progs; 1703 + return bpf_prog_array_copy_core(prog, prog_ids, request_cnt) ? -ENOSPC 1704 + : 0; 1712 1705 } 1713 1706 1714 1707 static void bpf_prog_free_deferred(struct work_struct *work)
-3
kernel/bpf/sockmap.c
··· 1442 1442 attr->value_size != 4 || attr->map_flags & ~SOCK_CREATE_FLAG_MASK) 1443 1443 return ERR_PTR(-EINVAL); 1444 1444 1445 - if (attr->value_size > KMALLOC_MAX_SIZE) 1446 - return ERR_PTR(-E2BIG); 1447 - 1448 1445 err = bpf_tcp_ulp_register(); 1449 1446 if (err && err != -EEXIST) 1450 1447 return ERR_PTR(err);
+1 -1
kernel/kprobes.c
··· 2428 2428 struct kprobe_blacklist_entry *ent = 2429 2429 list_entry(v, struct kprobe_blacklist_entry, list); 2430 2430 2431 - seq_printf(m, "0x%p-0x%p\t%ps\n", (void *)ent->start_addr, 2431 + seq_printf(m, "0x%px-0x%px\t%ps\n", (void *)ent->start_addr, 2432 2432 (void *)ent->end_addr, (void *)ent->start_addr); 2433 2433 return 0; 2434 2434 }
+2 -1
kernel/module.c
··· 1472 1472 { 1473 1473 struct module_sect_attr *sattr = 1474 1474 container_of(mattr, struct module_sect_attr, mattr); 1475 - return sprintf(buf, "0x%pK\n", (void *)sattr->address); 1475 + return sprintf(buf, "0x%px\n", kptr_restrict < 2 ? 1476 + (void *)sattr->address : NULL); 1476 1477 } 1477 1478 1478 1479 static void free_sect_attrs(struct module_sect_attrs *sect_attrs)
+1 -19
kernel/sysctl_binary.c
··· 704 704 {} 705 705 }; 706 706 707 - static const struct bin_table bin_net_irda_table[] = { 708 - { CTL_INT, NET_IRDA_DISCOVERY, "discovery" }, 709 - { CTL_STR, NET_IRDA_DEVNAME, "devname" }, 710 - { CTL_INT, NET_IRDA_DEBUG, "debug" }, 711 - { CTL_INT, NET_IRDA_FAST_POLL, "fast_poll_increase" }, 712 - { CTL_INT, NET_IRDA_DISCOVERY_SLOTS, "discovery_slots" }, 713 - { CTL_INT, NET_IRDA_DISCOVERY_TIMEOUT, "discovery_timeout" }, 714 - { CTL_INT, NET_IRDA_SLOT_TIMEOUT, "slot_timeout" }, 715 - { CTL_INT, NET_IRDA_MAX_BAUD_RATE, "max_baud_rate" }, 716 - { CTL_INT, NET_IRDA_MIN_TX_TURN_TIME, "min_tx_turn_time" }, 717 - { CTL_INT, NET_IRDA_MAX_TX_DATA_SIZE, "max_tx_data_size" }, 718 - { CTL_INT, NET_IRDA_MAX_TX_WINDOW, "max_tx_window" }, 719 - { CTL_INT, NET_IRDA_MAX_NOREPLY_TIME, "max_noreply_time" }, 720 - { CTL_INT, NET_IRDA_WARN_NOREPLY_TIME, "warn_noreply_time" }, 721 - { CTL_INT, NET_IRDA_LAP_KEEPALIVE_TIME, "lap_keepalive_time" }, 722 - {} 723 - }; 724 - 725 707 static const struct bin_table bin_net_table[] = { 726 708 { CTL_DIR, NET_CORE, "core", bin_net_core_table }, 727 709 /* NET_ETHER not used */ ··· 725 743 { CTL_DIR, NET_LLC, "llc", bin_net_llc_table }, 726 744 { CTL_DIR, NET_NETFILTER, "netfilter", bin_net_netfilter_table }, 727 745 /* NET_DCCP "dccp" no longer used */ 728 - { CTL_DIR, NET_IRDA, "irda", bin_net_irda_table }, 746 + /* NET_IRDA "irda" no longer used */ 729 747 { CTL_INT, 2089, "nf_conntrack_max" }, 730 748 {} 731 749 };
+14 -2
kernel/time/hrtimer.c
··· 91 91 .get_time = &ktime_get_real, 92 92 }, 93 93 { 94 + .index = HRTIMER_BASE_BOOTTIME, 95 + .clockid = CLOCK_BOOTTIME, 96 + .get_time = &ktime_get_boottime, 97 + }, 98 + { 94 99 .index = HRTIMER_BASE_TAI, 95 100 .clockid = CLOCK_TAI, 96 101 .get_time = &ktime_get_clocktai, ··· 111 106 .get_time = &ktime_get_real, 112 107 }, 113 108 { 109 + .index = HRTIMER_BASE_BOOTTIME_SOFT, 110 + .clockid = CLOCK_BOOTTIME, 111 + .get_time = &ktime_get_boottime, 112 + }, 113 + { 114 114 .index = HRTIMER_BASE_TAI_SOFT, 115 115 .clockid = CLOCK_TAI, 116 116 .get_time = &ktime_get_clocktai, ··· 129 119 130 120 [CLOCK_REALTIME] = HRTIMER_BASE_REALTIME, 131 121 [CLOCK_MONOTONIC] = HRTIMER_BASE_MONOTONIC, 132 - [CLOCK_BOOTTIME] = HRTIMER_BASE_MONOTONIC, 122 + [CLOCK_BOOTTIME] = HRTIMER_BASE_BOOTTIME, 133 123 [CLOCK_TAI] = HRTIMER_BASE_TAI, 134 124 }; 135 125 ··· 581 571 static inline ktime_t hrtimer_update_base(struct hrtimer_cpu_base *base) 582 572 { 583 573 ktime_t *offs_real = &base->clock_base[HRTIMER_BASE_REALTIME].offset; 574 + ktime_t *offs_boot = &base->clock_base[HRTIMER_BASE_BOOTTIME].offset; 584 575 ktime_t *offs_tai = &base->clock_base[HRTIMER_BASE_TAI].offset; 585 576 586 577 ktime_t now = ktime_get_update_offsets_now(&base->clock_was_set_seq, 587 - offs_real, offs_tai); 578 + offs_real, offs_boot, offs_tai); 588 579 589 580 base->clock_base[HRTIMER_BASE_REALTIME_SOFT].offset = *offs_real; 581 + base->clock_base[HRTIMER_BASE_BOOTTIME_SOFT].offset = *offs_boot; 590 582 base->clock_base[HRTIMER_BASE_TAI_SOFT].offset = *offs_tai; 591 583 592 584 return now;
-2
kernel/time/posix-stubs.c
··· 83 83 case CLOCK_BOOTTIME: 84 84 get_monotonic_boottime64(tp); 85 85 break; 86 - case CLOCK_MONOTONIC_ACTIVE: 87 - ktime_get_active_ts64(tp); 88 86 default: 89 87 return -EINVAL; 90 88 }
+17 -9
kernel/time/posix-timers.c
··· 252 252 return 0; 253 253 } 254 254 255 - static int posix_get_tai(clockid_t which_clock, struct timespec64 *tp) 255 + static int posix_get_boottime(const clockid_t which_clock, struct timespec64 *tp) 256 256 { 257 - timekeeping_clocktai64(tp); 257 + get_monotonic_boottime64(tp); 258 258 return 0; 259 259 } 260 260 261 - static int posix_get_monotonic_active(clockid_t which_clock, 262 - struct timespec64 *tp) 261 + static int posix_get_tai(clockid_t which_clock, struct timespec64 *tp) 263 262 { 264 - ktime_get_active_ts64(tp); 263 + timekeeping_clocktai64(tp); 265 264 return 0; 266 265 } ··· 1316 1317 .timer_arm = common_hrtimer_arm, 1317 1318 }; 1318 1319 1319 - static const struct k_clock clock_monotonic_active = { 1320 + static const struct k_clock clock_boottime = { 1320 1321 .clock_getres = posix_get_hrtimer_res, 1321 - .clock_get = posix_get_monotonic_active, 1322 + .clock_get = posix_get_boottime, 1323 + .nsleep = common_nsleep, 1324 + .timer_create = common_timer_create, 1325 + .timer_set = common_timer_set, 1326 + .timer_get = common_timer_get, 1327 + .timer_del = common_timer_del, 1328 + .timer_rearm = common_hrtimer_rearm, 1329 + .timer_forward = common_hrtimer_forward, 1330 + .timer_remaining = common_hrtimer_remaining, 1331 + .timer_try_to_cancel = common_hrtimer_try_to_cancel, 1332 + .timer_arm = common_hrtimer_arm, 1322 1333 }; 1323 1334 1324 1335 static const struct k_clock * const posix_clocks[] = { ··· 1339 1330 [CLOCK_MONOTONIC_RAW] = &clock_monotonic_raw, 1340 1331 [CLOCK_REALTIME_COARSE] = &clock_realtime_coarse, 1341 1332 [CLOCK_MONOTONIC_COARSE] = &clock_monotonic_coarse, 1342 - [CLOCK_BOOTTIME] = &clock_monotonic, 1333 + [CLOCK_BOOTTIME] = &clock_boottime, 1343 1334 [CLOCK_REALTIME_ALARM] = &alarm_clock, 1344 1335 [CLOCK_BOOTTIME_ALARM] = &alarm_clock, 1345 1336 [CLOCK_TAI] = &clock_tai, 1346 - [CLOCK_MONOTONIC_ACTIVE] = &clock_monotonic_active, 1347 1337 }; 1348 1338 1349 1339 static const struct k_clock *clockid_to_kclock(const clockid_t id)
-15
kernel/time/tick-common.c
··· 419 419 clockevents_shutdown(td->evtdev); 420 420 } 421 421 422 - static void tick_forward_next_period(void) 423 - { 424 - ktime_t delta, now = ktime_get(); 425 - u64 n; 426 - 427 - delta = ktime_sub(now, tick_next_period); 428 - n = ktime_divns(delta, tick_period); 429 - tick_next_period += n * tick_period; 430 - if (tick_next_period < now) 431 - tick_next_period += tick_period; 432 - tick_sched_forward_next_period(); 433 - } 434 - 435 422 /** 436 423 * tick_resume_local - Resume the local tick device 437 424 * ··· 430 443 { 431 444 struct tick_device *td = this_cpu_ptr(&tick_cpu_device); 432 445 bool broadcast = tick_resume_check_broadcast(); 433 - 434 - tick_forward_next_period(); 435 446 436 447 clockevents_tick_resume(td->evtdev); 437 448 if (!broadcast) {
-6
kernel/time/tick-internal.h
··· 141 141 static inline bool tick_broadcast_oneshot_available(void) { return tick_oneshot_possible(); } 142 142 #endif /* !(BROADCAST && ONESHOT) */ 143 143 144 - #if defined(CONFIG_NO_HZ_COMMON) || defined(CONFIG_HIGH_RES_TIMERS) 145 - extern void tick_sched_forward_next_period(void); 146 - #else 147 - static inline void tick_sched_forward_next_period(void) { } 148 - #endif 149 - 150 144 /* NO_HZ_FULL internal */ 151 145 #ifdef CONFIG_NO_HZ_FULL 152 146 extern void tick_nohz_init(void);
+5 -14
kernel/time/tick-sched.c
··· 52 52 static ktime_t last_jiffies_update; 53 53 54 54 /* 55 - * Called after resume. Make sure that jiffies are not fast forwarded due to 56 - * clock monotonic being forwarded by the suspended time. 57 - */ 58 - void tick_sched_forward_next_period(void) 59 - { 60 - last_jiffies_update = tick_next_period; 61 - } 62 - 63 - /* 64 55 * Must be called with interrupts disabled ! 65 56 */ 66 57 static void tick_do_update_jiffies64(ktime_t now) ··· 795 804 return; 796 805 } 797 806 798 - hrtimer_set_expires(&ts->sched_timer, tick); 799 - 800 - if (ts->nohz_mode == NOHZ_MODE_HIGHRES) 801 - hrtimer_start_expires(&ts->sched_timer, HRTIMER_MODE_ABS_PINNED); 802 - else 807 + if (ts->nohz_mode == NOHZ_MODE_HIGHRES) { 808 + hrtimer_start(&ts->sched_timer, tick, HRTIMER_MODE_ABS_PINNED); 809 + } else { 810 + hrtimer_set_expires(&ts->sched_timer, tick); 803 811 tick_program_event(tick, 1); 812 + } 804 813 } 805 814 806 815 static void tick_nohz_retain_tick(struct tick_sched *ts)
+37 -41
kernel/time/timekeeping.c
··· 138 138 139 139 static inline void tk_update_sleep_time(struct timekeeper *tk, ktime_t delta) 140 140 { 141 - /* Update both bases so mono and raw stay coupled. */ 142 - tk->tkr_mono.base += delta; 143 - tk->tkr_raw.base += delta; 144 - 145 - /* Accumulate time spent in suspend */ 146 - tk->time_suspended += delta; 141 + tk->offs_boot = ktime_add(tk->offs_boot, delta); 147 142 } 148 143 149 144 /* ··· 468 473 } 469 474 EXPORT_SYMBOL_GPL(ktime_get_raw_fast_ns); 470 475 476 + /** 477 + * ktime_get_boot_fast_ns - NMI safe and fast access to boot clock. 478 + * 479 + * To keep it NMI safe since we're accessing from tracing, we're not using a 480 + * separate timekeeper with updates to monotonic clock and boot offset 481 + * protected with seqlocks. This has the following minor side effects: 482 + * 483 + * (1) Its possible that a timestamp be taken after the boot offset is updated 484 + * but before the timekeeper is updated. If this happens, the new boot offset 485 + * is added to the old timekeeping making the clock appear to update slightly 486 + * earlier: 487 + * CPU 0 CPU 1 488 + * timekeeping_inject_sleeptime64() 489 + * __timekeeping_inject_sleeptime(tk, delta); 490 + * timestamp(); 491 + * timekeeping_update(tk, TK_CLEAR_NTP...); 492 + * 493 + * (2) On 32-bit systems, the 64-bit boot offset (tk->offs_boot) may be 494 + * partially updated. Since the tk->offs_boot update is a rare event, this 495 + * should be a rare occurrence which postprocessing should be able to handle. 496 + */ 497 + u64 notrace ktime_get_boot_fast_ns(void) 498 + { 499 + struct timekeeper *tk = &tk_core.timekeeper; 500 + 501 + return (ktime_get_mono_fast_ns() + ktime_to_ns(tk->offs_boot)); 502 + } 503 + EXPORT_SYMBOL_GPL(ktime_get_boot_fast_ns); 504 + 505 + 471 506 /* 472 507 * See comment for __ktime_get_fast_ns() vs. 
timestamp ordering 473 508 */ ··· 789 764 790 765 static ktime_t *offsets[TK_OFFS_MAX] = { 791 766 [TK_OFFS_REAL] = &tk_core.timekeeper.offs_real, 767 + [TK_OFFS_BOOT] = &tk_core.timekeeper.offs_boot, 792 768 [TK_OFFS_TAI] = &tk_core.timekeeper.offs_tai, 793 769 }; 794 770 ··· 885 859 timespec64_add_ns(ts, nsec + tomono.tv_nsec); 886 860 } 887 861 EXPORT_SYMBOL_GPL(ktime_get_ts64); 888 - 889 - /** 890 - * ktime_get_active_ts64 - Get the active non-suspended monotonic clock 891 - * @ts: pointer to timespec variable 892 - * 893 - * The function calculates the monotonic clock from the realtime clock and 894 - * the wall_to_monotonic offset, subtracts the accumulated suspend time and 895 - * stores the result in normalized timespec64 format in the variable 896 - * pointed to by @ts. 897 - */ 898 - void ktime_get_active_ts64(struct timespec64 *ts) 899 - { 900 - struct timekeeper *tk = &tk_core.timekeeper; 901 - struct timespec64 tomono, tsusp; 902 - u64 nsec, nssusp; 903 - unsigned int seq; 904 - 905 - WARN_ON(timekeeping_suspended); 906 - 907 - do { 908 - seq = read_seqcount_begin(&tk_core.seq); 909 - ts->tv_sec = tk->xtime_sec; 910 - nsec = timekeeping_get_ns(&tk->tkr_mono); 911 - tomono = tk->wall_to_monotonic; 912 - nssusp = tk->time_suspended; 913 - } while (read_seqcount_retry(&tk_core.seq, seq)); 914 - 915 - ts->tv_sec += tomono.tv_sec; 916 - ts->tv_nsec = 0; 917 - timespec64_add_ns(ts, nsec + tomono.tv_nsec); 918 - tsusp = ns_to_timespec64(nssusp); 919 - *ts = timespec64_sub(*ts, tsusp); 920 - } 921 862 922 863 /** 923 864 * ktime_get_seconds - Get the seconds portion of CLOCK_MONOTONIC ··· 1586 1593 return; 1587 1594 } 1588 1595 tk_xtime_add(tk, delta); 1596 + tk_set_wall_to_mono(tk, timespec64_sub(tk->wall_to_monotonic, *delta)); 1589 1597 tk_update_sleep_time(tk, timespec64_to_ktime(*delta)); 1590 1598 tk_debug_account_sleep_time(delta); 1591 1599 } ··· 2119 2125 void getboottime64(struct timespec64 *ts) 2120 2126 { 2121 2127 struct timekeeper *tk = 
&tk_core.timekeeper; 2122 - ktime_t t = ktime_sub(tk->offs_real, tk->time_suspended); 2128 + ktime_t t = ktime_sub(tk->offs_real, tk->offs_boot); 2123 2129 2124 2130 *ts = ktime_to_timespec64(t); 2125 2131 } ··· 2182 2188 * ktime_get_update_offsets_now - hrtimer helper 2183 2189 * @cwsseq: pointer to check and store the clock was set sequence number 2184 2190 * @offs_real: pointer to storage for monotonic -> realtime offset 2191 + * @offs_boot: pointer to storage for monotonic -> boottime offset 2185 2192 * @offs_tai: pointer to storage for monotonic -> clock tai offset 2186 2193 * 2187 2194 * Returns current monotonic time and updates the offsets if the ··· 2192 2197 * Called from hrtimer_interrupt() or retrigger_next_event() 2193 2198 */ 2194 2199 ktime_t ktime_get_update_offsets_now(unsigned int *cwsseq, ktime_t *offs_real, 2195 - ktime_t *offs_tai) 2200 + ktime_t *offs_boot, ktime_t *offs_tai) 2196 2201 { 2197 2202 struct timekeeper *tk = &tk_core.timekeeper; 2198 2203 unsigned int seq; ··· 2209 2214 if (*cwsseq != tk->clock_was_set_seq) { 2210 2215 *cwsseq = tk->clock_was_set_seq; 2211 2216 *offs_real = tk->offs_real; 2217 + *offs_boot = tk->offs_boot; 2212 2218 *offs_tai = tk->offs_tai; 2213 2219 } 2214 2220
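The timekeeping.c hunk above replaces the per-timekeeper `time_suspended` accumulator with `offs_boot`, so `CLOCK_BOOTTIME` is derived by adding a boot offset to the fast monotonic clock instead of maintaining a separate base. A minimal single-threaded sketch of that "base + offset" composition (the variable and function names here are illustrative stand-ins, not kernel APIs):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the kernel's clock state: a monotonic
 * counter in ns and a boot offset accumulating time spent suspended. */
static uint64_t mono_ns;
static int64_t offs_boot_ns;

/* Mirrors tk_update_sleep_time(): suspend time only moves the offset. */
static void inject_sleeptime(int64_t delta_ns)
{
	offs_boot_ns += delta_ns;
}

/* Mirrors ktime_get_boot_fast_ns(): boottime = monotonic + offs_boot.
 * The real function takes no lock, which is why the comment in the diff
 * tolerates a reader combining an old base with a new offset. */
static uint64_t get_boot_fast_ns(void)
{
	return mono_ns + (uint64_t)offs_boot_ns;
}
```

The design choice visible in the diff is that suspend time never touches the monotonic base, so tracing clocks registered against `ktime_get_boot_fast_ns` keep advancing across suspend without the seqlock overhead of a second timekeeper.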
+1
kernel/time/timekeeping.h
··· 6 6 */ 7 7 extern ktime_t ktime_get_update_offsets_now(unsigned int *cwsseq, 8 8 ktime_t *offs_real, 9 + ktime_t *offs_boot, 9 10 ktime_t *offs_tai); 10 11 11 12 extern int timekeeping_valid_for_hres(void);
+21 -4
kernel/trace/bpf_trace.c
··· 977 977 { 978 978 struct perf_event_query_bpf __user *uquery = info; 979 979 struct perf_event_query_bpf query = {}; 980 + u32 *ids, prog_cnt, ids_len; 980 981 int ret; 981 982 982 983 if (!capable(CAP_SYS_ADMIN)) ··· 986 985 return -EINVAL; 987 986 if (copy_from_user(&query, uquery, sizeof(query))) 988 987 return -EFAULT; 989 - if (query.ids_len > BPF_TRACE_MAX_PROGS) 988 + 989 + ids_len = query.ids_len; 990 + if (ids_len > BPF_TRACE_MAX_PROGS) 990 991 return -E2BIG; 992 + ids = kcalloc(ids_len, sizeof(u32), GFP_USER | __GFP_NOWARN); 993 + if (!ids) 994 + return -ENOMEM; 995 + /* 996 + * The above kcalloc returns ZERO_SIZE_PTR when ids_len = 0, which 997 + * is required when user only wants to check for uquery->prog_cnt. 998 + * There is no need to check for it since the case is handled 999 + * gracefully in bpf_prog_array_copy_info. 1000 + */ 991 1001 992 1002 mutex_lock(&bpf_event_mutex); 993 1003 ret = bpf_prog_array_copy_info(event->tp_event->prog_array, 994 - uquery->ids, 995 - query.ids_len, 996 - &uquery->prog_cnt); 1004 + ids, 1005 + ids_len, 1006 + &prog_cnt); 997 1007 mutex_unlock(&bpf_event_mutex); 998 1008 1009 + if (copy_to_user(&uquery->prog_cnt, &prog_cnt, sizeof(prog_cnt)) || 1010 + copy_to_user(uquery->ids, ids, ids_len * sizeof(u32))) 1011 + ret = -EFAULT; 1012 + 1013 + kfree(ids); 999 1014 return ret; 1000 1015 } 1001 1016
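The bpf_trace.c change above switches from writing into the user buffer while `bpf_event_mutex` is held to a bounce-buffer pattern: fill a kernel-side allocation under the lock, drop the lock, then copy out. A userspace sketch of the same shape, assuming hypothetical names (`kcalloc`/`copy_to_user` are replaced by `calloc`/`memcpy` stand-ins):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Gather results into a private buffer "under the lock", then copy them
 * to the caller's buffer only after the lock would have been dropped. */
static int query_ids(uint32_t *user_ids, uint32_t ids_len, uint32_t *prog_cnt)
{
	uint32_t *ids;
	uint32_t i, n;

	ids = calloc(ids_len, sizeof(uint32_t));
	if (!ids && ids_len)
		return -1;	/* -ENOMEM in the kernel */

	/* ...mutex would be taken here... */
	n = ids_len < 3 ? ids_len : 3;	/* pretend 3 programs are attached */
	for (i = 0; i < n; i++)
		ids[i] = 100 + i;
	/* ...mutex dropped before touching the user buffer... */

	*prog_cnt = 3;
	memcpy(user_ids, ids, n * sizeof(uint32_t)); /* copy_to_user() stand-in */
	free(ids);
	return 0;
}
```

The point of the indirection is that a `copy_to_user()` can fault and sleep, which must not happen while holding the mutex; the temporary buffer makes the locked section fault-free.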
+1 -1
kernel/trace/trace.c
··· 1165 1165 { trace_clock, "perf", 1 }, 1166 1166 { ktime_get_mono_fast_ns, "mono", 1 }, 1167 1167 { ktime_get_raw_fast_ns, "mono_raw", 1 }, 1168 - { ktime_get_mono_fast_ns, "boot", 1 }, 1168 + { ktime_get_boot_fast_ns, "boot", 1 }, 1169 1169 ARCH_TRACE_CLOCKS 1170 1170 }; 1171 1171
+1 -1
kernel/trace/trace_entries.h
··· 356 356 __field( unsigned int, seqnum ) 357 357 ), 358 358 359 - F_printk("cnt:%u\tts:%010llu.%010lu\tinner:%llu\touter:%llunmi-ts:%llu\tnmi-count:%u\n", 359 + F_printk("cnt:%u\tts:%010llu.%010lu\tinner:%llu\touter:%llu\tnmi-ts:%llu\tnmi-count:%u\n", 360 360 __entry->seqnum, 361 361 __entry->tv_sec, 362 362 __entry->tv_nsec,
+7 -7
kernel/trace/trace_events_filter.c
··· 1499 1499 return ret; 1500 1500 } 1501 1501 1502 - if (!nr_preds) { 1503 - prog = NULL; 1504 - } else { 1505 - prog = predicate_parse(filter_string, nr_parens, nr_preds, 1502 + if (!nr_preds) 1503 + return -EINVAL; 1504 + 1505 + prog = predicate_parse(filter_string, nr_parens, nr_preds, 1506 1506 parse_pred, call, pe); 1507 - if (IS_ERR(prog)) 1508 - return PTR_ERR(prog); 1509 - } 1507 + if (IS_ERR(prog)) 1508 + return PTR_ERR(prog); 1509 + 1510 1510 rcu_assign_pointer(filter->prog, prog); 1511 1511 return 0; 1512 1512 }
+2 -1
lib/dma-direct.c
··· 84 84 __free_pages(page, page_order); 85 85 page = NULL; 86 86 87 - if (dev->coherent_dma_mask < DMA_BIT_MASK(32) && 87 + if (IS_ENABLED(CONFIG_ZONE_DMA) && 88 + dev->coherent_dma_mask < DMA_BIT_MASK(32) && 88 89 !(gfp & GFP_DMA)) { 89 90 gfp = (gfp & ~GFP_DMA32) | GFP_DMA; 90 91 goto again;
+5 -6
lib/kobject.c
··· 233 233 234 234 /* be noisy on error issues */ 235 235 if (error == -EEXIST) 236 - WARN(1, 237 - "%s failed for %s with -EEXIST, don't try to register things with the same name in the same directory.\n", 238 - __func__, kobject_name(kobj)); 236 + pr_err("%s failed for %s with -EEXIST, don't try to register things with the same name in the same directory.\n", 237 + __func__, kobject_name(kobj)); 239 238 else 240 - WARN(1, "%s failed for %s (error: %d parent: %s)\n", 241 - __func__, kobject_name(kobj), error, 242 - parent ? kobject_name(parent) : "'none'"); 239 + pr_err("%s failed for %s (error: %d parent: %s)\n", 240 + __func__, kobject_name(kobj), error, 241 + parent ? kobject_name(parent) : "'none'"); 243 242 } else 244 243 kobj->state_in_sysfs = 1; 245 244
+10 -1
mm/mmap.c
··· 100 100 __S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111 101 101 }; 102 102 103 + #ifndef CONFIG_ARCH_HAS_FILTER_PGPROT 104 + static inline pgprot_t arch_filter_pgprot(pgprot_t prot) 105 + { 106 + return prot; 107 + } 108 + #endif 109 + 103 110 pgprot_t vm_get_page_prot(unsigned long vm_flags) 104 111 { 105 - return __pgprot(pgprot_val(protection_map[vm_flags & 112 + pgprot_t ret = __pgprot(pgprot_val(protection_map[vm_flags & 106 113 (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]) | 107 114 pgprot_val(arch_vm_get_page_prot(vm_flags))); 115 + 116 + return arch_filter_pgprot(ret); 108 117 } 109 118 EXPORT_SYMBOL(vm_get_page_prot); 110 119
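The mm/mmap.c hunk uses the kernel's usual "default unless the arch overrides it" pattern: a generic identity `arch_filter_pgprot()` is compiled in only when `CONFIG_ARCH_HAS_FILTER_PGPROT` is unset. A compilable sketch of the pattern with illustrative names (the config macro and types below are not the kernel's):

```c
#include <assert.h>

typedef unsigned long pgprot_sketch_t;

/* Generic fallback: identity filter. An "architecture" would define
 * HAVE_ARCH_FILTER and supply its own arch_filter() instead. */
#ifndef HAVE_ARCH_FILTER
static inline pgprot_sketch_t arch_filter(pgprot_sketch_t prot)
{
	return prot;	/* pass protections through unchanged */
}
#endif

static pgprot_sketch_t get_page_prot(unsigned long flags)
{
	pgprot_sketch_t prot = flags & 0xf;	/* table lookup stands in here */
	return arch_filter(prot);
}
```

Because the fallback is a `static inline` that returns its argument, the generic build pays no cost for the hook.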
+6 -5
net/bridge/netfilter/ebtables.c
··· 1825 1825 { 1826 1826 unsigned int size = info->entries_size; 1827 1827 const void *entries = info->entries; 1828 - int ret; 1829 1828 1830 1829 newinfo->entries_size = size; 1831 - 1832 - ret = xt_compat_init_offsets(NFPROTO_BRIDGE, info->nentries); 1833 - if (ret) 1834 - return ret; 1830 + if (info->nentries) { 1831 + int ret = xt_compat_init_offsets(NFPROTO_BRIDGE, 1832 + info->nentries); 1833 + if (ret) 1834 + return ret; 1835 + } 1835 1836 1836 1837 return EBT_ENTRY_ITERATE(entries, size, compat_calc_entry, info, 1837 1838 entries, newinfo);
+7
net/ceph/messenger.c
··· 2569 2569 int ret = 1; 2570 2570 2571 2571 dout("try_write start %p state %lu\n", con, con->state); 2572 + if (con->state != CON_STATE_PREOPEN && 2573 + con->state != CON_STATE_CONNECTING && 2574 + con->state != CON_STATE_NEGOTIATING && 2575 + con->state != CON_STATE_OPEN) 2576 + return 0; 2572 2577 2573 2578 more: 2574 2579 dout("try_write out_kvec_bytes %d\n", con->out_kvec_bytes); ··· 2599 2594 } 2600 2595 2601 2596 more_kvec: 2597 + BUG_ON(!con->sock); 2598 + 2602 2599 /* kvec data queued? */ 2603 2600 if (con->out_kvec_left) { 2604 2601 ret = write_partial_kvec(con);
+11 -3
net/ceph/mon_client.c
··· 209 209 __open_session(monc); 210 210 } 211 211 212 + static void un_backoff(struct ceph_mon_client *monc) 213 + { 214 + monc->hunt_mult /= 2; /* reduce by 50% */ 215 + if (monc->hunt_mult < 1) 216 + monc->hunt_mult = 1; 217 + dout("%s hunt_mult now %d\n", __func__, monc->hunt_mult); 218 + } 219 + 212 220 /* 213 221 * Reschedule delayed work timer. 214 222 */ ··· 971 963 if (!monc->hunting) { 972 964 ceph_con_keepalive(&monc->con); 973 965 __validate_auth(monc); 966 + un_backoff(monc); 974 967 } 975 968 976 969 if (is_auth && ··· 1132 1123 dout("%s found mon%d\n", __func__, monc->cur_mon); 1133 1124 monc->hunting = false; 1134 1125 monc->had_a_connection = true; 1135 - monc->hunt_mult /= 2; /* reduce by 50% */ 1136 - if (monc->hunt_mult < 1) 1137 - monc->hunt_mult = 1; 1126 + un_backoff(monc); 1127 + __schedule_delayed(monc); 1138 1128 } 1139 1129 } 1140 1130
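The mon_client.c diff factors the backoff-reduction logic into `un_backoff()` so it can also run from the periodic keepalive path, not only on (re)connect. The helper itself is small enough to sketch directly:

```c
#include <assert.h>

/* Mirrors the un_backoff() helper added in the diff: halve the hunt
 * multiplier on a healthy connection, never dropping below 1. */
static int hunt_mult = 8;	/* illustrative starting value */

static void un_backoff(void)
{
	hunt_mult /= 2;		/* reduce by 50% */
	if (hunt_mult < 1)
		hunt_mult = 1;
}
```

Repeated healthy ticks decay an accumulated exponential backoff back toward the base interval instead of only resetting it on a fresh connection.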
+36 -2
net/ife/ife.c
··· 69 69 int total_pull; 70 70 u16 ifehdrln; 71 71 72 + if (!pskb_may_pull(skb, skb->dev->hard_header_len + IFE_METAHDRLEN)) 73 + return NULL; 74 + 72 75 ifehdr = (struct ifeheadr *) (skb->data + skb->dev->hard_header_len); 73 76 ifehdrln = ntohs(ifehdr->metalen); 74 77 total_pull = skb->dev->hard_header_len + ifehdrln; ··· 95 92 __be16 len; 96 93 }; 97 94 95 + static bool __ife_tlv_meta_valid(const unsigned char *skbdata, 96 + const unsigned char *ifehdr_end) 97 + { 98 + const struct meta_tlvhdr *tlv; 99 + u16 tlvlen; 100 + 101 + if (unlikely(skbdata + sizeof(*tlv) > ifehdr_end)) 102 + return false; 103 + 104 + tlv = (const struct meta_tlvhdr *)skbdata; 105 + tlvlen = ntohs(tlv->len); 106 + 107 + /* tlv length field is inc header, check on minimum */ 108 + if (tlvlen < NLA_HDRLEN) 109 + return false; 110 + 111 + /* overflow by NLA_ALIGN check */ 112 + if (NLA_ALIGN(tlvlen) < tlvlen) 113 + return false; 114 + 115 + if (unlikely(skbdata + NLA_ALIGN(tlvlen) > ifehdr_end)) 116 + return false; 117 + 118 + return true; 119 + } 120 + 98 121 /* Caller takes care of presenting data in network order 99 122 */ 100 - void *ife_tlv_meta_decode(void *skbdata, u16 *attrtype, u16 *dlen, u16 *totlen) 123 + void *ife_tlv_meta_decode(void *skbdata, const void *ifehdr_end, u16 *attrtype, 124 + u16 *dlen, u16 *totlen) 101 125 { 102 - struct meta_tlvhdr *tlv = (struct meta_tlvhdr *) skbdata; 126 + struct meta_tlvhdr *tlv; 103 127 128 + if (!__ife_tlv_meta_valid(skbdata, ifehdr_end)) 129 + return NULL; 130 + 131 + tlv = (struct meta_tlvhdr *)skbdata; 104 132 *dlen = ntohs(tlv->len) - NLA_HDRLEN; 105 133 *attrtype = ntohs(tlv->type); 106 134
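The ife.c change adds `__ife_tlv_meta_valid()` so every TLV header read is bounds-checked against the end of the IFE header before it is dereferenced. A userspace sketch of the same checks, with simplified stand-ins for `NLA_HDRLEN`/`NLA_ALIGN` and host-order lengths for brevity:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define SK_NLA_HDRLEN	4u
#define SK_NLA_ALIGN(len) (((len) + 3u) & ~3u)

struct tlvhdr_sketch {
	uint16_t type;
	uint16_t len;	/* big-endian on the wire; host order here */
};

/* Validate one TLV within 'avail' remaining bytes, mirroring the three
 * checks in __ife_tlv_meta_valid(): header fits, length covers at least
 * the header, and the aligned length neither wraps nor overruns. */
static bool tlv_valid(const unsigned char *data, size_t avail)
{
	struct tlvhdr_sketch tlv;
	uint16_t tlvlen;

	if (avail < sizeof(tlv))
		return false;
	memcpy(&tlv, data, sizeof(tlv));
	tlvlen = tlv.len;

	if (tlvlen < SK_NLA_HDRLEN)		/* length includes the header */
		return false;
	if (SK_NLA_ALIGN(tlvlen) < tlvlen)	/* alignment must not wrap */
		return false;
	return SK_NLA_ALIGN(tlvlen) <= avail;
}
```

Returning `NULL` from `ife_tlv_meta_decode()` on validation failure is what lets callers stop walking a malformed metadata area instead of reading past the packet.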
+2 -5
net/ipv4/tcp_input.c
··· 3868 3868 int length = (th->doff << 2) - sizeof(*th); 3869 3869 const u8 *ptr = (const u8 *)(th + 1); 3870 3870 3871 - /* If the TCP option is too short, we can short cut */ 3872 - if (length < TCPOLEN_MD5SIG) 3873 - return NULL; 3874 - 3875 - while (length > 0) { 3871 + /* If not enough data remaining, we can short cut */ 3872 + while (length >= TCPOLEN_MD5SIG) { 3876 3873 int opcode = *ptr++; 3877 3874 int opsize; 3878 3875
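The tcp_input.c fix moves the "enough bytes left for an MD5 option" test from a one-time pre-check into the loop condition, so the check is re-evaluated after every option is consumed. A sketch of the corrected loop shape over a raw option buffer (constants match the TCP option kinds; the function name is illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define OPT_EOL		0
#define OPT_NOP		1
#define OPT_MD5		19
#define OPTLEN_MD5	18

/* Return a pointer to the MD5 digest if an MD5 signature option is
 * present, NULL otherwise. The loop condition guarantees a full MD5
 * option could still fit before any further bytes are read. */
static const uint8_t *find_md5_option(const uint8_t *ptr, int length)
{
	while (length >= OPTLEN_MD5) {
		int opcode = *ptr++;
		int opsize;

		if (opcode == OPT_EOL)
			return NULL;
		if (opcode == OPT_NOP) {
			length--;
			continue;
		}
		opsize = *ptr++;
		if (opsize < 2 || opsize > length)
			return NULL;
		if (opcode == OPT_MD5 && opsize == OPTLEN_MD5)
			return ptr;
		ptr += opsize - 2;	/* opsize counts kind+len bytes too */
		length -= opsize;
	}
	return NULL;
}
```

With the old shape, `length` was only checked once, so a buffer that started long enough could still be over-read after earlier options shrank it.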
+28 -27
net/ipv6/netfilter/Kconfig
··· 48 48 fields such as the source, destination, flowlabel, hop-limit and 49 49 the packet mark. 50 50 51 + if NF_NAT_IPV6 52 + 53 + config NFT_CHAIN_NAT_IPV6 54 + tristate "IPv6 nf_tables nat chain support" 55 + help 56 + This option enables the "nat" chain for IPv6 in nf_tables. This 57 + chain type is used to perform Network Address Translation (NAT) 58 + packet transformations such as the source, destination address and 59 + source and destination ports. 60 + 61 + config NFT_MASQ_IPV6 62 + tristate "IPv6 masquerade support for nf_tables" 63 + depends on NFT_MASQ 64 + select NF_NAT_MASQUERADE_IPV6 65 + help 66 + This is the expression that provides IPv4 masquerading support for 67 + nf_tables. 68 + 69 + config NFT_REDIR_IPV6 70 + tristate "IPv6 redirect support for nf_tables" 71 + depends on NFT_REDIR 72 + select NF_NAT_REDIRECT 73 + help 74 + This is the expression that provides IPv4 redirect support for 75 + nf_tables. 76 + 77 + endif # NF_NAT_IPV6 78 + 51 79 config NFT_REJECT_IPV6 52 80 select NF_REJECT_IPV6 53 81 default NFT_REJECT ··· 135 107 136 108 if NF_NAT_IPV6 137 109 138 - config NFT_CHAIN_NAT_IPV6 139 - depends on NF_TABLES_IPV6 140 - tristate "IPv6 nf_tables nat chain support" 141 - help 142 - This option enables the "nat" chain for IPv6 in nf_tables. This 143 - chain type is used to perform Network Address Translation (NAT) 144 - packet transformations such as the source, destination address and 145 - source and destination ports. 146 - 147 110 config NF_NAT_MASQUERADE_IPV6 148 111 tristate "IPv6 masquerade support" 149 112 help 150 113 This is the kernel functionality to provide NAT in the masquerade 151 114 flavour (automatic source address selection) for IPv6. 152 - 153 - config NFT_MASQ_IPV6 154 - tristate "IPv6 masquerade support for nf_tables" 155 - depends on NF_TABLES_IPV6 156 - depends on NFT_MASQ 157 - select NF_NAT_MASQUERADE_IPV6 158 - help 159 - This is the expression that provides IPv4 masquerading support for 160 - nf_tables. 
161 - 162 - config NFT_REDIR_IPV6 163 - tristate "IPv6 redirect support for nf_tables" 164 - depends on NF_TABLES_IPV6 165 - depends on NFT_REDIR 166 - select NF_NAT_REDIRECT 167 - help 168 - This is the expression that provides IPv4 redirect support for 169 - nf_tables. 170 115 171 116 endif # NF_NAT_IPV6 172 117
+2
net/ipv6/route.c
··· 3975 3975 3976 3976 static const struct nla_policy rtm_ipv6_policy[RTA_MAX+1] = { 3977 3977 [RTA_GATEWAY] = { .len = sizeof(struct in6_addr) }, 3978 + [RTA_PREFSRC] = { .len = sizeof(struct in6_addr) }, 3978 3979 [RTA_OIF] = { .type = NLA_U32 }, 3979 3980 [RTA_IIF] = { .type = NLA_U32 }, 3980 3981 [RTA_PRIORITY] = { .type = NLA_U32 }, ··· 3987 3986 [RTA_EXPIRES] = { .type = NLA_U32 }, 3988 3987 [RTA_UID] = { .type = NLA_U32 }, 3989 3988 [RTA_MARK] = { .type = NLA_U32 }, 3989 + [RTA_TABLE] = { .type = NLA_U32 }, 3990 3990 }; 3991 3991 3992 3992 static int rtm_to_fib6_config(struct sk_buff *skb, struct nlmsghdr *nlh,
+1 -1
net/ipv6/seg6_iptunnel.c
··· 136 136 isrh->nexthdr = proto; 137 137 138 138 hdr->daddr = isrh->segments[isrh->first_segment]; 139 - set_tun_src(net, ip6_dst_idev(dst)->dev, &hdr->daddr, &hdr->saddr); 139 + set_tun_src(net, dst->dev, &hdr->daddr, &hdr->saddr); 140 140 141 141 #ifdef CONFIG_IPV6_SEG6_HMAC 142 142 if (sr_has_hmac(isrh)) {
+4 -1
net/l2tp/l2tp_debugfs.c
··· 106 106 return; 107 107 108 108 /* Drop reference taken by last invocation of l2tp_dfs_next_tunnel() */ 109 - if (pd->tunnel) 109 + if (pd->tunnel) { 110 110 l2tp_tunnel_dec_refcount(pd->tunnel); 111 + pd->tunnel = NULL; 112 + pd->session = NULL; 113 + } 111 114 } 112 115 113 116 static void l2tp_dfs_seq_tunnel_show(struct seq_file *m, void *v)
+11 -1
net/l2tp/l2tp_ppp.c
··· 619 619 lock_sock(sk); 620 620 621 621 error = -EINVAL; 622 + 623 + if (sockaddr_len != sizeof(struct sockaddr_pppol2tp) && 624 + sockaddr_len != sizeof(struct sockaddr_pppol2tpv3) && 625 + sockaddr_len != sizeof(struct sockaddr_pppol2tpin6) && 626 + sockaddr_len != sizeof(struct sockaddr_pppol2tpv3in6)) 627 + goto end; 628 + 622 629 if (sp->sa_protocol != PX_PROTO_OL2TP) 623 630 goto end; 624 631 ··· 1625 1618 return; 1626 1619 1627 1620 /* Drop reference taken by last invocation of pppol2tp_next_tunnel() */ 1628 - if (pd->tunnel) 1621 + if (pd->tunnel) { 1629 1622 l2tp_tunnel_dec_refcount(pd->tunnel); 1623 + pd->tunnel = NULL; 1624 + pd->session = NULL; 1625 + } 1630 1626 } 1631 1627 1632 1628 static void pppol2tp_seq_tunnel_show(struct seq_file *m, void *v)
+12 -9
net/llc/af_llc.c
··· 189 189 { 190 190 struct sock *sk = sock->sk; 191 191 struct llc_sock *llc; 192 - struct llc_sap *sap; 193 192 194 193 if (unlikely(sk == NULL)) 195 194 goto out; ··· 199 200 llc->laddr.lsap, llc->daddr.lsap); 200 201 if (!llc_send_disc(sk)) 201 202 llc_ui_wait_for_disc(sk, sk->sk_rcvtimeo); 202 - sap = llc->sap; 203 - /* Hold this for release_sock(), so that llc_backlog_rcv() could still 204 - * use it. 205 - */ 206 - llc_sap_hold(sap); 207 - if (!sock_flag(sk, SOCK_ZAPPED)) 203 + if (!sock_flag(sk, SOCK_ZAPPED)) { 204 + struct llc_sap *sap = llc->sap; 205 + 206 + /* Hold this for release_sock(), so that llc_backlog_rcv() 207 + * could still use it. 208 + */ 209 + llc_sap_hold(sap); 208 210 llc_sap_remove_socket(llc->sap, sk); 209 - release_sock(sk); 210 - llc_sap_put(sap); 211 + release_sock(sk); 212 + llc_sap_put(sap); 213 + } else { 214 + release_sock(sk); 215 + } 211 216 if (llc->dev) 212 217 dev_put(llc->dev); 213 218 sock_put(sk);
+1 -8
net/llc/llc_c_ac.c
··· 1099 1099 1100 1100 int llc_conn_ac_stop_all_timers(struct sock *sk, struct sk_buff *skb) 1101 1101 { 1102 - struct llc_sock *llc = llc_sk(sk); 1103 - 1104 - del_timer(&llc->pf_cycle_timer.timer); 1105 - del_timer(&llc->ack_timer.timer); 1106 - del_timer(&llc->rej_sent_timer.timer); 1107 - del_timer(&llc->busy_state_timer.timer); 1108 - llc->ack_must_be_send = 0; 1109 - llc->ack_pf = 0; 1102 + llc_sk_stop_all_timers(sk, false); 1110 1103 return 0; 1111 1104 } 1112 1105
+21 -1
net/llc/llc_conn.c
··· 961 961 return sk; 962 962 } 963 963 964 + void llc_sk_stop_all_timers(struct sock *sk, bool sync) 965 + { 966 + struct llc_sock *llc = llc_sk(sk); 967 + 968 + if (sync) { 969 + del_timer_sync(&llc->pf_cycle_timer.timer); 970 + del_timer_sync(&llc->ack_timer.timer); 971 + del_timer_sync(&llc->rej_sent_timer.timer); 972 + del_timer_sync(&llc->busy_state_timer.timer); 973 + } else { 974 + del_timer(&llc->pf_cycle_timer.timer); 975 + del_timer(&llc->ack_timer.timer); 976 + del_timer(&llc->rej_sent_timer.timer); 977 + del_timer(&llc->busy_state_timer.timer); 978 + } 979 + 980 + llc->ack_must_be_send = 0; 981 + llc->ack_pf = 0; 982 + } 983 + 964 984 /** 965 985 * llc_sk_free - Frees a LLC socket 966 986 * @sk - socket to free ··· 993 973 994 974 llc->state = LLC_CONN_OUT_OF_SVC; 995 975 /* Stop all (possibly) running timers */ 996 - llc_conn_ac_stop_all_timers(sk, NULL); 976 + llc_sk_stop_all_timers(sk, true); 997 977 #ifdef DEBUG_LLC_CONN_ALLOC 998 978 printk(KERN_INFO "%s: unackq=%d, txq=%d\n", __func__, 999 979 skb_queue_len(&llc->pdu_unack_q),
+1
net/netfilter/Kconfig
··· 594 594 config NFT_REJECT 595 595 default m if NETFILTER_ADVANCED=n 596 596 tristate "Netfilter nf_tables reject support" 597 + depends on !NF_TABLES_INET || (IPV6!=m || m) 597 598 help 598 599 This option adds the "reject" expression that you can use to 599 600 explicitly deny and notify via TCP reset/ICMP informational errors
-8
net/netfilter/ipvs/ip_vs_ctl.c
··· 2384 2384 strlcpy(cfg.mcast_ifn, dm->mcast_ifn, 2385 2385 sizeof(cfg.mcast_ifn)); 2386 2386 cfg.syncid = dm->syncid; 2387 - rtnl_lock(); 2388 - mutex_lock(&ipvs->sync_mutex); 2389 2387 ret = start_sync_thread(ipvs, &cfg, dm->state); 2390 - mutex_unlock(&ipvs->sync_mutex); 2391 - rtnl_unlock(); 2392 2388 } else { 2393 2389 mutex_lock(&ipvs->sync_mutex); 2394 2390 ret = stop_sync_thread(ipvs, dm->state); ··· 3477 3481 if (ipvs->mixed_address_family_dests > 0) 3478 3482 return -EINVAL; 3479 3483 3480 - rtnl_lock(); 3481 - mutex_lock(&ipvs->sync_mutex); 3482 3484 ret = start_sync_thread(ipvs, &c, 3483 3485 nla_get_u32(attrs[IPVS_DAEMON_ATTR_STATE])); 3484 - mutex_unlock(&ipvs->sync_mutex); 3485 - rtnl_unlock(); 3486 3486 return ret; 3487 3487 } 3488 3488
+80 -75
net/netfilter/ipvs/ip_vs_sync.c
··· 49 49 #include <linux/kthread.h> 50 50 #include <linux/wait.h> 51 51 #include <linux/kernel.h> 52 + #include <linux/sched/signal.h> 52 53 53 54 #include <asm/unaligned.h> /* Used for ntoh_seq and hton_seq */ 54 55 ··· 1361 1360 /* 1362 1361 * Specifiy default interface for outgoing multicasts 1363 1362 */ 1364 - static int set_mcast_if(struct sock *sk, char *ifname) 1363 + static int set_mcast_if(struct sock *sk, struct net_device *dev) 1365 1364 { 1366 - struct net_device *dev; 1367 1365 struct inet_sock *inet = inet_sk(sk); 1368 - struct net *net = sock_net(sk); 1369 - 1370 - dev = __dev_get_by_name(net, ifname); 1371 - if (!dev) 1372 - return -ENODEV; 1373 1366 1374 1367 if (sk->sk_bound_dev_if && dev->ifindex != sk->sk_bound_dev_if) 1375 1368 return -EINVAL; ··· 1391 1396 * in the in_addr structure passed in as a parameter. 1392 1397 */ 1393 1398 static int 1394 - join_mcast_group(struct sock *sk, struct in_addr *addr, char *ifname) 1399 + join_mcast_group(struct sock *sk, struct in_addr *addr, struct net_device *dev) 1395 1400 { 1396 - struct net *net = sock_net(sk); 1397 1401 struct ip_mreqn mreq; 1398 - struct net_device *dev; 1399 1402 int ret; 1400 1403 1401 1404 memset(&mreq, 0, sizeof(mreq)); 1402 1405 memcpy(&mreq.imr_multiaddr, addr, sizeof(struct in_addr)); 1403 1406 1404 - dev = __dev_get_by_name(net, ifname); 1405 - if (!dev) 1406 - return -ENODEV; 1407 1407 if (sk->sk_bound_dev_if && dev->ifindex != sk->sk_bound_dev_if) 1408 1408 return -EINVAL; 1409 1409 ··· 1413 1423 1414 1424 #ifdef CONFIG_IP_VS_IPV6 1415 1425 static int join_mcast_group6(struct sock *sk, struct in6_addr *addr, 1416 - char *ifname) 1426 + struct net_device *dev) 1417 1427 { 1418 - struct net *net = sock_net(sk); 1419 - struct net_device *dev; 1420 1428 int ret; 1421 1429 1422 - dev = __dev_get_by_name(net, ifname); 1423 - if (!dev) 1424 - return -ENODEV; 1425 1430 if (sk->sk_bound_dev_if && dev->ifindex != sk->sk_bound_dev_if) 1426 1431 return -EINVAL; 1427 1432 ··· 1428 
1443 } 1429 1444 #endif 1430 1445 1431 - static int bind_mcastif_addr(struct socket *sock, char *ifname) 1446 + static int bind_mcastif_addr(struct socket *sock, struct net_device *dev) 1432 1447 { 1433 - struct net *net = sock_net(sock->sk); 1434 - struct net_device *dev; 1435 1448 __be32 addr; 1436 1449 struct sockaddr_in sin; 1437 - 1438 - dev = __dev_get_by_name(net, ifname); 1439 - if (!dev) 1440 - return -ENODEV; 1441 1450 1442 1451 addr = inet_select_addr(dev, 0, RT_SCOPE_UNIVERSE); 1443 1452 if (!addr) ··· 1439 1460 "multicast interface.\n"); 1440 1461 1441 1462 IP_VS_DBG(7, "binding socket with (%s) %pI4\n", 1442 - ifname, &addr); 1463 + dev->name, &addr); 1443 1464 1444 1465 /* Now bind the socket with the address of multicast interface */ 1445 1466 sin.sin_family = AF_INET; ··· 1472 1493 /* 1473 1494 * Set up sending multicast socket over UDP 1474 1495 */ 1475 - static struct socket *make_send_sock(struct netns_ipvs *ipvs, int id) 1496 + static int make_send_sock(struct netns_ipvs *ipvs, int id, 1497 + struct net_device *dev, struct socket **sock_ret) 1476 1498 { 1477 1499 /* multicast addr */ 1478 1500 union ipvs_sockaddr mcast_addr; ··· 1485 1505 IPPROTO_UDP, &sock); 1486 1506 if (result < 0) { 1487 1507 pr_err("Error during creation of socket; terminating\n"); 1488 - return ERR_PTR(result); 1508 + goto error; 1489 1509 } 1490 - result = set_mcast_if(sock->sk, ipvs->mcfg.mcast_ifn); 1510 + *sock_ret = sock; 1511 + result = set_mcast_if(sock->sk, dev); 1491 1512 if (result < 0) { 1492 1513 pr_err("Error setting outbound mcast interface\n"); 1493 1514 goto error; ··· 1503 1522 set_sock_size(sock->sk, 1, result); 1504 1523 1505 1524 if (AF_INET == ipvs->mcfg.mcast_af) 1506 - result = bind_mcastif_addr(sock, ipvs->mcfg.mcast_ifn); 1525 + result = bind_mcastif_addr(sock, dev); 1507 1526 else 1508 1527 result = 0; 1509 1528 if (result < 0) { ··· 1519 1538 goto error; 1520 1539 } 1521 1540 1522 - return sock; 1541 + return 0; 1523 1542 1524 1543 error: 1525 - 
sock_release(sock); 1526 - return ERR_PTR(result); 1544 + return result; 1527 1545 } 1528 1546 1529 1547 1530 1548 /* 1531 1549 * Set up receiving multicast socket over UDP 1532 1550 */ 1533 - static struct socket *make_receive_sock(struct netns_ipvs *ipvs, int id, 1534 - int ifindex) 1551 + static int make_receive_sock(struct netns_ipvs *ipvs, int id, 1552 + struct net_device *dev, struct socket **sock_ret) 1535 1553 { 1536 1554 /* multicast addr */ 1537 1555 union ipvs_sockaddr mcast_addr; ··· 1542 1562 IPPROTO_UDP, &sock); 1543 1563 if (result < 0) { 1544 1564 pr_err("Error during creation of socket; terminating\n"); 1545 - return ERR_PTR(result); 1565 + goto error; 1546 1566 } 1567 + *sock_ret = sock; 1547 1568 /* it is equivalent to the REUSEADDR option in user-space */ 1548 1569 sock->sk->sk_reuse = SK_CAN_REUSE; 1549 1570 result = sysctl_sync_sock_size(ipvs); ··· 1552 1571 set_sock_size(sock->sk, 0, result); 1553 1572 1554 1573 get_mcast_sockaddr(&mcast_addr, &salen, &ipvs->bcfg, id); 1555 - sock->sk->sk_bound_dev_if = ifindex; 1574 + sock->sk->sk_bound_dev_if = dev->ifindex; 1556 1575 result = sock->ops->bind(sock, (struct sockaddr *)&mcast_addr, salen); 1557 1576 if (result < 0) { 1558 1577 pr_err("Error binding to the multicast addr\n"); ··· 1563 1582 #ifdef CONFIG_IP_VS_IPV6 1564 1583 if (ipvs->bcfg.mcast_af == AF_INET6) 1565 1584 result = join_mcast_group6(sock->sk, &mcast_addr.in6.sin6_addr, 1566 - ipvs->bcfg.mcast_ifn); 1585 + dev); 1567 1586 else 1568 1587 #endif 1569 1588 result = join_mcast_group(sock->sk, &mcast_addr.in.sin_addr, 1570 - ipvs->bcfg.mcast_ifn); 1589 + dev); 1571 1590 if (result < 0) { 1572 1591 pr_err("Error joining to the multicast group\n"); 1573 1592 goto error; 1574 1593 } 1575 1594 1576 - return sock; 1595 + return 0; 1577 1596 1578 1597 error: 1579 - sock_release(sock); 1580 - return ERR_PTR(result); 1598 + return result; 1581 1599 } 1582 1600 1583 1601 ··· 1758 1778 int start_sync_thread(struct netns_ipvs *ipvs, struct 
ipvs_sync_daemon_cfg *c, 1759 1779 int state) 1760 1780 { 1761 - struct ip_vs_sync_thread_data *tinfo; 1781 + struct ip_vs_sync_thread_data *tinfo = NULL; 1762 1782 struct task_struct **array = NULL, *task; 1763 - struct socket *sock; 1764 1783 struct net_device *dev; 1765 1784 char *name; 1766 1785 int (*threadfn)(void *data); 1767 - int id, count, hlen; 1786 + int id = 0, count, hlen; 1768 1787 int result = -ENOMEM; 1769 1788 u16 mtu, min_mtu; 1770 1789 1771 1790 IP_VS_DBG(7, "%s(): pid %d\n", __func__, task_pid_nr(current)); 1772 1791 IP_VS_DBG(7, "Each ip_vs_sync_conn entry needs %zd bytes\n", 1773 1792 sizeof(struct ip_vs_sync_conn_v0)); 1793 + 1794 + /* Do not hold one mutex and then to block on another */ 1795 + for (;;) { 1796 + rtnl_lock(); 1797 + if (mutex_trylock(&ipvs->sync_mutex)) 1798 + break; 1799 + rtnl_unlock(); 1800 + mutex_lock(&ipvs->sync_mutex); 1801 + if (rtnl_trylock()) 1802 + break; 1803 + mutex_unlock(&ipvs->sync_mutex); 1804 + } 1774 1805 1775 1806 if (!ipvs->sync_state) { 1776 1807 count = clamp(sysctl_sync_ports(ipvs), 1, IPVS_SYNC_PORTS_MAX); ··· 1801 1810 dev = __dev_get_by_name(ipvs->net, c->mcast_ifn); 1802 1811 if (!dev) { 1803 1812 pr_err("Unknown mcast interface: %s\n", c->mcast_ifn); 1804 - return -ENODEV; 1813 + result = -ENODEV; 1814 + goto out_early; 1805 1815 } 1806 1816 hlen = (AF_INET6 == c->mcast_af) ? 
1807 1817 sizeof(struct ipv6hdr) + sizeof(struct udphdr) : ··· 1819 1827 c->sync_maxlen = mtu - hlen; 1820 1828 1821 1829 if (state == IP_VS_STATE_MASTER) { 1830 + result = -EEXIST; 1822 1831 if (ipvs->ms) 1823 - return -EEXIST; 1832 + goto out_early; 1824 1833 1825 1834 ipvs->mcfg = *c; 1826 1835 name = "ipvs-m:%d:%d"; 1827 1836 threadfn = sync_thread_master; 1828 1837 } else if (state == IP_VS_STATE_BACKUP) { 1838 + result = -EEXIST; 1829 1839 if (ipvs->backup_threads) 1830 - return -EEXIST; 1840 + goto out_early; 1831 1841 1832 1842 ipvs->bcfg = *c; 1833 1843 name = "ipvs-b:%d:%d"; 1834 1844 threadfn = sync_thread_backup; 1835 1845 } else { 1836 - return -EINVAL; 1846 + result = -EINVAL; 1847 + goto out_early; 1837 1848 } 1838 1849 1839 1850 if (state == IP_VS_STATE_MASTER) { 1840 1851 struct ipvs_master_sync_state *ms; 1841 1852 1853 + result = -ENOMEM; 1842 1854 ipvs->ms = kcalloc(count, sizeof(ipvs->ms[0]), GFP_KERNEL); 1843 1855 if (!ipvs->ms) 1844 1856 goto out; ··· 1858 1862 } else { 1859 1863 array = kcalloc(count, sizeof(struct task_struct *), 1860 1864 GFP_KERNEL); 1865 + result = -ENOMEM; 1861 1866 if (!array) 1862 1867 goto out; 1863 1868 } 1864 1869 1865 - tinfo = NULL; 1866 1870 for (id = 0; id < count; id++) { 1867 - if (state == IP_VS_STATE_MASTER) 1868 - sock = make_send_sock(ipvs, id); 1869 - else 1870 - sock = make_receive_sock(ipvs, id, dev->ifindex); 1871 - if (IS_ERR(sock)) { 1872 - result = PTR_ERR(sock); 1873 - goto outtinfo; 1874 - } 1871 + result = -ENOMEM; 1875 1872 tinfo = kmalloc(sizeof(*tinfo), GFP_KERNEL); 1876 1873 if (!tinfo) 1877 - goto outsocket; 1874 + goto out; 1878 1875 tinfo->ipvs = ipvs; 1879 - tinfo->sock = sock; 1876 + tinfo->sock = NULL; 1880 1877 if (state == IP_VS_STATE_BACKUP) { 1881 1878 tinfo->buf = kmalloc(ipvs->bcfg.sync_maxlen, 1882 1879 GFP_KERNEL); 1883 1880 if (!tinfo->buf) 1884 - goto outtinfo; 1881 + goto out; 1885 1882 } else { 1886 1883 tinfo->buf = NULL; 1887 1884 } 1888 1885 tinfo->id = id; 1886 + if 
(state == IP_VS_STATE_MASTER) 1887 + result = make_send_sock(ipvs, id, dev, &tinfo->sock); 1888 + else 1889 + result = make_receive_sock(ipvs, id, dev, &tinfo->sock); 1890 + if (result < 0) 1891 + goto out; 1889 1892 1890 1893 task = kthread_run(threadfn, tinfo, name, ipvs->gen, id); 1891 1894 if (IS_ERR(task)) { 1892 1895 result = PTR_ERR(task); 1893 - goto outtinfo; 1896 + goto out; 1894 1897 } 1895 1898 tinfo = NULL; 1896 1899 if (state == IP_VS_STATE_MASTER) ··· 1906 1911 ipvs->sync_state |= state; 1907 1912 spin_unlock_bh(&ipvs->sync_buff_lock); 1908 1913 1914 + mutex_unlock(&ipvs->sync_mutex); 1915 + rtnl_unlock(); 1916 + 1909 1917 /* increase the module use count */ 1910 1918 ip_vs_use_count_inc(); 1911 1919 1912 1920 return 0; 1913 1921 1914 - outsocket: 1915 - sock_release(sock); 1916 - 1917 - outtinfo: 1918 - if (tinfo) { 1919 - sock_release(tinfo->sock); 1920 - kfree(tinfo->buf); 1921 - kfree(tinfo); 1922 - } 1922 + out: 1923 + /* We do not need RTNL lock anymore, release it here so that 1924 + * sock_release below and in the kthreads can use rtnl_lock 1925 + * to leave the mcast group. 1926 + */ 1927 + rtnl_unlock(); 1923 1928 count = id; 1924 1929 while (count-- > 0) { 1925 1930 if (state == IP_VS_STATE_MASTER) ··· 1927 1932 else 1928 1933 kthread_stop(array[count]); 1929 1934 } 1930 - kfree(array); 1931 - 1932 - out: 1933 1935 if (!(ipvs->sync_state & IP_VS_STATE_MASTER)) { 1934 1936 kfree(ipvs->ms); 1935 1937 ipvs->ms = NULL; 1936 1938 } 1939 + mutex_unlock(&ipvs->sync_mutex); 1940 + if (tinfo) { 1941 + if (tinfo->sock) 1942 + sock_release(tinfo->sock); 1943 + kfree(tinfo->buf); 1944 + kfree(tinfo); 1945 + } 1946 + kfree(array); 1947 + return result; 1948 + 1949 + out_early: 1950 + mutex_unlock(&ipvs->sync_mutex); 1951 + rtnl_unlock(); 1937 1952 return result; 1938 1953 } 1939 1954
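The ip_vs_sync.c rework solves an rtnl_lock/sync_mutex ordering problem with a trylock ping-pong: never block on one mutex while holding the other; back off and retry from the opposite side. A single-threaded mock of that retry shape (the `lock()` stand-in never actually blocks here, so this only exercises the control flow, not real contention):

```c
#include <assert.h>
#include <stdbool.h>

static bool rtnl_held, sync_held;

static bool trylock(bool *m) { if (*m) return false; *m = true; return true; }
static void lock(bool *m)    { *m = true; }	/* would block in real code */
static void unlock(bool *m)  { *m = false; }

/* Mirrors the "do not hold one mutex and then block on another" loop
 * added to start_sync_thread(). */
static void lock_both(void)
{
	for (;;) {
		lock(&rtnl_held);
		if (trylock(&sync_held))
			break;
		unlock(&rtnl_held);
		lock(&sync_held);
		if (trylock(&rtnl_held))
			break;
		unlock(&sync_held);
	}
}
```

Either lock order can succeed, so two paths that naturally acquire the locks in opposite orders can no longer deadlock against each other.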
+4 -1
net/netfilter/nf_conntrack_expect.c
··· 252 252 static inline int expect_matches(const struct nf_conntrack_expect *a, 253 253 const struct nf_conntrack_expect *b) 254 254 { 255 - return a->master == b->master && a->class == b->class && 255 + return a->master == b->master && 256 256 nf_ct_tuple_equal(&a->tuple, &b->tuple) && 257 257 nf_ct_tuple_mask_equal(&a->mask, &b->mask) && 258 258 net_eq(nf_ct_net(a->master), nf_ct_net(b->master)) && ··· 421 421 h = nf_ct_expect_dst_hash(net, &expect->tuple); 422 422 hlist_for_each_entry_safe(i, next, &nf_ct_expect_hash[h], hnode) { 423 423 if (expect_matches(i, expect)) { 424 + if (i->class != expect->class) 425 + return -EALREADY; 426 + 424 427 if (nf_ct_remove_expect(i)) 425 428 break; 426 429 } else if (expect_clash(i, expect)) {
+2
net/netfilter/nf_conntrack_extend.c
··· 9 9 * 2 of the License, or (at your option) any later version. 10 10 */ 11 11 #include <linux/kernel.h> 12 + #include <linux/kmemleak.h> 12 13 #include <linux/module.h> 13 14 #include <linux/mutex.h> 14 15 #include <linux/rcupdate.h> ··· 72 71 rcu_read_unlock(); 73 72 74 73 alloc = max(newlen, NF_CT_EXT_PREALLOC); 74 + kmemleak_not_leak(old); 75 75 new = __krealloc(old, alloc, gfp); 76 76 if (!new) 77 77 return NULL;
+12 -4
net/netfilter/nf_conntrack_sip.c
··· 938 938 datalen, rtp_exp, rtcp_exp, 939 939 mediaoff, medialen, daddr); 940 940 else { 941 - if (nf_ct_expect_related(rtp_exp) == 0) { 942 - if (nf_ct_expect_related(rtcp_exp) != 0) 943 - nf_ct_unexpect_related(rtp_exp); 944 - else 941 + /* -EALREADY handling works around end-points that send 942 + * SDP messages with identical port but different media type, 943 + * we pretend expectation was set up. 944 + */ 945 + int errp = nf_ct_expect_related(rtp_exp); 946 + 947 + if (errp == 0 || errp == -EALREADY) { 948 + int errcp = nf_ct_expect_related(rtcp_exp); 949 + 950 + if (errcp == 0 || errcp == -EALREADY) 945 951 ret = NF_ACCEPT; 952 + else if (errp == 0) 953 + nf_ct_unexpect_related(rtp_exp); 946 954 } 947 955 } 948 956 nf_ct_expect_put(rtcp_exp);
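The SIP hunk above starts treating `-EALREADY` from `nf_ct_expect_related()` as success (the peer re-announced the same port with a different media type), and only rolls back the RTP expectation if it was the one this call actually created. A small sketch of just that decision logic, with stand-in names rather than the conntrack API:

```c
#include <errno.h>

#define NF_DROP   0
#define NF_ACCEPT 1

/* Sketch: errp/errcp are the results of expecting the RTP and RTCP
 * flows. 0 and -EALREADY both count as "expectation in place"; the
 * RTP expectation is only marked for rollback when we created it
 * ourselves (errp == 0) and RTCP then failed. */
static int sdp_decide(int errp, int errcp, int *rollback_rtp)
{
	int ret = NF_DROP;

	*rollback_rtp = 0;
	if (errp == 0 || errp == -EALREADY) {
		if (errcp == 0 || errcp == -EALREADY)
			ret = NF_ACCEPT;
		else if (errp == 0)
			*rollback_rtp = 1;	/* undo only our own expectation */
	}
	return ret;
}
```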
+38 -31
net/netfilter/nf_tables_api.c
··· 2361 2361 } 2362 2362 2363 2363 if (nlh->nlmsg_flags & NLM_F_REPLACE) { 2364 - if (nft_is_active_next(net, old_rule)) { 2365 - trans = nft_trans_rule_add(&ctx, NFT_MSG_DELRULE, 2366 - old_rule); 2367 - if (trans == NULL) { 2368 - err = -ENOMEM; 2369 - goto err2; 2370 - } 2371 - nft_deactivate_next(net, old_rule); 2372 - chain->use--; 2373 - list_add_tail_rcu(&rule->list, &old_rule->list); 2374 - } else { 2364 + if (!nft_is_active_next(net, old_rule)) { 2375 2365 err = -ENOENT; 2376 2366 goto err2; 2377 2367 } 2378 - } else if (nlh->nlmsg_flags & NLM_F_APPEND) 2379 - if (old_rule) 2380 - list_add_rcu(&rule->list, &old_rule->list); 2381 - else 2382 - list_add_tail_rcu(&rule->list, &chain->rules); 2383 - else { 2384 - if (old_rule) 2385 - list_add_tail_rcu(&rule->list, &old_rule->list); 2386 - else 2387 - list_add_rcu(&rule->list, &chain->rules); 2388 - } 2368 + trans = nft_trans_rule_add(&ctx, NFT_MSG_DELRULE, 2369 + old_rule); 2370 + if (trans == NULL) { 2371 + err = -ENOMEM; 2372 + goto err2; 2373 + } 2374 + nft_deactivate_next(net, old_rule); 2375 + chain->use--; 2389 2376 2390 - if (nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule) == NULL) { 2391 - err = -ENOMEM; 2392 - goto err3; 2377 + if (nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule) == NULL) { 2378 + err = -ENOMEM; 2379 + goto err2; 2380 + } 2381 + 2382 + list_add_tail_rcu(&rule->list, &old_rule->list); 2383 + } else { 2384 + if (nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule) == NULL) { 2385 + err = -ENOMEM; 2386 + goto err2; 2387 + } 2388 + 2389 + if (nlh->nlmsg_flags & NLM_F_APPEND) { 2390 + if (old_rule) 2391 + list_add_rcu(&rule->list, &old_rule->list); 2392 + else 2393 + list_add_tail_rcu(&rule->list, &chain->rules); 2394 + } else { 2395 + if (old_rule) 2396 + list_add_tail_rcu(&rule->list, &old_rule->list); 2397 + else 2398 + list_add_rcu(&rule->list, &chain->rules); 2399 + } 2393 2400 } 2394 2401 chain->use++; 2395 2402 return 0; 2396 2403 2397 - err3: 2398 - list_del_rcu(&rule->list); 2399 2404 
err2: 2400 2405 nf_tables_rule_destroy(&ctx, rule); 2401 2406 err1: ··· 3212 3207 3213 3208 err = ops->init(set, &desc, nla); 3214 3209 if (err < 0) 3215 - goto err2; 3210 + goto err3; 3216 3211 3217 3212 err = nft_trans_set_add(&ctx, NFT_MSG_NEWSET, set); 3218 3213 if (err < 0) 3219 - goto err3; 3214 + goto err4; 3220 3215 3221 3216 list_add_tail_rcu(&set->list, &table->sets); 3222 3217 table->use++; 3223 3218 return 0; 3224 3219 3225 - err3: 3220 + err4: 3226 3221 ops->destroy(set); 3222 + err3: 3223 + kfree(set->name); 3227 3224 err2: 3228 3225 kvfree(set); 3229 3226 err1: ··· 5745 5738 struct nft_base_chain *basechain; 5746 5739 5747 5740 if (nft_trans_chain_name(trans)) 5748 - strcpy(trans->ctx.chain->name, nft_trans_chain_name(trans)); 5741 + swap(trans->ctx.chain->name, nft_trans_chain_name(trans)); 5749 5742 5750 5743 if (!nft_is_base_chain(trans->ctx.chain)) 5751 5744 return;
+28 -19
net/netfilter/xt_connmark.c
··· 36 36 MODULE_ALIAS("ip6t_connmark"); 37 37 38 38 static unsigned int 39 - connmark_tg_shift(struct sk_buff *skb, 40 - const struct xt_connmark_tginfo1 *info, 41 - u8 shift_bits, u8 shift_dir) 39 + connmark_tg_shift(struct sk_buff *skb, const struct xt_connmark_tginfo2 *info) 42 40 { 43 41 enum ip_conntrack_info ctinfo; 42 + u_int32_t new_targetmark; 44 43 struct nf_conn *ct; 45 44 u_int32_t newmark; 46 45 ··· 50 51 switch (info->mode) { 51 52 case XT_CONNMARK_SET: 52 53 newmark = (ct->mark & ~info->ctmask) ^ info->ctmark; 53 - if (shift_dir == D_SHIFT_RIGHT) 54 - newmark >>= shift_bits; 54 + if (info->shift_dir == D_SHIFT_RIGHT) 55 + newmark >>= info->shift_bits; 55 56 else 56 - newmark <<= shift_bits; 57 + newmark <<= info->shift_bits; 58 + 57 59 if (ct->mark != newmark) { 58 60 ct->mark = newmark; 59 61 nf_conntrack_event_cache(IPCT_MARK, ct); 60 62 } 61 63 break; 62 64 case XT_CONNMARK_SAVE: 63 - newmark = (ct->mark & ~info->ctmask) ^ 64 - (skb->mark & info->nfmask); 65 - if (shift_dir == D_SHIFT_RIGHT) 66 - newmark >>= shift_bits; 65 + new_targetmark = (skb->mark & info->nfmask); 66 + if (info->shift_dir == D_SHIFT_RIGHT) 67 + new_targetmark >>= info->shift_bits; 67 68 else 68 - newmark <<= shift_bits; 69 + new_targetmark <<= info->shift_bits; 70 + 71 + newmark = (ct->mark & ~info->ctmask) ^ 72 + new_targetmark; 69 73 if (ct->mark != newmark) { 70 74 ct->mark = newmark; 71 75 nf_conntrack_event_cache(IPCT_MARK, ct); 72 76 } 73 77 break; 74 78 case XT_CONNMARK_RESTORE: 75 - newmark = (skb->mark & ~info->nfmask) ^ 76 - (ct->mark & info->ctmask); 77 - if (shift_dir == D_SHIFT_RIGHT) 78 - newmark >>= shift_bits; 79 + new_targetmark = (ct->mark & info->ctmask); 80 + if (info->shift_dir == D_SHIFT_RIGHT) 81 + new_targetmark >>= info->shift_bits; 79 82 else 80 - newmark <<= shift_bits; 83 + new_targetmark <<= info->shift_bits; 84 + 85 + newmark = (skb->mark & ~info->nfmask) ^ 86 + new_targetmark; 81 87 skb->mark = newmark; 82 88 break; 83 89 } ··· 93 89 
connmark_tg(struct sk_buff *skb, const struct xt_action_param *par) 94 90 { 95 91 const struct xt_connmark_tginfo1 *info = par->targinfo; 92 + const struct xt_connmark_tginfo2 info2 = { 93 + .ctmark = info->ctmark, 94 + .ctmask = info->ctmask, 95 + .nfmask = info->nfmask, 96 + .mode = info->mode, 97 + }; 96 98 97 - return connmark_tg_shift(skb, info, 0, 0); 99 + return connmark_tg_shift(skb, &info2); 98 100 } 99 101 100 102 static unsigned int ··· 108 98 { 109 99 const struct xt_connmark_tginfo2 *info = par->targinfo; 110 100 111 - return connmark_tg_shift(skb, (const struct xt_connmark_tginfo1 *)info, 112 - info->shift_bits, info->shift_dir); 101 + return connmark_tg_shift(skb, info); 113 102 } 114 103 115 104 static int connmark_tg_check(const struct xt_tgchk_param *par)
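The corrected `XT_CONNMARK_SAVE` path above applies the shift to the fragment taken from `skb->mark` (`new_targetmark`) before XOR-merging it into `ct->mark`; previously the shift hit the merged value, clobbering the preserved ctmark bits. A small userspace model of the fixed SAVE computation:

```c
#include <stdint.h>

enum { D_SHIFT_LEFT = 0, D_SHIFT_RIGHT = 1 };

/* Sketch of the fixed XT_CONNMARK_SAVE arithmetic: only the bits being
 * transferred from the skb mark are shifted; the retained ctmark bits
 * (ctmark & ~ctmask) are merged in unshifted. */
static uint32_t connmark_save(uint32_t ctmark, uint32_t skbmark,
			      uint32_t ctmask, uint32_t nfmask,
			      uint8_t shift_bits, uint8_t shift_dir)
{
	uint32_t new_targetmark = skbmark & nfmask;

	if (shift_dir == D_SHIFT_RIGHT)
		new_targetmark >>= shift_bits;
	else
		new_targetmark <<= shift_bits;

	return (ctmark & ~ctmask) ^ new_targetmark;
}
```

The RESTORE direction in the hunk is the mirror image: the fragment comes from `ct->mark & ctmask` and merges into `skb->mark & ~nfmask`.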
+44 -16
net/packet/af_packet.c
··· 329 329 skb_set_queue_mapping(skb, queue_index); 330 330 } 331 331 332 - /* register_prot_hook must be invoked with the po->bind_lock held, 332 + /* __register_prot_hook must be invoked through register_prot_hook 333 333 * or from a context in which asynchronous accesses to the packet 334 334 * socket is not possible (packet_create()). 335 335 */ 336 - static void register_prot_hook(struct sock *sk) 336 + static void __register_prot_hook(struct sock *sk) 337 337 { 338 338 struct packet_sock *po = pkt_sk(sk); 339 339 ··· 348 348 } 349 349 } 350 350 351 - /* {,__}unregister_prot_hook() must be invoked with the po->bind_lock 352 - * held. If the sync parameter is true, we will temporarily drop 351 + static void register_prot_hook(struct sock *sk) 352 + { 353 + lockdep_assert_held_once(&pkt_sk(sk)->bind_lock); 354 + __register_prot_hook(sk); 355 + } 356 + 357 + /* If the sync parameter is true, we will temporarily drop 353 358 * the po->bind_lock and do a synchronize_net to make sure no 354 359 * asynchronous packet processing paths still refer to the elements 355 360 * of po->prot_hook. 
If the sync parameter is false, it is the ··· 363 358 static void __unregister_prot_hook(struct sock *sk, bool sync) 364 359 { 365 360 struct packet_sock *po = pkt_sk(sk); 361 + 362 + lockdep_assert_held_once(&po->bind_lock); 366 363 367 364 po->running = 0; 368 365 ··· 3259 3252 3260 3253 if (proto) { 3261 3254 po->prot_hook.type = proto; 3262 - register_prot_hook(sk); 3255 + __register_prot_hook(sk); 3263 3256 } 3264 3257 3265 3258 mutex_lock(&net->packet.sklist_lock); ··· 3739 3732 3740 3733 if (optlen != sizeof(val)) 3741 3734 return -EINVAL; 3742 - if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) 3743 - return -EBUSY; 3744 3735 if (copy_from_user(&val, optval, sizeof(val))) 3745 3736 return -EFAULT; 3746 - po->tp_loss = !!val; 3747 - return 0; 3737 + 3738 + lock_sock(sk); 3739 + if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) { 3740 + ret = -EBUSY; 3741 + } else { 3742 + po->tp_loss = !!val; 3743 + ret = 0; 3744 + } 3745 + release_sock(sk); 3746 + return ret; 3748 3747 } 3749 3748 case PACKET_AUXDATA: 3750 3749 { ··· 3761 3748 if (copy_from_user(&val, optval, sizeof(val))) 3762 3749 return -EFAULT; 3763 3750 3751 + lock_sock(sk); 3764 3752 po->auxdata = !!val; 3753 + release_sock(sk); 3765 3754 return 0; 3766 3755 } 3767 3756 case PACKET_ORIGDEV: ··· 3775 3760 if (copy_from_user(&val, optval, sizeof(val))) 3776 3761 return -EFAULT; 3777 3762 3763 + lock_sock(sk); 3778 3764 po->origdev = !!val; 3765 + release_sock(sk); 3779 3766 return 0; 3780 3767 } 3781 3768 case PACKET_VNET_HDR: ··· 3786 3769 3787 3770 if (sock->type != SOCK_RAW) 3788 3771 return -EINVAL; 3789 - if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) 3790 - return -EBUSY; 3791 3772 if (optlen < sizeof(val)) 3792 3773 return -EINVAL; 3793 3774 if (copy_from_user(&val, optval, sizeof(val))) 3794 3775 return -EFAULT; 3795 3776 3796 - po->has_vnet_hdr = !!val; 3797 - return 0; 3777 + lock_sock(sk); 3778 + if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) { 3779 + ret = -EBUSY; 3780 + } else { 3781 + 
po->has_vnet_hdr = !!val; 3782 + ret = 0; 3783 + } 3784 + release_sock(sk); 3785 + return ret; 3798 3786 } 3799 3787 case PACKET_TIMESTAMP: 3800 3788 { ··· 3837 3815 3838 3816 if (optlen != sizeof(val)) 3839 3817 return -EINVAL; 3840 - if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) 3841 - return -EBUSY; 3842 3818 if (copy_from_user(&val, optval, sizeof(val))) 3843 3819 return -EFAULT; 3844 - po->tp_tx_has_off = !!val; 3820 + 3821 + lock_sock(sk); 3822 + if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) { 3823 + ret = -EBUSY; 3824 + } else { 3825 + po->tp_tx_has_off = !!val; 3826 + ret = 0; 3827 + } 3828 + release_sock(sk); 3845 3829 return 0; 3846 3830 } 3847 3831 case PACKET_QDISC_BYPASS:
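The af_packet changes above convert several setsockopt paths from an unlocked check-then-write to a check-and-write held under `lock_sock()`, so a concurrent ring setup cannot slip in between the `pg_vec` test and the flag update. A hedged userspace sketch of the pattern, with a pthread mutex standing in for the socket lock and illustrative field names:

```c
#include <errno.h>
#include <pthread.h>

/* Sketch: the ring-mapped check and the option write form one critical
 * section. Field names mimic the packet socket but this is not the
 * kernel structure. */
struct pkt_opts {
	pthread_mutex_t lock;	/* stand-in for lock_sock() */
	int ring_mapped;	/* stand-in for rx/tx pg_vec being set */
	int tp_loss;
};

static int set_tp_loss(struct pkt_opts *po, int val)
{
	int ret;

	pthread_mutex_lock(&po->lock);
	if (po->ring_mapped) {
		ret = -EBUSY;	/* too late once a ring is attached */
	} else {
		po->tp_loss = !!val;
		ret = 0;
	}
	pthread_mutex_unlock(&po->lock);
	return ret;
}
```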
+5 -5
net/packet/internal.h
··· 112 112 int copy_thresh; 113 113 spinlock_t bind_lock; 114 114 struct mutex pg_vec_lock; 115 - unsigned int running:1, /* prot_hook is attached*/ 116 - auxdata:1, 115 + unsigned int running; /* bind_lock must be held */ 116 + unsigned int auxdata:1, /* writer must hold sock lock */ 117 117 origdev:1, 118 - has_vnet_hdr:1; 118 + has_vnet_hdr:1, 119 + tp_loss:1, 120 + tp_tx_has_off:1; 119 121 int pressure; 120 122 int ifindex; /* bound device */ 121 123 __be16 num; ··· 127 125 enum tpacket_versions tp_version; 128 126 unsigned int tp_hdrlen; 129 127 unsigned int tp_reserve; 130 - unsigned int tp_loss:1; 131 - unsigned int tp_tx_has_off:1; 132 128 unsigned int tp_tstamp; 133 129 struct net_device __rcu *cached_dev; 134 130 int (*xmit)(struct sk_buff *skb);
+7 -2
net/sched/act_ife.c
··· 652 652 } 653 653 } 654 654 655 - return 0; 655 + return -ENOENT; 656 656 } 657 657 658 658 static int tcf_ife_decode(struct sk_buff *skb, const struct tc_action *a, ··· 682 682 u16 mtype; 683 683 u16 dlen; 684 684 685 - curr_data = ife_tlv_meta_decode(tlv_data, &mtype, &dlen, NULL); 685 + curr_data = ife_tlv_meta_decode(tlv_data, ifehdr_end, &mtype, 686 + &dlen, NULL); 687 + if (!curr_data) { 688 + qstats_drop_inc(this_cpu_ptr(ife->common.cpu_qstats)); 689 + return TC_ACT_SHOT; 690 + } 686 691 687 692 if (find_decode_metaid(skb, ife, mtype, dlen, curr_data)) { 688 693 /* abuse overlimits to count when we receive metadata
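The act_ife decode path above now passes `ifehdr_end` into `ife_tlv_meta_decode()` and shoots the packet when it returns NULL. A simplified sketch of that kind of bounds-checked TLV walk; the 2+2-byte host-order header used here is an assumption for illustration, not the exact `struct meta_tlvhdr` wire format:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sketch: refuse to decode a metadata entry whose header or payload
 * would run past the end of the IFE header area. `len` is the total
 * TLV length including the header. */
struct tlv { uint16_t type; uint16_t len; };

static const uint8_t *tlv_decode(const uint8_t *data, const uint8_t *end,
				 uint16_t *type, uint16_t *dlen)
{
	struct tlv hdr;

	if (end < data || (size_t)(end - data) < sizeof(hdr))
		return NULL;			/* header would overrun */
	memcpy(&hdr, data, sizeof(hdr));
	if (hdr.len < sizeof(hdr) || (size_t)(end - data) < hdr.len)
		return NULL;			/* payload would overrun */
	*type = hdr.type;
	*dlen = (uint16_t)(hdr.len - sizeof(hdr));
	return data + sizeof(hdr);		/* start of payload */
}
```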
+1 -1
net/strparser/strparser.c
··· 67 67 68 68 static void strp_start_timer(struct strparser *strp, long timeo) 69 69 { 70 - if (timeo) 70 + if (timeo && timeo != LONG_MAX) 71 71 mod_delayed_work(strp_wq, &strp->msg_timer_work, timeo); 72 72 } 73 73
+2
security/commoncap.c
··· 449 449 magic |= VFS_CAP_FLAGS_EFFECTIVE; 450 450 memcpy(&cap->data, &nscap->data, sizeof(__le32) * 2 * VFS_CAP_U32); 451 451 cap->magic_etc = cpu_to_le32(magic); 452 + } else { 453 + size = -ENOMEM; 452 454 } 453 455 } 454 456 kfree(tmpbuf);
+1 -1
sound/core/control.c
··· 1492 1492 int op_flag) 1493 1493 { 1494 1494 struct snd_ctl_tlv header; 1495 - unsigned int *container; 1495 + unsigned int __user *container; 1496 1496 unsigned int container_size; 1497 1497 struct snd_kcontrol *kctl; 1498 1498 struct snd_ctl_elem_id id;
+4 -3
sound/core/pcm_compat.c
··· 27 27 s32 __user *src) 28 28 { 29 29 snd_pcm_sframes_t delay; 30 + int err; 30 31 31 - delay = snd_pcm_delay(substream); 32 - if (delay < 0) 33 - return delay; 32 + err = snd_pcm_delay(substream, &delay); 33 + if (err) 34 + return err; 34 35 if (put_user(delay, src)) 35 36 return -EFAULT; 36 37 return 0;
+15 -15
sound/core/pcm_native.c
··· 2692 2692 return err; 2693 2693 } 2694 2694 2695 - static snd_pcm_sframes_t snd_pcm_delay(struct snd_pcm_substream *substream) 2695 + static int snd_pcm_delay(struct snd_pcm_substream *substream, 2696 + snd_pcm_sframes_t *delay) 2696 2697 { 2697 2698 struct snd_pcm_runtime *runtime = substream->runtime; 2698 2699 int err; ··· 2709 2708 n += runtime->delay; 2710 2709 } 2711 2710 snd_pcm_stream_unlock_irq(substream); 2712 - return err < 0 ? err : n; 2711 + if (!err) 2712 + *delay = n; 2713 + return err; 2713 2714 } 2714 2715 2715 2716 static int snd_pcm_sync_ptr(struct snd_pcm_substream *substream, ··· 2754 2751 sync_ptr.s.status.hw_ptr = status->hw_ptr; 2755 2752 sync_ptr.s.status.tstamp = status->tstamp; 2756 2753 sync_ptr.s.status.suspended_state = status->suspended_state; 2754 + sync_ptr.s.status.audio_tstamp = status->audio_tstamp; 2757 2755 snd_pcm_stream_unlock_irq(substream); 2758 2756 if (copy_to_user(_sync_ptr, &sync_ptr, sizeof(sync_ptr))) 2759 2757 return -EFAULT; ··· 2920 2916 return snd_pcm_hwsync(substream); 2921 2917 case SNDRV_PCM_IOCTL_DELAY: 2922 2918 { 2923 - snd_pcm_sframes_t delay = snd_pcm_delay(substream); 2919 + snd_pcm_sframes_t delay; 2924 2920 snd_pcm_sframes_t __user *res = arg; 2921 + int err; 2925 2922 2926 - if (delay < 0) 2927 - return delay; 2923 + err = snd_pcm_delay(substream, &delay); 2924 + if (err) 2925 + return err; 2928 2926 if (put_user(delay, res)) 2929 2927 return -EFAULT; 2930 2928 return 0; ··· 3014 3008 case SNDRV_PCM_IOCTL_DROP: 3015 3009 return snd_pcm_drop(substream); 3016 3010 case SNDRV_PCM_IOCTL_DELAY: 3017 - { 3018 - result = snd_pcm_delay(substream); 3019 - if (result < 0) 3020 - return result; 3021 - *frames = result; 3022 - return 0; 3023 - } 3011 + return snd_pcm_delay(substream, frames); 3024 3012 default: 3025 3013 return -EINVAL; 3026 3014 } ··· 3234 3234 /* 3235 3235 * mmap status record 3236 3236 */ 3237 - static int snd_pcm_mmap_status_fault(struct vm_fault *vmf) 3237 + static vm_fault_t 
snd_pcm_mmap_status_fault(struct vm_fault *vmf) 3238 3238 { 3239 3239 struct snd_pcm_substream *substream = vmf->vma->vm_private_data; 3240 3240 struct snd_pcm_runtime *runtime; ··· 3270 3270 /* 3271 3271 * mmap control record 3272 3272 */ 3273 - static int snd_pcm_mmap_control_fault(struct vm_fault *vmf) 3273 + static vm_fault_t snd_pcm_mmap_control_fault(struct vm_fault *vmf) 3274 3274 { 3275 3275 struct snd_pcm_substream *substream = vmf->vma->vm_private_data; 3276 3276 struct snd_pcm_runtime *runtime; ··· 3359 3359 /* 3360 3360 * fault callback for mmapping a RAM page 3361 3361 */ 3362 - static int snd_pcm_mmap_data_fault(struct vm_fault *vmf) 3362 + static vm_fault_t snd_pcm_mmap_data_fault(struct vm_fault *vmf) 3363 3363 { 3364 3364 struct snd_pcm_substream *substream = vmf->vma->vm_private_data; 3365 3365 struct snd_pcm_runtime *runtime;
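The `snd_pcm_delay()` change above stops encoding both the error and the frame count in one signed return value: the error travels alone and the delay goes out through a pointer, so a legitimately negative delay can no longer be mistaken for an error code. A small sketch of the pattern (names are illustrative, not the ALSA API):

```c
#include <errno.h>

typedef long sframes_t;

/* Sketch: error and value are separated. On failure the out-parameter
 * is left untouched; on success the delay may be any signed value. */
static int get_delay(int stream_ok, sframes_t hw_delay, sframes_t *delay)
{
	if (!stream_ok)
		return -EBADFD;	/* error path: *delay not written */
	*delay = hw_delay;	/* may be negative; still a success */
	return 0;
}
```

Callers then test the return code first and only consume `*delay` when it is zero, which is exactly how the `SNDRV_PCM_IOCTL_DELAY` handlers in the hunk are rewritten.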
+9 -6
sound/core/seq/oss/seq_oss_event.c
··· 26 26 #include <sound/seq_oss_legacy.h> 27 27 #include "seq_oss_readq.h" 28 28 #include "seq_oss_writeq.h" 29 + #include <linux/nospec.h> 29 30 30 31 31 32 /* ··· 288 287 { 289 288 struct seq_oss_synthinfo *info; 290 289 291 - if (!snd_seq_oss_synth_is_valid(dp, dev)) 290 + info = snd_seq_oss_synth_info(dp, dev); 291 + if (!info) 292 292 return -ENXIO; 293 293 294 - info = &dp->synths[dev]; 295 294 switch (info->arg.event_passing) { 296 295 case SNDRV_SEQ_OSS_PROCESS_EVENTS: 297 296 if (! info->ch || ch < 0 || ch >= info->nr_voices) { ··· 299 298 return set_note_event(dp, dev, SNDRV_SEQ_EVENT_NOTEON, ch, note, vel, ev); 300 299 } 301 300 301 + ch = array_index_nospec(ch, info->nr_voices); 302 302 if (note == 255 && info->ch[ch].note >= 0) { 303 303 /* volume control */ 304 304 int type; ··· 349 347 { 350 348 struct seq_oss_synthinfo *info; 351 349 352 - if (!snd_seq_oss_synth_is_valid(dp, dev)) 350 + info = snd_seq_oss_synth_info(dp, dev); 351 + if (!info) 353 352 return -ENXIO; 354 353 355 - info = &dp->synths[dev]; 356 354 switch (info->arg.event_passing) { 357 355 case SNDRV_SEQ_OSS_PROCESS_EVENTS: 358 356 if (! info->ch || ch < 0 || ch >= info->nr_voices) { ··· 360 358 return set_note_event(dp, dev, SNDRV_SEQ_EVENT_NOTEON, ch, note, vel, ev); 361 359 } 362 360 361 + ch = array_index_nospec(ch, info->nr_voices); 363 362 if (info->ch[ch].note >= 0) { 364 363 note = info->ch[ch].note; 365 364 info->ch[ch].vel = 0; ··· 384 381 static int 385 382 set_note_event(struct seq_oss_devinfo *dp, int dev, int type, int ch, int note, int vel, struct snd_seq_event *ev) 386 383 { 387 - if (! snd_seq_oss_synth_is_valid(dp, dev)) 384 + if (!snd_seq_oss_synth_info(dp, dev)) 388 385 return -ENXIO; 389 386 390 387 ev->type = type; ··· 402 399 static int 403 400 set_control_event(struct seq_oss_devinfo *dp, int dev, int type, int ch, int param, int val, struct snd_seq_event *ev) 404 401 { 405 - if (! 
snd_seq_oss_synth_is_valid(dp, dev)) 402 + if (!snd_seq_oss_synth_info(dp, dev)) 406 403 return -ENXIO; 407 404 408 405 ev->type = type;
+2
sound/core/seq/oss/seq_oss_midi.c
··· 29 29 #include "../seq_lock.h" 30 30 #include <linux/init.h> 31 31 #include <linux/slab.h> 32 + #include <linux/nospec.h> 32 33 33 34 34 35 /* ··· 316 315 { 317 316 if (dev < 0 || dev >= dp->max_mididev) 318 317 return NULL; 318 + dev = array_index_nospec(dev, dp->max_mididev); 319 319 return get_mdev(dev); 320 320 } 321 321
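Several sound fixes in this merge (seq_oss, opl3, asihpi, hda, hdspm, rme9652) bound a user-controlled index with `array_index_nospec()` after the range check, so the index cannot exceed the array size even under CPU speculation. A userspace sketch of the branch-free mask it computes, modeled on the kernel's generic fallback; real kernel code should use `<linux/nospec.h>`, and like the kernel this relies on arithmetic right shift of negative values:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the array_index_nospec() idea: produce an all-ones mask
 * when index < size and an all-zeroes mask otherwise, without a
 * branch the CPU could mispredict, then AND it into the index. */
static size_t clamp_index_nospec(size_t index, size_t size)
{
	intptr_t v = (intptr_t)(index | (size - 1 - index));
	size_t mask = (size_t)(~v >> (sizeof(v) * 8 - 1));

	return index & mask;	/* index if in range, else 0 */
}
```

Out-of-range indices collapse to 0, matching the kernel helper's contract that callers have already rejected them architecturally; the clamp only neutralizes speculative execution past that check.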
+49 -36
sound/core/seq/oss/seq_oss_synth.c
··· 26 26 #include <linux/init.h> 27 27 #include <linux/module.h> 28 28 #include <linux/slab.h> 29 + #include <linux/nospec.h> 29 30 30 31 /* 31 32 * constants ··· 340 339 dp->max_synthdev = 0; 341 340 } 342 341 343 - /* 344 - * check if the specified device is MIDI mapped device 345 - */ 346 - static int 347 - is_midi_dev(struct seq_oss_devinfo *dp, int dev) 342 + static struct seq_oss_synthinfo * 343 + get_synthinfo_nospec(struct seq_oss_devinfo *dp, int dev) 348 344 { 349 345 if (dev < 0 || dev >= dp->max_synthdev) 350 - return 0; 351 - if (dp->synths[dev].is_midi) 352 - return 1; 353 - return 0; 346 + return NULL; 347 + dev = array_index_nospec(dev, SNDRV_SEQ_OSS_MAX_SYNTH_DEVS); 348 + return &dp->synths[dev]; 354 349 } 355 350 356 351 /* ··· 356 359 get_synthdev(struct seq_oss_devinfo *dp, int dev) 357 360 { 358 361 struct seq_oss_synth *rec; 359 - if (dev < 0 || dev >= dp->max_synthdev) 362 + struct seq_oss_synthinfo *info = get_synthinfo_nospec(dp, dev); 363 + 364 + if (!info) 360 365 return NULL; 361 - if (! dp->synths[dev].opened) 366 + if (!info->opened) 362 367 return NULL; 363 - if (dp->synths[dev].is_midi) 364 - return &midi_synth_dev; 365 - if ((rec = get_sdev(dev)) == NULL) 366 - return NULL; 368 + if (info->is_midi) { 369 + rec = &midi_synth_dev; 370 + snd_use_lock_use(&rec->use_lock); 371 + } else { 372 + rec = get_sdev(dev); 373 + if (!rec) 374 + return NULL; 375 + } 367 376 if (! rec->opened) { 368 377 snd_use_lock_free(&rec->use_lock); 369 378 return NULL; ··· 405 402 struct seq_oss_synth *rec; 406 403 struct seq_oss_synthinfo *info; 407 404 408 - if (snd_BUG_ON(dev < 0 || dev >= dp->max_synthdev)) 409 - return; 410 - info = &dp->synths[dev]; 411 - if (! 
info->opened) 405 + info = get_synthinfo_nospec(dp, dev); 406 + if (!info || !info->opened) 412 407 return; 413 408 if (info->sysex) 414 409 info->sysex->len = 0; /* reset sysex */ ··· 455 454 const char __user *buf, int p, int c) 456 455 { 457 456 struct seq_oss_synth *rec; 457 + struct seq_oss_synthinfo *info; 458 458 int rc; 459 459 460 - if (dev < 0 || dev >= dp->max_synthdev) 460 + info = get_synthinfo_nospec(dp, dev); 461 + if (!info) 461 462 return -ENXIO; 462 463 463 - if (is_midi_dev(dp, dev)) 464 + if (info->is_midi) 464 465 return 0; 465 466 if ((rec = get_synthdev(dp, dev)) == NULL) 466 467 return -ENXIO; ··· 470 467 if (rec->oper.load_patch == NULL) 471 468 rc = -ENXIO; 472 469 else 473 - rc = rec->oper.load_patch(&dp->synths[dev].arg, fmt, buf, p, c); 470 + rc = rec->oper.load_patch(&info->arg, fmt, buf, p, c); 474 471 snd_use_lock_free(&rec->use_lock); 475 472 return rc; 476 473 } 477 474 478 475 /* 479 - * check if the device is valid synth device 476 + * check if the device is valid synth device and return the synth info 480 477 */ 481 - int 482 - snd_seq_oss_synth_is_valid(struct seq_oss_devinfo *dp, int dev) 478 + struct seq_oss_synthinfo * 479 + snd_seq_oss_synth_info(struct seq_oss_devinfo *dp, int dev) 483 480 { 484 481 struct seq_oss_synth *rec; 482 + 485 483 rec = get_synthdev(dp, dev); 486 484 if (rec) { 487 485 snd_use_lock_free(&rec->use_lock); 488 - return 1; 486 + return get_synthinfo_nospec(dp, dev); 489 487 } 490 - return 0; 488 + return NULL; 491 489 } 492 490 493 491 ··· 503 499 int i, send; 504 500 unsigned char *dest; 505 501 struct seq_oss_synth_sysex *sysex; 502 + struct seq_oss_synthinfo *info; 506 503 507 - if (! 
snd_seq_oss_synth_is_valid(dp, dev)) 504 + info = snd_seq_oss_synth_info(dp, dev); 505 + if (!info) 508 506 return -ENXIO; 509 507 510 - sysex = dp->synths[dev].sysex; 508 + sysex = info->sysex; 511 509 if (sysex == NULL) { 512 510 sysex = kzalloc(sizeof(*sysex), GFP_KERNEL); 513 511 if (sysex == NULL) 514 512 return -ENOMEM; 515 - dp->synths[dev].sysex = sysex; 513 + info->sysex = sysex; 516 514 } 517 515 518 516 send = 0; ··· 559 553 int 560 554 snd_seq_oss_synth_addr(struct seq_oss_devinfo *dp, int dev, struct snd_seq_event *ev) 561 555 { 562 - if (! snd_seq_oss_synth_is_valid(dp, dev)) 556 + struct seq_oss_synthinfo *info = snd_seq_oss_synth_info(dp, dev); 557 + 558 + if (!info) 563 559 return -EINVAL; 564 - snd_seq_oss_fill_addr(dp, ev, dp->synths[dev].arg.addr.client, 565 - dp->synths[dev].arg.addr.port); 560 + snd_seq_oss_fill_addr(dp, ev, info->arg.addr.client, 561 + info->arg.addr.port); 566 562 return 0; 567 563 } 568 564 ··· 576 568 snd_seq_oss_synth_ioctl(struct seq_oss_devinfo *dp, int dev, unsigned int cmd, unsigned long addr) 577 569 { 578 570 struct seq_oss_synth *rec; 571 + struct seq_oss_synthinfo *info; 579 572 int rc; 580 573 581 - if (is_midi_dev(dp, dev)) 574 + info = get_synthinfo_nospec(dp, dev); 575 + if (!info || info->is_midi) 582 576 return -ENXIO; 583 577 if ((rec = get_synthdev(dp, dev)) == NULL) 584 578 return -ENXIO; 585 579 if (rec->oper.ioctl == NULL) 586 580 rc = -ENXIO; 587 581 else 588 - rc = rec->oper.ioctl(&dp->synths[dev].arg, cmd, addr); 582 + rc = rec->oper.ioctl(&info->arg, cmd, addr); 589 583 snd_use_lock_free(&rec->use_lock); 590 584 return rc; 591 585 } ··· 599 589 int 600 590 snd_seq_oss_synth_raw_event(struct seq_oss_devinfo *dp, int dev, unsigned char *data, struct snd_seq_event *ev) 601 591 { 602 - if (! 
snd_seq_oss_synth_is_valid(dp, dev) || is_midi_dev(dp, dev)) 592 + struct seq_oss_synthinfo *info; 593 + 594 + info = snd_seq_oss_synth_info(dp, dev); 595 + if (!info || info->is_midi) 603 596 return -ENXIO; 604 597 ev->type = SNDRV_SEQ_EVENT_OSS; 605 598 memcpy(ev->data.raw8.d, data, 8);
+2 -1
sound/core/seq/oss/seq_oss_synth.h
··· 37 37 void snd_seq_oss_synth_reset(struct seq_oss_devinfo *dp, int dev); 38 38 int snd_seq_oss_synth_load_patch(struct seq_oss_devinfo *dp, int dev, int fmt, 39 39 const char __user *buf, int p, int c); 40 - int snd_seq_oss_synth_is_valid(struct seq_oss_devinfo *dp, int dev); 40 + struct seq_oss_synthinfo *snd_seq_oss_synth_info(struct seq_oss_devinfo *dp, 41 + int dev); 41 42 int snd_seq_oss_synth_sysex(struct seq_oss_devinfo *dp, int dev, unsigned char *buf, 42 43 struct snd_seq_event *ev); 43 44 int snd_seq_oss_synth_addr(struct seq_oss_devinfo *dp, int dev, struct snd_seq_event *ev);
+5 -2
sound/drivers/opl3/opl3_synth.c
··· 21 21 22 22 #include <linux/slab.h> 23 23 #include <linux/export.h> 24 + #include <linux/nospec.h> 24 25 #include <sound/opl3.h> 25 26 #include <sound/asound_fm.h> 26 27 ··· 449 448 { 450 449 unsigned short reg_side; 451 450 unsigned char op_offset; 452 - unsigned char voice_offset; 451 + unsigned char voice_offset, voice_op; 453 452 454 453 unsigned short opl3_reg; 455 454 unsigned char reg_val; ··· 474 473 voice_offset = voice->voice - MAX_OPL2_VOICES; 475 474 } 476 475 /* Get register offset of operator */ 477 - op_offset = snd_opl3_regmap[voice_offset][voice->op]; 476 + voice_offset = array_index_nospec(voice_offset, MAX_OPL2_VOICES); 477 + voice_op = array_index_nospec(voice->op, 4); 478 + op_offset = snd_opl3_regmap[voice_offset][voice_op]; 478 479 479 480 reg_val = 0x00; 480 481 /* Set amplitude modulation (tremolo) effect */
+1 -1
sound/firewire/dice/dice-stream.c
··· 435 435 err = init_stream(dice, AMDTP_IN_STREAM, i); 436 436 if (err < 0) { 437 437 for (; i >= 0; i--) 438 - destroy_stream(dice, AMDTP_OUT_STREAM, i); 438 + destroy_stream(dice, AMDTP_IN_STREAM, i); 439 439 goto end; 440 440 } 441 441 }
+1 -1
sound/firewire/dice/dice.c
··· 14 14 #define OUI_WEISS 0x001c6a 15 15 #define OUI_LOUD 0x000ff2 16 16 #define OUI_FOCUSRITE 0x00130e 17 - #define OUI_TCELECTRONIC 0x001486 17 + #define OUI_TCELECTRONIC 0x000166 18 18 19 19 #define DICE_CATEGORY_ID 0x04 20 20 #define WEISS_CATEGORY_ID 0x00
+9 -4
sound/pci/asihpi/hpimsginit.c
··· 23 23 24 24 #include "hpi_internal.h" 25 25 #include "hpimsginit.h" 26 + #include <linux/nospec.h> 26 27 27 28 /* The actual message size for each object type */ 28 29 static u16 msg_size[HPI_OBJ_MAXINDEX + 1] = HPI_MESSAGE_SIZE_BY_OBJECT; ··· 40 39 { 41 40 u16 size; 42 41 43 - if ((object > 0) && (object <= HPI_OBJ_MAXINDEX)) 42 + if ((object > 0) && (object <= HPI_OBJ_MAXINDEX)) { 43 + object = array_index_nospec(object, HPI_OBJ_MAXINDEX + 1); 44 44 size = msg_size[object]; 45 - else 45 + } else { 46 46 size = sizeof(*phm); 47 + } 47 48 48 49 memset(phm, 0, size); 49 50 phm->size = size; ··· 69 66 { 70 67 u16 size; 71 68 72 - if ((object > 0) && (object <= HPI_OBJ_MAXINDEX)) 69 + if ((object > 0) && (object <= HPI_OBJ_MAXINDEX)) { 70 + object = array_index_nospec(object, HPI_OBJ_MAXINDEX + 1); 73 71 size = res_size[object]; 74 - else 72 + } else { 75 73 size = sizeof(*phr); 74 + } 76 75 77 76 memset(phr, 0, sizeof(*phr)); 78 77 phr->size = size;
+3 -1
sound/pci/asihpi/hpioctl.c
··· 33 33 #include <linux/stringify.h> 34 34 #include <linux/module.h> 35 35 #include <linux/vmalloc.h> 36 + #include <linux/nospec.h> 36 37 37 38 #ifdef MODULE_FIRMWARE 38 39 MODULE_FIRMWARE("asihpi/dsp5000.bin"); ··· 187 186 struct hpi_adapter *pa = NULL; 188 187 189 188 if (hm->h.adapter_index < ARRAY_SIZE(adapters)) 190 - pa = &adapters[hm->h.adapter_index]; 189 + pa = &adapters[array_index_nospec(hm->h.adapter_index, 190 + ARRAY_SIZE(adapters))]; 191 191 192 192 if (!pa || !pa->adapter || !pa->adapter->type) { 193 193 hpi_init_response(&hr->r0, hm->h.object,
+11 -1
sound/pci/hda/hda_hwdep.c
··· 21 21 #include <linux/init.h> 22 22 #include <linux/slab.h> 23 23 #include <linux/compat.h> 24 + #include <linux/nospec.h> 24 25 #include <sound/core.h> 25 26 #include "hda_codec.h" 26 27 #include "hda_local.h" ··· 52 51 53 52 if (get_user(verb, &arg->verb)) 54 53 return -EFAULT; 55 - res = get_wcaps(codec, verb >> 24); 54 + /* open-code get_wcaps(verb>>24) with nospec */ 55 + verb >>= 24; 56 + if (verb < codec->core.start_nid || 57 + verb >= codec->core.start_nid + codec->core.num_nodes) { 58 + res = 0; 59 + } else { 60 + verb -= codec->core.start_nid; 61 + verb = array_index_nospec(verb, codec->core.num_nodes); 62 + res = codec->wcaps[verb]; 63 + } 56 64 if (put_user(res, &arg->res)) 57 65 return -EFAULT; 58 66 return 0;
+8 -1
sound/pci/hda/patch_hdmi.c
··· 1383 1383 pcm = get_pcm_rec(spec, per_pin->pcm_idx); 1384 1384 else 1385 1385 return; 1386 + if (!pcm->pcm) 1387 + return; 1386 1388 if (!test_bit(per_pin->pcm_idx, &spec->pcm_in_use)) 1387 1389 return; 1388 1390 ··· 2153 2151 int dev, err; 2154 2152 int pin_idx, pcm_idx; 2155 2153 2156 - 2157 2154 for (pcm_idx = 0; pcm_idx < spec->pcm_used; pcm_idx++) { 2155 + if (!get_pcm_rec(spec, pcm_idx)->pcm) { 2156 + /* no PCM: mark this for skipping permanently */ 2157 + set_bit(pcm_idx, &spec->pcm_bitmap); 2158 + continue; 2159 + } 2160 + 2158 2161 err = generic_hdmi_build_jack(codec, pcm_idx); 2159 2162 if (err < 0) 2160 2163 return err;
+5
sound/pci/hda/patch_realtek.c
··· 331 331 /* fallthrough */ 332 332 case 0x10ec0215: 333 333 case 0x10ec0233: 334 + case 0x10ec0235: 334 335 case 0x10ec0236: 335 336 case 0x10ec0255: 336 337 case 0x10ec0256: ··· 6576 6575 SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), 6577 6576 SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), 6578 6577 SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6578 + SND_PCI_QUIRK(0x17aa, 0x312f, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6579 6579 SND_PCI_QUIRK(0x17aa, 0x3138, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6580 6580 SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6581 6581 SND_PCI_QUIRK(0x17aa, 0x3112, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), ··· 7162 7160 case 0x10ec0298: 7163 7161 spec->codec_variant = ALC269_TYPE_ALC298; 7164 7162 break; 7163 + case 0x10ec0235: 7165 7164 case 0x10ec0255: 7166 7165 spec->codec_variant = ALC269_TYPE_ALC255; 7166 + spec->shutup = alc256_shutup; 7167 + spec->init_hook = alc256_init; 7167 7168 break; 7168 7169 case 0x10ec0236: 7169 7170 case 0x10ec0256:
+14 -10
sound/pci/rme9652/hdspm.c
··· 137 137 #include <linux/pci.h> 138 138 #include <linux/math64.h> 139 139 #include <linux/io.h> 140 + #include <linux/nospec.h> 140 141 141 142 #include <sound/core.h> 142 143 #include <sound/control.h> ··· 5699 5698 struct snd_pcm_channel_info *info) 5700 5699 { 5701 5700 struct hdspm *hdspm = snd_pcm_substream_chip(substream); 5701 + unsigned int channel = info->channel; 5702 5702 5703 5703 if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { 5704 - if (snd_BUG_ON(info->channel >= hdspm->max_channels_out)) { 5704 + if (snd_BUG_ON(channel >= hdspm->max_channels_out)) { 5705 5705 dev_info(hdspm->card->dev, 5706 5706 "snd_hdspm_channel_info: output channel out of range (%d)\n", 5707 - info->channel); 5707 + channel); 5708 5708 return -EINVAL; 5709 5709 } 5710 5710 5711 - if (hdspm->channel_map_out[info->channel] < 0) { 5711 + channel = array_index_nospec(channel, hdspm->max_channels_out); 5712 + if (hdspm->channel_map_out[channel] < 0) { 5712 5713 dev_info(hdspm->card->dev, 5713 5714 "snd_hdspm_channel_info: output channel %d mapped out\n", 5714 - info->channel); 5715 + channel); 5715 5716 return -EINVAL; 5716 5717 } 5717 5718 5718 - info->offset = hdspm->channel_map_out[info->channel] * 5719 + info->offset = hdspm->channel_map_out[channel] * 5719 5720 HDSPM_CHANNEL_BUFFER_BYTES; 5720 5721 } else { 5721 - if (snd_BUG_ON(info->channel >= hdspm->max_channels_in)) { 5722 + if (snd_BUG_ON(channel >= hdspm->max_channels_in)) { 5722 5723 dev_info(hdspm->card->dev, 5723 5724 "snd_hdspm_channel_info: input channel out of range (%d)\n", 5724 - info->channel); 5725 + channel); 5725 5726 return -EINVAL; 5726 5727 } 5727 5728 5728 - if (hdspm->channel_map_in[info->channel] < 0) { 5729 + channel = array_index_nospec(channel, hdspm->max_channels_in); 5730 + if (hdspm->channel_map_in[channel] < 0) { 5729 5731 dev_info(hdspm->card->dev, 5730 5732 "snd_hdspm_channel_info: input channel %d mapped out\n", 5731 - info->channel); 5733 + channel); 5732 5734 return -EINVAL; 5733 5735 
} 5734 5736 5735 - info->offset = hdspm->channel_map_in[info->channel] * 5737 + info->offset = hdspm->channel_map_in[channel] * 5736 5738 HDSPM_CHANNEL_BUFFER_BYTES; 5737 5739 } 5738 5740
+4 -2
sound/pci/rme9652/rme9652.c
··· 26 26 #include <linux/pci.h> 27 27 #include <linux/module.h> 28 28 #include <linux/io.h> 29 + #include <linux/nospec.h> 29 30 30 31 #include <sound/core.h> 31 32 #include <sound/control.h> ··· 2072 2071 if (snd_BUG_ON(info->channel >= RME9652_NCHANNELS)) 2073 2072 return -EINVAL; 2074 2073 2075 - if ((chn = rme9652->channel_map[info->channel]) < 0) { 2074 + chn = rme9652->channel_map[array_index_nospec(info->channel, 2075 + RME9652_NCHANNELS)]; 2076 + if (chn < 0) 2076 2077 return -EINVAL; 2077 - } 2078 2078 2079 2079 info->offset = chn * RME9652_CHANNEL_BUFFER_BYTES; 2080 2080 info->first = 0;
+1 -1
sound/soc/amd/acp-da7219-max98357a.c
··· 43 43 #define DUAL_CHANNEL 2 44 44 45 45 static struct snd_soc_jack cz_jack; 46 - struct clk *da7219_dai_clk; 46 + static struct clk *da7219_dai_clk; 47 47 48 48 static int cz_da7219_init(struct snd_soc_pcm_runtime *rtd) 49 49 {
+20 -6
sound/soc/codecs/adau17x1.c
··· 502 502 } 503 503 504 504 if (adau->sigmadsp) { 505 - ret = adau17x1_setup_firmware(adau, params_rate(params)); 505 + ret = adau17x1_setup_firmware(component, params_rate(params)); 506 506 if (ret < 0) 507 507 return ret; 508 508 } ··· 835 835 } 836 836 EXPORT_SYMBOL_GPL(adau17x1_volatile_register); 837 837 838 - int adau17x1_setup_firmware(struct adau *adau, unsigned int rate) 838 + int adau17x1_setup_firmware(struct snd_soc_component *component, 839 + unsigned int rate) 839 840 { 840 841 int ret; 841 - int dspsr; 842 + int dspsr, dsp_run; 843 + struct adau *adau = snd_soc_component_get_drvdata(component); 844 + struct snd_soc_dapm_context *dapm = snd_soc_component_get_dapm(component); 845 + 846 + snd_soc_dapm_mutex_lock(dapm); 842 847 843 848 ret = regmap_read(adau->regmap, ADAU17X1_DSP_SAMPLING_RATE, &dspsr); 844 849 if (ret) 845 - return ret; 850 + goto err; 851 + 852 + ret = regmap_read(adau->regmap, ADAU17X1_DSP_RUN, &dsp_run); 853 + if (ret) 854 + goto err; 846 855 847 856 regmap_write(adau->regmap, ADAU17X1_DSP_ENABLE, 1); 848 857 regmap_write(adau->regmap, ADAU17X1_DSP_SAMPLING_RATE, 0xf); 858 + regmap_write(adau->regmap, ADAU17X1_DSP_RUN, 0); 849 859 850 860 ret = sigmadsp_setup(adau->sigmadsp, rate); 851 861 if (ret) { 852 862 regmap_write(adau->regmap, ADAU17X1_DSP_ENABLE, 0); 853 - return ret; 863 + goto err; 854 864 } 855 865 regmap_write(adau->regmap, ADAU17X1_DSP_SAMPLING_RATE, dspsr); 866 + regmap_write(adau->regmap, ADAU17X1_DSP_RUN, dsp_run); 856 867 857 - return 0; 868 + err: 869 + snd_soc_dapm_mutex_unlock(dapm); 870 + 871 + return ret; 858 872 } 859 873 EXPORT_SYMBOL_GPL(adau17x1_setup_firmware); 860 874
+2 -1
sound/soc/codecs/adau17x1.h
··· 68 68 69 69 extern const struct snd_soc_dai_ops adau17x1_dai_ops; 70 70 71 - int adau17x1_setup_firmware(struct adau *adau, unsigned int rate); 71 + int adau17x1_setup_firmware(struct snd_soc_component *component, 72 + unsigned int rate); 72 73 bool adau17x1_has_dsp(struct adau *adau); 73 74 74 75 #define ADAU17X1_CLOCK_CONTROL 0x4000
+6 -3
sound/soc/codecs/msm8916-wcd-analog.c
··· 1187 1187 return irq; 1188 1188 } 1189 1189 1190 - ret = devm_request_irq(dev, irq, pm8916_mbhc_switch_irq_handler, 1190 + ret = devm_request_threaded_irq(dev, irq, NULL, 1191 + pm8916_mbhc_switch_irq_handler, 1191 1192 IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | 1192 1193 IRQF_ONESHOT, 1193 1194 "mbhc switch irq", priv); ··· 1202 1201 return irq; 1203 1202 } 1204 1203 1205 - ret = devm_request_irq(dev, irq, mbhc_btn_press_irq_handler, 1204 + ret = devm_request_threaded_irq(dev, irq, NULL, 1205 + mbhc_btn_press_irq_handler, 1206 1206 IRQF_TRIGGER_RISING | 1207 1207 IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 1208 1208 "mbhc btn press irq", priv); ··· 1216 1214 return irq; 1217 1215 } 1218 1216 1219 - ret = devm_request_irq(dev, irq, mbhc_btn_release_irq_handler, 1217 + ret = devm_request_threaded_irq(dev, irq, NULL, 1218 + mbhc_btn_release_irq_handler, 1220 1219 IRQF_TRIGGER_RISING | 1221 1220 IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 1222 1221 "mbhc btn release irq", priv);
+3
sound/soc/codecs/rt5514.c
··· 89 89 {RT5514_PLL3_CALIB_CTRL5, 0x40220012}, 90 90 {RT5514_DELAY_BUF_CTRL1, 0x7fff006a}, 91 91 {RT5514_DELAY_BUF_CTRL3, 0x00000000}, 92 + {RT5514_ASRC_IN_CTRL1, 0x00000003}, 92 93 {RT5514_DOWNFILTER0_CTRL1, 0x00020c2f}, 93 94 {RT5514_DOWNFILTER0_CTRL2, 0x00020c2f}, 94 95 {RT5514_DOWNFILTER0_CTRL3, 0x10000362}, ··· 182 181 case RT5514_PLL3_CALIB_CTRL5: 183 182 case RT5514_DELAY_BUF_CTRL1: 184 183 case RT5514_DELAY_BUF_CTRL3: 184 + case RT5514_ASRC_IN_CTRL1: 185 185 case RT5514_DOWNFILTER0_CTRL1: 186 186 case RT5514_DOWNFILTER0_CTRL2: 187 187 case RT5514_DOWNFILTER0_CTRL3: ··· 240 238 case RT5514_DSP_MAPPING | RT5514_PLL3_CALIB_CTRL5: 241 239 case RT5514_DSP_MAPPING | RT5514_DELAY_BUF_CTRL1: 242 240 case RT5514_DSP_MAPPING | RT5514_DELAY_BUF_CTRL3: 241 + case RT5514_DSP_MAPPING | RT5514_ASRC_IN_CTRL1: 243 242 case RT5514_DSP_MAPPING | RT5514_DOWNFILTER0_CTRL1: 244 243 case RT5514_DSP_MAPPING | RT5514_DOWNFILTER0_CTRL2: 245 244 case RT5514_DSP_MAPPING | RT5514_DOWNFILTER0_CTRL3:
+7
sound/soc/fsl/fsl_esai.c
··· 144 144 145 145 psr = ratio <= 256 * maxfp ? ESAI_xCCR_xPSR_BYPASS : ESAI_xCCR_xPSR_DIV8; 146 146 147 + /* Do not loop-search if PM (1 ~ 256) alone can serve the ratio */ 148 + if (ratio <= 256) { 149 + pm = ratio; 150 + fp = 1; 151 + goto out; 152 + } 153 + 147 154 /* Set the max fluctuation -- 0.1% of the max devisor */ 148 155 savesub = (psr ? 1 : 8) * 256 * maxfp / 1000; 149 156
+11 -3
sound/soc/fsl/fsl_ssi.c
··· 217 217 * @dai_fmt: DAI configuration this device is currently used with 218 218 * @streams: Mask of current active streams: BIT(TX) and BIT(RX) 219 219 * @i2s_net: I2S and Network mode configurations of SCR register 220 + * (this is the initial settings based on the DAI format) 220 221 * @synchronous: Use synchronous mode - both of TX and RX use STCK and SFCK 221 222 * @use_dma: DMA is used or FIQ with stream filter 222 223 * @use_dual_fifo: DMA with support for dual FIFO mode ··· 830 829 } 831 830 832 831 if (!fsl_ssi_is_ac97(ssi)) { 832 + /* 833 + * Keep the ssi->i2s_net intact while having a local variable 834 + * to override settings for special use cases. Otherwise, the 835 + * ssi->i2s_net will lose the settings for regular use cases. 836 + */ 837 + u8 i2s_net = ssi->i2s_net; 838 + 833 839 /* Normal + Network mode to send 16-bit data in 32-bit frames */ 834 840 if (fsl_ssi_is_i2s_cbm_cfs(ssi) && sample_size == 16) 835 - ssi->i2s_net = SSI_SCR_I2S_MODE_NORMAL | SSI_SCR_NET; 841 + i2s_net = SSI_SCR_I2S_MODE_NORMAL | SSI_SCR_NET; 836 842 837 843 /* Use Normal mode to send mono data at 1st slot of 2 slots */ 838 844 if (channels == 1) 839 - ssi->i2s_net = SSI_SCR_I2S_MODE_NORMAL; 845 + i2s_net = SSI_SCR_I2S_MODE_NORMAL; 840 846 841 847 regmap_update_bits(regs, REG_SSI_SCR, 842 - SSI_SCR_I2S_NET_MASK, ssi->i2s_net); 848 + SSI_SCR_I2S_NET_MASK, i2s_net); 843 849 } 844 850 845 851 /* In synchronous mode, the SSI uses STCCR for capture */
+13 -9
sound/soc/intel/Kconfig
··· 72 72 for Baytrail Chromebooks but this option is now deprecated and is 73 73 not recommended, use SND_SST_ATOM_HIFI2_PLATFORM instead. 74 74 75 + config SND_SST_ATOM_HIFI2_PLATFORM 76 + tristate 77 + select SND_SOC_COMPRESS 78 + 75 79 config SND_SST_ATOM_HIFI2_PLATFORM_PCI 76 - tristate "PCI HiFi2 (Medfield, Merrifield) Platforms" 80 + tristate "PCI HiFi2 (Merrifield) Platforms" 77 81 depends on X86 && PCI 78 82 select SND_SST_IPC_PCI 79 - select SND_SOC_COMPRESS 83 + select SND_SST_ATOM_HIFI2_PLATFORM 80 84 help 81 - If you have a Intel Medfield or Merrifield/Edison platform, then 85 + If you have a Intel Merrifield/Edison platform, then 82 86 enable this option by saying Y or m. Distros will typically not 83 - enable this option: Medfield devices are not available to 84 - developers and while Merrifield/Edison can run a mainline kernel with 85 - limited functionality it will require a firmware file which 86 - is not in the standard firmware tree 87 + enable this option: while Merrifield/Edison can run a mainline 88 + kernel with limited functionality it will require a firmware file 89 + which is not in the standard firmware tree 87 90 88 - config SND_SST_ATOM_HIFI2_PLATFORM 91 + config SND_SST_ATOM_HIFI2_PLATFORM_ACPI 89 92 tristate "ACPI HiFi2 (Baytrail, Cherrytrail) Platforms" 93 + default ACPI 90 94 depends on X86 && ACPI 91 95 select SND_SST_IPC_ACPI 92 - select SND_SOC_COMPRESS 96 + select SND_SST_ATOM_HIFI2_PLATFORM 93 97 select SND_SOC_ACPI_INTEL_MATCH 94 98 select IOSF_MBI 95 99 help
+11 -3
sound/soc/omap/omap-dmic.c
··· 281 281 static int omap_dmic_select_fclk(struct omap_dmic *dmic, int clk_id, 282 282 unsigned int freq) 283 283 { 284 - struct clk *parent_clk; 284 + struct clk *parent_clk, *mux; 285 285 char *parent_clk_name; 286 286 int ret = 0; 287 287 ··· 329 329 return -ENODEV; 330 330 } 331 331 332 + mux = clk_get_parent(dmic->fclk); 333 + if (IS_ERR(mux)) { 334 + dev_err(dmic->dev, "can't get fck mux parent\n"); 335 + clk_put(parent_clk); 336 + return -ENODEV; 337 + } 338 + 332 339 mutex_lock(&dmic->mutex); 333 340 if (dmic->active) { 334 341 /* disable clock while reparenting */ 335 342 pm_runtime_put_sync(dmic->dev); 336 - ret = clk_set_parent(dmic->fclk, parent_clk); 343 + ret = clk_set_parent(mux, parent_clk); 337 344 pm_runtime_get_sync(dmic->dev); 338 345 } else { 339 - ret = clk_set_parent(dmic->fclk, parent_clk); 346 + ret = clk_set_parent(mux, parent_clk); 340 347 } 341 348 mutex_unlock(&dmic->mutex); 342 349 ··· 356 349 dmic->fclk_freq = freq; 357 350 358 351 err_busy: 352 + clk_put(mux); 359 353 clk_put(parent_clk); 360 354 361 355 return ret;
+2 -2
sound/soc/sh/rcar/core.c
··· 1536 1536 return ret; 1537 1537 } 1538 1538 1539 - static int rsnd_suspend(struct device *dev) 1539 + static int __maybe_unused rsnd_suspend(struct device *dev) 1540 1540 { 1541 1541 struct rsnd_priv *priv = dev_get_drvdata(dev); 1542 1542 ··· 1545 1545 return 0; 1546 1546 } 1547 1547 1548 - static int rsnd_resume(struct device *dev) 1548 + static int __maybe_unused rsnd_resume(struct device *dev) 1549 1549 { 1550 1550 struct rsnd_priv *priv = dev_get_drvdata(dev); 1551 1551
+9 -5
sound/soc/soc-topology.c
··· 513 513 */ 514 514 if (dobj->widget.kcontrol_type == SND_SOC_TPLG_TYPE_ENUM) { 515 515 /* enumerated widget mixer */ 516 - for (i = 0; i < w->num_kcontrols; i++) { 516 + for (i = 0; w->kcontrols != NULL && i < w->num_kcontrols; i++) { 517 517 struct snd_kcontrol *kcontrol = w->kcontrols[i]; 518 518 struct soc_enum *se = 519 519 (struct soc_enum *)kcontrol->private_value; ··· 530 530 } 531 531 } else { 532 532 /* volume mixer or bytes controls */ 533 - for (i = 0; i < w->num_kcontrols; i++) { 533 + for (i = 0; w->kcontrols != NULL && i < w->num_kcontrols; i++) { 534 534 struct snd_kcontrol *kcontrol = w->kcontrols[i]; 535 535 536 536 if (dobj->widget.kcontrol_type ··· 1325 1325 ec->hdr.name); 1326 1326 1327 1327 kc[i].name = kstrdup(ec->hdr.name, GFP_KERNEL); 1328 - if (kc[i].name == NULL) 1328 + if (kc[i].name == NULL) { 1329 + kfree(se); 1329 1330 goto err_se; 1331 + } 1330 1332 kc[i].private_value = (long)se; 1331 1333 kc[i].iface = SNDRV_CTL_ELEM_IFACE_MIXER; 1332 1334 kc[i].access = ec->hdr.access; ··· 1444 1442 be->hdr.name, be->hdr.access); 1445 1443 1446 1444 kc[i].name = kstrdup(be->hdr.name, GFP_KERNEL); 1447 - if (kc[i].name == NULL) 1445 + if (kc[i].name == NULL) { 1446 + kfree(sbe); 1448 1447 goto err; 1448 + } 1449 1449 kc[i].private_value = (long)sbe; 1450 1450 kc[i].iface = SNDRV_CTL_ELEM_IFACE_MIXER; 1451 1451 kc[i].access = be->hdr.access; ··· 2580 2576 2581 2577 /* match index */ 2582 2578 if (dobj->index != index && 2583 - dobj->index != SND_SOC_TPLG_INDEX_ALL) 2579 + index != SND_SOC_TPLG_INDEX_ALL) 2584 2580 continue; 2585 2581 2586 2582 switch (dobj->type) {
+4 -3
sound/usb/mixer.c
··· 1776 1776 build_feature_ctl(state, _ftr, ch_bits, control, 1777 1777 &iterm, unitid, ch_read_only); 1778 1778 if (uac_v2v3_control_is_readable(master_bits, control)) 1779 - build_feature_ctl(state, _ftr, 0, i, &iterm, unitid, 1779 + build_feature_ctl(state, _ftr, 0, control, 1780 + &iterm, unitid, 1780 1781 !uac_v2v3_control_is_writeable(master_bits, 1781 1782 control)); 1782 1783 } ··· 1860 1859 check_input_term(state, d->bTerminalID, &iterm); 1861 1860 if (state->mixer->protocol == UAC_VERSION_2) { 1862 1861 /* Check for jack detection. */ 1863 - if (uac_v2v3_control_is_readable(d->bmControls, 1862 + if (uac_v2v3_control_is_readable(le16_to_cpu(d->bmControls), 1864 1863 UAC2_TE_CONNECTOR)) { 1865 1864 build_connector_control(state, &iterm, true); 1866 1865 } ··· 2562 2561 if (err < 0 && err != -EINVAL) 2563 2562 return err; 2564 2563 2565 - if (uac_v2v3_control_is_readable(desc->bmControls, 2564 + if (uac_v2v3_control_is_readable(le16_to_cpu(desc->bmControls), 2566 2565 UAC2_TE_CONNECTOR)) { 2567 2566 build_connector_control(&state, &state.oterm, 2568 2567 false);
+3
sound/usb/mixer_maps.c
··· 353 353 /* 354 354 * Dell usb dock with ALC4020 codec had a firmware problem where it got 355 355 * screwed up when zero volume is passed; just skip it as a workaround 356 + * 357 + * Also the extension unit gives an access error, so skip it as well. 356 358 */ 357 359 static const struct usbmix_name_map dell_alc4020_map[] = { 360 + { 4, NULL }, /* extension unit */ 358 361 { 16, NULL }, 359 362 { 19, NULL }, 360 363 { 0 }
+1 -1
sound/usb/stream.c
··· 349 349 * TODO: this conversion is not complete, update it 350 350 * after adding UAC3 values to asound.h 351 351 */ 352 - switch (is->bChPurpose) { 352 + switch (is->bChRelationship) { 353 353 case UAC3_CH_MONO: 354 354 map = SNDRV_CHMAP_MONO; 355 355 break;
+1 -1
sound/usb/usx2y/us122l.c
··· 139 139 snd_printdd(KERN_DEBUG "%i\n", atomic_read(&us122l->mmap_count)); 140 140 } 141 141 142 - static int usb_stream_hwdep_vm_fault(struct vm_fault *vmf) 142 + static vm_fault_t usb_stream_hwdep_vm_fault(struct vm_fault *vmf) 143 143 { 144 144 unsigned long offset; 145 145 struct page *page;
+1 -1
sound/usb/usx2y/usX2Yhwdep.c
··· 31 31 #include "usbusx2y.h" 32 32 #include "usX2Yhwdep.h" 33 33 34 - static int snd_us428ctls_vm_fault(struct vm_fault *vmf) 34 + static vm_fault_t snd_us428ctls_vm_fault(struct vm_fault *vmf) 35 35 { 36 36 unsigned long offset; 37 37 struct page * page;
+1 -1
sound/usb/usx2y/usx2yhwdeppcm.c
··· 652 652 } 653 653 654 654 655 - static int snd_usX2Y_hwdep_pcm_vm_fault(struct vm_fault *vmf) 655 + static vm_fault_t snd_usX2Y_hwdep_pcm_vm_fault(struct vm_fault *vmf) 656 656 { 657 657 unsigned long offset; 658 658 void *vaddr;
+29 -12
tools/perf/Documentation/perf-mem.txt
··· 28 28 <command>...:: 29 29 Any command you can specify in a shell. 30 30 31 + -i:: 32 + --input=<file>:: 33 + Input file name. 34 + 31 35 -f:: 32 36 --force:: 33 37 Don't do ownership validation 34 38 35 39 -t:: 36 - --type=:: 40 + --type=<type>:: 37 41 Select the memory operation type: load or store (default: load,store) 38 42 39 43 -D:: 40 - --dump-raw-samples=:: 44 + --dump-raw-samples:: 41 45 Dump the raw decoded samples on the screen in a format that is easy to parse with 42 46 one sample per line. 43 47 44 48 -x:: 45 - --field-separator:: 49 + --field-separator=<separator>:: 46 50 Specify the field separator used when dump raw samples (-D option). By default, 47 51 The separator is the space character. 48 52 49 53 -C:: 50 - --cpu-list:: 51 - Restrict dump of raw samples to those provided via this option. Note that the same 52 - option can be passed in record mode. It will be interpreted the same way as perf 53 - record. 54 + --cpu=<cpu>:: 55 + Monitor only on the list of CPUs provided. Multiple CPUs can be provided as a 56 + comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2. Default 57 + is to monitor all CPUS. 58 + -U:: 59 + --hide-unresolved:: 60 + Only display entries resolved to a symbol. 61 + 62 + -p:: 63 + --phys-data:: 64 + Record/Report sample physical addresses 65 + 66 + RECORD OPTIONS 67 + -------------- 68 + -e:: 69 + --event <event>:: 70 + Event selector. Use 'perf mem record -e list' to list available events. 54 71 55 72 -K:: 56 73 --all-kernel:: ··· 77 60 --all-user:: 78 61 Configure all used events to run in user space. 79 62 80 - --ldload:: 81 - Specify desired latency for loads event. 63 + -v:: 64 + --verbose:: 65 + Be more verbose (show counter open errors, etc) 82 66 83 - -p:: 84 - --phys-data:: 85 - Record/Report sample physical addresses 67 + --ldlat <n>:: 68 + Specify desired latency for loads event. 
86 69 87 70 In addition, for report all perf report options are valid, and for record 88 71 all perf record options.
+1
tools/perf/arch/s390/util/auxtrace.c
··· 87 87 struct perf_evsel *pos; 88 88 int diagnose = 0; 89 89 90 + *err = 0; 90 91 if (evlist->nr_entries == 0) 91 92 return NULL; 92 93
-18
tools/perf/arch/s390/util/header.c
··· 146 146 zfree(&buf); 147 147 return buf; 148 148 } 149 - 150 - /* 151 - * Compare the cpuid string returned by get_cpuid() function 152 - * with the name generated by the jevents file read from 153 - * pmu-events/arch/s390/mapfile.csv. 154 - * 155 - * Parameter mapcpuid is the cpuid as stored in the 156 - * pmu-events/arch/s390/mapfile.csv. This is just the type number. 157 - * Parameter cpuid is the cpuid returned by function get_cpuid(). 158 - */ 159 - int strcmp_cpuid_str(const char *mapcpuid, const char *cpuid) 160 - { 161 - char *cp = strchr(cpuid, ','); 162 - 163 - if (cp == NULL) 164 - return -1; 165 - return strncmp(cp + 1, mapcpuid, strlen(mapcpuid)); 166 - }
+38 -2
tools/perf/builtin-stat.c
··· 172 172 static const char *output_name; 173 173 static int output_fd; 174 174 static int print_free_counters_hint; 175 + static int print_mixed_hw_group_error; 175 176 176 177 struct perf_stat { 177 178 bool record; ··· 1127 1126 fprintf(output, "%s%s", csv_sep, evsel->cgrp->name); 1128 1127 } 1129 1128 1129 + static bool is_mixed_hw_group(struct perf_evsel *counter) 1130 + { 1131 + struct perf_evlist *evlist = counter->evlist; 1132 + u32 pmu_type = counter->attr.type; 1133 + struct perf_evsel *pos; 1134 + 1135 + if (counter->nr_members < 2) 1136 + return false; 1137 + 1138 + evlist__for_each_entry(evlist, pos) { 1139 + /* software events can be part of any hardware group */ 1140 + if (pos->attr.type == PERF_TYPE_SOFTWARE) 1141 + continue; 1142 + if (pmu_type == PERF_TYPE_SOFTWARE) { 1143 + pmu_type = pos->attr.type; 1144 + continue; 1145 + } 1146 + if (pmu_type != pos->attr.type) 1147 + return true; 1148 + } 1149 + 1150 + return false; 1151 + } 1152 + 1130 1153 static void printout(int id, int nr, struct perf_evsel *counter, double uval, 1131 1154 char *prefix, u64 run, u64 ena, double noise, 1132 1155 struct runtime_stat *st) ··· 1203 1178 counter->supported ? CNTR_NOT_COUNTED : CNTR_NOT_SUPPORTED, 1204 1179 csv_sep); 1205 1180 1206 - if (counter->supported) 1181 + if (counter->supported) { 1207 1182 print_free_counters_hint = 1; 1183 + if (is_mixed_hw_group(counter)) 1184 + print_mixed_hw_group_error = 1; 1185 + } 1208 1186 1209 1187 fprintf(stat_config.output, "%-*s%s", 1210 1188 csv_output ? 
0 : unit_width, ··· 1284 1256 char *new_name; 1285 1257 char *config; 1286 1258 1287 - if (!counter->pmu_name || !strncmp(counter->name, counter->pmu_name, 1259 + if (counter->uniquified_name || 1260 + !counter->pmu_name || !strncmp(counter->name, counter->pmu_name, 1288 1261 strlen(counter->pmu_name))) 1289 1262 return; 1290 1263 ··· 1303 1274 counter->name = new_name; 1304 1275 } 1305 1276 } 1277 + 1278 + counter->uniquified_name = true; 1306 1279 } 1307 1280 1308 1281 static void collect_all_aliases(struct perf_evsel *counter, ··· 1788 1757 " echo 0 > /proc/sys/kernel/nmi_watchdog\n" 1789 1758 " perf stat ...\n" 1790 1759 " echo 1 > /proc/sys/kernel/nmi_watchdog\n"); 1760 + 1761 + if (print_mixed_hw_group_error) 1762 + fprintf(output, 1763 + "The events in group usually have to be from " 1764 + "the same PMU. Try reorganizing the group.\n"); 1791 1765 } 1792 1766 1793 1767 static void print_counters(struct timespec *ts, int argc, const char **argv)
+5 -5
tools/perf/pmu-events/arch/s390/mapfile.csv
··· 1 1 Family-model,Version,Filename,EventType 2 - 209[78],1,cf_z10,core 3 - 281[78],1,cf_z196,core 4 - 282[78],1,cf_zec12,core 5 - 296[45],1,cf_z13,core 6 - 3906,3,cf_z14,core 2 + ^IBM.209[78].*[13]\.[1-5].[[:xdigit:]]+$,1,cf_z10,core 3 + ^IBM.281[78].*[13]\.[1-5].[[:xdigit:]]+$,1,cf_z196,core 4 + ^IBM.282[78].*[13]\.[1-5].[[:xdigit:]]+$,1,cf_zec12,core 5 + ^IBM.296[45].*[13]\.[1-5].[[:xdigit:]]+$,1,cf_z13,core 6 + ^IBM.390[67].*[13]\.[1-5].[[:xdigit:]]+$,3,cf_z14,core
+3
tools/perf/tests/attr/test-record-group-sampling
··· 35 35 # sampling disabled 36 36 sample_freq=0 37 37 sample_period=0 38 + freq=0 39 + write_backward=0 40 + sample_id_all=0
+2 -4
tools/perf/tests/shell/record+probe_libc_inet_pton.sh
··· 19 19 expected[1]=".*inet_pton[[:space:]]\($libc\)$" 20 20 case "$(uname -m)" in 21 21 s390x) 22 - eventattr='call-graph=dwarf' 22 + eventattr='call-graph=dwarf,max-stack=4' 23 23 expected[2]="gaih_inet.*[[:space:]]\($libc|inlined\)$" 24 - expected[3]="__GI_getaddrinfo[[:space:]]\($libc|inlined\)$" 24 + expected[3]="(__GI_)?getaddrinfo[[:space:]]\($libc|inlined\)$" 25 25 expected[4]="main[[:space:]]\(.*/bin/ping.*\)$" 26 - expected[5]="__libc_start_main[[:space:]]\($libc\)$" 27 - expected[6]="_start[[:space:]]\(.*/bin/ping.*\)$" 28 26 ;; 29 27 *) 30 28 eventattr='max-stack=3'
+14 -4
tools/perf/util/evsel.c
··· 930 930 * than leader in case leader 'leads' the sampling. 931 931 */ 932 932 if ((leader != evsel) && leader->sample_read) { 933 - attr->sample_freq = 0; 934 - attr->sample_period = 0; 933 + attr->freq = 0; 934 + attr->sample_freq = 0; 935 + attr->sample_period = 0; 936 + attr->write_backward = 0; 937 + attr->sample_id_all = 0; 935 938 } 936 939 937 940 if (opts->no_samples) ··· 1925 1922 goto fallback_missing_features; 1926 1923 } else if (!perf_missing_features.group_read && 1927 1924 evsel->attr.inherit && 1928 - (evsel->attr.read_format & PERF_FORMAT_GROUP)) { 1925 + (evsel->attr.read_format & PERF_FORMAT_GROUP) && 1926 + perf_evsel__is_group_leader(evsel)) { 1929 1927 perf_missing_features.group_read = true; 1930 1928 pr_debug2("switching off group read\n"); 1931 1929 goto fallback_missing_features; ··· 2758 2754 (paranoid = perf_event_paranoid()) > 1) { 2759 2755 const char *name = perf_evsel__name(evsel); 2760 2756 char *new_name; 2757 + const char *sep = ":"; 2761 2758 2762 - if (asprintf(&new_name, "%s%su", name, strchr(name, ':') ? "" : ":") < 0) 2759 + /* Is there already the separator in the name. */ 2760 + if (strchr(name, '/') || 2761 + strchr(name, ':')) 2762 + sep = ""; 2763 + 2764 + if (asprintf(&new_name, "%s%su", name, sep) < 0) 2763 2765 return false; 2764 2766 2765 2767 if (evsel->name)
+1
tools/perf/util/evsel.h
··· 115 115 unsigned int sample_size; 116 116 int id_pos; 117 117 int is_pos; 118 + bool uniquified_name; 118 119 bool snapshot; 119 120 bool supported; 120 121 bool needs_swap;
+18 -12
tools/perf/util/machine.c
··· 1019 1019 return ret; 1020 1020 } 1021 1021 1022 - static void map_groups__fixup_end(struct map_groups *mg) 1023 - { 1024 - int i; 1025 - for (i = 0; i < MAP__NR_TYPES; ++i) 1026 - __map_groups__fixup_end(mg, i); 1027 - } 1028 - 1029 1022 static char *get_kernel_version(const char *root_dir) 1030 1023 { 1031 1024 char version[PATH_MAX]; ··· 1226 1233 { 1227 1234 struct dso *kernel = machine__get_kernel(machine); 1228 1235 const char *name = NULL; 1236 + struct map *map; 1229 1237 u64 addr = 0; 1230 1238 int ret; 1231 1239 ··· 1253 1259 machine__destroy_kernel_maps(machine); 1254 1260 return -1; 1255 1261 } 1256 - machine__set_kernel_mmap(machine, addr, 0); 1262 + 1263 + /* we have a real start address now, so re-order the kmaps */ 1264 + map = machine__kernel_map(machine); 1265 + 1266 + map__get(map); 1267 + map_groups__remove(&machine->kmaps, map); 1268 + 1269 + /* assume it's the last in the kmaps */ 1270 + machine__set_kernel_mmap(machine, addr, ~0ULL); 1271 + 1272 + map_groups__insert(&machine->kmaps, map); 1273 + map__put(map); 1257 1274 } 1258 1275 1259 - /* 1260 - * Now that we have all the maps created, just set the ->end of them: 1261 - */ 1262 - map_groups__fixup_end(&machine->kmaps); 1276 + /* update end address of the kernel map using adjacent module address */ 1277 + map = map__next(machine__kernel_map(machine)); 1278 + if (map) 1279 + machine__set_kernel_mmap(machine, addr, map->start); 1280 + 1263 1281 return 0; 1264 1282 } 1265 1283
+4 -4
tools/perf/util/parse-events.y
··· 224 224 event_bpf_file 225 225 226 226 event_pmu: 227 - PE_NAME opt_event_config 227 + PE_NAME '/' event_config '/' 228 228 { 229 229 struct list_head *list, *orig_terms, *terms; 230 230 231 - if (parse_events_copy_term_list($2, &orig_terms)) 231 + if (parse_events_copy_term_list($3, &orig_terms)) 232 232 YYABORT; 233 233 234 234 ALLOC_LIST(list); 235 - if (parse_events_add_pmu(_parse_state, list, $1, $2, false)) { 235 + if (parse_events_add_pmu(_parse_state, list, $1, $3, false)) { 236 236 struct perf_pmu *pmu = NULL; 237 237 int ok = 0; 238 238 char *pattern; ··· 262 262 if (!ok) 263 263 YYABORT; 264 264 } 265 - parse_events_terms__delete($2); 265 + parse_events_terms__delete($3); 266 266 parse_events_terms__delete(orig_terms); 267 267 $$ = list; 268 268 }
+8 -14
tools/perf/util/pmu.c
··· 539 539 540 540 /* 541 541 * PMU CORE devices have different name other than cpu in sysfs on some 542 - * platforms. looking for possible sysfs files to identify as core device. 542 + * platforms. 543 + * Looking for possible sysfs files to identify the arm core device. 543 544 */ 544 - static int is_pmu_core(const char *name) 545 + static int is_arm_pmu_core(const char *name) 545 546 { 546 547 struct stat st; 547 548 char path[PATH_MAX]; ··· 550 549 551 550 if (!sysfs) 552 551 return 0; 553 - 554 - /* Look for cpu sysfs (x86 and others) */ 555 - scnprintf(path, PATH_MAX, "%s/bus/event_source/devices/cpu", sysfs); 556 - if ((stat(path, &st) == 0) && 557 - (strncmp(name, "cpu", strlen("cpu")) == 0)) 558 - return 1; 559 552 560 553 /* Look for cpu sysfs (specific to arm) */ 561 554 scnprintf(path, PATH_MAX, "%s/bus/event_source/devices/%s/cpus", ··· 581 586 * cpuid string generated on this platform. 582 587 * Otherwise return non-zero. 583 588 */ 584 - int __weak strcmp_cpuid_str(const char *mapcpuid, const char *cpuid) 589 + int strcmp_cpuid_str(const char *mapcpuid, const char *cpuid) 585 590 { 586 591 regex_t re; 587 592 regmatch_t pmatch[1]; ··· 663 668 struct pmu_events_map *map; 664 669 struct pmu_event *pe; 665 670 const char *name = pmu->name; 671 + const char *pname; 666 672 667 673 map = perf_pmu__find_map(pmu); 668 674 if (!map) ··· 682 686 break; 683 687 } 684 688 685 - if (!is_pmu_core(name)) { 686 - /* check for uncore devices */ 687 - if (pe->pmu == NULL) 688 - continue; 689 - if (strncmp(pe->pmu, name, strlen(pe->pmu))) 689 + if (!is_arm_pmu_core(name)) { 690 + pname = pe->pmu ? pe->pmu : "cpu"; 691 + if (strncmp(pname, name, strlen(pname))) 690 692 continue; 691 693 } 692 694
+3
tools/testing/selftests/bpf/.gitignore
··· 12 12 test_verifier_log 13 13 feature 14 14 test_libbpf_open 15 + test_sock 16 + test_sock_addr 17 + urandom_read
+1
tools/testing/selftests/bpf/test_sock.c
··· 13 13 #include <bpf/bpf.h> 14 14 15 15 #include "cgroup_helpers.h" 16 + #include "bpf_rlimit.h" 16 17 17 18 #ifndef ARRAY_SIZE 18 19 # define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+1
tools/testing/selftests/bpf/test_sock_addr.c
··· 15 15 #include <bpf/libbpf.h> 16 16 17 17 #include "cgroup_helpers.h" 18 + #include "bpf_rlimit.h" 18 19 19 20 #define CG_PATH "/foo" 20 21 #define CONNECT4_PROG_PATH "./connect4_prog.o"
+2 -2
tools/testing/selftests/bpf/test_sock_addr.sh
··· 4 4 5 5 ping_once() 6 6 { 7 - ping -q -c 1 -W 1 ${1%%/*} >/dev/null 2>&1 7 + ping -${1} -q -c 1 -W 1 ${2%%/*} >/dev/null 2>&1 8 8 } 9 9 10 10 wait_for_ip() ··· 13 13 echo -n "Wait for testing IPv4/IPv6 to become available " 14 14 for _i in $(seq ${MAX_PING_TRIES}); do 15 15 echo -n "." 16 - if ping_once ${TEST_IPv4} && ping_once ${TEST_IPv6}; then 16 + if ping_once 4 ${TEST_IPv4} && ping_once 6 ${TEST_IPv6}; then 17 17 echo " OK" 18 18 return 19 19 fi
+1
tools/testing/selftests/firmware/Makefile
··· 4 4 all: 5 5 6 6 TEST_PROGS := fw_run_tests.sh 7 + TEST_FILES := fw_fallback.sh fw_filesystem.sh fw_lib.sh 7 8 8 9 include ../lib.mk 9 10
+6 -4
tools/testing/selftests/firmware/fw_lib.sh
··· 154 154 if [ "$HAS_FW_LOADER_USER_HELPER" = "yes" ]; then 155 155 echo "$OLD_TIMEOUT" >/sys/class/firmware/timeout 156 156 fi 157 - if [ "$OLD_FWPATH" = "" ]; then 158 - OLD_FWPATH=" " 159 - fi 160 157 if [ "$TEST_REQS_FW_SET_CUSTOM_PATH" = "yes" ]; then 161 - echo -n "$OLD_FWPATH" >/sys/module/firmware_class/parameters/path 158 + if [ "$OLD_FWPATH" = "" ]; then 159 + # A zero-length write won't work; write a null byte 160 + printf '\000' >/sys/module/firmware_class/parameters/path 161 + else 162 + echo -n "$OLD_FWPATH" >/sys/module/firmware_class/parameters/path 163 + fi 162 164 fi 163 165 if [ -f $FW ]; then 164 166 rm -f "$FW"
+1 -1
tools/testing/selftests/firmware/fw_run_tests.sh
··· 66 66 run_test_config_0003 67 67 else 68 68 echo "Running basic kernel configuration, working with your config" 69 - run_test 69 + run_tests 70 70 fi
+1 -1
tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-extended-error-support.tc
··· 29 29 30 30 echo "Test extended error support" 31 31 echo 'hist:keys=pid:ts0=common_timestamp.usecs if comm=="ping"' > events/sched/sched_wakeup/trigger 32 - echo 'hist:keys=pid:ts0=common_timestamp.usecs if comm=="ping"' >> events/sched/sched_wakeup/trigger &>/dev/null 32 + ! echo 'hist:keys=pid:ts0=common_timestamp.usecs if comm=="ping"' >> events/sched/sched_wakeup/trigger 2> /dev/null 33 33 if ! grep -q "ERROR:" events/sched/sched_wakeup/hist; then 34 34 fail "Failed to generate extended error in histogram" 35 35 fi
+44
tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-multi-actions-accept.tc
··· 1 + #!/bin/sh 2 + # description: event trigger - test multiple actions on hist trigger 3 + 4 + 5 + do_reset() { 6 + reset_trigger 7 + echo > set_event 8 + clear_trace 9 + } 10 + 11 + fail() { #msg 12 + do_reset 13 + echo $1 14 + exit_fail 15 + } 16 + 17 + if [ ! -f set_event ]; then 18 + echo "event tracing is not supported" 19 + exit_unsupported 20 + fi 21 + 22 + if [ ! -f synthetic_events ]; then 23 + echo "synthetic event is not supported" 24 + exit_unsupported 25 + fi 26 + 27 + clear_synthetic_events 28 + reset_tracer 29 + do_reset 30 + 31 + echo "Test multiple actions on hist trigger" 32 + echo 'wakeup_latency u64 lat; pid_t pid' >> synthetic_events 33 + TRIGGER1=events/sched/sched_wakeup/trigger 34 + TRIGGER2=events/sched/sched_switch/trigger 35 + 36 + echo 'hist:keys=pid:ts0=common_timestamp.usecs if comm=="cyclictest"' > $TRIGGER1 37 + echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0 if next_comm=="cyclictest"' >> $TRIGGER2 38 + echo 'hist:keys=next_pid:onmatch(sched.sched_wakeup).wakeup_latency(sched.sched_switch.$wakeup_lat,next_pid) if next_comm=="cyclictest"' >> $TRIGGER2 39 + echo 'hist:keys=next_pid:onmatch(sched.sched_wakeup).wakeup_latency(sched.sched_switch.$wakeup_lat,prev_pid) if next_comm=="cyclictest"' >> $TRIGGER2 40 + echo 'hist:keys=next_pid if next_comm=="cyclictest"' >> $TRIGGER2 41 + 42 + do_reset 43 + 44 + exit 0
+21 -14
tools/testing/selftests/x86/test_syscall_vdso.c
··· 100 100 " shl $32, %r8\n" 101 101 " orq $0x7f7f7f7f, %r8\n" 102 102 " movq %r8, %r9\n" 103 - " movq %r8, %r10\n" 104 - " movq %r8, %r11\n" 105 - " movq %r8, %r12\n" 106 - " movq %r8, %r13\n" 107 - " movq %r8, %r14\n" 108 - " movq %r8, %r15\n" 103 + " incq %r9\n" 104 + " movq %r9, %r10\n" 105 + " incq %r10\n" 106 + " movq %r10, %r11\n" 107 + " incq %r11\n" 108 + " movq %r11, %r12\n" 109 + " incq %r12\n" 110 + " movq %r12, %r13\n" 111 + " incq %r13\n" 112 + " movq %r13, %r14\n" 113 + " incq %r14\n" 114 + " movq %r14, %r15\n" 115 + " incq %r15\n" 109 116 " ret\n" 110 117 " .code32\n" 111 118 " .popsection\n" ··· 135 128 int err = 0; 136 129 int num = 8; 137 130 uint64_t *r64 = &regs64.r8; 131 + uint64_t expected = 0x7f7f7f7f7f7f7f7fULL; 138 132 139 133 if (!kernel_is_64bit) 140 134 return 0; 141 135 142 136 do { 143 - if (*r64 == 0x7f7f7f7f7f7f7f7fULL) 137 + if (*r64 == expected++) 144 138 continue; /* register did not change */ 145 139 if (syscall_addr != (long)&int80) { 146 140 /* ··· 155 147 continue; 156 148 } 157 149 } else { 158 - /* INT80 syscall entrypoint can be used by 150 + /* 151 + * INT80 syscall entrypoint can be used by 159 152 * 64-bit programs too, unlike SYSCALL/SYSENTER. 160 153 * Therefore it must preserve R12+ 161 154 * (they are callee-saved registers in 64-bit C ABI). 162 155 * 163 - * This was probably historically not intended, 164 - * but R8..11 are clobbered (cleared to 0). 165 - * IOW: they are the only registers which aren't 166 - * preserved across INT80 syscall. 156 + * Starting in Linux 4.17 (and any kernel that 157 + * backports the change), R8..11 are preserved. 158 + * Historically (and probably unintentionally), they 159 + * were clobbered or zeroed. 167 160 */ 168 - if (*r64 == 0 && num <= 11) 169 - continue; 170 161 } 171 162 printf("[FAIL]\tR%d has changed:%016llx\n", num, *r64); 172 163 err++;
+10 -5
virt/kvm/arm/arm.c
··· 63 63 static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1); 64 64 static u32 kvm_next_vmid; 65 65 static unsigned int kvm_vmid_bits __read_mostly; 66 - static DEFINE_SPINLOCK(kvm_vmid_lock); 66 + static DEFINE_RWLOCK(kvm_vmid_lock); 67 67 68 68 static bool vgic_present; 69 69 ··· 473 473 { 474 474 phys_addr_t pgd_phys; 475 475 u64 vmid; 476 + bool new_gen; 476 477 477 - if (!need_new_vmid_gen(kvm)) 478 + read_lock(&kvm_vmid_lock); 479 + new_gen = need_new_vmid_gen(kvm); 480 + read_unlock(&kvm_vmid_lock); 481 + 482 + if (!new_gen) 478 483 return; 479 484 480 - spin_lock(&kvm_vmid_lock); 485 + write_lock(&kvm_vmid_lock); 481 486 482 487 /* 483 488 * We need to re-check the vmid_gen here to ensure that if another vcpu ··· 490 485 * use the same vmid. 491 486 */ 492 487 if (!need_new_vmid_gen(kvm)) { 493 - spin_unlock(&kvm_vmid_lock); 488 + write_unlock(&kvm_vmid_lock); 494 489 return; 495 490 } 496 491 ··· 524 519 vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits); 525 520 kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid; 526 521 527 - spin_unlock(&kvm_vmid_lock); 522 + write_unlock(&kvm_vmid_lock); 528 523 } 529 524 530 525 static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
+60
virt/kvm/arm/psci.c
··· 18 18 #include <linux/arm-smccc.h> 19 19 #include <linux/preempt.h> 20 20 #include <linux/kvm_host.h> 21 + #include <linux/uaccess.h> 21 22 #include <linux/wait.h> 22 23 23 24 #include <asm/cputype.h> ··· 427 426 428 427 smccc_set_retval(vcpu, val, 0, 0, 0); 429 428 return 1; 429 + } 430 + 431 + int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu) 432 + { 433 + return 1; /* PSCI version */ 434 + } 435 + 436 + int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) 437 + { 438 + if (put_user(KVM_REG_ARM_PSCI_VERSION, uindices)) 439 + return -EFAULT; 440 + 441 + return 0; 442 + } 443 + 444 + int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) 445 + { 446 + if (reg->id == KVM_REG_ARM_PSCI_VERSION) { 447 + void __user *uaddr = (void __user *)(long)reg->addr; 448 + u64 val; 449 + 450 + val = kvm_psci_version(vcpu, vcpu->kvm); 451 + if (copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id))) 452 + return -EFAULT; 453 + 454 + return 0; 455 + } 456 + 457 + return -EINVAL; 458 + } 459 + 460 + int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) 461 + { 462 + if (reg->id == KVM_REG_ARM_PSCI_VERSION) { 463 + void __user *uaddr = (void __user *)(long)reg->addr; 464 + bool wants_02; 465 + u64 val; 466 + 467 + if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id))) 468 + return -EFAULT; 469 + 470 + wants_02 = test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features); 471 + 472 + switch (val) { 473 + case KVM_ARM_PSCI_0_1: 474 + if (wants_02) 475 + return -EINVAL; 476 + vcpu->kvm->arch.psci_version = val; 477 + return 0; 478 + case KVM_ARM_PSCI_0_2: 479 + case KVM_ARM_PSCI_1_0: 480 + if (!wants_02) 481 + return -EINVAL; 482 + vcpu->kvm->arch.psci_version = val; 483 + return 0; 484 + } 485 + } 486 + 487 + return -EINVAL; 430 488 }
+5
virt/kvm/arm/vgic/vgic-mmio-v2.c
··· 14 14 #include <linux/irqchip/arm-gic.h> 15 15 #include <linux/kvm.h> 16 16 #include <linux/kvm_host.h> 17 + #include <linux/nospec.h> 18 + 17 19 #include <kvm/iodev.h> 18 20 #include <kvm/arm_vgic.h> 19 21 ··· 326 324 327 325 if (n > vgic_v3_max_apr_idx(vcpu)) 328 326 return 0; 327 + 328 + n = array_index_nospec(n, 4); 329 + 329 330 /* GICv3 only uses ICH_AP1Rn for memory mapped (GICv2) guests */ 330 331 return vgicv3->vgic_ap1r[n]; 331 332 }
+18 -4
virt/kvm/arm/vgic/vgic.c
··· 14 14 * along with this program. If not, see <http://www.gnu.org/licenses/>. 15 15 */ 16 16 17 + #include <linux/interrupt.h> 18 + #include <linux/irq.h> 17 19 #include <linux/kvm.h> 18 20 #include <linux/kvm_host.h> 19 21 #include <linux/list_sort.h> 20 - #include <linux/interrupt.h> 21 - #include <linux/irq.h> 22 + #include <linux/nospec.h> 23 + 22 24 #include <asm/kvm_hyp.h> 23 25 24 26 #include "vgic.h" ··· 103 101 u32 intid) 104 102 { 105 103 /* SGIs and PPIs */ 106 - if (intid <= VGIC_MAX_PRIVATE) 104 + if (intid <= VGIC_MAX_PRIVATE) { 105 + intid = array_index_nospec(intid, VGIC_MAX_PRIVATE); 107 106 return &vcpu->arch.vgic_cpu.private_irqs[intid]; 107 + } 108 108 109 109 /* SPIs */ 110 - if (intid <= VGIC_MAX_SPI) 110 + if (intid <= VGIC_MAX_SPI) { 111 + intid = array_index_nospec(intid, VGIC_MAX_SPI); 111 112 return &kvm->arch.vgic.spis[intid - VGIC_NR_PRIVATE_IRQS]; 113 + } 112 114 113 115 /* LPIs */ 114 116 if (intid >= VGIC_MIN_LPI) ··· 600 594 601 595 list_for_each_entry_safe(irq, tmp, &vgic_cpu->ap_list_head, ap_list) { 602 596 struct kvm_vcpu *target_vcpu, *vcpuA, *vcpuB; 597 + bool target_vcpu_needs_kick = false; 603 598 604 599 spin_lock(&irq->irq_lock); 605 600 ··· 671 664 list_del(&irq->ap_list); 672 665 irq->vcpu = target_vcpu; 673 666 list_add_tail(&irq->ap_list, &new_cpu->ap_list_head); 667 + target_vcpu_needs_kick = true; 674 668 } 675 669 676 670 spin_unlock(&irq->irq_lock); 677 671 spin_unlock(&vcpuB->arch.vgic_cpu.ap_list_lock); 678 672 spin_unlock_irqrestore(&vcpuA->arch.vgic_cpu.ap_list_lock, flags); 673 + 674 + if (target_vcpu_needs_kick) { 675 + kvm_make_request(KVM_REQ_IRQ_PENDING, target_vcpu); 676 + kvm_vcpu_kick(target_vcpu); 677 + } 678 + 679 679 goto retry; 680 680 } 681 681