Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
- keep the ZC code, drop the code related to reinit
net/bridge/netfilter/ebtables.c
- fix build after move to net_generic

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+1403 -993
+7
.mailmap
··· 168 168 Johan Hovold <johan@kernel.org> <johan@hovoldconsulting.com> 169 169 John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> 170 170 John Stultz <johnstul@us.ibm.com> 171 + Jordan Crouse <jordan@cosmicpenguin.net> <jcrouse@codeaurora.org> 171 172 <josh@joshtriplett.org> <josh@freedesktop.org> 172 173 <josh@joshtriplett.org> <josh@kernel.org> 173 174 <josh@joshtriplett.org> <josht@linux.vnet.ibm.com> ··· 254 253 Morten Welinder <welinder@darter.rentec.com> 255 254 Morten Welinder <welinder@troll.com> 256 255 Mythri P K <mythripk@ti.com> 256 + Nadia Yvette Chambers <nyc@holomorphy.com> William Lee Irwin III <wli@holomorphy.com> 257 257 Nathan Chancellor <nathan@kernel.org> <natechancellor@gmail.com> 258 258 Nguyen Anh Quynh <aquynh@gmail.com> 259 + Nicholas Piggin <npiggin@gmail.com> <npiggen@suse.de> 260 + Nicholas Piggin <npiggin@gmail.com> <npiggin@kernel.dk> 261 + Nicholas Piggin <npiggin@gmail.com> <npiggin@suse.de> 262 + Nicholas Piggin <npiggin@gmail.com> <nickpiggin@yahoo.com.au> 263 + Nicholas Piggin <npiggin@gmail.com> <piggin@cyberone.com.au> 259 264 Nicolas Ferre <nicolas.ferre@microchip.com> <nicolas.ferre@atmel.com> 260 265 Nicolas Pitre <nico@fluxnic.net> <nicolas.pitre@linaro.org> 261 266 Nicolas Pitre <nico@fluxnic.net> <nico@linaro.org>
+2 -2
Documentation/ABI/testing/debugfs-moxtet
··· 1 1 What: /sys/kernel/debug/moxtet/input 2 2 Date: March 2019 3 3 KernelVersion: 5.3 4 - Contact: Marek Behún <marek.behun@nic.cz> 4 + Contact: Marek Behún <kabel@kernel.org> 5 5 Description: (Read) Read input from the shift registers, in hexadecimal. 6 6 Returns N+1 bytes, where N is the number of Moxtet connected 7 7 modules. The first byte is from the CPU board itself. ··· 19 19 What: /sys/kernel/debug/moxtet/output 20 20 Date: March 2019 21 21 KernelVersion: 5.3 22 - Contact: Marek Behún <marek.behun@nic.cz> 22 + Contact: Marek Behún <kabel@kernel.org> 23 23 Description: (RW) Read last written value to the shift registers, in 24 24 hexadecimal, or write values to the shift registers, also 25 25 in hexadecimal.
+1 -1
Documentation/ABI/testing/debugfs-turris-mox-rwtm
··· 1 1 What: /sys/kernel/debug/turris-mox-rwtm/do_sign 2 2 Date: Jun 2020 3 3 KernelVersion: 5.8 4 - Contact: Marek Behún <marek.behun@nic.cz> 4 + Contact: Marek Behún <kabel@kernel.org> 5 5 Description: 6 6 7 7 ======= ===========================================================
+3 -3
Documentation/ABI/testing/sysfs-bus-moxtet-devices
··· 1 1 What: /sys/bus/moxtet/devices/moxtet-<name>.<addr>/module_description 2 2 Date: March 2019 3 3 KernelVersion: 5.3 4 - Contact: Marek Behún <marek.behun@nic.cz> 4 + Contact: Marek Behún <kabel@kernel.org> 5 5 Description: (Read) Moxtet module description. Format: string 6 6 7 7 What: /sys/bus/moxtet/devices/moxtet-<name>.<addr>/module_id 8 8 Date: March 2019 9 9 KernelVersion: 5.3 10 - Contact: Marek Behún <marek.behun@nic.cz> 10 + Contact: Marek Behún <kabel@kernel.org> 11 11 Description: (Read) Moxtet module ID. Format: %x 12 12 13 13 What: /sys/bus/moxtet/devices/moxtet-<name>.<addr>/module_name 14 14 Date: March 2019 15 15 KernelVersion: 5.3 16 - Contact: Marek Behún <marek.behun@nic.cz> 16 + Contact: Marek Behún <kabel@kernel.org> 17 17 Description: (Read) Moxtet module name. Format: string
+1 -1
Documentation/ABI/testing/sysfs-class-led-driver-turris-omnia
··· 1 1 What: /sys/class/leds/<led>/device/brightness 2 2 Date: July 2020 3 3 KernelVersion: 5.9 4 - Contact: Marek Behún <marek.behun@nic.cz> 4 + Contact: Marek Behún <kabel@kernel.org> 5 5 Description: (RW) On the front panel of the Turris Omnia router there is also 6 6 a button which can be used to control the intensity of all the 7 7 LEDs at once, so that if they are too bright, user can dim them.
+5 -5
Documentation/ABI/testing/sysfs-firmware-turris-mox-rwtm
··· 1 1 What: /sys/firmware/turris-mox-rwtm/board_version 2 2 Date: August 2019 3 3 KernelVersion: 5.4 4 - Contact: Marek Behún <marek.behun@nic.cz> 4 + Contact: Marek Behún <kabel@kernel.org> 5 5 Description: (Read) Board version burned into eFuses of this Turris Mox board. 6 6 Format: %i 7 7 8 8 What: /sys/firmware/turris-mox-rwtm/mac_address* 9 9 Date: August 2019 10 10 KernelVersion: 5.4 11 - Contact: Marek Behún <marek.behun@nic.cz> 11 + Contact: Marek Behún <kabel@kernel.org> 12 12 Description: (Read) MAC addresses burned into eFuses of this Turris Mox board. 13 13 Format: %pM 14 14 15 15 What: /sys/firmware/turris-mox-rwtm/pubkey 16 16 Date: August 2019 17 17 KernelVersion: 5.4 18 - Contact: Marek Behún <marek.behun@nic.cz> 18 + Contact: Marek Behún <kabel@kernel.org> 19 19 Description: (Read) ECDSA public key (in pubkey hex compressed form) computed 20 20 as pair to the ECDSA private key burned into eFuses of this 21 21 Turris Mox Board. ··· 24 24 What: /sys/firmware/turris-mox-rwtm/ram_size 25 25 Date: August 2019 26 26 KernelVersion: 5.4 27 - Contact: Marek Behún <marek.behun@nic.cz> 27 + Contact: Marek Behún <kabel@kernel.org> 28 28 Description: (Read) RAM size in MiB of this Turris Mox board as was detected 29 29 during manufacturing and burned into eFuses. Can be 512 or 1024. 30 30 Format: %i ··· 32 32 What: /sys/firmware/turris-mox-rwtm/serial_number 33 33 Date: August 2019 34 34 KernelVersion: 5.4 35 - Contact: Marek Behún <marek.behun@nic.cz> 35 + Contact: Marek Behún <kabel@kernel.org> 36 36 Description: (Read) Serial number burned into eFuses of this Turris Mox device. 37 37 Format: %016X
+1 -1
Documentation/devicetree/bindings/i2c/i2c-gpio.yaml
··· 7 7 title: Bindings for GPIO bitbanged I2C 8 8 9 9 maintainers: 10 - - Wolfram Sang <wolfram@the-dreams.de> 10 + - Wolfram Sang <wsa@kernel.org> 11 11 12 12 allOf: 13 13 - $ref: /schemas/i2c/i2c-controller.yaml#
+1 -1
Documentation/devicetree/bindings/i2c/i2c-imx.yaml
··· 7 7 title: Freescale Inter IC (I2C) and High Speed Inter IC (HS-I2C) for i.MX 8 8 9 9 maintainers: 10 - - Wolfram Sang <wolfram@the-dreams.de> 10 + - Oleksij Rempel <o.rempel@pengutronix.de> 11 11 12 12 allOf: 13 13 - $ref: /schemas/i2c/i2c-controller.yaml#
+1 -1
Documentation/devicetree/bindings/leds/cznic,turris-omnia-leds.yaml
··· 7 7 title: CZ.NIC's Turris Omnia LEDs driver 8 8 9 9 maintainers: 10 - - Marek Behún <marek.behun@nic.cz> 10 + - Marek Behún <kabel@kernel.org> 11 11 12 12 description: 13 13 This module adds support for the RGB LEDs found on the front panel of the
-15
Documentation/networking/ip-sysctl.rst
··· 1857 1857 ip6frag_time - INTEGER 1858 1858 Time in seconds to keep an IPv6 fragment in memory. 1859 1859 1860 - IPv6 Segment Routing: 1861 - 1862 - seg6_flowlabel - INTEGER 1863 - Controls the behaviour of computing the flowlabel of outer 1864 - IPv6 header in case of SR T.encaps 1865 - 1866 - == ======================================================= 1867 - -1 set flowlabel to zero. 1868 - 0 copy flowlabel from Inner packet in case of Inner IPv6 1869 - (Set flowlabel to 0 in case IPv4/L2) 1870 - 1 Compute the flowlabel using seg6_make_flowlabel() 1871 - == ======================================================= 1872 - 1873 - Default is 0. 1874 - 1875 1860 ``conf/default/*``: 1876 1861 Change the interface-specific default settings. 1877 1862
+13
Documentation/networking/seg6-sysctl.rst
··· 24 24 * 1 - Drop SR packets without HMAC, validate SR packets with HMAC 25 25 26 26 Default is 0. 27 + 28 + seg6_flowlabel - INTEGER 29 + Controls the behaviour of computing the flowlabel of outer 30 + IPv6 header in case of SR T.encaps 31 + 32 + == ======================================================= 33 + -1 set flowlabel to zero. 34 + 0 copy flowlabel from Inner packet in case of Inner IPv6 35 + (Set flowlabel to 0 in case IPv4/L2) 36 + 1 Compute the flowlabel using seg6_make_flowlabel() 37 + == ======================================================= 38 + 39 + Default is 0.
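The seg6_flowlabel entry moved into seg6-sysctl.rst above documents a per-namespace knob under /proc/sys/net/ipv6/. As an editor's illustrative sketch (not part of this commit), selecting the computed-flowlabel behaviour via a sysctl.conf fragment would look like:

```
# Illustrative sysctl.conf fragment (editor's sketch, not from this commit):
# compute the outer IPv6 flowlabel via seg6_make_flowlabel() for SR T.encaps
net.ipv6.seg6_flowlabel = 1
```

Per the table above, -1 would force a zero flowlabel and 0 (the default) would copy it from the inner IPv6 packet.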
+12 -5
MAINTAINERS
··· 1792 1792 F: drivers/pinctrl/pinctrl-gemini.c 1793 1793 F: drivers/rtc/rtc-ftrtc010.c 1794 1794 1795 - ARM/CZ.NIC TURRIS MOX SUPPORT 1796 - M: Marek Behun <marek.behun@nic.cz> 1795 + ARM/CZ.NIC TURRIS SUPPORT 1796 + M: Marek Behun <kabel@kernel.org> 1797 1797 S: Maintained 1798 - W: http://mox.turris.cz 1798 + W: https://www.turris.cz/ 1799 1799 F: Documentation/ABI/testing/debugfs-moxtet 1800 1800 F: Documentation/ABI/testing/sysfs-bus-moxtet-devices 1801 1801 F: Documentation/ABI/testing/sysfs-firmware-turris-mox-rwtm 1802 1802 F: Documentation/devicetree/bindings/bus/moxtet.txt 1803 1803 F: Documentation/devicetree/bindings/firmware/cznic,turris-mox-rwtm.txt 1804 1804 F: Documentation/devicetree/bindings/gpio/gpio-moxtet.txt 1805 + F: Documentation/devicetree/bindings/leds/cznic,turris-omnia-leds.yaml 1806 + F: Documentation/devicetree/bindings/watchdog/armada-37xx-wdt.txt 1805 1807 F: drivers/bus/moxtet.c 1806 1808 F: drivers/firmware/turris-mox-rwtm.c 1809 + F: drivers/leds/leds-turris-omnia.c 1810 + F: drivers/mailbox/armada-37xx-rwtm-mailbox.c 1807 1811 F: drivers/gpio/gpio-moxtet.c 1812 + F: drivers/watchdog/armada_37xx_wdt.c 1813 + F: include/dt-bindings/bus/moxtet.h 1814 + F: include/linux/armada-37xx-rwtm-mailbox.h 1808 1815 F: include/linux/moxtet.h 1809 1816 1810 1817 ARM/EZX SMARTPHONES (A780, A910, A1200, E680, ROKR E2 and ROKR E6) ··· 7100 7093 F: drivers/i2c/busses/i2c-cpm.c 7101 7094 7102 7095 FREESCALE IMX / MXC FEC DRIVER 7103 - M: Fugang Duan <fugang.duan@nxp.com> 7096 + M: Joakim Zhang <qiangqing.zhang@nxp.com> 7104 7097 L: netdev@vger.kernel.org 7105 7098 S: Maintained 7106 7099 F: Documentation/devicetree/bindings/net/fsl-fec.txt ··· 8528 8521 8529 8522 IBM Power SRIOV Virtual NIC Device Driver 8530 8523 M: Dany Madden <drt@linux.ibm.com> 8531 - M: Lijun Pan <ljp@linux.ibm.com> 8532 8524 M: Sukadev Bhattiprolu <sukadev@linux.ibm.com> 8533 8525 R: Thomas Falcon <tlfalcon@linux.ibm.com> 8526 + R: Lijun Pan <lijunp213@gmail.com> 8534 8527 L: netdev@vger.kernel.org 8535 8528 S: Supported 8536 8529 F: drivers/net/ethernet/ibm/ibmvnic.*
+1 -1
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 12 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc6 5 + EXTRAVERSION = -rc7 6 6 NAME = Frozen Wasteland 7 7 8 8 # *DOCUMENTATION*
+5 -1
arch/arm64/Kconfig
··· 1406 1406 config AS_HAS_LDAPR 1407 1407 def_bool $(as-instr,.arch_extension rcpc) 1408 1408 1409 + config AS_HAS_LSE_ATOMICS 1410 + def_bool $(as-instr,.arch_extension lse) 1411 + 1409 1412 config ARM64_LSE_ATOMICS 1410 1413 bool 1411 1414 default ARM64_USE_LSE_ATOMICS 1412 - depends on $(as-instr,.arch_extension lse) 1415 + depends on AS_HAS_LSE_ATOMICS 1413 1416 1414 1417 config ARM64_USE_LSE_ATOMICS 1415 1418 bool "Atomic instructions" ··· 1669 1666 default y 1670 1667 depends on ARM64_AS_HAS_MTE && ARM64_TAGGED_ADDR_ABI 1671 1668 depends on AS_HAS_ARMV8_5 1669 + depends on AS_HAS_LSE_ATOMICS 1672 1670 # Required for tag checking in the uaccess routines 1673 1671 depends on ARM64_PAN 1674 1672 select ARCH_USES_HIGH_VMA_FLAGS
+1 -1
arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
··· 1 1 // SPDX-License-Identifier: (GPL-2.0+ OR MIT) 2 2 /* 3 3 * Device Tree file for CZ.NIC Turris Mox Board 4 - * 2019 by Marek Behun <marek.behun@nic.cz> 4 + * 2019 by Marek Behún <kabel@kernel.org> 5 5 */ 6 6 7 7 /dts-v1/;
+4 -4
arch/arm64/include/asm/alternative-macros.h
··· 97 97 .popsection 98 98 .subsection 1 99 99 663: \insn2 100 - 664: .previous 101 - .org . - (664b-663b) + (662b-661b) 100 + 664: .org . - (664b-663b) + (662b-661b) 102 101 .org . - (662b-661b) + (664b-663b) 102 + .previous 103 103 .endif 104 104 .endm 105 105 ··· 169 169 */ 170 170 .macro alternative_endif 171 171 664: 172 + .org . - (664b-663b) + (662b-661b) 173 + .org . - (662b-661b) + (664b-663b) 172 174 .if .Lasm_alt_mode==0 173 175 .previous 174 176 .endif 175 - .org . - (664b-663b) + (662b-661b) 176 - .org . - (662b-661b) + (664b-663b) 177 177 .endm 178 178 179 179 /*
+5 -5
arch/arm64/include/asm/word-at-a-time.h
··· 53 53 */ 54 54 static inline unsigned long load_unaligned_zeropad(const void *addr) 55 55 { 56 - unsigned long ret, offset; 56 + unsigned long ret, tmp; 57 57 58 58 /* Load word from unaligned pointer addr */ 59 59 asm( ··· 61 61 "2:\n" 62 62 " .pushsection .fixup,\"ax\"\n" 63 63 " .align 2\n" 64 - "3: and %1, %2, #0x7\n" 65 - " bic %2, %2, #0x7\n" 66 - " ldr %0, [%2]\n" 64 + "3: bic %1, %2, #0x7\n" 65 + " ldr %0, [%1]\n" 66 + " and %1, %2, #0x7\n" 67 67 " lsl %1, %1, #0x3\n" 68 68 #ifndef __AARCH64EB__ 69 69 " lsr %0, %0, %1\n" ··· 73 73 " b 2b\n" 74 74 " .popsection\n" 75 75 _ASM_EXTABLE(1b, 3b) 76 - : "=&r" (ret), "=&r" (offset) 76 + : "=&r" (ret), "=&r" (tmp) 77 77 : "r" (addr), "Q" (*(unsigned long *)addr)); 78 78 79 79 return ret;
+6 -4
arch/arm64/kernel/entry.S
··· 148 148 .endm 149 149 150 150 /* Check for MTE asynchronous tag check faults */ 151 - .macro check_mte_async_tcf, flgs, tmp 151 + .macro check_mte_async_tcf, tmp, ti_flags 152 152 #ifdef CONFIG_ARM64_MTE 153 + .arch_extension lse 153 154 alternative_if_not ARM64_MTE 154 155 b 1f 155 156 alternative_else_nop_endif 156 157 mrs_s \tmp, SYS_TFSRE0_EL1 157 158 tbz \tmp, #SYS_TFSR_EL1_TF0_SHIFT, 1f 158 159 /* Asynchronous TCF occurred for TTBR0 access, set the TI flag */ 159 - orr \flgs, \flgs, #_TIF_MTE_ASYNC_FAULT 160 - str \flgs, [tsk, #TSK_TI_FLAGS] 160 + mov \tmp, #_TIF_MTE_ASYNC_FAULT 161 + add \ti_flags, tsk, #TSK_TI_FLAGS 162 + stset \tmp, [\ti_flags] 161 163 msr_s SYS_TFSRE0_EL1, xzr 162 164 1: 163 165 #endif ··· 246 244 disable_step_tsk x19, x20 247 245 248 246 /* Check for asynchronous tag check faults in user space */ 249 - check_mte_async_tcf x19, x22 247 + check_mte_async_tcf x22, x23 250 248 apply_ssbd 1, x22, x23 251 249 252 250 ptrauth_keys_install_kernel tsk, x20, x22, x23
+4 -2
arch/arm64/kernel/probes/kprobes.c
··· 267 267 if (!instruction_pointer(regs)) 268 268 BUG(); 269 269 270 - if (kcb->kprobe_status == KPROBE_REENTER) 270 + if (kcb->kprobe_status == KPROBE_REENTER) { 271 271 restore_previous_kprobe(kcb); 272 - else 272 + } else { 273 + kprobes_restore_local_irqflag(kcb, regs); 273 274 reset_current_kprobe(); 275 + } 274 276 275 277 break; 276 278 case KPROBE_HIT_ACTIVE:
+1 -1
arch/arm64/kernel/sleep.S
··· 134 134 */ 135 135 bl cpu_do_resume 136 136 137 - #if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK 137 + #if defined(CONFIG_KASAN) && defined(CONFIG_KASAN_STACK) 138 138 mov x0, sp 139 139 bl kasan_unpoison_task_stack_below 140 140 #endif
+1 -1
arch/csky/Kconfig
··· 314 314 int "Maximum zone order" 315 315 default "11" 316 316 317 - config RAM_BASE 317 + config DRAM_BASE 318 318 hex "DRAM start addr (the same with memory-section in dts)" 319 319 default 0x0 320 320
+1 -1
arch/csky/include/asm/page.h
··· 28 28 #define SSEG_SIZE 0x20000000 29 29 #define LOWMEM_LIMIT (SSEG_SIZE * 2) 30 30 31 - #define PHYS_OFFSET_OFFSET (CONFIG_RAM_BASE & (SSEG_SIZE - 1)) 31 + #define PHYS_OFFSET_OFFSET (CONFIG_DRAM_BASE & (SSEG_SIZE - 1)) 32 32 33 33 #ifndef __ASSEMBLY__ 34 34
-2
arch/ia64/configs/generic_defconfig
··· 55 55 CONFIG_SCSI_FC_ATTRS=y 56 56 CONFIG_SCSI_SYM53C8XX_2=y 57 57 CONFIG_SCSI_QLOGIC_1280=y 58 - CONFIG_ATA=y 59 - CONFIG_ATA_PIIX=y 60 58 CONFIG_SATA_VITESSE=y 61 59 CONFIG_MD=y 62 60 CONFIG_BLK_DEV_MD=m
+1 -7
arch/ia64/include/asm/ptrace.h
··· 54 54 55 55 static inline unsigned long user_stack_pointer(struct pt_regs *regs) 56 56 { 57 - /* FIXME: should this be bspstore + nr_dirty regs? */ 58 - return regs->ar_bspstore; 57 + return regs->r12; 59 58 } 60 59 61 60 static inline int is_syscall_success(struct pt_regs *regs) ··· 78 79 unsigned long __ip = instruction_pointer(regs); \ 79 80 (__ip & ~3UL) + ((__ip & 3UL) << 2); \ 80 81 }) 81 - /* 82 - * Why not default? Because user_stack_pointer() on ia64 gives register 83 - * stack backing store instead... 84 - */ 85 - #define current_user_stack_pointer() (current_pt_regs()->r12) 86 82 87 83 /* given a pointer to a task_struct, return the user's pt_regs */ 88 84 # define task_pt_regs(t) (((struct pt_regs *) ((char *) (t) + IA64_STK_OFFSET)) - 1)
+3 -3
arch/ia64/mm/discontig.c
··· 95 95 * acpi_boot_init() (which builds the node_to_cpu_mask array) hasn't been 96 96 * called yet. Note that node 0 will also count all non-existent cpus. 97 97 */ 98 - static int __meminit early_nr_cpus_node(int node) 98 + static int early_nr_cpus_node(int node) 99 99 { 100 100 int cpu, n = 0; 101 101 ··· 110 110 * compute_pernodesize - compute size of pernode data 111 111 * @node: the node id. 112 112 */ 113 - static unsigned long __meminit compute_pernodesize(int node) 113 + static unsigned long compute_pernodesize(int node) 114 114 { 115 115 unsigned long pernodesize = 0, cpus; 116 116 ··· 367 367 } 368 368 } 369 369 370 - static void __meminit scatter_node_data(void) 370 + static void scatter_node_data(void) 371 371 { 372 372 pg_data_t **dst; 373 373 int node;
+1 -1
arch/m68k/include/asm/page_mm.h
··· 167 167 ((__p) - pgdat->node_mem_map) + pgdat->node_start_pfn; \ 168 168 }) 169 169 #else 170 - #define ARCH_PFN_OFFSET (m68k_memory[0].addr) 170 + #define ARCH_PFN_OFFSET (m68k_memory[0].addr >> PAGE_SHIFT) 171 171 #include <asm-generic/memory_model.h> 172 172 #endif 173 173
+1 -1
arch/nds32/mm/cacheflush.c
··· 238 238 { 239 239 struct address_space *mapping; 240 240 241 - mapping = page_mapping(page); 241 + mapping = page_mapping_file(page); 242 242 if (mapping && !mapping_mapped(mapping)) 243 243 set_bit(PG_dcache_dirty, &page->flags); 244 244 else {
+4
arch/powerpc/kernel/Makefile
··· 191 191 targets += prom_init_check 192 192 193 193 clean-files := vmlinux.lds 194 + 195 + # Force dependency (incbin is bad) 196 + $(obj)/vdso32_wrapper.o : $(obj)/vdso32/vdso32.so.dbg 197 + $(obj)/vdso64_wrapper.o : $(obj)/vdso64/vdso64.so.dbg
+2 -2
arch/powerpc/kernel/ptrace/Makefile
··· 6 6 CFLAGS_ptrace-view.o += -DUTS_MACHINE='"$(UTS_MACHINE)"' 7 7 8 8 obj-y += ptrace.o ptrace-view.o 9 - obj-$(CONFIG_PPC_FPU_REGS) += ptrace-fpu.o 9 + obj-y += ptrace-fpu.o 10 10 obj-$(CONFIG_COMPAT) += ptrace32.o 11 11 obj-$(CONFIG_VSX) += ptrace-vsx.o 12 12 ifneq ($(CONFIG_VSX),y) 13 - obj-$(CONFIG_PPC_FPU_REGS) += ptrace-novsx.o 13 + obj-y += ptrace-novsx.o 14 14 endif 15 15 obj-$(CONFIG_ALTIVEC) += ptrace-altivec.o 16 16 obj-$(CONFIG_SPE) += ptrace-spe.o
-14
arch/powerpc/kernel/ptrace/ptrace-decl.h
··· 165 165 extern const struct user_regset_view user_ppc_native_view; 166 166 167 167 /* ptrace-fpu */ 168 - #ifdef CONFIG_PPC_FPU_REGS 169 168 int ptrace_get_fpr(struct task_struct *child, int index, unsigned long *data); 170 169 int ptrace_put_fpr(struct task_struct *child, int index, unsigned long data); 171 - #else 172 - static inline int 173 - ptrace_get_fpr(struct task_struct *child, int index, unsigned long *data) 174 - { 175 - return -EIO; 176 - } 177 - 178 - static inline int 179 - ptrace_put_fpr(struct task_struct *child, int index, unsigned long data) 180 - { 181 - return -EIO; 182 - } 183 - #endif 184 170 185 171 /* ptrace-(no)adv */ 186 172 void ppc_gethwdinfo(struct ppc_debug_info *dbginfo);
+10
arch/powerpc/kernel/ptrace/ptrace-fpu.c
··· 8 8 9 9 int ptrace_get_fpr(struct task_struct *child, int index, unsigned long *data) 10 10 { 11 + #ifdef CONFIG_PPC_FPU_REGS 11 12 unsigned int fpidx = index - PT_FPR0; 13 + #endif 12 14 13 15 if (index > PT_FPSCR) 14 16 return -EIO; 15 17 18 + #ifdef CONFIG_PPC_FPU_REGS 16 19 flush_fp_to_thread(child); 17 20 if (fpidx < (PT_FPSCR - PT_FPR0)) 18 21 memcpy(data, &child->thread.TS_FPR(fpidx), sizeof(long)); 19 22 else 20 23 *data = child->thread.fp_state.fpscr; 24 + #else 25 + *data = 0; 26 + #endif 21 27 22 28 return 0; 23 29 } 24 30 25 31 int ptrace_put_fpr(struct task_struct *child, int index, unsigned long data) 26 32 { 33 + #ifdef CONFIG_PPC_FPU_REGS 27 34 unsigned int fpidx = index - PT_FPR0; 35 + #endif 28 36 29 37 if (index > PT_FPSCR) 30 38 return -EIO; 31 39 40 + #ifdef CONFIG_PPC_FPU_REGS 32 41 flush_fp_to_thread(child); 33 42 if (fpidx < (PT_FPSCR - PT_FPR0)) 34 43 memcpy(&child->thread.TS_FPR(fpidx), &data, sizeof(long)); 35 44 else 36 45 child->thread.fp_state.fpscr = data; 46 + #endif 37 47 38 48 return 0; 39 49 }
+8
arch/powerpc/kernel/ptrace/ptrace-novsx.c
··· 21 21 int fpr_get(struct task_struct *target, const struct user_regset *regset, 22 22 struct membuf to) 23 23 { 24 + #ifdef CONFIG_PPC_FPU_REGS 24 25 BUILD_BUG_ON(offsetof(struct thread_fp_state, fpscr) != 25 26 offsetof(struct thread_fp_state, fpr[32])); 26 27 27 28 flush_fp_to_thread(target); 28 29 29 30 return membuf_write(&to, &target->thread.fp_state, 33 * sizeof(u64)); 31 + #else 32 + return membuf_write(&to, &empty_zero_page, 33 * sizeof(u64)); 33 + #endif 30 34 } 31 35 32 36 /* ··· 50 46 unsigned int pos, unsigned int count, 51 47 const void *kbuf, const void __user *ubuf) 52 48 { 49 + #ifdef CONFIG_PPC_FPU_REGS 53 50 BUILD_BUG_ON(offsetof(struct thread_fp_state, fpscr) != 54 51 offsetof(struct thread_fp_state, fpr[32])); 55 52 ··· 58 53 59 54 return user_regset_copyin(&pos, &count, &kbuf, &ubuf, 60 55 &target->thread.fp_state, 0, -1); 56 + #else 57 + return 0; 58 + #endif 61 59 }
-2
arch/powerpc/kernel/ptrace/ptrace-view.c
··· 522 522 .size = sizeof(long), .align = sizeof(long), 523 523 .regset_get = gpr_get, .set = gpr_set 524 524 }, 525 - #ifdef CONFIG_PPC_FPU_REGS 526 525 [REGSET_FPR] = { 527 526 .core_note_type = NT_PRFPREG, .n = ELF_NFPREG, 528 527 .size = sizeof(double), .align = sizeof(double), 529 528 .regset_get = fpr_get, .set = fpr_set 530 529 }, 531 - #endif 532 530 #ifdef CONFIG_ALTIVEC 533 531 [REGSET_VMX] = { 534 532 .core_note_type = NT_PPC_VMX, .n = 34,
+8 -12
arch/powerpc/kernel/signal_32.c
··· 775 775 else 776 776 prepare_save_user_regs(1); 777 777 778 - if (!user_write_access_begin(frame, sizeof(*frame))) 778 + if (!user_access_begin(frame, sizeof(*frame))) 779 779 goto badframe; 780 780 781 781 /* Put the siginfo & fill in most of the ucontext */ ··· 809 809 unsafe_put_user(PPC_INST_ADDI + __NR_rt_sigreturn, &mctx->mc_pad[0], 810 810 failed); 811 811 unsafe_put_user(PPC_INST_SC, &mctx->mc_pad[1], failed); 812 + asm("dcbst %y0; sync; icbi %y0; sync" :: "Z" (mctx->mc_pad[0])); 812 813 } 813 814 unsafe_put_sigset_t(&frame->uc.uc_sigmask, oldset, failed); 814 815 815 - user_write_access_end(); 816 + user_access_end(); 816 817 817 818 if (copy_siginfo_to_user(&frame->info, &ksig->info)) 818 819 goto badframe; 819 - 820 - if (tramp == (unsigned long)mctx->mc_pad) 821 - flush_icache_range(tramp, tramp + 2 * sizeof(unsigned long)); 822 820 823 821 regs->link = tramp; 824 822 ··· 842 844 return 0; 843 845 844 846 failed: 845 - user_write_access_end(); 847 + user_access_end(); 846 848 847 849 badframe: 848 850 signal_fault(tsk, regs, "handle_rt_signal32", frame); ··· 877 879 else 878 880 prepare_save_user_regs(1); 879 881 880 - if (!user_write_access_begin(frame, sizeof(*frame))) 882 + if (!user_access_begin(frame, sizeof(*frame))) 881 883 goto badframe; 882 884 sc = (struct sigcontext __user *) &frame->sctx; 883 885 ··· 906 908 /* Set up the sigreturn trampoline: li r0,sigret; sc */ 907 909 unsafe_put_user(PPC_INST_ADDI + __NR_sigreturn, &mctx->mc_pad[0], failed); 908 910 unsafe_put_user(PPC_INST_SC, &mctx->mc_pad[1], failed); 911 + asm("dcbst %y0; sync; icbi %y0; sync" :: "Z" (mctx->mc_pad[0])); 909 912 } 910 - user_access_end(); 911 - 912 - if (tramp == (unsigned long)mctx->mc_pad) 913 - flush_icache_range(tramp, tramp + 2 * sizeof(unsigned long)); 913 + user_access_end(); 914 914 915 915 regs->link = tramp; 916 916 ··· 931 935 return 0; 932 936 933 937 failed: 934 - user_write_access_end(); 938 + user_access_end(); 935 939 936 940 badframe: 937 941 signal_fault(tsk, regs, "handle_signal32", frame);
+1 -1
arch/riscv/Kconfig
··· 153 153 config ARCH_SPARSEMEM_ENABLE 154 154 def_bool y 155 155 depends on MMU 156 - select SPARSEMEM_STATIC if 32BIT && SPARSMEM 156 + select SPARSEMEM_STATIC if 32BIT && SPARSEMEM 157 157 select SPARSEMEM_VMEMMAP_ENABLE if 64BIT 158 158 159 159 config ARCH_SELECT_MEMORY_MODEL
+3
arch/riscv/kernel/entry.S
··· 130 130 */ 131 131 andi t0, s1, SR_PIE 132 132 beqz t0, 1f 133 + /* kprobes, entered via ebreak, must have interrupts disabled. */ 134 + li t0, EXC_BREAKPOINT 135 + beq s4, t0, 1f 133 136 #ifdef CONFIG_TRACE_IRQFLAGS 134 137 call trace_hardirqs_on 135 138 #endif
+10 -1
arch/riscv/kernel/probes/ftrace.c
··· 9 9 struct kprobe *p; 10 10 struct pt_regs *regs; 11 11 struct kprobe_ctlblk *kcb; 12 + int bit; 12 13 14 + bit = ftrace_test_recursion_trylock(ip, parent_ip); 15 + if (bit < 0) 16 + return; 17 + 18 + preempt_disable_notrace(); 13 19 p = get_kprobe((kprobe_opcode_t *)ip); 14 20 if (unlikely(!p) || kprobe_disabled(p)) 15 - return; 21 + goto out; 16 22 17 23 regs = ftrace_get_regs(fregs); 18 24 kcb = get_kprobe_ctlblk(); ··· 51 45 */ 52 46 __this_cpu_write(current_kprobe, NULL); 53 47 } 48 + out: 49 + preempt_enable_notrace(); 50 + ftrace_test_recursion_unlock(bit); 54 51 } 55 52 NOKPROBE_SYMBOL(kprobe_ftrace_handler); 56 53
+1
arch/riscv/kernel/traps.c
··· 178 178 else 179 179 die(regs, "Kernel BUG"); 180 180 } 181 + NOKPROBE_SYMBOL(do_trap_break); 181 182 182 183 #ifdef CONFIG_GENERIC_BUG 183 184 int is_valid_bugaddr(unsigned long pc)
+1
arch/riscv/mm/fault.c
··· 328 328 } 329 329 return; 330 330 } 331 + NOKPROBE_SYMBOL(do_page_fault);
+3 -4
arch/s390/kernel/entry.S
··· 401 401 brasl %r14,.Lcleanup_sie_int 402 402 #endif 403 403 0: CHECK_STACK __LC_SAVE_AREA_ASYNC 404 - lgr %r11,%r15 405 404 aghi %r15,-(STACK_FRAME_OVERHEAD + __PT_SIZE) 406 - stg %r11,__SF_BACKCHAIN(%r15) 407 405 j 2f 408 406 1: BPENTER __TI_flags(%r12),_TIF_ISOLATE_BP 409 407 lctlg %c1,%c1,__LC_KERNEL_ASCE 410 408 lg %r15,__LC_KERNEL_STACK 411 - xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15) 412 - 2: la %r11,STACK_FRAME_OVERHEAD(%r15) 409 + 2: xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15) 410 + la %r11,STACK_FRAME_OVERHEAD(%r15) 413 411 stmg %r0,%r7,__PT_R0(%r11) 414 412 # clear user controlled registers to prevent speculative use 415 413 xgr %r0,%r0 ··· 443 445 * Load idle PSW. 444 446 */ 445 447 ENTRY(psw_idle) 448 + stg %r14,(__SF_GPRS+8*8)(%r15) 446 449 stg %r3,__SF_EMPTY(%r15) 447 450 larl %r1,psw_idle_exit 448 451 stg %r1,__SF_EMPTY+8(%r15)
+6 -1
arch/x86/include/asm/kfence.h
··· 56 56 else 57 57 set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT)); 58 58 59 - /* Flush this CPU's TLB. */ 59 + /* 60 + * Flush this CPU's TLB, assuming whoever did the allocation/free is 61 + * likely to continue running on this CPU. 62 + */ 63 + preempt_disable(); 60 64 flush_tlb_one_kernel(addr); 65 + preempt_enable(); 61 66 return true; 62 67 } 63 68
+1 -1
arch/x86/kernel/acpi/wakeup_64.S
··· 115 115 movq pt_regs_r14(%rax), %r14 116 116 movq pt_regs_r15(%rax), %r15 117 117 118 - #if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK 118 + #if defined(CONFIG_KASAN) && defined(CONFIG_KASAN_STACK) 119 119 /* 120 120 * The suspend path may have poisoned some areas deeper in the stack, 121 121 * which we now need to unpoison.
+2 -3
arch/x86/kernel/setup.c
··· 1045 1045 1046 1046 cleanup_highmap(); 1047 1047 1048 - /* Look for ACPI tables and reserve memory occupied by them. */ 1049 - acpi_boot_table_init(); 1050 - 1051 1048 memblock_set_current_limit(ISA_END_ADDRESS); 1052 1049 e820__memblock_setup(); 1053 1050 ··· 1129 1132 reserve_initrd(); 1130 1133 1131 1134 acpi_table_upgrade(); 1135 + /* Look for ACPI tables and reserve memory occupied by them. */ 1136 + acpi_boot_table_init(); 1132 1137 1133 1138 vsmp_init(); 1134 1139
+2 -2
arch/x86/kernel/traps.c
··· 556 556 tsk->thread.trap_nr = X86_TRAP_GP; 557 557 558 558 if (fixup_vdso_exception(regs, X86_TRAP_GP, error_code, 0)) 559 - return; 559 + goto exit; 560 560 561 561 show_signal(tsk, SIGSEGV, "", desc, regs, error_code); 562 562 force_sig(SIGSEGV); ··· 1057 1057 goto exit; 1058 1058 1059 1059 if (fixup_vdso_exception(regs, trapnr, 0, 0)) 1060 - return; 1060 + goto exit; 1061 1061 1062 1062 force_sig_fault(SIGFPE, si_code, 1063 1063 (void __user *)uprobe_get_trap_addr(regs));
+5 -5
arch/x86/kvm/vmx/vmx.c
··· 6027 6027 exit_reason.basic != EXIT_REASON_PML_FULL && 6028 6028 exit_reason.basic != EXIT_REASON_APIC_ACCESS && 6029 6029 exit_reason.basic != EXIT_REASON_TASK_SWITCH)) { 6030 + int ndata = 3; 6031 + 6030 6032 vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR; 6031 6033 vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_DELIVERY_EV; 6032 - vcpu->run->internal.ndata = 3; 6033 6034 vcpu->run->internal.data[0] = vectoring_info; 6034 6035 vcpu->run->internal.data[1] = exit_reason.full; 6035 6036 vcpu->run->internal.data[2] = vcpu->arch.exit_qualification; 6036 6037 if (exit_reason.basic == EXIT_REASON_EPT_MISCONFIG) { 6037 - vcpu->run->internal.ndata++; 6038 - vcpu->run->internal.data[3] = 6038 + vcpu->run->internal.data[ndata++] = 6039 6039 vmcs_read64(GUEST_PHYSICAL_ADDRESS); 6040 6040 } 6041 - vcpu->run->internal.data[vcpu->run->internal.ndata++] = 6042 - vcpu->arch.last_vmentry_cpu; 6041 + vcpu->run->internal.data[ndata++] = vcpu->arch.last_vmentry_cpu; 6042 + vcpu->run->internal.ndata = ndata; 6043 6043 return 0; 6044 6044 } 6045 6045
+5 -3
drivers/base/dd.c
··· 292 292 293 293 static void deferred_probe_timeout_work_func(struct work_struct *work) 294 294 { 295 - struct device_private *private, *p; 295 + struct device_private *p; 296 296 297 297 driver_deferred_probe_timeout = 0; 298 298 driver_deferred_probe_trigger(); 299 299 flush_work(&deferred_probe_work); 300 300 301 - list_for_each_entry_safe(private, p, &deferred_probe_pending_list, deferred_probe) 302 - dev_info(private->device, "deferred probe pending\n"); 301 + mutex_lock(&deferred_probe_mutex); 302 + list_for_each_entry(p, &deferred_probe_pending_list, deferred_probe) 303 + dev_info(p->device, "deferred probe pending\n"); 304 + mutex_unlock(&deferred_probe_mutex); 303 305 wake_up_all(&probe_timeout_waitqueue); 304 306 } 305 307 static DECLARE_DELAYED_WORK(deferred_probe_timeout_work, deferred_probe_timeout_work_func);
+2 -2
drivers/bus/moxtet.c
··· 2 2 /* 3 3 * Turris Mox module configuration bus driver 4 4 * 5 - * Copyright (C) 2019 Marek Behun <marek.behun@nic.cz> 5 + * Copyright (C) 2019 Marek Behún <kabel@kernel.org> 6 6 */ 7 7 8 8 #include <dt-bindings/bus/moxtet.h> ··· 879 879 } 880 880 module_exit(moxtet_exit); 881 881 882 - MODULE_AUTHOR("Marek Behun <marek.behun@nic.cz>"); 882 + MODULE_AUTHOR("Marek Behun <kabel@kernel.org>"); 883 883 MODULE_DESCRIPTION("CZ.NIC's Turris Mox module configuration bus"); 884 884 MODULE_LICENSE("GPL v2");
+8 -1
drivers/clk/clk-fixed-factor.c
··· 66 66 67 67 static void devm_clk_hw_register_fixed_factor_release(struct device *dev, void *res) 68 68 { 69 - clk_hw_unregister_fixed_factor(&((struct clk_fixed_factor *)res)->hw); 69 + struct clk_fixed_factor *fix = res; 70 + 71 + /* 72 + * We can not use clk_hw_unregister_fixed_factor, since it will kfree() 73 + * the hw, resulting in double free. Just unregister the hw and let 74 + * devres code kfree() it. 75 + */ 76 + clk_hw_unregister(&fix->hw); 70 77 } 71 78 72 79 static struct clk_hw *
+22 -27
drivers/clk/clk.c
··· 4357 4357 /* search the list of notifiers for this clk */ 4358 4358 list_for_each_entry(cn, &clk_notifier_list, node) 4359 4359 if (cn->clk == clk) 4360 - break; 4360 + goto found; 4361 4361 4362 4362 /* if clk wasn't in the notifier list, allocate new clk_notifier */ 4363 - if (cn->clk != clk) { 4364 - cn = kzalloc(sizeof(*cn), GFP_KERNEL); 4365 - if (!cn) 4366 - goto out; 4363 + cn = kzalloc(sizeof(*cn), GFP_KERNEL); 4364 + if (!cn) 4365 + goto out; 4367 4366 4368 - cn->clk = clk; 4369 - srcu_init_notifier_head(&cn->notifier_head); 4367 + cn->clk = clk; 4368 + srcu_init_notifier_head(&cn->notifier_head); 4370 4369 4371 - list_add(&cn->node, &clk_notifier_list); 4372 - } 4370 + list_add(&cn->node, &clk_notifier_list); 4373 4371 4372 + found: 4374 4373 ret = srcu_notifier_chain_register(&cn->notifier_head, nb); 4375 4374 4376 4375 clk->core->notifier_count++; ··· 4394 4395 */ 4395 4396 int clk_notifier_unregister(struct clk *clk, struct notifier_block *nb) 4396 4397 { 4397 - struct clk_notifier *cn = NULL; 4398 - int ret = -EINVAL; 4398 + struct clk_notifier *cn; 4399 + int ret = -ENOENT; 4399 4400 4400 4401 if (!clk || !nb) 4401 4402 return -EINVAL; 4402 4403 4403 4404 clk_prepare_lock(); 4404 4405 4405 4406 list_for_each_entry(cn, &clk_notifier_list, node) { 4406 4407 if (cn->clk == clk) { 4407 4408 ret = srcu_notifier_chain_unregister(&cn->notifier_head, nb); 4408 4409 4409 4410 clk->core->notifier_count--; 4410 4411 4411 4412 /* XXX the notifier code should handle this better */ 4412 4413 if (!cn->notifier_head.head) { 4413 4414 srcu_cleanup_notifier_head(&cn->notifier_head); 4414 4415 list_del(&cn->node); 4415 4416 kfree(cn); 4416 4417 } 4407 4418 break; 4408 - 4409 - if (cn->clk == clk) { 4410 - ret = srcu_notifier_chain_unregister(&cn->notifier_head, nb); 4411 - 4412 - clk->core->notifier_count--; 4413 - 4414 - /* XXX the notifier code should handle this better */ 4415 - if (!cn->notifier_head.head) { 4416 - srcu_cleanup_notifier_head(&cn->notifier_head); 4417 - list_del(&cn->node); 4418 - kfree(cn); 4419 4419 } 4420 - 4421 - } else { 4422 - ret = -ENOENT; 4423 4420 } 4424 4421 4425 4422 clk_prepare_unlock();
+25 -25
drivers/clk/qcom/camcc-sc7180.c
··· 304 304 .name = "cam_cc_bps_clk_src", 305 305 .parent_data = cam_cc_parent_data_2, 306 306 .num_parents = 5, 307 - .ops = &clk_rcg2_ops, 307 + .ops = &clk_rcg2_shared_ops, 308 308 }, 309 309 }; 310 310 ··· 325 325 .name = "cam_cc_cci_0_clk_src", 326 326 .parent_data = cam_cc_parent_data_5, 327 327 .num_parents = 3, 328 - .ops = &clk_rcg2_ops, 328 + .ops = &clk_rcg2_shared_ops, 329 329 }, 330 330 }; 331 331 ··· 339 339 .name = "cam_cc_cci_1_clk_src", 340 340 .parent_data = cam_cc_parent_data_5, 341 341 .num_parents = 3, 342 - .ops = &clk_rcg2_ops, 342 + .ops = &clk_rcg2_shared_ops, 343 343 }, 344 344 }; 345 345 ··· 360 360 .name = "cam_cc_cphy_rx_clk_src", 361 361 .parent_data = cam_cc_parent_data_3, 362 362 .num_parents = 6, 363 - .ops = &clk_rcg2_ops, 363 + .ops = &clk_rcg2_shared_ops, 364 364 }, 365 365 }; 366 366 ··· 379 379 .name = "cam_cc_csi0phytimer_clk_src", 380 380 .parent_data = cam_cc_parent_data_0, 381 381 .num_parents = 4, 382 - .ops = &clk_rcg2_ops, 382 + .ops = &clk_rcg2_shared_ops, 383 383 }, 384 384 }; 385 385 ··· 393 393 .name = "cam_cc_csi1phytimer_clk_src", 394 394 .parent_data = cam_cc_parent_data_0, 395 395 .num_parents = 4, 396 - .ops = &clk_rcg2_ops, 396 + .ops = &clk_rcg2_shared_ops, 397 397 }, 398 398 }; 399 399 ··· 407 407 .name = "cam_cc_csi2phytimer_clk_src", 408 408 .parent_data = cam_cc_parent_data_0, 409 409 .num_parents = 4, 410 - .ops = &clk_rcg2_ops, 410 + .ops = &clk_rcg2_shared_ops, 411 411 }, 412 412 }; 413 413 ··· 421 421 .name = "cam_cc_csi3phytimer_clk_src", 422 422 .parent_data = cam_cc_parent_data_0, 423 423 .num_parents = 4, 424 - .ops = &clk_rcg2_ops, 424 + .ops = &clk_rcg2_shared_ops, 425 425 }, 426 426 }; 427 427 ··· 443 443 .name = "cam_cc_fast_ahb_clk_src", 444 444 .parent_data = cam_cc_parent_data_0, 445 445 .num_parents = 4, 446 - .ops = &clk_rcg2_ops, 446 + .ops = &clk_rcg2_shared_ops, 447 447 }, 448 448 }; 449 449 ··· 466 466 .name = "cam_cc_icp_clk_src", 467 467 .parent_data = cam_cc_parent_data_2, 468 468 
.num_parents = 5, 469 - .ops = &clk_rcg2_ops, 469 + .ops = &clk_rcg2_shared_ops, 470 470 }, 471 471 }; 472 472 ··· 488 488 .name = "cam_cc_ife_0_clk_src", 489 489 .parent_data = cam_cc_parent_data_4, 490 490 .num_parents = 4, 491 - .ops = &clk_rcg2_ops, 491 + .ops = &clk_rcg2_shared_ops, 492 492 }, 493 493 }; 494 494 ··· 510 510 .name = "cam_cc_ife_0_csid_clk_src", 511 511 .parent_data = cam_cc_parent_data_3, 512 512 .num_parents = 6, 513 - .ops = &clk_rcg2_ops, 513 + .ops = &clk_rcg2_shared_ops, 514 514 }, 515 515 }; 516 516 ··· 524 524 .name = "cam_cc_ife_1_clk_src", 525 525 .parent_data = cam_cc_parent_data_4, 526 526 .num_parents = 4, 527 - .ops = &clk_rcg2_ops, 527 + .ops = &clk_rcg2_shared_ops, 528 528 }, 529 529 }; 530 530 ··· 538 538 .name = "cam_cc_ife_1_csid_clk_src", 539 539 .parent_data = cam_cc_parent_data_3, 540 540 .num_parents = 6, 541 - .ops = &clk_rcg2_ops, 541 + .ops = &clk_rcg2_shared_ops, 542 542 }, 543 543 }; 544 544 ··· 553 553 .parent_data = cam_cc_parent_data_4, 554 554 .num_parents = 4, 555 555 .flags = CLK_SET_RATE_PARENT, 556 - .ops = &clk_rcg2_ops, 556 + .ops = &clk_rcg2_shared_ops, 557 557 }, 558 558 }; 559 559 ··· 567 567 .name = "cam_cc_ife_lite_csid_clk_src", 568 568 .parent_data = cam_cc_parent_data_3, 569 569 .num_parents = 6, 570 - .ops = &clk_rcg2_ops, 570 + .ops = &clk_rcg2_shared_ops, 571 571 }, 572 572 }; 573 573 ··· 590 590 .name = "cam_cc_ipe_0_clk_src", 591 591 .parent_data = cam_cc_parent_data_2, 592 592 .num_parents = 5, 593 - .ops = &clk_rcg2_ops, 593 + .ops = &clk_rcg2_shared_ops, 594 594 }, 595 595 }; 596 596 ··· 613 613 .name = "cam_cc_jpeg_clk_src", 614 614 .parent_data = cam_cc_parent_data_2, 615 615 .num_parents = 5, 616 - .ops = &clk_rcg2_ops, 616 + .ops = &clk_rcg2_shared_ops, 617 617 }, 618 618 }; 619 619 ··· 635 635 .name = "cam_cc_lrme_clk_src", 636 636 .parent_data = cam_cc_parent_data_6, 637 637 .num_parents = 5, 638 - .ops = &clk_rcg2_ops, 638 + .ops = &clk_rcg2_shared_ops, 639 639 }, 640 640 }; 641 641 
··· 656 656 .name = "cam_cc_mclk0_clk_src", 657 657 .parent_data = cam_cc_parent_data_1, 658 658 .num_parents = 3, 659 - .ops = &clk_rcg2_ops, 659 + .ops = &clk_rcg2_shared_ops, 660 660 }, 661 661 }; 662 662 ··· 670 670 .name = "cam_cc_mclk1_clk_src", 671 671 .parent_data = cam_cc_parent_data_1, 672 672 .num_parents = 3, 673 - .ops = &clk_rcg2_ops, 673 + .ops = &clk_rcg2_shared_ops, 674 674 }, 675 675 }; 676 676 ··· 684 684 .name = "cam_cc_mclk2_clk_src", 685 685 .parent_data = cam_cc_parent_data_1, 686 686 .num_parents = 3, 687 - .ops = &clk_rcg2_ops, 687 + .ops = &clk_rcg2_shared_ops, 688 688 }, 689 689 }; 690 690 ··· 698 698 .name = "cam_cc_mclk3_clk_src", 699 699 .parent_data = cam_cc_parent_data_1, 700 700 .num_parents = 3, 701 - .ops = &clk_rcg2_ops, 701 + .ops = &clk_rcg2_shared_ops, 702 702 }, 703 703 }; 704 704 ··· 712 712 .name = "cam_cc_mclk4_clk_src", 713 713 .parent_data = cam_cc_parent_data_1, 714 714 .num_parents = 3, 715 - .ops = &clk_rcg2_ops, 715 + .ops = &clk_rcg2_shared_ops, 716 716 }, 717 717 }; 718 718 ··· 732 732 .parent_data = cam_cc_parent_data_0, 733 733 .num_parents = 4, 734 734 .flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE, 735 - .ops = &clk_rcg2_ops, 735 + .ops = &clk_rcg2_shared_ops, 736 736 }, 737 737 }; 738 738
+1 -1
drivers/clk/socfpga/clk-gate.c
···
99 99 		val = readl(socfpgaclk->div_reg) >> socfpgaclk->shift;
100 100 		val &= GENMASK(socfpgaclk->width - 1, 0);
101 101 		/* Check for GPIO_DB_CLK by its offset */
102 -		if ((int) socfpgaclk->div_reg & SOCFPGA_GPIO_DB_CLK_OFFSET)
102 +		if ((uintptr_t) socfpgaclk->div_reg & SOCFPGA_GPIO_DB_CLK_OFFSET)
103 103 			div = val + 1;
104 104 		else
105 105 			div = (1 << val);
+91 -65
drivers/cxl/mem.c
··· 4 4 #include <linux/security.h> 5 5 #include <linux/debugfs.h> 6 6 #include <linux/module.h> 7 + #include <linux/sizes.h> 7 8 #include <linux/mutex.h> 8 9 #include <linux/cdev.h> 9 10 #include <linux/idr.h> ··· 97 96 * @dev: driver core device object 98 97 * @cdev: char dev core object for ioctl operations 99 98 * @cxlm: pointer to the parent device driver data 100 - * @ops_active: active user of @cxlm in ops handlers 101 - * @ops_dead: completion when all @cxlm ops users have exited 102 99 * @id: id number of this memdev instance. 103 100 */ 104 101 struct cxl_memdev { 105 102 struct device dev; 106 103 struct cdev cdev; 107 104 struct cxl_mem *cxlm; 108 - struct percpu_ref ops_active; 109 - struct completion ops_dead; 110 105 int id; 111 106 }; 112 107 113 108 static int cxl_mem_major; 114 109 static DEFINE_IDA(cxl_memdev_ida); 110 + static DECLARE_RWSEM(cxl_memdev_rwsem); 115 111 static struct dentry *cxl_debugfs; 116 112 static bool cxl_raw_allow_all; 117 113 ··· 167 169 * table will be validated against the user's input. For example, if size_in is 168 170 * 0, and the user passed in 1, it is an error. 
169 171 */ 170 - static struct cxl_mem_command mem_commands[] = { 172 + static struct cxl_mem_command mem_commands[CXL_MEM_COMMAND_ID_MAX] = { 171 173 CXL_CMD(IDENTIFY, 0, 0x43, CXL_CMD_FLAG_FORCE_ENABLE), 172 174 #ifdef CONFIG_CXL_MEM_RAW_COMMANDS 173 175 CXL_CMD(RAW, ~0, ~0, 0), ··· 774 776 static long cxl_memdev_ioctl(struct file *file, unsigned int cmd, 775 777 unsigned long arg) 776 778 { 777 - struct cxl_memdev *cxlmd; 778 - struct inode *inode; 779 - int rc = -ENOTTY; 779 + struct cxl_memdev *cxlmd = file->private_data; 780 + int rc = -ENXIO; 780 781 781 - inode = file_inode(file); 782 - cxlmd = container_of(inode->i_cdev, typeof(*cxlmd), cdev); 783 - 784 - if (!percpu_ref_tryget_live(&cxlmd->ops_active)) 785 - return -ENXIO; 786 - 787 - rc = __cxl_memdev_ioctl(cxlmd, cmd, arg); 788 - 789 - percpu_ref_put(&cxlmd->ops_active); 782 + down_read(&cxl_memdev_rwsem); 783 + if (cxlmd->cxlm) 784 + rc = __cxl_memdev_ioctl(cxlmd, cmd, arg); 785 + up_read(&cxl_memdev_rwsem); 790 786 791 787 return rc; 788 + } 789 + 790 + static int cxl_memdev_open(struct inode *inode, struct file *file) 791 + { 792 + struct cxl_memdev *cxlmd = 793 + container_of(inode->i_cdev, typeof(*cxlmd), cdev); 794 + 795 + get_device(&cxlmd->dev); 796 + file->private_data = cxlmd; 797 + 798 + return 0; 799 + } 800 + 801 + static int cxl_memdev_release_file(struct inode *inode, struct file *file) 802 + { 803 + struct cxl_memdev *cxlmd = 804 + container_of(inode->i_cdev, typeof(*cxlmd), cdev); 805 + 806 + put_device(&cxlmd->dev); 807 + 808 + return 0; 792 809 } 793 810 794 811 static const struct file_operations cxl_memdev_fops = { 795 812 .owner = THIS_MODULE, 796 813 .unlocked_ioctl = cxl_memdev_ioctl, 814 + .open = cxl_memdev_open, 815 + .release = cxl_memdev_release_file, 797 816 .compat_ioctl = compat_ptr_ioctl, 798 817 .llseek = noop_llseek, 799 818 }; ··· 999 984 return NULL; 1000 985 } 1001 986 1002 - offset = ((u64)reg_hi << 32) | FIELD_GET(CXL_REGLOC_ADDR_MASK, reg_lo); 987 + offset = 
((u64)reg_hi << 32) | (reg_lo & CXL_REGLOC_ADDR_MASK); 1003 988 bar = FIELD_GET(CXL_REGLOC_BIR_MASK, reg_lo); 1004 989 1005 990 /* Basic sanity check that BAR is big enough */ ··· 1064 1049 { 1065 1050 struct cxl_memdev *cxlmd = to_cxl_memdev(dev); 1066 1051 1067 - percpu_ref_exit(&cxlmd->ops_active); 1068 1052 ida_free(&cxl_memdev_ida, cxlmd->id); 1069 1053 kfree(cxlmd); 1070 1054 } ··· 1080 1066 struct cxl_memdev *cxlmd = to_cxl_memdev(dev); 1081 1067 struct cxl_mem *cxlm = cxlmd->cxlm; 1082 1068 1083 - return sprintf(buf, "%.16s\n", cxlm->firmware_version); 1069 + return sysfs_emit(buf, "%.16s\n", cxlm->firmware_version); 1084 1070 } 1085 1071 static DEVICE_ATTR_RO(firmware_version); 1086 1072 ··· 1090 1076 struct cxl_memdev *cxlmd = to_cxl_memdev(dev); 1091 1077 struct cxl_mem *cxlm = cxlmd->cxlm; 1092 1078 1093 - return sprintf(buf, "%zu\n", cxlm->payload_size); 1079 + return sysfs_emit(buf, "%zu\n", cxlm->payload_size); 1094 1080 } 1095 1081 static DEVICE_ATTR_RO(payload_max); 1096 1082 ··· 1101 1087 struct cxl_mem *cxlm = cxlmd->cxlm; 1102 1088 unsigned long long len = range_len(&cxlm->ram_range); 1103 1089 1104 - return sprintf(buf, "%#llx\n", len); 1090 + return sysfs_emit(buf, "%#llx\n", len); 1105 1091 } 1106 1092 1107 1093 static struct device_attribute dev_attr_ram_size = ··· 1114 1100 struct cxl_mem *cxlm = cxlmd->cxlm; 1115 1101 unsigned long long len = range_len(&cxlm->pmem_range); 1116 1102 1117 - return sprintf(buf, "%#llx\n", len); 1103 + return sysfs_emit(buf, "%#llx\n", len); 1118 1104 } 1119 1105 1120 1106 static struct device_attribute dev_attr_pmem_size = ··· 1164 1150 .groups = cxl_memdev_attribute_groups, 1165 1151 }; 1166 1152 1167 - static void cxlmdev_unregister(void *_cxlmd) 1153 + static void cxl_memdev_shutdown(struct cxl_memdev *cxlmd) 1154 + { 1155 + down_write(&cxl_memdev_rwsem); 1156 + cxlmd->cxlm = NULL; 1157 + up_write(&cxl_memdev_rwsem); 1158 + } 1159 + 1160 + static void cxl_memdev_unregister(void *_cxlmd) 1168 1161 { 1169 
1162 struct cxl_memdev *cxlmd = _cxlmd; 1170 1163 struct device *dev = &cxlmd->dev; 1171 1164 1172 - percpu_ref_kill(&cxlmd->ops_active); 1173 1165 cdev_device_del(&cxlmd->cdev, dev); 1174 - wait_for_completion(&cxlmd->ops_dead); 1175 - cxlmd->cxlm = NULL; 1166 + cxl_memdev_shutdown(cxlmd); 1176 1167 put_device(dev); 1177 1168 } 1178 1169 1179 - static void cxlmdev_ops_active_release(struct percpu_ref *ref) 1180 - { 1181 - struct cxl_memdev *cxlmd = 1182 - container_of(ref, typeof(*cxlmd), ops_active); 1183 - 1184 - complete(&cxlmd->ops_dead); 1185 - } 1186 - 1187 - static int cxl_mem_add_memdev(struct cxl_mem *cxlm) 1170 + static struct cxl_memdev *cxl_memdev_alloc(struct cxl_mem *cxlm) 1188 1171 { 1189 1172 struct pci_dev *pdev = cxlm->pdev; 1190 1173 struct cxl_memdev *cxlmd; ··· 1191 1180 1192 1181 cxlmd = kzalloc(sizeof(*cxlmd), GFP_KERNEL); 1193 1182 if (!cxlmd) 1194 - return -ENOMEM; 1195 - init_completion(&cxlmd->ops_dead); 1196 - 1197 - /* 1198 - * @cxlm is deallocated when the driver unbinds so operations 1199 - * that are using it need to hold a live reference. 
1200 - */ 1201 - cxlmd->cxlm = cxlm; 1202 - rc = percpu_ref_init(&cxlmd->ops_active, cxlmdev_ops_active_release, 0, 1203 - GFP_KERNEL); 1204 - if (rc) 1205 - goto err_ref; 1183 + return ERR_PTR(-ENOMEM); 1206 1184 1207 1185 rc = ida_alloc_range(&cxl_memdev_ida, 0, CXL_MEM_MAX_DEVS, GFP_KERNEL); 1208 1186 if (rc < 0) 1209 - goto err_id; 1187 + goto err; 1210 1188 cxlmd->id = rc; 1211 1189 1212 1190 dev = &cxlmd->dev; ··· 1204 1204 dev->bus = &cxl_bus_type; 1205 1205 dev->devt = MKDEV(cxl_mem_major, cxlmd->id); 1206 1206 dev->type = &cxl_memdev_type; 1207 - dev_set_name(dev, "mem%d", cxlmd->id); 1207 + device_set_pm_not_required(dev); 1208 1208 1209 1209 cdev = &cxlmd->cdev; 1210 1210 cdev_init(cdev, &cxl_memdev_fops); 1211 + return cxlmd; 1211 1212 1213 + err: 1214 + kfree(cxlmd); 1215 + return ERR_PTR(rc); 1216 + } 1217 + 1218 + static int cxl_mem_add_memdev(struct cxl_mem *cxlm) 1219 + { 1220 + struct cxl_memdev *cxlmd; 1221 + struct device *dev; 1222 + struct cdev *cdev; 1223 + int rc; 1224 + 1225 + cxlmd = cxl_memdev_alloc(cxlm); 1226 + if (IS_ERR(cxlmd)) 1227 + return PTR_ERR(cxlmd); 1228 + 1229 + dev = &cxlmd->dev; 1230 + rc = dev_set_name(dev, "mem%d", cxlmd->id); 1231 + if (rc) 1232 + goto err; 1233 + 1234 + /* 1235 + * Activate ioctl operations, no cxl_memdev_rwsem manipulation 1236 + * needed as this is ordered with cdev_add() publishing the device. 1237 + */ 1238 + cxlmd->cxlm = cxlm; 1239 + 1240 + cdev = &cxlmd->cdev; 1212 1241 rc = cdev_device_add(cdev, dev); 1213 1242 if (rc) 1214 - goto err_add; 1243 + goto err; 1215 1244 1216 - return devm_add_action_or_reset(dev->parent, cxlmdev_unregister, cxlmd); 1245 + return devm_add_action_or_reset(dev->parent, cxl_memdev_unregister, 1246 + cxlmd); 1217 1247 1218 - err_add: 1219 - ida_free(&cxl_memdev_ida, cxlmd->id); 1220 - err_id: 1248 + err: 1221 1249 /* 1222 - * Theoretically userspace could have already entered the fops, 1223 - * so flush ops_active. 
1250 + * The cdev was briefly live, shutdown any ioctl operations that 1251 + * saw that state. 1224 1252 */ 1225 - percpu_ref_kill(&cxlmd->ops_active); 1226 - wait_for_completion(&cxlmd->ops_dead); 1227 - percpu_ref_exit(&cxlmd->ops_active); 1228 - err_ref: 1229 - kfree(cxlmd); 1230 - 1253 + cxl_memdev_shutdown(cxlmd); 1254 + put_device(dev); 1231 1255 return rc; 1232 1256 } 1233 1257 ··· 1420 1396 */ 1421 1397 static int cxl_mem_identify(struct cxl_mem *cxlm) 1422 1398 { 1399 + /* See CXL 2.0 Table 175 Identify Memory Device Output Payload */ 1423 1400 struct cxl_mbox_identify { 1424 1401 char fw_revision[0x10]; 1425 1402 __le64 total_capacity; ··· 1449 1424 * For now, only the capacity is exported in sysfs 1450 1425 */ 1451 1426 cxlm->ram_range.start = 0; 1452 - cxlm->ram_range.end = le64_to_cpu(id.volatile_capacity) - 1; 1427 + cxlm->ram_range.end = le64_to_cpu(id.volatile_capacity) * SZ_256M - 1; 1453 1428 1454 1429 cxlm->pmem_range.start = 0; 1455 - cxlm->pmem_range.end = le64_to_cpu(id.persistent_capacity) - 1; 1430 + cxlm->pmem_range.end = 1431 + le64_to_cpu(id.persistent_capacity) * SZ_256M - 1; 1456 1432 1457 1433 memcpy(cxlm->firmware_version, id.fw_revision, sizeof(id.fw_revision)); 1458 1434
+2 -4
drivers/dax/bus.c
···
90 90 				list_add(&dax_id->list, &dax_drv->ids);
91 91 			} else
92 92 				rc = -ENOMEM;
93 -		} else
94 -			/* nothing to remove */;
93 +		}
95 94 	} else if (action == ID_REMOVE) {
96 95 		list_del(&dax_id->list);
97 96 		kfree(dax_id);
98 -	} else
99 -		/* dax_id already added */;
97 +	}
100 98 	mutex_unlock(&dax_bus_lock);
101 99 
102 100 	if (rc < 0)
+1
drivers/dma/dmaengine.c
···
1086 1086 	kfree(chan->dev);
1087 1087  err_free_local:
1088 1088 	free_percpu(chan->local);
1089 +	chan->local = NULL;
1089 1090 	return rc;
1090 1091 }
+2
drivers/dma/dw/Kconfig
···
10 10 
11 11 config DW_DMAC
12 12 	tristate "Synopsys DesignWare AHB DMA platform driver"
13 +	depends on HAS_IOMEM
13 14 	select DW_DMAC_CORE
14 15 	help
15 16 	  Support the Synopsys DesignWare AHB DMA controller. This
···
19 18 config DW_DMAC_PCI
20 19 	tristate "Synopsys DesignWare AHB DMA PCI driver"
21 20 	depends on PCI
21 +	depends on HAS_IOMEM
22 22 	select DW_DMAC_CORE
23 23 	help
24 24 	  Support the Synopsys DesignWare AHB DMA controller on the
+54 -11
drivers/dma/idxd/device.c
··· 282 282 idxd_cmd_exec(idxd, IDXD_CMD_DRAIN_WQ, operand, NULL); 283 283 } 284 284 285 + void idxd_wq_reset(struct idxd_wq *wq) 286 + { 287 + struct idxd_device *idxd = wq->idxd; 288 + struct device *dev = &idxd->pdev->dev; 289 + u32 operand; 290 + 291 + if (wq->state != IDXD_WQ_ENABLED) { 292 + dev_dbg(dev, "WQ %d in wrong state: %d\n", wq->id, wq->state); 293 + return; 294 + } 295 + 296 + operand = BIT(wq->id % 16) | ((wq->id / 16) << 16); 297 + idxd_cmd_exec(idxd, IDXD_CMD_RESET_WQ, operand, NULL); 298 + wq->state = IDXD_WQ_DISABLED; 299 + } 300 + 285 301 int idxd_wq_map_portal(struct idxd_wq *wq) 286 302 { 287 303 struct idxd_device *idxd = wq->idxd; ··· 379 363 void idxd_wq_disable_cleanup(struct idxd_wq *wq) 380 364 { 381 365 struct idxd_device *idxd = wq->idxd; 382 - struct device *dev = &idxd->pdev->dev; 383 - int i, wq_offset; 384 366 385 367 lockdep_assert_held(&idxd->dev_lock); 386 368 memset(wq->wqcfg, 0, idxd->wqcfg_size); ··· 390 376 wq->ats_dis = 0; 391 377 clear_bit(WQ_FLAG_DEDICATED, &wq->flags); 392 378 memset(wq->name, 0, WQ_NAME_SIZE); 393 - 394 - for (i = 0; i < WQCFG_STRIDES(idxd); i++) { 395 - wq_offset = WQCFG_OFFSET(idxd, wq->id, i); 396 - iowrite32(0, idxd->reg_base + wq_offset); 397 - dev_dbg(dev, "WQ[%d][%d][%#x]: %#x\n", 398 - wq->id, i, wq_offset, 399 - ioread32(idxd->reg_base + wq_offset)); 400 - } 401 379 } 402 380 403 381 /* Device control bits */ ··· 580 574 } 581 575 582 576 /* Device configuration bits */ 577 + void idxd_msix_perm_setup(struct idxd_device *idxd) 578 + { 579 + union msix_perm mperm; 580 + int i, msixcnt; 581 + 582 + msixcnt = pci_msix_vec_count(idxd->pdev); 583 + if (msixcnt < 0) 584 + return; 585 + 586 + mperm.bits = 0; 587 + mperm.pasid = idxd->pasid; 588 + mperm.pasid_en = device_pasid_enabled(idxd); 589 + for (i = 1; i < msixcnt; i++) 590 + iowrite32(mperm.bits, idxd->reg_base + idxd->msix_perm_offset + i * 8); 591 + } 592 + 593 + void idxd_msix_perm_clear(struct idxd_device *idxd) 594 + { 595 + union 
msix_perm mperm; 596 + int i, msixcnt; 597 + 598 + msixcnt = pci_msix_vec_count(idxd->pdev); 599 + if (msixcnt < 0) 600 + return; 601 + 602 + mperm.bits = 0; 603 + for (i = 1; i < msixcnt; i++) 604 + iowrite32(mperm.bits, idxd->reg_base + idxd->msix_perm_offset + i * 8); 605 + } 606 + 583 607 static void idxd_group_config_write(struct idxd_group *group) 584 608 { 585 609 struct idxd_device *idxd = group->idxd; ··· 678 642 if (!wq->group) 679 643 return 0; 680 644 681 - memset(wq->wqcfg, 0, idxd->wqcfg_size); 645 + /* 646 + * Instead of memset the entire shadow copy of WQCFG, copy from the hardware after 647 + * wq reset. This will copy back the sticky values that are present on some devices. 648 + */ 649 + for (i = 0; i < WQCFG_STRIDES(idxd); i++) { 650 + wq_offset = WQCFG_OFFSET(idxd, wq->id, i); 651 + wq->wqcfg->bits[i] = ioread32(idxd->reg_base + wq_offset); 652 + } 682 653 683 654 /* byte 0-3 */ 684 655 wq->wqcfg->wq_size = wq->size;
+3
drivers/dma/idxd/idxd.h
···
316 316 struct bus_type *idxd_get_bus_type(struct idxd_device *idxd);
317 317 
318 318 /* device interrupt control */
319 + void idxd_msix_perm_setup(struct idxd_device *idxd);
320 + void idxd_msix_perm_clear(struct idxd_device *idxd);
319 321 irqreturn_t idxd_irq_handler(int vec, void *data);
320 322 irqreturn_t idxd_misc_thread(int vec, void *data);
321 323 irqreturn_t idxd_wq_thread(int irq, void *data);
···
343 341 int idxd_wq_enable(struct idxd_wq *wq);
344 342 int idxd_wq_disable(struct idxd_wq *wq);
345 343 void idxd_wq_drain(struct idxd_wq *wq);
344 + void idxd_wq_reset(struct idxd_wq *wq);
346 345 int idxd_wq_map_portal(struct idxd_wq *wq);
347 346 void idxd_wq_unmap_portal(struct idxd_wq *wq);
348 347 void idxd_wq_disable_cleanup(struct idxd_wq *wq);
+2 -9
drivers/dma/idxd/init.c
··· 65 65 struct idxd_irq_entry *irq_entry; 66 66 int i, msixcnt; 67 67 int rc = 0; 68 - union msix_perm mperm; 69 68 70 69 msixcnt = pci_msix_vec_count(pdev); 71 70 if (msixcnt < 0) { ··· 143 144 } 144 145 145 146 idxd_unmask_error_interrupts(idxd); 146 - 147 - /* Setup MSIX permission table */ 148 - mperm.bits = 0; 149 - mperm.pasid = idxd->pasid; 150 - mperm.pasid_en = device_pasid_enabled(idxd); 151 - for (i = 1; i < msixcnt; i++) 152 - iowrite32(mperm.bits, idxd->reg_base + idxd->msix_perm_offset + i * 8); 153 - 147 + idxd_msix_perm_setup(idxd); 154 148 return 0; 155 149 156 150 err_no_irq: ··· 502 510 idxd_flush_work_list(irq_entry); 503 511 } 504 512 513 + idxd_msix_perm_clear(idxd); 505 514 destroy_workqueue(idxd->wq); 506 515 } 507 516
+3 -1
drivers/dma/idxd/irq.c
···
124 124 	for (i = 0; i < 4; i++)
125 125 		idxd->sw_err.bits[i] = ioread64(idxd->reg_base +
126 126 				IDXD_SWERR_OFFSET + i * sizeof(u64));
127 -	iowrite64(IDXD_SWERR_ACK, idxd->reg_base + IDXD_SWERR_OFFSET);
127 +
128 +	iowrite64(idxd->sw_err.bits[0] & IDXD_SWERR_ACK,
129 +		  idxd->reg_base + IDXD_SWERR_OFFSET);
128 130 
129 131 	if (idxd->sw_err.valid && idxd->sw_err.wq_idx_valid) {
130 132 		int id = idxd->sw_err.wq_idx;
+10 -9
drivers/dma/idxd/sysfs.c
··· 275 275 { 276 276 struct idxd_device *idxd = wq->idxd; 277 277 struct device *dev = &idxd->pdev->dev; 278 - int rc; 279 278 280 279 mutex_lock(&wq->wq_lock); 281 280 dev_dbg(dev, "%s removing WQ %s\n", __func__, dev_name(&wq->conf_dev)); ··· 295 296 idxd_wq_unmap_portal(wq); 296 297 297 298 idxd_wq_drain(wq); 298 - rc = idxd_wq_disable(wq); 299 + idxd_wq_reset(wq); 299 300 300 301 idxd_wq_free_resources(wq); 301 302 wq->client_count = 0; 302 303 mutex_unlock(&wq->wq_lock); 303 304 304 - if (rc < 0) 305 - dev_warn(dev, "Failed to disable %s: %d\n", 306 - dev_name(&wq->conf_dev), rc); 307 - else 308 - dev_info(dev, "wq %s disabled\n", dev_name(&wq->conf_dev)); 305 + dev_info(dev, "wq %s disabled\n", dev_name(&wq->conf_dev)); 309 306 } 310 307 311 308 static int idxd_config_bus_remove(struct device *dev) ··· 984 989 if (!test_bit(IDXD_FLAG_CONFIGURABLE, &idxd->flags)) 985 990 return -EPERM; 986 991 987 - if (wq->state != IDXD_WQ_DISABLED) 992 + if (idxd->state == IDXD_DEV_ENABLED) 988 993 return -EPERM; 989 994 990 995 if (size + total_claimed_wq_size(idxd) - wq->size > idxd->max_wq_size) ··· 1444 1449 { 1445 1450 struct idxd_device *idxd = 1446 1451 container_of(dev, struct idxd_device, conf_dev); 1452 + int i, rc = 0; 1447 1453 1448 - return sprintf(buf, "%#llx\n", idxd->hw.opcap.bits[0]); 1454 + for (i = 0; i < 4; i++) 1455 + rc += sysfs_emit_at(buf, rc, "%#llx ", idxd->hw.opcap.bits[i]); 1456 + 1457 + rc--; 1458 + rc += sysfs_emit_at(buf, rc, "\n"); 1459 + return rc; 1449 1460 } 1450 1461 static DEVICE_ATTR_RO(op_cap); 1451 1462
+11 -7
drivers/dma/plx_dma.c
··· 507 507 508 508 rc = request_irq(pci_irq_vector(pdev, 0), plx_dma_isr, 0, 509 509 KBUILD_MODNAME, plxdev); 510 - if (rc) { 511 - kfree(plxdev); 512 - return rc; 513 - } 510 + if (rc) 511 + goto free_plx; 514 512 515 513 spin_lock_init(&plxdev->ring_lock); 516 514 tasklet_setup(&plxdev->desc_task, plx_dma_desc_task); ··· 538 540 rc = dma_async_device_register(dma); 539 541 if (rc) { 540 542 pci_err(pdev, "Failed to register dma device: %d\n", rc); 541 - free_irq(pci_irq_vector(pdev, 0), plxdev); 542 - kfree(plxdev); 543 - return rc; 543 + goto put_device; 544 544 } 545 545 546 546 pci_set_drvdata(pdev, plxdev); 547 547 548 548 return 0; 549 + 550 + put_device: 551 + put_device(&pdev->dev); 552 + free_irq(pci_irq_vector(pdev, 0), plxdev); 553 + free_plx: 554 + kfree(plxdev); 555 + 556 + return rc; 549 557 } 550 558 551 559 static int plx_dma_probe(struct pci_dev *pdev,
+2 -2
drivers/dma/tegra20-apb-dma.c
···
723 723 		goto end;
724 724 	}
725 725 	if (!tdc->busy) {
726 -		err = pm_runtime_get_sync(tdc->tdma->dev);
726 +		err = pm_runtime_resume_and_get(tdc->tdma->dev);
727 727 		if (err < 0) {
728 728 			dev_err(tdc2dev(tdc), "Failed to enable DMA\n");
729 729 			goto end;
···
818 818 	struct tegra_dma_channel *tdc = to_tegra_dma_chan(dc);
819 819 	int err;
820 820 
821 -	err = pm_runtime_get_sync(tdc->tdma->dev);
821 +	err = pm_runtime_resume_and_get(tdc->tdma->dev);
822 822 	if (err < 0) {
823 823 		dev_err(tdc2dev(tdc), "Failed to synchronize DMA: %d\n", err);
824 824 		return;
+19 -12
drivers/dma/xilinx/xilinx_dpdma.c
··· 839 839 struct xilinx_dpdma_tx_desc *desc; 840 840 struct virt_dma_desc *vdesc; 841 841 u32 reg, channels; 842 + bool first_frame; 842 843 843 844 lockdep_assert_held(&chan->lock); 844 845 ··· 852 851 chan->first_frame = true; 853 852 chan->running = true; 854 853 } 855 - 856 - if (chan->video_group) 857 - channels = xilinx_dpdma_chan_video_group_ready(chan); 858 - else 859 - channels = BIT(chan->id); 860 - 861 - if (!channels) 862 - return; 863 854 864 855 vdesc = vchan_next_desc(&chan->vchan); 865 856 if (!vdesc) ··· 877 884 FIELD_PREP(XILINX_DPDMA_CH_DESC_START_ADDRE_MASK, 878 885 upper_32_bits(sw_desc->dma_addr))); 879 886 880 - if (chan->first_frame) 887 + first_frame = chan->first_frame; 888 + chan->first_frame = false; 889 + 890 + if (chan->video_group) { 891 + channels = xilinx_dpdma_chan_video_group_ready(chan); 892 + /* 893 + * Trigger the transfer only when all channels in the group are 894 + * ready. 895 + */ 896 + if (!channels) 897 + return; 898 + } else { 899 + channels = BIT(chan->id); 900 + } 901 + 902 + if (first_frame) 881 903 reg = XILINX_DPDMA_GBL_TRIG_MASK(channels); 882 904 else 883 905 reg = XILINX_DPDMA_GBL_RETRIG_MASK(channels); 884 - 885 - chan->first_frame = false; 886 906 887 907 dpdma_write(xdev->reg, XILINX_DPDMA_GBL, reg); 888 908 } ··· 1048 1042 */ 1049 1043 static void xilinx_dpdma_chan_done_irq(struct xilinx_dpdma_chan *chan) 1050 1044 { 1051 - struct xilinx_dpdma_tx_desc *active = chan->desc.active; 1045 + struct xilinx_dpdma_tx_desc *active; 1052 1046 unsigned long flags; 1053 1047 1054 1048 spin_lock_irqsave(&chan->lock, flags); 1055 1049 1056 1050 xilinx_dpdma_debugfs_desc_done_irq(chan); 1057 1051 1052 + active = chan->desc.active; 1058 1053 if (active) 1059 1054 vchan_cyclic_callback(&active->vdesc); 1060 1055 else
+2 -2
drivers/firmware/turris-mox-rwtm.c
···
2 2 /*
3 3  * Turris Mox rWTM firmware driver
4 4  *
5 - * Copyright (C) 2019 Marek Behun <marek.behun@nic.cz>
5 + * Copyright (C) 2019 Marek Behún <kabel@kernel.org>
6 6  */
7 7 
8 8 #include <linux/armada-37xx-rwtm-mailbox.h>
···
547 547 
548 548 MODULE_LICENSE("GPL v2");
549 549 MODULE_DESCRIPTION("Turris Mox rWTM firmware driver");
550 - MODULE_AUTHOR("Marek Behun <marek.behun@nic.cz>");
550 + MODULE_AUTHOR("Marek Behun <kabel@kernel.org>");
+2 -2
drivers/gpio/gpio-moxtet.c
···
2 2 /*
3 3  * Turris Mox Moxtet GPIO expander
4 4  *
5 - * Copyright (C) 2018 Marek Behun <marek.behun@nic.cz>
5 + * Copyright (C) 2018 Marek Behún <kabel@kernel.org>
6 6  */
7 7 
8 8 #include <linux/bitops.h>
···
174 174 };
175 175 module_moxtet_driver(moxtet_gpio_driver);
176 176 
177 - MODULE_AUTHOR("Marek Behun <marek.behun@nic.cz>");
177 + MODULE_AUTHOR("Marek Behun <kabel@kernel.org>");
178 178 MODULE_DESCRIPTION("Turris Mox Moxtet GPIO expander");
179 179 MODULE_LICENSE("GPL v2");
+8
drivers/gpio/gpiolib-sysfs.c
···
458 458 	long gpio;
459 459 	struct gpio_desc *desc;
460 460 	int status;
461 +	struct gpio_chip *gc;
462 +	int offset;
461 463 
462 464 	status = kstrtol(buf, 0, &gpio);
463 465 	if (status < 0)
···
469 467 	/* reject invalid GPIOs */
470 468 	if (!desc) {
471 469 		pr_warn("%s: invalid GPIO %ld\n", __func__, gpio);
470 +		return -EINVAL;
471 +	}
472 +	gc = desc->gdev->chip;
473 +	offset = gpio_chip_hwgpio(desc);
474 +	if (!gpiochip_line_is_valid(gc, offset)) {
475 +		pr_warn("%s: GPIO %ld masked\n", __func__, gpio);
472 476 		return -EINVAL;
473 477 	}
474 478 
-1
drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c
···
646 646 		break;
647 647 	case INTEL_BACKLIGHT_DISPLAY_DDI:
648 648 		try_intel_interface = true;
649 -		try_vesa_interface = true;
650 649 		break;
651 650 	default:
652 651 		return -ENODEV;
+2 -2
drivers/gpu/drm/i915/display/vlv_dsi.c
···
992 992 	 * FIXME As we do with eDP, just make a note of the time here
993 993 	 * and perform the wait before the next panel power on.
994 994 	 */
995 -	intel_dsi_msleep(intel_dsi, intel_dsi->panel_pwr_cycle_delay);
995 +	msleep(intel_dsi->panel_pwr_cycle_delay);
996 996 }
997 997 
998 998 static void intel_dsi_shutdown(struct intel_encoder *encoder)
999 999 {
1000 1000 	struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
1001 1001 
1002 -	intel_dsi_msleep(intel_dsi, intel_dsi->panel_pwr_cycle_delay);
1002 +	msleep(intel_dsi->panel_pwr_cycle_delay);
1003 1003 }
1004 1004 
1005 1005 static bool intel_dsi_get_hw_state(struct intel_encoder *encoder,
+2 -2
drivers/gpu/drm/i915/intel_pm.c
···
5471 5471 	struct skl_plane_wm *wm = &crtc_state->wm.skl.raw.planes[plane_id];
5472 5472 	int ret;
5473 5473 
5474 -	memset(wm, 0, sizeof(*wm));
5475 -
5476 5474 	/* Watermarks calculated in master */
5477 5475 	if (plane_state->planar_slave)
5478 5476 		return 0;
5477 +
5478 +	memset(wm, 0, sizeof(*wm));
5479 5479 
5480 5480 	if (plane_state->planar_linked_plane) {
5481 5481 		const struct drm_framebuffer *fb = plane_state->hw.fb;
+37 -3
drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
··· 10 10 #include <linux/bitops.h> 11 11 #include <linux/delay.h> 12 12 #include <linux/dma-mapping.h> 13 + #include <linux/dmi.h> 13 14 #include <linux/interrupt.h> 14 15 #include <linux/io-64-nonatomic-lo-hi.h> 15 16 #include <linux/module.h> ··· 23 22 24 23 #define ACEL_EN BIT(0) 25 24 #define GYRO_EN BIT(1) 26 - #define MAGNO_EN BIT(2) 25 + #define MAGNO_EN BIT(2) 27 26 #define ALS_EN BIT(19) 27 + 28 + static int sensor_mask_override = -1; 29 + module_param_named(sensor_mask, sensor_mask_override, int, 0444); 30 + MODULE_PARM_DESC(sensor_mask, "override the detected sensors mask"); 28 31 29 32 void amd_start_sensor(struct amd_mp2_dev *privdata, struct amd_mp2_sensor_info info) 30 33 { ··· 78 73 writel(cmd_base.ul, privdata->mmio + AMD_C2P_MSG0); 79 74 } 80 75 76 + static const struct dmi_system_id dmi_sensor_mask_overrides[] = { 77 + { 78 + .matches = { 79 + DMI_MATCH(DMI_PRODUCT_NAME, "HP ENVY x360 Convertible 13-ag0xxx"), 80 + }, 81 + .driver_data = (void *)(ACEL_EN | MAGNO_EN), 82 + }, 83 + { 84 + .matches = { 85 + DMI_MATCH(DMI_PRODUCT_NAME, "HP ENVY x360 Convertible 15-cp0xxx"), 86 + }, 87 + .driver_data = (void *)(ACEL_EN | MAGNO_EN), 88 + }, 89 + { } 90 + }; 91 + 81 92 int amd_mp2_get_sensor_num(struct amd_mp2_dev *privdata, u8 *sensor_id) 82 93 { 83 94 int activestatus, num_of_sensors = 0; 95 + const struct dmi_system_id *dmi_id; 96 + u32 activecontrolstatus; 84 97 85 - privdata->activecontrolstatus = readl(privdata->mmio + AMD_P2C_MSG3); 86 - activestatus = privdata->activecontrolstatus >> 4; 98 + if (sensor_mask_override == -1) { 99 + dmi_id = dmi_first_match(dmi_sensor_mask_overrides); 100 + if (dmi_id) 101 + sensor_mask_override = (long)dmi_id->driver_data; 102 + } 103 + 104 + if (sensor_mask_override >= 0) { 105 + activestatus = sensor_mask_override; 106 + } else { 107 + activecontrolstatus = readl(privdata->mmio + AMD_P2C_MSG3); 108 + activestatus = activecontrolstatus >> 4; 109 + } 110 + 87 111 if (ACEL_EN & activestatus) 88 112 
sensor_id[num_of_sensors++] = accel_idx; 89 113
-1
drivers/hid/amd-sfh-hid/amd_sfh_pcie.h
···
61 61 	struct pci_dev *pdev;
62 62 	struct amdtp_cl_data *cl_data;
63 63 	void __iomem *mmio;
64 -	u32 activecontrolstatus;
65 64 };
66 65 
67 66 struct amd_mp2_sensor_info {
+1
drivers/hid/hid-alps.c
···
761 761 
762 762 	if (input_register_device(data->input2)) {
763 763 		input_free_device(input2);
764 +		ret = -ENOENT;
764 765 		goto exit;
765 766 	}
766 767 }
+3
drivers/hid/hid-asus.c
···
1222 1222 	  USB_DEVICE_ID_ASUSTEK_ROG_NKEY_KEYBOARD),
1223 1223 	  QUIRK_USE_KBD_BACKLIGHT | QUIRK_ROG_NKEY_KEYBOARD },
1224 1224 	{ HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK,
1225 +	    USB_DEVICE_ID_ASUSTEK_ROG_NKEY_KEYBOARD2),
1226 +	  QUIRK_USE_KBD_BACKLIGHT | QUIRK_ROG_NKEY_KEYBOARD },
1227 +	{ HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK,
1225 1228 	    USB_DEVICE_ID_ASUSTEK_T100TA_KEYBOARD),
1226 1229 	  QUIRK_T100_KEYBOARD | QUIRK_NO_CONSUMER_USAGES },
1227 1230 	{ HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK,
+11 -11
drivers/hid/hid-cp2112.c
··· 161 161 atomic_t read_avail;
162 162 atomic_t xfer_avail;
163 163 struct gpio_chip gc;
164 + struct irq_chip irq;
164 165 u8 *in_out_buffer;
165 166 struct mutex lock;
166 167
··· 1176 1175 return 0;
1177 1176 }
1178 1177
1179 - static struct irq_chip cp2112_gpio_irqchip = {
1180 - .name = "cp2112-gpio",
1181 - .irq_startup = cp2112_gpio_irq_startup,
1182 - .irq_shutdown = cp2112_gpio_irq_shutdown,
1183 - .irq_ack = cp2112_gpio_irq_ack,
1184 - .irq_mask = cp2112_gpio_irq_mask,
1185 - .irq_unmask = cp2112_gpio_irq_unmask,
1186 - .irq_set_type = cp2112_gpio_irq_type,
1187 - };
1188 -
1189 1178 static int __maybe_unused cp2112_allocate_irq(struct cp2112_device *dev,
1190 1179 int pin)
1191 1180 {
··· 1330 1339 dev->gc.can_sleep = 1;
1331 1340 dev->gc.parent = &hdev->dev;
1332 1341
1342 + dev->irq.name = "cp2112-gpio";
1343 + dev->irq.irq_startup = cp2112_gpio_irq_startup;
1344 + dev->irq.irq_shutdown = cp2112_gpio_irq_shutdown;
1345 + dev->irq.irq_ack = cp2112_gpio_irq_ack;
1346 + dev->irq.irq_mask = cp2112_gpio_irq_mask;
1347 + dev->irq.irq_unmask = cp2112_gpio_irq_unmask;
1348 + dev->irq.irq_set_type = cp2112_gpio_irq_type;
1349 + dev->irq.flags = IRQCHIP_MASK_ON_SUSPEND;
1350 +
1333 1351 girq = &dev->gc.irq;
1334 - girq->chip = &cp2112_gpio_irqchip;
1352 + girq->chip = &dev->irq;
1335 1353 /* The event comes from the outside so no parent handler */
1336 1354 girq->parent_handler = NULL;
1337 1355 girq->num_parents = 0;
+2
drivers/hid/hid-google-hammer.c
··· 574 574
575 575 static const struct hid_device_id hammer_devices[] = {
576 576 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
577 + USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_DON) },
578 + { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
577 579 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_HAMMER) },
578 580 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
579 581 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MAGNEMITE) },
+2
drivers/hid/hid-ids.h
··· 194 194 #define USB_DEVICE_ID_ASUSTEK_ROG_KEYBOARD2 0x1837
195 195 #define USB_DEVICE_ID_ASUSTEK_ROG_KEYBOARD3 0x1822
196 196 #define USB_DEVICE_ID_ASUSTEK_ROG_NKEY_KEYBOARD 0x1866
197 + #define USB_DEVICE_ID_ASUSTEK_ROG_NKEY_KEYBOARD2 0x19b6
197 198 #define USB_DEVICE_ID_ASUSTEK_FX503VD_KEYBOARD 0x1869
198 199
199 200 #define USB_VENDOR_ID_ATEN 0x0557
··· 494 493 #define USB_DEVICE_ID_GOOGLE_MASTERBALL 0x503c
495 494 #define USB_DEVICE_ID_GOOGLE_MAGNEMITE 0x503d
496 495 #define USB_DEVICE_ID_GOOGLE_MOONBALL 0x5044
496 + #define USB_DEVICE_ID_GOOGLE_DON 0x5050
497 497
498 498 #define USB_VENDOR_ID_GOTOP 0x08f2
499 499 #define USB_DEVICE_ID_SUPER_Q2 0x007f
+3 -5
drivers/hid/wacom_wac.c
··· 2533 2533 !wacom_wac->shared->is_touch_on) {
2534 2534 if (!wacom_wac->shared->touch_down)
2535 2535 return;
2536 - prox = 0;
2536 + prox = false;
2537 2537
2538 2538 wacom_wac->hid_data.num_received++;
··· 3574 3574 {
3575 3575 struct wacom_features *features = &wacom_wac->features;
3576 3576
3577 - input_dev->evbit[0] |= BIT_MASK(EV_KEY) | BIT_MASK(EV_ABS);
3578 -
3579 3577 if (!(features->device_type & WACOM_DEVICETYPE_PEN))
3580 3578 return -ENODEV;
3581 3579
··· 3588 3590 return 0;
3589 3591 }
3590 3592
3593 + input_dev->evbit[0] |= BIT_MASK(EV_KEY) | BIT_MASK(EV_ABS);
3591 3594 __set_bit(BTN_TOUCH, input_dev->keybit);
3592 3595 __set_bit(ABS_MISC, input_dev->absbit);
3593 3596
··· 3741 3742 {
3742 3743 struct wacom_features *features = &wacom_wac->features;
3743 3744
3744 - input_dev->evbit[0] |= BIT_MASK(EV_KEY) | BIT_MASK(EV_ABS);
3745 -
3746 3745 if (!(features->device_type & WACOM_DEVICETYPE_TOUCH))
3747 3746 return -ENODEV;
3748 3747
··· 3753 3756 /* setup has already been done */
3754 3757 return 0;
3755 3758
3759 + input_dev->evbit[0] |= BIT_MASK(EV_KEY) | BIT_MASK(EV_ABS);
3756 3760 __set_bit(BTN_TOUCH, input_dev->keybit);
3757 3761
3758 3762 if (features->touch_max == 1) {
+1
drivers/i2c/busses/i2c-designware-master.c
··· 129 129 if ((comp_param1 & DW_IC_COMP_PARAM_1_SPEED_MODE_MASK)
130 130 != DW_IC_COMP_PARAM_1_SPEED_MODE_HIGH) {
131 131 dev_err(dev->dev, "High Speed not supported!\n");
132 + t->bus_freq_hz = I2C_MAX_FAST_MODE_FREQ;
132 133 dev->master_cfg &= ~DW_IC_CON_SPEED_MASK;
133 134 dev->master_cfg |= DW_IC_CON_SPEED_FAST;
134 135 dev->hs_hcnt = 0;
+1 -1
drivers/i2c/busses/i2c-exynos5.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only
2 - /**
2 + /*
3 3 * i2c-exynos5.c - Samsung Exynos5 I2C Controller Driver
4 4 *
5 5 * Copyright (C) 2013 Samsung Electronics Co., Ltd.
+1 -1
drivers/i2c/busses/i2c-hix5hd2.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later
2 2 /*
3 3 * Copyright (c) 2014 Linaro Ltd.
4 - * Copyright (c) 2014 Hisilicon Limited.
4 + * Copyright (c) 2014 HiSilicon Limited.
5 5 *
6 6 * Now only support 7 bit address.
7 7 */
+2 -2
drivers/i2c/busses/i2c-jz4780.c
··· 525 525 i2c_sta = jz4780_i2c_readw(i2c, JZ4780_I2C_STA);
526 526 data = *i2c->wbuf;
527 527 data &= ~JZ4780_I2C_DC_READ;
528 - if ((!i2c->stop_hold) && (i2c->cdata->version >=
529 - ID_X1000))
528 + if ((i2c->wt_len == 1) && (!i2c->stop_hold) &&
529 + (i2c->cdata->version >= ID_X1000))
530 530 data |= X1000_I2C_DC_STOP;
531 531 jz4780_i2c_writew(i2c, JZ4780_I2C_DC, data);
532 532 i2c->wbuf++;
+1 -1
drivers/i2c/busses/i2c-stm32f4.c
··· 534 534 default:
535 535 /*
536 536 * N-byte reception:
537 - * Enable ACK, reset POS (ACK postion) and clear ADDR flag.
537 + * Enable ACK, reset POS (ACK position) and clear ADDR flag.
538 538 * In that way, ACK will be sent as soon as the current byte
539 539 * will be received in the shift register
540 540 */
+4 -3
drivers/i2c/i2c-core-base.c
··· 378 378 static int i2c_init_recovery(struct i2c_adapter *adap)
379 379 {
380 380 struct i2c_bus_recovery_info *bri = adap->bus_recovery_info;
381 - char *err_str;
381 + char *err_str, *err_level = KERN_ERR;
382 382
383 383 if (!bri)
384 384 return 0;
··· 387 387 return -EPROBE_DEFER;
388 388
389 389 if (!bri->recover_bus) {
390 - err_str = "no recover_bus() found";
390 + err_str = "no suitable method provided";
391 + err_level = KERN_DEBUG;
391 392 goto err;
392 393 }
393 394
··· 415 414
416 415 return 0;
417 416 err:
418 - dev_err(&adap->dev, "Not using recovery: %s\n", err_str);
417 + dev_printk(err_level, &adap->dev, "Not using recovery: %s\n", err_str);
419 418 adap->bus_recovery_info = NULL;
420 419
421 420 return -EINVAL;
+2 -2
drivers/input/joystick/n64joy.c
··· 252 252 mutex_init(&priv->n64joy_mutex);
253 253
254 254 priv->reg_base = devm_platform_ioremap_resource(pdev, 0);
255 - if (!priv->reg_base) {
256 - err = -EINVAL;
255 + if (IS_ERR(priv->reg_base)) {
256 + err = PTR_ERR(priv->reg_base);
257 257 goto fail;
258 258 }
259 259
+31 -25
drivers/input/keyboard/nspire-keypad.c
··· 93 93 return IRQ_HANDLED;
94 94 }
95 95
96 - static int nspire_keypad_chip_init(struct nspire_keypad *keypad)
96 + static int nspire_keypad_open(struct input_dev *input)
97 97 {
98 + struct nspire_keypad *keypad = input_get_drvdata(input);
98 99 unsigned long val = 0, cycles_per_us, delay_cycles, row_delay_cycles;
100 + int error;
101 +
102 + error = clk_prepare_enable(keypad->clk);
103 + if (error)
104 + return error;
99 105
100 106 cycles_per_us = (clk_get_rate(keypad->clk) / 1000000);
101 107 if (cycles_per_us == 0)
··· 127 121 keypad->int_mask = 1 << 1;
128 122 writel(keypad->int_mask, keypad->reg_base + KEYPAD_INTMSK);
129 123
130 - /* Disable GPIO interrupts to prevent hanging on touchpad */
131 - /* Possibly used to detect touchpad events */
132 - writel(0, keypad->reg_base + KEYPAD_UNKNOWN_INT);
133 - /* Acknowledge existing interrupts */
134 - writel(~0, keypad->reg_base + KEYPAD_UNKNOWN_INT_STS);
135 -
136 - return 0;
137 - }
138 -
139 - static int nspire_keypad_open(struct input_dev *input)
140 - {
141 - struct nspire_keypad *keypad = input_get_drvdata(input);
142 - int error;
143 -
144 - error = clk_prepare_enable(keypad->clk);
145 - if (error)
146 - return error;
147 -
148 - error = nspire_keypad_chip_init(keypad);
149 - if (error) {
150 - clk_disable_unprepare(keypad->clk);
151 - return error;
152 - }
153 -
154 124 return 0;
155 125 }
156 126
157 127 static void nspire_keypad_close(struct input_dev *input)
158 128 {
159 129 struct nspire_keypad *keypad = input_get_drvdata(input);
130 +
131 + /* Disable interrupts */
132 + writel(0, keypad->reg_base + KEYPAD_INTMSK);
133 + /* Acknowledge existing interrupts */
134 + writel(~0, keypad->reg_base + KEYPAD_INT);
160 135
161 136 clk_disable_unprepare(keypad->clk);
162 137 }
··· 196 209 dev_err(&pdev->dev, "failed to allocate input device\n");
197 210 return -ENOMEM;
198 211 }
212 +
213 + error = clk_prepare_enable(keypad->clk);
214 + if (error) {
215 + dev_err(&pdev->dev, "failed to enable clock\n");
216 + return error;
217 + }
218 +
219 + /* Disable interrupts */
220 + writel(0, keypad->reg_base + KEYPAD_INTMSK);
221 + /* Acknowledge existing interrupts */
222 + writel(~0, keypad->reg_base + KEYPAD_INT);
223 +
224 + /* Disable GPIO interrupts to prevent hanging on touchpad */
225 + /* Possibly used to detect touchpad events */
226 + writel(0, keypad->reg_base + KEYPAD_UNKNOWN_INT);
227 + /* Acknowledge existing GPIO interrupts */
228 + writel(~0, keypad->reg_base + KEYPAD_UNKNOWN_INT_STS);
229 +
230 + clk_disable_unprepare(keypad->clk);
199 231
200 232 input_set_drvdata(input, keypad);
201 233
+1
drivers/input/serio/i8042-x86ia64io.h
··· 588 588 DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
589 589 DMI_MATCH(DMI_CHASSIS_TYPE, "10"), /* Notebook */
590 590 },
591 + }, {
591 592 .matches = {
592 593 DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
593 594 DMI_MATCH(DMI_CHASSIS_TYPE, "31"), /* Convertible Notebook */
+2 -3
drivers/input/touchscreen/elants_i2c.c
··· 1441 1441
1442 1442 touchscreen_parse_properties(ts->input, true, &ts->prop);
1443 1443
1444 - if (ts->chip_id == EKTF3624) {
1444 + if (ts->chip_id == EKTF3624 && ts->phy_x && ts->phy_y) {
1445 1445 /* calculate resolution from size */
1446 1446 ts->x_res = DIV_ROUND_CLOSEST(ts->prop.max_x, ts->phy_x);
1447 1447 ts->y_res = DIV_ROUND_CLOSEST(ts->prop.max_y, ts->phy_y);
··· 1449 1449
1450 1450 input_abs_set_res(ts->input, ABS_MT_POSITION_X, ts->x_res);
1451 1451 input_abs_set_res(ts->input, ABS_MT_POSITION_Y, ts->y_res);
1452 - if (ts->major_res > 0)
1453 - input_abs_set_res(ts->input, ABS_MT_TOUCH_MAJOR, ts->major_res);
1452 + input_abs_set_res(ts->input, ABS_MT_TOUCH_MAJOR, ts->major_res);
1454 1453
1455 1454 error = input_mt_init_slots(ts->input, MAX_CONTACT_NUM,
1456 1455 INPUT_MT_DIRECT | INPUT_MT_DROP_UNUSED);
+2 -2
drivers/input/touchscreen/s6sy761.c
··· 145 145 u8 major = event[4];
146 146 u8 minor = event[5];
147 147 u8 z = event[6] & S6SY761_MASK_Z;
148 - u16 x = (event[1] << 3) | ((event[3] & S6SY761_MASK_X) >> 4);
149 - u16 y = (event[2] << 3) | (event[3] & S6SY761_MASK_Y);
148 + u16 x = (event[1] << 4) | ((event[3] & S6SY761_MASK_X) >> 4);
149 + u16 y = (event[2] << 4) | (event[3] & S6SY761_MASK_Y);
150 150
151 151 input_mt_slot(sdata->input, tid);
152 152
+2 -2
drivers/leds/leds-turris-omnia.c
··· 2 2 /*
3 3 * CZ.NIC's Turris Omnia LEDs driver
4 4 *
5 - * 2020 by Marek Behun <marek.behun@nic.cz>
5 + * 2020 by Marek Behún <kabel@kernel.org>
6 6 */
7 7
8 8 #include <linux/i2c.h>
··· 287 287
288 288 module_i2c_driver(omnia_leds_driver);
289 289
290 - MODULE_AUTHOR("Marek Behun <marek.behun@nic.cz>");
290 + MODULE_AUTHOR("Marek Behun <kabel@kernel.org>");
291 291 MODULE_DESCRIPTION("CZ.NIC's Turris Omnia LEDs");
292 292 MODULE_LICENSE("GPL v2");
+2 -2
drivers/mailbox/armada-37xx-rwtm-mailbox.c
··· 2 2 /*
3 3 * rWTM BIU Mailbox driver for Armada 37xx
4 4 *
5 - * Author: Marek Behun <marek.behun@nic.cz>
5 + * Author: Marek Behún <kabel@kernel.org>
6 6 */
7 7
8 8 #include <linux/device.h>
··· 203 203
204 204 MODULE_LICENSE("GPL v2");
205 205 MODULE_DESCRIPTION("rWTM BIU Mailbox driver for Armada 37xx");
206 - MODULE_AUTHOR("Marek Behun <marek.behun@nic.cz>");
206 + MODULE_AUTHOR("Marek Behun <kabel@kernel.org>");
+8 -3
drivers/md/dm-verity-fec.c
··· 65 65 u8 *res;
66 66
67 67 position = (index + rsb) * v->fec->roots;
68 - block = div64_u64_rem(position, v->fec->roots << SECTOR_SHIFT, &rem);
68 + block = div64_u64_rem(position, v->fec->io_size, &rem);
69 69 *offset = (unsigned)rem;
70 70
71 71 res = dm_bufio_read(v->fec->bufio, block, buf);
··· 154 154
155 155 /* read the next block when we run out of parity bytes */
156 156 offset += v->fec->roots;
157 - if (offset >= v->fec->roots << SECTOR_SHIFT) {
157 + if (offset >= v->fec->io_size) {
158 158 dm_bufio_release(buf);
159 159
160 160 par = fec_read_parity(v, rsb, block_offset, &offset, &buf);
··· 742 742 return -E2BIG;
743 743 }
744 744
745 + if ((f->roots << SECTOR_SHIFT) & ((1 << v->data_dev_block_bits) - 1))
746 + f->io_size = 1 << v->data_dev_block_bits;
747 + else
748 + f->io_size = v->fec->roots << SECTOR_SHIFT;
749 +
745 750 f->bufio = dm_bufio_client_create(f->dev->bdev,
746 - f->roots << SECTOR_SHIFT,
751 + f->io_size,
747 752 1, 0, NULL, NULL);
748 753 if (IS_ERR(f->bufio)) {
749 754 ti->error = "Cannot initialize FEC bufio client";
+1
drivers/md/dm-verity-fec.h
··· 36 36 struct dm_dev *dev; /* parity data device */
37 37 struct dm_bufio_client *data_bufio; /* for data dev access */
38 38 struct dm_bufio_client *bufio; /* for parity data access */
39 + size_t io_size; /* IO size for roots */
39 40 sector_t start; /* parity data start in blocks */
40 41 sector_t blocks; /* number of blocks covered */
41 42 sector_t rounds; /* number of interleaving rounds */
+2 -2
drivers/mtd/nand/raw/mtk_nand.c
··· 488 488 return 0;
489 489 case NAND_OP_WAITRDY_INSTR:
490 490 return readl_poll_timeout(nfc->regs + NFI_STA, status,
491 - status & STA_BUSY, 20,
492 - instr->ctx.waitrdy.timeout_ms);
491 + !(status & STA_BUSY), 20,
492 + instr->ctx.waitrdy.timeout_ms * 1000);
493 493 default:
494 494 break;
495 495 }
+13 -17
drivers/net/dsa/mv88e6xxx/chip.c
··· 3161 3161 return err;
3162 3162 }
3163 3163
3164 + /* prod_id for switch families which do not have a PHY model number */
3165 + static const u16 family_prod_id_table[] = {
3166 + [MV88E6XXX_FAMILY_6341] = MV88E6XXX_PORT_SWITCH_ID_PROD_6341,
3167 + [MV88E6XXX_FAMILY_6390] = MV88E6XXX_PORT_SWITCH_ID_PROD_6390,
3168 + };
3169 +
3164 3170 static int mv88e6xxx_mdio_read(struct mii_bus *bus, int phy, int reg)
3165 3171 {
3166 3172 struct mv88e6xxx_mdio_bus *mdio_bus = bus->priv;
3167 3173 struct mv88e6xxx_chip *chip = mdio_bus->chip;
3174 + u16 prod_id;
3168 3175 u16 val;
3169 3176 int err;
3170 3177
··· 3182 3175 err = chip->info->ops->phy_read(chip, bus, phy, reg, &val);
3183 3176 mv88e6xxx_reg_unlock(chip);
3184 3177
3185 - if (reg == MII_PHYSID2) {
3186 - /* Some internal PHYs don't have a model number. */
3187 - if (chip->info->family != MV88E6XXX_FAMILY_6165)
3188 - /* Then there is the 6165 family. It gets is
3189 - * PHYs correct. But it can also have two
3190 - * SERDES interfaces in the PHY address
3191 - * space. And these don't have a model
3192 - * number. But they are not PHYs, so we don't
3193 - * want to give them something a PHY driver
3194 - * will recognise.
3195 - *
3196 - * Use the mv88e6390 family model number
3197 - * instead, for anything which really could be
3198 - * a PHY,
3199 - */
3200 - if (!(val & 0x3f0))
3201 - val |= MV88E6XXX_PORT_SWITCH_ID_PROD_6390 >> 4;
3178 + /* Some internal PHYs don't have a model number. */
3179 + if (reg == MII_PHYSID2 && !(val & 0x3f0) &&
3180 + chip->info->family < ARRAY_SIZE(family_prod_id_table)) {
3181 + prod_id = family_prod_id_table[chip->info->family];
3182 + if (prod_id)
3183 + val |= prod_id >> 4;
3202 3184 }
3203 3185
3204 3186 return err ? err : val;
+1 -1
drivers/net/ethernet/cadence/macb_main.c
··· 3946 3946 reg = gem_readl(bp, DCFG8);
3947 3947 bp->max_tuples = min((GEM_BFEXT(SCR2CMP, reg) / 3),
3948 3948 GEM_BFEXT(T2SCR, reg));
3949 + INIT_LIST_HEAD(&bp->rx_fs_list.list);
3949 3950 if (bp->max_tuples > 0) {
3950 3951 /* also needs one ethtype match to check IPv4 */
3951 3952 if (GEM_BFEXT(SCR2ETH, reg) > 0) {
··· 3957 3956 /* Filtering is supported in hw but don't enable it in kernel now */
3958 3957 dev->hw_features |= NETIF_F_NTUPLE;
3959 3958 /* init Rx flow definitions */
3960 - INIT_LIST_HEAD(&bp->rx_fs_list.list);
3961 3959 bp->rx_fs_list.count = 0;
3962 3960 spin_lock_init(&bp->rx_fs_lock);
3963 3961 } else
+1 -1
drivers/net/ethernet/cavium/liquidio/cn66xx_regs.h
··· 412 412 | CN6XXX_INTR_M0UNWI_ERR \
413 413 | CN6XXX_INTR_M1UPB0_ERR \
414 414 | CN6XXX_INTR_M1UPWI_ERR \
415 - | CN6XXX_INTR_M1UPB0_ERR \
415 + | CN6XXX_INTR_M1UNB0_ERR \
416 416 | CN6XXX_INTR_M1UNWI_ERR \
417 417 | CN6XXX_INTR_INSTR_DB_OF_ERR \
418 418 | CN6XXX_INTR_SLIST_DB_OF_ERR \
+11 -91
drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
··· 350 350 }
351 351
352 352 /*
353 - * chcr_ktls_mark_tcb_close: mark tcb state to CLOSE
354 - * @tx_info - driver specific tls info.
355 - * return: NET_TX_OK/NET_XMIT_DROP.
356 - */
357 - static int chcr_ktls_mark_tcb_close(struct chcr_ktls_info *tx_info)
358 - {
359 - return chcr_set_tcb_field(tx_info, TCB_T_STATE_W,
360 - TCB_T_STATE_V(TCB_T_STATE_M),
361 - CHCR_TCB_STATE_CLOSED, 1);
362 - }
363 -
364 - /*
365 353 * chcr_ktls_dev_del: call back for tls_dev_del.
366 354 * Remove the tid and l2t entry and close the connection.
367 355 * it per connection basis.
··· 383 395
384 396 /* clear tid */
385 397 if (tx_info->tid != -1) {
386 - /* clear tcb state and then release tid */
387 - chcr_ktls_mark_tcb_close(tx_info);
388 398 cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan,
389 399 tx_info->tid, tx_info->ip_family);
390 400 }
··· 560 574 return 0;
561 575
562 576 free_tid:
563 - chcr_ktls_mark_tcb_close(tx_info);
564 577 #if IS_ENABLED(CONFIG_IPV6)
565 578 /* clear clip entry */
566 579 if (tx_info->ip_family == AF_INET6)
··· 657 672 if (tx_info->pending_close) {
658 673 spin_unlock(&tx_info->lock);
659 674 if (!status) {
660 - /* it's a late success, tcb status is established,
661 - * mark it close.
662 - */
663 - chcr_ktls_mark_tcb_close(tx_info);
664 675 cxgb4_remove_tid(&tx_info->adap->tids, tx_info->tx_chan,
665 676 tid, tx_info->ip_family);
666 677 }
··· 1645 1664 }
1646 1665
1647 1666 /*
1648 - * chcr_ktls_update_snd_una: Reset the SEND_UNA. It will be done to avoid
1649 - * sending the same segment again. It will discard the segment which is before
1650 - * the current tx max.
1651 - * @tx_info - driver specific tls info.
1652 - * @q - TX queue.
1653 - * return: NET_TX_OK/NET_XMIT_DROP.
1654 - */
1655 - static int chcr_ktls_update_snd_una(struct chcr_ktls_info *tx_info,
1656 - struct sge_eth_txq *q)
1657 - {
1658 - struct fw_ulptx_wr *wr;
1659 - unsigned int ndesc;
1660 - int credits;
1661 - void *pos;
1662 - u32 len;
1663 -
1664 - len = sizeof(*wr) + roundup(CHCR_SET_TCB_FIELD_LEN, 16);
1665 - ndesc = DIV_ROUND_UP(len, 64);
1666 -
1667 - credits = chcr_txq_avail(&q->q) - ndesc;
1668 - if (unlikely(credits < 0)) {
1669 - chcr_eth_txq_stop(q);
1670 - return NETDEV_TX_BUSY;
1671 - }
1672 -
1673 - pos = &q->q.desc[q->q.pidx];
1674 -
1675 - wr = pos;
1676 - /* ULPTX wr */
1677 - wr->op_to_compl = htonl(FW_WR_OP_V(FW_ULPTX_WR));
1678 - wr->cookie = 0;
1679 - /* fill len in wr field */
1680 - wr->flowid_len16 = htonl(FW_WR_LEN16_V(DIV_ROUND_UP(len, 16)));
1681 -
1682 - pos += sizeof(*wr);
1683 -
1684 - pos = chcr_write_cpl_set_tcb_ulp(tx_info, q, tx_info->tid, pos,
1685 - TCB_SND_UNA_RAW_W,
1686 - TCB_SND_UNA_RAW_V(TCB_SND_UNA_RAW_M),
1687 - TCB_SND_UNA_RAW_V(0), 0);
1688 -
1689 - chcr_txq_advance(&q->q, ndesc);
1690 - cxgb4_ring_tx_db(tx_info->adap, &q->q, ndesc);
1691 -
1692 - return 0;
1693 - }
1694 -
1695 - /*
1696 1667 * chcr_end_part_handler: This handler will handle the record which
1697 1668 * is complete or if record's end part is received. T6 adapter has a issue that
1698 1669 * it can't send out TAG with partial record so if its an end part then we have
··· 1668 1735 struct sge_eth_txq *q, u32 skb_offset,
1669 1736 u32 tls_end_offset, bool last_wr)
1670 1737 {
1738 + bool free_skb_if_tx_fails = false;
1671 1739 struct sk_buff *nskb = NULL;
1740 +
1672 1741 /* check if it is a complete record */
1673 1742 if (tls_end_offset == record->len) {
1674 1743 nskb = skb;
··· 1693 1758
1694 1759 if (last_wr)
1695 1760 dev_kfree_skb_any(skb);
1761 + else
1762 + free_skb_if_tx_fails = true;
1696 1763
1697 1764 last_wr = true;
1698 1765
··· 1706 1769 record->num_frags,
1707 1770 (last_wr && tcp_push_no_fin),
1708 1771 mss)) {
1772 + if (free_skb_if_tx_fails)
1773 + dev_kfree_skb_any(skb);
1709 1774 goto out;
1710 1775 }
1711 1776 tx_info->prev_seq = record->end_seq;
··· 1844 1905 /* reset tcp_seq as per the prior_data_required len */
1845 1906 tcp_seq -= prior_data_len;
1846 1907 }
1847 - /* reset snd una, so the middle record won't send the already
1848 - * sent part.
1849 - */
1850 - if (chcr_ktls_update_snd_una(tx_info, q))
1851 - goto out;
1852 1908 atomic64_inc(&tx_info->adap->ch_ktls_stats.ktls_tx_middle_pkts);
1853 1909 } else {
1854 1910 atomic64_inc(&tx_info->adap->ch_ktls_stats.ktls_tx_start_pkts);
··· 1944 2010 * we will send the complete record again.
1945 2011 */
1946 2012
2013 + spin_lock_irqsave(&tx_ctx->base.lock, flags);
2014 +
1947 2015 do {
1948 - int i;
1949 2016
1950 2017 cxgb4_reclaim_completed_tx(adap, &q->q, true);
1951 - /* lock taken */
1952 - spin_lock_irqsave(&tx_ctx->base.lock, flags);
1953 2018 /* fetch the tls record */
1954 2019 record = tls_get_record(&tx_ctx->base, tcp_seq,
1955 2020 &tx_info->record_no);
··· 2007 2074 tls_end_offset, skb_offset,
2008 2075 0);
2009 2076
2010 - spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
2011 2077 if (ret) {
2012 2078 /* free the refcount taken earlier */
2013 2079 if (tls_end_offset < data_len)
2014 2080 dev_kfree_skb_any(skb);
2081 + spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
2015 2082 goto out;
2016 2083 }
2017 2084
··· 2020 2087 skb_offset += tls_end_offset;
2021 2088 continue;
2022 2089 }
2023 -
2024 - /* increase page reference count of the record, so that there
2025 - * won't be any chance of page free in middle if in case stack
2026 - * receives ACK and try to delete the record.
2027 - */
2028 - for (i = 0; i < record->num_frags; i++)
2029 - __skb_frag_ref(&record->frags[i]);
2030 - /* lock cleared */
2031 - spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
2032 -
2033 2090
2034 2091 /* if a tls record is finishing in this SKB */
2035 2092 if (tls_end_offset <= data_len) {
··· 2045 2122 data_len = 0;
2046 2123 }
2047 2124
2048 - /* clear the frag ref count which increased locally before */
2049 - for (i = 0; i < record->num_frags; i++) {
2050 - /* clear the frag ref count */
2051 - __skb_frag_unref(&record->frags[i]);
2052 - }
2053 2125 /* if any failure, come out from the loop. */
2054 2126 if (ret) {
2127 + spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
2055 2128 if (th->fin)
2056 2129 dev_kfree_skb_any(skb);
··· 2062 2143
2063 2144 } while (data_len > 0);
2064 2145
2146 + spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
2065 2147 atomic64_inc(&port_stats->ktls_tx_encrypted_packets);
2066 2148 atomic64_add(skb_data_len, &port_stats->ktls_tx_encrypted_bytes);
2067 2149
+4 -2
drivers/net/ethernet/davicom/dm9000.c
··· 1469 1469
1470 1470 /* Init network device */
1471 1471 ndev = alloc_etherdev(sizeof(struct board_info));
1472 - if (!ndev)
1473 - return -ENOMEM;
1472 + if (!ndev) {
1473 + ret = -ENOMEM;
1474 + goto out_regulator_disable;
1475 + }
1474 1476
1475 1477 SET_NETDEV_DEV(ndev, &pdev->dev);
1476 1478
+9 -16
drivers/net/ethernet/ibm/ibmvnic.c
··· 1173 1173
1174 1174 rc = set_link_state(adapter, IBMVNIC_LOGICAL_LNK_UP);
1175 1175 if (rc) {
1176 - for (i = 0; i < adapter->req_rx_queues; i++)
1177 - napi_disable(&adapter->napi[i]);
1176 + ibmvnic_napi_disable(adapter);
1178 1177 release_resources(adapter);
1179 1178 return rc;
1180 1179 }
1181 1180
1182 1181 netif_tx_start_all_queues(netdev);
1183 1182
1184 - if (prev_state == VNIC_CLOSED) {
1185 - for (i = 0; i < adapter->req_rx_queues; i++)
1186 - napi_schedule(&adapter->napi[i]);
1187 - }
1188 1182
1189 1183 adapter->state = VNIC_OPEN;
1190 1184 return rc;
··· 1961 1967 u64 old_num_rx_queues, old_num_tx_queues;
1962 1968 u64 old_num_rx_slots, old_num_tx_slots;
1963 1969 struct net_device *netdev = adapter->netdev;
1964 - int i, rc;
1970 + int rc;
1965 1971
1966 1972 netdev_dbg(adapter->netdev,
1967 1973 "[S:%s FOP:%d] Reset reason: %s, reset_state: %s\n",
··· 2151 2157
2152 2158 /* refresh device's multicast list */
2153 2159 ibmvnic_set_multi(netdev);
2154 -
2155 - /* kick napi */
2156 - for (i = 0; i < adapter->req_rx_queues; i++)
2157 - napi_schedule(&adapter->napi[i]);
2158 2160
2159 2161 if (adapter->reset_reason == VNIC_RESET_FAILOVER ||
2160 2162 adapter->reset_reason == VNIC_RESET_MOBILITY)
··· 3246 3256
3247 3257 next = ibmvnic_next_scrq(adapter, scrq);
3248 3258 for (i = 0; i < next->tx_comp.num_comps; i++) {
3249 - if (next->tx_comp.rcs[i])
3250 - dev_err(dev, "tx error %x\n",
3251 - next->tx_comp.rcs[i]);
3252 3259 index = be32_to_cpu(next->tx_comp.correlators[i]);
3253 3260 if (index & IBMVNIC_TSO_POOL_MASK) {
3254 3261 tx_pool = &adapter->tso_pool[pool];
··· 3259 3272 num_entries += txbuff->num_entries;
3260 3273 if (txbuff->skb) {
3261 3274 total_bytes += txbuff->skb->len;
3262 - dev_consume_skb_irq(txbuff->skb);
3275 + if (next->tx_comp.rcs[i]) {
3276 + dev_err(dev, "tx error %x\n",
3277 + next->tx_comp.rcs[i]);
3278 + dev_kfree_skb_irq(txbuff->skb);
3279 + } else {
3280 + dev_consume_skb_irq(txbuff->skb);
3281 + }
3263 3282 txbuff->skb = NULL;
3264 3283 } else {
3265 3284 netdev_warn(adapter->netdev,
+6
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 12357 12357 {
12358 12358 int err = 0;
12359 12359 int size;
12360 + u16 pow;
12360 12361
12361 12362 /* Set default capability flags */
12362 12363 pf->flags = I40E_FLAG_RX_CSUM_ENABLED |
··· 12376 12375 pf->rss_table_size = pf->hw.func_caps.rss_table_size;
12377 12376 pf->rss_size_max = min_t(int, pf->rss_size_max,
12378 12377 pf->hw.func_caps.num_tx_qp);
12378 +
12379 + /* find the next higher power-of-2 of num cpus */
12380 + pow = roundup_pow_of_two(num_online_cpus());
12381 + pf->rss_size_max = min_t(int, pf->rss_size_max, pow);
12382 +
12379 12383 if (pf->hw.func_caps.rss) {
12380 12384 pf->flags |= I40E_FLAG_RSS_ENABLED;
12381 12385 pf->alloc_rss_size = min_t(int, pf->rss_size_max,
+2 -2
drivers/net/ethernet/intel/ice/ice_dcb.c
··· 747 747 struct ice_port_info *pi)
748 748 {
749 749 u32 status, tlv_status = le32_to_cpu(cee_cfg->tlv_status);
750 - u32 ice_aqc_cee_status_mask, ice_aqc_cee_status_shift;
751 - u8 i, j, err, sync, oper, app_index, ice_app_sel_type;
750 + u32 ice_aqc_cee_status_mask, ice_aqc_cee_status_shift, j;
751 + u8 i, err, sync, oper, app_index, ice_app_sel_type;
752 752 u16 app_prio = le16_to_cpu(cee_cfg->oper_app_prio);
753 753 u16 ice_aqc_cee_app_mask, ice_aqc_cee_app_shift;
754 754 struct ice_dcbx_cfg *cmp_dcbcfg, *dcbcfg;
+13 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 6536 6536 return err;
6537 6537 }
6538 6538
6539 + static int ixgbe_rx_napi_id(struct ixgbe_ring *rx_ring)
6540 + {
6541 + struct ixgbe_q_vector *q_vector = rx_ring->q_vector;
6542 +
6543 + return q_vector ? q_vector->napi.napi_id : 0;
6544 + }
6545 +
6539 6546 /**
6540 6547 * ixgbe_setup_rx_resources - allocate Rx resources (Descriptors)
6541 6548 * @adapter: pointer to ixgbe_adapter
··· 6590 6583
6591 6584 /* XDP RX-queue info */
6592 6585 if (xdp_rxq_info_reg(&rx_ring->xdp_rxq, adapter->netdev,
6593 - rx_ring->queue_index, rx_ring->q_vector->napi.napi_id) < 0)
6586 + rx_ring->queue_index, ixgbe_rx_napi_id(rx_ring)) < 0)
6594 6587 goto err;
6595 6588
6596 6589 rx_ring->xdp_prog = adapter->xdp_prog;
··· 6899 6892
6900 6893 adapter->hw.hw_addr = adapter->io_addr;
6901 6894
6895 + err = pci_enable_device_mem(pdev);
6896 + if (err) {
6897 + e_dev_err("Cannot enable PCI device from suspend\n");
6898 + return err;
6899 + }
6902 6900 smp_mb__before_atomic();
6903 6901 clear_bit(__IXGBE_DISABLED, &adapter->state);
6904 6902 pci_set_master(pdev);
+5
drivers/net/ethernet/mellanox/mlx5/core/devlink.c
··· 246 246 struct mlx5_devlink_trap *dl_trap;
247 247 int err = 0;
248 248
249 + if (is_mdev_switchdev_mode(dev)) {
250 + NL_SET_ERR_MSG_MOD(extack, "Devlink traps can't be set in switchdev mode");
251 + return -EOPNOTSUPP;
252 + }
253 +
249 254 dl_trap = mlx5_find_trap_by_id(dev, trap->id);
250 255 if (!dl_trap) {
251 256 mlx5_core_err(dev, "Devlink trap: Set action on invalid trap id 0x%x", trap->id);
+4 -19
drivers/net/ethernet/mellanox/mlx5/core/en/port.c
··· 387 387 *_policy = MLX5_GET(pplm_reg, _buf, fec_override_admin_##link); \
388 388 } while (0)
389 389
390 - #define MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(buf, policy, write, link) \
391 - do { \
392 - unsigned long policy_long; \
393 - u16 *__policy = &(policy); \
394 - bool _write = (write); \
395 - \
396 - policy_long = *__policy; \
397 - if (_write && *__policy) \
398 - *__policy = find_first_bit(&policy_long, \
399 - sizeof(policy_long) * BITS_PER_BYTE);\
400 - MLX5E_FEC_OVERRIDE_ADMIN_POLICY(buf, *__policy, _write, link); \
401 - if (!_write && *__policy) \
402 - *__policy = 1 << *__policy; \
403 - } while (0)
404 -
405 390 /* get/set FEC admin field for a given speed */
406 391 static int mlx5e_fec_admin_field(u32 *pplm, u16 *fec_policy, bool write,
407 392 enum mlx5e_fec_supported_link_mode link_mode)
··· 408 423 MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 100g);
409 424 break;
410 425 case MLX5E_FEC_SUPPORTED_LINK_MODE_50G_1X:
411 - MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(pplm, *fec_policy, write, 50g_1x);
426 + MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 50g_1x);
412 427 break;
413 428 case MLX5E_FEC_SUPPORTED_LINK_MODE_100G_2X:
414 - MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(pplm, *fec_policy, write, 100g_2x);
429 + MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 100g_2x);
415 430 break;
416 431 case MLX5E_FEC_SUPPORTED_LINK_MODE_200G_4X:
417 - MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(pplm, *fec_policy, write, 200g_4x);
432 + MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 200g_4x);
418 433 break;
419 434 case MLX5E_FEC_SUPPORTED_LINK_MODE_400G_8X:
420 - MLX5E_FEC_OVERRIDE_ADMIN_50G_POLICY(pplm, *fec_policy, write, 400g_8x);
435 + MLX5E_FEC_OVERRIDE_ADMIN_POLICY(pplm, *fec_policy, write, 400g_8x);
421 436 break;
422 437 default:
423 438 return -EINVAL;
+3
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 1942 1942 return 0;
1943 1943
1944 1944 flow_rule_match_meta(rule, &match);
1945 + if (!match.mask->ingress_ifindex)
1946 + return 0;
1947 +
1945 1948 if (match.mask->ingress_ifindex != 0xFFFFFFFF) {
1946 1949 NL_SET_ERR_MSG_MOD(extack, "Unsupported ingress ifindex mask");
1947 1950 return -EOPNOTSUPP;
+7 -2
drivers/net/ethernet/realtek/r8169_main.c
··· 2386 2386
2387 2387 if (pci_is_pcie(tp->pci_dev) && tp->supports_gmii)
2388 2388 pcie_set_readrq(tp->pci_dev, readrq);
2389 +
2390 + /* Chip doesn't support pause in jumbo mode */
2391 + linkmode_mod_bit(ETHTOOL_LINK_MODE_Pause_BIT,
2392 + tp->phydev->advertising, !jumbo);
2393 + linkmode_mod_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
2394 + tp->phydev->advertising, !jumbo);
2395 + phy_start_aneg(tp->phydev);
2389 2396 }
2390 2397
2391 2398 DECLARE_RTL_COND(rtl_chipcmd_cond)
··· 4667 4660
4668 4661 if (!tp->supports_gmii)
4669 4662 phy_set_max_speed(phydev, SPEED_100);
4670 -
4671 - phy_support_asym_pause(phydev);
4672 4663
4673 4664 phy_attached_info(phydev);
4674 4665
+1 -77
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 1558 1558 }
1559 1559
1560 1560 /**
1561 - * dma_recycle_rx_skbufs - recycle RX dma buffers
1562 - * @priv: private structure
1563 - * @queue: RX queue index
1564 - */
1565 - static void dma_recycle_rx_skbufs(struct stmmac_priv *priv, u32 queue)
1566 - {
1567 - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
1568 - int i;
1569 -
1570 - for (i = 0; i < priv->dma_rx_size; i++) {
1571 - struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
1572 -
1573 - if (buf->page) {
1574 - page_pool_recycle_direct(rx_q->page_pool, buf->page);
1575 - buf->page = NULL;
1576 - }
1577 -
1578 - if (priv->sph && buf->sec_page) {
1579 - page_pool_recycle_direct(rx_q->page_pool, buf->sec_page);
1580 - buf->sec_page = NULL;
1581 - }
1582 - }
1583 - }
1584 -
1585 - /**
1586 1561 * dma_free_rx_xskbufs - free RX dma buffers from XSK pool
1587 1562 * @priv: private structure
1588 1563 * @queue: RX queue index
··· 1605 1630 }
1606 1631
1607 1632 return 0;
1608 - }
1609 -
1610 - /**
1611 - * stmmac_reinit_rx_buffers - reinit the RX descriptor buffer.
1612 - * @priv: driver private structure
1613 - * Description: this function is called to re-allocate a receive buffer, perform
1614 - * the DMA mapping and init the descriptor.
1615 - */
1616 - static void stmmac_reinit_rx_buffers(struct stmmac_priv *priv)
1617 - {
1618 - u32 rx_count = priv->plat->rx_queues_to_use;
1619 - u32 queue;
1620 -
1621 - for (queue = 0; queue < rx_count; queue++) {
1622 - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
1623 -
1624 - if (rx_q->xsk_pool)
1625 - dma_free_rx_xskbufs(priv, queue);
1626 - else
1627 - dma_recycle_rx_skbufs(priv, queue);
1628 -
1629 - rx_q->buf_alloc_num = 0;
1630 - }
1631 -
1632 - for (queue = 0; queue < rx_count; queue++) {
1633 - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
1634 - int ret;
1635 -
1636 - if (rx_q->xsk_pool) {
1637 - /* RX XDP ZC buffer pool may not be populated, e.g.
1638 - * xdpsock TX-only.
1639 - */
1640 - stmmac_alloc_rx_buffers_zc(priv, queue);
1641 - } else {
1642 - ret = stmmac_alloc_rx_buffers(priv, queue, GFP_KERNEL);
1643 - if (ret < 0)
1644 - goto err_reinit_rx_buffers;
1645 - }
1646 - }
1647 -
1648 - return;
1649 -
1650 - err_reinit_rx_buffers:
1651 - while (queue >= 0) {
1652 - dma_free_rx_skbufs(priv, queue);
1653 -
1654 - if (queue == 0)
1655 - break;
1656 -
1657 - queue--;
1658 - }
1659 1633 }
1660 1634
1661 1635 static struct xsk_buff_pool *stmmac_get_xsk_pool(struct stmmac_priv *priv, u32 queue)
··· 7248 7324 mutex_lock(&priv->lock);
7249 7325
7250 7326 stmmac_reset_queues_param(priv);
7251 - stmmac_reinit_rx_buffers(priv);
7327 +
7252 7328 stmmac_free_tx_skbufs(priv);
7253 7329 stmmac_clear_descriptors(priv);
7254 7330
+6
drivers/net/geneve.c
··· 892 892 __be16 sport; 893 893 int err; 894 894 895 + if (!pskb_network_may_pull(skb, sizeof(struct iphdr))) 896 + return -EINVAL; 897 + 895 898 sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true); 896 899 rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info, 897 900 geneve->cfg.info.key.tp_dst, sport); ··· 988 985 __u8 prio, ttl; 989 986 __be16 sport; 990 987 int err; 988 + 989 + if (!pskb_network_may_pull(skb, sizeof(struct ipv6hdr))) 990 + return -EINVAL; 991 991 992 992 sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true); 993 993 dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
+29 -3
drivers/net/phy/marvell.c
··· 3021 3021 .get_stats = marvell_get_stats, 3022 3022 }, 3023 3023 { 3024 - .phy_id = MARVELL_PHY_ID_88E6390, 3024 + .phy_id = MARVELL_PHY_ID_88E6341_FAMILY, 3025 3025 .phy_id_mask = MARVELL_PHY_ID_MASK, 3026 - .name = "Marvell 88E6390", 3026 + .name = "Marvell 88E6341 Family", 3027 + /* PHY_GBIT_FEATURES */ 3028 + .flags = PHY_POLL_CABLE_TEST, 3029 + .probe = m88e1510_probe, 3030 + .config_init = marvell_config_init, 3031 + .config_aneg = m88e6390_config_aneg, 3032 + .read_status = marvell_read_status, 3033 + .config_intr = marvell_config_intr, 3034 + .handle_interrupt = marvell_handle_interrupt, 3035 + .resume = genphy_resume, 3036 + .suspend = genphy_suspend, 3037 + .read_page = marvell_read_page, 3038 + .write_page = marvell_write_page, 3039 + .get_sset_count = marvell_get_sset_count, 3040 + .get_strings = marvell_get_strings, 3041 + .get_stats = marvell_get_stats, 3042 + .get_tunable = m88e1540_get_tunable, 3043 + .set_tunable = m88e1540_set_tunable, 3044 + .cable_test_start = marvell_vct7_cable_test_start, 3045 + .cable_test_tdr_start = marvell_vct5_cable_test_tdr_start, 3046 + .cable_test_get_status = marvell_vct7_cable_test_get_status, 3047 + }, 3048 + { 3049 + .phy_id = MARVELL_PHY_ID_88E6390_FAMILY, 3050 + .phy_id_mask = MARVELL_PHY_ID_MASK, 3051 + .name = "Marvell 88E6390 Family", 3027 3052 /* PHY_GBIT_FEATURES */ 3028 3053 .flags = PHY_POLL_CABLE_TEST, 3029 3054 .probe = m88e6390_probe, ··· 3132 3107 { MARVELL_PHY_ID_88E1540, MARVELL_PHY_ID_MASK }, 3133 3108 { MARVELL_PHY_ID_88E1545, MARVELL_PHY_ID_MASK }, 3134 3109 { MARVELL_PHY_ID_88E3016, MARVELL_PHY_ID_MASK }, 3135 - { MARVELL_PHY_ID_88E6390, MARVELL_PHY_ID_MASK }, 3110 + { MARVELL_PHY_ID_88E6341_FAMILY, MARVELL_PHY_ID_MASK }, 3111 + { MARVELL_PHY_ID_88E6390_FAMILY, MARVELL_PHY_ID_MASK }, 3136 3112 { MARVELL_PHY_ID_88E1340S, MARVELL_PHY_ID_MASK }, 3137 3113 { MARVELL_PHY_ID_88E1548P, MARVELL_PHY_ID_MASK }, 3138 3114 { }
+4 -6
drivers/net/vrf.c
··· 471 471 472 472 skb_dst_drop(skb); 473 473 474 - /* if dst.dev is loopback or the VRF device again this is locally 475 - * originated traffic destined to a local address. Short circuit 476 - * to Rx path 474 + /* if dst.dev is the VRF device again this is locally originated traffic 475 + * destined to a local address. Short circuit to Rx path. 477 476 */ 478 477 if (dst->dev == dev) 479 478 return vrf_local_xmit(skb, dev, dst); ··· 546 547 547 548 skb_dst_drop(skb); 548 549 549 - /* if dst.dev is loopback or the VRF device again this is locally 550 - * originated traffic destined to a local address. Short circuit 551 - * to Rx path 550 + /* if dst.dev is the VRF device again this is locally originated traffic 551 + * destined to a local address. Short circuit to Rx path. 552 552 */ 553 553 if (rt->dst.dev == vrf_dev) 554 554 return vrf_local_xmit(skb, vrf_dev, &rt->dst);
+8 -4
drivers/net/xen-netback/xenbus.c
··· 824 824 xenvif_carrier_on(be->vif); 825 825 826 826 unregister_hotplug_status_watch(be); 827 - err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch, NULL, 828 - hotplug_status_changed, 829 - "%s/%s", dev->nodename, "hotplug-status"); 830 - if (!err) 827 + if (xenbus_exists(XBT_NIL, dev->nodename, "hotplug-status")) { 828 + err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch, 829 + NULL, hotplug_status_changed, 830 + "%s/%s", dev->nodename, 831 + "hotplug-status"); 832 + if (err) 833 + goto err; 831 834 be->have_hotplug_status_watch = 1; 835 + } 832 836 833 837 netif_tx_wake_all_queues(be->vif->dev); 834 838
+6 -8
drivers/nvdimm/bus.c
··· 631 631 struct nd_region *nd_region = to_nd_region(dev->parent); 632 632 int disk_ro = get_disk_ro(disk); 633 633 634 - /* 635 - * Upgrade to read-only if the region is read-only preserve as 636 - * read-only if the disk is already read-only. 637 - */ 638 - if (disk_ro || nd_region->ro == disk_ro) 634 + /* catch the disk up with the region ro state */ 635 + if (disk_ro == nd_region->ro) 639 636 return; 640 637 641 - dev_info(dev, "%s read-only, marking %s read-only\n", 642 - dev_name(&nd_region->dev), disk->disk_name); 643 - set_disk_ro(disk, 1); 638 + dev_info(dev, "%s read-%s, marking %s read-%s\n", 639 + dev_name(&nd_region->dev), nd_region->ro ? "only" : "write", 640 + disk->disk_name, nd_region->ro ? "only" : "write"); 641 + set_disk_ro(disk, nd_region->ro); 644 642 } 645 643 EXPORT_SYMBOL(nvdimm_check_and_set_ro); 646 644
+33 -4
drivers/nvdimm/pmem.c
··· 26 26 #include <linux/mm.h> 27 27 #include <asm/cacheflush.h> 28 28 #include "pmem.h" 29 + #include "btt.h" 29 30 #include "pfn.h" 30 31 #include "nd.h" 31 32 ··· 586 585 nvdimm_flush(to_nd_region(dev->parent), NULL); 587 586 } 588 587 589 - static void nd_pmem_notify(struct device *dev, enum nvdimm_event event) 588 + static void pmem_revalidate_poison(struct device *dev) 590 589 { 591 590 struct nd_region *nd_region; 592 591 resource_size_t offset = 0, end_trunc = 0; ··· 595 594 struct badblocks *bb; 596 595 struct range range; 597 596 struct kernfs_node *bb_state; 598 - 599 - if (event != NVDIMM_REVALIDATE_POISON) 600 - return; 601 597 602 598 if (is_nd_btt(dev)) { 603 599 struct nd_btt *nd_btt = to_nd_btt(dev); ··· 631 633 nvdimm_badblocks_populate(nd_region, bb, &range); 632 634 if (bb_state) 633 635 sysfs_notify_dirent(bb_state); 636 + } 637 + 638 + static void pmem_revalidate_region(struct device *dev) 639 + { 640 + struct pmem_device *pmem; 641 + 642 + if (is_nd_btt(dev)) { 643 + struct nd_btt *nd_btt = to_nd_btt(dev); 644 + struct btt *btt = nd_btt->btt; 645 + 646 + nvdimm_check_and_set_ro(btt->btt_disk); 647 + return; 648 + } 649 + 650 + pmem = dev_get_drvdata(dev); 651 + nvdimm_check_and_set_ro(pmem->disk); 652 + } 653 + 654 + static void nd_pmem_notify(struct device *dev, enum nvdimm_event event) 655 + { 656 + switch (event) { 657 + case NVDIMM_REVALIDATE_POISON: 658 + pmem_revalidate_poison(dev); 659 + break; 660 + case NVDIMM_REVALIDATE_REGION: 661 + pmem_revalidate_region(dev); 662 + break; 663 + default: 664 + dev_WARN_ONCE(dev, 1, "notify: unknown event: %d\n", event); 665 + break; 666 + } 634 667 } 635 668 636 669 MODULE_ALIAS("pmem");
+14 -2
drivers/nvdimm/region_devs.c
··· 518 518 return sprintf(buf, "%d\n", nd_region->ro); 519 519 } 520 520 521 + static int revalidate_read_only(struct device *dev, void *data) 522 + { 523 + nd_device_notify(dev, NVDIMM_REVALIDATE_REGION); 524 + return 0; 525 + } 526 + 521 527 static ssize_t read_only_store(struct device *dev, 522 528 struct device_attribute *attr, const char *buf, size_t len) 523 529 { ··· 535 529 return rc; 536 530 537 531 nd_region->ro = ro; 532 + device_for_each_child(dev, NULL, revalidate_read_only); 538 533 return len; 539 534 } 540 535 static DEVICE_ATTR_RW(read_only); ··· 1246 1239 || !IS_ENABLED(CONFIG_ARCH_HAS_PMEM_API)) 1247 1240 return -ENXIO; 1248 1241 1242 + /* Test if an explicit flush function is defined */ 1243 + if (test_bit(ND_REGION_ASYNC, &nd_region->flags) && nd_region->flush) 1244 + return 1; 1245 + 1246 + /* Test if any flush hints for the region are available */ 1249 1247 for (i = 0; i < nd_region->ndr_mappings; i++) { 1250 1248 struct nd_mapping *nd_mapping = &nd_region->mapping[i]; 1251 1249 struct nvdimm *nvdimm = nd_mapping->nvdimm; ··· 1261 1249 } 1262 1250 1263 1251 /* 1264 - * The platform defines dimm devices without hints, assume 1265 - * platform persistence mechanism like ADR 1252 + * The platform defines dimm devices without hints nor explicit flush, 1253 + * assume platform persistence mechanism like ADR 1266 1254 */ 1267 1255 return 0; 1268 1256 }
+12 -3
drivers/ras/cec.c
··· 309 309 return ret; 310 310 } 311 311 312 + /** 313 + * cec_add_elem - Add an element to the CEC array. 314 + * @pfn: page frame number to insert 315 + * 316 + * Return values: 317 + * - <0: on error 318 + * - 0: on success 319 + * - >0: when the inserted pfn was offlined 320 + */ 312 321 static int cec_add_elem(u64 pfn) 313 322 { 314 323 struct ce_array *ca = &ce_arr; 324 + int count, err, ret = 0; 315 325 unsigned int to = 0; 316 - int count, ret = 0; 317 326 318 327 /* 319 328 * We can be called very early on the identify_cpu() path where we are ··· 339 330 if (ca->n == MAX_ELEMS) 340 331 WARN_ON(!del_lru_elem_unlocked(ca)); 341 332 342 - ret = find_elem(ca, pfn, &to); 343 - if (ret < 0) { 333 + err = find_elem(ca, pfn, &to); 334 + if (err < 0) { 344 335 /* 345 336 * Shift range [to-end] to make room for one more element. 346 337 */
+45 -33
drivers/scsi/hpsa_cmd.h
··· 20 20 #ifndef HPSA_CMD_H 21 21 #define HPSA_CMD_H 22 22 23 + #include <linux/compiler.h> 24 + 25 + #include <linux/build_bug.h> /* static_assert */ 26 + #include <linux/stddef.h> /* offsetof */ 27 + 23 28 /* general boundary defintions */ 24 29 #define SENSEINFOBYTES 32 /* may vary between hbas */ 25 30 #define SG_ENTRIES_IN_CMD 32 /* Max SG entries excluding chain blocks */ ··· 205 200 MAX_EXT_TARGETS + 1) /* + 1 is for the controller itself */ 206 201 207 202 /* SCSI-3 Commands */ 208 - #pragma pack(1) 209 - 210 203 #define HPSA_INQUIRY 0x12 211 204 struct InquiryData { 212 205 u8 data_byte[36]; 213 - }; 206 + } __packed; 214 207 215 208 #define HPSA_REPORT_LOG 0xc2 /* Report Logical LUNs */ 216 209 #define HPSA_REPORT_PHYS 0xc3 /* Report Physical LUNs */ ··· 224 221 u8 xor_mult[2]; /**< XOR multipliers for this position, 225 222 * valid for data disks only */ 226 223 u8 reserved[2]; 227 - }; 224 + } __packed; 228 225 229 226 struct raid_map_data { 230 227 __le32 structure_size; /* Size of entire structure in bytes */ ··· 250 247 __le16 dekindex; /* Data encryption key index. 
*/ 251 248 u8 reserved[16]; 252 249 struct raid_map_disk_data data[RAID_MAP_MAX_ENTRIES]; 253 - }; 250 + } __packed; 254 251 255 252 struct ReportLUNdata { 256 253 u8 LUNListLength[4]; 257 254 u8 extended_response_flag; 258 255 u8 reserved[3]; 259 256 u8 LUN[HPSA_MAX_LUN][8]; 260 - }; 257 + } __packed; 261 258 262 259 struct ext_report_lun_entry { 263 260 u8 lunid[8]; ··· 272 269 u8 lun_count; /* multi-lun device, how many luns */ 273 270 u8 redundant_paths; 274 271 u32 ioaccel_handle; /* ioaccel1 only uses lower 16 bits */ 275 - }; 272 + } __packed; 276 273 277 274 struct ReportExtendedLUNdata { 278 275 u8 LUNListLength[4]; 279 276 u8 extended_response_flag; 280 277 u8 reserved[3]; 281 278 struct ext_report_lun_entry LUN[HPSA_MAX_PHYS_LUN]; 282 - }; 279 + } __packed; 283 280 284 281 struct SenseSubsystem_info { 285 282 u8 reserved[36]; 286 283 u8 portname[8]; 287 284 u8 reserved1[1108]; 288 - }; 285 + } __packed; 289 286 290 287 /* BMIC commands */ 291 288 #define BMIC_READ 0x26 ··· 320 317 u8 Targ:6; 321 318 u8 Mode:2; /* b10 */ 322 319 } LogUnit; 323 - }; 320 + } __packed; 324 321 325 322 struct PhysDevAddr { 326 323 u32 TargetId:24; ··· 328 325 u32 Mode:2; 329 326 /* 2 level target device addr */ 330 327 union SCSI3Addr Target[2]; 331 - }; 328 + } __packed; 332 329 333 330 struct LogDevAddr { 334 331 u32 VolId:30; 335 332 u32 Mode:2; 336 333 u8 reserved[4]; 337 - }; 334 + } __packed; 338 335 339 336 union LUNAddr { 340 337 u8 LunAddrBytes[8]; 341 338 union SCSI3Addr SCSI3Lun[4]; 342 339 struct PhysDevAddr PhysDev; 343 340 struct LogDevAddr LogDev; 344 - }; 341 + } __packed; 345 342 346 343 struct CommandListHeader { 347 344 u8 ReplyQueue; ··· 349 346 __le16 SGTotal; 350 347 __le64 tag; 351 348 union LUNAddr LUN; 352 - }; 349 + } __packed; 353 350 354 351 struct RequestBlock { 355 352 u8 CDBLen; ··· 368 365 #define GET_DIR(tad) (((tad) >> 6) & 0x03) 369 366 u16 Timeout; 370 367 u8 CDB[16]; 371 - }; 368 + } __packed; 372 369 373 370 struct ErrDescriptor { 374 371 
__le64 Addr; 375 372 __le32 Len; 376 - }; 373 + } __packed; 377 374 378 375 struct SGDescriptor { 379 376 __le64 Addr; 380 377 __le32 Len; 381 378 __le32 Ext; 382 - }; 379 + } __packed; 383 380 384 381 union MoreErrInfo { 385 382 struct { ··· 393 390 u8 offense_num; /* byte # of offense 0-base */ 394 391 u32 offense_value; 395 392 } Invalid_Cmd; 396 - }; 393 + } __packed; 394 + 397 395 struct ErrorInfo { 398 396 u8 ScsiStatus; 399 397 u8 SenseLen; ··· 402 398 u32 ResidualCnt; 403 399 union MoreErrInfo MoreErrInfo; 404 400 u8 SenseInfo[SENSEINFOBYTES]; 405 - }; 401 + } __packed; 406 402 /* Command types */ 407 403 #define CMD_IOCTL_PEND 0x01 408 404 #define CMD_SCSI 0x03 ··· 457 453 atomic_t refcount; /* Must be last to avoid memset in hpsa_cmd_init() */ 458 454 } __aligned(COMMANDLIST_ALIGNMENT); 459 455 456 + /* 457 + * Make sure our embedded atomic variable is aligned. Otherwise we break atomic 458 + * operations on architectures that don't support unaligned atomics like IA64. 459 + * 460 + * The assert guards against reintroductin against unwanted __packed to 461 + * the struct CommandList. 
462 + */ 463 + static_assert(offsetof(struct CommandList, refcount) % __alignof__(atomic_t) == 0); 464 + 460 465 /* Max S/G elements in I/O accelerator command */ 461 466 #define IOACCEL1_MAXSGENTRIES 24 462 467 #define IOACCEL2_MAXSGENTRIES 28 ··· 502 489 __le64 host_addr; /* 0x70 - 0x77 */ 503 490 u8 CISS_LUN[8]; /* 0x78 - 0x7F */ 504 491 struct SGDescriptor SG[IOACCEL1_MAXSGENTRIES]; 505 - } __aligned(IOACCEL1_COMMANDLIST_ALIGNMENT); 492 + } __packed __aligned(IOACCEL1_COMMANDLIST_ALIGNMENT); 506 493 507 494 #define IOACCEL1_FUNCTION_SCSIIO 0x00 508 495 #define IOACCEL1_SGLOFFSET 32 ··· 532 519 u8 chain_indicator; 533 520 #define IOACCEL2_CHAIN 0x80 534 521 #define IOACCEL2_LAST_SG 0x40 535 - }; 522 + } __packed; 536 523 537 524 /* 538 525 * SCSI Response Format structure for IO Accelerator Mode 2 ··· 572 559 u8 sense_data_len; /* sense/response data length */ 573 560 u8 resid_cnt[4]; /* residual count */ 574 561 u8 sense_data_buff[32]; /* sense/response data buffer */ 575 - }; 562 + } __packed; 576 563 577 564 /* 578 565 * Structure for I/O accelerator (mode 2 or m2) commands. 
··· 605 592 __le32 tweak_upper; /* Encryption tweak, upper 4 bytes */ 606 593 struct ioaccel2_sg_element sg[IOACCEL2_MAXSGENTRIES]; 607 594 struct io_accel2_scsi_response error_data; 608 - } __aligned(IOACCEL2_COMMANDLIST_ALIGNMENT); 595 + } __packed __aligned(IOACCEL2_COMMANDLIST_ALIGNMENT); 609 596 610 597 /* 611 598 * defines for Mode 2 command struct ··· 631 618 __le64 abort_tag; /* cciss tag of SCSI cmd or TMF to abort */ 632 619 __le64 error_ptr; /* Error Pointer */ 633 620 __le32 error_len; /* Error Length */ 634 - } __aligned(IOACCEL2_COMMANDLIST_ALIGNMENT); 621 + } __packed __aligned(IOACCEL2_COMMANDLIST_ALIGNMENT); 635 622 636 623 /* Configuration Table Structure */ 637 624 struct HostWrite { ··· 639 626 __le32 command_pool_addr_hi; 640 627 __le32 CoalIntDelay; 641 628 __le32 CoalIntCount; 642 - }; 629 + } __packed; 643 630 644 631 #define SIMPLE_MODE 0x02 645 632 #define PERFORMANT_MODE 0x04 ··· 688 675 #define HPSA_EVENT_NOTIFY_ACCEL_IO_PATH_STATE_CHANGE (1 << 30) 689 676 #define HPSA_EVENT_NOTIFY_ACCEL_IO_PATH_CONFIG_CHANGE (1 << 31) 690 677 __le32 clear_event_notify; 691 - }; 678 + } __packed; 692 679 693 680 #define NUM_BLOCKFETCH_ENTRIES 8 694 681 struct TransTable_struct { ··· 699 686 __le32 RepQCtrAddrHigh32; 700 687 #define MAX_REPLY_QUEUES 64 701 688 struct vals32 RepQAddr[MAX_REPLY_QUEUES]; 702 - }; 689 + } __packed; 703 690 704 691 struct hpsa_pci_info { 705 692 unsigned char bus; 706 693 unsigned char dev_fn; 707 694 unsigned short domain; 708 695 u32 board_id; 709 - }; 696 + } __packed; 710 697 711 698 struct bmic_identify_controller { 712 699 u8 configured_logical_drive_count; /* offset 0 */ ··· 715 702 u8 pad2[136]; 716 703 u8 controller_mode; /* offset 292 */ 717 704 u8 pad3[32]; 718 - }; 705 + } __packed; 719 706 720 707 721 708 struct bmic_identify_physical_device { ··· 858 845 u8 max_link_rate[256]; 859 846 u8 neg_phys_link_rate[256]; 860 847 u8 box_conn_name[8]; 861 - } __attribute((aligned(512))); 848 + } __packed 
__attribute((aligned(512))); 862 849 863 850 struct bmic_sense_subsystem_info { 864 851 u8 primary_slot_number; ··· 871 858 u8 secondary_array_serial_number[32]; 872 859 u8 secondary_cache_serial_number[32]; 873 860 u8 pad[332]; 874 - }; 861 + } __packed; 875 862 876 863 struct bmic_sense_storage_box_params { 877 864 u8 reserved[36]; ··· 883 870 u8 reserver_3[84]; 884 871 u8 phys_connector[2]; 885 872 u8 reserved_4[296]; 886 - }; 873 + } __packed; 887 874 888 - #pragma pack() 889 875 #endif /* HPSA_CMD_H */
+4 -4
drivers/scsi/pm8001/pm8001_hwi.c
··· 223 223 PM8001_EVENT_LOG_SIZE; 224 224 pm8001_ha->main_cfg_tbl.pm8001_tbl.iop_event_log_option = 0x01; 225 225 pm8001_ha->main_cfg_tbl.pm8001_tbl.fatal_err_interrupt = 0x01; 226 - for (i = 0; i < PM8001_MAX_INB_NUM; i++) { 226 + for (i = 0; i < pm8001_ha->max_q_num; i++) { 227 227 pm8001_ha->inbnd_q_tbl[i].element_pri_size_cnt = 228 228 PM8001_MPI_QUEUE | (pm8001_ha->iomb_size << 16) | (0x00<<30); 229 229 pm8001_ha->inbnd_q_tbl[i].upper_base_addr = ··· 249 249 pm8001_ha->inbnd_q_tbl[i].producer_idx = 0; 250 250 pm8001_ha->inbnd_q_tbl[i].consumer_index = 0; 251 251 } 252 - for (i = 0; i < PM8001_MAX_OUTB_NUM; i++) { 252 + for (i = 0; i < pm8001_ha->max_q_num; i++) { 253 253 pm8001_ha->outbnd_q_tbl[i].element_size_cnt = 254 254 PM8001_MPI_QUEUE | (pm8001_ha->iomb_size << 16) | (0x01<<30); 255 255 pm8001_ha->outbnd_q_tbl[i].upper_base_addr = ··· 671 671 read_outbnd_queue_table(pm8001_ha); 672 672 /* update main config table ,inbound table and outbound table */ 673 673 update_main_config_table(pm8001_ha); 674 - for (i = 0; i < PM8001_MAX_INB_NUM; i++) 674 + for (i = 0; i < pm8001_ha->max_q_num; i++) 675 675 update_inbnd_queue_table(pm8001_ha, i); 676 - for (i = 0; i < PM8001_MAX_OUTB_NUM; i++) 676 + for (i = 0; i < pm8001_ha->max_q_num; i++) 677 677 update_outbnd_queue_table(pm8001_ha, i); 678 678 /* 8081 controller donot require these operations */ 679 679 if (deviceid != 0x8081 && deviceid != 0x0042) {
+1 -1
drivers/scsi/scsi_transport_srp.c
··· 541 541 res = mutex_lock_interruptible(&rport->mutex); 542 542 if (res) 543 543 goto out; 544 - if (rport->state != SRP_RPORT_FAIL_FAST) 544 + if (rport->state != SRP_RPORT_FAIL_FAST && rport->state != SRP_RPORT_LOST) 545 545 /* 546 546 * sdev state must be SDEV_TRANSPORT_OFFLINE, transition 547 547 * to SDEV_BLOCK is illegal. Calling scsi_target_unblock()
+14 -17
drivers/scsi/ufs/ufshcd.c
··· 6386 6386 DECLARE_COMPLETION_ONSTACK(wait); 6387 6387 struct request *req; 6388 6388 unsigned long flags; 6389 - int free_slot, task_tag, err; 6389 + int task_tag, err; 6390 6390 6391 6391 /* 6392 - * Get free slot, sleep if slots are unavailable. 6393 - * Even though we use wait_event() which sleeps indefinitely, 6394 - * the maximum wait time is bounded by %TM_CMD_TIMEOUT. 6392 + * blk_get_request() is used here only to get a free tag. 6395 6393 */ 6396 6394 req = blk_get_request(q, REQ_OP_DRV_OUT, 0); 6397 6395 if (IS_ERR(req)) 6398 6396 return PTR_ERR(req); 6399 6397 6400 6398 req->end_io_data = &wait; 6401 - free_slot = req->tag; 6402 - WARN_ON_ONCE(free_slot < 0 || free_slot >= hba->nutmrs); 6403 6399 ufshcd_hold(hba, false); 6404 6400 6405 6401 spin_lock_irqsave(host->host_lock, flags); 6406 - task_tag = hba->nutrs + free_slot; 6402 + blk_mq_start_request(req); 6407 6403 6404 + task_tag = req->tag; 6408 6405 treq->req_header.dword_0 |= cpu_to_be32(task_tag); 6409 6406 6410 - memcpy(hba->utmrdl_base_addr + free_slot, treq, sizeof(*treq)); 6411 - ufshcd_vops_setup_task_mgmt(hba, free_slot, tm_function); 6407 + memcpy(hba->utmrdl_base_addr + task_tag, treq, sizeof(*treq)); 6408 + ufshcd_vops_setup_task_mgmt(hba, task_tag, tm_function); 6412 6409 6413 6410 /* send command to the controller */ 6414 - __set_bit(free_slot, &hba->outstanding_tasks); 6411 + __set_bit(task_tag, &hba->outstanding_tasks); 6415 6412 6416 6413 /* Make sure descriptors are ready before ringing the task doorbell */ 6417 6414 wmb(); 6418 6415 6419 - ufshcd_writel(hba, 1 << free_slot, REG_UTP_TASK_REQ_DOOR_BELL); 6416 + ufshcd_writel(hba, 1 << task_tag, REG_UTP_TASK_REQ_DOOR_BELL); 6420 6417 /* Make sure that doorbell is committed immediately */ 6421 6418 wmb(); 6422 6419 ··· 6433 6436 ufshcd_add_tm_upiu_trace(hba, task_tag, UFS_TM_ERR); 6434 6437 dev_err(hba->dev, "%s: task management cmd 0x%.2x timed-out\n", 6435 6438 __func__, tm_function); 6436 - if (ufshcd_clear_tm_cmd(hba, 
free_slot)) 6437 - dev_WARN(hba->dev, "%s: unable clear tm cmd (slot %d) after timeout\n", 6438 - __func__, free_slot); 6439 + if (ufshcd_clear_tm_cmd(hba, task_tag)) 6440 + dev_WARN(hba->dev, "%s: unable to clear tm cmd (slot %d) after timeout\n", 6441 + __func__, task_tag); 6439 6442 err = -ETIMEDOUT; 6440 6443 } else { 6441 6444 err = 0; 6442 - memcpy(treq, hba->utmrdl_base_addr + free_slot, sizeof(*treq)); 6445 + memcpy(treq, hba->utmrdl_base_addr + task_tag, sizeof(*treq)); 6443 6446 6444 6447 ufshcd_add_tm_upiu_trace(hba, task_tag, UFS_TM_COMP); 6445 6448 } 6446 6449 6447 6450 spin_lock_irqsave(hba->host->host_lock, flags); 6448 - __clear_bit(free_slot, &hba->outstanding_tasks); 6451 + __clear_bit(task_tag, &hba->outstanding_tasks); 6449 6452 spin_unlock_irqrestore(hba->host->host_lock, flags); 6450 6453 6454 + ufshcd_release(hba); 6451 6455 blk_put_request(req); 6452 6456 6453 - ufshcd_release(hba); 6454 6457 return err; 6455 6458 } 6456 6459
+1 -2
drivers/target/iscsi/iscsi_target.c
··· 1166 1166 1167 1167 target_get_sess_cmd(&cmd->se_cmd, true); 1168 1168 1169 + cmd->se_cmd.tag = (__force u32)cmd->init_task_tag; 1169 1170 cmd->sense_reason = target_cmd_init_cdb(&cmd->se_cmd, hdr->cdb); 1170 1171 if (cmd->sense_reason) { 1171 1172 if (cmd->sense_reason == TCM_OUT_OF_RESOURCES) { ··· 1181 1180 if (cmd->sense_reason) 1182 1181 goto attach_cmd; 1183 1182 1184 - /* only used for printks or comparing with ->ref_task_tag */ 1185 - cmd->se_cmd.tag = (__force u32)cmd->init_task_tag; 1186 1183 cmd->sense_reason = target_cmd_parse_cdb(&cmd->se_cmd); 1187 1184 if (cmd->sense_reason) 1188 1185 goto attach_cmd;
+2 -2
drivers/thunderbolt/retimer.c
··· 347 347 ret = tb_retimer_nvm_add(rt); 348 348 if (ret) { 349 349 dev_err(&rt->dev, "failed to add NVM devices: %d\n", ret); 350 - device_del(&rt->dev); 350 + device_unregister(&rt->dev); 351 351 return ret; 352 352 } 353 353 ··· 406 406 */ 407 407 int tb_retimer_scan(struct tb_port *port) 408 408 { 409 - u32 status[TB_MAX_RETIMER_INDEX] = {}; 409 + u32 status[TB_MAX_RETIMER_INDEX + 1] = {}; 410 410 int ret, i, last_idx = 0; 411 411 412 412 if (!port->cap_usb4)
+4
drivers/usb/cdns3/cdnsp-gadget.c
··· 1128 1128 return -ESHUTDOWN; 1129 1129 } 1130 1130 1131 + /* Requests has been dequeued during disabling endpoint. */ 1132 + if (!(pep->ep_state & EP_ENABLED)) 1133 + return 0; 1134 + 1131 1135 spin_lock_irqsave(&pdev->lock, flags); 1132 1136 ret = cdnsp_ep_dequeue(pep, to_cdnsp_request(request)); 1133 1137 spin_unlock_irqrestore(&pdev->lock, flags);
+9 -2
drivers/usb/usbip/stub_dev.c
··· 63 63 64 64 dev_info(dev, "stub up\n"); 65 65 66 + mutex_lock(&sdev->ud.sysfs_lock); 66 67 spin_lock_irq(&sdev->ud.lock); 67 68 68 69 if (sdev->ud.status != SDEV_ST_AVAILABLE) { ··· 88 87 tcp_rx = kthread_create(stub_rx_loop, &sdev->ud, "stub_rx"); 89 88 if (IS_ERR(tcp_rx)) { 90 89 sockfd_put(socket); 91 - return -EINVAL; 90 + goto unlock_mutex; 92 91 } 93 92 tcp_tx = kthread_create(stub_tx_loop, &sdev->ud, "stub_tx"); 94 93 if (IS_ERR(tcp_tx)) { 95 94 kthread_stop(tcp_rx); 96 95 sockfd_put(socket); 97 - return -EINVAL; 96 + goto unlock_mutex; 98 97 } 99 98 100 99 /* get task structs now */ ··· 113 112 wake_up_process(sdev->ud.tcp_rx); 114 113 wake_up_process(sdev->ud.tcp_tx); 115 114 115 + mutex_unlock(&sdev->ud.sysfs_lock); 116 + 116 117 } else { 117 118 dev_info(dev, "stub down\n"); 118 119 ··· 125 122 spin_unlock_irq(&sdev->ud.lock); 126 123 127 124 usbip_event_add(&sdev->ud, SDEV_EVENT_DOWN); 125 + mutex_unlock(&sdev->ud.sysfs_lock); 128 126 } 129 127 130 128 return count; ··· 134 130 sockfd_put(socket); 135 131 err: 136 132 spin_unlock_irq(&sdev->ud.lock); 133 + unlock_mutex: 134 + mutex_unlock(&sdev->ud.sysfs_lock); 137 135 return -EINVAL; 138 136 } 139 137 static DEVICE_ATTR_WO(usbip_sockfd); ··· 276 270 sdev->ud.side = USBIP_STUB; 277 271 sdev->ud.status = SDEV_ST_AVAILABLE; 278 272 spin_lock_init(&sdev->ud.lock); 273 + mutex_init(&sdev->ud.sysfs_lock); 279 274 sdev->ud.tcp_socket = NULL; 280 275 sdev->ud.sockfd = -1; 281 276
+3
drivers/usb/usbip/usbip_common.h
··· 263 263 /* lock for status */ 264 264 spinlock_t lock; 265 265 266 + /* mutex for synchronizing sysfs store paths */ 267 + struct mutex sysfs_lock; 268 + 266 269 int sockfd; 267 270 struct socket *tcp_socket; 268 271
+2
drivers/usb/usbip/usbip_event.c
··· 70 70 while ((ud = get_event()) != NULL) { 71 71 usbip_dbg_eh("pending event %lx\n", ud->event); 72 72 73 + mutex_lock(&ud->sysfs_lock); 73 74 /* 74 75 * NOTE: shutdown must come first. 75 76 * Shutdown the device. ··· 91 90 ud->eh_ops.unusable(ud); 92 91 unset_event(ud, USBIP_EH_UNUSABLE); 93 92 } 93 + mutex_unlock(&ud->sysfs_lock); 94 94 95 95 wake_up(&ud->eh_waitq); 96 96 }
+1
drivers/usb/usbip/vhci_hcd.c
··· 1101 1101 vdev->ud.side = USBIP_VHCI; 1102 1102 vdev->ud.status = VDEV_ST_NULL; 1103 1103 spin_lock_init(&vdev->ud.lock); 1104 + mutex_init(&vdev->ud.sysfs_lock); 1104 1105 1105 1106 INIT_LIST_HEAD(&vdev->priv_rx); 1106 1107 INIT_LIST_HEAD(&vdev->priv_tx);
+25 -5
drivers/usb/usbip/vhci_sysfs.c
··· 185 185 186 186 usbip_dbg_vhci_sysfs("enter\n"); 187 187 188 + mutex_lock(&vdev->ud.sysfs_lock); 189 + 188 190 /* lock */ 189 191 spin_lock_irqsave(&vhci->lock, flags); 190 192 spin_lock(&vdev->ud.lock); ··· 197 195 /* unlock */ 198 196 spin_unlock(&vdev->ud.lock); 199 197 spin_unlock_irqrestore(&vhci->lock, flags); 198 + mutex_unlock(&vdev->ud.sysfs_lock); 200 199 201 200 return -EINVAL; 202 201 } ··· 207 204 spin_unlock_irqrestore(&vhci->lock, flags); 208 205 209 206 usbip_event_add(&vdev->ud, VDEV_EVENT_DOWN); 207 + 208 + mutex_unlock(&vdev->ud.sysfs_lock); 210 209 211 210 return 0; 212 211 } ··· 354 349 else 355 350 vdev = &vhci->vhci_hcd_hs->vdev[rhport]; 356 351 352 + mutex_lock(&vdev->ud.sysfs_lock); 353 + 357 354 /* Extract socket from fd. */ 358 355 socket = sockfd_lookup(sockfd, &err); 359 356 if (!socket) { 360 357 dev_err(dev, "failed to lookup sock"); 361 - return -EINVAL; 358 + err = -EINVAL; 359 + goto unlock_mutex; 362 360 } 363 361 if (socket->type != SOCK_STREAM) { 364 362 dev_err(dev, "Expecting SOCK_STREAM - found %d", 365 363 socket->type); 366 364 sockfd_put(socket); 367 - return -EINVAL; 365 + err = -EINVAL; 366 + goto unlock_mutex; 368 367 } 369 368 370 369 /* create threads before locking */ 371 370 tcp_rx = kthread_create(vhci_rx_loop, &vdev->ud, "vhci_rx"); 372 371 if (IS_ERR(tcp_rx)) { 373 372 sockfd_put(socket); 374 - return -EINVAL; 373 + err = -EINVAL; 374 + goto unlock_mutex; 375 375 } 376 376 tcp_tx = kthread_create(vhci_tx_loop, &vdev->ud, "vhci_tx"); 377 377 if (IS_ERR(tcp_tx)) { 378 378 kthread_stop(tcp_rx); 379 379 sockfd_put(socket); 380 - return -EINVAL; 380 + err = -EINVAL; 381 + goto unlock_mutex; 381 382 } 382 383 383 384 /* get task structs now */ ··· 408 397 * Will be retried from userspace 409 398 * if there's another free port. 
410 399 */ 411 - return -EBUSY; 400 + err = -EBUSY; 401 + goto unlock_mutex; 412 402 } 413 403 414 404 dev_info(dev, "pdev(%u) rhport(%u) sockfd(%d)\n", ··· 435 423 436 424 rh_port_connect(vdev, speed); 437 425 426 + dev_info(dev, "Device attached\n"); 427 + 428 + mutex_unlock(&vdev->ud.sysfs_lock); 429 + 438 430 return count; 431 + 432 + unlock_mutex: 433 + mutex_unlock(&vdev->ud.sysfs_lock); 434 + return err; 439 435 } 440 436 static DEVICE_ATTR_WO(attach); 441 437
+1
drivers/usb/usbip/vudc_dev.c
··· 572 572 init_waitqueue_head(&udc->tx_waitq); 573 573 574 574 spin_lock_init(&ud->lock); 575 + mutex_init(&ud->sysfs_lock); 575 576 ud->status = SDEV_ST_AVAILABLE; 576 577 ud->side = USBIP_VUDC; 577 578
+5
drivers/usb/usbip/vudc_sysfs.c
··· 112 112 dev_err(dev, "no device"); 113 113 return -ENODEV; 114 114 } 115 + mutex_lock(&udc->ud.sysfs_lock); 115 116 spin_lock_irqsave(&udc->lock, flags); 116 117 /* Don't export what we don't have */ 117 118 if (!udc->driver || !udc->pullup) { ··· 188 187 189 188 wake_up_process(udc->ud.tcp_rx); 190 189 wake_up_process(udc->ud.tcp_tx); 190 + 191 + mutex_unlock(&udc->ud.sysfs_lock); 191 192 return count; 192 193 193 194 } else { ··· 210 207 } 211 208 212 209 spin_unlock_irqrestore(&udc->lock, flags); 210 + mutex_unlock(&udc->ud.sysfs_lock); 213 211 214 212 return count; 215 213 ··· 220 216 spin_unlock_irq(&udc->ud.lock); 221 217 unlock: 222 218 spin_unlock_irqrestore(&udc->lock, flags); 219 + mutex_unlock(&udc->ud.sysfs_lock); 223 220 224 221 return ret; 225 222 }
+3 -1
drivers/vfio/pci/vfio_pci.c
··· 1656 1656 1657 1657 index = vma->vm_pgoff >> (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT); 1658 1658 1659 + if (index >= VFIO_PCI_NUM_REGIONS + vdev->num_regions) 1660 + return -EINVAL; 1659 1661 if (vma->vm_end < vma->vm_start) 1660 1662 return -EINVAL; 1661 1663 if ((vma->vm_flags & VM_SHARED) == 0) ··· 1666 1664 int regnum = index - VFIO_PCI_NUM_REGIONS; 1667 1665 struct vfio_pci_region *region = vdev->region + regnum; 1668 1666 1669 - if (region && region->ops && region->ops->mmap && 1667 + if (region->ops && region->ops->mmap && 1670 1668 (region->flags & VFIO_REGION_INFO_FLAG_MMAP)) 1671 1669 return region->ops->mmap(vdev, region, vma); 1672 1670 return -EINVAL;
+2 -2
drivers/watchdog/armada_37xx_wdt.c
··· 2 2 /* 3 3 * Watchdog driver for Marvell Armada 37xx SoCs 4 4 * 5 - * Author: Marek Behun <marek.behun@nic.cz> 5 + * Author: Marek Behún <kabel@kernel.org> 6 6 */ 7 7 8 8 #include <linux/clk.h> ··· 366 366 367 367 module_platform_driver(armada_37xx_wdt_driver); 368 368 369 - MODULE_AUTHOR("Marek Behun <marek.behun@nic.cz>"); 369 + MODULE_AUTHOR("Marek Behun <kabel@kernel.org>"); 370 370 MODULE_DESCRIPTION("Armada 37xx CPU Watchdog"); 371 371 372 372 MODULE_LICENSE("GPL v2");
+42 -11
fs/btrfs/zoned.c
··· 21 21 /* Pseudo write pointer value for conventional zone */ 22 22 #define WP_CONVENTIONAL ((u64)-2) 23 23 24 + /* 25 + * Location of the first zone of superblock logging zone pairs. 26 + * 27 + * - primary superblock: 0B (zone 0) 28 + * - first copy: 512G (zone starting at that offset) 29 + * - second copy: 4T (zone starting at that offset) 30 + */ 31 + #define BTRFS_SB_LOG_PRIMARY_OFFSET (0ULL) 32 + #define BTRFS_SB_LOG_FIRST_OFFSET (512ULL * SZ_1G) 33 + #define BTRFS_SB_LOG_SECOND_OFFSET (4096ULL * SZ_1G) 34 + 35 + #define BTRFS_SB_LOG_FIRST_SHIFT const_ilog2(BTRFS_SB_LOG_FIRST_OFFSET) 36 + #define BTRFS_SB_LOG_SECOND_SHIFT const_ilog2(BTRFS_SB_LOG_SECOND_OFFSET) 37 + 24 38 /* Number of superblock log zones */ 25 39 #define BTRFS_NR_SB_LOG_ZONES 2 40 + 41 + /* 42 + * Maximum supported zone size. Currently, SMR disks have a zone size of 43 + * 256MiB, and we are expecting ZNS drives to be in the 1-4GiB range. We do not 44 + * expect the zone size to become larger than 8GiB in the near future. 45 + */ 46 + #define BTRFS_MAX_ZONE_SIZE SZ_8G 26 47 27 48 static int copy_zone_info_cb(struct blk_zone *zone, unsigned int idx, void *data) 28 49 { ··· 132 111 } 133 112 134 113 /* 135 - * The following zones are reserved as the circular buffer on ZONED btrfs. 
136 - * - The primary superblock: zones 0 and 1 137 - * - The first copy: zones 16 and 17 138 - * - The second copy: zones 1024 or zone at 256GB which is minimum, and 139 - * the following one 114 + * Get the first zone number of the superblock mirror 140 115 */ 141 116 static inline u32 sb_zone_number(int shift, int mirror) 142 117 { 143 - ASSERT(mirror < BTRFS_SUPER_MIRROR_MAX); 118 + u64 zone; 144 119 120 + ASSERT(mirror < BTRFS_SUPER_MIRROR_MAX); 145 121 switch (mirror) { 146 - case 0: return 0; 147 - case 1: return 16; 148 - case 2: return min_t(u64, btrfs_sb_offset(mirror) >> shift, 1024); 122 + case 0: zone = 0; break; 123 + case 1: zone = 1ULL << (BTRFS_SB_LOG_FIRST_SHIFT - shift); break; 124 + case 2: zone = 1ULL << (BTRFS_SB_LOG_SECOND_SHIFT - shift); break; 149 125 } 150 126 151 - return 0; 127 + ASSERT(zone <= U32_MAX); 128 + 129 + return (u32)zone; 152 130 } 153 131 154 132 /* ··· 320 300 zone_sectors = bdev_zone_sectors(bdev); 321 301 } 322 302 323 - nr_sectors = bdev_nr_sectors(bdev); 324 303 /* Check if it's power of 2 (see is_power_of_2) */ 325 304 ASSERT(zone_sectors != 0 && (zone_sectors & (zone_sectors - 1)) == 0); 326 305 zone_info->zone_size = zone_sectors << SECTOR_SHIFT; 306 + 307 + /* We reject devices with a zone size larger than 8GB */ 308 + if (zone_info->zone_size > BTRFS_MAX_ZONE_SIZE) { 309 + btrfs_err_in_rcu(fs_info, 310 + "zoned: %s: zone size %llu larger than supported maximum %llu", 311 + rcu_str_deref(device->name), 312 + zone_info->zone_size, BTRFS_MAX_ZONE_SIZE); 313 + ret = -EINVAL; 314 + goto out; 315 + } 316 + 317 + nr_sectors = bdev_nr_sectors(bdev); 327 318 zone_info->zone_size_shift = ilog2(zone_info->zone_size); 328 319 zone_info->max_zone_append_size = 329 320 (u64)queue_max_zone_append_sectors(queue) << SECTOR_SHIFT;
+3 -2
fs/direct-io.c
··· 812 812 struct buffer_head *map_bh) 813 813 { 814 814 int ret = 0; 815 + int boundary = sdio->boundary; /* dio_send_cur_page may clear it */ 815 816 816 817 if (dio->op == REQ_OP_WRITE) { 817 818 /* ··· 851 850 sdio->cur_page_fs_offset = sdio->block_in_file << sdio->blkbits; 852 851 out: 853 852 /* 854 - * If sdio->boundary then we want to schedule the IO now to 853 + * If boundary then we want to schedule the IO now to 855 854 * avoid metadata seeks. 856 855 */ 857 - if (sdio->boundary) { 856 + if (boundary) { 858 857 ret = dio_send_cur_page(dio, sdio, map_bh); 859 858 if (sdio->bio) 860 859 dio_bio_submit(dio, sdio);
+5 -2
fs/io_uring.c
··· 6754 6754 current->flags |= PF_NO_SETAFFINITY; 6755 6755 6756 6756 mutex_lock(&sqd->lock); 6757 + /* a user may have exited before the thread started */ 6758 + io_run_task_work_head(&sqd->park_task_work); 6759 + 6757 6760 while (!test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state)) { 6758 6761 int ret; 6759 6762 bool cap_entries, sqt_spin, needs_sched; ··· 6773 6770 } 6774 6771 cond_resched(); 6775 6772 mutex_lock(&sqd->lock); 6776 - if (did_sig) 6777 - break; 6778 6773 io_run_task_work(); 6779 6774 io_run_task_work_head(&sqd->park_task_work); 6775 + if (did_sig) 6776 + break; 6780 6777 timeout = jiffies + sqd->sq_thread_idle; 6781 6778 continue; 6782 6779 }
+1 -10
fs/ocfs2/aops.c
··· 2295 2295 struct ocfs2_alloc_context *meta_ac = NULL; 2296 2296 handle_t *handle = NULL; 2297 2297 loff_t end = offset + bytes; 2298 - int ret = 0, credits = 0, locked = 0; 2298 + int ret = 0, credits = 0; 2299 2299 2300 2300 ocfs2_init_dealloc_ctxt(&dealloc); 2301 2301 ··· 2305 2305 end <= i_size_read(inode) && 2306 2306 !dwc->dw_orphaned) 2307 2307 goto out; 2308 - 2309 - /* ocfs2_file_write_iter will get i_mutex, so we need not lock if we 2310 - * are in that context. */ 2311 - if (dwc->dw_writer_pid != task_pid_nr(current)) { 2312 - inode_lock(inode); 2313 - locked = 1; 2314 - } 2315 2308 2316 2309 ret = ocfs2_inode_lock(inode, &di_bh, 1); 2317 2310 if (ret < 0) { ··· 2386 2393 if (meta_ac) 2387 2394 ocfs2_free_alloc_context(meta_ac); 2388 2395 ocfs2_run_deallocs(osb, &dealloc); 2389 - if (locked) 2390 - inode_unlock(inode); 2391 2396 ocfs2_dio_free_write_ctx(inode, dwc); 2392 2397 2393 2398 return ret;
+6 -2
fs/ocfs2/file.c
··· 1245 1245 goto bail_unlock; 1246 1246 } 1247 1247 } 1248 + down_write(&OCFS2_I(inode)->ip_alloc_sem); 1248 1249 handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS + 1249 1250 2 * ocfs2_quota_trans_credits(sb)); 1250 1251 if (IS_ERR(handle)) { 1251 1252 status = PTR_ERR(handle); 1252 1253 mlog_errno(status); 1253 - goto bail_unlock; 1254 + goto bail_unlock_alloc; 1254 1255 } 1255 1256 status = __dquot_transfer(inode, transfer_to); 1256 1257 if (status < 0) 1257 1258 goto bail_commit; 1258 1259 } else { 1260 + down_write(&OCFS2_I(inode)->ip_alloc_sem); 1259 1261 handle = ocfs2_start_trans(osb, OCFS2_INODE_UPDATE_CREDITS); 1260 1262 if (IS_ERR(handle)) { 1261 1263 status = PTR_ERR(handle); 1262 1264 mlog_errno(status); 1263 - goto bail_unlock; 1265 + goto bail_unlock_alloc; 1264 1266 } 1265 1267 } 1266 1268 ··· 1275 1273 1276 1274 bail_commit: 1277 1275 ocfs2_commit_trans(osb, handle); 1276 + bail_unlock_alloc: 1277 + up_write(&OCFS2_I(inode)->ip_alloc_sem); 1278 1278 bail_unlock: 1279 1279 if (status && inode_locked) { 1280 1280 ocfs2_inode_unlock_tracker(inode, 1, &oh, had_lock);
+1 -1
include/dt-bindings/bus/moxtet.h
··· 2 2 /* 3 3 * Constant for device tree bindings for Turris Mox module configuration bus 4 4 * 5 - * Copyright (C) 2019 Marek Behun <marek.behun@nic.cz> 5 + * Copyright (C) 2019 Marek Behún <kabel@kernel.org> 6 6 */ 7 7 8 8 #ifndef _DT_BINDINGS_BUS_MOXTET_H
+1 -1
include/linux/armada-37xx-rwtm-mailbox.h
··· 2 2 /* 3 3 * rWTM BIU Mailbox driver for Armada 37xx 4 4 * 5 - * Author: Marek Behun <marek.behun@nic.cz> 5 + * Author: Marek Behún <kabel@kernel.org> 6 6 */ 7 7 8 8 #ifndef _LINUX_ARMADA_37XX_RWTM_MAILBOX_H_
+1 -1
include/linux/kasan.h
··· 330 330 331 331 #endif /* CONFIG_KASAN */ 332 332 333 - #if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK 333 + #if defined(CONFIG_KASAN) && defined(CONFIG_KASAN_STACK) 334 334 void kasan_unpoison_task_stack(struct task_struct *task); 335 335 #else 336 336 static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
+3 -2
include/linux/marvell_phy.h
··· 33 33 /* Marvel 88E1111 in Finisar SFP module with modified PHY ID */ 34 34 #define MARVELL_PHY_ID_88E1111_FINISAR 0x01ff0cc0 35 35 36 - /* The MV88e6390 Ethernet switch contains embedded PHYs. These PHYs do 36 + /* These Ethernet switch families contain embedded PHYs, but they do 37 37 * not have a model ID. So the switch driver traps reads to the ID2 38 38 * register and returns the switch family ID 39 39 */ 40 - #define MARVELL_PHY_ID_88E6390 0x01410f90 40 + #define MARVELL_PHY_ID_88E6341_FAMILY 0x01410f41 41 + #define MARVELL_PHY_ID_88E6390_FAMILY 0x01410f90 41 42 42 43 #define MARVELL_PHY_FAMILY_ID(id) ((id) >> 4) 43 44
+1 -1
include/linux/moxtet.h
··· 2 2 /* 3 3 * Turris Mox module configuration bus driver 4 4 * 5 - * Copyright (C) 2019 Marek Behun <marek.behun@nic.cz> 5 + * Copyright (C) 2019 Marek Behún <kabel@kernel.org> 6 6 */ 7 7 8 8 #ifndef __LINUX_MOXTET_H
+1
include/linux/nd.h
··· 11 11 12 12 enum nvdimm_event { 13 13 NVDIMM_REVALIDATE_POISON, 14 + NVDIMM_REVALIDATE_REGION, 14 15 }; 15 16 16 17 enum nvdimm_claim_class {
+3 -2
include/linux/netfilter_arp/arp_tables.h
··· 52 52 int arpt_register_table(struct net *net, const struct xt_table *table, 53 53 const struct arpt_replace *repl, 54 54 const struct nf_hook_ops *ops, struct xt_table **res); 55 - void arpt_unregister_table(struct net *net, struct xt_table *table, 56 - const struct nf_hook_ops *ops); 55 + void arpt_unregister_table(struct net *net, struct xt_table *table); 56 + void arpt_unregister_table_pre_exit(struct net *net, struct xt_table *table, 57 + const struct nf_hook_ops *ops); 57 58 extern unsigned int arpt_do_table(struct sk_buff *skb, 58 59 const struct nf_hook_state *state, 59 60 struct xt_table *table);
+3 -2
include/linux/netfilter_bridge/ebtables.h
··· 110 110 const struct ebt_table *table, 111 111 const struct nf_hook_ops *ops, 112 112 struct ebt_table **res); 113 - extern void ebt_unregister_table(struct net *net, struct ebt_table *table, 114 - const struct nf_hook_ops *); 113 + extern void ebt_unregister_table(struct net *net, struct ebt_table *table); 114 + void ebt_unregister_table_pre_exit(struct net *net, const char *tablename, 115 + const struct nf_hook_ops *ops); 115 116 extern unsigned int ebt_do_table(struct sk_buff *skb, 116 117 const struct nf_hook_state *state, 117 118 struct ebt_table *table);
+2 -2
include/uapi/linux/idxd.h
··· 247 247 uint32_t rsvd2:8; 248 248 }; 249 249 250 - uint16_t delta_rec_size; 251 - uint16_t crc_val; 250 + uint32_t delta_rec_size; 251 + uint32_t crc_val; 252 252 253 253 /* DIF check & strip */ 254 254 struct {
+156 -74
kernel/bpf/verifier.c
··· 6318 6318 return &env->insn_aux_data[env->insn_idx]; 6319 6319 } 6320 6320 6321 + enum { 6322 + REASON_BOUNDS = -1, 6323 + REASON_TYPE = -2, 6324 + REASON_PATHS = -3, 6325 + REASON_LIMIT = -4, 6326 + REASON_STACK = -5, 6327 + }; 6328 + 6321 6329 static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg, 6322 - u32 *ptr_limit, u8 opcode, bool off_is_neg) 6330 + const struct bpf_reg_state *off_reg, 6331 + u32 *alu_limit, u8 opcode) 6323 6332 { 6333 + bool off_is_neg = off_reg->smin_value < 0; 6324 6334 bool mask_to_left = (opcode == BPF_ADD && off_is_neg) || 6325 6335 (opcode == BPF_SUB && !off_is_neg); 6326 - u32 off, max; 6336 + u32 max = 0, ptr_limit = 0; 6337 + 6338 + if (!tnum_is_const(off_reg->var_off) && 6339 + (off_reg->smin_value < 0) != (off_reg->smax_value < 0)) 6340 + return REASON_BOUNDS; 6327 6341 6328 6342 switch (ptr_reg->type) { 6329 6343 case PTR_TO_STACK: 6330 6344 /* Offset 0 is out-of-bounds, but acceptable start for the 6331 - * left direction, see BPF_REG_FP. 6345 + * left direction, see BPF_REG_FP. Also, unknown scalar 6346 + * offset where we would need to deal with min/max bounds is 6347 + * currently prohibited for unprivileged. 6332 6348 */ 6333 6349 max = MAX_BPF_STACK + mask_to_left; 6334 - /* Indirect variable offset stack access is prohibited in 6335 - * unprivileged mode so it's not handled here. 6336 - */ 6337 - off = ptr_reg->off + ptr_reg->var_off.value; 6338 - if (mask_to_left) 6339 - *ptr_limit = MAX_BPF_STACK + off; 6340 - else 6341 - *ptr_limit = -off - 1; 6342 - return *ptr_limit >= max ? -ERANGE : 0; 6350 + ptr_limit = -(ptr_reg->var_off.value + ptr_reg->off); 6351 + break; 6343 6352 case PTR_TO_MAP_VALUE: 6344 6353 max = ptr_reg->map_ptr->value_size; 6345 - if (mask_to_left) { 6346 - *ptr_limit = ptr_reg->umax_value + ptr_reg->off; 6347 - } else { 6348 - off = ptr_reg->smin_value + ptr_reg->off; 6349 - *ptr_limit = ptr_reg->map_ptr->value_size - off - 1; 6350 - } 6351 - return *ptr_limit >= max ? 
-ERANGE : 0; 6354 + ptr_limit = (mask_to_left ? 6355 + ptr_reg->smin_value : 6356 + ptr_reg->umax_value) + ptr_reg->off; 6357 + break; 6352 6358 default: 6353 - return -EINVAL; 6359 + return REASON_TYPE; 6354 6360 } 6361 + 6362 + if (ptr_limit >= max) 6363 + return REASON_LIMIT; 6364 + *alu_limit = ptr_limit; 6365 + return 0; 6355 6366 } 6356 6367 6357 6368 static bool can_skip_alu_sanitation(const struct bpf_verifier_env *env, ··· 6380 6369 if (aux->alu_state && 6381 6370 (aux->alu_state != alu_state || 6382 6371 aux->alu_limit != alu_limit)) 6383 - return -EACCES; 6372 + return REASON_PATHS; 6384 6373 6385 6374 /* Corresponding fixup done in do_misc_fixups(). */ 6386 6375 aux->alu_state = alu_state; ··· 6399 6388 return update_alu_sanitation_state(aux, BPF_ALU_NON_POINTER, 0); 6400 6389 } 6401 6390 6391 + static bool sanitize_needed(u8 opcode) 6392 + { 6393 + return opcode == BPF_ADD || opcode == BPF_SUB; 6394 + } 6395 + 6402 6396 static int sanitize_ptr_alu(struct bpf_verifier_env *env, 6403 6397 struct bpf_insn *insn, 6404 6398 const struct bpf_reg_state *ptr_reg, 6399 + const struct bpf_reg_state *off_reg, 6405 6400 struct bpf_reg_state *dst_reg, 6406 - bool off_is_neg) 6401 + struct bpf_insn_aux_data *tmp_aux, 6402 + const bool commit_window) 6407 6403 { 6404 + struct bpf_insn_aux_data *aux = commit_window ? cur_aux(env) : tmp_aux; 6408 6405 struct bpf_verifier_state *vstate = env->cur_state; 6409 - struct bpf_insn_aux_data *aux = cur_aux(env); 6406 + bool off_is_neg = off_reg->smin_value < 0; 6410 6407 bool ptr_is_dst_reg = ptr_reg == dst_reg; 6411 6408 u8 opcode = BPF_OP(insn->code); 6412 6409 u32 alu_state, alu_limit; ··· 6432 6413 if (vstate->speculative) 6433 6414 goto do_sim; 6434 6415 6435 - alu_state = off_is_neg ? BPF_ALU_NEG_VALUE : 0; 6436 - alu_state |= ptr_is_dst_reg ? 
6437 - BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST; 6438 - 6439 - err = retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg); 6416 + err = retrieve_ptr_limit(ptr_reg, off_reg, &alu_limit, opcode); 6440 6417 if (err < 0) 6441 6418 return err; 6419 + 6420 + if (commit_window) { 6421 + /* In commit phase we narrow the masking window based on 6422 + * the observed pointer move after the simulated operation. 6423 + */ 6424 + alu_state = tmp_aux->alu_state; 6425 + alu_limit = abs(tmp_aux->alu_limit - alu_limit); 6426 + } else { 6427 + alu_state = off_is_neg ? BPF_ALU_NEG_VALUE : 0; 6428 + alu_state |= ptr_is_dst_reg ? 6429 + BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST; 6430 + } 6442 6431 6443 6432 err = update_alu_sanitation_state(aux, alu_state, alu_limit); 6444 6433 if (err < 0) 6445 6434 return err; 6446 6435 do_sim: 6436 + /* If we're in commit phase, we're done here given we already 6437 + * pushed the truncated dst_reg into the speculative verification 6438 + * stack. 6439 + */ 6440 + if (commit_window) 6441 + return 0; 6442 + 6447 6443 /* Simulate and find potential out-of-bounds access under 6448 6444 * speculative execution from truncation as a result of 6449 6445 * masking when off was not within expected range. If off ··· 6475 6441 ret = push_stack(env, env->insn_idx + 1, env->insn_idx, true); 6476 6442 if (!ptr_is_dst_reg && ret) 6477 6443 *dst_reg = tmp; 6478 - return !ret ? -EFAULT : 0; 6444 + return !ret ? REASON_STACK : 0; 6445 + } 6446 + 6447 + static int sanitize_err(struct bpf_verifier_env *env, 6448 + const struct bpf_insn *insn, int reason, 6449 + const struct bpf_reg_state *off_reg, 6450 + const struct bpf_reg_state *dst_reg) 6451 + { 6452 + static const char *err = "pointer arithmetic with it prohibited for !root"; 6453 + const char *op = BPF_OP(insn->code) == BPF_ADD ? 
"add" : "sub"; 6454 + u32 dst = insn->dst_reg, src = insn->src_reg; 6455 + 6456 + switch (reason) { 6457 + case REASON_BOUNDS: 6458 + verbose(env, "R%d has unknown scalar with mixed signed bounds, %s\n", 6459 + off_reg == dst_reg ? dst : src, err); 6460 + break; 6461 + case REASON_TYPE: 6462 + verbose(env, "R%d has pointer with unsupported alu operation, %s\n", 6463 + off_reg == dst_reg ? src : dst, err); 6464 + break; 6465 + case REASON_PATHS: 6466 + verbose(env, "R%d tried to %s from different maps, paths or scalars, %s\n", 6467 + dst, op, err); 6468 + break; 6469 + case REASON_LIMIT: 6470 + verbose(env, "R%d tried to %s beyond pointer bounds, %s\n", 6471 + dst, op, err); 6472 + break; 6473 + case REASON_STACK: 6474 + verbose(env, "R%d could not be pushed for speculative verification, %s\n", 6475 + dst, err); 6476 + break; 6477 + default: 6478 + verbose(env, "verifier internal error: unknown reason (%d)\n", 6479 + reason); 6480 + break; 6481 + } 6482 + 6483 + return -EACCES; 6479 6484 } 6480 6485 6481 6486 /* check that stack access falls within stack limits and that 'reg' doesn't ··· 6551 6478 return 0; 6552 6479 } 6553 6480 6481 + static int sanitize_check_bounds(struct bpf_verifier_env *env, 6482 + const struct bpf_insn *insn, 6483 + const struct bpf_reg_state *dst_reg) 6484 + { 6485 + u32 dst = insn->dst_reg; 6486 + 6487 + /* For unprivileged we require that resulting offset must be in bounds 6488 + * in order to be able to sanitize access later on. 
6489 + */ 6490 + if (env->bypass_spec_v1) 6491 + return 0; 6492 + 6493 + switch (dst_reg->type) { 6494 + case PTR_TO_STACK: 6495 + if (check_stack_access_for_ptr_arithmetic(env, dst, dst_reg, 6496 + dst_reg->off + dst_reg->var_off.value)) 6497 + return -EACCES; 6498 + break; 6499 + case PTR_TO_MAP_VALUE: 6500 + if (check_map_access(env, dst, dst_reg->off, 1, false)) { 6501 + verbose(env, "R%d pointer arithmetic of map value goes out of range, " 6502 + "prohibited for !root\n", dst); 6503 + return -EACCES; 6504 + } 6505 + break; 6506 + default: 6507 + break; 6508 + } 6509 + 6510 + return 0; 6511 + } 6554 6512 6555 6513 /* Handles arithmetic on a pointer and a scalar: computes new min/max and var_off. 6556 6514 * Caller should also handle BPF_MOV case separately. ··· 6601 6497 smin_ptr = ptr_reg->smin_value, smax_ptr = ptr_reg->smax_value; 6602 6498 u64 umin_val = off_reg->umin_value, umax_val = off_reg->umax_value, 6603 6499 umin_ptr = ptr_reg->umin_value, umax_ptr = ptr_reg->umax_value; 6604 - u32 dst = insn->dst_reg, src = insn->src_reg; 6500 + struct bpf_insn_aux_data tmp_aux = {}; 6605 6501 u8 opcode = BPF_OP(insn->code); 6502 + u32 dst = insn->dst_reg; 6606 6503 int ret; 6607 6504 6608 6505 dst_reg = &regs[dst]; ··· 6651 6546 verbose(env, "R%d pointer arithmetic on %s prohibited\n", 6652 6547 dst, reg_type_str[ptr_reg->type]); 6653 6548 return -EACCES; 6654 - case PTR_TO_MAP_VALUE: 6655 - if (!env->allow_ptr_leaks && !known && (smin_val < 0) != (smax_val < 0)) { 6656 - verbose(env, "R%d has unknown scalar with mixed signed bounds, pointer arithmetic with it prohibited for !root\n", 6657 - off_reg == dst_reg ? dst : src); 6658 - return -EACCES; 6659 - } 6660 - fallthrough; 6661 6549 default: 6662 6550 break; 6663 6551 } ··· 6668 6570 /* pointer types do not carry 32-bit bounds at the moment. 
*/ 6669 6571 __mark_reg32_unbounded(dst_reg); 6670 6572 6573 + if (sanitize_needed(opcode)) { 6574 + ret = sanitize_ptr_alu(env, insn, ptr_reg, off_reg, dst_reg, 6575 + &tmp_aux, false); 6576 + if (ret < 0) 6577 + return sanitize_err(env, insn, ret, off_reg, dst_reg); 6578 + } 6579 + 6671 6580 switch (opcode) { 6672 6581 case BPF_ADD: 6673 - ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0); 6674 - if (ret < 0) { 6675 - verbose(env, "R%d tried to add from different maps, paths, or prohibited types\n", dst); 6676 - return ret; 6677 - } 6678 6582 /* We can take a fixed offset as long as it doesn't overflow 6679 6583 * the s32 'off' field 6680 6584 */ ··· 6727 6627 } 6728 6628 break; 6729 6629 case BPF_SUB: 6730 - ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0); 6731 - if (ret < 0) { 6732 - verbose(env, "R%d tried to sub from different maps, paths, or prohibited types\n", dst); 6733 - return ret; 6734 - } 6735 6630 if (dst_reg == off_reg) { 6736 6631 /* scalar -= pointer. Creates an unknown scalar */ 6737 6632 verbose(env, "R%d tried to subtract pointer from scalar\n", ··· 6807 6712 __reg_deduce_bounds(dst_reg); 6808 6713 __reg_bound_offset(dst_reg); 6809 6714 6810 - /* For unprivileged we require that resulting offset must be in bounds 6811 - * in order to be able to sanitize access later on. 
6812 - */ 6813 - if (!env->bypass_spec_v1) { 6814 - if (dst_reg->type == PTR_TO_MAP_VALUE && 6815 - check_map_access(env, dst, dst_reg->off, 1, false)) { 6816 - verbose(env, "R%d pointer arithmetic of map value goes out of range, " 6817 - "prohibited for !root\n", dst); 6818 - return -EACCES; 6819 - } else if (dst_reg->type == PTR_TO_STACK && 6820 - check_stack_access_for_ptr_arithmetic( 6821 - env, dst, dst_reg, dst_reg->off + 6822 - dst_reg->var_off.value)) { 6823 - return -EACCES; 6824 - } 6715 + if (sanitize_check_bounds(env, insn, dst_reg) < 0) 6716 + return -EACCES; 6717 + if (sanitize_needed(opcode)) { 6718 + ret = sanitize_ptr_alu(env, insn, dst_reg, off_reg, dst_reg, 6719 + &tmp_aux, true); 6720 + if (ret < 0) 6721 + return sanitize_err(env, insn, ret, off_reg, dst_reg); 6825 6722 } 6826 6723 6827 6724 return 0; ··· 7407 7320 s32 s32_min_val, s32_max_val; 7408 7321 u32 u32_min_val, u32_max_val; 7409 7322 u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32; 7410 - u32 dst = insn->dst_reg; 7411 - int ret; 7412 7323 bool alu32 = (BPF_CLASS(insn->code) != BPF_ALU64); 7324 + int ret; 7413 7325 7414 7326 smin_val = src_reg.smin_value; 7415 7327 smax_val = src_reg.smax_value; ··· 7450 7364 return 0; 7451 7365 } 7452 7366 7367 + if (sanitize_needed(opcode)) { 7368 + ret = sanitize_val_alu(env, insn); 7369 + if (ret < 0) 7370 + return sanitize_err(env, insn, ret, NULL, NULL); 7371 + } 7372 + 7453 7373 /* Calculate sign/unsigned bounds and tnum for alu32 and alu64 bit ops. 
7454 7374 * There are two classes of instructions: The first class we track both 7455 7375 * alu32 and alu64 sign/unsigned bounds independently this provides the ··· 7472 7380 */ 7473 7381 switch (opcode) { 7474 7382 case BPF_ADD: 7475 - ret = sanitize_val_alu(env, insn); 7476 - if (ret < 0) { 7477 - verbose(env, "R%d tried to add from different pointers or scalars\n", dst); 7478 - return ret; 7479 - } 7480 7383 scalar32_min_max_add(dst_reg, &src_reg); 7481 7384 scalar_min_max_add(dst_reg, &src_reg); 7482 7385 dst_reg->var_off = tnum_add(dst_reg->var_off, src_reg.var_off); 7483 7386 break; 7484 7387 case BPF_SUB: 7485 - ret = sanitize_val_alu(env, insn); 7486 - if (ret < 0) { 7487 - verbose(env, "R%d tried to sub from different pointers or scalars\n", dst); 7488 - return ret; 7489 - } 7490 7388 scalar32_min_max_sub(dst_reg, &src_reg); 7491 7389 scalar_min_max_sub(dst_reg, &src_reg); 7492 7390 dst_reg->var_off = tnum_sub(dst_reg->var_off, src_reg.var_off);
+20 -11
kernel/gcov/clang.c
··· 70 70 71 71 u32 ident; 72 72 u32 checksum; 73 + #if CONFIG_CLANG_VERSION < 110000 73 74 u8 use_extra_checksum; 75 + #endif 74 76 u32 cfg_checksum; 75 77 76 78 u32 num_counters; ··· 147 145 148 146 list_add_tail(&info->head, &current_info->functions); 149 147 } 150 - EXPORT_SYMBOL(llvm_gcda_emit_function); 151 148 #else 152 - void llvm_gcda_emit_function(u32 ident, u32 func_checksum, 153 - u8 use_extra_checksum, u32 cfg_checksum) 149 + void llvm_gcda_emit_function(u32 ident, u32 func_checksum, u32 cfg_checksum) 154 150 { 155 151 struct gcov_fn_info *info = kzalloc(sizeof(*info), GFP_KERNEL); 156 152 ··· 158 158 INIT_LIST_HEAD(&info->head); 159 159 info->ident = ident; 160 160 info->checksum = func_checksum; 161 - info->use_extra_checksum = use_extra_checksum; 162 161 info->cfg_checksum = cfg_checksum; 163 162 list_add_tail(&info->head, &current_info->functions); 164 163 } 165 - EXPORT_SYMBOL(llvm_gcda_emit_function); 166 164 #endif 165 + EXPORT_SYMBOL(llvm_gcda_emit_function); 167 166 168 167 void llvm_gcda_emit_arcs(u32 num_counters, u64 *counters) 169 168 { ··· 292 293 !list_is_last(&fn_ptr2->head, &info2->functions)) { 293 294 if (fn_ptr1->checksum != fn_ptr2->checksum) 294 295 return false; 296 + #if CONFIG_CLANG_VERSION < 110000 295 297 if (fn_ptr1->use_extra_checksum != fn_ptr2->use_extra_checksum) 296 298 return false; 297 299 if (fn_ptr1->use_extra_checksum && 298 300 fn_ptr1->cfg_checksum != fn_ptr2->cfg_checksum) 299 301 return false; 302 + #else 303 + if (fn_ptr1->cfg_checksum != fn_ptr2->cfg_checksum) 304 + return false; 305 + #endif 300 306 fn_ptr1 = list_next_entry(fn_ptr1, head); 301 307 fn_ptr2 = list_next_entry(fn_ptr2, head); 302 308 } ··· 369 365 INIT_LIST_HEAD(&fn_dup->head); 370 366 371 367 cv_size = fn->num_counters * sizeof(fn->counters[0]); 372 - fn_dup->counters = vmalloc(cv_size); 368 + fn_dup->counters = kvmalloc(cv_size, GFP_KERNEL); 373 369 if (!fn_dup->counters) { 374 370 kfree(fn_dup); 375 371 return NULL; ··· 533 529 534 530 
list_for_each_entry(fi_ptr, &info->functions, head) { 535 531 u32 i; 536 - u32 len = 2; 537 - 538 - if (fi_ptr->use_extra_checksum) 539 - len++; 540 532 541 533 pos += store_gcov_u32(buffer, pos, GCOV_TAG_FUNCTION); 542 - pos += store_gcov_u32(buffer, pos, len); 534 + #if CONFIG_CLANG_VERSION < 110000 535 + pos += store_gcov_u32(buffer, pos, 536 + fi_ptr->use_extra_checksum ? 3 : 2); 537 + #else 538 + pos += store_gcov_u32(buffer, pos, 3); 539 + #endif 543 540 pos += store_gcov_u32(buffer, pos, fi_ptr->ident); 544 541 pos += store_gcov_u32(buffer, pos, fi_ptr->checksum); 542 + #if CONFIG_CLANG_VERSION < 110000 545 543 if (fi_ptr->use_extra_checksum) 546 544 pos += store_gcov_u32(buffer, pos, fi_ptr->cfg_checksum); 545 + #else 546 + pos += store_gcov_u32(buffer, pos, fi_ptr->cfg_checksum); 547 + #endif 547 548 548 549 pos += store_gcov_u32(buffer, pos, GCOV_TAG_COUNTER_BASE); 549 550 pos += store_gcov_u32(buffer, pos, fi_ptr->num_counters * 2);
+3 -2
kernel/locking/lockdep.c
··· 705 705 706 706 printk(KERN_CONT " ("); 707 707 __print_lock_name(class); 708 - printk(KERN_CONT "){%s}-{%hd:%hd}", usage, 708 + printk(KERN_CONT "){%s}-{%d:%d}", usage, 709 709 class->wait_type_outer ?: class->wait_type_inner, 710 710 class->wait_type_inner); 711 711 } ··· 930 930 /* Debug-check: all keys must be persistent! */ 931 931 debug_locks_off(); 932 932 pr_err("INFO: trying to register non-static key.\n"); 933 - pr_err("the code is fine but needs lockdep annotation.\n"); 933 + pr_err("The code is fine but needs lockdep annotation, or maybe\n"); 934 + pr_err("you didn't initialize this object before use?\n"); 934 935 pr_err("turning off the locking correctness validator.\n"); 935 936 dump_stack(); 936 937 return false;
+4 -2
kernel/trace/trace_dynevent.c
··· 63 63 event = p + 1; 64 64 *p = '\0'; 65 65 } 66 - if (event[0] == '\0') 67 - return -EINVAL; 66 + if (event[0] == '\0') { 67 + ret = -EINVAL; 68 + goto out; 69 + } 68 70 69 71 mutex_lock(&event_mutex); 70 72 for_each_dyn_event_safe(pos, n) {
+3 -3
lib/Kconfig.debug
··· 1363 1363 bool 1364 1364 depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT 1365 1365 select STACKTRACE 1366 - select FRAME_POINTER if !MIPS && !PPC && !ARM && !S390 && !MICROBLAZE && !ARC && !X86 1366 + depends on FRAME_POINTER || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86 1367 1367 select KALLSYMS 1368 1368 select KALLSYMS_ALL 1369 1369 ··· 1665 1665 depends on DEBUG_KERNEL 1666 1666 depends on STACKTRACE_SUPPORT 1667 1667 depends on PROC_FS 1668 - select FRAME_POINTER if !MIPS && !PPC && !S390 && !MICROBLAZE && !ARM && !ARC && !X86 1668 + depends on FRAME_POINTER || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86 1669 1669 select KALLSYMS 1670 1670 select KALLSYMS_ALL 1671 1671 select STACKTRACE ··· 1918 1918 depends on FAULT_INJECTION_DEBUG_FS && STACKTRACE_SUPPORT 1919 1919 depends on !X86_64 1920 1920 select STACKTRACE 1921 - select FRAME_POINTER if !MIPS && !PPC && !S390 && !MICROBLAZE && !ARM && !ARC && !X86 1921 + depends on FRAME_POINTER || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86 1922 1922 help 1923 1923 Provide stacktrace filter for fault-injection capabilities 1924 1924
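The Kconfig.debug hunks swap `select FRAME_POINTER if !<arch list>` for a positive `depends on` list: `select` forces a symbol on while ignoring that symbol's own dependencies, whereas `depends on` simply gates the feature until frame pointers are available or the architecture has a reliable unwinder without them. A hypothetical fragment in the same style:

```
# Hypothetical fragment: gate the feature on FRAME_POINTER (or on
# architectures whose unwinders do not need it) rather than force-enabling
# FRAME_POINTER via "select", which would bypass its dependencies.
config MY_LOCK_DEBUG_FEATURE
	bool "Example lock debugging feature"
	depends on DEBUG_KERNEL
	depends on FRAME_POINTER || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86
	select STACKTRACE
	select KALLSYMS
```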
+2 -7
lib/Kconfig.kasan
··· 138 138 139 139 endchoice 140 140 141 - config KASAN_STACK_ENABLE 141 + config KASAN_STACK 142 142 bool "Enable stack instrumentation (unsafe)" if CC_IS_CLANG && !COMPILE_TEST 143 143 depends on KASAN_GENERIC || KASAN_SW_TAGS 144 + default y if CC_IS_GCC 144 145 help 145 146 The LLVM stack address sanitizer has a known problem that 146 147 causes excessive stack usage in a lot of functions, see ··· 154 153 but clang users can still enable it for builds without 155 154 CONFIG_COMPILE_TEST. On gcc it is assumed to always be safe 156 155 to use and enabled by default. 157 - 158 - config KASAN_STACK 159 - int 160 - depends on KASAN_GENERIC || KASAN_SW_TAGS 161 - default 1 if KASAN_STACK_ENABLE || CC_IS_GCC 162 - default 0 163 156 164 157 config KASAN_SW_TAGS_IDENTIFY 165 158 bool "Enable memory corruption identification"
+2 -2
lib/earlycpio.c
··· 40 40 }; 41 41 42 42 /** 43 - * cpio_data find_cpio_data - Search for files in an uncompressed cpio 43 + * find_cpio_data - Search for files in an uncompressed cpio 44 44 * @path: The directory to search for, including a slash at the end 45 45 * @data: Pointer to the cpio archive or a header inside 46 46 * @len: Remaining length of the cpio based on data pointer ··· 49 49 * matching file itself. It can be used to iterate through the cpio 50 50 * to find all files inside of a directory path. 51 51 * 52 - * @return: struct cpio_data containing the address, length and 52 + * Return: &struct cpio_data containing the address, length and 53 53 * filename (with the directory path cut off) of the found file. 54 54 * If you search for a filename and not for files in a directory, 55 55 * pass the absolute path of the filename in the cpio and make sure
+2 -1
lib/lru_cache.c
··· 76 76 /** 77 77 * lc_create - prepares to track objects in an active set 78 78 * @name: descriptive name only used in lc_seq_printf_stats and lc_seq_dump_details 79 + * @cache: cache root pointer 79 80 * @max_pending_changes: maximum changes to accumulate until a transaction is required 80 81 * @e_count: number of elements allowed to be active simultaneously 81 82 * @e_size: size of the tracked objects ··· 628 627 } 629 628 630 629 /** 631 - * lc_dump - Dump a complete LRU cache to seq in textual form. 630 + * lc_seq_dump_details - Dump a complete LRU cache to seq in textual form. 632 631 * @lc: the lru cache to operate on 633 632 * @seq: the &struct seq_file pointer to seq_printf into 634 633 * @utext: user supplied additional "heading" or other info
+2 -2
lib/parman.c
··· 297 297 * parman_prio_init - initializes a parman priority chunk 298 298 * @parman: parman instance 299 299 * @prio: parman prio structure to be initialized 300 - * @prority: desired priority of the chunk 300 + * @priority: desired priority of the chunk 301 301 * 302 302 * Note: all locking must be provided by the caller. 303 303 * ··· 356 356 EXPORT_SYMBOL(parman_item_add); 357 357 358 358 /** 359 - * parman_item_del - deletes parman item 359 + * parman_item_remove - deletes parman item 360 360 * @parman: parman instance 361 361 * @prio: parman prio instance to delete the item from 362 362 * @item: parman item instance
+6 -5
lib/radix-tree.c
··· 166 166 /** 167 167 * radix_tree_find_next_bit - find the next set bit in a memory region 168 168 * 169 - * @addr: The address to base the search on 170 - * @size: The bitmap size in bits 171 - * @offset: The bitnumber to start searching at 169 + * @node: where to begin the search 170 + * @tag: the tag index 171 + * @offset: the bitnumber to start searching at 172 172 * 173 173 * Unrollable variant of find_next_bit() for constant size arrays. 174 174 * Tail bits starting from size to roundup(size, BITS_PER_LONG) must be zero. ··· 461 461 462 462 /** 463 463 * radix_tree_shrink - shrink radix tree to minimum height 464 - * @root radix tree root 464 + * @root: radix tree root 465 465 */ 466 466 static inline bool radix_tree_shrink(struct radix_tree_root *root) 467 467 { ··· 691 691 } 692 692 693 693 /** 694 - * __radix_tree_insert - insert into a radix tree 694 + * radix_tree_insert - insert into a radix tree 695 695 * @root: radix tree root 696 696 * @index: index key 697 697 * @item: item to insert ··· 919 919 /** 920 920 * radix_tree_iter_replace - replace item in a slot 921 921 * @root: radix tree root 922 + * @iter: iterator state 922 923 * @slot: pointer to slot 923 924 * @item: new item to store in the slot. 924 925 *
+1 -1
lib/test_kasan_module.c
··· 22 22 char *kmem; 23 23 char __user *usermem; 24 24 size_t size = 10; 25 - int unused; 25 + int __maybe_unused unused; 26 26 27 27 kmem = kmalloc(size, GFP_KERNEL); 28 28 if (!kmem)
+4
mm/gup.c
··· 1535 1535 FOLL_FORCE | FOLL_DUMP | FOLL_GET); 1536 1536 if (locked) 1537 1537 mmap_read_unlock(mm); 1538 + 1539 + if (ret == 1 && is_page_poisoned(page)) 1540 + return NULL; 1541 + 1538 1542 return (ret == 1) ? page : NULL; 1539 1543 } 1540 1544 #endif /* CONFIG_ELF_CORE */
+20
mm/internal.h
··· 97 97 set_page_count(page, 1); 98 98 } 99 99 100 + /* 101 + * When the kernel touches a user page, that page may have been marked 102 + * poisoned while still mapped in user space. If the kernel can guarantee 103 + * data integrity and a successful operation without this page, it should 104 + * check the poison status first and avoid touching the page; not panicking 105 + * is preferable, and coredumping on a fatal process signal is one case 106 + * matching this scenario. If the kernel cannot guarantee data integrity, 107 + * it is better not to call this function and instead let the kernel touch 108 + * the poisoned page and panic. 109 + */ 110 + static inline bool is_page_poisoned(struct page *page) 111 + { 112 + if (PageHWPoison(page)) 113 + return true; 114 + else if (PageHuge(page) && PageHWPoison(compound_head(page))) 115 + return true; 116 + 117 + return false; 118 + } 119 + 100 120 extern unsigned long highest_memmap_pfn; 101 121 102 122 /*
+1 -1
mm/kasan/common.c
··· 63 63 kasan_unpoison(address, size); 64 64 } 65 65 66 - #if CONFIG_KASAN_STACK 66 + #ifdef CONFIG_KASAN_STACK 67 67 /* Unpoison the entire stack for a task. */ 68 68 void kasan_unpoison_task_stack(struct task_struct *task) 69 69 {
+1 -1
mm/kasan/kasan.h
··· 231 231 const char *kasan_get_bug_type(struct kasan_access_info *info); 232 232 void kasan_metadata_fetch_row(char *buffer, void *row); 233 233 234 - #if defined(CONFIG_KASAN_GENERIC) && CONFIG_KASAN_STACK 234 + #if defined(CONFIG_KASAN_GENERIC) && defined(CONFIG_KASAN_STACK) 235 235 void kasan_print_address_stack_frame(const void *addr); 236 236 #else 237 237 static inline void kasan_print_address_stack_frame(const void *addr) { }
+1 -1
mm/kasan/report_generic.c
··· 128 128 memcpy(buffer, kasan_mem_to_shadow(row), META_BYTES_PER_ROW); 129 129 } 130 130 131 - #if CONFIG_KASAN_STACK 131 + #ifdef CONFIG_KASAN_STACK 132 132 static bool __must_check tokenize_frame_descr(const char **frame_descr, 133 133 char *token, size_t max_tok_len, 134 134 unsigned long *value)
+2
mm/mapping_dirty_helpers.c
··· 165 165 return 0; 166 166 } 167 167 168 + #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD 168 169 /* Huge pud */ 169 170 walk->action = ACTION_CONTINUE; 170 171 if (pud_trans_huge(pudval) || pud_devmap(pudval)) 171 172 WARN_ON(pud_write(pudval) || pud_dirty(pudval)); 173 + #endif 172 174 173 175 return 0; 174 176 }
+19 -10
mm/mmu_gather.c
··· 249 249 tlb_flush_mmu_free(tlb); 250 250 } 251 251 252 - /** 253 - * tlb_gather_mmu - initialize an mmu_gather structure for page-table tear-down 254 - * @tlb: the mmu_gather structure to initialize 255 - * @mm: the mm_struct of the target address space 256 - * @fullmm: @mm is without users and we're going to destroy the full address 257 - * space (exit/execve) 258 - * 259 - * Called to initialize an (on-stack) mmu_gather structure for page-table 260 - * tear-down from @mm. 261 - */ 262 252 static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, 263 253 bool fullmm) 264 254 { ··· 273 283 inc_tlb_flush_pending(tlb->mm); 274 284 } 275 285 286 + /** 287 + * tlb_gather_mmu - initialize an mmu_gather structure for page-table tear-down 288 + * @tlb: the mmu_gather structure to initialize 289 + * @mm: the mm_struct of the target address space 290 + * 291 + * Called to initialize an (on-stack) mmu_gather structure for page-table 292 + * tear-down from @mm. 293 + */ 276 294 void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm) 277 295 { 278 296 __tlb_gather_mmu(tlb, mm, false); 279 297 } 280 298 299 + /** 300 + * tlb_gather_mmu_fullmm - initialize an mmu_gather structure for page-table tear-down 301 + * @tlb: the mmu_gather structure to initialize 302 + * @mm: the mm_struct of the target address space 303 + * 304 + * In this case, @mm is without users and we're going to destroy the 305 + * full address space (exit/execve). 306 + * 307 + * Called to initialize an (on-stack) mmu_gather structure for page-table 308 + * tear-down from @mm. 309 + */ 281 310 void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm) 282 311 { 283 312 __tlb_gather_mmu(tlb, mm, true);
+1 -1
mm/oom_kill.c
··· 170 170 return false; 171 171 } 172 172 173 - /** 173 + /* 174 174 * Check whether unreclaimable slab amount is greater than 175 175 * all user memory(LRU pages). 176 176 * dump_unreclaimable_slab() could help in the case that
+3 -1
mm/page_poison.c
··· 77 77 void *addr; 78 78 79 79 addr = kmap_atomic(page); 80 + kasan_disable_current(); 80 81 /* 81 82 * Page poisoning when enabled poisons each and every page 82 83 * that is freed to buddy. Thus no extra check is done to 83 84 * see if a page was poisoned. 84 85 */ 85 - check_poison_mem(addr, PAGE_SIZE); 86 + check_poison_mem(kasan_reset_tag(addr), PAGE_SIZE); 87 + kasan_enable_current(); 86 88 kunmap_atomic(addr); 87 89 } 88 90
+1 -1
mm/percpu-internal.h
··· 87 87 88 88 extern struct list_head *pcpu_chunk_lists; 89 89 extern int pcpu_nr_slots; 90 - extern int pcpu_nr_empty_pop_pages; 90 + extern int pcpu_nr_empty_pop_pages[]; 91 91 92 92 extern struct pcpu_chunk *pcpu_first_chunk; 93 93 extern struct pcpu_chunk *pcpu_reserved_chunk;
+7 -2
mm/percpu-stats.c
··· 145 145 int slot, max_nr_alloc; 146 146 int *buffer; 147 147 enum pcpu_chunk_type type; 148 + int nr_empty_pop_pages; 148 149 149 150 alloc_buffer: 150 151 spin_lock_irq(&pcpu_lock); ··· 166 165 goto alloc_buffer; 167 166 } 168 167 169 - #define PL(X) \ 168 + nr_empty_pop_pages = 0; 169 + for (type = 0; type < PCPU_NR_CHUNK_TYPES; type++) 170 + nr_empty_pop_pages += pcpu_nr_empty_pop_pages[type]; 171 + 172 + #define PL(X) \ 170 173 seq_printf(m, " %-20s: %12lld\n", #X, (long long int)pcpu_stats_ai.X) 171 174 172 175 seq_printf(m, ··· 201 196 PU(nr_max_chunks); 202 197 PU(min_alloc_size); 203 198 PU(max_alloc_size); 204 - P("empty_pop_pages", pcpu_nr_empty_pop_pages); 199 + P("empty_pop_pages", nr_empty_pop_pages); 205 200 seq_putc(m, '\n'); 206 201 207 202 #undef PU
+7 -7
mm/percpu.c
··· 173 173 static LIST_HEAD(pcpu_map_extend_chunks); 174 174 175 175 /* 176 - * The number of empty populated pages, protected by pcpu_lock. The 177 - * reserved chunk doesn't contribute to the count. 176 + * The number of empty populated pages by chunk type, protected by pcpu_lock. 177 + * The reserved chunk doesn't contribute to the count. 178 178 */ 179 - int pcpu_nr_empty_pop_pages; 179 + int pcpu_nr_empty_pop_pages[PCPU_NR_CHUNK_TYPES]; 180 180 181 181 /* 182 182 * The number of populated pages in use by the allocator, protected by ··· 556 556 { 557 557 chunk->nr_empty_pop_pages += nr; 558 558 if (chunk != pcpu_reserved_chunk) 559 - pcpu_nr_empty_pop_pages += nr; 559 + pcpu_nr_empty_pop_pages[pcpu_chunk_type(chunk)] += nr; 560 560 } 561 561 562 562 /* ··· 1832 1832 mutex_unlock(&pcpu_alloc_mutex); 1833 1833 } 1834 1834 1835 - if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW) 1835 + if (pcpu_nr_empty_pop_pages[type] < PCPU_EMPTY_POP_PAGES_LOW) 1836 1836 pcpu_schedule_balance_work(); 1837 1837 1838 1838 /* clear the areas and return address relative to base address */ ··· 2000 2000 pcpu_atomic_alloc_failed = false; 2001 2001 } else { 2002 2002 nr_to_pop = clamp(PCPU_EMPTY_POP_PAGES_HIGH - 2003 - pcpu_nr_empty_pop_pages, 2003 + pcpu_nr_empty_pop_pages[type], 2004 2004 0, PCPU_EMPTY_POP_PAGES_HIGH); 2005 2005 } 2006 2006 ··· 2580 2580 2581 2581 /* link the first chunk in */ 2582 2582 pcpu_first_chunk = chunk; 2583 - pcpu_nr_empty_pop_pages = pcpu_first_chunk->nr_empty_pop_pages; 2583 + pcpu_nr_empty_pop_pages[PCPU_CHUNK_ROOT] = pcpu_first_chunk->nr_empty_pop_pages; 2584 2584 pcpu_chunk_relocate(pcpu_first_chunk, -1); 2585 2585 2586 2586 /* include all regions of the first chunk */
+1 -1
mm/ptdump.c
··· 111 111 unsigned long next, struct mm_walk *walk) 112 112 { 113 113 struct ptdump_state *st = walk->private; 114 - pte_t val = READ_ONCE(*pte); 114 + pte_t val = ptep_get(pte); 115 115 116 116 if (st->effective_prot) 117 117 st->effective_prot(st, 4, pte_val(val));
+2 -2
mm/shuffle.c
··· 147 147 spin_unlock_irqrestore(&z->lock, flags); 148 148 } 149 149 150 - /** 151 - * shuffle_free_memory - reduce the predictability of the page allocator 150 + /* 151 + * __shuffle_free_memory - reduce the predictability of the page allocator 152 152 * @pgdat: node page data 153 153 */ 154 154 void __meminit __shuffle_free_memory(pg_data_t *pgdat)
+7 -1
net/bridge/netfilter/ebtable_broute.c
··· 105 105 &net->xt.broute_table); 106 106 } 107 107 108 + static void __net_exit broute_net_pre_exit(struct net *net) 109 + { 110 + ebt_unregister_table_pre_exit(net, "broute", &ebt_ops_broute); 111 + } 112 + 108 113 static void __net_exit broute_net_exit(struct net *net) 109 114 { 110 - ebt_unregister_table(net, net->xt.broute_table, &ebt_ops_broute); 115 + ebt_unregister_table(net, net->xt.broute_table); 111 116 } 112 117 113 118 static struct pernet_operations broute_net_ops = { 114 119 .init = broute_net_init, 115 120 .exit = broute_net_exit, 121 + .pre_exit = broute_net_pre_exit, 116 122 }; 117 123 118 124 static int __init ebtable_broute_init(void)
+7 -1
net/bridge/netfilter/ebtable_filter.c
··· 99 99 &net->xt.frame_filter); 100 100 } 101 101 102 + static void __net_exit frame_filter_net_pre_exit(struct net *net) 103 + { 104 + ebt_unregister_table_pre_exit(net, "filter", ebt_ops_filter); 105 + } 106 + 102 107 static void __net_exit frame_filter_net_exit(struct net *net) 103 108 { 104 - ebt_unregister_table(net, net->xt.frame_filter, ebt_ops_filter); 109 + ebt_unregister_table(net, net->xt.frame_filter); 105 110 } 106 111 107 112 static struct pernet_operations frame_filter_net_ops = { 108 113 .init = frame_filter_net_init, 109 114 .exit = frame_filter_net_exit, 115 + .pre_exit = frame_filter_net_pre_exit, 110 116 }; 111 117 112 118 static int __init ebtable_filter_init(void)
+7 -1
net/bridge/netfilter/ebtable_nat.c
··· 99 99 &net->xt.frame_nat); 100 100 } 101 101 102 + static void __net_exit frame_nat_net_pre_exit(struct net *net) 103 + { 104 + ebt_unregister_table_pre_exit(net, "nat", ebt_ops_nat); 105 + } 106 + 102 107 static void __net_exit frame_nat_net_exit(struct net *net) 103 108 { 104 - ebt_unregister_table(net, net->xt.frame_nat, ebt_ops_nat); 109 + ebt_unregister_table(net, net->xt.frame_nat); 105 110 } 106 111 107 112 static struct pernet_operations frame_nat_net_ops = { 108 113 .init = frame_nat_net_init, 109 114 .exit = frame_nat_net_exit, 115 + .pre_exit = frame_nat_net_pre_exit, 110 116 }; 111 117 112 118 static int __init ebtable_nat_init(void)
+28 -3
net/bridge/netfilter/ebtables.c
··· 1239 1239 return ret; 1240 1240 } 1241 1241 1242 - void ebt_unregister_table(struct net *net, struct ebt_table *table, 1243 - const struct nf_hook_ops *ops) 1242 + static struct ebt_table *__ebt_find_table(struct net *net, const char *name) 1244 1243 { 1245 - nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks)); 1244 + struct ebt_pernet *ebt_net = net_generic(net, ebt_pernet_id); 1245 + struct ebt_table *t; 1246 + 1247 + mutex_lock(&ebt_mutex); 1248 + 1249 + list_for_each_entry(t, &ebt_net->tables, list) { 1250 + if (strcmp(t->name, name) == 0) { 1251 + mutex_unlock(&ebt_mutex); 1252 + return t; 1253 + } 1254 + } 1255 + 1256 + mutex_unlock(&ebt_mutex); 1257 + return NULL; 1258 + } 1259 + 1260 + void ebt_unregister_table_pre_exit(struct net *net, const char *name, const struct nf_hook_ops *ops) 1261 + { 1262 + struct ebt_table *table = __ebt_find_table(net, name); 1263 + 1264 + if (table) 1265 + nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks)); 1266 + } 1267 + EXPORT_SYMBOL(ebt_unregister_table_pre_exit); 1268 + 1269 + void ebt_unregister_table(struct net *net, struct ebt_table *table) 1270 + { 1246 1271 __ebt_unregister_table(net, table); 1247 1272 } 1248 1273
+2 -1
net/core/dev.c
··· 5972 5972 NAPI_GRO_CB(skb)->frag0_len = 0; 5973 5973 5974 5974 if (!skb_headlen(skb) && pinfo->nr_frags && 5975 - !PageHighMem(skb_frag_page(frag0))) { 5975 + !PageHighMem(skb_frag_page(frag0)) && 5976 + (!NET_IP_ALIGN || !(skb_frag_off(frag0) & 3))) { 5976 5977 NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0); 5977 5978 NAPI_GRO_CB(skb)->frag0_len = min_t(unsigned int, 5978 5979 skb_frag_size(frag0),
+3 -3
net/ethtool/netlink.h
··· 36 36 37 37 /** 38 38 * ethnl_put_strz() - put string attribute with fixed size string 39 - * @skb: skb with the message 40 - * @attrype: attribute type 41 - * @s: ETH_GSTRING_LEN sized string (may not be null terminated) 39 + * @skb: skb with the message 40 + * @attrtype: attribute type 41 + * @s: ETH_GSTRING_LEN sized string (may not be null terminated) 42 42 * 43 43 * Puts an attribute with null terminated string from @s into the message. 44 44 *
+4 -4
net/ethtool/pause.c
··· 32 32 if (!dev->ethtool_ops->get_pauseparam) 33 33 return -EOPNOTSUPP; 34 34 35 + ethtool_stats_init((u64 *)&data->pausestat, 36 + sizeof(data->pausestat) / 8); 37 + 35 38 ret = ethnl_ops_begin(dev); 36 39 if (ret < 0) 37 40 return ret; 38 41 dev->ethtool_ops->get_pauseparam(dev, &data->pauseparam); 39 42 if (req_base->flags & ETHTOOL_FLAG_STATS && 40 - dev->ethtool_ops->get_pause_stats) { 41 - ethtool_stats_init((u64 *)&data->pausestat, 42 - sizeof(data->pausestat) / 8); 43 + dev->ethtool_ops->get_pause_stats) 43 44 dev->ethtool_ops->get_pause_stats(dev, &data->pausestat); 44 - } 45 45 ethnl_ops_complete(dev); 46 46 47 47 return 0;
+9 -2
net/ipv4/netfilter/arp_tables.c
··· 1193 1193 if (!newinfo) 1194 1194 goto out_unlock; 1195 1195 1196 + memset(newinfo->entries, 0, size); 1197 + 1196 1198 newinfo->number = compatr->num_entries; 1197 1199 for (i = 0; i < NF_ARP_NUMHOOKS; i++) { 1198 1200 newinfo->hook_entry[i] = compatr->hook_entry[i]; ··· 1541 1539 return ret; 1542 1540 } 1543 1541 1544 - void arpt_unregister_table(struct net *net, struct xt_table *table, 1545 - const struct nf_hook_ops *ops) 1542 + void arpt_unregister_table_pre_exit(struct net *net, struct xt_table *table, 1543 + const struct nf_hook_ops *ops) 1546 1544 { 1547 1545 nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks)); 1546 + } 1547 + EXPORT_SYMBOL(arpt_unregister_table_pre_exit); 1548 + 1549 + void arpt_unregister_table(struct net *net, struct xt_table *table) 1550 + { 1548 1551 __arpt_unregister_table(net, table); 1549 1552 } 1550 1553
+9 -1
net/ipv4/netfilter/arptable_filter.c
··· 56 56 return err; 57 57 } 58 58 59 + static void __net_exit arptable_filter_net_pre_exit(struct net *net) 60 + { 61 + if (net->ipv4.arptable_filter) 62 + arpt_unregister_table_pre_exit(net, net->ipv4.arptable_filter, 63 + arpfilter_ops); 64 + } 65 + 59 66 static void __net_exit arptable_filter_net_exit(struct net *net) 60 67 { 61 68 if (!net->ipv4.arptable_filter) 62 69 return; 63 - arpt_unregister_table(net, net->ipv4.arptable_filter, arpfilter_ops); 70 + arpt_unregister_table(net, net->ipv4.arptable_filter); 64 71 net->ipv4.arptable_filter = NULL; 65 72 } 66 73 67 74 static struct pernet_operations arptable_filter_net_ops = { 68 75 .exit = arptable_filter_net_exit, 76 + .pre_exit = arptable_filter_net_pre_exit, 69 77 }; 70 78 71 79 static int __init arptable_filter_init(void)
+2
net/ipv4/netfilter/ip_tables.c
··· 1428 1428 if (!newinfo) 1429 1429 goto out_unlock; 1430 1430 1431 + memset(newinfo->entries, 0, size); 1432 + 1431 1433 newinfo->number = compatr->num_entries; 1432 1434 for (i = 0; i < NF_INET_NUMHOOKS; i++) { 1433 1435 newinfo->hook_entry[i] = compatr->hook_entry[i];
+13 -3
net/ipv4/sysctl_net_ipv4.c
··· 1383 1383 if (!table) 1384 1384 goto err_alloc; 1385 1385 1386 - /* Update the variables to point into the current struct net */ 1387 - for (i = 0; i < ARRAY_SIZE(ipv4_net_table) - 1; i++) 1388 - table[i].data += (void *)net - (void *)&init_net; 1386 + for (i = 0; i < ARRAY_SIZE(ipv4_net_table) - 1; i++) { 1387 + if (table[i].data) { 1388 + /* Update the variables to point into 1389 + * the current struct net 1390 + */ 1391 + table[i].data += (void *)net - (void *)&init_net; 1392 + } else { 1393 + /* Entries without data pointer are global; 1394 + * Make them read-only in non-init_net ns 1395 + */ 1396 + table[i].mode &= ~0222; 1397 + } 1398 + } 1389 1399 } 1390 1400 1391 1401 net->ipv4.ipv4_hdr = register_net_sysctl(net, "net/ipv4", table);
+10
net/ipv6/ip6_tunnel.c
··· 2243 2243 t = rtnl_dereference(t->next); 2244 2244 } 2245 2245 } 2246 + 2247 + t = rtnl_dereference(ip6n->tnls_wc[0]); 2248 + while (t) { 2249 + /* If dev is in the same netns, it has already 2250 + * been added to the list by the previous loop. 2251 + */ 2252 + if (!net_eq(dev_net(t->dev), net)) 2253 + unregister_netdevice_queue(t->dev, list); 2254 + t = rtnl_dereference(t->next); 2255 + } 2246 2256 } 2247 2257 2248 2258 static int __net_init ip6_tnl_init_net(struct net *net)
+2
net/ipv6/netfilter/ip6_tables.c
··· 1443 1443 if (!newinfo) 1444 1444 goto out_unlock; 1445 1445 1446 + memset(newinfo->entries, 0, size); 1447 + 1446 1448 newinfo->number = compatr->num_entries; 1447 1449 for (i = 0; i < NF_INET_NUMHOOKS; i++) { 1448 1450 newinfo->hook_entry[i] = compatr->hook_entry[i];
+2 -2
net/ipv6/sit.c
··· 1864 1864 if (dev->rtnl_link_ops == &sit_link_ops) 1865 1865 unregister_netdevice_queue(dev, head); 1866 1866 1867 - for (prio = 1; prio < 4; prio++) { 1867 + for (prio = 0; prio < 4; prio++) { 1868 1868 int h; 1869 - for (h = 0; h < IP6_SIT_HASH_SIZE; h++) { 1869 + for (h = 0; h < (prio ? IP6_SIT_HASH_SIZE : 1); h++) { 1870 1870 struct ip_tunnel *t; 1871 1871 1872 1872 t = rtnl_dereference(sitn->tunnels[prio][h]);
+1
net/netfilter/nf_conntrack_standalone.c
··· 266 266 case IPPROTO_GRE: return "gre"; 267 267 case IPPROTO_SCTP: return "sctp"; 268 268 case IPPROTO_UDPLITE: return "udplite"; 269 + case IPPROTO_ICMPV6: return "icmpv6"; 269 270 } 270 271 271 272 return "unknown";
+3 -3
net/netfilter/nf_flow_table_offload.c
··· 336 336 const __be32 *addr, const __be32 *mask) 337 337 { 338 338 struct flow_action_entry *entry; 339 - int i; 339 + int i, j; 340 340 341 - for (i = 0; i < sizeof(struct in6_addr) / sizeof(u32); i += sizeof(u32)) { 341 + for (i = 0, j = 0; i < sizeof(struct in6_addr) / sizeof(u32); i += sizeof(u32), j++) { 342 342 entry = flow_action_entry_next(flow_rule); 343 343 flow_offload_mangle(entry, FLOW_ACT_MANGLE_HDR_TYPE_IP6, 344 - offset + i, &addr[i], mask); 344 + offset + i, &addr[j], mask); 345 345 } 346 346 } 347 347
+34 -12
net/netfilter/nf_tables_api.c
··· 5301 5301 return -ENOMEM; 5302 5302 } 5303 5303 5304 - static void nft_set_elem_expr_setup(const struct nft_set_ext *ext, int i, 5305 - struct nft_expr *expr_array[]) 5304 + static int nft_set_elem_expr_setup(struct nft_ctx *ctx, 5305 + const struct nft_set_ext *ext, 5306 + struct nft_expr *expr_array[], 5307 + u32 num_exprs) 5306 5308 { 5307 5309 struct nft_set_elem_expr *elem_expr = nft_set_ext_expr(ext); 5308 - struct nft_expr *expr = nft_setelem_expr_at(elem_expr, elem_expr->size); 5310 + struct nft_expr *expr; 5311 + int i, err; 5309 5312 5310 - memcpy(expr, expr_array[i], expr_array[i]->ops->size); 5311 - elem_expr->size += expr_array[i]->ops->size; 5312 - kfree(expr_array[i]); 5313 - expr_array[i] = NULL; 5313 + for (i = 0; i < num_exprs; i++) { 5314 + expr = nft_setelem_expr_at(elem_expr, elem_expr->size); 5315 + err = nft_expr_clone(expr, expr_array[i]); 5316 + if (err < 0) 5317 + goto err_elem_expr_setup; 5318 + 5319 + elem_expr->size += expr_array[i]->ops->size; 5320 + nft_expr_destroy(ctx, expr_array[i]); 5321 + expr_array[i] = NULL; 5322 + } 5323 + 5324 + return 0; 5325 + 5326 + err_elem_expr_setup: 5327 + for (; i < num_exprs; i++) { 5328 + nft_expr_destroy(ctx, expr_array[i]); 5329 + expr_array[i] = NULL; 5330 + } 5331 + 5332 + return -ENOMEM; 5314 5333 } 5315 5334 5316 5335 static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set, ··· 5581 5562 *nft_set_ext_obj(ext) = obj; 5582 5563 obj->use++; 5583 5564 } 5584 - for (i = 0; i < num_exprs; i++) 5585 - nft_set_elem_expr_setup(ext, i, expr_array); 5565 + err = nft_set_elem_expr_setup(ctx, ext, expr_array, num_exprs); 5566 + if (err < 0) 5567 + goto err_elem_expr; 5586 5568 5587 5569 trans = nft_trans_elem_alloc(ctx, NFT_MSG_NEWSETELEM, set); 5588 - if (trans == NULL) 5589 - goto err_trans; 5570 + if (trans == NULL) { 5571 + err = -ENOMEM; 5572 + goto err_elem_expr; 5573 + } 5590 5574 5591 5575 ext->genmask = nft_genmask_cur(ctx->net) | NFT_SET_ELEM_BUSY_MASK; 5592 5576 err = set->ops->insert(ctx->net, set, &elem, &ext2); ··· 5633 5611 set->ops->remove(ctx->net, set, &elem); 5634 5612 err_element_clash: 5635 5613 kfree(trans); 5636 - err_trans: 5614 + err_elem_expr: 5637 5615 if (obj) 5638 5616 obj->use--; 5639 5617 
+2 -2
net/netfilter/nft_limit.c
··· 76 76 return -EOVERFLOW; 77 77 78 78 if (pkts) { 79 - tokens = div_u64(limit->nsecs, limit->rate) * limit->burst; 79 + tokens = div64_u64(limit->nsecs, limit->rate) * limit->burst; 80 80 } else { 81 81 /* The token bucket size limits the number of tokens can be 82 82 * accumulated. tokens_max specifies the bucket size. 83 83 * tokens_max = unit * (rate + burst) / rate. 84 84 */ 85 - tokens = div_u64(limit->nsecs * (limit->rate + limit->burst), 85 + tokens = div64_u64(limit->nsecs * (limit->rate + limit->burst), 86 86 limit->rate); 87 87 } 88 88
+2 -8
net/netfilter/x_tables.c
··· 739 739 { 740 740 const struct xt_match *match = m->u.kernel.match; 741 741 struct compat_xt_entry_match *cm = (struct compat_xt_entry_match *)m; 742 - int pad, off = xt_compat_match_offset(match); 742 + int off = xt_compat_match_offset(match); 743 743 u_int16_t msize = cm->u.user.match_size; 744 744 char name[sizeof(m->u.user.name)]; 745 745 ··· 749 749 match->compat_from_user(m->data, cm->data); 750 750 else 751 751 memcpy(m->data, cm->data, msize - sizeof(*cm)); 752 - pad = XT_ALIGN(match->matchsize) - match->matchsize; 753 - if (pad > 0) 754 - memset(m->data + match->matchsize, 0, pad); 755 752 756 753 msize += off; 757 754 m->u.user.match_size = msize; ··· 1119 1122 { 1120 1123 const struct xt_target *target = t->u.kernel.target; 1121 1124 struct compat_xt_entry_target *ct = (struct compat_xt_entry_target *)t; 1122 - int pad, off = xt_compat_target_offset(target); 1125 + int off = xt_compat_target_offset(target); 1123 1126 u_int16_t tsize = ct->u.user.target_size; 1124 1127 char name[sizeof(t->u.user.name)]; 1125 1128 ··· 1129 1132 target->compat_from_user(t->data, ct->data); 1130 1133 else 1131 1134 memcpy(t->data, ct->data, tsize - sizeof(*ct)); 1132 - pad = XT_ALIGN(target->targetsize) - target->targetsize; 1133 - if (pad > 0) 1134 - memset(t->data + target->targetsize, 0, pad); 1135 1135 1136 1136 tsize += off; 1137 1137 t->u.user.target_size = tsize;
+2 -2
net/netlink/af_netlink.c
··· 1019 1019 return -EINVAL; 1020 1020 } 1021 1021 1022 - netlink_lock_table(); 1023 1022 if (nlk->netlink_bind && groups) { 1024 1023 int group; 1025 1024 ··· 1030 1031 if (!err) 1031 1032 continue; 1032 1033 netlink_undo_bind(group, groups, sk); 1033 - goto unlock; 1034 + return err; 1034 1035 } 1035 1036 } 1036 1037 1037 1038 /* No need for barriers here as we return to user-space without 1038 1039 * using any of the bound attributes. 1039 1040 */ 1041 + netlink_lock_table(); 1040 1042 if (!bound) { 1041 1043 err = nladdr->nl_pid ? 1042 1044 netlink_insert(sk, nladdr->nl_pid) :
+5 -8
net/sctp/socket.c
··· 1520 1520 1521 1521 /* Supposedly, no process has access to the socket, but 1522 1522 * the net layers still may. 1523 - * Also, sctp_destroy_sock() needs to be called with addr_wq_lock 1524 - * held and that should be grabbed before socket lock. 1525 1523 */ 1526 - spin_lock_bh(&net->sctp.addr_wq_lock); 1527 - bh_lock_sock_nested(sk); 1524 + local_bh_disable(); 1525 + bh_lock_sock(sk); 1528 1526 1529 1527 /* Hold the sock, since sk_common_release() will put sock_put() 1530 1528 * and we have just a little more cleanup. ··· 1531 1533 sk_common_release(sk); 1532 1534 1533 1535 bh_unlock_sock(sk); 1534 - spin_unlock_bh(&net->sctp.addr_wq_lock); 1536 + local_bh_enable(); 1535 1537 1536 1538 sock_put(sk); 1537 1539 ··· 4991 4993 sk_sockets_allocated_inc(sk); 4992 4994 sock_prot_inuse_add(net, sk->sk_prot, 1); 4993 4995 4994 - /* Nothing can fail after this block, otherwise 4995 - * sctp_destroy_sock() will be called without addr_wq_lock held 4996 - */ 4997 4996 if (net->sctp.default_auto_asconf) { 4998 4997 spin_lock(&sock_net(sk)->sctp.addr_wq_lock); 4999 4998 list_add_tail(&sp->auto_asconf_list, ··· 5025 5030 5026 5031 if (sp->do_auto_asconf) { 5027 5032 sp->do_auto_asconf = 0; 5033 + spin_lock_bh(&sock_net(sk)->sctp.addr_wq_lock); 5028 5034 list_del(&sp->auto_asconf_list); 5035 + spin_unlock_bh(&sock_net(sk)->sctp.addr_wq_lock); 5029 5036 } 5030 5037 sctp_endpoint_free(sp->ep); 5031 5038 local_bh_disable();
+13 -7
scripts/Makefile.kasan
··· 2 2 CFLAGS_KASAN_NOSANITIZE := -fno-builtin 3 3 KASAN_SHADOW_OFFSET ?= $(CONFIG_KASAN_SHADOW_OFFSET) 4 4 5 + cc-param = $(call cc-option, -mllvm -$(1), $(call cc-option, --param $(1))) 6 + 7 + ifdef CONFIG_KASAN_STACK 8 + stack_enable := 1 9 + else 10 + stack_enable := 0 11 + endif 12 + 5 13 ifdef CONFIG_KASAN_GENERIC 6 14 7 15 ifdef CONFIG_KASAN_INLINE ··· 19 11 endif 20 12 21 13 CFLAGS_KASAN_MINIMAL := -fsanitize=kernel-address 22 - 23 - cc-param = $(call cc-option, -mllvm -$(1), $(call cc-option, --param $(1))) 24 14 25 15 # -fasan-shadow-offset fails without -fsanitize 26 16 CFLAGS_KASAN_SHADOW := $(call cc-option, -fsanitize=kernel-address \ ··· 33 27 CFLAGS_KASAN := $(CFLAGS_KASAN_SHADOW) \ 34 28 $(call cc-param,asan-globals=1) \ 35 29 $(call cc-param,asan-instrumentation-with-call-threshold=$(call_threshold)) \ 36 - $(call cc-param,asan-stack=$(CONFIG_KASAN_STACK)) \ 30 + $(call cc-param,asan-stack=$(stack_enable)) \ 37 31 $(call cc-param,asan-instrument-allocas=1) 38 32 endif 39 33 ··· 42 36 ifdef CONFIG_KASAN_SW_TAGS 43 37 44 38 ifdef CONFIG_KASAN_INLINE 45 - instrumentation_flags := -mllvm -hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET) 39 + instrumentation_flags := $(call cc-param,hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET)) 46 40 else 47 - instrumentation_flags := -mllvm -hwasan-instrument-with-calls=1 41 + instrumentation_flags := $(call cc-param,hwasan-instrument-with-calls=1) 48 42 endif 49 43 50 44 CFLAGS_KASAN := -fsanitize=kernel-hwaddress \ 51 - -mllvm -hwasan-instrument-stack=$(CONFIG_KASAN_STACK) \ 52 - -mllvm -hwasan-use-short-granules=0 \ 45 + $(call cc-param,hwasan-instrument-stack=$(stack_enable)) \ 46 + $(call cc-param,hwasan-use-short-granules=0) \ 53 47 $(instrumentation_flags) 54 48 55 49 endif # CONFIG_KASAN_SW_TAGS
+2 -2
security/Kconfig.hardening
··· 64 64 config GCC_PLUGIN_STRUCTLEAK_BYREF 65 65 bool "zero-init structs passed by reference (strong)" 66 66 depends on GCC_PLUGINS 67 - depends on !(KASAN && KASAN_STACK=1) 67 + depends on !(KASAN && KASAN_STACK) 68 68 select GCC_PLUGIN_STRUCTLEAK 69 69 help 70 70 Zero-initialize any structures on the stack that may ··· 82 82 config GCC_PLUGIN_STRUCTLEAK_BYREF_ALL 83 83 bool "zero-init anything passed by reference (very strong)" 84 84 depends on GCC_PLUGINS 85 - depends on !(KASAN && KASAN_STACK=1) 85 + depends on !(KASAN && KASAN_STACK) 86 86 select GCC_PLUGIN_STRUCTLEAK 87 87 help 88 88 Zero-initialize any stack variables that may be passed
-3
tools/arch/ia64/include/asm/barrier.h
··· 39 39 * sequential memory pages only. 40 40 */ 41 41 42 - /* XXX From arch/ia64/include/uapi/asm/gcc_intrin.h */ 43 - #define ia64_mf() asm volatile ("mf" ::: "memory") 44 - 45 42 #define mb() ia64_mf() 46 43 #define rmb() mb() 47 44 #define wmb() mb()
-2
tools/include/uapi/asm/errno.h
··· 9 9 #include "../../../arch/alpha/include/uapi/asm/errno.h" 10 10 #elif defined(__mips__) 11 11 #include "../../../arch/mips/include/uapi/asm/errno.h" 12 - #elif defined(__ia64__) 13 - #include "../../../arch/ia64/include/uapi/asm/errno.h" 14 12 #elif defined(__xtensa__) 15 13 #include "../../../arch/xtensa/include/uapi/asm/errno.h" 16 14 #else
+3 -2
tools/lib/bpf/xsk.c
··· 1017 1017 struct xsk_ring_cons *comp, 1018 1018 const struct xsk_socket_config *usr_config) 1019 1019 { 1020 + bool unmap, rx_setup_done = false, tx_setup_done = false; 1020 1021 void *rx_map = NULL, *tx_map = NULL; 1021 1022 struct sockaddr_xdp sxdp = {}; 1022 1023 struct xdp_mmap_offsets off; 1023 1024 struct xsk_socket *xsk; 1024 1025 struct xsk_ctx *ctx; 1025 1026 int err, ifindex; 1026 - bool unmap = umem->fill_save != fill; 1027 - bool rx_setup_done = false, tx_setup_done = false; 1028 1027 1029 1028 if (!umem || !xsk_ptr || !(rx || tx)) 1030 1029 return -EFAULT; 1030 + 1031 + unmap = umem->fill_save != fill; 1031 1032 1032 1033 xsk = calloc(1, sizeof(*xsk)); 1033 1034 if (!xsk)
+1 -1
tools/perf/builtin-inject.c
··· 906 906 } 907 907 908 908 data.path = inject.input_name; 909 - inject.session = perf_session__new(&data, true, &inject.tool); 909 + inject.session = perf_session__new(&data, inject.output.is_pipe, &inject.tool); 910 910 if (IS_ERR(inject.session)) 911 911 return PTR_ERR(inject.session); 912 912
+3 -1
tools/perf/util/arm-spe-decoder/arm-spe-pkt-decoder.c
··· 210 210 211 211 if ((hdr & SPE_HEADER0_MASK2) == SPE_HEADER0_EXTENDED) { 212 212 /* 16-bit extended format header */ 213 - ext_hdr = 1; 213 + if (len == 1) 214 + return ARM_SPE_BAD_PACKET; 214 215 216 + ext_hdr = 1; 215 217 hdr = buf[1]; 216 218 if (hdr == SPE_HEADER1_ALIGNMENT) 217 219 return arm_spe_get_alignment(buf, len, packet);
+3 -3
tools/perf/util/block-info.c
··· 201 201 double ratio = 0.0; 202 202 203 203 if (block_fmt->total_cycles) 204 - ratio = (double)bi->cycles / (double)block_fmt->total_cycles; 204 + ratio = (double)bi->cycles_aggr / (double)block_fmt->total_cycles; 205 205 206 206 return color_pct(hpp, block_fmt->width, 100.0 * ratio); 207 207 } ··· 216 216 double l, r; 217 217 218 218 if (block_fmt->total_cycles) { 219 - l = ((double)bi_l->cycles / 219 + l = ((double)bi_l->cycles_aggr / 220 220 (double)block_fmt->total_cycles) * 100000.0; 221 - r = ((double)bi_r->cycles / 221 + r = ((double)bi_r->cycles_aggr / 222 222 (double)block_fmt->total_cycles) * 100000.0; 223 223 return (int64_t)l - (int64_t)r; 224 224 }
-5
tools/testing/selftests/bpf/verifier/bounds.c
··· 261 261 }, 262 262 .fixup_map_hash_8b = { 3 }, 263 263 /* not actually fully unbounded, but the bound is very high */ 264 - .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds, pointer arithmetic with it prohibited for !root", 265 - .result_unpriv = REJECT, 266 264 .errstr = "value -4294967168 makes map_value pointer be out of bounds", 267 265 .result = REJECT, 268 266 }, ··· 296 298 BPF_EXIT_INSN(), 297 299 }, 298 300 .fixup_map_hash_8b = { 3 }, 299 - /* not actually fully unbounded, but the bound is very high */ 300 - .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds, pointer arithmetic with it prohibited for !root", 301 - .result_unpriv = REJECT, 302 301 .errstr = "value -4294967168 makes map_value pointer be out of bounds", 303 302 .result = REJECT, 304 303 },
+11 -10
tools/testing/selftests/bpf/verifier/bounds_deduction.c
··· 6 6 BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), 7 7 BPF_EXIT_INSN(), 8 8 }, 9 - .errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types", 9 + .errstr_unpriv = "R1 has pointer with unsupported alu operation", 10 10 .errstr = "R0 tried to subtract pointer from scalar", 11 11 .result = REJECT, 12 12 }, ··· 21 21 BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0), 22 22 BPF_EXIT_INSN(), 23 23 }, 24 - .errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types", 24 + .errstr_unpriv = "R1 has pointer with unsupported alu operation", 25 25 .result_unpriv = REJECT, 26 26 .result = ACCEPT, 27 27 .retval = 1, ··· 34 34 BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), 35 35 BPF_EXIT_INSN(), 36 36 }, 37 - .errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types", 37 + .errstr_unpriv = "R1 has pointer with unsupported alu operation", 38 38 .errstr = "R0 tried to subtract pointer from scalar", 39 39 .result = REJECT, 40 40 }, 41 41 { 42 42 "check deducing bounds from const, 4", 43 43 .insns = { 44 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), 44 45 BPF_MOV64_IMM(BPF_REG_0, 0), 45 46 BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 0, 1), 46 47 BPF_EXIT_INSN(), 47 48 BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1), 48 49 BPF_EXIT_INSN(), 49 - BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0), 50 + BPF_ALU64_REG(BPF_SUB, BPF_REG_6, BPF_REG_0), 50 51 BPF_EXIT_INSN(), 51 52 }, 52 - .errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types", 53 + .errstr_unpriv = "R6 has pointer with unsupported alu operation", 53 54 .result_unpriv = REJECT, 54 55 .result = ACCEPT, 55 56 }, ··· 62 61 BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), 63 62 BPF_EXIT_INSN(), 64 63 }, 65 - .errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types", 64 + .errstr_unpriv = "R1 has pointer with unsupported alu operation", 66 65 .errstr = "R0 tried to subtract pointer from scalar", 67 66 .result = REJECT, 68 67 }, ··· 75 74 BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), 76 75 BPF_EXIT_INSN(), 77 76 }, 78 - .errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types", 77 + .errstr_unpriv = "R1 has pointer with unsupported alu operation", 79 78 .errstr = "R0 tried to subtract pointer from scalar", 80 79 .result = REJECT, 81 80 }, ··· 89 88 offsetof(struct __sk_buff, mark)), 90 89 BPF_EXIT_INSN(), 91 90 }, 92 - .errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types", 91 + .errstr_unpriv = "R1 has pointer with unsupported alu operation", 93 92 .errstr = "dereference of modified ctx ptr", 94 93 .result = REJECT, 95 94 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, ··· 104 103 offsetof(struct __sk_buff, mark)), 105 104 BPF_EXIT_INSN(), 106 105 }, 107 - .errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types", 106 + .errstr_unpriv = "R1 has pointer with unsupported alu operation", 108 107 .errstr = "dereference of modified ctx ptr", 109 108 .result = REJECT, 110 109 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, ··· 117 116 BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), 118 117 BPF_EXIT_INSN(), 119 118 }, 120 - .errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types", 119 + .errstr_unpriv = "R1 has pointer with unsupported alu operation", 121 120 .errstr = "R0 tried to subtract pointer from scalar", 122 121 .result = REJECT, 123 122 },
-13
tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c
··· 19 19 },
20 20 .fixup_map_hash_8b = { 3 },
21 21 .errstr = "unbounded min value",
22 - .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
23 22 .result = REJECT,
24 23 },
25 24 {
··· 42 43 },
43 44 .fixup_map_hash_8b = { 3 },
44 45 .errstr = "unbounded min value",
45 - .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
46 46 .result = REJECT,
47 47 },
48 48 {
··· 67 69 },
68 70 .fixup_map_hash_8b = { 3 },
69 71 .errstr = "unbounded min value",
70 - .errstr_unpriv = "R8 has unknown scalar with mixed signed bounds",
71 72 .result = REJECT,
72 73 },
73 74 {
··· 91 94 },
92 95 .fixup_map_hash_8b = { 3 },
93 96 .errstr = "unbounded min value",
94 - .errstr_unpriv = "R8 has unknown scalar with mixed signed bounds",
95 97 .result = REJECT,
96 98 },
97 99 {
··· 137 141 },
138 142 .fixup_map_hash_8b = { 3 },
139 143 .errstr = "unbounded min value",
140 - .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
141 144 .result = REJECT,
142 145 },
143 146 {
··· 205 210 },
206 211 .fixup_map_hash_8b = { 3 },
207 212 .errstr = "unbounded min value",
208 - .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
209 213 .result = REJECT,
210 214 },
211 215 {
··· 254 260 },
255 261 .fixup_map_hash_8b = { 3 },
256 262 .errstr = "unbounded min value",
257 - .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
258 263 .result = REJECT,
259 264 },
260 265 {
··· 280 287 },
281 288 .fixup_map_hash_8b = { 3 },
282 289 .errstr = "unbounded min value",
283 - .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
284 290 .result = REJECT,
285 291 },
286 292 {
··· 305 313 },
306 314 .fixup_map_hash_8b = { 3 },
307 315 .errstr = "unbounded min value",
308 - .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
309 316 .result = REJECT,
310 317 },
311 318 {
··· 333 342 },
334 343 .fixup_map_hash_8b = { 3 },
335 344 .errstr = "unbounded min value",
336 - .errstr_unpriv = "R7 has unknown scalar with mixed signed bounds",
337 345 .result = REJECT,
338 346 },
339 347 {
··· 362 372 },
363 373 .fixup_map_hash_8b = { 4 },
364 374 .errstr = "unbounded min value",
365 - .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
366 375 .result = REJECT,
367 376 },
368 377 {
··· 389 400 },
390 401 .fixup_map_hash_8b = { 3 },
391 402 .errstr = "unbounded min value",
392 - .errstr_unpriv = "R1 has unknown scalar with mixed signed bounds",
393 403 .result = REJECT,
394 - .result_unpriv = REJECT,
395 404 },
+2 -2
tools/testing/selftests/bpf/verifier/map_ptr.c
··· 76 76 },
77 77 .fixup_map_hash_16b = { 4 },
78 78 .result_unpriv = REJECT,
79 - .errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
79 + .errstr_unpriv = "R1 has pointer with unsupported alu operation",
80 80 .result = ACCEPT,
81 81 },
82 82 {
··· 94 94 },
95 95 .fixup_map_hash_16b = { 4 },
96 96 .result_unpriv = REJECT,
97 - .errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
97 + .errstr_unpriv = "R0 has pointer with unsupported alu operation",
98 98 .result = ACCEPT,
99 99 },
+1 -1
tools/testing/selftests/bpf/verifier/unpriv.c
··· 505 505 BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
506 506 BPF_EXIT_INSN(),
507 507 },
508 - .errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
508 + .errstr_unpriv = "R1 stack pointer arithmetic goes out of range",
509 509 .result_unpriv = REJECT,
510 510 .result = ACCEPT,
511 511 },
+2 -4
tools/testing/selftests/bpf/verifier/value_ptr_arith.c
··· 21 21 .fixup_map_hash_16b = { 5 },
22 22 .fixup_map_array_48b = { 8 },
23 23 .result = ACCEPT,
24 - .result_unpriv = REJECT,
25 - .errstr_unpriv = "R1 tried to add from different maps",
26 24 .retval = 1,
27 25 },
28 26 {
··· 120 122 .fixup_map_array_48b = { 1 },
121 123 .result = ACCEPT,
122 124 .result_unpriv = REJECT,
123 - .errstr_unpriv = "R2 tried to add from different pointers or scalars",
125 + .errstr_unpriv = "R2 tried to add from different maps, paths or scalars",
124 126 .retval = 0,
125 127 },
126 128 {
··· 167 169 .fixup_map_array_48b = { 1 },
168 170 .result = ACCEPT,
169 171 .result_unpriv = REJECT,
170 - .errstr_unpriv = "R2 tried to add from different maps, paths, or prohibited types",
172 + .errstr_unpriv = "R2 tried to add from different maps, paths or scalars",
171 173 .retval = 0,
172 174 },
173 175 {