Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

+1218 -655
+2 -2
Documentation/networking/switchdev.txt
··· 228 228 bridge link set dev DEV learning on self 229 229 bridge link set dev DEV learning_sync on self 230 230 231 - Learning_sync attribute enables syncing of the learned/forgotton FDB entry to 231 + Learning_sync attribute enables syncing of the learned/forgotten FDB entry to 232 232 the bridge's FDB. It's possible, but not optimal, to enable learning on the 233 233 device port and on the bridge port, and disable learning_sync. 234 234 ··· 245 245 port device supports ageing, when the FDB entry expires, it will notify the 246 246 driver which in turn will notify the bridge with SWITCHDEV_FDB_DEL. If the 247 247 device does not support ageing, the driver can simulate ageing using a 248 - garbage collection timer to monitor FBD entries. Expired entries will be 248 + garbage collection timer to monitor FDB entries. Expired entries will be 249 249 notified to the bridge using SWITCHDEV_FDB_DEL. See rocker driver for 250 250 example of driver running ageing timer. 251 251
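The learning/learning_sync semantics described in this hunk map directly onto iproute2's bridge tool. A minimal configuration sketch, assuming a hypothetical switchdev port sw0p1 already enslaved to bridge br0:

```shell
# Let the port device learn FDB entries in hardware, and sync
# learned/forgotten entries back into the bridge's FDB.
bridge link set dev sw0p1 learning on self
bridge link set dev sw0p1 learning_sync on self

# Entries synced from the device (and removed again on expiry via
# SWITCHDEV_FDB_DEL) show up in the bridge's FDB:
bridge fdb show br br0
```

This needs real switchdev-capable hardware behind the port, so it is a configuration sketch rather than a runnable example.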
+11 -8
Documentation/printk-formats.txt
··· 58 58 %ps versatile_init 59 59 %pB prev_fn_of_versatile_init+0x88/0x88 60 60 61 - For printing symbols and function pointers. The ``S`` and ``s`` specifiers 62 - result in the symbol name with (``S``) or without (``s``) offsets. Where 63 - this is used on a kernel without KALLSYMS - the symbol address is 64 - printed instead. 61 + The ``F`` and ``f`` specifiers are for printing function pointers, 62 + for example, f->func, &gettimeofday. They have the same result as 63 + ``S`` and ``s`` specifiers. But they do an extra conversion on 64 + ia64, ppc64 and parisc64 architectures where the function pointers 65 + are actually function descriptors. 66 + 67 + The ``S`` and ``s`` specifiers can be used for printing symbols 68 + from direct addresses, for example, __builtin_return_address(0), 69 + (void *)regs->ip. They result in the symbol name with (``S``) or 70 + without (``s``) offsets. If KALLSYMS are disabled then the symbol 71 + address is printed instead. 65 72 66 73 The ``B`` specifier results in the symbol name with offsets and should be 67 74 used when printing stack backtraces. The specifier takes into 68 75 consideration the effect of compiler optimisations which may occur 69 76 when tail-call``s are used and marked with the noreturn GCC attribute. 70 77 71 - On ia64, ppc64 and parisc64 architectures function pointers are 72 - actually function descriptors which must first be resolved. The ``F`` and 73 - ``f`` specifiers perform this resolution and then provide the same 74 - functionality as the ``S`` and ``s`` specifiers. 75 78 76 79 Kernel Pointers 77 80 ===============
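The split between the two specifier families in the rewritten text above can be illustrated with a short printk sketch (kernel-only code, not runnable in userspace; `regs` and `f` are hypothetical, following the examples given in the text):

```c
/* Direct address: %pS prints symbol+offset, %ps omits the offset.
 * If KALLSYMS is disabled, the raw address is printed instead. */
printk("faulted at %pS\n", (void *)regs->ip);

/* Function pointer: %pF/%pf first resolve the function descriptor
 * on ia64, ppc64 and parisc64, then behave like %pS/%ps. */
printk("deferred work runs %pF\n", f->func);
```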
+36 -11
Documentation/sysctl/net.txt
··· 35 35 bpf_jit_enable 36 36 -------------- 37 37 38 - This enables Berkeley Packet Filter Just in Time compiler. 39 - Currently supported on x86_64 architecture, bpf_jit provides a framework 40 - to speed packet filtering, the one used by tcpdump/libpcap for example. 38 + This enables the BPF Just in Time (JIT) compiler. BPF is a flexible 39 + and efficient infrastructure allowing to execute bytecode at various 40 + hook points. It is used in a number of Linux kernel subsystems such 41 + as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes, tracepoints) 42 + and security (e.g. seccomp). LLVM has a BPF back end that can compile 43 + restricted C into a sequence of BPF instructions. After program load 44 + through bpf(2) and passing a verifier in the kernel, a JIT will then 45 + translate these BPF proglets into native CPU instructions. There are 46 + two flavors of JITs, the newer eBPF JIT currently supported on: 47 + - x86_64 48 + - arm64 49 + - ppc64 50 + - sparc64 51 + - mips64 52 + - s390x 53 + 54 + And the older cBPF JIT supported on the following archs: 55 + - arm 56 + - mips 57 + - ppc 58 + - sparc 59 + 60 + eBPF JITs are a superset of cBPF JITs, meaning the kernel will 61 + migrate cBPF instructions into eBPF instructions and then JIT 62 + compile them transparently. Older cBPF JITs can only translate 63 + tcpdump filters, seccomp rules, etc, but not mentioned eBPF 64 + programs loaded through bpf(2). 65 + 41 66 Values : 42 67 0 - disable the JIT (default value) 43 68 1 - enable the JIT ··· 71 46 bpf_jit_harden 72 47 -------------- 73 48 74 - This enables hardening for the Berkeley Packet Filter Just in Time compiler. 75 - Supported are eBPF JIT backends. Enabling hardening trades off performance, 76 - but can mitigate JIT spraying. 49 + This enables hardening for the BPF JIT compiler. Supported are eBPF 50 + JIT backends. Enabling hardening trades off performance, but can 51 + mitigate JIT spraying. 
77 52 Values : 78 53 0 - disable JIT hardening (default value) 79 54 1 - enable JIT hardening for unprivileged users only ··· 82 57 bpf_jit_kallsyms 83 58 ---------------- 84 59 85 - When Berkeley Packet Filter Just in Time compiler is enabled, then compiled 86 - images are unknown addresses to the kernel, meaning they neither show up in 87 - traces nor in /proc/kallsyms. This enables export of these addresses, which 88 - can be used for debugging/tracing. If bpf_jit_harden is enabled, this feature 89 - is disabled. 60 + When BPF JIT compiler is enabled, then compiled images are unknown 61 + addresses to the kernel, meaning they neither show up in traces nor 62 + in /proc/kallsyms. This enables export of these addresses, which can 63 + be used for debugging/tracing. If bpf_jit_harden is enabled, this 64 + feature is disabled. 90 65 Values : 91 66 0 - disable JIT kallsyms export (default value) 92 67 1 - enable JIT kallsyms export for privileged users only
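The three knobs documented in this hunk live under net.core and can be set with sysctl(8). A sketch using only the values listed in the text (all three require root or CAP_SYS_ADMIN):

```shell
# Enable the BPF JIT compiler (0 = disable, the default; 1 = enable).
sysctl -w net.core.bpf_jit_enable=1

# Harden JITed images (1 = unprivileged users only); note that
# enabling hardening disables bpf_jit_kallsyms.
sysctl -w net.core.bpf_jit_harden=1

# Export JITed image addresses for debugging/tracing
# (1 = privileged users only); ignored while hardening is on.
sysctl -w net.core.bpf_jit_kallsyms=1
```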
-1
MAINTAINERS
··· 7120 7120 L: linux-kernel@vger.kernel.org 7121 7121 S: Maintained 7122 7122 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core 7123 - T: git git://git.infradead.org/users/jcooper/linux.git irqchip/core 7124 7123 F: Documentation/devicetree/bindings/interrupt-controller/ 7125 7124 F: drivers/irqchip/ 7126 7125
+1 -1
Makefile
··· 1 1 VERSION = 4 2 2 PATCHLEVEL = 13 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc5 4 + EXTRAVERSION = -rc6 5 5 NAME = Fearless Coyote 6 6 7 7 # *DOCUMENTATION*
-1
arch/arc/Kconfig
··· 96 96 97 97 menu "ARC Platform/SoC/Board" 98 98 99 - source "arch/arc/plat-sim/Kconfig" 100 99 source "arch/arc/plat-tb10x/Kconfig" 101 100 source "arch/arc/plat-axs10x/Kconfig" 102 101 #New platform adds here
+1 -1
arch/arc/Makefile
··· 107 107 # w/o this dtb won't embed into kernel binary 108 108 core-y += arch/arc/boot/dts/ 109 109 110 - core-$(CONFIG_ARC_PLAT_SIM) += arch/arc/plat-sim/ 110 + core-y += arch/arc/plat-sim/ 111 111 core-$(CONFIG_ARC_PLAT_TB10X) += arch/arc/plat-tb10x/ 112 112 core-$(CONFIG_ARC_PLAT_AXS10X) += arch/arc/plat-axs10x/ 113 113 core-$(CONFIG_ARC_PLAT_EZNPS) += arch/arc/plat-eznps/
+9 -11
arch/arc/boot/dts/axc001.dtsi
··· 15 15 16 16 / { 17 17 compatible = "snps,arc"; 18 - #address-cells = <1>; 19 - #size-cells = <1>; 18 + #address-cells = <2>; 19 + #size-cells = <2>; 20 20 21 21 cpu_card { 22 22 compatible = "simple-bus"; 23 23 #address-cells = <1>; 24 24 #size-cells = <1>; 25 25 26 - ranges = <0x00000000 0xf0000000 0x10000000>; 26 + ranges = <0x00000000 0x0 0xf0000000 0x10000000>; 27 27 28 28 core_clk: core_clk { 29 29 #clock-cells = <0>; ··· 91 91 mb_intc: dw-apb-ictl@0xe0012000 { 92 92 #interrupt-cells = <1>; 93 93 compatible = "snps,dw-apb-ictl"; 94 - reg = < 0xe0012000 0x200 >; 94 + reg = < 0x0 0xe0012000 0x0 0x200 >; 95 95 interrupt-controller; 96 96 interrupt-parent = <&core_intc>; 97 97 interrupts = < 7 >; 98 98 }; 99 99 100 100 memory { 101 - #address-cells = <1>; 102 - #size-cells = <1>; 103 - ranges = <0x00000000 0x80000000 0x20000000>; 104 101 device_type = "memory"; 105 - reg = <0x80000000 0x1b000000>; /* (512 - 32) MiB */ 102 + /* CONFIG_KERNEL_RAM_BASE_ADDRESS needs to match low mem start */ 103 + reg = <0x0 0x80000000 0x0 0x1b000000>; /* (512 - 32) MiB */ 106 104 }; 107 105 108 106 reserved-memory { 109 - #address-cells = <1>; 110 - #size-cells = <1>; 107 + #address-cells = <2>; 108 + #size-cells = <2>; 111 109 ranges; 112 110 /* 113 111 * We just move frame buffer area to the very end of ··· 116 118 */ 117 119 frame_buffer: frame_buffer@9e000000 { 118 120 compatible = "shared-dma-pool"; 119 - reg = <0x9e000000 0x2000000>; 121 + reg = <0x0 0x9e000000 0x0 0x2000000>; 120 122 no-map; 121 123 }; 122 124 };
+10 -11
arch/arc/boot/dts/axc003.dtsi
··· 14 14 15 15 / { 16 16 compatible = "snps,arc"; 17 - #address-cells = <1>; 18 - #size-cells = <1>; 17 + #address-cells = <2>; 18 + #size-cells = <2>; 19 19 20 20 cpu_card { 21 21 compatible = "simple-bus"; 22 22 #address-cells = <1>; 23 23 #size-cells = <1>; 24 24 25 - ranges = <0x00000000 0xf0000000 0x10000000>; 25 + ranges = <0x00000000 0x0 0xf0000000 0x10000000>; 26 26 27 27 core_clk: core_clk { 28 28 #clock-cells = <0>; ··· 94 94 mb_intc: dw-apb-ictl@0xe0012000 { 95 95 #interrupt-cells = <1>; 96 96 compatible = "snps,dw-apb-ictl"; 97 - reg = < 0xe0012000 0x200 >; 97 + reg = < 0x0 0xe0012000 0x0 0x200 >; 98 98 interrupt-controller; 99 99 interrupt-parent = <&core_intc>; 100 100 interrupts = < 24 >; 101 101 }; 102 102 103 103 memory { 104 - #address-cells = <1>; 105 - #size-cells = <1>; 106 - ranges = <0x00000000 0x80000000 0x40000000>; 107 104 device_type = "memory"; 108 - reg = <0x80000000 0x20000000>; /* 512MiB */ 105 + /* CONFIG_KERNEL_RAM_BASE_ADDRESS needs to match low mem start */ 106 + reg = <0x0 0x80000000 0x0 0x20000000 /* 512 MiB low mem */ 107 + 0x1 0xc0000000 0x0 0x40000000>; /* 1 GiB highmem */ 109 108 }; 110 109 111 110 reserved-memory { 112 - #address-cells = <1>; 113 - #size-cells = <1>; 111 + #address-cells = <2>; 112 + #size-cells = <2>; 114 113 ranges; 115 114 /* 116 115 * Move frame buffer out of IOC aperture (0x8z-0xAz). 117 116 */ 118 117 frame_buffer: frame_buffer@be000000 { 119 118 compatible = "shared-dma-pool"; 120 - reg = <0xbe000000 0x2000000>; 119 + reg = <0x0 0xbe000000 0x0 0x2000000>; 121 120 no-map; 122 121 }; 123 122 };
+10 -11
arch/arc/boot/dts/axc003_idu.dtsi
··· 14 14 15 15 / { 16 16 compatible = "snps,arc"; 17 - #address-cells = <1>; 18 - #size-cells = <1>; 17 + #address-cells = <2>; 18 + #size-cells = <2>; 19 19 20 20 cpu_card { 21 21 compatible = "simple-bus"; 22 22 #address-cells = <1>; 23 23 #size-cells = <1>; 24 24 25 - ranges = <0x00000000 0xf0000000 0x10000000>; 25 + ranges = <0x00000000 0x0 0xf0000000 0x10000000>; 26 26 27 27 core_clk: core_clk { 28 28 #clock-cells = <0>; ··· 100 100 mb_intc: dw-apb-ictl@0xe0012000 { 101 101 #interrupt-cells = <1>; 102 102 compatible = "snps,dw-apb-ictl"; 103 - reg = < 0xe0012000 0x200 >; 103 + reg = < 0x0 0xe0012000 0x0 0x200 >; 104 104 interrupt-controller; 105 105 interrupt-parent = <&idu_intc>; 106 106 interrupts = <0>; 107 107 }; 108 108 109 109 memory { 110 - #address-cells = <1>; 111 - #size-cells = <1>; 112 - ranges = <0x00000000 0x80000000 0x40000000>; 113 110 device_type = "memory"; 114 - reg = <0x80000000 0x20000000>; /* 512MiB */ 111 + /* CONFIG_KERNEL_RAM_BASE_ADDRESS needs to match low mem start */ 112 + reg = <0x0 0x80000000 0x0 0x20000000 /* 512 MiB low mem */ 113 + 0x1 0xc0000000 0x0 0x40000000>; /* 1 GiB highmem */ 115 114 }; 116 115 117 116 reserved-memory { 118 - #address-cells = <1>; 119 - #size-cells = <1>; 117 + #address-cells = <2>; 118 + #size-cells = <2>; 120 119 ranges; 121 120 /* 122 121 * Move frame buffer out of IOC aperture (0x8z-0xAz). 123 122 */ 124 123 frame_buffer: frame_buffer@be000000 { 125 124 compatible = "shared-dma-pool"; 126 - reg = <0xbe000000 0x2000000>; 125 + reg = <0x0 0xbe000000 0x0 0x2000000>; 127 126 no-map; 128 127 }; 129 128 };
+1 -1
arch/arc/boot/dts/axs10x_mb.dtsi
··· 13 13 compatible = "simple-bus"; 14 14 #address-cells = <1>; 15 15 #size-cells = <1>; 16 - ranges = <0x00000000 0xe0000000 0x10000000>; 16 + ranges = <0x00000000 0x0 0xe0000000 0x10000000>; 17 17 interrupt-parent = <&mb_intc>; 18 18 19 19 i2sclk: i2sclk@100a0 {
-1
arch/arc/configs/haps_hs_defconfig
··· 21 21 # CONFIG_BLK_DEV_BSG is not set 22 22 # CONFIG_IOSCHED_DEADLINE is not set 23 23 # CONFIG_IOSCHED_CFQ is not set 24 - CONFIG_ARC_PLAT_SIM=y 25 24 CONFIG_ISA_ARCV2=y 26 25 CONFIG_ARC_BUILTIN_DTB_NAME="haps_hs" 27 26 CONFIG_PREEMPT=y
-1
arch/arc/configs/haps_hs_smp_defconfig
··· 23 23 # CONFIG_BLK_DEV_BSG is not set 24 24 # CONFIG_IOSCHED_DEADLINE is not set 25 25 # CONFIG_IOSCHED_CFQ is not set 26 - CONFIG_ARC_PLAT_SIM=y 27 26 CONFIG_ISA_ARCV2=y 28 27 CONFIG_SMP=y 29 28 CONFIG_ARC_BUILTIN_DTB_NAME="haps_hs_idu"
-1
arch/arc/configs/nps_defconfig
··· 39 39 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set 40 40 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 41 41 # CONFIG_INET_XFRM_MODE_BEET is not set 42 - # CONFIG_INET_LRO is not set 43 42 # CONFIG_INET_DIAG is not set 44 43 # CONFIG_IPV6 is not set 45 44 # CONFIG_WIRELESS is not set
-1
arch/arc/configs/nsim_700_defconfig
··· 23 23 # CONFIG_BLK_DEV_BSG is not set 24 24 # CONFIG_IOSCHED_DEADLINE is not set 25 25 # CONFIG_IOSCHED_CFQ is not set 26 - CONFIG_ARC_PLAT_SIM=y 27 26 CONFIG_ARC_BUILTIN_DTB_NAME="nsim_700" 28 27 CONFIG_PREEMPT=y 29 28 # CONFIG_COMPACTION is not set
-1
arch/arc/configs/nsim_hs_defconfig
··· 26 26 # CONFIG_BLK_DEV_BSG is not set 27 27 # CONFIG_IOSCHED_DEADLINE is not set 28 28 # CONFIG_IOSCHED_CFQ is not set 29 - CONFIG_ARC_PLAT_SIM=y 30 29 CONFIG_ISA_ARCV2=y 31 30 CONFIG_ARC_BUILTIN_DTB_NAME="nsim_hs" 32 31 CONFIG_PREEMPT=y
-1
arch/arc/configs/nsim_hs_smp_defconfig
··· 24 24 # CONFIG_BLK_DEV_BSG is not set 25 25 # CONFIG_IOSCHED_DEADLINE is not set 26 26 # CONFIG_IOSCHED_CFQ is not set 27 - CONFIG_ARC_PLAT_SIM=y 28 27 CONFIG_ISA_ARCV2=y 29 28 CONFIG_SMP=y 30 29 CONFIG_ARC_BUILTIN_DTB_NAME="nsim_hs_idu"
-1
arch/arc/configs/nsimosci_defconfig
··· 23 23 # CONFIG_BLK_DEV_BSG is not set 24 24 # CONFIG_IOSCHED_DEADLINE is not set 25 25 # CONFIG_IOSCHED_CFQ is not set 26 - CONFIG_ARC_PLAT_SIM=y 27 26 CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci" 28 27 # CONFIG_COMPACTION is not set 29 28 CONFIG_NET=y
-1
arch/arc/configs/nsimosci_hs_defconfig
··· 23 23 # CONFIG_BLK_DEV_BSG is not set 24 24 # CONFIG_IOSCHED_DEADLINE is not set 25 25 # CONFIG_IOSCHED_CFQ is not set 26 - CONFIG_ARC_PLAT_SIM=y 27 26 CONFIG_ISA_ARCV2=y 28 27 CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci_hs" 29 28 # CONFIG_COMPACTION is not set
-1
arch/arc/configs/nsimosci_hs_smp_defconfig
··· 18 18 # CONFIG_BLK_DEV_BSG is not set 19 19 # CONFIG_IOSCHED_DEADLINE is not set 20 20 # CONFIG_IOSCHED_CFQ is not set 21 - CONFIG_ARC_PLAT_SIM=y 22 21 CONFIG_ISA_ARCV2=y 23 22 CONFIG_SMP=y 24 23 # CONFIG_ARC_TIMERS_64BIT is not set
-1
arch/arc/configs/tb10x_defconfig
··· 38 38 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set 39 39 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 40 40 # CONFIG_INET_XFRM_MODE_BEET is not set 41 - # CONFIG_INET_LRO is not set 42 41 # CONFIG_INET_DIAG is not set 43 42 # CONFIG_IPV6 is not set 44 43 # CONFIG_WIRELESS is not set
+2
arch/arc/include/asm/cache.h
··· 96 96 #define ARC_REG_SLC_FLUSH 0x904 97 97 #define ARC_REG_SLC_INVALIDATE 0x905 98 98 #define ARC_REG_SLC_RGN_START 0x914 99 + #define ARC_REG_SLC_RGN_START1 0x915 99 100 #define ARC_REG_SLC_RGN_END 0x916 101 + #define ARC_REG_SLC_RGN_END1 0x917 100 102 101 103 /* Bit val in SLC_CONTROL */ 102 104 #define SLC_CTRL_DIS 0x001
+2
arch/arc/include/asm/mmu.h
··· 94 94 return IS_ENABLED(CONFIG_ARC_HAS_PAE40); 95 95 } 96 96 97 + extern int pae40_exist_but_not_enab(void); 98 + 97 99 #endif /* !__ASSEMBLY__ */ 98 100 99 101 #endif
+3
arch/arc/kernel/intc-arcv2.c
··· 75 75 * Set a default priority for all available interrupts to prevent 76 76 * switching of register banks if Fast IRQ and multiple register banks 77 77 * are supported by CPU. 78 + * Also disable all IRQ lines so faulty external hardware won't 79 + * trigger interrupt that kernel is not ready to handle. 78 80 */ 79 81 for (i = NR_EXCEPTIONS; i < irq_bcr.irqs + NR_EXCEPTIONS; i++) { 80 82 write_aux_reg(AUX_IRQ_SELECT, i); 81 83 write_aux_reg(AUX_IRQ_PRIORITY, ARCV2_IRQ_DEF_PRIO); 84 + write_aux_reg(AUX_IRQ_ENABLE, 0); 82 85 } 83 86 84 87 /* setup status32, don't enable intr yet as kernel doesn't want */
+13 -1
arch/arc/kernel/intc-compact.c
··· 27 27 */ 28 28 void arc_init_IRQ(void) 29 29 { 30 - int level_mask = 0; 30 + int level_mask = 0, i; 31 31 32 32 /* Is timer high priority Interrupt (Level2 in ARCompact jargon) */ 33 33 level_mask |= IS_ENABLED(CONFIG_ARC_COMPACT_IRQ_LEVELS) << TIMER0_IRQ; ··· 40 40 41 41 if (level_mask) 42 42 pr_info("Level-2 interrupts bitset %x\n", level_mask); 43 + 44 + /* 45 + * Disable all IRQ lines so faulty external hardware won't 46 + * trigger interrupt that kernel is not ready to handle. 47 + */ 48 + for (i = TIMER0_IRQ; i < NR_CPU_IRQS; i++) { 49 + unsigned int ienb; 50 + 51 + ienb = read_aux_reg(AUX_IENABLE); 52 + ienb &= ~(1 << i); 53 + write_aux_reg(AUX_IENABLE, ienb); 54 + } 43 55 } 44 56 45 57 /*
+42 -8
arch/arc/mm/cache.c
··· 665 665 static DEFINE_SPINLOCK(lock); 666 666 unsigned long flags; 667 667 unsigned int ctrl; 668 + phys_addr_t end; 668 669 669 670 spin_lock_irqsave(&lock, flags); 670 671 ··· 695 694 * END needs to be setup before START (latter triggers the operation) 696 695 * END can't be same as START, so add (l2_line_sz - 1) to sz 697 696 */ 698 - write_aux_reg(ARC_REG_SLC_RGN_END, (paddr + sz + l2_line_sz - 1)); 699 - write_aux_reg(ARC_REG_SLC_RGN_START, paddr); 697 + end = paddr + sz + l2_line_sz - 1; 698 + if (is_pae40_enabled()) 699 + write_aux_reg(ARC_REG_SLC_RGN_END1, upper_32_bits(end)); 700 + 701 + write_aux_reg(ARC_REG_SLC_RGN_END, lower_32_bits(end)); 702 + 703 + if (is_pae40_enabled()) 704 + write_aux_reg(ARC_REG_SLC_RGN_START1, upper_32_bits(paddr)); 705 + 706 + write_aux_reg(ARC_REG_SLC_RGN_START, lower_32_bits(paddr)); 707 + 708 + /* Make sure "busy" bit reports correct stataus, see STAR 9001165532 */ 709 + read_aux_reg(ARC_REG_SLC_CTRL); 700 710 701 711 while (read_aux_reg(ARC_REG_SLC_CTRL) & SLC_CTRL_BUSY); 702 712 ··· 1123 1111 __dc_enable(); 1124 1112 } 1125 1113 1114 + /* 1115 + * Cache related boot time checks/setups only needed on master CPU: 1116 + * - Geometry checks (kernel build and hardware agree: e.g. L1_CACHE_BYTES) 1117 + * Assume SMP only, so all cores will have same cache config. 
A check on 1118 + * one core suffices for all 1119 + * - IOC setup / dma callbacks only need to be done once 1120 + */ 1126 1121 void __init arc_cache_init_master(void) 1127 1122 { 1128 1123 unsigned int __maybe_unused cpu = smp_processor_id(); ··· 1209 1190 1210 1191 printk(arc_cache_mumbojumbo(0, str, sizeof(str))); 1211 1192 1212 - /* 1213 - * Only master CPU needs to execute rest of function: 1214 - * - Assume SMP so all cores will have same cache config so 1215 - * any geomtry checks will be same for all 1216 - * - IOC setup / dma callbacks only need to be setup once 1217 - */ 1218 1193 if (!cpu) 1219 1194 arc_cache_init_master(); 1195 + 1196 + /* 1197 + * In PAE regime, TLB and cache maintenance ops take wider addresses 1198 + * And even if PAE is not enabled in kernel, the upper 32-bits still need 1199 + * to be zeroed to keep the ops sane. 1200 + * As an optimization for more common !PAE enabled case, zero them out 1201 + * once at init, rather than checking/setting to 0 for every runtime op 1202 + */ 1203 + if (is_isa_arcv2() && pae40_exist_but_not_enab()) { 1204 + 1205 + if (IS_ENABLED(CONFIG_ARC_HAS_ICACHE)) 1206 + write_aux_reg(ARC_REG_IC_PTAG_HI, 0); 1207 + 1208 + if (IS_ENABLED(CONFIG_ARC_HAS_DCACHE)) 1209 + write_aux_reg(ARC_REG_DC_PTAG_HI, 0); 1210 + 1211 + if (l2_line_sz) { 1212 + write_aux_reg(ARC_REG_SLC_RGN_END1, 0); 1213 + write_aux_reg(ARC_REG_SLC_RGN_START1, 0); 1214 + } 1215 + } 1220 1216 }
+45
arch/arc/mm/dma.c
··· 153 153 } 154 154 } 155 155 156 + /* 157 + * arc_dma_map_page - map a portion of a page for streaming DMA 158 + * 159 + * Ensure that any data held in the cache is appropriately discarded 160 + * or written back. 161 + * 162 + * The device owns this memory once this call has completed. The CPU 163 + * can regain ownership by calling dma_unmap_page(). 164 + * 165 + * Note: while it takes struct page as arg, caller can "abuse" it to pass 166 + * a region larger than PAGE_SIZE, provided it is physically contiguous 167 + * and this still works correctly 168 + */ 156 169 static dma_addr_t arc_dma_map_page(struct device *dev, struct page *page, 157 170 unsigned long offset, size_t size, enum dma_data_direction dir, 158 171 unsigned long attrs) ··· 176 163 _dma_cache_sync(paddr, size, dir); 177 164 178 165 return plat_phys_to_dma(dev, paddr); 166 + } 167 + 168 + /* 169 + * arc_dma_unmap_page - unmap a buffer previously mapped through dma_map_page() 170 + * 171 + * After this call, reads by the CPU to the buffer are guaranteed to see 172 + * whatever the device wrote there. 
173 + * 174 + * Note: historically this routine was not implemented for ARC 175 + */ 176 + static void arc_dma_unmap_page(struct device *dev, dma_addr_t handle, 177 + size_t size, enum dma_data_direction dir, 178 + unsigned long attrs) 179 + { 180 + phys_addr_t paddr = plat_dma_to_phys(dev, handle); 181 + 182 + if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) 183 + _dma_cache_sync(paddr, size, dir); 179 184 } 180 185 181 186 static int arc_dma_map_sg(struct device *dev, struct scatterlist *sg, ··· 207 176 s->length, dir); 208 177 209 178 return nents; 179 + } 180 + 181 + static void arc_dma_unmap_sg(struct device *dev, struct scatterlist *sg, 182 + int nents, enum dma_data_direction dir, 183 + unsigned long attrs) 184 + { 185 + struct scatterlist *s; 186 + int i; 187 + 188 + for_each_sg(sg, s, nents, i) 189 + arc_dma_unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir, 190 + attrs); 210 191 } 211 192 212 193 static void arc_dma_sync_single_for_cpu(struct device *dev, ··· 266 223 .free = arc_dma_free, 267 224 .mmap = arc_dma_mmap, 268 225 .map_page = arc_dma_map_page, 226 + .unmap_page = arc_dma_unmap_page, 269 227 .map_sg = arc_dma_map_sg, 228 + .unmap_sg = arc_dma_unmap_sg, 270 229 .sync_single_for_device = arc_dma_sync_single_for_device, 271 230 .sync_single_for_cpu = arc_dma_sync_single_for_cpu, 272 231 .sync_sg_for_cpu = arc_dma_sync_sg_for_cpu,
+11 -1
arch/arc/mm/tlb.c
··· 104 104 /* A copy of the ASID from the PID reg is kept in asid_cache */ 105 105 DEFINE_PER_CPU(unsigned int, asid_cache) = MM_CTXT_FIRST_CYCLE; 106 106 107 + static int __read_mostly pae_exists; 108 + 107 109 /* 108 110 * Utility Routine to erase a J-TLB entry 109 111 * Caller needs to setup Index Reg (manually or via getIndex) ··· 786 784 mmu->u_dtlb = mmu4->u_dtlb * 4; 787 785 mmu->u_itlb = mmu4->u_itlb * 4; 788 786 mmu->sasid = mmu4->sasid; 789 - mmu->pae = mmu4->pae; 787 + pae_exists = mmu->pae = mmu4->pae; 790 788 } 791 789 } 792 790 ··· 809 807 IS_AVAIL2(p_mmu->pae, ", PAE40 ", CONFIG_ARC_HAS_PAE40)); 810 808 811 809 return buf; 810 + } 811 + 812 + int pae40_exist_but_not_enab(void) 813 + { 814 + return pae_exists && !is_pae40_enabled(); 812 815 } 813 816 814 817 void arc_mmu_init(void) ··· 866 859 /* swapper_pg_dir is the pgd for the kernel, used by vmalloc */ 867 860 write_aux_reg(ARC_REG_SCRATCH_DATA0, swapper_pg_dir); 868 861 #endif 862 + 863 + if (pae40_exist_but_not_enab()) 864 + write_aux_reg(ARC_REG_TLBPD1HI, 0); 869 865 } 870 866 871 867 /*
-13
arch/arc/plat-sim/Kconfig
··· 1 - # 2 - # Copyright (C) 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com) 3 - # 4 - # This program is free software; you can redistribute it and/or modify 5 - # it under the terms of the GNU General Public License version 2 as 6 - # published by the Free Software Foundation. 7 - # 8 - 9 - menuconfig ARC_PLAT_SIM 10 - bool "ARC nSIM based simulation virtual platforms" 11 - help 12 - Support for nSIM based ARC simulation platforms 13 - This includes the standalone nSIM (uart only) vs. System C OSCI VP
+4 -1
arch/arc/plat-sim/platform.c
··· 20 20 */ 21 21 22 22 static const char *simulation_compat[] __initconst = { 23 + #ifdef CONFIG_ISA_ARCOMPACT 23 24 "snps,nsim", 24 - "snps,nsim_hs", 25 25 "snps,nsimosci", 26 + #else 27 + "snps,nsim_hs", 26 28 "snps,nsimosci_hs", 27 29 "snps,zebu_hs", 30 + #endif 28 31 NULL, 29 32 }; 30 33
+1
arch/arm/boot/dts/imx25.dtsi
··· 297 297 #address-cells = <1>; 298 298 #size-cells = <1>; 299 299 status = "disabled"; 300 + ranges; 300 301 301 302 adc: adc@50030800 { 302 303 compatible = "fsl,imx25-gcq";
+2 -2
arch/arm/boot/dts/imx6qdl-nitrogen6_som2.dtsi
··· 507 507 pinctrl_pcie: pciegrp { 508 508 fsl,pins = < 509 509 /* PCIe reset */ 510 - MX6QDL_PAD_EIM_BCLK__GPIO6_IO31 0x030b0 510 + MX6QDL_PAD_EIM_DA0__GPIO3_IO00 0x030b0 511 511 MX6QDL_PAD_EIM_DA4__GPIO3_IO04 0x030b0 512 512 >; 513 513 }; ··· 668 668 &pcie { 669 669 pinctrl-names = "default"; 670 670 pinctrl-0 = <&pinctrl_pcie>; 671 - reset-gpio = <&gpio6 31 GPIO_ACTIVE_LOW>; 671 + reset-gpio = <&gpio3 0 GPIO_ACTIVE_LOW>; 672 672 status = "okay"; 673 673 }; 674 674
+8 -8
arch/arm/boot/dts/imx7d-sdb.dts
··· 557 557 >; 558 558 }; 559 559 560 + pinctrl_spi4: spi4grp { 561 + fsl,pins = < 562 + MX7D_PAD_GPIO1_IO09__GPIO1_IO9 0x59 563 + MX7D_PAD_GPIO1_IO12__GPIO1_IO12 0x59 564 + MX7D_PAD_GPIO1_IO13__GPIO1_IO13 0x59 565 + >; 566 + }; 567 + 560 568 pinctrl_tsc2046_pendown: tsc2046_pendown { 561 569 fsl,pins = < 562 570 MX7D_PAD_EPDC_BDR1__GPIO2_IO29 0x59 ··· 705 697 fsl,pins = < 706 698 MX7D_PAD_LPSR_GPIO1_IO01__PWM1_OUT 0x110b0 707 699 >; 708 - 709 - pinctrl_spi4: spi4grp { 710 - fsl,pins = < 711 - MX7D_PAD_GPIO1_IO09__GPIO1_IO9 0x59 712 - MX7D_PAD_GPIO1_IO12__GPIO1_IO12 0x59 713 - MX7D_PAD_GPIO1_IO13__GPIO1_IO13 0x59 714 - >; 715 - }; 716 700 }; 717 701 };
+6 -6
arch/arm/boot/dts/sama5d2.dtsi
··· 303 303 #size-cells = <1>; 304 304 atmel,smc = <&hsmc>; 305 305 reg = <0x10000000 0x10000000 306 - 0x40000000 0x30000000>; 306 + 0x60000000 0x30000000>; 307 307 ranges = <0x0 0x0 0x10000000 0x10000000 308 308 0x1 0x0 0x60000000 0x10000000 309 309 0x2 0x0 0x70000000 0x10000000 ··· 1048 1048 }; 1049 1049 1050 1050 hsmc: hsmc@f8014000 { 1051 - compatible = "atmel,sama5d3-smc", "syscon", "simple-mfd"; 1051 + compatible = "atmel,sama5d2-smc", "syscon", "simple-mfd"; 1052 1052 reg = <0xf8014000 0x1000>; 1053 - interrupts = <5 IRQ_TYPE_LEVEL_HIGH 6>; 1053 + interrupts = <17 IRQ_TYPE_LEVEL_HIGH 6>; 1054 1054 clocks = <&hsmc_clk>; 1055 1055 #address-cells = <1>; 1056 1056 #size-cells = <1>; 1057 1057 ranges; 1058 1058 1059 - pmecc: ecc-engine@ffffc070 { 1059 + pmecc: ecc-engine@f8014070 { 1060 1060 compatible = "atmel,sama5d2-pmecc"; 1061 - reg = <0xffffc070 0x490>, 1062 - <0xffffc500 0x100>; 1061 + reg = <0xf8014070 0x490>, 1062 + <0xf8014500 0x100>; 1063 1063 }; 1064 1064 }; 1065 1065
+1
arch/arm64/boot/dts/allwinner/sun50i-a64-bananapi-m64.dts
··· 51 51 compatible = "sinovoip,bananapi-m64", "allwinner,sun50i-a64"; 52 52 53 53 aliases { 54 + ethernet0 = &emac; 54 55 serial0 = &uart0; 55 56 serial1 = &uart1; 56 57 };
+1
arch/arm64/boot/dts/allwinner/sun50i-a64-pine64.dts
··· 51 51 compatible = "pine64,pine64", "allwinner,sun50i-a64"; 52 52 53 53 aliases { 54 + ethernet0 = &emac; 54 55 serial0 = &uart0; 55 56 serial1 = &uart1; 56 57 serial2 = &uart2;
+1
arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts
··· 53 53 "allwinner,sun50i-a64"; 54 54 55 55 aliases { 56 + ethernet0 = &emac; 56 57 serial0 = &uart0; 57 58 }; 58 59
+3
arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi
··· 120 120 }; 121 121 122 122 &pio { 123 + interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>, 124 + <GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>, 125 + <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>; 123 126 compatible = "allwinner,sun50i-h5-pinctrl"; 124 127 };
+1 -1
arch/arm64/boot/dts/renesas/salvator-common.dtsi
··· 45 45 stdout-path = "serial0:115200n8"; 46 46 }; 47 47 48 - audio_clkout: audio_clkout { 48 + audio_clkout: audio-clkout { 49 49 /* 50 50 * This is same as <&rcar_sound 0> 51 51 * but needed to avoid cs2000/rcar_sound probe dead-lock
+2 -2
arch/arm64/include/asm/arch_timer.h
··· 65 65 u64 _val; \ 66 66 if (needs_unstable_timer_counter_workaround()) { \ 67 67 const struct arch_timer_erratum_workaround *wa; \ 68 - preempt_disable(); \ 68 + preempt_disable_notrace(); \ 69 69 wa = __this_cpu_read(timer_unstable_counter_workaround); \ 70 70 if (wa && wa->read_##reg) \ 71 71 _val = wa->read_##reg(); \ 72 72 else \ 73 73 _val = read_sysreg(reg); \ 74 - preempt_enable(); \ 74 + preempt_enable_notrace(); \ 75 75 } else { \ 76 76 _val = read_sysreg(reg); \ 77 77 } \
+2 -2
arch/arm64/include/asm/elf.h
··· 114 114 115 115 /* 116 116 * This is the base location for PIE (ET_DYN with INTERP) loads. On 117 - * 64-bit, this is raised to 4GB to leave the entire 32-bit address 117 + * 64-bit, this is above 4GB to leave the entire 32-bit address 118 118 * space open for things that want to use the area for 32-bit pointers. 119 119 */ 120 - #define ELF_ET_DYN_BASE 0x100000000UL 120 + #define ELF_ET_DYN_BASE (2 * TASK_SIZE_64 / 3) 121 121 122 122 #ifndef __ASSEMBLY__ 123 123
+1 -1
arch/powerpc/Kconfig
··· 199 199 select HAVE_OPTPROBES if PPC64 200 200 select HAVE_PERF_EVENTS 201 201 select HAVE_PERF_EVENTS_NMI if PPC64 202 - select HAVE_HARDLOCKUP_DETECTOR_PERF if HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH 202 + select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH 203 203 select HAVE_PERF_REGS 204 204 select HAVE_PERF_USER_STACK_DUMP 205 205 select HAVE_RCU_TABLE_FREE if SMP
+3 -2
arch/powerpc/kernel/process.c
··· 362 362 363 363 cpumsr = msr_check_and_set(MSR_FP|MSR_VEC|MSR_VSX); 364 364 365 - if (current->thread.regs && (current->thread.regs->msr & MSR_VSX)) { 365 + if (current->thread.regs && 366 + (current->thread.regs->msr & (MSR_VSX|MSR_VEC|MSR_FP))) { 366 367 check_if_tm_restore_required(current); 367 368 /* 368 369 * If a thread has already been reclaimed then the ··· 387 386 { 388 387 if (tsk->thread.regs) { 389 388 preempt_disable(); 390 - if (tsk->thread.regs->msr & MSR_VSX) { 389 + if (tsk->thread.regs->msr & (MSR_VSX|MSR_VEC|MSR_FP)) { 391 390 BUG_ON(tsk != current); 392 391 giveup_vsx(tsk); 393 392 }
+2
arch/sparc/include/asm/page_32.h
··· 68 68 #define iopgprot_val(x) ((x).iopgprot) 69 69 70 70 #define __pte(x) ((pte_t) { (x) } ) 71 + #define __pmd(x) ((pmd_t) { { (x) }, }) 71 72 #define __iopte(x) ((iopte_t) { (x) } ) 72 73 #define __pgd(x) ((pgd_t) { (x) } ) 73 74 #define __ctxd(x) ((ctxd_t) { (x) } ) ··· 96 95 #define iopgprot_val(x) (x) 97 96 98 97 #define __pte(x) (x) 98 + #define __pmd(x) ((pmd_t) { { (x) }, }) 99 99 #define __iopte(x) (x) 100 100 #define __pgd(x) (x) 101 101 #define __ctxd(x) (x)
-2
arch/sparc/kernel/pci_sun4v.c
··· 1266 1266 * ATU group, but ATU hcalls won't be available. 1267 1267 */ 1268 1268 hv_atu = false; 1269 - pr_err(PFX "Could not register hvapi ATU err=%d\n", 1270 - err); 1271 1269 } else { 1272 1270 pr_info(PFX "Registered hvapi ATU major[%lu] minor[%lu]\n", 1273 1271 vatu_major, vatu_minor);
+1 -1
arch/sparc/kernel/pcic.c
··· 602 602 { 603 603 struct pci_dev *dev; 604 604 int i, has_io, has_mem; 605 - unsigned int cmd; 605 + unsigned int cmd = 0; 606 606 struct linux_pcic *pcic; 607 607 /* struct linux_pbm_info* pbm = &pcic->pbm; */ 608 608 int node;
+12 -12
arch/sparc/lib/multi3.S
··· 5 5 .align 4 6 6 ENTRY(__multi3) /* %o0 = u, %o1 = v */ 7 7 mov %o1, %g1 8 - srl %o3, 0, %g4 9 - mulx %g4, %g1, %o1 8 + srl %o3, 0, %o4 9 + mulx %o4, %g1, %o1 10 10 srlx %g1, 0x20, %g3 11 - mulx %g3, %g4, %g5 12 - sllx %g5, 0x20, %o5 13 - srl %g1, 0, %g4 11 + mulx %g3, %o4, %g7 12 + sllx %g7, 0x20, %o5 13 + srl %g1, 0, %o4 14 14 sub %o1, %o5, %o5 15 15 srlx %o5, 0x20, %o5 16 - addcc %g5, %o5, %g5 16 + addcc %g7, %o5, %g7 17 17 srlx %o3, 0x20, %o5 18 - mulx %g4, %o5, %g4 18 + mulx %o4, %o5, %o4 19 19 mulx %g3, %o5, %o5 20 20 sethi %hi(0x80000000), %g3 21 - addcc %g5, %g4, %g5 22 - srlx %g5, 0x20, %g5 21 + addcc %g7, %o4, %g7 22 + srlx %g7, 0x20, %g7 23 23 add %g3, %g3, %g3 24 24 movcc %xcc, %g0, %g3 25 - addcc %o5, %g5, %o5 26 - sllx %g4, 0x20, %g4 27 - add %o1, %g4, %o1 25 + addcc %o5, %g7, %o5 26 + sllx %o4, 0x20, %o4 27 + add %o1, %o4, %o1 28 28 add %o5, %g3, %g2 29 29 mulx %g1, %o2, %g1 30 30 add %g1, %g2, %g1
+2 -1
arch/x86/Kconfig
··· 100 100 select GENERIC_STRNCPY_FROM_USER 101 101 select GENERIC_STRNLEN_USER 102 102 select GENERIC_TIME_VSYSCALL 103 + select HARDLOCKUP_CHECK_TIMESTAMP if X86_64 103 104 select HAVE_ACPI_APEI if ACPI 104 105 select HAVE_ACPI_APEI_NMI if ACPI 105 106 select HAVE_ALIGNED_STRUCT_PAGE if SLUB ··· 164 163 select HAVE_PCSPKR_PLATFORM 165 164 select HAVE_PERF_EVENTS 166 165 select HAVE_PERF_EVENTS_NMI 167 - select HAVE_HARDLOCKUP_DETECTOR_PERF if HAVE_PERF_EVENTS_NMI 166 + select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI 168 167 select HAVE_PERF_REGS 169 168 select HAVE_PERF_USER_STACK_DUMP 170 169 select HAVE_REGS_AND_STACK_ACCESS_API
+2
arch/x86/entry/entry_64.S
··· 1211 1211 * other IST entries. 1212 1212 */ 1213 1213 1214 + ASM_CLAC 1215 + 1214 1216 /* Use %rdx as our temp variable throughout */ 1215 1217 pushq %rdx 1216 1218
+7 -9
arch/x86/events/core.c
··· 2114 2114 load_mm_cr4(this_cpu_read(cpu_tlbstate.loaded_mm)); 2115 2115 } 2116 2116 2117 - static void x86_pmu_event_mapped(struct perf_event *event) 2117 + static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm) 2118 2118 { 2119 2119 if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED)) 2120 2120 return; ··· 2129 2129 * For now, this can't happen because all callers hold mmap_sem 2130 2130 * for write. If this changes, we'll need a different solution. 2131 2131 */ 2132 - lockdep_assert_held_exclusive(&current->mm->mmap_sem); 2132 + lockdep_assert_held_exclusive(&mm->mmap_sem); 2133 2133 2134 - if (atomic_inc_return(&current->mm->context.perf_rdpmc_allowed) == 1) 2135 - on_each_cpu_mask(mm_cpumask(current->mm), refresh_pce, NULL, 1); 2134 + if (atomic_inc_return(&mm->context.perf_rdpmc_allowed) == 1) 2135 + on_each_cpu_mask(mm_cpumask(mm), refresh_pce, NULL, 1); 2136 2136 } 2137 2137 2138 - static void x86_pmu_event_unmapped(struct perf_event *event) 2138 + static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm) 2139 2139 { 2140 - if (!current->mm) 2141 - return; 2142 2140 2143 2141 if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED)) 2144 2142 return; 2145 2143 2146 - if (atomic_dec_and_test(&current->mm->context.perf_rdpmc_allowed)) 2147 - on_each_cpu_mask(mm_cpumask(current->mm), refresh_pce, NULL, 1); 2144 + if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed)) 2145 + on_each_cpu_mask(mm_cpumask(mm), refresh_pce, NULL, 1); 2148 2146 } 2149 2147 2150 2148 static int x86_pmu_event_idx(struct perf_event *event)
+1 -1
arch/x86/events/intel/bts.c
··· 69 69 struct bts_phys buf[0]; 70 70 }; 71 71 72 - struct pmu bts_pmu; 72 + static struct pmu bts_pmu; 73 73 74 74 static size_t buf_size(struct page *page) 75 75 {
+1 -1
arch/x86/events/intel/p4.c
··· 587 587 * P4_CONFIG_ALIASABLE or bits for P4_PEBS_METRIC, they are 588 588 * either up to date automatically or not applicable at all. 589 589 */ 590 - struct p4_event_alias { 590 + static struct p4_event_alias { 591 591 u64 original; 592 592 u64 alternative; 593 593 } p4_event_aliases[] = {
+1 -1
arch/x86/events/intel/rapl.c
··· 559 559 .attrs = rapl_formats_attr, 560 560 }; 561 561 562 - const struct attribute_group *rapl_attr_groups[] = { 562 + static const struct attribute_group *rapl_attr_groups[] = { 563 563 &rapl_pmu_attr_group, 564 564 &rapl_pmu_format_group, 565 565 &rapl_pmu_events_group,
+1 -1
arch/x86/events/intel/uncore.c
··· 721 721 NULL, 722 722 }; 723 723 724 - static struct attribute_group uncore_pmu_attr_group = { 724 + static const struct attribute_group uncore_pmu_attr_group = { 725 725 .attrs = uncore_pmu_attrs, 726 726 }; 727 727
+6 -6
arch/x86/events/intel/uncore_nhmex.c
··· 272 272 NULL, 273 273 }; 274 274 275 - static struct attribute_group nhmex_uncore_ubox_format_group = { 275 + static const struct attribute_group nhmex_uncore_ubox_format_group = { 276 276 .name = "format", 277 277 .attrs = nhmex_uncore_ubox_formats_attr, 278 278 }; ··· 299 299 NULL, 300 300 }; 301 301 302 - static struct attribute_group nhmex_uncore_cbox_format_group = { 302 + static const struct attribute_group nhmex_uncore_cbox_format_group = { 303 303 .name = "format", 304 304 .attrs = nhmex_uncore_cbox_formats_attr, 305 305 }; ··· 407 407 NULL, 408 408 }; 409 409 410 - static struct attribute_group nhmex_uncore_bbox_format_group = { 410 + static const struct attribute_group nhmex_uncore_bbox_format_group = { 411 411 .name = "format", 412 412 .attrs = nhmex_uncore_bbox_formats_attr, 413 413 }; ··· 484 484 NULL, 485 485 }; 486 486 487 - static struct attribute_group nhmex_uncore_sbox_format_group = { 487 + static const struct attribute_group nhmex_uncore_sbox_format_group = { 488 488 .name = "format", 489 489 .attrs = nhmex_uncore_sbox_formats_attr, 490 490 }; ··· 898 898 NULL, 899 899 }; 900 900 901 - static struct attribute_group nhmex_uncore_mbox_format_group = { 901 + static const struct attribute_group nhmex_uncore_mbox_format_group = { 902 902 .name = "format", 903 903 .attrs = nhmex_uncore_mbox_formats_attr, 904 904 }; ··· 1163 1163 NULL, 1164 1164 }; 1165 1165 1166 - static struct attribute_group nhmex_uncore_rbox_format_group = { 1166 + static const struct attribute_group nhmex_uncore_rbox_format_group = { 1167 1167 .name = "format", 1168 1168 .attrs = nhmex_uncore_rbox_formats_attr, 1169 1169 };
+3 -3
arch/x86/events/intel/uncore_snb.c
··· 130 130 NULL, 131 131 }; 132 132 133 - static struct attribute_group snb_uncore_format_group = { 133 + static const struct attribute_group snb_uncore_format_group = { 134 134 .name = "format", 135 135 .attrs = snb_uncore_formats_attr, 136 136 }; ··· 289 289 NULL, 290 290 }; 291 291 292 - static struct attribute_group snb_uncore_imc_format_group = { 292 + static const struct attribute_group snb_uncore_imc_format_group = { 293 293 .name = "format", 294 294 .attrs = snb_uncore_imc_formats_attr, 295 295 }; ··· 769 769 NULL, 770 770 }; 771 771 772 - static struct attribute_group nhm_uncore_format_group = { 772 + static const struct attribute_group nhm_uncore_format_group = { 773 773 .name = "format", 774 774 .attrs = nhm_uncore_formats_attr, 775 775 };
+21 -21
arch/x86/events/intel/uncore_snbep.c
··· 602 602 { /* end: all zeroes */ }, 603 603 }; 604 604 605 - static struct attribute_group snbep_uncore_format_group = { 605 + static const struct attribute_group snbep_uncore_format_group = { 606 606 .name = "format", 607 607 .attrs = snbep_uncore_formats_attr, 608 608 }; 609 609 610 - static struct attribute_group snbep_uncore_ubox_format_group = { 610 + static const struct attribute_group snbep_uncore_ubox_format_group = { 611 611 .name = "format", 612 612 .attrs = snbep_uncore_ubox_formats_attr, 613 613 }; 614 614 615 - static struct attribute_group snbep_uncore_cbox_format_group = { 615 + static const struct attribute_group snbep_uncore_cbox_format_group = { 616 616 .name = "format", 617 617 .attrs = snbep_uncore_cbox_formats_attr, 618 618 }; 619 619 620 - static struct attribute_group snbep_uncore_pcu_format_group = { 620 + static const struct attribute_group snbep_uncore_pcu_format_group = { 621 621 .name = "format", 622 622 .attrs = snbep_uncore_pcu_formats_attr, 623 623 }; 624 624 625 - static struct attribute_group snbep_uncore_qpi_format_group = { 625 + static const struct attribute_group snbep_uncore_qpi_format_group = { 626 626 .name = "format", 627 627 .attrs = snbep_uncore_qpi_formats_attr, 628 628 }; ··· 1431 1431 NULL, 1432 1432 }; 1433 1433 1434 - static struct attribute_group ivbep_uncore_format_group = { 1434 + static const struct attribute_group ivbep_uncore_format_group = { 1435 1435 .name = "format", 1436 1436 .attrs = ivbep_uncore_formats_attr, 1437 1437 }; 1438 1438 1439 - static struct attribute_group ivbep_uncore_ubox_format_group = { 1439 + static const struct attribute_group ivbep_uncore_ubox_format_group = { 1440 1440 .name = "format", 1441 1441 .attrs = ivbep_uncore_ubox_formats_attr, 1442 1442 }; 1443 1443 1444 - static struct attribute_group ivbep_uncore_cbox_format_group = { 1444 + static const struct attribute_group ivbep_uncore_cbox_format_group = { 1445 1445 .name = "format", 1446 1446 .attrs = ivbep_uncore_cbox_formats_attr, 
1447 1447 }; 1448 1448 1449 - static struct attribute_group ivbep_uncore_pcu_format_group = { 1449 + static const struct attribute_group ivbep_uncore_pcu_format_group = { 1450 1450 .name = "format", 1451 1451 .attrs = ivbep_uncore_pcu_formats_attr, 1452 1452 }; 1453 1453 1454 - static struct attribute_group ivbep_uncore_qpi_format_group = { 1454 + static const struct attribute_group ivbep_uncore_qpi_format_group = { 1455 1455 .name = "format", 1456 1456 .attrs = ivbep_uncore_qpi_formats_attr, 1457 1457 }; ··· 1887 1887 NULL, 1888 1888 }; 1889 1889 1890 - static struct attribute_group knl_uncore_ubox_format_group = { 1890 + static const struct attribute_group knl_uncore_ubox_format_group = { 1891 1891 .name = "format", 1892 1892 .attrs = knl_uncore_ubox_formats_attr, 1893 1893 }; ··· 1927 1927 NULL, 1928 1928 }; 1929 1929 1930 - static struct attribute_group knl_uncore_cha_format_group = { 1930 + static const struct attribute_group knl_uncore_cha_format_group = { 1931 1931 .name = "format", 1932 1932 .attrs = knl_uncore_cha_formats_attr, 1933 1933 }; ··· 2037 2037 NULL, 2038 2038 }; 2039 2039 2040 - static struct attribute_group knl_uncore_pcu_format_group = { 2040 + static const struct attribute_group knl_uncore_pcu_format_group = { 2041 2041 .name = "format", 2042 2042 .attrs = knl_uncore_pcu_formats_attr, 2043 2043 }; ··· 2187 2187 NULL, 2188 2188 }; 2189 2189 2190 - static struct attribute_group knl_uncore_irp_format_group = { 2190 + static const struct attribute_group knl_uncore_irp_format_group = { 2191 2191 .name = "format", 2192 2192 .attrs = knl_uncore_irp_formats_attr, 2193 2193 }; ··· 2385 2385 NULL, 2386 2386 }; 2387 2387 2388 - static struct attribute_group hswep_uncore_ubox_format_group = { 2388 + static const struct attribute_group hswep_uncore_ubox_format_group = { 2389 2389 .name = "format", 2390 2390 .attrs = hswep_uncore_ubox_formats_attr, 2391 2391 }; ··· 2439 2439 NULL, 2440 2440 }; 2441 2441 2442 - static struct attribute_group 
hswep_uncore_cbox_format_group = { 2442 + static const struct attribute_group hswep_uncore_cbox_format_group = { 2443 2443 .name = "format", 2444 2444 .attrs = hswep_uncore_cbox_formats_attr, 2445 2445 }; ··· 2621 2621 NULL, 2622 2622 }; 2623 2623 2624 - static struct attribute_group hswep_uncore_sbox_format_group = { 2624 + static const struct attribute_group hswep_uncore_sbox_format_group = { 2625 2625 .name = "format", 2626 2626 .attrs = hswep_uncore_sbox_formats_attr, 2627 2627 }; ··· 3314 3314 NULL, 3315 3315 }; 3316 3316 3317 - static struct attribute_group skx_uncore_chabox_format_group = { 3317 + static const struct attribute_group skx_uncore_chabox_format_group = { 3318 3318 .name = "format", 3319 3319 .attrs = skx_uncore_cha_formats_attr, 3320 3320 }; ··· 3427 3427 NULL, 3428 3428 }; 3429 3429 3430 - static struct attribute_group skx_uncore_iio_format_group = { 3430 + static const struct attribute_group skx_uncore_iio_format_group = { 3431 3431 .name = "format", 3432 3432 .attrs = skx_uncore_iio_formats_attr, 3433 3433 }; ··· 3484 3484 NULL, 3485 3485 }; 3486 3486 3487 - static struct attribute_group skx_uncore_format_group = { 3487 + static const struct attribute_group skx_uncore_format_group = { 3488 3488 .name = "format", 3489 3489 .attrs = skx_uncore_formats_attr, 3490 3490 }; ··· 3605 3605 NULL, 3606 3606 }; 3607 3607 3608 - static struct attribute_group skx_upi_uncore_format_group = { 3608 + static const struct attribute_group skx_upi_uncore_format_group = { 3609 3609 .name = "format", 3610 3610 .attrs = skx_upi_uncore_formats_attr, 3611 3611 };
+1 -1
arch/x86/include/asm/cpufeatures.h
··· 286 286 #define X86_FEATURE_PAUSEFILTER (15*32+10) /* filtered pause intercept */ 287 287 #define X86_FEATURE_PFTHRESHOLD (15*32+12) /* pause filter threshold */ 288 288 #define X86_FEATURE_AVIC (15*32+13) /* Virtual Interrupt Controller */ 289 - #define X86_FEATURE_VIRTUAL_VMLOAD_VMSAVE (15*32+15) /* Virtual VMLOAD VMSAVE */ 289 + #define X86_FEATURE_V_VMSAVE_VMLOAD (15*32+15) /* Virtual VMSAVE VMLOAD */ 290 290 291 291 /* Intel-defined CPU features, CPUID level 0x00000007:0 (ecx), word 16 */ 292 292 #define X86_FEATURE_AVX512VBMI (16*32+ 1) /* AVX512 Vector Bit Manipulation instructions*/
+2 -2
arch/x86/include/asm/elf.h
··· 247 247 248 248 /* 249 249 * This is the base location for PIE (ET_DYN with INTERP) loads. On 250 - * 64-bit, this is raised to 4GB to leave the entire 32-bit address 250 + * 64-bit, this is above 4GB to leave the entire 32-bit address 251 251 * space open for things that want to use the area for 32-bit pointers. 252 252 */ 253 253 #define ELF_ET_DYN_BASE (mmap_is_ia32() ? 0x000400000UL : \ 254 - 0x100000000UL) 254 + (TASK_SIZE / 3 * 2)) 255 255 256 256 /* This yields a mask that user programs can use to figure out what 257 257 instruction set this CPU supports. This could be done in user space,
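The comment in the hunk above claims the new `ELF_ET_DYN_BASE` stays above 4GB on 64-bit. A quick arithmetic sketch checks that, assuming the usual 4-level x86-64 user `TASK_SIZE` of `((1UL << 47) - 4096)` (the helper name is illustrative):

```c
#include <assert.h>

/* Sketch of the new 64-bit PIE load base: two thirds of TASK_SIZE
 * rather than a fixed 4GB, mirroring the (TASK_SIZE / 3 * 2)
 * expression in the diff. */
static unsigned long pie_load_base(unsigned long task_size)
{
	return task_size / 3 * 2;
}
```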
+3
arch/x86/kernel/cpu/aperfmperf.c
··· 40 40 struct aperfmperf_sample *s = this_cpu_ptr(&samples); 41 41 ktime_t now = ktime_get(); 42 42 s64 time_delta = ktime_ms_delta(now, s->time); 43 + unsigned long flags; 43 44 44 45 /* Don't bother re-computing within the cache threshold time. */ 45 46 if (time_delta < APERFMPERF_CACHE_THRESHOLD_MS) 46 47 return; 47 48 49 + local_irq_save(flags); 48 50 rdmsrl(MSR_IA32_APERF, aperf); 49 51 rdmsrl(MSR_IA32_MPERF, mperf); 52 + local_irq_restore(flags); 50 53 51 54 aperf_delta = aperf - s->aperf; 52 55 mperf_delta = mperf - s->mperf;
+1 -1
arch/x86/kernel/cpu/mcheck/therm_throt.c
··· 122 122 NULL 123 123 }; 124 124 125 - static struct attribute_group thermal_attr_group = { 125 + static const struct attribute_group thermal_attr_group = { 126 126 .attrs = thermal_throttle_attrs, 127 127 .name = "thermal_throttle" 128 128 };
+2 -2
arch/x86/kernel/cpu/microcode/core.c
··· 561 561 NULL 562 562 }; 563 563 564 - static struct attribute_group mc_attr_group = { 564 + static const struct attribute_group mc_attr_group = { 565 565 .attrs = mc_default_attrs, 566 566 .name = "microcode", 567 567 }; ··· 707 707 NULL 708 708 }; 709 709 710 - static struct attribute_group cpu_root_microcode_group = { 710 + static const struct attribute_group cpu_root_microcode_group = { 711 711 .name = "microcode", 712 712 .attrs = cpu_root_microcode_attrs, 713 713 };
+15 -3
arch/x86/kernel/cpu/mtrr/main.c
··· 237 237 stop_machine(mtrr_rendezvous_handler, &data, cpu_online_mask); 238 238 } 239 239 240 + static void set_mtrr_cpuslocked(unsigned int reg, unsigned long base, 241 + unsigned long size, mtrr_type type) 242 + { 243 + struct set_mtrr_data data = { .smp_reg = reg, 244 + .smp_base = base, 245 + .smp_size = size, 246 + .smp_type = type 247 + }; 248 + 249 + stop_machine_cpuslocked(mtrr_rendezvous_handler, &data, cpu_online_mask); 250 + } 251 + 240 252 static void set_mtrr_from_inactive_cpu(unsigned int reg, unsigned long base, 241 253 unsigned long size, mtrr_type type) 242 254 { ··· 382 370 /* Search for an empty MTRR */ 383 371 i = mtrr_if->get_free_region(base, size, replace); 384 372 if (i >= 0) { 385 - set_mtrr(i, base, size, type); 373 + set_mtrr_cpuslocked(i, base, size, type); 386 374 if (likely(replace < 0)) { 387 375 mtrr_usage_table[i] = 1; 388 376 } else { ··· 390 378 if (increment) 391 379 mtrr_usage_table[i]++; 392 380 if (unlikely(replace != i)) { 393 - set_mtrr(replace, 0, 0, 0); 381 + set_mtrr_cpuslocked(replace, 0, 0, 0); 394 382 mtrr_usage_table[replace] = 0; 395 383 } 396 384 } ··· 518 506 goto out; 519 507 } 520 508 if (--mtrr_usage_table[reg] < 1) 521 - set_mtrr(reg, 0, 0, 0); 509 + set_mtrr_cpuslocked(reg, 0, 0, 0); 522 510 error = reg; 523 511 out: 524 512 mutex_unlock(&mtrr_mutex);
+4 -3
arch/x86/kernel/head64.c
··· 53 53 pudval_t *pud; 54 54 pmdval_t *pmd, pmd_entry; 55 55 int i; 56 + unsigned int *next_pgt_ptr; 56 57 57 58 /* Is the address too large? */ 58 59 if (physaddr >> MAX_PHYSMEM_BITS) ··· 92 91 * creates a bunch of nonsense entries but that is fine -- 93 92 * it avoids problems around wraparound. 94 93 */ 95 - 96 - pud = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr); 97 - pmd = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr); 94 + next_pgt_ptr = fixup_pointer(&next_early_pgt, physaddr); 95 + pud = fixup_pointer(early_dynamic_pgts[(*next_pgt_ptr)++], physaddr); 96 + pmd = fixup_pointer(early_dynamic_pgts[(*next_pgt_ptr)++], physaddr); 98 97 99 98 if (IS_ENABLED(CONFIG_X86_5LEVEL)) { 100 99 p4d = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
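The head64 change above routes the `next_early_pgt` access through `fixup_pointer()` as well, because at this stage the kernel may be running at a physical address different from its link-time address, so globals must be reached through an adjusted pointer. A userspace sketch of that adjustment (signature simplified; `link_base` stands in for the kernel's `_text` symbol):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the fixup_pointer() idea: translate a link-time address to
 * where the object actually lives by applying the delta between the
 * link-time base and the physical load address. */
static void *fixup_pointer(void *ptr, uintptr_t physaddr, uintptr_t link_base)
{
	return (void *)((uintptr_t)ptr - link_base + physaddr);
}
```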
+2 -2
arch/x86/kernel/ksysfs.c
··· 55 55 NULL, 56 56 }; 57 57 58 - static struct attribute_group boot_params_attr_group = { 58 + static const struct attribute_group boot_params_attr_group = { 59 59 .attrs = boot_params_version_attrs, 60 60 .bin_attrs = boot_params_data_attrs, 61 61 }; ··· 202 202 NULL, 203 203 }; 204 204 205 - static struct attribute_group setup_data_attr_group = { 205 + static const struct attribute_group setup_data_attr_group = { 206 206 .attrs = setup_data_type_attrs, 207 207 .bin_attrs = setup_data_data_attrs, 208 208 };
+17 -13
arch/x86/kernel/smpboot.c
··· 971 971 * Returns zero if CPU booted OK, else error code from 972 972 * ->wakeup_secondary_cpu. 973 973 */ 974 - static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle) 974 + static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle, 975 + int *cpu0_nmi_registered) 975 976 { 976 977 volatile u32 *trampoline_status = 977 978 (volatile u32 *) __va(real_mode_header->trampoline_status); ··· 980 979 unsigned long start_ip = real_mode_header->trampoline_start; 981 980 982 981 unsigned long boot_error = 0; 983 - int cpu0_nmi_registered = 0; 984 982 unsigned long timeout; 985 983 986 984 idle->thread.sp = (unsigned long)task_pt_regs(idle); ··· 1035 1035 boot_error = apic->wakeup_secondary_cpu(apicid, start_ip); 1036 1036 else 1037 1037 boot_error = wakeup_cpu_via_init_nmi(cpu, start_ip, apicid, 1038 - &cpu0_nmi_registered); 1038 + cpu0_nmi_registered); 1039 1039 1040 1040 if (!boot_error) { 1041 1041 /* ··· 1080 1080 */ 1081 1081 smpboot_restore_warm_reset_vector(); 1082 1082 } 1083 - /* 1084 - * Clean up the nmi handler. Do this after the callin and callout sync 1085 - * to avoid impact of possible long unregister time. 
1086 - */ 1087 - if (cpu0_nmi_registered) 1088 - unregister_nmi_handler(NMI_LOCAL, "wake_cpu0"); 1089 1083 1090 1084 return boot_error; 1091 1085 } ··· 1087 1093 int native_cpu_up(unsigned int cpu, struct task_struct *tidle) 1088 1094 { 1089 1095 int apicid = apic->cpu_present_to_apicid(cpu); 1096 + int cpu0_nmi_registered = 0; 1090 1097 unsigned long flags; 1091 - int err; 1098 + int err, ret = 0; 1092 1099 1093 1100 WARN_ON(irqs_disabled()); 1094 1101 ··· 1126 1131 1127 1132 common_cpu_up(cpu, tidle); 1128 1133 1129 - err = do_boot_cpu(apicid, cpu, tidle); 1134 + err = do_boot_cpu(apicid, cpu, tidle, &cpu0_nmi_registered); 1130 1135 if (err) { 1131 1136 pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu); 1132 - return -EIO; 1137 + ret = -EIO; 1138 + goto unreg_nmi; 1133 1139 } 1134 1140 1135 1141 /* ··· 1146 1150 touch_nmi_watchdog(); 1147 1151 } 1148 1152 1149 - return 0; 1153 + unreg_nmi: 1154 + /* 1155 + * Clean up the nmi handler. Do this after the callin and callout sync 1156 + * to avoid impact of possible long unregister time. 1157 + */ 1158 + if (cpu0_nmi_registered) 1159 + unregister_nmi_handler(NMI_LOCAL, "wake_cpu0"); 1160 + 1161 + return ret; 1150 1162 } 1151 1163 1152 1164 /**
+1 -1
arch/x86/kvm/svm.c
··· 1100 1100 1101 1101 if (vls) { 1102 1102 if (!npt_enabled || 1103 - !boot_cpu_has(X86_FEATURE_VIRTUAL_VMLOAD_VMSAVE) || 1103 + !boot_cpu_has(X86_FEATURE_V_VMSAVE_VMLOAD) || 1104 1104 !IS_ENABLED(CONFIG_X86_64)) { 1105 1105 vls = false; 1106 1106 } else {
+3 -4
arch/x86/mm/mmap.c
··· 50 50 static unsigned long stack_maxrandom_size(unsigned long task_size) 51 51 { 52 52 unsigned long max = 0; 53 - if ((current->flags & PF_RANDOMIZE) && 54 - !(current->personality & ADDR_NO_RANDOMIZE)) { 53 + if (current->flags & PF_RANDOMIZE) { 55 54 max = (-1UL) & __STACK_RND_MASK(task_size == tasksize_32bit()); 56 55 max <<= PAGE_SHIFT; 57 56 } ··· 78 79 79 80 static unsigned long arch_rnd(unsigned int rndbits) 80 81 { 82 + if (!(current->flags & PF_RANDOMIZE)) 83 + return 0; 81 84 return (get_random_long() & ((1UL << rndbits) - 1)) << PAGE_SHIFT; 82 85 } 83 86 84 87 unsigned long arch_mmap_rnd(void) 85 88 { 86 - if (!(current->flags & PF_RANDOMIZE)) 87 - return 0; 88 89 return arch_rnd(mmap_is_ia32() ? mmap32_rnd_bits : mmap64_rnd_bits); 89 90 } 90 91
+2 -2
arch/x86/platform/uv/tlb_uv.c
··· 26 26 static struct bau_operations ops __ro_after_init; 27 27 28 28 /* timeouts in nanoseconds (indexed by UVH_AGING_PRESCALE_SEL urgency7 30:28) */ 29 - static int timeout_base_ns[] = { 29 + static const int timeout_base_ns[] = { 30 30 20, 31 31 160, 32 32 1280, ··· 1216 1216 * set a bit in the UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE register. 1217 1217 * Such a message must be ignored. 1218 1218 */ 1219 - void process_uv2_message(struct msg_desc *mdp, struct bau_control *bcp) 1219 + static void process_uv2_message(struct msg_desc *mdp, struct bau_control *bcp) 1220 1220 { 1221 1221 unsigned long mmr_image; 1222 1222 unsigned char swack_vec;
+7 -1
block/blk-mq-pci.c
··· 36 36 for (queue = 0; queue < set->nr_hw_queues; queue++) { 37 37 mask = pci_irq_get_affinity(pdev, queue); 38 38 if (!mask) 39 - return -EINVAL; 39 + goto fallback; 40 40 41 41 for_each_cpu(cpu, mask) 42 42 set->mq_map[cpu] = queue; 43 43 } 44 44 45 + return 0; 46 + 47 + fallback: 48 + WARN_ON_ONCE(set->nr_hw_queues > 1); 49 + for_each_possible_cpu(cpu) 50 + set->mq_map[cpu] = 0; 45 51 return 0; 46 52 } 47 53 EXPORT_SYMBOL_GPL(blk_mq_pci_map_queues);
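The blk-mq-pci change above stops failing hard when a queue lacks an IRQ affinity mask and instead maps every CPU to queue 0. A sketch of that fallback shape (a plain bitmask stands in for `struct cpumask`; names and the fixed CPU count are illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define NR_CPUS 8

/* Sketch of the fallback added to blk_mq_pci_map_queues(): if any
 * hardware queue has no affinity mask, map all CPUs to queue 0 rather
 * than returning an error. */
static int map_queues(const unsigned long *queue_mask, int nr_queues,
		      int mq_map[NR_CPUS])
{
	int q, cpu;

	for (q = 0; q < nr_queues; q++) {
		if (!queue_mask[q])
			goto fallback;
		for (cpu = 0; cpu < NR_CPUS; cpu++)
			if (queue_mask[q] & (1UL << cpu))
				mq_map[cpu] = q;
	}
	return 0;

fallback:
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		mq_map[cpu] = 0;
	return 0;
}
```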
+2 -3
block/blk-mq.c
··· 360 360 return ERR_PTR(ret); 361 361 362 362 rq = blk_mq_get_request(q, NULL, op, &alloc_data); 363 + blk_queue_exit(q); 363 364 364 365 if (!rq) 365 366 return ERR_PTR(-EWOULDBLOCK); 366 367 367 368 blk_mq_put_ctx(alloc_data.ctx); 368 - blk_queue_exit(q); 369 369 370 370 rq->__data_len = 0; 371 371 rq->__sector = (sector_t) -1; ··· 411 411 alloc_data.ctx = __blk_mq_get_ctx(q, cpu); 412 412 413 413 rq = blk_mq_get_request(q, NULL, op, &alloc_data); 414 + blk_queue_exit(q); 414 415 415 416 if (!rq) 416 417 return ERR_PTR(-EWOULDBLOCK); 417 - 418 - blk_queue_exit(q); 419 418 420 419 return rq; 421 420 }
+3 -3
drivers/block/xen-blkfront.c
··· 2075 2075 /* 2076 2076 * Get the bios in the request so we can re-queue them. 2077 2077 */ 2078 - if (req_op(shadow[i].request) == REQ_OP_FLUSH || 2079 - req_op(shadow[i].request) == REQ_OP_DISCARD || 2080 - req_op(shadow[i].request) == REQ_OP_SECURE_ERASE || 2078 + if (req_op(shadow[j].request) == REQ_OP_FLUSH || 2079 + req_op(shadow[j].request) == REQ_OP_DISCARD || 2080 + req_op(shadow[j].request) == REQ_OP_SECURE_ERASE || 2081 2081 shadow[j].request->cmd_flags & REQ_FUA) { 2082 2082 /* 2083 2083 * Flush operations don't contain bios, so
+1 -1
drivers/clocksource/Kconfig
··· 262 262 263 263 config CLKSRC_PISTACHIO 264 264 bool "Clocksource for Pistachio SoC" if COMPILE_TEST 265 - depends on HAS_IOMEM 265 + depends on GENERIC_CLOCKEVENTS && HAS_IOMEM 266 266 select TIMER_OF 267 267 help 268 268 Enables the clocksource for the Pistachio SoC.
+1 -1
drivers/clocksource/arm_arch_timer.c
··· 1440 1440 * While unlikely, it's theoretically possible that none of the frames 1441 1441 * in a timer expose the combination of feature we want. 1442 1442 */ 1443 - for (i = i; i < timer_count; i++) { 1443 + for (i = 0; i < timer_count; i++) { 1444 1444 timer = &timers[i]; 1445 1445 1446 1446 frame = arch_timer_mem_find_best_frame(timer);
+6 -5
drivers/clocksource/em_sti.c
··· 305 305 irq = platform_get_irq(pdev, 0); 306 306 if (irq < 0) { 307 307 dev_err(&pdev->dev, "failed to get irq\n"); 308 - return -EINVAL; 308 + return irq; 309 309 } 310 310 311 311 /* map memory, let base point to the STI instance */ ··· 314 314 if (IS_ERR(p->base)) 315 315 return PTR_ERR(p->base); 316 316 317 - if (devm_request_irq(&pdev->dev, irq, em_sti_interrupt, 318 - IRQF_TIMER | IRQF_IRQPOLL | IRQF_NOBALANCING, 319 - dev_name(&pdev->dev), p)) { 317 + ret = devm_request_irq(&pdev->dev, irq, em_sti_interrupt, 318 + IRQF_TIMER | IRQF_IRQPOLL | IRQF_NOBALANCING, 319 + dev_name(&pdev->dev), p); 320 + if (ret) { 320 321 dev_err(&pdev->dev, "failed to request low IRQ\n"); 321 - return -ENOENT; 322 + return ret; 322 323 } 323 324 324 325 /* get hold of clock */
+2 -2
drivers/clocksource/timer-of.c
··· 128 128 const char *name = of_base->name ? of_base->name : np->full_name; 129 129 130 130 of_base->base = of_io_request_and_map(np, of_base->index, name); 131 - if (!of_base->base) { 131 + if (IS_ERR(of_base->base)) { 132 132 pr_err("Failed to iomap (%s)\n", name); 133 - return -ENXIO; 133 + return PTR_ERR(of_base->base); 134 134 } 135 135 136 136 return 0;
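The timer-of fix above matters because `of_io_request_and_map()` reports failure with an ERR_PTR-encoded errno, never NULL, so the original NULL check let error pointers slip through. The encoding can be sketched in userspace like this (following the convention in the kernel's `include/linux/err.h`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace sketch of the kernel's ERR_PTR convention: errors are
 * negative errno values cast into the top page of the address space,
 * so failure pointers are non-NULL and must be detected with IS_ERR(),
 * not a NULL test. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```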
+1 -2
drivers/cpufreq/intel_pstate.c
··· 1613 1613 1614 1614 static inline int32_t get_avg_frequency(struct cpudata *cpu) 1615 1615 { 1616 - return mul_ext_fp(cpu->sample.core_avg_perf, 1617 - cpu->pstate.max_pstate_physical * cpu->pstate.scaling); 1616 + return mul_ext_fp(cpu->sample.core_avg_perf, cpu_khz); 1618 1617 } 1619 1618 1620 1619 static inline int32_t get_avg_pstate(struct cpudata *cpu)
+6 -7
drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
··· 244 244 struct dma_fence *f = e->fence; 245 245 struct amd_sched_fence *s_fence = to_amd_sched_fence(f); 246 246 247 + if (dma_fence_is_signaled(f)) { 248 + hash_del(&e->node); 249 + dma_fence_put(f); 250 + kmem_cache_free(amdgpu_sync_slab, e); 251 + continue; 252 + } 247 253 if (ring && s_fence) { 248 254 /* For fences from the same ring it is sufficient 249 255 * when they are scheduled. ··· 260 254 261 255 return &s_fence->scheduled; 262 256 } 263 - } 264 - 265 - if (dma_fence_is_signaled(f)) { 266 - hash_del(&e->node); 267 - dma_fence_put(f); 268 - kmem_cache_free(amdgpu_sync_slab, e); 269 - continue; 270 257 } 271 258 272 259 return f;
+1 -1
drivers/gpu/drm/i915/i915_debugfs.c
··· 4580 4580 4581 4581 sseu->slice_mask |= BIT(s); 4582 4582 4583 - if (IS_GEN9_BC(dev_priv)) 4583 + if (IS_GEN9_BC(dev_priv) || IS_CANNONLAKE(dev_priv)) 4584 4584 sseu->subslice_mask = 4585 4585 INTEL_INFO(dev_priv)->sseu.subslice_mask; 4586 4586
+8 -7
drivers/gpu/drm/i915/i915_gem_context.c
··· 688 688 } 689 689 690 690 static bool 691 - needs_pd_load_pre(struct i915_hw_ppgtt *ppgtt, 692 - struct intel_engine_cs *engine, 693 - struct i915_gem_context *to) 691 + needs_pd_load_pre(struct i915_hw_ppgtt *ppgtt, struct intel_engine_cs *engine) 694 692 { 693 + struct i915_gem_context *from = engine->legacy_active_context; 694 + 695 695 if (!ppgtt) 696 696 return false; 697 697 698 698 /* Always load the ppgtt on first use */ 699 - if (!engine->legacy_active_context) 699 + if (!from) 700 700 return true; 701 701 702 702 /* Same context without new entries, skip */ 703 - if (engine->legacy_active_context == to && 703 + if ((!from->ppgtt || from->ppgtt == ppgtt) && 704 704 !(intel_engine_flag(engine) & ppgtt->pd_dirty_rings)) 705 705 return false; 706 706 ··· 744 744 if (skip_rcs_switch(ppgtt, engine, to)) 745 745 return 0; 746 746 747 - if (needs_pd_load_pre(ppgtt, engine, to)) { 747 + if (needs_pd_load_pre(ppgtt, engine)) { 748 748 /* Older GENs and non render rings still want the load first, 749 749 * "PP_DCLV followed by PP_DIR_BASE register through Load 750 750 * Register Immediate commands in Ring Buffer before submitting ··· 841 841 struct i915_hw_ppgtt *ppgtt = 842 842 to->ppgtt ?: req->i915->mm.aliasing_ppgtt; 843 843 844 - if (needs_pd_load_pre(ppgtt, engine, to)) { 844 + if (needs_pd_load_pre(ppgtt, engine)) { 845 845 int ret; 846 846 847 847 trace_switch_mm(engine, to); ··· 852 852 ppgtt->pd_dirty_rings &= ~intel_engine_flag(engine); 853 853 } 854 854 855 + engine->legacy_active_context = to; 855 856 return 0; 856 857 } 857 858
+4
drivers/gpu/drm/i915/i915_gem_render_state.c
··· 242 242 goto err_unpin; 243 243 } 244 244 245 + ret = req->engine->emit_flush(req, EMIT_INVALIDATE); 246 + if (ret) 247 + goto err_unpin; 248 + 245 249 ret = req->engine->emit_bb_start(req, 246 250 so->batch_offset, so->batch_size, 247 251 I915_DISPATCH_SECURE);
+1 -1
drivers/gpu/drm/i915/intel_ddi.c
··· 1762 1762 if (dev_priv->vbt.edp.low_vswing) { 1763 1763 if (voltage == VOLTAGE_INFO_0_85V) { 1764 1764 *n_entries = ARRAY_SIZE(cnl_ddi_translations_edp_0_85V); 1765 - return cnl_ddi_translations_dp_0_85V; 1765 + return cnl_ddi_translations_edp_0_85V; 1766 1766 } else if (voltage == VOLTAGE_INFO_0_95V) { 1767 1767 *n_entries = ARRAY_SIZE(cnl_ddi_translations_edp_0_95V); 1768 1768 return cnl_ddi_translations_edp_0_95V;
+7
drivers/gpu/drm/i915/intel_display.c
··· 3485 3485 !gpu_reset_clobbers_display(dev_priv)) 3486 3486 return; 3487 3487 3488 + /* We have a modeset vs reset deadlock, defensively unbreak it. 3489 + * 3490 + * FIXME: We can do a _lot_ better, this is just a first iteration. 3491 + */ 3492 + i915_gem_set_wedged(dev_priv); 3493 + DRM_DEBUG_DRIVER("Wedging GPU to avoid deadlocks with pending modeset updates\n"); 3494 + 3488 3495 /* 3489 3496 * Need mode_config.mutex so that we don't 3490 3497 * trample ongoing ->detect() and whatnot.
-1
drivers/gpu/drm/i915/intel_lrc.h
··· 63 63 }; 64 64 65 65 /* Logical Rings */ 66 - void intel_logical_ring_stop(struct intel_engine_cs *engine); 67 66 void intel_logical_ring_cleanup(struct intel_engine_cs *engine); 68 67 int logical_render_ring_init(struct intel_engine_cs *engine); 69 68 int logical_xcs_ring_init(struct intel_engine_cs *engine);
+3 -2
drivers/infiniband/core/device.c
··· 537 537 } 538 538 up_read(&lists_rwsem); 539 539 540 - mutex_unlock(&device_mutex); 541 - 542 540 ib_device_unregister_rdmacg(device); 543 541 ib_device_unregister_sysfs(device); 542 + 543 + mutex_unlock(&device_mutex); 544 + 544 545 ib_cache_cleanup_one(device); 545 546 546 547 ib_security_destroy_port_pkey_list(device);
+1 -1
drivers/infiniband/core/uverbs_main.c
··· 1153 1153 kref_get(&file->ref); 1154 1154 mutex_unlock(&uverbs_dev->lists_mutex); 1155 1155 1156 - ib_uverbs_event_handler(&file->event_handler, &event); 1157 1156 1158 1157 mutex_lock(&file->cleanup_mutex); 1159 1158 ucontext = file->ucontext; ··· 1169 1170 * for example due to freeing the resources 1170 1171 * (e.g mmput). 1171 1172 */ 1173 + ib_uverbs_event_handler(&file->event_handler, &event); 1172 1174 ib_dev->disassociate_ucontext(ucontext); 1173 1175 mutex_lock(&file->cleanup_mutex); 1174 1176 ib_uverbs_cleanup_ucontext(file, ucontext, true);
+1 -1
drivers/infiniband/hw/cxgb4/mem.c
··· 661 661 rhp = php->rhp; 662 662 663 663 if (mr_type != IB_MR_TYPE_MEM_REG || 664 - max_num_sg > t4_max_fr_depth(&rhp->rdev.lldi.ulptx_memwrite_dsgl && 664 + max_num_sg > t4_max_fr_depth(rhp->rdev.lldi.ulptx_memwrite_dsgl && 665 665 use_dsgl)) 666 666 return ERR_PTR(-EINVAL); 667 667
+3 -1
drivers/infiniband/hw/hns/hns_roce_ah.c
··· 64 64 } else { 65 65 u8 *dmac = rdma_ah_retrieve_dmac(ah_attr); 66 66 67 - if (!dmac) 67 + if (!dmac) { 68 + kfree(ah); 68 69 return ERR_PTR(-EINVAL); 70 + } 69 71 memcpy(ah->av.mac, dmac, ETH_ALEN); 70 72 } 71 73
+82 -41
drivers/infiniband/hw/i40iw/i40iw_ctrl.c
··· 130 130 u64 base = 0; 131 131 u32 i, j; 132 132 u32 k = 0; 133 - u32 low; 134 133 135 134 /* copy base values in obj_info */ 136 - for (i = I40IW_HMC_IW_QP, j = 0; 137 - i <= I40IW_HMC_IW_PBLE; i++, j += 8) { 135 + for (i = I40IW_HMC_IW_QP, j = 0; i <= I40IW_HMC_IW_PBLE; i++, j += 8) { 136 + if ((i == I40IW_HMC_IW_SRQ) || 137 + (i == I40IW_HMC_IW_FSIMC) || 138 + (i == I40IW_HMC_IW_FSIAV)) { 139 + info[i].base = 0; 140 + info[i].cnt = 0; 141 + continue; 142 + } 138 143 get_64bit_val(buf, j, &temp); 139 144 info[i].base = RS_64_1(temp, 32) * 512; 140 145 if (info[i].base > base) { 141 146 base = info[i].base; 142 147 k = i; 143 148 } 144 - low = (u32)(temp); 145 - if (low) 146 - info[i].cnt = low; 149 + if (i == I40IW_HMC_IW_APBVT_ENTRY) { 150 + info[i].cnt = 1; 151 + continue; 152 + } 153 + if (i == I40IW_HMC_IW_QP) 154 + info[i].cnt = (u32)RS_64(temp, I40IW_QUERY_FPM_MAX_QPS); 155 + else if (i == I40IW_HMC_IW_CQ) 156 + info[i].cnt = (u32)RS_64(temp, I40IW_QUERY_FPM_MAX_CQS); 157 + else 158 + info[i].cnt = (u32)(temp); 147 159 } 148 160 size = info[k].cnt * info[k].size + info[k].base; 149 161 if (size & 0x1FFFFF) ··· 164 152 *sd = (u32)(size >> 21); 165 153 166 154 return 0; 155 + } 156 + 157 + /** 158 + * i40iw_sc_decode_fpm_query() - Decode a 64 bit value into max count and size 159 + * @buf: ptr to fpm query buffer 160 + * @buf_idx: index into buf 161 + * @info: ptr to i40iw_hmc_obj_info struct 162 + * @rsrc_idx: resource index into info 163 + * 164 + * Decode a 64 bit value from fpm query buffer into max count and size 165 + */ 166 + static u64 i40iw_sc_decode_fpm_query(u64 *buf, 167 + u32 buf_idx, 168 + struct i40iw_hmc_obj_info *obj_info, 169 + u32 rsrc_idx) 170 + { 171 + u64 temp; 172 + u32 size; 173 + 174 + get_64bit_val(buf, buf_idx, &temp); 175 + obj_info[rsrc_idx].max_cnt = (u32)temp; 176 + size = (u32)RS_64_1(temp, 32); 177 + obj_info[rsrc_idx].size = LS_64_1(1, size); 178 + 179 + return temp; 167 180 } 168 181 169 182 /** ··· 205 168 struct 
i40iw_hmc_info *hmc_info, 206 169 struct i40iw_hmc_fpm_misc *hmc_fpm_misc) 207 170 { 208 - u64 temp; 209 171 struct i40iw_hmc_obj_info *obj_info; 210 - u32 i, j, size; 172 + u64 temp; 173 + u32 size; 211 174 u16 max_pe_sds; 212 175 213 176 obj_info = hmc_info->hmc_obj; ··· 222 185 hmc_fpm_misc->max_sds = max_pe_sds; 223 186 hmc_info->sd_table.sd_cnt = max_pe_sds + hmc_info->first_sd_index; 224 187 225 - for (i = I40IW_HMC_IW_QP, j = 8; 226 - i <= I40IW_HMC_IW_ARP; i++, j += 8) { 227 - get_64bit_val(buf, j, &temp); 228 - if (i == I40IW_HMC_IW_QP) 229 - obj_info[i].max_cnt = (u32)RS_64(temp, I40IW_QUERY_FPM_MAX_QPS); 230 - else if (i == I40IW_HMC_IW_CQ) 231 - obj_info[i].max_cnt = (u32)RS_64(temp, I40IW_QUERY_FPM_MAX_CQS); 232 - else 233 - obj_info[i].max_cnt = (u32)temp; 188 + get_64bit_val(buf, 8, &temp); 189 + obj_info[I40IW_HMC_IW_QP].max_cnt = (u32)RS_64(temp, I40IW_QUERY_FPM_MAX_QPS); 190 + size = (u32)RS_64_1(temp, 32); 191 + obj_info[I40IW_HMC_IW_QP].size = LS_64_1(1, size); 234 192 235 - size = (u32)RS_64_1(temp, 32); 236 - obj_info[i].size = ((u64)1 << size); 237 - } 238 - for (i = I40IW_HMC_IW_MR, j = 48; 239 - i <= I40IW_HMC_IW_PBLE; i++, j += 8) { 240 - get_64bit_val(buf, j, &temp); 241 - obj_info[i].max_cnt = (u32)temp; 242 - size = (u32)RS_64_1(temp, 32); 243 - obj_info[i].size = LS_64_1(1, size); 244 - } 193 + get_64bit_val(buf, 16, &temp); 194 + obj_info[I40IW_HMC_IW_CQ].max_cnt = (u32)RS_64(temp, I40IW_QUERY_FPM_MAX_CQS); 195 + size = (u32)RS_64_1(temp, 32); 196 + obj_info[I40IW_HMC_IW_CQ].size = LS_64_1(1, size); 245 197 246 - get_64bit_val(buf, 120, &temp); 247 - hmc_fpm_misc->max_ceqs = (u8)RS_64(temp, I40IW_QUERY_FPM_MAX_CEQS); 248 - get_64bit_val(buf, 120, &temp); 249 - hmc_fpm_misc->ht_multiplier = RS_64(temp, I40IW_QUERY_FPM_HTMULTIPLIER); 250 - get_64bit_val(buf, 120, &temp); 251 - hmc_fpm_misc->timer_bucket = RS_64(temp, I40IW_QUERY_FPM_TIMERBUCKET); 198 + i40iw_sc_decode_fpm_query(buf, 32, obj_info, I40IW_HMC_IW_HTE); 199 + 
i40iw_sc_decode_fpm_query(buf, 40, obj_info, I40IW_HMC_IW_ARP); 200 + 201 + obj_info[I40IW_HMC_IW_APBVT_ENTRY].size = 8192; 202 + obj_info[I40IW_HMC_IW_APBVT_ENTRY].max_cnt = 1; 203 + 204 + i40iw_sc_decode_fpm_query(buf, 48, obj_info, I40IW_HMC_IW_MR); 205 + i40iw_sc_decode_fpm_query(buf, 56, obj_info, I40IW_HMC_IW_XF); 206 + 252 207 get_64bit_val(buf, 64, &temp); 208 + obj_info[I40IW_HMC_IW_XFFL].max_cnt = (u32)temp; 209 + obj_info[I40IW_HMC_IW_XFFL].size = 4; 253 210 hmc_fpm_misc->xf_block_size = RS_64(temp, I40IW_QUERY_FPM_XFBLOCKSIZE); 254 211 if (!hmc_fpm_misc->xf_block_size) 255 212 return I40IW_ERR_INVALID_SIZE; 213 + 214 + i40iw_sc_decode_fpm_query(buf, 72, obj_info, I40IW_HMC_IW_Q1); 215 + 256 216 get_64bit_val(buf, 80, &temp); 217 + obj_info[I40IW_HMC_IW_Q1FL].max_cnt = (u32)temp; 218 + obj_info[I40IW_HMC_IW_Q1FL].size = 4; 257 219 hmc_fpm_misc->q1_block_size = RS_64(temp, I40IW_QUERY_FPM_Q1BLOCKSIZE); 258 220 if (!hmc_fpm_misc->q1_block_size) 259 221 return I40IW_ERR_INVALID_SIZE; 222 + 223 + i40iw_sc_decode_fpm_query(buf, 88, obj_info, I40IW_HMC_IW_TIMER); 224 + 225 + get_64bit_val(buf, 112, &temp); 226 + obj_info[I40IW_HMC_IW_PBLE].max_cnt = (u32)temp; 227 + obj_info[I40IW_HMC_IW_PBLE].size = 8; 228 + 229 + get_64bit_val(buf, 120, &temp); 230 + hmc_fpm_misc->max_ceqs = (u8)RS_64(temp, I40IW_QUERY_FPM_MAX_CEQS); 231 + hmc_fpm_misc->ht_multiplier = RS_64(temp, I40IW_QUERY_FPM_HTMULTIPLIER); 232 + hmc_fpm_misc->timer_bucket = RS_64(temp, I40IW_QUERY_FPM_TIMERBUCKET); 233 + 260 234 return 0; 261 235 } 262 236 ··· 3440 3392 hmc_info->sd_table.sd_entry = virt_mem.va; 3441 3393 } 3442 3394 3443 - /* fill size of objects which are fixed */ 3444 - hmc_info->hmc_obj[I40IW_HMC_IW_XFFL].size = 4; 3445 - hmc_info->hmc_obj[I40IW_HMC_IW_Q1FL].size = 4; 3446 - hmc_info->hmc_obj[I40IW_HMC_IW_PBLE].size = 8; 3447 - hmc_info->hmc_obj[I40IW_HMC_IW_APBVT_ENTRY].size = 8192; 3448 - hmc_info->hmc_obj[I40IW_HMC_IW_APBVT_ENTRY].max_cnt = 1; 3449 - 3450 3395 return ret_code; 
3451 3396 } 3452 3397 ··· 4881 4840 { 4882 4841 u8 fcn_id = vsi->fcn_id; 4883 4842 4884 - if ((vsi->stats_fcn_id_alloc) && (fcn_id != I40IW_INVALID_FCN_ID)) 4843 + if (vsi->stats_fcn_id_alloc && fcn_id < I40IW_MAX_STATS_COUNT) 4885 4844 vsi->dev->fcn_id_array[fcn_id] = false; 4886 4845 i40iw_hw_stats_stop_timer(vsi); 4887 4846 }
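The refactor above funnels the repeated "split a 64-bit FPM query word" pattern through one helper. A standalone sketch of the bit layout it decodes (struct and names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal model of the i40iw_sc_decode_fpm_query() helper factored out
 * above: each 64-bit word of the FPM query buffer packs the max object
 * count in the low 32 bits and log2 of the object size in the high 32. */
struct obj_info {
	uint32_t max_cnt;
	uint64_t size;
};

static uint64_t decode_fpm_word(uint64_t word, struct obj_info *info)
{
	uint32_t size_shift;

	info->max_cnt = (uint32_t)word;          /* low 32 bits */
	size_shift = (uint32_t)(word >> 32);     /* RS_64_1(temp, 32) */
	info->size = 1ULL << size_shift;         /* LS_64_1(1, size) */
	return word;
}
```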
+2 -2
drivers/infiniband/hw/i40iw/i40iw_d.h
··· 1507 1507 I40IW_CQ0_ALIGNMENT_MASK = (256 - 1), 1508 1508 I40IW_HOST_CTX_ALIGNMENT_MASK = (4 - 1), 1509 1509 I40IW_SHADOWAREA_MASK = (128 - 1), 1510 - I40IW_FPM_QUERY_BUF_ALIGNMENT_MASK = 0, 1511 - I40IW_FPM_COMMIT_BUF_ALIGNMENT_MASK = 0 1510 + I40IW_FPM_QUERY_BUF_ALIGNMENT_MASK = (4 - 1), 1511 + I40IW_FPM_COMMIT_BUF_ALIGNMENT_MASK = (4 - 1) 1512 1512 }; 1513 1513 1514 1514 enum i40iw_alignment {
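The enum change above replaces a mask of 0 (which lets any address through) with `(4 - 1)`, the usual kernel idiom for a 4-byte alignment check: an address is aligned iff `(addr & mask) == 0`. A minimal sketch of that convention:

```c
#include <assert.h>
#include <stdint.h>

/* Alignment-mask convention used by the enum above: a mask of (N - 1)
 * rejects addresses that are not N-byte aligned; a mask of 0 (the old,
 * too-permissive value) accepts everything. */
static int is_aligned(uintptr_t addr, uintptr_t mask)
{
	return (addr & mask) == 0;
}
```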
+1 -1
drivers/infiniband/hw/i40iw/i40iw_puda.c
··· 685 685 cqsize = rsrc->cq_size * (sizeof(struct i40iw_cqe)); 686 686 tsize = cqsize + sizeof(struct i40iw_cq_shadow_area); 687 687 ret = i40iw_allocate_dma_mem(dev->hw, &rsrc->cqmem, tsize, 688 - I40IW_CQ0_ALIGNMENT_MASK); 688 + I40IW_CQ0_ALIGNMENT); 689 689 if (ret) 690 690 return ret; 691 691
+1 -1
drivers/infiniband/hw/i40iw/i40iw_status.h
··· 62 62 I40IW_ERR_INVALID_ALIGNMENT = -23, 63 63 I40IW_ERR_FLUSHED_QUEUE = -24, 64 64 I40IW_ERR_INVALID_PUSH_PAGE_INDEX = -25, 65 - I40IW_ERR_INVALID_IMM_DATA_SIZE = -26, 65 + I40IW_ERR_INVALID_INLINE_DATA_SIZE = -26, 66 66 I40IW_ERR_TIMEOUT = -27, 67 67 I40IW_ERR_OPCODE_MISMATCH = -28, 68 68 I40IW_ERR_CQP_COMPL_ERROR = -29,
+4 -4
drivers/infiniband/hw/i40iw/i40iw_uk.c
··· 435 435 436 436 op_info = &info->op.inline_rdma_write; 437 437 if (op_info->len > I40IW_MAX_INLINE_DATA_SIZE) 438 - return I40IW_ERR_INVALID_IMM_DATA_SIZE; 438 + return I40IW_ERR_INVALID_INLINE_DATA_SIZE; 439 439 440 440 ret_code = i40iw_inline_data_size_to_wqesize(op_info->len, &wqe_size); 441 441 if (ret_code) ··· 511 511 512 512 op_info = &info->op.inline_send; 513 513 if (op_info->len > I40IW_MAX_INLINE_DATA_SIZE) 514 - return I40IW_ERR_INVALID_IMM_DATA_SIZE; 514 + return I40IW_ERR_INVALID_INLINE_DATA_SIZE; 515 515 516 516 ret_code = i40iw_inline_data_size_to_wqesize(op_info->len, &wqe_size); 517 517 if (ret_code) ··· 784 784 get_64bit_val(cqe, 0, &qword0); 785 785 get_64bit_val(cqe, 16, &qword2); 786 786 787 - info->tcp_seq_num = (u8)RS_64(qword0, I40IWCQ_TCPSEQNUM); 787 + info->tcp_seq_num = (u32)RS_64(qword0, I40IWCQ_TCPSEQNUM); 788 788 789 789 info->qp_id = (u32)RS_64(qword2, I40IWCQ_QPID); 790 790 ··· 1187 1187 u8 *wqe_size) 1188 1188 { 1189 1189 if (data_size > I40IW_MAX_INLINE_DATA_SIZE) 1190 - return I40IW_ERR_INVALID_IMM_DATA_SIZE; 1190 + return I40IW_ERR_INVALID_INLINE_DATA_SIZE; 1191 1191 1192 1192 if (data_size <= 16) 1193 1193 *wqe_size = I40IW_QP_WQE_MIN_SIZE;
+16 -1
drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
··· 65 65 struct pvrdma_dev *dev = to_vdev(ibcq->device); 66 66 struct pvrdma_cq *cq = to_vcq(ibcq); 67 67 u32 val = cq->cq_handle; 68 + unsigned long flags; 69 + int has_data = 0; 68 70 69 71 val |= (notify_flags & IB_CQ_SOLICITED_MASK) == IB_CQ_SOLICITED ? 70 72 PVRDMA_UAR_CQ_ARM_SOL : PVRDMA_UAR_CQ_ARM; 71 73 74 + spin_lock_irqsave(&cq->cq_lock, flags); 75 + 72 76 pvrdma_write_uar_cq(dev, val); 73 77 74 - return 0; 78 + if (notify_flags & IB_CQ_REPORT_MISSED_EVENTS) { 79 + unsigned int head; 80 + 81 + has_data = pvrdma_idx_ring_has_data(&cq->ring_state->rx, 82 + cq->ibcq.cqe, &head); 83 + if (unlikely(has_data == PVRDMA_INVALID_IDX)) 84 + dev_err(&dev->pdev->dev, "CQ ring state invalid\n"); 85 + } 86 + 87 + spin_unlock_irqrestore(&cq->cq_lock, flags); 88 + 89 + return has_data; 75 90 } 76 91 77 92 /**
+3
drivers/input/mouse/elan_i2c_core.c
··· 1248 1248 { "ELAN0100", 0 }, 1249 1249 { "ELAN0600", 0 }, 1250 1250 { "ELAN0605", 0 }, 1251 + { "ELAN0608", 0 }, 1252 + { "ELAN0609", 0 }, 1253 + { "ELAN060B", 0 }, 1251 1254 { "ELAN1000", 0 }, 1252 1255 { } 1253 1256 };
+2 -2
drivers/input/mouse/trackpoint.c
··· 380 380 return 0; 381 381 382 382 if (trackpoint_read(ps2dev, TP_EXT_BTN, &button_info)) { 383 - psmouse_warn(psmouse, "failed to get extended button data\n"); 384 - button_info = 0; 383 + psmouse_warn(psmouse, "failed to get extended button data, assuming 3 buttons\n"); 384 + button_info = 0x33; 385 385 } 386 386 387 387 psmouse->private = kzalloc(sizeof(struct trackpoint_data), GFP_KERNEL);
+6 -7
drivers/irqchip/irq-atmel-aic-common.c
··· 137 137 #define AT91_RTC_IMR 0x28 138 138 #define AT91_RTC_IRQ_MASK 0x1f 139 139 140 - void __init aic_common_rtc_irq_fixup(struct device_node *root) 140 + void __init aic_common_rtc_irq_fixup(void) 141 141 { 142 142 struct device_node *np; 143 143 void __iomem *regs; 144 144 145 - np = of_find_compatible_node(root, NULL, "atmel,at91rm9200-rtc"); 145 + np = of_find_compatible_node(NULL, NULL, "atmel,at91rm9200-rtc"); 146 146 if (!np) 147 - np = of_find_compatible_node(root, NULL, 147 + np = of_find_compatible_node(NULL, NULL, 148 148 "atmel,at91sam9x5-rtc"); 149 149 150 150 if (!np) ··· 165 165 #define AT91_RTT_ALMIEN (1 << 16) /* Alarm Interrupt Enable */ 166 166 #define AT91_RTT_RTTINCIEN (1 << 17) /* Real Time Timer Increment Interrupt Enable */ 167 167 168 - void __init aic_common_rtt_irq_fixup(struct device_node *root) 168 + void __init aic_common_rtt_irq_fixup(void) 169 169 { 170 170 struct device_node *np; 171 171 void __iomem *regs; ··· 196 196 return; 197 197 198 198 match = of_match_node(matches, root); 199 - of_node_put(root); 200 199 201 200 if (match) { 202 - void (*fixup)(struct device_node *) = match->data; 203 - fixup(root); 201 + void (*fixup)(void) = match->data; 202 + fixup(); 204 203 } 205 204 206 205 of_node_put(root);
+2 -2
drivers/irqchip/irq-atmel-aic-common.h
··· 33 33 const char *name, int nirqs, 34 34 const struct of_device_id *matches); 35 35 36 - void __init aic_common_rtc_irq_fixup(struct device_node *root); 36 + void __init aic_common_rtc_irq_fixup(void); 37 37 38 - void __init aic_common_rtt_irq_fixup(struct device_node *root); 38 + void __init aic_common_rtt_irq_fixup(void); 39 39 40 40 #endif /* __IRQ_ATMEL_AIC_COMMON_H */
+7 -7
drivers/irqchip/irq-atmel-aic.c
··· 209 209 .xlate = aic_irq_domain_xlate, 210 210 }; 211 211 212 - static void __init at91rm9200_aic_irq_fixup(struct device_node *root) 212 + static void __init at91rm9200_aic_irq_fixup(void) 213 213 { 214 - aic_common_rtc_irq_fixup(root); 214 + aic_common_rtc_irq_fixup(); 215 215 } 216 216 217 - static void __init at91sam9260_aic_irq_fixup(struct device_node *root) 217 + static void __init at91sam9260_aic_irq_fixup(void) 218 218 { 219 - aic_common_rtt_irq_fixup(root); 219 + aic_common_rtt_irq_fixup(); 220 220 } 221 221 222 - static void __init at91sam9g45_aic_irq_fixup(struct device_node *root) 222 + static void __init at91sam9g45_aic_irq_fixup(void) 223 223 { 224 - aic_common_rtc_irq_fixup(root); 225 - aic_common_rtt_irq_fixup(root); 224 + aic_common_rtc_irq_fixup(); 225 + aic_common_rtt_irq_fixup(); 226 226 } 227 227 228 228 static const struct of_device_id aic_irq_fixups[] __initconst = {
+2 -2
drivers/irqchip/irq-atmel-aic5.c
··· 305 305 .xlate = aic5_irq_domain_xlate, 306 306 }; 307 307 308 - static void __init sama5d3_aic_irq_fixup(struct device_node *root) 308 + static void __init sama5d3_aic_irq_fixup(void) 309 309 { 310 - aic_common_rtc_irq_fixup(root); 310 + aic_common_rtc_irq_fixup(); 311 311 } 312 312 313 313 static const struct of_device_id aic5_irq_fixups[] __initconst = {
+1
drivers/irqchip/irq-brcmstb-l2.c
··· 189 189 190 190 ct->chip.irq_suspend = brcmstb_l2_intc_suspend; 191 191 ct->chip.irq_resume = brcmstb_l2_intc_resume; 192 + ct->chip.irq_pm_shutdown = brcmstb_l2_intc_suspend; 192 193 193 194 if (data->can_wake) { 194 195 /* This IRQ chip can wake the system, set all child interrupts
+1
drivers/irqchip/irq-gic-v3-its-platform-msi.c
··· 43 43 *dev_id = args.args[0]; 44 44 break; 45 45 } 46 + index++; 46 47 } while (!ret); 47 48 48 49 return ret;
+32 -8
drivers/irqchip/irq-gic-v3-its.c
··· 1835 1835 1836 1836 #define ACPI_GICV3_ITS_MEM_SIZE (SZ_128K) 1837 1837 1838 - #if defined(CONFIG_ACPI_NUMA) && (ACPI_CA_VERSION >= 0x20170531) 1838 + #ifdef CONFIG_ACPI_NUMA 1839 1839 struct its_srat_map { 1840 1840 /* numa node id */ 1841 1841 u32 numa_node; ··· 1843 1843 u32 its_id; 1844 1844 }; 1845 1845 1846 - static struct its_srat_map its_srat_maps[MAX_NUMNODES] __initdata; 1846 + static struct its_srat_map *its_srat_maps __initdata; 1847 1847 static int its_in_srat __initdata; 1848 1848 1849 1849 static int __init acpi_get_its_numa_node(u32 its_id) ··· 1855 1855 return its_srat_maps[i].numa_node; 1856 1856 } 1857 1857 return NUMA_NO_NODE; 1858 + } 1859 + 1860 + static int __init gic_acpi_match_srat_its(struct acpi_subtable_header *header, 1861 + const unsigned long end) 1862 + { 1863 + return 0; 1858 1864 } 1859 1865 1860 1866 static int __init gic_acpi_parse_srat_its(struct acpi_subtable_header *header, ··· 1876 1870 if (its_affinity->header.length < sizeof(*its_affinity)) { 1877 1871 pr_err("SRAT: Invalid header length %d in ITS affinity\n", 1878 1872 its_affinity->header.length); 1879 - return -EINVAL; 1880 - } 1881 - 1882 - if (its_in_srat >= MAX_NUMNODES) { 1883 - pr_err("SRAT: ITS affinity exceeding max count[%d]\n", 1884 - MAX_NUMNODES); 1885 1873 return -EINVAL; 1886 1874 } 1887 1875 ··· 1897 1897 1898 1898 static void __init acpi_table_parse_srat_its(void) 1899 1899 { 1900 + int count; 1901 + 1902 + count = acpi_table_parse_entries(ACPI_SIG_SRAT, 1903 + sizeof(struct acpi_table_srat), 1904 + ACPI_SRAT_TYPE_GIC_ITS_AFFINITY, 1905 + gic_acpi_match_srat_its, 0); 1906 + if (count <= 0) 1907 + return; 1908 + 1909 + its_srat_maps = kmalloc(count * sizeof(struct its_srat_map), 1910 + GFP_KERNEL); 1911 + if (!its_srat_maps) { 1912 + pr_warn("SRAT: Failed to allocate memory for its_srat_maps!\n"); 1913 + return; 1914 + } 1915 + 1900 1916 acpi_table_parse_entries(ACPI_SIG_SRAT, 1901 1917 sizeof(struct acpi_table_srat), 1902 1918 
ACPI_SRAT_TYPE_GIC_ITS_AFFINITY, 1903 1919 gic_acpi_parse_srat_its, 0); 1904 1920 } 1921 + 1922 + /* free the its_srat_maps after ITS probing */ 1923 + static void __init acpi_its_srat_maps_free(void) 1924 + { 1925 + kfree(its_srat_maps); 1926 + } 1905 1927 #else 1906 1928 static void __init acpi_table_parse_srat_its(void) { } 1907 1929 static int __init acpi_get_its_numa_node(u32 its_id) { return NUMA_NO_NODE; } 1930 + static void __init acpi_its_srat_maps_free(void) { } 1908 1931 #endif 1909 1932 1910 1933 static int __init gic_acpi_parse_madt_its(struct acpi_subtable_header *header, ··· 1974 1951 acpi_table_parse_srat_its(); 1975 1952 acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_TRANSLATOR, 1976 1953 gic_acpi_parse_madt_its, 0); 1954 + acpi_its_srat_maps_free(); 1977 1955 } 1978 1956 #else 1979 1957 static void __init its_acpi_probe(void) { }
+13 -3
drivers/irqchip/irq-gic-v3.c
··· 353 353 354 354 if (static_key_true(&supports_deactivate)) 355 355 gic_write_eoir(irqnr); 356 + else 357 + isb(); 356 358 357 359 err = handle_domain_irq(gic_data.domain, irqnr, regs); 358 360 if (err) { ··· 642 640 static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val, 643 641 bool force) 644 642 { 645 - unsigned int cpu = cpumask_any_and(mask_val, cpu_online_mask); 643 + unsigned int cpu; 646 644 void __iomem *reg; 647 645 int enabled; 648 646 u64 val; 647 + 648 + if (force) 649 + cpu = cpumask_first(mask_val); 650 + else 651 + cpu = cpumask_any_and(mask_val, cpu_online_mask); 649 652 650 653 if (cpu >= nr_cpu_ids) 651 654 return -EINVAL; ··· 838 831 if (ret) 839 832 return ret; 840 833 841 - for (i = 0; i < nr_irqs; i++) 842 - gic_irq_domain_map(domain, virq + i, hwirq + i); 834 + for (i = 0; i < nr_irqs; i++) { 835 + ret = gic_irq_domain_map(domain, virq + i, hwirq + i); 836 + if (ret) 837 + return ret; 838 + } 843 839 844 840 return 0; 845 841 }
+10 -4
drivers/irqchip/irq-gic.c
··· 361 361 if (likely(irqnr > 15 && irqnr < 1020)) { 362 362 if (static_key_true(&supports_deactivate)) 363 363 writel_relaxed(irqstat, cpu_base + GIC_CPU_EOI); 364 + isb(); 364 365 handle_domain_irq(gic->domain, irqnr, regs); 365 366 continue; 366 367 } ··· 402 401 goto out; 403 402 404 403 cascade_irq = irq_find_mapping(chip_data->domain, gic_irq); 405 - if (unlikely(gic_irq < 32 || gic_irq > 1020)) 404 + if (unlikely(gic_irq < 32 || gic_irq > 1020)) { 406 405 handle_bad_irq(desc); 407 - else 406 + } else { 407 + isb(); 408 408 generic_handle_irq(cascade_irq); 409 + } 409 410 410 411 out: 411 412 chained_irq_exit(chip, desc); ··· 1030 1027 if (ret) 1031 1028 return ret; 1032 1029 1033 - for (i = 0; i < nr_irqs; i++) 1034 - gic_irq_domain_map(domain, virq + i, hwirq + i); 1030 + for (i = 0; i < nr_irqs; i++) { 1031 + ret = gic_irq_domain_map(domain, virq + i, hwirq + i); 1032 + if (ret) 1033 + return ret; 1034 + } 1035 1035 1036 1036 return 0; 1037 1037 }
+2 -2
drivers/net/ethernet/mellanox/mlx4/main.c
··· 432 432 /* Virtual PCI function needs to determine UAR page size from 433 433 * firmware. Only master PCI function can set the uar page size 434 434 */ 435 - if (enable_4k_uar) 435 + if (enable_4k_uar || !dev->persist->num_vfs) 436 436 dev->uar_page_shift = DEFAULT_UAR_PAGE_SHIFT; 437 437 else 438 438 dev->uar_page_shift = PAGE_SHIFT; ··· 2277 2277 2278 2278 dev->caps.max_fmr_maps = (1 << (32 - ilog2(dev->caps.num_mpts))) - 1; 2279 2279 2280 - if (enable_4k_uar) { 2280 + if (enable_4k_uar || !dev->persist->num_vfs) { 2281 2281 init_hca.log_uar_sz = ilog2(dev->caps.num_uars) + 2282 2282 PAGE_SHIFT - DEFAULT_UAR_PAGE_SHIFT; 2283 2283 init_hca.uar_page_sz = DEFAULT_UAR_PAGE_SHIFT - 12;
+1 -2
drivers/net/ethernet/netronome/nfp/nfp_net_common.c
··· 908 908 return NETDEV_TX_OK; 909 909 910 910 err_unmap: 911 - --f; 912 - while (f >= 0) { 911 + while (--f >= 0) { 913 912 frag = &skb_shinfo(skb)->frags[f]; 914 913 dma_unmap_page(dp->dev, tx_ring->txbufs[wr_idx].dma_addr, 915 914 skb_frag_size(frag), DMA_TO_DEVICE);
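The nfp change above collapses the error-unwind loop into the `while (--f >= 0)` idiom, which walks back over exactly the fragments that were already mapped. A standalone sketch of that idiom (the array and counter are illustrative only):

```c
#include <assert.h>

/* Reverse unwind after a partial failure: after 'mapped' fragments were
 * set up, visit indices mapped-1 .. 0 exactly once each to tear them
 * down, mirroring the dma_unmap_page() loop above. */
#define MAX_FRAGS 8

static int unmapped[MAX_FRAGS];

static int unwind(int mapped)
{
	int f = mapped, n = 0;

	while (--f >= 0) {	/* pre-decrement: never touches index 'mapped' */
		unmapped[f] = 1;
		n++;
	}
	return n;
}
```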
+1 -1
drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c
··· 2311 2311 loop_cnt++) { 2312 2312 NX_WR_DUMP_REG(select_addr, adapter->ahw.pci_base0, queue_id); 2313 2313 read_addr = queueEntry->read_addr; 2314 - for (k = 0; k < read_cnt; k--) { 2314 + for (k = 0; k < read_cnt; k++) { 2315 2315 NX_RD_DUMP_REG(read_addr, adapter->ahw.pci_base0, 2316 2316 &read_value); 2317 2317 *data_buff++ = read_value;
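The one-character netxen fix above (`k--` to `k++`) turns a loop whose condition `k < read_cnt` could never advance toward termination into an ordinary forward copy. An illustrative standalone version:

```c
#include <assert.h>
#include <stdint.h>

/* Forward register-dump copy loop, modeling the fixed hunk above.
 * With the old 'k--', k went 0, -1, -2, ... and the condition
 * k < read_cnt stayed true far past the intended range. */
static int read_words(const uint32_t *src, uint32_t *dst, int read_cnt)
{
	int k, copied = 0;

	for (k = 0; k < read_cnt; k++) {
		dst[k] = src[k];
		copied++;
	}
	return copied;
}
```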
+3
drivers/net/tun.c
··· 2079 2079 2080 2080 err_detach: 2081 2081 tun_detach_all(dev); 2082 + /* register_netdevice() already called tun_free_netdev() */ 2083 + goto err_free_dev; 2084 + 2082 2085 err_free_flow: 2083 2086 tun_flow_uninit(tun); 2084 2087 security_tun_dev_free_security(tun->security);
+2 -1
drivers/nvme/host/fabrics.c
··· 794 794 int i; 795 795 796 796 for (i = 0; i < ARRAY_SIZE(opt_tokens); i++) { 797 - if (opt_tokens[i].token & ~allowed_opts) { 797 + if ((opt_tokens[i].token & opts->mask) && 798 + (opt_tokens[i].token & ~allowed_opts)) { 798 799 pr_warn("invalid parameter '%s'\n", 799 800 opt_tokens[i].pattern); 800 801 }
+2 -3
drivers/nvme/host/pci.c
··· 801 801 return; 802 802 } 803 803 804 + nvmeq->cqe_seen = 1; 804 805 req = blk_mq_tag_to_rq(*nvmeq->tags, cqe->command_id); 805 806 nvme_end_request(req, cqe->status, cqe->result); 806 807 } ··· 831 830 consumed++; 832 831 } 833 832 834 - if (consumed) { 833 + if (consumed) 835 834 nvme_ring_cq_doorbell(nvmeq); 836 - nvmeq->cqe_seen = 1; 837 - } 838 835 } 839 836 840 837 static irqreturn_t nvme_irq(int irq, void *data)
-6
drivers/nvme/target/admin-cmd.c
··· 199 199 copy_and_pad(id->mn, sizeof(id->mn), model, sizeof(model) - 1); 200 200 copy_and_pad(id->fr, sizeof(id->fr), UTS_RELEASE, strlen(UTS_RELEASE)); 201 201 202 - memset(id->mn, ' ', sizeof(id->mn)); 203 - strncpy((char *)id->mn, "Linux", sizeof(id->mn)); 204 - 205 - memset(id->fr, ' ', sizeof(id->fr)); 206 - strncpy((char *)id->fr, UTS_RELEASE, sizeof(id->fr)); 207 - 208 202 id->rab = 6; 209 203 210 204 /*
+5 -4
drivers/nvme/target/fc.c
··· 394 394 static struct nvmet_fc_ls_iod * 395 395 nvmet_fc_alloc_ls_iod(struct nvmet_fc_tgtport *tgtport) 396 396 { 397 - static struct nvmet_fc_ls_iod *iod; 397 + struct nvmet_fc_ls_iod *iod; 398 398 unsigned long flags; 399 399 400 400 spin_lock_irqsave(&tgtport->lock, flags); ··· 471 471 static struct nvmet_fc_fcp_iod * 472 472 nvmet_fc_alloc_fcp_iod(struct nvmet_fc_tgt_queue *queue) 473 473 { 474 - static struct nvmet_fc_fcp_iod *fod; 474 + struct nvmet_fc_fcp_iod *fod; 475 475 476 476 lockdep_assert_held(&queue->qlock); 477 477 ··· 704 704 { 705 705 struct nvmet_fc_tgtport *tgtport = queue->assoc->tgtport; 706 706 struct nvmet_fc_fcp_iod *fod = queue->fod; 707 - struct nvmet_fc_defer_fcp_req *deferfcp; 707 + struct nvmet_fc_defer_fcp_req *deferfcp, *tempptr; 708 708 unsigned long flags; 709 709 int i, writedataactive; 710 710 bool disconnect; ··· 735 735 } 736 736 737 737 /* Cleanup defer'ed IOs in queue */ 738 - list_for_each_entry(deferfcp, &queue->avail_defer_list, req_list) { 738 + list_for_each_entry_safe(deferfcp, tempptr, &queue->avail_defer_list, 739 + req_list) { 739 740 list_del(&deferfcp->req_list); 740 741 kfree(deferfcp); 741 742 }
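The switch to `list_for_each_entry_safe()` above is the kernel's idiom for deleting nodes while walking a list: the successor is stashed in a spare cursor before the current node is freed. The same idiom with a plain singly-linked list:

```c
#include <assert.h>
#include <stdlib.h>

/* Safe-iteration sketch: save cur->next before free(cur), so the walk
 * never dereferences freed memory -- the reason the hunk above needs
 * the _safe variant and a 'tempptr' cursor. */
struct node {
	struct node *next;
};

static int free_all(struct node *head)
{
	struct node *cur = head, *next;
	int freed = 0;

	while (cur) {
		next = cur->next;	/* grab successor first */
		free(cur);
		freed++;
		cur = next;
	}
	return freed;
}
```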
+4 -4
drivers/of/device.c
··· 89 89 bool coherent; 90 90 unsigned long offset; 91 91 const struct iommu_ops *iommu; 92 + u64 mask; 92 93 93 94 /* 94 95 * Set default coherent_dma_mask to 32 bit. Drivers are expected to ··· 135 134 * Limit coherent and dma mask based on size and default mask 136 135 * set by the driver. 137 136 */ 138 - dev->coherent_dma_mask = min(dev->coherent_dma_mask, 139 - DMA_BIT_MASK(ilog2(dma_addr + size))); 140 - *dev->dma_mask = min((*dev->dma_mask), 141 - DMA_BIT_MASK(ilog2(dma_addr + size))); 137 + mask = DMA_BIT_MASK(ilog2(dma_addr + size - 1) + 1); 138 + dev->coherent_dma_mask &= mask; 139 + *dev->dma_mask &= mask; 142 140 143 141 coherent = of_dma_is_coherent(np); 144 142 dev_dbg(dev, "device is%sdma coherent\n",
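The of/device fix above is an off-by-one in mask width: the DMA mask must cover the *last* addressable byte, `dma_addr + size - 1`, and the old `ilog2(dma_addr + size)` undersized the mask whenever the end address was not a power of two. A standalone sketch of the corrected computation (`ilog2u` stands in for the kernel's `ilog2()`):

```c
#include <assert.h>
#include <stdint.h>

#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

/* floor(log2(x)), what the kernel's ilog2() computes at runtime */
static unsigned int ilog2u(uint64_t x)
{
	unsigned int r = 0;

	while (x >>= 1)
		r++;
	return r;
}

/* Fixed formula from the hunk above: width of the last byte's address,
 * i.e. ilog2(end) + 1 with end = dma_addr + size - 1. */
static uint64_t range_mask(uint64_t dma_addr, uint64_t size)
{
	return DMA_BIT_MASK(ilog2u(dma_addr + size - 1) + 1);
}
```

For a 3 GiB range at base 0, the last byte is 0xBFFFFFFF and needs a full 32-bit mask; the old formula produced only a 31-bit one.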
+1 -1
drivers/parisc/dino.c
··· 956 956 957 957 dino_dev->hba.dev = dev; 958 958 dino_dev->hba.base_addr = ioremap_nocache(hpa, 4096); 959 - dino_dev->hba.lmmio_space_offset = 0; /* CPU addrs == bus addrs */ 959 + dino_dev->hba.lmmio_space_offset = PCI_F_EXTEND; 960 960 spin_lock_init(&dino_dev->dinosaur_pen); 961 961 dino_dev->hba.iommu = ccio_get_iommu(dev); 962 962
+4 -5
drivers/pci/pci.c
··· 514 514 */ 515 515 struct pci_dev *pci_find_pcie_root_port(struct pci_dev *dev) 516 516 { 517 - struct pci_dev *bridge, *highest_pcie_bridge = NULL; 517 + struct pci_dev *bridge, *highest_pcie_bridge = dev; 518 518 519 519 bridge = pci_upstream_bridge(dev); 520 520 while (bridge && pci_is_pcie(bridge)) { ··· 522 522 bridge = pci_upstream_bridge(bridge); 523 523 } 524 524 525 - if (highest_pcie_bridge && 526 - pci_pcie_type(highest_pcie_bridge) == PCI_EXP_TYPE_ROOT_PORT) 527 - return highest_pcie_bridge; 525 + if (pci_pcie_type(highest_pcie_bridge) != PCI_EXP_TYPE_ROOT_PORT) 526 + return NULL; 528 527 529 - return NULL; 528 + return highest_pcie_bridge; 530 529 } 531 530 EXPORT_SYMBOL(pci_find_pcie_root_port); 532 531
-1
drivers/rtc/rtc-ds1307.c
··· 1301 1301 static const struct regmap_config regmap_config = { 1302 1302 .reg_bits = 8, 1303 1303 .val_bits = 8, 1304 - .max_register = 0x12, 1305 1304 }; 1306 1305 1307 1306 static int ds1307_probe(struct i2c_client *client,
+19 -14
drivers/scsi/ipr.c
··· 3351 3351 return; 3352 3352 } 3353 3353 3354 + if (ioa_cfg->scsi_unblock) { 3355 + ioa_cfg->scsi_unblock = 0; 3356 + ioa_cfg->scsi_blocked = 0; 3357 + spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3358 + scsi_unblock_requests(ioa_cfg->host); 3359 + spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); 3360 + if (ioa_cfg->scsi_blocked) 3361 + scsi_block_requests(ioa_cfg->host); 3362 + } 3363 + 3354 3364 if (!ioa_cfg->scan_enabled) { 3355 3365 spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3356 3366 return; ··· 7221 7211 ENTER; 7222 7212 if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].removing_ioa) { 7223 7213 ipr_trace; 7224 - spin_unlock_irq(ioa_cfg->host->host_lock); 7225 - scsi_unblock_requests(ioa_cfg->host); 7226 - spin_lock_irq(ioa_cfg->host->host_lock); 7214 + ioa_cfg->scsi_unblock = 1; 7215 + schedule_work(&ioa_cfg->work_q); 7227 7216 } 7228 7217 7229 7218 ioa_cfg->in_reset_reload = 0; ··· 7296 7287 list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q); 7297 7288 wake_up_all(&ioa_cfg->reset_wait_q); 7298 7289 7299 - spin_unlock(ioa_cfg->host->host_lock); 7300 - scsi_unblock_requests(ioa_cfg->host); 7301 - spin_lock(ioa_cfg->host->host_lock); 7302 - 7303 - if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds) 7304 - scsi_block_requests(ioa_cfg->host); 7305 - 7290 + ioa_cfg->scsi_unblock = 1; 7306 7291 schedule_work(&ioa_cfg->work_q); 7307 7292 LEAVE; 7308 7293 return IPR_RC_JOB_RETURN; ··· 9252 9249 spin_unlock(&ioa_cfg->hrrq[i]._lock); 9253 9250 } 9254 9251 wmb(); 9255 - if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].removing_ioa) 9252 + if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].removing_ioa) { 9253 + ioa_cfg->scsi_unblock = 0; 9254 + ioa_cfg->scsi_blocked = 1; 9256 9255 scsi_block_requests(ioa_cfg->host); 9256 + } 9257 9257 9258 9258 ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg); 9259 9259 ioa_cfg->reset_cmd = ipr_cmd; ··· 9312 9306 wake_up_all(&ioa_cfg->reset_wait_q); 9313 9307 9314 9308 if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].removing_ioa) { 9315 - 
spin_unlock_irq(ioa_cfg->host->host_lock); 9316 - scsi_unblock_requests(ioa_cfg->host); 9317 - spin_lock_irq(ioa_cfg->host->host_lock); 9309 + ioa_cfg->scsi_unblock = 1; 9310 + schedule_work(&ioa_cfg->work_q); 9318 9311 } 9319 9312 return; 9320 9313 } else {
+2
drivers/scsi/ipr.h
··· 1488 1488 u8 cfg_locked:1; 1489 1489 u8 clear_isr:1; 1490 1490 u8 probe_done:1; 1491 + u8 scsi_unblock:1; 1492 + u8 scsi_blocked:1; 1491 1493 1492 1494 u8 revid; 1493 1495
-12
drivers/scsi/qla2xxx/qla_tmpl.c
··· 401 401 for (i = 0; i < vha->hw->max_req_queues; i++) { 402 402 struct req_que *req = vha->hw->req_q_map[i]; 403 403 404 - if (!test_bit(i, vha->hw->req_qid_map)) 405 - continue; 406 - 407 404 if (req || !buf) { 408 405 length = req ? 409 406 req->length : REQUEST_ENTRY_CNT_24XX; ··· 414 417 } else if (ent->t263.queue_type == T263_QUEUE_TYPE_RSP) { 415 418 for (i = 0; i < vha->hw->max_rsp_queues; i++) { 416 419 struct rsp_que *rsp = vha->hw->rsp_q_map[i]; 417 - 418 - if (!test_bit(i, vha->hw->rsp_qid_map)) 419 - continue; 420 420 421 421 if (rsp || !buf) { 422 422 length = rsp ? ··· 654 660 for (i = 0; i < vha->hw->max_req_queues; i++) { 655 661 struct req_que *req = vha->hw->req_q_map[i]; 656 662 657 - if (!test_bit(i, vha->hw->req_qid_map)) 658 - continue; 659 - 660 663 if (req || !buf) { 661 664 qla27xx_insert16(i, buf, len); 662 665 qla27xx_insert16(1, buf, len); ··· 665 674 } else if (ent->t274.queue_type == T274_QUEUE_TYPE_RSP_SHAD) { 666 675 for (i = 0; i < vha->hw->max_rsp_queues; i++) { 667 676 struct rsp_que *rsp = vha->hw->rsp_q_map[i]; 668 - 669 - if (!test_bit(i, vha->hw->rsp_qid_map)) 670 - continue; 671 677 672 678 if (rsp || !buf) { 673 679 qla27xx_insert16(i, buf, len);
+1 -1
drivers/scsi/ses.c
··· 99 99 100 100 ret = scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf, bufflen, 101 101 NULL, SES_TIMEOUT, SES_RETRIES, NULL); 102 - if (unlikely(!ret)) 102 + if (unlikely(ret)) 103 103 return ret; 104 104 105 105 recv_page_code = ((unsigned char *)buf)[0];
+2 -2
drivers/scsi/st.c
··· 4299 4299 kref_init(&tpnt->kref); 4300 4300 tpnt->disk = disk; 4301 4301 disk->private_data = &tpnt->driver; 4302 - disk->queue = SDp->request_queue; 4303 4302 /* SCSI tape doesn't register this gendisk via add_disk(). Manually 4304 4303 * take queue reference that release_disk() expects. */ 4305 - if (!blk_get_queue(disk->queue)) 4304 + if (!blk_get_queue(SDp->request_queue)) 4306 4305 goto out_put_disk; 4306 + disk->queue = SDp->request_queue; 4307 4307 tpnt->driver = &st_template; 4308 4308 4309 4309 tpnt->device = SDp;
+8 -7
drivers/soc/imx/gpcv2.c
··· 200 200 201 201 domain->dev = &pdev->dev; 202 202 203 - ret = pm_genpd_init(&domain->genpd, NULL, true); 204 - if (ret) { 205 - dev_err(domain->dev, "Failed to init power domain\n"); 206 - return ret; 207 - } 208 - 209 203 domain->regulator = devm_regulator_get_optional(domain->dev, "power"); 210 204 if (IS_ERR(domain->regulator)) { 211 205 if (PTR_ERR(domain->regulator) != -ENODEV) { 212 - dev_err(domain->dev, "Failed to get domain's regulator\n"); 206 + if (PTR_ERR(domain->regulator) != -EPROBE_DEFER) 207 + dev_err(domain->dev, "Failed to get domain's regulator\n"); 213 208 return PTR_ERR(domain->regulator); 214 209 } 215 210 } else { 216 211 regulator_set_voltage(domain->regulator, 217 212 domain->voltage, domain->voltage); 213 + } 214 + 215 + ret = pm_genpd_init(&domain->genpd, NULL, true); 216 + if (ret) { 217 + dev_err(domain->dev, "Failed to init power domain\n"); 218 + return ret; 218 219 } 219 220 220 221 ret = of_genpd_add_provider_simple(domain->dev->of_node,
+2
drivers/soc/ti/ti_sci_pm_domains.c
··· 176 176 177 177 ti_sci_pd->dev = dev; 178 178 179 + ti_sci_pd->pd.name = "ti_sci_pd"; 180 + 179 181 ti_sci_pd->pd.attach_dev = ti_sci_pd_attach_dev; 180 182 ti_sci_pd->pd.detach_dev = ti_sci_pd_detach_dev; 181 183
+5 -2
drivers/tty/pty.c
··· 793 793 struct tty_struct *tty; 794 794 struct path *pts_path; 795 795 struct dentry *dentry; 796 + struct vfsmount *mnt; 796 797 int retval; 797 798 int index; 798 799 ··· 806 805 if (retval) 807 806 return retval; 808 807 809 - fsi = devpts_acquire(filp); 808 + fsi = devpts_acquire(filp, &mnt); 810 809 if (IS_ERR(fsi)) { 811 810 retval = PTR_ERR(fsi); 812 811 goto out_free_file; ··· 850 849 pts_path = kmalloc(sizeof(struct path), GFP_KERNEL); 851 850 if (!pts_path) 852 851 goto err_release; 853 - pts_path->mnt = filp->f_path.mnt; 852 + pts_path->mnt = mnt; 854 853 pts_path->dentry = dentry; 855 854 path_get(pts_path); 856 855 tty->link->driver_data = pts_path; ··· 867 866 path_put(pts_path); 868 867 kfree(pts_path); 869 868 err_release: 869 + mntput(mnt); 870 870 tty_unlock(tty); 871 871 // This will also put-ref the fsi 872 872 tty_release(inode, filp); ··· 876 874 devpts_kill_index(fsi, index); 877 875 out_put_fsi: 878 876 devpts_release(fsi); 877 + mntput(mnt); 879 878 out_free_file: 880 879 tty_free_file(filp); 881 880 return retval;
+1 -2
drivers/xen/biomerge.c
··· 10 10 unsigned long bfn1 = pfn_to_bfn(page_to_pfn(vec1->bv_page)); 11 11 unsigned long bfn2 = pfn_to_bfn(page_to_pfn(vec2->bv_page)); 12 12 13 - return __BIOVEC_PHYS_MERGEABLE(vec1, vec2) && 14 - ((bfn1 == bfn2) || ((bfn1+1) == bfn2)); 13 + return bfn1 + PFN_DOWN(vec1->bv_offset + vec1->bv_len) == bfn2; 15 14 #else 16 15 /* 17 16 * XXX: Add support for merging bio_vec when using different page
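The simplified Xen merge test above allows two segments to merge only when the second one's backing frame sits exactly where the first one ends. A standalone sketch of that check (4 KiB frames assumed; names are illustrative):

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

/* Mergeable iff frame(bfn1) plus the number of frames spanned by the
 * first segment lands exactly on bfn2 -- the condition in the hunk. */
static int frames_mergeable(unsigned long bfn1, unsigned long off1,
			    unsigned long len1, unsigned long bfn2)
{
	return bfn1 + PFN_DOWN(off1 + len1) == bfn2;
}
```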
+1 -2
fs/binfmt_elf.c
··· 664 664 { 665 665 unsigned long random_variable = 0; 666 666 667 - if ((current->flags & PF_RANDOMIZE) && 668 - !(current->personality & ADDR_NO_RANDOMIZE)) { 667 + if (current->flags & PF_RANDOMIZE) { 669 668 random_variable = get_random_long(); 670 669 random_variable &= STACK_RND_MASK; 671 670 random_variable <<= PAGE_SHIFT;
+3 -1
fs/devpts/inode.c
··· 133 133 return sb->s_fs_info; 134 134 } 135 135 136 - struct pts_fs_info *devpts_acquire(struct file *filp) 136 + struct pts_fs_info *devpts_acquire(struct file *filp, struct vfsmount **ptsmnt) 137 137 { 138 138 struct pts_fs_info *result; 139 139 struct path path; ··· 142 142 143 143 path = filp->f_path; 144 144 path_get(&path); 145 + *ptsmnt = NULL; 145 146 146 147 /* Has the devpts filesystem already been found? */ 147 148 sb = path.mnt->mnt_sb; ··· 166 165 * pty code needs to hold extra references in case of last /dev/tty close 167 166 */ 168 167 atomic_inc(&sb->s_active); 168 + *ptsmnt = mntget(path.mnt); 169 169 result = DEVPTS_SB(sb); 170 170 171 171 out:
+2 -2
fs/iomap.c
··· 278 278 unsigned long bytes; /* Bytes to write to page */ 279 279 280 280 offset = (pos & (PAGE_SIZE - 1)); 281 - bytes = min_t(unsigned long, PAGE_SIZE - offset, length); 281 + bytes = min_t(loff_t, PAGE_SIZE - offset, length); 282 282 283 283 rpage = __iomap_read_page(inode, pos); 284 284 if (IS_ERR(rpage)) ··· 373 373 unsigned offset, bytes; 374 374 375 375 offset = pos & (PAGE_SIZE - 1); /* Within page */ 376 - bytes = min_t(unsigned, PAGE_SIZE - offset, count); 376 + bytes = min_t(loff_t, PAGE_SIZE - offset, count); 377 377 378 378 if (IS_DAX(inode)) 379 379 status = iomap_dax_zero(pos, offset, bytes, iomap);
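The iomap fix above changes `min_t(unsigned long, ...)` / `min_t(unsigned, ...)` to `min_t(loff_t, ...)`: `min_t` casts *both* operands to the named type before comparing, so a 64-bit byte count can be silently truncated when a 32-bit type is used. A sketch of the two behaviors (function names are illustrative; 32-bit `unsigned` assumed for the buggy variant):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL

/* min_t with a 32-bit type: 'count' is truncated before the compare. */
static uint64_t bytes_in_page_32(uint64_t pos, uint64_t count)
{
	uint32_t a = (uint32_t)(PAGE_SIZE - (pos & (PAGE_SIZE - 1)));
	uint32_t b = (uint32_t)count;	/* truncating cast: the bug */

	return a < b ? a : b;
}

/* min_t(loff_t, ...): the compare happens at full 64-bit width. */
static uint64_t bytes_in_page_64(uint64_t pos, uint64_t count)
{
	uint64_t a = PAGE_SIZE - (pos & (PAGE_SIZE - 1));

	return a < count ? a : count;
}
```

With `count = 1ULL << 32`, the 32-bit variant truncates the count to 0 and writes nothing, while the 64-bit variant correctly processes a full page.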
+15 -6
fs/quota/dquot.c
··· 1124 1124 WARN_ON_ONCE(1); 1125 1125 dquot->dq_dqb.dqb_rsvspace = 0; 1126 1126 } 1127 + if (dquot->dq_dqb.dqb_curspace + dquot->dq_dqb.dqb_rsvspace <= 1128 + dquot->dq_dqb.dqb_bsoftlimit) 1129 + dquot->dq_dqb.dqb_btime = (time64_t) 0; 1130 + clear_bit(DQ_BLKS_B, &dquot->dq_flags); 1127 1131 } 1128 1132 1129 1133 static void dquot_decr_inodes(struct dquot *dquot, qsize_t number) ··· 1149 1145 dquot->dq_dqb.dqb_curspace -= number; 1150 1146 else 1151 1147 dquot->dq_dqb.dqb_curspace = 0; 1152 - if (dquot->dq_dqb.dqb_curspace <= dquot->dq_dqb.dqb_bsoftlimit) 1148 + if (dquot->dq_dqb.dqb_curspace + dquot->dq_dqb.dqb_rsvspace <= 1149 + dquot->dq_dqb.dqb_bsoftlimit) 1153 1150 dquot->dq_dqb.dqb_btime = (time64_t) 0; 1154 1151 clear_bit(DQ_BLKS_B, &dquot->dq_flags); 1155 1152 } ··· 1386 1381 1387 1382 static int info_bdq_free(struct dquot *dquot, qsize_t space) 1388 1383 { 1384 + qsize_t tspace; 1385 + 1386 + tspace = dquot->dq_dqb.dqb_curspace + dquot->dq_dqb.dqb_rsvspace; 1387 + 1389 1388 if (test_bit(DQ_FAKE_B, &dquot->dq_flags) || 1390 - dquot->dq_dqb.dqb_curspace <= dquot->dq_dqb.dqb_bsoftlimit) 1389 + tspace <= dquot->dq_dqb.dqb_bsoftlimit) 1391 1390 return QUOTA_NL_NOWARN; 1392 1391 1393 - if (dquot->dq_dqb.dqb_curspace - space <= dquot->dq_dqb.dqb_bsoftlimit) 1392 + if (tspace - space <= dquot->dq_dqb.dqb_bsoftlimit) 1394 1393 return QUOTA_NL_BSOFTBELOW; 1395 - if (dquot->dq_dqb.dqb_curspace >= dquot->dq_dqb.dqb_bhardlimit && 1396 - dquot->dq_dqb.dqb_curspace - space < dquot->dq_dqb.dqb_bhardlimit) 1394 + if (tspace >= dquot->dq_dqb.dqb_bhardlimit && 1395 + tspace - space < dquot->dq_dqb.dqb_bhardlimit) 1397 1396 return QUOTA_NL_BHARDBELOW; 1398 1397 return QUOTA_NL_NOWARN; 1399 1398 } ··· 2690 2681 2691 2682 if (check_blim) { 2692 2683 if (!dm->dqb_bsoftlimit || 2693 - dm->dqb_curspace < dm->dqb_bsoftlimit) { 2684 + dm->dqb_curspace + dm->dqb_rsvspace < dm->dqb_bsoftlimit) { 2694 2685 dm->dqb_btime = 0; 2695 2686 clear_bit(DQ_BLKS_B, &dquot->dq_flags); 2696 
2687 } else if (!(di->d_fieldmask & QC_SPC_TIMER))
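The fs/quota/dquot.c hunks above consistently fold reserved space into the soft/hard-limit checks. A toy model of the reworked `info_bdq_free()` decision, with field names shortened and limits illustrative:

```c
#include <assert.h>

/* Toy model of the warning logic after the fix: reserved space (rsv) now
 * counts toward usage, so freeing from a reservation-heavy dquot emits
 * the correct quota netlink warning instead of none at all. */
enum toy_warn { TOY_NOWARN, TOY_BSOFTBELOW, TOY_BHARDBELOW };

static enum toy_warn toy_info_bdq_free(long long cur, long long rsv,
                                       long long soft, long long hard,
                                       long long space)
{
    long long tspace = cur + rsv;

    if (tspace <= soft)
        return TOY_NOWARN;
    if (tspace - space <= soft)
        return TOY_BSOFTBELOW;
    if (tspace >= hard && tspace - space < hard)
        return TOY_BHARDBELOW;
    return TOY_NOWARN;
}
```

With `cur = 900, rsv = 200` and a soft limit of 1000, the old `cur`-only check saw 900 and reported nothing; counting the reservation, the free correctly reports dropping below the soft limit.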
+1 -1
fs/xfs/libxfs/xfs_ialloc.c
··· 1246 1246 1247 1247 /* free inodes to the left? */ 1248 1248 if (useleft && trec.ir_freecount) { 1249 - rec = trec; 1250 1249 xfs_btree_del_cursor(cur, XFS_BTREE_NOERROR); 1251 1250 cur = tcur; 1252 1251 1253 1252 pag->pagl_leftrec = trec.ir_startino; 1254 1253 pag->pagl_rightrec = rec.ir_startino; 1255 1254 pag->pagl_pagino = pagino; 1255 + rec = trec; 1256 1256 goto alloc_inode; 1257 1257 } 1258 1258
+11
fs/xfs/xfs_log.c
··· 749 749 return 0; 750 750 } 751 751 752 + /* 753 + * During the second phase of log recovery, we need iget and 754 + * iput to behave like they do for an active filesystem. 755 + * xfs_fs_drop_inode needs to be able to prevent the deletion 756 + * of inodes before we're done replaying log items on those 757 + * inodes. Turn it off immediately after recovery finishes 758 + * so that we don't leak the quota inodes if subsequent mount 759 + * activities fail. 760 + */ 761 + mp->m_super->s_flags |= MS_ACTIVE; 752 762 error = xlog_recover_finish(mp->m_log); 753 763 if (!error) 754 764 xfs_log_work_queue(mp); 765 + mp->m_super->s_flags &= ~MS_ACTIVE; 755 766 756 767 return error; 757 768 }
+2 -10
fs/xfs/xfs_mount.c
··· 945 945 } 946 946 947 947 /* 948 - * During the second phase of log recovery, we need iget and 949 - * iput to behave like they do for an active filesystem. 950 - * xfs_fs_drop_inode needs to be able to prevent the deletion 951 - * of inodes before we're done replaying log items on those 952 - * inodes. 953 - */ 954 - mp->m_super->s_flags |= MS_ACTIVE; 955 - 956 - /* 957 948 * Finish recovering the file system. This part needed to be delayed 958 949 * until after the root and real-time bitmap inodes were consistently 959 950 * read in. ··· 1019 1028 out_quota: 1020 1029 xfs_qm_unmount_quotas(mp); 1021 1030 out_rtunmount: 1022 - mp->m_super->s_flags &= ~MS_ACTIVE; 1023 1031 xfs_rtunmount_inodes(mp); 1024 1032 out_rele_rip: 1025 1033 IRELE(rip); 1026 1034 cancel_delayed_work_sync(&mp->m_reclaim_work); 1027 1035 xfs_reclaim_inodes(mp, SYNC_WAIT); 1036 + /* Clean out dquots that might be in memory after quotacheck. */ 1037 + xfs_qm_unmount(mp); 1028 1038 out_log_dealloc: 1029 1039 mp->m_flags |= XFS_MOUNT_UNMOUNTING; 1030 1040 xfs_log_mount_cancel(mp);
+1 -1
include/linux/devpts_fs.h
··· 19 19 20 20 struct pts_fs_info; 21 21 22 - struct pts_fs_info *devpts_acquire(struct file *); 22 + struct pts_fs_info *devpts_acquire(struct file *, struct vfsmount **ptsmnt); 23 23 void devpts_release(struct pts_fs_info *); 24 24 25 25 int devpts_new_index(struct pts_fs_info *);
+4 -2
include/linux/memblock.h
··· 61 61 #ifdef CONFIG_ARCH_DISCARD_MEMBLOCK 62 62 #define __init_memblock __meminit 63 63 #define __initdata_memblock __meminitdata 64 + void memblock_discard(void); 64 65 #else 65 66 #define __init_memblock 66 67 #define __initdata_memblock ··· 75 74 int nid, ulong flags); 76 75 phys_addr_t memblock_find_in_range(phys_addr_t start, phys_addr_t end, 77 76 phys_addr_t size, phys_addr_t align); 78 - phys_addr_t get_allocated_memblock_reserved_regions_info(phys_addr_t *addr); 79 - phys_addr_t get_allocated_memblock_memory_regions_info(phys_addr_t *addr); 80 77 void memblock_allow_resize(void); 81 78 int memblock_add_node(phys_addr_t base, phys_addr_t size, int nid); 82 79 int memblock_add(phys_addr_t base, phys_addr_t size); ··· 108 109 109 110 void __next_reserved_mem_region(u64 *idx, phys_addr_t *out_start, 110 111 phys_addr_t *out_end); 112 + 113 + void __memblock_free_early(phys_addr_t base, phys_addr_t size); 114 + void __memblock_free_late(phys_addr_t base, phys_addr_t size); 111 115 112 116 /** 113 117 * for_each_mem_range - iterate through memblock areas from type_a and not
+8 -2
include/linux/memcontrol.h
··· 484 484 extern int do_swap_account; 485 485 #endif 486 486 487 - void lock_page_memcg(struct page *page); 487 + struct mem_cgroup *lock_page_memcg(struct page *page); 488 + void __unlock_page_memcg(struct mem_cgroup *memcg); 488 489 void unlock_page_memcg(struct page *page); 489 490 490 491 static inline unsigned long memcg_page_state(struct mem_cgroup *memcg, ··· 810 809 { 811 810 } 812 811 813 - static inline void lock_page_memcg(struct page *page) 812 + static inline struct mem_cgroup *lock_page_memcg(struct page *page) 813 + { 814 + return NULL; 815 + } 816 + 817 + static inline void __unlock_page_memcg(struct mem_cgroup *memcg) 814 818 { 815 819 } 816 820
+8
include/linux/nmi.h
··· 168 168 #define sysctl_softlockup_all_cpu_backtrace 0 169 169 #define sysctl_hardlockup_all_cpu_backtrace 0 170 170 #endif 171 + 172 + #if defined(CONFIG_HARDLOCKUP_CHECK_TIMESTAMP) && \ 173 + defined(CONFIG_HARDLOCKUP_DETECTOR) 174 + void watchdog_update_hrtimer_threshold(u64 period); 175 + #else 176 + static inline void watchdog_update_hrtimer_threshold(u64 period) { } 177 + #endif 178 + 171 179 extern bool is_hardlockup(void); 172 180 struct ctl_table; 173 181 extern int proc_watchdog(struct ctl_table *, int ,
+22
include/linux/oom.h
··· 6 6 #include <linux/types.h> 7 7 #include <linux/nodemask.h> 8 8 #include <uapi/linux/oom.h> 9 + #include <linux/sched/coredump.h> /* MMF_* */ 10 + #include <linux/mm.h> /* VM_FAULT* */ 9 11 10 12 struct zonelist; 11 13 struct notifier_block; ··· 63 61 static inline bool tsk_is_oom_victim(struct task_struct * tsk) 64 62 { 65 63 return tsk->signal->oom_mm; 64 + } 65 + 66 + /* 67 + * Checks whether a page fault on the given mm is still reliable. 68 + * This is no longer true if the oom reaper started to reap the 69 + * address space which is reflected by MMF_UNSTABLE flag set in 70 + * the mm. At that moment any !shared mapping would lose the content 71 + * and could cause a memory corruption (zero pages instead of the 72 + * original content). 73 + * 74 + * User should call this before establishing a page table entry for 75 + * a !shared mapping and under the proper page table lock. 76 + * 77 + * Return 0 when the PF is safe VM_FAULT_SIGBUS otherwise. 78 + */ 79 + static inline int check_stable_address_space(struct mm_struct *mm) 80 + { 81 + if (unlikely(test_bit(MMF_UNSTABLE, &mm->flags))) 82 + return VM_FAULT_SIGBUS; 83 + return 0; 66 84 } 67 85 68 86 extern unsigned long oom_badness(struct task_struct *p,
+2 -2
include/linux/perf_event.h
··· 310 310 * Notification that the event was mapped or unmapped. Called 311 311 * in the context of the mapping task. 312 312 */ 313 - void (*event_mapped) (struct perf_event *event); /*optional*/ 314 - void (*event_unmapped) (struct perf_event *event); /*optional*/ 313 + void (*event_mapped) (struct perf_event *event, struct mm_struct *mm); /* optional */ 314 + void (*event_unmapped) (struct perf_event *event, struct mm_struct *mm); /* optional */ 315 315 316 316 /* 317 317 * Flags for ->add()/->del()/ ->start()/->stop(). There are
+3 -1
include/linux/pid.h
··· 8 8 PIDTYPE_PID, 9 9 PIDTYPE_PGID, 10 10 PIDTYPE_SID, 11 - PIDTYPE_MAX 11 + PIDTYPE_MAX, 12 + /* only valid to __task_pid_nr_ns() */ 13 + __PIDTYPE_TGID 12 14 }; 13 15 14 16 /*
+5 -4
include/linux/ptr_ring.h
··· 436 436 __PTR_RING_PEEK_CALL_v; \ 437 437 }) 438 438 439 - static inline void **__ptr_ring_init_queue_alloc(int size, gfp_t gfp) 439 + static inline void **__ptr_ring_init_queue_alloc(unsigned int size, gfp_t gfp) 440 440 { 441 - return kzalloc(ALIGN(size * sizeof(void *), SMP_CACHE_BYTES), gfp); 441 + return kcalloc(size, sizeof(void *), gfp); 442 442 } 443 443 444 444 static inline void __ptr_ring_set_size(struct ptr_ring *r, int size) ··· 582 582 * In particular if you consume ring in interrupt or BH context, you must 583 583 * disable interrupts/BH when doing so. 584 584 */ 585 - static inline int ptr_ring_resize_multiple(struct ptr_ring **rings, int nrings, 585 + static inline int ptr_ring_resize_multiple(struct ptr_ring **rings, 586 + unsigned int nrings, 586 587 int size, 587 588 gfp_t gfp, void (*destroy)(void *)) 588 589 { ··· 591 590 void ***queues; 592 591 int i; 593 592 594 - queues = kmalloc(nrings * sizeof *queues, gfp); 593 + queues = kmalloc_array(nrings, sizeof(*queues), gfp); 595 594 if (!queues) 596 595 goto noqueues; 597 596
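The switch to `kcalloc()`/`kmalloc_array()` in the ptr_ring hunks above replaces open-coded `size * sizeof(...)` multiplications, which can overflow and silently under-allocate. The userspace analogue (the overflow check in `calloc` is assumed here, as provided by glibc and required by modern C):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* overflow-checked, zeroing allocation, like kcalloc(): a huge element
 * count fails cleanly instead of wrapping to a tiny buffer */
static void **toy_queue_alloc(size_t size)
{
    return calloc(size, sizeof(void *));
}
```

An open-coded `malloc(size * sizeof(void *))` with `size` near `SIZE_MAX` would wrap and hand back a buffer far smaller than requested; the checked multiply returns `NULL` instead.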
+27 -24
include/linux/sched.h
··· 1163 1163 return tsk->tgid; 1164 1164 } 1165 1165 1166 - extern pid_t task_tgid_nr_ns(struct task_struct *tsk, struct pid_namespace *ns); 1167 - 1168 - static inline pid_t task_tgid_vnr(struct task_struct *tsk) 1169 - { 1170 - return pid_vnr(task_tgid(tsk)); 1171 - } 1172 - 1173 1166 /** 1174 1167 * pid_alive - check that a task structure is not stale 1175 1168 * @p: Task structure to be checked. ··· 1176 1183 static inline int pid_alive(const struct task_struct *p) 1177 1184 { 1178 1185 return p->pids[PIDTYPE_PID].pid != NULL; 1179 - } 1180 - 1181 - static inline pid_t task_ppid_nr_ns(const struct task_struct *tsk, struct pid_namespace *ns) 1182 - { 1183 - pid_t pid = 0; 1184 - 1185 - rcu_read_lock(); 1186 - if (pid_alive(tsk)) 1187 - pid = task_tgid_nr_ns(rcu_dereference(tsk->real_parent), ns); 1188 - rcu_read_unlock(); 1189 - 1190 - return pid; 1191 - } 1192 - 1193 - static inline pid_t task_ppid_nr(const struct task_struct *tsk) 1194 - { 1195 - return task_ppid_nr_ns(tsk, &init_pid_ns); 1196 1186 } 1197 1187 1198 1188 static inline pid_t task_pgrp_nr_ns(struct task_struct *tsk, struct pid_namespace *ns) ··· 1197 1221 static inline pid_t task_session_vnr(struct task_struct *tsk) 1198 1222 { 1199 1223 return __task_pid_nr_ns(tsk, PIDTYPE_SID, NULL); 1224 + } 1225 + 1226 + static inline pid_t task_tgid_nr_ns(struct task_struct *tsk, struct pid_namespace *ns) 1227 + { 1228 + return __task_pid_nr_ns(tsk, __PIDTYPE_TGID, ns); 1229 + } 1230 + 1231 + static inline pid_t task_tgid_vnr(struct task_struct *tsk) 1232 + { 1233 + return __task_pid_nr_ns(tsk, __PIDTYPE_TGID, NULL); 1234 + } 1235 + 1236 + static inline pid_t task_ppid_nr_ns(const struct task_struct *tsk, struct pid_namespace *ns) 1237 + { 1238 + pid_t pid = 0; 1239 + 1240 + rcu_read_lock(); 1241 + if (pid_alive(tsk)) 1242 + pid = task_tgid_nr_ns(rcu_dereference(tsk->real_parent), ns); 1243 + rcu_read_unlock(); 1244 + 1245 + return pid; 1246 + } 1247 + 1248 + static inline pid_t task_ppid_nr(const struct 
task_struct *tsk) 1249 + { 1250 + return task_ppid_nr_ns(tsk, &init_pid_ns); 1200 1251 } 1201 1252 1202 1253 /* Obsolete, do not use: */
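The sched.h/pid.c rework above routes all the tgid helpers through `__task_pid_nr_ns()` via the new `__PIDTYPE_TGID` pseudo-type, dropping the separately exported `task_tgid_nr_ns()`. A toy model (deliberately not kernel code) of that dispatch:

```c
#include <assert.h>

/* Toy model of the __PIDTYPE_TGID dispatch: the pseudo-type is mapped
 * back to PIDTYPE_PID after redirecting the lookup to the thread-group
 * leader, because a thread group's id is simply the leader's own pid. */
enum toy_pid_type {
    TOY_PIDTYPE_PID,
    TOY_PIDTYPE_PGID,
    TOY_PIDTYPE_SID,
    TOY_PIDTYPE_MAX,
    TOY_PIDTYPE_TGID        /* only valid for the lookup helper */
};

struct toy_task {
    int pids[TOY_PIDTYPE_MAX];
    struct toy_task *group_leader;
};

static int toy_task_pid_nr(struct toy_task *task, enum toy_pid_type type)
{
    if (type != TOY_PIDTYPE_PID) {
        if (type == TOY_PIDTYPE_TGID)
            type = TOY_PIDTYPE_PID;   /* tgid == leader's pid */
        task = task->group_leader;
    }
    return task->pids[type];
}
```

A non-leader thread asked for its tgid thus reads the leader's `PIDTYPE_PID` entry, which is exactly what the moved inline `task_tgid_nr_ns()` now does.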
+2 -1
include/linux/skb_array.h
··· 193 193 } 194 194 195 195 static inline int skb_array_resize_multiple(struct skb_array **rings, 196 - int nrings, int size, gfp_t gfp) 196 + int nrings, unsigned int size, 197 + gfp_t gfp) 197 198 { 198 199 BUILD_BUG_ON(offsetof(struct skb_array, ring)); 199 200 return ptr_ring_resize_multiple((struct ptr_ring **)rings,
+37
include/linux/wait.h
··· 757 757 __ret; \ 758 758 }) 759 759 760 + #define __wait_event_killable_timeout(wq_head, condition, timeout) \ 761 + ___wait_event(wq_head, ___wait_cond_timeout(condition), \ 762 + TASK_KILLABLE, 0, timeout, \ 763 + __ret = schedule_timeout(__ret)) 764 + 765 + /** 766 + * wait_event_killable_timeout - sleep until a condition gets true or a timeout elapses 767 + * @wq_head: the waitqueue to wait on 768 + * @condition: a C expression for the event to wait for 769 + * @timeout: timeout, in jiffies 770 + * 771 + * The process is put to sleep (TASK_KILLABLE) until the 772 + * @condition evaluates to true or a kill signal is received. 773 + * The @condition is checked each time the waitqueue @wq_head is woken up. 774 + * 775 + * wake_up() has to be called after changing any variable that could 776 + * change the result of the wait condition. 777 + * 778 + * Returns: 779 + * 0 if the @condition evaluated to %false after the @timeout elapsed, 780 + * 1 if the @condition evaluated to %true after the @timeout elapsed, 781 + * the remaining jiffies (at least 1) if the @condition evaluated 782 + * to %true before the @timeout elapsed, or -%ERESTARTSYS if it was 783 + * interrupted by a kill signal. 784 + * 785 + * Only kill signals interrupt this process. 786 + */ 787 + #define wait_event_killable_timeout(wq_head, condition, timeout) \ 788 + ({ \ 789 + long __ret = timeout; \ 790 + might_sleep(); \ 791 + if (!___wait_cond_timeout(condition)) \ 792 + __ret = __wait_event_killable_timeout(wq_head, \ 793 + condition, timeout); \ 794 + __ret; \ 795 + }) 796 + 760 797 761 798 #define __wait_event_lock_irq(wq_head, condition, lock, cmd) \ 762 799 (void)___wait_event(wq_head, condition, TASK_UNINTERRUPTIBLE, 0, 0, \
+2 -2
include/net/ip.h
··· 362 362 !forwarding) 363 363 return dst_mtu(dst); 364 364 365 - return min(dst->dev->mtu, IP_MAX_MTU); 365 + return min(READ_ONCE(dst->dev->mtu), IP_MAX_MTU); 366 366 } 367 367 368 368 static inline unsigned int ip_skb_dst_mtu(struct sock *sk, ··· 374 374 return ip_dst_mtu_maybe_forward(skb_dst(skb), forwarding); 375 375 } 376 376 377 - return min(skb_dst(skb)->dev->mtu, IP_MAX_MTU); 377 + return min(READ_ONCE(skb_dst(skb)->dev->mtu), IP_MAX_MTU); 378 378 } 379 379 380 380 u32 ip_idents_reserve(u32 hash, int segs);
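The `READ_ONCE()` additions in the include/net/ip.h hunks above force a single load of `dev->mtu`, so a concurrent MTU update cannot make the `min()` observe two different values. A userspace sketch, using the classic volatile-cast idiom rather than the kernel's full macro:

```c
#include <assert.h>

/* simplified stand-in for the kernel's READ_ONCE(): one forced load */
#define TOY_READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

#define TOY_IP_MAX_MTU 0xFFFFU

static unsigned int toy_dst_mtu(const unsigned int *dev_mtu)
{
    unsigned int mtu = TOY_READ_ONCE(*dev_mtu);  /* load once, then clamp */

    return mtu < TOY_IP_MAX_MTU ? mtu : TOY_IP_MAX_MTU;
}
```

Without the forced single load, a compiler may legally re-read `*dev_mtu` for each arm of the comparison, returning a value that is neither the old nor the new MTU when a writer races.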
+4 -1
include/net/sch_generic.h
··· 808 808 old = *pold; 809 809 *pold = new; 810 810 if (old != NULL) { 811 - qdisc_tree_reduce_backlog(old, old->q.qlen, old->qstats.backlog); 811 + unsigned int qlen = old->q.qlen; 812 + unsigned int backlog = old->qstats.backlog; 813 + 812 814 qdisc_reset(old); 815 + qdisc_tree_reduce_backlog(old, qlen, backlog); 813 816 } 814 817 sch_tree_unlock(sch); 815 818
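The sch_generic.h hunk above is purely an ordering fix: `qdisc_reset()` zeroes the queue counters, so `qdisc_tree_reduce_backlog()` must run on values snapshotted beforehand. A toy model (structure and names illustrative, not the kernel's):

```c
#include <assert.h>

struct toy_qdisc { unsigned int qlen, backlog; };

/* like qdisc_reset(): drops all queued packets, zeroing the counters */
static void toy_reset(struct toy_qdisc *q)
{
    q->qlen = 0;
    q->backlog = 0;
}

/* replaces 'old'; returns the qlen the caller must subtract from the
 * parent hierarchy -- snapshotted before the reset wipes it */
static unsigned int toy_graft(struct toy_qdisc *old)
{
    unsigned int qlen = old->qlen;      /* snapshot first */
    unsigned int backlog = old->backlog;

    toy_reset(old);
    (void)backlog;
    return qlen;
}
```

Reducing after the reset with the live fields, as the old code effectively did when reading them post-reset, would subtract zero and leave stale counts in the parents.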
+1 -3
include/net/sock.h
··· 509 509 static inline int sk_peek_offset(struct sock *sk, int flags) 510 510 { 511 511 if (unlikely(flags & MSG_PEEK)) { 512 - s32 off = READ_ONCE(sk->sk_peek_off); 513 - if (off >= 0) 514 - return off; 512 + return READ_ONCE(sk->sk_peek_off); 515 513 } 516 514 517 515 return 0;
+8 -6
kernel/audit_watch.c
··· 66 66 67 67 /* fsnotify events we care about. */ 68 68 #define AUDIT_FS_WATCH (FS_MOVE | FS_CREATE | FS_DELETE | FS_DELETE_SELF |\ 69 - FS_MOVE_SELF | FS_EVENT_ON_CHILD) 69 + FS_MOVE_SELF | FS_EVENT_ON_CHILD | FS_UNMOUNT) 70 70 71 71 static void audit_free_parent(struct audit_parent *parent) 72 72 { ··· 457 457 list_del(&krule->rlist); 458 458 459 459 if (list_empty(&watch->rules)) { 460 + /* 461 + * audit_remove_watch() drops our reference to 'parent' which 462 + * can get freed. Grab our own reference to be safe. 463 + */ 464 + audit_get_parent(parent); 460 465 audit_remove_watch(watch); 461 - 462 - if (list_empty(&parent->watches)) { 463 - audit_get_parent(parent); 466 + if (list_empty(&parent->watches)) 464 467 fsnotify_destroy_mark(&parent->mark, audit_watch_group); 465 - audit_put_parent(parent); 466 - } 468 + audit_put_parent(parent); 467 469 } 468 470 } 469 471
+39 -8
kernel/events/core.c
··· 2217 2217 return can_add_hw; 2218 2218 } 2219 2219 2220 + /* 2221 + * Complement to update_event_times(). This computes the tstamp_* values to 2222 + * continue 'enabled' state from @now, and effectively discards the time 2223 + * between the prior tstamp_stopped and now (as we were in the OFF state, or 2224 + * just switched (context) time base). 2225 + * 2226 + * This further assumes '@event->state == INACTIVE' (we just came from OFF) and 2227 + * cannot have been scheduled in yet. And going into INACTIVE state means 2228 + * '@event->tstamp_stopped = @now'. 2229 + * 2230 + * Thus given the rules of update_event_times(): 2231 + * 2232 + * total_time_enabled = tstamp_stopped - tstamp_enabled 2233 + * total_time_running = tstamp_stopped - tstamp_running 2234 + * 2235 + * We can insert 'tstamp_stopped == now' and reverse them to compute new 2236 + * tstamp_* values. 2237 + */ 2238 + static void __perf_event_enable_time(struct perf_event *event, u64 now) 2239 + { 2240 + WARN_ON_ONCE(event->state != PERF_EVENT_STATE_INACTIVE); 2241 + 2242 + event->tstamp_stopped = now; 2243 + event->tstamp_enabled = now - event->total_time_enabled; 2244 + event->tstamp_running = now - event->total_time_running; 2245 + } 2246 + 2220 2247 static void add_event_to_ctx(struct perf_event *event, 2221 2248 struct perf_event_context *ctx) 2222 2249 { ··· 2251 2224 2252 2225 list_add_event(event, ctx); 2253 2226 perf_group_attach(event); 2254 - event->tstamp_enabled = tstamp; 2255 - event->tstamp_running = tstamp; 2256 - event->tstamp_stopped = tstamp; 2227 + /* 2228 + * We can be called with event->state == STATE_OFF when we create with 2229 + * .disabled = 1. In that case the IOC_ENABLE will call this function. 
2230 + */ 2231 + if (event->state == PERF_EVENT_STATE_INACTIVE) 2232 + __perf_event_enable_time(event, tstamp); 2257 2233 } 2258 2234 2259 2235 static void ctx_sched_out(struct perf_event_context *ctx, ··· 2501 2471 u64 tstamp = perf_event_time(event); 2502 2472 2503 2473 event->state = PERF_EVENT_STATE_INACTIVE; 2504 - event->tstamp_enabled = tstamp - event->total_time_enabled; 2474 + __perf_event_enable_time(event, tstamp); 2505 2475 list_for_each_entry(sub, &event->sibling_list, group_entry) { 2476 + /* XXX should not be > INACTIVE if event isn't */ 2506 2477 if (sub->state >= PERF_EVENT_STATE_INACTIVE) 2507 - sub->tstamp_enabled = tstamp - sub->total_time_enabled; 2478 + __perf_event_enable_time(sub, tstamp); 2508 2479 } 2509 2480 } 2510 2481 ··· 5121 5090 atomic_inc(&event->rb->aux_mmap_count); 5122 5091 5123 5092 if (event->pmu->event_mapped) 5124 - event->pmu->event_mapped(event); 5093 + event->pmu->event_mapped(event, vma->vm_mm); 5125 5094 } 5126 5095 5127 5096 static void perf_pmu_output_stop(struct perf_event *event); ··· 5144 5113 unsigned long size = perf_data_size(rb); 5145 5114 5146 5115 if (event->pmu->event_unmapped) 5147 - event->pmu->event_unmapped(event); 5116 + event->pmu->event_unmapped(event, vma->vm_mm); 5148 5117 5149 5118 /* 5150 5119 * rb->aux_mmap_count will always drop before rb->mmap_count and ··· 5442 5411 vma->vm_ops = &perf_mmap_vmops; 5443 5412 5444 5413 if (event->pmu->event_mapped) 5445 - event->pmu->event_mapped(event); 5414 + event->pmu->event_mapped(event, vma->vm_mm); 5446 5415 5447 5416 return ret; 5448 5417 }
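The tstamp algebra that `__perf_event_enable_time()` above relies on can be checked directly: with `tstamp_stopped = now`, reversing the two `update_event_times()` identities recovers the `tstamp_*` values from the accumulated totals. A toy verification:

```c
#include <assert.h>
#include <stdint.h>

/* Toy check of the reconstruction:
 *   total_time_enabled = tstamp_stopped - tstamp_enabled
 *   total_time_running = tstamp_stopped - tstamp_running
 * so with tstamp_stopped = now, the enabled/running stamps follow. */
struct toy_event {
    uint64_t tstamp_enabled, tstamp_running, tstamp_stopped;
    uint64_t total_time_enabled, total_time_running;
};

static void toy_enable_time(struct toy_event *e, uint64_t now)
{
    e->tstamp_stopped = now;
    e->tstamp_enabled = now - e->total_time_enabled;
    e->tstamp_running = now - e->total_time_running;
}
```

Both identities hold by construction after the call, which is what lets the event continue its 'enabled' accounting from `now` while discarding the OFF interval.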
+8 -2
kernel/irq/chip.c
··· 1000 1000 1001 1001 void irq_modify_status(unsigned int irq, unsigned long clr, unsigned long set) 1002 1002 { 1003 - unsigned long flags; 1003 + unsigned long flags, trigger, tmp; 1004 1004 struct irq_desc *desc = irq_get_desc_lock(irq, &flags, 0); 1005 1005 1006 1006 if (!desc) ··· 1014 1014 1015 1015 irq_settings_clr_and_set(desc, clr, set); 1016 1016 1017 + trigger = irqd_get_trigger_type(&desc->irq_data); 1018 + 1017 1019 irqd_clear(&desc->irq_data, IRQD_NO_BALANCING | IRQD_PER_CPU | 1018 1020 IRQD_TRIGGER_MASK | IRQD_LEVEL | IRQD_MOVE_PCNTXT); 1019 1021 if (irq_settings_has_no_balance_set(desc)) ··· 1027 1025 if (irq_settings_is_level(desc)) 1028 1026 irqd_set(&desc->irq_data, IRQD_LEVEL); 1029 1027 1030 - irqd_set(&desc->irq_data, irq_settings_get_trigger_mask(desc)); 1028 + tmp = irq_settings_get_trigger_mask(desc); 1029 + if (tmp != IRQ_TYPE_NONE) 1030 + trigger = tmp; 1031 + 1032 + irqd_set(&desc->irq_data, trigger); 1031 1033 1032 1034 irq_put_desc_unlock(desc, flags); 1033 1035 }
+2 -2
kernel/irq/ipi.c
··· 165 165 struct irq_data *data = irq_get_irq_data(irq); 166 166 struct cpumask *ipimask = data ? irq_data_get_affinity_mask(data) : NULL; 167 167 168 - if (!data || !ipimask || cpu > nr_cpu_ids) 168 + if (!data || !ipimask || cpu >= nr_cpu_ids) 169 169 return INVALID_HWIRQ; 170 170 171 171 if (!cpumask_test_cpu(cpu, ipimask)) ··· 195 195 if (!chip->ipi_send_single && !chip->ipi_send_mask) 196 196 return -EINVAL; 197 197 198 - if (cpu > nr_cpu_ids) 198 + if (cpu >= nr_cpu_ids) 199 199 return -EINVAL; 200 200 201 201 if (dest) {
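The kernel/irq/ipi.c change above is a classic off-by-one: valid CPU ids run `0..nr_cpu_ids-1`, so `cpu > nr_cpu_ids` wrongly accepted `cpu == nr_cpu_ids`, one past the end. Reduced to its essence:

```c
#include <assert.h>
#include <stdbool.h>

/* the corrected bounds check: strictly less than the id count */
static bool toy_cpu_valid(unsigned int cpu, unsigned int nr_cpu_ids)
{
    return cpu < nr_cpu_ids;
}
```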
+23 -2
kernel/kmod.c
··· 71 71 static DECLARE_WAIT_QUEUE_HEAD(kmod_wq); 72 72 73 73 /* 74 + * This is a restriction on having *all* MAX_KMOD_CONCURRENT threads 75 + * running at the same time without returning. When this happens we 76 + * believe you've somehow ended up with a recursive module dependency 77 + * creating a loop. 78 + * 79 + * We have no option but to fail. 80 + * 81 + * Userspace should proactively try to detect and prevent these. 82 + */ 83 + #define MAX_KMOD_ALL_BUSY_TIMEOUT 5 84 + 85 + /* 74 86 modprobe_path is set via /proc/sys. 75 87 */ 76 88 char modprobe_path[KMOD_PATH_LEN] = "/sbin/modprobe"; ··· 179 167 pr_warn_ratelimited("request_module: kmod_concurrent_max (%u) close to 0 (max_modprobes: %u), for module %s, throttling...", 180 168 atomic_read(&kmod_concurrent_max), 181 169 MAX_KMOD_CONCURRENT, module_name); 182 - wait_event_interruptible(kmod_wq, 183 - atomic_dec_if_positive(&kmod_concurrent_max) >= 0); 170 + ret = wait_event_killable_timeout(kmod_wq, 171 + atomic_dec_if_positive(&kmod_concurrent_max) >= 0, 172 + MAX_KMOD_ALL_BUSY_TIMEOUT * HZ); 173 + if (!ret) { 174 + pr_warn_ratelimited("request_module: modprobe %s cannot be processed, kmod busy with %d threads for more than %d seconds now", 175 + module_name, MAX_KMOD_CONCURRENT, MAX_KMOD_ALL_BUSY_TIMEOUT); 176 + return -ETIME; 177 + } else if (ret == -ERESTARTSYS) { 178 + pr_warn_ratelimited("request_module: sigkill sent for modprobe %s, giving up", module_name); 179 + return ret; 180 + } 184 181 } 185 182 186 183 trace_module_request(module_name, wait, _RET_IP_);
+4 -7
kernel/pid.c
··· 527 527 if (!ns) 528 528 ns = task_active_pid_ns(current); 529 529 if (likely(pid_alive(task))) { 530 - if (type != PIDTYPE_PID) 530 + if (type != PIDTYPE_PID) { 531 + if (type == __PIDTYPE_TGID) 532 + type = PIDTYPE_PID; 531 533 task = task->group_leader; 534 + } 532 535 nr = pid_nr_ns(rcu_dereference(task->pids[type].pid), ns); 533 536 } 534 537 rcu_read_unlock(); ··· 539 536 return nr; 540 537 } 541 538 EXPORT_SYMBOL(__task_pid_nr_ns); 542 - 543 - pid_t task_tgid_nr_ns(struct task_struct *tsk, struct pid_namespace *ns) 544 - { 545 - return pid_nr_ns(task_tgid(tsk), ns); 546 - } 547 - EXPORT_SYMBOL(task_tgid_nr_ns); 548 539 549 540 struct pid_namespace *task_active_pid_ns(struct task_struct *tsk) 550 541 {
+5 -1
kernel/signal.c
··· 1194 1194 recalc_sigpending_and_wake(t); 1195 1195 } 1196 1196 } 1197 - if (action->sa.sa_handler == SIG_DFL) 1197 + /* 1198 + * Don't clear SIGNAL_UNKILLABLE for traced tasks, users won't expect 1199 + * debugging to leave init killable. 1200 + */ 1201 + if (action->sa.sa_handler == SIG_DFL && !t->ptrace) 1198 1202 t->signal->flags &= ~SIGNAL_UNKILLABLE; 1199 1203 ret = specific_send_sig_info(sig, info, t); 1200 1204 spin_unlock_irqrestore(&t->sighand->siglock, flags);
+1
kernel/watchdog.c
··· 240 240 * hardlockup detector generates a warning 241 241 */ 242 242 sample_period = get_softlockup_thresh() * ((u64)NSEC_PER_SEC / 5); 243 + watchdog_update_hrtimer_threshold(sample_period); 243 244 } 244 245 245 246 /* Commands for resetting the watchdog */
+59
kernel/watchdog_hld.c
··· 37 37 } 38 38 EXPORT_SYMBOL(arch_touch_nmi_watchdog); 39 39 40 + #ifdef CONFIG_HARDLOCKUP_CHECK_TIMESTAMP 41 + static DEFINE_PER_CPU(ktime_t, last_timestamp); 42 + static DEFINE_PER_CPU(unsigned int, nmi_rearmed); 43 + static ktime_t watchdog_hrtimer_sample_threshold __read_mostly; 44 + 45 + void watchdog_update_hrtimer_threshold(u64 period) 46 + { 47 + /* 48 + * The hrtimer runs with a period of (watchdog_threshold * 2) / 5 49 + * 50 + * So it runs effectively with 2.5 times the rate of the NMI 51 + * watchdog. That means the hrtimer should fire 2-3 times before 52 + * the NMI watchdog expires. The NMI watchdog on x86 is based on 53 + * unhalted CPU cycles, so if Turbo-Mode is enabled the CPU cycles 54 + * might run way faster than expected and the NMI fires in a 55 + * smaller period than the one deduced from the nominal CPU 56 + * frequency. Depending on the Turbo-Mode factor this might be fast 57 + * enough to get the NMI period smaller than the hrtimer watchdog 58 + * period and trigger false positives. 59 + * 60 + * The sample threshold is used to check in the NMI handler whether 61 + * the minimum time between two NMI samples has elapsed. That 62 + * prevents false positives. 63 + * 64 + * Set this to 4/5 of the actual watchdog threshold period so the 65 + * hrtimer is guaranteed to fire at least once within the real 66 + * watchdog threshold. 67 + */ 68 + watchdog_hrtimer_sample_threshold = period * 2; 69 + } 70 + 71 + static bool watchdog_check_timestamp(void) 72 + { 73 + ktime_t delta, now = ktime_get_mono_fast_ns(); 74 + 75 + delta = now - __this_cpu_read(last_timestamp); 76 + if (delta < watchdog_hrtimer_sample_threshold) { 77 + /* 78 + * If ktime is jiffies based, a stalled timer would prevent 79 + * jiffies from being incremented and the filter would look 80 + * at a stale timestamp and never trigger. 
81 + */ 82 + if (__this_cpu_inc_return(nmi_rearmed) < 10) 83 + return false; 84 + } 85 + __this_cpu_write(nmi_rearmed, 0); 86 + __this_cpu_write(last_timestamp, now); 87 + return true; 88 + } 89 + #else 90 + static inline bool watchdog_check_timestamp(void) 91 + { 92 + return true; 93 + } 94 + #endif 95 + 40 96 static struct perf_event_attr wd_hw_attr = { 41 97 .type = PERF_TYPE_HARDWARE, 42 98 .config = PERF_COUNT_HW_CPU_CYCLES, ··· 116 60 __this_cpu_write(watchdog_nmi_touch, false); 117 61 return; 118 62 } 63 + 64 + if (!watchdog_check_timestamp()) 65 + return; 119 66 120 67 /* check for a hardlockup 121 68 * This is done by making sure our timer interrupt
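The NMI low-pass filter added in kernel/watchdog_hld.c above accepts a sample only once the hrtimer sample threshold has elapsed since the previous one, with a rearm counter so a stalled clocksource cannot suppress detection forever. A userspace sketch of `watchdog_check_timestamp()` (threshold value illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static uint64_t toy_last_timestamp;
static unsigned int toy_nmi_rearmed;
static const uint64_t toy_sample_threshold = 100;   /* "ns", illustrative */

static bool toy_check_timestamp(uint64_t now)
{
    uint64_t delta = now - toy_last_timestamp;

    if (delta < toy_sample_threshold) {
        /* tolerate a few early NMIs, but never filter indefinitely */
        if (++toy_nmi_rearmed < 10)
            return false;
    }
    toy_nmi_rearmed = 0;
    toy_last_timestamp = now;
    return true;
}
```

Early NMIs, as produced by Turbo-Mode running the cycle counter faster than the nominal frequency, are dropped; the tenth consecutive early sample passes anyway, which is the stalled-clocksource escape hatch the kernel comment describes.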
+7
lib/Kconfig.debug
··· 798 798 select SOFTLOCKUP_DETECTOR 799 799 800 800 # 801 + # Enables a timestamp based low pass filter to compensate for perf based 802 + # hard lockup detection which runs too fast due to turbo modes. 803 + # 804 + config HARDLOCKUP_CHECK_TIMESTAMP 805 + bool 806 + 807 + # 801 808 # arch/ can define HAVE_HARDLOCKUP_DETECTOR_ARCH to provide their own hard 802 809 # lockup detector rather than the perf based detector. 803 810 #
+1 -1
mm/cma_debug.c
··· 167 167 char name[16]; 168 168 int u32s; 169 169 170 - sprintf(name, "cma-%s", cma->name); 170 + scnprintf(name, sizeof(name), "cma-%s", cma->name); 171 171 172 172 tmp = debugfs_create_dir(name, cma_debugfs_root); 173 173
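The mm/cma_debug.c fix above bounds the format into the 16-byte `name` buffer. The userspace analogue uses `snprintf`, which, like the kernel's `scnprintf`, never writes past the buffer and always NUL-terminates, where `sprintf` would overflow on a long CMA area name:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* bounded formatting: at most len-1 characters plus the terminator */
static void toy_make_name(char *buf, size_t len, const char *cma_name)
{
    snprintf(buf, len, "cma-%s", cma_name);
}
```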
+22 -8
mm/huge_memory.c
··· 32 32 #include <linux/userfaultfd_k.h> 33 33 #include <linux/page_idle.h> 34 34 #include <linux/shmem_fs.h> 35 + #include <linux/oom.h> 35 36 36 37 #include <asm/tlb.h> 37 38 #include <asm/pgalloc.h> ··· 551 550 struct mem_cgroup *memcg; 552 551 pgtable_t pgtable; 553 552 unsigned long haddr = vmf->address & HPAGE_PMD_MASK; 553 + int ret = 0; 554 554 555 555 VM_BUG_ON_PAGE(!PageCompound(page), page); 556 556 ··· 563 561 564 562 pgtable = pte_alloc_one(vma->vm_mm, haddr); 565 563 if (unlikely(!pgtable)) { 566 - mem_cgroup_cancel_charge(page, memcg, true); 567 - put_page(page); 568 - return VM_FAULT_OOM; 564 + ret = VM_FAULT_OOM; 565 + goto release; 569 566 } 570 567 571 568 clear_huge_page(page, haddr, HPAGE_PMD_NR); ··· 577 576 578 577 vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd); 579 578 if (unlikely(!pmd_none(*vmf->pmd))) { 580 - spin_unlock(vmf->ptl); 581 - mem_cgroup_cancel_charge(page, memcg, true); 582 - put_page(page); 583 - pte_free(vma->vm_mm, pgtable); 579 + goto unlock_release; 584 580 } else { 585 581 pmd_t entry; 582 + 583 + ret = check_stable_address_space(vma->vm_mm); 584 + if (ret) 585 + goto unlock_release; 586 586 587 587 /* Deliver the page fault to userland */ 588 588 if (userfaultfd_missing(vma)) { ··· 612 610 } 613 611 614 612 return 0; 613 + unlock_release: 614 + spin_unlock(vmf->ptl); 615 + release: 616 + if (pgtable) 617 + pte_free(vma->vm_mm, pgtable); 618 + mem_cgroup_cancel_charge(page, memcg, true); 619 + put_page(page); 620 + return ret; 621 + 615 622 } 616 623 617 624 /* ··· 699 688 ret = 0; 700 689 set = false; 701 690 if (pmd_none(*vmf->pmd)) { 702 - if (userfaultfd_missing(vma)) { 691 + ret = check_stable_address_space(vma->vm_mm); 692 + if (ret) { 693 + spin_unlock(vmf->ptl); 694 + } else if (userfaultfd_missing(vma)) { 703 695 spin_unlock(vmf->ptl); 704 696 ret = handle_userfault(vmf, VM_UFFD_MISSING); 705 697 VM_BUG_ON(ret & VM_FAULT_FALLBACK);
+17 -21
mm/memblock.c
··· 285 285 } 286 286 287 287 #ifdef CONFIG_ARCH_DISCARD_MEMBLOCK 288 - 289 - phys_addr_t __init_memblock get_allocated_memblock_reserved_regions_info( 290 - phys_addr_t *addr) 288 + /** 289 + * Discard memory and reserved arrays if they were allocated 290 + */ 291 + void __init memblock_discard(void) 291 292 { 292 - if (memblock.reserved.regions == memblock_reserved_init_regions) 293 - return 0; 293 + phys_addr_t addr, size; 294 294 295 - *addr = __pa(memblock.reserved.regions); 295 + if (memblock.reserved.regions != memblock_reserved_init_regions) { 296 + addr = __pa(memblock.reserved.regions); 297 + size = PAGE_ALIGN(sizeof(struct memblock_region) * 298 + memblock.reserved.max); 299 + __memblock_free_late(addr, size); 300 + } 296 301 297 - return PAGE_ALIGN(sizeof(struct memblock_region) * 298 - memblock.reserved.max); 302 + if (memblock.memory.regions == memblock_memory_init_regions) { 303 + addr = __pa(memblock.memory.regions); 304 + size = PAGE_ALIGN(sizeof(struct memblock_region) * 305 + memblock.memory.max); 306 + __memblock_free_late(addr, size); 307 + } 299 308 } 300 - 301 - phys_addr_t __init_memblock get_allocated_memblock_memory_regions_info( 302 - phys_addr_t *addr) 303 - { 304 - if (memblock.memory.regions == memblock_memory_init_regions) 305 - return 0; 306 - 307 - *addr = __pa(memblock.memory.regions); 308 - 309 - return PAGE_ALIGN(sizeof(struct memblock_region) * 310 - memblock.memory.max); 311 - } 312 - 313 309 #endif 314 310 315 311 /**
+31 -12
mm/memcontrol.c
··· 1611 1611 * @page: the page 1612 1612 * 1613 1613 * This function protects unlocked LRU pages from being moved to 1614 - * another cgroup and stabilizes their page->mem_cgroup binding. 1614 + * another cgroup. 1615 + * 1616 + * It ensures lifetime of the returned memcg. Caller is responsible 1617 + * for the lifetime of the page; __unlock_page_memcg() is available 1618 + * when @page might get freed inside the locked section. 1615 1619 */ 1616 - void lock_page_memcg(struct page *page) 1620 + struct mem_cgroup *lock_page_memcg(struct page *page) 1617 1621 { 1618 1622 struct mem_cgroup *memcg; 1619 1623 unsigned long flags; ··· 1626 1622 * The RCU lock is held throughout the transaction. The fast 1627 1623 * path can get away without acquiring the memcg->move_lock 1628 1624 * because page moving starts with an RCU grace period. 1629 - */ 1625 + * 1626 + * The RCU lock also protects the memcg from being freed when 1627 + * the page state that is going to change is the only thing 1628 + * preventing the page itself from being freed. E.g. writeback 1629 + * doesn't hold a page reference and relies on PG_writeback to 1630 + * keep off truncation, migration and so forth. 
1631 + */ 1630 1632 rcu_read_lock(); 1631 1633 1632 1634 if (mem_cgroup_disabled()) 1633 - return; 1635 + return NULL; 1634 1636 again: 1635 1637 memcg = page->mem_cgroup; 1636 1638 if (unlikely(!memcg)) 1637 - return; 1639 + return NULL; 1638 1640 1639 1641 if (atomic_read(&memcg->moving_account) <= 0) 1640 - return; 1642 + return memcg; 1641 1643 1642 1644 spin_lock_irqsave(&memcg->move_lock, flags); 1643 1645 if (memcg != page->mem_cgroup) { ··· 1659 1649 memcg->move_lock_task = current; 1660 1650 memcg->move_lock_flags = flags; 1661 1651 1662 - return; 1652 + return memcg; 1663 1653 } 1664 1654 EXPORT_SYMBOL(lock_page_memcg); 1665 1655 1666 1656 /** 1667 - * unlock_page_memcg - unlock a page->mem_cgroup binding 1668 - * @page: the page 1657 + * __unlock_page_memcg - unlock and unpin a memcg 1658 + * @memcg: the memcg 1659 + * 1660 + * Unlock and unpin a memcg returned by lock_page_memcg(). 1669 1661 */ 1670 - void unlock_page_memcg(struct page *page) 1662 + void __unlock_page_memcg(struct mem_cgroup *memcg) 1671 1663 { 1672 - struct mem_cgroup *memcg = page->mem_cgroup; 1673 - 1674 1664 if (memcg && memcg->move_lock_task == current) { 1675 1665 unsigned long flags = memcg->move_lock_flags; 1676 1666 ··· 1681 1671 } 1682 1672 1683 1673 rcu_read_unlock(); 1674 + } 1675 + 1676 + /** 1677 + * unlock_page_memcg - unlock a page->mem_cgroup binding 1678 + * @page: the page 1679 + */ 1680 + void unlock_page_memcg(struct page *page) 1681 + { 1682 + __unlock_page_memcg(page->mem_cgroup); 1684 1683 } 1685 1684 EXPORT_SYMBOL(unlock_page_memcg); 1686 1685
+20 -16
mm/memory.c
··· 68 68 #include <linux/debugfs.h> 69 69 #include <linux/userfaultfd_k.h> 70 70 #include <linux/dax.h> 71 + #include <linux/oom.h> 71 72 72 73 #include <asm/io.h> 73 74 #include <asm/mmu_context.h> ··· 2894 2893 struct vm_area_struct *vma = vmf->vma; 2895 2894 struct mem_cgroup *memcg; 2896 2895 struct page *page; 2896 + int ret = 0; 2897 2897 pte_t entry; 2898 2898 2899 2899 /* File mapping without ->vm_ops ? */ ··· 2926 2924 vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, 2927 2925 vmf->address, &vmf->ptl); 2928 2926 if (!pte_none(*vmf->pte)) 2927 + goto unlock; 2928 + ret = check_stable_address_space(vma->vm_mm); 2929 + if (ret) 2929 2930 goto unlock; 2930 2931 /* Deliver the page fault to userland, check inside PT lock */ 2931 2932 if (userfaultfd_missing(vma)) { ··· 2964 2959 if (!pte_none(*vmf->pte)) 2965 2960 goto release; 2966 2961 2962 + ret = check_stable_address_space(vma->vm_mm); 2963 + if (ret) 2964 + goto release; 2965 + 2967 2966 /* Deliver the page fault to userland, check inside PT lock */ 2968 2967 if (userfaultfd_missing(vma)) { 2969 2968 pte_unmap_unlock(vmf->pte, vmf->ptl); ··· 2987 2978 update_mmu_cache(vma, vmf->address, vmf->pte); 2988 2979 unlock: 2989 2980 pte_unmap_unlock(vmf->pte, vmf->ptl); 2990 - return 0; 2981 + return ret; 2991 2982 release: 2992 2983 mem_cgroup_cancel_charge(page, memcg, false); 2993 2984 put_page(page); ··· 3261 3252 int finish_fault(struct vm_fault *vmf) 3262 3253 { 3263 3254 struct page *page; 3264 - int ret; 3255 + int ret = 0; 3265 3256 3266 3257 /* Did we COW the page? 
*/ 3267 3258 if ((vmf->flags & FAULT_FLAG_WRITE) && ··· 3269 3260 page = vmf->cow_page; 3270 3261 else 3271 3262 page = vmf->page; 3272 - ret = alloc_set_pte(vmf, vmf->memcg, page); 3263 + 3264 + /* 3265 + * check even for read faults because we might have lost our CoWed 3266 + * page 3267 + */ 3268 + if (!(vmf->vma->vm_flags & VM_SHARED)) 3269 + ret = check_stable_address_space(vmf->vma->vm_mm); 3270 + if (!ret) 3271 + ret = alloc_set_pte(vmf, vmf->memcg, page); 3273 3272 if (vmf->pte) 3274 3273 pte_unmap_unlock(vmf->pte, vmf->ptl); 3275 3274 return ret; ··· 3916 3899 if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM)) 3917 3900 mem_cgroup_oom_synchronize(false); 3918 3901 } 3919 - 3920 - /* 3921 - * This mm has been already reaped by the oom reaper and so the 3922 - * refault cannot be trusted in general. Anonymous refaults would 3923 - * lose data and give a zero page instead e.g. This is especially 3924 - * problem for use_mm() because regular tasks will just die and 3925 - * the corrupted data will not be visible anywhere while kthread 3926 - * will outlive the oom victim and potentially propagate the date 3927 - * further. 3928 - */ 3929 - if (unlikely((current->flags & PF_KTHREAD) && !(ret & VM_FAULT_ERROR) 3930 - && test_bit(MMF_UNSTABLE, &vma->vm_mm->flags))) 3931 - ret = VM_FAULT_SIGBUS; 3932 3902 3933 3903 return ret; 3934 3904 }
-5
mm/mempolicy.c
··· 861 861 *policy |= (pol->flags & MPOL_MODE_FLAGS); 862 862 } 863 863 864 - if (vma) { 865 - up_read(&current->mm->mmap_sem); 866 - vma = NULL; 867 - } 868 - 869 864 err = 0; 870 865 if (nmask) { 871 866 if (mpol_store_user_nodemask(pol)) {
+3 -8
mm/migrate.c
··· 41 41 #include <linux/page_idle.h> 42 42 #include <linux/page_owner.h> 43 43 #include <linux/sched/mm.h> 44 + #include <linux/ptrace.h> 44 45 45 46 #include <asm/tlbflush.h> 46 47 ··· 1653 1652 const int __user *, nodes, 1654 1653 int __user *, status, int, flags) 1655 1654 { 1656 - const struct cred *cred = current_cred(), *tcred; 1657 1655 struct task_struct *task; 1658 1656 struct mm_struct *mm; 1659 1657 int err; ··· 1676 1676 1677 1677 /* 1678 1678 * Check if this process has the right to modify the specified 1679 - * process. The right exists if the process has administrative 1680 - * capabilities, superuser privileges or the same 1681 - * userid as the target process. 1679 + * process. Use the regular "ptrace_may_access()" checks. 1682 1680 */ 1683 - tcred = __task_cred(task); 1684 - if (!uid_eq(cred->euid, tcred->suid) && !uid_eq(cred->euid, tcred->uid) && 1685 - !uid_eq(cred->uid, tcred->suid) && !uid_eq(cred->uid, tcred->uid) && 1686 - !capable(CAP_SYS_NICE)) { 1681 + if (!ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS)) { 1687 1682 rcu_read_unlock(); 1688 1683 err = -EPERM; 1689 1684 goto out;
-16
mm/nobootmem.c
··· 146 146 NULL) 147 147 count += __free_memory_core(start, end); 148 148 149 - #ifdef CONFIG_ARCH_DISCARD_MEMBLOCK 150 - { 151 - phys_addr_t size; 152 - 153 - /* Free memblock.reserved array if it was allocated */ 154 - size = get_allocated_memblock_reserved_regions_info(&start); 155 - if (size) 156 - count += __free_memory_core(start, start + size); 157 - 158 - /* Free memblock.memory array if it was allocated */ 159 - size = get_allocated_memblock_memory_regions_info(&start); 160 - if (size) 161 - count += __free_memory_core(start, start + size); 162 - } 163 - #endif 164 - 165 149 return count; 166 150 } 167 151
+12 -3
mm/page-writeback.c
··· 2724 2724 int test_clear_page_writeback(struct page *page) 2725 2725 { 2726 2726 struct address_space *mapping = page_mapping(page); 2727 + struct mem_cgroup *memcg; 2728 + struct lruvec *lruvec; 2727 2729 int ret; 2728 2730 2729 - lock_page_memcg(page); 2731 + memcg = lock_page_memcg(page); 2732 + lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page)); 2730 2733 if (mapping && mapping_use_writeback_tags(mapping)) { 2731 2734 struct inode *inode = mapping->host; 2732 2735 struct backing_dev_info *bdi = inode_to_bdi(inode); ··· 2757 2754 } else { 2758 2755 ret = TestClearPageWriteback(page); 2759 2756 } 2757 + /* 2758 + * NOTE: Page might be free now! Writeback doesn't hold a page 2759 + * reference on its own, it relies on truncation to wait for 2760 + * the clearing of PG_writeback. The below can only access 2761 + * page state that is static across allocation cycles. 2762 + */ 2760 2763 if (ret) { 2761 - dec_lruvec_page_state(page, NR_WRITEBACK); 2764 + dec_lruvec_state(lruvec, NR_WRITEBACK); 2762 2765 dec_zone_page_state(page, NR_ZONE_WRITE_PENDING); 2763 2766 inc_node_page_state(page, NR_WRITTEN); 2764 2767 } 2765 - unlock_page_memcg(page); 2768 + __unlock_page_memcg(memcg); 2766 2769 return ret; 2767 2770 } 2768 2771
+4
mm/page_alloc.c
··· 1584 1584 /* Reinit limits that are based on free pages after the kernel is up */ 1585 1585 files_maxfiles_init(); 1586 1586 #endif 1587 + #ifdef CONFIG_ARCH_DISCARD_MEMBLOCK 1588 + /* Discard memblock private memory */ 1589 + memblock_discard(); 1590 + #endif 1587 1591 1588 1592 for_each_populated_zone(zone) 1589 1593 set_zone_contiguous(zone);
+2 -1
mm/slub.c
··· 5642 5642 * A cache is never shut down before deactivation is 5643 5643 * complete, so no need to worry about synchronization. 5644 5644 */ 5645 - return; 5645 + goto out; 5646 5646 5647 5647 #ifdef CONFIG_MEMCG 5648 5648 kset_unregister(s->memcg_kset); 5649 5649 #endif 5650 5650 kobject_uevent(&s->kobj, KOBJ_REMOVE); 5651 5651 kobject_del(&s->kobj); 5652 + out: 5652 5653 kobject_put(&s->kobj); 5653 5654 } 5654 5655
+8 -5
mm/vmalloc.c
··· 1671 1671 struct page **pages; 1672 1672 unsigned int nr_pages, array_size, i; 1673 1673 const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO; 1674 - const gfp_t alloc_mask = gfp_mask | __GFP_HIGHMEM | __GFP_NOWARN; 1674 + const gfp_t alloc_mask = gfp_mask | __GFP_NOWARN; 1675 + const gfp_t highmem_mask = (gfp_mask & (GFP_DMA | GFP_DMA32)) ? 1676 + 0 : 1677 + __GFP_HIGHMEM; 1675 1678 1676 1679 nr_pages = get_vm_area_size(area) >> PAGE_SHIFT; 1677 1680 array_size = (nr_pages * sizeof(struct page *)); ··· 1682 1679 area->nr_pages = nr_pages; 1683 1680 /* Please note that the recursion is strictly bounded. */ 1684 1681 if (array_size > PAGE_SIZE) { 1685 - pages = __vmalloc_node(array_size, 1, nested_gfp|__GFP_HIGHMEM, 1682 + pages = __vmalloc_node(array_size, 1, nested_gfp|highmem_mask, 1686 1683 PAGE_KERNEL, node, area->caller); 1687 1684 } else { 1688 1685 pages = kmalloc_node(array_size, nested_gfp, node); ··· 1703 1700 } 1704 1701 1705 1702 if (node == NUMA_NO_NODE) 1706 - page = alloc_page(alloc_mask); 1703 + page = alloc_page(alloc_mask|highmem_mask); 1707 1704 else 1708 - page = alloc_pages_node(node, alloc_mask, 0); 1705 + page = alloc_pages_node(node, alloc_mask|highmem_mask, 0); 1709 1706 1710 1707 if (unlikely(!page)) { 1711 1708 /* Successfully allocated i pages, free them in __vunmap() */ ··· 1713 1710 goto fail; 1714 1711 } 1715 1712 area->pages[i] = page; 1716 - if (gfpflags_allow_blocking(gfp_mask)) 1713 + if (gfpflags_allow_blocking(gfp_mask|highmem_mask)) 1717 1714 cond_resched(); 1718 1715 } 1719 1716
+9 -3
net/core/datagram.c
··· 169 169 int *peeked, int *off, int *err, 170 170 struct sk_buff **last) 171 171 { 172 + bool peek_at_off = false; 172 173 struct sk_buff *skb; 173 - int _off = *off; 174 + int _off = 0; 175 + 176 + if (unlikely(flags & MSG_PEEK && *off >= 0)) { 177 + peek_at_off = true; 178 + _off = *off; 179 + } 174 180 175 181 *last = queue->prev; 176 182 skb_queue_walk(queue, skb) { 177 183 if (flags & MSG_PEEK) { 178 - if (_off >= skb->len && (skb->len || _off || 179 - skb->peeked)) { 184 + if (peek_at_off && _off >= skb->len && 185 + (_off || skb->peeked)) { 180 186 _off -= skb->len; 181 187 continue; 182 188 }
+12 -2
net/dccp/proto.c
··· 24 24 #include <net/checksum.h> 25 25 26 26 #include <net/inet_sock.h> 27 + #include <net/inet_common.h> 27 28 #include <net/sock.h> 28 29 #include <net/xfrm.h> 29 30 ··· 171 170 172 171 EXPORT_SYMBOL_GPL(dccp_packet_name); 173 172 173 + static void dccp_sk_destruct(struct sock *sk) 174 + { 175 + struct dccp_sock *dp = dccp_sk(sk); 176 + 177 + ccid_hc_tx_delete(dp->dccps_hc_tx_ccid, sk); 178 + dp->dccps_hc_tx_ccid = NULL; 179 + inet_sock_destruct(sk); 180 + } 181 + 174 182 int dccp_init_sock(struct sock *sk, const __u8 ctl_sock_initialized) 175 183 { 176 184 struct dccp_sock *dp = dccp_sk(sk); ··· 189 179 icsk->icsk_syn_retries = sysctl_dccp_request_retries; 190 180 sk->sk_state = DCCP_CLOSED; 191 181 sk->sk_write_space = dccp_write_space; 182 + sk->sk_destruct = dccp_sk_destruct; 192 183 icsk->icsk_sync_mss = dccp_sync_mss; 193 184 dp->dccps_mss_cache = 536; 194 185 dp->dccps_rate_last = jiffies; ··· 230 219 dp->dccps_hc_rx_ackvec = NULL; 231 220 } 232 221 ccid_hc_rx_delete(dp->dccps_hc_rx_ccid, sk); 233 - ccid_hc_tx_delete(dp->dccps_hc_tx_ccid, sk); 234 - dp->dccps_hc_rx_ccid = dp->dccps_hc_tx_ccid = NULL; 222 + dp->dccps_hc_rx_ccid = NULL; 235 223 236 224 /* clean up feature negotiation state */ 237 225 dccp_feat_list_purge(&dp->dccps_featneg);
+9 -1
net/ipv4/igmp.c
··· 1007 1007 { 1008 1008 /* This basically follows the spec line by line -- see RFC1112 */ 1009 1009 struct igmphdr *ih; 1010 - struct in_device *in_dev = __in_dev_get_rcu(skb->dev); 1010 + struct net_device *dev = skb->dev; 1011 + struct in_device *in_dev; 1011 1012 int len = skb->len; 1012 1013 bool dropped = true; 1013 1014 1015 + if (netif_is_l3_master(dev)) { 1016 + dev = dev_get_by_index_rcu(dev_net(dev), IPCB(skb)->iif); 1017 + if (!dev) 1018 + goto drop; 1019 + } 1020 + 1021 + in_dev = __in_dev_get_rcu(dev); 1014 1022 if (!in_dev) 1015 1023 goto drop; 1016 1024
+10 -3
net/ipv4/route.c
··· 1267 1267 if (mtu) 1268 1268 return mtu; 1269 1269 1270 - mtu = dst->dev->mtu; 1270 + mtu = READ_ONCE(dst->dev->mtu); 1271 1271 1272 1272 if (unlikely(dst_metric_locked(dst, RTAX_MTU))) { 1273 1273 if (rt->rt_uses_gateway && mtu > 576) ··· 2769 2769 if (rtm->rtm_flags & RTM_F_LOOKUP_TABLE) 2770 2770 table_id = rt->rt_table_id; 2771 2771 2772 - if (rtm->rtm_flags & RTM_F_FIB_MATCH) 2772 + if (rtm->rtm_flags & RTM_F_FIB_MATCH) { 2773 + if (!res.fi) { 2774 + err = fib_props[res.type].error; 2775 + if (!err) 2776 + err = -EHOSTUNREACH; 2777 + goto errout_free; 2778 + } 2773 2779 err = fib_dump_info(skb, NETLINK_CB(in_skb).portid, 2774 2780 nlh->nlmsg_seq, RTM_NEWROUTE, table_id, 2775 2781 rt->rt_type, res.prefix, res.prefixlen, 2776 2782 fl4.flowi4_tos, res.fi, 0); 2777 - else 2783 + } else { 2778 2784 err = rt_fill_info(net, dst, src, table_id, &fl4, skb, 2779 2785 NETLINK_CB(in_skb).portid, nlh->nlmsg_seq); 2786 + } 2780 2787 if (err < 0) 2781 2788 goto errout_free; 2782 2789
+1 -2
net/ipv4/tcp_input.c
··· 3009 3009 /* delta_us may not be positive if the socket is locked 3010 3010 * when the retrans timer fires and is rescheduled. 3011 3011 */ 3012 - if (delta_us > 0) 3013 - rto = usecs_to_jiffies(delta_us); 3012 + rto = usecs_to_jiffies(max_t(int, delta_us, 1)); 3014 3013 } 3015 3014 inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS, rto, 3016 3015 TCP_RTO_MAX);
+2 -1
net/ipv4/udp.c
··· 1585 1585 return ip_recv_error(sk, msg, len, addr_len); 1586 1586 1587 1587 try_again: 1588 - peeking = off = sk_peek_offset(sk, flags); 1588 + peeking = flags & MSG_PEEK; 1589 + off = sk_peek_offset(sk, flags); 1589 1590 skb = __skb_recv_udp(sk, flags, noblock, &peeked, &off, &err); 1590 1591 if (!skb) 1591 1592 return err;
+15 -13
net/ipv6/ip6_fib.c
··· 1013 1013 nsiblings = iter->rt6i_nsiblings; 1014 1014 iter->rt6i_node = NULL; 1015 1015 fib6_purge_rt(iter, fn, info->nl_net); 1016 + if (fn->rr_ptr == iter) 1017 + fn->rr_ptr = NULL; 1016 1018 rt6_release(iter); 1017 1019 1018 1020 if (nsiblings) { ··· 1028 1026 *ins = iter->dst.rt6_next; 1029 1027 iter->rt6i_node = NULL; 1030 1028 fib6_purge_rt(iter, fn, info->nl_net); 1029 + if (fn->rr_ptr == iter) 1030 + fn->rr_ptr = NULL; 1031 1031 rt6_release(iter); 1032 1032 nsiblings--; 1033 1033 } else { ··· 1118 1114 /* Create subtree root node */ 1119 1115 sfn = node_alloc(); 1120 1116 if (!sfn) 1121 - goto st_failure; 1117 + goto failure; 1122 1118 1123 1119 sfn->leaf = info->nl_net->ipv6.ip6_null_entry; 1124 1120 atomic_inc(&info->nl_net->ipv6.ip6_null_entry->rt6i_ref); ··· 1135 1131 1136 1132 if (IS_ERR(sn)) { 1137 1133 /* If it is failed, discard just allocated 1138 - root, and then (in st_failure) stale node 1134 + root, and then (in failure) stale node 1139 1135 in main tree. 1140 1136 */ 1141 1137 node_free(sfn); 1142 1138 err = PTR_ERR(sn); 1143 - goto st_failure; 1139 + goto failure; 1144 1140 } 1145 1141 1146 1142 /* Now link new subtree to main tree */ ··· 1155 1151 1156 1152 if (IS_ERR(sn)) { 1157 1153 err = PTR_ERR(sn); 1158 - goto st_failure; 1154 + goto failure; 1159 1155 } 1160 1156 } 1161 1157 ··· 1196 1192 atomic_inc(&pn->leaf->rt6i_ref); 1197 1193 } 1198 1194 #endif 1199 - /* Always release dst as dst->__refcnt is guaranteed 1200 - * to be taken before entering this function 1201 - */ 1202 - dst_release_immediate(&rt->dst); 1195 + goto failure; 1203 1196 } 1204 1197 return err; 1205 1198 1206 - #ifdef CONFIG_IPV6_SUBTREES 1207 - /* Subtree creation failed, probably main tree node 1208 - is orphan. If it is, shoot it. 1199 + failure: 1200 + /* fn->leaf could be NULL if fn is an intermediate node and we 1201 + * failed to add the new route to it in both subtree creation 1202 + * failure and fib6_add_rt2node() failure case. 
1203 + * In both cases, fib6_repair_tree() should be called to fix 1204 + * fn->leaf. 1209 1205 */ 1210 - st_failure: 1211 1206 if (fn && !(fn->fn_flags & (RTN_RTINFO|RTN_ROOT))) 1212 1207 fib6_repair_tree(info->nl_net, fn); 1213 1208 /* Always release dst as dst->__refcnt is guaranteed ··· 1214 1211 */ 1215 1212 dst_release_immediate(&rt->dst); 1216 1213 return err; 1217 - #endif 1218 1214 } 1219 1215 1220 1216 /*
+2 -1
net/ipv6/udp.c
··· 366 366 return ipv6_recv_rxpmtu(sk, msg, len, addr_len); 367 367 368 368 try_again: 369 - peeking = off = sk_peek_offset(sk, flags); 369 + peeking = flags & MSG_PEEK; 370 + off = sk_peek_offset(sk, flags); 370 371 skb = __skb_recv_udp(sk, flags, noblock, &peeked, &off, &err); 371 372 if (!skb) 372 373 return err;
+1 -1
net/irda/af_irda.c
··· 2213 2213 { 2214 2214 struct sock *sk = sock->sk; 2215 2215 struct irda_sock *self = irda_sk(sk); 2216 - struct irda_device_list list; 2216 + struct irda_device_list list = { 0 }; 2217 2217 struct irda_device_info *discoveries; 2218 2218 struct irda_ias_set * ias_opt; /* IAS get/query params */ 2219 2219 struct ias_object * ias_obj; /* Object in IAS */
+1
net/openvswitch/actions.c
··· 1337 1337 goto out; 1338 1338 } 1339 1339 1340 + OVS_CB(skb)->acts_origlen = acts->orig_len; 1340 1341 err = do_execute_actions(dp, skb, key, 1341 1342 acts->actions, acts->actions_len); 1342 1343
+4 -3
net/openvswitch/datapath.c
··· 367 367 } 368 368 369 369 static size_t upcall_msg_size(const struct dp_upcall_info *upcall_info, 370 - unsigned int hdrlen) 370 + unsigned int hdrlen, int actions_attrlen) 371 371 { 372 372 size_t size = NLMSG_ALIGN(sizeof(struct ovs_header)) 373 373 + nla_total_size(hdrlen) /* OVS_PACKET_ATTR_PACKET */ ··· 384 384 385 385 /* OVS_PACKET_ATTR_ACTIONS */ 386 386 if (upcall_info->actions_len) 387 - size += nla_total_size(upcall_info->actions_len); 387 + size += nla_total_size(actions_attrlen); 388 388 389 389 /* OVS_PACKET_ATTR_MRU */ 390 390 if (upcall_info->mru) ··· 451 451 else 452 452 hlen = skb->len; 453 453 454 - len = upcall_msg_size(upcall_info, hlen - cutlen); 454 + len = upcall_msg_size(upcall_info, hlen - cutlen, 455 + OVS_CB(skb)->acts_origlen); 455 456 user_skb = genlmsg_new(len, GFP_ATOMIC); 456 457 if (!user_skb) { 457 458 err = -ENOMEM;
+2
net/openvswitch/datapath.h
··· 99 99 * when a packet is received by OVS. 100 100 * @mru: The maximum received fragement size; 0 if the packet is not 101 101 * fragmented. 102 + * @acts_origlen: The netlink size of the flow actions applied to this skb. 102 103 * @cutlen: The number of bytes from the packet end to be removed. 103 104 */ 104 105 struct ovs_skb_cb { 105 106 struct vport *input_vport; 106 107 u16 mru; 108 + u16 acts_origlen; 107 109 u32 cutlen; 108 110 }; 109 111 #define OVS_CB(skb) ((struct ovs_skb_cb *)(skb)->cb)
+1
net/rxrpc/call_accept.c
··· 223 223 tail = b->call_backlog_tail; 224 224 while (CIRC_CNT(head, tail, size) > 0) { 225 225 struct rxrpc_call *call = b->call_backlog[tail]; 226 + call->socket = rx; 226 227 if (rx->discard_new_call) { 227 228 _debug("discard %lx", call->user_call_ID); 228 229 rx->discard_new_call(call, call->user_call_ID);
+2
net/sched/act_ipt.c
··· 41 41 { 42 42 struct xt_tgchk_param par; 43 43 struct xt_target *target; 44 + struct ipt_entry e = {}; 44 45 int ret = 0; 45 46 46 47 target = xt_request_find_target(AF_INET, t->u.user.name, ··· 53 52 memset(&par, 0, sizeof(par)); 54 53 par.net = net; 55 54 par.table = table; 55 + par.entryinfo = &e; 56 56 par.target = target; 57 57 par.targinfo = t->data; 58 58 par.hook_mask = hook;
+1 -1
net/sched/cls_api.c
··· 190 190 { 191 191 struct tcf_proto *tp; 192 192 193 - if (*chain->p_filter_chain) 193 + if (chain->p_filter_chain) 194 194 RCU_INIT_POINTER(*chain->p_filter_chain, NULL); 195 195 while ((tp = rtnl_dereference(chain->filter_chain)) != NULL) { 196 196 RCU_INIT_POINTER(chain->filter_chain, tp->next);
+2
net/sctp/ipv6.c
··· 512 512 { 513 513 addr->sa.sa_family = AF_INET6; 514 514 addr->v6.sin6_port = port; 515 + addr->v6.sin6_flowinfo = 0; 515 516 addr->v6.sin6_addr = *saddr; 517 + addr->v6.sin6_scope_id = 0; 516 518 } 517 519 518 520 /* Compare addresses exactly.
+1 -4
net/unix/af_unix.c
··· 2283 2283 */ 2284 2284 mutex_lock(&u->iolock); 2285 2285 2286 - if (flags & MSG_PEEK) 2287 - skip = sk_peek_offset(sk, flags); 2288 - else 2289 - skip = 0; 2286 + skip = max(sk_peek_offset(sk, flags), 0); 2290 2287 2291 2288 do { 2292 2289 int chunk;
+2 -2
sound/core/seq/Kconfig
··· 47 47 timer. 48 48 49 49 config SND_SEQ_MIDI_EVENT 50 - def_tristate SND_RAWMIDI 50 + tristate 51 51 52 52 config SND_SEQ_MIDI 53 - tristate 53 + def_tristate SND_RAWMIDI 54 54 select SND_SEQ_MIDI_EVENT 55 55 56 56 config SND_SEQ_MIDI_EMUL
+4 -9
sound/core/seq/seq_clientmgr.c
··· 1502 1502 static int snd_seq_ioctl_create_queue(struct snd_seq_client *client, void *arg) 1503 1503 { 1504 1504 struct snd_seq_queue_info *info = arg; 1505 - int result; 1506 1505 struct snd_seq_queue *q; 1507 1506 1508 - result = snd_seq_queue_alloc(client->number, info->locked, info->flags); 1509 - if (result < 0) 1510 - return result; 1511 - 1512 - q = queueptr(result); 1513 - if (q == NULL) 1514 - return -EINVAL; 1507 + q = snd_seq_queue_alloc(client->number, info->locked, info->flags); 1508 + if (IS_ERR(q)) 1509 + return PTR_ERR(q); 1515 1510 1516 1511 info->queue = q->queue; 1517 1512 info->locked = q->locked; ··· 1516 1521 if (!info->name[0]) 1517 1522 snprintf(info->name, sizeof(info->name), "Queue-%d", q->queue); 1518 1523 strlcpy(q->name, info->name, sizeof(q->name)); 1519 - queuefree(q); 1524 + snd_use_lock_free(&q->use_lock); 1520 1525 1521 1526 return 0; 1522 1527 }
+9 -5
sound/core/seq/seq_queue.c
··· 184 184 static void queue_use(struct snd_seq_queue *queue, int client, int use); 185 185 186 186 /* allocate a new queue - 187 - * return queue index value or negative value for error 187 + * return pointer to new queue or ERR_PTR(-errno) for error 188 + * The new queue's use_lock is set to 1. It is the caller's responsibility to 189 + * call snd_use_lock_free(&q->use_lock). 188 190 */ 189 - int snd_seq_queue_alloc(int client, int locked, unsigned int info_flags) 191 + struct snd_seq_queue *snd_seq_queue_alloc(int client, int locked, unsigned int info_flags) 190 192 { 191 193 struct snd_seq_queue *q; 192 194 193 195 q = queue_new(client, locked); 194 196 if (q == NULL) 195 - return -ENOMEM; 197 + return ERR_PTR(-ENOMEM); 196 198 q->info_flags = info_flags; 197 199 queue_use(q, client, 1); 200 + snd_use_lock_use(&q->use_lock); 198 201 if (queue_list_add(q) < 0) { 202 + snd_use_lock_free(&q->use_lock); 199 203 queue_delete(q); 200 - return -ENOMEM; 204 + return ERR_PTR(-ENOMEM); 201 205 } 202 - return q->queue; 206 + return q; 203 207 } 204 208 205 209 /* delete a queue - queue must be owned by the client */
+1 -1
sound/core/seq/seq_queue.h
··· 71 71 72 72 73 73 /* create new queue (constructor) */ 74 - int snd_seq_queue_alloc(int client, int locked, unsigned int flags); 74 + struct snd_seq_queue *snd_seq_queue_alloc(int client, int locked, unsigned int flags); 75 75 76 76 /* delete queue (destructor) */ 77 77 int snd_seq_queue_delete(int client, int queueid);
+11 -3
sound/pci/emu10k1/emufx.c
··· 698 698 { 699 699 struct snd_emu10k1_fx8010_control_old_gpr __user *octl; 700 700 701 - if (emu->support_tlv) 702 - return copy_from_user(gctl, &_gctl[idx], sizeof(*gctl)); 701 + if (emu->support_tlv) { 702 + if (in_kernel) 703 + memcpy(gctl, (void *)&_gctl[idx], sizeof(*gctl)); 704 + else if (copy_from_user(gctl, &_gctl[idx], sizeof(*gctl))) 705 + return -EFAULT; 706 + return 0; 707 + } 708 + 703 709 octl = (struct snd_emu10k1_fx8010_control_old_gpr __user *)_gctl; 704 - if (copy_from_user(gctl, &octl[idx], sizeof(*octl))) 710 + if (in_kernel) 711 + memcpy(gctl, (void *)&octl[idx], sizeof(*octl)); 712 + else if (copy_from_user(gctl, &octl[idx], sizeof(*octl))) 705 713 return -EFAULT; 706 714 gctl->tlv = NULL; 707 715 return 0;
-1
sound/pci/hda/patch_realtek.c
··· 6647 6647 SND_HDA_PIN_QUIRK(0x10ec0299, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE, 6648 6648 ALC225_STANDARD_PINS, 6649 6649 {0x12, 0xb7a60130}, 6650 - {0x13, 0xb8a61140}, 6651 6650 {0x17, 0x90170110}), 6652 6651 {} 6653 6652 };
+2
sound/usb/mixer.c
··· 542 542 543 543 if (size < sizeof(scale)) 544 544 return -ENOMEM; 545 + if (cval->min_mute) 546 + scale[0] = SNDRV_CTL_TLVT_DB_MINMAX_MUTE; 545 547 scale[2] = cval->dBmin; 546 548 scale[3] = cval->dBmax; 547 549 if (copy_to_user(_tlv, scale, sizeof(scale)))
+1
sound/usb/mixer.h
··· 64 64 int cached; 65 65 int cache_val[MAX_CHANNELS]; 66 66 u8 initialized; 67 + u8 min_mute; 67 68 void *private_data; 68 69 }; 69 70
+6
sound/usb/mixer_quirks.c
··· 1878 1878 if (unitid == 7 && cval->control == UAC_FU_VOLUME) 1879 1879 snd_dragonfly_quirk_db_scale(mixer, cval, kctl); 1880 1880 break; 1881 + /* lowest playback value is muted on C-Media devices */ 1882 + case USB_ID(0x0d8c, 0x000c): 1883 + case USB_ID(0x0d8c, 0x0014): 1884 + if (strstr(kctl->id.name, "Playback")) 1885 + cval->min_mute = 1; 1886 + break; 1881 1887 } 1882 1888 } 1883 1889
+5
sound/usb/quirks.c
··· 1142 1142 case USB_ID(0x0556, 0x0014): /* Phoenix Audio TMX320VC */ 1143 1143 case USB_ID(0x05A3, 0x9420): /* ELP HD USB Camera */ 1144 1144 case USB_ID(0x074D, 0x3553): /* Outlaw RR2150 (Micronas UAC3553B) */ 1145 + case USB_ID(0x1395, 0x740a): /* Sennheiser DECT */ 1145 1146 case USB_ID(0x1901, 0x0191): /* GE B850V3 CP2114 audio interface */ 1146 1147 case USB_ID(0x1de7, 0x0013): /* Phoenix Audio MT202exe */ 1147 1148 case USB_ID(0x1de7, 0x0014): /* Phoenix Audio TMX320 */ ··· 1374 1373 break; 1375 1374 } 1376 1375 } 1376 + break; 1377 + case USB_ID(0x16d0, 0x0a23): 1378 + if (fp->altsetting == 2) 1379 + return SNDRV_PCM_FMTBIT_DSD_U32_BE; 1377 1380 break; 1378 1381 1379 1382 default:
+2 -1
tools/lib/bpf/libbpf.c
··· 879 879 size_t j; 880 880 int err = *pfd; 881 881 882 - pr_warning("failed to create map: %s\n", 882 + pr_warning("failed to create map (name: '%s'): %s\n", 883 + obj->maps[i].name, 883 884 strerror(errno)); 884 885 for (j = 0; j < i; j++) 885 886 zclose(obj->maps[j].fd);
+2 -2
tools/testing/selftests/kmod/kmod.sh
··· 473 473 echo " all Runs all tests (default)" 474 474 echo " -t Run test ID the number amount of times is recommended" 475 475 echo " -w Watch test ID run until it runs into an error" 476 - echo " -c Run test ID once" 477 - echo " -s Run test ID x test-count number of times" 476 + echo " -s Run test ID once" 477 + echo " -c Run test ID x test-count number of times" 478 478 echo " -l List all test ID list" 479 479 echo " -h|--help Help" 480 480 echo