Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 4.13-rc7 into staging-next

We want the staging and iio fixes in here to handle the merge issues.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+2976 -1441
+2 -2
Documentation/networking/switchdev.txt
··· 228 228 bridge link set dev DEV learning on self 229 229 bridge link set dev DEV learning_sync on self 230 230 231 - Learning_sync attribute enables syncing of the learned/forgotton FDB entry to 231 + Learning_sync attribute enables syncing of the learned/forgotten FDB entry to 232 232 the bridge's FDB. It's possible, but not optimal, to enable learning on the 233 233 device port and on the bridge port, and disable learning_sync. 234 234 ··· 245 245 port device supports ageing, when the FDB entry expires, it will notify the 246 246 driver which in turn will notify the bridge with SWITCHDEV_FDB_DEL. If the 247 247 device does not support ageing, the driver can simulate ageing using a 248 - garbage collection timer to monitor FBD entries. Expired entries will be 248 + garbage collection timer to monitor FDB entries. Expired entries will be 249 249 notified to the bridge using SWITCHDEV_FDB_DEL. See rocker driver for 250 250 example of driver running ageing timer. 251 251
+11 -8
Documentation/printk-formats.txt
··· 58 58 %ps versatile_init 59 59 %pB prev_fn_of_versatile_init+0x88/0x88 60 60 61 - For printing symbols and function pointers. The ``S`` and ``s`` specifiers 62 - result in the symbol name with (``S``) or without (``s``) offsets. Where 63 - this is used on a kernel without KALLSYMS - the symbol address is 64 - printed instead. 61 + The ``F`` and ``f`` specifiers are for printing function pointers, 62 + for example, f->func, &gettimeofday. They have the same result as 63 + ``S`` and ``s`` specifiers. But they do an extra conversion on 64 + ia64, ppc64 and parisc64 architectures where the function pointers 65 + are actually function descriptors. 66 + 67 + The ``S`` and ``s`` specifiers can be used for printing symbols 68 + from direct addresses, for example, __builtin_return_address(0), 69 + (void *)regs->ip. They result in the symbol name with (``S``) or 70 + without (``s``) offsets. If KALLSYMS are disabled then the symbol 71 + address is printed instead. 65 72 66 73 The ``B`` specifier results in the symbol name with offsets and should be 67 74 used when printing stack backtraces. The specifier takes into 68 75 consideration the effect of compiler optimisations which may occur 69 76 when tail-call``s are used and marked with the noreturn GCC attribute. 70 77 71 - On ia64, ppc64 and parisc64 architectures function pointers are 72 - actually function descriptors which must first be resolved. The ``F`` and 73 - ``f`` specifiers perform this resolution and then provide the same 74 - functionality as the ``S`` and ``s`` specifiers. 75 78 76 79 Kernel Pointers 77 80 ===============
+36 -11
Documentation/sysctl/net.txt
··· 35 35 bpf_jit_enable 36 36 -------------- 37 37 38 - This enables Berkeley Packet Filter Just in Time compiler. 39 - Currently supported on x86_64 architecture, bpf_jit provides a framework 40 - to speed packet filtering, the one used by tcpdump/libpcap for example. 38 + This enables the BPF Just in Time (JIT) compiler. BPF is a flexible 39 + and efficient infrastructure allowing to execute bytecode at various 40 + hook points. It is used in a number of Linux kernel subsystems such 41 + as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes, tracepoints) 42 + and security (e.g. seccomp). LLVM has a BPF back end that can compile 43 + restricted C into a sequence of BPF instructions. After program load 44 + through bpf(2) and passing a verifier in the kernel, a JIT will then 45 + translate these BPF proglets into native CPU instructions. There are 46 + two flavors of JITs, the newer eBPF JIT currently supported on: 47 + - x86_64 48 + - arm64 49 + - ppc64 50 + - sparc64 51 + - mips64 52 + - s390x 53 + 54 + And the older cBPF JIT supported on the following archs: 55 + - arm 56 + - mips 57 + - ppc 58 + - sparc 59 + 60 + eBPF JITs are a superset of cBPF JITs, meaning the kernel will 61 + migrate cBPF instructions into eBPF instructions and then JIT 62 + compile them transparently. Older cBPF JITs can only translate 63 + tcpdump filters, seccomp rules, etc, but not mentioned eBPF 64 + programs loaded through bpf(2). 65 + 41 66 Values : 42 67 0 - disable the JIT (default value) 43 68 1 - enable the JIT ··· 71 46 bpf_jit_harden 72 47 -------------- 73 48 74 - This enables hardening for the Berkeley Packet Filter Just in Time compiler. 75 - Supported are eBPF JIT backends. Enabling hardening trades off performance, 76 - but can mitigate JIT spraying. 49 + This enables hardening for the BPF JIT compiler. Supported are eBPF 50 + JIT backends. Enabling hardening trades off performance, but can 51 + mitigate JIT spraying. 77 52 Values : 78 53 0 - disable JIT hardening (default value) 79 54 1 - enable JIT hardening for unprivileged users only ··· 82 57 bpf_jit_kallsyms 83 58 ---------------- 84 59 85 - When Berkeley Packet Filter Just in Time compiler is enabled, then compiled 86 - images are unknown addresses to the kernel, meaning they neither show up in 87 - traces nor in /proc/kallsyms. This enables export of these addresses, which 88 - can be used for debugging/tracing. If bpf_jit_harden is enabled, this feature 89 - is disabled. 60 + When BPF JIT compiler is enabled, then compiled images are unknown 61 + addresses to the kernel, meaning they neither show up in traces nor 62 + in /proc/kallsyms. This enables export of these addresses, which can 63 + be used for debugging/tracing. If bpf_jit_harden is enabled, this 64 + feature is disabled. 90 65 Values : 91 66 0 - disable JIT kallsyms export (default value) 92 67 1 - enable JIT kallsyms export for privileged users only
-1
MAINTAINERS
··· 7110 7110 L: linux-kernel@vger.kernel.org 7111 7111 S: Maintained 7112 7112 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core 7113 - T: git git://git.infradead.org/users/jcooper/linux.git irqchip/core 7114 7113 F: Documentation/devicetree/bindings/interrupt-controller/ 7115 7114 F: drivers/irqchip/ 7116 7115
+8 -7
Makefile
··· 1 1 VERSION = 4 2 2 PATCHLEVEL = 13 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc5 4 + EXTRAVERSION = -rc7 5 5 NAME = Fearless Coyote 6 6 7 7 # *DOCUMENTATION* ··· 396 396 KBUILD_CPPFLAGS := -D__KERNEL__ 397 397 398 398 KBUILD_CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \ 399 - -fno-strict-aliasing -fno-common \ 399 + -fno-strict-aliasing -fno-common -fshort-wchar \ 400 400 -Werror-implicit-function-declaration \ 401 401 -Wno-format-security \ 402 402 -std=gnu89 $(call cc-option,-fno-PIE) ··· 442 442 # =========================================================================== 443 443 # Rules shared between *config targets and build targets 444 444 445 - # Basic helpers built in scripts/ 445 + # Basic helpers built in scripts/basic/ 446 446 PHONY += scripts_basic 447 447 scripts_basic: 448 448 $(Q)$(MAKE) $(build)=scripts/basic ··· 505 505 endif 506 506 endif 507 507 endif 508 - # install and module_install need also be processed one by one 508 + # install and modules_install need also be processed one by one 509 509 ifneq ($(filter install,$(MAKECMDGOALS)),) 510 510 ifneq ($(filter modules_install,$(MAKECMDGOALS)),) 511 511 mixed-targets := 1 ··· 964 964 export KBUILD_VMLINUX_LIBS := $(libs-y1) 965 965 export KBUILD_LDS := arch/$(SRCARCH)/kernel/vmlinux.lds 966 966 export LDFLAGS_vmlinux 967 - # used by scripts/pacmage/Makefile 967 + # used by scripts/package/Makefile 968 968 export KBUILD_ALLDIRS := $(sort $(filter-out arch/%,$(vmlinux-alldirs)) arch Documentation include samples scripts tools) 969 969 970 970 vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN) $(KBUILD_VMLINUX_LIBS) ··· 992 992 ARCH_POSTLINK := $(wildcard $(srctree)/arch/$(SRCARCH)/Makefile.postlink) 993 993 994 994 # Final link of vmlinux with optional arch pass after final link 995 - cmd_link-vmlinux = \ 996 - $(CONFIG_SHELL) $< $(LD) $(LDFLAGS) $(LDFLAGS_vmlinux) ; \ 995 + cmd_link-vmlinux = \ 996 + $(CONFIG_SHELL) $< $(LD) $(LDFLAGS) $(LDFLAGS_vmlinux) ; \ 997 997 $(if $(ARCH_POSTLINK), $(MAKE) -f $(ARCH_POSTLINK) $@, true) 998 998 999 999 vmlinux: scripts/link-vmlinux.sh vmlinux_prereq $(vmlinux-deps) FORCE ··· 1184 1184 kselftest: 1185 1185 $(Q)$(MAKE) -C tools/testing/selftests run_tests 1186 1186 1187 + PHONY += kselftest-clean 1187 1188 kselftest-clean: 1188 1189 $(Q)$(MAKE) -C tools/testing/selftests clean 1189 1190
-1
arch/arc/Kconfig
··· 96 96 97 97 menu "ARC Platform/SoC/Board" 98 98 99 - source "arch/arc/plat-sim/Kconfig" 100 99 source "arch/arc/plat-tb10x/Kconfig" 101 100 source "arch/arc/plat-axs10x/Kconfig" 102 101 #New platform adds here
+1 -1
arch/arc/Makefile
··· 107 107 # w/o this dtb won't embed into kernel binary 108 108 core-y += arch/arc/boot/dts/ 109 109 110 - core-$(CONFIG_ARC_PLAT_SIM) += arch/arc/plat-sim/ 110 + core-y += arch/arc/plat-sim/ 111 111 core-$(CONFIG_ARC_PLAT_TB10X) += arch/arc/plat-tb10x/ 112 112 core-$(CONFIG_ARC_PLAT_AXS10X) += arch/arc/plat-axs10x/ 113 113 core-$(CONFIG_ARC_PLAT_EZNPS) += arch/arc/plat-eznps/
+9 -11
arch/arc/boot/dts/axc001.dtsi
··· 15 15 16 16 / { 17 17 compatible = "snps,arc"; 18 - #address-cells = <1>; 19 - #size-cells = <1>; 18 + #address-cells = <2>; 19 + #size-cells = <2>; 20 20 21 21 cpu_card { 22 22 compatible = "simple-bus"; 23 23 #address-cells = <1>; 24 24 #size-cells = <1>; 25 25 26 - ranges = <0x00000000 0xf0000000 0x10000000>; 26 + ranges = <0x00000000 0x0 0xf0000000 0x10000000>; 27 27 28 28 core_clk: core_clk { 29 29 #clock-cells = <0>; ··· 91 91 mb_intc: dw-apb-ictl@0xe0012000 { 92 92 #interrupt-cells = <1>; 93 93 compatible = "snps,dw-apb-ictl"; 94 - reg = < 0xe0012000 0x200 >; 94 + reg = < 0x0 0xe0012000 0x0 0x200 >; 95 95 interrupt-controller; 96 96 interrupt-parent = <&core_intc>; 97 97 interrupts = < 7 >; 98 98 }; 99 99 100 100 memory { 101 - #address-cells = <1>; 102 - #size-cells = <1>; 103 - ranges = <0x00000000 0x80000000 0x20000000>; 104 101 device_type = "memory"; 105 - reg = <0x80000000 0x1b000000>; /* (512 - 32) MiB */ 102 + /* CONFIG_KERNEL_RAM_BASE_ADDRESS needs to match low mem start */ 103 + reg = <0x0 0x80000000 0x0 0x1b000000>; /* (512 - 32) MiB */ 106 104 }; 107 105 108 106 reserved-memory { 109 - #address-cells = <1>; 110 - #size-cells = <1>; 107 + #address-cells = <2>; 108 + #size-cells = <2>; 111 109 ranges; 112 110 /* 113 111 * We just move frame buffer area to the very end of ··· 116 118 */ 117 119 frame_buffer: frame_buffer@9e000000 { 118 120 compatible = "shared-dma-pool"; 119 - reg = <0x9e000000 0x2000000>; 121 + reg = <0x0 0x9e000000 0x0 0x2000000>; 120 122 no-map; 121 123 }; 122 124 };
+10 -11
arch/arc/boot/dts/axc003.dtsi
··· 14 14 15 15 / { 16 16 compatible = "snps,arc"; 17 - #address-cells = <1>; 18 - #size-cells = <1>; 17 + #address-cells = <2>; 18 + #size-cells = <2>; 19 19 20 20 cpu_card { 21 21 compatible = "simple-bus"; 22 22 #address-cells = <1>; 23 23 #size-cells = <1>; 24 24 25 - ranges = <0x00000000 0xf0000000 0x10000000>; 25 + ranges = <0x00000000 0x0 0xf0000000 0x10000000>; 26 26 27 27 core_clk: core_clk { 28 28 #clock-cells = <0>; ··· 94 94 mb_intc: dw-apb-ictl@0xe0012000 { 95 95 #interrupt-cells = <1>; 96 96 compatible = "snps,dw-apb-ictl"; 97 - reg = < 0xe0012000 0x200 >; 97 + reg = < 0x0 0xe0012000 0x0 0x200 >; 98 98 interrupt-controller; 99 99 interrupt-parent = <&core_intc>; 100 100 interrupts = < 24 >; 101 101 }; 102 102 103 103 memory { 104 - #address-cells = <1>; 105 - #size-cells = <1>; 106 - ranges = <0x00000000 0x80000000 0x40000000>; 107 104 device_type = "memory"; 108 - reg = <0x80000000 0x20000000>; /* 512MiB */ 105 + /* CONFIG_KERNEL_RAM_BASE_ADDRESS needs to match low mem start */ 106 + reg = <0x0 0x80000000 0x0 0x20000000 /* 512 MiB low mem */ 107 + 0x1 0xc0000000 0x0 0x40000000>; /* 1 GiB highmem */ 109 108 }; 110 109 111 110 reserved-memory { 112 - #address-cells = <1>; 113 - #size-cells = <1>; 111 + #address-cells = <2>; 112 + #size-cells = <2>; 114 113 ranges; 115 114 /* 116 115 * Move frame buffer out of IOC aperture (0x8z-0xAz). 117 116 */ 118 117 frame_buffer: frame_buffer@be000000 { 119 118 compatible = "shared-dma-pool"; 120 - reg = <0xbe000000 0x2000000>; 119 + reg = <0x0 0xbe000000 0x0 0x2000000>; 121 120 no-map; 122 121 }; 123 122 };
+10 -11
arch/arc/boot/dts/axc003_idu.dtsi
··· 14 14 15 15 / { 16 16 compatible = "snps,arc"; 17 - #address-cells = <1>; 18 - #size-cells = <1>; 17 + #address-cells = <2>; 18 + #size-cells = <2>; 19 19 20 20 cpu_card { 21 21 compatible = "simple-bus"; 22 22 #address-cells = <1>; 23 23 #size-cells = <1>; 24 24 25 - ranges = <0x00000000 0xf0000000 0x10000000>; 25 + ranges = <0x00000000 0x0 0xf0000000 0x10000000>; 26 26 27 27 core_clk: core_clk { 28 28 #clock-cells = <0>; ··· 100 100 mb_intc: dw-apb-ictl@0xe0012000 { 101 101 #interrupt-cells = <1>; 102 102 compatible = "snps,dw-apb-ictl"; 103 - reg = < 0xe0012000 0x200 >; 103 + reg = < 0x0 0xe0012000 0x0 0x200 >; 104 104 interrupt-controller; 105 105 interrupt-parent = <&idu_intc>; 106 106 interrupts = <0>; 107 107 }; 108 108 109 109 memory { 110 - #address-cells = <1>; 111 - #size-cells = <1>; 112 - ranges = <0x00000000 0x80000000 0x40000000>; 113 110 device_type = "memory"; 114 - reg = <0x80000000 0x20000000>; /* 512MiB */ 111 + /* CONFIG_KERNEL_RAM_BASE_ADDRESS needs to match low mem start */ 112 + reg = <0x0 0x80000000 0x0 0x20000000 /* 512 MiB low mem */ 113 + 0x1 0xc0000000 0x0 0x40000000>; /* 1 GiB highmem */ 115 114 }; 116 115 117 116 reserved-memory { 118 - #address-cells = <1>; 119 - #size-cells = <1>; 117 + #address-cells = <2>; 118 + #size-cells = <2>; 120 119 ranges; 121 120 /* 122 121 * Move frame buffer out of IOC aperture (0x8z-0xAz). 123 122 */ 124 123 frame_buffer: frame_buffer@be000000 { 125 124 compatible = "shared-dma-pool"; 126 - reg = <0xbe000000 0x2000000>; 125 + reg = <0x0 0xbe000000 0x0 0x2000000>; 127 126 no-map; 128 127 }; 129 128 };
+1 -1
arch/arc/boot/dts/axs10x_mb.dtsi
··· 13 13 compatible = "simple-bus"; 14 14 #address-cells = <1>; 15 15 #size-cells = <1>; 16 - ranges = <0x00000000 0xe0000000 0x10000000>; 16 + ranges = <0x00000000 0x0 0xe0000000 0x10000000>; 17 17 interrupt-parent = <&mb_intc>; 18 18 19 19 i2sclk: i2sclk@100a0 {
-1
arch/arc/configs/haps_hs_defconfig
··· 21 21 # CONFIG_BLK_DEV_BSG is not set 22 22 # CONFIG_IOSCHED_DEADLINE is not set 23 23 # CONFIG_IOSCHED_CFQ is not set 24 - CONFIG_ARC_PLAT_SIM=y 25 24 CONFIG_ISA_ARCV2=y 26 25 CONFIG_ARC_BUILTIN_DTB_NAME="haps_hs" 27 26 CONFIG_PREEMPT=y
-1
arch/arc/configs/haps_hs_smp_defconfig
··· 23 23 # CONFIG_BLK_DEV_BSG is not set 24 24 # CONFIG_IOSCHED_DEADLINE is not set 25 25 # CONFIG_IOSCHED_CFQ is not set 26 - CONFIG_ARC_PLAT_SIM=y 27 26 CONFIG_ISA_ARCV2=y 28 27 CONFIG_SMP=y 29 28 CONFIG_ARC_BUILTIN_DTB_NAME="haps_hs_idu"
-1
arch/arc/configs/nps_defconfig
··· 39 39 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set 40 40 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 41 41 # CONFIG_INET_XFRM_MODE_BEET is not set 42 - # CONFIG_INET_LRO is not set 43 42 # CONFIG_INET_DIAG is not set 44 43 # CONFIG_IPV6 is not set 45 44 # CONFIG_WIRELESS is not set
-1
arch/arc/configs/nsim_700_defconfig
··· 23 23 # CONFIG_BLK_DEV_BSG is not set 24 24 # CONFIG_IOSCHED_DEADLINE is not set 25 25 # CONFIG_IOSCHED_CFQ is not set 26 - CONFIG_ARC_PLAT_SIM=y 27 26 CONFIG_ARC_BUILTIN_DTB_NAME="nsim_700" 28 27 CONFIG_PREEMPT=y 29 28 # CONFIG_COMPACTION is not set
-1
arch/arc/configs/nsim_hs_defconfig
··· 26 26 # CONFIG_BLK_DEV_BSG is not set 27 27 # CONFIG_IOSCHED_DEADLINE is not set 28 28 # CONFIG_IOSCHED_CFQ is not set 29 - CONFIG_ARC_PLAT_SIM=y 30 29 CONFIG_ISA_ARCV2=y 31 30 CONFIG_ARC_BUILTIN_DTB_NAME="nsim_hs" 32 31 CONFIG_PREEMPT=y
-1
arch/arc/configs/nsim_hs_smp_defconfig
··· 24 24 # CONFIG_BLK_DEV_BSG is not set 25 25 # CONFIG_IOSCHED_DEADLINE is not set 26 26 # CONFIG_IOSCHED_CFQ is not set 27 - CONFIG_ARC_PLAT_SIM=y 28 27 CONFIG_ISA_ARCV2=y 29 28 CONFIG_SMP=y 30 29 CONFIG_ARC_BUILTIN_DTB_NAME="nsim_hs_idu"
-1
arch/arc/configs/nsimosci_defconfig
··· 23 23 # CONFIG_BLK_DEV_BSG is not set 24 24 # CONFIG_IOSCHED_DEADLINE is not set 25 25 # CONFIG_IOSCHED_CFQ is not set 26 - CONFIG_ARC_PLAT_SIM=y 27 26 CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci" 28 27 # CONFIG_COMPACTION is not set 29 28 CONFIG_NET=y
-1
arch/arc/configs/nsimosci_hs_defconfig
··· 23 23 # CONFIG_BLK_DEV_BSG is not set 24 24 # CONFIG_IOSCHED_DEADLINE is not set 25 25 # CONFIG_IOSCHED_CFQ is not set 26 - CONFIG_ARC_PLAT_SIM=y 27 26 CONFIG_ISA_ARCV2=y 28 27 CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci_hs" 29 28 # CONFIG_COMPACTION is not set
-1
arch/arc/configs/nsimosci_hs_smp_defconfig
··· 18 18 # CONFIG_BLK_DEV_BSG is not set 19 19 # CONFIG_IOSCHED_DEADLINE is not set 20 20 # CONFIG_IOSCHED_CFQ is not set 21 - CONFIG_ARC_PLAT_SIM=y 22 21 CONFIG_ISA_ARCV2=y 23 22 CONFIG_SMP=y 24 23 # CONFIG_ARC_TIMERS_64BIT is not set
-1
arch/arc/configs/tb10x_defconfig
··· 38 38 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set 39 39 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 40 40 # CONFIG_INET_XFRM_MODE_BEET is not set 41 - # CONFIG_INET_LRO is not set 42 41 # CONFIG_INET_DIAG is not set 43 42 # CONFIG_IPV6 is not set 44 43 # CONFIG_WIRELESS is not set
+2
arch/arc/include/asm/cache.h
··· 96 96 #define ARC_REG_SLC_FLUSH 0x904 97 97 #define ARC_REG_SLC_INVALIDATE 0x905 98 98 #define ARC_REG_SLC_RGN_START 0x914 99 + #define ARC_REG_SLC_RGN_START1 0x915 99 100 #define ARC_REG_SLC_RGN_END 0x916 101 + #define ARC_REG_SLC_RGN_END1 0x917 100 102 101 103 /* Bit val in SLC_CONTROL */ 102 104 #define SLC_CTRL_DIS 0x001
+2
arch/arc/include/asm/mmu.h
··· 94 94 return IS_ENABLED(CONFIG_ARC_HAS_PAE40); 95 95 } 96 96 97 + extern int pae40_exist_but_not_enab(void); 98 + 97 99 #endif /* !__ASSEMBLY__ */ 98 100 99 101 #endif
+3
arch/arc/kernel/intc-arcv2.c
··· 75 75 * Set a default priority for all available interrupts to prevent 76 76 * switching of register banks if Fast IRQ and multiple register banks 77 77 * are supported by CPU. 78 + * Also disable all IRQ lines so faulty external hardware won't 79 + * trigger interrupt that kernel is not ready to handle. 78 80 */ 79 81 for (i = NR_EXCEPTIONS; i < irq_bcr.irqs + NR_EXCEPTIONS; i++) { 80 82 write_aux_reg(AUX_IRQ_SELECT, i); 81 83 write_aux_reg(AUX_IRQ_PRIORITY, ARCV2_IRQ_DEF_PRIO); 84 + write_aux_reg(AUX_IRQ_ENABLE, 0); 82 85 } 83 86 84 87 /* setup status32, don't enable intr yet as kernel doesn't want */
+13 -1
arch/arc/kernel/intc-compact.c
··· 27 27 */ 28 28 void arc_init_IRQ(void) 29 29 { 30 - int level_mask = 0; 30 + int level_mask = 0, i; 31 31 32 32 /* Is timer high priority Interrupt (Level2 in ARCompact jargon) */ 33 33 level_mask |= IS_ENABLED(CONFIG_ARC_COMPACT_IRQ_LEVELS) << TIMER0_IRQ; ··· 40 40 41 41 if (level_mask) 42 42 pr_info("Level-2 interrupts bitset %x\n", level_mask); 43 + 44 + /* 45 + * Disable all IRQ lines so faulty external hardware won't 46 + * trigger interrupt that kernel is not ready to handle. 47 + */ 48 + for (i = TIMER0_IRQ; i < NR_CPU_IRQS; i++) { 49 + unsigned int ienb; 50 + 51 + ienb = read_aux_reg(AUX_IENABLE); 52 + ienb &= ~(1 << i); 53 + write_aux_reg(AUX_IENABLE, ienb); 54 + } 43 55 } 44 56 45 57 /*
+42 -8
arch/arc/mm/cache.c
··· 665 665 static DEFINE_SPINLOCK(lock); 666 666 unsigned long flags; 667 667 unsigned int ctrl; 668 + phys_addr_t end; 668 669 669 670 spin_lock_irqsave(&lock, flags); 670 671 ··· 695 694 * END needs to be setup before START (latter triggers the operation) 696 695 * END can't be same as START, so add (l2_line_sz - 1) to sz 697 696 698 - write_aux_reg(ARC_REG_SLC_RGN_END, (paddr + sz + l2_line_sz - 1)); 699 - write_aux_reg(ARC_REG_SLC_RGN_START, paddr); 697 + end = paddr + sz + l2_line_sz - 1; 698 + if (is_pae40_enabled()) 699 + write_aux_reg(ARC_REG_SLC_RGN_END1, upper_32_bits(end)); 700 + 701 + write_aux_reg(ARC_REG_SLC_RGN_END, lower_32_bits(end)); 702 + 703 + if (is_pae40_enabled()) 704 + write_aux_reg(ARC_REG_SLC_RGN_START1, upper_32_bits(paddr)); 705 + 706 + write_aux_reg(ARC_REG_SLC_RGN_START, lower_32_bits(paddr)); 707 + 708 + /* Make sure "busy" bit reports correct status, see STAR 9001165532 */ 709 + read_aux_reg(ARC_REG_SLC_CTRL); 700 710 701 711 while (read_aux_reg(ARC_REG_SLC_CTRL) & SLC_CTRL_BUSY); 702 712 ··· 1123 1111 __dc_enable(); 1124 1112 } 1125 1113 1114 + /* 1115 + * Cache related boot time checks/setups only needed on master CPU: 1116 + * - Geometry checks (kernel build and hardware agree: e.g. L1_CACHE_BYTES) 1117 + * Assume SMP only, so all cores will have same cache config. A check on 1118 + * one core suffices for all 1119 + * - IOC setup / dma callbacks only need to be done once 1120 + */ 1126 1121 void __init arc_cache_init_master(void) 1127 1122 { 1128 1123 unsigned int __maybe_unused cpu = smp_processor_id(); ··· 1209 1190 1210 1191 printk(arc_cache_mumbojumbo(0, str, sizeof(str))); 1211 1192 1212 - /* 1213 - * Only master CPU needs to execute rest of function: 1214 - * - Assume SMP so all cores will have same cache config so 1215 - * any geomtry checks will be same for all 1216 - * - IOC setup / dma callbacks only need to be setup once 1217 - */ 1218 1193 if (!cpu) 1219 1194 arc_cache_init_master(); 1195 + 1196 + /* 1197 + * In PAE regime, TLB and cache maintenance ops take wider addresses 1198 + * And even if PAE is not enabled in kernel, the upper 32-bits still need 1199 + * to be zeroed to keep the ops sane. 1200 + * As an optimization for more common !PAE enabled case, zero them out 1201 + * once at init, rather than checking/setting to 0 for every runtime op 1202 + */ 1203 + if (is_isa_arcv2() && pae40_exist_but_not_enab()) { 1204 + 1205 + if (IS_ENABLED(CONFIG_ARC_HAS_ICACHE)) 1206 + write_aux_reg(ARC_REG_IC_PTAG_HI, 0); 1207 + 1208 + if (IS_ENABLED(CONFIG_ARC_HAS_DCACHE)) 1209 + write_aux_reg(ARC_REG_DC_PTAG_HI, 0); 1210 + 1211 + if (l2_line_sz) { 1212 + write_aux_reg(ARC_REG_SLC_RGN_END1, 0); 1213 + write_aux_reg(ARC_REG_SLC_RGN_START1, 0); 1214 + } 1215 + } 1220 1216 }
+45
arch/arc/mm/dma.c
··· 153 153 } 154 154 } 155 155 156 + /* 157 + * arc_dma_map_page - map a portion of a page for streaming DMA 158 + * 159 + * Ensure that any data held in the cache is appropriately discarded 160 + * or written back. 161 + * 162 + * The device owns this memory once this call has completed. The CPU 163 + * can regain ownership by calling dma_unmap_page(). 164 + * 165 + * Note: while it takes struct page as arg, caller can "abuse" it to pass 166 + * a region larger than PAGE_SIZE, provided it is physically contiguous 167 + * and this still works correctly 168 + */ 156 169 static dma_addr_t arc_dma_map_page(struct device *dev, struct page *page, 157 170 unsigned long offset, size_t size, enum dma_data_direction dir, 158 171 unsigned long attrs) ··· 176 163 _dma_cache_sync(paddr, size, dir); 177 164 178 165 return plat_phys_to_dma(dev, paddr); 166 + } 167 + 168 + /* 169 + * arc_dma_unmap_page - unmap a buffer previously mapped through dma_map_page() 170 + * 171 + * After this call, reads by the CPU to the buffer are guaranteed to see 172 + * whatever the device wrote there. 173 + * 174 + * Note: historically this routine was not implemented for ARC 175 + */ 176 + static void arc_dma_unmap_page(struct device *dev, dma_addr_t handle, 177 + size_t size, enum dma_data_direction dir, 178 + unsigned long attrs) 179 + { 180 + phys_addr_t paddr = plat_dma_to_phys(dev, handle); 181 + 182 + if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) 183 + _dma_cache_sync(paddr, size, dir); 179 184 } 180 185 181 186 static int arc_dma_map_sg(struct device *dev, struct scatterlist *sg, ··· 207 176 s->length, dir); 208 177 209 178 return nents; 179 + } 180 + 181 + static void arc_dma_unmap_sg(struct device *dev, struct scatterlist *sg, 182 + int nents, enum dma_data_direction dir, 183 + unsigned long attrs) 184 + { 185 + struct scatterlist *s; 186 + int i; 187 + 188 + for_each_sg(sg, s, nents, i) 189 + arc_dma_unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir, 190 + attrs); 210 191 } 211 192 212 193 static void arc_dma_sync_single_for_cpu(struct device *dev, ··· 266 223 .free = arc_dma_free, 267 224 .mmap = arc_dma_mmap, 268 225 .map_page = arc_dma_map_page, 226 + .unmap_page = arc_dma_unmap_page, 269 227 .map_sg = arc_dma_map_sg, 228 + .unmap_sg = arc_dma_unmap_sg, 270 229 .sync_single_for_device = arc_dma_sync_single_for_device, 271 230 .sync_single_for_cpu = arc_dma_sync_single_for_cpu, 272 231 .sync_sg_for_cpu = arc_dma_sync_sg_for_cpu,
+11 -1
arch/arc/mm/tlb.c
··· 104 104 /* A copy of the ASID from the PID reg is kept in asid_cache */ 105 105 DEFINE_PER_CPU(unsigned int, asid_cache) = MM_CTXT_FIRST_CYCLE; 106 106 107 + static int __read_mostly pae_exists; 108 + 107 109 /* 108 110 * Utility Routine to erase a J-TLB entry 109 111 * Caller needs to setup Index Reg (manually or via getIndex) ··· 786 784 mmu->u_dtlb = mmu4->u_dtlb * 4; 787 785 mmu->u_itlb = mmu4->u_itlb * 4; 788 786 mmu->sasid = mmu4->sasid; 789 - mmu->pae = mmu4->pae; 787 + pae_exists = mmu->pae = mmu4->pae; 790 788 } 791 789 } 792 790 ··· 809 807 IS_AVAIL2(p_mmu->pae, ", PAE40 ", CONFIG_ARC_HAS_PAE40)); 810 808 811 809 return buf; 810 + } 811 + 812 + int pae40_exist_but_not_enab(void) 813 + { 814 + return pae_exists && !is_pae40_enabled(); 812 815 } 813 816 814 817 void arc_mmu_init(void) ··· 866 859 /* swapper_pg_dir is the pgd for the kernel, used by vmalloc */ 867 860 write_aux_reg(ARC_REG_SCRATCH_DATA0, swapper_pg_dir); 868 861 #endif 862 + 863 + if (pae40_exist_but_not_enab()) 864 + write_aux_reg(ARC_REG_TLBPD1HI, 0); 869 865 } 870 866 871 867 /*
-13
arch/arc/plat-sim/Kconfig
··· 1 - # 2 - # Copyright (C) 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com) 3 - # 4 - # This program is free software; you can redistribute it and/or modify 5 - # it under the terms of the GNU General Public License version 2 as 6 - # published by the Free Software Foundation. 7 - # 8 - 9 - menuconfig ARC_PLAT_SIM 10 - bool "ARC nSIM based simulation virtual platforms" 11 - help 12 - Support for nSIM based ARC simulation platforms 13 - This includes the standalone nSIM (uart only) vs. System C OSCI VP
+4 -1
arch/arc/plat-sim/platform.c
··· 20 20 */ 21 21 22 22 static const char *simulation_compat[] __initconst = { 23 + #ifdef CONFIG_ISA_ARCOMPACT 23 24 "snps,nsim", 24 - "snps,nsim_hs", 25 25 "snps,nsimosci", 26 + #else 27 + "snps,nsim_hs", 26 28 "snps,nsimosci_hs", 27 29 "snps,zebu_hs", 30 + #endif 28 31 NULL, 29 32 }; 30 33
+1
arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi
··· 266 266 267 267 &hdmicec { 268 268 status = "okay"; 269 + needs-hpd; 269 270 }; 270 271 271 272 &hsi2c_4 {
+1
arch/arm/boot/dts/imx25.dtsi
··· 297 297 #address-cells = <1>; 298 298 #size-cells = <1>; 299 299 status = "disabled"; 300 + ranges; 300 301 301 302 adc: adc@50030800 { 302 303 compatible = "fsl,imx25-gcq";
+2 -2
arch/arm/boot/dts/imx6qdl-nitrogen6_som2.dtsi
··· 507 507 pinctrl_pcie: pciegrp { 508 508 fsl,pins = < 509 509 /* PCIe reset */ 510 - MX6QDL_PAD_EIM_BCLK__GPIO6_IO31 0x030b0 510 + MX6QDL_PAD_EIM_DA0__GPIO3_IO00 0x030b0 511 511 MX6QDL_PAD_EIM_DA4__GPIO3_IO04 0x030b0 512 512 >; 513 513 }; ··· 668 668 &pcie { 669 669 pinctrl-names = "default"; 670 670 pinctrl-0 = <&pinctrl_pcie>; 671 - reset-gpio = <&gpio6 31 GPIO_ACTIVE_LOW>; 671 + reset-gpio = <&gpio3 0 GPIO_ACTIVE_LOW>; 672 672 status = "okay"; 673 673 }; 674 674
+8 -8
arch/arm/boot/dts/imx7d-sdb.dts
··· 557 557 >; 558 558 }; 559 559 560 + pinctrl_spi4: spi4grp { 561 + fsl,pins = < 562 + MX7D_PAD_GPIO1_IO09__GPIO1_IO9 0x59 563 + MX7D_PAD_GPIO1_IO12__GPIO1_IO12 0x59 564 + MX7D_PAD_GPIO1_IO13__GPIO1_IO13 0x59 565 + >; 566 + }; 567 + 560 568 pinctrl_tsc2046_pendown: tsc2046_pendown { 561 569 fsl,pins = < 562 570 MX7D_PAD_EPDC_BDR1__GPIO2_IO29 0x59 ··· 705 697 fsl,pins = < 706 698 MX7D_PAD_LPSR_GPIO1_IO01__PWM1_OUT 0x110b0 707 699 >; 708 - 709 - pinctrl_spi4: spi4grp { 710 - fsl,pins = < 711 - MX7D_PAD_GPIO1_IO09__GPIO1_IO9 0x59 712 - MX7D_PAD_GPIO1_IO12__GPIO1_IO12 0x59 713 - MX7D_PAD_GPIO1_IO13__GPIO1_IO13 0x59 714 - >; 715 - }; 716 700 }; 717 701 };
+6 -6
arch/arm/boot/dts/sama5d2.dtsi
··· 303 303 #size-cells = <1>; 304 304 atmel,smc = <&hsmc>; 305 305 reg = <0x10000000 0x10000000 306 - 0x40000000 0x30000000>; 306 + 0x60000000 0x30000000>; 307 307 ranges = <0x0 0x0 0x10000000 0x10000000 308 308 0x1 0x0 0x60000000 0x10000000 309 309 0x2 0x0 0x70000000 0x10000000 ··· 1048 1048 }; 1049 1049 1050 1050 hsmc: hsmc@f8014000 { 1051 - compatible = "atmel,sama5d3-smc", "syscon", "simple-mfd"; 1051 + compatible = "atmel,sama5d2-smc", "syscon", "simple-mfd"; 1052 1052 reg = <0xf8014000 0x1000>; 1053 - interrupts = <5 IRQ_TYPE_LEVEL_HIGH 6>; 1053 + interrupts = <17 IRQ_TYPE_LEVEL_HIGH 6>; 1054 1054 clocks = <&hsmc_clk>; 1055 1055 #address-cells = <1>; 1056 1056 #size-cells = <1>; 1057 1057 ranges; 1058 1058 1059 - pmecc: ecc-engine@ffffc070 { 1059 + pmecc: ecc-engine@f8014070 { 1060 1060 compatible = "atmel,sama5d2-pmecc"; 1061 - reg = <0xffffc070 0x490>, 1062 - <0xffffc500 0x100>; 1061 + reg = <0xf8014070 0x490>, 1062 + <0xf8014500 0x100>; 1063 1063 }; 1064 1064 }; 1065 1065
+1 -1
arch/arm/mach-at91/Kconfig
··· 1 1 menuconfig ARCH_AT91 2 2 bool "Atmel SoCs" 3 3 depends on ARCH_MULTI_V4T || ARCH_MULTI_V5 || ARCH_MULTI_V7 || ARM_SINGLE_ARMV7M 4 - select ARM_CPU_SUSPEND if PM 4 + select ARM_CPU_SUSPEND if PM && ARCH_MULTI_V7 5 5 select COMMON_CLK_AT91 6 6 select GPIOLIB 7 7 select PINCTRL
+12
arch/arm/mach-at91/pm.c
··· 608 608 609 609 void __init at91rm9200_pm_init(void) 610 610 { 611 + if (!IS_ENABLED(CONFIG_SOC_AT91RM9200)) 612 + return; 613 + 611 614 at91_dt_ramc(); 612 615 613 616 /* ··· 623 620 624 621 void __init at91sam9_pm_init(void) 625 622 { 623 + if (!IS_ENABLED(CONFIG_SOC_AT91SAM9)) 624 + return; 625 + 626 626 at91_dt_ramc(); 627 627 at91_pm_init(at91sam9_idle); 628 628 } 629 629 630 630 void __init sama5_pm_init(void) 631 631 { 632 + if (!IS_ENABLED(CONFIG_SOC_SAMA5)) 633 + return; 634 + 632 635 at91_dt_ramc(); 633 636 at91_pm_init(NULL); 634 637 } 635 638 636 639 void __init sama5d2_pm_init(void) 637 640 { 641 + if (!IS_ENABLED(CONFIG_SOC_SAMA5D2)) 642 + return; 643 + 638 644 at91_pm_backup_init(); 639 645 sama5_pm_init(); 640 646 }
+1
arch/arm64/boot/dts/allwinner/sun50i-a64-bananapi-m64.dts
··· 51 51 compatible = "sinovoip,bananapi-m64", "allwinner,sun50i-a64"; 52 52 53 53 aliases { 54 + ethernet0 = &emac; 54 55 serial0 = &uart0; 55 56 serial1 = &uart1; 56 57 };
+1
arch/arm64/boot/dts/allwinner/sun50i-a64-pine64.dts
··· 51 51 compatible = "pine64,pine64", "allwinner,sun50i-a64"; 52 52 53 53 aliases { 54 + ethernet0 = &emac; 54 55 serial0 = &uart0; 55 56 serial1 = &uart1; 56 57 serial2 = &uart2;
+1
arch/arm64/boot/dts/allwinner/sun50i-a64-sopine-baseboard.dts
··· 53 53 "allwinner,sun50i-a64"; 54 54 55 55 aliases { 56 + ethernet0 = &emac; 56 57 serial0 = &uart0; 57 58 }; 58 59
+3
arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi
··· 120 120 }; 121 121 122 122 &pio { 123 + interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>, 124 + <GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>, 125 + <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>; 123 126 compatible = "allwinner,sun50i-h5-pinctrl"; 124 127 };
+1 -1
arch/arm64/boot/dts/renesas/salvator-common.dtsi
··· 45 45 stdout-path = "serial0:115200n8"; 46 46 }; 47 47 48 - audio_clkout: audio_clkout { 48 + audio_clkout: audio-clkout { 49 49 /* 50 50 * This is same as <&rcar_sound 0> 51 51 * but needed to avoid cs2000/rcar_sound probe dead-lock
+2 -2
arch/arm64/include/asm/arch_timer.h
··· 65 65 u64 _val; \ 66 66 if (needs_unstable_timer_counter_workaround()) { \ 67 67 const struct arch_timer_erratum_workaround *wa; \ 68 - preempt_disable(); \ 68 + preempt_disable_notrace(); \ 69 69 wa = __this_cpu_read(timer_unstable_counter_workaround); \ 70 70 if (wa && wa->read_##reg) \ 71 71 _val = wa->read_##reg(); \ 72 72 else \ 73 73 _val = read_sysreg(reg); \ 74 - preempt_enable(); \ 74 + preempt_enable_notrace(); \ 75 75 } else { \ 76 76 _val = read_sysreg(reg); \ 77 77 } \
+2 -2
arch/arm64/include/asm/elf.h
··· 114 114 115 115 /* 116 116 * This is the base location for PIE (ET_DYN with INTERP) loads. On 117 - * 64-bit, this is raised to 4GB to leave the entire 32-bit address 117 + * 64-bit, this is above 4GB to leave the entire 32-bit address 118 118 * space open for things that want to use the area for 32-bit pointers. 119 119 */ 120 - #define ELF_ET_DYN_BASE 0x100000000UL 120 + #define ELF_ET_DYN_BASE (2 * TASK_SIZE_64 / 3) 121 121 122 122 #ifndef __ASSEMBLY__ 123 123
+2
arch/arm64/kernel/fpsimd.c
··· 161 161 { 162 162 if (!system_supports_fpsimd()) 163 163 return; 164 + preempt_disable(); 164 165 memset(&current->thread.fpsimd_state, 0, sizeof(struct fpsimd_state)); 165 166 fpsimd_flush_task_state(current); 166 167 set_thread_flag(TIF_FOREIGN_FPSTATE); 168 + preempt_enable(); 167 169 } 168 170 169 171 /*
-1
arch/arm64/kernel/head.S
··· 354 354 tst x23, ~(MIN_KIMG_ALIGN - 1) // already running randomized? 355 355 b.ne 0f 356 356 mov x0, x21 // pass FDT address in x0 357 - mov x1, x23 // pass modulo offset in x1 358 357 bl kaslr_early_init // parse FDT for KASLR options 359 358 cbz x0, 0f // KASLR disabled? just proceed 360 359 orr x23, x23, x0 // record KASLR offset
+11 -9
arch/arm64/kernel/kaslr.c
··· 75 75 * containing function pointers) to be reinitialized, and zero-initialized 76 76 * .bss variables will be reset to 0. 77 77 */ 78 - u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset) 78 + u64 __init kaslr_early_init(u64 dt_phys) 79 79 { 80 80 void *fdt; 81 81 u64 seed, offset, mask, module_range; ··· 131 131 /* 132 132 * The kernel Image should not extend across a 1GB/32MB/512MB alignment 133 133 * boundary (for 4KB/16KB/64KB granule kernels, respectively). If this 134 - * happens, increase the KASLR offset by the size of the kernel image 135 - * rounded up by SWAPPER_BLOCK_SIZE. 134 + * happens, round down the KASLR offset by (1 << SWAPPER_TABLE_SHIFT). 135 + * 136 + * NOTE: The references to _text and _end below will already take the 137 + * modulo offset (the physical displacement modulo 2 MB) into 138 + * account, given that the physical placement is controlled by 139 + * the loader, and will not change as a result of the virtual 140 + * mapping we choose. 136 141 */ 137 - if ((((u64)_text + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT) != 138 - (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT)) { 139 - u64 kimg_sz = _end - _text; 140 - offset = (offset + round_up(kimg_sz, SWAPPER_BLOCK_SIZE)) 141 - & mask; 142 - } 142 + if ((((u64)_text + offset) >> SWAPPER_TABLE_SHIFT) != 143 + (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT)) 144 + offset = round_down(offset, 1 << SWAPPER_TABLE_SHIFT); 143 145 144 146 if (IS_ENABLED(CONFIG_KASAN)) 145 147 /*
+4 -1
arch/arm64/mm/fault.c
··· 435 435 * the mmap_sem because it would already be released 436 436 * in __lock_page_or_retry in mm/filemap.c. 437 437 */ 438 - if (fatal_signal_pending(current)) 438 + if (fatal_signal_pending(current)) { 439 + if (!user_mode(regs)) 440 + goto no_context; 439 441 return 0; 442 + } 440 443 441 444 /* 442 445 * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk of
+1 -1
arch/powerpc/Kconfig
··· 199 199 select HAVE_OPTPROBES if PPC64 200 200 select HAVE_PERF_EVENTS 201 201 select HAVE_PERF_EVENTS_NMI if PPC64 202 - select HAVE_HARDLOCKUP_DETECTOR_PERF if HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH 202 + select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH 203 203 select HAVE_PERF_REGS 204 204 select HAVE_PERF_USER_STACK_DUMP 205 205 select HAVE_RCU_TABLE_FREE if SMP
+18
arch/powerpc/include/asm/mmu_context.h
··· 90 90 /* Mark this context has been used on the new CPU */ 91 91 if (!cpumask_test_cpu(smp_processor_id(), mm_cpumask(next))) { 92 92 cpumask_set_cpu(smp_processor_id(), mm_cpumask(next)); 93 + 94 + /* 95 + * This full barrier orders the store to the cpumask above vs 96 + * a subsequent operation which allows this CPU to begin loading 97 + * translations for next. 98 + * 99 + * When using the radix MMU that operation is the load of the 100 + * MMU context id, which is then moved to SPRN_PID. 101 + * 102 + * For the hash MMU it is either the first load from slb_cache 103 + * in switch_slb(), and/or the store of paca->mm_ctx_id in 104 + * copy_mm_to_paca(). 105 + * 106 + * On the read side the barrier is in pte_xchg(), which orders 107 + * the store to the PTE vs the load of mm_cpumask. 108 + */ 109 + smp_mb(); 110 + 93 111 new_on_cpu = true; 94 112 } 95 113
+1
arch/powerpc/include/asm/pgtable-be-types.h
··· 87 87 unsigned long *p = (unsigned long *)ptep; 88 88 __be64 prev; 89 89 90 + /* See comment in switch_mm_irqs_off() */ 90 91 prev = (__force __be64)__cmpxchg_u64(p, (__force unsigned long)pte_raw(old), 91 92 (__force unsigned long)pte_raw(new)); 92 93
+1
arch/powerpc/include/asm/pgtable-types.h
··· 62 62 { 63 63 unsigned long *p = (unsigned long *)ptep; 64 64 65 + /* See comment in switch_mm_irqs_off() */ 65 66 return pte_val(old) == __cmpxchg_u64(p, pte_val(old), pte_val(new)); 66 67 } 67 68 #endif
+3 -2
arch/powerpc/kernel/process.c
··· 362 362 363 363 cpumsr = msr_check_and_set(MSR_FP|MSR_VEC|MSR_VSX); 364 364 365 - if (current->thread.regs && (current->thread.regs->msr & MSR_VSX)) { 365 + if (current->thread.regs && 366 + (current->thread.regs->msr & (MSR_VSX|MSR_VEC|MSR_FP))) { 366 367 check_if_tm_restore_required(current); 367 368 /* 368 369 * If a thread has already been reclaimed then the ··· 387 386 { 388 387 if (tsk->thread.regs) { 389 388 preempt_disable(); 390 - if (tsk->thread.regs->msr & MSR_VSX) { 389 + if (tsk->thread.regs->msr & (MSR_VSX|MSR_VEC|MSR_FP)) { 391 390 BUG_ON(tsk != current); 392 391 giveup_vsx(tsk); 393 392 }
+34 -22
arch/powerpc/kvm/book3s_64_vio.c
··· 294 294 struct kvm_create_spapr_tce_64 *args)
295 295 {
296 296 struct kvmppc_spapr_tce_table *stt = NULL;
297 + struct kvmppc_spapr_tce_table *siter;
297 298 unsigned long npages, size;
298 299 int ret = -ENOMEM;
299 300 int i;
301 + int fd = -1;
300 302 
301 303 if (!args->size)
302 304 return -EINVAL;
303 305 
304 - /* Check this LIOBN hasn't been previously allocated */
305 - list_for_each_entry(stt, &kvm->arch.spapr_tce_tables, list) {
306 - if (stt->liobn == args->liobn)
307 - return -EBUSY;
308 - }
309 - 
310 306 size = _ALIGN_UP(args->size, PAGE_SIZE >> 3);
311 307 npages = kvmppc_tce_pages(size);
312 308 ret = kvmppc_account_memlimit(kvmppc_stt_pages(npages), true);
313 - if (ret) {
314 - stt = NULL;
315 - goto fail;
316 - }
309 + if (ret)
310 + return ret;
317 311 
318 312 ret = -ENOMEM;
319 313 stt = kzalloc(sizeof(*stt) + npages * sizeof(struct page *),
320 314 GFP_KERNEL);
321 315 if (!stt)
322 - goto fail;
316 + goto fail_acct;
323 317 
324 318 stt->liobn = args->liobn;
325 319 stt->page_shift = args->page_shift;
··· 328 334 goto fail;
329 335 }
330 336 
331 - kvm_get_kvm(kvm);
337 + ret = fd = anon_inode_getfd("kvm-spapr-tce", &kvm_spapr_tce_fops,
338 + stt, O_RDWR | O_CLOEXEC);
339 + if (ret < 0)
340 + goto fail;
332 341 
333 342 mutex_lock(&kvm->lock);
334 343 
344 + /* Check this LIOBN hasn't been previously allocated */
345 + ret = 0;
346 + list_for_each_entry(siter, &kvm->arch.spapr_tce_tables, list) {
347 + if (siter->liobn == args->liobn) {
348 + ret = -EBUSY;
349 + break;
350 + }
351 + }
352 + 
353 + if (!ret) {
354 + list_add_rcu(&stt->list, &kvm->arch.spapr_tce_tables);
355 + kvm_get_kvm(kvm);
356 + }
335 357 
336 358 mutex_unlock(&kvm->lock);
337 359 
338 - return anon_inode_getfd("kvm-spapr-tce", &kvm_spapr_tce_fops,
339 - stt, O_RDWR | O_CLOEXEC);
360 + if (!ret)
361 + return fd;
340 362 
341 - fail:
342 - if (stt) {
343 - for (i = 0; i < npages; i++)
344 - if (stt->pages[i])
345 - __free_page(stt->pages[i]);
363 + put_unused_fd(fd);
346 364 
347 - kfree(stt);
348 - }
365 + fail:
366 + for (i = 0; i < npages; i++)
367 + if (stt->pages[i])
368 + __free_page(stt->pages[i]);
369 + 
370 + kfree(stt);
371 + fail_acct:
372 + kvmppc_account_memlimit(kvmppc_stt_pages(npages), false);
349 373 return ret;
350 374 }
351 375
+3
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 1291 1291 /* Hypervisor doorbell - exit only if host IPI flag set */ 1292 1292 cmpwi r12, BOOK3S_INTERRUPT_H_DOORBELL 1293 1293 bne 3f 1294 + BEGIN_FTR_SECTION 1295 + PPC_MSGSYNC 1296 + END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300) 1294 1297 lbz r0, HSTATE_HOST_IPI(r13) 1295 1298 cmpwi r0, 0 1296 1299 beq 4f
+65 -3
arch/powerpc/kvm/book3s_xive_template.c
··· 16 16 u8 cppr; 17 17 u16 ack; 18 18 19 - /* XXX DD1 bug workaround: Check PIPR vs. CPPR first ! */ 19 + /* 20 + * Ensure any previous store to CPPR is ordered vs. 21 + * the subsequent loads from PIPR or ACK. 22 + */ 23 + eieio(); 24 + 25 + /* 26 + * DD1 bug workaround: If PIPR is less favored than CPPR 27 + * ignore the interrupt or we might incorrectly lose an IPB 28 + * bit. 29 + */ 30 + if (cpu_has_feature(CPU_FTR_POWER9_DD1)) { 31 + u8 pipr = __x_readb(__x_tima + TM_QW1_OS + TM_PIPR); 32 + if (pipr >= xc->hw_cppr) 33 + return; 34 + } 20 35 21 36 /* Perform the acknowledge OS to register cycle. */ 22 37 ack = be16_to_cpu(__x_readw(__x_tima + TM_SPC_ACK_OS_REG)); ··· 250 235 /* 251 236 * If we found an interrupt, adjust what the guest CPPR should 252 237 * be as if we had just fetched that interrupt from HW. 238 + * 239 + * Note: This can only make xc->cppr smaller as the previous 240 + * loop will only exit with hirq != 0 if prio is lower than 241 + * the current xc->cppr. Thus we don't need to re-check xc->mfrr 242 + * for pending IPIs. 253 243 */ 254 244 if (hirq) 255 245 xc->cppr = prio; ··· 401 381 xc->cppr = cppr; 402 382 403 383 /* 384 + * Order the above update of xc->cppr with the subsequent 385 + * read of xc->mfrr inside push_pending_to_hw() 386 + */ 387 + smp_mb(); 388 + 389 + /* 404 390 * We are masking less, we need to look for pending things 405 391 * to deliver and set VP pending bits accordingly to trigger 406 392 * a new interrupt otherwise we might miss MFRR changes for ··· 446 420 * used to signal MFRR changes is EOId when fetched from 447 421 * the queue. 448 422 */ 449 - if (irq == XICS_IPI || irq == 0) 423 + if (irq == XICS_IPI || irq == 0) { 424 + /* 425 + * This barrier orders the setting of xc->cppr vs. 
426 + * subsquent test of xc->mfrr done inside
427 + * scan_interrupts and push_pending_to_hw
428 + */
429 + smp_mb();
450 430 goto bail;
431 + }
451 432 
452 433 /* Find interrupt source */
453 434 sb = kvmppc_xive_find_source(xive, irq, &src);
454 435 if (!sb) {
455 436 pr_devel(" source not found !\n");
456 437 rc = H_PARAMETER;
438 + /* Same as above */
439 + smp_mb();
457 440 goto bail;
458 441 }
459 442 state = &sb->irq_state[src];
460 443 kvmppc_xive_select_irq(state, &hw_num, &xd);
461 444 
462 445 state->in_eoi = true;
463 - mb();
446 + 
447 + /*
448 + * This barrier orders both setting of in_eoi above vs,
449 + * subsequent test of guest_priority, and the setting
450 + * of xc->cppr vs. subsquent test of xc->mfrr done inside
451 + * scan_interrupts and push_pending_to_hw
452 + */
453 + smp_mb();
464 454 
465 455 again:
466 456 if (state->guest_priority == MASKED) {
··· 503 461 
504 462 }
505 463 
464 + /*
465 + * This barrier orders the above guest_priority check
466 + * and spin_lock/unlock with clearing in_eoi below.
467 + *
468 + * It also has to be a full mb() as it must ensure
469 + * the MMIOs done in source_eoi() are completed before
470 + * state->in_eoi is visible.
471 + */
506 472 mb();
507 473 state->in_eoi = false;
508 474 bail:
··· 544 494 
545 495 /* Locklessly write over MFRR */
546 496 xc->mfrr = mfrr;
497 + 
498 + /*
499 + * The load of xc->cppr below and the subsequent MMIO store
500 + * to the IPI must happen after the above mfrr update is
501 + * globally visible so that:
502 + *
503 + * - Synchronize with another CPU doing an H_EOI or a H_CPPR
504 + * updating xc->cppr then reading xc->mfrr.
505 + *
506 + * - The target of the IPI sees the xc->mfrr update
507 + */
508 + mb();
547 509 
548 510 /* Shoot the IPI if most favored than target cppr */
549 511 if (mfrr < xc->cppr)
+5 -2
arch/s390/kvm/sthyi.c
··· 394 394 "srl %[cc],28\n" 395 395 : [cc] "=d" (cc) 396 396 : [code] "d" (code), [addr] "a" (addr) 397 - : "memory", "cc"); 397 + : "3", "memory", "cc"); 398 398 return cc; 399 399 } 400 400 ··· 425 425 VCPU_EVENT(vcpu, 3, "STHYI: fc: %llu addr: 0x%016llx", code, addr); 426 426 trace_kvm_s390_handle_sthyi(vcpu, code, addr); 427 427 428 - if (reg1 == reg2 || reg1 & 1 || reg2 & 1 || addr & ~PAGE_MASK) 428 + if (reg1 == reg2 || reg1 & 1 || reg2 & 1) 429 429 return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION); 430 430 431 431 if (code & 0xffff) { 432 432 cc = 3; 433 433 goto out; 434 434 } 435 + 436 + if (addr & ~PAGE_MASK) 437 + return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION); 435 438 436 439 /* 437 440 * If the page has not yet been faulted in, we want to do that
+2
arch/sparc/include/asm/page_32.h
··· 68 68 #define iopgprot_val(x) ((x).iopgprot) 69 69 70 70 #define __pte(x) ((pte_t) { (x) } ) 71 + #define __pmd(x) ((pmd_t) { { (x) }, }) 71 72 #define __iopte(x) ((iopte_t) { (x) } ) 72 73 #define __pgd(x) ((pgd_t) { (x) } ) 73 74 #define __ctxd(x) ((ctxd_t) { (x) } ) ··· 96 95 #define iopgprot_val(x) (x) 97 96 98 97 #define __pte(x) (x) 98 + #define __pmd(x) ((pmd_t) { { (x) }, }) 99 99 #define __iopte(x) (x) 100 100 #define __pgd(x) (x) 101 101 #define __ctxd(x) (x)
-2
arch/sparc/kernel/pci_sun4v.c
··· 1266 1266 * ATU group, but ATU hcalls won't be available. 1267 1267 */ 1268 1268 hv_atu = false; 1269 - pr_err(PFX "Could not register hvapi ATU err=%d\n", 1270 - err); 1271 1269 } else { 1272 1270 pr_info(PFX "Registered hvapi ATU major[%lu] minor[%lu]\n", 1273 1271 vatu_major, vatu_minor);
+1 -1
arch/sparc/kernel/pcic.c
··· 602 602 { 603 603 struct pci_dev *dev; 604 604 int i, has_io, has_mem; 605 - unsigned int cmd; 605 + unsigned int cmd = 0; 606 606 struct linux_pcic *pcic; 607 607 /* struct linux_pbm_info* pbm = &pcic->pbm; */ 608 608 int node;
+12 -12
arch/sparc/lib/multi3.S
··· 5 5 .align 4 6 6 ENTRY(__multi3) /* %o0 = u, %o1 = v */ 7 7 mov %o1, %g1 8 - srl %o3, 0, %g4 9 - mulx %g4, %g1, %o1 8 + srl %o3, 0, %o4 9 + mulx %o4, %g1, %o1 10 10 srlx %g1, 0x20, %g3 11 - mulx %g3, %g4, %g5 12 - sllx %g5, 0x20, %o5 13 - srl %g1, 0, %g4 11 + mulx %g3, %o4, %g7 12 + sllx %g7, 0x20, %o5 13 + srl %g1, 0, %o4 14 14 sub %o1, %o5, %o5 15 15 srlx %o5, 0x20, %o5 16 - addcc %g5, %o5, %g5 16 + addcc %g7, %o5, %g7 17 17 srlx %o3, 0x20, %o5 18 - mulx %g4, %o5, %g4 18 + mulx %o4, %o5, %o4 19 19 mulx %g3, %o5, %o5 20 20 sethi %hi(0x80000000), %g3 21 - addcc %g5, %g4, %g5 22 - srlx %g5, 0x20, %g5 21 + addcc %g7, %o4, %g7 22 + srlx %g7, 0x20, %g7 23 23 add %g3, %g3, %g3 24 24 movcc %xcc, %g0, %g3 25 - addcc %o5, %g5, %o5 26 - sllx %g4, 0x20, %g4 27 - add %o1, %g4, %o1 25 + addcc %o5, %g7, %o5 26 + sllx %o4, 0x20, %o4 27 + add %o1, %o4, %o1 28 28 add %o5, %g3, %g2 29 29 mulx %g1, %o2, %g1 30 30 add %g1, %g2, %g1
+2 -1
arch/x86/Kconfig
··· 100 100 select GENERIC_STRNCPY_FROM_USER 101 101 select GENERIC_STRNLEN_USER 102 102 select GENERIC_TIME_VSYSCALL 103 + select HARDLOCKUP_CHECK_TIMESTAMP if X86_64 103 104 select HAVE_ACPI_APEI if ACPI 104 105 select HAVE_ACPI_APEI_NMI if ACPI 105 106 select HAVE_ALIGNED_STRUCT_PAGE if SLUB ··· 164 163 select HAVE_PCSPKR_PLATFORM 165 164 select HAVE_PERF_EVENTS 166 165 select HAVE_PERF_EVENTS_NMI 167 - select HAVE_HARDLOCKUP_DETECTOR_PERF if HAVE_PERF_EVENTS_NMI 166 + select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI 168 167 select HAVE_PERF_REGS 169 168 select HAVE_PERF_USER_STACK_DUMP 170 169 select HAVE_REGS_AND_STACK_ACCESS_API
+36 -31
arch/x86/crypto/sha1_avx2_x86_64_asm.S
··· 117 117 .set T1, REG_T1
118 118 .endm
119 119 
120 - #define K_BASE %r8
121 120 #define HASH_PTR %r9
121 + #define BLOCKS_CTR %r8
122 122 #define BUFFER_PTR %r10
123 123 #define BUFFER_PTR2 %r13
124 - #define BUFFER_END %r11
125 124 
126 125 #define PRECALC_BUF %r14
127 126 #define WK_BUF %r15
··· 204 205 * blended AVX2 and ALU instruction scheduling
205 206 * 1 vector iteration per 8 rounds
206 207 */
207 - vmovdqu ((i * 2) + PRECALC_OFFSET)(BUFFER_PTR), W_TMP
208 + vmovdqu (i * 2)(BUFFER_PTR), W_TMP
208 209 .elseif ((i & 7) == 1)
209 - vinsertf128 $1, (((i-1) * 2)+PRECALC_OFFSET)(BUFFER_PTR2),\
210 + vinsertf128 $1, ((i-1) * 2)(BUFFER_PTR2),\
210 211 WY_TMP, WY_TMP
211 212 .elseif ((i & 7) == 2)
212 213 vpshufb YMM_SHUFB_BSWAP, WY_TMP, WY
213 214 .elseif ((i & 7) == 4)
214 - vpaddd K_XMM(K_BASE), WY, WY_TMP
215 + vpaddd K_XMM + K_XMM_AR(%rip), WY, WY_TMP
215 216 .elseif ((i & 7) == 7)
216 217 vmovdqu WY_TMP, PRECALC_WK(i&~7)
217 218 
··· 254 255 vpxor WY, WY_TMP, WY_TMP
255 256 .elseif ((i & 7) == 7)
256 257 vpxor WY_TMP2, WY_TMP, WY
257 - vpaddd K_XMM(K_BASE), WY, WY_TMP
258 + vpaddd K_XMM + K_XMM_AR(%rip), WY, WY_TMP
258 259 vmovdqu WY_TMP, PRECALC_WK(i&~7)
259 260 
260 261 PRECALC_ROTATE_WY
··· 290 291 vpsrld $30, WY, WY
291 292 vpor WY, WY_TMP, WY
292 293 .elseif ((i & 7) == 7)
293 - vpaddd K_XMM(K_BASE), WY, WY_TMP
294 + vpaddd K_XMM + K_XMM_AR(%rip), WY, WY_TMP
294 295 vmovdqu WY_TMP, PRECALC_WK(i&~7)
295 296 
296 297 PRECALC_ROTATE_WY
··· 445 446 
446 447 .endm
447 448 
449 + /* Add constant only if (%2 > %3) condition met (uses RTA as temp)
450 + * %1 + %2 >= %3 ? %4 : 0
451 + */
452 + .macro ADD_IF_GE a, b, c, d
453 + mov \a, RTA
454 + add $\d, RTA
455 + cmp $\c, \b
456 + cmovge RTA, \a
457 + .endm
458 + 
448 459 /*
449 460 * macro implements 80 rounds of SHA-1, for multiple blocks with s/w pipelining
450 461 */
··· 472 463 lea (2*4*80+32)(%rsp), WK_BUF
473 464 
474 465 # Precalc WK for first 2 blocks
475 - PRECALC_OFFSET = 0
466 + ADD_IF_GE BUFFER_PTR2, BLOCKS_CTR, 2, 64
476 467 .set i, 0
477 468 .rept 160
478 469 PRECALC i
479 470 .set i, i + 1
480 471 .endr
481 - PRECALC_OFFSET = 128
472 + 
473 + /* Go to next block if needed */
474 + ADD_IF_GE BUFFER_PTR, BLOCKS_CTR, 3, 128
475 + ADD_IF_GE BUFFER_PTR2, BLOCKS_CTR, 4, 128
482 476 xchg WK_BUF, PRECALC_BUF
483 477 
484 478 .align 32
··· 491 479 * we use K_BASE value as a signal of a last block,
492 480 * it is set below by: cmovae BUFFER_PTR, K_BASE
493 481 */
494 - cmp K_BASE, BUFFER_PTR
495 - jne _begin
482 + test BLOCKS_CTR, BLOCKS_CTR
483 + jnz _begin
496 484 .align 32
497 485 jmp _end
498 486 .align 32
··· 524 512 .set j, j+2
525 513 .endr
526 514 
527 - add $(2*64), BUFFER_PTR /* move to next odd-64-byte block */
528 - cmp BUFFER_END, BUFFER_PTR /* is current block the last one? */
529 - cmovae K_BASE, BUFFER_PTR /* signal the last iteration smartly */
530 - 
515 + /* Update Counter */
516 + sub $1, BLOCKS_CTR
517 + /* Move to the next block only if needed*/
518 + ADD_IF_GE BUFFER_PTR, BLOCKS_CTR, 4, 128
531 519 /*
532 520 * rounds
533 521 * 60,62,64,66,68
··· 544 532 UPDATE_HASH 12(HASH_PTR), D
545 533 UPDATE_HASH 16(HASH_PTR), E
546 534 
547 - cmp K_BASE, BUFFER_PTR /* is current block the last one? */
548 - je _loop
535 + test BLOCKS_CTR, BLOCKS_CTR
536 + jz _loop
549 537 
550 538 mov TB, B
551 539 
··· 587 575 .set j, j+2
588 576 .endr
589 577 
590 - add $(2*64), BUFFER_PTR2 /* move to next even-64-byte block */
591 - 
592 - cmp BUFFER_END, BUFFER_PTR2 /* is current block the last one */
593 - cmovae K_BASE, BUFFER_PTR /* signal the last iteration smartly */
578 + /* update counter */
579 + sub $1, BLOCKS_CTR
580 + /* Move to the next block only if needed*/
581 + ADD_IF_GE BUFFER_PTR2, BLOCKS_CTR, 4, 128
594 582 
595 583 jmp _loop3
596 584 _loop3:
··· 653 641 
654 642 avx2_zeroupper
655 643 
656 - lea K_XMM_AR(%rip), K_BASE
657 - 
644 + /* Setup initial values */
658 645 mov CTX, HASH_PTR
659 646 mov BUF, BUFFER_PTR
660 - lea 64(BUF), BUFFER_PTR2
661 647 
662 - shl $6, CNT /* mul by 64 */
663 - add BUF, CNT
664 - add $64, CNT
665 - mov CNT, BUFFER_END
666 - 
667 - cmp BUFFER_END, BUFFER_PTR2
668 - cmovae K_BASE, BUFFER_PTR2
648 + mov BUF, BUFFER_PTR2
649 + mov CNT, BLOCKS_CTR
669 650 
670 651 xmm_mov BSWAP_SHUFB_CTL(%rip), YMM_SHUFB_BSWAP
671 652
+1 -1
arch/x86/crypto/sha1_ssse3_glue.c
··· 201 201 202 202 static bool avx2_usable(void) 203 203 { 204 - if (false && avx_usable() && boot_cpu_has(X86_FEATURE_AVX2) 204 + if (avx_usable() && boot_cpu_has(X86_FEATURE_AVX2) 205 205 && boot_cpu_has(X86_FEATURE_BMI1) 206 206 && boot_cpu_has(X86_FEATURE_BMI2)) 207 207 return true;
+2
arch/x86/entry/entry_64.S
··· 1211 1211 * other IST entries. 1212 1212 */ 1213 1213 1214 + ASM_CLAC 1215 + 1214 1216 /* Use %rdx as our temp variable throughout */ 1215 1217 pushq %rdx 1216 1218
+7 -9
arch/x86/events/core.c
··· 2114 2114 load_mm_cr4(this_cpu_read(cpu_tlbstate.loaded_mm)); 2115 2115 } 2116 2116 2117 - static void x86_pmu_event_mapped(struct perf_event *event) 2117 + static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm) 2118 2118 { 2119 2119 if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED)) 2120 2120 return; ··· 2129 2129 * For now, this can't happen because all callers hold mmap_sem 2130 2130 * for write. If this changes, we'll need a different solution. 2131 2131 */ 2132 - lockdep_assert_held_exclusive(&current->mm->mmap_sem); 2132 + lockdep_assert_held_exclusive(&mm->mmap_sem); 2133 2133 2134 - if (atomic_inc_return(&current->mm->context.perf_rdpmc_allowed) == 1) 2135 - on_each_cpu_mask(mm_cpumask(current->mm), refresh_pce, NULL, 1); 2134 + if (atomic_inc_return(&mm->context.perf_rdpmc_allowed) == 1) 2135 + on_each_cpu_mask(mm_cpumask(mm), refresh_pce, NULL, 1); 2136 2136 } 2137 2137 2138 - static void x86_pmu_event_unmapped(struct perf_event *event) 2138 + static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm) 2139 2139 { 2140 - if (!current->mm) 2141 - return; 2142 2140 2143 2141 if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED)) 2144 2142 return; 2145 2143 2146 - if (atomic_dec_and_test(&current->mm->context.perf_rdpmc_allowed)) 2147 - on_each_cpu_mask(mm_cpumask(current->mm), refresh_pce, NULL, 1); 2144 + if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed)) 2145 + on_each_cpu_mask(mm_cpumask(mm), refresh_pce, NULL, 1); 2148 2146 } 2149 2147 2150 2148 static int x86_pmu_event_idx(struct perf_event *event)
+1 -1
arch/x86/events/intel/bts.c
··· 69 69 struct bts_phys buf[0]; 70 70 }; 71 71 72 - struct pmu bts_pmu; 72 + static struct pmu bts_pmu; 73 73 74 74 static size_t buf_size(struct page *page) 75 75 {
+1 -1
arch/x86/events/intel/p4.c
··· 587 587 * P4_CONFIG_ALIASABLE or bits for P4_PEBS_METRIC, they are 588 588 * either up to date automatically or not applicable at all. 589 589 */ 590 - struct p4_event_alias { 590 + static struct p4_event_alias { 591 591 u64 original; 592 592 u64 alternative; 593 593 } p4_event_aliases[] = {
+1 -1
arch/x86/events/intel/rapl.c
··· 559 559 .attrs = rapl_formats_attr, 560 560 }; 561 561 562 - const struct attribute_group *rapl_attr_groups[] = { 562 + static const struct attribute_group *rapl_attr_groups[] = { 563 563 &rapl_pmu_attr_group, 564 564 &rapl_pmu_format_group, 565 565 &rapl_pmu_events_group,
+1 -1
arch/x86/events/intel/uncore.c
··· 721 721 NULL, 722 722 }; 723 723 724 - static struct attribute_group uncore_pmu_attr_group = { 724 + static const struct attribute_group uncore_pmu_attr_group = { 725 725 .attrs = uncore_pmu_attrs, 726 726 }; 727 727
+6 -6
arch/x86/events/intel/uncore_nhmex.c
··· 272 272 NULL, 273 273 }; 274 274 275 - static struct attribute_group nhmex_uncore_ubox_format_group = { 275 + static const struct attribute_group nhmex_uncore_ubox_format_group = { 276 276 .name = "format", 277 277 .attrs = nhmex_uncore_ubox_formats_attr, 278 278 }; ··· 299 299 NULL, 300 300 }; 301 301 302 - static struct attribute_group nhmex_uncore_cbox_format_group = { 302 + static const struct attribute_group nhmex_uncore_cbox_format_group = { 303 303 .name = "format", 304 304 .attrs = nhmex_uncore_cbox_formats_attr, 305 305 }; ··· 407 407 NULL, 408 408 }; 409 409 410 - static struct attribute_group nhmex_uncore_bbox_format_group = { 410 + static const struct attribute_group nhmex_uncore_bbox_format_group = { 411 411 .name = "format", 412 412 .attrs = nhmex_uncore_bbox_formats_attr, 413 413 }; ··· 484 484 NULL, 485 485 }; 486 486 487 - static struct attribute_group nhmex_uncore_sbox_format_group = { 487 + static const struct attribute_group nhmex_uncore_sbox_format_group = { 488 488 .name = "format", 489 489 .attrs = nhmex_uncore_sbox_formats_attr, 490 490 }; ··· 898 898 NULL, 899 899 }; 900 900 901 - static struct attribute_group nhmex_uncore_mbox_format_group = { 901 + static const struct attribute_group nhmex_uncore_mbox_format_group = { 902 902 .name = "format", 903 903 .attrs = nhmex_uncore_mbox_formats_attr, 904 904 }; ··· 1163 1163 NULL, 1164 1164 }; 1165 1165 1166 - static struct attribute_group nhmex_uncore_rbox_format_group = { 1166 + static const struct attribute_group nhmex_uncore_rbox_format_group = { 1167 1167 .name = "format", 1168 1168 .attrs = nhmex_uncore_rbox_formats_attr, 1169 1169 };
+3 -3
arch/x86/events/intel/uncore_snb.c
··· 130 130 NULL, 131 131 }; 132 132 133 - static struct attribute_group snb_uncore_format_group = { 133 + static const struct attribute_group snb_uncore_format_group = { 134 134 .name = "format", 135 135 .attrs = snb_uncore_formats_attr, 136 136 }; ··· 289 289 NULL, 290 290 }; 291 291 292 - static struct attribute_group snb_uncore_imc_format_group = { 292 + static const struct attribute_group snb_uncore_imc_format_group = { 293 293 .name = "format", 294 294 .attrs = snb_uncore_imc_formats_attr, 295 295 }; ··· 769 769 NULL, 770 770 }; 771 771 772 - static struct attribute_group nhm_uncore_format_group = { 772 + static const struct attribute_group nhm_uncore_format_group = { 773 773 .name = "format", 774 774 .attrs = nhm_uncore_formats_attr, 775 775 };
+21 -21
arch/x86/events/intel/uncore_snbep.c
··· 602 602 { /* end: all zeroes */ }, 603 603 }; 604 604 605 - static struct attribute_group snbep_uncore_format_group = { 605 + static const struct attribute_group snbep_uncore_format_group = { 606 606 .name = "format", 607 607 .attrs = snbep_uncore_formats_attr, 608 608 }; 609 609 610 - static struct attribute_group snbep_uncore_ubox_format_group = { 610 + static const struct attribute_group snbep_uncore_ubox_format_group = { 611 611 .name = "format", 612 612 .attrs = snbep_uncore_ubox_formats_attr, 613 613 }; 614 614 615 - static struct attribute_group snbep_uncore_cbox_format_group = { 615 + static const struct attribute_group snbep_uncore_cbox_format_group = { 616 616 .name = "format", 617 617 .attrs = snbep_uncore_cbox_formats_attr, 618 618 }; 619 619 620 - static struct attribute_group snbep_uncore_pcu_format_group = { 620 + static const struct attribute_group snbep_uncore_pcu_format_group = { 621 621 .name = "format", 622 622 .attrs = snbep_uncore_pcu_formats_attr, 623 623 }; 624 624 625 - static struct attribute_group snbep_uncore_qpi_format_group = { 625 + static const struct attribute_group snbep_uncore_qpi_format_group = { 626 626 .name = "format", 627 627 .attrs = snbep_uncore_qpi_formats_attr, 628 628 }; ··· 1431 1431 NULL, 1432 1432 }; 1433 1433 1434 - static struct attribute_group ivbep_uncore_format_group = { 1434 + static const struct attribute_group ivbep_uncore_format_group = { 1435 1435 .name = "format", 1436 1436 .attrs = ivbep_uncore_formats_attr, 1437 1437 }; 1438 1438 1439 - static struct attribute_group ivbep_uncore_ubox_format_group = { 1439 + static const struct attribute_group ivbep_uncore_ubox_format_group = { 1440 1440 .name = "format", 1441 1441 .attrs = ivbep_uncore_ubox_formats_attr, 1442 1442 }; 1443 1443 1444 - static struct attribute_group ivbep_uncore_cbox_format_group = { 1444 + static const struct attribute_group ivbep_uncore_cbox_format_group = { 1445 1445 .name = "format", 1446 1446 .attrs = ivbep_uncore_cbox_formats_attr, 
1447 1447 };
1448 1448 
1449 - static struct attribute_group ivbep_uncore_pcu_format_group = {
1449 + static const struct attribute_group ivbep_uncore_pcu_format_group = {
1450 1450 .name = "format",
1451 1451 .attrs = ivbep_uncore_pcu_formats_attr,
1452 1452 };
1453 1453 
1454 - static struct attribute_group ivbep_uncore_qpi_format_group = {
1454 + static const struct attribute_group ivbep_uncore_qpi_format_group = {
1455 1455 .name = "format",
1456 1456 .attrs = ivbep_uncore_qpi_formats_attr,
1457 1457 };
··· 1887 1887 NULL,
1888 1888 };
1889 1889 
1890 - static struct attribute_group knl_uncore_ubox_format_group = {
1890 + static const struct attribute_group knl_uncore_ubox_format_group = {
1891 1891 .name = "format",
1892 1892 .attrs = knl_uncore_ubox_formats_attr,
1893 1893 };
··· 1927 1927 NULL,
1928 1928 };
1929 1929 
1930 - static struct attribute_group knl_uncore_cha_format_group = {
1930 + static const struct attribute_group knl_uncore_cha_format_group = {
1931 1931 .name = "format",
1932 1932 .attrs = knl_uncore_cha_formats_attr,
1933 1933 };
··· 2037 2037 NULL,
2038 2038 };
2039 2039 
2040 - static struct attribute_group knl_uncore_pcu_format_group = {
2040 + static const struct attribute_group knl_uncore_pcu_format_group = {
2041 2041 .name = "format",
2042 2042 .attrs = knl_uncore_pcu_formats_attr,
2043 2043 };
··· 2187 2187 NULL,
2188 2188 };
2189 2189 
2190 - static struct attribute_group knl_uncore_irp_format_group = {
2190 + static const struct attribute_group knl_uncore_irp_format_group = {
2191 2191 .name = "format",
2192 2192 .attrs = knl_uncore_irp_formats_attr,
2193 2193 };
··· 2385 2385 NULL,
2386 2386 };
2387 2387 
2388 - static struct attribute_group hswep_uncore_ubox_format_group = {
2388 + static const struct attribute_group hswep_uncore_ubox_format_group = {
2389 2389 .name = "format",
2390 2390 .attrs = hswep_uncore_ubox_formats_attr,
2391 2391 };
··· 2439 2439 NULL,
2440 2440 };
2441 2441 
2442 - static struct attribute_group hswep_uncore_cbox_format_group = {
2442 + static const struct attribute_group hswep_uncore_cbox_format_group = {
2443 2443 .name = "format",
2444 2444 .attrs = hswep_uncore_cbox_formats_attr,
2445 2445 };
··· 2621 2621 NULL,
2622 2622 };
2623 2623 
2624 - static struct attribute_group hswep_uncore_sbox_format_group = {
2624 + static const struct attribute_group hswep_uncore_sbox_format_group = {
2625 2625 .name = "format",
2626 2626 .attrs = hswep_uncore_sbox_formats_attr,
2627 2627 };
··· 3314 3314 NULL,
3315 3315 };
3316 3316 
3317 - static struct attribute_group skx_uncore_chabox_format_group = {
3317 + static const struct attribute_group skx_uncore_chabox_format_group = {
3318 3318 .name = "format",
3319 3319 .attrs = skx_uncore_cha_formats_attr,
3320 3320 };
··· 3427 3427 NULL,
3428 3428 };
3429 3429 
3430 - static struct attribute_group skx_uncore_iio_format_group = {
3430 + static const struct attribute_group skx_uncore_iio_format_group = {
3431 3431 .name = "format",
3432 3432 .attrs = skx_uncore_iio_formats_attr,
3433 3433 };
··· 3484 3484 NULL,
3485 3485 };
3486 3486 
3487 - static struct attribute_group skx_uncore_format_group = {
3487 + static const struct attribute_group skx_uncore_format_group = {
3488 3488 .name = "format",
3489 3489 .attrs = skx_uncore_formats_attr,
3490 3490 };
··· 3605 3605 NULL,
3606 3606 };
3607 3607 
3608 - static struct attribute_group skx_upi_uncore_format_group = {
3608 + static const struct attribute_group skx_upi_uncore_format_group = {
3609 3609 .name = "format",
3610 3610 .attrs = skx_upi_uncore_formats_attr,
3611 3611 };
+1 -1
arch/x86/include/asm/cpufeatures.h
··· 286 286 #define X86_FEATURE_PAUSEFILTER (15*32+10) /* filtered pause intercept */ 287 287 #define X86_FEATURE_PFTHRESHOLD (15*32+12) /* pause filter threshold */ 288 288 #define X86_FEATURE_AVIC (15*32+13) /* Virtual Interrupt Controller */ 289 - #define X86_FEATURE_VIRTUAL_VMLOAD_VMSAVE (15*32+15) /* Virtual VMLOAD VMSAVE */ 289 + #define X86_FEATURE_V_VMSAVE_VMLOAD (15*32+15) /* Virtual VMSAVE VMLOAD */ 290 290 291 291 /* Intel-defined CPU features, CPUID level 0x00000007:0 (ecx), word 16 */ 292 292 #define X86_FEATURE_AVX512VBMI (16*32+ 1) /* AVX512 Vector Bit Manipulation instructions*/
+2 -2
arch/x86/include/asm/elf.h
··· 247 247 248 248 /* 249 249 * This is the base location for PIE (ET_DYN with INTERP) loads. On 250 - * 64-bit, this is raised to 4GB to leave the entire 32-bit address 250 + * 64-bit, this is above 4GB to leave the entire 32-bit address 251 251 * space open for things that want to use the area for 32-bit pointers. 252 252 */ 253 253 #define ELF_ET_DYN_BASE (mmap_is_ia32() ? 0x000400000UL : \ 254 - 0x100000000UL) 254 + (TASK_SIZE / 3 * 2)) 255 255 256 256 /* This yields a mask that user programs can use to figure out what 257 257 instruction set this CPU supports. This could be done in user space,
+3 -3
arch/x86/include/asm/fpu/internal.h
··· 450 450 return 0; 451 451 } 452 452 453 - static inline void __copy_kernel_to_fpregs(union fpregs_state *fpstate) 453 + static inline void __copy_kernel_to_fpregs(union fpregs_state *fpstate, u64 mask) 454 454 { 455 455 if (use_xsave()) { 456 - copy_kernel_to_xregs(&fpstate->xsave, -1); 456 + copy_kernel_to_xregs(&fpstate->xsave, mask); 457 457 } else { 458 458 if (use_fxsr()) 459 459 copy_kernel_to_fxregs(&fpstate->fxsave); ··· 477 477 : : [addr] "m" (fpstate)); 478 478 } 479 479 480 - __copy_kernel_to_fpregs(fpstate); 480 + __copy_kernel_to_fpregs(fpstate, -1); 481 481 } 482 482 483 483 extern int copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size);
+1
arch/x86/include/asm/kvm_host.h
··· 492 492 unsigned long cr4; 493 493 unsigned long cr4_guest_owned_bits; 494 494 unsigned long cr8; 495 + u32 pkru; 495 496 u32 hflags; 496 497 u64 efer; 497 498 u64 apic_base;
+1 -3
arch/x86/include/asm/mmu_context.h
··· 140 140 mm->context.execute_only_pkey = -1; 141 141 } 142 142 #endif 143 - init_new_context_ldt(tsk, mm); 144 - 145 - return 0; 143 + return init_new_context_ldt(tsk, mm); 146 144 } 147 145 static inline void destroy_context(struct mm_struct *mm) 148 146 {
+3
arch/x86/kernel/cpu/aperfmperf.c
··· 40 40 struct aperfmperf_sample *s = this_cpu_ptr(&samples); 41 41 ktime_t now = ktime_get(); 42 42 s64 time_delta = ktime_ms_delta(now, s->time); 43 + unsigned long flags; 43 44 44 45 /* Don't bother re-computing within the cache threshold time. */ 45 46 if (time_delta < APERFMPERF_CACHE_THRESHOLD_MS) 46 47 return; 47 48 49 + local_irq_save(flags); 48 50 rdmsrl(MSR_IA32_APERF, aperf); 49 51 rdmsrl(MSR_IA32_MPERF, mperf); 52 + local_irq_restore(flags); 50 53 51 54 aperf_delta = aperf - s->aperf; 52 55 mperf_delta = mperf - s->mperf;
+1 -1
arch/x86/kernel/cpu/mcheck/therm_throt.c
··· 122 122 NULL 123 123 }; 124 124 125 - static struct attribute_group thermal_attr_group = { 125 + static const struct attribute_group thermal_attr_group = { 126 126 .attrs = thermal_throttle_attrs, 127 127 .name = "thermal_throttle" 128 128 };
+2 -2
arch/x86/kernel/cpu/microcode/core.c
··· 561 561 NULL 562 562 }; 563 563 564 - static struct attribute_group mc_attr_group = { 564 + static const struct attribute_group mc_attr_group = { 565 565 .attrs = mc_default_attrs, 566 566 .name = "microcode", 567 567 }; ··· 707 707 NULL 708 708 }; 709 709 710 - static struct attribute_group cpu_root_microcode_group = { 710 + static const struct attribute_group cpu_root_microcode_group = { 711 711 .name = "microcode", 712 712 .attrs = cpu_root_microcode_attrs, 713 713 };
+15 -3
arch/x86/kernel/cpu/mtrr/main.c
··· 237 237 stop_machine(mtrr_rendezvous_handler, &data, cpu_online_mask); 238 238 } 239 239 240 + static void set_mtrr_cpuslocked(unsigned int reg, unsigned long base, 241 + unsigned long size, mtrr_type type) 242 + { 243 + struct set_mtrr_data data = { .smp_reg = reg, 244 + .smp_base = base, 245 + .smp_size = size, 246 + .smp_type = type 247 + }; 248 + 249 + stop_machine_cpuslocked(mtrr_rendezvous_handler, &data, cpu_online_mask); 250 + } 251 + 240 252 static void set_mtrr_from_inactive_cpu(unsigned int reg, unsigned long base, 241 253 unsigned long size, mtrr_type type) 242 254 { ··· 382 370 /* Search for an empty MTRR */ 383 371 i = mtrr_if->get_free_region(base, size, replace); 384 372 if (i >= 0) { 385 - set_mtrr(i, base, size, type); 373 + set_mtrr_cpuslocked(i, base, size, type); 386 374 if (likely(replace < 0)) { 387 375 mtrr_usage_table[i] = 1; 388 376 } else { ··· 390 378 if (increment) 391 379 mtrr_usage_table[i]++; 392 380 if (unlikely(replace != i)) { 393 - set_mtrr(replace, 0, 0, 0); 381 + set_mtrr_cpuslocked(replace, 0, 0, 0); 394 382 mtrr_usage_table[replace] = 0; 395 383 } 396 384 } ··· 518 506 goto out; 519 507 } 520 508 if (--mtrr_usage_table[reg] < 1) 521 - set_mtrr(reg, 0, 0, 0); 509 + set_mtrr_cpuslocked(reg, 0, 0, 0); 522 510 error = reg; 523 511 out: 524 512 mutex_unlock(&mtrr_mutex);
+4 -3
arch/x86/kernel/head64.c
··· 53 53 pudval_t *pud; 54 54 pmdval_t *pmd, pmd_entry; 55 55 int i; 56 + unsigned int *next_pgt_ptr; 56 57 57 58 /* Is the address too large? */ 58 59 if (physaddr >> MAX_PHYSMEM_BITS) ··· 92 91 * creates a bunch of nonsense entries but that is fine -- 93 92 * it avoids problems around wraparound. 94 93 */ 95 - 96 - pud = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr); 97 - pmd = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr); 94 + next_pgt_ptr = fixup_pointer(&next_early_pgt, physaddr); 95 + pud = fixup_pointer(early_dynamic_pgts[(*next_pgt_ptr)++], physaddr); 96 + pmd = fixup_pointer(early_dynamic_pgts[(*next_pgt_ptr)++], physaddr); 98 97 99 98 if (IS_ENABLED(CONFIG_X86_5LEVEL)) { 100 99 p4d = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
+2 -2
arch/x86/kernel/ksysfs.c
··· 55 55 NULL, 56 56 }; 57 57 58 - static struct attribute_group boot_params_attr_group = { 58 + static const struct attribute_group boot_params_attr_group = { 59 59 .attrs = boot_params_version_attrs, 60 60 .bin_attrs = boot_params_data_attrs, 61 61 }; ··· 202 202 NULL, 203 203 }; 204 204 205 - static struct attribute_group setup_data_attr_group = { 205 + static const struct attribute_group setup_data_attr_group = { 206 206 .attrs = setup_data_type_attrs, 207 207 .bin_attrs = setup_data_data_attrs, 208 208 };
+17 -13
arch/x86/kernel/smpboot.c
··· 971 971 * Returns zero if CPU booted OK, else error code from 972 972 * ->wakeup_secondary_cpu. 973 973 */ 974 - static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle) 974 + static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle, 975 + int *cpu0_nmi_registered) 975 976 { 976 977 volatile u32 *trampoline_status = 977 978 (volatile u32 *) __va(real_mode_header->trampoline_status); ··· 980 979 unsigned long start_ip = real_mode_header->trampoline_start; 981 980 982 981 unsigned long boot_error = 0; 983 - int cpu0_nmi_registered = 0; 984 982 unsigned long timeout; 985 983 986 984 idle->thread.sp = (unsigned long)task_pt_regs(idle); ··· 1035 1035 boot_error = apic->wakeup_secondary_cpu(apicid, start_ip); 1036 1036 else 1037 1037 boot_error = wakeup_cpu_via_init_nmi(cpu, start_ip, apicid, 1038 - &cpu0_nmi_registered); 1038 + cpu0_nmi_registered); 1039 1039 1040 1040 if (!boot_error) { 1041 1041 /* ··· 1080 1080 */ 1081 1081 smpboot_restore_warm_reset_vector(); 1082 1082 } 1083 - /* 1084 - * Clean up the nmi handler. Do this after the callin and callout sync 1085 - * to avoid impact of possible long unregister time. 
1086 - */ 1087 - if (cpu0_nmi_registered) 1088 - unregister_nmi_handler(NMI_LOCAL, "wake_cpu0"); 1089 1083 1090 1084 return boot_error; 1091 1085 } ··· 1087 1093 int native_cpu_up(unsigned int cpu, struct task_struct *tidle) 1088 1094 { 1089 1095 int apicid = apic->cpu_present_to_apicid(cpu); 1096 + int cpu0_nmi_registered = 0; 1090 1097 unsigned long flags; 1091 - int err; 1098 + int err, ret = 0; 1092 1099 1093 1100 WARN_ON(irqs_disabled()); 1094 1101 ··· 1126 1131 1127 1132 common_cpu_up(cpu, tidle); 1128 1133 1129 - err = do_boot_cpu(apicid, cpu, tidle); 1134 + err = do_boot_cpu(apicid, cpu, tidle, &cpu0_nmi_registered); 1130 1135 if (err) { 1131 1136 pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu); 1132 - return -EIO; 1137 + ret = -EIO; 1138 + goto unreg_nmi; 1133 1139 } 1134 1140 1135 1141 /* ··· 1146 1150 touch_nmi_watchdog(); 1147 1151 } 1148 1152 1149 - return 0; 1153 + unreg_nmi: 1154 + /* 1155 + * Clean up the nmi handler. Do this after the callin and callout sync 1156 + * to avoid impact of possible long unregister time. 1157 + */ 1158 + if (cpu0_nmi_registered) 1159 + unregister_nmi_handler(NMI_LOCAL, "wake_cpu0"); 1160 + 1161 + return ret; 1150 1162 } 1151 1163 1152 1164 /**
+1 -1
arch/x86/kvm/cpuid.c
··· 469 469 entry->ecx &= kvm_cpuid_7_0_ecx_x86_features; 470 470 cpuid_mask(&entry->ecx, CPUID_7_ECX); 471 471 /* PKU is not yet implemented for shadow paging. */ 472 - if (!tdp_enabled) 472 + if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE)) 473 473 entry->ecx &= ~F(PKU); 474 474 entry->edx &= kvm_cpuid_7_0_edx_x86_features; 475 475 entry->edx &= get_scattered_cpuid_leaf(7, 0, CPUID_EDX);
-5
arch/x86/kvm/kvm_cache_regs.h
··· 84 84 | ((u64)(kvm_register_read(vcpu, VCPU_REGS_RDX) & -1u) << 32); 85 85 } 86 86 87 - static inline u32 kvm_read_pkru(struct kvm_vcpu *vcpu) 88 - { 89 - return kvm_x86_ops->get_pkru(vcpu); 90 - } 91 - 92 87 static inline void enter_guest_mode(struct kvm_vcpu *vcpu) 93 88 { 94 89 vcpu->arch.hflags |= HF_GUEST_MASK;
+1 -1
arch/x86/kvm/mmu.h
··· 185 185 * index of the protection domain, so pte_pkey * 2 is 186 186 * is the index of the first bit for the domain. 187 187 */ 188 - pkru_bits = (kvm_read_pkru(vcpu) >> (pte_pkey * 2)) & 3; 188 + pkru_bits = (vcpu->arch.pkru >> (pte_pkey * 2)) & 3; 189 189 190 190 /* clear present bit, replace PFEC.RSVD with ACC_USER_MASK. */ 191 191 offset = (pfec & ~1) +
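The extraction above, `(pkru >> (pte_pkey * 2)) & 3`, reads the two per-key bits PKRU packs for each protection domain: Access-Disable at bit 2k and Write-Disable at bit 2k+1. A minimal userspace sketch (the helper names here are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* PKRU packs two bits per protection key: AD at bit 2k, WD at 2k+1. */
#define PKRU_AD_BIT 0x1u
#define PKRU_WD_BIT 0x2u

static inline uint32_t pkru_bits(uint32_t pkru, unsigned int pkey)
{
	/* Same extraction as the hunk above. */
	return (pkru >> (pkey * 2)) & 3;
}

static inline int pkey_access_disabled(uint32_t pkru, unsigned int pkey)
{
	return pkru_bits(pkru, pkey) & PKRU_AD_BIT;
}

static inline int pkey_write_disabled(uint32_t pkru, unsigned int pkey)
{
	return pkru_bits(pkru, pkey) & PKRU_WD_BIT;
}
```

Reading the value from `vcpu->arch.pkru` instead of a `kvm_x86_ops` callback works because the commit keeps the guest PKRU cached in the vcpu whenever the guest is not running.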
+1 -8
arch/x86/kvm/svm.c
··· 1100 1100 1101 1101 if (vls) { 1102 1102 if (!npt_enabled || 1103 - !boot_cpu_has(X86_FEATURE_VIRTUAL_VMLOAD_VMSAVE) || 1103 + !boot_cpu_has(X86_FEATURE_V_VMSAVE_VMLOAD) || 1104 1104 !IS_ENABLED(CONFIG_X86_64)) { 1105 1105 vls = false; 1106 1106 } else { ··· 1775 1775 * so we do not need to update the CPL here. 1776 1776 */ 1777 1777 to_svm(vcpu)->vmcb->save.rflags = rflags; 1778 - } 1779 - 1780 - static u32 svm_get_pkru(struct kvm_vcpu *vcpu) 1781 - { 1782 - return 0; 1783 1778 } 1784 1779 1785 1780 static void svm_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg) ··· 5407 5412 .cache_reg = svm_cache_reg, 5408 5413 .get_rflags = svm_get_rflags, 5409 5414 .set_rflags = svm_set_rflags, 5410 - 5411 - .get_pkru = svm_get_pkru, 5412 5415 5413 5416 .tlb_flush = svm_flush_tlb, 5414 5417
+8 -17
arch/x86/kvm/vmx.c
··· 636 636 637 637 u64 current_tsc_ratio; 638 638 639 - bool guest_pkru_valid; 640 - u32 guest_pkru; 641 639 u32 host_pkru; 642 640 643 641 /* ··· 2379 2381 2380 2382 if ((old_rflags ^ to_vmx(vcpu)->rflags) & X86_EFLAGS_VM) 2381 2383 to_vmx(vcpu)->emulation_required = emulation_required(vcpu); 2382 - } 2383 - 2384 - static u32 vmx_get_pkru(struct kvm_vcpu *vcpu) 2385 - { 2386 - return to_vmx(vcpu)->guest_pkru; 2387 2384 } 2388 2385 2389 2386 static u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu) ··· 9013 9020 if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) 9014 9021 vmx_set_interrupt_shadow(vcpu, 0); 9015 9022 9016 - if (vmx->guest_pkru_valid) 9017 - __write_pkru(vmx->guest_pkru); 9023 + if (static_cpu_has(X86_FEATURE_PKU) && 9024 + kvm_read_cr4_bits(vcpu, X86_CR4_PKE) && 9025 + vcpu->arch.pkru != vmx->host_pkru) 9026 + __write_pkru(vcpu->arch.pkru); 9018 9027 9019 9028 atomic_switch_perf_msrs(vmx); 9020 9029 debugctlmsr = get_debugctlmsr(); ··· 9164 9169 * back on host, so it is safe to read guest PKRU from current 9165 9170 * XSAVE. 9166 9171 */ 9167 - if (boot_cpu_has(X86_FEATURE_OSPKE)) { 9168 - vmx->guest_pkru = __read_pkru(); 9169 - if (vmx->guest_pkru != vmx->host_pkru) { 9170 - vmx->guest_pkru_valid = true; 9172 + if (static_cpu_has(X86_FEATURE_PKU) && 9173 + kvm_read_cr4_bits(vcpu, X86_CR4_PKE)) { 9174 + vcpu->arch.pkru = __read_pkru(); 9175 + if (vcpu->arch.pkru != vmx->host_pkru) 9171 9176 __write_pkru(vmx->host_pkru); 9172 - } else 9173 - vmx->guest_pkru_valid = false; 9174 9177 } 9175 9178 9176 9179 /* ··· 11674 11681 .cache_reg = vmx_cache_reg, 11675 11682 .get_rflags = vmx_get_rflags, 11676 11683 .set_rflags = vmx_set_rflags, 11677 - 11678 - .get_pkru = vmx_get_pkru, 11679 11684 11680 11685 .tlb_flush = vmx_flush_tlb, 11681 11686
+14 -3
arch/x86/kvm/x86.c
··· 3245 3245 u32 size, offset, ecx, edx; 3246 3246 cpuid_count(XSTATE_CPUID, index, 3247 3247 &size, &offset, &ecx, &edx); 3248 - memcpy(dest + offset, src, size); 3248 + if (feature == XFEATURE_MASK_PKRU) 3249 + memcpy(dest + offset, &vcpu->arch.pkru, 3250 + sizeof(vcpu->arch.pkru)); 3251 + else 3252 + memcpy(dest + offset, src, size); 3253 + 3249 3254 } 3250 3255 3251 3256 valid -= feature; ··· 3288 3283 u32 size, offset, ecx, edx; 3289 3284 cpuid_count(XSTATE_CPUID, index, 3290 3285 &size, &offset, &ecx, &edx); 3291 - memcpy(dest, src + offset, size); 3286 + if (feature == XFEATURE_MASK_PKRU) 3287 + memcpy(&vcpu->arch.pkru, src + offset, 3288 + sizeof(vcpu->arch.pkru)); 3289 + else 3290 + memcpy(dest, src + offset, size); 3292 3291 } 3293 3292 3294 3293 valid -= feature; ··· 7642 7633 */ 7643 7634 vcpu->guest_fpu_loaded = 1; 7644 7635 __kernel_fpu_begin(); 7645 - __copy_kernel_to_fpregs(&vcpu->arch.guest_fpu.state); 7636 + /* PKRU is separately restored in kvm_x86_ops->run. */ 7637 + __copy_kernel_to_fpregs(&vcpu->arch.guest_fpu.state, 7638 + ~XFEATURE_MASK_PKRU); 7646 7639 trace_kvm_fpu(1); 7647 7640 } 7648 7641
+3 -4
arch/x86/mm/mmap.c
··· 50 50 static unsigned long stack_maxrandom_size(unsigned long task_size) 51 51 { 52 52 unsigned long max = 0; 53 - if ((current->flags & PF_RANDOMIZE) && 54 - !(current->personality & ADDR_NO_RANDOMIZE)) { 53 + if (current->flags & PF_RANDOMIZE) { 55 54 max = (-1UL) & __STACK_RND_MASK(task_size == tasksize_32bit()); 56 55 max <<= PAGE_SHIFT; 57 56 } ··· 78 79 79 80 static unsigned long arch_rnd(unsigned int rndbits) 80 81 { 82 + if (!(current->flags & PF_RANDOMIZE)) 83 + return 0; 81 84 return (get_random_long() & ((1UL << rndbits) - 1)) << PAGE_SHIFT; 82 85 } 83 86 84 87 unsigned long arch_mmap_rnd(void) 85 88 { 86 - if (!(current->flags & PF_RANDOMIZE)) 87 - return 0; 88 89 return arch_rnd(mmap_is_ia32() ? mmap32_rnd_bits : mmap64_rnd_bits); 89 90 } 90 91
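After this hunk the `PF_RANDOMIZE` check lives inside `arch_rnd()` itself, so every caller gets 0 when randomization is off. A userspace sketch of the mask-and-shift (the flag is modelled as a plain parameter and `random()` stands in for `get_random_long()`):

```c
#include <assert.h>
#include <stdlib.h>

#define PAGE_SHIFT 12

/* Sketch of arch_rnd(): keep rndbits bits of entropy, shifted into
 * page units, and return 0 when randomization is disabled. */
static unsigned long arch_rnd(unsigned int rndbits, int pf_randomize)
{
	if (!pf_randomize)
		return 0;
	return ((unsigned long)random() & ((1UL << rndbits) - 1)) << PAGE_SHIFT;
}
```

The result is always page-aligned and bounded by `1 << (rndbits + PAGE_SHIFT)`, which is what makes the mmap base randomization range predictable per architecture.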
+2 -2
arch/x86/platform/uv/tlb_uv.c
··· 26 26 static struct bau_operations ops __ro_after_init; 27 27 28 28 /* timeouts in nanoseconds (indexed by UVH_AGING_PRESCALE_SEL urgency7 30:28) */ 29 - static int timeout_base_ns[] = { 29 + static const int timeout_base_ns[] = { 30 30 20, 31 31 160, 32 32 1280, ··· 1216 1216 * set a bit in the UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE register. 1217 1217 * Such a message must be ignored. 1218 1218 */ 1219 - void process_uv2_message(struct msg_desc *mdp, struct bau_control *bcp) 1219 + static void process_uv2_message(struct msg_desc *mdp, struct bau_control *bcp) 1220 1220 { 1221 1221 unsigned long mmr_image; 1222 1222 unsigned char swack_vec;
+3
block/blk-mq-debugfs.c
··· 75 75 QUEUE_FLAG_NAME(STATS), 76 76 QUEUE_FLAG_NAME(POLL_STATS), 77 77 QUEUE_FLAG_NAME(REGISTERED), 78 + QUEUE_FLAG_NAME(SCSI_PASSTHROUGH), 79 + QUEUE_FLAG_NAME(QUIESCED), 78 80 }; 79 81 #undef QUEUE_FLAG_NAME 80 82 ··· 267 265 CMD_FLAG_NAME(RAHEAD), 268 266 CMD_FLAG_NAME(BACKGROUND), 269 267 CMD_FLAG_NAME(NOUNMAP), 268 + CMD_FLAG_NAME(NOWAIT), 270 269 }; 271 270 #undef CMD_FLAG_NAME 272 271
+7 -1
block/blk-mq-pci.c
··· 36 36 for (queue = 0; queue < set->nr_hw_queues; queue++) { 37 37 mask = pci_irq_get_affinity(pdev, queue); 38 38 if (!mask) 39 - return -EINVAL; 39 + goto fallback; 40 40 41 41 for_each_cpu(cpu, mask) 42 42 set->mq_map[cpu] = queue; 43 43 } 44 44 45 + return 0; 46 + 47 + fallback: 48 + WARN_ON_ONCE(set->nr_hw_queues > 1); 49 + for_each_possible_cpu(cpu) 50 + set->mq_map[cpu] = 0; 45 51 return 0; 46 52 } 47 53 EXPORT_SYMBOL_GPL(blk_mq_pci_map_queues);
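The shape of the fallback above can be sketched in plain C: when any queue lacks an affinity mask (here a NULL entry stands in for a failed `pci_irq_get_affinity()`), map every CPU to queue 0 instead of failing the probe.

```c
#include <assert.h>
#include <stddef.h>

#define NR_CPUS 4

/* Illustrative stand-in for blk_mq_pci_map_queues(): masks[q] is the
 * per-queue CPU affinity mask, or NULL when none is available. */
static int map_queues(int mq_map[NR_CPUS], const int *const masks[],
		      int nr_hw_queues)
{
	int queue, cpu;

	for (queue = 0; queue < nr_hw_queues; queue++) {
		if (!masks[queue])
			goto fallback;
		for (cpu = 0; cpu < NR_CPUS; cpu++)
			if (masks[queue][cpu])
				mq_map[cpu] = queue;
	}
	return 0;

fallback:
	/* No affinity info: single-queue mapping still works. */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		mq_map[cpu] = 0;
	return 0;
}
```

Degrading to a single queue keeps the device usable, which is why the real code warns (`WARN_ON_ONCE`) rather than returning `-EINVAL`.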
+2 -3
block/blk-mq.c
··· 360 360 return ERR_PTR(ret); 361 361 362 362 rq = blk_mq_get_request(q, NULL, op, &alloc_data); 363 + blk_queue_exit(q); 363 364 364 365 if (!rq) 365 366 return ERR_PTR(-EWOULDBLOCK); 366 367 367 368 blk_mq_put_ctx(alloc_data.ctx); 368 - blk_queue_exit(q); 369 369 370 370 rq->__data_len = 0; 371 371 rq->__sector = (sector_t) -1; ··· 411 411 alloc_data.ctx = __blk_mq_get_ctx(q, cpu); 412 412 413 413 rq = blk_mq_get_request(q, NULL, op, &alloc_data); 414 + blk_queue_exit(q); 414 415 415 416 if (!rq) 416 417 return ERR_PTR(-EWOULDBLOCK); 417 - 418 - blk_queue_exit(q); 419 418 420 419 return rq; 421 420 }
+14 -4
block/blk-throttle.c
··· 382 382 } \ 383 383 } while (0) 384 384 385 + static inline unsigned int throtl_bio_data_size(struct bio *bio) 386 + { 387 + /* assume it's one sector */ 388 + if (unlikely(bio_op(bio) == REQ_OP_DISCARD)) 389 + return 512; 390 + return bio->bi_iter.bi_size; 391 + } 392 + 385 393 static void throtl_qnode_init(struct throtl_qnode *qn, struct throtl_grp *tg) 386 394 { 387 395 INIT_LIST_HEAD(&qn->node); ··· 942 934 bool rw = bio_data_dir(bio); 943 935 u64 bytes_allowed, extra_bytes, tmp; 944 936 unsigned long jiffy_elapsed, jiffy_wait, jiffy_elapsed_rnd; 937 + unsigned int bio_size = throtl_bio_data_size(bio); 945 938 946 939 jiffy_elapsed = jiffy_elapsed_rnd = jiffies - tg->slice_start[rw]; 947 940 ··· 956 947 do_div(tmp, HZ); 957 948 bytes_allowed = tmp; 958 949 959 - if (tg->bytes_disp[rw] + bio->bi_iter.bi_size <= bytes_allowed) { 950 + if (tg->bytes_disp[rw] + bio_size <= bytes_allowed) { 960 951 if (wait) 961 952 *wait = 0; 962 953 return true; 963 954 } 964 955 965 956 /* Calc approx time to dispatch */ 966 - extra_bytes = tg->bytes_disp[rw] + bio->bi_iter.bi_size - bytes_allowed; 957 + extra_bytes = tg->bytes_disp[rw] + bio_size - bytes_allowed; 967 958 jiffy_wait = div64_u64(extra_bytes * HZ, tg_bps_limit(tg, rw)); 968 959 969 960 if (!jiffy_wait) ··· 1043 1034 static void throtl_charge_bio(struct throtl_grp *tg, struct bio *bio) 1044 1035 { 1045 1036 bool rw = bio_data_dir(bio); 1037 + unsigned int bio_size = throtl_bio_data_size(bio); 1046 1038 1047 1039 /* Charge the bio to the group */ 1048 - tg->bytes_disp[rw] += bio->bi_iter.bi_size; 1040 + tg->bytes_disp[rw] += bio_size; 1049 1041 tg->io_disp[rw]++; 1050 - tg->last_bytes_disp[rw] += bio->bi_iter.bi_size; 1042 + tg->last_bytes_disp[rw] += bio_size; 1051 1043 tg->last_io_disp[rw]++; 1052 1044 1053 1045 /*
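The accounting rule introduced by `throtl_bio_data_size()` is simple: discard bios can describe gigabytes but cost the device little, so they are charged as one 512-byte sector. A self-contained sketch with a minimal `struct bio` stand-in:

```c
#include <assert.h>

enum req_op { REQ_OP_READ, REQ_OP_WRITE, REQ_OP_DISCARD };

struct bio {			/* minimal stand-in for struct bio */
	enum req_op op;
	unsigned int bi_size;	/* payload size in bytes */
};

/* Mirror of throtl_bio_data_size(): charge discards as one sector,
 * everything else by its actual byte count. */
static unsigned int throtl_bio_data_size(const struct bio *bio)
{
	if (bio->op == REQ_OP_DISCARD)
		return 512;
	return bio->bi_size;
}
```

Without this cap, a single large discard would exhaust a cgroup's bandwidth budget and stall unrelated I/O behind it.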
+44 -30
block/bsg-lib.c
··· 29 29 #include <scsi/scsi_cmnd.h> 30 30 31 31 /** 32 - * bsg_destroy_job - routine to teardown/delete a bsg job 32 + * bsg_teardown_job - routine to teardown a bsg job 33 33 * @job: bsg_job that is to be torn down 34 34 */ 35 - static void bsg_destroy_job(struct kref *kref) 35 + static void bsg_teardown_job(struct kref *kref) 36 36 { 37 37 struct bsg_job *job = container_of(kref, struct bsg_job, kref); 38 38 struct request *rq = job->req; 39 - 40 - blk_end_request_all(rq, BLK_STS_OK); 41 39 42 40 put_device(job->dev); /* release reference for the request */ 43 41 44 42 kfree(job->request_payload.sg_list); 45 43 kfree(job->reply_payload.sg_list); 46 - kfree(job); 44 + 45 + blk_end_request_all(rq, BLK_STS_OK); 47 46 } 48 47 49 48 void bsg_job_put(struct bsg_job *job) 50 49 { 51 - kref_put(&job->kref, bsg_destroy_job); 50 + kref_put(&job->kref, bsg_teardown_job); 52 51 } 53 52 EXPORT_SYMBOL_GPL(bsg_job_put); 54 53 ··· 99 100 */ 100 101 static void bsg_softirq_done(struct request *rq) 101 102 { 102 - struct bsg_job *job = rq->special; 103 + struct bsg_job *job = blk_mq_rq_to_pdu(rq); 103 104 104 105 bsg_job_put(job); 105 106 } ··· 121 122 } 122 123 123 124 /** 124 - * bsg_create_job - create the bsg_job structure for the bsg request 125 + * bsg_prepare_job - create the bsg_job structure for the bsg request 125 126 * @dev: device that is being sent the bsg request 126 127 * @req: BSG request that needs a job structure 127 128 */ 128 - static int bsg_create_job(struct device *dev, struct request *req) 129 + static int bsg_prepare_job(struct device *dev, struct request *req) 129 130 { 130 131 struct request *rsp = req->next_rq; 131 - struct request_queue *q = req->q; 132 132 struct scsi_request *rq = scsi_req(req); 133 - struct bsg_job *job; 133 + struct bsg_job *job = blk_mq_rq_to_pdu(req); 134 134 int ret; 135 135 136 - BUG_ON(req->special); 137 - 138 - job = kzalloc(sizeof(struct bsg_job) + q->bsg_job_size, GFP_KERNEL); 139 - if (!job) 140 - return -ENOMEM; 141 - 
142 - req->special = job; 143 - job->req = req; 144 - if (q->bsg_job_size) 145 - job->dd_data = (void *)&job[1]; 146 136 job->request = rq->cmd; 147 137 job->request_len = rq->cmd_len; 148 - job->reply = rq->sense; 149 - job->reply_len = SCSI_SENSE_BUFFERSIZE; /* Size of sense buffer 150 - * allocated */ 138 + 151 139 if (req->bio) { 152 140 ret = bsg_map_buffer(&job->request_payload, req); 153 141 if (ret) ··· 173 187 { 174 188 struct device *dev = q->queuedata; 175 189 struct request *req; 176 - struct bsg_job *job; 177 190 int ret; 178 191 179 192 if (!get_device(dev)) ··· 184 199 break; 185 200 spin_unlock_irq(q->queue_lock); 186 201 187 - ret = bsg_create_job(dev, req); 202 + ret = bsg_prepare_job(dev, req); 188 203 if (ret) { 189 204 scsi_req(req)->result = ret; 190 205 blk_end_request_all(req, BLK_STS_OK); ··· 192 207 continue; 193 208 } 194 209 195 - job = req->special; 196 - ret = q->bsg_job_fn(job); 210 + ret = q->bsg_job_fn(blk_mq_rq_to_pdu(req)); 197 211 spin_lock_irq(q->queue_lock); 198 212 if (ret) 199 213 break; ··· 201 217 spin_unlock_irq(q->queue_lock); 202 218 put_device(dev); 203 219 spin_lock_irq(q->queue_lock); 220 + } 221 + 222 + static int bsg_init_rq(struct request_queue *q, struct request *req, gfp_t gfp) 223 + { 224 + struct bsg_job *job = blk_mq_rq_to_pdu(req); 225 + struct scsi_request *sreq = &job->sreq; 226 + 227 + memset(job, 0, sizeof(*job)); 228 + 229 + scsi_req_init(sreq); 230 + sreq->sense_len = SCSI_SENSE_BUFFERSIZE; 231 + sreq->sense = kzalloc(sreq->sense_len, gfp); 232 + if (!sreq->sense) 233 + return -ENOMEM; 234 + 235 + job->req = req; 236 + job->reply = sreq->sense; 237 + job->reply_len = sreq->sense_len; 238 + job->dd_data = job + 1; 239 + 240 + return 0; 241 + } 242 + 243 + static void bsg_exit_rq(struct request_queue *q, struct request *req) 244 + { 245 + struct bsg_job *job = blk_mq_rq_to_pdu(req); 246 + struct scsi_request *sreq = &job->sreq; 247 + 248 + kfree(sreq->sense); 204 249 } 205 250 206 251 /** ··· 248 235 q = 
blk_alloc_queue(GFP_KERNEL); 249 236 if (!q) 250 237 return ERR_PTR(-ENOMEM); 251 - q->cmd_size = sizeof(struct scsi_request); 238 + q->cmd_size = sizeof(struct bsg_job) + dd_job_size; 239 + q->init_rq_fn = bsg_init_rq; 240 + q->exit_rq_fn = bsg_exit_rq; 252 241 q->request_fn = bsg_request_fn; 253 242 254 243 ret = blk_init_allocated_queue(q); ··· 258 243 goto out_cleanup_queue; 259 244 260 245 q->queuedata = dev; 261 - q->bsg_job_size = dd_job_size; 262 246 q->bsg_job_fn = job_fn; 263 247 queue_flag_set_unlocked(QUEUE_FLAG_BIDI, q); 264 248 queue_flag_set_unlocked(QUEUE_FLAG_SCSI_PASSTHROUGH, q);
+7 -3
drivers/acpi/acpica/nsxfeval.c
··· 100 100 free_buffer_on_error = TRUE; 101 101 } 102 102 103 - status = acpi_get_handle(handle, pathname, &target_handle); 104 - if (ACPI_FAILURE(status)) { 105 - return_ACPI_STATUS(status); 103 + if (pathname) { 104 + status = acpi_get_handle(handle, pathname, &target_handle); 105 + if (ACPI_FAILURE(status)) { 106 + return_ACPI_STATUS(status); 107 + } 108 + } else { 109 + target_handle = handle; 106 110 } 107 111 108 112 full_pathname = acpi_ns_get_external_pathname(target_handle);
+7 -10
drivers/acpi/ec.c
··· 1741 1741 * functioning ECDT EC first in order to handle the events. 1742 1742 * https://bugzilla.kernel.org/show_bug.cgi?id=115021 1743 1743 */ 1744 - int __init acpi_ec_ecdt_start(void) 1744 + static int __init acpi_ec_ecdt_start(void) 1745 1745 { 1746 1746 acpi_handle handle; 1747 1747 ··· 2003 2003 int __init acpi_ec_init(void) 2004 2004 { 2005 2005 int result; 2006 + int ecdt_fail, dsdt_fail; 2006 2007 2007 2008 /* register workqueue for _Qxx evaluations */ 2008 2009 result = acpi_ec_query_init(); 2009 2010 if (result) 2010 - goto err_exit; 2011 - /* Now register the driver for the EC */ 2012 - result = acpi_bus_register_driver(&acpi_ec_driver); 2013 - if (result) 2014 - goto err_exit; 2011 + return result; 2015 2012 2016 - err_exit: 2017 - if (result) 2018 - acpi_ec_query_exit(); 2019 - return result; 2013 + /* Drivers must be started after acpi_ec_query_init() */ 2014 + ecdt_fail = acpi_ec_ecdt_start(); 2015 + dsdt_fail = acpi_bus_register_driver(&acpi_ec_driver); 2016 + return ecdt_fail && dsdt_fail ? -ENODEV : 0; 2020 2017 } 2021 2018 2022 2019 /* EC driver currently not unloadable */
-1
drivers/acpi/internal.h
··· 185 185 int acpi_ec_init(void); 186 186 int acpi_ec_ecdt_probe(void); 187 187 int acpi_ec_dsdt_probe(void); 188 - int acpi_ec_ecdt_start(void); 189 188 void acpi_ec_block_transactions(void); 190 189 void acpi_ec_unblock_transactions(void); 191 190 int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit,
+1 -1
drivers/acpi/property.c
··· 1047 1047 fwnode_for_each_child_node(fwnode, child) { 1048 1048 u32 nr; 1049 1049 1050 - if (!fwnode_property_read_u32(fwnode, prop_name, &nr)) 1050 + if (fwnode_property_read_u32(child, prop_name, &nr)) 1051 1051 continue; 1052 1052 1053 1053 if (val == nr)
-1
drivers/acpi/scan.c
··· 2084 2084 2085 2085 acpi_gpe_apply_masked_gpes(); 2086 2086 acpi_update_all_gpes(); 2087 - acpi_ec_ecdt_start(); 2088 2087 2089 2088 acpi_scan_initialized = true; 2090 2089
+1 -1
drivers/android/binder.c
··· 3362 3362 const char *failure_string; 3363 3363 struct binder_buffer *buffer; 3364 3364 3365 - if (proc->tsk != current) 3365 + if (proc->tsk != current->group_leader) 3366 3366 return -EINVAL; 3367 3367 3368 3368 if ((vma->vm_end - vma->vm_start) > SZ_4M)
+6 -36
drivers/block/loop.c
··· 221 221 } 222 222 223 223 static int 224 - figure_loop_size(struct loop_device *lo, loff_t offset, loff_t sizelimit, 225 - loff_t logical_blocksize) 224 + figure_loop_size(struct loop_device *lo, loff_t offset, loff_t sizelimit) 226 225 { 227 226 loff_t size = get_size(offset, sizelimit, lo->lo_backing_file); 228 227 sector_t x = (sector_t)size; ··· 233 234 lo->lo_offset = offset; 234 235 if (lo->lo_sizelimit != sizelimit) 235 236 lo->lo_sizelimit = sizelimit; 236 - if (lo->lo_flags & LO_FLAGS_BLOCKSIZE) { 237 - lo->lo_logical_blocksize = logical_blocksize; 238 - blk_queue_physical_block_size(lo->lo_queue, lo->lo_blocksize); 239 - blk_queue_logical_block_size(lo->lo_queue, 240 - lo->lo_logical_blocksize); 241 - } 242 237 set_capacity(lo->lo_disk, x); 243 238 bd_set_size(bdev, (loff_t)get_capacity(bdev->bd_disk) << 9); 244 239 /* let user-space know about the new size */ ··· 813 820 struct file *file = lo->lo_backing_file; 814 821 struct inode *inode = file->f_mapping->host; 815 822 struct request_queue *q = lo->lo_queue; 816 - int lo_bits = 9; 817 823 818 824 /* 819 825 * We use punch hole to reclaim the free space used by the ··· 832 840 833 841 q->limits.discard_granularity = inode->i_sb->s_blocksize; 834 842 q->limits.discard_alignment = 0; 835 - if (lo->lo_flags & LO_FLAGS_BLOCKSIZE) 836 - lo_bits = blksize_bits(lo->lo_logical_blocksize); 837 843 838 - blk_queue_max_discard_sectors(q, UINT_MAX >> lo_bits); 839 - blk_queue_max_write_zeroes_sectors(q, UINT_MAX >> lo_bits); 844 + blk_queue_max_discard_sectors(q, UINT_MAX >> 9); 845 + blk_queue_max_write_zeroes_sectors(q, UINT_MAX >> 9); 840 846 queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q); 841 847 } 842 848 ··· 928 938 929 939 lo->use_dio = false; 930 940 lo->lo_blocksize = lo_blocksize; 931 - lo->lo_logical_blocksize = 512; 932 941 lo->lo_device = bdev; 933 942 lo->lo_flags = lo_flags; 934 943 lo->lo_backing_file = file; ··· 1093 1104 int err; 1094 1105 struct loop_func_table *xfer; 1095 1106 kuid_t uid = 
current_uid(); 1096 - int lo_flags = lo->lo_flags; 1097 1107 1098 1108 if (lo->lo_encrypt_key_size && 1099 1109 !uid_eq(lo->lo_key_owner, uid) && ··· 1125 1137 if (err) 1126 1138 goto exit; 1127 1139 1128 - if (info->lo_flags & LO_FLAGS_BLOCKSIZE) { 1129 - if (!(lo->lo_flags & LO_FLAGS_BLOCKSIZE)) 1130 - lo->lo_logical_blocksize = 512; 1131 - lo->lo_flags |= LO_FLAGS_BLOCKSIZE; 1132 - if (LO_INFO_BLOCKSIZE(info) != 512 && 1133 - LO_INFO_BLOCKSIZE(info) != 1024 && 1134 - LO_INFO_BLOCKSIZE(info) != 2048 && 1135 - LO_INFO_BLOCKSIZE(info) != 4096) 1136 - return -EINVAL; 1137 - if (LO_INFO_BLOCKSIZE(info) > lo->lo_blocksize) 1138 - return -EINVAL; 1139 - } 1140 - 1141 1140 if (lo->lo_offset != info->lo_offset || 1142 - lo->lo_sizelimit != info->lo_sizelimit || 1143 - lo->lo_flags != lo_flags || 1144 - ((lo->lo_flags & LO_FLAGS_BLOCKSIZE) && 1145 - lo->lo_logical_blocksize != LO_INFO_BLOCKSIZE(info))) { 1146 - if (figure_loop_size(lo, info->lo_offset, info->lo_sizelimit, 1147 - LO_INFO_BLOCKSIZE(info))) { 1141 + lo->lo_sizelimit != info->lo_sizelimit) { 1142 + if (figure_loop_size(lo, info->lo_offset, info->lo_sizelimit)) { 1148 1143 err = -EFBIG; 1149 1144 goto exit; 1150 1145 } ··· 1319 1348 if (unlikely(lo->lo_state != Lo_bound)) 1320 1349 return -ENXIO; 1321 1350 1322 - return figure_loop_size(lo, lo->lo_offset, lo->lo_sizelimit, 1323 - lo->lo_logical_blocksize); 1351 + return figure_loop_size(lo, lo->lo_offset, lo->lo_sizelimit); 1324 1352 } 1325 1353 1326 1354 static int loop_set_dio(struct loop_device *lo, unsigned long arg)
-1
drivers/block/loop.h
··· 49 49 struct file * lo_backing_file; 50 50 struct block_device *lo_device; 51 51 unsigned lo_blocksize; 52 - unsigned lo_logical_blocksize; 53 52 void *key_data; 54 53 55 54 gfp_t old_gfp_mask;
+10 -6
drivers/block/virtio_blk.c
··· 381 381 struct request_queue *q = vblk->disk->queue; 382 382 char cap_str_2[10], cap_str_10[10]; 383 383 char *envp[] = { "RESIZE=1", NULL }; 384 + unsigned long long nblocks; 384 385 u64 capacity; 385 386 386 387 /* Host must always specify the capacity. */ ··· 394 393 capacity = (sector_t)-1; 395 394 } 396 395 397 - string_get_size(capacity, queue_logical_block_size(q), 396 + nblocks = DIV_ROUND_UP_ULL(capacity, queue_logical_block_size(q) >> 9); 397 + 398 + string_get_size(nblocks, queue_logical_block_size(q), 398 399 STRING_UNITS_2, cap_str_2, sizeof(cap_str_2)); 399 - string_get_size(capacity, queue_logical_block_size(q), 400 + string_get_size(nblocks, queue_logical_block_size(q), 400 401 STRING_UNITS_10, cap_str_10, sizeof(cap_str_10)); 401 402 402 403 dev_notice(&vdev->dev, 403 - "new size: %llu %d-byte logical blocks (%s/%s)\n", 404 - (unsigned long long)capacity, 405 - queue_logical_block_size(q), 406 - cap_str_10, cap_str_2); 404 + "new size: %llu %d-byte logical blocks (%s/%s)\n", 405 + nblocks, 406 + queue_logical_block_size(q), 407 + cap_str_10, 408 + cap_str_2); 407 409 408 410 set_capacity(vblk->disk, capacity); 409 411 revalidate_disk(vblk->disk);
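The unit fix above matters because virtio reports `capacity` in 512-byte sectors, while the message prints logical blocks. The conversion is a rounded-up division by the number of sectors per logical block:

```c
#include <assert.h>
#include <stdint.h>

#define DIV_ROUND_UP_ULL(n, d) (((n) + (d) - 1) / (d))

/* capacity arrives in 512-byte sectors; convert to logical blocks
 * before pretty-printing, as the hunk above now does. */
static unsigned long long sectors_to_nblocks(uint64_t capacity,
					     unsigned int logical_block_size)
{
	return DIV_ROUND_UP_ULL(capacity, logical_block_size >> 9);
}
```

For a 512-byte block size the divisor is 1 and the count is unchanged, which is why the bug only showed up on devices with 4K logical blocks.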
+3 -3
drivers/block/xen-blkfront.c
··· 2075 2075 /* 2076 2076 * Get the bios in the request so we can re-queue them. 2077 2077 */ 2078 - if (req_op(shadow[i].request) == REQ_OP_FLUSH || 2079 - req_op(shadow[i].request) == REQ_OP_DISCARD || 2080 - req_op(shadow[i].request) == REQ_OP_SECURE_ERASE || 2078 + if (req_op(shadow[j].request) == REQ_OP_FLUSH || 2079 + req_op(shadow[j].request) == REQ_OP_DISCARD || 2080 + req_op(shadow[j].request) == REQ_OP_SECURE_ERASE || 2081 2081 shadow[j].request->cmd_flags & REQ_FUA) { 2082 2082 /* 2083 2083 * Flush operations don't contain bios, so
+1 -1
drivers/clocksource/Kconfig
··· 262 262 263 263 config CLKSRC_PISTACHIO 264 264 bool "Clocksource for Pistachio SoC" if COMPILE_TEST 265 - depends on HAS_IOMEM 265 + depends on GENERIC_CLOCKEVENTS && HAS_IOMEM 266 266 select TIMER_OF 267 267 help 268 268 Enables the clocksource for the Pistachio SoC.
+1 -1
drivers/clocksource/arm_arch_timer.c
··· 1440 1440 * While unlikely, it's theoretically possible that none of the frames 1441 1441 * in a timer expose the combination of feature we want. 1442 1442 */ 1443 - for (i = i; i < timer_count; i++) { 1443 + for (i = 0; i < timer_count; i++) { 1444 1444 timer = &timers[i]; 1445 1445 1446 1446 frame = arch_timer_mem_find_best_frame(timer);
+6 -5
drivers/clocksource/em_sti.c
··· 305 305 irq = platform_get_irq(pdev, 0); 306 306 if (irq < 0) { 307 307 dev_err(&pdev->dev, "failed to get irq\n"); 308 - return -EINVAL; 308 + return irq; 309 309 } 310 310 311 311 /* map memory, let base point to the STI instance */ ··· 314 314 if (IS_ERR(p->base)) 315 315 return PTR_ERR(p->base); 316 316 317 - if (devm_request_irq(&pdev->dev, irq, em_sti_interrupt, 318 - IRQF_TIMER | IRQF_IRQPOLL | IRQF_NOBALANCING, 319 - dev_name(&pdev->dev), p)) { 317 + ret = devm_request_irq(&pdev->dev, irq, em_sti_interrupt, 318 + IRQF_TIMER | IRQF_IRQPOLL | IRQF_NOBALANCING, 319 + dev_name(&pdev->dev), p); 320 + if (ret) { 320 321 dev_err(&pdev->dev, "failed to request low IRQ\n"); 321 - return -ENOENT; 322 + return ret; 322 323 } 323 324 324 325 /* get hold of clock */
+2 -2
drivers/clocksource/timer-of.c
··· 128 128 const char *name = of_base->name ? of_base->name : np->full_name; 129 129 130 130 of_base->base = of_io_request_and_map(np, of_base->index, name); 131 - if (!of_base->base) { 131 + if (IS_ERR(of_base->base)) { 132 132 pr_err("Failed to iomap (%s)\n", name); 133 - return -ENXIO; 133 + return PTR_ERR(of_base->base); 134 134 } 135 135 136 136 return 0;
+1 -2
drivers/cpufreq/intel_pstate.c
··· 1613 1613 1614 1614 static inline int32_t get_avg_frequency(struct cpudata *cpu) 1615 1615 { 1616 - return mul_ext_fp(cpu->sample.core_avg_perf, 1617 - cpu->pstate.max_pstate_physical * cpu->pstate.scaling); 1616 + return mul_ext_fp(cpu->sample.core_avg_perf, cpu_khz); 1618 1617 } 1619 1618 1620 1619 static inline int32_t get_avg_pstate(struct cpudata *cpu)
+3 -3
drivers/crypto/ixp4xx_crypto.c
··· 1073 1073 req_ctx->hmac_virt = dma_pool_alloc(buffer_pool, flags, 1074 1074 &crypt->icv_rev_aes); 1075 1075 if (unlikely(!req_ctx->hmac_virt)) 1076 - goto free_buf_src; 1076 + goto free_buf_dst; 1077 1077 if (!encrypt) { 1078 1078 scatterwalk_map_and_copy(req_ctx->hmac_virt, 1079 1079 req->src, cryptlen, authsize, 0); ··· 1088 1088 BUG_ON(qmgr_stat_overflow(SEND_QID)); 1089 1089 return -EINPROGRESS; 1090 1090 1091 - free_buf_src: 1092 - free_buf_chain(dev, req_ctx->src, crypt->src_buf); 1093 1091 free_buf_dst: 1094 1092 free_buf_chain(dev, req_ctx->dst, crypt->dst_buf); 1093 + free_buf_src: 1094 + free_buf_chain(dev, req_ctx->src, crypt->src_buf); 1095 1095 crypt->ctl_flags = CTL_FLAG_UNUSED; 1096 1096 return -ENOMEM; 1097 1097 }
+2 -2
drivers/dma/tegra210-adma.c
··· 717 717 tdc->chan_addr = tdma->base_addr + ADMA_CH_REG_OFFSET(i); 718 718 719 719 tdc->irq = of_irq_get(pdev->dev.of_node, i); 720 - if (tdc->irq < 0) { 721 - ret = tdc->irq; 720 + if (tdc->irq <= 0) { 721 + ret = tdc->irq ?: -ENXIO; 722 722 goto irq_dispose; 723 723 } 724 724
+1 -1
drivers/gpio/gpio-mvebu.c
··· 557 557 edge_cause = mvebu_gpio_read_edge_cause(mvchip); 558 558 edge_mask = mvebu_gpio_read_edge_mask(mvchip); 559 559 560 - cause = (data_in ^ level_mask) | (edge_cause & edge_mask); 560 + cause = (data_in & level_mask) | (edge_cause & edge_mask); 561 561 562 562 for (i = 0; i < mvchip->chip.ngpio; i++) { 563 563 int irq;
+8 -2
drivers/gpio/gpiolib-sysfs.c
··· 2 2 #include <linux/mutex.h> 3 3 #include <linux/device.h> 4 4 #include <linux/sysfs.h> 5 + #include <linux/gpio.h> 5 6 #include <linux/gpio/consumer.h> 6 7 #include <linux/gpio/driver.h> 7 8 #include <linux/interrupt.h> ··· 433 432 }; 434 433 ATTRIBUTE_GROUPS(gpiochip); 435 434 435 + static struct gpio_desc *gpio_to_valid_desc(int gpio) 436 + { 437 + return gpio_is_valid(gpio) ? gpio_to_desc(gpio) : NULL; 438 + } 439 + 436 440 /* 437 441 * /sys/class/gpio/export ... write-only 438 442 * integer N ... number of GPIO to export (full access) ··· 456 450 if (status < 0) 457 451 goto done; 458 452 459 - desc = gpio_to_desc(gpio); 453 + desc = gpio_to_valid_desc(gpio); 460 454 /* reject invalid GPIOs */ 461 455 if (!desc) { 462 456 pr_warn("%s: invalid GPIO %ld\n", __func__, gpio); ··· 499 493 if (status < 0) 500 494 goto done; 501 495 502 - desc = gpio_to_desc(gpio); 496 + desc = gpio_to_valid_desc(gpio); 503 497 /* reject bogus commands (gpio_unexport ignores them) */ 504 498 if (!desc) { 505 499 pr_warn("%s: invalid GPIO %ld\n", __func__, gpio);
+6 -7
drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
··· 244 244 struct dma_fence *f = e->fence; 245 245 struct amd_sched_fence *s_fence = to_amd_sched_fence(f); 246 246 247 + if (dma_fence_is_signaled(f)) { 248 + hash_del(&e->node); 249 + dma_fence_put(f); 250 + kmem_cache_free(amdgpu_sync_slab, e); 251 + continue; 252 + } 247 253 if (ring && s_fence) { 248 254 /* For fences from the same ring it is sufficient 249 255 * when they are scheduled. ··· 260 254 261 255 return &s_fence->scheduled; 262 256 } 263 - } 264 - 265 - if (dma_fence_is_signaled(f)) { 266 - hash_del(&e->node); 267 - dma_fence_put(f); 268 - kmem_cache_free(amdgpu_sync_slab, e); 269 - continue; 270 257 } 271 258 272 259 return f;
+8 -3
drivers/gpu/drm/drm_atomic.c
··· 1655 1655 if (config->funcs->atomic_check) 1656 1656 ret = config->funcs->atomic_check(state->dev, state); 1657 1657 1658 + if (ret) 1659 + return ret; 1660 + 1658 1661 if (!state->allow_modeset) { 1659 1662 for_each_new_crtc_in_state(state, crtc, crtc_state, i) { 1660 1663 if (drm_atomic_crtc_needs_modeset(crtc_state)) { ··· 1668 1665 } 1669 1666 } 1670 1667 1671 - return ret; 1668 + return 0; 1672 1669 } 1673 1670 EXPORT_SYMBOL(drm_atomic_check_only); 1674 1671 ··· 2170 2167 struct drm_atomic_state *state; 2171 2168 struct drm_modeset_acquire_ctx ctx; 2172 2169 struct drm_plane *plane; 2173 - struct drm_out_fence_state *fence_state = NULL; 2170 + struct drm_out_fence_state *fence_state; 2174 2171 unsigned plane_mask; 2175 2172 int ret = 0; 2176 - unsigned int i, j, num_fences = 0; 2173 + unsigned int i, j, num_fences; 2177 2174 2178 2175 /* disallow for drivers not supporting atomic: */ 2179 2176 if (!drm_core_check_feature(dev, DRIVER_ATOMIC)) ··· 2214 2211 plane_mask = 0; 2215 2212 copied_objs = 0; 2216 2213 copied_props = 0; 2214 + fence_state = NULL; 2215 + num_fences = 0; 2217 2216 2218 2217 for (i = 0; i < arg->count_objs; i++) { 2219 2218 uint32_t obj_id, count_props;
+3 -3
drivers/gpu/drm/drm_gem.c
··· 255 255 struct drm_gem_object *obj = ptr; 256 256 struct drm_device *dev = obj->dev; 257 257 258 + if (dev->driver->gem_close_object) 259 + dev->driver->gem_close_object(obj, file_priv); 260 + 258 261 if (drm_core_check_feature(dev, DRIVER_PRIME)) 259 262 drm_gem_remove_prime_handles(obj, file_priv); 260 263 drm_vma_node_revoke(&obj->vma_node, file_priv); 261 - 262 - if (dev->driver->gem_close_object) 263 - dev->driver->gem_close_object(obj, file_priv); 264 264 265 265 drm_gem_object_handle_put_unlocked(obj); 266 266
+1
drivers/gpu/drm/drm_plane.c
··· 601 601 602 602 crtc = drm_crtc_find(dev, plane_req->crtc_id); 603 603 if (!crtc) { 604 + drm_framebuffer_put(fb); 604 605 DRM_DEBUG_KMS("Unknown crtc ID %d\n", 605 606 plane_req->crtc_id); 606 607 return -ENOENT;
+1 -1
drivers/gpu/drm/i915/gvt/cmd_parser.c
··· 2714 2714 unmap_src: 2715 2715 i915_gem_object_unpin_map(obj); 2716 2716 put_obj: 2717 - i915_gem_object_put(wa_ctx->indirect_ctx.obj); 2717 + i915_gem_object_put(obj); 2718 2718 return ret; 2719 2719 } 2720 2720
+1 -1
drivers/gpu/drm/i915/i915_debugfs.c
··· 4580 4580 4581 4581 sseu->slice_mask |= BIT(s); 4582 4582 4583 - if (IS_GEN9_BC(dev_priv)) 4583 + if (IS_GEN9_BC(dev_priv) || IS_CANNONLAKE(dev_priv)) 4584 4584 sseu->subslice_mask = 4585 4585 INTEL_INFO(dev_priv)->sseu.subslice_mask; 4586 4586
+8 -7
drivers/gpu/drm/i915/i915_gem_context.c
··· 688 688 } 689 689 690 690 static bool 691 - needs_pd_load_pre(struct i915_hw_ppgtt *ppgtt, 692 - struct intel_engine_cs *engine, 693 - struct i915_gem_context *to) 691 + needs_pd_load_pre(struct i915_hw_ppgtt *ppgtt, struct intel_engine_cs *engine) 694 692 { 693 + struct i915_gem_context *from = engine->legacy_active_context; 694 + 695 695 if (!ppgtt) 696 696 return false; 697 697 698 698 /* Always load the ppgtt on first use */ 699 - if (!engine->legacy_active_context) 699 + if (!from) 700 700 return true; 701 701 702 702 /* Same context without new entries, skip */ 703 - if (engine->legacy_active_context == to && 703 + if ((!from->ppgtt || from->ppgtt == ppgtt) && 704 704 !(intel_engine_flag(engine) & ppgtt->pd_dirty_rings)) 705 705 return false; 706 706 ··· 744 744 if (skip_rcs_switch(ppgtt, engine, to)) 745 745 return 0; 746 746 747 - if (needs_pd_load_pre(ppgtt, engine, to)) { 747 + if (needs_pd_load_pre(ppgtt, engine)) { 748 748 /* Older GENs and non render rings still want the load first, 749 749 * "PP_DCLV followed by PP_DIR_BASE register through Load 750 750 * Register Immediate commands in Ring Buffer before submitting ··· 841 841 struct i915_hw_ppgtt *ppgtt = 842 842 to->ppgtt ?: req->i915->mm.aliasing_ppgtt; 843 843 844 - if (needs_pd_load_pre(ppgtt, engine, to)) { 844 + if (needs_pd_load_pre(ppgtt, engine)) { 845 845 int ret; 846 846 847 847 trace_switch_mm(engine, to); ··· 852 852 ppgtt->pd_dirty_rings &= ~intel_engine_flag(engine); 853 853 } 854 854 855 + engine->legacy_active_context = to; 855 856 return 0; 856 857 } 857 858
+4
drivers/gpu/drm/i915/i915_gem_render_state.c
··· 242 242 goto err_unpin; 243 243 } 244 244 245 + ret = req->engine->emit_flush(req, EMIT_INVALIDATE); 246 + if (ret) 247 + goto err_unpin; 248 + 245 249 ret = req->engine->emit_bb_start(req, 246 250 so->batch_offset, so->batch_size, 247 251 I915_DISPATCH_SECURE);
+9 -6
drivers/gpu/drm/i915/intel_bios.c
··· 1120 1120 bool is_dvi, is_hdmi, is_dp, is_edp, is_crt; 1121 1121 uint8_t aux_channel, ddc_pin; 1122 1122 /* Each DDI port can have more than one value on the "DVO Port" field, 1123 - * so look for all the possible values for each port and abort if more 1124 - * than one is found. */ 1123 + * so look for all the possible values for each port. 1124 + */ 1125 1125 int dvo_ports[][3] = { 1126 1126 {DVO_PORT_HDMIA, DVO_PORT_DPA, -1}, 1127 1127 {DVO_PORT_HDMIB, DVO_PORT_DPB, -1}, ··· 1130 1130 {DVO_PORT_CRT, DVO_PORT_HDMIE, DVO_PORT_DPE}, 1131 1131 }; 1132 1132 1133 - /* Find the child device to use, abort if more than one found. */ 1133 + /* 1134 + * Find the first child device to reference the port, report if more 1135 + * than one found. 1136 + */ 1134 1137 for (i = 0; i < dev_priv->vbt.child_dev_num; i++) { 1135 1138 it = dev_priv->vbt.child_dev + i; 1136 1139 ··· 1143 1140 1144 1141 if (it->common.dvo_port == dvo_ports[port][j]) { 1145 1142 if (child) { 1146 - DRM_DEBUG_KMS("More than one child device for port %c in VBT.\n", 1143 + DRM_DEBUG_KMS("More than one child device for port %c in VBT, using the first.\n", 1147 1144 port_name(port)); 1148 - return; 1145 + } else { 1146 + child = it; 1149 1147 } 1150 - child = it; 1151 1148 } 1152 1149 } 1153 1150 }
+1 -1
drivers/gpu/drm/i915/intel_ddi.c
··· 1762 1762 if (dev_priv->vbt.edp.low_vswing) { 1763 1763 if (voltage == VOLTAGE_INFO_0_85V) { 1764 1764 *n_entries = ARRAY_SIZE(cnl_ddi_translations_edp_0_85V); 1765 - return cnl_ddi_translations_dp_0_85V; 1765 + return cnl_ddi_translations_edp_0_85V; 1766 1766 } else if (voltage == VOLTAGE_INFO_0_95V) { 1767 1767 *n_entries = ARRAY_SIZE(cnl_ddi_translations_edp_0_95V); 1768 1768 return cnl_ddi_translations_edp_0_95V;
+7
drivers/gpu/drm/i915/intel_display.c
··· 3485 3485 !gpu_reset_clobbers_display(dev_priv)) 3486 3486 return; 3487 3487 3488 + /* We have a modeset vs reset deadlock, defensively unbreak it. 3489 + * 3490 + * FIXME: We can do a _lot_ better, this is just a first iteration. 3491 + */ 3492 + i915_gem_set_wedged(dev_priv); 3493 + DRM_DEBUG_DRIVER("Wedging GPU to avoid deadlocks with pending modeset updates\n"); 3494 + 3488 3495 /* 3489 3496 * Need mode_config.mutex so that we don't 3490 3497 * trample ongoing ->detect() and whatnot.
+1 -1
drivers/gpu/drm/i915/intel_dsi_dcs_backlight.c
··· 46 46 struct intel_encoder *encoder = connector->encoder; 47 47 struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base); 48 48 struct mipi_dsi_device *dsi_device; 49 - u8 data; 49 + u8 data = 0; 50 50 enum port port; 51 51 52 52 /* FIXME: Need to take care of 16 bit brightness level */
+1 -1
drivers/gpu/drm/i915/intel_dsi_vbt.c
··· 306 306 307 307 if (!gpio_desc) { 308 308 gpio_desc = devm_gpiod_get_index(dev_priv->drm.dev, 309 - "panel", gpio_index, 309 + NULL, gpio_index, 310 310 value ? GPIOD_OUT_LOW : 311 311 GPIOD_OUT_HIGH); 312 312
+22 -1
drivers/gpu/drm/i915/intel_lrc.c
··· 1221 1221 return ret; 1222 1222 } 1223 1223 1224 + static u8 gtiir[] = { 1225 + [RCS] = 0, 1226 + [BCS] = 0, 1227 + [VCS] = 1, 1228 + [VCS2] = 1, 1229 + [VECS] = 3, 1230 + }; 1231 + 1224 1232 static int gen8_init_common_ring(struct intel_engine_cs *engine) 1225 1233 { 1226 1234 struct drm_i915_private *dev_priv = engine->i915; ··· 1253 1245 1254 1246 DRM_DEBUG_DRIVER("Execlists enabled for %s\n", engine->name); 1255 1247 1256 - /* After a GPU reset, we may have requests to replay */ 1248 + GEM_BUG_ON(engine->id >= ARRAY_SIZE(gtiir)); 1249 + 1250 + /* 1251 + * Clear any pending interrupt state. 1252 + * 1253 + * We do it twice out of paranoia that some of the IIR are double 1254 + * buffered, and if we only reset it once there may still be 1255 + * an interrupt pending. 1256 + */ 1257 + I915_WRITE(GEN8_GT_IIR(gtiir[engine->id]), 1258 + GT_CONTEXT_SWITCH_INTERRUPT << engine->irq_shift); 1259 + I915_WRITE(GEN8_GT_IIR(gtiir[engine->id]), 1260 + GT_CONTEXT_SWITCH_INTERRUPT << engine->irq_shift); 1257 1261 clear_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted); 1258 1262 1263 + /* After a GPU reset, we may have requests to replay */ 1259 1264 submit = false; 1260 1265 for (n = 0; n < ARRAY_SIZE(engine->execlist_port); n++) { 1261 1266 if (!port_isset(&port[n]))
-1
drivers/gpu/drm/i915/intel_lrc.h
··· 63 63 }; 64 64 65 65 /* Logical Rings */ 66 - void intel_logical_ring_stop(struct intel_engine_cs *engine); 67 66 void intel_logical_ring_cleanup(struct intel_engine_cs *engine); 68 67 int logical_render_ring_init(struct intel_engine_cs *engine); 69 68 int logical_xcs_ring_init(struct intel_engine_cs *engine);
+2 -2
drivers/gpu/drm/i915/intel_lspcon.c
··· 210 210 struct drm_device *dev = intel_dig_port->base.base.dev; 211 211 struct drm_i915_private *dev_priv = to_i915(dev); 212 212 213 - if (!IS_GEN9(dev_priv)) { 214 - DRM_ERROR("LSPCON is supported on GEN9 only\n"); 213 + if (!HAS_LSPCON(dev_priv)) { 214 + DRM_ERROR("LSPCON is not supported on this platform\n"); 215 215 return false; 216 216 } 217 217
+2 -4
drivers/gpu/drm/imx/ipuv3-plane.c
··· 545 545 return; 546 546 } 547 547 548 + ics = ipu_drm_fourcc_to_colorspace(fb->format->format); 548 549 switch (ipu_plane->dp_flow) { 549 550 case IPU_DP_FLOW_SYNC_BG: 550 - ipu_dp_setup_channel(ipu_plane->dp, 551 - IPUV3_COLORSPACE_RGB, 552 - IPUV3_COLORSPACE_RGB); 551 + ipu_dp_setup_channel(ipu_plane->dp, ics, IPUV3_COLORSPACE_RGB); 553 552 ipu_dp_set_global_alpha(ipu_plane->dp, true, 0, true); 554 553 break; 555 554 case IPU_DP_FLOW_SYNC_FG: 556 - ics = ipu_drm_fourcc_to_colorspace(state->fb->format->format); 557 555 ipu_dp_setup_channel(ipu_plane->dp, ics, 558 556 IPUV3_COLORSPACE_UNKNOWN); 559 557 /* Enable local alpha on partial plane */
+10 -2
drivers/gpu/drm/rockchip/rockchip_drm_drv.c
··· 275 275 static int rockchip_drm_sys_suspend(struct device *dev) 276 276 { 277 277 struct drm_device *drm = dev_get_drvdata(dev); 278 - struct rockchip_drm_private *priv = drm->dev_private; 278 + struct rockchip_drm_private *priv; 279 + 280 + if (!drm) 281 + return 0; 279 282 280 283 drm_kms_helper_poll_disable(drm); 281 284 rockchip_drm_fb_suspend(drm); 282 285 286 + priv = drm->dev_private; 283 287 priv->state = drm_atomic_helper_suspend(drm); 284 288 if (IS_ERR(priv->state)) { 285 289 rockchip_drm_fb_resume(drm); ··· 297 293 static int rockchip_drm_sys_resume(struct device *dev) 298 294 { 299 295 struct drm_device *drm = dev_get_drvdata(dev); 300 - struct rockchip_drm_private *priv = drm->dev_private; 296 + struct rockchip_drm_private *priv; 301 297 298 + if (!drm) 299 + return 0; 300 + 301 + priv = drm->dev_private; 302 302 drm_atomic_helper_resume(drm, priv->state); 303 303 rockchip_drm_fb_resume(drm); 304 304 drm_kms_helper_poll_enable(drm);
+8
drivers/gpu/drm/sun4i/sun4i_drv.c
··· 25 25 #include "sun4i_framebuffer.h" 26 26 #include "sun4i_tcon.h" 27 27 28 + static void sun4i_drv_lastclose(struct drm_device *dev) 29 + { 30 + struct sun4i_drv *drv = dev->dev_private; 31 + 32 + drm_fbdev_cma_restore_mode(drv->fbdev); 33 + } 34 + 28 35 DEFINE_DRM_GEM_CMA_FOPS(sun4i_drv_fops); 29 36 30 37 static struct drm_driver sun4i_drv_driver = { 31 38 .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME | DRIVER_ATOMIC, 32 39 33 40 /* Generic Operations */ 41 + .lastclose = sun4i_drv_lastclose, 34 42 .fops = &sun4i_drv_fops, 35 43 .name = "sun4i-drm", 36 44 .desc = "Allwinner sun4i Display Engine",
+1
drivers/gpu/ipu-v3/Kconfig
··· 1 1 config IMX_IPUV3_CORE 2 2 tristate "IPUv3 core support" 3 3 depends on SOC_IMX5 || SOC_IMX6Q || ARCH_MULTIPLATFORM 4 + depends on DRM || !DRM # if DRM=m, this can't be 'y' 4 5 select GENERIC_IRQ_CHIP 5 6 help 6 7 Choose this if you have a i.MX5/6 system and want to use the Image
+3 -2
drivers/i2c/busses/i2c-aspeed.c
··· 410 410 } 411 411 412 412 /* We are in an invalid state; reset bus to a known state. */ 413 - if (!bus->msgs && bus->master_state != ASPEED_I2C_MASTER_STOP) { 413 + if (!bus->msgs) { 414 414 dev_err(bus->dev, "bus in unknown state"); 415 415 bus->cmd_err = -EIO; 416 - aspeed_i2c_do_stop(bus); 416 + if (bus->master_state != ASPEED_I2C_MASTER_STOP) 417 + aspeed_i2c_do_stop(bus); 417 418 goto out_no_complete; 418 419 } 419 420 msg = &bus->msgs[bus->msgs_index];
+13 -4
drivers/i2c/busses/i2c-designware-platdrv.c
··· 198 198 dev->functionality = I2C_FUNC_SLAVE | DW_IC_DEFAULT_FUNCTIONALITY; 199 199 200 200 dev->slave_cfg = DW_IC_CON_RX_FIFO_FULL_HLD_CTRL | 201 - DW_IC_CON_RESTART_EN | DW_IC_CON_STOP_DET_IFADDRESSED | 202 - DW_IC_CON_SPEED_FAST; 201 + DW_IC_CON_RESTART_EN | DW_IC_CON_STOP_DET_IFADDRESSED; 203 202 204 203 dev->mode = DW_IC_SLAVE; 205 204 ··· 429 430 #endif 430 431 431 432 #ifdef CONFIG_PM 432 - static int dw_i2c_plat_suspend(struct device *dev) 433 + static int dw_i2c_plat_runtime_suspend(struct device *dev) 433 434 { 434 435 struct platform_device *pdev = to_platform_device(dev); 435 436 struct dw_i2c_dev *i_dev = platform_get_drvdata(pdev); ··· 451 452 return 0; 452 453 } 453 454 455 + #ifdef CONFIG_PM_SLEEP 456 + static int dw_i2c_plat_suspend(struct device *dev) 457 + { 458 + pm_runtime_resume(dev); 459 + return dw_i2c_plat_runtime_suspend(dev); 460 + } 461 + #endif 462 + 454 463 static const struct dev_pm_ops dw_i2c_dev_pm_ops = { 455 464 .prepare = dw_i2c_plat_prepare, 456 465 .complete = dw_i2c_plat_complete, 457 466 SET_SYSTEM_SLEEP_PM_OPS(dw_i2c_plat_suspend, dw_i2c_plat_resume) 458 - SET_RUNTIME_PM_OPS(dw_i2c_plat_suspend, dw_i2c_plat_resume, NULL) 467 + SET_RUNTIME_PM_OPS(dw_i2c_plat_runtime_suspend, 468 + dw_i2c_plat_resume, 469 + NULL) 459 470 }; 460 471 461 472 #define DW_I2C_DEV_PMOPS (&dw_i2c_dev_pm_ops)
+4 -2
drivers/i2c/busses/i2c-designware-slave.c
··· 177 177 return -EBUSY; 178 178 if (slave->flags & I2C_CLIENT_TEN) 179 179 return -EAFNOSUPPORT; 180 + pm_runtime_get_sync(dev->dev); 181 + 180 182 /* 181 183 * Set slave address in the IC_SAR register, 182 184 * the address to which the DW_apb_i2c responds. ··· 207 205 dev->disable_int(dev); 208 206 dev->disable(dev); 209 207 dev->slave = NULL; 208 + pm_runtime_put(dev->dev); 210 209 211 210 return 0; 212 211 } ··· 275 272 slave_activity = ((dw_readl(dev, DW_IC_STATUS) & 276 273 DW_IC_STATUS_SLAVE_ACTIVITY) >> 6); 277 274 278 - if (!enabled || !(raw_stat & ~DW_IC_INTR_ACTIVITY)) 275 + if (!enabled || !(raw_stat & ~DW_IC_INTR_ACTIVITY) || !dev->slave) 279 276 return 0; 280 277 281 278 dev_dbg(dev->dev, ··· 385 382 ret = i2c_add_numbered_adapter(adap); 386 383 if (ret) 387 384 dev_err(dev->dev, "failure adding adapter: %d\n", ret); 388 - pm_runtime_put_noidle(dev->dev); 389 385 390 386 return ret; 391 387 }
+2 -4
drivers/i2c/busses/i2c-simtec.c
··· 127 127 iounmap(pd->reg); 128 128 129 129 err_res: 130 - release_resource(pd->ioarea); 131 - kfree(pd->ioarea); 130 + release_mem_region(pd->ioarea->start, size); 132 131 133 132 err: 134 133 kfree(pd); ··· 141 142 i2c_del_adapter(&pd->adap); 142 143 143 144 iounmap(pd->reg); 144 - release_resource(pd->ioarea); 145 - kfree(pd->ioarea); 145 + release_mem_region(pd->ioarea->start, resource_size(pd->ioarea)); 146 146 kfree(pd); 147 147 148 148 return 0;
+2 -2
drivers/i2c/i2c-core-base.c
··· 353 353 } 354 354 355 355 /* 356 - * An I2C ID table is not mandatory, if and only if, a suitable Device 357 - * Tree match table entry is supplied for the probing device. 356 + * An I2C ID table is not mandatory, if and only if, a suitable OF 357 + * or ACPI ID table is supplied for the probing device. 358 358 */ 359 359 if (!driver->id_table && 360 360 !i2c_acpi_match_device(dev->driver->acpi_match_table, client) &&
+1 -1
drivers/iio/adc/ina2xx-adc.c
··· 666 666 { 667 667 struct iio_dev *indio_dev = data; 668 668 struct ina2xx_chip_info *chip = iio_priv(indio_dev); 669 - unsigned int sampling_us = SAMPLING_PERIOD(chip); 669 + int sampling_us = SAMPLING_PERIOD(chip); 670 670 int buffer_us; 671 671 672 672 /*
+5 -5
drivers/iio/adc/stm32-adc-core.c
··· 64 64 #define STM32H7_CKMODE_MASK GENMASK(17, 16) 65 65 66 66 /* STM32 H7 maximum analog clock rate (from datasheet) */ 67 - #define STM32H7_ADC_MAX_CLK_RATE 72000000 67 + #define STM32H7_ADC_MAX_CLK_RATE 36000000 68 68 69 69 /** 70 70 * stm32_adc_common_regs - stm32 common registers, compatible dependent data ··· 148 148 return -EINVAL; 149 149 } 150 150 151 - priv->common.rate = rate; 151 + priv->common.rate = rate / stm32f4_pclk_div[i]; 152 152 val = readl_relaxed(priv->common.base + STM32F4_ADC_CCR); 153 153 val &= ~STM32F4_ADC_ADCPRE_MASK; 154 154 val |= i << STM32F4_ADC_ADCPRE_SHIFT; 155 155 writel_relaxed(val, priv->common.base + STM32F4_ADC_CCR); 156 156 157 157 dev_dbg(&pdev->dev, "Using analog clock source at %ld kHz\n", 158 - rate / (stm32f4_pclk_div[i] * 1000)); 158 + priv->common.rate / 1000); 159 159 160 160 return 0; 161 161 } ··· 250 250 251 251 out: 252 252 /* rate used later by each ADC instance to control BOOST mode */ 253 - priv->common.rate = rate; 253 + priv->common.rate = rate / div; 254 254 255 255 /* Set common clock mode and prescaler */ 256 256 val = readl_relaxed(priv->common.base + STM32H7_ADC_CCR); ··· 260 260 writel_relaxed(val, priv->common.base + STM32H7_ADC_CCR); 261 261 262 262 dev_dbg(&pdev->dev, "Using %s clock/%d source at %ld kHz\n", 263 - ckmode ? "bus" : "adc", div, rate / (div * 1000)); 263 + ckmode ? "bus" : "adc", div, priv->common.rate / 1000); 264 264 265 265 return 0; 266 266 }
+4 -4
drivers/iio/common/hid-sensors/hid-sensor-trigger.c
··· 111 111 s32 poll_value = 0; 112 112 113 113 if (state) { 114 - if (!atomic_read(&st->user_requested_state)) 115 - return 0; 116 114 if (sensor_hub_device_open(st->hsdev)) 117 115 return -EIO; 118 116 ··· 159 161 &report_val); 160 162 } 161 163 164 + pr_debug("HID_SENSOR %s set power_state %d report_state %d\n", 165 + st->pdev->name, state_val, report_val); 166 + 162 167 sensor_hub_get_feature(st->hsdev, st->power_state.report_id, 163 168 st->power_state.index, 164 169 sizeof(state_val), &state_val); ··· 183 182 ret = pm_runtime_get_sync(&st->pdev->dev); 184 183 else { 185 184 pm_runtime_mark_last_busy(&st->pdev->dev); 185 + pm_runtime_use_autosuspend(&st->pdev->dev); 186 186 ret = pm_runtime_put_autosuspend(&st->pdev->dev); 187 187 } 188 188 if (ret < 0) { ··· 287 285 /* Default to 3 seconds, but can be changed from sysfs */ 288 286 pm_runtime_set_autosuspend_delay(&attrb->pdev->dev, 289 287 3000); 290 - pm_runtime_use_autosuspend(&attrb->pdev->dev); 291 - 292 288 return ret; 293 289 error_unreg_trigger: 294 290 iio_trigger_unregister(trig);
+1 -1
drivers/iio/imu/adis16480.c
··· 696 696 .gyro_max_val = IIO_RAD_TO_DEGREE(22500), 697 697 .gyro_max_scale = 450, 698 698 .accel_max_val = IIO_M_S_2_TO_G(12500), 699 - .accel_max_scale = 5, 699 + .accel_max_scale = 10, 700 700 }, 701 701 [ADIS16485] = { 702 702 .channels = adis16485_channels,
+1 -3
drivers/iio/magnetometer/st_magn_core.c
··· 357 357 .drdy_irq = { 358 358 .addr = 0x62, 359 359 .mask_int1 = 0x01, 360 - .addr_ihl = 0x63, 361 - .mask_ihl = 0x04, 362 - .addr_stat_drdy = ST_SENSORS_DEFAULT_STAT_ADDR, 360 + .addr_stat_drdy = 0x67, 363 361 }, 364 362 .multi_read_bit = false, 365 363 .bootime = 2,
+24 -3
drivers/iio/pressure/bmp280-core.c
··· 282 282 } 283 283 284 284 adc_temp = be32_to_cpu(tmp) >> 12; 285 + if (adc_temp == BMP280_TEMP_SKIPPED) { 286 + /* reading was skipped */ 287 + dev_err(data->dev, "reading temperature skipped\n"); 288 + return -EIO; 289 + } 285 290 comp_temp = bmp280_compensate_temp(data, adc_temp); 286 291 287 292 /* ··· 322 317 } 323 318 324 319 adc_press = be32_to_cpu(tmp) >> 12; 320 + if (adc_press == BMP280_PRESS_SKIPPED) { 321 + /* reading was skipped */ 322 + dev_err(data->dev, "reading pressure skipped\n"); 323 + return -EIO; 324 + } 325 325 comp_press = bmp280_compensate_press(data, adc_press); 326 326 327 327 *val = comp_press; ··· 355 345 } 356 346 357 347 adc_humidity = be16_to_cpu(tmp); 348 + if (adc_humidity == BMP280_HUMIDITY_SKIPPED) { 349 + /* reading was skipped */ 350 + dev_err(data->dev, "reading humidity skipped\n"); 351 + return -EIO; 352 + } 358 353 comp_humidity = bmp280_compensate_humidity(data, adc_humidity); 359 354 360 355 *val = comp_humidity; ··· 612 597 613 598 static int bme280_chip_config(struct bmp280_data *data) 614 599 { 615 - int ret = bmp280_chip_config(data); 600 + int ret; 616 601 u8 osrs = BMP280_OSRS_HUMIDITIY_X(data->oversampling_humid + 1); 602 + 603 + /* 604 + * Oversampling of humidity must be set before oversampling of 605 + * temperature/pressure is set to become effective. 606 + */ 607 + ret = regmap_update_bits(data->regmap, BMP280_REG_CTRL_HUMIDITY, 608 + BMP280_OSRS_HUMIDITY_MASK, osrs); 617 609 618 610 if (ret < 0) 619 611 return ret; 620 612 621 - return regmap_update_bits(data->regmap, BMP280_REG_CTRL_HUMIDITY, 622 - BMP280_OSRS_HUMIDITY_MASK, osrs); 613 + return bmp280_chip_config(data); 623 614 } 624 615 625 616 static const struct bmp280_chip_info bme280_chip_info = {
+5
drivers/iio/pressure/bmp280.h
··· 96 96 #define BME280_CHIP_ID 0x60 97 97 #define BMP280_SOFT_RESET_VAL 0xB6 98 98 99 + /* BMP280 register skipped special values */ 100 + #define BMP280_TEMP_SKIPPED 0x80000 101 + #define BMP280_PRESS_SKIPPED 0x80000 102 + #define BMP280_HUMIDITY_SKIPPED 0x8000 103 + 99 104 /* Regmap configurations */ 100 105 extern const struct regmap_config bmp180_regmap_config; 101 106 extern const struct regmap_config bmp280_regmap_config;
+60 -24
drivers/iio/trigger/stm32-timer-trigger.c
··· 402 402 int *val, int *val2, long mask) 403 403 { 404 404 struct stm32_timer_trigger *priv = iio_priv(indio_dev); 405 + u32 dat; 405 406 406 407 switch (mask) { 407 408 case IIO_CHAN_INFO_RAW: 408 - { 409 - u32 cnt; 410 - 411 - regmap_read(priv->regmap, TIM_CNT, &cnt); 412 - *val = cnt; 413 - 409 + regmap_read(priv->regmap, TIM_CNT, &dat); 410 + *val = dat; 414 411 return IIO_VAL_INT; 415 - } 416 - case IIO_CHAN_INFO_SCALE: 417 - { 418 - u32 smcr; 419 412 420 - regmap_read(priv->regmap, TIM_SMCR, &smcr); 421 - smcr &= TIM_SMCR_SMS; 413 + case IIO_CHAN_INFO_ENABLE: 414 + regmap_read(priv->regmap, TIM_CR1, &dat); 415 + *val = (dat & TIM_CR1_CEN) ? 1 : 0; 416 + return IIO_VAL_INT; 417 + 418 + case IIO_CHAN_INFO_SCALE: 419 + regmap_read(priv->regmap, TIM_SMCR, &dat); 420 + dat &= TIM_SMCR_SMS; 422 421 423 422 *val = 1; 424 423 *val2 = 0; 425 424 426 425 /* in quadrature case scale = 0.25 */ 427 - if (smcr == 3) 426 + if (dat == 3) 428 427 *val2 = 2; 429 428 430 429 return IIO_VAL_FRACTIONAL_LOG2; 431 - } 432 430 } 433 431 434 432 return -EINVAL; ··· 437 439 int val, int val2, long mask) 438 440 { 439 441 struct stm32_timer_trigger *priv = iio_priv(indio_dev); 442 + u32 dat; 440 443 441 444 switch (mask) { 442 445 case IIO_CHAN_INFO_RAW: 443 - regmap_write(priv->regmap, TIM_CNT, val); 446 + return regmap_write(priv->regmap, TIM_CNT, val); 444 447 445 - return IIO_VAL_INT; 446 448 case IIO_CHAN_INFO_SCALE: 447 449 /* fixed scale */ 448 450 return -EINVAL; 451 + 452 + case IIO_CHAN_INFO_ENABLE: 453 + if (val) { 454 + regmap_read(priv->regmap, TIM_CR1, &dat); 455 + if (!(dat & TIM_CR1_CEN)) 456 + clk_enable(priv->clk); 457 + regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN, 458 + TIM_CR1_CEN); 459 + } else { 460 + regmap_read(priv->regmap, TIM_CR1, &dat); 461 + regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN, 462 + 0); 463 + if (dat & TIM_CR1_CEN) 464 + clk_disable(priv->clk); 465 + } 466 + return 0; 449 467 } 450 468 451 469 return -EINVAL; ··· 521 507 522 508 regmap_read(priv->regmap, TIM_SMCR, &smcr); 523 509 524 - return smcr == TIM_SMCR_SMS ? 0 : -EINVAL; 510 + return (smcr & TIM_SMCR_SMS) == TIM_SMCR_SMS ? 0 : -EINVAL; 525 511 } 526 512 527 513 static const struct iio_enum stm32_trigger_mode_enum = { ··· 557 543 { 558 544 struct stm32_timer_trigger *priv = iio_priv(indio_dev); 559 545 int sms = stm32_enable_mode2sms(mode); 546 + u32 val; 560 547 561 548 if (sms < 0) 562 549 return sms; 550 + /* 551 + * Triggered mode sets CEN bit automatically by hardware. So, first 552 + * enable counter clock, so it can use it. Keeps it in sync with CEN. 553 + */ 554 + if (sms == 6) { 555 + regmap_read(priv->regmap, TIM_CR1, &val); 556 + if (!(val & TIM_CR1_CEN)) 557 + clk_enable(priv->clk); 558 + } 563 559 564 560 regmap_update_bits(priv->regmap, TIM_SMCR, TIM_SMCR_SMS, sms); 565 561 ··· 631 607 { 632 608 struct stm32_timer_trigger *priv = iio_priv(indio_dev); 633 609 u32 smcr; 610 + int mode; 634 611 635 612 regmap_read(priv->regmap, TIM_SMCR, &smcr); 636 - smcr &= TIM_SMCR_SMS; 613 + mode = (smcr & TIM_SMCR_SMS) - 1; 614 + if ((mode < 0) || (mode > ARRAY_SIZE(stm32_quadrature_modes))) 615 + return -EINVAL; 637 616 638 - return smcr - 1; 617 + return mode; 639 618 } 640 619 641 620 static const struct iio_enum stm32_quadrature_mode_enum = { ··· 655 628 656 629 static int stm32_set_count_direction(struct iio_dev *indio_dev, 657 630 const struct iio_chan_spec *chan, 658 - unsigned int mode) 631 + unsigned int dir) 659 632 { 660 633 struct stm32_timer_trigger *priv = iio_priv(indio_dev); 634 + u32 val; 635 + int mode; 661 636 662 - regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_DIR, mode); 637 + /* In encoder mode, direction is RO (given by TI1/TI2 signals) */ 638 + regmap_read(priv->regmap, TIM_SMCR, &val); 639 + mode = (val & TIM_SMCR_SMS) - 1; 640 + if ((mode >= 0) || (mode < ARRAY_SIZE(stm32_quadrature_modes))) 641 + return -EBUSY; 663 642 664 - return 0; 643 + return regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_DIR, 644 + dir ? TIM_CR1_DIR : 0); 665 645 } 666 646 667 647 static int stm32_get_count_direction(struct iio_dev *indio_dev, ··· 679 645 680 646 regmap_read(priv->regmap, TIM_CR1, &cr1); 681 647 682 - return (cr1 & TIM_CR1_DIR); 648 + return ((cr1 & TIM_CR1_DIR) ? 1 : 0); 683 649 } 684 650 685 651 static const struct iio_enum stm32_count_direction_enum = { ··· 742 708 static const struct iio_chan_spec stm32_trigger_channel = { 743 709 .type = IIO_COUNT, 744 710 .channel = 0, 745 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | 711 + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | 712 + BIT(IIO_CHAN_INFO_ENABLE) | 713 + BIT(IIO_CHAN_INFO_SCALE), 746 714 .ext_info = stm32_trigger_count_info, 747 715 .indexed = 1 748 716 };
+3 -2
drivers/infiniband/core/device.c
··· 537 537 } 538 538 up_read(&lists_rwsem); 539 539 540 - mutex_unlock(&device_mutex); 541 - 542 540 ib_device_unregister_rdmacg(device); 543 541 ib_device_unregister_sysfs(device); 542 + 543 + mutex_unlock(&device_mutex); 544 + 544 545 ib_cache_cleanup_one(device); 545 546 546 547 ib_security_destroy_port_pkey_list(device);
+8 -5
drivers/infiniband/core/uverbs_cmd.c
··· 1015 1015 cq->uobject = &obj->uobject; 1016 1016 cq->comp_handler = ib_uverbs_comp_handler; 1017 1017 cq->event_handler = ib_uverbs_cq_event_handler; 1018 - cq->cq_context = &ev_file->ev_queue; 1018 + cq->cq_context = ev_file ? &ev_file->ev_queue : NULL; 1019 1019 atomic_set(&cq->usecnt, 0); 1020 1020 1021 1021 obj->uobject.object = cq; ··· 1522 1522 qp->qp_type = attr.qp_type; 1523 1523 atomic_set(&qp->usecnt, 0); 1524 1524 atomic_inc(&pd->usecnt); 1525 + qp->port = 0; 1525 1526 if (attr.send_cq) 1526 1527 atomic_inc(&attr.send_cq->usecnt); 1527 1528 if (attr.recv_cq) ··· 1963 1962 attr->alt_timeout = cmd->base.alt_timeout; 1964 1963 attr->rate_limit = cmd->rate_limit; 1965 1964 1966 - attr->ah_attr.type = rdma_ah_find_type(qp->device, 1967 - cmd->base.dest.port_num); 1965 + if (cmd->base.attr_mask & IB_QP_AV) 1966 + attr->ah_attr.type = rdma_ah_find_type(qp->device, 1967 + cmd->base.dest.port_num); 1968 1968 if (cmd->base.dest.is_global) { 1969 1969 rdma_ah_set_grh(&attr->ah_attr, NULL, 1970 1970 cmd->base.dest.flow_label, ··· 1983 1981 rdma_ah_set_port_num(&attr->ah_attr, 1984 1982 cmd->base.dest.port_num); 1985 1983 1986 - attr->alt_ah_attr.type = rdma_ah_find_type(qp->device, 1987 - cmd->base.dest.port_num); 1984 + if (cmd->base.attr_mask & IB_QP_ALT_PATH) 1985 + attr->alt_ah_attr.type = 1986 + rdma_ah_find_type(qp->device, cmd->base.dest.port_num); 1988 1987 if (cmd->base.alt_dest.is_global) { 1989 1988 rdma_ah_set_grh(&attr->alt_ah_attr, NULL, 1990 1989 cmd->base.alt_dest.flow_label,
+1 -1
drivers/infiniband/core/uverbs_main.c
··· 1153 1153 kref_get(&file->ref); 1154 1154 mutex_unlock(&uverbs_dev->lists_mutex); 1155 1155 1156 - ib_uverbs_event_handler(&file->event_handler, &event); 1157 1156 1158 1157 mutex_lock(&file->cleanup_mutex); 1159 1158 ucontext = file->ucontext; ··· 1169 1170 * for example due to freeing the resources 1170 1171 * (e.g mmput). 1171 1172 */ 1173 + ib_uverbs_event_handler(&file->event_handler, &event); 1172 1174 ib_dev->disassociate_ucontext(ucontext); 1173 1175 mutex_lock(&file->cleanup_mutex); 1174 1176 ib_uverbs_cleanup_ucontext(file, ucontext, true);
+6 -1
drivers/infiniband/core/verbs.c
··· 838 838 spin_lock_init(&qp->mr_lock); 839 839 INIT_LIST_HEAD(&qp->rdma_mrs); 840 840 INIT_LIST_HEAD(&qp->sig_mrs); 841 + qp->port = 0; 841 842 842 843 if (qp_init_attr->qp_type == IB_QPT_XRC_TGT) 843 844 return ib_create_xrc_qp(qp, qp_init_attr); ··· 1298 1297 if (ret) 1299 1298 return ret; 1300 1299 } 1301 - return ib_security_modify_qp(qp, attr, attr_mask, udata); 1300 + ret = ib_security_modify_qp(qp, attr, attr_mask, udata); 1301 + if (!ret && (attr_mask & IB_QP_PORT)) 1302 + qp->port = attr->port_num; 1303 + 1304 + return ret; 1302 1305 } 1303 1306 EXPORT_SYMBOL(ib_modify_qp_with_udata); 1304 1307
+1 -1
drivers/infiniband/hw/cxgb4/mem.c
··· 661 661 rhp = php->rhp; 662 662 663 663 if (mr_type != IB_MR_TYPE_MEM_REG || 664 - max_num_sg > t4_max_fr_depth(&rhp->rdev.lldi.ulptx_memwrite_dsgl && 664 + max_num_sg > t4_max_fr_depth(rhp->rdev.lldi.ulptx_memwrite_dsgl && 665 665 use_dsgl)) 666 666 return ERR_PTR(-EINVAL); 667 667
+3 -1
drivers/infiniband/hw/hns/hns_roce_ah.c
··· 64 64 } else { 65 65 u8 *dmac = rdma_ah_retrieve_dmac(ah_attr); 66 66 67 - if (!dmac) 67 + if (!dmac) { 68 + kfree(ah); 68 69 return ERR_PTR(-EINVAL); 70 + } 69 71 memcpy(ah->av.mac, dmac, ETH_ALEN); 70 72 } 71 73
+82 -41
drivers/infiniband/hw/i40iw/i40iw_ctrl.c
··· 130 130 u64 base = 0; 131 131 u32 i, j; 132 132 u32 k = 0; 133 - u32 low; 134 133 135 134 /* copy base values in obj_info */ 136 - for (i = I40IW_HMC_IW_QP, j = 0; 137 - i <= I40IW_HMC_IW_PBLE; i++, j += 8) { 135 + for (i = I40IW_HMC_IW_QP, j = 0; i <= I40IW_HMC_IW_PBLE; i++, j += 8) { 136 + if ((i == I40IW_HMC_IW_SRQ) || 137 + (i == I40IW_HMC_IW_FSIMC) || 138 + (i == I40IW_HMC_IW_FSIAV)) { 139 + info[i].base = 0; 140 + info[i].cnt = 0; 141 + continue; 142 + } 138 143 get_64bit_val(buf, j, &temp); 139 144 info[i].base = RS_64_1(temp, 32) * 512; 140 145 if (info[i].base > base) { 141 146 base = info[i].base; 142 147 k = i; 143 148 } 144 - low = (u32)(temp); 145 - if (low) 146 - info[i].cnt = low; 149 + if (i == I40IW_HMC_IW_APBVT_ENTRY) { 150 + info[i].cnt = 1; 151 + continue; 152 + } 153 + if (i == I40IW_HMC_IW_QP) 154 + info[i].cnt = (u32)RS_64(temp, I40IW_QUERY_FPM_MAX_QPS); 155 + else if (i == I40IW_HMC_IW_CQ) 156 + info[i].cnt = (u32)RS_64(temp, I40IW_QUERY_FPM_MAX_CQS); 157 + else 158 + info[i].cnt = (u32)(temp); 147 159 } 148 160 size = info[k].cnt * info[k].size + info[k].base; 149 161 if (size & 0x1FFFFF) ··· 164 152 *sd = (u32)(size >> 21); 165 153 166 154 return 0; 155 + } 156 + 157 + /** 158 + * i40iw_sc_decode_fpm_query() - Decode a 64 bit value into max count and size 159 + * @buf: ptr to fpm query buffer 160 + * @buf_idx: index into buf 161 + * @info: ptr to i40iw_hmc_obj_info struct 162 + * @rsrc_idx: resource index into info 163 + * 164 + * Decode a 64 bit value from fpm query buffer into max count and size 165 + */ 166 + static u64 i40iw_sc_decode_fpm_query(u64 *buf, 167 + u32 buf_idx, 168 + struct i40iw_hmc_obj_info *obj_info, 169 + u32 rsrc_idx) 170 + { 171 + u64 temp; 172 + u32 size; 173 + 174 + get_64bit_val(buf, buf_idx, &temp); 175 + obj_info[rsrc_idx].max_cnt = (u32)temp; 176 + size = (u32)RS_64_1(temp, 32); 177 + obj_info[rsrc_idx].size = LS_64_1(1, size); 178 + 179 + return temp; 167 180 } 168 181 169 182 /** ··· 205 168 struct 
i40iw_hmc_info *hmc_info, 206 169 struct i40iw_hmc_fpm_misc *hmc_fpm_misc) 207 170 { 208 - u64 temp; 209 171 struct i40iw_hmc_obj_info *obj_info; 210 - u32 i, j, size; 172 + u64 temp; 173 + u32 size; 211 174 u16 max_pe_sds; 212 175 213 176 obj_info = hmc_info->hmc_obj; ··· 222 185 hmc_fpm_misc->max_sds = max_pe_sds; 223 186 hmc_info->sd_table.sd_cnt = max_pe_sds + hmc_info->first_sd_index; 224 187 225 - for (i = I40IW_HMC_IW_QP, j = 8; 226 - i <= I40IW_HMC_IW_ARP; i++, j += 8) { 227 - get_64bit_val(buf, j, &temp); 228 - if (i == I40IW_HMC_IW_QP) 229 - obj_info[i].max_cnt = (u32)RS_64(temp, I40IW_QUERY_FPM_MAX_QPS); 230 - else if (i == I40IW_HMC_IW_CQ) 231 - obj_info[i].max_cnt = (u32)RS_64(temp, I40IW_QUERY_FPM_MAX_CQS); 232 - else 233 - obj_info[i].max_cnt = (u32)temp; 188 + get_64bit_val(buf, 8, &temp); 189 + obj_info[I40IW_HMC_IW_QP].max_cnt = (u32)RS_64(temp, I40IW_QUERY_FPM_MAX_QPS); 190 + size = (u32)RS_64_1(temp, 32); 191 + obj_info[I40IW_HMC_IW_QP].size = LS_64_1(1, size); 234 192 235 - size = (u32)RS_64_1(temp, 32); 236 - obj_info[i].size = ((u64)1 << size); 237 - } 238 - for (i = I40IW_HMC_IW_MR, j = 48; 239 - i <= I40IW_HMC_IW_PBLE; i++, j += 8) { 240 - get_64bit_val(buf, j, &temp); 241 - obj_info[i].max_cnt = (u32)temp; 242 - size = (u32)RS_64_1(temp, 32); 243 - obj_info[i].size = LS_64_1(1, size); 244 - } 193 + get_64bit_val(buf, 16, &temp); 194 + obj_info[I40IW_HMC_IW_CQ].max_cnt = (u32)RS_64(temp, I40IW_QUERY_FPM_MAX_CQS); 195 + size = (u32)RS_64_1(temp, 32); 196 + obj_info[I40IW_HMC_IW_CQ].size = LS_64_1(1, size); 245 197 246 - get_64bit_val(buf, 120, &temp); 247 - hmc_fpm_misc->max_ceqs = (u8)RS_64(temp, I40IW_QUERY_FPM_MAX_CEQS); 248 - get_64bit_val(buf, 120, &temp); 249 - hmc_fpm_misc->ht_multiplier = RS_64(temp, I40IW_QUERY_FPM_HTMULTIPLIER); 250 - get_64bit_val(buf, 120, &temp); 251 - hmc_fpm_misc->timer_bucket = RS_64(temp, I40IW_QUERY_FPM_TIMERBUCKET); 198 + i40iw_sc_decode_fpm_query(buf, 32, obj_info, I40IW_HMC_IW_HTE); 199 + 
i40iw_sc_decode_fpm_query(buf, 40, obj_info, I40IW_HMC_IW_ARP); 200 + 201 + obj_info[I40IW_HMC_IW_APBVT_ENTRY].size = 8192; 202 + obj_info[I40IW_HMC_IW_APBVT_ENTRY].max_cnt = 1; 203 + 204 + i40iw_sc_decode_fpm_query(buf, 48, obj_info, I40IW_HMC_IW_MR); 205 + i40iw_sc_decode_fpm_query(buf, 56, obj_info, I40IW_HMC_IW_XF); 206 + 252 207 get_64bit_val(buf, 64, &temp); 208 + obj_info[I40IW_HMC_IW_XFFL].max_cnt = (u32)temp; 209 + obj_info[I40IW_HMC_IW_XFFL].size = 4; 253 210 hmc_fpm_misc->xf_block_size = RS_64(temp, I40IW_QUERY_FPM_XFBLOCKSIZE); 254 211 if (!hmc_fpm_misc->xf_block_size) 255 212 return I40IW_ERR_INVALID_SIZE; 213 + 214 + i40iw_sc_decode_fpm_query(buf, 72, obj_info, I40IW_HMC_IW_Q1); 215 + 256 216 get_64bit_val(buf, 80, &temp); 217 + obj_info[I40IW_HMC_IW_Q1FL].max_cnt = (u32)temp; 218 + obj_info[I40IW_HMC_IW_Q1FL].size = 4; 257 219 hmc_fpm_misc->q1_block_size = RS_64(temp, I40IW_QUERY_FPM_Q1BLOCKSIZE); 258 220 if (!hmc_fpm_misc->q1_block_size) 259 221 return I40IW_ERR_INVALID_SIZE; 222 + 223 + i40iw_sc_decode_fpm_query(buf, 88, obj_info, I40IW_HMC_IW_TIMER); 224 + 225 + get_64bit_val(buf, 112, &temp); 226 + obj_info[I40IW_HMC_IW_PBLE].max_cnt = (u32)temp; 227 + obj_info[I40IW_HMC_IW_PBLE].size = 8; 228 + 229 + get_64bit_val(buf, 120, &temp); 230 + hmc_fpm_misc->max_ceqs = (u8)RS_64(temp, I40IW_QUERY_FPM_MAX_CEQS); 231 + hmc_fpm_misc->ht_multiplier = RS_64(temp, I40IW_QUERY_FPM_HTMULTIPLIER); 232 + hmc_fpm_misc->timer_bucket = RS_64(temp, I40IW_QUERY_FPM_TIMERBUCKET); 233 + 260 234 return 0; 261 235 } 262 236 ··· 3440 3392 hmc_info->sd_table.sd_entry = virt_mem.va; 3441 3393 } 3442 3394 3443 - /* fill size of objects which are fixed */ 3444 - hmc_info->hmc_obj[I40IW_HMC_IW_XFFL].size = 4; 3445 - hmc_info->hmc_obj[I40IW_HMC_IW_Q1FL].size = 4; 3446 - hmc_info->hmc_obj[I40IW_HMC_IW_PBLE].size = 8; 3447 - hmc_info->hmc_obj[I40IW_HMC_IW_APBVT_ENTRY].size = 8192; 3448 - hmc_info->hmc_obj[I40IW_HMC_IW_APBVT_ENTRY].max_cnt = 1; 3449 - 3450 3395 return ret_code; 
3451 3396 } 3452 3397 ··· 4881 4840 { 4882 4841 u8 fcn_id = vsi->fcn_id; 4883 4842 4884 - if ((vsi->stats_fcn_id_alloc) && (fcn_id != I40IW_INVALID_FCN_ID)) 4843 + if (vsi->stats_fcn_id_alloc && fcn_id < I40IW_MAX_STATS_COUNT) 4885 4844 vsi->dev->fcn_id_array[fcn_id] = false; 4886 4845 i40iw_hw_stats_stop_timer(vsi); 4887 4846 }
+2 -2
drivers/infiniband/hw/i40iw/i40iw_d.h
··· 1507 1507 I40IW_CQ0_ALIGNMENT_MASK = (256 - 1), 1508 1508 I40IW_HOST_CTX_ALIGNMENT_MASK = (4 - 1), 1509 1509 I40IW_SHADOWAREA_MASK = (128 - 1), 1510 - I40IW_FPM_QUERY_BUF_ALIGNMENT_MASK = 0, 1511 - I40IW_FPM_COMMIT_BUF_ALIGNMENT_MASK = 0 1510 + I40IW_FPM_QUERY_BUF_ALIGNMENT_MASK = (4 - 1), 1511 + I40IW_FPM_COMMIT_BUF_ALIGNMENT_MASK = (4 - 1) 1512 1512 }; 1513 1513 1514 1514 enum i40iw_alignment {
+1 -1
drivers/infiniband/hw/i40iw/i40iw_puda.c
··· 685 685 cqsize = rsrc->cq_size * (sizeof(struct i40iw_cqe)); 686 686 tsize = cqsize + sizeof(struct i40iw_cq_shadow_area); 687 687 ret = i40iw_allocate_dma_mem(dev->hw, &rsrc->cqmem, tsize, 688 - I40IW_CQ0_ALIGNMENT_MASK); 688 + I40IW_CQ0_ALIGNMENT); 689 689 if (ret) 690 690 return ret; 691 691
+1 -1
drivers/infiniband/hw/i40iw/i40iw_status.h
··· 62 62 I40IW_ERR_INVALID_ALIGNMENT = -23, 63 63 I40IW_ERR_FLUSHED_QUEUE = -24, 64 64 I40IW_ERR_INVALID_PUSH_PAGE_INDEX = -25, 65 - I40IW_ERR_INVALID_IMM_DATA_SIZE = -26, 65 + I40IW_ERR_INVALID_INLINE_DATA_SIZE = -26, 66 66 I40IW_ERR_TIMEOUT = -27, 67 67 I40IW_ERR_OPCODE_MISMATCH = -28, 68 68 I40IW_ERR_CQP_COMPL_ERROR = -29,
+4 -4
drivers/infiniband/hw/i40iw/i40iw_uk.c
··· 435 435 436 436 op_info = &info->op.inline_rdma_write; 437 437 if (op_info->len > I40IW_MAX_INLINE_DATA_SIZE) 438 - return I40IW_ERR_INVALID_IMM_DATA_SIZE; 438 + return I40IW_ERR_INVALID_INLINE_DATA_SIZE; 439 439 440 440 ret_code = i40iw_inline_data_size_to_wqesize(op_info->len, &wqe_size); 441 441 if (ret_code) ··· 511 511 512 512 op_info = &info->op.inline_send; 513 513 if (op_info->len > I40IW_MAX_INLINE_DATA_SIZE) 514 - return I40IW_ERR_INVALID_IMM_DATA_SIZE; 514 + return I40IW_ERR_INVALID_INLINE_DATA_SIZE; 515 515 516 516 ret_code = i40iw_inline_data_size_to_wqesize(op_info->len, &wqe_size); 517 517 if (ret_code) ··· 784 784 get_64bit_val(cqe, 0, &qword0); 785 785 get_64bit_val(cqe, 16, &qword2); 786 786 787 - info->tcp_seq_num = (u8)RS_64(qword0, I40IWCQ_TCPSEQNUM); 787 + info->tcp_seq_num = (u32)RS_64(qword0, I40IWCQ_TCPSEQNUM); 788 788 789 789 info->qp_id = (u32)RS_64(qword2, I40IWCQ_QPID); 790 790 ··· 1187 1187 u8 *wqe_size) 1188 1188 { 1189 1189 if (data_size > I40IW_MAX_INLINE_DATA_SIZE) 1190 - return I40IW_ERR_INVALID_IMM_DATA_SIZE; 1190 + return I40IW_ERR_INVALID_INLINE_DATA_SIZE; 1191 1191 1192 1192 if (data_size <= 16) 1193 1193 *wqe_size = I40IW_QP_WQE_MIN_SIZE;
+6
drivers/infiniband/hw/mlx5/main.c
··· 1085 1085 bool is_ib = (mlx5_ib_port_link_layer(ibdev, port) == 1086 1086 IB_LINK_LAYER_INFINIBAND); 1087 1087 1088 + /* CM layer calls ib_modify_port() regardless of the link layer. For 1089 + * Ethernet ports, qkey violation and Port capabilities are meaningless. 1090 + */ 1091 + if (!is_ib) 1092 + return 0; 1093 + 1088 1094 if (MLX5_CAP_GEN(dev->mdev, ib_virt) && is_ib) { 1089 1095 change_mask = props->clr_port_cap_mask | props->set_port_cap_mask; 1090 1096 value = ~props->clr_port_cap_mask | props->set_port_cap_mask;
+1
drivers/infiniband/hw/mlx5/qp.c
··· 1238 1238 goto err_destroy_tis; 1239 1239 1240 1240 sq->base.container_mibqp = qp; 1241 + sq->base.mqp.event = mlx5_ib_qp_event; 1241 1242 } 1242 1243 1243 1244 if (qp->rq.wqe_cnt) {
+16 -1
drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
··· 65 65 struct pvrdma_dev *dev = to_vdev(ibcq->device); 66 66 struct pvrdma_cq *cq = to_vcq(ibcq); 67 67 u32 val = cq->cq_handle; 68 + unsigned long flags; 69 + int has_data = 0; 68 70 69 71 val |= (notify_flags & IB_CQ_SOLICITED_MASK) == IB_CQ_SOLICITED ? 70 72 PVRDMA_UAR_CQ_ARM_SOL : PVRDMA_UAR_CQ_ARM; 71 73 74 + spin_lock_irqsave(&cq->cq_lock, flags); 75 + 72 76 pvrdma_write_uar_cq(dev, val); 73 77 74 - return 0; 78 + if (notify_flags & IB_CQ_REPORT_MISSED_EVENTS) { 79 + unsigned int head; 80 + 81 + has_data = pvrdma_idx_ring_has_data(&cq->ring_state->rx, 82 + cq->ibcq.cqe, &head); 83 + if (unlikely(has_data == PVRDMA_INVALID_IDX)) 84 + dev_err(&dev->pdev->dev, "CQ ring state invalid\n"); 85 + } 86 + 87 + spin_unlock_irqrestore(&cq->cq_lock, flags); 88 + 89 + return has_data; 75 90 } 76 91 77 92 /**
+1 -1
drivers/input/misc/soc_button_array.c
··· 331 331 error = gpiod_count(dev, NULL); 332 332 if (error < 0) { 333 333 dev_dbg(dev, "no GPIO attached, ignoring...\n"); 334 - return error; 334 + return -ENODEV; 335 335 } 336 336 337 337 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+31 -10
drivers/input/mouse/alps.c
··· 1215 1215 1216 1216 case SS4_PACKET_ID_TWO: 1217 1217 if (priv->flags & ALPS_BUTTONPAD) { 1218 - f->mt[0].x = SS4_BTL_MF_X_V2(p, 0); 1218 + if (IS_SS4PLUS_DEV(priv->dev_id)) { 1219 + f->mt[0].x = SS4_PLUS_BTL_MF_X_V2(p, 0); 1220 + f->mt[1].x = SS4_PLUS_BTL_MF_X_V2(p, 1); 1221 + } else { 1222 + f->mt[0].x = SS4_BTL_MF_X_V2(p, 0); 1223 + f->mt[1].x = SS4_BTL_MF_X_V2(p, 1); 1224 + } 1219 1225 f->mt[0].y = SS4_BTL_MF_Y_V2(p, 0); 1220 - f->mt[1].x = SS4_BTL_MF_X_V2(p, 1); 1221 1226 f->mt[1].y = SS4_BTL_MF_Y_V2(p, 1); 1222 1227 } else { 1223 - f->mt[0].x = SS4_STD_MF_X_V2(p, 0); 1228 + if (IS_SS4PLUS_DEV(priv->dev_id)) { 1229 + f->mt[0].x = SS4_PLUS_STD_MF_X_V2(p, 0); 1230 + f->mt[1].x = SS4_PLUS_STD_MF_X_V2(p, 1); 1231 + } else { 1232 + f->mt[0].x = SS4_STD_MF_X_V2(p, 0); 1233 + f->mt[1].x = SS4_STD_MF_X_V2(p, 1); 1234 + } 1224 1235 f->mt[0].y = SS4_STD_MF_Y_V2(p, 0); 1225 - f->mt[1].x = SS4_STD_MF_X_V2(p, 1); 1226 1236 f->mt[1].y = SS4_STD_MF_Y_V2(p, 1); 1227 1237 } 1228 1238 f->pressure = SS4_MF_Z_V2(p, 0) ? 
0x30 : 0; ··· 1249 1239 1250 1240 case SS4_PACKET_ID_MULTI: 1251 1241 if (priv->flags & ALPS_BUTTONPAD) { 1252 - f->mt[2].x = SS4_BTL_MF_X_V2(p, 0); 1242 + if (IS_SS4PLUS_DEV(priv->dev_id)) { 1243 + f->mt[0].x = SS4_PLUS_BTL_MF_X_V2(p, 0); 1244 + f->mt[1].x = SS4_PLUS_BTL_MF_X_V2(p, 1); 1245 + } else { 1246 + f->mt[2].x = SS4_BTL_MF_X_V2(p, 0); 1247 + f->mt[3].x = SS4_BTL_MF_X_V2(p, 1); 1248 + } 1249 + 1253 1250 f->mt[2].y = SS4_BTL_MF_Y_V2(p, 0); 1254 - f->mt[3].x = SS4_BTL_MF_X_V2(p, 1); 1255 1251 f->mt[3].y = SS4_BTL_MF_Y_V2(p, 1); 1256 1252 no_data_x = SS4_MFPACKET_NO_AX_BL; 1257 1253 no_data_y = SS4_MFPACKET_NO_AY_BL; 1258 1254 } else { 1259 - f->mt[2].x = SS4_STD_MF_X_V2(p, 0); 1255 + if (IS_SS4PLUS_DEV(priv->dev_id)) { 1256 + f->mt[0].x = SS4_PLUS_STD_MF_X_V2(p, 0); 1257 + f->mt[1].x = SS4_PLUS_STD_MF_X_V2(p, 1); 1258 + } else { 1259 + f->mt[0].x = SS4_STD_MF_X_V2(p, 0); 1260 + f->mt[1].x = SS4_STD_MF_X_V2(p, 1); 1261 + } 1260 1262 f->mt[2].y = SS4_STD_MF_Y_V2(p, 0); 1261 - f->mt[3].x = SS4_STD_MF_X_V2(p, 1); 1262 1263 f->mt[3].y = SS4_STD_MF_Y_V2(p, 1); 1263 1264 no_data_x = SS4_MFPACKET_NO_AX; 1264 1265 no_data_y = SS4_MFPACKET_NO_AY; ··· 2562 2541 2563 2542 memset(otp, 0, sizeof(otp)); 2564 2543 2565 - if (alps_get_otp_values_ss4_v2(psmouse, 0, &otp[0][0]) || 2566 - alps_get_otp_values_ss4_v2(psmouse, 1, &otp[1][0])) 2544 + if (alps_get_otp_values_ss4_v2(psmouse, 1, &otp[1][0]) || 2545 + alps_get_otp_values_ss4_v2(psmouse, 0, &otp[0][0])) 2567 2546 return -1; 2568 2547 2569 2548 alps_update_device_area_ss4_v2(otp, priv);
+8
drivers/input/mouse/alps.h
··· 100 100 ((_b[1 + _i * 3] << 5) & 0x1F00) \ 101 101 ) 102 102 103 + #define SS4_PLUS_STD_MF_X_V2(_b, _i) (((_b[0 + (_i) * 3] << 4) & 0x0070) | \ 104 + ((_b[1 + (_i) * 3] << 4) & 0x0F80) \ 105 + ) 106 + 103 107 #define SS4_STD_MF_Y_V2(_b, _i) (((_b[1 + (_i) * 3] << 3) & 0x0010) | \ 104 108 ((_b[2 + (_i) * 3] << 5) & 0x01E0) | \ 105 109 ((_b[2 + (_i) * 3] << 4) & 0x0E00) \ ··· 111 107 112 108 #define SS4_BTL_MF_X_V2(_b, _i) (SS4_STD_MF_X_V2(_b, _i) | \ 113 109 ((_b[0 + (_i) * 3] >> 3) & 0x0010) \ 110 + ) 111 + 112 + #define SS4_PLUS_BTL_MF_X_V2(_b, _i) (SS4_PLUS_STD_MF_X_V2(_b, _i) | \ 113 + ((_b[0 + (_i) * 3] >> 4) & 0x0008) \ 114 114 ) 115 115 116 116 #define SS4_BTL_MF_Y_V2(_b, _i) (SS4_STD_MF_Y_V2(_b, _i) | \
+5
drivers/input/mouse/elan_i2c_core.c
··· 1247 1247 { "ELAN0000", 0 }, 1248 1248 { "ELAN0100", 0 }, 1249 1249 { "ELAN0600", 0 }, 1250 + { "ELAN0602", 0 }, 1250 1251 { "ELAN0605", 0 }, 1252 + { "ELAN0608", 0 }, 1253 + { "ELAN0605", 0 }, 1254 + { "ELAN0609", 0 }, 1255 + { "ELAN060B", 0 }, 1251 1256 { "ELAN1000", 0 }, 1252 1257 { } 1253 1258 };
+4 -3
drivers/input/mouse/trackpoint.c
··· 265 265 if (ps2_command(&psmouse->ps2dev, param, MAKE_PS2_CMD(0, 2, TP_READ_ID))) 266 266 return -1; 267 267 268 - if (param[0] != TP_MAGIC_IDENT) 268 + /* add new TP ID. */ 269 + if (!(param[0] & TP_MAGIC_IDENT)) 269 270 return -1; 270 271 271 272 if (firmware_id) ··· 381 380 return 0; 382 381 383 382 if (trackpoint_read(ps2dev, TP_EXT_BTN, &button_info)) { 384 - psmouse_warn(psmouse, "failed to get extended button data\n"); 385 - button_info = 0; 383 + psmouse_warn(psmouse, "failed to get extended button data, assuming 3 buttons\n"); 384 + button_info = 0x33; 386 385 } 387 386 388 387 psmouse->private = kzalloc(sizeof(struct trackpoint_data), GFP_KERNEL);
+2 -1
drivers/input/mouse/trackpoint.h
··· 21 21 #define TP_COMMAND 0xE2 /* Commands start with this */ 22 22 23 23 #define TP_READ_ID 0xE1 /* Sent for device identification */ 24 - #define TP_MAGIC_IDENT 0x01 /* Sent after a TP_READ_ID followed */ 24 + #define TP_MAGIC_IDENT 0x03 /* Sent after a TP_READ_ID followed */ 25 25 /* by the firmware ID */ 26 + /* Firmware ID includes 0x1, 0x2, 0x3 */ 26 27 27 28 28 29 /*
+3 -1
drivers/iommu/amd_iommu_types.h
··· 574 574 575 575 static inline struct amd_iommu *dev_to_amd_iommu(struct device *dev) 576 576 { 577 - return container_of(dev, struct amd_iommu, iommu.dev); 577 + struct iommu_device *iommu = dev_to_iommu_device(dev); 578 + 579 + return container_of(iommu, struct amd_iommu, iommu); 578 580 } 579 581 580 582 #define ACPIHID_UID_LEN 256
+3 -1
drivers/iommu/intel-iommu.c
··· 4736 4736 4737 4737 static inline struct intel_iommu *dev_to_intel_iommu(struct device *dev) 4738 4738 { 4739 - return container_of(dev, struct intel_iommu, iommu.dev); 4739 + struct iommu_device *iommu_dev = dev_to_iommu_device(dev); 4740 + 4741 + return container_of(iommu_dev, struct intel_iommu, iommu); 4740 4742 } 4741 4743 4742 4744 static ssize_t intel_iommu_show_version(struct device *dev,
+20 -12
drivers/iommu/iommu-sysfs.c
··· 62 62 va_list vargs; 63 63 int ret; 64 64 65 - device_initialize(&iommu->dev); 65 + iommu->dev = kzalloc(sizeof(*iommu->dev), GFP_KERNEL); 66 + if (!iommu->dev) 67 + return -ENOMEM; 66 68 67 - iommu->dev.class = &iommu_class; 68 - iommu->dev.parent = parent; 69 - iommu->dev.groups = groups; 69 + device_initialize(iommu->dev); 70 + 71 + iommu->dev->class = &iommu_class; 72 + iommu->dev->parent = parent; 73 + iommu->dev->groups = groups; 70 74 71 75 va_start(vargs, fmt); 72 - ret = kobject_set_name_vargs(&iommu->dev.kobj, fmt, vargs); 76 + ret = kobject_set_name_vargs(&iommu->dev->kobj, fmt, vargs); 73 77 va_end(vargs); 74 78 if (ret) 75 79 goto error; 76 80 77 - ret = device_add(&iommu->dev); 81 + ret = device_add(iommu->dev); 78 82 if (ret) 79 83 goto error; 84 + 85 + dev_set_drvdata(iommu->dev, iommu); 80 86 81 87 return 0; 82 88 83 89 error: 84 - put_device(&iommu->dev); 90 + put_device(iommu->dev); 85 91 return ret; 86 92 } 87 93 88 94 void iommu_device_sysfs_remove(struct iommu_device *iommu) 89 95 { 90 - device_unregister(&iommu->dev); 96 + dev_set_drvdata(iommu->dev, NULL); 97 + device_unregister(iommu->dev); 98 + iommu->dev = NULL; 91 99 } 92 100 /* 93 101 * IOMMU drivers can indicate a device is managed by a given IOMMU using ··· 110 102 if (!iommu || IS_ERR(iommu)) 111 103 return -ENODEV; 112 104 113 - ret = sysfs_add_link_to_group(&iommu->dev.kobj, "devices", 105 + ret = sysfs_add_link_to_group(&iommu->dev->kobj, "devices", 114 106 &link->kobj, dev_name(link)); 115 107 if (ret) 116 108 return ret; 117 109 118 - ret = sysfs_create_link_nowarn(&link->kobj, &iommu->dev.kobj, "iommu"); 110 + ret = sysfs_create_link_nowarn(&link->kobj, &iommu->dev->kobj, "iommu"); 119 111 if (ret) 120 - sysfs_remove_link_from_group(&iommu->dev.kobj, "devices", 112 + sysfs_remove_link_from_group(&iommu->dev->kobj, "devices", 121 113 dev_name(link)); 122 114 123 115 return ret; ··· 129 121 return; 130 122 131 123 sysfs_remove_link(&link->kobj, "iommu"); 132 - 
sysfs_remove_link_from_group(&iommu->dev.kobj, "devices", dev_name(link)); 124 + sysfs_remove_link_from_group(&iommu->dev->kobj, "devices", dev_name(link)); 133 125 }
+6 -7
drivers/irqchip/irq-atmel-aic-common.c
··· 137 137 #define AT91_RTC_IMR 0x28 138 138 #define AT91_RTC_IRQ_MASK 0x1f 139 139 140 - void __init aic_common_rtc_irq_fixup(struct device_node *root) 140 + void __init aic_common_rtc_irq_fixup(void) 141 141 { 142 142 struct device_node *np; 143 143 void __iomem *regs; 144 144 145 - np = of_find_compatible_node(root, NULL, "atmel,at91rm9200-rtc"); 145 + np = of_find_compatible_node(NULL, NULL, "atmel,at91rm9200-rtc"); 146 146 if (!np) 147 - np = of_find_compatible_node(root, NULL, 147 + np = of_find_compatible_node(NULL, NULL, 148 148 "atmel,at91sam9x5-rtc"); 149 149 150 150 if (!np) ··· 165 165 #define AT91_RTT_ALMIEN (1 << 16) /* Alarm Interrupt Enable */ 166 166 #define AT91_RTT_RTTINCIEN (1 << 17) /* Real Time Timer Increment Interrupt Enable */ 167 167 168 - void __init aic_common_rtt_irq_fixup(struct device_node *root) 168 + void __init aic_common_rtt_irq_fixup(void) 169 169 { 170 170 struct device_node *np; 171 171 void __iomem *regs; ··· 196 196 return; 197 197 198 198 match = of_match_node(matches, root); 199 - of_node_put(root); 200 199 201 200 if (match) { 202 - void (*fixup)(struct device_node *) = match->data; 203 - fixup(root); 201 + void (*fixup)(void) = match->data; 202 + fixup(); 204 203 } 205 204 206 205 of_node_put(root);
+2 -2
drivers/irqchip/irq-atmel-aic-common.h
··· 33 33 const char *name, int nirqs, 34 34 const struct of_device_id *matches); 35 35 36 - void __init aic_common_rtc_irq_fixup(struct device_node *root); 36 + void __init aic_common_rtc_irq_fixup(void); 37 37 38 - void __init aic_common_rtt_irq_fixup(struct device_node *root); 38 + void __init aic_common_rtt_irq_fixup(void); 39 39 40 40 #endif /* __IRQ_ATMEL_AIC_COMMON_H */
+7 -7
drivers/irqchip/irq-atmel-aic.c
··· 209 209 .xlate = aic_irq_domain_xlate, 210 210 }; 211 211 212 - static void __init at91rm9200_aic_irq_fixup(struct device_node *root) 212 + static void __init at91rm9200_aic_irq_fixup(void) 213 213 { 214 - aic_common_rtc_irq_fixup(root); 214 + aic_common_rtc_irq_fixup(); 215 215 } 216 216 217 - static void __init at91sam9260_aic_irq_fixup(struct device_node *root) 217 + static void __init at91sam9260_aic_irq_fixup(void) 218 218 { 219 - aic_common_rtt_irq_fixup(root); 219 + aic_common_rtt_irq_fixup(); 220 220 } 221 221 222 - static void __init at91sam9g45_aic_irq_fixup(struct device_node *root) 222 + static void __init at91sam9g45_aic_irq_fixup(void) 223 223 { 224 - aic_common_rtc_irq_fixup(root); 225 - aic_common_rtt_irq_fixup(root); 224 + aic_common_rtc_irq_fixup(); 225 + aic_common_rtt_irq_fixup(); 226 226 } 227 227 228 228 static const struct of_device_id aic_irq_fixups[] __initconst = {
+2 -2
drivers/irqchip/irq-atmel-aic5.c
··· 305 305 .xlate = aic5_irq_domain_xlate, 306 306 }; 307 307 308 - static void __init sama5d3_aic_irq_fixup(struct device_node *root) 308 + static void __init sama5d3_aic_irq_fixup(void) 309 309 { 310 - aic_common_rtc_irq_fixup(root); 310 + aic_common_rtc_irq_fixup(); 311 311 } 312 312 313 313 static const struct of_device_id aic5_irq_fixups[] __initconst = {
+1
drivers/irqchip/irq-brcmstb-l2.c
··· 189 189 190 190 ct->chip.irq_suspend = brcmstb_l2_intc_suspend; 191 191 ct->chip.irq_resume = brcmstb_l2_intc_resume; 192 + ct->chip.irq_pm_shutdown = brcmstb_l2_intc_suspend; 192 193 193 194 if (data->can_wake) { 194 195 /* This IRQ chip can wake the system, set all child interrupts
+1
drivers/irqchip/irq-gic-v3-its-platform-msi.c
··· 43 43 *dev_id = args.args[0]; 44 44 break; 45 45 } 46 + index++; 46 47 } while (!ret); 47 48 48 49 return ret;
+32 -8
drivers/irqchip/irq-gic-v3-its.c
··· 1835 1835 1836 1836 #define ACPI_GICV3_ITS_MEM_SIZE (SZ_128K) 1837 1837 1838 - #if defined(CONFIG_ACPI_NUMA) && (ACPI_CA_VERSION >= 0x20170531) 1838 + #ifdef CONFIG_ACPI_NUMA 1839 1839 struct its_srat_map { 1840 1840 /* numa node id */ 1841 1841 u32 numa_node; ··· 1843 1843 u32 its_id; 1844 1844 }; 1845 1845 1846 - static struct its_srat_map its_srat_maps[MAX_NUMNODES] __initdata; 1846 + static struct its_srat_map *its_srat_maps __initdata; 1847 1847 static int its_in_srat __initdata; 1848 1848 1849 1849 static int __init acpi_get_its_numa_node(u32 its_id) ··· 1855 1855 return its_srat_maps[i].numa_node; 1856 1856 } 1857 1857 return NUMA_NO_NODE; 1858 + } 1859 + 1860 + static int __init gic_acpi_match_srat_its(struct acpi_subtable_header *header, 1861 + const unsigned long end) 1862 + { 1863 + return 0; 1858 1864 } 1859 1865 1860 1866 static int __init gic_acpi_parse_srat_its(struct acpi_subtable_header *header, ··· 1876 1870 if (its_affinity->header.length < sizeof(*its_affinity)) { 1877 1871 pr_err("SRAT: Invalid header length %d in ITS affinity\n", 1878 1872 its_affinity->header.length); 1879 - return -EINVAL; 1880 - } 1881 - 1882 - if (its_in_srat >= MAX_NUMNODES) { 1883 - pr_err("SRAT: ITS affinity exceeding max count[%d]\n", 1884 - MAX_NUMNODES); 1885 1873 return -EINVAL; 1886 1874 } 1887 1875 ··· 1897 1897 1898 1898 static void __init acpi_table_parse_srat_its(void) 1899 1899 { 1900 + int count; 1901 + 1902 + count = acpi_table_parse_entries(ACPI_SIG_SRAT, 1903 + sizeof(struct acpi_table_srat), 1904 + ACPI_SRAT_TYPE_GIC_ITS_AFFINITY, 1905 + gic_acpi_match_srat_its, 0); 1906 + if (count <= 0) 1907 + return; 1908 + 1909 + its_srat_maps = kmalloc(count * sizeof(struct its_srat_map), 1910 + GFP_KERNEL); 1911 + if (!its_srat_maps) { 1912 + pr_warn("SRAT: Failed to allocate memory for its_srat_maps!\n"); 1913 + return; 1914 + } 1915 + 1900 1916 acpi_table_parse_entries(ACPI_SIG_SRAT, 1901 1917 sizeof(struct acpi_table_srat), 1902 1918 
ACPI_SRAT_TYPE_GIC_ITS_AFFINITY, 1903 1919 gic_acpi_parse_srat_its, 0); 1904 1920 } 1921 + 1922 + /* free the its_srat_maps after ITS probing */ 1923 + static void __init acpi_its_srat_maps_free(void) 1924 + { 1925 + kfree(its_srat_maps); 1926 + } 1905 1927 #else 1906 1928 static void __init acpi_table_parse_srat_its(void) { } 1907 1929 static int __init acpi_get_its_numa_node(u32 its_id) { return NUMA_NO_NODE; } 1930 + static void __init acpi_its_srat_maps_free(void) { } 1908 1931 #endif 1909 1932 1910 1933 static int __init gic_acpi_parse_madt_its(struct acpi_subtable_header *header, ··· 1974 1951 acpi_table_parse_srat_its(); 1975 1952 acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_TRANSLATOR, 1976 1953 gic_acpi_parse_madt_its, 0); 1954 + acpi_its_srat_maps_free(); 1977 1955 } 1978 1956 #else 1979 1957 static void __init its_acpi_probe(void) { }
+13 -3
drivers/irqchip/irq-gic-v3.c
··· 353 353 354 354 if (static_key_true(&supports_deactivate)) 355 355 gic_write_eoir(irqnr); 356 + else 357 + isb(); 356 358 357 359 err = handle_domain_irq(gic_data.domain, irqnr, regs); 358 360 if (err) { ··· 642 640 static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val, 643 641 bool force) 644 642 { 645 - unsigned int cpu = cpumask_any_and(mask_val, cpu_online_mask); 643 + unsigned int cpu; 646 644 void __iomem *reg; 647 645 int enabled; 648 646 u64 val; 647 + 648 + if (force) 649 + cpu = cpumask_first(mask_val); 650 + else 651 + cpu = cpumask_any_and(mask_val, cpu_online_mask); 649 652 650 653 if (cpu >= nr_cpu_ids) 651 654 return -EINVAL; ··· 838 831 if (ret) 839 832 return ret; 840 833 841 - for (i = 0; i < nr_irqs; i++) 842 - gic_irq_domain_map(domain, virq + i, hwirq + i); 834 + for (i = 0; i < nr_irqs; i++) { 835 + ret = gic_irq_domain_map(domain, virq + i, hwirq + i); 836 + if (ret) 837 + return ret; 838 + } 843 839 844 840 return 0; 845 841 }
+10 -4
drivers/irqchip/irq-gic.c
··· 361 361 if (likely(irqnr > 15 && irqnr < 1020)) { 362 362 if (static_key_true(&supports_deactivate)) 363 363 writel_relaxed(irqstat, cpu_base + GIC_CPU_EOI); 364 + isb(); 364 365 handle_domain_irq(gic->domain, irqnr, regs); 365 366 continue; 366 367 } ··· 402 401 goto out; 403 402 404 403 cascade_irq = irq_find_mapping(chip_data->domain, gic_irq); 405 - if (unlikely(gic_irq < 32 || gic_irq > 1020)) 404 + if (unlikely(gic_irq < 32 || gic_irq > 1020)) { 406 405 handle_bad_irq(desc); 407 - else 406 + } else { 407 + isb(); 408 408 generic_handle_irq(cascade_irq); 409 + } 409 410 410 411 out: 411 412 chained_irq_exit(chip, desc); ··· 1030 1027 if (ret) 1031 1028 return ret; 1032 1029 1033 - for (i = 0; i < nr_irqs; i++) 1034 - gic_irq_domain_map(domain, virq + i, hwirq + i); 1030 + for (i = 0; i < nr_irqs; i++) { 1031 + ret = gic_irq_domain_map(domain, virq + i, hwirq + i); 1032 + if (ret) 1033 + return ret; 1034 + } 1035 1035 1036 1036 return 0; 1037 1037 }
+4 -1
drivers/isdn/mISDN/fsm.c
··· 26 26 27 27 #define FSM_TIMER_DEBUG 0 28 28 29 - void 29 + int 30 30 mISDN_FsmNew(struct Fsm *fsm, 31 31 struct FsmNode *fnlist, int fncount) 32 32 { ··· 34 34 35 35 fsm->jumpmatrix = kzalloc(sizeof(FSMFNPTR) * fsm->state_count * 36 36 fsm->event_count, GFP_KERNEL); 37 + if (fsm->jumpmatrix == NULL) 38 + return -ENOMEM; 37 39 38 40 for (i = 0; i < fncount; i++) 39 41 if ((fnlist[i].state >= fsm->state_count) || ··· 47 45 } else 48 46 fsm->jumpmatrix[fsm->state_count * fnlist[i].event + 49 47 fnlist[i].state] = (FSMFNPTR) fnlist[i].routine; 48 + return 0; 50 49 } 51 50 EXPORT_SYMBOL(mISDN_FsmNew); 52 51
+1 -1
drivers/isdn/mISDN/fsm.h
··· 55 55 void *arg; 56 56 }; 57 57 58 - extern void mISDN_FsmNew(struct Fsm *, struct FsmNode *, int); 58 + extern int mISDN_FsmNew(struct Fsm *, struct FsmNode *, int); 59 59 extern void mISDN_FsmFree(struct Fsm *); 60 60 extern int mISDN_FsmEvent(struct FsmInst *, int , void *); 61 61 extern void mISDN_FsmChangeState(struct FsmInst *, int);
+1 -2
drivers/isdn/mISDN/layer1.c
··· 414 414 l1fsm_s.event_count = L1_EVENT_COUNT; 415 415 l1fsm_s.strEvent = strL1Event; 416 416 l1fsm_s.strState = strL1SState; 417 - mISDN_FsmNew(&l1fsm_s, L1SFnList, ARRAY_SIZE(L1SFnList)); 418 - return 0; 417 + return mISDN_FsmNew(&l1fsm_s, L1SFnList, ARRAY_SIZE(L1SFnList)); 419 418 } 420 419 421 420 void
+13 -2
drivers/isdn/mISDN/layer2.c
··· 2247 2247 int 2248 2248 Isdnl2_Init(u_int *deb) 2249 2249 { 2250 + int res; 2250 2251 debug = deb; 2251 2252 mISDN_register_Bprotocol(&X75SLP); 2252 2253 l2fsm.state_count = L2_STATE_COUNT; 2253 2254 l2fsm.event_count = L2_EVENT_COUNT; 2254 2255 l2fsm.strEvent = strL2Event; 2255 2256 l2fsm.strState = strL2State; 2256 - mISDN_FsmNew(&l2fsm, L2FnList, ARRAY_SIZE(L2FnList)); 2257 - TEIInit(deb); 2257 + res = mISDN_FsmNew(&l2fsm, L2FnList, ARRAY_SIZE(L2FnList)); 2258 + if (res) 2259 + goto error; 2260 + res = TEIInit(deb); 2261 + if (res) 2262 + goto error_fsm; 2258 2263 return 0; 2264 + 2265 + error_fsm: 2266 + mISDN_FsmFree(&l2fsm); 2267 + error: 2268 + mISDN_unregister_Bprotocol(&X75SLP); 2269 + return res; 2259 2270 } 2260 2271 2261 2272 void
+17 -3
drivers/isdn/mISDN/tei.c
··· 1387 1387 1388 1388 int TEIInit(u_int *deb) 1389 1389 { 1390 + int res; 1390 1391 debug = deb; 1391 1392 teifsmu.state_count = TEI_STATE_COUNT; 1392 1393 teifsmu.event_count = TEI_EVENT_COUNT; 1393 1394 teifsmu.strEvent = strTeiEvent; 1394 1395 teifsmu.strState = strTeiState; 1395 - mISDN_FsmNew(&teifsmu, TeiFnListUser, ARRAY_SIZE(TeiFnListUser)); 1396 + res = mISDN_FsmNew(&teifsmu, TeiFnListUser, ARRAY_SIZE(TeiFnListUser)); 1397 + if (res) 1398 + goto error; 1396 1399 teifsmn.state_count = TEI_STATE_COUNT; 1397 1400 teifsmn.event_count = TEI_EVENT_COUNT; 1398 1401 teifsmn.strEvent = strTeiEvent; 1399 1402 teifsmn.strState = strTeiState; 1400 - mISDN_FsmNew(&teifsmn, TeiFnListNet, ARRAY_SIZE(TeiFnListNet)); 1403 + res = mISDN_FsmNew(&teifsmn, TeiFnListNet, ARRAY_SIZE(TeiFnListNet)); 1404 + if (res) 1405 + goto error_smn; 1401 1406 deactfsm.state_count = DEACT_STATE_COUNT; 1402 1407 deactfsm.event_count = DEACT_EVENT_COUNT; 1403 1408 deactfsm.strEvent = strDeactEvent; 1404 1409 deactfsm.strState = strDeactState; 1405 - mISDN_FsmNew(&deactfsm, DeactFnList, ARRAY_SIZE(DeactFnList)); 1410 + res = mISDN_FsmNew(&deactfsm, DeactFnList, ARRAY_SIZE(DeactFnList)); 1411 + if (res) 1412 + goto error_deact; 1406 1413 return 0; 1414 + 1415 + error_deact: 1416 + mISDN_FsmFree(&teifsmn); 1417 + error_smn: 1418 + mISDN_FsmFree(&teifsmu); 1419 + error: 1420 + return res; 1407 1421 } 1408 1422 1409 1423 void TEIFree(void)
+4 -1
drivers/md/md.c
··· 7996 7996 if (mddev->safemode == 1) 7997 7997 mddev->safemode = 0; 7998 7998 /* sync_checkers is always 0 when writes_pending is in per-cpu mode */ 7999 - if (mddev->in_sync || !mddev->sync_checkers) { 7999 + if (mddev->in_sync || mddev->sync_checkers) { 8000 8000 spin_lock(&mddev->lock); 8001 8001 if (mddev->in_sync) { 8002 8002 mddev->in_sync = 0; ··· 8655 8655 8656 8656 if (mddev_trylock(mddev)) { 8657 8657 int spares = 0; 8658 + 8659 + if (!mddev->external && mddev->safemode == 1) 8660 + mddev->safemode = 0; 8658 8661 8659 8662 if (mddev->ro) { 8660 8663 struct md_rdev *rdev;
+46 -15
drivers/md/raid5-cache.c
··· 236 236 bool need_split_bio; 237 237 struct bio *split_bio; 238 238 239 - unsigned int has_flush:1; /* include flush request */ 240 - unsigned int has_fua:1; /* include fua request */ 241 - unsigned int has_null_flush:1; /* include empty flush request */ 239 + unsigned int has_flush:1; /* include flush request */ 240 + unsigned int has_fua:1; /* include fua request */ 241 + unsigned int has_null_flush:1; /* include null flush request */ 242 + unsigned int has_flush_payload:1; /* include flush payload */ 242 243 /* 243 244 * io isn't sent yet, flush/fua request can only be submitted till it's 244 245 * the first IO in running_ios list ··· 572 571 struct r5l_io_unit *io_deferred; 573 572 struct r5l_log *log = io->log; 574 573 unsigned long flags; 574 + bool has_null_flush; 575 + bool has_flush_payload; 575 576 576 577 if (bio->bi_status) 577 578 md_error(log->rdev->mddev, log->rdev); ··· 583 580 584 581 spin_lock_irqsave(&log->io_list_lock, flags); 585 582 __r5l_set_io_unit_state(io, IO_UNIT_IO_END); 583 + 584 + /* 585 + * if the io does not have null_flush or flush payload, 586 + * it is not safe to access it after releasing io_list_lock. 587 + * Therefore, it is necessary to check the condition with 588 + * the lock held. 
589 + */ 590 + has_null_flush = io->has_null_flush; 591 + has_flush_payload = io->has_flush_payload; 592 + 586 593 if (log->need_cache_flush && !list_empty(&io->stripe_list)) 587 594 r5l_move_to_end_ios(log); 588 595 else ··· 613 600 if (log->need_cache_flush) 614 601 md_wakeup_thread(log->rdev->mddev->thread); 615 602 616 - if (io->has_null_flush) { 603 + /* finish flush only io_unit and PAYLOAD_FLUSH only io_unit */ 604 + if (has_null_flush) { 617 605 struct bio *bi; 618 606 619 607 WARN_ON(bio_list_empty(&io->flush_barriers)); 620 608 while ((bi = bio_list_pop(&io->flush_barriers)) != NULL) { 621 609 bio_endio(bi); 622 - atomic_dec(&io->pending_stripe); 610 + if (atomic_dec_and_test(&io->pending_stripe)) { 611 + __r5l_stripe_write_finished(io); 612 + return; 613 + } 623 614 } 624 615 } 625 - 626 - /* finish flush only io_unit and PAYLOAD_FLUSH only io_unit */ 627 - if (atomic_read(&io->pending_stripe) == 0) 628 - __r5l_stripe_write_finished(io); 616 + /* decrease pending_stripe for flush payload */ 617 + if (has_flush_payload) 618 + if (atomic_dec_and_test(&io->pending_stripe)) 619 + __r5l_stripe_write_finished(io); 629 620 } 630 621 631 622 static void r5l_do_submit_io(struct r5l_log *log, struct r5l_io_unit *io) ··· 898 881 payload->size = cpu_to_le32(sizeof(__le64)); 899 882 payload->flush_stripes[0] = cpu_to_le64(sect); 900 883 io->meta_offset += meta_size; 884 + /* multiple flush payloads count as one pending_stripe */ 885 + if (!io->has_flush_payload) { 886 + io->has_flush_payload = 1; 887 + atomic_inc(&io->pending_stripe); 888 + } 901 889 mutex_unlock(&log->io_mutex); 902 890 } 903 891 ··· 2562 2540 */ 2563 2541 int r5c_journal_mode_set(struct mddev *mddev, int mode) 2564 2542 { 2565 - struct r5conf *conf = mddev->private; 2566 - struct r5l_log *log = conf->log; 2567 - 2568 - if (!log) 2569 - return -ENODEV; 2543 + struct r5conf *conf; 2544 + int err; 2570 2545 2571 2546 if (mode < R5C_JOURNAL_MODE_WRITE_THROUGH || 2572 2547 mode > 
R5C_JOURNAL_MODE_WRITE_BACK) 2573 2548 return -EINVAL; 2574 2549 2550 + err = mddev_lock(mddev); 2551 + if (err) 2552 + return err; 2553 + conf = mddev->private; 2554 + if (!conf || !conf->log) { 2555 + mddev_unlock(mddev); 2556 + return -ENODEV; 2557 + } 2558 + 2575 2559 if (raid5_calc_degraded(conf) > 0 && 2576 - mode == R5C_JOURNAL_MODE_WRITE_BACK) 2560 + mode == R5C_JOURNAL_MODE_WRITE_BACK) { 2561 + mddev_unlock(mddev); 2577 2562 return -EINVAL; 2563 + } 2578 2564 2579 2565 mddev_suspend(mddev); 2580 2566 conf->log->r5c_journal_mode = mode; 2581 2567 mddev_resume(mddev); 2568 + mddev_unlock(mddev); 2582 2569 2583 2570 pr_debug("md/raid:%s: setting r5c cache mode to %d: %s\n", 2584 2571 mdname(mddev), mode, r5c_journal_mode_str[mode]);
+6 -4
drivers/memory/atmel-ebi.c
··· 72 72 { .name = nm, .converter = atmel_smc_cs_conf_set_pulse, .shift = pos} 73 73 74 74 #define ATMEL_SMC_CYCLE_XLATE(nm, pos) \ 75 - { .name = nm, .converter = atmel_smc_cs_conf_set_setup, .shift = pos} 75 + { .name = nm, .converter = atmel_smc_cs_conf_set_cycle, .shift = pos} 76 76 77 77 static void at91sam9_ebi_get_config(struct atmel_ebi_dev *ebid, 78 78 struct atmel_ebi_dev_config *conf) ··· 120 120 if (!ret) { 121 121 required = true; 122 122 ncycles = DIV_ROUND_UP(val, clk_period_ns); 123 - if (ncycles > ATMEL_SMC_MODE_TDF_MAX || 124 - ncycles < ATMEL_SMC_MODE_TDF_MIN) { 123 + if (ncycles > ATMEL_SMC_MODE_TDF_MAX) { 125 124 ret = -EINVAL; 126 125 goto out; 127 126 } 127 + 128 + if (ncycles < ATMEL_SMC_MODE_TDF_MIN) 129 + ncycles = ATMEL_SMC_MODE_TDF_MIN; 128 130 129 131 smcconf->mode |= ATMEL_SMC_MODE_TDF(ncycles); 130 132 } ··· 265 263 } 266 264 267 265 ret = atmel_ebi_xslate_smc_timings(ebid, np, &conf->smcconf); 268 - if (ret) 266 + if (ret < 0) 269 267 return -EINVAL; 270 268 271 269 if ((ret > 0 && !required) || (!ret && required)) {
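The TDF change above keeps rejecting values that overflow the register field but clamps too-small values up to the minimum instead of failing. A standalone sketch of that conversion; the MIN/MAX constants here are illustrative, not the real ATMEL_SMC_MODE_TDF_* values:

```c
#include <assert.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Illustrative limits; the actual ATMEL_SMC_MODE_TDF_MIN/MAX may differ. */
#define TDF_MIN 1
#define TDF_MAX 15

/* Convert a tdf value in ns to clock cycles: round up, error out when the
 * result overflows the field, and clamp up to the minimum otherwise. */
static int tdf_ncycles(int val_ns, int clk_period_ns)
{
	int ncycles = DIV_ROUND_UP(val_ns, clk_period_ns);

	if (ncycles > TDF_MAX)
		return -22;	/* -EINVAL */
	if (ncycles < TDF_MIN)
		ncycles = TDF_MIN;
	return ncycles;
}
```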
+1 -1
drivers/mfd/atmel-smc.c
··· 206 206 * parameter 207 207 * 208 208 * This function encodes the @ncycles value as described in the datasheet 209 - * (section "SMC Pulse Register"), and then stores the result in the 209 + * (section "SMC Cycle Register"), and then stores the result in the 210 210 * @conf->setup field at @shift position. 211 211 * 212 212 * Returns -EINVAL if @shift is invalid, -ERANGE if @ncycles does not fit in
+6
drivers/mfd/da9062-core.c
··· 645 645 .range_min = DA9062AA_VLDO1_B, 646 646 .range_max = DA9062AA_VLDO4_B, 647 647 }, { 648 + .range_min = DA9062AA_BBAT_CONT, 649 + .range_max = DA9062AA_BBAT_CONT, 650 + }, { 648 651 .range_min = DA9062AA_INTERFACE, 649 652 .range_max = DA9062AA_CONFIG_E, 650 653 }, { ··· 723 720 }, { 724 721 .range_min = DA9062AA_VLDO1_B, 725 722 .range_max = DA9062AA_VLDO4_B, 723 + }, { 724 + .range_min = DA9062AA_BBAT_CONT, 725 + .range_max = DA9062AA_BBAT_CONT, 726 726 }, { 727 727 .range_min = DA9062AA_GP_ID_0, 728 728 .range_max = DA9062AA_GP_ID_19,
+43 -6
drivers/mmc/core/block.c
··· 1371 1371 R1_CC_ERROR | /* Card controller error */ \ 1372 1372 R1_ERROR) /* General/unknown error */ 1373 1373 1374 - static bool mmc_blk_has_cmd_err(struct mmc_command *cmd) 1374 + static void mmc_blk_eval_resp_error(struct mmc_blk_request *brq) 1375 1375 { 1376 - if (!cmd->error && cmd->resp[0] & CMD_ERRORS) 1377 - cmd->error = -EIO; 1376 + u32 val; 1378 1377 1379 - return cmd->error; 1378 + /* 1379 + * Per the SD specification (physical layer version 4.10)[1], 1380 + * section 4.3.3, it explicitly states that "When the last 1381 + * block of user area is read using CMD18, the host should 1382 + * ignore OUT_OF_RANGE error that may occur even the sequence 1383 + * is correct". And JESD84-B51 for eMMC also has a similar 1384 + * statement in section 6.8.3. 1385 + * 1386 + * Multiple block read/write could be done by either the predefined 1387 + * method, namely CMD23, or open-ending mode. For open-ending mode, 1388 + * we should ignore the OUT_OF_RANGE error as it's normal behaviour. 1389 + * 1390 + * However the spec[1] doesn't tell us whether we should also 1391 + * ignore that for the predefined method. But per the spec[1], section 1392 + * 4.15 Set Block Count Command, it says "If illegal block count 1393 + * is set, out of range error will be indicated during read/write 1394 + * operation (For example, data transfer is stopped at user area 1395 + * boundary)." In other words, we can expect an out-of-range error 1396 + * in the response for the following CMD18/25. And if the argument of 1397 + * CMD23 + the argument of CMD18/25 exceeds the max number of blocks, 1398 + * we could also expect to get a -ETIMEDOUT or any error number from 1399 + * the host drivers due to missing data response (for write)/data (for 1400 + * read), as the card will stop the data transfer by itself per the 1401 + * spec. So we only need to check R1_OUT_OF_RANGE for open-ending mode. 
1402 + */ 1403 + 1404 + if (!brq->stop.error) { 1405 + bool oor_with_open_end; 1406 + /* If there is no error yet, check R1 response */ 1407 + 1408 + val = brq->stop.resp[0] & CMD_ERRORS; 1409 + oor_with_open_end = val & R1_OUT_OF_RANGE && !brq->mrq.sbc; 1410 + 1411 + if (val && !oor_with_open_end) 1412 + brq->stop.error = -EIO; 1413 + } 1380 1414 } 1381 1415 1382 1416 static enum mmc_blk_status mmc_blk_err_check(struct mmc_card *card, ··· 1434 1400 * stop.error indicates a problem with the stop command. Data 1435 1401 * may have been transferred, or may still be transferring. 1436 1402 */ 1437 - if (brq->sbc.error || brq->cmd.error || mmc_blk_has_cmd_err(&brq->stop) || 1438 - brq->data.error) { 1403 + 1404 + mmc_blk_eval_resp_error(brq); 1405 + 1406 + if (brq->sbc.error || brq->cmd.error || 1407 + brq->stop.error || brq->data.error) { 1439 1408 switch (mmc_blk_cmd_recovery(card, req, brq, &ecc_err, &gen_err)) { 1440 1409 case ERR_RETRY: 1441 1410 return MMC_BLK_RETRY;
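The reasoning in the comment reduces to a small predicate: an R1 error is fatal unless OUT_OF_RANGE is involved and the request was open-ended (no CMD23/sbc). A sketch of that check with illustrative mask values, not the real include/linux/mmc/mmc.h definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative R1 status bits; the real masks live in the mmc headers. */
#define R1_OUT_OF_RANGE	(1u << 31)
#define R1_CC_ERROR	(1u << 20)
#define CMD_ERRORS	(R1_OUT_OF_RANGE | R1_CC_ERROR)

/* Mirrors mmc_blk_eval_resp_error(): returns -EIO (-5 here) for a fatal
 * R1 error, 0 when the problem is OUT_OF_RANGE on an open-ended
 * (no CMD23) transfer. */
static int eval_resp_error(unsigned int resp0, bool has_sbc)
{
	unsigned int val = resp0 & CMD_ERRORS;
	bool oor_with_open_end = (val & R1_OUT_OF_RANGE) && !has_sbc;

	if (val && !oor_with_open_end)
		return -5;
	return 0;
}
```

With CMD23 present (has_sbc true), OUT_OF_RANGE stays fatal, matching the spec's warning about an illegal block count.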
+12 -1
drivers/mtd/nand/atmel/nand-controller.c
··· 1364 1364 ret = atmel_smc_cs_conf_set_timing(smcconf, 1365 1365 ATMEL_HSMC_TIMINGS_TADL_SHIFT, 1366 1366 ncycles); 1367 - if (ret) 1367 + /* 1368 + * Version 4 of the ONFI spec mandates that tADL be at least 400 1369 + * nanoseconds, but, depending on the master clock rate, 400 ns may not 1370 + * fit in the tADL field of the SMC reg. We need to relax the check and 1371 + * accept the -ERANGE return code. 1372 + * 1373 + * Note that previous versions of the ONFI spec had a lower tADL_min 1374 + * (100 or 200 ns). It's not clear why this timing constraint got 1375 + * increased but it seems most NANDs are fine with values lower than 1376 + * 400ns, so we should be safe. 1377 + */ 1378 + if (ret && ret != -ERANGE) 1368 1379 return ret; 1369 1380 1370 1381 ncycles = DIV_ROUND_UP(conf->timings.sdr.tAR_min, mckperiodps);
+1
drivers/mtd/nand/nandsim.c
··· 2373 2373 return 0; 2374 2374 2375 2375 err_exit: 2376 + nandsim_debugfs_remove(nand); 2376 2377 free_nandsim(nand); 2377 2378 nand_release(nsmtd); 2378 2379 for (i = 0;i < ARRAY_SIZE(nand->partitions); ++i)
+8 -5
drivers/net/bonding/bond_main.c
··· 1569 1569 new_slave->delay = 0; 1570 1570 new_slave->link_failure_count = 0; 1571 1571 1572 - if (bond_update_speed_duplex(new_slave)) 1572 + if (bond_update_speed_duplex(new_slave) && 1573 + bond_needs_speed_duplex(bond)) 1573 1574 new_slave->link = BOND_LINK_DOWN; 1574 1575 1575 1576 new_slave->last_rx = jiffies - ··· 2141 2140 continue; 2142 2141 2143 2142 case BOND_LINK_UP: 2144 - if (bond_update_speed_duplex(slave)) { 2143 + if (bond_update_speed_duplex(slave) && 2144 + bond_needs_speed_duplex(bond)) { 2145 2145 slave->link = BOND_LINK_DOWN; 2146 - netdev_warn(bond->dev, 2147 - "failed to get link speed/duplex for %s\n", 2148 - slave->dev->name); 2146 + if (net_ratelimit()) 2147 + netdev_warn(bond->dev, 2148 + "failed to get link speed/duplex for %s\n", 2149 + slave->dev->name); 2149 2150 continue; 2150 2151 } 2151 2152 bond_set_slave_link_state(slave, BOND_LINK_UP,
+1
drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
··· 529 529 USING_SOFT_PARAMS = (1 << 6), 530 530 MASTER_PF = (1 << 7), 531 531 FW_OFLD_CONN = (1 << 9), 532 + ROOT_NO_RELAXED_ORDERING = (1 << 10), 532 533 }; 533 534 534 535 enum {
+17 -6
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 4654 4654 dev->name, adap->params.vpd.id, adap->name, buf); 4655 4655 } 4656 4656 4657 - static void enable_pcie_relaxed_ordering(struct pci_dev *dev) 4658 - { 4659 - pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_DEVCTL_RELAX_EN); 4660 - } 4661 - 4662 4657 /* 4663 4658 * Free the following resources: 4664 4659 * - memory used for tables ··· 4903 4908 } 4904 4909 4905 4910 pci_enable_pcie_error_reporting(pdev); 4906 - enable_pcie_relaxed_ordering(pdev); 4907 4911 pci_set_master(pdev); 4908 4912 pci_save_state(pdev); ··· 4940 4946 adapter->pf = func; 4941 4947 adapter->msg_enable = DFLT_MSG_ENABLE; 4942 4948 memset(adapter->chan_map, 0xff, sizeof(adapter->chan_map)); 4949 + 4950 + /* If possible, we use PCIe Relaxed Ordering Attribute to deliver 4951 + * Ingress Packet Data to Free List Buffers in order to allow for 4952 + * chipset performance optimizations between the Root Complex and 4953 + * Memory Controllers. (Messages to the associated Ingress Queue 4954 + * notifying new Packet Placement in the Free List Buffers will be 4955 + * sent without the Relaxed Ordering Attribute, thus guaranteeing that 4956 + * all preceding PCIe Transaction Layer Packets will be processed 4957 + * first.) But some Root Complexes have various issues with Upstream 4958 + * Transaction Layer Packets with the Relaxed Ordering Attribute set. 4959 + * PCIe devices under such Root Complexes will have the Relaxed 4960 + * Ordering bit cleared in their configuration space, so we check our 4961 + * PCIe configuration space to see if it's flagged with advice against 4962 + * using Relaxed Ordering. 4963 + */ 4964 + if (!pcie_relaxed_ordering_enabled(pdev)) 4965 + adapter->flags |= ROOT_NO_RELAXED_ORDERING; 4943 4966 4944 4967 spin_lock_init(&adapter->stats_lock); 4945 4968 spin_lock_init(&adapter->tid_release_lock);
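The probe-time check only records the decision; the queue setup code in sge.c later turns the flag into the FETCHRO/DATARO attribute value. A reduced sketch of that two-step flow; the flag bit and helper names are stand-ins for the cxgb4 definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the adapter flag bit (cxgb4 uses bit 10). */
#define ROOT_NO_RELAXED_ORDERING (1u << 10)

/* Stand-in for pcie_relaxed_ordering_enabled(pdev). */
static bool root_allows_relaxed_ordering;

/* Probe time: record whether the Root Complex tolerates Relaxed Ordering. */
static unsigned int probe_flags(void)
{
	unsigned int flags = 0;

	if (!root_allows_relaxed_ordering)
		flags |= ROOT_NO_RELAXED_ORDERING;
	return flags;
}

/* Queue setup: 1 means the FL0FETCHRO/FL0DATARO attributes may be set. */
static int fl_relaxed(unsigned int flags)
{
	return !(flags & ROOT_NO_RELAXED_ORDERING);
}
```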
+3 -2
drivers/net/ethernet/chelsio/cxgb4/sge.c
··· 2719 2719 struct fw_iq_cmd c; 2720 2720 struct sge *s = &adap->sge; 2721 2721 struct port_info *pi = netdev_priv(dev); 2722 + int relaxed = !(adap->flags & ROOT_NO_RELAXED_ORDERING); 2722 2723 2723 2724 /* Size needs to be multiple of 16, including status entry. */ 2724 2725 iq->size = roundup(iq->size, 16); ··· 2773 2772 2774 2773 flsz = fl->size / 8 + s->stat_len / sizeof(struct tx_desc); 2775 2774 c.iqns_to_fl0congen |= htonl(FW_IQ_CMD_FL0PACKEN_F | 2776 - FW_IQ_CMD_FL0FETCHRO_F | 2777 - FW_IQ_CMD_FL0DATARO_F | 2775 + FW_IQ_CMD_FL0FETCHRO_V(relaxed) | 2776 + FW_IQ_CMD_FL0DATARO_V(relaxed) | 2778 2777 FW_IQ_CMD_FL0PADEN_F); 2779 2778 if (cong >= 0) 2780 2779 c.iqns_to_fl0congen |=
+1
drivers/net/ethernet/chelsio/cxgb4vf/adapter.h
··· 408 408 USING_MSI = (1UL << 1), 409 409 USING_MSIX = (1UL << 2), 410 410 QUEUES_BOUND = (1UL << 3), 411 + ROOT_NO_RELAXED_ORDERING = (1UL << 4), 411 412 }; 412 413 413 414 /*
+18
drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
··· 2888 2888 */ 2889 2889 adapter->name = pci_name(pdev); 2890 2890 adapter->msg_enable = DFLT_MSG_ENABLE; 2891 + 2892 + /* If possible, we use PCIe Relaxed Ordering Attribute to deliver 2893 + * Ingress Packet Data to Free List Buffers in order to allow for 2894 + * chipset performance optimizations between the Root Complex and 2895 + * Memory Controllers. (Messages to the associated Ingress Queue 2896 + * notifying new Packet Placement in the Free List Buffers will be 2897 + * sent without the Relaxed Ordering Attribute, thus guaranteeing that 2898 + * all preceding PCIe Transaction Layer Packets will be processed 2899 + * first.) But some Root Complexes have various issues with Upstream 2900 + * Transaction Layer Packets with the Relaxed Ordering Attribute set. 2901 + * PCIe devices under such Root Complexes will have the Relaxed 2902 + * Ordering bit cleared in their configuration space, so we check our 2903 + * PCIe configuration space to see if it's flagged with advice against 2904 + * using Relaxed Ordering. 2905 + */ 2906 + if (!pcie_relaxed_ordering_enabled(pdev)) 2907 + adapter->flags |= ROOT_NO_RELAXED_ORDERING; 2908 + 2891 2909 err = adap_init0(adapter); 2892 2910 if (err) 2893 2911 goto err_unmap_bar;
+3
drivers/net/ethernet/chelsio/cxgb4vf/sge.c
··· 2205 2205 struct port_info *pi = netdev_priv(dev); 2206 2206 struct fw_iq_cmd cmd, rpl; 2207 2207 int ret, iqandst, flsz = 0; 2208 + int relaxed = !(adapter->flags & ROOT_NO_RELAXED_ORDERING); 2208 2209 2209 2210 /* 2210 2211 * If we're using MSI interrupts and we're not initializing the ··· 2301 2300 cpu_to_be32( 2302 2301 FW_IQ_CMD_FL0HOSTFCMODE_V(SGE_HOSTFCMODE_NONE) | 2303 2302 FW_IQ_CMD_FL0PACKEN_F | 2303 + FW_IQ_CMD_FL0FETCHRO_V(relaxed) | 2304 + FW_IQ_CMD_FL0DATARO_V(relaxed) | 2304 2305 FW_IQ_CMD_FL0PADEN_F); 2305 2306 2306 2307 /* In T6, for egress queue type FL there is internal overhead
+2 -2
drivers/net/ethernet/mellanox/mlx4/main.c
··· 432 432 /* Virtual PCI function needs to determine UAR page size from 433 433 * firmware. Only master PCI function can set the uar page size 434 434 */ 435 - if (enable_4k_uar) 435 + if (enable_4k_uar || !dev->persist->num_vfs) 436 436 dev->uar_page_shift = DEFAULT_UAR_PAGE_SHIFT; 437 437 else 438 438 dev->uar_page_shift = PAGE_SHIFT; ··· 2277 2277 2278 2278 dev->caps.max_fmr_maps = (1 << (32 - ilog2(dev->caps.num_mpts))) - 1; 2279 2279 2280 - if (enable_4k_uar) { 2280 + if (enable_4k_uar || !dev->persist->num_vfs) { 2281 2281 init_hca.log_uar_sz = ilog2(dev->caps.num_uars) + 2282 2282 PAGE_SHIFT - DEFAULT_UAR_PAGE_SHIFT; 2283 2283 init_hca.uar_page_sz = DEFAULT_UAR_PAGE_SHIFT - 12;
+2 -6
drivers/net/ethernet/netronome/nfp/flower/cmsg.c
··· 115 115 return; 116 116 } 117 117 118 - if (link) { 118 + if (link) 119 119 netif_carrier_on(netdev); 120 - rtnl_lock(); 121 - dev_set_mtu(netdev, be16_to_cpu(msg->mtu)); 122 - rtnl_unlock(); 123 - } else { 120 + else 124 121 netif_carrier_off(netdev); 125 - } 126 122 rcu_read_unlock(); 127 123 } 128 124
+1 -2
drivers/net/ethernet/netronome/nfp/nfp_net_common.c
··· 908 908 return NETDEV_TX_OK; 909 909 910 910 err_unmap: 911 - --f; 912 - while (f >= 0) { 911 + while (--f >= 0) { 913 912 frag = &skb_shinfo(skb)->frags[f]; 914 913 dma_unmap_page(dp->dev, tx_ring->txbufs[wr_idx].dma_addr, 915 914 skb_frag_size(frag), DMA_TO_DEVICE);
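The one-line fix above is worth spelling out: the old code decremented f once before the loop and never again, so the unmap loop could never terminate. Folding the decrement into the condition walks frags f-1 down to 0 exactly once. A standalone sketch; unwind() is a stand-in, not the driver function:

```c
#include <assert.h>

/* Visits indices f-1, f-2, ..., 0 once each, recording them; in the
 * driver each visit would dma_unmap_page() that fragment. */
static int unwind(int f, int *visited)
{
	int count = 0;

	while (--f >= 0)
		visited[count++] = f;
	return count;
}
```

Note that unwind(0) does nothing, which is exactly the behaviour wanted when the failure happened before any fragment was mapped.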
+1 -1
drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c
··· 2311 2311 loop_cnt++) { 2312 2312 NX_WR_DUMP_REG(select_addr, adapter->ahw.pci_base0, queue_id); 2313 2313 read_addr = queueEntry->read_addr; 2314 - for (k = 0; k < read_cnt; k--) { 2314 + for (k = 0; k < read_cnt; k++) { 2315 2315 NX_RD_DUMP_REG(read_addr, adapter->ahw.pci_base0, 2316 2316 &read_value); 2317 2317 *data_buff++ = read_value;
+6 -2
drivers/net/ethernet/sfc/mcdi_port.c
··· 938 938 static int efx_mcdi_mac_stats(struct efx_nic *efx, 939 939 enum efx_stats_action action, int clear) 940 940 { 941 - struct efx_ef10_nic_data *nic_data = efx->nic_data; 942 941 MCDI_DECLARE_BUF(inbuf, MC_CMD_MAC_STATS_IN_LEN); 943 942 int rc; 944 943 int change = action == EFX_STATS_PULL ? 0 : 1; ··· 959 960 MAC_STATS_IN_PERIODIC_NOEVENT, 1, 960 961 MAC_STATS_IN_PERIOD_MS, period); 961 962 MCDI_SET_DWORD(inbuf, MAC_STATS_IN_DMA_LEN, dma_len); 962 - MCDI_SET_DWORD(inbuf, MAC_STATS_IN_PORT_ID, nic_data->vport_id); 963 + 964 + if (efx_nic_rev(efx) >= EFX_REV_HUNT_A0) { 965 + struct efx_ef10_nic_data *nic_data = efx->nic_data; 966 + 967 + MCDI_SET_DWORD(inbuf, MAC_STATS_IN_PORT_ID, nic_data->vport_id); 968 + } 963 969 964 970 rc = efx_mcdi_rpc_quiet(efx, MC_CMD_MAC_STATS, inbuf, sizeof(inbuf), 965 971 NULL, 0, NULL);
+4 -5
drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
··· 204 204 struct stmmac_priv *priv = netdev_priv(ndev); 205 205 struct stmmac_mdio_bus_data *mdio_bus_data = priv->plat->mdio_bus_data; 206 206 struct device_node *mdio_node = priv->plat->mdio_node; 207 + struct device *dev = ndev->dev.parent; 207 208 int addr, found; 208 209 209 210 if (!mdio_bus_data) ··· 238 237 else 239 238 err = mdiobus_register(new_bus); 240 239 if (err != 0) { 241 - netdev_err(ndev, "Cannot register the MDIO bus\n"); 240 + dev_err(dev, "Cannot register the MDIO bus\n"); 242 241 goto bus_register_fail; 243 242 } 244 243 ··· 286 285 irq_str = irq_num; 287 286 break; 288 287 } 289 - netdev_info(ndev, "PHY ID %08x at %d IRQ %s (%s)%s\n", 290 - phydev->phy_id, addr, irq_str, phydev_name(phydev), 291 - act ? " active" : ""); 288 + phy_attached_info(phydev); 292 289 found = 1; 293 290 } 294 291 295 292 if (!found && !mdio_node) { 296 - netdev_warn(ndev, "No PHY found\n"); 293 + dev_warn(dev, "No PHY found\n"); 297 294 mdiobus_unregister(new_bus); 298 295 mdiobus_free(new_bus); 299 296 return -ENODEV;
+3
drivers/net/tun.c
··· 1879 1879 1880 1880 err_detach: 1881 1881 tun_detach_all(dev); 1882 + /* register_netdevice() already called tun_free_netdev() */ 1883 + goto err_free_dev; 1884 + 1882 1885 err_free_flow: 1883 1886 tun_flow_uninit(tun); 1884 1887 security_tun_dev_free_security(tun->security);
+4 -2
drivers/net/wireless/broadcom/brcm80211/brcmfmac/feature.c
··· 159 159 160 160 brcmf_feat_firmware_capabilities(ifp); 161 161 memset(&gscan_cfg, 0, sizeof(gscan_cfg)); 162 - brcmf_feat_iovar_data_set(ifp, BRCMF_FEAT_GSCAN, "pfn_gscan_cfg", 163 - &gscan_cfg, sizeof(gscan_cfg)); 162 + if (drvr->bus_if->chip != BRCM_CC_43430_CHIP_ID) 163 + brcmf_feat_iovar_data_set(ifp, BRCMF_FEAT_GSCAN, 164 + "pfn_gscan_cfg", 165 + &gscan_cfg, sizeof(gscan_cfg)); 164 166 brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_PNO, "pfn"); 165 167 if (drvr->bus_if->wowl_supported) 166 168 brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_WOWL, "wowl");
+7 -7
drivers/net/wireless/intel/iwlwifi/cfg/9000.c
··· 154 154 const struct iwl_cfg iwl9160_2ac_cfg = { 155 155 .name = "Intel(R) Dual Band Wireless AC 9160", 156 156 .fw_name_pre = IWL9260A_FW_PRE, 157 - .fw_name_pre_next_step = IWL9260B_FW_PRE, 157 + .fw_name_pre_b_or_c_step = IWL9260B_FW_PRE, 158 158 IWL_DEVICE_9000, 159 159 .ht_params = &iwl9000_ht_params, 160 160 .nvm_ver = IWL9000_NVM_VERSION, ··· 165 165 const struct iwl_cfg iwl9260_2ac_cfg = { 166 166 .name = "Intel(R) Dual Band Wireless AC 9260", 167 167 .fw_name_pre = IWL9260A_FW_PRE, 168 - .fw_name_pre_next_step = IWL9260B_FW_PRE, 168 + .fw_name_pre_b_or_c_step = IWL9260B_FW_PRE, 169 169 IWL_DEVICE_9000, 170 170 .ht_params = &iwl9000_ht_params, 171 171 .nvm_ver = IWL9000_NVM_VERSION, ··· 176 176 const struct iwl_cfg iwl9270_2ac_cfg = { 177 177 .name = "Intel(R) Dual Band Wireless AC 9270", 178 178 .fw_name_pre = IWL9260A_FW_PRE, 179 - .fw_name_pre_next_step = IWL9260B_FW_PRE, 179 + .fw_name_pre_b_or_c_step = IWL9260B_FW_PRE, 180 180 IWL_DEVICE_9000, 181 181 .ht_params = &iwl9000_ht_params, 182 182 .nvm_ver = IWL9000_NVM_VERSION, ··· 186 186 187 187 const struct iwl_cfg iwl9460_2ac_cfg = { 188 188 .name = "Intel(R) Dual Band Wireless AC 9460", 189 - .fw_name_pre = IWL9000_FW_PRE, 190 - .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, 189 + .fw_name_pre = IWL9260A_FW_PRE, 190 + .fw_name_pre_b_or_c_step = IWL9260B_FW_PRE, 191 191 IWL_DEVICE_9000, 192 192 .ht_params = &iwl9000_ht_params, 193 193 .nvm_ver = IWL9000_NVM_VERSION, ··· 198 198 199 199 const struct iwl_cfg iwl9560_2ac_cfg = { 200 200 .name = "Intel(R) Dual Band Wireless AC 9560", 201 - .fw_name_pre = IWL9000_FW_PRE, 202 - .fw_name_pre_rf_next_step = IWL9000RFB_FW_PRE, 201 + .fw_name_pre = IWL9260A_FW_PRE, 202 + .fw_name_pre_b_or_c_step = IWL9260B_FW_PRE, 203 203 IWL_DEVICE_9000, 204 204 .ht_params = &iwl9000_ht_params, 205 205 .nvm_ver = IWL9000_NVM_VERSION,
+2
drivers/net/wireless/intel/iwlwifi/fw/file.h
··· 328 328 * @IWL_UCODE_TLV_CAPA_TX_POWER_ACK: reduced TX power API has larger 329 329 * command size (command version 4) that supports toggling ACK TX 330 330 * power reduction. 331 + * @IWL_UCODE_TLV_CAPA_MLME_OFFLOAD: supports MLME offload 331 332 * 332 333 * @NUM_IWL_UCODE_TLV_CAPA: number of bits used 333 334 */ ··· 374 373 IWL_UCODE_TLV_CAPA_EXTEND_SHARED_MEM_CFG = (__force iwl_ucode_tlv_capa_t)80, 375 374 IWL_UCODE_TLV_CAPA_LQM_SUPPORT = (__force iwl_ucode_tlv_capa_t)81, 376 375 IWL_UCODE_TLV_CAPA_TX_POWER_ACK = (__force iwl_ucode_tlv_capa_t)84, 376 + IWL_UCODE_TLV_CAPA_MLME_OFFLOAD = (__force iwl_ucode_tlv_capa_t)96, 377 377 378 378 NUM_IWL_UCODE_TLV_CAPA 379 379 #ifdef __CHECKER__
+4 -4
drivers/net/wireless/intel/iwlwifi/iwl-config.h
··· 276 276 * @fw_name_pre: Firmware filename prefix. The api version and extension 277 277 * (.ucode) will be added to filename before loading from disk. The 278 278 * filename is constructed as fw_name_pre<api>.ucode. 279 - * @fw_name_pre_next_step: same as @fw_name_pre, only for next step 279 + * @fw_name_pre_b_or_c_step: same as @fw_name_pre, only for b or c steps 280 280 * (if supported) 281 - * @fw_name_pre_rf_next_step: same as @fw_name_pre_next_step, only for rf next 282 - * step. Supported only in integrated solutions. 281 + * @fw_name_pre_rf_next_step: same as @fw_name_pre_b_or_c_step, only for rf 282 + * next step. Supported only in integrated solutions. 283 283 * @ucode_api_max: Highest version of uCode API supported by driver. 284 284 * @ucode_api_min: Lowest version of uCode API supported by driver. 285 285 * @max_inst_size: The maximal length of the fw inst section ··· 330 330 /* params specific to an individual device within a device family */ 331 331 const char *name; 332 332 const char *fw_name_pre; 333 - const char *fw_name_pre_next_step; 333 + const char *fw_name_pre_b_or_c_step; 334 334 const char *fw_name_pre_rf_next_step; 335 335 /* params not likely to change within a device family */ 336 336 const struct iwl_base_params *base_params;
+3 -2
drivers/net/wireless/intel/iwlwifi/iwl-drv.c
··· 216 216 const char *fw_pre_name; 217 217 218 218 if (drv->trans->cfg->device_family == IWL_DEVICE_FAMILY_9000 && 219 - CSR_HW_REV_STEP(drv->trans->hw_rev) == SILICON_B_STEP) 220 - fw_pre_name = cfg->fw_name_pre_next_step; 219 + (CSR_HW_REV_STEP(drv->trans->hw_rev) == SILICON_B_STEP || 220 + CSR_HW_REV_STEP(drv->trans->hw_rev) == SILICON_C_STEP)) 221 + fw_pre_name = cfg->fw_name_pre_b_or_c_step; 221 222 else if (drv->trans->cfg->integrated && 222 223 CSR_HW_RFID_STEP(drv->trans->hw_rf_id) == SILICON_B_STEP && 223 224 cfg->fw_name_pre_rf_next_step)
+11 -8
drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
··· 785 785 int num_of_ch, __le32 *channels, u16 fw_mcc) 786 786 { 787 787 int ch_idx; 788 - u16 ch_flags, prev_ch_flags = 0; 788 + u16 ch_flags; 789 + u32 reg_rule_flags, prev_reg_rule_flags = 0; 789 790 const u8 *nvm_chan = cfg->ext_nvm ? 790 791 iwl_ext_nvm_channels : iwl_nvm_channels; 791 792 struct ieee80211_regdomain *regd; ··· 835 834 continue; 836 835 } 837 836 837 + reg_rule_flags = iwl_nvm_get_regdom_bw_flags(nvm_chan, ch_idx, 838 + ch_flags, cfg); 839 + 838 840 /* we can't continue the same rule */ 839 - if (ch_idx == 0 || prev_ch_flags != ch_flags || 841 + if (ch_idx == 0 || prev_reg_rule_flags != reg_rule_flags || 840 842 center_freq - prev_center_freq > 20) { 841 843 valid_rules++; 842 844 new_rule = true; ··· 858 854 rule->power_rule.max_eirp = 859 855 DBM_TO_MBM(IWL_DEFAULT_MAX_TX_POWER); 860 856 861 - rule->flags = iwl_nvm_get_regdom_bw_flags(nvm_chan, ch_idx, 862 - ch_flags, cfg); 857 + rule->flags = reg_rule_flags; 863 858 864 859 /* rely on auto-calculation to merge BW of contiguous chans */ 865 860 rule->flags |= NL80211_RRF_AUTO_BW; 866 861 rule->freq_range.max_bandwidth_khz = 0; 867 862 868 - prev_ch_flags = ch_flags; 869 863 prev_center_freq = center_freq; 864 + prev_reg_rule_flags = reg_rule_flags; 870 865 871 866 IWL_DEBUG_DEV(dev, IWL_DL_LAR, 872 - "Ch. %d [%sGHz] %s%s%s%s%s%s%s%s%s(0x%02x): Ad-Hoc %ssupported\n", 867 + "Ch. %d [%sGHz] %s%s%s%s%s%s%s%s%s(0x%02x) reg_flags 0x%x: %s\n", 873 868 center_freq, 874 869 band == NL80211_BAND_5GHZ ? "5.2" : "2.4", 875 870 CHECK_AND_PRINT_I(VALID), ··· 880 877 CHECK_AND_PRINT_I(160MHZ), 881 878 CHECK_AND_PRINT_I(INDOOR_ONLY), 882 879 CHECK_AND_PRINT_I(GO_CONCURRENT), 883 - ch_flags, 880 + ch_flags, reg_rule_flags, 884 881 ((ch_flags & NVM_CHANNEL_ACTIVE) && 885 882 !(ch_flags & NVM_CHANNEL_RADAR)) 886 - ? "" : "not "); 883 + ? "Ad-Hoc" : ""); 887 884 } 888 885 889 886 regd->n_reg_rules = valid_rules;
+4 -2
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
··· 1275 1275 1276 1276 entry = &wifi_pkg->package.elements[idx++]; 1277 1277 if ((entry->type != ACPI_TYPE_INTEGER) || 1278 - (entry->integer.value > U8_MAX)) 1279 - return -EINVAL; 1278 + (entry->integer.value > U8_MAX)) { 1279 + ret = -EINVAL; 1280 + goto out_free; 1281 + } 1280 1282 1281 1283 mvm->geo_profiles[i].values[j] = entry->integer.value; 1282 1284 }
+11 -1
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 2597 2597 spin_lock_bh(&mvm_sta->lock); 2598 2598 for (i = 0; i <= IWL_MAX_TID_COUNT; i++) { 2599 2599 tid_data = &mvm_sta->tid_data[i]; 2600 - while ((skb = __skb_dequeue(&tid_data->deferred_tx_frames))) 2600 + 2601 + while ((skb = __skb_dequeue(&tid_data->deferred_tx_frames))) { 2602 + struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 2603 + 2604 + /* 2605 + * The first deferred frame should've stopped the MAC 2606 + * queues, so we should never get a second deferred 2607 + * frame for the RA/TID. 2608 + */ 2609 + iwl_mvm_start_mac_queues(mvm, info->hw_queue); 2601 2610 ieee80211_free_txskb(mvm->hw, skb); 2611 + } 2602 2612 } 2603 2613 spin_unlock_bh(&mvm_sta->lock); 2604 2614 }
+4 -4
drivers/net/wireless/intel/iwlwifi/mvm/rs.c
··· 1291 1291 * first index into rate scale table. 1292 1292 */ 1293 1293 if (info->flags & IEEE80211_TX_STAT_AMPDU) { 1294 - rs_collect_tpc_data(mvm, lq_sta, curr_tbl, lq_rate.index, 1294 + rs_collect_tpc_data(mvm, lq_sta, curr_tbl, tx_resp_rate.index, 1295 1295 info->status.ampdu_len, 1296 1296 info->status.ampdu_ack_len, 1297 1297 reduced_txp); ··· 1312 1312 if (info->status.ampdu_ack_len == 0) 1313 1313 info->status.ampdu_len = 1; 1314 1314 1315 - rs_collect_tlc_data(mvm, lq_sta, curr_tbl, lq_rate.index, 1315 + rs_collect_tlc_data(mvm, lq_sta, curr_tbl, tx_resp_rate.index, 1316 1316 info->status.ampdu_len, 1317 1317 info->status.ampdu_ack_len); 1318 1318 ··· 1348 1348 continue; 1349 1349 1350 1350 rs_collect_tpc_data(mvm, lq_sta, tmp_tbl, 1351 - lq_rate.index, 1, 1351 + tx_resp_rate.index, 1, 1352 1352 i < retries ? 0 : legacy_success, 1353 1353 reduced_txp); 1354 1354 rs_collect_tlc_data(mvm, lq_sta, tmp_tbl, 1355 - lq_rate.index, 1, 1355 + tx_resp_rate.index, 1, 1356 1356 i < retries ? 0 : legacy_success); 1357 1357 } 1358 1358
+6 -4
drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
··· 636 636 637 637 baid_data = rcu_dereference(mvm->baid_map[baid]); 638 638 if (!baid_data) { 639 - WARN(!(reorder & IWL_RX_MPDU_REORDER_BA_OLD_SN), 640 - "Received baid %d, but no data exists for this BAID\n", 641 - baid); 639 + IWL_DEBUG_RX(mvm, 640 + "Got valid BAID but no baid allocated, bypass the re-ordering buffer. Baid %d reorder 0x%x\n", 641 + baid, reorder); 642 642 return false; 643 643 } 644 644 ··· 759 759 760 760 data = rcu_dereference(mvm->baid_map[baid]); 761 761 if (!data) { 762 - WARN_ON(!(reorder_data & IWL_RX_MPDU_REORDER_BA_OLD_SN)); 762 + IWL_DEBUG_RX(mvm, 763 + "Got valid BAID but no baid allocated, bypass the re-ordering buffer. Baid %d reorder 0x%x\n", 764 + baid, reorder_data); 763 765 goto out; 764 766 } 765 767
+4 -3
drivers/net/wireless/intel/iwlwifi/mvm/sta.c
··· 121 121 .mac_id_n_color = cpu_to_le32(mvm_sta->mac_id_n_color), 122 122 .add_modify = update ? 1 : 0, 123 123 .station_flags_msk = cpu_to_le32(STA_FLG_FAT_EN_MSK | 124 - STA_FLG_MIMO_EN_MSK), 124 + STA_FLG_MIMO_EN_MSK | 125 + STA_FLG_RTS_MIMO_PROT), 125 126 .tid_disable_tx = cpu_to_le16(mvm_sta->tid_disable_agg), 126 127 }; 127 128 int ret; ··· 291 290 goto unlock; 292 291 293 292 mvm_sta = iwl_mvm_sta_from_mac80211(sta); 294 - ieee80211_stop_rx_ba_session_offl(mvm_sta->vif, 295 - sta->addr, ba_data->tid); 293 + ieee80211_rx_ba_timer_expired(mvm_sta->vif, 294 + sta->addr, ba_data->tid); 296 295 unlock: 297 296 rcu_read_unlock(); 298 297 }
+10 -2
drivers/net/wireless/intel/iwlwifi/mvm/tx.c
··· 185 185 else 186 186 udp_hdr(skb)->check = 0; 187 187 188 - /* mac header len should include IV, size is in words */ 189 - if (info->control.hw_key) 188 + /* 189 + * mac header len should include IV, size is in words unless 190 + * the IV is added by the firmware like in WEP. 191 + * In new Tx API, the IV is always added by the firmware. 192 + */ 193 + if (!iwl_mvm_has_new_tx_api(mvm) && info->control.hw_key && 194 + info->control.hw_key->cipher != WLAN_CIPHER_SUITE_WEP40 && 195 + info->control.hw_key->cipher != WLAN_CIPHER_SUITE_WEP104) 190 196 mh_len += info->control.hw_key->iv_len; 191 197 mh_len /= 2; 192 198 offload_assist |= mh_len << TX_CMD_OFFLD_MH_SIZE; ··· 1820 1814 struct iwl_mvm_ba_notif *ba_notif; 1821 1815 struct iwl_mvm_tid_data *tid_data; 1822 1816 struct iwl_mvm_sta *mvmsta; 1817 + 1818 + ba_info.flags = IEEE80211_TX_STAT_AMPDU; 1823 1819 1824 1820 if (iwl_mvm_has_new_tx_api(mvm)) { 1825 1821 struct iwl_mvm_compressed_ba_notif *ba_res =
+20
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 510 510 511 511 /* 9000 Series */ 512 512 {IWL_PCI_DEVICE(0x271B, 0x0010, iwl9160_2ac_cfg)}, 513 + {IWL_PCI_DEVICE(0x271B, 0x0014, iwl9160_2ac_cfg)}, 514 + {IWL_PCI_DEVICE(0x271B, 0x0210, iwl9160_2ac_cfg)}, 513 515 {IWL_PCI_DEVICE(0x2526, 0x0000, iwl9260_2ac_cfg)}, 514 516 {IWL_PCI_DEVICE(0x2526, 0x0010, iwl9260_2ac_cfg)}, 517 + {IWL_PCI_DEVICE(0x2526, 0x0014, iwl9260_2ac_cfg)}, 518 + {IWL_PCI_DEVICE(0x2526, 0xA014, iwl9260_2ac_cfg)}, 519 + {IWL_PCI_DEVICE(0x2526, 0x4010, iwl9260_2ac_cfg)}, 520 + {IWL_PCI_DEVICE(0x2526, 0x0210, iwl9260_2ac_cfg)}, 521 + {IWL_PCI_DEVICE(0x2526, 0x0214, iwl9260_2ac_cfg)}, 515 522 {IWL_PCI_DEVICE(0x2526, 0x1410, iwl9270_2ac_cfg)}, 523 + {IWL_PCI_DEVICE(0x2526, 0x1610, iwl9270_2ac_cfg)}, 516 524 {IWL_PCI_DEVICE(0x9DF0, 0x0A10, iwl9460_2ac_cfg)}, 517 525 {IWL_PCI_DEVICE(0x9DF0, 0x0010, iwl9460_2ac_cfg)}, 518 526 {IWL_PCI_DEVICE(0x9DF0, 0x0210, iwl9460_2ac_cfg)}, ··· 535 527 {IWL_PCI_DEVICE(0x9DF0, 0x2A10, iwl9460_2ac_cfg)}, 536 528 {IWL_PCI_DEVICE(0x30DC, 0x0060, iwl9460_2ac_cfg)}, 537 529 {IWL_PCI_DEVICE(0x2526, 0x0060, iwl9460_2ac_cfg)}, 530 + {IWL_PCI_DEVICE(0x2526, 0x0260, iwl9460_2ac_cfg)}, 531 + {IWL_PCI_DEVICE(0x2526, 0x0064, iwl9460_2ac_cfg)}, 532 + {IWL_PCI_DEVICE(0x2526, 0x00A4, iwl9460_2ac_cfg)}, 533 + {IWL_PCI_DEVICE(0x2526, 0x40A4, iwl9460_2ac_cfg)}, 534 + {IWL_PCI_DEVICE(0x2526, 0x02A4, iwl9460_2ac_cfg)}, 535 + {IWL_PCI_DEVICE(0x2526, 0x00A0, iwl9460_2ac_cfg)}, 536 + {IWL_PCI_DEVICE(0x2526, 0x02A0, iwl9460_2ac_cfg)}, 538 537 {IWL_PCI_DEVICE(0x9DF0, 0x0060, iwl9460_2ac_cfg)}, 539 538 {IWL_PCI_DEVICE(0xA370, 0x0060, iwl9460_2ac_cfg)}, 540 539 {IWL_PCI_DEVICE(0x31DC, 0x0060, iwl9460_2ac_cfg)}, 541 540 {IWL_PCI_DEVICE(0x2526, 0x0030, iwl9560_2ac_cfg)}, 541 + {IWL_PCI_DEVICE(0x2526, 0x4030, iwl9560_2ac_cfg)}, 542 + {IWL_PCI_DEVICE(0x2526, 0x0230, iwl9560_2ac_cfg)}, 543 + {IWL_PCI_DEVICE(0x2526, 0x0234, iwl9560_2ac_cfg)}, 544 + {IWL_PCI_DEVICE(0x2526, 0x0238, iwl9560_2ac_cfg)}, 545 + {IWL_PCI_DEVICE(0x2526, 0x023C, iwl9560_2ac_cfg)}, 542 546 {IWL_PCI_DEVICE(0x9DF0, 0x0030, iwl9560_2ac_cfg)}, 543 547 {IWL_PCI_DEVICE(0xA370, 0x0030, iwl9560_2ac_cfg)}, 544 548 {IWL_PCI_DEVICE(0x31DC, 0x0030, iwl9560_2ac_cfg)},
+2 -4
drivers/ntb/ntb_transport.c
··· 924 924 ntb_free_mw(nt, i); 925 925 926 926 /* if there's an actual failure, we should just bail */ 927 - if (rc < 0) { 928 - ntb_link_disable(ndev); 927 + if (rc < 0) 929 928 return; 930 - } 931 929 932 930 out: 933 931 if (ntb_link_is_up(ndev, NULL, NULL) == 1) ··· 1057 1059 int node; 1058 1060 int rc, i; 1059 1061 1060 - mw_count = ntb_mw_count(ndev, PIDX); 1062 + mw_count = ntb_peer_mw_count(ndev); 1061 1063 1062 1064 if (!ndev->ops->mw_set_trans) { 1063 1065 dev_err(&ndev->dev, "Inbound MW based NTB API is required\n");
+1 -1
drivers/ntb/test/ntb_tool.c
··· 959 959 tc->ntb = ntb; 960 960 init_waitqueue_head(&tc->link_wq); 961 961 962 - tc->mw_count = min(ntb_mw_count(tc->ntb, PIDX), MAX_MWS); 962 + tc->mw_count = min(ntb_peer_mw_count(tc->ntb), MAX_MWS); 963 963 for (i = 0; i < tc->mw_count; i++) { 964 964 rc = tool_init_mw(tc, i); 965 965 if (rc)
+2 -1
drivers/nvme/host/fabrics.c
··· 794 794 int i; 795 795 796 796 for (i = 0; i < ARRAY_SIZE(opt_tokens); i++) { 797 - if (opt_tokens[i].token & ~allowed_opts) { 797 + if ((opt_tokens[i].token & opts->mask) && 798 + (opt_tokens[i].token & ~allowed_opts)) { 798 799 pr_warn("invalid parameter '%s'\n", 799 800 opt_tokens[i].pattern); 800 801 }
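The fabrics.c hunk narrows the warning so a token is flagged only when it was actually requested (present in `opts->mask`) and is outside the transport's `allowed_opts`. A minimal userspace sketch of that two-mask filter (illustrative names, not the kernel's API):

```c
#include <assert.h>
#include <stddef.h>

/* Count option tokens that were requested (set in `given`) but are
 * not permitted by the transport (`allowed`). Mirrors the fixed
 * check in the hunk above: both conditions must hold. */
static int count_invalid_opts(const unsigned int *tokens, size_t n,
                              unsigned int given, unsigned int allowed)
{
    int invalid = 0;
    for (size_t i = 0; i < n; i++)
        if ((tokens[i] & given) && (tokens[i] & ~allowed))
            invalid++;
    return invalid;
}
```

Before the fix, the second condition alone made the loop warn about every disallowed option, even ones the user never passed.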
+2 -3
drivers/nvme/host/pci.c
··· 801 801 return; 802 802 } 803 803 804 + nvmeq->cqe_seen = 1; 804 805 req = blk_mq_tag_to_rq(*nvmeq->tags, cqe->command_id); 805 806 nvme_end_request(req, cqe->status, cqe->result); 806 807 } ··· 831 830 consumed++; 832 831 } 833 832 834 - if (consumed) { 833 + if (consumed) 835 834 nvme_ring_cq_doorbell(nvmeq); 836 - nvmeq->cqe_seen = 1; 837 - } 838 835 } 839 836 840 837 static irqreturn_t nvme_irq(int irq, void *data)
-6
drivers/nvme/target/admin-cmd.c
··· 199 199 copy_and_pad(id->mn, sizeof(id->mn), model, sizeof(model) - 1); 200 200 copy_and_pad(id->fr, sizeof(id->fr), UTS_RELEASE, strlen(UTS_RELEASE)); 201 201 202 - memset(id->mn, ' ', sizeof(id->mn)); 203 - strncpy((char *)id->mn, "Linux", sizeof(id->mn)); 204 - 205 - memset(id->fr, ' ', sizeof(id->fr)); 206 - strncpy((char *)id->fr, UTS_RELEASE, sizeof(id->fr)); 207 - 208 202 id->rab = 6; 209 203 210 204 /*
+5 -4
drivers/nvme/target/fc.c
··· 394 394 static struct nvmet_fc_ls_iod * 395 395 nvmet_fc_alloc_ls_iod(struct nvmet_fc_tgtport *tgtport) 396 396 { 397 - static struct nvmet_fc_ls_iod *iod; 397 + struct nvmet_fc_ls_iod *iod; 398 398 unsigned long flags; 399 399 400 400 spin_lock_irqsave(&tgtport->lock, flags); ··· 471 471 static struct nvmet_fc_fcp_iod * 472 472 nvmet_fc_alloc_fcp_iod(struct nvmet_fc_tgt_queue *queue) 473 473 { 474 - static struct nvmet_fc_fcp_iod *fod; 474 + struct nvmet_fc_fcp_iod *fod; 475 475 476 476 lockdep_assert_held(&queue->qlock); 477 477 ··· 704 704 { 705 705 struct nvmet_fc_tgtport *tgtport = queue->assoc->tgtport; 706 706 struct nvmet_fc_fcp_iod *fod = queue->fod; 707 - struct nvmet_fc_defer_fcp_req *deferfcp; 707 + struct nvmet_fc_defer_fcp_req *deferfcp, *tempptr; 708 708 unsigned long flags; 709 709 int i, writedataactive; 710 710 bool disconnect; ··· 735 735 } 736 736 737 737 /* Cleanup defer'ed IOs in queue */ 738 - list_for_each_entry(deferfcp, &queue->avail_defer_list, req_list) { 738 + list_for_each_entry_safe(deferfcp, tempptr, &queue->avail_defer_list, 739 + req_list) { 739 740 list_del(&deferfcp->req_list); 740 741 kfree(deferfcp); 741 742 }
+4 -4
drivers/of/device.c
··· 89 89 bool coherent; 90 90 unsigned long offset; 91 91 const struct iommu_ops *iommu; 92 + u64 mask; 92 93 93 94 /* 94 95 * Set default coherent_dma_mask to 32 bit. Drivers are expected to ··· 135 134 * Limit coherent and dma mask based on size and default mask 136 135 * set by the driver. 137 136 */ 138 - dev->coherent_dma_mask = min(dev->coherent_dma_mask, 139 - DMA_BIT_MASK(ilog2(dma_addr + size))); 140 - *dev->dma_mask = min((*dev->dma_mask), 141 - DMA_BIT_MASK(ilog2(dma_addr + size))); 137 + mask = DMA_BIT_MASK(ilog2(dma_addr + size - 1) + 1); 138 + dev->coherent_dma_mask &= mask; 139 + *dev->dma_mask &= mask; 142 140 143 141 coherent = of_dma_is_coherent(np); 144 142 dev_dbg(dev, "device is%sdma coherent\n",
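The of/device.c change derives the clamp mask from the last addressable byte, `dma_addr + size - 1`, instead of `dma_addr + size`; the old expression lost a bit whenever the end of the range was not an exact power of two. A standalone sketch of the corrected computation, with userspace stand-ins for the kernel's `ilog2()` and `DMA_BIT_MASK()`:

```c
#include <assert.h>
#include <stdint.h>

/* floor(log2(x)), like the kernel's ilog2() for x > 0 */
static unsigned int ilog2_u64(uint64_t x)
{
    unsigned int r = 0;
    while (x >>= 1)
        r++;
    return r;
}

#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

/* Smallest DMA mask covering every byte in [dma_addr, dma_addr + size) */
static uint64_t range_dma_mask(uint64_t dma_addr, uint64_t size)
{
    return DMA_BIT_MASK(ilog2_u64(dma_addr + size - 1) + 1);
}
```

For a range ending at 0xC0000000, for instance, `ilog2(end)` is 31 and the old form produced a 31-bit mask that cannot address the last gigabyte; the fixed form yields the full 32-bit mask.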
+1 -1
drivers/parisc/dino.c
··· 956 956 957 957 dino_dev->hba.dev = dev; 958 958 dino_dev->hba.base_addr = ioremap_nocache(hpa, 4096); 959 - dino_dev->hba.lmmio_space_offset = 0; /* CPU addrs == bus addrs */ 959 + dino_dev->hba.lmmio_space_offset = PCI_F_EXTEND; 960 960 spin_lock_init(&dino_dev->dinosaur_pen); 961 961 dino_dev->hba.iommu = ccio_get_iommu(dev); 962 962
+3 -10
drivers/pci/msi.c
··· 538 538 struct msi_desc *entry; 539 539 u16 control; 540 540 541 - if (affd) { 541 + if (affd) 542 542 masks = irq_create_affinity_masks(nvec, affd); 543 - if (!masks) 544 - dev_err(&dev->dev, "can't allocate MSI affinity masks for %d vectors\n", 545 - nvec); 546 - } 543 + 547 544 548 545 /* MSI Entry Initialization */ 549 546 entry = alloc_msi_entry(&dev->dev, nvec, masks); ··· 676 679 struct msi_desc *entry; 677 680 int ret, i; 678 681 679 - if (affd) { 682 + if (affd) 680 683 masks = irq_create_affinity_masks(nvec, affd); 681 - if (!masks) 682 - dev_err(&dev->dev, "can't allocate MSI-X affinity masks for %d vectors\n", 683 - nvec); 684 - } 685 684 686 685 for (i = 0, curmsk = masks; i < nvec; i++) { 687 686 entry = alloc_msi_entry(&dev->dev, 1, curmsk);
+1 -1
drivers/pci/pci.c
··· 514 514 */ 515 515 struct pci_dev *pci_find_pcie_root_port(struct pci_dev *dev) 516 516 { 517 - struct pci_dev *bridge, *highest_pcie_bridge = NULL; 517 + struct pci_dev *bridge, *highest_pcie_bridge = dev; 518 518 519 519 bridge = pci_upstream_bridge(dev); 520 520 while (bridge && pci_is_pcie(bridge)) {
+43
drivers/pci/probe.c
··· 1762 1762 PCI_EXP_DEVCTL_EXT_TAG); 1763 1763 } 1764 1764 1765 + /** 1766 + * pcie_relaxed_ordering_enabled - Probe for PCIe relaxed ordering enable 1767 + * @dev: PCI device to query 1768 + * 1769 + * Returns true if the device has enabled relaxed ordering attribute. 1770 + */ 1771 + bool pcie_relaxed_ordering_enabled(struct pci_dev *dev) 1772 + { 1773 + u16 v; 1774 + 1775 + pcie_capability_read_word(dev, PCI_EXP_DEVCTL, &v); 1776 + 1777 + return !!(v & PCI_EXP_DEVCTL_RELAX_EN); 1778 + } 1779 + EXPORT_SYMBOL(pcie_relaxed_ordering_enabled); 1780 + 1781 + static void pci_configure_relaxed_ordering(struct pci_dev *dev) 1782 + { 1783 + struct pci_dev *root; 1784 + 1785 + /* PCI_EXP_DEVICE_RELAX_EN is RsvdP in VFs */ 1786 + if (dev->is_virtfn) 1787 + return; 1788 + 1789 + if (!pcie_relaxed_ordering_enabled(dev)) 1790 + return; 1791 + 1792 + /* 1793 + * For now, we only deal with Relaxed Ordering issues with Root 1794 + * Ports. Peer-to-Peer DMA is another can of worms. 1795 + */ 1796 + root = pci_find_pcie_root_port(dev); 1797 + if (!root) 1798 + return; 1799 + 1800 + if (root->dev_flags & PCI_DEV_FLAGS_NO_RELAXED_ORDERING) { 1801 + pcie_capability_clear_word(dev, PCI_EXP_DEVCTL, 1802 + PCI_EXP_DEVCTL_RELAX_EN); 1803 + dev_info(&dev->dev, "Disable Relaxed Ordering because the Root Port didn't support it\n"); 1804 + } 1805 + } 1806 + 1765 1807 static void pci_configure_device(struct pci_dev *dev) 1766 1808 { 1767 1809 struct hotplug_params hpp; ··· 1811 1769 1812 1770 pci_configure_mps(dev); 1813 1771 pci_configure_extended_tags(dev); 1772 + pci_configure_relaxed_ordering(dev); 1814 1773 1815 1774 memset(&hpp, 0, sizeof(hpp)); 1816 1775 ret = pci_get_hp_params(dev, &hpp);
+89
drivers/pci/quirks.c
··· 4016 4016 quirk_tw686x_class); 4017 4017 4018 4018 /* 4019 + * Some devices have problems with Transaction Layer Packets with the Relaxed 4020 + * Ordering Attribute set. Such devices should mark themselves and other 4021 + * Device Drivers should check before sending TLPs with RO set. 4022 + */ 4023 + static void quirk_relaxedordering_disable(struct pci_dev *dev) 4024 + { 4025 + dev->dev_flags |= PCI_DEV_FLAGS_NO_RELAXED_ORDERING; 4026 + dev_info(&dev->dev, "Disable Relaxed Ordering Attributes to avoid PCIe Completion erratum\n"); 4027 + } 4028 + 4029 + /* 4030 + * Intel Xeon processors based on Broadwell/Haswell microarchitecture Root 4031 + * Complex has a Flow Control Credit issue which can cause performance 4032 + * problems with Upstream Transaction Layer Packets with Relaxed Ordering set. 4033 + */ 4034 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x6f01, PCI_CLASS_NOT_DEFINED, 8, 4035 + quirk_relaxedordering_disable); 4036 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x6f02, PCI_CLASS_NOT_DEFINED, 8, 4037 + quirk_relaxedordering_disable); 4038 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x6f03, PCI_CLASS_NOT_DEFINED, 8, 4039 + quirk_relaxedordering_disable); 4040 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x6f04, PCI_CLASS_NOT_DEFINED, 8, 4041 + quirk_relaxedordering_disable); 4042 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x6f05, PCI_CLASS_NOT_DEFINED, 8, 4043 + quirk_relaxedordering_disable); 4044 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x6f06, PCI_CLASS_NOT_DEFINED, 8, 4045 + quirk_relaxedordering_disable); 4046 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x6f07, PCI_CLASS_NOT_DEFINED, 8, 4047 + quirk_relaxedordering_disable); 4048 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x6f08, PCI_CLASS_NOT_DEFINED, 8, 4049 + quirk_relaxedordering_disable); 4050 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x6f09, PCI_CLASS_NOT_DEFINED, 8, 4051 + quirk_relaxedordering_disable); 4052 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x6f0a, PCI_CLASS_NOT_DEFINED, 8, 4053 + quirk_relaxedordering_disable); 4054 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x6f0b, PCI_CLASS_NOT_DEFINED, 8, 4055 + quirk_relaxedordering_disable); 4056 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x6f0c, PCI_CLASS_NOT_DEFINED, 8, 4057 + quirk_relaxedordering_disable); 4058 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x6f0d, PCI_CLASS_NOT_DEFINED, 8, 4059 + quirk_relaxedordering_disable); 4060 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x6f0e, PCI_CLASS_NOT_DEFINED, 8, 4061 + quirk_relaxedordering_disable); 4062 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f01, PCI_CLASS_NOT_DEFINED, 8, 4063 + quirk_relaxedordering_disable); 4064 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f02, PCI_CLASS_NOT_DEFINED, 8, 4065 + quirk_relaxedordering_disable); 4066 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f03, PCI_CLASS_NOT_DEFINED, 8, 4067 + quirk_relaxedordering_disable); 4068 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f04, PCI_CLASS_NOT_DEFINED, 8, 4069 + quirk_relaxedordering_disable); 4070 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f05, PCI_CLASS_NOT_DEFINED, 8, 4071 + quirk_relaxedordering_disable); 4072 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f06, PCI_CLASS_NOT_DEFINED, 8, 4073 + quirk_relaxedordering_disable); 4074 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f07, PCI_CLASS_NOT_DEFINED, 8, 4075 + quirk_relaxedordering_disable); 4076 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f08, PCI_CLASS_NOT_DEFINED, 8, 4077 + quirk_relaxedordering_disable); 4078 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f09, PCI_CLASS_NOT_DEFINED, 8, 4079 + quirk_relaxedordering_disable); 4080 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f0a, PCI_CLASS_NOT_DEFINED, 8, 4081 + quirk_relaxedordering_disable); 4082 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f0b, PCI_CLASS_NOT_DEFINED, 8, 4083 + quirk_relaxedordering_disable); 4084 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f0c, PCI_CLASS_NOT_DEFINED, 8, 4085 + quirk_relaxedordering_disable); 4086 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f0d, PCI_CLASS_NOT_DEFINED, 8, 4087 + quirk_relaxedordering_disable); 4088 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, 0x2f0e, PCI_CLASS_NOT_DEFINED, 8, 4089 + quirk_relaxedordering_disable); 4090 + 4091 + /* 4092 + * The AMD ARM A1100 (AKA "SEATTLE") SoC has a bug in its PCIe Root Complex 4093 + * where Upstream Transaction Layer Packets with the Relaxed Ordering 4094 + * Attribute clear are allowed to bypass earlier TLPs with Relaxed Ordering 4095 + * set. This is a violation of the PCIe 3.0 Transaction Ordering Rules 4096 + * outlined in Section 2.4.1 (PCI Express(r) Base Specification Revision 3.0 4097 + * November 10, 2010). As a result, on this platform we can't use Relaxed 4098 + * Ordering for Upstream TLPs. 4099 + */ 4100 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_AMD, 0x1a00, PCI_CLASS_NOT_DEFINED, 8, 4101 + quirk_relaxedordering_disable); 4102 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_AMD, 0x1a01, PCI_CLASS_NOT_DEFINED, 8, 4103 + quirk_relaxedordering_disable); 4104 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_AMD, 0x1a02, PCI_CLASS_NOT_DEFINED, 8, 4105 + quirk_relaxedordering_disable); 4106 + 4107 + /* 4019 4108 * Per PCIe r3.0, sec 2.2.9, "Completion headers must supply the same 4020 4109 * values for the Attribute as were supplied in the header of the 4021 4110 * corresponding Request, except as explicitly allowed when IDO is used."
-1
drivers/rtc/rtc-ds1307.c
··· 1301 1301 static const struct regmap_config regmap_config = { 1302 1302 .reg_bits = 8, 1303 1303 .val_bits = 8, 1304 - .max_register = 0x12, 1305 1304 }; 1306 1305 1307 1306 static int ds1307_probe(struct i2c_client *client,
+11
drivers/scsi/Kconfig
··· 47 47 default n 48 48 depends on NET 49 49 50 + config SCSI_MQ_DEFAULT 51 + bool "SCSI: use blk-mq I/O path by default" 52 + depends on SCSI 53 + ---help--- 54 + This option enables the new blk-mq based I/O path for SCSI 55 + devices by default. With the option the scsi_mod.use_blk_mq 56 + module/boot option defaults to Y, without it to N, but it can 57 + still be overridden either way. 58 + 59 + If unsure say N. 60 + 50 61 config SCSI_PROC_FS 51 62 bool "legacy /proc/scsi/ support" 52 63 depends on SCSI && PROC_FS
+7 -2
drivers/scsi/aacraid/aachba.c
··· 549 549 if ((le32_to_cpu(get_name_reply->status) == CT_OK) 550 550 && (get_name_reply->data[0] != '\0')) { 551 551 char *sp = get_name_reply->data; 552 - sp[sizeof(((struct aac_get_name_resp *)NULL)->data)] = '\0'; 552 + int data_size = FIELD_SIZEOF(struct aac_get_name_resp, data); 553 + 554 + sp[data_size - 1] = '\0'; 553 555 while (*sp == ' ') 554 556 ++sp; 555 557 if (*sp) { ··· 581 579 static int aac_get_container_name(struct scsi_cmnd * scsicmd) 582 580 { 583 581 int status; 582 + int data_size; 584 583 struct aac_get_name *dinfo; 585 584 struct fib * cmd_fibcontext; 586 585 struct aac_dev * dev; 587 586 588 587 dev = (struct aac_dev *)scsicmd->device->host->hostdata; 588 + 589 + data_size = FIELD_SIZEOF(struct aac_get_name_resp, data); 589 590 590 591 cmd_fibcontext = aac_fib_alloc_tag(dev, scsicmd); 591 592 ··· 598 593 dinfo->command = cpu_to_le32(VM_ContainerConfig); 599 594 dinfo->type = cpu_to_le32(CT_READ_NAME); 600 595 dinfo->cid = cpu_to_le32(scmd_id(scsicmd)); 601 - dinfo->count = cpu_to_le32(sizeof(((struct aac_get_name_resp *)NULL)->data)); 596 + dinfo->count = cpu_to_le32(data_size - 1); 602 597 603 598 status = aac_fib_send(ContainerCommand, 604 599 cmd_fibcontext,
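Together with the aacraid.h hunk below (which grows `data` from 16 to 17 bytes), this fix closes an off-by-one: the old code wrote the terminating NUL at `sp[sizeof(data)]`, one byte past the end of the array. The rewrite terminates at `data_size - 1` and uses `FIELD_SIZEOF()` to read a member's size without an instance. A self-contained sketch of that idiom with an illustrative stand-in struct:

```c
#include <assert.h>
#include <string.h>

/* sizeof a struct member without needing an instance; this is the
 * kernel's FIELD_SIZEOF() (later renamed sizeof_field()). */
#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))

struct name_reply {            /* stand-in for aac_get_name_resp */
    int  status;
    char data[17];
};

/* NUL-terminate inside the buffer: the last valid index is size - 1,
 * which is exactly what the fixed driver writes to. */
static void terminate_reply(struct name_reply *r)
{
    size_t n = FIELD_SIZEOF(struct name_reply, data);
    r->data[n - 1] = '\0';
}
```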
+1 -1
drivers/scsi/aacraid/aacraid.h
··· 2274 2274 __le32 parm3; 2275 2275 __le32 parm4; 2276 2276 __le32 parm5; 2277 - u8 data[16]; 2277 + u8 data[17]; 2278 2278 }; 2279 2279 2280 2280 #define CT_CID_TO_32BITS_UID 165
+3 -1
drivers/scsi/csiostor/csio_hw.c
··· 3845 3845 3846 3846 if (csio_is_hw_ready(hw)) 3847 3847 return 0; 3848 - else 3848 + else if (csio_match_state(hw, csio_hws_uninit)) 3849 3849 return -EINVAL; 3850 + else 3851 + return -ENODEV; 3850 3852 } 3851 3853 3852 3854 int
+8 -4
drivers/scsi/csiostor/csio_init.c
··· 969 969 970 970 pci_set_drvdata(pdev, hw); 971 971 972 - if (csio_hw_start(hw) != 0) { 973 - dev_err(&pdev->dev, 974 - "Failed to start FW, continuing in debug mode.\n"); 975 - return 0; 972 + rv = csio_hw_start(hw); 973 + if (rv) { 974 + if (rv == -EINVAL) { 975 + dev_err(&pdev->dev, 976 + "Failed to start FW, continuing in debug mode.\n"); 977 + return 0; 978 + } 979 + goto err_lnode_exit; 976 980 } 977 981 978 982 sprintf(hw->fwrev_str, "%u.%u.%u.%u\n",
+3
drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
··· 1635 1635 goto rel_resource; 1636 1636 } 1637 1637 1638 + if (!(n->nud_state & NUD_VALID)) 1639 + neigh_event_send(n, NULL); 1640 + 1638 1641 csk->atid = cxgb4_alloc_atid(lldi->tids, csk); 1639 1642 if (csk->atid < 0) { 1640 1643 pr_err("%s, NO atid available.\n", ndev->name);
+19 -14
drivers/scsi/ipr.c
··· 3351 3351 return; 3352 3352 } 3353 3353 3354 + if (ioa_cfg->scsi_unblock) { 3355 + ioa_cfg->scsi_unblock = 0; 3356 + ioa_cfg->scsi_blocked = 0; 3357 + spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3358 + scsi_unblock_requests(ioa_cfg->host); 3359 + spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); 3360 + if (ioa_cfg->scsi_blocked) 3361 + scsi_block_requests(ioa_cfg->host); 3362 + } 3363 + 3354 3364 if (!ioa_cfg->scan_enabled) { 3355 3365 spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3356 3366 return; ··· 7221 7211 ENTER; 7222 7212 if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].removing_ioa) { 7223 7213 ipr_trace; 7224 - spin_unlock_irq(ioa_cfg->host->host_lock); 7225 - scsi_unblock_requests(ioa_cfg->host); 7226 - spin_lock_irq(ioa_cfg->host->host_lock); 7214 + ioa_cfg->scsi_unblock = 1; 7215 + schedule_work(&ioa_cfg->work_q); 7227 7216 } 7228 7217 7229 7218 ioa_cfg->in_reset_reload = 0; ··· 7296 7287 list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q); 7297 7288 wake_up_all(&ioa_cfg->reset_wait_q); 7298 7289 7299 - spin_unlock(ioa_cfg->host->host_lock); 7300 - scsi_unblock_requests(ioa_cfg->host); 7301 - spin_lock(ioa_cfg->host->host_lock); 7302 - 7303 - if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].allow_cmds) 7304 - scsi_block_requests(ioa_cfg->host); 7305 - 7290 + ioa_cfg->scsi_unblock = 1; 7306 7291 schedule_work(&ioa_cfg->work_q); 7307 7292 LEAVE; 7308 7293 return IPR_RC_JOB_RETURN; ··· 9252 9249 spin_unlock(&ioa_cfg->hrrq[i]._lock); 9253 9250 } 9254 9251 wmb(); 9255 - if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].removing_ioa) 9252 + if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].removing_ioa) { 9253 + ioa_cfg->scsi_unblock = 0; 9254 + ioa_cfg->scsi_blocked = 1; 9256 9255 scsi_block_requests(ioa_cfg->host); 9256 + } 9257 9257 9258 9258 ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg); 9259 9259 ioa_cfg->reset_cmd = ipr_cmd; ··· 9312 9306 wake_up_all(&ioa_cfg->reset_wait_q); 9313 9307 9314 9308 if (!ioa_cfg->hrrq[IPR_INIT_HRRQ].removing_ioa) { 9315 - spin_unlock_irq(ioa_cfg->host->host_lock); 9316 - scsi_unblock_requests(ioa_cfg->host); 9317 - spin_lock_irq(ioa_cfg->host->host_lock); 9309 + ioa_cfg->scsi_unblock = 1; 9310 + schedule_work(&ioa_cfg->work_q); 9318 9311 } 9319 9312 return; 9320 9313 } else {
+2
drivers/scsi/ipr.h
··· 1488 1488 u8 cfg_locked:1; 1489 1489 u8 clear_isr:1; 1490 1490 u8 probe_done:1; 1491 + u8 scsi_unblock:1; 1492 + u8 scsi_blocked:1; 1491 1493 1492 1494 u8 revid; 1493 1495
+1 -1
drivers/scsi/megaraid/megaraid_sas_base.c
··· 6228 6228 fail_start_aen: 6229 6229 fail_io_attach: 6230 6230 megasas_mgmt_info.count--; 6231 - megasas_mgmt_info.instance[megasas_mgmt_info.max_index] = NULL; 6232 6231 megasas_mgmt_info.max_index--; 6232 + megasas_mgmt_info.instance[megasas_mgmt_info.max_index] = NULL; 6233 6233 6234 6234 instance->instancet->disable_intr(instance); 6235 6235 megasas_destroy_irqs(instance);
-12
drivers/scsi/qla2xxx/qla_tmpl.c
··· 401 401 for (i = 0; i < vha->hw->max_req_queues; i++) { 402 402 struct req_que *req = vha->hw->req_q_map[i]; 403 403 404 - if (!test_bit(i, vha->hw->req_qid_map)) 405 - continue; 406 - 407 404 if (req || !buf) { 408 405 length = req ? 409 406 req->length : REQUEST_ENTRY_CNT_24XX; ··· 414 417 } else if (ent->t263.queue_type == T263_QUEUE_TYPE_RSP) { 415 418 for (i = 0; i < vha->hw->max_rsp_queues; i++) { 416 419 struct rsp_que *rsp = vha->hw->rsp_q_map[i]; 417 - 418 - if (!test_bit(i, vha->hw->rsp_qid_map)) 419 - continue; 420 420 421 421 if (rsp || !buf) { 422 422 length = rsp ? ··· 654 660 for (i = 0; i < vha->hw->max_req_queues; i++) { 655 661 struct req_que *req = vha->hw->req_q_map[i]; 656 662 657 - if (!test_bit(i, vha->hw->req_qid_map)) 658 - continue; 659 - 660 663 if (req || !buf) { 661 664 qla27xx_insert16(i, buf, len); 662 665 qla27xx_insert16(1, buf, len); ··· 665 674 } else if (ent->t274.queue_type == T274_QUEUE_TYPE_RSP_SHAD) { 666 675 for (i = 0; i < vha->hw->max_rsp_queues; i++) { 667 676 struct rsp_que *rsp = vha->hw->rsp_q_map[i]; 668 - 669 - if (!test_bit(i, vha->hw->rsp_qid_map)) 670 - continue; 671 677 672 678 if (rsp || !buf) { 673 679 qla27xx_insert16(i, buf, len);
+4
drivers/scsi/scsi.c
··· 800 800 module_param(scsi_logging_level, int, S_IRUGO|S_IWUSR); 801 801 MODULE_PARM_DESC(scsi_logging_level, "a bit mask of logging levels"); 802 802 803 + #ifdef CONFIG_SCSI_MQ_DEFAULT 803 804 bool scsi_use_blk_mq = true; 805 + #else 806 + bool scsi_use_blk_mq = false; 807 + #endif 804 808 module_param_named(use_blk_mq, scsi_use_blk_mq, bool, S_IWUSR | S_IRUGO); 805 809 806 810 static int __init init_scsi(void)
+3
drivers/scsi/sd.c
··· 1277 1277 { 1278 1278 struct request *rq = SCpnt->request; 1279 1279 1280 + if (SCpnt->flags & SCMD_ZONE_WRITE_LOCK) 1281 + sd_zbc_write_unlock_zone(SCpnt); 1282 + 1280 1283 if (rq->rq_flags & RQF_SPECIAL_PAYLOAD) 1281 1284 __free_page(rq->special_vec.bv_page); 1282 1285
+5 -4
drivers/scsi/sd_zbc.c
··· 294 294 test_and_set_bit(zno, sdkp->zones_wlock)) 295 295 return BLKPREP_DEFER; 296 296 297 + WARN_ON_ONCE(cmd->flags & SCMD_ZONE_WRITE_LOCK); 298 + cmd->flags |= SCMD_ZONE_WRITE_LOCK; 299 + 297 300 return BLKPREP_OK; 298 301 } 299 302 ··· 305 302 struct request *rq = cmd->request; 306 303 struct scsi_disk *sdkp = scsi_disk(rq->rq_disk); 307 304 308 - if (sdkp->zones_wlock) { 305 + if (sdkp->zones_wlock && cmd->flags & SCMD_ZONE_WRITE_LOCK) { 309 306 unsigned int zno = sd_zbc_zone_no(sdkp, blk_rq_pos(rq)); 310 307 WARN_ON_ONCE(!test_bit(zno, sdkp->zones_wlock)); 308 + cmd->flags &= ~SCMD_ZONE_WRITE_LOCK; 311 309 clear_bit_unlock(zno, sdkp->zones_wlock); 312 310 smp_mb__after_atomic(); 313 311 } ··· 338 334 case REQ_OP_WRITE: 339 335 case REQ_OP_WRITE_ZEROES: 340 336 case REQ_OP_WRITE_SAME: 341 - 342 - /* Unlock the zone */ 343 - sd_zbc_write_unlock_zone(cmd); 344 337 345 338 if (result && 346 339 sshdr->sense_key == ILLEGAL_REQUEST &&
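The sd.c/sd_zbc.c pair moves the zone unlock out of the zoned-command completion callback into generic command cleanup, guarded by a new `SCMD_ZONE_WRITE_LOCK` flag so that only the command that actually took the per-zone bit lock can release it. The ownership pattern, reduced to a single zone with plain integers instead of atomic bitops (types and names here are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

#define SCMD_ZONE_WRITE_LOCK 0x1u

struct cmd { unsigned int flags; };

/* Take the zone's write lock and record ownership in the command.
 * (The kernel uses test_and_set_bit() on a per-disk zone bitmap.) */
static bool zone_write_trylock(unsigned int *wlock, struct cmd *c)
{
    if (*wlock)
        return false;          /* another command is writing this zone */
    *wlock = 1;
    c->flags |= SCMD_ZONE_WRITE_LOCK;
    return true;
}

/* Only an owning command releases the lock; that makes the call safe
 * from the generic cleanup path, as in the sd.c hunk above. */
static void zone_write_unlock(unsigned int *wlock, struct cmd *c)
{
    if (!(c->flags & SCMD_ZONE_WRITE_LOCK))
        return;                /* not the owner: leave the lock alone */
    c->flags &= ~SCMD_ZONE_WRITE_LOCK;
    *wlock = 0;
}
```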
+1 -1
drivers/scsi/ses.c
··· 99 99 100 100 ret = scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf, bufflen, 101 101 NULL, SES_TIMEOUT, SES_RETRIES, NULL); 102 - if (unlikely(!ret)) 102 + if (unlikely(ret)) 103 103 return ret; 104 104 105 105 recv_page_code = ((unsigned char *)buf)[0];
+2 -2
drivers/scsi/st.c
··· 4299 4299 kref_init(&tpnt->kref); 4300 4300 tpnt->disk = disk; 4301 4301 disk->private_data = &tpnt->driver; 4302 - disk->queue = SDp->request_queue; 4303 4302 /* SCSI tape doesn't register this gendisk via add_disk(). Manually 4304 4303 * take queue reference that release_disk() expects. */ 4305 - if (!blk_get_queue(disk->queue)) 4304 + if (!blk_get_queue(SDp->request_queue)) 4306 4305 goto out_put_disk; 4306 + disk->queue = SDp->request_queue; 4307 4307 tpnt->driver = &st_template; 4308 4308 4309 4309 tpnt->device = SDp;
+8 -7
drivers/soc/imx/gpcv2.c
··· 200 200 201 201 domain->dev = &pdev->dev; 202 202 203 - ret = pm_genpd_init(&domain->genpd, NULL, true); 204 - if (ret) { 205 - dev_err(domain->dev, "Failed to init power domain\n"); 206 - return ret; 207 - } 208 - 209 203 domain->regulator = devm_regulator_get_optional(domain->dev, "power"); 210 204 if (IS_ERR(domain->regulator)) { 211 205 if (PTR_ERR(domain->regulator) != -ENODEV) { 212 - dev_err(domain->dev, "Failed to get domain's regulator\n"); 206 + if (PTR_ERR(domain->regulator) != -EPROBE_DEFER) 207 + dev_err(domain->dev, "Failed to get domain's regulator\n"); 213 208 return PTR_ERR(domain->regulator); 214 209 } 215 210 } else { 216 211 regulator_set_voltage(domain->regulator, 217 212 domain->voltage, domain->voltage); 213 + } 214 + 215 + ret = pm_genpd_init(&domain->genpd, NULL, true); 216 + if (ret) { 217 + dev_err(domain->dev, "Failed to init power domain\n"); 218 + return ret; 218 219 } 219 220 220 221 ret = of_genpd_add_provider_simple(domain->dev->of_node,
+3
drivers/soc/ti/knav_qmss_queue.c
··· 745 745 bool slot_found; 746 746 int ret; 747 747 748 + if (!kdev) 749 + return ERR_PTR(-EPROBE_DEFER); 750 + 748 751 if (!kdev->dev) 749 752 return ERR_PTR(-ENODEV); 750 753
+2
drivers/soc/ti/ti_sci_pm_domains.c
··· 176 176 177 177 ti_sci_pd->dev = dev; 178 178 179 + ti_sci_pd->pd.name = "ti_sci_pd"; 180 + 179 181 ti_sci_pd->pd.attach_dev = ti_sci_pd_attach_dev; 180 182 ti_sci_pd->pd.detach_dev = ti_sci_pd_detach_dev; 181 183
+3 -3
drivers/staging/fsl-mc/bus/fsl-mc-allocator.c
··· 16 16 17 17 static bool __must_check fsl_mc_is_allocatable(const char *obj_type) 18 18 { 19 - return strcmp(obj_type, "dpbp") || 20 - strcmp(obj_type, "dpmcp") || 21 - strcmp(obj_type, "dpcon"); 19 + return strcmp(obj_type, "dpbp") == 0 || 20 + strcmp(obj_type, "dpmcp") == 0 || 21 + strcmp(obj_type, "dpcon") == 0; 22 22 } 23 23 24 24 /**
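This hunk fixes a classic `strcmp()` misuse: the old expression OR'ed the raw return values, and since `strcmp()` is nonzero for any non-match, at most one of the three terms could ever be zero, so the predicate was true for every object type. The fixed form, compiled in isolation (hypothetical wrapper name):

```c
#include <assert.h>
#include <string.h>
#include <stdbool.h>

/* Fixed predicate: strcmp() returns 0 on equality, so each term must
 * compare against 0 explicitly. */
static bool is_allocatable(const char *obj_type)
{
    return strcmp(obj_type, "dpbp") == 0 ||
           strcmp(obj_type, "dpmcp") == 0 ||
           strcmp(obj_type, "dpcon") == 0;
}
```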
+1
drivers/staging/rtl8188eu/os_dep/usb_intf.c
··· 45 45 {USB_DEVICE(0x2001, 0x3311)}, /* DLink GO-USB-N150 REV B1 */ 46 46 {USB_DEVICE(0x2357, 0x010c)}, /* TP-Link TL-WN722N v2 */ 47 47 {USB_DEVICE(0x0df6, 0x0076)}, /* Sitecom N150 v2 */ 48 + {USB_DEVICE(USB_VENDER_ID_REALTEK, 0xffef)}, /* Rosewill RNX-N150NUB */ 48 49 {} /* Terminating entry */ 49 50 }; 50 51
+27 -37
drivers/tty/pty.c
··· 69 69 #ifdef CONFIG_UNIX98_PTYS 70 70 if (tty->driver == ptm_driver) { 71 71 mutex_lock(&devpts_mutex); 72 - if (tty->link->driver_data) { 73 - struct path *path = tty->link->driver_data; 74 - 75 - devpts_pty_kill(path->dentry); 76 - path_put(path); 77 - kfree(path); 78 - } 72 + if (tty->link->driver_data) 73 + devpts_pty_kill(tty->link->driver_data); 79 74 mutex_unlock(&devpts_mutex); 80 75 } 81 76 #endif ··· 602 607 static struct cdev ptmx_cdev; 603 608 604 609 /** 605 - * pty_open_peer - open the peer of a pty 606 - * @tty: the peer of the pty being opened 610 + * ptm_open_peer - open the peer of a pty 611 + * @master: the open struct file of the ptmx device node 612 + * @tty: the master of the pty being opened 613 + * @flags: the flags for open 607 614 * 608 - * Open the cached dentry in tty->link, providing a safe way for userspace 609 - * to get the slave end of a pty (where they have the master fd and cannot 610 - * access or trust the mount namespace /dev/pts was mounted inside). 615 + * Provide a race free way for userspace to open the slave end of a pty 616 + * (where they have the master fd and cannot access or trust the mount 617 + * namespace /dev/pts was mounted inside). 611 618 */ 612 - static struct file *pty_open_peer(struct tty_struct *tty, int flags) 613 - { 614 - if (tty->driver->subtype != PTY_TYPE_MASTER) 615 - return ERR_PTR(-EIO); 616 - return dentry_open(tty->link->driver_data, flags, current_cred()); 617 - } 618 - 619 - static int pty_get_peer(struct tty_struct *tty, int flags) 619 + int ptm_open_peer(struct file *master, struct tty_struct *tty, int flags) 620 620 { 621 621 int fd = -1; 622 - struct file *filp = NULL; 622 + struct file *filp; 623 623 int retval = -EINVAL; 624 + struct path path; 625 + 626 + if (tty->driver != ptm_driver) 627 + return -EIO; 624 628 625 629 fd = get_unused_fd_flags(0); 626 630 if (fd < 0) { ··· 627 633 goto err; 628 634 } 629 635 630 - filp = pty_open_peer(tty, flags); 636 + /* Compute the slave's path */ 637 + path.mnt = devpts_mntget(master, tty->driver_data); 638 + if (IS_ERR(path.mnt)) { 639 + retval = PTR_ERR(path.mnt); 640 + goto err_put; 641 + } 642 + path.dentry = tty->link->driver_data; 643 + 644 + filp = dentry_open(&path, flags, current_cred()); 645 + mntput(path.mnt); 631 646 if (IS_ERR(filp)) { 632 647 retval = PTR_ERR(filp); 633 648 goto err_put; ··· 665 662 return pty_get_pktmode(tty, (int __user *)arg); 666 663 case TIOCGPTN: /* Get PT Number */ 667 664 return put_user(tty->index, (unsigned int __user *)arg); 668 - case TIOCGPTPEER: /* Open the other end */ 669 - return pty_get_peer(tty, (int) arg); 670 665 case TIOCSIG: /* Send signal to other side of pty */ 671 666 return pty_signal(tty, (int) arg); 672 667 } ··· 792 791 { 793 792 struct pts_fs_info *fsi; 794 793 struct tty_struct *tty; 795 - struct path *pts_path; 796 794 struct dentry *dentry; 797 795 int retval; 798 796 int index; ··· 845 845 retval = PTR_ERR(dentry); 846 846 goto err_release; 847 847 } 848 - /* We need to cache a fake path for TIOCGPTPEER. */ 849 - pts_path = kmalloc(sizeof(struct path), GFP_KERNEL); 850 - if (!pts_path) 851 - goto err_release; 852 - pts_path->mnt = filp->f_path.mnt; 853 - pts_path->dentry = dentry; 854 - path_get(pts_path); 855 - tty->link->driver_data = pts_path; 848 + tty->link->driver_data = dentry; 856 849 857 850 retval = ptm_driver->ops->open(tty, filp); 858 851 if (retval) 859 - goto err_path_put; 852 + goto err_release; 860 853 861 854 tty_debug_hangup(tty, "opening (count=%d)\n", tty->count); 862 855 863 856 tty_unlock(tty); 864 857 return 0; 865 - err_path_put: 866 - path_put(pts_path); 867 - kfree(pts_path); 868 858 err_release: 869 859 tty_unlock(tty); 870 860 // This will also put-ref the fsi
+3
drivers/tty/tty_io.c
··· 2518 2518 case TIOCSSERIAL: 2519 2519 tty_warn_deprecated_flags(p); 2520 2520 break; 2521 + case TIOCGPTPEER: 2522 + /* Special because the struct file is needed */ 2523 + return ptm_open_peer(file, tty, (int)arg); 2521 2524 default: 2522 2525 retval = tty_jobctrl_ioctl(tty, real_tty, file, cmd, arg); 2523 2526 if (retval != -ENOIOCTLCMD)
+7 -3
drivers/virtio/virtio_pci_common.c
··· 107 107 { 108 108 struct virtio_pci_device *vp_dev = to_vp_device(vdev); 109 109 const char *name = dev_name(&vp_dev->vdev.dev); 110 + unsigned flags = PCI_IRQ_MSIX; 110 111 unsigned i, v; 111 112 int err = -ENOMEM; 112 113 ··· 127 126 GFP_KERNEL)) 128 127 goto error; 129 128 129 + if (desc) { 130 + flags |= PCI_IRQ_AFFINITY; 131 + desc->pre_vectors++; /* virtio config vector */ 132 + } 133 + 130 134 err = pci_alloc_irq_vectors_affinity(vp_dev->pci_dev, nvectors, 131 - nvectors, PCI_IRQ_MSIX | 132 - (desc ? PCI_IRQ_AFFINITY : 0), 133 - desc); 135 + nvectors, flags, desc); 134 136 if (err < 0) 135 137 goto error; 136 138 vp_dev->msix_enabled = 1;
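In the virtio_pci hunk, when per-vector affinity is requested the descriptor's `pre_vectors` count is bumped so that vector 0 (virtio's config interrupt) is excluded from the automatic CPU spreading and only the queue vectors are affinity-managed. A tiny sketch of the bookkeeping (made-up helper, not the PCI core's API):

```c
#include <assert.h>

/* With an irq_affinity-style descriptor, the first `pre` and last
 * `post` vectors are exempt from spreading across CPUs; this returns
 * how many vectors remain to be spread. virtio reserves one pre
 * vector for its config interrupt. */
static int spreadable_vectors(int nvec, int pre, int post)
{
    int n = nvec - pre - post;
    return n > 0 ? n : 0;
}
```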
-3
drivers/xen/Makefile
··· 7 7 nostackp := $(call cc-option, -fno-stack-protector) 8 8 CFLAGS_features.o := $(nostackp) 9 9 10 - CFLAGS_efi.o += -fshort-wchar 11 - LDFLAGS += $(call ld-option, --no-wchar-size-warning) 12 - 13 10 dom0-$(CONFIG_ARM64) += arm-device.o 14 11 dom0-$(CONFIG_PCI) += pci.o 15 12 dom0-$(CONFIG_USB_SUPPORT) += dbgp.o
+1 -2
drivers/xen/biomerge.c
··· 10 10 unsigned long bfn1 = pfn_to_bfn(page_to_pfn(vec1->bv_page)); 11 11 unsigned long bfn2 = pfn_to_bfn(page_to_pfn(vec2->bv_page)); 12 12 13 - return __BIOVEC_PHYS_MERGEABLE(vec1, vec2) && 14 - ((bfn1 == bfn2) || ((bfn1+1) == bfn2)); 13 + return bfn1 + PFN_DOWN(vec1->bv_offset + vec1->bv_len) == bfn2; 15 14 #else 16 15 /* 17 16 * XXX: Add support for merging bio_vec when using different page
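The replacement condition above collapses the two special cases into a single arithmetic test: vec1 and vec2 may merge only when vec2's backend frame is exactly the frame in which vec1's data ends, which also stays correct for bio_vecs longer than one page. A userspace rendering of that arithmetic (4 KiB pages assumed; `pfn_down()` stands in for the kernel's `PFN_DOWN()`):

```c
#include <stdbool.h>

#define PAGE_SHIFT 12	/* assumed 4 KiB pages */

unsigned long pfn_down(unsigned long x)
{
	return x >> PAGE_SHIFT;	/* the kernel's PFN_DOWN() */
}

/*
 * Mirror of the new xen_biovec_phys_mergeable() test: vec1 starts
 * in backend frame bfn1 at offset off1 and covers len1 bytes, so
 * its data ends in frame bfn1 + pfn_down(off1 + len1); a merge is
 * only allowed when vec2 starts in exactly that frame.
 */
bool bfns_mergeable(unsigned long bfn1, unsigned int off1,
		    unsigned int len1, unsigned long bfn2)
{
	return bfn1 + pfn_down(off1 + len1) == bfn2;
}
```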
+1 -2
fs/binfmt_elf.c
··· 664 664 { 665 665 unsigned long random_variable = 0; 666 666 667 - if ((current->flags & PF_RANDOMIZE) && 668 - !(current->personality & ADDR_NO_RANDOMIZE)) { 667 + if (current->flags & PF_RANDOMIZE) { 669 668 random_variable = get_random_long(); 670 669 random_variable &= STACK_RND_MASK; 671 670 random_variable <<= PAGE_SHIFT;
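With the `ADDR_NO_RANDOMIZE` test gone from the hunk above, the whole of stack-top randomization is the mask-and-shift that remains. A sketch of that arithmetic (the `STACK_RND_MASK` value is an assumption borrowed from 64-bit x86; the kernel's is per-architecture, and the stack is assumed to grow down):

```c
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SHIFT 12		/* assumed 4 KiB pages */
#define STACK_RND_MASK 0x3fffff	/* assumed (64-bit x86): 22 random bits */

/*
 * Keep STACK_RND_MASK's worth of random page numbers, shift them
 * up into a byte offset, and drop the stack top by that much.
 */
uint64_t randomize_stack_top(uint64_t stack_top, int randomize)
{
	uint64_t random_variable = 0;

	if (randomize) {
		random_variable = (uint64_t)random();
		random_variable &= STACK_RND_MASK;
		random_variable <<= PAGE_SHIFT;
	}
	return stack_top - random_variable;
}
```

With these values the offset is always page-aligned and at most `0x3fffff << 12`, i.e. just under 16 GiB below the unrandomized top.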
+2 -2
fs/btrfs/disk-io.c
··· 3516 3516 struct bio *bio = device->flush_bio; 3517 3517 3518 3518 if (!device->flush_bio_sent) 3519 - return 0; 3519 + return BLK_STS_OK; 3520 3520 3521 3521 device->flush_bio_sent = 0; 3522 3522 wait_for_completion_io(&device->flush_wait); ··· 3563 3563 continue; 3564 3564 3565 3565 write_dev_flush(dev); 3566 - dev->last_flush_error = 0; 3566 + dev->last_flush_error = BLK_STS_OK; 3567 3567 } 3568 3568 3569 3569 /* wait for all the barriers */
+37 -33
fs/btrfs/inode.c
··· 7924 7924 return ret; 7925 7925 } 7926 7926 7927 - static inline int submit_dio_repair_bio(struct inode *inode, struct bio *bio, 7928 - int mirror_num) 7927 + static inline blk_status_t submit_dio_repair_bio(struct inode *inode, 7928 + struct bio *bio, 7929 + int mirror_num) 7929 7930 { 7930 7931 struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); 7931 - int ret; 7932 + blk_status_t ret; 7932 7933 7933 7934 BUG_ON(bio_op(bio) == REQ_OP_WRITE); 7934 7935 ··· 7981 7980 return 1; 7982 7981 } 7983 7982 7984 - static int dio_read_error(struct inode *inode, struct bio *failed_bio, 7985 - struct page *page, unsigned int pgoff, 7986 - u64 start, u64 end, int failed_mirror, 7987 - bio_end_io_t *repair_endio, void *repair_arg) 7983 + static blk_status_t dio_read_error(struct inode *inode, struct bio *failed_bio, 7984 + struct page *page, unsigned int pgoff, 7985 + u64 start, u64 end, int failed_mirror, 7986 + bio_end_io_t *repair_endio, void *repair_arg) 7988 7987 { 7989 7988 struct io_failure_record *failrec; 7990 7989 struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree; ··· 7994 7993 int read_mode = 0; 7995 7994 int segs; 7996 7995 int ret; 7996 + blk_status_t status; 7997 7997 7998 7998 BUG_ON(bio_op(failed_bio) == REQ_OP_WRITE); 7999 7999 8000 8000 ret = btrfs_get_io_failure_record(inode, start, end, &failrec); 8001 8001 if (ret) 8002 - return ret; 8002 + return errno_to_blk_status(ret); 8003 8003 8004 8004 ret = btrfs_check_dio_repairable(inode, failed_bio, failrec, 8005 8005 failed_mirror); 8006 8006 if (!ret) { 8007 8007 free_io_failure(failure_tree, io_tree, failrec); 8008 - return -EIO; 8008 + return BLK_STS_IOERR; 8009 8009 } 8010 8010 8011 8011 segs = bio_segments(failed_bio); ··· 8024 8022 "Repair DIO Read Error: submitting new dio read[%#x] to this_mirror=%d, in_validation=%d\n", 8025 8023 read_mode, failrec->this_mirror, failrec->in_validation); 8026 8024 8027 - ret = submit_dio_repair_bio(inode, bio, failrec->this_mirror); 8028 - if (ret) { 8025 + 
status = submit_dio_repair_bio(inode, bio, failrec->this_mirror); 8026 + if (status) { 8029 8027 free_io_failure(failure_tree, io_tree, failrec); 8030 8028 bio_put(bio); 8031 8029 } 8032 8030 8033 - return ret; 8031 + return status; 8034 8032 } 8035 8033 8036 8034 struct btrfs_retry_complete { ··· 8067 8065 bio_put(bio); 8068 8066 } 8069 8067 8070 - static int __btrfs_correct_data_nocsum(struct inode *inode, 8071 - struct btrfs_io_bio *io_bio) 8068 + static blk_status_t __btrfs_correct_data_nocsum(struct inode *inode, 8069 + struct btrfs_io_bio *io_bio) 8072 8070 { 8073 8071 struct btrfs_fs_info *fs_info; 8074 8072 struct bio_vec bvec; ··· 8078 8076 unsigned int pgoff; 8079 8077 u32 sectorsize; 8080 8078 int nr_sectors; 8081 - int ret; 8082 - int err = 0; 8079 + blk_status_t ret; 8080 + blk_status_t err = BLK_STS_OK; 8083 8081 8084 8082 fs_info = BTRFS_I(inode)->root->fs_info; 8085 8083 sectorsize = fs_info->sectorsize; ··· 8185 8183 int csum_pos; 8186 8184 bool uptodate = (err == 0); 8187 8185 int ret; 8186 + blk_status_t status; 8188 8187 8189 8188 fs_info = BTRFS_I(inode)->root->fs_info; 8190 8189 sectorsize = fs_info->sectorsize; 8191 8190 8192 - err = 0; 8191 + err = BLK_STS_OK; 8193 8192 start = io_bio->logical; 8194 8193 done.inode = inode; 8195 8194 io_bio->bio.bi_iter = io_bio->iter; ··· 8212 8209 done.start = start; 8213 8210 init_completion(&done.done); 8214 8211 8215 - ret = dio_read_error(inode, &io_bio->bio, bvec.bv_page, 8216 - pgoff, start, start + sectorsize - 1, 8217 - io_bio->mirror_num, 8218 - btrfs_retry_endio, &done); 8219 - if (ret) { 8220 - err = errno_to_blk_status(ret); 8212 + status = dio_read_error(inode, &io_bio->bio, bvec.bv_page, 8213 + pgoff, start, start + sectorsize - 1, 8214 + io_bio->mirror_num, btrfs_retry_endio, 8215 + &done); 8216 + if (status) { 8217 + err = status; 8221 8218 goto next; 8222 8219 } 8223 8220 ··· 8253 8250 if (unlikely(err)) 8254 8251 return __btrfs_correct_data_nocsum(inode, io_bio); 8255 8252 else 8256 - 
return 0; 8253 + return BLK_STS_OK; 8257 8254 } else { 8258 8255 return __btrfs_subio_endio_read(inode, io_bio, err); 8259 8256 } ··· 8426 8423 return 0; 8427 8424 } 8428 8425 8429 - static inline int __btrfs_submit_dio_bio(struct bio *bio, struct inode *inode, 8430 - u64 file_offset, int skip_sum, 8431 - int async_submit) 8426 + static inline blk_status_t 8427 + __btrfs_submit_dio_bio(struct bio *bio, struct inode *inode, u64 file_offset, 8428 + int skip_sum, int async_submit) 8432 8429 { 8433 8430 struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb); 8434 8431 struct btrfs_dio_private *dip = bio->bi_private; ··· 8491 8488 int clone_offset = 0; 8492 8489 int clone_len; 8493 8490 int ret; 8491 + blk_status_t status; 8494 8492 8495 8493 map_length = orig_bio->bi_iter.bi_size; 8496 8494 submit_len = map_length; ··· 8541 8537 */ 8542 8538 atomic_inc(&dip->pending_bios); 8543 8539 8544 - ret = __btrfs_submit_dio_bio(bio, inode, file_offset, skip_sum, 8545 - async_submit); 8546 - if (ret) { 8540 + status = __btrfs_submit_dio_bio(bio, inode, file_offset, skip_sum, 8541 + async_submit); 8542 + if (status) { 8547 8543 bio_put(bio); 8548 8544 atomic_dec(&dip->pending_bios); 8549 8545 goto out_err; ··· 8561 8557 } while (submit_len > 0); 8562 8558 8563 8559 submit: 8564 - ret = __btrfs_submit_dio_bio(bio, inode, file_offset, skip_sum, 8565 - async_submit); 8566 - if (!ret) 8560 + status = __btrfs_submit_dio_bio(bio, inode, file_offset, skip_sum, 8561 + async_submit); 8562 + if (!status) 8567 8563 return 0; 8568 8564 8569 8565 bio_put(bio);
+17 -17
fs/btrfs/raid56.c
··· 905 905 if (!atomic_dec_and_test(&rbio->stripes_pending)) 906 906 return; 907 907 908 - err = 0; 908 + err = BLK_STS_OK; 909 909 910 910 /* OK, we have read all the stripes we need to. */ 911 911 max_errors = (rbio->operation == BTRFS_RBIO_PARITY_SCRUB) ? ··· 1324 1324 return; 1325 1325 1326 1326 cleanup: 1327 - rbio_orig_end_io(rbio, -EIO); 1327 + rbio_orig_end_io(rbio, BLK_STS_IOERR); 1328 1328 } 1329 1329 1330 1330 /* ··· 1475 1475 1476 1476 cleanup: 1477 1477 1478 - rbio_orig_end_io(rbio, -EIO); 1478 + rbio_orig_end_io(rbio, BLK_STS_IOERR); 1479 1479 } 1480 1480 1481 1481 static void async_rmw_stripe(struct btrfs_raid_bio *rbio) ··· 1579 1579 return 0; 1580 1580 1581 1581 cleanup: 1582 - rbio_orig_end_io(rbio, -EIO); 1582 + rbio_orig_end_io(rbio, BLK_STS_IOERR); 1583 1583 return -EIO; 1584 1584 1585 1585 finish: ··· 1795 1795 void **pointers; 1796 1796 int faila = -1, failb = -1; 1797 1797 struct page *page; 1798 - int err; 1798 + blk_status_t err; 1799 1799 int i; 1800 1800 1801 1801 pointers = kcalloc(rbio->real_stripes, sizeof(void *), GFP_NOFS); 1802 1802 if (!pointers) { 1803 - err = -ENOMEM; 1803 + err = BLK_STS_RESOURCE; 1804 1804 goto cleanup_io; 1805 1805 } 1806 1806 ··· 1856 1856 * a bad data or Q stripe. 1857 1857 * TODO, we should redo the xor here. 
1858 1858 */ 1859 - err = -EIO; 1859 + err = BLK_STS_IOERR; 1860 1860 goto cleanup; 1861 1861 } 1862 1862 /* ··· 1882 1882 if (rbio->bbio->raid_map[failb] == RAID6_Q_STRIPE) { 1883 1883 if (rbio->bbio->raid_map[faila] == 1884 1884 RAID5_P_STRIPE) { 1885 - err = -EIO; 1885 + err = BLK_STS_IOERR; 1886 1886 goto cleanup; 1887 1887 } 1888 1888 /* ··· 1954 1954 } 1955 1955 } 1956 1956 1957 - err = 0; 1957 + err = BLK_STS_OK; 1958 1958 cleanup: 1959 1959 kfree(pointers); 1960 1960 1961 1961 cleanup_io: 1962 1962 if (rbio->operation == BTRFS_RBIO_READ_REBUILD) { 1963 - if (err == 0) 1963 + if (err == BLK_STS_OK) 1964 1964 cache_rbio_pages(rbio); 1965 1965 else 1966 1966 clear_bit(RBIO_CACHE_READY_BIT, &rbio->flags); ··· 1968 1968 rbio_orig_end_io(rbio, err); 1969 1969 } else if (rbio->operation == BTRFS_RBIO_REBUILD_MISSING) { 1970 1970 rbio_orig_end_io(rbio, err); 1971 - } else if (err == 0) { 1971 + } else if (err == BLK_STS_OK) { 1972 1972 rbio->faila = -1; 1973 1973 rbio->failb = -1; 1974 1974 ··· 2005 2005 return; 2006 2006 2007 2007 if (atomic_read(&rbio->error) > rbio->bbio->max_errors) 2008 - rbio_orig_end_io(rbio, -EIO); 2008 + rbio_orig_end_io(rbio, BLK_STS_IOERR); 2009 2009 else 2010 2010 __raid_recover_end_io(rbio); 2011 2011 } ··· 2104 2104 cleanup: 2105 2105 if (rbio->operation == BTRFS_RBIO_READ_REBUILD || 2106 2106 rbio->operation == BTRFS_RBIO_REBUILD_MISSING) 2107 - rbio_orig_end_io(rbio, -EIO); 2107 + rbio_orig_end_io(rbio, BLK_STS_IOERR); 2108 2108 return -EIO; 2109 2109 } 2110 2110 ··· 2431 2431 nr_data = bio_list_size(&bio_list); 2432 2432 if (!nr_data) { 2433 2433 /* Every parity is right */ 2434 - rbio_orig_end_io(rbio, 0); 2434 + rbio_orig_end_io(rbio, BLK_STS_OK); 2435 2435 return; 2436 2436 } 2437 2437 ··· 2451 2451 return; 2452 2452 2453 2453 cleanup: 2454 - rbio_orig_end_io(rbio, -EIO); 2454 + rbio_orig_end_io(rbio, BLK_STS_IOERR); 2455 2455 } 2456 2456 2457 2457 static inline int is_data_stripe(struct btrfs_raid_bio *rbio, int stripe) ··· 
2519 2519 return; 2520 2520 2521 2521 cleanup: 2522 - rbio_orig_end_io(rbio, -EIO); 2522 + rbio_orig_end_io(rbio, BLK_STS_IOERR); 2523 2523 } 2524 2524 2525 2525 /* ··· 2633 2633 return; 2634 2634 2635 2635 cleanup: 2636 - rbio_orig_end_io(rbio, -EIO); 2636 + rbio_orig_end_io(rbio, BLK_STS_IOERR); 2637 2637 return; 2638 2638 2639 2639 finish:
+5 -5
fs/btrfs/volumes.c
··· 6212 6212 } 6213 6213 } 6214 6214 6215 - int btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio, 6216 - int mirror_num, int async_submit) 6215 + blk_status_t btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio, 6216 + int mirror_num, int async_submit) 6217 6217 { 6218 6218 struct btrfs_device *dev; 6219 6219 struct bio *first_bio = bio; ··· 6233 6233 &map_length, &bbio, mirror_num, 1); 6234 6234 if (ret) { 6235 6235 btrfs_bio_counter_dec(fs_info); 6236 - return ret; 6236 + return errno_to_blk_status(ret); 6237 6237 } 6238 6238 6239 6239 total_devs = bbio->num_stripes; ··· 6256 6256 } 6257 6257 6258 6258 btrfs_bio_counter_dec(fs_info); 6259 - return ret; 6259 + return errno_to_blk_status(ret); 6260 6260 } 6261 6261 6262 6262 if (map_length < length) { ··· 6283 6283 dev_nr, async_submit); 6284 6284 } 6285 6285 btrfs_bio_counter_dec(fs_info); 6286 - return 0; 6286 + return BLK_STS_OK; 6287 6287 } 6288 6288 6289 6289 struct btrfs_device *btrfs_find_device(struct btrfs_fs_info *fs_info, u64 devid,
+3 -3
fs/btrfs/volumes.h
··· 74 74 int missing; 75 75 int can_discard; 76 76 int is_tgtdev_for_dev_replace; 77 - int last_flush_error; 77 + blk_status_t last_flush_error; 78 78 int flush_bio_sent; 79 79 80 80 #ifdef __BTRFS_NEED_DEVICE_DATA_ORDERED ··· 416 416 struct btrfs_fs_info *fs_info, u64 type); 417 417 void btrfs_mapping_init(struct btrfs_mapping_tree *tree); 418 418 void btrfs_mapping_tree_free(struct btrfs_mapping_tree *tree); 419 - int btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio, 420 - int mirror_num, int async_submit); 419 + blk_status_t btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio, 420 + int mirror_num, int async_submit); 421 421 int btrfs_open_devices(struct btrfs_fs_devices *fs_devices, 422 422 fmode_t flags, void *holder); 423 423 int btrfs_scan_one_device(const char *path, fmode_t flags, void *holder,
+12 -6
fs/cifs/dir.c
··· 194 194 } 195 195 196 196 /* 197 + * Don't allow path components longer than the server max. 197 198 * Don't allow the separator character in a path component. 198 199 * The VFS will not allow "/", but "\" is allowed by posix. 199 200 */ 200 201 static int 201 - check_name(struct dentry *direntry) 202 + check_name(struct dentry *direntry, struct cifs_tcon *tcon) 202 203 { 203 204 struct cifs_sb_info *cifs_sb = CIFS_SB(direntry->d_sb); 204 205 int i; 206 + 207 + if (unlikely(direntry->d_name.len > 208 + tcon->fsAttrInfo.MaxPathNameComponentLength)) 209 + return -ENAMETOOLONG; 205 210 206 211 if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS)) { 207 212 for (i = 0; i < direntry->d_name.len; i++) { ··· 505 500 return finish_no_open(file, res); 506 501 } 507 502 508 - rc = check_name(direntry); 509 - if (rc) 510 - return rc; 511 - 512 503 xid = get_xid(); 513 504 514 505 cifs_dbg(FYI, "parent inode = 0x%p name is: %pd and dentry = 0x%p\n", ··· 517 516 } 518 517 519 518 tcon = tlink_tcon(tlink); 519 + 520 + rc = check_name(direntry, tcon); 521 + if (rc) 522 + goto out_free_xid; 523 + 520 524 server = tcon->ses->server; 521 525 522 526 if (server->ops->new_lease_key) ··· 782 776 } 783 777 pTcon = tlink_tcon(tlink); 784 778 785 - rc = check_name(direntry); 779 + rc = check_name(direntry, pTcon); 786 780 if (rc) 787 781 goto lookup_out; 788 782
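`check_name()` now needs the tcon because the component-length cap lives in the server's advertised `fsAttrInfo`, which is also why the call had to move below `tlink_tcon()` in the open path above. A minimal userspace model of the check (the cifs structures are flattened into plain parameters; 255 is only a typical `MaxPathNameComponentLength`):

```c
#include <errno.h>
#include <string.h>

/*
 * Model of the fixed cifs check_name(): reject components longer
 * than the server's limit, and reject the '\' SMB separator
 * unless POSIX paths were negotiated.
 */
int check_name(const char *component, size_t max_component_len,
	       int posix_paths)
{
	size_t i, len = strlen(component);

	if (len > max_component_len)
		return -ENAMETOOLONG;

	if (!posix_paths)
		for (i = 0; i < len; i++)
			if (component[i] == '\\')
				return -EINVAL;
	return 0;
}
```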
+2 -2
fs/cifs/smb2pdu.c
··· 3219 3219 kst->f_bsize = le32_to_cpu(pfs_inf->BytesPerSector) * 3220 3220 le32_to_cpu(pfs_inf->SectorsPerAllocationUnit); 3221 3221 kst->f_blocks = le64_to_cpu(pfs_inf->TotalAllocationUnits); 3222 - kst->f_bfree = le64_to_cpu(pfs_inf->ActualAvailableAllocationUnits); 3223 - kst->f_bavail = le64_to_cpu(pfs_inf->CallerAvailableAllocationUnits); 3222 + kst->f_bfree = kst->f_bavail = 3223 + le64_to_cpu(pfs_inf->CallerAvailableAllocationUnits); 3224 3224 return; 3225 3225 } 3226 3226
+10
fs/dax.c
··· 1383 1383 1384 1384 trace_dax_pmd_fault(inode, vmf, max_pgoff, 0); 1385 1385 1386 + /* 1387 + * Make sure that the faulting address's PMD offset (color) matches 1388 + * the PMD offset from the start of the file. This is necessary so 1389 + * that a PMD range in the page table overlaps exactly with a PMD 1390 + * range in the radix tree. 1391 + */ 1392 + if ((vmf->pgoff & PG_PMD_COLOUR) != 1393 + ((vmf->address >> PAGE_SHIFT) & PG_PMD_COLOUR)) 1394 + goto fallback; 1395 + 1386 1396 /* Fall back to PTEs if we're going to COW */ 1387 1397 if (write && !(vma->vm_flags & VM_SHARED)) 1388 1398 goto fallback;
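The new fallback check above enforces that a PMD mapping is even geometrically possible: the fault address and the file offset must sit at the same position (the "colour") within their respective PMD-sized windows. In userspace terms (x86-64's 21-bit PMD shift and 12-bit page shift assumed):

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12	/* assumed 4 KiB pages */
#define PMD_SHIFT 21	/* assumed 2 MiB PMD entries (x86-64) */

/* pages per PMD entry, minus one: the low bits that must line up */
#define PG_PMD_COLOUR ((1UL << (PMD_SHIFT - PAGE_SHIFT)) - 1)

/*
 * Mirror of the new dax fallback test: a PMD fault can only be
 * satisfied when the faulting address and the file page offset
 * agree in their bottom 9 bits, so that one page-table PMD range
 * overlaps exactly one radix-tree PMD range.
 */
bool pmd_colour_matches(uint64_t pgoff, uint64_t address)
{
	return (pgoff & PG_PMD_COLOUR) ==
	       ((address >> PAGE_SHIFT) & PG_PMD_COLOUR);
}
```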
+49 -16
fs/devpts/inode.c
··· 133 133 return sb->s_fs_info; 134 134 } 135 135 136 + static int devpts_ptmx_path(struct path *path) 137 + { 138 + struct super_block *sb; 139 + int err; 140 + 141 + /* Has the devpts filesystem already been found? */ 142 + if (path->mnt->mnt_sb->s_magic == DEVPTS_SUPER_MAGIC) 143 + return 0; 144 + 145 + /* Is a devpts filesystem at "pts" in the same directory? */ 146 + err = path_pts(path); 147 + if (err) 148 + return err; 149 + 150 + /* Is the path the root of a devpts filesystem? */ 151 + sb = path->mnt->mnt_sb; 152 + if ((sb->s_magic != DEVPTS_SUPER_MAGIC) || 153 + (path->mnt->mnt_root != sb->s_root)) 154 + return -ENODEV; 155 + 156 + return 0; 157 + } 158 + 159 + struct vfsmount *devpts_mntget(struct file *filp, struct pts_fs_info *fsi) 160 + { 161 + struct path path; 162 + int err; 163 + 164 + path = filp->f_path; 165 + path_get(&path); 166 + 167 + err = devpts_ptmx_path(&path); 168 + dput(path.dentry); 169 + if (err) { 170 + mntput(path.mnt); 171 + path.mnt = ERR_PTR(err); 172 + } 173 + if (DEVPTS_SB(path.mnt->mnt_sb) != fsi) { 174 + mntput(path.mnt); 175 + path.mnt = ERR_PTR(-ENODEV); 176 + } 177 + return path.mnt; 178 + } 179 + 136 180 struct pts_fs_info *devpts_acquire(struct file *filp) 137 181 { 138 182 struct pts_fs_info *result; ··· 187 143 path = filp->f_path; 188 144 path_get(&path); 189 145 190 - /* Has the devpts filesystem already been found? */ 191 - sb = path.mnt->mnt_sb; 192 - if (sb->s_magic != DEVPTS_SUPER_MAGIC) { 193 - /* Is a devpts filesystem at "pts" in the same directory? */ 194 - err = path_pts(&path); 195 - if (err) { 196 - result = ERR_PTR(err); 197 - goto out; 198 - } 199 - 200 - /* Is the path the root of a devpts filesystem? 
*/ 201 - result = ERR_PTR(-ENODEV); 202 - sb = path.mnt->mnt_sb; 203 - if ((sb->s_magic != DEVPTS_SUPER_MAGIC) || 204 - (path.mnt->mnt_root != sb->s_root)) 205 - goto out; 146 + err = devpts_ptmx_path(&path); 147 + if (err) { 148 + result = ERR_PTR(err); 149 + goto out; 206 150 } 207 151 208 152 /* 209 153 * pty code needs to hold extra references in case of last /dev/tty close 210 154 */ 155 + sb = path.mnt->mnt_sb; 211 156 atomic_inc(&sb->s_active); 212 157 result = DEVPTS_SB(sb); 213 158
+5 -2
fs/ext4/mballoc.c
··· 2300 2300 EXT4_MAX_BLOCK_LOG_SIZE); 2301 2301 struct sg { 2302 2302 struct ext4_group_info info; 2303 - ext4_grpblk_t counters[blocksize_bits + 2]; 2303 + ext4_grpblk_t counters[EXT4_MAX_BLOCK_LOG_SIZE + 2]; 2304 2304 } sg; 2305 2305 2306 2306 group--; ··· 2308 2308 seq_puts(seq, "#group: free frags first [" 2309 2309 " 2^0 2^1 2^2 2^3 2^4 2^5 2^6 " 2310 2310 " 2^7 2^8 2^9 2^10 2^11 2^12 2^13 ]\n"); 2311 + 2312 + i = (blocksize_bits + 2) * sizeof(sg.info.bb_counters[0]) + 2313 + sizeof(struct ext4_group_info); 2311 2314 2312 2315 grinfo = ext4_get_group_info(sb, group); 2313 2316 /* Load the group info in memory only if not already loaded. */ ··· 2323 2320 buddy_loaded = 1; 2324 2321 } 2325 2322 2326 - memcpy(&sg, ext4_get_group_info(sb, group), sizeof(sg)); 2323 + memcpy(&sg, ext4_get_group_info(sb, group), i); 2327 2324 2328 2325 if (buddy_loaded) 2329 2326 ext4_mb_unload_buddy(&e4b);
+4 -2
fs/ext4/xattr.c
··· 1543 1543 /* Clear padding bytes. */ 1544 1544 memset(val + i->value_len, 0, new_size - i->value_len); 1545 1545 } 1546 - return 0; 1546 + goto update_hash; 1547 1547 } 1548 1548 1549 1549 /* Compute min_offs and last. */ ··· 1707 1707 here->e_value_size = cpu_to_le32(i->value_len); 1708 1708 } 1709 1709 1710 + update_hash: 1710 1711 if (i->value) { 1711 1712 __le32 hash = 0; 1712 1713 ··· 1726 1725 here->e_name_len, 1727 1726 &crc32c_hash, 1); 1728 1727 } else if (is_block) { 1729 - __le32 *value = s->base + min_offs - new_size; 1728 + __le32 *value = s->base + le16_to_cpu( 1729 + here->e_value_offs); 1730 1730 1731 1731 hash = ext4_xattr_hash_entry(here->e_name, 1732 1732 here->e_name_len, value,
+2 -2
fs/iomap.c
··· 278 278 unsigned long bytes; /* Bytes to write to page */ 279 279 280 280 offset = (pos & (PAGE_SIZE - 1)); 281 - bytes = min_t(unsigned long, PAGE_SIZE - offset, length); 281 + bytes = min_t(loff_t, PAGE_SIZE - offset, length); 282 282 283 283 rpage = __iomap_read_page(inode, pos); 284 284 if (IS_ERR(rpage)) ··· 373 373 unsigned offset, bytes; 374 374 375 375 offset = pos & (PAGE_SIZE - 1); /* Within page */ 376 - bytes = min_t(unsigned, PAGE_SIZE - offset, count); 376 + bytes = min_t(loff_t, PAGE_SIZE - offset, count); 377 377 378 378 if (IS_DAX(inode)) 379 379 status = iomap_dax_zero(pos, offset, bytes, iomap);
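Both iomap hunks fix the same 32-bit truncation: `min_t(unsigned long, ...)` (or `min_t(unsigned, ...)`) casts the 64-bit byte count down before comparing, so a count that is a multiple of 4 GiB collapses to 0 and the per-page loop makes no progress. A userspace demonstration, using `uint32_t` to stand in for the 32-bit types:

```c
#include <stdint.h>

/* simplified min_t(): cast both operands to the named type first */
#define min_t(type, a, b) \
	((type)(a) < (type)(b) ? (type)(a) : (type)(b))

#define PAGE_SIZE 4096	/* assumed */

/* the old expression, as it behaved with a 32-bit unsigned long */
uint64_t bytes_this_page_old(uint64_t pos, uint64_t count)
{
	uint32_t offset = pos & (PAGE_SIZE - 1);

	return min_t(uint32_t, PAGE_SIZE - offset, count);
}

/* the fixed expression: compare in loff_t width, no truncation */
uint64_t bytes_this_page_new(uint64_t pos, uint64_t count)
{
	uint32_t offset = pos & (PAGE_SIZE - 1);

	return min_t(int64_t, PAGE_SIZE - offset, count);
}
```

With `count = 1ULL << 32` the old form returns 0 (the cast wraps to zero), while the fixed form returns a full page.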
+2 -4
fs/nfsd/nfs4xdr.c
··· 144 144 argp->p = page_address(argp->pagelist[0]); 145 145 argp->pagelist++; 146 146 if (argp->pagelen < PAGE_SIZE) { 147 - argp->end = argp->p + (argp->pagelen>>2); 147 + argp->end = argp->p + XDR_QUADLEN(argp->pagelen); 148 148 argp->pagelen = 0; 149 149 } else { 150 150 argp->end = argp->p + (PAGE_SIZE>>2); ··· 1279 1279 argp->pagelen -= pages * PAGE_SIZE; 1280 1280 len -= pages * PAGE_SIZE; 1281 1281 1282 - argp->p = (__be32 *)page_address(argp->pagelist[0]); 1283 - argp->pagelist++; 1284 - argp->end = argp->p + XDR_QUADLEN(PAGE_SIZE); 1282 + next_decode_page(argp); 1285 1283 } 1286 1284 argp->p += XDR_QUADLEN(len); 1287 1285
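The first nfs4xdr hunk swaps an open-coded `pagelen >> 2` for `XDR_QUADLEN()`: XDR streams are counted in 4-byte quads, and a plain right shift rounds down, leaving `argp->end` up to three bytes short of the data actually present. The difference in miniature:

```c
/* the kernel's XDR_QUADLEN(): bytes to 4-byte quads, rounding up */
#define XDR_QUADLEN(l) (((l) + 3) >> 2)

/* old: rounds down, silently dropping a trailing partial quad */
unsigned int quads_old(unsigned int bytes)
{
	return bytes >> 2;
}

/* fixed: a partial quad still needs a whole quad of buffer */
unsigned int quads_fixed(unsigned int bytes)
{
	return XDR_QUADLEN(bytes);
}
```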
+15 -6
fs/quota/dquot.c
··· 1124 1124 WARN_ON_ONCE(1); 1125 1125 dquot->dq_dqb.dqb_rsvspace = 0; 1126 1126 } 1127 + if (dquot->dq_dqb.dqb_curspace + dquot->dq_dqb.dqb_rsvspace <= 1128 + dquot->dq_dqb.dqb_bsoftlimit) 1129 + dquot->dq_dqb.dqb_btime = (time64_t) 0; 1130 + clear_bit(DQ_BLKS_B, &dquot->dq_flags); 1127 1131 } 1128 1132 1129 1133 static void dquot_decr_inodes(struct dquot *dquot, qsize_t number) ··· 1149 1145 dquot->dq_dqb.dqb_curspace -= number; 1150 1146 else 1151 1147 dquot->dq_dqb.dqb_curspace = 0; 1152 - if (dquot->dq_dqb.dqb_curspace <= dquot->dq_dqb.dqb_bsoftlimit) 1148 + if (dquot->dq_dqb.dqb_curspace + dquot->dq_dqb.dqb_rsvspace <= 1149 + dquot->dq_dqb.dqb_bsoftlimit) 1153 1150 dquot->dq_dqb.dqb_btime = (time64_t) 0; 1154 1151 clear_bit(DQ_BLKS_B, &dquot->dq_flags); 1155 1152 } ··· 1386 1381 1387 1382 static int info_bdq_free(struct dquot *dquot, qsize_t space) 1388 1383 { 1384 + qsize_t tspace; 1385 + 1386 + tspace = dquot->dq_dqb.dqb_curspace + dquot->dq_dqb.dqb_rsvspace; 1387 + 1389 1388 if (test_bit(DQ_FAKE_B, &dquot->dq_flags) || 1390 - dquot->dq_dqb.dqb_curspace <= dquot->dq_dqb.dqb_bsoftlimit) 1389 + tspace <= dquot->dq_dqb.dqb_bsoftlimit) 1391 1390 return QUOTA_NL_NOWARN; 1392 1391 1393 - if (dquot->dq_dqb.dqb_curspace - space <= dquot->dq_dqb.dqb_bsoftlimit) 1392 + if (tspace - space <= dquot->dq_dqb.dqb_bsoftlimit) 1394 1393 return QUOTA_NL_BSOFTBELOW; 1395 - if (dquot->dq_dqb.dqb_curspace >= dquot->dq_dqb.dqb_bhardlimit && 1396 - dquot->dq_dqb.dqb_curspace - space < dquot->dq_dqb.dqb_bhardlimit) 1394 + if (tspace >= dquot->dq_dqb.dqb_bhardlimit && 1395 + tspace - space < dquot->dq_dqb.dqb_bhardlimit) 1397 1396 return QUOTA_NL_BHARDBELOW; 1398 1397 return QUOTA_NL_NOWARN; 1399 1398 } ··· 2690 2681 2691 2682 if (check_blim) { 2692 2683 if (!dm->dqb_bsoftlimit || 2693 - dm->dqb_curspace < dm->dqb_bsoftlimit) { 2684 + dm->dqb_curspace + dm->dqb_rsvspace < dm->dqb_bsoftlimit) { 2694 2685 dm->dqb_btime = 0; 2695 2686 clear_bit(DQ_BLKS_B, &dquot->dq_flags); 2696 2687 } else if (!(di->d_fieldmask & QC_SPC_TIMER))
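Every hunk in dquot.c applies the same fix at a different site: limit and warning decisions must be made against used-plus-reserved space (`tspace`), not `dqb_curspace` alone, or delayed-allocation reservations let a user pass the soft limit without a warning. The fixed `info_bdq_free()` decision, modelled in userspace (the `DQ_FAKE_B` short-circuit is omitted):

```c
#include <stdint.h>

enum { QUOTA_NL_NOWARN, QUOTA_NL_BSOFTBELOW, QUOTA_NL_BHARDBELOW };

/*
 * Model of the fixed info_bdq_free(): which warning does freeing
 * 'space' bytes trigger? tspace now counts reserved space too.
 */
int bdq_free_warning(uint64_t curspace, uint64_t rsvspace,
		     uint64_t space, uint64_t bsoft, uint64_t bhard)
{
	uint64_t tspace = curspace + rsvspace;

	if (tspace <= bsoft)
		return QUOTA_NL_NOWARN;
	if (tspace - space <= bsoft)
		return QUOTA_NL_BSOFTBELOW;
	if (tspace >= bhard && tspace - space < bhard)
		return QUOTA_NL_BHARDBELOW;
	return QUOTA_NL_NOWARN;
}
```

With curspace = 900, rsvspace = 200 and a 1000-byte soft limit, the old curspace-only test saw nothing to warn about; counting the reservation, freeing 300 bytes correctly reports dropping back below the soft limit.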
+1 -1
fs/xfs/libxfs/xfs_ialloc.c
··· 1246 1246 1247 1247 /* free inodes to the left? */ 1248 1248 if (useleft && trec.ir_freecount) { 1249 - rec = trec; 1250 1249 xfs_btree_del_cursor(cur, XFS_BTREE_NOERROR); 1251 1250 cur = tcur; 1252 1251 1253 1252 pag->pagl_leftrec = trec.ir_startino; 1254 1253 pag->pagl_rightrec = rec.ir_startino; 1255 1254 pag->pagl_pagino = pagino; 1255 + rec = trec; 1256 1256 goto alloc_inode; 1257 1257 } 1258 1258
+11
fs/xfs/xfs_log.c
··· 749 749 return 0; 750 750 } 751 751 752 + /* 753 + * During the second phase of log recovery, we need iget and 754 + * iput to behave like they do for an active filesystem. 755 + * xfs_fs_drop_inode needs to be able to prevent the deletion 756 + * of inodes before we're done replaying log items on those 757 + * inodes. Turn it off immediately after recovery finishes 758 + * so that we don't leak the quota inodes if subsequent mount 759 + * activities fail. 760 + */ 761 + mp->m_super->s_flags |= MS_ACTIVE; 752 762 error = xlog_recover_finish(mp->m_log); 753 763 if (!error) 754 764 xfs_log_work_queue(mp); 765 + mp->m_super->s_flags &= ~MS_ACTIVE; 755 766 756 767 return error; 757 768 }
+2 -10
fs/xfs/xfs_mount.c
··· 945 945 } 946 946 947 947 /* 948 - * During the second phase of log recovery, we need iget and 949 - * iput to behave like they do for an active filesystem. 950 - * xfs_fs_drop_inode needs to be able to prevent the deletion 951 - * of inodes before we're done replaying log items on those 952 - * inodes. 953 - */ 954 - mp->m_super->s_flags |= MS_ACTIVE; 955 - 956 - /* 957 948 * Finish recovering the file system. This part needed to be delayed 958 949 * until after the root and real-time bitmap inodes were consistently 959 950 * read in. ··· 1019 1028 out_quota: 1020 1029 xfs_qm_unmount_quotas(mp); 1021 1030 out_rtunmount: 1022 - mp->m_super->s_flags &= ~MS_ACTIVE; 1023 1031 xfs_rtunmount_inodes(mp); 1024 1032 out_rele_rip: 1025 1033 IRELE(rip); 1026 1034 cancel_delayed_work_sync(&mp->m_reclaim_work); 1027 1035 xfs_reclaim_inodes(mp, SYNC_WAIT); 1036 + /* Clean out dquots that might be in memory after quotacheck. */ 1037 + xfs_qm_unmount(mp); 1028 1038 out_log_dealloc: 1029 1039 mp->m_flags |= XFS_MOUNT_UNMOUNTING; 1030 1040 xfs_log_mount_cancel(mp);
+26 -12
include/asm-generic/vmlinux.lds.h
··· 60 60 #define ALIGN_FUNCTION() . = ALIGN(8) 61 61 62 62 /* 63 + * LD_DEAD_CODE_DATA_ELIMINATION option enables -fdata-sections, which 64 + * generates .data.identifier sections, which need to be pulled in with 65 + * .data. We don't want to pull in .data..other sections, which Linux 66 + * has defined. Same for text and bss. 67 + */ 68 + #ifdef CONFIG_LD_DEAD_CODE_DATA_ELIMINATION 69 + #define TEXT_MAIN .text .text.[0-9a-zA-Z_]* 70 + #define DATA_MAIN .data .data.[0-9a-zA-Z_]* 71 + #define BSS_MAIN .bss .bss.[0-9a-zA-Z_]* 72 + #else 73 + #define TEXT_MAIN .text 74 + #define DATA_MAIN .data 75 + #define BSS_MAIN .bss 76 + #endif 77 + 78 + /* 63 79 * Align to a 32 byte boundary equal to the 64 80 * alignment gcc 4.5 uses for a struct 65 81 */ ··· 214 198 215 199 /* 216 200 * .data section 217 - * LD_DEAD_CODE_DATA_ELIMINATION option enables -fdata-sections generates 218 - * .data.identifier which needs to be pulled in with .data, but don't want to 219 - * pull in .data..stuff which has its own requirements. Same for bss. 220 201 */ 221 202 #define DATA_DATA \ 222 - *(.data .data.[0-9a-zA-Z_]*) \ 203 + *(DATA_MAIN) \ 223 204 *(.ref.data) \ 224 205 *(.data..shared_aligned) /* percpu related */ \ 225 206 MEM_KEEP(init.data) \ ··· 447 434 VMLINUX_SYMBOL(__security_initcall_end) = .; \ 448 435 } 449 436 450 - /* .text section. Map to function alignment to avoid address changes 437 + /* 438 + * .text section. Map to function alignment to avoid address changes 451 439 * during second ld run in second ld pass when generating System.map 452 - * LD_DEAD_CODE_DATA_ELIMINATION option enables -ffunction-sections generates 453 - * .text.identifier which needs to be pulled in with .text , but some 454 - * architectures define .text.foo which is not intended to be pulled in here. 
455 - * Those enabling LD_DEAD_CODE_DATA_ELIMINATION must ensure they don't have 456 - * conflicting section names, and must pull in .text.[0-9a-zA-Z_]* */ 440 + * 441 + * TEXT_MAIN here will match .text.fixup and .text.unlikely if dead 442 + * code elimination is enabled, so these sections should be converted 443 + * to use ".." first. 444 + */ 457 445 #define TEXT_TEXT \ 458 446 ALIGN_FUNCTION(); \ 459 - *(.text.hot .text .text.fixup .text.unlikely) \ 447 + *(.text.hot TEXT_MAIN .text.fixup .text.unlikely) \ 460 448 *(.ref.text) \ 461 449 MEM_KEEP(init.text) \ 462 450 MEM_KEEP(exit.text) \ ··· 627 613 BSS_FIRST_SECTIONS \ 628 614 *(.bss..page_aligned) \ 629 615 *(.dynbss) \ 630 - *(.bss .bss.[0-9a-zA-Z_]*) \ 616 + *(BSS_MAIN) \ 631 617 *(COMMON) \ 632 618 } 633 619
-1
include/linux/blkdev.h
··· 568 568 569 569 #if defined(CONFIG_BLK_DEV_BSG) 570 570 bsg_job_fn *bsg_job_fn; 571 - int bsg_job_size; 572 571 struct bsg_class_device bsg_dev; 573 572 #endif 574 573
+2
include/linux/bsg-lib.h
··· 24 24 #define _BLK_BSG_ 25 25 26 26 #include <linux/blkdev.h> 27 + #include <scsi/scsi_request.h> 27 28 28 29 struct request; 29 30 struct device; ··· 38 37 }; 39 38 40 39 struct bsg_job { 40 + struct scsi_request sreq; 41 41 struct device *dev; 42 42 struct request *req; 43 43
+10
include/linux/devpts_fs.h
··· 19 19 20 20 struct pts_fs_info; 21 21 22 + struct vfsmount *devpts_mntget(struct file *, struct pts_fs_info *); 22 23 struct pts_fs_info *devpts_acquire(struct file *); 23 24 void devpts_release(struct pts_fs_info *); 24 25 ··· 33 32 /* unlink */ 34 33 void devpts_pty_kill(struct dentry *); 35 34 35 + /* in pty.c */ 36 + int ptm_open_peer(struct file *master, struct tty_struct *tty, int flags); 37 + 38 + #else 39 + static inline int 40 + ptm_open_peer(struct file *master, struct tty_struct *tty, int flags) 41 + { 42 + return -EIO; 43 + } 36 44 #endif 37 45 38 46
+2 -2
include/linux/fs.h
··· 907 907 /* Page cache limit. The filesystems should put that into their s_maxbytes 908 908 limits, otherwise bad things can happen in VM. */ 909 909 #if BITS_PER_LONG==32 910 - #define MAX_LFS_FILESIZE (((loff_t)PAGE_SIZE << (BITS_PER_LONG-1))-1) 910 + #define MAX_LFS_FILESIZE ((loff_t)ULONG_MAX << PAGE_SHIFT) 911 911 #elif BITS_PER_LONG==64 912 - #define MAX_LFS_FILESIZE ((loff_t)0x7fffffffffffffffLL) 912 + #define MAX_LFS_FILESIZE ((loff_t)LLONG_MAX) 913 913 #endif 914 914 915 915 #define FL_POSIX 1
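The 32-bit page cache indexes pages with an `unsigned long`, so the real ceiling is `ULONG_MAX` pages of `PAGE_SIZE` bytes each; the old expression stopped short of that (the 64-bit case simply becomes `LLONG_MAX`). With 4 KiB pages the before/after on a 32-bit build works out as follows (`ULONG_MAX_32` simulates a 32-bit `ULONG_MAX` from 64-bit userspace):

```c
#include <stdint.h>

#define PAGE_SHIFT 12			/* assumed 4 KiB pages */
#define PAGE_SIZE (1UL << PAGE_SHIFT)
#define ULONG_MAX_32 0xffffffffULL	/* ULONG_MAX on a 32-bit build */

/* old: ((loff_t)PAGE_SIZE << (BITS_PER_LONG - 1)) - 1, BITS_PER_LONG = 32 */
int64_t max_lfs_filesize_old32(void)
{
	return ((int64_t)PAGE_SIZE << 31) - 1;
}

/* new: every page a 32-bit page cache can index, in full */
int64_t max_lfs_filesize_new32(void)
{
	return (int64_t)ULONG_MAX_32 << PAGE_SHIFT;
}
```

That is 8 TiB - 1 before versus 16 TiB - 4 KiB after: the new form admits files as large as the page cache can actually address.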
+1 -1
include/linux/iio/iio.h
··· 535 535 * @scan_timestamp: [INTERN] set if any buffers have requested timestamp 536 536 * @scan_index_timestamp:[INTERN] cache of the index to the timestamp 537 537 * @trig: [INTERN] current device trigger (buffer modes) 538 - * @trig_readonly [INTERN] mark the current trigger immutable 538 + * @trig_readonly: [INTERN] mark the current trigger immutable 539 539 * @pollfunc: [DRIVER] function run on trigger being received 540 540 * @pollfunc_event: [DRIVER] function run on events trigger being received 541 541 * @channels: [DRIVER] channel specification structure table
+2 -2
include/linux/iio/trigger.h
··· 144 144 /** 145 145 * iio_trigger_set_immutable() - set an immutable trigger on destination 146 146 * 147 - * @indio_dev - IIO device structure containing the device 148 - * @trig - trigger to assign to device 147 + * @indio_dev: IIO device structure containing the device 148 + * @trig: trigger to assign to device 149 149 * 150 150 **/ 151 151 int iio_trigger_set_immutable(struct iio_dev *indio_dev, struct iio_trigger *trig);
+11 -1
include/linux/iommu.h
··· 240 240 struct list_head list; 241 241 const struct iommu_ops *ops; 242 242 struct fwnode_handle *fwnode; 243 - struct device dev; 243 + struct device *dev; 244 244 }; 245 245 246 246 int iommu_device_register(struct iommu_device *iommu); ··· 263 263 struct fwnode_handle *fwnode) 264 264 { 265 265 iommu->fwnode = fwnode; 266 + } 267 + 268 + static inline struct iommu_device *dev_to_iommu_device(struct device *dev) 269 + { 270 + return (struct iommu_device *)dev_get_drvdata(dev); 266 271 } 267 272 268 273 #define IOMMU_GROUP_NOTIFY_ADD_DEVICE 1 /* Device added */ ··· 592 587 static inline void iommu_device_set_fwnode(struct iommu_device *iommu, 593 588 struct fwnode_handle *fwnode) 594 589 { 590 + } 591 + 592 + static inline struct iommu_device *dev_to_iommu_device(struct device *dev) 593 + { 594 + return NULL; 595 595 } 596 596 597 597 static inline void iommu_device_unregister(struct iommu_device *iommu)
+4 -2
include/linux/memblock.h
··· 61 61 #ifdef CONFIG_ARCH_DISCARD_MEMBLOCK 62 62 #define __init_memblock __meminit 63 63 #define __initdata_memblock __meminitdata 64 + void memblock_discard(void); 64 65 #else 65 66 #define __init_memblock 66 67 #define __initdata_memblock ··· 75 74 int nid, ulong flags); 76 75 phys_addr_t memblock_find_in_range(phys_addr_t start, phys_addr_t end, 77 76 phys_addr_t size, phys_addr_t align); 78 - phys_addr_t get_allocated_memblock_reserved_regions_info(phys_addr_t *addr); 79 - phys_addr_t get_allocated_memblock_memory_regions_info(phys_addr_t *addr); 80 77 void memblock_allow_resize(void); 81 78 int memblock_add_node(phys_addr_t base, phys_addr_t size, int nid); 82 79 int memblock_add(phys_addr_t base, phys_addr_t size); ··· 108 109 109 110 void __next_reserved_mem_region(u64 *idx, phys_addr_t *out_start, 110 111 phys_addr_t *out_end); 112 + 113 + void __memblock_free_early(phys_addr_t base, phys_addr_t size); 114 + void __memblock_free_late(phys_addr_t base, phys_addr_t size); 111 115 112 116 /** 113 117 * for_each_mem_range - iterate through memblock areas from type_a and not
+8 -2
include/linux/memcontrol.h
··· 484 484 extern int do_swap_account; 485 485 #endif 486 486 487 - void lock_page_memcg(struct page *page); 487 + struct mem_cgroup *lock_page_memcg(struct page *page); 488 + void __unlock_page_memcg(struct mem_cgroup *memcg); 488 489 void unlock_page_memcg(struct page *page); 489 490 490 491 static inline unsigned long memcg_page_state(struct mem_cgroup *memcg, ··· 810 809 { 811 810 } 812 811 813 - static inline void lock_page_memcg(struct page *page) 812 + static inline struct mem_cgroup *lock_page_memcg(struct page *page) 813 + { 814 + return NULL; 815 + } 816 + 817 + static inline void __unlock_page_memcg(struct mem_cgroup *memcg) 814 818 { 815 819 } 816 820
+1 -1
include/linux/net.h
··· 37 37 38 38 /* Historically, SOCKWQ_ASYNC_NOSPACE & SOCKWQ_ASYNC_WAITDATA were located 39 39 * in sock->flags, but moved into sk->sk_wq->flags to be RCU protected. 40 - * Eventually all flags will be in sk->sk_wq_flags. 40 + * Eventually all flags will be in sk->sk_wq->flags. 41 41 */ 42 42 #define SOCKWQ_ASYNC_NOSPACE 0 43 43 #define SOCKWQ_ASYNC_WAITDATA 1
+8
include/linux/nmi.h
··· 168 168 #define sysctl_softlockup_all_cpu_backtrace 0 169 169 #define sysctl_hardlockup_all_cpu_backtrace 0 170 170 #endif 171 + 172 + #if defined(CONFIG_HARDLOCKUP_CHECK_TIMESTAMP) && \ 173 + defined(CONFIG_HARDLOCKUP_DETECTOR) 174 + void watchdog_update_hrtimer_threshold(u64 period); 175 + #else 176 + static inline void watchdog_update_hrtimer_threshold(u64 period) { } 177 + #endif 178 + 171 179 extern bool is_hardlockup(void); 172 180 struct ctl_table; 173 181 extern int proc_watchdog(struct ctl_table *, int ,
+22
include/linux/oom.h
··· 6 6 #include <linux/types.h> 7 7 #include <linux/nodemask.h> 8 8 #include <uapi/linux/oom.h> 9 + #include <linux/sched/coredump.h> /* MMF_* */ 10 + #include <linux/mm.h> /* VM_FAULT* */ 9 11 10 12 struct zonelist; 11 13 struct notifier_block; ··· 63 61 static inline bool tsk_is_oom_victim(struct task_struct * tsk) 64 62 { 65 63 return tsk->signal->oom_mm; 64 + } 65 + 66 + /* 67 + * Checks whether a page fault on the given mm is still reliable. 68 + * This is no longer true if the oom reaper started to reap the 69 + * address space which is reflected by the MMF_UNSTABLE flag set in 70 + * the mm. At that moment any !shared mapping would lose the content 71 + * and could cause a memory corruption (zero pages instead of the 72 + * original content). 73 + * 74 + * User should call this before establishing a page table entry for 75 + * a !shared mapping and under the proper page table lock. 76 + * 77 + * Return 0 when the PF is safe, VM_FAULT_SIGBUS otherwise. 78 + */ 79 + static inline int check_stable_address_space(struct mm_struct *mm) 80 + { 81 + if (unlikely(test_bit(MMF_UNSTABLE, &mm->flags))) 82 + return VM_FAULT_SIGBUS; 83 + return 0; 66 84 } 67 85 68 86 extern unsigned long oom_badness(struct task_struct *p,
+3
include/linux/pci.h
··· 188 188 * the direct_complete optimization. 189 189 */ 190 190 PCI_DEV_FLAGS_NEEDS_RESUME = (__force pci_dev_flags_t) (1 << 11), 191 + /* Don't use Relaxed Ordering for TLPs directed at this device */ 192 + PCI_DEV_FLAGS_NO_RELAXED_ORDERING = (__force pci_dev_flags_t) (1 << 12), 191 193 }; 192 194 193 195 enum pci_irq_reroute_variant { ··· 1128 1126 void pci_pme_wakeup_bus(struct pci_bus *bus); 1129 1127 void pci_d3cold_enable(struct pci_dev *dev); 1130 1128 void pci_d3cold_disable(struct pci_dev *dev); 1129 + bool pcie_relaxed_ordering_enabled(struct pci_dev *dev); 1131 1130 1132 1131 /* PCI Virtual Channel */ 1133 1132 int pci_save_vc_state(struct pci_dev *dev);
+2 -2
include/linux/perf_event.h
··· 310 310 * Notification that the event was mapped or unmapped. Called 311 311 * in the context of the mapping task. 312 312 */ 313 - void (*event_mapped) (struct perf_event *event); /*optional*/ 314 - void (*event_unmapped) (struct perf_event *event); /*optional*/ 313 + void (*event_mapped) (struct perf_event *event, struct mm_struct *mm); /* optional */ 314 + void (*event_unmapped) (struct perf_event *event, struct mm_struct *mm); /* optional */ 315 315 316 316 /* 317 317 * Flags for ->add()/->del()/ ->start()/->stop(). There are
+3 -1
include/linux/pid.h
··· 8 8 PIDTYPE_PID, 9 9 PIDTYPE_PGID, 10 10 PIDTYPE_SID, 11 - PIDTYPE_MAX 11 + PIDTYPE_MAX, 12 + /* only valid to __task_pid_nr_ns() */ 13 + __PIDTYPE_TGID 12 14 }; 13 15 14 16 /*
+5 -4
include/linux/ptr_ring.h
··· 436 436 __PTR_RING_PEEK_CALL_v; \ 437 437 }) 438 438 439 - static inline void **__ptr_ring_init_queue_alloc(int size, gfp_t gfp) 439 + static inline void **__ptr_ring_init_queue_alloc(unsigned int size, gfp_t gfp) 440 440 { 441 - return kzalloc(ALIGN(size * sizeof(void *), SMP_CACHE_BYTES), gfp); 441 + return kcalloc(size, sizeof(void *), gfp); 442 442 } 443 443 444 444 static inline void __ptr_ring_set_size(struct ptr_ring *r, int size) ··· 582 582 * In particular if you consume ring in interrupt or BH context, you must 583 583 * disable interrupts/BH when doing so. 584 584 */ 585 - static inline int ptr_ring_resize_multiple(struct ptr_ring **rings, int nrings, 585 + static inline int ptr_ring_resize_multiple(struct ptr_ring **rings, 586 + unsigned int nrings, 586 587 int size, 587 588 gfp_t gfp, void (*destroy)(void *)) 588 589 { ··· 591 590 void ***queues; 592 591 int i; 593 592 594 - queues = kmalloc(nrings * sizeof *queues, gfp); 593 + queues = kmalloc_array(nrings, sizeof(*queues), gfp); 595 594 if (!queues) 596 595 goto noqueues; 597 596
+27 -24
include/linux/sched.h
··· 1163 1163 return tsk->tgid; 1164 1164 } 1165 1165 1166 - extern pid_t task_tgid_nr_ns(struct task_struct *tsk, struct pid_namespace *ns); 1167 - 1168 - static inline pid_t task_tgid_vnr(struct task_struct *tsk) 1169 - { 1170 - return pid_vnr(task_tgid(tsk)); 1171 - } 1172 - 1173 1166 /** 1174 1167 * pid_alive - check that a task structure is not stale 1175 1168 * @p: Task structure to be checked. ··· 1176 1183 static inline int pid_alive(const struct task_struct *p) 1177 1184 { 1178 1185 return p->pids[PIDTYPE_PID].pid != NULL; 1179 - } 1180 - 1181 - static inline pid_t task_ppid_nr_ns(const struct task_struct *tsk, struct pid_namespace *ns) 1182 - { 1183 - pid_t pid = 0; 1184 - 1185 - rcu_read_lock(); 1186 - if (pid_alive(tsk)) 1187 - pid = task_tgid_nr_ns(rcu_dereference(tsk->real_parent), ns); 1188 - rcu_read_unlock(); 1189 - 1190 - return pid; 1191 - } 1192 - 1193 - static inline pid_t task_ppid_nr(const struct task_struct *tsk) 1194 - { 1195 - return task_ppid_nr_ns(tsk, &init_pid_ns); 1196 1186 } 1197 1187 1198 1188 static inline pid_t task_pgrp_nr_ns(struct task_struct *tsk, struct pid_namespace *ns) ··· 1197 1221 static inline pid_t task_session_vnr(struct task_struct *tsk) 1198 1222 { 1199 1223 return __task_pid_nr_ns(tsk, PIDTYPE_SID, NULL); 1224 + } 1225 + 1226 + static inline pid_t task_tgid_nr_ns(struct task_struct *tsk, struct pid_namespace *ns) 1227 + { 1228 + return __task_pid_nr_ns(tsk, __PIDTYPE_TGID, ns); 1229 + } 1230 + 1231 + static inline pid_t task_tgid_vnr(struct task_struct *tsk) 1232 + { 1233 + return __task_pid_nr_ns(tsk, __PIDTYPE_TGID, NULL); 1234 + } 1235 + 1236 + static inline pid_t task_ppid_nr_ns(const struct task_struct *tsk, struct pid_namespace *ns) 1237 + { 1238 + pid_t pid = 0; 1239 + 1240 + rcu_read_lock(); 1241 + if (pid_alive(tsk)) 1242 + pid = task_tgid_nr_ns(rcu_dereference(tsk->real_parent), ns); 1243 + rcu_read_unlock(); 1244 + 1245 + return pid; 1246 + } 1247 + 1248 + static inline pid_t task_ppid_nr(const struct 
task_struct *tsk) 1249 + { 1250 + return task_ppid_nr_ns(tsk, &init_pid_ns); 1200 1251 } 1201 1252 1202 1253 /* Obsolete, do not use: */
+2 -1
include/linux/skb_array.h
··· 193 193 } 194 194 195 195 static inline int skb_array_resize_multiple(struct skb_array **rings, 196 - int nrings, int size, gfp_t gfp) 196 + int nrings, unsigned int size, 197 + gfp_t gfp) 197 198 { 198 199 BUILD_BUG_ON(offsetof(struct skb_array, ring)); 199 200 return ptr_ring_resize_multiple((struct ptr_ring **)rings,
+37
include/linux/wait.h
··· 757 757 __ret; \ 758 758 }) 759 759 760 + #define __wait_event_killable_timeout(wq_head, condition, timeout) \ 761 + ___wait_event(wq_head, ___wait_cond_timeout(condition), \ 762 + TASK_KILLABLE, 0, timeout, \ 763 + __ret = schedule_timeout(__ret)) 764 + 765 + /** 766 + * wait_event_killable_timeout - sleep until a condition gets true or a timeout elapses 767 + * @wq_head: the waitqueue to wait on 768 + * @condition: a C expression for the event to wait for 769 + * @timeout: timeout, in jiffies 770 + * 771 + * The process is put to sleep (TASK_KILLABLE) until the 772 + * @condition evaluates to true or a kill signal is received. 773 + * The @condition is checked each time the waitqueue @wq_head is woken up. 774 + * 775 + * wake_up() has to be called after changing any variable that could 776 + * change the result of the wait condition. 777 + * 778 + * Returns: 779 + * 0 if the @condition evaluated to %false after the @timeout elapsed, 780 + * 1 if the @condition evaluated to %true after the @timeout elapsed, 781 + * the remaining jiffies (at least 1) if the @condition evaluated 782 + * to %true before the @timeout elapsed, or -%ERESTARTSYS if it was 783 + * interrupted by a kill signal. 784 + * 785 + * Only kill signals interrupt this process. 786 + */ 787 + #define wait_event_killable_timeout(wq_head, condition, timeout) \ 788 + ({ \ 789 + long __ret = timeout; \ 790 + might_sleep(); \ 791 + if (!___wait_cond_timeout(condition)) \ 792 + __ret = __wait_event_killable_timeout(wq_head, \ 793 + condition, timeout); \ 794 + __ret; \ 795 + }) 796 + 760 797 761 798 #define __wait_event_lock_irq(wq_head, condition, lock, cmd) \ 762 799 (void)___wait_event(wq_head, condition, TASK_UNINTERRUPTIBLE, 0, 0, \
+10
include/net/addrconf.h
··· 336 336 in6_dev_finish_destroy(idev); 337 337 } 338 338 339 + static inline void in6_dev_put_clear(struct inet6_dev **pidev) 340 + { 341 + struct inet6_dev *idev = *pidev; 342 + 343 + if (idev) { 344 + in6_dev_put(idev); 345 + *pidev = NULL; 346 + } 347 + } 348 + 339 349 static inline void __in6_dev_put(struct inet6_dev *idev) 340 350 { 341 351 refcount_dec(&idev->refcnt);
+5
include/net/bonding.h
··· 277 277 BOND_MODE(bond) == BOND_MODE_ALB; 278 278 } 279 279 280 + static inline bool bond_needs_speed_duplex(const struct bonding *bond) 281 + { 282 + return BOND_MODE(bond) == BOND_MODE_8023AD || bond_is_lb(bond); 283 + } 284 + 280 285 static inline bool bond_is_nondyn_tlb(const struct bonding *bond) 281 286 { 282 287 return (BOND_MODE(bond) == BOND_MODE_TLB) &&
+6 -6
include/net/busy_poll.h
··· 29 29 #include <linux/sched/signal.h> 30 30 #include <net/ip.h> 31 31 32 - #ifdef CONFIG_NET_RX_BUSY_POLL 33 - 34 - struct napi_struct; 35 - extern unsigned int sysctl_net_busy_read __read_mostly; 36 - extern unsigned int sysctl_net_busy_poll __read_mostly; 37 - 38 32 /* 0 - Reserved to indicate value not set 39 33 * 1..NR_CPUS - Reserved for sender_cpu 40 34 * NR_CPUS+1..~0 - Region available for NAPI IDs 41 35 */ 42 36 #define MIN_NAPI_ID ((unsigned int)(NR_CPUS + 1)) 37 + 38 + #ifdef CONFIG_NET_RX_BUSY_POLL 39 + 40 + struct napi_struct; 41 + extern unsigned int sysctl_net_busy_read __read_mostly; 42 + extern unsigned int sysctl_net_busy_poll __read_mostly; 43 43 44 44 static inline bool net_busy_loop_on(void) 45 45 {
+2 -2
include/net/ip.h
··· 352 352 !forwarding) 353 353 return dst_mtu(dst); 354 354 355 - return min(dst->dev->mtu, IP_MAX_MTU); 355 + return min(READ_ONCE(dst->dev->mtu), IP_MAX_MTU); 356 356 } 357 357 358 358 static inline unsigned int ip_skb_dst_mtu(struct sock *sk, ··· 364 364 return ip_dst_mtu_maybe_forward(skb_dst(skb), forwarding); 365 365 } 366 366 367 - return min(skb_dst(skb)->dev->mtu, IP_MAX_MTU); 367 + return min(READ_ONCE(skb_dst(skb)->dev->mtu), IP_MAX_MTU); 368 368 } 369 369 370 370 u32 ip_idents_reserve(u32 hash, int segs);
+15
include/net/mac80211.h
··· 5499 5499 ieee80211_manage_rx_ba_offl(vif, addr, tid + IEEE80211_NUM_TIDS); 5500 5500 } 5501 5501 5502 + /** 5503 + * ieee80211_rx_ba_timer_expired - stop a Rx BA session due to timeout 5504 + * 5505 + * Some device drivers do not offload AddBa/DelBa negotiation, but handle rx 5506 + * buffer reordering internally, and therefore also handle the session timer. 5507 + * 5508 + * Trigger the timeout flow, which sends a DelBa. 5509 + * 5510 + * @vif: &struct ieee80211_vif pointer from the add_interface callback 5511 + * @addr: station MAC address 5512 + * @tid: the rx tid 5513 + */ 5514 + void ieee80211_rx_ba_timer_expired(struct ieee80211_vif *vif, 5515 + const u8 *addr, unsigned int tid); 5516 + 5502 5517 /* Rate control API */ 5503 5518 5504 5519 /**
+4 -1
include/net/sch_generic.h
··· 806 806 old = *pold; 807 807 *pold = new; 808 808 if (old != NULL) { 809 - qdisc_tree_reduce_backlog(old, old->q.qlen, old->qstats.backlog); 809 + unsigned int qlen = old->q.qlen; 810 + unsigned int backlog = old->qstats.backlog; 811 + 810 812 qdisc_reset(old); 813 + qdisc_tree_reduce_backlog(old, qlen, backlog); 811 814 } 812 815 sch_tree_unlock(sch); 813 816
+1 -3
include/net/sock.h
··· 507 507 static inline int sk_peek_offset(struct sock *sk, int flags) 508 508 { 509 509 if (unlikely(flags & MSG_PEEK)) { 510 - s32 off = READ_ONCE(sk->sk_peek_off); 511 - if (off >= 0) 512 - return off; 510 + return READ_ONCE(sk->sk_peek_off); 513 511 } 514 512 515 513 return 0;
+4 -3
include/net/udp.h
··· 366 366 static inline int copy_linear_skb(struct sk_buff *skb, int len, int off, 367 367 struct iov_iter *to) 368 368 { 369 - int n, copy = len - off; 369 + int n; 370 370 371 - n = copy_to_iter(skb->data + off, copy, to); 372 - if (n == copy) 371 + n = copy_to_iter(skb->data + off, len, to); 372 + if (n == len) 373 373 return 0; 374 374 375 + iov_iter_revert(to, n); 375 376 return -EFAULT; 376 377 } 377 378
+1
include/rdma/ib_verbs.h
··· 1683 1683 enum ib_qp_type qp_type; 1684 1684 struct ib_rwq_ind_table *rwq_ind_tbl; 1685 1685 struct ib_qp_security *qp_sec; 1686 + u8 port; 1686 1687 }; 1687 1688 1688 1689 struct ib_mr {
+1
include/scsi/scsi_cmnd.h
··· 57 57 /* for scmd->flags */ 58 58 #define SCMD_TAGGED (1 << 0) 59 59 #define SCMD_UNCHECKED_ISA_DMA (1 << 1) 60 + #define SCMD_ZONE_WRITE_LOCK (1 << 2) 60 61 61 62 struct scsi_cmnd { 62 63 struct scsi_request req;
-3
include/uapi/linux/loop.h
··· 22 22 LO_FLAGS_AUTOCLEAR = 4, 23 23 LO_FLAGS_PARTSCAN = 8, 24 24 LO_FLAGS_DIRECT_IO = 16, 25 - LO_FLAGS_BLOCKSIZE = 32, 26 25 }; 27 26 28 27 #include <asm/posix_types.h> /* for __kernel_old_dev_t */ ··· 58 59 __u8 lo_encrypt_key[LO_KEY_SIZE]; /* ioctl w/o */ 59 60 __u64 lo_init[2]; 60 61 }; 61 - 62 - #define LO_INFO_BLOCKSIZE(l) (l)->lo_init[0] 63 62 64 63 /* 65 64 * Loop filter types
+8 -6
kernel/audit_watch.c
··· 66 66 67 67 /* fsnotify events we care about. */ 68 68 #define AUDIT_FS_WATCH (FS_MOVE | FS_CREATE | FS_DELETE | FS_DELETE_SELF |\ 69 - FS_MOVE_SELF | FS_EVENT_ON_CHILD) 69 + FS_MOVE_SELF | FS_EVENT_ON_CHILD | FS_UNMOUNT) 70 70 71 71 static void audit_free_parent(struct audit_parent *parent) 72 72 { ··· 457 457 list_del(&krule->rlist); 458 458 459 459 if (list_empty(&watch->rules)) { 460 + /* 461 + * audit_remove_watch() drops our reference to 'parent' which 462 + * can get freed. Grab our own reference to be safe. 463 + */ 464 + audit_get_parent(parent); 460 465 audit_remove_watch(watch); 461 - 462 - if (list_empty(&parent->watches)) { 463 - audit_get_parent(parent); 466 + if (list_empty(&parent->watches)) 464 467 fsnotify_destroy_mark(&parent->mark, audit_watch_group); 465 - audit_put_parent(parent); 466 - } 468 + audit_put_parent(parent); 467 469 } 468 470 } 469 471
+58 -28
kernel/events/core.c
··· 2217 2217 return can_add_hw; 2218 2218 } 2219 2219 2220 + /* 2221 + * Complement to update_event_times(). This computes the tstamp_* values to 2222 + * continue 'enabled' state from @now, and effectively discards the time 2223 + * between the prior tstamp_stopped and now (as we were in the OFF state, or 2224 + * just switched (context) time base). 2225 + * 2226 + * This further assumes '@event->state == INACTIVE' (we just came from OFF) and 2227 + * cannot have been scheduled in yet. And going into INACTIVE state means 2228 + * '@event->tstamp_stopped = @now'. 2229 + * 2230 + * Thus given the rules of update_event_times(): 2231 + * 2232 + * total_time_enabled = tstamp_stopped - tstamp_enabled 2233 + * total_time_running = tstamp_stopped - tstamp_running 2234 + * 2235 + * We can insert 'tstamp_stopped == now' and reverse them to compute new 2236 + * tstamp_* values. 2237 + */ 2238 + static void __perf_event_enable_time(struct perf_event *event, u64 now) 2239 + { 2240 + WARN_ON_ONCE(event->state != PERF_EVENT_STATE_INACTIVE); 2241 + 2242 + event->tstamp_stopped = now; 2243 + event->tstamp_enabled = now - event->total_time_enabled; 2244 + event->tstamp_running = now - event->total_time_running; 2245 + } 2246 + 2220 2247 static void add_event_to_ctx(struct perf_event *event, 2221 2248 struct perf_event_context *ctx) 2222 2249 { ··· 2251 2224 2252 2225 list_add_event(event, ctx); 2253 2226 perf_group_attach(event); 2254 - event->tstamp_enabled = tstamp; 2255 - event->tstamp_running = tstamp; 2256 - event->tstamp_stopped = tstamp; 2227 + /* 2228 + * We can be called with event->state == STATE_OFF when we create with 2229 + * .disabled = 1. In that case the IOC_ENABLE will call this function. 
2230 + */ 2231 + if (event->state == PERF_EVENT_STATE_INACTIVE) 2232 + __perf_event_enable_time(event, tstamp); 2257 2233 } 2258 2234 2259 2235 static void ctx_sched_out(struct perf_event_context *ctx, ··· 2501 2471 u64 tstamp = perf_event_time(event); 2502 2472 2503 2473 event->state = PERF_EVENT_STATE_INACTIVE; 2504 - event->tstamp_enabled = tstamp - event->total_time_enabled; 2474 + __perf_event_enable_time(event, tstamp); 2505 2475 list_for_each_entry(sub, &event->sibling_list, group_entry) { 2476 + /* XXX should not be > INACTIVE if event isn't */ 2506 2477 if (sub->state >= PERF_EVENT_STATE_INACTIVE) 2507 - sub->tstamp_enabled = tstamp - sub->total_time_enabled; 2478 + __perf_event_enable_time(sub, tstamp); 2508 2479 } 2509 2480 } 2510 2481 ··· 5121 5090 atomic_inc(&event->rb->aux_mmap_count); 5122 5091 5123 5092 if (event->pmu->event_mapped) 5124 - event->pmu->event_mapped(event); 5093 + event->pmu->event_mapped(event, vma->vm_mm); 5125 5094 } 5126 5095 5127 5096 static void perf_pmu_output_stop(struct perf_event *event); ··· 5144 5113 unsigned long size = perf_data_size(rb); 5145 5114 5146 5115 if (event->pmu->event_unmapped) 5147 - event->pmu->event_unmapped(event); 5116 + event->pmu->event_unmapped(event, vma->vm_mm); 5148 5117 5149 5118 /* 5150 5119 * rb->aux_mmap_count will always drop before rb->mmap_count and ··· 5442 5411 vma->vm_ops = &perf_mmap_vmops; 5443 5412 5444 5413 if (event->pmu->event_mapped) 5445 - event->pmu->event_mapped(event); 5414 + event->pmu->event_mapped(event, vma->vm_mm); 5446 5415 5447 5416 return ret; 5448 5417 } ··· 10032 10001 goto err_context; 10033 10002 10034 10003 /* 10035 - * Do not allow to attach to a group in a different 10036 - * task or CPU context: 10004 + * Make sure we're both events for the same CPU; 10005 + * grouping events for different CPUs is broken; since 10006 + * you can never concurrently schedule them anyhow. 
10037 10007 */ 10038 - if (move_group) { 10039 - /* 10040 - * Make sure we're both on the same task, or both 10041 - * per-cpu events. 10042 - */ 10043 - if (group_leader->ctx->task != ctx->task) 10044 - goto err_context; 10008 + if (group_leader->cpu != event->cpu) 10009 + goto err_context; 10045 10010 10046 - /* 10047 - * Make sure we're both events for the same CPU; 10048 - * grouping events for different CPUs is broken; since 10049 - * you can never concurrently schedule them anyhow. 10050 - */ 10051 - if (group_leader->cpu != event->cpu) 10052 - goto err_context; 10053 - } else { 10054 - if (group_leader->ctx != ctx) 10055 - goto err_context; 10056 - } 10011 + /* 10012 + * Make sure we're both on the same task, or both 10013 + * per-CPU events. 10014 + */ 10015 + if (group_leader->ctx->task != ctx->task) 10016 + goto err_context; 10017 + 10018 + /* 10019 + * Do not allow to attach to a group in a different task 10020 + * or CPU context. If we're moving SW events, we'll fix 10021 + * this up later, so allow that. 10022 + */ 10023 + if (!move_group && group_leader->ctx != ctx) 10024 + goto err_context; 10057 10025 10058 10026 /* 10059 10027 * Only a group leader can be exclusive or pinned
+1
kernel/fork.c
··· 806 806 mm_init_cpumask(mm); 807 807 mm_init_aio(mm); 808 808 mm_init_owner(mm, p); 809 + RCU_INIT_POINTER(mm->exe_file, NULL); 809 810 mmu_notifier_mm_init(mm); 810 811 init_tlb_flush_pending(mm); 811 812 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !USE_SPLIT_PMD_PTLOCKS
+8 -2
kernel/irq/chip.c
··· 1000 1000 1001 1001 void irq_modify_status(unsigned int irq, unsigned long clr, unsigned long set) 1002 1002 { 1003 - unsigned long flags; 1003 + unsigned long flags, trigger, tmp; 1004 1004 struct irq_desc *desc = irq_get_desc_lock(irq, &flags, 0); 1005 1005 1006 1006 if (!desc) ··· 1014 1014 1015 1015 irq_settings_clr_and_set(desc, clr, set); 1016 1016 1017 + trigger = irqd_get_trigger_type(&desc->irq_data); 1018 + 1017 1019 irqd_clear(&desc->irq_data, IRQD_NO_BALANCING | IRQD_PER_CPU | 1018 1020 IRQD_TRIGGER_MASK | IRQD_LEVEL | IRQD_MOVE_PCNTXT); 1019 1021 if (irq_settings_has_no_balance_set(desc)) ··· 1027 1025 if (irq_settings_is_level(desc)) 1028 1026 irqd_set(&desc->irq_data, IRQD_LEVEL); 1029 1027 1030 - irqd_set(&desc->irq_data, irq_settings_get_trigger_mask(desc)); 1028 + tmp = irq_settings_get_trigger_mask(desc); 1029 + if (tmp != IRQ_TYPE_NONE) 1030 + trigger = tmp; 1031 + 1032 + irqd_set(&desc->irq_data, trigger); 1031 1033 1032 1034 irq_put_desc_unlock(desc, flags); 1033 1035 }
+2 -2
kernel/irq/ipi.c
··· 165 165 struct irq_data *data = irq_get_irq_data(irq); 166 166 struct cpumask *ipimask = data ? irq_data_get_affinity_mask(data) : NULL; 167 167 168 - if (!data || !ipimask || cpu > nr_cpu_ids) 168 + if (!data || !ipimask || cpu >= nr_cpu_ids) 169 169 return INVALID_HWIRQ; 170 170 171 171 if (!cpumask_test_cpu(cpu, ipimask)) ··· 195 195 if (!chip->ipi_send_single && !chip->ipi_send_mask) 196 196 return -EINVAL; 197 197 198 - if (cpu > nr_cpu_ids) 198 + if (cpu >= nr_cpu_ids) 199 199 return -EINVAL; 200 200 201 201 if (dest) {
+23 -2
kernel/kmod.c
··· 71 71 static DECLARE_WAIT_QUEUE_HEAD(kmod_wq); 72 72 73 73 /* 74 + * This is a restriction on having *all* MAX_KMOD_CONCURRENT threads 75 + * running at the same time without returning. When this happens we 76 + * believe you've somehow ended up with a recursive module dependency 77 + * creating a loop. 78 + * 79 + * We have no option but to fail. 80 + * 81 + * Userspace should proactively try to detect and prevent these. 82 + */ 83 + #define MAX_KMOD_ALL_BUSY_TIMEOUT 5 84 + 85 + /* 74 86 modprobe_path is set via /proc/sys. 75 87 */ 76 88 char modprobe_path[KMOD_PATH_LEN] = "/sbin/modprobe"; ··· 179 167 pr_warn_ratelimited("request_module: kmod_concurrent_max (%u) close to 0 (max_modprobes: %u), for module %s, throttling...", 180 168 atomic_read(&kmod_concurrent_max), 181 169 MAX_KMOD_CONCURRENT, module_name); 182 - wait_event_interruptible(kmod_wq, 183 - atomic_dec_if_positive(&kmod_concurrent_max) >= 0); 170 + ret = wait_event_killable_timeout(kmod_wq, 171 + atomic_dec_if_positive(&kmod_concurrent_max) >= 0, 172 + MAX_KMOD_ALL_BUSY_TIMEOUT * HZ); 173 + if (!ret) { 174 + pr_warn_ratelimited("request_module: modprobe %s cannot be processed, kmod busy with %d threads for more than %d seconds now", 175 + module_name, MAX_KMOD_CONCURRENT, MAX_KMOD_ALL_BUSY_TIMEOUT); 176 + return -ETIME; 177 + } else if (ret == -ERESTARTSYS) { 178 + pr_warn_ratelimited("request_module: sigkill sent for modprobe %s, giving up", module_name); 179 + return ret; 180 + } 184 181 } 185 182 186 183 trace_module_request(module_name, wait, _RET_IP_);
+4 -7
kernel/pid.c
··· 527 527 if (!ns) 528 528 ns = task_active_pid_ns(current); 529 529 if (likely(pid_alive(task))) { 530 - if (type != PIDTYPE_PID) 530 + if (type != PIDTYPE_PID) { 531 + if (type == __PIDTYPE_TGID) 532 + type = PIDTYPE_PID; 531 533 task = task->group_leader; 534 + } 532 535 nr = pid_nr_ns(rcu_dereference(task->pids[type].pid), ns); 533 536 } 534 537 rcu_read_unlock(); ··· 539 536 return nr; 540 537 } 541 538 EXPORT_SYMBOL(__task_pid_nr_ns); 542 - 543 - pid_t task_tgid_nr_ns(struct task_struct *tsk, struct pid_namespace *ns) 544 - { 545 - return pid_nr_ns(task_tgid(tsk), ns); 546 - } 547 - EXPORT_SYMBOL(task_tgid_nr_ns); 548 539 549 540 struct pid_namespace *task_active_pid_ns(struct task_struct *tsk) 550 541 {
+4 -3
kernel/sched/wait.c
··· 70 70 71 71 list_for_each_entry_safe(curr, next, &wq_head->head, entry) { 72 72 unsigned flags = curr->flags; 73 - 74 - if (curr->func(curr, mode, wake_flags, key) && 75 - (flags & WQ_FLAG_EXCLUSIVE) && !--nr_exclusive) 73 + int ret = curr->func(curr, mode, wake_flags, key); 74 + if (ret < 0) 75 + break; 76 + if (ret && (flags & WQ_FLAG_EXCLUSIVE) && !--nr_exclusive) 76 77 break; 77 78 } 78 79 }
+5 -1
kernel/signal.c
··· 1194 1194 recalc_sigpending_and_wake(t); 1195 1195 } 1196 1196 } 1197 - if (action->sa.sa_handler == SIG_DFL) 1197 + /* 1198 + * Don't clear SIGNAL_UNKILLABLE for traced tasks, users won't expect 1199 + * debugging to leave init killable. 1200 + */ 1201 + if (action->sa.sa_handler == SIG_DFL && !t->ptrace) 1198 1202 t->signal->flags &= ~SIGNAL_UNKILLABLE; 1199 1203 ret = specific_send_sig_info(sig, info, t); 1200 1204 spin_unlock_irqrestore(&t->sighand->siglock, flags);
+41 -9
kernel/time/timer.c
··· 203 203 bool migration_enabled; 204 204 bool nohz_active; 205 205 bool is_idle; 206 + bool must_forward_clk; 206 207 DECLARE_BITMAP(pending_map, WHEEL_SIZE); 207 208 struct hlist_head vectors[WHEEL_SIZE]; 208 209 } ____cacheline_aligned; ··· 857 856 858 857 static inline void forward_timer_base(struct timer_base *base) 859 858 { 860 - unsigned long jnow = READ_ONCE(jiffies); 859 + unsigned long jnow; 861 860 862 861 /* 863 - * We only forward the base when it's idle and we have a delta between 864 - * base clock and jiffies. 862 + * We only forward the base when we are idle or have just come out of 863 + * idle (must_forward_clk logic), and have a delta between base clock 864 + * and jiffies. In the common case, run_timers will take care of it. 865 865 */ 866 - if (!base->is_idle || (long) (jnow - base->clk) < 2) 866 + if (likely(!base->must_forward_clk)) 867 + return; 868 + 869 + jnow = READ_ONCE(jiffies); 870 + base->must_forward_clk = base->is_idle; 871 + if ((long)(jnow - base->clk) < 2) 867 872 return; 868 873 869 874 /* ··· 945 938 * same array bucket then just return: 946 939 */ 947 940 if (timer_pending(timer)) { 941 + /* 942 + * The downside of this optimization is that it can result in 943 + * larger granularity than you would get from adding a new 944 + * timer with this expiry. 945 + */ 948 946 if (timer->expires == expires) 949 947 return 1; 950 948 ··· 960 948 * dequeue/enqueue dance. 
961 949 */ 962 950 base = lock_timer_base(timer, &flags); 951 + forward_timer_base(base); 963 952 964 953 clk = base->clk; 965 954 idx = calc_wheel_index(expires, clk); ··· 977 964 } 978 965 } else { 979 966 base = lock_timer_base(timer, &flags); 967 + forward_timer_base(base); 980 968 } 981 969 982 970 ret = detach_if_pending(timer, base, false); ··· 1005 991 raw_spin_lock(&base->lock); 1006 992 WRITE_ONCE(timer->flags, 1007 993 (timer->flags & ~TIMER_BASEMASK) | base->cpu); 994 + forward_timer_base(base); 1008 995 } 1009 996 } 1010 - 1011 - /* Try to forward a stale timer base clock */ 1012 - forward_timer_base(base); 1013 997 1014 998 timer->expires = expires; 1015 999 /* ··· 1124 1112 WRITE_ONCE(timer->flags, 1125 1113 (timer->flags & ~TIMER_BASEMASK) | cpu); 1126 1114 } 1115 + forward_timer_base(base); 1127 1116 1128 1117 debug_activate(timer, timer->expires); 1129 1118 internal_add_timer(base, timer); ··· 1510 1497 if (!is_max_delta) 1511 1498 expires = basem + (u64)(nextevt - basej) * TICK_NSEC; 1512 1499 /* 1513 - * If we expect to sleep more than a tick, mark the base idle: 1500 + * If we expect to sleep more than a tick, mark the base idle. 1501 + * Also the tick is stopped so any added timer must forward 1502 + * the base clk itself to keep granularity small. This idle 1503 + * logic is only maintained for the BASE_STD base, deferrable 1504 + * timers may still see large granularity skew (by design). 
1514 1505 */ 1515 - if ((expires - basem) > TICK_NSEC) 1506 + if ((expires - basem) > TICK_NSEC) { 1507 + base->must_forward_clk = true; 1516 1508 base->is_idle = true; 1509 + } 1517 1510 } 1518 1511 raw_spin_unlock(&base->lock); 1519 1512 ··· 1629 1610 static __latent_entropy void run_timer_softirq(struct softirq_action *h) 1630 1611 { 1631 1612 struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]); 1613 + 1614 + /* 1615 + * must_forward_clk must be cleared before running timers so that any 1616 + * timer functions that call mod_timer will not try to forward the 1617 + * base. idle tracking / clock forwarding logic is only used with 1618 + * BASE_STD timers. 1619 + * 1620 + * The deferrable base does not do idle tracking at all, so we do 1621 + * not forward it. This can result in very large variations in 1622 + * granularity for deferrable timers, but they can be deferred for 1623 + * long periods due to idle. 1624 + */ 1625 + base->must_forward_clk = false; 1632 1626 1633 1627 __run_timers(base); 1634 1628 if (IS_ENABLED(CONFIG_NO_HZ_COMMON) && base->nohz_active)
+30 -4
kernel/trace/bpf_trace.c
··· 204 204 fmt_cnt++; 205 205 } 206 206 207 - return __trace_printk(1/* fake ip will not be printed */, fmt, 208 - mod[0] == 2 ? arg1 : mod[0] == 1 ? (long) arg1 : (u32) arg1, 209 - mod[1] == 2 ? arg2 : mod[1] == 1 ? (long) arg2 : (u32) arg2, 210 - mod[2] == 2 ? arg3 : mod[2] == 1 ? (long) arg3 : (u32) arg3); 207 + /* Horrid workaround for getting va_list handling working with different 208 + * argument type combinations generically for 32 and 64 bit archs. 209 + */ 210 + #define __BPF_TP_EMIT() __BPF_ARG3_TP() 211 + #define __BPF_TP(...) \ 212 + __trace_printk(1 /* Fake ip will not be printed. */, \ 213 + fmt, ##__VA_ARGS__) 214 + 215 + #define __BPF_ARG1_TP(...) \ 216 + ((mod[0] == 2 || (mod[0] == 1 && __BITS_PER_LONG == 64)) \ 217 + ? __BPF_TP(arg1, ##__VA_ARGS__) \ 218 + : ((mod[0] == 1 || (mod[0] == 0 && __BITS_PER_LONG == 32)) \ 219 + ? __BPF_TP((long)arg1, ##__VA_ARGS__) \ 220 + : __BPF_TP((u32)arg1, ##__VA_ARGS__))) 221 + 222 + #define __BPF_ARG2_TP(...) \ 223 + ((mod[1] == 2 || (mod[1] == 1 && __BITS_PER_LONG == 64)) \ 224 + ? __BPF_ARG1_TP(arg2, ##__VA_ARGS__) \ 225 + : ((mod[1] == 1 || (mod[1] == 0 && __BITS_PER_LONG == 32)) \ 226 + ? __BPF_ARG1_TP((long)arg2, ##__VA_ARGS__) \ 227 + : __BPF_ARG1_TP((u32)arg2, ##__VA_ARGS__))) 228 + 229 + #define __BPF_ARG3_TP(...) \ 230 + ((mod[2] == 2 || (mod[2] == 1 && __BITS_PER_LONG == 64)) \ 231 + ? __BPF_ARG2_TP(arg3, ##__VA_ARGS__) \ 232 + : ((mod[2] == 1 || (mod[2] == 0 && __BITS_PER_LONG == 32)) \ 233 + ? __BPF_ARG2_TP((long)arg3, ##__VA_ARGS__) \ 234 + : __BPF_ARG2_TP((u32)arg3, ##__VA_ARGS__))) 235 + 236 + return __BPF_TP_EMIT(); 211 237 } 212 238 213 239 static const struct bpf_func_proto bpf_trace_printk_proto = {
+4
kernel/trace/ftrace.c
··· 889 889 890 890 function_profile_call(trace->func, 0, NULL, NULL); 891 891 892 + /* If function graph is shutting down, ret_stack can be NULL */ 893 + if (!current->ret_stack) 894 + return 0; 895 + 892 896 if (index >= 0 && index < FTRACE_RETFUNC_DEPTH) 893 897 current->ret_stack[index].subtime = 0; 894 898
+9 -5
kernel/trace/ring_buffer.c
··· 4386 4386 * the page that was allocated, with the read page of the buffer. 4387 4387 * 4388 4388 * Returns: 4389 - * The page allocated, or NULL on error. 4389 + * The page allocated, or ERR_PTR 4390 4390 */ 4391 4391 void *ring_buffer_alloc_read_page(struct ring_buffer *buffer, int cpu) 4392 4392 { 4393 - struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu]; 4393 + struct ring_buffer_per_cpu *cpu_buffer; 4394 4394 struct buffer_data_page *bpage = NULL; 4395 4395 unsigned long flags; 4396 4396 struct page *page; 4397 4397 4398 + if (!cpumask_test_cpu(cpu, buffer->cpumask)) 4399 + return ERR_PTR(-ENODEV); 4400 + 4401 + cpu_buffer = buffer->buffers[cpu]; 4398 4402 local_irq_save(flags); 4399 4403 arch_spin_lock(&cpu_buffer->lock); 4400 4404 ··· 4416 4412 page = alloc_pages_node(cpu_to_node(cpu), 4417 4413 GFP_KERNEL | __GFP_NORETRY, 0); 4418 4414 if (!page) 4419 - return NULL; 4415 + return ERR_PTR(-ENOMEM); 4420 4416 4421 4417 bpage = page_address(page); 4422 4418 ··· 4471 4467 * 4472 4468 * for example: 4473 4469 * rpage = ring_buffer_alloc_read_page(buffer, cpu); 4474 - * if (!rpage) 4475 - * return error; 4470 + * if (IS_ERR(rpage)) 4471 + * return PTR_ERR(rpage); 4476 4472 * ret = ring_buffer_read_page(buffer, &rpage, len, cpu, 0); 4477 4473 * if (ret >= 0) 4478 4474 * process_page(rpage, ret);
+1 -1
kernel/trace/ring_buffer_benchmark.c
··· 113 113 int i; 114 114 115 115 bpage = ring_buffer_alloc_read_page(buffer, cpu); 116 - if (!bpage) 116 + if (IS_ERR(bpage)) 117 117 return EVENT_DROPPED; 118 118 119 119 ret = ring_buffer_read_page(buffer, &bpage, PAGE_SIZE, cpu, 1);
+13 -6
kernel/trace/trace.c
··· 6598 6598 { 6599 6599 struct ftrace_buffer_info *info = filp->private_data; 6600 6600 struct trace_iterator *iter = &info->iter; 6601 - ssize_t ret; 6601 + ssize_t ret = 0; 6602 6602 ssize_t size; 6603 6603 6604 6604 if (!count) ··· 6612 6612 if (!info->spare) { 6613 6613 info->spare = ring_buffer_alloc_read_page(iter->trace_buffer->buffer, 6614 6614 iter->cpu_file); 6615 - info->spare_cpu = iter->cpu_file; 6615 + if (IS_ERR(info->spare)) { 6616 + ret = PTR_ERR(info->spare); 6617 + info->spare = NULL; 6618 + } else { 6619 + info->spare_cpu = iter->cpu_file; 6620 + } 6616 6621 } 6617 6622 if (!info->spare) 6618 - return -ENOMEM; 6623 + return ret; 6619 6624 6620 6625 /* Do we have previous read data to read? */ 6621 6626 if (info->read < PAGE_SIZE) ··· 6795 6790 ref->ref = 1; 6796 6791 ref->buffer = iter->trace_buffer->buffer; 6797 6792 ref->page = ring_buffer_alloc_read_page(ref->buffer, iter->cpu_file); 6798 - if (!ref->page) { 6799 - ret = -ENOMEM; 6793 + if (IS_ERR(ref->page)) { 6794 + ret = PTR_ERR(ref->page); 6795 + ref->page = NULL; 6800 6796 kfree(ref); 6801 6797 break; 6802 6798 } ··· 8299 8293 if (ret < 0) 8300 8294 goto out_free_cpumask; 8301 8295 /* Used for event triggers */ 8296 + ret = -ENOMEM; 8302 8297 temp_buffer = ring_buffer_alloc(PAGE_SIZE, RB_FL_OVERWRITE); 8303 8298 if (!temp_buffer) 8304 8299 goto out_rm_hp_state; ··· 8414 8407 } 8415 8408 8416 8409 fs_initcall(tracer_init_tracefs); 8417 - late_initcall(clear_boot_tracer); 8410 + late_initcall_sync(clear_boot_tracer);
+4
kernel/trace/trace_events_filter.c
··· 1959 1959 if (err && set_str) 1960 1960 append_filter_err(ps, filter); 1961 1961 } 1962 + if (err && !set_str) { 1963 + free_event_filter(filter); 1964 + filter = NULL; 1965 + } 1962 1966 create_filter_finish(ps); 1963 1967 1964 1968 *filterp = filter;
+7 -4
kernel/trace/tracing_map.c
··· 221 221 if (!a) 222 222 return; 223 223 224 - if (!a->pages) { 225 - kfree(a); 226 - return; 227 - } 224 + if (!a->pages) 225 + goto free; 228 226 229 227 for (i = 0; i < a->n_pages; i++) { 230 228 if (!a->pages[i]) 231 229 break; 232 230 free_page((unsigned long)a->pages[i]); 233 231 } 232 + 233 + kfree(a->pages); 234 + 235 + free: 236 + kfree(a); 234 237 } 235 238 236 239 struct tracing_map_array *tracing_map_array_alloc(unsigned int n_elts,
+1
kernel/watchdog.c
··· 240 240 * hardlockup detector generates a warning 241 241 */ 242 242 sample_period = get_softlockup_thresh() * ((u64)NSEC_PER_SEC / 5); 243 + watchdog_update_hrtimer_threshold(sample_period); 243 244 } 244 245 245 246 /* Commands for resetting the watchdog */
+59
kernel/watchdog_hld.c
··· 37 37 } 38 38 EXPORT_SYMBOL(arch_touch_nmi_watchdog); 39 39 40 + #ifdef CONFIG_HARDLOCKUP_CHECK_TIMESTAMP 41 + static DEFINE_PER_CPU(ktime_t, last_timestamp); 42 + static DEFINE_PER_CPU(unsigned int, nmi_rearmed); 43 + static ktime_t watchdog_hrtimer_sample_threshold __read_mostly; 44 + 45 + void watchdog_update_hrtimer_threshold(u64 period) 46 + { 47 + /* 48 + * The hrtimer runs with a period of (watchdog_threshold * 2) / 5 49 + * 50 + * So it runs effectively with 2.5 times the rate of the NMI 51 + * watchdog. That means the hrtimer should fire 2-3 times before 52 + * the NMI watchdog expires. The NMI watchdog on x86 is based on 53 + * unhalted CPU cycles, so if Turbo-Mode is enabled the CPU cycles 54 + * might run way faster than expected and the NMI fires in a 55 + * smaller period than the one deduced from the nominal CPU 56 + * frequency. Depending on the Turbo-Mode factor this might be fast 57 + * enough to get the NMI period smaller than the hrtimer watchdog 58 + * period and trigger false positives. 59 + * 60 + * The sample threshold is used to check in the NMI handler whether 61 + * the minimum time between two NMI samples has elapsed. That 62 + * prevents false positives. 63 + * 64 + * Set this to 4/5 of the actual watchdog threshold period so the 65 + * hrtimer is guaranteed to fire at least once within the real 66 + * watchdog threshold. 67 + */ 68 + watchdog_hrtimer_sample_threshold = period * 2; 69 + } 70 + 71 + static bool watchdog_check_timestamp(void) 72 + { 73 + ktime_t delta, now = ktime_get_mono_fast_ns(); 74 + 75 + delta = now - __this_cpu_read(last_timestamp); 76 + if (delta < watchdog_hrtimer_sample_threshold) { 77 + /* 78 + * If ktime is jiffies based, a stalled timer would prevent 79 + * jiffies from being incremented and the filter would look 80 + * at a stale timestamp and never trigger. 
81 + */ 82 + if (__this_cpu_inc_return(nmi_rearmed) < 10) 83 + return false; 84 + } 85 + __this_cpu_write(nmi_rearmed, 0); 86 + __this_cpu_write(last_timestamp, now); 87 + return true; 88 + } 89 + #else 90 + static inline bool watchdog_check_timestamp(void) 91 + { 92 + return true; 93 + } 94 + #endif 95 + 40 96 static struct perf_event_attr wd_hw_attr = { 41 97 .type = PERF_TYPE_HARDWARE, 42 98 .config = PERF_COUNT_HW_CPU_CYCLES, ··· 116 60 __this_cpu_write(watchdog_nmi_touch, false); 117 61 return; 118 62 } 63 + 64 + if (!watchdog_check_timestamp()) 65 + return; 119 66 120 67 /* check for a hardlockup 121 68 * This is done by making sure our timer interrupt
+7
lib/Kconfig.debug
··· 798 798 select SOFTLOCKUP_DETECTOR 799 799 800 800 # 801 + # Enables a timestamp based low pass filter to compensate for perf based 802 + # hard lockup detection which runs too fast due to turbo modes. 803 + # 804 + config HARDLOCKUP_CHECK_TIMESTAMP 805 + bool 806 + 807 + # 801 808 # arch/ can define HAVE_HARDLOCKUP_DETECTOR_ARCH to provide their own hard 802 809 # lockup detector rather than the perf based detector. 803 810 #
+1 -1
mm/cma_debug.c
··· 167 167 char name[16]; 168 168 int u32s; 169 169 170 - sprintf(name, "cma-%s", cma->name); 170 + scnprintf(name, sizeof(name), "cma-%s", cma->name); 171 171 172 172 tmp = debugfs_create_dir(name, cma_debugfs_root); 173 173
+11 -9
mm/filemap.c
··· 885 885 page_writeback_init(); 886 886 } 887 887 888 + /* This has the same layout as wait_bit_key - see fs/cachefiles/rdwr.c */ 888 889 struct wait_page_key { 889 890 struct page *page; 890 891 int bit_nr; ··· 910 909 911 910 if (wait_page->bit_nr != key->bit_nr) 912 911 return 0; 912 + 913 + /* Stop walking if it's locked */ 913 914 if (test_bit(key->bit_nr, &key->page->flags)) 914 - return 0; 915 + return -1; 915 916 916 917 return autoremove_wake_function(wait, mode, sync, key); 917 918 } ··· 967 964 int ret = 0; 968 965 969 966 init_wait(wait); 967 + wait->flags = lock ? WQ_FLAG_EXCLUSIVE : 0; 970 968 wait->func = wake_page_function; 971 969 wait_page.page = page; 972 970 wait_page.bit_nr = bit_nr; ··· 976 972 spin_lock_irq(&q->lock); 977 973 978 974 if (likely(list_empty(&wait->entry))) { 979 - if (lock) 980 - __add_wait_queue_entry_tail_exclusive(q, wait); 981 - else 982 - __add_wait_queue(q, wait); 975 + __add_wait_queue_entry_tail(q, wait); 983 976 SetPageWaiters(page); 984 977 } 985 978 ··· 986 985 987 986 if (likely(test_bit(bit_nr, &page->flags))) { 988 987 io_schedule(); 989 - if (unlikely(signal_pending_state(state, current))) { 990 - ret = -EINTR; 991 - break; 992 - } 993 988 } 994 989 995 990 if (lock) { ··· 994 997 } else { 995 998 if (!test_bit(bit_nr, &page->flags)) 996 999 break; 1000 + } 1001 + 1002 + if (unlikely(signal_pending_state(state, current))) { 1003 + ret = -EINTR; 1004 + break; 997 1005 } 998 1006 } 999 1007
+22 -8
mm/huge_memory.c
··· 32 32 #include <linux/userfaultfd_k.h> 33 33 #include <linux/page_idle.h> 34 34 #include <linux/shmem_fs.h> 35 + #include <linux/oom.h> 35 36 36 37 #include <asm/tlb.h> 37 38 #include <asm/pgalloc.h> ··· 551 550 struct mem_cgroup *memcg; 552 551 pgtable_t pgtable; 553 552 unsigned long haddr = vmf->address & HPAGE_PMD_MASK; 553 + int ret = 0; 554 554 555 555 VM_BUG_ON_PAGE(!PageCompound(page), page); 556 556 ··· 563 561 564 562 pgtable = pte_alloc_one(vma->vm_mm, haddr); 565 563 if (unlikely(!pgtable)) { 566 - mem_cgroup_cancel_charge(page, memcg, true); 567 - put_page(page); 568 - return VM_FAULT_OOM; 564 + ret = VM_FAULT_OOM; 565 + goto release; 569 566 } 570 567 571 568 clear_huge_page(page, haddr, HPAGE_PMD_NR); ··· 577 576 578 577 vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd); 579 578 if (unlikely(!pmd_none(*vmf->pmd))) { 580 - spin_unlock(vmf->ptl); 581 - mem_cgroup_cancel_charge(page, memcg, true); 582 - put_page(page); 583 - pte_free(vma->vm_mm, pgtable); 579 + goto unlock_release; 584 580 } else { 585 581 pmd_t entry; 582 + 583 + ret = check_stable_address_space(vma->vm_mm); 584 + if (ret) 585 + goto unlock_release; 586 586 587 587 /* Deliver the page fault to userland */ 588 588 if (userfaultfd_missing(vma)) { ··· 612 610 } 613 611 614 612 return 0; 613 + unlock_release: 614 + spin_unlock(vmf->ptl); 615 + release: 616 + if (pgtable) 617 + pte_free(vma->vm_mm, pgtable); 618 + mem_cgroup_cancel_charge(page, memcg, true); 619 + put_page(page); 620 + return ret; 621 + 615 622 } 616 623 617 624 /* ··· 699 688 ret = 0; 700 689 set = false; 701 690 if (pmd_none(*vmf->pmd)) { 702 - if (userfaultfd_missing(vma)) { 691 + ret = check_stable_address_space(vma->vm_mm); 692 + if (ret) { 693 + spin_unlock(vmf->ptl); 694 + } else if (userfaultfd_missing(vma)) { 703 695 spin_unlock(vmf->ptl); 704 696 ret = handle_userfault(vmf, VM_UFFD_MISSING); 705 697 VM_BUG_ON(ret & VM_FAULT_FALLBACK);
+1 -1
mm/madvise.c
··· 368 368 pte_offset_map_lock(mm, pmd, addr, &ptl); 369 369 goto out; 370 370 } 371 - put_page(page); 372 371 unlock_page(page); 372 + put_page(page); 373 373 pte = pte_offset_map_lock(mm, pmd, addr, &ptl); 374 374 pte--; 375 375 addr -= PAGE_SIZE;
+17 -21
mm/memblock.c
··· 285 285 } 286 286 287 287 #ifdef CONFIG_ARCH_DISCARD_MEMBLOCK 288 - 289 - phys_addr_t __init_memblock get_allocated_memblock_reserved_regions_info( 290 - phys_addr_t *addr) 288 + /** 289 + * Discard memory and reserved arrays if they were allocated 290 + */ 291 + void __init memblock_discard(void) 291 292 { 292 - if (memblock.reserved.regions == memblock_reserved_init_regions) 293 - return 0; 293 + phys_addr_t addr, size; 294 294 295 - *addr = __pa(memblock.reserved.regions); 295 + if (memblock.reserved.regions != memblock_reserved_init_regions) { 296 + addr = __pa(memblock.reserved.regions); 297 + size = PAGE_ALIGN(sizeof(struct memblock_region) * 298 + memblock.reserved.max); 299 + __memblock_free_late(addr, size); 300 + } 296 301 297 - return PAGE_ALIGN(sizeof(struct memblock_region) * 298 - memblock.reserved.max); 302 + if (memblock.memory.regions != memblock_memory_init_regions) { 303 + addr = __pa(memblock.memory.regions); 304 + size = PAGE_ALIGN(sizeof(struct memblock_region) * 305 + memblock.memory.max); 306 + __memblock_free_late(addr, size); 307 + } 299 308 } 300 - 301 - phys_addr_t __init_memblock get_allocated_memblock_memory_regions_info( 302 - phys_addr_t *addr) 303 - { 304 - if (memblock.memory.regions == memblock_memory_init_regions) 305 - return 0; 306 - 307 - *addr = __pa(memblock.memory.regions); 308 - 309 - return PAGE_ALIGN(sizeof(struct memblock_region) * 310 - memblock.memory.max); 311 - } 312 - 313 309 #endif 314 310 315 311 /**
+31 -12
mm/memcontrol.c
··· 1611 1611 * @page: the page 1612 1612 * 1613 1613 * This function protects unlocked LRU pages from being moved to 1614 - * another cgroup and stabilizes their page->mem_cgroup binding. 1614 + * another cgroup. 1615 + * 1616 + * It ensures lifetime of the returned memcg. Caller is responsible 1617 + * for the lifetime of the page; __unlock_page_memcg() is available 1618 + * when @page might get freed inside the locked section. 1615 1619 */ 1616 - void lock_page_memcg(struct page *page) 1620 + struct mem_cgroup *lock_page_memcg(struct page *page) 1617 1621 { 1618 1622 struct mem_cgroup *memcg; 1619 1623 unsigned long flags; ··· 1626 1622 * The RCU lock is held throughout the transaction. The fast 1627 1623 * path can get away without acquiring the memcg->move_lock 1628 1624 * because page moving starts with an RCU grace period. 1629 - */ 1625 + * 1626 + * The RCU lock also protects the memcg from being freed when 1627 + * the page state that is going to change is the only thing 1628 + * preventing the page itself from being freed. E.g. writeback 1629 + * doesn't hold a page reference and relies on PG_writeback to 1630 + * keep off truncation, migration and so forth. 
1631 + */ 1630 1632 rcu_read_lock(); 1631 1633 1632 1634 if (mem_cgroup_disabled()) 1633 - return; 1635 + return NULL; 1634 1636 again: 1635 1637 memcg = page->mem_cgroup; 1636 1638 if (unlikely(!memcg)) 1637 - return; 1639 + return NULL; 1638 1640 1639 1641 if (atomic_read(&memcg->moving_account) <= 0) 1640 - return; 1642 + return memcg; 1641 1643 1642 1644 spin_lock_irqsave(&memcg->move_lock, flags); 1643 1645 if (memcg != page->mem_cgroup) { ··· 1659 1649 memcg->move_lock_task = current; 1660 1650 memcg->move_lock_flags = flags; 1661 1651 1662 - return; 1652 + return memcg; 1663 1653 } 1664 1654 EXPORT_SYMBOL(lock_page_memcg); 1665 1655 1666 1656 /** 1667 - * unlock_page_memcg - unlock a page->mem_cgroup binding 1668 - * @page: the page 1657 + * __unlock_page_memcg - unlock and unpin a memcg 1658 + * @memcg: the memcg 1659 + * 1660 + * Unlock and unpin a memcg returned by lock_page_memcg(). 1669 1661 */ 1670 - void unlock_page_memcg(struct page *page) 1662 + void __unlock_page_memcg(struct mem_cgroup *memcg) 1671 1663 { 1672 - struct mem_cgroup *memcg = page->mem_cgroup; 1673 - 1674 1664 if (memcg && memcg->move_lock_task == current) { 1675 1665 unsigned long flags = memcg->move_lock_flags; 1676 1666 ··· 1681 1671 } 1682 1672 1683 1673 rcu_read_unlock(); 1674 + } 1675 + 1676 + /** 1677 + * unlock_page_memcg - unlock a page->mem_cgroup binding 1678 + * @page: the page 1679 + */ 1680 + void unlock_page_memcg(struct page *page) 1681 + { 1682 + __unlock_page_memcg(page->mem_cgroup); 1684 1683 } 1685 1684 EXPORT_SYMBOL(unlock_page_memcg); 1686 1685
+20 -16
mm/memory.c
··· 68 68 #include <linux/debugfs.h> 69 69 #include <linux/userfaultfd_k.h> 70 70 #include <linux/dax.h> 71 + #include <linux/oom.h> 71 72 72 73 #include <asm/io.h> 73 74 #include <asm/mmu_context.h> ··· 2894 2893 struct vm_area_struct *vma = vmf->vma; 2895 2894 struct mem_cgroup *memcg; 2896 2895 struct page *page; 2896 + int ret = 0; 2897 2897 pte_t entry; 2898 2898 2899 2899 /* File mapping without ->vm_ops ? */ ··· 2926 2924 vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, 2927 2925 vmf->address, &vmf->ptl); 2928 2926 if (!pte_none(*vmf->pte)) 2927 + goto unlock; 2928 + ret = check_stable_address_space(vma->vm_mm); 2929 + if (ret) 2929 2930 goto unlock; 2930 2931 /* Deliver the page fault to userland, check inside PT lock */ 2931 2932 if (userfaultfd_missing(vma)) { ··· 2964 2959 if (!pte_none(*vmf->pte)) 2965 2960 goto release; 2966 2961 2962 + ret = check_stable_address_space(vma->vm_mm); 2963 + if (ret) 2964 + goto release; 2965 + 2967 2966 /* Deliver the page fault to userland, check inside PT lock */ 2968 2967 if (userfaultfd_missing(vma)) { 2969 2968 pte_unmap_unlock(vmf->pte, vmf->ptl); ··· 2987 2978 update_mmu_cache(vma, vmf->address, vmf->pte); 2988 2979 unlock: 2989 2980 pte_unmap_unlock(vmf->pte, vmf->ptl); 2990 - return 0; 2981 + return ret; 2991 2982 release: 2992 2983 mem_cgroup_cancel_charge(page, memcg, false); 2993 2984 put_page(page); ··· 3261 3252 int finish_fault(struct vm_fault *vmf) 3262 3253 { 3263 3254 struct page *page; 3264 - int ret; 3255 + int ret = 0; 3265 3256 3266 3257 /* Did we COW the page? 
*/ 3267 3258 if ((vmf->flags & FAULT_FLAG_WRITE) && ··· 3269 3260 page = vmf->cow_page; 3270 3261 else 3271 3262 page = vmf->page; 3272 - ret = alloc_set_pte(vmf, vmf->memcg, page); 3263 + 3264 + /* 3265 + * check even for read faults because we might have lost our CoWed 3266 + * page 3267 + */ 3268 + if (!(vmf->vma->vm_flags & VM_SHARED)) 3269 + ret = check_stable_address_space(vmf->vma->vm_mm); 3270 + if (!ret) 3271 + ret = alloc_set_pte(vmf, vmf->memcg, page); 3273 3272 if (vmf->pte) 3274 3273 pte_unmap_unlock(vmf->pte, vmf->ptl); 3275 3274 return ret; ··· 3916 3899 if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM)) 3917 3900 mem_cgroup_oom_synchronize(false); 3918 3901 } 3919 - 3920 - /* 3921 - * This mm has been already reaped by the oom reaper and so the 3922 - * refault cannot be trusted in general. Anonymous refaults would 3923 - * lose data and give a zero page instead e.g. This is especially 3924 - * problem for use_mm() because regular tasks will just die and 3925 - * the corrupted data will not be visible anywhere while kthread 3926 - * will outlive the oom victim and potentially propagate the date 3927 - * further. 3928 - */ 3929 - if (unlikely((current->flags & PF_KTHREAD) && !(ret & VM_FAULT_ERROR) 3930 - && test_bit(MMF_UNSTABLE, &vma->vm_mm->flags))) 3931 - ret = VM_FAULT_SIGBUS; 3932 3902 3933 3903 return ret; 3934 3904 }
-5
mm/mempolicy.c
··· 861 861 *policy |= (pol->flags & MPOL_MODE_FLAGS); 862 862 } 863 863 864 - if (vma) { 865 - up_read(&current->mm->mmap_sem); 866 - vma = NULL; 867 - } 868 - 869 864 err = 0; 870 865 if (nmask) { 871 866 if (mpol_store_user_nodemask(pol)) {
+3 -8
mm/migrate.c
··· 41 41 #include <linux/page_idle.h> 42 42 #include <linux/page_owner.h> 43 43 #include <linux/sched/mm.h> 44 + #include <linux/ptrace.h> 44 45 45 46 #include <asm/tlbflush.h> 46 47 ··· 1653 1652 const int __user *, nodes, 1654 1653 int __user *, status, int, flags) 1655 1654 { 1656 - const struct cred *cred = current_cred(), *tcred; 1657 1655 struct task_struct *task; 1658 1656 struct mm_struct *mm; 1659 1657 int err; ··· 1676 1676 1677 1677 /* 1678 1678 * Check if this process has the right to modify the specified 1679 - * process. The right exists if the process has administrative 1680 - * capabilities, superuser privileges or the same 1681 - * userid as the target process. 1679 + * process. Use the regular "ptrace_may_access()" checks. 1682 1680 */ 1683 - tcred = __task_cred(task); 1684 - if (!uid_eq(cred->euid, tcred->suid) && !uid_eq(cred->euid, tcred->uid) && 1685 - !uid_eq(cred->uid, tcred->suid) && !uid_eq(cred->uid, tcred->uid) && 1686 - !capable(CAP_SYS_NICE)) { 1681 + if (!ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS)) { 1687 1682 rcu_read_unlock(); 1688 1683 err = -EPERM; 1689 1684 goto out;
-16
mm/nobootmem.c
··· 146 146 NULL) 147 147 count += __free_memory_core(start, end); 148 148 149 - #ifdef CONFIG_ARCH_DISCARD_MEMBLOCK 150 - { 151 - phys_addr_t size; 152 - 153 - /* Free memblock.reserved array if it was allocated */ 154 - size = get_allocated_memblock_reserved_regions_info(&start); 155 - if (size) 156 - count += __free_memory_core(start, start + size); 157 - 158 - /* Free memblock.memory array if it was allocated */ 159 - size = get_allocated_memblock_memory_regions_info(&start); 160 - if (size) 161 - count += __free_memory_core(start, start + size); 162 - } 163 - #endif 164 - 165 149 return count; 166 150 } 167 151
+12 -3
mm/page-writeback.c
··· 2724 2724 int test_clear_page_writeback(struct page *page) 2725 2725 { 2726 2726 struct address_space *mapping = page_mapping(page); 2727 + struct mem_cgroup *memcg; 2728 + struct lruvec *lruvec; 2727 2729 int ret; 2728 2730 2729 - lock_page_memcg(page); 2731 + memcg = lock_page_memcg(page); 2732 + lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page)); 2730 2733 if (mapping && mapping_use_writeback_tags(mapping)) { 2731 2734 struct inode *inode = mapping->host; 2732 2735 struct backing_dev_info *bdi = inode_to_bdi(inode); ··· 2757 2754 } else { 2758 2755 ret = TestClearPageWriteback(page); 2759 2756 } 2757 + /* 2758 + * NOTE: Page might be free now! Writeback doesn't hold a page 2759 + * reference on its own, it relies on truncation to wait for 2760 + * the clearing of PG_writeback. The below can only access 2761 + * page state that is static across allocation cycles. 2762 + */ 2760 2763 if (ret) { 2761 - dec_lruvec_page_state(page, NR_WRITEBACK); 2764 + dec_lruvec_state(lruvec, NR_WRITEBACK); 2762 2765 dec_zone_page_state(page, NR_ZONE_WRITE_PENDING); 2763 2766 inc_node_page_state(page, NR_WRITTEN); 2764 2767 } 2765 - unlock_page_memcg(page); 2768 + __unlock_page_memcg(memcg); 2766 2769 return ret; 2767 2770 } 2768 2771
+22 -2
mm/page_alloc.c
··· 66 66 #include <linux/kthread.h> 67 67 #include <linux/memcontrol.h> 68 68 #include <linux/ftrace.h> 69 + #include <linux/nmi.h> 69 70 70 71 #include <asm/sections.h> 71 72 #include <asm/tlbflush.h> ··· 1585 1584 /* Reinit limits that are based on free pages after the kernel is up */ 1586 1585 files_maxfiles_init(); 1587 1586 #endif 1587 + #ifdef CONFIG_ARCH_DISCARD_MEMBLOCK 1588 + /* Discard memblock private memory */ 1589 + memblock_discard(); 1590 + #endif 1588 1591 1589 1592 for_each_populated_zone(zone) 1590 1593 set_zone_contiguous(zone); ··· 2536 2531 2537 2532 #ifdef CONFIG_HIBERNATION 2538 2533 2534 + /* 2535 + * Touch the watchdog for every WD_PAGE_COUNT pages. 2536 + */ 2537 + #define WD_PAGE_COUNT (128*1024) 2538 + 2539 2539 void mark_free_pages(struct zone *zone) 2540 2540 { 2541 - unsigned long pfn, max_zone_pfn; 2541 + unsigned long pfn, max_zone_pfn, page_count = WD_PAGE_COUNT; 2542 2542 unsigned long flags; 2543 2543 unsigned int order, t; 2544 2544 struct page *page; ··· 2558 2548 if (pfn_valid(pfn)) { 2559 2549 page = pfn_to_page(pfn); 2560 2550 2551 + if (!--page_count) { 2552 + touch_nmi_watchdog(); 2553 + page_count = WD_PAGE_COUNT; 2554 + } 2555 + 2561 2556 if (page_zone(page) != zone) 2562 2557 continue; 2563 2558 ··· 2576 2561 unsigned long i; 2577 2562 2578 2563 pfn = page_to_pfn(page); 2579 - for (i = 0; i < (1UL << order); i++) 2564 + for (i = 0; i < (1UL << order); i++) { 2565 + if (!--page_count) { 2566 + touch_nmi_watchdog(); 2567 + page_count = WD_PAGE_COUNT; 2568 + } 2580 2569 swsusp_set_page_free(pfn_to_page(pfn + i)); 2570 + } 2581 2571 } 2582 2572 } 2583 2573 spin_unlock_irqrestore(&zone->lock, flags);
+2 -2
mm/shmem.c
··· 3967 3967 } 3968 3968 3969 3969 #ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE 3970 - if (has_transparent_hugepage() && shmem_huge < SHMEM_HUGE_DENY) 3970 + if (has_transparent_hugepage() && shmem_huge > SHMEM_HUGE_DENY) 3971 3971 SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge; 3972 3972 else 3973 3973 shmem_huge = 0; /* just in case it was patched */ ··· 4028 4028 return -EINVAL; 4029 4029 4030 4030 shmem_huge = huge; 4031 - if (shmem_huge < SHMEM_HUGE_DENY) 4031 + if (shmem_huge > SHMEM_HUGE_DENY) 4032 4032 SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge; 4033 4033 return count; 4034 4034 }
+2 -1
mm/slub.c
··· 5642 5642 * A cache is never shut down before deactivation is 5643 5643 * complete, so no need to worry about synchronization. 5644 5644 */ 5645 - return; 5645 + goto out; 5646 5646 5647 5647 #ifdef CONFIG_MEMCG 5648 5648 kset_unregister(s->memcg_kset); 5649 5649 #endif 5650 5650 kobject_uevent(&s->kobj, KOBJ_REMOVE); 5651 5651 kobject_del(&s->kobj); 5652 + out: 5652 5653 kobject_put(&s->kobj); 5653 5654 } 5654 5655
+8 -5
mm/vmalloc.c
··· 1671 1671 struct page **pages; 1672 1672 unsigned int nr_pages, array_size, i; 1673 1673 const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO; 1674 - const gfp_t alloc_mask = gfp_mask | __GFP_HIGHMEM | __GFP_NOWARN; 1674 + const gfp_t alloc_mask = gfp_mask | __GFP_NOWARN; 1675 + const gfp_t highmem_mask = (gfp_mask & (GFP_DMA | GFP_DMA32)) ? 1676 + 0 : 1677 + __GFP_HIGHMEM; 1675 1678 1676 1679 nr_pages = get_vm_area_size(area) >> PAGE_SHIFT; 1677 1680 array_size = (nr_pages * sizeof(struct page *)); ··· 1682 1679 area->nr_pages = nr_pages; 1683 1680 /* Please note that the recursion is strictly bounded. */ 1684 1681 if (array_size > PAGE_SIZE) { 1685 - pages = __vmalloc_node(array_size, 1, nested_gfp|__GFP_HIGHMEM, 1682 + pages = __vmalloc_node(array_size, 1, nested_gfp|highmem_mask, 1686 1683 PAGE_KERNEL, node, area->caller); 1687 1684 } else { 1688 1685 pages = kmalloc_node(array_size, nested_gfp, node); ··· 1703 1700 } 1704 1701 1705 1702 if (node == NUMA_NO_NODE) 1706 - page = alloc_page(alloc_mask); 1703 + page = alloc_page(alloc_mask|highmem_mask); 1707 1704 else 1708 - page = alloc_pages_node(node, alloc_mask, 0); 1705 + page = alloc_pages_node(node, alloc_mask|highmem_mask, 0); 1709 1706 1710 1707 if (unlikely(!page)) { 1711 1708 /* Successfully allocated i pages, free them in __vunmap() */ ··· 1713 1710 goto fail; 1714 1711 } 1715 1712 area->pages[i] = page; 1716 - if (gfpflags_allow_blocking(gfp_mask)) 1713 + if (gfpflags_allow_blocking(gfp_mask|highmem_mask)) 1717 1714 cond_resched(); 1718 1715 } 1719 1716
+9 -3
net/core/datagram.c
··· 169 169 int *peeked, int *off, int *err, 170 170 struct sk_buff **last) 171 171 { 172 + bool peek_at_off = false; 172 173 struct sk_buff *skb; 173 - int _off = *off; 174 + int _off = 0; 175 + 176 + if (unlikely(flags & MSG_PEEK && *off >= 0)) { 177 + peek_at_off = true; 178 + _off = *off; 179 + } 174 180 175 181 *last = queue->prev; 176 182 skb_queue_walk(queue, skb) { 177 183 if (flags & MSG_PEEK) { 178 - if (_off >= skb->len && (skb->len || _off || 179 - skb->peeked)) { 184 + if (peek_at_off && _off >= skb->len && 185 + (_off || skb->peeked)) { 180 186 _off -= skb->len; 181 187 continue; 182 188 }
+2
net/core/filter.c
··· 3505 3505 bpf_target_off(struct sk_buff, tc_index, 2, 3506 3506 target_size)); 3507 3507 #else 3508 + *target_size = 2; 3508 3509 if (type == BPF_WRITE) 3509 3510 *insn++ = BPF_MOV64_REG(si->dst_reg, si->dst_reg); 3510 3511 else ··· 3521 3520 *insn++ = BPF_JMP_IMM(BPF_JGE, si->dst_reg, MIN_NAPI_ID, 1); 3522 3521 *insn++ = BPF_MOV64_IMM(si->dst_reg, 0); 3523 3522 #else 3523 + *target_size = 4; 3524 3524 *insn++ = BPF_MOV64_IMM(si->dst_reg, 0); 3525 3525 #endif 3526 3526 break;
+13 -6
net/dccp/proto.c
··· 24 24 #include <net/checksum.h> 25 25 26 26 #include <net/inet_sock.h> 27 + #include <net/inet_common.h> 27 28 #include <net/sock.h> 28 29 #include <net/xfrm.h> 29 30 ··· 171 170 172 171 EXPORT_SYMBOL_GPL(dccp_packet_name); 173 172 173 + static void dccp_sk_destruct(struct sock *sk) 174 + { 175 + struct dccp_sock *dp = dccp_sk(sk); 176 + 177 + ccid_hc_tx_delete(dp->dccps_hc_tx_ccid, sk); 178 + dp->dccps_hc_tx_ccid = NULL; 179 + inet_sock_destruct(sk); 180 + } 181 + 174 182 int dccp_init_sock(struct sock *sk, const __u8 ctl_sock_initialized) 175 183 { 176 184 struct dccp_sock *dp = dccp_sk(sk); ··· 189 179 icsk->icsk_syn_retries = sysctl_dccp_request_retries; 190 180 sk->sk_state = DCCP_CLOSED; 191 181 sk->sk_write_space = dccp_write_space; 182 + sk->sk_destruct = dccp_sk_destruct; 192 183 icsk->icsk_sync_mss = dccp_sync_mss; 193 184 dp->dccps_mss_cache = 536; 194 185 dp->dccps_rate_last = jiffies; ··· 212 201 { 213 202 struct dccp_sock *dp = dccp_sk(sk); 214 203 215 - /* 216 - * DCCP doesn't use sk_write_queue, just sk_send_head 217 - * for retransmissions 218 - */ 204 + __skb_queue_purge(&sk->sk_write_queue); 219 205 if (sk->sk_send_head != NULL) { 220 206 kfree_skb(sk->sk_send_head); 221 207 sk->sk_send_head = NULL; ··· 230 222 dp->dccps_hc_rx_ackvec = NULL; 231 223 } 232 224 ccid_hc_rx_delete(dp->dccps_hc_rx_ccid, sk); 233 - ccid_hc_tx_delete(dp->dccps_hc_tx_ccid, sk); 234 - dp->dccps_hc_rx_ccid = dp->dccps_hc_tx_ccid = NULL; 225 + dp->dccps_hc_rx_ccid = NULL; 235 226 236 227 /* clean up feature negotiation state */ 237 228 dccp_feat_list_purge(&dp->dccps_featneg);
+9 -4
net/dsa/tag_ksz.c
··· 42 42 padlen = (skb->len >= ETH_ZLEN) ? 0 : ETH_ZLEN - skb->len; 43 43 44 44 if (skb_tailroom(skb) >= padlen + KSZ_INGRESS_TAG_LEN) { 45 + if (skb_put_padto(skb, skb->len + padlen)) 46 + return NULL; 47 + 45 48 nskb = skb; 46 49 } else { 47 50 nskb = alloc_skb(NET_IP_ALIGN + skb->len + ··· 59 56 skb_set_transport_header(nskb, 60 57 skb_transport_header(skb) - skb->head); 61 58 skb_copy_and_csum_dev(skb, skb_put(nskb, skb->len)); 59 + 60 + if (skb_put_padto(nskb, nskb->len + padlen)) { 61 + kfree_skb(nskb); 62 + return NULL; 63 + } 64 + 62 65 kfree_skb(skb); 63 66 } 64 - 65 - /* skb is freed when it fails */ 66 - if (skb_put_padto(nskb, nskb->len + padlen)) 67 - return NULL; 68 67 69 68 tag = skb_put(nskb, KSZ_INGRESS_TAG_LEN); 70 69 tag[0] = 0;
+7 -5
net/ipv4/fib_semantics.c
··· 1083 1083 fi = kzalloc(sizeof(*fi)+nhs*sizeof(struct fib_nh), GFP_KERNEL); 1084 1084 if (!fi) 1085 1085 goto failure; 1086 - fib_info_cnt++; 1087 1086 if (cfg->fc_mx) { 1088 1087 fi->fib_metrics = kzalloc(sizeof(*fi->fib_metrics), GFP_KERNEL); 1089 - if (!fi->fib_metrics) 1090 - goto failure; 1088 + if (unlikely(!fi->fib_metrics)) { 1089 + kfree(fi); 1090 + return ERR_PTR(err); 1091 + } 1091 1092 atomic_set(&fi->fib_metrics->refcnt, 1); 1092 - } else 1093 + } else { 1093 1094 fi->fib_metrics = (struct dst_metrics *)&dst_default_metrics; 1094 - 1095 + } 1096 + fib_info_cnt++; 1095 1097 fi->fib_net = net; 1096 1098 fi->fib_protocol = cfg->fc_protocol; 1097 1099 fi->fib_scope = cfg->fc_scope;
+9 -1
net/ipv4/igmp.c
··· 1007 1007 { 1008 1008 /* This basically follows the spec line by line -- see RFC1112 */ 1009 1009 struct igmphdr *ih; 1010 - struct in_device *in_dev = __in_dev_get_rcu(skb->dev); 1010 + struct net_device *dev = skb->dev; 1011 + struct in_device *in_dev; 1011 1012 int len = skb->len; 1012 1013 bool dropped = true; 1013 1014 1015 + if (netif_is_l3_master(dev)) { 1016 + dev = dev_get_by_index_rcu(dev_net(dev), IPCB(skb)->iif); 1017 + if (!dev) 1018 + goto drop; 1019 + } 1020 + 1021 + in_dev = __in_dev_get_rcu(dev); 1014 1022 if (!in_dev) 1015 1023 goto drop; 1016 1024
+12 -4
net/ipv4/route.c
··· 1267 1267 if (mtu) 1268 1268 return mtu; 1269 1269 1270 - mtu = dst->dev->mtu; 1270 + mtu = READ_ONCE(dst->dev->mtu); 1271 1271 1272 1272 if (unlikely(dst_metric_locked(dst, RTAX_MTU))) { 1273 1273 if (rt->rt_uses_gateway && mtu > 576) ··· 2750 2750 err = 0; 2751 2751 if (IS_ERR(rt)) 2752 2752 err = PTR_ERR(rt); 2753 + else 2754 + skb_dst_set(skb, &rt->dst); 2753 2755 } 2754 2756 2755 2757 if (err) 2756 2758 goto errout_free; 2757 2759 2758 - skb_dst_set(skb, &rt->dst); 2759 2760 if (rtm->rtm_flags & RTM_F_NOTIFY) 2760 2761 rt->rt_flags |= RTCF_NOTIFY; 2761 2762 2762 2763 if (rtm->rtm_flags & RTM_F_LOOKUP_TABLE) 2763 2764 table_id = rt->rt_table_id; 2764 2765 2765 - if (rtm->rtm_flags & RTM_F_FIB_MATCH) 2766 + if (rtm->rtm_flags & RTM_F_FIB_MATCH) { 2767 + if (!res.fi) { 2768 + err = fib_props[res.type].error; 2769 + if (!err) 2770 + err = -EHOSTUNREACH; 2771 + goto errout_free; 2772 + } 2766 2773 err = fib_dump_info(skb, NETLINK_CB(in_skb).portid, 2767 2774 nlh->nlmsg_seq, RTM_NEWROUTE, table_id, 2768 2775 rt->rt_type, res.prefix, res.prefixlen, 2769 2776 fl4.flowi4_tos, res.fi, 0); 2770 - else 2777 + } else { 2771 2778 err = rt_fill_info(net, dst, src, table_id, &fl4, skb, 2772 2779 NETLINK_CB(in_skb).portid, nlh->nlmsg_seq); 2780 + } 2773 2781 if (err < 0) 2774 2782 goto errout_free; 2775 2783
+1 -2
net/ipv4/tcp_input.c
··· 3009 3009 /* delta_us may not be positive if the socket is locked 3010 3010 * when the retrans timer fires and is rescheduled. 3011 3011 */ 3012 - if (delta_us > 0) 3013 - rto = usecs_to_jiffies(delta_us); 3012 + rto = usecs_to_jiffies(max_t(int, delta_us, 1)); 3014 3013 } 3015 3014 inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS, rto, 3016 3015 TCP_RTO_MAX);
+2 -2
net/ipv4/tcp_ipv4.c
··· 1722 1722 */ 1723 1723 sock_hold(sk); 1724 1724 refcounted = true; 1725 + if (tcp_filter(sk, skb)) 1726 + goto discard_and_relse; 1725 1727 nsk = tcp_check_req(sk, skb, req, false); 1726 1728 if (!nsk) { 1727 1729 reqsk_put(req); ··· 1731 1729 } 1732 1730 if (nsk == sk) { 1733 1731 reqsk_put(req); 1734 - } else if (tcp_filter(sk, skb)) { 1735 - goto discard_and_relse; 1736 1732 } else if (tcp_child_process(sk, nsk, skb)) { 1737 1733 tcp_v4_send_reset(nsk, skb); 1738 1734 goto discard_and_relse;
+7 -7
net/ipv4/tcp_ulp.c
··· 122 122 123 123 ulp_ops = __tcp_ulp_find_autoload(name); 124 124 if (!ulp_ops) 125 - err = -ENOENT; 126 - else 127 - err = ulp_ops->init(sk); 125 + return -ENOENT; 128 126 129 - if (err) 130 - goto out; 127 + err = ulp_ops->init(sk); 128 + if (err) { 129 + module_put(ulp_ops->owner); 130 + return err; 131 + } 131 132 132 133 icsk->icsk_ulp_ops = ulp_ops; 133 - out: 134 - return err; 134 + return 0; 135 135 }
+2 -1
net/ipv4/udp.c
··· 1574 1574 return ip_recv_error(sk, msg, len, addr_len); 1575 1575 1576 1576 try_again: 1577 - peeking = off = sk_peek_offset(sk, flags); 1577 + peeking = flags & MSG_PEEK; 1578 + off = sk_peek_offset(sk, flags); 1578 1579 skb = __skb_recv_udp(sk, flags, noblock, &peeked, &off, &err); 1579 1580 if (!skb) 1580 1581 return err;
+15 -13
net/ipv6/ip6_fib.c
··· 914 914 } 915 915 nsiblings = iter->rt6i_nsiblings; 916 916 fib6_purge_rt(iter, fn, info->nl_net); 917 + if (fn->rr_ptr == iter) 918 + fn->rr_ptr = NULL; 917 919 rt6_release(iter); 918 920 919 921 if (nsiblings) { ··· 928 926 if (rt6_qualify_for_ecmp(iter)) { 929 927 *ins = iter->dst.rt6_next; 930 928 fib6_purge_rt(iter, fn, info->nl_net); 929 + if (fn->rr_ptr == iter) 930 + fn->rr_ptr = NULL; 931 931 rt6_release(iter); 932 932 nsiblings--; 933 933 } else { ··· 1018 1014 /* Create subtree root node */ 1019 1015 sfn = node_alloc(); 1020 1016 if (!sfn) 1021 - goto st_failure; 1017 + goto failure; 1022 1018 1023 1019 sfn->leaf = info->nl_net->ipv6.ip6_null_entry; 1024 1020 atomic_inc(&info->nl_net->ipv6.ip6_null_entry->rt6i_ref); ··· 1035 1031 1036 1032 if (IS_ERR(sn)) { 1037 1033 /* If it is failed, discard just allocated 1038 - root, and then (in st_failure) stale node 1034 + root, and then (in failure) stale node 1039 1035 in main tree. 1040 1036 */ 1041 1037 node_free(sfn); 1042 1038 err = PTR_ERR(sn); 1043 - goto st_failure; 1039 + goto failure; 1044 1040 } 1045 1041 1046 1042 /* Now link new subtree to main tree */ ··· 1055 1051 1056 1052 if (IS_ERR(sn)) { 1057 1053 err = PTR_ERR(sn); 1058 - goto st_failure; 1054 + goto failure; 1059 1055 } 1060 1056 } 1061 1057 ··· 1096 1092 atomic_inc(&pn->leaf->rt6i_ref); 1097 1093 } 1098 1094 #endif 1099 - /* Always release dst as dst->__refcnt is guaranteed 1100 - * to be taken before entering this function 1101 - */ 1102 - dst_release_immediate(&rt->dst); 1095 + goto failure; 1103 1096 } 1104 1097 return err; 1105 1098 1106 - #ifdef CONFIG_IPV6_SUBTREES 1107 - /* Subtree creation failed, probably main tree node 1108 - is orphan. If it is, shoot it. 1099 + failure: 1100 + /* fn->leaf could be NULL if fn is an intermediate node and we 1101 + * failed to add the new route to it in both subtree creation 1102 + * failure and fib6_add_rt2node() failure case. 
1103 + * In both cases, fib6_repair_tree() should be called to fix 1104 + * fn->leaf. 1109 1105 */ 1110 - st_failure: 1111 1106 if (fn && !(fn->fn_flags & (RTN_RTINFO|RTN_ROOT))) 1112 1107 fib6_repair_tree(info->nl_net, fn); 1113 1108 /* Always release dst as dst->__refcnt is guaranteed ··· 1114 1111 */ 1115 1112 dst_release_immediate(&rt->dst); 1116 1113 return err; 1117 - #endif 1118 1114 } 1119 1115 1120 1116 /*
+8 -11
net/ipv6/route.c
··· 417 417 struct net_device *loopback_dev = 418 418 dev_net(dev)->loopback_dev; 419 419 420 - if (dev != loopback_dev) { 421 - if (idev && idev->dev == dev) { 422 - struct inet6_dev *loopback_idev = 423 - in6_dev_get(loopback_dev); 424 - if (loopback_idev) { 425 - rt->rt6i_idev = loopback_idev; 426 - in6_dev_put(idev); 427 - } 420 + if (idev && idev->dev != loopback_dev) { 421 + struct inet6_dev *loopback_idev = in6_dev_get(loopback_dev); 422 + if (loopback_idev) { 423 + rt->rt6i_idev = loopback_idev; 424 + in6_dev_put(idev); 428 425 } 429 426 } 430 427 } ··· 3721 3724 /* NETDEV_UNREGISTER could be fired for multiple times by 3722 3725 * netdev_wait_allrefs(). Make sure we only call this once. 3723 3726 */ 3724 - in6_dev_put(net->ipv6.ip6_null_entry->rt6i_idev); 3727 + in6_dev_put_clear(&net->ipv6.ip6_null_entry->rt6i_idev); 3725 3728 #ifdef CONFIG_IPV6_MULTIPLE_TABLES 3726 - in6_dev_put(net->ipv6.ip6_prohibit_entry->rt6i_idev); 3727 - in6_dev_put(net->ipv6.ip6_blk_hole_entry->rt6i_idev); 3729 + in6_dev_put_clear(&net->ipv6.ip6_prohibit_entry->rt6i_idev); 3730 + in6_dev_put_clear(&net->ipv6.ip6_blk_hole_entry->rt6i_idev); 3728 3731 #endif 3729 3732 } 3730 3733
+2 -2
net/ipv6/tcp_ipv6.c
··· 1456 1456 } 1457 1457 sock_hold(sk); 1458 1458 refcounted = true; 1459 + if (tcp_filter(sk, skb)) 1460 + goto discard_and_relse; 1459 1461 nsk = tcp_check_req(sk, skb, req, false); 1460 1462 if (!nsk) { 1461 1463 reqsk_put(req); ··· 1466 1464 if (nsk == sk) { 1467 1465 reqsk_put(req); 1468 1466 tcp_v6_restore_cb(skb); 1469 - } else if (tcp_filter(sk, skb)) { 1470 - goto discard_and_relse; 1471 1467 } else if (tcp_child_process(sk, nsk, skb)) { 1472 1468 tcp_v6_send_reset(nsk, skb); 1473 1469 goto discard_and_relse;
+2 -1
net/ipv6/udp.c
··· 362 362 return ipv6_recv_rxpmtu(sk, msg, len, addr_len); 363 363 364 364 try_again: 365 - peeking = off = sk_peek_offset(sk, flags); 365 + peeking = flags & MSG_PEEK; 366 + off = sk_peek_offset(sk, flags); 366 367 skb = __skb_recv_udp(sk, flags, noblock, &peeked, &off, &err); 367 368 if (!skb) 368 369 return err;
+1 -1
net/irda/af_irda.c
··· 2213 2213 { 2214 2214 struct sock *sk = sock->sk; 2215 2215 struct irda_sock *self = irda_sk(sk); 2216 - struct irda_device_list list; 2216 + struct irda_device_list list = { 0 }; 2217 2217 struct irda_device_info *discoveries; 2218 2218 struct irda_ias_set * ias_opt; /* IAS get/query params */ 2219 2219 struct ias_object * ias_obj; /* Object in IAS */
+26 -22
net/key/af_key.c
··· 228 228 #define BROADCAST_ONE 1 229 229 #define BROADCAST_REGISTERED 2 230 230 #define BROADCAST_PROMISC_ONLY 4 231 - static int pfkey_broadcast(struct sk_buff *skb, 231 + static int pfkey_broadcast(struct sk_buff *skb, gfp_t allocation, 232 232 int broadcast_flags, struct sock *one_sk, 233 233 struct net *net) 234 234 { ··· 278 278 rcu_read_unlock(); 279 279 280 280 if (one_sk != NULL) 281 - err = pfkey_broadcast_one(skb, &skb2, GFP_KERNEL, one_sk); 281 + err = pfkey_broadcast_one(skb, &skb2, allocation, one_sk); 282 282 283 283 kfree_skb(skb2); 284 284 kfree_skb(skb); ··· 311 311 hdr = (struct sadb_msg *) pfk->dump.skb->data; 312 312 hdr->sadb_msg_seq = 0; 313 313 hdr->sadb_msg_errno = rc; 314 - pfkey_broadcast(pfk->dump.skb, BROADCAST_ONE, 314 + pfkey_broadcast(pfk->dump.skb, GFP_ATOMIC, BROADCAST_ONE, 315 315 &pfk->sk, sock_net(&pfk->sk)); 316 316 pfk->dump.skb = NULL; 317 317 } ··· 355 355 hdr->sadb_msg_len = (sizeof(struct sadb_msg) / 356 356 sizeof(uint64_t)); 357 357 358 - pfkey_broadcast(skb, BROADCAST_ONE, sk, sock_net(sk)); 358 + pfkey_broadcast(skb, GFP_KERNEL, BROADCAST_ONE, sk, sock_net(sk)); 359 359 360 360 return 0; 361 361 } ··· 1389 1389 1390 1390 xfrm_state_put(x); 1391 1391 1392 - pfkey_broadcast(resp_skb, BROADCAST_ONE, sk, net); 1392 + pfkey_broadcast(resp_skb, GFP_KERNEL, BROADCAST_ONE, sk, net); 1393 1393 1394 1394 return 0; 1395 1395 } ··· 1476 1476 hdr->sadb_msg_seq = c->seq; 1477 1477 hdr->sadb_msg_pid = c->portid; 1478 1478 1479 - pfkey_broadcast(skb, BROADCAST_ALL, NULL, xs_net(x)); 1479 + pfkey_broadcast(skb, GFP_ATOMIC, BROADCAST_ALL, NULL, xs_net(x)); 1480 1480 1481 1481 return 0; 1482 1482 } ··· 1589 1589 out_hdr->sadb_msg_reserved = 0; 1590 1590 out_hdr->sadb_msg_seq = hdr->sadb_msg_seq; 1591 1591 out_hdr->sadb_msg_pid = hdr->sadb_msg_pid; 1592 - pfkey_broadcast(out_skb, BROADCAST_ONE, sk, sock_net(sk)); 1592 + pfkey_broadcast(out_skb, GFP_ATOMIC, BROADCAST_ONE, sk, sock_net(sk)); 1593 1593 1594 1594 return 0; 1595 1595 } ··· 
1694 1694 return -ENOBUFS; 1695 1695 } 1696 1696 1697 - pfkey_broadcast(supp_skb, BROADCAST_REGISTERED, sk, sock_net(sk)); 1698 - 1697 + pfkey_broadcast(supp_skb, GFP_KERNEL, BROADCAST_REGISTERED, sk, 1698 + sock_net(sk)); 1699 1699 return 0; 1700 1700 } 1701 1701 ··· 1712 1712 hdr->sadb_msg_errno = (uint8_t) 0; 1713 1713 hdr->sadb_msg_len = (sizeof(struct sadb_msg) / sizeof(uint64_t)); 1714 1714 1715 - return pfkey_broadcast(skb, BROADCAST_ONE, sk, sock_net(sk)); 1715 + return pfkey_broadcast(skb, GFP_ATOMIC, BROADCAST_ONE, sk, 1716 + sock_net(sk)); 1716 1717 } 1717 1718 1718 1719 static int key_notify_sa_flush(const struct km_event *c) ··· 1734 1733 hdr->sadb_msg_len = (sizeof(struct sadb_msg) / sizeof(uint64_t)); 1735 1734 hdr->sadb_msg_reserved = 0; 1736 1735 1737 - pfkey_broadcast(skb, BROADCAST_ALL, NULL, c->net); 1736 + pfkey_broadcast(skb, GFP_ATOMIC, BROADCAST_ALL, NULL, c->net); 1738 1737 1739 1738 return 0; 1740 1739 } ··· 1791 1790 out_hdr->sadb_msg_pid = pfk->dump.msg_portid; 1792 1791 1793 1792 if (pfk->dump.skb) 1794 - pfkey_broadcast(pfk->dump.skb, BROADCAST_ONE, 1793 + pfkey_broadcast(pfk->dump.skb, GFP_ATOMIC, BROADCAST_ONE, 1795 1794 &pfk->sk, sock_net(&pfk->sk)); 1796 1795 pfk->dump.skb = out_skb; 1797 1796 ··· 1879 1878 new_hdr->sadb_msg_errno = 0; 1880 1879 } 1881 1880 1882 - pfkey_broadcast(skb, BROADCAST_ALL, NULL, sock_net(sk)); 1881 + pfkey_broadcast(skb, GFP_KERNEL, BROADCAST_ALL, NULL, sock_net(sk)); 1883 1882 return 0; 1884 1883 } 1885 1884 ··· 2207 2206 out_hdr->sadb_msg_errno = 0; 2208 2207 out_hdr->sadb_msg_seq = c->seq; 2209 2208 out_hdr->sadb_msg_pid = c->portid; 2210 - pfkey_broadcast(out_skb, BROADCAST_ALL, NULL, xp_net(xp)); 2209 + pfkey_broadcast(out_skb, GFP_ATOMIC, BROADCAST_ALL, NULL, xp_net(xp)); 2211 2210 return 0; 2212 2211 2213 2212 } ··· 2427 2426 out_hdr->sadb_msg_errno = 0; 2428 2427 out_hdr->sadb_msg_seq = hdr->sadb_msg_seq; 2429 2428 out_hdr->sadb_msg_pid = hdr->sadb_msg_pid; 2430 - pfkey_broadcast(out_skb, 
BROADCAST_ONE, sk, xp_net(xp)); 2429 + pfkey_broadcast(out_skb, GFP_ATOMIC, BROADCAST_ONE, sk, xp_net(xp)); 2431 2430 err = 0; 2432 2431 2433 2432 out: ··· 2683 2682 out_hdr->sadb_msg_pid = pfk->dump.msg_portid; 2684 2683 2685 2684 if (pfk->dump.skb) 2686 - pfkey_broadcast(pfk->dump.skb, BROADCAST_ONE, 2685 + pfkey_broadcast(pfk->dump.skb, GFP_ATOMIC, BROADCAST_ONE, 2687 2686 &pfk->sk, sock_net(&pfk->sk)); 2688 2687 pfk->dump.skb = out_skb; 2689 2688 ··· 2740 2739 hdr->sadb_msg_satype = SADB_SATYPE_UNSPEC; 2741 2740 hdr->sadb_msg_len = (sizeof(struct sadb_msg) / sizeof(uint64_t)); 2742 2741 hdr->sadb_msg_reserved = 0; 2743 - pfkey_broadcast(skb_out, BROADCAST_ALL, NULL, c->net); 2742 + pfkey_broadcast(skb_out, GFP_ATOMIC, BROADCAST_ALL, NULL, c->net); 2744 2743 return 0; 2745 2744 2746 2745 } ··· 2804 2803 void *ext_hdrs[SADB_EXT_MAX]; 2805 2804 int err; 2806 2805 2807 - pfkey_broadcast(skb_clone(skb, GFP_KERNEL), 2806 + pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL, 2808 2807 BROADCAST_PROMISC_ONLY, NULL, sock_net(sk)); 2809 2808 2810 2809 memset(ext_hdrs, 0, sizeof(ext_hdrs)); ··· 3025 3024 out_hdr->sadb_msg_seq = 0; 3026 3025 out_hdr->sadb_msg_pid = 0; 3027 3026 3028 - pfkey_broadcast(out_skb, BROADCAST_REGISTERED, NULL, xs_net(x)); 3027 + pfkey_broadcast(out_skb, GFP_ATOMIC, BROADCAST_REGISTERED, NULL, 3028 + xs_net(x)); 3029 3029 return 0; 3030 3030 } 3031 3031 ··· 3214 3212 xfrm_ctx->ctx_len); 3215 3213 } 3216 3214 3217 - return pfkey_broadcast(skb, BROADCAST_REGISTERED, NULL, xs_net(x)); 3215 + return pfkey_broadcast(skb, GFP_ATOMIC, BROADCAST_REGISTERED, NULL, 3216 + xs_net(x)); 3218 3217 } 3219 3218 3220 3219 static struct xfrm_policy *pfkey_compile_policy(struct sock *sk, int opt, ··· 3411 3408 n_port->sadb_x_nat_t_port_port = sport; 3412 3409 n_port->sadb_x_nat_t_port_reserved = 0; 3413 3410 3414 - return pfkey_broadcast(skb, BROADCAST_REGISTERED, NULL, xs_net(x)); 3411 + return pfkey_broadcast(skb, GFP_ATOMIC, BROADCAST_REGISTERED, NULL, 3412 + 
xs_net(x)); 3415 3413 } 3416 3414 3417 3415 #ifdef CONFIG_NET_KEY_MIGRATE ··· 3603 3599 } 3604 3600 3605 3601 /* broadcast migrate message to sockets */ 3606 - pfkey_broadcast(skb, BROADCAST_ALL, NULL, &init_net); 3602 + pfkey_broadcast(skb, GFP_ATOMIC, BROADCAST_ALL, NULL, &init_net); 3607 3603 3608 3604 return 0; 3609 3605
+21 -1
net/mac80211/agg-rx.c
··· 7 7 * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz> 8 8 * Copyright 2007, Michael Wu <flamingice@sourmilk.net> 9 9 * Copyright 2007-2010, Intel Corporation 10 - * Copyright(c) 2015 Intel Deutschland GmbH 10 + * Copyright(c) 2015-2017 Intel Deutschland GmbH 11 11 * 12 12 * This program is free software; you can redistribute it and/or modify 13 13 * it under the terms of the GNU General Public License version 2 as ··· 466 466 rcu_read_unlock(); 467 467 } 468 468 EXPORT_SYMBOL(ieee80211_manage_rx_ba_offl); 469 + 470 + void ieee80211_rx_ba_timer_expired(struct ieee80211_vif *vif, 471 + const u8 *addr, unsigned int tid) 472 + { 473 + struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif); 474 + struct ieee80211_local *local = sdata->local; 475 + struct sta_info *sta; 476 + 477 + rcu_read_lock(); 478 + sta = sta_info_get_bss(sdata, addr); 479 + if (!sta) 480 + goto unlock; 481 + 482 + set_bit(tid, sta->ampdu_mlme.tid_rx_timer_expired); 483 + ieee80211_queue_work(&local->hw, &sta->ampdu_mlme.work); 484 + 485 + unlock: 486 + rcu_read_unlock(); 487 + } 488 + EXPORT_SYMBOL(ieee80211_rx_ba_timer_expired);
+1
net/openvswitch/actions.c
··· 1337 1337 goto out; 1338 1338 } 1339 1339 1340 + OVS_CB(skb)->acts_origlen = acts->orig_len; 1340 1341 err = do_execute_actions(dp, skb, key, 1341 1342 acts->actions, acts->actions_len); 1342 1343
+4 -3
net/openvswitch/datapath.c
··· 381 381 } 382 382 383 383 static size_t upcall_msg_size(const struct dp_upcall_info *upcall_info, 384 - unsigned int hdrlen) 384 + unsigned int hdrlen, int actions_attrlen) 385 385 { 386 386 size_t size = NLMSG_ALIGN(sizeof(struct ovs_header)) 387 387 + nla_total_size(hdrlen) /* OVS_PACKET_ATTR_PACKET */ ··· 398 398 399 399 /* OVS_PACKET_ATTR_ACTIONS */ 400 400 if (upcall_info->actions_len) 401 - size += nla_total_size(upcall_info->actions_len); 401 + size += nla_total_size(actions_attrlen); 402 402 403 403 /* OVS_PACKET_ATTR_MRU */ 404 404 if (upcall_info->mru) ··· 465 465 else 466 466 hlen = skb->len; 467 467 468 - len = upcall_msg_size(upcall_info, hlen - cutlen); 468 + len = upcall_msg_size(upcall_info, hlen - cutlen, 469 + OVS_CB(skb)->acts_origlen); 469 470 user_skb = genlmsg_new(len, GFP_ATOMIC); 470 471 if (!user_skb) { 471 472 err = -ENOMEM;
+2
net/openvswitch/datapath.h
··· 99 99 * when a packet is received by OVS. 100 100 * @mru: The maximum received fragement size; 0 if the packet is not 101 101 * fragmented. 102 + * @acts_origlen: The netlink size of the flow actions applied to this skb. 102 103 * @cutlen: The number of bytes from the packet end to be removed. 103 104 */ 104 105 struct ovs_skb_cb { 105 106 struct vport *input_vport; 106 107 u16 mru; 108 + u16 acts_origlen; 107 109 u32 cutlen; 108 110 }; 109 111 #define OVS_CB(skb) ((struct ovs_skb_cb *)(skb)->cb)
+1
net/rxrpc/call_accept.c
··· 223 223 tail = b->call_backlog_tail; 224 224 while (CIRC_CNT(head, tail, size) > 0) { 225 225 struct rxrpc_call *call = b->call_backlog[tail]; 226 + call->socket = rx; 226 227 if (rx->discard_new_call) { 227 228 _debug("discard %lx", call->user_call_ID); 228 229 rx->discard_new_call(call, call->user_call_ID);
+2
net/sched/act_ipt.c
··· 41 41 { 42 42 struct xt_tgchk_param par; 43 43 struct xt_target *target; 44 + struct ipt_entry e = {}; 44 45 int ret = 0; 45 46 46 47 target = xt_request_find_target(AF_INET, t->u.user.name, ··· 53 52 memset(&par, 0, sizeof(par)); 54 53 par.net = net; 55 54 par.table = table; 55 + par.entryinfo = &e; 56 56 par.target = target; 57 57 par.targinfo = t->data; 58 58 par.hook_mask = hook;
+1 -1
net/sched/cls_api.c
··· 205 205 { 206 206 struct tcf_proto *tp; 207 207 208 - if (*chain->p_filter_chain) 208 + if (chain->p_filter_chain) 209 209 RCU_INIT_POINTER(*chain->p_filter_chain, NULL); 210 210 while ((tp = rtnl_dereference(chain->filter_chain)) != NULL) { 211 211 RCU_INIT_POINTER(chain->filter_chain, tp->next);
-3
net/sched/sch_api.c
··· 286 286 void qdisc_hash_add(struct Qdisc *q, bool invisible) 287 287 { 288 288 if ((q->parent != TC_H_ROOT) && !(q->flags & TCQ_F_INGRESS)) { 289 - struct Qdisc *root = qdisc_dev(q)->qdisc; 290 - 291 - WARN_ON_ONCE(root == &noop_qdisc); 292 289 ASSERT_RTNL(); 293 290 hash_add_rcu(qdisc_dev(q)->qdisc_hash, &q->hash, q->handle); 294 291 if (invisible)
+3 -1
net/sched/sch_atm.c
··· 572 572 struct atm_flow_data *flow, *tmp; 573 573 574 574 pr_debug("atm_tc_destroy(sch %p,[qdisc %p])\n", sch, p); 575 - list_for_each_entry(flow, &p->flows, list) 575 + list_for_each_entry(flow, &p->flows, list) { 576 576 tcf_block_put(flow->block); 577 + flow->block = NULL; 578 + } 577 579 578 580 list_for_each_entry_safe(flow, tmp, &p->flows, list) { 579 581 if (flow->ref > 1)
+3 -1
net/sched/sch_cbq.c
··· 1431 1431 * be bound to classes which have been destroyed already. --TGR '04 1432 1432 */ 1433 1433 for (h = 0; h < q->clhash.hashsize; h++) { 1434 - hlist_for_each_entry(cl, &q->clhash.hash[h], common.hnode) 1434 + hlist_for_each_entry(cl, &q->clhash.hash[h], common.hnode) { 1435 1435 tcf_block_put(cl->block); 1436 + cl->block = NULL; 1437 + } 1436 1438 } 1437 1439 for (h = 0; h < q->clhash.hashsize; h++) { 1438 1440 hlist_for_each_entry_safe(cl, next, &q->clhash.hash[h],
+11 -1
net/sched/sch_hfsc.c
··· 1428 1428 return err; 1429 1429 q->eligible = RB_ROOT; 1430 1430 1431 + err = tcf_block_get(&q->root.block, &q->root.filter_list); 1432 + if (err) 1433 + goto err_tcf; 1434 + 1431 1435 q->root.cl_common.classid = sch->handle; 1432 1436 q->root.refcnt = 1; 1433 1437 q->root.sched = q; ··· 1451 1447 qdisc_watchdog_init(&q->watchdog, sch); 1452 1448 1453 1449 return 0; 1450 + 1451 + err_tcf: 1452 + qdisc_class_hash_destroy(&q->clhash); 1453 + return err; 1454 1454 } 1455 1455 1456 1456 static int ··· 1530 1522 unsigned int i; 1531 1523 1532 1524 for (i = 0; i < q->clhash.hashsize; i++) { 1533 - hlist_for_each_entry(cl, &q->clhash.hash[i], cl_common.hnode) 1525 + hlist_for_each_entry(cl, &q->clhash.hash[i], cl_common.hnode) { 1534 1526 tcf_block_put(cl->block); 1527 + cl->block = NULL; 1528 + } 1535 1529 } 1536 1530 for (i = 0; i < q->clhash.hashsize; i++) { 1537 1531 hlist_for_each_entry_safe(cl, next, &q->clhash.hash[i],
+3 -1
net/sched/sch_htb.c
··· 1258 1258 tcf_block_put(q->block); 1259 1259 1260 1260 for (i = 0; i < q->clhash.hashsize; i++) { 1261 - hlist_for_each_entry(cl, &q->clhash.hash[i], common.hnode) 1261 + hlist_for_each_entry(cl, &q->clhash.hash[i], common.hnode) { 1262 1262 tcf_block_put(cl->block); 1263 + cl->block = NULL; 1264 + } 1263 1265 } 1264 1266 for (i = 0; i < q->clhash.hashsize; i++) { 1265 1267 hlist_for_each_entry_safe(cl, next, &q->clhash.hash[i],
+4 -1
net/sched/sch_sfq.c
··· 437 437 qdisc_drop(head, sch, to_free); 438 438 439 439 slot_queue_add(slot, skb); 440 + qdisc_tree_reduce_backlog(sch, 0, delta); 440 441 return NET_XMIT_CN; 441 442 } 442 443 ··· 469 468 /* Return Congestion Notification only if we dropped a packet 470 469 * from this flow. 471 470 */ 472 - if (qlen != slot->qlen) 471 + if (qlen != slot->qlen) { 472 + qdisc_tree_reduce_backlog(sch, 0, dropped - qdisc_pkt_len(skb)); 473 473 return NET_XMIT_CN; 474 + } 474 475 475 476 /* As we dropped a packet, better let upper stack know this */ 476 477 qdisc_tree_reduce_backlog(sch, 1, dropped);
+2
net/sctp/ipv6.c
··· 512 512 { 513 513 addr->sa.sa_family = AF_INET6; 514 514 addr->v6.sin6_port = port; 515 + addr->v6.sin6_flowinfo = 0; 515 516 addr->v6.sin6_addr = *saddr; 517 + addr->v6.sin6_scope_id = 0; 516 518 } 517 519 518 520 /* Compare addresses exactly.
+20 -2
net/sunrpc/svcsock.c
··· 421 421 dprintk("svc: socket %p(inet %p), busy=%d\n", 422 422 svsk, sk, 423 423 test_bit(XPT_BUSY, &svsk->sk_xprt.xpt_flags)); 424 + 425 + /* Refer to svc_setup_socket() for details. */ 426 + rmb(); 424 427 svsk->sk_odata(sk); 425 428 if (!test_and_set_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags)) 426 429 svc_xprt_enqueue(&svsk->sk_xprt); ··· 440 437 if (svsk) { 441 438 dprintk("svc: socket %p(inet %p), write_space busy=%d\n", 442 439 svsk, sk, test_bit(XPT_BUSY, &svsk->sk_xprt.xpt_flags)); 440 + 441 + /* Refer to svc_setup_socket() for details. */ 442 + rmb(); 443 443 svsk->sk_owspace(sk); 444 444 svc_xprt_enqueue(&svsk->sk_xprt); 445 445 } ··· 766 760 dprintk("svc: socket %p TCP (listen) state change %d\n", 767 761 sk, sk->sk_state); 768 762 769 - if (svsk) 763 + if (svsk) { 764 + /* Refer to svc_setup_socket() for details. */ 765 + rmb(); 770 766 svsk->sk_odata(sk); 767 + } 768 + 771 769 /* 772 770 * This callback may called twice when a new connection 773 771 * is established as a child socket inherits everything ··· 804 794 if (!svsk) 805 795 printk("svc: socket %p: no user data\n", sk); 806 796 else { 797 + /* Refer to svc_setup_socket() for details. */ 798 + rmb(); 807 799 svsk->sk_ostate(sk); 808 800 if (sk->sk_state != TCP_ESTABLISHED) { 809 801 set_bit(XPT_CLOSE, &svsk->sk_xprt.xpt_flags); ··· 1393 1381 return ERR_PTR(err); 1394 1382 } 1395 1383 1396 - inet->sk_user_data = svsk; 1397 1384 svsk->sk_sock = sock; 1398 1385 svsk->sk_sk = inet; 1399 1386 svsk->sk_ostate = inet->sk_state_change; 1400 1387 svsk->sk_odata = inet->sk_data_ready; 1401 1388 svsk->sk_owspace = inet->sk_write_space; 1389 + /* 1390 + * This barrier is necessary in order to prevent race condition 1391 + * with svc_data_ready(), svc_listen_data_ready() and others 1392 + * when calling callbacks above. 1393 + */ 1394 + wmb(); 1395 + inet->sk_user_data = svsk; 1402 1396 1403 1397 /* Initialize the socket */ 1404 1398 if (sock->type == SOCK_DGRAM)
+1 -1
net/tipc/bearer.c
··· 596 596 rcu_read_lock(); 597 597 b = rcu_dereference_rtnl(dev->tipc_ptr); 598 598 if (likely(b && test_bit(0, &b->up) && 599 - (skb->pkt_type <= PACKET_BROADCAST))) { 599 + (skb->pkt_type <= PACKET_MULTICAST))) { 600 600 skb->next = NULL; 601 601 tipc_rcv(dev_net(dev), skb, b); 602 602 rcu_read_unlock();
+1
net/tipc/msg.c
··· 513 513 514 514 /* Now reverse the concerned fields */ 515 515 msg_set_errcode(hdr, err); 516 + msg_set_non_seq(hdr, 0); 516 517 msg_set_origport(hdr, msg_destport(&ohdr)); 517 518 msg_set_destport(hdr, msg_origport(&ohdr)); 518 519 msg_set_destnode(hdr, msg_prevnode(&ohdr));
+1 -4
net/unix/af_unix.c
··· 2304 2304 */ 2305 2305 mutex_lock(&u->iolock); 2306 2306 2307 - if (flags & MSG_PEEK) 2308 - skip = sk_peek_offset(sk, flags); 2309 - else 2310 - skip = 0; 2307 + skip = max(sk_peek_offset(sk, flags), 0); 2311 2308 2312 2309 do { 2313 2310 int chunk;
+3 -4
scripts/Kbuild.include
··· 85 85 86 86 # try-run 87 87 # Usage: option = $(call try-run, $(CC)...-o "$$TMP",option-ok,otherwise) 88 - # Exit code chooses option. "$$TMP" is can be used as temporary file and 89 - # is automatically cleaned up. 88 + # Exit code chooses option. "$$TMP" serves as a temporary file and is 89 + # automatically cleaned up. 90 90 try-run = $(shell set -e; \ 91 91 TMP="$(TMPOUT).$$$$.tmp"; \ 92 92 TMPO="$(TMPOUT).$$$$.o"; \ ··· 261 261 any-prereq = $(filter-out $(PHONY),$?) $(filter-out $(PHONY) $(wildcard $^),$^) 262 262 263 263 # Execute command if command has changed or prerequisite(s) are updated. 264 - # 265 264 if_changed = $(if $(strip $(any-prereq) $(arg-check)), \ 266 265 @set -e; \ 267 266 $(echo-cmd) $(cmd_$(1)); \ ··· 314 315 $(rule_$(1)), @:) 315 316 316 317 ### 317 - # why - tell why a a target got build 318 + # why - tell why a target got built 318 319 # enabled by make V=2 319 320 # Output (listed in the order they are checked): 320 321 # (1) - due to target is PHONY
+2 -2
scripts/Makefile.asm-generic
··· 1 1 # include/asm-generic contains a lot of files that are used 2 2 # verbatim by several architectures. 3 3 # 4 - # This Makefile reads the file arch/$(SRCARCH)/include/asm/Kbuild 4 + # This Makefile reads the file arch/$(SRCARCH)/include/$(src)/Kbuild 5 5 # and for each file listed in this file with generic-y creates 6 - # a small wrapper file in $(obj) (arch/$(SRCARCH)/include/generated/asm) 6 + # a small wrapper file in $(obj) (arch/$(SRCARCH)/include/generated/$(src)) 7 7 8 8 kbuild-file := $(srctree)/arch/$(SRCARCH)/include/$(src)/Kbuild 9 9 -include $(kbuild-file)
+4 -4
scripts/Makefile.build
··· 229 229 endif 230 230 # Due to recursion, we must skip empty.o. 231 231 # The empty.o file is created in the make process in order to determine 232 - # the target endianness and word size. It is made before all other C 233 - # files, including recordmcount. 232 + # the target endianness and word size. It is made before all other C 233 + # files, including recordmcount. 234 234 sub_cmd_record_mcount = \ 235 235 if [ $(@) != "scripts/mod/empty.o" ]; then \ 236 236 $(objtree)/scripts/recordmcount $(RECORDMCOUNT_FLAGS) "$(@)"; \ ··· 245 245 "$(LD)" "$(NM)" "$(RM)" "$(MV)" \ 246 246 "$(if $(part-of-module),1,0)" "$(@)"; 247 247 recordmcount_source := $(srctree)/scripts/recordmcount.pl 248 - endif 248 + endif # BUILD_C_RECORDMCOUNT 249 249 cmd_record_mcount = \ 250 250 if [ "$(findstring $(CC_FLAGS_FTRACE),$(_c_flags))" = \ 251 251 "$(CC_FLAGS_FTRACE)" ]; then \ 252 252 $(sub_cmd_record_mcount) \ 253 253 fi; 254 - endif 254 + endif # CONFIG_FTRACE_MCOUNT_RECORD 255 255 256 256 ifdef CONFIG_STACK_VALIDATION 257 257 ifneq ($(SKIP_STACK_VALIDATION),1)
+2 -2
scripts/Makefile.dtbinst
··· 14 14 PHONY := __dtbs_install 15 15 __dtbs_install: 16 16 17 - export dtbinst-root ?= $(obj) 17 + export dtbinst_root ?= $(obj) 18 18 19 19 include include/config/auto.conf 20 20 include scripts/Kbuild.include ··· 27 27 quiet_cmd_dtb_install = INSTALL $< 28 28 cmd_dtb_install = mkdir -p $(2); cp $< $(2) 29 29 30 - install-dir = $(patsubst $(dtbinst-root)%,$(INSTALL_DTBS_PATH)%,$(obj)) 30 + install-dir = $(patsubst $(dtbinst_root)%,$(INSTALL_DTBS_PATH)%,$(obj)) 31 31 32 32 $(dtbinst-files): %.dtb: $(obj)/%.dtb 33 33 $(call cmd,dtb_install,$(install-dir))
+1 -1
scripts/basic/Makefile
··· 1 1 ### 2 - # Makefile.basic lists the most basic programs used during the build process. 2 + # This Makefile lists the most basic programs used during the build process. 3 3 # The programs listed herein are what are needed to do the basic stuff, 4 4 # such as fix file dependencies. 5 5 # This initial step is needed to avoid files to be recompiled
+3 -3
scripts/basic/fixdep.c
··· 25 25 * 26 26 * So we play the same trick that "mkdep" played before. We replace 27 27 * the dependency on autoconf.h by a dependency on every config 28 - * option which is mentioned in any of the listed prequisites. 28 + * option which is mentioned in any of the listed prerequisites. 29 29 * 30 30 * kconfig populates a tree in include/config/ with an empty file 31 31 * for each config symbol and when the configuration is updated ··· 34 34 * the config symbols are rebuilt. 35 35 * 36 36 * So if the user changes his CONFIG_HIS_DRIVER option, only the objects 37 - * which depend on "include/linux/config/his/driver.h" will be rebuilt, 37 + * which depend on "include/config/his/driver.h" will be rebuilt, 38 38 * so most likely only his driver ;-) 39 39 * 40 40 * The idea above dates, by the way, back to Michael E Chastain, AFAIK. ··· 75 75 * and then basically copies the .<target>.d file to stdout, in the 76 76 * process filtering out the dependency on autoconf.h and adding 77 77 * dependencies on include/config/my/option.h for every 78 - * CONFIG_MY_OPTION encountered in any of the prequisites. 78 + * CONFIG_MY_OPTION encountered in any of the prerequisites. 79 79 * 80 80 * It will also filter out all the dependencies on *.ver. We need 81 81 * to make sure that the generated version checksum are globally up
+1 -1
sound/core/control.c
··· 1137 1137 mutex_lock(&ue->card->user_ctl_lock); 1138 1138 change = ue->tlv_data_size != size; 1139 1139 if (!change) 1140 - change = memcmp(ue->tlv_data, new_data, size); 1140 + change = memcmp(ue->tlv_data, new_data, size) != 0; 1141 1141 kfree(ue->tlv_data); 1142 1142 ue->tlv_data = new_data; 1143 1143 ue->tlv_data_size = size;
+2 -2
sound/core/seq/Kconfig
··· 47 47 timer. 48 48 49 49 config SND_SEQ_MIDI_EVENT 50 - def_tristate SND_RAWMIDI 50 + tristate 51 51 52 52 config SND_SEQ_MIDI 53 - tristate 53 + def_tristate SND_RAWMIDI 54 54 select SND_SEQ_MIDI_EVENT 55 55 56 56 config SND_SEQ_MIDI_EMUL
+4 -9
sound/core/seq/seq_clientmgr.c
··· 1502 1502 static int snd_seq_ioctl_create_queue(struct snd_seq_client *client, void *arg) 1503 1503 { 1504 1504 struct snd_seq_queue_info *info = arg; 1505 - int result; 1506 1505 struct snd_seq_queue *q; 1507 1506 1508 - result = snd_seq_queue_alloc(client->number, info->locked, info->flags); 1509 - if (result < 0) 1510 - return result; 1511 - 1512 - q = queueptr(result); 1513 - if (q == NULL) 1514 - return -EINVAL; 1507 + q = snd_seq_queue_alloc(client->number, info->locked, info->flags); 1508 + if (IS_ERR(q)) 1509 + return PTR_ERR(q); 1515 1510 1516 1511 info->queue = q->queue; 1517 1512 info->locked = q->locked; ··· 1516 1521 if (!info->name[0]) 1517 1522 snprintf(info->name, sizeof(info->name), "Queue-%d", q->queue); 1518 1523 strlcpy(q->name, info->name, sizeof(q->name)); 1519 - queuefree(q); 1524 + snd_use_lock_free(&q->use_lock); 1520 1525 1521 1526 return 0; 1522 1527 }
+9 -5
sound/core/seq/seq_queue.c
··· 184 184 static void queue_use(struct snd_seq_queue *queue, int client, int use); 185 185 186 186 /* allocate a new queue - 187 - * return queue index value or negative value for error 187 + * return pointer to new queue or ERR_PTR(-errno) for error 188 + * The new queue's use_lock is set to 1. It is the caller's responsibility to 189 + * call snd_use_lock_free(&q->use_lock). 188 190 */ 189 - int snd_seq_queue_alloc(int client, int locked, unsigned int info_flags) 191 + struct snd_seq_queue *snd_seq_queue_alloc(int client, int locked, unsigned int info_flags) 190 192 { 191 193 struct snd_seq_queue *q; 192 194 193 195 q = queue_new(client, locked); 194 196 if (q == NULL) 195 - return -ENOMEM; 197 + return ERR_PTR(-ENOMEM); 196 198 q->info_flags = info_flags; 197 199 queue_use(q, client, 1); 200 + snd_use_lock_use(&q->use_lock); 198 201 if (queue_list_add(q) < 0) { 202 + snd_use_lock_free(&q->use_lock); 199 203 queue_delete(q); 200 - return -ENOMEM; 204 + return ERR_PTR(-ENOMEM); 201 205 } 202 - return q->queue; 206 + return q; 203 207 } 204 208 205 209 /* delete a queue - queue must be owned by the client */
+1 -1
sound/core/seq/seq_queue.h
··· 71 71 72 72 73 73 /* create new queue (constructor) */ 74 - int snd_seq_queue_alloc(int client, int locked, unsigned int flags); 74 + struct snd_seq_queue *snd_seq_queue_alloc(int client, int locked, unsigned int flags); 75 75 76 76 /* delete queue (destructor) */ 77 77 int snd_seq_queue_delete(int client, int queueid);
+6 -1
sound/firewire/iso-resources.c
··· 210 210 */ 211 211 void fw_iso_resources_free(struct fw_iso_resources *r) 212 212 { 213 - struct fw_card *card = fw_parent_device(r->unit)->card; 213 + struct fw_card *card; 214 214 int bandwidth, channel; 215 + 216 + /* Not initialized. */ 217 + if (r->unit == NULL) 218 + return; 219 + card = fw_parent_device(r->unit)->card; 215 220 216 221 mutex_lock(&r->mutex); 217 222
+1
sound/firewire/motu/motu.c
··· 128 128 return; 129 129 error: 130 130 snd_motu_transaction_unregister(motu); 131 + snd_motu_stream_destroy_duplex(motu); 131 132 snd_card_free(motu->card); 132 133 dev_info(&motu->unit->device, 133 134 "Sound card registration failed: %d\n", err);
+11 -3
sound/pci/emu10k1/emufx.c
··· 698 698 { 699 699 struct snd_emu10k1_fx8010_control_old_gpr __user *octl; 700 700 701 - if (emu->support_tlv) 702 - return copy_from_user(gctl, &_gctl[idx], sizeof(*gctl)); 701 + if (emu->support_tlv) { 702 + if (in_kernel) 703 + memcpy(gctl, (void *)&_gctl[idx], sizeof(*gctl)); 704 + else if (copy_from_user(gctl, &_gctl[idx], sizeof(*gctl))) 705 + return -EFAULT; 706 + return 0; 707 + } 708 + 703 709 octl = (struct snd_emu10k1_fx8010_control_old_gpr __user *)_gctl; 704 - if (copy_from_user(gctl, &octl[idx], sizeof(*octl))) 710 + if (in_kernel) 711 + memcpy(gctl, (void *)&octl[idx], sizeof(*octl)); 712 + else if (copy_from_user(gctl, &octl[idx], sizeof(*octl))) 705 713 return -EFAULT; 706 714 gctl->tlv = NULL; 707 715 return 0;
+1
sound/pci/hda/patch_conexant.c
··· 947 947 SND_PCI_QUIRK(0x17aa, 0x390b, "Lenovo G50-80", CXT_FIXUP_STEREO_DMIC), 948 948 SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC), 949 949 SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC), 950 + SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo G50-70", CXT_FIXUP_STEREO_DMIC), 950 951 SND_PCI_QUIRK(0x17aa, 0x397b, "Lenovo S205", CXT_FIXUP_STEREO_DMIC), 951 952 SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", CXT_FIXUP_THINKPAD_ACPI), 952 953 SND_PCI_QUIRK(0x1c06, 0x2011, "Lemote A1004", CXT_PINCFG_LEMOTE_A1004),
-1
sound/pci/hda/patch_realtek.c
··· 6647 6647 SND_HDA_PIN_QUIRK(0x10ec0299, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE, 6648 6648 ALC225_STANDARD_PINS, 6649 6649 {0x12, 0xb7a60130}, 6650 - {0x13, 0xb8a61140}, 6651 6650 {0x17, 0x90170110}), 6652 6651 {} 6653 6652 };
+1
sound/soc/codecs/rt5677.c
··· 5021 5021 static const struct i2c_device_id rt5677_i2c_id[] = { 5022 5022 { "rt5677", RT5677 }, 5023 5023 { "rt5676", RT5676 }, 5024 + { "RT5677CE:00", RT5677 }, 5024 5025 { } 5025 5026 }; 5026 5027 MODULE_DEVICE_TABLE(i2c, rt5677_i2c_id);
+2
sound/usb/mixer.c
··· 542 542 543 543 if (size < sizeof(scale)) 544 544 return -ENOMEM; 545 + if (cval->min_mute) 546 + scale[0] = SNDRV_CTL_TLVT_DB_MINMAX_MUTE; 545 547 scale[2] = cval->dBmin; 546 548 scale[3] = cval->dBmax; 547 549 if (copy_to_user(_tlv, scale, sizeof(scale)))
+1
sound/usb/mixer.h
··· 64 64 int cached; 65 65 int cache_val[MAX_CHANNELS]; 66 66 u8 initialized; 67 + u8 min_mute; 67 68 void *private_data; 68 69 }; 69 70
+6
sound/usb/mixer_quirks.c
··· 1878 1878 if (unitid == 7 && cval->control == UAC_FU_VOLUME) 1879 1879 snd_dragonfly_quirk_db_scale(mixer, cval, kctl); 1880 1880 break; 1881 + /* lowest playback value is muted on C-Media devices */ 1882 + case USB_ID(0x0d8c, 0x000c): 1883 + case USB_ID(0x0d8c, 0x0014): 1884 + if (strstr(kctl->id.name, "Playback")) 1885 + cval->min_mute = 1; 1886 + break; 1881 1887 } 1882 1888 } 1883 1889
+11 -3
sound/usb/quirks.c
··· 1142 1142 case USB_ID(0x0556, 0x0014): /* Phoenix Audio TMX320VC */ 1143 1143 case USB_ID(0x05A3, 0x9420): /* ELP HD USB Camera */ 1144 1144 case USB_ID(0x074D, 0x3553): /* Outlaw RR2150 (Micronas UAC3553B) */ 1145 + case USB_ID(0x1395, 0x740a): /* Sennheiser DECT */ 1145 1146 case USB_ID(0x1901, 0x0191): /* GE B850V3 CP2114 audio interface */ 1146 1147 case USB_ID(0x1de7, 0x0013): /* Phoenix Audio MT202exe */ 1147 1148 case USB_ID(0x1de7, 0x0014): /* Phoenix Audio TMX320 */ ··· 1309 1308 && (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS) 1310 1309 mdelay(20); 1311 1310 1312 - /* Zoom R16/24 needs a tiny delay here, otherwise requests like 1313 - * get/set frequency return as failed despite actually succeeding. 1311 + /* Zoom R16/24, Logitech H650e and Jabra 550a need a tiny delay here, 1312 + * otherwise requests like get/set frequency return as failed despite 1313 + * actually succeeding. 1314 1314 */ 1315 - if (chip->usb_id == USB_ID(0x1686, 0x00dd) && 1315 + if ((chip->usb_id == USB_ID(0x1686, 0x00dd) || 1316 + chip->usb_id == USB_ID(0x046d, 0x0a46) || 1317 + chip->usb_id == USB_ID(0x0b0e, 0x0349)) && 1316 1318 (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS) 1317 1319 mdelay(1); 1318 1320 } ··· 1377 1373 break; 1378 1374 } 1379 1375 } 1376 + break; 1377 + case USB_ID(0x16d0, 0x0a23): 1378 + if (fp->altsetting == 2) 1379 + return SNDRV_PCM_FMTBIT_DSD_U32_BE; 1380 1380 break; 1381 1381 1382 1382 default:
+2 -1
tools/lib/bpf/libbpf.c
··· 879 879 size_t j; 880 880 int err = *pfd; 881 881 882 - pr_warning("failed to create map: %s\n", 882 + pr_warning("failed to create map (name: '%s'): %s\n", 883 + obj->maps[i].name, 883 884 strerror(errno)); 884 885 for (j = 0; j < i; j++) 885 886 zclose(obj->maps[j].fd);
+25 -1
tools/objtool/arch/x86/decode.c
··· 271 271 case 0x8d: 272 272 if (rex == 0x48 && modrm == 0x65) { 273 273 274 - /* lea -disp(%rbp), %rsp */ 274 + /* lea disp(%rbp), %rsp */ 275 275 *type = INSN_STACK; 276 276 op->src.type = OP_SRC_ADD; 277 277 op->src.reg = CFI_BP; 278 278 op->src.offset = insn.displacement.value; 279 279 op->dest.type = OP_DEST_REG; 280 280 op->dest.reg = CFI_SP; 281 + break; 282 + } 283 + 284 + if (rex == 0x48 && (modrm == 0xa4 || modrm == 0x64) && 285 + sib == 0x24) { 286 + 287 + /* lea disp(%rsp), %rsp */ 288 + *type = INSN_STACK; 289 + op->src.type = OP_SRC_ADD; 290 + op->src.reg = CFI_SP; 291 + op->src.offset = insn.displacement.value; 292 + op->dest.type = OP_DEST_REG; 293 + op->dest.reg = CFI_SP; 294 + break; 295 + } 296 + 297 + if (rex == 0x48 && modrm == 0x2c && sib == 0x24) { 298 + 299 + /* lea (%rsp), %rbp */ 300 + *type = INSN_STACK; 301 + op->src.type = OP_SRC_REG; 302 + op->src.reg = CFI_SP; 303 + op->dest.type = OP_DEST_REG; 304 + op->dest.reg = CFI_BP; 281 305 break; 282 306 } 283 307
+1 -1
tools/testing/selftests/futex/Makefile
··· 14 14 done 15 15 16 16 override define RUN_TESTS 17 - @if [ `dirname $(OUTPUT)` = $(PWD) ]; then ./run.sh; fi 17 + $(OUTPUT)/run.sh 18 18 endef 19 19 20 20 override define INSTALL_RULE
+2 -2
tools/testing/selftests/kmod/kmod.sh
··· 473 473 echo " all Runs all tests (default)" 474 474 echo " -t Run test ID the number amount of times is recommended" 475 475 echo " -w Watch test ID run until it runs into an error" 476 - echo " -c Run test ID once" 477 - echo " -s Run test ID x test-count number of times" 476 + echo " -s Run test ID once" 477 + echo " -c Run test ID x test-count number of times" 478 478 echo " -l List all test ID list" 479 479 echo " -h|--help Help" 480 480 echo
+4
tools/testing/selftests/ntb/ntb_test.sh
··· 333 333 link_test $LOCAL_TOOL $REMOTE_TOOL 334 334 link_test $REMOTE_TOOL $LOCAL_TOOL 335 335 336 + #Ensure the link is up on both sides before continuing 337 + write_file Y $LOCAL_TOOL/link_event 338 + write_file Y $REMOTE_TOOL/link_event 339 + 336 340 for PEER_TRANS in $(ls $LOCAL_TOOL/peer_trans*); do 337 341 PT=$(basename $PEER_TRANS) 338 342 write_file $MW_SIZE $LOCAL_TOOL/$PT
tools/testing/selftests/sysctl/sysctl.sh
+3 -4
tools/testing/selftests/timers/freq-step.c
··· 229 229 printf("CLOCK_MONOTONIC_RAW+CLOCK_MONOTONIC precision: %.0f ns\t\t", 230 230 1e9 * precision); 231 231 232 - if (precision > MAX_PRECISION) { 233 - printf("[SKIP]\n"); 234 - ksft_exit_skip(); 235 - } 232 + if (precision > MAX_PRECISION) 233 + ksft_exit_skip("precision: %.0f ns > MAX_PRECISION: %.0f ns\n", 234 + 1e9 * precision, 1e9 * MAX_PRECISION); 236 235 237 236 printf("[OK]\n"); 238 237 srand(ts.tv_sec ^ ts.tv_nsec);