Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Overlapping header include additions in macsec.c

A bug fix in 'net' overlapping with the removal of 'version'
string in ena_netdev.c

Overlapping test additions in selftests Makefile

Overlapping PCI ID table adjustments in iwlwifi driver.

Signed-off-by: David S. Miller <davem@davemloft.net>
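The conflicts listed above are all of the same shape: both trees added or changed lines at the same spot in the same file, so git cannot auto-merge. A minimal sketch of that situation (hypothetical throwaway repo and file name standing in for macsec.c; branch names are illustrative, not the real net/net-next trees):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email you@example.com
git config user.name you
main=$(git symbolic-ref --short HEAD)   # default branch name varies (master/main)

# Base file with a single include, standing in for drivers/net/macsec.c.
printf '#include <linux/if_ether.h>\n' > macsec.c
git add macsec.c
git commit -qm 'base'

# One side ("net") adds a header at the top of the file...
git checkout -qb net
sed -i '1i #include <linux/netdevice.h>' macsec.c
git commit -qam 'net: add include'

# ...while the other side adds a different header at the same location.
git checkout -q "$main"
sed -i '1i #include <linux/rtnetlink.h>' macsec.c
git commit -qam 'net-next: add include'

# Merging now stops with a conflict: overlapping additions, as in this merge.
git merge net || true
grep -n '^<<<<<<<' macsec.c   # conflict markers bracket both versions
```

Resolving such a conflict usually just means keeping both added lines and deleting the markers, which is what a merge commit like this one records.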

+4248 -1925
+1
.mailmap
··· 225 225 Praveen BP <praveenbp@ti.com> 226 226 Punit Agrawal <punitagrawal@gmail.com> <punit.agrawal@arm.com> 227 227 Qais Yousef <qsyousef@gmail.com> <qais.yousef@imgtec.com> 228 + Quentin Monnet <quentin@isovalent.com> <quentin.monnet@netronome.com> 228 229 Quentin Perret <qperret@qperret.net> <quentin.perret@arm.com> 229 230 Rafael J. Wysocki <rjw@rjwysocki.net> <rjw@sisk.pl> 230 231 Rajesh Shah <rajesh.shah@intel.com>
+2
Documentation/arm64/silicon-errata.rst
··· 110 110 +----------------+-----------------+-----------------+-----------------------------+ 111 111 | Cavium | ThunderX GICv3 | #23154 | CAVIUM_ERRATUM_23154 | 112 112 +----------------+-----------------+-----------------+-----------------------------+ 113 + | Cavium | ThunderX GICv3 | #38539 | N/A | 114 + +----------------+-----------------+-----------------+-----------------------------+ 113 115 | Cavium | ThunderX Core | #27456 | CAVIUM_ERRATUM_27456 | 114 116 +----------------+-----------------+-----------------+-----------------------------+ 115 117 | Cavium | ThunderX Core | #30115 | CAVIUM_ERRATUM_30115 |
+10 -2
Documentation/driver-api/dmaengine/provider.rst
··· 266 266 attached (via the dmaengine_desc_attach_metadata() helper to the descriptor. 267 267 268 268 From the DMA driver the following is expected for this mode: 269 + 269 270 - DMA_MEM_TO_DEV / DEV_MEM_TO_MEM 271 + 270 272 The data from the provided metadata buffer should be prepared for the DMA 271 273 controller to be sent alongside of the payload data. Either by copying to a 272 274 hardware descriptor, or highly coupled packet. 275 + 273 276 - DMA_DEV_TO_MEM 277 + 274 278 On transfer completion the DMA driver must copy the metadata to the client 275 279 provided metadata buffer before notifying the client about the completion. 276 280 After the transfer completion, DMA drivers must not touch the metadata ··· 288 284 and dmaengine_desc_set_metadata_len() is provided as helper functions. 289 285 290 286 From the DMA driver the following is expected for this mode: 291 - - get_metadata_ptr 287 + 288 + - get_metadata_ptr() 289 + 292 290 Should return a pointer for the metadata buffer, the maximum size of the 293 291 metadata buffer and the currently used / valid (if any) bytes in the buffer. 294 - - set_metadata_len 292 + 293 + - set_metadata_len() 294 + 295 295 It is called by the clients after it have placed the metadata to the buffer 296 296 to let the DMA driver know the number of valid bytes provided. 297 297
+13 -5
Documentation/filesystems/zonefs.txt
··· 258 258 | option | condition | size read write read write | 259 259 +--------------+-----------+-----------------------------------------+ 260 260 | | good | fixed yes no yes yes | 261 - | remount-ro | read-only | fixed yes no yes no | 261 + | remount-ro | read-only | as is yes no yes no | 262 262 | (default) | offline | 0 no no no no | 263 263 +--------------+-----------+-----------------------------------------+ 264 264 | | good | fixed yes no yes yes | 265 - | zone-ro | read-only | fixed yes no yes no | 265 + | zone-ro | read-only | as is yes no yes no | 266 266 | | offline | 0 no no no no | 267 267 +--------------+-----------+-----------------------------------------+ 268 268 | | good | 0 no no yes yes | ··· 270 270 | | offline | 0 no no no no | 271 271 +--------------+-----------+-----------------------------------------+ 272 272 | | good | fixed yes yes yes yes | 273 - | repair | read-only | fixed yes no yes no | 273 + | repair | read-only | as is yes no yes no | 274 274 | | offline | 0 no no no no | 275 275 +--------------+-----------+-----------------------------------------+ 276 276 ··· 307 307 * zone-offline 308 308 * repair 309 309 310 - The I/O error actions defined for each behavior are detailed in the previous 311 - section. 310 + The run-time I/O error actions defined for each behavior are detailed in the 311 + previous section. Mount time I/O errors will cause the mount operation to fail. 312 + The handling of read-only zones also differs between mount-time and run-time. 313 + If a read-only zone is found at mount time, the zone is always treated in the 314 + same manner as offline zones, that is, all accesses are disabled and the zone 315 + file size set to 0. This is necessary as the write pointer of read-only zones 316 + is defined as invalid by the ZBC and ZAC standards, making it impossible to 317 + discover the amount of data that has been written to the zone. 
In the case of a 318 + read-only zone discovered at run-time, as indicated in the previous section, 319 + the size of the zone file is left unchanged from its last updated value. 312 320 313 321 Zonefs User Space Tools 314 322 =======================
+1 -1
Documentation/kbuild/kbuild.rst
··· 237 237 KBUILD_EXTRA_SYMBOLS 238 238 -------------------- 239 239 For modules that use symbols from other modules. 240 - See more details in modules.txt. 240 + See more details in modules.rst. 241 241 242 242 ALLSOURCE_ARCHS 243 243 ---------------
+1 -1
Documentation/kbuild/kconfig-macro-language.rst
··· 44 44 def_bool y 45 45 46 46 Then, Kconfig moves onto the evaluation stage to resolve inter-symbol 47 - dependency as explained in kconfig-language.txt. 47 + dependency as explained in kconfig-language.rst. 48 48 49 49 50 50 Variables
+3 -3
Documentation/kbuild/makefiles.rst
··· 924 924 $(KBUILD_AFLAGS_MODULE) is used to add arch-specific options that 925 925 are used for assembler. 926 926 927 - From commandline AFLAGS_MODULE shall be used (see kbuild.txt). 927 + From commandline AFLAGS_MODULE shall be used (see kbuild.rst). 928 928 929 929 KBUILD_CFLAGS_KERNEL 930 930 $(CC) options specific for built-in ··· 937 937 938 938 $(KBUILD_CFLAGS_MODULE) is used to add arch-specific options that 939 939 are used for $(CC). 940 - From commandline CFLAGS_MODULE shall be used (see kbuild.txt). 940 + From commandline CFLAGS_MODULE shall be used (see kbuild.rst). 941 941 942 942 KBUILD_LDFLAGS_MODULE 943 943 Options for $(LD) when linking modules ··· 945 945 $(KBUILD_LDFLAGS_MODULE) is used to add arch-specific options 946 946 used when linking modules. This is often a linker script. 947 947 948 - From commandline LDFLAGS_MODULE shall be used (see kbuild.txt). 948 + From commandline LDFLAGS_MODULE shall be used (see kbuild.rst). 949 949 950 950 KBUILD_LDS 951 951
+2 -2
Documentation/kbuild/modules.rst
··· 470 470 471 471 The syntax of the Module.symvers file is:: 472 472 473 - <CRC> <Symbol> <Namespace> <Module> <Export Type> 473 + <CRC> <Symbol> <Module> <Export Type> <Namespace> 474 474 475 - 0xe1cc2a05 usb_stor_suspend USB_STORAGE drivers/usb/storage/usb-storage EXPORT_SYMBOL_GPL 475 + 0xe1cc2a05 usb_stor_suspend drivers/usb/storage/usb-storage EXPORT_SYMBOL_GPL USB_STORAGE 476 476 477 477 The fields are separated by tabs and values may be empty (e.g. 478 478 if no namespace is defined for an exported symbol).
+7 -2
MAINTAINERS
··· 7527 7527 F: net/802/hippi.c 7528 7528 F: drivers/net/hippi/ 7529 7529 7530 + HISILICON DMA DRIVER 7531 + M: Zhou Wang <wangzhou1@hisilicon.com> 7532 + L: dmaengine@vger.kernel.org 7533 + S: Maintained 7534 + F: drivers/dma/hisi_dma.c 7535 + 7530 7536 HISILICON SECURITY ENGINE V2 DRIVER (SEC2) 7531 7537 M: Zaibo Xu <xuzaibo@huawei.com> 7532 7538 L: linux-crypto@vger.kernel.org ··· 8493 8487 S: Supported 8494 8488 F: drivers/dma/idxd/* 8495 8489 F: include/uapi/linux/idxd.h 8496 - F: include/linux/idxd.h 8497 8490 8498 8491 INTEL IDLE DRIVER 8499 8492 M: Jacob Pan <jacob.jun.pan@linux.intel.com> ··· 8699 8694 M: Luca Coelho <luciano.coelho@intel.com> 8700 8695 M: Intel Linux Wireless <linuxwifi@intel.com> 8701 8696 L: linux-wireless@vger.kernel.org 8702 - W: http://intellinuxwireless.org 8697 + W: https://wireless.wiki.kernel.org/en/users/drivers/iwlwifi 8703 8698 T: git git://git.kernel.org/pub/scm/linux/kernel/git/iwlwifi/iwlwifi.git 8704 8699 S: Supported 8705 8700 F: drivers/net/wireless/intel/iwlwifi/
+2 -2
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 6 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc5 5 + EXTRAVERSION = -rc7 6 6 NAME = Kleptomaniac Octopus 7 7 8 8 # *DOCUMENTATION* ··· 1804 1804 1805 1805 -include $(foreach f,$(existing-targets),$(dir $(f)).$(notdir $(f)).cmd) 1806 1806 1807 - endif # config-targets 1807 + endif # config-build 1808 1808 endif # mixed-build 1809 1809 endif # need-sub-make 1810 1810
+2 -2
arch/arc/Kconfig
··· 154 154 help 155 155 Support for ARC HS38x Cores based on ARCv2 ISA 156 156 The notable features are: 157 - - SMP configurations of upto 4 core with coherency 157 + - SMP configurations of up to 4 cores with coherency 158 158 - Optional L2 Cache and IO-Coherency 159 159 - Revised Interrupt Architecture (multiple priorites, reg banks, 160 160 auto stack switch, auto regfile save/restore) ··· 192 192 help 193 193 In SMP configuration cores can be configured as Halt-on-reset 194 194 or they could all start at same time. For Halt-on-reset, non 195 - masters are parked until Master kicks them so they can start of 195 + masters are parked until Master kicks them so they can start off 196 196 at designated entry point. For other case, all jump to common 197 197 entry point and spin wait for Master's signal. 198 198
-2
arch/arc/configs/nps_defconfig
··· 21 21 CONFIG_MODULE_FORCE_LOAD=y 22 22 CONFIG_MODULE_UNLOAD=y 23 23 # CONFIG_BLK_DEV_BSG is not set 24 - # CONFIG_IOSCHED_DEADLINE is not set 25 - # CONFIG_IOSCHED_CFQ is not set 26 24 CONFIG_ARC_PLAT_EZNPS=y 27 25 CONFIG_SMP=y 28 26 CONFIG_NR_CPUS=4096
-2
arch/arc/configs/nsimosci_defconfig
··· 20 20 CONFIG_KPROBES=y 21 21 CONFIG_MODULES=y 22 22 # CONFIG_BLK_DEV_BSG is not set 23 - # CONFIG_IOSCHED_DEADLINE is not set 24 - # CONFIG_IOSCHED_CFQ is not set 25 23 CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci" 26 24 # CONFIG_COMPACTION is not set 27 25 CONFIG_NET=y
-2
arch/arc/configs/nsimosci_hs_defconfig
··· 19 19 CONFIG_KPROBES=y 20 20 CONFIG_MODULES=y 21 21 # CONFIG_BLK_DEV_BSG is not set 22 - # CONFIG_IOSCHED_DEADLINE is not set 23 - # CONFIG_IOSCHED_CFQ is not set 24 22 CONFIG_ISA_ARCV2=y 25 23 CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci_hs" 26 24 # CONFIG_COMPACTION is not set
-2
arch/arc/configs/nsimosci_hs_smp_defconfig
··· 14 14 CONFIG_KPROBES=y 15 15 CONFIG_MODULES=y 16 16 # CONFIG_BLK_DEV_BSG is not set 17 - # CONFIG_IOSCHED_DEADLINE is not set 18 - # CONFIG_IOSCHED_CFQ is not set 19 17 CONFIG_ISA_ARCV2=y 20 18 CONFIG_SMP=y 21 19 # CONFIG_ARC_TIMERS_64BIT is not set
+2
arch/arc/include/asm/fpu.h
··· 43 43 44 44 #endif /* !CONFIG_ISA_ARCOMPACT */ 45 45 46 + struct task_struct; 47 + 46 48 extern void fpu_save_restore(struct task_struct *p, struct task_struct *n); 47 49 48 50 #else /* !CONFIG_ARC_FPU_SAVE_RESTORE */
+2
arch/arc/include/asm/linkage.h
··· 29 29 .endm 30 30 31 31 #define ASM_NL ` /* use '`' to mark new line in macro */ 32 + #define __ALIGN .align 4 33 + #define __ALIGN_STR __stringify(__ALIGN) 32 34 33 35 /* annotation for data we want in DCCM - if enabled in .config */ 34 36 .macro ARCFP_DATA nm
+1 -1
arch/arc/kernel/setup.c
··· 8 8 #include <linux/delay.h> 9 9 #include <linux/root_dev.h> 10 10 #include <linux/clk.h> 11 - #include <linux/clk-provider.h> 12 11 #include <linux/clocksource.h> 13 12 #include <linux/console.h> 14 13 #include <linux/module.h> 15 14 #include <linux/cpu.h> 15 + #include <linux/of_clk.h> 16 16 #include <linux/of_fdt.h> 17 17 #include <linux/of.h> 18 18 #include <linux/cache.h>
+12 -15
arch/arc/kernel/troubleshoot.c
··· 104 104 if (IS_ERR(nm)) 105 105 nm = "?"; 106 106 } 107 - pr_info(" @off 0x%lx in [%s]\n" 108 - " VMA: 0x%08lx to 0x%08lx\n", 107 + pr_info(" @off 0x%lx in [%s] VMA: 0x%08lx to 0x%08lx\n", 109 108 vma->vm_start < TASK_UNMAPPED_BASE ? 110 109 address : address - vma->vm_start, 111 110 nm, vma->vm_start, vma->vm_end); ··· 119 120 unsigned int vec, cause_code; 120 121 unsigned long address; 121 122 122 - pr_info("\n[ECR ]: 0x%08lx => ", regs->event); 123 - 124 123 /* For Data fault, this is data address not instruction addr */ 125 124 address = current->thread.fault_address; 126 125 ··· 127 130 128 131 /* For DTLB Miss or ProtV, display the memory involved too */ 129 132 if (vec == ECR_V_DTLB_MISS) { 130 - pr_cont("Invalid %s @ 0x%08lx by insn @ 0x%08lx\n", 133 + pr_cont("Invalid %s @ 0x%08lx by insn @ %pS\n", 131 134 (cause_code == 0x01) ? "Read" : 132 135 ((cause_code == 0x02) ? "Write" : "EX"), 133 - address, regs->ret); 136 + address, (void *)regs->ret); 134 137 } else if (vec == ECR_V_ITLB_MISS) { 135 138 pr_cont("Insn could not be fetched\n"); 136 139 } else if (vec == ECR_V_MACH_CHK) { ··· 188 191 189 192 show_ecr_verbose(regs); 190 193 191 - pr_info("[EFA ]: 0x%08lx\n[BLINK ]: %pS\n[ERET ]: %pS\n", 192 - current->thread.fault_address, 193 - (void *)regs->blink, (void *)regs->ret); 194 - 195 194 if (user_mode(regs)) 196 195 show_faulting_vma(regs->ret); /* faulting code, not data */ 197 196 198 - pr_info("[STAT32]: 0x%08lx", regs->status32); 197 + pr_info("ECR: 0x%08lx EFA: 0x%08lx ERET: 0x%08lx\n", 198 + regs->event, current->thread.fault_address, regs->ret); 199 + 200 + pr_info("STAT32: 0x%08lx", regs->status32); 199 201 200 202 #define STS_BIT(r, bit) r->status32 & STATUS_##bit##_MASK ? #bit" " : "" 201 203 202 204 #ifdef CONFIG_ISA_ARCOMPACT 203 - pr_cont(" : %2s%2s%2s%2s%2s%2s%2s\n", 205 + pr_cont(" [%2s%2s%2s%2s%2s%2s%2s]", 204 206 (regs->status32 & STATUS_U_MASK) ? 
"U " : "K ", 205 207 STS_BIT(regs, DE), STS_BIT(regs, AE), 206 208 STS_BIT(regs, A2), STS_BIT(regs, A1), 207 209 STS_BIT(regs, E2), STS_BIT(regs, E1)); 208 210 #else 209 - pr_cont(" : %2s%2s%2s%2s\n", 211 + pr_cont(" [%2s%2s%2s%2s]", 210 212 STS_BIT(regs, IE), 211 213 (regs->status32 & STATUS_U_MASK) ? "U " : "K ", 212 214 STS_BIT(regs, DE), STS_BIT(regs, AE)); 213 215 #endif 214 - pr_info("BTA: 0x%08lx\t SP: 0x%08lx\t FP: 0x%08lx\n", 215 - regs->bta, regs->sp, regs->fp); 216 + pr_cont(" BTA: 0x%08lx\n", regs->bta); 217 + pr_info("BLK: %pS\n SP: 0x%08lx FP: 0x%08lx\n", 218 + (void *)regs->blink, regs->sp, regs->fp); 216 219 pr_info("LPS: 0x%08lx\tLPE: 0x%08lx\tLPC: 0x%08lx\n", 217 220 regs->lp_start, regs->lp_end, regs->lp_count); 218 221
+3 -1
arch/arm/Makefile
··· 307 307 ifeq ($(CONFIG_STACKPROTECTOR_PER_TASK),y) 308 308 prepare: stack_protector_prepare 309 309 stack_protector_prepare: prepare0 310 - $(eval KBUILD_CFLAGS += \ 310 + $(eval SSP_PLUGIN_CFLAGS := \ 311 311 -fplugin-arg-arm_ssp_per_task_plugin-tso=$(shell \ 312 312 awk '{if ($$2 == "THREAD_SZ_ORDER") print $$3;}'\ 313 313 include/generated/asm-offsets.h) \ 314 314 -fplugin-arg-arm_ssp_per_task_plugin-offset=$(shell \ 315 315 awk '{if ($$2 == "TI_STACK_CANARY") print $$3;}'\ 316 316 include/generated/asm-offsets.h)) 317 + $(eval KBUILD_CFLAGS += $(SSP_PLUGIN_CFLAGS)) 318 + $(eval GCC_PLUGINS_CFLAGS += $(SSP_PLUGIN_CFLAGS)) 317 319 endif 318 320 319 321 all: $(notdir $(KBUILD_IMAGE))
+2 -2
arch/arm/boot/compressed/Makefile
··· 101 101 $(libfdt) $(libfdt_hdrs) hyp-stub.S 102 102 103 103 KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING 104 - KBUILD_CFLAGS += $(DISABLE_ARM_SSP_PER_TASK_PLUGIN) 105 104 106 105 ifeq ($(CONFIG_FUNCTION_TRACER),y) 107 106 ORIG_CFLAGS := $(KBUILD_CFLAGS) ··· 116 117 CFLAGS_fdt_rw.o := $(nossp-flags-y) 117 118 CFLAGS_fdt_wip.o := $(nossp-flags-y) 118 119 119 - ccflags-y := -fpic $(call cc-option,-mno-single-pic-base,) -fno-builtin -I$(obj) 120 + ccflags-y := -fpic $(call cc-option,-mno-single-pic-base,) -fno-builtin \ 121 + -I$(obj) $(DISABLE_ARM_SSP_PER_TASK_PLUGIN) 120 122 asflags-y := -DZIMAGE 121 123 122 124 # Supply kernel BSS size to the decompressor via a linker symbol.
+2
arch/arm/kernel/vdso.c
··· 95 95 */ 96 96 np = of_find_compatible_node(NULL, NULL, "arm,armv7-timer"); 97 97 if (!np) 98 + np = of_find_compatible_node(NULL, NULL, "arm,armv8-timer"); 99 + if (!np) 98 100 goto out_put; 99 101 100 102 if (of_property_read_bool(np, "arm,cpu-registers-not-fw-configured"))
+1 -1
arch/arm/lib/copy_from_user.S
··· 118 118 119 119 ENDPROC(arm_copy_from_user) 120 120 121 - .pushsection .fixup,"ax" 121 + .pushsection .text.fixup,"ax" 122 122 .align 0 123 123 copy_abort_preamble 124 124 ldmfd sp!, {r1, r2, r3}
+2 -2
arch/arm64/boot/dts/freescale/fsl-ls1043a-rdb.dts
··· 119 119 120 120 ethernet@e4000 { 121 121 phy-handle = <&rgmii_phy1>; 122 - phy-connection-type = "rgmii-txid"; 122 + phy-connection-type = "rgmii-id"; 123 123 }; 124 124 125 125 ethernet@e6000 { 126 126 phy-handle = <&rgmii_phy2>; 127 - phy-connection-type = "rgmii-txid"; 127 + phy-connection-type = "rgmii-id"; 128 128 }; 129 129 130 130 ethernet@e8000 {
+2 -2
arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts
··· 131 131 &fman0 { 132 132 ethernet@e4000 { 133 133 phy-handle = <&rgmii_phy1>; 134 - phy-connection-type = "rgmii"; 134 + phy-connection-type = "rgmii-id"; 135 135 }; 136 136 137 137 ethernet@e6000 { 138 138 phy-handle = <&rgmii_phy2>; 139 - phy-connection-type = "rgmii"; 139 + phy-connection-type = "rgmii-id"; 140 140 }; 141 141 142 142 ethernet@e8000 {
+4 -4
arch/arm64/crypto/chacha-neon-glue.c
··· 55 55 break; 56 56 } 57 57 chacha_4block_xor_neon(state, dst, src, nrounds, l); 58 - bytes -= CHACHA_BLOCK_SIZE * 5; 59 - src += CHACHA_BLOCK_SIZE * 5; 60 - dst += CHACHA_BLOCK_SIZE * 5; 61 - state[12] += 5; 58 + bytes -= l; 59 + src += l; 60 + dst += l; 61 + state[12] += DIV_ROUND_UP(l, CHACHA_BLOCK_SIZE); 62 62 } 63 63 } 64 64
+1 -3
arch/arm64/include/asm/mmu.h
··· 29 29 */ 30 30 #define ASID(mm) ((mm)->context.id.counter & 0xffff) 31 31 32 - extern bool arm64_use_ng_mappings; 33 - 34 32 static inline bool arm64_kernel_unmapped_at_el0(void) 35 33 { 36 - return arm64_use_ng_mappings; 34 + return cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0); 37 35 } 38 36 39 37 typedef void (*bp_hardening_cb_t)(void);
+4 -2
arch/arm64/include/asm/pgtable-prot.h
··· 23 23 24 24 #include <asm/pgtable-types.h> 25 25 26 + extern bool arm64_use_ng_mappings; 27 + 26 28 #define _PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED) 27 29 #define _PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S) 28 30 29 - #define PTE_MAYBE_NG (arm64_kernel_unmapped_at_el0() ? PTE_NG : 0) 30 - #define PMD_MAYBE_NG (arm64_kernel_unmapped_at_el0() ? PMD_SECT_NG : 0) 31 + #define PTE_MAYBE_NG (arm64_use_ng_mappings ? PTE_NG : 0) 32 + #define PMD_MAYBE_NG (arm64_use_ng_mappings ? PMD_SECT_NG : 0) 31 33 32 34 #define PROT_DEFAULT (_PROT_DEFAULT | PTE_MAYBE_NG) 33 35 #define PROT_SECT_DEFAULT (_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
+1 -1
arch/arm64/include/asm/unistd.h
··· 25 25 #define __NR_compat_gettimeofday 78 26 26 #define __NR_compat_sigreturn 119 27 27 #define __NR_compat_rt_sigreturn 173 28 - #define __NR_compat_clock_getres 247 29 28 #define __NR_compat_clock_gettime 263 29 + #define __NR_compat_clock_getres 264 30 30 #define __NR_compat_clock_gettime64 403 31 31 #define __NR_compat_clock_getres_time64 406 32 32
+20 -5
arch/arm64/kernel/smp.c
··· 958 958 } 959 959 #endif 960 960 961 + /* 962 + * The number of CPUs online, not counting this CPU (which may not be 963 + * fully online and so not counted in num_online_cpus()). 964 + */ 965 + static inline unsigned int num_other_online_cpus(void) 966 + { 967 + unsigned int this_cpu_online = cpu_online(smp_processor_id()); 968 + 969 + return num_online_cpus() - this_cpu_online; 970 + } 971 + 961 972 void smp_send_stop(void) 962 973 { 963 974 unsigned long timeout; 964 975 965 - if (num_online_cpus() > 1) { 976 + if (num_other_online_cpus()) { 966 977 cpumask_t mask; 967 978 968 979 cpumask_copy(&mask, cpu_online_mask); ··· 986 975 987 976 /* Wait up to one second for other CPUs to stop */ 988 977 timeout = USEC_PER_SEC; 989 - while (num_online_cpus() > 1 && timeout--) 978 + while (num_other_online_cpus() && timeout--) 990 979 udelay(1); 991 980 992 - if (num_online_cpus() > 1) 981 + if (num_other_online_cpus()) 993 982 pr_warn("SMP: failed to stop secondary CPUs %*pbl\n", 994 983 cpumask_pr_args(cpu_online_mask)); 995 984 ··· 1012 1001 1013 1002 cpus_stopped = 1; 1014 1003 1015 - if (num_online_cpus() == 1) { 1004 + /* 1005 + * If this cpu is the only one alive at this point in time, online or 1006 + * not, there are no stop messages to be sent around, so just back out. 1007 + */ 1008 + if (num_other_online_cpus() == 0) { 1016 1009 sdei_mask_local_cpu(); 1017 1010 return; 1018 1011 } ··· 1024 1009 cpumask_copy(&mask, cpu_online_mask); 1025 1010 cpumask_clear_cpu(smp_processor_id(), &mask); 1026 1011 1027 - atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1); 1012 + atomic_set(&waiting_for_crash_ipi, num_other_online_cpus()); 1028 1013 1029 1014 pr_crit("SMP: stopping secondary CPUs\n"); 1030 1015 smp_cross_call(&mask, IPI_CPU_CRASH_STOP);
+1
arch/powerpc/kvm/book3s_pr.c
··· 1817 1817 { 1818 1818 struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu); 1819 1819 1820 + kvmppc_mmu_destroy_pr(vcpu); 1820 1821 free_page((unsigned long)vcpu->arch.shared & PAGE_MASK); 1821 1822 #ifdef CONFIG_KVM_BOOK3S_32_HANDLER 1822 1823 kfree(vcpu->arch.shadow_vcpu);
-2
arch/powerpc/kvm/powerpc.c
··· 759 759 return 0; 760 760 761 761 out_vcpu_uninit: 762 - kvmppc_mmu_destroy(vcpu); 763 762 kvmppc_subarch_vcpu_uninit(vcpu); 764 763 return err; 765 764 } ··· 791 792 792 793 kvmppc_core_vcpu_free(vcpu); 793 794 794 - kvmppc_mmu_destroy(vcpu); 795 795 kvmppc_subarch_vcpu_uninit(vcpu); 796 796 } 797 797
+2 -7
arch/powerpc/mm/kasan/kasan_init_32.c
··· 120 120 unsigned long k_cur; 121 121 phys_addr_t pa = __pa(kasan_early_shadow_page); 122 122 123 - if (!early_mmu_has_feature(MMU_FTR_HPTE_TABLE)) { 124 - int ret = kasan_init_shadow_page_tables(k_start, k_end); 125 - 126 - if (ret) 127 - panic("kasan: kasan_init_shadow_page_tables() failed"); 128 - } 129 123 for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) { 130 124 pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur); 131 125 pte_t *ptep = pte_offset_kernel(pmd, k_cur); ··· 137 143 int ret; 138 144 struct memblock_region *reg; 139 145 140 - if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE)) { 146 + if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE) || 147 + IS_ENABLED(CONFIG_KASAN_VMALLOC)) { 141 148 ret = kasan_init_shadow_page_tables(KASAN_SHADOW_START, KASAN_SHADOW_END); 142 149 143 150 if (ret)
+17 -1
arch/s390/kvm/kvm-s390.c
··· 3268 3268 /* Initial reset is a superset of the normal reset */ 3269 3269 kvm_arch_vcpu_ioctl_normal_reset(vcpu); 3270 3270 3271 - /* this equals initial cpu reset in pop, but we don't switch to ESA */ 3271 + /* 3272 + * This equals initial cpu reset in pop, but we don't switch to ESA. 3273 + * We do not only reset the internal data, but also ... 3274 + */ 3272 3275 vcpu->arch.sie_block->gpsw.mask = 0; 3273 3276 vcpu->arch.sie_block->gpsw.addr = 0; 3274 3277 kvm_s390_set_prefix(vcpu, 0); ··· 3281 3278 memset(vcpu->arch.sie_block->gcr, 0, sizeof(vcpu->arch.sie_block->gcr)); 3282 3279 vcpu->arch.sie_block->gcr[0] = CR0_INITIAL_MASK; 3283 3280 vcpu->arch.sie_block->gcr[14] = CR14_INITIAL_MASK; 3281 + 3282 + /* ... the data in sync regs */ 3283 + memset(vcpu->run->s.regs.crs, 0, sizeof(vcpu->run->s.regs.crs)); 3284 + vcpu->run->s.regs.ckc = 0; 3285 + vcpu->run->s.regs.crs[0] = CR0_INITIAL_MASK; 3286 + vcpu->run->s.regs.crs[14] = CR14_INITIAL_MASK; 3287 + vcpu->run->psw_addr = 0; 3288 + vcpu->run->psw_mask = 0; 3289 + vcpu->run->s.regs.todpr = 0; 3290 + vcpu->run->s.regs.cputm = 0; 3291 + vcpu->run->s.regs.ckc = 0; 3292 + vcpu->run->s.regs.pp = 0; 3293 + vcpu->run->s.regs.gbea = 1; 3284 3294 vcpu->run->s.regs.fpc = 0; 3285 3295 vcpu->arch.sie_block->gbea = 1; 3286 3296 vcpu->arch.sie_block->pp = 0;
+7 -10
arch/x86/events/amd/uncore.c
··· 190 190 191 191 /* 192 192 * NB and Last level cache counters (MSRs) are shared across all cores 193 - * that share the same NB / Last level cache. Interrupts can be directed 194 - * to a single target core, however, event counts generated by processes 195 - * running on other cores cannot be masked out. So we do not support 196 - * sampling and per-thread events. 193 + * that share the same NB / Last level cache. On family 16h and below, 194 + * Interrupts can be directed to a single target core, however, event 195 + * counts generated by processes running on other cores cannot be masked 196 + * out. So we do not support sampling and per-thread events via 197 + * CAP_NO_INTERRUPT, and we do not enable counter overflow interrupts: 197 198 */ 198 - if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK) 199 - return -EINVAL; 200 - 201 - /* and we do not enable counter overflow interrupts */ 202 199 hwc->config = event->attr.config & AMD64_RAW_EVENT_MASK_NB; 203 200 hwc->idx = -1; 204 201 ··· 303 306 .start = amd_uncore_start, 304 307 .stop = amd_uncore_stop, 305 308 .read = amd_uncore_read, 306 - .capabilities = PERF_PMU_CAP_NO_EXCLUDE, 309 + .capabilities = PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT, 307 310 }; 308 311 309 312 static struct pmu amd_llc_pmu = { ··· 314 317 .start = amd_uncore_start, 315 318 .stop = amd_uncore_stop, 316 319 .read = amd_uncore_read, 317 - .capabilities = PERF_PMU_CAP_NO_EXCLUDE, 320 + .capabilities = PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT, 318 321 }; 319 322 320 323 static struct amd_uncore *amd_uncore_alloc(unsigned int cpu)
-1
arch/x86/include/asm/kvm_emulate.h
··· 360 360 u64 d; 361 361 unsigned long _eip; 362 362 struct operand memop; 363 - /* Fields above regs are cleared together. */ 364 363 unsigned long _regs[NR_VCPU_REGS]; 365 364 struct operand *memopp; 366 365 struct fetch_cache fetch;
+8 -6
arch/x86/kernel/apic/vector.c
··· 838 838 bool managed = apicd->is_managed; 839 839 840 840 /* 841 - * This should never happen. Managed interrupts are not 842 - * migrated except on CPU down, which does not involve the 843 - * cleanup vector. But try to keep the accounting correct 844 - * nevertheless. 841 + * Managed interrupts are usually not migrated away 842 + * from an online CPU, but CPU isolation 'managed_irq' 843 + * can make that happen. 844 + * 1) Activation does not take the isolation into account 845 + * to keep the code simple 846 + * 2) Migration away from an isolated CPU can happen when 847 + * a non-isolated CPU which is in the calculated 848 + * affinity mask comes online. 845 849 */ 846 - WARN_ON_ONCE(managed); 847 - 848 850 trace_vector_free_moved(apicd->irq, cpu, vector, managed); 849 851 irq_matrix_free(vector_matrix, cpu, vector, managed); 850 852 per_cpu(vector_irq, cpu)[vector] = VECTOR_UNUSED;
+5 -4
arch/x86/kernel/cpu/mce/intel.c
··· 493 493 return; 494 494 495 495 if ((val & 3UL) == 1UL) { 496 - /* PPIN available but disabled: */ 496 + /* PPIN locked in disabled mode */ 497 497 return; 498 498 } 499 499 500 - /* If PPIN is disabled, but not locked, try to enable: */ 501 - if (!(val & 3UL)) { 500 + /* If PPIN is disabled, try to enable */ 501 + if (!(val & 2UL)) { 502 502 wrmsrl_safe(MSR_PPIN_CTL, val | 2UL); 503 503 rdmsrl_safe(MSR_PPIN_CTL, &val); 504 504 } 505 505 506 - if ((val & 3UL) == 2UL) 506 + /* Is the enable bit set? */ 507 + if (val & 2UL) 507 508 set_cpu_cap(c, X86_FEATURE_INTEL_PPIN); 508 509 } 509 510 }
+7 -2
arch/x86/kernel/cpu/mce/therm_throt.c
··· 486 486 { 487 487 struct thermal_state *state = &per_cpu(thermal_state, cpu); 488 488 struct device *dev = get_cpu_device(cpu); 489 + u32 l; 489 490 490 - cancel_delayed_work(&state->package_throttle.therm_work); 491 - cancel_delayed_work(&state->core_throttle.therm_work); 491 + /* Mask the thermal vector before draining evtl. pending work */ 492 + l = apic_read(APIC_LVTTHMR); 493 + apic_write(APIC_LVTTHMR, l | APIC_LVT_MASKED); 494 + 495 + cancel_delayed_work_sync(&state->package_throttle.therm_work); 496 + cancel_delayed_work_sync(&state->core_throttle.therm_work); 492 497 493 498 state->package_throttle.rate_control_active = false; 494 499 state->core_throttle.rate_control_active = false;
+1 -1
arch/x86/kvm/Kconfig
··· 68 68 depends on (X86_64 && !KASAN) || !COMPILE_TEST 69 69 depends on EXPERT 70 70 help 71 - Add -Werror to the build flags for (and only for) i915.ko. 71 + Add -Werror to the build flags for KVM. 72 72 73 73 If in doubt, say "N". 74 74
+1
arch/x86/kvm/emulate.c
··· 5173 5173 ctxt->fetch.ptr = ctxt->fetch.data; 5174 5174 ctxt->fetch.end = ctxt->fetch.data + insn_len; 5175 5175 ctxt->opcode_len = 1; 5176 + ctxt->intercept = x86_intercept_none; 5176 5177 if (insn_len > 0) 5177 5178 memcpy(ctxt->fetch.data, insn, insn_len); 5178 5179 else {
+5 -2
arch/x86/kvm/ioapic.c
··· 378 378 if (e->fields.delivery_mode == APIC_DM_FIXED) { 379 379 struct kvm_lapic_irq irq; 380 380 381 - irq.shorthand = APIC_DEST_NOSHORT; 382 381 irq.vector = e->fields.vector; 383 382 irq.delivery_mode = e->fields.delivery_mode << 8; 384 - irq.dest_id = e->fields.dest_id; 385 383 irq.dest_mode = 386 384 kvm_lapic_irq_dest_mode(!!e->fields.dest_mode); 385 + irq.level = false; 386 + irq.trig_mode = e->fields.trig_mode; 387 + irq.shorthand = APIC_DEST_NOSHORT; 388 + irq.dest_id = e->fields.dest_id; 389 + irq.msi_redir_hint = false; 387 390 bitmap_zero(&vcpu_bitmap, 16); 388 391 kvm_bitmap_or_dest_vcpus(ioapic->kvm, &irq, 389 392 &vcpu_bitmap);
+2 -1
arch/x86/kvm/svm.c
··· 6312 6312 enum exit_fastpath_completion *exit_fastpath) 6313 6313 { 6314 6314 if (!is_guest_mode(vcpu) && 6315 - to_svm(vcpu)->vmcb->control.exit_code == EXIT_REASON_MSR_WRITE) 6315 + to_svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_MSR && 6316 + to_svm(vcpu)->vmcb->control.exit_info_1) 6316 6317 *exit_fastpath = handle_fastpath_set_msr_irqoff(vcpu); 6317 6318 } 6318 6319
+3 -2
arch/x86/kvm/vmx/nested.c
··· 224 224 return; 225 225 226 226 kvm_vcpu_unmap(vcpu, &vmx->nested.hv_evmcs_map, true); 227 - vmx->nested.hv_evmcs_vmptr = -1ull; 227 + vmx->nested.hv_evmcs_vmptr = 0; 228 228 vmx->nested.hv_evmcs = NULL; 229 229 } 230 230 ··· 1923 1923 if (!nested_enlightened_vmentry(vcpu, &evmcs_gpa)) 1924 1924 return 1; 1925 1925 1926 - if (unlikely(evmcs_gpa != vmx->nested.hv_evmcs_vmptr)) { 1926 + if (unlikely(!vmx->nested.hv_evmcs || 1927 + evmcs_gpa != vmx->nested.hv_evmcs_vmptr)) { 1927 1928 if (!vmx->nested.hv_evmcs) 1928 1929 vmx->nested.current_vmptr = -1ull; 1929 1930
+14 -2
arch/x86/kvm/vmx/vmx.c
··· 2338 2338 kvm_cpu_vmxoff(); 2339 2339 } 2340 2340 2341 + /* 2342 + * There is no X86_FEATURE for SGX yet, but anyway we need to query CPUID 2343 + * directly instead of going through cpu_has(), to ensure KVM is trapping 2344 + * ENCLS whenever it's supported in hardware. It does not matter whether 2345 + * the host OS supports or has enabled SGX. 2346 + */ 2347 + static bool cpu_has_sgx(void) 2348 + { 2349 + return cpuid_eax(0) >= 0x12 && (cpuid_eax(0x12) & BIT(0)); 2350 + } 2351 + 2341 2352 static __init int adjust_vmx_controls(u32 ctl_min, u32 ctl_opt, 2342 2353 u32 msr, u32 *result) 2343 2354 { ··· 2429 2418 SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE | 2430 2419 SECONDARY_EXEC_PT_USE_GPA | 2431 2420 SECONDARY_EXEC_PT_CONCEAL_VMX | 2432 - SECONDARY_EXEC_ENABLE_VMFUNC | 2433 - SECONDARY_EXEC_ENCLS_EXITING; 2421 + SECONDARY_EXEC_ENABLE_VMFUNC; 2422 + if (cpu_has_sgx()) 2423 + opt2 |= SECONDARY_EXEC_ENCLS_EXITING; 2434 2424 if (adjust_vmx_controls(min2, opt2, 2435 2425 MSR_IA32_VMX_PROCBASED_CTLS2, 2436 2426 &_cpu_based_2nd_exec_control) < 0)
+5 -3
arch/x86/kvm/x86.c
··· 7195 7195 7196 7196 cpu = get_cpu(); 7197 7197 policy = cpufreq_cpu_get(cpu); 7198 - if (policy && policy->cpuinfo.max_freq) 7199 - max_tsc_khz = policy->cpuinfo.max_freq; 7198 + if (policy) { 7199 + if (policy->cpuinfo.max_freq) 7200 + max_tsc_khz = policy->cpuinfo.max_freq; 7201 + cpufreq_cpu_put(policy); 7202 + } 7200 7203 put_cpu(); 7201 - cpufreq_cpu_put(policy); 7202 7204 #endif 7203 7205 cpufreq_register_notifier(&kvmclock_cpufreq_notifier_block, 7204 7206 CPUFREQ_TRANSITION_NOTIFIER);
+24 -2
arch/x86/mm/fault.c
··· 190 190 return pmd_k; 191 191 } 192 192 193 - void vmalloc_sync_all(void) 193 + static void vmalloc_sync(void) 194 194 { 195 195 unsigned long address; 196 196 ··· 215 215 } 216 216 spin_unlock(&pgd_lock); 217 217 } 218 + } 219 + 220 + void vmalloc_sync_mappings(void) 221 + { 222 + vmalloc_sync(); 223 + } 224 + 225 + void vmalloc_sync_unmappings(void) 226 + { 227 + vmalloc_sync(); 218 228 } 219 229 220 230 /* ··· 329 319 330 320 #else /* CONFIG_X86_64: */ 331 321 332 - void vmalloc_sync_all(void) 322 + void vmalloc_sync_mappings(void) 333 323 { 324 + /* 325 + * 64-bit mappings might allocate new p4d/pud pages 326 + * that need to be propagated to all tasks' PGDs. 327 + */ 334 328 sync_global_pgds(VMALLOC_START & PGDIR_MASK, VMALLOC_END); 329 + } 330 + 331 + void vmalloc_sync_unmappings(void) 332 + { 333 + /* 334 + * Unmappings never allocate or free p4d/pud pages. 335 + * No work is required here. 336 + */ 335 337 } 336 338 337 339 /*
+21
arch/x86/mm/ioremap.c
··· 106 106 return 0; 107 107 } 108 108 109 + /* 110 + * The EFI runtime services data area is not covered by walk_mem_res(), but must 111 + * be mapped encrypted when SEV is active. 112 + */ 113 + static void __ioremap_check_other(resource_size_t addr, struct ioremap_desc *desc) 114 + { 115 + if (!sev_active()) 116 + return; 117 + 118 + if (!IS_ENABLED(CONFIG_EFI)) 119 + return; 120 + 121 + if (efi_mem_type(addr) == EFI_RUNTIME_SERVICES_DATA) 122 + desc->flags |= IORES_MAP_ENCRYPTED; 123 + } 124 + 109 125 static int __ioremap_collect_map_flags(struct resource *res, void *arg) 110 126 { 111 127 struct ioremap_desc *desc = arg; ··· 140 124 * To avoid multiple resource walks, this function walks resources marked as 141 125 * IORESOURCE_MEM and IORESOURCE_BUSY and looking for system RAM and/or a 142 126 * resource described not as IORES_DESC_NONE (e.g. IORES_DESC_ACPI_TABLES). 127 + * 128 + * After that, deal with misc other ranges in __ioremap_check_other() which do 129 + * not fall into the above category. 143 130 */ 144 131 static void __ioremap_check_mem(resource_size_t addr, unsigned long size, 145 132 struct ioremap_desc *desc) ··· 154 135 memset(desc, 0, sizeof(struct ioremap_desc)); 155 136 156 137 walk_mem_res(start, end, desc, __ioremap_collect_map_flags); 138 + 139 + __ioremap_check_other(addr, desc); 157 140 } 158 141 159 142 /*
+6 -4
arch/x86/net/bpf_jit_comp32.c
··· 2039 2039 } 2040 2040 /* and dreg_lo,sreg_lo */ 2041 2041 EMIT2(0x23, add_2reg(0xC0, sreg_lo, dreg_lo)); 2042 - /* and dreg_hi,sreg_hi */ 2043 - EMIT2(0x23, add_2reg(0xC0, sreg_hi, dreg_hi)); 2044 - /* or dreg_lo,dreg_hi */ 2045 - EMIT2(0x09, add_2reg(0xC0, dreg_lo, dreg_hi)); 2042 + if (is_jmp64) { 2043 + /* and dreg_hi,sreg_hi */ 2044 + EMIT2(0x23, add_2reg(0xC0, sreg_hi, dreg_hi)); 2045 + /* or dreg_lo,dreg_hi */ 2046 + EMIT2(0x09, add_2reg(0xC0, dreg_lo, dreg_hi)); 2047 + } 2046 2048 goto emit_cond_jmp; 2047 2049 } 2048 2050 case BPF_JMP | BPF_JSET | BPF_K:
+1 -1
block/blk-iocost.c
··· 1318 1318 return false; 1319 1319 1320 1320 /* is something in flight? */ 1321 - if (atomic64_read(&iocg->done_vtime) < atomic64_read(&iocg->vtime)) 1321 + if (atomic64_read(&iocg->done_vtime) != atomic64_read(&iocg->vtime)) 1322 1322 return false; 1323 1323 1324 1324 return true;
+22
block/blk-mq-sched.c
··· 398 398 WARN_ON(e && (rq->tag != -1)); 399 399 400 400 if (blk_mq_sched_bypass_insert(hctx, !!e, rq)) { 401 + /* 402 + * Firstly normal IO request is inserted to scheduler queue or 403 + * sw queue, meantime we add flush request to dispatch queue( 404 + * hctx->dispatch) directly and there is at most one in-flight 405 + * flush request for each hw queue, so it doesn't matter to add 406 + * flush request to tail or front of the dispatch queue. 407 + * 408 + * Secondly in case of NCQ, flush request belongs to non-NCQ 409 + * command, and queueing it will fail when there is any 410 + * in-flight normal IO request(NCQ command). When adding flush 411 + * rq to the front of hctx->dispatch, it is easier to introduce 412 + * extra time to flush rq's latency because of S_SCHED_RESTART 413 + * compared with adding to the tail of dispatch queue, then 414 + * chance of flush merge is increased, and less flush requests 415 + * will be issued to controller. It is observed that ~10% time 416 + * is saved in blktests block/004 on disk attached to AHCI/NCQ 417 + * drive when adding flush rq to the front of hctx->dispatch. 418 + * 419 + * Simply queue flush rq to the front of hctx->dispatch so that 420 + * intensive flush workloads can benefit in case of NCQ HW. 421 + */ 422 + at_head = (rq->rq_flags & RQF_FLUSH_SEQ) ? true : at_head; 401 423 blk_mq_request_bypass_insert(rq, at_head, false); 402 424 goto run; 403 425 }
+36
block/genhd.c
··· 301 301 } 302 302 EXPORT_SYMBOL_GPL(disk_map_sector_rcu); 303 303 304 + /** 305 + * disk_has_partitions 306 + * @disk: gendisk of interest 307 + * 308 + * Walk through the partition table and check if valid partition exists. 309 + * 310 + * CONTEXT: 311 + * Don't care. 312 + * 313 + * RETURNS: 314 + * True if the gendisk has at least one valid non-zero size partition. 315 + * Otherwise false. 316 + */ 317 + bool disk_has_partitions(struct gendisk *disk) 318 + { 319 + struct disk_part_tbl *ptbl; 320 + int i; 321 + bool ret = false; 322 + 323 + rcu_read_lock(); 324 + ptbl = rcu_dereference(disk->part_tbl); 325 + 326 + /* Iterate partitions skipping the whole device at index 0 */ 327 + for (i = 1; i < ptbl->len; i++) { 328 + if (rcu_dereference(ptbl->part[i])) { 329 + ret = true; 330 + break; 331 + } 332 + } 333 + 334 + rcu_read_unlock(); 335 + 336 + return ret; 337 + } 338 + EXPORT_SYMBOL_GPL(disk_has_partitions); 339 + 304 340 /* 305 341 * Can be deleted altogether. Later. 306 342 *
+1 -1
drivers/acpi/apei/ghes.c
··· 171 171 * New allocation must be visible in all pgd before it can be found by 172 172 * an NMI allocating from the pool. 173 173 */ 174 - vmalloc_sync_all(); 174 + vmalloc_sync_mappings(); 175 175 176 176 rc = gen_pool_add(ghes_estatus_pool, addr, PAGE_ALIGN(len), -1); 177 177 if (rc)
+1
drivers/android/binderfs.c
··· 448 448 inode->i_uid = info->root_uid; 449 449 inode->i_gid = info->root_gid; 450 450 451 + refcount_set(&device->ref, 1); 451 452 device->binderfs_inode = inode; 452 453 device->miscdev.minor = minor; 453 454
+2 -2
drivers/clk/clk.c
··· 4713 4713 * 4714 4714 * Returns: The number of clocks that are possible parents of this node 4715 4715 */ 4716 - unsigned int of_clk_get_parent_count(struct device_node *np) 4716 + unsigned int of_clk_get_parent_count(const struct device_node *np) 4717 4717 { 4718 4718 int count; 4719 4719 ··· 4725 4725 } 4726 4726 EXPORT_SYMBOL_GPL(of_clk_get_parent_count); 4727 4727 4728 - const char *of_clk_get_parent_name(struct device_node *np, int index) 4728 + const char *of_clk_get_parent_name(const struct device_node *np, int index) 4729 4729 { 4730 4730 struct of_phandle_args clkspec; 4731 4731 struct property *prop;
-19
drivers/clk/qcom/dispcc-sc7180.c
··· 592 592 }, 593 593 }; 594 594 595 - static struct clk_branch disp_cc_mdss_rscc_ahb_clk = { 596 - .halt_reg = 0x400c, 597 - .halt_check = BRANCH_HALT, 598 - .clkr = { 599 - .enable_reg = 0x400c, 600 - .enable_mask = BIT(0), 601 - .hw.init = &(struct clk_init_data){ 602 - .name = "disp_cc_mdss_rscc_ahb_clk", 603 - .parent_data = &(const struct clk_parent_data){ 604 - .hw = &disp_cc_mdss_ahb_clk_src.clkr.hw, 605 - }, 606 - .num_parents = 1, 607 - .flags = CLK_IS_CRITICAL | CLK_SET_RATE_PARENT, 608 - .ops = &clk_branch2_ops, 609 - }, 610 - }, 611 - }; 612 - 613 595 static struct clk_branch disp_cc_mdss_rscc_vsync_clk = { 614 596 .halt_reg = 0x4008, 615 597 .halt_check = BRANCH_HALT, ··· 669 687 [DISP_CC_MDSS_PCLK0_CLK_SRC] = &disp_cc_mdss_pclk0_clk_src.clkr, 670 688 [DISP_CC_MDSS_ROT_CLK] = &disp_cc_mdss_rot_clk.clkr, 671 689 [DISP_CC_MDSS_ROT_CLK_SRC] = &disp_cc_mdss_rot_clk_src.clkr, 672 - [DISP_CC_MDSS_RSCC_AHB_CLK] = &disp_cc_mdss_rscc_ahb_clk.clkr, 673 690 [DISP_CC_MDSS_RSCC_VSYNC_CLK] = &disp_cc_mdss_rscc_vsync_clk.clkr, 674 691 [DISP_CC_MDSS_VSYNC_CLK] = &disp_cc_mdss_vsync_clk.clkr, 675 692 [DISP_CC_MDSS_VSYNC_CLK_SRC] = &disp_cc_mdss_vsync_clk_src.clkr,
+1 -1
drivers/clk/qcom/videocc-sc7180.c
··· 97 97 98 98 static struct clk_branch video_cc_vcodec0_core_clk = { 99 99 .halt_reg = 0x890, 100 - .halt_check = BRANCH_HALT, 100 + .halt_check = BRANCH_HALT_VOTED, 101 101 .clkr = { 102 102 .enable_reg = 0x890, 103 103 .enable_mask = BIT(0),
+1 -1
drivers/dma/dmaengine.c
··· 1151 1151 } 1152 1152 1153 1153 if (!device->device_release) 1154 - dev_warn(device->dev, 1154 + dev_dbg(device->dev, 1155 1155 "WARN: Device release is not defined so it is not safe to unbind this driver while in use\n"); 1156 1156 1157 1157 kref_init(&device->ref);
+2 -2
drivers/dma/idxd/cdev.c
··· 81 81 dev = &idxd->pdev->dev; 82 82 idxd_cdev = &wq->idxd_cdev; 83 83 84 - dev_dbg(dev, "%s called\n", __func__); 84 + dev_dbg(dev, "%s called: %d\n", __func__, idxd_wq_refcount(wq)); 85 85 86 - if (idxd_wq_refcount(wq) > 1 && wq_dedicated(wq)) 86 + if (idxd_wq_refcount(wq) > 0 && wq_dedicated(wq)) 87 87 return -EBUSY; 88 88 89 89 ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+19 -10
drivers/dma/ti/k3-udma-glue.c
··· 564 564 if (IS_ERR(flow->udma_rflow)) { 565 565 ret = PTR_ERR(flow->udma_rflow); 566 566 dev_err(dev, "UDMAX rflow get err %d\n", ret); 567 - goto err; 567 + return ret; 568 568 } 569 569 570 570 if (flow->udma_rflow_id != xudma_rflow_get_id(flow->udma_rflow)) { 571 - xudma_rflow_put(rx_chn->common.udmax, flow->udma_rflow); 572 - return -ENODEV; 571 + ret = -ENODEV; 572 + goto err_rflow_put; 573 573 } 574 574 575 575 /* request and cfg rings */ ··· 578 578 if (!flow->ringrx) { 579 579 ret = -ENODEV; 580 580 dev_err(dev, "Failed to get RX ring\n"); 581 - goto err; 581 + goto err_rflow_put; 582 582 } 583 583 584 584 flow->ringrxfdq = k3_ringacc_request_ring(rx_chn->common.ringacc, ··· 586 586 if (!flow->ringrxfdq) { 587 587 ret = -ENODEV; 588 588 dev_err(dev, "Failed to get RXFDQ ring\n"); 589 - goto err; 589 + goto err_ringrx_free; 590 590 } 591 591 592 592 ret = k3_ringacc_ring_cfg(flow->ringrx, &flow_cfg->rx_cfg); 593 593 if (ret) { 594 594 dev_err(dev, "Failed to cfg ringrx %d\n", ret); 595 - goto err; 595 + goto err_ringrxfdq_free; 596 596 } 597 597 598 598 ret = k3_ringacc_ring_cfg(flow->ringrxfdq, &flow_cfg->rxfdq_cfg); 599 599 if (ret) { 600 600 dev_err(dev, "Failed to cfg ringrxfdq %d\n", ret); 601 - goto err; 601 + goto err_ringrxfdq_free; 602 602 } 603 603 604 604 if (rx_chn->remote) { ··· 648 648 if (ret) { 649 649 dev_err(dev, "flow%d config failed: %d\n", flow->udma_rflow_id, 650 650 ret); 651 - goto err; 651 + goto err_ringrxfdq_free; 652 652 } 653 653 654 654 rx_chn->flows_ready++; ··· 656 656 flow->udma_rflow_id, rx_chn->flows_ready); 657 657 658 658 return 0; 659 - err: 660 - k3_udma_glue_release_rx_flow(rx_chn, flow_idx); 659 + 660 + err_ringrxfdq_free: 661 + k3_ringacc_ring_free(flow->ringrxfdq); 662 + 663 + err_ringrx_free: 664 + k3_ringacc_ring_free(flow->ringrx); 665 + 666 + err_rflow_put: 667 + xudma_rflow_put(rx_chn->common.udmax, flow->udma_rflow); 668 + flow->udma_rflow = NULL; 669 + 661 670 return ret; 662 671 } 663 672
+23 -9
drivers/firmware/efi/efivars.c
··· 83 83 efivar_attr_read(struct efivar_entry *entry, char *buf) 84 84 { 85 85 struct efi_variable *var = &entry->var; 86 + unsigned long size = sizeof(var->Data); 86 87 char *str = buf; 88 + int ret; 87 89 88 90 if (!entry || !buf) 89 91 return -EINVAL; 90 92 91 - var->DataSize = 1024; 92 - if (efivar_entry_get(entry, &var->Attributes, &var->DataSize, var->Data)) 93 + ret = efivar_entry_get(entry, &var->Attributes, &size, var->Data); 94 + var->DataSize = size; 95 + if (ret) 93 96 return -EIO; 94 97 95 98 if (var->Attributes & EFI_VARIABLE_NON_VOLATILE) ··· 119 116 efivar_size_read(struct efivar_entry *entry, char *buf) 120 117 { 121 118 struct efi_variable *var = &entry->var; 119 + unsigned long size = sizeof(var->Data); 122 120 char *str = buf; 121 + int ret; 123 122 124 123 if (!entry || !buf) 125 124 return -EINVAL; 126 125 127 - var->DataSize = 1024; 128 - if (efivar_entry_get(entry, &var->Attributes, &var->DataSize, var->Data)) 126 + ret = efivar_entry_get(entry, &var->Attributes, &size, var->Data); 127 + var->DataSize = size; 128 + if (ret) 129 129 return -EIO; 130 130 131 131 str += sprintf(str, "0x%lx\n", var->DataSize); ··· 139 133 efivar_data_read(struct efivar_entry *entry, char *buf) 140 134 { 141 135 struct efi_variable *var = &entry->var; 136 + unsigned long size = sizeof(var->Data); 137 + int ret; 142 138 143 139 if (!entry || !buf) 144 140 return -EINVAL; 145 141 146 - var->DataSize = 1024; 147 - if (efivar_entry_get(entry, &var->Attributes, &var->DataSize, var->Data)) 142 + ret = efivar_entry_get(entry, &var->Attributes, &size, var->Data); 143 + var->DataSize = size; 144 + if (ret) 148 145 return -EIO; 149 146 150 147 memcpy(buf, var->Data, var->DataSize); ··· 208 199 u8 *data; 209 200 int err; 210 201 202 + if (!entry || !buf) 203 + return -EINVAL; 204 + 211 205 if (in_compat_syscall()) { 212 206 struct compat_efi_variable *compat; 213 207 ··· 262 250 { 263 251 struct efi_variable *var = &entry->var; 264 252 struct compat_efi_variable *compat; 253 + unsigned long datasize = sizeof(var->Data); 265 254 size_t size; 255 + int ret; 266 256 267 257 if (!entry || !buf) 268 258 return 0; 269 259 270 - var->DataSize = 1024; 271 - if (efivar_entry_get(entry, &entry->var.Attributes, 272 - &entry->var.DataSize, entry->var.Data)) 260 + ret = efivar_entry_get(entry, &var->Attributes, &datasize, var->Data); 261 + var->DataSize = datasize; 262 + if (ret) 273 263 return -EIO; 274 264 275 265 if (in_compat_syscall()) {
+114 -26
drivers/gpio/gpiolib-acpi.c
··· 21 21 #include "gpiolib.h" 22 22 #include "gpiolib-acpi.h" 23 23 24 - #define QUIRK_NO_EDGE_EVENTS_ON_BOOT 0x01l 25 - #define QUIRK_NO_WAKEUP 0x02l 26 - 27 24 static int run_edge_events_on_boot = -1; 28 25 module_param(run_edge_events_on_boot, int, 0444); 29 26 MODULE_PARM_DESC(run_edge_events_on_boot, 30 27 "Run edge _AEI event-handlers at boot: 0=no, 1=yes, -1=auto"); 31 28 32 - static int honor_wakeup = -1; 33 - module_param(honor_wakeup, int, 0444); 34 - MODULE_PARM_DESC(honor_wakeup, 35 - "Honor the ACPI wake-capable flag: 0=no, 1=yes, -1=auto"); 29 + static char *ignore_wake; 30 + module_param(ignore_wake, charp, 0444); 31 + MODULE_PARM_DESC(ignore_wake, 32 + "controller@pin combos on which to ignore the ACPI wake flag " 33 + "ignore_wake=controller@pin[,controller@pin[,...]]"); 34 + 35 + struct acpi_gpiolib_dmi_quirk { 36 + bool no_edge_events_on_boot; 37 + char *ignore_wake; 38 + }; 36 39 37 40 /** 38 41 * struct acpi_gpio_event - ACPI GPIO event handler data ··· 205 202 acpi_gpiochip_request_irq(acpi_gpio, event); 206 203 } 207 204 205 + static bool acpi_gpio_in_ignore_list(const char *controller_in, int pin_in) 206 + { 207 + const char *controller, *pin_str; 208 + int len, pin; 209 + char *endp; 210 + 211 + controller = ignore_wake; 212 + while (controller) { 213 + pin_str = strchr(controller, '@'); 214 + if (!pin_str) 215 + goto err; 216 + 217 + len = pin_str - controller; 218 + if (len == strlen(controller_in) && 219 + strncmp(controller, controller_in, len) == 0) { 220 + pin = simple_strtoul(pin_str + 1, &endp, 10); 221 + if (*endp != 0 && *endp != ',') 222 + goto err; 223 + 224 + if (pin == pin_in) 225 + return true; 226 + } 227 + 228 + controller = strchr(controller, ','); 229 + if (controller) 230 + controller++; 231 + } 232 + 233 + return false; 234 + err: 235 + pr_err_once("Error invalid value for gpiolib_acpi.ignore_wake: %s\n", 236 + ignore_wake); 237 + return false; 238 + } 239 + 240 + static bool acpi_gpio_irq_is_wake(struct device 
*parent, 241 + struct acpi_resource_gpio *agpio) 242 + { 243 + int pin = agpio->pin_table[0]; 244 + 245 + if (agpio->wake_capable != ACPI_WAKE_CAPABLE) 246 + return false; 247 + 248 + if (acpi_gpio_in_ignore_list(dev_name(parent), pin)) { 249 + dev_info(parent, "Ignoring wakeup on pin %d\n", pin); 250 + return false; 251 + } 252 + 253 + return true; 254 + } 255 + 208 256 /* Always returns AE_OK so that we keep looping over the resources */ 209 257 static acpi_status acpi_gpiochip_alloc_event(struct acpi_resource *ares, 210 258 void *context) ··· 343 289 event->handle = evt_handle; 344 290 event->handler = handler; 345 291 event->irq = irq; 346 - event->irq_is_wake = honor_wakeup && agpio->wake_capable == ACPI_WAKE_CAPABLE; 292 + event->irq_is_wake = acpi_gpio_irq_is_wake(chip->parent, agpio); 347 293 event->pin = pin; 348 294 event->desc = desc; 349 295 ··· 1382 1328 DMI_MATCH(DMI_SYS_VENDOR, "MINIX"), 1383 1329 DMI_MATCH(DMI_PRODUCT_NAME, "Z83-4"), 1384 1330 }, 1385 - .driver_data = (void *)QUIRK_NO_EDGE_EVENTS_ON_BOOT, 1331 + .driver_data = &(struct acpi_gpiolib_dmi_quirk) { 1332 + .no_edge_events_on_boot = true, 1333 + }, 1386 1334 }, 1387 1335 { 1388 1336 /* ··· 1397 1341 DMI_MATCH(DMI_SYS_VENDOR, "Wortmann_AG"), 1398 1342 DMI_MATCH(DMI_PRODUCT_NAME, "TERRA_PAD_1061"), 1399 1343 }, 1400 - .driver_data = (void *)QUIRK_NO_EDGE_EVENTS_ON_BOOT, 1344 + .driver_data = &(struct acpi_gpiolib_dmi_quirk) { 1345 + .no_edge_events_on_boot = true, 1346 + }, 1401 1347 }, 1402 1348 { 1403 1349 /* 1404 - * Various HP X2 10 Cherry Trail models use an external 1405 - * embedded-controller connected via I2C + an ACPI GPIO 1406 - * event handler. The embedded controller generates various 1407 - * spurious wakeup events when suspended. So disable wakeup 1408 - * for its handler (it uses the only ACPI GPIO event handler). 
1409 - * This breaks wakeup when opening the lid, the user needs 1350 + * HP X2 10 models with Cherry Trail SoC + TI PMIC use an 1351 + * external embedded-controller connected via I2C + an ACPI GPIO 1352 + * event handler on INT33FF:01 pin 0, causing spurious wakeups. 1353 + * When suspending by closing the LID, the power to the USB 1354 + * keyboard is turned off, causing INT0002 ACPI events to 1355 + * trigger once the XHCI controller notices the keyboard is 1356 + * gone. So INT0002 events cause spurious wakeups too. Ignoring 1357 + * EC wakes breaks wakeup when opening the lid, the user needs 1410 1358 * to press the power-button to wakeup the system. The 1411 1359 * alternative is suspend simply not working, which is worse. 1412 1360 */ ··· 1418 1358 DMI_MATCH(DMI_SYS_VENDOR, "HP"), 1419 1359 DMI_MATCH(DMI_PRODUCT_NAME, "HP x2 Detachable 10-p0XX"), 1420 1360 }, 1421 - .driver_data = (void *)QUIRK_NO_WAKEUP, 1361 + .driver_data = &(struct acpi_gpiolib_dmi_quirk) { 1362 + .ignore_wake = "INT33FF:01@0,INT0002:00@2", 1363 + }, 1364 + }, 1365 + { 1366 + /* 1367 + * HP X2 10 models with Bay Trail SoC + AXP288 PMIC use an 1368 + * external embedded-controller connected via I2C + an ACPI GPIO 1369 + * event handler on INT33FC:02 pin 28, causing spurious wakeups. 1370 + */ 1371 + .matches = { 1372 + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 1373 + DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion x2 Detachable"), 1374 + DMI_MATCH(DMI_BOARD_NAME, "815D"), 1375 + }, 1376 + .driver_data = &(struct acpi_gpiolib_dmi_quirk) { 1377 + .ignore_wake = "INT33FC:02@28", 1378 + }, 1379 + }, 1380 + { 1381 + /* 1382 + * HP X2 10 models with Cherry Trail SoC + AXP288 PMIC use an 1383 + * external embedded-controller connected via I2C + an ACPI GPIO 1384 + * event handler on INT33FF:01 pin 0, causing spurious wakeups. 
1385 + */ 1386 + .matches = { 1387 + DMI_MATCH(DMI_SYS_VENDOR, "HP"), 1388 + DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion x2 Detachable"), 1389 + DMI_MATCH(DMI_BOARD_NAME, "813E"), 1390 + }, 1391 + .driver_data = &(struct acpi_gpiolib_dmi_quirk) { 1392 + .ignore_wake = "INT33FF:01@0", 1393 + }, 1422 1394 }, 1423 1395 {} /* Terminating entry */ 1424 1396 }; 1425 1397 1426 1398 static int acpi_gpio_setup_params(void) 1427 1399 { 1400 + const struct acpi_gpiolib_dmi_quirk *quirk = NULL; 1428 1401 const struct dmi_system_id *id; 1429 - long quirks = 0; 1430 1402 1431 1403 id = dmi_first_match(gpiolib_acpi_quirks); 1432 1404 if (id) 1433 - quirks = (long)id->driver_data; 1405 + quirk = id->driver_data; 1434 1406 1435 1407 if (run_edge_events_on_boot < 0) { 1436 - if (quirks & QUIRK_NO_EDGE_EVENTS_ON_BOOT) 1408 + if (quirk && quirk->no_edge_events_on_boot) 1437 1409 run_edge_events_on_boot = 0; 1438 1410 else 1439 1411 run_edge_events_on_boot = 1; 1440 1412 } 1441 1413 1442 - if (honor_wakeup < 0) { 1443 - if (quirks & QUIRK_NO_WAKEUP) 1444 - honor_wakeup = 0; 1445 - else 1446 - honor_wakeup = 1; 1447 - } 1414 + if (ignore_wake == NULL && quirk && quirk->ignore_wake) 1415 + ignore_wake = quirk->ignore_wake; 1448 1416 1449 1417 return 0; 1450 1418 }
+8 -1
drivers/gpio/gpiolib.c
··· 2306 2306 { 2307 2307 struct gpio_chip *chip = irq_data_get_irq_chip_data(d); 2308 2308 2309 + /* 2310 + * Since we override .irq_disable() we need to mimic the 2311 + * behaviour of __irq_disable() in irq/chip.c. 2312 + * First call .irq_disable() if it exists, else mimic the 2313 + * behaviour of mask_irq() which calls .irq_mask() if 2314 + * it exists. 2315 + */ 2309 2316 if (chip->irq.irq_disable) 2310 2317 chip->irq.irq_disable(d); 2311 - else 2318 + else if (chip->irq.chip->irq_mask) 2312 2319 chip->irq.chip->irq_mask(d); 2313 2320 gpiochip_disable_irq(chip, d->hwirq); 2314 2321 }
+3 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
··· 781 781 ssize_t result = 0; 782 782 uint32_t offset, se, sh, cu, wave, simd, thread, bank, *data; 783 783 784 - if (size & 3 || *pos & 3) 784 + if (size > 4096 || size & 3 || *pos & 3) 785 785 return -EINVAL; 786 786 787 787 /* decode offset */ 788 - offset = *pos & GENMASK_ULL(11, 0); 788 + offset = (*pos & GENMASK_ULL(11, 0)) >> 2; 789 789 se = (*pos & GENMASK_ULL(19, 12)) >> 12; 790 790 sh = (*pos & GENMASK_ULL(27, 20)) >> 20; 791 791 cu = (*pos & GENMASK_ULL(35, 28)) >> 28; ··· 823 823 while (size) { 824 824 uint32_t value; 825 825 826 - value = data[offset++]; 826 + value = data[result >> 2]; 827 827 r = put_user(value, (uint32_t *)buf); 828 828 if (r) { 829 829 result = r;
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 3913 3913 if (r) 3914 3914 goto out; 3915 3915 3916 + amdgpu_fbdev_set_suspend(tmp_adev, 0); 3917 + 3916 3918 /* must succeed. */ 3917 3919 amdgpu_ras_resume(tmp_adev); 3918 3920 ··· 4087 4085 * And add them back after reset completed 4088 4086 */ 4089 4087 amdgpu_unregister_gpu_instance(tmp_adev); 4088 + 4089 + amdgpu_fbdev_set_suspend(adev, 1); 4090 4090 4091 4091 /* disable ras on ALL IPs */ 4092 4092 if (!(in_ras_intr && !use_baco) &&
+1 -1
drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
··· 693 693 bool enable = (state == AMD_CG_STATE_GATE); 694 694 695 695 if (enable) { 696 - if (jpeg_v2_0_is_idle(handle)) 696 + if (!jpeg_v2_0_is_idle(handle)) 697 697 return -EBUSY; 698 698 jpeg_v2_0_enable_clock_gating(adev); 699 699 } else {
+1 -1
drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
··· 477 477 continue; 478 478 479 479 if (enable) { 480 - if (jpeg_v2_5_is_idle(handle)) 480 + if (!jpeg_v2_5_is_idle(handle)) 481 481 return -EBUSY; 482 482 jpeg_v2_5_enable_clock_gating(adev, i); 483 483 } else {
+1 -1
drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
··· 1352 1352 1353 1353 if (enable) { 1354 1354 /* wait for STATUS to clear */ 1355 - if (vcn_v1_0_is_idle(handle)) 1355 + if (!vcn_v1_0_is_idle(handle)) 1356 1356 return -EBUSY; 1357 1357 vcn_v1_0_enable_clock_gating(adev); 1358 1358 } else {
+1 -1
drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
··· 1217 1217 1218 1218 if (enable) { 1219 1219 /* wait for STATUS to clear */ 1220 - if (vcn_v2_0_is_idle(handle)) 1220 + if (!vcn_v2_0_is_idle(handle)) 1221 1221 return -EBUSY; 1222 1222 vcn_v2_0_enable_clock_gating(adev); 1223 1223 } else {
+1 -1
drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
··· 1672 1672 return 0; 1673 1673 1674 1674 if (enable) { 1675 - if (vcn_v2_5_is_idle(handle)) 1675 + if (!vcn_v2_5_is_idle(handle)) 1676 1676 return -EBUSY; 1677 1677 vcn_v2_5_enable_clock_gating(adev); 1678 1678 } else {
+15 -3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 522 522 523 523 acrtc_state = to_dm_crtc_state(acrtc->base.state); 524 524 525 - DRM_DEBUG_DRIVER("crtc:%d, vupdate-vrr:%d\n", acrtc->crtc_id, 526 - amdgpu_dm_vrr_active(acrtc_state)); 525 + DRM_DEBUG_DRIVER("crtc:%d, vupdate-vrr:%d, planes:%d\n", acrtc->crtc_id, 526 + amdgpu_dm_vrr_active(acrtc_state), 527 + acrtc_state->active_planes); 527 528 528 529 amdgpu_dm_crtc_handle_crc_irq(&acrtc->base); 529 530 drm_crtc_handle_vblank(&acrtc->base); ··· 544 543 &acrtc_state->vrr_params.adjust); 545 544 } 546 545 547 - if (acrtc->pflip_status == AMDGPU_FLIP_SUBMITTED) { 546 + /* 547 + * If there aren't any active_planes then DCH HUBP may be clock-gated. 548 + * In that case, pageflip completion interrupts won't fire and pageflip 549 + * completion events won't get delivered. Prevent this by sending 550 + * pending pageflip events from here if a flip is still pending. 551 + * 552 + * If any planes are enabled, use dm_pflip_high_irq() instead, to 553 + * avoid race conditions between flip programming and completion, 554 + * which could cause too early flip completion events. 555 + */ 556 + if (acrtc->pflip_status == AMDGPU_FLIP_SUBMITTED && 557 + acrtc_state->active_planes == 0) { 548 558 if (acrtc->event) { 549 559 drm_crtc_send_vblank_event(&acrtc->base, acrtc->event); 550 560 acrtc->event = NULL;
-1
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
··· 108 108 .enable_power_gating_plane = dcn20_enable_power_gating_plane, 109 109 .dpp_pg_control = dcn20_dpp_pg_control, 110 110 .hubp_pg_control = dcn20_hubp_pg_control, 111 - .dsc_pg_control = NULL, 112 111 .update_odm = dcn20_update_odm, 113 112 .dsc_pg_control = dcn20_dsc_pg_control, 114 113 .get_surface_visual_confirm_color = dcn10_get_surface_visual_confirm_color,
-1
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c
··· 116 116 .enable_power_gating_plane = dcn20_enable_power_gating_plane, 117 117 .dpp_pg_control = dcn20_dpp_pg_control, 118 118 .hubp_pg_control = dcn20_hubp_pg_control, 119 - .dsc_pg_control = NULL, 120 119 .update_odm = dcn20_update_odm, 121 120 .dsc_pg_control = dcn20_dsc_pg_control, 122 121 .get_surface_visual_confirm_color = dcn10_get_surface_visual_confirm_color,
+2 -2
drivers/gpu/drm/arm/display/komeda/komeda_drv.c
··· 146 146 147 147 MODULE_DEVICE_TABLE(of, komeda_of_match); 148 148 149 - static int komeda_rt_pm_suspend(struct device *dev) 149 + static int __maybe_unused komeda_rt_pm_suspend(struct device *dev) 150 150 { 151 151 struct komeda_drv *mdrv = dev_get_drvdata(dev); 152 152 153 153 return komeda_dev_suspend(mdrv->mdev); 154 154 } 155 155 156 - static int komeda_rt_pm_resume(struct device *dev) 156 + static int __maybe_unused komeda_rt_pm_resume(struct device *dev) 157 157 { 158 158 struct komeda_drv *mdrv = dev_get_drvdata(dev); 159 159
+2 -4
drivers/gpu/drm/bochs/bochs_hw.c
··· 156 156 size = min(size, mem); 157 157 } 158 158 159 - if (pci_request_region(pdev, 0, "bochs-drm") != 0) { 160 - DRM_ERROR("Cannot request framebuffer\n"); 161 - return -EBUSY; 162 - } 159 + if (pci_request_region(pdev, 0, "bochs-drm") != 0) 160 + DRM_WARN("Cannot request framebuffer, boot fb still active?\n"); 163 161 164 162 bochs->fb_map = ioremap(addr, size); 165 163 if (bochs->fb_map == NULL) {
+26 -20
drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
··· 1624 1624 frame.colorspace = HDMI_COLORSPACE_RGB; 1625 1625 1626 1626 /* Set up colorimetry */ 1627 - switch (hdmi->hdmi_data.enc_out_encoding) { 1628 - case V4L2_YCBCR_ENC_601: 1629 - if (hdmi->hdmi_data.enc_in_encoding == V4L2_YCBCR_ENC_XV601) 1630 - frame.colorimetry = HDMI_COLORIMETRY_EXTENDED; 1631 - else 1627 + if (!hdmi_bus_fmt_is_rgb(hdmi->hdmi_data.enc_out_bus_format)) { 1628 + switch (hdmi->hdmi_data.enc_out_encoding) { 1629 + case V4L2_YCBCR_ENC_601: 1630 + if (hdmi->hdmi_data.enc_in_encoding == V4L2_YCBCR_ENC_XV601) 1631 + frame.colorimetry = HDMI_COLORIMETRY_EXTENDED; 1632 + else 1633 + frame.colorimetry = HDMI_COLORIMETRY_ITU_601; 1634 + frame.extended_colorimetry = 1635 + HDMI_EXTENDED_COLORIMETRY_XV_YCC_601; 1636 + break; 1637 + case V4L2_YCBCR_ENC_709: 1638 + if (hdmi->hdmi_data.enc_in_encoding == V4L2_YCBCR_ENC_XV709) 1639 + frame.colorimetry = HDMI_COLORIMETRY_EXTENDED; 1640 + else 1641 + frame.colorimetry = HDMI_COLORIMETRY_ITU_709; 1642 + frame.extended_colorimetry = 1643 + HDMI_EXTENDED_COLORIMETRY_XV_YCC_709; 1644 + break; 1645 + default: /* Carries no data */ 1632 1646 frame.colorimetry = HDMI_COLORIMETRY_ITU_601; 1647 + frame.extended_colorimetry = 1648 + HDMI_EXTENDED_COLORIMETRY_XV_YCC_601; 1649 + break; 1650 + } 1651 + } else { 1652 + frame.colorimetry = HDMI_COLORIMETRY_NONE; 1633 1653 frame.extended_colorimetry = 1634 - HDMI_EXTENDED_COLORIMETRY_XV_YCC_601; 1635 - break; 1636 - case V4L2_YCBCR_ENC_709: 1637 - if (hdmi->hdmi_data.enc_in_encoding == V4L2_YCBCR_ENC_XV709) 1638 - frame.colorimetry = HDMI_COLORIMETRY_EXTENDED; 1639 - else 1640 - frame.colorimetry = HDMI_COLORIMETRY_ITU_709; 1641 - frame.extended_colorimetry = 1642 - HDMI_EXTENDED_COLORIMETRY_XV_YCC_709; 1643 - break; 1644 - default: /* Carries no data */ 1645 - frame.colorimetry = HDMI_COLORIMETRY_ITU_601; 1646 - frame.extended_colorimetry = 1647 - HDMI_EXTENDED_COLORIMETRY_XV_YCC_601; 1648 - break; 1654 + HDMI_EXTENDED_COLORIMETRY_XV_YCC_601; 1649 1655 } 1650 1656 1651 1657 frame.scan_mode = HDMI_SCAN_MODE_NONE;
+2 -1
drivers/gpu/drm/drm_lease.c
··· 542 542 } 543 543 544 544 DRM_DEBUG_LEASE("Creating lease\n"); 545 + /* lessee will take the ownership of leases */ 545 546 lessee = drm_lease_create(lessor, &leases); 546 547 547 548 if (IS_ERR(lessee)) { 548 549 ret = PTR_ERR(lessee); 550 + idr_destroy(&leases); 549 551 goto out_leases; 550 552 } 551 553 ··· 582 580 583 581 out_leases: 584 582 put_unused_fd(fd); 585 - idr_destroy(&leases); 586 583 587 584 DRM_DEBUG_LEASE("drm_mode_create_lease_ioctl failed: %d\n", ret); 588 585 return ret;
+12 -40
drivers/gpu/drm/i915/gt/intel_lrc.c
··· 1600 1600 spin_unlock(&old->breadcrumbs.irq_lock); 1601 1601 } 1602 1602 1603 - static struct i915_request * 1604 - last_active(const struct intel_engine_execlists *execlists) 1605 - { 1606 - struct i915_request * const *last = READ_ONCE(execlists->active); 1607 - 1608 - while (*last && i915_request_completed(*last)) 1609 - last++; 1610 - 1611 - return *last; 1612 - } 1613 - 1614 1603 #define for_each_waiter(p__, rq__) \ 1615 1604 list_for_each_entry_lockless(p__, \ 1616 1605 &(rq__)->sched.waiters_list, \ ··· 1729 1740 (void)I915_SELFTEST_ONLY(execlists->preempt_hang.count++); 1730 1741 } 1731 1742 1732 - static unsigned long active_preempt_timeout(struct intel_engine_cs *engine) 1743 + static unsigned long active_preempt_timeout(struct intel_engine_cs *engine, 1744 + const struct i915_request *rq) 1733 1745 { 1734 - struct i915_request *rq; 1735 - 1736 - rq = last_active(&engine->execlists); 1737 1746 if (!rq) 1738 1747 return 0; 1739 1748 ··· 1742 1755 return READ_ONCE(engine->props.preempt_timeout_ms); 1743 1756 } 1744 1757 1745 - static void set_preempt_timeout(struct intel_engine_cs *engine) 1758 + static void set_preempt_timeout(struct intel_engine_cs *engine, 1759 + const struct i915_request *rq) 1746 1760 { 1747 1761 if (!intel_engine_has_preempt_reset(engine)) 1748 1762 return; 1749 1763 1750 1764 set_timer_ms(&engine->execlists.preempt, 1751 - active_preempt_timeout(engine)); 1765 + active_preempt_timeout(engine, rq)); 1752 1766 } 1753 1767 1754 1768 static inline void clear_ports(struct i915_request **ports, int count) ··· 1762 1774 struct intel_engine_execlists * const execlists = &engine->execlists; 1763 1775 struct i915_request **port = execlists->pending; 1764 1776 struct i915_request ** const last_port = port + execlists->port_mask; 1777 + struct i915_request * const *active; 1765 1778 struct i915_request *last; 1766 1779 struct rb_node *rb; 1767 1780 bool submit = false; ··· 1817 1828 * i.e. 
we will retrigger preemption following the ack in case 1818 1829 * of trouble. 1819 1830 */ 1820 - last = last_active(execlists); 1831 + active = READ_ONCE(execlists->active); 1832 + while ((last = *active) && i915_request_completed(last)) 1833 + active++; 1834 + 1821 1835 if (last) { 1822 1836 if (need_preempt(engine, last, rb)) { 1823 1837 ENGINE_TRACE(engine, ··· 2102 2110 * Skip if we ended up with exactly the same set of requests, 2103 2111 * e.g. trying to timeslice a pair of ordered contexts 2104 2112 */ 2105 - if (!memcmp(execlists->active, execlists->pending, 2113 + if (!memcmp(active, execlists->pending, 2106 2114 (port - execlists->pending + 1) * sizeof(*port))) { 2107 2115 do 2108 2116 execlists_schedule_out(fetch_and_zero(port)); ··· 2113 2121 clear_ports(port + 1, last_port - port); 2114 2122 2115 2123 execlists_submit_ports(engine); 2116 - set_preempt_timeout(engine); 2124 + set_preempt_timeout(engine, *active); 2117 2125 } else { 2118 2126 skip_submit: 2119 2127 ring_set_paused(engine, 0); ··· 4000 4008 4001 4009 *cs++ = preparser_disable(false); 4002 4010 intel_ring_advance(request, cs); 4003 - 4004 - /* 4005 - * Wa_1604544889:tgl 4006 - */ 4007 - if (IS_TGL_REVID(request->i915, TGL_REVID_A0, TGL_REVID_A0)) { 4008 - flags = 0; 4009 - flags |= PIPE_CONTROL_CS_STALL; 4010 - flags |= PIPE_CONTROL_HDC_PIPELINE_FLUSH; 4011 - 4012 - flags |= PIPE_CONTROL_STORE_DATA_INDEX; 4013 - flags |= PIPE_CONTROL_QW_WRITE; 4014 - 4015 - cs = intel_ring_begin(request, 6); 4016 - if (IS_ERR(cs)) 4017 - return PTR_ERR(cs); 4018 - 4019 - cs = gen8_emit_pipe_control(cs, flags, 4020 - LRC_PPHWSP_SCRATCH_ADDR); 4021 - intel_ring_advance(request, cs); 4022 - } 4023 4011 } 4024 4012 4025 4013 return 0;
+22 -3
drivers/gpu/drm/i915/gt/intel_workarounds.c
··· 1529 1529 return ERR_PTR(err); 1530 1530 } 1531 1531 1532 + static const struct { 1533 + u32 start; 1534 + u32 end; 1535 + } mcr_ranges_gen8[] = { 1536 + { .start = 0x5500, .end = 0x55ff }, 1537 + { .start = 0x7000, .end = 0x7fff }, 1538 + { .start = 0x9400, .end = 0x97ff }, 1539 + { .start = 0xb000, .end = 0xb3ff }, 1540 + { .start = 0xe000, .end = 0xe7ff }, 1541 + {}, 1542 + }; 1543 + 1532 1544 static bool mcr_range(struct drm_i915_private *i915, u32 offset) 1533 1545 { 1546 + int i; 1547 + 1548 + if (INTEL_GEN(i915) < 8) 1549 + return false; 1550 + 1534 1551 /* 1535 - * Registers in this range are affected by the MCR selector 1552 + * Registers in these ranges are affected by the MCR selector 1536 1553 * which only controls CPU initiated MMIO. Routing does not 1537 1554 * work for CS access so we cannot verify them on this path. 1538 1555 */ 1539 - if (INTEL_GEN(i915) >= 8 && (offset >= 0xb000 && offset <= 0xb4ff)) 1540 - return true; 1556 + for (i = 0; mcr_ranges_gen8[i].start; i++) 1557 + if (offset >= mcr_ranges_gen8[i].start && 1558 + offset <= mcr_ranges_gen8[i].end) 1559 + return true; 1541 1560 1542 1561 return false; 1543 1562 }
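The intel_workarounds.c hunk replaces a single hard-coded range test with a sentinel-terminated `{start, end}` table walked linearly (note the tightened upper bound: 0xb3ff instead of the old 0xb4ff). The lookup pattern can be sketched in plain userspace C — simplified types, no i915/GEN check, table values copied from the hunk above:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sentinel-terminated range table, mirroring mcr_ranges_gen8[] above. */
static const struct { uint32_t start, end; } mcr_ranges_gen8[] = {
	{ 0x5500, 0x55ff },
	{ 0x7000, 0x7fff },
	{ 0x9400, 0x97ff },
	{ 0xb000, 0xb3ff },
	{ 0xe000, 0xe7ff },
	{},	/* start == 0 terminates the scan */
};

/* Simplified mcr_range(): just the range walk, no device check. */
static bool in_mcr_range(uint32_t offset)
{
	int i;

	for (i = 0; mcr_ranges_gen8[i].start; i++)
		if (offset >= mcr_ranges_gen8[i].start &&
		    offset <= mcr_ranges_gen8[i].end)
			return true;

	return false;
}
```

The empty initializer as terminator avoids a separate `ARRAY_SIZE` bound and keeps additions to the table one-line changes.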
+2
drivers/hid/hid-google-hammer.c
··· 533 533 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 534 534 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MASTERBALL) }, 535 535 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 536 + USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MOONBALL) }, 537 + { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 536 538 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_STAFF) }, 537 539 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 538 540 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_WAND) },
+2
drivers/hid/hid-ids.h
··· 478 478 #define USB_DEVICE_ID_GOOGLE_WHISKERS 0x5030 479 479 #define USB_DEVICE_ID_GOOGLE_MASTERBALL 0x503c 480 480 #define USB_DEVICE_ID_GOOGLE_MAGNEMITE 0x503d 481 + #define USB_DEVICE_ID_GOOGLE_MOONBALL 0x5044 481 482 482 483 #define USB_VENDOR_ID_GOTOP 0x08f2 483 484 #define USB_DEVICE_ID_SUPER_Q2 0x007f ··· 727 726 #define USB_DEVICE_ID_LENOVO_X1_COVER 0x6085 728 727 #define USB_DEVICE_ID_LENOVO_X1_TAB 0x60a3 729 728 #define USB_DEVICE_ID_LENOVO_X1_TAB3 0x60b5 729 + #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D 0x608d 730 730 731 731 #define USB_VENDOR_ID_LG 0x1fd2 732 732 #define USB_DEVICE_ID_LG_MULTITOUCH 0x0064
+2 -2
drivers/hid/hid-picolcd_fb.c
··· 458 458 if (ret >= PAGE_SIZE) 459 459 break; 460 460 else if (i == fb_update_rate) 461 - ret += snprintf(buf+ret, PAGE_SIZE-ret, "[%u] ", i); 461 + ret += scnprintf(buf+ret, PAGE_SIZE-ret, "[%u] ", i); 462 462 else 463 - ret += snprintf(buf+ret, PAGE_SIZE-ret, "%u ", i); 463 + ret += scnprintf(buf+ret, PAGE_SIZE-ret, "%u ", i); 464 464 if (ret > 0) 465 465 buf[min(ret, (size_t)PAGE_SIZE)-1] = '\n'; 466 466 return ret;
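The snprintf() to scnprintf() conversions in this hunk (and the hid-sensor-custom.c one below) matter because snprintf() returns the length that *would* have been written, which can exceed the remaining space and push `ret` past PAGE_SIZE on truncation; scnprintf() returns the bytes actually stored. A userspace stand-in for scnprintf's semantics (simplified reimplementation; the kernel version lives in lib/vsprintf.c):

```c
#include <assert.h>
#include <stdarg.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/*
 * Userspace stand-in for the kernel's scnprintf(): clamp the return
 * value to the number of characters actually stored (excluding NUL).
 */
static int my_scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;

	va_start(args, fmt);
	i = vsnprintf(buf, size, fmt, args);
	va_end(args);

	if (i >= (int)size)
		i = size ? (int)size - 1 : 0;
	return i;
}
```

With the clamped return value, repeated `ret += scnprintf(buf + ret, PAGE_SIZE - ret, ...)` accumulation can never index past the end of the buffer.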
+1
drivers/hid/hid-quirks.c
··· 103 103 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_PENSKETCH_M912), HID_QUIRK_MULTI_INPUT }, 104 104 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M406XE), HID_QUIRK_MULTI_INPUT }, 105 105 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE_ID2), HID_QUIRK_ALWAYS_POLL }, 106 + { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D), HID_QUIRK_ALWAYS_POLL }, 106 107 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C007), HID_QUIRK_ALWAYS_POLL }, 107 108 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C077), HID_QUIRK_ALWAYS_POLL }, 108 109 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_KEYBOARD_G710_PLUS), HID_QUIRK_NOGET },
+3 -3
drivers/hid/hid-sensor-custom.c
··· 313 313 314 314 while (i < ret) { 315 315 if (i + attribute->size > ret) { 316 - len += snprintf(&buf[len], 316 + len += scnprintf(&buf[len], 317 317 PAGE_SIZE - len, 318 318 "%d ", values[i]); 319 319 break; ··· 336 336 ++i; 337 337 break; 338 338 } 339 - len += snprintf(&buf[len], PAGE_SIZE - len, 339 + len += scnprintf(&buf[len], PAGE_SIZE - len, 340 340 "%lld ", value); 341 341 } 342 - len += snprintf(&buf[len], PAGE_SIZE - len, "\n"); 342 + len += scnprintf(&buf[len], PAGE_SIZE - len, "\n"); 343 343 344 344 return len; 345 345 } else if (input)
+7 -6
drivers/hwtracing/intel_th/msu.c
··· 718 718 719 719 if (old != expect) { 720 720 ret = -EINVAL; 721 - dev_warn_ratelimited(msc_dev(win->msc), 722 - "expected lockout state %d, got %d\n", 723 - expect, old); 724 721 goto unlock; 725 722 } 726 723 ··· 738 741 /* from intel_th_msc_window_unlock(), don't warn if not locked */ 739 742 if (expect == WIN_LOCKED && old == new) 740 743 return 0; 744 + 745 + dev_warn_ratelimited(msc_dev(win->msc), 746 + "expected lockout state %d, got %d\n", 747 + expect, old); 741 748 } 742 749 743 750 return ret; ··· 761 760 lockdep_assert_held(&msc->buf_mutex); 762 761 763 762 if (msc->mode > MSC_MODE_MULTI) 764 - return -ENOTSUPP; 763 + return -EINVAL; 765 764 766 765 if (msc->mode == MSC_MODE_MULTI) { 767 766 if (msc_win_set_lockout(msc->cur_win, WIN_READY, WIN_INUSE)) ··· 1295 1294 } else if (msc->mode == MSC_MODE_MULTI) { 1296 1295 ret = msc_buffer_multi_alloc(msc, nr_pages, nr_wins); 1297 1296 } else { 1298 - ret = -ENOTSUPP; 1297 + ret = -EINVAL; 1299 1298 } 1300 1299 1301 1300 if (!ret) { ··· 1531 1530 if (ret >= 0) 1532 1531 *ppos = iter->offset; 1533 1532 } else { 1534 - ret = -ENOTSUPP; 1533 + ret = -EINVAL; 1535 1534 } 1536 1535 1537 1536 put_count:
+5
drivers/hwtracing/intel_th/pci.c
··· 235 235 .driver_data = (kernel_ulong_t)&intel_th_2x, 236 236 }, 237 237 { 238 + /* Elkhart Lake CPU */ 239 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4529), 240 + .driver_data = (kernel_ulong_t)&intel_th_2x, 241 + }, 242 + { 238 243 /* Elkhart Lake */ 239 244 PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4b26), 240 245 .driver_data = (kernel_ulong_t)&intel_th_2x,
+3 -3
drivers/hwtracing/stm/p_sys-t.c
··· 238 238 static inline bool sys_t_need_ts(struct sys_t_output *op) 239 239 { 240 240 if (op->node.ts_interval && 241 - time_after(op->ts_jiffies + op->node.ts_interval, jiffies)) { 241 + time_after(jiffies, op->ts_jiffies + op->node.ts_interval)) { 242 242 op->ts_jiffies = jiffies; 243 243 244 244 return true; ··· 250 250 static bool sys_t_need_clock_sync(struct sys_t_output *op) 251 251 { 252 252 if (op->node.clocksync_interval && 253 - time_after(op->clocksync_jiffies + op->node.clocksync_interval, 254 - jiffies)) { 253 + time_after(jiffies, 254 + op->clocksync_jiffies + op->node.clocksync_interval)) { 255 255 op->clocksync_jiffies = jiffies; 256 256 257 257 return true;
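The p_sys-t.c fix swaps the arguments of time_after(): `time_after(a, b)` is true when `a` is *after* `b`, so "has the interval elapsed?" must be written `time_after(jiffies, stamp + interval)` — the original order asked the opposite question, so timestamps and clock-sync packets were emitted on every call instead of once per interval. The macro's wraparound-safe signed-difference trick can be sketched in userspace (a 64-bit counter is assumed for the demo):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Wraparound-safe "a is after b" check, same trick as the kernel's
 * time_after(): the signed difference stays correct across counter
 * overflow as long as the stamps are < half the counter range apart.
 */
static bool my_time_after(uint64_t a, uint64_t b)
{
	return (int64_t)(b - a) < 0;
}
```

With stamp = 200 and interval = 100, the fixed form `my_time_after(now, stamp + interval)` is false at now = 250 and true at now = 350, which is the intended rate limit.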
+1
drivers/i2c/busses/i2c-designware-pcidrv.c
··· 313 313 pm_runtime_get_noresume(&pdev->dev); 314 314 315 315 i2c_del_adapter(&dev->adapter); 316 + devm_free_irq(&pdev->dev, dev->irq, dev); 316 317 pci_free_irq_vectors(pdev); 317 318 } 318 319
+1 -1
drivers/i2c/busses/i2c-gpio.c
··· 348 348 if (ret == -ENOENT) 349 349 retdesc = ERR_PTR(-EPROBE_DEFER); 350 350 351 - if (ret != -EPROBE_DEFER) 351 + if (PTR_ERR(retdesc) != -EPROBE_DEFER) 352 352 dev_err(dev, "error trying to get descriptor: %d\n", ret); 353 353 354 354 return retdesc;
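The i2c-gpio.c one-liner is needed because `retdesc` may have been remapped from the -ENOENT case to ERR_PTR(-EPROBE_DEFER) just above, so testing the original `ret` no longer reflects what the caller will see; the deferral check must go through PTR_ERR(retdesc). The ERR_PTR encoding (errnos stored as pointers in the top 4095 addresses) can be sketched in userspace — simplified, non-kernel reimplementation with local errno constants:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_ERRNO	4095
#define ENOENT_		2	/* local stand-ins for the kernel values */
#define EPROBE_DEFER_	517

static void *err_ptr(long error)	{ return (void *)error; }
static long ptr_err(const void *ptr)	{ return (long)ptr; }

static bool is_err(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* Mirrors the hunk's remap: -ENOENT becomes a deferral for the caller. */
static void *get_desc(long probe_ret)
{
	void *retdesc = err_ptr(probe_ret);

	if (probe_ret == -ENOENT_)
		retdesc = err_ptr(-EPROBE_DEFER_);

	return retdesc;
}
```

After the remap, `probe_ret` is stale (-ENOENT) while `ptr_err(retdesc)` is -EPROBE_DEFER — which is why the error message suppression must test the pointer, not the saved return code.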
+12 -33
drivers/i2c/busses/i2c-i801.c
··· 132 132 #define TCOBASE 0x050 133 133 #define TCOCTL 0x054 134 134 135 - #define ACPIBASE 0x040 136 - #define ACPIBASE_SMI_OFF 0x030 137 - #define ACPICTRL 0x044 138 - #define ACPICTRL_EN 0x080 139 - 140 135 #define SBREG_BAR 0x10 141 136 #define SBREG_SMBCTRL 0xc6000c 142 137 #define SBREG_SMBCTRL_DNV 0xcf000c ··· 1548 1553 pci_bus_write_config_byte(pci_dev->bus, devfn, 0xe1, hidden); 1549 1554 spin_unlock(&p2sb_spinlock); 1550 1555 1551 - res = &tco_res[ICH_RES_MEM_OFF]; 1556 + res = &tco_res[1]; 1552 1557 if (pci_dev->device == PCI_DEVICE_ID_INTEL_DNV_SMBUS) 1553 1558 res->start = (resource_size_t)base64_addr + SBREG_SMBCTRL_DNV; 1554 1559 else ··· 1558 1563 res->flags = IORESOURCE_MEM; 1559 1564 1560 1565 return platform_device_register_resndata(&pci_dev->dev, "iTCO_wdt", -1, 1561 - tco_res, 3, &spt_tco_platform_data, 1566 + tco_res, 2, &spt_tco_platform_data, 1562 1567 sizeof(spt_tco_platform_data)); 1563 1568 } 1564 1569 ··· 1571 1576 i801_add_tco_cnl(struct i801_priv *priv, struct pci_dev *pci_dev, 1572 1577 struct resource *tco_res) 1573 1578 { 1574 - return platform_device_register_resndata(&pci_dev->dev, "iTCO_wdt", -1, 1575 - tco_res, 2, &cnl_tco_platform_data, 1576 - sizeof(cnl_tco_platform_data)); 1579 + return platform_device_register_resndata(&pci_dev->dev, 1580 + "iTCO_wdt", -1, tco_res, 1, &cnl_tco_platform_data, 1581 + sizeof(cnl_tco_platform_data)); 1577 1582 } 1578 1583 1579 1584 static void i801_add_tco(struct i801_priv *priv) 1580 1585 { 1581 - u32 base_addr, tco_base, tco_ctl, ctrl_val; 1582 1586 struct pci_dev *pci_dev = priv->pci_dev; 1583 - struct resource tco_res[3], *res; 1584 - unsigned int devfn; 1587 + struct resource tco_res[2], *res; 1588 + u32 tco_base, tco_ctl; 1585 1589 1586 1590 /* If we have ACPI based watchdog use that instead */ 1587 1591 if (acpi_has_watchdog()) ··· 1595 1601 return; 1596 1602 1597 1603 memset(tco_res, 0, sizeof(tco_res)); 1598 - 1599 - res = &tco_res[ICH_RES_IO_TCO]; 1604 + /* 1605 + * Always populate 
the main iTCO IO resource here. The second entry 1606 + * for NO_REBOOT MMIO is filled by the SPT specific function. 1607 + */ 1608 + res = &tco_res[0]; 1600 1609 res->start = tco_base & ~1; 1601 1610 res->end = res->start + 32 - 1; 1602 1611 res->flags = IORESOURCE_IO; 1603 - 1604 - /* 1605 - * Power Management registers. 1606 - */ 1607 - devfn = PCI_DEVFN(PCI_SLOT(pci_dev->devfn), 2); 1608 - pci_bus_read_config_dword(pci_dev->bus, devfn, ACPIBASE, &base_addr); 1609 - 1610 - res = &tco_res[ICH_RES_IO_SMI]; 1611 - res->start = (base_addr & ~1) + ACPIBASE_SMI_OFF; 1612 - res->end = res->start + 3; 1613 - res->flags = IORESOURCE_IO; 1614 - 1615 - /* 1616 - * Enable the ACPI I/O space. 1617 - */ 1618 - pci_bus_read_config_dword(pci_dev->bus, devfn, ACPICTRL, &ctrl_val); 1619 - ctrl_val |= ACPICTRL_EN; 1620 - pci_bus_write_config_dword(pci_dev->bus, devfn, ACPICTRL, ctrl_val); 1621 1612 1622 1613 if (priv->features & FEATURE_TCO_CNL) 1623 1614 priv->tco_pdev = i801_add_tco_cnl(priv, pci_dev, tco_res);
+9 -1
drivers/i2c/i2c-core-acpi.c
··· 394 394 static struct i2c_client *i2c_acpi_find_client_by_adev(struct acpi_device *adev) 395 395 { 396 396 struct device *dev; 397 + struct i2c_client *client; 397 398 398 399 dev = bus_find_device_by_acpi_dev(&i2c_bus_type, adev); 399 - return dev ? i2c_verify_client(dev) : NULL; 400 + if (!dev) 401 + return NULL; 402 + 403 + client = i2c_verify_client(dev); 404 + if (!client) 405 + put_device(dev); 406 + 407 + return client; 400 408 } 401 409 402 410 static int i2c_acpi_notify(struct notifier_block *nb, unsigned long value,
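The i2c-core-acpi change balances a reference count: bus_find_device_by_acpi_dev() hands back the device with its refcount elevated, and if i2c_verify_client() then rejects it, nothing would ever drop that reference. The invariant — every failing lookup path puts back what the find took — can be sketched with a toy refcount (hypothetical names, not the real driver-core API):

```c
#include <assert.h>
#include <stddef.h>

struct toy_dev { int refcount; int is_client; };

static struct toy_dev *toy_get(struct toy_dev *d) { d->refcount++; return d; }
static void toy_put(struct toy_dev *d)            { d->refcount--; }

/* Mirrors the fixed lookup: drop the reference when verification fails. */
static struct toy_dev *find_client(struct toy_dev *d)
{
	if (!d)
		return NULL;

	toy_get(d);		/* what the bus_find_device() step does */
	if (!d->is_client) {
		toy_put(d);	/* the added put_device() in the hunk */
		return NULL;
	}

	return d;		/* caller now owns one reference */
}
```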
+1
drivers/iio/accel/adxl372.c
··· 237 237 .realbits = 12, \ 238 238 .storagebits = 16, \ 239 239 .shift = 4, \ 240 + .endianness = IIO_BE, \ 240 241 }, \ 241 242 } 242 243
+1 -1
drivers/iio/accel/st_accel_i2c.c
··· 110 110 111 111 #ifdef CONFIG_ACPI 112 112 static const struct acpi_device_id st_accel_acpi_match[] = { 113 - {"SMO8840", (kernel_ulong_t)LNG2DM_ACCEL_DEV_NAME}, 113 + {"SMO8840", (kernel_ulong_t)LIS2DH12_ACCEL_DEV_NAME}, 114 114 {"SMO8A90", (kernel_ulong_t)LNG2DM_ACCEL_DEV_NAME}, 115 115 { }, 116 116 };
+15
drivers/iio/adc/at91-sama5d2_adc.c
··· 723 723 724 724 for_each_set_bit(bit, indio->active_scan_mask, indio->num_channels) { 725 725 struct iio_chan_spec const *chan = at91_adc_chan_get(indio, bit); 726 + u32 cor; 726 727 727 728 if (!chan) 728 729 continue; ··· 731 730 if (chan->type == IIO_POSITIONRELATIVE || 732 731 chan->type == IIO_PRESSURE) 733 732 continue; 733 + 734 + if (state) { 735 + cor = at91_adc_readl(st, AT91_SAMA5D2_COR); 736 + 737 + if (chan->differential) 738 + cor |= (BIT(chan->channel) | 739 + BIT(chan->channel2)) << 740 + AT91_SAMA5D2_COR_DIFF_OFFSET; 741 + else 742 + cor &= ~(BIT(chan->channel) << 743 + AT91_SAMA5D2_COR_DIFF_OFFSET); 744 + 745 + at91_adc_writel(st, AT91_SAMA5D2_COR, cor); 746 + } 734 747 735 748 if (state) { 736 749 at91_adc_writel(st, AT91_SAMA5D2_CHER,
+10 -33
drivers/iio/adc/stm32-dfsdm-adc.c
··· 842 842 } 843 843 } 844 844 845 - static irqreturn_t stm32_dfsdm_adc_trigger_handler(int irq, void *p) 846 - { 847 - struct iio_poll_func *pf = p; 848 - struct iio_dev *indio_dev = pf->indio_dev; 849 - struct stm32_dfsdm_adc *adc = iio_priv(indio_dev); 850 - int available = stm32_dfsdm_adc_dma_residue(adc); 851 - 852 - while (available >= indio_dev->scan_bytes) { 853 - s32 *buffer = (s32 *)&adc->rx_buf[adc->bufi]; 854 - 855 - stm32_dfsdm_process_data(adc, buffer); 856 - 857 - iio_push_to_buffers_with_timestamp(indio_dev, buffer, 858 - pf->timestamp); 859 - available -= indio_dev->scan_bytes; 860 - adc->bufi += indio_dev->scan_bytes; 861 - if (adc->bufi >= adc->buf_sz) 862 - adc->bufi = 0; 863 - } 864 - 865 - iio_trigger_notify_done(indio_dev->trig); 866 - 867 - return IRQ_HANDLED; 868 - } 869 - 870 845 static void stm32_dfsdm_dma_buffer_done(void *data) 871 846 { 872 847 struct iio_dev *indio_dev = data; 873 848 struct stm32_dfsdm_adc *adc = iio_priv(indio_dev); 874 849 int available = stm32_dfsdm_adc_dma_residue(adc); 875 850 size_t old_pos; 876 - 877 - if (indio_dev->currentmode & INDIO_BUFFER_TRIGGERED) { 878 - iio_trigger_poll_chained(indio_dev->trig); 879 - return; 880 - } 881 851 882 852 /* 883 853 * FIXME: In Kernel interface does not support cyclic DMA buffer,and ··· 876 906 adc->bufi = 0; 877 907 old_pos = 0; 878 908 } 879 - /* regular iio buffer without trigger */ 909 + /* 910 + * In DMA mode the trigger services of IIO are not used 911 + * (e.g. no call to iio_trigger_poll). 912 + * Calling irq handler associated to the hardware trigger is not 913 + * relevant as the conversions have already been done. Data 914 + * transfers are performed directly in DMA callback instead. 915 + * This implementation avoids to call trigger irq handler that 916 + * may sleep, in an atomic context (DMA irq handler context). 
917 + */ 880 918 if (adc->dev_data->type == DFSDM_IIO) 881 919 iio_push_to_buffers(indio_dev, buffer); 882 920 } ··· 1514 1536 } 1515 1537 1516 1538 ret = iio_triggered_buffer_setup(indio_dev, 1517 - &iio_pollfunc_store_time, 1518 - &stm32_dfsdm_adc_trigger_handler, 1539 + &iio_pollfunc_store_time, NULL, 1519 1540 &stm32_dfsdm_buffer_setup_ops); 1520 1541 if (ret) { 1521 1542 stm32_dfsdm_dma_release(indio_dev);
+2
drivers/iio/chemical/Kconfig
··· 91 91 tristate "SPS30 particulate matter sensor" 92 92 depends on I2C 93 93 select CRC8 94 + select IIO_BUFFER 95 + select IIO_TRIGGERED_BUFFER 94 96 help 95 97 Say Y here to build support for the Sensirion SPS30 particulate 96 98 matter sensor.
+8 -7
drivers/iio/light/vcnl4000.c
··· 167 167 data->vcnl4200_ps.reg = VCNL4200_PS_DATA; 168 168 switch (id) { 169 169 case VCNL4200_PROD_ID: 170 - /* Integration time is 50ms, but the experiments */ 171 - /* show 54ms in total. */ 172 - data->vcnl4200_al.sampling_rate = ktime_set(0, 54000 * 1000); 173 - data->vcnl4200_ps.sampling_rate = ktime_set(0, 4200 * 1000); 170 + /* Default wait time is 50ms, add 20% tolerance. */ 171 + data->vcnl4200_al.sampling_rate = ktime_set(0, 60000 * 1000); 172 + /* Default wait time is 4.8ms, add 20% tolerance. */ 173 + data->vcnl4200_ps.sampling_rate = ktime_set(0, 5760 * 1000); 174 174 data->al_scale = 24000; 175 175 break; 176 176 case VCNL4040_PROD_ID: 177 - /* Integration time is 80ms, add 10ms. */ 178 - data->vcnl4200_al.sampling_rate = ktime_set(0, 100000 * 1000); 179 - data->vcnl4200_ps.sampling_rate = ktime_set(0, 100000 * 1000); 177 + /* Default wait time is 80ms, add 20% tolerance. */ 178 + data->vcnl4200_al.sampling_rate = ktime_set(0, 96000 * 1000); 179 + /* Default wait time is 5ms, add 20% tolerance. */ 180 + data->vcnl4200_ps.sampling_rate = ktime_set(0, 6000 * 1000); 180 181 data->al_scale = 120000; 181 182 break; 182 183 }
+1 -1
drivers/iio/magnetometer/ak8974.c
··· 564 564 * We read all axes and discard all but one, for optimized 565 565 * reading, use the triggered buffer. 566 566 */ 567 - *val = le16_to_cpu(hw_values[chan->address]); 567 + *val = (s16)le16_to_cpu(hw_values[chan->address]); 568 568 569 569 ret = IIO_VAL_INT; 570 570 }
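The ak8974 one-liner fixes sign extension: le16_to_cpu() yields an unsigned 16-bit value, so a negative two's-complement magnetometer sample promoted straight to `int` becomes a large positive number; routing it through s16 first restores the sign. A userspace illustration of the two promotions:

```c
#include <assert.h>
#include <stdint.h>

/* Promote a raw 16-bit sample to int without the signed cast... */
static int to_int_unsigned(uint16_t raw) { return raw; }

/* ...and with it, matching the (s16)le16_to_cpu() fix above. */
static int to_int_signed(uint16_t raw)   { return (int16_t)raw; }
```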
+1 -1
drivers/iio/proximity/ping.c
··· 269 269 270 270 static const struct of_device_id of_ping_match[] = { 271 271 { .compatible = "parallax,ping", .data = &pa_ping_cfg}, 272 - { .compatible = "parallax,laserping", .data = &pa_ping_cfg}, 272 + { .compatible = "parallax,laserping", .data = &pa_laser_ping_cfg}, 273 273 {}, 274 274 }; 275 275
+9 -2
drivers/iio/trigger/stm32-timer-trigger.c
··· 161 161 return 0; 162 162 } 163 163 164 - static void stm32_timer_stop(struct stm32_timer_trigger *priv) 164 + static void stm32_timer_stop(struct stm32_timer_trigger *priv, 165 + struct iio_trigger *trig) 165 166 { 166 167 u32 ccer, cr1; 167 168 ··· 179 178 regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN, 0); 180 179 regmap_write(priv->regmap, TIM_PSC, 0); 181 180 regmap_write(priv->regmap, TIM_ARR, 0); 181 + 182 + /* Force disable master mode */ 183 + if (stm32_timer_is_trgo2_name(trig->name)) 184 + regmap_update_bits(priv->regmap, TIM_CR2, TIM_CR2_MMS2, 0); 185 + else 186 + regmap_update_bits(priv->regmap, TIM_CR2, TIM_CR2_MMS, 0); 182 187 183 188 /* Make sure that registers are updated */ 184 189 regmap_update_bits(priv->regmap, TIM_EGR, TIM_EGR_UG, TIM_EGR_UG); ··· 204 197 return ret; 205 198 206 199 if (freq == 0) { 207 - stm32_timer_stop(priv); 200 + stm32_timer_stop(priv, trig); 208 201 } else { 209 202 ret = stm32_timer_start(priv, trig, freq); 210 203 if (ret)
+2 -2
drivers/iommu/amd_iommu.c
··· 3826 3826 entry->lo.fields_vapic.ga_tag = ir_data->ga_tag; 3827 3827 3828 3828 return modify_irte_ga(ir_data->irq_2_irte.devid, 3829 - ir_data->irq_2_irte.index, entry, NULL); 3829 + ir_data->irq_2_irte.index, entry, ir_data); 3830 3830 } 3831 3831 EXPORT_SYMBOL(amd_iommu_activate_guest_mode); 3832 3832 ··· 3852 3852 APICID_TO_IRTE_DEST_HI(cfg->dest_apicid); 3853 3853 3854 3854 return modify_irte_ga(ir_data->irq_2_irte.devid, 3855 - ir_data->irq_2_irte.index, entry, NULL); 3855 + ir_data->irq_2_irte.index, entry, ir_data); 3856 3856 } 3857 3857 EXPORT_SYMBOL(amd_iommu_deactivate_guest_mode); 3858 3858
+8 -8
drivers/iommu/dma-iommu.c
··· 177 177 start -= iova_offset(iovad, start); 178 178 num_pages = iova_align(iovad, end - start) >> iova_shift(iovad); 179 179 180 - msi_page = kcalloc(num_pages, sizeof(*msi_page), GFP_KERNEL); 181 - if (!msi_page) 182 - return -ENOMEM; 183 - 184 180 for (i = 0; i < num_pages; i++) { 185 - msi_page[i].phys = start; 186 - msi_page[i].iova = start; 187 - INIT_LIST_HEAD(&msi_page[i].list); 188 - list_add(&msi_page[i].list, &cookie->msi_page_list); 181 + msi_page = kmalloc(sizeof(*msi_page), GFP_KERNEL); 182 + if (!msi_page) 183 + return -ENOMEM; 184 + 185 + msi_page->phys = start; 186 + msi_page->iova = start; 187 + INIT_LIST_HEAD(&msi_page->list); 188 + list_add(&msi_page->list, &cookie->msi_page_list); 189 189 start += iovad->granule; 190 190 } 191 191
+17 -7
drivers/iommu/dmar.c
··· 28 28 #include <linux/slab.h> 29 29 #include <linux/iommu.h> 30 30 #include <linux/numa.h> 31 + #include <linux/limits.h> 31 32 #include <asm/irq_remapping.h> 32 33 #include <asm/iommu_table.h> 33 34 ··· 128 127 struct dmar_pci_notify_info *info; 129 128 130 129 BUG_ON(dev->is_virtfn); 130 + 131 + /* 132 + * Ignore devices that have a domain number higher than what can 133 + * be looked up in DMAR, e.g. VMD subdevices with domain 0x10000 134 + */ 135 + if (pci_domain_nr(dev->bus) > U16_MAX) 136 + return NULL; 131 137 132 138 /* Only generate path[] for device addition event */ 133 139 if (event == BUS_NOTIFY_ADD_DEVICE) ··· 371 363 { 372 364 struct dmar_drhd_unit *dmaru; 373 365 374 - list_for_each_entry_rcu(dmaru, &dmar_drhd_units, list) 366 + list_for_each_entry_rcu(dmaru, &dmar_drhd_units, list, 367 + dmar_rcu_check()) 375 368 if (dmaru->segment == drhd->segment && 376 369 dmaru->reg_base_addr == drhd->address) 377 370 return dmaru; ··· 449 440 450 441 /* Check for NUL termination within the designated length */ 451 442 if (strnlen(andd->device_name, header->length - 8) == header->length - 8) { 452 - WARN_TAINT(1, TAINT_FIRMWARE_WORKAROUND, 443 + pr_warn(FW_BUG 453 444 "Your BIOS is broken; ANDD object name is not NUL-terminated\n" 454 445 "BIOS vendor: %s; Ver: %s; Product Version: %s\n", 455 446 dmi_get_system_info(DMI_BIOS_VENDOR), 456 447 dmi_get_system_info(DMI_BIOS_VERSION), 457 448 dmi_get_system_info(DMI_PRODUCT_VERSION)); 449 + add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK); 458 450 return -EINVAL; 459 451 } 460 452 pr_info("ANDD device: %x name: %s\n", andd->device_number, ··· 481 471 return 0; 482 472 } 483 473 } 484 - WARN_TAINT( 485 - 1, TAINT_FIRMWARE_WORKAROUND, 474 + pr_warn(FW_BUG 486 475 "Your BIOS is broken; RHSA refers to non-existent DMAR unit at %llx\n" 487 476 "BIOS vendor: %s; Ver: %s; Product Version: %s\n", 488 - drhd->reg_base_addr, 477 + rhsa->base_address, 489 478 dmi_get_system_info(DMI_BIOS_VENDOR), 490 479 
dmi_get_system_info(DMI_BIOS_VERSION), 491 480 dmi_get_system_info(DMI_PRODUCT_VERSION)); 481 + add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK); 492 482 493 483 return 0; 494 484 } ··· 837 827 838 828 static void warn_invalid_dmar(u64 addr, const char *message) 839 829 { 840 - WARN_TAINT_ONCE( 841 - 1, TAINT_FIRMWARE_WORKAROUND, 830 + pr_warn_once(FW_BUG 842 831 "Your BIOS is broken; DMAR reported at address %llx%s!\n" 843 832 "BIOS vendor: %s; Ver: %s; Product Version: %s\n", 844 833 addr, message, 845 834 dmi_get_system_info(DMI_BIOS_VENDOR), 846 835 dmi_get_system_info(DMI_BIOS_VERSION), 847 836 dmi_get_system_info(DMI_PRODUCT_VERSION)); 837 + add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK); 848 838 } 849 839 850 840 static int __ref
+38 -19
drivers/iommu/intel-iommu-debugfs.c
··· 33 33 34 34 #define IOMMU_REGSET_ENTRY(_reg_) \ 35 35 { DMAR_##_reg_##_REG, __stringify(_reg_) } 36 - static const struct iommu_regset iommu_regs[] = { 36 + 37 + static const struct iommu_regset iommu_regs_32[] = { 37 38 IOMMU_REGSET_ENTRY(VER), 38 - IOMMU_REGSET_ENTRY(CAP), 39 - IOMMU_REGSET_ENTRY(ECAP), 40 39 IOMMU_REGSET_ENTRY(GCMD), 41 40 IOMMU_REGSET_ENTRY(GSTS), 42 - IOMMU_REGSET_ENTRY(RTADDR), 43 - IOMMU_REGSET_ENTRY(CCMD), 44 41 IOMMU_REGSET_ENTRY(FSTS), 45 42 IOMMU_REGSET_ENTRY(FECTL), 46 43 IOMMU_REGSET_ENTRY(FEDATA), 47 44 IOMMU_REGSET_ENTRY(FEADDR), 48 45 IOMMU_REGSET_ENTRY(FEUADDR), 49 - IOMMU_REGSET_ENTRY(AFLOG), 50 46 IOMMU_REGSET_ENTRY(PMEN), 51 47 IOMMU_REGSET_ENTRY(PLMBASE), 52 48 IOMMU_REGSET_ENTRY(PLMLIMIT), 53 - IOMMU_REGSET_ENTRY(PHMBASE), 54 - IOMMU_REGSET_ENTRY(PHMLIMIT), 55 - IOMMU_REGSET_ENTRY(IQH), 56 - IOMMU_REGSET_ENTRY(IQT), 57 - IOMMU_REGSET_ENTRY(IQA), 58 49 IOMMU_REGSET_ENTRY(ICS), 59 - IOMMU_REGSET_ENTRY(IRTA), 60 - IOMMU_REGSET_ENTRY(PQH), 61 - IOMMU_REGSET_ENTRY(PQT), 62 - IOMMU_REGSET_ENTRY(PQA), 63 50 IOMMU_REGSET_ENTRY(PRS), 64 51 IOMMU_REGSET_ENTRY(PECTL), 65 52 IOMMU_REGSET_ENTRY(PEDATA), 66 53 IOMMU_REGSET_ENTRY(PEADDR), 67 54 IOMMU_REGSET_ENTRY(PEUADDR), 55 + }; 56 + 57 + static const struct iommu_regset iommu_regs_64[] = { 58 + IOMMU_REGSET_ENTRY(CAP), 59 + IOMMU_REGSET_ENTRY(ECAP), 60 + IOMMU_REGSET_ENTRY(RTADDR), 61 + IOMMU_REGSET_ENTRY(CCMD), 62 + IOMMU_REGSET_ENTRY(AFLOG), 63 + IOMMU_REGSET_ENTRY(PHMBASE), 64 + IOMMU_REGSET_ENTRY(PHMLIMIT), 65 + IOMMU_REGSET_ENTRY(IQH), 66 + IOMMU_REGSET_ENTRY(IQT), 67 + IOMMU_REGSET_ENTRY(IQA), 68 + IOMMU_REGSET_ENTRY(IRTA), 69 + IOMMU_REGSET_ENTRY(PQH), 70 + IOMMU_REGSET_ENTRY(PQT), 71 + IOMMU_REGSET_ENTRY(PQA), 68 72 IOMMU_REGSET_ENTRY(MTRRCAP), 69 73 IOMMU_REGSET_ENTRY(MTRRDEF), 70 74 IOMMU_REGSET_ENTRY(MTRR_FIX64K_00000), ··· 131 127 * by adding the offset to the pointer (virtual address). 
132 128 */ 133 129 raw_spin_lock_irqsave(&iommu->register_lock, flag); 134 - for (i = 0 ; i < ARRAY_SIZE(iommu_regs); i++) { 135 - value = dmar_readq(iommu->reg + iommu_regs[i].offset); 130 + for (i = 0 ; i < ARRAY_SIZE(iommu_regs_32); i++) { 131 + value = dmar_readl(iommu->reg + iommu_regs_32[i].offset); 136 132 seq_printf(m, "%-16s\t0x%02x\t\t0x%016llx\n", 137 - iommu_regs[i].regs, iommu_regs[i].offset, 133 + iommu_regs_32[i].regs, iommu_regs_32[i].offset, 134 + value); 135 + } 136 + for (i = 0 ; i < ARRAY_SIZE(iommu_regs_64); i++) { 137 + value = dmar_readq(iommu->reg + iommu_regs_64[i].offset); 138 + seq_printf(m, "%-16s\t0x%02x\t\t0x%016llx\n", 139 + iommu_regs_64[i].regs, iommu_regs_64[i].offset, 138 140 value); 139 141 } 140 142 raw_spin_unlock_irqrestore(&iommu->register_lock, flag); ··· 282 272 { 283 273 struct dmar_drhd_unit *drhd; 284 274 struct intel_iommu *iommu; 275 + u32 sts; 285 276 286 277 rcu_read_lock(); 287 278 for_each_active_iommu(iommu, drhd) { 279 + sts = dmar_readl(iommu->reg + DMAR_GSTS_REG); 280 + if (!(sts & DMA_GSTS_TES)) { 281 + seq_printf(m, "DMA Remapping is not enabled on %s\n", 282 + iommu->name); 283 + continue; 284 + } 288 285 root_tbl_walk(m, iommu); 289 286 seq_putc(m, '\n'); 290 287 } ··· 432 415 struct dmar_drhd_unit *drhd; 433 416 struct intel_iommu *iommu; 434 417 u64 irta; 418 + u32 sts; 435 419 436 420 rcu_read_lock(); 437 421 for_each_active_iommu(iommu, drhd) { ··· 442 424 seq_printf(m, "Remapped Interrupt supported on IOMMU: %s\n", 443 425 iommu->name); 444 426 445 - if (iommu->ir_table) { 427 + sts = dmar_readl(iommu->reg + DMAR_GSTS_REG); 428 + if (iommu->ir_table && (sts & DMA_GSTS_IRES)) { 446 429 irta = virt_to_phys(iommu->ir_table->base); 447 430 seq_printf(m, " IR table address:%llx\n", irta); 448 431 ir_tbl_remap_entry_show(m, iommu);
+19 -9
drivers/iommu/intel-iommu.c
··· 4261 4261 4262 4262 /* we know that the this iommu should be at offset 0xa000 from vtbar */ 4263 4263 drhd = dmar_find_matched_drhd_unit(pdev); 4264 - if (WARN_TAINT_ONCE(!drhd || drhd->reg_base_addr - vtbar != 0xa000, 4265 - TAINT_FIRMWARE_WORKAROUND, 4266 - "BIOS assigned incorrect VT-d unit for Intel(R) QuickData Technology device\n")) 4264 + if (!drhd || drhd->reg_base_addr - vtbar != 0xa000) { 4265 + pr_warn_once(FW_BUG "BIOS assigned incorrect VT-d unit for Intel(R) QuickData Technology device\n"); 4266 + add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK); 4267 4267 pdev->dev.archdata.iommu = DUMMY_DEVICE_DOMAIN_INFO; 4268 + } 4268 4269 } 4269 4270 DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNB, quirk_ioat_snb_local_iommu); 4270 4271 ··· 4461 4460 struct dmar_rmrr_unit *rmrru; 4462 4461 4463 4462 rmrr = (struct acpi_dmar_reserved_memory *)header; 4464 - if (rmrr_sanity_check(rmrr)) 4465 - WARN_TAINT(1, TAINT_FIRMWARE_WORKAROUND, 4463 + if (rmrr_sanity_check(rmrr)) { 4464 + pr_warn(FW_BUG 4466 4465 "Your BIOS is broken; bad RMRR [%#018Lx-%#018Lx]\n" 4467 4466 "BIOS vendor: %s; Ver: %s; Product Version: %s\n", 4468 4467 rmrr->base_address, rmrr->end_address, 4469 4468 dmi_get_system_info(DMI_BIOS_VENDOR), 4470 4469 dmi_get_system_info(DMI_BIOS_VERSION), 4471 4470 dmi_get_system_info(DMI_PRODUCT_VERSION)); 4471 + add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK); 4472 + } 4472 4473 4473 4474 rmrru = kzalloc(sizeof(*rmrru), GFP_KERNEL); 4474 4475 if (!rmrru) ··· 5133 5130 5134 5131 down_write(&dmar_global_lock); 5135 5132 5133 + if (!no_iommu) 5134 + intel_iommu_debugfs_init(); 5135 + 5136 5136 if (no_iommu || dmar_disabled) { 5137 5137 /* 5138 5138 * We exit the function here to ensure IOMMU's remapping and ··· 5199 5193 5200 5194 init_iommu_pm_ops(); 5201 5195 5196 + down_read(&dmar_global_lock); 5202 5197 for_each_active_iommu(iommu, drhd) { 5203 5198 iommu_device_sysfs_add(&iommu->iommu, NULL, 5204 5199 
intel_iommu_groups, ··· 5207 5200 iommu_device_set_ops(&iommu->iommu, &intel_iommu_ops); 5208 5201 iommu_device_register(&iommu->iommu); 5209 5202 } 5203 + up_read(&dmar_global_lock); 5210 5204 5211 5205 bus_set_iommu(&pci_bus_type, &intel_iommu_ops); 5212 5206 if (si_domain && !hw_pass_through) ··· 5218 5210 down_read(&dmar_global_lock); 5219 5211 if (probe_acpi_namespace_devices()) 5220 5212 pr_warn("ACPI name space devices didn't probe correctly\n"); 5221 - up_read(&dmar_global_lock); 5222 5213 5223 5214 /* Finally, we enable the DMA remapping hardware. */ 5224 5215 for_each_iommu(iommu, drhd) { ··· 5226 5219 5227 5220 iommu_disable_protect_mem_regions(iommu); 5228 5221 } 5222 + up_read(&dmar_global_lock); 5223 + 5229 5224 pr_info("Intel(R) Virtualization Technology for Directed I/O\n"); 5230 5225 5231 5226 intel_iommu_enabled = 1; 5232 - intel_iommu_debugfs_init(); 5233 5227 5234 5228 return 0; 5235 5229 ··· 5708 5700 u64 phys = 0; 5709 5701 5710 5702 pte = pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level); 5711 - if (pte) 5712 - phys = dma_pte_addr(pte); 5703 + if (pte && dma_pte_present(pte)) 5704 + phys = dma_pte_addr(pte) + 5705 + (iova & (BIT_MASK(level_to_offset_bits(level) + 5706 + VTD_PAGE_SHIFT) - 1)); 5713 5707 5714 5708 return phys; 5715 5709 }
+2 -2
drivers/iommu/io-pgtable-arm.c
··· 468 468 arm_lpae_iopte *ptep = data->pgd; 469 469 int ret, lvl = data->start_level; 470 470 arm_lpae_iopte prot; 471 - long iaext = (long)iova >> cfg->ias; 471 + long iaext = (s64)iova >> cfg->ias; 472 472 473 473 /* If no access, then nothing to do */ 474 474 if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE))) ··· 645 645 struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops); 646 646 struct io_pgtable_cfg *cfg = &data->iop.cfg; 647 647 arm_lpae_iopte *ptep = data->pgd; 648 - long iaext = (long)iova >> cfg->ias; 648 + long iaext = (s64)iova >> cfg->ias; 649 649 650 650 if (WARN_ON(!size || (size & cfg->pgsize_bitmap) != size)) 651 651 return 0;
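The io-pgtable-arm change swaps `(long)` for `(s64)` because on 32-bit kernels `long` is 32 bits wide: casting a 64-bit IOVA to it truncates the high bits before the `>> cfg->ias` range check, so an out-of-range address can yield `iaext == 0` and slip through. A userspace sketch with int32_t standing in for a 32-bit `long` (the `ias` value and IOVAs are made-up demo numbers):

```c
#include <assert.h>
#include <stdint.h>

/* Range check as written before the fix, on a 32-bit long:
 * the high IOVA bits are truncated away before the shift. */
static int64_t iaext_32bit_long(uint64_t iova, unsigned int ias)
{
	return (int32_t)iova >> ias;	/* ias must be < 32 here */
}

/* Fixed form: always shift the full 64-bit value. */
static int64_t iaext_s64(uint64_t iova, unsigned int ias)
{
	return (int64_t)iova >> ias;
}
```

With a 20-bit input address space, an IOVA of 0x1_0000_0000 is rejected by the s64 form (nonzero result) but looks valid after 32-bit truncation.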
+29 -1
drivers/irqchip/irq-gic-v3.c
··· 34 34 #define GICD_INT_NMI_PRI (GICD_INT_DEF_PRI & ~0x80) 35 35 36 36 #define FLAGS_WORKAROUND_GICR_WAKER_MSM8996 (1ULL << 0) 37 + #define FLAGS_WORKAROUND_CAVIUM_ERRATUM_38539 (1ULL << 1) 37 38 38 39 struct redist_region { 39 40 void __iomem *redist_base; ··· 1465 1464 return true; 1466 1465 } 1467 1466 1467 + static bool gic_enable_quirk_cavium_38539(void *data) 1468 + { 1469 + struct gic_chip_data *d = data; 1470 + 1471 + d->flags |= FLAGS_WORKAROUND_CAVIUM_ERRATUM_38539; 1472 + 1473 + return true; 1474 + } 1475 + 1468 1476 static bool gic_enable_quirk_hip06_07(void *data) 1469 1477 { 1470 1478 struct gic_chip_data *d = data; ··· 1511 1501 .iidr = 0x00000000, 1512 1502 .mask = 0xffffffff, 1513 1503 .init = gic_enable_quirk_hip06_07, 1504 + }, 1505 + { 1506 + /* 1507 + * Reserved register accesses generate a Synchronous 1508 + * External Abort. This erratum applies to: 1509 + * - ThunderX: CN88xx 1510 + * - OCTEON TX: CN83xx, CN81xx 1511 + * - OCTEON TX2: CN93xx, CN96xx, CN98xx, CNF95xx* 1512 + */ 1513 + .desc = "GICv3: Cavium erratum 38539", 1514 + .iidr = 0xa000034c, 1515 + .mask = 0xe8f00fff, 1516 + .init = gic_enable_quirk_cavium_38539, 1514 1517 }, 1515 1518 { 1516 1519 } ··· 1600 1577 pr_info("%d SPIs implemented\n", GIC_LINE_NR - 32); 1601 1578 pr_info("%d Extended SPIs implemented\n", GIC_ESPI_NR); 1602 1579 1603 - gic_data.rdists.gicd_typer2 = readl_relaxed(gic_data.dist_base + GICD_TYPER2); 1580 + /* 1581 + * ThunderX1 explodes on reading GICD_TYPER2, in violation of the 1582 + * architecture spec (which says that reserved registers are RES0). 1583 + */ 1584 + if (!(gic_data.flags & FLAGS_WORKAROUND_CAVIUM_ERRATUM_38539)) 1585 + gic_data.rdists.gicd_typer2 = readl_relaxed(gic_data.dist_base + GICD_TYPER2); 1604 1586 1605 1587 gic_data.domain = irq_domain_create_tree(handle, &gic_irq_domain_ops, 1606 1588 &gic_data);
+7
drivers/macintosh/windfarm_ad7417_sensor.c
··· 312 312 }; 313 313 MODULE_DEVICE_TABLE(i2c, wf_ad7417_id); 314 314 315 + static const struct of_device_id wf_ad7417_of_id[] = { 316 + { .compatible = "ad7417", }, 317 + { } 318 + }; 319 + MODULE_DEVICE_TABLE(of, wf_ad7417_of_id); 320 + 315 321 static struct i2c_driver wf_ad7417_driver = { 316 322 .driver = { 317 323 .name = "wf_ad7417", 324 + .of_match_table = wf_ad7417_of_id, 318 325 }, 319 326 .probe = wf_ad7417_probe, 320 327 .remove = wf_ad7417_remove,
+7
drivers/macintosh/windfarm_fcu_controls.c
··· 580 580 }; 581 581 MODULE_DEVICE_TABLE(i2c, wf_fcu_id); 582 582 583 + static const struct of_device_id wf_fcu_of_id[] = { 584 + { .compatible = "fcu", }, 585 + { } 586 + }; 587 + MODULE_DEVICE_TABLE(of, wf_fcu_of_id); 588 + 583 589 static struct i2c_driver wf_fcu_driver = { 584 590 .driver = { 585 591 .name = "wf_fcu", 592 + .of_match_table = wf_fcu_of_id, 586 593 }, 587 594 .probe = wf_fcu_probe, 588 595 .remove = wf_fcu_remove,
+15 -1
drivers/macintosh/windfarm_lm75_sensor.c
··· 14 14 #include <linux/init.h> 15 15 #include <linux/wait.h> 16 16 #include <linux/i2c.h> 17 + #include <linux/of_device.h> 17 18 #include <asm/prom.h> 18 19 #include <asm/machdep.h> 19 20 #include <asm/io.h> ··· 92 91 const struct i2c_device_id *id) 93 92 { 94 93 struct wf_lm75_sensor *lm; 95 - int rc, ds1775 = id->driver_data; 94 + int rc, ds1775; 96 95 const char *name, *loc; 96 + 97 + if (id) 98 + ds1775 = id->driver_data; 99 + else 100 + ds1775 = !!of_device_get_match_data(&client->dev); 97 101 98 102 DBG("wf_lm75: creating %s device at address 0x%02x\n", 99 103 ds1775 ? "ds1775" : "lm75", client->addr); ··· 170 164 }; 171 165 MODULE_DEVICE_TABLE(i2c, wf_lm75_id); 172 166 167 + static const struct of_device_id wf_lm75_of_id[] = { 168 + { .compatible = "lm75", .data = (void *)0}, 169 + { .compatible = "ds1775", .data = (void *)1 }, 170 + { } 171 + }; 172 + MODULE_DEVICE_TABLE(of, wf_lm75_of_id); 173 + 173 174 static struct i2c_driver wf_lm75_driver = { 174 175 .driver = { 175 176 .name = "wf_lm75", 177 + .of_match_table = wf_lm75_of_id, 176 178 }, 177 179 .probe = wf_lm75_probe, 178 180 .remove = wf_lm75_remove,
+7
drivers/macintosh/windfarm_lm87_sensor.c
··· 166 166 }; 167 167 MODULE_DEVICE_TABLE(i2c, wf_lm87_id); 168 168 169 + static const struct of_device_id wf_lm87_of_id[] = { 170 + { .compatible = "lm87cimt", }, 171 + { } 172 + }; 173 + MODULE_DEVICE_TABLE(of, wf_lm87_of_id); 174 + 169 175 static struct i2c_driver wf_lm87_driver = { 170 176 .driver = { 171 177 .name = "wf_lm87", 178 + .of_match_table = wf_lm87_of_id, 172 179 }, 173 180 .probe = wf_lm87_probe, 174 181 .remove = wf_lm87_remove,
+7
drivers/macintosh/windfarm_max6690_sensor.c
··· 120 120 }; 121 121 MODULE_DEVICE_TABLE(i2c, wf_max6690_id); 122 122 123 + static const struct of_device_id wf_max6690_of_id[] = { 124 + { .compatible = "max6690", }, 125 + { } 126 + }; 127 + MODULE_DEVICE_TABLE(of, wf_max6690_of_id); 128 + 123 129 static struct i2c_driver wf_max6690_driver = { 124 130 .driver = { 125 131 .name = "wf_max6690", 132 + .of_match_table = wf_max6690_of_id, 126 133 }, 127 134 .probe = wf_max6690_probe, 128 135 .remove = wf_max6690_remove,
+7
drivers/macintosh/windfarm_smu_sat.c
··· 341 341 }; 342 342 MODULE_DEVICE_TABLE(i2c, wf_sat_id); 343 343 344 + static const struct of_device_id wf_sat_of_id[] = { 345 + { .compatible = "smu-sat", }, 346 + { } 347 + }; 348 + MODULE_DEVICE_TABLE(of, wf_sat_of_id); 349 + 344 350 static struct i2c_driver wf_sat_driver = { 345 351 .driver = { 346 352 .name = "wf_smu_sat", 353 + .of_match_table = wf_sat_of_id, 347 354 }, 348 355 .probe = wf_sat_probe, 349 356 .remove = wf_sat_remove,
+1 -1
drivers/misc/cardreader/rts5227.c
··· 394 394 void rts522a_init_params(struct rtsx_pcr *pcr) 395 395 { 396 396 rts5227_init_params(pcr); 397 - 397 + pcr->tx_initial_phase = SET_CLOCK_PHASE(20, 20, 11); 398 398 pcr->reg_pm_ctrl3 = RTS522A_PM_CTRL3; 399 399 400 400 pcr->option.ocp_en = 1;
+2
drivers/misc/cardreader/rts5249.c
··· 618 618 void rts524a_init_params(struct rtsx_pcr *pcr) 619 619 { 620 620 rts5249_init_params(pcr); 621 + pcr->tx_initial_phase = SET_CLOCK_PHASE(27, 29, 11); 621 622 pcr->option.ltr_l1off_sspwrgate = LTR_L1OFF_SSPWRGATE_5250_DEF; 622 623 pcr->option.ltr_l1off_snooze_sspwrgate = 623 624 LTR_L1OFF_SNOOZE_SSPWRGATE_5250_DEF; ··· 734 733 void rts525a_init_params(struct rtsx_pcr *pcr) 735 734 { 736 735 rts5249_init_params(pcr); 736 + pcr->tx_initial_phase = SET_CLOCK_PHASE(25, 29, 11); 737 737 pcr->option.ltr_l1off_sspwrgate = LTR_L1OFF_SSPWRGATE_5250_DEF; 738 738 pcr->option.ltr_l1off_snooze_sspwrgate = 739 739 LTR_L1OFF_SNOOZE_SSPWRGATE_5250_DEF;
+1 -1
drivers/misc/cardreader/rts5260.c
··· 662 662 pcr->sd30_drive_sel_1v8 = CFG_DRIVER_TYPE_B; 663 663 pcr->sd30_drive_sel_3v3 = CFG_DRIVER_TYPE_B; 664 664 pcr->aspm_en = ASPM_L1_EN; 665 - pcr->tx_initial_phase = SET_CLOCK_PHASE(1, 29, 16); 665 + pcr->tx_initial_phase = SET_CLOCK_PHASE(27, 29, 11); 666 666 pcr->rx_initial_phase = SET_CLOCK_PHASE(24, 6, 5); 667 667 668 668 pcr->ic_version = rts5260_get_ic_version(pcr);
+1 -1
drivers/misc/cardreader/rts5261.c
··· 764 764 pcr->sd30_drive_sel_1v8 = CFG_DRIVER_TYPE_B; 765 765 pcr->sd30_drive_sel_3v3 = CFG_DRIVER_TYPE_B; 766 766 pcr->aspm_en = ASPM_L1_EN; 767 - pcr->tx_initial_phase = SET_CLOCK_PHASE(20, 27, 16); 767 + pcr->tx_initial_phase = SET_CLOCK_PHASE(27, 27, 11); 768 768 pcr->rx_initial_phase = SET_CLOCK_PHASE(24, 6, 5); 769 769 770 770 pcr->ic_version = rts5261_get_ic_version(pcr);
+2 -1
drivers/misc/eeprom/at24.c
··· 712 712 * chip is functional. 713 713 */ 714 714 err = at24_read(at24, 0, &test_byte, 1); 715 - pm_runtime_idle(dev); 716 715 if (err) { 717 716 pm_runtime_disable(dev); 718 717 regulator_disable(at24->vcc_reg); 719 718 return -ENODEV; 720 719 } 720 + 721 + pm_runtime_idle(dev); 721 722 722 723 if (writable) 723 724 dev_info(dev, "%u byte %s EEPROM, writable, %u bytes/write\n",
+4 -1
drivers/mmc/core/core.c
··· 1732 1732 * the erase operation does not exceed the max_busy_timeout, we should 1733 1733 * use R1B response. Or we need to prevent the host from doing hw busy 1734 1734 * detection, which is done by converting to a R1 response instead. 1735 + * Note, some hosts require R1B, which also means they are on their own 1736 + * when it comes to dealing with the busy timeout. 1735 1737 */ 1736 - if (card->host->max_busy_timeout && 1738 + if (!(card->host->caps & MMC_CAP_NEED_RSP_BUSY) && 1739 + card->host->max_busy_timeout && 1737 1740 busy_timeout > card->host->max_busy_timeout) { 1738 1741 cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_AC; 1739 1742 } else {
+5 -2
drivers/mmc/core/mmc.c
··· 1910 1910 * If the max_busy_timeout of the host is specified, validate it against 1911 1911 * the sleep cmd timeout. A failure means we need to prevent the host 1912 1912 * from doing hw busy detection, which is done by converting to a R1 1913 - * response instead of a R1B. 1913 + * response instead of a R1B. Note, some hosts require R1B, which also 1914 + * means they are on their own when it comes to dealing with the busy 1915 + * timeout. 1914 1916 */ 1915 - if (host->max_busy_timeout && (timeout_ms > host->max_busy_timeout)) { 1917 + if (!(host->caps & MMC_CAP_NEED_RSP_BUSY) && host->max_busy_timeout && 1918 + (timeout_ms > host->max_busy_timeout)) { 1916 1919 cmd.flags = MMC_RSP_R1 | MMC_CMD_AC; 1917 1920 } else { 1918 1921 cmd.flags = MMC_RSP_R1B | MMC_CMD_AC;
+4 -2
drivers/mmc/core/mmc_ops.c
··· 542 542 * If the max_busy_timeout of the host is specified, make sure it's 543 543 * enough to fit the used timeout_ms. In case it's not, let's instruct 544 544 * the host to avoid HW busy detection, by converting to a R1 response 545 - * instead of a R1B. 545 + * instead of a R1B. Note, some hosts require R1B, which also means 546 + * they are on their own when it comes to dealing with the busy timeout. 546 547 */ 547 - if (host->max_busy_timeout && (timeout_ms > host->max_busy_timeout)) 548 + if (!(host->caps & MMC_CAP_NEED_RSP_BUSY) && host->max_busy_timeout && 549 + (timeout_ms > host->max_busy_timeout)) 548 550 use_r1b_resp = false; 549 551 550 552 cmd.opcode = MMC_SWITCH;
+8 -5
drivers/mmc/host/rtsx_pci_sdmmc.c
··· 606 606 u8 sample_point, bool rx) 607 607 { 608 608 struct rtsx_pcr *pcr = host->pcr; 609 - 609 + u16 SD_VP_CTL = 0; 610 610 dev_dbg(sdmmc_dev(host), "%s(%s): sample_point = %d\n", 611 611 __func__, rx ? "RX" : "TX", sample_point); 612 612 613 613 rtsx_pci_write_register(pcr, CLK_CTL, CHANGE_CLK, CHANGE_CLK); 614 - if (rx) 614 + if (rx) { 615 + SD_VP_CTL = SD_VPRX_CTL; 615 616 rtsx_pci_write_register(pcr, SD_VPRX_CTL, 616 617 PHASE_SELECT_MASK, sample_point); 617 - else 618 + } else { 619 + SD_VP_CTL = SD_VPTX_CTL; 618 620 rtsx_pci_write_register(pcr, SD_VPTX_CTL, 619 621 PHASE_SELECT_MASK, sample_point); 620 - rtsx_pci_write_register(pcr, SD_VPCLK0_CTL, PHASE_NOT_RESET, 0); 621 - rtsx_pci_write_register(pcr, SD_VPCLK0_CTL, PHASE_NOT_RESET, 622 + } 623 + rtsx_pci_write_register(pcr, SD_VP_CTL, PHASE_NOT_RESET, 0); 624 + rtsx_pci_write_register(pcr, SD_VP_CTL, PHASE_NOT_RESET, 622 625 PHASE_NOT_RESET); 623 626 rtsx_pci_write_register(pcr, CLK_CTL, CHANGE_CLK, 0); 624 627 rtsx_pci_write_register(pcr, SD_CFG1, SD_ASYNC_FIFO_NOT_RST, 0);
+82 -2
drivers/mmc/host/sdhci-acpi.c
··· 23 23 #include <linux/pm.h> 24 24 #include <linux/pm_runtime.h> 25 25 #include <linux/delay.h> 26 + #include <linux/dmi.h> 26 27 27 28 #include <linux/mmc/host.h> 28 29 #include <linux/mmc/pm.h> ··· 73 72 const struct sdhci_acpi_slot *slot; 74 73 struct platform_device *pdev; 75 74 bool use_runtime_pm; 75 + bool is_intel; 76 + bool reset_signal_volt_on_suspend; 76 77 unsigned long private[0] ____cacheline_aligned; 78 + }; 79 + 80 + enum { 81 + DMI_QUIRK_RESET_SD_SIGNAL_VOLT_ON_SUSP = BIT(0), 82 + DMI_QUIRK_SD_NO_WRITE_PROTECT = BIT(1), 77 83 }; 78 84 79 85 static inline void *sdhci_acpi_priv(struct sdhci_acpi_host *c) ··· 399 391 host->mmc_host_ops.start_signal_voltage_switch = 400 392 intel_start_signal_voltage_switch; 401 393 394 + c->is_intel = true; 395 + 402 396 return 0; 403 397 } 404 398 ··· 657 647 }; 658 648 MODULE_DEVICE_TABLE(acpi, sdhci_acpi_ids); 659 649 650 + static const struct dmi_system_id sdhci_acpi_quirks[] = { 651 + { 652 + /* 653 + * The Lenovo Miix 320-10ICR has a bug in the _PS0 method of 654 + * the SHC1 ACPI device, this bug causes it to reprogram the 655 + * wrong LDO (DLDO3) to 1.8V if 1.8V modes are used and the 656 + * card is (runtime) suspended + resumed. DLDO3 is used for 657 + * the LCD and setting it to 1.8V causes the LCD to go black. 658 + */ 659 + .matches = { 660 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 661 + DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo MIIX 320-10ICR"), 662 + }, 663 + .driver_data = (void *)DMI_QUIRK_RESET_SD_SIGNAL_VOLT_ON_SUSP, 664 + }, 665 + { 666 + /* 667 + * The Acer Aspire Switch 10 (SW5-012) microSD slot always 668 + * reports the card being write-protected even though microSD 669 + * cards do not have a write-protect switch at all. 
670 + */ 671 + .matches = { 672 + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), 673 + DMI_MATCH(DMI_PRODUCT_NAME, "Aspire SW5-012"), 674 + }, 675 + .driver_data = (void *)DMI_QUIRK_SD_NO_WRITE_PROTECT, 676 + }, 677 + {} /* Terminating entry */ 678 + }; 679 + 660 680 static const struct sdhci_acpi_slot *sdhci_acpi_get_slot(struct acpi_device *adev) 661 681 { 662 682 const struct sdhci_acpi_uid_slot *u; ··· 703 663 struct device *dev = &pdev->dev; 704 664 const struct sdhci_acpi_slot *slot; 705 665 struct acpi_device *device, *child; 666 + const struct dmi_system_id *id; 706 667 struct sdhci_acpi_host *c; 707 668 struct sdhci_host *host; 708 669 struct resource *iomem; 709 670 resource_size_t len; 710 671 size_t priv_size; 672 + int quirks = 0; 711 673 int err; 712 674 713 675 device = ACPI_COMPANION(dev); 714 676 if (!device) 715 677 return -ENODEV; 678 + 679 + id = dmi_first_match(sdhci_acpi_quirks); 680 + if (id) 681 + quirks = (long)id->driver_data; 716 682 717 683 slot = sdhci_acpi_get_slot(device); 718 684 ··· 805 759 dev_warn(dev, "failed to setup card detect gpio\n"); 806 760 c->use_runtime_pm = false; 807 761 } 762 + 763 + if (quirks & DMI_QUIRK_RESET_SD_SIGNAL_VOLT_ON_SUSP) 764 + c->reset_signal_volt_on_suspend = true; 765 + 766 + if (quirks & DMI_QUIRK_SD_NO_WRITE_PROTECT) 767 + host->mmc->caps2 |= MMC_CAP2_NO_WRITE_PROTECT; 808 768 } 809 769 810 770 err = sdhci_setup_host(host); ··· 875 823 return 0; 876 824 } 877 825 826 + static void __maybe_unused sdhci_acpi_reset_signal_voltage_if_needed( 827 + struct device *dev) 828 + { 829 + struct sdhci_acpi_host *c = dev_get_drvdata(dev); 830 + struct sdhci_host *host = c->host; 831 + 832 + if (c->is_intel && c->reset_signal_volt_on_suspend && 833 + host->mmc->ios.signal_voltage != MMC_SIGNAL_VOLTAGE_330) { 834 + struct intel_host *intel_host = sdhci_acpi_priv(c); 835 + unsigned int fn = INTEL_DSM_V33_SWITCH; 836 + u32 result = 0; 837 + 838 + intel_dsm(intel_host, dev, fn, &result); 839 + } 840 + } 841 + 878 842 #ifdef CONFIG_PM_SLEEP 879 843 880 844 static int sdhci_acpi_suspend(struct device *dev) 881 845 { 882 846 struct sdhci_acpi_host *c = dev_get_drvdata(dev); 883 847 struct sdhci_host *host = c->host; 848 + int ret; 884 849 885 850 if (host->tuning_mode != SDHCI_TUNING_MODE_3) 886 851 mmc_retune_needed(host->mmc); 887 852 888 - return sdhci_suspend_host(host); 853 + ret = sdhci_suspend_host(host); 854 + if (ret) 855 + return ret; 856 + 857 + sdhci_acpi_reset_signal_voltage_if_needed(dev); 858 + return 0; 889 859 } 890 860 891 861 static int sdhci_acpi_resume(struct device *dev) ··· 927 853 { 928 854 struct sdhci_acpi_host *c = dev_get_drvdata(dev); 929 855 struct sdhci_host *host = c->host; 856 + int ret; 930 857 931 858 if (host->tuning_mode != SDHCI_TUNING_MODE_3) 932 859 mmc_retune_needed(host->mmc); 933 860 934 - return sdhci_runtime_suspend_host(host); 861 + ret = sdhci_runtime_suspend_host(host); 862 + if (ret) 863 + return ret; 864 + 865 + sdhci_acpi_reset_signal_voltage_if_needed(dev); 866 + return 0; 935 867 } 936 868 937 869 static int sdhci_acpi_runtime_resume(struct device *dev)
+16 -2
drivers/mmc/host/sdhci-cadence.c
··· 11 11 #include <linux/mmc/host.h> 12 12 #include <linux/mmc/mmc.h> 13 13 #include <linux/of.h> 14 + #include <linux/of_device.h> 14 15 15 16 #include "sdhci-pltfm.h" 16 17 ··· 236 235 .set_uhs_signaling = sdhci_cdns_set_uhs_signaling, 237 236 }; 238 237 238 + static const struct sdhci_pltfm_data sdhci_cdns_uniphier_pltfm_data = { 239 + .ops = &sdhci_cdns_ops, 240 + .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN, 241 + }; 242 + 239 243 static const struct sdhci_pltfm_data sdhci_cdns_pltfm_data = { 240 244 .ops = &sdhci_cdns_ops, 241 245 }; ··· 340 334 static int sdhci_cdns_probe(struct platform_device *pdev) 341 335 { 342 336 struct sdhci_host *host; 337 + const struct sdhci_pltfm_data *data; 343 338 struct sdhci_pltfm_host *pltfm_host; 344 339 struct sdhci_cdns_priv *priv; 345 340 struct clk *clk; ··· 357 350 if (ret) 358 351 return ret; 359 352 353 + data = of_device_get_match_data(dev); 354 + if (!data) 355 + data = &sdhci_cdns_pltfm_data; 356 + 360 357 nr_phy_params = sdhci_cdns_phy_param_count(dev->of_node); 361 - host = sdhci_pltfm_init(pdev, &sdhci_cdns_pltfm_data, 358 + host = sdhci_pltfm_init(pdev, data, 362 359 struct_size(priv, phy_params, nr_phy_params)); 363 360 if (IS_ERR(host)) { 364 361 ret = PTR_ERR(host); ··· 442 431 }; 443 432 444 433 static const struct of_device_id sdhci_cdns_match[] = { 445 - { .compatible = "socionext,uniphier-sd4hc" }, 434 + { 435 + .compatible = "socionext,uniphier-sd4hc", 436 + .data = &sdhci_cdns_uniphier_pltfm_data, 437 + }, 446 438 { .compatible = "cdns,sd4hc" }, 447 439 { /* sentinel */ } 448 440 };
+6 -2
drivers/mmc/host/sdhci-of-at91.c
··· 132 132 133 133 sdhci_reset(host, mask); 134 134 135 - if (host->mmc->caps & MMC_CAP_NONREMOVABLE) 135 + if ((host->mmc->caps & MMC_CAP_NONREMOVABLE) 136 + || mmc_gpio_get_cd(host->mmc) >= 0) 136 137 sdhci_at91_set_force_card_detect(host); 137 138 138 139 if (priv->cal_always_on && (mask & SDHCI_RESET_ALL)) ··· 428 427 * detection procedure using the SDMCC_CD signal is bypassed. 429 428 * This bit is reset when a software reset for all command is performed 430 429 * so we need to implement our own reset function to set back this bit. 430 + * 431 + * WA: SAMA5D2 doesn't drive CMD if using CD GPIO line. 431 432 */ 432 - if (host->mmc->caps & MMC_CAP_NONREMOVABLE) 433 + if ((host->mmc->caps & MMC_CAP_NONREMOVABLE) 434 + || mmc_gpio_get_cd(host->mmc) >= 0) 433 435 sdhci_at91_set_force_card_detect(host); 434 436 435 437 pm_runtime_put_autosuspend(&pdev->dev);
+3
drivers/mmc/host/sdhci-omap.c
··· 1192 1192 if (of_find_property(dev->of_node, "dmas", NULL)) 1193 1193 sdhci_switch_external_dma(host, true); 1194 1194 1195 + /* An R1B response is required to properly manage HW busy detection. */ 1196 + mmc->caps |= MMC_CAP_NEED_RSP_BUSY; 1197 + 1195 1198 ret = sdhci_setup_host(host); 1196 1199 if (ret) 1197 1200 goto err_put_sync;
+3
drivers/mmc/host/sdhci-tegra.c
··· 1552 1552 if (tegra_host->soc_data->nvquirks & NVQUIRK_ENABLE_DDR50) 1553 1553 host->mmc->caps |= MMC_CAP_1_8V_DDR; 1554 1554 1555 + /* An R1B response is required to properly manage HW busy detection. */ 1556 + host->mmc->caps |= MMC_CAP_NEED_RSP_BUSY; 1557 + 1555 1558 tegra_sdhci_parse_dt(host); 1556 1559 1557 1560 tegra_host->power_gpio = devm_gpiod_get_optional(&pdev->dev, "power",
+1
drivers/net/Kconfig
··· 149 149 config IFB 150 150 tristate "Intermediate Functional Block support" 151 151 depends on NET_CLS_ACT 152 + select NET_REDIRECT 152 153 ---help--- 153 154 This is an intermediate driver that allows sharing of 154 155 resources.
+34 -34
drivers/net/caif/caif_spi.c
··· 141 141 return 0; 142 142 143 143 /* Print out debug information. */ 144 - len += snprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 145 - "CAIF SPI debug information:\n"); 144 + len += scnprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 145 + "CAIF SPI debug information:\n"); 146 146 147 - len += snprintf((buf + len), (DEBUGFS_BUF_SIZE - len), FLAVOR); 147 + len += scnprintf((buf + len), (DEBUGFS_BUF_SIZE - len), FLAVOR); 148 148 149 - len += snprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 150 - "STATE: %d\n", cfspi->dbg_state); 151 - len += snprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 152 - "Previous CMD: 0x%x\n", cfspi->pcmd); 153 - len += snprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 154 - "Current CMD: 0x%x\n", cfspi->cmd); 155 - len += snprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 156 - "Previous TX len: %d\n", cfspi->tx_ppck_len); 157 - len += snprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 158 - "Previous RX len: %d\n", cfspi->rx_ppck_len); 159 - len += snprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 160 - "Current TX len: %d\n", cfspi->tx_cpck_len); 161 - len += snprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 162 - "Current RX len: %d\n", cfspi->rx_cpck_len); 163 - len += snprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 164 - "Next TX len: %d\n", cfspi->tx_npck_len); 165 - len += snprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 166 - "Next RX len: %d\n", cfspi->rx_npck_len); 149 + len += scnprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 150 + "STATE: %d\n", cfspi->dbg_state); 151 + len += scnprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 152 + "Previous CMD: 0x%x\n", cfspi->pcmd); 153 + len += scnprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 154 + "Current CMD: 0x%x\n", cfspi->cmd); 155 + len += scnprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 156 + "Previous TX len: %d\n", cfspi->tx_ppck_len); 157 + len += scnprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 158 + "Previous RX len: %d\n", cfspi->rx_ppck_len); 159 + len += scnprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 160 + "Current TX len: %d\n", cfspi->tx_cpck_len); 161 + len += scnprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 162 + "Current RX len: %d\n", cfspi->rx_cpck_len); 163 + len += scnprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 164 + "Next TX len: %d\n", cfspi->tx_npck_len); 165 + len += scnprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 166 + "Next RX len: %d\n", cfspi->rx_npck_len); 167 167 168 168 if (len > DEBUGFS_BUF_SIZE) 169 169 len = DEBUGFS_BUF_SIZE; ··· 180 180 int len = 0; 181 181 int i; 182 182 for (i = 0; i < count; i++) { 183 - len += snprintf((buf + len), (size - len), 183 + len += scnprintf((buf + len), (size - len), 184 184 "[0x" BYTE_HEX_FMT "]", 185 185 frm[i]); 186 186 if ((i == cut) && (count > (cut * 2))) { 187 187 /* Fast forward. */ 188 188 i = count - cut; 189 - len += snprintf((buf + len), (size - len), 190 - "--- %zu bytes skipped ---\n", 191 - count - (cut * 2)); 189 + len += scnprintf((buf + len), (size - len), 190 + "--- %zu bytes skipped ---\n", 191 + count - (cut * 2)); 192 192 } 193 193 194 194 if ((!(i % 10)) && i) { 195 - len += snprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 196 - "\n"); 195 + len += scnprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 196 + "\n"); 197 197 } 198 198 } 199 - len += snprintf((buf + len), (DEBUGFS_BUF_SIZE - len), "\n"); 199 + len += scnprintf((buf + len), (DEBUGFS_BUF_SIZE - len), "\n"); 200 200 return len; 201 201 } 202 202 ··· 214 214 return 0; 215 215 216 216 /* Print out debug information. */ 217 - len += snprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 218 - "Current frame:\n"); 217 + len += scnprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 218 + "Current frame:\n"); 219 219 220 - len += snprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 221 - "Tx data (Len: %d):\n", cfspi->tx_cpck_len); 220 + len += scnprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 221 + "Tx data (Len: %d):\n", cfspi->tx_cpck_len); 222 222 223 223 len += print_frame((buf + len), (DEBUGFS_BUF_SIZE - len), 224 224 cfspi->xfer.va_tx[0], 225 225 (cfspi->tx_cpck_len + SPI_CMD_SZ), 100); 226 226 227 - len += snprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 228 - "Rx data (Len: %d):\n", cfspi->rx_cpck_len); 227 + len += scnprintf((buf + len), (DEBUGFS_BUF_SIZE - len), 228 + "Rx data (Len: %d):\n", cfspi->rx_cpck_len); 229 229 230 230 len += print_frame((buf + len), (DEBUGFS_BUF_SIZE - len), 231 231 cfspi->xfer.va_rx,
+3
drivers/net/can/slcan.c
··· 622 622 tty->disc_data = NULL; 623 623 clear_bit(SLF_INUSE, &sl->flags); 624 624 slc_free_netdev(sl->dev); 625 + /* do not call free_netdev before rtnl_unlock */ 626 + rtnl_unlock(); 625 627 free_netdev(sl->dev); 628 + return err; 626 629 627 630 err_exit: 628 631 rtnl_unlock();
+2 -2
drivers/net/dsa/mt7530.c
··· 566 566 static void 567 567 mt7530_port_set_status(struct mt7530_priv *priv, int port, int enable) 568 568 { 569 - u32 mask = PMCR_TX_EN | PMCR_RX_EN; 569 + u32 mask = PMCR_TX_EN | PMCR_RX_EN | PMCR_FORCE_LNK; 570 570 571 571 if (enable) 572 572 mt7530_set(priv, MT7530_PMCR_P(port), mask); ··· 1502 1502 mcr_new &= ~(PMCR_FORCE_SPEED_1000 | PMCR_FORCE_SPEED_100 | 1503 1503 PMCR_FORCE_FDX | PMCR_TX_FC_EN | PMCR_RX_FC_EN); 1504 1504 mcr_new |= PMCR_IFG_XMIT(1) | PMCR_MAC_MODE | PMCR_BACKOFF_EN | 1505 - PMCR_BACKPR_EN | PMCR_FORCE_MODE | PMCR_FORCE_LNK; 1505 + PMCR_BACKPR_EN | PMCR_FORCE_MODE; 1506 1506 1507 1507 /* Are we connected to external phy */ 1508 1508 if (port == 5 && dsa_is_user_port(ds, 5))
+60 -18
drivers/net/ethernet/amazon/ena/ena_netdev.c
··· 1015 1015 struct ena_rx_buffer *rx_info; 1016 1016 1017 1017 req_id = rx_ring->free_ids[next_to_use]; 1018 - rc = validate_rx_req_id(rx_ring, req_id); 1019 - if (unlikely(rc < 0)) 1020 - break; 1021 1018 1022 1019 rx_info = &rx_ring->rx_buffer_info[req_id]; 1023 - 1024 1020 1025 1021 rc = ena_alloc_rx_page(rx_ring, rx_info, 1026 1022 GFP_ATOMIC | __GFP_COMP); ··· 1372 1376 struct ena_rx_buffer *rx_info; 1373 1377 u16 len, req_id, buf = 0; 1374 1378 void *va; 1379 + int rc; 1375 1380 1376 1381 len = ena_bufs[buf].len; 1377 1382 req_id = ena_bufs[buf].req_id; 1383 + 1384 + rc = validate_rx_req_id(rx_ring, req_id); 1385 + if (unlikely(rc < 0)) 1386 + return NULL; 1387 + 1378 1388 rx_info = &rx_ring->rx_buffer_info[req_id]; 1379 1389 1380 1390 if (unlikely(!rx_info->page)) { ··· 1453 1451 buf++; 1454 1452 len = ena_bufs[buf].len; 1455 1453 req_id = ena_bufs[buf].req_id; 1454 + 1455 + rc = validate_rx_req_id(rx_ring, req_id); 1456 + if (unlikely(rc < 0)) 1457 + return NULL; 1458 + 1456 1459 rx_info = &rx_ring->rx_buffer_info[req_id]; 1457 1460 } while (1); 1458 1461 ··· 1972 1965 } 1973 1966 1974 1967 /* Reserved the max msix vectors we might need */ 1975 - msix_vecs = ENA_MAX_MSIX_VEC(adapter->num_io_queues); 1968 + msix_vecs = ENA_MAX_MSIX_VEC(adapter->max_num_io_queues); 1976 1969 netif_dbg(adapter, probe, adapter->netdev, 1977 1970 "trying to enable MSI-X, vectors %d\n", msix_vecs); 1978 1971 ··· 2072 2065 2073 2066 static int ena_request_io_irq(struct ena_adapter *adapter) 2074 2067 { 2068 + u32 io_queue_count = adapter->num_io_queues + adapter->xdp_num_queues; 2075 2069 unsigned long flags = 0; 2076 2070 struct ena_irq *irq; 2077 2071 int rc = 0, i, k; ··· 2083 2075 return -EINVAL; 2084 2076 } 2085 2077 2086 - for (i = ENA_IO_IRQ_FIRST_IDX; i < adapter->msix_vecs; i++) { 2078 + for (i = ENA_IO_IRQ_FIRST_IDX; i < ENA_MAX_MSIX_VEC(io_queue_count); i++) { 2087 2079 irq = &adapter->irq_tbl[i]; 2088 2080 rc = request_irq(irq->vector, irq->handler, flags, irq->name, 
2089 2081 irq->data); ··· 2124 2116 2125 2117 static void ena_free_io_irq(struct ena_adapter *adapter) 2126 2118 { 2119 + u32 io_queue_count = adapter->num_io_queues + adapter->xdp_num_queues; 2127 2120 struct ena_irq *irq; 2128 2121 int i; 2129 2122 ··· 2135 2126 } 2136 2127 #endif /* CONFIG_RFS_ACCEL */ 2137 2128 2138 - for (i = ENA_IO_IRQ_FIRST_IDX; i < adapter->msix_vecs; i++) { 2129 + for (i = ENA_IO_IRQ_FIRST_IDX; i < ENA_MAX_MSIX_VEC(io_queue_count); i++) { 2139 2130 irq = &adapter->irq_tbl[i]; 2140 2131 irq_set_affinity_hint(irq->vector, NULL); 2141 2132 free_irq(irq->vector, irq->data); ··· 2150 2141 2151 2142 static void ena_disable_io_intr_sync(struct ena_adapter *adapter) 2152 2143 { 2144 + u32 io_queue_count = adapter->num_io_queues + adapter->xdp_num_queues; 2153 2145 int i; 2154 2146 2155 2147 if (!netif_running(adapter->netdev)) 2156 2148 return; 2157 2149 2158 - for (i = ENA_IO_IRQ_FIRST_IDX; i < adapter->msix_vecs; i++) 2150 + for (i = ENA_IO_IRQ_FIRST_IDX; i < ENA_MAX_MSIX_VEC(io_queue_count); i++) 2159 2151 synchronize_irq(adapter->irq_tbl[i].vector); 2160 2152 } 2161 2153 ··· 3484 3474 3485 3475 mod_timer(&adapter->timer_service, round_jiffies(jiffies + HZ)); 3486 3476 dev_err(&pdev->dev, "Device reset completed successfully\n"); 3477 + adapter->last_keep_alive_jiffies = jiffies; 3487 3478 3488 3479 return rc; 3489 3480 err_disable_msix: ··· 4329 4318 4330 4319 /*****************************************************************************/ 4331 4320 4332 - /* ena_remove - Device Removal Routine 4321 + /* __ena_shutoff - Helper used in both PCI remove/shutdown routines 4333 4322 * @pdev: PCI device information struct 4323 + * @shutdown: Is it a shutdown operation? If false, it is a removal 4334 4324 * 4335 - * ena_remove is called by the PCI subsystem to alert the driver 4336 - * that it should release a PCI device. 
4325 + __ena_shutoff is a helper routine that does the real work on shutdown and 4326 + removal paths; the difference between those paths is whether to 4327 + detach or unregister the netdevice. 4337 4328 */ 4338 - static void ena_remove(struct pci_dev *pdev) 4329 + static void __ena_shutoff(struct pci_dev *pdev, bool shutdown) 4339 4330 { 4340 4331 struct ena_adapter *adapter = pci_get_drvdata(pdev); 4341 4332 struct ena_com_dev *ena_dev; ··· 4356 4343 4357 4344 cancel_work_sync(&adapter->reset_task); 4358 4345 4359 - rtnl_lock(); 4346 + rtnl_lock(); /* lock released inside the below if-else block */ 4360 4347 ena_destroy_device(adapter, true); 4361 - rtnl_unlock(); 4362 - 4363 - unregister_netdev(netdev); 4364 - 4365 - free_netdev(netdev); 4348 + if (shutdown) { 4349 + netif_device_detach(netdev); 4350 + dev_close(netdev); 4351 + rtnl_unlock(); 4352 + } else { 4353 + rtnl_unlock(); 4354 + unregister_netdev(netdev); 4355 + free_netdev(netdev); 4356 + } 4366 4357 4367 4358 ena_com_rss_destroy(ena_dev); ··· 4379 4362 pci_disable_device(pdev); 4380 4363 4381 4364 vfree(ena_dev); 4365 + } 4366 + 4367 + /* ena_remove - Device Removal Routine 4368 + * @pdev: PCI device information struct 4369 + * 4370 + * ena_remove is called by the PCI subsystem to alert the driver 4371 + * that it should release a PCI device. 4372 + */ 4373 + 4374 + static void ena_remove(struct pci_dev *pdev) 4375 + { 4376 + __ena_shutoff(pdev, false); 4377 + } 4378 + 4379 + /* ena_shutdown - Device Shutdown Routine 4380 + * @pdev: PCI device information struct 4381 + * 4382 + * ena_shutdown is called by the PCI subsystem to alert the driver that 4383 + * a shutdown/reboot (or kexec) is happening and the device must be disabled. 
4384 + */ 4385 + 4386 + static void ena_shutdown(struct pci_dev *pdev) 4387 + { 4388 + __ena_shutoff(pdev, true); 4382 4389 } 4383 4390 4384 4391 #ifdef CONFIG_PM ··· 4454 4413 .id_table = ena_pci_tbl, 4455 4414 .probe = ena_probe, 4456 4415 .remove = ena_remove, 4416 + .shutdown = ena_shutdown, 4457 4417 #ifdef CONFIG_PM 4458 4418 .suspend = ena_suspend, 4459 4419 .resume = ena_resume,
+20 -8
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 6860 6860 } 6861 6861 ena |= FUNC_BACKING_STORE_CFG_REQ_DFLT_ENABLES; 6862 6862 rc = bnxt_hwrm_func_backing_store_cfg(bp, ena); 6863 - if (rc) 6863 + if (rc) { 6864 6864 netdev_err(bp->dev, "Failed configuring context mem, rc = %d.\n", 6865 6865 rc); 6866 - else 6867 - ctx->flags |= BNXT_CTX_FLAG_INITED; 6868 - 6866 + return rc; 6867 + } 6868 + ctx->flags |= BNXT_CTX_FLAG_INITED; 6869 6869 return 0; 6870 6870 } 6871 6871 ··· 7384 7384 pri2cos = &resp2->pri0_cos_queue_id; 7385 7385 for (i = 0; i < 8; i++) { 7386 7386 u8 queue_id = pri2cos[i]; 7387 + u8 queue_idx; 7387 7388 7389 + /* Per port queue IDs start from 0, 10, 20, etc */ 7390 + queue_idx = queue_id % 10; 7391 + if (queue_idx > BNXT_MAX_QUEUE) { 7392 + bp->pri2cos_valid = false; 7393 + goto qstats_done; 7394 + } 7388 7395 for (j = 0; j < bp->max_q; j++) { 7389 7396 if (bp->q_ids[j] == queue_id) 7390 - bp->pri2cos[i] = j; 7397 + bp->pri2cos_idx[i] = queue_idx; 7391 7398 } 7392 7399 } 7393 7400 bp->pri2cos_valid = 1; 7394 7401 } 7402 + qstats_done: 7395 7403 mutex_unlock(&bp->hwrm_cmd_lock); 7396 7404 return rc; 7397 7405 } ··· 11652 11644 bp->rx_nr_rings++; 11653 11645 bp->cp_nr_rings++; 11654 11646 } 11647 + if (rc) { 11648 + bp->tx_nr_rings = 0; 11649 + bp->rx_nr_rings = 0; 11650 + } 11655 11651 return rc; 11656 11652 } 11657 11653 ··· 11941 11929 bnxt_hwrm_func_drv_unrgtr(bp); 11942 11930 bnxt_free_hwrm_short_cmd_req(bp); 11943 11931 bnxt_free_hwrm_resources(bp); 11944 - bnxt_free_ctx_mem(bp); 11945 - kfree(bp->ctx); 11946 - bp->ctx = NULL; 11947 11932 kfree(bp->fw_health); 11948 11933 bp->fw_health = NULL; 11949 11934 bnxt_cleanup_pci(bp); 11935 + bnxt_free_ctx_mem(bp); 11936 + kfree(bp->ctx); 11937 + bp->ctx = NULL; 11950 11938 11951 11939 init_err_free: 11952 11940 free_netdev(dev);
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 1718 1718 u16 fw_rx_stats_ext_size; 1719 1719 u16 fw_tx_stats_ext_size; 1720 1720 u16 hw_ring_stats_size; 1721 - u8 pri2cos[8]; 1721 + u8 pri2cos_idx[8]; 1722 1722 u8 pri2cos_valid; 1723 1723 1724 1724 u16 hwrm_max_req_len;
+10 -5
drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
··· 472 472 { 473 473 struct bnxt *bp = netdev_priv(dev); 474 474 struct ieee_ets *my_ets = bp->ieee_ets; 475 + int rc; 475 476 476 477 ets->ets_cap = bp->max_tc; 477 478 478 479 if (!my_ets) { 479 - int rc; 480 - 481 480 if (bp->dcbx_cap & DCB_CAP_DCBX_HOST) 482 481 return 0; 483 482 484 483 my_ets = kzalloc(sizeof(*my_ets), GFP_KERNEL); 485 484 if (!my_ets) 486 - return 0; 485 + return -ENOMEM; 487 486 rc = bnxt_hwrm_queue_cos2bw_qcfg(bp, my_ets); 488 487 if (rc) 489 - return 0; 488 + goto error; 490 489 rc = bnxt_hwrm_queue_pri2cos_qcfg(bp, my_ets); 491 490 if (rc) 492 - return 0; 491 + goto error; 492 + 493 + /* cache result */ 494 + bp->ieee_ets = my_ets; 493 495 } 494 496 495 497 ets->cbs = my_ets->cbs; ··· 500 498 memcpy(ets->tc_tsa, my_ets->tc_tsa, sizeof(ets->tc_tsa)); 501 499 memcpy(ets->prio_tc, my_ets->prio_tc, sizeof(ets->prio_tc)); 502 500 return 0; 501 + error: 502 + kfree(my_ets); 503 + return rc; 503 504 } 504 505 505 506 static int bnxt_dcbnl_ieee_setets(struct net_device *dev, struct ieee_ets *ets)
+4 -4
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
··· 589 589 if (bp->pri2cos_valid) { 590 590 for (i = 0; i < 8; i++, j++) { 591 591 long n = bnxt_rx_bytes_pri_arr[i].base_off + 592 - bp->pri2cos[i]; 592 + bp->pri2cos_idx[i]; 593 593 594 594 buf[j] = le64_to_cpu(*(rx_port_stats_ext + n)); 595 595 } 596 596 for (i = 0; i < 8; i++, j++) { 597 597 long n = bnxt_rx_pkts_pri_arr[i].base_off + 598 - bp->pri2cos[i]; 598 + bp->pri2cos_idx[i]; 599 599 600 600 buf[j] = le64_to_cpu(*(rx_port_stats_ext + n)); 601 601 } 602 602 for (i = 0; i < 8; i++, j++) { 603 603 long n = bnxt_tx_bytes_pri_arr[i].base_off + 604 - bp->pri2cos[i]; 604 + bp->pri2cos_idx[i]; 605 605 606 606 buf[j] = le64_to_cpu(*(tx_port_stats_ext + n)); 607 607 } 608 608 for (i = 0; i < 8; i++, j++) { 609 609 long n = bnxt_tx_pkts_pri_arr[i].base_off + 610 - bp->pri2cos[i]; 610 + bp->pri2cos_idx[i]; 611 611 612 612 buf[j] = le64_to_cpu(*(tx_port_stats_ext + n)); 613 613 }
+42 -100
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 95 95 bcmgenet_writel(value, d + DMA_DESC_LENGTH_STATUS); 96 96 } 97 97 98 - static inline u32 dmadesc_get_length_status(struct bcmgenet_priv *priv, 99 - void __iomem *d) 100 - { 101 - return bcmgenet_readl(d + DMA_DESC_LENGTH_STATUS); 102 - } 103 - 104 98 static inline void dmadesc_set_addr(struct bcmgenet_priv *priv, 105 99 void __iomem *d, 106 100 dma_addr_t addr) ··· 503 509 return phy_ethtool_ksettings_set(dev->phydev, cmd); 504 510 } 505 511 506 - static void bcmgenet_set_rx_csum(struct net_device *dev, 507 - netdev_features_t wanted) 508 - { 509 - struct bcmgenet_priv *priv = netdev_priv(dev); 510 - u32 rbuf_chk_ctrl; 511 - bool rx_csum_en; 512 - 513 - rx_csum_en = !!(wanted & NETIF_F_RXCSUM); 514 - 515 - rbuf_chk_ctrl = bcmgenet_rbuf_readl(priv, RBUF_CHK_CTRL); 516 - 517 - /* enable rx checksumming */ 518 - if (rx_csum_en) 519 - rbuf_chk_ctrl |= RBUF_RXCHK_EN | RBUF_L3_PARSE_DIS; 520 - else 521 - rbuf_chk_ctrl &= ~RBUF_RXCHK_EN; 522 - priv->desc_rxchk_en = rx_csum_en; 523 - 524 - /* If UniMAC forwards CRC, we need to skip over it to get 525 - * a valid CHK bit to be set in the per-packet status word 526 - */ 527 - if (rx_csum_en && priv->crc_fwd_en) 528 - rbuf_chk_ctrl |= RBUF_SKIP_FCS; 529 - else 530 - rbuf_chk_ctrl &= ~RBUF_SKIP_FCS; 531 - 532 - bcmgenet_rbuf_writel(priv, rbuf_chk_ctrl, RBUF_CHK_CTRL); 533 - } 534 - 535 - static void bcmgenet_set_tx_csum(struct net_device *dev, 536 - netdev_features_t wanted) 537 - { 538 - struct bcmgenet_priv *priv = netdev_priv(dev); 539 - bool desc_64b_en; 540 - u32 tbuf_ctrl, rbuf_ctrl; 541 - 542 - tbuf_ctrl = bcmgenet_tbuf_ctrl_get(priv); 543 - rbuf_ctrl = bcmgenet_rbuf_readl(priv, RBUF_CTRL); 544 - 545 - desc_64b_en = !!(wanted & NETIF_F_HW_CSUM); 546 - 547 - /* enable 64 bytes descriptor in both directions (RBUF and TBUF) */ 548 - if (desc_64b_en) { 549 - tbuf_ctrl |= RBUF_64B_EN; 550 - rbuf_ctrl |= RBUF_64B_EN; 551 - } else { 552 - tbuf_ctrl &= ~RBUF_64B_EN; 553 - rbuf_ctrl &= ~RBUF_64B_EN; 554 - } 555 - 
priv->desc_64b_en = desc_64b_en; 556 - 557 - bcmgenet_tbuf_ctrl_set(priv, tbuf_ctrl); 558 - bcmgenet_rbuf_writel(priv, rbuf_ctrl, RBUF_CTRL); 559 - } 560 - 561 512 static int bcmgenet_set_features(struct net_device *dev, 562 513 netdev_features_t features) 563 514 { ··· 517 578 /* Make sure we reflect the value of CRC_CMD_FWD */ 518 579 reg = bcmgenet_umac_readl(priv, UMAC_CMD); 519 580 priv->crc_fwd_en = !!(reg & CMD_CRC_FWD); 520 - 521 - bcmgenet_set_tx_csum(dev, features); 522 - bcmgenet_set_rx_csum(dev, features); 523 581 524 582 clk_disable_unprepare(priv->clk); 525 583 ··· 1410 1474 /* Reallocate the SKB to put enough headroom in front of it and insert 1411 1475 * the transmit checksum offsets in the descriptors 1412 1476 */ 1413 - static struct sk_buff *bcmgenet_put_tx_csum(struct net_device *dev, 1414 - struct sk_buff *skb) 1477 + static struct sk_buff *bcmgenet_add_tsb(struct net_device *dev, 1478 + struct sk_buff *skb) 1415 1479 { 1416 1480 struct bcmgenet_priv *priv = netdev_priv(dev); 1417 1481 struct status_64 *status = NULL; ··· 1525 1589 */ 1526 1590 GENET_CB(skb)->bytes_sent = skb->len; 1527 1591 1528 - /* set the SKB transmit checksum */ 1529 - if (priv->desc_64b_en) { 1530 - skb = bcmgenet_put_tx_csum(dev, skb); 1531 - if (!skb) { 1532 - ret = NETDEV_TX_OK; 1533 - goto out; 1534 - } 1592 + /* add the Transmit Status Block */ 1593 + skb = bcmgenet_add_tsb(dev, skb); 1594 + if (!skb) { 1595 + ret = NETDEV_TX_OK; 1596 + goto out; 1535 1597 } 1536 1598 1537 1599 for (i = 0; i <= nr_frags; i++) { ··· 1708 1774 1709 1775 while ((rxpktprocessed < rxpkttoprocess) && 1710 1776 (rxpktprocessed < budget)) { 1777 + struct status_64 *status; 1778 + __be16 rx_csum; 1779 + 1711 1780 cb = &priv->rx_cbs[ring->read_ptr]; 1712 1781 skb = bcmgenet_rx_refill(priv, cb); 1713 1782 ··· 1719 1782 goto next; 1720 1783 } 1721 1784 1722 - if (!priv->desc_64b_en) { 1723 - dma_length_status = 1724 - dmadesc_get_length_status(priv, cb->bd_addr); 1725 - } else { 1726 - struct 
status_64 *status; 1727 - __be16 rx_csum; 1728 - 1729 - status = (struct status_64 *)skb->data; 1730 - dma_length_status = status->length_status; 1785 + status = (struct status_64 *)skb->data; 1786 + dma_length_status = status->length_status; 1787 + if (dev->features & NETIF_F_RXCSUM) { 1731 1788 rx_csum = (__force __be16)(status->rx_csum & 0xffff); 1732 - if (priv->desc_rxchk_en) { 1733 - skb->csum = (__force __wsum)ntohs(rx_csum); 1734 - skb->ip_summed = CHECKSUM_COMPLETE; 1735 - } 1789 + skb->csum = (__force __wsum)ntohs(rx_csum); 1790 + skb->ip_summed = CHECKSUM_COMPLETE; 1736 1791 } 1737 1792 1738 1793 /* DMA flags and length are still valid no matter how ··· 1768 1839 } /* error packet */ 1769 1840 1770 1841 skb_put(skb, len); 1771 - if (priv->desc_64b_en) { 1772 - skb_pull(skb, 64); 1773 - len -= 64; 1774 - } 1775 1842 1776 - /* remove hardware 2bytes added for IP alignment */ 1777 - skb_pull(skb, 2); 1778 - len -= 2; 1843 + /* remove RSB and hardware 2bytes added for IP alignment */ 1844 + skb_pull(skb, 66); 1845 + len -= 66; 1779 1846 1780 1847 if (priv->crc_fwd_en) { 1781 1848 skb_trim(skb, len - ETH_FCS_LEN); ··· 1889 1964 u32 reg; 1890 1965 1891 1966 reg = bcmgenet_umac_readl(priv, UMAC_CMD); 1967 + if (reg & CMD_SW_RESET) 1968 + return; 1892 1969 if (enable) 1893 1970 reg |= mask; 1894 1971 else ··· 1910 1983 bcmgenet_rbuf_ctrl_set(priv, 0); 1911 1984 udelay(10); 1912 1985 1913 - /* disable MAC while updating its registers */ 1914 - bcmgenet_umac_writel(priv, 0, UMAC_CMD); 1915 - 1916 - /* issue soft reset with (rg)mii loopback to ensure a stable rxclk */ 1917 - bcmgenet_umac_writel(priv, CMD_SW_RESET | CMD_LCL_LOOP_EN, UMAC_CMD); 1986 + /* issue soft reset and disable MAC while updating its registers */ 1987 + bcmgenet_umac_writel(priv, CMD_SW_RESET, UMAC_CMD); 1988 + udelay(2); 1918 1989 } 1919 1990 1920 1991 static void bcmgenet_intr_disable(struct bcmgenet_priv *priv) ··· 1962 2037 1963 2038 bcmgenet_umac_writel(priv, ENET_MAX_MTU_SIZE, 
UMAC_MAX_FRAME_LEN); 1964 2039 1965 - /* init rx registers, enable ip header optimization */ 2040 + /* init tx registers, enable TSB */ 2041 + reg = bcmgenet_tbuf_ctrl_get(priv); 2042 + reg |= TBUF_64B_EN; 2043 + bcmgenet_tbuf_ctrl_set(priv, reg); 2044 + 2045 + /* init rx registers, enable ip header optimization and RSB */ 1966 2046 reg = bcmgenet_rbuf_readl(priv, RBUF_CTRL); 1967 - reg |= RBUF_ALIGN_2B; 2047 + reg |= RBUF_ALIGN_2B | RBUF_64B_EN; 1968 2048 bcmgenet_rbuf_writel(priv, reg, RBUF_CTRL); 2049 + 2050 + /* enable rx checksumming */ 2051 + reg = bcmgenet_rbuf_readl(priv, RBUF_CHK_CTRL); 2052 + reg |= RBUF_RXCHK_EN | RBUF_L3_PARSE_DIS; 2053 + /* If UniMAC forwards CRC, we need to skip over it to get 2054 + * a valid CHK bit to be set in the per-packet status word 2055 + */ 2056 + if (priv->crc_fwd_en) 2057 + reg |= RBUF_SKIP_FCS; 2058 + else 2059 + reg &= ~RBUF_SKIP_FCS; 2060 + bcmgenet_rbuf_writel(priv, reg, RBUF_CHK_CTRL); 1969 2061 1970 2062 if (!GENET_IS_V1(priv) && !GENET_IS_V2(priv)) 1971 2063 bcmgenet_rbuf_writel(priv, 1, RBUF_TBUF_SIZE_CTRL);
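The bcmgenet.c rework above folds the old per-feature checksum toggling into init, with a conditional read-modify-write of `RBUF_CHK_CTRL`. A sketch of that register update in isolation; the bit positions here are made up, only the shape of the update mirrors the driver:

```c
#include <stdint.h>

/* Illustrative bit values, not the real RBUF_CHK_CTRL layout. */
#define RXCHK_EN     (1u << 0)
#define L3_PARSE_DIS (1u << 1)
#define SKIP_FCS     (1u << 4)

static uint32_t rbuf_chk_ctrl(uint32_t reg, int crc_fwd_en)
{
	reg |= RXCHK_EN | L3_PARSE_DIS;
	/* If the MAC forwards the CRC, skip over it so the per-packet
	 * CHK bit in the status word stays valid. */
	if (crc_fwd_en)
		reg |= SKIP_FCS;
	else
		reg &= ~SKIP_FCS;
	return reg;
}
```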
+1 -2
drivers/net/ethernet/broadcom/genet/bcmgenet.h
··· 273 273 #define RBUF_FLTR_LEN_SHIFT 8 274 274 275 275 #define TBUF_CTRL 0x00 276 + #define TBUF_64B_EN (1 << 0) 276 277 #define TBUF_BP_MC 0x0C 277 278 #define TBUF_ENERGY_CTRL 0x14 278 279 #define TBUF_EEE_EN (1 << 0) ··· 663 662 unsigned int irq0_stat; 664 663 665 664 /* HW descriptors/checksum variables */ 666 - bool desc_64b_en; 667 - bool desc_rxchk_en; 668 665 bool crc_fwd_en; 669 666 670 667 u32 dma_max_burst_length;
+5 -1
drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
··· 132 132 return -EINVAL; 133 133 } 134 134 135 - /* disable RX */ 135 + /* Can't suspend with WoL if MAC is still in reset */ 136 136 reg = bcmgenet_umac_readl(priv, UMAC_CMD); 137 + if (reg & CMD_SW_RESET) 138 + reg &= ~CMD_SW_RESET; 139 + 140 + /* disable RX */ 137 141 reg &= ~CMD_RX_EN; 138 142 bcmgenet_umac_writel(priv, reg, UMAC_CMD); 139 143 mdelay(10);
+6 -34
drivers/net/ethernet/broadcom/genet/bcmmii.c
··· 95 95 CMD_HD_EN | 96 96 CMD_RX_PAUSE_IGNORE | CMD_TX_PAUSE_IGNORE); 97 97 reg |= cmd_bits; 98 + if (reg & CMD_SW_RESET) { 99 + reg &= ~CMD_SW_RESET; 100 + bcmgenet_umac_writel(priv, reg, UMAC_CMD); 101 + udelay(2); 102 + reg |= CMD_TX_EN | CMD_RX_EN; 103 + } 98 104 bcmgenet_umac_writel(priv, reg, UMAC_CMD); 99 105 } else { 100 106 /* done if nothing has changed */ ··· 187 181 const char *phy_name = NULL; 188 182 u32 id_mode_dis = 0; 189 183 u32 port_ctrl; 190 - int bmcr = -1; 191 - int ret; 192 184 u32 reg; 193 - 194 - /* MAC clocking workaround during reset of umac state machines */ 195 - reg = bcmgenet_umac_readl(priv, UMAC_CMD); 196 - if (reg & CMD_SW_RESET) { 197 - /* An MII PHY must be isolated to prevent TXC contention */ 198 - if (priv->phy_interface == PHY_INTERFACE_MODE_MII) { 199 - ret = phy_read(phydev, MII_BMCR); 200 - if (ret >= 0) { 201 - bmcr = ret; 202 - ret = phy_write(phydev, MII_BMCR, 203 - bmcr | BMCR_ISOLATE); 204 - } 205 - if (ret) { 206 - netdev_err(dev, "failed to isolate PHY\n"); 207 - return ret; 208 - } 209 - } 210 - /* Switch MAC clocking to RGMII generated clock */ 211 - bcmgenet_sys_writel(priv, PORT_MODE_EXT_GPHY, SYS_PORT_CTRL); 212 - /* Ensure 5 clks with Rx disabled 213 - * followed by 5 clks with Reset asserted 214 - */ 215 - udelay(4); 216 - reg &= ~(CMD_SW_RESET | CMD_LCL_LOOP_EN); 217 - bcmgenet_umac_writel(priv, reg, UMAC_CMD); 218 - /* Ensure 5 more clocks before Rx is enabled */ 219 - udelay(2); 220 - } 221 185 222 186 switch (priv->phy_interface) { 223 187 case PHY_INTERFACE_MODE_INTERNAL: ··· 257 281 } 258 282 259 283 bcmgenet_sys_writel(priv, port_ctrl, SYS_PORT_CTRL); 260 - 261 - /* Restore the MII PHY after isolation */ 262 - if (bmcr >= 0) 263 - phy_write(phydev, MII_BMCR, bmcr); 264 284 265 285 priv->ext_phy = !priv->internal_phy && 266 286 (priv->phy_interface != PHY_INTERFACE_MODE_MOCA);
+2 -2
drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
··· 1033 1033 adapter->tids.tid_tab[i]; 1034 1034 1035 1035 if (f && (f->valid || f->pending)) 1036 - cxgb4_del_filter(dev, i, &f->fs); 1036 + cxgb4_del_filter(dev, f->tid, &f->fs); 1037 1037 } 1038 1038 1039 1039 sb = t4_read_reg(adapter, LE_DB_SRVR_START_INDEX_A); ··· 1041 1041 f = (struct filter_entry *)adapter->tids.tid_tab[i]; 1042 1042 1043 1043 if (f && (f->valid || f->pending)) 1044 - cxgb4_del_filter(dev, i, &f->fs); 1044 + cxgb4_del_filter(dev, f->tid, &f->fs); 1045 1045 } 1046 1046 } 1047 1047 }
+3
drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c
··· 246 246 FW_PTP_CMD_PORTID_V(0)); 247 247 c.retval_len16 = cpu_to_be32(FW_CMD_LEN16_V(sizeof(c) / 16)); 248 248 c.u.ts.sc = FW_PTP_SC_ADJ_FTIME; 249 + c.u.ts.sign = (delta < 0) ? 1 : 0; 250 + if (delta < 0) 251 + delta = -delta; 249 252 c.u.ts.tm = cpu_to_be64(delta); 250 253 251 254 err = t4_wr_mbox(adapter, adapter->mbox, &c, sizeof(c), NULL);
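The cxgb4_ptp.c fix splits a signed clock adjustment into a separate sign flag plus an unsigned magnitude, since the firmware's `tm` field carries no sign. A hedged sketch of that encoding (`ptp_adj`/`encode_delta` are illustrative names):

```c
#include <stdint.h>

struct ptp_adj {
	uint8_t sign;  /* 1 = negative adjustment */
	uint64_t mag;
};

static struct ptp_adj encode_delta(int64_t delta)
{
	struct ptp_adj a;

	a.sign = (delta < 0) ? 1 : 0;
	if (delta < 0)
		delta = -delta; /* firmware field is unsigned magnitude */
	a.mag = (uint64_t)delta;
	return a;
}
```

Before the fix, a negative delta was byte-swapped into the unsigned field as-is, producing a huge positive adjustment.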
+10 -42
drivers/net/ethernet/chelsio/cxgb4/sge.c
··· 1307 1307 int t4_sge_eth_txq_egress_update(struct adapter *adap, struct sge_eth_txq *eq, 1308 1308 int maxreclaim) 1309 1309 { 1310 + unsigned int reclaimed, hw_cidx; 1310 1311 struct sge_txq *q = &eq->q; 1311 - unsigned int reclaimed; 1312 + int hw_in_use; 1312 1313 1313 1314 if (!q->in_use || !__netif_tx_trylock(eq->txq)) 1314 1315 return 0; ··· 1317 1316 /* Reclaim pending completed TX Descriptors. */ 1318 1317 reclaimed = reclaim_completed_tx(adap, &eq->q, maxreclaim, true); 1319 1318 1319 + hw_cidx = ntohs(READ_ONCE(q->stat->cidx)); 1320 + hw_in_use = q->pidx - hw_cidx; 1321 + if (hw_in_use < 0) 1322 + hw_in_use += q->size; 1323 + 1320 1324 /* If the TX Queue is currently stopped and there's now more than half 1321 1325 * the queue available, restart it. Otherwise bail out since the rest 1322 1326 * of what we want do here is with the possibility of shipping any 1323 1327 * currently buffered Coalesced TX Work Request. 1324 1328 */ 1325 - if (netif_tx_queue_stopped(eq->txq) && txq_avail(q) > (q->size / 2)) { 1329 + if (netif_tx_queue_stopped(eq->txq) && hw_in_use < (q->size / 2)) { 1326 1330 netif_tx_wake_queue(eq->txq); 1327 1331 eq->q.restarts++; 1328 1332 } ··· 1497 1491 * has opened up. 1498 1492 */ 1499 1493 eth_txq_stop(q); 1500 - 1501 - /* If we're using the SGE Doorbell Queue Timer facility, we 1502 - * don't need to ask the Firmware to send us Egress Queue CIDX 1503 - * Updates: the Hardware will do this automatically. And 1504 - * since we send the Ingress Queue CIDX Updates to the 1505 - * corresponding Ethernet Response Queue, we'll get them very 1506 - * quickly. 1507 - */ 1508 - if (!q->dbqt) 1509 - wr_mid |= FW_WR_EQUEQ_F | FW_WR_EQUIQ_F; 1494 + wr_mid |= FW_WR_EQUEQ_F | FW_WR_EQUIQ_F; 1510 1495 } 1511 1496 1512 1497 wr = (void *)&q->q.desc[q->q.pidx]; ··· 1807 1810 * has opened up. 
1808 1811 */ 1809 1812 eth_txq_stop(txq); 1810 - 1811 - /* If we're using the SGE Doorbell Queue Timer facility, we 1812 - * don't need to ask the Firmware to send us Egress Queue CIDX 1813 - * Updates: the Hardware will do this automatically. And 1814 - * since we send the Ingress Queue CIDX Updates to the 1815 - * corresponding Ethernet Response Queue, we'll get them very 1816 - * quickly. 1817 - */ 1818 - if (!txq->dbqt) 1819 - wr_mid |= FW_WR_EQUEQ_F | FW_WR_EQUIQ_F; 1813 + wr_mid |= FW_WR_EQUEQ_F | FW_WR_EQUIQ_F; 1820 1814 } 1821 1815 1822 1816 /* Start filling in our Work Request. Note that we do _not_ handle ··· 3363 3375 } 3364 3376 3365 3377 txq = &s->ethtxq[pi->first_qset + rspq->idx]; 3366 - 3367 - /* We've got the Hardware Consumer Index Update in the Egress Update 3368 - * message. If we're using the SGE Doorbell Queue Timer mechanism, 3369 - * these Egress Update messages will be our sole CIDX Updates we get 3370 - * since we don't want to chew up PCIe bandwidth for both Ingress 3371 - * Messages and Status Page writes. However, The code which manages 3372 - * reclaiming successfully DMA'ed TX Work Requests uses the CIDX value 3373 - * stored in the Status Page at the end of the TX Queue. It's easiest 3374 - * to simply copy the CIDX Update value from the Egress Update message 3375 - * to the Status Page. Also note that no Endian issues need to be 3376 - * considered here since both are Big Endian and we're just copying 3377 - * bytes consistently ... 3378 - */ 3379 - if (txq->dbqt) { 3380 - struct cpl_sge_egr_update *egr; 3381 - 3382 - egr = (struct cpl_sge_egr_update *)rsp; 3383 - WRITE_ONCE(txq->q.stat->cidx, egr->cidx); 3384 - } 3385 - 3386 3378 t4_sge_eth_txq_egress_update(adapter, txq, -1); 3387 3379 } 3388 3380
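The sge.c change above gauges queue fullness from the hardware's own consumer index (`pidx - hw_cidx`, corrected for wraparound) rather than the software view. The modular arithmetic can be sketched as:

```c
/* In-flight descriptors in a ring of `size` entries, given the software
 * producer index and the hardware consumer index. */
static int ring_in_use(int pidx, int hw_cidx, int size)
{
	int in_use = pidx - hw_cidx;

	if (in_use < 0)
		in_use += size; /* producer has wrapped past the consumer */
	return in_use;
}
```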
+1 -1
drivers/net/ethernet/freescale/fman/fman_memac.c
··· 778 778 /* Set full duplex */ 779 779 tmp &= ~IF_MODE_HD; 780 780 781 - if (memac->phy_if == PHY_INTERFACE_MODE_RGMII) { 781 + if (phy_interface_mode_is_rgmii(memac->phy_if)) { 782 782 /* Configure RGMII in manual mode */ 783 783 tmp &= ~IF_MODE_RGMII_AUTO; 784 784 tmp &= ~IF_MODE_RGMII_SP_MASK;
+4 -1
drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
··· 389 389 390 390 spin_unlock_bh(&cmdq->cmdq_lock); 391 391 392 - if (!wait_for_completion_timeout(&done, CMDQ_TIMEOUT)) { 392 + if (!wait_for_completion_timeout(&done, 393 + msecs_to_jiffies(CMDQ_TIMEOUT))) { 393 394 spin_lock_bh(&cmdq->cmdq_lock); 394 395 395 396 if (cmdq->errcode[curr_prod_idx] == &errcode) ··· 623 622 624 623 if (!CMDQ_WQE_COMPLETED(be32_to_cpu(ctrl->ctrl_info))) 625 624 return -EBUSY; 625 + 626 + dma_rmb(); 626 627 627 628 errcode = CMDQ_WQE_ERRCODE_GET(be32_to_cpu(status->status_info), VAL); 628 629
+2 -49
drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
··· 360 360 return -EFAULT; 361 361 } 362 362 363 - static int wait_for_io_stopped(struct hinic_hwdev *hwdev) 364 - { 365 - struct hinic_cmd_io_status cmd_io_status; 366 - struct hinic_hwif *hwif = hwdev->hwif; 367 - struct pci_dev *pdev = hwif->pdev; 368 - struct hinic_pfhwdev *pfhwdev; 369 - unsigned long end; 370 - u16 out_size; 371 - int err; 372 - 373 - if (!HINIC_IS_PF(hwif) && !HINIC_IS_PPF(hwif)) { 374 - dev_err(&pdev->dev, "Unsupported PCI Function type\n"); 375 - return -EINVAL; 376 - } 377 - 378 - pfhwdev = container_of(hwdev, struct hinic_pfhwdev, hwdev); 379 - 380 - cmd_io_status.func_idx = HINIC_HWIF_FUNC_IDX(hwif); 381 - 382 - end = jiffies + msecs_to_jiffies(IO_STATUS_TIMEOUT); 383 - do { 384 - err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM, 385 - HINIC_COMM_CMD_IO_STATUS_GET, 386 - &cmd_io_status, sizeof(cmd_io_status), 387 - &cmd_io_status, &out_size, 388 - HINIC_MGMT_MSG_SYNC); 389 - if ((err) || (out_size != sizeof(cmd_io_status))) { 390 - dev_err(&pdev->dev, "Failed to get IO status, ret = %d\n", 391 - err); 392 - return err; 393 - } 394 - 395 - if (cmd_io_status.status == IO_STOPPED) { 396 - dev_info(&pdev->dev, "IO stopped\n"); 397 - return 0; 398 - } 399 - 400 - msleep(20); 401 - } while (time_before(jiffies, end)); 402 - 403 - dev_err(&pdev->dev, "Wait for IO stopped - Timeout\n"); 404 - return -ETIMEDOUT; 405 - } 406 - 407 363 /** 408 364 * clear_io_resource - set the IO resources as not active in the NIC 409 365 * @hwdev: the NIC HW device ··· 379 423 return -EINVAL; 380 424 } 381 425 382 - err = wait_for_io_stopped(hwdev); 383 - if (err) { 384 - dev_err(&pdev->dev, "IO has not stopped yet\n"); 385 - return err; 386 - } 426 + /* sleep 100ms to wait for firmware stopping I/O */ 427 + msleep(100); 387 428 388 429 cmd_clear_io_res.func_idx = HINIC_HWIF_FUNC_IDX(hwif); 389 430
+19 -7
drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c
··· 188 188 * eq_update_ci - update the HW cons idx of event queue 189 189 * @eq: the event queue to update the cons idx for 190 190 **/ 191 - static void eq_update_ci(struct hinic_eq *eq) 191 + static void eq_update_ci(struct hinic_eq *eq, u32 arm_state) 192 192 { 193 193 u32 val, addr = EQ_CONS_IDX_REG_ADDR(eq); 194 194 ··· 202 202 203 203 val |= HINIC_EQ_CI_SET(eq->cons_idx, IDX) | 204 204 HINIC_EQ_CI_SET(eq->wrapped, WRAPPED) | 205 - HINIC_EQ_CI_SET(EQ_ARMED, INT_ARMED); 205 + HINIC_EQ_CI_SET(arm_state, INT_ARMED); 206 206 207 207 val |= HINIC_EQ_CI_SET(eq_cons_idx_checksum_set(val), XOR_CHKSUM); 208 208 ··· 234 234 /* HW toggles the wrapped bit, when it adds eq element */ 235 235 if (HINIC_EQ_ELEM_DESC_GET(aeqe_desc, WRAPPED) == eq->wrapped) 236 236 break; 237 + 238 + dma_rmb(); 237 239 238 240 event = HINIC_EQ_ELEM_DESC_GET(aeqe_desc, TYPE); 239 241 if (event >= HINIC_MAX_AEQ_EVENTS) { ··· 349 347 else if (eq->type == HINIC_CEQ) 350 348 ceq_irq_handler(eq); 351 349 352 - eq_update_ci(eq); 350 + eq_update_ci(eq, EQ_ARMED); 353 351 } 354 352 355 353 /** ··· 704 702 } 705 703 706 704 set_eq_ctrls(eq); 707 - eq_update_ci(eq); 705 + eq_update_ci(eq, EQ_ARMED); 708 706 709 707 err = alloc_eq_pages(eq); 710 708 if (err) { ··· 754 752 **/ 755 753 static void remove_eq(struct hinic_eq *eq) 756 754 { 757 - struct msix_entry *entry = &eq->msix_entry; 758 - 759 - free_irq(entry->vector, eq); 755 + hinic_set_msix_state(eq->hwif, eq->msix_entry.entry, 756 + HINIC_MSIX_DISABLE); 757 + free_irq(eq->msix_entry.vector, eq); 760 758 761 759 if (eq->type == HINIC_AEQ) { 762 760 struct hinic_eq_work *aeq_work = &eq->aeq_work; 763 761 764 762 cancel_work_sync(&aeq_work->work); 763 + /* clear aeq_len to avoid hw access host memory */ 764 + hinic_hwif_write_reg(eq->hwif, 765 + HINIC_CSR_AEQ_CTRL_1_ADDR(eq->q_id), 0); 765 766 } else if (eq->type == HINIC_CEQ) { 766 767 tasklet_kill(&eq->ceq_tasklet); 768 + /* clear ceq_len to avoid hw access host memory */ 769 + 
hinic_hwif_write_reg(eq->hwif, 770 + HINIC_CSR_CEQ_CTRL_1_ADDR(eq->q_id), 0); 767 771 } 772 + 773 + /* update cons_idx to avoid invalid interrupt */ 774 + eq->cons_idx = hinic_hwif_read_reg(eq->hwif, EQ_PROD_IDX_REG_ADDR(eq)); 775 + eq_update_ci(eq, EQ_NOT_ARMED); 768 776 769 777 free_eq_pages(eq); 770 778 }
+3 -2
drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
··· 43 43 44 44 #define MSG_NOT_RESP 0xFFFF 45 45 46 - #define MGMT_MSG_TIMEOUT 1000 46 + #define MGMT_MSG_TIMEOUT 5000 47 47 48 48 #define mgmt_to_pfhwdev(pf_mgmt) \ 49 49 container_of(pf_mgmt, struct hinic_pfhwdev, pf_to_mgmt) ··· 267 267 goto unlock_sync_msg; 268 268 } 269 269 270 - if (!wait_for_completion_timeout(recv_done, MGMT_MSG_TIMEOUT)) { 270 + if (!wait_for_completion_timeout(recv_done, 271 + msecs_to_jiffies(MGMT_MSG_TIMEOUT))) { 271 272 dev_err(&pdev->dev, "MGMT timeout, MSG id = %d\n", msg_id); 272 273 err = -ETIMEDOUT; 273 274 goto unlock_sync_msg;
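Both hinic timeout fixes (here and in hinic_hw_cmdq.c above) wrap the millisecond constant in `msecs_to_jiffies()` before passing it to `wait_for_completion_timeout()`, which expects ticks, not milliseconds. A userspace sketch of the conversion under an assumed fixed tick rate; the kernel helper likewise rounds up so a nonzero timeout never truncates to zero:

```c
#define FAKE_HZ 100 /* ticks per second; illustrative, not the kernel's HZ */

static unsigned long ms_to_ticks(unsigned int ms)
{
	/* round up: a 1 ms timeout must still be at least one tick */
	return ((unsigned long)ms * FAKE_HZ + 999) / 1000;
}
```

Passing raw milliseconds where jiffies are expected makes timeouts HZ-dependent: at HZ=100, the old `5000` meant 50 seconds, not 5.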
+3
drivers/net/ethernet/huawei/hinic/hinic_rx.c
··· 350 350 if (!rq_wqe) 351 351 break; 352 352 353 + /* make sure we read rx_done before packet length */ 354 + dma_rmb(); 355 + 353 356 cqe = rq->cqe[ci]; 354 357 status = be32_to_cpu(cqe->status); 355 358 hinic_rq_get_sge(rxq->rq, rq_wqe, ci, &sge);
+3 -1
drivers/net/ethernet/huawei/hinic/hinic_tx.c
··· 45 45 46 46 #define HW_CONS_IDX(sq) be16_to_cpu(*(u16 *)((sq)->hw_ci_addr)) 47 47 48 - #define MIN_SKB_LEN 17 48 + #define MIN_SKB_LEN 32 49 49 50 50 #define MAX_PAYLOAD_OFFSET 221 51 51 #define TRANSPORT_OFFSET(l4_hdr, skb) ((u32)((l4_hdr) - (skb)->data)) ··· 621 621 622 622 do { 623 623 hw_ci = HW_CONS_IDX(sq) & wq->mask; 624 + 625 + dma_rmb(); 624 626 625 627 /* Reading a WQEBB to get real WQE size and consumer index. */ 626 628 sq_wqe = hinic_sq_read_wqebb(sq, &skb, &wqe_size, &sw_ci);
+2 -2
drivers/net/ethernet/marvell/mvmdio.c
··· 364 364 writel(MVMDIO_ERR_INT_SMI_DONE, 365 365 dev->regs + MVMDIO_ERR_INT_MASK); 366 366 367 - } else if (dev->err_interrupt < 0) { 368 - ret = dev->err_interrupt; 367 + } else if (dev->err_interrupt == -EPROBE_DEFER) { 368 + ret = -EPROBE_DEFER; 369 369 goto out_mdio; 370 370 } 371 371
+1 -2
drivers/net/ethernet/marvell/mvneta.c
··· 3109 3109 /* For the case where the last mvneta_poll did not process all 3110 3110 * RX packets 3111 3111 */ 3112 - rx_queue = fls(((cause_rx_tx >> 8) & 0xff)); 3113 - 3114 3112 cause_rx_tx |= pp->neta_armada3700 ? pp->cause_rx_tx : 3115 3113 port->cause_rx_tx; 3116 3114 3115 + rx_queue = fls(((cause_rx_tx >> 8) & 0xff)); 3117 3116 if (rx_queue) { 3118 3117 rx_queue = rx_queue - 1; 3119 3118 if (pp->bm_priv)
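The mvneta.c fix moves the `fls()` call after the merge with the saved cause bits, so RX queues left over from the previous poll still get serviced. A sketch, with a one-based find-last-set stand-in for the kernel's `fls()`:

```c
static int fls32(unsigned int x)
{
	return x ? 32 - __builtin_clz(x) : 0;
}

/* Highest pending RX queue, or -1 if none. Merging the saved bits
 * *before* fls() is the whole point of the fix. */
static int highest_rx_queue(unsigned int cause_now, unsigned int cause_saved)
{
	unsigned int cause = cause_now | cause_saved;
	int q = fls32((cause >> 8) & 0xff);

	return q ? q - 1 : -1;
}
```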
+31 -31
drivers/net/ethernet/mellanox/mlx4/mcg.c
··· 906 906 int len = 0; 907 907 908 908 mlx4_err(dev, "%s", str); 909 - len += snprintf(buf + len, BUF_SIZE - len, 910 - "port = %d prio = 0x%x qp = 0x%x ", 911 - rule->port, rule->priority, rule->qpn); 909 + len += scnprintf(buf + len, BUF_SIZE - len, 910 + "port = %d prio = 0x%x qp = 0x%x ", 911 + rule->port, rule->priority, rule->qpn); 912 912 913 913 list_for_each_entry(cur, &rule->list, list) { 914 914 switch (cur->id) { 915 915 case MLX4_NET_TRANS_RULE_ID_ETH: 916 - len += snprintf(buf + len, BUF_SIZE - len, 917 - "dmac = %pM ", &cur->eth.dst_mac); 916 + len += scnprintf(buf + len, BUF_SIZE - len, 917 + "dmac = %pM ", &cur->eth.dst_mac); 918 918 if (cur->eth.ether_type) 919 - len += snprintf(buf + len, BUF_SIZE - len, 920 - "ethertype = 0x%x ", 921 - be16_to_cpu(cur->eth.ether_type)); 919 + len += scnprintf(buf + len, BUF_SIZE - len, 920 + "ethertype = 0x%x ", 921 + be16_to_cpu(cur->eth.ether_type)); 922 922 if (cur->eth.vlan_id) 923 - len += snprintf(buf + len, BUF_SIZE - len, 924 - "vlan-id = %d ", 925 - be16_to_cpu(cur->eth.vlan_id)); 923 + len += scnprintf(buf + len, BUF_SIZE - len, 924 + "vlan-id = %d ", 925 + be16_to_cpu(cur->eth.vlan_id)); 926 926 break; 927 927 928 928 case MLX4_NET_TRANS_RULE_ID_IPV4: 929 929 if (cur->ipv4.src_ip) 930 - len += snprintf(buf + len, BUF_SIZE - len, 931 - "src-ip = %pI4 ", 932 - &cur->ipv4.src_ip); 930 + len += scnprintf(buf + len, BUF_SIZE - len, 931 + "src-ip = %pI4 ", 932 + &cur->ipv4.src_ip); 933 933 if (cur->ipv4.dst_ip) 934 - len += snprintf(buf + len, BUF_SIZE - len, 935 - "dst-ip = %pI4 ", 936 - &cur->ipv4.dst_ip); 934 + len += scnprintf(buf + len, BUF_SIZE - len, 935 + "dst-ip = %pI4 ", 936 + &cur->ipv4.dst_ip); 937 937 break; 938 938 939 939 case MLX4_NET_TRANS_RULE_ID_TCP: 940 940 case MLX4_NET_TRANS_RULE_ID_UDP: 941 941 if (cur->tcp_udp.src_port) 942 - len += snprintf(buf + len, BUF_SIZE - len, 943 - "src-port = %d ", 944 - be16_to_cpu(cur->tcp_udp.src_port)); 942 + len += scnprintf(buf + len, BUF_SIZE - 
len, 943 + "src-port = %d ", 944 + be16_to_cpu(cur->tcp_udp.src_port)); 945 945 if (cur->tcp_udp.dst_port) 946 - len += snprintf(buf + len, BUF_SIZE - len, 947 - "dst-port = %d ", 948 - be16_to_cpu(cur->tcp_udp.dst_port)); 946 + len += scnprintf(buf + len, BUF_SIZE - len, 947 + "dst-port = %d ", 948 + be16_to_cpu(cur->tcp_udp.dst_port)); 949 949 break; 950 950 951 951 case MLX4_NET_TRANS_RULE_ID_IB: 952 - len += snprintf(buf + len, BUF_SIZE - len, 953 - "dst-gid = %pI6\n", cur->ib.dst_gid); 954 - len += snprintf(buf + len, BUF_SIZE - len, 955 - "dst-gid-mask = %pI6\n", 956 - cur->ib.dst_gid_msk); 952 + len += scnprintf(buf + len, BUF_SIZE - len, 953 + "dst-gid = %pI6\n", cur->ib.dst_gid); 954 + len += scnprintf(buf + len, BUF_SIZE - len, 955 + "dst-gid-mask = %pI6\n", 956 + cur->ib.dst_gid_msk); 957 957 break; 958 958 959 959 case MLX4_NET_TRANS_RULE_ID_VXLAN: 960 - len += snprintf(buf + len, BUF_SIZE - len, 961 - "VNID = %d ", be32_to_cpu(cur->vxlan.vni)); 960 + len += scnprintf(buf + len, BUF_SIZE - len, 961 + "VNID = %d ", be32_to_cpu(cur->vxlan.vni)); 962 962 break; 963 963 case MLX4_NET_TRANS_RULE_ID_IPV6: 964 964 break; ··· 967 967 break; 968 968 } 969 969 } 970 - len += snprintf(buf + len, BUF_SIZE - len, "\n"); 970 + len += scnprintf(buf + len, BUF_SIZE - len, "\n"); 971 971 mlx4_err(dev, "%s", buf); 972 972 973 973 if (len >= BUF_SIZE)
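The mlx4 hunk above switches every `snprintf()` in the rule dump to `scnprintf()`. The difference matters when return values are accumulated into an offset: `snprintf()` returns the length it *would* have written, so a chained `len +=` can run past the buffer and make the next write land out of bounds, while `scnprintf()` returns what actually fit. `scnprintf()` is kernel-only; this userspace stand-in shows the behavior:

```c
#include <stdio.h>
#include <stdarg.h>

static int my_scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;

	va_start(args, fmt);
	i = vsnprintf(buf, size, fmt, args);
	va_end(args);
	/* clamp to the bytes actually written (excluding the NUL) */
	if (i >= (int)size)
		i = size ? (int)size - 1 : 0;
	return i;
}
```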
+2
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 371 371 372 372 struct mlx5e_sq_wqe_info { 373 373 u8 opcode; 374 + u8 num_wqebbs; 374 375 375 376 /* Auxiliary data for different opcodes. */ 376 377 union { ··· 1078 1077 void mlx5e_activate_rq(struct mlx5e_rq *rq); 1079 1078 void mlx5e_deactivate_rq(struct mlx5e_rq *rq); 1080 1079 void mlx5e_free_rx_descs(struct mlx5e_rq *rq); 1080 + void mlx5e_free_rx_in_progress_descs(struct mlx5e_rq *rq); 1081 1081 void mlx5e_activate_icosq(struct mlx5e_icosq *icosq); 1082 1082 void mlx5e_deactivate_icosq(struct mlx5e_icosq *icosq); 1083 1083
+1 -2
drivers/net/ethernet/mellanox/mlx5/core/en/health.h
··· 11 11 12 12 static inline bool cqe_syndrome_needs_recover(u8 syndrome) 13 13 { 14 - return syndrome == MLX5_CQE_SYNDROME_LOCAL_LENGTH_ERR || 15 - syndrome == MLX5_CQE_SYNDROME_LOCAL_QP_OP_ERR || 14 + return syndrome == MLX5_CQE_SYNDROME_LOCAL_QP_OP_ERR || 16 15 syndrome == MLX5_CQE_SYNDROME_LOCAL_PROT_ERR || 17 16 syndrome == MLX5_CQE_SYNDROME_WR_FLUSH_ERR; 18 17 }
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
··· 90 90 goto out; 91 91 92 92 mlx5e_reset_icosq_cc_pc(icosq); 93 - mlx5e_free_rx_descs(rq); 93 + mlx5e_free_rx_in_progress_descs(rq); 94 94 clear_bit(MLX5E_SQ_STATE_RECOVERING, &icosq->state); 95 95 mlx5e_activate_icosq(icosq); 96 96 mlx5e_activate_rq(rq);
+4 -2
drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
··· 181 181 182 182 static inline void mlx5e_rqwq_reset(struct mlx5e_rq *rq) 183 183 { 184 - if (rq->wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ) 184 + if (rq->wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ) { 185 185 mlx5_wq_ll_reset(&rq->mpwqe.wq); 186 - else 186 + rq->mpwqe.actual_wq_head = 0; 187 + } else { 187 188 mlx5_wq_cyc_reset(&rq->wqe.wq); 189 + } 188 190 } 189 191 190 192 /* SW parser related functions */
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.h
··· 38 38 39 39 enum { 40 40 MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_START = 0, 41 - MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_SEARCHING = 1, 42 - MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_TRACKING = 2, 41 + MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_TRACKING = 1, 42 + MLX5E_TLS_PROGRESS_PARAMS_RECORD_TRACKER_STATE_SEARCHING = 2, 43 43 }; 44 44 45 45 struct mlx5e_ktls_offload_context_tx {
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c
··· 218 218 * this packet was already acknowledged and its record info 219 219 * was released. 220 220 */ 221 - ends_before = before(tcp_seq + datalen, tls_record_start_seq(record)); 221 + ends_before = before(tcp_seq + datalen - 1, tls_record_start_seq(record)); 222 222 223 223 if (unlikely(tls_record_is_start_marker(record))) { 224 224 ret = ends_before ? MLX5E_KTLS_SYNC_SKIP_NO_DATA : MLX5E_KTLS_SYNC_FAIL;
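The kTLS one-liner above fixes an inclusive-end off-by-one: the payload occupies bytes `[tcp_seq, tcp_seq + datalen - 1]`, so a record starting exactly at `tcp_seq + datalen` does not overlap this packet. A sketch with plain integers (the kernel's `before()` is additionally wrap-safe for sequence numbers):

```c
/* Does the packet's payload end before the record starts? */
static int ends_before(unsigned int tcp_seq, unsigned int datalen,
		       unsigned int record_start)
{
	/* last payload byte is tcp_seq + datalen - 1 */
	return tcp_seq + datalen - 1 < record_start;
}
```

With the old comparison (`tcp_seq + datalen < record_start`), the boundary case where the packet ends exactly at the record start was misclassified as overlapping.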
+24 -7
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 814 814 return -ETIMEDOUT; 815 815 } 816 816 817 + void mlx5e_free_rx_in_progress_descs(struct mlx5e_rq *rq) 818 + { 819 + struct mlx5_wq_ll *wq; 820 + u16 head; 821 + int i; 822 + 823 + if (rq->wq_type != MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ) 824 + return; 825 + 826 + wq = &rq->mpwqe.wq; 827 + head = wq->head; 828 + 829 + /* Outstanding UMR WQEs (in progress) start at wq->head */ 830 + for (i = 0; i < rq->mpwqe.umr_in_progress; i++) { 831 + rq->dealloc_wqe(rq, head); 832 + head = mlx5_wq_ll_get_wqe_next_ix(wq, head); 833 + } 834 + 835 + rq->mpwqe.actual_wq_head = wq->head; 836 + rq->mpwqe.umr_in_progress = 0; 837 + rq->mpwqe.umr_completed = 0; 838 + } 839 + 817 840 void mlx5e_free_rx_descs(struct mlx5e_rq *rq) 818 841 { 819 842 __be16 wqe_ix_be; ··· 844 821 845 822 if (rq->wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ) { 846 823 struct mlx5_wq_ll *wq = &rq->mpwqe.wq; 847 - u16 head = wq->head; 848 - int i; 849 824 850 - /* Outstanding UMR WQEs (in progress) start at wq->head */ 851 - for (i = 0; i < rq->mpwqe.umr_in_progress; i++) { 852 - rq->dealloc_wqe(rq, head); 853 - head = mlx5_wq_ll_get_wqe_next_ix(wq, head); 854 - } 825 + mlx5e_free_rx_in_progress_descs(rq); 855 826 856 827 while (!mlx5_wq_ll_is_empty(wq)) { 857 828 struct mlx5e_rx_wqe_ll *wqe;
+5 -6
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 479 479 /* fill sq frag edge with nops to avoid wqe wrapping two pages */ 480 480 for (; wi < edge_wi; wi++) { 481 481 wi->opcode = MLX5_OPCODE_NOP; 482 + wi->num_wqebbs = 1; 482 483 mlx5e_post_nop(wq, sq->sqn, &sq->pc); 483 484 } 484 485 } ··· 528 527 umr_wqe->uctrl.xlt_offset = cpu_to_be16(xlt_offset); 529 528 530 529 sq->db.ico_wqe[pi].opcode = MLX5_OPCODE_UMR; 530 + sq->db.ico_wqe[pi].num_wqebbs = MLX5E_UMR_WQEBBS; 531 531 sq->db.ico_wqe[pi].umr.rq = rq; 532 532 sq->pc += MLX5E_UMR_WQEBBS; 533 533 ··· 625 623 626 624 ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sqcc); 627 625 wi = &sq->db.ico_wqe[ci]; 626 + sqcc += wi->num_wqebbs; 628 627 629 628 if (last_wqe && unlikely(get_cqe_opcode(cqe) != MLX5_CQE_REQ)) { 630 629 netdev_WARN_ONCE(cq->channel->netdev, ··· 636 633 break; 637 634 } 638 635 639 - if (likely(wi->opcode == MLX5_OPCODE_UMR)) { 640 - sqcc += MLX5E_UMR_WQEBBS; 636 + if (likely(wi->opcode == MLX5_OPCODE_UMR)) 641 637 wi->umr.rq->mpwqe.umr_completed++; 642 - } else if (likely(wi->opcode == MLX5_OPCODE_NOP)) { 643 - sqcc++; 644 - } else { 638 + else if (unlikely(wi->opcode != MLX5_OPCODE_NOP)) 645 639 netdev_WARN_ONCE(cq->channel->netdev, 646 640 "Bad OPCODE in ICOSQ WQE info: 0x%x\n", 647 641 wi->opcode); 648 - } 649 642 650 643 } while (!last_wqe); 651 644
+3 -2
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 2714 2714 continue; 2715 2715 2716 2716 if (f->field_bsize == 32) { 2717 - mask_be32 = *(__be32 *)&mask; 2717 + mask_be32 = (__be32)mask; 2718 2718 mask = (__force unsigned long)cpu_to_le32(be32_to_cpu(mask_be32)); 2719 2719 } else if (f->field_bsize == 16) { 2720 - mask_be16 = *(__be16 *)&mask; 2720 + mask_be32 = (__be32)mask; 2721 + mask_be16 = *(__be16 *)&mask_be32; 2721 2722 mask = (__force unsigned long)cpu_to_le16(be16_to_cpu(mask_be16)); 2722 2723 } 2723 2724
+1
drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
··· 79 79 u16 pi = mlx5_wq_cyc_ctr2ix(wq, sq->pc); 80 80 81 81 sq->db.ico_wqe[pi].opcode = MLX5_OPCODE_NOP; 82 + sq->db.ico_wqe[pi].num_wqebbs = 1; 82 83 nopwqe = mlx5e_post_nop(wq, sq->sqn, &sq->pc); 83 84 mlx5e_notify_hw(wq, sq->pc, sq->uar_map, &nopwqe->ctrl); 84 85 }
+3 -1
drivers/net/ethernet/mellanox/mlx5/core/lag.c
··· 615 615 break; 616 616 617 617 if (i == MLX5_MAX_PORTS) { 618 - if (ldev->nb.notifier_call) 618 + if (ldev->nb.notifier_call) { 619 619 unregister_netdevice_notifier_net(&init_net, &ldev->nb); 620 + ldev->nb.notifier_call = NULL; 621 + } 620 622 mlx5_lag_mp_cleanup(ldev); 621 623 cancel_delayed_work_sync(&ldev->bond_work); 622 624 mlx5_lag_dev_free(ldev);
-1
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c
··· 933 933 934 934 action->rewrite.data = (void *)ops; 935 935 action->rewrite.num_of_actions = i; 936 - action->rewrite.chunk->byte_size = i * sizeof(*ops); 937 936 938 937 ret = mlx5dr_send_postsend_action(dmn, action); 939 938 if (ret) {
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
··· 558 558 int ret; 559 559 560 560 send_info.write.addr = (uintptr_t)action->rewrite.data; 561 - send_info.write.length = action->rewrite.chunk->byte_size; 561 + send_info.write.length = action->rewrite.num_of_actions * 562 + DR_MODIFY_ACTION_SIZE; 562 563 send_info.write.lkey = 0; 563 564 send_info.remote_addr = action->rewrite.chunk->mr_addr; 564 565 send_info.rkey = action->rewrite.chunk->rkey;
+3
drivers/net/ethernet/mellanox/mlx5/core/vport.c
··· 1071 1071 MLX5_SET64(hca_vport_context, ctx, port_guid, req->port_guid); 1072 1072 if (req->field_select & MLX5_HCA_VPORT_SEL_NODE_GUID) 1073 1073 MLX5_SET64(hca_vport_context, ctx, node_guid, req->node_guid); 1074 + MLX5_SET(hca_vport_context, ctx, cap_mask1, req->cap_mask1); 1075 + MLX5_SET(hca_vport_context, ctx, cap_mask1_field_select, 1076 + req->cap_mask1_perm); 1074 1077 err = mlx5_cmd_exec(dev, in, in_sz, out, sizeof(out)); 1075 1078 ex: 1076 1079 kfree(in);
+39 -11
drivers/net/ethernet/mellanox/mlxsw/pci.c
··· 1331 1331 mbox->mapaddr); 1332 1332 } 1333 1333 1334 - static int mlxsw_pci_sw_reset(struct mlxsw_pci *mlxsw_pci, 1335 - const struct pci_device_id *id) 1334 + static int mlxsw_pci_sys_ready_wait(struct mlxsw_pci *mlxsw_pci, 1335 + const struct pci_device_id *id, 1336 + u32 *p_sys_status) 1336 1337 { 1337 1338 unsigned long end; 1338 - char mrsr_pl[MLXSW_REG_MRSR_LEN]; 1339 - int err; 1339 + u32 val; 1340 1340 1341 - mlxsw_reg_mrsr_pack(mrsr_pl); 1342 - err = mlxsw_reg_write(mlxsw_pci->core, MLXSW_REG(mrsr), mrsr_pl); 1343 - if (err) 1344 - return err; 1345 1341 if (id->device == PCI_DEVICE_ID_MELLANOX_SWITCHX2) { 1346 1342 msleep(MLXSW_PCI_SW_RESET_TIMEOUT_MSECS); 1347 1343 return 0; 1348 1344 } 1349 1345 1350 - /* We must wait for the HW to become responsive once again. */ 1346 + /* We must wait for the HW to become responsive. */ 1351 1347 msleep(MLXSW_PCI_SW_RESET_WAIT_MSECS); 1352 1348 1353 1349 end = jiffies + msecs_to_jiffies(MLXSW_PCI_SW_RESET_TIMEOUT_MSECS); 1354 1350 do { 1355 - u32 val = mlxsw_pci_read32(mlxsw_pci, FW_READY); 1356 - 1351 + val = mlxsw_pci_read32(mlxsw_pci, FW_READY); 1357 1352 if ((val & MLXSW_PCI_FW_READY_MASK) == MLXSW_PCI_FW_READY_MAGIC) 1358 1353 return 0; 1359 1354 cond_resched(); 1360 1355 } while (time_before(jiffies, end)); 1356 + 1357 + *p_sys_status = val & MLXSW_PCI_FW_READY_MASK; 1358 + 1361 1359 return -EBUSY; 1360 + } 1361 + 1362 + static int mlxsw_pci_sw_reset(struct mlxsw_pci *mlxsw_pci, 1363 + const struct pci_device_id *id) 1364 + { 1365 + struct pci_dev *pdev = mlxsw_pci->pdev; 1366 + char mrsr_pl[MLXSW_REG_MRSR_LEN]; 1367 + u32 sys_status; 1368 + int err; 1369 + 1370 + err = mlxsw_pci_sys_ready_wait(mlxsw_pci, id, &sys_status); 1371 + if (err) { 1372 + dev_err(&pdev->dev, "Failed to reach system ready status before reset. Status is 0x%x\n", 1373 + sys_status); 1374 + return err; 1375 + } 1376 + 1377 + mlxsw_reg_mrsr_pack(mrsr_pl); 1378 + err = mlxsw_reg_write(mlxsw_pci->core, MLXSW_REG(mrsr), mrsr_pl); 1379 + if (err) 1380 + return err; 1381 + 1382 + err = mlxsw_pci_sys_ready_wait(mlxsw_pci, id, &sys_status); 1383 + if (err) { 1384 + dev_err(&pdev->dev, "Failed to reach system ready status after reset. Status is 0x%x\n", 1385 + sys_status); 1386 + return err; 1387 + } 1388 + 1389 + return 0; 1362 1390 } 1363 1391 1364 1392 static int mlxsw_pci_alloc_irq_vectors(struct mlxsw_pci *mlxsw_pci)
+1 -1
drivers/net/ethernet/mellanox/mlxsw/reg.h
··· 3572 3572 * When in bytes mode, value is specified in units of 1000bps. 3573 3573 * Access: RW 3574 3574 */ 3575 - MLXSW_ITEM32(reg, qeec, max_shaper_rate, 0x10, 0, 28); 3575 + MLXSW_ITEM32(reg, qeec, max_shaper_rate, 0x10, 0, 31); 3576 3576 3577 3577 /* reg_qeec_de 3578 3578 * DWRR configuration enable. Enables configuration of the dwrr and
+1 -1
drivers/net/ethernet/neterion/vxge/vxge-config.h
··· 2045 2045 if ((level >= VXGE_ERR && VXGE_COMPONENT_LL & VXGE_DEBUG_ERR_MASK) || \ 2046 2046 (level >= VXGE_TRACE && VXGE_COMPONENT_LL & VXGE_DEBUG_TRACE_MASK))\ 2047 2047 if ((mask & VXGE_DEBUG_MASK) == mask) \ 2048 - printk(fmt "\n", __VA_ARGS__); \ 2048 + printk(fmt "\n", ##__VA_ARGS__); \ 2049 2049 } while (0) 2050 2050 #else 2051 2051 #define vxge_debug_ll(level, mask, fmt, ...)
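The vxge change replaces `__VA_ARGS__` with `##__VA_ARGS__` so the debug macros also compile when called with a bare format string; with the plain form, an empty argument list leaves a trailing comma after `fmt` and the expansion is a syntax error. A small sketch of the comma-deletion behavior (`##__VA_ARGS__` is a GNU extension accepted by gcc and clang; C23 adds `__VA_OPT__` for the same job):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* '##' swallows the preceding comma when no variadic arguments are
 * passed, so DBG(buf, "plain") does not expand to
 * snprintf(buf, sizeof(buf), "plain" "\n", ) with a dangling comma. */
#define DBG(buf, fmt, ...) \
	snprintf(buf, sizeof(buf), fmt "\n", ##__VA_ARGS__)
```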
+7 -7
drivers/net/ethernet/neterion/vxge/vxge-main.h
··· 452 452 453 453 #if (VXGE_DEBUG_LL_CONFIG & VXGE_DEBUG_MASK) 454 454 #define vxge_debug_ll_config(level, fmt, ...) \ 455 - vxge_debug_ll(level, VXGE_DEBUG_LL_CONFIG, fmt, __VA_ARGS__) 455 + vxge_debug_ll(level, VXGE_DEBUG_LL_CONFIG, fmt, ##__VA_ARGS__) 456 456 #else 457 457 #define vxge_debug_ll_config(level, fmt, ...) 458 458 #endif 459 459 460 460 #if (VXGE_DEBUG_INIT & VXGE_DEBUG_MASK) 461 461 #define vxge_debug_init(level, fmt, ...) \ 462 - vxge_debug_ll(level, VXGE_DEBUG_INIT, fmt, __VA_ARGS__) 462 + vxge_debug_ll(level, VXGE_DEBUG_INIT, fmt, ##__VA_ARGS__) 463 463 #else 464 464 #define vxge_debug_init(level, fmt, ...) 465 465 #endif 466 466 467 467 #if (VXGE_DEBUG_TX & VXGE_DEBUG_MASK) 468 468 #define vxge_debug_tx(level, fmt, ...) \ 469 - vxge_debug_ll(level, VXGE_DEBUG_TX, fmt, __VA_ARGS__) 469 + vxge_debug_ll(level, VXGE_DEBUG_TX, fmt, ##__VA_ARGS__) 470 470 #else 471 471 #define vxge_debug_tx(level, fmt, ...) 472 472 #endif 473 473 474 474 #if (VXGE_DEBUG_RX & VXGE_DEBUG_MASK) 475 475 #define vxge_debug_rx(level, fmt, ...) \ 476 - vxge_debug_ll(level, VXGE_DEBUG_RX, fmt, __VA_ARGS__) 476 + vxge_debug_ll(level, VXGE_DEBUG_RX, fmt, ##__VA_ARGS__) 477 477 #else 478 478 #define vxge_debug_rx(level, fmt, ...) 479 479 #endif 480 480 481 481 #if (VXGE_DEBUG_MEM & VXGE_DEBUG_MASK) 482 482 #define vxge_debug_mem(level, fmt, ...) \ 483 - vxge_debug_ll(level, VXGE_DEBUG_MEM, fmt, __VA_ARGS__) 483 + vxge_debug_ll(level, VXGE_DEBUG_MEM, fmt, ##__VA_ARGS__) 484 484 #else 485 485 #define vxge_debug_mem(level, fmt, ...) 486 486 #endif 487 487 488 488 #if (VXGE_DEBUG_ENTRYEXIT & VXGE_DEBUG_MASK) 489 489 #define vxge_debug_entryexit(level, fmt, ...) \ 490 - vxge_debug_ll(level, VXGE_DEBUG_ENTRYEXIT, fmt, __VA_ARGS__) 490 + vxge_debug_ll(level, VXGE_DEBUG_ENTRYEXIT, fmt, ##__VA_ARGS__) 491 491 #else 492 492 #define vxge_debug_entryexit(level, fmt, ...) 493 493 #endif 494 494 495 495 #if (VXGE_DEBUG_INTR & VXGE_DEBUG_MASK) 496 496 #define vxge_debug_intr(level, fmt, ...) \ 497 - vxge_debug_ll(level, VXGE_DEBUG_INTR, fmt, __VA_ARGS__) 497 + vxge_debug_ll(level, VXGE_DEBUG_INTR, fmt, ##__VA_ARGS__) 498 498 #else 499 499 #define vxge_debug_intr(level, fmt, ...) 500 500 #endif
+4 -4
drivers/net/ethernet/netronome/nfp/nfpcore/nfp6000_pcie.c
··· 616 616 if (bar->iomem) { 617 617 int pf; 618 618 619 - msg += snprintf(msg, end - msg, "0.0: General/MSI-X SRAM, "); 619 + msg += scnprintf(msg, end - msg, "0.0: General/MSI-X SRAM, "); 620 620 atomic_inc(&bar->refcnt); 621 621 bars_free--; 622 622 ··· 661 661 662 662 /* Configure, and lock, BAR0.1 for PCIe XPB (MSI-X PBA) */ 663 663 bar = &nfp->bar[1]; 664 - msg += snprintf(msg, end - msg, "0.1: PCIe XPB/MSI-X PBA, "); 664 + msg += scnprintf(msg, end - msg, "0.1: PCIe XPB/MSI-X PBA, "); 665 665 atomic_inc(&bar->refcnt); 666 666 bars_free--; 667 667 ··· 680 680 bar->iomem = ioremap(nfp_bar_resource_start(bar), 681 681 nfp_bar_resource_len(bar)); 682 682 if (bar->iomem) { 683 - msg += snprintf(msg, end - msg, 684 - "0.%d: Explicit%d, ", 4 + i, i); 683 + msg += scnprintf(msg, end - msg, 684 + "0.%d: Explicit%d, ", 4 + i, i); 685 685 atomic_inc(&bar->refcnt); 686 686 bars_free--; 687 687
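This nfp hunk, like the later ionic, sfc and netdevsim ones, converts `snprintf()` to `scnprintf()` in `p += ...` accumulation loops. The distinction only shows up on truncation: `snprintf()` returns the length the output *would* have had, which can advance `p` past the end of the buffer, while `scnprintf()` returns the number of bytes actually stored. A userspace sketch of the kernel helper, layered on `vsnprintf()`:

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Kernel-style scnprintf(): same formatting as snprintf(), but the
 * return value is clamped to what was really written (excluding the
 * terminating NUL), so pointer-bumping loops stay inside the buffer. */
static int scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int n;

	va_start(args, fmt);
	n = vsnprintf(buf, size, fmt, args);
	va_end(args);

	if (n < 0)
		return 0;
	if ((size_t)n >= size)		/* output was truncated */
		return size ? (int)(size - 1) : 0;
	return n;
}
```

With an 8-byte buffer and a 10-character message, `snprintf()` reports 10 while `scnprintf()` reports the 7 bytes that fit, which is why `msg += scnprintf(msg, end - msg, ...)` can never run `msg` past `end`.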
+1 -1
drivers/net/ethernet/pensando/ionic/ionic_if.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB OR BSD-2-Clause */ 1 + /* SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB) OR BSD-2-Clause */ 2 2 /* Copyright (c) 2017-2019 Pensando Systems, Inc. All rights reserved. */ 3 3 4 4 #ifndef _IONIC_IF_H_
+7 -7
drivers/net/ethernet/pensando/ionic/ionic_lif.c
··· 953 953 int i; 954 954 #define REMAIN(__x) (sizeof(buf) - (__x)) 955 955 956 - i = snprintf(buf, sizeof(buf), "rx_mode 0x%04x -> 0x%04x:", 957 - lif->rx_mode, rx_mode); 956 + i = scnprintf(buf, sizeof(buf), "rx_mode 0x%04x -> 0x%04x:", 957 + lif->rx_mode, rx_mode); 958 958 if (rx_mode & IONIC_RX_MODE_F_UNICAST) 959 - i += snprintf(&buf[i], REMAIN(i), " RX_MODE_F_UNICAST"); 959 + i += scnprintf(&buf[i], REMAIN(i), " RX_MODE_F_UNICAST"); 960 960 if (rx_mode & IONIC_RX_MODE_F_MULTICAST) 961 - i += snprintf(&buf[i], REMAIN(i), " RX_MODE_F_MULTICAST"); 961 + i += scnprintf(&buf[i], REMAIN(i), " RX_MODE_F_MULTICAST"); 962 962 if (rx_mode & IONIC_RX_MODE_F_BROADCAST) 963 - i += snprintf(&buf[i], REMAIN(i), " RX_MODE_F_BROADCAST"); 963 + i += scnprintf(&buf[i], REMAIN(i), " RX_MODE_F_BROADCAST"); 964 964 if (rx_mode & IONIC_RX_MODE_F_PROMISC) 965 - i += snprintf(&buf[i], REMAIN(i), " RX_MODE_F_PROMISC"); 965 + i += scnprintf(&buf[i], REMAIN(i), " RX_MODE_F_PROMISC"); 966 966 if (rx_mode & IONIC_RX_MODE_F_ALLMULTI) 967 - i += snprintf(&buf[i], REMAIN(i), " RX_MODE_F_ALLMULTI"); 967 + i += scnprintf(&buf[i], REMAIN(i), " RX_MODE_F_ALLMULTI"); 968 968 netdev_dbg(lif->netdev, "lif%d %s\n", lif->index, buf); 969 969 970 970 err = ionic_adminq_post_wait(lif, &ctx);
+1 -1
drivers/net/ethernet/pensando/ionic/ionic_regs.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB OR BSD-2-Clause */ 1 + /* SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB) OR BSD-2-Clause */ 2 2 /* Copyright (c) 2018-2019 Pensando Systems, Inc. All rights reserved. */ 3 3 4 4 #ifndef IONIC_REGS_H
+1 -1
drivers/net/ethernet/realtek/r8169_main.c
··· 5091 5091 RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~MSIEnable); 5092 5092 rtl_lock_config_regs(tp); 5093 5093 /* fall through */ 5094 - case RTL_GIGA_MAC_VER_07 ... RTL_GIGA_MAC_VER_24: 5094 + case RTL_GIGA_MAC_VER_07 ... RTL_GIGA_MAC_VER_17: 5095 5095 flags = PCI_IRQ_LEGACY; 5096 5096 break; 5097 5097 default:
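The r8169 fix narrows the range label from `... RTL_GIGA_MAC_VER_24:` to `... RTL_GIGA_MAC_VER_17:`, so versions 18-24 fall through to the default branch instead of being forced to legacy interrupts. The `case low ... high:` range label is a GNU C extension (gcc and clang); a toy classifier with made-up bounds that mirrors only the syntax and the "narrow the legacy range" idea:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical MAC-version ranges; not the driver's real policy. */
static const char *irq_kind(int mac_ver)
{
	switch (mac_ver) {
	case 7 ... 17:		/* the narrowed legacy-IRQ range */
		return "legacy";
	default:
		return "msi";
	}
}
```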
+18 -14
drivers/net/ethernet/sfc/mcdi.c
··· 212 212 * progress on a NIC at any one time. So no need for locking. 213 213 */ 214 214 for (i = 0; i < hdr_len / 4 && bytes < PAGE_SIZE; i++) 215 - bytes += snprintf(buf + bytes, PAGE_SIZE - bytes, 216 - " %08x", le32_to_cpu(hdr[i].u32[0])); 215 + bytes += scnprintf(buf + bytes, PAGE_SIZE - bytes, 216 + " %08x", 217 + le32_to_cpu(hdr[i].u32[0])); 217 218 218 219 for (i = 0; i < inlen / 4 && bytes < PAGE_SIZE; i++) 219 - bytes += snprintf(buf + bytes, PAGE_SIZE - bytes, 220 - " %08x", le32_to_cpu(inbuf[i].u32[0])); 220 + bytes += scnprintf(buf + bytes, PAGE_SIZE - bytes, 221 + " %08x", 222 + le32_to_cpu(inbuf[i].u32[0])); 221 223 222 224 netif_info(efx, hw, efx->net_dev, "MCDI RPC REQ:%s\n", buf); 223 225 } ··· 304 302 */ 305 303 for (i = 0; i < hdr_len && bytes < PAGE_SIZE; i++) { 306 304 efx->type->mcdi_read_response(efx, &hdr, (i * 4), 4); 307 - bytes += snprintf(buf + bytes, PAGE_SIZE - bytes, 308 - " %08x", le32_to_cpu(hdr.u32[0])); 305 + bytes += scnprintf(buf + bytes, PAGE_SIZE - bytes, 306 + " %08x", le32_to_cpu(hdr.u32[0])); 309 307 } 310 308 311 309 for (i = 0; i < data_len && bytes < PAGE_SIZE; i++) { 312 310 efx->type->mcdi_read_response(efx, &hdr, 313 311 mcdi->resp_hdr_len + (i * 4), 4); 314 - bytes += snprintf(buf + bytes, PAGE_SIZE - bytes, 315 - " %08x", le32_to_cpu(hdr.u32[0])); 312 + bytes += scnprintf(buf + bytes, PAGE_SIZE - bytes, 313 + " %08x", le32_to_cpu(hdr.u32[0])); 316 314 } 317 315 318 316 netif_info(efx, hw, efx->net_dev, "MCDI RPC RESP:%s\n", buf); ··· 1419 1417 } 1420 1418 1421 1419 ver_words = (__le16 *)MCDI_PTR(outbuf, GET_VERSION_OUT_VERSION); 1422 - offset = snprintf(buf, len, "%u.%u.%u.%u", 1423 - le16_to_cpu(ver_words[0]), le16_to_cpu(ver_words[1]), 1424 - le16_to_cpu(ver_words[2]), le16_to_cpu(ver_words[3])); 1420 + offset = scnprintf(buf, len, "%u.%u.%u.%u", 1421 + le16_to_cpu(ver_words[0]), 1422 + le16_to_cpu(ver_words[1]), 1423 + le16_to_cpu(ver_words[2]), 1424 + le16_to_cpu(ver_words[3])); 1425 1425 1426 1426 /* EF10 may have multiple datapath firmware variants within a 1427 1427 * single version. Report which variants are running. ··· 1431 1427 if (efx_nic_rev(efx) >= EFX_REV_HUNT_A0) { 1432 1428 struct efx_ef10_nic_data *nic_data = efx->nic_data; 1433 1429 1434 - offset += snprintf(buf + offset, len - offset, " rx%x tx%x", 1435 - nic_data->rx_dpcpu_fw_id, 1436 - nic_data->tx_dpcpu_fw_id); 1430 + offset += scnprintf(buf + offset, len - offset, " rx%x tx%x", 1431 + nic_data->rx_dpcpu_fw_id, 1432 + nic_data->tx_dpcpu_fw_id); 1437 1433 1438 1434 /* It's theoretically possible for the string to exceed 31 1439 1435 * characters, though in practice the first three version
+1 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
··· 1411 1411 1412 1412 ret = rk_gmac_clk_init(plat_dat); 1413 1413 if (ret) 1414 - return ret; 1414 + goto err_remove_config_dt; 1415 1415 1416 1416 ret = rk_gmac_powerup(plat_dat->bsp_priv); 1417 1417 if (ret)
+10 -4
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 661 661 * In case the wake up interrupt is not passed from the platform 662 662 * so the driver will continue to use the mac irq (ndev->irq) 663 663 */ 664 - stmmac_res->wol_irq = platform_get_irq_byname(pdev, "eth_wake_irq"); 664 + stmmac_res->wol_irq = 665 + platform_get_irq_byname_optional(pdev, "eth_wake_irq"); 665 666 if (stmmac_res->wol_irq < 0) { 666 667 if (stmmac_res->wol_irq == -EPROBE_DEFER) 667 668 return -EPROBE_DEFER; 669 + dev_info(&pdev->dev, "IRQ eth_wake_irq not found\n"); 668 670 stmmac_res->wol_irq = stmmac_res->irq; 669 671 } 670 672 671 - stmmac_res->lpi_irq = platform_get_irq_byname(pdev, "eth_lpi"); 672 - if (stmmac_res->lpi_irq == -EPROBE_DEFER) 673 - return -EPROBE_DEFER; 673 + stmmac_res->lpi_irq = 674 + platform_get_irq_byname_optional(pdev, "eth_lpi"); 675 + if (stmmac_res->lpi_irq < 0) { 676 + if (stmmac_res->lpi_irq == -EPROBE_DEFER) 677 + return -EPROBE_DEFER; 678 + dev_info(&pdev->dev, "IRQ eth_lpi not found\n"); 679 + } 674 680 675 681 stmmac_res->addr = devm_platform_ioremap_resource(pdev, 0); 676 682
+6 -2
drivers/net/geneve.c
··· 1845 1845 if (!net_eq(dev_net(geneve->dev), net)) 1846 1846 unregister_netdevice_queue(geneve->dev, head); 1847 1847 } 1848 - 1849 - WARN_ON_ONCE(!list_empty(&gn->sock_list)); 1850 1848 } 1851 1849 1852 1850 static void __net_exit geneve_exit_batch_net(struct list_head *net_list) ··· 1859 1861 /* unregister the devices gathered above */ 1860 1862 unregister_netdevice_many(&list); 1861 1863 rtnl_unlock(); 1864 + 1865 + list_for_each_entry(net, net_list, exit_list) { 1866 + const struct geneve_net *gn = net_generic(net, geneve_net_id); 1867 + 1868 + WARN_ON_ONCE(!list_empty(&gn->sock_list)); 1869 + } 1862 1870 } 1863 1871 1864 1872 static struct pernet_operations geneve_net_ops = {
+3 -3
drivers/net/ifb.c
··· 75 75 } 76 76 77 77 while ((skb = __skb_dequeue(&txp->tq)) != NULL) { 78 - skb->tc_redirected = 0; 78 + skb->redirected = 0; 79 79 skb->tc_skip_classify = 1; 80 80 81 81 u64_stats_update_begin(&txp->tsync); ··· 96 96 rcu_read_unlock(); 97 97 skb->skb_iif = txp->dev->ifindex; 98 98 99 - if (!skb->tc_from_ingress) { 99 + if (!skb->from_ingress) { 100 100 dev_queue_xmit(skb); 101 101 } else { 102 102 skb_pull_rcsum(skb, skb->mac_len); ··· 243 243 txp->rx_bytes += skb->len; 244 244 u64_stats_update_end(&txp->rsync); 245 245 246 - if (!skb->tc_redirected || !skb->skb_iif) { 246 + if (!skb->redirected || !skb->skb_iif) { 247 247 dev_kfree_skb(skb); 248 248 dev->stats.rx_dropped++; 249 249 return NETDEV_TX_OK;
+3
drivers/net/macsec.c
··· 20 20 #include <net/macsec.h> 21 21 #include <linux/phy.h> 22 22 #include <linux/byteorder/generic.h> 23 + #include <linux/if_arp.h> 23 24 24 25 #include <uapi/linux/if_macsec.h> 25 26 ··· 3857 3856 real_dev = __dev_get_by_index(net, nla_get_u32(tb[IFLA_LINK])); 3858 3857 if (!real_dev) 3859 3858 return -ENODEV; 3859 + if (real_dev->type != ARPHRD_ETHER) 3860 + return -EINVAL; 3860 3861 3861 3862 dev->priv_flags |= IFF_MACSEC; 3862 3863
+15 -15
drivers/net/netdevsim/ipsec.c
··· 29 29 return -ENOMEM; 30 30 31 31 p = buf; 32 - p += snprintf(p, bufsize - (p - buf), 33 - "SA count=%u tx=%u\n", 34 - ipsec->count, ipsec->tx); 32 + p += scnprintf(p, bufsize - (p - buf), 33 + "SA count=%u tx=%u\n", 34 + ipsec->count, ipsec->tx); 35 35 36 36 for (i = 0; i < NSIM_IPSEC_MAX_SA_COUNT; i++) { 37 37 struct nsim_sa *sap = &ipsec->sa[i]; ··· 39 39 if (!sap->used) 40 40 continue; 41 41 42 - p += snprintf(p, bufsize - (p - buf), 43 - "sa[%i] %cx ipaddr=0x%08x %08x %08x %08x\n", 44 - i, (sap->rx ? 'r' : 't'), sap->ipaddr[0], 45 - sap->ipaddr[1], sap->ipaddr[2], sap->ipaddr[3]); 46 - p += snprintf(p, bufsize - (p - buf), 47 - "sa[%i] spi=0x%08x proto=0x%x salt=0x%08x crypt=%d\n", 48 - i, be32_to_cpu(sap->xs->id.spi), 49 - sap->xs->id.proto, sap->salt, sap->crypt); 50 - p += snprintf(p, bufsize - (p - buf), 51 - "sa[%i] key=0x%08x %08x %08x %08x\n", 52 - i, sap->key[0], sap->key[1], 53 - sap->key[2], sap->key[3]); 42 + p += scnprintf(p, bufsize - (p - buf), 43 + "sa[%i] %cx ipaddr=0x%08x %08x %08x %08x\n", 44 + i, (sap->rx ? 'r' : 't'), sap->ipaddr[0], 45 + sap->ipaddr[1], sap->ipaddr[2], sap->ipaddr[3]); 46 + p += scnprintf(p, bufsize - (p - buf), 47 + "sa[%i] spi=0x%08x proto=0x%x salt=0x%08x crypt=%d\n", 48 + i, be32_to_cpu(sap->xs->id.spi), 49 + sap->xs->id.proto, sap->salt, sap->crypt); 50 + p += scnprintf(p, bufsize - (p - buf), 51 + "sa[%i] key=0x%08x %08x %08x %08x\n", 52 + i, sap->key[0], sap->key[1], 53 + sap->key[2], sap->key[3]); 54 54 } 55 55 56 56 len = simple_read_from_buffer(buffer, count, ppos, buf, p - buf);
+20 -1
drivers/net/phy/dp83867.c
··· 30 30 #define DP83867_CTRL 0x1f 31 31 32 32 /* Extended Registers */ 33 - #define DP83867_CFG4 0x0031 33 + #define DP83867_FLD_THR_CFG 0x002e 34 + #define DP83867_CFG4 0x0031 34 35 #define DP83867_CFG4_SGMII_ANEG_MASK (BIT(5) | BIT(6)) 35 36 #define DP83867_CFG4_SGMII_ANEG_TIMER_11MS (3 << 5) 36 37 #define DP83867_CFG4_SGMII_ANEG_TIMER_800US (2 << 5) ··· 94 93 #define DP83867_STRAP_STS2_CLK_SKEW_RX_MASK GENMASK(2, 0) 95 94 #define DP83867_STRAP_STS2_CLK_SKEW_RX_SHIFT 0 96 95 #define DP83867_STRAP_STS2_CLK_SKEW_NONE BIT(2) 96 + #define DP83867_STRAP_STS2_STRAP_FLD BIT(10) 97 97 98 98 /* PHY CTRL bits */ 99 99 #define DP83867_PHYCR_TX_FIFO_DEPTH_SHIFT 14 ··· 146 144 147 145 /* CFG4 bits */ 148 146 #define DP83867_CFG4_PORT_MIRROR_EN BIT(0) 147 + 148 + /* FLD_THR_CFG */ 149 + #define DP83867_FLD_THR_CFG_ENERGY_LOST_THR_MASK 0x7 149 150 150 151 enum { 151 152 DP83867_PORT_MIRROING_KEEP, ··· 626 621 if (dp83867->rxctrl_strap_quirk) 627 622 phy_clear_bits_mmd(phydev, DP83867_DEVADDR, DP83867_CFG4, 628 623 BIT(7)); 624 + 625 + bs = phy_read_mmd(phydev, DP83867_DEVADDR, DP83867_STRAP_STS2); 626 + if (bs & DP83867_STRAP_STS2_STRAP_FLD) { 627 + /* When using strap to enable FLD, the ENERGY_LOST_FLD_THR will 628 + * be set to 0x2. This may causes the PHY link to be unstable - 629 + * the default value 0x1 need to be restored. 630 + */ 631 + ret = phy_modify_mmd(phydev, DP83867_DEVADDR, 632 + DP83867_FLD_THR_CFG, 633 + DP83867_FLD_THR_CFG_ENERGY_LOST_THR_MASK, 634 + 0x1); 635 + if (ret) 636 + return ret; 637 + } 629 638 630 639 if (phy_interface_is_rgmii(phydev) || 631 640 phydev->interface == PHY_INTERFACE_MODE_SGMII) {
+2 -4
drivers/net/phy/mdio-bcm-unimac.c
··· 242 242 return -ENOMEM; 243 243 } 244 244 245 - priv->clk = devm_clk_get(&pdev->dev, NULL); 246 - if (PTR_ERR(priv->clk) == -EPROBE_DEFER) 245 + priv->clk = devm_clk_get_optional(&pdev->dev, NULL); 246 + if (IS_ERR(priv->clk)) 247 247 return PTR_ERR(priv->clk); 248 - else 249 - priv->clk = NULL; 250 248 251 249 ret = clk_prepare_enable(priv->clk); 252 250 if (ret)
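The unimac mdio change relies on the `devm_clk_get_optional()` contract: a missing clock comes back as `NULL` (which the `clk_*` API treats as a no-op clock), while real failures still arrive as `ERR_PTR` values, so a single `IS_ERR()` check replaces the old `-EPROBE_DEFER` special case. A userspace sketch of that contract using the kernel's `ERR_PTR` encoding (the integer lookup result and `-EAGAIN` stand in for real clock-framework errors):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

#define MAX_ERRNO 4095
static void *ERR_PTR(long err) { return (void *)err; }
static int IS_ERR(const void *p)
{
	return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}

struct clk { int dummy; };
static struct clk the_clk;

/* get_optional semantics: "not found" (-ENOENT) becomes NULL, other
 * errors (e.g. probe deferral) are propagated as ERR_PTR values, and
 * success returns the clock handle. */
static struct clk *clk_get_optional_sketch(int lookup_result)
{
	if (lookup_result == -ENOENT)
		return NULL;		/* device simply has no clock */
	if (lookup_result < 0)
		return ERR_PTR(lookup_result);
	return &the_clk;
}
```

Because `IS_ERR(NULL)` is false, the caller's `if (IS_ERR(priv->clk)) return PTR_ERR(priv->clk);` passes cleanly when the clock is merely absent.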
+6 -1
drivers/net/phy/mdio-mux-bcm-iproc.c
··· 282 282 static int mdio_mux_iproc_resume(struct device *dev) 283 283 { 284 284 struct iproc_mdiomux_desc *md = dev_get_drvdata(dev); 285 + int rc; 285 286 286 - clk_prepare_enable(md->core_clk); 287 + rc = clk_prepare_enable(md->core_clk); 288 + if (rc) { 289 + dev_err(md->dev, "failed to enable core clk\n"); 290 + return rc; 291 + } 287 292 mdio_mux_iproc_config(md); 288 293 289 294 return 0;
+18 -14
drivers/net/phy/sfp-bus.c
··· 572 572 * the sfp_bus structure, incrementing its reference count. This must 573 573 * be put via sfp_bus_put() when done. 574 574 * 575 - * Returns: on success, a pointer to the sfp_bus structure, 576 - * %NULL if no SFP is specified, 577 - * on failure, an error pointer value: 578 - * corresponding to the errors detailed for 579 - * fwnode_property_get_reference_args(). 580 - * %-ENOMEM if we failed to allocate the bus. 581 - * an error from the upstream's connect_phy() method. 575 + * Returns: 576 + * - on success, a pointer to the sfp_bus structure, 577 + * - %NULL if no SFP is specified, 578 + * - on failure, an error pointer value: 579 + * 580 + * - corresponding to the errors detailed for 581 + * fwnode_property_get_reference_args(). 582 + * - %-ENOMEM if we failed to allocate the bus. 583 + * - an error from the upstream's connect_phy() method. 582 584 */ 583 585 struct sfp_bus *sfp_bus_find_fwnode(struct fwnode_handle *fwnode) 584 586 { ··· 614 612 * the SFP bus using sfp_register_upstream(). This takes a reference on the 615 613 * bus, so it is safe to put the bus after this call. 616 614 * 617 - * Returns: on success, a pointer to the sfp_bus structure, 618 - * %NULL if no SFP is specified, 619 - * on failure, an error pointer value: 620 - * corresponding to the errors detailed for 621 - * fwnode_property_get_reference_args(). 622 - * %-ENOMEM if we failed to allocate the bus. 623 - * an error from the upstream's connect_phy() method. 615 + * Returns: 616 + * - on success, a pointer to the sfp_bus structure, 617 + * - %NULL if no SFP is specified, 618 + * - on failure, an error pointer value: 619 + * 620 + * - corresponding to the errors detailed for 621 + * fwnode_property_get_reference_args(). 622 + * - %-ENOMEM if we failed to allocate the bus. 623 + * - an error from the upstream's connect_phy() method. 624 624 */ 625 625 int sfp_bus_add_upstream(struct sfp_bus *bus, void *upstream, 626 626 const struct sfp_upstream_ops *ops)
+1
drivers/net/usb/qmi_wwan.c
··· 1210 1210 {QMI_FIXED_INTF(0x1435, 0xd182, 5)}, /* Wistron NeWeb D18 */ 1211 1211 {QMI_FIXED_INTF(0x1435, 0xd191, 4)}, /* Wistron NeWeb D19Q1 */ 1212 1212 {QMI_QUIRK_SET_DTR(0x1508, 0x1001, 4)}, /* Fibocom NL668 series */ 1213 + {QMI_FIXED_INTF(0x1690, 0x7588, 4)}, /* ASKEY WWHC050 */ 1213 1214 {QMI_FIXED_INTF(0x16d8, 0x6003, 0)}, /* CMOTech 6003 */ 1214 1215 {QMI_FIXED_INTF(0x16d8, 0x6007, 0)}, /* CMOTech CHE-628S */ 1215 1216 {QMI_FIXED_INTF(0x16d8, 0x6008, 0)}, /* CMOTech CMU-301 */
+9 -2
drivers/net/vxlan.c
··· 2779 2779 /* Setup stats when device is created */ 2780 2780 static int vxlan_init(struct net_device *dev) 2781 2781 { 2782 + struct vxlan_dev *vxlan = netdev_priv(dev); 2783 + int err; 2784 + 2782 2785 dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats); 2783 2786 if (!dev->tstats) 2784 2787 return -ENOMEM; 2788 + 2789 + err = gro_cells_init(&vxlan->gro_cells, dev); 2790 + if (err) { 2791 + free_percpu(dev->tstats); 2792 + return err; 2793 + } 2785 2794 2786 2795 return 0; 2787 2796 } ··· 3051 3042 timer_setup(&vxlan->age_timer, vxlan_cleanup, TIMER_DEFERRABLE); 3052 3043 3053 3044 vxlan->dev = dev; 3054 - 3055 - gro_cells_init(&vxlan->gro_cells, dev); 3056 3045 3057 3046 for (h = 0; h < FDB_HASH_SIZE; ++h) { 3058 3047 spin_lock_init(&vxlan->hash_lock[h]);
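The vxlan change moves `gro_cells_init()` into `vxlan_init()` where its return value is checked, freeing the already-allocated per-cpu stats on failure. The general shape, two-stage init that unwinds the first stage when the second fails, sketched with plain allocations (names hypothetical):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

struct toy_dev { int *tstats; int *gro_cells; };

/* Returns 0 on success. On failure nothing is left allocated, so the
 * caller needs no knowledge of partially-initialized state.
 * 'fail_gro' simulates a gro_cells_init()-style error. */
static int toy_dev_init(struct toy_dev *d, int fail_gro)
{
	d->tstats = calloc(16, sizeof(int));
	if (!d->tstats)
		return -ENOMEM;

	d->gro_cells = fail_gro ? NULL : calloc(4, sizeof(int));
	if (!d->gro_cells) {
		free(d->tstats);	/* unwind the earlier allocation */
		d->tstats = NULL;
		return -ENOMEM;
	}
	return 0;
}
```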
+1 -1
drivers/net/wireguard/device.c
··· 122 122 u32 mtu; 123 123 int ret; 124 124 125 - if (unlikely(wg_skb_examine_untrusted_ip_hdr(skb) != skb->protocol)) { 125 + if (unlikely(!wg_check_packet_protocol(skb))) { 126 126 ret = -EPROTONOSUPPORT; 127 127 net_dbg_ratelimited("%s: Invalid IP packet\n", dev->name); 128 128 goto err;
+2 -6
drivers/net/wireguard/netlink.c
··· 411 411 412 412 peer = wg_peer_create(wg, public_key, preshared_key); 413 413 if (IS_ERR(peer)) { 414 - /* Similar to the above, if the key is invalid, we skip 415 - * it without fanfare, so that services don't need to 416 - * worry about doing key validation themselves. 417 - */ 418 - ret = PTR_ERR(peer) == -EKEYREJECTED ? 0 : PTR_ERR(peer); 414 + ret = PTR_ERR(peer); 419 415 peer = NULL; 420 416 goto out; 421 417 } ··· 565 569 private_key); 566 570 list_for_each_entry_safe(peer, temp, &wg->peer_list, 567 571 peer_list) { 568 - BUG_ON(!wg_noise_precompute_static_static(peer)); 572 + wg_noise_precompute_static_static(peer); 569 573 wg_noise_expire_current_peer_keypairs(peer); 570 574 } 571 575 wg_cookie_checker_precompute_device_keys(&wg->cookie_checker);
+29 -26
drivers/net/wireguard/noise.c
··· 44 44 } 45 45 46 46 /* Must hold peer->handshake.static_identity->lock */ 47 - bool wg_noise_precompute_static_static(struct wg_peer *peer) 47 + void wg_noise_precompute_static_static(struct wg_peer *peer) 48 48 { 49 - bool ret; 50 - 51 49 down_write(&peer->handshake.lock); 52 - if (peer->handshake.static_identity->has_identity) { 53 - ret = curve25519( 54 - peer->handshake.precomputed_static_static, 50 + if (!peer->handshake.static_identity->has_identity || 51 + !curve25519(peer->handshake.precomputed_static_static, 55 52 peer->handshake.static_identity->static_private, 56 - peer->handshake.remote_static); 57 - } else { 58 - u8 empty[NOISE_PUBLIC_KEY_LEN] = { 0 }; 59 - 60 - ret = curve25519(empty, empty, peer->handshake.remote_static); 53 + peer->handshake.remote_static)) 61 54 memset(peer->handshake.precomputed_static_static, 0, 62 55 NOISE_PUBLIC_KEY_LEN); 63 - } 64 56 up_write(&peer->handshake.lock); 65 - return ret; 66 57 } 67 58 68 - bool wg_noise_handshake_init(struct noise_handshake *handshake, 69 - struct noise_static_identity *static_identity, 70 - const u8 peer_public_key[NOISE_PUBLIC_KEY_LEN], 71 - const u8 peer_preshared_key[NOISE_SYMMETRIC_KEY_LEN], 72 - struct wg_peer *peer) 59 + void wg_noise_handshake_init(struct noise_handshake *handshake, 60 + struct noise_static_identity *static_identity, 61 + const u8 peer_public_key[NOISE_PUBLIC_KEY_LEN], 62 + const u8 peer_preshared_key[NOISE_SYMMETRIC_KEY_LEN], 63 + struct wg_peer *peer) 73 64 { 74 65 memset(handshake, 0, sizeof(*handshake)); 75 66 init_rwsem(&handshake->lock); ··· 72 81 NOISE_SYMMETRIC_KEY_LEN); 73 82 handshake->static_identity = static_identity; 74 83 handshake->state = HANDSHAKE_ZEROED; 75 - return wg_noise_precompute_static_static(peer); 84 + wg_noise_precompute_static_static(peer); 76 85 } 77 86 78 87 static void handshake_zero(struct noise_handshake *handshake) ··· 394 403 return true; 395 404 } 396 405 406 + static bool __must_check mix_precomputed_dh(u8 chaining_key[NOISE_HASH_LEN], 407 + u8 key[NOISE_SYMMETRIC_KEY_LEN], 408 + const u8 precomputed[NOISE_PUBLIC_KEY_LEN]) 409 + { 410 + static u8 zero_point[NOISE_PUBLIC_KEY_LEN]; 411 + if (unlikely(!crypto_memneq(precomputed, zero_point, NOISE_PUBLIC_KEY_LEN))) 412 + return false; 413 + kdf(chaining_key, key, NULL, precomputed, NOISE_HASH_LEN, 414 + NOISE_SYMMETRIC_KEY_LEN, 0, NOISE_PUBLIC_KEY_LEN, 415 + chaining_key); 416 + return true; 417 + } 418 + 397 419 static void mix_hash(u8 hash[NOISE_HASH_LEN], const u8 *src, size_t src_len) 398 420 { 399 421 struct blake2s_state blake; ··· 535 531 NOISE_PUBLIC_KEY_LEN, key, handshake->hash); 536 532 537 533 /* ss */ 538 - kdf(handshake->chaining_key, key, NULL, 539 - handshake->precomputed_static_static, NOISE_HASH_LEN, 540 - NOISE_SYMMETRIC_KEY_LEN, 0, NOISE_PUBLIC_KEY_LEN, 541 - handshake->chaining_key); 534 + if (!mix_precomputed_dh(handshake->chaining_key, key, 535 + handshake->precomputed_static_static)) 536 + goto out; 542 537 543 538 /* {t} */ 544 539 tai64n_now(timestamp); ··· 598 595 handshake = &peer->handshake; 599 596 600 597 /* ss */ 601 - kdf(chaining_key, key, NULL, handshake->precomputed_static_static, 602 - NOISE_HASH_LEN, NOISE_SYMMETRIC_KEY_LEN, 0, NOISE_PUBLIC_KEY_LEN, 603 - chaining_key); 598 + if (!mix_precomputed_dh(chaining_key, key, 599 + handshake->precomputed_static_static)) 600 + goto out; 604 601 605 602 /* {t} */ 606 603 if (!message_decrypt(t, src->encrypted_timestamp,
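The new `mix_precomputed_dh()` rejects a precomputed shared secret equal to the all-zero point, and it does so with `crypto_memneq()` rather than `memcmp()` so the comparison takes the same time whether the buffers differ early or late. A userspace sketch of that constant-time comparison:

```c
#include <assert.h>
#include <stddef.h>

/* Like the kernel's crypto_memneq(): XOR-accumulate over the whole
 * buffer with no early exit, so timing does not leak the position of
 * the first differing byte. Returns nonzero iff the buffers differ. */
static unsigned long memneq_sketch(const void *a, const void *b, size_t n)
{
	const unsigned char *pa = a, *pb = b;
	unsigned long diff = 0;
	size_t i;

	for (i = 0; i < n; i++)
		diff |= (unsigned long)(pa[i] ^ pb[i]);
	return diff;
}
```

The handshake path then treats "no difference from the zero point" as an invalid key and aborts instead of mixing a degenerate secret into the KDF.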
+6 -6
drivers/net/wireguard/noise.h
··· 94 94 struct wg_device; 95 95 96 96 void wg_noise_init(void); 97 - bool wg_noise_handshake_init(struct noise_handshake *handshake, 98 - struct noise_static_identity *static_identity, 99 - const u8 peer_public_key[NOISE_PUBLIC_KEY_LEN], 100 - const u8 peer_preshared_key[NOISE_SYMMETRIC_KEY_LEN], 101 - struct wg_peer *peer); 97 + void wg_noise_handshake_init(struct noise_handshake *handshake, 98 + struct noise_static_identity *static_identity, 99 + const u8 peer_public_key[NOISE_PUBLIC_KEY_LEN], 100 + const u8 peer_preshared_key[NOISE_SYMMETRIC_KEY_LEN], 101 + struct wg_peer *peer); 102 102 void wg_noise_handshake_clear(struct noise_handshake *handshake); 103 103 static inline void wg_noise_reset_last_sent_handshake(atomic64_t *handshake_ns) 104 104 { ··· 116 116 void wg_noise_set_static_identity_private_key( 117 117 struct noise_static_identity *static_identity, 118 118 const u8 private_key[NOISE_PUBLIC_KEY_LEN]); 119 - bool wg_noise_precompute_static_static(struct wg_peer *peer); 119 + void wg_noise_precompute_static_static(struct wg_peer *peer); 120 120 121 121 bool 122 122 wg_noise_handshake_create_initiation(struct message_handshake_initiation *dst,
+2 -5
drivers/net/wireguard/peer.c
··· 34 34 return ERR_PTR(ret); 35 35 peer->device = wg; 36 36 37 - if (!wg_noise_handshake_init(&peer->handshake, &wg->static_identity, 38 - public_key, preshared_key, peer)) { 39 - ret = -EKEYREJECTED; 40 - goto err_1; 41 - } 37 + wg_noise_handshake_init(&peer->handshake, &wg->static_identity, 38 + public_key, preshared_key, peer); 42 39 if (dst_cache_init(&peer->endpoint_cache, GFP_KERNEL)) 43 40 goto err_1; 44 41 if (wg_packet_queue_init(&peer->tx_queue, wg_packet_tx_worker, false,
+8 -2
drivers/net/wireguard/queueing.h
··· 66 66 #define PACKET_PEER(skb) (PACKET_CB(skb)->keypair->entry.peer) 67 67 68 68 /* Returns either the correct skb->protocol value, or 0 if invalid. */ 69 - static inline __be16 wg_skb_examine_untrusted_ip_hdr(struct sk_buff *skb) 69 + static inline __be16 wg_examine_packet_protocol(struct sk_buff *skb) 70 70 { 71 71 if (skb_network_header(skb) >= skb->head && 72 72 (skb_network_header(skb) + sizeof(struct iphdr)) <= ··· 79 79 ipv6_hdr(skb)->version == 6) 80 80 return htons(ETH_P_IPV6); 81 81 return 0; 82 + } 83 + 84 + static inline bool wg_check_packet_protocol(struct sk_buff *skb) 85 + { 86 + __be16 real_protocol = wg_examine_packet_protocol(skb); 87 + return real_protocol && skb->protocol == real_protocol; 82 88 } 83 89 84 90 static inline void wg_reset_packet(struct sk_buff *skb) ··· 100 94 skb->dev = NULL; 101 95 #ifdef CONFIG_NET_SCHED 102 96 skb->tc_index = 0; 103 - skb_reset_tc(skb); 104 97 #endif 98 + skb_reset_redirect(skb); 105 99 skb->hdr_len = skb_headroom(skb); 106 100 skb_reset_mac_header(skb); 107 101 skb_reset_network_header(skb);
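`wg_examine_packet_protocol()` derives the protocol from the packet bytes themselves, and the new `wg_check_packet_protocol()` additionally insists that `skb->protocol` agrees. The core of the classification is the version nibble in the first header byte; a simplified sketch (the real helper also bounds-checks the full IPv4/IPv6 header lengths):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ETH_P_IP   0x0800
#define ETH_P_IPV6 0x86DD

/* Returns the matching ethertype, or 0 for an invalid packet. */
static int examine_packet_protocol(const uint8_t *pkt, size_t len)
{
	if (len < 1)
		return 0;
	switch (pkt[0] >> 4) {		/* IP version nibble */
	case 4:
		return ETH_P_IP;
	case 6:
		return ETH_P_IPV6;
	default:
		return 0;		/* caller should drop the packet */
	}
}

/* Mirror of the stricter check: the bytes must parse AND agree with
 * what the caller claims the protocol is. */
static int check_packet_protocol(const uint8_t *pkt, size_t len,
				 int claimed_ethertype)
{
	int real = examine_packet_protocol(pkt, len);

	return real && real == claimed_ethertype;
}
```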
+3 -4
drivers/net/wireguard/receive.c
··· 56 56 size_t data_offset, data_len, header_len; 57 57 struct udphdr *udp; 58 58 59 - if (unlikely(wg_skb_examine_untrusted_ip_hdr(skb) != skb->protocol || 59 + if (unlikely(!wg_check_packet_protocol(skb) || 60 60 skb_transport_header(skb) < skb->head || 61 61 (skb_transport_header(skb) + sizeof(struct udphdr)) > 62 62 skb_tail_pointer(skb))) ··· 388 388 */ 389 389 skb->ip_summed = CHECKSUM_UNNECESSARY; 390 390 skb->csum_level = ~0; /* All levels */ 391 - skb->protocol = wg_skb_examine_untrusted_ip_hdr(skb); 391 + skb->protocol = wg_examine_packet_protocol(skb); 392 392 if (skb->protocol == htons(ETH_P_IP)) { 393 393 len = ntohs(ip_hdr(skb)->tot_len); 394 394 if (unlikely(len < sizeof(struct iphdr))) ··· 587 587 wg_packet_consume_data(wg, skb); 588 588 break; 589 589 default: 590 - net_dbg_skb_ratelimited("%s: Invalid packet from %pISpfsc\n", 591 - wg->dev->name, skb); 590 + WARN(1, "Non-exhaustive parsing of packet header lead to unknown packet type!\n"); 592 591 goto err; 593 592 } 594 593 return;
+2
drivers/net/wireless/intel/iwlwifi/cfg/22000.c
··· 300 300 * HT size; mac80211 would otherwise pick the HE max (256) by default. 301 301 */ 302 302 .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT, 303 + .tx_with_siso_diversity = true, 303 304 .num_rbds = IWL_NUM_RBDS_22000_HE, 304 305 }; 305 306 ··· 327 326 * HT size; mac80211 would otherwise pick the HE max (256) by default. 328 327 */ 329 328 .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT, 329 + .tx_with_siso_diversity = true, 330 330 .num_rbds = IWL_NUM_RBDS_22000_HE, 331 331 }; 332 332
+8 -6
drivers/net/wireless/intel/iwlwifi/fw/acpi.c
··· 6 6 * GPL LICENSE SUMMARY 7 7 * 8 8 * Copyright(c) 2017 Intel Deutschland GmbH 9 - * Copyright (C) 2019 Intel Corporation 9 + * Copyright (C) 2019 - 2020 Intel Corporation 10 10 * 11 11 * This program is free software; you can redistribute it and/or modify 12 12 * it under the terms of version 2 of the GNU General Public License as ··· 27 27 * BSD LICENSE 28 28 * 29 29 * Copyright(c) 2017 Intel Deutschland GmbH 30 - * Copyright (C) 2019 Intel Corporation 30 + * Copyright (C) 2019 - 2020 Intel Corporation 31 31 * All rights reserved. 32 32 * 33 33 * Redistribution and use in source and binary forms, with or without ··· 491 491 } 492 492 IWL_EXPORT_SYMBOL(iwl_validate_sar_geo_profile); 493 493 494 - void iwl_sar_geo_init(struct iwl_fw_runtime *fwrt, 495 - struct iwl_per_chain_offset_group *table) 494 + int iwl_sar_geo_init(struct iwl_fw_runtime *fwrt, 495 + struct iwl_per_chain_offset_group *table) 496 496 { 497 497 int ret, i, j; 498 498 499 499 if (!iwl_sar_geo_support(fwrt)) 500 - return; 500 + return -EOPNOTSUPP; 501 501 502 502 ret = iwl_sar_get_wgds_table(fwrt); 503 503 if (ret < 0) { ··· 505 505 "Geo SAR BIOS table invalid or unavailable. (%d)\n", 506 506 ret); 507 507 /* we don't fail if the table is not available */ 508 - return; 508 + return -ENOENT; 509 509 } 510 510 511 511 BUILD_BUG_ON(ACPI_NUM_GEO_PROFILES * ACPI_WGDS_NUM_BANDS * ··· 530 530 i, j, value[1], value[2], value[0]); 531 531 } 532 532 } 533 + 534 + return 0; 533 535 } 534 536 IWL_EXPORT_SYMBOL(iwl_sar_geo_init);
+8 -6
drivers/net/wireless/intel/iwlwifi/fw/acpi.h
··· 6 6 * GPL LICENSE SUMMARY 7 7 * 8 8 * Copyright(c) 2017 Intel Deutschland GmbH 9 - * Copyright(c) 2018 - 2019 Intel Corporation 9 + * Copyright(c) 2018 - 2020 Intel Corporation 10 10 * 11 11 * This program is free software; you can redistribute it and/or modify 12 12 * it under the terms of version 2 of the GNU General Public License as ··· 27 27 * BSD LICENSE 28 28 * 29 29 * Copyright(c) 2017 Intel Deutschland GmbH 30 - * Copyright(c) 2018 - 2019 Intel Corporation 30 + * Copyright(c) 2018 - 2020 Intel Corporation 31 31 * All rights reserved. 32 32 * 33 33 * Redistribution and use in source and binary forms, with or without ··· 171 171 int iwl_validate_sar_geo_profile(struct iwl_fw_runtime *fwrt, 172 172 struct iwl_host_cmd *cmd); 173 173 174 - void iwl_sar_geo_init(struct iwl_fw_runtime *fwrt, 175 - struct iwl_per_chain_offset_group *table); 174 + int iwl_sar_geo_init(struct iwl_fw_runtime *fwrt, 175 + struct iwl_per_chain_offset_group *table); 176 + 176 177 #else /* CONFIG_ACPI */ 177 178 178 179 static inline void *iwl_acpi_get_object(struct device *dev, acpi_string method) ··· 244 243 return -ENOENT; 245 244 } 246 245 247 - static inline void iwl_sar_geo_init(struct iwl_fw_runtime *fwrt, 248 - struct iwl_per_chain_offset_group *table) 246 + static inline int iwl_sar_geo_init(struct iwl_fw_runtime *fwrt, 247 + struct iwl_per_chain_offset_group *table) 249 248 { 249 + return -ENOENT; 250 250 } 251 251 252 252 #endif /* CONFIG_ACPI */
+8 -17
drivers/net/wireless/intel/iwlwifi/fw/dbg.c
··· 8 8 * Copyright(c) 2008 - 2014 Intel Corporation. All rights reserved. 9 9 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH 10 10 * Copyright(c) 2015 - 2017 Intel Deutschland GmbH 11 - * Copyright(c) 2018 - 2019 Intel Corporation 11 + * Copyright(c) 2018 - 2020 Intel Corporation 12 12 * 13 13 * This program is free software; you can redistribute it and/or modify 14 14 * it under the terms of version 2 of the GNU General Public License as ··· 31 31 * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved. 32 32 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH 33 33 * Copyright(c) 2015 - 2017 Intel Deutschland GmbH 34 - * Copyright(c) 2018 - 2019 Intel Corporation 34 + * Copyright(c) 2018 - 2020 Intel Corporation 35 35 * All rights reserved. 36 36 * 37 37 * Redistribution and use in source and binary forms, with or without ··· 1409 1409 goto out; 1410 1410 } 1411 1411 1412 - /* 1413 - * region register have absolute value so apply rxf offset after 1414 - * reading the registers 1415 - */ 1416 - offs += rxf_data.offset; 1412 + offs = rxf_data.offset; 1417 1413 1418 1414 /* Lock fence */ 1419 1415 iwl_write_prph_no_grab(fwrt->trans, RXF_SET_FENCE_MODE + offs, 0x1); ··· 2490 2494 goto out; 2491 2495 } 2492 2496 2493 - if (iwl_fw_dbg_stop_restart_recording(fwrt, &params, true)) { 2494 - IWL_ERR(fwrt, "Failed to stop DBGC recording, aborting dump\n"); 2495 - goto out; 2496 - } 2497 + iwl_fw_dbg_stop_restart_recording(fwrt, &params, true); 2497 2498 2498 2499 IWL_DEBUG_FW_INFO(fwrt, "WRT: Data collection start\n"); 2499 2500 if (iwl_trans_dbg_ini_valid(fwrt->trans)) ··· 2655 2662 return 0; 2656 2663 } 2657 2664 2658 - int iwl_fw_dbg_stop_restart_recording(struct iwl_fw_runtime *fwrt, 2659 - struct iwl_fw_dbg_params *params, 2660 - bool stop) 2665 + void iwl_fw_dbg_stop_restart_recording(struct iwl_fw_runtime *fwrt, 2666 + struct iwl_fw_dbg_params *params, 2667 + bool stop) 2661 2668 { 2662 2669 int ret = 0; 2663 2670 2664 2671 if (test_bit(STATUS_FW_ERROR, &fwrt->trans->status)) 2665 - return 0; 2672 + return; 2666 2673 2667 2674 if (fw_has_capa(&fwrt->fw->ucode_capa, 2668 2675 IWL_UCODE_TLV_CAPA_DBG_SUSPEND_RESUME_CMD_SUPP)) ··· 2679 2686 iwl_fw_set_dbg_rec_on(fwrt); 2680 2687 } 2681 2688 #endif 2682 - 2683 - return ret; 2684 2689 } 2685 2690 IWL_EXPORT_SYMBOL(iwl_fw_dbg_stop_restart_recording);
+3 -3
drivers/net/wireless/intel/iwlwifi/fw/dbg.h
··· 239 239 _iwl_fw_dbg_trigger_simple_stop((fwrt), (wdev), \ 240 240 iwl_fw_dbg_get_trigger((fwrt)->fw,\ 241 241 (trig))) 242 - int iwl_fw_dbg_stop_restart_recording(struct iwl_fw_runtime *fwrt, 243 - struct iwl_fw_dbg_params *params, 244 - bool stop); 242 + void iwl_fw_dbg_stop_restart_recording(struct iwl_fw_runtime *fwrt, 243 + struct iwl_fw_dbg_params *params, 244 + bool stop); 245 245 246 246 #ifdef CONFIG_IWLWIFI_DEBUGFS 247 247 static inline void iwl_fw_set_dbg_rec_on(struct iwl_fw_runtime *fwrt)
+1 -1
drivers/net/wireless/intel/iwlwifi/iwl-drv.c
··· 1467 1467 kmemdup(pieces->dbg_conf_tlv[i], 1468 1468 pieces->dbg_conf_tlv_len[i], 1469 1469 GFP_KERNEL); 1470 - if (!pieces->dbg_conf_tlv_len[i]) 1470 + if (!pieces->dbg_conf_tlv[i]) 1471 1471 goto out_free_fw; 1472 1472 } 1473 1473 }
+8 -1
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
··· 762 762 u16 cmd_wide_id = WIDE_ID(PHY_OPS_GROUP, GEO_TX_POWER_LIMIT); 763 763 union geo_tx_power_profiles_cmd cmd; 764 764 u16 len; 765 + int ret; 765 766 766 767 cmd.geo_cmd.ops = cpu_to_le32(IWL_PER_CHAIN_OFFSET_SET_TABLES); 767 768 768 - iwl_sar_geo_init(&mvm->fwrt, cmd.geo_cmd.table); 769 + ret = iwl_sar_geo_init(&mvm->fwrt, cmd.geo_cmd.table); 770 + /* 771 + * It is a valid scenario to not support SAR, or miss wgds table, 772 + * but in that case there is no need to send the command. 773 + */ 774 + if (ret) 775 + return 0; 769 776 770 777 cmd.geo_cmd.table_revision = cpu_to_le32(mvm->fwrt.geo_rev); 771 778
+26 -9
drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c
··· 6 6 * GPL LICENSE SUMMARY 7 7 * 8 8 * Copyright(c) 2017 Intel Deutschland GmbH 9 - * Copyright(c) 2018 - 2019 Intel Corporation 9 + * Copyright(c) 2018 - 2020 Intel Corporation 10 10 * 11 11 * This program is free software; you can redistribute it and/or modify 12 12 * it under the terms of version 2 of the GNU General Public License as ··· 27 27 * BSD LICENSE 28 28 * 29 29 * Copyright(c) 2017 Intel Deutschland GmbH 30 - * Copyright(c) 2018 - 2019 Intel Corporation 30 + * Copyright(c) 2018 - 2020 Intel Corporation 31 31 * All rights reserved. 32 32 * 33 33 * Redistribution and use in source and binary forms, with or without ··· 147 147 (vht_ena && (vht_cap->cap & IEEE80211_VHT_CAP_RXLDPC)))) 148 148 flags |= IWL_TLC_MNG_CFG_FLAGS_LDPC_MSK; 149 149 150 - /* consider our LDPC support in case of HE */ 150 + /* consider LDPC support in case of HE */ 151 + if (he_cap->has_he && (he_cap->he_cap_elem.phy_cap_info[1] & 152 + IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD)) 153 + flags |= IWL_TLC_MNG_CFG_FLAGS_LDPC_MSK; 154 + 151 155 if (sband->iftype_data && sband->iftype_data->he_cap.has_he && 152 156 !(sband->iftype_data->he_cap.he_cap_elem.phy_cap_info[1] & 153 157 IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD)) ··· 195 191 { 196 192 u16 supp; 197 193 int i, highest_mcs; 194 + u8 nss = sta->rx_nss; 198 195 199 - for (i = 0; i < sta->rx_nss; i++) { 200 - if (i == IWL_TLC_NSS_MAX) 201 - break; 196 + /* the station support only a single receive chain */ 197 + if (sta->smps_mode == IEEE80211_SMPS_STATIC) 198 + nss = 1; 202 199 200 + for (i = 0; i < nss && i < IWL_TLC_NSS_MAX; i++) { 203 201 highest_mcs = rs_fw_vht_highest_rx_mcs_index(vht_cap, i + 1); 204 202 if (!highest_mcs) 205 203 continue; ··· 247 241 u16 tx_mcs_160 = 248 242 le16_to_cpu(sband->iftype_data->he_cap.he_mcs_nss_supp.tx_mcs_160); 249 243 int i; 244 + u8 nss = sta->rx_nss; 250 245 251 - for (i = 0; i < sta->rx_nss && i < IWL_TLC_NSS_MAX; i++) { 246 + /* the station support only a single receive chain */ 247 + if (sta->smps_mode == IEEE80211_SMPS_STATIC) 248 + nss = 1; 249 + 250 + for (i = 0; i < nss && i < IWL_TLC_NSS_MAX; i++) { 252 251 u16 _mcs_160 = (mcs_160 >> (2 * i)) & 0x3; 253 252 u16 _mcs_80 = (mcs_80 >> (2 * i)) & 0x3; 254 253 u16 _tx_mcs_160 = (tx_mcs_160 >> (2 * i)) & 0x3; ··· 314 303 cmd->mode = IWL_TLC_MNG_MODE_HT; 315 304 cmd->ht_rates[IWL_TLC_NSS_1][IWL_TLC_HT_BW_NONE_160] = 316 305 cpu_to_le16(ht_cap->mcs.rx_mask[0]); 317 - cmd->ht_rates[IWL_TLC_NSS_2][IWL_TLC_HT_BW_NONE_160] = 318 - cpu_to_le16(ht_cap->mcs.rx_mask[1]); 306 + 307 + /* the station support only a single receive chain */ 308 + if (sta->smps_mode == IEEE80211_SMPS_STATIC) 309 + cmd->ht_rates[IWL_TLC_NSS_2][IWL_TLC_HT_BW_NONE_160] = 310 + 0; 311 + else 312 + cmd->ht_rates[IWL_TLC_NSS_2][IWL_TLC_HT_BW_NONE_160] = 313 + cpu_to_le16(ht_cap->mcs.rx_mask[1]); 319 314 } 320 315 } 321 316
+4
drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
··· 785 785 if (!le32_to_cpu(notif->status)) { 786 786 iwl_mvm_te_check_disconnect(mvm, vif, 787 787 "Session protection failure"); 788 + spin_lock_bh(&mvm->time_event_lock); 788 789 iwl_mvm_te_clear_data(mvm, te_data); 790 + spin_unlock_bh(&mvm->time_event_lock); 789 791 } 790 792 791 793 if (le32_to_cpu(notif->start)) { ··· 803 801 */ 804 802 iwl_mvm_te_check_disconnect(mvm, vif, 805 803 "No beacon heard and the session protection is over already..."); 804 + spin_lock_bh(&mvm->time_event_lock); 806 805 iwl_mvm_te_clear_data(mvm, te_data); 806 + spin_unlock_bh(&mvm->time_event_lock); 807 807 } 808 808 809 809 goto out_unlock;
-1
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 939 939 IWL_DEV_INFO(0x2723, 0x1653, iwl_ax200_cfg_cc, iwl_ax200_killer_1650w_name), 940 940 IWL_DEV_INFO(0x2723, 0x1654, iwl_ax200_cfg_cc, iwl_ax200_killer_1650x_name), 941 941 IWL_DEV_INFO(0x2723, IWL_CFG_ANY, iwl_ax200_cfg_cc, iwl_ax200_name), 942 - 943 942 #endif /* CONFIG_IWLMVM */ 944 943 }; 945 944
+1
drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.h
··· 561 561 rxmcs == DESC92C_RATE11M) 562 562 563 563 struct phy_status_rpt { 564 + u8 padding[2]; 564 565 u8 ch_corr[2]; 565 566 u8 cck_sig_qual_ofdm_pwdb_all; 566 567 u8 cck_agc_rpt_ofdm_cfosho_a;
+1 -1
drivers/net/wireless/ti/wlcore/main.c
··· 6274 6274 wl->hw->wiphy->flags |= WIPHY_FLAG_AP_UAPSD | 6275 6275 WIPHY_FLAG_HAS_REMAIN_ON_CHANNEL | 6276 6276 WIPHY_FLAG_HAS_CHANNEL_SWITCH | 6277 - + WIPHY_FLAG_IBSS_RSN; 6277 + WIPHY_FLAG_IBSS_RSN; 6278 6278 6279 6279 wl->hw->wiphy->features |= NL80211_FEATURE_AP_SCAN; 6280 6280
+2 -3
drivers/nfc/fdp/fdp.c
··· 184 184 const struct firmware *fw; 185 185 struct sk_buff *skb; 186 186 unsigned long len; 187 - u8 max_size, payload_size; 187 + int max_size, payload_size; 188 188 int rc = 0; 189 189 190 190 if ((type == NCI_PATCH_TYPE_OTP && !info->otp_patch) || ··· 207 207 208 208 while (len) { 209 209 210 - payload_size = min_t(unsigned long, (unsigned long) max_size, 211 - len); 210 + payload_size = min_t(unsigned long, max_size, len); 212 211 213 212 skb = nci_skb_alloc(ndev, (NCI_CTRL_HDR_SIZE + payload_size), 214 213 GFP_KERNEL);
+5 -3
drivers/nvme/host/rdma.c
··· 850 850 if (new) 851 851 blk_mq_free_tag_set(ctrl->ctrl.admin_tagset); 852 852 out_free_async_qe: 853 - nvme_rdma_free_qe(ctrl->device->dev, &ctrl->async_event_sqe, 854 - sizeof(struct nvme_command), DMA_TO_DEVICE); 855 - ctrl->async_event_sqe.data = NULL; 853 + if (ctrl->async_event_sqe.data) { 854 + nvme_rdma_free_qe(ctrl->device->dev, &ctrl->async_event_sqe, 855 + sizeof(struct nvme_command), DMA_TO_DEVICE); 856 + ctrl->async_event_sqe.data = NULL; 857 + } 856 858 out_free_queue: 857 859 nvme_rdma_free_queue(&ctrl->queues[0]); 858 860 return error;
+9 -3
drivers/nvme/target/tcp.c
··· 515 515 return 1; 516 516 } 517 517 518 - static int nvmet_try_send_data(struct nvmet_tcp_cmd *cmd) 518 + static int nvmet_try_send_data(struct nvmet_tcp_cmd *cmd, bool last_in_batch) 519 519 { 520 520 struct nvmet_tcp_queue *queue = cmd->queue; 521 521 int ret; ··· 523 523 while (cmd->cur_sg) { 524 524 struct page *page = sg_page(cmd->cur_sg); 525 525 u32 left = cmd->cur_sg->length - cmd->offset; 526 + int flags = MSG_DONTWAIT; 527 + 528 + if ((!last_in_batch && cmd->queue->send_list_len) || 529 + cmd->wbytes_done + left < cmd->req.transfer_len || 530 + queue->data_digest || !queue->nvme_sq.sqhd_disabled) 531 + flags |= MSG_MORE; 526 532 527 533 ret = kernel_sendpage(cmd->queue->sock, page, cmd->offset, 528 - left, MSG_DONTWAIT | MSG_MORE); 534 + left, flags); 529 535 if (ret <= 0) 530 536 return ret; 531 537 ··· 666 660 } 667 661 668 662 if (cmd->state == NVMET_TCP_SEND_DATA) { 669 - ret = nvmet_try_send_data(cmd); 663 + ret = nvmet_try_send_data(cmd, last_in_batch); 670 664 if (ret <= 0) 671 665 goto done_send; 672 666 }
+1
drivers/rtc/Kconfig
··· 327 327 config RTC_DRV_MAX8907 328 328 tristate "Maxim MAX8907" 329 329 depends on MFD_MAX8907 || COMPILE_TEST 330 + select REGMAP_IRQ 330 331 help 331 332 If you say yes here you will get support for the 332 333 RTC of Maxim MAX8907 PMIC.
+24 -3
drivers/s390/block/dasd.c
··· 178 178 (unsigned long) block); 179 179 INIT_LIST_HEAD(&block->ccw_queue); 180 180 spin_lock_init(&block->queue_lock); 181 + INIT_LIST_HEAD(&block->format_list); 182 + spin_lock_init(&block->format_lock); 181 183 timer_setup(&block->timer, dasd_block_timeout, 0); 182 184 spin_lock_init(&block->profile.lock); 183 185 ··· 1781 1779 1782 1780 if (dasd_ese_needs_format(cqr->block, irb)) { 1783 1781 if (rq_data_dir((struct request *)cqr->callback_data) == READ) { 1784 - device->discipline->ese_read(cqr); 1782 + device->discipline->ese_read(cqr, irb); 1785 1783 cqr->status = DASD_CQR_SUCCESS; 1786 1784 cqr->stopclk = now; 1787 1785 dasd_device_clear_timer(device); 1788 1786 dasd_schedule_device_bh(device); 1789 1787 return; 1790 1788 } 1791 - fcqr = device->discipline->ese_format(device, cqr); 1789 + fcqr = device->discipline->ese_format(device, cqr, irb); 1792 1790 if (IS_ERR(fcqr)) { 1791 + if (PTR_ERR(fcqr) == -EINVAL) { 1792 + cqr->status = DASD_CQR_ERROR; 1793 + return; 1794 + } 1793 1795 /* 1794 1796 * If we can't format now, let the request go 1795 1797 * one extra round. Maybe we can format later. 1796 1798 */ 1797 1799 cqr->status = DASD_CQR_QUEUED; 1800 + dasd_schedule_device_bh(device); 1801 + return; 1798 1802 } else { 1799 1803 fcqr->status = DASD_CQR_QUEUED; 1800 1804 cqr->status = DASD_CQR_QUEUED; ··· 2756 2748 { 2757 2749 struct request *req; 2758 2750 blk_status_t error = BLK_STS_OK; 2751 + unsigned int proc_bytes; 2759 2752 int status; 2760 2753 2761 2754 req = (struct request *) cqr->callback_data; 2762 2755 dasd_profile_end(cqr->block, cqr, req); 2763 2756 2757 + proc_bytes = cqr->proc_bytes; 2764 2758 status = cqr->block->base->discipline->free_cp(cqr, req); 2765 2759 if (status < 0) 2766 2760 error = errno_to_blk_status(status); ··· 2793 2783 blk_mq_end_request(req, error); 2794 2784 blk_mq_run_hw_queues(req->q, true); 2795 2785 } else { 2796 - blk_mq_complete_request(req); 2786 + /* 2787 + * Partial completed requests can happen with ESE devices. 2788 + * During read we might have gotten a NRF error and have to 2789 + * complete a request partially. 2790 + */ 2791 + if (proc_bytes) { 2792 + blk_update_request(req, BLK_STS_OK, 2793 + blk_rq_bytes(req) - proc_bytes); 2794 + blk_mq_requeue_request(req, true); 2795 + } else { 2796 + blk_mq_complete_request(req); 2797 + } 2797 2798 } 2798 2799 } 2799 2800
+156 -7
drivers/s390/block/dasd_eckd.c
··· 207 207 geo->head |= head; 208 208 } 209 209 210 + /* 211 + * calculate failing track from sense data depending if 212 + * it is an EAV device or not 213 + */ 214 + static int dasd_eckd_track_from_irb(struct irb *irb, struct dasd_device *device, 215 + sector_t *track) 216 + { 217 + struct dasd_eckd_private *private = device->private; 218 + u8 *sense = NULL; 219 + u32 cyl; 220 + u8 head; 221 + 222 + sense = dasd_get_sense(irb); 223 + if (!sense) { 224 + DBF_DEV_EVENT(DBF_WARNING, device, "%s", 225 + "ESE error no sense data\n"); 226 + return -EINVAL; 227 + } 228 + if (!(sense[27] & DASD_SENSE_BIT_2)) { 229 + DBF_DEV_EVENT(DBF_WARNING, device, "%s", 230 + "ESE error no valid track data\n"); 231 + return -EINVAL; 232 + } 233 + 234 + if (sense[27] & DASD_SENSE_BIT_3) { 235 + /* enhanced addressing */ 236 + cyl = sense[30] << 20; 237 + cyl |= (sense[31] & 0xF0) << 12; 238 + cyl |= sense[28] << 8; 239 + cyl |= sense[29]; 240 + } else { 241 + cyl = sense[29] << 8; 242 + cyl |= sense[30]; 243 + } 244 + head = sense[31] & 0x0F; 245 + *track = cyl * private->rdc_data.trk_per_cyl + head; 246 + return 0; 247 + } 248 + 210 249 static int set_timestamp(struct ccw1 *ccw, struct DE_eckd_data *data, 211 250 struct dasd_device *device) 212 251 { ··· 3025 2986 0, NULL); 3026 2987 } 3027 2988 2989 + static bool test_and_set_format_track(struct dasd_format_entry *to_format, 2990 + struct dasd_block *block) 2991 + { 2992 + struct dasd_format_entry *format; 2993 + unsigned long flags; 2994 + bool rc = false; 2995 + 2996 + spin_lock_irqsave(&block->format_lock, flags); 2997 + list_for_each_entry(format, &block->format_list, list) { 2998 + if (format->track == to_format->track) { 2999 + rc = true; 3000 + goto out; 3001 + } 3002 + } 3003 + list_add_tail(&to_format->list, &block->format_list); 3004 + 3005 + out: 3006 + spin_unlock_irqrestore(&block->format_lock, flags); 3007 + return rc; 3008 + } 3009 + 3010 + static void clear_format_track(struct dasd_format_entry *format, 3011 + struct dasd_block *block) 3012 + { 3013 + unsigned long flags; 3014 + 3015 + spin_lock_irqsave(&block->format_lock, flags); 3016 + list_del_init(&format->list); 3017 + spin_unlock_irqrestore(&block->format_lock, flags); 3018 + } 3019 + 3028 3020 /* 3029 3021 * Callback function to free ESE format requests. 3030 3022 */ ··· 3063 2993 { 3064 2994 struct dasd_device *device = cqr->startdev; 3065 2995 struct dasd_eckd_private *private = device->private; 2996 + struct dasd_format_entry *format = data; 3066 2997 2998 + clear_format_track(format, cqr->basedev->block); 3067 2999 private->count--; 3068 3000 dasd_ffree_request(cqr, device); 3069 3001 } 3070 3002 3071 3003 static struct dasd_ccw_req * 3072 - dasd_eckd_ese_format(struct dasd_device *startdev, struct dasd_ccw_req *cqr) 3004 + dasd_eckd_ese_format(struct dasd_device *startdev, struct dasd_ccw_req *cqr, 3005 + struct irb *irb) 3073 3006 { 3074 3007 struct dasd_eckd_private *private; 3008 + struct dasd_format_entry *format; 3075 3009 struct format_data_t fdata; 3076 3010 unsigned int recs_per_trk; 3077 3011 struct dasd_ccw_req *fcqr; ··· 3085 3011 struct request *req; 3086 3012 sector_t first_trk; 3087 3013 sector_t last_trk; 3014 + sector_t curr_trk; 3088 3015 int rc; 3089 3016 3090 3017 req = cqr->callback_data; 3091 - base = cqr->block->base; 3018 + block = cqr->block; 3019 + base = block->base; 3092 3020 private = base->private; 3093 - block = base->block; 3094 3021 blksize = block->bp_block; 3095 3022 recs_per_trk = recs_per_track(&private->rdc_data, 0, blksize); 3023 + format = &startdev->format_entry; 3096 3024 3097 3025 first_trk = blk_rq_pos(req) >> block->s2b_shift; 3098 3026 sector_div(first_trk, recs_per_trk); 3099 3027 last_trk = 3100 3028 (blk_rq_pos(req) + blk_rq_sectors(req) - 1) >> block->s2b_shift; 3101 3029 sector_div(last_trk, recs_per_trk); 3030 + rc = dasd_eckd_track_from_irb(irb, base, &curr_trk); 3031 + if (rc) 3032 + return ERR_PTR(rc); 3102 3033 3103 - fdata.start_unit = first_trk; 3104 - fdata.stop_unit = last_trk; 3034 + if (curr_trk < first_trk || curr_trk > last_trk) { 3035 + DBF_DEV_EVENT(DBF_WARNING, startdev, 3036 + "ESE error track %llu not within range %llu - %llu\n", 3037 + curr_trk, first_trk, last_trk); 3038 + return ERR_PTR(-EINVAL); 3039 + } 3040 + format->track = curr_trk; 3041 + /* test if track is already in formatting by another thread */ 3042 + if (test_and_set_format_track(format, block)) 3043 + return ERR_PTR(-EEXIST); 3044 + 3045 + fdata.start_unit = curr_trk; 3046 + fdata.stop_unit = curr_trk; 3105 3047 fdata.blksize = blksize; 3106 3048 fdata.intensity = private->uses_cdl ? DASD_FMT_INT_COMPAT : 0; 3107 3049 ··· 3134 3044 return fcqr; 3135 3045 3136 3046 fcqr->callback = dasd_eckd_ese_format_cb; 3047 + fcqr->callback_data = (void *) format; 3137 3048 3138 3049 return fcqr; 3139 3050 } ··· 3142 3051 /* 3143 3052 * When data is read from an unformatted area of an ESE volume, this function 3144 3053 * returns zeroed data and thereby mimics a read of zero data. 3054 + * 3055 + * The first unformatted track is the one that got the NRF error, the address is 3056 + * encoded in the sense data. 3057 + * 3058 + * All tracks before have returned valid data and should not be touched. 3059 + * All tracks after the unformatted track might be formatted or not. This is 3060 + * currently not known, remember the processed data and return the remainder of 3061 + * the request to the blocklayer in __dasd_cleanup_cqr().
3145 3062 */ 3146 - static void dasd_eckd_ese_read(struct dasd_ccw_req *cqr) 3063 + static int dasd_eckd_ese_read(struct dasd_ccw_req *cqr, struct irb *irb) 3147 3064 { 3065 + struct dasd_eckd_private *private; 3066 + sector_t first_trk, last_trk; 3067 + sector_t first_blk, last_blk; 3148 3068 unsigned int blksize, off; 3069 + unsigned int recs_per_trk; 3149 3070 struct dasd_device *base; 3150 3071 struct req_iterator iter; 3072 + struct dasd_block *block; 3073 + unsigned int skip_block; 3074 + unsigned int blk_count; 3151 3075 struct request *req; 3152 3076 struct bio_vec bv; 3077 + sector_t curr_trk; 3078 + sector_t end_blk; 3153 3079 char *dst; 3080 + int rc; 3154 3081 3155 3082 req = (struct request *) cqr->callback_data; 3156 3083 base = cqr->block->base; 3157 3084 blksize = base->block->bp_block; 3085 + block = cqr->block; 3086 + private = base->private; 3087 + skip_block = 0; 3088 + blk_count = 0; 3089 + 3090 + recs_per_trk = recs_per_track(&private->rdc_data, 0, blksize); 3091 + first_trk = first_blk = blk_rq_pos(req) >> block->s2b_shift; 3092 + sector_div(first_trk, recs_per_trk); 3093 + last_trk = last_blk = 3094 + (blk_rq_pos(req) + blk_rq_sectors(req) - 1) >> block->s2b_shift; 3095 + sector_div(last_trk, recs_per_trk); 3096 + rc = dasd_eckd_track_from_irb(irb, base, &curr_trk); 3097 + if (rc) 3098 + return rc; 3099 + 3100 + /* sanity check if the current track from sense data is valid */ 3101 + if (curr_trk < first_trk || curr_trk > last_trk) { 3102 + DBF_DEV_EVENT(DBF_WARNING, base, 3103 + "ESE error track %llu not within range %llu - %llu\n", 3104 + curr_trk, first_trk, last_trk); 3105 + return -EINVAL; 3106 + } 3107 + 3108 + /* 3109 + * if not the first track got the NRF error we have to skip over valid 3110 + * blocks 3111 + */ 3112 + if (curr_trk != first_trk) 3113 + skip_block = curr_trk * recs_per_trk - first_blk; 3114 + 3115 + /* we have no information beyond the current track */ 3116 + end_blk = (curr_trk + 1) * recs_per_trk; 3158 3117 3159 3118 rq_for_each_segment(bv, req, iter) { 3160 3119 dst = page_address(bv.bv_page) + bv.bv_offset; 3161 3120 for (off = 0; off < bv.bv_len; off += blksize) { 3162 - if (dst && rq_data_dir(req) == READ) { 3121 + if (first_blk + blk_count >= end_blk) { 3122 + cqr->proc_bytes = blk_count * blksize; 3123 + return 0; 3124 + } 3125 + if (dst && !skip_block) { 3163 3126 dst += off; 3164 3127 memset(dst, 0, blksize); 3128 + } else { 3129 + skip_block--; 3165 3130 } 3131 + blk_count++; 3166 3132 } 3167 3133 } 3134 + return 0; 3168 3135 } 3169 3136 3170 3137 /*
+13 -2
drivers/s390/block/dasd_int.h
··· 187 187 188 188 void (*callback)(struct dasd_ccw_req *, void *data); 189 189 void *callback_data; 190 + unsigned int proc_bytes; /* bytes for partial completion */ 190 191 }; 191 192 192 193 /* ··· 388 387 int (*ext_pool_warn_thrshld)(struct dasd_device *); 389 388 int (*ext_pool_oos)(struct dasd_device *); 390 389 int (*ext_pool_exhaust)(struct dasd_device *, struct dasd_ccw_req *); 391 - struct dasd_ccw_req *(*ese_format)(struct dasd_device *, struct dasd_ccw_req *); 392 - void (*ese_read)(struct dasd_ccw_req *); 390 + struct dasd_ccw_req *(*ese_format)(struct dasd_device *, 391 + struct dasd_ccw_req *, struct irb *); 392 + int (*ese_read)(struct dasd_ccw_req *, struct irb *); 393 393 }; 394 394 395 395 extern struct dasd_discipline *dasd_diag_discipline_pointer; ··· 476 474 spinlock_t lock; 477 475 }; 478 476 477 + struct dasd_format_entry { 478 + struct list_head list; 479 + sector_t track; 480 + }; 481 + 479 482 struct dasd_device { 480 483 /* Block device stuff. */ 481 484 struct dasd_block *block; ··· 546 539 struct dentry *debugfs_dentry; 547 540 struct dentry *hosts_dentry; 548 541 struct dasd_profile profile; 542 + struct dasd_format_entry format_entry; 549 543 }; 550 544 551 545 struct dasd_block { ··· 572 564 573 565 struct dentry *debugfs_dentry; 574 566 struct dasd_profile profile; 567 + 568 + struct list_head format_list; 569 + spinlock_t format_lock; 575 570 }; 576 571 577 572 struct dasd_attention_data {
+2 -1
drivers/scsi/ipr.c
··· 9950 9950 ioa_cfg->max_devs_supported = ipr_max_devs; 9951 9951 9952 9952 if (ioa_cfg->sis64) { 9953 + host->max_channel = IPR_MAX_SIS64_BUSES; 9953 9954 host->max_id = IPR_MAX_SIS64_TARGETS_PER_BUS; 9954 9955 host->max_lun = IPR_MAX_SIS64_LUNS_PER_TARGET; 9955 9956 if (ipr_max_devs > IPR_MAX_SIS64_DEVS) ··· 9959 9958 + ((sizeof(struct ipr_config_table_entry64) 9960 9959 * ioa_cfg->max_devs_supported))); 9961 9960 } else { 9961 + host->max_channel = IPR_VSET_BUS; 9962 9962 host->max_id = IPR_MAX_NUM_TARGETS_PER_BUS; 9963 9963 host->max_lun = IPR_MAX_NUM_LUNS_PER_TARGET; 9964 9964 if (ipr_max_devs > IPR_MAX_PHYSICAL_DEVS) ··· 9969 9967 * ioa_cfg->max_devs_supported))); 9970 9968 } 9971 9969 9972 - host->max_channel = IPR_VSET_BUS; 9973 9970 host->unique_id = host->host_no; 9974 9971 host->max_cmd_len = IPR_MAX_CDB_LEN; 9975 9972 host->can_queue = ioa_cfg->max_cmds;
+1
drivers/scsi/ipr.h
··· 1300 1300 #define IPR_ARRAY_VIRTUAL_BUS 0x1 1301 1301 #define IPR_VSET_VIRTUAL_BUS 0x2 1302 1302 #define IPR_IOAFP_VIRTUAL_BUS 0x3 1303 + #define IPR_MAX_SIS64_BUSES 0x4 1303 1304 1304 1305 #define IPR_GET_RES_PHYS_LOC(res) \ 1305 1306 (((res)->bus << 24) | ((res)->target << 8) | (res)->lun)
+14 -7
drivers/scsi/ufs/ufshcd.c
··· 3884 3884 void ufshcd_auto_hibern8_update(struct ufs_hba *hba, u32 ahit) 3885 3885 { 3886 3886 unsigned long flags; 3887 + bool update = false; 3887 3888 3888 - if (!(hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT)) 3889 + if (!ufshcd_is_auto_hibern8_supported(hba)) 3889 3890 return; 3890 3891 3891 3892 spin_lock_irqsave(hba->host->host_lock, flags); 3892 - if (hba->ahit == ahit) 3893 - goto out_unlock; 3894 - hba->ahit = ahit; 3895 - if (!pm_runtime_suspended(hba->dev)) 3896 - ufshcd_writel(hba, hba->ahit, REG_AUTO_HIBERNATE_IDLE_TIMER); 3897 - out_unlock: 3893 + if (hba->ahit != ahit) { 3894 + hba->ahit = ahit; 3895 + update = true; 3896 + } 3898 3897 spin_unlock_irqrestore(hba->host->host_lock, flags); 3898 + 3899 + if (update && !pm_runtime_suspended(hba->dev)) { 3900 + pm_runtime_get_sync(hba->dev); 3901 + ufshcd_hold(hba, false); 3902 + ufshcd_auto_hibern8_enable(hba); 3903 + ufshcd_release(hba); 3904 + pm_runtime_put(hba->dev); 3905 + } 3899 3906 } 3900 3907 EXPORT_SYMBOL_GPL(ufshcd_auto_hibern8_update); 3901 3908
+3
drivers/slimbus/qcom-ngd-ctrl.c
··· 1320 1320 { 1321 1321 .compatible = "qcom,slim-ngd-v1.5.0", 1322 1322 .data = &ngd_v1_5_offset_info, 1323 + },{ 1324 + .compatible = "qcom,slim-ngd-v2.1.0", 1325 + .data = &ngd_v1_5_offset_info, 1323 1326 }, 1324 1327 {} 1325 1328 };
+11 -10
drivers/staging/greybus/tools/loopback_test.c
··· 19 19 #include <signal.h> 20 20 21 21 #define MAX_NUM_DEVICES 10 22 + #define MAX_SYSFS_PREFIX 0x80 22 23 #define MAX_SYSFS_PATH 0x200 23 24 #define CSV_MAX_LINE 0x1000 24 25 #define SYSFS_MAX_INT 0x20 ··· 68 67 }; 69 68 70 69 struct loopback_device { 71 - char name[MAX_SYSFS_PATH]; 70 + char name[MAX_STR_LEN]; 72 71 char sysfs_entry[MAX_SYSFS_PATH]; 73 72 char debugfs_entry[MAX_SYSFS_PATH]; 74 73 struct loopback_results results; ··· 94 93 int stop_all; 95 94 int poll_count; 96 95 char test_name[MAX_STR_LEN]; 97 - char sysfs_prefix[MAX_SYSFS_PATH]; 98 - char debugfs_prefix[MAX_SYSFS_PATH]; 96 + char sysfs_prefix[MAX_SYSFS_PREFIX]; 97 + char debugfs_prefix[MAX_SYSFS_PREFIX]; 99 98 struct timespec poll_timeout; 100 99 struct loopback_device devices[MAX_NUM_DEVICES]; 101 100 struct loopback_results aggregate_results; ··· 638 637 static int open_poll_files(struct loopback_test *t) 639 638 { 640 639 struct loopback_device *dev; 641 - char buf[MAX_STR_LEN]; 640 + char buf[MAX_SYSFS_PATH + MAX_STR_LEN]; 642 641 char dummy; 643 642 int fds_idx = 0; 644 643 int i; ··· 656 655 goto err; 657 656 } 658 657 read(t->fds[fds_idx].fd, &dummy, 1); 659 - t->fds[fds_idx].events = EPOLLERR|EPOLLPRI; 658 + t->fds[fds_idx].events = POLLERR | POLLPRI; 660 659 t->fds[fds_idx].revents = 0; 661 660 fds_idx++; 662 661 } ··· 749 748 } 750 749 751 750 for (i = 0; i < t->poll_count; i++) { 752 - if (t->fds[i].revents & EPOLLPRI) { 751 + if (t->fds[i].revents & POLLPRI) { 753 752 /* Dummy read to clear the event */ 754 753 read(t->fds[i].fd, &dummy, 1); 755 754 number_of_events++; ··· 908 907 t.iteration_max = atoi(optarg); 909 908 break; 910 909 case 'S': 911 - snprintf(t.sysfs_prefix, MAX_SYSFS_PATH, "%s", optarg); 910 + snprintf(t.sysfs_prefix, MAX_SYSFS_PREFIX, "%s", optarg); 912 911 break; 913 912 case 'D': 914 - snprintf(t.debugfs_prefix, MAX_SYSFS_PATH, "%s", optarg); 913 + snprintf(t.debugfs_prefix, MAX_SYSFS_PREFIX, "%s", optarg); 915 914 break; 916 915 case 'm': 917 916 t.mask = atol(optarg); ··· 962 961 } 963 962 964 963 if (!strcmp(t.sysfs_prefix, "")) 965 - snprintf(t.sysfs_prefix, MAX_SYSFS_PATH, "%s", sysfs_prefix); 964 + snprintf(t.sysfs_prefix, MAX_SYSFS_PREFIX, "%s", sysfs_prefix); 966 965 967 966 if (!strcmp(t.debugfs_prefix, "")) 968 - snprintf(t.debugfs_prefix, MAX_SYSFS_PATH, "%s", debugfs_prefix); 967 + snprintf(t.debugfs_prefix, MAX_SYSFS_PREFIX, "%s", debugfs_prefix); 969 968 970 969 ret = find_loopback_devices(&t); 971 970 if (ret)
+1
drivers/staging/rtl8188eu/os_dep/usb_intf.c
··· 38 38 {USB_DEVICE(0x2001, 0x331B)}, /* D-Link DWA-121 rev B1 */ 39 39 {USB_DEVICE(0x2357, 0x010c)}, /* TP-Link TL-WN722N v2 */ 40 40 {USB_DEVICE(0x2357, 0x0111)}, /* TP-Link TL-WN727N v5.21 */ 41 + {USB_DEVICE(0x2C4E, 0x0102)}, /* MERCUSYS MW150US v2 */ 41 42 {USB_DEVICE(0x0df6, 0x0076)}, /* Sitecom N150 v2 */ 42 43 {USB_DEVICE(USB_VENDER_ID_REALTEK, 0xffef)}, /* Rosewill RNX-N150NUB */ 43 44 {} /* Terminating entry */
+1 -1
drivers/staging/speakup/main.c
··· 561 561 return 0; 562 562 } else if (tmpx < vc->vc_cols - 2 && 563 563 (ch == SPACE || ch == 0 || (ch < 0x100 && IS_WDLM(ch))) && 564 - get_char(vc, (u_short *)&tmp_pos + 1, &temp) > SPACE) { 564 + get_char(vc, (u_short *)tmp_pos + 1, &temp) > SPACE) { 565 565 tmp_pos += 2; 566 566 tmpx++; 567 567 } else {
+8 -7
drivers/staging/wfx/hif_tx.c
··· 140 140 else 141 141 control_reg_write(wdev, 0); 142 142 mutex_unlock(&wdev->hif_cmd.lock); 143 + mutex_unlock(&wdev->hif_cmd.key_renew_lock); 143 144 kfree(hif); 144 145 return ret; 145 146 } ··· 290 289 } 291 290 292 291 int hif_join(struct wfx_vif *wvif, const struct ieee80211_bss_conf *conf, 293 - const struct ieee80211_channel *channel, const u8 *ssidie) 292 + struct ieee80211_channel *channel, const u8 *ssid, int ssidlen) 294 293 { 295 294 int ret; 296 295 struct hif_msg *hif; ··· 308 307 body->basic_rate_set = 309 308 cpu_to_le32(wfx_rate_mask_to_hw(wvif->wdev, conf->basic_rates)); 310 309 memcpy(body->bssid, conf->bssid, sizeof(body->bssid)); 311 - if (!conf->ibss_joined && ssidie) { 312 - body->ssid_length = cpu_to_le32(ssidie[1]); 313 - memcpy(body->ssid, &ssidie[2], ssidie[1]); 310 + if (!conf->ibss_joined && ssid) { 311 + body->ssid_length = cpu_to_le32(ssidlen); 312 + memcpy(body->ssid, ssid, ssidlen); 314 313 } 315 314 wfx_fill_header(hif, wvif->id, HIF_REQ_ID_JOIN, sizeof(*body)); 316 315 ret = wfx_cmd_send(wvif->wdev, hif, NULL, 0, false); ··· 428 427 struct hif_msg *hif; 429 428 struct hif_req_start *body = wfx_alloc_hif(sizeof(*body), &hif); 430 429 431 - body->dtim_period = conf->dtim_period, 432 - body->short_preamble = conf->use_short_preamble, 433 - body->channel_number = cpu_to_le16(channel->hw_value), 430 + body->dtim_period = conf->dtim_period; 431 + body->short_preamble = conf->use_short_preamble; 432 + body->channel_number = cpu_to_le16(channel->hw_value); 434 433 body->beacon_interval = cpu_to_le32(conf->beacon_int); 435 434 body->basic_rate_set = 436 435 cpu_to_le32(wfx_rate_mask_to_hw(wvif->wdev, conf->basic_rates));
+1 -1
drivers/staging/wfx/hif_tx.h
··· 46 46 int chan_start, int chan_num); 47 47 int hif_stop_scan(struct wfx_vif *wvif); 48 48 int hif_join(struct wfx_vif *wvif, const struct ieee80211_bss_conf *conf, 49 - const struct ieee80211_channel *channel, const u8 *ssidie); 49 + struct ieee80211_channel *channel, const u8 *ssid, int ssidlen); 50 50 int hif_set_pm(struct wfx_vif *wvif, bool ps, int dynamic_ps_timeout); 51 51 int hif_set_bss_params(struct wfx_vif *wvif, 52 52 const struct hif_req_set_bss_params *arg);
+10 -5
drivers/staging/wfx/hif_tx_mib.h
··· 191 191 } 192 192 193 193 static inline int hif_set_association_mode(struct wfx_vif *wvif, 194 - struct ieee80211_bss_conf *info, 195 - struct ieee80211_sta_ht_cap *ht_cap) 194 + struct ieee80211_bss_conf *info) 196 195 { 197 196 int basic_rates = wfx_rate_mask_to_hw(wvif->wdev, info->basic_rates); 197 + struct ieee80211_sta *sta = NULL; 198 198 struct hif_mib_set_association_mode val = { 199 199 .preambtype_use = 1, 200 200 .mode = 1, ··· 204 204 .basic_rate_set = cpu_to_le32(basic_rates) 205 205 }; 206 206 207 + rcu_read_lock(); // protect sta 208 + if (info->bssid && !info->ibss_joined) 209 + sta = ieee80211_find_sta(wvif->vif, info->bssid); 210 + 207 211 // FIXME: it is strange to not retrieve all information from bss_info 208 - if (ht_cap && ht_cap->ht_supported) { 209 - val.mpdu_start_spacing = ht_cap->ampdu_density; 212 + if (sta && sta->ht_cap.ht_supported) { 213 + val.mpdu_start_spacing = sta->ht_cap.ampdu_density; 210 214 if (!(info->ht_operation_mode & IEEE80211_HT_OP_MODE_NON_GF_STA_PRSNT)) 211 - val.greenfield = !!(ht_cap->cap & IEEE80211_HT_CAP_GRN_FLD); 215 + val.greenfield = !!(sta->ht_cap.cap & IEEE80211_HT_CAP_GRN_FLD); 212 216 } 217 + rcu_read_unlock(); 213 218 214 219 return hif_write_mib(wvif->wdev, wvif->id, 215 220 HIF_MIB_ID_SET_ASSOCIATION_MODE, &val, sizeof(val));
+15 -10
drivers/staging/wfx/sta.c
··· 491 491 static void wfx_do_join(struct wfx_vif *wvif) 492 492 { 493 493 int ret; 494 - const u8 *ssidie; 495 494 struct ieee80211_bss_conf *conf = &wvif->vif->bss_conf; 496 495 struct cfg80211_bss *bss = NULL; 496 + u8 ssid[IEEE80211_MAX_SSID_LEN]; 497 + const u8 *ssidie = NULL; 498 + int ssidlen = 0; 497 499 498 500 wfx_tx_lock_flush(wvif->wdev); 499 501 ··· 516 514 if (!wvif->beacon_int) 517 515 wvif->beacon_int = 1; 518 516 519 - rcu_read_lock(); 517 + rcu_read_lock(); // protect ssidie 520 518 if (!conf->ibss_joined) 521 519 ssidie = ieee80211_bss_get_ie(bss, WLAN_EID_SSID); 522 - else 523 - ssidie = NULL; 520 + if (ssidie) { 521 + ssidlen = ssidie[1]; 522 + memcpy(ssid, &ssidie[2], ssidie[1]); 523 + } 524 + rcu_read_unlock(); 524 525 525 526 wfx_tx_flush(wvif->wdev); 526 527 ··· 532 527 533 528 wfx_set_mfp(wvif, bss); 534 529 535 - /* Perform actual join */ 536 530 wvif->wdev->tx_burst_idx = -1; 537 - ret = hif_join(wvif, conf, wvif->channel, ssidie); 538 - rcu_read_unlock(); 531 + ret = hif_join(wvif, conf, wvif->channel, ssid, ssidlen); 539 532 if (ret) { 540 533 ieee80211_connection_loss(wvif->vif); 541 534 wvif->join_complete_status = -1; ··· 608 605 int i; 609 606 610 607 for (i = 0; i < ARRAY_SIZE(sta_priv->buffered); i++) 611 - WARN(sta_priv->buffered[i], "release station while Tx is in progress"); 608 + if (sta_priv->buffered[i]) 609 + dev_warn(wvif->wdev->dev, "release station while %d pending frame on queue %d", 610 + sta_priv->buffered[i], i); 612 611 // FIXME: see note in wfx_sta_add() 613 612 if (vif->type == NL80211_IFTYPE_STATION) 614 613 return 0; ··· 694 689 wfx_rate_mask_to_hw(wvif->wdev, sta->supp_rates[wvif->channel->band]); 695 690 else 696 691 wvif->bss_params.operational_rate_set = -1; 692 + rcu_read_unlock(); 697 693 if (sta && 698 694 info->ht_operation_mode & IEEE80211_HT_OP_MODE_NON_GF_STA_PRSNT) 699 695 hif_dual_cts_protection(wvif, true); ··· 707 701 wvif->bss_params.beacon_lost_count = 20; 708 702 wvif->bss_params.aid = 
info->aid; 709 703 710 - hif_set_association_mode(wvif, info, sta ? &sta->ht_cap : NULL); 711 - rcu_read_unlock(); 704 + hif_set_association_mode(wvif, info); 712 705 713 706 if (!info->ibss_joined) { 714 707 hif_keep_alive_period(wvif, 30 /* sec */);
+1 -1
drivers/thunderbolt/switch.c
··· 954 954 ret = tb_port_read(port, &phy, TB_CFG_PORT, 955 955 port->cap_phy + LANE_ADP_CS_0, 1); 956 956 if (ret) 957 - return ret; 957 + return false; 958 958 959 959 widths = (phy & LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK) >> 960 960 LANE_ADP_CS_0_SUPPORTED_WIDTH_SHIFT;
+6 -8
drivers/tty/tty_io.c
··· 1589 1589 tty_debug_hangup(tty, "freeing structure\n"); 1590 1590 /* 1591 1591 * The release_tty function takes care of the details of clearing 1592 - * the slots and preserving the termios structure. The tty_unlock_pair 1593 - * should be safe as we keep a kref while the tty is locked (so the 1594 - * unlock never unlocks a freed tty). 1592 + * the slots and preserving the termios structure. 1595 1593 */ 1596 1594 mutex_lock(&tty_mutex); 1597 1595 tty_port_set_kopened(tty->port, 0); ··· 1619 1621 tty_debug_hangup(tty, "freeing structure\n"); 1620 1622 /* 1621 1623 * The release_tty function takes care of the details of clearing 1622 - * the slots and preserving the termios structure. The tty_unlock_pair 1623 - * should be safe as we keep a kref while the tty is locked (so the 1624 - * unlock never unlocks a freed tty). 1624 + * the slots and preserving the termios structure. 1625 1625 */ 1626 1626 mutex_lock(&tty_mutex); 1627 1627 release_tty(tty, idx); ··· 2730 2734 struct serial_struct32 v32; 2731 2735 struct serial_struct v; 2732 2736 int err; 2733 - memset(&v, 0, sizeof(struct serial_struct)); 2734 2737 2735 - if (!tty->ops->set_serial) 2738 + memset(&v, 0, sizeof(v)); 2739 + memset(&v32, 0, sizeof(v32)); 2740 + 2741 + if (!tty->ops->get_serial) 2736 2742 return -ENOTTY; 2737 2743 err = tty->ops->get_serial(tty, &v); 2738 2744 if (!err) {
+4 -3
drivers/usb/chipidea/udc.c
··· 1530 1530 static void ci_hdrc_gadget_connect(struct usb_gadget *_gadget, int is_active) 1531 1531 { 1532 1532 struct ci_hdrc *ci = container_of(_gadget, struct ci_hdrc, gadget); 1533 - unsigned long flags; 1534 1533 1535 1534 if (is_active) { 1536 1535 pm_runtime_get_sync(&_gadget->dev); 1537 1536 hw_device_reset(ci); 1538 - spin_lock_irqsave(&ci->lock, flags); 1537 + spin_lock_irq(&ci->lock); 1539 1538 if (ci->driver) { 1540 1539 hw_device_state(ci, ci->ep0out->qh.dma); 1541 1540 usb_gadget_set_state(_gadget, USB_STATE_POWERED); 1541 + spin_unlock_irq(&ci->lock); 1542 1542 usb_udc_vbus_handler(_gadget, true); 1543 + } else { 1544 + spin_unlock_irq(&ci->lock); 1543 1545 } 1544 - spin_unlock_irqrestore(&ci->lock, flags); 1545 1546 } else { 1546 1547 usb_udc_vbus_handler(_gadget, false); 1547 1548 if (ci->driver)
+21 -13
drivers/usb/class/cdc-acm.c
··· 896 896 897 897 ss->xmit_fifo_size = acm->writesize; 898 898 ss->baud_base = le32_to_cpu(acm->line.dwDTERate); 899 - ss->close_delay = acm->port.close_delay / 10; 899 + ss->close_delay = jiffies_to_msecs(acm->port.close_delay) / 10; 900 900 ss->closing_wait = acm->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ? 901 901 ASYNC_CLOSING_WAIT_NONE : 902 - acm->port.closing_wait / 10; 902 + jiffies_to_msecs(acm->port.closing_wait) / 10; 903 903 return 0; 904 904 } 905 905 ··· 907 907 { 908 908 struct acm *acm = tty->driver_data; 909 909 unsigned int closing_wait, close_delay; 910 + unsigned int old_closing_wait, old_close_delay; 910 911 int retval = 0; 911 912 912 - close_delay = ss->close_delay * 10; 913 + close_delay = msecs_to_jiffies(ss->close_delay * 10); 913 914 closing_wait = ss->closing_wait == ASYNC_CLOSING_WAIT_NONE ? 914 - ASYNC_CLOSING_WAIT_NONE : ss->closing_wait * 10; 915 + ASYNC_CLOSING_WAIT_NONE : 916 + msecs_to_jiffies(ss->closing_wait * 10); 917 + 918 + /* we must redo the rounding here, so that the values match */ 919 + old_close_delay = jiffies_to_msecs(acm->port.close_delay) / 10; 920 + old_closing_wait = acm->port.closing_wait == ASYNC_CLOSING_WAIT_NONE ? 921 + ASYNC_CLOSING_WAIT_NONE : 922 + jiffies_to_msecs(acm->port.closing_wait) / 10; 915 923 916 924 mutex_lock(&acm->port.mutex); 917 925 918 - if (!capable(CAP_SYS_ADMIN)) { 919 - if ((close_delay != acm->port.close_delay) || 920 - (closing_wait != acm->port.closing_wait)) 926 + if ((ss->close_delay != old_close_delay) || 927 + (ss->closing_wait != old_closing_wait)) { 928 + if (!capable(CAP_SYS_ADMIN)) 921 929 retval = -EPERM; 922 - else 923 - retval = -EOPNOTSUPP; 924 - } else { 925 - acm->port.close_delay = close_delay; 926 - acm->port.closing_wait = closing_wait; 927 - } 930 + else { 931 + acm->port.close_delay = close_delay; 932 + acm->port.closing_wait = closing_wait; 933 + } 934 + } else 935 + retval = -EOPNOTSUPP; 928 936 929 937 mutex_unlock(&acm->port.mutex); 930 938 return retval;
+6
drivers/usb/core/quirks.c
··· 378 378 { USB_DEVICE(0x0b05, 0x17e0), .driver_info = 379 379 USB_QUIRK_IGNORE_REMOTE_WAKEUP }, 380 380 381 + /* Realtek hub in Dell WD19 (Type-C) */ 382 + { USB_DEVICE(0x0bda, 0x0487), .driver_info = USB_QUIRK_NO_LPM }, 383 + 384 + /* Generic RTL8153 based ethernet adapters */ 385 + { USB_DEVICE(0x0bda, 0x8153), .driver_info = USB_QUIRK_NO_LPM }, 386 + 381 387 /* Action Semiconductor flash disk */ 382 388 { USB_DEVICE(0x10d6, 0x2200), .driver_info = 383 389 USB_QUIRK_STRING_FETCH_255 },
+2 -1
drivers/usb/host/xhci-pci.c
··· 136 136 xhci->quirks |= XHCI_AMD_PLL_FIX; 137 137 138 138 if (pdev->vendor == PCI_VENDOR_ID_AMD && 139 - (pdev->device == 0x15e0 || 139 + (pdev->device == 0x145c || 140 + pdev->device == 0x15e0 || 140 141 pdev->device == 0x15e1 || 141 142 pdev->device == 0x43bb)) 142 143 xhci->quirks |= XHCI_SUSPEND_DELAY;
+1
drivers/usb/host/xhci-plat.c
··· 445 445 static struct platform_driver usb_xhci_driver = { 446 446 .probe = xhci_plat_probe, 447 447 .remove = xhci_plat_remove, 448 + .shutdown = usb_hcd_platform_shutdown, 448 449 .driver = { 449 450 .name = "xhci-hcd", 450 451 .pm = &xhci_plat_pm_ops,
+6 -17
drivers/usb/host/xhci-trace.h
··· 289 289 ), 290 290 TP_printk("ep%d%s-%s: urb %p pipe %u slot %d length %d/%d sgs %d/%d stream %d flags %08x", 291 291 __entry->epnum, __entry->dir_in ? "in" : "out", 292 - ({ char *s; 293 - switch (__entry->type) { 294 - case USB_ENDPOINT_XFER_INT: 295 - s = "intr"; 296 - break; 297 - case USB_ENDPOINT_XFER_CONTROL: 298 - s = "control"; 299 - break; 300 - case USB_ENDPOINT_XFER_BULK: 301 - s = "bulk"; 302 - break; 303 - case USB_ENDPOINT_XFER_ISOC: 304 - s = "isoc"; 305 - break; 306 - default: 307 - s = "UNKNOWN"; 308 - } s; }), __entry->urb, __entry->pipe, __entry->slot_id, 292 + __print_symbolic(__entry->type, 293 + { USB_ENDPOINT_XFER_INT, "intr" }, 294 + { USB_ENDPOINT_XFER_CONTROL, "control" }, 295 + { USB_ENDPOINT_XFER_BULK, "bulk" }, 296 + { USB_ENDPOINT_XFER_ISOC, "isoc" }), 297 + __entry->urb, __entry->pipe, __entry->slot_id, 309 298 __entry->actual, __entry->length, __entry->num_mapped_sgs, 310 299 __entry->num_sgs, __entry->stream, __entry->flags 311 300 )
+2
drivers/usb/serial/option.c
··· 1183 1183 .driver_info = NCTRL(0) }, 1184 1184 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x110a, 0xff), /* Telit ME910G1 */ 1185 1185 .driver_info = NCTRL(0) | RSVD(3) }, 1186 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x110b, 0xff), /* Telit ME910G1 (ECM) */ 1187 + .driver_info = NCTRL(0) }, 1186 1188 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910), 1187 1189 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, 1188 1190 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910_USBCFG4),
+1
drivers/usb/serial/pl2303.c
··· 99 99 { USB_DEVICE(SUPERIAL_VENDOR_ID, SUPERIAL_PRODUCT_ID) }, 100 100 { USB_DEVICE(HP_VENDOR_ID, HP_LD220_PRODUCT_ID) }, 101 101 { USB_DEVICE(HP_VENDOR_ID, HP_LD220TA_PRODUCT_ID) }, 102 + { USB_DEVICE(HP_VENDOR_ID, HP_LD381_PRODUCT_ID) }, 102 103 { USB_DEVICE(HP_VENDOR_ID, HP_LD960_PRODUCT_ID) }, 103 104 { USB_DEVICE(HP_VENDOR_ID, HP_LD960TA_PRODUCT_ID) }, 104 105 { USB_DEVICE(HP_VENDOR_ID, HP_LCM220_PRODUCT_ID) },
+1
drivers/usb/serial/pl2303.h
··· 130 130 #define HP_LM920_PRODUCT_ID 0x026b 131 131 #define HP_TD620_PRODUCT_ID 0x0956 132 132 #define HP_LD960_PRODUCT_ID 0x0b39 133 + #define HP_LD381_PRODUCT_ID 0x0f7f 133 134 #define HP_LCM220_PRODUCT_ID 0x3139 134 135 #define HP_LCM960_PRODUCT_ID 0x3239 135 136 #define HP_LD220_PRODUCT_ID 0x3524
+11 -1
drivers/usb/typec/ucsi/displayport.c
··· 271 271 return; 272 272 273 273 dp = typec_altmode_get_drvdata(alt); 274 + if (!dp) 275 + return; 276 + 274 277 dp->data.conf = 0; 275 278 dp->data.status = 0; 276 279 dp->initialized = false; ··· 288 285 struct typec_altmode *alt; 289 286 struct ucsi_dp *dp; 290 287 288 + mutex_lock(&con->lock); 289 + 291 290 /* We can't rely on the firmware with the capabilities. */ 292 291 desc->vdo |= DP_CAP_DP_SIGNALING | DP_CAP_RECEPTACLE; 293 292 ··· 298 293 desc->vdo |= all_assignments << 16; 299 294 300 295 alt = typec_port_register_altmode(con->port, desc); 301 - if (IS_ERR(alt)) 296 + if (IS_ERR(alt)) { 297 + mutex_unlock(&con->lock); 302 298 return alt; 299 + } 303 300 304 301 dp = devm_kzalloc(&alt->dev, sizeof(*dp), GFP_KERNEL); 305 302 if (!dp) { 306 303 typec_unregister_altmode(alt); 304 + mutex_unlock(&con->lock); 307 305 return ERR_PTR(-ENOMEM); 308 306 } 309 307 ··· 318 310 319 311 alt->ops = &ucsi_displayport_ops; 320 312 typec_altmode_set_drvdata(alt, dp); 313 + 314 + mutex_unlock(&con->lock); 321 315 322 316 return alt; 323 317 }
+2
drivers/watchdog/iTCO_vendor.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* iTCO Vendor Specific Support hooks */ 3 3 #ifdef CONFIG_ITCO_VENDOR_SUPPORT 4 + extern int iTCO_vendorsupport; 4 5 extern void iTCO_vendor_pre_start(struct resource *, unsigned int); 5 6 extern void iTCO_vendor_pre_stop(struct resource *); 6 7 extern int iTCO_vendor_check_noreboot_on(void); 7 8 #else 9 + #define iTCO_vendorsupport 0 8 10 #define iTCO_vendor_pre_start(acpibase, heartbeat) {} 9 11 #define iTCO_vendor_pre_stop(acpibase) {} 10 12 #define iTCO_vendor_check_noreboot_on() 1
+9 -7
drivers/watchdog/iTCO_vendor_support.c
··· 39 39 /* Broken BIOS */ 40 40 #define BROKEN_BIOS 911 41 41 42 - static int vendorsupport; 43 - module_param(vendorsupport, int, 0); 42 + int iTCO_vendorsupport; 43 + EXPORT_SYMBOL(iTCO_vendorsupport); 44 + 45 + module_param_named(vendorsupport, iTCO_vendorsupport, int, 0); 44 46 MODULE_PARM_DESC(vendorsupport, "iTCO vendor specific support mode, default=" 45 47 "0 (none), 1=SuperMicro Pent3, 911=Broken SMI BIOS"); 46 48 ··· 154 152 void iTCO_vendor_pre_start(struct resource *smires, 155 153 unsigned int heartbeat) 156 154 { 157 - switch (vendorsupport) { 155 + switch (iTCO_vendorsupport) { 158 156 case SUPERMICRO_OLD_BOARD: 159 157 supermicro_old_pre_start(smires); 160 158 break; ··· 167 165 168 166 void iTCO_vendor_pre_stop(struct resource *smires) 169 167 { 170 - switch (vendorsupport) { 168 + switch (iTCO_vendorsupport) { 171 169 case SUPERMICRO_OLD_BOARD: 172 170 supermicro_old_pre_stop(smires); 173 171 break; ··· 180 178 181 179 int iTCO_vendor_check_noreboot_on(void) 182 180 { 183 - switch (vendorsupport) { 181 + switch (iTCO_vendorsupport) { 184 182 case SUPERMICRO_OLD_BOARD: 185 183 return 0; 186 184 default: ··· 191 189 192 190 static int __init iTCO_vendor_init_module(void) 193 191 { 194 - if (vendorsupport == SUPERMICRO_NEW_BOARD) { 192 + if (iTCO_vendorsupport == SUPERMICRO_NEW_BOARD) { 195 193 pr_warn("Option vendorsupport=%d is no longer supported, " 196 194 "please use the w83627hf_wdt driver instead\n", 197 195 SUPERMICRO_NEW_BOARD); 198 196 return -EINVAL; 199 197 } 200 - pr_info("vendor-support=%d\n", vendorsupport); 198 + pr_info("vendor-support=%d\n", iTCO_vendorsupport); 201 199 return 0; 202 200 } 203 201
+16 -12
drivers/watchdog/iTCO_wdt.c
··· 459 459 if (!p->tco_res) 460 460 return -ENODEV; 461 461 462 - p->smi_res = platform_get_resource(pdev, IORESOURCE_IO, ICH_RES_IO_SMI); 463 - if (!p->smi_res) 464 - return -ENODEV; 465 - 466 462 p->iTCO_version = pdata->version; 467 463 p->pci_dev = to_pci_dev(dev->parent); 464 + 465 + p->smi_res = platform_get_resource(pdev, IORESOURCE_IO, ICH_RES_IO_SMI); 466 + if (p->smi_res) { 467 + /* The TCO logic uses the TCO_EN bit in the SMI_EN register */ 468 + if (!devm_request_region(dev, p->smi_res->start, 469 + resource_size(p->smi_res), 470 + pdev->name)) { 471 + pr_err("I/O address 0x%04llx already in use, device disabled\n", 472 + (u64)SMI_EN(p)); 473 + return -EBUSY; 474 + } 475 + } else if (iTCO_vendorsupport || 476 + turn_SMI_watchdog_clear_off >= p->iTCO_version) { 477 + pr_err("SMI I/O resource is missing\n"); 478 + return -ENODEV; 479 + } 468 480 469 481 iTCO_wdt_no_reboot_bit_setup(p, pdata); 470 482 ··· 504 492 /* Set the NO_REBOOT bit to prevent later reboots, just for sure */ 505 493 p->update_no_reboot_bit(p->no_reboot_priv, true); 506 494 507 - /* The TCO logic uses the TCO_EN bit in the SMI_EN register */ 508 - if (!devm_request_region(dev, p->smi_res->start, 509 - resource_size(p->smi_res), 510 - pdev->name)) { 511 - pr_err("I/O address 0x%04llx already in use, device disabled\n", 512 - (u64)SMI_EN(p)); 513 - return -EBUSY; 514 - } 515 495 if (turn_SMI_watchdog_clear_off >= p->iTCO_version) { 516 496 /* 517 497 * Bit 13: TCO_EN -> 0
+1 -1
fs/afs/addr_list.c
··· 19 19 void afs_put_addrlist(struct afs_addr_list *alist) 20 20 { 21 21 if (alist && refcount_dec_and_test(&alist->usage)) 22 - call_rcu(&alist->rcu, (rcu_callback_t)kfree); 22 + kfree_rcu(alist, rcu); 23 23 } 24 24 25 25 /*
+12 -2
fs/afs/cmservice.c
··· 244 244 } 245 245 246 246 /* 247 + * Abort a service call from within an action function. 248 + */ 249 + static void afs_abort_service_call(struct afs_call *call, u32 abort_code, int error, 250 + const char *why) 251 + { 252 + rxrpc_kernel_abort_call(call->net->socket, call->rxcall, 253 + abort_code, error, why); 254 + afs_set_call_complete(call, error, 0); 255 + } 256 + 257 + /* 247 258 * The server supplied a list of callbacks that it wanted to break. 248 259 */ 249 260 static void SRXAFSCB_CallBack(struct work_struct *work) ··· 521 510 if (memcmp(r, &call->net->uuid, sizeof(call->net->uuid)) == 0) 522 511 afs_send_empty_reply(call); 523 512 else 524 - rxrpc_kernel_abort_call(call->net->socket, call->rxcall, 525 - 1, 1, "K-1"); 513 + afs_abort_service_call(call, 1, 1, "K-1"); 526 514 527 515 afs_put_call(call); 528 516 _leave("");
+11 -3
fs/afs/internal.h
··· 81 81 * List of server addresses. 82 82 */ 83 83 struct afs_addr_list { 84 - struct rcu_head rcu; /* Must be first */ 84 + struct rcu_head rcu; 85 85 refcount_t usage; 86 86 u32 version; /* Version */ 87 87 unsigned char max_addrs; ··· 154 154 }; 155 155 unsigned char unmarshall; /* unmarshalling phase */ 156 156 unsigned char addr_ix; /* Address in ->alist */ 157 - bool incoming; /* T if incoming call */ 157 + bool drop_ref; /* T if need to drop ref for incoming call */ 158 158 bool send_pages; /* T if data from mapping should be sent */ 159 159 bool need_attention; /* T if RxRPC poked us */ 160 160 bool async; /* T if asynchronous */ ··· 1209 1209 ok = true; 1210 1210 } 1211 1211 spin_unlock_bh(&call->state_lock); 1212 - if (ok) 1212 + if (ok) { 1213 1213 trace_afs_call_done(call); 1214 + 1215 + /* Asynchronous calls have two refs to release - one from the alloc and 1216 + * one queued with the work item - and we can't just deallocate the 1217 + * call because the work item may be queued again. 1218 + */ 1219 + if (call->drop_ref) 1220 + afs_put_call(call); 1221 + } 1214 1222 } 1215 1223 1216 1224 /*
+10 -64
fs/afs/rxrpc.c
··· 18 18 19 19 static void afs_wake_up_call_waiter(struct sock *, struct rxrpc_call *, unsigned long); 20 20 static void afs_wake_up_async_call(struct sock *, struct rxrpc_call *, unsigned long); 21 - static void afs_delete_async_call(struct work_struct *); 22 21 static void afs_process_async_call(struct work_struct *); 23 22 static void afs_rx_new_call(struct sock *, struct rxrpc_call *, unsigned long); 24 23 static void afs_rx_discard_new_call(struct rxrpc_call *, unsigned long); ··· 168 169 int n = atomic_dec_return(&call->usage); 169 170 int o = atomic_read(&net->nr_outstanding_calls); 170 171 171 - trace_afs_call(call, afs_call_trace_put, n + 1, o, 172 + trace_afs_call(call, afs_call_trace_put, n, o, 172 173 __builtin_return_address(0)); 173 174 174 175 ASSERTCMP(n, >=, 0); ··· 401 402 /* If the call is going to be asynchronous, we need an extra ref for 402 403 * the call to hold itself so the caller need not hang on to its ref. 403 404 */ 404 - if (call->async) 405 + if (call->async) { 405 406 afs_get_call(call, afs_call_trace_get); 407 + call->drop_ref = true; 408 + } 406 409 407 410 /* create a call */ 408 411 rxcall = rxrpc_kernel_begin_call(call->net->socket, srx, call->key, ··· 414 413 afs_wake_up_async_call : 415 414 afs_wake_up_call_waiter), 416 415 call->upgrade, 417 - call->intr, 416 + (call->intr ? 
RXRPC_PREINTERRUPTIBLE : 417 + RXRPC_UNINTERRUPTIBLE), 418 418 call->debug_id); 419 419 if (IS_ERR(rxcall)) { 420 420 ret = PTR_ERR(rxcall); ··· 586 584 done: 587 585 if (call->type->done) 588 586 call->type->done(call); 589 - if (state == AFS_CALL_COMPLETE && call->incoming) 590 - afs_put_call(call); 591 587 out: 592 588 _leave(""); 593 589 return; ··· 604 604 long afs_wait_for_call_to_complete(struct afs_call *call, 605 605 struct afs_addr_cursor *ac) 606 606 { 607 - signed long rtt2, timeout; 608 607 long ret; 609 - bool stalled = false; 610 - u64 rtt; 611 - u32 life, last_life; 612 608 bool rxrpc_complete = false; 613 609 614 610 DECLARE_WAITQUEUE(myself, current); ··· 614 618 ret = call->error; 615 619 if (ret < 0) 616 620 goto out; 617 - 618 - rtt = rxrpc_kernel_get_rtt(call->net->socket, call->rxcall); 619 - rtt2 = nsecs_to_jiffies64(rtt) * 2; 620 - if (rtt2 < 2) 621 - rtt2 = 2; 622 - 623 - timeout = rtt2; 624 - rxrpc_kernel_check_life(call->net->socket, call->rxcall, &last_life); 625 621 626 622 add_wait_queue(&call->waitq, &myself); 627 623 for (;;) { ··· 625 637 call->need_attention = false; 626 638 __set_current_state(TASK_RUNNING); 627 639 afs_deliver_to_call(call); 628 - timeout = rtt2; 629 640 continue; 630 641 } 631 642 632 643 if (afs_check_call_state(call, AFS_CALL_COMPLETE)) 633 644 break; 634 645 635 - if (!rxrpc_kernel_check_life(call->net->socket, call->rxcall, &life)) { 646 + if (!rxrpc_kernel_check_life(call->net->socket, call->rxcall)) { 636 647 /* rxrpc terminated the call. 
*/ 637 648 rxrpc_complete = true; 638 649 break; 639 650 } 640 651 641 - if (call->intr && timeout == 0 && 642 - life == last_life && signal_pending(current)) { 643 - if (stalled) 644 - break; 645 - __set_current_state(TASK_RUNNING); 646 - rxrpc_kernel_probe_life(call->net->socket, call->rxcall); 647 - timeout = rtt2; 648 - stalled = true; 649 - continue; 650 - } 651 - 652 - if (life != last_life) { 653 - timeout = rtt2; 654 - last_life = life; 655 - stalled = false; 656 - } 657 - 658 - timeout = schedule_timeout(timeout); 652 + schedule(); 659 653 } 660 654 661 655 remove_wait_queue(&call->waitq, &myself); ··· 705 735 706 736 u = atomic_fetch_add_unless(&call->usage, 1, 0); 707 737 if (u != 0) { 708 - trace_afs_call(call, afs_call_trace_wake, u, 738 + trace_afs_call(call, afs_call_trace_wake, u + 1, 709 739 atomic_read(&call->net->nr_outstanding_calls), 710 740 __builtin_return_address(0)); 711 741 712 742 if (!queue_work(afs_async_calls, &call->async_work)) 713 743 afs_put_call(call); 714 744 } 715 - } 716 - 717 - /* 718 - * Delete an asynchronous call. The work item carries a ref to the call struct 719 - * that we need to release. 720 - */ 721 - static void afs_delete_async_call(struct work_struct *work) 722 - { 723 - struct afs_call *call = container_of(work, struct afs_call, async_work); 724 - 725 - _enter(""); 726 - 727 - afs_put_call(call); 728 - 729 - _leave(""); 730 745 } 731 746 732 747 /* ··· 727 772 if (call->state < AFS_CALL_COMPLETE && call->need_attention) { 728 773 call->need_attention = false; 729 774 afs_deliver_to_call(call); 730 - } 731 - 732 - if (call->state == AFS_CALL_COMPLETE) { 733 - /* We have two refs to release - one from the alloc and one 734 - * queued with the work item - and we can't just deallocate the 735 - * call because the work item may be queued again. 
736 - */ 737 - call->async_work.func = afs_delete_async_call; 738 - if (!queue_work(afs_async_calls, &call->async_work)) 739 - afs_put_call(call); 740 775 } 741 776 742 777 afs_put_call(call); ··· 755 810 if (!call) 756 811 break; 757 812 813 + call->drop_ref = true; 758 814 call->async = true; 759 815 call->state = AFS_CALL_SV_AWAIT_OP_ID; 760 816 init_waitqueue_head(&call->waitq);
+2 -2
fs/btrfs/block-group.c
··· 856 856 found_raid1c34 = true; 857 857 up_read(&sinfo->groups_sem); 858 858 } 859 - if (found_raid56) 859 + if (!found_raid56) 860 860 btrfs_clear_fs_incompat(fs_info, RAID56); 861 - if (found_raid1c34) 861 + if (!found_raid1c34) 862 862 btrfs_clear_fs_incompat(fs_info, RAID1C34); 863 863 } 864 864 }
+4
fs/btrfs/inode.c
··· 9496 9496 ret = btrfs_sync_log(trans, BTRFS_I(old_inode)->root, &ctx); 9497 9497 if (ret) 9498 9498 commit_transaction = true; 9499 + } else if (sync_log) { 9500 + mutex_lock(&root->log_mutex); 9501 + list_del(&ctx.list); 9502 + mutex_unlock(&root->log_mutex); 9499 9503 } 9500 9504 if (commit_transaction) { 9501 9505 ret = btrfs_commit_transaction(trans);
+2 -1
fs/cifs/file.c
··· 1169 1169 rc = posix_lock_file(file, flock, NULL); 1170 1170 up_write(&cinode->lock_sem); 1171 1171 if (rc == FILE_LOCK_DEFERRED) { 1172 - rc = wait_event_interruptible(flock->fl_wait, !flock->fl_blocker); 1172 + rc = wait_event_interruptible(flock->fl_wait, 1173 + list_empty(&flock->fl_blocked_member)); 1173 1174 if (!rc) 1174 1175 goto try_again; 1175 1176 locks_delete_block(flock);
+1 -1
fs/cifs/inode.c
··· 2191 2191 if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_OVERR_GID)) 2192 2192 stat->gid = current_fsgid(); 2193 2193 } 2194 - return rc; 2194 + return 0; 2195 2195 } 2196 2196 2197 2197 int cifs_fiemap(struct inode *inode, struct fiemap_extent_info *fei, u64 start,
+3 -1
fs/cifs/smb2ops.c
··· 2222 2222 goto qdf_free; 2223 2223 } 2224 2224 2225 + atomic_inc(&tcon->num_remote_opens); 2226 + 2225 2227 qd_rsp = (struct smb2_query_directory_rsp *)rsp_iov[1].iov_base; 2226 2228 if (qd_rsp->sync_hdr.Status == STATUS_NO_MORE_FILES) { 2227 2229 trace_smb3_query_dir_done(xid, fid->persistent_fid, ··· 3419 3417 if (rc) 3420 3418 goto out; 3421 3419 3422 - if (out_data_len < sizeof(struct file_allocated_range_buffer)) { 3420 + if (out_data_len && out_data_len < sizeof(struct file_allocated_range_buffer)) { 3423 3421 rc = -EINVAL; 3424 3422 goto out; 3425 3423 }
+4 -4
fs/eventpoll.c
··· 1854 1854 waiter = true; 1855 1855 init_waitqueue_entry(&wait, current); 1856 1856 1857 - spin_lock_irq(&ep->wq.lock); 1857 + write_lock_irq(&ep->lock); 1858 1858 __add_wait_queue_exclusive(&ep->wq, &wait); 1859 - spin_unlock_irq(&ep->wq.lock); 1859 + write_unlock_irq(&ep->lock); 1860 1860 } 1861 1861 1862 1862 for (;;) { ··· 1904 1904 goto fetch_events; 1905 1905 1906 1906 if (waiter) { 1907 - spin_lock_irq(&ep->wq.lock); 1907 + write_lock_irq(&ep->lock); 1908 1908 __remove_wait_queue(&ep->wq, &wait); 1909 - spin_unlock_irq(&ep->wq.lock); 1909 + write_unlock_irq(&ep->lock); 1910 1910 } 1911 1911 1912 1912 return res;
+6 -1
fs/file.c
··· 540 540 return __alloc_fd(current->files, start, rlimit(RLIMIT_NOFILE), flags); 541 541 } 542 542 543 + int __get_unused_fd_flags(unsigned flags, unsigned long nofile) 544 + { 545 + return __alloc_fd(current->files, 0, nofile, flags); 546 + } 547 + 543 548 int get_unused_fd_flags(unsigned flags) 544 549 { 545 - return __alloc_fd(current->files, 0, rlimit(RLIMIT_NOFILE), flags); 550 + return __get_unused_fd_flags(flags, rlimit(RLIMIT_NOFILE)); 546 551 } 547 552 EXPORT_SYMBOL(get_unused_fd_flags); 548 553
+3 -3
fs/fuse/dev.c
··· 276 276 void fuse_request_end(struct fuse_conn *fc, struct fuse_req *req) 277 277 { 278 278 struct fuse_iqueue *fiq = &fc->iq; 279 - bool async; 280 279 281 280 if (test_and_set_bit(FR_FINISHED, &req->flags)) 282 281 goto put_request; 283 282 284 - async = req->args->end; 285 283 /* 286 284 * test_and_set_bit() implies smp_mb() between bit 287 285 * changing and below intr_entry check. Pairs with ··· 322 324 wake_up(&req->waitq); 323 325 } 324 326 325 - if (async) 327 + if (test_bit(FR_ASYNC, &req->flags)) 326 328 req->args->end(fc, req->args, req->out.h.error); 327 329 put_request: 328 330 fuse_put_request(fc, req); ··· 469 471 req->in.h.opcode = args->opcode; 470 472 req->in.h.nodeid = args->nodeid; 471 473 req->args = args; 474 + if (args->end) 475 + __set_bit(FR_ASYNC, &req->flags); 472 476 } 473 477 474 478 ssize_t fuse_simple_request(struct fuse_conn *fc, struct fuse_args *args)
+2
fs/fuse/fuse_i.h
··· 301 301 * FR_SENT: request is in userspace, waiting for an answer 302 302 * FR_FINISHED: request is finished 303 303 * FR_PRIVATE: request is on private list 304 + * FR_ASYNC: request is asynchronous 304 305 */ 305 306 enum fuse_req_flag { 306 307 FR_ISREPLY, ··· 315 314 FR_SENT, 316 315 FR_FINISHED, 317 316 FR_PRIVATE, 317 + FR_ASYNC, 318 318 }; 319 319 320 320 /**
+1
fs/inode.c
··· 138 138 inode->i_sb = sb; 139 139 inode->i_blkbits = sb->s_blocksize_bits; 140 140 inode->i_flags = 0; 141 + atomic64_set(&inode->i_sequence, 0); 141 142 atomic_set(&inode->i_count, 1); 142 143 inode->i_op = &empty_iops; 143 144 inode->i_fop = &no_open_fops;
+30 -19
fs/io_uring.c
··· 191 191 struct llist_head put_llist; 192 192 struct work_struct ref_work; 193 193 struct completion done; 194 - struct rcu_head rcu; 195 194 }; 196 195 197 196 struct io_ring_ctx { ··· 343 344 struct sockaddr __user *addr; 344 345 int __user *addr_len; 345 346 int flags; 347 + unsigned long nofile; 346 348 }; 347 349 348 350 struct io_sync { ··· 398 398 struct filename *filename; 399 399 struct statx __user *buffer; 400 400 struct open_how how; 401 + unsigned long nofile; 401 402 }; 402 403 403 404 struct io_files_update { ··· 2579 2578 return ret; 2580 2579 } 2581 2580 2581 + req->open.nofile = rlimit(RLIMIT_NOFILE); 2582 2582 req->flags |= REQ_F_NEED_CLEANUP; 2583 2583 return 0; 2584 2584 } ··· 2621 2619 return ret; 2622 2620 } 2623 2621 2622 + req->open.nofile = rlimit(RLIMIT_NOFILE); 2624 2623 req->flags |= REQ_F_NEED_CLEANUP; 2625 2624 return 0; 2626 2625 } ··· 2640 2637 if (ret) 2641 2638 goto err; 2642 2639 2643 - ret = get_unused_fd_flags(req->open.how.flags); 2640 + ret = __get_unused_fd_flags(req->open.how.flags, req->open.nofile); 2644 2641 if (ret < 0) 2645 2642 goto err; 2646 2643 ··· 3325 3322 accept->addr = u64_to_user_ptr(READ_ONCE(sqe->addr)); 3326 3323 accept->addr_len = u64_to_user_ptr(READ_ONCE(sqe->addr2)); 3327 3324 accept->flags = READ_ONCE(sqe->accept_flags); 3325 + accept->nofile = rlimit(RLIMIT_NOFILE); 3328 3326 return 0; 3329 3327 #else 3330 3328 return -EOPNOTSUPP; ··· 3342 3338 3343 3339 file_flags = force_nonblock ? 
O_NONBLOCK : 0; 3344 3340 ret = __sys_accept4_file(req->file, file_flags, accept->addr, 3345 - accept->addr_len, accept->flags); 3341 + accept->addr_len, accept->flags, 3342 + accept->nofile); 3346 3343 if (ret == -EAGAIN && force_nonblock) 3347 3344 return -EAGAIN; 3348 3345 if (ret == -ERESTARTSYS) ··· 4137 4132 { 4138 4133 ssize_t ret = 0; 4139 4134 4135 + if (!sqe) 4136 + return 0; 4137 + 4140 4138 if (io_op_defs[req->opcode].file_table) { 4141 4139 ret = io_grab_files(req); 4142 4140 if (unlikely(ret)) ··· 4916 4908 if (sqe_flags & (IOSQE_IO_LINK|IOSQE_IO_HARDLINK)) { 4917 4909 req->flags |= REQ_F_LINK; 4918 4910 INIT_LIST_HEAD(&req->link_list); 4911 + 4912 + if (io_alloc_async_ctx(req)) { 4913 + ret = -EAGAIN; 4914 + goto err_req; 4915 + } 4919 4916 ret = io_req_defer_prep(req, sqe); 4920 4917 if (ret) 4921 4918 req->flags |= REQ_F_FAIL_LINK; ··· 5344 5331 complete(&data->done); 5345 5332 } 5346 5333 5347 - static void __io_file_ref_exit_and_free(struct rcu_head *rcu) 5334 + static void io_file_ref_exit_and_free(struct work_struct *work) 5348 5335 { 5349 - struct fixed_file_data *data = container_of(rcu, struct fixed_file_data, 5350 - rcu); 5336 + struct fixed_file_data *data; 5337 + 5338 + data = container_of(work, struct fixed_file_data, ref_work); 5339 + 5340 + /* 5341 + * Ensure any percpu-ref atomic switch callback has run, it could have 5342 + * been in progress when the files were being unregistered. Once 5343 + * that's done, we can safely exit and free the ref and containing 5344 + * data structure. 5345 + */ 5346 + rcu_barrier(); 5351 5347 percpu_ref_exit(&data->refs); 5352 5348 kfree(data); 5353 - } 5354 - 5355 - static void io_file_ref_exit_and_free(struct rcu_head *rcu) 5356 - { 5357 - /* 5358 - * We need to order our exit+free call against the potentially 5359 - * existing call_rcu() for switching to atomic. 
One way to do that 5360 - * is to have this rcu callback queue the final put and free, as we 5361 - * could otherwise have a pre-existing atomic switch complete _after_ 5362 - * the free callback we queued. 5363 - */ 5364 - call_rcu(rcu, __io_file_ref_exit_and_free); 5365 5349 } 5366 5350 5367 5351 static int io_sqe_files_unregister(struct io_ring_ctx *ctx) ··· 5379 5369 for (i = 0; i < nr_tables; i++) 5380 5370 kfree(data->table[i].files); 5381 5371 kfree(data->table); 5382 - call_rcu(&data->rcu, io_file_ref_exit_and_free); 5372 + INIT_WORK(&data->ref_work, io_file_ref_exit_and_free); 5373 + queue_work(system_wq, &data->ref_work); 5383 5374 ctx->file_data = NULL; 5384 5375 ctx->nr_user_files = 0; 5385 5376 return 0;
+48 -6
fs/locks.c
··· 725 725 { 726 726 locks_delete_global_blocked(waiter); 727 727 list_del_init(&waiter->fl_blocked_member); 728 - waiter->fl_blocker = NULL; 729 728 } 730 729 731 730 static void __locks_wake_up_blocks(struct file_lock *blocker) ··· 739 740 waiter->fl_lmops->lm_notify(waiter); 740 741 else 741 742 wake_up(&waiter->fl_wait); 743 + 744 + /* 745 + * The setting of fl_blocker to NULL marks the "done" 746 + * point in deleting a block. Paired with acquire at the top 747 + * of locks_delete_block(). 748 + */ 749 + smp_store_release(&waiter->fl_blocker, NULL); 742 750 } 743 751 } 744 752 ··· 759 753 { 760 754 int status = -ENOENT; 761 755 756 + /* 757 + * If fl_blocker is NULL, it won't be set again as this thread "owns" 758 + * the lock and is the only one that might try to claim the lock. 759 + * 760 + * We use acquire/release to manage fl_blocker so that we can 761 + * optimize away taking the blocked_lock_lock in many cases. 762 + * 763 + * The smp_load_acquire guarantees two things: 764 + * 765 + * 1/ that fl_blocked_requests can be tested locklessly. If something 766 + * was recently added to that list it must have been in a locked region 767 + * *before* the locked region when fl_blocker was set to NULL. 768 + * 769 + * 2/ that no other thread is accessing 'waiter', so it is safe to free 770 + * it. __locks_wake_up_blocks is careful not to touch waiter after 771 + * fl_blocker is released. 772 + * 773 + * If a lockless check of fl_blocker shows it to be NULL, we know that 774 + * no new locks can be inserted into its fl_blocked_requests list, and 775 + * can avoid doing anything further if the list is empty. 
776 + */ 777 + if (!smp_load_acquire(&waiter->fl_blocker) && 778 + list_empty(&waiter->fl_blocked_requests)) 779 + return status; 780 + 762 781 spin_lock(&blocked_lock_lock); 763 782 if (waiter->fl_blocker) 764 783 status = 0; 765 784 __locks_wake_up_blocks(waiter); 766 785 __locks_delete_block(waiter); 786 + 787 + /* 788 + * The setting of fl_blocker to NULL marks the "done" point in deleting 789 + * a block. Paired with acquire at the top of this function. 790 + */ 791 + smp_store_release(&waiter->fl_blocker, NULL); 767 792 spin_unlock(&blocked_lock_lock); 768 793 return status; 769 794 } ··· 1387 1350 error = posix_lock_inode(inode, fl, NULL); 1388 1351 if (error != FILE_LOCK_DEFERRED) 1389 1352 break; 1390 - error = wait_event_interruptible(fl->fl_wait, !fl->fl_blocker); 1353 + error = wait_event_interruptible(fl->fl_wait, 1354 + list_empty(&fl->fl_blocked_member)); 1391 1355 if (error) 1392 1356 break; 1393 1357 } ··· 1473 1435 error = posix_lock_inode(inode, &fl, NULL); 1474 1436 if (error != FILE_LOCK_DEFERRED) 1475 1437 break; 1476 - error = wait_event_interruptible(fl.fl_wait, !fl.fl_blocker); 1438 + error = wait_event_interruptible(fl.fl_wait, 1439 + list_empty(&fl.fl_blocked_member)); 1477 1440 if (!error) { 1478 1441 /* 1479 1442 * If we've been sleeping someone might have ··· 1677 1638 1678 1639 locks_dispose_list(&dispose); 1679 1640 error = wait_event_interruptible_timeout(new_fl->fl_wait, 1680 - !new_fl->fl_blocker, break_time); 1641 + list_empty(&new_fl->fl_blocked_member), 1642 + break_time); 1681 1643 1682 1644 percpu_down_read(&file_rwsem); 1683 1645 spin_lock(&ctx->flc_lock); ··· 2162 2122 error = flock_lock_inode(inode, fl); 2163 2123 if (error != FILE_LOCK_DEFERRED) 2164 2124 break; 2165 - error = wait_event_interruptible(fl->fl_wait, !fl->fl_blocker); 2125 + error = wait_event_interruptible(fl->fl_wait, 2126 + list_empty(&fl->fl_blocked_member)); 2166 2127 if (error) 2167 2128 break; 2168 2129 } ··· 2440 2399 error = vfs_lock_file(filp, cmd, 
fl, NULL); 2441 2400 if (error != FILE_LOCK_DEFERRED) 2442 2401 break; 2443 - error = wait_event_interruptible(fl->fl_wait, !fl->fl_blocker); 2402 + error = wait_event_interruptible(fl->fl_wait, 2403 + list_empty(&fl->fl_blocked_member)); 2444 2404 if (error) 2445 2405 break; 2446 2406 }
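The fs/locks.c hunk above replaces `!fl->fl_blocker` waits with an acquire/release protocol: the waker publishes "done" with a release store of `fl_blocker = NULL`, and `locks_delete_block()` pairs that with an acquire load so it can skip `blocked_lock_lock` entirely on the common path. A minimal C11 sketch of that pattern (the `waiter` type and names here are illustrative stand-ins, not the kernel API):

```c
#include <stdatomic.h>
#include <stddef.h>

struct waiter {
    _Atomic(void *) blocker;   /* plays the role of fl_blocker */
    int blocked_requests;      /* stands in for fl_blocked_requests */
};

/* Wake path: detach the waiter, then publish completion with release
 * semantics so every earlier write is visible to the fast path. */
static void wake_waiter(struct waiter *w)
{
    w->blocked_requests = 0;   /* done inside the "locked" region */
    atomic_store_explicit(&w->blocker, NULL, memory_order_release);
}

/* Slow-path body, normally run under the global blocked-list lock. */
static int do_delete_block(struct waiter *w)
{
    int status = -1;           /* -ENOENT analogue */

    if (atomic_load_explicit(&w->blocker, memory_order_relaxed))
        status = 0;
    wake_waiter(w);
    return status;
}

static int delete_block(struct waiter *w)
{
    /* Lockless fast path: if blocker is already NULL (the acquire
     * pairs with the release in wake_waiter) and nothing is queued
     * behind us, there is no work and no lock to take. */
    if (!atomic_load_explicit(&w->blocker, memory_order_acquire) &&
        w->blocked_requests == 0)
        return -1;

    return do_delete_block(w); /* lock elided in this sketch */
}
```

The acquire load guarantees that if the fast path sees `blocker == NULL`, it also sees every list manipulation the waker did before the release store, which is exactly the argument made in the comment block of the hunk.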
+1
fs/nfs/client.c
··· 153 153 if ((clp = kzalloc(sizeof(*clp), GFP_KERNEL)) == NULL) 154 154 goto error_0; 155 155 156 + clp->cl_minorversion = cl_init->minorversion; 156 157 clp->cl_nfs_mod = cl_init->nfs_mod; 157 158 if (!try_module_get(clp->cl_nfs_mod->owner)) 158 159 goto error_dealloc;
+9
fs/nfs/fs_context.c
··· 832 832 if (len > maxnamlen) 833 833 goto out_hostname; 834 834 835 + kfree(ctx->nfs_server.hostname); 836 + 835 837 /* N.B. caller will free nfs_server.hostname in all cases */ 836 838 ctx->nfs_server.hostname = kmemdup_nul(dev_name, len, GFP_KERNEL); 837 839 if (!ctx->nfs_server.hostname) ··· 1241 1239 goto out_version_unavailable; 1242 1240 } 1243 1241 ctx->nfs_mod = nfs_mod; 1242 + } 1243 + 1244 + /* Ensure the filesystem context has the correct fs_type */ 1245 + if (fc->fs_type != ctx->nfs_mod->nfs_fs) { 1246 + module_put(fc->fs_type->owner); 1247 + __module_get(ctx->nfs_mod->nfs_fs->owner); 1248 + fc->fs_type = ctx->nfs_mod->nfs_fs; 1244 1249 } 1245 1250 return 0; 1246 1251
+2
fs/nfs/fscache.c
··· 31 31 struct nfs_server_key { 32 32 struct { 33 33 uint16_t nfsversion; /* NFS protocol version */ 34 + uint32_t minorversion; /* NFSv4 minor version */ 34 35 uint16_t family; /* address family */ 35 36 __be16 port; /* IP port */ 36 37 } hdr; ··· 56 55 57 56 memset(&key, 0, sizeof(key)); 58 57 key.hdr.nfsversion = clp->rpc_ops->version; 58 + key.hdr.minorversion = clp->cl_minorversion; 59 59 key.hdr.family = clp->cl_addr.ss_family; 60 60 61 61 switch (clp->cl_addr.ss_family) {
+1 -1
fs/nfs/namespace.c
··· 153 153 /* Open a new filesystem context, transferring parameters from the 154 154 * parent superblock, including the network namespace. 155 155 */ 156 - fc = fs_context_for_submount(&nfs_fs_type, path->dentry); 156 + fc = fs_context_for_submount(path->mnt->mnt_sb->s_type, path->dentry); 157 157 if (IS_ERR(fc)) 158 158 return ERR_CAST(fc); 159 159
-1
fs/nfs/nfs4client.c
··· 216 216 INIT_LIST_HEAD(&clp->cl_ds_clients); 217 217 rpc_init_wait_queue(&clp->cl_rpcwaitq, "NFS client"); 218 218 clp->cl_state = 1 << NFS4CLNT_LEASE_EXPIRED; 219 - clp->cl_minorversion = cl_init->minorversion; 220 219 clp->cl_mvops = nfs_v4_minor_ops[cl_init->minorversion]; 221 220 clp->cl_mig_gen = 1; 222 221 #if IS_ENABLED(CONFIG_NFS_V4_1)
+1
fs/overlayfs/Kconfig
··· 93 93 bool "Overlayfs: auto enable inode number mapping" 94 94 default n 95 95 depends on OVERLAY_FS 96 + depends on 64BIT 96 97 help 97 98 If this config option is enabled then overlay filesystems will use 98 99 unused high bits in undelying filesystem inode numbers to map all
+6
fs/overlayfs/file.c
··· 244 244 if (iocb->ki_flags & IOCB_WRITE) { 245 245 struct inode *inode = file_inode(orig_iocb->ki_filp); 246 246 247 + /* Actually acquired in ovl_write_iter() */ 248 + __sb_writers_acquired(file_inode(iocb->ki_filp)->i_sb, 249 + SB_FREEZE_WRITE); 247 250 file_end_write(iocb->ki_filp); 248 251 ovl_copyattr(ovl_inode_real(inode), inode); 249 252 } ··· 349 346 goto out; 350 347 351 348 file_start_write(real.file); 349 + /* Pacify lockdep, same trick as done in aio_write() */ 350 + __sb_writers_release(file_inode(real.file)->i_sb, 351 + SB_FREEZE_WRITE); 352 352 aio_req->fd = real; 353 353 real.flags = 0; 354 354 aio_req->orig_iocb = iocb;
+6 -1
fs/overlayfs/overlayfs.h
··· 318 318 return ovl_same_dev(sb) ? OVL_FS(sb)->xino_mode : 0; 319 319 } 320 320 321 - static inline int ovl_inode_lock(struct inode *inode) 321 + static inline void ovl_inode_lock(struct inode *inode) 322 + { 323 + mutex_lock(&OVL_I(inode)->lock); 324 + } 325 + 326 + static inline int ovl_inode_lock_interruptible(struct inode *inode) 322 327 { 323 328 return mutex_lock_interruptible(&OVL_I(inode)->lock); 324 329 }
+8 -1
fs/overlayfs/super.c
··· 1411 1411 if (ofs->config.xino == OVL_XINO_ON) 1412 1412 pr_info("\"xino=on\" is useless with all layers on same fs, ignore.\n"); 1413 1413 ofs->xino_mode = 0; 1414 + } else if (ofs->config.xino == OVL_XINO_OFF) { 1415 + ofs->xino_mode = -1; 1414 1416 } else if (ofs->config.xino == OVL_XINO_ON && ofs->xino_mode < 0) { 1415 1417 /* 1416 1418 * This is a roundup of number of bits needed for encoding ··· 1625 1623 sb->s_stack_depth = 0; 1626 1624 sb->s_maxbytes = MAX_LFS_FILESIZE; 1627 1625 /* Assume underlaying fs uses 32bit inodes unless proven otherwise */ 1628 - if (ofs->config.xino != OVL_XINO_OFF) 1626 + if (ofs->config.xino != OVL_XINO_OFF) { 1629 1627 ofs->xino_mode = BITS_PER_LONG - 32; 1628 + if (!ofs->xino_mode) { 1629 + pr_warn("xino not supported on 32bit kernel, falling back to xino=off.\n"); 1630 + ofs->config.xino = OVL_XINO_OFF; 1631 + } 1632 + } 1630 1633 1631 1634 /* alloc/destroy_inode needed for setting up traps in inode cache */ 1632 1635 sb->s_op = &ovl_super_operations;
+2 -2
fs/overlayfs/util.c
··· 509 509 struct inode *inode = d_inode(dentry); 510 510 int err; 511 511 512 - err = ovl_inode_lock(inode); 512 + err = ovl_inode_lock_interruptible(inode); 513 513 if (!err && ovl_already_copied_up_locked(dentry, flags)) { 514 514 err = 1; /* Already copied up */ 515 515 ovl_inode_unlock(inode); ··· 764 764 return err; 765 765 } 766 766 767 - err = ovl_inode_lock(inode); 767 + err = ovl_inode_lock_interruptible(inode); 768 768 if (err) 769 769 return err; 770 770
+21 -7
fs/zonefs/super.c
··· 178 178 * amount of readable data in the zone. 179 179 */ 180 180 static loff_t zonefs_check_zone_condition(struct inode *inode, 181 - struct blk_zone *zone, bool warn) 181 + struct blk_zone *zone, bool warn, 182 + bool mount) 182 183 { 183 184 struct zonefs_inode_info *zi = ZONEFS_I(inode); 184 185 ··· 197 196 zone->wp = zone->start; 198 197 return 0; 199 198 case BLK_ZONE_COND_READONLY: 200 - /* Do not allow writes in read-only zones */ 199 + /* 200 + * The write pointer of read-only zones is invalid. If such a 201 + * zone is found during mount, the file size cannot be retrieved 202 + * so we treat the zone as offline (mount == true case). 203 + * Otherwise, keep the file size as it was when last updated 204 + * so that the user can recover data. In both cases, writes are 205 + * always disabled for the zone. 206 + */ 201 207 if (warn) 202 208 zonefs_warn(inode->i_sb, "inode %lu: read-only zone\n", 203 209 inode->i_ino); 204 210 inode->i_flags |= S_IMMUTABLE; 211 + if (mount) { 212 + zone->cond = BLK_ZONE_COND_OFFLINE; 213 + inode->i_mode &= ~0777; 214 + zone->wp = zone->start; 215 + return 0; 216 + } 205 217 inode->i_mode &= ~0222; 206 - /* fallthrough */ 218 + return i_size_read(inode); 207 219 default: 208 220 if (zi->i_ztype == ZONEFS_ZTYPE_CNV) 209 221 return zi->i_max_size; ··· 245 231 * as there is no inconsistency between the inode size and the amount of 246 232 * data writen in the zone (data_size). 
247 233 */ 248 - data_size = zonefs_check_zone_condition(inode, zone, true); 234 + data_size = zonefs_check_zone_condition(inode, zone, true, false); 249 235 isize = i_size_read(inode); 250 236 if (zone->cond != BLK_ZONE_COND_OFFLINE && 251 237 zone->cond != BLK_ZONE_COND_READONLY && ··· 288 274 if (zone->cond != BLK_ZONE_COND_OFFLINE) { 289 275 zone->cond = BLK_ZONE_COND_OFFLINE; 290 276 data_size = zonefs_check_zone_condition(inode, zone, 291 - false); 277 + false, false); 292 278 } 293 279 } else if (zone->cond == BLK_ZONE_COND_READONLY || 294 280 sbi->s_mount_opts & ZONEFS_MNTOPT_ERRORS_ZRO) { ··· 297 283 if (zone->cond != BLK_ZONE_COND_READONLY) { 298 284 zone->cond = BLK_ZONE_COND_READONLY; 299 285 data_size = zonefs_check_zone_condition(inode, zone, 300 - false); 286 + false, false); 301 287 } 302 288 } 303 289 ··· 989 975 zi->i_zsector = zone->start; 990 976 zi->i_max_size = min_t(loff_t, MAX_LFS_FILESIZE, 991 977 zone->len << SECTOR_SHIFT); 992 - zi->i_wpoffset = zonefs_check_zone_condition(inode, zone, true); 978 + zi->i_wpoffset = zonefs_check_zone_condition(inode, zone, true, true); 993 979 994 980 inode->i_uid = sbi->s_uid; 995 981 inode->i_gid = sbi->s_gid;
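The zonefs change above splits read-only zone handling by context: at mount time the write pointer of a read-only zone is untrustworthy, so the file is degraded to offline; at run time the last-known size is preserved so data stays readable. A hedged sketch of just that branch, with illustrative stand-in types rather than the real `zonefs_inode_info`:

```c
#include <stdbool.h>
#include <stdint.h>

enum zone_cond { COND_OK, COND_READONLY, COND_OFFLINE };

struct zone  { enum zone_cond cond; uint64_t start, wp; };
struct zfile { uint64_t size; unsigned int mode; };

/* Mirrors the BLK_ZONE_COND_READONLY case: writes are always
 * disabled; at mount (mount == true) the zone is treated as offline
 * because its size cannot be recovered, otherwise the last updated
 * file size is kept so the user can still read the data out. */
static uint64_t check_readonly(struct zfile *f, struct zone *z, bool mount)
{
    f->mode &= ~0222u;          /* no write permission, ever */
    if (mount) {
        z->cond = COND_OFFLINE;
        f->mode &= ~0777u;      /* no access at all */
        z->wp = z->start;
        return 0;
    }
    return f->size;             /* keep size as it was last updated */
}
```

This is why the real function grew a `mount` parameter: the same condition needs two different recovery policies depending on whether a valid size was ever known.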
+2 -2
include/dt-bindings/clock/imx8mn-clock.h
··· 122 122 #define IMX8MN_CLK_I2C1 105 123 123 #define IMX8MN_CLK_I2C2 106 124 124 #define IMX8MN_CLK_I2C3 107 125 - #define IMX8MN_CLK_I2C4 118 126 - #define IMX8MN_CLK_UART1 119 125 + #define IMX8MN_CLK_I2C4 108 126 + #define IMX8MN_CLK_UART1 109 127 127 #define IMX8MN_CLK_UART2 110 128 128 #define IMX8MN_CLK_UART3 111 129 129 #define IMX8MN_CLK_UART4 112
+9 -5
include/linux/dmar.h
··· 69 69 extern struct rw_semaphore dmar_global_lock; 70 70 extern struct list_head dmar_drhd_units; 71 71 72 - #define for_each_drhd_unit(drhd) \ 73 - list_for_each_entry_rcu(drhd, &dmar_drhd_units, list) 72 + #define for_each_drhd_unit(drhd) \ 73 + list_for_each_entry_rcu(drhd, &dmar_drhd_units, list, \ 74 + dmar_rcu_check()) 74 75 75 76 #define for_each_active_drhd_unit(drhd) \ 76 - list_for_each_entry_rcu(drhd, &dmar_drhd_units, list) \ 77 + list_for_each_entry_rcu(drhd, &dmar_drhd_units, list, \ 78 + dmar_rcu_check()) \ 77 79 if (drhd->ignored) {} else 78 80 79 81 #define for_each_active_iommu(i, drhd) \ 80 - list_for_each_entry_rcu(drhd, &dmar_drhd_units, list) \ 82 + list_for_each_entry_rcu(drhd, &dmar_drhd_units, list, \ 83 + dmar_rcu_check()) \ 81 84 if (i=drhd->iommu, drhd->ignored) {} else 82 85 83 86 #define for_each_iommu(i, drhd) \ 84 - list_for_each_entry_rcu(drhd, &dmar_drhd_units, list) \ 87 + list_for_each_entry_rcu(drhd, &dmar_drhd_units, list, \ 88 + dmar_rcu_check()) \ 85 89 if (i=drhd->iommu, 0) {} else 86 90 87 91 static inline bool dmar_rcu_check(void)
-7
include/linux/dsa/8021q.h
··· 28 28 29 29 int dsa_8021q_rx_source_port(u16 vid); 30 30 31 - struct sk_buff *dsa_8021q_remove_header(struct sk_buff *skb); 32 - 33 31 #else 34 32 35 33 int dsa_port_setup_8021q_tagging(struct dsa_switch *ds, int index, ··· 60 62 int dsa_8021q_rx_source_port(u16 vid) 61 63 { 62 64 return 0; 63 - } 64 - 65 - struct sk_buff *dsa_8021q_remove_header(struct sk_buff *skb) 66 - { 67 - return NULL; 68 65 } 69 66 70 67 #endif /* IS_ENABLED(CONFIG_NET_DSA_TAG_8021Q) */
+1
include/linux/file.h
··· 85 85 extern int replace_fd(unsigned fd, struct file *file, unsigned flags); 86 86 extern void set_close_on_exec(unsigned int fd, int flag); 87 87 extern bool get_close_on_exec(unsigned int fd); 88 + extern int __get_unused_fd_flags(unsigned flags, unsigned long nofile); 88 89 extern int get_unused_fd_flags(unsigned flags); 89 90 extern void put_unused_fd(unsigned int fd); 90 91
+1
include/linux/fs.h
··· 698 698 struct rcu_head i_rcu; 699 699 }; 700 700 atomic64_t i_version; 701 + atomic64_t i_sequence; /* see futex */ 701 702 atomic_t i_count; 702 703 atomic_t i_dio_count; 703 704 atomic_t i_writecount;
+10 -7
include/linux/futex.h
··· 31 31 32 32 union futex_key { 33 33 struct { 34 + u64 i_seq; 34 35 unsigned long pgoff; 35 - struct inode *inode; 36 - int offset; 36 + unsigned int offset; 37 37 } shared; 38 38 struct { 39 + union { 40 + struct mm_struct *mm; 41 + u64 __tmp; 42 + }; 39 43 unsigned long address; 40 - struct mm_struct *mm; 41 - int offset; 44 + unsigned int offset; 42 45 } private; 43 46 struct { 47 + u64 ptr; 44 48 unsigned long word; 45 - void *ptr; 46 - int offset; 49 + unsigned int offset; 47 50 } both; 48 51 }; 49 52 50 - #define FUTEX_KEY_INIT (union futex_key) { .both = { .ptr = NULL } } 53 + #define FUTEX_KEY_INIT (union futex_key) { .both = { .ptr = 0ULL } } 51 54 52 55 #ifdef CONFIG_FUTEX 53 56 enum {
+1 -12
include/linux/genhd.h
··· 245 245 !(disk->flags & GENHD_FL_NO_PART_SCAN); 246 246 } 247 247 248 - static inline bool disk_has_partitions(struct gendisk *disk) 249 - { 250 - bool ret = false; 251 - 252 - rcu_read_lock(); 253 - if (rcu_dereference(disk->part_tbl)->len > 1) 254 - ret = true; 255 - rcu_read_unlock(); 256 - 257 - return ret; 258 - } 259 - 260 248 static inline dev_t disk_devt(struct gendisk *disk) 261 249 { 262 250 return MKDEV(disk->major, disk->first_minor); ··· 286 298 287 299 extern struct hd_struct *disk_map_sector_rcu(struct gendisk *disk, 288 300 sector_t sector); 301 + bool disk_has_partitions(struct gendisk *disk); 289 302 290 303 /* 291 304 * Macros to operate on percpu disk statistics:
+2
include/linux/intel-iommu.h
··· 123 123 124 124 #define dmar_readq(a) readq(a) 125 125 #define dmar_writeq(a,v) writeq(v,a) 126 + #define dmar_readl(a) readl(a) 127 + #define dmar_writel(a, v) writel(v, a) 126 128 127 129 #define DMAR_VER_MAJOR(v) (((v) & 0xf0) >> 4) 128 130 #define DMAR_VER_MINOR(v) ((v) & 0x0f)
+1
include/linux/mmc/host.h
··· 333 333 MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR104 | \ 334 334 MMC_CAP_UHS_DDR50) 335 335 #define MMC_CAP_SYNC_RUNTIME_PM (1 << 21) /* Synced runtime PM suspends. */ 336 + #define MMC_CAP_NEED_RSP_BUSY (1 << 22) /* Commands with R1B can't use R1. */ 336 337 #define MMC_CAP_DRIVER_TYPE_A (1 << 23) /* Host supports Driver Type A */ 337 338 #define MMC_CAP_DRIVER_TYPE_C (1 << 24) /* Host supports Driver Type C */ 338 339 #define MMC_CAP_DRIVER_TYPE_D (1 << 25) /* Host supports Driver Type D */
+13
include/linux/netlink.h
··· 115 115 { 116 116 u64 __cookie = cookie; 117 117 118 + if (!extack) 119 + return; 120 + memcpy(extack->cookie, &__cookie, sizeof(__cookie)); 121 + extack->cookie_len = sizeof(__cookie); 122 + } 123 + 124 + static inline void nl_set_extack_cookie_u32(struct netlink_ext_ack *extack, 125 + u32 cookie) 126 + { 127 + u32 __cookie = cookie; 128 + 129 + if (!extack) 130 + return; 118 131 memcpy(extack->cookie, &__cookie, sizeof(__cookie)); 119 132 extack->cookie_len = sizeof(__cookie); 120 133 }
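The netlink hunk above adds a NULL-extack guard and a u32 sibling of the u64 cookie setter; both copy through a correctly sized local so the raw value bytes land in the cookie buffer. A small self-contained sketch of the same pattern (struct and constant here are illustrative, not the real `netlink_ext_ack`):

```c
#include <stdint.h>
#include <string.h>

#define COOKIE_MAX 20   /* illustrative; NETLINK_MAX_COOKIE_LEN in the kernel */

struct ext_ack {
    uint8_t cookie[COOKIE_MAX];
    uint8_t cookie_len;
};

/* Mirrors nl_set_extack_cookie_u32(): callers may legitimately pass
 * no ack at all, so bail out on NULL instead of dereferencing; then
 * copy the value's bytes and record how many are valid. */
static void set_cookie_u32(struct ext_ack *ack, uint32_t cookie)
{
    if (!ack)
        return;
    memcpy(ack->cookie, &cookie, sizeof(cookie));
    ack->cookie_len = sizeof(cookie);
}
```

Recording `cookie_len` alongside the bytes is what lets a reader distinguish a 4-byte cookie from an 8-byte one in the same fixed-size buffer.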
+4 -4
include/linux/of_clk.h
··· 11 11 12 12 #if defined(CONFIG_COMMON_CLK) && defined(CONFIG_OF) 13 13 14 - unsigned int of_clk_get_parent_count(struct device_node *np); 15 - const char *of_clk_get_parent_name(struct device_node *np, int index); 14 + unsigned int of_clk_get_parent_count(const struct device_node *np); 15 + const char *of_clk_get_parent_name(const struct device_node *np, int index); 16 16 void of_clk_init(const struct of_device_id *matches); 17 17 18 18 #else /* !CONFIG_COMMON_CLK || !CONFIG_OF */ 19 19 20 - static inline unsigned int of_clk_get_parent_count(struct device_node *np) 20 + static inline unsigned int of_clk_get_parent_count(const struct device_node *np) 21 21 { 22 22 return 0; 23 23 } 24 - static inline const char *of_clk_get_parent_name(struct device_node *np, 24 + static inline const char *of_clk_get_parent_name(const struct device_node *np, 25 25 int index) 26 26 { 27 27 return NULL;
+1 -1
include/linux/page-flags.h
··· 311 311 312 312 __PAGEFLAG(Locked, locked, PF_NO_TAIL) 313 313 PAGEFLAG(Waiters, waiters, PF_ONLY_HEAD) __CLEARPAGEFLAG(Waiters, waiters, PF_ONLY_HEAD) 314 - PAGEFLAG(Error, error, PF_NO_COMPOUND) TESTCLEARFLAG(Error, error, PF_NO_COMPOUND) 314 + PAGEFLAG(Error, error, PF_NO_TAIL) TESTCLEARFLAG(Error, error, PF_NO_TAIL) 315 315 PAGEFLAG(Referenced, referenced, PF_HEAD) 316 316 TESTCLEARFLAG(Referenced, referenced, PF_HEAD) 317 317 __SETPAGEFLAG(Referenced, referenced, PF_HEAD)
+32 -4
include/linux/skbuff.h
··· 645 645 * @offload_l3_fwd_mark: Packet was L3-forwarded in hardware 646 646 * @tc_skip_classify: do not classify packet. set by IFB device 647 647 * @tc_at_ingress: used within tc_classify to distinguish in/egress 648 - * @tc_redirected: packet was redirected by a tc action 649 - * @tc_from_ingress: if tc_redirected, tc_at_ingress at time of redirect 648 + * @redirected: packet was redirected by packet classifier 649 + * @from_ingress: packet was redirected from the ingress path 650 650 * @peeked: this packet has been seen already, so stats have been 651 651 * done for it, don't do them again 652 652 * @nf_trace: netfilter packet trace flag ··· 848 848 #ifdef CONFIG_NET_CLS_ACT 849 849 __u8 tc_skip_classify:1; 850 850 __u8 tc_at_ingress:1; 851 - __u8 tc_redirected:1; 852 - __u8 tc_from_ingress:1; 851 + #endif 852 + #ifdef CONFIG_NET_REDIRECT 853 + __u8 redirected:1; 854 + __u8 from_ingress:1; 853 855 #endif 854 856 #ifdef CONFIG_TLS_DEVICE 855 857 __u8 decrypted:1; ··· 4571 4569 * adjustment filled in by caller) and return result. 4572 4570 */ 4573 4571 return csum_partial(l4_hdr, csum_start - l4_hdr, partial); 4572 + } 4573 + 4574 + static inline bool skb_is_redirected(const struct sk_buff *skb) 4575 + { 4576 + #ifdef CONFIG_NET_REDIRECT 4577 + return skb->redirected; 4578 + #else 4579 + return false; 4580 + #endif 4581 + } 4582 + 4583 + static inline void skb_set_redirected(struct sk_buff *skb, bool from_ingress) 4584 + { 4585 + #ifdef CONFIG_NET_REDIRECT 4586 + skb->redirected = 1; 4587 + skb->from_ingress = from_ingress; 4588 + if (skb->from_ingress) 4589 + skb->tstamp = 0; 4590 + #endif 4591 + } 4592 + 4593 + static inline void skb_reset_redirect(struct sk_buff *skb) 4594 + { 4595 + #ifdef CONFIG_NET_REDIRECT 4596 + skb->redirected = 0; 4597 + #endif 4574 4598 } 4575 4599 4576 4600 #endif /* __KERNEL__ */
+2 -1
include/linux/socket.h
··· 401 401 int addr_len); 402 402 extern int __sys_accept4_file(struct file *file, unsigned file_flags, 403 403 struct sockaddr __user *upeer_sockaddr, 404 - int __user *upeer_addrlen, int flags); 404 + int __user *upeer_addrlen, int flags, 405 + unsigned long nofile); 405 406 extern int __sys_accept4(int fd, struct sockaddr __user *upeer_sockaddr, 406 407 int __user *upeer_addrlen, int flags); 407 408 extern int __sys_socket(int family, int type, int protocol);
+3 -2
include/linux/vmalloc.h
··· 141 141 142 142 extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr, 143 143 unsigned long pgoff); 144 - void vmalloc_sync_all(void); 145 - 144 + void vmalloc_sync_mappings(void); 145 + void vmalloc_sync_unmappings(void); 146 + 146 147 /* 147 148 * Lowlevel-APIs (not for driver use!) 148 149 */
+8 -4
include/net/af_rxrpc.h
··· 16 16 struct socket; 17 17 struct rxrpc_call; 18 18 19 + enum rxrpc_interruptibility { 20 + RXRPC_INTERRUPTIBLE, /* Call is interruptible */ 21 + RXRPC_PREINTERRUPTIBLE, /* Call can be cancelled whilst waiting for a slot */ 22 + RXRPC_UNINTERRUPTIBLE, /* Call should not be interruptible at all */ 23 + }; 24 + 19 25 /* 20 26 * Debug ID counter for tracing. 21 27 */ ··· 47 41 gfp_t, 48 42 rxrpc_notify_rx_t, 49 43 bool, 50 - bool, 44 + enum rxrpc_interruptibility, 51 45 unsigned int); 52 46 int rxrpc_kernel_send_data(struct socket *, struct rxrpc_call *, 53 47 struct msghdr *, size_t, ··· 64 58 rxrpc_user_attach_call_t, unsigned long, gfp_t, 65 59 unsigned int); 66 60 void rxrpc_kernel_set_tx_length(struct socket *, struct rxrpc_call *, s64); 67 - bool rxrpc_kernel_check_life(const struct socket *, const struct rxrpc_call *, 68 - u32 *); 69 - void rxrpc_kernel_probe_life(struct socket *, struct rxrpc_call *); 61 + bool rxrpc_kernel_check_life(const struct socket *, const struct rxrpc_call *); 70 62 u32 rxrpc_kernel_get_epoch(struct socket *, struct rxrpc_call *); 71 63 bool rxrpc_kernel_get_reply_time(struct socket *, struct rxrpc_call *, 72 64 ktime_t *);
-16
include/net/sch_generic.h
··· 675 675 const struct qdisc_size_table *stab); 676 676 int skb_do_redirect(struct sk_buff *); 677 677 678 - static inline void skb_reset_tc(struct sk_buff *skb) 679 - { 680 - #ifdef CONFIG_NET_CLS_ACT 681 - skb->tc_redirected = 0; 682 - #endif 683 - } 684 - 685 - static inline bool skb_is_tc_redirected(const struct sk_buff *skb) 686 - { 687 - #ifdef CONFIG_NET_CLS_ACT 688 - return skb->tc_redirected; 689 - #else 690 - return false; 691 - #endif 692 - } 693 - 694 678 static inline bool skb_at_tc_ingress(const struct sk_buff *skb) 695 679 { 696 680 #ifdef CONFIG_NET_CLS_ACT
+1 -1
include/trace/events/afs.h
··· 233 233 EM(afs_call_trace_get, "GET ") \ 234 234 EM(afs_call_trace_put, "PUT ") \ 235 235 EM(afs_call_trace_wake, "WAKE ") \ 236 - E_(afs_call_trace_work, "WORK ") 236 + E_(afs_call_trace_work, "QUEUE") 237 237 238 238 #define afs_server_traces \ 239 239 EM(afs_server_trace_alloc, "ALLOC ") \
+1 -2
init/Kconfig
··· 767 767 bool 768 768 769 769 config CC_HAS_INT128 770 - def_bool y 771 - depends on !$(cc-option,-D__SIZEOF_INT128__=0) 770 + def_bool !$(cc-option,$(m64-flag) -D__SIZEOF_INT128__=0) && 64BIT 772 771 773 772 # 774 773 # For architectures that know their GCC __int128 support is sound
+11 -3
kernel/bpf/bpf_struct_ops.c
··· 490 490 prev_state = cmpxchg(&st_map->kvalue.state, 491 491 BPF_STRUCT_OPS_STATE_INUSE, 492 492 BPF_STRUCT_OPS_STATE_TOBEFREE); 493 - if (prev_state == BPF_STRUCT_OPS_STATE_INUSE) { 493 + switch (prev_state) { 494 + case BPF_STRUCT_OPS_STATE_INUSE: 494 495 st_map->st_ops->unreg(&st_map->kvalue.data); 495 496 if (refcount_dec_and_test(&st_map->kvalue.refcnt)) 496 497 bpf_map_put(map); 498 + return 0; 499 + case BPF_STRUCT_OPS_STATE_TOBEFREE: 500 + return -EINPROGRESS; 501 + case BPF_STRUCT_OPS_STATE_INIT: 502 + return -ENOENT; 503 + default: 504 + WARN_ON_ONCE(1); 505 + /* Should never happen. Treat it as not found. */ 506 + return -ENOENT; 497 507 } 498 - 499 - return 0; 500 508 } 501 509 502 510 static void bpf_struct_ops_map_seq_show_elem(struct bpf_map *map, void *key,
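The bpf_struct_ops change above turns the delete path into a proper state machine: one `cmpxchg` claims the INUSE-to-TOBEFREE transition, and the *previous* state tells each racing caller exactly what happened. A compact C11 sketch of that idiom (error values and names are illustrative):

```c
#include <stdatomic.h>

enum state { STATE_INIT, STATE_INUSE, STATE_TOBEFREE };

/* One atomic compare-exchange both attempts the transition and
 * reports, via the updated 'prev', which state we actually raced
 * against - so every outcome gets a distinct return code. */
static int mark_tobefree(_Atomic int *state)
{
    int prev = STATE_INUSE;

    atomic_compare_exchange_strong(state, &prev, STATE_TOBEFREE);
    switch (prev) {
    case STATE_INUSE:
        return 0;       /* we won: safe to unreg and drop the ref */
    case STATE_TOBEFREE:
        return -115;    /* -EINPROGRESS: another caller beat us */
    case STATE_INIT:
        return -2;      /* -ENOENT: never registered */
    default:
        return -2;      /* should never happen; treat as not found */
    }
}
```

The win over the old `if (prev == INUSE)` shape is that losing callers no longer get a misleading success: each state observed at the cmpxchg maps to its own error.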
+1 -1
kernel/bpf/btf.c
··· 2418 2418 2419 2419 struct_size = struct_type->size; 2420 2420 bytes_offset = BITS_ROUNDDOWN_BYTES(struct_bits_off); 2421 - if (struct_size - bytes_offset < sizeof(int)) { 2421 + if (struct_size - bytes_offset < member_type->size) { 2422 2422 btf_verifier_log_member(env, struct_type, member, 2423 2423 "Member exceeds struct_size"); 2424 2424 return -EINVAL;
+5 -2
kernel/bpf/cgroup.c
··· 227 227 for (i = 0; i < NR; i++) 228 228 bpf_prog_array_free(arrays[i]); 229 229 230 + for (p = cgroup_parent(cgrp); p; p = cgroup_parent(p)) 231 + cgroup_bpf_put(p); 232 + 230 233 percpu_ref_exit(&cgrp->bpf.refcnt); 231 234 232 235 return -ENOMEM; ··· 305 302 u32 saved_flags = (flags & (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI)); 306 303 struct list_head *progs = &cgrp->bpf.progs[type]; 307 304 struct bpf_prog *old_prog = NULL; 308 - struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE], 309 - *old_storage[MAX_BPF_CGROUP_STORAGE_TYPE] = {NULL}; 305 + struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE] = {}; 306 + struct bpf_cgroup_storage *old_storage[MAX_BPF_CGROUP_STORAGE_TYPE] = {}; 310 307 struct bpf_prog_list *pl, *replace_pl = NULL; 311 308 enum bpf_cgroup_storage_type stype; 312 309 int err;
+5
kernel/bpf/syscall.c
··· 1514 1514 if (IS_ERR(map)) 1515 1515 return PTR_ERR(map); 1516 1516 1517 + if (map->map_type == BPF_MAP_TYPE_STRUCT_OPS) { 1518 + fdput(f); 1519 + return -ENOTSUPP; 1520 + } 1521 + 1517 1522 mutex_lock(&map->freeze_mutex); 1518 1523 1519 1524 if (map->writecnt) {
+55 -38
kernel/futex.c
··· 385 385 */ 386 386 static struct futex_hash_bucket *hash_futex(union futex_key *key) 387 387 { 388 - u32 hash = jhash2((u32*)&key->both.word, 389 - (sizeof(key->both.word)+sizeof(key->both.ptr))/4, 388 + u32 hash = jhash2((u32 *)key, offsetof(typeof(*key), both.offset) / 4, 390 389 key->both.offset); 390 + 391 391 return &futex_queues[hash & (futex_hashsize - 1)]; 392 392 } 393 393 ··· 429 429 430 430 switch (key->both.offset & (FUT_OFF_INODE|FUT_OFF_MMSHARED)) { 431 431 case FUT_OFF_INODE: 432 - ihold(key->shared.inode); /* implies smp_mb(); (B) */ 432 + smp_mb(); /* explicit smp_mb(); (B) */ 433 433 break; 434 434 case FUT_OFF_MMSHARED: 435 435 futex_get_mm(key); /* implies smp_mb(); (B) */ ··· 463 463 464 464 switch (key->both.offset & (FUT_OFF_INODE|FUT_OFF_MMSHARED)) { 465 465 case FUT_OFF_INODE: 466 - iput(key->shared.inode); 467 466 break; 468 467 case FUT_OFF_MMSHARED: 469 468 mmdrop(key->private.mm); ··· 504 505 return timeout; 505 506 } 506 507 508 + /* 509 + * Generate a machine wide unique identifier for this inode. 510 + * 511 + * This relies on u64 not wrapping in the life-time of the machine; which with 512 + * 1ns resolution means almost 585 years. 513 + * 514 + * This further relies on the fact that a well formed program will not unmap 515 + * the file while it has a (shared) futex waiting on it. This mapping will have 516 + * a file reference which pins the mount and inode. 517 + * 518 + * If for some reason an inode gets evicted and read back in again, it will get 519 + * a new sequence number and will _NOT_ match, even though it is the exact same 520 + * file. 521 + * 522 + * It is important that match_futex() will never have a false-positive, esp. 523 + * for PI futexes that can mess up the state. The above argues that false-negatives 524 + * are only possible for malformed programs. 
525 + */ 526 + static u64 get_inode_sequence_number(struct inode *inode) 527 + { 528 + static atomic64_t i_seq; 529 + u64 old; 530 + 531 + /* Does the inode already have a sequence number? */ 532 + old = atomic64_read(&inode->i_sequence); 533 + if (likely(old)) 534 + return old; 535 + 536 + for (;;) { 537 + u64 new = atomic64_add_return(1, &i_seq); 538 + if (WARN_ON_ONCE(!new)) 539 + continue; 540 + 541 + old = atomic64_cmpxchg_relaxed(&inode->i_sequence, 0, new); 542 + if (old) 543 + return old; 544 + return new; 545 + } 546 + } 547 + 507 548 /** 508 549 * get_futex_key() - Get parameters which are the keys for a futex 509 550 * @uaddr: virtual address of the futex ··· 556 517 * 557 518 * The key words are stored in @key on success. 558 519 * 559 - * For shared mappings, it's (page->index, file_inode(vma->vm_file), 560 - * offset_within_page). For private mappings, it's (uaddr, current->mm). 561 - * We can usually work out the index without swapping in the page. 520 + * For shared mappings (when @fshared), the key is: 521 + * ( inode->i_sequence, page->index, offset_within_page ) 522 + * [ also see get_inode_sequence_number() ] 523 + * 524 + * For private mappings (or when !@fshared), the key is: 525 + * ( current->mm, address, 0 ) 526 + * 527 + * This allows (cross process, where applicable) identification of the futex 528 + * without keeping the page pinned for the duration of the FUTEX_WAIT. 562 529 * 563 530 * lock_page() might sleep, the caller should not hold a spinlock. 564 531 */ ··· 704 659 key->private.mm = mm; 705 660 key->private.address = address; 706 661 707 - get_futex_key_refs(key); /* implies smp_mb(); (B) */ 708 - 709 662 } else { 710 663 struct inode *inode; 711 664 ··· 735 692 goto again; 736 693 } 737 694 738 - /* 739 - * Take a reference unless it is about to be freed. Previously 740 - * this reference was taken by ihold under the page lock 741 - * pinning the inode in place so i_lock was unnecessary. The
742 - * only way for this check to fail is if the inode was 743 - * truncated in parallel which is almost certainly an 744 - * application bug. In such a case, just retry. 745 - * 746 - * We are not calling into get_futex_key_refs() in file-backed 747 - * cases, therefore a successful atomic_inc return below will 748 - * guarantee that get_futex_key() will still imply smp_mb(); (B). 749 - */ 750 - if (!atomic_inc_not_zero(&inode->i_count)) { 751 - rcu_read_unlock(); 752 - put_page(page); 753 - 754 - goto again; 755 - } 756 - 757 - /* Should be impossible but lets be paranoid for now */ 758 - if (WARN_ON_ONCE(inode->i_mapping != mapping)) { 759 - err = -EFAULT; 760 - rcu_read_unlock(); 761 - iput(inode); 762 - 763 - goto out; 764 - } 765 - 766 695 key->both.offset |= FUT_OFF_INODE; /* inode-based key */ 767 - key->shared.inode = inode; 696 + key->shared.i_seq = get_inode_sequence_number(inode); 768 697 key->shared.pgoff = basepage_index(tail); 769 698 rcu_read_unlock(); 770 699 } 700 + 701 + get_futex_key_refs(key); /* implies smp_mb(); (B) */ 771 702 772 703 out: 773 704 put_page(page);
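Editor's note: the `get_inode_sequence_number()` helper added in the futex hunk above is a common lazy-initialization pattern: a global monotonic counter hands out values, and a compare-and-swap installs the value exactly once per object, with racing callers adopting the winner's value. A user-space sketch with C11 atomics (names mirror the kernel code but this is not the kernel implementation; the wraparound `WARN_ON_ONCE` handling is omitted):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Machine-wide counter; 0 is reserved to mean "no sequence assigned yet". */
static _Atomic uint64_t i_seq;

struct fake_inode {
    _Atomic uint64_t i_sequence;   /* starts at 0 */
};

/*
 * Lazily assign a unique, stable identifier: the first caller installs a
 * fresh number; any caller losing the cmpxchg race adopts the winner's value,
 * so every caller observes the same id for the lifetime of the object.
 */
static uint64_t get_seq(struct fake_inode *inode)
{
    uint64_t old = atomic_load(&inode->i_sequence);
    if (old)
        return old;

    uint64_t expected = 0;
    uint64_t fresh = atomic_fetch_add(&i_seq, 1) + 1; /* never hands out 0 */

    if (atomic_compare_exchange_strong(&inode->i_sequence, &expected, fresh))
        return fresh;
    return expected;  /* lost the race: expected now holds the winner's value */
}
```

As the patch comment notes, correctness relies on the counter never wrapping during the machine's lifetime, so a false-positive `match_futex()` cannot occur for well-formed programs.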
+1 -1
kernel/notifier.c
··· 519 519 520 520 int register_die_notifier(struct notifier_block *nb) 521 521 { 522 - vmalloc_sync_all(); 522 + vmalloc_sync_mappings(); 523 523 return atomic_notifier_chain_register(&die_chain, nb); 524 524 } 525 525 EXPORT_SYMBOL_GPL(register_die_notifier);
+2
kernel/sys.c
··· 47 47 #include <linux/syscalls.h> 48 48 #include <linux/kprobes.h> 49 49 #include <linux/user_namespace.h> 50 + #include <linux/time_namespace.h> 50 51 #include <linux/binfmts.h> 51 52 52 53 #include <linux/sched.h> ··· 2547 2546 memset(info, 0, sizeof(struct sysinfo)); 2548 2547 2549 2548 ktime_get_boottime_ts64(&tp); 2549 + timens_add_boottime(&tp); 2550 2550 info->uptime = tp.tv_sec + (tp.tv_nsec ? 1 : 0); 2551 2551 2552 2552 get_avenrun(info->loads, 0, SI_LOAD_SHIFT - FSHIFT);
+1 -1
kernel/trace/bpf_trace.c
··· 730 730 if (unlikely(!nmi_uaccess_okay())) 731 731 return -EPERM; 732 732 733 - if (in_nmi()) { 733 + if (irqs_disabled()) { 734 734 /* Do an early check on signal validity. Otherwise, 735 735 * the error is lost in deferred irq_work. 736 736 */
+8 -3
lib/crypto/chacha20poly1305-selftest.c
··· 9028 9028 && total_len <= 1 << 10; ++total_len) { 9029 9029 for (i = 0; i <= total_len; ++i) { 9030 9030 for (j = i; j <= total_len; ++j) { 9031 + k = 0; 9031 9032 sg_init_table(sg_src, 3); 9032 - sg_set_buf(&sg_src[0], input, i); 9033 - sg_set_buf(&sg_src[1], input + i, j - i); 9034 - sg_set_buf(&sg_src[2], input + j, total_len - j); 9033 + if (i) 9034 + sg_set_buf(&sg_src[k++], input, i); 9035 + if (j - i) 9036 + sg_set_buf(&sg_src[k++], input + i, j - i); 9037 + if (total_len - j) 9038 + sg_set_buf(&sg_src[k++], input + j, total_len - j); 9039 + sg_init_marker(sg_src, k); 9035 9040 memset(computed_output, 0, total_len); 9036 9041 memset(input, 0, total_len); 9037 9042
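Editor's note: the selftest fix above stops placing zero-length entries in the scatterlist, since an empty entry can terminate the table walk early; only non-empty pieces are added and `sg_init_marker()` then marks the real end. The splitting logic can be checked in isolation (plain structs stand in for `struct scatterlist`; names here are illustrative):

```c
#include <stddef.h>

struct demo_seg {
    const unsigned char *buf;
    size_t len;
};

/*
 * Split in[0..total) at offsets i <= j <= total into up to three segments,
 * skipping any piece that would be empty. Returns the segment count --
 * the value the fixed selftest passes to sg_init_marker().
 */
static int demo_split(const unsigned char *in, size_t i, size_t j,
                      size_t total, struct demo_seg out[3])
{
    int k = 0;

    if (i)
        out[k++] = (struct demo_seg){ in, i };
    if (j - i)
        out[k++] = (struct demo_seg){ in + i, j - i };
    if (total - j)
        out[k++] = (struct demo_seg){ in + j, total - j };
    return k;
}
```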
+9 -3
mm/madvise.c
··· 335 335 } 336 336 337 337 page = pmd_page(orig_pmd); 338 + 339 + /* Do not interfere with other mappings of this page */ 340 + if (page_mapcount(page) != 1) 341 + goto huge_unlock; 342 + 338 343 if (next - addr != HPAGE_PMD_SIZE) { 339 344 int err; 340 - 341 - if (page_mapcount(page) != 1) 342 - goto huge_unlock; 343 345 344 346 get_page(page); 345 347 spin_unlock(ptl); ··· 427 425 addr -= PAGE_SIZE; 428 426 continue; 429 427 } 428 + 429 + /* Do not interfere with other mappings of this page */ 430 + if (page_mapcount(page) != 1) 431 + continue; 430 432 431 433 VM_BUG_ON_PAGE(PageTransCompound(page), page); 432 434
+66 -37
mm/memcontrol.c
··· 2297 2297 #define MEMCG_DELAY_SCALING_SHIFT 14 2298 2298 2299 2299 /* 2300 - * Scheduled by try_charge() to be executed from the userland return path 2301 - * and reclaims memory over the high limit. 2300 + * Get the number of jiffies that we should penalise a mischievous cgroup which 2301 + * is exceeding its memory.high by checking both it and its ancestors. 2302 2302 */ 2303 - void mem_cgroup_handle_over_high(void) 2303 + static unsigned long calculate_high_delay(struct mem_cgroup *memcg, 2304 + unsigned int nr_pages) 2304 2305 { 2305 - unsigned long usage, high, clamped_high; 2306 - unsigned long pflags; 2307 - unsigned long penalty_jiffies, overage; 2308 - unsigned int nr_pages = current->memcg_nr_pages_over_high; 2309 - struct mem_cgroup *memcg; 2306 + unsigned long penalty_jiffies; 2307 + u64 max_overage = 0; 2310 2308 2311 - if (likely(!nr_pages)) 2312 - return; 2309 + do { 2310 + unsigned long usage, high; 2311 + u64 overage; 2313 2312 2314 - memcg = get_mem_cgroup_from_mm(current->mm); 2315 - reclaim_high(memcg, nr_pages, GFP_KERNEL); 2316 - current->memcg_nr_pages_over_high = 0; 2313 + usage = page_counter_read(&memcg->memory); 2314 + high = READ_ONCE(memcg->high); 2315 + 2316 + /* 2317 + * Prevent division by 0 in overage calculation by acting as if 2318 + * it was a threshold of 1 page 2319 + */ 2320 + high = max(high, 1UL); 2321 + 2322 + overage = usage - high; 2323 + overage <<= MEMCG_DELAY_PRECISION_SHIFT; 2324 + overage = div64_u64(overage, high); 2325 + 2326 + if (overage > max_overage) 2327 + max_overage = overage; 2328 + } while ((memcg = parent_mem_cgroup(memcg)) && 2329 + !mem_cgroup_is_root(memcg)); 2330 + 2331 + if (!max_overage) 2332 + return 0; 2317 2333 2318 2334 /* 2319 - * memory.high is breached and reclaim is unable to keep up. Throttle 2320 - * allocators proactively to slow down excessive growth. 2321 - * 2322 2335 * We use overage compared to memory.high to calculate the number of 2323 2336 * jiffies to sleep (penalty_jiffies). Ideally this value should be
2324 2337 * fairly lenient on small overages, and increasingly harsh when the ··· 2339 2326 * its crazy behaviour, so we exponentially increase the delay based on 2340 2327 * overage amount. 2341 2328 */ 2342 - 2343 - usage = page_counter_read(&memcg->memory); 2344 - high = READ_ONCE(memcg->high); 2345 - 2346 - if (usage <= high) 2347 - goto out; 2348 - 2349 - /* 2350 - * Prevent division by 0 in overage calculation by acting as if it was a 2351 - * threshold of 1 page 2352 - */ 2353 - clamped_high = max(high, 1UL); 2354 - 2355 - overage = div_u64((u64)(usage - high) << MEMCG_DELAY_PRECISION_SHIFT, 2356 - clamped_high); 2357 - 2358 - penalty_jiffies = ((u64)overage * overage * HZ) 2359 - >> (MEMCG_DELAY_PRECISION_SHIFT + MEMCG_DELAY_SCALING_SHIFT); 2329 + penalty_jiffies = max_overage * max_overage * HZ; 2330 + penalty_jiffies >>= MEMCG_DELAY_PRECISION_SHIFT; 2331 + penalty_jiffies >>= MEMCG_DELAY_SCALING_SHIFT; 2360 2332 2361 2333 /* 2362 2334 * Factor in the task's own contribution to the overage, such that four ··· 2358 2360 * application moving forwards and also permit diagnostics, albeit 2359 2361 * extremely slowly. 2360 2362 */ 2361 - penalty_jiffies = min(penalty_jiffies, MEMCG_MAX_HIGH_DELAY_JIFFIES); 2363 + return min(penalty_jiffies, MEMCG_MAX_HIGH_DELAY_JIFFIES); 2364 + } 2365 + 2366 + /* 2367 + * Scheduled by try_charge() to be executed from the userland return path 2368 + * and reclaims memory over the high limit.
2369 + */ 2370 + void mem_cgroup_handle_over_high(void) 2371 + { 2372 + unsigned long penalty_jiffies; 2373 + unsigned long pflags; 2374 + unsigned int nr_pages = current->memcg_nr_pages_over_high; 2375 + struct mem_cgroup *memcg; 2376 + 2377 + if (likely(!nr_pages)) 2378 + return; 2379 + 2380 + memcg = get_mem_cgroup_from_mm(current->mm); 2381 + reclaim_high(memcg, nr_pages, GFP_KERNEL); 2382 + current->memcg_nr_pages_over_high = 0; 2383 + 2384 + /* 2385 + * memory.high is breached and reclaim is unable to keep up. Throttle 2386 + * allocators proactively to slow down excessive growth. 2387 + */ 2388 + penalty_jiffies = calculate_high_delay(memcg, nr_pages); 2362 2389 2363 2390 /* 2364 2391 * Don't sleep if the amount of jiffies this memcg owes us is so low ··· 4050 4027 struct mem_cgroup_thresholds *thresholds; 4051 4028 struct mem_cgroup_threshold_ary *new; 4052 4029 unsigned long usage; 4053 - int i, j, size; 4030 + int i, j, size, entries; 4054 4031 4055 4032 mutex_lock(&memcg->thresholds_lock); 4056 4033 ··· 4070 4047 __mem_cgroup_threshold(memcg, type == _MEMSWAP); 4071 4048 4072 4049 /* Calculate new number of threshold */ 4073 - size = 0; 4050 + size = entries = 0; 4074 4051 for (i = 0; i < thresholds->primary->size; i++) { 4075 4052 if (thresholds->primary->entries[i].eventfd != eventfd) 4076 4053 size++; 4054 + else 4055 + entries++; 4077 4056 } 4078 4057 4079 4058 new = thresholds->spare; 4059 + 4060 + /* If no items related to eventfd have been cleared, nothing to do */ 4061 + if (!entries) 4062 + goto unlock; 4080 4063 4081 4064 /* Set thresholds array to NULL if we don't have thresholds */ 4082 4065 if (!size) {
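Editor's note: the `calculate_high_delay()` refactor above keeps the same fixed-point arithmetic while extending it to walk the cgroup's ancestors and take the worst overage. The math can be sketched stand-alone; `MEMCG_DELAY_SCALING_SHIFT` is 14 as in the hunk, but the precision shift and HZ values below are assumed for illustration, not taken from this diff:

```c
#include <stdint.h>

#define DELAY_PRECISION_SHIFT 20   /* assumed; not visible in this hunk */
#define DELAY_SCALING_SHIFT   14   /* MEMCG_DELAY_SCALING_SHIFT in the patch */
#define HZ_DEMO               250  /* illustrative tick rate */

/* Fixed-point overage: (usage - high) / high scaled by 2^PRECISION,
 * clamping high to at least 1 to avoid dividing by zero, as the patch does. */
static uint64_t overage_fixed(uint64_t usage, uint64_t high)
{
    if (high < 1)
        high = 1;
    if (usage <= high)
        return 0;
    return ((usage - high) << DELAY_PRECISION_SHIFT) / high;
}

/* Quadratic penalty in jiffies: lenient on small overages, increasingly
 * harsh on large ones, mirroring the max_overage^2 computation above. */
static uint64_t penalty_jiffies(uint64_t max_overage)
{
    uint64_t p = max_overage * max_overage * HZ_DEMO;
    p >>= DELAY_PRECISION_SHIFT;
    p >>= DELAY_SCALING_SHIFT;
    return p;
}
```

Squaring the fixed-point overage is what makes the throttle exponential in effect: doubling the overage quadruples the sleep.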
+18 -9
mm/mmu_notifier.c
··· 307 307 * ->release returns. 308 308 */ 309 309 id = srcu_read_lock(&srcu); 310 - hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist) 310 + hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist, 311 + srcu_read_lock_held(&srcu)) 311 312 /* 312 313 * If ->release runs before mmu_notifier_unregister it must be 313 314 * handled, as it's the only way for the driver to flush all ··· 371 370 372 371 id = srcu_read_lock(&srcu); 373 372 hlist_for_each_entry_rcu(subscription, 374 - &mm->notifier_subscriptions->list, hlist) { 373 + &mm->notifier_subscriptions->list, hlist, 374 + srcu_read_lock_held(&srcu)) { 375 375 if (subscription->ops->clear_flush_young) 376 376 young |= subscription->ops->clear_flush_young( 377 377 subscription, mm, start, end); ··· 391 389 392 390 id = srcu_read_lock(&srcu); 393 391 hlist_for_each_entry_rcu(subscription, 394 - &mm->notifier_subscriptions->list, hlist) { 392 + &mm->notifier_subscriptions->list, hlist, 393 + srcu_read_lock_held(&srcu)) { 395 394 if (subscription->ops->clear_young) 396 395 young |= subscription->ops->clear_young(subscription, 397 396 mm, start, end); ··· 410 407 411 408 id = srcu_read_lock(&srcu); 412 409 hlist_for_each_entry_rcu(subscription, 413 - &mm->notifier_subscriptions->list, hlist) { 410 + &mm->notifier_subscriptions->list, hlist, 411 + srcu_read_lock_held(&srcu)) { 414 412 if (subscription->ops->test_young) { 415 413 young = subscription->ops->test_young(subscription, mm, 416 414 address); ··· 432 428 433 429 id = srcu_read_lock(&srcu); 434 430 hlist_for_each_entry_rcu(subscription, 435 - &mm->notifier_subscriptions->list, hlist) { 431 + &mm->notifier_subscriptions->list, hlist, 432 + srcu_read_lock_held(&srcu)) { 436 433 if (subscription->ops->change_pte) 437 434 subscription->ops->change_pte(subscription, mm, address, 438 435 pte); ··· 481 476 int id; 482 477 483 478 id = srcu_read_lock(&srcu); 484 - hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist) {
479 + hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist, 480 + srcu_read_lock_held(&srcu)) { 485 481 const struct mmu_notifier_ops *ops = subscription->ops; 486 482 487 483 if (ops->invalidate_range_start) { ··· 534 528 int id; 535 529 536 530 id = srcu_read_lock(&srcu); 537 - hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist) { 531 + hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist, 532 + srcu_read_lock_held(&srcu)) { 538 533 /* 539 534 * Call invalidate_range here too to avoid the need for the 540 535 * subsystem of having to register an invalidate_range_end ··· 589 582 590 583 id = srcu_read_lock(&srcu); 591 584 hlist_for_each_entry_rcu(subscription, 592 - &mm->notifier_subscriptions->list, hlist) { 585 + &mm->notifier_subscriptions->list, hlist, 586 + srcu_read_lock_held(&srcu)) { 593 587 if (subscription->ops->invalidate_range) 594 588 subscription->ops->invalidate_range(subscription, mm, 595 589 start, end); ··· 722 714 723 715 spin_lock(&mm->notifier_subscriptions->lock); 724 716 hlist_for_each_entry_rcu(subscription, 725 - &mm->notifier_subscriptions->list, hlist) { 717 + &mm->notifier_subscriptions->list, hlist, 718 + lockdep_is_held(&mm->notifier_subscriptions->lock)) { 726 719 if (subscription->ops != ops) 727 720 continue; 728 721
+7 -3
mm/nommu.c
··· 370 370 EXPORT_SYMBOL_GPL(vm_unmap_aliases); 371 371 372 372 /* 373 - * Implement a stub for vmalloc_sync_all() if the architecture chose not to 374 - * have one. 373 + * Implement a stub for vmalloc_sync_[un]mapping() if the architecture 374 + * chose not to have one. 375 375 */ 376 - void __weak vmalloc_sync_all(void) 376 + void __weak vmalloc_sync_mappings(void) 377 + { 378 + } 379 + 380 + void __weak vmalloc_sync_unmappings(void) 377 381 { 378 382 } 379 383
+30 -11
mm/slub.c
··· 1973 1973 1974 1974 if (node == NUMA_NO_NODE) 1975 1975 searchnode = numa_mem_id(); 1976 - else if (!node_present_pages(node)) 1977 - searchnode = node_to_mem_node(node); 1978 1976 1979 1977 object = get_partial_node(s, get_node(s, searchnode), c, flags); 1980 1978 if (object || node != NUMA_NO_NODE) ··· 2561 2563 struct page *page; 2562 2564 2563 2565 page = c->page; 2564 - if (!page) 2566 + if (!page) { 2567 + /* 2568 + * if the node is not online or has no normal memory, just 2569 + * ignore the node constraint 2570 + */ 2571 + if (unlikely(node != NUMA_NO_NODE && 2572 + !node_state(node, N_NORMAL_MEMORY))) 2573 + node = NUMA_NO_NODE; 2565 2574 goto new_slab; 2575 + } 2566 2576 redo: 2567 2577 2568 2578 if (unlikely(!node_match(page, node))) { 2569 - int searchnode = node; 2570 - 2571 - if (node != NUMA_NO_NODE && !node_present_pages(node)) 2572 - searchnode = node_to_mem_node(node); 2573 - 2574 - if (unlikely(!node_match(page, searchnode))) { 2579 + /* 2580 + * same as above but node_match() being false already 2581 + * implies node != NUMA_NO_NODE 2582 + */ 2583 + if (!node_state(node, N_NORMAL_MEMORY)) { 2584 + node = NUMA_NO_NODE; 2585 + goto redo; 2586 + } else { 2575 2587 stat(s, ALLOC_NODE_MISMATCH); 2576 2588 deactivate_slab(s, page, c->freelist, c); 2577 2589 goto new_slab; ··· 3005 2997 barrier(); 3006 2998 3007 2999 if (likely(page == c->page)) { 3008 - set_freepointer(s, tail_obj, c->freelist); 3000 + void **freelist = READ_ONCE(c->freelist); 3001 + 3002 + set_freepointer(s, tail_obj, freelist); 3009 3003 3010 3004 if (unlikely(!this_cpu_cmpxchg_double( 3011 3005 s->cpu_slab->freelist, s->cpu_slab->tid, 3012 - c->freelist, tid, 3006 + freelist, tid, 3013 3007 head, next_tid(tid)))) { 3014 3008 3015 3009 note_cmpxchg_failure("slab_free", s, tid); ··· 3184 3174 void *object = c->freelist; 3185 3175 3186 3176 if (unlikely(!object)) { 3177 + /* 3178 + * We may have removed an object from c->freelist using 3179 + * the fastpath in the previous iteration; in that case,
3180 + * c->tid has not been bumped yet. 3181 + * Since ___slab_alloc() may reenable interrupts while 3182 + * allocating memory, we should bump c->tid now. 3183 + */ 3184 + c->tid = next_tid(c->tid); 3185 + 3187 3186 /* 3188 3187 * Invoking slow path likely have side-effect 3189 3188 * of re-populating per CPU c->freelist
+6 -2
mm/sparse.c
··· 734 734 struct mem_section *ms = __pfn_to_section(pfn); 735 735 bool section_is_early = early_section(ms); 736 736 struct page *memmap = NULL; 737 + bool empty; 737 738 unsigned long *subsection_map = ms->usage 738 739 ? &ms->usage->subsection_map[0] : NULL; 739 740 ··· 765 764 * For 2/ and 3/ the SPARSEMEM_VMEMMAP={y,n} cases are unified 766 765 */ 767 766 bitmap_xor(subsection_map, map, subsection_map, SUBSECTIONS_PER_SECTION); 768 - if (bitmap_empty(subsection_map, SUBSECTIONS_PER_SECTION)) { 767 + empty = bitmap_empty(subsection_map, SUBSECTIONS_PER_SECTION); 768 + if (empty) { 769 769 unsigned long section_nr = pfn_to_section_nr(pfn); 770 770 771 771 /* ··· 781 779 ms->usage = NULL; 782 780 } 783 781 memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr); 784 - ms->section_mem_map = (unsigned long)NULL; 785 782 } 786 783 787 784 if (section_is_early && memmap) 788 785 free_map_bootmem(memmap); 789 786 else 790 787 depopulate_section_memmap(pfn, nr_pages, altmap); 788 + 789 + if (empty) 790 + ms->section_mem_map = (unsigned long)NULL; 791 791 } 792 792 793 793 static struct page * __meminit section_activate(int nid, unsigned long pfn,
+7 -4
mm/vmalloc.c
··· 1295 1295 * First make sure the mappings are removed from all page-tables 1296 1296 * before they are freed. 1297 1297 */ 1298 - vmalloc_sync_all(); 1298 + vmalloc_sync_unmappings(); 1299 1299 1300 1300 /* 1301 1301 * TODO: to calculate a flush range without looping. ··· 3128 3128 EXPORT_SYMBOL(remap_vmalloc_range); 3129 3129 3130 3130 /* 3131 - * Implement a stub for vmalloc_sync_all() if the architecture chose not to 3132 - * have one. 3131 + * Implement stubs for vmalloc_sync_[un]mappings () if the architecture chose 3132 + * not to have one. 3133 3133 * 3134 3134 * The purpose of this function is to make sure the vmalloc area 3135 3135 * mappings are identical in all page-tables in the system. 3136 3136 */ 3137 - void __weak vmalloc_sync_all(void) 3137 + void __weak vmalloc_sync_mappings(void) 3138 3138 { 3139 3139 } 3140 3140 3141 + void __weak vmalloc_sync_unmappings(void) 3142 + { 3143 + } 3141 3144 3142 3145 static int f(pte_t *pte, unsigned long addr, void *data) 3143 3146 {
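Editor's note: the `vmalloc_sync_mappings()`/`vmalloc_sync_unmappings()` split above relies on weak symbols: these are default no-ops, and an architecture that needs real synchronization supplies a strong definition which the linker prefers. A minimal user-space illustration of the mechanism (GCC/Clang attribute; the function name is made up for the demo):

```c
/*
 * Default fallback, analogous to the __weak stubs in mm/vmalloc.c and
 * mm/nommu.c. If any other translation unit in the program defines a
 * non-weak demo_sync_mappings(), the linker picks that one instead;
 * with no override, this no-op is used.
 */
__attribute__((weak)) int demo_sync_mappings(void)
{
    return 0;   /* "nothing to sync" */
}
```

This is why every architecture does not need to implement the hooks: the generic code can call them unconditionally.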
+3
net/Kconfig
··· 52 52 config NET_EGRESS 53 53 bool 54 54 55 + config NET_REDIRECT 56 + bool 57 + 55 58 config SKB_EXTENSIONS 56 59 bool 57 60
+8 -6
net/bpfilter/main.c
··· 10 10 #include <asm/unistd.h> 11 11 #include "msgfmt.h" 12 12 13 - int debug_fd; 13 + FILE *debug_f; 14 14 15 15 static int handle_get_cmd(struct mbox_request *cmd) 16 16 { ··· 35 35 struct mbox_reply reply; 36 36 int n; 37 37 38 + fprintf(debug_f, "testing the buffer\n"); 38 39 n = read(0, &req, sizeof(req)); 39 40 if (n != sizeof(req)) { 40 - dprintf(debug_fd, "invalid request %d\n", n); 41 + fprintf(debug_f, "invalid request %d\n", n); 41 42 return; 42 43 } 43 44 ··· 48 47 49 48 n = write(1, &reply, sizeof(reply)); 50 49 if (n != sizeof(reply)) { 51 - dprintf(debug_fd, "reply failed %d\n", n); 50 + fprintf(debug_f, "reply failed %d\n", n); 52 51 return; 53 52 } 54 53 } ··· 56 55 57 56 int main(void) 58 57 { 59 - debug_fd = open("/dev/kmsg", 00000002); 60 - dprintf(debug_fd, "Started bpfilter\n"); 58 + debug_f = fopen("/dev/kmsg", "w"); 59 + setvbuf(debug_f, 0, _IOLBF, 0); 60 + fprintf(debug_f, "Started bpfilter\n"); 61 61 loop(); 62 - close(debug_fd); 62 + fclose(debug_f); 63 63 return 0; 64 64 }
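Editor's note: the bpfilter change above swaps raw `dprintf()` on a file descriptor for a buffered `FILE *` set to line buffering, so each log line reaches `/dev/kmsg` as a single write rather than fragmented output. A sketch of the same setup (path and message below are illustrative):

```c
#include <stdio.h>

/*
 * Open a write-only log stream and make it line buffered: output is
 * accumulated and flushed once per newline, mirroring the
 * fopen() + setvbuf(..., _IOLBF, ...) sequence in the patch.
 */
static FILE *open_line_buffered_log(const char *path)
{
    FILE *f = fopen(path, "w");
    if (f)
        setvbuf(f, NULL, _IOLBF, 0);
    return f;
}
```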
+3 -3
net/core/dev.c
··· 4516 4516 /* Reinjected packets coming from act_mirred or similar should 4517 4517 * not get XDP generic processing. 4518 4518 */ 4519 - if (skb_is_tc_redirected(skb)) 4519 + if (skb_is_redirected(skb)) 4520 4520 return XDP_PASS; 4521 4521 4522 4522 /* XDP packets must be linear and must have sufficient headroom ··· 5063 5063 goto out; 5064 5064 } 5065 5065 #endif 5066 - skb_reset_tc(skb); 5066 + skb_reset_redirect(skb); 5067 5067 skip_classify: 5068 5068 if (pfmemalloc && !skb_pfmemalloc_protocol(skb)) 5069 5069 goto drop; ··· 5195 5195 * 5196 5196 * More direct receive version of netif_receive_skb(). It should 5197 5197 * only be used by callers that have a need to skip RPS and Generic XDP. 5198 - * Caller must also take care of handling if (page_is_)pfmemalloc. 5198 + * Caller must also take care of handling if ``(page_is_)pfmemalloc``. 5199 5199 * 5200 5200 * This function may only be called from softirq context and interrupts 5201 5201 * should be enabled.
+1 -1
net/core/pktgen.c
··· 3362 3362 /* skb was 'freed' by stack, so clean few 3363 3363 * bits and reuse it 3364 3364 */ 3365 - skb_reset_tc(skb); 3365 + skb_reset_redirect(skb); 3366 3366 } while (--burst > 0); 3367 3367 goto out; /* Skips xmit_mode M_START_XMIT */ 3368 3368 } else if (pkt_dev->xmit_mode == M_QUEUE_XMIT) {
+8 -4
net/core/sock_map.c
··· 299 299 struct bpf_stab *stab = container_of(map, struct bpf_stab, map); 300 300 int i; 301 301 302 + /* After the sync no updates or deletes will be in-flight so it 303 + * is safe to walk map and remove entries without risking a race 304 + * in EEXIST update case. 305 + */ 302 306 synchronize_rcu(); 303 - raw_spin_lock_bh(&stab->lock); 304 307 for (i = 0; i < stab->map.max_entries; i++) { 305 308 struct sock **psk = &stab->sks[i]; 306 309 struct sock *sk; ··· 317 314 release_sock(sk); 318 315 } 319 316 } 320 - raw_spin_unlock_bh(&stab->lock); 321 317 322 318 /* wait for psock readers accessing its map link */ 323 319 synchronize_rcu(); ··· 1010 1008 struct hlist_node *node; 1011 1009 int i; 1012 1010 1011 + /* After the sync no updates or deletes will be in-flight so it 1012 + * is safe to walk map and remove entries without risking a race 1013 + * in EEXIST update case. 1014 + */ 1013 1015 synchronize_rcu(); 1014 1016 for (i = 0; i < htab->buckets_num; i++) { 1015 1017 bucket = sock_hash_select_bucket(htab, i); 1016 - raw_spin_lock_bh(&bucket->lock); 1017 1018 hlist_for_each_entry_safe(elem, node, &bucket->head, node) { 1018 1019 hlist_del_rcu(&elem->node); 1019 1020 lock_sock(elem->sk); ··· 1025 1020 rcu_read_unlock(); 1026 1021 release_sock(elem->sk); 1027 1022 } 1028 - raw_spin_unlock_bh(&bucket->lock); 1029 1023 } 1030 1024 1031 1025 /* wait for psock readers accessing its map link */
-43
net/dsa/tag_8021q.c
··· 298 298 } 299 299 EXPORT_SYMBOL_GPL(dsa_8021q_xmit); 300 300 301 - /* In the DSA packet_type handler, skb->data points in the middle of the VLAN 302 - * tag, after tpid and before tci. This is because so far, ETH_HLEN 303 - * (DMAC, SMAC, EtherType) bytes were pulled. 304 - * There are 2 bytes of VLAN tag left in skb->data, and upper 305 - * layers expect the 'real' EtherType to be consumed as well. 306 - * Coincidentally, a VLAN header is also of the same size as 307 - * the number of bytes that need to be pulled. 308 - * 309 - * skb_mac_header skb->data 310 - * | | 311 - * v v 312 - * | | | | | | | | | | | | | | | | | | | 313 - * +-----------------------+-----------------------+-------+-------+-------+ 314 - * | Destination MAC | Source MAC | TPID | TCI | EType | 315 - * +-----------------------+-----------------------+-------+-------+-------+ 316 - * ^ | | 317 - * |<--VLAN_HLEN-->to <---VLAN_HLEN---> 318 - * from | 319 - * >>>>>>> v 320 - * >>>>>>> | | | | | | | | | | | | | | | 321 - * >>>>>>> +-----------------------+-----------------------+-------+ 322 - * >>>>>>> | Destination MAC | Source MAC | EType | 323 - * +-----------------------+-----------------------+-------+ 324 - * ^ ^ 325 - * (now part of | | 326 - * skb->head) skb_mac_header skb->data 327 - */ 328 - struct sk_buff *dsa_8021q_remove_header(struct sk_buff *skb) 329 - { 330 - u8 *from = skb_mac_header(skb); 331 - u8 *dest = from + VLAN_HLEN; 332 - 333 - memmove(dest, from, ETH_HLEN - VLAN_HLEN); 334 - skb_pull(skb, VLAN_HLEN); 335 - skb_push(skb, ETH_HLEN); 336 - skb_reset_mac_header(skb); 337 - skb_reset_mac_len(skb); 338 - skb_pull_rcsum(skb, ETH_HLEN); 339 - 340 - return skb; 341 - } 342 - EXPORT_SYMBOL_GPL(dsa_8021q_remove_header); 343 - 344 301 MODULE_LICENSE("GPL v2");
+2
net/dsa/tag_brcm.c
··· 140 140 /* Remove Broadcom tag and update checksum */ 141 141 skb_pull_rcsum(skb, BRCM_TAG_LEN); 142 142 143 + skb->offload_fwd_mark = 1; 144 + 143 145 return skb; 144 146 } 145 147
+9 -10
net/dsa/tag_sja1105.c
··· 250 250 { 251 251 struct sja1105_meta meta = {0}; 252 252 int source_port, switch_id; 253 - struct vlan_ethhdr *hdr; 253 + struct ethhdr *hdr; 254 254 u16 tpid, vid, tci; 255 255 bool is_link_local; 256 256 bool is_tagged; 257 257 bool is_meta; 258 258 259 - hdr = vlan_eth_hdr(skb); 260 - tpid = ntohs(hdr->h_vlan_proto); 259 + hdr = eth_hdr(skb); 260 + tpid = ntohs(hdr->h_proto); 261 261 is_tagged = (tpid == ETH_P_SJA1105); 262 262 is_link_local = sja1105_is_link_local(skb); 263 263 is_meta = sja1105_is_meta_frame(skb); ··· 266 266 267 267 if (is_tagged) { 268 268 /* Normal traffic path. */ 269 - tci = ntohs(hdr->h_vlan_TCI); 269 + skb_push_rcsum(skb, ETH_HLEN); 270 + __skb_vlan_pop(skb, &tci); 271 + skb_pull_rcsum(skb, ETH_HLEN); 272 + skb_reset_network_header(skb); 273 + skb_reset_transport_header(skb); 274 + 270 275 vid = tci & VLAN_VID_MASK; 271 276 source_port = dsa_8021q_rx_source_port(vid); 272 277 switch_id = dsa_8021q_rx_switch_id(vid); ··· 299 294 netdev_warn(netdev, "Couldn't decode source port\n"); 300 295 return NULL; 301 296 } 302 - 303 - /* Delete/overwrite fake VLAN header, DSA expects to not find 304 - * it there, see dsa_switch_rcv: skb_push(skb, ETH_HLEN). 305 - */ 306 - if (is_tagged) 307 - skb = dsa_8021q_remove_header(skb); 308 297 309 298 return sja1105_rcv_meta_state_machine(skb, &meta, is_link_local, 310 299 is_meta);
+3 -1
net/ethtool/debug.c
··· 109 109 if (ret < 0) 110 110 return ret; 111 111 dev = req_info.dev; 112 + ret = -EOPNOTSUPP; 112 113 if (!dev->ethtool_ops->get_msglevel || !dev->ethtool_ops->set_msglevel) 113 - return -EOPNOTSUPP; 114 + goto out_dev; 114 115 115 116 rtnl_lock(); 116 117 ret = ethnl_ops_begin(dev); ··· 132 131 ethnl_ops_complete(dev); 133 132 out_rtnl: 134 133 rtnl_unlock(); 134 + out_dev: 135 135 dev_put(dev); 136 136 return ret; 137 137 }
+3 -1
net/ethtool/linkinfo.c
··· 128 128 if (ret < 0) 129 129 return ret; 130 130 dev = req_info.dev; 131 + ret = -EOPNOTSUPP; 131 132 if (!dev->ethtool_ops->get_link_ksettings || 132 133 !dev->ethtool_ops->set_link_ksettings) 133 - return -EOPNOTSUPP; 134 + goto out_dev; 134 135 135 136 rtnl_lock(); 136 137 ret = ethnl_ops_begin(dev); ··· 165 164 ethnl_ops_complete(dev); 166 165 out_rtnl: 167 166 rtnl_unlock(); 167 + out_dev: 168 168 dev_put(dev); 169 169 return ret; 170 170 }
+3 -1
net/ethtool/linkmodes.c
··· 341 341 if (ret < 0) 342 342 return ret; 343 343 dev = req_info.dev; 344 + ret = -EOPNOTSUPP; 344 345 if (!dev->ethtool_ops->get_link_ksettings || 345 346 !dev->ethtool_ops->set_link_ksettings) 346 - return -EOPNOTSUPP; 347 + goto out_dev; 347 348 348 349 rtnl_lock(); 349 350 ret = ethnl_ops_begin(dev); ··· 374 373 ethnl_ops_complete(dev); 375 374 out_rtnl: 376 375 rtnl_unlock(); 376 + out_dev: 377 377 dev_put(dev); 378 378 return ret; 379 379 }
+12 -4
net/ethtool/netlink.c
··· 40 40 struct nlattr *tb[ETHTOOL_A_HEADER_MAX + 1]; 41 41 const struct nlattr *devname_attr; 42 42 struct net_device *dev = NULL; 43 + u32 flags = 0; 43 44 int ret; 44 45 45 46 if (!header) { ··· 51 50 ethnl_header_policy, extack); 52 51 if (ret < 0) 53 52 return ret; 54 - devname_attr = tb[ETHTOOL_A_HEADER_DEV_NAME]; 53 + if (tb[ETHTOOL_A_HEADER_FLAGS]) { 54 + flags = nla_get_u32(tb[ETHTOOL_A_HEADER_FLAGS]); 55 + if (flags & ~ETHTOOL_FLAG_ALL) { 56 + NL_SET_ERR_MSG_ATTR(extack, tb[ETHTOOL_A_HEADER_FLAGS], 57 + "unrecognized request flags"); 58 + nl_set_extack_cookie_u32(extack, ETHTOOL_FLAG_ALL); 59 + return -EOPNOTSUPP; 60 + } 61 + } 55 62 63 + devname_attr = tb[ETHTOOL_A_HEADER_DEV_NAME]; 56 64 if (tb[ETHTOOL_A_HEADER_DEV_INDEX]) { 57 65 u32 ifindex = nla_get_u32(tb[ETHTOOL_A_HEADER_DEV_INDEX]); 58 66 ··· 100 90 } 101 91 102 92 req_info->dev = dev; 103 - if (tb[ETHTOOL_A_HEADER_FLAGS]) 104 - req_info->flags = nla_get_u32(tb[ETHTOOL_A_HEADER_FLAGS]); 105 - 93 + req_info->flags = flags; 106 94 return 0; 107 95 } 108 96
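Editor's note: the netlink.c hunk above validates request flags up front and rejects unknown bits with EOPNOTSUPP, so userspace learns immediately that a kernel does not understand a new flag instead of having it silently ignored. The core check reduces to a mask test (the flag names and errno value below are illustrative, not the ethtool uapi definitions):

```c
#include <stdint.h>

#define DEMO_FLAG_COMPACT  (1u << 0)
#define DEMO_FLAG_OMIT     (1u << 1)
#define DEMO_FLAG_ALL      (DEMO_FLAG_COMPACT | DEMO_FLAG_OMIT)

#define DEMO_EOPNOTSUPP    95

/*
 * Accept only flag bits this build knows about; any unrecognized bit makes
 * the whole request fail, mirroring the ETHTOOL_FLAG_ALL check added above.
 */
static int demo_check_flags(uint32_t flags)
{
    if (flags & ~DEMO_FLAG_ALL)
        return -DEMO_EOPNOTSUPP;
    return 0;
}
```

Failing closed like this keeps the flag namespace extensible: new flags can later change semantics without old kernels quietly mis-handling them.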
+3 -1
net/ethtool/wol.c
··· 129 129 if (ret < 0) 130 130 return ret; 131 131 dev = req_info.dev; 132 + ret = -EOPNOTSUPP; 132 133 if (!dev->ethtool_ops->get_wol || !dev->ethtool_ops->set_wol) 133 - return -EOPNOTSUPP; 134 + goto out_dev; 134 135 135 136 rtnl_lock(); 136 137 ret = ethnl_ops_begin(dev); ··· 174 173 ethnl_ops_complete(dev); 175 174 out_rtnl: 176 175 rtnl_unlock(); 176 + out_dev: 177 177 dev_put(dev); 178 178 return ret; 179 179 }
+2 -7
net/hsr/hsr_framereg.c
··· 483 483 struct hsr_port *port; 484 484 unsigned long tdiff; 485 485 486 - rcu_read_lock(); 487 486 node = find_node_by_addr_A(&hsr->node_db, addr); 488 - if (!node) { 489 - rcu_read_unlock(); 490 - return -ENOENT; /* No such entry */ 491 - } 487 + if (!node) 488 + return -ENOENT; 492 489 493 490 ether_addr_copy(addr_b, node->macaddress_B); 494 491 ··· 519 522 } else { 520 523 *addr_b_ifindex = -1; 521 524 } 522 - 523 - rcu_read_unlock(); 524 525 525 526 return 0; 526 527 }
+44 -30
net/hsr/hsr_netlink.c
··· 250 250 if (!na) 251 251 goto invalid; 252 252 253 - hsr_dev = __dev_get_by_index(genl_info_net(info), 254 - nla_get_u32(info->attrs[HSR_A_IFINDEX])); 253 + rcu_read_lock(); 254 + hsr_dev = dev_get_by_index_rcu(genl_info_net(info), 255 + nla_get_u32(info->attrs[HSR_A_IFINDEX])); 255 256 if (!hsr_dev) 256 - goto invalid; 257 + goto rcu_unlock; 257 258 if (!is_hsr_master(hsr_dev)) 258 - goto invalid; 259 + goto rcu_unlock; 259 260 260 261 /* Send reply */ 261 - skb_out = genlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); 262 + skb_out = genlmsg_new(NLMSG_GOODSIZE, GFP_ATOMIC); 262 263 if (!skb_out) { 263 264 res = -ENOMEM; 264 265 goto fail; ··· 313 312 res = nla_put_u16(skb_out, HSR_A_IF1_SEQ, hsr_node_if1_seq); 314 313 if (res < 0) 315 314 goto nla_put_failure; 316 - rcu_read_lock(); 317 315 port = hsr_port_get_hsr(hsr, HSR_PT_SLAVE_A); 318 316 if (port) 319 317 res = nla_put_u32(skb_out, HSR_A_IF1_IFINDEX, 320 318 port->dev->ifindex); 321 - rcu_read_unlock(); 322 319 if (res < 0) 323 320 goto nla_put_failure; 324 321 ··· 326 327 res = nla_put_u16(skb_out, HSR_A_IF2_SEQ, hsr_node_if2_seq); 327 328 if (res < 0) 328 329 goto nla_put_failure; 329 - rcu_read_lock(); 330 330 port = hsr_port_get_hsr(hsr, HSR_PT_SLAVE_B); 331 331 if (port) 332 332 res = nla_put_u32(skb_out, HSR_A_IF2_IFINDEX, 333 333 port->dev->ifindex); 334 - rcu_read_unlock(); 335 334 if (res < 0) 336 335 goto nla_put_failure; 336 + 337 + rcu_read_unlock(); 337 338 338 339 genlmsg_end(skb_out, msg_head); 339 340 genlmsg_unicast(genl_info_net(info), skb_out, info->snd_portid); 340 341 341 342 return 0; 342 343 344 + rcu_unlock: 345 + rcu_read_unlock(); 343 346 invalid: 344 347 netlink_ack(skb_in, nlmsg_hdr(skb_in), -EINVAL, NULL); 345 348 return 0; ··· 351 350 /* Fall through */ 352 351 353 352 fail: 353 + rcu_read_unlock(); 354 354 return res; 355 355 } 356 356 ··· 359 357 */ 360 358 static int hsr_get_node_list(struct sk_buff *skb_in, struct genl_info *info) 361 359 { 362 - /* For receiving */
363 - struct nlattr *na; 364 - struct net_device *hsr_dev; 365 - 366 - /* For sending */ 367 - struct sk_buff *skb_out; 368 - void *msg_head; 369 - struct hsr_priv *hsr; 370 - void *pos; 371 360 unsigned char addr[ETH_ALEN]; 361 + struct net_device *hsr_dev; 362 + struct sk_buff *skb_out; 363 + struct hsr_priv *hsr; 364 + bool restart = false; 365 + struct nlattr *na; 366 + void *pos = NULL; 367 + void *msg_head; 372 368 int res; 373 369 374 370 if (!info) ··· 376 376 if (!na) 377 377 goto invalid; 378 378 379 - hsr_dev = __dev_get_by_index(genl_info_net(info), 380 - nla_get_u32(info->attrs[HSR_A_IFINDEX])); 379 + rcu_read_lock(); 380 + hsr_dev = dev_get_by_index_rcu(genl_info_net(info), 381 + nla_get_u32(info->attrs[HSR_A_IFINDEX])); 381 382 if (!hsr_dev) 382 - goto invalid; 383 + goto rcu_unlock; 383 384 if (!is_hsr_master(hsr_dev)) 384 - goto invalid; 385 + goto rcu_unlock; 385 386 387 + restart: 386 388 /* Send reply */ 387 - skb_out = genlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); 389 + skb_out = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_ATOMIC); 388 390 if (!skb_out) { 389 391 res = -ENOMEM; 390 392 goto fail; ··· 400 398 goto nla_put_failure; 401 399 } 402 400 403 - res = nla_put_u32(skb_out, HSR_A_IFINDEX, hsr_dev->ifindex); 404 - if (res < 0) 405 - goto nla_put_failure; 401 + if (!restart) { 402 + res = nla_put_u32(skb_out, HSR_A_IFINDEX, hsr_dev->ifindex); 403 + if (res < 0) 404 + goto nla_put_failure; 405 + } 406 406 407 407 hsr = netdev_priv(hsr_dev); 408 408 409 - rcu_read_lock(); 410 - pos = hsr_get_next_node(hsr, NULL, addr); 409 + if (!pos) 410 + pos = hsr_get_next_node(hsr, NULL, addr); 411 411 while (pos) { 412 412 res = nla_put(skb_out, HSR_A_NODE_ADDR, ETH_ALEN, addr); 413 413 if (res < 0) { 414 - rcu_read_unlock(); 414 + if (res == -EMSGSIZE) { 415 + genlmsg_end(skb_out, msg_head); 416 + genlmsg_unicast(genl_info_net(info), skb_out, 417 + info->snd_portid); 418 + restart = true; 419 + goto restart; 420 + } 415 421 goto nla_put_failure; 416 422 }
417 423 pos = hsr_get_next_node(hsr, pos, addr); ··· 431 421 432 422 return 0; 433 423 424 + rcu_unlock: 425 + rcu_read_unlock(); 434 426 invalid: 435 427 netlink_ack(skb_in, nlmsg_hdr(skb_in), -EINVAL, NULL); 436 428 return 0; 437 429 438 430 nla_put_failure: 439 - kfree_skb(skb_out); 431 + nlmsg_free(skb_out); 440 432 /* Fall through */ 441 433 442 434 fail: 435 + rcu_read_unlock(); 443 436 return res; 444 437 } 445 438 ··· 469 456 .version = 1, 470 457 .maxattr = HSR_A_MAX, 471 458 .policy = hsr_genl_policy, 459 + .netnsok = true, 472 460 .module = THIS_MODULE, 473 461 .ops = hsr_ops, 474 462 .n_ops = ARRAY_SIZE(hsr_ops),
+4 -4
net/hsr/hsr_slave.c
··· 151 151 if (!port) 152 152 return -ENOMEM; 153 153 154 + port->hsr = hsr; 155 + port->dev = dev; 156 + port->type = type; 157 + 154 158 if (type != HSR_PT_MASTER) { 155 159 res = hsr_portdev_setup(hsr, dev, port, extack); 156 160 if (res) 157 161 goto fail_dev_setup; 158 162 } 159 - 160 - port->hsr = hsr; 161 - port->dev = dev; 162 - port->type = type; 163 163 164 164 list_add_tail_rcu(&port->port_list, &hsr->ports); 165 165 synchronize_rcu();
+2
net/ipv4/fib_frontend.c
··· 997 997 return -ENOENT; 998 998 } 999 999 1000 + rcu_read_lock(); 1000 1001 err = fib_table_dump(tb, skb, cb, &filter); 1002 + rcu_read_unlock(); 1001 1003 return skb->len ? : err; 1002 1004 } 1003 1005
+87 -18
net/ipv4/ip_gre.c
··· 1153 1153 if (data[IFLA_GRE_FWMARK]) 1154 1154 *fwmark = nla_get_u32(data[IFLA_GRE_FWMARK]); 1155 1155 1156 + return 0; 1157 + } 1158 + 1159 + static int erspan_netlink_parms(struct net_device *dev, 1160 + struct nlattr *data[], 1161 + struct nlattr *tb[], 1162 + struct ip_tunnel_parm *parms, 1163 + __u32 *fwmark) 1164 + { 1165 + struct ip_tunnel *t = netdev_priv(dev); 1166 + int err; 1167 + 1168 + err = ipgre_netlink_parms(dev, data, tb, parms, fwmark); 1169 + if (err) 1170 + return err; 1171 + if (!data) 1172 + return 0; 1173 + 1156 1174 if (data[IFLA_GRE_ERSPAN_VER]) { 1157 1175 t->erspan_ver = nla_get_u8(data[IFLA_GRE_ERSPAN_VER]); 1158 1176 ··· 1294 1276 ip_tunnel_setup(dev, gre_tap_net_id); 1295 1277 } 1296 1278 1297 - static int ipgre_newlink(struct net *src_net, struct net_device *dev, 1298 - struct nlattr *tb[], struct nlattr *data[], 1299 - struct netlink_ext_ack *extack) 1279 + static int 1280 + ipgre_newlink_encap_setup(struct net_device *dev, struct nlattr *data[]) 1300 1281 { 1301 - struct ip_tunnel_parm p; 1302 1282 struct ip_tunnel_encap ipencap; 1303 - __u32 fwmark = 0; 1304 - int err; 1305 1283 1306 1284 if (ipgre_netlink_encap_parms(data, &ipencap)) { 1307 1285 struct ip_tunnel *t = netdev_priv(dev); 1308 - err = ip_tunnel_encap_setup(t, &ipencap); 1286 + int err = ip_tunnel_encap_setup(t, &ipencap); 1309 1287 1310 1288 if (err < 0) 1311 1289 return err; 1312 1290 } 1313 1291 1292 + return 0; 1293 + } 1294 + 1295 + static int ipgre_newlink(struct net *src_net, struct net_device *dev, 1296 + struct nlattr *tb[], struct nlattr *data[], 1297 + struct netlink_ext_ack *extack) 1298 + { 1299 + struct ip_tunnel_parm p; 1300 + __u32 fwmark = 0; 1301 + int err; 1302 + 1303 + err = ipgre_newlink_encap_setup(dev, data); 1304 + if (err) 1305 + return err; 1306 + 1314 1307 err = ipgre_netlink_parms(dev, data, tb, &p, &fwmark); 1315 1308 if (err < 0) 1309 + return err; 1310 + return ip_tunnel_newlink(dev, tb, &p, fwmark); 1311 + } 1312 + 1313 + static int 
erspan_newlink(struct net *src_net, struct net_device *dev, 1314 + struct nlattr *tb[], struct nlattr *data[], 1315 + struct netlink_ext_ack *extack) 1316 + { 1317 + struct ip_tunnel_parm p; 1318 + __u32 fwmark = 0; 1319 + int err; 1320 + 1321 + err = ipgre_newlink_encap_setup(dev, data); 1322 + if (err) 1323 + return err; 1324 + 1325 + err = erspan_netlink_parms(dev, data, tb, &p, &fwmark); 1326 + if (err) 1316 1327 return err; 1317 1328 return ip_tunnel_newlink(dev, tb, &p, fwmark); 1318 1329 } ··· 1351 1304 struct netlink_ext_ack *extack) 1352 1305 { 1353 1306 struct ip_tunnel *t = netdev_priv(dev); 1354 - struct ip_tunnel_encap ipencap; 1355 1307 __u32 fwmark = t->fwmark; 1356 1308 struct ip_tunnel_parm p; 1357 1309 int err; 1358 1310 1359 - if (ipgre_netlink_encap_parms(data, &ipencap)) { 1360 - err = ip_tunnel_encap_setup(t, &ipencap); 1361 - 1362 - if (err < 0) 1363 - return err; 1364 - } 1311 + err = ipgre_newlink_encap_setup(dev, data); 1312 + if (err) 1313 + return err; 1365 1314 1366 1315 err = ipgre_netlink_parms(dev, data, tb, &p, &fwmark); 1367 1316 if (err < 0) ··· 1370 1327 t->parms.i_flags = p.i_flags; 1371 1328 t->parms.o_flags = p.o_flags; 1372 1329 1373 - if (strcmp(dev->rtnl_link_ops->kind, "erspan")) 1374 - ipgre_link_update(dev, !tb[IFLA_MTU]); 1330 + ipgre_link_update(dev, !tb[IFLA_MTU]); 1331 + 1332 + return 0; 1333 + } 1334 + 1335 + static int erspan_changelink(struct net_device *dev, struct nlattr *tb[], 1336 + struct nlattr *data[], 1337 + struct netlink_ext_ack *extack) 1338 + { 1339 + struct ip_tunnel *t = netdev_priv(dev); 1340 + __u32 fwmark = t->fwmark; 1341 + struct ip_tunnel_parm p; 1342 + int err; 1343 + 1344 + err = ipgre_newlink_encap_setup(dev, data); 1345 + if (err) 1346 + return err; 1347 + 1348 + err = erspan_netlink_parms(dev, data, tb, &p, &fwmark); 1349 + if (err < 0) 1350 + return err; 1351 + 1352 + err = ip_tunnel_changelink(dev, tb, &p, fwmark); 1353 + if (err < 0) 1354 + return err; 1355 + 1356 + t->parms.i_flags = 
p.i_flags; 1357 + t->parms.o_flags = p.o_flags; 1375 1358 1376 1359 return 0; 1377 1360 } ··· 1588 1519 .priv_size = sizeof(struct ip_tunnel), 1589 1520 .setup = erspan_setup, 1590 1521 .validate = erspan_validate, 1591 - .newlink = ipgre_newlink, 1592 - .changelink = ipgre_changelink, 1522 + .newlink = erspan_newlink, 1523 + .changelink = erspan_changelink, 1593 1524 .dellink = ip_tunnel_dellink, 1594 1525 .get_size = ipgre_get_size, 1595 1526 .fill_info = ipgre_fill_info,
+3 -1
net/ipv4/tcp.c
··· 2948 2948 err = -EPERM; 2949 2949 else if (tp->repair_queue == TCP_SEND_QUEUE) 2950 2950 WRITE_ONCE(tp->write_seq, val); 2951 - else if (tp->repair_queue == TCP_RECV_QUEUE) 2951 + else if (tp->repair_queue == TCP_RECV_QUEUE) { 2952 2952 WRITE_ONCE(tp->rcv_nxt, val); 2953 + WRITE_ONCE(tp->copied_seq, val); 2954 + } 2953 2955 else 2954 2956 err = -EINVAL; 2955 2957 break;
+10 -2
net/ipv4/tcp_output.c
··· 1109 1109 1110 1110 if (unlikely(!skb)) 1111 1111 return -ENOBUFS; 1112 + /* retransmit skbs might have a non zero value in skb->dev 1113 + * because skb->dev is aliased with skb->rbnode.rb_left 1114 + */ 1115 + skb->dev = NULL; 1112 1116 } 1113 1117 1114 1118 inet = inet_sk(sk); ··· 3041 3037 3042 3038 tcp_skb_tsorted_save(skb) { 3043 3039 nskb = __pskb_copy(skb, MAX_TCP_HEADER, GFP_ATOMIC); 3044 - err = nskb ? tcp_transmit_skb(sk, nskb, 0, GFP_ATOMIC) : 3045 - -ENOBUFS; 3040 + if (nskb) { 3041 + nskb->dev = NULL; 3042 + err = tcp_transmit_skb(sk, nskb, 0, GFP_ATOMIC); 3043 + } else { 3044 + err = -ENOBUFS; 3045 + } 3046 3046 } tcp_skb_tsorted_restore(skb); 3047 3047 3048 3048 if (!err) {
+3
net/netfilter/nf_flow_table_core.c
··· 613 613 nf_flow_table_iterate(flow_table, nf_flow_table_do_cleanup, NULL); 614 614 nf_flow_table_iterate(flow_table, nf_flow_offload_gc_step, flow_table); 615 615 nf_flow_table_offload_flush(flow_table); 616 + if (nf_flowtable_hw_offload(flow_table)) 617 + nf_flow_table_iterate(flow_table, nf_flow_offload_gc_step, 618 + flow_table); 616 619 rhashtable_destroy(&flow_table->rhashtable); 617 620 mutex_destroy(&flow_table->flow_block_lock); 618 621 }
+10 -4
net/netfilter/nf_flow_table_ip.c
··· 146 146 147 147 if (test_bit(NF_FLOW_SNAT, &flow->flags) && 148 148 (nf_flow_snat_port(flow, skb, thoff, iph->protocol, dir) < 0 || 149 - nf_flow_snat_ip(flow, skb, iph, thoff, dir) < 0)) 149 + nf_flow_snat_ip(flow, skb, ip_hdr(skb), thoff, dir) < 0)) 150 150 return -1; 151 + 152 + iph = ip_hdr(skb); 151 153 if (test_bit(NF_FLOW_DNAT, &flow->flags) && 152 154 (nf_flow_dnat_port(flow, skb, thoff, iph->protocol, dir) < 0 || 153 - nf_flow_dnat_ip(flow, skb, iph, thoff, dir) < 0)) 155 + nf_flow_dnat_ip(flow, skb, ip_hdr(skb), thoff, dir) < 0)) 154 156 return -1; 155 157 156 158 return 0; ··· 191 189 if (!pskb_may_pull(skb, thoff + sizeof(*ports))) 192 190 return -1; 193 191 192 + iph = ip_hdr(skb); 194 193 ports = (struct flow_ports *)(skb_network_header(skb) + thoff); 195 194 196 195 tuple->src_v4.s_addr = iph->saddr; ··· 420 417 421 418 if (test_bit(NF_FLOW_SNAT, &flow->flags) && 422 419 (nf_flow_snat_port(flow, skb, thoff, ip6h->nexthdr, dir) < 0 || 423 - nf_flow_snat_ipv6(flow, skb, ip6h, thoff, dir) < 0)) 420 + nf_flow_snat_ipv6(flow, skb, ipv6_hdr(skb), thoff, dir) < 0)) 424 421 return -1; 422 + 423 + ip6h = ipv6_hdr(skb); 425 424 if (test_bit(NF_FLOW_DNAT, &flow->flags) && 426 425 (nf_flow_dnat_port(flow, skb, thoff, ip6h->nexthdr, dir) < 0 || 427 - nf_flow_dnat_ipv6(flow, skb, ip6h, thoff, dir) < 0)) 426 + nf_flow_dnat_ipv6(flow, skb, ipv6_hdr(skb), thoff, dir) < 0)) 428 427 return -1; 429 428 430 429 return 0; ··· 455 450 if (!pskb_may_pull(skb, thoff + sizeof(*ports))) 456 451 return -1; 457 452 453 + ip6h = ipv6_hdr(skb); 458 454 ports = (struct flow_ports *)(skb_network_header(skb) + thoff); 459 455 460 456 tuple->src_v6 = ip6h->saddr;
+1
net/netfilter/nf_flow_table_offload.c
··· 120 120 default: 121 121 return -EOPNOTSUPP; 122 122 } 123 + mask->control.addr_type = 0xffff; 123 124 match->dissector.used_keys |= BIT(key->control.addr_type); 124 125 mask->basic.n_proto = 0xffff; 125 126
+5
net/netfilter/nf_tables_api.c
··· 5106 5106 err = -EBUSY; 5107 5107 else if (!(nlmsg_flags & NLM_F_EXCL)) 5108 5108 err = 0; 5109 + } else if (err == -ENOTEMPTY) { 5110 + /* ENOTEMPTY reports overlapping between this element 5111 + * and an existing one. 5112 + */ 5113 + err = -EEXIST; 5109 5114 } 5110 5115 goto err_element_clash; 5111 5116 }
+12
net/netfilter/nft_fwd_netdev.c
··· 28 28 struct nft_fwd_netdev *priv = nft_expr_priv(expr); 29 29 int oif = regs->data[priv->sreg_dev]; 30 30 31 + /* This is used by ifb only. */ 32 + skb_set_redirected(pkt->skb, true); 33 + 31 34 nf_fwd_netdev_egress(pkt, oif); 32 35 regs->verdict.code = NF_STOLEN; 33 36 } ··· 193 190 return -1; 194 191 } 195 192 193 + static int nft_fwd_validate(const struct nft_ctx *ctx, 194 + const struct nft_expr *expr, 195 + const struct nft_data **data) 196 + { 197 + return nft_chain_validate_hooks(ctx->chain, (1 << NF_NETDEV_INGRESS)); 198 + } 199 + 196 200 static struct nft_expr_type nft_fwd_netdev_type; 197 201 static const struct nft_expr_ops nft_fwd_neigh_netdev_ops = { 198 202 .type = &nft_fwd_netdev_type, ··· 207 197 .eval = nft_fwd_neigh_eval, 208 198 .init = nft_fwd_neigh_init, 209 199 .dump = nft_fwd_neigh_dump, 200 + .validate = nft_fwd_validate, 210 201 }; 211 202 212 203 static const struct nft_expr_ops nft_fwd_netdev_ops = { ··· 216 205 .eval = nft_fwd_netdev_eval, 217 206 .init = nft_fwd_netdev_init, 218 207 .dump = nft_fwd_netdev_dump, 208 + .validate = nft_fwd_validate, 219 209 .offload = nft_fwd_netdev_offload, 220 210 }; 221 211
+27 -7
net/netfilter/nft_set_pipapo.c
··· 1164 1164 struct nft_pipapo_field *f; 1165 1165 int i, bsize_max, err = 0; 1166 1166 1167 + if (nft_set_ext_exists(ext, NFT_SET_EXT_KEY_END)) 1168 + end = (const u8 *)nft_set_ext_key_end(ext)->data; 1169 + else 1170 + end = start; 1171 + 1167 1172 dup = pipapo_get(net, set, start, genmask); 1168 - if (PTR_ERR(dup) == -ENOENT) { 1169 - if (nft_set_ext_exists(ext, NFT_SET_EXT_KEY_END)) { 1170 - end = (const u8 *)nft_set_ext_key_end(ext)->data; 1171 - dup = pipapo_get(net, set, end, nft_genmask_next(net)); 1172 - } else { 1173 - end = start; 1173 + if (!IS_ERR(dup)) { 1174 + /* Check if we already have the same exact entry */ 1175 + const struct nft_data *dup_key, *dup_end; 1176 + 1177 + dup_key = nft_set_ext_key(&dup->ext); 1178 + if (nft_set_ext_exists(&dup->ext, NFT_SET_EXT_KEY_END)) 1179 + dup_end = nft_set_ext_key_end(&dup->ext); 1180 + else 1181 + dup_end = dup_key; 1182 + 1183 + if (!memcmp(start, dup_key->data, sizeof(*dup_key->data)) && 1184 + !memcmp(end, dup_end->data, sizeof(*dup_end->data))) { 1185 + *ext2 = &dup->ext; 1186 + return -EEXIST; 1174 1187 } 1188 + 1189 + return -ENOTEMPTY; 1190 + } 1191 + 1192 + if (PTR_ERR(dup) == -ENOENT) { 1193 + /* Look for partially overlapping entries */ 1194 + dup = pipapo_get(net, set, end, nft_genmask_next(net)); 1175 1195 } 1176 1196 1177 1197 if (PTR_ERR(dup) != -ENOENT) { 1178 1198 if (IS_ERR(dup)) 1179 1199 return PTR_ERR(dup); 1180 1200 *ext2 = &dup->ext; 1181 - return -EEXIST; 1201 + return -ENOTEMPTY; 1182 1202 } 1183 1203 1184 1204 /* Validate */
+78 -9
net/netfilter/nft_set_rbtree.c
··· 33 33 (*nft_set_ext_flags(&rbe->ext) & NFT_SET_ELEM_INTERVAL_END); 34 34 } 35 35 36 + static bool nft_rbtree_interval_start(const struct nft_rbtree_elem *rbe) 37 + { 38 + return !nft_rbtree_interval_end(rbe); 39 + } 40 + 36 41 static bool nft_rbtree_equal(const struct nft_set *set, const void *this, 37 42 const struct nft_rbtree_elem *interval) 38 43 { ··· 69 64 if (interval && 70 65 nft_rbtree_equal(set, this, interval) && 71 66 nft_rbtree_interval_end(rbe) && 72 - !nft_rbtree_interval_end(interval)) 67 + nft_rbtree_interval_start(interval)) 73 68 continue; 74 69 interval = rbe; 75 70 } else if (d > 0) ··· 94 89 95 90 if (set->flags & NFT_SET_INTERVAL && interval != NULL && 96 91 nft_set_elem_active(&interval->ext, genmask) && 97 - !nft_rbtree_interval_end(interval)) { 92 + nft_rbtree_interval_start(interval)) { 98 93 *ext = &interval->ext; 99 94 return true; 100 95 } ··· 213 208 u8 genmask = nft_genmask_next(net); 214 209 struct nft_rbtree_elem *rbe; 215 210 struct rb_node *parent, **p; 211 + bool overlap = false; 216 212 int d; 213 + 214 + /* Detect overlaps as we descend the tree. Set the flag in these cases: 215 + * 216 + * a1. |__ _ _? >|__ _ _ (insert start after existing start) 217 + * a2. _ _ __>| ?_ _ __| (insert end before existing end) 218 + * a3. _ _ ___| ?_ _ _>| (insert end after existing end) 219 + * a4. >|__ _ _ _ _ __| (insert start before existing end) 220 + * 221 + * and clear it later on, as we eventually reach the points indicated by 222 + * '?' above, in the cases described below. We'll always meet these 223 + * later, locally, due to tree ordering, and overlaps for the intervals 224 + * that are the closest together are always evaluated last. 225 + * 226 + * b1. |__ _ _! >|__ _ _ (insert start after existing end) 227 + * b2. _ _ __>| !_ _ __| (insert end before existing start) 228 + * b3. !_____>| (insert end after existing start) 229 + * 230 + * Case a4. 
resolves to b1.: 231 + * - if the inserted start element is the leftmost, because the '0' 232 + * element in the tree serves as end element 233 + * - otherwise, if an existing end is found. Note that end elements are 234 + * always inserted after corresponding start elements. 235 + * 236 + * For a new, rightmost pair of elements, we'll hit cases b1. and b3., 237 + * in that order. 238 + * 239 + * The flag is also cleared in two special cases: 240 + * 241 + * b4. |__ _ _!|<_ _ _ (insert start right before existing end) 242 + * b5. |__ _ >|!__ _ _ (insert end right after existing start) 243 + * 244 + * which always happen as last step and imply that no further 245 + * overlapping is possible. 246 + */ 217 247 218 248 parent = NULL; 219 249 p = &priv->root.rb_node; ··· 258 218 d = memcmp(nft_set_ext_key(&rbe->ext), 259 219 nft_set_ext_key(&new->ext), 260 220 set->klen); 261 - if (d < 0) 221 + if (d < 0) { 262 222 p = &parent->rb_left; 263 - else if (d > 0) 223 + 224 + if (nft_rbtree_interval_start(new)) { 225 + overlap = nft_rbtree_interval_start(rbe) && 226 + nft_set_elem_active(&rbe->ext, 227 + genmask); 228 + } else { 229 + overlap = nft_rbtree_interval_end(rbe) && 230 + nft_set_elem_active(&rbe->ext, 231 + genmask); 232 + } 233 + } else if (d > 0) { 264 234 p = &parent->rb_right; 265 - else { 235 + 236 + if (nft_rbtree_interval_end(new)) { 237 + overlap = nft_rbtree_interval_end(rbe) && 238 + nft_set_elem_active(&rbe->ext, 239 + genmask); 240 + } else if (nft_rbtree_interval_end(rbe) && 241 + nft_set_elem_active(&rbe->ext, genmask)) { 242 + overlap = true; 243 + } 244 + } else { 266 245 if (nft_rbtree_interval_end(rbe) && 267 - !nft_rbtree_interval_end(new)) { 246 + nft_rbtree_interval_start(new)) { 268 247 p = &parent->rb_left; 269 - } else if (!nft_rbtree_interval_end(rbe) && 248 + 249 + if (nft_set_elem_active(&rbe->ext, genmask)) 250 + overlap = false; 251 + } else if (nft_rbtree_interval_start(rbe) && 270 252 nft_rbtree_interval_end(new)) { 271 253 p = 
&parent->rb_right; 254 + 255 + if (nft_set_elem_active(&rbe->ext, genmask)) 256 + overlap = false; 272 257 } else if (nft_set_elem_active(&rbe->ext, genmask)) { 273 258 *ext = &rbe->ext; 274 259 return -EEXIST; ··· 302 237 } 303 238 } 304 239 } 240 + 241 + if (overlap) 242 + return -ENOTEMPTY; 243 + 305 244 rb_link_node_rcu(&new->node, parent, p); 306 245 rb_insert_color(&new->node, &priv->root); 307 246 return 0; ··· 386 317 parent = parent->rb_right; 387 318 else { 388 319 if (nft_rbtree_interval_end(rbe) && 389 - !nft_rbtree_interval_end(this)) { 320 + nft_rbtree_interval_start(this)) { 390 321 parent = parent->rb_left; 391 322 continue; 392 - } else if (!nft_rbtree_interval_end(rbe) && 323 + } else if (nft_rbtree_interval_start(rbe) && 393 324 nft_rbtree_interval_end(this)) { 394 325 parent = parent->rb_right; 395 326 continue;
+17 -26
net/netlink/af_netlink.c
··· 2392 2392 if (nlk_has_extack && extack && extack->_msg) 2393 2393 tlvlen += nla_total_size(strlen(extack->_msg) + 1); 2394 2394 2395 - if (err) { 2396 - if (!(nlk->flags & NETLINK_F_CAP_ACK)) 2397 - payload += nlmsg_len(nlh); 2398 - else 2399 - flags |= NLM_F_CAPPED; 2400 - if (nlk_has_extack && extack && extack->bad_attr) 2401 - tlvlen += nla_total_size(sizeof(u32)); 2402 - } else { 2395 + if (err && !(nlk->flags & NETLINK_F_CAP_ACK)) 2396 + payload += nlmsg_len(nlh); 2397 + else 2403 2398 flags |= NLM_F_CAPPED; 2404 - 2405 - if (nlk_has_extack && extack && extack->cookie_len) 2406 - tlvlen += nla_total_size(extack->cookie_len); 2407 - } 2399 + if (err && nlk_has_extack && extack && extack->bad_attr) 2400 + tlvlen += nla_total_size(sizeof(u32)); 2401 + if (nlk_has_extack && extack && extack->cookie_len) 2402 + tlvlen += nla_total_size(extack->cookie_len); 2408 2403 2409 2404 if (tlvlen) 2410 2405 flags |= NLM_F_ACK_TLVS; ··· 2422 2427 WARN_ON(nla_put_string(skb, NLMSGERR_ATTR_MSG, 2423 2428 extack->_msg)); 2424 2429 } 2425 - if (err) { 2426 - if (extack->bad_attr && 2427 - !WARN_ON((u8 *)extack->bad_attr < in_skb->data || 2428 - (u8 *)extack->bad_attr >= in_skb->data + 2429 - in_skb->len)) 2430 - WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_OFFS, 2431 - (u8 *)extack->bad_attr - 2432 - (u8 *)nlh)); 2433 - } else { 2434 - if (extack->cookie_len) 2435 - WARN_ON(nla_put(skb, NLMSGERR_ATTR_COOKIE, 2436 - extack->cookie_len, 2437 - extack->cookie)); 2438 - } 2430 + if (err && extack->bad_attr && 2431 + !WARN_ON((u8 *)extack->bad_attr < in_skb->data || 2432 + (u8 *)extack->bad_attr >= in_skb->data + 2433 + in_skb->len)) 2434 + WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_OFFS, 2435 + (u8 *)extack->bad_attr - 2436 + (u8 *)nlh)); 2437 + if (extack->cookie_len) 2438 + WARN_ON(nla_put(skb, NLMSGERR_ATTR_COOKIE, 2439 + extack->cookie_len, extack->cookie)); 2439 2440 } 2440 2441 2441 2442 nlmsg_end(skb, rep);
+21
net/packet/af_packet.c
··· 2173 2173 struct timespec64 ts; 2174 2174 __u32 ts_status; 2175 2175 bool is_drop_n_account = false; 2176 + unsigned int slot_id = 0; 2176 2177 bool do_vnet = false; 2177 2178 2178 2179 /* struct tpacket{2,3}_hdr is aligned to a multiple of TPACKET_ALIGNMENT. ··· 2275 2274 TP_STATUS_KERNEL, (macoff+snaplen)); 2276 2275 if (!h.raw) 2277 2276 goto drop_n_account; 2277 + 2278 + if (po->tp_version <= TPACKET_V2) { 2279 + slot_id = po->rx_ring.head; 2280 + if (test_bit(slot_id, po->rx_ring.rx_owner_map)) 2281 + goto drop_n_account; 2282 + __set_bit(slot_id, po->rx_ring.rx_owner_map); 2283 + } 2278 2284 2279 2285 if (do_vnet && 2280 2286 virtio_net_hdr_from_skb(skb, h.raw + macoff - ··· 2388 2380 #endif 2389 2381 2390 2382 if (po->tp_version <= TPACKET_V2) { 2383 + spin_lock(&sk->sk_receive_queue.lock); 2391 2384 __packet_set_status(po, h.raw, status); 2385 + __clear_bit(slot_id, po->rx_ring.rx_owner_map); 2386 + spin_unlock(&sk->sk_receive_queue.lock); 2392 2387 sk->sk_data_ready(sk); 2393 2388 } else { 2394 2389 prb_clear_blk_fill_status(&po->rx_ring); ··· 4288 4277 { 4289 4278 struct pgv *pg_vec = NULL; 4290 4279 struct packet_sock *po = pkt_sk(sk); 4280 + unsigned long *rx_owner_map = NULL; 4291 4281 int was_running, order = 0; 4292 4282 struct packet_ring_buffer *rb; 4293 4283 struct sk_buff_head *rb_queue; ··· 4374 4362 } 4375 4363 break; 4376 4364 default: 4365 + if (!tx_ring) { 4366 + rx_owner_map = bitmap_alloc(req->tp_frame_nr, 4367 + GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO); 4368 + if (!rx_owner_map) 4369 + goto out_free_pg_vec; 4370 + } 4377 4371 break; 4378 4372 } 4379 4373 } ··· 4409 4391 err = 0; 4410 4392 spin_lock_bh(&rb_queue->lock); 4411 4393 swap(rb->pg_vec, pg_vec); 4394 + if (po->tp_version <= TPACKET_V2) 4395 + swap(rb->rx_owner_map, rx_owner_map); 4412 4396 rb->frame_max = (req->tp_frame_nr - 1); 4413 4397 rb->head = 0; 4414 4398 rb->frame_size = req->tp_frame_size; ··· 4442 4422 } 4443 4423 4444 4424 out_free_pg_vec: 4425 + 
bitmap_free(rx_owner_map); 4445 4426 if (pg_vec) 4446 4427 free_pg_vec(pg_vec, order, req->tp_block_nr); 4447 4428 out:
+4 -1
net/packet/internal.h
··· 70 70 71 71 unsigned int __percpu *pending_refcnt; 72 72 73 - struct tpacket_kbdq_core prb_bdqc; 73 + union { 74 + unsigned long *rx_owner_map; 75 + struct tpacket_kbdq_core prb_bdqc; 76 + }; 74 77 }; 75 78 76 79 extern struct mutex fanout_mutex;
+5 -32
net/rxrpc/af_rxrpc.c
··· 285 285 gfp_t gfp, 286 286 rxrpc_notify_rx_t notify_rx, 287 287 bool upgrade, 288 - bool intr, 288 + enum rxrpc_interruptibility interruptibility, 289 289 unsigned int debug_id) 290 290 { 291 291 struct rxrpc_conn_parameters cp; ··· 310 310 memset(&p, 0, sizeof(p)); 311 311 p.user_call_ID = user_call_ID; 312 312 p.tx_total_len = tx_total_len; 313 - p.intr = intr; 313 + p.interruptibility = interruptibility; 314 314 315 315 memset(&cp, 0, sizeof(cp)); 316 316 cp.local = rx->local; ··· 371 371 * rxrpc_kernel_check_life - Check to see whether a call is still alive 372 372 * @sock: The socket the call is on 373 373 * @call: The call to check 374 - * @_life: Where to store the life value 375 374 * 376 - * Allow a kernel service to find out whether a call is still alive - ie. we're 377 - * getting ACKs from the server. Passes back in *_life a number representing 378 - * the life state which can be compared to that returned by a previous call and 379 - * return true if the call is still alive. 380 - * 381 - * If the life state stalls, rxrpc_kernel_probe_life() should be called and 382 - * then 2RTT waited. 375 + * Allow a kernel service to find out whether a call is still alive - 376 + * ie. whether it has completed. 383 377 */ 384 378 bool rxrpc_kernel_check_life(const struct socket *sock, 385 - const struct rxrpc_call *call, 386 - u32 *_life) 379 + const struct rxrpc_call *call) 387 380 { 388 - *_life = call->acks_latest; 389 381 return call->state != RXRPC_CALL_COMPLETE; 390 382 } 391 383 EXPORT_SYMBOL(rxrpc_kernel_check_life); 392 - 393 - /** 394 - * rxrpc_kernel_probe_life - Poke the peer to see if it's still alive 395 - * @sock: The socket the call is on 396 - * @call: The call to check 397 - * 398 - * In conjunction with rxrpc_kernel_check_life(), allow a kernel service to 399 - * find out whether a call is still alive by pinging it. This should cause the 400 - * life state to be bumped in about 2*RTT. 
401 - * 402 - * The must be called in TASK_RUNNING state on pain of might_sleep() objecting. 403 - */ 404 - void rxrpc_kernel_probe_life(struct socket *sock, struct rxrpc_call *call) 405 - { 406 - rxrpc_propose_ACK(call, RXRPC_ACK_PING, 0, true, false, 407 - rxrpc_propose_ack_ping_for_check_life); 408 - rxrpc_send_ack_packet(call, true, NULL); 409 - } 410 - EXPORT_SYMBOL(rxrpc_kernel_probe_life); 411 384 412 385 /** 413 386 * rxrpc_kernel_get_epoch - Retrieve the epoch value from a call.
+2 -3
net/rxrpc/ar-internal.h
··· 489 489 RXRPC_CALL_BEGAN_RX_TIMER, /* We began the expect_rx_by timer */ 490 490 RXRPC_CALL_RX_HEARD, /* The peer responded at least once to this call */ 491 491 RXRPC_CALL_RX_UNDERRUN, /* Got data underrun */ 492 - RXRPC_CALL_IS_INTR, /* The call is interruptible */ 493 492 RXRPC_CALL_DISCONNECTED, /* The call has been disconnected */ 494 493 }; 495 494 ··· 597 598 atomic_t usage; 598 599 u16 service_id; /* service ID */ 599 600 u8 security_ix; /* Security type */ 601 + enum rxrpc_interruptibility interruptibility; /* At what point call may be interrupted */ 600 602 u32 call_id; /* call ID on connection */ 601 603 u32 cid; /* connection ID plus channel index */ 602 604 int debug_id; /* debug ID for printks */ ··· 675 675 676 676 /* transmission-phase ACK management */ 677 677 ktime_t acks_latest_ts; /* Timestamp of latest ACK received */ 678 - rxrpc_serial_t acks_latest; /* serial number of latest ACK received */ 679 678 rxrpc_seq_t acks_lowest_nak; /* Lowest NACK in the buffer (or ==tx_hard_ack) */ 680 679 rxrpc_seq_t acks_lost_top; /* tx_top at the time lost-ack ping sent */ 681 680 rxrpc_serial_t acks_lost_ping; /* Serial number of probe ACK */ ··· 720 721 u32 normal; /* Max time since last call packet (msec) */ 721 722 } timeouts; 722 723 u8 nr_timeouts; /* Number of timeouts specified */ 723 - bool intr; /* The call is interruptible */ 724 + enum rxrpc_interruptibility interruptibility; /* How is interruptible is the call? */ 724 725 }; 725 726 726 727 struct rxrpc_send_params {
+1 -2
net/rxrpc/call_object.c
··· 237 237 return call; 238 238 } 239 239 240 - if (p->intr) 241 - __set_bit(RXRPC_CALL_IS_INTR, &call->flags); 240 + call->interruptibility = p->interruptibility; 242 241 call->tx_total_len = p->tx_total_len; 243 242 trace_rxrpc_call(call->debug_id, rxrpc_call_new_client, 244 243 atomic_read(&call->usage),
+10 -3
net/rxrpc/conn_client.c
··· 655 655 656 656 add_wait_queue_exclusive(&call->waitq, &myself); 657 657 for (;;) { 658 - if (test_bit(RXRPC_CALL_IS_INTR, &call->flags)) 658 + switch (call->interruptibility) { 659 + case RXRPC_INTERRUPTIBLE: 660 + case RXRPC_PREINTERRUPTIBLE: 659 661 set_current_state(TASK_INTERRUPTIBLE); 660 - else 662 + break; 663 + case RXRPC_UNINTERRUPTIBLE: 664 + default: 661 665 set_current_state(TASK_UNINTERRUPTIBLE); 666 + break; 667 + } 662 668 if (call->call_id) 663 669 break; 664 - if (test_bit(RXRPC_CALL_IS_INTR, &call->flags) && 670 + if ((call->interruptibility == RXRPC_INTERRUPTIBLE || 671 + call->interruptibility == RXRPC_PREINTERRUPTIBLE) && 665 672 signal_pending(current)) { 666 673 ret = -ERESTARTSYS; 667 674 break;
-1
net/rxrpc/input.c
··· 882 882 before(prev_pkt, call->ackr_prev_seq)) 883 883 goto out; 884 884 call->acks_latest_ts = skb->tstamp; 885 - call->acks_latest = sp->hdr.serial; 886 885 887 886 call->ackr_first_seq = first_soft_ack; 888 887 call->ackr_prev_seq = prev_pkt;
+56 -19
net/rxrpc/sendmsg.c
··· 18 18 #include "ar-internal.h" 19 19 20 20 /* 21 + * Return true if there's sufficient Tx queue space. 22 + */ 23 + static bool rxrpc_check_tx_space(struct rxrpc_call *call, rxrpc_seq_t *_tx_win) 24 + { 25 + unsigned int win_size = 26 + min_t(unsigned int, call->tx_winsize, 27 + call->cong_cwnd + call->cong_extra); 28 + rxrpc_seq_t tx_win = READ_ONCE(call->tx_hard_ack); 29 + 30 + if (_tx_win) 31 + *_tx_win = tx_win; 32 + return call->tx_top - tx_win < win_size; 33 + } 34 + 35 + /* 21 36 * Wait for space to appear in the Tx queue or a signal to occur. 22 37 */ 23 38 static int rxrpc_wait_for_tx_window_intr(struct rxrpc_sock *rx, ··· 41 26 { 42 27 for (;;) { 43 28 set_current_state(TASK_INTERRUPTIBLE); 44 - if (call->tx_top - call->tx_hard_ack < 45 - min_t(unsigned int, call->tx_winsize, 46 - call->cong_cwnd + call->cong_extra)) 29 + if (rxrpc_check_tx_space(call, NULL)) 47 30 return 0; 48 31 49 32 if (call->state >= RXRPC_CALL_COMPLETE) ··· 62 49 * Wait for space to appear in the Tx queue uninterruptibly, but with 63 50 * a timeout of 2*RTT if no progress was made and a signal occurred. 
64 51 */ 65 - static int rxrpc_wait_for_tx_window_nonintr(struct rxrpc_sock *rx, 52 + static int rxrpc_wait_for_tx_window_waitall(struct rxrpc_sock *rx, 66 53 struct rxrpc_call *call) 67 54 { 68 55 rxrpc_seq_t tx_start, tx_win; ··· 71 58 72 59 rtt = READ_ONCE(call->peer->rtt); 73 60 rtt2 = nsecs_to_jiffies64(rtt) * 2; 74 - if (rtt2 < 1) 75 - rtt2 = 1; 61 + if (rtt2 < 2) 62 + rtt2 = 2; 76 63 77 64 timeout = rtt2; 78 65 tx_start = READ_ONCE(call->tx_hard_ack); ··· 81 68 set_current_state(TASK_UNINTERRUPTIBLE); 82 69 83 70 tx_win = READ_ONCE(call->tx_hard_ack); 84 - if (call->tx_top - tx_win < 85 - min_t(unsigned int, call->tx_winsize, 86 - call->cong_cwnd + call->cong_extra)) 71 + if (rxrpc_check_tx_space(call, &tx_win)) 87 72 return 0; 88 73 89 74 if (call->state >= RXRPC_CALL_COMPLETE) 90 75 return call->error; 91 76 92 - if (test_bit(RXRPC_CALL_IS_INTR, &call->flags) && 93 - timeout == 0 && 77 + if (timeout == 0 && 94 78 tx_win == tx_start && signal_pending(current)) 95 79 return -EINTR; 96 80 ··· 98 88 99 89 trace_rxrpc_transmit(call, rxrpc_transmit_wait); 100 90 timeout = schedule_timeout(timeout); 91 + } 92 + } 93 + 94 + /* 95 + * Wait for space to appear in the Tx queue uninterruptibly. 
96 + */ 97 + static int rxrpc_wait_for_tx_window_nonintr(struct rxrpc_sock *rx, 98 + struct rxrpc_call *call, 99 + long *timeo) 100 + { 101 + for (;;) { 102 + set_current_state(TASK_UNINTERRUPTIBLE); 103 + if (rxrpc_check_tx_space(call, NULL)) 104 + return 0; 105 + 106 + if (call->state >= RXRPC_CALL_COMPLETE) 107 + return call->error; 108 + 109 + trace_rxrpc_transmit(call, rxrpc_transmit_wait); 110 + *timeo = schedule_timeout(*timeo); 101 111 } 102 112 } 103 113 ··· 138 108 139 109 add_wait_queue(&call->waitq, &myself); 140 110 141 - if (waitall) 142 - ret = rxrpc_wait_for_tx_window_nonintr(rx, call); 143 - else 144 - ret = rxrpc_wait_for_tx_window_intr(rx, call, timeo); 111 + switch (call->interruptibility) { 112 + case RXRPC_INTERRUPTIBLE: 113 + if (waitall) 114 + ret = rxrpc_wait_for_tx_window_waitall(rx, call); 115 + else 116 + ret = rxrpc_wait_for_tx_window_intr(rx, call, timeo); 117 + break; 118 + case RXRPC_PREINTERRUPTIBLE: 119 + case RXRPC_UNINTERRUPTIBLE: 120 + default: 121 + ret = rxrpc_wait_for_tx_window_nonintr(rx, call, timeo); 122 + break; 123 + } 145 124 146 125 remove_wait_queue(&call->waitq, &myself); 147 126 set_current_state(TASK_RUNNING); ··· 341 302 342 303 _debug("alloc"); 343 304 344 - if (call->tx_top - call->tx_hard_ack >= 345 - min_t(unsigned int, call->tx_winsize, 346 - call->cong_cwnd + call->cong_extra)) { 305 + if (!rxrpc_check_tx_space(call, NULL)) { 347 306 ret = -EAGAIN; 348 307 if (msg->msg_flags & MSG_DONTWAIT) 349 308 goto maybe_error; ··· 656 619 .call.tx_total_len = -1, 657 620 .call.user_call_ID = 0, 658 621 .call.nr_timeouts = 0, 659 - .call.intr = true, 622 + .call.interruptibility = RXRPC_INTERRUPTIBLE, 660 623 .abort_code = 0, 661 624 .command = RXRPC_CMD_SEND_DATA, 662 625 .exclusive = false,
+1 -1
net/sched/act_ct.c
··· 1273 1273 if (goto_ch) 1274 1274 tcf_chain_put_by_act(goto_ch); 1275 1275 if (params) 1276 - kfree_rcu(params, rcu); 1276 + call_rcu(&params->rcu, tcf_ct_params_free); 1277 1277 if (res == ACT_P_CREATED) 1278 1278 tcf_idr_insert(tn, *a); 1279 1279
+2 -4
net/sched/act_mirred.c
··· 284 284 285 285 /* mirror is always swallowed */ 286 286 if (is_redirect) { 287 - skb2->tc_redirected = 1; 288 - skb2->tc_from_ingress = skb2->tc_at_ingress; 289 - if (skb2->tc_from_ingress) 290 - skb2->tstamp = 0; 287 + skb_set_redirected(skb2, skb2->tc_at_ingress); 288 + 291 289 /* let's the caller reinsert the packet, if possible */ 292 290 if (use_reinsert) { 293 291 res->ingress = want_ingress;
+2 -2
net/sched/cls_route.c
··· 534 534 fp = &b->ht[h]; 535 535 for (pfp = rtnl_dereference(*fp); pfp; 536 536 fp = &pfp->next, pfp = rtnl_dereference(*fp)) { 537 - if (pfp == f) { 538 - *fp = f->next; 537 + if (pfp == fold) { 538 + rcu_assign_pointer(*fp, fold->next); 539 539 break; 540 540 } 541 541 }
+3
net/sched/cls_tcindex.c
··· 261 261 struct tcindex_data, 262 262 rwork); 263 263 264 + rtnl_lock(); 264 265 kfree(p->perfect); 265 266 kfree(p); 267 + rtnl_unlock(); 266 268 } 267 269 268 270 static void tcindex_free_perfect_hash(struct tcindex_data *cp) ··· 359 357 360 358 if (tcindex_alloc_perfect_hash(net, cp) < 0) 361 359 goto errout; 360 + cp->alloc_hash = cp->hash; 362 361 for (i = 0; i < min(cp->hash, p->hash); i++) 363 362 cp->perfect[i].res = p->perfect[i].res; 364 363 balloc = 1;
+11 -1
net/sched/sch_cbs.c
··· 181 181 s64 credits; 182 182 int len; 183 183 184 + /* The previous packet is still being sent */ 185 + if (now < q->last) { 186 + qdisc_watchdog_schedule_ns(&q->watchdog, q->last); 187 + return NULL; 188 + } 184 189 if (q->credits < 0) { 185 190 credits = timediff_to_credits(now - q->last, q->idleslope); 186 191 ··· 217 212 credits += q->credits; 218 213 219 214 q->credits = max_t(s64, credits, q->locredit); 220 - q->last = now; 215 + /* Estimate of the transmission of the last byte of the packet in ns */ 216 + if (unlikely(atomic64_read(&q->port_rate) == 0)) 217 + q->last = now; 218 + else 219 + q->last = now + div64_s64(len * NSEC_PER_SEC, 220 + atomic64_read(&q->port_rate)); 221 221 222 222 return skb; 223 223 }
+5 -3
net/socket.c
··· 1707 1707 1708 1708 int __sys_accept4_file(struct file *file, unsigned file_flags, 1709 1709 struct sockaddr __user *upeer_sockaddr, 1710 - int __user *upeer_addrlen, int flags) 1710 + int __user *upeer_addrlen, int flags, 1711 + unsigned long nofile) 1711 1712 { 1712 1713 struct socket *sock, *newsock; 1713 1714 struct file *newfile; ··· 1739 1738 */ 1740 1739 __module_get(newsock->ops->owner); 1741 1740 1742 - newfd = get_unused_fd_flags(flags); 1741 + newfd = __get_unused_fd_flags(flags, nofile); 1743 1742 if (unlikely(newfd < 0)) { 1744 1743 err = newfd; 1745 1744 sock_release(newsock); ··· 1808 1807 f = fdget(fd); 1809 1808 if (f.file) { 1810 1809 ret = __sys_accept4_file(f.file, 0, upeer_sockaddr, 1811 - upeer_addrlen, flags); 1810 + upeer_addrlen, flags, 1811 + rlimit(RLIMIT_NOFILE)); 1812 1812 if (f.flags) 1813 1813 fput(f.file); 1814 1814 }
+7
scripts/Kconfig.include
··· 44 44 45 45 # gcc version including patch level 46 46 gcc-version := $(shell,$(srctree)/scripts/gcc-version.sh $(CC)) 47 + 48 + # machine bit flags 49 + # $(m32-flag): -m32 if the compiler supports it, or an empty string otherwise. 50 + # $(m64-flag): -m64 if the compiler supports it, or an empty string otherwise. 51 + cc-option-bit = $(if-success,$(CC) -Werror $(1) -E -x c /dev/null -o /dev/null,$(1)) 52 + m32-flag := $(cc-option-bit,-m32) 53 + m64-flag := $(cc-option-bit,-m64)
+1
scripts/Makefile.extrawarn
··· 48 48 KBUILD_CFLAGS += -Wno-format 49 49 KBUILD_CFLAGS += -Wno-sign-compare 50 50 KBUILD_CFLAGS += -Wno-format-zero-length 51 + KBUILD_CFLAGS += $(call cc-disable-warning, pointer-to-enum-cast) 51 52 endif 52 53 53 54 endif
+1 -1
scripts/export_report.pl
··· 94 94 # 95 95 while ( <$module_symvers> ) { 96 96 chomp; 97 - my (undef, $symbol, $namespace, $module, $gpl) = split('\t'); 97 + my (undef, $symbol, $module, $gpl, $namespace) = split('\t'); 98 98 $SYMBOL { $symbol } = [ $module , "0" , $symbol, $gpl]; 99 99 } 100 100 close($module_symvers);
+4 -4
scripts/kallsyms.c
··· 195 195 return NULL; 196 196 } 197 197 198 - if (is_ignored_symbol(name, type)) 199 - return NULL; 200 - 201 - /* Ignore most absolute/undefined (?) symbols. */ 202 198 if (strcmp(name, "_text") == 0) 203 199 _text = addr; 200 + 201 + /* Ignore most absolute/undefined (?) symbols. */ 202 + if (is_ignored_symbol(name, type)) 203 + return NULL; 204 204 205 205 check_symbol_range(name, addr, text_ranges, ARRAY_SIZE(text_ranges)); 206 206 check_symbol_range(name, addr, &percpu_range, 1);
+14 -13
scripts/mod/modpost.c
··· 308 308 309 309 static void *sym_get_data(const struct elf_info *info, const Elf_Sym *sym) 310 310 { 311 - Elf_Shdr *sechdr = &info->sechdrs[sym->st_shndx]; 311 + unsigned int secindex = get_secindex(info, sym); 312 + Elf_Shdr *sechdr = &info->sechdrs[secindex]; 312 313 unsigned long offset; 313 314 314 315 offset = sym->st_value; ··· 2428 2427 } 2429 2428 2430 2429 /* parse Module.symvers file. line format: 2431 - * 0x12345678<tab>symbol<tab>module[[<tab>export]<tab>something] 2430 + * 0x12345678<tab>symbol<tab>module<tab>export<tab>namespace 2432 2431 **/ 2433 2432 static void read_dump(const char *fname, unsigned int kernel) 2434 2433 { ··· 2441 2440 return; 2442 2441 2443 2442 while ((line = get_next_line(&pos, file, size))) { 2444 - char *symname, *namespace, *modname, *d, *export, *end; 2443 + char *symname, *namespace, *modname, *d, *export; 2445 2444 unsigned int crc; 2446 2445 struct module *mod; 2447 2446 struct symbol *s; ··· 2449 2448 if (!(symname = strchr(line, '\t'))) 2450 2449 goto fail; 2451 2450 *symname++ = '\0'; 2452 - if (!(namespace = strchr(symname, '\t'))) 2453 - goto fail; 2454 - *namespace++ = '\0'; 2455 - if (!(modname = strchr(namespace, '\t'))) 2451 + if (!(modname = strchr(symname, '\t'))) 2456 2452 goto fail; 2457 2453 *modname++ = '\0'; 2458 - if ((export = strchr(modname, '\t')) != NULL) 2459 - *export++ = '\0'; 2460 - if (export && ((end = strchr(export, '\t')) != NULL)) 2461 - *end = '\0'; 2454 + if (!(export = strchr(modname, '\t'))) 2455 + goto fail; 2456 + *export++ = '\0'; 2457 + if (!(namespace = strchr(export, '\t'))) 2458 + goto fail; 2459 + *namespace++ = '\0'; 2460 + 2462 2461 crc = strtoul(line, &d, 16); 2463 2462 if (*symname == '\0' || *modname == '\0' || *d != '\0') 2464 2463 goto fail; ··· 2509 2508 namespace = symbol->namespace; 2510 2509 buf_printf(&buf, "0x%08x\t%s\t%s\t%s\t%s\n", 2511 2510 symbol->crc, symbol->name, 2512 - namespace ? namespace : "", 2513 2511 symbol->module->name, 2514 - export_str(symbol->export)); 2512 + export_str(symbol->export), 2513 + namespace ? namespace : ""); 2515 2514 } 2516 2515 symbol = symbol->next; 2517 2516 }
+10 -2
sound/core/oss/pcm_plugin.c
··· 111 111 while (plugin->next) { 112 112 if (plugin->dst_frames) 113 113 frames = plugin->dst_frames(plugin, frames); 114 - if (snd_BUG_ON((snd_pcm_sframes_t)frames <= 0)) 114 + if ((snd_pcm_sframes_t)frames <= 0) 115 115 return -ENXIO; 116 116 plugin = plugin->next; 117 117 err = snd_pcm_plugin_alloc(plugin, frames); ··· 123 123 while (plugin->prev) { 124 124 if (plugin->src_frames) 125 125 frames = plugin->src_frames(plugin, frames); 126 - if (snd_BUG_ON((snd_pcm_sframes_t)frames <= 0)) 126 + if ((snd_pcm_sframes_t)frames <= 0) 127 127 return -ENXIO; 128 128 plugin = plugin->prev; 129 129 err = snd_pcm_plugin_alloc(plugin, frames); ··· 209 209 if (stream == SNDRV_PCM_STREAM_PLAYBACK) { 210 210 plugin = snd_pcm_plug_last(plug); 211 211 while (plugin && drv_frames > 0) { 212 + if (drv_frames > plugin->buf_frames) 213 + drv_frames = plugin->buf_frames; 212 214 plugin_prev = plugin->prev; 213 215 if (plugin->src_frames) 214 216 drv_frames = plugin->src_frames(plugin, drv_frames); ··· 222 220 plugin_next = plugin->next; 223 221 if (plugin->dst_frames) 224 222 drv_frames = plugin->dst_frames(plugin, drv_frames); 223 + if (drv_frames > plugin->buf_frames) 224 + drv_frames = plugin->buf_frames; 225 225 plugin = plugin_next; 226 226 } 227 227 } else ··· 252 248 if (frames < 0) 253 249 return frames; 254 250 } 251 + if (frames > plugin->buf_frames) 252 + frames = plugin->buf_frames; 255 253 plugin = plugin_next; 256 254 } 257 255 } else if (stream == SNDRV_PCM_STREAM_CAPTURE) { 258 256 plugin = snd_pcm_plug_last(plug); 259 257 while (plugin) { 258 + if (frames > plugin->buf_frames) 259 + frames = plugin->buf_frames; 260 260 plugin_prev = plugin->prev; 261 261 if (plugin->src_frames) { 262 262 frames = plugin->src_frames(plugin, frames);
+1
sound/core/seq/oss/seq_oss_midi.c
··· 602 602 len = snd_seq_oss_timer_start(dp->timer); 603 603 if (ev->type == SNDRV_SEQ_EVENT_SYSEX) { 604 604 snd_seq_oss_readq_sysex(dp->readq, mdev->seq_device, ev); 605 + snd_midi_event_reset_decode(mdev->coder); 605 606 } else { 606 607 len = snd_midi_event_decode(mdev->coder, msg, sizeof(msg), ev); 607 608 if (len > 0)
+1
sound/core/seq/seq_virmidi.c
··· 81 81 if ((ev->flags & SNDRV_SEQ_EVENT_LENGTH_MASK) != SNDRV_SEQ_EVENT_LENGTH_VARIABLE) 82 82 continue; 83 83 snd_seq_dump_var_event(ev, (snd_seq_dump_func_t)snd_rawmidi_receive, vmidi->substream); 84 + snd_midi_event_reset_decode(vmidi->parser); 84 85 } else { 85 86 len = snd_midi_event_decode(vmidi->parser, msg, sizeof(msg), ev); 86 87 if (len > 0)
+25
sound/pci/hda/patch_realtek.c
··· 8051 8051 spec->gen.mixer_nid = 0; 8052 8052 break; 8053 8053 case 0x10ec0225: 8054 + codec->power_save_node = 1; 8055 + /* fall through */ 8054 8056 case 0x10ec0295: 8055 8057 case 0x10ec0299: 8056 8058 spec->codec_variant = ALC269_TYPE_ALC225; ··· 8612 8610 ALC669_FIXUP_ACER_ASPIRE_ETHOS, 8613 8611 ALC669_FIXUP_ACER_ASPIRE_ETHOS_HEADSET, 8614 8612 ALC671_FIXUP_HP_HEADSET_MIC2, 8613 + ALC662_FIXUP_ACER_X2660G_HEADSET_MODE, 8614 + ALC662_FIXUP_ACER_NITRO_HEADSET_MODE, 8615 8615 }; 8616 8616 8617 8617 static const struct hda_fixup alc662_fixups[] = { ··· 8959 8955 .type = HDA_FIXUP_FUNC, 8960 8956 .v.func = alc671_fixup_hp_headset_mic2, 8961 8957 }, 8958 + [ALC662_FIXUP_ACER_X2660G_HEADSET_MODE] = { 8959 + .type = HDA_FIXUP_PINS, 8960 + .v.pins = (const struct hda_pintbl[]) { 8961 + { 0x1a, 0x02a1113c }, /* use as headset mic, without its own jack detect */ 8962 + { } 8963 + }, 8964 + .chained = true, 8965 + .chain_id = ALC662_FIXUP_USI_FUNC 8966 + }, 8967 + [ALC662_FIXUP_ACER_NITRO_HEADSET_MODE] = { 8968 + .type = HDA_FIXUP_PINS, 8969 + .v.pins = (const struct hda_pintbl[]) { 8970 + { 0x1a, 0x01a11140 }, /* use as headset mic, without its own jack detect */ 8971 + { 0x1b, 0x0221144f }, 8972 + { } 8973 + }, 8974 + .chained = true, 8975 + .chain_id = ALC662_FIXUP_USI_FUNC 8976 + }, 8962 8977 }; 8963 8978 8964 8979 static const struct snd_pci_quirk alc662_fixup_tbl[] = { ··· 8989 8966 SND_PCI_QUIRK(0x1025, 0x0349, "eMachines eM250", ALC662_FIXUP_INV_DMIC), 8990 8967 SND_PCI_QUIRK(0x1025, 0x034a, "Gateway LT27", ALC662_FIXUP_INV_DMIC), 8991 8968 SND_PCI_QUIRK(0x1025, 0x038b, "Acer Aspire 8943G", ALC662_FIXUP_ASPIRE), 8969 + SND_PCI_QUIRK(0x1025, 0x123c, "Acer Nitro N50-600", ALC662_FIXUP_ACER_NITRO_HEADSET_MODE), 8970 + SND_PCI_QUIRK(0x1025, 0x124e, "Acer 2660G", ALC662_FIXUP_ACER_X2660G_HEADSET_MODE), 8992 8971 SND_PCI_QUIRK(0x1028, 0x05d8, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 8993 8972 SND_PCI_QUIRK(0x1028, 0x05db, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 8994 8973 SND_PCI_QUIRK(0x1028, 0x05fe, "Dell XPS 15", ALC668_FIXUP_DELL_XPS13),
+1 -1
sound/usb/line6/driver.c
··· 305 305 line6_midibuf_read(mb, line6->buffer_message, 306 306 LINE6_MIDI_MESSAGE_MAXLEN); 307 307 308 - if (done == 0) 308 + if (done <= 0) 309 309 break; 310 310 311 311 line6->message_length = done;
+1 -1
sound/usb/line6/midibuf.c
··· 159 159 int midi_length_prev = 160 160 midibuf_message_length(this->command_prev); 161 161 162 - if (midi_length_prev > 0) { 162 + if (midi_length_prev > 1) { 163 163 midi_length = midi_length_prev - 1; 164 164 repeat = 1; 165 165 } else
+7 -7
tools/include/uapi/asm/errno.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 #if defined(__i386__) || defined(__x86_64__) 3 - #include "../../arch/x86/include/uapi/asm/errno.h" 3 + #include "../../../arch/x86/include/uapi/asm/errno.h" 4 4 #elif defined(__powerpc__) 5 - #include "../../arch/powerpc/include/uapi/asm/errno.h" 5 + #include "../../../arch/powerpc/include/uapi/asm/errno.h" 6 6 #elif defined(__sparc__) 7 - #include "../../arch/sparc/include/uapi/asm/errno.h" 7 + #include "../../../arch/sparc/include/uapi/asm/errno.h" 8 8 #elif defined(__alpha__) 9 - #include "../../arch/alpha/include/uapi/asm/errno.h" 9 + #include "../../../arch/alpha/include/uapi/asm/errno.h" 10 10 #elif defined(__mips__) 11 - #include "../../arch/mips/include/uapi/asm/errno.h" 11 + #include "../../../arch/mips/include/uapi/asm/errno.h" 12 12 #elif defined(__ia64__) 13 - #include "../../arch/ia64/include/uapi/asm/errno.h" 13 + #include "../../../arch/ia64/include/uapi/asm/errno.h" 14 14 #elif defined(__xtensa__) 15 - #include "../../arch/xtensa/include/uapi/asm/errno.h" 15 + #include "../../../arch/xtensa/include/uapi/asm/errno.h" 16 16 #else 17 17 #include <asm-generic/errno.h> 18 18 #endif
+2
tools/include/uapi/linux/in.h
··· 74 74 #define IPPROTO_UDPLITE IPPROTO_UDPLITE 75 75 IPPROTO_MPLS = 137, /* MPLS in IP (RFC 4023) */ 76 76 #define IPPROTO_MPLS IPPROTO_MPLS 77 + IPPROTO_ETHERNET = 143, /* Ethernet-within-IPv6 Encapsulation */ 78 + #define IPPROTO_ETHERNET IPPROTO_ETHERNET 77 79 IPPROTO_RAW = 255, /* Raw IP packets */ 78 80 #define IPPROTO_RAW IPPROTO_RAW 79 81 IPPROTO_MPTCP = 262, /* Multipath TCP connection */
+1 -1
tools/perf/Makefile
··· 35 35 # Only pass canonical directory names as the output directory: 36 36 # 37 37 ifneq ($(O),) 38 - FULL_O := $(shell readlink -f $(O) || echo $(O)) 38 + FULL_O := $(shell cd $(PWD); readlink -f $(O) || echo $(O)) 39 39 endif 40 40 41 41 #
+10 -10
tools/perf/arch/arm64/util/arm-spe.c
··· 11 11 #include <linux/zalloc.h> 12 12 #include <time.h> 13 13 14 - #include "../../util/cpumap.h" 15 - #include "../../util/event.h" 16 - #include "../../util/evsel.h" 17 - #include "../../util/evlist.h" 18 - #include "../../util/session.h" 14 + #include "../../../util/cpumap.h" 15 + #include "../../../util/event.h" 16 + #include "../../../util/evsel.h" 17 + #include "../../../util/evlist.h" 18 + #include "../../../util/session.h" 19 19 #include <internal/lib.h> // page_size 20 - #include "../../util/pmu.h" 21 - #include "../../util/debug.h" 22 - #include "../../util/auxtrace.h" 23 - #include "../../util/record.h" 24 - #include "../../util/arm-spe.h" 20 + #include "../../../util/pmu.h" 21 + #include "../../../util/debug.h" 22 + #include "../../../util/auxtrace.h" 23 + #include "../../../util/record.h" 24 + #include "../../../util/arm-spe.h" 25 25 26 26 #define KiB(x) ((x) * 1024) 27 27 #define MiB(x) ((x) * 1024 * 1024)
+1 -1
tools/perf/arch/arm64/util/perf_regs.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - #include "../../util/perf_regs.h" 2 + #include "../../../util/perf_regs.h" 3 3 4 4 const struct sample_reg sample_reg_masks[] = { 5 5 SMPL_REG_END
+2 -2
tools/perf/arch/powerpc/util/perf_regs.c
··· 4 4 #include <regex.h> 5 5 #include <linux/zalloc.h> 6 6 7 - #include "../../util/perf_regs.h" 8 - #include "../../util/debug.h" 7 + #include "../../../util/perf_regs.h" 8 + #include "../../../util/debug.h" 9 9 10 10 #include <linux/kernel.h> 11 11
+7 -7
tools/perf/arch/x86/util/auxtrace.c
··· 7 7 #include <errno.h> 8 8 #include <stdbool.h> 9 9 10 - #include "../../util/header.h" 11 - #include "../../util/debug.h" 12 - #include "../../util/pmu.h" 13 - #include "../../util/auxtrace.h" 14 - #include "../../util/intel-pt.h" 15 - #include "../../util/intel-bts.h" 16 - #include "../../util/evlist.h" 10 + #include "../../../util/header.h" 11 + #include "../../../util/debug.h" 12 + #include "../../../util/pmu.h" 13 + #include "../../../util/auxtrace.h" 14 + #include "../../../util/intel-pt.h" 15 + #include "../../../util/intel-bts.h" 16 + #include "../../../util/evlist.h" 17 17 18 18 static 19 19 struct auxtrace_record *auxtrace_record__init_intel(struct evlist *evlist,
+6 -6
tools/perf/arch/x86/util/event.c
··· 3 3 #include <linux/string.h> 4 4 #include <linux/zalloc.h> 5 5 6 - #include "../../util/event.h" 7 - #include "../../util/synthetic-events.h" 8 - #include "../../util/machine.h" 9 - #include "../../util/tool.h" 10 - #include "../../util/map.h" 11 - #include "../../util/debug.h" 6 + #include "../../../util/event.h" 7 + #include "../../../util/synthetic-events.h" 8 + #include "../../../util/machine.h" 9 + #include "../../../util/tool.h" 10 + #include "../../../util/map.h" 11 + #include "../../../util/debug.h" 12 12 13 13 #if defined(__x86_64__) 14 14
+2 -2
tools/perf/arch/x86/util/header.c
··· 7 7 #include <string.h> 8 8 #include <regex.h> 9 9 10 - #include "../../util/debug.h" 11 - #include "../../util/header.h" 10 + #include "../../../util/debug.h" 11 + #include "../../../util/header.h" 12 12 13 13 static inline void 14 14 cpuid(unsigned int op, unsigned int *a, unsigned int *b, unsigned int *c,
+12 -12
tools/perf/arch/x86/util/intel-bts.c
··· 11 11 #include <linux/log2.h> 12 12 #include <linux/zalloc.h> 13 13 14 - #include "../../util/cpumap.h" 15 - #include "../../util/event.h" 16 - #include "../../util/evsel.h" 17 - #include "../../util/evlist.h" 18 - #include "../../util/mmap.h" 19 - #include "../../util/session.h" 20 - #include "../../util/pmu.h" 21 - #include "../../util/debug.h" 22 - #include "../../util/record.h" 23 - #include "../../util/tsc.h" 24 - #include "../../util/auxtrace.h" 25 - #include "../../util/intel-bts.h" 14 + #include "../../../util/cpumap.h" 15 + #include "../../../util/event.h" 16 + #include "../../../util/evsel.h" 17 + #include "../../../util/evlist.h" 18 + #include "../../../util/mmap.h" 19 + #include "../../../util/session.h" 20 + #include "../../../util/pmu.h" 21 + #include "../../../util/debug.h" 22 + #include "../../../util/record.h" 23 + #include "../../../util/tsc.h" 24 + #include "../../../util/auxtrace.h" 25 + #include "../../../util/intel-bts.h" 26 26 #include <internal/lib.h> // page_size 27 27 28 28 #define KiB(x) ((x) * 1024)
+15 -15
tools/perf/arch/x86/util/intel-pt.c
··· 13 13 #include <linux/zalloc.h> 14 14 #include <cpuid.h> 15 15 16 - #include "../../util/session.h" 17 - #include "../../util/event.h" 18 - #include "../../util/evlist.h" 19 - #include "../../util/evsel.h" 20 - #include "../../util/evsel_config.h" 21 - #include "../../util/cpumap.h" 22 - #include "../../util/mmap.h" 16 + #include "../../../util/session.h" 17 + #include "../../../util/event.h" 18 + #include "../../../util/evlist.h" 19 + #include "../../../util/evsel.h" 20 + #include "../../../util/evsel_config.h" 21 + #include "../../../util/cpumap.h" 22 + #include "../../../util/mmap.h" 23 23 #include <subcmd/parse-options.h> 24 - #include "../../util/parse-events.h" 25 - #include "../../util/pmu.h" 26 - #include "../../util/debug.h" 27 - #include "../../util/auxtrace.h" 28 - #include "../../util/record.h" 29 - #include "../../util/target.h" 30 - #include "../../util/tsc.h" 24 + #include "../../../util/parse-events.h" 25 + #include "../../../util/pmu.h" 26 + #include "../../../util/debug.h" 27 + #include "../../../util/auxtrace.h" 28 + #include "../../../util/record.h" 29 + #include "../../../util/target.h" 30 + #include "../../../util/tsc.h" 31 31 #include <internal/lib.h> // page_size 32 - #include "../../util/intel-pt.h" 32 + #include "../../../util/intel-pt.h" 33 33 34 34 #define KiB(x) ((x) * 1024) 35 35 #define MiB(x) ((x) * 1024 * 1024)
+3 -3
tools/perf/arch/x86/util/machine.c
··· 5 5 #include <stdlib.h> 6 6 7 7 #include <internal/lib.h> // page_size 8 - #include "../../util/machine.h" 9 - #include "../../util/map.h" 10 - #include "../../util/symbol.h" 8 + #include "../../../util/machine.h" 9 + #include "../../../util/map.h" 10 + #include "../../../util/symbol.h" 11 11 #include <linux/ctype.h> 12 12 13 13 #include <symbol/kallsyms.h>
+4 -4
tools/perf/arch/x86/util/perf_regs.c
··· 5 5 #include <linux/kernel.h> 6 6 #include <linux/zalloc.h> 7 7 8 - #include "../../perf-sys.h" 9 - #include "../../util/perf_regs.h" 10 - #include "../../util/debug.h" 11 - #include "../../util/event.h" 8 + #include "../../../perf-sys.h" 9 + #include "../../../util/perf_regs.h" 10 + #include "../../../util/debug.h" 11 + #include "../../../util/event.h" 12 12 13 13 const struct sample_reg sample_reg_masks[] = { 14 14 SMPL_REG(AX, PERF_REG_X86_AX),
+3 -3
tools/perf/arch/x86/util/pmu.c
··· 4 4 #include <linux/stddef.h> 5 5 #include <linux/perf_event.h> 6 6 7 - #include "../../util/intel-pt.h" 8 - #include "../../util/intel-bts.h" 9 - #include "../../util/pmu.h" 7 + #include "../../../util/intel-pt.h" 8 + #include "../../../util/intel-bts.h" 9 + #include "../../../util/pmu.h" 10 10 11 11 struct perf_event_attr *perf_pmu__get_default_config(struct perf_pmu *pmu __maybe_unused) 12 12 {
+4
tools/perf/bench/bench.h
··· 2 2 #ifndef BENCH_H 3 3 #define BENCH_H 4 4 5 + #include <sys/time.h> 6 + 7 + extern struct timeval bench__start, bench__end, bench__runtime; 8 + 5 9 /* 6 10 * The madvise transparent hugepage constants were added in glibc 7 11 * 2.13. For compatibility with older versions of glibc, define these
+4 -4
tools/perf/bench/epoll-ctl.c
··· 35 35 36 36 static unsigned int nthreads = 0; 37 37 static unsigned int nsecs = 8; 38 - struct timeval start, end, runtime; 39 38 static bool done, __verbose, randomize; 40 39 41 40 /* ··· 93 94 { 94 95 /* inform all threads that we're done for the day */ 95 96 done = true; 96 - gettimeofday(&end, NULL); 97 - timersub(&end, &start, &runtime); 97 + gettimeofday(&bench__end, NULL); 98 + timersub(&bench__end, &bench__start, &bench__runtime); 98 99 } 99 100 100 101 static void nest_epollfd(void) ··· 312 313 exit(EXIT_FAILURE); 313 314 } 314 315 316 + memset(&act, 0, sizeof(act)); 315 317 sigfillset(&act.sa_mask); 316 318 act.sa_sigaction = toggle_done; 317 319 sigaction(SIGINT, &act, NULL); ··· 361 361 362 362 threads_starting = nthreads; 363 363 364 - gettimeofday(&start, NULL); 364 + gettimeofday(&bench__start, NULL); 365 365 366 366 do_threads(worker, cpu); 367 367
+6 -6
tools/perf/bench/epoll-wait.c
··· 90 90 91 91 static unsigned int nthreads = 0; 92 92 static unsigned int nsecs = 8; 93 - struct timeval start, end, runtime; 94 93 static bool wdone, done, __verbose, randomize, nonblocking; 95 94 96 95 /* ··· 275 276 { 276 277 /* inform all threads that we're done for the day */ 277 278 done = true; 278 - gettimeofday(&end, NULL); 279 - timersub(&end, &start, &runtime); 279 + gettimeofday(&bench__end, NULL); 280 + timersub(&bench__end, &bench__start, &bench__runtime); 280 281 } 281 282 282 283 static void print_summary(void) ··· 286 287 287 288 printf("\nAveraged %ld operations/sec (+- %.2f%%), total secs = %d\n", 288 289 avg, rel_stddev_stats(stddev, avg), 289 - (int) runtime.tv_sec); 290 + (int)bench__runtime.tv_sec); 290 291 } 291 292 292 293 static int do_threads(struct worker *worker, struct perf_cpu_map *cpu) ··· 426 427 exit(EXIT_FAILURE); 427 428 } 428 429 430 + memset(&act, 0, sizeof(act)); 429 431 sigfillset(&act.sa_mask); 430 432 act.sa_sigaction = toggle_done; 431 433 sigaction(SIGINT, &act, NULL); ··· 479 479 480 480 threads_starting = nthreads; 481 481 482 - gettimeofday(&start, NULL); 482 + gettimeofday(&bench__start, NULL); 483 483 484 484 do_threads(worker, cpu); 485 485 ··· 519 519 qsort(worker, nthreads, sizeof(struct worker), cmpworker); 520 520 521 521 for (i = 0; i < nthreads; i++) { 522 - unsigned long t = worker[i].ops/runtime.tv_sec; 522 + unsigned long t = worker[i].ops / bench__runtime.tv_sec; 523 523 524 524 update_stats(&throughput_stats, t); 525 525
+7 -6
tools/perf/bench/futex-hash.c
··· 37 37 static bool fshared = false, done = false, silent = false; 38 38 static int futex_flag = 0; 39 39 40 - struct timeval start, end, runtime; 40 + struct timeval bench__start, bench__end, bench__runtime; 41 41 static pthread_mutex_t thread_lock; 42 42 static unsigned int threads_starting; 43 43 static struct stats throughput_stats; ··· 103 103 { 104 104 /* inform all threads that we're done for the day */ 105 105 done = true; 106 - gettimeofday(&end, NULL); 107 - timersub(&end, &start, &runtime); 106 + gettimeofday(&bench__end, NULL); 107 + timersub(&bench__end, &bench__start, &bench__runtime); 108 108 } 109 109 110 110 static void print_summary(void) ··· 114 114 115 115 printf("%sAveraged %ld operations/sec (+- %.2f%%), total secs = %d\n", 116 116 !silent ? "\n" : "", avg, rel_stddev_stats(stddev, avg), 117 - (int) runtime.tv_sec); 117 + (int)bench__runtime.tv_sec); 118 118 } 119 119 120 120 int bench_futex_hash(int argc, const char **argv) ··· 137 137 if (!cpu) 138 138 goto errmem; 139 139 140 + memset(&act, 0, sizeof(act)); 140 141 sigfillset(&act.sa_mask); 141 142 act.sa_sigaction = toggle_done; 142 143 sigaction(SIGINT, &act, NULL); ··· 162 161 163 162 threads_starting = nthreads; 164 163 pthread_attr_init(&thread_attr); 165 - gettimeofday(&start, NULL); 164 + gettimeofday(&bench__start, NULL); 166 165 for (i = 0; i < nthreads; i++) { 167 166 worker[i].tid = i; 168 167 worker[i].futex = calloc(nfutexes, sizeof(*worker[i].futex)); ··· 205 204 pthread_mutex_destroy(&thread_lock); 206 205 207 206 for (i = 0; i < nthreads; i++) { 208 - unsigned long t = worker[i].ops/runtime.tv_sec; 207 + unsigned long t = worker[i].ops / bench__runtime.tv_sec; 209 208 update_stats(&throughput_stats, t); 210 209 if (!silent) { 211 210 if (nfutexes == 1)
+6 -6
tools/perf/bench/futex-lock-pi.c
··· 37 37 static bool done = false, fshared = false; 38 38 static unsigned int nthreads = 0; 39 39 static int futex_flag = 0; 40 - struct timeval start, end, runtime; 41 40 static pthread_mutex_t thread_lock; 42 41 static unsigned int threads_starting; 43 42 static struct stats throughput_stats; ··· 63 64 64 65 printf("%sAveraged %ld operations/sec (+- %.2f%%), total secs = %d\n", 65 66 !silent ? "\n" : "", avg, rel_stddev_stats(stddev, avg), 66 - (int) runtime.tv_sec); 67 + (int)bench__runtime.tv_sec); 67 68 } 68 69 69 70 static void toggle_done(int sig __maybe_unused, ··· 72 73 { 73 74 /* inform all threads that we're done for the day */ 74 75 done = true; 75 - gettimeofday(&end, NULL); 76 - timersub(&end, &start, &runtime); 76 + gettimeofday(&bench__end, NULL); 77 + timersub(&bench__end, &bench__start, &bench__runtime); 77 78 } 78 79 79 80 static void *workerfn(void *arg) ··· 160 161 if (!cpu) 161 162 err(EXIT_FAILURE, "calloc"); 162 163 164 + memset(&act, 0, sizeof(act)); 163 165 sigfillset(&act.sa_mask); 164 166 act.sa_sigaction = toggle_done; 165 167 sigaction(SIGINT, &act, NULL); ··· 185 185 186 186 threads_starting = nthreads; 187 187 pthread_attr_init(&thread_attr); 188 - gettimeofday(&start, NULL); 188 + gettimeofday(&bench__start, NULL); 189 189 190 190 create_threads(worker, thread_attr, cpu); 191 191 pthread_attr_destroy(&thread_attr); ··· 211 211 pthread_mutex_destroy(&thread_lock); 212 212 213 213 for (i = 0; i < nthreads; i++) { 214 - unsigned long t = worker[i].ops/runtime.tv_sec; 214 + unsigned long t = worker[i].ops / bench__runtime.tv_sec; 215 215 216 216 update_stats(&throughput_stats, t); 217 217 if (!silent)
+1
tools/perf/bench/futex-requeue.c
··· 128 128 if (!cpu) 129 129 err(EXIT_FAILURE, "cpu_map__new"); 130 130 131 + memset(&act, 0, sizeof(act)); 131 132 sigfillset(&act.sa_mask); 132 133 act.sa_sigaction = toggle_done; 133 134 sigaction(SIGINT, &act, NULL);
+1
tools/perf/bench/futex-wake-parallel.c
··· 234 234 exit(EXIT_FAILURE); 235 235 } 236 236 237 + memset(&act, 0, sizeof(act)); 237 238 sigfillset(&act.sa_mask); 238 239 act.sa_sigaction = toggle_done; 239 240 sigaction(SIGINT, &act, NULL);
+3 -2
tools/perf/bench/futex-wake.c
··· 43 43 static pthread_mutex_t thread_lock; 44 44 static pthread_cond_t thread_parent, thread_worker; 45 45 static struct stats waketime_stats, wakeup_stats; 46 - static unsigned int ncpus, threads_starting, nthreads = 0; 46 + static unsigned int threads_starting, nthreads = 0; 47 47 static int futex_flag = 0; 48 48 49 49 static const struct option options[] = { ··· 136 136 if (!cpu) 137 137 err(EXIT_FAILURE, "calloc"); 138 138 139 + memset(&act, 0, sizeof(act)); 139 140 sigfillset(&act.sa_mask); 140 141 act.sa_sigaction = toggle_done; 141 142 sigaction(SIGINT, &act, NULL); 142 143 143 144 if (!nthreads) 144 - nthreads = ncpus; 145 + nthreads = cpu->nr; 145 146 146 147 worker = calloc(nthreads, sizeof(*worker)); 147 148 if (!worker)
+2 -1
tools/perf/builtin-diff.c
··· 1312 1312 end_line = map__srcline(he->ms.map, bi->sym->start + bi->end, 1313 1313 he->ms.sym); 1314 1314 1315 - if ((start_line != SRCLINE_UNKNOWN) && (end_line != SRCLINE_UNKNOWN)) { 1315 + if ((strncmp(start_line, SRCLINE_UNKNOWN, strlen(SRCLINE_UNKNOWN)) != 0) && 1316 + (strncmp(end_line, SRCLINE_UNKNOWN, strlen(SRCLINE_UNKNOWN)) != 0)) { 1316 1317 scnprintf(buf, sizeof(buf), "[%s -> %s] %4ld", 1317 1318 start_line, end_line, block_he->diff.cycles); 1318 1319 } else {
+3 -1
tools/perf/builtin-top.c
··· 684 684 delay_msecs = top->delay_secs * MSEC_PER_SEC; 685 685 set_term_quiet_input(&save); 686 686 /* trash return*/ 687 - getc(stdin); 687 + clearerr(stdin); 688 + if (poll(&stdin_poll, 1, 0) > 0) 689 + getc(stdin); 688 690 689 691 while (!done) { 690 692 perf_top__print_sym_table(top);
+9 -6
tools/perf/pmu-events/jevents.c
··· 1082 1082 */ 1083 1083 int main(int argc, char *argv[]) 1084 1084 { 1085 - int rc; 1085 + int rc, ret = 0; 1086 1086 int maxfds; 1087 1087 char ldirname[PATH_MAX]; 1088 - 1089 1088 const char *arch; 1090 1089 const char *output_file; 1091 1090 const char *start_dirname; ··· 1155 1156 /* Make build fail */ 1156 1157 fclose(eventsfp); 1157 1158 free_arch_std_events(); 1158 - return 1; 1159 + ret = 1; 1160 + goto out_free_mapfile; 1159 1161 } else if (rc) { 1160 1162 goto empty_map; 1161 1163 } ··· 1174 1174 /* Make build fail */ 1175 1175 fclose(eventsfp); 1176 1176 free_arch_std_events(); 1177 - return 1; 1177 + ret = 1; 1178 1178 } 1179 1179 1180 - return 0; 1180 + 1181 + goto out_free_mapfile; 1181 1182 1182 1183 empty_map: 1183 1184 fclose(eventsfp); 1184 1185 create_empty_mapping(output_file); 1185 1186 free_arch_std_events(); 1186 - return 0; 1187 + out_free_mapfile: 1188 + free(mapfile); 1189 + return ret; 1187 1190 }
+1 -1
tools/perf/tests/bp_account.c
··· 19 19 #include "../perf-sys.h" 20 20 #include "cloexec.h" 21 21 22 - volatile long the_var; 22 + static volatile long the_var; 23 23 24 24 static noinline int test_function(void) 25 25 {
+2 -1
tools/perf/util/block-info.c
··· 295 295 end_line = map__srcline(he->ms.map, bi->sym->start + bi->end, 296 296 he->ms.sym); 297 297 298 - if ((start_line != SRCLINE_UNKNOWN) && (end_line != SRCLINE_UNKNOWN)) { 298 + if ((strncmp(start_line, SRCLINE_UNKNOWN, strlen(SRCLINE_UNKNOWN)) != 0) && 299 + (strncmp(end_line, SRCLINE_UNKNOWN, strlen(SRCLINE_UNKNOWN)) != 0)) { 299 300 scnprintf(buf, sizeof(buf), "[%s -> %s]", 300 301 start_line, end_line); 301 302 } else {
+2 -2
tools/perf/util/env.c
··· 343 343 344 344 const char *perf_env__arch(struct perf_env *env) 345 345 { 346 - struct utsname uts; 347 346 char *arch_name; 348 347 349 348 if (!env || !env->arch) { /* Assume local operation */ 350 - if (uname(&uts) < 0) 349 + static struct utsname uts = { .machine[0] = '\0', }; 350 + if (uts.machine[0] == '\0' && uname(&uts) < 0) 351 351 return NULL; 352 352 arch_name = uts.machine; 353 353 } else
+2 -2
tools/perf/util/map.c
··· 89 89 return true; 90 90 } 91 91 92 - if (!strncmp(filename, "/system/lib/", 11)) { 92 + if (!strncmp(filename, "/system/lib/", 12)) { 93 93 char *ndk, *app; 94 94 const char *arch; 95 95 size_t ndk_length; ··· 431 431 432 432 if (map && map->dso) { 433 433 char *srcline = map__srcline(map, addr, NULL); 434 - if (srcline != SRCLINE_UNKNOWN) 434 + if (strncmp(srcline, SRCLINE_UNKNOWN, strlen(SRCLINE_UNKNOWN)) != 0) 435 435 ret = fprintf(fp, "%s%s", prefix, srcline); 436 436 free_srcline(srcline); 437 437 }
+25 -31
tools/perf/util/parse-events.c
··· 257 257 path = zalloc(sizeof(*path)); 258 258 if (!path) 259 259 return NULL; 260 - path->system = malloc(MAX_EVENT_LENGTH); 261 - if (!path->system) { 260 + if (asprintf(&path->system, "%.*s", MAX_EVENT_LENGTH, sys_dirent->d_name) < 0) { 262 261 free(path); 263 262 return NULL; 264 263 } 265 - path->name = malloc(MAX_EVENT_LENGTH); 266 - if (!path->name) { 264 + if (asprintf(&path->name, "%.*s", MAX_EVENT_LENGTH, evt_dirent->d_name) < 0) { 267 265 zfree(&path->system); 268 266 free(path); 269 267 return NULL; 270 268 } 271 - strncpy(path->system, sys_dirent->d_name, 272 - MAX_EVENT_LENGTH); 273 - strncpy(path->name, evt_dirent->d_name, 274 - MAX_EVENT_LENGTH); 275 269 return path; 276 270 } 277 271 } ··· 1213 1219 static int get_config_terms(struct list_head *head_config, 1214 1220 struct list_head *head_terms __maybe_unused) 1215 1221 { 1216 - #define ADD_CONFIG_TERM(__type) \ 1222 + #define ADD_CONFIG_TERM(__type, __weak) \ 1217 1223 struct perf_evsel_config_term *__t; \ 1218 1224 \ 1219 1225 __t = zalloc(sizeof(*__t)); \ ··· 1222 1228 \ 1223 1229 INIT_LIST_HEAD(&__t->list); \ 1224 1230 __t->type = PERF_EVSEL__CONFIG_TERM_ ## __type; \ 1225 - __t->weak = term->weak; \ 1231 + __t->weak = __weak; \ 1226 1232 list_add_tail(&__t->list, head_terms) 1227 1233 1228 - #define ADD_CONFIG_TERM_VAL(__type, __name, __val) \ 1234 + #define ADD_CONFIG_TERM_VAL(__type, __name, __val, __weak) \ 1229 1235 do { \ 1230 - ADD_CONFIG_TERM(__type); \ 1236 + ADD_CONFIG_TERM(__type, __weak); \ 1231 1237 __t->val.__name = __val; \ 1232 1238 } while (0) 1233 1239 1234 - #define ADD_CONFIG_TERM_STR(__type, __val) \ 1240 + #define ADD_CONFIG_TERM_STR(__type, __val, __weak) \ 1235 1241 do { \ 1236 - ADD_CONFIG_TERM(__type); \ 1242 + ADD_CONFIG_TERM(__type, __weak); \ 1237 1243 __t->val.str = strdup(__val); \ 1238 1244 if (!__t->val.str) { \ 1239 1245 zfree(&__t); \ ··· 1247 1253 list_for_each_entry(term, head_config, list) { 1248 1254 switch (term->type_term) { 1249 1255 case PARSE_EVENTS__TERM_TYPE_SAMPLE_PERIOD: 1250 - ADD_CONFIG_TERM_VAL(PERIOD, period, term->val.num); 1256 + ADD_CONFIG_TERM_VAL(PERIOD, period, term->val.num, term->weak); 1251 1257 break; 1252 1258 case PARSE_EVENTS__TERM_TYPE_SAMPLE_FREQ: 1253 - ADD_CONFIG_TERM_VAL(FREQ, freq, term->val.num); 1259 + ADD_CONFIG_TERM_VAL(FREQ, freq, term->val.num, term->weak); 1254 1260 break; 1255 1261 case PARSE_EVENTS__TERM_TYPE_TIME: 1256 - ADD_CONFIG_TERM_VAL(TIME, time, term->val.num); 1262 + ADD_CONFIG_TERM_VAL(TIME, time, term->val.num, term->weak); 1257 1263 break; 1258 1264 case PARSE_EVENTS__TERM_TYPE_CALLGRAPH: 1259 - ADD_CONFIG_TERM_STR(CALLGRAPH, term->val.str); 1265 + ADD_CONFIG_TERM_STR(CALLGRAPH, term->val.str, term->weak); 1260 1266 break; 1261 1267 case PARSE_EVENTS__TERM_TYPE_BRANCH_SAMPLE_TYPE: 1262 - ADD_CONFIG_TERM_STR(BRANCH, term->val.str); 1268 + ADD_CONFIG_TERM_STR(BRANCH, term->val.str, term->weak); 1263 1269 break; 1264 1270 case PARSE_EVENTS__TERM_TYPE_STACKSIZE: 1265 1271 ADD_CONFIG_TERM_VAL(STACK_USER, stack_user, 1266 - term->val.num); 1272 + term->val.num, term->weak); 1267 1273 break; 1268 1274 case PARSE_EVENTS__TERM_TYPE_INHERIT: 1269 1275 ADD_CONFIG_TERM_VAL(INHERIT, inherit, 1270 - term->val.num ? 1 : 0); 1276 + term->val.num ? 1 : 0, term->weak); 1271 1277 break; 1272 1278 case PARSE_EVENTS__TERM_TYPE_NOINHERIT: 1273 1279 ADD_CONFIG_TERM_VAL(INHERIT, inherit, 1274 - term->val.num ? 0 : 1); 1280 + term->val.num ? 0 : 1, term->weak); 1275 1281 break; 1276 1282 case PARSE_EVENTS__TERM_TYPE_MAX_STACK: 1277 1283 ADD_CONFIG_TERM_VAL(MAX_STACK, max_stack, 1278 - term->val.num); 1284 + term->val.num, term->weak); 1279 1285 break; 1280 1286 case PARSE_EVENTS__TERM_TYPE_MAX_EVENTS: 1281 1287 ADD_CONFIG_TERM_VAL(MAX_EVENTS, max_events, 1282 - term->val.num); 1288 + term->val.num, term->weak); 1283 1289 break; 1284 1290 case PARSE_EVENTS__TERM_TYPE_OVERWRITE: 1285 1291 ADD_CONFIG_TERM_VAL(OVERWRITE, overwrite, 1286 - term->val.num ? 1 : 0); 1292 + term->val.num ? 1 : 0, term->weak); 1287 1293 break; 1288 1294 case PARSE_EVENTS__TERM_TYPE_NOOVERWRITE: 1289 1295 ADD_CONFIG_TERM_VAL(OVERWRITE, overwrite, 1290 - term->val.num ? 0 : 1); 1296 + term->val.num ? 0 : 1, term->weak); 1291 1297 break; 1292 1298 case PARSE_EVENTS__TERM_TYPE_DRV_CFG: 1293 - ADD_CONFIG_TERM_STR(DRV_CFG, term->val.str); 1299 + ADD_CONFIG_TERM_STR(DRV_CFG, term->val.str, term->weak); 1294 1300 break; 1295 1301 case PARSE_EVENTS__TERM_TYPE_PERCORE: 1296 1302 ADD_CONFIG_TERM_VAL(PERCORE, percore, 1297 - term->val.num ? true : false); 1303 + term->val.num ? true : false, term->weak); 1298 1304 break; 1299 1305 case PARSE_EVENTS__TERM_TYPE_AUX_OUTPUT: 1300 1306 ADD_CONFIG_TERM_VAL(AUX_OUTPUT, aux_output, 1301 - term->val.num ? 1 : 0); 1307 + term->val.num ? 1 : 0, term->weak); 1302 1308 break; 1303 1309 case PARSE_EVENTS__TERM_TYPE_AUX_SAMPLE_SIZE: 1304 1310 ADD_CONFIG_TERM_VAL(AUX_SAMPLE_SIZE, aux_sample_size, 1305 - term->val.num); 1311 + term->val.num, term->weak); 1306 1312 break; 1307 1313 default: 1308 1314 break; ··· 1339 1345 } 1340 1346 1341 1347 if (bits) 1342 - ADD_CONFIG_TERM_VAL(CFG_CHG, cfg_chg, bits); 1348 + ADD_CONFIG_TERM_VAL(CFG_CHG, cfg_chg, bits, false); 1343 1349 1344 1350 #undef ADD_CONFIG_TERM 1345 1351 return 0;
+3
tools/perf/util/probe-file.c
··· 206 206 } else 207 207 ret = strlist__add(sl, tev.event); 208 208 clear_probe_trace_event(&tev); 209 + /* Skip if there is same name multi-probe event in the list */ 210 + if (ret == -EEXIST) 211 + ret = 0; 209 212 if (ret < 0) 210 213 break; 211 214 }
+8 -3
tools/perf/util/probe-finder.c
··· 637 637 return -EINVAL; 638 638 } 639 639 640 - /* Try to get actual symbol name from symtab */ 641 - symbol = dwfl_module_addrsym(mod, paddr, &sym, NULL); 640 + if (dwarf_entrypc(sp_die, &eaddr) == 0) { 641 + /* If the DIE has entrypc, use it. */ 642 + symbol = dwarf_diename(sp_die); 643 + } else { 644 + /* Try to get actual symbol name and address from symtab */ 645 + symbol = dwfl_module_addrsym(mod, paddr, &sym, NULL); 646 + eaddr = sym.st_value; 647 + } 642 648 if (!symbol) { 643 649 pr_warning("Failed to find symbol at 0x%lx\n", 644 650 (unsigned long)paddr); 645 651 return -ENOENT; 646 652 } 647 - eaddr = sym.st_value; 648 653 649 654 tp->offset = (unsigned long)(paddr - eaddr); 650 655 tp->address = (unsigned long)paddr;
+7 -5
tools/perf/util/setup.py
··· 2 2 from subprocess import Popen, PIPE 3 3 from re import sub 4 4 5 - def clang_has_option(option): 6 - return [o for o in Popen(['clang', option], stderr=PIPE).stderr.readlines() if b"unknown argument" in o] == [ ] 7 - 8 5 cc = getenv("CC") 9 - if cc == "clang": 6 + cc_is_clang = b"clang version" in Popen([cc, "-v"], stderr=PIPE).stderr.readline() 7 + 8 + def clang_has_option(option): 9 + return [o for o in Popen([cc, option], stderr=PIPE).stderr.readlines() if b"unknown argument" in o] == [ ] 10 + 11 + if cc_is_clang: 10 12 from distutils.sysconfig import get_config_vars 11 13 vars = get_config_vars() 12 14 for var in ('CFLAGS', 'OPT'): ··· 42 40 cflags = getenv('CFLAGS', '').split() 43 41 # switch off several checks (need to be at the end of cflags list) 44 42 cflags += ['-fno-strict-aliasing', '-Wno-write-strings', '-Wno-unused-parameter', '-Wno-redundant-decls' ] 45 - if cc != "clang": 43 + if not cc_is_clang: 46 44 cflags += ['-Wno-cast-function-type' ] 47 45 48 46 src_perf = getenv('srctree') + '/tools/perf'
+6 -7
tools/perf/util/symbol.c
··· 1622 1622 goto out; 1623 1623 } 1624 1624 1625 - if (dso->kernel) { 1625 + kmod = dso->symtab_type == DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE || 1626 + dso->symtab_type == DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE_COMP || 1627 + dso->symtab_type == DSO_BINARY_TYPE__GUEST_KMODULE || 1628 + dso->symtab_type == DSO_BINARY_TYPE__GUEST_KMODULE_COMP; 1629 + 1630 + if (dso->kernel && !kmod) { 1626 1631 if (dso->kernel == DSO_TYPE_KERNEL) 1627 1632 ret = dso__load_kernel_sym(dso, map); 1628 1633 else if (dso->kernel == DSO_TYPE_GUEST_KERNEL) ··· 1654 1649 name = malloc(PATH_MAX); 1655 1650 if (!name) 1656 1651 goto out; 1657 - 1658 - kmod = dso->symtab_type == DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE || 1659 - dso->symtab_type == DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE_COMP || 1660 - dso->symtab_type == DSO_BINARY_TYPE__GUEST_KMODULE || 1661 - dso->symtab_type == DSO_BINARY_TYPE__GUEST_KMODULE_COMP; 1662 - 1663 1652 1664 1653 /* 1665 1654 * Read the build id if possible. This is required for
+1 -1
tools/power/cpupower/utils/idle_monitor/amd_fam14h_idle.c
··· 82 82 static struct pci_dev *amd_fam14h_pci_dev; 83 83 static int nbp1_entered; 84 84 85 - struct timespec start_time; 85 + static struct timespec start_time; 86 86 static unsigned long long timediff; 87 87 88 88 #ifdef DEBUG
+1 -1
tools/power/cpupower/utils/idle_monitor/cpuidle_sysfs.c
··· 19 19 20 20 static unsigned long long **previous_count; 21 21 static unsigned long long **current_count; 22 - struct timespec start_time; 22 + static struct timespec start_time; 23 23 static unsigned long long timediff; 24 24 25 25 static int cpuidle_get_count_percent(unsigned int id, double *percent,
+2
tools/power/cpupower/utils/idle_monitor/cpupower-monitor.c
··· 27 27 0 28 28 }; 29 29 30 + int cpu_count; 31 + 30 32 static struct cpuidle_monitor *monitors[MONITORS_MAX]; 31 33 static unsigned int avail_monitors; 32 34
+1 -1
tools/power/cpupower/utils/idle_monitor/cpupower-monitor.h
··· 25 25 #endif 26 26 #define CSTATE_DESC_LEN 60 27 27 28 - int cpu_count; 28 + extern int cpu_count; 29 29 30 30 /* Hard to define the right names ...: */ 31 31 enum power_range_e {
+1 -1
tools/power/x86/turbostat/Makefile
··· 16 16 17 17 %: %.c 18 18 @mkdir -p $(BUILD_OUTPUT) 19 - $(CC) $(CFLAGS) $< -o $(BUILD_OUTPUT)/$@ $(LDFLAGS) 19 + $(CC) $(CFLAGS) $< -o $(BUILD_OUTPUT)/$@ $(LDFLAGS) -lcap 20 20 21 21 .PHONY : clean 22 22 clean :
+114 -30
tools/power/x86/turbostat/turbostat.c
··· 30 30 #include <sched.h> 31 31 #include <time.h> 32 32 #include <cpuid.h> 33 - #include <linux/capability.h> 33 + #include <sys/capability.h> 34 34 #include <errno.h> 35 35 #include <math.h> 36 36 ··· 303 303 int *irqs_per_cpu; /* indexed by cpu_num */ 304 304 305 305 void setup_all_buffers(void); 306 + 307 + char *sys_lpi_file; 308 + char *sys_lpi_file_sysfs = "/sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us"; 309 + char *sys_lpi_file_debugfs = "/sys/kernel/debug/pmc_core/slp_s0_residency_usec"; 306 310 307 311 int cpu_is_not_present(int cpu) 308 312 { ··· 2920 2916 * 2921 2917 * record snapshot of 2922 2918 * /sys/devices/system/cpu/cpuidle/low_power_idle_cpu_residency_us 2923 - * 2924 - * return 1 if config change requires a restart, else return 0 2925 2919 */ 2926 2920 int snapshot_cpu_lpi_us(void) 2927 2921 { ··· 2943 2941 /* 2944 2942 * snapshot_sys_lpi() 2945 2943 * 2946 - * record snapshot of 2947 - * /sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us 2948 - * 2949 - * return 1 if config change requires a restart, else return 0 2944 + * record snapshot of sys_lpi_file 2950 2945 */ 2951 2946 int snapshot_sys_lpi_us(void) 2952 2947 { 2953 2948 FILE *fp; 2954 2949 int retval; 2955 2950 2956 - fp = fopen_or_die("/sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us", "r"); 2951 + fp = fopen_or_die(sys_lpi_file, "r"); 2957 2952 2958 2953 retval = fscanf(fp, "%lld", &cpuidle_cur_sys_lpi_us); 2959 2954 if (retval != 1) { ··· 3150 3151 err(-5, "no /dev/cpu/0/msr, Try \"# modprobe msr\" "); 3151 3152 } 3152 3153 3153 - void check_permissions() 3154 + /* 3155 + * check for CAP_SYS_RAWIO 3156 + * return 0 on success 3157 + * return 1 on fail 3158 + */ 3159 + int check_for_cap_sys_rawio(void) 3154 3160 { 3155 - struct __user_cap_header_struct cap_header_data; 3156 - cap_user_header_t cap_header = &cap_header_data; 3157 - struct __user_cap_data_struct cap_data_data; 3158 - cap_user_data_t cap_data = &cap_data_data; 
3159 - extern int capget(cap_user_header_t hdrp, cap_user_data_t datap); 3161 + cap_t caps; 3162 + cap_flag_value_t cap_flag_value; 3163 + 3164 + caps = cap_get_proc(); 3165 + if (caps == NULL) 3166 + err(-6, "cap_get_proc\n"); 3167 + 3168 + if (cap_get_flag(caps, CAP_SYS_RAWIO, CAP_EFFECTIVE, &cap_flag_value)) 3169 + err(-6, "cap_get\n"); 3170 + 3171 + if (cap_flag_value != CAP_SET) { 3172 + warnx("capget(CAP_SYS_RAWIO) failed," 3173 + " try \"# setcap cap_sys_rawio=ep %s\"", progname); 3174 + return 1; 3175 + } 3176 + 3177 + if (cap_free(caps) == -1) 3178 + err(-6, "cap_free\n"); 3179 + 3180 + return 0; 3181 + } 3182 + void check_permissions(void) 3183 + { 3160 3184 int do_exit = 0; 3161 3185 char pathname[32]; 3162 3186 3163 3187 /* check for CAP_SYS_RAWIO */ 3164 - cap_header->pid = getpid(); 3165 - cap_header->version = _LINUX_CAPABILITY_VERSION; 3166 - if (capget(cap_header, cap_data) < 0) 3167 - err(-6, "capget(2) failed"); 3168 - 3169 - if ((cap_data->effective & (1 << CAP_SYS_RAWIO)) == 0) { 3170 - do_exit++; 3171 - warnx("capget(CAP_SYS_RAWIO) failed," 3172 - " try \"# setcap cap_sys_rawio=ep %s\"", progname); 3173 - } 3188 + do_exit += check_for_cap_sys_rawio(); 3174 3189 3175 3190 /* test file permissions */ 3176 3191 sprintf(pathname, "/dev/cpu/%d/msr", base_cpu); ··· 3278 3265 case INTEL_FAM6_ATOM_GOLDMONT: /* BXT */ 3279 3266 case INTEL_FAM6_ATOM_GOLDMONT_PLUS: 3280 3267 case INTEL_FAM6_ATOM_GOLDMONT_D: /* DNV */ 3268 + case INTEL_FAM6_ATOM_TREMONT: /* EHL */ 3281 3269 pkg_cstate_limits = glm_pkg_cstate_limits; 3282 3270 break; 3283 3271 default: ··· 3346 3332 3347 3333 switch (model) { 3348 3334 case INTEL_FAM6_SKYLAKE_X: 3335 + return 1; 3336 + } 3337 + return 0; 3338 + } 3339 + int is_ehl(unsigned int family, unsigned int model) 3340 + { 3341 + if (!genuine_intel) 3342 + return 0; 3343 + 3344 + switch (model) { 3345 + case INTEL_FAM6_ATOM_TREMONT: 3349 3346 return 1; 3350 3347 } 3351 3348 return 0; ··· 3503 3478 dump_nhm_cst_cfg(); 3504 3479 } 
3505 3480 3481 + static void dump_sysfs_file(char *path) 3482 + { 3483 + FILE *input; 3484 + char cpuidle_buf[64]; 3485 + 3486 + input = fopen(path, "r"); 3487 + if (input == NULL) { 3488 + if (debug) 3489 + fprintf(outf, "NSFOD %s\n", path); 3490 + return; 3491 + } 3492 + if (!fgets(cpuidle_buf, sizeof(cpuidle_buf), input)) 3493 + err(1, "%s: failed to read file", path); 3494 + fclose(input); 3495 + 3496 + fprintf(outf, "%s: %s", strrchr(path, '/') + 1, cpuidle_buf); 3497 + } 3506 3498 static void 3507 3499 dump_sysfs_cstate_config(void) 3508 3500 { ··· 3532 3490 3533 3491 if (!DO_BIC(BIC_sysfs)) 3534 3492 return; 3493 + 3494 + if (access("/sys/devices/system/cpu/cpuidle", R_OK)) { 3495 + fprintf(outf, "cpuidle not loaded\n"); 3496 + return; 3497 + } 3498 + 3499 + dump_sysfs_file("/sys/devices/system/cpu/cpuidle/current_driver"); 3500 + dump_sysfs_file("/sys/devices/system/cpu/cpuidle/current_governor"); 3501 + dump_sysfs_file("/sys/devices/system/cpu/cpuidle/current_governor_ro"); 3535 3502 3536 3503 for (state = 0; state < 10; ++state) { 3537 3504 ··· 3945 3894 else 3946 3895 BIC_PRESENT(BIC_PkgWatt); 3947 3896 break; 3897 + case INTEL_FAM6_ATOM_TREMONT: /* EHL */ 3898 + do_rapl = RAPL_PKG | RAPL_CORES | RAPL_CORE_POLICY | RAPL_DRAM | RAPL_DRAM_PERF_STATUS | RAPL_PKG_PERF_STATUS | RAPL_GFX | RAPL_PKG_POWER_INFO; 3899 + if (rapl_joules) { 3900 + BIC_PRESENT(BIC_Pkg_J); 3901 + BIC_PRESENT(BIC_Cor_J); 3902 + BIC_PRESENT(BIC_RAM_J); 3903 + BIC_PRESENT(BIC_GFX_J); 3904 + } else { 3905 + BIC_PRESENT(BIC_PkgWatt); 3906 + BIC_PRESENT(BIC_CorWatt); 3907 + BIC_PRESENT(BIC_RAMWatt); 3908 + BIC_PRESENT(BIC_GFXWatt); 3909 + } 3910 + break; 3948 3911 case INTEL_FAM6_SKYLAKE_L: /* SKL */ 3949 3912 case INTEL_FAM6_CANNONLAKE_L: /* CNL */ 3950 3913 do_rapl = RAPL_PKG | RAPL_CORES | RAPL_CORE_POLICY | RAPL_DRAM | RAPL_DRAM_PERF_STATUS | RAPL_PKG_PERF_STATUS | RAPL_GFX | RAPL_PKG_POWER_INFO; ··· 4360 4295 case INTEL_FAM6_ATOM_GOLDMONT: /* BXT */ 4361 4296 case 
INTEL_FAM6_ATOM_GOLDMONT_PLUS: 4362 4297 case INTEL_FAM6_ATOM_GOLDMONT_D: /* DNV */ 4298 + case INTEL_FAM6_ATOM_TREMONT: /* EHL */ 4363 4299 return 1; 4364 4300 } 4365 4301 return 0; ··· 4390 4324 case INTEL_FAM6_CANNONLAKE_L: /* CNL */ 4391 4325 case INTEL_FAM6_ATOM_GOLDMONT: /* BXT */ 4392 4326 case INTEL_FAM6_ATOM_GOLDMONT_PLUS: 4327 + case INTEL_FAM6_ATOM_TREMONT: /* EHL */ 4393 4328 return 1; 4394 4329 } 4395 4330 return 0; ··· 4677 4610 case INTEL_FAM6_SKYLAKE: 4678 4611 case INTEL_FAM6_KABYLAKE_L: 4679 4612 case INTEL_FAM6_KABYLAKE: 4613 + case INTEL_FAM6_COMETLAKE_L: 4614 + case INTEL_FAM6_COMETLAKE: 4680 4615 return INTEL_FAM6_SKYLAKE_L; 4681 4616 4682 4617 case INTEL_FAM6_ICELAKE_L: 4683 4618 case INTEL_FAM6_ICELAKE_NNPI: 4619 + case INTEL_FAM6_TIGERLAKE_L: 4620 + case INTEL_FAM6_TIGERLAKE: 4684 4621 return INTEL_FAM6_CANNONLAKE_L; 4685 4622 4686 4623 case INTEL_FAM6_ATOM_TREMONT_D: 4687 4624 return INTEL_FAM6_ATOM_GOLDMONT_D; 4625 + 4626 + case INTEL_FAM6_ATOM_TREMONT_L: 4627 + return INTEL_FAM6_ATOM_TREMONT; 4628 + 4629 + case INTEL_FAM6_ICELAKE_X: 4630 + return INTEL_FAM6_SKYLAKE_X; 4688 4631 } 4689 4632 return model; 4690 4633 } ··· 4949 4872 do_slm_cstates = is_slm(family, model); 4950 4873 do_knl_cstates = is_knl(family, model); 4951 4874 4952 - if (do_slm_cstates || do_knl_cstates || is_cnl(family, model)) 4875 + if (do_slm_cstates || do_knl_cstates || is_cnl(family, model) || 4876 + is_ehl(family, model)) 4953 4877 BIC_NOT_PRESENT(BIC_CPU_c3); 4954 4878 4955 4879 if (!quiet) ··· 4985 4907 else 4986 4908 BIC_NOT_PRESENT(BIC_CPU_LPI); 4987 4909 4988 - if (!access("/sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us", R_OK)) 4910 + if (!access(sys_lpi_file_sysfs, R_OK)) { 4911 + sys_lpi_file = sys_lpi_file_sysfs; 4989 4912 BIC_PRESENT(BIC_SYS_LPI); 4990 - else 4913 + } else if (!access(sys_lpi_file_debugfs, R_OK)) { 4914 + sys_lpi_file = sys_lpi_file_debugfs; 4915 + BIC_PRESENT(BIC_SYS_LPI); 4916 + } else { 4917 + sys_lpi_file_sysfs = 
NULL; 4991 4918 BIC_NOT_PRESENT(BIC_SYS_LPI); 4919 + } 4992 4920 4993 4921 if (!quiet) 4994 4922 decode_misc_feature_control(); ··· 5390 5306 } 5391 5307 5392 5308 void print_version() { 5393 - fprintf(outf, "turbostat version 19.08.31" 5309 + fprintf(outf, "turbostat version 20.03.20" 5394 5310 " - Len Brown <lenb@kernel.org>\n"); 5395 5311 } 5396 5312 ··· 5407 5323 } 5408 5324 5409 5325 msrp->msr_num = msr_num; 5410 - strncpy(msrp->name, name, NAME_BYTES); 5326 + strncpy(msrp->name, name, NAME_BYTES - 1); 5411 5327 if (path) 5412 - strncpy(msrp->path, path, PATH_BYTES); 5328 + strncpy(msrp->path, path, PATH_BYTES - 1); 5413 5329 msrp->width = width; 5414 5330 msrp->type = type; 5415 5331 msrp->format = format;
+2 -2
tools/scripts/Makefile.include
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 ifneq ($(O),) 3 3 ifeq ($(origin O), command line) 4 - dummy := $(if $(shell test -d $(O) || echo $(O)),$(error O=$(O) does not exist),) 5 - ABSOLUTE_O := $(shell cd $(O) ; pwd) 4 + dummy := $(if $(shell cd $(PWD); test -d $(O) || echo $(O)),$(error O=$(O) does not exist),) 5 + ABSOLUTE_O := $(shell cd $(PWD); cd $(O) ; pwd) 6 6 OUTPUT := $(ABSOLUTE_O)/$(if $(subdir),$(subdir)/) 7 7 COMMAND_O := O=$(ABSOLUTE_O) 8 8 ifeq ($(objtree),)
+1
tools/testing/selftests/Makefile
··· 33 33 TARGETS += mount 34 34 TARGETS += mqueue 35 35 TARGETS += net 36 + TARGETS += net/forwarding 36 37 TARGETS += net/mptcp 37 38 TARGETS += netfilter 38 39 TARGETS += networking/timestamping
+60
tools/testing/selftests/bpf/prog_tests/send_signal_sched_switch.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <test_progs.h> 3 + #include <stdio.h> 4 + #include <stdlib.h> 5 + #include <sys/mman.h> 6 + #include <pthread.h> 7 + #include <sys/types.h> 8 + #include <sys/stat.h> 9 + #include <fcntl.h> 10 + #include "test_send_signal_kern.skel.h" 11 + 12 + static void sigusr1_handler(int signum) 13 + { 14 + } 15 + 16 + #define THREAD_COUNT 100 17 + 18 + static void *worker(void *p) 19 + { 20 + int i; 21 + 22 + for ( i = 0; i < 1000; i++) 23 + usleep(1); 24 + 25 + return NULL; 26 + } 27 + 28 + void test_send_signal_sched_switch(void) 29 + { 30 + struct test_send_signal_kern *skel; 31 + pthread_t threads[THREAD_COUNT]; 32 + u32 duration = 0; 33 + int i, err; 34 + 35 + signal(SIGUSR1, sigusr1_handler); 36 + 37 + skel = test_send_signal_kern__open_and_load(); 38 + if (CHECK(!skel, "skel_open_and_load", "skeleton open_and_load failed\n")) 39 + return; 40 + 41 + skel->bss->pid = getpid(); 42 + skel->bss->sig = SIGUSR1; 43 + 44 + err = test_send_signal_kern__attach(skel); 45 + if (CHECK(err, "skel_attach", "skeleton attach failed\n")) 46 + goto destroy_skel; 47 + 48 + for (i = 0; i < THREAD_COUNT; i++) { 49 + err = pthread_create(threads + i, NULL, worker, NULL); 50 + if (CHECK(err, "pthread_create", "Error creating thread, %s\n", 51 + strerror(errno))) 52 + goto destroy_skel; 53 + } 54 + 55 + for (i = 0; i < THREAD_COUNT; i++) 56 + pthread_join(threads[i], NULL); 57 + 58 + destroy_skel: 59 + test_send_signal_kern__destroy(skel); 60 + }
+6
tools/testing/selftests/bpf/progs/test_send_signal_kern.c
··· 31 31 return bpf_send_signal_test(ctx); 32 32 } 33 33 34 + SEC("tracepoint/sched/sched_switch") 35 + int send_signal_tp_sched(void *ctx) 36 + { 37 + return bpf_send_signal_test(ctx); 38 + } 39 + 34 40 SEC("perf_event") 35 41 int send_signal_perf(void *ctx) 36 42 {
+42
tools/testing/selftests/bpf/test_btf.c
··· 1062 1062 .err_str = "Member exceeds struct_size", 1063 1063 }, 1064 1064 1065 + /* Test member unexceeds the size of struct 1066 + * 1067 + * enum E { 1068 + * E0, 1069 + * E1, 1070 + * }; 1071 + * 1072 + * struct A { 1073 + * char m; 1074 + * enum E __attribute__((packed)) n; 1075 + * }; 1076 + */ 1077 + { 1078 + .descr = "size check test #5", 1079 + .raw_types = { 1080 + /* int */ /* [1] */ 1081 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, sizeof(int)), 1082 + /* char */ /* [2] */ 1083 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 8, 1), 1084 + /* enum E { */ /* [3] */ 1085 + BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_ENUM, 0, 2), 1), 1086 + BTF_ENUM_ENC(NAME_TBD, 0), 1087 + BTF_ENUM_ENC(NAME_TBD, 1), 1088 + /* } */ 1089 + /* struct A { */ /* [4] */ 1090 + BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_STRUCT, 0, 2), 2), 1091 + BTF_MEMBER_ENC(NAME_TBD, 2, 0), /* char m; */ 1092 + BTF_MEMBER_ENC(NAME_TBD, 3, 8),/* enum E __attribute__((packed)) n; */ 1093 + /* } */ 1094 + BTF_END_RAW, 1095 + }, 1096 + .str_sec = "\0E\0E0\0E1\0A\0m\0n", 1097 + .str_sec_size = sizeof("\0E\0E0\0E1\0A\0m\0n"), 1098 + .map_type = BPF_MAP_TYPE_ARRAY, 1099 + .map_name = "size_check5_map", 1100 + .key_size = sizeof(int), 1101 + .value_size = 2, 1102 + .key_type_id = 1, 1103 + .value_type_id = 4, 1104 + .max_entries = 4, 1105 + }, 1106 + 1065 1107 /* typedef const void * const_void_ptr; 1066 1108 * struct A { 1067 1109 * const_void_ptr m;
+15
tools/testing/selftests/bpf/verifier/jmp32.c
··· 62 62 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 63 63 }, 64 64 { 65 + "jset32: ignores upper bits", 66 + .insns = { 67 + BPF_MOV64_IMM(BPF_REG_0, 0), 68 + BPF_LD_IMM64(BPF_REG_7, 0x8000000000000000), 69 + BPF_LD_IMM64(BPF_REG_8, 0x8000000000000000), 70 + BPF_JMP_REG(BPF_JSET, BPF_REG_7, BPF_REG_8, 1), 71 + BPF_EXIT_INSN(), 72 + BPF_JMP32_REG(BPF_JSET, BPF_REG_7, BPF_REG_8, 1), 73 + BPF_MOV64_IMM(BPF_REG_0, 2), 74 + BPF_EXIT_INSN(), 75 + }, 76 + .result = ACCEPT, 77 + .retval = 2, 78 + }, 79 + { 65 80 "jset32: min/max deduction", 66 81 .insns = { 67 82 BPF_RAND_UEXT_R7,
+3 -1
tools/testing/selftests/net/Makefile
··· 11 11 TEST_PROGS += udpgro_bench.sh udpgro.sh test_vxlan_under_vrf.sh reuseport_addr_any.sh 12 12 TEST_PROGS += test_vxlan_fdb_changelink.sh so_txtime.sh ipv6_flowlabel.sh 13 13 TEST_PROGS += tcp_fastopen_backup_key.sh fcnal-test.sh l2tp.sh traceroute.sh 14 - TEST_PROGS += fin_ack_lat.sh 14 + TEST_PROGS += fin_ack_lat.sh fib_nexthop_multiprefix.sh fib_nexthops.sh 15 + TEST_PROGS += altnames.sh icmp_redirect.sh ip6_gre_headroom.sh 16 + TEST_PROGS += route_localnet.sh 15 17 TEST_PROGS += reuseaddr_ports_exhausted.sh 16 18 TEST_PROGS_EXTENDED := in_netns.sh 17 19 TEST_GEN_FILES = socket nettest
+76
tools/testing/selftests/net/forwarding/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0+ OR MIT 2 + 3 + TEST_PROGS = bridge_igmp.sh \ 4 + bridge_port_isolation.sh \ 5 + bridge_sticky_fdb.sh \ 6 + bridge_vlan_aware.sh \ 7 + bridge_vlan_unaware.sh \ 8 + ethtool.sh \ 9 + gre_inner_v4_multipath.sh \ 10 + gre_inner_v6_multipath.sh \ 11 + gre_multipath.sh \ 12 + ip6gre_inner_v4_multipath.sh \ 13 + ip6gre_inner_v6_multipath.sh \ 14 + ipip_flat_gre_key.sh \ 15 + ipip_flat_gre_keys.sh \ 16 + ipip_flat_gre.sh \ 17 + ipip_hier_gre_key.sh \ 18 + ipip_hier_gre_keys.sh \ 19 + ipip_hier_gre.sh \ 20 + loopback.sh \ 21 + mirror_gre_bound.sh \ 22 + mirror_gre_bridge_1d.sh \ 23 + mirror_gre_bridge_1d_vlan.sh \ 24 + mirror_gre_bridge_1q_lag.sh \ 25 + mirror_gre_bridge_1q.sh \ 26 + mirror_gre_changes.sh \ 27 + mirror_gre_flower.sh \ 28 + mirror_gre_lag_lacp.sh \ 29 + mirror_gre_neigh.sh \ 30 + mirror_gre_nh.sh \ 31 + mirror_gre.sh \ 32 + mirror_gre_vlan_bridge_1q.sh \ 33 + mirror_gre_vlan.sh \ 34 + mirror_vlan.sh \ 35 + router_bridge.sh \ 36 + router_bridge_vlan.sh \ 37 + router_broadcast.sh \ 38 + router_mpath_nh.sh \ 39 + router_multicast.sh \ 40 + router_multipath.sh \ 41 + router.sh \ 42 + router_vid_1.sh \ 43 + sch_ets.sh \ 44 + sch_tbf_ets.sh \ 45 + sch_tbf_prio.sh \ 46 + sch_tbf_root.sh \ 47 + tc_actions.sh \ 48 + tc_chains.sh \ 49 + tc_flower_router.sh \ 50 + tc_flower.sh \ 51 + tc_shblocks.sh \ 52 + tc_vlan_modify.sh \ 53 + vxlan_asymmetric.sh \ 54 + vxlan_bridge_1d_port_8472.sh \ 55 + vxlan_bridge_1d.sh \ 56 + vxlan_bridge_1q_port_8472.sh \ 57 + vxlan_bridge_1q.sh \ 58 + vxlan_symmetric.sh 59 + 60 + TEST_PROGS_EXTENDED := devlink_lib.sh \ 61 + ethtool_lib.sh \ 62 + fib_offload_lib.sh \ 63 + forwarding.config.sample \ 64 + ipip_lib.sh \ 65 + lib.sh \ 66 + mirror_gre_lib.sh \ 67 + mirror_gre_topo_lib.sh \ 68 + mirror_lib.sh \ 69 + mirror_topo_lib.sh \ 70 + sch_ets_core.sh \ 71 + sch_ets_tests.sh \ 72 + sch_tbf_core.sh \ 73 + sch_tbf_etsprio.sh \ 74 + tc_common.sh 75 + 76 + include ../../lib.mk
tools/testing/selftests/net/forwarding/ethtool_lib.sh
+4
tools/testing/selftests/net/reuseport_addr_any.c
··· 21 21 #include <sys/socket.h> 22 22 #include <unistd.h> 23 23 24 + #ifndef SOL_DCCP 25 + #define SOL_DCCP 269 26 + #endif 27 + 24 28 static const char *IP4_ADDR = "127.0.0.1"; 25 29 static const char *IP6_ADDR = "::1"; 26 30 static const char *IP4_MAPPED6 = "::ffff:127.0.0.1";
+5 -1
tools/testing/selftests/netfilter/Makefile
··· 3 3 4 4 TEST_PROGS := nft_trans_stress.sh nft_nat.sh bridge_brouter.sh \ 5 5 conntrack_icmp_related.sh nft_flowtable.sh ipvs.sh \ 6 - nft_concat_range.sh 6 + nft_concat_range.sh \ 7 + nft_queue.sh 8 + 9 + LDLIBS = -lmnl 10 + TEST_GEN_FILES = nf-queue 7 11 8 12 include ../lib.mk
+6
tools/testing/selftests/netfilter/config
··· 1 1 CONFIG_NET_NS=y 2 2 CONFIG_NF_TABLES_INET=y 3 + CONFIG_NFT_QUEUE=m 4 + CONFIG_NFT_NAT=m 5 + CONFIG_NFT_REDIR=m 6 + CONFIG_NFT_MASQ=m 7 + CONFIG_NFT_FLOW_OFFLOAD=m 8 + CONFIG_NF_CT_NETLINK=m
+352
tools/testing/selftests/netfilter/nf-queue.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <errno.h> 4 + #include <stdbool.h> 5 + #include <stdio.h> 6 + #include <stdint.h> 7 + #include <stdlib.h> 8 + #include <unistd.h> 9 + #include <string.h> 10 + #include <time.h> 11 + #include <arpa/inet.h> 12 + 13 + #include <libmnl/libmnl.h> 14 + #include <linux/netfilter.h> 15 + #include <linux/netfilter/nfnetlink.h> 16 + #include <linux/netfilter/nfnetlink_queue.h> 17 + 18 + struct options { 19 + bool count_packets; 20 + int verbose; 21 + unsigned int queue_num; 22 + unsigned int timeout; 23 + }; 24 + 25 + static unsigned int queue_stats[5]; 26 + static struct options opts; 27 + 28 + static void help(const char *p) 29 + { 30 + printf("Usage: %s [-c|-v [-vv] ] [-t timeout] [-q queue_num]\n", p); 31 + } 32 + 33 + static int parse_attr_cb(const struct nlattr *attr, void *data) 34 + { 35 + const struct nlattr **tb = data; 36 + int type = mnl_attr_get_type(attr); 37 + 38 + /* skip unsupported attribute in user-space */ 39 + if (mnl_attr_type_valid(attr, NFQA_MAX) < 0) 40 + return MNL_CB_OK; 41 + 42 + switch (type) { 43 + case NFQA_MARK: 44 + case NFQA_IFINDEX_INDEV: 45 + case NFQA_IFINDEX_OUTDEV: 46 + case NFQA_IFINDEX_PHYSINDEV: 47 + case NFQA_IFINDEX_PHYSOUTDEV: 48 + if (mnl_attr_validate(attr, MNL_TYPE_U32) < 0) { 49 + perror("mnl_attr_validate"); 50 + return MNL_CB_ERROR; 51 + } 52 + break; 53 + case NFQA_TIMESTAMP: 54 + if (mnl_attr_validate2(attr, MNL_TYPE_UNSPEC, 55 + sizeof(struct nfqnl_msg_packet_timestamp)) < 0) { 56 + perror("mnl_attr_validate2"); 57 + return MNL_CB_ERROR; 58 + } 59 + break; 60 + case NFQA_HWADDR: 61 + if (mnl_attr_validate2(attr, MNL_TYPE_UNSPEC, 62 + sizeof(struct nfqnl_msg_packet_hw)) < 0) { 63 + perror("mnl_attr_validate2"); 64 + return MNL_CB_ERROR; 65 + } 66 + break; 67 + case NFQA_PAYLOAD: 68 + break; 69 + } 70 + tb[type] = attr; 71 + return MNL_CB_OK; 72 + } 73 + 74 + static int queue_cb(const struct nlmsghdr *nlh, void *data) 75 + { 76 + struct nlattr *tb[NFQA_MAX+1] = { 
0 }; 77 + struct nfqnl_msg_packet_hdr *ph = NULL; 78 + uint32_t id = 0; 79 + 80 + (void)data; 81 + 82 + mnl_attr_parse(nlh, sizeof(struct nfgenmsg), parse_attr_cb, tb); 83 + if (tb[NFQA_PACKET_HDR]) { 84 + ph = mnl_attr_get_payload(tb[NFQA_PACKET_HDR]); 85 + id = ntohl(ph->packet_id); 86 + 87 + if (opts.verbose > 0) 88 + printf("packet hook=%u, hwproto 0x%x", 89 + ntohs(ph->hw_protocol), ph->hook); 90 + 91 + if (ph->hook >= 5) { 92 + fprintf(stderr, "Unknown hook %d\n", ph->hook); 93 + return MNL_CB_ERROR; 94 + } 95 + 96 + if (opts.verbose > 0) { 97 + uint32_t skbinfo = 0; 98 + 99 + if (tb[NFQA_SKB_INFO]) 100 + skbinfo = ntohl(mnl_attr_get_u32(tb[NFQA_SKB_INFO])); 101 + if (skbinfo & NFQA_SKB_CSUMNOTREADY) 102 + printf(" csumnotready"); 103 + if (skbinfo & NFQA_SKB_GSO) 104 + printf(" gso"); 105 + if (skbinfo & NFQA_SKB_CSUM_NOTVERIFIED) 106 + printf(" csumnotverified"); 107 + puts(""); 108 + } 109 + 110 + if (opts.count_packets) 111 + queue_stats[ph->hook]++; 112 + } 113 + 114 + return MNL_CB_OK + id; 115 + } 116 + 117 + static struct nlmsghdr * 118 + nfq_build_cfg_request(char *buf, uint8_t command, int queue_num) 119 + { 120 + struct nlmsghdr *nlh = mnl_nlmsg_put_header(buf); 121 + struct nfqnl_msg_config_cmd cmd = { 122 + .command = command, 123 + .pf = htons(AF_INET), 124 + }; 125 + struct nfgenmsg *nfg; 126 + 127 + nlh->nlmsg_type = (NFNL_SUBSYS_QUEUE << 8) | NFQNL_MSG_CONFIG; 128 + nlh->nlmsg_flags = NLM_F_REQUEST; 129 + 130 + nfg = mnl_nlmsg_put_extra_header(nlh, sizeof(*nfg)); 131 + 132 + nfg->nfgen_family = AF_UNSPEC; 133 + nfg->version = NFNETLINK_V0; 134 + nfg->res_id = htons(queue_num); 135 + 136 + mnl_attr_put(nlh, NFQA_CFG_CMD, sizeof(cmd), &cmd); 137 + 138 + return nlh; 139 + } 140 + 141 + static struct nlmsghdr * 142 + nfq_build_cfg_params(char *buf, uint8_t mode, int range, int queue_num) 143 + { 144 + struct nlmsghdr *nlh = mnl_nlmsg_put_header(buf); 145 + struct nfqnl_msg_config_params params = { 146 + .copy_range = htonl(range), 147 + 
+		.copy_mode = mode,
+	};
+	struct nfgenmsg *nfg;
+
+	nlh->nlmsg_type = (NFNL_SUBSYS_QUEUE << 8) | NFQNL_MSG_CONFIG;
+	nlh->nlmsg_flags = NLM_F_REQUEST;
+
+	nfg = mnl_nlmsg_put_extra_header(nlh, sizeof(*nfg));
+	nfg->nfgen_family = AF_UNSPEC;
+	nfg->version = NFNETLINK_V0;
+	nfg->res_id = htons(queue_num);
+
+	mnl_attr_put(nlh, NFQA_CFG_PARAMS, sizeof(params), &params);
+
+	return nlh;
+}
+
+static struct nlmsghdr *
+nfq_build_verdict(char *buf, int id, int queue_num, int verd)
+{
+	struct nfqnl_msg_verdict_hdr vh = {
+		.verdict = htonl(verd),
+		.id = htonl(id),
+	};
+	struct nlmsghdr *nlh;
+	struct nfgenmsg *nfg;
+
+	nlh = mnl_nlmsg_put_header(buf);
+	nlh->nlmsg_type = (NFNL_SUBSYS_QUEUE << 8) | NFQNL_MSG_VERDICT;
+	nlh->nlmsg_flags = NLM_F_REQUEST;
+	nfg = mnl_nlmsg_put_extra_header(nlh, sizeof(*nfg));
+	nfg->nfgen_family = AF_UNSPEC;
+	nfg->version = NFNETLINK_V0;
+	nfg->res_id = htons(queue_num);
+
+	mnl_attr_put(nlh, NFQA_VERDICT_HDR, sizeof(vh), &vh);
+
+	return nlh;
+}
+
+static void print_stats(void)
+{
+	unsigned int last, total;
+	int i;
+
+	if (!opts.count_packets)
+		return;
+
+	total = 0;
+	last = queue_stats[0];
+
+	for (i = 0; i < 5; i++) {
+		printf("hook %d packets %08u\n", i, queue_stats[i]);
+		last = queue_stats[i];
+		total += last;
+	}
+
+	printf("%u packets total\n", total);
+}
+
+struct mnl_socket *open_queue(void)
+{
+	char buf[MNL_SOCKET_BUFFER_SIZE];
+	unsigned int queue_num;
+	struct mnl_socket *nl;
+	struct nlmsghdr *nlh;
+	struct timeval tv;
+	uint32_t flags;
+
+	nl = mnl_socket_open(NETLINK_NETFILTER);
+	if (nl == NULL) {
+		perror("mnl_socket_open");
+		exit(EXIT_FAILURE);
+	}
+
+	if (mnl_socket_bind(nl, 0, MNL_SOCKET_AUTOPID) < 0) {
+		perror("mnl_socket_bind");
+		exit(EXIT_FAILURE);
+	}
+
+	queue_num = opts.queue_num;
+	nlh = nfq_build_cfg_request(buf, NFQNL_CFG_CMD_BIND, queue_num);
+
+	if (mnl_socket_sendto(nl, nlh, nlh->nlmsg_len) < 0) {
+		perror("mnl_socket_sendto");
+		exit(EXIT_FAILURE);
+	}
+
+	nlh = nfq_build_cfg_params(buf, NFQNL_COPY_PACKET, 0xFFFF, queue_num);
+
+	flags = NFQA_CFG_F_GSO | NFQA_CFG_F_UID_GID;
+	mnl_attr_put_u32(nlh, NFQA_CFG_FLAGS, htonl(flags));
+	mnl_attr_put_u32(nlh, NFQA_CFG_MASK, htonl(flags));
+
+	if (mnl_socket_sendto(nl, nlh, nlh->nlmsg_len) < 0) {
+		perror("mnl_socket_sendto");
+		exit(EXIT_FAILURE);
+	}
+
+	memset(&tv, 0, sizeof(tv));
+	tv.tv_sec = opts.timeout;
+	if (opts.timeout && setsockopt(mnl_socket_get_fd(nl),
+				       SOL_SOCKET, SO_RCVTIMEO,
+				       &tv, sizeof(tv))) {
+		perror("setsockopt(SO_RCVTIMEO)");
+		exit(EXIT_FAILURE);
+	}
+
+	return nl;
+}
+
+static int mainloop(void)
+{
+	unsigned int buflen = 64 * 1024 + MNL_SOCKET_BUFFER_SIZE;
+	struct mnl_socket *nl;
+	struct nlmsghdr *nlh;
+	unsigned int portid;
+	char *buf;
+	int ret;
+
+	buf = malloc(buflen);
+	if (!buf) {
+		perror("malloc");
+		exit(EXIT_FAILURE);
+	}
+
+	nl = open_queue();
+	portid = mnl_socket_get_portid(nl);
+
+	for (;;) {
+		uint32_t id;
+
+		ret = mnl_socket_recvfrom(nl, buf, buflen);
+		if (ret == -1) {
+			if (errno == ENOBUFS)
+				continue;
+
+			if (errno == EAGAIN) {
+				errno = 0;
+				ret = 0;
+				break;
+			}
+
+			perror("mnl_socket_recvfrom");
+			exit(EXIT_FAILURE);
+		}
+
+		ret = mnl_cb_run(buf, ret, 0, portid, queue_cb, NULL);
+		if (ret < 0) {
+			perror("mnl_cb_run");
+			exit(EXIT_FAILURE);
+		}
+
+		id = ret - MNL_CB_OK;
+		nlh = nfq_build_verdict(buf, id, opts.queue_num, NF_ACCEPT);
+		if (mnl_socket_sendto(nl, nlh, nlh->nlmsg_len) < 0) {
+			perror("mnl_socket_sendto");
+			exit(EXIT_FAILURE);
+		}
+	}
+
+	mnl_socket_close(nl);
+
+	return ret;
+}
+
+static void parse_opts(int argc, char **argv)
+{
+	int c;
+
+	while ((c = getopt(argc, argv, "chvt:q:")) != -1) {
+		switch (c) {
+		case 'c':
+			opts.count_packets = true;
+			break;
+		case 'h':
+			help(argv[0]);
+			exit(0);
+			break;
+		case 'q':
+			opts.queue_num = atoi(optarg);
+			if (opts.queue_num > 0xffff)
+				opts.queue_num = 0;
+			break;
+		case 't':
+			opts.timeout = atoi(optarg);
+			break;
+		case 'v':
+			opts.verbose++;
+			break;
+		}
+	}
+}
+
+int main(int argc, char *argv[])
+{
+	int ret;
+
+	parse_opts(argc, argv);
+
+	ret = mainloop();
+	if (opts.count_packets)
+		print_stats();
+
+	return ret;
+}
+332
tools/testing/selftests/netfilter/nft_queue.sh
···
+#!/bin/bash
+#
+# This tests nf_queue:
+# 1. can process packets from all hooks
+# 2. support running nfqueue from more than one base chain
+#
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+ret=0
+
+sfx=$(mktemp -u "XXXXXXXX")
+ns1="ns1-$sfx"
+ns2="ns2-$sfx"
+nsrouter="nsrouter-$sfx"
+
+cleanup()
+{
+	ip netns del ${ns1}
+	ip netns del ${ns2}
+	ip netns del ${nsrouter}
+	rm -f "$TMPFILE0"
+	rm -f "$TMPFILE1"
+}
+
+nft --version > /dev/null 2>&1
+if [ $? -ne 0 ];then
+	echo "SKIP: Could not run test without nft tool"
+	exit $ksft_skip
+fi
+
+ip -Version > /dev/null 2>&1
+if [ $? -ne 0 ];then
+	echo "SKIP: Could not run test without ip tool"
+	exit $ksft_skip
+fi
+
+ip netns add ${nsrouter}
+if [ $? -ne 0 ];then
+	echo "SKIP: Could not create net namespace"
+	exit $ksft_skip
+fi
+
+TMPFILE0=$(mktemp)
+TMPFILE1=$(mktemp)
+trap cleanup EXIT
+
+ip netns add ${ns1}
+ip netns add ${ns2}
+
+ip link add veth0 netns ${nsrouter} type veth peer name eth0 netns ${ns1} > /dev/null 2>&1
+if [ $? -ne 0 ];then
+	echo "SKIP: No virtual ethernet pair device support in kernel"
+	exit $ksft_skip
+fi
+ip link add veth1 netns ${nsrouter} type veth peer name eth0 netns ${ns2}
+
+ip -net ${nsrouter} link set lo up
+ip -net ${nsrouter} link set veth0 up
+ip -net ${nsrouter} addr add 10.0.1.1/24 dev veth0
+ip -net ${nsrouter} addr add dead:1::1/64 dev veth0
+
+ip -net ${nsrouter} link set veth1 up
+ip -net ${nsrouter} addr add 10.0.2.1/24 dev veth1
+ip -net ${nsrouter} addr add dead:2::1/64 dev veth1
+
+ip -net ${ns1} link set lo up
+ip -net ${ns1} link set eth0 up
+
+ip -net ${ns2} link set lo up
+ip -net ${ns2} link set eth0 up
+
+ip -net ${ns1} addr add 10.0.1.99/24 dev eth0
+ip -net ${ns1} addr add dead:1::99/64 dev eth0
+ip -net ${ns1} route add default via 10.0.1.1
+ip -net ${ns1} route add default via dead:1::1
+
+ip -net ${ns2} addr add 10.0.2.99/24 dev eth0
+ip -net ${ns2} addr add dead:2::99/64 dev eth0
+ip -net ${ns2} route add default via 10.0.2.1
+ip -net ${ns2} route add default via dead:2::1
+
+load_ruleset() {
+	local name=$1
+	local prio=$2
+
+ip netns exec ${nsrouter} nft -f - <<EOF
+table inet $name {
+	chain nfq {
+		ip protocol icmp queue bypass
+		icmpv6 type { "echo-request", "echo-reply" } queue num 1 bypass
+	}
+	chain pre {
+		type filter hook prerouting priority $prio; policy accept;
+		jump nfq
+	}
+	chain input {
+		type filter hook input priority $prio; policy accept;
+		jump nfq
+	}
+	chain forward {
+		type filter hook forward priority $prio; policy accept;
+		tcp dport 12345 queue num 2
+		jump nfq
+	}
+	chain output {
+		type filter hook output priority $prio; policy accept;
+		tcp dport 12345 queue num 3
+		jump nfq
+	}
+	chain post {
+		type filter hook postrouting priority $prio; policy accept;
+		jump nfq
+	}
+}
+EOF
+}
+
+load_counter_ruleset() {
+	local prio=$1
+
+ip netns exec ${nsrouter} nft -f - <<EOF
+table inet countrules {
+	chain pre {
+		type filter hook prerouting priority $prio; policy accept;
+		counter
+	}
+	chain input {
+		type filter hook input priority $prio; policy accept;
+		counter
+	}
+	chain forward {
+		type filter hook forward priority $prio; policy accept;
+		counter
+	}
+	chain output {
+		type filter hook output priority $prio; policy accept;
+		counter
+	}
+	chain post {
+		type filter hook postrouting priority $prio; policy accept;
+		counter
+	}
+}
+EOF
+}
+
+test_ping() {
+	ip netns exec ${ns1} ping -c 1 -q 10.0.2.99 > /dev/null
+	if [ $? -ne 0 ];then
+		return 1
+	fi
+
+	ip netns exec ${ns1} ping -c 1 -q dead:2::99 > /dev/null
+	if [ $? -ne 0 ];then
+		return 1
+	fi
+
+	return 0
+}
+
+test_ping_router() {
+	ip netns exec ${ns1} ping -c 1 -q 10.0.2.1 > /dev/null
+	if [ $? -ne 0 ];then
+		return 1
+	fi
+
+	ip netns exec ${ns1} ping -c 1 -q dead:2::1 > /dev/null
+	if [ $? -ne 0 ];then
+		return 1
+	fi
+
+	return 0
+}
+
+test_queue_blackhole() {
+	local proto=$1
+
+ip netns exec ${nsrouter} nft -f - <<EOF
+table $proto blackh {
+	chain forward {
+		type filter hook forward priority 0; policy accept;
+		queue num 600
+	}
+}
+EOF
+	if [ $proto = "ip" ] ;then
+		ip netns exec ${ns1} ping -c 1 -q 10.0.2.99 > /dev/null
+		lret=$?
+	elif [ $proto = "ip6" ]; then
+		ip netns exec ${ns1} ping -c 1 -q dead:2::99 > /dev/null
+		lret=$?
+	else
+		lret=111
+	fi
+
+	# queue without bypass keyword should drop traffic if no listener exists.
+	if [ $lret -eq 0 ];then
+		echo "FAIL: $proto expected failure, got $lret" 1>&2
+		exit 1
+	fi
+
+	ip netns exec ${nsrouter} nft delete table $proto blackh
+	if [ $? -ne 0 ] ;then
+		echo "FAIL: $proto: Could not delete blackh table"
+		exit 1
+	fi
+
+	echo "PASS: $proto: statement with no listener results in packet drop"
+}
+
+test_queue()
+{
+	local expected=$1
+	local last=""
+
+	# spawn nf-queue listeners
+	ip netns exec ${nsrouter} ./nf-queue -c -q 0 -t 3 > "$TMPFILE0" &
+	ip netns exec ${nsrouter} ./nf-queue -c -q 1 -t 3 > "$TMPFILE1" &
+	sleep 1
+	test_ping
+	ret=$?
+	if [ $ret -ne 0 ];then
+		echo "FAIL: netns routing/connectivity with active listener on queue $queue: $ret" 1>&2
+		exit $ret
+	fi
+
+	test_ping_router
+	ret=$?
+	if [ $ret -ne 0 ];then
+		echo "FAIL: netns router unreachable listener on queue $queue: $ret" 1>&2
+		exit $ret
+	fi
+
+	wait
+	ret=$?
+
+	for file in $TMPFILE0 $TMPFILE1; do
+		last=$(tail -n1 "$file")
+		if [ x"$last" != x"$expected packets total" ]; then
+			echo "FAIL: Expected $expected packets total, but got $last" 1>&2
+			cat "$file" 1>&2
+
+			ip netns exec ${nsrouter} nft list ruleset
+			exit 1
+		fi
+	done
+
+	echo "PASS: Expected and received $last"
+}
+
+test_tcp_forward()
+{
+	ip netns exec ${nsrouter} ./nf-queue -q 2 -t 10 &
+	local nfqpid=$!
+
+	tmpfile=$(mktemp) || exit 1
+	dd conv=sparse status=none if=/dev/zero bs=1M count=100 of=$tmpfile
+	ip netns exec ${ns2} nc -w 5 -l -p 12345 <"$tmpfile" >/dev/null &
+	local rpid=$!
+
+	sleep 1
+	ip netns exec ${ns1} nc -w 5 10.0.2.99 12345 <"$tmpfile" >/dev/null &
+
+	rm -f "$tmpfile"
+
+	wait $rpid
+	wait $lpid
+	[ $? -eq 0 ] && echo "PASS: tcp and nfqueue in forward chain"
+}
+
+test_tcp_localhost()
+{
+	tc -net "${nsrouter}" qdisc add dev lo root netem loss random 1%
+
+	tmpfile=$(mktemp) || exit 1
+
+	dd conv=sparse status=none if=/dev/zero bs=1M count=900 of=$tmpfile
+	ip netns exec ${nsrouter} nc -w 5 -l -p 12345 <"$tmpfile" >/dev/null &
+	local rpid=$!
+
+	ip netns exec ${nsrouter} ./nf-queue -q 3 -t 30 &
+	local nfqpid=$!
+
+	sleep 1
+	ip netns exec ${nsrouter} nc -w 5 127.0.0.1 12345 <"$tmpfile" > /dev/null
+	rm -f "$tmpfile"
+
+	wait $rpid
+	[ $? -eq 0 ] && echo "PASS: tcp via loopback"
+}
+
+ip netns exec ${nsrouter} sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
+ip netns exec ${nsrouter} sysctl net.ipv4.conf.veth0.forwarding=1 > /dev/null
+ip netns exec ${nsrouter} sysctl net.ipv4.conf.veth1.forwarding=1 > /dev/null
+
+load_ruleset "filter" 0
+
+sleep 3
+
+test_ping
+ret=$?
+if [ $ret -eq 0 ];then
+	# queue bypass works (rules were skipped, no listener)
+	echo "PASS: ${ns1} can reach ${ns2}"
+else
+	echo "FAIL: ${ns1} cannot reach ${ns2}: $ret" 1>&2
+	exit $ret
+fi
+
+test_queue_blackhole ip
+test_queue_blackhole ip6
+
+# dummy ruleset to add base chains between the
+# queueing rules. We don't want the second reinject
+# to re-execute the old hooks.
+load_counter_ruleset 10
+
+# we are hooking all: prerouting/input/forward/output/postrouting.
+# we ping ${ns2} from ${ns1} via ${nsrouter} using ipv4 and ipv6, so:
+# 1x icmp prerouting,forward,postrouting -> 3 queue events (6 incl. reply).
+# 1x icmp prerouting,input,output postrouting -> 4 queue events incl. reply.
+# so we expect that userspace program receives 10 packets.
+test_queue 10
+
+# same. We queue to a second program as well.
+load_ruleset "filter2" 20
+test_queue 20
+
+test_tcp_forward
+test_tcp_localhost
+
+exit $ret
+10 -5
tools/testing/selftests/wireguard/netns.sh
···
 n0 wg set wg0 peer "$pub2" allowed-ips ::/0,1700::/111,5000::/4,e000::/37,9000::/75
 n0 wg set wg0 peer "$pub2" allowed-ips ::/0
 n0 wg set wg0 peer "$pub2" remove
-low_order_points=( AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA= AQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA= 4Ot6fDtBuK4WVuP68Z/EatoJjeucMrH9hmIFFl9JuAA= X5yVvKNQjCSx0LFVnIPvWwREXMRYHI6G2CJO3dCfEVc= 7P///////////////////////////////////////38= 7f///////////////////////////////////////38= 7v///////////////////////////////////////38= )
-n0 wg set wg0 private-key /dev/null ${low_order_points[@]/#/peer }
-[[ -z $(n0 wg show wg0 peers) ]]
-n0 wg set wg0 private-key <(echo "$key1") ${low_order_points[@]/#/peer }
-[[ -z $(n0 wg show wg0 peers) ]]
+for low_order_point in AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA= AQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA= 4Ot6fDtBuK4WVuP68Z/EatoJjeucMrH9hmIFFl9JuAA= X5yVvKNQjCSx0LFVnIPvWwREXMRYHI6G2CJO3dCfEVc= 7P///////////////////////////////////////38= 7f///////////////////////////////////////38= 7v///////////////////////////////////////38=; do
+	n0 wg set wg0 peer "$low_order_point" persistent-keepalive 1 endpoint 127.0.0.1:1111
+done
+[[ -n $(n0 wg show wg0 peers) ]]
+exec 4< <(n0 ncat -l -u -p 1111)
+ncat_pid=$!
+waitncatudp $netns0 $ncat_pid
+ip0 link set wg0 up
+! read -r -n 1 -t 2 <&4 || false
+kill $ncat_pid
 ip0 link del wg0
 
 declare -A objects
+1 -1
tools/testing/selftests/wireguard/qemu/Makefile
···
 	flock -x $$@.lock -c '[ -f $$@ ] && exit 0; wget -O $$@.tmp $(MIRROR)$(1) || wget -O $$@.tmp $(2)$(1) || rm -f $$@.tmp; [ -f $$@.tmp ] || exit 1; if echo "$(3) $$@.tmp" | sha256sum -c -; then mv $$@.tmp $$@; else rm -f $$@.tmp; exit 71; fi'
 endef
 
-$(eval $(call tar_download,MUSL,musl,1.1.24,.tar.gz,https://www.musl-libc.org/releases/,1370c9a812b2cf2a7d92802510cca0058cc37e66a7bedd70051f0a34015022a3))
+$(eval $(call tar_download,MUSL,musl,1.2.0,.tar.gz,https://musl.libc.org/releases/,c6de7b191139142d3f9a7b5b702c9cae1b5ee6e7f57e582da9328629408fd4e8))
 $(eval $(call tar_download,IPERF,iperf,3.7,.tar.gz,https://downloads.es.net/pub/iperf/,d846040224317caf2f75c843d309a950a7db23f9b44b94688ccbe557d6d1710c))
 $(eval $(call tar_download,BASH,bash,5.0,.tar.gz,https://ftp.gnu.org/gnu/bash/,b4a80f2ac66170b2913efbfb9f2594f1f76c7b1afd11f799e22035d63077fb4d))
 $(eval $(call tar_download,IPROUTE2,iproute2,5.4.0,.tar.xz,https://www.kernel.org/pub/linux/utils/net/iproute2/,fe97aa60a0d4c5ac830be18937e18dc3400ca713a33a89ad896ff1e3d46086ae))
-1
tools/testing/selftests/wireguard/qemu/init.c
···
 #include <fcntl.h>
 #include <sys/wait.h>
 #include <sys/mount.h>
-#include <sys/types.h>
 #include <sys/stat.h>
 #include <sys/types.h>
 #include <sys/io.h>
-1
tools/testing/selftests/wireguard/qemu/kernel.config
···
 CONFIG_NO_HZ_FULL=n
 CONFIG_HZ_PERIODIC=n
 CONFIG_HIGH_RES_TIMERS=y
-CONFIG_COMPAT_32BIT_TIME=y
 CONFIG_ARCH_RANDOM=y
 CONFIG_FILE_LOCKING=y
 CONFIG_POSIX_TIMERS=y
+11 -11
usr/Kconfig
···
 
 	  If in doubt, select 'None'
 
-config INITRAMFS_COMPRESSION_NONE
-	bool "None"
-	help
-	  Do not compress the built-in initramfs at all. This may sound wasteful
-	  in space, but, you should be aware that the built-in initramfs will be
-	  compressed at a later stage anyways along with the rest of the kernel,
-	  on those architectures that support this. However, not compressing the
-	  initramfs may lead to slightly higher memory consumption during a
-	  short time at boot, while both the cpio image and the unpacked
-	  filesystem image will be present in memory simultaneously
-
 config INITRAMFS_COMPRESSION_GZIP
 	bool "Gzip"
 	depends on RD_GZIP
···
 
 	  If you choose this, keep in mind that most distros don't provide lz4
 	  by default which could cause a build failure.
+
+config INITRAMFS_COMPRESSION_NONE
+	bool "None"
+	help
+	  Do not compress the built-in initramfs at all. This may sound wasteful
+	  in space, but, you should be aware that the built-in initramfs will be
+	  compressed at a later stage anyways along with the rest of the kernel,
+	  on those architectures that support this. However, not compressing the
+	  initramfs may lead to slightly higher memory consumption during a
+	  short time at boot, while both the cpio image and the unpacked
+	  filesystem image will be present in memory simultaneously
 
 endchoice