Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 4.18-rc5 into staging-next

We need the staging fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+3905 -2060
+5
Documentation/admin-guide/kernel-parameters.txt
···
 	xirc2ps_cs=	[NET,PCMCIA]
 			Format:
 			<irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
+
+	xhci-hcd.quirks	[USB,KNL]
+			A hex value specifying bitmask with supplemental xhci
+			host controller quirks. Meaning of each bit can be
+			consulted in header drivers/usb/host/xhci.h.
+7 -10
Documentation/kbuild/kbuild.txt
···
 --------------------------------------------------
 Additional options used for $(LD) when linking modules.
 
+KBUILD_KCONFIG
+--------------------------------------------------
+Set the top-level Kconfig file to the value of this environment
+variable. The default name is "Kconfig".
+
 KBUILD_VERBOSE
 --------------------------------------------------
 Set the kbuild verbosity. Can be assigned same values as "V=...".
···
 directory name found in the arch/ directory.
 But some architectures such as x86 and sparc have aliases.
 x86: i386 for 32 bit, x86_64 for 64 bit
-sparc: sparc for 32 bit, sparc64 for 64 bit
+sh: sh for 32 bit, sh64 for 64 bit
+sparc: sparc32 for 32 bit, sparc64 for 64 bit
 
 CROSS_COMPILE
 --------------------------------------------------
···
 stripped after they are installed. If INSTALL_MOD_STRIP is '1', then
 the default option --strip-debug will be used. Otherwise,
 INSTALL_MOD_STRIP value will be used as the options to the strip command.
-
-INSTALL_FW_PATH
---------------------------------------------------
-INSTALL_FW_PATH specifies where to install the firmware blobs.
-The default value is:
-
-    $(INSTALL_MOD_PATH)/lib/firmware
-
-The value can be overridden in which case the default value is ignored.
 
 INSTALL_HDR_PATH
 --------------------------------------------------
+43 -8
Documentation/kbuild/kconfig.txt
···
 
 Use "make help" to list all of the possible configuration targets.
 
-The xconfig ('qconf') and menuconfig ('mconf') programs also
-have embedded help text. Be sure to check it for navigation,
-search, and other general help text.
+The xconfig ('qconf'), menuconfig ('mconf'), and nconfig ('nconf')
+programs also have embedded help text. Be sure to check that for
+navigation, search, and other general help text.
 
 ======================================================================
 General
···
 for you, so you may find that you need to see what NEW kernel
 symbols have been introduced.
 
-To see a list of new config symbols when using "make oldconfig", use
+To see a list of new config symbols, use
 
 	cp user/some/old.config .config
 	make listnewconfig
 
 and the config program will list any new symbols, one per line.
 
+Alternatively, you can use the brute force method:
+
+	make oldconfig
 	scripts/diffconfig .config.old .config | less
 
 ______________________________________________________________________
···
 This lists all config symbols that contain "hotplug",
 e.g., HOTPLUG_CPU, MEMORY_HOTPLUG.
 
-For search help, enter / followed TAB-TAB-TAB (to highlight
+For search help, enter / followed by TAB-TAB (to highlight
 <Help>) and Enter. This will tell you that you can also use
 regular expressions (regexes) in the search string, so if you
 are not interested in MEMORY_HOTPLUG, you could try
···
 
 ======================================================================
+nconfig
+--------------------------------------------------
+
+nconfig is an alternate text-based configurator.  It lists function
+keys across the bottom of the terminal (window) that execute commands.
+You can also just use the corresponding numeric key to execute the
+commands unless you are in a data entry window.  E.g., instead of F6
+for Save, you can just press 6.
+
+Use F1 for Global help or F3 for the Short help menu.
+
+Searching in nconfig:
+
+	You can search either in the menu entry "prompt" strings
+	or in the configuration symbols.
+
+	Use / to begin a search through the menu entries.  This does
+	not support regular expressions.  Use <Down> or <Up> for
+	Next hit and Previous hit, respectively.  Use <Esc> to
+	terminate the search mode.
+
+	F8 (SymSearch) searches the configuration symbols for the
+	given string or regular expression (regex).
+
+NCONFIG_MODE
+--------------------------------------------------
+This mode shows all sub-menus in one large tree.
+
+Example:
+	make NCONFIG_MODE=single_menu nconfig
+
+
+======================================================================
 xconfig
 --------------------------------------------------
···
 
 Searching in gconfig:
 
-	None (gconfig isn't maintained as well as xconfig or menuconfig);
-	however, gconfig does have a few more viewing choices than
-	xconfig does.
+	There is no search command in gconfig.  However, gconfig does
+	have several different viewing choices, modes, and options.
 
 ###
+7 -4
MAINTAINERS
···
 
 AGPGART DRIVER
 M:	David Airlie <airlied@linux.ie>
-T:	git git://people.freedesktop.org/~airlied/linux (part of drm maint)
+T:	git git://anongit.freedesktop.org/drm/drm
 S:	Maintained
 F:	drivers/char/agp/
 F:	include/linux/agp*
···
 
 DRIVER CORE, KOBJECTS, DEBUGFS AND SYSFS
 M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+R:	"Rafael J. Wysocki" <rafael@kernel.org>
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git
 S:	Supported
 F:	Documentation/kobject.txt
···
 DRM DRIVERS
 M:	David Airlie <airlied@linux.ie>
 L:	dri-devel@lists.freedesktop.org
-T:	git git://people.freedesktop.org/~airlied/linux
+T:	git git://anongit.freedesktop.org/drm/drm
 B:	https://bugs.freedesktop.org/
 C:	irc://chat.freenode.net/dri-devel
 S:	Maintained
···
 
 NXP TDA998X DRM DRIVER
 M:	Russell King <linux@armlinux.org.uk>
-S:	Supported
+S:	Maintained
 T:	git git://git.armlinux.org.uk/~rmk/linux-arm.git drm-tda998x-devel
 T:	git git://git.armlinux.org.uk/~rmk/linux-arm.git drm-tda998x-fixes
 F:	drivers/gpu/drm/i2c/tda998x_drv.c
 F:	include/drm/i2c/tda998x.h
+F:	include/dt-bindings/display/tda998x.h
+K:	"nxp,tda998x"
 
 NXP TFA9879 DRIVER
 M:	Peter Rosin <peda@axentia.se>
···
 F:	arch/hexagon/
 
 QUALCOMM HIDMA DRIVER
-M:	Sinan Kaya <okaya@codeaurora.org>
+M:	Sinan Kaya <okaya@kernel.org>
 L:	linux-arm-kernel@lists.infradead.org
 L:	linux-arm-msm@vger.kernel.org
 L:	dmaengine@vger.kernel.org
+5 -10
Makefile
···
 VERSION = 4
 PATCHLEVEL = 18
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc5
 NAME = Merciless Moray
 
 # *DOCUMENTATION*
···
 	  else if [ -x /bin/bash ]; then echo /bin/bash; \
 	  else echo sh; fi ; fi)
 
-HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS)
-HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS)
-HOST_LFS_LIBS := $(shell getconf LFS_LIBS)
+HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS 2>/dev/null)
+HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS 2>/dev/null)
+HOST_LFS_LIBS := $(shell getconf LFS_LIBS 2>/dev/null)
 
 HOSTCC       = gcc
 HOSTCXX      = g++
···
   CC_HAVE_ASM_GOTO := 1
   KBUILD_CFLAGS += -DCC_HAVE_ASM_GOTO
   KBUILD_AFLAGS += -DCC_HAVE_ASM_GOTO
-endif
-
-ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/cc-can-link.sh $(CC)), y)
-  CC_CAN_LINK := y
-  export CC_CAN_LINK
 endif
 
 # The expansion should be delayed until arch/$(SRCARCH)/Makefile is included.
···
 PHONY += FORCE
 FORCE:
 
-# Declare the contents of the .PHONY variable as phony.  We keep that
+# Declare the contents of the PHONY variable as phony.  We keep that
 # information in a variable so we can use it in if_changed and friends.
 .PHONY: $(PHONY)
-1
arch/arm/boot/dts/am335x-bone-common.dtsi
···
 			AM33XX_IOPAD(0x8f0, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc0_dat3.mmc0_dat3 */
 			AM33XX_IOPAD(0x904, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc0_cmd.mmc0_cmd */
 			AM33XX_IOPAD(0x900, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc0_clk.mmc0_clk */
-			AM33XX_IOPAD(0x9a0, PIN_INPUT | MUX_MODE4) /* mcasp0_aclkr.mmc0_sdwp */
 		>;
 	};
+9
arch/arm/boot/dts/am3517.dtsi
···
 			ti,davinci-ctrl-ram-size = <0x2000>;
 			ti,davinci-rmii-en = /bits/ 8 <1>;
 			local-mac-address = [ 00 00 00 00 00 00 ];
+			clocks = <&emac_ick>;
+			clock-names = "ick";
 		};
 
 		davinci_mdio: ethernet@5c030000 {
···
 			bus_freq = <1000000>;
 			#address-cells = <1>;
 			#size-cells = <0>;
+			clocks = <&emac_fck>;
+			clock-names = "fck";
 		};
 
 		uart4: serial@4809e000 {
···
 			clocks = <&hecc_ck>;
 		};
 	};
+};
+
+/* Table Table 5-79 of the TRM shows 480ab000 is reserved */
+&usb_otg_hs {
+	status = "disabled";
 };
 
 &iva {
+2
arch/arm/boot/dts/am437x-sk-evm.dts
···
 
 			touchscreen-size-x = <480>;
 			touchscreen-size-y = <272>;
+
+			wakeup-source;
 		};
 
 		tlv320aic3106: tlv320aic3106@1b {
+1 -1
arch/arm/boot/dts/armada-38x.dtsi
···
 
 		thermal: thermal@e8078 {
 			compatible = "marvell,armada380-thermal";
-			reg = <0xe4078 0x4>, <0xe4074 0x4>;
+			reg = <0xe4078 0x4>, <0xe4070 0x8>;
 			status = "okay";
 		};
+1 -1
arch/arm/boot/dts/dra7.dtsi
···
 			dr_mode = "otg";
 			snps,dis_u3_susphy_quirk;
 			snps,dis_u2_susphy_quirk;
-			snps,dis_metastability_quirk;
 		};
 	};
···
 			dr_mode = "otg";
 			snps,dis_u3_susphy_quirk;
 			snps,dis_u2_susphy_quirk;
+			snps,dis_metastability_quirk;
 		};
 	};
+1 -1
arch/arm/boot/dts/imx51-zii-rdu1.dts
···
 
 		pinctrl_ts: tsgrp {
 			fsl,pins = <
-				MX51_PAD_CSI1_D8__GPIO3_12	0x85
+				MX51_PAD_CSI1_D8__GPIO3_12	0x04
 				MX51_PAD_CSI1_D9__GPIO3_13	0x85
 			>;
 		};
+2
arch/arm/configs/imx_v4_v5_defconfig
···
 CONFIG_USB_CHIPIDEA=y
 CONFIG_USB_CHIPIDEA_UDC=y
 CONFIG_USB_CHIPIDEA_HOST=y
+CONFIG_USB_CHIPIDEA_ULPI=y
 CONFIG_NOP_USB_XCEIV=y
 CONFIG_USB_GADGET=y
 CONFIG_USB_ETH=m
+CONFIG_USB_ULPI_BUS=y
 CONFIG_MMC=y
 CONFIG_MMC_SDHCI=y
 CONFIG_MMC_SDHCI_PLTFM=y
+2
arch/arm/configs/imx_v6_v7_defconfig
···
 CONFIG_USB_CHIPIDEA=y
 CONFIG_USB_CHIPIDEA_UDC=y
 CONFIG_USB_CHIPIDEA_HOST=y
+CONFIG_USB_CHIPIDEA_ULPI=y
 CONFIG_USB_SERIAL=m
 CONFIG_USB_SERIAL_GENERIC=y
 CONFIG_USB_SERIAL_FTDI_SIO=m
···
 CONFIG_USB_FUNCTIONFS=m
 CONFIG_USB_MASS_STORAGE=m
 CONFIG_USB_G_SERIAL=m
+CONFIG_USB_ULPI_BUS=y
 CONFIG_MMC=y
 CONFIG_MMC_SDHCI=y
 CONFIG_MMC_SDHCI_PLTFM=y
+4 -2
arch/arm/crypto/speck-neon-core.S
···
 	 * Allocate stack space to store 128 bytes worth of tweaks.  For
 	 * performance, this space is aligned to a 16-byte boundary so that we
 	 * can use the load/store instructions that declare 16-byte alignment.
+	 * For Thumb2 compatibility, don't do the 'bic' directly on 'sp'.
 	 */
-	sub		sp, #128
-	bic		sp, #0xf
+	sub		r12, sp, #128
+	bic		r12, #0xf
+	mov		sp, r12
 
 .if \n == 64
 	// Load first tweak
+3
arch/arm/firmware/Makefile
···
 obj-$(CONFIG_TRUSTED_FOUNDATIONS)	+= trusted_foundations.o
+
+# tf_generic_smc() fails to build with -fsanitize-coverage=trace-pc
+KCOV_INSTRUMENT		:= n
+1 -1
arch/arm/kernel/head-nommu.S
···
 	bic	r0, r0, #CR_I
 #endif
 	mcr	p15, 0, r0, c1, c0, 0		@ write control reg
-	isb
+	instr_sync
 #elif defined (CONFIG_CPU_V7M)
 #ifdef CONFIG_ARM_MPU
 	ldreq	r3, [r12, MPU_CTRL]
+41
arch/arm/mach-omap2/omap-smp.c
···
 static inline void omap5_erratum_workaround_801819(void) { }
 #endif
 
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+/*
+ * Configure ACR and enable ACTLR[0] (Enable invalidates of BTB with
+ * ICIALLU) to activate the workaround for secondary Core.
+ * NOTE: it is assumed that the primary core's configuration is done
+ * by the boot loader (kernel will detect a misconfiguration and complain
+ * if this is not done).
+ *
+ * In General Purpose(GP) devices, ACR bit settings can only be done
+ * by ROM code in "secure world" using the smc call and there is no
+ * option to update the "firmware" on such devices. This also works for
+ * High security(HS) devices, as a backup option in case the
+ * "update" is not done in the "security firmware".
+ */
+static void omap5_secondary_harden_predictor(void)
+{
+	u32 acr, acr_mask;
+
+	asm volatile ("mrc p15, 0, %0, c1, c0, 1" : "=r" (acr));
+
+	/*
+	 * ACTLR[0] (Enable invalidates of BTB with ICIALLU)
+	 */
+	acr_mask = BIT(0);
+
+	/* Do we already have it done.. if yes, skip expensive smc */
+	if ((acr & acr_mask) == acr_mask)
+		return;
+
+	acr |= acr_mask;
+	omap_smc1(OMAP5_DRA7_MON_SET_ACR_INDEX, acr);
+
+	pr_debug("%s: ARM ACR setup for CVE_2017_5715 applied on CPU%d\n",
+		 __func__, smp_processor_id());
+}
+#else
+static inline void omap5_secondary_harden_predictor(void) { }
+#endif
+
 static void omap4_secondary_init(unsigned int cpu)
 {
 	/*
···
 		set_cntfreq();
 		/* Configure ACR to disable streaming WA for 801819 */
 		omap5_erratum_workaround_801819();
+		/* Enable ACR to allow for ICUALLU workaround */
+		omap5_secondary_harden_predictor();
 	}
 
 	/*
+2 -2
arch/arm/mach-pxa/irq.c
···
 {
 	int i;
 
-	for (i = 0; i < pxa_internal_irq_nr / 32; i++) {
+	for (i = 0; i < DIV_ROUND_UP(pxa_internal_irq_nr, 32); i++) {
 		void __iomem *base = irq_base(i);
 
 		saved_icmr[i] = __raw_readl(base + ICMR);
···
 {
 	int i;
 
-	for (i = 0; i < pxa_internal_irq_nr / 32; i++) {
+	for (i = 0; i < DIV_ROUND_UP(pxa_internal_irq_nr, 32); i++) {
 		void __iomem *base = irq_base(i);
 
 		__raw_writel(saved_icmr[i], base + ICMR);
+9
arch/arm/mm/init.c
···
 	return 0;
 }
 
+static int kernel_set_to_readonly __read_mostly;
+
 void mark_rodata_ro(void)
 {
+	kernel_set_to_readonly = 1;
 	stop_machine(__mark_rodata_ro, NULL, NULL);
 	debug_checkwx();
 }
 
 void set_kernel_text_rw(void)
 {
+	if (!kernel_set_to_readonly)
+		return;
+
 	set_section_perms(ro_perms, ARRAY_SIZE(ro_perms), false,
 				current->active_mm);
 }
 
 void set_kernel_text_ro(void)
 {
+	if (!kernel_set_to_readonly)
+		return;
+
 	set_section_perms(ro_perms, ARRAY_SIZE(ro_perms), true,
 				current->active_mm);
 }
+1 -1
arch/arm/net/bpf_jit_32.c
···
 		/* there are 2 passes here */
 		bpf_jit_dump(prog->len, image_size, 2, ctx.target);
 
-	set_memory_ro((unsigned long)header, header->pages);
+	bpf_jit_binary_lock_ro(header);
 	prog->bpf_func = (void *)ctx.target;
 	prog->jited = 1;
 	prog->jited_len = image_size;
+5 -5
arch/arm64/Makefile
···
 #
 # Copyright (C) 1995-2001 by Russell King
 
-LDFLAGS_vmlinux	:=-p --no-undefined -X
+LDFLAGS_vmlinux	:=--no-undefined -X
 CPPFLAGS_vmlinux.lds = -DTEXT_OFFSET=$(TEXT_OFFSET)
 GZFLAGS		:=-9
···
 KBUILD_CPPFLAGS	+= -mbig-endian
 CHECKFLAGS	+= -D__AARCH64EB__
 AS		+= -EB
-LD		+= -EB
-LDFLAGS		+= -maarch64linuxb
+# We must use the linux target here, since distributions don't tend to package
+# the ELF linker scripts with binutils, and this results in a build failure.
+LDFLAGS		+= -EB -maarch64linuxb
 UTS_MACHINE	:= aarch64_be
 else
 KBUILD_CPPFLAGS	+= -mlittle-endian
 CHECKFLAGS	+= -D__AARCH64EL__
 AS		+= -EL
-LD		+= -EL
-LDFLAGS		+= -maarch64linux
+LDFLAGS		+= -EL -maarch64linux # See comment above
 UTS_MACHINE	:= aarch64
 endif
+7 -12
arch/arm64/include/asm/simd.h
···
 static __must_check inline bool may_use_simd(void)
 {
 	/*
-	 * The raw_cpu_read() is racy if called with preemption enabled.
-	 * This is not a bug: kernel_neon_busy is only set when
-	 * preemption is disabled, so we cannot migrate to another CPU
-	 * while it is set, nor can we migrate to a CPU where it is set.
-	 * So, if we find it clear on some CPU then we're guaranteed to
-	 * find it clear on any CPU we could migrate to.
-	 *
-	 * If we are in between kernel_neon_begin()...kernel_neon_end(),
-	 * the flag will be set, but preemption is also disabled, so we
-	 * can't migrate to another CPU and spuriously see it become
-	 * false.
+	 * kernel_neon_busy is only set while preemption is disabled,
+	 * and is clear whenever preemption is enabled. Since
+	 * this_cpu_read() is atomic w.r.t. preemption, kernel_neon_busy
+	 * cannot change under our feet -- if it's set we cannot be
+	 * migrated, and if it's clear we cannot be migrated to a CPU
+	 * where it is set.
 	 */
 	return !in_irq() && !irqs_disabled() && !in_nmi() &&
-		!raw_cpu_read(kernel_neon_busy);
+		!this_cpu_read(kernel_neon_busy);
 }
 
 #else /* ! CONFIG_KERNEL_MODE_NEON */
+3 -1
arch/m68k/include/asm/mcf_pgalloc.h
···
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t page,
 				  unsigned long address)
 {
+	pgtable_page_dtor(page);
 	__free_page(page);
 }
···
 	return page;
 }
 
-extern inline void pte_free(struct mm_struct *mm, struct page *page)
+static inline void pte_free(struct mm_struct *mm, struct page *page)
 {
+	pgtable_page_dtor(page);
 	__free_page(page);
 }
+29 -14
arch/mips/kernel/process.c
···
 #include <linux/kallsyms.h>
 #include <linux/random.h>
 #include <linux/prctl.h>
+#include <linux/nmi.h>
 
 #include <asm/asm.h>
 #include <asm/bootinfo.h>
···
 	return sp & ALMASK;
 }
 
-static void arch_dump_stack(void *info)
+static DEFINE_PER_CPU(call_single_data_t, backtrace_csd);
+static struct cpumask backtrace_csd_busy;
+
+static void handle_backtrace(void *info)
 {
-	struct pt_regs *regs;
+	nmi_cpu_backtrace(get_irq_regs());
+	cpumask_clear_cpu(smp_processor_id(), &backtrace_csd_busy);
+}
 
-	regs = get_irq_regs();
+static void raise_backtrace(cpumask_t *mask)
+{
+	call_single_data_t *csd;
+	int cpu;
 
-	if (regs)
-		show_regs(regs);
+	for_each_cpu(cpu, mask) {
+		/*
+		 * If we previously sent an IPI to the target CPU & it hasn't
+		 * cleared its bit in the busy cpumask then it didn't handle
+		 * our previous IPI & it's not safe for us to reuse the
+		 * call_single_data_t.
+		 */
+		if (cpumask_test_and_set_cpu(cpu, &backtrace_csd_busy)) {
+			pr_warn("Unable to send backtrace IPI to CPU%u - perhaps it hung?\n",
+				cpu);
+			continue;
+		}
 
-	dump_stack();
+		csd = &per_cpu(backtrace_csd, cpu);
+		csd->func = handle_backtrace;
+		smp_call_function_single_async(cpu, csd);
+	}
 }
 
 void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
 {
-	long this_cpu = get_cpu();
-
-	if (cpumask_test_cpu(this_cpu, mask) && !exclude_self)
-		dump_stack();
-
-	smp_call_function_many(mask, arch_dump_stack, NULL, 1);
-
-	put_cpu();
+	nmi_trigger_cpumask_backtrace(mask, exclude_self, raise_backtrace);
 }
 
 int mips_get_process_fp_mode(struct task_struct *task)
+1
arch/mips/kernel/traps.c
···
 void show_regs(struct pt_regs *regs)
 {
 	__show_regs((struct pt_regs *)regs);
+	dump_stack();
 }
 
 void show_registers(struct pt_regs *regs)
+25 -12
arch/mips/mm/ioremap.c
···
 #include <linux/export.h>
 #include <asm/addrspace.h>
 #include <asm/byteorder.h>
+#include <linux/ioport.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
···
 	return error;
 }
 
+static int __ioremap_check_ram(unsigned long start_pfn, unsigned long nr_pages,
+			       void *arg)
+{
+	unsigned long i;
+
+	for (i = 0; i < nr_pages; i++) {
+		if (pfn_valid(start_pfn + i) &&
+		    !PageReserved(pfn_to_page(start_pfn + i)))
+			return 1;
+	}
+
+	return 0;
+}
+
 /*
  * Generic mapping function (not visible outside):
  */
···
 
 void __iomem * __ioremap(phys_addr_t phys_addr, phys_addr_t size, unsigned long flags)
 {
+	unsigned long offset, pfn, last_pfn;
 	struct vm_struct * area;
-	unsigned long offset;
 	phys_addr_t last_addr;
 	void * addr;
···
 		return (void __iomem *) CKSEG1ADDR(phys_addr);
 
 	/*
-	 * Don't allow anybody to remap normal RAM that we're using..
+	 * Don't allow anybody to remap RAM that may be allocated by the page
+	 * allocator, since that could lead to races & data clobbering.
 	 */
-	if (phys_addr < virt_to_phys(high_memory)) {
-		char *t_addr, *t_end;
-		struct page *page;
-
-		t_addr = __va(phys_addr);
-		t_end = t_addr + (size - 1);
-
-		for(page = virt_to_page(t_addr); page <= virt_to_page(t_end); page++)
-			if(!PageReserved(page))
-				return NULL;
+	pfn = PFN_DOWN(phys_addr);
+	last_pfn = PFN_DOWN(last_addr);
+	if (walk_system_ram_range(pfn, last_pfn - pfn + 1, NULL,
+				  __ioremap_check_ram) == 1) {
+		WARN_ONCE(1, "ioremap on RAM at %pa - %pa\n",
+			  &phys_addr, &last_addr);
+		return NULL;
+	}
 
 	/*
+5 -1
arch/openrisc/include/asm/pgalloc.h
···
 	__free_page(pte);
 }
 
+#define __pte_free_tlb(tlb, pte, addr)	\
+do {					\
+	pgtable_page_dtor(pte);		\
+	tlb_remove_page((tlb), (pte));	\
+} while (0)
 
-#define __pte_free_tlb(tlb, pte, addr)	tlb_remove_page((tlb), (pte))
 #define pmd_pgtable(pmd) pmd_page(pmd)
 
 #define check_pgt_cache()          do { } while (0)
+1 -7
arch/openrisc/kernel/entry.S
···
 	l.addi	r3,r1,0                    // pt_regs
 	/* r4 set be EXCEPTION_HANDLE */   // effective address of fault
 
-	/*
-	 * __PHX__: TODO
-	 *
-	 * all this can be written much simpler. look at
-	 * DTLB miss handler in the CONFIG_GUARD_PROTECTED_CORE part
-	 */
 #ifdef CONFIG_OPENRISC_NO_SPR_SR_DSX
 	l.lwz	r6,PT_PC(r3)               // address of an offending insn
 	l.lwz	r6,0(r6)                   // instruction that caused pf
···
 
 #else
 
-	l.lwz	r6,PT_SR(r3)               // SR
+	l.mfspr	r6,r0,SPR_SR               // SR
 	l.andi	r6,r6,SPR_SR_DSX           // check for delay slot exception
 	l.sfne	r6,r0                      // exception happened in delay slot
 	l.bnf	7f
+6 -3
arch/openrisc/kernel/head.S
···
  *	 r4  - EEAR     exception EA
  *	 r10 - current	pointing to current_thread_info struct
  *	 r12 - syscall  0, since we didn't come from syscall
- *	 r13 - temp	it actually contains new SR, not needed anymore
- *	 r31 - handler	address of the handler we'll jump to
+ *	 r30 - handler	address of the handler we'll jump to
  *
  *	 handler has to save remaining registers to the exception
  *	 ksp frame *before* tainting them!
···
 	/* r1 is KSP, r30 is __pa(KSP) */			;\
 	tophys  (r30,r1)					;\
 	l.sw    PT_GPR12(r30),r12				;\
+	/* r4 use for tmp before EA */				;\
 	l.mfspr r12,r0,SPR_EPCR_BASE				;\
 	l.sw    PT_PC(r30),r12					;\
 	l.mfspr r12,r0,SPR_ESR_BASE				;\
···
 	/* r12 == 1 if we come from syscall */			;\
 	CLEAR_GPR(r12)						;\
 	/* ----- turn on MMU ----- */				;\
-	l.ori	r30,r0,(EXCEPTION_SR)				;\
+	/* Carry DSX into exception SR */			;\
+	l.mfspr r30,r0,SPR_SR					;\
+	l.andi	r30,r30,SPR_SR_DSX				;\
+	l.ori	r30,r30,(EXCEPTION_SR)				;\
 	l.mtspr	r0,r30,SPR_ESR_BASE				;\
 	/* r30:	EA address of handler */			;\
 	LOAD_SYMBOL_2_GPR(r30,handler)				;\
+1 -1
arch/openrisc/kernel/traps.c
··· 300 300 return 0; 301 301 } 302 302 #else 303 - return regs->sr & SPR_SR_DSX; 303 + return mfspr(SPR_SR) & SPR_SR_DSX; 304 304 #endif 305 305 } 306 306
+1
arch/riscv/Kconfig
···
 	select GENERIC_LIB_ASHLDI3
 	select GENERIC_LIB_ASHRDI3
 	select GENERIC_LIB_LSHRDI3
+	select GENERIC_LIB_UCMPDI2
 
 config ARCH_RV64I
 	bool "RV64I"
+7 -2
arch/riscv/include/uapi/asm/elf.h
···
 
 typedef union __riscv_fp_state elf_fpregset_t;
 
-#define ELF_RISCV_R_SYM(r_info) ((r_info) >> 32)
-#define ELF_RISCV_R_TYPE(r_info) ((r_info) & 0xffffffff)
+#if __riscv_xlen == 64
+#define ELF_RISCV_R_SYM(r_info) ELF64_R_SYM(r_info)
+#define ELF_RISCV_R_TYPE(r_info) ELF64_R_TYPE(r_info)
+#else
+#define ELF_RISCV_R_SYM(r_info) ELF32_R_SYM(r_info)
+#define ELF_RISCV_R_TYPE(r_info) ELF32_R_TYPE(r_info)
+#endif
 
 /*
  * RISC-V relocation types
-4
arch/riscv/kernel/irq.c
···
 #include <linux/irqchip.h>
 #include <linux/irqdomain.h>
 
-#ifdef CONFIG_RISCV_INTC
-#include <linux/irqchip/irq-riscv-intc.h>
-#endif
-
 void __init init_IRQ(void)
 {
 	irqchip_init();
+13 -13
arch/riscv/kernel/module.c
···
 static int apply_r_riscv_branch_rela(struct module *me, u32 *location,
 				     Elf_Addr v)
 {
-	s64 offset = (void *)v - (void *)location;
+	ptrdiff_t offset = (void *)v - (void *)location;
 	u32 imm12 = (offset & 0x1000) << (31 - 12);
 	u32 imm11 = (offset & 0x800) >> (11 - 7);
 	u32 imm10_5 = (offset & 0x7e0) << (30 - 10);
···
 static int apply_r_riscv_jal_rela(struct module *me, u32 *location,
 				  Elf_Addr v)
 {
-	s64 offset = (void *)v - (void *)location;
+	ptrdiff_t offset = (void *)v - (void *)location;
 	u32 imm20 = (offset & 0x100000) << (31 - 20);
 	u32 imm19_12 = (offset & 0xff000);
 	u32 imm11 = (offset & 0x800) << (20 - 11);
···
 static int apply_r_riscv_rcv_branch_rela(struct module *me, u32 *location,
 					 Elf_Addr v)
 {
-	s64 offset = (void *)v - (void *)location;
+	ptrdiff_t offset = (void *)v - (void *)location;
 	u16 imm8 = (offset & 0x100) << (12 - 8);
 	u16 imm7_6 = (offset & 0xc0) >> (6 - 5);
 	u16 imm5 = (offset & 0x20) >> (5 - 2);
···
 static int apply_r_riscv_rvc_jump_rela(struct module *me, u32 *location,
 				       Elf_Addr v)
 {
-	s64 offset = (void *)v - (void *)location;
+	ptrdiff_t offset = (void *)v - (void *)location;
 	u16 imm11 = (offset & 0x800) << (12 - 11);
 	u16 imm10 = (offset & 0x400) >> (10 - 8);
 	u16 imm9_8 = (offset & 0x300) << (12 - 11);
···
 static int apply_r_riscv_pcrel_hi20_rela(struct module *me, u32 *location,
 					 Elf_Addr v)
 {
-	s64 offset = (void *)v - (void *)location;
+	ptrdiff_t offset = (void *)v - (void *)location;
 	s32 hi20;
 
 	if (offset != (s32)offset) {
···
 static int apply_r_riscv_got_hi20_rela(struct module *me, u32 *location,
 				       Elf_Addr v)
 {
-	s64 offset = (void *)v - (void *)location;
+	ptrdiff_t offset = (void *)v - (void *)location;
 	s32 hi20;
 
 	/* Always emit the got entry */
···
 static int apply_r_riscv_call_plt_rela(struct module *me, u32 *location,
 				       Elf_Addr v)
 {
-	s64 offset = (void *)v - (void *)location;
+	ptrdiff_t offset = (void *)v - (void *)location;
 	s32 fill_v = offset;
 	u32 hi20, lo12;
···
 static int apply_r_riscv_call_rela(struct module *me, u32 *location,
 				   Elf_Addr v)
 {
-	s64 offset = (void *)v - (void *)location;
+	ptrdiff_t offset = (void *)v - (void *)location;
 	s32 fill_v = offset;
 	u32 hi20, lo12;
···
 static int apply_r_riscv_add32_rela(struct module *me, u32 *location,
 				    Elf_Addr v)
 {
-	*(u32 *)location += (*(u32 *)v);
+	*(u32 *)location += (u32)v;
 	return 0;
 }
 
 static int apply_r_riscv_sub32_rela(struct module *me, u32 *location,
 				    Elf_Addr v)
 {
-	*(u32 *)location -= (*(u32 *)v);
+	*(u32 *)location -= (u32)v;
 	return 0;
 }
···
 	unsigned int j;
 
 	for (j = 0; j < sechdrs[relsec].sh_size / sizeof(*rel); j++) {
-		u64 hi20_loc =
+		unsigned long hi20_loc =
 			sechdrs[sechdrs[relsec].sh_info].sh_addr
 			+ rel[j].r_offset;
 		u32 hi20_type = ELF_RISCV_R_TYPE(rel[j].r_info);
···
 			Elf_Sym *hi20_sym =
 				(Elf_Sym *)sechdrs[symindex].sh_addr
 				+ ELF_RISCV_R_SYM(rel[j].r_info);
-			u64 hi20_sym_val =
+			unsigned long hi20_sym_val =
 				hi20_sym->st_value
 				+ rel[j].r_addend;
 
 			/* Calculate lo12 */
-			u64 offset = hi20_sym_val - hi20_loc;
+			size_t offset = hi20_sym_val - hi20_loc;
 			if (IS_ENABLED(CONFIG_MODULE_SECTIONS)
 			    && hi20_type == R_RISCV_GOT_HI20) {
 				offset = module_emit_got_entry(
+1 -1
arch/riscv/kernel/ptrace.c
···
 	struct pt_regs *regs;
 
 	regs = task_pt_regs(target);
-	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &regs, 0, -1);
+	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, regs, 0, -1);
 	return ret;
 }
-5
arch/riscv/kernel/setup.c
···
 	riscv_fill_hwcap();
 }
 
-static int __init riscv_device_init(void)
-{
-	return of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
-}
-subsys_initcall_sync(riscv_device_init);
+2
arch/riscv/mm/init.c
···
 {
 	unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0, };
 
+#ifdef CONFIG_ZONE_DMA32
 	max_zone_pfns[ZONE_DMA32] = PFN_DOWN(min(4UL * SZ_1G, max_low_pfn));
+#endif
 	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
 
 	free_area_init_nodes(max_zone_pfns);
+1
arch/s390/Kconfig
···
 	select HAVE_OPROFILE
 	select HAVE_PERF_EVENTS
 	select HAVE_REGS_AND_STACK_ACCESS_API
+	select HAVE_RSEQ
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_VIRT_CPU_ACCOUNTING
 	select MODULES_USE_ELF_RELA
+1
arch/s390/kernel/compat_wrapper.c
···
 COMPAT_SYSCALL_WRAP5(statx, int, dfd, const char __user *, path, unsigned, flags, unsigned, mask, struct statx __user *, buffer);
 COMPAT_SYSCALL_WRAP4(s390_sthyi, unsigned long, code, void __user *, info, u64 __user *, rc, unsigned long, flags);
 COMPAT_SYSCALL_WRAP5(kexec_file_load, int, kernel_fd, int, initrd_fd, unsigned long, cmdline_len, const char __user *, cmdline_ptr, unsigned long, flags)
+COMPAT_SYSCALL_WRAP4(rseq, struct rseq __user *, rseq, u32, rseq_len, int, flags, u32, sig)
+6 -2
arch/s390/kernel/entry.S
··· 357 357 stg %r2,__PT_R2(%r11) # store return value 358 358 359 359 .Lsysc_return: 360 + #ifdef CONFIG_DEBUG_RSEQ 361 + lgr %r2,%r11 362 + brasl %r14,rseq_syscall 363 + #endif 360 364 LOCKDEP_SYS_EXIT 361 365 .Lsysc_tif: 362 366 TSTMSK __PT_FLAGS(%r11),_PIF_WORK ··· 1269 1265 jl 0f 1270 1266 clg %r9,BASED(.Lcleanup_table+104) # .Lload_fpu_regs_end 1271 1267 jl .Lcleanup_load_fpu_regs 1272 - 0: BR_EX %r14 1268 + 0: BR_EX %r14,%r11 1273 1269 1274 1270 .align 8 1275 1271 .Lcleanup_table: ··· 1305 1301 ni __SIE_PROG0C+3(%r9),0xfe # no longer in SIE 1306 1302 lctlg %c1,%c1,__LC_USER_ASCE # load primary asce 1307 1303 larl %r9,sie_exit # skip forward to sie_exit 1308 - BR_EX %r14 1304 + BR_EX %r14,%r11 1309 1305 #endif 1310 1306 1311 1307 .Lcleanup_system_call:
+2 -1
arch/s390/kernel/signal.c
··· 498 498 } 499 499 /* No longer in a system call */ 500 500 clear_pt_regs_flag(regs, PIF_SYSCALL); 501 - 501 + rseq_signal_deliver(&ksig, regs); 502 502 if (is_compat_task()) 503 503 handle_signal32(&ksig, oldset, regs); 504 504 else ··· 537 537 { 538 538 clear_thread_flag(TIF_NOTIFY_RESUME); 539 539 tracehook_notify_resume(regs); 540 + rseq_handle_notify_resume(NULL, regs); 540 541 }
+2
arch/s390/kernel/syscalls/syscall.tbl
··· 389 389 379 common statx sys_statx compat_sys_statx 390 390 380 common s390_sthyi sys_s390_sthyi compat_sys_s390_sthyi 391 391 381 common kexec_file_load sys_kexec_file_load compat_sys_kexec_file_load 392 + 382 common io_pgetevents sys_io_pgetevents compat_sys_io_pgetevents 393 + 383 common rseq sys_rseq compat_sys_rseq
+4
arch/s390/mm/pgalloc.c
··· 252 252 spin_unlock_bh(&mm->context.lock); 253 253 if (mask != 0) 254 254 return; 255 + } else { 256 + atomic_xor_bits(&page->_refcount, 3U << 24); 255 257 } 256 258 257 259 pgtable_page_dtor(page); ··· 306 304 break; 307 305 /* fallthrough */ 308 306 case 3: /* 4K page table with pgstes */ 307 + if (mask & 3) 308 + atomic_xor_bits(&page->_refcount, 3 << 24); 309 309 pgtable_page_dtor(page); 310 310 __free_page(page); 311 311 break;
+1
arch/s390/net/bpf_jit_comp.c
··· 1286 1286 goto free_addrs; 1287 1287 } 1288 1288 if (bpf_jit_prog(&jit, fp)) { 1289 + bpf_jit_binary_free(header); 1289 1290 fp = orig_fp; 1290 1291 goto free_addrs; 1291 1292 }
+3 -9
arch/x86/boot/compressed/eboot.c
··· 114 114 struct pci_setup_rom *rom = NULL; 115 115 efi_status_t status; 116 116 unsigned long size; 117 - uint64_t attributes, romsize; 117 + uint64_t romsize; 118 118 void *romimage; 119 119 120 - status = efi_call_proto(efi_pci_io_protocol, attributes, pci, 121 - EfiPciIoAttributeOperationGet, 0ULL, 122 - &attributes); 123 - if (status != EFI_SUCCESS) 124 - return status; 125 - 126 120 /* 127 - * Some firmware images contain EFI function pointers at the place where the 128 - * romimage and romsize fields are supposed to be. Typically the EFI 121 + * Some firmware images contain EFI function pointers at the place where 122 + * the romimage and romsize fields are supposed to be. Typically the EFI 129 123 * code is mapped at high addresses, translating to an unrealistically 130 124 * large romsize. The UEFI spec limits the size of option ROMs to 16 131 125 * MiB so we reject any ROMs over 16 MiB in size to catch this.
+1
arch/x86/crypto/aegis128-aesni-asm.S
··· 535 535 movdqu STATE3, 0x40(STATEP) 536 536 537 537 FRAME_END 538 + ret 538 539 ENDPROC(crypto_aegis128_aesni_enc_tail) 539 540 540 541 .macro decrypt_block a s0 s1 s2 s3 s4 i
+1
arch/x86/crypto/aegis128l-aesni-asm.S
··· 645 645 state_store0 646 646 647 647 FRAME_END 648 + ret 648 649 ENDPROC(crypto_aegis128l_aesni_enc_tail) 649 650 650 651 /*
+1
arch/x86/crypto/aegis256-aesni-asm.S
··· 543 543 state_store0 544 544 545 545 FRAME_END 546 + ret 546 547 ENDPROC(crypto_aegis256_aesni_enc_tail) 547 548 548 549 /*
+1
arch/x86/crypto/morus1280-avx2-asm.S
··· 453 453 vmovdqu STATE4, (4 * 32)(%rdi) 454 454 455 455 FRAME_END 456 + ret 456 457 ENDPROC(crypto_morus1280_avx2_enc_tail) 457 458 458 459 /*
+1
arch/x86/crypto/morus1280-sse2-asm.S
··· 652 652 movdqu STATE4_HI, (9 * 16)(%rdi) 653 653 654 654 FRAME_END 655 + ret 655 656 ENDPROC(crypto_morus1280_sse2_enc_tail) 656 657 657 658 /*
+1
arch/x86/crypto/morus640-sse2-asm.S
··· 437 437 movdqu STATE4, (4 * 16)(%rdi) 438 438 439 439 FRAME_END 440 + ret 440 441 ENDPROC(crypto_morus640_sse2_enc_tail) 441 442 442 443 /*
+5
arch/x86/hyperv/hv_apic.c
··· 114 114 ipi_arg->vp_set.format = HV_GENERIC_SET_SPARSE_4K; 115 115 nr_bank = cpumask_to_vpset(&(ipi_arg->vp_set), mask); 116 116 } 117 + if (nr_bank < 0) 118 + goto ipi_mask_ex_done; 117 119 if (!nr_bank) 118 120 ipi_arg->vp_set.format = HV_GENERIC_SET_ALL; 119 121 ··· 160 158 161 159 for_each_cpu(cur_cpu, mask) { 162 160 vcpu = hv_cpu_number_to_vp_number(cur_cpu); 161 + if (vcpu == VP_INVAL) 162 + goto ipi_mask_done; 163 + 163 164 /* 164 165 * This particular version of the IPI hypercall can 165 166 * only target upto 64 CPUs.
+4 -1
arch/x86/hyperv/hv_init.c
··· 265 265 { 266 266 u64 guest_id, required_msrs; 267 267 union hv_x64_msr_hypercall_contents hypercall_msr; 268 - int cpuhp; 268 + int cpuhp, i; 269 269 270 270 if (x86_hyper_type != X86_HYPER_MS_HYPERV) 271 271 return; ··· 292 292 GFP_KERNEL); 293 293 if (!hv_vp_index) 294 294 return; 295 + 296 + for (i = 0; i < num_possible_cpus(); i++) 297 + hv_vp_index[i] = VP_INVAL; 295 298 296 299 hv_vp_assist_page = kcalloc(num_possible_cpus(), 297 300 sizeof(*hv_vp_assist_page), GFP_KERNEL);
+59
arch/x86/include/asm/asm.h
··· 46 46 #define _ASM_SI __ASM_REG(si) 47 47 #define _ASM_DI __ASM_REG(di) 48 48 49 + #ifndef __x86_64__ 50 + /* 32 bit */ 51 + 52 + #define _ASM_ARG1 _ASM_AX 53 + #define _ASM_ARG2 _ASM_DX 54 + #define _ASM_ARG3 _ASM_CX 55 + 56 + #define _ASM_ARG1L eax 57 + #define _ASM_ARG2L edx 58 + #define _ASM_ARG3L ecx 59 + 60 + #define _ASM_ARG1W ax 61 + #define _ASM_ARG2W dx 62 + #define _ASM_ARG3W cx 63 + 64 + #define _ASM_ARG1B al 65 + #define _ASM_ARG2B dl 66 + #define _ASM_ARG3B cl 67 + 68 + #else 69 + /* 64 bit */ 70 + 71 + #define _ASM_ARG1 _ASM_DI 72 + #define _ASM_ARG2 _ASM_SI 73 + #define _ASM_ARG3 _ASM_DX 74 + #define _ASM_ARG4 _ASM_CX 75 + #define _ASM_ARG5 r8 76 + #define _ASM_ARG6 r9 77 + 78 + #define _ASM_ARG1Q rdi 79 + #define _ASM_ARG2Q rsi 80 + #define _ASM_ARG3Q rdx 81 + #define _ASM_ARG4Q rcx 82 + #define _ASM_ARG5Q r8 83 + #define _ASM_ARG6Q r9 84 + 85 + #define _ASM_ARG1L edi 86 + #define _ASM_ARG2L esi 87 + #define _ASM_ARG3L edx 88 + #define _ASM_ARG4L ecx 89 + #define _ASM_ARG5L r8d 90 + #define _ASM_ARG6L r9d 91 + 92 + #define _ASM_ARG1W di 93 + #define _ASM_ARG2W si 94 + #define _ASM_ARG3W dx 95 + #define _ASM_ARG4W cx 96 + #define _ASM_ARG5W r8w 97 + #define _ASM_ARG6W r9w 98 + 99 + #define _ASM_ARG1B dil 100 + #define _ASM_ARG2B sil 101 + #define _ASM_ARG3B dl 102 + #define _ASM_ARG4B cl 103 + #define _ASM_ARG5B r8b 104 + #define _ASM_ARG6B r9b 105 + 106 + #endif 107 + 49 108 /* 50 109 * Macros to generate condition code outputs from inline assembly, 51 110 * The output operand must be type "bool".
+1 -1
arch/x86/include/asm/irqflags.h
··· 13 13 * Interrupt control: 14 14 */ 15 15 16 - static inline unsigned long native_save_fl(void) 16 + extern inline unsigned long native_save_fl(void) 17 17 { 18 18 unsigned long flags; 19 19
+4 -1
arch/x86/include/asm/mshyperv.h
··· 9 9 #include <asm/hyperv-tlfs.h> 10 10 #include <asm/nospec-branch.h> 11 11 12 + #define VP_INVAL U32_MAX 13 + 12 14 struct ms_hyperv_info { 13 15 u32 features; 14 16 u32 misc_features; ··· 21 19 }; 22 20 23 21 extern struct ms_hyperv_info ms_hyperv; 24 - 25 22 26 23 /* 27 24 * Generate the guest ID. ··· 282 281 */ 283 282 for_each_cpu(cpu, cpus) { 284 283 vcpu = hv_cpu_number_to_vp_number(cpu); 284 + if (vcpu == VP_INVAL) 285 + return -1; 285 286 vcpu_bank = vcpu / 64; 286 287 vcpu_offset = vcpu % 64; 287 288 __set_bit(vcpu_offset, (unsigned long *)
+1
arch/x86/kernel/Makefile
··· 61 61 obj-y += tsc.o tsc_msr.o io_delay.o rtc.o 62 62 obj-y += pci-iommu_table.o 63 63 obj-y += resource.o 64 + obj-y += irqflags.o 64 65 65 66 obj-y += process.o 66 67 obj-y += fpu/
+3 -1
arch/x86/kernel/cpu/amd.c
··· 543 543 nodes_per_socket = ((value >> 3) & 7) + 1; 544 544 } 545 545 546 - if (c->x86 >= 0x15 && c->x86 <= 0x17) { 546 + if (!boot_cpu_has(X86_FEATURE_AMD_SSBD) && 547 + !boot_cpu_has(X86_FEATURE_VIRT_SSBD) && 548 + c->x86 >= 0x15 && c->x86 <= 0x17) { 547 549 unsigned int bit; 548 550 549 551 switch (c->x86) {
+5 -3
arch/x86/kernel/cpu/bugs.c
··· 155 155 guestval |= guest_spec_ctrl & x86_spec_ctrl_mask; 156 156 157 157 /* SSBD controlled in MSR_SPEC_CTRL */ 158 - if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD)) 158 + if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) || 159 + static_cpu_has(X86_FEATURE_AMD_SSBD)) 159 160 hostval |= ssbd_tif_to_spec_ctrl(ti->flags); 160 161 161 162 if (hostval != guestval) { ··· 534 533 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may 535 534 * use a completely different MSR and bit dependent on family. 536 535 */ 537 - if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL)) 536 + if (!static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) && 537 + !static_cpu_has(X86_FEATURE_AMD_SSBD)) { 538 538 x86_amd_ssb_disable(); 539 - else { 539 + } else { 540 540 x86_spec_ctrl_base |= SPEC_CTRL_SSBD; 541 541 x86_spec_ctrl_mask |= SPEC_CTRL_SSBD; 542 542 wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+2 -1
arch/x86/kernel/cpu/mtrr/if.c
··· 106 106 107 107 memset(line, 0, LINE_SIZE); 108 108 109 - length = strncpy_from_user(line, buf, LINE_SIZE - 1); 109 + len = min_t(size_t, len, LINE_SIZE - 1); 110 + length = strncpy_from_user(line, buf, len); 110 111 if (length < 0) 111 112 return length; 112 113
+26
arch/x86/kernel/irqflags.S
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + #include <asm/asm.h> 4 + #include <asm/export.h> 5 + #include <linux/linkage.h> 6 + 7 + /* 8 + * unsigned long native_save_fl(void) 9 + */ 10 + ENTRY(native_save_fl) 11 + pushf 12 + pop %_ASM_AX 13 + ret 14 + ENDPROC(native_save_fl) 15 + EXPORT_SYMBOL(native_save_fl) 16 + 17 + /* 18 + * void native_restore_fl(unsigned long flags) 19 + * %eax/%rdi: flags 20 + */ 21 + ENTRY(native_restore_fl) 22 + push %_ASM_ARG1 23 + popf 24 + ret 25 + ENDPROC(native_restore_fl) 26 + EXPORT_SYMBOL(native_restore_fl)
+5
arch/x86/kernel/smpboot.c
··· 221 221 #ifdef CONFIG_X86_32 222 222 /* switch away from the initial page table */ 223 223 load_cr3(swapper_pg_dir); 224 + /* 225 + * Initialize the CR4 shadow before doing anything that could 226 + * try to read it. 227 + */ 228 + cr4_init_shadow(); 224 229 __flush_tlb_all(); 225 230 #endif 226 231 load_current_idt();
+1 -1
arch/x86/purgatory/Makefile
··· 6 6 targets += $(purgatory-y) 7 7 PURGATORY_OBJS = $(addprefix $(obj)/,$(purgatory-y)) 8 8 9 - $(obj)/sha256.o: $(srctree)/lib/sha256.c 9 + $(obj)/sha256.o: $(srctree)/lib/sha256.c FORCE 10 10 $(call if_changed_rule,cc_o_c) 11 11 12 12 LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined -nostdlib -z nodefaultlib
+12 -13
arch/x86/xen/enlighten_pv.c
··· 1207 1207 1208 1208 xen_setup_features(); 1209 1209 1210 - xen_setup_machphys_mapping(); 1211 - 1212 1210 /* Install Xen paravirt ops */ 1213 1211 pv_info = xen_info; 1214 1212 pv_init_ops.patch = paravirt_patch_default; 1215 1213 pv_cpu_ops = xen_cpu_ops; 1214 + xen_init_irq_ops(); 1215 + 1216 + /* 1217 + * Setup xen_vcpu early because it is needed for 1218 + * local_irq_disable(), irqs_disabled(), e.g. in printk(). 1219 + * 1220 + * Don't do the full vcpu_info placement stuff until we have 1221 + * the cpu_possible_mask and a non-dummy shared_info. 1222 + */ 1223 + xen_vcpu_info_reset(0); 1216 1224 1217 1225 x86_platform.get_nmi_reason = xen_get_nmi_reason; 1218 1226 ··· 1233 1225 * Set up some pagetable state before starting to set any ptes. 1234 1226 */ 1235 1227 1228 + xen_setup_machphys_mapping(); 1236 1229 xen_init_mmu_ops(); 1237 1230 1238 1231 /* Prevent unwanted bits from being set in PTEs. */ 1239 1232 __supported_pte_mask &= ~_PAGE_GLOBAL; 1233 + __default_kernel_pte_mask &= ~_PAGE_GLOBAL; 1240 1234 1241 1235 /* 1242 1236 * Prevent page tables from being allocated in highmem, even ··· 1259 1249 get_cpu_cap(&boot_cpu_data); 1260 1250 x86_configure_nx(); 1261 1251 1262 - xen_init_irq_ops(); 1263 - 1264 1252 /* Let's presume PV guests always boot on vCPU with id 0. */ 1265 1253 per_cpu(xen_vcpu_id, 0) = 0; 1266 - 1267 - /* 1268 - * Setup xen_vcpu early because idt_setup_early_handler needs it for 1269 - * local_irq_disable(), irqs_disabled(). 1270 - * 1271 - * Don't do the full vcpu_info placement stuff until we have 1272 - * the cpu_possible_mask and a non-dummy shared_info. 1273 - */ 1274 - xen_vcpu_info_reset(0); 1275 1254 1276 1255 idt_setup_early_handler(); 1277 1256
+1 -3
arch/x86/xen/irq.c
··· 128 128 129 129 void __init xen_init_irq_ops(void) 130 130 { 131 - /* For PVH we use default pv_irq_ops settings. */ 132 - if (!xen_feature(XENFEAT_hvm_callback_vector)) 133 - pv_irq_ops = xen_irq_ops; 131 + pv_irq_ops = xen_irq_ops; 134 132 x86_init.irqs.intr_init = xen_init_IRQ; 135 133 }
-2
block/bsg.c
··· 267 267 } else if (hdr->din_xfer_len) { 268 268 ret = blk_rq_map_user(q, rq, NULL, uptr64(hdr->din_xferp), 269 269 hdr->din_xfer_len, GFP_KERNEL); 270 - } else { 271 - ret = blk_rq_map_user(q, rq, NULL, NULL, 0, GFP_KERNEL); 272 270 } 273 271 274 272 if (ret)
+11 -4
drivers/acpi/acpica/hwsleep.c
··· 51 51 return_ACPI_STATUS(status); 52 52 } 53 53 54 - /* 55 - * 1) Disable all GPEs 56 - * 2) Enable all wakeup GPEs 57 - */ 54 + /* Disable all GPEs */ 58 55 status = acpi_hw_disable_all_gpes(); 59 56 if (ACPI_FAILURE(status)) { 60 57 return_ACPI_STATUS(status); 61 58 } 59 + /* 60 + * If the target sleep state is S5, clear all GPEs and fixed events too 61 + */ 62 + if (sleep_state == ACPI_STATE_S5) { 63 + status = acpi_hw_clear_acpi_status(); 64 + if (ACPI_FAILURE(status)) { 65 + return_ACPI_STATUS(status); 66 + } 67 + } 62 68 acpi_gbl_system_awake_and_running = FALSE; 63 69 70 + /* Enable all wakeup GPEs */ 64 71 status = acpi_hw_enable_all_wakeup_gpes(); 65 72 if (ACPI_FAILURE(status)) { 66 73 return_ACPI_STATUS(status);
+3 -3
drivers/acpi/acpica/uterror.c
··· 182 182 switch (lookup_status) { 183 183 case AE_ALREADY_EXISTS: 184 184 185 - acpi_os_printf("\n" ACPI_MSG_BIOS_ERROR); 185 + acpi_os_printf(ACPI_MSG_BIOS_ERROR); 186 186 message = "Failure creating"; 187 187 break; 188 188 189 189 case AE_NOT_FOUND: 190 190 191 - acpi_os_printf("\n" ACPI_MSG_BIOS_ERROR); 191 + acpi_os_printf(ACPI_MSG_BIOS_ERROR); 192 192 message = "Could not resolve"; 193 193 break; 194 194 195 195 default: 196 196 197 - acpi_os_printf("\n" ACPI_MSG_ERROR); 197 + acpi_os_printf(ACPI_MSG_ERROR); 198 198 message = "Failure resolving"; 199 199 break; 200 200 }
+5 -4
drivers/acpi/battery.c
··· 717 717 */ 718 718 pr_err("extension failed to load: %s", hook->name); 719 719 __battery_hook_unregister(hook, 0); 720 - return; 720 + goto end; 721 721 } 722 722 } 723 723 pr_info("new extension: %s\n", hook->name); 724 + end: 724 725 mutex_unlock(&hook_mutex); 725 726 } 726 727 EXPORT_SYMBOL_GPL(battery_hook_register); ··· 733 732 */ 734 733 static void battery_hook_add_battery(struct acpi_battery *battery) 735 734 { 736 - struct acpi_battery_hook *hook_node; 735 + struct acpi_battery_hook *hook_node, *tmp; 737 736 738 737 mutex_lock(&hook_mutex); 739 738 INIT_LIST_HEAD(&battery->list); ··· 745 744 * when a battery gets hotplugged or initialized 746 745 * during the battery module initialization. 747 746 */ 748 - list_for_each_entry(hook_node, &battery_hook_list, list) { 747 + list_for_each_entry_safe(hook_node, tmp, &battery_hook_list, list) { 749 748 if (hook_node->add_battery(battery->bat)) { 750 749 /* 751 750 * The notification of the extensions has failed, to 752 751 * prevent further errors we will unload the extension. 753 752 */ 754 - __battery_hook_unregister(hook_node, 0); 755 753 pr_err("error in extension, unloading: %s", 756 754 hook_node->name); 755 + __battery_hook_unregister(hook_node, 0); 757 756 } 758 757 } 759 758 mutex_unlock(&hook_mutex);
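The battery.c hunk above switches battery_hook_add_battery() to list_for_each_entry_safe() because a failing hook is unregistered (unlinked from the list) while the loop is still walking it. A minimal user-space sketch of the same pattern, with illustrative names (`node`, `push`, `drop_bad_nodes` are not kernel APIs): the iterator caches the next pointer before the body may free the current node.

```c
#include <stdlib.h>

/* Toy singly linked list standing in for the kernel's list_head. */
struct node {
	int bad;
	struct node *next;
};

/* Push a new node at the head; returns the new head. */
static struct node *push(struct node *head, int bad)
{
	struct node *n = malloc(sizeof(*n));

	n->bad = bad;
	n->next = head;
	return n;
}

/*
 * Remove (and free) every node flagged "bad" while iterating.
 * Like list_for_each_entry_safe(), ->next is saved in "tmp" before
 * the body runs, so freeing the current node cannot derail the walk.
 */
static int drop_bad_nodes(struct node **head)
{
	struct node **pp = head, *n, *tmp;
	int dropped = 0;

	for (n = *head; n; n = tmp) {
		tmp = n->next;		/* cached before n may be freed */
		if (n->bad) {
			*pp = tmp;	/* unlink, then release */
			free(n);
			dropped++;
		} else {
			pp = &n->next;
		}
	}
	return dropped;
}
```

With a plain list_for_each_entry()-style loop the advance step would dereference freed memory after an unlink, which is exactly the hazard the hunk avoids.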
+37 -11
drivers/acpi/nfit/core.c
··· 408 408 const guid_t *guid; 409 409 int rc, i; 410 410 411 + if (cmd_rc) 412 + *cmd_rc = -EINVAL; 411 413 func = cmd; 412 414 if (cmd == ND_CMD_CALL) { 413 415 call_pkg = buf; ··· 520 518 * If we return an error (like elsewhere) then caller wouldn't 521 519 * be able to rely upon data returned to make calculation. 522 520 */ 521 + if (cmd_rc) 522 + *cmd_rc = 0; 523 523 return 0; 524 524 } 525 525 ··· 1277 1273 1278 1274 mutex_lock(&acpi_desc->init_mutex); 1279 1275 rc = sprintf(buf, "%d%s", acpi_desc->scrub_count, 1280 - work_busy(&acpi_desc->dwork.work) 1276 + acpi_desc->scrub_busy 1281 1277 && !acpi_desc->cancel ? "+\n" : "\n"); 1282 1278 mutex_unlock(&acpi_desc->init_mutex); 1283 1279 } ··· 2943 2939 return 0; 2944 2940 } 2945 2941 2942 + static void __sched_ars(struct acpi_nfit_desc *acpi_desc, unsigned int tmo) 2943 + { 2944 + lockdep_assert_held(&acpi_desc->init_mutex); 2945 + 2946 + acpi_desc->scrub_busy = 1; 2947 + /* note this should only be set from within the workqueue */ 2948 + if (tmo) 2949 + acpi_desc->scrub_tmo = tmo; 2950 + queue_delayed_work(nfit_wq, &acpi_desc->dwork, tmo * HZ); 2951 + } 2952 + 2953 + static void sched_ars(struct acpi_nfit_desc *acpi_desc) 2954 + { 2955 + __sched_ars(acpi_desc, 0); 2956 + } 2957 + 2958 + static void notify_ars_done(struct acpi_nfit_desc *acpi_desc) 2959 + { 2960 + lockdep_assert_held(&acpi_desc->init_mutex); 2961 + 2962 + acpi_desc->scrub_busy = 0; 2963 + acpi_desc->scrub_count++; 2964 + if (acpi_desc->scrub_count_state) 2965 + sysfs_notify_dirent(acpi_desc->scrub_count_state); 2966 + } 2967 + 2946 2968 static void acpi_nfit_scrub(struct work_struct *work) 2947 2969 { 2948 2970 struct acpi_nfit_desc *acpi_desc; ··· 2979 2949 mutex_lock(&acpi_desc->init_mutex); 2980 2950 query_rc = acpi_nfit_query_poison(acpi_desc); 2981 2951 tmo = __acpi_nfit_scrub(acpi_desc, query_rc); 2982 - if (tmo) { 2983 - queue_delayed_work(nfit_wq, &acpi_desc->dwork, tmo * HZ); 2984 - acpi_desc->scrub_tmo = tmo; 2985 - } else { 2986 - acpi_desc->scrub_count++; 2987 - if (acpi_desc->scrub_count_state) 2988 - sysfs_notify_dirent(acpi_desc->scrub_count_state); 2989 - } 2952 + if (tmo) 2953 + __sched_ars(acpi_desc, tmo); 2954 + else 2955 + notify_ars_done(acpi_desc); 2990 2956 memset(acpi_desc->ars_status, 0, acpi_desc->max_ars); 2991 2957 mutex_unlock(&acpi_desc->init_mutex); 2992 2958 } ··· 3063 3037 break; 3064 3038 } 3065 3039 3066 - queue_delayed_work(nfit_wq, &acpi_desc->dwork, 0); 3040 + sched_ars(acpi_desc); 3067 3041 return 0; 3068 3042 } ··· 3265 3239 } 3266 3240 } 3267 3241 if (scheduled) { 3268 - queue_delayed_work(nfit_wq, &acpi_desc->dwork, 0); 3242 + sched_ars(acpi_desc); 3269 3243 dev_dbg(dev, "ars_scan triggered\n"); 3270 3244 } 3271 3245 mutex_unlock(&acpi_desc->init_mutex);
+1
drivers/acpi/nfit/nfit.h
··· 203 203 unsigned int max_ars; 204 204 unsigned int scrub_count; 205 205 unsigned int scrub_mode; 206 + unsigned int scrub_busy:1; 206 207 unsigned int cancel:1; 207 208 unsigned long dimm_cmd_force_en; 208 209 unsigned long bus_cmd_force_en;
+8 -2
drivers/acpi/pptt.c
··· 481 481 if (cpu_node) { 482 482 cpu_node = acpi_find_processor_package_id(table, cpu_node, 483 483 level, flag); 484 - /* Only the first level has a guaranteed id */ 485 - if (level == 0) 484 + /* 485 + * As per specification if the processor structure represents 486 + * an actual processor, then ACPI processor ID must be valid. 487 + * For processor containers ACPI_PPTT_ACPI_PROCESSOR_ID_VALID 488 + * should be set if the UID is valid 489 + */ 490 + if (level == 0 || 491 + cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_ID_VALID) 486 492 return cpu_node->acpi_processor_id; 487 493 return ACPI_PTR_DIFF(cpu_node, table); 488 494 }
-2
drivers/ata/Kconfig
··· 398 398 399 399 config SATA_HIGHBANK 400 400 tristate "Calxeda Highbank SATA support" 401 - depends on HAS_DMA 402 401 depends on ARCH_HIGHBANK || COMPILE_TEST 403 402 help 404 403 This option enables support for the Calxeda Highbank SoC's ··· 407 408 408 409 config SATA_MV 409 410 tristate "Marvell SATA support" 410 - depends on HAS_DMA 411 411 depends on PCI || ARCH_DOVE || ARCH_MV78XX0 || \ 412 412 ARCH_MVEBU || ARCH_ORION5X || COMPILE_TEST 413 413 select GENERIC_PHY
+60
drivers/ata/ahci.c
··· 400 400 { PCI_VDEVICE(INTEL, 0x0f23), board_ahci_mobile }, /* Bay Trail AHCI */ 401 401 { PCI_VDEVICE(INTEL, 0x22a3), board_ahci_mobile }, /* Cherry Tr. AHCI */ 402 402 { PCI_VDEVICE(INTEL, 0x5ae3), board_ahci_mobile }, /* ApolloLake AHCI */ 403 + { PCI_VDEVICE(INTEL, 0x34d3), board_ahci_mobile }, /* Ice Lake LP AHCI */ 403 404 404 405 /* JMicron 360/1/3/5/6, match class to avoid IDE function */ 405 406 { PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, ··· 1281 1280 return strcmp(buf, dmi->driver_data) < 0; 1282 1281 } 1283 1282 1283 + static bool ahci_broken_lpm(struct pci_dev *pdev) 1284 + { 1285 + static const struct dmi_system_id sysids[] = { 1286 + /* Various Lenovo 50 series have LPM issues with older BIOSen */ 1287 + { 1288 + .matches = { 1289 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 1290 + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X250"), 1291 + }, 1292 + .driver_data = "20180406", /* 1.31 */ 1293 + }, 1294 + { 1295 + .matches = { 1296 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 1297 + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L450"), 1298 + }, 1299 + .driver_data = "20180420", /* 1.28 */ 1300 + }, 1301 + { 1302 + .matches = { 1303 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 1304 + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T450s"), 1305 + }, 1306 + .driver_data = "20180315", /* 1.33 */ 1307 + }, 1308 + { 1309 + .matches = { 1310 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 1311 + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W541"), 1312 + }, 1313 + /* 1314 + * Note date based on release notes, 2.35 has been 1315 + * reported to be good, but I've been unable to get 1316 + * a hold of the reporter to get the DMI BIOS date. 1317 + * TODO: fix this. 1318 + */ 1319 + }, 1320 + .driver_data = "20180310", /* 2.35 */ 1321 + }, 1322 + { } /* terminate list */ 1323 + }; 1324 + const struct dmi_system_id *dmi = dmi_first_match(sysids); 1325 + int year, month, date; 1326 + char buf[9]; 1327 + 1328 + if (!dmi) 1329 + return false; 1330 + 1331 + dmi_get_date(DMI_BIOS_DATE, &year, &month, &date); 1332 + snprintf(buf, sizeof(buf), "%04d%02d%02d", year, month, date); 1333 + 1334 + return strcmp(buf, dmi->driver_data) < 0; 1335 + } 1336 + 1284 1336 static bool ahci_broken_online(struct pci_dev *pdev) 1285 1337 { 1286 1338 #define ENCODE_BUSDEVFN(bus, slot, func) \ ··· 1746 1692 pi.flags |= ATA_FLAG_NO_POWEROFF_SPINDOWN; 1747 1693 dev_info(&pdev->dev, 1748 1694 "quirky BIOS, skipping spindown on poweroff\n"); 1695 + } 1696 + 1697 + if (ahci_broken_lpm(pdev)) { 1698 + pi.flags |= ATA_FLAG_NO_LPM; 1699 + dev_warn(&pdev->dev, 1700 + "BIOS update required for Link Power Management support\n"); 1749 1701 } 1750 1702 1751 1703 if (ahci_broken_suspend(pdev)) {
+1 -1
drivers/ata/ahci_mvebu.c
··· 82 82 * 83 83 * Return: 0 on success; Error code otherwise. 84 84 */ 85 - int ahci_mvebu_stop_engine(struct ata_port *ap) 85 + static int ahci_mvebu_stop_engine(struct ata_port *ap) 86 86 { 87 87 void __iomem *port_mmio = ahci_port_base(ap); 88 88 u32 tmp, port_fbs;
+5 -2
drivers/ata/libahci.c
··· 35 35 #include <linux/kernel.h> 36 36 #include <linux/gfp.h> 37 37 #include <linux/module.h> 38 + #include <linux/nospec.h> 38 39 #include <linux/blkdev.h> 39 40 #include <linux/delay.h> 40 41 #include <linux/interrupt.h> ··· 1147 1146 1148 1147 /* get the slot number from the message */ 1149 1148 pmp = (state & EM_MSG_LED_PMP_SLOT) >> 8; 1150 - if (pmp < EM_MAX_SLOTS) 1149 + if (pmp < EM_MAX_SLOTS) { 1150 + pmp = array_index_nospec(pmp, EM_MAX_SLOTS); 1151 1151 emp = &pp->em_priv[pmp]; 1152 - else 1152 + } else { 1153 1153 return -EINVAL; 1154 + } 1154 1155 1155 1156 /* mask off the activity bits if we are in sw_activity 1156 1157 * mode, user should turn off sw_activity before setting
+3
drivers/ata/libata-core.c
··· 2493 2493 (id[ATA_ID_SATA_CAPABILITY] & 0xe) == 0x2) 2494 2494 dev->horkage |= ATA_HORKAGE_NOLPM; 2495 2495 2496 + if (ap->flags & ATA_FLAG_NO_LPM) 2497 + dev->horkage |= ATA_HORKAGE_NOLPM; 2498 + 2496 2499 if (dev->horkage & ATA_HORKAGE_NOLPM) { 2497 2500 ata_dev_warn(dev, "LPM support broken, forcing max_power\n"); 2498 2501 dev->link->ap->target_lpm_policy = ATA_LPM_MAX_POWER;
+16 -25
drivers/ata/libata-eh.c
··· 614 614 list_for_each_entry_safe(scmd, tmp, eh_work_q, eh_entry) { 615 615 struct ata_queued_cmd *qc; 616 616 617 - for (i = 0; i < ATA_MAX_QUEUE; i++) { 618 - qc = __ata_qc_from_tag(ap, i); 617 + ata_qc_for_each_raw(ap, qc, i) { 619 618 if (qc->flags & ATA_QCFLAG_ACTIVE && 620 619 qc->scsicmd == scmd) 621 620 break; ··· 817 818 818 819 static int ata_eh_nr_in_flight(struct ata_port *ap) 819 820 { 821 + struct ata_queued_cmd *qc; 820 822 unsigned int tag; 821 823 int nr = 0; 822 824 823 825 /* count only non-internal commands */ 824 - for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { 825 - if (ata_tag_internal(tag)) 826 - continue; 827 - if (ata_qc_from_tag(ap, tag)) 826 + ata_qc_for_each(ap, qc, tag) { 827 + if (qc) 828 828 nr++; 829 829 } 830 830 ··· 845 847 goto out_unlock; 846 848 847 849 if (cnt == ap->fastdrain_cnt) { 850 + struct ata_queued_cmd *qc; 848 851 unsigned int tag; 849 852 850 853 /* No progress during the last interval, tag all 851 854 * in-flight qcs as timed out and freeze the port. 852 855 */ 853 - for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { 854 - struct ata_queued_cmd *qc = ata_qc_from_tag(ap, tag); 856 + ata_qc_for_each(ap, qc, tag) { 855 857 if (qc) 856 858 qc->err_mask |= AC_ERR_TIMEOUT; 857 859 } ··· 997 999 998 1000 static int ata_do_link_abort(struct ata_port *ap, struct ata_link *link) 999 1001 { 1002 + struct ata_queued_cmd *qc; 1000 1003 int tag, nr_aborted = 0; 1001 1004 1002 1005 WARN_ON(!ap->ops->error_handler); ··· 1006 1007 ata_eh_set_pending(ap, 0); 1007 1008 1008 1009 /* include internal tag in iteration */ 1009 - for (tag = 0; tag <= ATA_MAX_QUEUE; tag++) { 1010 - struct ata_queued_cmd *qc = ata_qc_from_tag(ap, tag); 1011 - 1010 + ata_qc_for_each_with_internal(ap, qc, tag) { 1012 1011 if (qc && (!link || qc->dev->link == link)) { 1013 1012 qc->flags |= ATA_QCFLAG_FAILED; 1014 1013 ata_qc_complete(qc); ··· 1709 1712 return; 1710 1713 1711 1714 /* has LLDD analyzed already? */ 1712 - for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { 1713 - qc = __ata_qc_from_tag(ap, tag); 1714 - 1715 + ata_qc_for_each_raw(ap, qc, tag) { 1715 1716 if (!(qc->flags & ATA_QCFLAG_FAILED)) 1716 1717 continue; ··· 2131 2136 { 2132 2137 struct ata_port *ap = link->ap; 2133 2138 struct ata_eh_context *ehc = &link->eh_context; 2139 + struct ata_queued_cmd *qc; 2134 2140 struct ata_device *dev; 2135 2141 unsigned int all_err_mask = 0, eflags = 0; 2136 2142 int tag, nr_failed = 0, nr_quiet = 0; ··· 2164 2168 2165 2169 all_err_mask |= ehc->i.err_mask; 2166 2170 2167 - for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { 2168 - struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); 2169 - 2171 + ata_qc_for_each_raw(ap, qc, tag) { 2170 2172 if (!(qc->flags & ATA_QCFLAG_FAILED) || 2171 2173 ata_dev_phys_link(qc->dev) != link) 2172 2174 continue; ··· 2430 2436 { 2431 2437 struct ata_port *ap = link->ap; 2432 2438 struct ata_eh_context *ehc = &link->eh_context; 2439 + struct ata_queued_cmd *qc; 2433 2440 const char *frozen, *desc; 2434 2441 char tries_buf[6] = ""; 2435 2442 int tag, nr_failed = 0; ··· 2442 2447 if (ehc->i.desc[0] != '\0') 2443 2448 desc = ehc->i.desc; 2444 2449 2445 - for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { 2446 - struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); 2447 - 2450 + ata_qc_for_each_raw(ap, qc, tag) { 2448 2451 if (!(qc->flags & ATA_QCFLAG_FAILED) || 2449 2452 ata_dev_phys_link(qc->dev) != link || 2450 2453 ((qc->flags & ATA_QCFLAG_QUIET) && ··· 2504 2511 ehc->i.serror & SERR_DEV_XCHG ? "DevExch " : ""); 2505 2512 #endif 2506 2513 2507 - for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { 2508 - struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); 2514 + ata_qc_for_each_raw(ap, qc, tag) { 2509 2515 struct ata_taskfile *cmd = &qc->tf, *res = &qc->result_tf; 2510 2516 char data_buf[20] = ""; 2511 2517 char cdb_buf[70] = ""; ··· 3984 3992 */ 3985 3993 void ata_eh_finish(struct ata_port *ap) 3986 3994 { 3995 + struct ata_queued_cmd *qc; 3987 3996 int tag; 3988 3997 3989 3998 /* retry or finish qcs */ 3990 - for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { 3991 - struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); 3992 - 3999 + ata_qc_for_each_raw(ap, qc, tag) { 3993 4000 if (!(qc->flags & ATA_QCFLAG_FAILED)) 3994 4001 continue; 3995 4002
+12 -6
drivers/ata/libata-scsi.c
··· 3805 3805 */ 3806 3806 goto invalid_param_len; 3807 3807 } 3808 - if (block > dev->n_sectors) 3809 - goto out_of_range; 3810 3808 3811 3809 all = cdb[14] & 0x1; 3810 + if (all) { 3811 + /* 3812 + * Ignore the block address (zone ID) as defined by ZBC. 3813 + */ 3814 + block = 0; 3815 + } else if (block >= dev->n_sectors) { 3816 + /* 3817 + * Block must be a valid zone ID (a zone start LBA). 3818 + */ 3819 + fp = 2; 3820 + goto invalid_fld; 3821 + } 3812 3822 3813 3823 if (ata_ncq_enabled(qc->dev) && 3814 3824 ata_fpdma_zac_mgmt_out_supported(qc->dev)) { ··· 3846 3836 3847 3837 invalid_fld: 3848 3838 ata_scsi_set_invalid_field(qc->dev, scmd, fp, 0xff); 3849 - return 1; 3850 - out_of_range: 3851 - /* "Logical Block Address out of range" */ 3852 - ata_scsi_set_sense(qc->dev, scmd, ILLEGAL_REQUEST, 0x21, 0x00); 3853 3839 return 1; 3854 3840 invalid_param_len: 3855 3841 /* "Parameter list length error" */
+1 -8
drivers/ata/sata_fsl.c
··· 395 395 { 396 396 /* We let libATA core do actual (queue) tag allocation */ 397 397 398 - /* all non NCQ/queued commands should have tag#0 */ 399 - if (ata_tag_internal(tag)) { 400 - DPRINTK("mapping internal cmds to tag#0\n"); 401 - return 0; 402 - } 403 - 404 398 if (unlikely(tag >= SATA_FSL_QUEUE_DEPTH)) { 405 399 DPRINTK("tag %d invalid : out of range\n", tag); 406 400 return 0; ··· 1223 1229 1224 1230 /* Workaround for data length mismatch errata */ 1225 1231 if (unlikely(hstatus & INT_ON_DATA_LENGTH_MISMATCH)) { 1226 - for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { 1227 - qc = ata_qc_from_tag(ap, tag); 1232 + ata_qc_for_each_with_internal(ap, qc, tag) { 1228 1233 if (qc && ata_is_atapi(qc->tf.protocol)) { 1229 1234 u32 hcontrol; 1230 1235 /* Set HControl[27] to clear error registers */
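The libata-eh.c and sata_fsl.c hunks above replace open-coded `for (tag = 0; tag < ATA_MAX_QUEUE; tag++)` loops with `ata_qc_for_each*()` helpers. A toy user-space version of the idea (`MAX_QUEUE`, `qc_from_tag` and `qc_for_each` are illustrative stand-ins, not libata's definitions): the macro walks every tag and hands back the queued command, or NULL for an unused slot, so callers only test the pointer.

```c
#include <stddef.h>

#define MAX_QUEUE 4			/* stand-in for ATA_MAX_QUEUE */

struct qc {
	int active;
};

static struct qc queue[MAX_QUEUE];

/* Stand-in for ata_qc_from_tag(): NULL when the slot is unused. */
static struct qc *qc_from_tag(unsigned int tag)
{
	return (tag < MAX_QUEUE && queue[tag].active) ? &queue[tag] : NULL;
}

/* Toy qc_for_each(): yields a (possibly NULL) qc for every tag. */
#define qc_for_each(qc, tag) \
	for ((tag) = 0; \
	     (tag) < MAX_QUEUE && (((qc) = qc_from_tag(tag)), 1); \
	     (tag)++)

/* Example caller: count the occupied slots. */
static int count_active(void)
{
	struct qc *qc;
	unsigned int tag;
	int n = 0;

	qc_for_each(qc, tag)
		if (qc)
			n++;
	return n;
}
```

Centralizing the tag arithmetic in one iterator is what lets the real series change queue-depth details (such as the internal tag) in a single place instead of in every loop.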
-3
drivers/ata/sata_nv.c
··· 675 675 struct ata_port *ap = ata_shost_to_port(sdev->host); 676 676 struct nv_adma_port_priv *pp = ap->private_data; 677 677 struct nv_adma_port_priv *port0, *port1; 678 - struct scsi_device *sdev0, *sdev1; 679 678 struct pci_dev *pdev = to_pci_dev(ap->host->dev); 680 679 unsigned long segment_boundary, flags; 681 680 unsigned short sg_tablesize; ··· 735 736 736 737 port0 = ap->host->ports[0]->private_data; 737 738 port1 = ap->host->ports[1]->private_data; 738 - sdev0 = ap->host->ports[0]->link.device[0].sdev; 739 - sdev1 = ap->host->ports[1]->link.device[0].sdev; 740 739 if ((port0->flags & NV_ADMA_ATAPI_SETUP_COMPLETE) || 741 740 (port1->flags & NV_ADMA_ATAPI_SETUP_COMPLETE)) { 742 741 /*
+1 -1
drivers/atm/iphase.c
··· 1618 1618 skb_queue_head_init(&iadev->rx_dma_q); 1619 1619 iadev->rx_free_desc_qhead = NULL; 1620 1620 1621 - iadev->rx_open = kcalloc(4, iadev->num_vc, GFP_KERNEL); 1621 + iadev->rx_open = kcalloc(iadev->num_vc, sizeof(void *), GFP_KERNEL); 1622 1622 if (!iadev->rx_open) { 1623 1623 printk(KERN_ERR DEV_LABEL "itf %d couldn't get free page\n", 1624 1624 dev->number);
+2
drivers/atm/zatm.c
··· 1483 1483 return -EFAULT; 1484 1484 if (pool < 0 || pool > ZATM_LAST_POOL) 1485 1485 return -EINVAL; 1486 + pool = array_index_nospec(pool, 1487 + ZATM_LAST_POOL + 1); 1486 1488 if (copy_from_user(&info, 1487 1489 &((struct zatm_pool_req __user *) arg)->info, 1488 1490 sizeof(info))) return -EFAULT;
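The zatm.c hunk above (like the libahci.c one) inserts array_index_nospec() after a bounds check so a speculatively out-of-bounds index cannot be used to leak memory (Spectre v1). A user-space sketch of the underlying branchless mask, assuming index and size fit in a signed long as the kernel's generic helper does (`index_mask_nospec`/`index_nospec` are illustrative names, not the kernel's implementation):

```c
#include <stddef.h>

/*
 * All-ones when index < size, all-zeros otherwise, computed without a
 * conditional branch the CPU could speculate past: size - 1 - index
 * underflows (sets the sign bit) exactly when index >= size.
 */
static unsigned long index_mask_nospec(unsigned long index,
				       unsigned long size)
{
	return ~(long)(index | (size - 1UL - index)) >>
	       (sizeof(long) * 8 - 1);
}

/* Clamp an already bounds-checked index; speculation can only reach 0. */
static unsigned long index_nospec(unsigned long index, unsigned long size)
{
	return index & index_mask_nospec(index, size);
}
```

The architectural result is unchanged for valid indices; the point is that even a mispredicted path sees a masked index rather than the attacker-controlled one.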
+9 -7
drivers/base/power/domain.c
··· 2235 2235 } 2236 2236 2237 2237 static int __genpd_dev_pm_attach(struct device *dev, struct device_node *np, 2238 - unsigned int index) 2238 + unsigned int index, bool power_on) 2239 2239 { 2240 2240 struct of_phandle_args pd_args; 2241 2241 struct generic_pm_domain *pd; ··· 2271 2271 dev->pm_domain->detach = genpd_dev_pm_detach; 2272 2272 dev->pm_domain->sync = genpd_dev_pm_sync; 2273 2273 2274 - genpd_lock(pd); 2275 - ret = genpd_power_on(pd, 0); 2276 - genpd_unlock(pd); 2274 + if (power_on) { 2275 + genpd_lock(pd); 2276 + ret = genpd_power_on(pd, 0); 2277 + genpd_unlock(pd); 2278 + } 2277 2279 2278 2280 if (ret) 2279 2281 genpd_remove_device(pd, dev); ··· 2309 2307 "#power-domain-cells") != 1) 2310 2308 return 0; 2311 2309 2312 - return __genpd_dev_pm_attach(dev, dev->of_node, 0); 2310 + return __genpd_dev_pm_attach(dev, dev->of_node, 0, true); 2313 2311 } 2314 2312 EXPORT_SYMBOL_GPL(genpd_dev_pm_attach); 2315 2313 ··· 2361 2359 } 2362 2360 2363 2361 /* Try to attach the device to the PM domain at the specified index. */ 2364 - ret = __genpd_dev_pm_attach(genpd_dev, dev->of_node, index); 2362 + ret = __genpd_dev_pm_attach(genpd_dev, dev->of_node, index, false); 2365 2363 if (ret < 1) { 2366 2364 device_unregister(genpd_dev); 2367 2365 return ret ? ERR_PTR(ret) : NULL; 2368 2366 } 2369 2367 2370 - pm_runtime_set_active(genpd_dev); 2371 2368 pm_runtime_enable(genpd_dev); 2369 + genpd_queue_power_off_work(dev_to_genpd(genpd_dev)); 2372 2370 2373 2371 return genpd_dev; 2374 2372 }
+1 -1
drivers/block/drbd/drbd_worker.c
··· 282 282 what = COMPLETED_OK; 283 283 } 284 284 285 - bio_put(req->private_bio); 286 285 req->private_bio = ERR_PTR(blk_status_to_errno(bio->bi_status)); 286 + bio_put(bio); 287 287 288 288 /* not req_mod(), we need irqsave here! */ 289 289 spin_lock_irqsave(&device->resource->req_lock, flags);
+1
drivers/block/loop.c
··· 1613 1613 arg = (unsigned long) compat_ptr(arg); 1614 1614 case LOOP_SET_FD: 1615 1615 case LOOP_CHANGE_FD: 1616 + case LOOP_SET_BLOCK_SIZE: 1616 1617 err = lo_ioctl(bdev, mode, cmd, arg); 1617 1618 break; 1618 1619 default:
+4 -4
drivers/bus/ti-sysc.c
··· 169 169 const char *name; 170 170 int nr_fck = 0, nr_ick = 0, i, error = 0; 171 171 172 - ddata->clock_roles = devm_kzalloc(ddata->dev, 173 - sizeof(*ddata->clock_roles) * 172 + ddata->clock_roles = devm_kcalloc(ddata->dev, 174 173 SYSC_MAX_CLOCKS, 174 + sizeof(*ddata->clock_roles), 175 175 GFP_KERNEL); 176 176 if (!ddata->clock_roles) 177 177 return -ENOMEM; ··· 200 200 return -EINVAL; 201 201 } 202 202 203 - ddata->clocks = devm_kzalloc(ddata->dev, 204 - sizeof(*ddata->clocks) * ddata->nr_clocks, 203 + ddata->clocks = devm_kcalloc(ddata->dev, 204 + ddata->nr_clocks, sizeof(*ddata->clocks), 205 205 GFP_KERNEL); 206 206 if (!ddata->clocks) 207 207 return -ENOMEM;
+4 -2
drivers/char/ipmi/ipmi_si_intf.c
··· 2088 2088 return 0; 2089 2089 2090 2090 out_err: 2091 - ipmi_unregister_smi(new_smi->intf); 2092 - new_smi->intf = NULL; 2091 + if (new_smi->intf) { 2092 + ipmi_unregister_smi(new_smi->intf); 2093 + new_smi->intf = NULL; 2094 + } 2093 2095 2094 2096 kfree(init_name); 2095 2097
+11 -22
drivers/char/ipmi/kcs_bmc.c
··· 210 210 int kcs_bmc_handle_event(struct kcs_bmc *kcs_bmc) 211 211 { 212 212 unsigned long flags; 213 - int ret = 0; 213 + int ret = -ENODATA; 214 214 u8 status; 215 215 216 216 spin_lock_irqsave(&kcs_bmc->lock, flags); 217 217 218 - if (!kcs_bmc->running) { 219 - kcs_force_abort(kcs_bmc); 220 - ret = -ENODEV; 221 - goto out_unlock; 218 + status = read_status(kcs_bmc); 219 + if (status & KCS_STATUS_IBF) { 220 + if (!kcs_bmc->running) 221 + kcs_force_abort(kcs_bmc); 222 + else if (status & KCS_STATUS_CMD_DAT) 223 + kcs_bmc_handle_cmd(kcs_bmc); 224 + else 225 + kcs_bmc_handle_data(kcs_bmc); 226 + 227 + ret = 0; 222 228 } 223 229 224 - status = read_status(kcs_bmc) & (KCS_STATUS_IBF | KCS_STATUS_CMD_DAT); 225 - 226 - switch (status) { 227 - case KCS_STATUS_IBF | KCS_STATUS_CMD_DAT: 228 - kcs_bmc_handle_cmd(kcs_bmc); 229 - break; 230 - 231 - case KCS_STATUS_IBF: 232 - kcs_bmc_handle_data(kcs_bmc); 233 - break; 234 - 235 - default: 236 - ret = -ENODATA; 237 - break; 238 - } 239 - 240 - out_unlock: 241 230 spin_unlock_irqrestore(&kcs_bmc->lock, flags); 242 231 243 232 return ret;
+1 -1
drivers/clk/Makefile
··· 96 96 obj-$(CONFIG_ARCH_STI) += st/ 97 97 obj-$(CONFIG_ARCH_STRATIX10) += socfpga/ 98 98 obj-$(CONFIG_ARCH_SUNXI) += sunxi/ 99 - obj-$(CONFIG_ARCH_SUNXI) += sunxi-ng/ 99 + obj-$(CONFIG_SUNXI_CCU) += sunxi-ng/ 100 100 obj-$(CONFIG_ARCH_TEGRA) += tegra/ 101 101 obj-y += ti/ 102 102 obj-$(CONFIG_CLK_UNIPHIER) += uniphier/
+1 -1
drivers/clk/davinci/da8xx-cfgchip.c
··· 672 672 673 673 usb1 = da8xx_cfgchip_register_usb1_clk48(dev, regmap); 674 674 if (IS_ERR(usb1)) { 675 - if (PTR_ERR(usb0) == -EPROBE_DEFER) 675 + if (PTR_ERR(usb1) == -EPROBE_DEFER) 676 676 return -EPROBE_DEFER; 677 677 678 678 dev_warn(dev, "Failed to register usb1_clk48 (%ld)\n",
+1 -1
drivers/clk/davinci/psc.h
··· 107 107 #ifdef CONFIG_ARCH_DAVINCI_DM355 108 108 extern const struct davinci_psc_init_data dm355_psc_init_data; 109 109 #endif 110 - #ifdef CONFIG_ARCH_DAVINCI_DM356 110 + #ifdef CONFIG_ARCH_DAVINCI_DM365 111 111 extern const struct davinci_psc_init_data dm365_psc_init_data; 112 112 #endif 113 113 #ifdef CONFIG_ARCH_DAVINCI_DM644x
+15 -24
drivers/clk/sunxi-ng/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 # Common objects 3 - lib-$(CONFIG_SUNXI_CCU) += ccu_common.o 4 - lib-$(CONFIG_SUNXI_CCU) += ccu_mmc_timing.o 5 - lib-$(CONFIG_SUNXI_CCU) += ccu_reset.o 3 + obj-y += ccu_common.o 4 + obj-y += ccu_mmc_timing.o 5 + obj-y += ccu_reset.o 6 6 7 7 # Base clock types 8 - lib-$(CONFIG_SUNXI_CCU) += ccu_div.o 9 - lib-$(CONFIG_SUNXI_CCU) += ccu_frac.o 10 - lib-$(CONFIG_SUNXI_CCU) += ccu_gate.o 11 - lib-$(CONFIG_SUNXI_CCU) += ccu_mux.o 12 - lib-$(CONFIG_SUNXI_CCU) += ccu_mult.o 13 - lib-$(CONFIG_SUNXI_CCU) += ccu_phase.o 14 - lib-$(CONFIG_SUNXI_CCU) += ccu_sdm.o 8 + obj-y += ccu_div.o 9 + obj-y += ccu_frac.o 10 + obj-y += ccu_gate.o 11 + obj-y += ccu_mux.o 12 + obj-y += ccu_mult.o 13 + obj-y += ccu_phase.o 14 + obj-y += ccu_sdm.o 15 15 16 16 # Multi-factor clocks 17 - lib-$(CONFIG_SUNXI_CCU) += ccu_nk.o 18 - lib-$(CONFIG_SUNXI_CCU) += ccu_nkm.o 19 - lib-$(CONFIG_SUNXI_CCU) += ccu_nkmp.o 20 - lib-$(CONFIG_SUNXI_CCU) += ccu_nm.o 21 - lib-$(CONFIG_SUNXI_CCU) += ccu_mp.o 17 + obj-y += ccu_nk.o 18 + obj-y += ccu_nkm.o 19 + obj-y += ccu_nkmp.o 20 + obj-y += ccu_nm.o 21 + obj-y += ccu_mp.o 22 22 23 23 # SoC support 24 24 obj-$(CONFIG_SUN50I_A64_CCU) += ccu-sun50i-a64.o ··· 38 38 obj-$(CONFIG_SUN9I_A80_CCU) += ccu-sun9i-a80.o 39 39 obj-$(CONFIG_SUN9I_A80_CCU) += ccu-sun9i-a80-de.o 40 40 obj-$(CONFIG_SUN9I_A80_CCU) += ccu-sun9i-a80-usb.o 41 - 42 - # The lib-y file goals is supposed to work only in arch/*/lib or lib/. In our 43 - # case, we want to use that goal, but even though lib.a will be properly 44 - # generated, it will not be linked in, eventually resulting in a linker error 45 - # for missing symbols. 46 - # 47 - # We can work around that by explicitly adding lib.a to the obj-y goal. This is 48 - # an undocumented behaviour, but works well for now. 49 - obj-$(CONFIG_SUNXI_CCU) += lib.a
+1 -1
drivers/clocksource/arm_arch_timer.c
··· 735 735 clk->features |= CLOCK_EVT_FEAT_DYNIRQ; 736 736 clk->name = "arch_mem_timer"; 737 737 clk->rating = 400; 738 - clk->cpumask = cpu_all_mask; 738 + clk->cpumask = cpu_possible_mask; 739 739 if (arch_timer_mem_use_virtual) { 740 740 clk->set_state_shutdown = arch_timer_shutdown_virt_mem; 741 741 clk->set_state_oneshot_stopped = arch_timer_shutdown_virt_mem;
+8 -4
drivers/dax/device.c
··· 189 189 190 190 /* prevent private mappings from being established */ 191 191 if ((vma->vm_flags & VM_MAYSHARE) != VM_MAYSHARE) { 192 - dev_info(dev, "%s: %s: fail, attempted private mapping\n", 192 + dev_info_ratelimited(dev, 193 + "%s: %s: fail, attempted private mapping\n", 193 194 current->comm, func); 194 195 return -EINVAL; 195 196 } 196 197 197 198 mask = dax_region->align - 1; 198 199 if (vma->vm_start & mask || vma->vm_end & mask) { 199 - dev_info(dev, "%s: %s: fail, unaligned vma (%#lx - %#lx, %#lx)\n", 200 + dev_info_ratelimited(dev, 201 + "%s: %s: fail, unaligned vma (%#lx - %#lx, %#lx)\n", 200 202 current->comm, func, vma->vm_start, vma->vm_end, 201 203 mask); 202 204 return -EINVAL; ··· 206 204 207 205 if ((dax_region->pfn_flags & (PFN_DEV|PFN_MAP)) == PFN_DEV 208 206 && (vma->vm_flags & VM_DONTCOPY) == 0) { 209 - dev_info(dev, "%s: %s: fail, dax range requires MADV_DONTFORK\n", 207 + dev_info_ratelimited(dev, 208 + "%s: %s: fail, dax range requires MADV_DONTFORK\n", 210 209 current->comm, func); 211 210 return -EINVAL; 212 211 } 213 212 214 213 if (!vma_is_dax(vma)) { 215 - dev_info(dev, "%s: %s: fail, vma is not DAX capable\n", 214 + dev_info_ratelimited(dev, 215 + "%s: %s: fail, vma is not DAX capable\n", 216 216 current->comm, func); 217 217 return -EINVAL; 218 218 }
+1 -1
drivers/dma/k3dma.c
··· 794 794 struct k3_dma_dev *d = ofdma->of_dma_data; 795 795 unsigned int request = dma_spec->args[0]; 796 796 797 - if (request > d->dma_requests) 797 + if (request >= d->dma_requests) 798 798 return NULL; 799 799 800 800 return dma_get_slave_channel(&(d->chans[request].vc.chan));
+1 -1
drivers/dma/pl330.c
··· 3033 3033 pd->src_addr_widths = PL330_DMA_BUSWIDTHS; 3034 3034 pd->dst_addr_widths = PL330_DMA_BUSWIDTHS; 3035 3035 pd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); 3036 - pd->residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT; 3036 + pd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; 3037 3037 pd->max_burst = ((pl330->quirks & PL330_QUIRK_BROKEN_NO_FLUSHP) ? 3038 3038 1 : PL330_MAX_BURST); 3039 3039
+5 -1
drivers/dma/ti/omap-dma.c
··· 1485 1485 od->ddev.src_addr_widths = OMAP_DMA_BUSWIDTHS; 1486 1486 od->ddev.dst_addr_widths = OMAP_DMA_BUSWIDTHS; 1487 1487 od->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV); 1488 - od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; 1488 + if (__dma_omap15xx(od->plat->dma_attr)) 1489 + od->ddev.residue_granularity = 1490 + DMA_RESIDUE_GRANULARITY_DESCRIPTOR; 1491 + else 1492 + od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; 1489 1493 od->ddev.max_burst = SZ_16M - 1; /* CCEN: 24bit unsigned */ 1490 1494 od->ddev.dev = &pdev->dev; 1491 1495 INIT_LIST_HEAD(&od->ddev.channels);
+4 -2
drivers/fpga/altera-cvp.c
··· 455 455 456 456 mgr = fpga_mgr_create(&pdev->dev, conf->mgr_name, 457 457 &altera_cvp_ops, conf); 458 - if (!mgr) 459 - return -ENOMEM; 458 + if (!mgr) { 459 + ret = -ENOMEM; 460 + goto err_unmap; 461 + } 460 462 461 463 pci_set_drvdata(pdev, mgr); 462 464
+8 -38
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 190 190 struct amdgpu_irq_src; 191 191 struct amdgpu_fpriv; 192 192 struct amdgpu_bo_va_mapping; 193 + struct amdgpu_atif; 193 194 194 195 enum amdgpu_cp_irq { 195 196 AMDGPU_CP_IRQ_GFX_EOP = 0, ··· 1270 1269 /* 1271 1270 * ACPI 1272 1271 */ 1273 - struct amdgpu_atif_notification_cfg { 1274 - bool enabled; 1275 - int command_code; 1276 - }; 1277 - 1278 - struct amdgpu_atif_notifications { 1279 - bool display_switch; 1280 - bool expansion_mode_change; 1281 - bool thermal_state; 1282 - bool forced_power_state; 1283 - bool system_power_state; 1284 - bool display_conf_change; 1285 - bool px_gfx_switch; 1286 - bool brightness_change; 1287 - bool dgpu_display_event; 1288 - }; 1289 - 1290 - struct amdgpu_atif_functions { 1291 - bool system_params; 1292 - bool sbios_requests; 1293 - bool select_active_disp; 1294 - bool lid_state; 1295 - bool get_tv_standard; 1296 - bool set_tv_standard; 1297 - bool get_panel_expansion_mode; 1298 - bool set_panel_expansion_mode; 1299 - bool temperature_change; 1300 - bool graphics_device_types; 1301 - }; 1302 - 1303 - struct amdgpu_atif { 1304 - struct amdgpu_atif_notifications notifications; 1305 - struct amdgpu_atif_functions functions; 1306 - struct amdgpu_atif_notification_cfg notification_cfg; 1307 - struct amdgpu_encoder *encoder_for_bl; 1308 - }; 1309 - 1310 1272 struct amdgpu_atcs_functions { 1311 1273 bool get_ext_state; 1312 1274 bool pcie_perf_req; ··· 1430 1466 #if defined(CONFIG_DEBUG_FS) 1431 1467 struct dentry *debugfs_regs[AMDGPU_DEBUGFS_MAX_COMPONENTS]; 1432 1468 #endif 1433 - struct amdgpu_atif atif; 1469 + struct amdgpu_atif *atif; 1434 1470 struct amdgpu_atcs atcs; 1435 1471 struct mutex srbm_mutex; 1436 1472 /* GRBM index mutex. Protects concurrent access to GRBM index */ ··· 1856 1892 static inline bool amdgpu_is_atpx_hybrid(void) { return false; } 1857 1893 static inline bool amdgpu_atpx_dgpu_req_power_for_displays(void) { return false; } 1858 1894 static inline bool amdgpu_has_atpx(void) { return false; } 1895 + #endif 1896 + 1897 + #if defined(CONFIG_VGA_SWITCHEROO) && defined(CONFIG_ACPI) 1898 + void *amdgpu_atpx_get_dhandle(void); 1899 + #else 1900 + static inline void *amdgpu_atpx_get_dhandle(void) { return NULL; } 1859 1901 #endif 1860 1902 1861 1903 /*
+111 -26
drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
··· 34 34 #include "amd_acpi.h" 35 35 #include "atom.h" 36 36 37 + struct amdgpu_atif_notification_cfg { 38 + bool enabled; 39 + int command_code; 40 + }; 41 + 42 + struct amdgpu_atif_notifications { 43 + bool display_switch; 44 + bool expansion_mode_change; 45 + bool thermal_state; 46 + bool forced_power_state; 47 + bool system_power_state; 48 + bool display_conf_change; 49 + bool px_gfx_switch; 50 + bool brightness_change; 51 + bool dgpu_display_event; 52 + }; 53 + 54 + struct amdgpu_atif_functions { 55 + bool system_params; 56 + bool sbios_requests; 57 + bool select_active_disp; 58 + bool lid_state; 59 + bool get_tv_standard; 60 + bool set_tv_standard; 61 + bool get_panel_expansion_mode; 62 + bool set_panel_expansion_mode; 63 + bool temperature_change; 64 + bool graphics_device_types; 65 + }; 66 + 67 + struct amdgpu_atif { 68 + acpi_handle handle; 69 + 70 + struct amdgpu_atif_notifications notifications; 71 + struct amdgpu_atif_functions functions; 72 + struct amdgpu_atif_notification_cfg notification_cfg; 73 + struct amdgpu_encoder *encoder_for_bl; 74 + }; 75 + 37 76 /* Call the ATIF method 38 77 */ 39 78 /** ··· 85 46 * Executes the requested ATIF function (all asics). 86 47 * Returns a pointer to the acpi output buffer. 87 48 */ 88 - static union acpi_object *amdgpu_atif_call(acpi_handle handle, int function, 89 - struct acpi_buffer *params) 49 + static union acpi_object *amdgpu_atif_call(struct amdgpu_atif *atif, 50 + int function, 51 + struct acpi_buffer *params) 90 52 { 91 53 acpi_status status; 92 54 union acpi_object atif_arg_elements[2]; ··· 110 70 atif_arg_elements[1].integer.value = 0; 111 71 } 112 72 113 - status = acpi_evaluate_object(handle, "ATIF", &atif_arg, &buffer); 73 + status = acpi_evaluate_object(atif->handle, NULL, &atif_arg, 74 + &buffer); 114 75 115 76 /* Fail only if calling the method fails and ATIF is supported */ 116 77 if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) { ··· 182 141 * (all asics). 
183 142 * returns 0 on success, error on failure. 184 143 */ 185 - static int amdgpu_atif_verify_interface(acpi_handle handle, 186 - struct amdgpu_atif *atif) 144 + static int amdgpu_atif_verify_interface(struct amdgpu_atif *atif) 187 145 { 188 146 union acpi_object *info; 189 147 struct atif_verify_interface output; 190 148 size_t size; 191 149 int err = 0; 192 150 193 - info = amdgpu_atif_call(handle, ATIF_FUNCTION_VERIFY_INTERFACE, NULL); 151 + info = amdgpu_atif_call(atif, ATIF_FUNCTION_VERIFY_INTERFACE, NULL); 194 152 if (!info) 195 153 return -EIO; 196 154 ··· 216 176 return err; 217 177 } 218 178 179 + static acpi_handle amdgpu_atif_probe_handle(acpi_handle dhandle) 180 + { 181 + acpi_handle handle = NULL; 182 + char acpi_method_name[255] = { 0 }; 183 + struct acpi_buffer buffer = { sizeof(acpi_method_name), acpi_method_name }; 184 + acpi_status status; 185 + 186 + /* For PX/HG systems, ATIF and ATPX are in the iGPU's namespace, on dGPU only 187 + * systems, ATIF is in the dGPU's namespace. 188 + */ 189 + status = acpi_get_handle(dhandle, "ATIF", &handle); 190 + if (ACPI_SUCCESS(status)) 191 + goto out; 192 + 193 + if (amdgpu_has_atpx()) { 194 + status = acpi_get_handle(amdgpu_atpx_get_dhandle(), "ATIF", 195 + &handle); 196 + if (ACPI_SUCCESS(status)) 197 + goto out; 198 + } 199 + 200 + DRM_DEBUG_DRIVER("No ATIF handle found\n"); 201 + return NULL; 202 + out: 203 + acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer); 204 + DRM_DEBUG_DRIVER("Found ATIF handle %s\n", acpi_method_name); 205 + return handle; 206 + } 207 + 219 208 /** 220 209 * amdgpu_atif_get_notification_params - determine notify configuration 221 210 * ··· 257 188 * where n is specified in the result if a notifier is used. 258 189 * Returns 0 on success, error on failure. 
259 190 */ 260 - static int amdgpu_atif_get_notification_params(acpi_handle handle, 261 - struct amdgpu_atif_notification_cfg *n) 191 + static int amdgpu_atif_get_notification_params(struct amdgpu_atif *atif) 262 192 { 263 193 union acpi_object *info; 194 + struct amdgpu_atif_notification_cfg *n = &atif->notification_cfg; 264 195 struct atif_system_params params; 265 196 size_t size; 266 197 int err = 0; 267 198 268 - info = amdgpu_atif_call(handle, ATIF_FUNCTION_GET_SYSTEM_PARAMETERS, NULL); 199 + info = amdgpu_atif_call(atif, ATIF_FUNCTION_GET_SYSTEM_PARAMETERS, 200 + NULL); 269 201 if (!info) { 270 202 err = -EIO; 271 203 goto out; ··· 320 250 * (all asics). 321 251 * Returns 0 on success, error on failure. 322 252 */ 323 - static int amdgpu_atif_get_sbios_requests(acpi_handle handle, 324 - struct atif_sbios_requests *req) 253 + static int amdgpu_atif_get_sbios_requests(struct amdgpu_atif *atif, 254 + struct atif_sbios_requests *req) 325 255 { 326 256 union acpi_object *info; 327 257 size_t size; 328 258 int count = 0; 329 259 330 - info = amdgpu_atif_call(handle, ATIF_FUNCTION_GET_SYSTEM_BIOS_REQUESTS, NULL); 260 + info = amdgpu_atif_call(atif, ATIF_FUNCTION_GET_SYSTEM_BIOS_REQUESTS, 261 + NULL); 331 262 if (!info) 332 263 return -EIO; 333 264 ··· 361 290 * Returns NOTIFY code 362 291 */ 363 292 static int amdgpu_atif_handler(struct amdgpu_device *adev, 364 - struct acpi_bus_event *event) 293 + struct acpi_bus_event *event) 365 294 { 366 - struct amdgpu_atif *atif = &adev->atif; 295 + struct amdgpu_atif *atif = adev->atif; 367 296 struct atif_sbios_requests req; 368 - acpi_handle handle; 369 297 int count; 370 298 371 299 DRM_DEBUG_DRIVER("event, device_class = %s, type = %#x\n", ··· 373 303 if (strcmp(event->device_class, ACPI_VIDEO_CLASS) != 0) 374 304 return NOTIFY_DONE; 375 305 376 - if (!atif->notification_cfg.enabled || 306 + if (!atif || 307 + !atif->notification_cfg.enabled || 377 308 event->type != atif->notification_cfg.command_code) 378 309 /* Not our event */ 379 310 return NOTIFY_DONE; 380 311 381 312 /* Check pending SBIOS requests */ 382 - handle = ACPI_HANDLE(&adev->pdev->dev); 383 - count = amdgpu_atif_get_sbios_requests(handle, &req); 313 + count = amdgpu_atif_get_sbios_requests(atif, &req); 384 314 385 315 if (count <= 0) 386 316 return NOTIFY_DONE; ··· 711 641 */ 712 642 int amdgpu_acpi_init(struct amdgpu_device *adev) 713 643 { 714 - acpi_handle handle; 715 - struct amdgpu_atif *atif = &adev->atif; 644 + acpi_handle handle, atif_handle; 645 + struct amdgpu_atif *atif; 716 646 struct amdgpu_atcs *atcs = &adev->atcs; 717 647 int ret; 718 648 ··· 728 658 DRM_DEBUG_DRIVER("Call to ATCS verify_interface failed: %d\n", ret); 729 659 } 730 660 731 - /* Call the ATIF method */ 732 - ret = amdgpu_atif_verify_interface(handle, atif); 733 - if (ret) { 734 - DRM_DEBUG_DRIVER("Call to ATIF verify_interface failed: %d\n", ret); 661 + /* Probe for ATIF, and initialize it if found */ 662 + atif_handle = amdgpu_atif_probe_handle(handle); 663 + if (!atif_handle) 664 + goto out; 665 + 666 + atif = kzalloc(sizeof(*atif), GFP_KERNEL); 667 + if (!atif) { 668 + DRM_WARN("Not enough memory to initialize ATIF\n"); 735 669 goto out; 736 670 } 671 + atif->handle = atif_handle; 672 + 673 + /* Call the ATIF method */ 674 + ret = amdgpu_atif_verify_interface(atif); 675 + if (ret) { 676 + DRM_DEBUG_DRIVER("Call to ATIF verify_interface failed: %d\n", ret); 677 + kfree(atif); 678 + goto out; 679 + } 680 + adev->atif = atif; 737 681 738 682 if (atif->notifications.brightness_change) { 739 683 struct drm_encoder *tmp; ··· 777 693 } 778 694 779 695 if (atif->functions.system_params) { 780 - ret = amdgpu_atif_get_notification_params(handle, 781 - &atif->notification_cfg); 696 + ret = amdgpu_atif_get_notification_params(atif); 782 697 if (ret) { 783 698 DRM_DEBUG_DRIVER("Call to GET_SYSTEM_PARAMS failed: %d\n", 784 699 ret); ··· 803 720 void amdgpu_acpi_fini(struct amdgpu_device *adev) 804 721 { 805 722 unregister_acpi_notifier(&adev->acpi_nb); 723 + if (adev->atif) 724 + kfree(adev->atif); 806 725 }
+6
drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
··· 90 90 return amdgpu_atpx_priv.atpx.dgpu_req_power_for_displays; 91 91 } 92 92 93 + #if defined(CONFIG_ACPI) 94 + void *amdgpu_atpx_get_dhandle(void) { 95 + return amdgpu_atpx_priv.dhandle; 96 + } 97 + #endif 98 + 93 99 /** 94 100 * amdgpu_atpx_call - call an ATPX method 95 101 *
+6 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
··· 231 231 if (ib->flags & AMDGPU_IB_FLAG_TC_WB_NOT_INVALIDATE) 232 232 fence_flags |= AMDGPU_FENCE_FLAG_TC_WB_ONLY; 233 233 234 + /* wrap the last IB with fence */ 235 + if (job && job->uf_addr) { 236 + amdgpu_ring_emit_fence(ring, job->uf_addr, job->uf_sequence, 237 + fence_flags | AMDGPU_FENCE_FLAG_64BIT); 238 + } 239 + 234 240 r = amdgpu_fence_emit(ring, f, fence_flags); 235 241 if (r) { 236 242 dev_err(adev->dev, "failed to emit fence (%d)\n", r); ··· 248 242 249 243 if (ring->funcs->insert_end) 250 244 ring->funcs->insert_end(ring); 251 - 252 - /* wrap the last IB with fence */ 253 - if (job && job->uf_addr) { 254 - amdgpu_ring_emit_fence(ring, job->uf_addr, job->uf_sequence, 255 - fence_flags | AMDGPU_FENCE_FLAG_64BIT); 256 - } 257 245 258 246 if (patch_offset != ~0 && ring->funcs->patch_cond_exec) 259 247 amdgpu_ring_patch_cond_exec(ring, patch_offset);
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
··· 1882 1882 if (!amdgpu_device_has_dc_support(adev)) { 1883 1883 mutex_lock(&adev->pm.mutex); 1884 1884 amdgpu_dpm_get_active_displays(adev); 1885 - adev->pm.pm_display_cfg.num_display = adev->pm.dpm.new_active_crtcs; 1885 + adev->pm.pm_display_cfg.num_display = adev->pm.dpm.new_active_crtc_count; 1886 1886 adev->pm.pm_display_cfg.vrefresh = amdgpu_dpm_get_vrefresh(adev); 1887 1887 adev->pm.pm_display_cfg.min_vblank_time = amdgpu_dpm_get_vblank_time(adev); 1888 1888 /* we have issues with mclk switching with refresh rates over 120 hz on the non-DC code. */
+2 -2
drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
··· 900 900 .emit_frame_size = 901 901 4 + /* vce_v3_0_emit_pipeline_sync */ 902 902 6, /* amdgpu_vce_ring_emit_fence x1 no user fence */ 903 - .emit_ib_size = 5, /* vce_v3_0_ring_emit_ib */ 903 + .emit_ib_size = 4, /* amdgpu_vce_ring_emit_ib */ 904 904 .emit_ib = amdgpu_vce_ring_emit_ib, 905 905 .emit_fence = amdgpu_vce_ring_emit_fence, 906 906 .test_ring = amdgpu_vce_ring_test_ring, ··· 924 924 6 + /* vce_v3_0_emit_vm_flush */ 925 925 4 + /* vce_v3_0_emit_pipeline_sync */ 926 926 6 + 6, /* amdgpu_vce_ring_emit_fence x2 vm fence */ 927 - .emit_ib_size = 4, /* amdgpu_vce_ring_emit_ib */ 927 + .emit_ib_size = 5, /* vce_v3_0_ring_emit_ib */ 928 928 .emit_ib = vce_v3_0_ring_emit_ib, 929 929 .emit_vm_flush = vce_v3_0_emit_vm_flush, 930 930 .emit_pipeline_sync = vce_v3_0_emit_pipeline_sync,
+47 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 2175 2175 return color_space; 2176 2176 } 2177 2177 2178 + static void reduce_mode_colour_depth(struct dc_crtc_timing *timing_out) 2179 + { 2180 + if (timing_out->display_color_depth <= COLOR_DEPTH_888) 2181 + return; 2182 + 2183 + timing_out->display_color_depth--; 2184 + } 2185 + 2186 + static void adjust_colour_depth_from_display_info(struct dc_crtc_timing *timing_out, 2187 + const struct drm_display_info *info) 2188 + { 2189 + int normalized_clk; 2190 + if (timing_out->display_color_depth <= COLOR_DEPTH_888) 2191 + return; 2192 + do { 2193 + normalized_clk = timing_out->pix_clk_khz; 2194 + /* YCbCr 4:2:0 requires additional adjustment of 1/2 */ 2195 + if (timing_out->pixel_encoding == PIXEL_ENCODING_YCBCR420) 2196 + normalized_clk /= 2; 2197 + /* Adjusting pix clock following on HDMI spec based on colour depth */ 2198 + switch (timing_out->display_color_depth) { 2199 + case COLOR_DEPTH_101010: 2200 + normalized_clk = (normalized_clk * 30) / 24; 2201 + break; 2202 + case COLOR_DEPTH_121212: 2203 + normalized_clk = (normalized_clk * 36) / 24; 2204 + break; 2205 + case COLOR_DEPTH_161616: 2206 + normalized_clk = (normalized_clk * 48) / 24; 2207 + break; 2208 + default: 2209 + return; 2210 + } 2211 + if (normalized_clk <= info->max_tmds_clock) 2212 + return; 2213 + reduce_mode_colour_depth(timing_out); 2214 + 2215 + } while (timing_out->display_color_depth > COLOR_DEPTH_888); 2216 + 2217 + } 2178 2218 /*****************************************************************************/ 2179 2219 2180 2220 static void ··· 2223 2183 const struct drm_connector *connector) 2224 2184 { 2225 2185 struct dc_crtc_timing *timing_out = &stream->timing; 2186 + const struct drm_display_info *info = &connector->display_info; 2226 2187 2227 2188 memset(timing_out, 0, sizeof(struct dc_crtc_timing)); ··· 2232 2191 timing_out->v_border_top = 0; 2233 2192 timing_out->v_border_bottom = 0; 2234 2193 /* TODO: un-hardcode */ 2235 - 2236 - if ((connector->display_info.color_formats & DRM_COLOR_FORMAT_YCRCB444) 2194 + if (drm_mode_is_420_only(info, mode_in) 2195 + && stream->sink->sink_signal == SIGNAL_TYPE_HDMI_TYPE_A) 2196 + timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR420; 2197 + else if ((connector->display_info.color_formats & DRM_COLOR_FORMAT_YCRCB444) 2237 2198 && stream->sink->sink_signal == SIGNAL_TYPE_HDMI_TYPE_A) 2238 2199 timing_out->pixel_encoding = PIXEL_ENCODING_YCBCR444; 2239 2200 else ··· 2271 2228 2272 2229 stream->out_transfer_func->type = TF_TYPE_PREDEFINED; 2273 2230 stream->out_transfer_func->tf = TRANSFER_FUNCTION_SRGB; 2231 + if (stream->sink->sink_signal == SIGNAL_TYPE_HDMI_TYPE_A) 2232 + adjust_colour_depth_from_display_info(timing_out, info); 2274 2233 } 2275 2234 2276 2235 static void fill_audio_info(struct audio_info *audio_info,
+4 -1
drivers/gpu/drm/amd/include/atomfirmware.h
··· 1433 1433 uint8_t acggfxclkspreadpercent; 1434 1434 uint16_t acggfxclkspreadfreq; 1435 1435 1436 - uint32_t boardreserved[10]; 1436 + uint8_t Vr2_I2C_address; 1437 + uint8_t padding_vr2[3]; 1438 + 1439 + uint32_t boardreserved[9]; 1437 1440 }; 1438 1441 1439 1442 /*
+84 -12
drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
··· 512 512 return 0; 513 513 } 514 514 515 + static void pp_atomfwctrl_copy_vbios_bootup_values_3_2(struct pp_hwmgr *hwmgr, 516 + struct pp_atomfwctrl_bios_boot_up_values *boot_values, 517 + struct atom_firmware_info_v3_2 *fw_info) 518 + { 519 + uint32_t frequency = 0; 520 + 521 + boot_values->ulRevision = fw_info->firmware_revision; 522 + boot_values->ulGfxClk = fw_info->bootup_sclk_in10khz; 523 + boot_values->ulUClk = fw_info->bootup_mclk_in10khz; 524 + boot_values->usVddc = fw_info->bootup_vddc_mv; 525 + boot_values->usVddci = fw_info->bootup_vddci_mv; 526 + boot_values->usMvddc = fw_info->bootup_mvddc_mv; 527 + boot_values->usVddGfx = fw_info->bootup_vddgfx_mv; 528 + boot_values->ucCoolingID = fw_info->coolingsolution_id; 529 + boot_values->ulSocClk = 0; 530 + boot_values->ulDCEFClk = 0; 531 + 532 + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU11_SYSPLL0_SOCCLK_ID, &frequency)) 533 + boot_values->ulSocClk = frequency; 534 + 535 + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU11_SYSPLL0_DCEFCLK_ID, &frequency)) 536 + boot_values->ulDCEFClk = frequency; 537 + 538 + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU11_SYSPLL0_ECLK_ID, &frequency)) 539 + boot_values->ulEClk = frequency; 540 + 541 + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU11_SYSPLL0_VCLK_ID, &frequency)) 542 + boot_values->ulVClk = frequency; 543 + 544 + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU11_SYSPLL0_DCLK_ID, &frequency)) 545 + boot_values->ulDClk = frequency; 546 + } 547 + 548 + static void pp_atomfwctrl_copy_vbios_bootup_values_3_1(struct pp_hwmgr *hwmgr, 549 + struct pp_atomfwctrl_bios_boot_up_values *boot_values, 550 + struct atom_firmware_info_v3_1 *fw_info) 551 + { 552 + uint32_t frequency = 0; 553 + 554 + boot_values->ulRevision = fw_info->firmware_revision; 555 + boot_values->ulGfxClk = fw_info->bootup_sclk_in10khz; 556 + boot_values->ulUClk = fw_info->bootup_mclk_in10khz; 557 + boot_values->usVddc = fw_info->bootup_vddc_mv; 558 + boot_values->usVddci = fw_info->bootup_vddci_mv; 559 + boot_values->usMvddc = fw_info->bootup_mvddc_mv; 560 + boot_values->usVddGfx = fw_info->bootup_vddgfx_mv; 561 + boot_values->ucCoolingID = fw_info->coolingsolution_id; 562 + boot_values->ulSocClk = 0; 563 + boot_values->ulDCEFClk = 0; 564 + 565 + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU9_SYSPLL0_SOCCLK_ID, &frequency)) 566 + boot_values->ulSocClk = frequency; 567 + 568 + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU9_SYSPLL0_DCEFCLK_ID, &frequency)) 569 + boot_values->ulDCEFClk = frequency; 570 + 571 + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU9_SYSPLL0_ECLK_ID, &frequency)) 572 + boot_values->ulEClk = frequency; 573 + 574 + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU9_SYSPLL0_VCLK_ID, &frequency)) 575 + boot_values->ulVClk = frequency; 576 + 577 + if (!pp_atomfwctrl_get_clk_information_by_clkid(hwmgr, SMU9_SYSPLL0_DCLK_ID, &frequency)) 578 + boot_values->ulDClk = frequency; 579 + } 580 + 515 581 int pp_atomfwctrl_get_vbios_bootup_values(struct pp_hwmgr *hwmgr, 516 582 struct pp_atomfwctrl_bios_boot_up_values *boot_values) 517 583 { 518 - struct atom_firmware_info_v3_1 *info = NULL; 584 + struct atom_firmware_info_v3_2 *fwinfo_3_2; 585 + struct atom_firmware_info_v3_1 *fwinfo_3_1; 586 + struct atom_common_table_header *info = NULL; 519 587 uint16_t ix; 520 588 521 589 ix = GetIndexIntoMasterDataTable(firmwareinfo); 522 - info = (struct atom_firmware_info_v3_1 *) 590 + info = (struct atom_common_table_header *) 523 591 smu_atom_get_data_table(hwmgr->adev, 524 592 ix, NULL, NULL, NULL); ··· 596 528 return -EINVAL; 597 529 } 598 530 599 - boot_values->ulRevision = info->firmware_revision; 600 - boot_values->ulGfxClk = info->bootup_sclk_in10khz; 601 - boot_values->ulUClk = info->bootup_mclk_in10khz; 602 - boot_values->usVddc = info->bootup_vddc_mv; 603 - boot_values->usVddci = info->bootup_vddci_mv; 604 - boot_values->usMvddc = info->bootup_mvddc_mv; 605 - boot_values->usVddGfx = info->bootup_vddgfx_mv; 606 - boot_values->ucCoolingID = info->coolingsolution_id; 607 - boot_values->ulSocClk = 0; 608 - boot_values->ulDCEFClk = 0; 531 + if ((info->format_revision == 3) && (info->content_revision == 2)) { 532 + fwinfo_3_2 = (struct atom_firmware_info_v3_2 *)info; 533 + pp_atomfwctrl_copy_vbios_bootup_values_3_2(hwmgr, 534 + boot_values, fwinfo_3_2); 535 + } else if ((info->format_revision == 3) && (info->content_revision == 1)) { 536 + fwinfo_3_1 = (struct atom_firmware_info_v3_1 *)info; 537 + pp_atomfwctrl_copy_vbios_bootup_values_3_1(hwmgr, 538 + boot_values, fwinfo_3_1); 539 + } else { 540 + pr_info("Fw info table revision does not match!"); 541 + return -EINVAL; 542 + } 609 543 610 544 return 0; 611 545 } ··· 698 628 param->acggfxclkspreadenabled = info->acggfxclkspreadenabled; 699 629 param->acggfxclkspreadpercent = info->acggfxclkspreadpercent; 700 630 param->acggfxclkspreadfreq = info->acggfxclkspreadfreq; 631 + 632 + param->Vr2_I2C_address = info->Vr2_I2C_address; 701 633 702 634 return 0; 703 635 }
+5
drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h
··· 136 136 uint32_t ulUClk; 137 137 uint32_t ulSocClk; 138 138 uint32_t ulDCEFClk; 139 + uint32_t ulEClk; 140 + uint32_t ulVClk; 141 + uint32_t ulDClk; 139 142 uint16_t usVddc; 140 143 uint16_t usVddci; 141 144 uint16_t usMvddc; ··· 210 207 uint8_t acggfxclkspreadenabled; 211 208 uint8_t acggfxclkspreadpercent; 212 209 uint16_t acggfxclkspreadfreq; 210 + 211 + uint8_t Vr2_I2C_address; 213 212 }; 214 213 215 214 int pp_atomfwctrl_get_gpu_pll_dividers_vega10(struct pp_hwmgr *hwmgr,
+4
drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c
··· 81 81 82 82 data->registry_data.disallowed_features = 0x0; 83 83 data->registry_data.od_state_in_dc_support = 0; 84 + data->registry_data.thermal_support = 1; 84 85 data->registry_data.skip_baco_hardware = 0; 85 86 86 87 data->registry_data.log_avfs_param = 0; ··· 804 803 data->vbios_boot_state.soc_clock = boot_up_values.ulSocClk; 805 804 data->vbios_boot_state.dcef_clock = boot_up_values.ulDCEFClk; 806 805 data->vbios_boot_state.uc_cooling_id = boot_up_values.ucCoolingID; 806 + data->vbios_boot_state.eclock = boot_up_values.ulEClk; 807 + data->vbios_boot_state.dclock = boot_up_values.ulDClk; 808 + data->vbios_boot_state.vclock = boot_up_values.ulVClk; 807 809 smum_send_msg_to_smc_with_parameter(hwmgr, 808 810 PPSMC_MSG_SetMinDeepSleepDcefclk, 809 811 (uint32_t)(data->vbios_boot_state.dcef_clock / 100));
+3
drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h
··· 167 167 uint32_t mem_clock; 168 168 uint32_t soc_clock; 169 169 uint32_t dcef_clock; 170 + uint32_t eclock; 171 + uint32_t dclock; 172 + uint32_t vclock; 170 173 }; 171 174 172 175 #define DPMTABLE_OD_UPDATE_SCLK 0x00000001
+2
drivers/gpu/drm/amd/powerplay/hwmgr/vega12_processpptables.c
··· 230 230 ppsmc_pptable->AcgThresholdFreqLow = 0xFFFF; 231 231 } 232 232 233 + ppsmc_pptable->Vr2_I2C_address = smc_dpm_table.Vr2_I2C_address; 234 + 233 235 return 0; 234 236 } 235 237
+4 -1
drivers/gpu/drm/amd/powerplay/inc/vega12/smu9_driver_if.h
··· 499 499 uint8_t AcgGfxclkSpreadPercent; 500 500 uint16_t AcgGfxclkSpreadFreq; 501 501 502 - uint32_t BoardReserved[10]; 502 + uint8_t Vr2_I2C_address; 503 + uint8_t padding_vr2[3]; 504 + 505 + uint32_t BoardReserved[9]; 503 506 504 507 505 508 uint32_t MmHubPadding[7];
+55 -31
drivers/gpu/drm/bridge/sil-sii8620.c
··· 14 14 #include <drm/bridge/mhl.h> 15 15 #include <drm/drm_crtc.h> 16 16 #include <drm/drm_edid.h> 17 + #include <drm/drm_encoder.h> 17 18 18 19 #include <linux/clk.h> 19 20 #include <linux/delay.h> ··· 73 72 struct regulator_bulk_data supplies[2]; 74 73 struct mutex lock; /* context lock, protects fields below */ 75 74 int error; 76 - int pixel_clock; 77 75 unsigned int use_packed_pixel:1; 78 - int video_code; 79 76 enum sii8620_mode mode; 80 77 enum sii8620_sink_type sink_type; 81 78 u8 cbus_status; ··· 81 82 u8 xstat[MHL_XDS_SIZE]; 82 83 u8 devcap[MHL_DCAP_SIZE]; 83 84 u8 xdevcap[MHL_XDC_SIZE]; 84 - u8 avif[HDMI_INFOFRAME_SIZE(AVI)]; 85 85 bool feature_complete; 86 86 bool devcap_read; 87 87 bool sink_detected; ··· 1015 1017 1016 1018 static void sii8620_set_format(struct sii8620 *ctx) 1017 1019 { 1020 + u8 out_fmt; 1021 + 1018 1022 if (sii8620_is_mhl3(ctx)) { 1019 1023 sii8620_setbits(ctx, REG_M3_P0CTRL, 1020 1024 BIT_M3_P0CTRL_MHL3_P0_PIXEL_MODE_PACKED, 1021 1025 ctx->use_packed_pixel ? 
~0 : 0); 1022 1026 } else { 1027 + if (ctx->use_packed_pixel) { 1028 + sii8620_write_seq_static(ctx, 1029 + REG_VID_MODE, BIT_VID_MODE_M1080P, 1030 + REG_MHL_TOP_CTL, BIT_MHL_TOP_CTL_MHL_PP_SEL | 1, 1031 + REG_MHLTX_CTL6, 0x60 1032 + ); 1033 + } else { 1023 1034 sii8620_write_seq_static(ctx, 1024 1035 REG_VID_MODE, 0, 1025 1036 REG_MHL_TOP_CTL, 1, 1026 1037 REG_MHLTX_CTL6, 0xa0 1027 1038 ); 1039 + } 1028 1040 } 1041 + 1042 + if (ctx->use_packed_pixel) 1043 + out_fmt = VAL_TPI_FORMAT(YCBCR422, FULL); 1044 + else 1045 + out_fmt = VAL_TPI_FORMAT(RGB, FULL); 1029 1046 1030 1047 sii8620_write_seq(ctx, 1031 1048 REG_TPI_INPUT, VAL_TPI_FORMAT(RGB, FULL), 1032 - REG_TPI_OUTPUT, VAL_TPI_FORMAT(RGB, FULL), 1049 + REG_TPI_OUTPUT, out_fmt, 1033 1050 ); 1034 1051 } 1035 1052 ··· 1095 1082 return frm_len; 1096 1083 } 1097 1084 1098 - static void sii8620_set_infoframes(struct sii8620 *ctx) 1085 + static void sii8620_set_infoframes(struct sii8620 *ctx, 1086 + struct drm_display_mode *mode) 1099 1087 { 1100 1088 struct mhl3_infoframe mhl_frm; 1101 1089 union hdmi_infoframe frm; 1102 1090 u8 buf[31]; 1103 1091 int ret; 1104 1092 1093 + ret = drm_hdmi_avi_infoframe_from_display_mode(&frm.avi, 1094 + mode, 1095 + true); 1096 + if (ctx->use_packed_pixel) 1097 + frm.avi.colorspace = HDMI_COLORSPACE_YUV422; 1098 + 1099 + if (!ret) 1100 + ret = hdmi_avi_infoframe_pack(&frm.avi, buf, ARRAY_SIZE(buf)); 1101 + if (ret > 0) 1102 + sii8620_write_buf(ctx, REG_TPI_AVI_CHSUM, buf + 3, ret - 3); 1103 + 1105 1104 if (!sii8620_is_mhl3(ctx) || !ctx->use_packed_pixel) { 1106 1105 sii8620_write(ctx, REG_TPI_SC, 1107 1106 BIT_TPI_SC_TPI_OUTPUT_MODE_0_HDMI); 1108 - sii8620_write_buf(ctx, REG_TPI_AVI_CHSUM, ctx->avif + 3, 1109 - ARRAY_SIZE(ctx->avif) - 3); 1110 1107 sii8620_write(ctx, REG_PKT_FILTER_0, 1111 1108 BIT_PKT_FILTER_0_DROP_CEA_GAMUT_PKT | 1112 1109 BIT_PKT_FILTER_0_DROP_MPEG_PKT | ··· 1125 1102 return; 1126 1103 } 1127 1104 1128 - ret = hdmi_avi_infoframe_init(&frm.avi); 1129 - 
frm.avi.colorspace = HDMI_COLORSPACE_YUV422; 1130 - frm.avi.active_aspect = HDMI_ACTIVE_ASPECT_PICTURE; 1131 - frm.avi.picture_aspect = HDMI_PICTURE_ASPECT_16_9; 1132 - frm.avi.colorimetry = HDMI_COLORIMETRY_ITU_709; 1133 - frm.avi.video_code = ctx->video_code; 1134 - if (!ret) 1135 - ret = hdmi_avi_infoframe_pack(&frm.avi, buf, ARRAY_SIZE(buf)); 1136 - if (ret > 0) 1137 - sii8620_write_buf(ctx, REG_TPI_AVI_CHSUM, buf + 3, ret - 3); 1138 1105 sii8620_write(ctx, REG_PKT_FILTER_0, 1139 1106 BIT_PKT_FILTER_0_DROP_CEA_GAMUT_PKT | 1140 1107 BIT_PKT_FILTER_0_DROP_MPEG_PKT | ··· 1144 1131 1145 1132 static void sii8620_start_video(struct sii8620 *ctx) 1146 1133 { 1134 + struct drm_display_mode *mode = 1135 + &ctx->bridge.encoder->crtc->state->adjusted_mode; 1136 + 1147 1137 if (!sii8620_is_mhl3(ctx)) 1148 1138 sii8620_stop_video(ctx); 1149 1139 ··· 1165 1149 sii8620_set_format(ctx); 1166 1150 1167 1151 if (!sii8620_is_mhl3(ctx)) { 1168 - sii8620_mt_write_stat(ctx, MHL_DST_REG(LINK_MODE), 1169 - MHL_DST_LM_CLK_MODE_NORMAL | MHL_DST_LM_PATH_ENABLED); 1152 + u8 link_mode = MHL_DST_LM_PATH_ENABLED; 1153 + 1154 + if (ctx->use_packed_pixel) 1155 + link_mode |= MHL_DST_LM_CLK_MODE_PACKED_PIXEL; 1156 + else 1157 + link_mode |= MHL_DST_LM_CLK_MODE_NORMAL; 1158 + 1159 + sii8620_mt_write_stat(ctx, MHL_DST_REG(LINK_MODE), link_mode); 1170 1160 sii8620_set_auto_zone(ctx); 1171 1161 } else { 1172 1162 static const struct { ··· 1189 1167 MHL_XDS_LINK_RATE_6_0_GBPS, 0x40 }, 1190 1168 }; 1191 1169 u8 p0_ctrl = BIT_M3_P0CTRL_MHL3_P0_PORT_EN; 1192 - int clk = ctx->pixel_clock * (ctx->use_packed_pixel ? 2 : 3); 1170 + int clk = mode->clock * (ctx->use_packed_pixel ? 
2 : 3); 1193 1171 int i; 1194 1172 1195 1173 for (i = 0; i < ARRAY_SIZE(clk_spec) - 1; ++i) ··· 1218 1196 clk_spec[i].link_rate); 1219 1197 } 1220 1198 1221 - sii8620_set_infoframes(ctx); 1199 + sii8620_set_infoframes(ctx, mode); 1222 1200 } 1223 1201 1224 1202 static void sii8620_disable_hpd(struct sii8620 *ctx) ··· 1683 1661 1684 1662 static void sii8620_status_changed_path(struct sii8620 *ctx) 1685 1663 { 1686 - if (ctx->stat[MHL_DST_LINK_MODE] & MHL_DST_LM_PATH_ENABLED) { 1687 - sii8620_mt_write_stat(ctx, MHL_DST_REG(LINK_MODE), 1688 - MHL_DST_LM_CLK_MODE_NORMAL 1689 - | MHL_DST_LM_PATH_ENABLED); 1690 - } else { 1691 - sii8620_mt_write_stat(ctx, MHL_DST_REG(LINK_MODE), 1692 - MHL_DST_LM_CLK_MODE_NORMAL); 1693 - } 1664 + u8 link_mode; 1665 + 1666 + if (ctx->use_packed_pixel) 1667 + link_mode = MHL_DST_LM_CLK_MODE_PACKED_PIXEL; 1668 + else 1669 + link_mode = MHL_DST_LM_CLK_MODE_NORMAL; 1670 + 1671 + if (ctx->stat[MHL_DST_LINK_MODE] & MHL_DST_LM_PATH_ENABLED) 1672 + link_mode |= MHL_DST_LM_PATH_ENABLED; 1673 + 1674 + sii8620_mt_write_stat(ctx, MHL_DST_REG(LINK_MODE), 1675 + link_mode); 1694 1676 } 1695 1677 1696 1678 static void sii8620_msc_mr_write_stat(struct sii8620 *ctx) ··· 2268 2242 mutex_lock(&ctx->lock); 2269 2243 2270 2244 ctx->use_packed_pixel = sii8620_is_packing_required(ctx, adjusted_mode); 2271 - ctx->video_code = drm_match_cea_mode(adjusted_mode); 2272 - ctx->pixel_clock = adjusted_mode->clock; 2273 2245 2274 2246 mutex_unlock(&ctx->lock); 2275 2247
+3 -3
drivers/gpu/drm/drm_property.c
··· 532 532 533 533 drm_mode_object_unregister(blob->dev, &blob->base); 534 534 535 - kfree(blob); 535 + kvfree(blob); 536 536 } 537 537 538 538 /** ··· 559 559 if (!length || length > ULONG_MAX - sizeof(struct drm_property_blob)) 560 560 return ERR_PTR(-EINVAL); 561 561 562 - blob = kzalloc(sizeof(struct drm_property_blob)+length, GFP_KERNEL); 562 + blob = kvzalloc(sizeof(struct drm_property_blob)+length, GFP_KERNEL); 563 563 if (!blob) 564 564 return ERR_PTR(-ENOMEM); 565 565 ··· 576 576 ret = __drm_mode_object_add(dev, &blob->base, DRM_MODE_OBJECT_BLOB, 577 577 true, drm_property_free_blob); 578 578 if (ret) { 579 - kfree(blob); 579 + kvfree(blob); 580 580 return ERR_PTR(-EINVAL); 581 581 } 582 582
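The drm_property.c hunk above switches the blob allocation to `kvzalloc()`/`kvfree()` but keeps the pre-existing guard that rejects a zero length, or a length so large that adding the struct header would wrap. A minimal userspace sketch of that overflow check (the function name `blob_size_ok` is illustrative, not the DRM API):

```c
#include <stddef.h>
#include <stdint.h>

/* Reject a zero-length payload, or one so large that adding the header
 * size would overflow size_t, before attempting the allocation. */
static int blob_size_ok(size_t length, size_t header)
{
	return length != 0 && length <= SIZE_MAX - header;
}
```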
+20 -4
drivers/gpu/drm/etnaviv/etnaviv_drv.c
··· 631 631 }, 632 632 }; 633 633 634 + static struct platform_device *etnaviv_drm; 635 + 634 636 static int __init etnaviv_init(void) 635 637 { 638 + struct platform_device *pdev; 636 639 int ret; 637 640 struct device_node *np; 638 641 ··· 647 644 648 645 ret = platform_driver_register(&etnaviv_platform_driver); 649 646 if (ret != 0) 650 - platform_driver_unregister(&etnaviv_gpu_driver); 647 + goto unregister_gpu_driver; 651 648 652 649 /* 653 650 * If the DT contains at least one available GPU device, instantiate ··· 656 653 for_each_compatible_node(np, NULL, "vivante,gc") { 657 654 if (!of_device_is_available(np)) 658 655 continue; 659 - 660 - platform_device_register_simple("etnaviv", -1, NULL, 0); 656 + pdev = platform_device_register_simple("etnaviv", -1, 657 + NULL, 0); 658 + if (IS_ERR(pdev)) { 659 + ret = PTR_ERR(pdev); 660 + of_node_put(np); 661 + goto unregister_platform_driver; 662 + } 663 + etnaviv_drm = pdev; 661 664 of_node_put(np); 662 665 break; 663 666 } 664 667 668 + return 0; 669 + 670 + unregister_platform_driver: 671 + platform_driver_unregister(&etnaviv_platform_driver); 672 + unregister_gpu_driver: 673 + platform_driver_unregister(&etnaviv_gpu_driver); 665 674 return ret; 666 675 } 667 676 module_init(etnaviv_init); 668 677 669 678 static void __exit etnaviv_exit(void) 670 679 { 671 - platform_driver_unregister(&etnaviv_gpu_driver); 680 + platform_device_unregister(etnaviv_drm); 672 681 platform_driver_unregister(&etnaviv_platform_driver); 682 + platform_driver_unregister(&etnaviv_gpu_driver); 673 683 } 674 684 module_exit(etnaviv_exit); 675 685
+3
drivers/gpu/drm/etnaviv/etnaviv_gpu.h
··· 131 131 struct work_struct sync_point_work; 132 132 int sync_point_event; 133 133 134 + /* hang detection */ 135 + u32 hangcheck_dma_addr; 136 + 134 137 void __iomem *mmio; 135 138 int irq; 136 139
+24
drivers/gpu/drm/etnaviv/etnaviv_sched.c
··· 10 10 #include "etnaviv_gem.h" 11 11 #include "etnaviv_gpu.h" 12 12 #include "etnaviv_sched.h" 13 + #include "state.xml.h" 13 14 14 15 static int etnaviv_job_hang_limit = 0; 15 16 module_param_named(job_hang_limit, etnaviv_job_hang_limit, int , 0444); ··· 86 85 { 87 86 struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job); 88 87 struct etnaviv_gpu *gpu = submit->gpu; 88 + u32 dma_addr; 89 + int change; 90 + 91 + /* 92 + * If the GPU managed to complete this job's fence, the timeout is 93 + * spurious. Bail out. 94 + */ 95 + if (fence_completed(gpu, submit->out_fence->seqno)) 96 + return; 97 + 98 + /* 99 + * If the GPU is still making forward progress on the front-end (which 100 + * should never loop) we shift out the timeout to give it a chance to 101 + * finish the job. 102 + */ 103 + dma_addr = gpu_read(gpu, VIVS_FE_DMA_ADDRESS); 104 + change = dma_addr - gpu->hangcheck_dma_addr; 105 + if (change < 0 || change > 16) { 106 + gpu->hangcheck_dma_addr = dma_addr; 107 + schedule_delayed_work(&sched_job->work_tdr, 108 + sched_job->sched->timeout); 109 + return; 110 + } 89 111 90 112 /* block scheduler */ 91 113 kthread_park(gpu->sched.thread);
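The timeout handler above only escalates to scheduler recovery once the front-end DMA address stops moving. A stripped-down sketch of that forward-progress test (the name `fe_made_progress` is illustrative; the 16-byte window mirrors the patch's tolerance for fetches within one command-stream chunk):

```c
#include <stdbool.h>
#include <stdint.h>

/* Mirrors the "change < 0 || change > 16" test in the patch: a wrapped
 * (negative) delta or any advance beyond a small fetch window counts as
 * progress, so the timeout is deferred instead of parking the scheduler. */
static bool fe_made_progress(uint32_t last, uint32_t now)
{
	int change = (int)(now - last);

	return change < 0 || change > 16;
}
```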
+3 -3
drivers/gpu/drm/exynos/exynos5433_drm_decon.c
··· 265 265 unsigned long val; 266 266 267 267 val = readl(ctx->addr + DECON_WINCONx(win)); 268 - val &= ~WINCONx_BPPMODE_MASK; 268 + val &= WINCONx_ENWIN_F; 269 269 270 270 switch (fb->format->format) { 271 271 case DRM_FORMAT_XRGB1555: ··· 356 356 writel(val, ctx->addr + DECON_VIDOSDxB(win)); 357 357 } 358 358 359 - val = VIDOSD_Wx_ALPHA_R_F(0x0) | VIDOSD_Wx_ALPHA_G_F(0x0) | 360 - VIDOSD_Wx_ALPHA_B_F(0x0); 359 + val = VIDOSD_Wx_ALPHA_R_F(0xff) | VIDOSD_Wx_ALPHA_G_F(0xff) | 360 + VIDOSD_Wx_ALPHA_B_F(0xff); 361 361 writel(val, ctx->addr + DECON_VIDOSDxC(win)); 362 362 363 363 val = VIDOSD_Wx_ALPHA_R_F(0x0) | VIDOSD_Wx_ALPHA_G_F(0x0) |
+2 -2
drivers/gpu/drm/exynos/exynos_drm_drv.c
··· 420 420 err_free_private: 421 421 kfree(private); 422 422 err_free_drm: 423 - drm_dev_unref(drm); 423 + drm_dev_put(drm); 424 424 425 425 return ret; 426 426 } ··· 444 444 drm->dev_private = NULL; 445 445 dev_set_drvdata(dev, NULL); 446 446 447 - drm_dev_unref(drm); 447 + drm_dev_put(drm); 448 448 } 449 449 450 450 static const struct component_master_ops exynos_drm_ops = {
+1 -1
drivers/gpu/drm/exynos/exynos_drm_fb.c
··· 138 138 139 139 err: 140 140 while (i--) 141 - drm_gem_object_unreference_unlocked(&exynos_gem[i]->base); 141 + drm_gem_object_put_unlocked(&exynos_gem[i]->base); 142 142 143 143 return ERR_PTR(ret); 144 144 }
+10 -7
drivers/gpu/drm/exynos/exynos_drm_fimc.c
··· 470 470 static void fimc_set_window(struct fimc_context *ctx, 471 471 struct exynos_drm_ipp_buffer *buf) 472 472 { 473 + unsigned int real_width = buf->buf.pitch[0] / buf->format->cpp[0]; 473 474 u32 cfg, h1, h2, v1, v2; 474 475 475 476 /* cropped image */ 476 477 h1 = buf->rect.x; 477 - h2 = buf->buf.width - buf->rect.w - buf->rect.x; 478 + h2 = real_width - buf->rect.w - buf->rect.x; 478 479 v1 = buf->rect.y; 479 480 v2 = buf->buf.height - buf->rect.h - buf->rect.y; 480 481 481 482 DRM_DEBUG_KMS("x[%d]y[%d]w[%d]h[%d]hsize[%d]vsize[%d]\n", 482 483 buf->rect.x, buf->rect.y, buf->rect.w, buf->rect.h, 483 - buf->buf.width, buf->buf.height); 484 + real_width, buf->buf.height); 484 485 DRM_DEBUG_KMS("h1[%d]h2[%d]v1[%d]v2[%d]\n", h1, h2, v1, v2); 485 486 486 487 /* ··· 504 503 static void fimc_src_set_size(struct fimc_context *ctx, 505 504 struct exynos_drm_ipp_buffer *buf) 506 505 { 506 + unsigned int real_width = buf->buf.pitch[0] / buf->format->cpp[0]; 507 507 u32 cfg; 508 508 509 - DRM_DEBUG_KMS("hsize[%d]vsize[%d]\n", buf->buf.width, buf->buf.height); 509 + DRM_DEBUG_KMS("hsize[%d]vsize[%d]\n", real_width, buf->buf.height); 510 510 511 511 /* original size */ 512 - cfg = (EXYNOS_ORGISIZE_HORIZONTAL(buf->buf.width) | 512 + cfg = (EXYNOS_ORGISIZE_HORIZONTAL(real_width) | 513 513 EXYNOS_ORGISIZE_VERTICAL(buf->buf.height)); 514 514 515 515 fimc_write(ctx, cfg, EXYNOS_ORGISIZE); ··· 531 529 * for now, we support only ITU601 8 bit mode 532 530 */ 533 531 cfg = (EXYNOS_CISRCFMT_ITU601_8BIT | 534 - EXYNOS_CISRCFMT_SOURCEHSIZE(buf->buf.width) | 532 + EXYNOS_CISRCFMT_SOURCEHSIZE(real_width) | 535 533 EXYNOS_CISRCFMT_SOURCEVSIZE(buf->buf.height)); 536 534 fimc_write(ctx, cfg, EXYNOS_CISRCFMT); 537 535 ··· 844 842 static void fimc_dst_set_size(struct fimc_context *ctx, 845 843 struct exynos_drm_ipp_buffer *buf) 846 844 { 845 + unsigned int real_width = buf->buf.pitch[0] / buf->format->cpp[0]; 847 846 u32 cfg, cfg_ext; 848 847 849 - DRM_DEBUG_KMS("hsize[%d]vsize[%d]\n", 
buf->buf.width, buf->buf.height); 848 + DRM_DEBUG_KMS("hsize[%d]vsize[%d]\n", real_width, buf->buf.height); 850 849 851 850 /* original size */ 852 - cfg = (EXYNOS_ORGOSIZE_HORIZONTAL(buf->buf.width) | 851 + cfg = (EXYNOS_ORGOSIZE_HORIZONTAL(real_width) | 853 852 EXYNOS_ORGOSIZE_VERTICAL(buf->buf.height)); 854 853 855 854 fimc_write(ctx, cfg, EXYNOS_ORGOSIZE);
+5 -5
drivers/gpu/drm/exynos/exynos_drm_gem.c
··· 143 143 DRM_DEBUG_KMS("gem handle = 0x%x\n", *handle); 144 144 145 145 /* drop reference from allocate - handle holds it now. */ 146 - drm_gem_object_unreference_unlocked(obj); 146 + drm_gem_object_put_unlocked(obj); 147 147 148 148 return 0; 149 149 } ··· 186 186 187 187 exynos_gem = to_exynos_gem(obj); 188 188 189 - drm_gem_object_unreference_unlocked(obj); 189 + drm_gem_object_put_unlocked(obj); 190 190 191 191 return exynos_gem->size; 192 192 } ··· 329 329 return; 330 330 } 331 331 332 - drm_gem_object_unreference_unlocked(obj); 332 + drm_gem_object_put_unlocked(obj); 333 333 334 334 /* 335 335 * decrease obj->refcount one more time because we has already 336 336 * increased it at exynos_drm_gem_get_dma_addr(). 337 337 */ 338 - drm_gem_object_unreference_unlocked(obj); 338 + drm_gem_object_put_unlocked(obj); 339 339 } 340 340 341 341 static int exynos_drm_gem_mmap_buffer(struct exynos_drm_gem *exynos_gem, ··· 383 383 args->flags = exynos_gem->flags; 384 384 args->size = exynos_gem->size; 385 385 386 - drm_gem_object_unreference_unlocked(obj); 386 + drm_gem_object_put_unlocked(obj); 387 387 388 388 return 0; 389 389 }
+31 -20
drivers/gpu/drm/exynos/exynos_drm_gsc.c
··· 492 492 GSC_IN_CHROMA_ORDER_CRCB); 493 493 break; 494 494 case DRM_FORMAT_NV21: 495 + cfg |= (GSC_IN_CHROMA_ORDER_CRCB | GSC_IN_YUV420_2P); 496 + break; 495 497 case DRM_FORMAT_NV61: 496 - cfg |= (GSC_IN_CHROMA_ORDER_CRCB | 497 - GSC_IN_YUV420_2P); 498 + cfg |= (GSC_IN_CHROMA_ORDER_CRCB | GSC_IN_YUV422_2P); 498 499 break; 499 500 case DRM_FORMAT_YUV422: 500 501 cfg |= GSC_IN_YUV422_3P; 501 502 break; 502 503 case DRM_FORMAT_YUV420: 504 + cfg |= (GSC_IN_CHROMA_ORDER_CBCR | GSC_IN_YUV420_3P); 505 + break; 503 506 case DRM_FORMAT_YVU420: 504 - cfg |= GSC_IN_YUV420_3P; 507 + cfg |= (GSC_IN_CHROMA_ORDER_CRCB | GSC_IN_YUV420_3P); 505 508 break; 506 509 case DRM_FORMAT_NV12: 510 + cfg |= (GSC_IN_CHROMA_ORDER_CBCR | GSC_IN_YUV420_2P); 511 + break; 507 512 case DRM_FORMAT_NV16: 508 - cfg |= (GSC_IN_CHROMA_ORDER_CBCR | 509 - GSC_IN_YUV420_2P); 513 + cfg |= (GSC_IN_CHROMA_ORDER_CBCR | GSC_IN_YUV422_2P); 510 514 break; 511 515 } 512 516 ··· 527 523 528 524 switch (degree) { 529 525 case DRM_MODE_ROTATE_0: 530 - if (rotation & DRM_MODE_REFLECT_Y) 531 - cfg |= GSC_IN_ROT_XFLIP; 532 526 if (rotation & DRM_MODE_REFLECT_X) 527 + cfg |= GSC_IN_ROT_XFLIP; 528 + if (rotation & DRM_MODE_REFLECT_Y) 533 529 cfg |= GSC_IN_ROT_YFLIP; 534 530 break; 535 531 case DRM_MODE_ROTATE_90: 536 532 cfg |= GSC_IN_ROT_90; 537 - if (rotation & DRM_MODE_REFLECT_Y) 538 - cfg |= GSC_IN_ROT_XFLIP; 539 533 if (rotation & DRM_MODE_REFLECT_X) 534 + cfg |= GSC_IN_ROT_XFLIP; 535 + if (rotation & DRM_MODE_REFLECT_Y) 540 536 cfg |= GSC_IN_ROT_YFLIP; 541 537 break; 542 538 case DRM_MODE_ROTATE_180: 543 539 cfg |= GSC_IN_ROT_180; 544 - if (rotation & DRM_MODE_REFLECT_Y) 545 - cfg &= ~GSC_IN_ROT_XFLIP; 546 540 if (rotation & DRM_MODE_REFLECT_X) 541 + cfg &= ~GSC_IN_ROT_XFLIP; 542 + if (rotation & DRM_MODE_REFLECT_Y) 547 543 cfg &= ~GSC_IN_ROT_YFLIP; 548 544 break; 549 545 case DRM_MODE_ROTATE_270: 550 546 cfg |= GSC_IN_ROT_270; 551 - if (rotation & DRM_MODE_REFLECT_Y) 552 - cfg &= ~GSC_IN_ROT_XFLIP; 553 547 if 
(rotation & DRM_MODE_REFLECT_X) 548 + cfg &= ~GSC_IN_ROT_XFLIP; 549 + if (rotation & DRM_MODE_REFLECT_Y) 554 550 cfg &= ~GSC_IN_ROT_YFLIP; 555 551 break; 556 552 } ··· 581 577 cfg &= ~(GSC_SRCIMG_HEIGHT_MASK | 582 578 GSC_SRCIMG_WIDTH_MASK); 583 579 584 - cfg |= (GSC_SRCIMG_WIDTH(buf->buf.width) | 580 + cfg |= (GSC_SRCIMG_WIDTH(buf->buf.pitch[0] / buf->format->cpp[0]) | 585 581 GSC_SRCIMG_HEIGHT(buf->buf.height)); 586 582 587 583 gsc_write(cfg, GSC_SRCIMG_SIZE); ··· 676 672 GSC_OUT_CHROMA_ORDER_CRCB); 677 673 break; 678 674 case DRM_FORMAT_NV21: 679 - case DRM_FORMAT_NV61: 680 675 cfg |= (GSC_OUT_CHROMA_ORDER_CRCB | GSC_OUT_YUV420_2P); 681 676 break; 677 + case DRM_FORMAT_NV61: 678 + cfg |= (GSC_OUT_CHROMA_ORDER_CRCB | GSC_OUT_YUV422_2P); 679 + break; 682 680 case DRM_FORMAT_YUV422: 681 + cfg |= GSC_OUT_YUV422_3P; 682 + break; 683 683 case DRM_FORMAT_YUV420: 684 + cfg |= (GSC_OUT_CHROMA_ORDER_CBCR | GSC_OUT_YUV420_3P); 685 + break; 684 686 case DRM_FORMAT_YVU420: 685 - cfg |= GSC_OUT_YUV420_3P; 687 + cfg |= (GSC_OUT_CHROMA_ORDER_CRCB | GSC_OUT_YUV420_3P); 686 688 break; 687 689 case DRM_FORMAT_NV12: 690 + cfg |= (GSC_OUT_CHROMA_ORDER_CBCR | GSC_OUT_YUV420_2P); 691 + break; 688 692 case DRM_FORMAT_NV16: 689 - cfg |= (GSC_OUT_CHROMA_ORDER_CBCR | 690 - GSC_OUT_YUV420_2P); 693 + cfg |= (GSC_OUT_CHROMA_ORDER_CBCR | GSC_OUT_YUV422_2P); 691 694 break; 692 695 } 693 696 ··· 879 868 /* original size */ 880 869 cfg = gsc_read(GSC_DSTIMG_SIZE); 881 870 cfg &= ~(GSC_DSTIMG_HEIGHT_MASK | GSC_DSTIMG_WIDTH_MASK); 882 - cfg |= GSC_DSTIMG_WIDTH(buf->buf.width) | 871 + cfg |= GSC_DSTIMG_WIDTH(buf->buf.pitch[0] / buf->format->cpp[0]) | 883 872 GSC_DSTIMG_HEIGHT(buf->buf.height); 884 873 gsc_write(cfg, GSC_DSTIMG_SIZE); 885 874 ··· 1352 1341 }; 1353 1342 1354 1343 static const struct drm_exynos_ipp_limit gsc_5433_limits[] = { 1355 - { IPP_SIZE_LIMIT(BUFFER, .h = { 32, 8191, 2 }, .v = { 16, 8191, 2 }) }, 1344 + { IPP_SIZE_LIMIT(BUFFER, .h = { 32, 8191, 16 }, .v = { 16, 8191, 2 }) }, 
1356 1345 { IPP_SIZE_LIMIT(AREA, .h = { 16, 4800, 1 }, .v = { 8, 3344, 1 }) }, 1357 1346 { IPP_SIZE_LIMIT(ROTATED, .h = { 32, 2047 }, .v = { 8, 8191 }) }, 1358 1347 { IPP_SCALE_LIMIT(.h = { (1 << 16) / 16, (1 << 16) * 8 },
+58 -52
drivers/gpu/drm/exynos/exynos_drm_ipp.c
··· 345 345 int ret = 0; 346 346 int i; 347 347 348 - /* basic checks */ 349 - if (buf->buf.width == 0 || buf->buf.height == 0) 350 - return -EINVAL; 351 - buf->format = drm_format_info(buf->buf.fourcc); 352 - for (i = 0; i < buf->format->num_planes; i++) { 353 - unsigned int width = (i == 0) ? buf->buf.width : 354 - DIV_ROUND_UP(buf->buf.width, buf->format->hsub); 355 - 356 - if (buf->buf.pitch[i] == 0) 357 - buf->buf.pitch[i] = width * buf->format->cpp[i]; 358 - if (buf->buf.pitch[i] < width * buf->format->cpp[i]) 359 - return -EINVAL; 360 - if (!buf->buf.gem_id[i]) 361 - return -ENOENT; 362 - } 363 - 364 - /* pitch for additional planes must match */ 365 - if (buf->format->num_planes > 2 && 366 - buf->buf.pitch[1] != buf->buf.pitch[2]) 367 - return -EINVAL; 368 - 369 348 /* get GEM buffers and check their size */ 370 349 for (i = 0; i < buf->format->num_planes; i++) { 371 350 unsigned int height = (i == 0) ? buf->buf.height : ··· 407 428 IPP_LIMIT_BUFFER, IPP_LIMIT_AREA, IPP_LIMIT_ROTATED, IPP_LIMIT_MAX 408 429 }; 409 430 410 - static const enum drm_ipp_size_id limit_id_fallback[IPP_LIMIT_MAX][4] = { 431 + static const enum drm_exynos_ipp_limit_type limit_id_fallback[IPP_LIMIT_MAX][4] = { 411 432 [IPP_LIMIT_BUFFER] = { DRM_EXYNOS_IPP_LIMIT_SIZE_BUFFER }, 412 433 [IPP_LIMIT_AREA] = { DRM_EXYNOS_IPP_LIMIT_SIZE_AREA, 413 434 DRM_EXYNOS_IPP_LIMIT_SIZE_BUFFER }, ··· 474 495 enum drm_ipp_size_id id = rotate ? 
IPP_LIMIT_ROTATED : IPP_LIMIT_AREA; 475 496 struct drm_ipp_limit l; 476 497 struct drm_exynos_ipp_limit_val *lh = &l.h, *lv = &l.v; 498 + int real_width = buf->buf.pitch[0] / buf->format->cpp[0]; 477 499 478 500 if (!limits) 479 501 return 0; 480 502 481 503 __get_size_limit(limits, num_limits, IPP_LIMIT_BUFFER, &l); 482 - if (!__size_limit_check(buf->buf.width, &l.h) || 504 + if (!__size_limit_check(real_width, &l.h) || 483 505 !__size_limit_check(buf->buf.height, &l.v)) 484 506 return -EINVAL; 485 507 ··· 540 560 return 0; 541 561 } 542 562 563 + static int exynos_drm_ipp_check_format(struct exynos_drm_ipp_task *task, 564 + struct exynos_drm_ipp_buffer *buf, 565 + struct exynos_drm_ipp_buffer *src, 566 + struct exynos_drm_ipp_buffer *dst, 567 + bool rotate, bool swap) 568 + { 569 + const struct exynos_drm_ipp_formats *fmt; 570 + int ret, i; 571 + 572 + fmt = __ipp_format_get(task->ipp, buf->buf.fourcc, buf->buf.modifier, 573 + buf == src ? DRM_EXYNOS_IPP_FORMAT_SOURCE : 574 + DRM_EXYNOS_IPP_FORMAT_DESTINATION); 575 + if (!fmt) { 576 + DRM_DEBUG_DRIVER("Task %pK: %s format not supported\n", task, 577 + buf == src ? "src" : "dst"); 578 + return -EINVAL; 579 + } 580 + 581 + /* basic checks */ 582 + if (buf->buf.width == 0 || buf->buf.height == 0) 583 + return -EINVAL; 584 + 585 + buf->format = drm_format_info(buf->buf.fourcc); 586 + for (i = 0; i < buf->format->num_planes; i++) { 587 + unsigned int width = (i == 0) ? 
buf->buf.width : 588 + DIV_ROUND_UP(buf->buf.width, buf->format->hsub); 589 + 590 + if (buf->buf.pitch[i] == 0) 591 + buf->buf.pitch[i] = width * buf->format->cpp[i]; 592 + if (buf->buf.pitch[i] < width * buf->format->cpp[i]) 593 + return -EINVAL; 594 + if (!buf->buf.gem_id[i]) 595 + return -ENOENT; 596 + } 597 + 598 + /* pitch for additional planes must match */ 599 + if (buf->format->num_planes > 2 && 600 + buf->buf.pitch[1] != buf->buf.pitch[2]) 601 + return -EINVAL; 602 + 603 + /* check driver limits */ 604 + ret = exynos_drm_ipp_check_size_limits(buf, fmt->limits, 605 + fmt->num_limits, 606 + rotate, 607 + buf == dst ? swap : false); 608 + if (ret) 609 + return ret; 610 + ret = exynos_drm_ipp_check_scale_limits(&src->rect, &dst->rect, 611 + fmt->limits, 612 + fmt->num_limits, swap); 613 + return ret; 614 + } 615 + 543 616 static int exynos_drm_ipp_task_check(struct exynos_drm_ipp_task *task) 544 617 { 545 618 struct exynos_drm_ipp *ipp = task->ipp; 546 - const struct exynos_drm_ipp_formats *src_fmt, *dst_fmt; 547 619 struct exynos_drm_ipp_buffer *src = &task->src, *dst = &task->dst; 548 620 unsigned int rotation = task->transform.rotation; 549 621 int ret = 0; ··· 639 607 return -EINVAL; 640 608 } 641 609 642 - src_fmt = __ipp_format_get(ipp, src->buf.fourcc, src->buf.modifier, 643 - DRM_EXYNOS_IPP_FORMAT_SOURCE); 644 - if (!src_fmt) { 645 - DRM_DEBUG_DRIVER("Task %pK: src format not supported\n", task); 646 - return -EINVAL; 647 - } 648 - ret = exynos_drm_ipp_check_size_limits(src, src_fmt->limits, 649 - src_fmt->num_limits, 650 - rotate, false); 651 - if (ret) 652 - return ret; 653 - ret = exynos_drm_ipp_check_scale_limits(&src->rect, &dst->rect, 654 - src_fmt->limits, 655 - src_fmt->num_limits, swap); 610 + ret = exynos_drm_ipp_check_format(task, src, src, dst, rotate, swap); 656 611 if (ret) 657 612 return ret; 658 613 659 - dst_fmt = __ipp_format_get(ipp, dst->buf.fourcc, dst->buf.modifier, 660 - DRM_EXYNOS_IPP_FORMAT_DESTINATION); 661 - if (!dst_fmt) { 
662 - DRM_DEBUG_DRIVER("Task %pK: dst format not supported\n", task); 663 - return -EINVAL; 664 - } 665 - ret = exynos_drm_ipp_check_size_limits(dst, dst_fmt->limits, 666 - dst_fmt->num_limits, 667 - false, swap); 668 - if (ret) 669 - return ret; 670 - ret = exynos_drm_ipp_check_scale_limits(&src->rect, &dst->rect, 671 - dst_fmt->limits, 672 - dst_fmt->num_limits, swap); 614 + ret = exynos_drm_ipp_check_format(task, dst, src, dst, false, swap); 673 615 if (ret) 674 616 return ret; 675 617
+1 -1
drivers/gpu/drm/exynos/exynos_drm_plane.c
··· 132 132 if (plane->state) { 133 133 exynos_state = to_exynos_plane_state(plane->state); 134 134 if (exynos_state->base.fb) 135 - drm_framebuffer_unreference(exynos_state->base.fb); 135 + drm_framebuffer_put(exynos_state->base.fb); 136 136 kfree(exynos_state); 137 137 plane->state = NULL; 138 138 }
+2 -2
drivers/gpu/drm/exynos/exynos_drm_rotator.c
··· 168 168 val &= ~ROT_CONTROL_FLIP_MASK; 169 169 170 170 if (rotation & DRM_MODE_REFLECT_X) 171 - val |= ROT_CONTROL_FLIP_HORIZONTAL; 172 - if (rotation & DRM_MODE_REFLECT_Y) 173 171 val |= ROT_CONTROL_FLIP_VERTICAL; 172 + if (rotation & DRM_MODE_REFLECT_Y) 173 + val |= ROT_CONTROL_FLIP_HORIZONTAL; 174 174 175 175 val &= ~ROT_CONTROL_ROT_MASK; 176 176
+35 -9
drivers/gpu/drm/exynos/exynos_drm_scaler.c
··· 30 30 #define scaler_write(cfg, offset) writel(cfg, scaler->regs + (offset)) 31 31 #define SCALER_MAX_CLK 4 32 32 #define SCALER_AUTOSUSPEND_DELAY 2000 33 + #define SCALER_RESET_WAIT_RETRIES 100 33 34 34 35 struct scaler_data { 35 36 const char *clk_name[SCALER_MAX_CLK]; ··· 52 51 static u32 scaler_get_format(u32 drm_fmt) 53 52 { 54 53 switch (drm_fmt) { 55 - case DRM_FORMAT_NV21: 56 - return SCALER_YUV420_2P_UV; 57 54 case DRM_FORMAT_NV12: 55 + return SCALER_YUV420_2P_UV; 56 + case DRM_FORMAT_NV21: 58 57 return SCALER_YUV420_2P_VU; 59 58 case DRM_FORMAT_YUV420: 60 59 return SCALER_YUV420_3P; ··· 64 63 return SCALER_YUV422_1P_UYVY; 65 64 case DRM_FORMAT_YVYU: 66 65 return SCALER_YUV422_1P_YVYU; 67 - case DRM_FORMAT_NV61: 68 - return SCALER_YUV422_2P_UV; 69 66 case DRM_FORMAT_NV16: 67 + return SCALER_YUV422_2P_UV; 68 + case DRM_FORMAT_NV61: 70 69 return SCALER_YUV422_2P_VU; 71 70 case DRM_FORMAT_YUV422: 72 71 return SCALER_YUV422_3P; 73 - case DRM_FORMAT_NV42: 74 - return SCALER_YUV444_2P_UV; 75 72 case DRM_FORMAT_NV24: 73 + return SCALER_YUV444_2P_UV; 74 + case DRM_FORMAT_NV42: 76 75 return SCALER_YUV444_2P_VU; 77 76 case DRM_FORMAT_YUV444: 78 77 return SCALER_YUV444_3P; ··· 99 98 } 100 99 101 100 return 0; 101 + } 102 + 103 + static inline int scaler_reset(struct scaler_context *scaler) 104 + { 105 + int retry = SCALER_RESET_WAIT_RETRIES; 106 + 107 + scaler_write(SCALER_CFG_SOFT_RESET, SCALER_CFG); 108 + do { 109 + cpu_relax(); 110 + } while (retry > 1 && 111 + scaler_read(SCALER_CFG) & SCALER_CFG_SOFT_RESET); 112 + do { 113 + cpu_relax(); 114 + scaler_write(1, SCALER_INT_EN); 115 + } while (retry > 0 && scaler_read(SCALER_INT_EN) != 1); 116 + 117 + return retry ? 
0 : -EIO; 102 118 } 103 119 104 120 static inline void scaler_enable_int(struct scaler_context *scaler) ··· 372 354 u32 dst_fmt = scaler_get_format(task->dst.buf.fourcc); 373 355 struct drm_exynos_ipp_task_rect *dst_pos = &task->dst.rect; 374 356 375 - scaler->task = task; 376 - 377 357 pm_runtime_get_sync(scaler->dev); 358 + if (scaler_reset(scaler)) { 359 + pm_runtime_put(scaler->dev); 360 + return -EIO; 361 + } 362 + 363 + scaler->task = task; 378 364 379 365 scaler_set_src_fmt(scaler, src_fmt); 380 366 scaler_set_src_base(scaler, &task->src); ··· 416 394 417 395 static inline u32 scaler_get_int_status(struct scaler_context *scaler) 418 396 { 419 - return scaler_read(SCALER_INT_STATUS); 397 + u32 val = scaler_read(SCALER_INT_STATUS); 398 + 399 + scaler_write(val, SCALER_INT_STATUS); 400 + 401 + return val; 420 402 } 421 403 422 404 static inline int scaler_task_done(u32 val)
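The new `scaler_reset()` above bounds its busy-wait with a retry counter instead of spinning indefinitely. A reduced model of that pattern (names and the hard-coded -5 for -EIO are illustrative; the hardware poll is modelled as a device that stays busy for a fixed number of reads):

```c
/* Poll a busy condition at most 'retries' times; the device deasserts
 * busy after 'busy_for' polls. Returns 0 on success, -5 (-EIO) when the
 * device never comes back within the retry budget. */
static int poll_reset_done(int busy_for, int retries)
{
	while (retries-- > 0) {
		if (busy_for-- <= 0)
			return 0;	/* reset completed */
	}
	return -5;			/* device did not come back */
}
```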
+1
drivers/gpu/drm/exynos/regs-gsc.h
··· 138 138 #define GSC_OUT_YUV420_3P (3 << 4) 139 139 #define GSC_OUT_YUV422_1P (4 << 4) 140 140 #define GSC_OUT_YUV422_2P (5 << 4) 141 + #define GSC_OUT_YUV422_3P (6 << 4) 141 142 #define GSC_OUT_YUV444 (7 << 4) 142 143 #define GSC_OUT_TILE_TYPE_MASK (1 << 2) 143 144 #define GSC_OUT_TILE_C_16x8 (0 << 2)
+3 -3
drivers/gpu/drm/i915/gvt/display.c
··· 196 196 ~(TRANS_DDI_BPC_MASK | TRANS_DDI_MODE_SELECT_MASK | 197 197 TRANS_DDI_PORT_MASK); 198 198 vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) |= 199 - (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST | 199 + (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DVI | 200 200 (PORT_B << TRANS_DDI_PORT_SHIFT) | 201 201 TRANS_DDI_FUNC_ENABLE); 202 202 if (IS_BROADWELL(dev_priv)) { ··· 216 216 ~(TRANS_DDI_BPC_MASK | TRANS_DDI_MODE_SELECT_MASK | 217 217 TRANS_DDI_PORT_MASK); 218 218 vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) |= 219 - (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST | 219 + (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DVI | 220 220 (PORT_C << TRANS_DDI_PORT_SHIFT) | 221 221 TRANS_DDI_FUNC_ENABLE); 222 222 if (IS_BROADWELL(dev_priv)) { ··· 236 236 ~(TRANS_DDI_BPC_MASK | TRANS_DDI_MODE_SELECT_MASK | 237 237 TRANS_DDI_PORT_MASK); 238 238 vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) |= 239 - (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DP_SST | 239 + (TRANS_DDI_BPC_8 | TRANS_DDI_MODE_SELECT_DVI | 240 240 (PORT_D << TRANS_DDI_PORT_SHIFT) | 241 241 TRANS_DDI_FUNC_ENABLE); 242 242 if (IS_BROADWELL(dev_priv)) {
+58
drivers/gpu/drm/i915/gvt/gtt.c
··· 1592 1592 vgpu_free_mm(mm); 1593 1593 return ERR_PTR(-ENOMEM); 1594 1594 } 1595 + mm->ggtt_mm.last_partial_off = -1UL; 1595 1596 1596 1597 return mm; 1597 1598 } ··· 1617 1616 invalidate_ppgtt_mm(mm); 1618 1617 } else { 1619 1618 vfree(mm->ggtt_mm.virtual_ggtt); 1619 + mm->ggtt_mm.last_partial_off = -1UL; 1620 1620 } 1621 1621 1622 1622 vgpu_free_mm(mm); ··· 1869 1867 1870 1868 memcpy((void *)&e.val64 + (off & (info->gtt_entry_size - 1)), p_data, 1871 1869 bytes); 1870 + 1871 + /* If ggtt entry size is 8 bytes, and it's split into two 4 bytes 1872 + * write, we assume the two 4 bytes writes are consecutive. 1873 + * Otherwise, we abort and report error 1874 + */ 1875 + if (bytes < info->gtt_entry_size) { 1876 + if (ggtt_mm->ggtt_mm.last_partial_off == -1UL) { 1877 + /* the first partial part*/ 1878 + ggtt_mm->ggtt_mm.last_partial_off = off; 1879 + ggtt_mm->ggtt_mm.last_partial_data = e.val64; 1880 + return 0; 1881 + } else if ((g_gtt_index == 1882 + (ggtt_mm->ggtt_mm.last_partial_off >> 1883 + info->gtt_entry_size_shift)) && 1884 + (off != ggtt_mm->ggtt_mm.last_partial_off)) { 1885 + /* the second partial part */ 1886 + 1887 + int last_off = ggtt_mm->ggtt_mm.last_partial_off & 1888 + (info->gtt_entry_size - 1); 1889 + 1890 + memcpy((void *)&e.val64 + last_off, 1891 + (void *)&ggtt_mm->ggtt_mm.last_partial_data + 1892 + last_off, bytes); 1893 + 1894 + ggtt_mm->ggtt_mm.last_partial_off = -1UL; 1895 + } else { 1896 + int last_offset; 1897 + 1898 + gvt_vgpu_err("failed to populate guest ggtt entry: abnormal ggtt entry write sequence, last_partial_off=%lx, offset=%x, bytes=%d, ggtt entry size=%d\n", 1899 + ggtt_mm->ggtt_mm.last_partial_off, off, 1900 + bytes, info->gtt_entry_size); 1901 + 1902 + /* set host ggtt entry to scratch page and clear 1903 + * virtual ggtt entry as not present for last 1904 + * partially write offset 1905 + */ 1906 + last_offset = ggtt_mm->ggtt_mm.last_partial_off & 1907 + (~(info->gtt_entry_size - 1)); 1908 + 1909 + ggtt_get_host_entry(ggtt_mm, &m, last_offset); 1910 + ggtt_invalidate_pte(vgpu, &m); 1911 + ops->set_pfn(&m, gvt->gtt.scratch_mfn); 1912 + ops->clear_present(&m); 1913 + ggtt_set_host_entry(ggtt_mm, &m, last_offset); 1914 + ggtt_invalidate(gvt->dev_priv); 1915 + 1916 + ggtt_get_guest_entry(ggtt_mm, &e, last_offset); 1917 + ops->clear_present(&e); 1918 + ggtt_set_guest_entry(ggtt_mm, &e, last_offset); 1919 + 1920 + ggtt_mm->ggtt_mm.last_partial_off = off; 1921 + ggtt_mm->ggtt_mm.last_partial_data = e.val64; 1922 + 1923 + return 0; 1924 + } 1925 + } 1872 1926 1873 1927 if (ops->test_present(&e)) { 1874 1928 gfn = ops->get_pfn(&e);
+2
drivers/gpu/drm/i915/gvt/gtt.h
··· 150 150 } ppgtt_mm; 151 151 struct { 152 152 void *virtual_ggtt; 153 + unsigned long last_partial_off; 154 + u64 last_partial_data; 153 155 } ggtt_mm; 154 156 }; 155 157 };
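The gtt.c/gtt.h changes above park a half-written 8-byte GGTT entry in `last_partial_off`/`last_partial_data` so two consecutive 4-byte guest writes can be stitched back into one entry, with `-1UL` as the "no partial pending" sentinel. A minimal Python model of that coalescing (entry size, sentinel, and class names are assumptions for illustration, not the kernel API):

```python
ENTRY_SIZE = 8     # bytes per GGTT entry (assumption for the model)
INVALID_OFF = -1   # stands in for the kernel's -1UL sentinel

class GgttModel:
    def __init__(self):
        self.last_partial_off = INVALID_OFF
        self.last_partial_data = 0
        self.entries = {}  # entry index -> 64-bit value

    def write(self, off, data, nbytes):
        """Apply an nbytes-wide write at byte offset off; return True
        once a full 8-byte entry has been committed."""
        index = off // ENTRY_SIZE
        val = self.entries.get(index, 0)
        # splice the incoming bytes into the 64-bit value
        shift = (off % ENTRY_SIZE) * 8
        mask = ((1 << (nbytes * 8)) - 1) << shift
        val = (val & ~mask) | ((data << shift) & mask)

        if nbytes < ENTRY_SIZE:
            if self.last_partial_off == INVALID_OFF:
                # first half: park it and wait for the partner write
                self.last_partial_off = off
                self.last_partial_data = val
                return False
            if (self.last_partial_off // ENTRY_SIZE == index
                    and off != self.last_partial_off):
                # second half: merge the parked bytes back in
                last_shift = (self.last_partial_off % ENTRY_SIZE) * 8
                last_mask = ((1 << (nbytes * 8)) - 1) << last_shift
                val = (val & ~last_mask) | (self.last_partial_data & last_mask)
                self.last_partial_off = INVALID_OFF
            else:
                # abnormal sequence: drop the stale half, restart here
                self.last_partial_off = off
                self.last_partial_data = val
                return False
        self.entries[index] = val
        return True
```

The kernel path additionally points the host entry at a scratch page on the abnormal sequence; the model only keeps the bookkeeping shape.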
+17 -11
drivers/gpu/drm/i915/i915_gem.c
··· 2002 2002 bool write = !!(vmf->flags & FAULT_FLAG_WRITE); 2003 2003 struct i915_vma *vma; 2004 2004 pgoff_t page_offset; 2005 - unsigned int flags; 2006 2005 int ret; 2007 2006 2008 2007 /* We don't use vmf->pgoff since that has the fake offset */ ··· 2037 2038 goto err_unlock; 2038 2039 } 2039 2040 2040 - /* If the object is smaller than a couple of partial vma, it is 2041 - * not worth only creating a single partial vma - we may as well 2042 - * clear enough space for the full object. 2043 - */ 2044 - flags = PIN_MAPPABLE; 2045 - if (obj->base.size > 2 * MIN_CHUNK_PAGES << PAGE_SHIFT) 2046 - flags |= PIN_NONBLOCK | PIN_NONFAULT; 2047 2041 2048 2042 /* Now pin it into the GTT as needed */ 2049 - vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, flags); 2043 + vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 2044 + PIN_MAPPABLE | 2045 + PIN_NONBLOCK | 2046 + PIN_NONFAULT); 2050 2047 if (IS_ERR(vma)) { 2051 2048 /* Use a partial view if it is bigger than available space */ 2052 2049 struct i915_ggtt_view view = 2053 2050 compute_partial_view(obj, page_offset, MIN_CHUNK_PAGES); 2051 + unsigned int flags; 2054 2052 2055 - /* Userspace is now writing through an untracked VMA, abandon 2053 + flags = PIN_MAPPABLE; 2054 + if (view.type == I915_GGTT_VIEW_NORMAL) 2055 + flags |= PIN_NONBLOCK; /* avoid warnings for pinned */ 2056 + 2057 + /* 2058 + * Userspace is now writing through an untracked VMA, abandon 2056 2059 * all hope that the hardware is able to track future writes. 2057 2060 */ 2058 2061 obj->frontbuffer_ggtt_origin = ORIGIN_CPU; 2059 2062 2060 - vma = i915_gem_object_ggtt_pin(obj, &view, 0, 0, PIN_MAPPABLE); 2063 + vma = i915_gem_object_ggtt_pin(obj, &view, 0, 0, flags); 2064 + if (IS_ERR(vma) && !view.type) { 2065 + flags = PIN_MAPPABLE; 2066 + view.type = I915_GGTT_VIEW_PARTIAL; 2067 + vma = i915_gem_object_ggtt_pin(obj, &view, 0, 0, flags); 2068 + } 2061 2069 } 2062 2070 if (IS_ERR(vma)) { 2063 2071 ret = PTR_ERR(vma);
+1 -1
drivers/gpu/drm/i915/i915_vma.c
··· 109 109 obj->base.size >> PAGE_SHIFT)); 110 110 vma->size = view->partial.size; 111 111 vma->size <<= PAGE_SHIFT; 112 - GEM_BUG_ON(vma->size >= obj->base.size); 112 + GEM_BUG_ON(vma->size > obj->base.size); 113 113 } else if (view->type == I915_GGTT_VIEW_ROTATED) { 114 114 vma->size = intel_rotation_info_size(&view->rotated); 115 115 vma->size <<= PAGE_SHIFT;
+4 -1
drivers/gpu/drm/udl/udl_fb.c
··· 137 137 138 138 if (cmd > (char *) urb->transfer_buffer) { 139 139 /* Send partial buffer remaining before exiting */ 140 - int len = cmd - (char *) urb->transfer_buffer; 140 + int len; 141 + if (cmd < (char *) urb->transfer_buffer + urb->transfer_buffer_length) 142 + *cmd++ = 0xAF; 143 + len = cmd - (char *) urb->transfer_buffer; 141 144 ret = udl_submit_urb(dev, urb, len); 142 145 bytes_sent += len; 143 146 } else
+7 -4
drivers/gpu/drm/udl/udl_transfer.c
··· 153 153 raw_pixels_count_byte = cmd++; /* we'll know this later */ 154 154 raw_pixel_start = pixel; 155 155 156 - cmd_pixel_end = pixel + (min(MAX_CMD_PIXELS + 1, 157 - min((int)(pixel_end - pixel) / bpp, 158 - (int)(cmd_buffer_end - cmd) / 2))) * bpp; 156 + cmd_pixel_end = pixel + min3(MAX_CMD_PIXELS + 1UL, 157 + (unsigned long)(pixel_end - pixel) / bpp, 158 + (unsigned long)(cmd_buffer_end - 1 - cmd) / 2) * bpp; 159 159 160 - prefetch_range((void *) pixel, (cmd_pixel_end - pixel) * bpp); 160 + prefetch_range((void *) pixel, cmd_pixel_end - pixel); 161 161 pixel_val16 = get_pixel_val16(pixel, bpp); 162 162 163 163 while (pixel < cmd_pixel_end) { ··· 193 193 if (pixel > raw_pixel_start) { 194 194 /* finalize last RAW span */ 195 195 *raw_pixels_count_byte = ((pixel-raw_pixel_start) / bpp) & 0xFF; 196 + } else { 197 + /* undo unused byte */ 198 + cmd--; 196 199 } 197 200 198 201 *cmd_pixels_count_byte = ((pixel - cmd_pixel_start) / bpp) & 0xFF;
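The udl_transfer.c hunk replaces the nested `min()` pair with `min3()` and keeps one command-buffer byte in reserve. A hedged Python sketch of the span computation (the pixel-limit constant is a placeholder, not taken from the driver):

```python
MAX_CMD_PIXELS = 255  # placeholder; the real limit lives in the driver

def cmd_pixel_span_end(pixel, pixel_end, cmd, cmd_buffer_end, bpp):
    """End offset of one RLE command's pixel span: bounded by the
    per-command pixel limit, the pixels left in the line, and the
    command-buffer bytes left (2 bytes per raw pixel, 1 byte reserved)."""
    return pixel + min(MAX_CMD_PIXELS + 1,
                       (pixel_end - pixel) // bpp,
                       (cmd_buffer_end - 1 - cmd) // 2) * bpp
```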
+4 -1
drivers/hid/hid-core.c
··· 1952 1952 } 1953 1953 hdev->io_started = false; 1954 1954 1955 + clear_bit(ffs(HID_STAT_REPROBED), &hdev->status); 1956 + 1955 1957 if (!hdev->driver) { 1956 1958 id = hid_match_device(hdev, hdrv); 1957 1959 if (id == NULL) { ··· 2217 2215 struct hid_device *hdev = to_hid_device(dev); 2218 2216 2219 2217 if (hdev->driver == hdrv && 2220 - !hdrv->match(hdev, hid_ignore_special_drivers)) 2218 + !hdrv->match(hdev, hid_ignore_special_drivers) && 2219 + !test_and_set_bit(ffs(HID_STAT_REPROBED), &hdev->status)) 2221 2220 return device_reprobe(dev); 2222 2221 2223 2222 return 0;
+7 -1
drivers/hid/hid-debug.c
··· 1154 1154 goto out; 1155 1155 if (list->tail > list->head) { 1156 1156 len = list->tail - list->head; 1157 + if (len > count) 1158 + len = count; 1157 1159 1158 1160 if (copy_to_user(buffer + ret, &list->hid_debug_buf[list->head], len)) { 1159 1161 ret = -EFAULT; ··· 1165 1163 list->head += len; 1166 1164 } else { 1167 1165 len = HID_DEBUG_BUFSIZE - list->head; 1166 + if (len > count) 1167 + len = count; 1168 1168 1169 1169 if (copy_to_user(buffer, &list->hid_debug_buf[list->head], len)) { 1170 1170 ret = -EFAULT; ··· 1174 1170 } 1175 1171 list->head = 0; 1176 1172 ret += len; 1177 - goto copy_rest; 1173 + count -= len; 1174 + if (count > 0) 1175 + goto copy_rest; 1178 1176 } 1179 1177 1180 1178 }
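The hid-debug.c fix clamps each segment of the circular-buffer copy to the caller's `count` and only loops back for the second segment when bytes remain. A small Python model of that clamped two-segment read (names invented for illustration; it returns the data instead of copying to user memory):

```python
def ring_read(buf, head, tail, count):
    """Copy at most count bytes out of a circular buffer.
    Returns (data, new_head), mirroring the clamped two-segment copy."""
    out = bytearray()
    size = len(buf)
    if tail >= head:
        # contiguous region
        n = min(tail - head, count)
        out += buf[head:head + n]
        head += n
    else:
        # wrapped: first the tail-end segment, clamped to count
        n = min(size - head, count)
        out += buf[head:head + n]
        head = (head + n) % size
        count -= n
        if count > 0:
            # then the front segment, again clamped
            n = min(tail, count)
            out += buf[:n]
            head = n
    return bytes(out), head
```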
+1 -1
drivers/hid/i2c-hid/i2c-hid.c
··· 484 484 return; 485 485 } 486 486 487 - if ((ret_size > size) || (ret_size <= 2)) { 487 + if ((ret_size > size) || (ret_size < 2)) { 488 488 dev_err(&ihid->client->dev, "%s: incomplete report (%d/%d)\n", 489 489 __func__, size, ret_size); 490 490 return;
+11
drivers/hid/usbhid/hiddev.c
··· 36 36 #include <linux/hiddev.h> 37 37 #include <linux/compat.h> 38 38 #include <linux/vmalloc.h> 39 + #include <linux/nospec.h> 39 40 #include "usbhid.h" 40 41 41 42 #ifdef CONFIG_USB_DYNAMIC_MINORS ··· 470 469 471 470 if (uref->field_index >= report->maxfield) 472 471 goto inval; 472 + uref->field_index = array_index_nospec(uref->field_index, 473 + report->maxfield); 473 474 474 475 field = report->field[uref->field_index]; 475 476 if (uref->usage_index >= field->maxusage) 476 477 goto inval; 478 + uref->usage_index = array_index_nospec(uref->usage_index, 479 + field->maxusage); 477 480 478 481 uref->usage_code = field->usage[uref->usage_index].hid; 479 482 ··· 504 499 505 500 if (uref->field_index >= report->maxfield) 506 501 goto inval; 502 + uref->field_index = array_index_nospec(uref->field_index, 503 + report->maxfield); 507 504 508 505 field = report->field[uref->field_index]; 509 506 ··· 760 753 761 754 if (finfo.field_index >= report->maxfield) 762 755 break; 756 + finfo.field_index = array_index_nospec(finfo.field_index, 757 + report->maxfield); 763 758 764 759 field = report->field[finfo.field_index]; 765 760 memset(&finfo, 0, sizeof(finfo)); ··· 806 797 807 798 if (cinfo.index >= hid->maxcollection) 808 799 break; 800 + cinfo.index = array_index_nospec(cinfo.index, 801 + hid->maxcollection); 809 802 810 803 cinfo.type = hid->collection[cinfo.index].type; 811 804 cinfo.usage = hid->collection[cinfo.index].usage;
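The hiddev.c hunk inserts `array_index_nospec()` immediately after each bounds check, so a mispredicted branch cannot carry an out-of-range index into the array load. The real macro is an architecture-specific speculation barrier; this Python model only shows the call placement and the clamp-to-zero behaviour (helper and argument names are illustrative):

```python
def array_index_nospec(index, size):
    # model of the macro's effect: any index >= size collapses to 0
    return index if index < size else 0

def get_field(report_fields, field_index):
    """Bounds-check, then sanitize the index before using it."""
    if field_index >= len(report_fields):
        raise ValueError("inval")
    field_index = array_index_nospec(field_index, len(report_fields))
    return report_fields[field_index]
```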
+8 -2
drivers/hid/wacom_wac.c
··· 3365 3365 if (features->type >= INTUOSHT && features->type <= BAMBOO_PT) 3366 3366 features->device_type |= WACOM_DEVICETYPE_PAD; 3367 3367 3368 - features->x_max = 4096; 3369 - features->y_max = 4096; 3368 + if (features->type == INTUOSHT2) { 3369 + features->x_max = features->x_max / 10; 3370 + features->y_max = features->y_max / 10; 3371 + } 3372 + else { 3373 + features->x_max = 4096; 3374 + features->y_max = 4096; 3375 + } 3370 3376 } 3371 3377 else if (features->pktlen == WACOM_PKGLEN_BBTOUCH) { 3372 3378 features->device_type |= WACOM_DEVICETYPE_PAD;
+2 -1
drivers/i2c/busses/i2c-cht-wc.c
··· 234 234 .name = "cht_wc_ext_chrg_irq_chip", 235 235 }; 236 236 237 - static const char * const bq24190_suppliers[] = { "fusb302-typec-source" }; 237 + static const char * const bq24190_suppliers[] = { 238 + "tcpm-source-psy-i2c-fusb302" }; 238 239 239 240 static const struct property_entry bq24190_props[] = { 240 241 PROPERTY_ENTRY_STRING_ARRAY("supplied-from", bq24190_suppliers),
+1 -1
drivers/i2c/busses/i2c-stu300.c
··· 127 127 128 128 /* 129 129 * The number of address send athemps tried before giving up. 130 - * If the first one failes it seems like 5 to 8 attempts are required. 130 + * If the first one fails it seems like 5 to 8 attempts are required. 131 131 */ 132 132 #define NUM_ADDR_RESEND_ATTEMPTS 12 133 133
+8 -9
drivers/i2c/busses/i2c-tegra.c
··· 545 545 { 546 546 u32 cnfg; 547 547 548 + /* 549 + * NACK interrupt is generated before the I2C controller generates 550 + * the STOP condition on the bus. So wait for 2 clock periods 551 + * before disabling the controller so that the STOP condition has 552 + * been delivered properly. 553 + */ 554 + udelay(DIV_ROUND_UP(2 * 1000000, i2c_dev->bus_clk_rate)); 555 + 548 556 cnfg = i2c_readl(i2c_dev, I2C_CNFG); 549 557 if (cnfg & I2C_CNFG_PACKET_MODE_EN) 550 558 i2c_writel(i2c_dev, cnfg & ~I2C_CNFG_PACKET_MODE_EN, I2C_CNFG); ··· 713 705 714 706 if (likely(i2c_dev->msg_err == I2C_ERR_NONE)) 715 707 return 0; 716 - 717 - /* 718 - * NACK interrupt is generated before the I2C controller generates 719 - * the STOP condition on the bus. So wait for 2 clock periods 720 - * before resetting the controller so that the STOP condition has 721 - * been delivered properly. 722 - */ 723 - if (i2c_dev->msg_err == I2C_ERR_NO_ACK) 724 - udelay(DIV_ROUND_UP(2 * 1000000, i2c_dev->bus_clk_rate)); 725 708 726 709 tegra_i2c_init(i2c_dev); 727 710 if (i2c_dev->msg_err == I2C_ERR_NO_ACK) {
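The tegra hunk moves the two-clock-period wait so it always runs before the controller is disabled; the wait itself is just `DIV_ROUND_UP(2 * 1000000, bus_clk_rate)` microseconds. Sketched in Python:

```python
def nack_delay_us(bus_clk_rate_hz):
    """Microseconds covering 2 I2C clock periods, rounded up, so the
    STOP condition drains before the controller is turned off."""
    return (2 * 1000000 + bus_clk_rate_hz - 1) // bus_clk_rate_hz
```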
+10 -1
drivers/i2c/i2c-core-base.c
··· 198 198 199 199 val = !val; 200 200 bri->set_scl(adap, val); 201 - ndelay(RECOVERY_NDELAY); 201 + 202 + /* 203 + * If we can set SDA, we will always create STOP here to ensure 204 + * the additional pulses will do no harm. This is achieved by 205 + * letting SDA follow SCL half a cycle later. 206 + */ 207 + ndelay(RECOVERY_NDELAY / 2); 208 + if (bri->set_sda) 209 + bri->set_sda(adap, val); 210 + ndelay(RECOVERY_NDELAY / 2); 202 211 } 203 212 204 213 /* check if recovery actually succeeded */
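In the i2c-core recovery hunk, SDA is driven to follow SCL half a `RECOVERY_NDELAY` later, so every SDA rise happens while SCL is already high, which the bus reads as a STOP condition rather than a spurious START. A toy Python waveform check of that invariant (not kernel code; two trace samples per half-period):

```python
def recovery_waveform(half_periods):
    """Trace of (scl, sda) samples: SCL toggles first, SDA follows it
    half a cycle later, as in the patched recovery loop."""
    scl = sda = 1
    trace = [(scl, sda)]
    for _ in range(half_periods):
        scl ^= 1
        trace.append((scl, sda))  # after the first half-delay
        sda = scl
        trace.append((scl, sda))  # after SDA followed
    return trace

def sda_rises_make_stop(trace):
    """True iff every SDA 0->1 edge occurs while SCL is high (STOP)."""
    for (_, sda0), (scl1, sda1) in zip(trace, trace[1:]):
        if sda0 == 0 and sda1 == 1 and scl1 != 1:
            return False
    return True
```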
+17 -11
drivers/infiniband/core/uverbs_cmd.c
··· 3488 3488 struct ib_flow_attr *flow_attr; 3489 3489 struct ib_qp *qp; 3490 3490 struct ib_uflow_resources *uflow_res; 3491 + struct ib_uverbs_flow_spec_hdr *kern_spec; 3491 3492 int err = 0; 3492 - void *kern_spec; 3493 3493 void *ib_spec; 3494 3494 int i; 3495 3495 ··· 3538 3538 if (!kern_flow_attr) 3539 3539 return -ENOMEM; 3540 3540 3541 - memcpy(kern_flow_attr, &cmd.flow_attr, sizeof(*kern_flow_attr)); 3542 - err = ib_copy_from_udata(kern_flow_attr + 1, ucore, 3541 + *kern_flow_attr = cmd.flow_attr; 3542 + err = ib_copy_from_udata(&kern_flow_attr->flow_specs, ucore, 3543 3543 cmd.flow_attr.size); 3544 3544 if (err) 3545 3545 goto err_free_attr; ··· 3557 3557 if (!qp) { 3558 3558 err = -EINVAL; 3559 3559 goto err_uobj; 3560 + } 3561 + 3562 + if (qp->qp_type != IB_QPT_UD && qp->qp_type != IB_QPT_RAW_PACKET) { 3563 + err = -EINVAL; 3564 + goto err_put; 3560 3565 } 3561 3566 3562 3567 flow_attr = kzalloc(struct_size(flow_attr, flows, ··· 3583 3578 flow_attr->flags = kern_flow_attr->flags; 3584 3579 flow_attr->size = sizeof(*flow_attr); 3585 3580 3586 - kern_spec = kern_flow_attr + 1; 3581 + kern_spec = kern_flow_attr->flow_specs; 3587 3582 ib_spec = flow_attr + 1; 3588 3583 for (i = 0; i < flow_attr->num_of_specs && 3589 - cmd.flow_attr.size > offsetof(struct ib_uverbs_flow_spec, reserved) && 3590 - cmd.flow_attr.size >= 3591 - ((struct ib_uverbs_flow_spec *)kern_spec)->size; i++) { 3592 - err = kern_spec_to_ib_spec(file->ucontext, kern_spec, ib_spec, 3593 - uflow_res); 3584 + cmd.flow_attr.size >= sizeof(*kern_spec) && 3585 + cmd.flow_attr.size >= kern_spec->size; 3586 + i++) { 3587 + err = kern_spec_to_ib_spec( 3588 + file->ucontext, (struct ib_uverbs_flow_spec *)kern_spec, 3589 + ib_spec, uflow_res); 3594 3590 if (err) 3595 3591 goto err_free; 3596 3592 3597 3593 flow_attr->size += 3598 3594 ((union ib_flow_spec *) ib_spec)->size; 3599 - cmd.flow_attr.size -= ((struct ib_uverbs_flow_spec *)kern_spec)->size; 3600 - kern_spec += ((struct ib_uverbs_flow_spec *) kern_spec)->size; 3595 + cmd.flow_attr.size -= kern_spec->size; 3596 + kern_spec = ((void *)kern_spec) + kern_spec->size; 3601 3597 ib_spec += ((union ib_flow_spec *) ib_spec)->size; 3602 3598 } 3603 3599 if (cmd.flow_attr.size || (i != flow_attr->num_of_specs)) {
+1 -1
drivers/infiniband/hw/cxgb4/mem.c
··· 774 774 { 775 775 struct c4iw_mr *mhp = to_c4iw_mr(ibmr); 776 776 777 - if (unlikely(mhp->mpl_len == mhp->max_mpl_len)) 777 + if (unlikely(mhp->mpl_len == mhp->attr.pbl_size)) 778 778 return -ENOMEM; 779 779 780 780 mhp->mpl[mhp->mpl_len++] = addr;
+1 -1
drivers/infiniband/hw/hfi1/rc.c
··· 271 271 272 272 lockdep_assert_held(&qp->s_lock); 273 273 ps->s_txreq = get_txreq(ps->dev, qp); 274 - if (IS_ERR(ps->s_txreq)) 274 + if (!ps->s_txreq) 275 275 goto bail_no_tx; 276 276 277 277 if (priv->hdr_type == HFI1_PKT_TYPE_9B) {
+2 -2
drivers/infiniband/hw/hfi1/uc.c
··· 1 1 /* 2 - * Copyright(c) 2015, 2016 Intel Corporation. 2 + * Copyright(c) 2015 - 2018 Intel Corporation. 3 3 * 4 4 * This file is provided under a dual BSD/GPLv2 license. When using or 5 5 * redistributing this file, you may do so under either license. ··· 72 72 int middle = 0; 73 73 74 74 ps->s_txreq = get_txreq(ps->dev, qp); 75 - if (IS_ERR(ps->s_txreq)) 75 + if (!ps->s_txreq) 76 76 goto bail_no_tx; 77 77 78 78 if (!(ib_rvt_state_ops[qp->state] & RVT_PROCESS_SEND_OK)) {
+2 -2
drivers/infiniband/hw/hfi1/ud.c
··· 1 1 /* 2 - * Copyright(c) 2015, 2016 Intel Corporation. 2 + * Copyright(c) 2015 - 2018 Intel Corporation. 3 3 * 4 4 * This file is provided under a dual BSD/GPLv2 license. When using or 5 5 * redistributing this file, you may do so under either license. ··· 503 503 u32 lid; 504 504 505 505 ps->s_txreq = get_txreq(ps->dev, qp); 506 - if (IS_ERR(ps->s_txreq)) 506 + if (!ps->s_txreq) 507 507 goto bail_no_tx; 508 508 509 509 if (!(ib_rvt_state_ops[qp->state] & RVT_PROCESS_NEXT_SEND_OK)) {
+2 -2
drivers/infiniband/hw/hfi1/verbs_txreq.c
··· 1 1 /* 2 - * Copyright(c) 2016 - 2017 Intel Corporation. 2 + * Copyright(c) 2016 - 2018 Intel Corporation. 3 3 * 4 4 * This file is provided under a dual BSD/GPLv2 license. When using or 5 5 * redistributing this file, you may do so under either license. ··· 94 94 struct rvt_qp *qp) 95 95 __must_hold(&qp->s_lock) 96 96 { 97 - struct verbs_txreq *tx = ERR_PTR(-EBUSY); 97 + struct verbs_txreq *tx = NULL; 98 98 99 99 write_seqlock(&dev->txwait_lock); 100 100 if (ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK) {
+2 -2
drivers/infiniband/hw/hfi1/verbs_txreq.h
··· 1 1 /* 2 - * Copyright(c) 2016 Intel Corporation. 2 + * Copyright(c) 2016 - 2018 Intel Corporation. 3 3 * 4 4 * This file is provided under a dual BSD/GPLv2 license. When using or 5 5 * redistributing this file, you may do so under either license. ··· 83 83 if (unlikely(!tx)) { 84 84 /* call slow path to get the lock */ 85 85 tx = __get_txreq(dev, qp); 86 - if (IS_ERR(tx)) 86 + if (!tx) 87 87 return tx; 88 88 } 89 89 tx->qp = qp;
+1 -1
drivers/infiniband/hw/mlx5/main.c
··· 6113 6113 dev->num_ports = max(MLX5_CAP_GEN(mdev, num_ports), 6114 6114 MLX5_CAP_GEN(mdev, num_vhca_ports)); 6115 6115 6116 - if (MLX5_VPORT_MANAGER(mdev) && 6116 + if (MLX5_ESWITCH_MANAGER(mdev) && 6117 6117 mlx5_ib_eswitch_mode(mdev->priv.eswitch) == SRIOV_OFFLOADS) { 6118 6118 dev->rep = mlx5_ib_vport_rep(mdev->priv.eswitch, 0); 6119 6119
+12 -6
drivers/infiniband/hw/mlx5/srq.c
··· 266 266 267 267 desc_size = sizeof(struct mlx5_wqe_srq_next_seg) + 268 268 srq->msrq.max_gs * sizeof(struct mlx5_wqe_data_seg); 269 - if (desc_size == 0 || srq->msrq.max_gs > desc_size) 270 - return ERR_PTR(-EINVAL); 269 + if (desc_size == 0 || srq->msrq.max_gs > desc_size) { 270 + err = -EINVAL; 271 + goto err_srq; 272 + } 271 273 desc_size = roundup_pow_of_two(desc_size); 272 274 desc_size = max_t(size_t, 32, desc_size); 273 - if (desc_size < sizeof(struct mlx5_wqe_srq_next_seg)) 274 - return ERR_PTR(-EINVAL); 275 + if (desc_size < sizeof(struct mlx5_wqe_srq_next_seg)) { 276 + err = -EINVAL; 277 + goto err_srq; 278 + } 275 279 srq->msrq.max_avail_gather = (desc_size - sizeof(struct mlx5_wqe_srq_next_seg)) / 276 280 sizeof(struct mlx5_wqe_data_seg); 277 281 srq->msrq.wqe_shift = ilog2(desc_size); 278 282 buf_size = srq->msrq.max * desc_size; 279 - if (buf_size < desc_size) 280 - return ERR_PTR(-EINVAL); 283 + if (buf_size < desc_size) { 284 + err = -EINVAL; 285 + goto err_srq; 286 + } 281 287 in.type = init_attr->srq_type; 282 288 283 289 if (pd->uobject)
-1
drivers/iommu/Kconfig
··· 142 142 config INTEL_IOMMU 143 143 bool "Support for Intel IOMMU using DMA Remapping Devices" 144 144 depends on PCI_MSI && ACPI && (X86 || IA64_GENERIC) 145 - select DMA_DIRECT_OPS 146 145 select IOMMU_API 147 146 select IOMMU_IOVA 148 147 select NEED_DMA_MAP_STATE
+46 -16
drivers/iommu/intel-iommu.c
··· 31 31 #include <linux/pci.h> 32 32 #include <linux/dmar.h> 33 33 #include <linux/dma-mapping.h> 34 - #include <linux/dma-direct.h> 35 34 #include <linux/mempool.h> 36 35 #include <linux/memory.h> 37 36 #include <linux/cpu.h> ··· 3712 3713 dma_addr_t *dma_handle, gfp_t flags, 3713 3714 unsigned long attrs) 3714 3715 { 3715 - void *vaddr; 3716 + struct page *page = NULL; 3717 + int order; 3716 3718 3717 - vaddr = dma_direct_alloc(dev, size, dma_handle, flags, attrs); 3718 - if (iommu_no_mapping(dev) || !vaddr) 3719 - return vaddr; 3719 + size = PAGE_ALIGN(size); 3720 + order = get_order(size); 3720 3721 3721 - *dma_handle = __intel_map_single(dev, virt_to_phys(vaddr), 3722 - PAGE_ALIGN(size), DMA_BIDIRECTIONAL, 3723 - dev->coherent_dma_mask); 3724 - if (!*dma_handle) 3725 - goto out_free_pages; 3726 - return vaddr; 3722 + if (!iommu_no_mapping(dev)) 3723 + flags &= ~(GFP_DMA | GFP_DMA32); 3724 + else if (dev->coherent_dma_mask < dma_get_required_mask(dev)) { 3725 + if (dev->coherent_dma_mask < DMA_BIT_MASK(32)) 3726 + flags |= GFP_DMA; 3727 + else 3728 + flags |= GFP_DMA32; 3729 + } 3727 3730 3728 - out_free_pages: 3729 - dma_direct_free(dev, size, vaddr, *dma_handle, attrs); 3731 + if (gfpflags_allow_blocking(flags)) { 3732 + unsigned int count = size >> PAGE_SHIFT; 3733 + 3734 + page = dma_alloc_from_contiguous(dev, count, order, flags); 3735 + if (page && iommu_no_mapping(dev) && 3736 + page_to_phys(page) + size > dev->coherent_dma_mask) { 3737 + dma_release_from_contiguous(dev, page, count); 3738 + page = NULL; 3739 + } 3740 + } 3741 + 3742 + if (!page) 3743 + page = alloc_pages(flags, order); 3744 + if (!page) 3745 + return NULL; 3746 + memset(page_address(page), 0, size); 3747 + 3748 + *dma_handle = __intel_map_single(dev, page_to_phys(page), size, 3749 + DMA_BIDIRECTIONAL, 3750 + dev->coherent_dma_mask); 3751 + if (*dma_handle) 3752 + return page_address(page); 3753 + if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT)) 3754 + __free_pages(page, order); 3755 + 3730 3756 return NULL; 3731 3757 } 3732 3758 3733 3759 static void intel_free_coherent(struct device *dev, size_t size, void *vaddr, 3734 3760 dma_addr_t dma_handle, unsigned long attrs) 3735 3761 { 3736 - if (!iommu_no_mapping(dev)) 3737 - intel_unmap(dev, dma_handle, PAGE_ALIGN(size)); 3738 - dma_direct_free(dev, size, vaddr, dma_handle, attrs); 3762 + int order; 3763 + struct page *page = virt_to_page(vaddr); 3764 + 3765 + size = PAGE_ALIGN(size); 3766 + order = get_order(size); 3767 + 3768 + intel_unmap(dev, dma_handle, size); 3769 + if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT)) 3770 + __free_pages(page, order); 3739 3771 } 3740 3772 3741 3773 static void intel_unmap_sg(struct device *dev, struct scatterlist *sglist,
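The intel-iommu allocator above applies GFP zone restrictions from the device's coherent DMA mask only when no IOMMU mapping will be used (with an IOMMU, any page is reachable). A hedged Python model of just that flag selection (flag bit values are stand-ins, not kernel constants):

```python
GFP_DMA, GFP_DMA32 = 0x1, 0x2  # stand-in flag bits for the model

def dma_bit_mask(n):
    """DMA_BIT_MASK(n): lowest n bits set."""
    return (1 << n) - 1

def zone_flags(flags, iommu_mapped, coherent_mask, required_mask):
    """With an IOMMU the zone restrictions are dropped; without one
    the allocation must itself satisfy the device's coherent mask."""
    if iommu_mapped:
        return flags & ~(GFP_DMA | GFP_DMA32)
    if coherent_mask < required_mask:
        if coherent_mask < dma_bit_mask(32):
            return flags | GFP_DMA
        return flags | GFP_DMA32
    return flags
```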
+5 -3
drivers/md/md.c
··· 5547 5547 else 5548 5548 pr_warn("md: personality for level %s is not loaded!\n", 5549 5549 mddev->clevel); 5550 - return -EINVAL; 5550 + err = -EINVAL; 5551 + goto abort; 5551 5552 } 5552 5553 spin_unlock(&pers_lock); 5553 5554 if (mddev->level != pers->level) { ··· 5561 5560 pers->start_reshape == NULL) { 5562 5561 /* This personality cannot handle reshaping... */ 5563 5562 module_put(pers->owner); 5564 - return -EINVAL; 5563 + err = -EINVAL; 5564 + goto abort; 5565 5565 } 5566 5566 5567 5567 if (pers->sync_request) { ··· 5631 5629 mddev->private = NULL; 5632 5630 module_put(pers->owner); 5633 5631 bitmap_destroy(mddev); 5634 - return err; 5632 + goto abort; 5635 5633 } 5636 5634 if (mddev->queue) { 5637 5635 bool nonrot = true;
+7
drivers/md/raid10.c
··· 3893 3893 disk->rdev->saved_raid_disk < 0) 3894 3894 conf->fullsync = 1; 3895 3895 } 3896 + 3897 + if (disk->replacement && 3898 + !test_bit(In_sync, &disk->replacement->flags) && 3899 + disk->replacement->saved_raid_disk < 0) { 3900 + conf->fullsync = 1; 3901 + } 3902 + 3896 3903 disk->recovery_disabled = mddev->recovery_disabled - 1; 3897 3904 } 3898 3905
+2 -12
drivers/media/rc/bpf-lirc.c
··· 207 207 bpf_prog_array_free(rcdev->raw->progs); 208 208 } 209 209 210 - int lirc_prog_attach(const union bpf_attr *attr) 210 + int lirc_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog) 211 211 { 212 - struct bpf_prog *prog; 213 212 struct rc_dev *rcdev; 214 213 int ret; 215 214 216 215 if (attr->attach_flags) 217 216 return -EINVAL; 218 217 219 - prog = bpf_prog_get_type(attr->attach_bpf_fd, 220 - BPF_PROG_TYPE_LIRC_MODE2); 221 - if (IS_ERR(prog)) 222 - return PTR_ERR(prog); 223 - 224 218 rcdev = rc_dev_get_from_fd(attr->target_fd); 225 - if (IS_ERR(rcdev)) { 226 - bpf_prog_put(prog); 219 + if (IS_ERR(rcdev)) 227 220 return PTR_ERR(rcdev); 228 - } 229 221 230 222 ret = lirc_bpf_attach(rcdev, prog); 231 - if (ret) 232 - bpf_prog_put(prog); 233 223 234 224 put_device(&rcdev->dev); 235 225
+3 -24
drivers/misc/ibmasm/ibmasmfs.c
··· 507 507 static ssize_t remote_settings_file_read(struct file *file, char __user *buf, size_t count, loff_t *offset) 508 508 { 509 509 void __iomem *address = (void __iomem *)file->private_data; 510 - unsigned char *page; 511 - int retval; 512 510 int len = 0; 513 511 unsigned int value; 514 - 515 - if (*offset < 0) 516 - return -EINVAL; 517 - if (count == 0 || count > 1024) 518 - return 0; 519 - if (*offset != 0) 520 - return 0; 521 - 522 - page = (unsigned char *)__get_free_page(GFP_KERNEL); 523 - if (!page) 524 - return -ENOMEM; 512 + char lbuf[20]; 525 513 526 514 value = readl(address); 527 - len = sprintf(page, "%d\n", value); 515 + len = snprintf(lbuf, sizeof(lbuf), "%d\n", value); 528 516 529 - if (copy_to_user(buf, page, len)) { 530 - retval = -EFAULT; 531 - goto exit; 532 - } 533 - *offset += len; 534 - retval = len; 535 - 536 - exit: 537 - free_page((unsigned long)page); 538 - return retval; 517 + return simple_read_from_buffer(buf, count, offset, lbuf, len); 539 518 } 540 519 541 520 static ssize_t remote_settings_file_write(struct file *file, const char __user *ubuff, size_t count, loff_t *offset)
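The ibmasmfs cleanup delegates all offset/count bookkeeping to `simple_read_from_buffer()`. Its user-visible semantics can be modeled in a few lines of Python (illustrative only; it returns `(data, new_pos)` instead of writing to a user buffer):

```python
def simple_read_from_buffer(available, count, ppos):
    """Serve a read of up to count bytes from an in-memory buffer,
    honouring and advancing the file position ppos."""
    if ppos < 0:
        raise ValueError("EINVAL")
    if ppos >= len(available) or count == 0:
        return b"", ppos
    chunk = available[ppos:ppos + count]
    return chunk, ppos + len(chunk)
```

A short read at a nonzero offset and a zero-length read at EOF both fall out of the helper for free, which is exactly the hand-rolled logic the patch deletes.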
+4 -1
drivers/misc/mei/interrupt.c
··· 310 310 if (&cl->link == &dev->file_list) { 311 311 /* A message for not connected fixed address clients 312 312 * should be silently discarded 313 + * On power down client may be force cleaned, 314 + * silently discard such messages 313 315 */ 314 - if (hdr_is_fixed(mei_hdr)) { 316 + if (hdr_is_fixed(mei_hdr) || 317 + dev->dev_state == MEI_DEV_POWER_DOWN) { 315 318 mei_irq_discard_msg(dev, mei_hdr); 316 319 ret = 0; 317 320 goto reset_slots;
+2 -2
drivers/misc/vmw_balloon.c
··· 467 467 unsigned int num_pages, bool is_2m_pages, unsigned int *target) 468 468 { 469 469 unsigned long status; 470 - unsigned long pfn = page_to_pfn(b->page); 470 + unsigned long pfn = PHYS_PFN(virt_to_phys(b->batch_page)); 471 471 472 472 STATS_INC(b->stats.lock[is_2m_pages]); 473 473 ··· 515 515 unsigned int num_pages, bool is_2m_pages, unsigned int *target) 516 516 { 517 517 unsigned long status; 518 - unsigned long pfn = page_to_pfn(b->page); 518 + unsigned long pfn = PHYS_PFN(virt_to_phys(b->batch_page)); 519 519 520 520 STATS_INC(b->stats.unlock[is_2m_pages]); 521 521
+1 -1
drivers/mmc/core/slot-gpio.c
··· 27 27 bool override_cd_active_level; 28 28 irqreturn_t (*cd_gpio_isr)(int irq, void *dev_id); 29 29 char *ro_label; 30 - char cd_label[0]; 31 30 u32 cd_debounce_delay_ms; 31 + char cd_label[]; 32 32 }; 33 33 34 34 static irqreturn_t mmc_gpio_cd_irqt(int irq, void *dev_id)
+4 -3
drivers/mmc/host/dw_mmc.c
··· 1065 1065 * It's used when HS400 mode is enabled. 1066 1066 */ 1067 1067 if (data->flags & MMC_DATA_WRITE && 1068 - !(host->timing != MMC_TIMING_MMC_HS400)) 1069 - return; 1068 + host->timing != MMC_TIMING_MMC_HS400) 1069 + goto disable; 1070 1070 1071 1071 if (data->flags & MMC_DATA_WRITE) 1072 1072 enable = SDMMC_CARD_WR_THR_EN; ··· 1074 1074 enable = SDMMC_CARD_RD_THR_EN; 1075 1075 1076 1076 if (host->timing != MMC_TIMING_MMC_HS200 && 1077 - host->timing != MMC_TIMING_UHS_SDR104) 1077 + host->timing != MMC_TIMING_UHS_SDR104 && 1078 + host->timing != MMC_TIMING_MMC_HS400) 1078 1079 goto disable; 1079 1080 1080 1081 blksz_depth = blksz / (1 << host->data_shift);
+7 -8
drivers/mmc/host/renesas_sdhi_internal_dmac.c
··· 139 139 renesas_sdhi_internal_dmac_dm_write(host, DM_CM_RST, 140 140 RST_RESERVED_BITS | val); 141 141 142 - if (host->data && host->data->flags & MMC_DATA_READ) 143 - clear_bit(SDHI_INTERNAL_DMAC_RX_IN_USE, &global_flags); 142 + clear_bit(SDHI_INTERNAL_DMAC_RX_IN_USE, &global_flags); 144 143 145 144 renesas_sdhi_internal_dmac_enable_dma(host, true); 146 145 } ··· 163 164 goto force_pio; 164 165 165 166 /* This DMAC cannot handle if buffer is not 8-bytes alignment */ 166 - if (!IS_ALIGNED(sg_dma_address(sg), 8)) { 167 - dma_unmap_sg(&host->pdev->dev, sg, host->sg_len, 168 - mmc_get_dma_dir(data)); 169 - goto force_pio; 170 - } 167 + if (!IS_ALIGNED(sg_dma_address(sg), 8)) 168 + goto force_pio_with_unmap; 171 169 172 170 if (data->flags & MMC_DATA_READ) { 173 171 dtran_mode |= DTRAN_MODE_CH_NUM_CH1; 174 172 if (test_bit(SDHI_INTERNAL_DMAC_ONE_RX_ONLY, &global_flags) && 175 173 test_and_set_bit(SDHI_INTERNAL_DMAC_RX_IN_USE, &global_flags)) 176 - goto force_pio; 174 + goto force_pio_with_unmap; 177 175 } else { 178 176 dtran_mode |= DTRAN_MODE_CH_NUM_CH0; 179 177 } ··· 184 188 sg_dma_address(sg)); 185 189 186 190 return; 191 + 192 + force_pio_with_unmap: 193 + dma_unmap_sg(&host->pdev->dev, sg, host->sg_len, mmc_get_dma_dir(data)); 187 194 188 195 force_pio: 189 196 host->force_pio = true;
+9 -12
drivers/mmc/host/sdhci-esdhc-imx.c
··· 312 312 313 313 if (imx_data->socdata->flags & ESDHC_FLAG_HS400) 314 314 val |= SDHCI_SUPPORT_HS400; 315 + 316 + /* 317 + * Do not advertise faster UHS modes if there are no 318 + * pinctrl states for 100MHz/200MHz. 319 + */ 320 + if (IS_ERR_OR_NULL(imx_data->pins_100mhz) || 321 + IS_ERR_OR_NULL(imx_data->pins_200mhz)) 322 + val &= ~(SDHCI_SUPPORT_SDR50 | SDHCI_SUPPORT_DDR50 323 + | SDHCI_SUPPORT_SDR104 | SDHCI_SUPPORT_HS400); 315 324 } 316 325 } 317 326 ··· 1167 1158 ESDHC_PINCTRL_STATE_100MHZ); 1168 1159 imx_data->pins_200mhz = pinctrl_lookup_state(imx_data->pinctrl, 1169 1160 ESDHC_PINCTRL_STATE_200MHZ); 1170 - if (IS_ERR(imx_data->pins_100mhz) || 1171 - IS_ERR(imx_data->pins_200mhz)) { 1172 - dev_warn(mmc_dev(host->mmc), 1173 - "could not get ultra high speed state, work on normal mode\n"); 1174 - /* 1175 - * fall back to not supporting uhs by specifying no 1176 - * 1.8v quirk 1177 - */ 1178 - host->quirks2 |= SDHCI_QUIRK2_NO_1_8_V; 1179 - } 1180 - } else { 1181 - host->quirks2 |= SDHCI_QUIRK2_NO_1_8_V; 1182 1161 } 1183 1162 1184 1163 /* call to generic mmc_of_parse to support additional capabilities */
+7
drivers/mmc/host/sunxi-mmc.c
··· 1446 1446 sunxi_mmc_init_host(host); 1447 1447 sunxi_mmc_set_bus_width(host, mmc->ios.bus_width); 1448 1448 sunxi_mmc_set_clk(host, &mmc->ios); 1449 + enable_irq(host->irq); 1449 1450 1450 1451 return 0; 1451 1452 } ··· 1456 1455 struct mmc_host *mmc = dev_get_drvdata(dev); 1457 1456 struct sunxi_mmc_host *host = mmc_priv(mmc); 1458 1457 1458 + /* 1459 + * When clocks are off, it's possible receiving 1460 + * fake interrupts, which will stall the system. 1461 + * Disabling the irq will prevent this. 1462 + */ 1463 + disable_irq(host->irq); 1459 1464 sunxi_mmc_reset_host(host); 1460 1465 sunxi_mmc_disable(host); 1461 1466
+4 -2
drivers/mtd/spi-nor/cadence-quadspi.c
··· 926 926 if (ret) 927 927 return ret; 928 928 929 - if (f_pdata->use_direct_mode) 929 + if (f_pdata->use_direct_mode) { 930 930 memcpy_toio(cqspi->ahb_base + to, buf, len); 931 - else 931 + ret = cqspi_wait_idle(cqspi); 932 + } else { 932 933 ret = cqspi_indirect_write_execute(nor, to, buf, len); 934 + } 933 935 if (ret) 934 936 return ret; 935 937
+7 -1
drivers/net/ethernet/atheros/alx/main.c
··· 1897 1897 struct pci_dev *pdev = to_pci_dev(dev); 1898 1898 struct alx_priv *alx = pci_get_drvdata(pdev); 1899 1899 struct alx_hw *hw = &alx->hw; 1900 + int err; 1900 1901 1901 1902 alx_reset_phy(hw); 1902 1903 1903 1904 if (!netif_running(alx->dev)) 1904 1905 return 0; 1905 1906 netif_device_attach(alx->dev); 1906 - return __alx_open(alx, true); 1907 + 1908 + rtnl_lock(); 1909 + err = __alx_open(alx, true); 1910 + rtnl_unlock(); 1911 + 1912 + return err; 1907 1913 } 1908 1914 1909 1915 static SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume);
+1
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
··· 1533 1533 struct link_vars link_vars; 1534 1534 u32 link_cnt; 1535 1535 struct bnx2x_link_report_data last_reported_link; 1536 + bool force_link_down; 1536 1537 1537 1538 struct mdio_if_info mdio; 1538 1539
+6
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
··· 1261 1261 { 1262 1262 struct bnx2x_link_report_data cur_data; 1263 1263 1264 + if (bp->force_link_down) { 1265 + bp->link_vars.link_up = 0; 1266 + return; 1267 + } 1268 + 1264 1269 /* reread mf_cfg */ 1265 1270 if (IS_PF(bp) && !CHIP_IS_E1(bp)) 1266 1271 bnx2x_read_mf_cfg(bp); ··· 2822 2817 bp->pending_max = 0; 2823 2818 } 2824 2819 2820 + bp->force_link_down = false; 2825 2821 if (bp->port.pmf) { 2826 2822 rc = bnx2x_initial_phy_init(bp, load_mode); 2827 2823 if (rc)
+6
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 10279 10279 bp->sp_rtnl_state = 0; 10280 10280 smp_mb(); 10281 10281 10282 + /* Immediately indicate link as down */ 10283 + bp->link_vars.link_up = 0; 10284 + bp->force_link_down = true; 10285 + netif_carrier_off(bp->dev); 10286 + BNX2X_ERR("Indicating link is down due to Tx-timeout\n"); 10287 + 10282 10288 bnx2x_nic_unload(bp, UNLOAD_NORMAL, true); 10283 10289 /* When ret value shows failure of allocation failure, 10284 10290 * the nic is rebooted again. If open still fails, a error
+1 -1
drivers/net/ethernet/broadcom/cnic.c
··· 660 660 id_tbl->max = size; 661 661 id_tbl->next = next; 662 662 spin_lock_init(&id_tbl->lock); 663 - id_tbl->table = kcalloc(DIV_ROUND_UP(size, 32), 4, GFP_KERNEL); 663 + id_tbl->table = kcalloc(BITS_TO_LONGS(size), sizeof(long), GFP_KERNEL); 664 664 if (!id_tbl->table) 665 665 return -ENOMEM; 666 666
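The cnic fix matters because the old `DIV_ROUND_UP(size, 32) * 4` byte count holds `size` bits but is not necessarily a whole number of `unsigned long`s; bitmap helpers operate a long at a time, so on 64-bit kernels they could touch memory past the allocation. The two sizings in Python terms (LP64 long assumed):

```python
BITS_PER_LONG = 64  # assumption: 64-bit kernel

def bits_to_longs(nbits):
    """Longs needed to back an nbits-wide bitmap (BITS_TO_LONGS)."""
    return (nbits + BITS_PER_LONG - 1) // BITS_PER_LONG

def old_alloc_bytes(nbits):
    """What the removed code allocated: DIV_ROUND_UP(size, 32) * 4."""
    return ((nbits + 31) // 32) * 4
```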
+2
drivers/net/ethernet/cadence/macb_main.c
··· 3726 3726 int err; 3727 3727 u32 reg; 3728 3728 3729 + bp->queues[0].bp = bp; 3730 + 3729 3731 dev->netdev_ops = &at91ether_netdev_ops; 3730 3732 dev->ethtool_ops = &macb_ethtool_ops; 3731 3733
+8 -7
drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
··· 125 125 /* Default alignment for start of data in an Rx FD */ 126 126 #define DPAA_FD_DATA_ALIGNMENT 16 127 127 128 + /* The DPAA requires 256 bytes reserved and mapped for the SGT */ 129 + #define DPAA_SGT_SIZE 256 130 + 128 131 /* Values for the L3R field of the FM Parse Results 129 132 */ 130 133 /* L3 Type field: First IP Present IPv4 */ ··· 1620 1617 1621 1618 if (unlikely(qm_fd_get_format(fd) == qm_fd_sg)) { 1622 1619 nr_frags = skb_shinfo(skb)->nr_frags; 1623 - dma_unmap_single(dev, addr, qm_fd_get_offset(fd) + 1624 - sizeof(struct qm_sg_entry) * (1 + nr_frags), 1620 + dma_unmap_single(dev, addr, 1621 + qm_fd_get_offset(fd) + DPAA_SGT_SIZE, 1625 1622 dma_dir); 1626 1623 1627 1624 /* The sgt buffer has been allocated with netdev_alloc_frag(), ··· 1906 1903 void *sgt_buf; 1907 1904 1908 1905 /* get a page frag to store the SGTable */ 1909 - sz = SKB_DATA_ALIGN(priv->tx_headroom + 1910 - sizeof(struct qm_sg_entry) * (1 + nr_frags)); 1906 + sz = SKB_DATA_ALIGN(priv->tx_headroom + DPAA_SGT_SIZE); 1911 1907 sgt_buf = netdev_alloc_frag(sz); 1912 1908 if (unlikely(!sgt_buf)) { 1913 1909 netdev_err(net_dev, "netdev_alloc_frag() failed for size %d\n", ··· 1974 1972 skbh = (struct sk_buff **)buffer_start; 1975 1973 *skbh = skb; 1976 1974 1977 - addr = dma_map_single(dev, buffer_start, priv->tx_headroom + 1978 - sizeof(struct qm_sg_entry) * (1 + nr_frags), 1979 - dma_dir); 1975 + addr = dma_map_single(dev, buffer_start, 1976 + priv->tx_headroom + DPAA_SGT_SIZE, dma_dir); 1980 1977 if (unlikely(dma_mapping_error(dev, addr))) { 1981 1978 dev_err(dev, "DMA mapping failed"); 1982 1979 err = -EINVAL;
+8
drivers/net/ethernet/freescale/fman/fman_port.c
··· 324 324 #define HWP_HXS_PHE_REPORT 0x00000800 325 325 #define HWP_HXS_PCAC_PSTAT 0x00000100 326 326 #define HWP_HXS_PCAC_PSTOP 0x00000001 327 + #define HWP_HXS_TCP_OFFSET 0xA 328 + #define HWP_HXS_UDP_OFFSET 0xB 329 + #define HWP_HXS_SH_PAD_REM 0x80000000 330 + 327 331 struct fman_port_hwp_regs { 328 332 struct { 329 333 u32 ssa; /* Soft Sequence Attachment */ ··· 731 727 iowrite32be(0x00000000, &regs->pmda[i].ssa); 732 728 iowrite32be(0xffffffff, &regs->pmda[i].lcv); 733 729 } 730 + 731 + /* Short packet padding removal from checksum calculation */ 732 + iowrite32be(HWP_HXS_SH_PAD_REM, &regs->pmda[HWP_HXS_TCP_OFFSET].ssa); 733 + iowrite32be(HWP_HXS_SH_PAD_REM, &regs->pmda[HWP_HXS_UDP_OFFSET].ssa); 734 734 735 735 start_port_hwp(port); 736 736 }
+1
drivers/net/ethernet/huawei/hinic/hinic_rx.c
··· 439 439 { 440 440 struct hinic_rq *rq = rxq->rq; 441 441 442 + irq_set_affinity_hint(rq->irq, NULL); 442 443 free_irq(rq->irq, rxq); 443 444 rx_del_napi(rxq); 444 445 }
+15 -9
drivers/net/ethernet/intel/i40e/i40e_txrx.c
··· 2199 2199 return true; 2200 2200 } 2201 2201 2202 - #define I40E_XDP_PASS 0 2203 - #define I40E_XDP_CONSUMED 1 2204 - #define I40E_XDP_TX 2 2202 + #define I40E_XDP_PASS 0 2203 + #define I40E_XDP_CONSUMED BIT(0) 2204 + #define I40E_XDP_TX BIT(1) 2205 + #define I40E_XDP_REDIR BIT(2) 2205 2206 2206 2207 static int i40e_xmit_xdp_ring(struct xdp_frame *xdpf, 2207 2208 struct i40e_ring *xdp_ring); ··· 2249 2248 break; 2250 2249 case XDP_REDIRECT: 2251 2250 err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog); 2252 - result = !err ? I40E_XDP_TX : I40E_XDP_CONSUMED; 2251 + result = !err ? I40E_XDP_REDIR : I40E_XDP_CONSUMED; 2253 2252 break; 2254 2253 default: 2255 2254 bpf_warn_invalid_xdp_action(act); ··· 2312 2311 unsigned int total_rx_bytes = 0, total_rx_packets = 0; 2313 2312 struct sk_buff *skb = rx_ring->skb; 2314 2313 u16 cleaned_count = I40E_DESC_UNUSED(rx_ring); 2315 - bool failure = false, xdp_xmit = false; 2314 + unsigned int xdp_xmit = 0; 2315 + bool failure = false; 2316 2316 struct xdp_buff xdp; 2317 2317 2318 2318 xdp.rxq = &rx_ring->xdp_rxq; ··· 2374 2372 } 2375 2373 2376 2374 if (IS_ERR(skb)) { 2377 - if (PTR_ERR(skb) == -I40E_XDP_TX) { 2378 - xdp_xmit = true; 2375 + unsigned int xdp_res = -PTR_ERR(skb); 2376 + 2377 + if (xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR)) { 2378 + xdp_xmit |= xdp_res; 2379 2379 i40e_rx_buffer_flip(rx_ring, rx_buffer, size); 2380 2380 } else { 2381 2381 rx_buffer->pagecnt_bias++; ··· 2431 2427 total_rx_packets++; 2432 2428 } 2433 2429 2434 - if (xdp_xmit) { 2430 + if (xdp_xmit & I40E_XDP_REDIR) 2431 + xdp_do_flush_map(); 2432 + 2433 + if (xdp_xmit & I40E_XDP_TX) { 2435 2434 struct i40e_ring *xdp_ring = 2436 2435 rx_ring->vsi->xdp_rings[rx_ring->queue_index]; 2437 2436 2438 2437 i40e_xdp_ring_update_tail(xdp_ring); 2439 - xdp_do_flush_map(); 2440 2438 } 2441 2439 2442 2440 rx_ring->skb = skb;
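The i40e change above (mirrored in the ixgbe hunk that follows) turns the XDP verdicts into single-bit flags so one `xdp_xmit` word can record everything that happened during a poll; at the end, `xdp_do_flush_map()` and the XDP TX tail bump are performed only for the paths actually taken. A standalone sketch of that accumulate-then-finalize pattern, with hypothetical names rather than the driver's:

```c
#include <stdbool.h>

/* Single-bit verdict flags, mirroring the I40E_XDP_* defines above. */
#define XDP_RES_CONSUMED (1u << 0)
#define XDP_RES_TX       (1u << 1)
#define XDP_RES_REDIR    (1u << 2)

/* Fold one per-packet verdict into the per-poll summary word. */
static unsigned int xdp_accumulate(unsigned int xmit, unsigned int res)
{
	if (res & (XDP_RES_TX | XDP_RES_REDIR))
		xmit |= res;
	return xmit;
}

/* End of poll: which finalization steps does the summary require? */
static bool need_flush_map(unsigned int xmit)
{
	return xmit & XDP_RES_REDIR;
}

static bool need_ring_tail_bump(unsigned int xmit)
{
	return xmit & XDP_RES_TX;
}
```

With the old enum-style codes, a poll that only redirected packets would still bump the local XDP TX ring tail; the bitmask keeps the two completions independent.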
+14 -10
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 2186 2186 return skb; 2187 2187 } 2188 2188 2189 - #define IXGBE_XDP_PASS 0 2190 - #define IXGBE_XDP_CONSUMED 1 2191 - #define IXGBE_XDP_TX 2 2189 + #define IXGBE_XDP_PASS 0 2190 + #define IXGBE_XDP_CONSUMED BIT(0) 2191 + #define IXGBE_XDP_TX BIT(1) 2192 + #define IXGBE_XDP_REDIR BIT(2) 2192 2193 2193 2194 static int ixgbe_xmit_xdp_ring(struct ixgbe_adapter *adapter, 2194 2195 struct xdp_frame *xdpf); ··· 2226 2225 case XDP_REDIRECT: 2227 2226 err = xdp_do_redirect(adapter->netdev, xdp, xdp_prog); 2228 2227 if (!err) 2229 - result = IXGBE_XDP_TX; 2228 + result = IXGBE_XDP_REDIR; 2230 2229 else 2231 2230 result = IXGBE_XDP_CONSUMED; 2232 2231 break; ··· 2286 2285 unsigned int mss = 0; 2287 2286 #endif /* IXGBE_FCOE */ 2288 2287 u16 cleaned_count = ixgbe_desc_unused(rx_ring); 2289 - bool xdp_xmit = false; 2288 + unsigned int xdp_xmit = 0; 2290 2289 struct xdp_buff xdp; 2291 2290 2292 2291 xdp.rxq = &rx_ring->xdp_rxq; ··· 2329 2328 } 2330 2329 2331 2330 if (IS_ERR(skb)) { 2332 - if (PTR_ERR(skb) == -IXGBE_XDP_TX) { 2333 - xdp_xmit = true; 2331 + unsigned int xdp_res = -PTR_ERR(skb); 2332 + 2333 + if (xdp_res & (IXGBE_XDP_TX | IXGBE_XDP_REDIR)) { 2334 + xdp_xmit |= xdp_res; 2334 2335 ixgbe_rx_buffer_flip(rx_ring, rx_buffer, size); 2335 2336 } else { 2336 2337 rx_buffer->pagecnt_bias++; ··· 2404 2401 total_rx_packets++; 2405 2402 } 2406 2403 2407 - if (xdp_xmit) { 2404 + if (xdp_xmit & IXGBE_XDP_REDIR) 2405 + xdp_do_flush_map(); 2406 + 2407 + if (xdp_xmit & IXGBE_XDP_TX) { 2408 2408 struct ixgbe_ring *ring = adapter->xdp_ring[smp_processor_id()]; 2409 2409 2410 2410 /* Force memory writes to complete before letting h/w ··· 2415 2409 */ 2416 2410 wmb(); 2417 2411 writel(ring->next_to_use, ring->tail); 2418 - 2419 - xdp_do_flush_map(); 2420 2412 } 2421 2413 2422 2414 u64_stats_update_begin(&rx_ring->syncp);
+4 -4
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
··· 807 807 unsigned long flags; 808 808 bool poll_cmd = ent->polling; 809 809 int alloc_ret; 810 + int cmd_mode; 810 811 811 812 sem = ent->page_queue ? &cmd->pages_sem : &cmd->sem; 812 813 down(sem); ··· 854 853 set_signature(ent, !cmd->checksum_disabled); 855 854 dump_command(dev, ent, 1); 856 855 ent->ts1 = ktime_get_ns(); 856 + cmd_mode = cmd->mode; 857 857 858 858 if (ent->callback) 859 859 schedule_delayed_work(&ent->cb_timeout_work, cb_timeout); ··· 879 877 iowrite32be(1 << ent->idx, &dev->iseg->cmd_dbell); 880 878 mmiowb(); 881 879 /* if not in polling don't use ent after this point */ 882 - if (cmd->mode == CMD_MODE_POLLING || poll_cmd) { 880 + if (cmd_mode == CMD_MODE_POLLING || poll_cmd) { 883 881 poll_timeout(ent); 884 882 /* make sure we read the descriptor after ownership is SW */ 885 883 rmb(); ··· 1278 1276 { 1279 1277 struct mlx5_core_dev *dev = filp->private_data; 1280 1278 struct mlx5_cmd_debug *dbg = &dev->cmd.dbg; 1281 - char outlen_str[8]; 1279 + char outlen_str[8] = {0}; 1282 1280 int outlen; 1283 1281 void *ptr; 1284 1282 int err; ··· 1292 1290 1293 1291 if (copy_from_user(outlen_str, buf, count)) 1294 1292 return -EFAULT; 1295 - 1296 - outlen_str[7] = 0; 1297 1293 1298 1294 err = sscanf(outlen_str, "%d", &outlen); 1299 1295 if (err < 0)
+6 -6
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 2846 2846 mlx5e_activate_channels(&priv->channels); 2847 2847 netif_tx_start_all_queues(priv->netdev); 2848 2848 2849 - if (MLX5_VPORT_MANAGER(priv->mdev)) 2849 + if (MLX5_ESWITCH_MANAGER(priv->mdev)) 2850 2850 mlx5e_add_sqs_fwd_rules(priv); 2851 2851 2852 2852 mlx5e_wait_channels_min_rx_wqes(&priv->channels); ··· 2857 2857 { 2858 2858 mlx5e_redirect_rqts_to_drop(priv); 2859 2859 2860 - if (MLX5_VPORT_MANAGER(priv->mdev)) 2860 + if (MLX5_ESWITCH_MANAGER(priv->mdev)) 2861 2861 mlx5e_remove_sqs_fwd_rules(priv); 2862 2862 2863 2863 /* FIXME: This is a W/A only for tx timeout watch dog false alarm when ··· 4597 4597 mlx5e_set_netdev_dev_addr(netdev); 4598 4598 4599 4599 #if IS_ENABLED(CONFIG_MLX5_ESWITCH) 4600 - if (MLX5_VPORT_MANAGER(mdev)) 4600 + if (MLX5_ESWITCH_MANAGER(mdev)) 4601 4601 netdev->switchdev_ops = &mlx5e_switchdev_ops; 4602 4602 #endif 4603 4603 ··· 4753 4753 4754 4754 mlx5e_enable_async_events(priv); 4755 4755 4756 - if (MLX5_VPORT_MANAGER(priv->mdev)) 4756 + if (MLX5_ESWITCH_MANAGER(priv->mdev)) 4757 4757 mlx5e_register_vport_reps(priv); 4758 4758 4759 4759 if (netdev->reg_state != NETREG_REGISTERED) ··· 4788 4788 4789 4789 queue_work(priv->wq, &priv->set_rx_mode_work); 4790 4790 4791 - if (MLX5_VPORT_MANAGER(priv->mdev)) 4791 + if (MLX5_ESWITCH_MANAGER(priv->mdev)) 4792 4792 mlx5e_unregister_vport_reps(priv); 4793 4793 4794 4794 mlx5e_disable_async_events(priv); ··· 4972 4972 return NULL; 4973 4973 4974 4974 #ifdef CONFIG_MLX5_ESWITCH 4975 - if (MLX5_VPORT_MANAGER(mdev)) { 4975 + if (MLX5_ESWITCH_MANAGER(mdev)) { 4976 4976 rpriv = mlx5e_alloc_nic_rep_priv(mdev); 4977 4977 if (!rpriv) { 4978 4978 mlx5_core_warn(mdev, "Failed to alloc NIC rep priv data\n");
+6 -2
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 823 823 struct mlx5e_rep_priv *rpriv = priv->ppriv; 824 824 struct mlx5_eswitch_rep *rep; 825 825 826 - if (!MLX5_CAP_GEN(priv->mdev, vport_group_manager)) 826 + if (!MLX5_ESWITCH_MANAGER(priv->mdev)) 827 827 return false; 828 828 829 829 rep = rpriv->rep; ··· 837 837 static bool mlx5e_is_vf_vport_rep(struct mlx5e_priv *priv) 838 838 { 839 839 struct mlx5e_rep_priv *rpriv = priv->ppriv; 840 - struct mlx5_eswitch_rep *rep = rpriv->rep; 840 + struct mlx5_eswitch_rep *rep; 841 841 842 + if (!MLX5_ESWITCH_MANAGER(priv->mdev)) 843 + return false; 844 + 845 + rep = rpriv->rep; 842 846 if (rep && rep->vport != FDB_UPLINK_VPORT) 843 847 return true; 844 848
+5 -7
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
··· 1594 1594 } 1595 1595 1596 1596 /* Public E-Switch API */ 1597 - #define ESW_ALLOWED(esw) ((esw) && MLX5_VPORT_MANAGER((esw)->dev)) 1597 + #define ESW_ALLOWED(esw) ((esw) && MLX5_ESWITCH_MANAGER((esw)->dev)) 1598 + 1598 1599 1599 1600 int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode) 1600 1601 { 1601 1602 int err; 1602 1603 int i, enabled_events; 1603 1604 1604 - if (!ESW_ALLOWED(esw)) 1605 - return 0; 1606 - 1607 - if (!MLX5_CAP_GEN(esw->dev, eswitch_flow_table) || 1605 + if (!ESW_ALLOWED(esw) || 1608 1606 !MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ft_support)) { 1609 1607 esw_warn(esw->dev, "E-Switch FDB is not supported, aborting ...\n"); 1610 1608 return -EOPNOTSUPP; ··· 1804 1806 u64 node_guid; 1805 1807 int err = 0; 1806 1808 1807 - if (!ESW_ALLOWED(esw)) 1809 + if (!MLX5_CAP_GEN(esw->dev, vport_group_manager)) 1808 1810 return -EPERM; 1809 1811 if (!LEGAL_VPORT(esw, vport) || is_multicast_ether_addr(mac)) 1810 1812 return -EINVAL; ··· 1881 1883 { 1882 1884 struct mlx5_vport *evport; 1883 1885 1884 - if (!ESW_ALLOWED(esw)) 1886 + if (!MLX5_CAP_GEN(esw->dev, vport_group_manager)) 1885 1887 return -EPERM; 1886 1888 if (!LEGAL_VPORT(esw, vport)) 1887 1889 return -EINVAL;
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 1079 1079 if (MLX5_CAP_GEN(dev, port_type) != MLX5_CAP_PORT_TYPE_ETH) 1080 1080 return -EOPNOTSUPP; 1081 1081 1082 - if (!MLX5_CAP_GEN(dev, vport_group_manager)) 1083 - return -EOPNOTSUPP; 1082 + if(!MLX5_ESWITCH_MANAGER(dev)) 1083 + return -EPERM; 1084 1084 1085 1085 if (dev->priv.eswitch->mode == SRIOV_NONE) 1086 1086 return -EOPNOTSUPP;
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
··· 32 32 33 33 #include <linux/mutex.h> 34 34 #include <linux/mlx5/driver.h> 35 + #include <linux/mlx5/eswitch.h> 35 36 36 37 #include "mlx5_core.h" 37 38 #include "fs_core.h" ··· 2653 2652 goto err; 2654 2653 } 2655 2654 2656 - if (MLX5_CAP_GEN(dev, eswitch_flow_table)) { 2655 + if (MLX5_ESWITCH_MANAGER(dev)) { 2657 2656 if (MLX5_CAP_ESW_FLOWTABLE_FDB(dev, ft_support)) { 2658 2657 err = init_fdb_root_ns(steering); 2659 2658 if (err)
+3 -2
drivers/net/ethernet/mellanox/mlx5/core/fw.c
··· 32 32 33 33 #include <linux/mlx5/driver.h> 34 34 #include <linux/mlx5/cmd.h> 35 + #include <linux/mlx5/eswitch.h> 35 36 #include <linux/module.h> 36 37 #include "mlx5_core.h" 37 38 #include "../../mlxfw/mlxfw.h" ··· 160 159 } 161 160 162 161 if (MLX5_CAP_GEN(dev, vport_group_manager) && 163 - MLX5_CAP_GEN(dev, eswitch_flow_table)) { 162 + MLX5_ESWITCH_MANAGER(dev)) { 164 163 err = mlx5_core_get_caps(dev, MLX5_CAP_ESWITCH_FLOW_TABLE); 165 164 if (err) 166 165 return err; 167 166 } 168 167 169 - if (MLX5_CAP_GEN(dev, eswitch_flow_table)) { 168 + if (MLX5_ESWITCH_MANAGER(dev)) { 170 169 err = mlx5_core_get_caps(dev, MLX5_CAP_ESWITCH); 171 170 if (err) 172 171 return err;
+5 -4
drivers/net/ethernet/mellanox/mlx5/core/lib/mpfs.c
··· 33 33 #include <linux/etherdevice.h> 34 34 #include <linux/mlx5/driver.h> 35 35 #include <linux/mlx5/mlx5_ifc.h> 36 + #include <linux/mlx5/eswitch.h> 36 37 #include "mlx5_core.h" 37 38 #include "lib/mpfs.h" 38 39 ··· 99 98 int l2table_size = 1 << MLX5_CAP_GEN(dev, log_max_l2_table); 100 99 struct mlx5_mpfs *mpfs; 101 100 102 - if (!MLX5_VPORT_MANAGER(dev)) 101 + if (!MLX5_ESWITCH_MANAGER(dev)) 103 102 return 0; 104 103 105 104 mpfs = kzalloc(sizeof(*mpfs), GFP_KERNEL); ··· 123 122 { 124 123 struct mlx5_mpfs *mpfs = dev->priv.mpfs; 125 124 126 - if (!MLX5_VPORT_MANAGER(dev)) 125 + if (!MLX5_ESWITCH_MANAGER(dev)) 127 126 return; 128 127 129 128 WARN_ON(!hlist_empty(mpfs->hash)); ··· 138 137 u32 index; 139 138 int err; 140 139 141 - if (!MLX5_VPORT_MANAGER(dev)) 140 + if (!MLX5_ESWITCH_MANAGER(dev)) 142 141 return 0; 143 142 144 143 mutex_lock(&mpfs->lock); ··· 180 179 int err = 0; 181 180 u32 index; 182 181 183 - if (!MLX5_VPORT_MANAGER(dev)) 182 + if (!MLX5_ESWITCH_MANAGER(dev)) 184 183 return 0; 185 184 186 185 mutex_lock(&mpfs->lock);
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/port.c
··· 701 701 static int mlx5_set_port_qetcr_reg(struct mlx5_core_dev *mdev, u32 *in, 702 702 int inlen) 703 703 { 704 - u32 out[MLX5_ST_SZ_DW(qtct_reg)]; 704 + u32 out[MLX5_ST_SZ_DW(qetc_reg)]; 705 705 706 706 if (!MLX5_CAP_GEN(mdev, ets)) 707 707 return -EOPNOTSUPP; ··· 713 713 static int mlx5_query_port_qetcr_reg(struct mlx5_core_dev *mdev, u32 *out, 714 714 int outlen) 715 715 { 716 - u32 in[MLX5_ST_SZ_DW(qtct_reg)]; 716 + u32 in[MLX5_ST_SZ_DW(qetc_reg)]; 717 717 718 718 if (!MLX5_CAP_GEN(mdev, ets)) 719 719 return -EOPNOTSUPP;
+6 -1
drivers/net/ethernet/mellanox/mlx5/core/sriov.c
··· 88 88 return -EBUSY; 89 89 } 90 90 91 + if (!MLX5_ESWITCH_MANAGER(dev)) 92 + goto enable_vfs_hca; 93 + 91 94 err = mlx5_eswitch_enable_sriov(dev->priv.eswitch, num_vfs, SRIOV_LEGACY); 92 95 if (err) { 93 96 mlx5_core_warn(dev, ··· 98 95 return err; 99 96 } 100 97 98 + enable_vfs_hca: 101 99 for (vf = 0; vf < num_vfs; vf++) { 102 100 err = mlx5_core_enable_hca(dev, vf + 1); 103 101 if (err) { ··· 144 140 } 145 141 146 142 out: 147 - mlx5_eswitch_disable_sriov(dev->priv.eswitch); 143 + if (MLX5_ESWITCH_MANAGER(dev)) 144 + mlx5_eswitch_disable_sriov(dev->priv.eswitch); 148 145 149 146 if (mlx5_wait_for_vf_pages(dev)) 150 147 mlx5_core_warn(dev, "timeout reclaiming VFs pages\n");
-2
drivers/net/ethernet/mellanox/mlx5/core/vport.c
··· 549 549 return -EINVAL; 550 550 if (!MLX5_CAP_GEN(mdev, vport_group_manager)) 551 551 return -EACCES; 552 - if (!MLX5_CAP_ESW(mdev, nic_vport_node_guid_modify)) 553 - return -EOPNOTSUPP; 554 552 555 553 in = kvzalloc(inlen, GFP_KERNEL); 556 554 if (!in)
+6 -3
drivers/net/ethernet/netronome/nfp/bpf/main.c
··· 81 81 82 82 ret = nfp_net_bpf_offload(nn, prog, running, extack); 83 83 /* Stop offload if replace not possible */ 84 - if (ret && prog) 85 - nfp_bpf_xdp_offload(app, nn, NULL, extack); 84 + if (ret) 85 + return ret; 86 86 87 - nn->dp.bpf_offload_xdp = prog && !ret; 87 + nn->dp.bpf_offload_xdp = !!prog; 88 88 return ret; 89 89 } 90 90 ··· 200 200 struct nfp_net *nn = netdev_priv(netdev); 201 201 202 202 if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS) 203 + return -EOPNOTSUPP; 204 + 205 + if (tcf_block_shared(f->block)) 203 206 return -EOPNOTSUPP; 204 207 205 208 switch (f->command) {
+14
drivers/net/ethernet/netronome/nfp/flower/match.c
··· 123 123 NFP_FLOWER_MASK_MPLS_Q; 124 124 125 125 frame->mpls_lse = cpu_to_be32(t_mpls); 126 + } else if (dissector_uses_key(flow->dissector, 127 + FLOW_DISSECTOR_KEY_BASIC)) { 128 + /* Check for mpls ether type and set NFP_FLOWER_MASK_MPLS_Q 129 + * bit, which indicates an mpls ether type but without any 130 + * mpls fields. 131 + */ 132 + struct flow_dissector_key_basic *key_basic; 133 + 134 + key_basic = skb_flow_dissector_target(flow->dissector, 135 + FLOW_DISSECTOR_KEY_BASIC, 136 + flow->key); 137 + if (key_basic->n_proto == cpu_to_be16(ETH_P_MPLS_UC) || 138 + key_basic->n_proto == cpu_to_be16(ETH_P_MPLS_MC)) 139 + frame->mpls_lse = cpu_to_be32(NFP_FLOWER_MASK_MPLS_Q); 126 140 } 127 141 } 128 142
+11
drivers/net/ethernet/netronome/nfp/flower/offload.c
··· 264 264 case cpu_to_be16(ETH_P_ARP): 265 265 return -EOPNOTSUPP; 266 266 267 + case cpu_to_be16(ETH_P_MPLS_UC): 268 + case cpu_to_be16(ETH_P_MPLS_MC): 269 + if (!(key_layer & NFP_FLOWER_LAYER_MAC)) { 270 + key_layer |= NFP_FLOWER_LAYER_MAC; 271 + key_size += sizeof(struct nfp_flower_mac_mpls); 272 + } 273 + break; 274 + 267 275 /* Will be included in layer 2. */ 268 276 case cpu_to_be16(ETH_P_8021Q): 269 277 break; ··· 629 621 struct nfp_repr *repr = netdev_priv(netdev); 630 622 631 623 if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS) 624 + return -EOPNOTSUPP; 625 + 626 + if (tcf_block_shared(f->block)) 632 627 return -EOPNOTSUPP; 633 628 634 629 switch (f->command) {
+1 -5
drivers/net/ethernet/netronome/nfp/nfp_main.c
··· 240 240 return pci_sriov_set_totalvfs(pf->pdev, pf->limit_vfs); 241 241 242 242 pf->limit_vfs = ~0; 243 - pci_sriov_set_totalvfs(pf->pdev, 0); /* 0 is unset */ 244 243 /* Allow any setting for backwards compatibility if symbol not found */ 245 244 if (err == -ENOENT) 246 245 return 0; ··· 667 668 668 669 err = nfp_net_pci_probe(pf); 669 670 if (err) 670 - goto err_sriov_unlimit; 671 + goto err_fw_unload; 671 672 672 673 err = nfp_hwmon_register(pf); 673 674 if (err) { ··· 679 680 680 681 err_net_remove: 681 682 nfp_net_pci_remove(pf); 682 - err_sriov_unlimit: 683 - pci_sriov_set_totalvfs(pf->pdev, 0); 684 683 err_fw_unload: 685 684 kfree(pf->rtbl); 686 685 nfp_mip_close(pf->mip); ··· 712 715 nfp_hwmon_unregister(pf); 713 716 714 717 nfp_pcie_sriov_disable(pdev); 715 - pci_sriov_set_totalvfs(pf->pdev, 0); 716 718 717 719 nfp_net_pci_remove(pf); 718 720
+1 -1
drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nffw.c
··· 232 232 err = nfp_cpp_read(cpp, nfp_resource_cpp_id(state->res), 233 233 nfp_resource_address(state->res), 234 234 fwinf, sizeof(*fwinf)); 235 - if (err < sizeof(*fwinf)) 235 + if (err < (int)sizeof(*fwinf)) 236 236 goto err_release; 237 237 238 238 if (!nffw_res_flg_init_get(fwinf))
+4 -4
drivers/net/ethernet/qlogic/qed/qed_dcbx.c
··· 709 709 p_local = &p_hwfn->p_dcbx_info->lldp_local[LLDP_NEAREST_BRIDGE]; 710 710 711 711 memcpy(params->lldp_local.local_chassis_id, p_local->local_chassis_id, 712 - ARRAY_SIZE(p_local->local_chassis_id)); 712 + sizeof(p_local->local_chassis_id)); 713 713 memcpy(params->lldp_local.local_port_id, p_local->local_port_id, 714 - ARRAY_SIZE(p_local->local_port_id)); 714 + sizeof(p_local->local_port_id)); 715 715 } 716 716 717 717 static void ··· 723 723 p_remote = &p_hwfn->p_dcbx_info->lldp_remote[LLDP_NEAREST_BRIDGE]; 724 724 725 725 memcpy(params->lldp_remote.peer_chassis_id, p_remote->peer_chassis_id, 726 - ARRAY_SIZE(p_remote->peer_chassis_id)); 726 + sizeof(p_remote->peer_chassis_id)); 727 727 memcpy(params->lldp_remote.peer_port_id, p_remote->peer_port_id, 728 - ARRAY_SIZE(p_remote->peer_port_id)); 728 + sizeof(p_remote->peer_port_id)); 729 729 } 730 730 731 731 static int
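The qed_dcbx fix above swaps `ARRAY_SIZE()` for `sizeof()` in the `memcpy()` length: `ARRAY_SIZE()` yields the element count, so for the u32 chassis/port-ID arrays the old code copied only a quarter of the payload, while `memcpy()` expects bytes. A minimal illustration with local definitions (hypothetical struct, not the driver's):

```c
#include <stdint.h>
#include <string.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

struct lldp_ids {
	uint32_t local_chassis_id[4];	/* 16 bytes of payload */
};

/* Buggy length: ARRAY_SIZE == 4, so only 4 bytes move. */
static void copy_short(struct lldp_ids *dst, const struct lldp_ids *src)
{
	memcpy(dst->local_chassis_id, src->local_chassis_id,
	       ARRAY_SIZE(src->local_chassis_id));
}

/* Fixed length: sizeof == 16, the whole array moves. */
static void copy_full(struct lldp_ids *dst, const struct lldp_ids *src)
{
	memcpy(dst->local_chassis_id, src->local_chassis_id,
	       sizeof(src->local_chassis_id));
}

/* Demonstrate the truncation: after copy_short(), only the first
 * element (4 bytes) of the destination is populated. */
static int copy_selftest(void)
{
	struct lldp_ids src = { {0x11111111u, 0x22222222u,
				 0x33333333u, 0x44444444u} };
	struct lldp_ids a = { {0} }, b = { {0} };

	copy_short(&a, &src);
	copy_full(&b, &src);
	return a.local_chassis_id[0] == 0x11111111u &&
	       a.local_chassis_id[1] == 0 &&
	       b.local_chassis_id[3] == 0x44444444u;
}
```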
+1 -1
drivers/net/ethernet/qlogic/qed/qed_dev.c
··· 1804 1804 DP_INFO(p_hwfn, "Failed to update driver state\n"); 1805 1805 1806 1806 rc = qed_mcp_ov_update_eswitch(p_hwfn, p_hwfn->p_main_ptt, 1807 - QED_OV_ESWITCH_VEB); 1807 + QED_OV_ESWITCH_NONE); 1808 1808 if (rc) 1809 1809 DP_INFO(p_hwfn, "Failed to update eswitch mode\n"); 1810 1810 }
+8
drivers/net/ethernet/qlogic/qed/qed_main.c
··· 789 789 /* We want a minimum of one slowpath and one fastpath vector per hwfn */ 790 790 cdev->int_params.in.min_msix_cnt = cdev->num_hwfns * 2; 791 791 792 + if (is_kdump_kernel()) { 793 + DP_INFO(cdev, 794 + "Kdump kernel: Limit the max number of requested MSI-X vectors to %hd\n", 795 + cdev->int_params.in.min_msix_cnt); 796 + cdev->int_params.in.num_vectors = 797 + cdev->int_params.in.min_msix_cnt; 798 + } 799 + 792 800 rc = qed_set_int_mode(cdev, false); 793 801 if (rc) { 794 802 DP_ERR(cdev, "qed_slowpath_setup_int ERR\n");
+17 -2
drivers/net/ethernet/qlogic/qed/qed_sriov.c
··· 4513 4513 static int qed_sriov_enable(struct qed_dev *cdev, int num) 4514 4514 { 4515 4515 struct qed_iov_vf_init_params params; 4516 + struct qed_hwfn *hwfn; 4517 + struct qed_ptt *ptt; 4516 4518 int i, j, rc; 4517 4519 4518 4520 if (num >= RESC_NUM(&cdev->hwfns[0], QED_VPORT)) { ··· 4527 4525 4528 4526 /* Initialize HW for VF access */ 4529 4527 for_each_hwfn(cdev, j) { 4530 - struct qed_hwfn *hwfn = &cdev->hwfns[j]; 4531 - struct qed_ptt *ptt = qed_ptt_acquire(hwfn); 4528 + hwfn = &cdev->hwfns[j]; 4529 + ptt = qed_ptt_acquire(hwfn); 4532 4530 4533 4531 /* Make sure not to use more than 16 queues per VF */ 4534 4532 params.num_queues = min_t(int, ··· 4563 4561 DP_ERR(cdev, "Failed to enable sriov [%d]\n", rc); 4564 4562 goto err; 4565 4563 } 4564 + 4565 + hwfn = QED_LEADING_HWFN(cdev); 4566 + ptt = qed_ptt_acquire(hwfn); 4567 + if (!ptt) { 4568 + DP_ERR(hwfn, "Failed to acquire ptt\n"); 4569 + rc = -EBUSY; 4570 + goto err; 4571 + } 4572 + 4573 + rc = qed_mcp_ov_update_eswitch(hwfn, ptt, QED_OV_ESWITCH_VEB); 4574 + if (rc) 4575 + DP_INFO(cdev, "Failed to update eswitch mode\n"); 4576 + qed_ptt_release(hwfn, ptt); 4566 4577 4567 4578 return num; 4568 4579
+8 -2
drivers/net/ethernet/qlogic/qede/qede_ptp.c
··· 337 337 { 338 338 struct qede_ptp *ptp = edev->ptp; 339 339 340 - if (!ptp) 341 - return -EIO; 340 + if (!ptp) { 341 + info->so_timestamping = SOF_TIMESTAMPING_TX_SOFTWARE | 342 + SOF_TIMESTAMPING_RX_SOFTWARE | 343 + SOF_TIMESTAMPING_SOFTWARE; 344 + info->phc_index = -1; 345 + 346 + return 0; 347 + } 342 348 343 349 info->so_timestamping = SOF_TIMESTAMPING_TX_SOFTWARE | 344 350 SOF_TIMESTAMPING_RX_SOFTWARE |
+1
drivers/net/ethernet/sfc/farch.c
··· 2794 2794 if (!state) 2795 2795 return -ENOMEM; 2796 2796 efx->filter_state = state; 2797 + init_rwsem(&state->lock); 2797 2798 2798 2799 table = &state->table[EFX_FARCH_FILTER_TABLE_RX_IP]; 2799 2800 table->id = EFX_FARCH_FILTER_TABLE_RX_IP;
+12
drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
··· 407 407 } 408 408 } 409 409 410 + static void dwmac4_set_bfsize(void __iomem *ioaddr, int bfsize, u32 chan) 411 + { 412 + u32 value = readl(ioaddr + DMA_CHAN_RX_CONTROL(chan)); 413 + 414 + value &= ~DMA_RBSZ_MASK; 415 + value |= (bfsize << DMA_RBSZ_SHIFT) & DMA_RBSZ_MASK; 416 + 417 + writel(value, ioaddr + DMA_CHAN_RX_CONTROL(chan)); 418 + } 419 + 410 420 const struct stmmac_dma_ops dwmac4_dma_ops = { 411 421 .reset = dwmac4_dma_reset, 412 422 .init = dwmac4_dma_init, ··· 441 431 .set_rx_tail_ptr = dwmac4_set_rx_tail_ptr, 442 432 .set_tx_tail_ptr = dwmac4_set_tx_tail_ptr, 443 433 .enable_tso = dwmac4_enable_tso, 434 + .set_bfsize = dwmac4_set_bfsize, 444 435 }; 445 436 446 437 const struct stmmac_dma_ops dwmac410_dma_ops = { ··· 468 457 .set_rx_tail_ptr = dwmac4_set_rx_tail_ptr, 469 458 .set_tx_tail_ptr = dwmac4_set_tx_tail_ptr, 470 459 .enable_tso = dwmac4_enable_tso, 460 + .set_bfsize = dwmac4_set_bfsize, 471 461 };
+2
drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.h
··· 120 120 121 121 /* DMA Rx Channel X Control register defines */ 122 122 #define DMA_CONTROL_SR BIT(0) 123 + #define DMA_RBSZ_MASK GENMASK(14, 1) 124 + #define DMA_RBSZ_SHIFT 1 123 125 124 126 /* Interrupt status per channel */ 125 127 #define DMA_CHAN_STATUS_REB GENMASK(21, 19)
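The new `DMA_RBSZ_MASK`/`DMA_RBSZ_SHIFT` pair and `dwmac4_set_bfsize()` in the dwmac4 hunks above follow the usual read-modify-write idiom for a register bit-field: clear the field, then OR in the shifted, masked value. A standalone sketch against a plain variable instead of MMIO, with `GENMASK` redefined locally for 32-bit values (hypothetical names, not the driver code):

```c
#include <stdint.h>

/* Local 32-bit stand-in for the kernel's GENMASK(h, l); valid for h <= 31. */
#define GENMASK(h, l) (((~0u) << (l)) & (~0u >> (31 - (h))))

#define RBSZ_MASK  GENMASK(14, 1)	/* bits 14..1 hold the buffer size */
#define RBSZ_SHIFT 1

/* Insert bfsize into the RBSZ field, preserving all other register bits. */
static uint32_t set_rbsz(uint32_t reg, uint32_t bfsize)
{
	reg &= ~RBSZ_MASK;
	reg |= (bfsize << RBSZ_SHIFT) & RBSZ_MASK;
	return reg;
}
```

In the driver the same two steps are bracketed by `readl()`/`writel()` on `DMA_CHAN_RX_CONTROL(chan)`, which is why the mask and shift are defined alongside the register layout in the header.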
+3
drivers/net/ethernet/stmicro/stmmac/hwif.h
··· 183 183 void (*set_rx_tail_ptr)(void __iomem *ioaddr, u32 tail_ptr, u32 chan); 184 184 void (*set_tx_tail_ptr)(void __iomem *ioaddr, u32 tail_ptr, u32 chan); 185 185 void (*enable_tso)(void __iomem *ioaddr, bool en, u32 chan); 186 + void (*set_bfsize)(void __iomem *ioaddr, int bfsize, u32 chan); 186 187 }; 187 188 188 189 #define stmmac_reset(__priv, __args...) \ ··· 236 235 stmmac_do_void_callback(__priv, dma, set_tx_tail_ptr, __args) 237 236 #define stmmac_enable_tso(__priv, __args...) \ 238 237 stmmac_do_void_callback(__priv, dma, enable_tso, __args) 238 + #define stmmac_set_dma_bfsize(__priv, __args...) \ 239 + stmmac_do_void_callback(__priv, dma, set_bfsize, __args) 239 240 240 241 struct mac_device_info; 241 242 struct net_device;
+2
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 1804 1804 1805 1805 stmmac_dma_rx_mode(priv, priv->ioaddr, rxmode, chan, 1806 1806 rxfifosz, qmode); 1807 + stmmac_set_dma_bfsize(priv, priv->ioaddr, priv->dma_buf_sz, 1808 + chan); 1807 1809 } 1808 1810 1809 1811 for (chan = 0; chan < tx_channels_count; chan++) {
+1 -1
drivers/net/geneve.c
··· 476 476 out_unlock: 477 477 rcu_read_unlock(); 478 478 out: 479 - NAPI_GRO_CB(skb)->flush |= flush; 479 + skb_gro_flush_final(skb, pp, flush); 480 480 481 481 return pp; 482 482 }
+1 -1
drivers/net/hyperv/hyperv_net.h
··· 210 210 void netvsc_channel_cb(void *context); 211 211 int netvsc_poll(struct napi_struct *napi, int budget); 212 212 213 - void rndis_set_subchannel(struct work_struct *w); 213 + int rndis_set_subchannel(struct net_device *ndev, struct netvsc_device *nvdev); 214 214 int rndis_filter_open(struct netvsc_device *nvdev); 215 215 int rndis_filter_close(struct netvsc_device *nvdev); 216 216 struct netvsc_device *rndis_filter_device_add(struct hv_device *dev,
+36 -1
drivers/net/hyperv/netvsc.c
··· 65 65 VM_PKT_DATA_INBAND, 0); 66 66 } 67 67 68 + /* Worker to setup sub channels on initial setup 69 + * Initial hotplug event occurs in softirq context 70 + * and can't wait for channels. 71 + */ 72 + static void netvsc_subchan_work(struct work_struct *w) 73 + { 74 + struct netvsc_device *nvdev = 75 + container_of(w, struct netvsc_device, subchan_work); 76 + struct rndis_device *rdev; 77 + int i, ret; 78 + 79 + /* Avoid deadlock with device removal already under RTNL */ 80 + if (!rtnl_trylock()) { 81 + schedule_work(w); 82 + return; 83 + } 84 + 85 + rdev = nvdev->extension; 86 + if (rdev) { 87 + ret = rndis_set_subchannel(rdev->ndev, nvdev); 88 + if (ret == 0) { 89 + netif_device_attach(rdev->ndev); 90 + } else { 91 + /* fallback to only primary channel */ 92 + for (i = 1; i < nvdev->num_chn; i++) 93 + netif_napi_del(&nvdev->chan_table[i].napi); 94 + 95 + nvdev->max_chn = 1; 96 + nvdev->num_chn = 1; 97 + } 98 + } 99 + 100 + rtnl_unlock(); 101 + } 102 + 68 103 static struct netvsc_device *alloc_net_device(void) 69 104 { 70 105 struct netvsc_device *net_device; ··· 116 81 117 82 init_completion(&net_device->channel_init_wait); 118 83 init_waitqueue_head(&net_device->subchan_open); 119 - INIT_WORK(&net_device->subchan_work, rndis_set_subchannel); 84 + INIT_WORK(&net_device->subchan_work, netvsc_subchan_work); 120 85 121 86 return net_device; 122 87 }
+16 -1
drivers/net/hyperv/netvsc_drv.c
··· 905 905 if (IS_ERR(nvdev)) 906 906 return PTR_ERR(nvdev); 907 907 908 - /* Note: enable and attach happen when sub-channels setup */ 908 + if (nvdev->num_chn > 1) { 909 + ret = rndis_set_subchannel(ndev, nvdev); 909 910 911 + /* if unavailable, just proceed with one queue */ 912 + if (ret) { 913 + nvdev->max_chn = 1; 914 + nvdev->num_chn = 1; 915 + } 916 + } 917 + 918 + /* In any case device is now ready */ 919 + netif_device_attach(ndev); 920 + 921 + /* Note: enable and attach happen when sub-channels setup */ 910 922 netif_carrier_off(ndev); 911 923 912 924 if (netif_running(ndev)) { ··· 2100 2088 } 2101 2089 2102 2090 memcpy(net->dev_addr, device_info.mac_adr, ETH_ALEN); 2091 + 2092 + if (nvdev->num_chn > 1) 2093 + schedule_work(&nvdev->subchan_work); 2103 2094 2104 2095 /* hw_features computed in rndis_netdev_set_hwcaps() */ 2105 2096 net->features = net->hw_features |
+12 -49
drivers/net/hyperv/rndis_filter.c
··· 1062 1062 * This breaks overlap of processing the host message for the 1063 1063 * new primary channel with the initialization of sub-channels. 1064 1064 */ 1065 - void rndis_set_subchannel(struct work_struct *w) 1065 + int rndis_set_subchannel(struct net_device *ndev, struct netvsc_device *nvdev) 1066 1066 { 1067 - struct netvsc_device *nvdev 1068 - = container_of(w, struct netvsc_device, subchan_work); 1069 1067 struct nvsp_message *init_packet = &nvdev->channel_init_pkt; 1070 - struct net_device_context *ndev_ctx; 1071 - struct rndis_device *rdev; 1072 - struct net_device *ndev; 1073 - struct hv_device *hv_dev; 1068 + struct net_device_context *ndev_ctx = netdev_priv(ndev); 1069 + struct hv_device *hv_dev = ndev_ctx->device_ctx; 1070 + struct rndis_device *rdev = nvdev->extension; 1074 1071 int i, ret; 1075 1072 1076 - if (!rtnl_trylock()) { 1077 - schedule_work(w); 1078 - return; 1079 - } 1080 - 1081 - rdev = nvdev->extension; 1082 - if (!rdev) 1083 - goto unlock; /* device was removed */ 1084 - 1085 - ndev = rdev->ndev; 1086 - ndev_ctx = netdev_priv(ndev); 1087 - hv_dev = ndev_ctx->device_ctx; 1073 + ASSERT_RTNL(); 1088 1074 1089 1075 memset(init_packet, 0, sizeof(struct nvsp_message)); 1090 1076 init_packet->hdr.msg_type = NVSP_MSG5_TYPE_SUBCHANNEL; ··· 1086 1100 VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED); 1087 1101 if (ret) { 1088 1102 netdev_err(ndev, "sub channel allocate send failed: %d\n", ret); 1089 - goto failed; 1103 + return ret; 1090 1104 } 1091 1105 1092 1106 wait_for_completion(&nvdev->channel_init_wait); 1093 1107 if (init_packet->msg.v5_msg.subchn_comp.status != NVSP_STAT_SUCCESS) { 1094 1108 netdev_err(ndev, "sub channel request failed\n"); 1095 - goto failed; 1109 + return -EIO; 1096 1110 } 1097 1111 1098 1112 nvdev->num_chn = 1 + ··· 1111 1125 for (i = 0; i < VRSS_SEND_TAB_SIZE; i++) 1112 1126 ndev_ctx->tx_table[i] = i % nvdev->num_chn; 1113 1127 1114 - netif_device_attach(ndev); 1115 - rtnl_unlock(); 1116 - return; 1117 - 1118 - 
··· 1062 1062 * This breaks overlap of processing the host message for the 1063 1063 * new primary channel with the initialization of sub-channels. 1064 1064 */ 1065 - void rndis_set_subchannel(struct work_struct *w) 1065 + int rndis_set_subchannel(struct net_device *ndev, struct netvsc_device *nvdev) 1066 1066 { 1067 - struct netvsc_device *nvdev 1068 - = container_of(w, struct netvsc_device, subchan_work); 1069 1067 struct nvsp_message *init_packet = &nvdev->channel_init_pkt; 1070 - struct net_device_context *ndev_ctx; 1071 - struct rndis_device *rdev; 1072 - struct net_device *ndev; 1073 - struct hv_device *hv_dev; 1068 + struct net_device_context *ndev_ctx = netdev_priv(ndev); 1069 + struct hv_device *hv_dev = ndev_ctx->device_ctx; 1070 + struct rndis_device *rdev = nvdev->extension; 1074 1071 int i, ret; 1075 1072 1076 - if (!rtnl_trylock()) { 1077 - schedule_work(w); 1078 - return; 1079 - } 1080 - 1081 - rdev = nvdev->extension; 1082 - if (!rdev) 1083 - goto unlock; /* device was removed */ 1084 - 1085 - ndev = rdev->ndev; 1086 - ndev_ctx = netdev_priv(ndev); 1087 - hv_dev = ndev_ctx->device_ctx; 1073 + ASSERT_RTNL(); 1088 1074 1089 1075 memset(init_packet, 0, sizeof(struct nvsp_message)); 1090 1076 init_packet->hdr.msg_type = NVSP_MSG5_TYPE_SUBCHANNEL; ··· 1086 1100 VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED); 1087 1101 if (ret) { 1088 1102 netdev_err(ndev, "sub channel allocate send failed: %d\n", ret); 1089 - goto failed; 1103 + return ret; 1090 1104 } 1091 1105 1092 1106 wait_for_completion(&nvdev->channel_init_wait); 1093 1107 if (init_packet->msg.v5_msg.subchn_comp.status != NVSP_STAT_SUCCESS) { 1094 1108 netdev_err(ndev, "sub channel request failed\n"); 1095 - goto failed; 1109 + return -EIO; 1096 1110 } 1097 1111 1098 1112 nvdev->num_chn = 1 + ··· 1111 1125 for (i = 0; i < VRSS_SEND_TAB_SIZE; i++) 1112 1126 ndev_ctx->tx_table[i] = i % nvdev->num_chn; 1113 1127 1114 - netif_device_attach(ndev); 1115 - rtnl_unlock(); 1116 - return; 1117 - 1118 - failed: 1119 - /* fallback to only primary channel */ 1120 - for (i = 1; i < nvdev->num_chn; i++) 1121 - netif_napi_del(&nvdev->chan_table[i].napi); 1122 - 1123 - nvdev->max_chn = 1; 1124 - nvdev->num_chn = 1; 1125 - 1126 - netif_device_attach(ndev); 1127 - unlock: 1128 - rtnl_unlock(); 1128 + return 0; 1129 1129 } 1130 1130 1131 1131 static int rndis_netdev_set_hwcaps(struct rndis_device *rndis_device, ··· 1332 1360 netif_napi_add(net, &net_device->chan_table[i].napi, 1333 1361 netvsc_poll, NAPI_POLL_WEIGHT); 1334 1362 1335 - if (net_device->num_chn > 1) 1336 - schedule_work(&net_device->subchan_work); 1363 + return net_device; 1337 1364 1338 1365 out: 1339 - /* if unavailable, just proceed with one queue */ 1340 - if (ret) { 1341 - net_device->max_chn = 1; 1342 - net_device->num_chn = 1; 1343 - } 1344 - 1345 - /* No sub channels, device is ready */ 1346 - if (net_device->num_chn == 1) 1347 - netif_device_attach(net); 1348 - 1349 - return net_device; 1366 + /* setting up multiple channels failed */ 1367 + net_device->max_chn = 1; 1368 + net_device->num_chn = 1; 1369 1370 1350 err_dev_remv: 1351 1371 rndis_filter_device_remove(dev, net_device);
+28 -8
drivers/net/ipvlan/ipvlan_main.c
··· 75 75 { 76 76 struct ipvl_dev *ipvlan; 77 77 struct net_device *mdev = port->dev; 78 - int err = 0; 78 + unsigned int flags; 79 + int err; 79 80 80 81 ASSERT_RTNL(); 81 82 if (port->mode != nval) { 83 + list_for_each_entry(ipvlan, &port->ipvlans, pnode) { 84 + flags = ipvlan->dev->flags; 85 + if (nval == IPVLAN_MODE_L3 || nval == IPVLAN_MODE_L3S) { 86 + err = dev_change_flags(ipvlan->dev, 87 + flags | IFF_NOARP); 88 + } else { 89 + err = dev_change_flags(ipvlan->dev, 90 + flags & ~IFF_NOARP); 91 + } 92 + if (unlikely(err)) 93 + goto fail; 94 + } 82 95 if (nval == IPVLAN_MODE_L3S) { 83 96 /* New mode is L3S */ 84 97 err = ipvlan_register_nf_hook(read_pnet(&port->pnet)); ··· 99 86 mdev->l3mdev_ops = &ipvl_l3mdev_ops; 100 87 mdev->priv_flags |= IFF_L3MDEV_MASTER; 101 88 } else 102 - return err; 89 + goto fail; 103 90 } else if (port->mode == IPVLAN_MODE_L3S) { 104 91 /* Old mode was L3S */ 105 92 mdev->priv_flags &= ~IFF_L3MDEV_MASTER; 106 93 ipvlan_unregister_nf_hook(read_pnet(&port->pnet)); 107 94 mdev->l3mdev_ops = NULL; 108 95 } 109 - list_for_each_entry(ipvlan, &port->ipvlans, pnode) { 110 - if (nval == IPVLAN_MODE_L3 || nval == IPVLAN_MODE_L3S) 111 - ipvlan->dev->flags |= IFF_NOARP; 112 - else 113 - ipvlan->dev->flags &= ~IFF_NOARP; 114 - } 115 96 port->mode = nval; 116 97 } 98 + return 0; 99 + 100 + fail: 101 + /* Undo the flags changes that have been done so far. */ 102 + list_for_each_entry_continue_reverse(ipvlan, &port->ipvlans, pnode) { 103 + flags = ipvlan->dev->flags; 104 + if (port->mode == IPVLAN_MODE_L3 || 105 + port->mode == IPVLAN_MODE_L3S) 106 + dev_change_flags(ipvlan->dev, flags | IFF_NOARP); 107 + else 108 + dev_change_flags(ipvlan->dev, flags & ~IFF_NOARP); 109 + } 110 + 117 111 return err; 118 112 } 119 113
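The ipvlan hunk above converts direct flag twiddling into `dev_change_flags()` calls and, on failure, walks the already-modified entries backwards with `list_for_each_entry_continue_reverse()` to undo them. A standalone sketch of that apply-or-roll-back pattern over an array (toy types and names, not the kernel's list iterators):

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy "device": a flags word plus a knob that can make updates fail. */
struct toy_dev {
	unsigned int flags;
	bool fail_update;
};

#define TOY_NOARP 0x1u

static int toy_change_flags(struct toy_dev *d, unsigned int flags)
{
	if (d->fail_update)
		return -1;
	d->flags = flags;
	return 0;
}

/* Set (or clear) TOY_NOARP on every device; on error, restore the
 * devices already changed, walking back from the failure point. */
static int toy_set_noarp_all(struct toy_dev *devs, size_t n, bool set)
{
	size_t i;
	int err = 0;

	for (i = 0; i < n; i++) {
		unsigned int flags = devs[i].flags;

		err = toy_change_flags(&devs[i],
				       set ? flags | TOY_NOARP
					   : flags & ~TOY_NOARP);
		if (err)
			goto rollback;
	}
	return 0;

rollback:
	while (i-- > 0) {
		unsigned int flags = devs[i].flags;

		/* Old mode had the opposite setting: put it back. */
		toy_change_flags(&devs[i],
				 set ? flags & ~TOY_NOARP
				     : flags | TOY_NOARP);
	}
	return err;
}

/* Third device fails: the first two must be rolled back; once the
 * failure is cleared, the whole update must go through. */
static bool toy_selftest(void)
{
	struct toy_dev devs[3] = { {0, false}, {0, false}, {0, true} };

	if (toy_set_noarp_all(devs, 3, true) == 0)
		return false;
	if ((devs[0].flags | devs[1].flags) & TOY_NOARP)
		return false;
	devs[2].fail_update = false;
	if (toy_set_noarp_all(devs, 3, true) != 0)
		return false;
	return (devs[0].flags & devs[1].flags & devs[2].flags) & TOY_NOARP;
}
```

The fix matters because `dev_change_flags()` can fail, unlike the old direct `ipvlan->dev->flags` writes, so a partial mode switch must be unwound rather than left half-applied.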
+1 -1
drivers/net/phy/dp83tc811.c
··· 222 222 if (err < 0) 223 223 return err; 224 224 225 - err = phy_write(phydev, MII_DP83811_INT_STAT1, 0); 225 + err = phy_write(phydev, MII_DP83811_INT_STAT2, 0); 226 226 } 227 227 228 228 return err;
+34 -3
drivers/net/usb/lan78xx.c
··· 64 64 #define DEFAULT_RX_CSUM_ENABLE (true) 65 65 #define DEFAULT_TSO_CSUM_ENABLE (true) 66 66 #define DEFAULT_VLAN_FILTER_ENABLE (true) 67 + #define DEFAULT_VLAN_RX_OFFLOAD (true) 67 68 #define TX_OVERHEAD (8) 68 69 #define RXW_PADDING 2 69 70 ··· 2299 2298 if ((ll_mtu % dev->maxpacket) == 0) 2300 2299 return -EDOM; 2301 2300 2302 - ret = lan78xx_set_rx_max_frame_length(dev, new_mtu + ETH_HLEN); 2301 + ret = lan78xx_set_rx_max_frame_length(dev, new_mtu + VLAN_ETH_HLEN); 2303 2302 2304 2303 netdev->mtu = new_mtu; 2305 2304 ··· 2365 2364 } 2366 2365 2367 2366 if (features & NETIF_F_HW_VLAN_CTAG_RX) 2367 + pdata->rfe_ctl |= RFE_CTL_VLAN_STRIP_; 2368 + else 2369 + pdata->rfe_ctl &= ~RFE_CTL_VLAN_STRIP_; 2370 + 2371 + if (features & NETIF_F_HW_VLAN_CTAG_FILTER) 2368 2372 pdata->rfe_ctl |= RFE_CTL_VLAN_FILTER_; 2369 2373 else 2370 2374 pdata->rfe_ctl &= ~RFE_CTL_VLAN_FILTER_; ··· 2593 2587 buf |= FCT_TX_CTL_EN_; 2594 2588 ret = lan78xx_write_reg(dev, FCT_TX_CTL, buf); 2595 2589 2596 - ret = lan78xx_set_rx_max_frame_length(dev, dev->net->mtu + ETH_HLEN); 2590 + ret = lan78xx_set_rx_max_frame_length(dev, 2591 + dev->net->mtu + VLAN_ETH_HLEN); 2597 2592 2598 2593 ret = lan78xx_read_reg(dev, MAC_RX, &buf); 2599 2594 buf |= MAC_RX_RXEN_; ··· 2982 2975 if (DEFAULT_TSO_CSUM_ENABLE) 2983 2976 dev->net->features |= NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_SG; 2984 2977 2978 + if (DEFAULT_VLAN_RX_OFFLOAD) 2979 + dev->net->features |= NETIF_F_HW_VLAN_CTAG_RX; 2980 + 2981 + if (DEFAULT_VLAN_FILTER_ENABLE) 2982 + dev->net->features |= NETIF_F_HW_VLAN_CTAG_FILTER; 2983 + 2985 2984 dev->net->hw_features = dev->net->features; 2986 2985 2987 2986 ret = lan78xx_setup_irq_domain(dev); ··· 3052 3039 struct sk_buff *skb, 3053 3040 u32 rx_cmd_a, u32 rx_cmd_b) 3054 3041 { 3042 + /* HW Checksum offload appears to be flawed if used when not stripping 3043 + * VLAN headers. Drop back to S/W checksums under these conditions. 3044 + */ 3055 3045 if (!(dev->net->features & NETIF_F_RXCSUM) || 3056 - unlikely(rx_cmd_a & RX_CMD_A_ICSM_)) { 3046 + unlikely(rx_cmd_a & RX_CMD_A_ICSM_) || 3047 + ((rx_cmd_a & RX_CMD_A_FVTG_) && 3048 + !(dev->net->features & NETIF_F_HW_VLAN_CTAG_RX))) { 3057 3049 skb->ip_summed = CHECKSUM_NONE; 3058 3050 } else { 3059 3051 skb->csum = ntohs((u16)(rx_cmd_b >> RX_CMD_B_CSUM_SHIFT_)); 3060 3052 skb->ip_summed = CHECKSUM_COMPLETE; 3061 3053 } 3054 + } 3055 + 3056 + static void lan78xx_rx_vlan_offload(struct lan78xx_net *dev, 3057 + struct sk_buff *skb, 3058 + u32 rx_cmd_a, u32 rx_cmd_b) 3059 + { 3060 + if ((dev->net->features & NETIF_F_HW_VLAN_CTAG_RX) && 3061 + (rx_cmd_a & RX_CMD_A_FVTG_)) 3062 + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), 3063 + (rx_cmd_b & 0xffff)); 3062 3064 } 3063 3065 3064 3066 static void lan78xx_skb_return(struct lan78xx_net *dev, struct sk_buff *skb) ··· 3140 3112 if (skb->len == size) { 3141 3113 lan78xx_rx_csum_offload(dev, skb, 3142 3114 rx_cmd_a, rx_cmd_b); 3115 + lan78xx_rx_vlan_offload(dev, skb, 3116 + rx_cmd_a, rx_cmd_b); 3143 3117 3144 3118 skb_trim(skb, skb->len - 4); /* remove fcs */ 3145 3119 skb->truesize = size + sizeof(struct sk_buff); ··· 3160 3130 skb_set_tail_pointer(skb2, size); 3161 3131 3162 3132 lan78xx_rx_csum_offload(dev, skb2, rx_cmd_a, rx_cmd_b); 3133 + lan78xx_rx_vlan_offload(dev, skb2, rx_cmd_a, rx_cmd_b); 3163 3134 3164 3135 skb_trim(skb2, skb2->len - 4); /* remove fcs */ 3165 3136 skb2->truesize = size + sizeof(struct sk_buff);
+2 -1
drivers/net/usb/r8152.c
··· 3962 3962 #ifdef CONFIG_PM_SLEEP 3963 3963 unregister_pm_notifier(&tp->pm_notifier); 3964 3964 #endif 3965 - napi_disable(&tp->napi); 3965 + if (!test_bit(RTL8152_UNPLUG, &tp->flags)) 3966 + napi_disable(&tp->napi); 3966 3967 clear_bit(WORK_ENABLE, &tp->flags); 3967 3968 usb_kill_urb(tp->intr_urb); 3968 3969 cancel_delayed_work_sync(&tp->schedule);
+19 -11
drivers/net/virtio_net.c
··· 53 53 /* Amount of XDP headroom to prepend to packets for use by xdp_adjust_head */ 54 54 #define VIRTIO_XDP_HEADROOM 256 55 55 56 + /* Separating two types of XDP xmit */ 57 + #define VIRTIO_XDP_TX BIT(0) 58 + #define VIRTIO_XDP_REDIR BIT(1) 59 + 56 60 /* RX packet size EWMA. The average packet size is used to determine the packet 57 61 * buffer size when refilling RX rings. As the entire RX ring may be refilled 58 62 * at once, the weight is chosen so that the EWMA will be insensitive to short- ··· 586 582 struct receive_queue *rq, 587 583 void *buf, void *ctx, 588 584 unsigned int len, 589 - bool *xdp_xmit) 585 + unsigned int *xdp_xmit) 590 586 { 591 587 struct sk_buff *skb; 592 588 struct bpf_prog *xdp_prog; ··· 658 654 trace_xdp_exception(vi->dev, xdp_prog, act); 659 655 goto err_xdp; 660 656 } 661 - *xdp_xmit = true; 657 + *xdp_xmit |= VIRTIO_XDP_TX; 662 658 rcu_read_unlock(); 663 659 goto xdp_xmit; 664 660 case XDP_REDIRECT: 665 661 err = xdp_do_redirect(dev, &xdp, xdp_prog); 666 662 if (err) 667 663 goto err_xdp; 668 - *xdp_xmit = true; 664 + *xdp_xmit |= VIRTIO_XDP_REDIR; 669 665 rcu_read_unlock(); 670 666 goto xdp_xmit; 671 667 default: ··· 727 723 void *buf, 728 724 void *ctx, 729 725 unsigned int len, 730 - bool *xdp_xmit) 726 + unsigned int *xdp_xmit) 731 727 { 732 728 struct virtio_net_hdr_mrg_rxbuf *hdr = buf; 733 729 u16 num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers); ··· 822 818 put_page(xdp_page); 823 819 goto err_xdp; 824 820 } 825 - *xdp_xmit = true; 821 + *xdp_xmit |= VIRTIO_XDP_TX; 826 822 if (unlikely(xdp_page != page)) 827 823 put_page(page); 828 824 rcu_read_unlock(); ··· 834 830 put_page(xdp_page); 835 831 goto err_xdp; 836 832 } 837 - *xdp_xmit = true; 833 + *xdp_xmit |= VIRTIO_XDP_REDIR; 838 834 if (unlikely(xdp_page != page)) 839 835 put_page(page); 840 836 rcu_read_unlock(); ··· 943 939 } 944 940 945 941 static int receive_buf(struct virtnet_info *vi, struct receive_queue *rq, 946 - void *buf, unsigned int len, void **ctx, bool *xdp_xmit) 942 + void *buf, unsigned int len, void **ctx, 943 + unsigned int *xdp_xmit) 947 944 { 948 945 struct net_device *dev = vi->dev; 949 946 struct sk_buff *skb; ··· 1237 1232 } 1238 1233 } 1239 1234 1240 - static int virtnet_receive(struct receive_queue *rq, int budget, bool *xdp_xmit) 1235 + static int virtnet_receive(struct receive_queue *rq, int budget, 1236 + unsigned int *xdp_xmit) 1241 1237 { 1242 1238 struct virtnet_info *vi = rq->vq->vdev->priv; 1243 1239 unsigned int len, received = 0, bytes = 0; ··· 1327 1321 struct virtnet_info *vi = rq->vq->vdev->priv; 1328 1322 struct send_queue *sq; 1329 1323 unsigned int received, qp; 1330 - bool xdp_xmit = false; 1324 + unsigned int xdp_xmit = 0; 1331 1325 1332 1326 virtnet_poll_cleantx(rq); 1333 1327 ··· 1337 1331 if (received < budget) 1338 1332 virtqueue_napi_complete(napi, rq->vq, received); 1339 1333 1340 - if (xdp_xmit) { 1334 + if (xdp_xmit & VIRTIO_XDP_REDIR) 1335 + xdp_do_flush_map(); 1336 + 1337 + if (xdp_xmit & VIRTIO_XDP_TX) { 1341 1338 qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + 1342 1339 smp_processor_id(); 1343 1340 sq = &vi->sq[qp]; 1344 1341 virtqueue_kick(sq->vq); 1345 - xdp_do_flush_map(); 1346 1342 } 1347 1343 1348 1344 return received;
+1 -3
drivers/net/vxlan.c
··· 623 623 flush = 0; 624 624 625 625 out: 626 - skb_gro_remcsum_cleanup(skb, &grc); 627 - skb->remcsum_offload = 0; 628 - NAPI_GRO_CB(skb)->flush |= flush; 626 + skb_gro_flush_final_remcsum(skb, pp, flush, &grc); 629 627 630 628 return pp; 631 629 }
+1
drivers/nvdimm/claim.c
··· 278 278 return -EIO; 279 279 if (memcpy_mcsafe(buf, nsio->addr + offset, size) != 0) 280 280 return -EIO; 281 + return 0; 281 282 } 282 283 283 284 if (unlikely(is_bad_pmem(&nsio->bb, sector, sz_align))) {
+4
drivers/nvmem/core.c
··· 936 936 return cell; 937 937 } 938 938 939 + /* NULL cell_id only allowed for device tree; invalid otherwise */ 940 + if (!cell_id) 941 + return ERR_PTR(-EINVAL); 942 + 939 943 return nvmem_cell_get_from_list(cell_id); 940 944 } 941 945 EXPORT_SYMBOL_GPL(nvmem_cell_get);
-1
drivers/pci/controller/dwc/Kconfig
··· 58 58 depends on PCI && PCI_MSI_IRQ_DOMAIN 59 59 select PCIE_DW_HOST 60 60 select PCIE_DW_PLAT 61 - default y 62 61 help 63 62 Enables support for the PCIe controller in the Designware IP to 64 63 work in host mode. There are two instances of PCIe controller in
+2
drivers/pci/controller/pci-ftpci100.c
··· 355 355 irq = of_irq_get(intc, 0); 356 356 if (irq <= 0) { 357 357 dev_err(p->dev, "failed to get parent IRQ\n"); 358 + of_node_put(intc); 358 359 return irq ?: -EINVAL; 359 360 } 360 361 361 362 p->irqdomain = irq_domain_add_linear(intc, PCI_NUM_INTX, 362 363 &faraday_pci_irqdomain_ops, p); 364 + of_node_put(intc); 363 365 if (!p->irqdomain) { 364 366 dev_err(p->dev, "failed to create Gemini PCI IRQ domain\n"); 365 367 return -EINVAL;
+13 -3
drivers/pci/controller/pcie-rcar.c
··· 680 680 if (err) 681 681 return err; 682 682 683 - return phy_power_on(pcie->phy); 683 + err = phy_power_on(pcie->phy); 684 + if (err) 685 + phy_exit(pcie->phy); 686 + 687 + return err; 684 688 } 685 689 686 690 static int rcar_msi_alloc(struct rcar_msi *chip) ··· 1169 1165 if (rcar_pcie_hw_init(pcie)) { 1170 1166 dev_info(dev, "PCIe link down\n"); 1171 1167 err = -ENODEV; 1172 - goto err_clk_disable; 1168 + goto err_phy_shutdown; 1173 1169 } 1174 1170 1175 1171 data = rcar_pci_read_reg(pcie, MACSR); ··· 1181 1177 dev_err(dev, 1182 1178 "failed to enable MSI support: %d\n", 1183 1179 err); 1184 - goto err_clk_disable; 1180 + goto err_phy_shutdown; 1185 1181 } 1186 1182 } 1187 1183 ··· 1194 1190 err_msi_teardown: 1195 1191 if (IS_ENABLED(CONFIG_PCI_MSI)) 1196 1192 rcar_pcie_teardown_msi(pcie); 1193 + 1194 + err_phy_shutdown: 1195 + if (pcie->phy) { 1196 + phy_power_off(pcie->phy); 1197 + phy_exit(pcie->phy); 1198 + } 1197 1199 1198 1200 err_clk_disable: 1199 1201 clk_disable_unprepare(pcie->bus_clk);
+1 -1
drivers/pci/controller/pcie-xilinx-nwl.c
··· 559 559 PCI_NUM_INTX, 560 560 &legacy_domain_ops, 561 561 pcie); 562 - 562 + of_node_put(legacy_intc_node); 563 563 if (!pcie->legacy_irq_domain) { 564 564 dev_err(dev, "failed to create IRQ domain\n"); 565 565 return -ENOMEM;
+1
drivers/pci/controller/pcie-xilinx.c
··· 509 509 port->leg_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX, 510 510 &intx_domain_ops, 511 511 port); 512 + of_node_put(pcie_intc_node); 512 513 if (!port->leg_domain) { 513 514 dev_err(dev, "Failed to get a INTx IRQ domain\n"); 514 515 return -ENODEV;
+2 -2
drivers/pci/endpoint/pci-epf-core.c
··· 145 145 */ 146 146 void pci_epf_unregister_driver(struct pci_epf_driver *driver) 147 147 { 148 - struct config_group *group; 148 + struct config_group *group, *tmp; 149 149 150 150 mutex_lock(&pci_epf_mutex); 151 - list_for_each_entry(group, &driver->epf_group, group_entry) 151 + list_for_each_entry_safe(group, tmp, &driver->epf_group, group_entry) 152 152 pci_ep_cfs_remove_epf_group(group); 153 153 list_del(&driver->epf_group); 154 154 mutex_unlock(&pci_epf_mutex);
+16
drivers/pci/iov.c
··· 575 575 } 576 576 577 577 /** 578 + * pci_iov_remove - clean up SR-IOV state after PF driver is detached 579 + * @dev: the PCI device 580 + */ 581 + void pci_iov_remove(struct pci_dev *dev) 582 + { 583 + struct pci_sriov *iov = dev->sriov; 584 + 585 + if (!dev->is_physfn) 586 + return; 587 + 588 + iov->driver_max_VFs = iov->total_VFs; 589 + if (iov->num_VFs) 590 + pci_warn(dev, "driver left SR-IOV enabled after remove\n"); 591 + } 592 + 593 + /** 578 594 * pci_iov_update_resource - update a VF BAR 579 595 * @dev: the PCI device 580 596 * @resno: the resource number
+12
drivers/pci/pci-acpi.c
··· 629 629 { 630 630 struct acpi_device *adev = ACPI_COMPANION(&dev->dev); 631 631 632 + /* 633 + * In some cases (eg. Samsung 305V4A) leaving a bridge in suspend over 634 + * system-wide suspend/resume confuses the platform firmware, so avoid 635 + * doing that, unless the bridge has a driver that should take care of 636 + * the PM handling. According to Section 16.1.6 of ACPI 6.2, endpoint 637 + * devices are expected to be in D3 before invoking the S3 entry path 638 + * from the firmware, so they should not be affected by this issue. 639 + */ 640 + if (pci_is_bridge(dev) && !dev->driver && 641 + acpi_target_system_state() != ACPI_STATE_S0) 642 + return true; 643 + 632 644 if (!adev || !acpi_device_power_manageable(adev)) 633 645 return false; 634 646
+1
drivers/pci/pci-driver.c
··· 445 445 } 446 446 pcibios_free_irq(pci_dev); 447 447 pci_dev->driver = NULL; 448 + pci_iov_remove(pci_dev); 448 449 } 449 450 450 451 /* Undo the runtime PM settings in local_pci_probe() */
+4
drivers/pci/pci.h
··· 311 311 #ifdef CONFIG_PCI_IOV 312 312 int pci_iov_init(struct pci_dev *dev); 313 313 void pci_iov_release(struct pci_dev *dev); 314 + void pci_iov_remove(struct pci_dev *dev); 314 315 void pci_iov_update_resource(struct pci_dev *dev, int resno); 315 316 resource_size_t pci_sriov_resource_alignment(struct pci_dev *dev, int resno); 316 317 void pci_restore_iov_state(struct pci_dev *dev); ··· 324 323 } 325 324 static inline void pci_iov_release(struct pci_dev *dev) 326 325 326 + { 327 + } 328 + static inline void pci_iov_remove(struct pci_dev *dev) 327 329 { 328 330 } 329 331 static inline void pci_restore_iov_state(struct pci_dev *dev)
+5 -3
drivers/rtc/interface.c
··· 265 265 return err; 266 266 267 267 /* full-function RTCs won't have such missing fields */ 268 - if (rtc_valid_tm(&alarm->time) == 0) 268 + if (rtc_valid_tm(&alarm->time) == 0) { 269 + rtc_add_offset(rtc, &alarm->time); 269 270 return 0; 271 + } 270 272 271 273 /* get the "after" timestamp, to detect wrapped fields */ 272 274 err = rtc_read_time(rtc, &now); ··· 411 409 if (err) 412 410 return err; 413 411 414 - rtc_subtract_offset(rtc, &alarm->time); 415 412 scheduled = rtc_tm_to_time64(&alarm->time); 416 413 417 414 /* Make sure we're not setting alarms in the past */ ··· 426 425 * the is alarm set for the next second and the second ticks 427 426 * over right here, before we set the alarm. 428 427 */ 428 + 429 + rtc_subtract_offset(rtc, &alarm->time); 429 430 430 431 if (!rtc->ops) 431 432 err = -ENODEV; ··· 470 467 471 468 mutex_unlock(&rtc->ops_lock); 472 469 473 - rtc_add_offset(rtc, &alarm->time); 474 470 return err; 475 471 } 476 472 EXPORT_SYMBOL_GPL(rtc_set_alarm);
+1 -3
drivers/rtc/rtc-mrst.c
··· 367 367 } 368 368 369 369 retval = rtc_register_device(mrst_rtc.rtc); 370 - if (retval) { 371 - retval = PTR_ERR(mrst_rtc.rtc); 370 + if (retval) 372 371 goto cleanup0; 373 - } 374 372 375 373 dev_dbg(dev, "initialised\n"); 376 374 return 0;
+11 -2
drivers/s390/block/dasd.c
··· 41 41 42 42 #define DASD_DIAG_MOD "dasd_diag_mod" 43 43 44 + static unsigned int queue_depth = 32; 45 + static unsigned int nr_hw_queues = 4; 46 + 47 + module_param(queue_depth, uint, 0444); 48 + MODULE_PARM_DESC(queue_depth, "Default queue depth for new DASD devices"); 49 + 50 + module_param(nr_hw_queues, uint, 0444); 51 + MODULE_PARM_DESC(nr_hw_queues, "Default number of hardware queues for new DASD devices"); 52 + 44 53 /* 45 54 * SECTION: exported variables of dasd.c 46 55 */ ··· 3124 3115 3125 3116 block->tag_set.ops = &dasd_mq_ops; 3126 3117 block->tag_set.cmd_size = sizeof(struct dasd_ccw_req); 3127 - block->tag_set.nr_hw_queues = DASD_NR_HW_QUEUES; 3128 - block->tag_set.queue_depth = DASD_MAX_LCU_DEV * DASD_REQ_PER_DEV; 3118 + block->tag_set.nr_hw_queues = nr_hw_queues; 3119 + block->tag_set.queue_depth = queue_depth; 3129 3120 block->tag_set.flags = BLK_MQ_F_SHOULD_MERGE; 3130 3121 3131 3122 rc = blk_mq_alloc_tag_set(&block->tag_set);
-8
drivers/s390/block/dasd_int.h
··· 228 228 #define DASD_CQR_SUPPRESS_IL 6 /* Suppress 'Incorrect Length' error */ 229 229 #define DASD_CQR_SUPPRESS_CR 7 /* Suppress 'Command Reject' error */ 230 230 231 - /* 232 - * There is no reliable way to determine the number of available CPUs on 233 - * LPAR but there is no big performance difference between 1 and the 234 - * maximum CPU number. 235 - * 64 is a good trade off performance wise. 236 - */ 237 - #define DASD_NR_HW_QUEUES 64 238 - #define DASD_MAX_LCU_DEV 256 239 231 #define DASD_REQ_PER_DEV 4 240 232 241 233 /* Signature for error recovery functions. */
+12 -1
drivers/s390/net/qeth_core.h
··· 829 829 /*some helper functions*/ 830 830 #define QETH_CARD_IFNAME(card) (((card)->dev)? (card)->dev->name : "") 831 831 832 + static inline void qeth_scrub_qdio_buffer(struct qdio_buffer *buf, 833 + unsigned int elements) 834 + { 835 + unsigned int i; 836 + 837 + for (i = 0; i < elements; i++) 838 + memset(&buf->element[i], 0, sizeof(struct qdio_buffer_element)); 839 + buf->element[14].sflags = 0; 840 + buf->element[15].sflags = 0; 841 + } 842 + 832 843 /** 833 844 * qeth_get_elements_for_range() - find number of SBALEs to cover range. 834 845 * @start: Start of the address range. ··· 1040 1029 __u16, __u16, 1041 1030 enum qeth_prot_versions); 1042 1031 int qeth_set_features(struct net_device *, netdev_features_t); 1043 - void qeth_recover_features(struct net_device *dev); 1032 + void qeth_enable_hw_features(struct net_device *dev); 1044 1033 netdev_features_t qeth_fix_features(struct net_device *, netdev_features_t); 1045 1034 netdev_features_t qeth_features_check(struct sk_buff *skb, 1046 1035 struct net_device *dev,
+28 -19
drivers/s390/net/qeth_core_main.c
··· 73 73 struct qeth_qdio_out_buffer *buf, 74 74 enum iucv_tx_notify notification); 75 75 static void qeth_release_skbs(struct qeth_qdio_out_buffer *buf); 76 - static void qeth_clear_output_buffer(struct qeth_qdio_out_q *queue, 77 - struct qeth_qdio_out_buffer *buf, 78 - enum qeth_qdio_buffer_states newbufstate); 79 76 static int qeth_init_qdio_out_buf(struct qeth_qdio_out_q *, int); 80 77 81 78 struct workqueue_struct *qeth_wq; ··· 486 489 struct qaob *aob; 487 490 struct qeth_qdio_out_buffer *buffer; 488 491 enum iucv_tx_notify notification; 492 + unsigned int i; 489 493 490 494 aob = (struct qaob *) phys_to_virt(phys_aob_addr); 491 495 QETH_CARD_TEXT(card, 5, "haob"); ··· 511 513 qeth_notify_skbs(buffer->q, buffer, notification); 512 514 513 515 buffer->aob = NULL; 514 - qeth_clear_output_buffer(buffer->q, buffer, 515 - QETH_QDIO_BUF_HANDLED_DELAYED); 516 + /* Free dangling allocations. The attached skbs are handled by 517 + * qeth_cleanup_handled_pending(). 518 + */ 519 + for (i = 0; 520 + i < aob->sb_count && i < QETH_MAX_BUFFER_ELEMENTS(card); 521 + i++) { 522 + if (aob->sba[i] && buffer->is_header[i]) 523 + kmem_cache_free(qeth_core_header_cache, 524 + (void *) aob->sba[i]); 525 + } 526 + atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED); 516 527 517 - /* from here on: do not touch buffer anymore */ 518 528 qdio_release_aob(aob); 519 529 } 520 530 ··· 3765 3759 QETH_CARD_TEXT(queue->card, 5, "aob"); 3766 3760 QETH_CARD_TEXT_(queue->card, 5, "%lx", 3767 3761 virt_to_phys(buffer->aob)); 3762 + 3763 + /* prepare the queue slot for re-use: */ 3764 + qeth_scrub_qdio_buffer(buffer->buffer, 3765 + QETH_MAX_BUFFER_ELEMENTS(card)); 3768 3766 if (qeth_init_qdio_out_buf(queue, bidx)) { 3769 3767 QETH_CARD_TEXT(card, 2, "outofbuf"); 3770 3768 qeth_schedule_recovery(card); ··· 4844 4834 goto out; 4845 4835 } 4846 4836 4847 - ccw_device_get_id(CARD_RDEV(card), &id); 4837 + ccw_device_get_id(CARD_DDEV(card), &id); 4848 4838 request->resp_buf_len = sizeof(*response); 4849 4839 request->resp_version = DIAG26C_VERSION2; 4850 4840 request->op_code = DIAG26C_GET_MAC; ··· 6469 6459 #define QETH_HW_FEATURES (NETIF_F_RXCSUM | NETIF_F_IP_CSUM | NETIF_F_TSO | \ 6470 6460 NETIF_F_IPV6_CSUM) 6471 6461 /** 6472 - * qeth_recover_features() - Restore device features after recovery 6473 - * @dev: the recovering net_device 6474 - * 6475 - * Caller must hold rtnl lock. 6462 + * qeth_enable_hw_features() - (Re-)Enable HW functions for device features 6463 + * @dev: a net_device 6476 6464 */ 6477 - void qeth_recover_features(struct net_device *dev) 6465 + void qeth_enable_hw_features(struct net_device *dev) 6478 6466 { 6479 - netdev_features_t features = dev->features; 6480 6467 struct qeth_card *card = dev->ml_priv; 6468 + netdev_features_t features; 6481 6469 6470 + rtnl_lock(); 6471 + features = dev->features; 6482 6472 /* force-off any feature that needs an IPA sequence. 6483 6473 * netdev_update_features() will restart them. 6484 6474 */ 6485 6475 dev->features &= ~QETH_HW_FEATURES; 6486 6476 netdev_update_features(dev); 6487 - 6488 - if (features == dev->features) 6489 - return; 6490 - dev_warn(&card->gdev->dev, 6491 - "Device recovery failed to restore all offload features\n"); 6477 + if (features != dev->features) 6478 + dev_warn(&card->gdev->dev, 6479 + "Device recovery failed to restore all offload features\n"); 6480 + rtnl_unlock(); 6492 6481 } 6493 - EXPORT_SYMBOL_GPL(qeth_recover_features); 6482 + EXPORT_SYMBOL_GPL(qeth_enable_hw_features); 6494 6483 6495 6484 int qeth_set_features(struct net_device *dev, netdev_features_t features) 6496 6485 {
+15 -9
drivers/s390/net/qeth_l2_main.c
··· 140 140 141 141 static int qeth_l2_write_mac(struct qeth_card *card, u8 *mac) 142 142 { 143 - enum qeth_ipa_cmds cmd = is_multicast_ether_addr_64bits(mac) ? 143 + enum qeth_ipa_cmds cmd = is_multicast_ether_addr(mac) ? 144 144 IPA_CMD_SETGMAC : IPA_CMD_SETVMAC; 145 145 int rc; 146 146 ··· 157 157 158 158 static int qeth_l2_remove_mac(struct qeth_card *card, u8 *mac) 159 159 { 160 - enum qeth_ipa_cmds cmd = is_multicast_ether_addr_64bits(mac) ? 160 + enum qeth_ipa_cmds cmd = is_multicast_ether_addr(mac) ? 161 161 IPA_CMD_DELGMAC : IPA_CMD_DELVMAC; 162 162 int rc; 163 163 ··· 501 501 return -ERESTARTSYS; 502 502 } 503 503 504 + /* avoid racing against concurrent state change: */ 505 + if (!mutex_trylock(&card->conf_mutex)) 506 + return -EAGAIN; 507 + 504 508 if (!qeth_card_hw_is_reachable(card)) { 505 509 ether_addr_copy(dev->dev_addr, addr->sa_data); 506 - return 0; 510 + goto out_unlock; 507 511 } 508 512 509 513 /* don't register the same address twice */ 510 514 if (ether_addr_equal_64bits(dev->dev_addr, addr->sa_data) && 511 515 (card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED)) 512 - return 0; 516 + goto out_unlock; 513 517 514 518 /* add the new address, switch over, drop the old */ 515 519 rc = qeth_l2_send_setmac(card, addr->sa_data); 516 520 if (rc) 517 - return rc; 521 + goto out_unlock; 518 522 ether_addr_copy(old_addr, dev->dev_addr); 519 523 ether_addr_copy(dev->dev_addr, addr->sa_data); 520 524 521 525 if (card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED) 522 526 qeth_l2_remove_mac(card, old_addr); 523 527 card->info.mac_bits |= QETH_LAYER2_MAC_REGISTERED; 524 - return 0; 528 + 529 + out_unlock: 530 + mutex_unlock(&card->conf_mutex); 531 + return rc; 525 532 } 526 533 527 534 static void qeth_promisc_to_bridge(struct qeth_card *card) ··· 1119 1112 netif_carrier_off(card->dev); 1120 1113 1121 1114 qeth_set_allowed_threads(card, 0xffffffff, 0); 1115 + 1116 + qeth_enable_hw_features(card->dev); 1122 1117 if (recover_flag == CARD_STATE_RECOVER) { 1123 1118 if (recovery_mode && 1124 1119 card->info.type != QETH_CARD_TYPE_OSN) { ··· 1132 1123 } 1133 1124 /* this also sets saved unicast addresses */ 1134 1125 qeth_l2_set_rx_mode(card->dev); 1135 - rtnl_lock(); 1136 - qeth_recover_features(card->dev); 1137 - rtnl_unlock(); 1138 1126 } 1139 1127 /* let user_space know that device is online */ 1140 1128 kobject_uevent(&gdev->dev.kobj, KOBJ_CHANGE);
+2 -1
drivers/s390/net/qeth_l3_main.c
··· 2662 2662 netif_carrier_on(card->dev); 2663 2663 else 2664 2664 netif_carrier_off(card->dev); 2665 + 2666 + qeth_enable_hw_features(card->dev); 2665 2667 if (recover_flag == CARD_STATE_RECOVER) { 2666 2668 rtnl_lock(); 2667 2669 if (recovery_mode) ··· 2671 2669 else 2672 2670 dev_open(card->dev); 2673 2671 qeth_l3_set_rx_mode(card->dev); 2674 - qeth_recover_features(card->dev); 2675 2672 rtnl_unlock(); 2676 2673 } 2677 2674 qeth_trace_features(card);
+7 -8
drivers/scsi/aacraid/aachba.c
··· 1974 1974 u32 lun_count, nexus; 1975 1975 u32 i, bus, target; 1976 1976 u8 expose_flag, attribs; 1977 - u8 devtype; 1978 1977 1979 1978 lun_count = aac_get_safw_phys_lun_count(dev); 1980 1979 ··· 1991 1992 continue; 1992 1993 1993 1994 if (expose_flag != 0) { 1994 - devtype = AAC_DEVTYPE_RAID_MEMBER; 1995 - goto update_devtype; 1995 + dev->hba_map[bus][target].devtype = 1996 + AAC_DEVTYPE_RAID_MEMBER; 1997 + continue; 1996 1998 } 1997 1999 1998 2000 if (nexus != 0 && (attribs & 8)) { 1999 - devtype = AAC_DEVTYPE_NATIVE_RAW; 2001 + dev->hba_map[bus][target].devtype = 2002 + AAC_DEVTYPE_NATIVE_RAW; 2000 2003 dev->hba_map[bus][target].rmw_nexus = 2001 2004 nexus; 2002 2005 } else 2003 - devtype = AAC_DEVTYPE_ARC_RAW; 2006 + dev->hba_map[bus][target].devtype = 2007 + AAC_DEVTYPE_ARC_RAW; 2004 2008 2005 2009 dev->hba_map[bus][target].scan_counter = dev->scan_counter; 2006 2010 2007 2011 aac_set_safw_target_qd(dev, bus, target); 2008 - 2009 - update_devtype: 2010 - dev->hba_map[bus][target].devtype = devtype; 2011 2012 } 2012 2013 } 2013 2014
+40 -2
drivers/scsi/sg.c
··· 51 51 #include <linux/atomic.h> 52 52 #include <linux/ratelimit.h> 53 53 #include <linux/uio.h> 54 + #include <linux/cred.h> /* for sg_check_file_access() */ 54 55 55 56 #include "scsi.h" 56 57 #include <scsi/scsi_dbg.h> ··· 209 208 #define sg_printk(prefix, sdp, fmt, a...) \ 210 209 sdev_prefix_printk(prefix, (sdp)->device, \ 211 210 (sdp)->disk->disk_name, fmt, ##a) 211 + 212 + /* 213 + * The SCSI interfaces that use read() and write() as an asynchronous variant of 214 + * ioctl(..., SG_IO, ...) are fundamentally unsafe, since there are lots of ways 215 + * to trigger read() and write() calls from various contexts with elevated 216 + * privileges. This can lead to kernel memory corruption (e.g. if these 217 + * interfaces are called through splice()) and privilege escalation inside 218 + * userspace (e.g. if a process with access to such a device passes a file 219 + * descriptor to a SUID binary as stdin/stdout/stderr). 220 + * 221 + * This function provides protection for the legacy API by restricting the 222 + * calling context. 223 + */ 224 + static int sg_check_file_access(struct file *filp, const char *caller) 225 + { 226 + if (filp->f_cred != current_real_cred()) { 227 + pr_err_once("%s: process %d (%s) changed security contexts after opening file descriptor, this is not allowed.\n", 228 + caller, task_tgid_vnr(current), current->comm); 229 + return -EPERM; 230 + } 231 + if (uaccess_kernel()) { 232 + pr_err_once("%s: process %d (%s) called from kernel context, this is not allowed.\n", 233 + caller, task_tgid_vnr(current), current->comm); 234 + return -EACCES; 235 + } 236 + return 0; 237 + } 212 238 213 239 static int sg_allow_access(struct file *filp, unsigned char *cmd) 214 240 { ··· 420 392 sg_io_hdr_t *hp; 421 393 struct sg_header *old_hdr = NULL; 422 394 int retval = 0; 395 + 396 + /* 397 + * This could cause a response to be stranded. Close the associated 398 + * file descriptor to free up any resources being held. 399 + */ 400 + retval = sg_check_file_access(filp, __func__); 401 + if (retval) 402 + return retval; 423 403 424 404 if ((!(sfp = (Sg_fd *) filp->private_data)) || (!(sdp = sfp->parentdp))) 425 405 return -ENXIO; ··· 616 580 struct sg_header old_hdr; 617 581 sg_io_hdr_t *hp; 618 582 unsigned char cmnd[SG_MAX_CDB_SIZE]; 583 + int retval; 619 584 620 - if (unlikely(uaccess_kernel())) 621 - return -EINVAL; 585 + retval = sg_check_file_access(filp, __func__); 586 + if (retval) 587 + return retval; 622 588 623 589 if ((!(sfp = (Sg_fd *) filp->private_data)) || (!(sdp = sfp->parentdp))) 624 590 return -ENXIO;
+1 -1
drivers/staging/rtl8723bs/core/rtw_ap.c
··· 1051 1051 return _FAIL; 1052 1052 1053 1053 1054 - if (len > MAX_IE_SZ) 1054 + if (len < 0 || len > MAX_IE_SZ) 1055 1055 return _FAIL; 1056 1056 1057 1057 pbss_network->IELength = len;
+1 -1
drivers/staging/rtlwifi/rtl8822be/hw.c
··· 803 803 return; 804 804 805 805 pci_read_config_byte(rtlpci->pdev, 0x70f, &tmp); 806 - pci_write_config_byte(rtlpci->pdev, 0x70f, tmp | BIT(7)); 806 + pci_write_config_byte(rtlpci->pdev, 0x70f, tmp | ASPM_L1_LATENCY << 3); 807 807 808 808 pci_read_config_byte(rtlpci->pdev, 0x719, &tmp); 809 809 pci_write_config_byte(rtlpci->pdev, 0x719, tmp | BIT(3) | BIT(4));
+1
drivers/staging/rtlwifi/wifi.h
··· 88 88 #define RTL_USB_MAX_RX_COUNT 100 89 89 #define QBSS_LOAD_SIZE 5 90 90 #define MAX_WMMELE_LENGTH 64 91 + #define ASPM_L1_LATENCY 7 91 92 92 93 #define TOTAL_CAM_ENTRY 32 93 94
+10 -5
drivers/target/target_core_pr.c
··· 3727 3727 * Check for overflow of 8byte PRI READ_KEYS payload and 3728 3728 * next reservation key list descriptor. 3729 3729 */ 3730 - if ((add_len + 8) > (cmd->data_length - 8)) 3731 - break; 3732 - 3733 - put_unaligned_be64(pr_reg->pr_res_key, &buf[off]); 3734 - off += 8; 3730 + if (off + 8 <= cmd->data_length) { 3731 + put_unaligned_be64(pr_reg->pr_res_key, &buf[off]); 3732 + off += 8; 3733 + } 3734 + /* 3735 + * SPC5r17: 6.16.2 READ KEYS service action 3736 + * The ADDITIONAL LENGTH field indicates the number of bytes in 3737 + * the Reservation key list. The contents of the ADDITIONAL 3738 + * LENGTH field are not altered based on the allocation length 3739 + */ 3735 3740 add_len += 8; 3736 3741 } 3737 3742 spin_unlock(&dev->t10_pr.registration_lock);
+4
drivers/thunderbolt/domain.c
··· 213 213 goto err_free_acl; 214 214 } 215 215 ret = tb->cm_ops->set_boot_acl(tb, acl, tb->nboot_acl); 216 + if (!ret) { 217 + /* Notify userspace about the change */ 218 + kobject_uevent(&tb->dev.kobj, KOBJ_CHANGE); 219 + } 216 220 mutex_unlock(&tb->lock); 217 221 218 222 err_free_acl:
+103 -36
drivers/uio/uio.c
··· 215 215 struct device_attribute *attr, char *buf) 216 216 { 217 217 struct uio_device *idev = dev_get_drvdata(dev); 218 - return sprintf(buf, "%s\n", idev->info->name); 218 + int ret; 219 + 220 + mutex_lock(&idev->info_lock); 221 + if (!idev->info) { 222 + ret = -EINVAL; 223 + dev_err(dev, "the device has been unregistered\n"); 224 + goto out; 225 + } 226 + 227 + ret = sprintf(buf, "%s\n", idev->info->name); 228 + 229 + out: 230 + mutex_unlock(&idev->info_lock); 231 + return ret; 219 232 } 220 233 static DEVICE_ATTR_RO(name); 221 234 ··· 236 223 struct device_attribute *attr, char *buf) 237 224 { 238 225 struct uio_device *idev = dev_get_drvdata(dev); 239 - return sprintf(buf, "%s\n", idev->info->version); 226 + int ret; 227 + 228 + mutex_lock(&idev->info_lock); 229 + if (!idev->info) { 230 + ret = -EINVAL; 231 + dev_err(dev, "the device has been unregistered\n"); 232 + goto out; 233 + } 234 + 235 + ret = sprintf(buf, "%s\n", idev->info->version); 236 + 237 + out: 238 + mutex_unlock(&idev->info_lock); 239 + return ret; 240 240 } 241 241 static DEVICE_ATTR_RO(version); 242 242 ··· 441 415 static irqreturn_t uio_interrupt(int irq, void *dev_id) 442 416 { 443 417 struct uio_device *idev = (struct uio_device *)dev_id; 444 - irqreturn_t ret = idev->info->handler(irq, idev->info); 418 + irqreturn_t ret; 445 419 420 + mutex_lock(&idev->info_lock); 421 + 422 + ret = idev->info->handler(irq, idev->info); 446 423 if (ret == IRQ_HANDLED) 447 424 uio_event_notify(idev->info); 448 425 426 + mutex_unlock(&idev->info_lock); 449 427 return ret; 450 428 } 451 429 ··· 463 433 struct uio_device *idev; 464 434 struct uio_listener *listener; 465 435 int ret = 0; 466 - unsigned long flags; 467 436 468 437 mutex_lock(&minor_lock); 469 438 idev = idr_find(&uio_idr, iminor(inode)); ··· 489 460 listener->event_count = atomic_read(&idev->event); 490 461 filep->private_data = listener; 491 462 492 - spin_lock_irqsave(&idev->info_lock, flags); 463 + mutex_lock(&idev->info_lock); 464 + if (!idev->info) { 465 + mutex_unlock(&idev->info_lock); 466 + ret = -EINVAL; 467 + goto err_alloc_listener; 468 + } 469 + 493 470 if (idev->info && idev->info->open) 494 471 ret = idev->info->open(idev->info, inode); 495 - spin_unlock_irqrestore(&idev->info_lock, flags); 472 + mutex_unlock(&idev->info_lock); 496 473 if (ret) 497 474 goto err_infoopen; 498 475 ··· 530 495 int ret = 0; 531 496 struct uio_listener *listener = filep->private_data; 532 497 struct uio_device *idev = listener->dev; 533 - unsigned long flags; 534 498 535 - spin_lock_irqsave(&idev->info_lock, flags); 499 + mutex_lock(&idev->info_lock); 536 500 if (idev->info && idev->info->release) 537 501 ret = idev->info->release(idev->info, inode); 538 - spin_unlock_irqrestore(&idev->info_lock, flags); 502 + mutex_unlock(&idev->info_lock); 539 503 540 504 module_put(idev->owner); 541 505 kfree(listener); ··· 547 513 struct uio_listener *listener = filep->private_data; 548 514 struct uio_device *idev = listener->dev; 549 515 __poll_t ret = 0; 550 - unsigned long flags; 551 516 552 - spin_lock_irqsave(&idev->info_lock, flags); 517 + mutex_lock(&idev->info_lock); 553 518 if (!idev->info || !idev->info->irq) 554 519 ret = -EIO; 555 - spin_unlock_irqrestore(&idev->info_lock, flags); 520 + mutex_unlock(&idev->info_lock); 556 521 557 522 if (ret) 558 523 return ret; ··· 570 537 DECLARE_WAITQUEUE(wait, current); 571 538 ssize_t retval = 0; 572 539 s32 event_count; 573 - unsigned long flags; 574 540 575 - spin_lock_irqsave(&idev->info_lock, flags); 541 + mutex_lock(&idev->info_lock); 576 542 if (!idev->info || !idev->info->irq) 577 543 retval = -EIO; 578 - spin_unlock_irqrestore(&idev->info_lock, flags); 544 + mutex_unlock(&idev->info_lock); 579 545 580 546 if (retval) 581 547 return retval; ··· 624 592 struct uio_device *idev = listener->dev; 625 593 ssize_t retval; 626 594 s32 irq_on; 627 - unsigned long flags; 628 595 629 - spin_lock_irqsave(&idev->info_lock, flags); 596 + mutex_lock(&idev->info_lock); 597 + if (!idev->info) { 598 + retval = -EINVAL; 599 + goto out; 600 + } 601 + 630 602 if (!idev->info || !idev->info->irq) { 631 603 retval = -EIO; 632 604 goto out; ··· 654 618 retval = idev->info->irqcontrol(idev->info, irq_on); 655 619 656 620 out: 657 - spin_unlock_irqrestore(&idev->info_lock, flags); 621 + mutex_unlock(&idev->info_lock); 658 622 return retval ? retval : sizeof(s32); 659 623 } 660 624 ··· 676 640 struct page *page; 677 641 unsigned long offset; 678 642 void *addr; 643 + int ret = 0; 644 + int mi; 679 645 680 - int mi = uio_find_mem_index(vmf->vma); 681 - if (mi < 0) 682 - return VM_FAULT_SIGBUS; 646 + mutex_lock(&idev->info_lock); 647 + if (!idev->info) { 648 + ret = VM_FAULT_SIGBUS; 649 + goto out; 650 + } 651 + 652 + mi = uio_find_mem_index(vmf->vma); 653 + if (mi < 0) { 654 + ret = VM_FAULT_SIGBUS; 655 + goto out; 656 + } 683 657 684 658 /* 685 659 * We need to subtract mi because userspace uses offset = N*PAGE_SIZE ··· 704 658 page = vmalloc_to_page(addr); 705 659 get_page(page); 706 660 vmf->page = page; 707 - return 0; 661 + 662 + out: 663 + mutex_unlock(&idev->info_lock); 664 + 665 + return ret; 708 666 } 709 667 710 668 static const struct vm_operations_struct uio_logical_vm_ops = { ··· 733 683 struct uio_device *idev = vma->vm_private_data; 734 684 int mi = uio_find_mem_index(vma); 735 685 struct uio_mem *mem; 686 + 736 687 if (mi < 0) 737 688 return -EINVAL; 738 689 mem = idev->info->mem + mi; ··· 775 724 776 725 vma->vm_private_data = idev; 777 726 727 + mutex_lock(&idev->info_lock); 728 + if (!idev->info) { 729 + ret = -EINVAL; 730 + goto out; 731 + } 732 + 778 733 mi = uio_find_mem_index(vma); 779 - if (mi < 0) 780 - return -EINVAL; 734 + if (mi < 0) { 735 + ret = -EINVAL; 736 + goto out; 737 + } 781 738 782 739 requested_pages = vma_pages(vma); 783 740 actual_pages = ((idev->info->mem[mi].addr & ~PAGE_MASK) 784 741 + idev->info->mem[mi].size + PAGE_SIZE -1) >> PAGE_SHIFT; 785 - if (requested_pages > actual_pages) 786 - return -EINVAL; 742
+ if (requested_pages > actual_pages) { 743 + ret = -EINVAL; 744 + goto out; 745 + } 787 746 788 747 if (idev->info->mmap) { 789 748 ret = idev->info->mmap(idev->info, vma); 790 - return ret; 749 + goto out; 791 750 } 792 751 793 752 switch (idev->info->mem[mi].memtype) { 794 753 case UIO_MEM_PHYS: 795 - return uio_mmap_physical(vma); 754 + ret = uio_mmap_physical(vma); 755 + break; 796 756 case UIO_MEM_LOGICAL: 797 757 case UIO_MEM_VIRTUAL: 798 - return uio_mmap_logical(vma); 758 + ret = uio_mmap_logical(vma); 759 + break; 799 760 default: 800 - return -EINVAL; 761 + ret = -EINVAL; 801 762 } 763 + 764 + out: 765 + mutex_unlock(&idev->info_lock); 766 + return ret; 802 767 } 803 768 804 769 static const struct file_operations uio_fops = { ··· 932 865 933 866 idev->owner = owner; 934 867 idev->info = info; 935 - spin_lock_init(&idev->info_lock); 868 + mutex_init(&idev->info_lock); 936 869 init_waitqueue_head(&idev->wait); 937 870 atomic_set(&idev->event, 0); 938 871 ··· 969 902 * FDs at the time of unregister and therefore may not be 970 903 * freed until they are released. 
971 904 */ 972 - ret = request_irq(info->irq, uio_interrupt, 973 - info->irq_flags, info->name, idev); 905 + ret = request_threaded_irq(info->irq, NULL, uio_interrupt, 906 + info->irq_flags, info->name, idev); 907 + 974 908 if (ret) 975 909 goto err_request_irq; 976 910 } ··· 996 928 void uio_unregister_device(struct uio_info *info) 997 929 { 998 930 struct uio_device *idev; 999 - unsigned long flags; 1000 931 1001 932 if (!info || !info->uio_dev) 1002 933 return; ··· 1004 937 1005 938 uio_free_minor(idev); 1006 939 940 + mutex_lock(&idev->info_lock); 1007 941 uio_dev_del_attributes(idev); 1008 942 1009 943 if (info->irq && info->irq != UIO_IRQ_CUSTOM) 1010 944 free_irq(info->irq, idev); 1011 945 1012 - spin_lock_irqsave(&idev->info_lock, flags); 1013 946 idev->info = NULL; 1014 - spin_unlock_irqrestore(&idev->info_lock, flags); 947 + mutex_unlock(&idev->info_lock); 1015 948 1016 949 device_unregister(&idev->dev); 1017 950
+4
drivers/usb/core/quirks.c
··· 378 378 /* Corsair K70 RGB */ 379 379 { USB_DEVICE(0x1b1c, 0x1b13), .driver_info = USB_QUIRK_DELAY_INIT }, 380 380 381 + /* Corsair Strafe */ 382 + { USB_DEVICE(0x1b1c, 0x1b15), .driver_info = USB_QUIRK_DELAY_INIT | 383 + USB_QUIRK_DELAY_CTRL_MSG }, 384 + 381 385 /* Corsair Strafe RGB */ 382 386 { USB_DEVICE(0x1b1c, 0x1b20), .driver_info = USB_QUIRK_DELAY_INIT | 383 387 USB_QUIRK_DELAY_CTRL_MSG },
+1
drivers/usb/gadget/udc/aspeed-vhub/Kconfig
··· 2 2 config USB_ASPEED_VHUB 3 3 tristate "Aspeed vHub UDC driver" 4 4 depends on ARCH_ASPEED || COMPILE_TEST 5 + depends on USB_LIBCOMPOSITE 5 6 help 6 7 USB peripheral controller for the Aspeed AST2500 family 7 8 SoCs supporting the "vHub" functionality and USB2.0
+8 -4
drivers/usb/host/xhci-dbgcap.c
··· 508 508 return 0; 509 509 } 510 510 511 - static void xhci_do_dbc_stop(struct xhci_hcd *xhci) 511 + static int xhci_do_dbc_stop(struct xhci_hcd *xhci) 512 512 { 513 513 struct xhci_dbc *dbc = xhci->dbc; 514 514 515 515 if (dbc->state == DS_DISABLED) 516 - return; 516 + return -1; 517 517 518 518 writel(0, &dbc->regs->control); 519 519 xhci_dbc_mem_cleanup(xhci); 520 520 dbc->state = DS_DISABLED; 521 + 522 + return 0; 521 523 } 522 524 523 525 static int xhci_dbc_start(struct xhci_hcd *xhci) ··· 546 544 547 545 static void xhci_dbc_stop(struct xhci_hcd *xhci) 548 546 { 547 + int ret; 549 548 unsigned long flags; 550 549 struct xhci_dbc *dbc = xhci->dbc; 551 550 struct dbc_port *port = &dbc->port; ··· 559 556 xhci_dbc_tty_unregister_device(xhci); 560 557 561 558 spin_lock_irqsave(&dbc->lock, flags); 562 - xhci_do_dbc_stop(xhci); 559 + ret = xhci_do_dbc_stop(xhci); 563 560 spin_unlock_irqrestore(&dbc->lock, flags); 564 561 565 - pm_runtime_put_sync(xhci_to_hcd(xhci)->self.controller); 562 + if (!ret) 563 + pm_runtime_put_sync(xhci_to_hcd(xhci)->self.controller); 566 564 } 567 565 568 566 static void
+1 -1
drivers/usb/host/xhci-mem.c
··· 595 595 if (!ep->stream_info) 596 596 return NULL; 597 597 598 - if (stream_id > ep->stream_info->num_streams) 598 + if (stream_id >= ep->stream_info->num_streams) 599 599 return NULL; 600 600 return ep->stream_info->stream_rings[stream_id]; 601 601 }
+6 -17
drivers/usb/misc/yurex.c
··· 396 396 loff_t *ppos) 397 397 { 398 398 struct usb_yurex *dev; 399 - int retval = 0; 400 - int bytes_read = 0; 399 + int len = 0; 401 400 char in_buffer[20]; 402 401 unsigned long flags; 403 402 ··· 404 405 405 406 mutex_lock(&dev->io_mutex); 406 407 if (!dev->interface) { /* already disconnected */ 407 - retval = -ENODEV; 408 - goto exit; 408 + mutex_unlock(&dev->io_mutex); 409 + return -ENODEV; 409 410 } 410 411 411 412 spin_lock_irqsave(&dev->lock, flags); 412 - bytes_read = snprintf(in_buffer, 20, "%lld\n", dev->bbu); 413 + len = scnprintf(in_buffer, sizeof(in_buffer), "%lld\n", dev->bbu); 414 spin_unlock_irqrestore(&dev->lock, flags); 414 - 415 - if (*ppos < bytes_read) { 416 - if (copy_to_user(buffer, in_buffer + *ppos, bytes_read - *ppos)) 417 - retval = -EFAULT; 418 - else { 419 - retval = bytes_read - *ppos; 420 - *ppos += bytes_read; 421 - } 422 - } 423 - 424 - exit: 425 415 mutex_unlock(&dev->io_mutex); 426 416 return retval; 416 + 417 + return simple_read_from_buffer(buffer, count, ppos, in_buffer, len); 427 418 } 428 419 429 420 static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
+1 -1
drivers/usb/serial/ch341.c
··· 128 128 r = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0), request, 129 129 USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN, 130 130 value, index, buf, bufsize, DEFAULT_TIMEOUT); 131 - if (r < bufsize) { 131 + if (r < (int)bufsize) { 132 132 if (r >= 0) { 133 133 dev_err(&dev->dev, 134 134 "short control message received (%d < %u)\n",
+1
drivers/usb/serial/cp210x.c
··· 149 149 { USB_DEVICE(0x10C4, 0x8977) }, /* CEL MeshWorks DevKit Device */ 150 150 { USB_DEVICE(0x10C4, 0x8998) }, /* KCF Technologies PRN */ 151 151 { USB_DEVICE(0x10C4, 0x89A4) }, /* CESINEL FTBC Flexible Thyristor Bridge Controller */ 152 + { USB_DEVICE(0x10C4, 0x89FB) }, /* Qivicon ZigBee USB Radio Stick */ 152 153 { USB_DEVICE(0x10C4, 0x8A2A) }, /* HubZ dual ZigBee and Z-Wave dongle */ 153 154 { USB_DEVICE(0x10C4, 0x8A5E) }, /* CEL EM3588 ZigBee USB Stick Long Range */ 154 155 { USB_DEVICE(0x10C4, 0x8B34) }, /* Qivicon ZigBee USB Radio Stick */
+3 -1
drivers/usb/serial/keyspan_pda.c
··· 369 369 3, /* get pins */ 370 370 USB_TYPE_VENDOR|USB_RECIP_INTERFACE|USB_DIR_IN, 371 371 0, 0, data, 1, 2000); 372 - if (rc >= 0) 372 + if (rc == 1) 373 373 *value = *data; 374 + else if (rc >= 0) 375 + rc = -EIO; 374 376 375 377 kfree(data); 376 378 return rc;
+3
drivers/usb/serial/mos7840.c
··· 468 468 } 469 469 470 470 dev_dbg(dev, "%s urb buffer size is %d\n", __func__, urb->actual_length); 471 + if (urb->actual_length < 1) 472 + goto out; 473 + 471 474 dev_dbg(dev, "%s mos7840_port->MsrLsr is %d port %d\n", __func__, 472 475 mos7840_port->MsrLsr, mos7840_port->port_num); 473 476 data = urb->transfer_buffer;
+3 -2
drivers/usb/typec/tcpm.c
··· 725 725 726 726 tcpm_log(port, "Setting voltage/current limit %u mV %u mA", mv, max_ma); 727 727 728 + port->supply_voltage = mv; 729 + port->current_limit = max_ma; 730 + 728 731 if (port->tcpc->set_current_limit) 729 732 ret = port->tcpc->set_current_limit(port->tcpc, max_ma, mv); 730 733 ··· 2598 2595 tcpm_set_attached_state(port, false); 2599 2596 port->try_src_count = 0; 2600 2597 port->try_snk_count = 0; 2601 - port->supply_voltage = 0; 2602 - port->current_limit = 0; 2603 2598 port->usb_type = POWER_SUPPLY_USB_TYPE_C; 2604 2599 2605 2600 power_supply_changed(port->psy);
+10 -2
drivers/vfio/pci/Kconfig
··· 28 28 def_bool y if !S390 29 29 30 30 config VFIO_PCI_IGD 31 - depends on VFIO_PCI 32 - def_bool y if X86 31 + bool "VFIO PCI extensions for Intel graphics (GVT-d)" 32 + depends on VFIO_PCI && X86 33 + default y 34 + help 35 + Support for Intel IGD specific extensions to enable direct 36 + assignment to virtual machines. This includes exposing an IGD 37 + specific firmware table and read-only copies of the host bridge 38 + and LPC bridge config space. 39 + 40 + To enable Intel IGD assignment through vfio-pci, say Y.
+7 -9
drivers/vfio/vfio_iommu_type1.c
··· 343 343 struct page *page[1]; 344 344 struct vm_area_struct *vma; 345 345 struct vm_area_struct *vmas[1]; 346 + unsigned int flags = 0; 346 347 int ret; 347 348 349 + if (prot & IOMMU_WRITE) 350 + flags |= FOLL_WRITE; 351 + 352 + down_read(&mm->mmap_sem); 348 353 if (mm == current->mm) { 349 - ret = get_user_pages_longterm(vaddr, 1, !!(prot & IOMMU_WRITE), 350 - page, vmas); 354 + ret = get_user_pages_longterm(vaddr, 1, flags, page, vmas); 351 355 } else { 352 - unsigned int flags = 0; 353 - 354 - if (prot & IOMMU_WRITE) 355 - flags |= FOLL_WRITE; 356 - 357 - down_read(&mm->mmap_sem); 358 356 ret = get_user_pages_remote(NULL, mm, vaddr, 1, flags, page, 359 357 vmas, NULL); 360 358 /* ··· 366 368 ret = -EOPNOTSUPP; 367 369 put_page(page[0]); 368 370 } 369 - up_read(&mm->mmap_sem); 370 371 } 372 + up_read(&mm->mmap_sem); 371 373 372 374 if (ret == 1) { 373 375 *pfn = page_to_pfn(page[0]);
+2 -2
fs/autofs/Makefile
··· 2 2 # Makefile for the linux autofs-filesystem routines. 3 3 # 4 4 5 - obj-$(CONFIG_AUTOFS_FS) += autofs.o 5 + obj-$(CONFIG_AUTOFS_FS) += autofs4.o 6 6 7 - autofs-objs := init.o inode.o root.o symlink.o waitq.o expire.o dev-ioctl.o 7 + autofs4-objs := init.o inode.o root.o symlink.o waitq.o expire.o dev-ioctl.o
+13 -9
fs/autofs/dev-ioctl.c
··· 135 135 cmd); 136 136 goto out; 137 137 } 138 + } else { 139 + unsigned int inr = _IOC_NR(cmd); 140 + 141 + if (inr == AUTOFS_DEV_IOCTL_OPENMOUNT_CMD || 142 + inr == AUTOFS_DEV_IOCTL_REQUESTER_CMD || 143 + inr == AUTOFS_DEV_IOCTL_ISMOUNTPOINT_CMD) { 144 + err = -EINVAL; 145 + goto out; 146 + } 138 147 } 139 148 140 149 err = 0; ··· 280 271 dev_t devid; 281 272 int err, fd; 282 273 283 - /* param->path has already been checked */ 274 + /* param->path has been checked in validate_dev_ioctl() */ 275 + 284 276 if (!param->openmount.devid) 285 277 return -EINVAL; 286 278 ··· 443 433 dev_t devid; 444 434 int err = -ENOENT; 445 435 446 - if (param->size <= AUTOFS_DEV_IOCTL_SIZE) { 447 - err = -EINVAL; 448 - goto out; 449 - } 436 + /* param->path has been checked in validate_dev_ioctl() */ 450 437 451 438 devid = sbi->sb->s_dev; 452 439 ··· 528 521 unsigned int devid, magic; 529 522 int err = -ENOENT; 530 523 531 - if (param->size <= AUTOFS_DEV_IOCTL_SIZE) { 532 - err = -EINVAL; 533 - goto out; 534 - } 524 + /* param->path has been checked in validate_dev_ioctl() */ 535 525 536 526 name = param->path; 537 527 type = param->ismountpoint.in.type;
+1 -1
fs/autofs/init.c
··· 23 23 .kill_sb = autofs_kill_sb, 24 24 }; 25 25 MODULE_ALIAS_FS("autofs"); 26 - MODULE_ALIAS("autofs4"); 26 + MODULE_ALIAS("autofs"); 27 27 28 28 static int __init init_autofs_fs(void) 29 29 {
+2 -3
fs/binfmt_elf.c
··· 1259 1259 goto out_free_ph; 1260 1260 } 1261 1261 1262 - len = ELF_PAGESTART(eppnt->p_filesz + eppnt->p_vaddr + 1263 - ELF_MIN_ALIGN - 1); 1264 - bss = eppnt->p_memsz + eppnt->p_vaddr; 1262 + len = ELF_PAGEALIGN(eppnt->p_filesz + eppnt->p_vaddr); 1263 + bss = ELF_PAGEALIGN(eppnt->p_memsz + eppnt->p_vaddr); 1265 1264 if (bss > len) { 1266 1265 error = vm_brk(len, bss - len); 1267 1266 if (error)
+2 -1
fs/cifs/cifsglob.h
··· 423 423 void (*set_oplock_level)(struct cifsInodeInfo *, __u32, unsigned int, 424 424 bool *); 425 425 /* create lease context buffer for CREATE request */ 426 - char * (*create_lease_buf)(u8 *, u8); 426 + char * (*create_lease_buf)(u8 *lease_key, u8 oplock); 427 427 /* parse lease context buffer and return oplock/epoch info */ 428 428 __u8 (*parse_lease_buf)(void *buf, unsigned int *epoch, char *lkey); 429 429 ssize_t (*copychunk_range)(const unsigned int, ··· 1416 1416 /* one of these for every pending CIFS request to the server */ 1417 1417 struct mid_q_entry { 1418 1418 struct list_head qhead; /* mids waiting on reply from this server */ 1419 + struct kref refcount; 1419 1420 struct TCP_Server_Info *server; /* server corresponding to this mid */ 1420 1421 __u64 mid; /* multiplex id */ 1421 1422 __u32 pid; /* process id */
+1
fs/cifs/cifsproto.h
··· 82 82 struct TCP_Server_Info *server); 83 83 extern void DeleteMidQEntry(struct mid_q_entry *midEntry); 84 84 extern void cifs_delete_mid(struct mid_q_entry *mid); 85 + extern void cifs_mid_q_entry_release(struct mid_q_entry *midEntry); 85 86 extern void cifs_wake_up_task(struct mid_q_entry *mid); 86 87 extern int cifs_handle_standard(struct TCP_Server_Info *server, 87 88 struct mid_q_entry *mid);
+8 -2
fs/cifs/cifssmb.c
··· 157 157 * greater than cifs socket timeout which is 7 seconds 158 158 */ 159 159 while (server->tcpStatus == CifsNeedReconnect) { 160 - wait_event_interruptible_timeout(server->response_q, 161 - (server->tcpStatus != CifsNeedReconnect), 10 * HZ); 160 + rc = wait_event_interruptible_timeout(server->response_q, 161 + (server->tcpStatus != CifsNeedReconnect), 162 + 10 * HZ); 163 + if (rc < 0) { 164 + cifs_dbg(FYI, "%s: aborting reconnect due to a signal" 165 + " received by the process\n", __func__); 166 + return -ERESTARTSYS; 167 + } 162 168 163 169 /* are we still trying to reconnect? */ 164 170 if (server->tcpStatus != CifsNeedReconnect)
+7 -1
fs/cifs/connect.c
··· 924 924 server->pdu_size = next_offset; 925 925 } 926 926 927 + mid_entry = NULL; 927 928 if (server->ops->is_transform_hdr && 928 929 server->ops->receive_transform && 929 930 server->ops->is_transform_hdr(buf)) { ··· 939 938 length = mid_entry->receive(server, mid_entry); 940 939 } 941 940 942 - if (length < 0) 941 + if (length < 0) { 942 + if (mid_entry) 943 + cifs_mid_q_entry_release(mid_entry); 943 944 continue; 945 + } 944 946 945 947 if (server->large_buf) 946 948 buf = server->bigbuf; ··· 960 956 961 957 if (!mid_entry->multiRsp || mid_entry->multiEnd) 962 958 mid_entry->callback(mid_entry); 959 + 960 + cifs_mid_q_entry_release(mid_entry); 963 961 } else if (server->ops->is_oplock_break && 964 962 server->ops->is_oplock_break(buf, server)) { 965 963 cifs_dbg(FYI, "Received oplock break\n");
+1
fs/cifs/smb1ops.c
··· 107 107 if (compare_mid(mid->mid, buf) && 108 108 mid->mid_state == MID_REQUEST_SUBMITTED && 109 109 le16_to_cpu(mid->command) == buf->Command) { 110 + kref_get(&mid->refcount); 110 111 spin_unlock(&GlobalMid_Lock); 111 112 return mid; 112 113 }
+4 -7
fs/cifs/smb2file.c
··· 41 41 int rc; 42 42 __le16 *smb2_path; 43 43 struct smb2_file_all_info *smb2_data = NULL; 44 - __u8 smb2_oplock[17]; 44 + __u8 smb2_oplock; 45 45 struct cifs_fid *fid = oparms->fid; 46 46 struct network_resiliency_req nr_ioctl_req; 47 47 ··· 59 59 } 60 60 61 61 oparms->desired_access |= FILE_READ_ATTRIBUTES; 62 - *smb2_oplock = SMB2_OPLOCK_LEVEL_BATCH; 62 + smb2_oplock = SMB2_OPLOCK_LEVEL_BATCH; 63 63 64 - if (oparms->tcon->ses->server->capabilities & SMB2_GLOBAL_CAP_LEASING) 65 - memcpy(smb2_oplock + 1, fid->lease_key, SMB2_LEASE_KEY_SIZE); 66 - 67 - rc = SMB2_open(xid, oparms, smb2_path, smb2_oplock, smb2_data, NULL, 64 + rc = SMB2_open(xid, oparms, smb2_path, &smb2_oplock, smb2_data, NULL, 68 65 NULL); 69 66 if (rc) 70 67 goto out; ··· 98 101 move_smb2_info_to_cifs(buf, smb2_data); 99 102 } 100 103 101 - *oplock = *smb2_oplock; 104 + *oplock = smb2_oplock; 102 105 out: 103 106 kfree(smb2_data); 104 107 kfree(smb2_path);
+7 -7
fs/cifs/smb2ops.c
··· 203 203 if ((mid->mid == wire_mid) && 204 204 (mid->mid_state == MID_REQUEST_SUBMITTED) && 205 205 (mid->command == shdr->Command)) { 206 + kref_get(&mid->refcount); 206 207 spin_unlock(&GlobalMid_Lock); 207 208 return mid; 208 209 } ··· 856 855 857 856 rc = SMB2_set_ea(xid, tcon, fid.persistent_fid, fid.volatile_fid, ea, 858 857 len); 858 + kfree(ea); 859 + 859 860 SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid); 860 861 861 862 return rc; ··· 2222 2219 if (!buf) 2223 2220 return NULL; 2224 2221 2225 - buf->lcontext.LeaseKeyLow = cpu_to_le64(*((u64 *)lease_key)); 2226 - buf->lcontext.LeaseKeyHigh = cpu_to_le64(*((u64 *)(lease_key + 8))); 2222 + memcpy(&buf->lcontext.LeaseKey, lease_key, SMB2_LEASE_KEY_SIZE); 2227 2223 buf->lcontext.LeaseState = map_oplock_to_lease(oplock); 2228 2224 2229 2225 buf->ccontext.DataOffset = cpu_to_le16(offsetof ··· 2248 2246 if (!buf) 2249 2247 return NULL; 2250 2248 2251 - buf->lcontext.LeaseKeyLow = cpu_to_le64(*((u64 *)lease_key)); 2252 - buf->lcontext.LeaseKeyHigh = cpu_to_le64(*((u64 *)(lease_key + 8))); 2249 + memcpy(&buf->lcontext.LeaseKey, lease_key, SMB2_LEASE_KEY_SIZE); 2253 2250 buf->lcontext.LeaseState = map_oplock_to_lease(oplock); 2254 2251 2255 2252 buf->ccontext.DataOffset = cpu_to_le16(offsetof ··· 2285 2284 if (lc->lcontext.LeaseFlags & SMB2_LEASE_FLAG_BREAK_IN_PROGRESS) 2286 2285 return SMB2_OPLOCK_LEVEL_NOCHANGE; 2287 2286 if (lease_key) 2288 - memcpy(lease_key, &lc->lcontext.LeaseKeyLow, 2289 - SMB2_LEASE_KEY_SIZE); 2287 + memcpy(lease_key, &lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE); 2290 2288 return le32_to_cpu(lc->lcontext.LeaseState); 2291 2289 } 2292 2290 ··· 2521 2521 if (!tr_hdr) 2522 2522 goto err_free_iov; 2523 2523 2524 - orig_len = smb2_rqst_len(old_rq, false); 2524 + orig_len = smb_rqst_len(server, old_rq); 2525 2525 2526 2526 /* fill the 2nd iov with a transform header */ 2527 2527 fill_transform_hdr(tr_hdr, orig_len, old_rq);
+21 -11
fs/cifs/smb2pdu.c
··· 155 155 static int 156 156 smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon) 157 157 { 158 - int rc = 0; 158 + int rc; 159 159 struct nls_table *nls_codepage; 160 160 struct cifs_ses *ses; 161 161 struct TCP_Server_Info *server; ··· 166 166 * for those three - in the calling routine. 167 167 */ 168 168 if (tcon == NULL) 169 - return rc; 169 + return 0; 170 170 171 171 if (smb2_command == SMB2_TREE_CONNECT) 172 - return rc; 172 + return 0; 173 173 174 174 if (tcon->tidStatus == CifsExiting) { 175 175 /* ··· 212 212 return -EAGAIN; 213 213 } 214 214 215 - wait_event_interruptible_timeout(server->response_q, 216 - (server->tcpStatus != CifsNeedReconnect), 10 * HZ); 215 + rc = wait_event_interruptible_timeout(server->response_q, 216 + (server->tcpStatus != CifsNeedReconnect), 217 + 10 * HZ); 218 + if (rc < 0) { 219 + cifs_dbg(FYI, "%s: aborting reconnect due to a received" 220 + " signal by the process\n", __func__); 221 + return -ERESTARTSYS; 222 + } 217 223 218 224 /* are we still trying to reconnect? */ 219 225 if (server->tcpStatus != CifsNeedReconnect) ··· 237 231 } 238 232 239 233 if (!tcon->ses->need_reconnect && !tcon->need_reconnect) 240 - return rc; 234 + return 0; 241 235 242 236 nls_codepage = load_nls_default(); 243 237 ··· 346 340 return rc; 347 341 348 342 /* BB eventually switch this to SMB2 specific small buf size */ 349 - *request_buf = cifs_small_buf_get(); 343 + if (smb2_command == SMB2_SET_INFO) 344 + *request_buf = cifs_buf_get(); 345 + else 346 + *request_buf = cifs_small_buf_get(); 350 347 if (*request_buf == NULL) { 351 348 /* BB should we add a retry in here if not a writepage? 
*/ 352 349 return -ENOMEM; ··· 1716 1707 1717 1708 static int 1718 1709 add_lease_context(struct TCP_Server_Info *server, struct kvec *iov, 1719 - unsigned int *num_iovec, __u8 *oplock) 1710 + unsigned int *num_iovec, u8 *lease_key, __u8 *oplock) 1720 1711 { 1721 1712 struct smb2_create_req *req = iov[0].iov_base; 1722 1713 unsigned int num = *num_iovec; 1723 1714 1724 - iov[num].iov_base = server->ops->create_lease_buf(oplock+1, *oplock); 1715 + iov[num].iov_base = server->ops->create_lease_buf(lease_key, *oplock); 1725 1716 if (iov[num].iov_base == NULL) 1726 1717 return -ENOMEM; 1727 1718 iov[num].iov_len = server->vals->create_lease_size; ··· 2181 2172 *oplock == SMB2_OPLOCK_LEVEL_NONE) 2182 2173 req->RequestedOplockLevel = *oplock; 2183 2174 else { 2184 - rc = add_lease_context(server, iov, &n_iov, oplock); 2175 + rc = add_lease_context(server, iov, &n_iov, 2176 + oparms->fid->lease_key, oplock); 2185 2177 if (rc) { 2186 2178 cifs_small_buf_release(req); 2187 2179 kfree(copy_path); ··· 3730 3720 3731 3721 rc = cifs_send_recv(xid, ses, &rqst, &resp_buftype, flags, 3732 3722 &rsp_iov); 3733 - cifs_small_buf_release(req); 3723 + cifs_buf_release(req); 3734 3724 rsp = (struct smb2_set_info_rsp *)rsp_iov.iov_base; 3735 3725 3736 3726 if (rc != 0) {
+2 -4
fs/cifs/smb2pdu.h
··· 678 678 #define SMB2_LEASE_KEY_SIZE 16 679 679 680 680 struct lease_context { 681 - __le64 LeaseKeyLow; 682 - __le64 LeaseKeyHigh; 681 + u8 LeaseKey[SMB2_LEASE_KEY_SIZE]; 683 682 __le32 LeaseState; 684 683 __le32 LeaseFlags; 685 684 __le64 LeaseDuration; 686 685 } __packed; 687 686 688 687 struct lease_context_v2 { 689 - __le64 LeaseKeyLow; 690 - __le64 LeaseKeyHigh; 688 + u8 LeaseKey[SMB2_LEASE_KEY_SIZE]; 691 689 __le32 LeaseState; 692 690 __le32 LeaseFlags; 693 691 __le64 LeaseDuration;
+2 -2
fs/cifs/smb2proto.h
··· 113 113 extern int smb2_push_mandatory_locks(struct cifsFileInfo *cfile); 114 114 extern void smb2_reconnect_server(struct work_struct *work); 115 115 extern int smb3_crypto_aead_allocate(struct TCP_Server_Info *server); 116 - extern unsigned long 117 - smb2_rqst_len(struct smb_rqst *rqst, bool skip_rfc1002_marker); 116 + extern unsigned long smb_rqst_len(struct TCP_Server_Info *server, 117 + struct smb_rqst *rqst); 118 118 119 119 /* 120 120 * SMB2 Worker functions - most of protocol specific implementation details
+50 -10
fs/cifs/smb2transport.c
··· 173 173 struct kvec *iov = rqst->rq_iov; 174 174 struct smb2_sync_hdr *shdr = (struct smb2_sync_hdr *)iov[0].iov_base; 175 175 struct cifs_ses *ses; 176 + struct shash_desc *shash = &server->secmech.sdeschmacsha256->shash; 177 + struct smb_rqst drqst; 176 178 177 179 ses = smb2_find_smb_ses(server, shdr->SessionId); 178 180 if (!ses) { ··· 192 190 } 193 191 194 192 rc = crypto_shash_setkey(server->secmech.hmacsha256, 195 - ses->auth_key.response, SMB2_NTLMV2_SESSKEY_SIZE); 193 + ses->auth_key.response, SMB2_NTLMV2_SESSKEY_SIZE); 196 194 if (rc) { 197 195 cifs_dbg(VFS, "%s: Could not update with response\n", __func__); 198 196 return rc; 199 197 } 200 198 201 - rc = crypto_shash_init(&server->secmech.sdeschmacsha256->shash); 199 + rc = crypto_shash_init(shash); 202 200 if (rc) { 203 201 cifs_dbg(VFS, "%s: Could not init sha256", __func__); 204 202 return rc; 205 203 } 206 204 207 - rc = __cifs_calc_signature(rqst, server, sigptr, 208 - &server->secmech.sdeschmacsha256->shash); 205 + /* 206 + * For SMB2+, __cifs_calc_signature() expects to sign only the actual 207 + * data, that is, iov[0] should not contain a rfc1002 length. 208 + * 209 + * Sign the rfc1002 length prior to passing the data (iov[1-N]) down to 210 + * __cifs_calc_signature(). 
211 + */ 212 + drqst = *rqst; 213 + if (drqst.rq_nvec >= 2 && iov[0].iov_len == 4) { 214 + rc = crypto_shash_update(shash, iov[0].iov_base, 215 + iov[0].iov_len); 216 + if (rc) { 217 + cifs_dbg(VFS, "%s: Could not update with payload\n", 218 + __func__); 219 + return rc; 220 + } 221 + drqst.rq_iov++; 222 + drqst.rq_nvec--; 223 + } 209 224 225 + rc = __cifs_calc_signature(&drqst, server, sigptr, shash); 210 226 if (!rc) 211 227 memcpy(shdr->Signature, sigptr, SMB2_SIGNATURE_SIZE); 212 228 ··· 428 408 int 429 409 smb3_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server) 430 410 { 431 - int rc = 0; 411 + int rc; 432 412 unsigned char smb3_signature[SMB2_CMACAES_SIZE]; 433 413 unsigned char *sigptr = smb3_signature; 434 414 struct kvec *iov = rqst->rq_iov; 435 415 struct smb2_sync_hdr *shdr = (struct smb2_sync_hdr *)iov[0].iov_base; 436 416 struct cifs_ses *ses; 417 + struct shash_desc *shash = &server->secmech.sdesccmacaes->shash; 418 + struct smb_rqst drqst; 437 419 438 420 ses = smb2_find_smb_ses(server, shdr->SessionId); 439 421 if (!ses) { ··· 447 425 memset(shdr->Signature, 0x0, SMB2_SIGNATURE_SIZE); 448 426 449 427 rc = crypto_shash_setkey(server->secmech.cmacaes, 450 - ses->smb3signingkey, SMB2_CMACAES_SIZE); 451 - 428 + ses->smb3signingkey, SMB2_CMACAES_SIZE); 452 429 if (rc) { 453 430 cifs_dbg(VFS, "%s: Could not set key for cmac aes\n", __func__); 454 431 return rc; ··· 458 437 * so unlike smb2 case we do not have to check here if secmech are 459 438 * initialized 460 439 */ 461 - rc = crypto_shash_init(&server->secmech.sdesccmacaes->shash); 440 + rc = crypto_shash_init(shash); 462 441 if (rc) { 463 442 cifs_dbg(VFS, "%s: Could not init cmac aes\n", __func__); 464 443 return rc; 465 444 } 466 445 467 - rc = __cifs_calc_signature(rqst, server, sigptr, 468 - &server->secmech.sdesccmacaes->shash); 446 + /* 447 + * For SMB2+, __cifs_calc_signature() expects to sign only the actual 448 + * data, that is, iov[0] should not contain a rfc1002 
length. 449 + * 450 + * Sign the rfc1002 length prior to passing the data (iov[1-N]) down to 451 + * __cifs_calc_signature(). 452 + */ 453 + drqst = *rqst; 454 + if (drqst.rq_nvec >= 2 && iov[0].iov_len == 4) { 455 + rc = crypto_shash_update(shash, iov[0].iov_base, 456 + iov[0].iov_len); 457 + if (rc) { 458 + cifs_dbg(VFS, "%s: Could not update with payload\n", 459 + __func__); 460 + return rc; 461 + } 462 + drqst.rq_iov++; 463 + drqst.rq_nvec--; 464 + } 469 465 466 + rc = __cifs_calc_signature(&drqst, server, sigptr, shash); 470 467 if (!rc) 471 468 memcpy(shdr->Signature, sigptr, SMB2_SIGNATURE_SIZE); 472 469 ··· 587 548 588 549 temp = mempool_alloc(cifs_mid_poolp, GFP_NOFS); 589 550 memset(temp, 0, sizeof(struct mid_q_entry)); 551 + kref_init(&temp->refcount); 590 552 temp->mid = le64_to_cpu(shdr->MessageId); 591 553 temp->pid = current->pid; 592 554 temp->command = shdr->Command; /* Always LE */
+3 -2
fs/cifs/smbdirect.c
··· 2083 2083 * rqst: the data to write 2084 2084 * return value: 0 if successfully write, otherwise error code 2085 2085 */ 2086 - int smbd_send(struct smbd_connection *info, struct smb_rqst *rqst) 2086 + int smbd_send(struct TCP_Server_Info *server, struct smb_rqst *rqst) 2087 2087 { 2088 + struct smbd_connection *info = server->smbd_conn; 2088 2089 struct kvec vec; 2089 2090 int nvecs; 2090 2091 int size; ··· 2119 2118 * rq_tailsz to PAGE_SIZE when the buffer has multiple pages and 2120 2119 * ends at page boundary 2121 2120 */ 2122 - buflen = smb2_rqst_len(rqst, true); 2121 + buflen = smb_rqst_len(server, rqst); 2123 2122 2124 2123 if (buflen + sizeof(struct smbd_data_transfer) > 2125 2124 info->max_fragmented_send_size) {
+2 -2
fs/cifs/smbdirect.h
··· 292 292 293 293 /* Interface for carrying upper layer I/O through send/recv */ 294 294 int smbd_recv(struct smbd_connection *info, struct msghdr *msg); 295 - int smbd_send(struct smbd_connection *info, struct smb_rqst *rqst); 295 + int smbd_send(struct TCP_Server_Info *server, struct smb_rqst *rqst); 296 296 297 297 enum mr_state { 298 298 MR_READY, ··· 332 332 static inline int smbd_reconnect(struct TCP_Server_Info *server) {return -1; } 333 333 static inline void smbd_destroy(struct smbd_connection *info) {} 334 334 static inline int smbd_recv(struct smbd_connection *info, struct msghdr *msg) {return -1; } 335 - static inline int smbd_send(struct smbd_connection *info, struct smb_rqst *rqst) {return -1; } 335 + static inline int smbd_send(struct TCP_Server_Info *server, struct smb_rqst *rqst) {return -1; } 336 336 #endif 337 337 338 338 #endif
+22 -5
fs/cifs/transport.c
··· 61 61 62 62 temp = mempool_alloc(cifs_mid_poolp, GFP_NOFS); 63 63 memset(temp, 0, sizeof(struct mid_q_entry)); 64 + kref_init(&temp->refcount); 64 65 temp->mid = get_mid(smb_buffer); 65 66 temp->pid = current->pid; 66 67 temp->command = cpu_to_le16(smb_buffer->Command); ··· 81 80 atomic_inc(&midCount); 82 81 temp->mid_state = MID_REQUEST_ALLOCATED; 83 82 return temp; 83 + } 84 + 85 + static void _cifs_mid_q_entry_release(struct kref *refcount) 86 + { 87 + struct mid_q_entry *mid = container_of(refcount, struct mid_q_entry, 88 + refcount); 89 + 90 + mempool_free(mid, cifs_mid_poolp); 91 + } 92 + 93 + void cifs_mid_q_entry_release(struct mid_q_entry *midEntry) 94 + { 95 + spin_lock(&GlobalMid_Lock); 96 + kref_put(&midEntry->refcount, _cifs_mid_q_entry_release); 97 + spin_unlock(&GlobalMid_Lock); 84 98 } 85 99 86 100 void ··· 126 110 } 127 111 } 128 112 #endif 129 - mempool_free(midEntry, cifs_mid_poolp); 113 + cifs_mid_q_entry_release(midEntry); 130 114 } 131 115 132 116 void ··· 218 202 } 219 203 220 204 unsigned long 221 - smb2_rqst_len(struct smb_rqst *rqst, bool skip_rfc1002_marker) 205 + smb_rqst_len(struct TCP_Server_Info *server, struct smb_rqst *rqst) 222 206 { 223 207 unsigned int i; 224 208 struct kvec *iov; 225 209 int nvec; 226 210 unsigned long buflen = 0; 227 211 228 - if (skip_rfc1002_marker && rqst->rq_iov[0].iov_len == 4) { 212 + if (server->vals->header_preamble_size == 0 && 213 + rqst->rq_nvec >= 2 && rqst->rq_iov[0].iov_len == 4) { 229 214 iov = &rqst->rq_iov[1]; 230 215 nvec = rqst->rq_nvec - 1; 231 216 } else { ··· 277 260 __be32 rfc1002_marker; 278 261 279 262 if (cifs_rdma_enabled(server) && server->smbd_conn) { 280 - rc = smbd_send(server->smbd_conn, rqst); 263 + rc = smbd_send(server, rqst); 281 264 goto smbd_done; 282 265 } 283 266 if (ssocket == NULL) ··· 288 271 (char *)&val, sizeof(val)); 289 272 290 273 for (j = 0; j < num_rqst; j++) 291 - send_length += smb2_rqst_len(&rqst[j], true); 274 + send_length += smb_rqst_len(server, 
&rqst[j]); 292 275 rfc1002_marker = cpu_to_be32(send_length); 293 276 294 277 /* Generate a rfc1002 marker for SMB2+ */
+13 -8
fs/ext4/balloc.c
··· 184 184 unsigned int bit, bit_max; 185 185 struct ext4_sb_info *sbi = EXT4_SB(sb); 186 186 ext4_fsblk_t start, tmp; 187 - int flex_bg = 0; 188 187 189 188 J_ASSERT_BH(bh, buffer_locked(bh)); 190 189 ··· 206 207 207 208 start = ext4_group_first_block_no(sb, block_group); 208 209 209 - if (ext4_has_feature_flex_bg(sb)) 210 - flex_bg = 1; 211 - 212 210 /* Set bits for block and inode bitmaps, and inode table */ 213 211 tmp = ext4_block_bitmap(sb, gdp); 214 - if (!flex_bg || ext4_block_in_group(sb, tmp, block_group)) 212 + if (ext4_block_in_group(sb, tmp, block_group)) 215 213 ext4_set_bit(EXT4_B2C(sbi, tmp - start), bh->b_data); 216 214 217 215 tmp = ext4_inode_bitmap(sb, gdp); 218 - if (!flex_bg || ext4_block_in_group(sb, tmp, block_group)) 216 + if (ext4_block_in_group(sb, tmp, block_group)) 219 217 ext4_set_bit(EXT4_B2C(sbi, tmp - start), bh->b_data); 220 218 221 219 tmp = ext4_inode_table(sb, gdp); 222 220 for (; tmp < ext4_inode_table(sb, gdp) + 223 221 sbi->s_itb_per_group; tmp++) { 224 - if (!flex_bg || ext4_block_in_group(sb, tmp, block_group)) 222 + if (ext4_block_in_group(sb, tmp, block_group)) 225 223 ext4_set_bit(EXT4_B2C(sbi, tmp - start), bh->b_data); 226 224 } 227 225 ··· 438 442 goto verify; 439 443 } 440 444 ext4_lock_group(sb, block_group); 441 - if (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) { 445 + if (ext4_has_group_desc_csum(sb) && 446 + (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))) { 447 + if (block_group == 0) { 448 + ext4_unlock_group(sb, block_group); 449 + unlock_buffer(bh); 450 + ext4_error(sb, "Block bitmap for bg 0 marked " 451 + "uninitialized"); 452 + err = -EFSCORRUPTED; 453 + goto out; 454 + } 442 455 err = ext4_init_block_bitmap(sb, bh, block_group, desc); 443 456 set_bitmap_uptodate(bh); 444 457 set_buffer_uptodate(bh);
+1 -8
fs/ext4/ext4.h
··· 1114 1114 #define EXT4_MOUNT_DIOREAD_NOLOCK 0x400000 /* Enable support for dio read nolocking */ 1115 1115 #define EXT4_MOUNT_JOURNAL_CHECKSUM 0x800000 /* Journal checksums */ 1116 1116 #define EXT4_MOUNT_JOURNAL_ASYNC_COMMIT 0x1000000 /* Journal Async Commit */ 1117 + #define EXT4_MOUNT_WARN_ON_ERROR 0x2000000 /* Trigger WARN_ON on error */ 1117 1118 #define EXT4_MOUNT_DELALLOC 0x8000000 /* Delalloc support */ 1118 1119 #define EXT4_MOUNT_DATA_ERR_ABORT 0x10000000 /* Abort on file data write */ 1119 1120 #define EXT4_MOUNT_BLOCK_VALIDITY 0x20000000 /* Block validity checking */ ··· 1508 1507 static inline int ext4_valid_inum(struct super_block *sb, unsigned long ino) 1509 1508 { 1510 1509 return ino == EXT4_ROOT_INO || 1511 - ino == EXT4_USR_QUOTA_INO || 1512 - ino == EXT4_GRP_QUOTA_INO || 1513 - ino == EXT4_BOOT_LOADER_INO || 1514 - ino == EXT4_JOURNAL_INO || 1515 - ino == EXT4_RESIZE_INO || 1516 1510 (ino >= EXT4_FIRST_INO(sb) && 1517 1511 ino <= le32_to_cpu(EXT4_SB(sb)->s_es->s_inodes_count)); 1518 1512 } ··· 3014 3018 struct iomap; 3015 3019 extern int ext4_inline_data_iomap(struct inode *inode, struct iomap *iomap); 3016 3020 3017 - extern int ext4_try_to_evict_inline_data(handle_t *handle, 3018 - struct inode *inode, 3019 - int needed); 3020 3021 extern int ext4_inline_data_truncate(struct inode *inode, int *has_inline); 3021 3022 3022 3023 extern int ext4_convert_inline_data(struct inode *inode);
+1
fs/ext4/ext4_extents.h
··· 91 91 }; 92 92 93 93 #define EXT4_EXT_MAGIC cpu_to_le16(0xf30a) 94 + #define EXT4_MAX_EXTENT_DEPTH 5 94 95 95 96 #define EXT4_EXTENT_TAIL_OFFSET(hdr) \ 96 97 (sizeof(struct ext4_extent_header) + \
+6
fs/ext4/extents.c
··· 869 869 870 870 eh = ext_inode_hdr(inode); 871 871 depth = ext_depth(inode); 872 + if (depth < 0 || depth > EXT4_MAX_EXTENT_DEPTH) { 873 + EXT4_ERROR_INODE(inode, "inode has invalid extent depth: %d", 874 + depth); 875 + ret = -EFSCORRUPTED; 876 + goto err; 877 + } 872 878 873 879 if (path) { 874 880 ext4_ext_drop_refs(path);
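The extents.c hunk rejects on-disk depth values outside [0, EXT4_MAX_EXTENT_DEPTH] before they are used to drive the tree walk. The general rule, sketched with a userspace stand-in (returning -1 where the kernel uses -EFSCORRUPTED), is to validate any untrusted on-disk integer before it sizes a loop or allocation:

```c
#define MAX_EXTENT_DEPTH 5	/* mirrors EXT4_MAX_EXTENT_DEPTH */

/* Reject an untrusted on-disk extent depth before it sizes the path
 * walk; a corrupted value must not become a loop or array bound. */
static int check_extent_depth(int depth)
{
	if (depth < 0 || depth > MAX_EXTENT_DEPTH)
		return -1;	/* kernel returns -EFSCORRUPTED here */
	return 0;
}
```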
+12 -2
fs/ext4/ialloc.c
··· 150 150 } 151 151 152 152 ext4_lock_group(sb, block_group); 153 - if (desc->bg_flags & cpu_to_le16(EXT4_BG_INODE_UNINIT)) { 153 + if (ext4_has_group_desc_csum(sb) && 154 + (desc->bg_flags & cpu_to_le16(EXT4_BG_INODE_UNINIT))) { 155 + if (block_group == 0) { 156 + ext4_unlock_group(sb, block_group); 157 + unlock_buffer(bh); 158 + ext4_error(sb, "Inode bitmap for bg 0 marked " 159 + "uninitialized"); 160 + err = -EFSCORRUPTED; 161 + goto out; 162 + } 154 163 memset(bh->b_data, 0, (EXT4_INODES_PER_GROUP(sb) + 7) / 8); 155 164 ext4_mark_bitmap_end(EXT4_INODES_PER_GROUP(sb), 156 165 sb->s_blocksize * 8, bh->b_data); ··· 1003 994 1004 995 /* recheck and clear flag under lock if we still need to */ 1005 996 ext4_lock_group(sb, group); 1006 - if (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) { 997 + if (ext4_has_group_desc_csum(sb) && 998 + (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))) { 1007 999 gdp->bg_flags &= cpu_to_le16(~EXT4_BG_BLOCK_UNINIT); 1008 1000 ext4_free_group_clusters_set(sb, gdp, 1009 1001 ext4_free_clusters_after_init(sb, group, gdp));
+2 -37
fs/ext4/inline.c
··· 437 437 438 438 memset((void *)ext4_raw_inode(&is.iloc)->i_block, 439 439 0, EXT4_MIN_INLINE_DATA_SIZE); 440 + memset(ei->i_data, 0, EXT4_MIN_INLINE_DATA_SIZE); 440 441 441 442 if (ext4_has_feature_extents(inode->i_sb)) { 442 443 if (S_ISDIR(inode->i_mode) || ··· 887 886 flags |= AOP_FLAG_NOFS; 888 887 889 888 if (ret == -ENOSPC) { 889 + ext4_journal_stop(handle); 890 890 ret = ext4_da_convert_inline_data_to_extent(mapping, 891 891 inode, 892 892 flags, 893 893 fsdata); 894 - ext4_journal_stop(handle); 895 894 if (ret == -ENOSPC && 896 895 ext4_should_retry_alloc(inode->i_sb, &retries)) 897 896 goto retry_journal; ··· 1889 1888 out: 1890 1889 up_read(&EXT4_I(inode)->xattr_sem); 1891 1890 return (error < 0 ? error : 0); 1892 - } 1893 - 1894 - /* 1895 - * Called during xattr set, and if we can sparse space 'needed', 1896 - * just create the extent tree evict the data to the outer block. 1897 - * 1898 - * We use jbd2 instead of page cache to move data to the 1st block 1899 - * so that the whole transaction can be committed as a whole and 1900 - * the data isn't lost because of the delayed page cache write. 
1901 - */ 1902 - int ext4_try_to_evict_inline_data(handle_t *handle, 1903 - struct inode *inode, 1904 - int needed) 1905 - { 1906 - int error; 1907 - struct ext4_xattr_entry *entry; 1908 - struct ext4_inode *raw_inode; 1909 - struct ext4_iloc iloc; 1910 - 1911 - error = ext4_get_inode_loc(inode, &iloc); 1912 - if (error) 1913 - return error; 1914 - 1915 - raw_inode = ext4_raw_inode(&iloc); 1916 - entry = (struct ext4_xattr_entry *)((void *)raw_inode + 1917 - EXT4_I(inode)->i_inline_off); 1918 - if (EXT4_XATTR_LEN(entry->e_name_len) + 1919 - EXT4_XATTR_SIZE(le32_to_cpu(entry->e_value_size)) < needed) { 1920 - error = -ENOSPC; 1921 - goto out; 1922 - } 1923 - 1924 - error = ext4_convert_inline_data_nolock(handle, inode, &iloc); 1925 - out: 1926 - brelse(iloc.bh); 1927 - return error; 1928 1891 } 1929 1892 1930 1893 int ext4_inline_data_truncate(struct inode *inode, int *has_inline)
+4 -3
fs/ext4/inode.c
··· 402 402 if (!ext4_data_block_valid(EXT4_SB(inode->i_sb), map->m_pblk, 403 403 map->m_len)) { 404 404 ext4_error_inode(inode, func, line, map->m_pblk, 405 - "lblock %lu mapped to illegal pblock " 405 + "lblock %lu mapped to illegal pblock %llu " 406 406 "(length %d)", (unsigned long) map->m_lblk, 407 - map->m_len); 407 + map->m_pblk, map->m_len); 408 408 return -EFSCORRUPTED; 409 409 } 410 410 return 0; ··· 4506 4506 int inodes_per_block, inode_offset; 4507 4507 4508 4508 iloc->bh = NULL; 4509 - if (!ext4_valid_inum(sb, inode->i_ino)) 4509 + if (inode->i_ino < EXT4_ROOT_INO || 4510 + inode->i_ino > le32_to_cpu(EXT4_SB(sb)->s_es->s_inodes_count)) 4510 4511 return -EFSCORRUPTED; 4511 4512 4512 4513 iloc->block_group = (inode->i_ino - 1) / EXT4_INODES_PER_GROUP(sb);
+4 -2
fs/ext4/mballoc.c
··· 2423 2423 * initialize bb_free to be able to skip 2424 2424 * empty groups without initialization 2425 2425 */ 2426 - if (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) { 2426 + if (ext4_has_group_desc_csum(sb) && 2427 + (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))) { 2427 2428 meta_group_info[i]->bb_free = 2428 2429 ext4_free_clusters_after_init(sb, group, desc); 2429 2430 } else { ··· 2990 2989 #endif 2991 2990 ext4_set_bits(bitmap_bh->b_data, ac->ac_b_ex.fe_start, 2992 2991 ac->ac_b_ex.fe_len); 2993 - if (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) { 2992 + if (ext4_has_group_desc_csum(sb) && 2993 + (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))) { 2994 2994 gdp->bg_flags &= cpu_to_le16(~EXT4_BG_BLOCK_UNINIT); 2995 2995 ext4_free_group_clusters_set(sb, gdp, 2996 2996 ext4_free_clusters_after_init(sb,
+86 -13
fs/ext4/super.c
··· 405 405 406 406 static void ext4_handle_error(struct super_block *sb) 407 407 { 408 + if (test_opt(sb, WARN_ON_ERROR)) 409 + WARN_ON_ONCE(1); 410 + 408 411 if (sb_rdonly(sb)) 409 412 return; 410 413 ··· 742 739 printk(KERN_CONT "%pV\n", &vaf); 743 740 va_end(args); 744 741 } 742 + 743 + if (test_opt(sb, WARN_ON_ERROR)) 744 + WARN_ON_ONCE(1); 745 745 746 746 if (test_opt(sb, ERRORS_CONT)) { 747 747 ext4_commit_super(sb, 0); ··· 1377 1371 Opt_jqfmt_vfsold, Opt_jqfmt_vfsv0, Opt_jqfmt_vfsv1, Opt_quota, 1378 1372 Opt_noquota, Opt_barrier, Opt_nobarrier, Opt_err, 1379 1373 Opt_usrquota, Opt_grpquota, Opt_prjquota, Opt_i_version, Opt_dax, 1380 - Opt_stripe, Opt_delalloc, Opt_nodelalloc, Opt_mblk_io_submit, 1374 + Opt_stripe, Opt_delalloc, Opt_nodelalloc, Opt_warn_on_error, 1375 + Opt_nowarn_on_error, Opt_mblk_io_submit, 1381 1376 Opt_lazytime, Opt_nolazytime, Opt_debug_want_extra_isize, 1382 1377 Opt_nomblk_io_submit, Opt_block_validity, Opt_noblock_validity, 1383 1378 Opt_inode_readahead_blks, Opt_journal_ioprio, ··· 1445 1438 {Opt_dax, "dax"}, 1446 1439 {Opt_stripe, "stripe=%u"}, 1447 1440 {Opt_delalloc, "delalloc"}, 1441 + {Opt_warn_on_error, "warn_on_error"}, 1442 + {Opt_nowarn_on_error, "nowarn_on_error"}, 1448 1443 {Opt_lazytime, "lazytime"}, 1449 1444 {Opt_nolazytime, "nolazytime"}, 1450 1445 {Opt_debug_want_extra_isize, "debug_want_extra_isize=%u"}, ··· 1611 1602 MOPT_EXT4_ONLY | MOPT_SET | MOPT_EXPLICIT}, 1612 1603 {Opt_nodelalloc, EXT4_MOUNT_DELALLOC, 1613 1604 MOPT_EXT4_ONLY | MOPT_CLEAR}, 1605 + {Opt_warn_on_error, EXT4_MOUNT_WARN_ON_ERROR, MOPT_SET}, 1606 + {Opt_nowarn_on_error, EXT4_MOUNT_WARN_ON_ERROR, MOPT_CLEAR}, 1614 1607 {Opt_nojournal_checksum, EXT4_MOUNT_JOURNAL_CHECKSUM, 1615 1608 MOPT_EXT4_ONLY | MOPT_CLEAR}, 1616 1609 {Opt_journal_checksum, EXT4_MOUNT_JOURNAL_CHECKSUM, ··· 2342 2331 struct ext4_sb_info *sbi = EXT4_SB(sb); 2343 2332 ext4_fsblk_t first_block = le32_to_cpu(sbi->s_es->s_first_data_block); 2344 2333 ext4_fsblk_t last_block; 2334 + 
ext4_fsblk_t last_bg_block = sb_block + ext4_bg_num_gdb(sb, 0) + 1; 2345 2335 ext4_fsblk_t block_bitmap; 2346 2336 ext4_fsblk_t inode_bitmap; 2347 2337 ext4_fsblk_t inode_table; ··· 2375 2363 if (!sb_rdonly(sb)) 2376 2364 return 0; 2377 2365 } 2366 + if (block_bitmap >= sb_block + 1 && 2367 + block_bitmap <= last_bg_block) { 2368 + ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: " 2369 + "Block bitmap for group %u overlaps " 2370 + "block group descriptors", i); 2371 + if (!sb_rdonly(sb)) 2372 + return 0; 2373 + } 2378 2374 if (block_bitmap < first_block || block_bitmap > last_block) { 2379 2375 ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: " 2380 2376 "Block bitmap for group %u not in group " ··· 2397 2377 if (!sb_rdonly(sb)) 2398 2378 return 0; 2399 2379 } 2380 + if (inode_bitmap >= sb_block + 1 && 2381 + inode_bitmap <= last_bg_block) { 2382 + ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: " 2383 + "Inode bitmap for group %u overlaps " 2384 + "block group descriptors", i); 2385 + if (!sb_rdonly(sb)) 2386 + return 0; 2387 + } 2400 2388 if (inode_bitmap < first_block || inode_bitmap > last_block) { 2401 2389 ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: " 2402 2390 "Inode bitmap for group %u not in group " ··· 2416 2388 ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: " 2417 2389 "Inode table for group %u overlaps " 2418 2390 "superblock", i); 2391 + if (!sb_rdonly(sb)) 2392 + return 0; 2393 + } 2394 + if (inode_table >= sb_block + 1 && 2395 + inode_table <= last_bg_block) { 2396 + ext4_msg(sb, KERN_ERR, "ext4_check_descriptors: " 2397 + "Inode table for group %u overlaps " 2398 + "block group descriptors", i); 2419 2399 if (!sb_rdonly(sb)) 2420 2400 return 0; 2421 2401 } ··· 3133 3097 ext4_group_t group, ngroups = EXT4_SB(sb)->s_groups_count; 3134 3098 struct ext4_group_desc *gdp = NULL; 3135 3099 3100 + if (!ext4_has_group_desc_csum(sb)) 3101 + return ngroups; 3102 + 3136 3103 for (group = 0; group < ngroups; group++) { 3137 3104 gdp = 
ext4_get_group_desc(sb, group, NULL); 3138 3105 if (!gdp) 3139 3106 continue; 3140 3107 3141 - if (!(gdp->bg_flags & cpu_to_le16(EXT4_BG_INODE_ZEROED))) 3108 + if (gdp->bg_flags & cpu_to_le16(EXT4_BG_INODE_ZEROED)) 3109 + continue; 3110 + if (group != 0) 3142 3111 break; 3112 + ext4_error(sb, "Inode table for bg 0 marked as " 3113 + "needing zeroing"); 3114 + if (sb_rdonly(sb)) 3115 + return ngroups; 3143 3116 } 3144 3117 3145 3118 return group; ··· 3787 3742 le32_to_cpu(es->s_log_block_size)); 3788 3743 goto failed_mount; 3789 3744 } 3745 + if (le32_to_cpu(es->s_log_cluster_size) > 3746 + (EXT4_MAX_CLUSTER_LOG_SIZE - EXT4_MIN_BLOCK_LOG_SIZE)) { 3747 + ext4_msg(sb, KERN_ERR, 3748 + "Invalid log cluster size: %u", 3749 + le32_to_cpu(es->s_log_cluster_size)); 3750 + goto failed_mount; 3751 + } 3790 3752 3791 3753 if (le16_to_cpu(sbi->s_es->s_reserved_gdt_blocks) > (blocksize / 4)) { 3792 3754 ext4_msg(sb, KERN_ERR, ··· 3858 3806 } else { 3859 3807 sbi->s_inode_size = le16_to_cpu(es->s_inode_size); 3860 3808 sbi->s_first_ino = le32_to_cpu(es->s_first_ino); 3809 + if (sbi->s_first_ino < EXT4_GOOD_OLD_FIRST_INO) { 3810 + ext4_msg(sb, KERN_ERR, "invalid first ino: %u", 3811 + sbi->s_first_ino); 3812 + goto failed_mount; 3813 + } 3861 3814 if ((sbi->s_inode_size < EXT4_GOOD_OLD_INODE_SIZE) || 3862 3815 (!is_power_of_2(sbi->s_inode_size)) || 3863 3816 (sbi->s_inode_size > blocksize)) { ··· 3939 3882 "block size (%d)", clustersize, blocksize); 3940 3883 goto failed_mount; 3941 3884 } 3942 - if (le32_to_cpu(es->s_log_cluster_size) > 3943 - (EXT4_MAX_CLUSTER_LOG_SIZE - EXT4_MIN_BLOCK_LOG_SIZE)) { 3944 - ext4_msg(sb, KERN_ERR, 3945 - "Invalid log cluster size: %u", 3946 - le32_to_cpu(es->s_log_cluster_size)); 3947 - goto failed_mount; 3948 - } 3949 3885 sbi->s_cluster_bits = le32_to_cpu(es->s_log_cluster_size) - 3950 3886 le32_to_cpu(es->s_log_block_size); 3951 3887 sbi->s_clusters_per_group = ··· 3959 3909 } 3960 3910 } else { 3961 3911 if (clustersize != blocksize) { 3962 - 
ext4_warning(sb, "fragment/cluster size (%d) != " 3963 - "block size (%d)", clustersize, 3964 - blocksize); 3965 - clustersize = blocksize; 3912 + ext4_msg(sb, KERN_ERR, 3913 + "fragment/cluster size (%d) != " 3914 + "block size (%d)", clustersize, blocksize); 3915 + goto failed_mount; 3966 3916 } 3967 3917 if (sbi->s_blocks_per_group > blocksize * 8) { 3968 3918 ext4_msg(sb, KERN_ERR, ··· 4016 3966 ext4_blocks_count(es)); 4017 3967 goto failed_mount; 4018 3968 } 3969 + if ((es->s_first_data_block == 0) && (es->s_log_block_size == 0) && 3970 + (sbi->s_cluster_ratio == 1)) { 3971 + ext4_msg(sb, KERN_WARNING, "bad geometry: first data " 3972 + "block is 0 with a 1k block and cluster size"); 3973 + goto failed_mount; 3974 + } 3975 + 4019 3976 blocks_count = (ext4_blocks_count(es) - 4020 3977 le32_to_cpu(es->s_first_data_block) + 4021 3978 EXT4_BLOCKS_PER_GROUP(sb) - 1); ··· 4056 3999 if (sbi->s_group_desc == NULL) { 4057 4000 ext4_msg(sb, KERN_ERR, "not enough memory"); 4058 4001 ret = -ENOMEM; 4002 + goto failed_mount; 4003 + } 4004 + if (((u64)sbi->s_groups_count * sbi->s_inodes_per_group) != 4005 + le32_to_cpu(es->s_inodes_count)) { 4006 + ext4_msg(sb, KERN_ERR, "inodes count not valid: %u vs %llu", 4007 + le32_to_cpu(es->s_inodes_count), 4008 + ((u64)sbi->s_groups_count * sbi->s_inodes_per_group)); 4009 + ret = -EINVAL; 4059 4010 goto failed_mount; 4060 4011 } 4061 4012 ··· 4801 4736 4802 4737 if (!sbh || block_device_ejected(sb)) 4803 4738 return error; 4739 + 4740 + /* 4741 + * The superblock bh should be mapped, but it might not be if the 4742 + * device was hot-removed. Not much we can do but fail the I/O. 4743 + */ 4744 + if (!buffer_mapped(sbh)) 4745 + return error; 4746 + 4804 4747 /* 4805 4748 * If the file system is mounted read-only, don't update the 4806 4749 * superblock write time. This avoids updating the superblock
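Among the super.c validations added above is a cross-check that `s_groups_count * s_inodes_per_group` equals `s_inodes_count`, done with the product widened to 64 bits so a corrupted group count cannot overflow the comparison. The same idea as a small standalone check (names are illustrative):

```c
#include <stdint.h>

/* Cross-check two on-disk counts, widening the product to 64 bits so a
 * corrupted groups count cannot wrap a 32-bit multiplication and make
 * bad geometry look consistent. Returns 0 if consistent, -1 otherwise. */
static int check_inode_geometry(uint32_t groups, uint32_t per_group,
				uint32_t inodes_count)
{
	return ((uint64_t)groups * per_group == (uint64_t)inodes_count) ?
		0 : -1;
}
```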
+18 -22
fs/ext4/xattr.c
··· 230 230 { 231 231 int error = -EFSCORRUPTED; 232 232 233 - if (buffer_verified(bh)) 234 - return 0; 235 - 236 233 if (BHDR(bh)->h_magic != cpu_to_le32(EXT4_XATTR_MAGIC) || 237 234 BHDR(bh)->h_blocks != cpu_to_le32(1)) 238 235 goto errout; 236 + if (buffer_verified(bh)) 237 + return 0; 238 + 239 239 error = -EFSBADCRC; 240 240 if (!ext4_xattr_block_csum_verify(inode, bh)) 241 241 goto errout; ··· 1560 1560 handle_t *handle, struct inode *inode, 1561 1561 bool is_block) 1562 1562 { 1563 - struct ext4_xattr_entry *last; 1563 + struct ext4_xattr_entry *last, *next; 1564 1564 struct ext4_xattr_entry *here = s->here; 1565 1565 size_t min_offs = s->end - s->base, name_len = strlen(i->name); 1566 1566 int in_inode = i->in_inode; ··· 1595 1595 1596 1596 /* Compute min_offs and last. */ 1597 1597 last = s->first; 1598 - for (; !IS_LAST_ENTRY(last); last = EXT4_XATTR_NEXT(last)) { 1598 + for (; !IS_LAST_ENTRY(last); last = next) { 1599 + next = EXT4_XATTR_NEXT(last); 1600 + if ((void *)next >= s->end) { 1601 + EXT4_ERROR_INODE(inode, "corrupted xattr entries"); 1602 + ret = -EFSCORRUPTED; 1603 + goto out; 1604 + } 1599 1605 if (!last->e_value_inum && last->e_value_size) { 1600 1606 size_t offs = le16_to_cpu(last->e_value_offs); 1601 1607 if (offs < min_offs) ··· 2212 2206 if (EXT4_I(inode)->i_extra_isize == 0) 2213 2207 return -ENOSPC; 2214 2208 error = ext4_xattr_set_entry(i, s, handle, inode, false /* is_block */); 2215 - if (error) { 2216 - if (error == -ENOSPC && 2217 - ext4_has_inline_data(inode)) { 2218 - error = ext4_try_to_evict_inline_data(handle, inode, 2219 - EXT4_XATTR_LEN(strlen(i->name) + 2220 - EXT4_XATTR_SIZE(i->value_len))); 2221 - if (error) 2222 - return error; 2223 - error = ext4_xattr_ibody_find(inode, i, is); 2224 - if (error) 2225 - return error; 2226 - error = ext4_xattr_set_entry(i, s, handle, inode, 2227 - false /* is_block */); 2228 - } 2229 - if (error) 2230 - return error; 2231 - } 2209 + if (error) 2210 + return error; 2232 2211 header = 
IHDR(inode, ext4_raw_inode(&is->iloc)); 2233 2212 if (!IS_LAST_ENTRY(s->first)) { 2234 2213 header->h_magic = cpu_to_le32(EXT4_XATTR_MAGIC); ··· 2642 2651 last = IFIRST(header); 2643 2652 /* Find the entry best suited to be pushed into EA block */ 2644 2653 for (; !IS_LAST_ENTRY(last); last = EXT4_XATTR_NEXT(last)) { 2654 + /* never move system.data out of the inode */ 2655 + if ((last->e_name_len == 4) && 2656 + (last->e_name_index == EXT4_XATTR_INDEX_SYSTEM) && 2657 + !memcmp(last->e_name, "data", 4)) 2658 + continue; 2645 2659 total_size = EXT4_XATTR_LEN(last->e_name_len); 2646 2660 if (!last->e_value_inum) 2647 2661 total_size += EXT4_XATTR_SIZE(
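The xattr.c fix computes `next` and compares it against `s->end` before dereferencing it, so a corrupted entry length can no longer walk the scan past the buffer. A userspace sketch of the same bounds-checked traversal over variable-length records (the little-endian `[len16][payload]` layout here is illustrative, not the xattr format):

```c
#include <stddef.h>

/* Walk little-endian [len16][payload] records, validating each length
 * against the remaining buffer before advancing — the same discipline
 * the xattr fix applies with EXT4_XATTR_NEXT vs s->end. Returns the
 * record count, or -1 on a truncated or overrunning length. */
static int count_records(const unsigned char *buf, size_t size)
{
	size_t off = 0;
	int n = 0;

	while (off < size) {
		size_t len;

		if (size - off < 2)
			return -1;	/* truncated header */
		len = (size_t)buf[off] | ((size_t)buf[off + 1] << 8);
		if (len < 2 || len > size - off)
			return -1;	/* nonsense or out-of-bounds length */
		off += len;
		n++;
	}
	return n;
}
```

The key point is that the advance (`off += len`) only happens after `len` has been proven to fit in the remaining space, so no iteration ever reads past `buf + size`.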
+6
fs/inode.c
··· 1999 1999 inode->i_uid = current_fsuid(); 2000 2000 if (dir && dir->i_mode & S_ISGID) { 2001 2001 inode->i_gid = dir->i_gid; 2002 + 2003 + /* Directories are special, and always inherit S_ISGID */ 2002 2004 if (S_ISDIR(mode)) 2003 2005 mode |= S_ISGID; 2006 + else if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP) && 2007 + !in_group_p(inode->i_gid) && 2008 + !capable_wrt_inode_uidgid(dir, CAP_FSETID)) 2009 + mode &= ~S_ISGID; 2004 2010 } else 2005 2011 inode->i_gid = current_fsgid(); 2006 2012 inode->i_mode = mode;
+8 -1
fs/jbd2/transaction.c
··· 1361 1361 if (jh->b_transaction == transaction && 1362 1362 jh->b_jlist != BJ_Metadata) { 1363 1363 jbd_lock_bh_state(bh); 1364 + if (jh->b_transaction == transaction && 1365 + jh->b_jlist != BJ_Metadata) 1366 + pr_err("JBD2: assertion failure: h_type=%u " 1367 + "h_line_no=%u block_no=%llu jlist=%u\n", 1368 + handle->h_type, handle->h_line_no, 1369 + (unsigned long long) bh->b_blocknr, 1370 + jh->b_jlist); 1364 1371 J_ASSERT_JH(jh, jh->b_transaction != transaction || 1365 1372 jh->b_jlist == BJ_Metadata); 1366 1373 jbd_unlock_bh_state(bh); ··· 1387 1380 * of the transaction. This needs to be done 1388 1381 * once a transaction -bzzz 1389 1382 */ 1390 - jh->b_modified = 1; 1391 1383 if (handle->h_buffer_credits <= 0) { 1392 1384 ret = -ENOSPC; 1393 1385 goto out_unlock_bh; 1394 1386 } 1387 + jh->b_modified = 1; 1395 1388 handle->h_buffer_credits--; 1396 1389 } 1397 1390
+2 -1
fs/proc/task_mmu.c
··· 831 831 SEQ_PUT_DEC(" kB\nSwap: ", mss->swap); 832 832 SEQ_PUT_DEC(" kB\nSwapPss: ", 833 833 mss->swap_pss >> PSS_SHIFT); 834 - SEQ_PUT_DEC(" kB\nLocked: ", mss->pss >> PSS_SHIFT); 834 + SEQ_PUT_DEC(" kB\nLocked: ", 835 + mss->pss_locked >> PSS_SHIFT); 835 836 seq_puts(m, " kB\n"); 836 837 } 837 838 if (!rollup_mode) {
+81 -60
fs/reiserfs/prints.c
··· 76 76 } 77 77 78 78 /* %k */ 79 - static void sprintf_le_key(char *buf, struct reiserfs_key *key) 79 + static int scnprintf_le_key(char *buf, size_t size, struct reiserfs_key *key) 80 80 { 81 81 if (key) 82 - sprintf(buf, "[%d %d %s %s]", le32_to_cpu(key->k_dir_id), 83 - le32_to_cpu(key->k_objectid), le_offset(key), 84 - le_type(key)); 82 + return scnprintf(buf, size, "[%d %d %s %s]", 83 + le32_to_cpu(key->k_dir_id), 84 + le32_to_cpu(key->k_objectid), le_offset(key), 85 + le_type(key)); 85 86 else 86 - sprintf(buf, "[NULL]"); 87 + return scnprintf(buf, size, "[NULL]"); 87 88 } 88 89 89 90 /* %K */ 90 - static void sprintf_cpu_key(char *buf, struct cpu_key *key) 91 + static int scnprintf_cpu_key(char *buf, size_t size, struct cpu_key *key) 91 92 { 92 93 if (key) 93 - sprintf(buf, "[%d %d %s %s]", key->on_disk_key.k_dir_id, 94 - key->on_disk_key.k_objectid, reiserfs_cpu_offset(key), 95 - cpu_type(key)); 94 + return scnprintf(buf, size, "[%d %d %s %s]", 95 + key->on_disk_key.k_dir_id, 96 + key->on_disk_key.k_objectid, 97 + reiserfs_cpu_offset(key), cpu_type(key)); 96 98 else 97 - sprintf(buf, "[NULL]"); 99 + return scnprintf(buf, size, "[NULL]"); 98 100 } 99 101 100 - static void sprintf_de_head(char *buf, struct reiserfs_de_head *deh) 102 + static int scnprintf_de_head(char *buf, size_t size, 103 + struct reiserfs_de_head *deh) 101 104 { 102 105 if (deh) 103 - sprintf(buf, 104 - "[offset=%d dir_id=%d objectid=%d location=%d state=%04x]", 105 - deh_offset(deh), deh_dir_id(deh), deh_objectid(deh), 106 - deh_location(deh), deh_state(deh)); 106 + return scnprintf(buf, size, 107 + "[offset=%d dir_id=%d objectid=%d location=%d state=%04x]", 108 + deh_offset(deh), deh_dir_id(deh), 109 + deh_objectid(deh), deh_location(deh), 110 + deh_state(deh)); 107 111 else 108 - sprintf(buf, "[NULL]"); 112 + return scnprintf(buf, size, "[NULL]"); 109 113 110 114 } 111 115 112 - static void sprintf_item_head(char *buf, struct item_head *ih) 116 + static int scnprintf_item_head(char 
*buf, size_t size, struct item_head *ih) 113 117 { 114 118 if (ih) { 115 - strcpy(buf, 116 - (ih_version(ih) == KEY_FORMAT_3_6) ? "*3.6* " : "*3.5*"); 117 - sprintf_le_key(buf + strlen(buf), &(ih->ih_key)); 118 - sprintf(buf + strlen(buf), ", item_len %d, item_location %d, " 119 - "free_space(entry_count) %d", 120 - ih_item_len(ih), ih_location(ih), ih_free_space(ih)); 119 + char *p = buf; 120 + char * const end = buf + size; 121 + 122 + p += scnprintf(p, end - p, "%s", 123 + (ih_version(ih) == KEY_FORMAT_3_6) ? 124 + "*3.6* " : "*3.5*"); 125 + 126 + p += scnprintf_le_key(p, end - p, &ih->ih_key); 127 + 128 + p += scnprintf(p, end - p, 129 + ", item_len %d, item_location %d, free_space(entry_count) %d", 130 + ih_item_len(ih), ih_location(ih), 131 + ih_free_space(ih)); 132 + return p - buf; 121 133 } else 122 - sprintf(buf, "[NULL]"); 134 + return scnprintf(buf, size, "[NULL]"); 123 135 } 124 136 125 - static void sprintf_direntry(char *buf, struct reiserfs_dir_entry *de) 137 + static int scnprintf_direntry(char *buf, size_t size, 138 + struct reiserfs_dir_entry *de) 126 139 { 127 140 char name[20]; 128 141 129 142 memcpy(name, de->de_name, de->de_namelen > 19 ? 19 : de->de_namelen); 130 143 name[de->de_namelen > 19 ? 
19 : de->de_namelen] = 0; 131 - sprintf(buf, "\"%s\"==>[%d %d]", name, de->de_dir_id, de->de_objectid); 144 + return scnprintf(buf, size, "\"%s\"==>[%d %d]", 145 + name, de->de_dir_id, de->de_objectid); 132 146 } 133 147 134 - static void sprintf_block_head(char *buf, struct buffer_head *bh) 148 + static int scnprintf_block_head(char *buf, size_t size, struct buffer_head *bh) 135 149 { 136 - sprintf(buf, "level=%d, nr_items=%d, free_space=%d rdkey ", 137 - B_LEVEL(bh), B_NR_ITEMS(bh), B_FREE_SPACE(bh)); 150 + return scnprintf(buf, size, 151 + "level=%d, nr_items=%d, free_space=%d rdkey ", 152 + B_LEVEL(bh), B_NR_ITEMS(bh), B_FREE_SPACE(bh)); 138 153 } 139 154 140 - static void sprintf_buffer_head(char *buf, struct buffer_head *bh) 155 + static int scnprintf_buffer_head(char *buf, size_t size, struct buffer_head *bh) 141 156 { 142 - sprintf(buf, 143 - "dev %pg, size %zd, blocknr %llu, count %d, state 0x%lx, page %p, (%s, %s, %s)", 144 - bh->b_bdev, bh->b_size, 145 - (unsigned long long)bh->b_blocknr, atomic_read(&(bh->b_count)), 146 - bh->b_state, bh->b_page, 147 - buffer_uptodate(bh) ? "UPTODATE" : "!UPTODATE", 148 - buffer_dirty(bh) ? "DIRTY" : "CLEAN", 149 - buffer_locked(bh) ? "LOCKED" : "UNLOCKED"); 157 + return scnprintf(buf, size, 158 + "dev %pg, size %zd, blocknr %llu, count %d, state 0x%lx, page %p, (%s, %s, %s)", 159 + bh->b_bdev, bh->b_size, 160 + (unsigned long long)bh->b_blocknr, 161 + atomic_read(&(bh->b_count)), 162 + bh->b_state, bh->b_page, 163 + buffer_uptodate(bh) ? "UPTODATE" : "!UPTODATE", 164 + buffer_dirty(bh) ? "DIRTY" : "CLEAN", 165 + buffer_locked(bh) ? 
"LOCKED" : "UNLOCKED"); 150 166 } 151 167 152 - static void sprintf_disk_child(char *buf, struct disk_child *dc) 168 + static int scnprintf_disk_child(char *buf, size_t size, struct disk_child *dc) 153 169 { 154 - sprintf(buf, "[dc_number=%d, dc_size=%u]", dc_block_number(dc), 155 - dc_size(dc)); 170 + return scnprintf(buf, size, "[dc_number=%d, dc_size=%u]", 171 + dc_block_number(dc), dc_size(dc)); 156 172 } 157 173 158 174 static char *is_there_reiserfs_struct(char *fmt, int *what) ··· 205 189 char *fmt1 = fmt_buf; 206 190 char *k; 207 191 char *p = error_buf; 192 + char * const end = &error_buf[sizeof(error_buf)]; 208 193 int what; 209 194 210 195 spin_lock(&error_lock); 211 196 212 - strcpy(fmt1, fmt); 197 + if (WARN_ON(strscpy(fmt_buf, fmt, sizeof(fmt_buf)) < 0)) { 198 + strscpy(error_buf, "format string too long", end - error_buf); 199 + goto out_unlock; 200 + } 213 201 214 202 while ((k = is_there_reiserfs_struct(fmt1, &what)) != NULL) { 215 203 *k = 0; 216 204 217 - p += vsprintf(p, fmt1, args); 205 + p += vscnprintf(p, end - p, fmt1, args); 218 206 219 207 switch (what) { 220 208 case 'k': 221 - sprintf_le_key(p, va_arg(args, struct reiserfs_key *)); 209 + p += scnprintf_le_key(p, end - p, 210 + va_arg(args, struct reiserfs_key *)); 222 211 break; 223 212 case 'K': 224 - sprintf_cpu_key(p, va_arg(args, struct cpu_key *)); 213 + p += scnprintf_cpu_key(p, end - p, 214 + va_arg(args, struct cpu_key *)); 225 215 break; 226 216 case 'h': 227 - sprintf_item_head(p, va_arg(args, struct item_head *)); 217 + p += scnprintf_item_head(p, end - p, 218 + va_arg(args, struct item_head *)); 228 219 break; 229 220 case 't': 230 - sprintf_direntry(p, 231 - va_arg(args, 232 - struct reiserfs_dir_entry *)); 221 + p += scnprintf_direntry(p, end - p, 222 + va_arg(args, struct reiserfs_dir_entry *)); 233 223 break; 234 224 case 'y': 235 - sprintf_disk_child(p, 236 - va_arg(args, struct disk_child *)); 225 + p += scnprintf_disk_child(p, end - p, 226 + va_arg(args, struct 
disk_child *)); 237 227 break; 238 228 case 'z': 239 - sprintf_block_head(p, 240 - va_arg(args, struct buffer_head *)); 229 + p += scnprintf_block_head(p, end - p, 230 + va_arg(args, struct buffer_head *)); 241 231 break; 242 232 case 'b': 243 - sprintf_buffer_head(p, 244 - va_arg(args, struct buffer_head *)); 233 + p += scnprintf_buffer_head(p, end - p, 234 + va_arg(args, struct buffer_head *)); 245 235 break; 246 236 case 'a': 247 - sprintf_de_head(p, 248 - va_arg(args, 249 - struct reiserfs_de_head *)); 237 + p += scnprintf_de_head(p, end - p, 238 + va_arg(args, struct reiserfs_de_head *)); 250 239 break; 251 240 } 252 241 253 - p += strlen(p); 254 242 fmt1 = k + 2; 255 243 } 256 - vsprintf(p, fmt1, args); 244 + p += vscnprintf(p, end - p, fmt1, args); 245 + out_unlock: 257 246 spin_unlock(&error_lock); 258 247 259 248 }
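The reiserfs conversion above replaces every unbounded sprintf/vsprintf into error_buf with scnprintf/vscnprintf, threading the remaining space (`end - p`) through each helper. The subtlety is that C's vsnprintf returns the would-be length on truncation, so a cursor advanced by its raw return value can run past the buffer; the kernel's scnprintf clamps the return, which is what makes `p += scnprintf(p, end - p, ...)` chains safe. A userspace model of that clamp:

```c
#include <stdarg.h>
#include <stdio.h>

/* Userspace model of the kernel's scnprintf(): vsnprintf() reports the
 * would-be length on truncation, so clamp the return to the characters
 * actually stored (at most size - 1, excluding the NUL). */
static int my_scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;

	if (size == 0)
		return 0;
	va_start(args, fmt);
	i = vsnprintf(buf, size, fmt, args);
	va_end(args);
	if (i < 0)
		return 0;
	return i < (int)size ? i : (int)size - 1;
}
```

With this return convention, a chain like `p += my_scnprintf(p, end - p, ...)` can never advance `p` beyond `end`, which is exactly the invariant the prints.c helpers rely on.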
+7 -5
fs/userfaultfd.c
··· 222 222 unsigned long reason) 223 223 { 224 224 struct mm_struct *mm = ctx->mm; 225 - pte_t *pte; 225 + pte_t *ptep, pte; 226 226 bool ret = true; 227 227 228 228 VM_BUG_ON(!rwsem_is_locked(&mm->mmap_sem)); 229 229 230 - pte = huge_pte_offset(mm, address, vma_mmu_pagesize(vma)); 231 - if (!pte) 230 + ptep = huge_pte_offset(mm, address, vma_mmu_pagesize(vma)); 231 + 232 + if (!ptep) 232 233 goto out; 233 234 234 235 ret = false; 236 + pte = huge_ptep_get(ptep); 235 237 236 238 /* 237 239 * Lockless access: we're in a wait_event so it's ok if it 238 240 * changes under us. 239 241 */ 240 - if (huge_pte_none(*pte)) 242 + if (huge_pte_none(pte)) 241 243 ret = true; 242 - if (!huge_pte_write(*pte) && (reason & VM_UFFD_WP)) 244 + if (!huge_pte_write(pte) && (reason & VM_UFFD_WP)) 243 245 ret = true; 244 246 out: 245 247 return ret;
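The userfaultfd fix loads the pte once via huge_ptep_get() into a local and evaluates both predicates on that snapshot, so a concurrent update can no longer make the two `*pte` tests disagree with each other. The snapshot-once idiom in a userspace sketch (the bit layout and names are illustrative, not a real pte format):

```c
#include <stdatomic.h>
#include <stdbool.h>

#define PTE_PRESENT	0x1UL	/* illustrative bit layout */
#define PTE_WRITE	0x2UL

/* Load the racily-updated word exactly once, then evaluate every
 * predicate on the local snapshot; the two checks below can never
 * observe different values even if *ptep changes concurrently. */
static bool pte_wake_check(_Atomic unsigned long *ptep, bool wp_fault)
{
	unsigned long pte = atomic_load_explicit(ptep, memory_order_relaxed);

	if (!(pte & PTE_PRESENT))		/* huge_pte_none() analogue */
		return true;
	if (wp_fault && !(pte & PTE_WRITE))	/* !huge_pte_write() analogue */
		return true;
	return false;
}
```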
+8
include/asm-generic/tlb.h
··· 265 265 * For now w.r.t page table cache, mark the range_size as PAGE_SIZE 266 266 */ 267 267 268 + #ifndef pte_free_tlb 268 269 #define pte_free_tlb(tlb, ptep, address) \ 269 270 do { \ 270 271 __tlb_adjust_range(tlb, address, PAGE_SIZE); \ 271 272 __pte_free_tlb(tlb, ptep, address); \ 272 273 } while (0) 274 + #endif 273 275 276 + #ifndef pmd_free_tlb 274 277 #define pmd_free_tlb(tlb, pmdp, address) \ 275 278 do { \ 276 279 __tlb_adjust_range(tlb, address, PAGE_SIZE); \ 277 280 __pmd_free_tlb(tlb, pmdp, address); \ 278 281 } while (0) 282 + #endif 279 283 280 284 #ifndef __ARCH_HAS_4LEVEL_HACK 285 + #ifndef pud_free_tlb 281 286 #define pud_free_tlb(tlb, pudp, address) \ 282 287 do { \ 283 288 __tlb_adjust_range(tlb, address, PAGE_SIZE); \ 284 289 __pud_free_tlb(tlb, pudp, address); \ 285 290 } while (0) 286 291 #endif 292 + #endif 287 293 288 294 #ifndef __ARCH_HAS_5LEVEL_HACK 295 + #ifndef p4d_free_tlb 289 296 #define p4d_free_tlb(tlb, pudp, address) \ 290 297 do { \ 291 298 __tlb_adjust_range(tlb, address, PAGE_SIZE); \ 292 299 __p4d_free_tlb(tlb, pudp, address); \ 293 300 } while (0) 301 + #endif 294 302 #endif 295 303 296 304 #define tlb_migrate_finish(mm) do {} while (0)
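The tlb.h hunks wrap each generic definition in `#ifndef` so an architecture can supply its own macro before including the generic header, instead of colliding with (or being shadowed by) the default. The pattern in miniature, with hypothetical macro names:

```c
/* "Arch" header defines its own version first... */
#define tlb_flush_demo() 1

/* ...so the generic header's guarded default is skipped. Without the
 * #ifndef this would be a macro redefinition instead of an override. */
#ifndef tlb_flush_demo
#define tlb_flush_demo() 0
#endif

static int use_tlb_flush(void)
{
	return tlb_flush_demo();	/* resolves to the override */
}
```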
+19 -21
include/dt-bindings/clock/imx6ul-clock.h
··· 235 235 #define IMX6UL_CLK_CSI_PODF 222 236 236 #define IMX6UL_CLK_PLL3_120M 223 237 237 #define IMX6UL_CLK_KPP 224 238 - #define IMX6UL_CLK_CKO1_SEL 225 239 - #define IMX6UL_CLK_CKO1_PODF 226 240 - #define IMX6UL_CLK_CKO1 227 241 - #define IMX6UL_CLK_CKO2_SEL 228 242 - #define IMX6UL_CLK_CKO2_PODF 229 243 - #define IMX6UL_CLK_CKO2 230 244 - #define IMX6UL_CLK_CKO 231 245 - 246 - /* For i.MX6ULL */ 247 - #define IMX6ULL_CLK_ESAI_PRED 232 248 - #define IMX6ULL_CLK_ESAI_PODF 233 249 - #define IMX6ULL_CLK_ESAI_EXTAL 234 250 - #define IMX6ULL_CLK_ESAI_MEM 235 251 - #define IMX6ULL_CLK_ESAI_IPG 236 252 - #define IMX6ULL_CLK_DCP_CLK 237 253 - #define IMX6ULL_CLK_EPDC_PRE_SEL 238 254 - #define IMX6ULL_CLK_EPDC_SEL 239 255 - #define IMX6ULL_CLK_EPDC_PODF 240 256 - #define IMX6ULL_CLK_EPDC_ACLK 241 257 - #define IMX6ULL_CLK_EPDC_PIX 242 258 - #define IMX6ULL_CLK_ESAI_SEL 243 238 + #define IMX6ULL_CLK_ESAI_PRED 225 239 + #define IMX6ULL_CLK_ESAI_PODF 226 240 + #define IMX6ULL_CLK_ESAI_EXTAL 227 241 + #define IMX6ULL_CLK_ESAI_MEM 228 242 + #define IMX6ULL_CLK_ESAI_IPG 229 243 + #define IMX6ULL_CLK_DCP_CLK 230 244 + #define IMX6ULL_CLK_EPDC_PRE_SEL 231 245 + #define IMX6ULL_CLK_EPDC_SEL 232 246 + #define IMX6ULL_CLK_EPDC_PODF 233 247 + #define IMX6ULL_CLK_EPDC_ACLK 234 248 + #define IMX6ULL_CLK_EPDC_PIX 235 249 + #define IMX6ULL_CLK_ESAI_SEL 236 250 + #define IMX6UL_CLK_CKO1_SEL 237 251 + #define IMX6UL_CLK_CKO1_PODF 238 252 + #define IMX6UL_CLK_CKO1 239 253 + #define IMX6UL_CLK_CKO2_SEL 240 254 + #define IMX6UL_CLK_CKO2_PODF 241 255 + #define IMX6UL_CLK_CKO2 242 256 + #define IMX6UL_CLK_CKO 243 259 257 #define IMX6UL_CLK_END 244 260 258 261 259 #endif /* __DT_BINDINGS_CLOCK_IMX6UL_H */
+26
include/linux/bpf-cgroup.h
··· 188 188 \ 189 189 __ret; \ 190 190 }) 191 + int cgroup_bpf_prog_attach(const union bpf_attr *attr, 192 + enum bpf_prog_type ptype, struct bpf_prog *prog); 193 + int cgroup_bpf_prog_detach(const union bpf_attr *attr, 194 + enum bpf_prog_type ptype); 195 + int cgroup_bpf_prog_query(const union bpf_attr *attr, 196 + union bpf_attr __user *uattr); 191 197 #else 192 198 199 + struct bpf_prog; 193 200 struct cgroup_bpf {}; 194 201 static inline void cgroup_bpf_put(struct cgroup *cgrp) {} 195 202 static inline int cgroup_bpf_inherit(struct cgroup *cgrp) { return 0; } 203 + 204 + static inline int cgroup_bpf_prog_attach(const union bpf_attr *attr, 205 + enum bpf_prog_type ptype, 206 + struct bpf_prog *prog) 207 + { 208 + return -EINVAL; 209 + } 210 + 211 + static inline int cgroup_bpf_prog_detach(const union bpf_attr *attr, 212 + enum bpf_prog_type ptype) 213 + { 214 + return -EINVAL; 215 + } 216 + 217 + static inline int cgroup_bpf_prog_query(const union bpf_attr *attr, 218 + union bpf_attr __user *uattr) 219 + { 220 + return -EINVAL; 221 + } 196 222 197 223 #define cgroup_bpf_enabled (0) 198 224 #define BPF_CGROUP_PRE_CONNECT_ENABLED(sk) (0)
+8
include/linux/bpf.h
··· 696 696 struct sock *__sock_map_lookup_elem(struct bpf_map *map, u32 key); 697 697 struct sock *__sock_hash_lookup_elem(struct bpf_map *map, void *key); 698 698 int sock_map_prog(struct bpf_map *map, struct bpf_prog *prog, u32 type); 699 + int sockmap_get_from_fd(const union bpf_attr *attr, int type, 700 + struct bpf_prog *prog); 699 701 #else 700 702 static inline struct sock *__sock_map_lookup_elem(struct bpf_map *map, u32 key) 701 703 { ··· 715 713 u32 type) 716 714 { 717 715 return -EOPNOTSUPP; 716 + } 717 + 718 + static inline int sockmap_get_from_fd(const union bpf_attr *attr, int type, 719 + struct bpf_prog *prog) 720 + { 721 + return -EINVAL; 718 722 } 719 723 #endif 720 724
+3 -2
include/linux/bpf_lirc.h
··· 5 5 #include <uapi/linux/bpf.h> 6 6 7 7 #ifdef CONFIG_BPF_LIRC_MODE2 8 - int lirc_prog_attach(const union bpf_attr *attr); 8 + int lirc_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog); 9 9 int lirc_prog_detach(const union bpf_attr *attr); 10 10 int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr); 11 11 #else 12 - static inline int lirc_prog_attach(const union bpf_attr *attr) 12 + static inline int lirc_prog_attach(const union bpf_attr *attr, 13 + struct bpf_prog *prog) 13 14 { 14 15 return -EINVAL; 15 16 }
+22 -7
include/linux/compiler-gcc.h
··· 66 66 #endif 67 67 68 68 /* 69 + * Feature detection for gnu_inline (gnu89 extern inline semantics). Either 70 + * __GNUC_STDC_INLINE__ is defined (not using gnu89 extern inline semantics, 71 + * and we opt in to the gnu89 semantics), or __GNUC_STDC_INLINE__ is not 72 + * defined so the gnu89 semantics are the default. 73 + */ 74 + #ifdef __GNUC_STDC_INLINE__ 75 + # define __gnu_inline __attribute__((gnu_inline)) 76 + #else 77 + # define __gnu_inline 78 + #endif 79 + 80 + /* 69 81 * Force always-inline if the user requests it so via the .config, 70 82 * or if gcc is too old. 71 83 * GCC does not warn about unused static inline functions for 72 84 * -Wunused-function. This turns out to avoid the need for complex #ifdef 73 85 * directives. Suppress the warning in clang as well by using "unused" 74 86 * function attribute, which is redundant but not harmful for gcc. 87 + * Prefer gnu_inline, so that extern inline functions do not emit an 88 + * externally visible function. This makes extern inline behave as per gnu89 89 + * semantics rather than c99. This prevents multiple symbol definition errors 90 + * of extern inline functions at link time. 91 + * A lot of inline functions can cause havoc with function tracing. 
75 92 */ 76 93 #if !defined(CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING) || \ 77 94 !defined(CONFIG_OPTIMIZE_INLINING) || (__GNUC__ < 4) 78 - #define inline inline __attribute__((always_inline,unused)) notrace 79 - #define __inline__ __inline__ __attribute__((always_inline,unused)) notrace 80 - #define __inline __inline __attribute__((always_inline,unused)) notrace 95 + #define inline \ 96 + inline __attribute__((always_inline, unused)) notrace __gnu_inline 81 97 #else 82 - /* A lot of inline functions can cause havoc with function tracing */ 83 - #define inline inline __attribute__((unused)) notrace 84 - #define __inline__ __inline__ __attribute__((unused)) notrace 85 - #define __inline __inline __attribute__((unused)) notrace 98 + #define inline inline __attribute__((unused)) notrace __gnu_inline 86 99 #endif 87 100 101 + #define __inline__ inline 102 + #define __inline inline 88 103 #define __always_inline inline __attribute__((always_inline)) 89 104 #define noinline __attribute__((noinline)) 90 105
+8 -48
include/linux/filter.h
··· 470 470 }; 471 471 472 472 struct bpf_binary_header { 473 - u16 pages; 474 - u16 locked:1; 475 - 473 + u32 pages; 476 474 /* Some arches need word alignment for their instructions */ 477 475 u8 image[] __aligned(4); 478 476 }; ··· 479 481 u16 pages; /* Number of allocated pages */ 480 482 u16 jited:1, /* Is our filter JIT'ed? */ 481 483 jit_requested:1,/* archs need to JIT the prog */ 482 - locked:1, /* Program image locked? */ 484 + undo_set_mem:1, /* Passed set_memory_ro() checkpoint */ 483 485 gpl_compatible:1, /* Is filter GPL compatible? */ 484 486 cb_access:1, /* Is control block accessed? */ 485 487 dst_needed:1, /* Do we need dst entry? */ ··· 675 677 676 678 static inline void bpf_prog_lock_ro(struct bpf_prog *fp) 677 679 { 678 - #ifdef CONFIG_ARCH_HAS_SET_MEMORY 679 - fp->locked = 1; 680 - if (set_memory_ro((unsigned long)fp, fp->pages)) 681 - fp->locked = 0; 682 - #endif 680 + fp->undo_set_mem = 1; 681 + set_memory_ro((unsigned long)fp, fp->pages); 683 682 } 684 683 685 684 static inline void bpf_prog_unlock_ro(struct bpf_prog *fp) 686 685 { 687 - #ifdef CONFIG_ARCH_HAS_SET_MEMORY 688 - if (fp->locked) { 689 - WARN_ON_ONCE(set_memory_rw((unsigned long)fp, fp->pages)); 690 - /* In case set_memory_rw() fails, we want to be the first 691 - * to crash here instead of some random place later on. 
692 - */ 693 - fp->locked = 0; 694 - } 695 - #endif 686 + if (fp->undo_set_mem) 687 + set_memory_rw((unsigned long)fp, fp->pages); 696 688 } 697 689 698 690 static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr) 699 691 { 700 - #ifdef CONFIG_ARCH_HAS_SET_MEMORY 701 - hdr->locked = 1; 702 - if (set_memory_ro((unsigned long)hdr, hdr->pages)) 703 - hdr->locked = 0; 704 - #endif 692 + set_memory_ro((unsigned long)hdr, hdr->pages); 705 693 } 706 694 707 695 static inline void bpf_jit_binary_unlock_ro(struct bpf_binary_header *hdr) 708 696 { 709 - #ifdef CONFIG_ARCH_HAS_SET_MEMORY 710 - if (hdr->locked) { 711 - WARN_ON_ONCE(set_memory_rw((unsigned long)hdr, hdr->pages)); 712 - /* In case set_memory_rw() fails, we want to be the first 713 - * to crash here instead of some random place later on. 714 - */ 715 - hdr->locked = 0; 716 - } 717 - #endif 697 + set_memory_rw((unsigned long)hdr, hdr->pages); 718 698 } 719 699 720 700 static inline struct bpf_binary_header * ··· 703 727 704 728 return (void *)addr; 705 729 } 706 - 707 - #ifdef CONFIG_ARCH_HAS_SET_MEMORY 708 - static inline int bpf_prog_check_pages_ro_single(const struct bpf_prog *fp) 709 - { 710 - if (!fp->locked) 711 - return -ENOLCK; 712 - if (fp->jited) { 713 - const struct bpf_binary_header *hdr = bpf_jit_binary_hdr(fp); 714 - 715 - if (!hdr->locked) 716 - return -ENOLCK; 717 - } 718 - 719 - return 0; 720 - } 721 - #endif 722 730 723 731 int sk_filter_trim_cap(struct sock *sk, struct sk_buff *skb, unsigned int cap); 724 732 static inline int sk_filter(struct sock *sk, struct sk_buff *skb)
-2
include/linux/ftrace.h
··· 223 223 */ 224 224 int register_ftrace_function(struct ftrace_ops *ops); 225 225 int unregister_ftrace_function(struct ftrace_ops *ops); 226 - void clear_ftrace_function(void); 227 226 228 227 extern void ftrace_stub(unsigned long a0, unsigned long a1, 229 228 struct ftrace_ops *op, struct pt_regs *regs); ··· 238 239 { 239 240 return 0; 240 241 } 241 - static inline void clear_ftrace_function(void) { } 242 242 static inline void ftrace_kill(void) { } 243 243 static inline void ftrace_free_init_mem(void) { } 244 244 static inline void ftrace_free_mem(struct module *mod, void *start, void *end) { }
+2 -1
include/linux/hid.h
··· 511 511 #define HID_STAT_ADDED BIT(0) 512 512 #define HID_STAT_PARSED BIT(1) 513 513 #define HID_STAT_DUP_DETECTED BIT(2) 514 + #define HID_STAT_REPROBED BIT(3) 514 515 515 516 struct hid_input { 516 517 struct list_head list; ··· 580 579 bool battery_avoid_query; 581 580 #endif 582 581 583 - unsigned int status; /* see STAT flags above */ 582 + unsigned long status; /* see STAT flags above */ 584 583 unsigned claimed; /* Claimed by hidinput, hiddev? */ 585 584 unsigned quirks; /* Various quirks the device can pull on us */ 586 585 bool io_started; /* If IO has started */
-1
include/linux/kthread.h
··· 62 62 int kthread_park(struct task_struct *k); 63 63 void kthread_unpark(struct task_struct *k); 64 64 void kthread_parkme(void); 65 - void kthread_park_complete(struct task_struct *k); 66 65 67 66 int kthreadd(void *unused); 68 67 extern struct task_struct *kthreadd_task;
+24
include/linux/libata.h
··· 210 210 ATA_FLAG_SLAVE_POSS = (1 << 0), /* host supports slave dev */ 211 211 /* (doesn't imply presence) */ 212 212 ATA_FLAG_SATA = (1 << 1), 213 + ATA_FLAG_NO_LPM = (1 << 2), /* host not happy with LPM */ 213 214 ATA_FLAG_NO_LOG_PAGE = (1 << 5), /* do not issue log page read */ 214 215 ATA_FLAG_NO_ATAPI = (1 << 6), /* No ATAPI support */ 215 216 ATA_FLAG_PIO_DMA = (1 << 7), /* PIO cmds via DMA */ ··· 1495 1494 { 1496 1495 return tag < ATA_MAX_QUEUE || ata_tag_internal(tag); 1497 1496 } 1497 + 1498 + #define __ata_qc_for_each(ap, qc, tag, max_tag, fn) \ 1499 + for ((tag) = 0; (tag) < (max_tag) && \ 1500 + ({ qc = fn((ap), (tag)); 1; }); (tag)++) \ 1501 + 1502 + /* 1503 + * Internal use only, iterate commands ignoring error handling and 1504 + * status of 'qc'. 1505 + */ 1506 + #define ata_qc_for_each_raw(ap, qc, tag) \ 1507 + __ata_qc_for_each(ap, qc, tag, ATA_MAX_QUEUE, __ata_qc_from_tag) 1508 + 1509 + /* 1510 + * Iterate all potential commands that can be queued 1511 + */ 1512 + #define ata_qc_for_each(ap, qc, tag) \ 1513 + __ata_qc_for_each(ap, qc, tag, ATA_MAX_QUEUE, ata_qc_from_tag) 1514 + 1515 + /* 1516 + * Like ata_qc_for_each, but with the internal tag included 1517 + */ 1518 + #define ata_qc_for_each_with_internal(ap, qc, tag) \ 1519 + __ata_qc_for_each(ap, qc, tag, ATA_MAX_QUEUE + 1, ata_qc_from_tag) 1498 1520 1499 1521 /* 1500 1522 * device helpers
+2
include/linux/mlx5/eswitch.h
··· 8 8 9 9 #include <linux/mlx5/driver.h> 10 10 11 + #define MLX5_ESWITCH_MANAGER(mdev) MLX5_CAP_GEN(mdev, eswitch_manager) 12 + 11 13 enum { 12 14 SRIOV_NONE, 13 15 SRIOV_LEGACY,
+1 -1
include/linux/mlx5/mlx5_ifc.h
··· 922 922 u8 vnic_env_queue_counters[0x1]; 923 923 u8 ets[0x1]; 924 924 u8 nic_flow_table[0x1]; 925 - u8 eswitch_flow_table[0x1]; 925 + u8 eswitch_manager[0x1]; 926 926 u8 device_memory[0x1]; 927 927 u8 mcam_reg[0x1]; 928 928 u8 pcam_reg[0x1];
+20
include/linux/netdevice.h
··· 2789 2789 if (PTR_ERR(pp) != -EINPROGRESS) 2790 2790 NAPI_GRO_CB(skb)->flush |= flush; 2791 2791 } 2792 + static inline void skb_gro_flush_final_remcsum(struct sk_buff *skb, 2793 + struct sk_buff **pp, 2794 + int flush, 2795 + struct gro_remcsum *grc) 2796 + { 2797 + if (PTR_ERR(pp) != -EINPROGRESS) { 2798 + NAPI_GRO_CB(skb)->flush |= flush; 2799 + skb_gro_remcsum_cleanup(skb, grc); 2800 + skb->remcsum_offload = 0; 2801 + } 2802 + } 2792 2803 #else 2793 2804 static inline void skb_gro_flush_final(struct sk_buff *skb, struct sk_buff **pp, int flush) 2794 2805 { 2795 2806 NAPI_GRO_CB(skb)->flush |= flush; 2807 + } 2808 + static inline void skb_gro_flush_final_remcsum(struct sk_buff *skb, 2809 + struct sk_buff **pp, 2810 + int flush, 2811 + struct gro_remcsum *grc) 2812 + { 2813 + NAPI_GRO_CB(skb)->flush |= flush; 2814 + skb_gro_remcsum_cleanup(skb, grc); 2815 + skb->remcsum_offload = 0; 2796 2816 } 2797 2817 #endif 2798 2818
+1 -1
include/linux/sched.h
··· 118 118 * the comment with set_special_state(). 119 119 */ 120 120 #define is_special_task_state(state) \ 121 - ((state) & (__TASK_STOPPED | __TASK_TRACED | TASK_DEAD)) 121 + ((state) & (__TASK_STOPPED | __TASK_TRACED | TASK_PARKED | TASK_DEAD)) 122 122 123 123 #define __set_current_state(state_value) \ 124 124 do { \
+1 -1
include/linux/uio_driver.h
··· 75 75 struct fasync_struct *async_queue; 76 76 wait_queue_head_t wait; 77 77 struct uio_info *info; 78 - spinlock_t info_lock; 78 + struct mutex info_lock; 79 79 struct kobject *map_dir; 80 80 struct kobject *portio_dir; 81 81 };
+1
include/net/net_namespace.h
··· 128 128 #endif 129 129 #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6) 130 130 struct netns_nf_frag nf_frag; 131 + struct ctl_table_header *nf_frag_frags_hdr; 131 132 #endif 132 133 struct sock *nfnl; 133 134 struct sock *nfnl_stash;
-1
include/net/netns/ipv6.h
··· 109 109 110 110 #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6) 111 111 struct netns_nf_frag { 112 - struct netns_sysctl_ipv6 sysctl; 113 112 struct netns_frags frags; 114 113 }; 115 114 #endif
+5
include/net/pkt_cls.h
··· 111 111 { 112 112 } 113 113 114 + static inline bool tcf_block_shared(struct tcf_block *block) 115 + { 116 + return false; 117 + } 118 + 114 119 static inline struct Qdisc *tcf_block_q(struct tcf_block *block) 115 120 { 116 121 return NULL;
+23 -5
include/uapi/linux/bpf.h
··· 1857 1857 * is resolved), the nexthop address is returned in ipv4_dst 1858 1858 * or ipv6_dst based on family, smac is set to mac address of 1859 1859 * egress device, dmac is set to nexthop mac address, rt_metric 1860 - * is set to metric from route (IPv4/IPv6 only). 1860 + * is set to metric from route (IPv4/IPv6 only), and ifindex 1861 + * is set to the device index of the nexthop from the FIB lookup. 1861 1862 * 1862 1863 * *plen* argument is the size of the passed in struct. 1863 1864 * *flags* argument can be a combination of one or more of the ··· 1874 1873 * *ctx* is either **struct xdp_md** for XDP programs or 1875 1874 * **struct sk_buff** tc cls_act programs. 1876 1875 * Return 1877 - * Egress device index on success, 0 if packet needs to continue 1878 - * up the stack for further processing or a negative error in case 1879 - * of failure. 1876 + * * < 0 if any input argument is invalid 1877 + * * 0 on success (packet is forwarded, nexthop neighbor exists) 1878 + * * > 0 one of **BPF_FIB_LKUP_RET_** codes explaining why the 1879 + * * packet is not forwarded or needs assist from full stack 1880 1880 * 1881 1881 * int bpf_sock_hash_update(struct bpf_sock_ops_kern *skops, struct bpf_map *map, void *key, u64 flags) 1882 1882 * Description ··· 2614 2612 #define BPF_FIB_LOOKUP_DIRECT BIT(0) 2615 2613 #define BPF_FIB_LOOKUP_OUTPUT BIT(1) 2616 2614 2615 + enum { 2616 + BPF_FIB_LKUP_RET_SUCCESS, /* lookup successful */ 2617 + BPF_FIB_LKUP_RET_BLACKHOLE, /* dest is blackholed; can be dropped */ 2618 + BPF_FIB_LKUP_RET_UNREACHABLE, /* dest is unreachable; can be dropped */ 2619 + BPF_FIB_LKUP_RET_PROHIBIT, /* dest not allowed; can be dropped */ 2620 + BPF_FIB_LKUP_RET_NOT_FWDED, /* packet is not forwarded */ 2621 + BPF_FIB_LKUP_RET_FWD_DISABLED, /* fwding is not enabled on ingress */ 2622 + BPF_FIB_LKUP_RET_UNSUPP_LWT, /* fwd requires encapsulation */ 2623 + BPF_FIB_LKUP_RET_NO_NEIGH, /* no neighbor entry for nh */ 2624 + BPF_FIB_LKUP_RET_FRAG_NEEDED, /* fragmentation required to fwd */ 2625 + };
2626 + 2617 2627 struct bpf_fib_lookup { 2618 2628 /* input: network family for lookup (AF_INET, AF_INET6) 2619 2629 * output: network family of egress nexthop ··· 2639 2625 2640 2626 /* total length of packet from network header - used for MTU check */ 2641 2627 __u16 tot_len; 2642 - __u32 ifindex; /* L3 device index for lookup */ 2628 + 2629 + /* input: L3 device index for lookup 2630 + * output: device index from FIB lookup 2631 + */ 2632 + __u32 ifindex; 2643 2633 2644 2634 union { 2645 2635 /* inputs to lookup */
+58 -44
include/uapi/linux/rseq.h
··· 10 10 * Copyright (c) 2015-2018 Mathieu Desnoyers <mathieu.desnoyers@efficios.com> 11 11 */ 12 12 13 - #ifdef __KERNEL__ 14 - # include <linux/types.h> 15 - #else 16 - # include <stdint.h> 17 - #endif 18 - 19 - #include <linux/types_32_64.h> 13 + #include <linux/types.h> 14 + #include <asm/byteorder.h> 20 15 21 16 enum rseq_cpu_id_state { 22 17 RSEQ_CPU_ID_UNINITIALIZED = -1, ··· 47 52 __u32 version; 48 53 /* enum rseq_cs_flags */ 49 54 __u32 flags; 50 - LINUX_FIELD_u32_u64(start_ip); 55 + __u64 start_ip; 51 56 /* Offset from start_ip. */ 52 - LINUX_FIELD_u32_u64(post_commit_offset); 53 - LINUX_FIELD_u32_u64(abort_ip); 57 + __u64 post_commit_offset; 58 + __u64 abort_ip; 54 59 } __attribute__((aligned(4 * sizeof(__u64)))); 55 60 56 61 /* ··· 62 67 struct rseq { 63 68 /* 64 69 * Restartable sequences cpu_id_start field. Updated by the 65 - * kernel, and read by user-space with single-copy atomicity 66 - * semantics. Aligned on 32-bit. Always contains a value in the 67 - * range of possible CPUs, although the value may not be the 68 - * actual current CPU (e.g. if rseq is not initialized). This 69 - * CPU number value should always be compared against the value 70 - * of the cpu_id field before performing a rseq commit or 71 - * returning a value read from a data structure indexed using 72 - * the cpu_id_start value. 70 + * kernel. Read by user-space with single-copy atomicity 71 + * semantics. This field should only be read by the thread which 72 + * registered this data structure. Aligned on 32-bit. Always 73 + * contains a value in the range of possible CPUs, although the 74 + * value may not be the actual current CPU (e.g. if rseq is not 75 + * initialized). This CPU number value should always be compared 76 + * against the value of the cpu_id field before performing a rseq 77 + * commit or returning a value read from a data structure indexed 78 + * using the cpu_id_start value. 
73 79 */ 74 80 __u32 cpu_id_start; 75 81 /* 76 - * Restartable sequences cpu_id field. Updated by the kernel, 77 - * and read by user-space with single-copy atomicity semantics. 78 - * Aligned on 32-bit. Values RSEQ_CPU_ID_UNINITIALIZED and 79 - * RSEQ_CPU_ID_REGISTRATION_FAILED have a special semantic: the 80 - * former means "rseq uninitialized", and latter means "rseq 81 - * initialization failed". This value is meant to be read within 82 - * rseq critical sections and compared with the cpu_id_start 83 - * value previously read, before performing the commit instruction, 84 - * or read and compared with the cpu_id_start value before returning 85 - * a value loaded from a data structure indexed using the 86 - * cpu_id_start value. 82 + * Restartable sequences cpu_id field. Updated by the kernel. 83 + * Read by user-space with single-copy atomicity semantics. This 84 + * field should only be read by the thread which registered this 85 + * data structure. Aligned on 32-bit. Values 86 + * RSEQ_CPU_ID_UNINITIALIZED and RSEQ_CPU_ID_REGISTRATION_FAILED 87 + * have a special semantic: the former means "rseq uninitialized", 88 + * and latter means "rseq initialization failed". This value is 89 + * meant to be read within rseq critical sections and compared 90 + * with the cpu_id_start value previously read, before performing 91 + * the commit instruction, or read and compared with the 92 + * cpu_id_start value before returning a value loaded from a data 93 + * structure indexed using the cpu_id_start value. 87 94 */ 88 95 __u32 cpu_id; 89 96 /* ··· 102 105 * targeted by the rseq_cs. Also needs to be set to NULL by user-space 103 106 * before reclaiming memory that contains the targeted struct rseq_cs. 104 107 * 105 - * Read and set by the kernel with single-copy atomicity semantics. 106 - * Set by user-space with single-copy atomicity semantics. Aligned 107 - * on 64-bit. 108 + * Read and set by the kernel. Set by user-space with single-copy 109 + * atomicity semantics. This field should only be updated by the
110 + * thread which registered this data structure. Aligned on 64-bit. 108 111 */ 109 - LINUX_FIELD_u32_u64(rseq_cs); 112 + union { 113 + __u64 ptr64; 114 + #ifdef __LP64__ 115 + __u64 ptr; 116 + #else 117 + struct { 118 + #if (defined(__BYTE_ORDER) && (__BYTE_ORDER == __BIG_ENDIAN)) || defined(__BIG_ENDIAN) 119 + __u32 padding; /* Initialized to zero. */ 120 + __u32 ptr32; 121 + #else /* LITTLE */ 122 + __u32 ptr32; 123 + __u32 padding; /* Initialized to zero. */ 124 + #endif /* ENDIAN */ 125 + } ptr; 126 + #endif 127 + } rseq_cs; 128 + 110 129 /* 111 - * - RSEQ_DISABLE flag: 130 + * Restartable sequences flags field. 112 131 * 113 - * Fallback fast-track flag for single-stepping. 114 - * Set by user-space if lack of progress is detected. 115 - * Cleared by user-space after rseq finish. 116 - * Read by the kernel. 132 + * This field should only be updated by the thread which 133 + * registered this data structure. Read by the kernel. 134 + * Mainly used for single-stepping through rseq critical sections 135 + * with debuggers. 136 + * 117 137 * - RSEQ_CS_FLAG_NO_RESTART_ON_PREEMPT 118 - * Inhibit instruction sequence block restart and event 119 - * counter increment on preemption for this thread. 138 + * Inhibit instruction sequence block restart on preemption 139 + * for this thread. 120 140 * - RSEQ_CS_FLAG_NO_RESTART_ON_SIGNAL 121 - * Inhibit instruction sequence block restart and event 122 - * counter increment on signal delivery for this thread. 141 + * Inhibit instruction sequence block restart on signal 142 + * delivery for this thread. 123 143 * - RSEQ_CS_FLAG_NO_RESTART_ON_MIGRATE 124 - * Inhibit instruction sequence block restart and event 125 - * counter increment on migration for this thread. 144 + * Inhibit instruction sequence block restart on migration for 145 + * this thread. 126 146 */ 127 147 __u32 flags; 128 148 } __attribute__((aligned(4 * sizeof(__u64))));
-50
include/uapi/linux/types_32_64.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */ 2 - #ifndef _UAPI_LINUX_TYPES_32_64_H 3 - #define _UAPI_LINUX_TYPES_32_64_H 4 - 5 - /* 6 - * linux/types_32_64.h 7 - * 8 - * Integer type declaration for pointers across 32-bit and 64-bit systems. 9 - * 10 - * Copyright (c) 2015-2018 Mathieu Desnoyers <mathieu.desnoyers@efficios.com> 11 - */ 12 - 13 - #ifdef __KERNEL__ 14 - # include <linux/types.h> 15 - #else 16 - # include <stdint.h> 17 - #endif 18 - 19 - #include <asm/byteorder.h> 20 - 21 - #ifdef __BYTE_ORDER 22 - # if (__BYTE_ORDER == __BIG_ENDIAN) 23 - # define LINUX_BYTE_ORDER_BIG_ENDIAN 24 - # else 25 - # define LINUX_BYTE_ORDER_LITTLE_ENDIAN 26 - # endif 27 - #else 28 - # ifdef __BIG_ENDIAN 29 - # define LINUX_BYTE_ORDER_BIG_ENDIAN 30 - # else 31 - # define LINUX_BYTE_ORDER_LITTLE_ENDIAN 32 - # endif 33 - #endif 34 - 35 - #ifdef __LP64__ 36 - # define LINUX_FIELD_u32_u64(field) __u64 field 37 - # define LINUX_FIELD_u32_u64_INIT_ONSTACK(field, v) field = (intptr_t)v 38 - #else 39 - # ifdef LINUX_BYTE_ORDER_BIG_ENDIAN 40 - # define LINUX_FIELD_u32_u64(field) __u32 field ## _padding, field 41 - # define LINUX_FIELD_u32_u64_INIT_ONSTACK(field, v) \ 42 - field ## _padding = 0, field = (intptr_t)v 43 - # else 44 - # define LINUX_FIELD_u32_u64(field) __u32 field, field ## _padding 45 - # define LINUX_FIELD_u32_u64_INIT_ONSTACK(field, v) \ 46 - field = (intptr_t)v, field ## _padding = 0 47 - # endif 48 - #endif 49 - 50 - #endif /* _UAPI_LINUX_TYPES_32_64_H */
+54
kernel/bpf/cgroup.c
··· 428 428 return ret; 429 429 } 430 430 431 + int cgroup_bpf_prog_attach(const union bpf_attr *attr, 432 + enum bpf_prog_type ptype, struct bpf_prog *prog) 433 + { 434 + struct cgroup *cgrp; 435 + int ret; 436 + 437 + cgrp = cgroup_get_from_fd(attr->target_fd); 438 + if (IS_ERR(cgrp)) 439 + return PTR_ERR(cgrp); 440 + 441 + ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type, 442 + attr->attach_flags); 443 + cgroup_put(cgrp); 444 + return ret; 445 + } 446 + 447 + int cgroup_bpf_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype) 448 + { 449 + struct bpf_prog *prog; 450 + struct cgroup *cgrp; 451 + int ret; 452 + 453 + cgrp = cgroup_get_from_fd(attr->target_fd); 454 + if (IS_ERR(cgrp)) 455 + return PTR_ERR(cgrp); 456 + 457 + prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype); 458 + if (IS_ERR(prog)) 459 + prog = NULL; 460 + 461 + ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, 0); 462 + if (prog) 463 + bpf_prog_put(prog); 464 + 465 + cgroup_put(cgrp); 466 + return ret; 467 + } 468 + 469 + int cgroup_bpf_prog_query(const union bpf_attr *attr, 470 + union bpf_attr __user *uattr) 471 + { 472 + struct cgroup *cgrp; 473 + int ret; 474 + 475 + cgrp = cgroup_get_from_fd(attr->query.target_fd); 476 + if (IS_ERR(cgrp)) 477 + return PTR_ERR(cgrp); 478 + 479 + ret = cgroup_bpf_query(cgrp, attr, uattr); 480 + 481 + cgroup_put(cgrp); 482 + return ret; 483 + } 484 + 431 485 /** 432 486 * __cgroup_bpf_run_filter_skb() - Run a program for packet filtering 433 487 * @sk: The socket sending or receiving traffic
-28
kernel/bpf/core.c
··· 598 598 bpf_fill_ill_insns(hdr, size); 599 599 600 600 hdr->pages = size / PAGE_SIZE; 601 - hdr->locked = 0; 602 - 603 601 hole = min_t(unsigned int, size - (proglen + sizeof(*hdr)), 604 602 PAGE_SIZE - sizeof(*hdr)); 605 603 start = (get_random_int() % hole) & ~(alignment - 1); ··· 1448 1450 return 0; 1449 1451 } 1450 1452 1451 - static int bpf_prog_check_pages_ro_locked(const struct bpf_prog *fp) 1452 - { 1453 - #ifdef CONFIG_ARCH_HAS_SET_MEMORY 1454 - int i, err; 1455 - 1456 - for (i = 0; i < fp->aux->func_cnt; i++) { 1457 - err = bpf_prog_check_pages_ro_single(fp->aux->func[i]); 1458 - if (err) 1459 - return err; 1460 - } 1461 - 1462 - return bpf_prog_check_pages_ro_single(fp); 1463 - #endif 1464 - return 0; 1465 - } 1466 - 1467 1453 static void bpf_prog_select_func(struct bpf_prog *fp) 1468 1454 { 1469 1455 #ifndef CONFIG_BPF_JIT_ALWAYS_ON ··· 1506 1524 * all eBPF JITs might immediately support all features. 1507 1525 */ 1508 1526 *err = bpf_check_tail_call(fp); 1509 - if (*err) 1510 - return fp; 1511 1527 1512 - /* Checkpoint: at this point onwards any cBPF -> eBPF or 1513 - * native eBPF program is read-only. If we failed to change 1514 - * the page attributes (e.g. allocation failure from 1515 - * splitting large pages), then reject the whole program 1516 - * in order to guarantee not ending up with any W+X pages 1517 - * from BPF side in kernel. 1518 - */ 1519 - *err = bpf_prog_check_pages_ro_locked(fp); 1520 1528 return fp; 1521 1529 } 1522 1530 EXPORT_SYMBOL_GPL(bpf_prog_select_runtime);
+184 -70
kernel/bpf/sockmap.c
··· 72 72 u32 n_buckets; 73 73 u32 elem_size; 74 74 struct bpf_sock_progs progs; 75 + struct rcu_head rcu; 75 76 }; 76 77 77 78 struct htab_elem { ··· 90 89 struct smap_psock_map_entry { 91 90 struct list_head list; 92 91 struct sock **entry; 93 - struct htab_elem *hash_link; 94 - struct bpf_htab *htab; 92 + struct htab_elem __rcu *hash_link; 93 + struct bpf_htab __rcu *htab; 95 94 }; 96 95 97 96 struct smap_psock { ··· 121 120 struct bpf_prog *bpf_parse; 122 121 struct bpf_prog *bpf_verdict; 123 122 struct list_head maps; 123 + spinlock_t maps_lock; 124 124 125 125 /* Back reference used when sock callback trigger sockmap operations */ 126 126 struct sock *sock; ··· 142 140 static int bpf_tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size); 143 141 static int bpf_tcp_sendpage(struct sock *sk, struct page *page, 144 142 int offset, size_t size, int flags); 143 + static void bpf_tcp_close(struct sock *sk, long timeout); 145 144 146 145 static inline struct smap_psock *smap_psock_sk(const struct sock *sk) 147 146 { ··· 164 161 return !empty; 165 162 } 166 163 167 - static struct proto tcp_bpf_proto; 164 + enum { 165 + SOCKMAP_IPV4, 166 + SOCKMAP_IPV6, 167 + SOCKMAP_NUM_PROTS, 168 + }; 169 + 170 + enum { 171 + SOCKMAP_BASE, 172 + SOCKMAP_TX, 173 + SOCKMAP_NUM_CONFIGS, 174 + }; 175 + 176 + static struct proto *saved_tcpv6_prot __read_mostly; 177 + static DEFINE_SPINLOCK(tcpv6_prot_lock); 178 + static struct proto bpf_tcp_prots[SOCKMAP_NUM_PROTS][SOCKMAP_NUM_CONFIGS]; 179 + static void build_protos(struct proto prot[SOCKMAP_NUM_CONFIGS], 180 + struct proto *base) 181 + { 182 + prot[SOCKMAP_BASE] = *base; 183 + prot[SOCKMAP_BASE].close = bpf_tcp_close; 184 + prot[SOCKMAP_BASE].recvmsg = bpf_tcp_recvmsg; 185 + prot[SOCKMAP_BASE].stream_memory_read = bpf_tcp_stream_read; 186 + 187 + prot[SOCKMAP_TX] = prot[SOCKMAP_BASE]; 188 + prot[SOCKMAP_TX].sendmsg = bpf_tcp_sendmsg; 189 + prot[SOCKMAP_TX].sendpage = bpf_tcp_sendpage; 190 + } 191 + 192 + static void update_sk_prot(struct sock *sk, struct smap_psock *psock)
193 + { 194 + int family = sk->sk_family == AF_INET6 ? SOCKMAP_IPV6 : SOCKMAP_IPV4; 195 + int conf = psock->bpf_tx_msg ? SOCKMAP_TX : SOCKMAP_BASE; 196 + 197 + sk->sk_prot = &bpf_tcp_prots[family][conf]; 198 + } 199 + 168 200 static int bpf_tcp_init(struct sock *sk) 169 201 { 170 202 struct smap_psock *psock; ··· 219 181 psock->save_close = sk->sk_prot->close; 220 182 psock->sk_proto = sk->sk_prot; 221 183 222 - if (psock->bpf_tx_msg) { 223 - tcp_bpf_proto.sendmsg = bpf_tcp_sendmsg; 224 - tcp_bpf_proto.sendpage = bpf_tcp_sendpage; 225 - tcp_bpf_proto.recvmsg = bpf_tcp_recvmsg; 226 - tcp_bpf_proto.stream_memory_read = bpf_tcp_stream_read; 184 + /* Build IPv6 sockmap whenever the address of tcpv6_prot changes */ 185 + if (sk->sk_family == AF_INET6 && 186 + unlikely(sk->sk_prot != smp_load_acquire(&saved_tcpv6_prot))) { 187 + spin_lock_bh(&tcpv6_prot_lock); 188 + if (likely(sk->sk_prot != saved_tcpv6_prot)) { 189 + build_protos(bpf_tcp_prots[SOCKMAP_IPV6], sk->sk_prot); 190 + smp_store_release(&saved_tcpv6_prot, sk->sk_prot); 191 + } 192 + spin_unlock_bh(&tcpv6_prot_lock); 227 193 } 228 - 229 - sk->sk_prot = &tcp_bpf_proto; 194 + update_sk_prot(sk, psock); 230 195 rcu_read_unlock(); 231 196 return 0; 232 197 } ··· 260 219 rcu_read_unlock(); 261 220 } 262 221 222 + static struct htab_elem *lookup_elem_raw(struct hlist_head *head, 223 + u32 hash, void *key, u32 key_size) 224 + { 225 + struct htab_elem *l; 226 + 227 + hlist_for_each_entry_rcu(l, head, hash_node) { 228 + if (l->hash == hash && !memcmp(&l->key, key, key_size)) 229 + return l; 230 + } 231 + 232 + return NULL; 233 + } 234 + 235 + static inline struct bucket *__select_bucket(struct bpf_htab *htab, u32 hash) 236 + { 237 + return &htab->buckets[hash & (htab->n_buckets - 1)]; 238 + } 239 + 240 + static inline struct hlist_head *select_bucket(struct bpf_htab *htab, u32 hash) 241 + { 242 + return &__select_bucket(htab, hash)->head; 243 + } 244 + 263 245
static void free_htab_elem(struct bpf_htab *htab, struct htab_elem *l) 264 246 { 265 247 atomic_dec(&htab->count); 266 248 kfree_rcu(l, rcu); 267 249 } 268 250 251 + static struct smap_psock_map_entry *psock_map_pop(struct sock *sk, 252 + struct smap_psock *psock) 253 + { 254 + struct smap_psock_map_entry *e; 255 + 256 + spin_lock_bh(&psock->maps_lock); 257 + e = list_first_entry_or_null(&psock->maps, 258 + struct smap_psock_map_entry, 259 + list); 260 + if (e) 261 + list_del(&e->list); 262 + spin_unlock_bh(&psock->maps_lock); 263 + return e; 264 + } 265 + 269 266 static void bpf_tcp_close(struct sock *sk, long timeout) 270 267 { 271 268 void (*close_fun)(struct sock *sk, long timeout); 272 - struct smap_psock_map_entry *e, *tmp; 269 + struct smap_psock_map_entry *e; 273 270 struct sk_msg_buff *md, *mtmp; 274 271 struct smap_psock *psock; 275 272 struct sock *osk; ··· 326 247 */ 327 248 close_fun = psock->save_close; 328 249 329 - write_lock_bh(&sk->sk_callback_lock); 330 250 if (psock->cork) { 331 251 free_start_sg(psock->sock, psock->cork); 332 252 kfree(psock->cork); ··· 338 260 kfree(md); 339 261 } 340 262 341 - list_for_each_entry_safe(e, tmp, &psock->maps, list) { 263 + e = psock_map_pop(sk, psock); 264 + while (e) { 342 265 if (e->entry) { 343 266 osk = cmpxchg(e->entry, sk, NULL); 344 267 if (osk == sk) { 345 - list_del(&e->list); 346 268 smap_release_sock(psock, sk); 347 269 } 348 270 } else { 349 - hlist_del_rcu(&e->hash_link->hash_node); 350 - smap_release_sock(psock, e->hash_link->sk); 351 - free_htab_elem(e->htab, e->hash_link); 271 + struct htab_elem *link = rcu_dereference(e->hash_link); 272 + struct bpf_htab *htab = rcu_dereference(e->htab); 273 + struct hlist_head *head; 274 + struct htab_elem *l; 275 + struct bucket *b; 276 + 277 + b = __select_bucket(htab, link->hash); 278 + head = &b->head; 279 + raw_spin_lock_bh(&b->lock); 280 + l = lookup_elem_raw(head, 281 + link->hash, link->key, 282 + htab->map.key_size); 283 + /* If another thread deleted this object skip deletion.
284 + * The refcnt on psock may or may not be zero. 285 + */ 286 + if (l) { 287 + hlist_del_rcu(&link->hash_node); 288 + smap_release_sock(psock, link->sk); 289 + free_htab_elem(htab, link); 290 + } 291 + raw_spin_unlock_bh(&b->lock); 352 292 } 293 + e = psock_map_pop(sk, psock); 353 294 } 354 - write_unlock_bh(&sk->sk_callback_lock); 355 295 rcu_read_unlock(); 356 296 close_fun(sk, timeout); 357 297 } ··· 1207 1111 1208 1112 static int bpf_tcp_ulp_register(void) 1209 1113 { 1210 - tcp_bpf_proto = tcp_prot; 1211 - tcp_bpf_proto.close = bpf_tcp_close; 1114 + build_protos(bpf_tcp_prots[SOCKMAP_IPV4], &tcp_prot); 1212 1115 /* Once BPF TX ULP is registered it is never unregistered. It 1213 1116 * will be in the ULP list for the lifetime of the system. Doing 1214 1117 * duplicate registers is not a problem. ··· 1452 1357 { 1453 1358 if (refcount_dec_and_test(&psock->refcnt)) { 1454 1359 tcp_cleanup_ulp(sock); 1360 + write_lock_bh(&sock->sk_callback_lock); 1455 1361 smap_stop_sock(psock, sock); 1362 + write_unlock_bh(&sock->sk_callback_lock); 1456 1363 clear_bit(SMAP_TX_RUNNING, &psock->state); 1457 1364 rcu_assign_sk_user_data(sock, NULL); 1458 1365 call_rcu_sched(&psock->rcu, smap_destroy_psock); ··· 1605 1508 INIT_LIST_HEAD(&psock->maps); 1606 1509 INIT_LIST_HEAD(&psock->ingress); 1607 1510 refcount_set(&psock->refcnt, 1); 1511 + spin_lock_init(&psock->maps_lock); 1608 1512 1609 1513 rcu_assign_sk_user_data(sock, psock); 1610 1514 sock_hold(sock); ··· 1662 1564 return ERR_PTR(err); 1663 1565 } 1664 1566 1665 - static void smap_list_remove(struct smap_psock *psock, 1666 - struct sock **entry, 1667 - struct htab_elem *hash_link) 1567 + static void smap_list_map_remove(struct smap_psock *psock, 1568 + struct sock **entry) 1668 1569 { 1669 1570 struct smap_psock_map_entry *e, *tmp; 1670 1571 1572 + spin_lock_bh(&psock->maps_lock); 1671 1573 list_for_each_entry_safe(e, tmp, &psock->maps, list) { 1672 - if (e->entry == entry || e->hash_link == hash_link) {
1574 + if (e->entry == entry) 1673 1575 list_del(&e->list); 1674 - break; 1675 - } 1676 1576 } 1577 + spin_unlock_bh(&psock->maps_lock); 1578 + } 1579 + 1580 + static void smap_list_hash_remove(struct smap_psock *psock, 1581 + struct htab_elem *hash_link) 1582 + { 1583 + struct smap_psock_map_entry *e, *tmp; 1584 + 1585 + spin_lock_bh(&psock->maps_lock); 1586 + list_for_each_entry_safe(e, tmp, &psock->maps, list) { 1587 + struct htab_elem *c = rcu_dereference(e->hash_link); 1588 + 1589 + if (c == hash_link) 1590 + list_del(&e->list); 1591 + } 1592 + spin_unlock_bh(&psock->maps_lock); 1677 1593 } 1678 1594 1679 1595 static void sock_map_free(struct bpf_map *map) ··· 1713 1601 if (!sock) 1714 1602 continue; 1715 1603 1716 - write_lock_bh(&sock->sk_callback_lock); 1717 1604 psock = smap_psock_sk(sock); 1718 1605 /* This check handles a racing sock event that can get the 1719 1606 * sk_callback_lock before this case but after xchg happens ··· 1720 1609 * to be null and queued for garbage collection. 
1721 1610 */ 1722 1611 if (likely(psock)) { 1723 - smap_list_remove(psock, &stab->sock_map[i], NULL); 1612 + smap_list_map_remove(psock, &stab->sock_map[i]); 1724 1613 smap_release_sock(psock, sock); 1725 1614 } 1726 - write_unlock_bh(&sock->sk_callback_lock); 1727 1615 } 1728 1616 rcu_read_unlock(); 1729 1617 ··· 1771 1661 if (!sock) 1772 1662 return -EINVAL; 1773 1663 1774 - write_lock_bh(&sock->sk_callback_lock); 1775 1664 psock = smap_psock_sk(sock); 1776 1665 if (!psock) 1777 1666 goto out; 1778 1667 1779 1668 if (psock->bpf_parse) 1780 1669 smap_stop_sock(psock, sock); 1781 - smap_list_remove(psock, &stab->sock_map[k], NULL); 1670 + smap_list_map_remove(psock, &stab->sock_map[k]); 1782 1671 smap_release_sock(psock, sock); 1783 1672 out: 1784 - write_unlock_bh(&sock->sk_callback_lock); 1785 1673 return 0; 1786 1674 } 1787 1675 ··· 1860 1752 } 1861 1753 } 1862 1754 1863 - write_lock_bh(&sock->sk_callback_lock); 1864 1755 psock = smap_psock_sk(sock); 1865 1756 1866 1757 /* 2. Do not allow inheriting programs if psock exists and has ··· 1916 1809 if (err) 1917 1810 goto out_free; 1918 1811 smap_init_progs(psock, verdict, parse); 1812 + write_lock_bh(&sock->sk_callback_lock); 1919 1813 smap_start_sock(psock, sock); 1814 + write_unlock_bh(&sock->sk_callback_lock); 1920 1815 } 1921 1816 1922 1817 /* 4. 
Place psock in sockmap for use and stop any programs on ··· 1928 1819 */ 1929 1820 if (map_link) { 1930 1821 e->entry = map_link; 1822 + spin_lock_bh(&psock->maps_lock); 1931 1823 list_add_tail(&e->list, &psock->maps); 1824 + spin_unlock_bh(&psock->maps_lock); 1932 1825 } 1933 - write_unlock_bh(&sock->sk_callback_lock); 1934 1826 return err; 1935 1827 out_free: 1936 1828 smap_release_sock(psock, sock); ··· 1942 1832 } 1943 1833 if (tx_msg) 1944 1834 bpf_prog_put(tx_msg); 1945 - write_unlock_bh(&sock->sk_callback_lock); 1946 1835 kfree(e); 1947 1836 return err; 1948 1837 } ··· 1978 1869 if (osock) { 1979 1870 struct smap_psock *opsock = smap_psock_sk(osock); 1980 1871 1981 - write_lock_bh(&osock->sk_callback_lock); 1982 - smap_list_remove(opsock, &stab->sock_map[i], NULL); 1872 + smap_list_map_remove(opsock, &stab->sock_map[i]); 1983 1873 smap_release_sock(opsock, osock); 1984 - write_unlock_bh(&osock->sk_callback_lock); 1985 1874 } 1986 1875 out: 1987 1876 return err; ··· 2020 1913 bpf_prog_put(orig); 2021 1914 2022 1915 return 0; 1916 + } 1917 + 1918 + int sockmap_get_from_fd(const union bpf_attr *attr, int type, 1919 + struct bpf_prog *prog) 1920 + { 1921 + int ufd = attr->target_fd; 1922 + struct bpf_map *map; 1923 + struct fd f; 1924 + int err; 1925 + 1926 + f = fdget(ufd); 1927 + map = __bpf_map_get(f); 1928 + if (IS_ERR(map)) 1929 + return PTR_ERR(map); 1930 + 1931 + err = sock_map_prog(map, prog, attr->attach_type); 1932 + fdput(f); 1933 + return err; 2023 1934 } 2024 1935 2025 1936 static void *sock_map_lookup(struct bpf_map *map, void *key) ··· 2168 2043 return ERR_PTR(err); 2169 2044 } 2170 2045 2171 - static inline struct bucket *__select_bucket(struct bpf_htab *htab, u32 hash) 2046 + static void __bpf_htab_free(struct rcu_head *rcu) 2172 2047 { 2173 - return &htab->buckets[hash & (htab->n_buckets - 1)]; 2174 - } 2048 + struct bpf_htab *htab; 2175 2049 2176 - static inline struct hlist_head *select_bucket(struct bpf_htab *htab, u32 hash) 2177 - { 2178 - 
return &__select_bucket(htab, hash)->head; 2050 + htab = container_of(rcu, struct bpf_htab, rcu); 2051 + bpf_map_area_free(htab->buckets); 2052 + kfree(htab); 2179 2053 } 2180 2054 2181 2055 static void sock_hash_free(struct bpf_map *map) ··· 2193 2069 */ 2194 2070 rcu_read_lock(); 2195 2071 for (i = 0; i < htab->n_buckets; i++) { 2196 - struct hlist_head *head = select_bucket(htab, i); 2072 + struct bucket *b = __select_bucket(htab, i); 2073 + struct hlist_head *head; 2197 2074 struct hlist_node *n; 2198 2075 struct htab_elem *l; 2199 2076 2077 + raw_spin_lock_bh(&b->lock); 2078 + head = &b->head; 2200 2079 hlist_for_each_entry_safe(l, n, head, hash_node) { 2201 2080 struct sock *sock = l->sk; 2202 2081 struct smap_psock *psock; 2203 2082 2204 2083 hlist_del_rcu(&l->hash_node); 2205 - write_lock_bh(&sock->sk_callback_lock); 2206 2084 psock = smap_psock_sk(sock); 2207 2085 /* This check handles a racing sock event that can get 2208 2086 * the sk_callback_lock before this case but after xchg ··· 2212 2086 * (psock) to be null and queued for garbage collection. 
2213 2087 */ 2214 2088 if (likely(psock)) { 2215 - smap_list_remove(psock, NULL, l); 2089 + smap_list_hash_remove(psock, l); 2216 2090 smap_release_sock(psock, sock); 2217 2091 } 2218 - write_unlock_bh(&sock->sk_callback_lock); 2219 - kfree(l); 2092 + free_htab_elem(htab, l); 2220 2093 } 2094 + raw_spin_unlock_bh(&b->lock); 2221 2095 } 2222 2096 rcu_read_unlock(); 2223 - bpf_map_area_free(htab->buckets); 2224 - kfree(htab); 2097 + call_rcu(&htab->rcu, __bpf_htab_free); 2225 2098 } 2226 2099 2227 2100 static struct htab_elem *alloc_sock_hash_elem(struct bpf_htab *htab, ··· 2245 2120 l_new->sk = sk; 2246 2121 l_new->hash = hash; 2247 2122 return l_new; 2248 - } 2249 - 2250 - static struct htab_elem *lookup_elem_raw(struct hlist_head *head, 2251 - u32 hash, void *key, u32 key_size) 2252 - { 2253 - struct htab_elem *l; 2254 - 2255 - hlist_for_each_entry_rcu(l, head, hash_node) { 2256 - if (l->hash == hash && !memcmp(&l->key, key, key_size)) 2257 - return l; 2258 - } 2259 - 2260 - return NULL; 2261 2123 } 2262 2124 2263 2125 static inline u32 htab_map_hash(const void *key, u32 key_len) ··· 2366 2254 goto bucket_err; 2367 2255 } 2368 2256 2369 - e->hash_link = l_new; 2370 - e->htab = container_of(map, struct bpf_htab, map); 2257 + rcu_assign_pointer(e->hash_link, l_new); 2258 + rcu_assign_pointer(e->htab, 2259 + container_of(map, struct bpf_htab, map)); 2260 + spin_lock_bh(&psock->maps_lock); 2371 2261 list_add_tail(&e->list, &psock->maps); 2262 + spin_unlock_bh(&psock->maps_lock); 2372 2263 2373 2264 /* add new element to the head of the list, so that 2374 2265 * concurrent search will find it before old elem ··· 2381 2266 psock = smap_psock_sk(l_old->sk); 2382 2267 2383 2268 hlist_del_rcu(&l_old->hash_node); 2384 - smap_list_remove(psock, NULL, l_old); 2269 + smap_list_hash_remove(psock, l_old); 2385 2270 smap_release_sock(psock, l_old->sk); 2386 2271 free_htab_elem(htab, l_old); 2387 2272 } ··· 2441 2326 struct smap_psock *psock; 2442 2327 2443 2328 
hlist_del_rcu(&l->hash_node); 2444 - write_lock_bh(&sock->sk_callback_lock); 2445 2329 psock = smap_psock_sk(sock); 2446 2330 /* This check handles a racing sock event that can get the 2447 2331 * sk_callback_lock before this case but after xchg happens ··· 2448 2334 * to be null and queued for garbage collection. 2449 2335 */ 2450 2336 if (likely(psock)) { 2451 - smap_list_remove(psock, NULL, l); 2337 + smap_list_hash_remove(psock, l); 2452 2338 smap_release_sock(psock, sock); 2453 2339 } 2454 - write_unlock_bh(&sock->sk_callback_lock); 2455 2340 free_htab_elem(htab, l); 2456 2341 ret = 0; 2457 2342 } ··· 2496 2383 .map_get_next_key = sock_hash_get_next_key, 2497 2384 .map_update_elem = sock_hash_update_elem, 2498 2385 .map_delete_elem = sock_hash_delete_elem, 2386 + .map_release_uref = sock_map_release, 2499 2387 }; 2500 2388 2501 2389 BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, bpf_sock,
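The bpf_tcp_close() rework above replaces a list_for_each_entry_safe() walk (which needed the lock held across the whole teardown) with psock_map_pop(), which detaches one entry at a time under psock->maps_lock and does the expensive release work with the lock dropped. A minimal userspace sketch of that pop-under-lock idiom (hypothetical names; a pthread mutex stands in for the bh spinlock):

```c
#include <pthread.h>
#include <stddef.h>
#include <stdlib.h>

struct entry {
	int val;
	struct entry *next;
};

struct psock_like {
	pthread_mutex_t maps_lock;
	struct entry *maps;	/* singly linked list head */
};

/* Detach the first entry under the lock; teardown happens lock-free. */
static struct entry *map_pop(struct psock_like *p)
{
	struct entry *e;

	pthread_mutex_lock(&p->maps_lock);
	e = p->maps;
	if (e)
		p->maps = e->next;
	pthread_mutex_unlock(&p->maps_lock);
	return e;
}

static void map_push(struct psock_like *p, struct entry *e)
{
	pthread_mutex_lock(&p->maps_lock);
	e->next = p->maps;
	p->maps = e;
	pthread_mutex_unlock(&p->maps_lock);
}

/* Drain the list: each iteration holds the lock only for the pop. */
static int drain(struct psock_like *p)
{
	struct entry *e;
	int n = 0;

	while ((e = map_pop(p)) != NULL) {
		/* expensive per-entry teardown would run here, unlocked */
		free(e);
		n++;
	}
	return n;
}
```

Holding the lock only for the unlink keeps hold times bounded and lets the per-entry teardown take other locks safely.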
+21 -78
kernel/bpf/syscall.c
··· 1483 1483 return err; 1484 1484 } 1485 1485 1486 - #ifdef CONFIG_CGROUP_BPF 1487 - 1488 1486 static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog, 1489 1487 enum bpf_attach_type attach_type) 1490 1488 { ··· 1497 1499 1498 1500 #define BPF_PROG_ATTACH_LAST_FIELD attach_flags 1499 1501 1500 - static int sockmap_get_from_fd(const union bpf_attr *attr, 1501 - int type, bool attach) 1502 - { 1503 - struct bpf_prog *prog = NULL; 1504 - int ufd = attr->target_fd; 1505 - struct bpf_map *map; 1506 - struct fd f; 1507 - int err; 1508 - 1509 - f = fdget(ufd); 1510 - map = __bpf_map_get(f); 1511 - if (IS_ERR(map)) 1512 - return PTR_ERR(map); 1513 - 1514 - if (attach) { 1515 - prog = bpf_prog_get_type(attr->attach_bpf_fd, type); 1516 - if (IS_ERR(prog)) { 1517 - fdput(f); 1518 - return PTR_ERR(prog); 1519 - } 1520 - } 1521 - 1522 - err = sock_map_prog(map, prog, attr->attach_type); 1523 - if (err) { 1524 - fdput(f); 1525 - if (prog) 1526 - bpf_prog_put(prog); 1527 - return err; 1528 - } 1529 - 1530 - fdput(f); 1531 - return 0; 1532 - } 1533 - 1534 1502 #define BPF_F_ATTACH_MASK \ 1535 1503 (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI) 1536 1504 ··· 1504 1540 { 1505 1541 enum bpf_prog_type ptype; 1506 1542 struct bpf_prog *prog; 1507 - struct cgroup *cgrp; 1508 1543 int ret; 1509 1544 1510 1545 if (!capable(CAP_NET_ADMIN)) ··· 1540 1577 ptype = BPF_PROG_TYPE_CGROUP_DEVICE; 1541 1578 break; 1542 1579 case BPF_SK_MSG_VERDICT: 1543 - return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_MSG, true); 1580 + ptype = BPF_PROG_TYPE_SK_MSG; 1581 + break; 1544 1582 case BPF_SK_SKB_STREAM_PARSER: 1545 1583 case BPF_SK_SKB_STREAM_VERDICT: 1546 - return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, true); 1584 + ptype = BPF_PROG_TYPE_SK_SKB; 1585 + break; 1547 1586 case BPF_LIRC_MODE2: 1548 - return lirc_prog_attach(attr); 1587 + ptype = BPF_PROG_TYPE_LIRC_MODE2; 1588 + break; 1549 1589 default: 1550 1590 return -EINVAL; 1551 1591 } ··· 1562 1596 return -EINVAL; 1563 1597 
} 1564 1598 1565 - cgrp = cgroup_get_from_fd(attr->target_fd); 1566 - if (IS_ERR(cgrp)) { 1567 - bpf_prog_put(prog); 1568 - return PTR_ERR(cgrp); 1599 + switch (ptype) { 1600 + case BPF_PROG_TYPE_SK_SKB: 1601 + case BPF_PROG_TYPE_SK_MSG: 1602 + ret = sockmap_get_from_fd(attr, ptype, prog); 1603 + break; 1604 + case BPF_PROG_TYPE_LIRC_MODE2: 1605 + ret = lirc_prog_attach(attr, prog); 1606 + break; 1607 + default: 1608 + ret = cgroup_bpf_prog_attach(attr, ptype, prog); 1569 1609 } 1570 1610 1571 - ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type, 1572 - attr->attach_flags); 1573 1611 if (ret) 1574 1612 bpf_prog_put(prog); 1575 - cgroup_put(cgrp); 1576 - 1577 1613 return ret; 1578 1614 } 1579 1615 ··· 1584 1616 static int bpf_prog_detach(const union bpf_attr *attr) 1585 1617 { 1586 1618 enum bpf_prog_type ptype; 1587 - struct bpf_prog *prog; 1588 - struct cgroup *cgrp; 1589 - int ret; 1590 1619 1591 1620 if (!capable(CAP_NET_ADMIN)) 1592 1621 return -EPERM; ··· 1616 1651 ptype = BPF_PROG_TYPE_CGROUP_DEVICE; 1617 1652 break; 1618 1653 case BPF_SK_MSG_VERDICT: 1619 - return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_MSG, false); 1654 + return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_MSG, NULL); 1620 1655 case BPF_SK_SKB_STREAM_PARSER: 1621 1656 case BPF_SK_SKB_STREAM_VERDICT: 1622 - return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, false); 1657 + return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, NULL); 1623 1658 case BPF_LIRC_MODE2: 1624 1659 return lirc_prog_detach(attr); 1625 1660 default: 1626 1661 return -EINVAL; 1627 1662 } 1628 1663 1629 - cgrp = cgroup_get_from_fd(attr->target_fd); 1630 - if (IS_ERR(cgrp)) 1631 - return PTR_ERR(cgrp); 1632 - 1633 - prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype); 1634 - if (IS_ERR(prog)) 1635 - prog = NULL; 1636 - 1637 - ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, 0); 1638 - if (prog) 1639 - bpf_prog_put(prog); 1640 - cgroup_put(cgrp); 1641 - return ret; 1664 + return cgroup_bpf_prog_detach(attr, 
ptype); 1642 1665 } 1643 1666 1644 1667 #define BPF_PROG_QUERY_LAST_FIELD query.prog_cnt ··· 1634 1681 static int bpf_prog_query(const union bpf_attr *attr, 1635 1682 union bpf_attr __user *uattr) 1636 1683 { 1637 - struct cgroup *cgrp; 1638 - int ret; 1639 - 1640 1684 if (!capable(CAP_NET_ADMIN)) 1641 1685 return -EPERM; 1642 1686 if (CHECK_ATTR(BPF_PROG_QUERY)) ··· 1661 1711 default: 1662 1712 return -EINVAL; 1663 1713 } 1664 - cgrp = cgroup_get_from_fd(attr->query.target_fd); 1665 - if (IS_ERR(cgrp)) 1666 - return PTR_ERR(cgrp); 1667 - ret = cgroup_bpf_query(cgrp, attr, uattr); 1668 - cgroup_put(cgrp); 1669 - return ret; 1714 + 1715 + return cgroup_bpf_prog_query(attr, uattr); 1670 1716 } 1671 - #endif /* CONFIG_CGROUP_BPF */ 1672 1717 1673 1718 #define BPF_PROG_TEST_RUN_LAST_FIELD test.duration 1674 1719 ··· 2310 2365 case BPF_OBJ_GET: 2311 2366 err = bpf_obj_get(&attr); 2312 2367 break; 2313 - #ifdef CONFIG_CGROUP_BPF 2314 2368 case BPF_PROG_ATTACH: 2315 2369 err = bpf_prog_attach(&attr); 2316 2370 break; ··· 2319 2375 case BPF_PROG_QUERY: 2320 2376 err = bpf_prog_query(&attr, uattr); 2321 2377 break; 2322 - #endif 2323 2378 case BPF_PROG_TEST_RUN: 2324 2379 err = bpf_prog_test_run(&attr, uattr); 2325 2380 break;
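With this refactor, bpf_prog_attach() resolves the program type in a single switch, takes one program reference, dispatches to the sockmap/lirc/cgroup helper, and drops the reference in one place if the helper fails. A sketch of that acquire-once, dispatch, release-on-error shape (all names hypothetical):

```c
enum ptype { PT_CGROUP, PT_SOCKMAP, PT_LIRC, PT_INVALID };

struct prog { int refs; };

static void prog_get(struct prog *p) { p->refs++; }
static void prog_put(struct prog *p) { p->refs--; }

static int sockmap_attach(struct prog *p) { (void)p; return 0; }
static int lirc_attach(struct prog *p)    { (void)p; return -1; } /* simulated failure */
static int cgroup_attach(struct prog *p)  { (void)p; return 0; }

/* Resolve the type once, take one reference, dispatch, release on error. */
static int prog_attach(struct prog *p, enum ptype t)
{
	int ret;

	if (t == PT_INVALID)
		return -22;	/* -EINVAL: rejected before any reference is taken */

	prog_get(p);

	switch (t) {
	case PT_SOCKMAP:
		ret = sockmap_attach(p);
		break;
	case PT_LIRC:
		ret = lirc_attach(p);
		break;
	default:
		ret = cgroup_attach(p);
	}

	if (ret)
		prog_put(p);	/* single error path drops the reference */
	return ret;
}
```

The payoff is that every backend shares one reference-counting discipline instead of each early-return path duplicating it.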
+24 -6
kernel/kthread.c
··· 177 177 static void __kthread_parkme(struct kthread *self) 178 178 { 179 179 for (;;) { 180 - set_current_state(TASK_PARKED); 180 + /* 181 + * TASK_PARKED is a special state; we must serialize against 182 + * possible pending wakeups to avoid store-store collisions on 183 + * task->state. 184 + * 185 + * Such a collision might possibly result in the task state 186 + * changing from TASK_PARKED and us failing the 187 + * wait_task_inactive() in kthread_park(). 188 + */ 189 + set_special_state(TASK_PARKED); 181 190 if (!test_bit(KTHREAD_SHOULD_PARK, &self->flags)) 182 191 break; 192 + 193 + complete_all(&self->parked); 183 194 schedule(); 184 195 } 185 196 __set_current_state(TASK_RUNNING); ··· 201 190 __kthread_parkme(to_kthread(current)); 202 191 } 203 192 EXPORT_SYMBOL_GPL(kthread_parkme); 204 - 205 - void kthread_park_complete(struct task_struct *k) 206 - { 207 - complete_all(&to_kthread(k)->parked); 208 - } 209 193 210 194 static int kthread(void *_create) 211 195 { ··· 467 461 468 462 reinit_completion(&kthread->parked); 469 463 clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags); 464 + /* 465 + * __kthread_parkme() will either see !SHOULD_PARK or get the wakeup. 466 + */ 470 467 wake_up_state(k, TASK_PARKED); 471 468 } 472 469 EXPORT_SYMBOL_GPL(kthread_unpark); ··· 496 487 set_bit(KTHREAD_SHOULD_PARK, &kthread->flags); 497 488 if (k != current) { 498 489 wake_up_process(k); 490 + /* 491 + * Wait for __kthread_parkme() to complete(); this means we 492 + * _will_ have TASK_PARKED and are about to call schedule(). 493 + */ 499 494 wait_for_completion(&kthread->parked); 495 + /* 496 + * Now wait for that schedule() to complete and the task to 497 + * get scheduled out. 498 + */ 499 + WARN_ON_ONCE(!wait_task_inactive(k, TASK_PARKED)); 500 500 } 501 501 502 502 return 0;
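The reworked handshake is: kthread_park() sets KTHREAD_SHOULD_PARK, wakes the thread, blocks on the parked completion (now completed by __kthread_parkme() itself, just before it schedules in TASK_PARKED), and finally confirms the thread is off the CPU with wait_task_inactive(). A userspace analogue of the two-sided handshake, with a flag-plus-condvar standing in for the completion (hypothetical names, not the kernel code):

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct parker {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	bool should_park;	/* KTHREAD_SHOULD_PARK */
	bool parked;		/* the "parked" completion */
};

/* Worker side: acknowledge the park request, then stay parked. */
static void parkme(struct parker *p)
{
	pthread_mutex_lock(&p->lock);
	while (p->should_park) {
		if (!p->parked) {
			p->parked = true;		/* complete_all(&self->parked) */
			pthread_cond_broadcast(&p->cond);
		}
		pthread_cond_wait(&p->cond, &p->lock);	/* "schedule()" while parked */
	}
	pthread_mutex_unlock(&p->lock);
}

static void *worker(void *arg)
{
	struct parker *p = arg;

	pthread_mutex_lock(&p->lock);
	while (!p->should_park)		/* run until asked to park */
		pthread_cond_wait(&p->cond, &p->lock);
	pthread_mutex_unlock(&p->lock);
	parkme(p);			/* park, return once unparked */
	return NULL;
}

/* Parker side: request the park and wait for the acknowledgement. */
static void park(struct parker *p)
{
	pthread_mutex_lock(&p->lock);
	p->should_park = true;			/* set_bit(KTHREAD_SHOULD_PARK) */
	pthread_cond_broadcast(&p->cond);	/* wake_up_process(k) */
	while (!p->parked)			/* wait_for_completion(&parked) */
		pthread_cond_wait(&p->cond, &p->lock);
	pthread_mutex_unlock(&p->lock);
}

static void unpark(struct parker *p)
{
	pthread_mutex_lock(&p->lock);
	p->parked = false;			/* reinit_completion() */
	p->should_park = false;			/* clear_bit() */
	pthread_cond_broadcast(&p->cond);	/* wake_up_state(k, TASK_PARKED) */
	pthread_mutex_unlock(&p->lock);
}
```

The condvar version can only approximate the kernel fix (it has no analogue of wait_task_inactive() or TASK_PARKED store-store ordering), but it shows why the parked signal must come from the parkee itself: only it knows it is about to stop running.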
+25 -16
kernel/rseq.c
··· 85 85 { 86 86 u32 cpu_id = raw_smp_processor_id(); 87 87 88 - if (__put_user(cpu_id, &t->rseq->cpu_id_start)) 88 + if (put_user(cpu_id, &t->rseq->cpu_id_start)) 89 89 return -EFAULT; 90 - if (__put_user(cpu_id, &t->rseq->cpu_id)) 90 + if (put_user(cpu_id, &t->rseq->cpu_id)) 91 91 return -EFAULT; 92 92 trace_rseq_update(t); 93 93 return 0; ··· 100 100 /* 101 101 * Reset cpu_id_start to its initial state (0). 102 102 */ 103 - if (__put_user(cpu_id_start, &t->rseq->cpu_id_start)) 103 + if (put_user(cpu_id_start, &t->rseq->cpu_id_start)) 104 104 return -EFAULT; 105 105 /* 106 106 * Reset cpu_id to RSEQ_CPU_ID_UNINITIALIZED, so any user coming 107 107 * in after unregistration can figure out that rseq needs to be 108 108 * registered again. 109 109 */ 110 - if (__put_user(cpu_id, &t->rseq->cpu_id)) 110 + if (put_user(cpu_id, &t->rseq->cpu_id)) 111 111 return -EFAULT; 112 112 return 0; 113 113 } ··· 115 115 static int rseq_get_rseq_cs(struct task_struct *t, struct rseq_cs *rseq_cs) 116 116 { 117 117 struct rseq_cs __user *urseq_cs; 118 - unsigned long ptr; 118 + u64 ptr; 119 119 u32 __user *usig; 120 120 u32 sig; 121 121 int ret; 122 122 123 - ret = __get_user(ptr, &t->rseq->rseq_cs); 124 - if (ret) 125 - return ret; 123 + if (copy_from_user(&ptr, &t->rseq->rseq_cs.ptr64, sizeof(ptr))) 124 + return -EFAULT; 126 125 if (!ptr) { 127 126 memset(rseq_cs, 0, sizeof(*rseq_cs)); 128 127 return 0; 129 128 } 130 - urseq_cs = (struct rseq_cs __user *)ptr; 129 + if (ptr >= TASK_SIZE) 130 + return -EINVAL; 131 + urseq_cs = (struct rseq_cs __user *)(unsigned long)ptr; 131 132 if (copy_from_user(rseq_cs, urseq_cs, sizeof(*rseq_cs))) 132 133 return -EFAULT; 133 - if (rseq_cs->version > 0) 134 - return -EINVAL; 135 134 135 + if (rseq_cs->start_ip >= TASK_SIZE || 136 + rseq_cs->start_ip + rseq_cs->post_commit_offset >= TASK_SIZE || 137 + rseq_cs->abort_ip >= TASK_SIZE || 138 + rseq_cs->version > 0) 139 + return -EINVAL; 140 + /* Check for overflow. 
*/ 141 + if (rseq_cs->start_ip + rseq_cs->post_commit_offset < rseq_cs->start_ip) 142 + return -EINVAL; 136 143 /* Ensure that abort_ip is not in the critical section. */ 137 144 if (rseq_cs->abort_ip - rseq_cs->start_ip < rseq_cs->post_commit_offset) 138 145 return -EINVAL; 139 146 140 - usig = (u32 __user *)(rseq_cs->abort_ip - sizeof(u32)); 147 + usig = (u32 __user *)(unsigned long)(rseq_cs->abort_ip - sizeof(u32)); 141 148 ret = get_user(sig, usig); 142 149 if (ret) 143 150 return ret; ··· 153 146 printk_ratelimited(KERN_WARNING 154 147 "Possible attack attempt. Unexpected rseq signature 0x%x, expecting 0x%x (pid=%d, addr=%p).\n", 155 148 sig, current->rseq_sig, current->pid, usig); 156 - return -EPERM; 149 + return -EINVAL; 157 150 } 158 151 return 0; 159 152 } ··· 164 157 int ret; 165 158 166 159 /* Get thread flags. */ 167 - ret = __get_user(flags, &t->rseq->flags); 160 + ret = get_user(flags, &t->rseq->flags); 168 161 if (ret) 169 162 return ret; 170 163 ··· 202 195 * of code outside of the rseq assembly block. This performs 203 196 * a lazy clear of the rseq_cs field. 204 197 * 205 - * Set rseq_cs to NULL with single-copy atomicity. 198 + * Set rseq_cs to NULL. 206 199 */ 207 - return __put_user(0UL, &t->rseq->rseq_cs); 200 + if (clear_user(&t->rseq->rseq_cs.ptr64, sizeof(t->rseq->rseq_cs.ptr64))) 201 + return -EFAULT; 202 + return 0; 208 203 } 209 204 210 205 /*
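rseq_get_rseq_cs() now treats the user-supplied rseq_cs descriptor as fully untrusted: every instruction pointer is bounded against TASK_SIZE, and the start_ip + post_commit_offset sum is checked for wraparound before the abort_ip range test. A standalone sketch of that validation (hypothetical names; FAKE_TASK_SIZE is a stand-in for TASK_SIZE):

```c
#include <stdbool.h>
#include <stdint.h>

#define FAKE_TASK_SIZE ((uint64_t)1 << 47)	/* stand-in for TASK_SIZE */

struct cs_desc {
	uint64_t start_ip;
	uint64_t post_commit_offset;
	uint64_t abort_ip;
};

/* Reject descriptors that point outside the address space or wrap. */
static bool cs_valid(const struct cs_desc *cs)
{
	if (cs->start_ip >= FAKE_TASK_SIZE ||
	    cs->start_ip + cs->post_commit_offset >= FAKE_TASK_SIZE ||
	    cs->abort_ip >= FAKE_TASK_SIZE)
		return false;
	/* overflow check: start_ip + post_commit_offset must not wrap */
	if (cs->start_ip + cs->post_commit_offset < cs->start_ip)
		return false;
	/* abort_ip must not land inside [start_ip, start_ip + offset) */
	if (cs->abort_ip - cs->start_ip < cs->post_commit_offset)
		return false;
	return true;
}
```

Without the explicit wrap check, a huge post_commit_offset could make the "inside the critical section" test pass by overflowing, which is exactly what the range checks in the hunk above close off.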
+32 -35
kernel/sched/core.c
··· 7 7 */ 8 8 #include "sched.h" 9 9 10 - #include <linux/kthread.h> 11 10 #include <linux/nospec.h> 12 11 13 12 #include <linux/kcov.h> ··· 2723 2724 membarrier_mm_sync_core_before_usermode(mm); 2724 2725 mmdrop(mm); 2725 2726 } 2726 - if (unlikely(prev_state & (TASK_DEAD|TASK_PARKED))) { 2727 - switch (prev_state) { 2728 - case TASK_DEAD: 2729 - if (prev->sched_class->task_dead) 2730 - prev->sched_class->task_dead(prev); 2727 + if (unlikely(prev_state == TASK_DEAD)) { 2728 + if (prev->sched_class->task_dead) 2729 + prev->sched_class->task_dead(prev); 2731 2730 2732 - /* 2733 - * Remove function-return probe instances associated with this 2734 - * task and put them back on the free list. 2735 - */ 2736 - kprobe_flush_task(prev); 2731 + /* 2732 + * Remove function-return probe instances associated with this 2733 + * task and put them back on the free list. 2734 + */ 2735 + kprobe_flush_task(prev); 2737 2736 2738 - /* Task is done with its stack. */ 2739 - put_task_stack(prev); 2737 + /* Task is done with its stack. */ 2738 + put_task_stack(prev); 2740 2739 2741 - put_task_struct(prev); 2742 - break; 2743 - 2744 - case TASK_PARKED: 2745 - kthread_park_complete(prev); 2746 - break; 2747 - } 2740 + put_task_struct(prev); 2748 2741 } 2749 2742 2750 2743 tick_nohz_task_switch(); ··· 3104 3113 struct tick_work *twork = container_of(dwork, struct tick_work, work); 3105 3114 int cpu = twork->cpu; 3106 3115 struct rq *rq = cpu_rq(cpu); 3116 + struct task_struct *curr; 3107 3117 struct rq_flags rf; 3118 + u64 delta; 3108 3119 3109 3120 /* 3110 3121 * Handle the tick only if it appears the remote CPU is running in full ··· 3115 3122 * statistics and checks timeslices in a time-independent way, regardless 3116 3123 * of when exactly it is running. 
3117 3124 */ 3118 - if (!idle_cpu(cpu) && tick_nohz_tick_stopped_cpu(cpu)) { 3119 - struct task_struct *curr; 3120 - u64 delta; 3125 + if (idle_cpu(cpu) || !tick_nohz_tick_stopped_cpu(cpu)) 3126 + goto out_requeue; 3121 3127 3122 - rq_lock_irq(rq, &rf); 3123 - update_rq_clock(rq); 3124 - curr = rq->curr; 3125 - delta = rq_clock_task(rq) - curr->se.exec_start; 3128 + rq_lock_irq(rq, &rf); 3129 + curr = rq->curr; 3130 + if (is_idle_task(curr)) 3131 + goto out_unlock; 3126 3132 3127 - /* 3128 - * Make sure the next tick runs within a reasonable 3129 - * amount of time. 3130 - */ 3131 - WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3); 3132 - curr->sched_class->task_tick(rq, curr, 0); 3133 - rq_unlock_irq(rq, &rf); 3134 - } 3133 + update_rq_clock(rq); 3134 + delta = rq_clock_task(rq) - curr->se.exec_start; 3135 3135 3136 + /* 3137 + * Make sure the next tick runs within a reasonable 3138 + * amount of time. 3139 + */ 3140 + WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3); 3141 + curr->sched_class->task_tick(rq, curr, 0); 3142 + 3143 + out_unlock: 3144 + rq_unlock_irq(rq, &rf); 3145 + 3146 + out_requeue: 3136 3147 /* 3137 3148 * Run the remote tick once per second (1Hz). This arbitrary 3138 3149 * frequency is large enough to avoid overload but short enough
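sched_tick_remote() is flattened from a nested if into goto labels, so the no-work cases bail out early and every path through the locked region funnels into a single unlock site. A small sketch of the single-exit locking idiom (hypothetical names):

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int work_done;

/* Returns 1 if the guarded work ran, 0 if it was skipped.
 * All exits from the critical section share one unlock site. */
static int tick_once(int cpu_idle, int curr_is_idle)
{
	int ran = 0;

	if (cpu_idle)
		goto out_requeue;	/* nothing to do; lock never taken */

	pthread_mutex_lock(&lock);
	if (curr_is_idle)
		goto out_unlock;	/* bail out, but still unlock below */

	work_done++;			/* the guarded work */
	ran = 1;

out_unlock:
	pthread_mutex_unlock(&lock);
out_requeue:
	return ran;
}
```

The label names mirror the hunk above: adding a new early-exit condition later means one new `goto`, not another copy of the unlock.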
+1 -1
kernel/sched/cpufreq_schedutil.c
··· 192 192 { 193 193 struct rq *rq = cpu_rq(sg_cpu->cpu); 194 194 195 - if (rq->rt.rt_nr_running) 195 + if (rt_rq_is_runnable(&rq->rt)) 196 196 return sg_cpu->max; 197 197 198 198 /*
+22 -23
kernel/sched/fair.c
··· 3982 3982 if (!sched_feat(UTIL_EST)) 3983 3983 return; 3984 3984 3985 - /* 3986 - * Update root cfs_rq's estimated utilization 3987 - * 3988 - * If *p is the last task then the root cfs_rq's estimated utilization 3989 - * of a CPU is 0 by definition. 3990 - */ 3991 - ue.enqueued = 0; 3992 - if (cfs_rq->nr_running) { 3993 - ue.enqueued = cfs_rq->avg.util_est.enqueued; 3994 - ue.enqueued -= min_t(unsigned int, ue.enqueued, 3995 - (_task_util_est(p) | UTIL_AVG_UNCHANGED)); 3996 - } 3985 + /* Update root cfs_rq's estimated utilization */ 3986 + ue.enqueued = cfs_rq->avg.util_est.enqueued; 3987 + ue.enqueued -= min_t(unsigned int, ue.enqueued, 3988 + (_task_util_est(p) | UTIL_AVG_UNCHANGED)); 3997 3989 WRITE_ONCE(cfs_rq->avg.util_est.enqueued, ue.enqueued); 3998 3990 3999 3991 /* ··· 4582 4590 now = sched_clock_cpu(smp_processor_id()); 4583 4591 cfs_b->runtime = cfs_b->quota; 4584 4592 cfs_b->runtime_expires = now + ktime_to_ns(cfs_b->period); 4593 + cfs_b->expires_seq++; 4585 4594 } 4586 4595 4587 4596 static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg) ··· 4605 4612 struct task_group *tg = cfs_rq->tg; 4606 4613 struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(tg); 4607 4614 u64 amount = 0, min_amount, expires; 4615 + int expires_seq; 4608 4616 4609 4617 /* note: this is a positive sum as runtime_remaining <= 0 */ 4610 4618 min_amount = sched_cfs_bandwidth_slice() - cfs_rq->runtime_remaining; ··· 4622 4628 cfs_b->idle = 0; 4623 4629 } 4624 4630 } 4631 + expires_seq = cfs_b->expires_seq; 4625 4632 expires = cfs_b->runtime_expires; 4626 4633 raw_spin_unlock(&cfs_b->lock); 4627 4634 ··· 4632 4637 * spread between our sched_clock and the one on which runtime was 4633 4638 * issued. 
4634 4639 */ 4635 - if ((s64)(expires - cfs_rq->runtime_expires) > 0) 4640 + if (cfs_rq->expires_seq != expires_seq) { 4641 + cfs_rq->expires_seq = expires_seq; 4636 4642 cfs_rq->runtime_expires = expires; 4643 + } 4637 4644 4638 4645 return cfs_rq->runtime_remaining > 0; 4639 4646 } ··· 4661 4664 * has not truly expired. 4662 4665 * 4663 4666 * Fortunately we can check determine whether this the case by checking 4664 - * whether the global deadline has advanced. It is valid to compare 4665 - * cfs_b->runtime_expires without any locks since we only care about 4666 - * exact equality, so a partial write will still work. 4667 + * whether the global deadline(cfs_b->expires_seq) has advanced. 4667 4668 */ 4668 - 4669 - if (cfs_rq->runtime_expires != cfs_b->runtime_expires) { 4669 + if (cfs_rq->expires_seq == cfs_b->expires_seq) { 4670 4670 /* extend local deadline, drift is bounded above by 2 ticks */ 4671 4671 cfs_rq->runtime_expires += TICK_NSEC; 4672 4672 } else { ··· 5196 5202 5197 5203 void start_cfs_bandwidth(struct cfs_bandwidth *cfs_b) 5198 5204 { 5205 + u64 overrun; 5206 + 5199 5207 lockdep_assert_held(&cfs_b->lock); 5200 5208 5201 - if (!cfs_b->period_active) { 5202 - cfs_b->period_active = 1; 5203 - hrtimer_forward_now(&cfs_b->period_timer, cfs_b->period); 5204 - hrtimer_start_expires(&cfs_b->period_timer, HRTIMER_MODE_ABS_PINNED); 5205 - } 5209 + if (cfs_b->period_active) 5210 + return; 5211 + 5212 + cfs_b->period_active = 1; 5213 + overrun = hrtimer_forward_now(&cfs_b->period_timer, cfs_b->period); 5214 + cfs_b->runtime_expires += (overrun + 1) * ktime_to_ns(cfs_b->period); 5215 + cfs_b->expires_seq++; 5216 + hrtimer_start_expires(&cfs_b->period_timer, HRTIMER_MODE_ABS_PINNED); 5206 5217 } 5207 5218 5208 5219 static void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
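The bandwidth fix replaces the "did runtime_expires move forward?" timestamp comparison with an expires_seq counter that the global pool bumps each period; a cfs_rq then trusts its cached deadline only while its snapshot of the sequence matches. A minimal sketch of sequence-stamped cache invalidation (hypothetical names):

```c
#include <stdint.h>

struct global_bw {
	int expires_seq;		/* bumped whenever a new period starts */
	uint64_t runtime_expires;
};

struct local_bw {
	int expires_seq;		/* snapshot of the global sequence */
	uint64_t runtime_expires;
};

/* Start a new period: advance the deadline and the sequence together. */
static void new_period(struct global_bw *g, uint64_t period_ns)
{
	g->runtime_expires += period_ns;
	g->expires_seq++;
}

/* Refresh the local deadline only when the snapshot is stale. */
static void sync_expires(struct local_bw *l, const struct global_bw *g)
{
	if (l->expires_seq != g->expires_seq) {
		l->expires_seq = g->expires_seq;
		l->runtime_expires = g->runtime_expires;
	}
}

/* The cached deadline is trusted only while the sequences match. */
static int deadline_current(const struct local_bw *l, const struct global_bw *g)
{
	return l->expires_seq == g->expires_seq;
}
```

A counter sidesteps the problem the original comparison had: two clocks that drift (or a deadline re-written to the same value) can fool a timestamp test, but a sequence number changes exactly once per period.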
+10 -6
kernel/sched/rt.c
··· 508 508 509 509 rt_se = rt_rq->tg->rt_se[cpu]; 510 510 511 - if (!rt_se) 511 + if (!rt_se) { 512 512 dequeue_top_rt_rq(rt_rq); 513 + /* Kick cpufreq (see the comment in kernel/sched/sched.h). */ 514 + cpufreq_update_util(rq_of_rt_rq(rt_rq), 0); 515 + } 513 516 else if (on_rt_rq(rt_se)) 514 517 dequeue_rt_entity(rt_se, 0); 515 518 } ··· 1004 1001 sub_nr_running(rq, rt_rq->rt_nr_running); 1005 1002 rt_rq->rt_queued = 0; 1006 1003 1007 - /* Kick cpufreq (see the comment in kernel/sched/sched.h). */ 1008 - cpufreq_update_util(rq, 0); 1009 1004 } 1010 1005 1011 1006 static void ··· 1015 1014 1016 1015 if (rt_rq->rt_queued) 1017 1016 return; 1018 - if (rt_rq_throttled(rt_rq) || !rt_rq->rt_nr_running) 1017 + 1018 + if (rt_rq_throttled(rt_rq)) 1019 1019 return; 1020 1020 1021 - add_nr_running(rq, rt_rq->rt_nr_running); 1022 - rt_rq->rt_queued = 1; 1021 + if (rt_rq->rt_nr_running) { 1022 + add_nr_running(rq, rt_rq->rt_nr_running); 1023 + rt_rq->rt_queued = 1; 1024 + } 1023 1025 1024 1026 /* Kick cpufreq (see the comment in kernel/sched/sched.h). */ 1025 1027 cpufreq_update_util(rq, 0);
+9 -2
kernel/sched/sched.h
··· 334 334 u64 runtime; 335 335 s64 hierarchical_quota; 336 336 u64 runtime_expires; 337 + int expires_seq; 337 338 338 - int idle; 339 - int period_active; 339 + short idle; 340 + short period_active; 340 341 struct hrtimer period_timer; 341 342 struct hrtimer slack_timer; 342 343 struct list_head throttled_cfs_rq; ··· 552 551 553 552 #ifdef CONFIG_CFS_BANDWIDTH 554 553 int runtime_enabled; 554 + int expires_seq; 555 555 u64 runtime_expires; 556 556 s64 runtime_remaining; 557 557 ··· 610 608 struct task_group *tg; 611 609 #endif 612 610 }; 611 + 612 + static inline bool rt_rq_is_runnable(struct rt_rq *rt_rq) 613 + { 614 + return rt_rq->rt_queued && rt_rq->rt_nr_running; 615 + } 613 616 614 617 /* Deadline class' related fields in a runqueue */ 615 618 struct dl_rq {
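The schedutil change above works because sched.h now provides rt_rq_is_runnable(), which requires both rt_queued and rt_nr_running: a dequeued (for example, throttled) rt_rq can still carry a nonzero task count. A toy version of the combined predicate (hypothetical names):

```c
#include <stdbool.h>

struct rt_rq_like {
	int rt_queued;			/* entity is enqueued on the runqueue */
	unsigned int rt_nr_running;	/* tasks accounted on this rt_rq */
};

/* Runnable only when both conditions hold: checking rt_nr_running
 * alone would keep cpufreq pinned at max for a throttled rt_rq. */
static bool rt_runnable(const struct rt_rq_like *rt)
{
	return rt->rt_queued && rt->rt_nr_running;
}
```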
+1 -2
kernel/time/tick-common.c
··· 277 277 */ 278 278 return !curdev || 279 279 newdev->rating > curdev->rating || 280 - (!cpumask_equal(curdev->cpumask, newdev->cpumask) && 281 - !tick_check_percpu(curdev, newdev, smp_processor_id())); 280 + !cpumask_equal(curdev->cpumask, newdev->cpumask); 282 281 } 283 282 284 283 /*
+1 -12
kernel/trace/ftrace.c
··· 192 192 op->saved_func(ip, parent_ip, op, regs); 193 193 } 194 194 195 - /** 196 - * clear_ftrace_function - reset the ftrace function 197 - * 198 - * This NULLs the ftrace function and in essence stops 199 - * tracing. There may be lag 200 - */ 201 - void clear_ftrace_function(void) 202 - { 203 - ftrace_trace_function = ftrace_stub; 204 - } 205 - 206 195 static void ftrace_sync(struct work_struct *work) 207 196 { 208 197 /* ··· 6678 6689 { 6679 6690 ftrace_disabled = 1; 6680 6691 ftrace_enabled = 0; 6681 - clear_ftrace_function(); 6692 + ftrace_trace_function = ftrace_stub; 6682 6693 } 6683 6694 6684 6695 /**
+9 -4
kernel/trace/trace.c
··· 2953 2953 } 2954 2954 EXPORT_SYMBOL_GPL(trace_vbprintk); 2955 2955 2956 + __printf(3, 0) 2956 2957 static int 2957 2958 __trace_array_vprintk(struct ring_buffer *buffer, 2958 2959 unsigned long ip, const char *fmt, va_list args) ··· 3008 3007 return len; 3009 3008 } 3010 3009 3010 + __printf(3, 0) 3011 3011 int trace_array_vprintk(struct trace_array *tr, 3012 3012 unsigned long ip, const char *fmt, va_list args) 3013 3013 { 3014 3014 return __trace_array_vprintk(tr->trace_buffer.buffer, ip, fmt, args); 3015 3015 } 3016 3016 3017 + __printf(3, 0) 3017 3018 int trace_array_printk(struct trace_array *tr, 3018 3019 unsigned long ip, const char *fmt, ...) 3019 3020 { ··· 3031 3028 return ret; 3032 3029 } 3033 3030 3031 + __printf(3, 4) 3034 3032 int trace_array_printk_buf(struct ring_buffer *buffer, 3035 3033 unsigned long ip, const char *fmt, ...) 3036 3034 { ··· 3047 3043 return ret; 3048 3044 } 3049 3045 3046 + __printf(2, 0) 3050 3047 int trace_vprintk(unsigned long ip, const char *fmt, va_list args) 3051 3048 { 3052 3049 return trace_array_vprintk(&global_trace, ip, fmt, args); ··· 3365 3360 3366 3361 print_event_info(buf, m); 3367 3362 3368 - seq_printf(m, "# TASK-PID CPU# %s TIMESTAMP FUNCTION\n", tgid ? "TGID " : ""); 3369 - seq_printf(m, "# | | | %s | |\n", tgid ? " | " : ""); 3363 + seq_printf(m, "# TASK-PID %s CPU# TIMESTAMP FUNCTION\n", tgid ? "TGID " : ""); 3364 + seq_printf(m, "# | | %s | | |\n", tgid ? " | " : ""); 3370 3365 } 3371 3366 3372 3367 static void print_func_help_header_irq(struct trace_buffer *buf, struct seq_file *m, ··· 3386 3381 tgid ? tgid_space : space); 3387 3382 seq_printf(m, "# %s||| / delay\n", 3388 3383 tgid ? tgid_space : space); 3389 - seq_printf(m, "# TASK-PID CPU#%s|||| TIMESTAMP FUNCTION\n", 3384 + seq_printf(m, "# TASK-PID %sCPU# |||| TIMESTAMP FUNCTION\n", 3390 3385 tgid ? " TGID " : space); 3391 - seq_printf(m, "# | | | %s|||| | |\n", 3386 + seq_printf(m, "# | | %s | |||| | |\n", 3392 3387 tgid ? 
" | " : space); 3393 3388 } 3394 3389
+1 -3
kernel/trace/trace.h
··· 583 583 static inline struct ring_buffer_iter * 584 584 trace_buffer_iter(struct trace_iterator *iter, int cpu) 585 585 { 586 - if (iter->buffer_iter && iter->buffer_iter[cpu]) 587 - return iter->buffer_iter[cpu]; 588 - return NULL; 586 + return iter->buffer_iter ? iter->buffer_iter[cpu] : NULL; 589 587 } 590 588 591 589 int tracer_init(struct tracer *t, struct trace_array *tr);
+5
kernel/trace/trace_events_filter.c
··· 1701 1701 * @filter_str: filter string 1702 1702 * @set_str: remember @filter_str and enable detailed error in filter 1703 1703 * @filterp: out param for created filter (always updated on return) 1704 + * Must be a pointer that references a NULL pointer. 1704 1705 * 1705 1706 * Creates a filter for @call with @filter_str. If @set_str is %true, 1706 1707 * @filter_str is copied and recorded in the new filter. ··· 1718 1717 { 1719 1718 struct filter_parse_error *pe = NULL; 1720 1719 int err; 1720 + 1721 + /* filterp must point to NULL */ 1722 + if (WARN_ON(*filterp)) 1723 + *filterp = NULL; 1721 1724 1722 1725 err = create_filter_start(filter_string, set_str, &pe, filterp); 1723 1726 if (err)
+1 -1
kernel/trace/trace_events_hist.c
··· 393 393 else if (system) 394 394 snprintf(err, MAX_FILTER_STR_VAL, "%s.%s", system, event); 395 395 else 396 - strncpy(err, var, MAX_FILTER_STR_VAL); 396 + strscpy(err, var, MAX_FILTER_STR_VAL); 397 397 398 398 hist_err(str, err); 399 399 }
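The hist-trigger change swaps strncpy() for strscpy() because strncpy() leaves the destination unterminated whenever the source is at least as long as the buffer, while strscpy() always NUL-terminates and reports truncation. A portable approximation of that contract (my_strscpy is a hypothetical helper, not the kernel implementation):

```c
#include <stddef.h>
#include <string.h>

/* Copy src into dst (size bytes), always NUL-terminating.
 * Returns the number of characters copied, or -1 on truncation,
 * mirroring strscpy()'s -E2BIG convention. */
static long my_strscpy(char *dst, const char *src, size_t size)
{
	size_t len;

	if (size == 0)
		return -1;

	len = strlen(src);
	if (len >= size) {
		memcpy(dst, src, size - 1);
		dst[size - 1] = '\0';
		return -1;		/* truncated */
	}
	memcpy(dst, src, len + 1);	/* includes the NUL */
	return (long)len;
}
```

The error path in hist_err_event() reads the copied string back later, so the guaranteed terminator is the point of the one-line fix above.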
+4 -1
kernel/trace/trace_functions_graph.c
··· 831 831 struct ftrace_graph_ret *graph_ret; 832 832 struct ftrace_graph_ent *call; 833 833 unsigned long long duration; 834 + int cpu = iter->cpu; 834 835 int i; 835 836 836 837 graph_ret = &ret_entry->ret; ··· 840 839 841 840 if (data) { 842 841 struct fgraph_cpu_data *cpu_data; 843 - int cpu = iter->cpu; 844 842 845 843 cpu_data = per_cpu_ptr(data->cpu_data, cpu); 846 844 ··· 868 868 trace_seq_putc(s, ' '); 869 869 870 870 trace_seq_printf(s, "%ps();\n", (void *)call->func); 871 + 872 + print_graph_irq(iter, graph_ret->func, TRACE_GRAPH_RET, 873 + cpu, iter->ent->pid, flags); 871 874 872 875 return trace_handle_return(s); 873 876 }
+5 -1
kernel/trace/trace_kprobe.c
··· 1480 1480 } 1481 1481 1482 1482 ret = __register_trace_kprobe(tk); 1483 - if (ret < 0) 1483 + if (ret < 0) { 1484 + kfree(tk->tp.call.print_fmt); 1484 1485 goto error; 1486 + } 1485 1487 1486 1488 return &tk->tp.call; 1487 1489 error: ··· 1503 1501 } 1504 1502 1505 1503 __unregister_trace_kprobe(tk); 1504 + 1505 + kfree(tk->tp.call.print_fmt); 1506 1506 free_trace_kprobe(tk); 1507 1507 } 1508 1508 #endif /* CONFIG_PERF_EVENTS */
+3 -2
kernel/trace/trace_output.c
··· 594 594 595 595 trace_find_cmdline(entry->pid, comm); 596 596 597 - trace_seq_printf(s, "%16s-%-5d [%03d] ", 598 - comm, entry->pid, iter->cpu); 597 + trace_seq_printf(s, "%16s-%-5d ", comm, entry->pid); 599 598 600 599 if (tr->trace_flags & TRACE_ITER_RECORD_TGID) { 601 600 unsigned int tgid = trace_find_tgid(entry->pid); ··· 604 605 else 605 606 trace_seq_printf(s, "(%5d) ", tgid); 606 607 } 608 + 609 + trace_seq_printf(s, "[%03d] ", iter->cpu); 607 610 608 611 if (tr->trace_flags & TRACE_ITER_IRQ_INFO) 609 612 trace_print_lat_fmt(s, entry);
+20
lib/test_bpf.c
··· 5282 5282 { /* Mainly checking JIT here. */ 5283 5283 "BPF_MAXINSNS: Ctx heavy transformations", 5284 5284 { }, 5285 + #if defined(CONFIG_BPF_JIT_ALWAYS_ON) && defined(CONFIG_S390) 5286 + CLASSIC | FLAG_EXPECTED_FAIL, 5287 + #else 5285 5288 CLASSIC, 5289 + #endif 5286 5290 { }, 5287 5291 { 5288 5292 { 1, !!(SKB_VLAN_TCI & VLAN_TAG_PRESENT) }, 5289 5293 { 10, !!(SKB_VLAN_TCI & VLAN_TAG_PRESENT) } 5290 5294 }, 5291 5295 .fill_helper = bpf_fill_maxinsns6, 5296 + .expected_errcode = -ENOTSUPP, 5292 5297 }, 5293 5298 { /* Mainly checking JIT here. */ 5294 5299 "BPF_MAXINSNS: Call heavy transformations", 5295 5300 { }, 5301 + #if defined(CONFIG_BPF_JIT_ALWAYS_ON) && defined(CONFIG_S390) 5302 + CLASSIC | FLAG_NO_DATA | FLAG_EXPECTED_FAIL, 5303 + #else 5296 5304 CLASSIC | FLAG_NO_DATA, 5305 + #endif 5297 5306 { }, 5298 5307 { { 1, 0 }, { 10, 0 } }, 5299 5308 .fill_helper = bpf_fill_maxinsns7, 5309 + .expected_errcode = -ENOTSUPP, 5300 5310 }, 5301 5311 { /* Mainly checking JIT here. */ 5302 5312 "BPF_MAXINSNS: Jump heavy test", ··· 5357 5347 { 5358 5348 "BPF_MAXINSNS: exec all MSH", 5359 5349 { }, 5350 + #if defined(CONFIG_BPF_JIT_ALWAYS_ON) && defined(CONFIG_S390) 5351 + CLASSIC | FLAG_EXPECTED_FAIL, 5352 + #else 5360 5353 CLASSIC, 5354 + #endif 5361 5355 { 0xfa, 0xfb, 0xfc, 0xfd, }, 5362 5356 { { 4, 0xababab83 } }, 5363 5357 .fill_helper = bpf_fill_maxinsns13, 5358 + .expected_errcode = -ENOTSUPP, 5364 5359 }, 5365 5360 { 5366 5361 "BPF_MAXINSNS: ld_abs+get_processor_id", 5367 5362 { }, 5363 + #if defined(CONFIG_BPF_JIT_ALWAYS_ON) && defined(CONFIG_S390) 5364 + CLASSIC | FLAG_EXPECTED_FAIL, 5365 + #else 5368 5366 CLASSIC, 5367 + #endif 5369 5368 { }, 5370 5369 { { 1, 0xbee } }, 5371 5370 .fill_helper = bpf_fill_ld_abs_get_processor_id, 5371 + .expected_errcode = -ENOTSUPP, 5372 5372 }, 5373 5373 /* 5374 5374 * LD_IND / LD_ABS on fragmented SKBs
+16 -2
mm/debug.c
··· 43 43 44 44 void __dump_page(struct page *page, const char *reason) 45 45 { 46 + bool page_poisoned = PagePoisoned(page); 47 + int mapcount; 48 + 49 + /* 50 + * If struct page is poisoned don't access Page*() functions as that 51 + * leads to recursive loop. Page*() check for poisoned pages, and calls 52 + * dump_page() when detected. 53 + */ 54 + if (page_poisoned) { 55 + pr_emerg("page:%px is uninitialized and poisoned", page); 56 + goto hex_only; 57 + } 58 + 46 59 /* 47 60 * Avoid VM_BUG_ON() in page_mapcount(). 48 61 * page->_mapcount space in struct page is used by sl[aou]b pages to 49 62 * encode own info. 50 63 */ 51 - int mapcount = PageSlab(page) ? 0 : page_mapcount(page); 64 + mapcount = PageSlab(page) ? 0 : page_mapcount(page); 52 65 53 66 pr_emerg("page:%px count:%d mapcount:%d mapping:%px index:%#lx", 54 67 page, page_ref_count(page), mapcount, ··· 73 60 74 61 pr_emerg("flags: %#lx(%pGp)\n", page->flags, &page->flags); 75 62 63 + hex_only: 76 64 print_hex_dump(KERN_ALERT, "raw: ", DUMP_PREFIX_NONE, 32, 77 65 sizeof(unsigned long), page, 78 66 sizeof(struct page), false); ··· 82 68 pr_alert("page dumped because: %s\n", reason); 83 69 84 70 #ifdef CONFIG_MEMCG 85 - if (page->mem_cgroup) 71 + if (!page_poisoned && page->mem_cgroup) 86 72 pr_alert("page->mem_cgroup:%px\n", page->mem_cgroup); 87 73 #endif 88 74 }
-2
mm/gup.c
··· 1238 1238 int locked = 0; 1239 1239 long ret = 0; 1240 1240 1241 - VM_BUG_ON(start & ~PAGE_MASK); 1242 - VM_BUG_ON(len != PAGE_ALIGN(len)); 1243 1241 end = start + len; 1244 1242 1245 1243 for (nstart = start; nstart < end; nstart = nend) {
+1
mm/hugetlb.c
··· 2163 2163 */ 2164 2164 if (hstate_is_gigantic(h)) 2165 2165 adjust_managed_page_count(page, 1 << h->order); 2166 + cond_resched(); 2166 2167 } 2167 2168 } 2168 2169
+3 -2
mm/kasan/kasan.c
··· 619 619 int kasan_module_alloc(void *addr, size_t size) 620 620 { 621 621 void *ret; 622 + size_t scaled_size; 622 623 size_t shadow_size; 623 624 unsigned long shadow_start; 624 625 625 626 shadow_start = (unsigned long)kasan_mem_to_shadow(addr); 626 - shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT, 627 - PAGE_SIZE); 627 + scaled_size = (size + KASAN_SHADOW_MASK) >> KASAN_SHADOW_SCALE_SHIFT; 628 + shadow_size = round_up(scaled_size, PAGE_SIZE); 628 629 629 630 if (WARN_ON(!PAGE_ALIGNED(shadow_start))) 630 631 return -EINVAL;
+2 -1
mm/memblock.c
··· 227 227 * so we use WARN_ONCE() here to see the stack trace if 228 228 * fail happens. 229 229 */ 230 - WARN_ONCE(1, "memblock: bottom-up allocation failed, memory hotunplug may be affected\n"); 230 + WARN_ONCE(IS_ENABLED(CONFIG_MEMORY_HOTREMOVE), 231 + "memblock: bottom-up allocation failed, memory hotremove may be affected\n"); 231 232 } 232 233 233 234 return __memblock_find_range_top_down(start, end, size, align, nid,
+12 -17
mm/mmap.c
··· 186 186 return next; 187 187 } 188 188 189 - static int do_brk(unsigned long addr, unsigned long len, struct list_head *uf); 190 - 189 + static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long flags, 190 + struct list_head *uf); 191 191 SYSCALL_DEFINE1(brk, unsigned long, brk) 192 192 { 193 193 unsigned long retval; ··· 245 245 goto out; 246 246 247 247 /* Ok, looks good - let it rip. */ 248 - if (do_brk(oldbrk, newbrk-oldbrk, &uf) < 0) 248 + if (do_brk_flags(oldbrk, newbrk-oldbrk, 0, &uf) < 0) 249 249 goto out; 250 250 251 251 set_brk: ··· 2929 2929 * anonymous maps. eventually we may be able to do some 2930 2930 * brk-specific accounting here. 2931 2931 */ 2932 - static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long flags, struct list_head *uf) 2932 + static int do_brk_flags(unsigned long addr, unsigned long len, unsigned long flags, struct list_head *uf) 2933 2933 { 2934 2934 struct mm_struct *mm = current->mm; 2935 2935 struct vm_area_struct *vma, *prev; 2936 - unsigned long len; 2937 2936 struct rb_node **rb_link, *rb_parent; 2938 2937 pgoff_t pgoff = addr >> PAGE_SHIFT; 2939 2938 int error; 2940 - 2941 - len = PAGE_ALIGN(request); 2942 - if (len < request) 2943 - return -ENOMEM; 2944 - if (!len) 2945 - return 0; 2946 2939 2947 2940 /* Until we need other flags, refuse anything except VM_EXEC. 
*/ 2948 2941 if ((flags & (~VM_EXEC)) != 0) ··· 3008 3015 return 0; 3009 3016 } 3010 3017 3011 - static int do_brk(unsigned long addr, unsigned long len, struct list_head *uf) 3012 - { 3013 - return do_brk_flags(addr, len, 0, uf); 3014 - } 3015 - 3016 - int vm_brk_flags(unsigned long addr, unsigned long len, unsigned long flags) 3018 + int vm_brk_flags(unsigned long addr, unsigned long request, unsigned long flags) 3017 3019 { 3018 3020 struct mm_struct *mm = current->mm; 3021 + unsigned long len; 3019 3022 int ret; 3020 3023 bool populate; 3021 3024 LIST_HEAD(uf); 3025 + 3026 + len = PAGE_ALIGN(request); 3027 + if (len < request) 3028 + return -ENOMEM; 3029 + if (!len) 3030 + return 0; 3022 3031 3023 3032 if (down_write_killable(&mm->mmap_sem)) 3024 3033 return -EINTR;
+2 -2
mm/page_alloc.c
··· 6847 6847 /* Initialise every node */ 6848 6848 mminit_verify_pageflags_layout(); 6849 6849 setup_nr_node_ids(); 6850 + zero_resv_unavail(); 6850 6851 for_each_online_node(nid) { 6851 6852 pg_data_t *pgdat = NODE_DATA(nid); 6852 6853 free_area_init_node(nid, NULL, ··· 6858 6857 node_set_state(nid, N_MEMORY); 6859 6858 check_for_memory(pgdat, nid); 6860 6859 } 6861 - zero_resv_unavail(); 6862 6860 } 6863 6861 6864 6862 static int __init cmdline_parse_core(char *p, unsigned long *core, ··· 7033 7033 7034 7034 void __init free_area_init(unsigned long *zones_size) 7035 7035 { 7036 + zero_resv_unavail(); 7036 7037 free_area_init_node(0, zones_size, 7037 7038 __pa(PAGE_OFFSET) >> PAGE_SHIFT, NULL); 7038 - zero_resv_unavail(); 7039 7039 } 7040 7040 7041 7041 static int page_alloc_cpu_dead(unsigned int cpu)
+7 -1
mm/rmap.c
··· 64 64 #include <linux/backing-dev.h> 65 65 #include <linux/page_idle.h> 66 66 #include <linux/memremap.h> 67 + #include <linux/userfaultfd_k.h> 67 68 68 69 #include <asm/tlbflush.h> 69 70 ··· 1482 1481 set_pte_at(mm, address, pvmw.pte, pteval); 1483 1482 } 1484 1483 1485 - } else if (pte_unused(pteval)) { 1484 + } else if (pte_unused(pteval) && !userfaultfd_armed(vma)) { 1486 1485 /* 1487 1486 * The guest indicated that the page content is of no 1488 1487 * interest anymore. Simply discard the pte, vmscan 1489 1488 * will take care of the rest. 1489 + * A future reference will then fault in a new zero 1490 + * page. When userfaultfd is active, we must not drop 1491 + * this page though, as its main user (postcopy 1492 + * migration) will not expect userfaults on already 1493 + * copied pages. 1490 1494 */ 1491 1495 dec_mm_counter(mm, mm_counter(page)); 1492 1496 /* We have to invalidate as we cleared the pte */
+1 -1
net/8021q/vlan.c
··· 693 693 out_unlock: 694 694 rcu_read_unlock(); 695 695 out: 696 - NAPI_GRO_CB(skb)->flush |= flush; 696 + skb_gro_flush_final(skb, pp, flush); 697 697 698 698 return pp; 699 699 }
+2 -1
net/9p/client.c
··· 225 225 } 226 226 227 227 free_and_return: 228 - v9fs_put_trans(clnt->trans_mod); 228 + if (ret) 229 + v9fs_put_trans(clnt->trans_mod); 229 230 kfree(tmp_options); 230 231 return ret; 231 232 }
-4
net/Makefile
··· 20 20 obj-$(CONFIG_XFRM) += xfrm/ 21 21 obj-$(CONFIG_UNIX) += unix/ 22 22 obj-$(CONFIG_NET) += ipv6/ 23 - ifneq ($(CC_CAN_LINK),y) 24 - $(warning CC cannot link executables. Skipping bpfilter.) 25 - else 26 23 obj-$(CONFIG_BPFILTER) += bpfilter/ 27 - endif 28 24 obj-$(CONFIG_PACKET) += packet/ 29 25 obj-$(CONFIG_NET_KEY) += key/ 30 26 obj-$(CONFIG_BRIDGE) += bridge/
+1 -1
net/bpfilter/Kconfig
··· 1 1 menuconfig BPFILTER 2 2 bool "BPF based packet filtering framework (BPFILTER)" 3 - default n 4 3 depends on NET && BPF && INET 5 4 help 6 5 This builds experimental bpfilter framework that is aiming to ··· 8 9 if BPFILTER 9 10 config BPFILTER_UMH 10 11 tristate "bpfilter kernel module with user mode helper" 12 + depends on $(success,$(srctree)/scripts/cc-can-link.sh $(CC)) 11 13 default m 12 14 help 13 15 This builds bpfilter kernel module with embedded user mode helper
+2 -15
net/bpfilter/Makefile
··· 15 15 HOSTLDFLAGS += -static 16 16 endif 17 17 18 - # a bit of elf magic to convert bpfilter_umh binary into a binary blob 19 - # inside bpfilter_umh.o elf file referenced by 20 - # _binary_net_bpfilter_bpfilter_umh_start symbol 21 - # which bpfilter_kern.c passes further into umh blob loader at run-time 22 - quiet_cmd_copy_umh = GEN $@ 23 - cmd_copy_umh = echo ':' > $(obj)/.bpfilter_umh.o.cmd; \ 24 - $(OBJCOPY) -I binary \ 25 - `LC_ALL=C $(OBJDUMP) -f net/bpfilter/bpfilter_umh \ 26 - |awk -F' |,' '/file format/{print "-O",$$NF} \ 27 - /^architecture:/{print "-B",$$2}'` \ 28 - --rename-section .data=.init.rodata $< $@ 29 - 30 - $(obj)/bpfilter_umh.o: $(obj)/bpfilter_umh 31 - $(call cmd,copy_umh) 18 + $(obj)/bpfilter_umh_blob.o: $(obj)/bpfilter_umh 32 19 33 20 obj-$(CONFIG_BPFILTER_UMH) += bpfilter.o 34 - bpfilter-objs += bpfilter_kern.o bpfilter_umh.o 21 + bpfilter-objs += bpfilter_kern.o bpfilter_umh_blob.o
+5 -6
net/bpfilter/bpfilter_kern.c
··· 10 10 #include <linux/file.h> 11 11 #include "msgfmt.h" 12 12 13 - #define UMH_start _binary_net_bpfilter_bpfilter_umh_start 14 - #define UMH_end _binary_net_bpfilter_bpfilter_umh_end 15 - 16 - extern char UMH_start; 17 - extern char UMH_end; 13 + extern char bpfilter_umh_start; 14 + extern char bpfilter_umh_end; 18 15 19 16 static struct umh_info info; 20 17 /* since ip_getsockopt() can run in parallel, serialize access to umh */ ··· 90 93 int err; 91 94 92 95 /* fork usermode process */ 93 - err = fork_usermode_blob(&UMH_start, &UMH_end - &UMH_start, &info); 96 + err = fork_usermode_blob(&bpfilter_umh_start, 97 + &bpfilter_umh_end - &bpfilter_umh_start, 98 + &info); 94 99 if (err) 95 100 return err; 96 101 pr_info("Loaded bpfilter_umh pid %d\n", info.pid);
+7
net/bpfilter/bpfilter_umh_blob.S
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + .section .init.rodata, "a" 3 + .global bpfilter_umh_start 4 + bpfilter_umh_start: 5 + .incbin "net/bpfilter/bpfilter_umh" 6 + .global bpfilter_umh_end 7 + bpfilter_umh_end:
+2 -9
net/core/dev_ioctl.c
··· 285 285 if (ifr->ifr_qlen < 0) 286 286 return -EINVAL; 287 287 if (dev->tx_queue_len ^ ifr->ifr_qlen) { 288 - unsigned int orig_len = dev->tx_queue_len; 289 - 290 - dev->tx_queue_len = ifr->ifr_qlen; 291 - err = call_netdevice_notifiers( 292 - NETDEV_CHANGE_TX_QUEUE_LEN, dev); 293 - err = notifier_to_errno(err); 294 - if (err) { 295 - dev->tx_queue_len = orig_len; 288 + err = dev_change_tx_queue_len(dev, ifr->ifr_qlen); 289 + if (err) 296 290 return err; 297 - } 298 291 } 299 292 return 0; 300 293
+79 -1
net/core/fib_rules.c
··· 416 416 if (rule->mark && r->mark != rule->mark) 417 417 continue; 418 418 419 + if (rule->suppress_ifgroup != -1 && 420 + r->suppress_ifgroup != rule->suppress_ifgroup) 421 + continue; 422 + 423 + if (rule->suppress_prefixlen != -1 && 424 + r->suppress_prefixlen != rule->suppress_prefixlen) 425 + continue; 426 + 419 427 if (rule->mark_mask && r->mark_mask != rule->mark_mask) 420 428 continue; 421 429 ··· 442 434 continue; 443 435 444 436 if (rule->ip_proto && r->ip_proto != rule->ip_proto) 437 + continue; 438 + 439 + if (rule->proto && r->proto != rule->proto) 445 440 continue; 446 441 447 442 if (fib_rule_port_range_set(&rule->sport_range) && ··· 656 645 return err; 657 646 } 658 647 648 + static int rule_exists(struct fib_rules_ops *ops, struct fib_rule_hdr *frh, 649 + struct nlattr **tb, struct fib_rule *rule) 650 + { 651 + struct fib_rule *r; 652 + 653 + list_for_each_entry(r, &ops->rules_list, list) { 654 + if (r->action != rule->action) 655 + continue; 656 + 657 + if (r->table != rule->table) 658 + continue; 659 + 660 + if (r->pref != rule->pref) 661 + continue; 662 + 663 + if (memcmp(r->iifname, rule->iifname, IFNAMSIZ)) 664 + continue; 665 + 666 + if (memcmp(r->oifname, rule->oifname, IFNAMSIZ)) 667 + continue; 668 + 669 + if (r->mark != rule->mark) 670 + continue; 671 + 672 + if (r->suppress_ifgroup != rule->suppress_ifgroup) 673 + continue; 674 + 675 + if (r->suppress_prefixlen != rule->suppress_prefixlen) 676 + continue; 677 + 678 + if (r->mark_mask != rule->mark_mask) 679 + continue; 680 + 681 + if (r->tun_id != rule->tun_id) 682 + continue; 683 + 684 + if (r->fr_net != rule->fr_net) 685 + continue; 686 + 687 + if (r->l3mdev != rule->l3mdev) 688 + continue; 689 + 690 + if (!uid_eq(r->uid_range.start, rule->uid_range.start) || 691 + !uid_eq(r->uid_range.end, rule->uid_range.end)) 692 + continue; 693 + 694 + if (r->ip_proto != rule->ip_proto) 695 + continue; 696 + 697 + if (r->proto != rule->proto) 698 + continue; 699 + 700 + if 
(!fib_rule_port_range_compare(&r->sport_range, 701 + &rule->sport_range)) 702 + continue; 703 + 704 + if (!fib_rule_port_range_compare(&r->dport_range, 705 + &rule->dport_range)) 706 + continue; 707 + 708 + if (!ops->compare(r, frh, tb)) 709 + continue; 710 + return 1; 711 + } 712 + return 0; 713 + } 714 + 659 715 int fib_nl_newrule(struct sk_buff *skb, struct nlmsghdr *nlh, 660 716 struct netlink_ext_ack *extack) 661 717 { ··· 757 679 goto errout; 758 680 759 681 if ((nlh->nlmsg_flags & NLM_F_EXCL) && 760 - rule_find(ops, frh, tb, rule, user_priority)) { 682 + rule_exists(ops, frh, tb, rule)) { 761 683 err = -EEXIST; 762 684 goto errout_free; 763 685 }
+54 -32
net/core/filter.c
··· 4073 4073 memcpy(params->smac, dev->dev_addr, ETH_ALEN); 4074 4074 params->h_vlan_TCI = 0; 4075 4075 params->h_vlan_proto = 0; 4076 + params->ifindex = dev->ifindex; 4076 4077 4077 - return dev->ifindex; 4078 + return 0; 4078 4079 } 4079 4080 #endif 4080 4081 ··· 4099 4098 /* verify forwarding is enabled on this interface */ 4100 4099 in_dev = __in_dev_get_rcu(dev); 4101 4100 if (unlikely(!in_dev || !IN_DEV_FORWARD(in_dev))) 4102 - return 0; 4101 + return BPF_FIB_LKUP_RET_FWD_DISABLED; 4103 4102 4104 4103 if (flags & BPF_FIB_LOOKUP_OUTPUT) { 4105 4104 fl4.flowi4_iif = 1; ··· 4124 4123 4125 4124 tb = fib_get_table(net, tbid); 4126 4125 if (unlikely(!tb)) 4127 - return 0; 4126 + return BPF_FIB_LKUP_RET_NOT_FWDED; 4128 4127 4129 4128 err = fib_table_lookup(tb, &fl4, &res, FIB_LOOKUP_NOREF); 4130 4129 } else { ··· 4136 4135 err = fib_lookup(net, &fl4, &res, FIB_LOOKUP_NOREF); 4137 4136 } 4138 4137 4139 - if (err || res.type != RTN_UNICAST) 4140 - return 0; 4138 + if (err) { 4139 + /* map fib lookup errors to RTN_ type */ 4140 + if (err == -EINVAL) 4141 + return BPF_FIB_LKUP_RET_BLACKHOLE; 4142 + if (err == -EHOSTUNREACH) 4143 + return BPF_FIB_LKUP_RET_UNREACHABLE; 4144 + if (err == -EACCES) 4145 + return BPF_FIB_LKUP_RET_PROHIBIT; 4146 + 4147 + return BPF_FIB_LKUP_RET_NOT_FWDED; 4148 + } 4149 + 4150 + if (res.type != RTN_UNICAST) 4151 + return BPF_FIB_LKUP_RET_NOT_FWDED; 4141 4152 4142 4153 if (res.fi->fib_nhs > 1) 4143 4154 fib_select_path(net, &res, &fl4, NULL); ··· 4157 4144 if (check_mtu) { 4158 4145 mtu = ip_mtu_from_fib_result(&res, params->ipv4_dst); 4159 4146 if (params->tot_len > mtu) 4160 - return 0; 4147 + return BPF_FIB_LKUP_RET_FRAG_NEEDED; 4161 4148 } 4162 4149 4163 4150 nh = &res.fi->fib_nh[res.nh_sel]; 4164 4151 4165 4152 /* do not handle lwt encaps right now */ 4166 4153 if (nh->nh_lwtstate) 4167 - return 0; 4154 + return BPF_FIB_LKUP_RET_UNSUPP_LWT; 4168 4155 4169 4156 dev = nh->nh_dev; 4170 - if (unlikely(!dev)) 4171 - return 0; 4172 - 4173 4157 
if (nh->nh_gw) 4174 4158 params->ipv4_dst = nh->nh_gw; 4175 4159 ··· 4176 4166 * rcu_read_lock_bh is not needed here 4177 4167 */ 4178 4168 neigh = __ipv4_neigh_lookup_noref(dev, (__force u32)params->ipv4_dst); 4179 - if (neigh) 4180 - return bpf_fib_set_fwd_params(params, neigh, dev); 4169 + if (!neigh) 4170 + return BPF_FIB_LKUP_RET_NO_NEIGH; 4181 4171 4182 - return 0; 4172 + return bpf_fib_set_fwd_params(params, neigh, dev); 4183 4173 } 4184 4174 #endif 4185 4175 ··· 4200 4190 4201 4191 /* link local addresses are never forwarded */ 4202 4192 if (rt6_need_strict(dst) || rt6_need_strict(src)) 4203 - return 0; 4193 + return BPF_FIB_LKUP_RET_NOT_FWDED; 4204 4194 4205 4195 dev = dev_get_by_index_rcu(net, params->ifindex); 4206 4196 if (unlikely(!dev)) ··· 4208 4198 4209 4199 idev = __in6_dev_get_safely(dev); 4210 4200 if (unlikely(!idev || !net->ipv6.devconf_all->forwarding)) 4211 - return 0; 4201 + return BPF_FIB_LKUP_RET_FWD_DISABLED; 4212 4202 4213 4203 if (flags & BPF_FIB_LOOKUP_OUTPUT) { 4214 4204 fl6.flowi6_iif = 1; ··· 4235 4225 4236 4226 tb = ipv6_stub->fib6_get_table(net, tbid); 4237 4227 if (unlikely(!tb)) 4238 - return 0; 4228 + return BPF_FIB_LKUP_RET_NOT_FWDED; 4239 4229 4240 4230 f6i = ipv6_stub->fib6_table_lookup(net, tb, oif, &fl6, strict); 4241 4231 } else { ··· 4248 4238 } 4249 4239 4250 4240 if (unlikely(IS_ERR_OR_NULL(f6i) || f6i == net->ipv6.fib6_null_entry)) 4251 - return 0; 4241 + return BPF_FIB_LKUP_RET_NOT_FWDED; 4252 4242 4253 - if (unlikely(f6i->fib6_flags & RTF_REJECT || 4254 - f6i->fib6_type != RTN_UNICAST)) 4255 - return 0; 4243 + if (unlikely(f6i->fib6_flags & RTF_REJECT)) { 4244 + switch (f6i->fib6_type) { 4245 + case RTN_BLACKHOLE: 4246 + return BPF_FIB_LKUP_RET_BLACKHOLE; 4247 + case RTN_UNREACHABLE: 4248 + return BPF_FIB_LKUP_RET_UNREACHABLE; 4249 + case RTN_PROHIBIT: 4250 + return BPF_FIB_LKUP_RET_PROHIBIT; 4251 + default: 4252 + return BPF_FIB_LKUP_RET_NOT_FWDED; 4253 + } 4254 + } 4255 + 4256 + if (f6i->fib6_type != RTN_UNICAST) 
4257 + return BPF_FIB_LKUP_RET_NOT_FWDED; 4256 4258 4257 4259 if (f6i->fib6_nsiblings && fl6.flowi6_oif == 0) 4258 4260 f6i = ipv6_stub->fib6_multipath_select(net, f6i, &fl6, ··· 4274 4252 if (check_mtu) { 4275 4253 mtu = ipv6_stub->ip6_mtu_from_fib6(f6i, dst, src); 4276 4254 if (params->tot_len > mtu) 4277 - return 0; 4255 + return BPF_FIB_LKUP_RET_FRAG_NEEDED; 4278 4256 } 4279 4257 4280 4258 if (f6i->fib6_nh.nh_lwtstate) 4281 - return 0; 4259 + return BPF_FIB_LKUP_RET_UNSUPP_LWT; 4282 4260 4283 4261 if (f6i->fib6_flags & RTF_GATEWAY) 4284 4262 *dst = f6i->fib6_nh.nh_gw; ··· 4292 4270 */ 4293 4271 neigh = ___neigh_lookup_noref(ipv6_stub->nd_tbl, neigh_key_eq128, 4294 4272 ndisc_hashfn, dst, dev); 4295 - if (neigh) 4296 - return bpf_fib_set_fwd_params(params, neigh, dev); 4273 + if (!neigh) 4274 + return BPF_FIB_LKUP_RET_NO_NEIGH; 4297 4275 4298 - return 0; 4276 + return bpf_fib_set_fwd_params(params, neigh, dev); 4299 4277 } 4300 4278 #endif 4301 4279 ··· 4337 4315 struct bpf_fib_lookup *, params, int, plen, u32, flags) 4338 4316 { 4339 4317 struct net *net = dev_net(skb->dev); 4340 - int index = -EAFNOSUPPORT; 4318 + int rc = -EAFNOSUPPORT; 4341 4319 4342 4320 if (plen < sizeof(*params)) 4343 4321 return -EINVAL; ··· 4348 4326 switch (params->family) { 4349 4327 #if IS_ENABLED(CONFIG_INET) 4350 4328 case AF_INET: 4351 - index = bpf_ipv4_fib_lookup(net, params, flags, false); 4329 + rc = bpf_ipv4_fib_lookup(net, params, flags, false); 4352 4330 break; 4353 4331 #endif 4354 4332 #if IS_ENABLED(CONFIG_IPV6) 4355 4333 case AF_INET6: 4356 - index = bpf_ipv6_fib_lookup(net, params, flags, false); 4334 + rc = bpf_ipv6_fib_lookup(net, params, flags, false); 4357 4335 break; 4358 4336 #endif 4359 4337 } 4360 4338 4361 - if (index > 0) { 4339 + if (!rc) { 4362 4340 struct net_device *dev; 4363 4341 4364 - dev = dev_get_by_index_rcu(net, index); 4342 + dev = dev_get_by_index_rcu(net, params->ifindex); 4365 4343 if (!is_skb_forwardable(dev, skb)) 4366 - index = 0; 4344 + rc 
= BPF_FIB_LKUP_RET_FRAG_NEEDED; 4367 4345 } 4368 4346 4369 - return index; 4347 + return rc; 4370 4348 } 4371 4349 4372 4350 static const struct bpf_func_proto bpf_skb_fib_lookup_proto = {
+1 -2
net/core/skbuff.c
··· 5276 5276 if (npages >= 1 << order) { 5277 5277 page = alloc_pages((gfp_mask & ~__GFP_DIRECT_RECLAIM) | 5278 5278 __GFP_COMP | 5279 - __GFP_NOWARN | 5280 - __GFP_NORETRY, 5279 + __GFP_NOWARN, 5281 5280 order); 5282 5281 if (page) 5283 5282 goto fill_page;
+5 -2
net/core/sock.c
··· 3243 3243 3244 3244 rsk_prot->slab = kmem_cache_create(rsk_prot->slab_name, 3245 3245 rsk_prot->obj_size, 0, 3246 - prot->slab_flags, NULL); 3246 + SLAB_ACCOUNT | prot->slab_flags, 3247 + NULL); 3247 3248 3248 3249 if (!rsk_prot->slab) { 3249 3250 pr_crit("%s: Can't create request sock SLAB cache!\n", ··· 3259 3258 if (alloc_slab) { 3260 3259 prot->slab = kmem_cache_create_usercopy(prot->name, 3261 3260 prot->obj_size, 0, 3262 - SLAB_HWCACHE_ALIGN | prot->slab_flags, 3261 + SLAB_HWCACHE_ALIGN | SLAB_ACCOUNT | 3262 + prot->slab_flags, 3263 3263 prot->useroffset, prot->usersize, 3264 3264 NULL); 3265 3265 ··· 3283 3281 kmem_cache_create(prot->twsk_prot->twsk_slab_name, 3284 3282 prot->twsk_prot->twsk_obj_size, 3285 3283 0, 3284 + SLAB_ACCOUNT | 3286 3285 prot->slab_flags, 3287 3286 NULL); 3288 3287 if (prot->twsk_prot->twsk_slab == NULL)
+1 -3
net/ipv4/fou.c
··· 448 448 out_unlock: 449 449 rcu_read_unlock(); 450 450 out: 451 - NAPI_GRO_CB(skb)->flush |= flush; 452 - skb_gro_remcsum_cleanup(skb, &grc); 453 - skb->remcsum_offload = 0; 451 + skb_gro_flush_final_remcsum(skb, pp, flush, &grc); 454 452 455 453 return pp; 456 454 }
+1 -1
net/ipv4/gre_offload.c
··· 223 223 out_unlock: 224 224 rcu_read_unlock(); 225 225 out: 226 - NAPI_GRO_CB(skb)->flush |= flush; 226 + skb_gro_flush_final(skb, pp, flush); 227 227 228 228 return pp; 229 229 }
+13 -5
net/ipv4/sysctl_net_ipv4.c
··· 265 265 ipv4.sysctl_tcp_fastopen); 266 266 struct ctl_table tbl = { .maxlen = (TCP_FASTOPEN_KEY_LENGTH * 2 + 10) }; 267 267 struct tcp_fastopen_context *ctxt; 268 - int ret; 269 268 u32 user_key[4]; /* 16 bytes, matching TCP_FASTOPEN_KEY_LENGTH */ 269 + __le32 key[4]; 270 + int ret, i; 270 271 271 272 tbl.data = kmalloc(tbl.maxlen, GFP_KERNEL); 272 273 if (!tbl.data) ··· 276 275 rcu_read_lock(); 277 276 ctxt = rcu_dereference(net->ipv4.tcp_fastopen_ctx); 278 277 if (ctxt) 279 - memcpy(user_key, ctxt->key, TCP_FASTOPEN_KEY_LENGTH); 278 + memcpy(key, ctxt->key, TCP_FASTOPEN_KEY_LENGTH); 280 279 else 281 - memset(user_key, 0, sizeof(user_key)); 280 + memset(key, 0, sizeof(key)); 282 281 rcu_read_unlock(); 282 + 283 + for (i = 0; i < ARRAY_SIZE(key); i++) 284 + user_key[i] = le32_to_cpu(key[i]); 283 285 284 286 snprintf(tbl.data, tbl.maxlen, "%08x-%08x-%08x-%08x", 285 287 user_key[0], user_key[1], user_key[2], user_key[3]); ··· 294 290 ret = -EINVAL; 295 291 goto bad_key; 296 292 } 297 - tcp_fastopen_reset_cipher(net, NULL, user_key, 293 + 294 + for (i = 0; i < ARRAY_SIZE(user_key); i++) 295 + key[i] = cpu_to_le32(user_key[i]); 296 + 297 + tcp_fastopen_reset_cipher(net, NULL, key, 298 298 TCP_FASTOPEN_KEY_LENGTH); 299 299 } 300 300 301 301 bad_key: 302 302 pr_debug("proc FO key set 0x%x-%x-%x-%x <- 0x%s: %u\n", 303 - user_key[0], user_key[1], user_key[2], user_key[3], 303 + user_key[0], user_key[1], user_key[2], user_key[3], 304 304 (char *)tbl.data, ret); 305 305 kfree(tbl.data); 306 306 return ret;
+11 -2
net/ipv4/tcp_input.c
··· 265 265 * it is probably a retransmit. 266 266 */ 267 267 if (tp->ecn_flags & TCP_ECN_SEEN) 268 - tcp_enter_quickack_mode(sk, 1); 268 + tcp_enter_quickack_mode(sk, 2); 269 269 break; 270 270 case INET_ECN_CE: 271 271 if (tcp_ca_needs_ecn(sk)) ··· 273 273 274 274 if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR)) { 275 275 /* Better not delay acks, sender can have a very low cwnd */ 276 - tcp_enter_quickack_mode(sk, 1); 276 + tcp_enter_quickack_mode(sk, 2); 277 277 tp->ecn_flags |= TCP_ECN_DEMAND_CWR; 278 278 } 279 279 tp->ecn_flags |= TCP_ECN_SEEN; ··· 3181 3181 3182 3182 if (tcp_is_reno(tp)) { 3183 3183 tcp_remove_reno_sacks(sk, pkts_acked); 3184 + 3185 + /* If any of the cumulatively ACKed segments was 3186 + * retransmitted, non-SACK case cannot confirm that 3187 + * progress was due to original transmission due to 3188 + * lack of TCPCB_SACKED_ACKED bits even if some of 3189 + * the packets may have been never retransmitted. 3190 + */ 3191 + if (flag & FLAG_RETRANS_DATA_ACKED) 3192 + flag &= ~FLAG_ORIG_SACK_ACKED; 3184 3193 } else { 3185 3194 int delta; 3186 3195
+1 -1
net/ipv4/udp_offload.c
··· 394 394 out_unlock: 395 395 rcu_read_unlock(); 396 396 out: 397 - NAPI_GRO_CB(skb)->flush |= flush; 397 + skb_gro_flush_final(skb, pp, flush); 398 398 return pp; 399 399 } 400 400 EXPORT_SYMBOL(udp_gro_receive);
+6 -3
net/ipv6/addrconf.c
··· 4528 4528 unsigned long expires, u32 flags) 4529 4529 { 4530 4530 struct fib6_info *f6i; 4531 + u32 prio; 4531 4532 4532 4533 f6i = addrconf_get_prefix_route(&ifp->addr, 4533 4534 ifp->prefix_len, ··· 4537 4536 if (!f6i) 4538 4537 return -ENOENT; 4539 4538 4540 - if (f6i->fib6_metric != ifp->rt_priority) { 4539 + prio = ifp->rt_priority ? : IP6_RT_PRIO_ADDRCONF; 4540 + if (f6i->fib6_metric != prio) { 4541 + /* delete old one */ 4542 + ip6_del_rt(dev_net(ifp->idev->dev), f6i); 4543 + 4541 4544 /* add new one */ 4542 4545 addrconf_prefix_route(&ifp->addr, ifp->prefix_len, 4543 4546 ifp->rt_priority, ifp->idev->dev, 4544 4547 expires, flags, GFP_KERNEL); 4545 - /* delete old one */ 4546 - ip6_del_rt(dev_net(ifp->idev->dev), f6i); 4547 4548 } else { 4548 4549 if (!expires) 4549 4550 fib6_clean_expires(f6i);
+3 -3
net/ipv6/netfilter/nf_conntrack_reasm.c
··· 107 107 if (hdr == NULL) 108 108 goto err_reg; 109 109 110 - net->nf_frag.sysctl.frags_hdr = hdr; 110 + net->nf_frag_frags_hdr = hdr; 111 111 return 0; 112 112 113 113 err_reg: ··· 121 121 { 122 122 struct ctl_table *table; 123 123 124 - table = net->nf_frag.sysctl.frags_hdr->ctl_table_arg; 125 - unregister_net_sysctl_table(net->nf_frag.sysctl.frags_hdr); 124 + table = net->nf_frag_frags_hdr->ctl_table_arg; 125 + unregister_net_sysctl_table(net->nf_frag_frags_hdr); 126 126 if (!net_eq(net, &init_net)) 127 127 kfree(table); 128 128 }
+1 -1
net/ipv6/seg6_hmac.c
··· 373 373 return -ENOMEM; 374 374 375 375 for_each_possible_cpu(cpu) { 376 - tfm = crypto_alloc_shash(algo->name, 0, GFP_KERNEL); 376 + tfm = crypto_alloc_shash(algo->name, 0, 0); 377 377 if (IS_ERR(tfm)) 378 378 return PTR_ERR(tfm); 379 379 p_tfm = per_cpu_ptr(algo->tfms, cpu);
+2
net/mac80211/tx.c
··· 4845 4845 skb_reset_network_header(skb); 4846 4846 skb_reset_mac_header(skb); 4847 4847 4848 + local_bh_disable(); 4848 4849 __ieee80211_subif_start_xmit(skb, skb->dev, flags); 4850 + local_bh_enable(); 4849 4851 4850 4852 return 0; 4851 4853 }
+47 -5
net/netfilter/nf_conncount.c
··· 47 47 struct hlist_node node; 48 48 struct nf_conntrack_tuple tuple; 49 49 struct nf_conntrack_zone zone; 50 + int cpu; 51 + u32 jiffies32; 50 52 }; 51 53 52 54 struct nf_conncount_rb { ··· 93 91 return false; 94 92 conn->tuple = *tuple; 95 93 conn->zone = *zone; 94 + conn->cpu = raw_smp_processor_id(); 95 + conn->jiffies32 = (u32)jiffies; 96 96 hlist_add_head(&conn->node, head); 97 97 return true; 98 98 } 99 99 EXPORT_SYMBOL_GPL(nf_conncount_add); 100 + 101 + static const struct nf_conntrack_tuple_hash * 102 + find_or_evict(struct net *net, struct nf_conncount_tuple *conn) 103 + { 104 + const struct nf_conntrack_tuple_hash *found; 105 + unsigned long a, b; 106 + int cpu = raw_smp_processor_id(); 107 + __s32 age; 108 + 109 + found = nf_conntrack_find_get(net, &conn->zone, &conn->tuple); 110 + if (found) 111 + return found; 112 + b = conn->jiffies32; 113 + a = (u32)jiffies; 114 + 115 + /* conn might have been added just before by another cpu and 116 + * might still be unconfirmed. In this case, nf_conntrack_find() 117 + * returns no result. Thus only evict if this cpu added the 118 + * stale entry or if the entry is older than two jiffies. 119 + */ 120 + age = a - b; 121 + if (conn->cpu == cpu || age >= 2) { 122 + hlist_del(&conn->node); 123 + kmem_cache_free(conncount_conn_cachep, conn); 124 + return ERR_PTR(-ENOENT); 125 + } 126 + 127 + return ERR_PTR(-EAGAIN); 128 + } 100 129 101 130 unsigned int nf_conncount_lookup(struct net *net, struct hlist_head *head, 102 131 const struct nf_conntrack_tuple *tuple, ··· 136 103 { 137 104 const struct nf_conntrack_tuple_hash *found; 138 105 struct nf_conncount_tuple *conn; 139 - struct hlist_node *n; 140 106 struct nf_conn *found_ct; 107 + struct hlist_node *n; 141 108 unsigned int length = 0; 142 109 143 110 *addit = tuple ? 
true : false; 144 111 145 112 /* check the saved connections */ 146 113 hlist_for_each_entry_safe(conn, n, head, node) { 147 - found = nf_conntrack_find_get(net, &conn->zone, &conn->tuple); 148 - if (found == NULL) { 149 - hlist_del(&conn->node); 150 - kmem_cache_free(conncount_conn_cachep, conn); 114 + found = find_or_evict(net, conn); 115 + if (IS_ERR(found)) { 116 + /* Not found, but might be about to be confirmed */ 117 + if (PTR_ERR(found) == -EAGAIN) { 118 + length++; 119 + if (!tuple) 120 + continue; 121 + 122 + if (nf_ct_tuple_equal(&conn->tuple, tuple) && 123 + nf_ct_zone_id(&conn->zone, conn->zone.dir) == 124 + nf_ct_zone_id(zone, zone->dir)) 125 + *addit = false; 126 + } 151 127 continue; 152 128 } 153 129
+5
net/netfilter/nf_conntrack_helper.c
··· 465 465 466 466 nf_ct_expect_iterate_destroy(expect_iter_me, NULL); 467 467 nf_ct_iterate_destroy(unhelp, me); 468 + 469 + /* Someone may have grabbed the helper while unhelp above was running, 470 + * so we need to wait for them to finish. 471 + */ 472 + synchronize_rcu(); 468 473 } 469 474 EXPORT_SYMBOL_GPL(nf_conntrack_helper_unregister); 470 475
··· 465 465 466 466 nf_ct_expect_iterate_destroy(expect_iter_me, NULL); 467 467 nf_ct_iterate_destroy(unhelp, me); 468 + 469 + /* Someone may have grabbed the helper while unhelp above was running, 470 + * so we need to wait for them to finish. 471 + */ 472 + synchronize_rcu(); 468 473 } 469 474 EXPORT_SYMBOL_GPL(nf_conntrack_helper_unregister); 470 475
+10 -3
net/netfilter/nf_log.c
··· 424 424 if (write) { 425 425 struct ctl_table tmp = *table; 426 426 427 + /* proc_dostring() can append to existing strings, so we need to 428 + * initialize it as an empty string. 429 + */ 430 + buf[0] = '\0'; 427 431 tmp.data = buf; 428 432 r = proc_dostring(&tmp, write, buffer, lenp, ppos); 429 433 if (r) ··· 446 442 rcu_assign_pointer(net->nf.nf_loggers[tindex], logger); 447 443 mutex_unlock(&nf_log_mutex); 448 444 } else { 445 + struct ctl_table tmp = *table; 446 + 447 + tmp.data = buf; 449 448 mutex_lock(&nf_log_mutex); 450 449 logger = nft_log_dereference(net->nf.nf_loggers[tindex]); 451 450 if (!logger) 452 - table->data = "NONE"; 451 + strlcpy(buf, "NONE", sizeof(buf)); 453 452 else 454 - table->data = logger->name; 455 - r = proc_dostring(table, write, buffer, lenp, ppos); 453 + strlcpy(buf, logger->name, sizeof(buf)); 456 454 mutex_unlock(&nf_log_mutex); 455 + r = proc_dostring(&tmp, write, buffer, lenp, ppos); 457 456 } 458 457 459 458 return r;
+10 -1
net/rds/connection.c
··· 659 659 660 660 int rds_conn_init(void) 661 661 { 662 + int ret; 663 + 664 + ret = rds_loop_net_init(); /* register pernet callback */ 665 + if (ret) 666 + return ret; 667 + 662 668 rds_conn_slab = kmem_cache_create("rds_connection", 663 669 sizeof(struct rds_connection), 664 670 0, 0, NULL); 665 - if (!rds_conn_slab) 671 + if (!rds_conn_slab) { 672 + rds_loop_net_exit(); 666 673 return -ENOMEM; 674 + } 667 675 668 676 rds_info_register_func(RDS_INFO_CONNECTIONS, rds_conn_info); 669 677 rds_info_register_func(RDS_INFO_SEND_MESSAGES, ··· 684 676 685 677 void rds_conn_exit(void) 686 678 { 679 + rds_loop_net_exit(); /* unregister pernet callback */ 687 680 rds_loop_exit(); 688 681 689 682 WARN_ON(!hlist_empty(rds_conn_hash));
+56
net/rds/loop.c
··· 33 33 #include <linux/kernel.h> 34 34 #include <linux/slab.h> 35 35 #include <linux/in.h> 36 + #include <net/net_namespace.h> 37 + #include <net/netns/generic.h> 36 38 37 39 #include "rds_single_path.h" 38 40 #include "rds.h" ··· 42 40 43 41 static DEFINE_SPINLOCK(loop_conns_lock); 44 42 static LIST_HEAD(loop_conns); 43 + static atomic_t rds_loop_unloading = ATOMIC_INIT(0); 44 + 45 + static void rds_loop_set_unloading(void) 46 + { 47 + atomic_set(&rds_loop_unloading, 1); 48 + } 49 + 50 + static bool rds_loop_is_unloading(struct rds_connection *conn) 51 + { 52 + return atomic_read(&rds_loop_unloading) != 0; 53 + } 45 54 46 55 /* 47 56 * This 'loopback' transport is a special case for flows that originate ··· 178 165 struct rds_loop_connection *lc, *_lc; 179 166 LIST_HEAD(tmp_list); 180 167 168 + rds_loop_set_unloading(); 169 + synchronize_rcu(); 181 170 /* avoid calling conn_destroy with irqs off */ 182 171 spin_lock_irq(&loop_conns_lock); 183 172 list_splice(&loop_conns, &tmp_list); ··· 190 175 WARN_ON(lc->conn->c_passive); 191 176 rds_conn_destroy(lc->conn); 192 177 } 178 + } 179 + 180 + static void rds_loop_kill_conns(struct net *net) 181 + { 182 + struct rds_loop_connection *lc, *_lc; 183 + LIST_HEAD(tmp_list); 184 + 185 + spin_lock_irq(&loop_conns_lock); 186 + list_for_each_entry_safe(lc, _lc, &loop_conns, loop_node) { 187 + struct net *c_net = read_pnet(&lc->conn->c_net); 188 + 189 + if (net != c_net) 190 + continue; 191 + list_move_tail(&lc->loop_node, &tmp_list); 192 + } 193 + spin_unlock_irq(&loop_conns_lock); 194 + 195 + list_for_each_entry_safe(lc, _lc, &tmp_list, loop_node) { 196 + WARN_ON(lc->conn->c_passive); 197 + rds_conn_destroy(lc->conn); 198 + } 199 + } 200 + 201 + static void __net_exit rds_loop_exit_net(struct net *net) 202 + { 203 + rds_loop_kill_conns(net); 204 + } 205 + 206 + static struct pernet_operations rds_loop_net_ops = { 207 + .exit = rds_loop_exit_net, 208 + }; 209 + 210 + int rds_loop_net_init(void) 211 + { 212 + return 
register_pernet_device(&rds_loop_net_ops); 213 + } 214 + 215 + void rds_loop_net_exit(void) 216 + { 217 + unregister_pernet_device(&rds_loop_net_ops); 193 218 } 194 219 195 220 /* ··· 249 194 .inc_free = rds_loop_inc_free, 250 195 .t_name = "loopback", 251 196 .t_type = RDS_TRANS_LOOP, 197 + .t_unloading = rds_loop_is_unloading, 252 198 };
+2
net/rds/loop.h
··· 5 5 /* loop.c */ 6 6 extern struct rds_transport rds_loop_transport; 7 7 8 + int rds_loop_net_init(void); 9 + void rds_loop_net_exit(void); 8 10 void rds_loop_exit(void); 9 11 10 12 #endif
+64 -33
net/smc/af_smc.c
··· 45 45 */ 46 46 47 47 static void smc_tcp_listen_work(struct work_struct *); 48 + static void smc_connect_work(struct work_struct *); 48 49 49 50 static void smc_set_keepalive(struct sock *sk, int val) 50 51 { ··· 123 122 goto out; 124 123 125 124 smc = smc_sk(sk); 125 + 126 + /* cleanup for a dangling non-blocking connect */ 127 + flush_work(&smc->connect_work); 128 + kfree(smc->connect_info); 129 + smc->connect_info = NULL; 130 + 126 131 if (sk->sk_state == SMC_LISTEN) 127 132 /* smc_close_non_accepted() is called and acquires 128 133 * sock lock for child sockets again ··· 193 186 sk->sk_protocol = protocol; 194 187 smc = smc_sk(sk); 195 188 INIT_WORK(&smc->tcp_listen_work, smc_tcp_listen_work); 189 + INIT_WORK(&smc->connect_work, smc_connect_work); 196 190 INIT_DELAYED_WORK(&smc->conn.tx_work, smc_tx_work); 197 191 INIT_LIST_HEAD(&smc->accept_q); 198 192 spin_lock_init(&smc->accept_q_lock); ··· 584 576 return 0; 585 577 } 586 578 579 + static void smc_connect_work(struct work_struct *work) 580 + { 581 + struct smc_sock *smc = container_of(work, struct smc_sock, 582 + connect_work); 583 + int rc; 584 + 585 + lock_sock(&smc->sk); 586 + rc = kernel_connect(smc->clcsock, &smc->connect_info->addr, 587 + smc->connect_info->alen, smc->connect_info->flags); 588 + if (smc->clcsock->sk->sk_err) { 589 + smc->sk.sk_err = smc->clcsock->sk->sk_err; 590 + goto out; 591 + } 592 + if (rc < 0) { 593 + smc->sk.sk_err = -rc; 594 + goto out; 595 + } 596 + 597 + rc = __smc_connect(smc); 598 + if (rc < 0) 599 + smc->sk.sk_err = -rc; 600 + 601 + out: 602 + smc->sk.sk_state_change(&smc->sk); 603 + kfree(smc->connect_info); 604 + smc->connect_info = NULL; 605 + release_sock(&smc->sk); 606 + } 607 + 587 608 static int smc_connect(struct socket *sock, struct sockaddr *addr, 588 609 int alen, int flags) 589 610 { ··· 642 605 643 606 smc_copy_sock_settings_to_clc(smc); 644 607 tcp_sk(smc->clcsock->sk)->syn_smc = 1; 645 - rc = kernel_connect(smc->clcsock, addr, alen, flags); 646 - if (rc) 
647 - goto out; 608 + if (flags & O_NONBLOCK) { 609 + if (smc->connect_info) { 610 + rc = -EALREADY; 611 + goto out; 612 + } 613 + smc->connect_info = kzalloc(alen + 2 * sizeof(int), GFP_KERNEL); 614 + if (!smc->connect_info) { 615 + rc = -ENOMEM; 616 + goto out; 617 + } 618 + smc->connect_info->alen = alen; 619 + smc->connect_info->flags = flags ^ O_NONBLOCK; 620 + memcpy(&smc->connect_info->addr, addr, alen); 621 + schedule_work(&smc->connect_work); 622 + rc = -EINPROGRESS; 623 + } else { 624 + rc = kernel_connect(smc->clcsock, addr, alen, flags); 625 + if (rc) 626 + goto out; 648 627 649 - rc = __smc_connect(smc); 650 - if (rc < 0) 651 - goto out; 652 - else 653 - rc = 0; /* success cases including fallback */ 628 + rc = __smc_connect(smc); 629 + if (rc < 0) 630 + goto out; 631 + else 632 + rc = 0; /* success cases including fallback */ 633 + } 654 634 655 635 out: 656 636 release_sock(sk); ··· 1333 1279 struct sock *sk = sock->sk; 1334 1280 __poll_t mask = 0; 1335 1281 struct smc_sock *smc; 1336 - int rc; 1337 1282 1338 1283 if (!sk) 1339 1284 return EPOLLNVAL; 1340 1285 1341 1286 smc = smc_sk(sock->sk); 1342 - sock_hold(sk); 1343 - lock_sock(sk); 1344 1287 if ((sk->sk_state == SMC_INIT) || smc->use_fallback) { 1345 1288 /* delegate to CLC child sock */ 1346 - release_sock(sk); 1347 1289 mask = smc->clcsock->ops->poll(file, smc->clcsock, wait); 1348 - lock_sock(sk); 1349 1290 sk->sk_err = smc->clcsock->sk->sk_err; 1350 - if (sk->sk_err) { 1291 + if (sk->sk_err) 1351 1292 mask |= EPOLLERR; 1352 - } else { 1353 - /* if non-blocking connect finished ... 
*/ 1354 - if (sk->sk_state == SMC_INIT && 1355 - mask & EPOLLOUT && 1356 - smc->clcsock->sk->sk_state != TCP_CLOSE) { 1357 - rc = __smc_connect(smc); 1358 - if (rc < 0) 1359 - mask |= EPOLLERR; 1360 - /* success cases including fallback */ 1361 - mask |= EPOLLOUT | EPOLLWRNORM; 1362 - } 1363 - } 1364 1293 } else { 1365 - if (sk->sk_state != SMC_CLOSED) { 1366 - release_sock(sk); 1294 + if (sk->sk_state != SMC_CLOSED) 1367 1295 sock_poll_wait(file, sk_sleep(sk), wait); 1368 - lock_sock(sk); 1369 - } 1370 1296 if (sk->sk_err) 1371 1297 mask |= EPOLLERR; 1372 1298 if ((sk->sk_shutdown == SHUTDOWN_MASK) || ··· 1372 1338 } 1373 1339 if (smc->conn.urg_state == SMC_URG_VALID) 1374 1340 mask |= EPOLLPRI; 1375 - 1376 1341 } 1377 - release_sock(sk); 1378 - sock_put(sk); 1379 1342 1380 1343 return mask; 1381 1344 }
+8
net/smc/smc.h
··· 187 187 struct work_struct close_work; /* peer sent some closing */ 188 188 }; 189 189 190 + struct smc_connect_info { 191 + int flags; 192 + int alen; 193 + struct sockaddr addr; 194 + }; 195 + 190 196 struct smc_sock { /* smc sock container */ 191 197 struct sock sk; 192 198 struct socket *clcsock; /* internal tcp socket */ 193 199 struct smc_connection conn; /* smc connection */ 194 200 struct smc_sock *listen_smc; /* listen parent */ 201 + struct smc_connect_info *connect_info; /* connect address & flags */ 202 + struct work_struct connect_work; /* handle non-blocking connect*/ 195 203 struct work_struct tcp_listen_work;/* handle tcp socket accepts */ 196 204 struct work_struct smc_listen_work;/* prepare new accept socket */ 197 205 struct list_head accept_q; /* sockets to be accepted */
+1 -16
net/strparser/strparser.c
··· 35 35 */ 36 36 struct strp_msg strp; 37 37 int accum_len; 38 - int early_eaten; 39 38 }; 40 39 41 40 static inline struct _strp_msg *_strp_msg(struct sk_buff *skb) ··· 114 115 head = strp->skb_head; 115 116 if (head) { 116 117 /* Message already in progress */ 117 - 118 - stm = _strp_msg(head); 119 - if (unlikely(stm->early_eaten)) { 120 - /* Already some number of bytes on the receive sock 121 - * data saved in skb_head, just indicate they 122 - * are consumed. 123 - */ 124 - eaten = orig_len <= stm->early_eaten ? 125 - orig_len : stm->early_eaten; 126 - stm->early_eaten -= eaten; 127 - 128 - return eaten; 129 - } 130 - 131 118 if (unlikely(orig_offset)) { 132 119 /* Getting data with a non-zero offset when a message is 133 120 * in progress is not expected. If it does happen, we ··· 282 297 } 283 298 284 299 stm->accum_len += cand_len; 300 + eaten += cand_len; 285 301 strp->need_bytes = stm->strp.full_len - 286 302 stm->accum_len; 287 - stm->early_eaten = cand_len; 288 303 STRP_STATS_ADD(strp->stats.bytes, cand_len); 289 304 desc->count = 0; /* Stop reading socket */ 290 305 break;
+14 -21
net/wireless/nl80211.c
··· 6231 6231 nl80211_check_s32); 6232 6232 /* 6233 6233 * Check HT operation mode based on 6234 - * IEEE 802.11 2012 8.4.2.59 HT Operation element. 6234 + * IEEE 802.11-2016 9.4.2.57 HT Operation element. 6235 6235 */ 6236 6236 if (tb[NL80211_MESHCONF_HT_OPMODE]) { 6237 6237 ht_opmode = nla_get_u16(tb[NL80211_MESHCONF_HT_OPMODE]); ··· 6241 6241 IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT)) 6242 6242 return -EINVAL; 6243 6243 6244 - if ((ht_opmode & IEEE80211_HT_OP_MODE_NON_GF_STA_PRSNT) && 6245 - (ht_opmode & IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT)) 6246 - return -EINVAL; 6244 + /* NON_HT_STA bit is reserved, but some programs set it */ 6245 + ht_opmode &= ~IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT; 6247 6246 6248 - switch (ht_opmode & IEEE80211_HT_OP_MODE_PROTECTION) { 6249 - case IEEE80211_HT_OP_MODE_PROTECTION_NONE: 6250 - case IEEE80211_HT_OP_MODE_PROTECTION_20MHZ: 6251 - if (ht_opmode & IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT) 6252 - return -EINVAL; 6253 - break; 6254 - case IEEE80211_HT_OP_MODE_PROTECTION_NONMEMBER: 6255 - case IEEE80211_HT_OP_MODE_PROTECTION_NONHT_MIXED: 6256 - if (!(ht_opmode & IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT)) 6257 - return -EINVAL; 6258 - break; 6259 - } 6260 6247 cfg->ht_opmode = ht_opmode; 6261 6248 mask |= (1 << (NL80211_MESHCONF_HT_OPMODE - 1)); 6262 6249 } ··· 10949 10962 rem) { 10950 10963 u8 *mask_pat; 10951 10964 10952 - nla_parse_nested(pat_tb, MAX_NL80211_PKTPAT, pat, 10953 - nl80211_packet_pattern_policy, 10954 - info->extack); 10965 + err = nla_parse_nested(pat_tb, MAX_NL80211_PKTPAT, pat, 10966 + nl80211_packet_pattern_policy, 10967 + info->extack); 10968 + if (err) 10969 + goto error; 10970 + 10955 10971 err = -EINVAL; 10956 10972 if (!pat_tb[NL80211_PKTPAT_MASK] || 10957 10973 !pat_tb[NL80211_PKTPAT_PATTERN]) ··· 11203 11213 rem) { 11204 11214 u8 *mask_pat; 11205 11215 11206 - nla_parse_nested(pat_tb, MAX_NL80211_PKTPAT, pat, 11207 - nl80211_packet_pattern_policy, NULL); 11216 + err = nla_parse_nested(pat_tb, MAX_NL80211_PKTPAT, 
pat, 11217 + nl80211_packet_pattern_policy, NULL); 11218 + if (err) 11219 + return err; 11220 + 11208 11221 if (!pat_tb[NL80211_PKTPAT_MASK] || 11209 11222 !pat_tb[NL80211_PKTPAT_PATTERN]) 11210 11223 return -EINVAL;
+4 -4
samples/bpf/xdp_fwd_kern.c
··· 48 48 struct ethhdr *eth = data; 49 49 struct ipv6hdr *ip6h; 50 50 struct iphdr *iph; 51 - int out_index; 52 51 u16 h_proto; 53 52 u64 nh_off; 53 + int rc; 54 54 55 55 nh_off = sizeof(*eth); 56 56 if (data + nh_off > data_end) ··· 101 101 102 102 fib_params.ifindex = ctx->ingress_ifindex; 103 103 104 - out_index = bpf_fib_lookup(ctx, &fib_params, sizeof(fib_params), flags); 104 + rc = bpf_fib_lookup(ctx, &fib_params, sizeof(fib_params), flags); 105 105 106 106 /* verify egress index has xdp support 107 107 * TO-DO bpf_map_lookup_elem(&tx_port, &key) fails with ··· 109 109 * NOTE: without verification that egress index supports XDP 110 110 * forwarding packets are dropped. 111 111 */ 112 - if (out_index > 0) { 112 + if (rc == 0) { 113 113 if (h_proto == htons(ETH_P_IP)) 114 114 ip_decrease_ttl(iph); 115 115 else if (h_proto == htons(ETH_P_IPV6)) ··· 117 117 118 118 memcpy(eth->h_dest, fib_params.dmac, ETH_ALEN); 119 119 memcpy(eth->h_source, fib_params.smac, ETH_ALEN); 120 - return bpf_redirect_map(&tx_port, out_index, 0); 120 + return bpf_redirect_map(&tx_port, fib_params.ifindex, 0); 121 121 } 122 122 123 123 return XDP_PASS;
+12 -13
samples/vfio-mdev/mbochs.c
··· 178 178 return "(invalid)"; 179 179 } 180 180 181 + static struct page *__mbochs_get_page(struct mdev_state *mdev_state, 182 + pgoff_t pgoff); 181 183 static struct page *mbochs_get_page(struct mdev_state *mdev_state, 182 184 pgoff_t pgoff); 183 185 ··· 396 394 MBOCHS_MEMORY_BAR_OFFSET + mdev_state->memsize) { 397 395 pos -= MBOCHS_MMIO_BAR_OFFSET; 398 396 poff = pos & ~PAGE_MASK; 399 - pg = mbochs_get_page(mdev_state, pos >> PAGE_SHIFT); 397 + pg = __mbochs_get_page(mdev_state, pos >> PAGE_SHIFT); 400 398 map = kmap(pg); 401 399 if (is_write) 402 400 memcpy(map + poff, buf, count); ··· 659 657 dev_dbg(dev, "%s: %d pages released\n", __func__, count); 660 658 } 661 659 662 - static int mbochs_region_vm_fault(struct vm_fault *vmf) 660 + static vm_fault_t mbochs_region_vm_fault(struct vm_fault *vmf) 663 661 { 664 662 struct vm_area_struct *vma = vmf->vma; 665 663 struct mdev_state *mdev_state = vma->vm_private_data; ··· 697 695 return 0; 698 696 } 699 697 700 - static int mbochs_dmabuf_vm_fault(struct vm_fault *vmf) 698 + static vm_fault_t mbochs_dmabuf_vm_fault(struct vm_fault *vmf) 701 699 { 702 700 struct vm_area_struct *vma = vmf->vma; 703 701 struct mbochs_dmabuf *dmabuf = vma->vm_private_data; ··· 805 803 mutex_unlock(&mdev_state->ops_lock); 806 804 } 807 805 808 - static void *mbochs_kmap_atomic_dmabuf(struct dma_buf *buf, 809 - unsigned long page_num) 810 - { 811 - struct mbochs_dmabuf *dmabuf = buf->priv; 812 - struct page *page = dmabuf->pages[page_num]; 813 - 814 - return kmap_atomic(page); 815 - } 816 - 817 806 static void *mbochs_kmap_dmabuf(struct dma_buf *buf, unsigned long page_num) 818 807 { 819 808 struct mbochs_dmabuf *dmabuf = buf->priv; ··· 813 820 return kmap(page); 814 821 } 815 822 823 + static void mbochs_kunmap_dmabuf(struct dma_buf *buf, unsigned long page_num, 824 + void *vaddr) 825 + { 826 + kunmap(vaddr); 827 + } 828 + 816 829 static struct dma_buf_ops mbochs_dmabuf_ops = { 817 830 .map_dma_buf = mbochs_map_dmabuf, 818 831 
.unmap_dma_buf = mbochs_unmap_dmabuf, 819 832 .release = mbochs_release_dmabuf, 820 - .map_atomic = mbochs_kmap_atomic_dmabuf, 821 833 .map = mbochs_kmap_dmabuf, 834 + .unmap = mbochs_kunmap_dmabuf, 822 835 .mmap = mbochs_mmap_dmabuf, 823 836 }; 824 837
+1 -1
scripts/Kbuild.include
··· 214 214 # Prefix -I with $(srctree) if it is not an absolute path. 215 215 # skip if -I has no parameter 216 216 addtree = $(if $(patsubst -I%,%,$(1)), \ 217 - $(if $(filter-out -I/% -I./% -I../%,$(1)),$(patsubst -I%,-I$(srctree)/%,$(1)),$(1))) 217 + $(if $(filter-out -I/% -I./% -I../%,$(1)),$(patsubst -I%,-I$(srctree)/%,$(1)),$(1)),$(1)) 218 218 219 219 # Find all -I options and call addtree 220 220 flags = $(foreach o,$($(1)),$(if $(filter -I%,$(o)),$(call addtree,$(o)),$(o)))
-3
scripts/Makefile.build
··· 590 590 # We never want them to be removed automatically. 591 591 .SECONDARY: $(targets) 592 592 593 - # Declare the contents of the .PHONY variable as phony. We keep that 594 - # information in a variable se we can use it in if_changed and friends. 595 - 596 593 .PHONY: $(PHONY)
-3
scripts/Makefile.clean
··· 88 88 $(subdir-ymn): 89 89 $(Q)$(MAKE) $(clean)=$@ 90 90 91 - # Declare the contents of the .PHONY variable as phony. We keep that 92 - # information in a variable se we can use it in if_changed and friends. 93 - 94 91 .PHONY: $(PHONY)
-4
scripts/Makefile.modbuiltin
··· 54 54 $(subdir-ym): 55 55 $(Q)$(MAKE) $(modbuiltin)=$@ 56 56 57 - 58 - # Declare the contents of the .PHONY variable as phony. We keep that 59 - # information in a variable se we can use it in if_changed and friends. 60 - 61 57 .PHONY: $(PHONY)
-4
scripts/Makefile.modinst
··· 35 35 $(modules): 36 36 $(call cmd,modules_install,$(MODLIB)/$(modinst_dir)) 37 37 38 - 39 - # Declare the contents of the .PHONY variable as phony. We keep that 40 - # information in a variable so we can use it in if_changed and friends. 41 - 42 38 .PHONY: $(PHONY)
-4
scripts/Makefile.modpost
··· 149 149 include $(cmd_files) 150 150 endif 151 151 152 - 153 - # Declare the contents of the .PHONY variable as phony. We keep that 154 - # information in a variable se we can use it in if_changed and friends. 155 - 156 152 .PHONY: $(PHONY)
-3
scripts/Makefile.modsign
··· 27 27 $(modules): 28 28 $(call cmd,sign_ko,$(MODLIB)/$(modinst_dir)) 29 29 30 - # Declare the contents of the .PHONY variable as phony. We keep that 31 - # information in a variable se we can use it in if_changed and friends. 32 - 33 30 .PHONY: $(PHONY)
+1 -1
scripts/cc-can-link.sh
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 - cat << "END" | $@ -x c - -o /dev/null >/dev/null 2>&1 && echo "y" 4 + cat << "END" | $@ -x c - -o /dev/null >/dev/null 2>&1 5 5 #include <stdio.h> 6 6 int main(void) 7 7 {
+3 -3
scripts/checkpatch.pl
··· 5813 5813 defined $stat && 5814 5814 $stat =~ /^\+(?![^\{]*\{\s*).*\b(\w+)\s*\(.*$String\s*,/s && 5815 5815 $1 !~ /^_*volatile_*$/) { 5816 - my $specifier; 5817 - my $extension; 5818 - my $bad_specifier = ""; 5819 5816 my $stat_real; 5820 5817 5821 5818 my $lc = $stat =~ tr@\n@@; 5822 5819 $lc = $lc + $linenr; 5823 5820 for (my $count = $linenr; $count <= $lc; $count++) { 5821 + my $specifier; 5822 + my $extension; 5823 + my $bad_specifier = ""; 5824 5824 my $fmt = get_quoted_string($lines[$count - 1], raw_line($count, 0)); 5825 5825 $fmt =~ s/%%//g; 5826 5826
+2
scripts/extract-vmlinux
··· 57 57 try_decompress 'BZh' xy bunzip2 58 58 try_decompress '\135\0\0\0' xxx unlzma 59 59 try_decompress '\211\114\132' xy 'lzop -d' 60 + try_decompress '\002!L\030' xxx 'lz4 -d' 61 + try_decompress '(\265/\375' xxx unzstd 60 62 61 63 # Bail out: 62 64 echo "$me: Cannot find vmlinux." >&2
+1 -1
scripts/tags.sh
··· 245 245 { 246 246 setup_regex exuberant asm c 247 247 all_target_sources | xargs $1 -a \ 248 - -I __initdata,__exitdata,__initconst, \ 248 + -I __initdata,__exitdata,__initconst,__ro_after_init \ 249 249 -I __initdata_memblock \ 250 250 -I __refdata,__attribute,__maybe_unused,__always_unused \ 251 251 -I __acquires,__releases,__deprecated \
+2 -1
sound/pci/hda/patch_ca0132.c
··· 1048 1048 SND_PCI_QUIRK(0x1102, 0x0010, "Sound Blaster Z", QUIRK_SBZ), 1049 1049 SND_PCI_QUIRK(0x1102, 0x0023, "Sound Blaster Z", QUIRK_SBZ), 1050 1050 SND_PCI_QUIRK(0x1458, 0xA016, "Recon3Di", QUIRK_R3DI), 1051 - SND_PCI_QUIRK(0x1458, 0xA036, "Recon3Di", QUIRK_R3DI), 1051 + SND_PCI_QUIRK(0x1458, 0xA026, "Gigabyte G1.Sniper Z97", QUIRK_R3DI), 1052 + SND_PCI_QUIRK(0x1458, 0xA036, "Gigabyte GA-Z170X-Gaming 7", QUIRK_R3DI), 1052 1053 {} 1053 1054 }; 1054 1055
+14 -5
sound/pci/hda/patch_hdmi.c
··· 33 33 #include <linux/delay.h> 34 34 #include <linux/slab.h> 35 35 #include <linux/module.h> 36 + #include <linux/pm_runtime.h> 36 37 #include <sound/core.h> 37 38 #include <sound/jack.h> 38 39 #include <sound/asoundef.h> ··· 765 764 766 765 if (pin_idx < 0) 767 766 return; 767 + mutex_lock(&spec->pcm_lock); 768 768 if (hdmi_present_sense(get_pin(spec, pin_idx), 1)) 769 769 snd_hda_jack_report_sync(codec); 770 + mutex_unlock(&spec->pcm_lock); 770 771 } 771 772 772 773 static void jack_callback(struct hda_codec *codec, ··· 1631 1628 static bool hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, int repoll) 1632 1629 { 1633 1630 struct hda_codec *codec = per_pin->codec; 1634 - struct hdmi_spec *spec = codec->spec; 1635 1631 int ret; 1636 1632 1637 1633 /* no temporary power up/down needed for component notifier */ 1638 - if (!codec_has_acomp(codec)) 1639 - snd_hda_power_up_pm(codec); 1634 + if (!codec_has_acomp(codec)) { 1635 + ret = snd_hda_power_up_pm(codec); 1636 + if (ret < 0 && pm_runtime_suspended(hda_codec_dev(codec))) { 1637 + snd_hda_power_down_pm(codec); 1638 + return false; 1639 + } 1640 + } 1640 1641 1641 - mutex_lock(&spec->pcm_lock); 1642 1642 if (codec_has_acomp(codec)) { 1643 1643 sync_eld_via_acomp(codec, per_pin); 1644 1644 ret = false; /* don't call snd_hda_jack_report_sync() */ 1645 1645 } else { 1646 1646 ret = hdmi_present_sense_via_verbs(per_pin, repoll); 1647 1647 } 1648 - mutex_unlock(&spec->pcm_lock); 1649 1648 1650 1649 if (!codec_has_acomp(codec)) 1651 1650 snd_hda_power_down_pm(codec); ··· 1659 1654 { 1660 1655 struct hdmi_spec_per_pin *per_pin = 1661 1656 container_of(to_delayed_work(work), struct hdmi_spec_per_pin, work); 1657 + struct hda_codec *codec = per_pin->codec; 1658 + struct hdmi_spec *spec = codec->spec; 1662 1659 1663 1660 if (per_pin->repoll_count++ > 6) 1664 1661 per_pin->repoll_count = 0; 1665 1662 1663 + mutex_lock(&spec->pcm_lock); 1666 1664 if (hdmi_present_sense(per_pin, per_pin->repoll_count)) 1667 1665 
snd_hda_jack_report_sync(per_pin->codec); 1666 + mutex_unlock(&spec->pcm_lock); 1668 1667 } 1669 1668 1670 1669 static void intel_haswell_fixup_connect_list(struct hda_codec *codec,
+5 -1
sound/pci/hda/patch_realtek.c
··· 6612 6612 SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6613 6613 SND_PCI_QUIRK(0x17aa, 0x312a, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6614 6614 SND_PCI_QUIRK(0x17aa, 0x312f, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6615 - SND_PCI_QUIRK(0x17aa, 0x3136, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6616 6615 SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 6617 6616 SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), 6618 6617 SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC), ··· 6794 6795 {0x19, 0x02a11030}, 6795 6796 {0x1a, 0x02a11040}, 6796 6797 {0x1b, 0x01014020}, 6798 + {0x21, 0x0221101f}), 6799 + SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION, 6800 + {0x14, 0x90170110}, 6801 + {0x19, 0x02a11020}, 6802 + {0x1a, 0x02a11030}, 6797 6803 {0x21, 0x0221101f}), 6798 6804 SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 6799 6805 {0x12, 0x90a60140},
+8 -4
tools/bpf/bpftool/prog.c
··· 694 694 return -1; 695 695 } 696 696 697 - if (do_pin_fd(prog_fd, argv[1])) { 698 - p_err("failed to pin program"); 699 - return -1; 700 - } 697 + if (do_pin_fd(prog_fd, argv[1])) 698 + goto err_close_obj; 701 699 702 700 if (json_output) 703 701 jsonw_null(json_wtr); 704 702 703 + bpf_object__close(obj); 704 + 705 705 return 0; 706 + 707 + err_close_obj: 708 + bpf_object__close(obj); 709 + return -1; 706 710 } 707 711 708 712 static int do_help(int argc, char **argv)
+3 -3
tools/build/Build.include
··· 63 63 $(fixdep) $(depfile) $@ '$(make-cmd)' > $(dot-target).tmp; \ 64 64 rm -f $(depfile); \ 65 65 mv -f $(dot-target).tmp $(dot-target).cmd, \ 66 - printf '\# cannot find fixdep (%s)\n' $(fixdep) > $(dot-target).cmd; \ 67 - printf '\# using basic dep data\n\n' >> $(dot-target).cmd; \ 66 + printf '$(pound) cannot find fixdep (%s)\n' $(fixdep) > $(dot-target).cmd; \ 67 + printf '$(pound) using basic dep data\n\n' >> $(dot-target).cmd; \ 68 68 cat $(depfile) >> $(dot-target).cmd; \ 69 69 printf '\n%s\n' 'cmd_$@ := $(make-cmd)' >> $(dot-target).cmd) 70 70 ··· 98 98 ### 99 99 ## HOSTCC C flags 100 100 101 - host_c_flags = -Wp,-MD,$(depfile) -Wp,-MT,$@ $(CHOSTFLAGS) -D"BUILD_STR(s)=\#s" $(CHOSTFLAGS_$(basetarget).o) $(CHOSTFLAGS_$(obj)) 101 + host_c_flags = -Wp,-MD,$(depfile) -Wp,-MT,$@ $(HOSTCFLAGS) -D"BUILD_STR(s)=\#s" $(HOSTCFLAGS_$(basetarget).o) $(HOSTCFLAGS_$(obj))
+1 -1
tools/build/Makefile
··· 43 43 $(Q)$(MAKE) $(build)=fixdep 44 44 45 45 $(OUTPUT)fixdep: $(OUTPUT)fixdep-in.o 46 - $(QUIET_LINK)$(HOSTCC) $(LDFLAGS) -o $@ $< 46 + $(QUIET_LINK)$(HOSTCC) $(HOSTLDFLAGS) -o $@ $< 47 47 48 48 FORCE: 49 49
+26 -11
tools/objtool/elf.c
··· 302 302 continue; 303 303 sym->pfunc = sym->cfunc = sym; 304 304 coldstr = strstr(sym->name, ".cold."); 305 - if (coldstr) { 306 - coldstr[0] = '\0'; 307 - pfunc = find_symbol_by_name(elf, sym->name); 308 - coldstr[0] = '.'; 305 + if (!coldstr) 306 + continue; 309 307 310 - if (!pfunc) { 311 - WARN("%s(): can't find parent function", 312 - sym->name); 313 - goto err; 314 - } 308 + coldstr[0] = '\0'; 309 + pfunc = find_symbol_by_name(elf, sym->name); 310 + coldstr[0] = '.'; 315 311 316 - sym->pfunc = pfunc; 317 - pfunc->cfunc = sym; 312 + if (!pfunc) { 313 + WARN("%s(): can't find parent function", 314 + sym->name); 315 + goto err; 316 + } 317 + 318 + sym->pfunc = pfunc; 319 + pfunc->cfunc = sym; 320 + 321 + /* 322 + * Unfortunately, -fnoreorder-functions puts the child 323 + * inside the parent. Remove the overlap so we can 324 + * have sane assumptions. 325 + * 326 + * Note that pfunc->len now no longer matches 327 + * pfunc->sym.st_size. 328 + */ 329 + if (sym->sec == pfunc->sec && 330 + sym->offset >= pfunc->offset && 331 + sym->offset + sym->len == pfunc->offset + pfunc->len) { 332 + pfunc->len -= sym->len; 318 333 } 319 334 } 320 335 }
+1 -2
tools/perf/Makefile.config
··· 207 207 PYTHON_EMBED_LDOPTS := $(shell $(PYTHON_CONFIG_SQ) --ldflags 2>/dev/null) 208 208 PYTHON_EMBED_LDFLAGS := $(call strip-libs,$(PYTHON_EMBED_LDOPTS)) 209 209 PYTHON_EMBED_LIBADD := $(call grep-libs,$(PYTHON_EMBED_LDOPTS)) -lutil 210 - PYTHON_EMBED_CCOPTS := $(shell $(PYTHON_CONFIG_SQ) --cflags 2>/dev/null) 211 - PYTHON_EMBED_CCOPTS := $(filter-out -specs=%,$(PYTHON_EMBED_CCOPTS)) 210 + PYTHON_EMBED_CCOPTS := $(shell $(PYTHON_CONFIG_SQ) --includes 2>/dev/null) 212 211 FLAGS_PYTHON_EMBED := $(PYTHON_EMBED_CCOPTS) $(PYTHON_EMBED_LDOPTS) 213 212 endif 214 213
+1 -1
tools/perf/arch/x86/util/perf_regs.c
··· 226 226 else if (rm[2].rm_so != rm[2].rm_eo) 227 227 prefix[0] = '+'; 228 228 else 229 - strncpy(prefix, "+0", 2); 229 + scnprintf(prefix, sizeof(prefix), "+0"); 230 230 } 231 231 232 232 /* Rename register */
+1 -1
tools/perf/builtin-stat.c
··· 1742 1742 } 1743 1743 } 1744 1744 1745 - if ((num_print_interval == 0 && metric_only) || interval_clear) 1745 + if ((num_print_interval == 0 || interval_clear) && metric_only) 1746 1746 print_metric_headers(" ", true); 1747 1747 if (++num_print_interval == 25) 1748 1748 num_print_interval = 0;
+2 -1
tools/perf/jvmti/jvmti_agent.c
··· 35 35 #include <sys/mman.h> 36 36 #include <syscall.h> /* for gettid() */ 37 37 #include <err.h> 38 + #include <linux/kernel.h> 38 39 39 40 #include "jvmti_agent.h" 40 41 #include "../util/jitdump.h" ··· 250 249 /* 251 250 * jitdump file name 252 251 */ 253 - snprintf(dump_path, PATH_MAX, "%s/jit-%i.dump", jit_path, getpid()); 252 + scnprintf(dump_path, PATH_MAX, "%s/jit-%i.dump", jit_path, getpid()); 254 253 255 254 fd = open(dump_path, O_CREAT|O_TRUNC|O_RDWR, 0666); 256 255 if (fd == -1)
+1 -1
tools/perf/pmu-events/Build
··· 1 1 hostprogs := jevents 2 2 3 3 jevents-y += json.o jsmn.o jevents.o 4 - CHOSTFLAGS_jevents.o = -I$(srctree)/tools/include 4 + HOSTCFLAGS_jevents.o = -I$(srctree)/tools/include 5 5 pmu-events-y += pmu-events.o 6 6 JDIR = pmu-events/arch/$(SRCARCH) 7 7 JSON = $(shell [ -d $(JDIR) ] && \
+16 -22
tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/Core.py
··· 31 31 string = "" 32 32 33 33 if flag_fields[event_name][field_name]: 34 - print_delim = 0 35 - keys = flag_fields[event_name][field_name]['values'].keys() 36 - keys.sort() 37 - for idx in keys: 34 + print_delim = 0 35 + for idx in sorted(flag_fields[event_name][field_name]['values']): 38 36 if not value and not idx: 39 37 string += flag_fields[event_name][field_name]['values'][idx] 40 38 break ··· 49 51 string = "" 50 52 51 53 if symbolic_fields[event_name][field_name]: 52 - keys = symbolic_fields[event_name][field_name]['values'].keys() 53 - keys.sort() 54 - for idx in keys: 54 + for idx in sorted(symbolic_fields[event_name][field_name]['values']): 55 55 if not value and not idx: 56 - string = symbolic_fields[event_name][field_name]['values'][idx] 56 + string = symbolic_fields[event_name][field_name]['values'][idx] 57 57 break 58 - if (value == idx): 59 - string = symbolic_fields[event_name][field_name]['values'][idx] 58 + if (value == idx): 59 + string = symbolic_fields[event_name][field_name]['values'][idx] 60 60 break 61 61 62 62 return string ··· 70 74 string = "" 71 75 print_delim = 0 72 76 73 - keys = trace_flags.keys() 77 + for idx in trace_flags: 78 + if not value and not idx: 79 + string += "NONE" 80 + break 74 81 75 - for idx in keys: 76 - if not value and not idx: 77 - string += "NONE" 78 - break 79 - 80 - if idx and (value & idx) == idx: 81 - if print_delim: 82 - string += " | "; 83 - string += trace_flags[idx] 84 - print_delim = 1 85 - value &= ~idx 82 + if idx and (value & idx) == idx: 83 + if print_delim: 84 + string += " | "; 85 + string += trace_flags[idx] 86 + print_delim = 1 87 + value &= ~idx 86 88 87 89 return string 88 90
+3 -1
tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/EventClass.py
··· 8 8 # PerfEvent is the base class for all perf event sample, PebsEvent 9 9 # is a HW base Intel x86 PEBS event, and user could add more SW/HW 10 10 # event classes based on requirements. 11 + from __future__ import print_function 11 12 12 13 import struct 13 14 ··· 45 44 PerfEvent.event_num += 1 46 45 47 46 def show(self): 48 - print "PMU event: name=%12s, symbol=%24s, comm=%8s, dso=%12s" % (self.name, self.symbol, self.comm, self.dso) 47 + print("PMU event: name=%12s, symbol=%24s, comm=%8s, dso=%12s" % 48 + (self.name, self.symbol, self.comm, self.dso)) 49 49 50 50 # 51 51 # Basic Intel PEBS (Precise Event-based Sampling) event, whose raw buffer
+1 -1
tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/SchedGui.py
··· 11 11 try: 12 12 import wx 13 13 except ImportError: 14 - raise ImportError, "You need to install the wxpython lib for this script" 14 + raise ImportError("You need to install the wxpython lib for this script") 15 15 16 16 17 17 class RootFrame(wx.Frame):
+6 -5
tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/Util.py
··· 5 5 # This software may be distributed under the terms of the GNU General 6 6 # Public License ("GPL") version 2 as published by the Free Software 7 7 # Foundation. 8 + from __future__ import print_function 8 9 9 10 import errno, os 10 11 ··· 34 33 return str 35 34 36 35 def add_stats(dict, key, value): 37 - if not dict.has_key(key): 36 + if key not in dict: 38 37 dict[key] = (value, value, value, 1) 39 38 else: 40 39 min, max, avg, count = dict[key] ··· 73 72 except: 74 73 if not audit_package_warned: 75 74 audit_package_warned = True 76 - print "Install the audit-libs-python package to get syscall names.\n" \ 77 - "For example:\n # apt-get install python-audit (Ubuntu)" \ 78 - "\n # yum install audit-libs-python (Fedora)" \ 79 - "\n etc.\n" 75 + print("Install the audit-libs-python package to get syscall names.\n" 76 + "For example:\n # apt-get install python-audit (Ubuntu)" 77 + "\n # yum install audit-libs-python (Fedora)" 78 + "\n etc.\n") 80 79 81 80 def syscall_name(id): 82 81 try:
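`dict.has_key()` was removed in Python 3; the `in` operator used by the new code works on both versions. A condensed, self-contained model of the converted `add_stats()` helper (the tuple tracks minimum, maximum, running sum, and count; names are renamed here to avoid shadowing builtins):

```python
def add_stats(d, key, value):
    # "key in d" replaces the Python 2-only d.has_key(key).
    if key not in d:
        d[key] = (value, value, value, 1)
    else:
        lo, hi, total, count = d[key]
        d[key] = (min(lo, value), max(hi, value), total + value, count + 1)

stats = {}
add_stats(stats, "lat", 5)
add_stats(stats, "lat", 3)
```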
+9 -5
tools/perf/scripts/python/sched-migration.py
··· 9 9 # This software is distributed under the terms of the GNU General 10 10 # Public License ("GPL") version 2 as published by the Free Software 11 11 # Foundation. 12 - 12 from __future__ import print_function 13 13 14 14 import os 15 15 import sys 16 16 17 17 from collections import defaultdict 18 - from UserList import UserList 18 try: 19 from UserList import UserList 20 except ImportError: 21 # Python 3: UserList moved to the collections package 22 from collections import UserList 19 23 20 24 sys.path.append(os.environ['PERF_EXEC_PATH'] + \ 21 25 '/scripts/python/Perf-Trace-Util/lib/Perf/Trace') ··· 304 300 if i == -1: 305 301 return 306 302 307 - for i in xrange(i, len(self.data)): 303 for i in range(i, len(self.data)): 308 304 timeslice = self.data[i] 309 305 if timeslice.start > end: 310 306 return ··· 340 336 on_cpu_task = self.current_tsk[headers.cpu] 341 337 342 338 if on_cpu_task != -1 and on_cpu_task != prev_pid: 343 - print "Sched switch event rejected ts: %s cpu: %d prev: %s(%d) next: %s(%d)" % \ 344 - (headers.ts_format(), headers.cpu, prev_comm, prev_pid, next_comm, next_pid) 339 print("Sched switch event rejected ts: %s cpu: %d prev: %s(%d) next: %s(%d)" % \ 340 (headers.ts_format(), headers.cpu, prev_comm, prev_pid, next_comm, next_pid)) 345 341 346 342 threads[prev_pid] = prev_comm 347 343 threads[next_pid] = next_comm
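`UserList` moved from its own `UserList` module into `collections` in Python 3, hence the try/except import fallback in this hunk. The same pattern in isolation; the subclass below only mirrors how sched-migration.py builds list-like classes on `UserList` (its `TimeSliceList` is real, but the method shown here is invented for the example):

```python
try:
    from UserList import UserList        # Python 2 location
except ImportError:
    from collections import UserList     # Python 3: moved into collections

class TimeSliceList(UserList):
    # Illustrative method only; UserList stores its items in self.data.
    def after(self, t):
        return [x for x in self.data if x > t]

ts = TimeSliceList([10, 20, 30])
```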
+1 -1
tools/perf/tests/builtin-test.c
··· 422 422 423 423 #define for_each_shell_test(dir, base, ent) \ 424 424 while ((ent = readdir(dir)) != NULL) \ 425 - if (!is_directory(base, ent)) 425 + if (!is_directory(base, ent) && ent->d_name[0] != '.') 426 426 427 427 static const char *shell_tests__dir(char *path, size_t size) 428 428 {
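The extra `ent->d_name[0] != '.'` check stops the shell-test scanner from treating `.`, `..`, and hidden files (editor swap files, for instance) as tests. The same filter, sketched in Python against a throwaway directory:

```python
import os
import tempfile

# Create a directory holding one real test and one hidden editor artifact.
d = tempfile.mkdtemp()
for name in ("record.sh", ".record.sh.swp"):
    open(os.path.join(d, name), "w").close()

# Equivalent of the C condition: skip entries whose name starts with '.'.
tests = [e for e in os.listdir(d) if not e.startswith(".")]
```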
+21 -16
tools/perf/tests/shell/record+probe_libc_inet_pton.sh
··· 14 14 nm -Dg $libc 2>/dev/null | fgrep -q inet_pton || exit 254 15 15 16 16 trace_libc_inet_pton_backtrace() { 17 - idx=0 18 - expected[0]="ping[][0-9 \.:]+probe_libc:inet_pton: \([[:xdigit:]]+\)" 19 - expected[1]=".*inet_pton\+0x[[:xdigit:]]+[[:space:]]\($libc|inlined\)$" 17 18 expected=`mktemp -u /tmp/expected.XXX` 19 20 echo "ping[][0-9 \.:]+probe_libc:inet_pton: \([[:xdigit:]]+\)" > $expected 21 echo ".*inet_pton\+0x[[:xdigit:]]+[[:space:]]\($libc|inlined\)$" >> $expected 20 22 case "$(uname -m)" in 21 23 s390x) 22 24 eventattr='call-graph=dwarf,max-stack=4' 23 - expected[2]="gaih_inet.*\+0x[[:xdigit:]]+[[:space:]]\($libc|inlined\)$" 24 - expected[3]="(__GI_)?getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc|inlined\)$" 25 - expected[4]="main\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" 25 echo "gaih_inet.*\+0x[[:xdigit:]]+[[:space:]]\($libc|inlined\)$" >> $expected 26 echo "(__GI_)?getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc|inlined\)$" >> $expected 27 echo "main\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" >> $expected 26 28 ;; 27 29 *) 28 30 eventattr='max-stack=3' 29 - expected[2]="getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" 30 - expected[3]=".*\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" 31 echo "getaddrinfo\+0x[[:xdigit:]]+[[:space:]]\($libc\)$" >> $expected 32 echo ".*\+0x[[:xdigit:]]+[[:space:]]\(.*/bin/ping.*\)$" >> $expected 31 33 ;; 32 34 esac 33 35 34 - file=`mktemp -u /tmp/perf.data.XXX` 36 perf_data=`mktemp -u /tmp/perf.data.XXX` 37 perf_script=`mktemp -u /tmp/perf.script.XXX` 38 perf record -e probe_libc:inet_pton/$eventattr/ -o $perf_data ping -6 -c 1 ::1 > /dev/null 2>&1 39 perf script -i $perf_data > $perf_script 35 40 36 - perf record -e probe_libc:inet_pton/$eventattr/ -o $file ping -6 -c 1 ::1 > /dev/null 2>&1 37 - perf script -i $file | while read line ; do 41 exec 3<$perf_script 42 exec 4<$expected 43 while read line <&3 && read -r pattern <&4; do 44 [ -z "$pattern" ] && break 38 45
echo $line 39 - echo "$line" | egrep -q "${expected[$idx]}" 46 + echo "$line" | egrep -q "$pattern" 40 47 if [ $? -ne 0 ] ; then 41 - printf "FAIL: expected backtrace entry %d \"%s\" got \"%s\"\n" $idx "${expected[$idx]}" "$line" 48 + printf "FAIL: expected backtrace entry \"%s\" got \"%s\"\n" "$pattern" "$line" 42 49 exit 1 43 50 fi 44 - let idx+=1 45 - [ -z "${expected[$idx]}" ] && break 46 51 done 47 52 48 53 # If any statements are executed from this point onwards, ··· 63 58 perf probe -q $libc inet_pton && \ 64 59 trace_libc_inet_pton_backtrace 65 60 err=$? 66 - rm -f ${file} 61 + rm -f ${perf_data} ${perf_script} ${expected} 67 62 perf probe -q -d probe_libc:inet_pton 68 63 exit $err
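The rewrite drops bash-only constructs (`expected[0]=...` arrays, `let idx+=1`) in favor of a temp file of regexps read in lockstep with the perf output, which any POSIX shell can run. The pairing logic, modeled in Python with fabricated sample lines (the real test streams both sides from files over fds 3 and 4):

```python
import re

# Fabricated stand-ins: one line of perf script output per expected
# pattern, checked pairwise just like the rewritten while/read loop.
output = ["ping 1234 [001] 1.0: probe_libc:inet_pton: (7f0042)",
          "7f0042 inet_pton+0x10 (/lib64/libc.so.6)"]
patterns = [r"probe_libc:inet_pton", r"inet_pton\+0x[0-9a-f]+"]

failures = [(pat, line) for line, pat in zip(output, patterns)
            if not re.search(pat, line)]
```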
+1 -1
tools/perf/tests/shell/trace+probe_vfs_getname.sh
··· 17 17 file=$(mktemp /tmp/temporary_file.XXXXX) 18 18 19 19 trace_open_vfs_getname() { 20 - evts=$(echo $(perf list syscalls:sys_enter_open* |& egrep 'open(at)? ' | sed -r 's/.*sys_enter_([a-z]+) +\[.*$/\1/') | sed 's/ /,/') 20 + evts=$(echo $(perf list syscalls:sys_enter_open* 2>&1 | egrep 'open(at)? ' | sed -r 's/.*sys_enter_([a-z]+) +\[.*$/\1/') | sed 's/ /,/') 21 21 perf trace -e $evts touch $file 2>&1 | \ 22 22 egrep " +[0-9]+\.[0-9]+ +\( +[0-9]+\.[0-9]+ ms\): +touch\/[0-9]+ open(at)?\((dfd: +CWD, +)?filename: +${file}, +flags: CREAT\|NOCTTY\|NONBLOCK\|WRONLY, +mode: +IRUGO\|IWUGO\) += +[0-9]+$" 23 23 }
+3 -3
tools/perf/util/llvm-utils.c
··· 266 266 "#!/usr/bin/env sh\n" 267 267 "if ! test -d \"$KBUILD_DIR\"\n" 268 268 "then\n" 269 - " exit -1\n" 269 + " exit 1\n" 270 270 "fi\n" 271 271 "if ! test -f \"$KBUILD_DIR/include/generated/autoconf.h\"\n" 272 272 "then\n" 273 - " exit -1\n" 273 + " exit 1\n" 274 274 "fi\n" 275 275 "TMPDIR=`mktemp -d`\n" 276 276 "if test -z \"$TMPDIR\"\n" 277 277 "then\n" 278 - " exit -1\n" 278 + " exit 1\n" 279 279 "fi\n" 280 280 "cat << EOF > $TMPDIR/Makefile\n" 281 281 "obj-y := dummy.o\n"
+17 -20
tools/perf/util/scripting-engines/trace-event-python.c
··· 908 908 if (_PyTuple_Resize(&t, n) == -1) 909 909 Py_FatalError("error resizing Python tuple"); 910 910 911 - if (!dict) { 911 if (!dict) 912 912 call_object(handler, t, handler_name); 913 - } else { 913 else 914 914 call_object(handler, t, default_handler_name); 915 - Py_DECREF(dict); 916 - } 917 915 918 - Py_XDECREF(all_entries_dict); 919 916 Py_DECREF(t); 920 917 } 921 918 ··· 1232 1235 1233 1236 call_object(handler, t, handler_name); 1234 1237 1235 - Py_DECREF(dict); 1236 1238 Py_DECREF(t); 1237 1239 } 1238 1240 ··· 1623 1627 fprintf(ofp, "# See the perf-script-python Documentation for the list " 1624 1628 "of available functions.\n\n"); 1625 1629 1630 fprintf(ofp, "from __future__ import print_function\n\n"); 1626 1631 fprintf(ofp, "import os\n"); 1627 1632 fprintf(ofp, "import sys\n\n"); 1628 1633 ··· 1633 1636 fprintf(ofp, "from Core import *\n\n\n"); 1634 1637 1635 1638 fprintf(ofp, "def trace_begin():\n"); 1636 - fprintf(ofp, "\tprint \"in trace_begin\"\n\n"); 1639 fprintf(ofp, "\tprint(\"in trace_begin\")\n\n"); 1637 1640 1638 1641 fprintf(ofp, "def trace_end():\n"); 1639 - fprintf(ofp, "\tprint \"in trace_end\"\n\n"); 1642 fprintf(ofp, "\tprint(\"in trace_end\")\n\n"); 1640 1643 1641 1644 while ((event = trace_find_next_event(pevent, event))) { 1642 1645 fprintf(ofp, "def %s__%s(", event->system, event->name); ··· 1672 1675 "common_secs, common_nsecs,\n\t\t\t" 1673 1676 "common_pid, common_comm)\n\n"); 1674 1677 1675 - fprintf(ofp, "\t\tprint \""); 1678 fprintf(ofp, "\t\tprint(\""); 1676 1679 1677 1680 not_first = 0; 1678 1681 count = 0; ··· 1733 1736 fprintf(ofp, "%s", f->name); 1734 1737 } 1735 1738 1736 - fprintf(ofp, ")\n\n"); 1739 fprintf(ofp, "))\n\n"); 1737 1740 1738 - fprintf(ofp, "\t\tprint 'Sample: {'+" 1739 - "get_dict_as_string(perf_sample_dict['sample'], ', ')+'}'\n\n"); 1741 fprintf(ofp, "\t\tprint('Sample: {'+" 1742 "get_dict_as_string(perf_sample_dict['sample'], ', ')+'}')\n\n"); 1740 1743 1741 1744 fprintf(ofp,
"\t\tfor node in common_callchain:"); 1742 1745 fprintf(ofp, "\n\t\t\tif 'sym' in node:"); 1743 - fprintf(ofp, "\n\t\t\t\tprint \"\\t[%%x] %%s\" %% (node['ip'], node['sym']['name'])"); 1746 + fprintf(ofp, "\n\t\t\t\tprint(\"\\t[%%x] %%s\" %% (node['ip'], node['sym']['name']))"); 1744 1747 fprintf(ofp, "\n\t\t\telse:"); 1745 - fprintf(ofp, "\n\t\t\t\tprint \"\t[%%x]\" %% (node['ip'])\n\n"); 1746 - fprintf(ofp, "\t\tprint \"\\n\"\n\n"); 1748 + fprintf(ofp, "\n\t\t\t\tprint(\"\t[%%x]\" %% (node['ip']))\n\n"); 1749 + fprintf(ofp, "\t\tprint()\n\n"); 1747 1750 1748 1751 } 1749 1752 1750 1753 fprintf(ofp, "def trace_unhandled(event_name, context, " 1751 1754 "event_fields_dict, perf_sample_dict):\n"); 1752 1755 1753 - fprintf(ofp, "\t\tprint get_dict_as_string(event_fields_dict)\n"); 1754 - fprintf(ofp, "\t\tprint 'Sample: {'+" 1755 - "get_dict_as_string(perf_sample_dict['sample'], ', ')+'}'\n\n"); 1756 + fprintf(ofp, "\t\tprint(get_dict_as_string(event_fields_dict))\n"); 1757 + fprintf(ofp, "\t\tprint('Sample: {'+" 1758 + "get_dict_as_string(perf_sample_dict['sample'], ', ')+'}')\n\n"); 1756 1759 1757 1760 fprintf(ofp, "def print_header(" 1758 1761 "event_name, cpu, secs, nsecs, pid, comm):\n" 1759 - "\tprint \"%%-20s %%5u %%05u.%%09u %%8u %%-20s \" %% \\\n\t" 1760 - "(event_name, cpu, secs, nsecs, pid, comm),\n\n"); 1762 + "\tprint(\"%%-20s %%5u %%05u.%%09u %%8u %%-20s \" %% \\\n\t" 1763 + "(event_name, cpu, secs, nsecs, pid, comm), end=\"\")\n\n"); 1761 1764 1762 1765 fprintf(ofp, "def get_dict_as_string(a_dict, delimiter=' '):\n" 1763 1766 "\treturn delimiter.join"
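Python 2's trailing-comma form (`print x,`) suppressed the newline; the generated `print_header()` now uses the print function's `end=""` keyword, which has the same effect and parses on both interpreters. A quick check of that behavior:

```python
from __future__ import print_function
import io

# end=" " suppresses the trailing newline, like Python 2's "print x,".
buf = io.StringIO()
print(u"header", end=u" ", file=buf)   # no newline, single space
print(u"payload", file=buf)            # default end="\n"
out = buf.getvalue()
```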
+1 -2
tools/testing/nvdimm/test/nfit.c
··· 1991 1991 pcap->header.type = ACPI_NFIT_TYPE_CAPABILITIES; 1992 1992 pcap->header.length = sizeof(*pcap); 1993 1993 pcap->highest_capability = 1; 1994 - pcap->capabilities = ACPI_NFIT_CAPABILITY_CACHE_FLUSH | 1995 - ACPI_NFIT_CAPABILITY_MEM_FLUSH; 1994 + pcap->capabilities = ACPI_NFIT_CAPABILITY_MEM_FLUSH; 1996 1995 offset += pcap->header.length; 1997 1996 1998 1997 if (t->setup_hotplug) {
+1
tools/testing/selftests/bpf/config
··· 6 6 CONFIG_CGROUP_BPF=y 7 7 CONFIG_NETDEVSIM=m 8 8 CONFIG_NET_CLS_ACT=y 9 + CONFIG_NET_SCHED=y 9 10 CONFIG_NET_SCH_INGRESS=y 10 11 CONFIG_NET_IPIP=y 11 12 CONFIG_IPV6=y
+9
tools/testing/selftests/bpf/test_kmod.sh
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + # Kselftest framework requirement - SKIP code is 4. 5 + ksft_skip=4 6 + 7 + msg="skip all tests:" 8 + if [ "$(id -u)" != "0" ]; then 9 + echo $msg please run this as root >&2 10 + exit $ksft_skip 11 + fi 12 + 4 13 SRC_TREE=../../../../ 5 14 6 15 test_run()
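This and the following two bpf selftest hunks add the same guard: kselftest reserves exit status 4 for a skipped test, so bailing out with `ksft_skip` when not running as root makes the harness report SKIP rather than FAIL. The decision, modeled in Python:

```python
KSFT_SKIP = 4  # kselftest's reserved "skipped" exit status

def exit_code_for(euid):
    # Root runs the tests (status 0 on success); anyone else skips
    # them entirely rather than failing on missing privileges.
    return 0 if euid == 0 else KSFT_SKIP

codes = [exit_code_for(uid) for uid in (0, 1000)]
```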
+9
tools/testing/selftests/bpf/test_lirc_mode2.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + # Kselftest framework requirement - SKIP code is 4. 5 + ksft_skip=4 6 + 7 + msg="skip all tests:" 8 + if [ $UID != 0 ]; then 9 + echo $msg please run this as root >&2 10 + exit $ksft_skip 11 + fi 12 + 4 13 GREEN='\033[0;92m' 5 14 RED='\033[0;31m' 6 15 NC='\033[0m' # No Color
+9
tools/testing/selftests/bpf/test_lwt_seg6local.sh
··· 21 21 # An UDP datagram is sent from fb00::1 to fb00::6. The test succeeds if this 22 22 # datagram can be read on NS6 when binding to fb00::6. 23 23 24 + # Kselftest framework requirement - SKIP code is 4. 25 + ksft_skip=4 26 + 27 + msg="skip all tests:" 28 + if [ $UID != 0 ]; then 29 + echo $msg please run this as root >&2 30 + exit $ksft_skip 31 + fi 32 + 24 33 TMP_FILE="/tmp/selftest_lwt_seg6local.txt" 25 34 26 35 cleanup()
-6
tools/testing/selftests/bpf/test_sockmap.c
··· 1413 1413 1414 1414 int main(int argc, char **argv) 1415 1415 { 1416 - struct rlimit r = {10 * 1024 * 1024, RLIM_INFINITY}; 1417 1416 int iov_count = 1, length = 1024, rate = 1; 1418 1417 struct sockmap_options options = {0}; 1419 1418 int opt, longindex, err, cg_fd = 0; 1420 1419 char *bpf_file = BPF_SOCKMAP_FILENAME; 1421 1420 int test = PING_PONG; 1422 - 1423 - if (setrlimit(RLIMIT_MEMLOCK, &r)) { 1424 - perror("setrlimit(RLIMIT_MEMLOCK)"); 1425 - return 1; 1426 - } 1427 1421 1428 1422 if (argc < 2) 1429 1423 return test_suite();
tools/testing/selftests/net/fib_tests.sh
+17 -7
tools/testing/selftests/rseq/rseq.h
··· 133 133 return cpu; 134 134 } 135 135 136 + static inline void rseq_clear_rseq_cs(void) 137 + { 138 + #ifdef __LP64__ 139 + __rseq_abi.rseq_cs.ptr = 0; 140 + #else 141 + __rseq_abi.rseq_cs.ptr.ptr32 = 0; 142 + #endif 143 + } 144 + 136 145 /* 137 - * rseq_prepare_unload() should be invoked by each thread using rseq_finish*() 138 - * at least once between their last rseq_finish*() and library unload of the 139 - * library defining the rseq critical section (struct rseq_cs). This also 140 - * applies to use of rseq in code generated by JIT: rseq_prepare_unload() 141 - * should be invoked at least once by each thread using rseq_finish*() before 142 - * reclaim of the memory holding the struct rseq_cs. 146 + * rseq_prepare_unload() should be invoked by each thread executing a rseq 147 + * critical section at least once between their last critical section and 148 + * library unload of the library defining the rseq critical section 149 + * (struct rseq_cs). This also applies to use of rseq in code generated by 150 + * JIT: rseq_prepare_unload() should be invoked at least once by each 151 + * thread executing a rseq critical section before reclaim of the memory 152 + * holding the struct rseq_cs. 143 153 */ 144 154 static inline void rseq_prepare_unload(void) 145 155 { 146 - __rseq_abi.rseq_cs = 0; 156 + rseq_clear_rseq_cs(); 147 157 } 148 158 149 159 #endif /* RSEQ_H_ */